diff --git a/-_Architecture_Center_home.txt b/-_Architecture_Center_home.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d23e9222ef3bf863a73cab210a2f046077c4e9d --- /dev/null +++ b/-_Architecture_Center_home.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture +Date Scraped: 2025-02-23T11:42:26.739Z + +Content: +Home Cloud Architecture Center Discover reference architectures, design guidance, and best practices for building, migrating, and managing your cloud workloads. See what's new! Operational excellence Security, privacy, compliance Reliability Cost optimization Performance optimization Explore the Architecture Framework Deployment archetypes Learn about the basic archetypes for building cloud architectures, and the use cases and design considerations for each archetype: zonal, regional, multi-regional, global, hybrid, and multicloud. Infrastructure reliability guide Design and build infrastructure to run your workloads reliably in the cloud. Landing zone design Design and build a landing zone that includes identity onboarding, resource hierarchy, network design, and security controls. Enterprise foundations blueprint Design and build a foundation that enables consistent governance, security controls, scale, visibility, and access. Jump Start Solution guides 14 Learn and experiment with pre-built solution templates Design guides 97 Build architectures using recommended patterns and practices Reference architectures 83 Deploy or adapt cloud topologies to meet your specific needs AI and machine learning 26 Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search Build and deploy generative AI and machine learning models in an enterprise Generative AI RAG with Cloud SQL More Application development 72 Microservices overview Patterns for scalable and resilient apps More Databases 13 Multi-cloud database management More Hybrid and multicloud 33 Hybrid and multicloud overview Patterns for connecting other cloud service providers with Google Cloud Authenticate workforce users in a hybrid environment More Migration 26 Migrate to Google Cloud Database migration More Monitoring and logging 14 Stream logs from Google Cloud to Splunk More Networking 32 Best practices and reference architectures for VPC design Hub-and-spoke network architecture Build internet connectivity for private VMs More Reliability and DR 11 Google Cloud infrastructure reliability guide Disaster recovery planning guide Load balanced managed VMs More Security and IAM 42 Identity and access management overview Enterprise foundations blueprint Automate malware scanning for files uploaded to Cloud Storage More Storage 14 Design an optimal storage strategy for your cloud workload More Google Cloud certification Demonstrate your expertise and validate your ability to transform businesses with Google Cloud technology. 
Google Cloud security best practices center Explore these best practices for meeting your security and compliance objectives as you deploy workloads on Google Cloud. Google Cloud Migration Center Accelerate your end-to-end migration journey from your current on-premises environment to Google Cloud. Google Cloud partner advantage Connect with a Google Cloud partner who can help you with your architecture needs. \ No newline at end of file diff --git a/AI-ML_image_processing_on_Cloud_Functions.txt b/AI-ML_image_processing_on_Cloud_Functions.txt new file mode 100644 index 0000000000000000000000000000000000000000..cb333ff7dec6435fe0f3b23fc6e3c89ceffc2651 --- /dev/null +++ b/AI-ML_image_processing_on_Cloud_Functions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ai-ml/image-processing-cloud-functions +Date Scraped: 2025-02-23T11:46:27.561Z + +Content: +Jump Start Solution: AI/ML image processing on Cloud Functions Last reviewed 2023-08-29 UTC This guide helps you understand, deploy, and use the AI/ML image processing on Cloud Functions Jump Start Solution. This solution uses pre-trained machine learning models to analyze images provided by users and generate image annotations. Deploying this solution creates an image processing service that can help you do the following, and more: Handle unsafe or harmful user-generated content. Digitize text from physical documents. Detect and classify objects in images. This document is intended for developers who have some familiarity with backend service development, the capabilities of AI/ML, and basic cloud computing concepts. Though not required, Terraform experience is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives Learn how a serverless architecture is used to create a scalable image processing service. Understand how the image processing service uses pre-trained machine learning models for image analysis. Deploy the image processing service and invoke it through REST API calls or in response to image upload events. Review configuration and security settings to understand how to adapt the image processing service to different needs. Products used The solution uses the following Google Cloud products: Cloud Vision API: An API offering powerful pre-trained machine learning models for image annotation. The solution uses the Cloud Vision API to analyze images and obtain image annotation data. Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. The solution uses Cloud Storage to store input images and resulting image annotation data. Cloud Run functions: A lightweight serverless compute service that lets you create single-purpose, standalone functions that can respond to Google Cloud events without the need to manage a server or runtime environment. The solution uses Cloud Run functions to host the image processing service's endpoints. For information about how these products are configured and how they interact, see the next section. 
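If you want to experiment with these products from the command line before exploring the architecture, you can enable their APIs in your project yourself. The following sketch is illustrative only and assumes the standard service names for the products listed above; the deployment process described later requires the serviceusage.services.enable permission, which suggests that it enables the services it needs on its own.

# Enable the product APIs in your project (illustrative; replace PROJECT_ID).
gcloud services enable \
    vision.googleapis.com \
    storage.googleapis.com \
    cloudfunctions.googleapis.com \
    run.googleapis.com \
    --project=PROJECT_ID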
Architecture The solution consists of an example image processing service that analyzes input images and generates annotations for the images using pre-trained machine learning models. The following diagram shows the architecture of the Google Cloud resources used in the solution. The service can be invoked in two ways: directly through REST API calls or indirectly in response to image uploads. Request flow The request processing flow of the image processing service depends on how users invoke the service. The following steps are numbered as shown in the preceding architecture diagram. When the user invokes the image processing service directly through a REST API call: The user makes a request to the image processing service's REST API endpoint, deployed as a Cloud Run function. The request specifies an image as a URI or a base64 encoded stream. The Cloud Run function makes a call to the Cloud Vision API to generate annotations for the specified image. The image annotation data is returned in JSON format in the function's response to the user. When the user invokes the image processing service indirectly in response to image uploads: The user uploads images to a Cloud Storage bucket for input. Each image upload generates a Cloud Storage event that triggers a Cloud Run function to process the uploaded image. The Cloud Run function makes a call to the Cloud Vision API to generate annotations for the specified image. The Cloud Run function writes the image annotation data as a JSON file in another Cloud Storage bucket for output. Cost For an estimate of the cost of the Google Cloud resources that the AI/ML image processing on Cloud Functions solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. The amount of data stored in Cloud Storage. The number of times the image processing service is invoked. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. 
IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/serviceusage.serviceUsageAdmin roles/iam.serviceAccountAdmin roles/resourcemanager.projectIamAdmin roles/cloudfunctions.admin roles/run.admin roles/storage.admin roles/pubsublite.admin roles/iam.securityAdmin roles/logging.admin roles/artifactregistry.reader roles/cloudbuild.builds.editor roles/compute.admin roles/iam.serviceAccountUser Deploy the solution This section guides you through the process of deploying the solution. To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the AI/ML image processing on Cloud Functions solution. Go to the AI/ML image processing on Cloud Functions solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. 
When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour Next, to try the solution out yourself, see Explore the solution. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. 
To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! The Terraform output also includes the image processing service's entry point URL, the name of the input Cloud Storage bucket for uploading images, and the name of the output Cloud Storage bucket that contains image annotation data, as shown in the following example output: vision_annotations_gcs = "gs://vision-annotations-1234567890" vision_input_gcs = "gs://vision-input-1234567890" vision_prediction_url = [ "https://annotate-http-abcde1wxyz-wn.a.run.app", "ingressIndex:0", "ingressValue:ALLOW_ALL", "isAuthenticated:false", ] To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. 
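Before you continue, you can optionally capture these Terraform outputs in shell variables so that later commands can reuse them. This is a minimal sketch that assumes the output names and shapes shown in the preceding example (a list for vision_prediction_url, string URIs for the buckets) and that jq is available, as it is in Cloud Shell.

# Capture the Terraform outputs for later use (run in the infra directory).
ENTRYPOINT_URL=$(terraform output -json vision_prediction_url | jq -r '.[0]')
INPUT_BUCKET=$(terraform output -raw vision_input_gcs)
ANNOTATIONS_BUCKET=$(terraform output -raw vision_annotations_gcs)
echo "Entry point: ${ENTRYPOINT_URL}, input: ${INPUT_BUCKET}, annotations: ${ANNOTATIONS_BUCKET}"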
Start the tour Next, you can explore the solution and see how it works. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Explore the solution In this section, you can try using the solution to see it in action. The image processing service can be invoked in two ways: by calling its REST API directly or by uploading images to the input Cloud Storage bucket. Note: The image processing service might respond with errors if you're making a very large volume of requests, due to product usage quotas and limits. See Cloud Run functions quotas and Cloud Vision API quotas and limits for details. Invoke the service through the REST API In scenarios where you want to process images synchronously in a request-response flow, use the image processing service's REST API. The annotate-http function deployed by the solution is the entry point to the image processing service's REST API. You can find the URL of this function in the console, or if you deployed by using the Terraform CLI, in the output variable vision_prediction_url. This entry point URL exposes an endpoint named /annotate for making image processing requests. The /annotate endpoint supports GET and POST requests with the following parameters: Parameter Description image (POST requests only) Image content, uploaded in binary format or specified as base64-encoded image data. image_uri A URI pointing to an image. features (Optional) A comma-separated list of Vision API annotation features to request. Possible feature values are: CROP_HINTS DOCUMENT_TEXT_DETECTION FACE_DETECTION IMAGE_PROPERTIES LABEL_DETECTION LANDMARK_DETECTION LOGO_DETECTION OBJECT_LOCALIZATION PRODUCT_SEARCH SAFE_SEARCH_DETECTION TEXT_DETECTION WEB_DETECTION To specify the image to be analyzed, only include one of the image or image_uri parameters. If you specify both, image_uri is used. For example, to perform object detection on an image with an internet URI, you can send a GET request such as the following using curl: curl "YOUR_ENTRYPOINT_URL/annotate?features=OBJECT_LOCALIZATION&image_uri=YOUR_IMAGE_URI" Alternatively, to specify image content directly using a local image file, you can use a POST request such as the following: curl -X POST -F image=@YOUR_IMAGE_FILENAME -F features=OBJECT_LOCALIZATION "YOUR_ENTRYPOINT_URL/annotate" The response contains the image annotations from the Vision API in JSON format. Invoke the service by uploading images to Cloud Storage In scenarios where you want to process images asynchronously or by batch upload, use the image processing service's Cloud Storage trigger, which automatically invokes the service in response to image uploads. Follow the steps to analyze images using the Cloud Storage trigger: In the console, go to the Cloud Storage Buckets page. Go to Cloud Storage Click the name of your input bucket (vision-input-ID) to go to its Bucket details page. In the Objects tab, click Upload files. Select the image file or files you want to analyze. After the upload is complete, go back to the Cloud Storage Buckets page. Go to Cloud Storage Click the name of your annotation output bucket (vision-annotations-ID) to go to its Bucket details page. The Objects tab contains a separate JSON file for each image you uploaded. The JSON files contain the annotation data for each image. 
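As an alternative to the console steps above, you can drive the same upload-and-annotate flow from Cloud Shell. The following sketch is illustrative; it assumes the bucket names reported for your deployment (vision-input-ID and vision-annotations-ID) and reuses the YOUR_IMAGE_FILENAME placeholder from the REST API example.

# Upload a local image to the input bucket to trigger processing.
gcloud storage cp ./YOUR_IMAGE_FILENAME gs://vision-input-ID/

# List the generated annotation files in the output bucket.
gcloud storage ls gs://vision-annotations-ID/

# Print one annotation file; the object name here is an assumption, so use a name from the listing above.
gcloud storage cat gs://vision-annotations-ID/YOUR_IMAGE_FILENAME.json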
Note: If a corresponding JSON file doesn't appear in the annotation output bucket, wait a moment for image processing to complete and refresh the page. Customize the solution This section provides information that Terraform developers can use to modify the AI/ML image processing on Cloud Functions solution in order to meet their own technical and business requirements. The guidance in this section is relevant only if you deploy the solution by using the Terraform CLI. Note: Changing the Terraform code for this solution requires familiarity with the Terraform configuration language. If you modify the Google-provided Terraform configuration, and then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. The Terraform configuration for this solution provides the following variables you can use to customize the image processing service: Variable Description Default value region The Google Cloud region in which to deploy the Cloud Run functions and other solution resources. See Cloud Run functions Locations for more information. us-west4 gcf_max_instance_count The maximum number of Cloud Run functions instances for the service. This helps control the service's scaling behavior. See Using maximum instances for more information. 10 gcf_timeout_seconds The timeout for requests to the service, in seconds. This controls how long the service can take to respond. See Function timeout for more information. 120 gcf_http_ingress_type_index Controls whether the service can be invoked by resources outside of your Google Cloud project. See Ingress settings for more information. Possible values are: 0 (Allow all) 1 (Allow internal only) 2 (Allow internal and Cloud Load Balancing) 0 (Allow all) gcf_require_http_authentication Controls whether authentication is required to make a request to the service. See Authenticating for invocation for more information. false gcf_annotation_features A comma-separated list of Vision API annotation features for the service to include by default. This can be overridden for individual requests. Possible feature values are: CROP_HINTS DOCUMENT_TEXT_DETECTION FACE_DETECTION IMAGE_PROPERTIES LABEL_DETECTION LANDMARK_DETECTION LOGO_DETECTION OBJECT_LOCALIZATION PRODUCT_SEARCH SAFE_SEARCH_DETECTION TEXT_DETECTION WEB_DETECTION FACE_DETECTION,PRODUCT_SEARCH,SAFE_SEARCH_DETECTION To customize the solution, complete the following steps in Cloud Shell: Make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. Open your terraform.tfvars file and make the required changes, specifying appropriate values for the variables listed in the previous table. Note: For guidance about the effects of such customization on reliability, security, performance, cost, and operations, see Design recommendations. Validate and review the Terraform configuration. Provision the resources. Design recommendations As you make changes to the solution either by changing the values of the provided Terraform variables or modifying the Terraform configuration itself, refer to the resources in this section to help you develop an architecture that meets your requirements for security, reliability, cost, and performance. Note the following: Before you make any design changes, assess the cost impact and consider potential trade-offs with other features. 
You can assess the cost impact of design changes by using the Google Cloud Pricing Calculator. To implement design changes in the solution, you need expertise in Terraform coding and advanced knowledge of the Google Cloud services that are used in the solution. If you modify the Google-provided Terraform configuration and if you then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For more information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Security By default, the image processing service allows requests from the internet and does not require authentication for requests. In a production environment, you might want to restrict access to the service. You can control where requests to your service are allowed to originate by modifying the gcf_http_ingress_type_index Terraform variable. Take caution against unintentionally making the solution's service endpoints publicly accessible on the internet. See Configuring network settings in the Cloud Run functions documentation for more information. You can require authentication for requests to the image processing service's REST API by modifying the gcf_require_http_authentication Terraform variable. This helps to control individual access to the service. If you require authentication, then callers of the service must provide credentials to make a request. See Authenticating for invocation in the Cloud Run functions documentation for more information. For security principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Security in the Architecture Framework. Reliability When users upload images to the input Cloud Storage bucket, they might experience varying levels of latency in the resulting annotation output. By default, users must poll the output bucket to determine when the annotations are available. To make your application reliably take action as soon as image processing is complete, you can subscribe to Cloud Storage events in the output bucket. For example, you might deploy another Cloud Run function to process the annotation data - see Cloud Storage triggers in the Cloud Run functions documentation for more information. For more recommendations, refer to the following guides to help optimize the reliability of the products used in this solution: Cloud Run functions tips and tricks Best practices for Cloud Storage For reliability principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Reliability in the Architecture Framework. Performance The throughput of the image processing service is directly affected by the Cloud Run functions scaling ability. Cloud Run functions scales automatically by creating function instances to handle the incoming traffic load, up to a configurable instance limit. You can control the scaling of the functions, and in turn the image processing service's throughput, by changing the maximum instance limit or removing the limit altogether. Use the gcf_max_instance_count Terraform variable to change the limit. See Using maximum instances and Auto-scaling behavior in the Cloud Run functions documentation for more information. For performance optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Performance optimization in the Architecture Framework. 
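To see how the security and performance recommendations above map onto the Terraform workflow described earlier, the following sketch shows one possible way to apply them. It is a minimal example, not part of the documented procedure: it assumes that you run it from the infra directory, that the variable names match the table in the Customize the solution section, and that the service accepts a Google-signed identity token from a caller with invoker permissions when authentication is required.

# Require authentication and raise the scaling limit (example values).
cat >> terraform.tfvars <<'EOF'
gcf_require_http_authentication = true
gcf_max_instance_count = 20
EOF

# Re-apply the configuration to update the deployed functions.
terraform apply

# With authentication enabled, include an identity token when calling the REST API.
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \
    "YOUR_ENTRYPOINT_URL/annotate?features=LABEL_DETECTION&image_uri=YOUR_IMAGE_URI"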
Cost For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Delete the deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-ml-image-annotation-gcf/infra. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. 
If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. If an API not enabled error persists, follow the link in the error message to enable the API. Wait a few moments for the API to become enabled and then run the terraform apply command again. Cannot assign requested address When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. 
Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/infra Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. 
If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next Learn more about serverless computing on Google Cloud. Learn more about machine learning for image analysis on Google Cloud. Learn more about event-driven architectures. Understand the capabilities and limits of products used in this solution: Cloud Vision API documentation Cloud Storage documentation Cloud Run functions documentation For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. \ No newline at end of file diff --git a/AI_and_ML.txt b/AI_and_ML.txt new file mode 100644 index 0000000000000000000000000000000000000000..341d65bc9694a2e496324f50c08ea2f70412514d --- /dev/null +++ b/AI_and_ML.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/ai +Date Scraped: 2025-02-23T11:57:10.775Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceGenerative AI on Google CloudBring generative AI to real-world experiences quickly, efficiently, and responsibly, powered by Google’s most advanced technology and models including Gemini. Plus, new customers can start their AI journey today with $300 in free credits.Try it Contact salesProducts and servicesExplore tools from Google Cloud that make it easier for developers to build with generative AI and new AI-powered experiences across our cloud portfolio. For more information, view all our AI products.Build applications and experiences powered by generative AIWith Vertex AI, you can interact with, customize, and embed foundation models into your applications. 
Access foundation models on Model Garden, tune models via a simple UI on Vertex AI Studio, or use models directly in a data science notebook.Plus, with Vertex AI Agent Builder developers can build and deploy AI agents grounded in their data.Your ultimate guide to the latest in gen AI on Vertex AIRead the blogGoogle named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024Read the reportGemini modelsCustomize and deploy Gemini models to production in Vertex AI Gemini, a multimodal model from Google DeepMind, is capable of understanding virtually any input, combining different types of information, and generating almost any output. Prompt and test Gemini in Vertex AI using text, images, video, or code. With Gemini’s advanced reasoning and generation capabilities, developers can try sample prompts for extracting text from images, converting image text to JSON, and even generate answers about uploaded images.Vertex AI Gemini API QuickstartDocumentationKickstart your AI journey with our 10-step planDownload the guideGemini for Google Cloud New generation of AI assistants for developers, Google Cloud services, and applicationsGemini Code Assist offers AI-powered assistance to help developers build applications with higher velocity, quality, and security in popular code editors like VS Code and JetBrains, and on developer platforms like Firebase. Built with robust enterprise features, it enables organizations to adopt AI assistance at scale while meeting security, privacy, and compliance requirements. Additional Gemini for Google Cloud offerings assist users in working and coding more effectively, gaining deeper data insights, navigating security challenges, and more. Get to know Gemini Code Assist EnterpriseWatch nowPowering Google Cloud with GeminiRead the blogGenerative AI partnersExplore Google Cloud's open and innovative AI partner ecosystemThe emergence of generative AI has the potential to transform entire businesses and entire industries. At Google Cloud, we believe the future of AI will be open. Our broad ecosystem of partners provides you choice while maximizing opportunities for innovation.Explore our partner ecosystemFind a partnerHelping businesses with generative AIRead the blogAI for Startups Turn your gen AI ideas into reality with the Google for Startups Cloud ProgramUnlock up to $200,000 USD (up to $350,000 USD for AI startups) in Google Cloud and Firebase credits and supercharge your innovation with cutting-edge cloud infrastructure and generative AI tools. 
Get access to expert guidance, relevant resources, and powerful technologies like Vertex AI to accelerate your development journey.Apply now and start building with the Google for Startups Cloud Program today.Google Cloud Startup SummitRegister to watch on demandUnlock your startup’s gen AI potential with the Startup Learning CenterGet started Customer success storiesFOX Sports uses Vertex AI to store, organize, and surface video highlights for sports broadcastsWatch the video 0:30Wendy’s reimagined drive-thru takes and displays custom orders with help from generative AIWatch the video (1:29)GE Appliances uses Google Cloud AI to craft recipes from what’s already inside the fridgeWatch the video (1:34)UKG and Google Cloud Announce Partnership to Transform Employee Experiences with Generative AIUKG is bringing generative AI to their HCM apps using Vertex AI and proprietary data.3-min readGitLab and Google Cloud Partner to Expand AI-Assisted Capabilities with Customizable Gen AI Foundation ModelsGitLab employs Vertex AI to power new vulnerability detection feature.3-min readMidjourney Selects Google Cloud to Power AI-Generated Creative PlatformMidjourney provides users with a seamless creative experience with Google Cloud's TPUs and GPUs. 3-min readSnorkel AI Teams with Google Cloud to speed AI deployment with Vertex AISnorkel AI and Vertex AI are equipping enterprises to solve their most critical challenges.4-min readAnthropic Forges Partnership With Google Cloud to Help Deliver Reliable and Responsible AIAnthropic uses Google Cloud infrastructure to train their LLMs quickly and sustainably.3-min readView MoreBusiness use cases Learn how generative AI can transform customer service, enhance employee productivity, automate business processes, and more. 
Build a single knowledge base from disparate datasets Video (3:58)Transform chatbots to full-on customer service assistants Video (2:48)Find and summarize complex information in moments Video (2:37)Generate creative content for multi-channel marketing Video (2:40)Simplify how products are onboarded, categorized, labeled for search, and marketed Video (2:12)Automated data collection and documentation processes Video (3:45)Machine-generated events monitoring to predict upcoming maintenance Video (3:08)Accelerate product innovation through data-led insightsVideo (3:36)Build virtual stylists that help consumers find what they need Video (3:29)Get care-related information in real time to deliver a better patient experience Video (3:52)Control operating costs and improve content performance Video (2:47)Summarize, transcribe, and index recordings to include more voices in the discussionVideo (2:29)Improve carbon performance with advanced data analysis and communication toolsVideo (2:52)Detect and resolve concerns faster for improved customer experience Video (3:39)Consistent voice experience across devices Video (3:31)Deliver personalized content recommendations including music, video, and blogsWatch how media companies can provide personalized content discovery Video (1:47)Improve developer efficiency with code assistance Video (2:30)Accelerate generative AI-driven transformation with databasesView MoreDeveloper resourcesIntroduction to generative AI No cost introductory learning course 45 minutesTips to becoming a world-class Prompt EngineerVideo (1:53)Learn how generative AI fits into the entire software development lifecycle5-min readFind out how to build a serverless application that uses generative AIVideo (8:53)Domain-specific AI apps: A three-step design pattern for specializing LLMs5-min readLearn how to enrich product data with generative AI using Vertex AI5-min readCode samples to get started building generative AI apps on Google Cloud3-min readContext-aware code generation: Retrieval augmentation and Vertex AI Codey APIs6-min readHow to build a gen AI application: Design principles and design patterns5-min readUnlock gen AI’s full potential with operational databasesView MoreExecutive resourcesLearn how nonprofits are tackling climate action, education, and crisis response with AIDownload the guideJoin Google experts for a deep dive into how companies are putting AI to workListen to the podcastUnlocking gen AI success: Five pitfalls every executive should knowRead the blogView MoreConsulting servicesCreate with Generative AITransform your creative process. Boost productivity by automatically generating writing and art. Learn moreDiscover with Generative AIBuild AI-enhanced search engines or assistive experiences to enhance your customer experience.Learn moreSummarize with Generative AITake long-form chats, emails, or reports and distill them to their core for quick comprehension.Learn moreAutomate with Generative AITransform from time-consuming, expensive processes to efficient ones.Learn moreWe believe GenAI can be a tremendously powerful tool that changes how people go about analyzing information and insights at work. Our collaboration with Google Cloud will help employees and leaders make better decisions, have more productive conversations, and anticipate how today's choices can impact tomorrow's operations and workplace culture overall.Hugo Sarrazin, Chief Product and Technology Officer at UKGStart your AI journey today Try Google Cloud AI and machine learning products. 
New customers can get started with up to $300 in free credits.Try it in consoleContact salesJoin our technical community to build your AI skillsGoogle Cloud Innovators Read our latest AI announcementsLearn moreGet updates with the Google Cloud newsletterSubscribe \ No newline at end of file diff --git a/AI_and_Machine_Learning.txt b/AI_and_Machine_Learning.txt new file mode 100644 index 0000000000000000000000000000000000000000..7e7c15b72314a236bbc3c0ad00178a5c123f5880 --- /dev/null +++ b/AI_and_Machine_Learning.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/ai +Date Scraped: 2025-02-23T12:01:56.855Z + +Content: +AI and machine learning productsTry Gemini 2.0 models, the latest and most advanced multimodal models in Vertex AI. See what you can build with up to a 2M token context window, starting as low as $0.0001.Try it in consoleContact salesSummarize large documents with generative AIDeploy a preconfigured solution that uses generative AI to quickly extract text and summarize large documents.Deploy an AI/ML image processing pipelineLaunch a preconfigured, interactive solution that uses pre-trained machine learning models to analyze images and generate image annotations.Create a chat app using retrieval-augmented generation (RAG)Deploy a preconfigured solution with a chat-based experience that provides questions and answers based on embeddings stored as vectors.Products, solutions, and servicesCategoryProducts and solutions Good forGenerative AIVertex AI StudioA Vertex AI tool for rapidly prototyping and testing generative AI models. Test sample prompts, design your own prompts, and customize foundation models and LLMs to handle tasks that meet your application's needs.Prompt design and tuning with an easy-to-use interface Code completion and generation with CodeyGenerating and customizing images with ImagenUniversal speech modelsVertex AI Agent BuilderCreate a range of generative AI agents and applications grounded in your organization’s data. 
Vertex AI Agent Builder provides the convenience of a no code agent building console alongside powerful grounding, orchestration and customization capabilities.Building multimodal conversational AI agents Building a Google-quality search experience on your own dataEnjoy powerful orchestration, grounding and customization tools Generative AI Document SummarizationThe one-click solution establishes a pipeline that extracts text from PDFs, creates a summary from the extracted text with Vertex AI Generative AI Studio, and stores the searchable summary in a BigQuery database.Process and summarize large documents using Vertex AI LLMsDeploy an application that orchestrates the documentation summarization processTrigger the pipeline with a PDF upload and view a generated summaryMachine learning and MLOPsVertex AI Platform A single platform for data scientists and engineers to create, train, test, monitor, tune, and deploy ML and AI models. Choose from over 150 models in Vertex's Model Garden, including Gemini and open source models like Stable Diffusion, BERT, T-5. Custom ML trainingTraining models with minimal ML expertiseTesting, monitoring, and tuning ML models Deploying 150+ models, including multimodal and foundation models like GeminiVertex AI NotebooksChoose from Colab Enterprise or Vertex AI Workbench. Access every capability in Vertex AI Platform to work across the entire data science workflow—from data exploration to prototype to production. Data scientist workflowsRapid prototyping and model developmentDeveloping and deploying AI solutions on Vertex AI with minimal transitionAutoMLTrain high-quality custom machine learning models with minimal effort and machine learning expertise.Building custom machine learning models in minutes with minimal expertiseTraining models specific to your business needsSpeech, text, and language APIsNatural Language AI Derive insights from unstructured text using Google machine learning.Applying natural language understanding to apps with the Natural Language APITraining your open ML models to classify, extract, and detect sentimentSpeech-to-TextAccurately convert speech into text using an API powered by Google's AI technologies.Automatic speech recognitionReal-time transcriptionEnhanced phone call models in Google Contact Center AIText-to-SpeechConvert text into natural-sounding speech using a Google AI powered API. Improving customer interactions Voice user interface in devices and applicationsPersonalized communication Translation AIMake your content and apps multilingual with fast, dynamic machine translation.Real-time translationCompelling localization of your contentInternationalizing your productsImage and video APIsVision AIDerive insights from your images in the cloud or at the edge with AutoML Vision or use pre-trained Vision API models to detect objects, understand text, and more.Accurately predicting and understanding images with MLTraining ML models to classify images by custom labels using AutoML VisionVideo AIEnable powerful content discovery and engaging video experiences.Extracting rich metadata at the video, shot, or frame levelCustom entity labels with AutoML Video IntelligenceDocument and data APIsDocument AIDocument AI includes pre-trained models for data extraction, Document AI Workbench to create new custom models or uptrain existing ones, and Document AI Warehouse to search and store documents. 
Extracting, classifying, and splitting data from documents Reducing manual document processing and minimizing setup costsGaining insights from document dataAI assistance and conversational AIConversational Agents (Dialogflow)Conversational AI platform with both intent-based and generative AI LLM capabilities for building natural, rich conversational experiences into mobile and web applications, smart devices, bots, interactive voice response systems, popular messaging platforms, and more. Features a visual builder to create, build, and manage virtual agents. Natural interactions for complex multi-turn conversationsBuilding and deploying advanced agents quicklyEnterprise-grade scalabilityBuilding a chatbot based on a website or collection of documentsCustomer Engagement Suite with Google AIDelight customers with an end-to-end application that combines our most advanced conversational AI, with multimodal and omnichannel functionality to deliver exceptional customer experiences at every touchpoint.Creating advanced virtual agents in minutes that smoothly switch between topicsReal-time, step-by-step assistance for human agentsMultichannel communications between customers and agentsGemini Code Assist Gemini Code Assist offers code recommendations in real time, suggests full function and code blocks, and identifies vulnerabilities and errors in the code—while suggesting fixes. Assistance can be accessed via a chat interface, Cloud Shell Editor, or Cloud Code IDE extensions for VSCode and JetBrains IDEs. Code assistance for Go, Java, JavaScript, Python, and SQLSQL completions, query generation, and summarization using natural language Suggestions to structure, modify, or query your data during database migrationIdentify and troubleshoot errors using natural languageAI InfrastructureTPUs, GPUs, and CPUsHardware for every type of AI workload from our partners, like NVIDIA, Intel, AMD, Arm, and more, we provide customers with the widest range of AI-optimized compute options across TPUs, GPUs, and CPUs for training and serving the most data-intensive models. AI Accelerators for every use case from high performance training to inferenceAccelerating specific workloads on your VMsSpeeding up compute jobs like machine learning and HPCGoogle Kubernetes EngineWith one platform for all workloads, GKE offers a consistent and robust development process. As a foundation platform, it provides unmatched scalability, compatibility with a diverse set of hardware accelerators allowing customers to achieve superior price performance for their training and inference workloads.Building with industry-leading support for 15,000 nodes in a single clusterChoice of diverse hardware accelerators for training and inferenceGKE Autopilot reduces the burden of Day 2 operationsRapid node start-up, image streaming, integration with GCSFuseConsulting serviceAI Readiness ProgramOur AI Readiness Program is a 2-3 week engagement designed to accelerate value realization from your AI efforts. Our experts will work with you to understand your business objectives, benchmark your AI capabilities, and provide tailored recommendations for your needs.AI value benchmarking and capability assessmentReadout and recommendationsAI planning and roadmapping Products, solutions, and servicesGenerative AIVertex AI StudioA Vertex AI tool for rapidly prototyping and testing generative AI models. 
Test sample prompts, design your own prompts, and customize foundation models and LLMs to handle tasks that meet your application's needs.Prompt design and tuning with an easy-to-use interface Code completion and generation with CodeyGenerating and customizing images with ImagenUniversal speech modelsMachine learning and MLOPsVertex AI Platform A single platform for data scientists and engineers to create, train, test, monitor, tune, and deploy ML and AI models. Choose from over 150 models in Vertex's Model Garden, including Gemini and open source models like Stable Diffusion, BERT, T-5. Custom ML trainingTraining models with minimal ML expertiseTesting, monitoring, and tuning ML models Deploying 150+ models, including multimodal and foundation models like GeminiSpeech, text, and language APIsNatural Language AI Derive insights from unstructured text using Google machine learning.Applying natural language understanding to apps with the Natural Language APITraining your open ML models to classify, extract, and detect sentimentImage and video APIsVision AIDerive insights from your images in the cloud or at the edge with AutoML Vision or use pre-trained Vision API models to detect objects, understand text, and more.Accurately predicting and understanding images with MLTraining ML models to classify images by custom labels using AutoML VisionDocument and data APIsDocument AIDocument AI includes pre-trained models for data extraction, Document AI Workbench to create new custom models or uptrain existing ones, and Document AI Warehouse to search and store documents. Extracting, classifying, and splitting data from documents Reducing manual document processing and minimizing setup costsGaining insights from document dataAI assistance and conversational AIConversational Agents (Dialogflow)Conversational AI platform with both intent-based and generative AI LLM capabilities for building natural, rich conversational experiences into mobile and web applications, smart devices, bots, interactive voice response systems, popular messaging platforms, and more. Features a visual builder to create, build, and manage virtual agents. Natural interactions for complex multi-turn conversationsBuilding and deploying advanced agents quicklyEnterprise-grade scalabilityBuilding a chatbot based on a website or collection of documentsAI InfrastructureTPUs, GPUs, and CPUsHardware for every type of AI workload from our partners, like NVIDIA, Intel, AMD, Arm, and more, we provide customers with the widest range of AI-optimized compute options across TPUs, GPUs, and CPUs for training and serving the most data-intensive models. AI Accelerators for every use case from high performance training to inferenceAccelerating specific workloads on your VMsSpeeding up compute jobs like machine learning and HPCConsulting serviceAI Readiness ProgramOur AI Readiness Program is a 2-3 week engagement designed to accelerate value realization from your AI efforts. 
Our experts will work with you to understand your business objectives, benchmark your AI capabilities, and provide tailored recommendations for your needs.AI value benchmarking and capability assessmentReadout and recommendationsAI planning and roadmapping Ready to start building with AI?Try Google Cloud's AI products and services designed for businesses and professional developers.Get startedExplore our ecosystem of Gemini products to help you get the most out of Google AI.Learn more about GeminiLearn from our customersSee how developers and data scientists are using our tools to leverage the power of AINewsPriceline rolls out new gen AI powered tools to enhance trip planning and improve employee productivity5-min readBlog postOrange utilizes AI to tackle a range of projects from retail recommendations to complex wiring jobs5-min readCase StudyChristus Muguerza developed a model that can predict 77% of acute pain in patients undergoing surgery5-min readVideoWisconsin Department of Workforce Development cleared a backlog of 777,000 claims with the help of Doc AIVideo (3:05)See all customersCloud AI products comply with our SLA policies. They may offer different latency or availability guarantees from other Google Cloud services.Start your AI journey todayTry Google Cloud AI and machine learning products in the console.Go to my console Have a large project?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/AI_for_Data_Analytics.txt b/AI_for_Data_Analytics.txt new file mode 100644 index 0000000000000000000000000000000000000000..e6ca2a0316027131245d9fc4863ccea5eef30eb4 --- /dev/null +++ b/AI_for_Data_Analytics.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/use-cases/ai-data-analytics +Date Scraped: 2025-02-23T11:59:12.320Z + +Content: +Store data and run query analyses with free usage of BigQuery, up to monthly limitsAI data analyticsWrite SQL, build predictive models, and visualize data with AI data analyticsUse foundational models and chat assistance for predictive analytics, sentiment analysis, and AI-enhanced business intelligence.Go to consoleRequest a demoPage highlightsWhat is AI data analytics?Write queries, SQL, and code with AI chat assistanceBigQuery Studio explainedBigQuery in a minute01:26OverviewWhat is AI data analytics?AI data analytics refers to the practice of using artificial intelligence (AI) to analyze large data sets, simplify and scale trends, and uncover insights for data analysts.How can AI be used with data analytics?AI data analytics is designed to support, automate, and simplify each stage of the data analysis journey. AI tools can help with data collection (ingesting from multiple sources) and preparation (cleaning and organizing for analysis). Machine learning (ML) models can be trained and applied on prepared data to extract insights and patterns. Finally, AI can help analysts interpret trends and insights for more informed decision-making.VIDEOAI tools for data professionals7:30How can data analysts use AI data analytics?Data analysts across industries can use AI data analytics to enhance their work. 
From real-time credit card fraud detection and assisting in disease diagnosis to demand forecasting in retail and propensity modeling for gaming apps, AI data analytics can assist with all types of industry-specific use cases.Can AI do the work of a data analyst?AI data analytics is designed to enhance the core contributions and skill sets of data analysts. Given their subject matter expertise, critical thinking abilities, and the capacity to pose insightful queries, data analysts are critical to the success of any AI-assisted data analysis.View moreHow It WorksBigQuery Studio provides a single, unified interface for all data practitioners to simplify analytics workflows from data preparation and visualization to ML model creation and training. Using simple SQL, access Vertex AI foundational models and chat assist directly in BigQuery for a variety of data analytics use cases.Get startedLearn how to use AI code assistance in BiqQuery StudioCommon UsesAI-powered predictive analytics and forecastingBuild predictive and forecasting models using SQL and AILeverage your existing SQL skills to build, train, and deploy batch predictive models directly within BigQuery or your chosen data warehouse with BigQuery ML. Plus, BigQuery ML integrates with Vertex AI, our end-to-end platform for AI and ML, broadening your access to powerful models that generate real-time, low-latency online predictions, including identifying new audiences based on current customer lifetime value, recommending personalized investment products, and forecasting demand.Try BigQuery ML3:38How to simplify AI models with Vertex AI and BigQuery MLManage BigQuery ML models in Vertex AI documentationFull list of supported AI resources for BigQueryML-compatible remote modelsPredictive forecasting data analytics design patterns How-tosBuild predictive and forecasting models using SQL and AILeverage your existing SQL skills to build, train, and deploy batch predictive models directly within BigQuery or your chosen data warehouse with BigQuery ML. Plus, BigQuery ML integrates with Vertex AI, our end-to-end platform for AI and ML, broadening your access to powerful models that generate real-time, low-latency online predictions, including identifying new audiences based on current customer lifetime value, recommending personalized investment products, and forecasting demand.Try BigQuery ML3:38How to simplify AI models with Vertex AI and BigQuery MLManage BigQuery ML models in Vertex AI documentationFull list of supported AI resources for BigQueryML-compatible remote modelsPredictive forecasting data analytics design patterns Sentiment analysisRun sentiment analysis on datasets using BigQuery MLFrom understanding customer feedback on social media or product reviews to developing market research through competitor analysis and campaign effectiveness, data analysts use sentiment analysis to parse positive, negative, and neutral scores on their datasets. With BigQuery ML, you use SQL to train models to automatically run sentiment analysis and predictions for stronger insights, including customer pain points, product feature enhancements, and more. 
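To make the BigQuery ML workflow described above concrete, the following minimal Python sketch creates a time-series forecasting model and reads back a 30-day forecast through the google-cloud-bigquery client. The project, dataset, table, and column names are hypothetical placeholders, not values from the documentation above.

    # Minimal sketch: train a BigQuery ML time-series model and query a forecast.
    # Dataset, table, and column names are hypothetical.
    from google.cloud import bigquery  # pip install google-cloud-bigquery

    client = bigquery.Client(project="my-project")  # placeholder project ID

    # Train an ARIMA_PLUS model on daily sales history.
    client.query(
        """
        CREATE OR REPLACE MODEL `my_dataset.sales_forecast`
        OPTIONS (
          model_type = 'ARIMA_PLUS',
          time_series_timestamp_col = 'order_date',
          time_series_data_col = 'daily_sales'
        ) AS
        SELECT order_date, daily_sales
        FROM `my_dataset.sales_history`
        """
    ).result()  # wait for training to finish

    # Forecast the next 30 days with an 80% prediction interval.
    rows = client.query(
        """
        SELECT forecast_timestamp, forecast_value,
               prediction_interval_lower_bound, prediction_interval_upper_bound
        FROM ML.FORECAST(MODEL `my_dataset.sales_forecast`,
                         STRUCT(30 AS horizon, 0.8 AS confidence_level))
        """
    ).result()

    for row in rows:
        print(row.forecast_timestamp, round(row.forecast_value, 2))

The same pattern, a CREATE MODEL statement followed by an ML table function, applies to the other predictive and sentiment use cases mentioned in this section.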
Try BigQuery MLSentiment analysis with BigQuery ML guide Sentiment analysis tutorial using Natural Language APIBigQuery interactive tutorialHow-tosRun sentiment analysis on datasets using BigQuery MLFrom understanding customer feedback on social media or product reviews to developing market research through competitor analysis and campaign effectiveness, data analysts use sentiment analysis to parse positive, negative, and neutral scores on their datasets. With BigQuery ML, you use SQL to train models to automatically run sentiment analysis and predictions for stronger insights, including customer pain points, product feature enhancements, and more. Try BigQuery MLSentiment analysis with BigQuery ML guide Sentiment analysis tutorial using Natural Language APIBigQuery interactive tutorialImage and video analysisAnalyze unstructured data images and videos with AIEffortlessly analyze images and videos to extract valuable information, streamline processes, and enhance decision-making with Google Cloud AI. To analyze unstructured data in images, use remote functions in BigQuery like Vertex AI Vision or perform inference on unstructured image data with BigQuery ML. For video analysis, Video Description on Vertex AI summarizes short video clip content, providing detailed metadata about videos for storing and searching.Get started6:01Analyzing images, video, and other unstructured data in BigQuery with Vertex AI Analyze an object table using Vertex AI Vision tutorialRun inference on image object tables tutorialSummarize short video clips with Video Description on Vertex AIHow-tosAnalyze unstructured data images and videos with AIEffortlessly analyze images and videos to extract valuable information, streamline processes, and enhance decision-making with Google Cloud AI. To analyze unstructured data in images, use remote functions in BigQuery like Vertex AI Vision or perform inference on unstructured image data with BigQuery ML. For video analysis, Video Description on Vertex AI summarizes short video clip content, providing detailed metadata about videos for storing and searching.Get started6:01Analyzing images, video, and other unstructured data in BigQuery with Vertex AI Analyze an object table using Vertex AI Vision tutorialRun inference on image object tables tutorialSummarize short video clips with Video Description on Vertex AIAI assistance for SQL generation and completion Write queries, SQL, and code with Gemini in BigQueryGemini in BigQuery provides AI-powered assistive and collaboration features including help with writing and editing SQL or Python code, visual data preparation, and intelligent recommendations for enhancing productivity and optimizing costs. You can leverage BigQuery’s in-console chat interface to explore tutorials, documentation, and best practices for specific tasks using simple prompts such as: “How can I use BigQuery materialized views?” “How do I ingest JSON data?” and “How can I improve query performance?”Request access to Gemini in BigQuery preview3:42Introduction to Gemini in BigQueryWrite BigQuery queries with Gemini setup guideGenerate data insights in BigQuery with GeminiWrite code in a Colab Enterprise notebook with Gemini setup guideHow-tosWrite queries, SQL, and code with Gemini in BigQueryGemini in BigQuery provides AI-powered assistive and collaboration features including help with writing and editing SQL or Python code, visual data preparation, and intelligent recommendations for enhancing productivity and optimizing costs. 
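As a companion to the image and video analysis workflow described above, here is a minimal, illustrative Python sketch that annotates a single Cloud Storage image with the Cloud Vision API; the bucket path is a placeholder.

    # Minimal sketch: label detection and safe-search scoring for one image.
    from google.cloud import vision  # pip install google-cloud-vision

    client = vision.ImageAnnotatorClient()
    image = vision.Image(
        source=vision.ImageSource(image_uri="gs://my-bucket/catalog/item-001.jpg")  # placeholder path
    )

    # Detect labels (objects and concepts) in the image.
    labels = client.label_detection(image=image).label_annotations
    for label in labels[:5]:
        print(f"{label.description}: {label.score:.2f}")

    # Flag potentially unsafe content.
    safe = client.safe_search_detection(image=image).safe_search_annotation
    print("Adult likelihood:", vision.Likelihood(safe.adult).name)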
You can leverage BigQuery’s in-console chat interface to explore tutorials, documentation, and best practices for specific tasks using simple prompts such as: “How can I use BigQuery materialized views?” “How do I ingest JSON data?” and “How can I improve query performance?”Request access to Gemini in BigQuery preview3:42Introduction to Gemini in BigQueryWrite BigQuery queries with Gemini setup guideGenerate data insights in BigQuery with GeminiWrite code in a Colab Enterprise notebook with Gemini setup guideAI-enhanced data visualizationUse AI chat to derive data insights, generate reports, and visualize trendsGain insights and build data-powered applications with AI-powered business intelligence from Looker. Using Gemini in Looker, chat directly with your data to uncover business opportunities, create entire reports or advanced visualizations, and build formulas for calculated fields—all with only a few sentences of conversational instruction.Get started2:54Learn how to use AI-powered business intelligence with LookerLooker Studio quickstart guideLooker documentationLooker best practices guideHow-tosUse AI chat to derive data insights, generate reports, and visualize trendsGain insights and build data-powered applications with AI-powered business intelligence from Looker. Using Gemini in Looker, chat directly with your data to uncover business opportunities, create entire reports or advanced visualizations, and build formulas for calculated fields—all with only a few sentences of conversational instruction.Get started2:54Learn how to use AI-powered business intelligence with LookerLooker Studio quickstart guideLooker documentationLooker best practices guideNatural language-driven analysis Discover, transform, query, and visualize data using natural languageReimagine your data analysis experience with the AI-powered BigQuery data canvas. This natural language centric tool simplifies the process of finding, querying, and visualizing your data. Its intuitive features help you discover data assets quickly, generate SQL queries, automatically visualize results, and seamlessly collaborate with others—all within a unified interface.Request access to Gemini in BigQuery preview6:03Example prompts of a typical BigQuery data canvas workflowSet up Gemini in BigQueryHow-tosDiscover, transform, query, and visualize data using natural languageReimagine your data analysis experience with the AI-powered BigQuery data canvas. This natural language centric tool simplifies the process of finding, querying, and visualizing your data. 
Its intuitive features help you discover data assets quickly, generate SQL queries, automatically visualize results, and seamlessly collaborate with others—all within a unified interface.Request access to Gemini in BigQuery preview6:03Example prompts of a typical BigQuery data canvas workflowSet up Gemini in BigQueryStart your proof of conceptStore data and run query analyses with free usage of BigQuery, up to monthly limitsGet startedLearn more about BigQueryView BigQueryData analytics design patternsView sample codeQuery data—without a credit card—with BigQuery sandboxRun sample queryData analytics technical guidesView docsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/API_Gateway.txt b/API_Gateway.txt new file mode 100644 index 0000000000000000000000000000000000000000..628396ca61e615866e8578569a34e3dca1cd9f47 --- /dev/null +++ b/API_Gateway.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/api-gateway/docs +Date Scraped: 2025-02-23T12:09:45.221Z + +Content: +Home API Gateway Documentation Stay organized with collections Save and categorize content based on your preferences. API Gateway documentation View all product documentation API Gateway enables you to provide secure access to your backend services through a well-defined REST API that is consistent across all of your services, regardless of the service implementation. Clients consume your REST APIS to implement standalone apps for a mobile device or tablet, through apps running in a browser, or through any other type of app that can make a request to an HTTP endpoint. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstart: Secure traffic to a service with the gcloud CLI Quickstart: Secure traffic to a service with the Google Cloud console Choosing an Authentication Method Creating an API Creating an API config Authentication between services Deploying an API to a gateway Configuring the development environment About quotas find_in_page Reference REST API info Resources Pricing Quotas and limits Release notes Getting support Billing questions Related videos \ No newline at end of file diff --git a/APIs_and_Applications.txt b/APIs_and_Applications.txt new file mode 100644 index 0000000000000000000000000000000000000000..12e5e361378ad1461d727d38b8db8f088dd0f2a4 --- /dev/null +++ b/APIs_and_Applications.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/apis-and-applications +Date Scraped: 2025-02-23T11:58:50.821Z + +Content: +Majority of CIOs cite “increasing operational efficiency” as their primary goal for innovation. 
Learn more.APIs and applicationsAccelerate digital innovation by securely automating processes and easily creating applications without coding by extending your existing data with APIs.Contact usGoogle Cloud APIs and applications solutionsSolutionUnlock legacy applications using APIsExtend and modernize legacy applications alongside new cloud services.Speed up time to marketDeliver dynamic customer experiencesEmpower developers and partnersOpen new business channels using APIsAttract and empower an ecosystem of developers and partners.Drive more adoption and consumption of your APIsGenerate new revenue sourcesMonitor and manage APIs to measure successOpen banking APIxSimplify and accelerate secure delivery of open banking compliant APIs.Accelerate open banking complianceGrow an ecosystem of partners and customersPromote internal and external innovationHealthAPIxEasily connect healthcare providers and app developers to build FHIR API-based digital services.Reduce risks during care transitionsDeliver patient-centric digital servicesImprove chronic condition managementSolutionUnlock legacy applications using APIsExtend and modernize legacy applications alongside new cloud services.Speed up time to marketDeliver dynamic customer experiencesEmpower developers and partnersOpen new business channels using APIsAttract and empower an ecosystem of developers and partners.Drive more adoption and consumption of your APIsGenerate new revenue sourcesMonitor and manage APIs to measure successOpen banking APIxSimplify and accelerate secure delivery of open banking compliant APIs.Accelerate open banking complianceGrow an ecosystem of partners and customersPromote internal and external innovationHealthAPIxEasily connect healthcare providers and app developers to build FHIR API-based digital services.Reduce risks during care transitionsDeliver patient-centric digital servicesImprove chronic condition managementWant to learn more? Find out how our APIs and applications solutions can help you accelerate digital innovation.Contact usNext OnAir: Powering business applications with APIs, microservices, AI, and no-code app development.Watch video Learn from our customersCase StudySee how Pitney Bowes reduced the time to get products to market from 18 months to five.5-min readCase StudyCitrix connects people with security and speed by proactively monitoring APIs.5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Active_Assist(1).txt b/Active_Assist(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..9aa3b9be515e87e92ea0060827de62ca07af83d1 --- /dev/null +++ b/Active_Assist(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/active-assist +Date Scraped: 2025-02-23T12:05:55.860Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '23. 
Let's go.Active AssistActive Assist is a portfolio of intelligent tools that helps you optimize your cloud operations with recommendations to reduce costs, increase performance, improve security, and even help you make more sustainable decisions.View your recommendationsRead documentationThe Total Economic Impact of Google Active AssistFrom fewer security breaches to faster troubleshooting, learn how Active Assist can benefit your organization.Modernize with AIOps to Maximize your ImpactLeverage AIOps to increase efficiency and productivity across your day-to-day operations. Google Active Assist Sustainability SpotlightLearn how Active Assist can help your organization reduce CO2 emissions associated with your cloud applications.Active Assist value categories and benefitsExplore your recommendations and insights in-context on individual product pages, together in your Recommendation Hub, or through our Recommender API. Value Categoryhow this benefits your cloudSample Solutions to get startedCostManage your cost wiselyHelp you manage your cost wisely, such as recommending to delete unused or idle resources, downsizing VMs to fit your workload needs, or using committed use discounts to save money.VM machine type recommenderCommitted use discount recommenderIdle VM recommenderCloud SQL overprovisioned instance recommenderSecurityMitigate your security risks proactivelyHarden your security posture by applying recommended actions to reduce over-granted permissions, enable additional security features, and help with compliance and security incident investigations.IAM recommenderFirewall insightsCloud Run recommenderPerformanceMaximize performance of your systemImprove the performance of your cloud resources and workloads through prediction and automation that take your infrastructure one step ahead of what your applications need next.VM machine type recommenderManaged instance group machine type recommenderReliabilityDeliver highly available services to your end usersIncrease the availability and reliability of your cloud resources and your workloads running on Google Cloud via various health checks, auto-scaling capabilities, and Business Continuity and Disaster Recovery options.Compute Engine predictive autoscalingCloud SQL out-of-disk recommenderPolicy TroubleshooterPolicy AnalyzerManageabilitySpend less time managing your cloud configurationEnhance your management experience on Google Cloud via simplification and automation so that you spend less time managing your cloud configuration and spend more time on innovating your digital businesses and delighting your customers.Network Intelligence CenterProduct suggestion recommenderPolicy SimulatorSustainabilityReduce the carbon footprint of your workloadsOffer you the insights and simple-to-use tools to allow you assess, manage, and reduce the carbon footprint of your workloads running on Google Cloud.Unattended project recommenderActive Assist value categories and benefitsCostManage your cost wiselyHelp you manage your cost wisely, such as recommending to delete unused or idle resources, downsizing VMs to fit your workload needs, or using committed use discounts to save money.VM machine type recommenderCommitted use discount recommenderIdle VM recommenderCloud SQL overprovisioned instance recommenderSecurityMitigate your security risks proactivelyHarden your security posture by applying recommended actions to reduce over-granted permissions, enable additional security features, and help with compliance and security incident investigations.IAM 
recommenderFirewall insightsCloud Run recommenderPerformanceMaximize performance of your systemImprove the performance of your cloud resources and workloads through prediction and automation that take your infrastructure one step ahead of what your applications need next.VM machine type recommenderManaged instance group machine type recommenderReliabilityDeliver highly available services to your end usersIncrease the availability and reliability of your cloud resources and your workloads running on Google Cloud via various health checks, auto-scaling capabilities, and Business Continuity and Disaster Recovery options.Compute Engine predictive autoscalingCloud SQL out-of-disk recommenderPolicy TroubleshooterPolicy AnalyzerManageabilitySpend less time managing your cloud configurationEnhance your management experience on Google Cloud via simplification and automation so that you spend less time managing your cloud configuration and spend more time on innovating your digital businesses and delighting your customers.Network Intelligence CenterProduct suggestion recommenderPolicy SimulatorSustainabilityReduce the carbon footprint of your workloadsOffer you the insights and simple-to-use tools to allow you assess, manage, and reduce the carbon footprint of your workloads running on Google Cloud.Unattended project recommenderFeeling inspired? Let’s solve your challenges together.Explore how Active Assist can help you proactively reduce costs, tighten security, and optimize resources. Interactive demoCheck out our latest blogs to see what's new with Active Assist.Blog postsSign up for our Active Assist Trusted Tester Group to get early access to new features as they're developed.How customers are maximizing performance and security while reducing toilBlog postKPMG makes sure VMs run optimally with automated rightsizing recommendations.5-min readVideoFlowmon uses clear insights to optimize firewall rules quickly, easily.4:20Blog postRandstad reduces networking troubleshooting effort, saves significant time.5-min readBlog postQuickly discovered and turned off 200+ idle VMs using proactive recommendations.5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.View recommendationsSee the full listList of RecommendersRead all about itRead documentationGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Active_Assist.txt b/Active_Assist.txt new file mode 100644 index 0000000000000000000000000000000000000000..419e06504987ff12afb0c702c1dd59214701f061 --- /dev/null +++ b/Active_Assist.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/active-assist +Date Scraped: 2025-02-23T11:59:43.866Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '23. Let's go.Active AssistActive Assist is a portfolio of intelligent tools that helps you optimize your cloud operations with recommendations to reduce costs, increase performance, improve security, and even help you make more sustainable decisions.View your recommendationsRead documentationThe Total Economic Impact of Google Active AssistFrom fewer security breaches to faster troubleshooting, learn how Active Assist can benefit your organization.Modernize with AIOps to Maximize your ImpactLeverage AIOps to increase efficiency and productivity across your day-to-day operations. 
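The recommendations that Active Assist surfaces in the console can also be retrieved programmatically through the Recommender API mentioned above. The following is a minimal sketch only: the project ID and zone are placeholders, and it assumes the google-cloud-recommender client library and the idle VM recommender.

    # Minimal sketch: list idle-VM recommendations for one zone.
    from google.cloud import recommender_v1  # pip install google-cloud-recommender

    client = recommender_v1.RecommenderClient()
    parent = (
        "projects/my-project/locations/us-central1-a/"          # placeholder project and zone
        "recommenders/google.compute.instance.IdleResourceRecommender"
    )

    for rec in client.list_recommendations(parent=parent):
        print(rec.description)
        print("  category:", rec.primary_impact.category.name)
        print("  state:   ", rec.state_info.state.name)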
Google Active Assist Sustainability SpotlightLearn how Active Assist can help your organization reduce CO2 emissions associated with your cloud applications.Active Assist value categories and benefitsExplore your recommendations and insights in-context on individual product pages, together in your Recommendation Hub, or through our Recommender API. Value Categoryhow this benefits your cloudSample Solutions to get startedCostManage your cost wiselyHelp you manage your cost wisely, such as recommending to delete unused or idle resources, downsizing VMs to fit your workload needs, or using committed use discounts to save money.VM machine type recommenderCommitted use discount recommenderIdle VM recommenderCloud SQL overprovisioned instance recommenderSecurityMitigate your security risks proactivelyHarden your security posture by applying recommended actions to reduce over-granted permissions, enable additional security features, and help with compliance and security incident investigations.IAM recommenderFirewall insightsCloud Run recommenderPerformanceMaximize performance of your systemImprove the performance of your cloud resources and workloads through prediction and automation that take your infrastructure one step ahead of what your applications need next.VM machine type recommenderManaged instance group machine type recommenderReliabilityDeliver highly available services to your end usersIncrease the availability and reliability of your cloud resources and your workloads running on Google Cloud via various health checks, auto-scaling capabilities, and Business Continuity and Disaster Recovery options.Compute Engine predictive autoscalingCloud SQL out-of-disk recommenderPolicy TroubleshooterPolicy AnalyzerManageabilitySpend less time managing your cloud configurationEnhance your management experience on Google Cloud via simplification and automation so that you spend less time managing your cloud configuration and spend more time on innovating your digital businesses and delighting your customers.Network Intelligence CenterProduct suggestion recommenderPolicy SimulatorSustainabilityReduce the carbon footprint of your workloadsOffer you the insights and simple-to-use tools to allow you assess, manage, and reduce the carbon footprint of your workloads running on Google Cloud.Unattended project recommenderActive Assist value categories and benefitsCostManage your cost wiselyHelp you manage your cost wisely, such as recommending to delete unused or idle resources, downsizing VMs to fit your workload needs, or using committed use discounts to save money.VM machine type recommenderCommitted use discount recommenderIdle VM recommenderCloud SQL overprovisioned instance recommenderSecurityMitigate your security risks proactivelyHarden your security posture by applying recommended actions to reduce over-granted permissions, enable additional security features, and help with compliance and security incident investigations.IAM recommenderFirewall insightsCloud Run recommenderPerformanceMaximize performance of your systemImprove the performance of your cloud resources and workloads through prediction and automation that take your infrastructure one step ahead of what your applications need next.VM machine type recommenderManaged instance group machine type recommenderReliabilityDeliver highly available services to your end usersIncrease the availability and reliability of your cloud resources and your workloads running on Google Cloud via various health checks, auto-scaling capabilities, and Business 
Continuity and Disaster Recovery options.Compute Engine predictive autoscalingCloud SQL out-of-disk recommenderPolicy TroubleshooterPolicy AnalyzerManageabilitySpend less time managing your cloud configurationEnhance your management experience on Google Cloud via simplification and automation so that you spend less time managing your cloud configuration and spend more time on innovating your digital businesses and delighting your customers.Network Intelligence CenterProduct suggestion recommenderPolicy SimulatorSustainabilityReduce the carbon footprint of your workloadsOffer you the insights and simple-to-use tools to allow you assess, manage, and reduce the carbon footprint of your workloads running on Google Cloud.Unattended project recommenderFeeling inspired? Let’s solve your challenges together.Explore how Active Assist can help you proactively reduce costs, tighten security, and optimize resources. Interactive demoCheck out our latest blogs to see what's new with Active Assist.Blog postsSign up for our Active Assist Trusted Tester Group to get early access to new features as they're developed.How customers are maximizing performance and security while reducing toilBlog postKPMG makes sure VMs run optimally with automated rightsizing recommendations.5-min readVideoFlowmon uses clear insights to optimize firewall rules quickly, easily.4:20Blog postRandstad reduces networking troubleshooting effort, saves significant time.5-min readBlog postQuickly discovered and turned off 200+ idle VMs using proactive recommendations.5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.View recommendationsSee the full listList of RecommendersRead all about itRead documentationGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Active_Directory_single_sign-on.txt b/Active_Directory_single_sign-on.txt new file mode 100644 index 0000000000000000000000000000000000000000..c0c1946f3deb6c15b5ea39b710e93eb6752efb0c --- /dev/null +++ b/Active_Directory_single_sign-on.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-configuring-single-sign-on +Date Scraped: 2025-02-23T11:55:42.550Z + +Content: +Home Docs Cloud Architecture Center Send feedback Active Directory single sign-on Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This article shows you how to set up single sign-on between your Active Directory environment and your Cloud Identity or Google Workspace account by using Microsoft Active Directory Federation Services (AD FS) and SAML Federation. The article assumes that you understand how Active Directory identity management can be extended to Google Cloud and have already configured user provisioning. The article also assumes that you have a working AD FS 4.0 server that is running on Windows Server 2016 or a later version of Windows Server. To follow this guide, knowledge of Active Directory Domain Services and AD FS is required. You also need a user in Cloud Identity or Google Workspace that has super-admin privileges and a user in Active Directory that has administrative access to your AD FS server. Objectives Configure your AD FS server so that Cloud Identity or Google Workspace can use it as an identity provider. Create a claims issuance policy that matches identities between Active Directory and Cloud Identity or Google Workspace. 
Configure your Cloud Identity or Google Workspace account so that it delegates authentication to AD FS. Costs If you're using the free edition of Cloud Identity, following this article will not use any billable components of Google Cloud. Before you begin Verify that your AD FS server runs Windows Server 2016 or later. While you can also configure single sign-on by using previous versions of Windows Server and AD FS, the necessary configuration steps might be different from what this article describes. Make sure you understand how Active Directory identity management can be extended to Google Cloud. Configure user provisioning between Active Directory and Cloud Identity or Google Workspace. Consider setting up AD FS in a server farm configuration in order to avoid it becoming a single point of failure. After you've enabled single sign-on, the availability of AD FS determines whether users can log in to the Google Cloud console. Understanding single sign-on By using Google Cloud Directory Sync, you've already automated the creation and maintenance of users and tied their lifecycle to the users in Active Directory. Although GCDS provisions user account details, it doesn't synchronize passwords. Whenever a user needs to authenticate in Google Cloud, the authentication must be delegated back to Active Directory, which is done by using AD FS and the Security Assertion Markup Language (SAML) protocol. This setup ensures that only Active Directory has access to user credentials and is enforcing any existing policies or multi-factor authentication (MFA) mechanisms. Moreover, it establishes a single sign-on experience between your on-premises environment and Google. For more details on single sign-on, see Single sign-on Create a SAML profile To configure single sign-on with AD FS, you first create a SAML profile in your Cloud Identity or Google Workspace account. The SAML profile contains the settings related to your AD FS instance, including its URL and signing certificate. You later assign the SAML profile to certain groups or organizational units. To create a new SAML profile in your Cloud Identity or Google Workspace account, do the following: In the Admin Console, go to SSO with third-party IdP. Go to SSO with third-party IdP Click Third-party SSO profiles > Add SAML profile. On the SAML SSO profile page, enter the following settings: Name: AD FS IDP entity ID: https://ADFS/adfs/services/trust Sign-in page URL: https://ADFS/adfs/ls/ Sign-out page URL: https://ADFS/adfs/ls/?wa=wsignout1.0 Change password URL: https://ADFS/adfs/portal/updatepassword/ In all URLs, replace ADFS with the fully qualified domain name of your AD FS server. Don't upload a verification certificate yet. Click Save. The SAML SSO profile page that appears contains two URLs: Entity ID ACS URL You need these URLs in the next section when you configure AD FS. Configure AD FS You configure your AD FS server by creating a relying party trust. Creating the relying party trust Create a new relying party trust: Connect to your AD FS server and open the AD FS Management MMC snap-in. Select AD FS > Relying Party Trusts. On the Actions pane, click Add relying party trust. On the Welcome page of the wizard, select Claims aware, and click Start. On the Select data source page, select Enter data about the relying party manually, and click Next. On the Specify display name page, enter a name such as Google Cloud and click Next. On the Configure certificate page, click Next. 
On the Configure URL page, select Enable support for the SAML 2.0 WebSSO protocol, and enter the ACS URL from your SAML profile. Then click Next. On the Configure identifiers page, add the Entity ID from your SAML profile. Then click Next. On the Choose access control policy page, choose an appropriate access policy and click Next. On the Ready to Add Trust page, review your settings, and then click Next. On the final page, clear the Configure claims issuance policy checkbox and close the wizard. In the list of relying party trusts, you see a new entry. Configuring the logout URL When you're enabling users to use single sign-on across multiple applications, it's important to allow them to sign out across multiple applications: Open the relying party trust that you just created. Select the Endpoints tab. Click Add SAML and configure the following settings: Endpoint type: SAML Logout Binding: POST Trusted URL: https://ADFS/adfs/ls/?wa=wsignout1.0 Replace ADFS with the fully qualified domain name of your AD FS server. Click OK. Click OK to close the dialog. Configuring the claims mapping After AD FS has authenticated a user, it issues a SAML assertion. This assertion serves as proof that authentication has successfully taken place. The assertion must identify who has been authenticated, which is the purpose of the NameID claim. To enable Google Sign-In to associate the NameID with a user, the NameID must contain the primary email address of that user. Depending on how you are mapping users between Active Directory and Cloud Identity or Google Workspace, the NameID must contain the UPN or the email address from the Active Directory user, with domain substitutions applied as necessary. UPN In the list of relying party trusts, select the trust that you just created and click Edit claim issuance policy. Click Add rule On the Choose rule type page of the Add transform claim rule wizard, select Transform an incoming claim, then click Next. On the Configure claim rule page, configure the following settings: Claim rule name: Name Identifier Incoming claim type: UPN Outgoing claim type: Name ID Outgoing name ID format: Email Select Pass through all claim values and click Finish. Click OK to close the claim issuance policy dialog. UPN: domain substitution In the list of relying party trusts, select the trust that you just created and click Edit claim issuance policy. Click Add rule On the Choose rule type page of the Add transform claim rule wizard, select Transform an incoming claim, then click Next. On the Configure claim rule page, configure the following settings: Claim rule name: Name Identifier Incoming claim type: UPN Outgoing claim type: Name ID Outgoing name ID format: Email Select Replace incoming claim e-mail suffix with a new e-mail suffix and configure the following setting: New e-mail suffix: A domain name used by your Cloud Identity or Google Workspace account. Click Finish, and then click OK. Email In the list of relying party trusts, select the trust that you just created and click Edit claim issuance policy. Add a rule to lookup the email address: In the dialog, click Add Rule. Select Send LDAP Attributes as Claims, and click Next. On the next page, apply the following settings: Claim rule name: Email address Attribute Store: Active Directory Add a row to the list of LDAP attribute mappings: LDAP Attribute: E-Mail-Addresses Outgoing Claim Type: E-Mail-Address Click Finish. 
Add another rule to set the NameID: Click Add rule On the Choose rule type page of the Add transform claim rule wizard, select Transform an incoming claim, then click Next. On the Configure claim rule page, configure the following settings: Claim rule name: Name Identifier Incoming claim type: E-Mail-Address Outgoing claim type: Name ID Outgoing name ID format: Email Select Pass through all claim values and click Finish. Click OK to close the claim issuance policy dialog. Email: domain substitution In the list of relying party trusts, select the trust that you just created and click Edit claim issuance policy. Add a rule to lookup the email address: In the dialog, click Add Rule. Select Send LDAP Attributes as Claims, and click Next. On the next page, apply the following settings: Claim rule name: Email address Attribute Store: Active Directory Add a row to the list of LDAP attribute mappings: LDAP Attribute: E-Mail-Addresses Outgoing Claim Type: E-Mail-Address Click Finish. Add another rule to set the NameID value: Click Add rule On the Choose rule type page of the Add transform claim rule wizard, select Transform an incoming claim, then click Next. On the Configure claim rule page, configure the following settings: Claim rule name: Name Identifier Incoming claim type: E-Mail-Address Outgoing claim type: Name ID Outgoing name ID format: Email Select Replace incoming claim e-mail suffix with a new e-mail suffix and configure the following setting: New e-mail suffix: A domain name used by your Cloud Identity or Google Workspace account. Click Finish, and then click OK.single-sign-on Exporting the AD FS token-signing certificate After AD FS authenticates a user, it passes a SAML assertion to Cloud Identity or Google Workspace. To enable Cloud Identity and Google Workspace to verify the integrity and authenticity of that assertion, AD FS signs the assertion with a special token-signing key and provides a certificate that enables Cloud Identity or Google Workspace to check the signature. Export the signing certificate from AD FS by doing the following: In the AD FS Management console, click Service > Certificates. Right-click the certificate that is listed under Token-signing, and click View Certificate. Select the Details tab. Click Copy to File to open the Certificate Export Wizard. On the Welcome to the certificate export wizard, click Next. On the Export private key page, select No, do not export the private key. On the Export file format page, select Base-64 encoded X.509 (.CER) and click Next. On the File to export page, provide a local filename, and click Next. Click Finish to close the dialog. Copy the exported certificate to your local computer. Complete the SAML profile You use the signing certificate to complete the configuration of your SAML profile: Return to the Admin Console and go to Security > Authentication > SSO with third-party IdP. Go to SSO with third-party IdP Open the AD FS SAML profile that you created earlier. Click the IDP details section to edit the settings. Click Upload certificate and pick the token signing certificate that you exported from AD FS. Click Save. Note: The token-signing certificate is valid for a limited period of time. Depending on your configuration, AD FS either renews the certificate automatically before it expires, or it requires you to provide a new certificate before the current certificate expires. In both cases, you must update your configuration to use the new certificate. Your SAML profile is complete, but you still need to assign it. 
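Before uploading the exported certificate, you can optionally confirm that you picked the current token-signing certificate and note its expiry date. The Base-64 encoded .CER file produced by the export wizard is PEM text, so a short, illustrative Python sketch with the cryptography package can read it; the file name below is a placeholder.

    # Minimal sketch: inspect the exported AD FS token-signing certificate
    # before uploading it to the SAML profile.
    from cryptography import x509  # pip install cryptography

    with open("adfs-token-signing.cer", "rb") as f:   # placeholder file name
        cert = x509.load_pem_x509_certificate(f.read())

    print("Subject:   ", cert.subject.rfc4514_string())
    print("Issuer:    ", cert.issuer.rfc4514_string())
    print("Not before:", cert.not_valid_before)
    print("Not after: ", cert.not_valid_after)  # update the SAML profile before this date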
Assign the SAML profile Select the users for which the new SAML profile should apply: In the Admin Console, on the SSO with third-party IDPs page, click Manage SSO profile assignments > Manage. Go to Manage SSO profile assignments In the left pane, select the group or organizational unit for which you want to apply the SSO profile. To apply the profile to all users, select the root organizational unit. In the right pane, select Another SSO profile. In the menu, select the AD FS - SAML SSO profile that you created earlier. Click Save. Repeat the steps to assign the SAML profile to another group or organizational unit. Test single sign-on You've completed the single sign-on configuration. You can check whether SSO works as intended. Choose an Active Directory user that satisfies the following criteria: The user has been provisioned to Cloud Identity or Google Workspace. The Cloud Identity user does not have super-admin privileges. User accounts that have super-admin privileges must always sign in by using Google credentials, so they aren't suitable for testing single sign-on. Open a new browser window and go to https://console.cloud.google.com/. On the Google Sign-In page that appears, enter the email address of the user, and click Next. If you use domain substitution, you must apply the substitution to the email address. You are redirected to AD FS. If you configured AD FS to use forms-based authentication, you see the sign-in page. Enter your UPN and password for the Active Directory user, and click Sign in. After successful authentication, AD FS redirects you back to the Google Cloud console. Because this is the first login for this user, you're asked to accept the Google terms of service and privacy policy. If you agree to the terms, click Accept. You are redirected to the Google Cloud console, which asks you to confirm preferences and accept the Google Cloud terms of service. If you agree to the terms, click Yes, and then click Agree and Continue. At the upper left, click the avatar icon, and click Sign out. You are redirected to an AD FS page confirming that you've been successfully signed out. If you have trouble signing in, you might find additional information in the AD FS admin log. Keep in mind that users that have super-admin privileges are exempted from single sign-on, so you can still use the Admin console to verify or change settings. Optional: Configure redirects for domain-specific service URLs When you link to the Google Cloud console from internal portals or documents, you can improve the user experience by using domain-specific service URLs. Unlike regular service URLs such as https://console.cloud.google.com/, domain specific-service URLs include the name of your primary domain. Unauthenticated users that click a link to a domain specific-service URL are immediately redirected to AD FS instead of being shown a Google sign-in page first. 
Examples for domain-specific service URLs include the following (replace DOMAIN with your primary domain name):
Google Cloud console: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://console.cloud.google.com
Google Docs: https://docs.google.com/a/DOMAIN
Google Sheets: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://sheets.google.com
Google Slides: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://slides.google.com
Google Drive: https://drive.google.com/a/DOMAIN
Gmail: https://mail.google.com/a/DOMAIN
Google Groups: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://groups.google.com
Google Keep: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://keep.google.com
Looker Studio: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://lookerstudio.google.com
YouTube: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://www.youtube.com/
To configure domain-specific service URLs so that they redirect to AD FS, do the following: In the Admin Console, on the SSO with third-party IDPs page, click Domain-specific service URLs > Edit. Go to domain-specific service URLs Set Automatically redirect users to the third-party IdP in the following SSO profile to enabled. Set SSO profile to AD FS. Click Save. Optional: Configure login challenges Google sign-in might ask users for additional verification when they sign in from unknown devices or when their sign-in attempt looks suspicious for other reasons. These login challenges help to improve security, and we recommend that you leave login challenges enabled. If you find that login challenges cause too much inconvenience, you can disable login challenges by doing the following: In the Admin Console, go to Security > Authentication > Login challenges. In the left pane, select an organizational unit for which you want to disable login challenges. To disable login challenges for all users, select the root organizational unit. Under Settings for users signing in using other SSO profiles, select Don't ask users for additional verifications from Google. Click Save. Clean up If you don't intend to keep single sign-on enabled for your organization, follow these steps to disable single sign-on in Cloud Identity or Google Workspace: In the Admin Console, go to Manage SSO profile assignments. Go to Manage SSO profile assignments For each profile assignment, do the following: Open the profile. If you see an Inherit button, click Inherit. If you don't see an Inherit button, select None and click Save. Return to the SSO with third-party IDPs page and open the AD FS SAML profile. Click Delete. To clean up configuration in AD FS, follow these steps: Connect to your AD FS server and open the AD FS MMC snap-in. In the menu at left, right-click the Relying Party Trusts folder. In the list of relying party trusts, right-click the relying party trust you created, and click Delete. Confirm the deletion by clicking Yes. What's next Learn more about federating Google Cloud with Active Directory. Learn about Azure Active Directory B2B user provisioning and single sign-on. Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external identity provider. Acquaint yourself with our best practices for managing super-admin users.
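If sign-in doesn't behave as expected, one additional check is to compare the values in your SAML profile against what AD FS itself publishes. AD FS exposes a federation metadata document at a well-known path; the following illustrative Python sketch prints the entity ID and the SAML endpoints it advertises. Replace the host name with the FQDN of your AD FS server.

    # Minimal troubleshooting sketch: read the AD FS federation metadata and
    # print the entity ID plus the SAML sign-in and sign-out endpoints.
    import xml.etree.ElementTree as ET
    import requests  # pip install requests

    METADATA_URL = "https://adfs.example.com/FederationMetadata/2007-06/FederationMetadata.xml"  # placeholder host
    NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}

    root = ET.fromstring(requests.get(METADATA_URL, timeout=10).content)
    print("Entity ID:", root.attrib.get("entityID"))

    idp = root.find("md:IDPSSODescriptor", NS)
    if idp is not None:
        for endpoint in idp.findall("md:SingleSignOnService", NS) + idp.findall("md:SingleLogoutService", NS):
            print(endpoint.tag.split("}")[-1], endpoint.attrib.get("Binding"), endpoint.attrib.get("Location"))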
Send feedback \ No newline at end of file diff --git a/Active_Directory_user_account_provisioning.txt b/Active_Directory_user_account_provisioning.txt new file mode 100644 index 0000000000000000000000000000000000000000..e80051f64409ea1e98c5aad75d9783d19212da07 --- /dev/null +++ b/Active_Directory_user_account_provisioning.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-synchronizing-user-accounts +Date Scraped: 2025-02-23T11:55:39.357Z + +Content: +Home Docs Cloud Architecture Center Send feedback Active Directory user account provisioning Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document shows you how to set up user and group provisioning between Active Directory and your Cloud Identity or Google Workspace account by using Google Cloud Directory Sync (GCDS). To follow this guide, you must have an Active Directory user that is allowed to manage users and groups in Active Directory. Also, if you don't yet have a Cloud Identity or Google Workspace account, you'll need administrative access to your DNS zone in order to verify domains. If you already have a Cloud Identity or Google Workspace account, make sure that your user has super admin privileges. Objectives Install GCDS and connect it to Active Directory and Cloud Identity or Google Workspace. Configure GCDS to provision users and, optionally, groups to Google Cloud. Set up a scheduled task for continuous provisioning. Costs If you're using the free edition of Cloud Identity, following this guide won't use any billable Google Cloud components. Before you begin Make sure you understand how Active Directory identity management can be extended to Google Cloud. Decide how you want to map identities, groups, and domains. Specifically, make sure that you've answered the following questions: Which DNS domain do you plan to use as the primary domain for Cloud Identity or Google Workspace? Which additional DNS domains do you plan to use as secondary domains? Do you need to use domain substitution? Do you plan to use the email address (mail) or User Principal Name (userPrincipalName) as common identifiers for users? Do you plan to provision groups, and if so, do you intend to use the common name (cn) or email address (mail) as common identifiers for groups? For guidance on making these decisions, refer to the overview document on extending Active Directory identity and access management to Google Cloud. Before connecting your production Active Directory to Google Cloud, consider using an Active Directory test environment for setting up and testing user provisioning. Sign up for Cloud Identity if you don't have an account already, and add additional DNS domains if necessary. If you're using the free edition of Cloud Identity and intend to provision more than 50 users, request an increase of the total number of free Cloud Identity users through your support contact. If you suspect that any of the domains you plan to use for Cloud Identity could have been used by employees to register consumer accounts, consider migrating these user accounts first. For more details, see Assessing existing user accounts. Plan the GCDS deployment The following sections describe how to plan your GCDS deployment. Decide where to deploy GCDS GCDS can provision users and groups from an LDAP directory to Cloud Identity or Google Workspace. 
Acting as an intermediary between the LDAP server and Cloud Identity or Google Workspace, GCDS queries the LDAP directory to retrieve the necessary information and uses the Directory API to add, modify, or delete users in your Cloud Identity or Google Workspace account. Because Active Directory Domain Services is based on LDAP, GCDS is well suited to implement user provisioning between Active Directory and Cloud Identity or Google Workspace. When connecting an on-premises Active Directory infrastructure to Google Cloud, you can run GCDS either on-premises or on a Compute Engine virtual machine in Google Cloud. In most cases, it's best to run GCDS on-premises: Because the information that Active Directory manages includes personally identifiable information and is usually considered sensitive, you might not want Active Directory to be accessed from outside the local network. By default, Active Directory uses unencrypted LDAP. If you access Active Directory remotely from within Google Cloud, you must use encrypted communication, which you can achieve by using LDAPS (LDAP over SSL) or by tunneling the connection through Cloud VPN. In contrast, communication from GCDS to Cloud Identity or Google Workspace is conducted through HTTPS and requires little or no change to your firewall configuration. You can run GCDS on either Windows or Linux. Although it's possible to deploy GCDS on the domain controller, it's best to run GCDS on a separate machine. This machine must satisfy the system requirements and have LDAP access to Active Directory. Although it's not a prerequisite for the machine to be domain joined or to run Windows, this guide assumes that Cloud Directory Sync runs on a domain-joined Windows machine. To aid with setting up provisioning, GCDS includes a graphical user interface (GUI) called Configuration Manager. If the server on which you intend to run GCDS has a desktop experience, you can run Configuration Manager on the server itself. Otherwise, you must run Configuration Manager locally and then copy the resulting configuration file to the server, where you can use it to run GCDS. This guide assumes that you run Configuration Manager on a server with a GUI. Decide where to retrieve data GCDS uses LDAP to interact with Active Directory and to retrieve information about users and groups. To make this interaction possible, GCDS requires you to provide a hostname and port in the configuration. In a small Active Directory environment that runs only a single global catalog (GC) server, providing a hostname and port is not a problem because you can point GCDS directly to the global catalog server. In a more complex environment that runs redundant GC servers, pointing GCDS to a single server does not make use of the redundancy and is therefore not ideal. Although it's possible to set up a load balancer that distributes LDAP queries across multiple GC servers and keeps track of servers that might be temporarily unavailable, it's preferable to use the DC Locator mechanism to locate servers dynamically. By default, GCDS requires you to explicitly specify the endpoint of an LDAP server and does not support using the DC Locator mechanism. In this guide, you complement GCDS with a small PowerShell script that engages the DC Locator mechanism so that you don't have to statically configure endpoints of global catalog servers.
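The scheduled-task script later in this guide relies on exactly this DC Locator call. As a standalone illustration, assuming the machine has the RSAT Active Directory PowerShell module installed, you can preview which global catalog server the mechanism selects:
# Discover the closest global catalog server dynamically instead of hard-coding one.
Import-Module ActiveDirectory
$gc = Get-ADDomainController -Discover -Service "GlobalCatalog" -NextClosestSite
Write-Host ("Closest GC server: {0} (domain {1})" -f [string]$gc.HostName, $gc.Domain)
Prepare your Cloud Identity or Google Workspace account This section describes how to create a user for GCDS. 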
To enable GCDS to interact with the Directory API and Domain Shared Contacts API of Cloud Identity and Google Workspace, the application needs a user account that has administrative privileges. When signing up for Cloud Identity or Google Workspace, you already created one super admin user. Although you could use this user for GCDS, it's preferable to create a separate user that is exclusively used by Cloud Directory Sync: Open the Admin console and sign in by using the super admin user that you created when signing up for Cloud Identity or Google Workspace. In the menu, click Directory > Users, and then click Add new user to create a user. Provide an appropriate name and email address, such as: First Name: Google Cloud Last Name: Directory Sync Primary email: cloud-directory-sync Retain the primary domain in the email address, even if the domain does not correspond to the forest that you're provisioning from. Ensure that Automatically generate a new password is set to Disabled, and enter a password. Ensure that Ask for a password change at the next sign-in is set to Disabled. Click Add New User. Click Done. To enable GCDS to create, list, and delete user accounts and groups, the user needs additional privileges. Additionally, it's a good idea to exempt the user from single sign-on—otherwise, you might not be able to re-authorize GCDS when experiencing single sign-on problems. Both can be accomplished by making the user a super admin: Locate the newly created user in the list and open it. Under Admin roles and privileges, click Assign Roles. Enable the Super Admin role. Click Save. Warning: The super admin role grants the user full access to Cloud Identity, Google Workspace, and Google Cloud resources. To protect the user against credential theft and malicious use, we recommend that you enable 2-step verification for the user. For more details on how to protect super admin users, see Security best practices for administrator accounts. Configure user provisioning The following sections describe how to configure user provisioning. Create an Active Directory user for GCDS To enable GCDS to retrieve information about users and groups from Active Directory, GCDS also requires a domain user with sufficient access. Rather than reusing an existing Windows user for this purpose, create a dedicated user for GCDS: Graphical Interface Open the Active Directory Users and Computers MMC snap-in from the Start menu. Navigate to the domain and organizational unit where you want to create the user. If there are multiple domains in your forest, create the user in the same domain as the GCDS machine. Right-click on the right window pane and choose New > User. Provide an appropriate name and email address, such as: First Name: Google Cloud Last Name: Directory Sync User logon name: gcds User logon name (pre-Windows 2000): gcds Click Next. Provide a password that satisfies your password policy. Clear User must change password at next logon. Select Password never expires. Click Next, and then click Finish. PowerShell Open a PowerShell console as Administrator. Create a user by running the following command: New-ADUser -Name "Google Cloud Directory Sync" ` -GivenName "Google Cloud" ` -Surname "Directory Sync" ` -SamAccountName "gcds" ` -UserPrincipalName (-Join("gcds@",(Get-ADDomain).DNSRoot)) ` -AccountPassword(Read-Host -AsSecureString "Type password for User") ` -Enabled $True Note: You can use the "Path" argument to create a user under a specific organizational unit (OU). 
For example: -Path "OU=dest,OU=root,DC=domain,DC=com". You now have the prerequisites in place for installing GCDS. Install GCDS On the machine on which you will run GCDS, download and run the GCDS installer. Rather than using a browser to perform the download, you can use the following PowerShell command to download the installer: (New-Object net.webclient).DownloadFile("https://dl.google.com/dirsync/dirsync-win64.exe", "$(pwd)\dirsync-win64.exe") After the download has completed, you can launch the installation wizard by running the following command: .\dirsync-win64.exe If you have already had GCDS installed, you can update GCDS to ensure that you are using the latest version. Create a folder for the GCDS configuration GCDS stores its configuration in an XML file. Because this configuration includes an OAuth refresh token that GCDS uses to authenticate with Google, make sure that you properly secure the folder used for configuration. In addition, because GCDS doesn't require access to local resources other than this folder, you can configure GCDS to run as a limited user, LocalService: On the machine where you installed GCDS, log on as a local administrator. Open a PowerShell console that has administrative privileges. Run the following commands to create a folder that is named $Env:ProgramData\gcds to store the configuration, and to apply an access control list (ACL) so that only GCDS and administrators have access: $gcdsDataFolder = "$Env:ProgramData\gcds" New-Item -ItemType directory -Path $gcdsDataFolder &icacls "$gcdsDataFolder" /inheritance:r &icacls "$gcdsDataFolder" /grant:r "CREATOR OWNER:(OI)(CI)F" /T &icacls "$gcdsDataFolder" /grant "BUILTIN\Administrators:(OI)(CI)F" /T &icacls "$gcdsDataFolder" /grant "Domain Admins:(OI)(CI)F" /T &icacls "$gcdsDataFolder" /grant "LOCAL SERVICE:(OI)(CI)F" /T To determine the location of the ProgramData folder, run the command Write-Host $Env:ProgramData. On English versions of Windows, this path will usually be c:\ProgramData. You need this path later. Connect to Google You will now use Configuration Manager to prepare the GCDS configuration. These steps assume that you run Configuration Manager on the same server where you plan to run GCDS. If you use a different machine to run Configuration Manager, make sure to copy the configuration file to the GCDS server afterward. Also, be aware that testing the configuration on a different machine might not be possible. Launch Configuration Manager. You can find Configuration Manager in the Windows Start menu under Google Cloud Directory Sync > Configuration Manager. Click Google Domain Configuration > Connection Settings. Authorize GCDS and configure domain settings. In the menu, click File > Save as. In the file dialog, enter PROGRAM_DATA\gcds\config.xml as the filename. Replace PROGRAM_DATA with the path to the ProgramData folder that the PowerShell command returned when you ran it earlier. Click Save, and then click OK. Connect to Active Directory The next step is to configure GCDS to connect to Active Directory: In Configuration Manager, click LDAP Configuration > Connection Settings. Configure the LDAP connection settings: Server Type: Select MS Active Directory. Connection Type: Select either Standard LDAP or LDAP+SSL. Host Name: Enter the name of a GC server. This setting is used only for testing. Later, you will automate the discovery of the GC server. Port: 3268 (GC) or 3269 (GC over SSL). 
Using a GC server instead of a domain controller helps ensure that you can provision users from all domains of your Active Directory forest. Also, make sure that authentication continues to work after the Microsoft ADV190023 update, which tightens LDAP channel binding and LDAP signing requirements. Authentication Type: Simple. Authorized User: Enter the User Principal Name (UPN) of the domain user that you created earlier: gcds@UPN_SUFFIX_DOMAIN. Replace UPN_SUFFIX_DOMAIN with the appropriate UPN suffix domain for the user. Alternatively, you can specify the user by using the NETBIOS_DOMAIN_NAME\gcds syntax. Base DN: Leave this field empty to ensure that searches are performed across all domains in the forest. To verify the settings, click Test connection. If the connection fails, double-check that you've specified the hostname of a GC server and that the username and password are correct. Click Close. Decide what to provision Now that you've successfully connected GCDS, you can decide which items to provision: In Configuration Manager, click General Settings. Ensure that User Accounts is selected. If you intend to provision groups, ensure that Groups is selected; otherwise, clear the checkbox. Synchronizing organizational units is beyond the scope of this guide, so leave Organizational Units unselected. Leave User Profiles and Custom Schemas unselected. For more details, see Decide what to provision. Provision users To provision users, you configure how to map users between Active Directory and Cloud Identity or Google Workspace: In Configuration Manager, click User Accounts > Additional User Attributes. Click Use defaults to automatically populate the attributes for Given Name and Family Name with givenName and sn, respectively. The remaining settings depend on whether you intend to use the UPN or email address to map Active Directory users to users in Cloud Identity or Google Workspace, and whether you need to apply domain name substitutions. If you're unsure which option is best for you, see the article on how Active Directory identity management can be extended to Google Cloud. UPN In Configuration Manager, click User Accounts > User Attributes. Click Use defaults. Change Email Address Attribute to userPrincipalName. Click proxyAddresses > Remove if you don't want to sync alias addresses. Click the Search Rules tab, and then click Add Search Rule. Enter the following settings: Scope: Sub-tree Rule: (&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2))(!(userPrincipalName=gcds@*))) This rule matches all non-disabled users but ignores computer and managed service accounts, as well as the gcds user account. Base DN: Leave blank to search all domains in the forest. Click OK to create the rule. UPN: domain substitution In Configuration Manager, click User Accounts > User Attributes. Click Use defaults. Change Email Address Attribute to userPrincipalName. Click proxyAddresses > Remove if you don't want to sync alias addresses. Click the Search Rules tab, and then click Add Search Rule. Enter the following settings: Scope: Sub-tree Rule: (&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2))(!(userPrincipalName=gcds@*))) This rule matches all non-disabled users but ignores computer and managed service accounts, as well as the gcds user account. Base DN: Leave blank to search all domains within the forest. Click OK to create the rule. Click Google Domain Configuration > Connection Settings, and choose Replace domain names in LDAP email addresses with this domain name. Email In Configuration Manager, click User Accounts > User Attributes. 
Click Use defaults. Click the Search Rules tab, and then click Add Search Rule. Enter the following settings: Scope: Sub-tree Rule: (&(objectCategory=person)(objectClass=user)(mail=*)(!(userAccountControl:1.2.840.113556.1.4.803:=2))) This rule matches all non-disabled users with a non-empty email address but ignores computer and managed service accounts. Base DN: Leave blank to search all domains in the forest. Click OK to create the rule. Email: domain substitution In Configuration Manager, click User Accounts > User Attributes. Click Use defaults. Click proxyAddresses > Remove if you don't want to sync alias addresses. Click the Search Rules tab, and then click Use defaults. Click Google Domain Configuration > Connection Settings, and choose Replace domain names in LDAP email addresses with this domain name. For further details on mapping user attributes, see Set up your sync with Configuration Manager. Deletion policy So far, the configuration has focused on adding and updating users in Cloud Identity or Google Workspace. However, it's also important that users that are disabled or deleted in Active Directory be suspended or deleted in Cloud Identity or Google Workspace. As part of the provisioning process, GCDS generates a list of users in Cloud Identity or Google Workspace that don't have corresponding matches in the Active Directory LDAP query results. Because the LDAP query incorporates the clause (!(userAccountControl:1.2.840.113556.1.4.803:=2)), any users that have been disabled or deleted in Active Directory since the last provisioning was performed will be included in this list. The default behavior of GCDS is to delete these users in Cloud Identity or Google Workspace, but you can customize this behavior: In Configuration Manager, click User Accounts > User Attributes. Under Google Domain Users Deletion/Suspension Policy, ensure that Don't suspend or delete Google domain admins not found in LDAP is checked. This setting ensures that GCDS won't suspend or delete the super admin user that you used to configure your Cloud Identity or Google Workspace account. Optionally, change the deletion policy for non-administrator users. If you use multiple separate instances of GCDS to provision different domains or forests to a single Cloud Identity or Google Workspace account, make sure that the different GCDS instances don't interfere with one another. By default, users in Cloud Identity or Google Workspace that have been provisioned from a different source will wrongly be identified as having been deleted in Active Directory. To avoid this situation, you can move all users that are beyond the scope of the domain or forest that you're provisioning from to a single OU and then exclude that OU. In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: Organization Complete Path Match Type: Exact Match Exclusion Rule: Enter the OU path and its name. For example: ROOT OU/EXCLUDED OU Replace ROOT OU/EXCLUDED OU with your OU path and the excluded OU's name. Click OK to create the rule. Alternatively, if excluding a single OU doesn't fit your needs, you can exclude users from other domains or forests based on their email addresses. UPN In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. 
Configure the following settings: Type: User Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!UPN_SUFFIX_DOMAIN).*)$ Replace UPN_SUFFIX_DOMAIN with your UPN suffix domain, as in this example: .*@((?!corp.example.com).*)$ If you use more than one UPN suffix domain, extend the expression as shown: .*@((?!corp.example.com|branch.example.com).*)$ Click OK to create the rule. UPN: domain substitution In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: User Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!SUBSTITUTION_DOMAIN).*)$ Replace SUBSTITUTION_DOMAIN with the domain that you use to replace the UPN suffix domain, as in this example: .*@((?!corp.example.com).*)$ Click OK to create the rule. Email In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: User Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!MX_DOMAIN).*)$ Replace MX_DOMAIN with the domain name that you use in email addresses, as in this example: .*@((?!corp.example.com).*)$ If you use more than one UPN suffix domain, extend the expression as shown: .*@((?!corp.example.com|branch.example.com).*)$ Click OK to create the rule. Email: domain substitution In Configuration Manager, click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: User Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!SUBSTITUTION_DOMAIN).*)$ Replace SUBSTITUTION_DOMAIN with the domain that you use to replace the email domain, as in this example: .*@((?!corp.example.com).*)$ Click OK to create the rule. For further details on deletion and suspension settings, see Learn more about Configuration Manager options. Provision groups The next step is to configure how to map groups between Active Directory and Cloud Identity or Google Workspace. This process differs based on whether you plan to map groups by common name or by email address. Configure group mappings by common name First, you need to identify the types of security groups that you intend to provision, and then formulate an appropriate LDAP query. The following table contains common queries that you can use. Type LDAP query Domain local groups (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483652)) Global groups (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483650)) Universal groups (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483656)) Global and universal groups (&(objectCategory=group)(|(groupType:1.2.840.113556.1.4.803:=2147483650)(groupType:1.2.840.113556.1.4.803:=2147483656))) All groups (objectCategory=group) The query for global groups also covers Active Directory–defined groups such as domain controllers. You can filter these groups by restricting the search by organizational unit (ou). The remaining settings depend on whether you intend to use UPN or email address to map Active Directory to users in Cloud Identity or Google Workspace. UPN In Configuration Manager, click Groups > Search Rules. 
Click Use Defaults to add two default rules. Click the first rule edit icon. Edit Rule to replace the LDAP query. In the Groups box, enter the following settings: Group Email Address Attribute: cn User Email Address Attribute: userPrincipalName Click the Prefix-Suffix tab. In the Group Email Address box, enter the following settings: Suffix: @PRIMARY_DOMAIN, where you replace @PRIMARY_DOMAIN with the primary domain of your Cloud Identity or Google Workspace account. Although the setting seems redundant because GCDS appends the domain automatically, you must specify the setting explicitly to prevent multiple GCDS instances from erasing group members that they had not added. Example: @example.com Click OK. Click the second rule cross icon to delete that rule. Email In Configuration Manager, click Groups > Search Rules. Click Use Defaults to add a couple of default rules. Click the first rule edit icon. Edit Rule to replace the LDAP query. In the Groups box, edit Group Email Address Attribute to enter the setting cn. Click OK. The same settings also apply if you used domain substitution when mapping users. Configure group mappings by email address First, you need to identify the types of security groups that you intend to provision, and then formulate an appropriate LDAP query. The following table contains common queries that you can use. Type LDAP query Domain local groups with email address (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483652)(mail=*)) Global groups with email address (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483650)(mail=*)) Universal groups with email address (&(objectCategory=group)(groupType:1.2.840.113556.1.4.803:=2147483656)(mail=*)) Global and universal groups with email address (&(objectCategory=group)(|(groupType:1.2.840.113556.1.4.803:=2147483650)(groupType:1.2.840.113556.1.4.803:=2147483656))(mail=*)) All groups with email address (&(objectCategory=group)(mail=*)) The remaining settings depend on whether you intend to use UPN or email address to map Active Directory to users in Cloud Identity or Google Workspace. UPN In Configuration Manager, click Groups > Search Rules. Click Use Defaults to add two default rules. Click the first rule edit icon. Edit Rule to replace the LDAP query. In the Groups box, edit User Email Name Attribute to enter the setting userPrincipalName. Click OK. Click the second rule cross icon to delete that rule. Email In Configuration Manager, click Groups > Search Rules. Click Use Defaults to add a couple of default rules. Click the first rule edit icon. Edit Rule to replace the LDAP query. Click OK. Click the second rule cross icon to remove this rule. If you have enabled Replace domain names in LDAP email addresses with this domain name, it also applies to email addresses of groups and members. Deletion policy GCDS handles the deletion of groups similarly to the deletion of users. If you use multiple separate instances of GCDS to provision different domains or forests to a single Cloud Identity or Google Workspace account, make sure that the different GCDS instances don't interfere with one another. By default, a group member in Cloud Identity or Google Workspace that has been provisioned from a different source will wrongly be identified in Active Directory as having been deleted. To avoid this situation, configure GCDS to ignore all group members that are beyond the scope of the domain or forest that you're provisioning from. UPN Click Google Domain Configuration > Exclusion Rules. 
Click Add Exclusion Rule. Configure the following settings: Type: Group Member Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!UPN_SUFFIX_DOMAIN).*)$ Replace UPN_SUFFIX_DOMAIN with your UPN suffix domain, as in the following example: .*@((?!corp.example.com).*)$ If you use more than one UPN suffix domain, extend the expression as shown: .*@((?!corp.example.com|branch.example.com).*)$ Click OK to create the rule. UPN: domain substitution Click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: Group Member Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!SUBSTITUTION_DOMAIN).*)$ Replace SUBSTITUTION_DOMAIN with the domain that you use to replace the UPN suffix domain, as in this example: .*@((?!corp.example.com).*)$ Click OK to create the rule. Email Click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: Group Member Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!MX_DOMAIN).*)$ Replace MX_DOMAIN with the domain name that you use in email addresses, as in the following example: .*@((?!corp.example.com).*)$ If you use more than one UPN suffix domain, extend the expression as shown: .*@((?!corp.example.com|branch.example.com).*)$ Click OK to create the rule. Email: domain substitution Click Google Domain Configuration > Exclusion Rules. Click Add Exclusion Rule. Configure the following settings: Type: Group Member Email Address Match Type: Regular Expression Exclusion Rule: If you use a single UPN suffix domain, enter the following regular expression: .*@((?!SUBSTITUTION_DOMAIN).*)$ Replace SUBSTITUTION_DOMAIN with the domain that you use to replace the email domain, as in the following example: .*@((?!corp.example.com).*)$ Click OK to create the rule. For more information about group settings, see Learn more about Configuration Manager options. Configure logging and notifications Keeping users in sync requires that you run GCDS on a scheduled basis. To allow you to keep track of GCDS activity and potential problems, you can control how and when GCDS writes its log file: In Configuration Manager, click Logging. Set File name to PROGRAM_DATA\gcds\gcds_sync.#{timestamp}.log. Replace PROGRAM_DATA with the path to the ProgramData folder that the PowerShell command returned when you ran it earlier. Click File > Save to commit the configuration changes to disk, then click OK. Note: You can click File > Save or Save as to commit the configuration changes to disk when you complete each of the preceding steps. If the configuration is ready to be tested, you can click Go to the simulation tab. Otherwise, you can click Skip simulation. In addition to logging, GCDS can send notifications by email. To activate this service, click Notifications and provide connection information for your mail server. Simulate user provisioning You've completed the GCDS configuration. To verify that the configuration works as intended, you need to first save the configuration to disk and then simulate a user provisioning run. During simulation, GCDS won't perform any changes to your Cloud Identity or Google Workspace account, but will instead report which changes it would perform during a regular provision run. 
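Before you run the simulation, you can optionally sanity-check the exclusion-rule regular expressions from the preceding sections. The following is a minimal sketch that uses PowerShell's -match operator with the sample domain corp.example.com from the examples above; GCDS evaluates these patterns with its own regular expression engine, so treat this only as a quick plausibility check:
# Exclusion rules match the addresses that GCDS should ignore.
$exclusionRule = '.*@((?!corp.example.com).*)$'
# An address in the provisioned domain: no match, so the user or group member stays in scope.
'alice@corp.example.com' -match $exclusionRule    # returns False
# An address from another domain or source: match, so GCDS ignores it.
'bob@partner.example.org' -match $exclusionRule   # returns True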
In Configuration Manager, click Sync. At the bottom of the screen, select Clear cache, and then click Simulate sync. After the process completes, review the Proposed changes section of the log that is shown in the lower half of the dialog and verify that there are no unwanted changes such as deleting or suspending any users or groups. Initial user provisioning You can now trigger the initial user provisioning: Warnings Triggering user provisioning will make permanent changes to users and groups in your Cloud Identity or Google Workspace account. If you have a large number of users to provision, consider temporarily changing the LDAP query to match a subset of these users only. Using this subset of users, you can then test the process and adjust settings if necessary. After you've successfully validated results, change back the LDAP query and provision the remaining users. Avoid repeatedly modifying or deleting a large number of users when testing because such actions might be flagged as abusive behavior. Trigger a provision run as follows: In Configuration Manager, click Sync. At the bottom of the screen, select Clear cache, and then click Sync & apply changes. A dialog appears showing the status. After the process completes, check the log that is shown in the lower half of the dialog: Under Successful user changes, verify that at least one user has been created. Under Failures, verify that no failures occurred. Schedule provisioning To ensure that changes performed in Active Directory are propagated to your Cloud Identity or Google Workspace account, set up a scheduled task that triggers a provisioning run every hour: Open a PowerShell console as Administrator. Check if the Active Directory PowerShell module is available on the system:
import-module ActiveDirectory
If the command fails, download and install the Remote Server Administration Tools and try again. In Notepad, create a file, copy the following content into it, and save the file to %ProgramData%\gcds\sync.ps1. When you're done, close the file.
[CmdletBinding()]
Param(
  [Parameter(Mandatory=$True)]
  [string]$config,
  [Parameter(Mandatory=$True)]
  [string]$gcdsInstallationDir
)

import-module ActiveDirectory

# Stop on error.
$ErrorActionPreference ="stop"

# Ensure it's an absolute path.
$rawConfigPath = [System.IO.Path]::Combine((pwd).Path, $config)

# Discover closest GC in current domain.
$dc = Get-ADDomainController -discover -Service "GlobalCatalog" -NextClosestSite
Write-Host ("Using GC server {0} of domain {1} as LDAP source" -f [string]$dc.HostName, $dc.Domain)

# Load XML and replace the endpoint.
$dom = [xml](Get-Content $rawConfigPath)
$ldapConfigNode = $dom.SelectSingleNode("//plugin[@class='com.google.usersyncapp.plugin.ldap.LDAPPlugin']/config")

# Tweak the endpoint.
$ldapConfigNode.hostname = [string]$dc.HostName
$ldapConfigNode.ldapCredMachineName = [string]$dc.HostName
$ldapConfigNode.port = "3268"   # Always use GC port

# Tweak the tsv files location
$googleConfigNode = $dom.SelectSingleNode("//plugin[@class='com.google.usersyncapp.plugin.google.GooglePlugin']/config")
$googleConfigNode.nonAddressPrimaryKeyMapFile = [System.IO.Path]::Combine((pwd).Path, "nonAddressPrimaryKeyFile.tsv")
$googleConfigNode.passwordTimestampFile = [System.IO.Path]::Combine((pwd).Path, "passwordTimestampCache.tsv")

# Save resulting config.
$targetConfigPath = $rawConfigPath + ".autodiscover"
$writer = New-Object System.IO.StreamWriter($targetConfigPath, $False, (New-Object System.Text.UTF8Encoding($False)))
$dom.Save($writer)
$writer.Close()

# Start provisioning.
Start-Process -FilePath "$gcdsInstallationDir\sync-cmd" `
  -Wait -ArgumentList "--apply --config ""$targetConfigPath"""
Configuration Manager created a secret key to encrypt the credentials in the config file. To ensure that GCDS can still read the configuration when it's run as a scheduled task, run the following commands to copy that secret key from your own profile to the profile of NT AUTHORITY\LOCAL SERVICE:
New-Item -Path Registry::HKEY_USERS\S-1-5-19\SOFTWARE\JavaSoft\Prefs\com\google\usersyncapp -Force;
Copy-Item -Path Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\SOFTWARE\JavaSoft\Prefs\com\google\usersyncapp\util `
  -Destination Microsoft.PowerShell.Core\Registry::HKEY_USERS\S-1-5-19\SOFTWARE\JavaSoft\Prefs\com\google\usersyncapp\util
If the commands fail, ensure that you started the PowerShell console as Administrator. Create a scheduled task by running the following commands. The scheduled task is triggered every hour and invokes the sync.ps1 script as NT AUTHORITY\LOCAL SERVICE. Warning: After it starts, the scheduled task will make permanent changes to your Cloud Identity or Google Workspace account.
$taskName = "Synchronize to Cloud Identity"
$gcdsDir = "$Env:ProgramData\gcds"

$action = New-ScheduledTaskAction -Execute 'PowerShell.exe' `
  -Argument "-ExecutionPolicy Bypass -NoProfile $gcdsDir\sync.ps1 -config $gcdsDir\config.xml -gcdsInstallationDir '$Env:Programfiles\Google Cloud Directory Sync'" `
  -WorkingDirectory $gcdsDir

$trigger = New-ScheduledTaskTrigger `
  -Once `
  -At (Get-Date) `
  -RepetitionInterval (New-TimeSpan -Minutes 60) `
  -RepetitionDuration (New-TimeSpan -Days (365 * 20))

$principal = New-ScheduledTaskPrincipal -UserID "NT AUTHORITY\LOCAL SERVICE" -LogonType ServiceAccount

Register-ScheduledTask -Action $action -Trigger $trigger -Principal $principal -TaskName $taskName

$task = Get-ScheduledTask -TaskName "$taskName"
$task.Settings.ExecutionTimeLimit = "PT12H"
Set-ScheduledTask $task
For more information, see Schedule automatic synchronizations. Test user provisioning You've completed the installation and configuration of GCDS, and the scheduled task will trigger a provision run every hour. To trigger a provisioning run manually, switch to the PowerShell console and run the following command:
Start-ScheduledTask "Synchronize to Cloud Identity"
Clean up To remove GCDS, perform the following steps: Open Windows Control Panel and click Programs > Uninstall a program. Select Google Cloud Directory Sync, and click Uninstall/Change to launch the uninstall wizard. Then follow the instructions in the wizard. Open a PowerShell console and run the following command to remove the scheduled task:
$taskName = "Synchronize to Cloud Identity"
Unregister-ScheduledTask -TaskName $taskName -Confirm:$False
Run the following command to delete the configuration and log files:
Remove-Item -Recurse -Force "$Env:ProgramData\gcds"
Remove-Item -Recurse -Path Registry::HKEY_USERS\S-1-5-19\SOFTWARE\JavaSoft\Prefs\com\google\usersyncapp
What's next Configure single sign-on between Active Directory and Google Cloud. Review GCDS best practices and FAQ. Find out how to troubleshoot common GCDS issues. Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external identity provider. 
Acquaint yourself with best practices for managing super admin users. Send feedback \ No newline at end of file diff --git a/Align_spending_with_business_value.txt b/Align_spending_with_business_value.txt new file mode 100644 index 0000000000000000000000000000000000000000..2bed2f3b71f88fcdb3ab9b8859975213262f47a6 --- /dev/null +++ b/Align_spending_with_business_value.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/cost-optimization/align-cloud-spending-business-value +Date Scraped: 2025-02-23T11:43:49.288Z + +Content: +Home Docs Cloud Architecture Center Send feedback Align cloud spending with business value Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-18 UTC This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to align your use of Google Cloud resources with your organization's business goals. Principle overview To effectively manage cloud costs, you need to maximize the business value that the cloud resources provide and minimize the total cost of ownership (TCO). When you evaluate the resource options for your cloud workloads, consider not only the cost of provisioning and using the resources, but also the cost of managing them. For example, virtual machines (VMs) on Compute Engine might be a cost-effective option for hosting applications. However, when you consider the overhead to maintain, patch, and scale the VMs, the TCO can increase. On the other hand, serverless services like Cloud Run can offer greater business value. The lower operational overhead lets your team focus on core activities and helps to increase agility. To ensure that your cloud resources deliver optimal value, evaluate the following factors: Provisioning and usage costs: The expenses incurred when you purchase, provision, or consume resources. Management costs: The recurring expenses for operating and maintaining resources, including tasks like patching, monitoring and scaling. Indirect costs: The costs that you might incur to manage issues like downtime, data loss, or security breaches. Business impact: The potential benefits from the resources, like increased revenue, improved customer satisfaction, and faster time to market. By aligning cloud spending with business value, you get the following benefits: Value-driven decisions: Your teams are encouraged to prioritize solutions that deliver the greatest business value and to consider both short-term and long-term cost implications. Informed resource choice: Your teams have the information and knowledge that they need to assess the business value and TCO of various deployment options, so they choose resources that are cost-effective. Cross-team alignment: Cross-functional collaboration between business, finance, and technical teams ensures that cloud decisions are aligned with the overall objectives of the organization. Recommendations To align cloud spending with business objectives, consider the following recommendations. Prioritize managed services and serverless products Whenever possible, choose managed services and serverless products to reduce operational overhead and maintenance costs. This choice lets your teams concentrate on their core business activities. They can accelerate the delivery of new features and functionalities, and help drive innovation and value. 
The following are examples of how you can implement this recommendation: To run PostgreSQL, MySQL, or Microsoft SQL Server databases, use Cloud SQL instead of deploying those databases on VMs. To run and manage Kubernetes clusters, use Google Kubernetes Engine (GKE) Autopilot instead of deploying containers on VMs. For your Apache Hadoop or Apache Spark processing needs, use Dataproc and Dataproc Serverless. Per-second billing can help to achieve significantly lower TCO when compared to on-premises data lakes. Balance cost efficiency with business agility Controlling costs and optimizing resource utilization are important goals. However, you must balance these goals with the need for flexible infrastructure that lets you innovate rapidly, respond quickly to changes, and deliver value faster. The following are examples of how you can achieve this balance: Adopt DORA metrics for software delivery performance. Metrics like change failure rate (CFR), time to detect (TTD), and time to restore (TTR) can help to identify and fix bottlenecks in your development and deployment processes. By reducing downtime and accelerating delivery, you can achieve both operational efficiency and business agility. Follow Site Reliability Engineering (SRE) practices to improve operational reliability. SRE's focus on automation, observability, and incident response can lead to reduced downtime, lower recovery time, and higher customer satisfaction. By minimizing downtime and improving operational reliability, you can prevent revenue loss and avoid the need to overprovision resources as a safety net to handle outages. Enable self-service optimization Encourage a culture of experimentation and exploration by providing your teams with self-service cost optimization tools, observability tools, and resource management platforms. Enable them to provision, manage, and optimize their cloud resources autonomously. This approach helps to foster a sense of ownership, accelerate innovation, and ensure that teams can respond quickly to changing needs while being mindful of cost efficiency. Adopt and implement FinOps Adopt FinOps to establish a collaborative environment where everyone is empowered to make informed decisions that balance cost and value. FinOps fosters financial accountability and promotes effective cost optimization in the cloud. Promote a value-driven and TCO-aware mindset Encourage your team members to adopt a holistic attitude toward cloud spending, with an emphasis on TCO and not just upfront costs. Use techniques like value stream mapping to visualize and analyze the flow of value through your software delivery process and to identify areas for improvement. Implement unit costing for your applications and services to gain a granular understanding of cost drivers and discover opportunities for cost optimization. For more information, see Maximize business value with cloud FinOps. Previous arrow_back Overview Next Foster a culture of cost awareness arrow_forward Send feedback \ No newline at end of file diff --git a/AlloyDB_for_PostgreSQL.txt b/AlloyDB_for_PostgreSQL.txt new file mode 100644 index 0000000000000000000000000000000000000000..fb8b8579ed9a1297c7d94576fa88e2e929df5218 --- /dev/null +++ b/AlloyDB_for_PostgreSQL.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/alloydb +Date Scraped: 2025-02-23T12:03:54.417Z + +Content: +The future of PostgreSQL is here, and it's built for you. 
Download our new ebook to discover how AlloyDB can transform your business.AlloyDB100% PostgreSQL-compatible database that runs anywherePower your most demanding enterprise workloads with superior performance, availability, and scale, and supercharge them with AI.Go to consoleDocumentationGet started with a 30-day AlloyDB free trial instance.In addition, new Google Cloud customers get $300 in free credits.Product highlightsEliminates your dependency on proprietary databasesScalable for applications of all sizes 99.99% availability SLA, inclusive of maintenanceWhat is AlloyDB?3:57FeaturesBetter price-performanceAlloyDB is more than 4x faster for transactional workloads and provides up to 2x better price-performance compared to self-managed PostgreSQL.* It's suitable for the most demanding enterprise workloads, including those that require high transaction throughput, large data sizes, or multiple read replicas. Read connections scale horizontally, backed by low lag, scale-out read replica pools, and support cross-region replicas. Committed use discounts offer additional savings for one to three year commitments.AlloyDB vs. self-managed PostgreSQLA price-performance comparisonRead the blogFully featured vector databaseAlloyDB AI can help you build a wide range of generative AI applications. It uses the Google ScaNN index, the technology that powers services like Google Search and YouTube, to scale to over a billion vectors and deliver up to 4 times faster vector queries than the HNSW index in standard PostgreSQL.* It also helps you generate vector embeddings from within your database, access data stored in open-source gen AI tools such as LangChain, and access AI models in Vertex AI and other platforms.Build gen AI apps with PostgreSQLCombine AI models with real-time operational dataRead the blogRuns anywhereAlloyDB Omni is a downloadable edition of AlloyDB, designed to run anywhere—in your data center, on your laptop, at the edge, and in any cloud. It’s powered by the same engine that underlies the cloud-based AlloyDB service and provides the same functionality. AlloyDB Omni is a fraction of the cost of legacy databases, so it’s an attractive way to modernize to an enterprise-grade version of PostgreSQL with support from a Tier 1 vendor.Quickstart5:34AI-driven development and operationsSimplify all aspects of the database journey with AI-powered assistance, helping your teams focus on what matters most. AlloyDB Studio offers a query editor where you can use Gemini to write SQL and analyze your data using natural language prompts. Database Center, in Preview, provides a comprehensive view of your database fleet with intelligent performance and security recommendations, and you can enable Gemini for an easy-to-use chat interface to ask questions and get optimization recommendations.Manage your data using AlloyDB StudioBuild database applications faster using natural languageRead documentationReal-time business insightsThanks to its built-in columnar engine, AlloyDB is up to 100x faster than standard PostgreSQL for analytical queries*, with zero impact on operational performance when running business intelligence, reporting, and hybrid transactional and analytical workloads (HTAP). 
You can also stream data or use federated queries with BigQuery and call machine learning models in Vertex AI directly within a query or transaction.Under the hood: columnar engineHow the AlloyDB columnar engine accelerates your analytical queriesRead the blogFully managed database serviceAlloyDB is a fully managed database service with cloud-scale architecture and industry-leading performance, availability, and scale. It automates administrative tasks such as backups, replication, patching, and capacity management, and uses adaptive algorithms and machine learning for PostgreSQL vacuum management, storage management, memory management, data tiering, and analytics acceleration, so you can focus on building your applications.Bayer Crop Science unlocks harvest data efficiency2:07PostgreSQL compatibilityPostgreSQL has emerged in recent years as a leading alternative to legacy, proprietary databases because of its rich functionality, ecosystem extensions, and enterprise readiness. AlloyDB is fully compatible with PostgreSQL, providing flexibility and true portability for your workloads. It’s easy to migrate existing workloads and bring your existing PostgreSQL skills over.Is AlloyDB compatible with PostgreSQL?2:37High availabilityAlloyDB offers a 99.99% uptime SLA, inclusive of maintenance. It automatically detects and recovers from most database failures within 60 seconds, independent of database size and load. The architecture supports non-disruptive instance resizing and database maintenance, with planned operations incurring less than 1 second of application downtime. Read pools are updated with zero downtime.ScalabilityScale AlloyDB instances up and down to support almost any workload and deploy up to 20 read replicas for read scalability. You can also create secondary clusters in different regions for disaster recovery, have them continuously replicated from the primary region, and promote them in case of an outage. Regional replicas deliver more than 25x lower replication lag than standard PostgreSQL for high throughput transactional workloads. They also improve read performance by bringing data closer to your users.Character.ai scales its growing gen AI platform2:46Secure access and connectivityAlloyDB encrypts data in transit and at rest. It supports private connectivity with Virtual Private Cloud (VPC) for secure connectivity to your applications, and allows you to access your database instances via private IP without going through the internet or using external IP addresses. You can manage your own encryption keys to encrypt data at rest for your database and your backups, and you can choose between Identity and Access Management (IAM) and PostgreSQL user roles for database authentication.Customer-friendly pricingEliminate your dependency on high-cost, proprietary databases, and take advantage of transparent and predictable pricing with no proprietary licensing or opaque I/O charges. Storage is automatically managed and you're only charged for what you use, with no additional storage costs for read replicas. An ultra-fast cache, automatically provisioned in addition to instance memory, allows you to maximize the price-performance ratio. Committed use discounts offer additional savings for one to three year commitments. 
And AlloyDB Omni offers another simple and affordable deployment option.FLUIDEFI addresses DeFi industry challengesAlloyDB boosts response speed and reduces costsRead the blogContinuous backup and recoveryProtect your business from data loss with point-in-time recovery to any point within your defined retention window. Restore the database to a specific date and time for development, testing, and auditing purposes, or recover your production database from user or application errors. You no longer need expensive hardware and software solutions to achieve enterprise-level data protection.Under the hood: business continuityBuild highly resilient applications with AlloyDBRead the blogEasy migrationsNo matter where your source database is located—whether on-premises, on Compute Engine, or in other clouds—Database Migration Service (DMS) can migrate it to AlloyDB securely and with minimal downtime. DMS leverages native PostgreSQL replication capabilities and automated schema and code conversion, as necessary, to maximize the reliability of your migration. Gemini adds AI-powered assistance for heterogeneous database migrations, helping you review and convert database-resident code and offering full explanability. For an assessment of your migration needs, try the Database Modernization Program.View all featuresCompare PostgreSQL options on Google CloudGoogle Cloud serviceOverviewKey benefitsAlloyDBFull PostgreSQL compatibility, with superior performance and scale and the most comprehensive generative AI supportTry AlloyDB free to enjoy:4x faster performance for transactional workloads*Hybrid transactional and analytical processing (HTAP)Scalability for the most demanding enterprise workloadsCloud SQLFully managed, standard version of open source PostgreSQL in the cloudLearn more about how Cloud SQL provides:Easiest lift and shift to the cloudSame management as the MySQL and SQL Server enginesLowest cost relational database optionSpannerCloud-native with unlimited scalability and PostgreSQL interface and toolingLearn more about Spanner if you need:Unlimited scale and global consistency99.999% availability SLASupport for relational and non-relational workloadsAlloyDBOverviewFull PostgreSQL compatibility, with superior performance and scale and the most comprehensive generative AI supportKey benefitsTry AlloyDB free to enjoy:4x faster performance for transactional workloads*Hybrid transactional and analytical processing (HTAP)Scalability for the most demanding enterprise workloadsCloud SQLOverviewFully managed, standard version of open source PostgreSQL in the cloudKey benefitsLearn more about how Cloud SQL provides:Easiest lift and shift to the cloudSame management as the MySQL and SQL Server enginesLowest cost relational database optionSpannerOverviewCloud-native with unlimited scalability and PostgreSQL interface and toolingKey benefitsLearn more about Spanner if you need:Unlimited scale and global consistency99.999% availability SLASupport for relational and non-relational workloadsHow It WorksAlloyDB provides full compatibility with open source PostgreSQL, including popular extensions, with higher performance and scalability. Each cluster has a primary instance and an optional read pool with multiple read nodes, and you can replicate to secondary clusters in separate regions. AlloyDB features continuous backup and offers high availability with non-disruptive maintenance for most workloads. 
It uses adaptive algorithms to eliminate many routine administration tasks.View documentationA deep dive into AlloyDB with Andi GutmansCommon UsesTransactional workloadsThe future of PostgreSQL is hereAlloyDB is fully PostgreSQL-compatible but is more than 4x faster for transactional workloads, making it ideal for e-commerce, financial applications, gaming, and any other workload that demands high throughput and fast response times. Independent scaling of compute and storage, an ultra-fast cache, and intelligent database management give your applications room to grow and eliminate tedious database administration tasks.Ebook: AlloyDB, the database built for youBlog: Learn how to use Index Advisor to optimize performanceQuickstart: Create and connect to a databaseLearning resourcesThe future of PostgreSQL is hereAlloyDB is fully PostgreSQL-compatible but is more than 4x faster for transactional workloads, making it ideal for e-commerce, financial applications, gaming, and any other workload that demands high throughput and fast response times. Independent scaling of compute and storage, an ultra-fast cache, and intelligent database management give your applications room to grow and eliminate tedious database administration tasks.Ebook: AlloyDB, the database built for youBlog: Learn how to use Index Advisor to optimize performanceQuickstart: Create and connect to a databaseAnalytical workloadsGet real-time business insightsAs a data analyst, you can get real-time insights by running queries directly on your AlloyDB operational database. With its built-in, automatically managed columnar engine, AlloyDB is up to 100x faster than standard PostgreSQL for analytical queries, making it ideal for real-time business intelligence dashboards, fraud detection, customer behavior analysis, inventory optimization, and personalization.Video: Learn how the columnar engine can boost performanceBlog: AlloyDB columnar engine under the hoodLab: Accelerating analytical queries with the columnar engineLearning resourcesGet real-time business insightsAs a data analyst, you can get real-time insights by running queries directly on your AlloyDB operational database. With its built-in, automatically managed columnar engine, AlloyDB is up to 100x faster than standard PostgreSQL for analytical queries, making it ideal for real-time business intelligence dashboards, fraud detection, customer behavior analysis, inventory optimization, and personalization.Video: Learn how the columnar engine can boost performanceBlog: AlloyDB columnar engine under the hoodLab: Accelerating analytical queries with the columnar engineGenerative AI applicationsBuild gen AI apps with state-of-the-art modelsAs generative AI gets integrated into every type of application, the operational database becomes essential for storing and searching vectors and grounding models in enterprise data. AlloyDB AI offers fast vector search, helps you generate vector embeddings from within your database, and offers access to AI models on Vertex AI and other platforms.Documentation: Query and index embeddings using pgvectorBlog: Build gen AI apps with LangChain and Google Cloud databasesBlog: AlloyDB supercharges PostgreSQL vector searchLearning resourcesBuild gen AI apps with state-of-the-art modelsAs generative AI gets integrated into every type of application, the operational database becomes essential for storing and searching vectors and grounding models in enterprise data. 
AlloyDB AI offers fast vector search, helps you generate vector embeddings from within your database, and offers access to AI models on Vertex AI and other platforms.Documentation: Query and index embeddings using pgvectorBlog: Build gen AI apps with LangChain and Google Cloud databasesBlog: AlloyDB supercharges PostgreSQL vector searchDatabase modernizationMove your apps to modern, fully managed databasesWhether you're an existing PostgreSQL user or you're looking to transition from legacy databases, you have a path to fully managed databases in the cloud. Database Migration Service (DMS) can help with assessment, conversion, and migration so you can experience the enhanced performance, scalability, and reliability of AlloyDB while maintaining full PostgreSQL compatibility.Documentation: An overview of migration to AlloyDBDemo: Build and modernize an e-commerce applicationBlog: How B4A achieves beautiful performance for its beauty platformLearning resourcesMove your apps to modern, fully managed databasesWhether you're an existing PostgreSQL user or you're looking to transition from legacy databases, you have a path to fully managed databases in the cloud. Database Migration Service (DMS) can help with assessment, conversion, and migration so you can experience the enhanced performance, scalability, and reliability of AlloyDB while maintaining full PostgreSQL compatibility.Documentation: An overview of migration to AlloyDBDemo: Build and modernize an e-commerce applicationBlog: How B4A achieves beautiful performance for its beauty platformMulticloud and hybrid cloudRun anywhere, at a fraction of the costWhile the cloud is the typical destination for database migrations, you may need to keep some workloads on premises to meet regulatory or data sovereignty requirements, and you might need to run them on other clouds or at the edge. AlloyDB Omni is a downloadable edition of AlloyDB that’s powered by the same engine as the cloud-based service and offers the same functionality, so you can standardize on a single database across all platforms.Blog: AlloyDB Omni, the downloadable edition of AlloyDBQuickstart: Learn how to install AlloyDB OmniVideo: AlloyDB Omni on Google Compute Engine QuickstartLearning resourcesRun anywhere, at a fraction of the costWhile the cloud is the typical destination for database migrations, you may need to keep some workloads on premises to meet regulatory or data sovereignty requirements, and you might need to run them on other clouds or at the edge. 
AlloyDB Omni is a downloadable edition of AlloyDB that’s powered by the same engine as the cloud-based service and offers the same functionality, so you can standardize on a single database across all platforms.Blog: AlloyDB Omni, the downloadable edition of AlloyDBQuickstart: Learn how to install AlloyDB OmniVideo: AlloyDB Omni on Google Compute Engine QuickstartPricingHow AlloyDB pricing worksPricing for AlloyDB for PostgreSQL is transparent and predictable with no expensive, proprietary licensing, and no opaque I/O charges.
Compute: vCPUs starting at $0.06608 per vCPU hour; memory starting at $0.0112 per GB hour.
Storage: regional cluster storage starting at $0.0004109 per GB hour; backup storage starting at $0.000137 per GB hour.
Networking: data transfer in is free; data transfer within a region is free; inter-region data transfer starting at $0.02 per GB; outbound traffic into the internet starting at $0.08 per GB; specialty networking services vary.
Get full details on pricing and learn about committed use discounts.*Based on Google Cloud performance tests.PRICING CALCULATOREstimate your monthly AlloyDB costs, including region specific pricing and fees.Estimate your costsCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptGet started with a 30-day AlloyDB free trial instanceStart free trialLearn how to use AlloyDBView trainingLearn about the key benefits of AlloyDBWatch videoCompare AlloyDB to self-managed PostgreSQLLearn moreBuild gen AI applications with AlloyDBLearn moreBusiness CaseLearn from customers using AlloyDB"As our customer base grew, we ran into PostgreSQL vertical scalability limits and problems like CPU, memory, and connection exhaustion. We were thrilled that AlloyDB gave us a drop-in PostgreSQL replacement with much more efficient reads and writes. AlloyDB requires less CPUs to hit our throughput and latency goals, lowering our cost by 40-50% and preparing us for the next phase of customer growth."JP Grace, Chief Technology Officer, EndearRead customer storyRelated content FLUIDEFI nets 3x gains in processing speed with AlloyDBHow Bayer Crop Science unlocked harvest data efficiency with AlloyDBHow Character.ai uses AlloyDB to scale its growing Gen AI platformFeatured benefits and customersDemand more from your database?
AlloyDB offers superior speed, reliability, and ease of use.With a downloadable version, AlloyDB can tackle any project, anywhere.AlloyDB is perfect for building gen AI apps and is equipped with its own impressive AI capabilities.Partners & IntegrationAccelerate your workloads by working with a partnerData integration and migrationBusiness intelligence and analyticsData governance, security, and observabilitySystems integration - globalSystems integration - regionalStreamline the process of moving to, building, and working with AlloyDB. Read about Google Cloud Ready - AlloyDB validated partners and visit the partner directory for a full list of AlloyDB partners.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Analyst_reports.txt b/Analyst_reports.txt new file mode 100644 index 0000000000000000000000000000000000000000..2effd6e69e35f0d0bd291e2b35335496521464f2 --- /dev/null +++ b/Analyst_reports.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/analyst-reports +Date Scraped: 2025-02-23T11:57:37.341Z + +Content: +Analyst reportsLearn what top industry analyst firms are saying about Google Cloud.Contact usMore from Google CloudSolutionsWhitepapersExecutive insightsFilter byFiltersLearn more about Google Cloud’s momentumsearchsendLearn more about Google Cloud’s momentumRead what industry analysts are saying about Google Cloud. The reports listed here are written by third-party industry analysts that cover Google Cloud’s strategy, product portfolio, and differentiation. You can also learn more by reading whitepapers written by Google and the Google community.Google Cloud NGFW Enterprise Certified Secure Test ReportRead Miercom's test results on Google Cloud Next Generation Firewall Enterprise.Google Cloud NGFW Enterprise CyberRisk Validation ReportRead SecureIQlab's test results on Google Cloud Next Generation Firewall Enterprise.Google is a Leader in The Forrester Wave™: AI Foundation Models for Language, Q2 2024Access your complimentary copy of the report to learn why Google was named a Leader.Google is a Leader in the 2024 Gartner® Magic Quadrant™ for Cloud AI Developer Services (CAIDS) Access your complimentary copy of the report to learn why Google was named a Leader.Boston Consulting Group: Any company can become a resilient data championInsights from 700 global business leaders reveal the secrets to data maturity.Google is a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024Access your complimentary copy of the report to learn why Google was named a Leader.Google is a Leader in the 2023 Gartner® Magic Quadrant™ for Cloud Database Management Systems (DBMS)Access your complimentary copy of the report to learn why Google was named a leader and further in vision among vendors.Google Cloud is named a Leader in The Forrester Wave™: Streaming Data Platforms, Q4 2023Access your complimentary copy of the report to learn why Google Cloud was named a leader.Google is named a Leader in The Forrester Wave™: Cloud Data Warehouses, Q2 2023Access your complimentary copy of the report to learn why Google Cloud was named a leader.Forrester names Google Cloud a Leader in The Forrester Wave™: IaaS Platform Native Security Q2 2023Access your complimentary copy of the report to learn why Google Cloud was named a Leader.Forrester names Google Cloud a Leader in The Forrester Wave™: Data Security Platforms Q1 2023Access your complimentary copy of the report to learn why Google Cloud was named a LeaderGoogle is a Leader in the 2023 Gartner® Magic Quadrant™ 
for Enterprise Conversational AI PlatformsAccess your complimentary copy of the report to learn why Google was named a LeaderThe Total Economic Impact™ Of Google Cloud’s Operations SuiteCost Savings And Business Benefits Enabled By Google Cloud’s Operations Suite.The Forrester Wave™: Web Application Firewalls, Q3 2022Report shows how Cloud Armor stacks up against web application firewall (WAF) providers.Gartner® Magic Quadrant™ for Cloud Infrastructure and Platform Services (CIPS) 2022Google is a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud Infrastructure and Platform Services (CIPS).Forrester’s Total Economic Impact of Google Cloud AnthosThis report outlines customers cost savings and business benefits enabled by Anthos.IDC: A Built-In Observability Tool Adoption Blueprint for Public CloudWhitepaper for DevOps, Development, Operations, and SRE Forrester’s Opportunity Snapshot: State Of Public Cloud Migration, 2022Get things right through your cloud migration journeyThe 2022 Gartner® Magic Quadrant ™ for Full Life Cycle API ManagementAccess your complimentary copy of the report to learn why Google Cloud Apigee was named a leader.The Forrester Wave™: Public Cloud Container Platforms, Q1 2022Access your complimentary copy of the report to learn why Google Cloud was named a Leader.Gartner® Magic Quadrant™ for Cloud AI Developer Services Gartner names Google a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud AI Developer Services report.IDC names Google a Leader in the IDC MarketScape: Asia/Pacific (Excluding Japan) AI Life-Cycle Software Tools and Platforms 2022 Vendor AssessmentGet your complimentary copy of the report excerpt to learn why Google was named a Leader.Strategies for Migration to Public Clouds: Lessons Learned from Industry LeadersIDC surveyed 204 US-based IT decision makers with experience in successfully migrating.The Forrester Wave™: Document-Oriented Text Analytics Platforms, Q2 2022Access your complimentary copy of the report to learn why Google was named a Leader The Forrester Wave™: API Management Solutions, 2022Forrester names Google a Leader in the 2022 Forrester Wave™ for API Management Solutions WaveGoogle Cloud wins Frost & Sullivan Technology Innovation Award 2022Solve your complex healthcare challenges with us.Gartner® names Google Cloud a leader in the 2022 Cloud Database Management Systems Magic QuadrantLearn where the cloud database and analytics market stands and why Google Cloud was named a Leader.Forrester Research names Google as a Leader in The Forrester Wave™: AI Infrastructure, Q4 2021Access your complimentary copy of the report to learn why Google was named a Leader in AI Infrastructure.Forrester's Total Economic Impact of Cloud RunThis report outlines cost savings and business benefits enabled by Cloud Run.The Total Economic Impact™ Of Migrating Expensive OSes and Software to Google CloudCost savings and benefits by modernizing with open platforms and managed services in Google Cloud.Modernize With AIOps To Maximize Your ImpactEstablish your competitive edge by leveraging AIOps to address cloud operations challenges.Forrrester's Total Economic Impact of GKEThe report outlines cost savings and business benefits enabled by Google Kubernetes Engine (GKE).Forrester Research names Google Cloud a Leader among Public Cloud Development and Infrastructure PlatformsIn this Forrester Wave™: Public Cloud Development and Infrastructure Platforms report, Forrester evaluated the top six cloud vendors and identified Google as a Leader in this year’s 
report.Forrester Research names Google Cloud a Leader in The Forrester Wave™: Streaming Analytics, Q2 2021Download your copy of this report to explore how Dataflow empowers customers like you to process and enrich data at scale for streaming analytics.Forrester Research names Google Cloud a Leader in The Forrester Wave™: Unstructured Data Security Platforms, Q2 2021 reportFor this report, Forrester evaluated the capabilities of 11 providers that help customers secure and protect their unstructured data, naming Google Cloud a Leader and rated the highest in current offering. ISG Provider Lens™ Quadrant Report for Mainframe services and solutionsDownload your copy of the report to explore the strengths that help modernize mainframes451 Research, S&P Global Market IntelligenceAccess your complimentary copy of the Spanner Market Insight Report which showcases Spanner and how Google Cloud continues to innovate on the product.ESG Technical Validation: Google Cloud for GamingAnalyst report validating the scalable, secure, and reliable gaming infrastructure.The Forrester Wave™: Cloud Data Warehouse, Q1 2021Download your copy of the report to explore the strengths that help empower our customers to create big opportunities with big data.Gartner® names Google a Leader in the 2021 Magic Quadrant for Cloud AI Developer Services reportAccess your complimentary copy of the MQ to learn why Google was named as a Leader in this evaluation for the second year in a row.Gartner® names Google Cloud a leader in the 2020 Cloud Database Management Systems Magic QuadrantAccess your complimentary copy of the Cloud DBMS MQ report to learn why Google Cloud was named a Leader when compared to 17 vendors in the space.IDC MarketScape names Google a Leader in Cloud Data Analytics Platforms in Asia PacificIDC MarketScape evaluated vendors in Asia Pacific and noted that Google Cloud is built for cloud-native agility and outcome-based digital innovation, and is steadily expanding its customer base from digital natives to very large organizations actively working to become Intelligent Enterprises.Google is named a Leader in 2020 Magic Quadrant for Cloud Infrastructure and Platform ServicesGoogle is named a Leader in the Gartner® Magic Quadrant for Cloud Infrastructure and Platform Services for the third year in a row.Forrester Research Names Google Cloud a Leader among Public Cloud Development and Infrastructure Platforms in ANZIn this Forrester Wave™: Public Cloud Development and Infrastructure Platforms Australia/New Zealand, Q3 2020 report, Forrester evaluated seven top cloud vendors and identified Google as a Leader in this report.The Forrester Wave™: Data Management for Analytics, Q1 2020Forrester names Google Cloud a Leader in the 2020 Data Management for Analytics Forrester Wave™. Google Cloud received the highest score possible in categories such as: roadmap, performance, high availability, scalability, data ingestion, data storage, data security, and customer use cases.IDC Research: The Power of the Database for the Cloud: SaaS Developer PerspectivesCurious about how SaaS developers are evaluating database technology? This whitepaper examines SaaS developer perspectives, wants, and behaviors through IDC’s qualitative and quantitative research method.Gartner® 2020 Magic Quadrant for Cloud AI Developer ServicesGartner positions Google Cloud as a Leader in Cloud AI Developer Services.Forrester New Wave™: Computer Vision (CV) Platforms Q4, 2019Forrester positions Google Cloud a Leader in Computer Vision Platforms. 
Google Cloud received the highest score among the vendors evaluated and was also the only provider to receive the highest possible score of “differentiated” across all 10 evaluation criteria.Gartner® 2019 Magic Quadrant for Operational Database Management SystemsGartner names Google a Leader in the 2019 Magic Quadrant for Operational Database Management Systems.The Forrester Wave™: Streaming Analytics, Q3 2019Forrester names Google Cloud a Leader in its evaluation for stream analytics solutions. Forrester identified the most significant providers for evaluation, with Google receiving the highest score possible in 11 different categories.The Forrester Wave™: Cloud Native Continuous Integration Tools, Q3 2019Google comes out on top and named a Leader in the cloud native continuous integration tools market. Forrester identified the 10 most significant providers and evaluated them against 27 key criteria.Google Cloud industries: AI Acceleration among ManufacturersLearn how the pandemic may have sparked an increase in the use of AI among manufacturers.IDC MarketScape names Google a Leader in Vision AI Software Platforms in Asia PacificGet your complimentary copy excerpt of the report to learn why Google was named a leader.Gartner® Critical Capabilities for Cloud AI Developer ServicesDownload your copy of the report to explore Gartner’s analysis of this market.IDC whitepaper: Deploy faster and reduce costs for MySQL and PostgreSQL databasesMigrating your databases to Cloud SQL can lower costs, boost agility, and speed up deployments. Get details in this IDC report.Google is a Leader in the 2023 Gartner® Magic Quadrant™ for Cloud AI Developer ServicesAccess your complimentary copy of the report to learn why Google was named a Leader.The Forrester Wave™: Data Lakehouses, Q2 2024Google is named a leader in The Forrester Wave™: Data Lakehouses Q2 2024 report. The 2024 Gartner® Magic Quadrant™ for Analytics and Business Intelligence PlatformsAccess your complimentary copy of the report to learn why Google was named a leader.Google is named a Leader in The Forrester Wave™: Data Lakehouses, Q2 2024Access your complimentary copy of the report to learn why Google was named a leader.The 2024 Gartner® Magic Quadrant™ for Data Science and Machine Learning PlatformsAccess your complimentary copy of the report to learn why Google was named a Leader.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Analytics_Hub.txt b/Analytics_Hub.txt new file mode 100644 index 0000000000000000000000000000000000000000..35f31461fd9c49da8accbda8b4476926c920de25 --- /dev/null +++ b/Analytics_Hub.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/analytics-hub +Date Scraped: 2025-02-23T12:03:49.209Z + +Content: +Learn more about privacy-centric data sharing with our recent announcement of BigQuery data clean rooms.Jump to Analytics HubAnalytics HubAnalytics Hub is a data exchange that allows you to efficiently and securely exchange data assets across organizations to address challenges of data reliability and cost. 
Curate a library of internal and external assets, including unique datasets like Google Trends, backed by the power of BigQuery.Go to consoleIncrease the ROI of data initiatives by exchanging data, ML models, or other analytics assetsDrive innovation with unique datasets from Google, commercial data providers, or your partnersSave time publishing or subscribing to shared datasets in a secure and privacy-safe environment2:01Analytics Hub in a minute: Learn how to share analytics assets with easeBenefitsSave costs and efficiently share and exchange data Analytics Hub builds on the scalability and flexibility of BigQuery to streamline how you publish, discover, and subscribe to data exchanges and incorporate into your analysis, without the need to move data. Centralized management of data and analytics assetsAnalytics Hub streamlines the accessibility of data and analytics assets. In addition to internal datasets, access public, industry, and Google datasets, like Looker Blocks, or Google Trends data.Privacy-safe, secure data sharing with governanceData shared within Analytics Hub automatically includes in-depth governance, encryption, and security from BigQuery, Cloud KMS, Cloud IAM, VPC Security Controls, and more.Key featuresThe Analytics Hub differenceBuilt on a decade of data sharing in BigQuerySince 2010, BigQuery has supported always-live, in-place data sharing within an organization’s security perimeter (intra-organizational sharing) as well as data sharing across boundaries to external organizations, like vendor or partner ecosystems. Looking at usage over a one week period in September 2022, more than 6,000 organizations shared over 275 petabytes of data in BigQuery, not accounting for intra-organizational sharing. Analytics Hub makes the administration of sharing assets across any boundary even easier and more scalable, while retaining access to key capabilities of BigQuery like its built-in ML, real-time, and geospatial analytics.Privacy-centric sharing with data clean roomsCreate a low-trust environment for you and your partners to collaborate without copying or moving the underlying data right within BigQuery. This allows you to perform privacy-enhancing transformations in BigQuery SQL interfaces and monitor usage to detect privacy threats on shared data. Benefit from BigQuery scale without needing to manage any infrastructure and built-in BI and AI/ML. Explore use cases for data clean rooms.Curated exchanges with subscription management and governanceExchanges are collections of data and analytics assets designed for sharing. Administrators can easily curate an exchange by managing the dataset listings within the exchange. Rich metadata can help subscribers find the data they're looking for, and even leverage analytics assets associated with that data. Exchanges within Analytics Hub are private by default, but granular roles and permissions can be set easily for you to deliver data at scale to exactly the right audiences. Data publishers can now easily view and manage subscriptions for all their shared datasets. Administrators can now monitor the usage of Analytics Hub through Audit Logging and Information Schema, while enforcing VPC Service Controls to securely share data.A sharing model for scalability, security, and flexibility Shared datasets are collections of tables and views in BigQuery defined by a data publisher and make up the unit of cross-project/cross-organizational sharing. 
Data subscribers get an opaque, read-only, linked dataset inside their project and VPC perimeter that they can combine with their own datasets and connect to solutions from Google Cloud or our partners. For example, a retailer might create a single exchange to share demand forecasts to the thousands of vendors in their supply chain—having joined historical sales data with weather, web clickstream, and Google Trends data in their own BigQuery project—then sharing real-time outputs via Analytics Hub. The publisher can add metadata, track subscribers, and see aggregated usage metrics.Search and discovery for internal, public, or commercial datasetsExplore the revamped search experience to browse and quickly find relevant datasets. In addition to easily finding your organization's internal datasets in Analytics Hub, this also includes Google datasets like Google Trends and Earth Engine, commercial datasets from our partners like Crux, and public datasets available in Google Cloud Marketplace.CustomersDelivering business value and innovation through secure data sharingVideoLearn how TelevisaUnivision uses Analytics Hub for secure internal data sharing20:00Blog postHear from Navin Warerkar, Managing Director and US Google Cloud Data & Analytics GTM Lead5-min readBlog postHear from Di Mayze, Global Head of Data and Artificial Intelligence5-min readBlog postHear from Will Freiberg, Chief Executive Officer5-min readSee all customersWe are excited to partner with Google to leverage Analytics Hub and BigQuery to deliver data to over 400 statisticians and data modelers as well as securely sharing data with our partner financial institutions.Kumar Menon, SVP Data Fabric and Decision Science, EquifaxLearn moreWhat's newWant to learn more about Analytics Hub? Blog postSoundCommerce: Power Retail Profitable Growth with Analytics Hub Read the blogVideoWatch Analytics Hub demo and more from Data Cloud Summit 2022Watch videoBlog postSecurely exchange data at scale with Analytics Hub, now available in PreviewRead the blogBlog postInternational Google Trends dataset now in Analytics HubRead the blogDocumentationDocumentationArchitectureIntroduction to Analytics HubWith Analytics Hub, you can discover and access a data library curated by various data providers. Explore architecture for publisher and subscriber workflows.Learn moreGoogle Cloud BasicsManage data exchangesGet started by learning how to create, update, or delete a data exchange and manage Analytics Hub users.Learn moreGoogle Cloud BasicsManage listingsA listing is a reference to a shared dataset that a publisher lists in a data exchange. 
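As an illustration of how an exchange and a listing fit together, here is a hedged Terraform sketch that publishes an existing BigQuery dataset as a listing in a private exchange. The resource and field names follow the google provider's Analytics Hub resources as commonly used and may vary by provider version; the project, location, exchange, listing, and dataset names are placeholders, not values from this page.

# Hypothetical exchange and listing that publish an existing BigQuery dataset.
resource "google_bigquery_analytics_hub_data_exchange" "exchange" {
  project          = "my-project-id"
  location         = "US"
  data_exchange_id = "demand_forecasts"
  display_name     = "Demand forecasts"
  description      = "Curated datasets shared with supply chain partners"
}

resource "google_bigquery_analytics_hub_listing" "listing" {
  project          = "my-project-id"
  location         = "US"
  data_exchange_id = google_bigquery_analytics_hub_data_exchange.exchange.data_exchange_id
  listing_id       = "weekly_forecast"
  display_name     = "Weekly demand forecast"
  description      = "Subscribers receive a read-only linked dataset in their own project"

  bigquery_dataset {
    # Expects the full dataset resource name, for example projects/my-project-id/datasets/forecasts.
    dataset = "projects/my-project-id/datasets/forecasts"
  }
}

Subscribers who are granted access to the listing can then subscribe from their own project, which creates the linked dataset described above.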
Learn how to manage listings as an Analytics Hub publisher.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.PricingSimple and logical pricingPricing for Analytics Hub is based on the underlying pricing structure of BigQuery, with the following distinctions for data publishers and data subscribers.Organizations publishing data into an exchange pay for the storage of that data according to BigQuery storage pricing.Organizations subscribing to data from an exchange only pay for query processing from within their organization, and according to their BigQuery pricing plan (flat-rate or on-demand).For detailed pricing information, please view the BigQuery pricing guide.View pricing detailsPartnersThousands of datasets made available via public and commercial data providersIf you're interested in becoming a data provider or learning about our Data Gravity initiative, please contact Google Cloud sales. See all data analytics partnersExplore public datasets in the marketplaceThis product is in early access. For more information on our product launch stages, see hereTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Analytics_hybrid_and_multicloud_patterns.txt b/Analytics_hybrid_and_multicloud_patterns.txt new file mode 100644 index 0000000000000000000000000000000000000000..6f28eceb4436f754291d372820ebc07eb3fd3c2c --- /dev/null +++ b/Analytics_hybrid_and_multicloud_patterns.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/analytics-hybrid-multicloud-pattern +Date Scraped: 2025-02-23T11:50:05.163Z + +Content: +Home Docs Cloud Architecture Center Send feedback Analytics hybrid and multicloud pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-27 UTC This document discusses that the objective of the analytics hybrid and multicloud pattern is to capitalize on the split between transactional and analytics workloads. In enterprise systems, most workloads fall into these categories: Transactional workloads include interactive applications like sales, financial processing, enterprise resource planning, or communication. Analytics workloads include applications that transform, analyze, refine, or visualize data to aid decision-making processes. Analytics systems obtain their data from transactional systems by either querying APIs or accessing databases. In most enterprises, analytics and transactional systems tend to be separate and loosely coupled. The objective of the analytics hybrid and multicloud pattern is to capitalize on this pre-existing split by running transactional and analytics workloads in two different computing environments. Raw data is first extracted from workloads that are running in the private computing environment and then loaded into Google Cloud, where it's used for analytical processing. Some of the results might then be fed back to transactional systems. The following diagram illustrates conceptually possible architectures by showing potential data pipelines. 
Each path/arrow represents a possible data movement and transformation pipeline option that can be based on ETL or ELT, depending on the available data quality and targeted use case. To move your data into Google Cloud and unlock value from it, use data movement services, a complete suite of data ingestion, integration, and replication services. As shown in the preceding diagram, connecting Google Cloud with on-premises environments and other cloud environments can enable various data analytics use cases, such as data streaming and database backups. To power the foundational transport of a hybrid and multicloud analytics pattern that requires a high volume of data transfer, Cloud Interconnect and Cross-Cloud Interconnect provide dedicated connectivity to on-premises and other cloud providers. Advantages Running analytics workloads in the cloud has several key advantages: Inbound traffic—moving data from your private computing environment or other clouds to Google Cloud—might be free of charge. Analytics workloads often need to process substantial amounts of data and can be bursty, so they're especially well suited to being deployed in a public cloud environment. By dynamically scaling compute resources, you can quickly process large datasets while avoiding upfront investments or having to overprovision computing equipment. Google Cloud provides a rich set of services to manage data throughout its entire lifecycle, ranging from initial acquisition through processing and analyzing to final visualization. Data movement services on Google Cloud provide a complete suite of products to move, integrate, and transform data seamlessly in different ways. Cloud Storage is well suited for building a data lake. Google Cloud helps you to modernize and optimize your data platform to break down data silos. Using a data lakehouse helps to standardize across different storage formats. It can also provide the flexibility, scalability, and agility needed to help ensure that your data generates value for your business, rather than inefficiencies. For more information, see BigLake. BigQuery Omni, provides compute power that runs locally to the storage on AWS or Azure. It also helps you query your own data stored in Amazon Simple Storage Service (Amazon S3) or Azure Blob Storage. This multicloud analytics capability lets data teams break down data silos. For more information about querying data stored outside of BigQuery, see Introduction to external data sources. Best practices To implement the analytics hybrid and multicloud architecture pattern, consider the following general best practices: Use the handover networking pattern to enable the ingestion of data. If analytical results need to be fed back to transactional systems, you might combine both the handover and the gated egress pattern. Use Pub/Sub queues or Cloud Storage buckets to hand over data to Google Cloud from transactional systems that are running in your private computing environment. These queues or buckets can then serve as sources for data-processing pipelines and workloads. To deploy ETL and ELT data pipelines, consider using Cloud Data Fusion or Dataflow depending on your specific use case requirements. Both are fully managed, cloud-first data processing services for building and managing data pipelines. To discover, classify, and protect your valuable data assets, consider using Google Cloud Sensitive Data Protection capabilities, like de-identification techniques. 
These techniques let you mask, encrypt, and replace sensitive data—like personally identifiable information (PII)—using a randomly generated or pre-determined key, where applicable and compliant. When you have existing Hadoop or Spark workloads, consider migrating jobs to Dataproc and migrating existing HDFS data to Cloud Storage. When you're performing an initial data transfer from your private computing environment to Google Cloud, choose the transfer approach that is best suited for your dataset size and available bandwidth. For more information, see Migration to Google Cloud: Transferring your large datasets. If data transfer or exchange between Google Cloud and other clouds is required for the long term with high traffic volume, you should evaluate using Google Cloud Cross-Cloud Interconnect to help you establish high-bandwidth dedicated connectivity between Google Cloud and other cloud service providers (available in certain locations). If encryption is required at the connectivity layer, various options are available based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cross-Cloud Interconnect. Use consistent tooling and processes across environments. In an analytics hybrid scenario, this practice can help increase operational efficiency, although it's not a prerequisite. Previous arrow_back Partitioned multicloud pattern Next Edge hybrid pattern arrow_forward Send feedback \ No newline at end of file diff --git a/Analytics_lakehouse.txt b/Analytics_lakehouse.txt new file mode 100644 index 0000000000000000000000000000000000000000..71eccf479a2867455395c7bdefb2291b1d7d9edd --- /dev/null +++ b/Analytics_lakehouse.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/big-data-analytics/analytics-lakehouse +Date Scraped: 2025-02-23T11:48:50.653Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Analytics lakehouse Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This guide helps you understand, deploy, and use the Analytics lakehouse Jump Start Solution. This solution demonstrates how you can unify data lakes and data warehouses by creating an analytics lakehouse to store, process, analyze, and activate data using a unified data stack. Common use cases for building an analytics lakehouse include the following: Large scale analysis of telemetry data combined with reporting data. Unifying structured and unstructured data analysis. Providing real-time analytics capabilities for a data warehouse. This document is intended for developers who have some background with data analysis and have used a database or data lake to perform an analysis. It assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives Learn how to set up an analytics lakehouse. Secure an analytics lakehouse using a common governance layer. Build dashboards from the data to perform data analysis. Create a machine learning model to predict data values over time. 
Products used The solution uses the following Google Cloud products: BigQuery: A fully managed, highly scalable data warehouse with built-in machine learning capabilities. Dataproc: A fully managed service for data lake modernization, ETL, and secure data science, at scale. Looker Studio: Self-service business intelligence platform that helps you create and share data insights. Dataplex: Centrally discover, manage, monitor, and govern data at scale. Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. BigLake: A storage engine that unifies data warehouses and lakes by enabling BigQuery and open source frameworks like Spark to access data with fine-grained access control. The following Google Cloud products are used to stage data in the solution for first use: Workflows: A fully managed orchestration platform that executes services in a specified order as a workflow. Workflows can combine services, including custom services hosted on Cloud Run or Cloud Run functions, Google Cloud services such as BigQuery, and any HTTP-based API. Architecture The example lakehouse architecture that this solution deploys analyzes an ecommerce dataset to understand a retailer's performance over time. The following diagram shows the architecture of the Google Cloud resources that the solution deploys. Solution flow The architecture represents a common data flow to populate and transform data in an analytics lakehouse architecture: Data lands in Cloud Storage buckets. A data lake is created in Dataplex. Data in the buckets is organized into entities, or tables, in the data lake. Tables in the data lake are immediately available in BigQuery as BigLake tables. Data is transformed by using Dataproc or BigQuery, with open file formats including Apache Iceberg. Data can be secured using policy tags and row access policies. Machine learning can be applied to the tables. Dashboards are created from the data by using Looker Studio to perform more analysis. Cost For an estimate of the cost of the Google Cloud resources that the analytics lakehouse solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create.
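If you prefer to script project setup rather than use the console, a project can also be created with Terraform. This is a minimal sketch, separate from the solution's own Terraform configuration, and it assumes you have an organization and billing account to attach; the project ID, organization ID, and billing account values are placeholders.

# Hypothetical standalone project to host the solution deployment.
resource "google_project" "lakehouse" {
  name            = "analytics-lakehouse"
  project_id      = "analytics-lakehouse-example" # must be globally unique
  org_id          = "123456789012"                # placeholder organization ID
  billing_account = "000000-AAAAAA-BBBBBB"        # placeholder billing account
}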
Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/biglake.admin roles/bigquery.admin roles/compute.admin roles/datalineage.viewer roles/dataplex.admin roles/dataproc.admin roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/resourcemanager.projectIamAdmin roles/servicenetworking.serviceAgent roles/serviceusage.serviceUsageViewer roles/vpcaccess.admin roles/storage.admin roles/workflows.admin Deploy the solution This section guides you through the process of deploying the solution. Note: To ensure the solution deploys successfully, make sure that the organizational policy constraint constraints/compute.requireOsLogin is not enforced in the project you want to deploy to. Go to the Policy details page for your project, and confirm that the Status is Not enforced. To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. 
After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Analytics lakehouse solution. Go to the Analytics lakehouse solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view and use the solution, return to the Solution deployments page in the console. Click the more_vert Actions menu. Select View Looker Studio Dashboard to open a dashboard that's built on top of the sample data that's transformed by using the solution. Select Open BigQuery Editor to run queries and build machine learning (ML) models using the sample data in the solution. Select View Colab to run queries in a notebook environment. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. This is the directory that contains the Terraform configuration files for the solution. 
If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-google-analytics-lakehouse/ Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-google-analytics-lakehouse/ directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" # Google Cloud region where you want to deploy the solution # Example: us-central1 region = "REGION" # Whether or not to enable underlying apis in this solution. # Example: true enable_apis = true # Whether or not to protect Cloud Storage and BigQuery resources from deletion when solution is modified or changed. # Example: false force_destroy = false Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. If it isn't, go to that directory. 
Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! The Terraform output also lists the following additional information that you'll need: The Looker Studio URL of the dashboard that was deployed. The link to open the BigQuery editor for some sample queries. The link to open the Colab tutorial. The following example shows what the output looks like: lookerstudio_report_url = "https://lookerstudio.google.com/reporting/create?c.reportId=79675b4f-9ed8-4ee4-bb35-709b8fd5306a&ds.ds0.datasourceName=vw_ecommerce&ds.ds0.projectId=${var.project_id}&ds.ds0.type=TABLE&ds.ds0.datasetId=gcp_lakehouse_ds&ds.ds0.tableId=view_ecommerce" bigquery_editor_url = "https://console.cloud.google.com/bigquery?project=my-cloud-project&ws=!1m5!1m4!6m3!1smy-cloud-project!2sds_edw!3ssp_sample_queries" lakehouse_colab_url = "https://colab.research.google.com/github/GoogleCloudPlatform/terraform-google-analytics-lakehouse/blob/main/assets/ipynb/exploratory-analysis.ipynb" To view and use the dashboard and to run queries in BigQuery, copy the output URLs from the previous step and open the URLs in new browser tabs. The dashboard, notebook, and BigQuery editors appear in the new tabs. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Customize the solution This section provides information that Terraform developers can use to modify the analytics lakehouse solution in order to meet their own technical and business requirements. The guidance in this section is relevant only if you deploy the solution by using the Terraform CLI. Note: Changing the Terraform code for this solution requires familiarity with the Terraform configuration language. If you modify the Google-provided Terraform configuration, and then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. After you've seen how the solution works with the sample data, you might want to work with your own data. To use your own data, you put it into the Cloud Storage bucket named edw-raw-hash. The hash is a random set of 8 characters that's generated during the deployment. You can change the Terraform code in the following ways: Dataset ID. Change the Terraform code so that when the code creates the BigQuery dataset, it uses the dataset ID that you want to use for your data. Schema. Change the Terraform code so that it creates the BigQuery table ID that you want to use to store your data. This includes the external table schema so that BigQuery can read the data from Cloud Storage. Zone. 
Create the lake zones that match your business need (usually a two or three tier zoning based on data quality and usage). Looker dashboards. Change the Terraform code that creates a Looker dashboard so that the dashboard reflects the data that you're using. PySpark jobs. Change the Terraform code to execute PySpark jobs using Dataproc. The following are common analytics lakehouse objects, showing the Terraform example code in main.tf. BigQuery dataset: The schema where database objects are grouped and stored. resource "google_bigquery_dataset" "ds_edw" { project = module.project-services.project_id dataset_id = "DATASET_PHYSICAL_ID" friendly_name = "DATASET_LOGICAL_NAME" description = "DATASET_DESCRIPTION" location = "REGION" labels = var.labels delete_contents_on_destroy = var.force_destroy } BigQuery table: A database object that represents data that's stored in BigQuery or that represents a data schema that's stored in Cloud Storage. resource "google_bigquery_table" "tbl_edw_taxi" { dataset_id = google_bigquery_dataset.ds_edw.dataset_id table_id = "TABLE_NAME" project = module.project-services.project_id deletion_protection = var.deletion_protection ... } BigQuery stored procedure: A database object that represents one or more SQL statements to be executed when called. This could be to transform data from one table to another or to load data from an external table into a standard table. resource "google_bigquery_routine" "sp_sample_translation_queries" { project = module.project-services.project_id dataset_id = google_bigquery_dataset.ds_edw.dataset_id routine_id = "sp_sample_translation_queries" routine_type = "PROCEDURE" language = "SQL" definition_body = templatefile("${path.module}/assets/sql/sp_sample_translation_queries.sql", { project_id = module.project-services.project_id }) } Cloud Workflows workflow: A Workflows workflow represents a combination of steps to be executed in a specific order. This could be used to set up data or perform data transformations along with other execution steps. resource "google_workflows_workflow" "copy_data" { name = "copy_data" project = module.project-services.project_id region = var.region description = "Copies data and performs project setup" service_account = google_service_account.workflows_sa.email source_contents = templatefile("${path.module}/src/yaml/copy-data.yaml", { public_data_bucket = var.public_data_bucket, textocr_images_bucket = google_storage_bucket.textocr_images_bucket.name, ga4_images_bucket = google_storage_bucket.ga4_images_bucket.name, tables_bucket = google_storage_bucket.tables_bucket.name, dataplex_bucket = google_storage_bucket.dataplex_bucket.name, images_zone_name = google_dataplex_zone.gcp_primary_raw.name, tables_zone_name = google_dataplex_zone.gcp_primary_staging.name, lake_name = google_dataplex_lake.gcp_primary.name }) } To customize the solution, complete the following steps in Cloud Shell: Verify that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse. If it isn't, go to that directory: cd $HOME/cloudshell_open/terraform-google-analytics-lakehouse Open main.tf and make the changes you want to make. For more information about the effects of such customization on reliability, security, performance, cost, and operations, see Design recommendations. Validate and review the Terraform configuration. Provision the resources. 
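For example, if your own data is a set of Parquet files in the edw-raw-hash bucket, the table definition that you change might look something like the following sketch. This is illustrative only: the table ID, file path, and hash suffix are placeholders, and the solution's actual main.tf may structure the table differently (for instance, as a BigLake table that uses a connection).

# Hypothetical external table over your own Parquet files in the raw bucket.
resource "google_bigquery_table" "tbl_edw_sales" {
  project             = module.project-services.project_id
  dataset_id          = google_bigquery_dataset.ds_edw.dataset_id
  table_id            = "sales_raw"
  deletion_protection = var.deletion_protection

  external_data_configuration {
    autodetect    = true
    source_format = "PARQUET"
    # Replace the hash suffix with the one generated for your deployment.
    source_uris   = ["gs://edw-raw-abcd1234/sales/*.parquet"]
  }
}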
Design recommendations This section provides recommendations for using the analytics lakehouse solution to develop an architecture that meets your requirements for security, reliability, cost, and performance. As you begin to scale your lakehouse solution, you have available a number of ways to help improve your query performance and to reduce your total spend. These methods include changing how your data is physically stored, modifying your SQL queries, and changing how your queries are executed using different technologies. To learn more about methods for optimizing your Spark workloads, see Dataproc best practices for production. Note the following: Before you make any design changes, assess the cost impact and consider potential trade-offs with other features. You can assess the cost impact of design changes by using the Google Cloud Pricing Calculator. To implement design changes in the solution, you need expertise in Terraform coding and advanced knowledge of the Google Cloud services that are used in the solution. If you modify the Google-provided Terraform configuration and if you then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For more information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Delete the solution deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete the deployment through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete the deployment using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-google-analytics-lakehouse/. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. 
In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying the solution through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying the solution using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. 
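If the error persists, an alternative to waiting (not part of the documented procedure) is to enable the API explicitly before rerunning the deployment. The following sketch assumes your account has permission to enable services on the project, and that PROJECT_ID is the ID of the project where you're deploying the solution:
# Enable the Compute Engine API named in the error message
gcloud services enable compute.googleapis.com --project=PROJECT_ID
Even after you enable the API, it can take a few minutes for the change to propagate before terraform apply succeeds.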
Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Errors accessing data in BigQuery or Looker Studio There is a provisioning step that runs after the Terraform provisioning steps that loads data to the environment. If you get an error when the data is being loaded into the Looker Studio dashboard, or if there are no objects when you start exploring BigQuery, wait a few minutes and try again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. 
Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/ Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. 
What's next Create a data lake using Dataplex Create BigLake external tables for Apache Iceberg Use Apache Spark on Google Cloud Learn about BigQuery Send feedback \ No newline at end of file diff --git a/Analyze_FHIR_data_in_BigQuery.txt b/Analyze_FHIR_data_in_BigQuery.txt new file mode 100644 index 0000000000000000000000000000000000000000..5e152055b094d12535e17c86d8f591483d9779ff --- /dev/null +++ b/Analyze_FHIR_data_in_BigQuery.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/analyzing-fhir-data-in-bigquery +Date Scraped: 2025-02-23T11:49:19.830Z + +Content: +Home Docs Cloud Architecture Center Send feedback Analyzing FHIR data in BigQuery Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-29 UTC This document explains to researchers, data scientists, and business analysts the processes and considerations for analyzing Fast Healthcare Interoperability Resources (FHIR) data in BigQuery. Specifically, this document focuses on patient resource data that is exported from the FHIR store in the Cloud Healthcare API. This document also steps through a series of queries that demonstrate how FHIR schema data works in a relational format, and shows you how to access these queries for reuse through views. Using BigQuery for analyzing FHIR data The FHIR-specific API of the Cloud Healthcare API is designed for real-time transactional interaction with FHIR data at the level of a single FHIR resource or a collection of FHIR resources. However, the FHIR API is not designed for analytics use cases. For these use cases, we recommend exporting your data from the FHIR API to BigQuery. BigQuery is a serverless, scalable data warehouse that lets you analyze large quantities of data retrospectively or prospectively. Additionally, BigQuery conforms to ANSI:2011 SQL, which makes data accessible to data scientists and business analysts through tools that they typically use, such as Tableau, Looker, or Vertex AI Workbench. In some applications, such as Vertex AI Workbench, you get access through built-in clients, such as the Python Client library for BigQuery. In these cases, the data returned to the application is available through built-in language data structures. Accessing BigQuery You can access BigQuery through the BigQuery web UI in the Google Cloud console, and also with the following tools: The BigQuery command-line tool The BigQuery REST API or client libraries ODBC and JDBC drivers By using these tools, you can integrate BigQuery into almost any application. Working with the FHIR data structure The built-in FHIR standard data structure is complex, with nested and embedded FHIR data types throughout any FHIR resource. These embeddable FHIR data types are referred to as complex data types. Arrays and structures are also referred to as complex data types in relational databases. The built-in FHIR standard data structure works well serialized as XML or JSON files in a document-oriented system, but the structure can be challenging to work with when translated into relational databases. The following screenshot shows a partial view of a FHIR patient resource data type that illustrates the complex nature of the built-in FHIR standard data structure. The preceding screenshot shows the primary components of a FHIR patient resource data type. For example, the cardinality column (indicated in the table as Card.) shows several items that can have zero, one, or more than one entries. 
The Type column shows the Identifier, HumanName, and Address data types, which are examples of complex data types that comprise the patient resource data type. Each of these rows can be recorded multiple times, as an array of structures. Using arrays and structures BigQuery supports arrays and STRUCT data types—nested, repeated data structures—as they are represented in FHIR resources, which makes data conversion from FHIR to BigQuery possible. In BigQuery, an array is an ordered list consisting of zero or more values of the same data type. You can construct arrays of simple data types, such as the INT64 data type, and complex data types, such as the STRUCT data type. The exception is the ARRAY data type, because arrays of arrays are not currently supported. In BigQuery, an array of structures appears as a repeatable record. You can specify nested data, or nested and repeated data, in the BigQuery UI or in a JSON schema file. To specify nested columns, or nested and repeated columns, use the RECORD (STRUCT) data type. The Cloud Healthcare API supports the SQL on FHIR schema in BigQuery. This analytics schema is the default schema on the ExportResources() method and is supported by the FHIR community. BigQuery supports denormalized data. This means that when you store your data, instead of creating a relational schema such as a star or snowflake schema, you can denormalize your data and use nested and repeated columns. Nested and repeated columns maintain relationships between data elements without the performance impact of preserving a relational (normalized) schema. Accessing your data through the UNNEST operator Every FHIR resource in the FHIR API is exported into BigQuery as one row of data. You can think of an array or structure inside any row as an embedded table. You can access data in that "table" in either the SELECT clause or the WHERE clause of your query by flattening the array or structure by using the UNNEST operator. The UNNEST operator takes an array and returns a table with a single row for each element in the array. For more information, see working with arrays in standard SQL. The UNNEST operation doesn't preserve the order of the array elements, but you can reorder the table by using the optional WITH OFFSET clause. This returns an additional column with the OFFSET clause for each array element. You can then use the ORDER BY clause to order the rows by their offset. When joining unnested data, BigQuery uses a correlated CROSS JOIN operation that references the column of arrays from each item in the array with the source table, which is the table that directly precedes the call to UNNEST in the FROM clause. For each row in the source table, the UNNEST operation flattens the array from that row into a set of rows containing the array elements. The correlated CROSS JOIN operation joins this new set of rows with the single row from the source table. Investigating the schema with queries To query FHIR data in BigQuery, it's important to understand the schema that's created through the export process. BigQuery lets you inspect the column structure of every table in the dataset through the INFORMATION_SCHEMA feature, a series of views that display metadata. The remainder of this document refers to the SQL on FHIR schema, which is designed to be accessible for retrieving data. Note: To view tables in BigQuery, we recommend using the preview mode instead of SELECT * commands. For more information, see BigQuery best practices. 
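To illustrate the correlated UNNEST pattern described above, the following minimal sketch (not part of the original guide) uses the bq command-line tool listed earlier as one of the ways to access BigQuery. It assumes that you can query the bigquery-public-data.fhir_synthea.patient public table, which the sample queries in the following sections also use:
# Flatten the identifier array: one output row per (patient, identifier) pair
bq query --use_legacy_sql=false 'SELECT p.id, i.value AS identifier_value, p.birthDate FROM `bigquery-public-data.fhir_synthea.patient` AS p, UNNEST(p.identifier) AS i LIMIT 10'
The comma before UNNEST is the implicit correlated CROSS JOIN described above: each row of the result pairs one patient row with one element of that patient's identifier array.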
The following sample query explores the column details for the patient table in the SQL on FHIR schema. The query references the Synthea Generated Synthetic Data in FHIR public dataset, which hosts over 1 million synthetic patient records generated in the Synthea and FHIR formats. When you query the INFORMATION_SCHEMA.COLUMNS view, the query results contain one row for each column (field) in a table. The following query returns all columns in the patient table: SELECT * FROM `bigquery-public-data.fhir_synthea.INFORMATION_SCHEMA.COLUMNS` WHERE table_name='patient' The following screenshot of the query result shows the identifier data type, and the array within the data type that contains the STRUCT data types. Using the FHIR patient resource in BigQuery The patient medical record number (MRN), a critical piece of information stored in your FHIR data, is used throughout an organization's clinical and operational data systems for all patients. Any method of accessing data for an individual patient or a set of patients must filter for or return the MRN, or do both. The following sample query returns the internal FHIR server identifier to the patient resource itself, including the MRN and the date of birth for all patients. The filter to query on a specific MRN is also included, but is commented out in this example. In this query, you unnest the identifier complex data type twice. You also use correlated CROSS JOIN operations to join unnested data with its source table. The bigquery-public-data.fhir_synthea.patient table in the query was created by using the SQL on FHIR schema version of the FHIR to BigQuery export. SELECT id, i.value as MRN, birthDate FROM `bigquery-public-data.fhir_synthea.patient` #This is a correlated cross join ,UNNEST(identifier) i ,UNNEST(i.type.coding) it WHERE # identifier.type.coding.code it.code = "MR" #uncomment to get data for one patient, this MRN exists #AND i.value = "a55c8c2f-474b-4dbd-9c84-effe5c0aed5b" The output is similar to the following: In the preceding query, the identifier.type.coding.code value set is the FHIR identifier value set that enumerates available identity data types, such as the MRN (MR identity data type), driver's license (DL identity data type), and passport number (PPN identity data type). Because the identifier.type.coding value set is an array, there can be any number of identifiers listed for a patient. But in this case, you want the MRN (MR identity data type). Joining the patient table with other tables Building on the patient table query, you can join the patient table with other tables in this dataset, such as the conditions table. The conditions table is where patient diagnoses are recorded. The following sample query retrieves all entries for the medical condition of hypertension. SELECT abatement.dateTime as abatement_dateTime, assertedDate, category, clinicalStatus, code, onset.dateTime as onset_dateTime, subject.patientid FROM `bigquery-public-data.fhir_synthea.condition` ,UNNEST(code.coding) as code WHERE code.system = 'http://snomed.info/sct' #snomed code for Hypertension AND code.code = '38341003' The output is similar to the following: In the preceding query, you reuse the UNNEST method to flatten the code.coding field. The abatement.dateTime and onset.dateTime code elements in the SELECT statement are aliased because they both end in dateTime, which would result in ambiguous column names in the output of a SELECT statement. 
When you select the Hypertension code, you also need to declare the terminology system that the code comes from—in this case, the SNOMED CT clinical terminology system. As the final step, you use the subject.patientid key to join the condition table with the patient table. This key points to the identifier of the patient resource itself within the FHIR server. Note: The patient resource identifier is different from the patient MRN, which is defined earlier in this document. Bringing the queries together In the following sample query, you use the queries from the two preceding sections and join them by using the WITH clause, while performing some simple calculations. WITH patient AS ( SELECT id as patientid, i.value as MRN, birthDate FROM `bigquery-public-data.fhir_synthea.patient` #This is a correlated cross join ,UNNEST(identifier) i ,UNNEST(i.type.coding) it WHERE # identifier.type.coding.code it.code = "MR" #uncomment to get data for one patient, this MRN exists #AND i.value = "a55c8c2f-474b-4dbd-9c84-effe5c0aed5b" ), condition AS ( SELECT abatement.dateTime as abatement_dateTime, assertedDate, category, clinicalStatus, code, onset.dateTime as onset_dateTime, subject.patientid FROM `bigquery-public-data.fhir_synthea.condition` ,UNNEST(code.coding) as code WHERE code.system = 'http://snomed.info/sct' #snomed code for Hypertension AND code.code = '38341003' ) SELECT patient.patientid, patient.MRN, patient.birthDate as birthDate_string, #current patient age. now - birthdate CAST(DATE_DIFF(CURRENT_DATE(),CAST(patient.birthDate AS DATE),MONTH)/12 AS INT) as patient_current_age_years, CAST(DATE_DIFF(CURRENT_DATE(),CAST(patient.birthDate AS DATE),MONTH) AS INT) as patient_current_age_months, CAST(DATE_DIFF(CURRENT_DATE(),CAST(patient.birthDate AS DATE),DAY) AS INT) as patient_current_age_days, #age at onset. onset date - birthdate DATE_DIFF(CAST(SUBSTR(condition.onset_dateTime,1,10) AS DATE),CAST(patient.birthDate AS DATE),YEAR)as patient_age_at_onset, condition.onset_dateTime, condition.code.code, condition.code.display, condition.code.system FROM patient JOIN condition ON patient.patientid = condition.patientid The output is similar to the following: In the preceding sample query, the WITH clause lets you isolate subqueries into their own defined segments. This approach can help with legibility, which becomes more important as your query grows larger. In this query, you isolate the subqueries for patients and conditions into their own WITH segments, and then join them in the main SELECT segment. You can also apply calculations to raw data. The following sample code, a SELECT statement, shows how to calculate the patient's age at disease onset. DATE_DIFF(CAST(SUBSTR(condition.onset_dateTime,1,10) AS DATE),CAST(patient.birthDate AS DATE),YEAR)as patient_age_at_onset As indicated in the preceding code sample, you can perform a number of operations on the supplied dateTime string, condition.onset_dateTime. First, you select the date component of the string with the SUBSTR function. Then you convert the string into a DATE data type by using the CAST syntax. You also convert the patient.birthDate field to the DATE data type. Finally, you calculate the difference between the two dates by using the DATE_DIFF function. What's next Analyze clinical data using BigQuery and AI Platform Notebooks. Visualizing BigQuery data in a Jupyter notebook. Cloud Healthcare API security. BigQuery access control. Healthcare and life sciences solutions in Google Cloud Marketplace. 
Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Anti_Money_Laundering_AI.txt b/Anti_Money_Laundering_AI.txt new file mode 100644 index 0000000000000000000000000000000000000000..2497cc948a63c6271053dd78532579077ac8db2c --- /dev/null +++ b/Anti_Money_Laundering_AI.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/anti-money-laundering-ai +Date Scraped: 2025-02-23T12:05:00.404Z + +Content: +Jump to Anti Money Laundering AIDetect suspicious, potential money laundering activity faster and more precisely with AI.Contact usFocuses on retail and commercial banking Designed to support model governance requirements in financial servicesExplainable to analysts, risk managers, and auditorsAdopted in production as system of record in multiple jurisdictions for transaction monitoringSupports customer extensible data and featuresCelent names HSBC the Model Risk Manager of the Year 2023 for its AML AI implementationGet the reportBenefitsIncreased risk detectionDetect nearly 2-4x1 more confirmed suspicious activity, strengthening your anti-money laundering program.1As measured by HSBCLower operational costsEliminate over 60% of false positives1 and focus investigation time on high-risk, actionable alerts.1As measured by HSBCRobust governance and defensibilityGain auditable and explainable outputs to support regulatory compliance and internal risk management.Key featuresBring your data out from hiding—and AML risk to the surfaceGenerate ML-powered risk scoresAI-powered transaction monitoring can replace the manually defined, rules-based approach and harness the power of financial institutions’ own data to train advanced machine learning (ML) models to provide a comprehensive view of risk scores.Pinpoint the highest weighted risksTapping into a holistic view of your data, the model directs you to the highest weighted money laundering risks by examining transaction, account, customer relationship, company, and other data to identify patterns, instances, groups, anomalies, and networks for retail and commercial banks.Make explaining risk scores easierEach score provides a breakdown of key risk indicators, enabling business users to easily explain risk scores, expedite the investigation workflow, and facilitate reporting across risk typologies.News"Google Cloud Launches Anti-Money-Laundering Tool for Banks, Betting on the Power of AI", Wall Street JournalWhat's newSee the latest updates about AML AISign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoLearn more about AML AILearn moreNewsGoogle Cloud launches AI-powered AML product for financial institutionsLearn moreBlog postAI in financial services: Applying model risk management guidance in a new worldRead the blogReportApplying the existing AI/ML model risk management guidance in financial servicesRead reportDocumentationDocumentationTutorialSet up AML AISee how to incorporate AML AI into your AML process.Learn moreArchitectureSecurity architecture overviewAML AI is designed with your customers' data in mind. 
It supports security features like data residency and access transparency.Learn moreAPIs & LibrariesFinancial services REST APIAML AI provides a simple JSON HTTP interface that you can call directly.Learn moreAPIs & LibrariesAML input data modelLearn more about the schema and data input requirements for AML AI.Learn moreNot seeing what you’re looking for?View all product documentationPricingAML AI pricing detailsAnti Money Laundering AI has two pricing components:1) AML risk scoring is based on the number of banking customers the service is used for, billed on a daily basis2) Model training and tuning is based on the number of banking customers used in the input datasetsContact sales for full pricing details.PartnersTrusted partner ecosystemOur large ecosystem of trusted industry partners can help financial services institutions solve their complex business challenges.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Contact us about AML AINeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Apigee_API_Management(1).txt b/Apigee_API_Management(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..4ba27e0cd01a76e0b5d08dc28bc4899653f66aa5 --- /dev/null +++ b/Apigee_API_Management(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/apigee +Date Scraped: 2025-02-23T12:04:49.900Z + +Content: +Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Apigee API ManagementManage APIs with unmatched scale, security, and performanceGoogle Cloud’s native API management tool to build, manage, and secure APIs—for any use case, environment, or scale.Get startedContact usExplore Apigee for free in your own sandbox for 60 days.Jumpstart your development with helpful resources.Product highlightsAutomated controls powered by AI/ML to build and secure APIsSupport for REST, SOAP, GraphQL, or gRPC architectural stylesFlexibility to choose pay-as-you-go or subscription pricingProduct overviewLearn Apigee in 5 minutesFeaturesUsing Gemini Code Assist in Apigee API ManagementCreate consistent quality APIs without any specialized expertise. You can create new API specifications with prompts in Apigee plugin, integrated into Cloud Code. In Apigee, Gemini Code Assist considers your prompt and existing artifacts such as security schema or API objects, to create a specification compliant with your enterprise. You can further slash development time by generating mock servers for parallel development and collaborative testing. Lastly, Gemini also assists you in turning your API specifications into proxies or extensions for your AI applications.Explore Gemini Code Assist in ApigeeUniversal catalog for your APIsConsolidate API specifications—built or deployed anywhere—into API hub. Built on open standards, API hub is a universal catalog that allows developers to access APIs and govern them to a consistent quality. Using autogenerated recommendations provided by Gemini, you can create assets like API proxies, integrations, or even plugin extensions that can be deployed to Vertex AI or ChatGPT.Organize your API information in API hubAutomated API Security with ML based abuse detectionAdvanced API Security detects undocumented and unmanaged APIs linked to Google Cloud L7 Load Balancers. 
Advanced API Security also regularly assesses managed APIs, surfaces API proxies that do not meet security standards, and provides recommended actions when issues are detected. ML-powered dashboards accurately identify critical API abuses by finding patterns within the large number of bot alerts, reducing the time to act on important incidents.Get started with Advanced API SecurityAutomated API Security with ML based abuse detectionHigh-performance API proxiesOrchestrate and manage traffic for demanding applications with unparalleled control and reliability. Apigee supports styles like REST, gRPC, SOAP, and GraphQL, providing flexibility to implement any architecture. Using Apigee, you can also proxy internal microservices managed in a service mesh as REST APIs to enhance security. Get started with your first API proxy todayVIDEOBuild your first API proxy with Apigee7:47Hybrid/multicloud deploymentsAchieve the architectural freedom to deploy your APIs anywhere—in your own data center or public cloud of your choice—by configuring Apigee hybrid. Host and manage containerized runtime services in your own Kubernetes cluster for greater agility and interoperability while managing APIs consistently with Apigee.Learn more about Apigee hybridVIDEOChoose your own deployment environment3:13Traffic management and control policiesApigee uses policies on API proxies to program API behavior without writing any code. Policies provided by Apigee allow you to add common functionality like security, rate limiting, transformation, and mediation. You can configure from a robust set of 50+ policies to gain control on behavior, traffic, security, and QoS of every API. You can even write custom scripts and code (such as JavaScript applications) to extend API functionality.Add your first policy to your APIVIDEOHow to add policies to your APIs in Apigee?5:53Developer portals integrated into API life cycleBundle individual APIs or resources into API products—a logical unit that can address a specific use case for a developer. Publish these API products in out-of-the-box integrated developer portals or customized experiences built on Drupal. Drive adoption of your API products with easy onboarding of partners/developers, secure access to your APIs, and engaging experiences without any administrative overhead.Onboard developers to your APIsVIDEOHow to create a developer portal in 5 minutes?5:33Built-in and custom API analytics dashboardsUse built-in dashboards to investigate spikes, improve performance, and identify improvement opportunities by analyzing critical information from your API traffic. Build custom dashboards to analyze API quality and developer engagement to make informed decisions.Start gaining insights from your APIsVIDEOExplore Apigee API Analytics7:13Near real-time API monitoringInvestigate every detail of your API transaction within the console or in any distributed tracing solution by debugging an API proxy flow. Isolate problem areas quickly by monitoring their performance or latency. Use Advanced API Operations to identify anomalous traffic patterns and get notified on unpredictable behaviors without any alert fatigue or overheads.Start monitoring your APIs todayVIDEOHow to debug your APIs?5:42API monetizationCreate rate plans for your API products to monetize your API channels. 
Implementing business models of any complexity by configuring billing, payment model, or revenue share with granular details.Enable API monetization in Apigee todayVIDEOHow to monetize your datasets using Apigee?5:06View all featuresOptions tableProductDescriptionWhen to use this product?ApigeeFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleManaging high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridComprehensive API management for use in any environment—on-premises or any cloudMaintaining and processing API traffic within your own kubernetes clusterCloud EndpointsCustomer managed service to run co-located gateway or private networkingManaging gRPC services with locally hosted gateway for private networkingAPI GatewayFully managed service to package serverless functions as REST APIsBuilding proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.ApigeeDescriptionFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleWhen to use this product?Managing high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridDescriptionComprehensive API management for use in any environment—on-premises or any cloudWhen to use this product?Maintaining and processing API traffic within your own kubernetes clusterCloud EndpointsDescriptionCustomer managed service to run co-located gateway or private networkingWhen to use this product?Managing gRPC services with locally hosted gateway for private networkingAPI GatewayDescriptionFully managed service to package serverless functions as REST APIsWhen to use this product?Building proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.How It WorksApigee provides an abstraction or facade for your backend services by fronting services with API proxies. Using these proxies you can control traffic to your backend services with granular controls like security, rate limiting, quotas, and much more.Build an API proxyGet started with ApigeeCommon UsesCloud-first application developmentUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshTutorials, quickstarts, & labsUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. 
Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshModernize legacy apps and architecturesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceLearning resourcesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceNew business channels and opportunitiesPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesTutorials, quickstarts, & labsPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesUniform hybrid or multicloud operationsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. 
This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsTutorials, quickstarts, & labsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsWeb application and API securityImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsTutorials, quickstarts, & labsImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsPricingHow Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsPricing modelDescriptionPrice (USD)EvaluationExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysFreePay-as-you-goAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyStarting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. 
Additional deployments are available for purchase only in Comprehensive environments$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityStarting at$20 per 1M API callsSubscriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemContact us for a custom quote or any further questionsCheck out this pricing page for further details.How Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsEvaluationDescriptionExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysPrice (USD)FreePay-as-you-goDescriptionAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyPrice (USD)Starting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveDescriptionStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. Additional deployments are available for purchase only in Comprehensive environmentsDescription$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityDescriptionStarting at$20 per 1M API callsSubscriptionDescriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemPrice (USD)Contact us for a custom quote or any further questionsCheck out this pricing page for further details.Pricing calculatorEstimate your monthly costs, including network usage costs.Estimate your costsCustom QuoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptExplore Apigee in your own sandbox Try Apigee for freeStart using Apigee with no commitmentGo to consoleBuild your first API proxy on Apigee todayQuickstartExplore helpful resources and examplesResourcesJoin our Google Cloud Innovator communityBecome an Apigee innovatorFAQExpand allWhy is an API used?APIs enable seamless communication between applications, servers, and users in today's tech-driven world. As their numbers grow, API management has become crucial, encompassing design, development, testing, deployment, governance, security, monitoring, and monetization within the software development life cycle.What is a RESTful API?RESTful APIs adhere to REST (Representational State Transfer) architecture constraints. Following these architecture constraints enables APIs to offer scalability, speed, and data versatility. An API of this kind accesses data by using HTTP requests and is the most common type of API used in modern applications today.What makes Apigee different from other API management solutions?Apigee is Google Cloud’s fully managed API management solution. 
Trusted by enterprises across the globe, Apigee is developer-friendly and provides comprehensive capabilities to support diverse API architectural styles, deployment environments, and use cases. Apigee also provides flexible pricing options for every business to get started and become successful on the platform.Which API protocols does Apigee support?Apigee currently supports REST, SOAP, GraphQL, gRPC, or OpenAPI protocols.Why should you secure your APIs?Companies worldwide rely on application programming interfaces, or APIs, to facilitate digital experiences and unleash the potential energy of their own data and processes. But the proliferation and importance of APIs comes with a risk. As a gateway to a wealth of information and systems, APIs have become a favorite target for hackers. Due to the prevalence of such attacks, there is a need for a proactive approach to secure APIs.What makes an API secure?Apigee helps organizations stay ahead of security threats by offering protection in three layers:1. Robust policies that protect every API transaction from unauthorized users.2. Advanced API security provides automated controls to identify API misconfigurations, malicious bot attacks, and anomalous traffic patterns without overhead and alert fatigue. Web application and API security based on the same technology used by Google to protect its public-facing services against web application vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. Google Cloud WAAP combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats and fraud. Explore resources and examplesResources pageHave questions about Apigee?Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Apigee_API_Management(2).txt b/Apigee_API_Management(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..945fec64045debdbe5708dbeb5c3343dbc73bd83 --- /dev/null +++ b/Apigee_API_Management(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/apigee +Date Scraped: 2025-02-23T12:05:20.817Z + +Content: +Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Apigee API ManagementManage APIs with unmatched scale, security, and performanceGoogle Cloud’s native API management tool to build, manage, and secure APIs—for any use case, environment, or scale.Get startedContact usExplore Apigee for free in your own sandbox for 60 days.Jumpstart your development with helpful resources.Product highlightsAutomated controls powered by AI/ML to build and secure APIsSupport for REST, SOAP, GraphQL, or gRPC architectural stylesFlexibility to choose pay-as-you-go or subscription pricingProduct overviewLearn Apigee in 5 minutesFeaturesUsing Gemini Code Assist in Apigee API ManagementCreate consistent quality APIs without any specialized expertise. You can create new API specifications with prompts in Apigee plugin, integrated into Cloud Code. In Apigee, Gemini Code Assist considers your prompt and existing artifacts such as security schema or API objects, to create a specification compliant with your enterprise. You can further slash development time by generating mock servers for parallel development and collaborative testing. 
Lastly, Gemini also assists you in turning your API specifications into proxies or extensions for your AI applications.Explore Gemini Code Assist in ApigeeUniversal catalog for your APIsConsolidate API specifications—built or deployed anywhere—into API hub. Built on open standards, API hub is a universal catalog that allows developers to access APIs and govern them to a consistent quality. Using autogenerated recommendations provided by Gemini, you can create assets like API proxies, integrations, or even plugin extensions that can be deployed to Vertex AI or ChatGPT.Organize your API information in API hubAutomated API Security with ML based abuse detectionAdvanced API Security detects undocumented and unmanaged APIs linked to Google Cloud L7 Load Balancers. Advanced API Security also regularly assesses managed APIs, surfaces API proxies that do not meet security standards, and provides recommended actions when issues are detected. ML-powered dashboards accurately identify critical API abuses by finding patterns within the large number of bot alerts, reducing the time to act on important incidents.Get started with Advanced API SecurityAutomated API Security with ML based abuse detectionHigh-performance API proxiesOrchestrate and manage traffic for demanding applications with unparalleled control and reliability. Apigee supports styles like REST, gRPC, SOAP, and GraphQL, providing flexibility to implement any architecture. Using Apigee, you can also proxy internal microservices managed in a service mesh as REST APIs to enhance security. Get started with your first API proxy todayVIDEOBuild your first API proxy with Apigee7:47Hybrid/multicloud deploymentsAchieve the architectural freedom to deploy your APIs anywhere—in your own data center or public cloud of your choice—by configuring Apigee hybrid. Host and manage containerized runtime services in your own Kubernetes cluster for greater agility and interoperability while managing APIs consistently with Apigee.Learn more about Apigee hybridVIDEOChoose your own deployment environment3:13Traffic management and control policiesApigee uses policies on API proxies to program API behavior without writing any code. Policies provided by Apigee allow you to add common functionality like security, rate limiting, transformation, and mediation. You can configure from a robust set of 50+ policies to gain control on behavior, traffic, security, and QoS of every API. You can even write custom scripts and code (such as JavaScript applications) to extend API functionality.Add your first policy to your APIVIDEOHow to add policies to your APIs in Apigee?5:53Developer portals integrated into API life cycleBundle individual APIs or resources into API products—a logical unit that can address a specific use case for a developer. Publish these API products in out-of-the-box integrated developer portals or customized experiences built on Drupal. Drive adoption of your API products with easy onboarding of partners/developers, secure access to your APIs, and engaging experiences without any administrative overhead.Onboard developers to your APIsVIDEOHow to create a developer portal in 5 minutes?5:33Built-in and custom API analytics dashboardsUse built-in dashboards to investigate spikes, improve performance, and identify improvement opportunities by analyzing critical information from your API traffic. 
Build custom dashboards to analyze API quality and developer engagement to make informed decisions.Start gaining insights from your APIsVIDEOExplore Apigee API Analytics7:13Near real-time API monitoringInvestigate every detail of your API transaction within the console or in any distributed tracing solution by debugging an API proxy flow. Isolate problem areas quickly by monitoring their performance or latency. Use Advanced API Operations to identify anomalous traffic patterns and get notified on unpredictable behaviors without any alert fatigue or overheads.Start monitoring your APIs todayVIDEOHow to debug your APIs?5:42API monetizationCreate rate plans for your API products to monetize your API channels. Implementing business models of any complexity by configuring billing, payment model, or revenue share with granular details.Enable API monetization in Apigee todayVIDEOHow to monetize your datasets using Apigee?5:06View all featuresOptions tableProductDescriptionWhen to use this product?ApigeeFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleManaging high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridComprehensive API management for use in any environment—on-premises or any cloudMaintaining and processing API traffic within your own kubernetes clusterCloud EndpointsCustomer managed service to run co-located gateway or private networkingManaging gRPC services with locally hosted gateway for private networkingAPI GatewayFully managed service to package serverless functions as REST APIsBuilding proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.ApigeeDescriptionFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleWhen to use this product?Managing high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridDescriptionComprehensive API management for use in any environment—on-premises or any cloudWhen to use this product?Maintaining and processing API traffic within your own kubernetes clusterCloud EndpointsDescriptionCustomer managed service to run co-located gateway or private networkingWhen to use this product?Managing gRPC services with locally hosted gateway for private networkingAPI GatewayDescriptionFully managed service to package serverless functions as REST APIsWhen to use this product?Building proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.How It WorksApigee provides an abstraction or facade for your backend services by fronting services with API proxies. Using these proxies you can control traffic to your backend services with granular controls like security, rate limiting, quotas, and much more.Build an API proxyGet started with ApigeeCommon UsesCloud-first application developmentUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. 
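As the How It Works description above notes, Apigee fronts backend services with API proxies that enforce controls such as security, rate limiting, and quotas. The minimal Python sketch below shows what a client of such a proxied endpoint might look like, including a retry when a rate-limiting policy returns HTTP 429; the hostname, path, apikey query parameter, and retry behavior are illustrative assumptions rather than details taken from this page.

# Minimal client sketch; the proxy hostname, path, "apikey" parameter name,
# and quota behavior are hypothetical and depend on your proxy's policies.
import json
import time
import urllib.error
import urllib.request

API_HOST = "https://api.example.com"   # hypothetical Apigee proxy hostname
API_KEY = "YOUR_CONSUMER_KEY"          # issued when an app is registered against an API product

def call_orders_api(order_id: str, retries: int = 3) -> dict:
    """Call a proxied backend endpoint, backing off if a rate-limit policy trips."""
    url = f"{API_HOST}/v1/orders/{order_id}?apikey={API_KEY}"
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(urllib.request.Request(url)) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code == 429:  # a quota or spike-arrest style policy rejected the call
                wait = int(err.headers.get("Retry-After", 2 ** attempt))
                time.sleep(wait)
                continue
            raise  # other errors (for example, 401 for a bad key) are surfaced to the caller
    raise RuntimeError("rate limit not lifted after retries")

if __name__ == "__main__":
    print(call_orders_api("12345"))

In practice the proxy, not the client, decides which credential type and limits apply, so adjust the sketch to whatever policies are configured on your proxy.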
Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshTutorials, quickstarts, & labsUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshModernize legacy apps and architecturesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceLearning resourcesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceNew business channels and opportunitiesPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesTutorials, quickstarts, & labsPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. 
Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesUniform hybrid or multicloud operationsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsTutorials, quickstarts, & labsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsWeb application and API securityImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsTutorials, quickstarts, & labsImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. 
It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsPricingHow Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsPricing modelDescriptionPrice (USD)EvaluationExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysFreePay-as-you-goAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyStarting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. Additional deployments are available for purchase only in Comprehensive environments$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityStarting at$20 per 1M API callsSubscriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemContact us for a custom quote or any further questionsCheck out this pricing page for further details.How Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsEvaluationDescriptionExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysPrice (USD)FreePay-as-you-goDescriptionAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyPrice (USD)Starting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveDescriptionStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. Additional deployments are available for purchase only in Comprehensive environmentsDescription$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. 
Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityDescriptionStarting at$20 per 1M API callsSubscriptionDescriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemPrice (USD)Contact us for a custom quote or any further questionsCheck out this pricing page for further details.Pricing calculatorEstimate your monthly costs, including network usage costs.Estimate your costsCustom QuoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptExplore Apigee in your own sandbox Try Apigee for freeStart using Apigee with no commitmentGo to consoleBuild your first API proxy on Apigee todayQuickstartExplore helpful resources and examplesResourcesJoin our Google Cloud Innovator communityBecome an Apigee innovatorFAQExpand allWhy is an API used?APIs enable seamless communication between applications, servers, and users in today's tech-driven world. As their numbers grow, API management has become crucial, encompassing design, development, testing, deployment, governance, security, monitoring, and monetization within the software development life cycle.What is a RESTful API?RESTful APIs adhere to REST (Representational State Transfer) architecture constraints. Following these architecture constraints enables APIs to offer scalability, speed, and data versatility. An API of this kind accesses data by using HTTP requests and is the most common type of API used in modern applications today.What makes Apigee different from other API management solutions?Apigee is Google Cloud’s fully managed API management solution. Trusted by enterprises across the globe, Apigee is developer-friendly and provides comprehensive capabilities to support diverse API architectural styles, deployment environments, and use cases. Apigee also provides flexible pricing options for every business to get started and become successful on the platform.Which API protocols does Apigee support?Apigee currently supports REST, SOAP, GraphQL, gRPC, or OpenAPI protocols.Why should you secure your APIs?Companies worldwide rely on application programming interfaces, or APIs, to facilitate digital experiences and unleash the potential energy of their own data and processes. But the proliferation and importance of APIs comes with a risk. As a gateway to a wealth of information and systems, APIs have become a favorite target for hackers. Due to the prevalence of such attacks, there is a need for a proactive approach to secure APIs.What makes an API secure?Apigee helps organizations stay ahead of security threats by offering protection in three layers:1. Robust policies that protect every API transaction from unauthorized users.2. Advanced API security provides automated controls to identify API misconfigurations, malicious bot attacks, and anomalous traffic patterns without overhead and alert fatigue. Web application and API security based on the same technology used by Google to protect its public-facing services against web application vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. Google Cloud WAAP combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats and fraud. 
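To make the three layers above a little more concrete, here is a purely conceptual Python sketch of how per-transaction credential checks, traffic limits, and an automated abuse signal can compose. It is not how Apigee, Advanced API Security, or Cloud Armor are implemented, and every name and threshold in it is hypothetical.

# Conceptual sketch only: illustrates composing layered checks, not any
# Google Cloud implementation. All names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Request:
    api_key: str
    client_id: str
    bot_score: float         # e.g., a score produced by a bot-detection service
    calls_this_minute: int

VALID_KEYS = {"demo-key-123"}   # layer 1: per-transaction credential check
QUOTA_PER_MINUTE = 100          # layer 1: traffic policy
BOT_SCORE_THRESHOLD = 0.8       # layers 2 and 3: automated abuse signal

def admit(req: Request) -> tuple[bool, str]:
    if req.api_key not in VALID_KEYS:
        return False, "reject: unknown API key"
    if req.calls_this_minute > QUOTA_PER_MINUTE:
        return False, "reject: quota exceeded"
    if req.bot_score > BOT_SCORE_THRESHOLD:
        return False, "reject: likely automated abuse"
    return True, "allow"

if __name__ == "__main__":
    print(admit(Request("demo-key-123", "app-1", bot_score=0.1, calls_this_minute=12)))
    print(admit(Request("demo-key-123", "bot-7", bot_score=0.95, calls_this_minute=12)))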
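Separately, for a rough feel of how the pay-as-you-go components in the Pricing section above combine, the short sketch below multiplies out the published starting rates (API calls from $20 per 1M calls up to 50M, environments from $365 per month per region, and proxy deployments at $0.04 per hour per region). Actual charges depend on proxy type, environment type, add-ons, and region, so treat this as illustrative arithmetic and use the pricing calculator or pricing page for real estimates.

# Illustrative arithmetic based on the starting rates shown on this page;
# real pricing varies by proxy type, environment type, add-ons, and region.
HOURS_PER_MONTH = 730  # assumption used for the per-hour deployment charge

def apigee_payg_estimate(million_calls: float, environments: int,
                         extra_proxy_deployments: int) -> float:
    api_calls = million_calls * 20.0      # starting at $20 per 1M calls (up to 50M)
    envs = environments * 365.0           # starting at $365 per month per region
    deployments = extra_proxy_deployments * 0.04 * HOURS_PER_MONTH  # $0.04 per hour per region
    return api_calls + envs + deployments

# Example: 10M calls, one environment, two extra proxy deployments.
print(f"~${apigee_payg_estimate(10, 1, 2):,.2f} per month")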
Explore resources and examplesResources pageHave questions about Apigee?Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Apigee_API_Management.txt b/Apigee_API_Management.txt new file mode 100644 index 0000000000000000000000000000000000000000..4d44c5732693844b05406f9edb466f721f6cc45f --- /dev/null +++ b/Apigee_API_Management.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/apigee +Date Scraped: 2025-02-23T12:01:43.328Z + +Content: +Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Apigee API ManagementManage APIs with unmatched scale, security, and performanceGoogle Cloud’s native API management tool to build, manage, and secure APIs—for any use case, environment, or scale.Get startedContact usExplore Apigee for free in your own sandbox for 60 days.Jumpstart your development with helpful resources.Product highlightsAutomated controls powered by AI/ML to build and secure APIsSupport for REST, SOAP, GraphQL, or gRPC architectural stylesFlexibility to choose pay-as-you-go or subscription pricingProduct overviewLearn Apigee in 5 minutesFeaturesUsing Gemini Code Assist in Apigee API ManagementCreate consistent quality APIs without any specialized expertise. You can create new API specifications with prompts in Apigee plugin, integrated into Cloud Code. In Apigee, Gemini Code Assist considers your prompt and existing artifacts such as security schema or API objects, to create a specification compliant with your enterprise. You can further slash development time by generating mock servers for parallel development and collaborative testing. Lastly, Gemini also assists you in turning your API specifications into proxies or extensions for your AI applications.Explore Gemini Code Assist in ApigeeUniversal catalog for your APIsConsolidate API specifications—built or deployed anywhere—into API hub. Built on open standards, API hub is a universal catalog that allows developers to access APIs and govern them to a consistent quality. Using autogenerated recommendations provided by Gemini, you can create assets like API proxies, integrations, or even plugin extensions that can be deployed to Vertex AI or ChatGPT.Organize your API information in API hubAutomated API Security with ML based abuse detectionAdvanced API Security detects undocumented and unmanaged APIs linked to Google Cloud L7 Load Balancers. Advanced API Security also regularly assesses managed APIs, surfaces API proxies that do not meet security standards, and provides recommended actions when issues are detected. ML-powered dashboards accurately identify critical API abuses by finding patterns within the large number of bot alerts, reducing the time to act on important incidents.Get started with Advanced API SecurityAutomated API Security with ML based abuse detectionHigh-performance API proxiesOrchestrate and manage traffic for demanding applications with unparalleled control and reliability. Apigee supports styles like REST, gRPC, SOAP, and GraphQL, providing flexibility to implement any architecture. Using Apigee, you can also proxy internal microservices managed in a service mesh as REST APIs to enhance security. Get started with your first API proxy todayVIDEOBuild your first API proxy with Apigee7:47Hybrid/multicloud deploymentsAchieve the architectural freedom to deploy your APIs anywhere—in your own data center or public cloud of your choice—by configuring Apigee hybrid. 
Host and manage containerized runtime services in your own Kubernetes cluster for greater agility and interoperability while managing APIs consistently with Apigee.Learn more about Apigee hybridVIDEOChoose your own deployment environment3:13Traffic management and control policiesApigee uses policies on API proxies to program API behavior without writing any code. Policies provided by Apigee allow you to add common functionality like security, rate limiting, transformation, and mediation. You can configure from a robust set of 50+ policies to gain control on behavior, traffic, security, and QoS of every API. You can even write custom scripts and code (such as JavaScript applications) to extend API functionality.Add your first policy to your APIVIDEOHow to add policies to your APIs in Apigee?5:53Developer portals integrated into API life cycleBundle individual APIs or resources into API products—a logical unit that can address a specific use case for a developer. Publish these API products in out-of-the-box integrated developer portals or customized experiences built on Drupal. Drive adoption of your API products with easy onboarding of partners/developers, secure access to your APIs, and engaging experiences without any administrative overhead.Onboard developers to your APIsVIDEOHow to create a developer portal in 5 minutes?5:33Built-in and custom API analytics dashboardsUse built-in dashboards to investigate spikes, improve performance, and identify improvement opportunities by analyzing critical information from your API traffic. Build custom dashboards to analyze API quality and developer engagement to make informed decisions.Start gaining insights from your APIsVIDEOExplore Apigee API Analytics7:13Near real-time API monitoringInvestigate every detail of your API transaction within the console or in any distributed tracing solution by debugging an API proxy flow. Isolate problem areas quickly by monitoring their performance or latency. Use Advanced API Operations to identify anomalous traffic patterns and get notified on unpredictable behaviors without any alert fatigue or overheads.Start monitoring your APIs todayVIDEOHow to debug your APIs?5:42API monetizationCreate rate plans for your API products to monetize your API channels. 
Implementing business models of any complexity by configuring billing, payment model, or revenue share with granular details.Enable API monetization in Apigee todayVIDEOHow to monetize your datasets using Apigee?5:06View all featuresOptions tableProductDescriptionWhen to use this product?ApigeeFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleManaging high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridComprehensive API management for use in any environment—on-premises or any cloudMaintaining and processing API traffic within your own kubernetes clusterCloud EndpointsCustomer managed service to run co-located gateway or private networkingManaging gRPC services with locally hosted gateway for private networkingAPI GatewayFully managed service to package serverless functions as REST APIsBuilding proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.ApigeeDescriptionFully managed comprehensive solution to build, manage, and secure APIs—for any use case or scaleWhen to use this product?Managing high value/volume of APIs with enterprise-grade security and dev engagementApigee hybridDescriptionComprehensive API management for use in any environment—on-premises or any cloudWhen to use this product?Maintaining and processing API traffic within your own kubernetes clusterCloud EndpointsDescriptionCustomer managed service to run co-located gateway or private networkingWhen to use this product?Managing gRPC services with locally hosted gateway for private networkingAPI GatewayDescriptionFully managed service to package serverless functions as REST APIsWhen to use this product?Building proof-of-concepts or entry-level API use cases to package serverless applications running on Google CloudLearn which Google Cloud solution is appropriate for your use case here.How It WorksApigee provides an abstraction or facade for your backend services by fronting services with API proxies. Using these proxies you can control traffic to your backend services with granular controls like security, rate limiting, quotas, and much more.Build an API proxyGet started with ApigeeCommon UsesCloud-first application developmentUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshTutorials, quickstarts, & labsUse APIs to build modern applications and architecturesBuild API proxies that enable your applications to access data and functionality from your Google Cloud back end or any system, service, or application. Scale your applications based on demand using load balancing for your APIs. As you scale, you can unlock greater business agility by decoupling your monolithic application into microservices. 
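The How It Works description above presents Apigee as an abstraction or facade in front of backend services, and the modernization guidance suggests carving a monolith into microservices behind that facade. The toy Python sketch below illustrates only the routing idea: one stable entry point whose path prefixes are gradually remapped from a legacy backend to new services. The prefixes and URLs are invented for illustration; in Apigee this mapping is expressed with proxies and policies rather than application code.

# Toy illustration of the facade/strangler idea: one stable entry point,
# with path prefixes progressively remapped from a monolith to new services.
# All backend URLs are hypothetical.
ROUTES = {
    "/orders":  "https://orders-service.internal",   # already extracted
    "/catalog": "https://catalog-service.internal",  # already extracted
    "":         "https://legacy-monolith.internal",  # default: everything else
}

def resolve_backend(path: str) -> str:
    """Pick the most specific backend for an incoming request path."""
    for prefix in sorted((p for p in ROUTES if p), key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix] + path
    return ROUTES[""] + path

print(resolve_backend("/orders/42"))    # routed to the new orders service
print(resolve_backend("/accounts/7"))   # still served by the monolith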
Reference guide to refactor monolith into microservicesJoin our experts in this Cloud Study Jam for a hands-on experienceBuilding your first API proxy on Apigee API ManagementHow to reduce microservices complexity with Apigee and Service MeshModernize legacy apps and architecturesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceLearning resourcesPackage legacy applications using RESTful interfacesModularize your application components using API proxies as a gateway for your legacy systems and microservices. Build API proxies to create an abstraction layer that insulates client-facing applications from legacy backend services and microservices. Using Apigee, you can reach the scale required for modern cloud applications while securing traffic to legacy services.Learn how LL Bean modernized its IT infrastructureHow APIs help National Bank of Pakistan modernize the banking experienceNew business channels and opportunitiesPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesTutorials, quickstarts, & labsPublish and monetize your API products in developer portalsConsolidate APIs built anywhere into a single place to enable easy access for developers with API hub. Package multiple APIs or methods into API products to drive consumption. Publish these API products in developer portals to onboard partners or customer developers. Define comprehensive rate plans to monetize your API product consumption with any business model.Step-by-step guide to publish your APIsCheck out this quickstart to managing API productsStart building a rate plan to monetize your API productsLearn how to use out-of-the-box integrated portals or custom drupal experiencesUniform hybrid or multicloud operationsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. 
This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsTutorials, quickstarts, & labsOperate in any environment with consistencyUse APIs to expose services that are distributed across any environment—private data centers or public clouds. With Apigee hybrid, you can host containerized runtime services in your own K8S cluster to blend your legacy and existing systems with ease. This way, you can adhere to compliance and governance requirements—while maintaining consistent control over your APIs and the data they expose.Learn more about Apigee hybridStep-by-step instructions to install and configure Apigee hybridJoin us for hands-on experience on installing and managing Apigee for hybrid cloudLearn best practices on managing APIs at a large scale in hybrid/multicloud environmentsWeb application and API securityImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsTutorials, quickstarts, & labsImplement security in multiple layers with advanced controlsSecurity is top priority today. Google Cloud launched WAAP (Web App and API Protection) based on the same technology Google uses to protect its public-facing services against vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. It combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats.Protect your web applications and APIsUse Apigee’s Advanced API security to detect API misconfigurations and malicious botsJoin our Cloud Study Jam for hands-on experience on securing your APIsContact us to get access to WAAP today or for any questionsPricingHow Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsPricing modelDescriptionPrice (USD)EvaluationExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysFreePay-as-you-goAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyStarting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. 
Additional deployments are available for purchase only in Comprehensive environments$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityStarting at$20 per 1M API callsSubscriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemContact us for a custom quote or any further questionsCheck out this pricing page for further details.How Apigee pricing worksApigee offers 3 flexible pricing options—evaluation, pay-as-you-go, and subscription—to suit any API management needsEvaluationDescriptionExperience industry-leading API management capabilities in your own sandbox at no cost for 60 daysPrice (USD)FreePay-as-you-goDescriptionAPI callsCharged on the volume of API calls processed by the API proxy you deployed. Apigee provides the ability to deploy 2 types of proxies:Standard API ProxyExtensible API ProxyPrice (USD)Starting at$20Up to 50M API calls (per 1M API calls)EnvironmentsCharged on the usage of deployment environments per hour per region. Apigee provides access to 3 types of environments:BaseIntermediateComprehensiveDescriptionStarting at$365 per month per regionProxy deploymentsCharged on the number of API proxies/shared flows deployed to an environment. Additional deployments are available for purchase only in Comprehensive environmentsDescription$0.04 per hour per regionAdd-onsChoose and pay for additional capacity or capabilities per your requirements. Using Pay-as-you-go pricing, you can add the following:API AnalyticsAdvanced API SecurityDescriptionStarting at$20 per 1M API callsSubscriptionDescriptionStandardTo start building your enterprise-wide API programEnterpriseFor high volume of APIs and engaging partners/developersEnterprise PlusFor an API-first business with a thriving ecosystemPrice (USD)Contact us for a custom quote or any further questionsCheck out this pricing page for further details.Pricing calculatorEstimate your monthly costs, including network usage costs.Estimate your costsCustom QuoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptExplore Apigee in your own sandbox Try Apigee for freeStart using Apigee with no commitmentGo to consoleBuild your first API proxy on Apigee todayQuickstartExplore helpful resources and examplesResourcesJoin our Google Cloud Innovator communityBecome an Apigee innovatorFAQExpand allWhy is an API used?APIs enable seamless communication between applications, servers, and users in today's tech-driven world. As their numbers grow, API management has become crucial, encompassing design, development, testing, deployment, governance, security, monitoring, and monetization within the software development life cycle.What is a RESTful API?RESTful APIs adhere to REST (Representational State Transfer) architecture constraints. Following these architecture constraints enables APIs to offer scalability, speed, and data versatility. An API of this kind accesses data by using HTTP requests and is the most common type of API used in modern applications today.What makes Apigee different from other API management solutions?Apigee is Google Cloud’s fully managed API management solution. 
Trusted by enterprises across the globe, Apigee is developer-friendly and provides comprehensive capabilities to support diverse API architectural styles, deployment environments, and use cases. Apigee also provides flexible pricing options for every business to get started and become successful on the platform.Which API protocols does Apigee support?Apigee currently supports REST, SOAP, GraphQL, gRPC, and OpenAPI protocols.Why should you secure your APIs?Companies worldwide rely on application programming interfaces, or APIs, to facilitate digital experiences and unleash the potential energy of their own data and processes. But the proliferation and importance of APIs come with risk. As a gateway to a wealth of information and systems, APIs have become a favorite target for hackers. The prevalence of such attacks calls for a proactive approach to API security.What makes an API secure?Apigee helps organizations stay ahead of security threats by offering protection in three layers:1. Robust policies that protect every API transaction from unauthorized users.2. Advanced API Security, which provides automated controls to identify API misconfigurations, malicious bot attacks, and anomalous traffic patterns without overhead and alert fatigue.3. Web application and API security based on the same technology that Google uses to protect its public-facing services against web application vulnerabilities, DDoS attacks, fraudulent bot activity, and API-targeted threats. Google Cloud WAAP combines three solutions (Apigee, Cloud Armor, and reCAPTCHA Enterprise) to provide comprehensive protection against threats and fraud. Explore resources and examplesResources pageHave questions about Apigee? \ No newline at end of file diff --git a/AppSheet_Automation.txt b/AppSheet_Automation.txt new file mode 100644 index 0000000000000000000000000000000000000000..06656067508f57c72ddfd53908748d0326a1337d --- /dev/null +++ b/AppSheet_Automation.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/appsheet/automation +Date Scraped: 2025-02-23T12:07:59.386Z + +Content: +Jump to AppSheet AutomationAppSheet AutomationReclaim time—and talent—with no-code automation.Contact usImprove efficiency by removing unnecessary barriersMaintain IT governance and security with citizen-led developmentQuickly and easily create custom automations and applications with an open cloudBenefitsFocus on impactReclaim time for high-impact work rather than manual tasks.Reduce context switchingBuild automations and applications on a unified platform. Improve collaborationStreamline processes, such as approvals and onboarding, across your organization. Key featuresProcess automation Intelligent document processingLeverage the power of Google Cloud Document AI to automatically extract data from unstructured sources like W-9s and receipts to run processes more efficiently. Data change eventsConfigure bots that detect data changes and work in concert with external sources, such as Google Sheets and Salesforce, to trigger processes and approvals.ModularityCreate automation bots from completely reusable components—events, processes, and tasks.Seamless connectivityConnect directly with APIs, data sources, webhooks, and legacy software, or use data export to export, back up, or sync application data with external platforms.
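The Seamless connectivity feature above notes that AppSheet automations can call out to webhooks and external APIs. As an illustration of the receiving side, the Python sketch below stands up a small HTTP endpoint that accepts a JSON POST and logs it; the path and payload fields are assumptions for this example, since the real payload is whatever you configure in the automation's webhook task.

# Minimal sketch of an HTTP endpoint that could receive a webhook call from an
# AppSheet automation. The path and payload fields are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # For example, react to a data-change event forwarded by the automation.
        print("received event:", payload.get("table"), payload.get("row_id"))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()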
DocumentationFind resources and documentation for AppSheet AutomationQuickstartAppSheet Automation: the essentialsExplore the fundamentals of creating automations with no-code.Learn moreQuickstartCreating a botAutomations begin with the configuration of a bot.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for AppSheetPricingPricingPricing for AppSheet is based on the number of users rather than the number of automations or applications. To learn more, click the link below or start creating AppSheet apps and automations for free.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Try it freeNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/App_Engine(1).txt b/App_Engine(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..c92a16d79fbf1b07abe5a839dbffbddbd9f9d1cd --- /dev/null +++ b/App_Engine(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/appengine +Date Scraped: 2025-02-23T12:09:41.388Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayJump to App EngineApp EngineBuild monolithic server-side rendered websites. App Engine supports popular development languages with a range of developer tools.Go to consoleContact salesFree up your developers with zero server management and zero configuration deploymentsStay agile with support for popular development languages and a range of developer toolsExplore more products in our serverless portfolioLooking to host or build scalable web applications and websites?Try Cloud RunKey featuresKey featuresPopular programming languagesBuild your application in Node.js, Java, Ruby, C#, Go, Python, or PHP.Fully managedA fully managed environment lets you focus on code while App Engine manages infrastructure concerns.DocumentationDocumentationGoogle Cloud BasicsChoosing the right App Engine environmentLearn how to run your applications in App Engine using the flexible environment, standard environment, or both.Learn moreGoogle Cloud BasicsApp Engine standard environmentSee how the App Engine standard environment makes it easy to build and deploy an application that runs reliably even under heavy load and with large amounts of data.Learn moreGoogle Cloud BasicsApp Engine flexible environmentFind out how App Engine allows developers to focus on what they do best: writing code.Learn morePatternLooking for other serverless products?If your desired runtime is not supported by App Engine, take a look at Cloud Run.Learn moreNot seeing what you’re looking for?View all product documentationAll featuresAll featuresPopular languagesBuild your application in Node.js, Java, Ruby, C#, Go, Python, or PHP.Fully managedA fully managed environment lets you focus on code while App Engine manages infrastructure concerns.Powerful application diagnosticsUse Cloud Monitoring and Cloud Logging to monitor the health and performance of your app and Error Reporting to diagnose and fix bugs quickly.Application versioningEasily host different versions of your app, and easily create development, test, staging, and production environments.Application securityHelp safeguard your application by defining access rules with App Engine firewall and leverage managed SSL/TLS certificates by default on your custom domain at no 
additional cost.Services ecosystemTap a growing ecosystem of Google Cloud services from your app including an excellent suite of cloud developer tools.PricingPricingApp Engine has competitive cloud pricing that scales with your app’s usage. There are a few basic components you will see in the App Engine billing model such as standard environment instances, flexible environment instances, and App Engine APIs and services. To get an estimate of your bill, please refer to our pricing calculator.App Engine runs as instances within either the standard environment or the flexible environment.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/App_Engine.txt b/App_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..b921261667cc8ca65a262cd1719b581a39444860 --- /dev/null +++ b/App_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/appengine +Date Scraped: 2025-02-23T12:02:35.371Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayJump to App EngineApp EngineBuild monolithic server-side rendered websites. App Engine supports popular development languages with a range of developer tools.Go to consoleContact salesFree up your developers with zero server management and zero configuration deploymentsStay agile with support for popular development languages and a range of developer toolsExplore more products in our serverless portfolioLooking to host or build scalable web applications and websites?Try Cloud RunKey featuresKey featuresPopular programming languagesBuild your application in Node.js, Java, Ruby, C#, Go, Python, or PHP.Fully managedA fully managed environment lets you focus on code while App Engine manages infrastructure concerns.DocumentationDocumentationGoogle Cloud BasicsChoosing the right App Engine environmentLearn how to run your applications in App Engine using the flexible environment, standard environment, or both.Learn moreGoogle Cloud BasicsApp Engine standard environmentSee how the App Engine standard environment makes it easy to build and deploy an application that runs reliably even under heavy load and with large amounts of data.Learn moreGoogle Cloud BasicsApp Engine flexible environmentFind out how App Engine allows developers to focus on what they do best: writing code.Learn morePatternLooking for other serverless products?If your desired runtime is not supported by App Engine, take a look at Cloud Run.Learn moreNot seeing what you’re looking for?View all product documentationAll featuresAll featuresPopular languagesBuild your application in Node.js, Java, Ruby, C#, Go, Python, or PHP.Fully managedA fully managed environment lets you focus on code while App Engine manages infrastructure concerns.Powerful application diagnosticsUse Cloud Monitoring and Cloud Logging to monitor the health and performance of your app and Error Reporting to diagnose and fix bugs quickly.Application versioningEasily host different versions of your app, and easily create development, test, staging, and production environments.Application securityHelp safeguard your application by defining access rules with App Engine firewall and leverage managed SSL/TLS certificates by default on your custom domain at no additional 
cost.Services ecosystemTap a growing ecosystem of Google Cloud services from your app including an excellent suite of cloud developer tools.PricingPricingApp Engine has competitive cloud pricing that scales with your app’s usage. There are a few basic components you will see in the App Engine billing model such as standard environment instances, flexible environment instances, and App Engine APIs and services. To get an estimate of your bill, please refer to our pricing calculator.App Engine runs as instances within either the standard environment or the flexible environment.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Application_Integration.txt b/Application_Integration.txt new file mode 100644 index 0000000000000000000000000000000000000000..6e1990b73784ab23f5641aa6509a55bd4180cc35 --- /dev/null +++ b/Application_Integration.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/application-integration +Date Scraped: 2025-02-23T12:05:17.637Z + +Content: +Explore how you can use Gemini Code Assist in Application Integration to build automations for your use case, without toil.Application IntegrationConnect your applications visually, without codeIntegration Platform as a Service (iPaaS) to automate business processes by connecting any application with point-and-click configurationsGo to consoleContact usExplore Gemini in Application Integration by joining our trusted tester program. Jumpstart your development with our quickstarts and use cases.Product highlightsBuild integrations with natural language, no coding requiredReady-to-use connectors for Google or third-party appsGet started for free with no financial commitment What is Application Integration?FeaturesUsing Gemini Code Assist in Application IntegrationAutomate your SaaS workflows with just clicks or prompts, such as "Update a case in Salesforce when a new issue is created in JIRA." Based on the prompt and existing enterprise context, such as APIs or applications, Gemini suggests multiple flows tailored for your use case. Gemini automatically creates variables and pre-configures tasks, making the integration ready for immediate use. Gemini doesn't just respond to prompts, it intelligently analyzes your flow and proactively suggests optimizations, such as replacing connectors or fine-tuning REST endpoint calls.Using Gemini Code Assist in Application IntegrationPlug and play connectorsOur 90+ pre-built connectors make it easy to connect to any data source, whether it's a Google Cloud service (for example, BigQuery, Pub/Sub) or another business application (such as Salesforce, MongoDB, MySQL). With our connectors, you can quickly and easily connect to a growing pool of applications and systems without the need for protocol-specific knowledge or the use of custom code.Learn how to provision and configure connectorsVisual integration designerIntuitive drag-and-drop interface allows anyone to build workflows quickly and easily without the need for complex coding or manual processes. With the visual integration designer, anyone can simply drag-and-drop individual control elements (such as edges, forks, and joins) to build custom integration patterns of any complexity. 
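Because Application Integration's event-driven triggers (described in the next feature) include a Cloud Pub/Sub trigger, an integration can start whenever a message lands on a topic. The sketch below shows the producing side, assuming the google-cloud-pubsub client library is installed and credentials are configured; the project, topic, and payload fields are placeholders.

# Sketch of publishing an event that a Pub/Sub-triggered integration could consume.
# Assumes `pip install google-cloud-pubsub`, configured credentials, and an
# existing topic; project, topic, and payload fields are placeholders.
import json
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"        # placeholder
TOPIC_ID = "new-support-ticket"  # placeholder topic watched by the integration

def publish_ticket_event(ticket_id: str, priority: str) -> str:
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    data = json.dumps({"ticket_id": ticket_id, "priority": priority}).encode("utf-8")
    # Attributes ride alongside the payload and can be used for routing or filtering.
    future = publisher.publish(topic_path, data, source="helpdesk")
    return future.result()  # message ID once the publish is acknowledged

if __name__ == "__main__":
    print("published message", publish_ticket_event("TCK-1042", "high"))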
Learn how to use the visual integration designerAutomated event-driven triggersAs an entry point to integrations—the event bound to the trigger initiates the execution of tasks in the integration. Application Integration offers a wide range of triggers out-of-the-box, including API, Cloud Pub/Sub, Schedule, Salesforce, and Cloud Scheduler triggers. Associate one or more triggers to different tasks, to automate your integrations. Learn more about event-driven triggersData transformationsTransform and modify the data in your workflow with accuracy and efficiency. Intuitive drag-and-drop interface in the Data Mapping Editor makes it easy to map data fields within your integration, eliminating the need for coding. Comprehensive mapping functions enable you to reduce development time and address tailored business requirements. Learn how to build data mapping across applicationsIntegration performance and usage monitoring Proactively detect issues and ensure smooth operations with pre-built monitoring dashboards and detailed execution log messages. Monitoring dashboards provide a graphical overview of integration performance and usage, customizable by different attributes or timeframes. Execution log messages provide valuable insights into the status of each step in an integration, or to troubleshoot a failure.Learn how to proactively identify potential problems Versatile integration tasksUse tasks to define individual actions within a workflow to facilitate seamless data transfer, communication, and synchronization between applications. Automate and streamline business processes using tasks like data mapping, API call integrations, REST API integrations, email notifications, control flow, approval, connectors, and much more.Learn more about ready-to-use integration tasksView all featuresIntegration servicesProduct nameDescriptionDocumentationCategoryApplication IntegrationConnect to third-party applications and enable data consistency without codeRetrieve API payload and send an emailApplications and services integrationWorkflowsCombine Google Cloud services and APIs to build applications and data pipelinesCreate a workflowApplications and services integrationEventarcBuild an event-driven architecture that can connect any serviceReceive direct events from Cloud StorageApplications and services integrationDataflowUnified stream and batch data processing that is serverless, fast, and cost-effectiveCreate a Dataflow pipeline using PythonData integrationPub/SubIngest data without any transformation to get raw data into Google CloudPublish and receive messages in Pub/Sub by using a client libraryData ingestionApplication IntegrationDescriptionConnect to third-party applications and enable data consistency without codeDocumentationRetrieve API payload and send an emailCategoryApplications and services integrationWorkflowsDescriptionCombine Google Cloud services and APIs to build applications and data pipelinesDocumentationCreate a workflowCategoryApplications and services integrationEventarcDescriptionBuild an event-driven architecture that can connect any serviceDocumentationReceive direct events from Cloud StorageCategoryApplications and services integrationDataflowDescriptionUnified stream and batch data processing that is serverless, fast, and cost-effectiveDocumentationCreate a Dataflow pipeline using PythonCategoryData integrationPub/SubDescriptionIngest data without any transformation to get raw data into Google CloudDocumentationPublish and receive messages in Pub/Sub by using a client 
libraryCategoryData ingestionHow It WorksApplication Integration offers comprehensive tools to connect applications (Google Cloud and others). With an intuitive drag-and-drop designer, out-of-the-box triggers, and plug-and-play connectors, you can create integrations to automate business processes.Explore an exampleCommon UsesBusiness process or workflow automationAutomate sequences of tasks in line with business operationsStreamline business processes by mapping workflows in areas like lead management, procurement, or supply chain management. Initiate and automate task sequences based on a schedule or on a defined external event. Leverage built-in tools to achieve complex configurations like looping, parallel execution, conditional routing, manual approvals, or much more. Map workflows in visual designerStore Salesforce opportunity details in Cloud SQLExplore connectors for business process automationData Mapping taskLearning resourcesAutomate sequences of tasks in line with business operationsStreamline business processes by mapping workflows in areas like lead management, procurement, or supply chain management. Initiate and automate task sequences based on a schedule or on a defined external event. Leverage built-in tools to achieve complex configurations like looping, parallel execution, conditional routing, manual approvals, or much more. Map workflows in visual designerStore Salesforce opportunity details in Cloud SQLExplore connectors for business process automationData Mapping task360 degree view of your customer Centralize customer data spread across diverse sourcesSynchronize data across CRM systems (for example, Salesforce), marketing automation platforms (for example, HubSpot), customer support tools (for example, Zendesk), and much more to maintain a consistent, up-to-date view of customer information, enhancing customer relationships and communication. Map and transform data using predefined tasks to ensure ease of use, accuracy, and consistency. Learning resourcesCentralize customer data spread across diverse sourcesSynchronize data across CRM systems (for example, Salesforce), marketing automation platforms (for example, HubSpot), customer support tools (for example, Zendesk), and much more to maintain a consistent, up-to-date view of customer information, enhancing customer relationships and communication. Map and transform data using predefined tasks to ensure ease of use, accuracy, and consistency. Cloud-first application developmentStreamline access to siloed data and capabilitiesConnect applications with external APIs, data sources, and third-party services to enrich application functionality, access external data, or leverage specialized services, such as payment processing, raising support tickets, or much more. Leverage APIs built in Apigee or events from Pub/Sub to orchestrate communication and data exchange between different application components.Retrieve API payload and send an email Listen to Cloud Pub/Sub topic and send an emailCloud Function taskLearning resourcesStreamline access to siloed data and capabilitiesConnect applications with external APIs, data sources, and third-party services to enrich application functionality, access external data, or leverage specialized services, such as payment processing, raising support tickets, or much more. 
Leverage APIs built in Apigee or events from Pub/Sub to orchestrate communication and data exchange between different application components.Retrieve API payload and send an email Listen to Cloud Pub/Sub topic and send an emailCloud Function taskPricingHow Application Integration pricing worksThe pricing model is based on the number of integrations executed, infrastructure used to process messages, and data processed. Pricing modelDescriptionPrice (USD)Free tierUp to 400 integration executions and 20 GiB data processed is free per month along with two connection nodesFree tier is limited to integrations with Google Cloud services onlyFreePay-as-you-goIntegration executionsNumber of integrations processed, whether they are successful or not$0.5For every 1,000 executionsConnection nodes (third-party application)Number of connection nodes used every minute (unit of infrastructure that processes messages to third-party target systems)Billed for a minimum of one minStarting at$0.7per node, per hourConnection nodes (Google Cloud applications)Number of connection nodes used every minute (unit of infrastructure that processes messages to Google Cloud target systems)Billed for a minimum of one min$0.35per node, per hourData processed(Sum of total number of bytes received and sent through Application Integration and connections) / 2^10$10per GiBNetworking usageData transfer and other services when moving, copying, accessing data in Cloud Storage or between Google Cloud servicesCheck your network product for pricing informationSubscriptionStandardMaintain predictable costs while building integrations at scaleContact us for a custom quote or any further questionsPricing details for Application Integration and Integration ConnectorsHow Application Integration pricing worksThe pricing model is based on the number of integrations executed, infrastructure used to process messages, and data processed. 
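To make the pay-as-you-go components above concrete, the sketch below multiplies out the listed rates: $0.50 per 1,000 executions, $0.70 and $0.35 per connection-node hour for third-party and Google Cloud targets, and $10 per GiB processed. It ignores the free tier, networking charges, and subscription pricing, so treat it as illustrative arithmetic and use the pricing calculator for real estimates.

# Illustrative arithmetic using the pay-as-you-go rates listed above; ignores the
# free tier (400 executions, 20 GiB, and 2 nodes for Google Cloud-only integrations),
# networking charges, and subscription pricing.
def app_integration_estimate(executions: int,
                             third_party_node_hours: float,
                             gcp_node_hours: float,
                             gib_processed: float) -> float:
    cost = (executions / 1000) * 0.50      # $0.5 per 1,000 executions
    cost += third_party_node_hours * 0.70  # starting at $0.7 per node, per hour
    cost += gcp_node_hours * 0.35          # $0.35 per node, per hour
    cost += gib_processed * 10.0           # $10 per GiB processed
    return cost

# Example: 50,000 executions, one third-party connection node running all month
# (~730 hours), no Google Cloud nodes, 5 GiB processed.
print(f"~${app_integration_estimate(50_000, 730, 0, 5):,.2f} per month")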
Pricing calculatorEstimate your monthly costs, including network usage costs.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptBuild your first integrationGet startedTry the sample integrationQuickstartGet started with your first integrationQuickstartsAccelerate your delivery with samplesExplore code samplesAsk questions or connect with our community Ask an expert \ No newline at end of file diff --git a/Application_Migration(1).txt b/Application_Migration(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..11c1da32561ad80fa77d579f831e1daf73fdb430 --- /dev/null +++ b/Application_Migration(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/application-migration +Date Scraped: 2025-02-23T12:06:42.952Z + +Content: +Application migrationMigrating to Google Cloud gives your applications the performance, scalability, reliability, and security they need to deliver the high-quality experience that customers expect in today's digital world.Try Migration CenterFree migration cost assessmentLearn more about migrating your applications to Google Cloud1:45BenefitsMigration is about more than your applications. It's about your business.Improve customer experiencesWith things like price-performance optimized VM families (Tau), automatic sizing recommendations, easy scalability, and custom machine types, every application is empowered to deliver a world-class experience. Migrate quicker and easier than you thought possibleWhether you're moving 1 or 1,000 applications, we've got the automated tools and migration expertise to make sure everything is fast, easy, and low risk.We're your cloud migration partnerWith a true partnership from beginning to end, our migration experts focus on building and executing the right migration plan for you, and delivering the technological and business outcomes that matter to you.Looking to migrate your applications quickly, easily, and efficiently?
Google Cloud Migration Center is the unified platform that helps you accelerate your end-to-end cloud journey from your current on-premises or cloud environments to Google Cloud. With features like cloud spend estimation, asset discovery of your current environment, and a variety of tooling for different migration scenarios, Migration Center provides you with what you need for your migration. Start using it today, or watch this short video to learn more. Craft the migration journey that works best for youMigrate workloads to the public cloud: an essential guide & checklistRead the guide and checklistHow to use Migration Center to get your applications into Google CloudExplore Migration Center demoUnderstand your cloud capabilities and identify new competencies for migration successStart assessmentCustomersCustomers see tangible gains when migrating applications to Google CloudCase studyReshaping Flipkart’s technological landscape with a mammoth cloud migration5-min readCase studyMajor League Baseball migrates to Google Cloud to drive fan engagement, increase operational efficiency.10-min readBlog postGoogle's chip design team moves to Google Cloud, increases daily job submissions by 170%. 5-min readCase studyViant partners with Slalom to migrate a data center with 600 VMs and 200+ TB of data to Google Cloud.9-min readVideoCardinal Health successfully performed a large-scale app migration to Google Cloud.49:07Case studySee how Loblaw reduced compute resources by one-third with Google Cloud.7-min readSee all customersPartnersFind the right experts to drive your success with cloud migration servicesExpand allCloud migration services at Google CloudFull-service migration partnersSee all partnersRelated servicesMigrate your applications to Google CloudPick the right migration tools and strategies for your unique set of applications and workloads. Migration CenterReduce complexity, time, and cost with Migration Center's centralized, integrated migration and modernization experience.Google Cloud VMware EngineLift and shift VMware workloads to a VMware SDDC in Google Cloud, providing a fast, easy migration path for VMware.Migrate to Virtual MachinesMigrate applications from on-premises or other clouds into Compute Engine with speed and simplicity, plus built-in testing, rightsizing, and rollback.Migrate to ContainersModernize traditional applications away from virtual machines and into native containers on GKE. Rapid Migration and Modernization Program (RaMP)Our holistic, end-to-end migration program to help you simplify and accelerate your success, starting with a free assessment of your IT landscape.Architecture CenterDiscover migration reference architectures, guidance, and best practices for building or migrating your workloads on Google Cloud.*The Migrate to Virtual Machines and Migrate to Containers products/services are at no charge; consumption of Google Cloud resources will be billed at standard rates.Take the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Application_Modernization.txt b/Application_Modernization.txt new file mode 100644 index 0000000000000000000000000000000000000000..bd844e8233da18647a0418e24fea1a9bfcdd4502 --- /dev/null +++ b/Application_Modernization.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/camp +Date Scraped: 2025-02-23T11:58:08.720Z + +Content: +Cloud App Modernization Program (CAMP)CAMP has been designed as an end-to-end framework to help guide organizations through their modernization journey by assessing where they are today and identifying their most effective path forward.
Download CAMP WhitepaperBook a consultationCAMP overviewWatch a quick intro to CAMPSolutionsCommon patterns for modernizationApp Mod efforts fall into three common categories, and CAMP provides customers with best practices and tooling for each, in addition to assessments that guide them on where to start.Move and improveModernize traditional applicationsAnalyze, categorize, and get started with cloud migration on traditional workloads.Explore modernization options for traditional applications like Java and .NET.Migrate from PaaS: Cloud Foundry, OpenShiftMove your containers into Google's managed container services for a more stable and flexible experience.Learn about tools and migration options for modernizing your existing platforms.Unlock legacy with ApigeeAdapt to changing market needs while leveraging legacy systems.Learn how to connect legacy and modern services seamlessly.Migrate from mainframeMove on from proprietary mainframes and innovate with cloud-native services. Learn about Google's automated tools and prescriptive guidance for moving to the cloud.Build and operateModernize software delivery: Secure software supply chain, CI/CD best practices, Developer productivityOptimize your application development environment, improve your software delivery with modern CI/CD, and secure your software supply chain.Learn about modern and secure cloud-based software development environments. DevOps best practicesAchieve elite performance in your software development and delivery.Learn about industry best practices that can help improve your technical and cultural capabilities to drive improved performance. SRE principles Strike the balance between speed and reliability with proven SRE principles.Learn how Google Cloud helps you implement SRE principles through tooling, professional services, and other resources.Day 2 operations for GKESimplify your GKE platform operations and build an effective strategy for managing and monitoring activities.Learn how to create a unified approach for managing all of your GKE clusters for reduced risk and increased efficiency. FinOps and optimization of GKEContinuously deliver business value by running reliable, performant, and cost-efficient applications on GKE.Learn how to make signal-driven decisions and scale your GKE clusters based on actual usage and industry best practices.Cloud and beyondRun applications at the edgeUse Google's hardware-agnostic edge solution to deploy and govern consistent, localized, and low-latency applications.Learn how to enhance your customer experience and employee productivity using an edge strategy.Architect for multicloudManage workloads across multiple clouds with a consistent platform. Learn how Google allows for a flexible approach to multicloud environments for container management and application delivery.Go serverlessEasily build enterprise-grade applications with Google Cloud serverless technologies.Learn how to use tools like Cloud Build, Cloud Run, Cloud Functions, and more to speed up your application delivery. API management Leverage API life cycle management to support new business growth and empower your ecosystem.Learn about usage of APIs as a power tool for a flexible and expandable modern application environment. Guided assessments DevOps best practicesDORA assessmentCompare your DevOps capabilities to those of the industry based on the DORA research and find out how to improve. Learn about the DORA research and contact us to see if a DevOps assessment is right for you.
Modernizing traditional apps mFit assessmentPlatform owners can leverage our fit assessment tool to evaluate large VMware workloads and determine if they are good candidates for containerization. Learn about mFit and Google Cloud's container migration options, and schedule a consultation with us to review your strategy. CAST assessmentThis code-level analysis of your traditional applications allows you to identify your best per-application modernization approach. Learn more about CAST and contact us to see if this is the right assessment for you.Modernizing mainframe platformsMainframe application portfolio assessment (MAPA)This assessment is designed to help customers build a financial and strategic plan for their migration based on the complexity, risk, and cost of each application.Learn about this survey-based, application-level assessment and contact us to start your mainframe migration today. Feeling inspired? Let’s solve your challenges togetherAre you ready to learn about the latest application development trends? Contact usCloud on Air: Watch this webinar to learn how you can build enterprise-ready serverless applications. Watch the webinarCustomersCustomer storiesVideoHow British Telecom is leveraging loosely coupled architecture to transform their businessVideo (2:35)VideoHow CoreLogic is replatforming 10,000+ Cloud Foundry app-instances with GoogleVideo (16:16)VideoHow Schlumberger is using DORA recommendations to improve their software delivery and monitoring. Video (2:10)Case studyGordon Food Services goes from four to 2,920 deployments a year using GKE 5-min readSee all customersPartnersOur partnersTo help facilitate our customers' modernization journey, Google works closely with a set of experienced partners globally. See all partnersTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all solutions \ No newline at end of file diff --git a/Architect_for_Multicloud.txt b/Architect_for_Multicloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..7029429fc04849b56910b2225e71b4754ee5e09b --- /dev/null +++ b/Architect_for_Multicloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/architect-multicloud +Date Scraped: 2025-02-23T11:58:32.360Z + +Content: +Architect for multicloudUnderstand various multicloud patterns and practical approaches to implementing them using Anthos.Contact usBenefitsGet hands-on as you explore multicloudPersona-based user journeysThe workshop focuses on user journeys tailored to specific roles.Applications and services across cloudsLeveraging a service mesh to manage services across clusters and clouds.Opinionated automationDeep dive on an opinionated way for automating and deploying infrastructure resources across clouds.Key featuresGet more out of the workshopInsights into your workloadsHighlighting key SRE golden signals.Keep your applications and services runningSee how deploying workloads across clusters can help maximize reliability.Ready to get started? Contact usTake the next stepTell us what you’re solving for.
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Architect_your_workloads.txt b/Architect_your_workloads.txt new file mode 100644 index 0000000000000000000000000000000000000000..4dd4bd2dad68ec3038e878bfb02dac15c22333e2 --- /dev/null +++ b/Architect_your_workloads.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-across-regions/architect-workloads +Date Scraped: 2025-02-23T11:52:20.556Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architect your workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-24 UTC This document helps you design workloads in a way that minimizes the impact of a future expansion and migration of workloads to other Google Cloud regions, or the impact of a migration of workloads across regions. This document is useful if you're planning to do any of these activities or if you're evaluating the opportunity to do so in the future and want to explore what the work might look like. This document is part of a series: Get started Design resilient single-region environments on Google Cloud Architect your workloads (this document) Prepare data and batch workloads for the migration The guidance in this series is also useful if you didn't plan for a migration across regions or for an expansion to multiple regions in advance. In this case, you might need to spend additional effort to prepare your infrastructure, workloads, and data for the migration across regions and for the expansion to multiple regions. This document helps you to do the following: Prepare your landing zone Prepare your workloads for a migration across regions Prepare your computing resources Prepare your data storage resources Prepare for decommissioning the source environment Prepare your landing zone This section focuses on the considerations that you must make to extend a landing zone (also called a cloud foundation) when migrating across regions. Before you can migrate any workload, you must already have a landing zone in place; the first step is to re-evaluate the different aspects of that existing landing zone. Although you might already have a landing zone in place for the region that's hosting the workloads, the landing zone might not support the deployment of workloads in a different region, so it must be extended to the target region. Some landing zones that are already in place might have a design that can support another region without significant rework to the landing zone (for example, identity and access management or resource management). However, additional factors such as network or data might require that you do some planning for the extension. Your re-evaluation process should take into account the major requirements of your workloads to allow you to set up a generic foundation that can be specialized later during the migration. Enterprise considerations When it comes to aspects such as industry and government standards, regulations, and certifications, moving workloads to another region can introduce different requirements. Workloads running in Google Cloud regions that are physically located in different countries must follow the laws and regulations of that country.
In addition, different industry standards might have particular requirements for workloads running abroad (especially in terms of security). Because Google Cloud regions are built to run resources in a single country, sometimes workloads are migrated from another Google Cloud region to that country to adhere to specific regulations. When you perform these "in-country" migrations, it's important to re-evaluate data that is still running on-premises to check whether the new region allows the migration of your data to Google Cloud. Identity and access management When you are planning a migration, you probably don't have to plan for many identity and access changes for regions that are already on Google Cloud. Identity decisions on Google Cloud and access to resources are usually based on the nature of the resources rather than the region where the resources are running. Some considerations that you might need to make are as follows: Design of teams: Some companies are structured to have different teams handle different resources. When a workload is migrated to another region, the structure of the resources might change and a different team might become the best candidate to be responsible for certain resources; in that case, access should be adjusted accordingly. Naming conventions: Although naming conventions might not have any technical impact on functionality, some consideration might be needed if there are resources defined with naming conventions that refer to a specific region. One typical example is when there are already multiple replicated regions in place, such as Compute Engine virtual machines (VMs) that are named with the region as a prefix, for example, europe-west1-backend-1. During the migration process, to avoid confusion or, worse, breaking pipelines that rely on a specific naming convention, it's important to change names to reflect the new region (see the sketch at the end of this section). Connectivity and networking Your network design impacts multiple aspects of how the migration is executed, so it's important to address this design before you plan how to move workloads. Keep in mind that on-premises connectivity with Google Cloud is one of the factors that you must re-evaluate in the migration, since it can be designed to be region-specific. One example of this factor is Cloud Interconnect, which is connected to Google Cloud through a VLAN attachment to specific regions. You must change the region where the VLAN attachment is connected before decommissioning that region to avoid region-to-region traffic. Another factor to consider is that if you're using Partner Interconnect, migrating the region can help you select a different physical location at which to connect your VLAN attachments to Google Cloud. This consideration is also relevant if you use a Cloud VPN and decide to change subnet addresses in the migration: you must reconfigure your routers to reflect the new networking. While virtual private clouds (VPCs) on Google Cloud are global resources, single subnets are always bound to a region, which means you can't use the same subnet for the workloads after migration. Because subnets can't have overlapping IP ranges, if you want to maintain the same addresses you should create a new VPC. This process is simplified if you're using Cloud DNS, which can use features like DNS peering to route traffic for the migrated workloads before decommissioning the old region. For more information about building a foundation on Google Cloud, see Migrate to Google Cloud: Plan and build your foundation.
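The naming-conventions point above lends itself to a quick audit script. The following is a minimal, hypothetical sketch: the regex, the example names, and the target region are illustrative assumptions rather than part of the original guidance, and you would adapt the pattern to your own conventions.

```python
import re
from typing import Optional

# Hypothetical helper: detect resource names that embed the source region as a
# prefix (for example, "europe-west1-backend-1") and propose the equivalent
# name for the target region.
REGION_PREFIX = re.compile(r"^(?P<region>[a-z]+-[a-z]+\d+)-(?P<rest>.+)$")

def propose_new_name(name: str, source_region: str, target_region: str) -> Optional[str]:
    """Return the renamed resource for target_region, or None if no rename is needed."""
    match = REGION_PREFIX.match(name)
    if match and match.group("region") == source_region:
        return f"{target_region}-{match.group('rest')}"
    return None

names = ["europe-west1-backend-1", "europe-west1-frontend-2", "shared-db-1"]
for name in names:
    renamed = propose_new_name(name, "europe-west1", "europe-west3")
    if renamed:
        print(f"{name} -> {renamed}")
```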
Prepare your workloads for a migration across regions Whether you're setting up your infrastructure on Google Cloud and you plan to later migrate to another region, or you're already on Google Cloud and you need to migrate to another region, you must make sure that your workloads can be migrated in the most straightforward way to reduce effort and minimize risks. To help you ensure that all the workloads are in a state that supports the migration, we recommend that you take the following approach: Prefer network designs that are easily replicable and decoupled from the specific network topology. Google Cloud offers different products that can help you to decouple your current network configuration from the resources using that network. An example of such a product is Cloud DNS, which lets you decouple internal subnet IPs from VMs. Set up products that support multi-region or global configurations. Products that support a configuration that involves more than one region usually simplify the process of migrating them to another region. Consider managed services with managed cross-region replicas for data. As described in the following sections of this document, some managed services allow you to create a replica in a different region, usually for backup or high availability purposes. This feature can be important to migrate data from one region to another. Some Google Cloud services are designed to support multi-region or global deployments. You don't need to migrate these services, although you might need to adjust some configurations. Prepare your computing resources This section provides an overview of the compute resources on Google Cloud and design principles to prepare for a migration to another region. This document focuses on the following Google Cloud computing products: Compute Engine Google Kubernetes Engine Cloud Run VMware Engine Compute Engine Compute Engine is Google Cloud's service that provides VMs to customers. To migrate Compute Engine resources from one region to another, you must evaluate different factors in addition to networking considerations. We recommend that you do the following: Check compute resources: One of the first limitations you can encounter when changing the hosting region of a VM is the availability of the CPU platform in the new target region. If you have to change a machine series during the migration, check that the operating system of your current VM is supported for the series. Generally speaking, this problem can be extended to every Google Cloud computing service (some new regions may not have services like Cloud Run or Cloud GPUs), so before you plan the migration, make sure that all the compute services that you require are available in the destination region. Configure load balancing and scaling: Compute Engine supports load balancing traffic between Compute Engine instances and autoscaling to automatically add or remove virtual machines from MIGs according to demand. We recommend that you configure load balancing and autoscaling to increase the reliability and the flexibility of your environments and avoid the management burden of self-managed solutions. For more information about configuring load balancing and scaling for Compute Engine, see Load balancing and scaling. Use zonal DNS names: To mitigate the risk of cross-regional outages, we recommend that you use zonal DNS names to uniquely identify virtual machines in your environments.
Google Cloud uses zonal DNS names for Compute Engine virtual machines by default. For more information about how the Compute Engine internal DNS works, see Overview of internal DNS. To facilitate a future migration across regions, and to make your configuration more maintainable, we recommend that you consider zonal DNS names as configuration parameters that you can eventually change in the future. Use the same managed instance group (MIG) template: Compute Engine lets you create regional MIGs that automatically provision virtual machines across multiple zones in a region. If you're using a template in your old region, you can use the same template to deploy the MIGs in the new region (see the sketch at the end of this section). GKE Google Kubernetes Engine (GKE) helps you deploy, manage, and scale containerized workloads on Kubernetes. To prepare your GKE workloads for a migration, consider the following design points and GKE features: Cloud Service Mesh: A managed implementation of the Istio service mesh. Adopting Cloud Service Mesh for your cluster gives you a greater level of control over the network traffic into the cluster. One of the key features of Cloud Service Mesh is that it lets you create a service mesh between two clusters. You can use this feature to plan the migration from one region to another by creating the GKE cluster in the new region and adding it to the service mesh. By using this approach, it's possible to start deploying workloads in the new cluster and routing traffic to them gradually, allowing you to test the new deployment while keeping the option to roll back by editing mesh routing. Config Sync: A GitOps service built on an open source core that lets cluster operators and platform administrators deploy configurations from a single source. Config Sync can support one or many clusters, allowing you to use a single source of truth to configure the clusters. You can use Config Sync to replicate the configuration of the existing cluster to the cluster in the new region, and potentially customize specific resources for the region. Backup for GKE: This feature lets you back up your cluster persistent data periodically and restore the data to the same cluster or to a new one. Cloud Run Cloud Run offers a lightweight approach to deploying containers on Google Cloud. Cloud Run services are regional resources, and are replicated across multiple zones in the region they are in. When you deploy a Cloud Run service, you choose the region where the service is deployed, and you can use this capability to deploy the workload in a different region. VMware Engine Google Cloud VMware Engine is a fully managed service that lets you run the VMware platform in Google Cloud. The VMware environment runs natively on Google Cloud bare-metal infrastructure in Google Cloud locations, and includes vSphere, vCenter, vSAN, NSX-T, HCX, and corresponding tools. To migrate VMware Engine instances to a different region, create your private cloud in the new region and then use VMware tools to move the instances. You should also consider DNS and load balancing in Compute Engine environments when you plan your migration. VMware Engine uses Google Cloud DNS, which is a managed DNS hosting service that provides authoritative DNS hosting published to the public internet, private zones visible to VPC networks, and DNS forwarding and peering for managing name resolution on VPC networks. Your migration plan can support testing of multi-region load balancing and DNS configurations.
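As a concrete illustration of the recommendation above to reuse the same MIG template, here is a minimal sketch that creates a regional MIG in the target region from an existing instance template. It assumes the google-cloud-compute Python client library; the project, region, template path, and resource names are placeholders, and the field names shown follow the published client but should be verified against the current library version.

```python
from google.cloud import compute_v1

# Minimal sketch: recreate a regional managed instance group in the target
# region from the same instance template used in the source region.
def create_regional_mig(project: str, target_region: str,
                        template_self_link: str, size: int = 3) -> None:
    client = compute_v1.RegionInstanceGroupManagersClient()
    mig = compute_v1.InstanceGroupManager(
        name=f"backend-{target_region}",          # placeholder naming scheme
        base_instance_name="backend",
        instance_template=template_self_link,
        target_size=size,
    )
    # insert() returns a long-running operation; wait for it to finish.
    operation = client.insert(
        project=project,
        region=target_region,
        instance_group_manager_resource=mig,
    )
    operation.result()

# Example (placeholders):
# create_regional_mig("my-project", "europe-west3",
#                     "projects/my-project/global/instanceTemplates/backend-template")
```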
Prepare your data storage resources This section provides an overview of the data storage resources on Google Cloud and the basics of how to prepare for a migration to another region. Having the data already on Google Cloud simplifies the migration, because it implies that a solution to host the data without any transformation already exists or can be provisioned on Google Cloud. The ability to copy database data into a different region and restore the data elsewhere is a common pattern in disaster recovery (DR). For this reason, some of the patterns described in this document rely on DR mechanisms such as database backup and recovery. The following managed services are described in this document: Cloud Storage Cloud SQL Filestore Bigtable Firestore This document assumes that the storage solutions that you are using are regional instances that are co-located with compute resources. Cloud Storage Cloud Storage offers Storage Transfer Service, which automates the transfer of files from different systems to Cloud Storage. It can be used to replicate data to a different region for backup, and also for region-to-region migration. Cloud SQL Cloud SQL offers a relational database service to host different types of databases. Cloud SQL offers cross-region replication functionality that allows instance data to be replicated to a different region. This feature is commonly used for backup and restore of Cloud SQL instances, but it also lets you promote the replica in the other region to be the primary instance. You can use this feature to create a read replica in the second region and then promote it to the primary instance once you migrate your workloads (a minimal promotion sketch appears at the end of this document). In general, for databases, managed services simplify data replication, which makes it easier to create a replica in the new region during migration. Another way to handle the migration is by using Database Migration Service, which lets you migrate SQL databases from different sources to Google Cloud. The supported sources include other Cloud SQL instances, with the limitation that you can migrate to a different region but not to a different project. Filestore The backup and restore feature of Filestore lets you create a backup of a file share that can be restored to another region. This feature can be used to perform region-to-region migration. Bigtable As with Cloud SQL, Bigtable supports replication. You can use this feature to follow the same pattern described for Cloud SQL. Check the Bigtable location list to confirm that the service is available in the destination region. In addition, as with Filestore, Bigtable supports backup and restore. As with Filestore, you can implement the migration by creating a backup and restoring it to another instance in the new region. The last option is exporting tables, for example, to Cloud Storage. The exported data is stored in another service and is then available to import into the instance in the new region. Firestore Firestore locations might be bound to the presence of App Engine in your project, which in some scenarios forces the Firestore instance to be multi-region. In these migration scenarios, it's also necessary to take App Engine into account to design the right solution for Firestore. In fact, if you already have an App Engine app with a location of either us-central or europe-west, your Firestore database is considered multi-regional.
If you have a regional location and you want to migrate to a different location, the managed export and import service lets you import and export Firestore entities by using a Cloud Storage bucket. This method can be used to move instances from one region to another. The other option is to use the Firestore backup and restore feature. This option is less expensive and more straightforward than import and export. Prepare for decommissioning the source environment You must prepare in advance before you decommission your source environment and switch to the new one. At a high level, you should consider the following before you decommission the source environment: New environment tests: Before you switch the traffic from the old environment to the new environment, you can run tests to validate the correctness of the applications. Beyond the classic unit and integration tests that you can run on newly migrated applications, there are other testing strategies. The new environment can be treated as a new version of the software, and the migration of traffic can be implemented with common patterns such as A/B testing for validation. Another approach is to replicate the incoming traffic to the source environment and to the new environment to check that functionality is preserved. Downtime planning: If you select a migration strategy such as blue-green, where you switch traffic from one environment to another, consider adopting planned downtime. Planned downtime allows the transition to be monitored more closely and helps avoid unpredictable errors on the client side. Rollback: Depending on the strategies adopted for migrating the traffic, it might be necessary to implement a rollback in the case of errors or misconfiguration in the new environment. To be able to roll back, you must have monitoring infrastructure in place to detect the status of the new environment. Shut down services in the first region only after you perform extended tests in the new region and go live there without errors. We recommend that you keep backups of the first region for a limited amount of time, until you're sure that there are no issues in the newly migrated region. You should also consider whether you want to promote the old region to a disaster recovery site, assuming there isn't already a solution in place. This approach requires additional design to ensure that the site is reliable. For more information on how to correctly design and plan for DR, see the Disaster recovery planning guide. What's next For more general principles for designing reliable single-region and multi-region environments, and for details about how Google achieves better reliability with regional and multi-region services, see Architecting disaster recovery for cloud infrastructure outages: Common themes. Learn more about the Google Cloud products used in this design guide: Compute Engine GKE Cloud Run VMware Engine Cloud Storage Filestore Bigtable Firestore For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
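As a concrete illustration of the Cloud SQL cross-region replica pattern referenced in the data storage section above, here is a minimal cutover sketch. It assumes the Cloud SQL Admin API is enabled, Application Default Credentials are configured, and a cross-region read replica already exists; the replica instance name is a placeholder.

```python
from googleapiclient import discovery
import google.auth

# Minimal sketch of the Cloud SQL cutover step: once the cross-region read
# replica is caught up, promote it to a standalone primary instance.
credentials, project = google.auth.default()
sqladmin = discovery.build("sqladmin", "v1", credentials=credentials)

# Promote the replica that was created in the target region (placeholder name).
request = sqladmin.instances().promoteReplica(
    project=project,
    instance="myapp-replica-europe-west3",
)
operation = request.execute()
print("Started promotion, operation:", operation["name"])
```

After the promotion completes, you would point application connection strings at the promoted instance and only then decommission the original primary, in line with the rollback and testing guidance above.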
ContributorsAuthor: Valerio Ponza | Technical Solution ConsultantOther contributors: Marco Ferrari | Cloud Solutions ArchitectTravis Webb | Solution ArchitectLee Gates | Group Product ManagerRodd Zurcher | Solutions Architect Send feedback \ No newline at end of file diff --git a/Architecting_for_cloud_infrastructure_outages.txt b/Architecting_for_cloud_infrastructure_outages.txt new file mode 100644 index 0000000000000000000000000000000000000000..0039cc8e4f15acf981a641e3d54b7de4655a3436 --- /dev/null +++ b/Architecting_for_cloud_infrastructure_outages.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/disaster-recovery +Date Scraped: 2025-02-23T11:54:41.577Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecting disaster recovery for cloud infrastructure outages Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-05-10 UTC This article is part of a series that discusses disaster recovery (DR) in Google Cloud. This part discusses the process for architecting workloads using Google Cloud and building blocks that are resilient to cloud infrastructure outages. The series consists of these parts: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages (this document) Introduction As enterprises move workloads on to the public cloud, they need to translate their understanding of building resilient on-premises systems to the hyperscale infrastructure of cloud providers like Google Cloud. This article maps industry standard concepts around disaster recovery such as RTO (Recovery Time Objective) and RPO (Recovery Point Objective) to the Google Cloud infrastructure. The guidance in this document follows one of Google's key principles for achieving extremely high service availability: plan for failure. While Google Cloud provides extremely reliable service, disasters will strike - natural disasters, fiber cuts, and complex unpredictable infrastructure failures - and these disasters cause outages. Planning for outages enables Google Cloud customers to build applications that perform predictably through these inevitable events, by making use of Google Cloud products with "built-in" DR mechanisms. Disaster recovery is a broad topic which covers a lot more than just infrastructure failures, such as software bugs or data corruption, and you should have a comprehensive end-to-end plan. However this article focuses on one part of an overall DR plan: how to design applications that are resilient to cloud infrastructure outages. Specifically, this article walks through: The Google Cloud infrastructure, how disaster events manifest as Google Cloud outages, and how Google Cloud is architected to minimize the frequency and scope of outages. An architecture planning guide that provides a framework for categorizing and designing applications based on the desired reliability outcomes. A detailed list of select Google Cloud products that offer built-in DR capabilities which you may want to use in your application. For further details on general DR planning and using Google Cloud as a component in your on-premises DR strategy, see the disaster recovery planning guide. 
Also, while high availability is a concept closely related to disaster recovery, it is not covered in this article. For further details on architecting for high availability, see the Google Cloud Architecture Framework. A note on terminology: this article refers to availability when discussing the ability for a product to be meaningfully accessed and used over time, while reliability refers to a set of attributes including availability but also things like durability and correctness. How Google Cloud is designed for resilience Google data centers Traditional data centers rely on maximizing availability of individual components. In the cloud, scale allows operators like Google to spread services across many components using virtualization technologies and thus exceed traditional component reliability. This means you can shift your reliability architecture mindset away from the myriad details you once worried about on-premises. Rather than worry about the various failure modes of components -- such as cooling and power delivery -- you can plan around Google Cloud products and their stated reliability metrics. These metrics reflect the aggregate outage risk of the entire underlying infrastructure. This frees you to focus much more on application design, deployment, and operations rather than infrastructure management. Google designs its infrastructure to meet aggressive availability targets based on our extensive experience building and running modern data centers. Google is a world leader in data center design. From power to cooling to networks, each data center technology has its own redundancies and mitigations, including FMEA plans. Google's data centers are built in a way that balances these many different risks and presents to customers a consistent expected level of availability for Google Cloud products. Google uses its experience to model the availability of the overall physical and logical system architecture to ensure that the data center design meets expectations. Google's engineers go to great lengths operationally to help ensure those expectations are met. Actual measured availability normally exceeds our design targets by a comfortable margin. By distilling all of these data center risks and mitigations into user-facing products, Google Cloud relieves you of those design and operational responsibilities. Instead, you can focus on the reliability designed into Google Cloud regions and zones. Regions and zones Regions are independent geographic areas that consist of zones. Zones and regions are logical abstractions of underlying physical resources. For more information about region-specific considerations, see Geography and regions. Google Cloud products are divided into zonal resources, regional resources, or multi-regional resources. Zonal resources are hosted within a single zone. A service interruption in that zone can affect all of the resources in that zone. For example, a Compute Engine instance runs in a single, specified zone; if a hardware failure interrupts service in that zone, that Compute Engine instance is unavailable for the duration of the interruption. Regional resources are redundantly deployed across multiple zones within a region. This gives them higher reliability relative to zonal resources. Multi-regional resources are distributed within and across regions. In general, multi-regional resources have higher reliability than regional resources. However, at this level products must optimize availability, performance, and resource efficiency.
As a result, it is important to understand the tradeoffs made by each multi-regional product you decide to use. These tradeoffs are documented on a product-specific basis later in this document. How to leverage zones and regions to achieve reliability Google SREs manage and scale highly reliable, global user products like Gmail and Search through a variety of techniques and technologies that seamlessly leverage computing infrastructure around the world. This includes redirecting traffic away from unavailable locations using global load balancing, running multiple replicas in many locations around the planet, and replicating data across locations. These same capabilities are available to Google Cloud customers through products like Cloud Load Balancing, Google Kubernetes Engine (GKE), and Spanner. Google Cloud generally designs products to deliver the following levels of availability for zones and regions: Resource Examples Availability design goal Implied downtime Zonal Compute Engine, Persistent Disk 99.9% 8.75 hours / year Regional Regional Cloud Storage, Replicated Persistent Disk, Regional GKE 99.99% 52 minutes / year Note: These are design guidelines. Google Cloud service level commitments can be found at Google Cloud Service Level Agreements. Compare the Google Cloud availability design goals against your acceptable level of downtime to identify the appropriate Google Cloud resources. While traditional designs focus on improving component-level availability to improve the resulting application availability, cloud models focus instead on composition of components to achieve this goal. Many products within Google Cloud use this technique. For example, Spanner offers a multi-region database that composes multiple regions in order to deliver 99.999% availability. Composition is important because without it, your application availability cannot exceed that of the Google Cloud products you use; in fact, unless your application never fails, it will have lower availability than the underlying Google Cloud products. The remainder of this section shows generally how you can use a composition of zonal and regional products to achieve higher application availability than a single zone or region would provide. The next section gives a practical guide for applying these principles to your applications. Planning for zone outage scopes Infrastructure failures usually cause service outages in a single zone. Within a region, zones are designed to minimize the risk of correlated failures with other zones, and a service interruption in one zone would usually not affect service from another zone in the same region. An outage scoped to a zone doesn't necessarily mean that the entire zone is unavailable, it just defines the boundary of the incident. It is possible for a zone outage to have no tangible effect on your particular resources in that zone. It's a rarer occurrence, but it's also critical to note that multiple zones will eventually still experience a correlated outage at some point within a single region. When two or more zones experience an outage, the regional outage scope strategy below applies. Regional resources are designed to be resistant to zone outages by delivering service from a composition of multiple zones. If one of the zones backing a regional resource is interrupted, the resource automatically makes itself available from another zone. Carefully check the product capability description in the appendix for further details. 
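The downtime figures in the availability design goal table above, and the point that an application cannot be more available than the products it depends on, come down to simple arithmetic. The following sketch shows the conversion; the example availability values are illustrative, not commitments.

```python
# Back-of-the-envelope math behind the availability design goals: convert an
# availability target into implied downtime per year, and compose the
# availability of serial dependencies (an application that needs several
# products at once is bounded by the product of their availabilities).

MINUTES_PER_YEAR = 365 * 24 * 60

def implied_downtime_minutes(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

def composed_availability(*availabilities: float) -> float:
    result = 1.0
    for a in availabilities:
        result *= a
    return result

print(round(implied_downtime_minutes(0.999) / 60, 2))  # zonal goal: ~8.76 hours/year
print(round(implied_downtime_minutes(0.9999), 1))       # regional goal: ~52.6 minutes/year
# An app that is itself 99.99% reliable and depends on two regional products:
print(round(composed_availability(0.9999, 0.9999, 0.9999), 6))  # ~0.9997, roughly 2.6 hours/year
```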
Google Cloud only offers a few zonal resources, namely Compute Engine virtual machines (VMs) and Persistent Disk. If you plan to use zonal resources, you'll need to perform your own resource composition by designing, building, and testing failover and recovery between zonal resources located in multiple zones. Some strategies include: Routing your traffic quickly to virtual machines in another zone using Cloud Load Balancing when a health check determines that a zone is experiencing issues. Use Compute Engine instance templates and/or managed instance groups to run and scale identical VM instances in multiple zones. Use a regional Persistent Disk to synchronously replicate data to another zone in a region. See High availability options using regional PDs for more details. Planning for regional outage scopes A regional outage is a service interruption affecting more than one zone in a single region. These are larger scale, less frequent outages and can be caused by natural disasters or large scale infrastructure failures. For a regional product that is designed to provide 99.99% availability, an outage can still translate to nearly an hour of downtime for a particular product every year. Therefore, your critical applications may need to have a multi-region DR plan in place if this outage duration is unacceptable. Multi-regional resources are designed to be resistant to region outages by delivering service from multiple regions. As described above, multi-region products trade off between latency, consistency, and cost. The most common trade off is between synchronous and asynchronous data replication. Asynchronous replication offers lower latency at the cost of risk of data loss during an outage. So, it is important to check the product capability description in the appendix for further details. Note: In BigQuery, a multi-region location does not provide cross-region replication nor regional redundancy. Data will be stored in a single region within the geographic location. If you want to use regional resources and remain resilient to regional outages, then you must perform your own resource composition by designing, building, and testing their failover and recovery between regional resources located in multiple regions. In addition to the zonal strategies above, which you can apply across regions as well, consider: Regional resources should replicate data to a secondary region, to a multi-regional storage option such as Cloud Storage, or a hybrid cloud option such as GKE Enterprise. After you have a regional outage mitigation in place, test it regularly. There are few things worse than thinking you're resistant to a single-region outage, only to find that this isn't the case when it happens for real. Google Cloud resilience and availability approach Google Cloud regularly beats its availability design targets, but you should not assume that this strong past performance is the minimum availability you can design for. Instead, you should select Google Cloud dependencies whose designed-for targets exceed your application's intended reliability, such that your application downtime plus the Google Cloud downtime delivers the outcome you are seeking. A well-designed system can answer the question: "What happens when a zone or region has a 1, 5, 10, or 30 minute outage?" This should be considered at many layers, including: What will my customers experience during an outage? How will I detect that an outage is happening? What happens to my application during an outage? 
What happens to my data during an outage? What happens to my other applications due to an outage (due to cross-dependencies)? What do I need to do in order to recover after an outage is resolved? Who does it? Who do I need to notify about an outage, within what time period? Step-by-step guide to designing disaster recovery for applications in Google Cloud The previous sections covered how Google builds cloud infrastructure, and some approaches for dealing with zonal and regional outages. This section helps you develop a framework for applying the principle of composition to your applications based on your desired reliability outcomes. Customer applications in Google Cloud that target disaster recovery objectives such as RTO and RPO must be architected so that business-critical operations, subject to RTO/RPO, only have dependencies on data plane components that are responsible for continuous processing of operations for the service. In other words, such customer business-critical operations must not depend on management plane operations, which manage configuration state and push configuration to the control plane and the data plane. For example, Google Cloud customers who intend to achieve RTO for business-critical operations should not depend on a VM-creation API or on the update of an IAM permission. Step 1: Gather existing requirements The first step is to define the availability requirements for your applications. Most companies already have some level of design guidance in this space, which may be internally developed or derived from regulations or other legal requirements. This design guidance is normally codified in two key metrics: Recovery Time Objective (RTO) and Recovery Point Objective (RPO). In business terms, RTO translates as "How long after a disaster before I'm up and running?" RPO translates as "How much data can I afford to lose in the event of a disaster?" Historically, enterprises have defined RTO and RPO requirements for a wide range of disaster events, from component failures to earthquakes. This made sense in the on-premises world where planners had to map the RTO/RPO requirements through the entire software and hardware stack. In the cloud, you no longer need to define your requirements with such detail because the provider takes care of that. Instead, you can define your RTO and RPO requirements in terms of the scope of loss (entire zones or regions) without being specific about the underlying reasons. For Google Cloud this simplifies your requirement gathering to three scenarios: a zonal outage, a regional outage, or the extremely unlikely outage of multiple regions. Recognizing that not every application has equal criticality, most customers categorize their applications into criticality tiers against which a specific RTO/RPO requirement can be applied. When taken together, RTO/RPO and application criticality streamline the process of architecting a given application by answering: Does the application need to run in multiple zones in the same region, or in multiple zones in multiple regions? On which Google Cloud products can the application depend? This is an example of the output of the requirements gathering exercise:

RTO and RPO by Application Criticality for Example Organization Co:

Application criticality | % of apps | Example apps | Zone outage | Region outage
Tier 1 (most important) | 5% | Typically global or external customer-facing applications, such as real-time payments and eCommerce storefronts. | RTO zero, RPO zero | RTO zero, RPO zero
Tier 2 | 35% | Typically regional applications or important internal applications, such as CRM or ERP. | RTO 15 mins, RPO 15 mins | RTO 1 hr, RPO 1 hr
Tier 3 (least important) | 60% | Typically team or departmental applications, such as back office, leave booking, internal travel, accounting, and HR. | RTO 1 hr, RPO 1 hr | RTO 12 hrs, RPO 12 hrs

Step 2: Capability mapping to available products The second step is to understand the resilience capabilities of Google Cloud products that your applications will be using. Most companies review the relevant product information and then add guidance on how to modify their architectures to accommodate any gaps between the product capabilities and their resilience requirements. This section covers some common areas and recommendations around data and application limitations in this space. As mentioned previously, Google's DR-enabled products broadly cater for two types of outage scopes: regional and zonal. Partial outages should be planned for the same way as a full outage when it comes to DR. This gives an initial high-level matrix of which products are suitable for each scenario by default:

Google Cloud Product General Capabilities (see Appendix for specific product capabilities)

Outage scope | All Google Cloud products | Regional Google Cloud products with automatic replication across zones | Multi-regional or global Google Cloud products with automatic replication across regions
Failure of a component within a zone | Covered* | Covered | Covered
Zone outage | Not covered | Covered | Covered
Region outage | Not covered | Not covered | Covered

* All Google Cloud products are resilient to component failure, except in specific cases noted in product documentation. These are typically scenarios where the product offers direct access or static mapping to a piece of speciality hardware such as memory or Solid State Disks (SSD). How RPO limits product choices In most cloud deployments, data integrity is the most architecturally significant aspect to be considered for a service. At least some applications have an RPO requirement of zero, meaning there should be no data loss in the event of an outage. This typically requires data to be synchronously replicated to another zone or region. Synchronous replication has cost and latency tradeoffs, so while many Google Cloud products provide synchronous replication across zones, only a few provide it across regions. This cost and complexity tradeoff means that it's not unusual for different types of data within an application to have different RPO values. For data with an RPO greater than zero, applications can take advantage of asynchronous replication. Asynchronous replication is acceptable when lost data can either be recreated easily, or can be recovered from a golden source of data if needed. It can also be a reasonable choice when a small amount of data loss is an acceptable tradeoff in the context of zonal and regional expected outage durations. It is also relevant that during a transient outage, data written to the affected location but not yet replicated to another location generally becomes available after the outage is resolved. This means that the risk of permanent data loss is lower than the risk of losing data access during an outage. Key actions: Establish whether you definitely need RPO zero, and if so whether you can do this for a subset of your data -- this dramatically increases the range of DR-enabled services available to you.
In Google Cloud, achieving RPO zero means using predominantly regional products for your application, which by default are resilient to zone-scale, but not region-scale, outages. How RTO limits product choices One of the primary benefits of cloud computing is the ability to deploy infrastructure on demand; however, this isn't the same as instantaneous deployment. The RTO value for your application needs to accommodate the combined RTO of the Google Cloud products your application utilizes and any actions your engineers or SREs must take to restart your VMs or application components. An RTO measured in minutes means designing an application which recovers automatically from a disaster without human intervention, or with minimal steps such as pushing a button to failover. The cost and complexity of this kind of system historically has been very high, but Google Cloud products like load balancers and instance groups make this design both much more affordable and simpler. Therefore, you should consider automated failover and recovery for most applications. Be aware that designing a system for this kind of hot failover across regions is both complicated and expensive; only a very small fraction of critical services warrant this capability. Most applications will have an RTO of between an hour and a day, which allows for a warm failover in a disaster scenario, with some components of the application running all the time in a standby mode--such as databases--while others are scaled out in the event of an actual disaster, such as web servers. For these applications, you should strongly consider automation for the scale-out events. Services with an RTO over a day are the lowest criticality and can often be recovered from a backup or recreated from scratch. Key actions: Establish whether you definitely need an RTO of (near) zero for regional failover, and if so whether you can do this for a subset of your services. This changes the cost of running and maintaining your service. Step 3: Develop your own reference architectures and guides The final recommended step is building your own company-specific architecture patterns to help your teams standardize their approach to disaster recovery. Most Google Cloud customers produce a guide for their development teams that matches their individual business resilience expectations to the two major categories of outage scenarios on Google Cloud. This allows teams to easily categorize which DR-enabled products are suitable for each criticality level. Create product guidelines Looking again at the example RTO/RPO table from above, you have a hypothetical guide that lists which products would be allowed by default for each criticality tier. Note that where certain products have been identified as not suitable by default, you can always add your own replication and failover mechanisms to enable cross-zone or cross-region synchronization, but this exercise is beyond the scope of this article. The tables also link to more information about each product to help you understand their capabilities with respect to managing zone or region outages. 
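Some teams also encode this kind of guide in a small lookup that architecture reviews or CI checks can query. The following Python sketch is purely illustrative and hypothetical: the tier names, the function name, and the Yes/No verdicts simply mirror the example zone-outage table that follows, and none of it is a Google Cloud API.

```python
# Hypothetical encoding of Example Organization Co's zone-outage guideline.
# The verdicts mirror the sample table below; replace them with your own
# tiers, products, and requirements.
ZONE_OUTAGE_GUIDE = {
    "Compute Engine":       {"tier1": False, "tier2": False, "tier3": False},
    "Dataflow":             {"tier1": False, "tier2": False, "tier3": False},
    "BigQuery":             {"tier1": False, "tier2": False, "tier3": True},
    "GKE":                  {"tier1": True,  "tier2": True,  "tier3": True},
    "Cloud Storage":        {"tier1": True,  "tier2": True,  "tier3": True},
    "Cloud SQL":            {"tier1": False, "tier2": True,  "tier3": True},
    "Spanner":              {"tier1": True,  "tier2": True,  "tier3": True},
    "Cloud Load Balancing": {"tier1": True,  "tier2": True,  "tier3": True},
}


def is_approved_by_default(product: str, tier: str) -> bool:
    """Return True if the product meets the tier's zone-outage requirement by default."""
    # Unknown products or tiers fall through to an explicit architecture review.
    return ZONE_OUTAGE_GUIDE.get(product, {}).get(tier, False)


# Example usage: Cloud SQL passes for tier 2 but not for tier 1.
assert is_approved_by_default("Cloud SQL", "tier2") is True
assert is_approved_by_default("Cloud SQL", "tier1") is False
```

A second dictionary could capture the region-outage table from the same guide in the same way.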
Sample Architecture Patterns for Example Organization Co -- Zone Outage Resilience
Does product meet zonal outage requirements for Example Organization (with appropriate product configuration)?

Google Cloud Product | Tier 1 | Tier 2 | Tier 3
Compute Engine | No | No | No
Dataflow | No | No | No
BigQuery | No | No | Yes
GKE | Yes | Yes | Yes
Cloud Storage | Yes | Yes | Yes
Cloud SQL | No | Yes | Yes
Spanner | Yes | Yes | Yes
Cloud Load Balancing | Yes | Yes | Yes

This table is an example only based on hypothetical tiers shown above.

Sample Architecture Patterns for Example Organization Co -- Region Outage Resilience
Does product meet region outage requirements for Example Organization (with appropriate product configuration)?

Google Cloud Product | Tier 1 | Tier 2 | Tier 3
Compute Engine | Yes | Yes | Yes
Dataflow | No | No | No
BigQuery | No | No | Yes
GKE | Yes | Yes | Yes
Cloud Storage | No | No | No
Cloud SQL | No | Yes | Yes
Spanner | Yes | Yes | Yes
Cloud Load Balancing | Yes | Yes | Yes

This table is an example only based on hypothetical tiers shown above.

To show how these products would be used, the following sections walk through some reference architectures for each of the hypothetical application criticality levels. These are deliberately high level descriptions to illustrate the key architectural decisions, and aren't representative of a complete solution design. Example tier 3 architecture

Application criticality | Zone outage | Region outage
Tier 3 (least important) | RTO 12 hours, RPO 24 hours | RTO 28 days, RPO 24 hours

(Greyed-out icons indicate infrastructure to be enabled for recovery) This architecture describes a traditional client/server application: internal users connect to an application running on a compute instance which is backed by a database for persistent storage. It's important to note that this architecture supports better RTO and RPO values than required. However, you should also consider eliminating additional manual steps when they could prove costly or unreliable. For example, recovering a database from a nightly backup could support the RPO of 24 hours, but this would usually need a skilled individual such as a database administrator who might be unavailable, especially if multiple services were impacted at the same time. With Google Cloud's on demand infrastructure you are able to build this capability without making a major cost tradeoff, and so this architecture uses Cloud SQL HA rather than a manual backup/restore for zonal outages. Key architectural decisions for zone outage - RTO of 12hrs and RPO of 24hrs: An internal load balancer is used to provide a scalable access point for users, which allows for automatic failover to another zone. Even though the RTO is 12 hours, manual changes to IP addresses or even DNS updates can take longer than expected. A regional managed instance group is configured with multiple zones but minimal resources. This optimizes for cost but still allows for virtual machines to be quickly scaled out in the backup zone. A high availability Cloud SQL configuration provides for automatic failover to another zone. Databases are significantly harder to recreate and restore compared to the Compute Engine virtual machines. Key architectural decisions for region outage - RTO of 28 days and RPO of 24 hours: A load balancer would be constructed in region 2 only in the event of a regional outage. Cloud DNS is used to provide an orchestrated but manual regional failover capability, since the infrastructure in region 2 would only be made available in the event of a region outage.
A new managed instance group would be constructed only in the event of a region outage. This optimizes for cost and is unlikely to be invoked given the short length of most regional outages. Note that for simplicity the diagram doesn't show the associated tooling needed to redeploy, or the copying of the Compute Engine images needed. A new Cloud SQL instance would be recreated and the data restored from a backup. Again, the risk of an extended outage to a region is extremely low, so this is another cost optimization trade-off. Multi-regional Cloud Storage is used to store these backups. This provides automatic zone and regional resilience within the RTO and RPO. Example tier 2 architecture

Application criticality | Zone outage | Region outage
Tier 2 | RTO 4 hours, RPO zero | RTO 24 hours, RPO 4 hours

This architecture describes a data warehouse with internal users connecting to a compute instance visualization layer, and a data ingest and transformation layer which populates the backend data warehouse. Some individual components of this architecture do not directly support the RPO required for their tier. However, because of how they are used together, the overall service does meet the RPO. In this case, because Dataflow is a zonal product, follow the recommendations for high availability design to help prevent data loss during an outage. However, the Cloud Storage layer is the golden source of this data and supports an RPO of zero. As a result, you can re-ingest any lost data into BigQuery by using zone b in the event of an outage in zone a. Key architectural decisions for zone outage - RTO of 4hrs and RPO of zero: A load balancer is used to provide a scalable access point for users, which allows for automatic failover to another zone. Even though the RTO is 4 hours, manual changes to IP addresses or even DNS updates can take longer than expected. A regional managed instance group for the data visualization compute layer is configured with multiple zones but minimal resources. This optimizes for cost but still allows for virtual machines to be quickly scaled out. Regional Cloud Storage is used as a staging layer for the initial ingest of data, providing automatic zone resilience. Dataflow is used to extract data from Cloud Storage and transform it before loading it into BigQuery. In the event of a zone outage this is a stateless process that can be restarted in another zone. BigQuery provides the data warehouse backend for the data visualization front end. In the event of a zone outage, any data lost would be re-ingested from Cloud Storage. Key architectural decisions for region outage - RTO of 24hrs and RPO of 4 hours: A load balancer in each region is used to provide a scalable access point for users. Cloud DNS is used to provide an orchestrated but manual regional failover capability, since the infrastructure in region 2 would only be made available in the event of a region outage. A regional managed instance group for the data visualization compute layer is configured with multiple zones but minimal resources. This isn't accessible until the load balancer is reconfigured but doesn't require manual intervention otherwise. Regional Cloud Storage is used as a staging layer for the initial ingest of data. This is being loaded at the same time into both regions to meet the RPO requirements. Dataflow is used to extract data from Cloud Storage and transform it before loading it into BigQuery. In the event of a region outage this would populate BigQuery with the latest data from Cloud Storage.
BigQuery provides the data warehouse backend. Under normal operations this would be intermittently refreshed. In the event of a region outage the latest data would be re-ingested via Dataflow from Cloud Storage. Example tier 1 architecture

Application criticality | Zone outage | Region outage
Tier 1 (most important) | RTO zero, RPO zero | RTO 4 hours, RPO 1 hour

This architecture describes a mobile app backend infrastructure with external users connecting to a set of microservices running in GKE. Spanner provides the backend data storage layer for real-time data, and historical data is streamed to a BigQuery data lake in each region. Again, some individual components of this architecture do not directly support the RPO required for their tier, but because of how they are used together the overall service does. In this case, BigQuery is being used for analytic queries. Each region is fed simultaneously from Spanner. Key architectural decisions for zone outage - RTO of zero and RPO of zero: A load balancer is used to provide a scalable access point for users, which allows for automatic failover to another zone. A regional GKE cluster is used for the application layer, which is configured with multiple zones. This accomplishes the RTO of zero within each region. Multi-region Spanner is used as a data persistence layer, providing automatic zone data resilience and transaction consistency. BigQuery provides the analytics capability for the application. Each region is independently fed data from Spanner, and independently accessed by the application. Key architectural decisions for region outage - RTO of 4 hrs and RPO of 1 hr: A load balancer is used to provide a scalable access point for users, which allows for automatic failover to another region. A regional GKE cluster is used for the application layer, which is configured with multiple zones. In the event of a region outage, the cluster in the alternate region automatically scales to take on the additional processing load. Multi-region Spanner is used as a data persistence layer, providing automatic regional data resilience and transaction consistency. This is the key component in achieving the cross-region RPO of 1 hour. BigQuery provides the analytics capability for the application. Each region is independently fed data from Spanner, and independently accessed by the application. This architecture compensates for the BigQuery component, allowing it to match the overall application requirements. Appendix: Product reference This section describes the architecture and DR capabilities of Google Cloud products that are most commonly used in customer applications and that can be easily leveraged to achieve your DR requirements. Common themes Many Google Cloud products offer regional or multi-regional configurations. Regional products are resilient to zone outages, and multi-region and global products are resilient to region outages. In general, this means that during an outage, your application experiences minimal disruption. Google achieves these outcomes through a few common architectural approaches, which mirror the architectural guidance above. Redundant deployment: The application backends and data storage are deployed across multiple zones within a region and multiple regions within a multi-region location. For more information about region-specific considerations, see Geography and regions. Data replication: Products use either synchronous or asynchronous replication across the redundant locations.
Synchronous replication means that when your application makes an API call to create or modify data stored by the product, it receives a successful response only once the product has written the data to multiple locations. Synchronous replication ensures that you do not lose access to any of your data during a Google Cloud infrastructure outage because all of your data is available in one of the available backend locations. Although this technique provides maximum data protection, it can have tradeoffs in terms of latency and performance. Multi-region products using synchronous replication experience this tradeoff most significantly -- typically on the order of 10s or 100s of milliseconds of added latency. Asynchronous replication means that when your application makes an API call to create or modify data stored by the product, it receives a successful response once the product has written the data to a single location. Subsequent to your write request, the product replicates your data to additional locations. This technique provides lower latency and higher throughput at the API than synchronous replication, but at the expense of data protection. If the location in which you have written data suffers an outage before replication is complete, you lose access to that data until the location outage is resolved. Handling outages with load balancing: Google Cloud uses software load balancing to route requests to the appropriate application backends. Compared to other approaches like DNS load balancing, this approach reduces the system response time to an outage. When a Google Cloud location outage occurs, the load balancer quickly detects that the backend deployed in that location has become "unhealthy" and directs all requests to a backend in an alternate location. This enables the product to continue serving your application's requests during a location outage. When the location outage is resolved, the load balancer detects the availability of the product backends in that location, and resumes sending traffic there. Access Context Manager Access Context Manager lets enterprises configure access levels that map to a policy that's defined on request attributes. Policies are mirrored regionally. In the case of a zonal outage, requests to unavailable zones are automatically and transparently served from other available zones in the region. In the case of regional outage, policy calculations from the affected region are unavailable until the region becomes available again. Access Transparency Access Transparency lets Google Cloud organization administrators define fine-grained, attribute-based access control for projects and resources in Google Cloud. Occasionally, Google must access customer data for administrative purposes. When we access customer data, Access Transparency provides access logs to affected Google Cloud customers. These Access Transparency logs help ensure Google's commitment to data security and transparency in data handling. Access Transparency is resilient against zonal and regional outages. If a zonal or regional outage happens, Access Transparency continues to process administrative access logs in another zone or region. AlloyDB for PostgreSQL AlloyDB for PostgreSQL is a fully managed, PostgreSQL-compatible database service. AlloyDB for PostgreSQL offers high availability in a region through its primary instance's redundant nodes that are located in two different zones of the region. 
The primary instance maintains regional availability by triggering an automatic failover to the standby zone if the active zone encounters an issue. Regional storage guarantees data durability in the event of a single-zone loss. As a further method of disaster recovery, AlloyDB for PostgreSQL uses cross-region replication to provide disaster recovery capabilities by asynchronously replicating your primary cluster's data into secondary clusters that are located in separate Google Cloud regions. Zonal outage: During normal operation, only one of the two nodes of a high-availability primary instance is active, and it serves all data writes. This active node stores the data in the cluster's separate, regional storage layer. AlloyDB for PostgreSQL automatically detects zone-level failures and triggers a failover to restore database availability. During failover, AlloyDB for PostgreSQL starts the database on the standby node, which is already provisioned in a different zone. New database connections automatically get routed to this zone. From the perspective of a client application, a zonal outage resembles a temporary interruption of network connectivity. After the failover completes, a client can reconnect to the instance at the same address, using the same credentials, with no loss of data. Regional Outage: Cross-region replication uses asynchronous replication, which allows the primary instance to commit transactions before they are committed on replicas. The time difference between when a transaction is committed on the primary instance and when it is committed on the replica is known as replication lag. The time difference between when the primary generates the write-ahead log (WAL) and when the WAL reaches the replica is known as flush lag. Replication lag and flush lag depend on database instance configuration and on the user-generated workload. In the event of a regional outage, you can promote secondary clusters in a different region to a writeable, standalone primary cluster. This promoted cluster no longer replicates the data from the original primary cluster that it was formerly associated with. Due to flush lag, some data loss might occur because there could be transactions on the original primary that were not propagated to the secondary cluster. Cross-region replication RPO is affected by both the CPU utilization of the primary cluster, and physical distance between the primary cluster's region and the secondary cluster's region. To optimize RPO, we recommend testing your workload with a configuration that includes a replica to establish a safe transactions per second (TPS) limit, which is the highest sustained TPS that doesn't accumulate flush lag. If your workload exceeds the safe TPS limit, flush lag accumulates, which can affect RPO. To limit network lag, pick region pairs within the same continent. For more information about monitoring network lag and other AlloyDB for PostgreSQL metrics, see Monitor instances. Anti Money Laundering AI Anti Money Laundering AI (AML AI) provides an API to help global financial institutions more effectively and efficiently detect money laundering. Anti Money Laundering AI is a regional offering, meaning customers can choose the region, but not the zones that make up a region. Data and traffic are automatically load balanced across zones within a region. The operations (for example, to create a pipeline or run a prediction) are automatically scaled in the background and are load balanced across zones as necessary. 
Zonal outage: AML AI stores data for its resources regionally, replicated in a synchronous manner. When a long-running operation finishes successfully, the resources can be relied on regardless of zonal failures. Processing is also replicated across zones, but this replication aims at load balancing and not high availability, so a zonal failure during an operation can result in an operation failure. If that happens, retrying the operation can address the issue. During a zonal outage, processing times might be affected. Regional outage: Customers choose the Google Cloud region they want to create their AML AI resources in. Data is never replicated across regions. Customer traffic is never routed to a different region by AML AI. In the case of a regional failure, AML AI will become available again as soon as the outage is resolved. API keys API keys provides scalable API key resource management for a project. API keys is a global service, meaning that keys are visible and accessible from any Google Cloud location. Its data and metadata are stored redundantly across multiple zones and regions. API keys is resilient to both zonal and regional outages. In the case of a zonal or regional outage, API keys continues to serve requests from another zone in the same or a different region. For more information about API keys, see API keys API overview. Apigee Apigee provides a secure, scalable, and reliable platform for developing and managing APIs. Apigee offers both single-region and multi-region deployments. Zonal outage: Customer runtime data is replicated across multiple availability zones. Therefore, a single-zone outage does not impact Apigee. Regional outage: For single-region Apigee instances, if a region goes down, Apigee instances are unavailable in that region and can't be restored to different regions. For multi-region Apigee instances, the data is replicated across all of the regions asynchronously. Therefore, failure of one region doesn't reduce traffic entirely. However, you might not be able to access uncommitted data in the failed region. You can divert the traffic away from unhealthy regions. To achieve automatic traffic failover, you can configure network routing using managed instance groups (MIGs). AutoML Translation AutoML Translation is a machine translation service that allows you to import your own data (sentence pairs) to train custom models for your domain-specific needs. Zonal outage: AutoML Translation has active compute servers in multiple zones and regions. It also supports synchronous data replication across zones within regions. These features help AutoML Translation achieve instantaneous failover without any data loss for zonal failures, and without requiring any customer input or adjustments. Regional outage: In the case of a regional failure, AutoML Translation is not available. AutoML Vision AutoML Vision is part of Vertex AI. It offers a unified framework to create datasets, import data, train models, and serve models for online prediction and batch prediction. AutoML Vision is a regional offering. Customers can choose which region they want to launch a job from, but they can't choose the specific zones within that region. The service automatically load-balances workloads across different zones within the region. Zonal outage: AutoML Vision stores metadata for the jobs regionally, and writes synchronously across zones within the region. The jobs are launched in a specific zone, as selected by Cloud Load Balancing.
AutoML Vision training jobs: A zonal outage causes any running jobs to fail, and the job status updates to failed. If a job fails, retry it immediately. The new job is routed to an available zone. AutoML Vision batch prediction jobs: Batch prediction is built on top of Vertex AI Batch prediction. When a zonal outage occurs, the service automatically retries the job by routing it to available zones. If multiple retries fail, the job status updates to failed. Subsequent user requests to run the job are routed to an available zone. Regional outage: Customers choose the Google Cloud region they want to run their jobs in. Data is never replicated across regions. In a regional failure, the AutoML Vision service is unavailable in that region. It becomes available again when the outage resolves. We recommend that customers use multiple regions to run their jobs. If a regional outage occurs, direct jobs to a different available region. Batch Batch is a fully managed service to queue, schedule, and execute batch jobs on Google Cloud. Batch settings are defined at the region level. Customers must choose a region to submit their batch jobs, not a zone in a region. When a job is submitted, Batch synchronously writes customer data to multiple zones. However, customers can specify the zones where Batch VMs run jobs. Zonal failure: When a single zone fails, the tasks running in that zone also fail. If tasks have retry settings, Batch automatically fails over those tasks to other active zones in the same region. The automatic failover is subject to availability of resources in active zones in the same region. Jobs that require zonal resources (like VMs, GPUs, or zonal persistent disks) that are only available in the failed zone are queued until the failed zone recovers or until the queueing timeouts of the jobs are reached. When possible, we recommend that customers let Batch choose zonal resources to run their jobs. Doing so helps ensure that the jobs are resilient to a zonal outage. Regional failure: In case of a regional failure, the service control plane is unavailable in the region. The service doesn't replicate data or redirect requests across regions. We recommend that customers use multiple regions to run their jobs and redirect jobs to a different region if a region fails. Chrome Enterprise Premium threat and data protection Chrome Enterprise Premium threat and data protection is part of the Chrome Enterprise Premium solution. It extends Chrome with a variety of security features, including malware and phishing protection, Data Loss Prevention (DLP), URL filtering rules, and security reporting. Chrome Enterprise Premium admins can opt in to storing customer core contents that violate DLP or malware policies into Google Workspace rule log events and/or into Cloud Storage for future investigation. Google Workspace rule log events are powered by a multi-regional Spanner database. Chrome Enterprise Premium can take up to several hours to detect policy violations. During this time, any unprocessed data is subject to data loss from a zonal or regional outage. Once a violation is detected, the contents that violate your policies are written to Google Workspace rule log events and/or to Cloud Storage. Zonal and regional outage: Because Chrome Enterprise Premium threat and data protection is multi-zonal and multi-regional, it can survive a complete, unplanned loss of a zone or a region without a loss in availability.
It provides this level of reliability by redirecting traffic to its service on other active zones or regions. However, because it can take Chrome Enterprise Premium threat and data protection several hours to detect DLP and malware violations, any unprocessed data in a specific zone or region is subject to loss from a zonal or regional outage. BigQuery BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse designed for business agility. BigQuery supports the following location types for user datasets: A region: a specific geographical location, such as Iowa (us-central1) or Montréal (northamerica-northeast1). A multi-region: a large geographic area that contains two or more geographic places, such as the United States (US) or Europe (EU). In either case, data is stored redundantly in two zones within a single region within the selected location. Data written to BigQuery is synchronously written to both the primary and secondary zones. This protects against unavailability of a single zone within the region, but not against a regional outage. Binary Authorization Binary Authorization is a software supply chain security product for GKE and Cloud Run. All Binary Authorization policies are replicated across multiple zones within every region. Replication helps Binary Authorization policy read operations recover from failures of other regions. Replication also makes read operations tolerant of zonal failures within each region. Binary Authorization enforcement operations are resilient against zonal outages, but they are not resilient against regional outages. Enforcement operations run in the same region as the GKE cluster or Cloud Run job that's making the request. Therefore, in the event of a regional outage, there is nothing running to make Binary Authorization enforcement requests. Certificate Manager Certificate Manager lets you acquire and manage Transport Layer Security (TLS) certificates for use with different types of Cloud Load Balancing. In the case of a zonal outage, regional and global Certificate Manager are resilient to zonal failures because jobs and databases are redundant across multiple zones within a region. In the case of a regional outage, global Certificate Manager is resilient to regional failures because jobs and databases are redundant across multiple regions. Regional Certificate Manager is a regional product, so it cannot withstand a regional failure. Cloud Intrusion Detection System Cloud Intrusion Detection System (Cloud IDS) is a zonal service that provides zonally-scoped IDS Endpoints, which process the traffic of VMs in one specific zone, and thus isn't tolerant of zonal or regional outages. Zonal outage: Cloud IDS is tied to VM instances. If a customer plans to mitigate zonal outages by deploying VMs in multiple zones (manually or via Regional Managed Instance Groups), they will need to deploy Cloud IDS Endpoints in those zones as well. Regional Outage: Cloud IDS is a regional product. It doesn't provide any cross-regional functionality. A regional failure will take down all Cloud IDS functionality in all zones in that region. Google Security Operations SIEM Google Security Operations SIEM (which is part of Google Security Operations) is a fully managed service that helps security teams detect, investigate, and respond to threats. Google Security Operations SIEM has regional and multi-regional offerings. 
In regional offerings, data and traffic are automatically load-balanced across zones within the chosen region, and data is stored redundantly across availability zones within the region. Multi-regions are geo-redundant. That redundancy provides a broader set of protections than regional storage. It also helps to ensure that the service continues to function even if a full region is lost. The majority of data ingestion paths replicate customer data synchronously across multiple locations. When data is replicated asynchronously, there is a time window (a recovery point objective, or RPO) during which the data isn't yet replicated across several locations. This is the case when ingesting with feeds in multi-regional deployments. After the RPO, the data is available in multiple locations. Zonal outage: Regional deployments: Requests are served from any zone within the region. Data is synchronously replicated across multiple zones. In case of a full-zone outage, the remaining zones continue to serve traffic and continue to process the data. Redundant provisioning and automated scaling for Google Security Operations SIEM helps to ensure that the service remains operational in the remaining zones during these load shifts. Multi-regional deployments: Zonal outages are equivalent to regional outages. Regional outage: Regional deployments: Google Security Operations SIEM stores all customer data within a single region and traffic is never routed across regions. In the event of a regional outage, Google Security Operations SIEM is unavailable in the region until the outage is resolved. Multi-regional deployments (without feeds): Requests are served from any region of the multi-regional deployment. Data is synchronously replicated across multiple regions. In case of a full-region outage, the remaining regions continue to serve traffic and continue to process the data. Redundant provisioning and automated scaling for Google Security Operations SIEM helps ensure that the service remains operational in the remaining regions during these load shifts. Multi-regional deployments (with feeds): Requests are served from any region of the multi-regional deployment. Data is replicated asynchronously across multiple regions with the provided RPO. In case of a full-region outage, only data stored after the RPO is available in the remaining regions. Data within the RPO window might not be replicated. Cloud Asset Inventory Cloud Asset Inventory is a high-performance, resilient, global service that maintains a repository of Google Cloud resource and policy metadata. Cloud Asset Inventory provides search and analysis tools that help you track deployed assets across organizations, folders, and projects. In the case of a zone outage, Cloud Asset Inventory continues to serve requests from another zone in the same or different region. In the case of a regional outage, Cloud Asset Inventory continues to serve requests from other regions. Bigtable Bigtable is a fully managed high performance NoSQL database service for large analytical and operational workloads. Bigtable replication overview Bigtable offers a flexible and fully configurable replication feature, which you can use to increase the availability and durability of your data by copying it to clusters in multiple regions or multiple zones within the same region. Bigtable can also provide automatic failover for your requests when you use replication. 
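As a minimal sketch of configuring this kind of replication, the following uses the google-cloud-bigtable Python client to create an instance with clusters in two different regions and an app profile that uses multi-cluster routing, so that requests can fail over automatically. The project, instance, cluster, and zone names are placeholders, and you should confirm the exact client helpers against the current library documentation before relying on them.

```python
# Sketch: create a replicated Bigtable instance with clusters in two regions
# and a multi-cluster routing app profile (all names below are placeholders).
from google.cloud import bigtable
from google.cloud.bigtable import enums

client = bigtable.Client(project="example-project", admin=True)
instance = client.instance(
    "example-instance",
    display_name="Replicated example",
    instance_type=enums.Instance.Type.PRODUCTION,
    labels={"env": "example"},
)

# One cluster per region; Bigtable replicates data between them asynchronously.
cluster_east = instance.cluster(
    "example-cluster-east", location_id="us-east1-b", serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)
cluster_west = instance.cluster(
    "example-cluster-west", location_id="us-west1-a", serve_nodes=3,
    default_storage_type=enums.StorageType.SSD,
)
operation = instance.create(clusters=[cluster_east, cluster_west])
operation.result(timeout=600)  # wait for instance creation to finish

# Multi-cluster routing lets Bigtable send requests to the nearest available
# cluster, which is what enables automatic failover during an outage.
app_profile = instance.app_profile(
    "failover-profile", routing_policy_type=enums.RoutingPolicyType.ANY
)
app_profile.create(ignore_warnings=True)
```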
When using multi-zonal or multi-regional configurations with multi-cluster routing, in the case of a zonal or regional outage, Bigtable automatically reroutes traffic and serves requests from the nearest available cluster. Because Bigtable replication is asynchronous and eventually consistent, very recent changes to data in the location of the outage might be unavailable if they have not been replicated yet to other locations. Performance considerations Important: You must ensure that CPU usage and data sizes remain within the recommended maximum values, and your clusters must remain adequately provisioned at all times in order for replication to perform predictably. When CPU resource demands exceed available node capacity, Bigtable always prioritizes serving incoming requests ahead of replication traffic. For more information about how to use Bigtable replication with your workload, see Cloud Bigtable replication overview and examples of replication settings. Bigtable nodes are used both for serving incoming requests and for performing replication of data from other clusters. In addition to maintaining sufficient node counts per cluster, you must also ensure that your applications use proper schema design to avoid hotspots, which can cause excessive or imbalanced CPU usage and increased replication latency. For more information about designing your application schema to maximize Bigtable performance and efficiency, see Schema design best practices. Monitoring Bigtable provides several ways to visually monitor the replication latency of your instances and clusters using the charts for replication available in the Google Cloud console. You can also programmatically monitor Bigtable replication metrics using the Cloud Monitoring API. Certificate Authority Service Certificate Authority Service (CA Service) lets customers simplify, automate, and customize the deployment, management, and security of private certificate authorities (CA) and to resiliently issue certificates at scale. Zonal outage: CA Service is resilient to zonal failures because its control plane is redundant across multiple zones within a region. If there is a zonal outage, CA Service continues to serve requests from another zone in the same region without interruption. Because data is replicated synchronously there is no data loss or corruption. Regional outage: CA Service is a regional product, so it cannot withstand a regional failure. If you require resilience to regional failures, create issuing CAs in two different regions. Create the primary issuing CA in the region where you need certificates. Create a fallback CA in a different region. Use the fallback when the primary subordinate CA's region has an outage. If needed, both CAs can chain up to the same root CA. Cloud Billing The Cloud Billing API allows developers to manage billing for their Google Cloud projects programmatically. The Cloud Billing API is designed as a global system with updates synchronously written to multiple zones and regions. Zonal or regional failure: The Cloud Billing API will automatically fail over to another zone or region. Individual requests may fail, but a retry policy should allow subsequent attempts to succeed. Cloud Build Cloud Build is a service that executes your builds on Google Cloud. Cloud Build is composed of regionally isolated instances that synchronously replicate data across zones within the region. 
We recommend that you use specific Google Cloud regions instead of the global region, and ensure that the resources your build uses (including log buckets, Artifact Registry repositories, and so on) are aligned with the region that your build runs in. In the case of a zonal outage, control plane operations are unaffected. However, currently executing builds within the failing zone will be delayed or permanently lost. Newly triggered builds will automatically be distributed to the remaining functioning zones. In the case of a regional failure, the control plane will be offline, and currently executing builds will be delayed or permanently lost. Triggers, worker pools, and build data are never replicated across regions. We recommend that you prepare triggers and worker pools in multiple regions to make mitigation of an outage easier. Cloud CDN Cloud CDN distributes and caches content across many locations on Google's network to reduce serving latency for clients. Cached content is served on a best-effort basis -- when a request cannot be served by the Cloud CDN cache, the request is forwarded to origin servers, such as backend VMs or Cloud Storage buckets, where the original content is stored. When a zone or a region fails, caches in the affected locations are unavailable. Inbound requests are routed to available Google edge locations and caches. If these alternate caches cannot serve the request, they forward the request to an available origin server. Provided that server can serve the request with up-to-date data, there will be no loss of content. An increased rate of cache misses will cause the origin servers to experience higher-than-normal traffic volumes as the caches are filled. Subsequent requests are served from the caches unaffected by the zone or region outage. For more information about Cloud CDN and cache behavior, see the Cloud CDN documentation. Cloud Composer Cloud Composer is a managed workflow orchestration service that lets you create, schedule, monitor, and manage workflows that span across clouds and on-premises data centers. Cloud Composer environments are built on the Apache Airflow open source project. Cloud Composer API availability isn't affected by zonal unavailability. During a zonal outage, you retain access to the Cloud Composer API, including the ability to create new Cloud Composer environments. A Cloud Composer environment has a GKE cluster as a part of its architecture. During a zonal outage, workflows on the cluster might be disrupted: In Cloud Composer 1, the environment's cluster is a zonal resource, so a zonal outage might make the cluster unavailable. Workflows that are executing at the time of the outage might be stopped before completion. In Cloud Composer 2, the environment's cluster is a regional resource. However, workflows that are executed on nodes in the zones that are affected by a zonal outage might be stopped before completion. In both versions of Cloud Composer, a zonal outage might cause partially executed workflows to stop executing, including any external actions that you configured the workflow to accomplish. Depending on the workflow, this can cause inconsistencies externally, such as if the workflow stops in the middle of a multi-step execution to modify external data stores. Therefore, you should consider the recovery process when you design your Airflow workflow, including how to detect partially unexecuted workflow states and repair any partial data changes.
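To make that recovery process easier, one common pattern is to write Airflow tasks so that they are idempotent and record their own completion, which lets you simply re-run a DAG after an outage to repair partially executed state. The sketch below assumes a recent Airflow 2 release (as used by Cloud Composer 2) and the google-cloud-storage client; the DAG, bucket, and marker names are placeholders rather than anything prescribed by Cloud Composer.

```python
# Sketch of an idempotent, resumable task pattern for a Cloud Composer DAG.
# Each task checks for a completion marker before doing work, so re-running
# the DAG after a zonal outage repairs partially executed state instead of
# duplicating it. Bucket and object names below are placeholders.
import pendulum
from airflow.decorators import dag, task


@dag(schedule=None, start_date=pendulum.datetime(2024, 1, 1, tz="UTC"), catchup=False)
def resumable_pipeline():

    @task(retries=3)
    def transform_partition(partition: str) -> str:
        from google.cloud import storage  # assumes google-cloud-storage is installed

        bucket = storage.Client().bucket("example-pipeline-state")  # placeholder
        marker = bucket.blob(f"markers/{partition}.done")
        if marker.exists():
            # This partition finished before the outage; skip it to stay idempotent.
            return f"skipped {partition}"
        # ... perform the real transformation and write its output here ...
        marker.upload_from_string("done")  # record completion only after success
        return f"processed {partition}"

    transform_partition("2024-01-01")


resumable_pipeline()
```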
In Cloud Composer 1, during a zone outage, you can choose to start a new Cloud Composer environment in another zone. Because Airflow keeps the state of your workflows in its metadata database, transferring this information to a new Cloud Composer environment can take additional steps and preparation. In Cloud Composer 2, you can address zonal outages by setting up disaster recovery with environment snapshots in advance. During a zone outage, you can switch to another environment by transferring the state of your workflows with an environment snapshot. Only Cloud Composer 2 supports disaster recovery with environment snapshots. Cloud Data Fusion Cloud Data Fusion is a fully managed enterprise data integration service for quickly building and managing data pipelines. It provides three editions. Zonal outages impact Developer edition instances. Regional outages impact Basic and Enterprise edition instances. To control access to resources, you might design and run pipelines in separate environments. This separation lets you design a pipeline once, and then run it in multiple environments. You can recover pipelines in both environments. For more information, see Back up and restore instance data. The following advice applies to both regional and zonal outages. Outages in the pipeline design environment In the design environment, save pipeline drafts in case of an outage. Depending on specific RTO and RPO requirements, you can use the saved drafts to restore the pipeline in a different Cloud Data Fusion instance during an outage. Outages in the pipeline execution environment In the execution environment, you start the pipeline internally with Cloud Data Fusion triggers or schedules, or externally with orchestration tools, such as Cloud Composer. To be able to recover runtime configurations of pipelines, back up the pipelines and configurations, such as plugins and schedules. In an outage, you can use the backup to replicate an instance in an unaffected region or zone. Another way to prepare for outages is to have multiple instances across the regions with the same configuration and pipeline set. If you use external orchestration, running pipelines can be load balanced automatically between instances. Take special care to ensure that there are no resources (such as data sources or orchestration tools) tied to a single region and used by all instances, as this could become a central point of failure in an outage. For example, you can have multiple instances in different regions and use Cloud Load Balancing and Cloud DNS to direct the pipeline run requests to an instance that isn't affected by an outage (see the example tier one and tier three architectures). Outages for other Google Cloud data services in the pipeline Your instance might use other Google Cloud services as data sources or pipeline execution environments, such as Dataproc, Cloud Storage, or BigQuery. Those services can be in different regions. When cross-regional execution is required, a failure in either region leads to an outage. In this scenario, you follow the standard disaster recovery steps, keeping in mind that cross-regional setup with critical services in different regions is less resilient. Cloud Deploy Cloud Deploy provides continuous delivery of workloads into runtime services such as GKE and Cloud Run. The service is composed of regional instances that synchronously replicate data across zones within the region. Zonal outage: Control plane operations are unaffected. 
However, Cloud Build builds (for example, render or deploy operations) that are running when a zone fails are delayed or permanently lost. During an outage, the Cloud Deploy resource that triggered the build (a release or rollout) displays a failure status that indicates the underlying operation failed. You can re-create the resource to start a new build in the remaining functioning zones. For example, create a new rollout by redeploying the release to a target. Regional outage: Control plane operations are unavailable, as is data from Cloud Deploy, until the region is restored. To help make it easier to restore service in the event of a regional outage, we recommend that you store your delivery pipeline and target definitions in source control. You can use these configuration files to re-create your Cloud Deploy pipelines in a functioning region. During an outage, data about existing releases is lost. Create a new release to continue deploying software to your targets. Cloud DNS Cloud DNS is a high-performance, resilient, global Domain Name System (DNS) service that publishes your domain names to the global DNS in a cost-effective way. In the case of a zonal outage, Cloud DNS continues to serve requests from another zone in the same or different region without interruption. Updates to Cloud DNS records are synchronously replicated across zones within the region where they are received. Therefore, there is no data loss. In the case of a regional outage, Cloud DNS continues to serve requests from other regions. It is possible that very recent updates to Cloud DNS records will be unavailable because updates are first processed in a single region before being asynchronously replicated to other regions. Cloud Run functions Cloud Run functions is a stateless computing environment where customers can run their function code on Google's infrastructure. Cloud Run functions is a regional offering, meaning customers can choose the region but not the zones that make up a region. Data and traffic are automatically load balanced across zones within a region. Functions are automatically scaled to meet incoming traffic and are load balanced across zones as necessary. Each zone maintains a scheduler that provides this autoscaling per-zone. It's also aware of the load other zones are receiving and will provision extra capacity in-zone to allow for any zonal failures. Zonal outage: Cloud Run functions stores metadata as well as the deployed function. This data is stored regionally and written in a synchronous manner. The Cloud Run functions Admin API only returns the API call once the data has been committed to a quorum within a region. Since data is regionally stored, data plane operations are not affected by zonal failures either. Traffic automatically routes to other zones in the event of a zonal failure. Regional outage: Customers choose the Google Cloud region they want to create their function in. Data is never replicated across regions. Customer traffic will never be routed to a different region by Cloud Run functions. In the case of a regional failure, Cloud Run functions will become available again as soon as the outage is resolved. Customers are encouraged to deploy to multiple regions and use Cloud Load Balancing to achieve higher availability if desired. Cloud Healthcare API Cloud Healthcare API, a service for storing and managing healthcare data, is built to provide high availability and offers protection against zonal and regional failures, depending on a chosen configuration. 
Regional configuration: in its default configuration, Cloud Healthcare API offers protection against zonal failure. Service is deployed in three zones across one region, with data also triplicated across different zones within the region. In case of a zonal failure, affecting either service layer or data layer, the remaining zones take over without interruption. With regional configuration, if a whole region where service is located experiences an outage, service will be unavailable until the region comes back online. In the unforeseen event of a physical destruction of a whole region, data stored in that region will be lost. Multi-regional configuration: in its multiregional configuration, Cloud Healthcare API is deployed in three zones belonging to three different regions. Data is also replicated across three regions. This guards against loss of service in case of a whole-region outage, since the remaining regions would automatically take over. Structured data, such as FHIR, is synchronously replicated across multiple regions, so it's protected against data loss in case of a whole-region outage. Data that is stored in Cloud Storage buckets, such as DICOM and Dictation or large HL7v2/FHIR objects, is asynchronously replicated across multiple regions. Cloud Identity Cloud Identity services are distributed across multiple regions and use dynamic load balancing. Cloud Identity does not allow users to select a resource scope. If a particular zone or region experiences an outage, traffic is automatically distributed to other zones or regions. Persistent data is mirrored in multiple regions with synchronous replication in most cases. For performance reasons, a few systems, such as caches or changes affecting large numbers of entities, are asynchronously replicated across regions. If the primary region in which the most current data is stored experiences an outage, Cloud Identity serves stale data from another location until the primary region becomes available. Cloud Interconnect Cloud Interconnect offers customers RFC 1918 access to Google Cloud networks from their on-premises data centers, over physical cables connected to Google peering edge. Cloud Interconnect provides customers with a 99.9% SLA if they provision connections to two EADs (Edge Availability Domains) in a metropolitan area. A 99.99% SLA is available if the customer provisions connections in two EADs in two metropolitan areas to two regions with Global Routing. See Topology for non-critical applications overview and Topology for production-level applications overview for more information. Cloud Interconnect is compute-zone independent and provides high availability in the form of EADs. In the event of an EAD failure, the BGP session to that EAD breaks and traffic fails over to the other EAD. In the event of a regional failure, BGP sessions to that region break and traffic fails over to the resources in the working region. This applies when Global Routing is enabled. Cloud Key Management Service Cloud Key Management Service (Cloud KMS) provides scalable and highly-durable cryptographic key resource management. Cloud KMS stores all of its data and metadata in Spanner databases which provide high data durability and availability with synchronous replication. Cloud KMS resources can be created in a single region, multiple regions, or globally. In the case of zonal outage, Cloud KMS continues to serve requests from another zone in the same or different region without interruption. 
Because data is replicated synchronously, there is no data loss or corruption. When the zone outage is resolved, full redundancy is restored. In the case of a regional outage, regional resources in that region are offline until the region becomes available again. Note that even within a region, at least 3 replicas are maintained in separate zones. When higher availability is required, resources should be stored in a multi-region or global configuration. Multi-region and global configurations are designed to stay available through a regional outage by geo-redundantly storing and serving data in more than one region. Cloud External Key Manager (Cloud EKM) Cloud External Key Manager is integrated with Cloud Key Management Service to let you control and access external keys through supported third-party partners. You can use these external keys to encrypt data at rest to use for other Google Cloud services that support customer-managed encryption keys (CMEK) integration. Zonal outage: Cloud External Key Manager is resilient to zonal outages because of the redundancy that's provided by multiple zones in a region. If a zonal outage occurs, traffic is rerouted to other zones within the region. While traffic is rerouting, you might see an increase in errors, but the service is still available. Regional outage: Cloud External Key Manager isn't available during a regional outage in the affected region. There is no failover mechanism that redirects requests across regions. We recommend that customers use multiple regions to run their jobs. Cloud External Key Manager doesn't store any customer data persistently. Thus, there's no data loss during a regional outage within the Cloud External Key Manager system. However, Cloud External Key Manager depends on the availability of other services, like Cloud Key Management Service and external third party vendors. If those systems fail during a regional outage, you could lose data. The RPO/RTO of these systems are outside the scope of Cloud External Key Manager commitments. Cloud Load Balancing Cloud Load Balancing is a fully distributed, software-defined managed service. With Cloud Load Balancing, a single anycast IP address can serve as the frontend for backends in regions around the world. It isn't hardware-based, so you don't need to manage a physical load-balancing infrastructure. Load balancers are a critical component of most highly available applications. Cloud Load Balancing offers both regional and global load balancers. It also provides cross-region load balancing, including automatic multi-region failover, which moves traffic to failover backends if your primary backends become unhealthy. The global load balancers are resilient to both zonal and regional outages. The regional load balancers are resilient to zonal outages but are affected by outages in their region. However, in either case, it is important to understand that the resilience of your overall application depends not just on which type of load balancer you deploy, but also on the redundancy of your backends. For more information about Cloud Load Balancing and its features, see Cloud Load Balancing overview. Cloud Logging Cloud Logging consists of two main parts: the Logs Router and Cloud Logging storage. The Logs Router handles streaming log events and directs the logs to Cloud Storage, Pub/Sub, BigQuery, or Cloud Logging storage. Cloud Logging storage is a service for storing, querying, and managing compliance for logs. 
It supports many users and workflows including development, compliance, troubleshooting, and proactive alerting. Logs Router & incoming logs: During a zonal outage, the Cloud Logging API routes logs to other zones in the region. Normally, logs being routed by the Logs Router to Cloud Logging, BigQuery, or Pub/Sub are written to their end destination as soon as possible, while logs sent to Cloud Storage are buffered and written in batches hourly. Log Entries: In the event of a zonal or regional outage, log entries that have been buffered in the affected zone or region and not written to the export destination become inaccessible. Logs-based metrics are also calculated in the Logs Router and subject to the same constraints. Once delivered to the selected log export location, logs are replicated according to the destination service. Logs that are exported to Cloud Logging storage are synchronously replicated across two zones in a region. For the replication behavior of other destination types, see the relevant section in this article. Note that logs exported to Cloud Storage are batched and written every hour. Therefore we recommend using Cloud Logging storage, BigQuery, or Pub/Sub to minimize the amount of data impacted by an outage. Log Metadata: Metadata such as sink and exclusion configuration is stored globally but cached regionally so in the event of an outage, the regional Log Router instances would operate. Single region outages have no impact outside of the region. Cloud Monitoring Cloud Monitoring consists of a variety of interconnected features, such as dashboards (both built-in and user-defined), alerting, and uptime monitoring. All Cloud Monitoring configuration, including dashboards, uptime checks, and alert policies, are globally defined. All changes to them are replicated synchronously to multiple regions. Therefore, during both zonal and regional outages, successful configuration changes are durable. In addition, although transient read and write failures can occur when a zone or region initially fails, Cloud Monitoring reroutes requests towards available zones and regions. In this situation you may retry configuration changes with exponential backoff. When writing metrics for a specific resource, Cloud Monitoring first identifies the region in which the resource resides. It then writes three independent replicas of the metric data within the region. The overall regional metric write is returned as successful as soon as one of the three writes succeeds. The three replicas are not guaranteed to be in different zones within the region. Zonal: During a zonal outage, metric writes and reads are completely unavailable for resources in the affected zone. Effectively, Cloud Monitoring acts like the affected zone doesn't exist. Regional: During a regional outage, metric writes and reads are completely unavailable for resources in the affected region. Effectively, Cloud Monitoring acts like the affected region doesn't exist. Cloud NAT Cloud NAT (network address translation) is a distributed, software-defined managed service that lets certain resources without external IP addresses create outbound connections to the internet. It's not based on proxy VMs or appliances. Instead, Cloud NAT configures the Andromeda software that powers your Virtual Private Cloud network so that it provides source network address translation (source NAT or SNAT) for VMs without external IP addresses. 
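The Cloud Monitoring section above notes that configuration changes made while a zone or region is failing can hit transient errors and should be retried with exponential backoff. Below is a minimal, generic sketch of such a retry loop; the commented usage against an alert-policy client is hypothetical and only illustrates where the retry wrapper would apply.

```python
import random
import time

def call_with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=60.0):
    """Retry a transient failure with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:  # in practice, catch only retriable API errors
            if attempt == max_attempts - 1:
                raise
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.5))

# Hypothetical usage: re-apply an alert-policy change that failed while a
# zone or region was becoming unavailable.
# call_with_backoff(lambda: client.update_alert_policy(alert_policy=policy))
```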
Cloud NAT also provides destination network address translation (destination NAT or DNAT) for established inbound response packets. For more information on the functionality of Cloud NAT, see the documentation. Zonal outage: Cloud NAT is resilient to zonal failures because the control plane and network data plane are redundant across multiple zones within a region. Regional outage: Cloud NAT is a regional product, so it cannot withstand a regional failure. Cloud Router Cloud Router is a fully distributed and managed Google Cloud service that uses the Border Gateway Protocol (BGP) to advertise IP address ranges. It programs dynamic routes based on the BGP advertisements that it receives from a peer. Instead of a physical device or appliance, each Cloud Router consists of software tasks that act as BGP speakers and responders. In the case of a zonal outage, Cloud Router with a high availability (HA) configuration is resilient to zonal failures. In that case, one interface might lose connectivity, but traffic is redirected to the other interface through dynamic routing using BGP. In the case of a regional outage, Cloud Router is a regional product, so it cannot withstand a regional failure. If customers have enabled global routing mode, routing between the failed region and other regions might be affected. Cloud Run Cloud Run is a stateless computing environment where customers can run their containerized code on Google's infrastructure. Cloud Run is a regional offering, meaning customers can choose the region but not the zones that make up a region. Data and traffic are automatically load balanced across zones within a region. Container instances are automatically scaled to meet incoming traffic and are load balanced across zones as necessary. Each zone maintains a scheduler that provides this autoscaling per-zone. It's also aware of the load other zones are receiving and will provision extra capacity in-zone to allow for any zonal failures. Zonal outage: Cloud Run stores metadata as well as the deployed container. This data is stored regionally and written in a synchronous manner. The Cloud Run Admin API only returns the API call once the data has been committed to a quorum within a region. Since data is regionally stored, data plane operations are not affected by zonal failures either. Traffic will automatically route to other zones in the event of a zonal failure. Regional outage: Customers choose the Google Cloud region they want to create their Cloud Run service in. Data is never replicated across regions. Customer traffic will never be routed to a different region by Cloud Run. In the case of a regional failure, Cloud Run will become available again as soon as the outage is resolved. Customers are encouraged to deploy to multiple regions and use Cloud Load Balancing to achieve higher availability if desired. Cloud Shell Cloud Shell provides Google Cloud users access to single user Compute Engine instances that are preconfigured for onboarding, education, development, and operator tasks. Cloud Shell isn't suitable for running application workloads and is instead intended for interactive development and educational use cases. It has per-user runtime quota limits, it is automatically shut down after a short period of inactivity, and the instance is only accessible to the assigned user. The Compute Engine instances backing the service are zonal resources, so in the event of a zone outage, a user's Cloud Shell is unavailable. 
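The Cloud Run section above encourages deploying to multiple regions and using Cloud Load Balancing for higher availability. The following is a minimal sketch, assuming the google-cloud-run (run_v2) Python client; the project ID, container image, and service name are hypothetical, and a global external load balancer would still need to be configured in front of the two regional services.

```python
from google.cloud import run_v2

client = run_v2.ServicesClient()

# Same container image deployed as a service in two regions (hypothetical names).
service = run_v2.Service(
    template=run_v2.RevisionTemplate(
        containers=[run_v2.Container(image="us-docker.pkg.dev/my-project/app/web:latest")]
    )
)

for region in ("us-central1", "us-east1"):
    operation = client.create_service(
        parent=f"projects/my-project/locations/{region}",
        service=service,
        service_id="web",
    )
    operation.result()  # wait for each regional deployment to complete
```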
Cloud Source Repositories Cloud Source Repositories lets users create and manage private source code repositories. This product is designed with a global model, so you don't need to configure it for regional or zonal resiliency. Instead, git push operations against Cloud Source Repositories synchronously replicate the source repository update to multiple zones across multiple regions. This means that the service is resilient to outages in any one region. If a particular zone or region experiences an outage, traffic is automatically distributed to other zones or regions. The feature to automatically mirror repositories from GitHub or Bitbucket can be affected by problems in those products. For example, mirroring is affected if GitHub or Bitbucket can't alert Cloud Source Repositories of new commits, or if Cloud Source Repositories can't retrieve content from the updated repository. Spanner Spanner is a scalable, highly-available, multi-version, synchronously replicated, and strongly consistent database with relational semantics. Regional Spanner instances synchronously replicate data across three zones in a single region. A write to a regional Spanner instance is synchronously sent to all 3 replicas and acknowledged to the client after at least 2 replicas (majority quorum of 2 out of 3) have committed the write. This makes Spanner resilient to a zone failure by providing access to all the data, as the latest writes have been persisted and a majority quorum for writes can still be achieved with 2 replicas. Spanner multi-regional instances include a write-quorum that synchronously replicates data across 5 zones located in three regions (two read-write replicas each in the default-leader region and another region; and one replica in the witness region). A write to a multi-regional Spanner instance is acknowledged after at least 3 replicas (majority quorum of 3 out of 5) have committed the write. In the event of a zone or region failure, Spanner has access to all the data (including latest writes) and serves read/write requests as the data is persisted in at least 3 zones across 2 regions at the time the write is acknowledged to the client. See the Spanner instance documentation for more information about these configurations, and the replication documentation for more information about how Spanner replication works. Cloud SQL Cloud SQL is a fully managed relational database service for MySQL, PostgreSQL, and SQL Server. Cloud SQL uses managed Compute Engine virtual machines to run the database software. It offers a high availability configuration for regional redundancy, protecting the database from a zone outage. Cross-region replicas can be provisioned to protect the database from a region outage. Because the product also offers a zonal option, which is not resilient to a zone or region outage, you should be careful to select the high availability configuration, cross-region replicas, or both. Zonal outage: The high availability option creates a primary and standby VM instance in two separate zones within one region. During normal operation, the primary VM instance serves all requests, writing database files to a Regional Persistent Disk, which is synchronously replicated to the primary and standby zones. If a zone outage affects the primary instance, Cloud SQL initiates a failover during which the Persistent Disk is attached to the standby VM and traffic is rerouted. 
During this process, the database must be initialized, which includes processing any transactions written to the transaction log but not applied to the database. The number and type of unprocessed transactions can increase the RTO time. High recent writes can lead to a backlog of unprocessed transactions. The RTO time is most heavily impacted by (a) high recent write activity and (b) recent changes to database schemas. Finally, when the zonal outage has been resolved, you can manually trigger a failback operation to resume serving in the primary zone. For more details on the high availability option, see the Cloud SQL high availability documentation. Regional outage: The cross-region replica option protects your database from regional outages by creating read replicas of your primary instance in other regions. The cross-region replication uses asynchronous replication, which allows the primary instance to commit transactions before they are committed on replicas. The time difference between when a transaction is committed on the primary instance and when it is committed on the replica is known as "replication lag" (which can be monitored). This metric reflects both transactions which have not been sent from the primary to replicas, as well as transactions that have been received but have not been processed by the replica. Transactions not sent to the replica would become unavailable during a regional outage. Transactions received but not processed by the replica impact the recovery time, as described below. Cloud SQL recommends testing your workload with a configuration that includes a replica to establish a "safe transactions per second (TPS)" limit, which is the highest sustained TPS that doesn't accumulate replication lag. If your workload exceeds the safe TPS limit, replication lag accumulates, negatively affecting RPO and RTO values. As general guidance, avoid using small instance configurations (<2 vCPU cores, <100GB disks, or PD-HDD), which are susceptible to replication lag. In the event of a regional outage, you must decide whether to manually promote a read replica. This is a manual operation because promotion can cause a split brain scenario in which the promoted replica accepts new transactions despite having lagged the primary instance at the time of the promotion. This can cause problems when the regional outage is resolved and you must reconcile the transactions that were never propagated from the primary to replica instances. If this is problematic for your needs, you may consider a cross-region synchronous replication database product like Spanner. Once triggered by the user, the promotion process follows steps similar to the activation of a standby instance in the high availability configuration. In that process, the read replica must process the transaction log which drives the total recovery time. Because there is no built-in load balancer involved in the replica promotion, manually redirect applications to the promoted primary. For more details on the cross-region replica option, see the Cloud SQL cross-region replica documentation. For more information about Cloud SQL DR, see the following: Cloud SQL for MySQL database disaster recovery Cloud SQL for PostgreSQL database disaster recovery Cloud SQL for SQL Server database disaster recovery Cloud Storage Cloud Storage provides globally unified, scalable, and highly durable object storage. 
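The Cloud SQL guidance above calls for monitoring replication lag so that your workload stays under its safe TPS limit. The sketch below reads the replica lag metric through the Cloud Monitoring API; the project ID is hypothetical, and the metric type shown is the Cloud SQL replica lag metric as commonly documented, so verify it against the current metrics list for your setup.

```python
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project = "projects/my-project"  # hypothetical project ID

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": int(now - 3600)}, "end_time": {"seconds": int(now)}}
)

# Metric type assumed from the Cloud SQL monitoring documentation.
results = client.list_time_series(
    request={
        "name": project,
        "filter": 'metric.type="cloudsql.googleapis.com/database/replication/replica_lag"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    for point in series.points:
        print(series.resource.labels.get("database_id"), point.value.double_value)
```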
Cloud Storage buckets can be created in one of three different location types: in a single region, in a dual-region, or in a multi-region within a continent. With regional buckets, objects are stored redundantly across availability zones in a single region. Dual-region and multi-region buckets, on the other hand, are geo-redundant. This means that after newly written data is replicated to at least one remote region, objects are stored redundantly across regions. This approach gives data in dual-region and multi-region buckets a broader set of protections than can be achieved with regional storage. Regional buckets are designed to be resilient in case of an outage in a single availability zone. If a zone experiences an outage, objects in the unavailable zone are automatically and transparently served from elsewhere in the region. Data and metadata are stored redundantly across zones, starting with the initial write. No writes are lost if a zone becomes unavailable. In the case of a regional outage, regional buckets in that region are offline until the region becomes available again. If you need higher availability, you can store data in a dual-region or multi-region configuration. Dual-region and multi-region buckets are single buckets (no separate primary and secondary locations) but they store data and metadata redundantly across regions. In the case of a regional outage, service is not interrupted. You can think of dual-region and multi-region buckets as being active-active in that you can read and write workloads in more than one region simultaneously while the bucket remains strongly consistent. This can be especially attractive for customers who want to split their workload across the two regions as part of a disaster recovery architecture. Dual-regions and multi-regions are strongly consistent because metadata is always written synchronously across regions. This approach allows the service to always determine what the latest version of an object is and where it can be served from, including from remote regions. Data is replicated asynchronously. This means that there is an RPO time window where newly written objects start out protected as regional objects, with redundancy across availability zones within a single region. The service then replicates the objects within that RPO window to one or more remote regions to make them geo-redundant. After that replication is complete, data can be served automatically and transparently from another region in the case of a regional outage. Turbo replication is a premium feature available on a dual-region bucket to obtain a smaller RPO window, which targets 100% of newly written objects being replicated and made geo-redundant within 15 minutes. RPO is an important consideration, because during a regional outage, data recently written to the affected region within the RPO window might not yet have been replicated to other regions. As a result, that data might not be accessible during the outage, and could be lost in the case of physical destruction of the data in the affected region. Cloud Translation Cloud Translation has active compute servers in multiple zones and regions. It also supports synchronous data replication across zones within regions. These features help Translation achieve instantaneous failover without any data loss for zonal failures, and without requiring any customer input or adjustments. In the case of a regional failure, Cloud Translation is not available. 
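The Cloud Storage section above describes dual-region buckets and turbo replication as a way to reduce the RPO window. The following is a minimal sketch with the google-cloud-storage Python client; the project ID and bucket name are hypothetical, and "NAM4" is one of the predefined dual-region location codes.

```python
from google.cloud import storage

client = storage.Client(project="my-project")  # hypothetical project ID

# Create a dual-region bucket (NAM4 pairs Iowa and South Carolina).
bucket = client.bucket("my-dr-bucket")  # hypothetical, must be globally unique
bucket = client.create_bucket(bucket, location="NAM4")

# Enable turbo replication, which targets geo-redundancy within 15 minutes.
bucket.rpo = "ASYNC_TURBO"
bucket.patch()

print(bucket.location, bucket.rpo)
```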
Compute Engine Compute Engine is one of Google Cloud's infrastructure-as-a-service options. It uses Google's worldwide infrastructure to offer virtual machines (and related services) to customers. Compute Engine instances are zonal resources, so in the event of a zone outage, instances are unavailable by default. Compute Engine offers managed instance groups (MIGs), which can automatically scale up additional VMs from pre-configured instance templates, both within a single zone and across multiple zones within a region. MIGs are ideal for applications that require resilience to zone loss and are stateless, but they require configuration and resource planning. Multiple regional MIGs can be used to achieve region outage resilience for stateless applications. Applications that have stateful workloads can still use stateful MIGs, but extra care needs to be taken in capacity planning, since they do not scale horizontally. In either scenario, it's important to correctly configure and test Compute Engine instance templates and MIGs ahead of time to ensure working failover capabilities to other zones. See the Develop your own reference architectures and guides section above for more information. Sole-tenancy Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's VMs. Sole-tenant nodes, like Compute Engine instances, are zonal resources. In the unlikely event of a zonal outage, they are unavailable. To mitigate zonal failures, you can create a sole-tenant node in another zone. Given that certain workloads might benefit from sole-tenant nodes for licensing or CAPEX accounting purposes, you should plan a failover strategy in advance. Recreating these resources in a different location might incur additional licensing costs or violate CAPEX accounting requirements. For general guidance, see Develop your own reference architectures and guides. Sole-tenant nodes are zonal resources and cannot withstand regional failures. To scale across zones, use regional MIGs. Networking for Compute Engine For information about high-availability setups for Interconnect connections, see the following documents: 99.99% availability for Dedicated Interconnect 99.99% availability for Partner Interconnect You can provision external IP addresses in global or regional mode, which affects their availability in the case of a regional failure. Cloud Load Balancing resilience Load balancers are a critical component of most highly available applications. It is important to understand that the resilience of your overall application depends not just on the scope of the load balancer you choose (global or regional), but also on the redundancy of your backend services. The following summarizes load balancer resilience based on the load balancer's distribution or scope. Global: each load balancer is distributed across all regions, and is resilient to both zonal and regional outages. Cross-region: each load balancer is distributed across multiple regions, and is resilient to both zonal and regional outages. Regional: each load balancer is distributed across multiple zones in the region, and is resilient to zonal outages, but an outage in a given region affects the regional load balancers in that region. For more information about choosing a load balancer, see the Cloud Load Balancing documentation. Connectivity Tests Connectivity Tests is a diagnostics tool that lets you check the connectivity between network endpoints.
It analyzes your configuration and, in some cases, performs a live data plane analysis between the endpoints. An endpoint is a source or destination of network traffic, such as a VM, Google Kubernetes Engine (GKE) cluster, load balancer forwarding rule, or an IP address. Connectivity Tests is a diagnostic tool with no data plane components. It does not process or generate user traffic. Zonal outage: Connectivity Tests resources are global. You can manage and view them in the event of a zonal outage. Connectivity Tests resources are the results of your configuration tests. These results might include the configuration data of zonal resources (for example, VM instances) in an affected zone. If there's an outage, the analysis results aren't accurate because the analysis is based on stale data from before the outage. Don't rely on it. Regional outage: In a regional outage, you can still manage and view Connectivity Tests resources. Connectivity Tests resources might include configuration data of regional resources, like subnetworks, in an affected region. If there's an outage, the analysis results aren't accurate because the analysis is based on stale data from before the outage. Don't rely on it. Container Registry Container Registry provides a scalable hosted Docker Registry implementation that securely and privately stores Docker container images. Container Registry implements the HTTP Docker Registry API. Container Registry is a global service that synchronously stores image metadata redundantly across multiple zones and regions by default. Container images are stored in Cloud Storage multi-regional buckets. With this storage strategy, Container Registry provides zonal outage resilience in all cases, and regional outage resilience for any data that has been asynchronously replicated to multiple regions by Cloud Storage. Database Migration Service Database Migration Service is a fully managed Google Cloud service to migrate databases from other cloud providers or from on-premises data centers to Google Cloud. Database Migration Service is architected as a regional control plane. The control plane doesn't depend on an individual zone in a given region. During a zonal outage, you retain access to the Database Migration Service APIs, including the ability to create and manage migration jobs. During a regional outage, you lose access to Database Migration Service resources that belong to that region until the outage is resolved. Database Migration Service depends on the availability of the source and destination databases that are used for the migration process. If a Database Migration Service source or destination database is unavailable, migrations stop making progress, but no customer core data or job data is lost. Migration jobs resume when the source and destination databases become available again. For example, you can configure a destination Cloud SQL database with high-availability (HA) enabled to get a destination database that is resilient for zonal outages. Database Migration Service migrations go through two phases: Full dump: Performs a full data copy from the source to the destination according to the migration job specification. Change data capture (CDC): Replicates incremental changes from the source to the destination. Zonal outage: If a zonal failure occurs during either of these phases, you are still able to access and manage resources in Database Migration Service. 
Data migration is affected as follows: Full dump: Data migration fails; you need to restart the migration job once the destination database completes the failover operation. CDC: Data migration is paused. The migration job resumes automatically once the destination database completes the failover operation. Regional outage: Database Migration Service doesn't support cross-regional resources, and therefore it's not resilient against regional failures. Dataflow Dataflow is Google Cloud's fully managed and serverless data processing service for streaming and batch pipelines. By default, a regional endpoint configures the Dataflow worker pool to use all available zones within the region. Zone selection is calculated for each worker at the time that the worker is created, optimizing for resource acquisition and use of unused reservations. In the default configuration for Dataflow jobs, intermediate data is stored by the Dataflow service, and the state of the job is stored in the backend. If a zone fails, Dataflow jobs can continue to run, because workers are re-created in other zones. The following limitations apply: Regional placement is supported only for jobs using Streaming Engine or Dataflow Shuffle. Jobs that have opted out of Streaming Engine or Dataflow Shuffle can't use regional placement. Regional placement applies to VMs only. It doesn't apply to Streaming Engine and Dataflow Shuffle-related resources. VMs aren't replicated across multiple zones. If a VM becomes unavailable, its work items are considered lost and are reprocessed by another VM. If a region-wide stockout occurs, the Dataflow service can't create any more VMs. Architecting Dataflow pipelines for high availability You can run multiple streaming pipelines in parallel for high-availability data processing. For example, you can run two parallel streaming jobs in different regions. Running parallel pipelines provides geographical redundancy and fault tolerance for data processing. By considering the geographic availability of data sources and sinks, you can operate end-to-end pipelines in a highly available, multi-region configuration. For more information, see High availability and geographic redundancy in "Design Dataflow pipeline workflows." In case of a zone or region outage, you can avoid data loss by reusing the same subscription to the Pub/Sub topic. To guarantee that records aren't lost during shuffle, Dataflow uses upstream backup, which means that the worker sending the records retries RPCs until it receives positive acknowledgement that the record has been received and that the side-effects of processing the record are committed to persistent storage downstream. Dataflow also continues to retry RPCs if the worker sending the records becomes unavailable. Retrying RPCs ensures that every record is delivered exactly once. For more information about the Dataflow exactly-once guarantee, see Exactly-once in Dataflow. If the pipeline is using grouping or time-windowing, you can use the Seek functionality of Pub/Sub or Replay functionality of Kafka after a zonal or regional outage to reprocess data elements to arrive at the same calculation results. If the business logic used by the pipeline does not rely on data before the outage, the data loss of pipeline outputs can be minimized down to 0 elements. 
If the pipeline business logic does rely on data that was processed before the outage (for example, if long sliding windows are used, or if a global time window is storing ever-increasing counters), use Dataflow snapshots to save the state of the streaming pipeline and start a new version of your job without losing state. Dataproc Dataproc provides streaming and batch data processing capabilities. Dataproc is architected as a regional control plane that enables users to manage Dataproc clusters. The control plane does not depend on an individual zone in a given region. Therefore, during a zonal outage, you retain access to the Dataproc APIs, including the ability to create new clusters. You can create Dataproc clusters on: Dataproc clusters on Compute Engine Dataproc clusters on GKE Dataproc clusters on Compute Engine Because a Dataproc cluster on Compute Engine is a zonal resource, a zonal outage makes the cluster unavailable, or destroys the cluster. Dataproc does not automatically snapshot cluster status, so a zone outage could cause loss of data being processed. Dataproc does not persist user data within the service. Users can configure their pipelines to write results to many data stores; you should consider the architecture of the data store and choose a product that offers the required disaster resilience. If a zone suffers an outage, you may choose to recreate a new instance of the cluster in another zone, either by selecting a different zone or using the Auto Placement feature in Dataproc to automatically select an available zone. Once the cluster is available, data processing can resume. You can also run a cluster with High Availability mode enabled, reducing the likelihood a partial zone outage will impact a master node and, therefore, the whole cluster. Dataproc clusters on GKE Dataproc clusters on GKE can be zonal or regional. For more information about the architecture and the DR capabilities of zonal and regional GKE clusters, see the Google Kubernetes Engine section later in this document. Datastream Datastream is a serverless change data capture (CDC) and replication service that lets you synchronize data reliably, and with minimal latency. Datastream provides replication of data from operational databases into BigQuery and Cloud Storage. In addition, it offers streamlined integration with Dataflow templates to build custom workflows for loading data into a wide range of destinations, such as Cloud SQL and Spanner. Zonal outage: Datastream is a multi-zonal service. It can withstand a complete, unplanned zonal outage without any loss of data or availability. If a zonal failure occurs, you can still access and manage your resources in Datastream. Regional outage: In the case of a regional outage, Datastream becomes available again as soon as the outage is resolved. Document AI Document AI is a document understanding platform that takes unstructured data from documents and transforms it into structured data, making it easier to understand, analyze, and consume. Document AI is a regional offering. Customers can choose the region but not the zones within that region. Data and traffic are automatically load balanced across zones within a region. Servers are automatically scaled to meet incoming traffic and are load balanced across zones as necessary. Each zone maintains a scheduler that provides this autoscaling per zone. The scheduler is also aware of the load other zones are receiving and provisions extra capacity in-zone to allow for any zonal failures. 
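The Dataproc section above mentions the Auto Placement feature as a way to bring a cluster back up without pinning it to a specific zone. The following is a minimal sketch with the google-cloud-dataproc Python client; the project, region, and cluster name are hypothetical, and leaving zone_uri empty lets Dataproc choose an available zone.

```python
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": "my-project",   # hypothetical project ID
    "cluster_name": "dr-cluster", # hypothetical cluster name
    "config": {
        # An empty zone_uri enables Dataproc Auto Zone placement, so the
        # service selects an available zone within the region.
        "gce_cluster_config": {"zone_uri": ""},
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
    },
}

operation = client.create_cluster(
    request={"project_id": "my-project", "region": region, "cluster": cluster}
)
result = operation.result()
print(f"Created cluster: {result.cluster_name}")
```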
Zonal outage: Document AI stores user documents and processor version data. This data is stored regionally and written synchronously. Since data is regionally stored, data plane operations aren't affected by zonal failures. Traffic automatically routes to other zones in the event of a zonal failure, with a delay based on how long it takes dependent services, like Vertex AI, to recover. Regional outage: Data is never replicated across regions. During a regional outage, Document AI will not failover. Customers choose the Google Cloud region in which they want to use Document AI. However, that customer traffic is never routed to another region. Endpoint Verification Endpoint Verification lets administrators and security operations professionals build an inventory of devices that access an organization's data. Endpoint Verification also provides critical device trust and security-based access control as a part of the Chrome Enterprise Premium solution. Use Endpoint Verification when you want an overview of the security posture of your organization's laptop and desktop devices. When Endpoint Verification is paired with Chrome Enterprise Premium offerings, Endpoint Verification helps enforce fine-grained access control on your Google Cloud resources. Endpoint Verification is available for Google Cloud, Cloud Identity, Google Workspace Business, and Google Workspace Enterprise. Eventarc Eventarc provides asynchronously delivered events from Google providers (first-party), user apps (second-party), and software as a service (third-party) using loosely-coupled services that react to state changes. It lets customers configure their destinations (for example, a Cloud Run instance or a 2nd gen Cloud Run function) to be triggered when an event occurs in an event provider service or the customer's code. Zonal outage: Eventarc stores metadata related to triggers. This data is stored regionally and written synchronously. The Eventarc API that creates and manages triggers and channels only returns the API call when the data has been committed to a quorum within a region. Since data is regionally stored, data plane operations aren't affected by zonal failures. In the event of a zonal failure, traffic is automatically routed to other zones. Eventarc services for receiving and delivering second-party and third-party events are replicated across zones. These services are regionally distributed. Requests to unavailable zones are automatically served from available zones in the region. Regional outage: Customers choose the Google Cloud region that they want to create their Eventarc triggers in. Data is never replicated across regions. Customer traffic is never routed by Eventarc to a different region. In the case of a regional failure, Eventarc becomes available again as soon as the outage is resolved. To achieve higher availability, customers are encouraged to deploy triggers to multiple regions if desired. Note the following: Eventarc services for receiving and delivering first-party events are provided on a best-effort basis and are not covered by RTO/RPO. Eventarc event delivery for Google Kubernetes Engine services is provided on a best-effort basis and is not covered by RTO/RPO. Filestore The Basic and HighScale tiers are zonal resources. They are not tolerant to failure of the deployed zone or region. Enterprise tier Filestore instances are regional resources. Filestore adopts the strict consistency policy required by NFS. 
When a client writes data, Filestore doesn't return an acknowledgment until the change is persisted and replicated in two zones so that subsequent reads return the correct data. In the event of a zone failure, an Enterprise tier instance continues to serve data from other zones, and in the meantime accepts new writes. Both the read and write operations might have a degraded performance; the write operation might not be replicated. Encryption is not compromised because the key will be served from other zones. We recommend that clients create external backups in case of further outages in other zones in the same region. The backup can be used to restore the instance to other regions. Firestore Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud. Firestore offers automatic multi-region data replication, strong consistency guarantees, atomic batch operations, and ACID transactions. Firestore offers both single region and multi-regional locations to customers. Traffic is automatically load-balanced across zones in a region. Regional Firestore instances synchronously replicate data across at least three zones. In the case of zonal failure, writes can still be committed by the remaining two (or more) replicas, and committed data is persisted. Traffic automatically routes to other zones. A regional location offers lower costs, lower write latency, and co-location with other Google Cloud resources. Firestore multi-regional instances synchronously replicate data across five zones in three regions (two serving regions and one witness region), and they are robust against zonal and regional failure. In case of zonal or regional failure, committed data is persisted. Traffic automatically routes to serving zones/regions, and commits are still served by at least three zones across the two regions remaining. Multi-regions maximize the availability and durability of databases. Firewall Insights Firewall Insights helps you understand and optimize your firewall rules. It provides insights, recommendations, and metrics about how your firewall rules are being used. Firewall Insights also uses machine learning to predict future firewall rules usage. Firewall Insights lets you make better decisions during firewall rule optimization. For example, Firewall Insights identifies rules that it classifies as overly permissive. You can use this information to make your firewall configuration stricter. Zonal outage: Since Firewall Insights data are replicated across zones, it isn't affected by a zonal outage, and customer traffic is automatically routed to other zones. Regional outage: Since Firewall Insights data are replicated across regions, it isn't affected by a regional outage, and customer traffic is automatically routed to other regions. Fleet Fleets let customers manage multiple Kubernetes clusters as a group, and allow platform administrators to use multi-cluster services. For example, fleets let administrators apply uniform policies across all clusters or set up Multi Cluster Ingress. When you register a GKE cluster to a fleet, by default, the cluster has a regional membership in the same region. When you register a non-Google Cloud cluster to a fleet, you can pick any region or the global location. The best practice is to choose a region that's close to the cluster's physical location. This provides optimal latency when using Connect gateway to access the cluster. 
In the case of a zonal outage, fleet functionalities are not affected unless the underlying cluster is zonal and becomes unavailable. In the case of a regional outage, fleet functionalities fail statically for the in-region membership clusters. Mitigation of a regional outage requires deployment across multiple regions, as suggested by Architecting disaster recovery for cloud infrastructure outages. Google Cloud Armor Google Cloud Armor helps you protect your deployments and applications from multiple types of threats, including volumetric DDoS attacks and application attacks like cross-site scripting and SQL injection. Google Cloud Armor filters unwanted traffic at Google Cloud load balancers and prevents such traffic from entering your VPC and consuming resources. Some of these protections are automatic. Some require you to configure security policies and attach them to backend services or regions. Globally scoped Google Cloud Armor security policies are applied at global load balancers. Regionally scoped security policies are applied at regional load balancers. Zonal outage: In case of a zonal outage, Google Cloud load balancers redirect your traffic to other zones where healthy backend instances are available. Google Cloud Armor protection is available immediately after the traffic failover because your Google Cloud Armor security policies are synchronously replicated to all zones in a region. Regional outage: In case of regional outages, global Google Cloud load balancers redirect your traffic to other regions where healthy backend instances are available. Google Cloud Armor protection is available immediately after the traffic failover because your global Google Cloud Armor security policies are synchronously replicated to all regions. To be resilient against regional failures, you must configure Google Cloud Armor regional security policies for all your regions. Google Kubernetes Engine Google Kubernetes Engine (GKE) offers managed Kubernetes service by streamlining the deployment of containerized applications on Google Cloud. You can choose between regional or zonal cluster topologies. When creating a zonal cluster, GKE provisions one control plane machine in the chosen zone, as well as worker machines (nodes) within the same zone. For regional clusters, GKE provisions three control plane machines in three different zones within the chosen region. By default, nodes are also spanned across three zones, though you can choose to create a regional cluster with nodes provisioned only in one zone. Multi-zonal clusters are similar to zonal clusters as they include one master machine, but additionally offer the ability to span nodes across multiple zones. Zonal outage: To avoid zonal outages, use regional clusters. The control plane and the nodes are distributed across three different zones within a region. A zone outage does not impact control plane and worker nodes deployed in the other two zones. Regional outage: Mitigation of a regional outage requires deployment across multiple regions. Although currently not being offered as a built-in product capability, multi-region topology is an approach taken by several GKE customers today, and can be manually implemented. You can create multiple regional clusters to replicate your workloads across multiple regions, and control the traffic to these clusters using multi-cluster ingress. 
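To guard against zonal outages, the GKE section above recommends regional clusters, which spread the control plane and nodes across three zones. The following is a minimal sketch with the google-cloud-container Python client; the project ID, cluster name, region, and node zones are hypothetical.

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

cluster = container_v1.Cluster(
    name="regional-dr-cluster",  # hypothetical cluster name
    initial_node_count=1,        # one node per selected zone
    locations=[                  # zones where worker nodes run
        "us-central1-a",
        "us-central1-b",
        "us-central1-c",
    ],
)

# Using a region (not a zone) as the location creates a regional cluster
# with a replicated control plane.
operation = client.create_cluster(
    parent="projects/my-project/locations/us-central1",  # hypothetical project ID
    cluster=cluster,
)
print(operation.status)
```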
HA VPN HA VPN (high-availability VPN) is a resilient Cloud VPN offering that securely encrypts your traffic from your on-premises private cloud, another virtual private cloud, or another cloud service provider's network to your Google Cloud Virtual Private Cloud (VPC). HA VPN gateways have two interfaces, each with an IP address from separate IP address pools, split both logically and physically across different PoPs and clusters to ensure optimal redundancy. Zonal outage: In the case of a zonal outage, one interface may lose connectivity, but traffic is redirected to the other interface via dynamic routing using Border Gateway Protocol (BGP). Regional outage: In the case of a regional outage, both interfaces may lose connectivity for a brief period. Identity and Access Management Identity and Access Management (IAM) is responsible for all authorization decisions for actions on cloud resources. IAM confirms that a policy grants permission for each action (in the data plane), and it processes updates to those policies through a SetPolicy call (in the control plane). All IAM policies are replicated across multiple zones within every region, which helps IAM data plane operations recover from failures in other regions and tolerate zone failures within each region. The resilience of the IAM data plane against zone failures and region failures enables multi-region and multi-zone architectures for high availability. IAM control plane operations can depend on cross-region replication. When SetPolicy calls succeed, the data has been written to multiple regions, but propagation to other regions is eventually consistent. The IAM control plane is resilient to a single-region failure. Identity-Aware Proxy Identity-Aware Proxy provides access to applications hosted on Google Cloud, on other clouds, and on-premises. IAP is regionally distributed, and requests to unavailable zones are automatically served from other available zones in the region. Regional outages in IAP affect access to the applications hosted in the impacted region. We recommend that you deploy to multiple regions and use Cloud Load Balancing to achieve higher availability and resilience against regional outages. Identity Platform Identity Platform lets customers add customizable, Google-grade identity and access management to their apps. Identity Platform is a global offering. Customers cannot choose the regions or zones in which their data is stored. Zonal outage: During a zonal outage, Identity Platform fails over requests to the next closest cell. All data is saved on a global scale, so there's no data loss. Regional outage: During a regional outage, Identity Platform requests to the unavailable region temporarily fail while Identity Platform removes traffic from the affected region. Once there's no more traffic to the affected region, a global server load-balancing service routes requests to the nearest available healthy region. All data is saved globally, so there's no data loss. Knative serving Knative serving is a global service that enables customers to run serverless workloads on customer clusters. Its purpose is to ensure that Knative serving workloads are properly deployed on customer clusters and that the installation status of Knative serving is reflected in the GKE Fleet API Feature resource. This service is involved only when installing or upgrading Knative serving resources on customer clusters. It isn't involved in executing cluster workloads.
Customer clusters belonging to projects that have Knative serving enabled are distributed among replicas in multiple regions and zones; each cluster is monitored by one replica. Zonal and regional outage: Clusters that are monitored by replicas hosted in a location undergoing an outage are automatically redistributed among healthy replicas in other zones and regions. While this reassignment is in progress, there might be a short time when some clusters are not monitored by Knative serving. If the user decides to enable Knative serving features on a cluster during that time, the installation of Knative serving resources on the cluster begins after the cluster reconnects with a healthy Knative serving replica. Looker (Google Cloud core) Looker (Google Cloud core) is a business intelligence platform that provides simplified and streamlined provisioning, configuration, and management of a Looker instance from the Google Cloud console. Looker (Google Cloud core) lets users explore data, create dashboards, set up alerts, and share reports. In addition, Looker (Google Cloud core) offers an IDE for data modelers and rich embedding and API features for developers. Looker (Google Cloud core) is composed of regionally isolated instances that synchronously replicate data across zones within the region. Ensure that the resources your instance uses, such as the data sources that Looker (Google Cloud core) connects to, are in the same region that your instance runs in. Zonal outage: Looker (Google Cloud core) instances store metadata and their own deployed containers. The data is written synchronously across replicated instances. In a zonal outage, Looker (Google Cloud core) instances continue to serve from other available zones in the same region. Any transactions or API calls return after the data has been committed to a quorum within a region. If the replication fails, the transaction is not committed and the user is notified about the failure. If more than one zone fails, the transactions also fail and the user is notified. Looker (Google Cloud core) stops any schedules or queries that are currently running. You have to reschedule or queue them again after resolving the failure. Regional outage: Looker (Google Cloud core) instances within the affected region aren't available. Looker (Google Cloud core) stops any schedules or queries that are currently running. You have to reschedule or queue the queries again after resolving the failure. You can manually create new instances in a different region. You can also recover your instances using the process defined in Import or export data from a Looker (Google Cloud core) instance. We recommend that you set up a periodic data export process to copy the assets in advance, so that they are available in the unlikely event of a regional outage. Looker Studio Looker Studio is a data visualization and business intelligence product. It enables customers to connect to their data stored in other systems, create reports and dashboards using that data, and share the reports and dashboards throughout their organization. Looker Studio is a global service and does not allow users to select a resource scope. In the case of a zonal outage, Looker Studio continues to serve requests from another zone in the same region or in a different region without interruption. User assets are synchronously replicated across regions. Therefore, there is no data loss.
In the case of a regional outage, Looker Studio continues to serve requests from another region without interruption. User assets are synchronously replicated across regions. Therefore, there is no data loss. Memorystore for Memcached Memorystore for Memcached is Google Cloud's managed Memcached offering. Memorystore for Memcached lets customers create Memcached clusters that can be used as high-throughput, key-value databases for applications. Memcached clusters are regional, with nodes distributed across all customer-specified zones. However, Memcached doesn't replicate any data across nodes. Therefore a zonal failure can result in loss of data, also described as a partial cache flush. Memcached instances will continue to operate, but they will have fewer nodes—the service won't start any new nodes during a zonal failure. Memcached nodes in unaffected zones will continue to serve traffic, although the zonal failure will result in a lower cache hit rate until the zone is recovered. In the event of a regional failure, Memcached nodes don't serve traffic. In that case, data is lost, which results in a full cache flush. To mitigate a regional outage, you can implement an architecture that deploys the application and Memorystore for Memcached across multiple regions. Memorystore for Redis Memorystore for Redis is a fully managed Redis service for Google Cloud that can reduce the burden of managing complex Redis deployments. It currently offers 2 tiers: Basic Tier and Standard Tier. For Basic Tier, a zonal or regional outage will cause loss of data, also known as a full cache flush. For Standard Tier, a regional outage will cause loss of data. A zonal outage might cause partial data loss to Standard Tier instance due to its asynchronous replication. Important: In order for replication to perform predictably, ensure that the CPU usage and system memory usage ratio remain within the recommended values. Zonal outage: Standard Tier instances asynchronously replicate dataset operations from the dataset in the primary node to the replica node. When the outage occurs within the zone of the primary node, the replica node will be promoted to become the primary node. During the promotion, a failover occurs and the Redis client has to reconnect to the instance. After reconnecting, operations resume. For more information about high availability of Memorystore for Redis instances in the Standard Tier, refer to Memorystore for Redis high availability. If you enable read replicas in your Standard Tier instance and you only have one replica, the read endpoint isn't available for the duration of a zonal outage. For more information about disaster recovery of read replicas, see Failure modes for read replicas. Regional outage: Memorystore for Redis is a regional product, so a single instance cannot withstand a regional failure. You can schedule periodic tasks to export a Redis instance to a Cloud Storage bucket in a different region. When a regional outage occurs, you can restore the Redis instance in a different region from the dataset you have exported. Multi-Cluster Service Discovery and Multi Cluster Ingress GKE multi-cluster Services (MCS) consists of multiple components. The components include the Google Kubernetes Engine hub (which orchestrates multiple Google Kubernetes Engine clusters by using memberships), the clusters themselves, and GKE hub controllers (Multi Cluster Ingress, Multi-Cluster Service Discovery). 
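The Memorystore for Redis section above suggests periodically exporting instances to a Cloud Storage bucket in another region so that you can restore after a regional outage. The following is a minimal sketch with the google-cloud-redis Python client; the project, instance, and bucket names are hypothetical.

```python
from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()

# Fully qualified name of the instance to export (hypothetical).
name = "projects/my-project/locations/us-central1/instances/my-redis"

# Export the RDB snapshot to a bucket located in a different region.
output_config = redis_v1.OutputConfig(
    gcs_destination=redis_v1.GcsDestination(
        uri="gs://my-dr-bucket-us-east1/redis/my-redis.rdb"
    )
)

operation = client.export_instance(
    request={"name": name, "output_config": output_config}
)
operation.result()  # blocks until the export completes
```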
The hub controllers orchestrate Compute Engine load balancer configuration by using backends on multiple clusters. In the case of a zonal outage, Multi-Cluster Service Discovery continues to serve requests from another zone or region. In the case of a regional outage, Multi-Cluster Service Discovery does not fail over. In the case of a zonal outage for Multi Cluster Ingress, if the config cluster is zonal and in scope of the failure, the user needs to manually fail over. The data plane is fail-static and will continue serving traffic until the user has failed over. To avoid the need for manual failover, use a regional cluster for the configuration cluster. In the case of a regional outage, Multi Cluster Ingress does not fail over. Users must have a DR plan in place for manually failing over the configuration cluster. For more information, see Setting up Multi Cluster Ingress and Configuring multi-cluster Services. For more information about GKE, see the "Google Kubernetes Engine" section in Architecting disaster recovery for cloud infrastructure outages. Network Analyzer Network Analyzer automatically monitors your VPC network configurations and detects misconfigurations and suboptimal configurations. It provides insights on network topology, firewall rules, routes, configuration dependencies, and connectivity to services and applications. It identifies network failures, provides root cause information, and suggests possible resolutions. Network Analyzer runs continuously and triggers relevant analyses based on near real-time configuration updates in your network. If Network Analyzer detects a network failure, it tries to correlate the failure with recent configuration changes to identify root causes. Wherever possible, it provides recommendations to suggest details on how to fix the issues. Network Analyzer is a diagnostic tool with no data plane components. It does not process or generate user traffic. Zonal outage: Network Analyzer service is replicated globally, and its availability isn't affected by a zonal outage. If insights from Network Analyzer contain configurations from a zone suffering an outage, it affects data quality. The network insights that refer to configurations in that zone become stale. Don't rely on any insights provided by Network Analyzer during outages. Regional outage: Network Analyzer service is replicated globally, and its availability isn't affected by a regional outage. If insights from Network Analyzer contain configurations from a region suffering an outage, it affects data quality. The network insights that refer to configurations in that region become stale. Don't rely on any insights provided by Network Analyzer during outages. Network Topology Network Topology is a visualization tool that shows the topology of your network infrastructure. The Infrastructure view shows Virtual Private Cloud (VPC) networks, hybrid connectivity to and from your on-premises networks, connectivity to Google-managed services, and the associated metrics. Zonal outage: In case of a zonal outage, data for that zone won't appear in Network Topology. Data for other zones aren't affected. Regional outage: In case of a regional outage, data for that region won't appear in Network Topology. Data for other regions aren't affected. Performance Dashboard Performance Dashboard gives you visibility into the performance of the entire Google Cloud network, as well as to the performance of your project's resources. 
With these performance-monitoring capabilities, you can distinguish between a problem in your application and a problem in the underlying Google Cloud network. You can also investigate historical network performance problems. Performance Dashboard also exports data to Cloud Monitoring. You can use Monitoring to query the data and get access to additional information. Zonal outage: In case of a zonal outage, latency and packet loss data for traffic from the affected zone won't appear in Performance Dashboard. Latency and packet loss data for traffic from other zones isn't affected. When the outage ends, latency and packet loss data resumes. Regional outage: In case of a regional outage, latency and packet loss data for traffic from the affected region won't appear in Performance Dashboard. Latency and packet loss data for traffic from other regions isn't affected. When the outage ends, latency and packet loss data resumes. Network Connectivity Center Network Connectivity Center is a network connectivity management product that employs a hub-and-spoke architecture. With this architecture, a central management resource serves as a hub and each connectivity resource serves as a spoke. Hybrid spokes currently support HA VPN, Dedicated and Partner Interconnect, and SD-WAN router appliances from major third party vendors. With Network Connectivity Center hybrid spokes, enterprises can connect Google Cloud workloads and services to on-premise data centers, other clouds, and their branch offices through the global reach of the Google Cloud network. Zonal outage: A Network Connectivity Center hybrid spoke with HA configuration is resilient to zonal failures because the control plane and network data plane are redundant across multiple zones within a region. Regional outage: A Network Connectivity Center hybrid spoke is a regional resource, so it can't withstand a regional failure. Network Service Tiers Network Service Tiers lets you optimize connectivity between systems on the internet and your Google Cloud instances. It offers two distinct service tiers, the Premium Tier and the Standard Tier. With the Premium Tier, a globally announced anycast Premium Tier IP address can serve as the frontend for either regional or global backends. With the Standard Tier, a regionally announced Standard Tier IP address can serve as the frontend for regional backends. The overall resilience of an application is influenced by both the network service tier and the redundancy of the backends it associates with. Zonal outage: Both the Premium Tier and the Standard Tier offer resilience against zonal outages when associated with regionally redundant backends. When a zonal outage occurs, the failover behavior for cases using regionally redundant backends is determined by the associated backends themselves. When associated with zonal backends, the service will become available again as soon as the outage is resolved. Regional outage: The Premium Tier offers resilience against regional outages when it is associated with globally redundant backends. In the Standard tier, all traffic to the affected region will fail. Traffic to all other regions is unaffected. When a regional outage occurs, the failover behavior for cases using the Premium Tier with globally redundant backends is determined by the associated backends themselves. When using the Premium Tier with regional backends or the Standard Tier, the service will become available again as soon as the outage is resolved. 
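The Network Service Tiers section above distinguishes globally announced Premium Tier frontends from regionally announced Standard Tier frontends. The following is a minimal sketch with the google-cloud-compute Python client that reserves one address of each kind; the project ID, address names, and region are hypothetical, and the exact fields should be checked against the current Compute Engine API.

```python
from google.cloud import compute_v1

project = "my-project"  # hypothetical project ID

# Regional external address served from the Standard Tier.
regional_client = compute_v1.AddressesClient()
standard_ip = compute_v1.Address(name="standard-frontend-ip", network_tier="STANDARD")
regional_client.insert(
    project=project, region="us-central1", address_resource=standard_ip
).result()

# Global anycast address; global external addresses use the Premium Tier.
global_client = compute_v1.GlobalAddressesClient()
premium_ip = compute_v1.Address(name="premium-frontend-ip", ip_version="IPV4")
global_client.insert(project=project, address_resource=premium_ip).result()
```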
Organization Policy Service Organization Policy Service provides centralized and programmatic control over your organization's Google Cloud resources. As the Organization Policy administrator, you can configure constraints across your entire resource hierarchy. Zonal outage: All organization policies created by Organization Policy Service are replicated asynchronously across multiple zones within every region. Organization Policy data and control plane operations are tolerant of zone failures within each region. Regional outage: All organization policies created by Organization Policy Service are replicated asynchronously across multiple regions. Organization Policy control plane operations are written to multiple regions, and the propagation to other regions is consistent within minutes. The Organization Policy control plane is resilient to a single-region failure. Organization Policy data plane operations can recover from failures in other regions, and the data plane's resilience against zone and region failures enables multi-zone and multi-region architectures for high availability. Packet Mirroring Packet Mirroring clones the traffic of specified instances in your Virtual Private Cloud (VPC) network and forwards the cloned data to instances behind a regional internal load balancer for examination. Packet Mirroring captures all traffic and packet data, including payloads and headers. For more information about the functionality of Packet Mirroring, see the Packet Mirroring overview page. Zonal outage: Configure the internal load balancer so there are instances in multiple zones. If a zonal outage occurs, Packet Mirroring diverts cloned packets to a healthy zone. Regional outage: Packet Mirroring is a regional product. If there's a regional outage, packets in the affected region aren't cloned. Persistent Disk Persistent Disks are available in zonal and regional configurations. Zonal Persistent Disks are hosted in a single zone. If the disk's zone is unavailable, the Persistent Disk is unavailable until the zone outage is resolved. Regional Persistent Disks provide synchronous replication of data between two zones in a region. In the event of an outage in your virtual machine's zone, you can force attach a regional Persistent Disk to a VM instance in the disk's secondary zone. To perform this task, you must either start another VM instance in that zone or maintain a hot standby VM instance in that zone. To asynchronously replicate data in a Persistent Disk across regions, you can use Persistent Disk Asynchronous Replication (PD Async Replication), which provides low RTO and RPO block storage replication for cross-region active-passive DR. In the unlikely event of a regional outage, PD Async Replication enables you to fail over your data to a secondary region and restart your workload in that region. Personalized Service Health Personalized Service Health communicates service disruptions relevant to your Google Cloud projects. It provides multiple channels and processes to view or integrate disruptive events (incidents, planned maintenance) into your incident response process, including the following: A dashboard in the Google Cloud console A service API Configurable alerts Logs generated and sent to Cloud Logging Zonal outage: Data is served from a global database with no dependency on specific locations. If a zonal outage occurs, Service Health is able to serve requests and automatically reroute traffic to zones in the same region that still function.
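Returning to the regional Persistent Disk failover described above, the force-attach step can be scripted as in the following sketch (assuming the google-cloud-compute client; the project, zone, instance, and disk names are illustrative):

```python
from google.cloud import compute_v1

# Illustrative values; adjust to your environment.
PROJECT_ID = "my-project"
SECONDARY_ZONE = "us-central1-b"          # Zone of the standby VM.
STANDBY_INSTANCE = "standby-vm"
REGIONAL_DISK_URL = (
    f"projects/{PROJECT_ID}/regions/us-central1/disks/my-regional-disk"
)

client = compute_v1.InstancesClient()
operation = client.attach_disk(
    request={
        "project": PROJECT_ID,
        "zone": SECONDARY_ZONE,
        "instance": STANDBY_INSTANCE,
        "attached_disk_resource": {"source": REGIONAL_DISK_URL},
        # force_attach is needed because the disk might still appear attached
        # to the VM in the zone that is experiencing the outage.
        "force_attach": True,
    }
)
operation.result()  # Wait for the attachment to complete.
```

This assumes the standby VM already exists in the disk's secondary zone, as recommended above; otherwise you would create it first.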
Service Health can return API calls successfully if it is able to retrieve event data from the Service Health database. Regional outage: Data is served from a global database with no dependency on specific locations. If there is a regional outage, Service Health is still able to serve requests but may perform with reduced capacity. Regional failures in Logging locations might affect Service Health users consuming logs or cloud alerting notifications. Private Service Connect Private Service Connect is a capability of Google Cloud networking that lets consumers access managed services privately from inside their VPC network. Similarly, it allows managed service producers to host these services in their own separate VPC networks and offer a private connection to their consumers. Private Service Connect endpoints for published services A Private Service Connect endpoint connects to services in a service producer's VPC network using a Private Service Connect forwarding rule. The service producer provides a service to a service consumer over private connectivity by exposing a single service attachment. The service consumer can then assign a virtual IP address from their VPC network to that service. Zonal outage: Private Service Connect traffic that comes from VMs through consumer VPC client endpoints can still access exposed managed services on the service producer's internal VPC network. This access is possible because Private Service Connect traffic fails over to healthier service backends in a different zone. Regional outage: Private Service Connect is a regional product. It isn't resilient to regional outages. Multi-regional managed services can achieve high availability during a regional outage by configuring Private Service Connect endpoints across multiple regions. Private Service Connect endpoints for Google APIs A Private Service Connect endpoint connects to Google APIs using a Private Service Connect forwarding rule. This forwarding rule lets customers use customized endpoint names with their internal IP addresses. Zonal outage: Private Service Connect traffic from consumer VPC client endpoints can still access Google APIs because connectivity between the VM and the endpoint will automatically fail over to another functional zone in the same region. Requests that are already in-flight when an outage begins will depend on the client's TCP timeout and retry behavior for recovery. See Compute Engine recovery for more details. Regional outage: Private Service Connect is a regional product. It isn't resilient to regional outages. Multi-regional managed services can achieve high availability during a regional outage by configuring Private Service Connect endpoints across multiple regions. For more information about Private Service Connect, see the "Endpoints" section in Private Service Connect types. Pub/Sub Pub/Sub is a messaging service for application integration and stream analytics. Pub/Sub topics are global, meaning that they are visible and accessible from any Google Cloud location. However, any given message is stored in a single Google Cloud region, closest to the publisher and allowed by the resource location policy. Thus, a topic may have messages stored in different regions throughout Google Cloud. The Pub/Sub message storage policy can restrict the regions in which messages are stored. Zonal outage: When a Pub/Sub message is published, it is synchronously written to storage in at least two zones within the region.
Therefore, if a single zone becomes unavailable, there is no customer-visible impact. Regional outage: During a regional outage, messages stored within the affected region are inaccessible. Publishers and subscribers that would connect to the affected region, either via a regional endpoint or the global endpoint, aren't able to connect. Publishers and subscribers that connect to other regions can still connect, and messages available in other regions are delivered to network-nearest subscribers that have capacity. If your application relies on message ordering, review the detailed recommendations from the Pub/Sub team. Message ordering guarantees are provided on a per-region basis, and can become disrupted if you use a global endpoint. reCAPTCHA reCAPTCHA is a global service that detects fraudulent activity, spam, and abuse. It does not require or allow configuration for regional or zonal resiliency. Updates to configuration metadata are asynchronously replicated to each region where reCAPTCHA runs. In the case of a zonal outage, reCAPTCHA continues to serve requests from another zone in the same or a different region without interruption. In the case of a regional outage, reCAPTCHA continues to serve requests from another region without interruption. Secret Manager Secret Manager is a secrets and credential management product for Google Cloud. With Secret Manager, you can easily audit and restrict access to secrets, encrypt secrets at rest, and ensure that sensitive information is secured in Google Cloud. Secret Manager resources are normally created with the automatic replication policy (recommended), which causes them to be replicated globally. If your organization has policies that do not allow global replication of secret data, Secret Manager resources can be created with user-managed replication policies, in which one or more regions are chosen for a secret to be replicated to. Zonal outage: In the case of a zonal outage, Secret Manager continues to serve requests from another zone in the same or a different region without interruption. Within each region, Secret Manager always maintains at least 2 replicas in separate zones (in most regions, 3 replicas). When the zone outage is resolved, full redundancy is restored. Regional outage: In the case of a regional outage, Secret Manager continues to serve requests from another region without interruption, assuming the data has been replicated to more than one region (either through automatic replication or through user-managed replication to more than one region). When the region outage is resolved, full redundancy is restored. Security Command Center Security Command Center is the global, real-time risk management platform for Google Cloud. It consists of two main components: detectors and findings. Detectors are affected by both regional and zonal outages, in different ways. During a regional outage, detectors can't generate new findings for regional resources because the resources they're supposed to be scanning aren't available. During a zonal outage, detectors can take anywhere from several minutes to hours to resume normal operation. Security Command Center won't lose finding data. It also won't generate new finding data for unavailable resources. In the worst-case scenario, Container Threat Detection agents may run out of buffer space while connecting to a healthy cell, which could lead to lost detections. Findings are resilient to both regional and zonal outages because they're synchronously replicated across regions.
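As a minimal sketch of the user-managed replication policy described in the Secret Manager section above (assuming the google-cloud-secret-manager client; the project ID, secret ID, regions, and payload are illustrative), the following creates a secret whose payload is replicated only to the listed regions:

```python
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

# Illustrative project and secret names.
parent = "projects/my-project"
secret = client.create_secret(
    request={
        "parent": parent,
        "secret_id": "payments-api-key",
        "secret": {
            "replication": {
                "user_managed": {
                    "replicas": [
                        {"location": "europe-west1"},
                        {"location": "europe-west4"},
                    ]
                }
            }
        },
    }
)

# Add the first secret version; the payload bytes are illustrative.
client.add_secret_version(
    request={"parent": secret.name, "payload": {"data": b"s3cr3t"}}
)
```

Because two regions are listed, the secret stays readable during an outage in either one, which is the behavior described for user-managed replication above.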
Sensitive Data Protection (including the DLP API) Sensitive Data Protection provides sensitive data classification, profiling, de-identification, tokenization, and privacy risk analysis services. It works synchronously on the data that's sent in the request bodies, or asynchronously on the data that's already present in cloud storage systems. Sensitive Data Protection can be invoked through the global or region-specific endpoints. Global endpoint: The service is designed to be resilient to both regional and zonal failures. If the service is overloaded while a failure happens, data sent to the hybridInspect method of the service might be lost. To create a failure-resistant architecture, include logic to examine the most recent pre-failure finding that was produced by the hybridInspect method. In case of an outage, the data that was sent to the method might be lost, but no more than the last 10 minutes' worth before the failure event. If there are findings fresher than 10 minutes before the outage started, it indicates the data that resulted in that finding wasn't lost. In that case, there's no need to replay the data that came before the finding timestamp, even if it's within the 10 minute interval. Regional endpoint: Regional endpoints are not resilient to regional failures. If resiliency against a regional failure is required, consider failing over to other regions. The zonal failure characteristics are the same as above. Service Usage The Service Usage API is an infrastructure service of Google Cloud that lets you list and manage APIs and services in your Google Cloud projects. You can list and manage APIs and Services provided by Google, Google Cloud, and third-party producers. The Service Usage API is a global service and resilient to both zonal and regional outages. In the case of zonal outage or regional outage, the Service Usage API continues to serve requests from another zone across different regions. For more information about Service Usage, see Service Usage Documentation. Speech-to-Text Speech-to-Text lets you convert speech audio to text by using machine learning techniques like neural network models. Audio is sent in real time from an application’s microphone, or it is processed as a batch of audio files. Zonal outage: Speech-to-Text API v1: During a zonal outage, Speech-to-Text API version 1 continues to serve requests from another zone in the same region without interruption. However, any jobs that are currently executing within the failing zone are lost. Users must retry the failed jobs, which will be routed to an available zone automatically. Speech-to-Text API v2: During a zonal outage, Speech-to-Text API version 2 continues to serve requests from another zone in the same region. However, any jobs that are currently executing within the failing zone are lost. Users must retry the failed jobs, which will be routed to an available zone automatically. The Speech-to-Text API only returns the API call once the data has been committed to a quorum within a region. In some regions, AI accelerators (TPUs) are available only in one zone. In that case, an outage in that zone causes speech recognition to fail but there is no data loss. Regional outage: Speech-to-Text API v1: Speech-to-Text API version 1 is unaffected by regional failure because it is a global multi-region service. The service continues to serve requests from another region without interruption. However, jobs that are currently executing within the failing region are lost. 
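The hybridInspect replay rule described in the Sensitive Data Protection section above can be expressed in a few lines of plain Python. This sketch shows only the decision logic; the replay_window helper and the timestamps are illustrative and are not part of the Sensitive Data Protection API:

```python
from datetime import datetime, timedelta

MAX_LOSS = timedelta(minutes=10)  # Upper bound on data lost before a failure.

def replay_window(outage_start: datetime, latest_finding_time: datetime):
    """Return the (start, end) interval of data to resend to hybridInspect.

    Data older than the most recent pre-failure finding was demonstrably
    processed, so only data between that finding and the outage start needs
    to be replayed, capped at the 10-minute maximum-loss window.
    """
    window_start = max(latest_finding_time, outage_start - MAX_LOSS)
    return window_start, outage_start

# Example: the last finding arrived 4 minutes before the outage, so only the
# final 4 minutes of data need to be replayed.
outage = datetime(2025, 2, 23, 12, 0)
start, end = replay_window(outage, outage - timedelta(minutes=4))
```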
Users must retry those failed jobs, which will be routed to an available region automatically. Speech-to-Text API v2: With multi-region Speech-to-Text API version 2, the service continues to serve requests from another zone in the same region without interruption. With single-region Speech-to-Text API version 2, the service scopes the job execution to the requested region. Speech-to-Text API version 2 doesn't route traffic to a different region, and data is not replicated to a different region. During a regional failure, Speech-to-Text API version 2 is unavailable in that region. However, it becomes available again when the outage is resolved. Storage Transfer Service Storage Transfer Service manages data transfers from various cloud sources to Cloud Storage, as well as to, from, and between file systems. The Storage Transfer Service API is a global resource. Storage Transfer Service depends on the availability of the source and destination of a transfer. If a transfer source or destination is unavailable, transfers stop making progress. However, no customer core data or job data is lost. Transfers resume when the source and destination become available again. You can use Storage Transfer Service with or without an agent, as follows: Agentless transfers use regional workers to orchestrate transfer jobs. Agent-based transfers use software agents that are installed on your infrastructure. Agent-based transfers rely on the availability of the transfer agents and on the ability of the agents to connect to the file system. When you're deciding where to install transfer agents, consider the availability of the file system. For example, if you're running transfer agents on multiple Compute Engine VMs to transfer data to an Enterprise-tier Filestore instance (a regional resource), you should consider locating the VMs in different zones within the Filestore instance's region. If agents become unavailable, or if their connection to the file system is interrupted, transfers stop making progress, but no data is lost. If all agent processes are terminated, the transfer job is paused until new agents are added to the transfer's agent pool. During an outage, the behavior of Storage Transfer Service is as follows: Zonal outage: During a zonal outage, the Storage Transfer Service APIs remain available, and you can continue to create transfer jobs. Data continues to transfer. Regional outage: During a regional outage, the Storage Transfer Service APIs remain available, and you can continue to create transfer jobs. If your transfer's workers are located in the affected region, data transfer stops until the region becomes available again and the transfer automatically resumes. Vertex ML Metadata Vertex ML Metadata lets you record the metadata and artifacts produced by your ML system and query that metadata to help analyze, debug, and audit the performance of your ML system or the artifacts that it produces. Zonal outage: In the default configuration, Vertex ML Metadata offers protection against zonal failure. The service is deployed in multiple zones across each region, with data synchronously replicated across different zones within each region. In case of a zonal failure, the remaining zones take over with minimal interruption. Regional outage: Vertex ML Metadata is a regionalized service. In the case of a regional outage, Vertex ML Metadata will not fail over to another region. Vertex AI Batch prediction Batch prediction lets users run batch prediction against AI/ML models on Google's infrastructure.
Batch prediction is a regional offering. Customers can choose the region in which they run jobs, but not the specific zones within that region. The batch prediction service automatically load-balances the job across different zones within the chosen region. Zonal outage: Batch prediction stores metadata for batch prediction jobs within a region. The data is written synchronously, across multiple zones within that region. In a zonal outage, batch prediction partially loses workers performing jobs, but automatically adds them back in other available zones. If multiple batch prediction retries fail, the job status is reported as failed in the UI and through the API. Subsequent user requests to run the job are routed to available zones. Regional outage: Customers choose the Google Cloud region in which they want to run their batch prediction jobs. Data is never replicated across regions. Batch prediction scopes the job execution to the requested region and never routes prediction requests to a different region. When a regional failure occurs, batch prediction is unavailable in that region. It becomes available again when the outage resolves. We recommend that customers use multiple regions to run their jobs. In case of a regional outage, direct jobs to a different available region. Vertex AI Model Registry Vertex AI Model Registry lets users streamline model management, governance, and the deployment of ML models in a central repository. Vertex AI Model Registry is a regional offering with high availability and offers protection against zonal outages. Zonal outage: Vertex AI Model Registry offers protection against zonal outages. The service is deployed in three zones across each region, with data synchronously replicated across different zones within the region. If a zone fails, the remaining zones will take over with no data loss and minimal service interruption. Regional outage: Vertex AI Model Registry is a regionalized service. If a region fails, Model Registry won't fail over. Vertex AI Online prediction Online prediction lets users deploy AI/ML models on Google Cloud. Online prediction is a regional offering. Customers can choose the region where they deploy their models, but not the specific zones within that region. The prediction service will automatically load-balance the workload across different zones within the selected region. Zonal outage: Online prediction doesn't store any customer content. A zonal outage leads to failure of the current prediction request execution. Online prediction may or may not automatically retry the prediction request depending on which endpoint type is used: a public endpoint retries automatically, while a private endpoint does not. To help handle failures and for improved resilience, incorporate retry logic with exponential backoff in your code. Regional outage: Customers choose the Google Cloud region in which they want to run their AI/ML models and online prediction services. Data is never replicated across regions. Online prediction scopes the AI/ML model execution to the requested region and never routes prediction requests to a different region. When a regional failure occurs, the online prediction service is unavailable in that region. It becomes available again when the outage is resolved. We recommend that customers use multiple regions to run their AI/ML models. In case of a regional outage, direct traffic to a different, available region.
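As a minimal sketch of the retry guidance for online prediction above (the predict_fn callable is a placeholder for your own prediction call, not a Vertex AI API name):

```python
import random
import time

def call_with_backoff(predict_fn, max_attempts=5, base_delay=1.0, max_delay=32.0):
    """Call predict_fn, retrying transient failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return predict_fn()
        except Exception:  # In practice, narrow this to retryable error types.
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter: roughly 1s, 2s, 4s, ... capped at 32s.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, 1))

# Usage: wrap whatever prediction call your client makes, for example
# result = call_with_backoff(lambda: endpoint.predict(instances=[...]))
```

This is especially relevant for private endpoints, which, as noted above, don't retry automatically.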
Vertex AI Pipelines Vertex AI Pipelines is a Vertex AI service that lets you automate, monitor, and govern your machine learning (ML) workflows in a serverless manner. Vertex AI Pipelines is built to provide high availability, and it offers protection against zonal failures. Zonal outage: In the default configuration, Vertex AI Pipelines offers protection against zonal failure. The service is deployed in multiple zones across each region, with data synchronously replicated across different zones within the region. In case of a zonal failure, the remaining zones take over with minimal interruption. Regional outage: Vertex AI Pipelines is a regionalized service. In the case of a regional outage, Vertex AI Pipelines will not fail over to another region. If a regional outage occurs, we recommend that you run your pipeline jobs in a backup region. Vertex AI Search Vertex AI Search is a customizable search solution with generative AI features and native enterprise compliance. Vertex AI Search is automatically deployed and replicated across multiple regions within Google Cloud. You can configure where data is stored by choosing a supported multi-region, such as global, US, or EU. Zonal and regional outage: UserEvents uploaded to Vertex AI Search might not be recoverable due to asynchronous replication delay. Other data and services provided by Vertex AI Search remain available due to automatic failover and synchronous data replication. Vertex AI Training Vertex AI Training provides users with the ability to run custom training jobs on Google's infrastructure. Vertex AI Training is a regional offering, meaning that customers can choose the region to run their training jobs. However, customers can't choose the specific zones within that region. The training service might automatically load-balance the job execution across different zones within the region. Zonal outage: Vertex AI Training stores metadata for the custom training job. This data is stored regionally and written synchronously. The Vertex AI Training API call returns only after this metadata has been committed to a quorum within a region. The training job might run in a specific zone. A zonal outage leads to failure of the current job execution. In that case, the service automatically retries the job by routing it to another zone. If multiple retries fail, the job status is updated to failed. Subsequent user requests to run the job are routed to an available zone. Regional outage: Customers choose the Google Cloud region they want to run their training jobs in. Data is never replicated across regions. Vertex AI Training scopes the job execution to the requested region and never routes training jobs to a different region. In the case of a regional failure, the Vertex AI Training service is unavailable in that region and becomes available again when the outage is resolved. We recommend that customers use multiple regions to run their jobs and, in case of a regional outage, direct jobs to a different available region. Virtual Private Cloud (VPC) VPC is a global service that provides network connectivity to resources (VMs, for example). Failures, however, are zonal. In the event of a zonal failure, resources in that zone are unavailable. Similarly, if a region fails, only traffic to and from the failed region is affected. The connectivity of healthy regions is unaffected. Zonal outage: If a VPC network covers multiple zones and a zone fails, the VPC network will still be healthy for healthy zones.
Network traffic between resources in healthy zones will continue to work normally during the failure. A zonal failure only affects network traffic to and from resources in the failing zone. To mitigate the impact of zonal failures, we recommend that you don't create all resources in a single zone. Instead, when you create resources, spread them across zones. Regional outage: If a VPC network covers multiple regions and a region fails, the VPC network will still be healthy for healthy regions. Network traffic between resources in healthy regions will continue to work normally during the failure. A regional failure only affects network traffic to and from resources in the failing region. To mitigate the impact of regional failures, we recommend that you spread resources across multiple regions. VPC Service Controls VPC Service Controls is a regional service. Using VPC Service Controls, enterprise security teams can define fine-grained perimeter controls and enforce that security posture across numerous Google Cloud services and projects. Customer policies are mirrored regionally. Zonal outage: VPC Service Controls continues to serve requests from another zone in the same region without interruption. Regional outage: APIs configured for VPC Service Controls policy enforcement in the affected region are unavailable until the region becomes available again. Customers are encouraged to deploy VPC Service Controls enforced services to multiple regions if higher availability is desired. Workflows Workflows is an orchestration product that lets Google Cloud customers deploy and run workflows that connect other existing services using HTTP, automate processes (including waiting on HTTP responses with automatic retries for up to a year), and implement real-time processes with low-latency, event-driven executions. A Workflows customer can deploy workflows that describe the business logic they want to perform, then run the workflows either directly with the API or with event-driven triggers (currently limited to Pub/Sub or Eventarc). The workflow being run can manipulate variables, make HTTP calls and store the results, or define callbacks and wait to be resumed by another service. Zonal outage: Workflows source code is not affected by zonal outages. Workflows stores the source code of workflows, along with the variable values and HTTP responses received by workflows that are running. Source code is stored regionally and written synchronously: the control plane API returns only after this metadata has been committed to a quorum within a region. Variables and HTTP results are also stored regionally and written synchronously, at least every five seconds. If a zone fails, workflows are automatically resumed based on the last stored data. However, any HTTP requests that haven't already received responses aren't automatically retried. Use retry policies for requests that can be safely retried as described in our documentation. Regional outage: Workflows is a regionalized service; in the case of a regional outage, Workflows won't fail over. Customers are encouraged to deploy Workflows to multiple regions if higher availability is desired. Cloud Service Mesh Cloud Service Mesh lets you configure a managed service mesh spanning multiple GKE clusters. This documentation covers only the managed Cloud Service Mesh; the in-cluster variant is self-hosted, and regular platform guidelines should be followed.
Zonal outage: Mesh configuration, as it is stored in the GKE cluster, is resilient to zonal outages as long as the cluster is regional. Data that the product uses for internal bookkeeping is stored either regionally or globally, and isn't affected if a single zone is out of service. The control plane is run in the same region as the GKE cluster it supports (for zonal clusters it is the containing region), and isn't affected by outages within a single zone. Regional outage: Cloud Service Mesh provides services to GKE clusters, which are either regional or zonal. In case of a regional outage, Cloud Service Mesh won't fail over. Neither will GKE. Customers are encouraged to deploy meshes consisting of GKE clusters that cover different regions. Service Directory Service Directory is a platform for discovering, publishing, and connecting services. It provides real-time information, in a single place, about all your services. Service Directory lets you perform service inventory management at scale, whether you have a few service endpoints or thousands. Service Directory resources are created regionally, matching the location parameter specified by the user. Zonal outage: During a zonal outage, Service Directory continues to serve requests from another zone in the same or a different region without interruption. Within each region, Service Directory always maintains multiple replicas. Once the zonal outage is resolved, full redundancy is restored. Regional outage: Service Directory isn't resilient to regional outages. Send feedback \ No newline at end of file diff --git a/Architecting_for_locality-restricted_workloads.txt b/Architecting_for_locality-restricted_workloads.txt new file mode 100644 index 0000000000000000000000000000000000000000..da01078d1892ca2250c822ead17c1d14d265ed26 --- /dev/null +++ b/Architecting_for_locality-restricted_workloads.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/architecting-disaster-recovery-for-locality-restricted-workloads +Date Scraped: 2025-02-23T11:54:37.340Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecting disaster recovery for locality-restricted workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-20 UTC This document discusses how you can use Google Cloud to architect for disaster recovery (DR) to meet location-specific requirements. For some regulated industries, workloads must adhere to these requirements. In this scenario, one or more of the following requirements apply: Data at rest must be restricted to a specified location. Data must be processed in the location where it resides. Workloads are accessible only from predefined locations. Data must be encrypted by using keys that the customer manages. If you are using cloud services, each cloud service must provide a minimum of two locations that are redundant to each other. For an example of location redundancy requirements, see the Cloud Computing Compliance Criteria Catalogue (C5).
The series consists of these parts: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads (this document) Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Terminology Before you begin architecting for DR for locality-restricted workloads, it's a good idea to review locality terminology used in Google Cloud. Google Cloud provides services in regions throughout the Americas, Europe and the Middle East, and Asia Pacific. For example, London (europe-west2) is a region in Europe, and Oregon (us-west1) is a region in North America. Some Google Cloud products group multiple regions into a specific multi-region location which is accessible in the same way that you would use a region. Regions are further divided into zones where you deploy certain Google Cloud resources such as virtual machines, Kubernetes clusters, or Cloud SQL databases. Resources on Google Cloud are multi-regional, regional, or zonal. Some resources and products that are by default designated as multi-regional can also be restricted to a region. The different types of resources are explained as follows: Multi-regional resources are designed by Google Cloud to be redundant and distributed in and across regions. Multi-regional resources are resilient to the failure of a single region. Regional resources are redundantly deployed across multiple zones in a region, and are resilient to the failure of a zone within the region. Note: For more information about region-specific considerations, see Geography and regions. Zonal resources operate in a single zone. If a zone becomes unavailable, all zonal resources in that zone are unavailable until service is restored. Consider a zone as a single-failure domain. You need to architect your applications to mitigate the effects of a single zone becoming unavailable. For more information, see Geography and regions. Planning for DR for locality-restricted workloads The approach you take to designing your application depends on the type of workload and the locality requirements you must meet. Also consider why you must meet those requirements because what you decide directly influences your DR architecture. Start by reading the Google Cloud disaster recovery planning guide. And as you consider locality-restricted workloads, focus on the requirements discussed in this planning section. Define your locality requirements Before you start your design, define your locality requirements by answering these questions: Where is the data at rest? The answer dictates what services you can use and the high availability (HA) and DR methods you can employ to achieve your RTO/RPO values. Use the Cloud locations page to determine what products are in scope. Can you use encryption techniques to mitigate the requirement? If you are able to mitigate locality requirements by employing encryption techniques using Cloud External Key Manager and Cloud Key Management Service, you can use multi-regional and dual-regional services and follow the standard HA/DR techniques outlined in Disaster recovery scenarios for data. Can data be processed outside of where it rests? 
You can use products such as GKE Enterprise to provide a hybrid environment to address your requirements or implement product-specific controls such as load-balancing Compute Engine instances across multiple zones in a region. Use the Organization Policy resource locations constraint to restrict where resources can be deployed. If data can be processed outside of where it needs to be at rest, you can design the "processing" parts of your application by following the guidance in Disaster recovery building blocks and Disaster recovery scenarios for applications. Configure a VPC Service Controls perimeter to control who can access the data and to restrict what resources can process the data. Can you use more than one region? If you can use more than one region, you can use many of the techniques outlined in the Disaster Recovery series. Check the multi-region and region constraints for Google Cloud products. Do you need to restrict who can access your application? Google Cloud has several products and features that help you restrict who can access your applications: Identity-Aware Proxy (IAP). Verifies a user's identity and then determines whether that user should be permitted to access an application. Organization policy uses the domain-restricted sharing constraint to define the allowed Cloud Identity or Google Workspace IDs that are permitted in IAM policies. Product-specific locality controls. Refer to each product you want to use in your architecture for appropriate locality constraints. For example, if you're using Cloud Storage, create buckets in specified regions. Identify the services that you can use Identify what services can be used based on your locality and regional granularity requirements. Designing applications that are subject to locality restrictions requires understanding which products can be restricted to which regions and what controls can be applied to enforce location restriction requirements. Identify the regional granularity for your application and data Identify the regional granularity for your application and data by answering these questions: Can you use multi-regional services in your design? By using multi-regional services, you can create highly available, resilient architectures. Does access to your application have location restrictions? Use these Google Cloud products to help enforce where your applications can be accessed from: Google Cloud Armor. Lets you implement IP and geo-based constraints. VPC Service Controls. Provides context-based perimeter security. Is your data at rest restricted to a specific region? If you use managed services, ensure that the services you are using can be configured so that your data stored in the service is restricted to a specific region. For example, use BigQuery locality restrictions to dictate where your datasets are stored and backed up to. What regions do you need to restrict your application to? Some Google Cloud products do not have regional restrictions. Use the Cloud locations page and the product-specific pages to validate what regions you can use the product in and what mitigating features, if any, are available to restrict your application to a specific region. Meeting locality restrictions using Google Cloud products This section details features and mitigating techniques for using Google Cloud products as part of your DR strategy for locality-restricted workloads. We recommend reading this section along with Disaster recovery building blocks.
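As a sketch of setting the Organization Policy resource locations constraint recommended earlier (assuming the google-cloud-org-policy client; the project ID and value group are illustrative, and you can set the same constraint through the console or gcloud instead):

```python
from google.cloud import orgpolicy_v2

client = orgpolicy_v2.OrgPolicyClient()

# Allow resource creation only in European locations for this project.
# "in:europe-locations" is a predefined value group for this constraint.
policy = orgpolicy_v2.Policy(
    name="projects/my-project/policies/gcp.resourceLocations",
    spec=orgpolicy_v2.PolicySpec(
        rules=[
            orgpolicy_v2.PolicySpec.PolicyRule(
                values=orgpolicy_v2.PolicySpec.PolicyRule.StringValues(
                    allowed_values=["in:europe-locations"]
                )
            )
        ]
    ),
)

client.create_policy(parent="projects/my-project", policy=policy)
```

The same policy can be applied at the folder or organization level so that it is inherited by the whole resource hierarchy, which is usually preferable to per-project policies.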
Organization policies The Organization Policy Service gives you centralized control over your Google Cloud resources. Using organization policies, you can configure restrictions across your entire resource hierarchy. Consider the following policy constraints when architecting for locality-restricted workloads: Domain-restricted sharing: By default, all user identities are allowed to be added to IAM policies. The allowed/denied list must specify one or more Cloud Identity or Google Workspace customer identities. If this constraint is active, only identities in the allowed list are eligible to be added to IAM policies. Location-restricted resources: This constraint refers to the set of locations where location-based Google Cloud resources can be created. Policies for this constraint can specify as allowed or denied locations any of the following: multi-regions such as Asia and Europe, regions such as us-east1 or europe-west1, or individual zones such as europe-west1-b. For a list of supported services, see Resource locations supported services. Encryption If your data locality requirements concern restricting who can access the data, then implementing encryption methods might be an applicable strategy. By using external key management systems to manage keys that you supply outside of Google Cloud, you might be able to deploy a multi-region architecture to meet your locality requirements. Without the keys available, the data cannot be decrypted. Google Cloud has two products that let you use keys that you manage: Cloud External Key Manager (Cloud EKM): Cloud EKM lets you encrypt data in BigQuery and Compute Engine with encryption keys that are stored and managed in a third-party key management system that's deployed outside Google's infrastructure. Customer-supplied encryption keys (CSEK): You can use CSEK with Cloud Storage and Compute Engine. Google uses your key to protect the Google-generated keys that are used to encrypt and decrypt your data. If you provide a customer-supplied encryption key, Google does not permanently store your key on Google's servers or otherwise manage your key. Instead, you provide your key for each operation, and your key is purged from Google's servers after the operation is complete. When managing your own key infrastructure, you must carefully consider latency and reliability issues and ensure that you implement appropriate HA and recovery processes for your external key manager. You must also understand your RTO requirements. The keys are integral to writing the data, so RPO isn't the critical concern because no data can be safely written without the keys. The real concern is RTO because, without your keys, you cannot decrypt or safely write data. Storage When architecting DR for locality-restricted workloads, you must ensure that data at rest is located in the region you require. You can configure Google Cloud object and file store services to meet your requirements. Cloud Storage You can create Cloud Storage buckets that meet locality restrictions. Beyond the features discussed in the Cloud Storage section of the Disaster Recovery Building Blocks article, when you architect for DR for locality-restricted workloads, consider whether redundancy across regions is a requirement: objects stored in multi-regions and dual-regions are stored in at least two geographically separate areas, regardless of their storage class. This redundancy ensures maximum availability of your data, even during large-scale disruptions, such as natural disasters.
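To illustrate the CSEK approach described in the Encryption section above (assuming the google-cloud-storage client; the bucket name, object path, and key handling are illustrative, and in practice the key would come from your own key management system outside Google Cloud):

```python
import os
from google.cloud import storage

# Illustrative 32-byte AES-256 key that you manage outside Google Cloud.
# In practice, load this from your own key management system.
csek = os.urandom(32)

client = storage.Client()
bucket = client.bucket("my-locality-restricted-bucket")

# The key must be supplied on every write and read; Google does not store it.
blob = bucket.blob("reports/2025-02.csv", encryption_key=csek)
blob.upload_from_filename("2025-02.csv")

data = bucket.blob("reports/2025-02.csv", encryption_key=csek).download_as_bytes()
```

Because the object is unreadable without the key, keeping the key outside a multi-region bucket's locations is one way to pair broad storage redundancy with locality-based access control, as the Encryption section suggests.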
Dual-regions achieve this redundancy by using a pair of regions that you choose. Multi-regions achieve this redundancy by using any combination of data centers in the specified multi-region, which might include data centers that are not explicitly listed as available regions. Data synchronization between the buckets occurs asynchronously. If you need a high degree of confidence that the data has been written to an alternative region to meet your RTO and RPO values, one strategy is to use two single-region buckets. You can then either dual-write the object or write to one bucket and have Cloud Storage copy it to the second bucket. Single-region mitigation strategies when using Cloud Storage If your requirements restrict you to using a single region, then you can't implement an architecture that is redundant across geographic locations using Google Cloud alone. In this scenario, consider using one or more of the following techniques: Adopt a multi-cloud or hybrid strategy. This approach lets you choose another cloud or on-premises solution in the same geographic area as your Google Cloud region. You can store copies of your data in Cloud Storage buckets on-premises, or alternatively, use Cloud Storage as the target for your backup data. To use this approach, do the following: Ensure that distance requirements are met. If you are using AWS as your other cloud provider, refer to the Cloud Storage interoperability guide for how to configure access to Amazon S3 using Google Cloud tools. For other clouds and on-premises solutions, consider open source solutions such as minIO and Ceph to provide an on-premises object store. Consider using Cloud Composer with the gcloud storage command-line utility to transfer data from an on-premises object store to Cloud Storage. Use the Transfer service for on-premises data to copy data stored on-premises to Cloud Storage. Implement encryption techniques. If your locality requirements permit using encryption techniques as a workaround, you can then use multi-region or dual-region buckets. Filestore Filestore provides managed file storage that you can deploy in regions and zones according to your locality restriction requirements. Managed databases Disaster recovery scenarios for data describes methods for implementing backup and recovery strategies for Google Cloud managed database services. In addition to using these methods, you must also consider locality restrictions for each managed database service that you use in your architecture—for example: Bigtable is available in zonal locations in a region. Production instances have a minimum of two clusters, which must be in unique zones in the region. Replication between clusters in a Bigtable instance is automatically managed by Google. Bigtable synchronizes your data between the clusters, creating a separate, independent copy of your data in each zone where your instance has a cluster. Replication makes it possible for incoming traffic to fail over to another cluster in the same instance. BigQuery has locality restrictions that dictate where your datasets are stored. Dataset locations can be regional or multi-regional. To provide resilience during a regional disaster, you need to back up data to another geographic location. In the case of BigQuery multi-regions, we recommend that you avoid backing up to regions within the scope of the multi-region. If you select the EU multi-region, you exclude Zürich and London from being part of the multi-region configuration. 
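To illustrate the BigQuery locality restriction described above (assuming the google-cloud-bigquery client; the project and dataset IDs are illustrative), the following creates a dataset pinned to a single region outside the EU multi-region:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # Illustrative project ID.

# The dataset location is fixed at creation time and dictates where the data
# is stored; it can't be changed later.
dataset = bigquery.Dataset("my-project.restricted_dataset")
dataset.location = "europe-west2"  # London, which the EU multi-region excludes.

client.create_dataset(dataset, exists_ok=True)
```

A backup dataset for DR would be created the same way in a second permitted region, with data copied between them by your chosen backup process.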
For guidance on implementing a DR solution for BigQuery that addresses the unlikely event of a physical regional loss, see Loss of region. To understand the implications of adopting single-region or multi-region BigQuery configurations, see the BigQuery documentation. You can store your Firestore data in either a multi-region location or a regional location. Data in a multi-region location operates in a multi-zone and multi-region replicated configuration. Select a multi-region location if your locality restriction requirements permit it and you want to maximize the availability and durability of your database. Multi-region locations can withstand the loss of entire regions and maintain availability without data loss. Data in a regional location operates in a multi-zone replicated configuration. You can configure Cloud SQL for high availability. A Cloud SQL instance configured for HA is also called a regional instance and is located in a primary and secondary zone in the configured region. In a regional instance, the configuration is made up of a primary instance and a standby instance. Ensure that you understand the typical failover time from the primary to the standby instance. If your requirements permit, you can configure Cloud SQL with cross-region replicas. If a disaster occurs, the read replica in a different region can be promoted. Because read replicas can be configured for HA in advance, they don't require additional changes for HA after that promotion. You can also configure read replicas to have their own cross-region replicas that can offer immediate protection from regional failures after replica promotion. You can configure Spanner as either regional or multi-region. For any regional configuration, Spanner maintains three read-write replicas, each in a different Google Cloud zone in that region. Each read-write replica contains a full copy of your operational database that is able to serve read/write and read-only requests. Spanner uses replicas in different zones so that if a single-zone failure occurs, your database remains available. A Spanner multi-region deployment provides a consistent environment across multiple regions, including two read-write regions and one witness region containing a witness replica. You must validate that the locations of all the regions meet your locality restriction requirements. Compute Engine Compute Engine resources are global, regional, or zonal. Compute Engine resources such as virtual machine instances or zonal persistent disks are referred to as zonal resources. Other resources, such as static external IP addresses, are regional. Regional resources can be used by any resources in that region, regardless of zone, while zonal resources can only be used by other resources in the same zone. Putting resources in different zones in a region isolates those resources from most types of physical infrastructure failure and infrastructure software service failures. Also, putting resources in different regions provides an even higher degree of failure independence. This approach lets you design robust systems with resources spread across different failure domains. For more information, see regions and zones. Using on-premises or another cloud as a production site You might be using a Google Cloud region that prevents you from using dual or multi-region combinations for your DR architecture.
To meet locality restrictions in this case, consider using your own data center or another cloud as the production site or as the failover site. This section discusses Google Cloud products that are optimized for hybrid workloads. DR architectures that use on-premises and Google Cloud are discussed in Disaster recovery scenarios for applications. GKE Enterprise GKE Enterprise is Google Cloud's open hybrid and multi-cloud application platform that helps you securely run your container-based workloads anywhere. GKE Enterprise enables consistency between on-premises and cloud environments, letting you have a consistent operating model and a single view of your Google Kubernetes Engine (GKE) clusters, no matter where you are running them. As part of your DR strategy, GKE Enterprise simplifies the configuration and operation of HA and failover architectures across dissimilar environments (between Google Cloud and on-premises or another cloud). You can run your production GKE Enterprise clusters on-premises, and if a disaster occurs, you can fail over to run the same workloads on GKE Enterprise clusters in Google Cloud. GKE Enterprise on Google Cloud has three types of clusters: Single-zone cluster. A single-zone cluster has a single control plane running in one zone. This control plane manages workloads on nodes that are running in the same zone. Multi-zonal cluster. A multi-zonal cluster has a single replica of the control plane running in a single zone, and has nodes running in multiple zones. Regional cluster. Regional clusters replicate cluster primaries and nodes across multiple zones in a single region. For example, a regional cluster in the us-east1 region creates replicas of the control plane and nodes in three us-east1 zones: us-east1-b, us-east1-c, and us-east1-d. Regional clusters are the most resilient to zonal outages. Note: For more information about region-specific considerations, see Geography and regions. Google Cloud VMware Engine Google Cloud VMware Engine lets you run VMware workloads in the cloud. If your on-premises workloads are VMware-based, you can architect your DR solution to run on the same virtualization solution that you are running on-premises. You can select the region that meets your locality requirements. Networking When your DR plan is based on moving data from on-premises to Google Cloud or from another cloud provider to Google Cloud, then you must address your networking strategy. For more information, see the Transferring data to and from Google Cloud section of the "Disaster recovery building blocks" document. VPC Service Controls When planning your DR strategy, you must ensure that the security controls that apply to your production environment also extend to your failover environment. By using VPC Service Controls, you can define a security perimeter from on-premises networks to your projects in Google Cloud. VPC Service Controls enables a context-aware access approach to controlling your cloud resources. You can create granular access control policies in Google Cloud based on attributes like user identity and IP address. These policies help ensure that the appropriate security controls are in place in your on-premises and Google Cloud environments.
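Returning to the GKE cluster types described above, the following minimal sketch (assuming the google-cloud-container client; the project, region, and cluster name are illustrative) creates a regional cluster, the type that is most resilient to zonal outages:

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Passing a region (not a zone) as the location creates a regional cluster,
# with the control plane and nodes replicated across the region's zones.
parent = "projects/my-project/locations/us-east1"

cluster = container_v1.Cluster(
    name="dr-regional-cluster",
    initial_node_count=1,  # Node count per zone in a regional cluster.
)

# Returns a long-running operation that you can poll until the cluster is ready.
operation = client.create_cluster(parent=parent, cluster=cluster)
```

Choosing a region that satisfies your locality requirements is the key decision here; the same call with a zonal location would instead create a single-zone cluster.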
What's next Read other articles in this DR series: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Read the whitepaper Data residency, operational transparency, and privacy for European customers on Google Cloud (PDF). For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Architectural_approaches.txt b/Architectural_approaches.txt new file mode 100644 index 0000000000000000000000000000000000000000..06dfee65d334238f26b18551e3b45b749c4d47d5 --- /dev/null +++ b/Architectural_approaches.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/network-architecture +Date Scraped: 2025-02-23T11:52:54.677Z + +Content: +Home Docs Cloud Architecture Center Send feedback Designing networks for migrating enterprise workloads: Architectural approaches Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This document introduces a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. These architectures emphasize advanced connectivity, zero-trust security principles, and manageability across a hybrid environment. As described in an accompanying document, Architectures for Protecting Cloud Data Planes, enterprises deploy a spectrum of architectures that factor in connectivity and security needs in the cloud. We classify these architectures into three distinct architectural patterns: lift-and-shift, hybrid services, and zero-trust distributed. The current document considers different security approaches, depending on which architecture an enterprise has chosen. It also describes how to realize those approaches using the building blocks provided by Google Cloud. You should use this security guidance in conjunction with other architectural guidance covering reliability, availability, scale, performance, and governance. This document is designed to help systems architects, network administrators, and security administrators who are planning to migrate on-premises workloads to the cloud. It assumes the following: You are familiar with data center networking and security concepts. You have existing workloads in your on-premises data center and are familiar with what they do and who their users are. You have at least some workloads that you plan to migrate. You are generally familiar with the concepts described in Architectures for Protecting Cloud Data Planes. The series consists of the following documents: Designing networks for migrating enterprise workloads: Architectural approaches (this document) Networking for secure intra-cloud access: Reference architectures Networking for internet-facing application delivery: Reference architectures Networking for hybrid and multi-cloud workloads: Reference architectures This document summarizes the three primary architectural patterns and introduces the resource building blocks that you can use to create your infrastructure. Finally, it describes how to assemble the building blocks into a series of reference architectures that match the patterns. You can use these reference architectures to guide your own architecture.
This document mentions virtual machines (VMs) as examples of workload resources. The information applies to other resources that use VPC networks, like Cloud SQL instances and Google Kubernetes Engine nodes. Overview of architectural patterns Typically, network engineers have focused on building the physical networking infrastructure and security infrastructure in on-premises data centers. The journey to the cloud has changed this approach because cloud networking constructs are software-defined. In the cloud, application owners have limited control of the underlying infrastructure stack. They need a model that has a secure perimeter and provides isolation for their workloads. In this series, we consider three common architectural patterns. These patterns build on one another, and they can be seen as a spectrum rather than a strict choice. Lift-and-shift pattern In the lift-and-shift architectural pattern, enterprise application owners migrate their workloads to the cloud without refactoring those workloads. Network and security engineers use Layer 3 and Layer 4 controls to provide protection using a combination of network virtual appliances that mimic on-premises physical devices and cloud firewall rules in the VPC network. Workload owners deploy their services in VPC networks. Hybrid services pattern Workloads that are built using lift-and-shift might need access to cloud services such as BigQuery or Cloud SQL. Typically, access to such cloud services is at Layer 4 and Layer 7. In this context, isolation and security cannot be done strictly at Layer 3. Therefore, service networking and VPC Service Controls are used to provide connectivity and security, based on the identities of the service that's being accessed and the service that's requesting access. In this model, it's possible to express rich access-control policies. Zero-trust distributed pattern In a zero-trust architecture, enterprise applications extend security enforcement beyond perimeter controls. Inside the perimeter, workloads can communicate with other workloads only if their IAM identity has specific permission, which is denied by default. In a Zero Trust Distributed Architecture, trust is identity-based and enforced for each application. Workloads are built as microservices that have centrally issued identities. That way, services can validate their callers and make policy-based decisions for each request about whether that access is acceptable. This architecture is often implemented using distributed proxies (a service mesh) instead of using centralized gateways. Enterprises can enforce zero-trust access from users and devices to enterprise applications by configuring Identity-Aware Proxy (IAP). IAP provides identity- and context-based controls for user traffic from the internet or intranet. Combining patterns Enterprises that are building or migrating their business applications to the cloud usually use a combination of all three architectural patterns. Google Cloud offers a portfolio of products and services that serve as building blocks to implement the cloud data plane that powers the architectural patterns. These building blocks are discussed later in this document. The combination of controls that are provided in the cloud data plane, together with administrative controls to manage cloud resources, form the foundation of an end-to-end security perimeter. The perimeter that's created by this combination lets you govern, deploy, and operate your workloads in the cloud. 
Resource hierarchy and administrative controls This section presents a summary of the administrative controls that Google Cloud provides as resource containers. The controls include Google Cloud organization resources, folders, and projects that let you group and hierarchically organize cloud resources. This hierarchical organization provides you with an ownership structure and with anchor points for applying policy and controls. A Google organization resource is the root node in the hierarchy and is the foundation for creating deployments in the cloud. An organization resource can have folders and projects as children. A folder has projects or other folders as children. All other cloud resources are the children of projects. You use folders as a method of grouping projects. Projects form the basis for creating, enabling, and using all Google Cloud services. Projects let you manage APIs, enable billing, add and remove collaborators, and manage permissions. Using Google Identity and Access Management (IAM), you can assign roles and define access policies and permissions at all resource hierarchy levels. IAM policies are inherited by resources lower in the hierarchy. These policies can't be altered by resource owners who are lower in the hierarchy. In some cases, the identity and access management is provided at a more granular level, for example at the scope of objects in a namespace or cluster as in Google Kubernetes Engine. Design considerations for Google Virtual Private Cloud networks When you're designing a migration strategy to the cloud, it's important to develop a strategy for how your enterprise will use VPC networks. You can think of a VPC network as a virtual version of your traditional physical network. It is a completely isolated, private network partition. By default, workloads or services that are deployed in one VPC network cannot communicate with jobs in another VPC network. VPC networks therefore enable workload isolation by forming a security boundary. Because each VPC network in the cloud is a fully virtual network, each has its own private IP address space. You can therefore use the same IP address in multiple VPC networks without conflict. A typical on-premises deployment might consume a large portion of the RFC 1918 private IP address space. On the other hand, if you have workloads both on-premises and in VPC networks, you can reuse the same address ranges in different VPC networks, as long as those networks aren't connected or peered, thus using up IP address space less quickly. VPC networks are global VPC networks in Google Cloud are global, which means that resources deployed in a project that has a VPC network can communicate with each other directly using Google's private backbone. As figure 1 shows, you can have a VPC network in your project that contains subnetworks in different regions that span multiple zones. The VMs in any region can communicate privately with each other using the local VPC routes. Figure 1. Google Cloud global VPC network implementation with subnetworks configured in different regions. Sharing a network using Shared VPC Shared VPC lets an organization resource connect multiple projects to a common VPC network so that they can communicate with each other securely using internal IP addresses from the shared network. Network administrators for that shared network apply and enforce centralized control over network resources. When you use Shared VPC, you designate a project as a host project and attach one or more service projects to it. 
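As one illustration of the host and service project relationship just described, the following sketch calls the Compute Engine API's Shared VPC methods (named XPN in the API) through the Google API Python client. The project IDs are hypothetical, and in practice a Shared VPC Admin typically performs these steps with gcloud or Terraform; treat the request body as an assumption to verify against the Compute Engine API reference.

```python
# Sketch: designate a Shared VPC host project and attach a service project.
# Assumes Application Default Credentials with the Shared VPC Admin role.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

HOST_PROJECT = "central-network-host"   # hypothetical host project ID
SERVICE_PROJECT = "team-a-workloads"    # hypothetical service project ID

# Enable the host project for Shared VPC (XPN in the Compute Engine API).
compute.projects().enableXpnHost(project=HOST_PROJECT).execute()

# Attach the service project so that it can use subnets from the host project.
compute.projects().enableXpnResource(
    project=HOST_PROJECT,
    body={"xpnResource": {"id": SERVICE_PROJECT, "type": "PROJECT"}},
).execute()
```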
The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network. Enterprises typically use Shared VPC networks when they need network and security administrators to centralize management of network resources such as subnets and routes. At the same time, Shared VPC networks let application and development teams create and delete VM instances and deploy workloads in designated subnets using the service projects. Isolating environments by using VPC networks Using VPC networks to isolate environments has a number of advantages, but you need to consider a few disadvantages as well. This section addresses these tradeoffs and describes common patterns for implementing isolation. Reasons to isolate environments Because VPC networks represent an isolation domain, many enterprises use them to keep environments or business units in separate domains. Common reasons to create VPC-level isolation are the following: An enterprise wants to establish default-deny communications between one VPC network and another, because these networks represent an organizationally meaningful distinction. For more information, see Common VPC network isolation patterns later in this document. An enterprise needs to have overlapping IP address ranges because of pre-existing on-premises environments, because of acquisitions, or because of deployments to other cloud environments. An enterprise wants to delegate full administrative control of a network to a portion of the enterprise. Disadvantages of isolating environments Creating isolated environments with VPC networks can have some disadvantages. Having multiple VPC networks can increase the administrative overhead of managing the services that span multiple networks. This document discusses techniques that you can use to manage this complexity. Common VPC network isolation patterns There are some common patterns for isolating VPC networks: Isolate development, staging, and production environments. This pattern lets enterprises fully segregate their development, staging, and production environments from each other. In effect, this structure maintains multiple complete copies of applications, with progressive rollout between each environment. In this pattern, VPC networks are used as security boundaries. Developers have a high degree of access to development VPC networks to do their day-to-day work. When development is finished, an engineering production team or a QA team can migrate the changes to a staging environment, where the changes can be tested in an integrated fashion. When the changes are ready to be deployed, they are sent to a production environment. Isolate business units. Some enterprises want to impose a high degree of isolation between business units, especially in the case of units that were acquired or ones that demand a high degree of autonomy and isolation. In this pattern, enterprises often create a VPC network for each business unit and delegate control of that VPC to the business unit's administrators. The enterprise uses techniques that are described later in this document to expose services that span the enterprise or to host user-facing applications that span multiple business units. Recommendation for creating isolated environments We recommend that you design your VPC networks to have the broadest domain that aligns with the administrative and security boundaries of your enterprise. 
You can achieve additional isolation between workloads that run in the same VPC network by using security controls such as firewalls. For more information about designing and building an isolation strategy for your organization, see Best practices and reference architectures for VPC design and Networking in the Google Cloud enterprise foundations blueprint. Building blocks for cloud networking This section discusses the important building blocks for network connectivity, network security, service networking, and service security. Figure 2 shows how these building blocks relate to one another. You can use one or more of the products that are listed in a given row. Figure 2. Building blocks in the realm of cloud network connectivity and security. The following sections discuss each of the building blocks and which Google Cloud services you can use for each of the blocks. Network connectivity The network connectivity block is at the base of the hierarchy. It's responsible for connecting Google Cloud resources to on-premises data centers or other clouds. Depending on your needs, you might need only one of these products, or you might use all of them to handle different use cases. Cloud VPN Cloud VPN lets you connect your remote branch offices or other cloud providers to Google VPC networks through IPsec VPN connections. Traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway, thereby helping to protect data as it traverses the internet. Cloud VPN lets you connect your on-premises environment and Google Cloud for less cost, but lower bandwidth, than Cloud Interconnect (described in the next section). You can provision an HA VPN to meet an SLA requirement of up to 99.99% availability if you have the conforming architecture. For example, Cloud VPN is a good choice for non-mission-critical use cases or for extending connectivity to other cloud providers. Cloud Interconnect Cloud Interconnect provides enterprise-grade dedicated connectivity to Google Cloud that has higher throughput and more reliable network performance compared to using VPN or internet ingress. Dedicated Interconnect provides direct physical connectivity to Google's network from your routers. Partner Interconnect provides dedicated connectivity through an extensive network of partners, who might offer broader reach or other bandwidth options than Dedicated Interconnect does. Cross-Cloud Interconnect provides dedicated direct connectivity from your VPC networks to other cloud providers. Dedicated Interconnect requires that you connect at a colocation facility where Google has a presence, but Partner Interconnect does not. Cloud Interconnect ensures that the traffic between your on-premises network or other cloud network and your VPC network doesn't traverse the public internet. You can provision these Cloud Interconnect connections to meet an SLA requirement of up to 99.99% availability if you provision the appropriate architecture. You can consider using Cloud Interconnect to support workloads that require low latency, high bandwidth, and predictable performance while ensuring that all of your traffic stays private. Network Connectivity Center for hybrid Network Connectivity Center provides site-to-site connectivity among your on-premises and other cloud networks. It does this using Google's backbone network to deliver reliable connectivity among your sites. 
Additionally, you can extend your existing SD-WAN overlay network to Google Cloud by configuring a VM or a third-party vendor router appliance as a logical spoke attachment. You can access resources inside the VPC networks using the router appliance, VPN, or Cloud Interconnect network as spoke attachments. You can use Network Connectivity Center to consolidate connectivity between your on-premises sites, your presence in other clouds, and Google Cloud, and manage it all using a single view. Network Connectivity Center for VPC networks Network Connectivity Center also lets you create a mesh or star topology among many VPC networks using VPC spokes. You can connect the hub to on-premises or other clouds using Network Connectivity Center hybrid spokes. VPC Network Peering VPC Network Peering lets you connect Google VPC networks so that workloads in different VPC networks can communicate internally, regardless of whether they belong to the same project or to the same organization resource. Traffic stays within Google's network and doesn't traverse the public internet. VPC Network Peering requires that the networks to be peered don't have overlapping IP addresses. Network security The network security block sits on top of the network connectivity block. It's responsible for allowing or denying access to resources based on the characteristics of IP packets. Cloud NGFW Cloud Next Generation Firewall (Cloud NGFW) is a distributed firewall service that lets you apply firewall policies at the organization, folder, and network level. Enabled firewall rules are always enforced, protecting your instances regardless of their configuration or the operating system, or even whether the VMs have fully booted. The rules are applied on a per-instance basis, meaning that the rules protect connections between VMs within a given network as well as connections to outside the network. Rule application can be governed using IAM-governed Tags, which let you control which VMs are covered by particular rules. Cloud NGFW also offers the option to perform L7 inspection of packets. Packet mirroring Packet mirroring clones the traffic of specific instances in your VPC network and forwards it to collectors for examination. Packet mirroring captures all traffic and packet data, including payloads and headers. You can configure mirroring for both ingress and egress traffic, for only ingress traffic, or for only egress traffic. The mirroring happens on the VM instances, not on the network. Network virtual appliance Network virtual appliances let you apply security and compliance controls to the virtual network that are consistent with controls in the on-premises environment. You can do this by deploying VM images that are available in the Google Cloud Marketplace to VMs that have multiple network interfaces, each attached to a different VPC network, to perform a variety of network virtual functions. Typical use cases for virtual appliances are as follows: Next-generation firewall (NGFW). NGFW NVAs deliver protection in situations that aren't covered by Cloud NGFW, or they provide management consistency with on-premises NGFW installations. Intrusion detection system/intrusion prevention system (IDS/IPS). A network-based IDS provides visibility into potentially malicious traffic. To prevent intrusions, IPS devices can block malicious traffic from reaching its destination. Google Cloud offers Cloud Intrusion Detection System (Cloud IDS) as a managed service. Secure web gateway (SWG).
A SWG blocks threats from the internet by letting enterprises apply corporate policies on traffic that's traveling to and from the internet. This is done by using URL filtering, malicious code detection, and access control. Google Cloud offers Secure Web Proxy as a managed service. Network address translation (NAT) gateway. A NAT gateway translates IP addresses and ports. For example, this translation helps avoid overlapping IP addresses. Google Cloud offers Cloud NAT as a managed service. Web application firewall (WAF). A WAF is designed to block malicious HTTP(S) traffic that's going to a web application. Google Cloud offers WAF functionality through Google Cloud Armor security policies. The exact functionality differs between WAF vendors, so it's important to determine what you need. Cloud IDS Cloud IDS is an intrusion detection service that provides threat detection for intrusions, malware, spyware, and command-and-control attacks on your network. Cloud IDS works by creating a Google-managed peered network containing VMs that will receive mirrored traffic. The mirrored traffic is then inspected by Palo Alto Networks threat protection technologies to provide advanced threat detection. Cloud IDS provides full visibility into intra-subnet traffic, letting you monitor VM-to-VM communication and to detect lateral movement. Cloud NAT Cloud NAT provides fully managed, software-defined network address translation support for applications. It enables source network address translation (source NAT or SNAT) for internet-facing traffic from VMs that don't have external IP addresses. Firewall Insights Firewall Insights helps you understand and optimize your firewall rules. It provides data about how your firewall rules are being used, exposes misconfigurations, and identifies rules that could be made more strict. It also uses machine learning to predict future usage of your firewall rules so that you can make informed decisions about whether to remove or tighten rules that seem overly permissive. Network logging You can use multiple Google Cloud products to log and analyze network traffic. Firewall Rules Logging lets you audit, verify, and analyze the effects of your firewall rules. For example, you can determine if a firewall rule that's designed to deny traffic is functioning as intended. Firewall Rules Logging is also useful if you need to determine how many connections are affected by a given firewall rule. You enable Firewall Rules Logging individually for each firewall rule whose connections you need to log. Firewall Rules Logging is an option for any firewall rule, regardless of the action (allow or deny) or direction (ingress or egress) of the rule. VPC Flow Logs records a sample of network flows that are sent from and received by VM instances, including instances used as Google Kubernetes Engine (GKE) nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. Service networking Service networking blocks are responsible for providing lookup services that tell services where a request should go (DNS, Service Directory) and with getting requests to the correct place (Private Service Connect, Cloud Load Balancing). Cloud DNS Workloads are accessed using domain names. Cloud DNS offers reliable, low-latency translation of domain names to IP addresses that are located anywhere in the world. Cloud DNS offers both public zones and private managed DNS zones. 
A public zone is visible to the public internet, while a private zone is visible only from one or more VPC networks that you specify. Cloud Load Balancing Within Google Cloud, load balancers are a crucial component: they route traffic to various services to ensure speed and efficiency, and to help ensure security globally for both internal and external traffic. Our load balancers also let traffic be routed and scaled across multiple clouds or hybrid environments. This makes Cloud Load Balancing the "front door" through which any application can be scaled no matter where it is or in how many places it's hosted. Google offers various types of load balancing: global and regional, external and internal, and Layer 4 and Layer 7. Service Directory Service Directory lets you manage your service inventory, providing a single secure place to publish, discover, and connect services, with all operations underpinned by identity-based access control. It lets you register named services and their endpoints. Registration can be either manual or by using integrations with Private Service Connect, GKE, and Cloud Load Balancing. Service discovery is possible by using explicit HTTP and gRPC APIs, as well as by using Cloud DNS. Cloud Service Mesh Cloud Service Mesh is designed to run complex, distributed applications by enabling a rich set of traffic management and security policies in service mesh architectures. Cloud Service Mesh supports Kubernetes-based regional and global deployments, both on Google Cloud and on-premises, that benefit from a managed Istio product. It also supports deployments on Google Cloud that use proxies on VMs or proxyless gRPC. Private Service Connect Private Service Connect creates service abstractions by making workloads accessible across VPC networks through a single endpoint. This allows two networks to communicate in a client-server model that exposes just the service to the consumer instead of the entire network or the workload itself. A service-oriented network model allows network administrators to reason about the services that they expose between networks rather than about subnets or VPCs, enabling consumption of the services in a producer-consumer model, whether for first-party or third-party (SaaS) services. With Private Service Connect, a consumer VPC can use a private IP address to connect to a Google API or to a service in another VPC. You can extend Private Service Connect to your on-premises network to access endpoints that connect to Google APIs or to managed services in another VPC network. Private Service Connect allows consumption of services at Layer 4 or Layer 7. At Layer 4, Private Service Connect requires the producer to create one or more subnets specific to Private Service Connect. These subnets are also referred to as NAT subnets. Private Service Connect performs source NAT using an IP address that's selected from one of the Private Service Connect subnets to route the requests to a service producer. This approach lets you use overlapping IP addresses between consumers and producers. At Layer 7, you can create a Private Service Connect backend using an internal Application Load Balancer. The internal Application Load Balancer lets you choose which services are available by using a URL map. For more information, see About Private Service Connect backends. Private services access Private services access is a private connection between your VPC network and a network that's owned by Google or by a third party. Google or the third parties who offer services are known as service producers.
Private services access uses VPC Network Peering to establish the connectivity, and it requires the producer and consumer VPC networks to be peered with each other. This is different from Private Service Connect, which lets you project a single private IP address into your subnet. The private connection lets VM instances in your VPC network and the services that you access communicate exclusively by using internal IP addresses. VM instances don't need internet access or external IP addresses to reach services that are available through private services access. Private services access can also be extended to the on-premises network by using Cloud VPN or Cloud Interconnect to provide a way for the on-premises hosts to reach the service producer's network. For a list of Google-managed services that are supported using private services access, see Supported services in the Virtual Private Cloud documentation. Serverless VPC Access Serverless VPC Access makes it possible for you to connect directly to your VPC network from services hosted in serverless environments such as Cloud Run, App Engine, or Cloud Run functions. Configuring Serverless VPC Access lets your serverless environment send requests to your VPC network using internal DNS and internal IP addresses. The responses to these requests also use your virtual network. Serverless VPC Access sends internal traffic from your VPC network to your serverless environment only when that traffic is a response to a request that was sent from your serverless environment through the Serverless VPC Access connector. Serverless VPC Access has the following benefits: Requests sent to your VPC network are never exposed to the internet. Communication through Serverless VPC Access can have less latency compared to communication over the internet. Direct VPC egress Direct VPC egress lets your Cloud Run service send traffic to a VPC network without setting up a Serverless VPC Access connector. Service security The service security blocks control access to resources based on the identity of the requestor or based on higher-level understanding of packet patterns instead of just the characteristics of an individual packet. Google Cloud Armor for DDoS/WAF Google Cloud Armor is a web-application firewall (WAF) and distributed denial-of-service (DDoS) mitigation service that helps you defend your web applications and services from multiple types of threats. These threats include DDoS attacks, web-based attacks such as cross-site scripting (XSS) and SQL injection (SQLi), and fraud and automation-based attacks. Google Cloud Armor inspects incoming requests on Google's global edge. It has a built-in set of web application firewall rules to scan for common web attacks and an advanced ML-based attack detection system that builds a model of good traffic and then detects bad traffic. Finally, Google Cloud Armor integrates with Google reCAPTCHA to help detect and stop sophisticated fraud and automation-based attacks by using both endpoint telemetry and cloud telemetry. Identity Aware Proxy (IAP) Identity-Aware Proxy (IAP) provides context-aware access controls to cloud-based applications and VMs that are running on Google Cloud or that are connected to Google Cloud using any of the hybrid networking technologies. IAP verifies the user identity and determines if the user request is originating from trusted sources, based on various contextual attributes. IAP also supports TCP tunneling for SSH/RDP access from enterprise users. 
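To illustrate how an application behind IAP can confirm that a request actually passed through the proxy, the following sketch verifies the signed header that IAP adds to each request. The audience value is a placeholder whose exact format depends on how IAP fronts the application (for example, the backend service ID); treat those details as assumptions to confirm against the IAP documentation.

```python
# Sketch: verify the JWT that IAP attaches in the x-goog-iap-jwt-assertion header,
# so the backend only trusts traffic that actually traversed IAP.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

# Placeholder audience; for a backend service it has the form
# "/projects/PROJECT_NUMBER/global/backendServices/SERVICE_ID".
EXPECTED_AUDIENCE = "/projects/123456789/global/backendServices/987654321"


def identity_from_iap(iap_jwt: str):
    """Validate the IAP assertion and return the caller's subject and email claims.

    Raises ValueError if the assertion is missing, expired, or not signed by IAP.
    """
    claims = id_token.verify_token(
        iap_jwt,
        google_requests.Request(),
        audience=EXPECTED_AUDIENCE,
        certs_url="https://www.gstatic.com/iap/verify/public_key",
    )
    return claims["sub"], claims["email"]
```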
VPC Service Controls VPC Service Controls helps you mitigate the risk of data exfiltration from Google Cloud services such as Cloud Storage and BigQuery. Using VPC Service Controls helps ensure that use of your Google Cloud services happens only from approved environments. You can use VPC Service Controls to create perimeters that protect the resources and data of services that you specify by limiting access to specific cloud-native identity constructs like service accounts and VPC networks. After a perimeter has been created, access to the specified Google services is denied unless the request comes from within the perimeter. Content delivery The content delivery blocks optimize the delivery of applications and content. Cloud CDN Cloud CDN provides static content acceleration by using Google's global edge network to deliver content from a point that's closest to the user. This helps reduce latency for your websites and applications. Media CDN Media CDN is Google's media delivery solution and is built for high-throughput egress workloads. Observability The observability blocks give you visibility into your network and provide insight that you can use to troubleshoot, document, and investigate issues. Network Intelligence Center Network Intelligence Center comprises several products that address various aspects of network observability. Each product has a different focus and provides rich insights to inform administrators, architects, and practitioners about network health and issues. Reference architectures The following documents present reference architectures for different types of workloads: intra-cloud, internet-facing, and hybrid. These workload architectures are built on top of a cloud data plane that is realized using the building blocks and the architectural patterns that were outlined in earlier sections of this document. You can use the reference architectures to design ways to migrate or build workloads in the cloud. Your workloads are then underpinned by the cloud data plane and use these architectures. Although these documents don't provide an exhaustive set of reference architectures, they do cover the most common scenarios. As with the security architecture patterns that are described in Architectures for Protecting Cloud Data Planes, real-world services might use a combination of these designs. These documents discuss each workload type and the considerations for each security architecture. Networking for secure intra-cloud access: Reference architectures Networking for internet-facing application delivery: Reference architectures Networking for hybrid and multi-cloud workloads: Reference architectures What's next Migration to Google Cloud can help you to plan, design, and implement the process of migrating your workloads to Google Cloud. Landing zone design in Google Cloud has guidance for creating a landing zone network. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
Send feedback \ No newline at end of file diff --git a/Architectural_approaches_to_adopt_a_hybrid_or_multicloud_architecture.txt b/Architectural_approaches_to_adopt_a_hybrid_or_multicloud_architecture.txt new file mode 100644 index 0000000000000000000000000000000000000000..7506e0bc832fae43f0a385d5fe90ef37566d4cdf --- /dev/null +++ b/Architectural_approaches_to_adopt_a_hybrid_or_multicloud_architecture.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns/adopt +Date Scraped: 2025-02-23T11:49:46.536Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architectural approaches to adopt a hybrid or multicloud architecture Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC This document provides guidance on common and proven approaches and considerations to migrate your workload to the cloud. It expands on guidance in Design a hybrid and multicloud architecture strategy, which discusses several possible, and recommended, steps to design a strategy for adopting a hybrid or multicloud architecture. Note: The phrase migrate your workload to the cloud refers to hybrid and multicloud scenarios, not to a complete cloud migration. Cloud first A common way to begin using the public cloud is the cloud-first approach. In this approach, you deploy your new workloads to the public cloud while your existing workloads stay where they are. In that case, consider a classic deployment to a private computing environment only if a public cloud deployment is impossible for technical or organizational reasons. The cloud-first strategy has advantages and disadvantages. On the positive side, it's forward looking. You can deploy new workloads in a modernized fashion while avoiding (or at least minimizing) the hassles of migrating existing workloads. While a cloud-first approach can provide certain advantages, it could potentially result in missed opportunities for improving or using existing workloads. New workloads might represent a fraction of the overall IT landscape, and their effect on IT expenses and performance can be limited. Allocating time and resources to migrating an existing workload could potentially lead to more substantial benefits or cost savings compared to attempting to accommodate a new workload in the cloud environment. Following a strict cloud-first approach also risks increasing the overall complexity of your IT environment. This approach might create redundancies, lower performance due to potential excessive cross-environment communication, or result in a computing environment that isn't well suited for the individual workload. Also, compliance with industry regulations and data privacy laws can restrict enterprises from migrating certain applications that hold sensitive data. Considering these risks, you might be better off using a cloud-first approach only for selected workloads. Using a cloud-first approach lets you concentrate on the workloads that can benefit the most from a cloud deployment or migration. This approach also considers the modernization of existing workloads. A common example of a cloud-first hybrid architecture is when legacy applications and services holding critical data must be integrated with new data or applications. To complete the integration, you can use a hybrid architecture that modernizes legacy services by using API interfaces, which unlocks them for consumption by new cloud services and applications. 
With a cloud API management platform, like Apigee, you can implement such use cases with minimal application changes and add security, analytics, and scalability to the legacy services. Migration and modernization Hybrid multicloud and IT modernization are distinct concepts that are linked in a virtuous circle. Using the public cloud can facilitate and simplify the modernization of IT workloads. Modernizing your IT workloads can help you get more from the cloud. The primary goals of modernizing workloads are as follows: Achieve greater agility so that you can adapt to changing requirements. Reduce the costs of your infrastructure and operations. Increase reliability and resiliency to minimize risk. However, it might not be feasible to modernize every application in the migration process at the same time. As described in Migration to Google Cloud, you can implement one of the following migration types, or even combine multiple types as needed: Rehost (lift and shift) Replatform (lift and optimize) Refactor (move and improve) Rearchitect (continue to modernize) Rebuild (remove and replace, sometimes called rip and replace) Repurchase When making strategic decisions about your hybrid and multicloud architectures, it's important to consider the feasibility of your strategy from a cost and time perspective. You might want to consider a phased migration approach, starting with lifting and shifting or replatforming and then refactoring or rearchitecting as the next step. Typically, lifting and shifting helps to optimize applications from an infrastructure perspective. After applications are running in the cloud, it's easier to use and integrate cloud services to further optimize them using cloud-first architectures and capabilities. Also, these applications can still communicate with other environments over a hybrid network connection. For example, you can refactor or rearchitect a large, monolithic VM-based application and turn it into several independent microservices, based on a cloud-based microservice architecture. In this example, the microservices architecture uses Google Cloud managed container services like Google Kubernetes Engine (GKE) or Cloud Run. However, if the architecture or infrastructure of an application isn't supported in the target cloud environment as it is, you might consider starting with replatforming, refactoring, or rearchitecting your migration strategy to overcome those constraints where feasible. When using any of these migration approaches, consider modernizing your applications (where applicable and feasible). Modernization can require adopting and implementing Site Reliability Engineering (SRE) or DevOps principles, such that you might also need to extend application modernization to your private environment in a hybrid setup. Even though implementing SRE principles involves engineering at its core, it's more of a transformation process than a technical challenge. As such, it will likely require procedural and cultural changes. To learn more about how the first step to implementing SRE in an organization is to get leadership buy-in, see With SRE, failing to plan is planning to fail. Mix and match migration approaches Each migration approach discussed here has certain strengths and weaknesses. A key advantage of following a hybrid and multicloud strategy is that it isn't necessary to settle on a single approach. Instead, you can decide which approach works best for each workload or application stack, as shown in the following diagram. 
This conceptual diagram illustrates the various migration and modernization paths or approaches that can be simultaneously applied to different workloads, driven by the unique business, technical requirements, and objectives of each workload or application. In addition, it's not necessary that the same application stack components follow the same migration approach or strategy. For example: The backend on-premises database of an application can be replatformed from self-hosted MySQL to a managed database using Cloud SQL in Google Cloud. The application frontend virtual machines can be refactored to run on containers using GKE Autopilot, where Google manages the cluster configuration, including nodes, scaling, security, and other preconfigured settings. The on-premises hardware load balancing solution and web application firewall WAF capabilities can be replaced with Cloud Load Balancing and Google Cloud Armor. Choose rehost (lift and shift), if any of the following is true of the workloads: They have a relatively small number of dependencies on their environment. They aren't considered worth refactoring, or refactoring before migration isn't feasible. They are based on third-party software. Consider refactor (move and improve) for these types of workloads: They have dependencies that must be untangled. They rely on operating systems, hardware, or database systems that can't be accommodated in the cloud. They aren't making efficient use of compute or storage resources. They can't be deployed in an automated fashion without some effort. Consider whether rebuild (remove and replace) meets your needs for these types of workloads: They no longer satisfy current requirements. They can be incorporated with other applications that provide similar capabilities without compromising business requirements. They are based on third-party technology that has reached its end of life. They require third-party license fees that are no longer economical. The Rapid Migration Program shows how Google Cloud helps customers to use best practices, lower risk, control costs, and simplify their path to cloud success. Previous arrow_back Plan a hybrid and multicloud strategy Next Other considerations arrow_forward Send feedback \ No newline at end of file diff --git a/Architecture.txt b/Architecture.txt new file mode 100644 index 0000000000000000000000000000000000000000..912fa8b31b1b6eda674b37c5bf11708912c6a375 --- /dev/null +++ b/Architecture.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/architecture +Date Scraped: 2025-02-23T11:47:03.160Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecture Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC The following diagram shows the high-level architecture that is deployed by the blueprint for a single environment. You deploy this architecture across three separate environments: production, non-production, and development. This diagram includes the following: Cloud Load Balancing distributes application traffic across regions to Kubernetes service objects. Behind each service is a logical grouping of related pods. Cloud Service Mesh lets Kubernetes services communicate with each other. Kubernetes services are grouped into tenants, which are represented as Kubernetes namespaces. Tenants are an abstraction that represent multiple users and workloads that operate in a cluster, with separate RBAC for access control. 
Each tenant also has its own project for tenant-specific cloud resources such as databases, storage buckets, and Pub/Sub subscriptions. Namespaces with their own identities for accessing peer services and cloud resources. The identity is consistent across the same namespace in different clusters because of fleet Workload Identity Federation for GKE. Each environment has a separate workload identity pool to mitigate privilege escalation between environments. Each service has a dedicated pipeline that builds and deploys that service. The same pipeline is used to deploy the service into the development environment, then deploy the service into the non-production environment, and finally deploy the service into the production environment. Key architectural decisions for developer platform The following table describes the architecture decisions that the blueprint implements. Decision area Decision Reason Deployment archetype Deploy across multiple regions. Permit availability of applications during region outages. Organizational architecture Deploy on top of the enterprise foundation blueprint. Use the organizational structure and security controls that are provided by the foundation. Use the three environment folders that are set up in the foundation: development, nonproduction, and production. Provide isolation for environments that have different access controls. Developer platform cluster architecture Package and deploy applications as containers. Support separation of responsibilities, efficient operations, and application portability. Run applications on GKE clusters. Use a managed container service that is built by the company that pioneered containers. Replicate and run application containers in an active-active configuration. Achieve higher availability and rapid progressive rollouts, improving development velocity. Provision the production environment with two GKE clusters in two different regions. Achieve higher availability than a single cloud region. Provision the non-production environment with two GKE clusters in two different regions. Stage changes to cross-regional settings, such as load balancers, before deployment to production. Provision the development environment with a single GKE cluster instance. Helps reduce cost. Configure highly-available control planes for each GKE cluster. Ensure that the cluster control plane is available during upgrade and resizing. Use the concept of sameness across namespaces, services, and identity in each GKE cluster. Ensure that Kubernetes objects with the same name in different clusters are treated as the same thing. This normalization is done to make administering fleet resources easier. Enable private IP address spaces for GKE clusters through Private Service Connect access to the control plane and private node pools. Help protect the Kubernetes cluster API from scanning attacks. Enable administrative access to the GKE clusters through the Connect gateway. Use one command to fetch credentials for access to multiple clusters. Use groups and third-party identity providers to manage cluster access. Use Cloud NAT to provide GKE pods with access to resources with public IP addresses. Improve the overall security posture of the cluster, because pods are not directly exposed to the internet, but are still able to access internet-facing resources. Configure nodes to use Container-Optimized OS and Shielded GKE Nodes. Limit the attack surface of the nodes. Associate each environment with a GKE fleet. 
Permit management of sets of GKE clusters as a unit. Use the foundation infrastructure pipeline to deploy the application factory, fleet-scope pipeline, and multi-tenant infrastructure pipeline. Provide a controllable, auditable, and repeatable mechanism to deploy application infrastructure. Configure GKE clusters using GKE Enterprise configuration and policy management features. Provide a service that allows configuration-as-code for GKE clusters. Use an application factory to deploy the application CI/CD pipelines used in the blueprint. Provide a repeatable pattern to deploy application pipelines more easily. Use an application CI/CD pipeline to build and deploy the blueprint application components. Provide a controllable, auditable, and repeatable mechanism to deploy applications. Configure the application CI/CD pipeline to use Cloud Build, Cloud Deploy, and Artifact Registry. Use managed build and deployment services to optimize for security, scale, and simplicity. Use immutable containers across environments, and sign the containers with Binary Authorization. Provide clear code provenance and ensure that code has been tested across environments. Use Google Cloud Observability, which includes Cloud Logging and Cloud Monitoring. Simplify operations by using an integrated managed service of Google Cloud. Enable Container Threat Detection (a service in Security Command Center) to monitor the integrity of containers. Use a managed service that enhances security by continually monitoring containers. Control access to the GKE clusters by Kubernetes role-based access control (RBAC), which is based on Google Groups for GKE. Enhance security by linking access control to Google Cloud identities. Service architecture Use a unique Kubernetes service account for each Kubernetes service. This account acts as an IAM service account through the use of Workload Identity Federation for GKE. Enhance security by minimizing the permissions each service needs to be provided. Expose services through the GKE Gateway API. Simplify configuration management by providing a declarative-based and resource-based approach to managing ingress rules and load-balancing configurations. Run services as distributed services through the use of Cloud Service Mesh with Certificate Authority Service. Provide enhanced security through enforcing authentication between services and also provides automatic fault tolerance by redirecting traffic away from unhealthy services. Use cross-region replication for AlloyDB for PostgreSQL. Provide for high-availability in the database layer. Network architecture Shared VPC instances are configured in each environment and GKE clusters are created in service projects. Shared VPC provides centralized network configuration management while maintaining separation of environments. Use Cloud Load Balancing in a multi-cluster, multi-region configuration. Provide a single anycast IP address to access regionalized GKE clusters for high availability and low-latency services. Use HTTPS connections for client access to services. Redirect any client HTTP requests to HTTPS. Help protect sensitive data in transit and help prevent person-in-the-middle-attacks. Use Certificate Manager to manage public certificates. Manage certificates in a unified way. Protect the web interface with Google Cloud Armor. Enhance security by protecting against common web application vulnerabilities and volumetric attacks. Your decisions might vary from the blueprint. 
For information about alternatives, see Alternatives to default recommendations. What's next Read about developer platform controls (next document in this series). Send feedback \ No newline at end of file diff --git a/Architecture_and_functions_in_a_data_mesh.txt b/Architecture_and_functions_in_a_data_mesh.txt new file mode 100644 index 0000000000000000000000000000000000000000..33fe46f03ae8925760f24e6c973624ab826d755f --- /dev/null +++ b/Architecture_and_functions_in_a_data_mesh.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/data-mesh +Date Scraped: 2025-02-23T11:48:52.839Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecture and functions in a data mesh Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-03 UTC A data mesh is an architectural and organizational framework which treats data as a product (referred to in this document as data products). In this framework, data products are developed by the teams that best understand that data, and who follow an organization-wide set of data governance standards. Once data products are deployed to the data mesh, distributed teams in an organization can discover and access data that's relevant to their needs more quickly and efficiently. To achieve such a well-functioning data mesh, you must first establish the high-level architectural components and organizational roles that this document describes. This document is part of a series which describes how to implement a data mesh on Google Cloud. It assumes that you have read and are familiar with the concepts described in Build a modern, distributed Data Mesh with Google Cloud. The series has the following parts: Architecture and functions in a data mesh (this document) Design a self-service data platform for a data mesh Build data products in a data mesh Discover and consume data products in a data mesh In this series, the data mesh that's described is internal to an organization. Although it's possible to extend a data mesh architecture to provide data products to third-parties, this extended approach is outside the scope of this document. Extending a data mesh involves additional considerations beyond just the usage within an organization. Architecture The following key terms are used to define the architectural components which are described in this series: Data product: A data product is a logical container or grouping of one or more related data resources. Data resource: A data resource is a physical asset in a storage system which holds structured data or stores a query that yields structured data. Data attribute: A data attribute is a field or element of a data resource. The following diagram provides an overview of the key architectural components in a data mesh implemented on Google Cloud. The preceding diagram shows the following: Central services enable the creation and management of data products, including organizational policies that affect the data mesh participants, access controls (through Identity and Access Management groups), and the infrastructure-specific artifacts. Examples of such commitments and reservations, and infrastructure that facilitates the functioning of the data mesh are described in Create platform components and solutions. Central services primarily supply the Data Catalog for all the data products in the data mesh and the discovery mechanism for potential customers of these products. 
Data domains expose subsets of their data as data products through well-defined data consumption interfaces. These data products could be a table, view, structured file, topic, or stream. In BigQuery, it would be a dataset, and in Cloud Storage, it would be a folder or bucket. There can be different types of interfaces that can be exposed as a data product. An example of an interface is a BigQuery view over a BigQuery table. The types of interfaces most commonly used for analytical purposes are discussed in Build data products in a data mesh. Data mesh reference implementation You can find a reference implementation of this architecture in the data-mesh-demo repository. The Terraform scripts that are used in the reference implementation demonstrate data mesh concepts and are not intended for production use. By running these scripts, you'll learn how to do the following: Separate product definitions from the underlying data. Create Data Catalog templates for describing product interfaces. Tag product interfaces with these templates. Grant permissions to the product consumers. For the product interfaces, the reference implementation creates and uses the following interface types: Authorized views over BigQuery tables. Data streams based on Pub/Sub topics. For further details, refer to the README file in the repository. Functions in a data mesh For a data mesh to operate well, you must define clear roles for the people who perform tasks within the data mesh. Ownership is assigned to team archetypes, or functions. These functions hold the core user journeys for people who work in the data mesh. To clearly describe user journeys, they have been assigned to user roles. These user roles can be split and combined based on the circumstances of each enterprise. You don't need to map the roles directly with employees or teams in your organization. A data domain is aligned with a business unit (BU), or a function within an enterprise. Common examples of business domains might be the mortgage department in a bank, or the customer, distribution, finance, or HR departments of an enterprise. Conceptually, there are two domain-related functions in a data mesh: the data producer teams and the data consumer teams. It's important to understand that a single data domain is likely to serve both functions at once. A data domain team produces data products from data that it owns. The team also consumes data products for business insight, and to produce derived-data products for the use of other domains. In addition to the domain-based functions, a data mesh also has a set of functions that are performed by centralized teams within the organization. These central teams enable the operation of the data mesh by providing cross-domain oversight, services, and governance. They reduce the operational burden for data domains in producing and consuming data products, and facilitate the cross-domain relationships that are required for the data mesh to operate. This document only describes functions that have a data mesh-specific role. There are several other roles that are required in any enterprise, regardless of the architecture being employed for the platform. However, these other roles are out of scope for this document. The four main functions in a data mesh are as follows: Data domain-based producer teams: Create and maintain data products over their lifecycle. These teams are often referred to as the data producers. 
Data domain-based consumer teams: Discover data products and use them in various analytic applications. These teams might consume data products to create new data products. These teams are often referred to as the data consumers. Central data governance team: Defines and enforces data governance policies among data producers, ensuring high data quality and data trustworthiness for consumers. This team is often referred to as the data governance team. Central self-service data infrastructure platform team: Provides a self-service data platform for data producers. This team also provides the tooling for central data discovery and data product observability that both data consumers and data producers use. This team is often referred to as the data platform team. An optional extra function to consider is that of a Center of Excellence (COE) for the data mesh. The purpose of the COE is to provide management of the data mesh. The COE is also the designated arbitration team that resolves any conflicts raised by any of the other functions. This function is useful for helping to connect the other four functions. Data domain-based producer team Typically, data products are built on top of a physical repository of data (either single or multiple data warehouses, lakes, or streams). An organization needs traditional data platform roles to create and maintain these physical repositories. However, these traditional data platform roles are not typically the people who create the data product. To create data products from these physical repositories, an organization needs a mix of data practitioners, such as data engineers and data architects. The following table lists all the domain-specific user roles that are needed in data producer teams. Role Responsibilities Required skills Desired outcomes Data product owner Acts as the primary business point of contact for the data product. Is accountable for the definitions, policies, business decisions, and application of business rules for the data exposed as products. Acts as a point of contact for business questions. As such, the owner represents the data domain when meeting with the data consumer teams or the centralized teams (data governance and data infrastructure platform). Data analytics Data architecture Product management The data product is driving value for consumers. There's robust management of the lifecycle of the data product, including deciding when to retire a product or release a new version. There's coordination of universal data elements with other data domains. Data product technical lead Acts as the primary technical point of contact for the product. Is accountable for implementing and publishing product interfaces. Acts as a point of contact for technical questions. As such, the lead represents the data domain when meeting with the data consumer teams or the centralized teams (data governance and data infrastructure platform). Works with the data governance team to define and implement data mesh standards in the organization. Works with the data platform team to help to develop the platform in tandem with the technical needs that production and consumption generate. Data engineering Data architecture Software engineering The data product meets business requirements and adheres to the data mesh technical standards. The data consumer teams use the data product and it appears in the results generated by the data product discovery experience. The use of the data product can be analyzed (for example, the number of daily queries). 
Data product support Acts as the point of contact for production support. Is accountable for maintaining product Service Level Agreement (SLA). Software engineering Site reliability engineering (SRE) The data product is meeting the stated SLA. Data consumer questions about the use of the data product are addressed and resolved. Subject matter expert (SME) for data domain Represents the data domain when meeting with SMEs from other data domains to establish data element definitions and boundaries that are common across the organization. Helps new data producers within the domain define their product scopes. Data analytics Data architecture Collaborates with other SMEs from across data domains to establish and maintain comprehensive understanding of the data in the organization and the data models that it uses. Facilitates the creation of interoperable data products which match the overall data model of the organization. There are clear standards for data product creation and lifecycle management. The data products from the data domain provide business value. Data owner Is accountable for a content area. Is responsible for data quality and accuracy. Approves access requests. Contributes to data product documentation. Any skill, but must have full knowledge of the business function. Any skill, but must have full knowledge of what the data means and business rules around it. Any skill, but must be able to determine the best possible resolution to data quality issues. Data that cross-functional areas use is accurate. Stakeholders understand the data. Data use is in accordance with usage policies. Data domain-based consumer teams In a data mesh, the people that consume a data product are typically data users who are outside of the data product domain. These data consumers use a central data catalog to find data products that are relevant to their needs. Because it's possible that more than one data product might meet their needs, data consumers can end up subscribing to multiple data products. If data consumers are unable to find the required data product for their use case, it's their responsibility to consult directly with the data mesh COE. During that consultation, data consumers can raise their data needs and seek advice on how to get those needs met by one or more domains. When looking for a data product, data consumers are looking for data that help them achieve various use cases such as persistent analytics dashboards and reports, individual performance reports, and other business performance metrics. Alternatively, data consumers might be looking for data products that can be used in artificial intelligence (AI) and machine learning (ML) use cases. To achieve these various use cases, data consumers require a mix of data practitioner personas, which are as follows: Role Responsibilities Required skills Desired outcomes Data analyst Searches for, identifies, evaluates, and subscribes to single-domain or cross-domain data products to create a foundation for business intelligence frameworks to operate. Analytics engineering Business analytics Provides clean, curated, and aggregated datasets for data visualization specialists to consume. Creates best practices for how to use data products. Aggregates and curates cross-domain datasets to meet the analytical needs of their domain. Application developer Develops an application framework for consumption of data across one or more data products, either inside or outside of the domain. 
Application development Data engineering Creates, serves, and maintains applications that consume data from one or more data products. Creates data applications for end-user consumption. Data visualization specialist Translates data engineering and data analysis jargon into information which business stakeholders can understand. Defines processes to populate business reports from data products. Creates and monitors reports that describe strategic business goals. Collaborates with engineers within the organization to design datasets which are aggregated from consumed data products. Implements reporting solutions. Translates high-level business requirements into technical requirements. Requirement analysis Data visualization Provides valid, accurate datasets and reports to end users. Business requirements are met through the dashboards and reports that are developed. Data scientist Searches for, identifies, evaluates, and subscribes to data products for data science use cases. Extracts data products and metadata from multiple data domains. Trains predictive models and deploys those models to optimize domain business processes. Provides feedback on possible data curation and data annotation techniques for multiple data domains. ML engineering Analytics engineering Creates predictive and prescriptive models to optimize business processes. Model training and model deployment are done in a timely manner. Central data governance team The data governance team enables data producers and consumers to safely share, aggregate, and compute data in a self-service manner, without introducing compliance risks to the organization. To meet the compliance requirements of the organization, the data governance team is a mix of data practitioner personas, which are as follows: Role Responsibilities Required skills Desired outcomes Data governance specialist Provides oversight and coordinates a single view of compliance. Recommends mesh-wide privacy policies on data collection, data protection, and data retention. Ensures that data stewards know about policies and can access them. Informs and consults on the latest data privacy regulations as required. Informs and consults on security questions as required. Performs internal audits and shares regular reports on risk and control plans. Legal SME Security SME Data privacy SME Privacy regulations in policies are up to date. Data producers are informed of policy changes in a timely manner. Management receives timely and regular reports on policy compliance for all published data products. Data steward (sits within each domain) Codifies the policies created by the data governance specialists. Defines and updates the taxonomy that an organization uses for annotating data products, data resources, and data attributes with discovery and privacy-related metadata. Coordinates across various stakeholders inside and outside of their respective domain. Ensures that the data products in their domain meet the metadata standards and privacy policies of the organization. Provides guidance to the data governance engineers on how to design and prioritize data platform features. Data architecture Data stewardship Required metadata has been created for all data products in the domain, and the data products for the domain are described accurately. The self-service data infrastructure platform team is building the right tooling to automate metadata annotations of data products, policy creation and verification. 
Data governance engineer Develops tools which auto-generate data annotations and can be used by all data domains, and then uses these annotations for policy enforcement. Implements monitoring to check the consistency of annotations and alerts when problems are found. Ensures that employees in the organization are informed of the status of data products by implementing alerts, reporting, and dashboards. Software engineering Data governance annotations are automatically verified. Data products comply with data governance policies. Data product violations are detected in a timely fashion. Central self-service data infrastructure platform team The self-service data infrastructure platform team, or just the data platform team, is responsible for creating a set of data infrastructure components. Distributed data domain teams use these components to build and deploy their data products. The data platform team also promotes best practices and introduces tools and methodologies which help to reduce cognitive load for distributed teams when adopting new technology. Platform infrastructure should provide easy integration with operations toolings for global observability, instrumentation, and compliance automation. Alternatively, the infrastructure should facilitate such integration to set up distributed teams for success. The data platform team has a shared responsibility model that it uses with the distributed domain teams and the underlying infrastructure team. The model shows what responsibilities are expected from the consumers of the platform, and what platform components the data platform team supports. As the data platform is itself an internal product, the platform doesn't support every use case. Instead, the data platform team continuously releases new services and features according to a prioritized roadmap. The data platform team might have a standard set of components in place and in development. However, data domain teams might choose to use a different, unique set of components if the needs of a team don't align with those provided by the data platform. If data domain teams choose a different approach, they must ensure that any platform infrastructure that they build and maintain complies with organization-wide policies and guardrails for security and data governance. For data platform infrastructure that is developed outside of the central data platform team, the data platform team might either choose to co-invest or embed their own engineers into the domain teams. Whether the data platform team chooses to co-invest or embed engineers might depend on the strategic importance of the data domain platform infrastructure to the organization. By staying involved in the development of infrastructure by data domain teams, organizations can provide the alignment and technical expertise required to repackage any new platform infrastructure components that are in development for future reuse. You might need to limit autonomy in the early stages of building a data mesh if your initial goal is to get approval from stakeholders for scaling up the data mesh. However, limiting autonomy risks creating a bottleneck at the central data platform team. This bottleneck can inhibit the data mesh from scaling. So, any centralization decisions should be taken carefully. For data producers, making their technical choices from a limited set of available options might be preferable to evaluating and choosing from an unlimited list of options themselves. 
Promoting autonomy of data producers doesn't equate to creating an ungoverned technology landscape. Instead, the goal is to drive compliance and platform adoption by striking the right balance between freedom of choice and standardization. Finally, a good data platform team is a central source of education and best practices for the rest of the company. Some of the most impactful activities that we recommend central data platform teams undertake are as follows: Fostering regular architectural design reviews for new functional projects and proposing common ways of development across development teams. Sharing knowledge and experiences, and collectively defining best practices and architectural guidelines. Ensuring engineers have the right tools in place to validate and check for common pitfalls like issues with code, bugs, and performance degradations. Organizing internal hackathons so development teams can surface their requirements for internal tooling needs. Example roles and responsibilities for the central data platform team might include the following: Role Responsibilities Required skills Desired outcomes Data platform product owner Creates an ecosystem of data infrastructure and solutions to empower distributed teams to build data products. Lowers the technical barrier to entry, ensures that governance is embedded, and minimizes collective technical debt for data infrastructure. Interfaces with leadership, data domain owners, data governance team, and technology platform owners to set the strategy and roadmap for the data platform. Data strategy and operations Product management Stakeholder management Establishes an ecosystem of successful data products. There are robust numbers of data products in production. There's a reduction in time-to-minimum viable product and time-to-production for data product releases. A portfolio of generalized infrastructure and components is in place that addresses the most common needs for data producers and data consumers. There's a high satisfaction score from data producers and data consumers. Data platform engineer Creates reusable and self-service data infrastructure and solutions for data ingestion, storage, processing, and consumption through templates, deployable architecture blueprints, developer guides, and other documentation. Also creates Terraform templates, data pipeline templates, container templates, and orchestration tooling. Develops and maintains central data services and frameworks to standardize processes for cross-functional concerns such as data sharing, pipelines orchestration, logging and monitoring, data governance, continuous integration and continuous deployment (CI/CD) with embedded guardrails, security and compliance reporting, and FinOps reporting. Data engineering Software engineering There are standardized, reusable infrastructure components and solutions for data producers to do data ingestion, storage, processing, curation, and sharing, along with necessary documentation. Releases of components, solutions, and end-user documentation align with the roadmap. Users report a high level of customer satisfaction. There are robust shared services for all functions in the data mesh. There is high uptime for shared services. The support response time is short. Platform and security engineer (a representative from the central IT teams such as networking and security, who is embedded in the data platform team) Ensures that data platform abstractions are aligned to enterprise-wide technology frameworks and decisions. 
Supports engineering activities by building the technology solutions and services in their core team that are necessary for data platform delivery. Infrastructure engineering Software engineering Platform infrastructure components are developed for the data platform. Releases of components, solutions, and end-user documentation align with the roadmap. The central data platform engineers report a high level of customer satisfaction. The health of the infrastructure platform improves for components that are used by the data platform (for example, logging). Underlying technology components have a high uptime. When data platform engineers have issues, the support response time is short. Enterprise architect Aligns data mesh and data platform architecture with enterprise-wide technology and data strategy. Provides advisory and design authority and assurance for both data platform and data product architectures to ensure alignment with enterprise-level strategy and best-practices. Data architecture Solution iteration and problem solving Consensus building A successful ecosystem is built that includes robust numbers of data products for which there is a reduction in time to both create minimum viable products and to release those products into production. Architecture standards have been established for critical data journeys, such as by establishing common standards for metadata management and for data sharing architecture. Additional considerations for a data mesh There are multiple architectural options for an analytics data platform, each option with different prerequisites. To enable each data mesh architecture, we recommend that your organization follow the best practices described in this section. Acquire platform funding As explained in the blog post, "If you want to transform start with finance", the platform is never finished: it's always operating based on a prioritized roadmap. Therefore, the platform must be funded as a product, not as a project with a fixed endpoint. The first adopter of the data mesh bears the cost. Usually, the cost is shared between the business that forms the first data domain to initiate the data mesh, and the central technology team, which generally houses the central data platform team. To convince finance teams to approve funding for the central platform, we recommend that you make a business case for the value of the centralized platform being realized over time. That value comes from reimplementing the same components in individual delivery teams. Define the minimum viable platform for the data mesh To help you to define the minimum viable platform for the data mesh, we recommend that you pilot and iterate with one or more business cases. For your pilot, find use cases that are needed, and where there's a consumer ready to adopt the resulting data product. The use cases should already have funding to develop the data products, but there should be a need for input from technical teams. Make sure the team that is implementing the pilot understands the data mesh operating model as follows: The business (that is, the data producer team) owns the backlog, support, and maintenance. The central team defines the self-service patterns and helps the business build the data product, but passes the data product to the business to run and own when it's complete. The primary goal is to prove the business operating model (domains produce, domains consume). The secondary goal is to prove the technical operating model (self-service patterns developed by the central team). 
Because platform team resources are limited, use the trunk and branch teams model to pool knowledge but still allow for the development of specialized platform services and products. We also recommend that you do the following: Plan roadmaps rather than letting services and features evolve organically. Define minimum viable platform capabilities spanning ingest, storage, processing, analysis, and ML. Embed data governance in every step, not as a separate workstream. Put in place the minimum capabilities across governance, platform, value-stream, and change management. Minimum capabilities are those which meet 80% of business cases. Plan for the co-existence of the data mesh with an existing data platform Many organizations that want to implement a data mesh likely already have an existing data platform, such as a data lake, data warehouse, or a combination of both. Before implementing a data mesh, these organizations must make a plan for how their existing data platform can evolve as the data mesh grows. These organizations should consider factors such as the following: The data resources that are most effective on the data mesh. The assets that must stay within the existing data platform. Whether assets have to move, or whether they can be maintained on the existing platform and still participate in the data mesh. What's next To learn more about designing and operating a cloud topology, see the Google Cloud Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Architecture_decision_records_overview.txt b/Architecture_decision_records_overview.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2b6618ed08788db0c96584250b5461dbf26642e --- /dev/null +++ b/Architecture_decision_records_overview.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/architecture-decision-records +Date Scraped: 2025-02-23T11:47:45.989Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecture decision records overview Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-16 UTC To help explain why your infrastructure or application teams make certain design choices, you can use architecture decision records (ADRs). This document explains when and how to use ADRs as you build and run applications on Google Cloud. An ADR captures the key options available, the main requirements that drive a decision, and the design decisions themselves. You often store ADRs in a Markdown file close to the codebase relevant to that decision. If someone needs to understand the background of a specific architectural decision, such as why you use a regional Google Kubernetes Engine (GKE) cluster, they can review the ADR and then the associated code. ADRs can also help you run more reliable applications and services. The ADR helps you understand your current state and troubleshoot when there's a problem. ADRs also build a collection of engineering decisions to help future decision choices and deployments. When to use ADRs You use ADRs to track the key areas that you think are important to your deployment. The following categories might be in your ADRs: Specific product choices, such as the choice between Pub/Sub and Cloud Tasks. Specific product options and configurations, such as the use of regional GKE clusters with Multi Cluster Ingress for highly available applications. 
General architectural guidance, such as best practices for Dockerfile manifests. Some specific examples that might prompt you to create an ADR could be for the following choices: How and why do you set up high availability (HA) for your Cloud SQL instances? How do you approach uptime of GKE clusters? Do you use regional clusters? Do you use canary releases? Why or why not? As you evaluate the products to use, the ADR helps to explain each of your decisions. You can revisit the ADR as the team evolves, learns more about the stack, and makes or adjusts additional decisions. If you make adjustments, include the previous decision and why the change was made. This history keeps a record of how the architecture has changed as business needs evolve, or where there are new technical requirements or available solutions. The following prompts help you to know when to create ADRs: When you have a technical challenge or question and there's no existing basis for a decision, such as a recommended solution, standard operating procedure, blueprint, or codebase. When you or your team offers a solution that's not documented somewhere accessible to the team. When there are two or more engineering options and you want to document your thoughts and reasons behind the selection. When you write an ADR, it helps to have potential readers in mind. The primary readers are members of the team that work on the technology covered by the ADR. Broader groups of potential readers of the ADR might include adjacent teams who want to understand your decisions, such as architecture and security teams. You should also consider that the application might change owners or include new team members. An ADR helps new contributors understand the background of the engineering choices that were made. An ADR also makes it easier to plan future changes. Format of an ADR A typical ADR includes a set of chapters. Your ADRs should help capture what you feel is important to the application and your organization. Some ADRs might be one page long, whereas others require a longer explanation. The following example ADR outline shows how you might format an ADR to include the information that's important for your environment: Authors and the team Context and problem you want to solve Functional and non-functional requirements you want to address Potential critical user journey (CUJ) the decision impacts Overview of the key options Your decision and reasons behind the accepted choice To help keep a record of decisions, you might include a timestamp for each decision to show when the choice was made. How ADRs work ADRs work best when engineers, developers, or application owners can easily access the information they contain. When they have a question about why something is done a certain way, they can look at the ADR to find the answer. To make the ADR accessible, some teams host it in a central wiki that's also accessible to business owners, instead of in their source control repository. When someone has a question about a specific engineering decision, the ADR is there to provide answers. ADRs work well in the following scenarios: Onboarding: New team members can easily learn about the project, and they can review the ADR if they have questions while they're learning a new codebase. Evolution of the architecture: If there's a transfer of technology stack between teams, the new owners can review past decisions to understand the current state. The team can also review past decisions when there's a new technology available to them.
The ADR can help teams avoid a repeat of the same discussion points, and it can help provide historical context when teams revisit topics. Sharing best practices: Teams can align on best practices across the organization when ADRs detail why certain decisions were made and alternatives were decided against. An ADR is often written in Markdown to keep it lightweight and text-based. Markdown files can be included in the source control repository with your application code. Store your ADRs close to your application code, ideally in the same version control system. As you make changes to your ADR, you can review previous versions from source control as needed. You could also use another medium like a shared Google Doc or an internal wiki. These alternate locations might be more accessible to users not part of the ADR's team. Another option is to create your ADR in a source control repository, but mirror key decisions into a more accessible wiki. What's next The Cloud Architecture Center and the Architecture Framework provide additional guidance and best practices. For some areas that might be in your ADR, see Patterns for scalable and resilient apps. Send feedback \ No newline at end of file diff --git a/Architecture_for_connecting_visualization_software_to_Hadoop_on_Google_Cloud.txt b/Architecture_for_connecting_visualization_software_to_Hadoop_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..2d968d8cc7d0eb8f69a2b7349145ab6c6449b15d --- /dev/null +++ b/Architecture_for_connecting_visualization_software_to_Hadoop_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hadoop/architecture-for-connecting-visualization-software-to-hadoop-on-google-cloud +Date Scraped: 2025-02-23T11:52:47.061Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecture for connecting visualization software to Hadoop on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-04-17 UTC This document is intended for operators and IT administrators who want to set up secure data access for data analysts using business intelligence (BI) tools such as Tableau and Looker. It doesn't offer guidance on how to use BI tools, or interact with Dataproc APIs. This document is the first part of a series that helps you build an end-to-end solution to give data analysts secure access to data using BI tools. This document explains the following concepts: A proposed architecture. A high-level view of the component boundaries, interactions, and networking in the architecture. A high-level view of authentication and authorization in the architecture. The second part of this series, Connecting your visualization software to Hadoop on Google Cloud, shows you how to set up the architecture on Google Cloud. Architecture The following diagram shows the architecture and the flow of events that are explained in this document. For more information about the products that are used in this architecture, see Architectural components. Client applications connect through Java Database Connectivity (JDBC) to a single entry point on a Dataproc cluster. The entry point is provided by Apache Knox, which is installed on the cluster master node. The communication with Apache Knox is secured by TLS. Apache Knox delegates authentication through an authentication provider to a system such as an LDAP directory. After authentication, Apache Knox routes the user request to one or more backend clusters. 
You define the routes and configuration as custom topologies. A data processing service, such as Apache Hive, listens on the selected backend cluster and takes the request. Apache Ranger intercepts the request and determines whether processing should go ahead, depending on whether the user has valid authorization. If the validation succeeds, the data processing service analyzes the request and returns the results. Architectural components The architecture is made up of the following components. The managed Hadoop platform: Dataproc. Dataproc is a Google Cloud-managed Apache Spark and Apache Hadoop service that lets you take advantage of open source data tools for batch processing, querying, streaming, and machine learning. Dataproc is the platform that underpins the solution described in this document. User authentication and authorization: Apache Knox. Apache Knox acts as a single HTTP access point for all the underlying services in a Hadoop cluster. Apache Knox is designed as a reverse proxy with pluggable providers for authentication, authorization, audit, and other services. Clients send requests to Knox, and, based on the request URL and parameters, Knox routes the request to the appropriate Hadoop service. Because Knox is an entry point that transparently handles client requests and hides complexity, it's at the center of the architecture. Apache Ranger. Apache Ranger provides fine-grained authorization for users to perform specific actions on Hadoop services. It also audits user access and implements administrative actions. Processing engines: Apache Hive. Apache Hive is data warehouse software that enables access and management of large datasets residing in distributed storage using SQL. Apache Hive parses the SQL queries, performs semantic analysis, and builds a directed acyclic graph (DAG) of stages for a processing engine to execute. In the architecture shown in this document, Hive acts as the translation point for user requests. It can also act as one of the multiple processing engines. Apache Hive is ubiquitous in the Hadoop ecosystem, and it opens the door for practitioners who are familiar with standard SQL to perform data analysis. Apache Tez. Apache Tez is the processing engine in charge of executing the DAGs prepared by Hive and of returning the results. Apache Spark. Apache Spark is a unified analytics engine for large-scale data processing that supports the execution of DAGs. The architecture shows the Spark SQL component of Apache Spark to demonstrate the flexibility of the approach presented in this document. One restriction is that Spark SQL doesn't have official Ranger plugin support. For this reason, authorization must be done through the coarse-grained ACLs in Apache Knox instead of using the fine-grained authorization that Ranger provides. Components overview In the following sections, you learn about each of the components in more detail. You also learn how the components interact with each other. Client applications Client applications include tools that can send requests to an HTTPS REST endpoint, but don't necessarily support the Dataproc Jobs API. BI tools such as Tableau and Looker have HiveServer2 (HS2) and Spark SQL JDBC drivers that can send requests through HTTP. This document assumes that client applications are external to Google Cloud, running in environments such as an analyst workstation, on-premises, or on another cloud.
So, the communication between the client applications and Apache Knox must be secured with either a CA-signed or self-signed SSL/TLS certificate. Entry point and user authentication The proxy clusters are one or more long-lived Dataproc clusters that host the Apache Knox Gateway. Apache Knox acts as the single entry point for client requests. Knox is installed on the proxy cluster master node. Knox performs SSL termination, delegates user authentication, and forwards the request to one of the backend services. In Knox, each backend service is configured in what is referred to as a topology. The topology descriptor defines the following actions and permissions: How authentication is delegated for a service. The URI the backend service forwards requests to. Simple per-service authorization access control lists (ACLs). Knox lets you integrate authentication with enterprise and cloud identity management systems. To configure user authentication for each topology, you can use authentication providers. Knox uses Apache Shiro to authenticate against a local demonstration ApacheDS LDAP server by default. Alternatively, you can opt for Knox to use Kerberos. In the preceding diagram, as an example, you can see an Active Directory server hosted on Google Cloud outside of the cluster. For information on how to connect Knox to an enterprise authentication services such as an external ApacheDS server or Microsoft Active Directory (AD), see the Apache Knox user guide and the Google Cloud Managed Active Directory and Federated AD documentation. For the use case in this document, as long as Apache Knox acts as the single gatekeeper to the proxy and backend clusters, you don't have to use Kerberos. Processing engines The backend clusters are the Dataproc clusters hosting the services that process user requests. Dataproc clusters can autoscale the number of workers to meet the demand from your analyst team without manual reconfiguration. We recommend that you use long-lived Dataproc clusters in the backend. Long-lived Dataproc clusters allow the system to serve requests from data analysts without interruption. Alternatively, if the cluster only needs to serve requests for a brief time, you can use job-specific clusters, which are also known as ephemeral clusters. Ephemeral clusters can also be more cost effective than long-lived clusters. If you use ephemeral clusters, to avoid modifying the topology configuration, make sure that you recreate the clusters in the same zone and under the same name. Using the same zone and name lets Knox route the requests transparently using the master node internal DNS name when you recreate the ephemeral cluster. HS2 is responsible for servicing user queries made to Apache Hive. HS2 can be configured to use various execution engines such as the Hadoop MapReduce engine, Apache Tez, and Apache Spark. In this document, HS2 is configured to use the Apache Tez engine. Spark SQL is a module of Apache Spark that includes a JDBC/ODBC interface to execute SQL queries on Apache Spark. In the preceding architectural diagram, Spark SQL is shown as an alternative option for servicing user queries. A processing engine, either Apache Tez or Apache Spark, calls the YARN Resource Manager to execute the engine DAG on the cluster worker machines. Finally, the cluster worker machines access the data. For storing and accessing the data in a Dataproc cluster, use the Cloud Storage connector, not Hadoop Distributed File System (HDFS). 
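To make this concrete, the following minimal PySpark sketch reads input directly from Cloud Storage with a gs:// URI through the Cloud Storage connector, which Dataproc preinstalls on its clusters. The bucket, object path, and application name are hypothetical placeholders, not values from this series.

```python
# Minimal PySpark sketch: read data through the Cloud Storage connector
# instead of HDFS. Bucket and object paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gcs-connector-example").getOrCreate()

# A gs:// URI is resolved by the Cloud Storage connector that Dataproc provides.
transactions = (
    spark.read.option("header", "true")
    .csv("gs://example-analytics-bucket/transactions/*.csv")
)

transactions.createOrReplaceTempView("transactions")
spark.sql("SELECT COUNT(*) AS row_count FROM transactions").show()
```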
For more information about the benefits of using the Cloud Storage connector, see the Cloud Storage connector documentation. The preceding architectural diagram shows one Apache Knox topology that forwards requests to Apache Hive, and another that forwards requests to Spark SQL. The diagram also shows other topologies that can forward requests to services in the same or different backend clusters. The backend services can process different datasets. For example, one Hive instance can offer access to personally identifiable information (PII) for a restricted set of users while another Hive instance can offer access to non-PII data for broader consumption. User authorization Apache Ranger can be installed on the backend clusters to provide fine-grained authorization for Hadoop services. In the architecture, a Ranger plugin for Hive intercepts the user requests and determines whether a user is allowed to perform an action over Hive data, based on Ranger policies. As an administrator, you define the Ranger policies using the Ranger Admin page. We strongly recommend that you configure Ranger to store these policies in an external Cloud SQL database. Externalizing the policies has two advantages: It makes them persistent in case any of the backend clusters are deleted. It enables the policies to be centrally managed for all groups or for custom groups of backend clusters. To assign Ranger policies to the correct user identities or groups, you must configure Ranger to sync the identities from the same directory that Knox is connected to. By default, the user identities used by Ranger are taken from the operating system. Apache Ranger can also externalize its audit logs to Cloud Storage to make them persistent. Ranger uses Apache Solr as its indexing and querying engine to make the audit logs searchable. Unlike HiveServer2, Spark SQL doesn't have official Ranger plugin support, so you need to use the coarse-grained ACLs available in Apache Knox to manage its authorization. To use these ACLs, add the LDAP identities that are allowed to use each service, such as Spark SQL or Hive, in the corresponding topology descriptor for the service. For more information, see Best practices to use Apache Ranger on Dataproc. High availability Dataproc provides a high availability (HA) mode. In this mode, there are several machines configured as master nodes, one of which is in active state. This mode allows uninterrupted YARN and HDFS operations despite any single-node failures or reboots. However, if the master node fails, the single entry point external IP changes, so you must reconfigure the BI tool connection. When you run Dataproc in HA mode, you should configure an external HTTP(S) load balancer as the entry point. The load balancer routes requests to an unmanaged instance group that bundles your cluster master nodes. As an alternative to a load balancer, you can apply a round-robin DNS technique instead, but there are drawbacks to this approach. These configurations are outside of the scope of this document. Cloud SQL also provides a high availability mode, with data redundancy made possible by synchronous replication between master instances and standby instances located in different zones. If there is an instance or zone failure, this configuration reduces downtime. However, note that an HA-configured instance is charged at double the price of a standalone instance. Cloud Storage acts as the datastore. For more information about Cloud Storage availability, see storage class descriptions. 
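As an illustration of the client connection path described earlier in this document, the following sketch shows how a Python client (standing in for a BI tool) might connect to HiveServer2 through the Apache Knox gateway over JDBC in HTTP transport mode. It assumes the jaydebeapi package, a locally downloaded Hive JDBC driver JAR, and TLS enabled on Knox; the hostname, topology name, truststore path, and credentials are hypothetical placeholders that you would adapt to your own deployment.

```python
# Hypothetical sketch of a JDBC connection to HiveServer2 through Apache Knox.
# Hostname, topology name, credentials, and file paths are placeholders.
import jaydebeapi

KNOX_HOST = "knox.example.com"      # external address of the proxy cluster
TOPOLOGY = "hive-us-transactions"   # Knox topology that routes to Hive

jdbc_url = (
    f"jdbc:hive2://{KNOX_HOST}:8443/;"
    "ssl=true;sslTrustStore=/path/to/truststore.jks;trustStorePassword=changeit;"
    f"transportMode=http;httpPath=gateway/{TOPOLOGY}/hive"
)

# Knox delegates these credentials to its configured LDAP authentication provider.
conn = jaydebeapi.connect(
    "org.apache.hive.jdbc.HiveDriver",       # class from the Hive JDBC driver JAR
    jdbc_url,
    ["analyst-user", "analyst-password"],
    "/path/to/hive-jdbc-standalone.jar",
)

cursor = conn.cursor()
cursor.execute("SHOW DATABASES")
print(cursor.fetchall())
cursor.close()
conn.close()
```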
Networking In a layered network architecture, the proxy clusters are in a perimeter network. The backend clusters are in an internal network protected by firewall rules that only let through incoming traffic from the proxy clusters. The proxy clusters are isolated from the other clusters because they are exposed to external requests. Firewall rules only allow a restricted set of source IP addresses to access the proxy clusters. In this case, the firewall rules only allow requests that come from the addresses of your BI tools. The configuration of layered networks is outside of the scope of this document. In Connecting your visualization software to Hadoop on Google Cloud, you use the default network throughout the tutorial. For more information on layered network setups, see the best practices for VPC network security and the overview and examples on how to configure multiple network interfaces. What's next Read the second part of the series, Connecting your visualization software to Hadoop on Google Cloud, and learn how to set up the architecture on Google Cloud. Set up the architecture on Google Cloud using these Terraform configuration files. Read about the best practices for using Apache Ranger on Dataproc. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Architecture_patterns.txt b/Architecture_patterns.txt new file mode 100644 index 0000000000000000000000000000000000000000..b19fb5982698eb921fa496557ae40b6ef4bd0a01 --- /dev/null +++ b/Architecture_patterns.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/architecture-patterns +Date Scraped: 2025-02-23T11:50:26.218Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecture patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-29 UTC The documents in this series discuss networking architecture patterns that are designed based on the required communication models between applications residing in Google Cloud and in other environments (on-premises, in other clouds, or both). These patterns should be incorporated into the overall organization landing zone architecture, which can include multiple networking patterns to address the specific communication and security requirements of different applications. The documents in this series also discuss the different design variations that can be used with each architecture pattern. 
The following networking patterns can help you to meet communication and security requirements for your applications: Mirrored pattern Meshed pattern Gated patterns Gated egress Gated ingress Gated egress and gated ingress Handover pattern Previous arrow_back Design considerations Next Mirrored pattern arrow_forward Send feedback \ No newline at end of file diff --git a/Architecture_using_Cloud_Functions.txt b/Architecture_using_Cloud_Functions.txt new file mode 100644 index 0000000000000000000000000000000000000000..7979a437b1777e9c1ee0cec686cddcb34924567c --- /dev/null +++ b/Architecture_using_Cloud_Functions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/serverless-functions-blueprint +Date Scraped: 2025-02-23T11:56:14.777Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy a secured serverless architecture using Cloud Run functions Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-06 UTC Serverless architectures let you develop software and services without provisioning or maintaining servers. You can use serverless architectures to build applications for a wide range of services. This document provides opinionated guidance for DevOps engineers, security architects, and application developers on how to help protect serverless applications that use Cloud Run functions (2nd gen). The document is part of a security blueprint that consists of the following: A GitHub repository that contains a set of Terraform configurations and scripts. A guide to the architecture, design, and security controls that you implement with the blueprint (this document). Though you can deploy this blueprint without deploying the Google Cloud enterprise foundations blueprint first, this document assumes that you've already configured a foundational set of security controls as described in the Google Cloud enterprise foundations blueprint. The architecture that's described in this document helps you to layer additional controls onto your foundation to help protect your serverless applications. To help define key security controls that are related to serverless applications, the Cloud Security Alliance (CSA) published Top 12 Critical Risks for Serverless Applications. The security controls used in this blueprint are designed to address the risks that are relevant to the various use cases described in this document. Serverless use cases The blueprint supports the following use cases: Deploying a serverless architecture using Cloud Run functions (this document) Deploying a serverless architecture using Cloud Run Differences between Cloud Run functions and Cloud Run include the following: Cloud Run functions is triggered by events, such as changes to data in a database or the receipt of a message from a messaging system such as Pub/Sub. Cloud Run is triggered by requests, such as HTTP requests. Cloud Run functions is limited to a set of supported runtimes. You can use Cloud Run with any programming language. Cloud Run functions manages containers and the infrastructure that controls the web server or language runtime so that you can focus on your code. Cloud Run provides the flexibility for you to run these services yourself, so that you have control of the container configuration. For more information about differences between Cloud Run and Cloud Run functions, see Choosing a Google Cloud compute option. 
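To make the trigger-model difference concrete, the following minimal Python sketch uses the open source Functions Framework to contrast an event-triggered entry point (the Cloud Run functions model used throughout this blueprint) with a request-triggered entry point (the Cloud Run model). The handler names and the payload fields that are read are illustrative assumptions, not part of the blueprint code.

```python
# Illustrative sketch using the Python Functions Framework. Handler names and
# the event fields that are read are assumptions for demonstration only.
import functions_framework


@functions_framework.cloud_event
def on_storage_event(cloud_event):
    """Event-triggered entry point, for example a Cloud Storage object
    finalized event that Eventarc delivers as a CloudEvent."""
    data = cloud_event.data or {}
    print(f"Processing object {data.get('name')} in bucket {data.get('bucket')}")


@functions_framework.http
def on_http_request(request):
    """Request-triggered entry point, the model that Cloud Run services use."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!", 200
```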
Architecture This blueprint uses a Shared VPC architecture, in which Cloud Run functions is deployed in a service project and can access resources that are located in other VPC networks. The following diagram shows a high-level architecture, which is further described in the example architectures that follow it. The architecture that's shown in the preceding diagram uses a combination of the following Google Cloud services and features: Cloud Run functions lets you run functions as a service and manages the infrastructure on your behalf. By default, this architecture deploys Cloud Run functions with an internal IP address only and without access to the public internet. The triggering event is the event that triggers Cloud Run functions. As further described in the example architectures, this can be a Cloud Storage event, a scheduled interval, or a change in BigQuery. Artifact Registry stores the source containers for your Cloud Run functions application. Shared VPC lets you connect a Serverless VPC Access connector in your service project to the host project. You deploy a separate Shared VPC network for each environment (production, non-production, and development). This networking design provides network isolation between the different environments. A Shared VPC network lets you centrally manage network resources in a common network while delegating administrative responsibilities for the service project. The Serverless VPC Access connector connects your serverless application to your VPC network using Serverless VPC Access. Serverless VPC Access helps to ensure that requests from your serverless application to the VPC network aren't exposed to the internet. Serverless VPC Access lets Cloud Run functions communicate with other services, storage systems, and resources that support VPC Service Controls. You can configure Serverless VPC Access in the Shared VPC host project or a service project. By default, this blueprint deploys Serverless VPC access in the Shared VPC host project to align with the Shared VPC model of centralizing network configuration resources. For more information, see Comparison of configuration methods. VPC Service Controls creates a security perimeter that isolates your Cloud Run functions services and resources by setting up authorization, access controls, and secure data exchange. This perimeter is designed to isolate your application and managed services by setting up additional access controls and monitoring, and to separate your governance of Google Cloud from the application. Your governance includes key management and logging. The consumer service is the application that is acted on by Cloud Run functions. The consumer service can be an internal server or another Google Cloud service such as Cloud SQL. Depending on your use case, this service might be behind Cloud Next Generation Firewall, in another subnet, in the same service project as Cloud Run functions, or in another service project. Secure Web Proxy is designed to secure the egress web traffic, if required. It enables flexible and granular policies based on cloud identities and web applications. This blueprint uses Secure Web Proxy for granular access policies to egress web traffic during the build phase of Cloud Run functions. The blueprint adds an allowed list of URLs to the Gateway Security Policy Rule. Cloud NAT provides outbound connection to the internet, if required. Cloud NAT supports source network address translation (SNAT) for compute resources without public IP addresses. 
Inbound response packets use destination network address translation (DNAT). You can disable Cloud NAT if Cloud Run functions doesn't require access to the internet. Cloud NAT implements the egress network policy that is attached to Secure Web Proxy. Cloud Key Management Service (Cloud KMS) stores the customer-managed encryption keys (CMEKs) that are used by the services in this blueprint, including your serverless application, Artifact Registry, and Cloud Run functions. Secret Manager stores the Cloud Run functions secrets. The blueprint mounts secrets as a volume to provide a higher level of security than passing secrets as environment variables. Identity and Access Management (IAM) and Resource Manager help to restrict access and isolate resources. The access controls and resource hierarchy follow the principle of least privilege. Cloud Logging collects all the logs from Google Cloud services for storage and retrieval by your analysis and investigation tools. Cloud Monitoring collects and stores performance information and metrics about Google Cloud services. Example architecture with a serverless application using Cloud Storage The following diagram shows how you can run a serverless application that accesses an internal server when a particular event occurs in Cloud Storage. In addition to the services described in Architecture, this example architecture uses a combination of the following Google Cloud services and features: Cloud Storage emits an event when any cloud resource, application, or user creates a web object on a bucket. Eventarc routes events from different resources. Eventarc encrypts events in transit and at rest. Pub/Sub queues events that are used as the input and a trigger for Cloud Run functions. Virtual Private Cloud (VPC) firewall rules control the flow of data into the subnet that hosts your resources, such as an internal server. The internal server runs on Compute Engine or Google Kubernetes Engine and hosts your internal application. If you deploy the Secure Cloud Run functions with Internal Server Example, you deploy an Apache server with a Hello World HTML page. This example simulates access to an internal application that runs VMs or containers. Example architecture with Cloud SQL The following diagram shows how you can run a serverless application that accesses a Cloud SQL hosted service at a regular interval that is defined in Cloud Scheduler. You can use this architecture when you must gather logs, aggregate data, and so on. In addition to the services described in Architecture, this example architecture uses a combination of the following Google Cloud services and features: Cloud Scheduler emits events on a regular basis. Pub/Sub queues events that are used as the input and a trigger for Cloud Run functions. Virtual Private Cloud (VPC) firewall rules control the flow of data into the subnet that hosts your resources, such as company data stored in Cloud SQL. Cloud SQL Auth Proxy controls access to Cloud SQL. Cloud SQL hosts a service that is peered to the VPC network and that the serverless application can access. If you deploy the Secure Cloud Run functions with Cloud SQL example, you deploy a MySQL database with a sample database. Example architecture with BigQuery data warehouse The following diagram shows how you can run a serverless application that is triggered when an event occurs in BigQuery (for example, data is added or a table is created). 
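As a sketch of what the entry point for such a BigQuery-triggered function could look like, the following Python snippet handles the Cloud Audit Logs CloudEvent that Eventarc delivers when a BigQuery operation occurs. Treat the protoPayload field names, the function name, and the downstream handling as assumptions to verify against the events in your own environment.

```python
# Hypothetical entry point for a Cloud Run function triggered by a BigQuery
# event that Eventarc delivers as a Cloud Audit Logs CloudEvent. Field names
# and handling logic are illustrative assumptions.
import functions_framework


@functions_framework.cloud_event
def on_bigquery_event(cloud_event):
    payload = (cloud_event.data or {}).get("protoPayload", {})
    method = payload.get("methodName", "unknown")      # for example, a table data change
    resource = payload.get("resourceName", "unknown")  # the affected dataset or table

    print(f"BigQuery event received: method={method}, resource={resource}")
    # Downstream processing (aggregation, notifications, and so on) would go here.
```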
In addition to the services described in Architecture, this example architecture uses a combination of the following Google Cloud services and features: BigQuery hosts a data warehouse. If you deploy the Secure Cloud Run functions triggered by BigQuery example, you deploy a sample BigQuery dataset and table. Eventarc triggers Cloud Run functions when a particular event occurs in BigQuery. Organization structure Resource Manager lets you logically group resources by project, folder, and organization. The following diagram shows a resource hierarchy with folders that represent different environments such as bootstrap, common, production, non-production (or testing), and development. This resource hierarchy is based on the hierarchy that's described in the enterprise foundations blueprint. You deploy the projects that the blueprint specifies into the following folders: Common, Production, Non-production, and Dev. The following sections describe this diagram in more detail. Folders You use folders to isolate your production environment and governance services from your non-production and testing environments. The following table describes the folders from the enterprise foundations blueprint that are used by this blueprint. Folder Description Bootstrap Contains resources required to deploy the enterprise foundations blueprint. Common Contains centralized services for the organization, such as the security project. Production Contains projects that have cloud resources that have been tested and are ready to be used by customers. In this blueprint, the Production folder contains the service project and host project. Non-production Contains projects that have cloud resources that are currently being tested and staged for release. In this blueprint, the Non-production folder contains the service project and host project. Development Contains projects that have cloud resources that are currently being developed. In this blueprint, the Development folder contains the service project and host project. You can change the names of these folders to align with your organization's folder structure, but we recommend that you maintain a similar structure. For more information, see Organization structure. For other folder structures, see Decide a resource hierarchy for your Google Cloud landing zone. Projects You isolate resources in your environment using projects. The following table describes the projects that are needed within the organization. You can change the names of these projects, but we recommend that you maintain a similar project structure. Project Description Shared VPC host project This project includes the firewall ingress rules and any resources that have internal IP addresses (as described in Connect to a VPC network). When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. When you apply the Terraform code, you specify the name of this project, and the blueprint deploys Serverless VPC Access connector, Cloud NAT, and Cloud Secure Web Proxy. Shared VPC service project This project includes your serverless application, Cloud Run functions, and the Serverless VPC Access connector. You attach the service project to the host project so that the service project can participate in the Shared VPC network. When you apply the Terraform code, you specify the name of this project. The blueprint deploys Cloud Run functions and services needed for your use case, such as Cloud SQL, Cloud Scheduler, Cloud Storage, or BigQuery. 
When you apply the Terraform code, you specify the name of this project, and the blueprint deploys Cloud KMS. If you use the Secure Serverless Harness module in the serverless blueprint for Cloud Run functions, Artifact Registry is also deployed. Security project This project includes your security-specific services, such as Cloud KMS and Secret Manager. The default name of the security project is prj-bu1-p-sec. If you deploy this blueprint after you deploy the security foundations blueprint, the security project is created in addition to the enterprise foundations blueprint's secrets project (prj-bu1-p-env-secrets). For more information about the enterprise foundations blueprint projects, see Projects. If you deploy multiple instances of this blueprint without the enterprise foundations blueprint, each instance has its own security project. Mapping roles and groups to projects You must give different user groups in your organization access to the projects that make up the serverless architecture. The following table describes the blueprint recommendations for user groups and role assignments in the projects that you create. You can customize the groups to match your organization's existing structure, but we recommend that you maintain a similar segregation of duties and role assignment. Group Project Roles Serverless administrator grp-gcp-serverless-admin@example.com Service project Cloud Run functions Admin (roles/cloudfunctions.admin) Compute Network User (roles/compute.networkUser) Compute Network Viewer (roles/compute.networkViewer) Cloud Run Admin (roles/run.admin) Serverless security administrator grp-gcp-serverless-security-admin@example.com Security project Artifact Registry Reader (roles/artifactregistry.reader) Cloud Run functions Admin (roles/cloudfunctions.admin) Cloud KMS Viewer (roles/cloudkms.viewer) Cloud Run Viewer (roles/run.viewer) Cloud Run functions developer grp-gcp-secure-cloud-run-developer@example.com Security project Artifact Registry Writer (roles/artifactregistry.writer) Cloud Run functions Developer (roles/cloudfunctions.developer) Cloud KMS CryptoKey Encrypter (roles/cloudkms.cryptoKeyEncrypter) Cloud Run functions user grp-gcp-secure-cloud-run-user@example.com Shared VPC service project Cloud Run functions Invoker (roles/cloudfunctions.invoker) Security controls This section discusses the security controls in Google Cloud that you use to help secure your serverless architecture. The key security principles to consider are as follows: Secure access according to the principle of least privilege, giving principals only the privileges required to perform tasks. Secure network connections through trust boundary design, which includes network segmentation, organization policies, and firewall policies. Secure configuration for each of the services. Identify any compliance or regulatory requirements for the infrastructure that hosts serverless workloads and assign a risk level. Configure sufficient monitoring and logging to support audit trails for security operations and incident management. Build system controls When you deploy your serverless application, you use Artifact Registry to store the container images and binaries. Artifact Registry supports CMEK so that you can encrypt the repository using your own encryption keys. Network and firewall rules Virtual Private Cloud (VPC) firewall rules control the flow of data into the perimeters.
You create firewall rules that deny all egress, except for specific TCP port 443 connections from restricted.googleapis.com special domain names. Using the restricted.googleapis.com domain has the following benefits: It helps to reduce your network attack surface by using Private Google Access when workloads communicate with Google APIs and services. It ensures that you use only services that support VPC Service Controls. In addition, you create a DNS record to resolve *.googleapis.com to restricted.googleapis.com. For more information, see Configuring Private Google Access. Perimeter controls As shown in the Architecture section, you place the resources for the serverless application in a separate VPC Service Controls security perimeter. This perimeter helps reduce the broad impact from a compromise of systems or services. However, this security perimeter doesn't apply to the Cloud Run functions build process when Cloud Build automatically builds your code into a container image and pushes that image to Artifact Registry. In this scenario, create an ingress rule for the Cloud Build service account in the service perimeter. Access policy To help ensure that only specific principals (users or services) can access resources and data, you enable IAM groups and roles. To help ensure that only specific resources can access your projects, you enable an access policy for your Google organization. For more information, see Access level attributes. Service accounts and access controls Service accounts are accounts for applications or compute workloads instead of for individual end users. To implement the principle of least privilege and the principle of separation of duties, you create service accounts with granular permissions and limited access to resources. The service accounts are as follows: A Cloud Run functions service account (cloudfunction_sa) that has the following roles: Compute Network Viewer (roles/compute.networkViewer) Eventarc Event Receiver (roles/eventarc.eventReceiver) Cloud Run Invoker (roles/run.invoker) Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) For more information, see Allow Cloud Run functions to access a secret. Cloud Run functions uses this service account to grant permission to specific Pub/Sub topics only and to restrict the Eventarc event system from Cloud Run functions compute resources in Example architecture with a serverless application using Cloud Storage and Example architecture with BigQuery data warehouse. A Serverless VPC Access connector account (gcp_sa_vpcaccess) that has the Compute Network User (roles/compute.networkUser) role. A second Serverless VPC Access connector account (cloud_services) that has the Compute Network User (roles/compute.networkUser) role. These service accounts for the Serverless VPC Access connector are required so that the connector can create the firewall ingress and egress rules in the host project. For more information, see Grant permissions to service accounts in your service projects. A service identity to run Cloud Run functions (cloudfunction_sa) that has the Serverless VPC Access User (roles/vpcaccess.user) and the Service Account User (roles/iam.serviceAccountUser) roles. A service account for the Google APIs (cloud_services_sa) that has the Compute Network User (roles/compute.networkUser) role to run internal Google processes on your behalf.
Perimeter controls As shown in the Architecture section, you place the resources for the serverless application in a separate VPC Service Controls security perimeter. This perimeter helps reduce the broad impact from a compromise of systems or services. However, this security perimeter doesn't apply to the Cloud Run functions build process when Cloud Build automatically builds your code into a container image and pushes that image to Artifact Registry. In this scenario, create an ingress rule for the Cloud Build service account in the service perimeter. Access policy To help ensure that only specific principals (users or services) can access resources and data, you enable IAM groups and roles. To help ensure that only specific resources can access your projects, you enable an access policy for your Google organization. For more information, see Access level attributes. Service accounts and access controls Service accounts are accounts for applications or compute workloads instead of for individual end users. To implement the principle of least privilege and the principle of separation of duties, you create service accounts with granular permissions and limited access to resources. The service accounts are as follows: A Cloud Run functions service account (cloudfunction_sa) that has the following roles: Compute Network Viewer (roles/compute.networkViewer) Eventarc Event Receiver (roles/eventarc.eventReceiver) Cloud Run Invoker (roles/run.invoker) Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) For more information, see Allow Cloud Run functions to access a secret. Cloud Run functions uses this service account to grant permission only to specific Pub/Sub topics and to restrict the Eventarc event system's access to Cloud Run functions compute resources in Example architecture with a serverless application using Cloud Storage and Example architecture with BigQuery data warehouse. A Serverless VPC Access connector account (gcp_sa_vpcaccess) that has the Compute Network User (roles/compute.networkUser) role. A second Serverless VPC Access connector account (cloud_services) that has the Compute Network User (roles/compute.networkUser) role. These service accounts for the Serverless VPC Access connector are required so that the connector can create the firewall ingress and egress rules in the host project. For more information, see Grant permissions to service accounts in your service projects. A service identity to run Cloud Run functions (cloudfunction_sa) that has the Serverless VPC Access User (roles/vpcaccess.user) and the Service Account User (roles/iam.serviceAccountUser) roles. A service account for the Google APIs (cloud_services_sa) that has the Compute Network User (roles/compute.networkUser) role to run internal Google processes on your behalf. A service identity for Cloud Run functions (cloud_serverless_sa) that has the Artifact Registry Reader (roles/artifactregistry.reader) role. This service account provides access to Artifact Registry and CMEKs. A service identity for Eventarc (eventarc_sa) that has the Cloud KMS CryptoKey Decrypter (roles/cloudkms.cryptoKeyDecrypter) and the Cloud KMS CryptoKey Encrypter (roles/cloudkms.cryptoKeyEncrypter) roles. A service identity for Artifact Registry (artifact_sa) that has the Cloud KMS CryptoKey Decrypter (roles/cloudkms.cryptoKeyDecrypter) and the Cloud KMS CryptoKey Encrypter (roles/cloudkms.cryptoKeyEncrypter) roles. Key management To validate integrity and help protect your data at rest, you use CMEKs with Artifact Registry, Cloud Run functions, Cloud Storage, and Eventarc. CMEKs provide you with greater control over your encryption keys. The following CMEKs are used: A software key for Artifact Registry that attests the code for your serverless application. An encryption key to encrypt the container images that Cloud Run functions deploys. An encryption key for Eventarc events that encrypts the messaging channel at rest. An encryption key to help protect data in Cloud Storage. When you apply the Terraform configuration, you specify the CMEK location, which determines the geographical location where the keys are stored. You must ensure that your CMEKs are in the same region as your resources. By default, CMEKs are rotated every 30 days. Secret management Cloud Run functions supports Secret Manager to store the secrets that your serverless application might require. These secrets can include API keys and database usernames and passwords. To expose the secret as a mounted volume, use the service_configs object variables in the main module. When you deploy this blueprint with the enterprise foundations blueprint, you must add your secrets to the secrets project before you apply the Terraform code. The blueprint will grant the Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) role to the Cloud Run functions service account. For more information, see Using secrets. Organization policies This blueprint adds constraints to the organization policy constraints that the enterprise foundations blueprint uses. For more information about the constraints that the enterprise foundations blueprint uses, see Organization policy constraints. The following table describes the additional organization policy constraints that are defined in the Secure Cloud Run functions Security module of this blueprint. Policy constraint Description Recommended value Allowed ingress settings (Cloud Run functions) constraints/cloudfunctions.allowedIngressSettings Allow ingress traffic only from internal services or the external HTTP(S) load balancer. The default is ALLOW_ALL. ALLOW_INTERNAL_ONLY Require VPC Connector (Cloud Run functions) constraints/cloudfunctions.requireVPCConnector Require that functions specify a Serverless VPC Access connector when they are deployed. The default is false. true Allowed VPC Connector egress settings (Cloud Run functions) constraints/cloudfunctions.allowedVpcConnectorEgressSettings Require all egress traffic for Cloud Run functions to use a Serverless VPC Access connector. The default is PRIVATE_RANGES_ONLY. ALL_TRAFFIC
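As with the firewall rules, the organization policy constraints in the preceding table map directly to Terraform resources. The following sketch is illustrative only and is not the blueprint's Secure Cloud Run functions Security module; the project ID is a placeholder, and it uses the provider's google_project_organization_policy resource to set the three recommended values.

```hcl
# Illustrative sketch only: enforce the three recommended Cloud Run functions
# constraints on a placeholder service project.

locals {
  service_project_id = "service-project-id" # placeholder
}

resource "google_project_organization_policy" "allowed_ingress" {
  project    = local.service_project_id
  constraint = "cloudfunctions.allowedIngressSettings"

  list_policy {
    allow {
      values = ["ALLOW_INTERNAL_ONLY"]
    }
  }
}

resource "google_project_organization_policy" "require_vpc_connector" {
  project    = local.service_project_id
  constraint = "cloudfunctions.requireVPCConnector"

  boolean_policy {
    enforced = true
  }
}

resource "google_project_organization_policy" "vpc_connector_egress" {
  project    = local.service_project_id
  constraint = "cloudfunctions.allowedVpcConnectorEgressSettings"

  list_policy {
    allow {
      values = ["ALL_TRAFFIC"]
    }
  }
}
```

If your organization manages policies at the folder or organization level instead, the equivalent folder- and organization-scoped resources follow the same shape.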
Operational controls You can enable logging and Security Command Center Premium tier features such as security health analytics and threat detection. These controls help you to do the following: Monitor data access. Ensure that proper auditing is in place. Support the security operations and incident management capabilities of your organization. Logging To help you meet auditing requirements and get insight into your projects, you configure Google Cloud Observability with data access logs for the services that you want to track. Deploy Cloud Logging in the projects before you apply the Terraform code to ensure that the blueprint can configure logging for the firewall, load balancer, and VPC network. After you deploy the blueprint, we recommend that you configure the following: Create an aggregated log sink across all projects. Add CMEKs to your logging sink. For all services within the projects, ensure that your logs include information about data writes and administrative access. For more information about logging best practices, see Detective controls. Monitoring and alerts After you deploy the blueprint, you can set up alerts to notify your security operations center (SOC) that a security event has occurred. For example, you can use alerts to let your security analysts know when a permission was changed on an IAM role. For more information about configuring Security Command Center alerts, see Setting up finding notifications. The Cloud Run functions Monitoring dashboard helps you to monitor the performance and health of your Cloud Run functions. It provides a variety of metrics and logs, which you can use to identify and troubleshoot problems. The dashboard also includes a number of features that can help you to improve the performance of your functions, such as the ability to set alerts and quotas. For more information, see Monitoring Cloud Run functions. To export alerts, see the following documents: Introduction to alerting Cloud Monitoring metric export Debugging and troubleshooting You can run Connectivity Tests to help you debug network configuration issues between Cloud Run functions and the resources within your subnet. Connectivity Tests simulates the expected path of a packet and provides details about the connectivity, including resource-to-resource connectivity analysis. Connectivity Tests isn't enabled by the Terraform code; you must set it up separately. For more information, see Create and run Connectivity Tests. Terraform deployment modes The following table describes the ways that you can deploy this blueprint, and which Terraform modules apply for each deployment mode. Deployment mode Terraform modules Deploy this blueprint after deploying the enterprise foundations blueprint (recommended). This option deploys the resources for this blueprint in the same VPC Service Controls perimeter that is used by the enterprise foundations blueprint. For more information, see How to customize Foundation v3.0.0 for Secure Cloud Run functions deployment. This option also uses the secrets project that you created when you deployed the enterprise foundations blueprint. Use these Terraform modules: secure-cloud-function-core secure-serverless-net secure-web-proxy Install this blueprint without installing the enterprise foundations blueprint. This option requires that you create a VPC Service Controls perimeter.
Use these Terraform modules: secure-cloud-function-core secure-serverless-harness secure-serverless-net secure-cloud-function-security secure-web-proxy secure-cloud-function Bringing it all together To implement the architecture described in this document, do the following: Review the README for the blueprint to ensure that you meet all the prerequisites. In your testing environment, to see the blueprint in action, deploy one of the examples. These examples match the architecture examples described in Architecture. As part of your testing process, consider doing the following: Use Security Command Center to scan the projects against common compliance requirements. Replace the sample application with a real application and run through a typical deployment scenario. Work with the application engineering and operations teams in your enterprise to test their access to the projects and to verify whether they can interact with the solution in the way that they would expect. Deploy the blueprint into your environment. What's next Review the Google Cloud enterprise foundations blueprint for a baseline secure environment. To see the details of the blueprint, read the Terraform configuration README. To deploy a serverless application using Cloud Run, see Deploy a secured serverless architecture using Cloud Run. Send feedback \ No newline at end of file diff --git a/Architecture_using_Cloud_Run.txt b/Architecture_using_Cloud_Run.txt new file mode 100644 index 0000000000000000000000000000000000000000..91c2e931dfec606f06fd637ba0406e684632c100 --- /dev/null +++ b/Architecture_using_Cloud_Run.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/serverless-blueprint +Date Scraped: 2025-02-23T11:56:16.852Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy a secured serverless architecture using Cloud Run Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-03-10 UTC This content was last updated in March 2023, and represents the status quo as of the time it was written. Google's security policies and systems may change going forward, as we continually improve protection for our customers. Serverless architectures let you develop software and services without provisioning or maintaining servers. You can use serverless architectures to build applications for a wide range of services. This document provides opinionated guidance for DevOps engineers, security architects, and application developers on how to help protect serverless applications that use Cloud Run. The document is part of a security blueprint that consists of the following: A GitHub repository that contains a set of Terraform configurations and scripts. A guide to the architecture, design, and security controls that you implement with the blueprint (this document). Though you can deploy this blueprint without deploying the Google Cloud enterprise foundations blueprint first, this document assumes that you've already configured a foundational set of security controls as described in the Google Cloud enterprise foundations blueprint. The architecture that's described in this document helps you to layer additional controls onto your foundation to help protect your serverless applications. To help define key security controls that are related to serverless applications, the Cloud Security Alliance (CSA) published Top 12 Critical Risks for Serverless Applications.
The security controls used in this blueprint are designed to address the risks that are relevant to the various use cases described in this document. Serverless use cases The blueprint supports the following use cases: Deploying a serverless architecture using Cloud Run (this document) Deploying a serverless architecture using Cloud Run functions Differences between Cloud Run functions and Cloud Run include the following: Cloud Run functions is triggered by events, such as changes to data in a database or the receipt of a message from a messaging system such as Pub/Sub. Cloud Run is triggered by requests, such as HTTP requests. Cloud Run functions is limited to a set of supported runtimes. You can use Cloud Run with any programming language. Cloud Run functions manages containers and the infrastructure that controls the web server or language runtime so that you can focus on your code. Cloud Run provides the flexibility for you to run these services yourself, so that you have control of the container configuration. For more information about differences between Cloud Run and Cloud Run functions, see Choosing a Google Cloud compute option. Architecture This blueprint lets you run serverless applications on Cloud Run with Shared VPC. We recommend that you use Shared VPC because it centralizes network policy and control for all networking resources. In addition, Shared VPC is deployed in the enterprise foundations blueprint. Recommended architecture: Cloud Run with a Shared VPC network The following image shows how you can run your serverless applications in a Shared VPC network. The architecture that's shown in the preceding diagram uses a combination of the following Google Cloud services and features: An external Application Load Balancer receives the data that serverless applications require from the internet and forwards it to Cloud Run. The external Application Load Balancer is a Layer 7 load balancer. Google Cloud Armor acts as the web application firewall to help protect your serverless applications against denial of service (DoS) and web attacks. Cloud Run lets you run application code in containers and manages the infrastructure on your behalf. In this blueprint, the Internal and Cloud Load Balancing ingress setting restricts access to Cloud Run so that Cloud Run will accept requests only from the external Application Load Balancer. The Serverless VPC Access connector connects your serverless application to your VPC network using Serverless VPC Access. Serverless VPC Access helps to ensure that requests from your serverless application to the VPC network aren't exposed to the internet. Serverless VPC Access lets Cloud Run communicate with other services, storage systems, and resources that support VPC Service Controls. By default, you create the Serverless VPC Access connector in the service project. You can create the Serverless VPC Access connector in the host project by specifying true for the connector_on_host_project input variable when you run the Secure Cloud Run Network module. For more information, see Comparison of configuration methods. Virtual Private Cloud (VPC) firewall rules control the flow of data into the subnet that hosts your resources, such as a company server hosted on Compute Engine, or company data stored in Cloud Storage. VPC Service Controls creates a security perimeter that isolates your Cloud Run services and resources by setting up authorization, access controls, and secure data exchange. 
This perimeter is designed to protect incoming content, to isolate your application by setting up additional access controls and monitoring, and to separate your governance of Google Cloud from the application. Your governance includes key management and logging. Shared VPC lets you connect the Serverless VPC Access connector in your service project to the host project. Cloud Key Management Service (Cloud KMS) stores the customer-managed encryption keys (CMEKs) that are used by the services in this blueprint, including your serverless application, Artifact Registry, and Cloud Run. Identity and Access Management (IAM) and Resource Manager help to restrict access and isolate resources. The access controls and resource hierarchy follow the principle of least privilege. Alternative architecture: Cloud Run without a Shared VPC network If you're not using a Shared VPC network, you can deploy Cloud Run and your serverless application in a VPC Service Control perimeter without a Shared VPC network. You might implement this alternative architecture if you're using a hub-and-spoke topology. The following image shows how you can run your serverless applications without Shared VPC. The architecture that's shown in the preceding diagram uses a combination of Google Cloud services and features that's similar to those that are described in the previous section, Recommended architecture: Cloud Run with a shared VPC. Organization structure You group your resources so that you can manage them and separate your development and testing environments from your production environment. Resource Manager lets you logically group resources by project, folder, and organization. The following diagram shows a resource hierarchy with folders that represent different environments such as bootstrap, common, production, non-production (or testing), and development. This resource hierarchy is based on the hierarchy that's described in the enterprise foundations blueprint. You deploy the projects that the blueprint specifies into the following folders: Common, Production, Non-production, and Dev. The following sections describe this diagram in more detail. Folders You use folders to isolate your production environment and governance services from your non-production and testing environments. The following table describes the folders from the enterprise foundations blueprint that are used by this blueprint. Folder Description Bootstrap Contains resources required to deploy the enterprise foundations blueprint. Common Contains centralized services for the organization, such as the security project. Production Contains projects that have cloud resources that have been tested and are ready to use by customers. In this blueprint, the Production folder contains the service project and host project. Non-production Contains projects that have cloud resources that are currently being tested and staged for release. In this blueprint, the Non-production folder contains the service project and host project. Dev Contains projects that have cloud resources that are currently being developed. In this blueprint, the Dev folder contains the service project and host project. You can change the names of these folders to align with your organization's folder structure, but we recommend that you maintain a similar structure. For more information, see Organization structure. For other folder structures, see Decide a resource hierarchy for your Google Cloud landing zone. Projects You isolate resources in your environment using projects. 
The following table describes the projects that are needed within the organization. You can change the names of these projects, but we recommend that you maintain a similar project structure. Project Description Host project This project includes the firewall ingress rules and any resources that have internal IP addresses (as described in Connect to a VPC network). When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. When you apply the Terraform code, you specify the name of this project, and the blueprint deploys the services. Service project This project includes your serverless application, Cloud Run, and the Serverless VPC Access connector. You attach the service project to the host project so that the service project can participate in the Shared VPC network. When you apply the Terraform code, you specify the name of this project. The blueprint deploys Cloud Run, Google Cloud Armor, Serverless VPC Access connector, and the load balancer. Security project This project includes your security-specific services, such as Cloud KMS and Secret Manager. When you apply the Terraform code, you specify the name of this project, and the blueprint deploys Cloud KMS. If you use the Secure Cloud Run Harness module, Artifact Registry is also deployed. If you deploy this blueprint after you deploy the security foundations blueprint, this project is the secrets project created by the enterprise foundations blueprint. For more information about the enterprise foundations blueprint projects, see Projects. If you deploy multiple instances of this blueprint without the enterprise foundations blueprint, each instance has its own security project. Mapping roles and groups to projects You must give different user groups in your organization access to the projects that make up the serverless architecture. The following table describes the blueprint recommendations for user groups and role assignments in the projects that you create. You can customize the groups to match your organization's existing structure, but we recommend that you maintain a similar segregation of duties and role assignment. Group Project Roles Serverless administrator grp-gcp-serverless-admin@example.com Service project roles/run.admin roles/compute.networkViewer compute.networkUser Serverless security administrator grp-gcp-serverless-security-admin@example.com Security project roles/run.viewer roles/cloudkms.viewer roles/artifactregistry.reader Cloud Run developer grp-gcp-secure-cloud-run-developer@example.com Security project roles/run.developer roles/artifactregistry.writer roles/cloudkms.cryptoKeyEncrypter Cloud Run user grp-gcp-secure-cloud-run-user@example.com Service project roles/run.invoker Security controls This section discusses the security controls in Google Cloud that you use to help secure your serverless architecture. The key security principles to consider are as follows: Secure access according to the principle of least privilege, giving entities only the privileges required to perform their tasks. Secure network connections through segmentation design, organization policies, and firewall policies. Secure configuration for each of the services. Understand the risk levels and security requirements for the environment that hosts your serverless workloads. Configure sufficient monitoring and logging to allow detection, investigation, and response. 
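To make the preceding group-to-role mapping and the least-privilege principle concrete, the following minimal Terraform sketch shows one way that such bindings could be written with the Google provider. It is an illustration only, not code from the blueprint's modules: the project IDs are placeholders, and the group addresses reuse the example.com names from the table.

```hcl
# Illustrative sketch only: bind the example groups to a subset of the roles
# recommended in the preceding table. Project IDs are placeholders.

locals {
  service_project_id  = "service-project-id"  # placeholder
  security_project_id = "security-project-id" # placeholder

  serverless_admin_roles = [
    "roles/run.admin",
    "roles/compute.networkViewer",
    "roles/compute.networkUser",
  ]
}

resource "google_project_iam_member" "serverless_admin" {
  for_each = toset(local.serverless_admin_roles)

  project = local.service_project_id
  role    = each.value
  member  = "group:grp-gcp-serverless-admin@example.com"
}

resource "google_project_iam_member" "serverless_security_admin_run_viewer" {
  project = local.security_project_id
  role    = "roles/run.viewer"
  member  = "group:grp-gcp-serverless-security-admin@example.com"
}

resource "google_project_iam_member" "cloud_run_user_invoker" {
  project = local.service_project_id
  role    = "roles/run.invoker"
  member  = "group:grp-gcp-secure-cloud-run-user@example.com"
}
```

In a real deployment, the blueprint's modules and your organization's own group names drive these bindings; the sketch only mirrors part of the table so that the segregation of duties is easy to see.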
Security controls for serverless applications You can help to protect your serverless applications using controls that protect traffic on the network, control access, and encrypt data. Build system controls When you deploy your serverless application, you use Artifact Registry to store the container images and binaries. Artifact Registry supports CMEK so that you can encrypt the repository using your own encryption keys. SSL traffic To support HTTPS traffic to your serverless application, you configure an SSL certificate for your external Application Load Balancer. By default, you use a self-signed certificate that you can change to a managed certificate after you apply the Terraform code. For more information about installing and using managed certificates, see Using Google-managed SSL certificates. Network and firewall rules Virtual Private Cloud (VPC) firewall rules control the flow of data into the perimeters. You create firewall rules that deny all egress, except for specific TCP port 443 connections from restricted.googleapis.com special domain names. Using the restricted.googleapis.com domain has the following benefits: It helps reduce your network attack surface by using Private Google Access when workloads communicate with Google APIs and services. It ensures that you use only services that support VPC Service Controls. For more information, see Configuring Private Google Access. Perimeter controls As shown in the recommended-architecture diagram, you place the resources for the serverless application in a separate perimeter. This perimeter helps protect the serverless application from unintended access and data exfiltration. Access policy To help ensure that only specific identities (users or services) can access resources and data, you enable IAM groups and roles. To help ensure that only specific resources can access your projects, you enable an access policy for your Google organization. For more information, see Access level attributes. Identity and Access Proxy If your environment already includes Identity and Access Proxy (IAP), you can configure the external Application Load Balancer to use IAP to authorize traffic for your serverless application. IAP lets you establish a central authorization layer for your serverless application so that you can use application-level access controls instead of relying on network-level firewalls. To enable IAP for your application, in the loadbalancer.tf file, set iap_config.enable to true. For more information about IAP, see Identity-Aware Proxy overview. Service accounts and access controls Service accounts are identities that Google Cloud can use to run API requests on your behalf. To implement separation of duties, you create service accounts that have different roles for specific purposes. The service accounts are as follows: A Cloud Run service account (cloud_run_sa) that has the following roles: roles/run.invoker roles/secretmanager.secretAccessor For more information, see Allow Cloud Run to access a secret. A Serverless VPC Access connector account (gcp_sa_vpcaccess) that has the roles/compute.networkUser role. A second Serverless VPC Access connector account (cloud_services) that has the roles/compute.networkUser role. These service accounts for the Serverless VPC Access connector are required so that the connector can create the firewall ingress and egress rules in the host project. For more information, see Grant permissions to service accounts in your service projects. 
A service identity to run Cloud Run (run_identity_services) that has the roles/vpcaccess.user role. A service agent for the Google APIs (cloud_services_sa) that has the roles/editor role. This service account lets Cloud Run communicate with the Serverless VPC Access connector. A service identity for Cloud Run (serverless_sa) that has the roles/artifactregistry.reader role. This service account provides access to Artifact Registry and CMEK encryption and decryption keys. Key management You use the CMEK keys to help protect your data in Artifact Registry and in Cloud Run. You use the following encryption keys: A software key for Artifact Registry that attests the code for your serverless application. An encryption key to encrypt the container images that Cloud Run deploys. When you apply the Terraform configuration, you specify the CMEK location, which determines the geographical location where the keys are stored. You must ensure that your CMEK keys are in the same region as your resources. By default, CMEK keys are rotated every 30 days. Secret management Cloud Run supports Secret Manager to store the secrets that your serverless application might require. These secrets can include API keys and database usernames and passwords. To expose the secret as a mounted volume, use the volume_mounts and volumes variables in the main module. When you deploy this blueprint with the enterprise foundations blueprint, you must add your secrets to the secrets project before you apply the Terraform code. The blueprint will grant the Secret Manager Secret Accessor role to the Cloud Run service account. For more information, see Use secrets. Organization policies This blueprint adds constraints to the organization policy constraints. For more information about the constraints that the enterprise foundations blueprint uses, see Organization policy constraints. The following table describes the additional organization policy constraints that are defined in the Secure Cloud Run Security module of this blueprint. Policy constraint Description Recommended value constraints/run.allowedIngress Allow ingress traffic only from internal services or the external Application Load Balancer. internal-and-cloud-load-balancing constraints/run.allowedVPCEgress Require a Cloud Run service's revisions to use a Serverless VPC Access connector, and ensure that the revisions' VPC egress settings are set to allow private ranges only. private-ranges-only Operational controls You can enable logging and Security Command Center Premium tier features such as security health analytics and threat detection. These controls help you to do the following: Monitor who is accessing your data. Ensure that proper auditing is in place. Support the ability of your incident management and operations teams to respond to issues that might occur. Logging To help you meet auditing requirements and get insight into your projects, you configure the Google Cloud Observability with data logs for the services that you want to track. Deploy Cloud Logging in the projects before you apply the Terraform code to ensure that the blueprint can configure logging for the firewall, load balancer, and VPC network. After you deploy the blueprint, we recommend that you configure the following: Create an aggregated log sink across all projects. Select the appropriate region to store your logs. Add CMEK keys to your logging sink. 
For all services within the projects, ensure that your logs include information about data reads and writes, and ensure that they include information about what administrators access. For more information about logging best practices, see Detective controls. Monitoring and alerts After you deploy the blueprint, you can set up alerts to notify your security operations center (SOC) that a security incident might be occurring. For example, you can use alerts to let your security analysts know when a permission has changed in an IAM role. For more information about configuring Security Command Center alerts, see Setting up finding notifications. The Cloud Run Monitoring dashboard, which is part of the sample dashboard library, provides you with the following information: Request count Request latency Billable instance time Container CPU allocation Container memory allocation Container CPU utilization Container memory utilization For instructions on importing the dashboard, see Install sample dashboards. To export alerts, see the following documents: Introduction to alerting Cloud Monitoring metric export Debugging and troubleshooting You can run Connectivity Tests to help you debug network configuration issues between Cloud Run and the resources within your subnet. Connectivity Tests simulates the expected path of a packet and provides details about the connectivity, including resource-to-resource connectivity analysis. Connectivity Tests isn't enabled by the Terraform code; you must set it up separately. For more information, see Create and run Connectivity Tests. Detective controls This section describes the detective controls that are included in the blueprint. Google Cloud Armor and WAF You use an external Application Load Balancer and Google Cloud Armor to provide distributed denial of service (DDoS) protection for your serverless application. Google Cloud Armor is the web application firewall (WAF) included with Google Cloud. You configure the Google Cloud Armor rules described in the following table to help protect the serverless application. The rules are designed to help mitigate against OWASP Top 10 risks. Google Cloud Armor rule name ModSecurity rule name Remote code execution rce-v33-stable Local file include lfi-v33-stable Protocol attack protocolattack-v33-stable Remote file inclusion rfi-v33-stable Scanner detection scannerdetection-v33-stable Session fixation attack sessionfixation-v33-stable SQL injection sqli-v33-stable Cross-site scripting xss-v33-stable When these rules are enabled, Google Cloud Armor automatically denies any traffic that matches the rule. For more information about these rules, see Tune Google Cloud Armor preconfigured WAF rules. Security issue detection in Cloud Run You can detect potential security issues in Cloud Run using Recommender. Recommender can detect security issues such as the following: API keys or passwords that are stored in environment variables instead of in Secret Manager. Containers that include hard-coded credentials instead of using service identities. About a day after you deploy Cloud Run, Recommender starts providing its findings and recommendations. Recommender displays its findings and recommended corrective actions in the Cloud Run service list or the Recommendation Hub. Terraform deployment modes The following table describes the ways that you can deploy this blueprint, and which Terraform modules apply for each deployment mode. 
Deployment mode Terraform modules Deploy this blueprint after deploying the enterprise foundations blueprint (recommended). This option deploys the resources for this blueprint in the same VPC Service Controls perimeter that is used by the enterprise foundations blueprint. For more information, see How to customize Foundation v2.3.1 for Secured Serverless deployment. This option also uses the secrets project that you created when you deployed the enterprise foundations blueprint. Use these Terraform modules: secure-cloud-run-core secure-serverless-net secure-cloud-run-security secure-cloud-run Install this blueprint without installing the enterprise foundations blueprint. This option requires that you create a VPC Service Controls perimeter. Use these Terraform modules: secure-cloud-run-core secure-serverless-harness secure-serverless-net secure-cloud-run-security secure-cloud-run Bringing it all together To implement the architecture described in this document, do the following: Review the README for the blueprint and ensure that you meet all the prerequisites. Create an SSL certificate for use with the external Application Load Balancer. If you do not complete this step, the blueprint uses a self-signed certificate to deploy the load balancer, and your browser will display warnings about insecure connections when you attempt to access your serverless application. In your testing environment, deploy the Secure Cloud Run Example to see the blueprint in action. As part of your testing process, consider doing the following: Use Security Command Center to scan the projects against common compliance requirements. Replace the sample application with a real application and run through a typical deployment scenario. Work with the application engineering and operations teams in your enterprise to test their access to the projects and to verify whether they can interact with the solution in the way that they would expect. Deploy the blueprint into your environment. Compliance mappings To help define key security controls that are related to serverless applications, the Cloud Security Alliance (CSA) published Top 12 Critical Risks for Serverless Applications. The security controls used in this blueprint help you address most of these risks, as described in the following table. Risk Blueprint mitigation Your responsibility 1. Function event-data injection Google Cloud Armor and external Application Load Balancers help protect against OWASP Top 10, as described in OWASP Top 10 2021 mitigation options on Google Cloud Secure coding practices such as exception handling, as described in the OWASP Secure Coding Practices and Supply chain Levels for Software Artifacts (SLSA) 2. Broken authentication None IAP and Identity Platform to authenticate users to the service 3. Insecure serverless deployment configuration CMEK with Cloud KMS Management of your own encryption keys 4. Over-privileged function permissions and roles Custom service account for service authentication (not the default Compute Engine service account) Tightly-scoped IAM roles on the Cloud Run service account VPC Service Controls to limit scope of Google Cloud API access (as provided using the Google Cloud enterprise foundations blueprint) None 5. Inadequate function monitoring and logging Cloud Logging Cloud Monitoring dashboards and alerting structure 6. Insecure third-party dependencies None Protect the CI/CD pipeline using code scanning and pre-deployment analysis 7. 
Insecure application secrets storage Secret Manager Secret management in application code 8. Denial of service and financial resource exhaustion Google Cloud Armor Cloud Run service timeouts (default is 120 seconds) None 9. Serverless business logic manipulation VPC Service Controls to limit scope of Google Cloud API access (provided using enterprise foundations blueprint) None 10. Improper exception handling and verbose error messages None Secure programming best practices 11. Obsolete functions, cloud resources, and event triggers Use revisions to minimize the attack surface. Revisions help to reduce the likelihood of accidentally enabling a previous, obsolete iteration of a service. Revisions also help you test a new revision's security posture using A/B testing along with monitoring and logging tools. Infrastructure as code (IaC) to manage cloud resources Cloud resources monitoring using Security Command Center Cloud Billing monitoring Cleanup of unused cloud resources to minimize attack surface 12. Cross-execution data persistency None None What's next For a baseline secure environment, review the Google Cloud enterprise foundations blueprint. To see the details of the blueprint that's described in this document, read the Terraform configuration README file. To read about security and compliance best practices, see Google Cloud Architecture Framework: Security, privacy, and compliance. For more best practices and blueprints, see the security best practices center. Send feedback \ No newline at end of file diff --git a/Artifact_Registry(1).txt b/Artifact_Registry(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..5e9224d8ff1437296e7f0e89080f791274e0cb7d --- /dev/null +++ b/Artifact_Registry(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/artifact-registry/docs +Date Scraped: 2025-02-23T12:04:16.424Z + +Content: +Home Documentation Artifact Registry Artifact Registry documentation Stay organized with collections Save and categorize content based on your preferences. Artifact Registry documentation View all product documentation A universal package manager for all your build artifacts and dependencies. Fast, scalable, reliable and secure. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstarts Transitioning from Container Registry Managing repositories Configuring access control Working with container images Working with Java packages Working with Node.js packages Working with Python packages find_in_page Reference Support for the Docker Registry API REST API RPC API info Resources Artifact Registry Pricing Release notes Getting support Quotas and limits Related videos \ No newline at end of file diff --git a/Artifact_Registry.txt b/Artifact_Registry.txt new file mode 100644 index 0000000000000000000000000000000000000000..4dab8910c4ce07c413cdb88af317dbb9c0885c5c --- /dev/null +++ b/Artifact_Registry.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/artifact-registry/docs +Date Scraped: 2025-02-23T12:03:02.560Z + +Content: +Home Documentation Artifact Registry Artifact Registry documentation Stay organized with collections Save and categorize content based on your preferences. 
Artifact Registry documentation View all product documentation A universal package manager for all your build artifacts and dependencies. Fast, scalable, reliable and secure. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstarts Transitioning from Container Registry Managing repositories Configuring access control Working with container images Working with Java packages Working with Node.js packages Working with Python packages find_in_page Reference Support for the Docker Registry API REST API RPC API info Resources Artifact Registry Pricing Release notes Getting support Quotas and limits Related videos \ No newline at end of file diff --git a/Artificial_Intelligence.txt b/Artificial_Intelligence.txt new file mode 100644 index 0000000000000000000000000000000000000000..3c221b3822b5f3d54523db2e41fe064983abdaeb --- /dev/null +++ b/Artificial_Intelligence.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/ai +Date Scraped: 2025-02-23T11:58:37.866Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceAI and machine learning solutionsAt Google, AI is in our DNA. In partnership with Google Cloud, business leaders can leverage the power of purpose-built AI solutions to transform their organizations and solve real-world problems.Try it in consoleContact salesSummarize large documents with generative AIDeploy a preconfigured solution that uses generative AI to quickly extract text and summarize large documents.Deploy an AI/ML image processing pipelineLaunch a preconfigured, interactive solution that uses pre-trained machine learning models to analyze images and generate image annotations.Create a chat app using retrieval-augmented generation (RAG)Deploy a preconfigured solution with a chat-based experience that provides questions and answers based on embeddings stored as vectors.Turn ideas into reality with Google Cloud AIAI SOLUTIONSRELATED PRODUCTS AND SERVICESCustomer Engagement Suite with Google AIDelight customers with an end-to-end application that combines our most advanced conversational AI, with multimodal and omnichannel functionality to deliver exceptional customer experiences at every touchpoint.Conversational AgentsAgent Assist Conversational InsightsContact Center as a ServiceDocument AIImprove your operational efficiency by bringing AI-powered document understanding to unstructured data workflows across a variety of document formats.Document AIBase OCREnterprise Knowledge Graph enrichmentHuman in the LoopGemini for Google CloudGemini for Google Cloud helps you be more productive and creative. It can be your writing and coding assistant, creative designer, expert adviser, or even your data analyst.Gemini Code Assist Gemini Cloud AssistGemini in SecurityGemini in BigQueryVertex AI Search for commerceIncrease conversion rate across digital properties with AI solutions that help brands to deliver personalized consumer experiences across channels. 
Recommendations AI Vision Product Search Retail Search Let's solve your challenges together. See how you can transform your business with Google Cloud. Contact us Learn from our customers Video Retailer Marks & Spencer created better customer experience with Contact Center AI from Google Cloud. 46:04 Video AES uses AutoML Vision to cut wind turbine inspection time from two weeks to two days. 02:30 Case Study The City of Memphis uses Google Cloud AI to detect potholes with 90%+ accuracy. 5-min read See all customers Cloud AI products comply with our SLA policies. They may offer different latency or availability guarantees from other Google Cloud services. Start your AI journey today Try Google Cloud AI and machine learning products in the console. Go to my console Have a large project? Contact sales Work with a trusted partner Find a partner Get tips & best practices See tutorials \ No newline at end of file diff --git a/Assess_and_discover_your_workloads.txt b/Assess_and_discover_your_workloads.txt new file mode 100644 index 0000000000000000000000000000000000000000..034490bf18cf007b01673900a4be56709adc0fd6 --- /dev/null +++ b/Assess_and_discover_your_workloads.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-gcp-assessing-and-discovering-your-workloads +Date Scraped: 2025-02-23T11:51:33.632Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Assess and discover your workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-02 UTC This document can help you plan, design, and implement the assessment phase of your migration to Google Cloud. Discovering your workloads and services inventory, and mapping their dependencies, can help you identify what you need to migrate and in what order. When planning and designing a migration to Google Cloud, you first need a deep knowledge of your current environment and of the workloads to migrate.
This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads (this document) Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs The following diagram illustrates the path of your migration journey. This document is useful if you're planning a migration from an on-premises environment, a private hosting environment, another cloud provider, or if you're evaluating the opportunity to migrate and exploring what the assessment phase might look like. In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. Build an inventory of your workloads To scope your migration, you must first understand how many items, such as workloads and hardware appliances, exist in your current environment, along with their dependencies. Building the inventory is a non-trivial task that requires a significant effort, especially when you don't have any automatic cataloging system in place. To have a comprehensive inventory, you need to use the expertise of the teams that are responsible for the design, deployment, and operation of each workload in your current environment, as well as the environment itself. The inventory shouldn't be limited to workloads only, but should at least contain the following: Dependencies of each workload, such as databases, message brokers, configuration storage systems, and other components. Services supporting your workload infrastructure, such as source repositories, continuous integration and continuous deployment (CI/CD) tools, and artifact repositories. Servers, either virtual or physical, and runtime environments. Physical appliances, such as network devices, firewalls, and other dedicated hardware. When compiling this list, you should also gather information about each item, including: Source code location and if you're able to modify this source code. Deployment method for the workload in a runtime environment, for example, if you use an automated deployment pipeline or a manual one. Network restrictions or security requirements. IP address requirements. How you're exposing the workload to clients. Licensing requirements for any software or hardware. How the workload authenticates against your identity and access management system. 
For example, for each hardware appliance, you should know its detailed specifications, such as its name, vendor, technologies, and dependencies on other items in your inventory. For example: Name: NAS Appliance Vendor and model: Vendor Y, Model Z Technologies: NFS, iSCSI Dependencies: Network connectivity with Jumbo frames to VM compute hardware. This list should also include non-technical information, for example, under which licensing terms you're allowed to use each item and any other compliance requirements. While some licenses let you deploy a workload in a cloud environment, others explicitly forbid cloud deployment. Some licenses are assigned based on the number of CPUs or sockets in use, and these concepts might not be applicable when running on cloud technology. Some of your data might have restrictions regarding the geographical region where it's stored. Finally, some sensitive workloads can require sole tenancy. Along with the inventory, it's useful to provide aids for a visual interpretation of the data you gathered. For example, you can provide a dependency graph and charts to highlight aspects of interest, such as how your workloads are distributed in an automated or manual deployment process. How to build your inventory There are different ways to build a workload inventory. Although the quickest way to get started is to proceed manually, this approach can be difficult for a large production environment. Information in manually built inventories can quickly become outdated, and the resulting migration might fail because you didn't confirm the contents of your inventories. Building the inventory is not a one-time exercise. If your current environment is highly dynamic, you should also spend effort in automating the inventory creation and maintenance, so you eventually have a consistent view of all the items in your environment at any given time. For information about how to build an inventory of your workloads, see Migration Center: Start an asset discovery. Example of a workload inventory This example is an inventory of an environment supporting an ecommerce app. The inventory includes workloads, dependencies, services supporting multiple workloads, and hardware appliances. Note: The system resources requirements refer to the current environment. These requirements must be re-evaluated to consider the resources of the target environment. For example, the CPU cores of the target environment might be more performant due to a more modern architecture and higher clock speeds, so your workloads might require fewer cores. Workloads For each workload in the environment, the following table highlights the most important technologies, its deployment procedure, and other requirements. 
Name Source code location Technologies Deployment procedure Other requirements Dependencies System resources requirements Marketing website Corporate repository Angular frontend Automated Legal department must validate content Caching service 5 CPU cores 8 GB of RAM Back office Corporate repository Java backend, Angular frontend Automated N/A SQL database 4 CPU cores 4 GB of RAM Ecommerce workload Proprietary workload Vendor X Model Y Version 1.2.0 Manual Customer data must reside inside the European Union SQL database 10 CPU cores 32 GB of RAM Enterprise resource planning (ERP) Proprietary workload Vendor Z, Model C, Version 7.0 Manual N/A SQL database 10 CPU cores 32 GB of RAM Stateless microservices Corporate repository Java Automated N/A Caching service 4 CPU cores 8 GB of RAM Dependencies The following table is an example of the dependencies of the workloads listed in the inventory. These dependencies are necessary for the workloads to correctly function. Name Technologies Other requirements Dependencies System resources requirements SQL database PostgreSQL Customer data must reside inside the European Union Backup and archive system 30 CPU cores 512 GB of RAM Supporting services In your environment, you might have services that support multiple workloads. In this ecommerce example, there are the following services: Name Technologies Other requirements Dependencies System resources requirements Source code repositories Git N/A Backup and archive system 2 CPU cores 4 GB of RAM Backup and archive system Vendor G, Model H, version 2.3.0 By law, long-term storage is required for some items N/A 10 CPU cores 8 GB of RAM CI tool Jenkins N/A Source code repositories artifact repository backup and archive system 32 CPU cores 128 GB of RAM Artifact repository Vendor A Model N Version 5.0.0 N/A Backup and archive system 4 CPU cores 8 GB of RAM Batch processing service Cron jobs running inside the CI tool N/A CI tool 4 CPU cores 8 GB of RAM Caching service Memcached Redis N/A N/A 12 CPU cores 50 GB of RAM Hardware The example environment has the following hardware appliances: Name Technologies Other requirements Dependencies System resources requirements Firewall Vendor H Model V N/A N/A N/A Instances of Server j Vendor K Model B Must be decommissioned because no longer supported N/A N/A NAS Appliance Vendor Y Model Z NFS iSCSI N/A N/A N/A Assess your deployment and operational processes It's important to have a clear understanding of how your deployment and operational processes work. These processes are a fundamental part of the practices that prepare and maintain your production environment and the workloads that run there. Your deployment and operational processes might build the artifacts that your workloads need to function. Therefore, you should gather information about each artifact type. For example, an artifact can be an operating system package, an application deployment package, an operating system image, a container image, or something else. In addition to the artifact type, consider how you complete the following tasks: Develop your workloads. Assess the processes that development teams have in place to build your workloads. For example, how are your development teams designing, coding, and testing your workloads? Generate the artifacts that you deploy in your source environment. 
To deploy your workloads in your source environment, you might be generating deployable artifacts, such as container images or operating system images, or you might be customizing existing artifacts, such as third-party operating system images by installing and configuring software. Gathering information about how you're generating these artifacts helps you to ensure that the generated artifacts are suitable for deployment in Google Cloud. Store the artifacts. If you produce artifacts that you store in an artifact registry in your source environment, you need to make the artifacts available in your Google Cloud environment. You can do so by employing strategies like the following: Establish a communication channel between the environments: Make the artifacts in your source environment reachable from the target Google Cloud environment. Refactor the artifact build process: Complete a minor refactor of your source environment so that you can store artifacts in both the source environment and the target environment. This approach supports your migration by building infrastructure like an artifact repository before you have to implement artifact build processes in the target Google Cloud environment. You can implement this approach directly, or you can build on the previous approach of establishing a communication channel first. Having artifacts available in both the source and target environments lets you focus on the migration without having to implement artifact build processes in the target Google Cloud environment as part of the migration. Scan and sign code. As part of your artifact build processes, you might be using code scanning to help you guard against common vulnerabilities and unintended network exposure, and code signing to help you ensure that only trusted code runs in your environments. Deploy artifacts in your source environment. After you generate deployable artifacts, you might be deploying them in your source environment. We recommend that you assess each deployment process. The assessment helps ensure that your deployment processes are compatible with Google Cloud. It also helps you to understand the effort that will be necessary to eventually refactor the processes. For example, if your deployment processes work with your source environment only, you might need to refactor them to target your Google Cloud environment. Inject runtime configuration. You might be injecting runtime configuration for specific clusters, runtime environments, or workload deployments. The configuration might initialize environment variables and other configuration values such as secrets, credentials, and keys. To help ensure that your runtime configuration injection processes work on Google Cloud, we recommend that you assess how you're configuring the workloads that run in your source environment. Logging, monitoring, and profiling. Assess the logging, monitoring, and profiling processes that you have in place to monitor the health of your source environment, the metrics of interest, and how you're consuming data provided by these processes. Authentication. Assess how you're authenticating against your source environment. Provision and configure your resources. To prepare your source environment, you might have designed and implemented processes that provision and configure resources. For example, you might be using Terraform along with configuration management tools to provision and configure resources in your source environment. 
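As one concrete example of the provisioning and artifact-storage work that this section describes, the following minimal Terraform sketch creates an Artifact Registry repository in the target Google Cloud environment that a source-environment CI tool could push container images to during the migration. It is illustrative only; the project ID, region, repository name, and service account are placeholders rather than values from any real environment.

```hcl
# Illustrative sketch only: a target-side Docker repository that build
# artifacts can be pushed to from the source environment during the migration.
# All names below are placeholders.

resource "google_artifact_registry_repository" "migration_images" {
  project       = "target-project-id" # placeholder
  location      = "europe-west1"      # placeholder region
  repository_id = "migration-images"
  description   = "Container images mirrored from the source environment"
  format        = "DOCKER"
}

# Let the existing CI tool's service account (placeholder) push images.
resource "google_artifact_registry_repository_iam_member" "ci_writer" {
  project    = google_artifact_registry_repository.migration_images.project
  location   = google_artifact_registry_repository.migration_images.location
  repository = google_artifact_registry_repository.migration_images.name
  role       = "roles/artifactregistry.writer"
  member     = "serviceAccount:ci-builder@target-project-id.iam.gserviceaccount.com" # placeholder
}
```

Provisioning this kind of target-side infrastructure early lets you refactor the artifact build process once and keep publishing to both environments while the migration is in progress.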
Assess your infrastructure After you assess your deployment and operational processes, we recommend that you assess the infrastructure that is supporting your workloads in the source environment. To assess that infrastructure, consider the following: How you organized resources in your source environment. For example, some environments support a logical separation between resources using constructs that isolate groups of resources from each other, such as organizations, projects, and namespaces. How you connected your environment to other environments, such as on-premises environments and other cloud providers. Categorize your workloads After you complete the inventory, you need to organize your workloads into different categories. This categorization can help you prioritize the workloads to migrate according to their complexity and risk in moving to the cloud. A catalog matrix should have one dimension for each assessment criterion you're considering in your environment. Choose a set of criteria that covers all the requirements of your environment, including the system resources each workload needs. For example, you might want to know whether a workload has any dependencies, or whether it's stateless or stateful. When you design the catalog matrix, consider that each criterion you add is another dimension to represent. The resulting matrix might be difficult to visualize. A possible solution to this problem is to use multiple smaller matrixes instead of a single, complex one. Also, next to each workload you should add a migration complexity indicator. This indicator estimates how difficult each workload is to migrate. The granularity of this indicator depends on your environment. For a basic example, you might have three categories: easy to migrate, hard to migrate, or can't be migrated. To complete this activity, you need experts for each item in the inventory to estimate its migration complexity. Drivers of this migration complexity are unique to each business. When the catalog is complete, you can also build visuals and graphs to help you and your team to quickly evaluate metrics of interest. For example, draw a graph that highlights how many components have dependencies, or one that highlights the migration difficulty of each component. For information about how to build an inventory of your workloads, see Migration Center: Start an asset discovery. Example of a workload catalog The following assessment criteria are used in this example, one for each matrix axis: How critical a workload is to the business. Whether a workload has dependencies, or is a dependency for other workloads. Maximum allowable downtime for the workload. How difficult a workload is to migrate. Importance to the business Doesn't have dependencies or dependents Has dependencies or dependents Maximum allowable downtime Difficulty Mission critical Stateless microservices 2 minutes Easy ERP 24 hours Hard Ecommerce workload No downtime Hard Hardware firewall No downtime Can't move SQL database 10 minutes Easy Source code repositories 12 hours Easy Non-mission critical Marketing website 2 hours Easy Backup and archive 24 hours Easy Batch processing service 48 hours Easy Caching service 30 minutes Easy Back office 48 hours Hard CI tool 24 hours Easy Artifact repository 30 minutes Easy To help you visualize the results in the catalog, you can build visuals and charts.
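When the catalog is small, even a simple script can produce such summaries. The following is a minimal sketch that tallies the example catalog by migration difficulty; the in-memory entries mirror the example tables in this section:

# Minimal sketch: count workloads in the example catalog by migration difficulty.
from collections import Counter

catalog = [
    {"name": "Stateless microservices", "importance": "Mission critical", "difficulty": "Easy"},
    {"name": "ERP", "importance": "Mission critical", "difficulty": "Hard"},
    {"name": "Ecommerce workload", "importance": "Mission critical", "difficulty": "Hard"},
    {"name": "Hardware firewall", "importance": "Mission critical", "difficulty": "Can't move"},
    {"name": "SQL database", "importance": "Mission critical", "difficulty": "Easy"},
    {"name": "Source code repositories", "importance": "Mission critical", "difficulty": "Easy"},
    {"name": "Marketing website", "importance": "Non-mission critical", "difficulty": "Easy"},
    {"name": "Backup and archive", "importance": "Non-mission critical", "difficulty": "Easy"},
    {"name": "Batch processing service", "importance": "Non-mission critical", "difficulty": "Easy"},
    {"name": "Caching service", "importance": "Non-mission critical", "difficulty": "Easy"},
    {"name": "Back office", "importance": "Non-mission critical", "difficulty": "Hard"},
    {"name": "CI tool", "importance": "Non-mission critical", "difficulty": "Easy"},
    {"name": "Artifact repository", "importance": "Non-mission critical", "difficulty": "Easy"},
]

difficulty_counts = Counter(entry["difficulty"] for entry in catalog)
for difficulty, count in difficulty_counts.most_common():
    print(f"{difficulty}: {count}")  # Easy: 9, Hard: 3, Can't move: 1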
For example, a chart that highlights the migration difficulty of each workload shows that most of the workloads are easy to move, three of them are hard to move, and one of them is not possible to move. Educate your organization about Google Cloud To take full advantage of Google Cloud, your organization needs to start learning about the services, products, and technologies that your business can use on Google Cloud. Your staff can begin with Google Cloud free trial accounts that contain credits to help them experiment and learn. Creating a free environment for testing and learning is critical to the learning experience of your staff. You have several training options: Public and open resources: You can get started learning Google Cloud with free hands-on labs, video series, Cloud OnAir webinars, and Cloud OnBoard training events. In-depth courses: If you want a deeper understanding of how Google Cloud works, you can take on-demand courses from Google Cloud Skills Boost or Google Cloud Training Specializations from Coursera online at your own pace, or attend classroom training delivered by our worldwide authorized training partners. These courses typically span from one to several days. Role-based learning paths: You can train your engineers according to their role in your organization. For example, you can train your workload developers or infrastructure operators on how to best use Google Cloud services. You can also certify your engineers' knowledge of Google Cloud with various certifications, at different levels: Associate certifications: A starting point for those new to Google Cloud that can open the door to professional certifications, such as the associate cloud engineer certification. Professional certifications: If you want to assess advanced design and implementation skills for Google Cloud from years of experience, you can get certifications, such as the professional cloud architect or the professional data engineer. Google Workspace certifications: You can demonstrate collaboration skills using Google Workspace tools with a Google Workspace certification. Apigee certifications: With the Apigee certified API engineer certification, you can demonstrate the ability to design and develop robust, secure, and scalable APIs. Google developers certifications: You can demonstrate development skills with the Associate Android developer (This certification is being updated) and mobile web specialist certifications. In addition to training and certification, one of the best ways to get experience with Google Cloud is to begin using the product to build business proofs-of-concept. Experiment and design proofs of concept To show the value and efficacy of Google Cloud, consider designing and developing one or more proofs of concept (PoCs) for each category of workload in your workload catalog. Experimentation and testing let you validate assumptions and demonstrate the value of cloud to business leaders. At a minimum, your PoC should include the following: A comprehensive list of the use cases that your workloads support, including uncommon ones and corner cases. All the requirements for each use case, such as performance, scalability, and consistency requirements, failover mechanisms, and network requirements. A potential list of technologies and products that you want to investigate and test. You should design PoCs and experiments to validate all the use cases on the list. Each experiment should have a precise validity context, scope, expected outputs, and measurable business impact.
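One lightweight way to keep experiments comparable is to record each one in a structured form. The following is a minimal sketch with hypothetical fields and values, not a structure prescribed by this guide:

# Minimal sketch: a structured record for one proof-of-concept experiment.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PocExperiment:
    use_case: str
    requirements: List[str] = field(default_factory=list)
    products_to_test: List[str] = field(default_factory=list)
    expected_output: str = ""
    success_metric: str = ""

scale_up_poc = PocExperiment(
    use_case="CPU-bound workload must absorb peaks in demand",
    requirements=["Provision hundreds of vCPUs in one zone", "Scale up within minutes"],
    products_to_test=["Compute Engine managed instance groups"],
    expected_output="End-to-end time to reach target capacity",
    success_metric="Scale-up time reduced compared to the current environment",
)
print(scale_up_poc)

The examples that follow describe the kinds of experiments such a record might capture.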
For example, if one of your CPU-bound workloads needs to quickly scale to satisfy peaks in demand, you can run an experiment to verify that you can provision many virtual CPU cores in a zone, and how much time it takes to do so. If you experience a significant value-add, such as reducing new workload scale-up time by 95% compared to your current environment, this experiment can demonstrate instant business value. If you're interested in evaluating how the performance of your on-premises databases compares to Cloud SQL, Spanner, Firestore, or Bigtable, you could implement a PoC where the same business logic uses different databases. This PoC gives you a low-risk opportunity to identify the right managed database solution for your workload across multiple benchmarks and operating costs. If you want to evaluate the performance of the VM provisioning process in Google Cloud, you can use a third-party tool, such as PerfKit Benchmarker, and compare Google Cloud with other cloud providers. You can measure the end-to-end time to provision resources in the cloud, in addition to reporting on standard metrics of peak performance, including latency, throughput, and time-to-complete. For example, you might be interested in how much time and effort it takes to provision many Kubernetes clusters. PerfKit Benchmarker is an open source community effort involving over 500 participants, such as researchers, academic institutions, and companies, including Google. Calculate total cost of ownership When you have a clear view of the resources you need in the new environment, you can build a total cost of ownership model that lets you compare your costs on Google Cloud with the costs of your current environment. When building this cost model, you should consider not only the costs for hardware and software, but also all the operational costs of running your own data center, such as power, cooling, maintenance, and other support services. Consider that it's also typically easier to reduce costs, thanks to the elastic scalability of Google Cloud resources, compared to a more rigid on-premises data center. A commonly overlooked cost when considering cloud migrations is the use of a cloud network. In a data center, purchasing network infrastructure, such as routers and switches, and then running appropriate network cabling are one-time costs that let you use the entire capacity of the network. In a cloud environment, there are many ways that you might be billed for network utilization. For data-intensive workloads, or those that generate a large amount of network traffic, you might need to consider new architectures and network flows to lower networking costs in the cloud. Google Cloud also provides a wide range of options for intelligent scaling of resources and costs. For example, in Compute Engine you can rightsize VMs during your migration with Migrate for Compute Engine, rightsize them after they're already running, or build autoscaling groups of instances. These options can have a large impact on the costs of running services and should be explored to calculate the total cost of ownership (TCO). To calculate the total cost of Google Cloud resources, you can use the price calculator. Choose the migration strategy for your workloads For each workload to migrate, evaluate and select a migration strategy that best suits its use case. For example, your workloads might have the following conditions: They don't tolerate any downtime or data loss, such as mission-critical workloads.
For these workloads, you can choose zero or near-zero downtime migration strategies. They tolerate downtime, such as secondary or backend workloads. For these workloads, you can choose migration strategies that require a downtime. When you choose migration strategies, consider that zero and near-zero downtime migration strategies are usually more costly and complex to design and implement than migration strategies that require a downtime. Choose your migration tools After you choose a migration strategy for your workloads, review and decide upon the migration tools. There are many migration tools available, each optimized for certain migration use cases. Use cases can include the following: Migration strategy Source and target environments Data and workload size Frequency of changes to data and workloads Availability of managed services for migration To ensure a seamless migration and cut-over, you can use application deployment patterns, infrastructure orchestration, and custom migration applications. However, specialized tools called managed migration services can facilitate the process of moving data, workloads, or even entire infrastructures from one environment to another. These services encapsulate the complex logic of migration and offer migration monitoring capabilities. Define the migration plan and timeline Now that you have an exhaustive view of your current environment, you need to complete your migration plan by: Grouping the workloads and data to migrate in batches (also called sprints in some contexts). Choosing the order in which you want to migrate the batches. Choosing the order in which you want to migrate the workloads inside each batch. As part of your migration plan, we recommend that you also produce the following documents: Technical design document RACI matrix Timeline (such as a T-Minus plan) As you gain experience with Google Cloud, momentum with the migration, and understanding of your environment, you can do the following: Refine the grouping of workloads and data to migrate. Increase the size of migration batches. Update the order in which you migrate batches and workloads inside batches. Update the composition of the batches. To group the workloads and data to migrate in batches, and to define migration ordering, you assess your workloads against several criteria, such as the following: Business value of the workload. Whether the workload is deployed or run in a unique way compared to the rest of your infrastructure. Teams responsible for development, deployment, and operations of the workload. Number, type, and scope of dependencies of the workload. Refactoring effort to make the workload work in the new environment. Compliance and licensing requirements of the workload. Availability and reliability requirements of the workload. The workloads you migrate first are the ones that let your teams build their knowledge and experience on Google Cloud. Greater cloud exposure and experience from your team can lower the risk of complications during the migration phase, and make subsequent migrations easier and quicker. For this reason, choosing the right first-movers is crucial for a successful migration. Business value Choosing a workload that isn't business critical protects your main line of business, and decreases the impact on the business from undiscovered risks and mistakes while your team is learning cloud technologies.
For example, if you choose as a first-mover the component where the main financial transaction logic of your ecommerce workload is implemented, any mistake during the migration might affect your main line of business. A better choice is the SQL database supporting your workloads, or better yet, the staging database. You should avoid rarely used workloads. For example, if you choose a workload that's used only a few times per year by a low number of users, although it's a low-risk migration, it doesn't increase the momentum of your migration, and it can be hard to detect and respond to problems. Edge cases You should also avoid edge cases, so you can discover patterns that you can apply to other workloads to migrate. A primary goal when selecting a first-mover is to gain experience with common patterns in your organization so you can build a knowledge base. You can apply what you learned with these first-movers when migrating future workloads later. For example, if most of your workloads are designed following a test-driven development methodology and are developed using the Python programming language, choosing a workload with little test coverage that's developed using the Java programming language doesn't let you discover any pattern that you can apply when migrating the Python workloads. Teams When choosing your first-movers, pay attention to the teams responsible for each workload. The team responsible for a first-mover should be highly motivated, and eager to try Google Cloud and its services. Moreover, business leadership should have clear goals for the first-mover teams and actively work to sponsor and support them through the process. For example, a high-performing team that sits in the main office with a proven history of implementing modern development practices such as DevOps and disciplines such as site reliability engineering can be a good candidate. If they also have top-down leadership sponsors and clear goals around each workload's migration, they can be a superb candidate. Dependencies Also, you should focus on workloads that have the fewest dependencies, either on other workloads or on services. The migration of a workload with no dependencies is easier when you have limited experience with Google Cloud. If you have to choose workloads that have dependencies on other components, pick the ones that are loosely coupled to their dependencies. If a workload is already designed for the eventual unavailability of its dependencies, it can reduce the friction when migrating the workload to the target environment. For example, loosely coupled candidates are workloads that communicate by using a message broker, or that work offline, or are designed to tolerate the unavailability of the rest of the infrastructure. Although there are strategies to migrate data of stateful workloads, a stateless workload rarely requires any data migration. Migrating a stateless workload can be easier because you don't need to worry about a transitory phase where data is partially in your current environment and partially in your target environment. For example, stateless microservices are good first-mover candidates, because they don't rely on any local stateful data. Refactoring effort A first-mover should require a minimal amount of refactoring, so you can focus on the migration itself and on Google Cloud, instead of spending a large effort on changes to the code and configuration of your workloads.
The refactoring should focus on the necessary changes that allow your workloads to run in the target environment instead of focusing on modernizing and optimizing your workloads, which is tackled in later migration phases. For example, a workload that requires only configuration changes is a good first-mover, because you don't have to implement any changes to the codebase, and you can use the existing artifacts. Licensing and compliance Licenses also play a role in choosing the first-movers, because some of your workloads might be licensed under terms that affect your migration. For example, some licenses explicitly forbid running workloads in a cloud environment. When examining the licensing terms, don't forget the compliance requirements because you might have sole tenancy requirements for some of your workloads. For these reasons, you should choose workloads that have the fewest licensing and compliance restrictions as first-movers. For example, your customers might have the legal right to choose in which region you store their data, or your customers' data might be restricted to a particular region. Availability and reliability Good first-movers are the ones that can afford the downtime caused by a cutover window. If you choose a workload that has strict availability requirements, you have to implement a zero-downtime data migration strategy, such as Y (writing and reading), or develop a data-access microservice. While this approach is possible, it distracts your teams from gaining the necessary experience with Google Cloud, because they have to spend time implementing such strategies. For example, the availability requirements of a batch processing engine can tolerate a longer downtime than the customer-facing workload of your ecommerce site where your users finalize their transactions. Validate your migration plan Before taking action to start your migration plan, we recommend that you validate its feasibility. For more information, see Best practices for validating a migration plan. What's next Learn how to plan your migration and build your foundation on Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Author: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Assess_existing_user_accounts.txt b/Assess_existing_user_accounts.txt new file mode 100644 index 0000000000000000000000000000000000000000..c094108369c4433df749ae6cc7c1b4569b84607f --- /dev/null +++ b/Assess_existing_user_accounts.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/assessing-existing-user-accounts +Date Scraped: 2025-02-23T11:55:21.194Z + +Content: +Home Docs Cloud Architecture Center Send feedback Assess existing user accounts Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC Google supports two types of user accounts, managed user accounts and consumer user accounts. Managed user accounts are under the full control of a Cloud Identity or Google Workspace administrator. In contrast, consumer accounts are fully owned and managed by the people who created them. A core tenet of identity management is to have a single place to manage identities across your organization: If you use Google as your identity provider (IdP), then Cloud Identity or Google Workspace should be the single place to manage identities.
Employees should rely exclusively on user accounts that you manage in Cloud Identity or Google Workspace. If you use an external IdP, then that provider should be the single place to manage identities. The external IdP needs to provision and manage user accounts in Cloud Identity or Google Workspace, and employees should rely exclusively on these managed user accounts when they use Google services. If employees use consumer user accounts, then the premise of having a single place to manage identities is compromised: consumer accounts aren't managed by Cloud Identity, Google Workspace, or your external IdP. Therefore, you must identify the consumer user accounts that you want to convert to managed accounts, as explained in the authentication overview. To convert consumer accounts to managed accounts using the transfer tool, described later in this document, you must have a Cloud Identity or Google Workspace identity with a Super Admin role. This document helps you to understand and assess the following: Which existing user accounts your organization's employees might be using, and how to identify those accounts. Which risks might be associated with these existing user accounts. Example scenario To illustrate the different sets of user accounts that employees might be using, this document uses an example scenario for a company named Example Organization. Example Organization has six employees and former employees who have all been using Google services such as Google Docs and Google Ads. Example Organization now intends to consolidate its identity management and establish its external IdP as the single place to manage identities. Each employee has an identity in the external IdP, and that identity matches the employee's email address. There are two consumer user accounts, belonging to Carol and Chuck, that use an example.com email address: Carol created a consumer account using her corporate email address (carol@example.com). Chuck, a former employee, created a consumer account using his corporate email address (chuck@example.com). Two employees, Glen and Grace, decided to use Gmail accounts: Glen signed up for a Gmail account (glen@gmail.com), which he uses to access private and corporate documents and other Google services. Grace also uses a Gmail account (grace@gmail.com), but she added her corporate email address, grace@example.com, as an alternate email address. Finally, two employees, Mary and Mike, are already using Cloud Identity: Mary has a Cloud Identity user account (mary@example.com). Mike is the administrator of the Cloud Identity account and created a user (admin@example.com) for himself. The following diagram illustrates the different sets of user accounts: To establish the external IdP as the single place to manage identities, you must link the identities of the existing Google user accounts to the identities in the external IdP. The following diagram therefore adds an account set that depicts the identities in the external IdP. Recall that to establish an external IdP as the single place to manage identities, employees must rely exclusively on managed user accounts, and the external IdP must control those user accounts. In this scenario, only Mary meets these requirements. She uses a Cloud Identity user account, which is a managed user account, and her user account's identity matches her identity in the external IdP. All other employees either use consumer accounts, or the identity of their accounts doesn't match their identity in the external IdP.
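Conceptually, the assessment that follows is a comparison between sets of identities. The following is a minimal sketch that is loosely based on the example scenario; the identity lists are illustrative assumptions, not data from this document:

# Minimal sketch: compare identities in the external IdP with the identities used by
# existing Google user accounts (all addresses are hypothetical).
idp_identities = {
    "carol@example.com", "glen@example.com", "grace@example.com",
    "mary@example.com", "mike@example.com",
}
consumer_account_identities = {"carol@example.com", "chuck@example.com"}
managed_account_identities = {"mary@example.com", "admin@example.com"}

# Consumer accounts with a matching identity in the IdP (candidates for migration).
print(consumer_account_identities & idp_identities)   # {'carol@example.com'}
# Consumer accounts without a matching identity in the IdP (reconcile or evict).
print(consumer_account_identities - idp_identities)   # {'chuck@example.com'}
# Managed accounts without a matching identity in the IdP (reconcile).
print(managed_account_identities - idp_identities)    # {'admin@example.com'}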
The risks and implications of not meeting the requirements are different for each of these users. Each user represents a different set of user accounts that might require further investigation. User account sets to investigate The following sections examine potentially problematic sets of user accounts. Consumer accounts This set of user accounts consists of accounts for which both of the following are true: They were created by employees using the Sign up feature offered by many Google services. They use a corporate email address as their identity. In the example scenario, this description fits Carol and Chuck. A consumer account that's used for business purposes and that uses a corporate email address can pose risks to your business, such as the following: You cannot control the lifecycle of the consumer account. An employee who leaves the company might continue to use the user account to access corporate resources or to generate corporate expenses. Even if you revoke access to all resources, the account might still pose a social engineering risk. Because the user account uses a seemingly trustworthy identity like chuck@example.com, the former employee might be able to convince current employees or business partners to grant access to resources again. Similarly, a former employee might use the user account to perform activities that aren't in line with your organization's policies, which could put your company's reputation at risk. You cannot enforce security policies like MFA verification or password complexity rules on the account. You cannot restrict which geographic location Google Docs and Google Drive data is stored in, which might be a compliance risk. You cannot restrict which Google services can be accessed by using this user account. If Example Organization decides to use Google as its IdP, then the best way for it to deal with consumer accounts is either to migrate them to Cloud Identity or Google Workspace or to evict them by forcing the owners to rename the user accounts. If Example Organization decides to use an external IdP, it needs to further distinguish between the following: Consumer accounts that have a matching identity in the external IdP. Consumer accounts that don't have a matching identity in the external IdP. The following two sections look at these two subclasses in detail. Consumer accounts with a matching identity in the external IdP This set of user accounts consists of accounts that match all of the following: They were created by employees. They use a corporate email address as the primary email address. Their identity matches an identity in the external IdP. In the example scenario, this description fits Carol. The fact that these consumer accounts have a matching identity in your external IdP suggests that these user accounts belong to current employees and should be retained. You should therefore consider migrating these accounts to Cloud Identity or Google Workspace. You can identify consumer accounts that have a matching identity in the external IdP as follows: Add all domains to Cloud Identity or Google Workspace that you suspect might have been used for consumer account signups. In particular, the list of domains in Cloud Identity or Google Workspace should include all domains that your email system supports. Use the transfer tool for unmanaged users to identify consumer accounts that use an email address that matches one of the domains you've added to Cloud Identity or Google Workspace.
The tool also lets you export the list of affected users as a CSV file. Compare the list of consumer accounts with the identities in your external IdP, and find consumer accounts that have a counterpart. Consumer accounts without a matching identity in the external IdP This set of user accounts consists of accounts that match all of the following: They were created by employees. They use a corporate email address as their identity. Their identity does not match any identity in the external IdP. In the example scenario, this description fits Chuck. There can be several causes for consumer accounts without a matching identity in the external IdP, including the following: The employee who created the account might have left the company, so the corresponding identity no longer exists in the external IdP. There might be a mismatch between the email address used for the consumer account sign-up and the identity known in the external IdP. Mismatches like these can occur if your email system allows variations in email addresses such as the following: Using alternate domains. For example, johndoe@example.org and johndoe@example.com might be aliases for the same mailbox, but the user might only be known as johndoe@example.com in your IdP. Using alternate handles. For example, johndoe@example.com and john.doe@example.com might also refer to the same mailbox, but your IdP might recognize only one spelling. Using different casing. For example, the variants johndoe@example.com and JohnDoe@example.com might not be recognized as the same user. You can handle consumer accounts that don't have a matching identity in the external IdP in the following ways: You can migrate the consumer account to Cloud Identity or Google Workspace and then reconcile any mismatches caused by alternate domains, handles, or casing. If you think the user account is illegitimate or shouldn't be used anymore, you can evict the consumer account by forcing the owner to rename it. You can identify consumer accounts without a matching identity in the external IdP as follows: Add all domains to Cloud Identity or Google Workspace that you suspect might have been used for consumer account signups. In particular, the list of domains in Cloud Identity or Google Workspace should include all domains that your email system supports as aliases. Use the transfer tool for unmanaged users to identify consumer accounts that use an email address that matches one of the domains you've added to Cloud Identity or Google Workspace. The tool also lets you export the list of affected users as a CSV file. Compare the list of consumer accounts with the identities in your external IdP and find consumer accounts that lack a counterpart. Managed accounts without a matching identity in the external IdP This set of user accounts consists of accounts that match all of the following: They were manually created by a Cloud Identity or Google Workspace administrator. Their identity doesn't match any identity in the external IdP. In the example scenario, this description fits Mike, who used the identity admin@example.com for his managed account. The potential causes for managed accounts without a matching identity in the external IdP are similar to those for consumer accounts without a matching identity in the external IdP: The employee for whom the account was created might have left the company, so the corresponding identity no longer exists in the external IdP.
The corporate email address that matches the identity in the external IdP might have been set as an alternate email address or alias rather than as the primary email address. The email address that's used for the user account in Cloud Identity or Google Workspace might not match the identity known in the external IdP. Neither Cloud Identity nor Google Workspace verifies that the email address used as the identity exists. A mismatch can therefore not only occur because of alternate domains, alternate handles, or different casing, but also because of a typo or other human error. Regardless of their cause, managed accounts without a matching identity in the external IdP are a risk because they can become subject to inadvertent reuse and name squatting. We recommend that you reconcile these accounts. You can identify managed accounts without a matching identity in the external IdP as follows: Using the Admin Console or the Directory API, export the list of user accounts in Cloud Identity or Google Workspace. Compare the list of accounts with the identities in your external IdP and find accounts that lack a counterpart. Gmail accounts used for corporate purposes This set of user accounts consists of accounts that match all of the following: They were created by employees. They use a gmail.com email address as their identity. Their identities don't match any identity in the external IdP. In the example scenario, this description fits Grace and Glen. Gmail accounts that are used for corporate purposes are subject to similar risks as consumer accounts without a matching identity in the external IdP: You cannot control the lifecycle of the consumer account. An employee who leaves the company might continue to use the user account to access corporate resources or to generate corporate expenses. You cannot enforce security policies like MFA verification or password complexity rules on the account. The best way to deal with Gmail accounts is therefore to revoke access for those user accounts to all corporate resources and provide affected employees with new managed user accounts as replacements. Because Gmail accounts use gmail.com as their domain, there is no clear affiliation with your organization. The lack of a clear affiliation implies that there is no systematic way—other than scrubbing existing access control policies—to identify Gmail accounts that have been used for corporate purposes. Gmail accounts with a corporate email address as alternate email This set of user accounts consists of accounts that match all of the following: They were created by employees. They use a gmail.com email address as their identity. They use a corporate email address as an alternate email address. Their identities don't match any identity in the external IdP. In the example scenario, this description fits Grace. From a risk perspective, Gmail accounts that use a corporate email address as an alternate email address are equivalent to consumer accounts without a matching identity in the external IdP. Because these accounts use a seemingly trustworthy corporate email address as their second identity, they are subject to the risk of social engineering. If you want to maintain the access rights and some of the data associated with the Gmail account, you can ask the owner to remove Gmail from the user account so that you can then migrate it to Cloud Identity or Google Workspace. The best way to handle Gmail accounts that use a corporate email address as an alternate email address is to sanitize them.
When you sanitize an account, you force the owner to give up the corporate email address by creating a managed user account with that same corporate email address. Additionally, we recommend that you revoke access to all corporate resources and provide the affected employees with the new managed user accounts as replacements. What's next Learn more about the different types of user accounts on Google Cloud. Find out how the migration process for consumer accounts works. Review best practices for federating Google Cloud with an external identity provider. Send feedback \ No newline at end of file diff --git a/Assess_onboarding_plans.txt b/Assess_onboarding_plans.txt new file mode 100644 index 0000000000000000000000000000000000000000..9552f115a9a8ccd0bf8ae89ce545b78c1a9f2f27 --- /dev/null +++ b/Assess_onboarding_plans.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/assessing-onboarding-plans +Date Scraped: 2025-02-23T11:55:23.370Z + +Content: +Home Docs Cloud Architecture Center Send feedback Assess onboarding plans Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC Cloud Identity and Google Workspace let you manage corporate identities and control access to Google services. To take advantage of the features that Cloud Identity and Google Workspace provide, you first have to onboard existing and new identities to Cloud Identity or Google Workspace. Onboarding involves the following steps: Prepare your Cloud Identity or Google Workspace accounts. If you've decided to use an external identity provider (IdP), set up federation. Create user accounts for your corporate identities. Consolidate existing user accounts. This document helps you assess the best order in which to approach these steps. Select an onboarding plan When you select an onboarding plan, consider the following critical decisions: Select a target architecture. Most importantly, you have to decide whether you want to make Google your primary IdP or whether you prefer to use an external IdP. If you have not yet decided, see the reference architectures overview to learn more about possible options. Decide whether to migrate existing consumer accounts. If you haven't been using Cloud Identity or Google Workspace, it's possible that your organization's employees might be using consumer accounts to access Google services. If you want to keep these user accounts and their data, you must migrate them to Cloud Identity or Google Workspace. For details on consumer accounts, how to identify them, and what risks they might pose to your organization, see Assessing existing user accounts. If you've decided to use an external IdP and to migrate existing consumer accounts, then you have a third decision to make—deciding whether to set up federation first or migrate existing user accounts first. Take the following factors into account: Migrating consumer accounts requires the owner's consent. The more user accounts you have to migrate, the longer it might take to get the consent of all affected account owners. If you need to migrate 100 or more consumer accounts, consider setting up federation before you migrate the existing consumer accounts. By setting up federation first, you ensure that all new identities and each migrated user account can immediately benefit from single sign-on, two-step verification, and other security features offered by Cloud Identity and Google Workspace.
Setting up federation therefore helps you to quickly improve your overall security posture. However, setting up federation first requires you to configure your identity provider in a way that still allows existing user accounts to be migrated. This configuration can increase the complexity of your overall setup. If you need to migrate fewer than 100 consumer accounts, you can expect the process of migrating these user accounts to be reasonably quick. In this case, consider migrating existing user accounts before setting up federation. By completing the user account migration first, you can avoid the extra complexity of having to configure your identity provider in a way that still allows existing user accounts to be migrated. However, delaying the federation setup might slow down the process of improving your overall security posture. The following diagram summarizes how to select the best onboarding plan. This diagram shows the following decision paths to select an onboarding plan: If you're using Google as an IdP, select plan 1. If you aren't using Google as an IdP, and you don't want to migrate existing accounts, select plan 2. Select plan 3 in the following scenario: You aren't using Google as an IdP. You want to migrate existing accounts. You want to set up federation first. Select plan 4 in the following scenario: You aren't using Google as an IdP. You want to migrate existing accounts. You don't want to set up federation first. Onboarding plans This section outlines a set of onboarding plans that correspond to the scenarios discussed in the previous section. Plan 1: No federation Consider using this plan if all of the following are true: You want to use Google as your primary IdP. You might need to migrate existing user accounts to Cloud Identity or Google Workspace. The following diagram illustrates the process and steps that this plan involves. Set up the required Cloud Identity or Google Workspace accounts. To determine the right number of Cloud Identity or Google Workspace accounts to use, see Best practices for planning accounts and organizations. For details on how to create the accounts and which stakeholders might need to get involved, see Prepare your Cloud Identity or Google Workspace accounts. If some of the identities you want to onboard have existing consumer accounts, don't create user accounts in Cloud Identity or Google Workspace for these identities because doing so would result in a conflicting account. To minimize the risk of inadvertently creating conflicting accounts, start by creating user accounts for only a small, initial set of identities. We recommend that you use the Admin Console to create these accounts instead of using the API or batch upload, because the Admin Console warns you about the impending creation of a conflicting account. Start the process of consolidating your existing user accounts. For details on how to accomplish this and which stakeholders might need to get involved, see Consolidating existing user accounts. Note: You can perform steps 2 and 3 in any order or in parallel. Finally, create user accounts for all remaining identities that you need to onboard. You can create accounts manually using the Admin Console, or if you're onboarding a large number of identities, consider the following alternatives: Create users in batches by using a CSV file. Automate user and group creation by using open source tools such as Google Apps Manager (GAM). Use the Directory API.
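As one illustration of the last option, the following is a minimal sketch that creates users from a CSV file by using the Admin SDK Directory API Python client. The file layout, key file, and admin address are assumptions for the example, and a real script would also need error handling and a safer way to handle initial passwords:

# Minimal sketch: create users in batches from a CSV file by using the Directory API.
# Assumes a CSV with the columns primaryEmail,givenName,familyName,password, and a
# service account key with domain-wide delegation for the admin.directory.user scope.
import csv

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES
).with_subject("admin@example.com")  # a super admin in your Cloud Identity account
directory = build("admin", "directory_v1", credentials=creds)

with open("users.csv", newline="") as f:
    for row in csv.DictReader(f):
        user = {
            "primaryEmail": row["primaryEmail"],
            "name": {"givenName": row["givenName"], "familyName": row["familyName"]},
            "password": row["password"],
        }
        directory.users().insert(body=user).execute()
        print("Created", row["primaryEmail"])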
Plan 2: Federation without user account consolidation Consider using this plan if all of the following are true: You want to use an external IdP. You don't need to migrate any existing user accounts. The following diagram illustrates the process and steps that this plan involves. Set up the required Cloud Identity or Google Workspace accounts. To determine the right number of Cloud Identity or Google Workspace accounts to use, see Best practices for planning accounts and organizations. For details on how to create the accounts and which stakeholders might need to get involved in this process, see Prepare your Cloud Identity or Google Workspace accounts. Set up federation with your external IdP. Typically, this means configuring automatic user account provisioning and setting up single sign-on. When you configure federation, take into account the recommendations in Best practices for federating Google Cloud with an external identity provider. Use your external IdP to create user accounts in Cloud Identity or Google Workspace for all identities that you need to onboard. Ensure that the identities in Cloud Identity or Google Workspace are a subset of the identities in your external IdP. For details, see Reconciling orphaned managed user accounts. Plan 3: Federation with user account consolidation Consider using this plan if all of the following are true: You want to use an external IdP. You need to migrate existing user accounts to Cloud Identity or Google Workspace, but want to set up federation first. This plan lets you start using single sign-on quickly. Any new user accounts that you create in Cloud Identity or Google Workspace are immediately able to use single sign-on, as are existing user accounts after you've migrated them. This integration with an external IdP lets you minimize user account administration—your IdP can handle both identity onboarding and offboarding. Compared to the delayed federation plan explained in the next section, this plan increases your risk of conflicting accounts or locked-out users. This plan therefore requires careful attention when you set up federation. The following diagram illustrates the process and steps that this plan involves. Set up the required Cloud Identity or Google Workspace accounts. To determine the right number of Cloud Identity or Google Workspace accounts to use, see Best practices for planning accounts and organizations. For details on how to create the accounts and which stakeholders might need to get involved in this process, see Prepare your Cloud Identity or Google Workspace accounts. Set up federation with your external IdP. Typically, this means configuring automatic user account provisioning and setting up single sign-on. Because some of the identities you want to onboard have existing consumer accounts that you still need to migrate, make sure that you prevent your external IdP from interfering with your ability to consolidate existing consumer accounts. For details on how you can configure your external IdP in a way that is safe for account consolidation, see Assessing user account consolidation impact on federation. When you configure federation, take into account the recommendations in Best practices for federating Google Cloud with an external identity provider. Use your external IdP to create user accounts in Cloud Identity or Google Workspace for the initial set of identities that you need to onboard. Be careful to create user accounts only for identities that don't have an existing user account.
Start the process of consolidating your existing user accounts. For details on how to accomplish this and which stakeholders might need to get involved, see Consolidating existing user accounts. Note: You can perform steps 3 and 4 in any order or in parallel. After you've consolidated the existing user accounts, remove any special configuration that you applied to make your federation setup safe for account consolidation. Because all existing accounts are migrated at this point, this special configuration is no longer required. Use your external IdP to create user accounts in Cloud Identity or Google Workspace for all remaining identities that you need to onboard. Plan 4: Delayed federation Consider using this plan if all of the following are true: You want to use an external IdP. You need to migrate existing user accounts to Cloud Identity or Google Workspace before setting up federation. This plan is effectively a combination of no federation and federation without user account consolidation, as discussed earlier. A key benefit of this plan over federation with user account consolidation is the lower risk of conflicting accounts or locked-out users. However, because your plan is to eventually use an external IdP for authentication, the approach has the following downsides: You cannot enable single sign-on before all relevant users have been migrated. Depending on the number of unmanaged accounts you're dealing with and how quickly users react to your account transfer requests, this migration might take days or weeks. During the migration, you have to create new user accounts in Cloud Identity or Google Workspace in addition to creating accounts in your external IdP. Similarly, for employees who leave, you must disable or delete their user accounts in Cloud Identity or Google Workspace, and in the external IdP. This redundant administration increases overall effort and can introduce inconsistencies. The following diagram illustrates the process and steps that this plan involves. Set up the required Cloud Identity or Google Workspace accounts. To determine the right number of Cloud Identity or Google Workspace accounts to use, see Best practices for planning accounts and organizations. For details on how to create the accounts and which stakeholders might need to get involved, see Prepare your Cloud Identity or Google Workspace accounts. If some of the identities you want to onboard have existing consumer accounts, don't create user accounts in Cloud Identity or Google Workspace for these identities because doing so would result in conflicting accounts. Start by creating user accounts for only a small, initial set of identities. We recommend that you use the Admin Console to create these accounts instead of using the API or batch upload because the Admin Console will warn you about an impending creation of a conflicting account. Start the process of consolidating your existing user accounts. For details on how to accomplish this and which stakeholders might need to get involved, see Consolidating existing user accounts. Note: You can perform steps 2 and 3 in any order or in parallel. Set up federation with your external IdP. Typically, this means configuring automatic user account provisioning and setting up single sign-on. When you configure federation, take into account the recommendations in Best practices for federating Google Cloud with an external identity provider.
Because all existing accounts are already migrated at this point, you don't need to apply any special configuration to make federation safe for account consolidation. Use your external IdP to create user accounts in Cloud Identity or Google Workspace for all identities that you need to onboard. What's next If you decided to use federation with user account consolidation, proceed by assessing user account consolidation impact on federation. Start your onboarding process by preparing Cloud Identity or Google Workspace accounts. Send feedback \ No newline at end of file diff --git a/Assess_reliability_requirements.txt b/Assess_reliability_requirements.txt new file mode 100644 index 0000000000000000000000000000000000000000..c7fb278ab84b2c6e54b271e4685e7447efb58160 --- /dev/null +++ b/Assess_reliability_requirements.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/infra-reliability-guide/requirements +Date Scraped: 2025-02-23T11:54:09.213Z + +Content: +Home Docs Cloud Architecture Center Send feedback Assess the reliability requirements for your cloud workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC The first step toward building reliable infrastructure for your cloud workloads is to identify the reliability requirements of the workloads. This part of the Google Cloud infrastructure reliability guide provides guidelines to help you define the reliability requirements of workloads that you deploy in Google Cloud. Determine workload-specific requirements The reliability requirements of an application depend on the nature of the service that the application provides or the process that it performs. For example, an application that provides ATM services for a bank might need 5-nines availability. A website that supports an online trading platform might need 5-nines availability and a fast response time. A batch process that writes banking transactions to an accounting ledger at the end of every day might have a data-freshness target of eight hours. Within an application, the individual components or operations might have varying reliability requirements. For example, an order-processing application might need higher reliability for operations that write data to the orders database when compared with read requests. Assessing the reliability requirements of your workloads granularly helps you focus your spending and effort on the workloads that are critical for your business. Identify critical periods There might be periods when an application is more business-critical than at other times. These periods are often the times when the application has peak load. Identify these periods, plan adequate capacity, and test the application against peak-load conditions. To avoid the risk of application outages during peak-load periods, you can use appropriate operational practices like freezing the production code. The following are examples of applications that experience seasonal spikes in load: The inventory module of a financial accounting application is typically used more heavily on the days when the monthly, quarterly, or annual inventory audits are scheduled. An ecommerce website would have significant spikes in load during peak shopping seasons or promotional events. A database that supports the student admissions module of a university would have a high volume of write operations during certain months of every year. An online tax-filing service would have a high load during the tax-filing season. 
An online trading platform might need 5-nines availability and a fast response time, but only during trading hours (for example, 8 AM to 5 PM from Monday to Friday). Consider other non-functional requirements Besides reliability requirements, enterprise applications can have other important non-functional requirements for security, performance, cost, and operational efficiency. When you assess the reliability requirements of an application, consider the dependencies and trade-offs with these other requirements. The following are examples of requirements that aren't reliability requirements themselves, but that can involve trade-offs with reliability. Cost optimization: To optimize IT cost, your organization might impose quotas for certain cloud resources. For example, to reduce the cost of third-party software licenses, your organization might set quotas for the number of compute cores that can be provisioned. Similar quotas can exist for the amount of data that can be stored and the volume of cross-region network traffic. Consider the effects of these cost constraints on the options available for designing reliable infrastructure. Data residency: To meet regulatory requirements, your application might need to store and process data in specific countries, even if the business serves users globally. Consider such data residency constraints when deciding the regions and zones where your applications can be deployed. Certain design decisions that you make to meet other requirements can help improve the reliability of your applications. The following are some examples: Deployment automation: To operate your cloud deployments efficiently, you might decide to automate the provisioning flow by using infrastructure as code (IaC). Similarly, you might automate the application build and deployment process by using a continuous integration and continuous deployment (CI/CD) pipeline. Using IaC and CI/CD pipelines can help improve not just operational efficiency, but also the reliability of your workloads. Security controls: Security controls that you implement can also help improve the availability of the application. For example, Google Cloud Armor security policies can help ensure that the application remains available during denial of service (DoS) attacks. Content caching: To improve the performance of a content-serving application, you might enable caching as part of your load balancer configuration. With this design, users experience not only faster access to content but also higher availability. They can access cached content even when the origin servers are down. Reassess requirements periodically As your business evolves and grows, the requirements of your applications might change. Reassess your reliability requirements periodically, and make sure that they align with the current business goals and priorities of your organization. Consider an application that provides a standard level of availability for all users. You might have deployed the application in two zones within a region, with a regional load balancer as the frontend. If your organization plans to launch a premium service option that provides higher availability, then the reliability requirements of the application have changed. To meet the new availability requirements, you might need to deploy the application to multiple regions and use a global load balancer with Cloud CDN enabled. Another opportunity to reassess the availability requirements of your applications is after an outage occurs.
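One way to make such requirements concrete is to convert an availability target into a downtime budget for a given window. The following is a minimal sketch of that arithmetic, which underlies the downtime figures discussed next:

# Minimal sketch: convert an availability target into a downtime budget.
def downtime_budget_minutes(availability: float, window_days: float) -> float:
    """Return the allowed downtime, in minutes, for the given window."""
    return (1 - availability) * window_days * 24 * 60

print(f"{downtime_budget_minutes(0.9999, 365):.1f} minutes per year")  # ~52.6
print(f"{downtime_budget_minutes(0.9999, 30):.1f} minutes per month")  # ~4.3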
Outages might expose mismatched expectations across different teams within your business. For example, one team might consider a 45-minute outage once a year (that is, 99.99% annual availability) as acceptable. But another team might expect a maximum downtime of 4.3 minutes per month (that is, 99.99% monthly availability). Depending on how you decide to modify or clarify the availability requirements, you should adjust your architecture to meet the new requirements. Previous arrow_back Building blocks of reliability Next Design reliable infrastructure arrow_forward Send feedback \ No newline at end of file diff --git a/Assess_the_impact_of_user_account_consolidation_on_federation.txt b/Assess_the_impact_of_user_account_consolidation_on_federation.txt new file mode 100644 index 0000000000000000000000000000000000000000..1e8947dddcc16fd5c4c15218f0264d7d4686c15b --- /dev/null +++ b/Assess_the_impact_of_user_account_consolidation_on_federation.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/assessing-consolidation-impact-on-federation +Date Scraped: 2025-02-23T11:55:25.795Z + +Content: +Home Docs Cloud Architecture Center Send feedback Assess the impact of user account consolidation on federation Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC If you plan to federate Cloud Identity or Google Workspace with an external identity provider (IdP) but still need to consolidate existing consumer accounts, this document helps you understand and assess the interplay between federation and consolidation. This document also shows you how to configure federation in a way that doesn't interfere with your ability to consolidate existing consumer accounts. Note: This document applies only if you decided to follow Plan 3: Federation with user account consolidation (or a variation thereof). Interplay between federation and user account consolidation In a federated setup, you connect Cloud Identity or Google Workspace to an external authoritative source so that the authoritative source can automatically provision user accounts in Cloud Identity or Google Workspace. These invariants typically hold for a federated setup: The authoritative source is the only source for identities. There are no user accounts in Cloud Identity or Google Workspace other than the ones provisioned by the authoritative source. The SAML identity provider does not allow Google single sign-on for any identities other than the ones for which the authoritative source has provisioned user accounts. Although these invariants reflect the best practices for federating Google Cloud with an external identity provider, they cause problems when you want to migrate existing consumer accounts: Existing consumer accounts don't originate from the authoritative source. These accounts already exist, and they now need to be linked to an identity known by the authoritative source. Existing consumer accounts, once they are migrated to Cloud Identity or Google Workspace, are user accounts that have not been provisioned by the authoritative source. The authoritative source must recognize and "adopt" these migrated accounts. The identities of existing consumer accounts might be unknown to the SAML identity provider, yet they still need to be allowed to use single sign-on. To allow existing consumer accounts to be consolidated, you have to temporarily set up federation in a way that is safe for account consolidation. 
Make federation safe for account consolidation The following table lists the requirements to consider in order to make federation safe for account consolidation. If you plan to use an external IdP but still need to consolidate existing consumer accounts, then you have to make sure that your setup initially meets these requirements. After you have completed the migration of existing consumer accounts, you are free to change the configuration because these requirements no longer apply. Requirement Justification Allow single sign-on for identities with consumer accounts Migrating a consumer account requires an account transfer. A Cloud Identity or Google Workspace administrator initiates the account transfer, but in order to complete the transfer, the owner of the consumer account must consent to the transfer. As an administrator, you have limited control over when the consent will be expressed and thus when the transfer is conducted. Once the owner expresses consent and the transfer is complete, all subsequent sign-ons are subject to single sign-on using your external IdP. For single sign-on to succeed, regardless of when the transfer is complete, ensure that your external IdP allows single sign-ons for the identities of all consumer accounts that you plan to migrate. Prevent automatic user provisioning for identities with consumer accounts If you provision a user account for an identity that already has a consumer account, you create a conflicting account. A conflicting account blocks you from transferring ownership of the consumer account, its configuration, and any associated data to Cloud Identity or Google Workspace. The default behavior of many external IdPs is to proactively create user accounts in Cloud Identity or Google Workspace. This behavior can inadvertently cause conflicting accounts to be created. By preventing automatic user provisioning for identities with existing consumer accounts, you avoid inadvertently creating conflicting accounts and ensure that consumer accounts can be transferred correctly. If you have identified consumer accounts without a matching identity in the external IdP that you consider legitimate and that you want to migrate to Cloud Identity or Google Workspace, then you have to make sure that your federation configuration does not interfere with your ability to migrate these consumer accounts. Requirement Justification Prevent deletion of migrated accounts without a matching identity in the external IdP If you have a user account in Cloud Identity or Google Workspace that does not have a matching identity in your external IdP, then your IdP might consider this user account orphaned and might suspend or delete it. By preventing your external IdP from suspending or deleting migrated accounts that don't have a matching identity in the external IdP, you avoid losing the configuration and data associated with the affected accounts and ensure that you can manually reconcile these accounts. Make Microsoft Entra ID (formerly Azure AD) federation safe for account consolidation If you plan to federate Cloud Identity or Google Workspace with Microsoft Entra ID (formerly Azure AD), you can use the Google Workspace gallery app. Note: The Google Workspace gallery app from the Microsoft Azure marketplace is a Microsoft product and is neither maintained nor supported by Google.
When you enable provisioning, Microsoft Entra ID ignores existing accounts in Cloud Identity or Google Workspace that don't have a counterpart in Microsoft Entra ID, so the requirement to prevent deletion of migrated accounts without a matching identity in the external IdP is always met. Depending on how you configure the gallery app, you must still ensure that you do the following: Allow single sign-on for identities with consumer accounts. Prevent automatic user provisioning for identities with consumer accounts. There are multiple ways to meet these requirements. Each approach has advantages and disadvantages. Approach 1: Don't configure provisioning In this approach, you configure the gallery app to handle single sign-on, but you don't configure automatic user provisioning. By not configuring user provisioning, you prevent automatic user provisioning for identities with consumer accounts. To allow single sign-on for identities with consumer accounts, assign the app to all identities that might eventually need access to Google services, even if their existing consumer accounts are still subject to being migrated. For a user who has an existing consumer account, the corresponding Cloud Identity or Google Workspace user account is created automatically when the transfer request is accepted. That user can then immediately use single sign-on. For users who don't have a user account in Cloud Identity or Google Workspace, you have to create one manually. Although this approach meets the requirements and is the least complex to set up, it comes with the limitation that any attribute changes or user suspensions performed in Microsoft Entra ID won't be propagated to Cloud Identity or Google Workspace. Approach 2: Two apps with manual assignment In this approach, you overcome the limitation of having to manually create user accounts in Google Workspace or Cloud Identity for users that don't have an existing account. The idea is to use two gallery apps, one for provisioning and one for single sign-on: The first app is used exclusively for provisioning users and groups and has single sign-on disabled. By assigning users to this app, you control which accounts are being provisioned to Cloud Identity or Google Workspace. The second app is used exclusively for single sign-on and is not authorized to provision users. By assigning users to this app, you control which users are allowed to sign on. Using these two apps, assign users as follows: Assign all identities that eventually need access to Google services to the single sign-on app. Include identities with existing consumer accounts so that you allow single sign-on for identities with consumer accounts. When assigning identities to the provisioning app, include the identities that eventually need access to Google services, but exclude all identities that are known to have an existing consumer account. This way, you prevent automatic user provisioning for identities with consumer accounts. Important: However, any mistake in assignment can lead immediately to a conflicting account being created, making this approach more risky than other approaches. Approach 3: Two apps with user creation disabled When configuring provisioning, you need to authorize Microsoft Entra ID to access Cloud Identity or Google Workspace by using a Cloud Identity or Google Workspace account. 
Normally, it's best to use a dedicated super-admin account for this purpose, because super-admin accounts are exempted from single sign-on (that is, any SSO configuration doesn't apply to them; they will continue to use passwords for login). However, for this scenario, you can have Microsoft Entra ID use a more restricted account for migration, one that doesn't allow Microsoft Entra ID to create users. That way, you effectively prevent Microsoft Entra ID from automatically provisioning user accounts for identities with consumer accounts, regardless of which users are assigned to the provisioning app. A restricted administrator user account in Cloud Identity or Google Workspace should have only the following privileges: Organization Units > Read, Users > Read, Users > Update, Groups Note: Disallowing Microsoft Entra ID from creating user accounts won't stop it from attempting to create them. Therefore, you're likely to find errors in the Microsoft Entra ID audit logs indicating that it failed to create user accounts in Cloud Identity or Google Workspace. A downside of this approach is that for users without unmanaged accounts, you must manually create accounts in Cloud Identity or Google Workspace. Federate with Microsoft Entra ID: Comparison The following table summarizes the approaches. Allow single sign-on for identities with consumer accounts Prevent automatic user provisioning for identities with consumer accounts Prevent deletion of migrated accounts without a matching identity in the external IdP Auto-provision new accounts Auto-update migrated accounts Approach 1: No provisioning ✅ ✅ ✅ X X Approach 2: Two apps with manual assignment ✅ Prone to manual error ✅ ✅ ✅ Approach 3: Two apps with user creation disabled ✅ ✅ ✅ X ✅ Make Active Directory federation safe for account consolidation If you plan to federate Cloud Identity or Google Workspace with Active Directory, you can use Google Cloud Directory Sync (GCDS) and Active Directory Federation Services (AD FS). When you configure GCDS and AD FS, you have to make sure to do the following: Allow single sign-on for identities with consumer accounts. Prevent automatic user provisioning for identities with consumer accounts. Prevent the deletion of migrated accounts without a matching identity in the external IdP. There are multiple ways to meet these requirements. Each approach has advantages and disadvantages. Approach 1: Disable GCDS In this approach, you set up single sign-on with AD FS, but you don't enable GCDS until you've finished migrating unmanaged user accounts. By disabling GCDS, you prevent automatic user provisioning for identities with consumer accounts. To allow single sign-on for identities with consumer accounts, create a custom access control policy in AD FS and assign all identities that might eventually need access to Google services, even if their existing consumer accounts are still subject to being migrated. For a user who has an existing consumer account, the corresponding Cloud Identity or Google Workspace user account is created automatically when the transfer request is accepted. By using the custom access control policy, you ensure that the user can immediately use single sign-on. For users who don't have a user account in Cloud Identity or Google Workspace, you have to create one manually. Although this approach meets the requirements and is the least complex to set up, it comes with the limitation that any attribute changes or user suspensions performed in Active Directory won't be propagated to Cloud Identity or Google Workspace.
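If Approach 1 leaves you with many user accounts to create manually, you can script the creation with the Admin SDK Directory API instead of using the Admin console. The following is a minimal sketch, not part of the original guidance; the key file, administrator address, and user details are placeholders, and the sketch assumes a service account with domain-wide delegation.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

# Assumed: a service account key with domain-wide delegation that impersonates
# an administrator of your Cloud Identity or Google Workspace account.
credentials = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES, subject="admin@example.com"
)
directory = build("admin", "directory_v1", credentials=credentials)

# Create a managed user account for an identity that has no existing consumer
# account. With single sign-on enforced, the password is only an initial
# placeholder that is never used for day-to-day sign-in.
new_user = directory.users().insert(
    body={
        "primaryEmail": "alex@example.com",
        "name": {"givenName": "Alex", "familyName": "Example"},
        "password": "a-strong-one-time-password",
        "changePasswordAtNextLogin": True,
    }
).execute()
print("Created", new_user["primaryEmail"])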
Approach 2: GCDS with manual assignment In this approach, you overcome the limitation of having to manually create user accounts in Cloud Identity or Google Workspace for users that don't have an existing account: Equivalent to Approach 1, you allow single sign-on for identities with consumer accounts by creating a custom access control policy in AD FS and assigning all identities that might eventually need access to Google services, even if their existing consumer accounts are still subject to being migrated. Create a group in Active Directory that reflects the user accounts that you want to automatically provision to GCDS. In the list of members, include the identities that eventually need access to Google services, but exclude all identities that are known to have an existing consumer account. Configure GCDS to provision user accounts only for identities that are members of this group. This way, you prevent automatic user provisioning for identities with consumer accounts. A key limitation of this approach is that you cannot prevent the deletion of migrated accounts without a matching identity in the external IdP. The approach is therefore applicable only if you don't have any consumer accounts without a matching identity in the external IdP. Important: Any mistake in assignment can lead to a conflicting account being created, making this approach more risky than other approaches. Approach 3: Disallow GCDS to create users When configuring provisioning, you must authorize GCDS to access Cloud Identity or Google Workspace. Normally, it's best to use a dedicated super-admin account for this purpose, because such accounts are exempted from single sign-on (that is, any SSO configuration doesn't apply to them; they will continue to use passwords for login). However, for this scenario, you can have GCDS use a more restricted account for migration, one that doesn't allow it to create users. That way, you effectively prevent GCDS from automatically provisioning user accounts for identities with consumer accounts and from deleting migrated accounts without a matching identity in the external IdP. A restricted administrator user account in Cloud Identity or Google Workspace should only have the following privileges: Organizational Units Users > Read Users > Update Groups Schema Management Domain Management Note: Disallowing GCDS from creating user accounts won't stop it from attempting to create them. Therefore, you're likely to find errors in the GCDS log indicating that it failed to create user accounts in Cloud Identity or Google Workspace. A downside of this approach is that for users without unmanaged accounts, you must manually create accounts in Cloud Identity or Google Workspace. Federate with Active Directory: Comparison The following table summarizes the approaches. Allow single sign-on for identities with consumer accounts Prevent automatic user provisioning for identities with consumer accounts Prevent deletion of migrated accounts without a matching identity in the external IdP Auto-provision new accounts Auto-update migrated accounts Approach 1: Don't configure provisioning ✅ ✅ ✅ X X Approach 2: GCDS with manual assignment ✅ Prone to manual error X ✅ ✅ Approach 3: Disallow GCDS to create users ✅ ✅ ✅ X ✅ Make Okta federation safe for account consolidation To federate Cloud Identity or Google Workspace with Okta, you can use the Google Workspace app from the Okta app catalog. This app can handle single sign-on and provision users and groups to Cloud Identity or Google Workspace. 
When you use the Google Workspace app for provisioning, Okta ignores any existing users in Cloud Identity or Google Workspace that don't have a counterpart in Okta, so the requirement to prevent deletion of migrated accounts without a matching identity in the external IdP is always met. Depending on how you configure Okta, you must still do the following: Allow single sign-on for identities with consumer accounts. Prevent automatic user provisioning for identities with consumer accounts. There are multiple ways to meet these requirements. Each approach has advantages and disadvantages. Approach 1: Don't configure provisioning In this approach, you configure the Google Workspace app to handle single sign-on but don't configure provisioning at all. By not configuring user provisioning, you prevent automatic user provisioning for identities with consumer accounts. To allow single sign-on for identities with consumer accounts, assign the app to all identities that might eventually need access to Google services, even if their existing consumer accounts are still subject to being migrated. The Google Workspace or Google Cloud icons appear on the Okta homepage of all identities that have been assigned to the app. However, signing in will fail unless a corresponding user account happens to exist on the Google side. For a user who has an existing consumer account, the corresponding Cloud Identity or Google Workspace user account is created automatically when the transfer request is accepted. That user can then immediately use single sign-on. Although this approach meets the requirements and is the least complex to set up, it comes with the limitation that any attribute changes or user suspensions performed in Okta won't be propagated to Cloud Identity or Google Workspace. Another downside of this approach is that you must manually create accounts in Cloud Identity or Google Workspace for all users who don't have an existing consumer account. Approach 2: Provision with manual assignment In this approach, you configure the Google Workspace app to handle single sign-on and provisioning but only enable the following provisioning features: Create users Update user attributes Deactivate users When you assign identities to the app, include the identities that eventually need access to Google services, but exclude all identities that are known to have an existing consumer account. This way, you prevent automatic user provisioning for identities with consumer accounts. As soon as a user accepts a transfer request, assign the user to the app so that they are enabled to use single sign-on and access Google Workspace or Google Cloud. One downside of this approach is that any mistake that you make in assignment can immediately lead to a conflicting account being created, which makes this approach much riskier than some of the other approaches. Another downside of this approach is that it causes temporary lockouts of migrated accounts. After accepting a transfer request, a user has to perform any subsequent sign-ons through Okta. These sign-on attempts will fail until you have assigned the user to the app in Okta. Approach 3: Provision with user creation disabled In this approach, you configure Google Workspace to handle single sign-on and provisioning but only enable the following provisioning features: Update user attributes Deactivate users Leave the Create Users option disabled and assign all identities that eventually need access to Google services to the app. 
Include identities with existing consumer accounts so that you allow single sign-on for identities with consumer accounts. By disallowing Okta to create accounts, you prevent Okta from automatically provisioning user accounts for identities with consumer accounts. At the same time, this configuration still lets Okta propagate attribute changes and user suspensions to Cloud Identity or Google Workspace for those users that have a corresponding Google Account. For identities that don't have a corresponding user account in Cloud Identity or Google Workspace, Okta might display an error message in the Okta Admin console: For a user who has an existing consumer account, the corresponding Cloud Identity or Google Workspace user account is created automatically when the transfer request is accepted. That user can then immediately use single sign-on. Although the user account is functional at this point, Okta might not display an icon on the user's home page yet and might instead continue to display the error message in the Admin UI. To fix this, retry the assignment task in the Okta Administrator Dashboard. This approach successfully prevents Okta from automatically provisioning user accounts for identities with consumer accounts, but still allows single sign-on for identities with consumer accounts. The approach is also less prone to accidental misconfiguration than the second approach. One downside is still that for users without existing consumer accounts, you must manually create user accounts in Cloud Identity or Google Workspace. Approach 4: Two apps with manual assignment You can overcome some of the disadvantages of the previous approaches by using two apps, one for provisioning and one for single sign-on: Configure one instance of the Google Workspace app to handle provisioning only. The single sign-on functionality of the app is not used. By assigning users to this app, you control which accounts are being provisioned to Cloud Identity or Google Workspace. You can ensure that this app is effectively hidden from your users by enabling the Do not display application icon to users option. Configure another instance of the Google Workspace app for single sign-on purposes only. By assigning users to this app, you control who is allowed to sign on. Using these two apps, assign users as follows: Assign all identities that eventually need access to Google services to the single sign-on app. Include identities with existing consumer accounts so that you allow single sign-on for identities with consumer accounts. When assigning identities to the provisioning app, include the identities that eventually need access to Google services, but exclude all identities that are known to have an existing consumer account. This way, you prevent automatic user provisioning for identities with consumer accounts. Whenever a user accepts a transfer request, assign the user to the app as well. Important: Similar to Approach 2, a downside of this approach is that any mistake that you make in assignment can immediately lead to a conflicting account being created, making this approach substantially more risky than other approaches. Federate with Okta: Comparison The following table summarizes the approaches. 
Allow single sign-on for identities with consumer accounts Prevent automatic user provisioning for identities with consumer accounts Prevent deletion of migrated accounts without a matching identity in the external IdP Auto-provision new accounts Auto-update migrated accounts Approach 1: No provisioning ✅ ✅ ✅ X X Approach 2: Provision with manual assignment X Risky ✅ ✅ ✅ Approach 3: Provision with user creation disabled ✅ ✅ ✅ X ✅ Approach 4: Two apps with manual assignment ✅ Risky ✅ ✅ ✅ What's next Review how you can set up federation with Active Directory or Microsoft Entra ID. Start your onboarding process by preparing Cloud Identity or Google Workspace accounts. Send feedback \ No newline at end of file diff --git a/Assured_Workloads.txt b/Assured_Workloads.txt new file mode 100644 index 0000000000000000000000000000000000000000..276213bbc793ae6eb8ce586bff1dee78e3a50501 --- /dev/null +++ b/Assured_Workloads.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/assured-workloads +Date Scraped: 2025-02-23T12:09:30.887Z + +Content: +Get started with Assured Workloads by signing up for a free trial.Jump to Assured WorkloadsAssured WorkloadsAccelerate your path to running more secure and compliant workloads on Google Cloud.Go to consoleSign-up for free trialConfigure regulated workloads in just a few clicksHelp prevent misconfigurations of required controlsSimplify your path to complianceHelp meet government cloud compliance requirementsVIDEOIntroduction to Assured Workloads6:31BenefitsCompliance without compromiseYou don’t have to choose between supporting regulatory compliance and using the latest, most innovative cloud services.Simplified security and complianceHelp manage the requirements for your regulated workloads with just a few clicks.Reduced cost and riskReduce costs and risk through simplified management of required controls.Key featuresKey features of Assured WorkloadsData residencyTo help comply with data residency requirements, Google Cloud gives you the ability to control the regions where data at rest is stored. During Assured Workloads setup, you create an environment and select your compliance program. When you create resources in the environment, Assured Workloads restricts the regions you can select for those resources based on the compliance program you chose using Organization Policy.The Google Cloud Data Location Service Specific Terms apply.Cryptographic control over data accessGoogle Cloud applies encryption at rest and in transit by default. To gain more control over how data is encrypted, Google Cloud customers can use Cloud Key Management Service to generate, use, rotate, and destroy encryption keys according to their own policies. Cryptographic control over data access is achieved through the use of Key Access Justifications (KAJ) together with our Cloud External Key Manager (EKM).Assured Workloads configures the appropriate encryption services per workload depending on the compliance program you chose.Assured SupportRegulated customers’ compliance obligations extend to support services. Assured Support is a value-added service to Premium or Enhanced Support to ensure only Google support personnel meeting specific geographical locations and personnel conditions support their workload when raising a support case or needing technical assistance. 
By delivering the same features and benefits of Premium or Enhanced Support (including response times) with an added layer of controls and transparency, Assured Support helps customers meet compliance requirements without compromising on the level and quality of support.Assured Workloads monitoringAssured Workloads monitoring scans your environment in real time and provides alerts whenever organization policy changes violate the defined compliance posture. The monitoring dashboard shows which policy is being violated and provides instructions on how to resolve the finding.VIDEOHow to configure and run Assured Workloads on Google Cloud6:30We chose to deploy with Google Cloud Assured Workloads because it provides us with the security controls we need and helps address a wide range of compliance requirements. Our ability to meet requirements around the globe enables us to grow our business while reducing the overhead and complexities of the multinational compliance process.David Williams, Cloud Manager, Iron MountainRead the blogWhat's newWhat’s newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postGoogle Cloud expands in Saudi ArabiaRead the blogBlog postHow European customers benefit today from the power of choice with Google Sovereign CloudRead the blogBlog postThe power of choice: Empowering your regulatory and compliance journeyRead the blogBlog postRegion expansion, TLS version restrictions, new supported servicesRead the blogBlog postLearn how Iron Mountain uses Assured Workloads Read the blogBlog postEuropean data sovereignty offerings with Assured Workloads for EURead the blogDocumentationDocumentationGoogle Cloud BasicsAssured Workloads conceptsUnderstand key concepts, such as data residency, platform controls, personnel access controls, and encryption key management.Learn moreQuickstartAssured Workloads quickstart guideUse this guide to get started on how to set up and evaluate the core capabilities of Assured Workloads in your Google Cloud environment. Learn moreTutorialConfigure an IL4/CJIS workloadSet up a new Assured Workloads environment in Google Cloud Console for IL4 and CJIS compliance programs. Learn moreTutorialConfigure a FedRAMP/US Regions and Support workloadSet up a new Assured Workloads environment in Google Cloud Console for FedRAMP Moderate, FedRAMP High, and US Regions and Support compliance programs.Learn moreTutorialConfigure Assured Workloads for EUSet up a new Assured Workloads environment in Google Cloud Console for EU Regions and Support with optional Sovereign Controls.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Assured WorkloadsUse casesUse casesUse caseCreating controlled environmentsApply security controls to an environment in support of compliance requirements.View all technical guidesAll featuresCompliance programsFedRAMP ModerateThe FedRAMP Moderate controls support access controls for first level support personnel who have completed enhanced background checks. 
Additionally, customers can control what region their data should reside using an org policy.FedRAMP HighThe FedRAMP High platform controls support access controls for first- and second-level support personnel who have completed enhanced background checks and are located in the US. Data location controls are set to support US-only regions.Criminal Justice Information Systems (CJIS)The CJIS platform controls support access controls for first- and second-level support personnel who have completed state-sponsored background checks and are located in the US. Escorted session controls are also used to supervise and monitor support actions by non-adjudicated staff. Data location controls are set to support US-only regions.Impact Level 2 (IL2)The IL2 controls support access controls for first- and second-level support personnel who have completed enhanced background checks, are US persons, and are located in the US. Data location controls are set to support US-only regions.Impact Level 4 (IL4)The IL4 controls support access controls for first- and second-level support personnel who have completed enhanced background checks, are US persons, and are located in the US. Data location controls are set to support US-only regions.Impact Level 5 (IL5)The IL5 controls support access controls for first- and second-level support personnel who have completed enhanced background checks, are US persons, and are located in the US. Data location controls are set to support US-only regions.International Traffic in Arms Regulations (ITAR)The ITAR controls support access controls for first- and second-level support personnel who are US persons, and are located in the US. Data location controls are set to support US-only regions.US Regions and SupportThe US Regions and Support controls support access controls for first- and second-level support personnel who are US persons and are located in the US. Data location controls are set to support US-only regions.EU Regions and SupportThe EU Regions and Support controls support access controls for first- and second-level support personnel who are EU personnel based in the EU. Data location controls are set to support available EU regions.EU Regions and Support with Sovereign ControlsThe Assured Workloads for EU Regions and Support with Sovereign Controls support access controls for first- and second-level support personnel who are based in the EU, and provides data residency and data sovereignty controls for EU-based customers. Data location controls are set to support EU-only regions.Australia Regions and SupportThe Australia Regions and Support controls restrict personnel access and technical support to persons based in five countries (US, UK, Australia, Canada, and New Zealand). Data location controls are set to support available Australia regions.Canada Regions and SupportThe Canada Regions and Support controls support access controls for first- and second-level support personnel who are Canadian personnel based in Canada. Data location controls are set to support available Canadian regions.Israel Regions and SupportThe Israel Regions and Support controls support access controls for first-level and second-level support personnel who are either security-cleared Israeli personnel located in Israel or US persons who have completed enhanced background checks located in the US. 
Data location controls are set to support Israel-only regions.Japan RegionsData location controls are set to support available Japan regions.Healthcare & Life Sciences ControlsData location controls are restricted to US regions. Services must have completed a HIPAA BAA & the HITRUST CSF, and support Data Residency at-rest in the US, CMEK, VPC-SC, and Access Transparency approvals and logging.Healthcare & Life Sciences Controls with US SupportData location controls are restricted to US regions. Services must have completed a HIPAA BAA & the HITRUST CSF, and support Data Residency at-rest in the US, CMEK, VPC-SC, and Access Transparency approvals and logging. Support access controls are set for first- and second-level support personnel who are located in the US. Kingdom of Saudi Arabia Sovereign Controls for MultinationalsSovereign Controls for Kingdom of Saudi Arabia (KSA) is specifically for non-Saudi domiciled organizations and is built on Google Cloud’s Class C-certified infrastructure. It provides data residency and data sovereignty controls in the KSA region. Data location controls are set to support KSA-only regions.PricingPricingAssured Workloads and Assured Support pricing is based on consumption. Please contact sales for more information. View pricing detailsPartnersPartnersDeploy workloads with Assured Workloads using ISV solutions. The Google Cloud Ready initiative ensures compliance. Visit the Regulated & Sovereignty Solutions page for details.See all partnersA product or feature listed on this page is in preview. For more information on our product launch stages, see hereTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Authentication_and_authorization.txt b/Authentication_and_authorization.txt new file mode 100644 index 0000000000000000000000000000000000000000..f13c38ed4cd975517b3df6ea933dcec93a3bf162 --- /dev/null +++ b/Authentication_and_authorization.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations/authentication-authorization +Date Scraped: 2025-02-23T11:45:25.239Z + +Content: +Home Docs Cloud Architecture Center Send feedback Authentication and authorization Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC This section introduces how to use Cloud Identity to manage the identities that your employees use to access Google Cloud services. External identity provider as the source of truth We recommend federating your Cloud Identity account with your existing identity provider. Federation helps you ensure that your existing account management processes apply to Google Cloud and other Google services. If you don't have an existing identity provider, you can create user accounts directly in Cloud Identity. Note: If you're already using Google Workspace, Cloud Identity uses the same console, administrative controls, and user accounts as your Google Workspace account. The following diagram shows a high-level view of identity federation and single sign-on (SSO). It uses Microsoft Active Directory, located in the on-premises environment, as the example identity provider. 
This diagram describes the following best practices: User identities are managed in an Active Directory domain that is located in the on-premises environment and federated to Cloud Identity. Active Directory uses Google Cloud Directory Sync to provision identities to Cloud Identity. Users attempting to sign in to Google services are redirected to the external identity provider for single sign-on with SAML, using their existing credentials to authenticate. No passwords are synchronized with Cloud Identity. The following table provides links to setup guidance for identity providers. Identity provider Guidance Active Directory Active Directory user account provisioning Active Directory single sign-on Microsoft Entra ID (formerly Azure AD) Federating Google Cloud with Microsoft Entra ID Other external identity providers (for example, Ping or Okta) Integrating Ping Identity Solutions with Google Identity Services Using Okta with Google Cloud Providers Best practices for federating Google Cloud with an external identity provider We strongly recommend that you enforce multi-factor authentication at your identity provider with a phishing-resistant mechanism such as a Titan Security Key. The recommended settings for Cloud Identity aren't automated through the Terraform code in this blueprint. See administrative controls for Cloud Identity for the recommended security settings that you must configure in addition to deploying the Terraform code. Groups for access control A principal is an identity that can be granted access to a resource. Principals include Google Accounts for users, Google groups, Google Workspace accounts, Cloud Identity domains, and service accounts. Some services also let you grant access to all users who authenticate with a Google Account, or to all users on the internet. For a principal to interact with Google Cloud services, you must grant them roles in Identity and Access Management (IAM). To manage IAM roles at scale, we recommend that you assign users to groups based on their job functions and access requirements, then grant IAM roles to those groups. You should add users to groups using the processes in your existing identity provider for group creation and membership. We don't recommend granting IAM roles to individual users because individual assignments can increase the complexity of managing and auditing roles. The blueprint configures groups and roles for view-only access to foundation resources. We recommend that you deploy all resources in the blueprint through the foundation pipeline, and that you don't grant roles to users or groups to modify foundation resources outside of the pipeline. The following table shows the groups that are configured by the blueprint for viewing foundation resources. Name Description Roles Scope grp-gcp-org-admin@example.com Highly privileged administrators who can grant IAM roles at the organization level. They can access any other role. This privilege is not recommended for daily use. Organization Administrator organization grp-gcp-billing-admin@example.com Highly privileged administrators who can modify the Cloud Billing account. This privilege is not recommended for daily use. Billing Account Admin organization grp-gcp-billing-viewer@example.com The team who is responsible for viewing and analyzing the spending across all projects. Billing Account Viewer organization BigQuery User billing project grp-gcp-audit-viewer@example.com The team who is responsible for auditing security-related logs.
Logs Viewer BigQuery User logging project grp-gcp-security-reviewer@example.com The team who is responsible for reviewing cloud security. Security Reviewer organization grp-gcp-network-viewer@example.com The team who is responsible for viewing and maintaining network configurations. Compute Network Viewer organization grp-gcp-scc-admin@example.com The team who is responsible for configuring Security Command Center. Security Center Admin Editor organization grp-gcp-secrets-admin@example.com The team who is responsible for managing, storing, and auditing credentials and other secrets that are used by applications. Secret Manager Admin secrets projects grp-gcp-kms-admin@example.com The team who is responsible for enforcing encryption key management to meet compliance requirements. Cloud KMS Viewer kms projects As you build your own workloads on top of the foundation, you create additional groups and grant IAM roles that are based on the access requirements for each workload. We strongly recommend that you avoid basic roles (such as Owner, Editor, or Viewer) and use predefined roles instead. Basic roles are overly permissive and a potential security risk. Owner and Editor roles can lead to privilege escalation and lateral movement, and the Viewer role includes access to read all data. For best practices on IAM roles, see Use IAM securely. Super admin accounts Cloud Identity users with the super admin account bypass the organization's SSO settings and authenticate directly to Cloud Identity. This exception is by design, so that the super admin can still access the Cloud Identity console in the event of an SSO misconfiguration or outage. However, it means you must consider additional protection for super admin accounts. To protect your super admin accounts, we recommend that you always enforce 2-step verification with security keys in Cloud Identity. For more information, see Security best practices for administrator accounts. Issues with consumer user accounts If you didn't use Cloud Identity or Google Workspace before you onboarded to Google Cloud, it's possible that your organization's employees are already using consumer accounts that are associated with their corporate email identities to access other Google services such as Google Marketing Platform or YouTube. Consumer accounts are accounts that are fully owned and managed by the individuals who created them. Because those accounts aren't under your organization's control and might include both personal and corporate data, you must decide how to consolidate these accounts with other corporate accounts. We recommend that you consolidate existing consumer user accounts as part of onboarding to Google Cloud. If you aren't using Google Workspace for all your user accounts already, we recommend blocking the creation of new consumer accounts. Administrative controls for Cloud Identity Cloud Identity has various administrative controls that are not automated by Terraform code in the blueprint. We recommend that you enforce each of these best practice security controls early in the process of building your foundation. Control Description Deploy 2-step verification User accounts might be compromised through phishing, social engineering, password spraying, or various other threats. 2-step verification helps mitigate these threats. 
We recommend that you enforce 2-step verification for all user accounts in your organization with a phishing-resistant mechanism such as Titan Security Keys or other keys that are based on the phishing-resistant FIDO U2F (CTAP1) standards. Set session length for Google Cloud services Persistent OAuth tokens on developer workstations can be a security risk if exposed. We recommend that you set a reauthentication policy to require authentication every 16 hours using a security key. Set session length for Google Services (Google Workspace customers only) Persistent web sessions across other Google services can be a security risk if exposed. We recommend that you enforce a maximum web session length and align this with session length controls in your SSO provider. Share data from Cloud Identity with Google Cloud services Admin Activity audit logs from Google Workspace or Cloud Identity are ordinarily managed and viewed in the Admin Console, separately from your logs in your Google Cloud environment. These logs contain information that is relevant for your Google Cloud environment, such as user login events. We recommend that you share Cloud Identity audit logs to your Google Cloud environment to centrally manage logs from all sources. Set up post SSO verification The blueprint assumes that you set up SSO with your external identity provider. We recommend that you enable an additional layer of control based on Google's sign-in risk analysis. After you apply this setting, users might see additional risk-based login challenges at sign-in if Google deems that a user sign-in is suspicious. Remediate issues with consumer user accounts Users with a valid email address at your domain but no Google Account can sign up for unmanaged consumer accounts. These accounts might contain corporate data, but are not controlled by your account lifecycle management processes. We recommend that you take steps to ensure that all user accounts are managed accounts. Disable account recovery for super admin accounts Super admin account self-recovery is off by default for all new customers (existing customers might have this setting on). Turning this setting off helps to mitigate the risk that a compromised phone, compromised email, or social engineering attack could let an attacker gain super admin privileges over your environment. Plan an internal process for a super admin to contact another super admin in your organization if they have lost access to their account, and ensure that all super admins are familiar with the process for support-assisted recovery. Enforce and monitor password requirements for users In most cases, user passwords are managed through your external identity provider, but super admin accounts bypass SSO and must use a password to sign in to Cloud Identity. Disable password reuse and monitor password strength for any users who use a password to log in to Cloud Identity, particularly super admin accounts. Set organization-wide policies for using groups By default, external user accounts can be added to groups in Cloud Identity. We recommend that you configure sharing settings so that group owners can't add external members. Note that this restriction doesn't apply to the super admin account or other delegated administrators with Groups admin permissions. Because federation from your identity provider runs with administrator privileges, the group sharing settings don't apply to this group synchronization. 
We recommend that you review controls in the identity provider and synchronization mechanism to ensure that non-domain members aren't added to groups, or that you apply group restrictions. What's next Read about organization structure (next document in this series). Send feedback \ No newline at end of file diff --git a/Automate_and_manage_change.txt b/Automate_and_manage_change.txt new file mode 100644 index 0000000000000000000000000000000000000000..5d78c202b0ed35a518478137d8c1d09d9305ae4a --- /dev/null +++ b/Automate_and_manage_change.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/operational-excellence/automate-and-manage-change +Date Scraped: 2025-02-23T11:42:47.188Z + +Content: +Home Docs Cloud Architecture Center Send feedback Automate and manage change Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This principle in the operational excellence pillar of the Google Cloud Architecture Framework provides recommendations to help you automate and manage change for your cloud workloads. It involves implementing infrastructure as code (IaC), establishing standard operating procedures, implementing a structured change management process, and using automation and orchestration. Principle overview Change management and automation play a crucial role in ensuring smooth and controlled transitions within cloud environments. For effective change management, you need to use strategies and best practices that minimize disruptions and ensure that changes are integrated seamlessly with existing systems. Effective change management and automation include the following foundational elements: Change governance: Establish clear policies and procedures for change management, including approval processes and communication plans. Risk assessment: Identify potential risks associated with changes and mitigate them through risk management techniques. Testing and validation: Thoroughly test changes to ensure that they meet functional and performance requirements and mitigate potential regressions. Controlled deployment: Implement changes in a controlled manner, ensuring that users are seamlessly transitioned to the new environment, with mechanisms to seamlessly roll back if needed. These foundational elements help to minimize the impact of changes and ensure that changes have a positive effect on business operations. These elements are represented by the processes, tooling, and governance focus areas of operational readiness. Recommendations To automate and manage change, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Adopt IaC Infrastructure as code (IaC) is a transformative approach for managing cloud infrastructure. You can define and manage cloud infrastructure declaratively by using tools like Terraform. IaC helps you achieve consistency, repeatability, and simplified change management. It also enables faster and more reliable deployments. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. The following are the main benefits of adopting the IaC approach for your cloud deployments: Human-readable resource configurations: With the IaC approach, you can declare your cloud infrastructure resources in a human-readable format, like JSON or YAML. 
Infrastructure administrators and operators can easily understand and modify the infrastructure and collaborate with others. Consistency and repeatability: IaC enables consistency and repeatability in your infrastructure deployments. You can ensure that your infrastructure is provisioned and configured the same way every time, regardless of who is performing the deployment. This approach helps to reduce errors and ensures that your infrastructure is always in a known state. Accountability and simplified troubleshooting: The IaC approach helps to improve accountability and makes it easier to troubleshoot issues. By storing your IaC code in a version control system, you can track changes, and identify when changes were made and by whom. If necessary, you can easily roll back to previous versions. Implement version control A version control system like Git is a key component of the IaC process. It provides robust change management and risk mitigation capabilities, which is why it's widely adopted, either through in-house development or SaaS solutions. This recommendation is relevant to these focus areas of operational readiness: governance and tooling. By tracking changes to IaC code and configurations, version control provides visibility into the evolution of the code, making it easier to understand the impact of changes and identify potential issues. This enhanced visibility fosters collaboration among team members who work on the same IaC project. Most version control systems let you easily roll back changes if needed. This capability helps to mitigate the risk of unintended consequences or errors. By using tools like Git in your IaC workflow, you can significantly improve change management processes, foster collaboration, and mitigate risks, which leads to a more efficient and reliable IaC implementation. Build CI/CD pipelines Continuous integration and continuous delivery (CI/CD) pipelines streamline the process of developing and deploying cloud applications. CI/CD pipelines automate the building, testing, and deployment stages, which enables faster and more frequent releases with improved quality control. This recommendation is relevant to the tooling focus area of operational readiness. CI/CD pipelines ensure that code changes are continuously integrated into a central repository, typically a version control system like Git. Continuous integration facilitates early detection and resolution of issues, and it reduces the likelihood of bugs or compatibility problems. To create and manage CI/CD pipelines for cloud applications, you can use tools like Cloud Build and Cloud Deploy. Cloud Build is a fully managed build service that lets developers define and execute build steps in a declarative manner. It integrates seamlessly with popular source-code management platforms and it can be triggered by events like code pushes and pull requests. Cloud Deploy is a serverless deployment service that automates the process of deploying applications to various environments, such as testing, staging, and production. It provides features like blue-green deployments, traffic splitting, and rollback capabilities, making it easier to manage and monitor application deployments. Integrating CI/CD pipelines with version control systems and testing frameworks helps to ensure the quality and reliability of your cloud applications. By running automated tests as part of the CI/CD process, development teams can quickly identify and fix any issues before the code is deployed to the production environment. 
This integration helps to improve the overall stability and performance of your cloud applications. Use configuration management tools Tools like Puppet, Chef, Ansible, and VM Manager help you to automate the configuration and management of cloud resources. Using these tools, you can ensure resource consistency and compliance across your cloud environments. This recommendation is relevant to the tooling focus area of operational readiness. Automating the configuration and management of cloud resources provides the following benefits: Significant reduction in the risk of manual errors: When manual processes are involved, there is a higher likelihood of mistakes due to human error. Configuration management tools reduce this risk by automating processes, so that configurations are applied consistently and accurately across all cloud resources. This automation can lead to improved reliability and stability of the cloud environment. Improvement in operational efficiency: By automating repetitive tasks, your organization can free up IT staff to focus on more strategic initiatives. This automation can lead to increased productivity and cost savings and improved responsiveness to changing business needs. Simplified management of complex cloud infrastructure: As cloud environments grow in size and complexity, managing the resources can become increasingly difficult. Configuration management tools provide a centralized platform for managing cloud resources. The tools make it easier to track configurations, identify issues, and implement changes. Using these tools can lead to improved visibility, control, and security of your cloud environment. Automate testing Integrating automated testing into your CI/CD pipelines helps to ensure the quality and reliability of your cloud applications. By validating changes before deployment, you can significantly reduce the risk of errors and regressions, which leads to a more stable and robust software system. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. The following are the main benefits of incorporating automated testing into your CI/CD pipelines: Early detection of bugs and defects: Automated testing helps to detect bugs and defects early in the development process, before they can cause major problems in production. This capability saves time and resources by preventing the need for costly rework and bug fixes at later stages in the development process. High quality and standards-based code: Automated testing can help improve the overall quality of your code by ensuring that the code meets certain standards and best practices. This capability leads to more maintainable and reliable applications that are less prone to errors. You can use various types of testing techniques in CI/CD pipelines. Each test type serves a specific purpose. Unit testing focuses on testing individual units of code, such as functions or methods, to ensure that they work as expected. Integration testing tests the interactions between different components or modules of your application to verify that they work properly together. End-to-end testing is often used along with unit and integration testing. End-to-end testing simulates real-world scenarios to test the application as a whole, and helps to ensure that the application meets the requirements of your end users. To effectively integrate automated testing into your CI/CD pipelines, you must choose appropriate testing tools and frameworks. 
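For example, if a service is written in Python, a common choice is to run a pytest unit test suite as an early pipeline stage so that a failing test blocks the deployment. The following is a minimal sketch; the module, function, and values are illustrative and not part of any particular product.

# test_pricing.py -- executed with `pytest` during the build stage (illustrative)
import pytest

def monthly_cost(hourly_rate, hours):
    """Function under test: compute a simple monthly cost."""
    return hourly_rate * hours

def test_monthly_cost_basic():
    # 730 hours is roughly one month of continuous usage.
    assert monthly_cost(0.10, 730) == pytest.approx(73.0)

def test_monthly_cost_zero_hours():
    assert monthly_cost(0.10, 0) == 0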
There are many different options, each with its own strengths and weaknesses. You must also establish a clear testing strategy that outlines the types of tests to be performed, the frequency of testing, and the criteria for passing or failing a test. By following these recommendations, you can ensure that your automated testing process is efficient and effective. Such a process provides valuable insights into the quality and reliability of your cloud applications. Previous arrow_back Manage and optimize cloud resources Next Continuously improve and innovate arrow_forward Send feedback \ No newline at end of file diff --git a/Backup_and_DR_Service.txt b/Backup_and_DR_Service.txt new file mode 100644 index 0000000000000000000000000000000000000000..2217ba4ded8ad3a9bed2313ad4eda92affadc2fa --- /dev/null +++ b/Backup_and_DR_Service.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/backup-disaster-recovery +Date Scraped: 2025-02-23T12:10:16.868Z + +Content: +Now generally available in Backup and DR service: Backup vault and simplified Compute Engine backup. Explore details.Jump to Backup and Disaster Recovery (DR) ServiceBackup and Disaster Recovery (DR) ServiceSecure, centrally-managed backup and recovery service for Google Cloud workloads that protects backup data from malicious or accidental deletion.Go to consoleContact salesDeploy a backup and recovery service that protects from malicious or accidental data deletion—in single or multi-regions. Try it in the console.Protect Compute Engine VMs, VMware VMs, databases, and file systems all in one placeLearn how to back up and recover a Compute Engine VMVIDEOAn introduction to Google Cloud Backup and DR Service 5:40BenefitsProtect data from malicious attackGain peace of mind with immutable, indelible backups secured in a backup vault. Maximize cyber resilience and satisfy compliance objectives. Restore from backups quickly to resume business operations.Manage your backups and restores all from one placeProtect a broad spectrum of workloads and manage them from a central dashboard. Serve critical use cases such as recovery from cyber attack (for example ransomware, user errors, or data corruption.)Integrate with existing automation workflows for at-scale protectionLeverage gcloud CLI, APIs, or Terraform, to achieve at-scale automation. Integrate backup operations with existing infrastructure-as-code and incorporate them seamlessly into your broader resource management strategies.Key featuresReduce operational burden and recover easilyBackup vault protects against modification or early deletionStore backup data in a secure Google-managed environment, protecting them from modification and early deletion. Self-contained, immutable, indelible backups enable recovery into new or pre-existing Google Cloud environments.Manage from a simple, centralized interfaceProtects a broad range of workloads with a single service. Easy-to-use and integrated directly into Google Cloud console.Comprehensive monitoring, alerting, and customizable reportsGain full, real-time visibility into your backup operations. 
Easily monitor job status, set up notifications, and tailor automated reporting to demonstrate compliance, analysis, and gain insights.Enforcement and oversight features to keep your organization safeEnables efficient delegation of backup tasks to platform admins (or app developers), while retaining centralized governance and control over backup policies.View all featuresBLOGBackup and DR integrates with logging and monitoring toolsCintas uses Google Cloud Backup and DR to protect more than 250TB of critical SAP workloads. The SAP Basis team's transition to protect HANA instances using Persistent Disk Snapshot has resulted in ~15% savings in backup costs and a reduction in full backup duration from 1.5 - 3 hours to only 15 minutes.Mohan Bukkapatnam, Vice President - IT Strategic Initiatives, CintasWhat's newNews and announcementsBlog postNew: Backup vaults for cyber resilience and simplified Compute Engine backupsRead the blogBlog postDetecting data deletion and threats to backups with Security Command CenterRead the blogBlog postSafeguard your VM workloads with new Google Cloud VMWare Engine ProtectedLearn moreBlog postDetecting data deletion and threats to backups with Security Command CenterRead the blogBlog postBackup and DR service integrates with logging and monitoring toolsRead the blogDocumentationDocumentationTutorialGetting started with Google Cloud Backup and DRA list of resources for getting started with Google Cloud Backup and DR Service.Learn moreTutorialUse backup vaults for immutable and indelible backupsLearn about backup vaults and how to use them. Backup vaults provide protection against malicious and accidental data loss.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for Backup and DR ServiceUse casesUse casesUse caseRansomware recoveryBackup and DR service enables users to protect backups against malicious attack and to recover in new environments. Use your security tools to analyze restored backups in an isolated recovery environment (IRE), or identify potential ransomware infections before recovering into production.Use casePlan a backup and recovery deploymentThis guide provides the steps to architect and implement a robust backup and recovery strategy for your Google Cloud environment based on your use case, Recovery Time Objective (RTO), and single or multi-regional workload requirements. Learn how to define recovery objectives, select the right approach, and deploy a backup and recovery service to ensure business continuity.Use caseCreate a backup plan in the Google Cloud consoleCreating reliable backups for your Google Cloud resources is easy with the Google Cloud console. In this guide, you’ll learn how to define which resources to include in your backups, set appropriate retention policies to meet your compliance needs, and automate backups with flexible scheduling options.Use caseCreate a backup vault for immutable and indelible backupsProtect your backups from accidental or malicious deletion using a backup vault. This guide explains how a backup vault provides a secure, isolated repository for your backups, ensuring their integrity and availability. Learn how to configure immutability policies, control access, and integrate a backup vault into your existing backup strategy.Use caseBackup and restore VMsProtect your Compute Engine instances with a simple and reliable backup and recovery process. 
In this quickstart, learn how to create backups of your virtual machines directly from the Google Cloud console. Explore how to easily restore your instances in case of accidental deletion or data corruption. View all technical guidesAll featuresAdditional featuresCentralized managementCentralized management of backups for various Google Cloud workloads. Plan-driven data management with automated retention Create backup plans that are powered by a sophisticated and powerful scheduling engine. Preconfigure, set, and forget your intraday, daily, weekly, monthly, and yearly backups. Specify backup locations and retention periods. Configure workload specific options and choose between app-consistent and crash-consistent backups. Cross-region backups You have complete control over your backup data. Store your backups in a multi-region or single region/location to meet disaster recovery and compliance needs. Backup VaultStore backup data in a secure Google-managed environment, protecting them from modification and early deletion. Self-contained, immutable, indelible backups enable recovery into new or pre-existing Google Cloud environments.Cross project recoveriesRecover workloads to different projects—allows for disaster recovery and migrating workloads to different projects.Data encryption in transit and at restEnsure that data is safely secured to meet business and regulatory requirements.PricingPricingGoogle Cloud Backup and DR Service is billed monthly, based on usage.View pricing detailsPartnersBackup and DR Partners Our partners integrate their industry-leading tools and services with Google Cloud for enhanced protection.Expand allAssessment and planningThese partners are well versed in helping customers with Google Backup and DR solutions.ImplementationThese partners can work with your Google Cloud Backup and DR implementation.See all partnersExplore our marketplaceTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Backup_and_Disaster_Recovery.txt b/Backup_and_Disaster_Recovery.txt new file mode 100644 index 0000000000000000000000000000000000000000..ea6a833f922431bfb51cfde9691cb934018673bb --- /dev/null +++ b/Backup_and_Disaster_Recovery.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/backup-dr +Date Scraped: 2025-02-23T11:59:49.907Z + +Content: +Now generally available in Backup and DR service: Backup vault and simplified Compute Engine backup. Explore details.Backup and Disaster Recovery Solutions with Google CloudBackup and disaster recovery are vital components in your business continuity plan. 
We are committed to providing suitable solutions that meet your business needs through first-party and partner solutions.Contact usCheck out Google Cloud Backup and DR—a managed backup and disaster recovery (DR) service.Learn moreBenefitsDeliver backup and recovery for cloud workloadsEnsure business continuityUse backups to protect mission-critical data and enable rapid recoveries.Secure application-aware backupsCapture application/VM-consistent backups and secure against unauthorized deletion.Address multiple use casesUse backups for ransomware recovery, recovery from user errors, and analytics.Key featuresGoogle Cloud supports a wide range of data protection optionsThe primary option for protecting workloads in Google Cloud is the Google-managed Backup and DR Service, which provides centralized management and secure backup with protection against unauthorized modification and deletion. To complement Backup and DR Service, Google Cloud also supports data protection partner solutions and provides service-level data protection features.Partner solutionsA broad ecosystem of ISV and system integration (SI) partners provide backup and disaster recovery offerings on, and/or integrated with, Google Cloud. This provides Google Cloud customers with freedom of choice and facilitates frictionless use of preferred third-party products and services.Service-level featuresGoogle Cloud products and services offer a broad range of data protection features, such as Backup for GKE, Persistent Disk snapshots, Cloud SQL backups, Filestore backups, and geo-redundant Cloud Storage. Customers and ISVs can use these features to design and implement robust protection strategies.Ready to get started? Contact usLearn more about Google Cloud Backup and DR Protect critical workloads running in Google Cloud. Secure backup data against accidental or malicious deletion with backup vaults. Google Cloud Backup and DR Service overviewRead the overviewLearn about Backup VaultsLearn moreExplore quickstart guidesGet startedCustomersCustomers rely on Google Cloud for backup and disaster recoveryVideoEtsy protects their MySQL cloud instances with automated PD snapshots 14:32Blog postRodan + Fields achieves business continuity for retail workloads with SAP on Google Cloud5-min read Case studyGeorgia State University uses Google Cloud to improve disaster recovery5-min read See all customersWhat's new See the latest updates about Google Cloud backup and disaster recoveryEventWatch the Spotlight on Storage on demand to hear our latest storage updatesWatch videoBlog postIntroducing Google Cloud Backup and DR ServiceRead the blogBlog post Zero-footprint DR solution with Google Cloud VMware Engine and ActifioRead the blogTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution. Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Bare_Metal.txt b/Bare_Metal.txt new file mode 100644 index 0000000000000000000000000000000000000000..a4dd392f318d0e51f0922673bbb0a49cf2ef8e64 --- /dev/null +++ b/Bare_Metal.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/bare-metal +Date Scraped: 2025-02-23T12:02:47.176Z + +Content: +Oracle and Google Cloud announce a groundbreaking multicloud partnership. 
Read the blog.Jump to Bare Metal Solution for OracleBare Metal Solution for OracleBring your Oracle workloads to Google Cloud with Bare Metal Solution and jumpstart your cloud journey with minimal risk.Contact usView DocumentationState-of-the-art certified infrastructure, precision tuned to meet all your performance needsDeploy Oracle capabilities like clustered databases, replication, and all performance featuresSeamlessly take advantage of all Google Cloud services with less than 2 ms latencyLeverage your on-premises licensing, run books, database administrators, and system integratorsBilling, support, and SLA provided by Google CloudBenefitsFully managed certified database infrastructureEnd-to-end infrastructure management including compute, storage, and networking. Fully managed and monitored data center operations such as power, cooling, smart hand support, and facilities. Experience the latest hardwareConsolidate your workloads on Intel Cascade Lake servers with the industry’s highest DRAM density. Latest NVMe Tier-1 storage is configured and tuned with decades of industry experience for demanding workloads.All of Google Cloud, a millisecond awayFully managed low latency network makes all Google Cloud services seamlessly accessible to all Oracle workloads. Set up Oracle Data Guard in a few clicks for cost effective backup storage on Cloud Storage.Key featuresReliable, secure, and high-performance database infrastructure for your Oracle workloadsSeamlessly access all Oracle capabilitiesRun Oracle databases the same way you do it, on-premises. Install Oracle Real Application Clusters (RAC) on certified hardware for HA, use Oracle Data Guard for disaster recovery, and Oracle Recovery Manager (RMAN) for backups. Google Cloud's service catalog provides analogous topologies to all Oracle Maximum Availability Architecture templates. Migrate Oracle workloads with your preferred method (i.e. Oracle Data Guard, Oracle GoldenGate, Oracle Data Pump, or Oracle RMAN for backups). Maintain all your current playbooks, support runbooks, databases administrator teams, and system integrators.Integrated support and billingEnjoy a seamless experience with support for infrastructure, including defined SLAs for initial response, defined enterprise-grade SLA for infrastructure uptime and interconnect availability, 24/7 coverage for all Priority 1 and 2 issues, and unified billing across Google Cloud and Bare Metal Solution for Oracle.Industry-leading data protectionMeet demanding compliance requirements with industry certifications such as ISO, PCI DSS, and HIPAA, plus regional certifications where applicable. Copy data management and backups, fully integrated into Bare Metal Solution for Oracle.Tools and services to simplify operationsAutomate day-to-day operational database administrator tasks by either using Google’s open source Ansible based toolkit or Google Cloud's Kubernetes operator for Oracle. You can integrate these tools with their existing automation framework of choice. 
In addition, Google Cloud provides database and system administration professional services to further assist you with managing your Oracle workloads.VIDEORun your Oracle workloads in Google Cloud with Bare Metal Solution13:24CustomersLearn from customers using Bare Metal Solution for Oracle Case study Maxxton: 60% better app performance with Bare Metal Solution for Oracle5-min readCase studyLendlease: Globally integrated real estate and investment group creating places for everyone5-min readCase studySanken: Providing a comfortable, eco-friendly environment for customers and for society10-min readCase studyAvaloq: Delivering state-of-the-art cloud solutions for banks and wealth managers5-min readCase studyCommerzbank: Driving customer-centric banking experiences with Google Cloud5-min readCase studyStubHub: Providing reliable, fast ticket purchase services with next-gen cloud technology5-min readSee all customersGoogle Bare Metal Solution is our bridge to get off the Oracle database entirely. It gives us the flexibility to move the rest of our platform to the cloud so we can now modernize faster.Austin Sheppard, CTO, StubHubRead the blogWhat's newWhat’s newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoRunning Oracle-based applications in Google CloudWatch videoVideoBare Metal Solution sketchnoteWatch videoVideoCloud modernization for Oracle databasesWatch videoBlog postIntroducing Bare Metal Solution for SAP workloadsLearn moreDocumentationDocumentationGoogle Cloud BasicsPlanning for Bare Metal SolutionLearn how to get started with the Bare Metal Solution for Oracle.Learn moreQuickstartSetting up the environmentLearn how to set up the Bare Metal Solution for Oracle environment.Learn moreBest PracticeMaintaining the environmentAfter your Bare Metal Solution environment is up and running, use the information included in this guide to maintain your servers.Learn moreTutorialGetting started with Bare Metal SolutionListen to this podcast to learn how to get started with Bare Metal Solution on Google Cloud.Learn moreGoogle Cloud BasicsIntroduction to Bare Metal Solution for OracleEmbrace the cloud's pace of innovation and operational business model without disrupting your existing IT landscape or upgrading all your legacy applications. Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Use casesUse cases for Bare Metal Solution for OracleUse caseA day in the life of a database administratorAll tools, features, and options work the same as your on-premises deployments, without the need to manage the infrastructure. You can connect to the Oracle database using a SQL*Plus client, SQL*Developer/Toad, Oracle Enterprise Manager, and more. Connect from any virtual machine in your VPC using SSH to view Oracle OS processes and alert logs. Create new applications and microservices using Google services that connect to your Oracle database.Use caseA day in the life of a system administrator Many tools are available for better database life cycle management of the environment, including root access to the bare metal server, serial console redirection, reset APIs, and automatic daily snapshots of OS volumes. 
Bare Metal Solution for Oracle supports all major operating systems, including Red Hat Enterprise Linux (RHEL), Suse, Windows, and Oracle Linux. Google Cloud offers professional services to help with system administration tasks.View all technical guidesCompare featuresCompare Bare Metal Solution for Oracle to alternatives on AWS and AzureUnderstand the advantages in functionality and cost for Bare Metal Solution for Oracle.ImpactBare Metal Solution for OracleAWS and Azure virtual environmentsFlexibilityContinue to run any version, any feature set, any database option, and any customizations (patchsets)✕ Minimal support for Oracle database optionsCost advantagesUp to 50% reduction in Oracle license cost vs. AWS/AzureMinimum migration effort✕ Oracle requires more licenses per CPU core to run inside AWS or Azure✕ Re-tune the database to cloud storage and hypervisorLatest technologyCascade Lake servers, Tier 1 NVME storage, 100G network, and highest core to memory ratio✕ Slow to adapt latest innovations in silicon technology, low core to memory ratio, and lower performanceBusiness continuityEnterprise-grade deployment platformHigh availability with Oracle RACWorks with any application, any Oracle versionAll your existing investment in tooling and best practices will work as is✕ Minimal support for Oracle Real Application Clusters (RAC)✕ Some applications use database features not available from managed database services✕ Older Oracle releases may not be supported✕ Need to create new playbooks and runbooksBare Metal Solution for OracleFlexibilityContinue to run any version, any feature set, any database option, and any customizations (patchsets)Cost advantagesUp to 50% reduction in Oracle license cost vs. AWS/AzureMinimum migration effortLatest technologyCascade Lake servers, Tier 1 NVME storage, 100G network, and highest core to memory ratioBusiness continuityEnterprise-grade deployment platformHigh availability with Oracle RACWorks with any application, any Oracle versionAll your existing investment in tooling and best practices will work as isAWS and Azure virtual environmentsFlexibility✕ Minimal support for Oracle database optionsCost advantages✕ Oracle requires more licenses per CPU core to run inside AWS or Azure✕ Re-tune the database to cloud storage and hypervisorLatest technology✕ Slow to adapt latest innovations in silicon technology, low core to memory ratio, and lower performanceBusiness continuity✕ Minimal support for Oracle Real Application Clusters (RAC)✕ Some applications use database features not available from managed database services✕ Older Oracle releases may not be supported✕ Need to create new playbooks and runbooksPricingPricing informationThe pricing model for Bare Metal Solution for Oracle is subscription based. You can subscribe for right-sized hardware configurations with no up-front capital outlay. You can add capacity as your business needs evolve over time. 
Complete this form to talk to a sales representative to learn more.PartnersRecommended partnersWork with our industry-leading tech partners to manage Oracle workloads to Google Cloud.See all partnersORACLE® is a registered trademark of Oracle Corporation.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Contact SalesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Bare_Metal_Solution.txt b/Bare_Metal_Solution.txt new file mode 100644 index 0000000000000000000000000000000000000000..4e1a6df8ccb2ee38dea56245d881960f2051bfb2 --- /dev/null +++ b/Bare_Metal_Solution.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/bare-metal +Date Scraped: 2025-02-23T12:04:08.847Z + +Content: +Oracle and Google Cloud announce a groundbreaking multicloud partnership. Read the blog.Jump to Bare Metal Solution for OracleBare Metal Solution for OracleBring your Oracle workloads to Google Cloud with Bare Metal Solution and jumpstart your cloud journey with minimal risk.Contact usView DocumentationState-of-the-art certified infrastructure, precision tuned to meet all your performance needsDeploy Oracle capabilities like clustered databases, replication, and all performance featuresSeamlessly take advantage of all Google Cloud services with less than 2 ms latencyLeverage your on-premises licensing, run books, database administrators, and system integratorsBilling, support, and SLA provided by Google CloudBenefitsFully managed certified database infrastructureEnd-to-end infrastructure management including compute, storage, and networking. Fully managed and monitored data center operations such as power, cooling, smart hand support, and facilities. Experience the latest hardwareConsolidate your workloads on Intel Cascade Lake servers with the industry’s highest DRAM density. Latest NVMe Tier-1 storage is configured and tuned with decades of industry experience for demanding workloads.All of Google Cloud, a millisecond awayFully managed low latency network makes all Google Cloud services seamlessly accessible to all Oracle workloads. Set up Oracle Data Guard in a few clicks for cost effective backup storage on Cloud Storage.Key featuresReliable, secure, and high-performance database infrastructure for your Oracle workloadsSeamlessly access all Oracle capabilitiesRun Oracle databases the same way you do it, on-premises. Install Oracle Real Application Clusters (RAC) on certified hardware for HA, use Oracle Data Guard for disaster recovery, and Oracle Recovery Manager (RMAN) for backups. Google Cloud's service catalog provides analogous topologies to all Oracle Maximum Availability Architecture templates. Migrate Oracle workloads with your preferred method (i.e. Oracle Data Guard, Oracle GoldenGate, Oracle Data Pump, or Oracle RMAN for backups). 
Maintain all your current playbooks, support runbooks, databases administrator teams, and system integrators.Integrated support and billingEnjoy a seamless experience with support for infrastructure, including defined SLAs for initial response, defined enterprise-grade SLA for infrastructure uptime and interconnect availability, 24/7 coverage for all Priority 1 and 2 issues, and unified billing across Google Cloud and Bare Metal Solution for Oracle.Industry-leading data protectionMeet demanding compliance requirements with industry certifications such as ISO, PCI DSS, and HIPAA, plus regional certifications where applicable. Copy data management and backups, fully integrated into Bare Metal Solution for Oracle.Tools and services to simplify operationsAutomate day-to-day operational database administrator tasks by either using Google’s open source Ansible based toolkit or Google Cloud's Kubernetes operator for Oracle. You can integrate these tools with their existing automation framework of choice. In addition, Google Cloud provides database and system administration professional services to further assist you with managing your Oracle workloads.VIDEORun your Oracle workloads in Google Cloud with Bare Metal Solution13:24CustomersLearn from customers using Bare Metal Solution for Oracle Case study Maxxton: 60% better app performance with Bare Metal Solution for Oracle5-min readCase studyLendlease: Globally integrated real estate and investment group creating places for everyone5-min readCase studySanken: Providing a comfortable, eco-friendly environment for customers and for society10-min readCase studyAvaloq: Delivering state-of-the-art cloud solutions for banks and wealth managers5-min readCase studyCommerzbank: Driving customer-centric banking experiences with Google Cloud5-min readCase studyStubHub: Providing reliable, fast ticket purchase services with next-gen cloud technology5-min readSee all customersGoogle Bare Metal Solution is our bridge to get off the Oracle database entirely. It gives us the flexibility to move the rest of our platform to the cloud so we can now modernize faster.Austin Sheppard, CTO, StubHubRead the blogWhat's newWhat’s newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoRunning Oracle-based applications in Google CloudWatch videoVideoBare Metal Solution sketchnoteWatch videoVideoCloud modernization for Oracle databasesWatch videoBlog postIntroducing Bare Metal Solution for SAP workloadsLearn moreDocumentationDocumentationGoogle Cloud BasicsPlanning for Bare Metal SolutionLearn how to get started with the Bare Metal Solution for Oracle.Learn moreQuickstartSetting up the environmentLearn how to set up the Bare Metal Solution for Oracle environment.Learn moreBest PracticeMaintaining the environmentAfter your Bare Metal Solution environment is up and running, use the information included in this guide to maintain your servers.Learn moreTutorialGetting started with Bare Metal SolutionListen to this podcast to learn how to get started with Bare Metal Solution on Google Cloud.Learn moreGoogle Cloud BasicsIntroduction to Bare Metal Solution for OracleEmbrace the cloud's pace of innovation and operational business model without disrupting your existing IT landscape or upgrading all your legacy applications. 
Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Use casesUse cases for Bare Metal Solution for OracleUse caseA day in the life of a database administratorAll tools, features, and options work the same as your on-premises deployments, without the need to manage the infrastructure. You can connect to the Oracle database using a SQL*Plus client, SQL*Developer/Toad, Oracle Enterprise Manager, and more. Connect from any virtual machine in your VPC using SSH to view Oracle OS processes and alert logs. Create new applications and microservices using Google services that connect to your Oracle database.Use caseA day in the life of a system administrator Many tools are available for better database life cycle management of the environment, including root access to the bare metal server, serial console redirection, reset APIs, and automatic daily snapshots of OS volumes. Bare Metal Solution for Oracle supports all major operating systems, including Red Hat Enterprise Linux (RHEL), Suse, Windows, and Oracle Linux. Google Cloud offers professional services to help with system administration tasks.View all technical guidesCompare featuresCompare Bare Metal Solution for Oracle to alternatives on AWS and AzureUnderstand the advantages in functionality and cost for Bare Metal Solution for Oracle.ImpactBare Metal Solution for OracleAWS and Azure virtual environmentsFlexibilityContinue to run any version, any feature set, any database option, and any customizations (patchsets)✕ Minimal support for Oracle database optionsCost advantagesUp to 50% reduction in Oracle license cost vs. AWS/AzureMinimum migration effort✕ Oracle requires more licenses per CPU core to run inside AWS or Azure✕ Re-tune the database to cloud storage and hypervisorLatest technologyCascade Lake servers, Tier 1 NVME storage, 100G network, and highest core to memory ratio✕ Slow to adapt latest innovations in silicon technology, low core to memory ratio, and lower performanceBusiness continuityEnterprise-grade deployment platformHigh availability with Oracle RACWorks with any application, any Oracle versionAll your existing investment in tooling and best practices will work as is✕ Minimal support for Oracle Real Application Clusters (RAC)✕ Some applications use database features not available from managed database services✕ Older Oracle releases may not be supported✕ Need to create new playbooks and runbooksBare Metal Solution for OracleFlexibilityContinue to run any version, any feature set, any database option, and any customizations (patchsets)Cost advantagesUp to 50% reduction in Oracle license cost vs. 
AWS/AzureMinimum migration effortLatest technologyCascade Lake servers, Tier 1 NVME storage, 100G network, and highest core to memory ratioBusiness continuityEnterprise-grade deployment platformHigh availability with Oracle RACWorks with any application, any Oracle versionAll your existing investment in tooling and best practices will work as isAWS and Azure virtual environmentsFlexibility✕ Minimal support for Oracle database optionsCost advantages✕ Oracle requires more licenses per CPU core to run inside AWS or Azure✕ Re-tune the database to cloud storage and hypervisorLatest technology✕ Slow to adapt latest innovations in silicon technology, low core to memory ratio, and lower performanceBusiness continuity✕ Minimal support for Oracle Real Application Clusters (RAC)✕ Some applications use database features not available from managed database services✕ Older Oracle releases may not be supported✕ Need to create new playbooks and runbooksPricingPricing informationThe pricing model for Bare Metal Solution for Oracle is subscription based. You can subscribe for right-sized hardware configurations with no up-front capital outlay. You can add capacity as your business needs evolve over time. Complete this form to talk to a sales representative to learn more.PartnersRecommended partnersWork with our industry-leading tech partners to manage Oracle workloads to Google Cloud.See all partnersORACLE® is a registered trademark of Oracle Corporation.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Contact SalesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Batch.txt b/Batch.txt new file mode 100644 index 0000000000000000000000000000000000000000..a7f35ad2e3b40be803cf98380af9aec8129b3538 --- /dev/null +++ b/Batch.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/batch +Date Scraped: 2025-02-23T12:02:43.267Z + +Content: +Jump to BatchBatchFully managed batch service to schedule, queue, and execute batch jobs on Google's infrastructure.Go to consoleProvisions and autoscales capacity while eliminating the need to manage third-party solutionsNatively integrated with Google Cloud to run, scale, and monitor your workloadReduce the computing costs of workloads with Spot machinesStart using Batch now with this quickstart guide9:50See how Batch simplifies the execution of jobsBenefitsFocus on business critical tasksLeverage fully managed and scalable compute infrastructure to shift focus to job submission and extracting business insights from the job's results.Define your execution modelRun high throughput or tightly coupled computations defined by a script or container.Enhanced developer experienceBatch simplifies workload development and execution. Batch jobs can be submitted within a few steps. 
Leverage Cloud Storage, Pub/Sub, Cloud Logging, and Workflows for an end-to-end developer experience.Key featuresKey featuresDynamic resource provisioning and autoscalingRun any scale batch jobs on cloud compute resources to limit job wait times while executing requests in parallel and scaling resources without manual intervention.Support for scripts and containerized workloadsBatch provides a consistent experience for Docker containerized workloads and script based workloads that run directly on VMs.Leverage native services and batch toolsEasily adopt first-party services to control the end-to-end batch workflow from preprocessing to postprocessing.View all featuresAt Roivant Discovery, we leverage massive computational simulations to drive the discovery of novel drugs across small molecule modalities. Our scientists are required to operate at the cutting edge of quantum physics with speed and agility, at scale. Google Batch's unprecedented simplicity and flexibility will ensure we're always focused on innovation.Matt Maisak, COO at Roivant DiscoveryDocumentationDocumentationQuickstartBatch quickstartLearn about what is needed to get started with Batch and how easy it is to create a batch job in the console, gcloud, REST API, and client libraries. Learn moreGoogle Cloud BasicsDefining different types of batch jobs and more.Learn how to submit containerized and script based workloads. These jobs can be array jobs or multi-node jobs using MPI libraries.Learn moreGoogle Cloud BasicsStorage file systemsLearn about the storage file systems supported by Batch and how to use them with a batch job.Learn moreGoogle Cloud BasicsMonitor your jobLearn how to view logs to gather insights on job tasks and how to describe the job to inspect its details and statuses.Learn moreAPIs & LibrariesUsing the Batch API referencesLearn about all the fields supported by the Batch API and how to use them.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for Batch.Use casesUse casesUse caseLife sciences: genomics & drug discovery pipelinesHigh throughput processing of reproducible pipelines used for genomic sequencing, drug discovery, and more.Use caseFinancial services: quantitative and risk analysisPerform Monte Carlo simulations and quickly analyze results needed to transact business in the market.Use caseManufacturing: electronic design automationAutomate verification tests and simulations based on varying inputs to optimize designs.View all technical guidesAll featuresAll featuresSupport for containers or scriptsRun your scripts natively on Compute Engine VM instances or bring your containerized workload that will run to completion.Leverage Google Cloud computeGet the latest software and hardware available as a service to use with Batch.Job priorities and retriesDefine priorities for your job and establish automated retry strategies.Pub/Sub notifications for BatchConfigure Pub/Sub with Batch to asynchronously communicate messages to subscribers.Integrated logging and monitoringRetrieve stderr and stdout logs directly to Cloud Logging. Audit logs help you answer questions about who did what, where, and when. Monitor metrics related to resources used in Cloud Monitoring.Alternate methods to use BatchBatch APIs can be called directly via gcloud, REST APIs, client libraries, or the Cloud Console. 
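As a concrete illustration of the client-library path, the following is a hedged sketch that submits a small script-based job with the Python client for Batch (google-cloud-batch). The project ID, region, job name, and machine type are placeholders, and the exact fields you need depend on your workload.

```python
from google.cloud import batch_v1


def submit_sample_job(project_id: str, region: str, job_name: str) -> batch_v1.Job:
    """Submit a small script-based Batch job (illustrative sketch)."""
    client = batch_v1.BatchServiceClient()

    # A runnable that executes a short shell script in each task.
    runnable = batch_v1.Runnable()
    runnable.script = batch_v1.Runnable.Script(
        text="echo Hello from task ${BATCH_TASK_INDEX}"
    )

    task = batch_v1.TaskSpec(runnables=[runnable], max_retry_count=2)
    group = batch_v1.TaskGroup(task_spec=task, task_count=4)

    # Request a machine type for the job's VMs (optional; Batch can choose a default).
    policy = batch_v1.AllocationPolicy.InstancePolicy(machine_type="e2-standard-4")
    instances = batch_v1.AllocationPolicy.InstancePolicyOrTemplate(policy=policy)
    allocation = batch_v1.AllocationPolicy(instances=[instances])

    job = batch_v1.Job(
        task_groups=[group],
        allocation_policy=allocation,
        logs_policy=batch_v1.LogsPolicy(
            destination=batch_v1.LogsPolicy.Destination.CLOUD_LOGGING
        ),
    )

    create_request = batch_v1.CreateJobRequest(
        parent=f"projects/{project_id}/locations/{region}",
        job=job,
        job_id=job_name,
    )
    return client.create_job(create_request)
```

After submission you can inspect the job's status with the client's get_job method or review its task output in Cloud Logging, as described above.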
In addition, Batch can be used with an ecosystem of workflow engines.Identity and access managementControl the access of resources and service with IAM permissions and VPC Service Controls.PricingPricingThere is no charge for the Batch service. You will only pay for the Google Cloud resources used to execute your batch jobs.View pricing detailsPartnersInnovate with our partnersReady to take advantage of Batch to run your workloads? These partners can assist you.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Best_practices.txt b/Best_practices.txt new file mode 100644 index 0000000000000000000000000000000000000000..38d81091dd7e6c3c630d824fabbfc05b0cd5954a --- /dev/null +++ b/Best_practices.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/bps-for-mitigating-ransomware-attacks +Date Scraped: 2025-02-23T11:56:47.485Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for mitigating ransomware attacks using Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-03 UTC Code that was created by a third party to infiltrate your systems to hijack, encrypt, and steal data is referred to as ransomware. To protect your enterprise resources and data from ransomware attacks, you must put multi-layered controls in place across your on-premises and cloud environments. This document describes some best practices to help your organization identify, prevent, detect, and respond to ransomware attacks. This document is part of a series that is intended for security architects and administrators. It describes how Google Cloud can help your organization mitigate the effects of ransomware attacks. The series has the following parts: Mitigating ransomware attacks using Google Cloud Best practices for mitigating ransomware attacks using Google Cloud (this document) Identify your risks and assets To determine your organization's exposure to ransomware attacks, you must develop your understanding of the risks to your systems, people, assets, data, and capabilities. To help you, Google Cloud provides the following capabilities: Asset management with Cloud Asset Inventory Risk management programs Data classification Supply chain risk management Manage your assets with Cloud Asset Inventory To help mitigate ransomware attacks, you need to know what your organization's assets are, their state, and their purpose, both in Google Cloud and in your on-premises or other cloud environments. For static assets, maintain a baseline of the last known good configuration in a separate location. Use Cloud Asset Inventory to obtain a five-week history of your resources in Google Cloud. Set up monitoring feeds to receive notifications when particular changes to resources occur, or when there are policy deviations. To track changes so that you can watch for attacks that progress over a longer time period, export the feed. To create the export, you can use tools like Terraform. For this type of analysis, you can export the inventory to a BigQuery table or a Cloud Storage bucket. 
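To make the export step above concrete, here is a hedged sketch that uses the Cloud Asset Inventory Python client to snapshot resource metadata into a Cloud Storage bucket. The project ID and bucket path are placeholders, and a BigQuery destination can be configured through the same output_config field.

```python
from google.cloud import asset_v1


def export_asset_inventory(project_id: str, gcs_uri: str) -> None:
    """Export a point-in-time asset snapshot for later comparison (illustrative)."""
    client = asset_v1.AssetServiceClient()

    output_config = asset_v1.OutputConfig()
    output_config.gcs_destination.uri = gcs_uri  # e.g. "gs://my-asset-exports/assets.json"

    operation = client.export_assets(
        request={
            "parent": f"projects/{project_id}",
            "content_type": asset_v1.ContentType.RESOURCE,
            "output_config": output_config,
        }
    )
    # export_assets is a long-running operation; block until it finishes.
    operation.result()
```

Keeping periodic exports like this in a separate project or bucket gives you the baseline of last known good configurations that the preceding guidance recommends.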
Assess and manage your risks Use an existing risk assessment framework to help you catalog your risks and determine how capable your organization is of detecting and counteracting a ransomware attack. These assessments check factors like whether you have malware protection controls, properly configured access controls, database protection, and backups. For example, the Cloud Security Alliance (CSA) provides the Cloud Controls Matrix (CCM) to assist organizations in their cloud risk assessments. For CCM information specific to Google Cloud, see New CIS Benchmark for Google Cloud Computing Platform. To identify potential application gaps and take actions to remediate them, you can use threat models such as OWASP Application Threat Modeling. For more information on how you can help mitigate the top 10 OWASP security risks with Google Cloud, see OWASP Top 10 mitigation options on Google Cloud. After you catalog your risks, determine how to respond to them, and whether you want to accept, avoid, transfer, or mitigate the risks. The Risk Protection Program provides access to Risk Manager and cyber insurance. Use Risk Manager to scan your workloads on Google Cloud and implement the security recommendations that help reduce your ransomware-related risks. Configure Sensitive Data Protection Sensitive Data Protection lets you inspect data in your Google Cloud organization and data coming from external sources. Configure Sensitive Data Protection to classify and protect your confidential data using de-identification techniques. Classifying your data helps you to focus your monitoring and detection efforts on data that matters most to your organization. Combine Sensitive Data Protection with other products such as the Security Command Center or with a third-party SIEM to help ensure appropriate monitoring and alerting on any unexpected changes to your data. Manage risks to your supply chain A key attack vector for ransomware attacks is vulnerabilities within the supply chain. The challenge with this attack vector is that most organizations have many vendors that they must track, each with their own list of vendors. If you create and deploy applications, use frameworks such as the Supply-chain Levels for Software Artifacts (SLSA). These frameworks help define the requirements and best practices that your enterprise can use to protect your source code and build processes. Using SLSA, you can work through four security levels to improve the security of the software that you produce. If you use open source packages in your applications, consider using security scorecards to auto-generate the security score of a particular open source package. Security scorecards are a low-cost, easy-to-use method to get an assessment before your developers integrate open source packages with your systems. To learn about resources that you can use to help verify the security of Google Cloud, see Vendor security risk assessment. Control access to your resources and data As your organization moves workloads outside of your on-premises network, you must manage access to those workloads across all the environments that host your resources and data. Google Cloud supports several controls that help you set up appropriate access. The following sections highlight some of them. Set up zero trust security with Chrome Enterprise Premium As you move your workloads from your on-premises environment to the cloud, your network trust model changes.
Zero trust security means that no one is trusted implicitly, whether they are inside or outside of your organization's network. Unlike a VPN, zero trust security shifts access controls from the network perimeter to users and their devices. Zero trust security means that the user's identity and context is considered during authentication. This security control provides an important prevention tactic against ransomware attacks that are successful only after attackers breach your network. Use Chrome Enterprise Premium to set up zero trust security in Google Cloud. Chrome Enterprise Premium provides threat and data protection and access controls. To learn how to set it up, see Getting started with Chrome Enterprise Premium. If your workloads are located both on-premises and in Google Cloud, configure Identity-Aware Proxy (IAP). IAP lets you extend zero trust security to your applications in both locations. It provides authentication and authorization for users who access your applications and resources, using access control policies. Configure least privilege Least privilege ensures users and services only have the access that they require to perform their specific tasks. Least privilege slows down the ability of ransomware to spread throughout an organization because an attacker can't easily escalate their privileges. To meet your organization's particular needs, use the fine-grained policies, roles, and permissions in Identity and Access Management (IAM). In addition, analyze your permissions regularly using role recommender and Policy Analyzer. Role recommender uses machine learning to analyze your settings and provide recommendations to help ensure your role settings adhere to the principle of least privilege. Policy Analyzer lets you see which accounts have access to your cloud resources. For more information about least privilege, see Using IAM securely. Configure multi-factor authentication with Titan Security Keys Multi-factor authentication (MFA) ensures that users must provide a password and a biometric factor or a possessive factor (like a token) before they can access a resource. As passwords can be relatively easy to discover or steal, MFA helps to prevent ransomware attackers from being able to take over accounts. Consider Titan Security Keys for MFA to help prevent account takeovers and phishing attacks. Titan Security Keys are tamper-resistant and can be used with any service that supports the Fast IDentity Online (FIDO) Alliance standards. Enable MFA for your applications, Google Cloud administrators, SSH connections to your VMs (by using OS Login), and for anyone who requires privileged access to sensitive information. Use Cloud Identity to configure MFA for your resources. For more information, see Enforce uniform MFA to company-owned resources. Protect your service accounts Service accounts are privileged identities that provide access to your Google Cloud resources, so attackers would consider them valuable. For best practices on protecting service accounts, see Best practices for working with service accounts. Protect your critical data The main goals of a ransomware attack are generally the following: To make your critical data inaccessible until you pay the ransom. To exfiltrate your data. To protect your critical data from attacks, combine various security controls to control access to data, based on the data sensitivity. The following sections describe some best practices that you can use to help protect your data and help effectively mitigate ransomware attacks. 
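As a brief illustration of the least-privilege guidance above, the following sketch grants a narrowly scoped, read-only role on a single Cloud Storage bucket instead of a broad project-level role. The bucket name and group address are placeholders, and the example assumes the google-cloud-storage Python client.

```python
from google.cloud import storage


def grant_bucket_viewer(bucket_name: str, member: str) -> None:
    """Grant read-only access to one bucket, not the whole project (illustrative)."""
    client = storage.Client()
    bucket = client.bucket(bucket_name)

    policy = bucket.get_iam_policy(requested_policy_version=3)
    # A narrow, resource-level binding keeps the blast radius small if the
    # account is ever compromised.
    policy.bindings.append(
        {"role": "roles/storage.objectViewer", "members": {member}}
    )
    bucket.set_iam_policy(policy)


# Example usage with placeholder values:
# grant_bucket_viewer("finance-reports-prod", "group:auditors@example.com")
```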
Configure data redundancy Google Cloud has global-scale infrastructure that is designed to provide resiliency, scalability, and high availability. Cloud resilience helps Google Cloud to recover and adapt to various events. For more information, see Google Cloud infrastructure reliability guide. In addition to the default resiliency capabilities in Google Cloud, configure redundancy (N+2) on the cloud storage option that you use to store your data. Redundancy helps mitigate the effects of a ransomware attack because it removes a single point of failure and provides backups of your primary systems in case they are compromised. If you use Cloud Storage, you can enable Object Versioning or the Bucket Lock feature. The Bucket Lock feature lets you configure a data retention policy for your Cloud Storage buckets. Note: If you choose data redundancy, regularly test your recovery plans. In addition, ensure your recovery plans include the ability to recover from isolated, backed-up sources if your redundant systems are also compromised (see Back up your databases and filestores). For more information about data redundancy in Google Cloud, see the following: Redundancy across regions with Cloud Storage buckets Copying BigQuery datasets across regions Back up your databases and filestores Backups let you keep copies of your data for recovery purposes so that you can create a replicated environment if a security incident occurs. Store backups both in the format that you need and in raw source form, if possible. To avoid compromising your backup data, store these copies in separate, isolated zones away from your production zone. In addition, back up binaries and executable files separately from your data. When planning for a replicated environment, ensure that you apply the same (or stronger) security controls in your mirror environment. Determine the time it takes you to recreate your environment and to create any new administrator accounts that you require. For some examples of backups in Google Cloud, see the following: Backups in Cloud SQL for SQL Server Backups in Filestore In addition to these backup options, consider using Backup and DR Service to back up Google Cloud workloads. For more information, see solutions for backup and disaster recovery. Note: Although backups and redundancy are best practices for helping to mitigate ransomware attacks, they require a certain level of commitment by your organization to ensure that your recovery plans work. Regularly practice your recovery plans. Keep historical copies of your backups in isolated locations so that you can recover data from a historic backup if a more current backup is compromised. Protect and back up your data encryption keys To help prevent attackers from getting access to your data encryption keys, rotate your keys regularly and monitor key-related activities. Implement a key backup strategy that considers the key location and whether the keys are Google-managed (software or HSM), or whether you supply the keys to Google. If you supply your own keys, configure backups and key rotation using the controls in your external key management system. For more information, see Manage encryption keys with Cloud Key Management Service. Protect your network and infrastructure To protect your network, you must ensure that attackers can't easily traverse it to get access to your sensitive data. The following sections describe some of the items to consider as you plan and deploy your network.
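For example, the Object Versioning and retention controls mentioned in the data redundancy discussion above can be enabled with a few lines of the google-cloud-storage client. This is a sketch with placeholder values; locking a retention policy is permanent, so treat that step with care.

```python
from google.cloud import storage


def harden_backup_bucket(bucket_name: str, retention_days: int = 30) -> None:
    """Enable versioning and a retention policy on a backup bucket (illustrative)."""
    client = storage.Client()
    bucket = client.get_bucket(bucket_name)

    bucket.versioning_enabled = True                      # keep prior object versions
    bucket.retention_period = retention_days * 24 * 3600  # minimum retention, in seconds
    bucket.patch()

    # Optionally make the retention policy immutable after reloading the bucket.
    # This cannot be undone:
    # bucket.lock_retention_policy()
```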
Automate infrastructure provisioning Automation is an important control against ransomware attackers, as automation provides your operations team with a known good state, fast rollback, and troubleshooting capabilities. Automation requires various tools such as Terraform, Jenkins, Cloud Build, and others. Deploy a secure Google Cloud environment using the enterprise foundations blueprint. If necessary, build on the security foundations blueprint with additional blueprints or design your own automation. For more security guidance, see the Cloud Security Best Practices Center. Segment your network Network segments and perimeters help slow down the progress that an attacker can make in your environment. To segment services and data and to help secure your perimeter, Google Cloud offers the following tools: To direct and protect the flow of traffic, use Cloud Load Balancing with firewall rules. To set up perimeters within your organization to segment resources and data, use VPC Service Controls. To set up connections with your other workloads, whether on-premises or in other cloud environments, use Cloud VPN or Cloud Interconnect. To restrict access to IP addresses and ports, configure organization policies such as "Restrict Public IP access on Cloud SQL instances" and "Disable VM serial port access." To harden the VMs on your network, configure organization policies such as "Shielded VM." Customize network security controls to match your risks for different resources and data. Protect your workloads Google Cloud includes services that let you build, deploy, and manage code. Use these services to prevent drift and rapidly detect and patch issues such as misconfigurations and vulnerabilities. To protect your workloads, build a gated deployment process that prevents ransomware attackers from getting initial access through unpatched vulnerabilities and misconfigurations. The following sections describe some of the best practices that you can implement to help protect your workloads. For example, to deploy workloads in GKE Enterprise, you do the following: Configure trusted builds and deployments. Isolate applications within a cluster. Isolate pods on a node. For more information about GKE Enterprise security, see Hardening your cluster's security. Use a secure software development lifecycle When developing your software development lifecycle (SDLC), use industry best practices such as DevSecOps. The DevOps Research and Assessment (DORA) research program describes many of the technical, process, measurement, and cultural capabilities of DevSecOps. DevSecOps can help mitigate ransomware attacks because it helps ensure that security considerations are included at each step of the development lifecycle and lets your organization rapidly deploy fixes. For more information about using an SDLC with Google Kubernetes Engine (GKE), see Software supply chain security. Use a secure continuous integration and continuous delivery pipeline Continuous integration and continuous delivery (CI/CD) provides a mechanism for getting your latest functionality to your customers quickly. To prevent ransomware attacks against your pipeline, you must perform appropriate code analysis and monitor your pipeline for malicious attacks. To protect your CI/CD pipeline on Google Cloud, use access controls, segregated duties, and cryptographic code verification as the code moves through the CI/CD pipeline. Use Cloud Build to track your build steps and Artifact Registry to complete vulnerability scanning on your container images. 
Use Binary Authorization to verify that your images meet your standards. When you build your pipeline, ensure that you have backups for your application binaries and executable files. Back them up separately from your confidential data. Protect your deployed applications Attackers can try to access your network by finding Layer 7 vulnerabilities within your deployed applications. To help mitigate against these attacks, complete threat modeling activities to find potential threats. After you minimize your attack surface, configure Google Cloud Armor, which is a web-application firewall (WAF) that uses Layer 7 filtering and security policies. WAF rules help you protect your applications against numerous OWASP Top 10 issues. For more information, see OWASP Top 10 mitigation options on Google Cloud. For information about deploying Google Cloud Armor with a global external Application Load Balancer to protect your applications across multiple regions, see Getting to know Google Cloud Armor—defense at scale for internet-facing services. For information about using Google Cloud Armor with applications that run outside Google Cloud, see Integrating Google Cloud Armor with other Google products. Patch vulnerabilities quickly A key attack vector for ransomware is open-source software vulnerabilities. To mitigate the effects that ransomware might have, you must be able to rapidly deploy fixes across your fleet. According to the shared responsibility model, you're responsible for any software vulnerabilities in your applications, while Google is responsible for maintaining the security of the underlying infrastructure. To view vulnerabilities associated with the operating systems that your VMs are running and to manage the patching process, use OS patch management in Compute Engine. For GKE and GKE Enterprise, Google automatically patches vulnerabilities, though you have some control over GKE maintenance windows. If you're using Cloud Build, automate builds whenever a developer commits a change to the code source repository. Ensure that your build configuration file includes appropriate verification checks, such as vulnerability scanning and integrity checks. For information about patching Cloud SQL, see Maintenance on Cloud SQL instances. Detect attacks Your ability to detect attacks depends on your detection capabilities, your monitoring and alerting system, and the activities that prepare your operations teams to identify attacks when they occur. This section describes some best practices for detecting attacks. Configure monitoring and alerts Enable Security Command Center to get centralized visibility into any security concerns and risks within your Google Cloud environment. Customize the dashboard to ensure that the events that are most important to your organization are most visible. Use Cloud Logging to manage and analyze the logs from your services in Google Cloud. For additional analysis, you can choose to integrate with Google Security Operations or export the logs to your organization's SIEM. In addition, use Cloud Monitoring to measure the performance of your service and resources and set up alerts. For example, you can monitor for sudden changes to the number of VMs running in your environment, which might be a sign that malware is present in your environment. Make all this information available to your security operations center in a centralized way. Build detection capabilities Build detection capabilities in Google Cloud that correspond with your risks and workload needs. 
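To make the centralized-visibility guidance above concrete, here is a hedged sketch that uses the Security Command Center Python client to list active findings across all sources in an organization. The organization ID is a placeholder, and the fields available on each finding depend on your Security Command Center tier.

```python
from google.cloud import securitycenter


def print_active_findings(organization_id: str) -> None:
    """List active Security Command Center findings across all sources (illustrative)."""
    client = securitycenter.SecurityCenterClient()
    # "-" means findings from every source in the organization.
    all_sources = f"organizations/{organization_id}/sources/-"

    results = client.list_findings(
        request={"parent": all_sources, "filter": 'state="ACTIVE"'}
    )
    for result in results:
        finding = result.finding
        print(finding.category, finding.severity, finding.resource_name)
```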
These capabilities provide you with more insight into advanced threats, and help you better monitor your compliance requirements. If you have Security Command Center Premium tier, use Event Threat Detection and Google SecOps. Event Threat Detection searches your logs for potential security attacks and logs its findings in the Security Command Center. Event Threat Detection lets you monitor both Google Cloud and Google Workspace at the same time. It checks for malware based on known bad domains and known bad IP addresses. For more information, see Using Event Threat Detection. Use Google SecOps to store and analyze your security data in one place. Google SecOps helps enhance the process of handling threats in Google Cloud by adding investigative abilities into Security Command Center Premium. You can use Google SecOps to create detection rules, set up indicators of compromise matching, and perform threat hunting activities. Google SecOps has the following features: When you map logs, Google SecOps enriches them and links them together into timelines, so that you can see the entire span of an attack. Google SecOps constantly re-evaluates log activity against threat intelligence gathered by the Google Cloud Threat Intelligence for Google Security Operations team. When the intelligence changes, Google SecOps automatically reapplies it against all historical activity. You can write your own YARA rules to improve your threat detection capabilities. Optionally, you can use a Google Cloud partner to further augment your detection capabilities. Plan for a ransomware attack To prepare for a ransomware attack, complete business continuity and disaster recovery plans, create a ransomware incident response playbook, and perform tabletop exercises. For your incident response playbook, consider the functionality available for each service. For example, if you're using GKE with Binary Authorization, you can add breakglass processes. Ensure that your incident response playbook helps you quickly contain infected resources and accounts and move to healthy secondary sources and backups. If you use a backup service like Backup and DR, regularly practice your restore procedures from Google Cloud to your on-premises environment. Build a cyber-resiliency program and a backup strategy that prepares you to restore core systems or assets affected by a ransomware incident. Cyber resiliency is critical for supporting recovery timelines and lessening the effects of an attack so you can get back to operating your business. Depending on the scope of an attack and the regulations that apply to your organization, you might need to report the attack to the appropriate authorities. Ensure that contact information is accurately captured in your incident response playbook. Respond to and recover from attacks When an attack occurs, you need to follow your incident response plan. Your response likely goes through four phases, which are: Incident identification Incident coordination and investigation Incident resolution Incident closure Best practices related to incident response are further described in the following sections. For information on how Google manages incidents, see Data incident response process. Activate your incident response plan When you detect a ransomware attack, activate your plan. After you confirm that the incident isn't a false positive and that it affects your Google Cloud services, open a P1 Google Support ticket. 
Google Support responds as documented in the Google Cloud: Technical Support Services Guidelines. If your organization has a Google technical account manager (TAM) or other Google representative, contact them as well. Coordinate your incident investigation After you activate your plan, gather the team within your organization that needs to be involved in your incident coordination and resolution processes. Ensure that these tools and processes are in place to investigate and resolve the incident. Continue to monitor your Google Support ticket and work with your Google representative. Respond to any requests for further information. Keep detailed notes on your activities. Resolve the incident After you complete your investigation, follow your incident response plan to remove the ransomware and restore your environment to a healthy state. Depending on the severity of the attack and the security controls that you have enabled, your plan can include activities such as the following: Quarantining infected systems. Restoring from healthy backups. Restoring your infrastructure to a previously known good state using your CI/CD pipeline. Verifying that the vulnerability was removed. Patching all systems that might be vulnerable to a similar attack. Implementing the controls that you require to avoid a similar attack. As you progress through the resolution stage, continue to monitor your Google Support ticket. Google Support takes appropriate actions within Google Cloud to contain, eradicate, and (if possible) recover your environment. Continue to keep detailed notes on your activities. Close the incident You can close the incident after your environment is restored to a healthy state and you have verified that the ransomware has been eradicated from your environment. Inform Google Support when your incident is resolved and your environment is restored. If one is scheduled, participate in a joint retrospective with your Google representative. Ensure that you capture any lessons learned from the incident, and set in place the controls that you require to avoid a similar attack. Depending on the nature of the attack, you could consider the following actions: Writing detection rules and alerts that would automatically trigger should the attack occur again. Updating your incident response playbook to include any lessons learned. Improving your security posture based on your retrospective findings. What's next Learn more about the incident response process at Google. Bookmark Google Cloud Status Dashboard to view Google Cloud status. Improve your incident response plan with Google's SRE book - Incident Response. Read the Architecture Framework for more best practices for Google Cloud. Plan your disaster recovery processes. Help secure software supply chains on GKE. Learn how Google identifies and protects against the largest DDoS attacks. 
Send feedback \ No newline at end of file diff --git a/Best_practices_and_reference_architectures_for_VPC_design.txt b/Best_practices_and_reference_architectures_for_VPC_design.txt new file mode 100644 index 0000000000000000000000000000000000000000..22c400f1b8d8a6bded0a597bbc6d567d98651991 --- /dev/null +++ b/Best_practices_and_reference_architectures_for_VPC_design.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/best-practices-vpc-design +Date Scraped: 2025-02-23T11:53:40.804Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices and reference architectures for VPC design Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-30 UTC This guide introduces best practices and typical enterprise architectures for the design of Virtual Private Cloud (VPC) with Google Cloud. This guide is for cloud network architects and system architects who are already familiar with Google Cloud networking concepts. General principles and first steps Best practices: Identify decision makers, timelines, and pre-work. Consider VPC network design early. Keep it simple. Use clear naming conventions. Identify decision makers, timelines, and pre-work As a first step in your VPC network design, identify the decision makers, timelines, and pre-work necessary to ensure that you can address stakeholder requirements. Stakeholders might include application owners, security architects, solution architects, and operations managers. The stakeholders themselves might change depending on whether you are planning your VPC network for a project, a line of business, or the entire organization. Part of the pre-work is to get the team acquainted with concepts and terminology around VPC network design. Useful documents include the following: Resource Manager documentation Identity and Access Management (IAM) documentation Virtual Private Cloud documentation Consider VPC network design early Make VPC network design an early part of designing your organizational setup in Google Cloud. It can be disruptive to your organization if you later need to change fundamental things such as how your network is segmented or where your workloads are located. Different VPC network configurations can have significant implications for routing, scale, and security. Careful planning and deep understanding of your specific considerations helps you to create a solid architectural foundation for incremental workloads. Keep it simple Keeping the design of your VPC network topology simple is the best way to ensure a manageable, reliable, and well-understood architecture. Use clear naming conventions Make your naming conventions simple, intuitive, and consistent. This ensures that administrators and end users understand the purpose of each resource, where it is located, and how it is differentiated from other resources. Commonly accepted abbreviations of long words help with brevity. Using familiar terminology where possible helps with readability. Consider the components illustrated in the following example when establishing your naming conventions: Company name: Acme Company: acmeco Business unit: Human Resources: hr Application code: Compensation system: comp Region code: northamerica-northeast1: na-ne1, europe-west1: eu-we1 Environment codes: dev, test, uat, stage, prod In this example, the development environment for the human resources department's compensation system is named acmeco-hr-comp-eu-we1-dev. 
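To make a convention like this repeatable, some teams encode it in a small helper that provisioning tooling and CI checks can share. The following Python sketch is illustrative only; the component values and region abbreviations are assumptions drawn from the example above, not a prescribed standard.

```python
# Illustrative helper for composing resource names such as
# acmeco-hr-comp-eu-we1-dev from the convention components described above.
# Region abbreviations and environment codes are assumptions for this sketch.

REGION_CODES = {
    "northamerica-northeast1": "na-ne1",
    "europe-west1": "eu-we1",
}
ENVIRONMENTS = {"dev", "test", "uat", "stage", "prod"}

def resource_name(company: str, business_unit: str, app: str,
                  region: str, environment: str) -> str:
    """Returns a name like {company}-{bu}-{app}-{region-code}-{env}."""
    if environment not in ENVIRONMENTS:
        raise ValueError(f"Unknown environment: {environment}")
    return "-".join([company, business_unit, app,
                     REGION_CODES[region], environment])

print(resource_name("acmeco", "hr", "comp", "europe-west1", "dev"))
# acmeco-hr-comp-eu-we1-dev
```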
For other common networking resources, consider patterns like these: VPC network syntax: {company name}-{description(App or BU)-label}-{environment-label}-{seq#} example: acmeco-hr-dev-vpc-1 Subnet syntax: {company-name}-{description(App or BU)-label}-{region-label}-{environment-label}-{subnet-label} example: acmeco-hr-na-ne1-dev-vpc-1-subnet-1 note: If a subnet contains resources in one zone only, you can use "zone-label" instead of "region-label." Firewall rule syntax: {company-name}-{description(App or BU)-label}{source-label}-{dest-label}-{protocol}-{port}-{action} example: acmeco-hr-internet-internal-tcp-80-allow-rule IP route syntax: {priority}-{VPC-label}-{tag}-{next hop} example: 1000-acmeco-hr-dev-vpc-1-int-gw Addresses and subnets Best practices: Use custom mode subnets in your enterprise VPC networks. Group applications into fewer subnets with larger address ranges. Use custom mode VPC networks Though auto mode networks can be useful for early exploration, custom mode VPC networks are better suited for most production environments. We recommend that enterprises use VPC networks in custom mode from the beginning for the following reasons: Custom mode VPC networks better integrate into existing IP address management schemes. Because all auto mode networks use the same set of internal IP ranges, auto mode IP ranges might overlap when connected with your on-premises corporate networks. You can't connect two auto mode VPC networks together using VPC Network Peering because their subnets use identical primary IP ranges. Auto mode subnets all have the same name as the network. You can choose unique, descriptive names for custom mode subnets, making your VPC networks more understandable and maintainable. When a new Google Cloud region is introduced, auto mode VPC networks automatically get a new subnet in that region. Custom mode VPC networks only get new subnets if you specify them. This can be important for both sovereignty and IP address management reasons. If it has no resources, you can delete the default network. You cannot delete a VPC network until you have removed all resources, including Virtual Machine (VM) instances, that depend on it. For more details of the differences between auto mode and custom mode VPC networks, see the VPC network overview. Group applications into fewer subnets with larger address ranges Conventionally, some enterprise networks are separated into many small address ranges for a variety of reasons. For example, this might have been done to identify or isolate an application or keep a small broadcast domain. However, we recommend that you group applications of the same type into fewer, more manageable subnets with larger address ranges in the regions you want to operate. Unlike other networking environments in which a subnet mask is used, Google Cloud uses a software-defined networking (SDN) approach to provide a full mesh of reachability between all VMs in the global VPC network. The number of subnets does not affect routing behaviour. You can use service accounts or network tags to apply specific routing policies or firewall rules. Identity in Google Cloud is not based solely on the subnet IP address. Some VPC features—including Cloud NAT, Private Google Access, VPC Flow Logs, and alias IP ranges—are configured per subnet. If you need more fine-grained control of these features, use additional subnets. Single VPC network and Shared VPC Best practices: Start with a single VPC network for resources that have common requirements. 
Use Shared VPC for administration of multiple working groups. Grant the network user role at the subnet level. Use a single host project if resources require multiple network interfaces. Use multiple host projects if resource requirements exceed the quota of a single project. Use multiple host projects if you need separate administration policies for each VPC. Start with a single VPC network for resources that have common requirements For many simple use cases, a single VPC network provides the features that you need, while being easier to create, maintain, and understand than the more complex alternatives. By grouping resources with common requirements and characteristics into a single VPC network, you begin to establish the VPC network border as the perimeter for potential issues. For an example of this configuration, see the single project, single VPC network reference architecture. Factors that might lead you to create additional VPC networks include scale, network security, financial considerations, operational requirements, and identity and access management (IAM). Use Shared VPC for administration of multiple working groups For organizations with multiple teams, Shared VPC provides an effective tool to extend the architectural simplicity of a single VPC network across multiple working groups. The simplest approach is to deploy a single Shared VPC host project with a single Shared VPC network and then attach team service projects to the host project network. In this configuration, network policy and control for all networking resources are centralized and easier to manage. Service project departments can configure and manage non-network resources, enabling a clear separation of responsibilities for different teams in the organization. Resources in those projects can communicate with each other more securely and efficiently across project boundaries using internal IP addresses. You can manage shared network resources—such as subnets, routes, and firewalls—from a central host project, so you can enforce consistent network policies across the projects. For an example of this configuration, see the Single host project, multiple service projects, single Shared VPC reference architecture. Grant the network user role at the subnet level The centralized Shared VPC administrator can grant members the network user (networkUser) role either at a subnet level, for fine-grained service-project authorization, or for all subnets at the host project level. Following the principle of least privilege, we recommend granting the network user role at the subnet level to the associated user, service account, or group. Because subnets are regional, this granular control lets you specify which regions each service project can use to deploy resources. With Shared VPC architectures, you also have the flexibility to deploy multiple Shared VPC host projects within your organization. Each Shared VPC host project can then accommodate a single or multiple Shared VPC networks. In this configuration, different environments can enforce different policy concerns. For more information, see IAM roles for networking. Use a single host project if resources require multiple network interfaces If you have a service project that needs to deploy resources to multiple isolated VPC networks—for example, VM instances with multiple network interfaces—your host project must contain all of the VPC networks that provide the services. This is because a service project is allowed to attach to only one host project. 
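To illustrate the earlier recommendation to grant the network user role at the subnet level, the following Python sketch wraps the gcloud CLI. It assumes that gcloud is installed and authenticated, and the host project, subnet, region, and group names are placeholders for your own values.

```python
# Sketch: grant roles/compute.networkUser on a single subnet in a Shared VPC
# host project. Assumes the gcloud CLI is installed and authenticated;
# all resource names below are placeholders.
import subprocess

HOST_PROJECT = "acmeco-host-project"                 # Shared VPC host project (placeholder)
SUBNET = "acmeco-hr-na-ne1-dev-vpc-1-subnet-1"       # subnet to delegate (placeholder)
REGION = "northamerica-northeast1"
MEMBER = "group:hr-developers@acmeco.example"        # service project team (placeholder)

subprocess.run(
    [
        "gcloud", "compute", "networks", "subnets",
        "add-iam-policy-binding", SUBNET,
        f"--project={HOST_PROJECT}",
        f"--region={REGION}",
        f"--member={MEMBER}",
        "--role=roles/compute.networkUser",
    ],
    check=True,
)
```

Granting the role per subnet, rather than on the host project, limits each team to the regions and ranges that you have explicitly delegated to them.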
Use multiple host projects if resource requirements exceed the quota of a single project In cases where the aggregate resource requirements of all VPC networks can't be met within a project's quota, use an architecture with multiple host projects with a single Shared VPC network per host project, rather than a single host project with multiple Shared VPC networks. It's important to evaluate your scale requirements, because using a single host project requires multiple VPC networks in the host project, and quotas are enforced at the project level. For an example of this configuration, see the Multiple host projects, multiple service projects, multiple Shared VPC reference architecture. Use multiple host projects if you need separate administration policies for each VPC network Because each project has its own quota, use a separate Shared VPC host project for every VPC network to scale aggregate resources. This allows each VPC network to have separate IAM permissions for networking and security management, because IAM permissions are also implemented at the project level. For example, if you deploy two VPC networks (VPC network A and VPC network B) into the same host project, the network administrator (networkAdmin) role applies to both VPC networks. Deciding whether to create multiple VPC networks Best practices: Create a single VPC network per project to map VPC network quotas to projects. Create a VPC network for each autonomous team, with shared services in a common VPC network. Create VPC networks in different projects for independent IAM controls. Isolate sensitive data in its own VPC network. Create a single VPC network per project to map VPC resource quotas to projects Quotas are constraints applied at the project or network level. All resources have an initial default quota meant to protect you from unexpected resource usage. However, many factors might lead you to want more quota. For most resources, you can request additional quota. We recommend creating a single VPC network per project if you expect to grow beyond the default VPC resource quotas. This makes it easier to map project-level quota increases to each VPC network rather than to a combination of VPC networks in the same project. Limits are designed to protect system resources in aggregate. Limits generally can't be raised easily, although Google Cloud support and sales teams can work with you to increase some limits. See VPC resource quotas and limits for current values. Google Support can increase some scaling limits, but there might be times when you need to build multiple VPC networks to meet your scaling requirements. If your VPC network has a requirement to scale beyond the limits, discuss your case with Google Cloud sales and support teams about the best approach for your requirements. Create a VPC network for each autonomous team, with shared services in a common VPC network Some large enterprise deployments involve autonomous teams that each require full control over their respective VPC networks. You can meet this requirement by creating a VPC network for each business unit, with shared services in a common VPC network (for example, analytic tools, CI/CD pipeline and build machines, DNS/Directory services). 
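Whichever split you choose, each of these VPC networks would typically be created in custom mode, as recommended earlier in this document. The following is a minimal sketch, assuming the gcloud CLI is installed and authenticated; the project, network, subnet names, and IP range are placeholders that you would replace with values from your own address plan.

```python
# Sketch: create a custom mode VPC network and one regional subnet.
# Assumes gcloud is installed and authenticated; names and ranges are placeholders.
import subprocess

PROJECT = "acmeco-hr-dev"                          # placeholder project ID
NETWORK = "acmeco-hr-dev-vpc-1"
SUBNET = "acmeco-hr-eu-we1-dev-vpc-1-subnet-1"
REGION = "europe-west1"
RANGE = "10.10.0.0/20"                             # example range from your IP plan

subprocess.run(
    ["gcloud", "compute", "networks", "create", NETWORK,
     f"--project={PROJECT}", "--subnet-mode=custom"],
    check=True,
)
subprocess.run(
    ["gcloud", "compute", "networks", "subnets", "create", SUBNET,
     f"--project={PROJECT}", f"--network={NETWORK}",
     f"--region={REGION}", f"--range={RANGE}"],
    check=True,
)
```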
Create VPC networks in different projects for independent IAM controls A VPC network is a project-level resource with fine-grained, project-level identity and access management (IAM) controls, including the following roles: networkAdmin securityAdmin networkUser networkViewer By default, IAM controls are deployed at the project level and each IAM role applies to all VPC networks within the project. If you require independent IAM controls per VPC network, create your VPC networks in different projects. If you require IAM roles scoped to specific Compute Engine resources such as VM instances, disks, and images, use IAM policies for Compute Engine resources. Isolate sensitive data in its own VPC network For companies that deal with compliance initiatives, sensitive data, or highly regulated data that is bound by compliance standards such as HIPAA or PCI-DSS, further security measures often make sense. One method that can improve security and make it easier to prove compliance is to isolate each of these environments into its own VPC network. Connecting multiple VPC networks Best practices: Choose the VPC connection method that meets your cost, performance, and security needs. Use Network Connectivity Center VPC spokes. Use VPC Network Peering if you need to insert NVAs or if your application doesn't support Private Service Connect. Use external routing if you don't need private IP address communication. Use Cloud VPN to connect VPC networks that host service access points that are not transitively reachable over Network Connectivity Center. Use multi-NIC virtual appliances to control traffic between VPC networks through a cloud device. Choose the VPC connection method that meets your cost, performance, and security needs The next step after deciding to implement multiple VPC networks is connecting those VPC networks. VPC networks are isolated tenant spaces within Google's Andromeda SDN, but there are several ways that you can enable communication between them. The advantages and disadvantages of each method are summarized in the following comparison, and the subsequent sections provide best practices for choosing a VPC connection method.

Network Connectivity Center VPC spokes
Advantages: Up to 250 active VPC spokes per hub. Export filters. Preset topologies. Maximum per-VPC VM instance scale without separate peering group quotas. Hub-administrator to spoke-administrator workflow for cross-organizational VPC connectivity. Private Service Connect connection propagation enables transitive reachability of Private Service Connect endpoints across VPC spokes. Plus all the benefits of VPC Network Peering.
Disadvantages: Source tags and source service accounts of the sending VM are not propagated across Network Connectivity Center. Limited NVA insertion options. Inter-VPC transit costs. IPv4 only.

VPC Network Peering
Advantages: Easy to configure. Low management overhead. High bandwidth. Low egress charges (same as single VPC network). Each VPC network maintains its own distributed firewall. Each VPC network maintains its own IAM accounts and permissions.
Disadvantages: Non-transitive. Scaling numbers are bound to the aggregate group of peered VPC networks, including the number of VMs, routes, and internal forwarding rules. Requires non-overlapping address space. Static and dynamic routes are not propagated. Source tags and source service accounts of the sending VM are not propagated across VPC Network Peering.

External routing (public IP or NAT gateway)
Advantages: No configuration needed. Full isolation between VPC networks. Overlapping IP address space is possible. High bandwidth.
Disadvantages: Egress charges for VMs within the same zone are higher than for other options, such as VPC Network Peering. VMs need to be exposed using external IP addresses. No firewalling using private IP addresses. Static and dynamic routes are not propagated. Source tags and source service accounts of the sending VM are not honored by peered networks.

Cloud VPN
Advantages: Cloud VPN enables transitive topologies for hub and spoke. Scalable through ECMP. 99.99% service availability SLA on HA VPN.
Disadvantages: Management overhead. Billed at internet egress rates. Slightly higher latency. Limited throughput per tunnel. Lower MTU because of additional tunnel encapsulation. Source tags and source service accounts of the sending VM are lost across the tunnel.

Multiple network interfaces (Multi-NIC)
Advantages: Scalable through managed instance groups and ECMP routes across instances.
Disadvantages: Individual VMs have bandwidth limits. Limit of 8 interfaces per instance.
Note: The internal passthrough Network Load Balancer supports sending traffic to any interface of the VM.

Use Network Connectivity Center VPC spokes We recommend that you use Network Connectivity Center VPC spokes when you need to connect VPC networks together. Network Connectivity Center VPC spokes allow for address re-use across VPCs in the same or a different project and organization. Network Connectivity Center VPC spokes are the preferred method for connecting VPC networks for the following reasons: The data plane is distributed, so there is no gateway bottleneck. Traffic travels across VPC networks as if the VMs were in the same VPC network. Inter-network connectivity across different organizations. Networks include VPCs as well as external networks. Up to 250 VPC networks per hub. Transitive reachability across VPCs of service access points. Integrated inter-VPC Cloud NAT to enable IP address re-use across VPCs. Defined network reachability rules using preset topologies and prefix filters. Use VPC Network Peering if you need to insert NVAs or if your application doesn't support Private Service Connect We recommend that you use VPC Network Peering if you need to insert network virtual appliances (NVAs), such as firewall VMs. You might need to insert NVAs for traffic that traverses multiple VPC networks or for private connectivity to services that aren't published by using Private Service Connect. When you use VPC Network Peering, ensure that the totals of the resources needed for all directly connected peers don't exceed the limits on VM instances, number of peering connections, and internal forwarding rules. VPC Network Peering enables two VPC networks to connect with each other internally over Google's SDN, whether or not they belong to the same project or the same organization. VPC Network Peering merges the control plane and flow propagation between each peer, allowing the same forwarding characteristics as if all the VMs were in the same VPC network. When VPC networks are peered, all subnets, alias IP ranges, and internal forwarding rules are accessible, and each VPC network maintains its own distributed firewall. VPC Network Peering is not transitive.
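If you choose VPC Network Peering for these cases, remember that a peering only becomes active after it has been created from both sides. The following Python sketch wraps the gcloud CLI; the project and network names are placeholders, and it assumes gcloud is installed and authenticated.

```python
# Sketch: peer two VPC networks. A peering is only active after it has been
# created from both sides. Assumes gcloud is installed; names are placeholders.
import subprocess

def create_peering(name, project, network, peer_project, peer_network):
    subprocess.run(
        ["gcloud", "compute", "networks", "peerings", "create", name,
         f"--project={project}", f"--network={network}",
         f"--peer-project={peer_project}", f"--peer-network={peer_network}"],
        check=True,
    )

create_peering("hub-to-spoke", "acmeco-hub-prj", "hub-vpc",
               "acmeco-spoke-prj", "spoke-vpc")
create_peering("spoke-to-hub", "acmeco-spoke-prj", "spoke-vpc",
               "acmeco-hub-prj", "hub-vpc")
```

As noted above, the peered networks must use non-overlapping address space, and the aggregate peering group counts against shared scaling limits.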
Use external routing if you don't need private IP address communication If you don't need private IP address communication, you can use external routing with external IP addresses or a NAT gateway. When a VPC network is deployed, a route to Google's default internet gateway is provisioned with a priority of 1000. If this route exists, and a VM is given an external IP address, VMs can send outbound (egress) traffic through Google's internet gateway. You can also deploy services behind one of Google's many public load-balancing offerings, which allows the services to be reached externally. Externally addressed VMs communicate with each other privately over Google's backbone, regardless of region and Network Service Tiers. Use Google Cloud firewall rules to control external inbound (ingress) traffic to your VMs. External routing is a good option for scaling purposes, but it's important to understand how public routing affects costs. For details, see the Network pricing documentation. Use Cloud VPN to connect VPC networks that host service access points that are not transitively reachable over Network Connectivity Center HA VPN provides a managed service to connect VPC networks by creating IPsec tunnels between sets of endpoints. If you configure your Cloud Routers with custom advertisement mode, you can enable transitive routing across VPC networks and hub-and-spoke topologies as described later in this document. Using Network Connectivity Center, you can use HA VPN tunnels as a transit network between on-premises networks, as explained in the Cloud VPN documentation. Cloud VPN does not support large MTU. For details, see MTU considerations. Use virtual appliances to control traffic between VPC networks through a cloud device Multiple network interface VMs are common for VPC networks that require additional security or services between them, because multiple network interface VMs enable VM instances to bridge communication between VPC networks. A VM is allowed to have only one interface for each VPC network that it connects to. To deploy a VM into multiple VPC networks, you must have the appropriate IAM permission for each VPC network to which the VM connects. Keep the following characteristics in mind when deploying a multi-NIC VM: A multi-NIC VM can have a maximum of 8 interfaces. The subnet IP ranges of the interfaces must not overlap. Service connectivity Private Service Connect allows consumers to access services privately from inside their VPC network without the need for a network oriented deployment model. Similarly, it lets producers host these services in their own separate VPC networks, and they can offer a private connection to their consumers within the same organization or across organizations. Private Service Connect enables connectivity to first-party and third-party managed services, which eliminates the needs for subnet allocation for private services access and VPC Network Peering. Private Service Connect offers a service-centric security model with the following benefits: No shared dependencies Explicit authorization Line-rate performance Private Service Connect is available in different types that provide different capabilities and modes of communication: Private Service Connect endpoints: Endpoints are deployed by using forwarding rules that provide the consumer an IP address that is mapped to the Private Service Connect service. 
Private Service Connect backends: Backends are deployed by using network endpoint groups (NEGs) that let consumers direct traffic to their load balancer before the traffic reaches a Private Service Connect service. If the backends are deployed with an HTTPS load balancer, they can support certificates. Private Service Connect interfaces: Interfaces let the consumer and producer originate traffic, which enables bidirectional communication. Interfaces can be used in the same VPC network as endpoints and backends. An alternative to Private Service Connect is private services access that allows consumers to connect the producer services through VPC Network Peering. When you use private services access, we recommend that you consider IP allocation for each producer service, IP overlap, and shared quota. Hybrid design: connecting an on-premises environment Best practices: Use dynamic routing when possible. Use a connectivity VPC network to scale a hub-and-spoke architecture with multiple VPC networks. After you have identified the need for hybrid connectivity and have chosen a solution that meets your bandwidth, performance, and security requirements, consider how to integrate it into your VPC design. Using Shared VPC alleviates the need for each project to replicate the same solution. For example, when you integrate a Cloud Interconnect solution into a Shared VPC, all VMs—regardless of region or service project—can access the Cloud Interconnect connection. Use dynamic routing when possible Dynamic routing is available on all hybrid solutions, including HA VPN, Classic VPN, Dedicated Interconnect, and Partner Interconnect. Dynamic routing uses Google's Cloud Router as a Border Gateway Protocol (BGP) speaker to provide dynamic External BGP (eBGP) routing. The Cloud Router is not in the data plane; it only creates routes in the SDN. Dynamic routing does not use tags, and the Cloud Router never re-advertises learned prefixes. You can enable either of the Cloud Router's two modes, regional or global, on each VPC network. If you choose regional routing, the Cloud Router only advertises subnets that co-reside in the region where the Cloud Router is deployed. Global routing, on the other hand, advertises all subnets of the given VPC network, regardless of region, but does penalize routes that are advertised and learned outside of the region. This maintains symmetry within the region by always preferring a local interconnect, and is calculated by adding a penalty metric (MED) equal to 200 + TTL in milliseconds between regions. Static routing Static routing is only available on Classic VPN. Static routing offers the ability to set a next-hop route pointing at a Cloud VPN tunnel. By default, a static route applies to all VMs in the network regardless of region. Static routing also lets network administrators selectively set which VMs the route applies to by using instance tags, which can be specified when you create a route. Static routes apply globally within the VPC network, with the same route priority as each other. Therefore, if you have multiple tunnels in multiple regions to the same prefix with the same priority, a VM will use 5-tuple hash-based ECMP across all tunnels. To optimize this setup, you can create a preferred in-region route by referencing instance tags for each region and creating preferred routes accordingly. 
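A minimal sketch of that preferred in-region route setup follows, assuming the gcloud CLI, existing Classic VPN tunnels, and placeholder names and ranges. Lower priority values are preferred, so the in-region tunnel wins while the remote tunnel remains as a backup.

```python
# Sketch: tag-scoped static routes to the same on-premises prefix, with the
# in-region tunnel preferred (lower priority value wins). Assumes gcloud is
# installed and the Classic VPN tunnels already exist; names are placeholders.
import subprocess

def create_route(name, priority, tunnel, tunnel_region, tag):
    subprocess.run(
        ["gcloud", "compute", "routes", "create", name,
         "--project=acmeco-hr-dev",
         "--network=acmeco-hr-dev-vpc-1",
         "--destination-range=10.100.0.0/16",        # on-premises prefix (example)
         f"--next-hop-vpn-tunnel={tunnel}",
         f"--next-hop-vpn-tunnel-region={tunnel_region}",
         f"--priority={priority}",
         f"--tags={tag}"],
        check=True,
    )

# Preferred route for VMs tagged eu-we1, through the local tunnel.
create_route("onprem-eu-we1-preferred", 900, "vpn-tunnel-eu-we1",
             "europe-west1", "eu-we1")
# Backup route for the same VMs, through the remote tunnel.
create_route("onprem-eu-we1-backup", 1100, "vpn-tunnel-na-ne1",
             "northamerica-northeast1", "eu-we1")
```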
If you don't want outbound (egress) traffic to go through Google's default internet gateway, you can set a preferred default static route to send all traffic back on-premises through a tunnel. Use a connectivity VPC network to scale a hub-and-spoke architecture with multiple VPC networks If you need to scale a hub-and-spoke architecture with multiple VPC networks, configure centralized hybrid connectivity in one or more dedicated VPC networks, then add the hybrid connections and all of the VPC spokes to a Network Connectivity Center hub. You'll need to enable route exchange with VPC spokes. This configuration allows static routes or dynamically learned routes to be exported to VPC spokes in order to provide centralized configuration and scale to your VPC network design. The following diagram illustrates a centralized hybrid connectivity design using Network Connectivity Center: Alternatively, you can use VPC Network Peering and custom advertised routes to provide access to shared hybrid connections, if you won't exceed resource limits and you require the use of NVAs. The following diagram illustrates centralized hybrid connectivity with VPC Network Peering custom routes: This centralized design is in contrast to conventional hybrid connectivity deployment, which uses VPN tunnels or VLAN attachments in each individual VPC network. Network security Best practices: Identify clear security objectives. Limit external access. Define service perimeters for sensitive data. Manage traffic with Google Cloud firewall rules when possible. Use fewer, broader firewall rule sets when possible. Isolate VMs using service accounts when possible. Use automation to monitor security policies when using tags. Use additional tools to help secure and protect your apps. Google Cloud provides robust security features across its infrastructure and services, from the physical security of data centers and custom security hardware to dedicated teams of researchers. However, securing your Google Cloud resources is a shared responsibility. You must take appropriate measures to help ensure that your apps and data are protected. Identify clear security objectives Before evaluating either cloud-native or cloud-capable security controls, start with a set of clear security objectives that all stakeholders agree to as a fundamental part of the product. These objectives should emphasize achievability, documentation, and iteration, so that they can be referenced and improved throughout development. Limit external access When you create a Google Cloud resource that uses a VPC network, you choose a network and subnet where the resource resides. The resource is assigned an internal IP address from one of the IP ranges associated with the subnet. Resources in a VPC network can communicate among themselves through internal IP addresses if firewall rules permit. Limit access to the internet to only those resources that need it. Resources with only a private, internal IP address can still access many Google APIs and services through Private Service Connect or Private Google Access. Private access enables resources to interact with key Google and Google Cloud services while remaining isolated from the public internet. Additionally, use organizational policies to further restrict which resources are allowed to use external IP addresses. To allow VMs to access the internet, use Secure Web Proxy if the traffic can be downloaded over HTTP(S) and you want to implement identity controls. Otherwise, use Cloud NAT. 
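As a sketch of the Cloud NAT option mentioned above, the following commands create a Cloud Router and a NAT configuration that serves all subnet ranges in one region. It assumes the gcloud CLI and placeholder names; if third parties require fixed egress IP addresses, you can reserve static addresses and reference them instead of using auto-allocation.

```python
# Sketch: give internet egress to VMs without external IP addresses by using
# Cloud NAT. Assumes gcloud is installed and authenticated; names are placeholders.
import subprocess

PROJECT = "acmeco-hr-dev"
NETWORK = "acmeco-hr-dev-vpc-1"
REGION = "europe-west1"

subprocess.run(
    ["gcloud", "compute", "routers", "create", "nat-router-eu-we1",
     f"--project={PROJECT}", f"--network={NETWORK}", f"--region={REGION}"],
    check=True,
)
subprocess.run(
    ["gcloud", "compute", "routers", "nats", "create", "nat-config-eu-we1",
     f"--project={PROJECT}", "--router=nat-router-eu-we1", f"--region={REGION}",
     "--auto-allocate-nat-external-ips",   # or --nat-external-ip-pool for fixed IPs
     "--nat-all-subnet-ip-ranges"],
    check=True,
)
```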
Define service perimeters for sensitive data For workloads involving sensitive data, use VPC Service Controls to configure service perimeters around your VPC resources and Google-managed services and control the movement of data across the perimeter boundary. Using VPC Service Controls, you can group projects and your on-premises network into a single perimeter that prevents data access through Google-managed services. Service perimeters can't contain projects from different organizations, but you can use perimeter bridges to allow projects and services in different service perimeters to communicate. Manage traffic with Google Cloud firewall rules when possible Google Cloud VPC includes a stateful firewall that is horizontally scalable and applied to each VM in a distributed manner. See the Cloud NGFW overview for details. Google Cloud Marketplace features a large ecosystem of third-party solutions, including VMs that do the following: provide advanced security, such as protection from information leakage, application exploits, and escalation of privileges; detect known and unknown threats; and apply URL filtering. There are also operational benefits to having a single vendor implement policy across cloud service providers and on-premises environments. Traffic is typically routed to these VMs by specifying routes, either with the same priority (to distribute the traffic using a 5-tuple hash) or with different priorities (to create a redundant path), as shown in the multiple paths to the Dev-subnet in the following diagram. Most solutions require multiple network interface VMs. Because a VM can't have more than one interface per VPC network, when you create a multiple network interface VM, each interface must attach to a separate VPC network. Scale is also an important consideration when deploying third-party solutions into your VPC network for the following reasons: Limits: Most VM-based appliances must be inserted into the data path. This requires a multiple network interface VM that bridges multiple VPC networks that reside in the same project. Because VPC resource quotas are set at the project level, the aggregate resource needs across all VPC networks can become limiting. Performance: Introducing a single VM-based chokepoint into the fully horizontal scalability attributes of a VPC network goes against cloud design methodologies. To mitigate this, you can place multiple network virtual appliances (NVAs) into a managed instance group behind an internal passthrough Network Load Balancer. To account for these factors in high-scale requirement architectures, push security controls to your endpoints. Start by hardening your VMs and using Google Cloud firewall rules. This strategy can also involve introducing host-based endpoint inspection agents that don't change the forwarding architecture of your VPC network through multiple network interface VMs. For an additional example of this configuration, see the Stateful L7 firewall between VPC networks reference architecture. Use fewer, broader firewall rule sets when possible Only a certain number of rules can be programmed on any VM. However, you can combine many rules into one complex rule definition. For example, if all VMs in the VPC network need to explicitly allow 10 ingress TCP ports, you have two options: write 10 separate rules, each defining a single port, or define a single rule that includes all 10 ports. Defining a single rule that includes all 10 ports is the more efficient option. 
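A minimal sketch of the single-rule approach just described, assuming the gcloud CLI and placeholder network, port, and source-range values:

```python
# Sketch: one ingress rule that allows 10 TCP ports, instead of 10 separate
# single-port rules. Assumes gcloud is installed; network, ports, and ranges
# are placeholders.
import subprocess

PORTS = [22, 80, 443, 3306, 5432, 6379, 8080, 8443, 9090, 9200]
allow = ",".join(f"tcp:{p}" for p in PORTS)   # e.g. "tcp:22,tcp:80,..."

subprocess.run(
    ["gcloud", "compute", "firewall-rules", "create", "allow-common-tcp-ports",
     "--project=acmeco-hr-dev",
     "--network=acmeco-hr-dev-vpc-1",
     "--direction=INGRESS",
     f"--allow={allow}",
     "--source-ranges=10.10.0.0/20"],
    check=True,
)
```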
Create a generic rule set that applies to the entire VPC network, and then use more specific rule sets for smaller groupings of VMs using targets. In other words, start by defining broad rules, and progressively define rules more narrowly as needed: Apply firewall rules that are common across all VMs in the VPC network. Apply firewall rules that can be grouped across several VMs, like a service instance group or subnet. Apply firewall rules to individual VMs, such as a NAT gateway or bastion host. Isolate VMs using service accounts when possible Many organizations have environments that require specific rules for a subset of the VMs in a VPC network. There are two common approaches that you can take in these cases: subnet isolation and target filtering. Subnet isolation With subnet isolation, the subnet forms the security boundary across which Google Cloud firewall rules are applied. This approach is common in on-premises networking constructs and in cases where IP addresses and network placement form part of the VM identity. You can identify the VMs on a specific subnet by applying a unique Tag, network tag, or service account to those instances. This lets you create firewall rules that only apply to the VMs in a subnet—those with the associated Tag, network tag, or service account. For example, to create a firewall rule that permits all communication between VMs in the same subnet, you can use the following rule configuration on the Firewall rules page:
Targets: Specified target tags
Target tags: subnet-1
Source filter: Subnets
Subnets: Select subnet by name (example: subnet-1)
Target filtering With target filtering, all VMs either reside on the same subnet or are part of an arbitrary set of subnets. With this approach, subnet membership is not considered part of the instance identity for firewall rules. Instead, you can use Tags, network tags, or service accounts to restrict access between VMs in the same subnet. Each group of VMs that uses the same firewall rules has the same network tag applied. To illustrate this, consider a three-tier (web, app, database) application for which all of the instances are deployed in the same subnet. The web tier can communicate with end users and the app tier, and the app tier can communicate with the database tier, but no other communication between tiers is allowed. The instances running the web tier have a network tag of web, the instances running the app tier have a network tag of app, and the instances running the database tier have a network tag of db. The following firewall rules implement this approach:
Rule 1: Permit end users → web tier
Targets: Specified target tags
Target tags: web
Source filter: IP ranges
Source IP ranges: 0.0.0.0/0
Rule 2: Permit web tier → app tier
Targets: Specified target tags
Target tags: app
Source filter: Source tags
Source tags: web
Rule 3: Permit app tier → database tier
Targets: Specified target tags
Target tags: db
Source filter: Source tags
Source tags: app
However, even though it is possible to use network tags for target filtering in this manner, we recommend that you use Tags or service accounts where possible. Target tags are not access-controlled and can be changed by someone with the instanceAdmin role while VMs are in service. Tags and service accounts are access-controlled, meaning that a specific user must be explicitly authorized to use a service account. There can only be one service account per instance, whereas there can be multiple tags. Also, service accounts assigned to a VM can only be changed when the VM is stopped.
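The three rules above can be scripted as in the following sketch. It uses network tags to mirror the example, even though, as noted, Tags or service accounts are preferable where possible; the project, network, and port values are placeholders, and it assumes the gcloud CLI is installed and authenticated.

```python
# Sketch: the three-tier rules above (end users -> web, web -> app, app -> db),
# expressed with network tags to match the example. Assumes gcloud is installed;
# project, network, and the tcp:443 port are placeholders. Where possible, prefer
# service-account filtering (--source-service-accounts / --target-service-accounts).
import subprocess

COMMON = ["--project=acmeco-hr-dev", "--network=acmeco-hr-dev-vpc-1",
          "--direction=INGRESS", "--allow=tcp:443"]

rules = [
    ("allow-users-to-web", ["--target-tags=web", "--source-ranges=0.0.0.0/0"]),
    ("allow-web-to-app",   ["--target-tags=app", "--source-tags=web"]),
    ("allow-app-to-db",    ["--target-tags=db",  "--source-tags=app"]),
]

for name, flags in rules:
    subprocess.run(
        ["gcloud", "compute", "firewall-rules", "create", name, *COMMON, *flags],
        check=True,
    )
```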
Use automation to monitor security policies when using tags If you use network tags, remember that an instance administrator can change those tags. This can circumvent security policy. Therefore, if you do use network tags in a production environment, use an automation framework to help you overcome the lack of IAM governance over the network tags. Use additional tools to help secure and protect your apps In addition to firewall rules, use these additional tools to help secure and protect your apps: Use a Google Cloud global HTTP(S) load balancer to support high availability and protocol normalization. Integrate Google Cloud Armor with the HTTP(S) load balancer to provide DDoS protection and the ability to block or allow IP addresses at the network edge. Control access to apps by using Identity-Aware Proxy (IAP) to verify user identity and the context of the request to determine whether a user should be granted access. Provide a single interface for security insights, anomaly detection, and vulnerability detection with Security Command Center. Network services: NAT and DNS Best practices: Use fixed external IP addresses with Cloud NAT. Reuse IP addresses across VPCs with Cloud NAT. Use private DNS zones for name resolution. Use fixed external IP addresses with Cloud NAT If you need fixed external IP addresses from a range of VMs, use Cloud NAT. An example of why you might need fixed outbound IP addresses is the case in which a third party allows requests from specific external IP addresses. Using Cloud NAT lets you have a small number of NAT IP addresses for each region that are used for outbound communications. Cloud NAT also allows your VM instances to communicate across the internet without having their own external IP addresses. Google Cloud firewall rules are stateful. This means that if a connection is allowed between a source and a target or a target and a destination, then all subsequent traffic in either direction will be allowed as long as the connection is active. In other words, firewall rules allow bidirectional communication after a session is established. Reuse IP addresses across VPCs with Cloud NAT IP addresses can be reused across VPCs when Cloud NAT for Network Connectivity Center is enabled. Inter-VPC Cloud NAT is available when VPCs are connected using Network Connectivity Center VPC spokes. If the VPC IP addresses overlap with ranges in external networks, enable Hybrid NAT. Only connections initiated from workloads in Google Cloud towards external networks are translated. Use private DNS zones for name resolution Use private zones on Cloud DNS to allow your services to be resolved with DNS within your VPC network using their internal IP addresses without exposing this mapping to the outside. Use split-horizon DNS to map services to different IP addresses from within the VPC network than from the outside. For example, you can have a service exposed through network load balancing from the public internet, but have internal load balancing provide the same service using the same DNS name from within the VPC network. API access for Google managed services Best practices: Use the default internet gateway where possible. Add explicit routes for Google APIs if you need to modify the default route. Deploy instances that use Google APIs on the same subnet. Use the default internet gateway where possible Access from resources within the VPC network to Google APIs follows the default internet gateway next-hop.
Despite the next-hop gateway's name, the traffic path from instances to the Google APIs remains within Google's network. By default, only VM instances with an external IP address can communicate with Google APIs and services. If you need access from instances without an external IP address, set up Private Service Connect endpoints or use the Private Google Access feature for each subnet. This doesn't slow down communications for Google APIs. Google Kubernetes Engine (GKE) automatically enables Private Google Access on subnets where nodes are deployed. All nodes on these subnets without an external IP address are able to access Google managed services. Add explicit routes for Google APIs if you need to modify the default route If you need to modify the default route, then add explicit routes for Google API destination IP ranges. In environments where the default route (0.0.0.0/0) doesn't use the default internet gateway next-hop, configure explicit routes for the destination IP address ranges used by Google APIs. Set the next-hop of the explicit routes to the default internet gateway. An example of such a scenario is when you need to inspect all traffic through an on-premises device. Deploy instances that use Google APIs on the same subnet Deploy instances that require access to Google APIs and services on the same subnet and enable Private Google Access for instances without external IP addresses. Alternatively, set up Private Service Connect endpoints. If you are accessing Google APIs from your on-premises environment using Private Google Access, use Configuring Private Google Access for on-premises hosts to access some Google services over private IP address ranges. Check to see which services are supported before activating this feature, because access to other Google APIs through the IP addresses provided by this service will be unreachable. Activating this feature can require additional DNS configuration, such as configuring DNS Views. If you are accessing Google APIs from your on-premises environment using Private Service Connect endpoints, see Access the endpoint from on-premises hosts for details. Logging, monitoring, and visibility Best practices: Tailor logging for specific use cases and intended audiences. Increase the log aggregation interval for VPC networks with long connections. Use VPC Flow Logs sampling to reduce volume. Remove additional metadata when you only need IP and port data. Use Network Intelligence Center to get insights into your networks Tailor logging for specific use cases and intended audiences Use VPC Flow Logs for network monitoring, forensics, real-time security analysis, and expense optimization. You can enable or disable VPC Flow Logs at the subnet level. If VPC Flow Logs are enabled for a subnet, it collects data from all VM instances in that subnet. These logs record a sample of network flows that VM instances send and receive. Your logging use cases help to determine which subnets you decide require logging, and for how long. Flow logs are aggregated by connection at 5-second intervals from Compute Engine VMs and then exported in real time. You can view flow logs in Cloud Logging and export them to any destination that Cloud Logging export supports. The logs ingestion page in Logging tracks the volume of logs in your project and lets you disable all logs ingestion or exclude (discard) log entries you're not interested in, so that you can minimize any charges for logs over your monthly allotment. 
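The subnet-level flow log settings discussed in this and the following sections, such as aggregation interval, sampling, and metadata, can be combined in a single update. The following is a sketch, assuming the gcloud CLI and placeholder subnet details; confirm the flag values against your gcloud version.

```python
# Sketch: enable VPC Flow Logs on one subnet with a 15-minute aggregation
# interval, 25% sampling, and metadata excluded. Assumes gcloud is installed;
# the subnet, region, project, and flag values are placeholders to verify.
import subprocess

subprocess.run(
    ["gcloud", "compute", "networks", "subnets", "update",
     "acmeco-hr-eu-we1-dev-vpc-1-subnet-1",
     "--project=acmeco-hr-dev",
     "--region=europe-west1",
     "--enable-flow-logs",
     "--logging-aggregation-interval=interval-15-min",
     "--logging-flow-sampling=0.25",
     "--logging-metadata=exclude-all"],
    check=True,
)
```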
Logs are a critical part of both operational and security success, but they aren't useful unless you review them and take action. Tailor logs for their intended audience, which helps to ensure operational and security success for your VPC networks. For details, see Using VPC Flow Logs. Increase log aggregation interval for VPC networks with long connections For VPC networks with mostly long-lived connections, set the log aggregation interval to 15 minutes to greatly reduce the number of logs generated and to enable quicker and simpler analysis. An example workflow for which increasing the log aggregation interval is appropriate is network monitoring, which involves the following tasks: Performing network diagnosis Filtering the flow logs by VMs and by applications to understand traffic changes Analyzing traffic growth to forecast capacity Use VPC Flow Logs sampling to reduce volume Use VPC Flow Logs sampling to reduce the volume of VPC Flow Logs, but still be able to see low-level samples and aggregated views. An example workflow for which using sampling to reduce volume is appropriate is understanding network usage and optimizing network traffic expense. This workflow involves the following tasks: Estimating traffic between regions and zones Estimating traffic to specific countries on the internet Identifying top talkers Remove additional metadata when you only need IP and port data In security use cases where you are only interested in IP addresses and ports, remove the additional metadata to reduce the volume of data consumed in Cloud Logging. An example workflow for which removing metadata is appropriate is network forensics, which involves the following tasks: Determining which IPs talked with whom and when Identifying any compromised IP addresses, found by analyzing network flows Use Network Intelligence Center to get insights into your networks Network Intelligence Center provides a single console for managing Google Cloud network visibility, monitoring, and troubleshooting. The following sections provide details about the Network Intelligence Center components. Network Topology Use Network Topology to visualize your network topology. Connectivity Tests Use Connectivity Tests to help diagnose connectivity issues with your VPC networks. Performance Dashboard Use Performance Dashboard to check on the performance of the physical networking underlying your VPC virtual networks. Firewall Insights Use Firewall Insights to gain understandings about your firewall rules and how they interact. Network Analyzer Use Network Analyzer to monitor your VPC network configurations and to detect misconfigurations and suboptimal configurations. Flow Analyzer Use Flow Analyzer to gain a better understanding of VPC traffic flows. Reference architectures This section highlights a few architectures that illustrate some of the best practices in this document. Single project, single VPC network This initial reference architecture includes all of the components necessary to deploy highly available architectures across multiple regions, with subnet-level isolation and a 99.99% SLA connecting to your on-premises data centers. Single host project, multiple service projects, single Shared VPC Building on the initial reference architecture, Shared VPC host projects and multiple service projects let administrators delegate administrative responsibilities—such as creating and managing instances—to Service Project Admins while maintaining centralized control over network resources like subnets, routes, and firewalls. 
Multiple host projects, multiple service projects, multiple Shared VPC The following diagram illustrates an architecture for VPC isolation, which builds on our high-availability design while separating prod from other projects. There are many reasons to consider VPC isolation, including audit requirements (such as PCI), quota considerations between environments, or just another layer of logical isolation. You only require two interconnects (for redundancy) per location but can add multiple Interconnect attachments to multiple VPC networks or regions from those. Using isolation can also introduce the need for replication, as you decide where to place core services such as proxies, authentication, and directory services. Using a Shared Services VPC network can help to avoid this replication, and allow you to share these services with other VPC networks through Network Connectivity Center, while at the same time centralizing administration and deployment. Stateful L7 firewall between VPC networks This architecture has multiple VPC networks that are bridged by an L7 next-generation firewall (NGFW) appliance, which functions as a multi-NIC bridge between VPC networks. An untrusted, outside VPC network is introduced to terminate hybrid interconnects and internet-based connections that terminate on the outside leg of the L7 NGFW for inspection. There are many variations on this design, but the key principle is to filter traffic through the firewall before the traffic reaches trusted VPC networks. This design requires each VPC network to reside in the project where you insert the VM-based NGFW. Because quotas are enforced at the project level, you must consider the aggregate of all VPC resources. Multiple VPC networks interconnected with Network Connectivity Center This architecture has multiple VPC networks that connect to each other using Network Connectivity Center. A transit VPC network is introduced to terminate hybrid interconnects and share the hybrid connectivity across all other VPCs, which avoids the need to create VLAN attachments for each VPC network. This approach consolidates the external connectivity and its associated routing considerations. Similarly, one or more shared services VPC networks can be introduced to host common services such as proxies, authentication, and directory services. There are many variations on this design, but the key principle is to handle the different services and connection types as spokes to a Network Connectivity Center hub that provides any-to-any connectivity amongst these. This reference architecture is described in detail in Cross-Cloud Network inter-VPC connectivity using Network Connectivity Center. What's next Cross-Cloud Network for distributed applications VPC deep dive and best practices (Cloud NEXT'18 video) Hybrid and multi-cloud network topologies Decide a resource hierarchy for your Google Cloud landing zone Best practices for Compute Engine region selection Google Cloud for data center professionals: networking VPC documentation GKE networking overview Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. 
Send feedback \ No newline at end of file diff --git a/Best_practices_for_automatically_provisioning_and_configuring_edge_and_bare_metal_systems_and_servers.txt b/Best_practices_for_automatically_provisioning_and_configuring_edge_and_bare_metal_systems_and_servers.txt new file mode 100644 index 0000000000000000000000000000000000000000..c51c2155a2b5656670c4998574d9ae8f6c0e3bcd --- /dev/null +++ b/Best_practices_for_automatically_provisioning_and_configuring_edge_and_bare_metal_systems_and_servers.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/connected-devices/best-practices-provisioning-configuring-bare-metal +Date Scraped: 2025-02-23T11:48:18.037Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for automatically provisioning and configuring edge and bare metal systems and servers Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-31 UTC This document suggests best practices for designing and implementing reliable and automated provisioning and configuration processes for devices running at the edges of your environment such as the following: Edge devices, such as Internet of Things (IoT) devices, microcomputers and microcontrollers Bare metal systems and servers Read this document if you design provisioning and configuration processes for edge devices and bare metal systems and servers, or if you want to learn more about best practices for provisioning and configuring these device types. This document doesn't list all of the possible best practices for provisioning and configuring edge and bare metal systems and servers, and it doesn't give you guarantees of success. Instead, it helps you to stimulate discussions about potential changes and improvements to your provisioning and configuration processes. This document is part of a series of documents that provide information about IoT architectures on Google Cloud. The other documents in this series include the following: Connected device architectures on Google Cloud overview Standalone MQTT broker architecture on Google Cloud IoT platform product architecture on Google Cloud Device on Pub/Sub architecture to Google Cloud Best practices for running an IoT backend on Google Cloud Best practices for automatically provisioning and configuring edge and bare metal systems and servers (this document) Manually provisioning and configuring a large fleet of devices is prone to human error and doesn't scale as your fleet grows. For example, you might forget to run a critical provisioning or configuration task, or you might be relying on partially or fully undocumented processes. Fully automated and reliable provisioning and configuration processes help solve these issues. They also help you manage the lifecycle of each device from manufacture to decommissioning to disposal. Terminology The following terms are important for understanding how to implement and build automated provisioning and configuration processes for your devices: Edge device: A device that you deploy at the edges of your environment that is proximate to the data you want to process. Provisioning process: The set of tasks that you must complete to prepare a device for configuration. Configuration process: The set of tasks you must complete to make a device ready to operate in a specific environment. Configuration management: The set of tasks that you continuously perform to manage the configuration of your environment and devices. 
Base image: A minimal working operating system (OS) or firmware image produced by your company or produced by a device or OS manufacturer. Golden image: An immutable OS or firmware image that you create for your devices or prepare from a base image. Golden images include all data and configuration information that your devices need to accomplish their assigned tasks. You can prepare various golden images to accomplish different tasks. Synonyms for golden image types include flavors, spins, and archetypes. Silver image: An OS or firmware image that you prepare for your devices by applying minimal changes to a golden image or a base image. Devices running a silver image complete their provisioning and configuration upon the first boot, according to the needs of the use cases that those devices must support. Seed device: A device that bootstraps your environment without external dependencies. Network booting: The set of technologies that lets a device obtain software and any related configuration information from the network, instead of from a storage system that's attached to the device. To help you set goals and avoid common issues, apply the best practices for provisioning and configuration that are described in the following sections. Automate the provisioning and configuration processes During their first boot, or anytime it's necessary, your devices should be able to provision and configure themselves using only the software image installed inside them. To avoid implementing this orchestration logic yourself, you can use tools that give you the primitives needed to orchestrate and implement those processes. For example, you can use cloud-init and its NoCloud data source, together with scripting or a configuration management tool, such as Ansible, Puppet, or Chef, running against the local host. To design reliable provisioning and configuration processes, ensure that you validate all of the steps and tasks performed during those processes, ideally in an automated manner. For example, you can use an automated compliance testing framework, such as InSpec, to verify that your provisioning and configuration processes are operating as expected. This best practice helps you avoid single points of failure and the need for manual intervention when you need to complete device provisioning and configuration. Avoid special-purpose devices When designing your edge devices, minimize their variance in terms of purpose and specialty. This recommendation doesn't mean that all your edge devices must be identical to each other or share the same purpose, but they should be as homogeneous as possible. For example, you might define device archetypes by the workload types they need to support. Then you can deploy and manage your devices according to the properties of those archetypes. To ensure that you're following this best practice, verify that you can pick a device at random from those of a given archetype and then do the following: Treat the device like you would other devices of the same archetype. Doing so shows that you have operational efficiency. Replace the device with devices of the same archetype without additional customizations. Doing so shows that you have correctly implemented those archetypes. This best practice ensures that you reduce the variance in your fleet of devices, leading to less fragmentation in your environment and in the provisioning and configuration processes.
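To make the automation practice above more concrete, the following is a minimal, hypothetical cloud-init user-data sketch of the kind that a NoCloud data source can serve; the package names, file paths, and archetype marker are illustrative assumptions, not part of the original guidance. On first boot, the device installs a configuration management tool and runs it against the local host, so no manual intervention is required.

#cloud-config
# Illustrative only: install Ansible and apply a locally written playbook on first boot.
package_update: true
packages:
  - ansible
write_files:
  - path: /opt/provisioning/site.yml
    content: |
      - hosts: localhost
        connection: local
        tasks:
          - name: Record the device archetype (hypothetical marker file)
            copy:
              dest: /etc/device-archetype
              content: "sensor-gateway\n"
runcmd:
  - [ansible-playbook, /opt/provisioning/site.yml]

A compliance framework such as InSpec can then assert, after the first boot, that the expected packages, files, and services are present.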
Use seed devices to bootstrap your environment When provisioning and configuring your devices, you might come across a circular dependency problem: your devices need supporting infrastructure to provision and configure themselves, but that infrastructure isn't in place because you still have to provision and configure it. You can solve this problem with seed devices. Seed devices have a temporary special purpose. After completing the tasks for which the special purpose was designed, the device conforms its behavior and status to the relevant archetype. For example, if you're using cloud-init to automatically initialize your devices, you might need to configure a cloud-init NoCloud data source in the following ways: Provide the NoCloud data source data to the seed device through a file system. Wait for the seed device to complete its own provisioning and configuration with its special purpose, which includes serving the NoCloud data source data to other devices over the network. The provisioning and configuration processes on the seed device then wait until the conditions to drop the seed device's temporary special purpose are met. Some examples of these conditions are: Are there other devices in the environment that serve the NoCloud data source data over the network? Are there enough nodes in the cluster? Did the first backup complete? Is the disaster recovery site ready? Provision and configure other devices that download the NoCloud data source data over the network from the seed device. Some devices must be able to serve the NoCloud data source data over the network. The provisioning and configuration processes on the seed device resume because the conditions to drop the special purpose of the seed device are met: there are other devices in the fleet that serve the NoCloud data source data over the network. The provisioning and configuration processes on the seed device drop the special purpose, making the seed device indistinguishable from other devices of the same archetype. This best practice ensures that you can bootstrap your environment even without supporting infrastructure and without contravening the Avoid special-purpose devices best practice. Minimize the statefulness of your devices When designing your edge devices, keep the need to store stateful information at minimum. Edge devices might have limited hardware resources, or be deployed in harsh environments. Minimizing the stateful information that they need to function simplifies the provisioning, configuration, backup, and recovery processes because you can treat such devices homogeneously. If a stateless edge device starts to malfunction and it's not recoverable, for example, you can swap it with another device of the same archetype with minimal disruptions or data loss. This best practice helps you avoid unanticipated issues due to data loss, or due to your processes being too complex. Most complexity comes from the need to support a fleet of heterogeneous devices. Automatically build OS and firmware images To avoid expensive provisioning and configuration tasks when your devices first boot, and to spare device resources, customize the OS and firmware images before making them available. You could, for example, install dependencies directly in the image instead of installing them when each device boots for the first time. When preparing the OS and firmware images for your devices, you start from a base image. When you customize the base image, you can do the following: Produce golden images. 
Golden images contain all dependencies in the image so that your devices don't have to install those dependencies on first boot. Producing golden images might be a complex task, but they enable your devices to save time and resources during provisioning and configuration. Produce silver images. Unlike golden images, devices running silver images complete all provisioning and configuration processes during their first boot. Producing silver images can be less complex than producing golden images, but the devices running a silver image spend more time and resources during provisioning and configuration. You can customize the OS and firmware images as part of your continuous integration and continuous deployment (CI/CD) processes, and automatically make the customized images available to your devices after validation. The CI/CD processes that you implement with a tool such as Cloud Build, GitHub Actions, GitLab CI/CD, or Jenkins, can perform the following sequence of tasks: Perform an automated validation against the customized images. Publish the customized images in a repository where your devices can obtain them. If your CI/CD environment and the OS or firmware for which you need to build images use different hardware architectures, you can use tools like QEMU to emulate those architectures. For example, you can emulate the hardware architecture of the ARM family on an x86_64 architecture. To customize your OS or firmware images, you need to be able to modify them and verify those modifications in a test environment before installing them in your edge devices. Tools like chroot let you virtually change but not physically change the root directory before running a command. This best practice helps you customize OS and firmware images before making the images available to your devices. Reliably orchestrate workloads running on your devices If your devices support heterogeneous workloads, you can use the following tools to orchestrate those workloads and manage their lifecycle: A workload orchestration system: Using a workload orchestration system, such as Kubernetes, is suitable for workloads that have complex orchestration or lifecycle management requirements. These systems are also suitable for workloads that span multiple components. In both cases, it means you don't have to implement that orchestration and workload lifecycle management logic by yourself. If your devices are resource-constrained, you can install a lightweight Kubernetes distribution that needs fewer resources than the canonical one, such as MicroK8s, K3s, or Google Distributed Cloud installed with the edge profile. An init system: Using an init system, like systemd, is suitable for workloads with the following characteristics: Simple orchestration requirements A lack of resources to support a workload orchestration system Workloads that can't be placed in containers After you have system in place to orchestrate your workloads, you can also use it to run tasks that are part of your provisioning and configuration processes. If you need to run a configuration management tool as part of your provisioning and configuration processes, for example, you can use the workload orchestration system as you would with any other workload. This best practice helps ensure that you can orchestrate the workloads running on your devices. 
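As a sketch of the idea above, running provisioning and configuration tasks through the workload orchestrator, the following hypothetical Kubernetes Job applies a configuration management playbook on a lightweight distribution such as K3s or MicroK8s; the image name and ConfigMap name are assumptions chosen for illustration.

apiVersion: batch/v1
kind: Job
metadata:
  name: apply-device-config
spec:
  backoffLimit: 3              # retry the configuration task a few times if it fails
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: config
        # Hypothetical image that packages your configuration management tooling.
        image: example.com/tools/config-runner:1.0
        command: ["ansible-playbook", "/config/site.yml"]
        volumeMounts:
        - name: playbook
          mountPath: /config
      volumes:
      - name: playbook
        configMap:
          name: device-config-playbook   # assumed to contain site.yml

Treating configuration tasks as ordinary workloads keeps their scheduling, retries, and logs in the same place as everything else the device runs.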
Verify, authenticate, and connect devices When you need to verify if your devices need to connect to external systems, such as other devices or to a backend, consider the recommendations in the following subsections. This best practice helps you: Design secure communication channels for your devices. Avoid potential backdoors that circumvent the security perimeter of your devices. Verify that your devices don't expose unauthorized interfaces that an attacker might exploit. Connection practices to enforce Authenticate other parties that are making information requests before exchanging any information. Verify that transmitted information isn't traveling across unexpected channels. Rely on trusted execution environments to handle secrets, such as encryption keys, authentication keys, and passwords. Verify the integrity and the authenticity of any OS or firmware image before use. Verify the validity, the integrity, and the authenticity of any user-provided configuration. Limit the attack surface by not installing unnecessary software and removing any that already exists on your devices. Limit the use of privileged operations and accounts. Verify the integrity of the device's case if that case needs to resist physical manipulation and tampering. Connection practices to avoid Don't transmit sensitive information over unencrypted channels. Avoid leaving privileged access open, such as the following: Virtual or physical serial ports and serial consoles with elevated privileges, even if the ports are accessible only if someone physically tampers with the device. Endpoints that respond to requests coming from the network and that can run privileged operations. Don't rely on hardcoded credentials in your OS or firmware images, configuration, or source code. Don't reveal any information that might help an adversary gather information to gain elevated privileges. For example, you should encrypt data on your devices and turn off unneeded tracing and logging systems on production devices. Don't let users and workloads execute arbitrary code. Monitor your devices Gathering information about the state of your devices without manual intervention is essential for the reliability of your environment. Ensure that your devices automatically report all the data that you need. There are two main reasons to gather and monitor data: To help you ensure that your devices are working as intended. To proactively spot issues and perform preventive maintenance. For example, you can collect monitoring metrics and events with Cloud Monitoring. To help you investigate and troubleshoot issues, we recommend that you design and implement processes to gather high resolution diagnostic data, such as detailed monitoring, tracing and debugging information, on top of the processes that monitor your devices during their normal operation. Gathering high resolution diagnostic data and transferring it by using a network can be expensive in terms of device resources, such as computing, data storage, and electrical power. For this reason, we recommend that you enable processes to gather high resolution diagnostic data only when needed, and only for the devices that need further investigation. For example, if one of your devices is not working as intended, and the regular monitoring data that the device reports is not enough to thoroughly diagnose the issue, you can enable high resolution data gathering for that device so it reports more information that can help you investigate the causes of the issue. 
This best practice ensures that you don't leave devices in an unknown state, and that you have a enough data to determine whether and how your devices are performing. Support unattended booting and upgrades When you design your provisioning and configuration processes, ensure that your devices are capable of unattended booting and that you have the necessary infrastructure in place. By implementing an unattended booting mechanism that supports both the first boot and the delivery of over-the-air upgrades, you increase the maintainability of your infrastructure. Using unattended booting frees you from manually attending to each device as it boots or upgrades. Manually attending a large fleet of devices is error-prone because operators might miss or incorrectly perform actions, or they might not have enough time to perform the required actions for every device in the fleet. Also, you don't have to prepare each device in advance to boot the correct OS or firmware image. You can release a new version of an OS or firmware image, for example, and make that version available as one of the options that your devices can choose when they take their boot instructions from the network. This best practice helps you ensure that your devices can perform boots and upgrades that are automated and unattended. Design and implement resilient processes Even with fully automated provisioning and configuration processes, errors can occur that prevent those processes from correctly completing, thus leaving your devices in an inconsistent state. Help ensure that your devices are able to recover from such failures by implementing retry and fallback mechanisms. When a device fails to complete a task that's part of the provisioning and configuration processes, for example, it should automatically attempt to recover from that failure. After the device recovers from the failure or falls back to a working state, it can resume running processes from the point at which the processes failed. This best practice helps you design and implement resilient provisioning and configuration processes. Support the whole lifecycle of your devices When designing your provisioning and configuration processes, ensure that those processes can manage the entire device lifecycle. Effectively managing device lifecycles includes planning for termination and disposal, even if your devices are supposed to run for a relatively long time. If you don't manage the lifecycle of your devices, it could create issues, like the following: Sustained high costs: Introducing lifecycle management support after your provisioning and configuration processes are in place can increase costs. By planning this support early in the design, you might lower those costs. If your provisioning and configuration processes don't support the whole lifecycle of your devices, for example, you might have to manually intervene on each device to properly handle each phase of their lifecycle. Manual intervention can be expensive, and often doesn't scale. Increased rigidity: Not supporting lifecycle management might eventually lead to the inability to update or manage your devices. If you lack a mechanism to safely and efficiently turn off your devices, for example, it might be challenging to manage their end of life and ultimate disposal. What's next Read Connected device architectures on Google Cloud. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. . 
ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Best_practices_for_cost-optimized_Kubernetes_applications_on_GKE.txt b/Best_practices_for_cost-optimized_Kubernetes_applications_on_GKE.txt new file mode 100644 index 0000000000000000000000000000000000000000..d106a2fcae88bd37948b6ba26ddcbc64133853f4 --- /dev/null +++ b/Best_practices_for_cost-optimized_Kubernetes_applications_on_GKE.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/best-practices-for-running-cost-effective-kubernetes-applications-on-gke +Date Scraped: 2025-02-23T11:47:28.008Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for running cost-optimized Kubernetes applications on GKE Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-11 UTC This document discusses Google Kubernetes Engine (GKE) features and options, and the best practices for running cost-optimized applications on GKE to take advantage of the elasticity provided by Google Cloud. This document assumes that you are familiar with Kubernetes, Google Cloud, GKE, and autoscaling. Introduction As Kubernetes gains widespread adoption, a growing number of enterprises and platform-as-a-service (PaaS) and software-as-a-service (SaaS) providers are using multi-tenant Kubernetes clusters for their workloads. This means that a single cluster might be running applications that belong to different teams, departments, customers, or environments. The multi-tenancy provided by Kubernetes lets companies manage a few large clusters, instead of multiple smaller ones, with benefits such as appropriate resource utilization, simplified management control, and reduced fragmentation. Over time, some of these companies with fast-growing Kubernetes clusters start to experience a disproportionate increase in cost. This happens because traditional companies that embrace cloud-based solutions like Kubernetes don't have developers and operators with cloud expertise. This lack of cloud readiness leads to applications becoming unstable during autoscaling (for example, traffic volatility during a regular period of the day), sudden bursts, or spikes (such as TV commercials or peak scale events like Black Friday and Cyber Monday). In an attempt to "fix" the problem, these companies tend to over-provision their clusters the way they used to in a non-elastic environment. Over-provisioning results in considerably higher CPU and memory allocation than what applications use for most of the day. This document provides best practices for running cost-optimized Kubernetes workloads on GKE. The following diagram outlines this approach. The foundation of building cost-optimized applications is spreading the cost-saving culture across teams. Beyond moving cost discussions to the beginning of the development process, this approach forces you to better understand the environment that your applications are running in—in this context, the GKE environment. In order to achieve low cost and application stability, you must correctly set or tune some features and configurations (such as autoscaling, machine types, and region selection). Another important consideration is your workload type because, depending on the workload type and your application's requirements, you must apply different configurations in order to further lower your costs. 
Finally, you must monitor your spending and create guardrails so that you can enforce best practices early in your development cycle. The following table summarizes the challenges that GKE helps you solve. Although we encourage you to read the whole document, this table presents a map of what's covered.
Challenge: I want to look at easy cost savings on GKE. Action: Select the appropriate region, sign up for committed-use discounts, and use E2 machine types.
Challenge: I need to understand my GKE costs. Action: Observe your GKE clusters and watch for recommendations, and enable GKE usage metering.
Challenge: I want to make the most out of GKE elasticity for my existing workloads. Action: Read Horizontal Pod Autoscaler, Cluster Autoscaler, and understand best practices for Autoscaler and over-provisioning.
Challenge: I want to use the most efficient machine types. Action: Choose the right machine type for your workload.
Challenge: Many nodes in my cluster are sitting idle. Action: Read best practices for Cluster Autoscaler.
Challenge: I need to improve cost savings in my batch jobs. Action: Read best practices for batch workloads.
Challenge: I need to improve cost savings in my serving workloads. Action: Read best practices for serving workloads.
Challenge: I don't know how to size my Pod resource requests. Action: Use Vertical Pod Autoscaler (VPA), but pay attention to mixing Horizontal Pod Autoscaler (HPA) and VPA best practices.
Challenge: My applications are unstable during autoscaling and maintenance activities. Action: Prepare cloud-based applications for Kubernetes, and understand how Metrics Server works and how to monitor it.
Challenge: How do I make my developers pay attention to their applications' resource usage? Action: Spread the cost-saving culture, consider using GKE Enterprise Policy Controller, design your CI/CD pipeline to enforce cost savings practices, and use Kubernetes resource quotas.
Challenge: What else should I consider to further reduce my ecosystem costs? Action: Review small development clusters, review your logging and monitoring strategies, and review inter-region egress traffic in regional and multi-zonal clusters.
GKE cost-optimization features and options Cost-optimized Kubernetes applications rely heavily on GKE autoscaling. To balance cost, reliability, and scaling performance on GKE, you must understand how autoscaling works and what options you have. This section discusses GKE autoscaling and other useful cost-optimized configurations for both serving and batch workloads. Fine-tune GKE autoscaling Autoscaling is the strategy GKE uses to let Google Cloud customers pay only for what they need by minimizing infrastructure uptime. In other words, autoscaling saves costs by 1) making workloads and their underlying infrastructure start before demand increases, and 2) shutting them down when demand decreases. The following diagram illustrates this concept. In Kubernetes, your workloads are containerized applications that are running inside Pods, and the underlying infrastructure, which is composed of a set of Nodes, must provide enough computing capacity to run the workloads. As the following diagram shows, this environment has four scalability dimensions. The workload and infrastructure can scale horizontally by adding and removing Pods or Nodes, and they can scale vertically by increasing and decreasing Pod or Node size. GKE handles these autoscaling scenarios by using features like the following: Horizontal Pod Autoscaler (HPA), for adding and removing Pods based on utilization metrics. Vertical Pod Autoscaler (VPA), for sizing your Pods. Cluster Autoscaler, for adding and removing Nodes based on the scheduled workload.
Node auto-provisioning, for dynamically creating new node pools with nodes that match the needs of users' Pods. The following diagram illustrates these scenarios. The remainder of this section discusses these GKE autoscaling capabilities in more detail and covers other useful cost-optimized configurations for both serving and batch workloads. Horizontal Pod Autoscaler Horizontal Pod Autoscaler (HPA) is meant for scaling applications that are running in Pods based on metrics that express load. You can configure either CPU utilization or other custom metrics (for example, requests per second). In short, HPA adds and deletes Pods replicas, and it is best suited for stateless workers that can spin up quickly to react to usage spikes, and shut down gracefully to avoid workload instability. As the preceding image shows, HPA requires a target utilization threshold, expressed in percentage, which lets you customize when to automatically trigger scaling. In this example, the target CPU utilization is 70%. That means your workload has a 30% CPU buffer for handling requests while new replicas are spinning up. A small buffer prevents early scale-ups, but it can overload your application during spikes. However, a large buffer causes resource waste, increasing your costs. The exact target is application specific, and you must consider the buffer size to be enough for handling requests for two or three minutes during a spike. Even if you guarantee that your application can start up in a matter of seconds, this extra time is required when Cluster Autoscaler adds new nodes to your cluster or when Pods are throttled due to lack of resources. The following are best practices for enabling HPA in your application: Size your application correctly by setting appropriate resource requests and limits. Set your target utilization to reserve a buffer that can handle requests during a spike. Make sure your application starts as quickly as possible and shuts down according to Kubernetes expectations. Set meaningful readiness and liveness probes. Make sure that your Metrics Server is always up and running. Inform clients of your application that they must consider implementing exponential retries for handling transient issues. For more information, see Configuring a Horizontal Pod Autoscaler. Vertical Pod Autoscaler Unlike HPA, which adds and deletes Pod replicas for rapidly reacting to usage spikes, Vertical Pod Autoscaler (VPA) observes Pods over time and gradually finds the optimal CPU and memory resources required by the Pods. Setting the right resources is important for stability and cost efficiency. If your Pod resources are too small, your application can either be throttled or it can fail due to out-of-memory errors. If your resources are too large, you have waste and, therefore, larger bills. VPA is meant for stateless and stateful workloads not handled by HPA or when you don't know the proper Pod resource requests. As the preceding image shows, VPA detects that the Pod is consistently running at its limits and recreates the Pod with larger resources. The opposite also happens when the Pod is consistently underutilized—a scale-down is triggered. VPA can work in three different modes: Off. In this mode, also known as recommendation mode, VPA does not apply any change to your Pod. The recommendations are calculated and can be inspected in the VPA object. Initial: VPA assigns resource requests only at Pod creation and never changes them later. Auto: VPA updates CPU and memory requests during the life of a Pod. 
That means that the Pod is deleted, CPU and memory are adjusted, and then a new Pod is started. If you plan to use VPA, the best practice is to start with the Off mode for pulling VPA recommendations. Make sure it's running for 24 hours, ideally one week or more, before pulling recommendations. Then, only when you feel confident, consider switching to either Initial or Auto mode. Follow these best practices for enabling VPA, either in Initial or Auto mode, in your application: Don't use VPA in either Initial or Auto mode if you need to handle sudden spikes in traffic. Use HPA instead. Make sure your application can grow vertically. Set minimum and maximum container sizes in the VPA objects to avoid the autoscaler making significant changes when your application is not receiving traffic. Don't make abrupt changes, such as dropping the Pod's replicas from 30 to 5 all at once. This kind of change requires a new deployment, new label set, and new VPA object. Make sure your application starts as quickly as possible and shuts down according to Kubernetes expectations. Set meaningful readiness and liveness probes. Make sure that your Metrics Server is always up and running. Inform clients of your application that they must consider implementing exponential retries for handling transient issues. Consider using node auto-provisioning along with VPA so that if a Pod grows too large to fit into the existing machine types, Cluster Autoscaler provisions larger machines to fit the new Pod. If you are considering using Auto mode, make sure you also follow these practices: Make sure your application can be restarted while receiving traffic. Add a Pod Disruption Budget (PDB) to control how many Pods can be taken down at the same time. For more information, see Configuring Vertical Pod Autoscaling. Mixing HPA and VPA The official recommendation is that you must not mix VPA and HPA on either CPU or memory. However, you can mix them safely when using recommendation mode in VPA or custom metrics in HPA—for example, requests per second. When mixing VPA with HPA, make sure your deployments are receiving enough traffic—meaning, they are consistently running above the HPA min-replicas. This lets VPA understand your Pod's resource needs. For more information about VPA limitations, see Limitations for Vertical Pod autoscaling. Cluster Autoscaler Cluster Autoscaler (CA) automatically resizes the underlying compute infrastructure. CA provides nodes for Pods that don't have a place to run in the cluster and removes under-utilized nodes. CA is optimized for the cost of infrastructure. In other words, if there are two or more node types in the cluster, CA chooses the least expensive one that fits the given demand. Unlike HPA and VPA, CA doesn't depend on load metrics. Instead, it's based on scheduling simulation and declared Pod requests. It's a best practice to enable CA whenever you are using either HPA or VPA. This practice ensures that if your Pod autoscalers determine that you need more capacity, your underlying infrastructure grows accordingly. As these diagrams show, CA automatically adds and removes compute capacity to handle traffic spikes and save you money when your customers are sleeping. It is a best practice to define a Pod Disruption Budget (PDB) for all your applications. A PDB is particularly important during the CA scale-down phase, when it controls the number of replicas that can be taken down at one time.
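Because the following sections keep returning to Pod Disruption Budgets, here is a minimal PDB sketch; the name, label, and replica count are illustrative assumptions for an application that runs several replicas.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: frontend-pdb
spec:
  minAvailable: 2          # keep at least two replicas up during voluntary disruptions,
                           # including Cluster Autoscaler scale-down
  selector:
    matchLabels:
      app: frontend        # must match the labels of the Pods you want to protect

You can use maxUnavailable instead of minAvailable if expressing the budget as the number or percentage of Pods that may be down better fits your workload.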
Certain Pods cannot be restarted by any autoscaler because restarting them would cause temporary disruption, so the nodes that they run on can't be deleted. For example, system Pods (such as metrics-server and kube-dns), and Pods using local storage won't be restarted. However, you can change this behavior by defining PDBs for these system Pods and by setting the "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation for Pods using local storage that are safe for the autoscaler to restart. Moreover, consider running long-lived Pods that can't be restarted on a separate node pool, so they don't block scale-down of other nodes. Finally, learn how to analyze CA events in the logs to understand why a particular scaling activity didn't happen as expected. If your workloads are resilient to nodes restarting inadvertently and to capacity losses, you can save more money by creating a cluster or node pool with Spot VMs. For CA to work as expected, Pod resource requests need to be large enough for the Pod to function normally. If resource requests are too small, nodes might not have enough resources and your Pods might crash or have trouble during runtime. The following is a summary of the best practices for enabling Cluster Autoscaler in your cluster: Use either HPA or VPA to autoscale your workloads. Make sure you are following the best practices described for the chosen Pod autoscaler. Size your application correctly by setting appropriate resource requests and limits or use VPA. Define a PDB for your applications. Define a PDB for system Pods that might block your scale-down, for example, kube-dns. To avoid temporary disruption in your cluster, don't set a PDB for system Pods that have only one replica (such as metrics-server). Run short-lived Pods and Pods that can be restarted in separate node pools, so that long-lived Pods don't block their scale-down. Avoid over-provisioning by configuring idle nodes in your cluster. For that, you must know your minimum capacity—for many companies it's during the night—and set the minimum number of nodes in your node pools to support that capacity. If you need extra capacity to handle requests during spikes, use pause Pods, which are discussed in Autoscaler and over-provisioning. For more information, see Autoscaling a cluster. Node auto-provisioning Node auto-provisioning (NAP) is a mechanism of Cluster Autoscaler that automatically adds new node pools in addition to managing their size on the user's behalf. Without node auto-provisioning, GKE considers starting new nodes only from the set of user-created node pools. With node auto-provisioning, GKE can create and delete new node pools automatically. Node auto-provisioning tends to reduce resource waste by dynamically creating node pools that best fit the scheduled workloads. However, the autoscaling latency can be slightly higher when new node pools need to be created. If your workloads are resilient to nodes restarting inadvertently and to capacity losses, you can further lower costs by configuring a Spot VM toleration in your Pods. The following are best practices for enabling node auto-provisioning: Follow all of the best practices for Cluster Autoscaler. Set minimum and maximum resource sizes to avoid NAP making significant changes in your cluster when your application is not receiving traffic. When using Horizontal Pod Autoscaler for serving workloads, consider reserving a slightly larger target utilization buffer because NAP might increase autoscaling latency in some cases.
For more information, see Using node auto-provisioning and Unsupported features. Autoscaler and over-provisioning In order to control your costs, we strongly recommend that you enable the autoscalers described in the previous sections. No one configuration fits all possible scenarios, so you must fine-tune the settings for your workload to ensure that autoscalers respond correctly to increases in traffic. However, as noted in the Horizontal Pod Autoscaler section, scale-ups might take some time due to infrastructure provisioning. To visualize this difference in time and possible scale-up scenarios, consider the following image. When your cluster has enough room for deploying new Pods, one of the Workload scale-up scenarios is triggered. That is, if an existing node has never deployed your application, it must download its container images before starting the Pod (scenario 1). However, if the same node must start a new Pod replica of your application, the total scale-up time decreases because no image download is required (scenario 2). When your cluster doesn't have enough room for deploying new Pods, one of the Infrastructure and Workload scale-up scenarios is triggered. This means that Cluster Autoscaler must provision new nodes and start the required software before starting your application (scenario 1). If you use node auto-provisioning, depending on the workload scheduled, new node pools might be required. In this situation, the total scale-up time increases because Cluster Autoscaler has to provision nodes and node pools (scenario 2). For scenarios where new infrastructure is required, don't squeeze your cluster too much—meaning, you must over-provision, but only enough to reserve the necessary buffer to handle the expected peak requests during scale-ups. There are two main strategies for this kind of over-provisioning: Fine-tune the HPA utilization target. The following equation is a simple and safe way to find a good CPU target: (1 - buff)/(1 + perc) buff is a safety buffer that you can set to avoid reaching 100% CPU. This variable is useful because reaching 100% CPU means that the latency of request processing is much higher than usual. perc is the percentage of traffic growth you expect in two or three minutes. For example, if you expect a growth of 30% in your requests and you want to avoid reaching 100% of CPU by defining a 10% safety buffer, your formula would look like this: (1 - 0.1)/(1 + 0.3) = 0.69 Note: If you have large nodes and small applications whose traffic is consistently below what is requested, then you can get more aggressive (by reducing or removing the buff variable). You can be more aggressive because there are likely some free CPU cycles on the machine, so in some cases, it's safe for Pods to pass 100% utilization. Configure pause Pods. There is no way to configure Cluster Autoscaler to spin up nodes upfront. Instead, you can set an HPA utilization target to provide a buffer to help handle spikes in load. However, if you expect large bursts, setting a small HPA utilization target might not be enough or might become too expensive. An alternative solution for this problem is to use pause Pods. Pause Pods are low-priority deployments that do nothing but reserve room in your cluster. Whenever a high-priority Pod is scheduled, pause Pods get evicted and the high-priority Pod immediately takes their place. The evicted pause Pods are then rescheduled, and if there is no room in the cluster, Cluster Autoscaler spins up new nodes to fit them.
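One common way to implement the pause Pod approach described above is a low-priority PriorityClass plus a Deployment of placeholder Pods; the priority value, replica count, image tag, and resource sizes below are illustrative assumptions that you would tune to your node shape and expected bursts.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10                    # lower than the default priority (0), so these Pods are preempted first
globalDefault: false
description: "Placeholder capacity that real workloads can take over."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 3                 # roughly one placeholder Pod per node that you want to keep warm
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # does nothing; it only reserves the requested resources
        resources:
          requests:
            cpu: "3200m"
            memory: "1Gi"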
It's a best practice to have only a single pause Pod per node. For example, if you are using 4 CPU nodes, configure the pause Pods' CPU request with around 3200m. Choose the right machine type Beyond autoscaling, other configurations can help you run cost-optimized kubernetes applications on GKE. This section discusses choosing the right machine type. Spot VMs Spot VMs are Compute Engine VM instances that provide no availability guarantees. Spot VMs are up to 91% cheaper than standard Compute Engine VMs, but we recommend that you use them with caution on GKE clusters. Spot VMs on GKE are best suited for running batch or fault-tolerant jobs that are less sensitive to the ephemeral, non-guaranteed nature of Spot VMs. Stateful and serving workloads must not use Spot VMs unless you prepare your system and architecture to handle the constraints of Spot VMs. Whatever the workload type, you must pay attention to the following constraints: Pod Disruption Budget might not be respected because Spot VMs can shut down inadvertently. There is no guarantee that your Pods will shut down gracefully once node preemption ignores the Pod grace period. It might take several minutes for GKE to detect that the node was preempted and that the Pods are no longer running, which delays rescheduling the Pods to a new node. Starting with GKE v1.20 graceful node shutdown is enabled by default to provide notice to running workloads. Spot VMs have no guaranteed availability, meaning that they can stock out easily in some regions. To overcome this limitation, we recommend that you set a backup node pool without Spot VMs. Cluster Autoscaler gives preference to Spot VMs because it's optimized for infrastructure cost. For more information, see Run fault-tolerant workloads at lower costs with Spot VMs, Run fault-tolerant workloads at lower costs in Spot Pods and Run web applications on GKE using cost-optimized Spot VMs. E2 machine types E2 machine types (E2 VMs) are cost-optimized VMs that offer you 31% savings compared to N1 machine types. E2 VMs are suitable for a broad range of workloads, including web servers, microservices, business-critical applications, small-to-medium sized databases, and development environments. For more information about E2 VMs and how they compare with other Google Cloud machine types, see Performance-driven dynamic resource management in E2 VMs and Machine types. Select the appropriate region When cost is a constraint, where you run your GKE clusters matters. Due to many factors, cost varies per computing region. So make sure you are running your workload in the least expensive option but where latency doesn't affect your customer. If your workload requires copying data from one region to another—for example, to run a batch job—you must also consider the cost of moving this data. For more information on how to choose the right region, see Best practices for Compute Engine regions selection. Sign up for committed-use discounts If you intend to stay with Google Cloud for a few years, we strongly recommend that you purchase committed-use discounts in return for deeply discounted prices for VM usage. When you sign a committed-use contract, you purchase compute resources at a discounted price (up to 70% discount) in return for committing to paying for those resources for one year or three years. If you are unsure about how much resource to commit, look at your minimum computing usage—for example, during nighttime—and commit the payment for that amount. 
For more information about committed-use prices for different machine types, see VM instances pricing. Review small development clusters For small development clusters where you need to minimize costs, consider using Autopilot clusters. With clusters in this mode of operation, you aren't charged for system Pods, operating system costs, or unscheduled workloads. Review your logging and monitoring strategies If you use Cloud Logging and Cloud Monitoring to provide observability into your applications and infrastructure, you are paying only for what you use. However, the more your infrastructure and applications log, and the longer you keep those logs, the more you pay for them. Similarly, the more external and custom metrics you have, the higher your costs. Review inter-region egress traffic in regional and multi-zonal clusters The types of available GKE clusters are single-zone, multi-zonal, and regional. Because of the high availability of nodes across zones, regional and multi-zonal clusters are well suited for production environments. However, you are charged by the egress traffic between zones. For production environments, we recommend that you monitor the traffic load across zones and improve your APIs to minimize it. Also consider using inter-pod affinity and anti-affinity configurations to colocate dependent Pods from different services in the same nodes or in the same availability zone to minimize costs and network latency between them. The suggested way to monitor this traffic is to enable GKE usage metering and its network egress agent, which is disabled by default. For non-production environments, the best practice for cost saving is to deploy single-zone clusters. Prepare your environment to fit your workload type Enterprises have different cost and availability requirements. Their workloads can be divided into serving workloads, which must respond quickly to bursts or spikes, and batch workloads, which are concerned with eventual work to be done. Serving workloads require a small scale-up latency; batch workloads are more tolerant to latency. The different expectations for these workload types make choosing different cost-saving methods more flexible. Batch workloads Because batch workloads are concerned with eventual work, they allow for cost saving on GKE because the workloads are commonly tolerant to some latency at job startup time. This tolerance gives Cluster Autoscaler space to spin up new nodes only when jobs are scheduled and take them down when the jobs are finished. The first recommended practice is to separate batch workloads in different node pools by using labels and selectors, and by using taints and tolerations. The rationale is the following: Cluster Autoscaler can delete empty nodes faster when it doesn't need to restart pods. As batch jobs finish, the cluster speeds up the scale-down process if the workload is running on dedicated nodes that are now empty. To further improve the speed of scale-downs, consider configuring CA's optimize-utilization profile. Some Pods cannot be restarted, so they permanently block the scale-down of their nodes. These Pods, which include the system Pods, must run on different node pools so that they don't affect scale-down. The second recommended practice is to use node auto-provisioning to automatically create dedicated node pools for jobs with a matching taint or toleration. This way, you can separate many different workloads without having to set up all those different node pools. 
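To illustrate the node-pool separation described above, the following hypothetical Job targets a dedicated batch node pool through a node label and a matching taint; the label key, taint key, image, and resource values are assumptions chosen for the example.

apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report
spec:
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        workload-type: batch         # label set on the dedicated batch node pool
      tolerations:
      - key: workload-type           # matches the NoSchedule taint on that node pool
        operator: Equal
        value: batch
        effect: NoSchedule
      containers:
      - name: report
        image: example.com/jobs/nightly-report:1.0
        resources:
          requests:
            cpu: "500m"
            memory: "512Mi"

When jobs like this finish, the dedicated nodes empty out quickly, which lets Cluster Autoscaler remove them without having to evict unrelated Pods.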
We recommend that you use Spot VMs only if you run fault-tolerant jobs that are less sensitive to the ephemeral, non-guaranteed nature of Spot VMs. Serving workloads Unlike batch workloads, serving workloads must respond as quickly as possible to bursts or spikes. These sudden increases in traffic might result from many factors, for example, TV commercials, peak-scale events like Black Friday, or breaking news. Your application must be prepared to handle them. Problems in handling such spikes are commonly related to one or more of the following reasons: Applications not being ready to run on Kubernetes—for example, apps with large image sizes, slow startup times, or non-optimal Kubernetes configurations. Applications depending on infrastructure that takes time to be provisioned, like GPUs. Autoscalers and over-provisioning not being appropriately set. Prepare cloud-based Kubernetes applications Some of the best practices in this section can save money by themselves. However, because most of these practices are intended to make your application work reliably with autoscalers, we strongly recommend that you implement them. Understand your application capacity When you plan for application capacity, know how many concurrent requests your application can handle, how much CPU and memory it requires, and how it responds under heavy load. Most teams don't know these capacities, so we recommend that you test how your application behaves under pressure. Try isolating a single application Pod replica with autoscaling off, and then execute the tests simulating a real usage load. This helps you understand your per-Pod capacity. We then recommend configuring your Cluster Autoscaler, resource requests and limits, and either HPA or VPA. Then stress your application again, but with more intensity to simulate sudden bursts or spikes. Ideally, to eliminate latency concerns, these tests must run from the same region or zone where the application is running on Google Cloud. You can use the tool of your choice for these tests, whether it's a homemade script or a more advanced performance tool, like Apache Benchmark, JMeter, or Locust. For an example of how you can perform your tests, see Distributed load testing using Google Kubernetes Engine. Make sure your application can grow vertically and horizontally Ensure that your application can grow and shrink. This means you can choose to handle traffic increases either by adding more CPU and memory or by adding more Pod replicas. This gives you the flexibility to experiment with what fits your application better, whether that's a different autoscaler setup or a different node size. Unfortunately, some applications are single-threaded or limited by a fixed number of workers or subprocesses that make this experiment impossible without a complete refactoring of their architecture. Set appropriate resource requests and limits By understanding your application capacity, you can determine what to configure in your container resources. Resources in Kubernetes are mainly defined as CPU and memory (RAM). You configure the amount of CPU or memory required to run your application by using the request field spec.containers[].resources.requests, and you configure the cap by using the limit field spec.containers[].resources.limits. When you've correctly set resource requests, the Kubernetes scheduler can use them to decide which node to place your Pod on.
This guarantees that Pods are being placed on nodes that can make them function normally, so you experience better stability and reduced resource waste. Moreover, defining resource limits helps ensure that these applications never use all available underlying infrastructure provided by computing nodes. A good practice for setting your container resources is to use the same amount of memory for requests and limits, and a larger or unbounded CPU limit. Take the following deployment as an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wp
  template:
    metadata:
      labels:
        app: wp
    spec:
      containers:
      - name: wp
        image: wordpress
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
The reasoning for the preceding pattern is founded on how Kubernetes out-of-resource handling works. Briefly, when compute resources are exhausted, nodes become unstable. To avoid this situation, kubelet monitors and prevents total starvation of these resources by ranking the resource-hungry Pods. When the CPU is contended, these Pods can be throttled down to their requests. However, because memory is an incompressible resource, when memory is exhausted, the Pod needs to be taken down. To avoid having Pods taken down—and consequently, destabilizing your environment—you must set requested memory to the memory limit. You can also use VPA in recommendation mode to help you determine CPU and memory usage for a given application. Because VPA provides such recommendations based on your application usage, we recommend that you enable it in a production-like environment to face real traffic. VPA status then generates a report with the suggested resource requests and limits, which you can statically specify in your deployment manifest. If your application already defines HPA, see Mixing HPA and VPA. Make sure your container is as lean as possible When you run containers on Kubernetes, your application can start and stop at any moment. It's therefore important to follow these best practices: Have the smallest image possible. It's a best practice to have small images because every time Cluster Autoscaler provisions a new node for your cluster, the node must download the images that will run in that node. The smaller the image, the faster the node can download it. Start the application as quickly as possible. Some applications can take minutes to start because of class loading, caching, and so on. When a Pod requires a long startup, your customers' requests might fail while your application is booting. Consider these two practices when designing your system, especially if you are expecting bursts or spikes. Having a small image and a fast startup helps you reduce scale-up latency. Consequently, you can better handle traffic increases without worrying too much about instability. These practices work better with the autoscaling best practices discussed in GKE autoscaling. Add Pod Disruption Budget to your application Pod Disruption Budget (PDB) limits the number of Pods that can be taken down simultaneously by a voluntary disruption. That means the defined disruption budget is respected during rollouts, node upgrades, and any autoscaling activities. However, this budget cannot be guaranteed when involuntary disruptions happen, such as a hardware failure, a kernel panic, or someone deleting a VM by mistake. Because PDBs are respected during the Cluster Autoscaler compacting phase, it's a best practice to define a Pod Disruption Budget for every application.
This way you can control the minimum number of replicas required to support your load at any given time, including when CA is scaling down your cluster. For more information, see Specifying a Disruption Budget for your Application. Set meaningful readiness and liveness probes for your application Setting meaningful probes ensures your application receives traffic only when it is up and running and ready to accept traffic. GKE uses readiness probes to determine when to add Pods to or remove Pods from load balancers. GKE uses liveness probes to determine when to restart your Pods. The liveness probe is useful for telling Kubernetes that a given Pod is unable to make progress, for example, when a deadlock state is detected. The readiness probe is useful for telling Kubernetes that your application isn't ready to receive traffic, for example, while loading large cache data at startup. To ensure the correct lifecycle of your application during scale-up activities, it's important to do the following: Define the readiness probe for all your containers. If your application depends on a cache to be loaded at startup, the readiness probe must say it's ready only after the cache is fully loaded. If your application can start serving right away, a good default probe implementation can be as simple as possible, for example, an HTTP endpoint returning a 200 status code. If you implement a more advanced probe, such as checking if the connection pool has available resources, make sure your error rate doesn't increase as compared to a simpler implementation. Never make any probe logic access other services. It can compromise the lifecycle of your Pod if these services don't respond promptly. For more information, see Configure Liveness, Readiness and Startup Probes. Make sure your applications are shutting down according to Kubernetes expectations Autoscalers help you respond to spikes by spinning up new Pods and nodes, and by deleting them when the spikes finish. That means that to avoid errors while serving your Pods must be prepared for either a fast startup or a graceful shutdown. Because Kubernetes asynchronously updates endpoints and load balancers, it's important to follow these best practices in order to ensure non-disruptive shutdowns: Don't stop accepting new requests right after SIGTERM. Your application must not stop immediately, but instead finish all requests that are in flight and still listen to incoming connections that arrive after the Pod termination begins. It might take a while for Kubernetes to update all kube-proxies and load balancers. If your application terminates before these are updated, some requests might cause errors on the client side. If your application doesn't follow the preceding practice, use the preStop hook. Most programs don't stop accepting requests right away. However, if you're using third-party code or are managing a system that you don't have control over, such as nginx, the preStop hook is a good option for triggering a graceful shutdown without modifying the application. One common strategy is to execute, in the preStop hook, a sleep of a few seconds to postpone the SIGTERM. This gives Kubernetes extra time to finish the Pod deletion process, and reduces connection errors on the client side. Handle SIGTERM for cleanups. If your application must clean up or has an in-memory state that must be persisted before the process terminates, now is the time to do it. 
Different programming languages have different ways to catch this signal, so find the right way in your language. Configure terminationGracePeriodSeconds to fit your application needs. Some applications need more than the default 30 seconds to finish. In this case, you must specify terminationGracePeriodSeconds. High values might increase time for node upgrades or rollouts, for example. Low values might not allow enough time for Kubernetes to finish the Pod termination process. Either way, we recommend that you set your application's termination period to less than 10 minutes because Cluster Autoscaler honors it for 10 minutes only. If your application uses container-native load balancing, start failing your readiness probe when you receive a SIGTERM. This action directly signals load balancers to stop forwarding new requests to the backend Pod. Depending on the race between health check configuration and endpoint programming, the backend Pod might be taken out of traffic earlier. For more information, see Kubernetes best practices: terminating with grace. Set up NodeLocal DNSCache The GKE-managed DNS is implemented by kube-dns, an add-on deployed in all GKE clusters. When you run DNS-hungry applications, the default kube-dns-autoscaler configuration, which adjusts the number of kube-dns replicas based on the number of nodes and cores in the cluster, might not be enough. In this scenario, DNS queries can either slow down or time out. To mitigate this problem, companies are accustomed to tuning the kube-dns-autoscaler ConfigMap to increase the number of kube-dns replicas in their clusters. Although this strategy might work as expected, it increases the resource usage, and the total GKE cost. Another cost-optimized and more scalable alternative is to configure the NodeLocal DNSCache in your cluster. NodeLocal DNSCache is an optional GKE add-on that improves DNS lookup latency, makes DNS lookup times more consistent, and reduces the number of DNS queries to kube-dns by running a DNS cache on each cluster node. For more information, see Setting up NodeLocal DNSCache. Use container-native load balancing through Ingress Container-native load balancing lets load balancers target Kubernetes Pods directly and to evenly distribute traffic to Pods by using a data model called network endpoint groups (NEGs). This approach improves network performance, increases visibility, enables advanced load-balancing features, and enables the use of Cloud Service Mesh, Google Cloud's fully managed traffic control plane for service mesh. Because of these benefits, container-native load balancing is the recommended solution for load balancing through Ingress. When NEGs are used with GKE Ingress, the Ingress controller facilitates the creation of all aspects of the L7 load balancer. This includes creating the virtual IP address, forwarding rules, health checks, firewall rules, and more. Container-native load balancing becomes even more important when using Cluster Autoscaler. For non-NEG load balancers, during scale downs, load-balancing programming, and connection draining might not be fully completed before Cluster Autoscaler terminates the node instances. This might disrupt ongoing connections flowing through the node even when the backend Pods are not on the node. Container-native load balancing is enabled by default for Services when all of the following conditions are true: The Services were created in GKE clusters 1.17.6-gke.7 and higher. If you are using VPC-native clusters. If you are not using a Shared VPC. 
You are not using GKE Network Policy. For more information, see the GKE Ingress documentation and Using container-native load balancing. Consider using retries with exponential backoff In microservices architectures running on Kubernetes, transient failures might occur for various reasons—for example: A large spike that triggered a scale-up that is still in progress Network failures Connections dropped because Pods are not shutting down gracefully Spot VMs shutting down unexpectedly Applications reaching their rate limits These issues are ephemeral, and you can mitigate them by calling the service again after a delay. However, to prevent overwhelming the destination service with requests, it's important that you execute these calls using an exponential backoff, as illustrated in the sketch below. To facilitate such a retry pattern, many existing libraries implement exponential retry logic. You can use your library of choice or write your own code. If you use Istio or Cloud Service Mesh, you can opt for the proxy-level retry mechanism, which transparently executes retries on your behalf. Plan for your application to support service call retries, for example, by avoiding inserting information that was already inserted. Also consider that a chain of retries might increase the latency experienced by your end user, whose request might time out if the retries aren't planned correctly. Monitor your environment and enforce cost-optimized configurations and practices In many medium and large enterprises, a centralized platform and infrastructure team is often responsible for creating, maintaining, and monitoring Kubernetes clusters for the entire company. This responsibility creates a strong need for resource usage accountability and for making sure that all teams follow the company's policies. This section addresses options for monitoring and enforcing cost-related practices. Observe your GKE clusters and watch for recommendations You can check the resource utilization in a Kubernetes cluster by examining the containers, Pods, and services, and the characteristics of the overall cluster. There are many ways you can perform this task, but the initial approach we recommend is observing your GKE clusters through the Monitoring Dashboard. This gives you time-series data about how your cluster is being used, letting you aggregate and drill down across infrastructure, workloads, and services. Although this is a good starting point, Google Cloud provides other options—for example: In the Google Cloud console, on the GKE Clusters page, look at the Notifications column. If you have high resource waste in a cluster, the UI shows a hint that compares the overall allocated and requested resources. Go to GKE Cluster List In the Google Cloud console, on the Recommendations page, look for Cost savings recommendation cards. Go to Recommendation Hub For more details, see Observing your GKE clusters and Getting started with Recommendation Hub. Enable GKE usage metering For a more flexible approach that lets you see approximate cost breakdowns, try GKE usage metering. GKE usage metering lets you see your GKE clusters' usage profiles broken down by namespaces and labels. It tracks information about the resource requests and resource consumption of your cluster's workloads, such as CPU, GPU, TPU, memory, storage, and optionally network egress. GKE usage metering helps you understand the overall cost structure of your GKE clusters, which team or application is spending the most, which environment or component caused a sudden spike in usage or costs, and which team is being wasteful.
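As a brief illustration of the retry-with-exponential-backoff recommendation above, the following minimal Python sketch retries a hypothetical call_service() function with exponential backoff and jitter. The function name, attempt count, and delay values are illustrative assumptions, not part of any Google Cloud library.

import random
import time

def call_with_backoff(call_service, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a call that is prone to transient failures, using exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call_service()  # hypothetical function that performs the remote call
        except Exception:
            if attempt == max_attempts:
                raise  # give up and surface the error after the last attempt
            # Exponential backoff: 0.5s, 1s, 2s, 4s, ... capped at max_delay,
            # plus random jitter to avoid synchronized retry storms.
            delay = min(base_delay * (2 ** (attempt - 1)), max_delay)
            time.sleep(delay + random.uniform(0, delay))

Production code would typically retry only errors that are known to be transient (for example, HTTP 429 or 503 responses) and fail immediately on permanent errors.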
By comparing resource requests with actual utilization, you can understand which workloads are either under- or over-provisioned. You can take advantage of the default Looker Studio templates, or go a step further and customize the dashboards according to your organizational needs. For more information about GKE usage metering and its prerequisites, see Understanding cluster resource usage. Understand how Metrics Server works and monitor it Metrics Server is the source of the container resource metrics for GKE built-in autoscaling pipelines. Metrics Server retrieves metrics from kubelets and exposes them through the Kubernetes Metrics API. HPA and VPA then use these metrics to determine when to trigger autoscaling. For the health of GKE autoscaling, you must have a healthy Metrics Server. With the GKE metrics-server deployment, a resizer nanny is installed, which makes the Metrics Server container grow vertically by adding or removing CPU and memory according to the cluster's node count. In-place update of Pods is still not supported in Kubernetes, which is why the nanny must restart the metrics-server Pod to apply the new required resources. Although the restart happens quickly, the total latency for autoscalers to realize they must act can be slightly increased after a metrics-server resize. To avoid frequent Metrics Server restarts in fast-changing clusters, starting at GKE 1.15.11-gke.9, the nanny supports resize delays. Follow these best practices when using Metrics Server: Pick a GKE version that supports metrics-server resize delays. You can confirm it by checking whether the metrics-server deployment YAML file has the scale-down-delay configuration in the metrics-server-nanny container. Monitor the metrics-server deployment. If Metrics Server is down, no autoscaling is working at all. You want your top-priority monitoring services to monitor this deployment. Follow the best practices discussed in GKE autoscaling. Use Kubernetes Resource Quotas In multi-tenant clusters, different teams commonly become responsible for applications deployed in different namespaces. For a centralized platform and infrastructure group, it's a concern that one team might use more resources than necessary. A single team starving the cluster of compute resources, or triggering too many scale-ups, can increase your costs. To address this concern, you must use resource quotas. Resource quotas manage the amount of resources used by objects in a namespace. You can set quotas in terms of compute (CPU and memory) and storage resources, or in terms of object counts. Resource quotas let you ensure that no tenant uses more than its assigned share of cluster resources. For more information, see Configure Memory and CPU Quotas for a Namespace, and see the sketch below for a programmatic example. Consider using GKE Enterprise Policy Controller GKE Enterprise Policy Controller is a Kubernetes dynamic admission controller that checks, audits, and enforces your clusters' compliance with policies related to security, regulations, or arbitrary business rules. Policy Controller uses constraints to enforce your clusters' compliance. For example, you can install constraints in your cluster for many of the best practices discussed in the Preparing your cloud-based Kubernetes application section. This way, deployments are rejected if they don't strictly adhere to your Kubernetes practices. Enforcing such rules helps to avoid unexpected cost spikes and reduces the chances of having workload instability during autoscaling.
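Returning to the resource quota recommendation above, the following minimal sketch uses the official Kubernetes Python client to create a ResourceQuota for a hypothetical team-a namespace. The namespace name and quota values are illustrative assumptions; applying an equivalent YAML manifest with kubectl is the more common approach.

from kubernetes import client, config

# Load credentials from the local kubeconfig (use config.load_incluster_config() inside a Pod).
config.load_kube_config()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota", namespace="team-a"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "10",       # total CPU that Pods in the namespace can request
            "requests.memory": "20Gi",  # total memory that Pods in the namespace can request
            "limits.cpu": "20",
            "limits.memory": "40Gi",
            "pods": "50",               # maximum number of Pods in the namespace
        }
    ),
)

client.CoreV1Api().create_namespaced_resource_quota(namespace="team-a", body=quota)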
For more information about how to enforce and write your own rules, see Creating constraints and Writing a constraint template. If you aren't a GKE Enterprise customer, you can consider using Gatekeeper, the open source software that Policy Controller is built on. Design your CI/CD pipeline to enforce cost-saving practices GKE Enterprise Policy Controller helps you avoid deploying noncompliant software in your GKE cluster. However, we recommend that you enforce such policy constraints early in your development cycle, whether in pre-commit checks, pull request checks, delivery workflows, or any step that makes sense in your environment. This practice lets you find and fix misconfigurations quickly, and helps you understand what you need to pay attention to by creating guardrails. Also consider using kpt functions in your CI/CD pipeline to validate whether your Kubernetes configuration files adhere to the constraints enforced by GKE Enterprise Policy Controller, and to estimate resource utilization or deployment cost. This way, you can stop the pipeline when a cost-related issue is detected. Or you can create a different deployment approval process for configurations that, for example, increase the number of replicas. For more information, see Using Policy Controller in a CI pipeline, and for a complete example of a delivery platform, see Modern CI/CD with GKE Enterprise. Spread the cost-saving culture Many organizations create abstractions and platforms to hide infrastructure complexity from developers. This is a common practice in companies that are migrating their services from virtual machines to Kubernetes. Sometimes these companies let developers configure their own applications in production. However, it's not uncommon to see developers who have never touched a Kubernetes cluster. The practices we recommend in this section don't mean that you should stop creating abstractions altogether. Instead, they help you give visibility into your spending on Google Cloud and train your developers and operators on your infrastructure. You can do this by creating learning incentives and programs that use traditional or online classes, discussion groups, peer reviews, pair programming, gamification of CI/CD and cost saving, and more. For example, in the Kubernetes world, it's important to understand the impact of an application with a 3 GB container image, a missing readiness probe, or an HPA misconfiguration. Finally, as shown in Google's DORA research, culture capabilities are some of the main factors that drive better organizational performance, less rework, and less burnout. Cost saving is no different. Giving your employees access to their spending aligns them more closely with business objectives and constraints. Summary of best practices The following table summarizes the best practices recommended in this document.
Topic Task GKE cost-optimization features and options Fine-tune GKE autoscaling Choose the right machine type Select the appropriate region Sign up for committed-use discounts Review small development clusters Review your logging and monitoring strategies Review inter-region egress traffic in regional and multi-zonal clusters Prepare your environment to fit your workload type Prepare your cloud-native Kubernetes applications Understand your application capacity Make sure your application can grow both vertically and horizontally Set appropriate resource requests and limits Make sure your container is as lean as possible Add Pod Disruption Budget to your application Set meaningful readiness and liveness probes for your application Make sure your applications are shutting down in accordance with Kubernetes expectations Setup NodeLocal DNSCache Use container-native load balancing through Ingress Consider using retries with exponential backoff Monitor your environment and enforce cost-optimized configurations and practices Observe your GKE clusters and watch for recommendations Enable GKE usage metering Understand how Metrics Server works and monitor it Use Kubernetes Resource Quotas Consider using GKE Enterprise Policy Controller Design your CI/CD pipeline to enforce cost-saving practices Culture Spread the cost-saving culture What's next Find design recommendations and best practices to optimize the cost of Google Cloud workloads in Google Cloud Architecture Framework: Cost optimization. To learn how to save money at night or at other times when usage is lower, see the Reducing costs by scaling down GKE clusters during off-peak hours tutorial. To learn more about using Spot VMs, see the Run web applications on GKE using cost-optimized Spot VMs tutorial. To understand how you can save money on logging and monitoring, take a look at Optimize cost: Cloud operations. For reducing costs in Google Cloud in general, see Understanding the principles of cost optimization. For a broader discussion of scalability, see Patterns for scalable and resilient apps. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Learn more about Running a GKE application on Spot VMs with on-demand nodes as fallback. Send feedback \ No newline at end of file diff --git a/Best_practices_for_federating_Google_Cloud_with_an_external_identity_provider.txt b/Best_practices_for_federating_Google_Cloud_with_an_external_identity_provider.txt new file mode 100644 index 0000000000000000000000000000000000000000..968123f2416d65bf8bde1efe285cfc315495e43e --- /dev/null +++ b/Best_practices_for_federating_Google_Cloud_with_an_external_identity_provider.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/best-practices-for-federating +Date Scraped: 2025-02-23T11:55:11.128Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for federating Google Cloud with an external identity provider Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document presents best practices and guidance that help you set up federation consistently and securely. The guidance builds on the best practices for using Cloud Identity or Google Workspace with Google Cloud. All Google services, including Google Cloud, Google Marketing Platform, and Google Ads, rely on Google Sign-In to authenticate users. 
Instead of manually creating and maintaining user accounts in Cloud Identity or Google Workspace for each employee, you can federate Cloud Identity or Google Workspace with your external identity provider (IdP) such as Active Directory or Azure Active Directory. Setting up federation typically entails the following: Automatically provisioning relevant user accounts from an external authoritative source to Cloud Identity or Google Workspace. Enabling users to use an external IdP to authenticate to Google services. Note: A Cloud Identity or Google Workspace account denotes a directory of users, not an individual user account. Individual user accounts are referred to as users or user accounts. Map identities The identity of a user account that's managed by Cloud Identity or Google Workspace is defined by its primary email address. The primary email address is listed as Google Account email on the Google Account page. To be considered valid, the primary email address must use one of the domains that you have added to your Cloud Identity or Google Workspace account. The primary email address also serves these purposes: The primary email address is the username that a user must provide when signing in. Although users can add alternate email addresses or aliases to their Google user account, these addresses are not considered identities and can't be used to sign in. When an administrator needs to send out notifications to users (for example, to announce a potential security risk), those notifications are sent to the primary email address only. To set up single sign-on and automatic user provisioning between Google and your external IdP, you must map identities in your external IdP to corresponding identities in Cloud Identity or Google Workspace. The following sections describe best practices for establishing this mapping. Use the same identity across all federated systems All that's required to establish a mapping is to verify that the SAML assertion that the IdP supplies to Google contains a NameID claim with a value that matches the primary email address of an existing Cloud Identity or Google Workspace user. The IdP is free to use whatever mapping or logic is applicable to derive a suitable NameID claim for its existing users. Many IdPs rely on email addresses (or more generally, RFC 822–compliant names) to identify users. Examples include the following: The userPrincipalName attribute in Active Directory is an RFC 822–compliant name that uniquely identifies a user and that can be used to sign in to Windows or Active Directory Federation Services (AD FS). Azure Active Directory uses the UserPrincipalName attribute to identify users and let them sign in. If the users that your external IdP manages already rely on an email address as their identity, you can use the same identity as the primary email address in Cloud Identity or Google Workspace. Using the same user identity across federated systems has multiple advantages: When users sign in to a Google tool such as the Google Cloud console, they are first prompted to provide the primary email address of their Cloud Identity or Google Workspace user before they are redirected to your external IdP. Depending on your IdP, the user might then be presented with another sign-on screen that prompts for their username and password. If the email addresses differ for these steps, the sequence of sign-on screens can easily confuse end users. 
In contrast, if users can use a common identity across all steps, they aren't exposed to the technical differences between IdPs, which minimizes potential confusion. If you don't have to map between user identities, it's easier for you to correlate audit logs from Google Cloud with logs from on-premises systems. Similarly, if applications that are deployed on-premises and on Google Cloud need to exchange data containing user identities, you avoid the extra complexity of having to map user identifiers. For more details on mapping Active Directory users or Azure AD users to Cloud Identity or Google Workspace, see the Active Directory or Azure AD guide. Ensure that identities use routable email addresses Google Cloud uses the primary email address of a user to deliver notification emails. Examples of these notifications include the following: Budget alerts: If you've configured a budget alert, Google Cloud sends a notification to Billing Admins and Billing Users when a budget threshold is crossed. Payment notifications: Any payment-related notifications or alerts are sent to the email addresses of payment users that are configured for the affected billing account. Project invitations: If you assign the Project Admin role to a user that is part of a different Cloud Identity or Google Workspace account than the one the project's organization is associated with, an invitation is sent to the user. Before the newly granted role can take effect, the user has to accept the invite by clicking a link in the email message. Replies to support cases and other notifications from Support. If you use Google Workspace and have properly configured the necessary DNS MX records, all notification emails that are sent to the primary email address are delivered to the corresponding user's Gmail inbox. For Cloud Identity users, Google Cloud also attempts to deliver notification emails, but the email must be handled by your existing email infrastructure. To make sure that your Cloud Identity users don't miss notifications, make sure that your existing email system accepts messages that are sent to the primary email addresses of Cloud Identity users, and that it routes this email to the correct mailboxes. Do the following: Ensure that all domains added to Cloud Identity have DNS MX records that point to your existing email system. Ensure that the primary email address of a Cloud Identity user matches an existing mailbox in your email system. If there is a mismatch between the primary email address used by Cloud Identity and the user's email address, configure an alias in your existing email system so that email sent to each primary email address is routed to the right mailbox. If these solutions aren't practical, consider adding Google Workspace to your existing Cloud Identity account and assigning Google Workspace licenses to the key users that are in charge of billing or interacting with support, and who are therefore most likely to receive notifications. Assess migration options for existing consumer accounts If your employees were using Google services such as Google Ad Manager, Google AdSense, or Google Analytics before your organization signed up for Cloud Identity or Google Workspace, it's possible that your employees have been using consumer accounts to access these services. Letting employees use consumer accounts for business purposes can be risky. For more details on what these risks are and how you can surface existing consumer accounts, see Assessing your existing Google accounts. 
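As a brief illustration of the earlier guidance on routable email addresses, the following sketch checks that a domain added to Cloud Identity has MX records that you can then compare against your existing email system. It assumes the third-party dnspython library is installed (pip install dnspython); the domain name is a placeholder.

import dns.resolver

def check_mx_records(domain: str) -> None:
    """Print the MX records for a domain so you can confirm they point to your mail system."""
    try:
        answers = dns.resolver.resolve(domain, "MX")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"No MX records found for {domain}; notification emails might be undeliverable.")
        return
    for record in sorted(answers, key=lambda r: r.preference):
        print(f"{domain}: priority {record.preference} -> {record.exchange}")

check_mx_records("example.com")  # placeholder: run once for each domain added to Cloud Identity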
You can handle existing consumer accounts in the following ways: Keep the consumer accounts as they are, accepting potential risks. Migrate the accounts to Cloud Identity or Google Workspace by initiating a transfer. Force the owners of the unmanaged user accounts to change their email address to use a different domain. For more details on how to consolidate existing consumer accounts, see Assessing identity consolidation options. How you decide to deal with consumer accounts influences how and in what sequence you set up federation. Assess any existing consumer accounts that your users use before you begin creating user accounts or setting up single sign-on in Cloud Identity or Google Workspace. Map identity sets When you've defined how individual identities are mapped between your external IdP and Cloud Identity or Google Workspace, you have to determine the set of identities for which to enable access to Google services. The effective set of identities that can authenticate to Google services is the intersection of two sets: External identities that map to an identity in Cloud Identity or Google Workspace. External identities that your external IdP allows for single sign-on to Cloud Identity or Google Workspace. The process for controlling who is permitted to use single sign-on differs depending on which IdP you use. For example, Azure AD relies on assignments, while AD FS uses access control policies. Limit the set of identities that can authenticate to Google services Depending on how you plan to use Google Cloud, Google Workspace, and other Google tools, either all of your organization's employees or only a subset of them must be able to authenticate to Google services. If you plan to grant only a subset of your organization's employees access to Google services, then it's best to restrict authentication to this set of users. By restricting the number of users that can authenticate, you establish an extra layer of defense that helps in case you accidentally configured any access control rule to be too lax. You can limit the set of users that can authenticate to Google in the following ways: Minimize the number of user accounts in Cloud Identity or Google Workspace. If there is no mapped user account, any attempt by a user to authenticate will fail, even if the IdP permits the user to sign in to Cloud Identity or Google Workspace by using single sign-on. Configure single-sign on in your IdP to minimize the number of users that are allowed to sign in to Cloud Identity or Google Workspace. The best approach for your situation depends on whether you have existing consumer accounts that you need to migrate. Limit the set of identities that you provision if you still need to migrate consumer accounts You can migrate a consumer account to Cloud Identity or Google Workspace only if you haven't created a user account with the same identity in Cloud Identity or Google Workspace. If you have existing consumer accounts that you still plan to migrate, you need to be careful not to accidentally create conflicting user accounts. Follow these guidelines: Limit authentication by creating new Cloud Identity or Google Workspace user accounts only for users that need them and are known to not have an existing consumer account. Avoid inadvertently locking out migrated accounts by allowing single sign-on not only for the users for whom you create user accounts in Cloud Identity or Google Workspace, but also for all consumer accounts that are yet to be migrated. 
The following diagram shows how different types of identities overlap and interrelate. In the diagram, identities with a user account in Cloud Identity or Google Workspace are in the smallest (yellow) ellipse. That ellipse is contained in the second (blue) ellipse, which contains identities that are able to authenticate. The third and largest ellipse (gray) contains the others and consists of the identities that have a user account in your IdP. For details on how to configure Active Directory, Azure AD, or Okta, see our separate best practices guide. Prevent new consumer account sign-ups Adding a domain such as example.com to your Cloud Identity or Google Workspace account doesn't prevent employees from signing up for new consumer accounts using the same domain. These new consumer accounts are surfaced as unmanaged users in your Cloud Identity or Google Workspace account, but they aren't allowed to use single sign-on or any other configuration you applied in your Cloud Identity or Google Workspace account. One way to block the creation of a new consumer account is to create a user account in Cloud Identity or Google Workspace with the same email address. For example, if you created a user alice@example.com in your Cloud Identity or Google Workspace account, then any attempts by an employee to sign up for a new consumer account with the same identity will fail. However, if no corresponding user exists in Cloud Identity or Google Workspace yet, then signing up for a new consumer account will succeed. When there are no more consumer accounts to migrate to Cloud Identity, prevent new consumer account sign-ups by applying the following configuration: Limit authentication by permitting only relevant users to use single sign-on to Cloud Identity or Google Workspace. Provision Cloud Identity or Google Workspace users for all employees. Ensure that the user accounts use the employee's corporate email address as the primary email address or alias so that no new consumer accounts can be created using the same email address. If possible, keep user accounts that are not enabled for single sign-on in a suspended state. The following diagram shows this configuration. If you are still in the process of migrating consumer accounts, you can temporarily monitor new consumer account sign-ups by capturing the verification emails sent during the sign-up process. Verification emails use an envelope sender address that matches *@idverification.bounces.google.com. Set up a filter in your email system that identifies emails with this envelope sender address and holds them for internal review. Note: The sender address used by verification emails might change in the future. Make Cloud Identity or Google Workspace identities a subset of the identities in your external IdP Your Cloud Identity or Google Workspace account might contain user accounts with identities that don't map to any users in your external IdP. There are two typical scenarios that can result in these user accounts: You create a new user account in Cloud Identity or Google Workspace and use a primary email address that doesn't map to any user in your external IdP. You have a user account in Cloud Identity or Google Workspace that maps to an identity in your external IdP, but then you delete the identity in the external IdP. For example, you might delete the identity if the employee leaves the company. When you enable single sign-on in Cloud Identity or Google Workspace, all users (with the exception of super admins) are forced to use single sign-on. 
For a Cloud Identity or Google Workspace user account that doesn't map to an identity in your external IdP, any attempt to use single sign-on will fail. A user account without such a counterpart is effectively defunct, but might still pose a risk, as in the following situations: If at some point single sign-on is temporarily or permanently disabled, the user account can be used again. This might enable a former employee to sign in and access corporate resources. The name of the deleted user account might be reused. For example, an employee joining the company might share the same name with a different employee who left the company months or years earlier. If the previous employee's user account has meanwhile been deleted, it's possible that you might assign the new employee the username that the former employee used to use. The new employee's user account might have an internal ID that's different from that of the former employee. However, from a federation perspective, the two user accounts are considered the same if they both map to the same primary email address in Cloud Identity or Google Workspace. As a result, when the new employee signs in to Google, they might "inherit" existing data, settings, and permissions from the former employee. Cloud Identity and Google Workspace super-admin users are exempt from the requirement to use single sign-on, but they are still allowed to use single sign-on when sign-on is initiated by the IdP. The ability to use IdP-initiated single sign-on makes super-admin users sensitive to name squatting if they lack a counterpart in your external IdP. Consider the following example: Alice has a super-admin user alice-admin@example.com in Cloud Identity or Google Workspace and does not use two-step verification. Mallory, who does not know Alice's password, finds a way to create a new user in the external IdP that maps to alice-admin@example.com. Although this newly created user account might not have any special privileges and has no relation to Alice's user account, the fact that it shares a common identity with Alice's super-admin account now enables Mallory to use IdP-initiated single sign-on and authenticate as alice-admin@example.com. To mitigate this name-squatting risk, make sure that your Cloud Identity or Google Workspace identities are a subset of the identities in your external IdP. The best way to accomplish this is to map the entire user account lifecycle as implemented by your external IdP to the user account lifecycle in Cloud Identity or Google Workspace. Use dedicated super-admin users Google Workspace and Cloud Identity super-admin accounts have a powerful set of permissions that apply not only to the Cloud Identity or Google Workspace account, but also to an associated Google Cloud organization. Because only a small number of activities require super-admin privileges, whenever possible it's best to limit the number of administrators with super-admin access and to use less-privileged user accounts for the daily administration of your account or Google Cloud organization. When you've identified the administrators who require super-admin access, create dedicated super-admin user accounts—for example: Create a user account with the identity alice@example.com and assign regular permissions. This account should be used on an everyday basis for routine activities. Create a second user account with identity alice-admin@example.com and assign it super-user privileges. 
It's a best practice for Alice to use this account only for occasions where super-admin privileges are required. To ensure that notification emails sent to this email address are received in the user's mailbox, configure Google Workspace or your existing email system so that the email address alice-admin@example.com is an alias or is forwarded to alice@example.com. Ensure that both identities are recognized by your external IdP so that the set of identities in Cloud Identity or Google Workspace continues to be a subset of the identities in your external IdP. We recommend following a naming scheme for these dedicated super-admin accounts so that in audit logs you can track where and how they're being used. For example, add -admin to the username, as in the previous example. Limit the number of service users in Google Workspace and Cloud Identity Your external IdP might contain not only user accounts for employees, but also user accounts intended for machine users such as applications, tools, or background jobs. In contrast, the preferred way on Google Cloud to enable an application to authenticate and access Google APIs is to implement one of the following approaches: A web application or tool that needs to access Google APIs or services on an end user's behalf should use either OAuth 2.0 or OpenID Connect. By using one of these protocols, the application first solicits the end user's consent to access the user's data and, on receiving consent, is then able to perform the access on behalf of the end user. If access to APIs or services should not be on behalf of an end user but rather on behalf of the application itself, it's best to use Google Cloud service accounts for authentication and then grant the service account access to the resources by using IAM. If Google Cloud service accounts are granted the appropriate roles in IAM, they can be used to authenticate and use any Cloud API. If you need to grant a service account access to APIs that are outside Google Cloud, for example, to the Directory API or Drive API, you can use Google Workspace domain-wide delegation. With Google Workspace domain-wide delegation, you enable a service account to act on behalf of a Cloud Identity or Google Workspace user. Because delegation is a powerful privilege, we recommend that you use Google Workspace domain-wide delegation only in cases where the OAuth approach is not practical. Use a dedicated Cloud Identity or Google Workspace user for every Google Cloud service account that is enabled for Google Workspace domain-wide delegation. Create this user in your external IdP and then provision it to Cloud Identity or Google Workspace so that the set of users in Cloud Identity or Google Workspace continues to be a subset of the users in your external IdP. Avoid creating service users in Cloud Identity and Google Workspace that you don't intend to use for Google Workspace domain-wide delegation. Map user account lifecycles The user accounts in your external IdP follow a certain lifecycle. Typically, this lifecycle reflects the employment relationship between the employee and the company: A new user account is created for an employee joining the organization. The user account might temporarily be suspended and later reactivated—for example, when the employee goes on leave. When the user leaves the company, the user account is either deleted or kept in a suspended state for a certain retention period before ultimately being deleted. The following diagram illustrates this lifecycle. 
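To make the preceding discussion of service accounts and Google Workspace domain-wide delegation more concrete, the following hedged Python sketch shows how a service account with a delegated Directory API scope could suspend a user account, which is one way to propagate the lifecycle events described in the next sections. The key file path, admin identity, and target user are placeholder assumptions; the service account must already be enabled for domain-wide delegation with the listed scope, and the sketch uses the google-auth and google-api-python-client libraries.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

# Placeholder values: a service account key enabled for domain-wide delegation,
# impersonating a dedicated provisioning user that is allowed to manage users.
credentials = service_account.Credentials.from_service_account_file(
    "provisioner-key.json",
    scopes=SCOPES,
    subject="provisioner@example.com",
)

directory = build("admin", "directory_v1", credentials=credentials)

# Suspend the user; reactivation would set "suspended" back to False.
directory.users().update(
    userKey="alice@example.com",
    body={"suspended": True},
).execute()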
This section lists best practices for ensuring that the lifecycle of user accounts in Cloud Identity or Google Workspace follows the lifecycle of corresponding user accounts in your external IdP. Designate your external IdP as the source of truth Many HR information systems (HRIS), IdPs, and adapters only support one-way user provisioning. This means that changes performed in the HRIS or IdP are propagated to Cloud Identity or Google Workspace, but changes performed in Cloud Identity or Google Workspace are not propagated back. To prevent inconsistencies caused by one-way provisioning, designate your external IdP as the source of truth. Exclusively use your external IdP (or HRIS) to create, modify, or delete users, and rely on automated provisioning to have changes be propagated to Google Workspace and Cloud Identity. By designating your external IdP as the source of truth, you limit the risk of inconsistencies and of having manual modifications overridden by the IdP. Automate user provisioning to Cloud Identity or Google Workspace To enable an employee to authenticate to Google by using single sign-on, you first have to create a user account for the employee in Cloud Identity or Google Workspace. To ensure consistency with your external IdP, it's best to automate the process of provisioning these user accounts in Cloud Identity or Google Workspace: By automating provisioning, you can ensure that Cloud Identity or Google Workspace identities are always a subset of the identities in your external IdP. You minimize the risk of inadvertently introducing a mismatch between an identity in Cloud Identity or Google Workspace and the corresponding identity in your external IdP. Mismatches such as an accidental misspelling of the email address can otherwise lead to employees experiencing problems signing in. You minimize manual administration effort. If you use an HRIS to manage the onboarding process, then one way to automate user provisioning is to configure the HRIS to provision user accounts to both your external IdP and to Cloud Identity or Google Workspace, as in the following diagram. For this setup to work well, you have to ensure that your HRIS provisions user accounts so that they properly map to each other. Also, the HRIS must handle the decision of which user accounts to provision to Cloud Identity or Google Workspace. Another way to automate user provisioning that works independently of an HRIS is to use your external IdP as the authoritative source for provisioning users in Cloud Identity or Google Workspace. In this setup, the configuration for mapping user accounts and user account sets is managed in the IdP or by the adapter, as the following diagram shows. If you use Active Directory as your IdP, you can duplicate this setup by using Google Cloud Directory Sync. Other IdPs such as Azure AD or Okta have built-in adapters for provisioning users to Google Workspace. Because Google Workspace and Cloud Identity are based on the same platform and use the same APIs, these adapters can also be used for Cloud Identity. Propagate suspension events to Cloud Identity or Google Workspace When an employee leaves the organization, whether temporarily or permanently, we recommend that you revoke the employee's access to Google services. By suspending the employee's user account in your external IdP, you revoke their ability to use single sign-on to authenticate to Google, but that might not fully revoke access to all Google services. 
The following can still occur: If the Cloud Identity or Google Workspace user has an active browser session, the session will continue to work. Any OAuth tokens that have already been obtained also continue to be valid. If the user has super-admin privileges or if you have configured network masks, the employee might still be able to sign on using a password. The user account might still receive notifications from Google Cloud, including budget alerts. If the user has a Google Workspace license assigned, emails will continue to be delivered to the user's mailbox, the user still shows up in the address book, and the user might still be listed as available in Google Calendar. If you allow users to use less secure apps, the user might still be able to access Gmail, Google Calendar, and other data by using app passwords. To fully revoke access to Google services, propagate suspension events to Cloud Identity or Google Workspace in the following ways: Ensure that whenever a user account is suspended in your external IdP, the corresponding user account in Cloud Identity or Google Workspace is suspended as well. Suspending a user in Cloud Identity or Google Workspace terminates active browser sessions, invalidates tokens, and revokes all other access. Similarly, when you reactivate a user account in your external IdP, make sure that you also reactivate the corresponding user account in Cloud Identity or Google Workspace. Suspending a user account in Cloud Identity or Google Workspace is a non-destructive operation. While the user account is suspended, the user's data is retained, Google Workspace licenses stay attached, and assigned roles in Google Cloud remain unchanged. Propagate deletion events to Cloud Identity or Google Workspace When an employee permanently leaves the organization, you might decide to not only suspend the employee's user account in your external IdP, but also delete it. If you delete a user account in your external IdP, but you don't delete the corresponding Cloud Identity or Google Workspace user, then the set of users in Cloud Identity and Google Workspace is not a subset of the users in your external IdP anymore. This can turn into a problem in the future if a new employee joins your organization who happens to have the same name as the employee who left the company. If you create a user account in your IdP for the new employee, you might reuse the old employee's username, causing the new user account to map to the old user account in Cloud Identity or Google Workspace. As a result, the new employee might gain access to the old employee's data and settings. You can avoid the risks associated with orphaned Cloud Identity or Google Workspace user accounts by deleting a Cloud Identity or Google Workspace user as soon as the corresponding user account in your IdP is deleted. Deleting a Cloud Identity or Google Workspace user is a destructive operation that can only be undone within a certain time period. Depending on the Google services consumed by the user, deleting the user might cause associated data to be permanently deleted and might therefore not meet your compliance requirements. To preserve the user's settings and data for a certain retention period before permanently deleting them, implement one of the following approaches: Delay the user account deletion in your external IdP by keeping the user in a suspended state for a certain retention period. Delete the user in both your IdP and Cloud Identity or Google Workspace after the retention period has elapsed.
When you delete a user account in your external IdP, suspend the corresponding Cloud Identity or Google Workspace user and change its primary email address to a name that is unlikely to ever cause a conflict. For example, rename alice@example.com to obsolete-yyyymmdd-alice@example.com where yyyymmdd reflects the date of the deletion in your external IdP. Keep the renamed user account in a suspended state for a retention period, and delete it after the retention period has elapsed. To preserve Google Workspace data for suspended users, either keep the Google Workspace license assigned to the user or switch it to an Archived User license to ensure that Google Workspace Vault retention rules continue to apply and that the user's data is preserved. Single sign-on All Google products, including services such as Google Cloud, Google Ads, and YouTube, use Google Sign-In to authenticate users. A service initiates the authentication process by redirecting a user to accounts.google.com where the user sees the Google sign-in screen and is prompted for their email address. Depending on the domain of the provided email address, the user account is then looked up in Gmail, the consumer account directory, or if the domain matches its primary, secondary, or alias domain, in a Cloud Identity or Google Workspace account. The following diagram illustrates this authentication process. If you configure a Cloud Identity or Google Workspace account to use single sign-on, then users are redirected to an external IdP to authenticate. When the external IdP has completed the authentication, the result is relayed back to Google Sign-In by means of a SAML assertion. This SAML assertion serves as proof of a successful authentication. The assertion contains the email address of the user, and is signed by the external IdP's certificate so that Google Sign-In can verify its authenticity. This process is referred to as service provider–initiated single sign-on, and it applies to all users except for super admins. Super admins are never redirected to an external IdP and are prompted for a password instead. Some IdPs also support IdP-initiated single sign-on. In this model, the user authenticates at the external IdP and then follows a link pointing to a Google service such as Google Cloud or Google Ads. If single sign-on has been enabled for a Cloud Identity or Google Workspace account, all users of that account are allowed to use IdP-initiated single sign-on, including super admins. Minimize the set of attributes passed in SAML assertions After a user has authenticated at the external IdP, Cloud Identity or Google Workspace use the SAML assertion that is passed by the external IdP to establish a session. In addition to validating its authenticity, this process includes identifying the Cloud Identity or Google Workspace user account that corresponds to the NameID value in the SAML assertion. The NameID value is expected to contain the primary email address of the user account to be authenticated. The email address is case sensitive. Aliases and alternate email addresses are not considered. To avoid accidental spelling or casing mismatches, it's best to automatically provision user accounts. SAML assertions are allowed to contain additional attributes, but they are not considered during authentication. Attributes containing information such as a user's forename, surname, or group memberships are ignored because the user's account in Cloud Identity or Google Workspace is considered the only source for this user information. 
To minimize the size of SAML assertions, configure your IdP to not include any attributes that aren't required by Google Sign-In. Use google.com as the issuer Google Sign-In sessions are not restricted to a single tool or domain. Instead, once a session has been established, the session is valid across all Google services that the user has been granted access to. This list of services might include services like Google Cloud and Google Ads as well as services such as Google Search and YouTube. The global nature of a session is reflected in the SAML protocol exchange: by default, Google uses google.com as the issuer (the Issuer element in the SAML request) in SAML requests, and it expects SAML assertions to specify google.com as the audience (the Audience element in the SAML response). You can change this default by enabling the Use a domain-specific issuer option when you configure single sign-on in Cloud Identity or Google Workspace. Enable the domain-specific issuer option only if you have more than one Cloud Identity or Google Workspace account (and therefore multiple domains) and your IdP needs to be able to distinguish between sign-ons initiated by one Cloud Identity or Google Workspace account versus another account. When you enable the option, SAML requests use google.com/a/DOMAIN as the issuer and expect google.com/a/DOMAIN as the audience, where DOMAIN is the primary domain of the Cloud Identity or Google Workspace account. In all other cases, keep the default setting to use google.com as the issuer and configure your external IdP to specify google.com as the audience in SAML assertions. Depending on your IdP, this setting might also be referred to as issuer, relying party trust identifier, or entity ID. Align the length of Google sessions and IdP sessions When a user has completed the single sign-on process and a session has been established, the Google Sign-In session remains active for a certain period of time. When this time period elapses or if the user signs out, the user is requested to re-authenticate by repeating the single sign-on process. By default, the session length for Google services is 14 days. For users with a Cloud Identity Premium or Google Workspace Business license, you can change the default session length to a different period such as 8 hours. You can control the session length used by Google Cloud. The Google Cloud session applies to the Google Cloud console as well as to the Google Cloud CLI and other API clients. You can set the Google Cloud session length in all Cloud Identity and Google Workspace account types. Most external IdPs also maintain a session for authenticated users. If the length of the IdP session is longer than the Google session length, then re-authentication might occur silently. That is, the user is redirected to the IdP but might not have to enter credentials again. Silent re-authentication might undermine the intent of shortening the length of Google sessions. To ensure that users have to enter their credentials again after a Google session has expired, configure the Google session length and the IdP session length so that the IdP session length is shorter than the Google session length. Note: Google Admin console sessions have a fixed session length of 1 hour. This cannot be customized. Disable single sign-on for super-admin users For super-admin users, SSO is optional: They can use SSO when the sign-on is initiated by the IdP, but they can also sign in by using username and password. 
If you're not planning to use IdP-initiated single sign-on for super-admin users, you can decrease risk by using the following procedure: Add a dedicated organizational unit for super-admins. Assign an SSO profile to the organizational unit that disables single sign-on. Otherwise, if you're planning to use IdP-initiated single sign-on, make sure that you enforce post-SSO verification for super-admin users. Multi-factor authentication Enabling single sign-on between Cloud Identity or Google Workspace and your external IdP improves the user experience for employees. Users have to authenticate less frequently and don't need to memorize separate credentials to access Google services. But enabling users to seamlessly authenticate across services and environments also means that the potential impact of compromised user credentials increases. If a user's username and password are lost or stolen, an attacker might use these credentials to access resources not only in your existing environment, but also across one or more Google services. To mitigate the risk of credential theft, it's best to use multi-factor authentication for all user accounts. Enforce multi-factor authentication for users When you have configured single sign-on for your Cloud Identity or Google Workspace account, users without super-admin privilege have to use your external IdP to authenticate. Configure your IdP to require the use of a second factor (such as a one-time code or a USB key) to enforce that multi-factor authentication is applied whenever single sign-on to Google is used. If your external IdP doesn't support multi-factor authentication, consider enabling post-SSO verification to have Google perform additional verification after the user returns from authenticating with your external IdP. Avoid using network masks, because they could be abused as a way to sidestep multi-factor authentication for users. Enforce Google 2-step authentication for super-admin users Super-admin users are not redirected to your external IdP when they try to sign in to the Google Cloud console or other Google sites. Instead, super-admin users are prompted to authenticate by username and password. Because super-admin users can authenticate by username and password, they are not subject to any multi-factor authentication policies that your IdP might enforce. To protect super-admin users, enforce Google 2-step authentication for all super-admin users. Super-admin users typically fall into one of two categories: Personalized super-admin users: These users are personalized and intended to be used by a single administrator. We recommend that you enforce 2-step verification for all personalized super-admin users. Machine super-admin users: Although it's best to avoid these user accounts, they are sometimes necessary to enable tools such as Cloud Directory Sync or the Azure AD gallery app to manage users and groups in Cloud Identity or Google Workspace. Limit the number of machine super-admin users and try to enable 2-step verification whenever possible. For users that don't use single sign-on, you can enforce 2-step authentication on either account globally, by organizational unit, or by group: If you don't have any machine super-admin users who cannot use 2-step verification, it's best to turn on enforcement for all users. By enforcing 2-step verification for all users, you avoid the risk of accidentally missing users. 
If you have some machine super-admin users who cannot use 2-step verification, use a dedicated group to control 2-step verification. Create a new group, turn on enforcement for the group, and add all relevant super-users to it. For more details on using 2-step authentication to secure super-admin users, see our security best practices for administrator accounts. Enforce post-SSO verification for super-admin users Although super-admin users are not required to use single sign-on, they are still allowed to use single sign-on when initiated by the IdP. To ensure that super-admin users are always subject to 2-step verification, even if they authenticate by IdP-initiated sign-in, turn on additional SSO verifications and 2-Step verification for all super-admin users. The additional SSO verifications might seem redundant in cases where your IdP already enforce multi-factor authentication, but the setting can help protect super-admin users if your IdP becomes compromised. Don't restrict single sign-on by network mask When you configure single sign-on in Cloud Identity or Google Workspace, you can optionally specify a network mask. If you specify a mask, only users that have IP addresses matching the mask are subject to single sign-on; other users are prompted for a password when they sign in. If you use network masks, you might be undermining the multi-factor authentication that's enforced by your external IdP. For example, by changing locations or using a VPN, a user might be able to control whether the network mask applies or not. If you enforce multi-factor authentication at the external IdP, then this tactic might allow a user or attacker to circumvent the multi-factor authentication policies configured at your external IdP. To ensure that multi-factor authentication applies consistently regardless of a user's location or IP address, avoid restricting single sign-on by network mask. To restrict access to resources by IP address, either configure an appropriate IP allow list at your external IdP or use context-aware access for Google Cloud and Google Workspace. What's next Learn about additional best practices: Best practices for planning accounts and organizations Best practices for securing super-admin accounts Security checklist for medium and large businesses Compare patterns for federating an external IdP with Google Cloud. Send feedback \ No newline at end of file diff --git a/Best_practices_for_implementing_machine_learning_on_Google_Cloud.txt b/Best_practices_for_implementing_machine_learning_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..d24ad4051fe71cf97ebe2223a36154632c28bf9e --- /dev/null +++ b/Best_practices_for_implementing_machine_learning_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ml-on-gcp-best-practices +Date Scraped: 2025-02-23T11:46:05.423Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for implementing machine learning on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-09 UTC This document introduces best practices for implementing machine learning (ML) on Google Cloud, with a focus on custom-trained models based on your data and code. It provides recommendations on how to develop a custom-trained model throughout the ML workflow, including key actions and links for further reading. 
The following diagram gives a high-level overview of the stages in the ML workflow addressed in this document including related products: ML development Data preparation ML training Model deployment and serving ML workflow orchestration Artifact organization Model monitoring The document is not an exhaustive list of recommendations; its goal is to help data scientists and ML architects understand the scope of activities involved in using ML on Google Cloud and plan accordingly. And while ML development alternatives like AutoML are mentioned in Use recommended tools and products, this document focuses primarily on custom-trained models. Before following the best practices in this document, we recommend that you read Introduction to Vertex AI. This document assumes the following: You are primarily using Google Cloud services; hybrid and on-premises approaches are not addressed in this document. You plan to collect training data and store it in Google Cloud. You have an intermediate-level knowledge of ML, big data tools, and data preprocessing, as well as a familiarity with Cloud Storage, BigQuery, and Google Cloud fundamentals. If you are new to ML, check out Google's ML Crash Course. Use recommended tools and products The following table lists recommended tools and products for each phase of the ML workflow as outlined in this document: ML workflow step Recommended tools and products ML environment setup Vertex AI SDK for Python Colab Enterprise Vertex AI Workbench instances Terraform ML development BigQuery Cloud Storage Vertex AI Workbench instances Label data Vertex AI Feature Store Vertex AI TensorBoard Vertex AI training Vertex AI Experiments AutoML Tabular Workflow for End-to-End AutoML Tabular Workflow for TabNet ML in BigQuery Vertex AI Vizier Data preparation BigQuery Dataflow (Apache Beam) Dataproc (Apache Spark) Dataplex (Data Catalog) ML training PyTorch TensorFlow XGBoost scikit-learn Vertex AI Feature Store Vertex AI Pipelines Vertex AI training Model evaluation in Vertex AI Model deployment and serving Predictions on Vertex AI Vertex AI Feature Store Vertex AI Vector Search Streaming import Custom prediction routines VM cohosting TensorFlow Enterprise Manage BigQuery ML models ML workflow orchestration Vertex AI Pipelines Artifact organization Vertex ML Metadata Vertex AI Model Registry Model monitoring Vertex Explainable AI Vertex AI Model Monitoring Model monitoring with BigQuery ML Managed-open source platforms Ray on Vertex AI Google offers AutoML, forecasting with Vertex AI, and BigQuery ML as prebuilt training routine alternatives to Vertex AI custom-trained model solutions. The following table provides recommendations about when to use these options for Vertex AI. ML environment Description Choose this environment if... BigQuery ML BigQuery ML brings together data, infrastructure, and predefined model types into a single system. All of your data is contained in BigQuery. You are comfortable with SQL. The set of models available in BigQuery ML matches the problem you are trying to solve. AutoML (in the context of Vertex AI) AutoML provides training routines for common problems like image classification and tabular regression. Nearly all aspects of training and serving a model, like choosing an architecture, hyperparameter tuning, and provisioning machines, are handled for you. Your data and problem match one of the formats with data types and model objectives that AutoML supports. The model can be served from Google Cloud or deployed to an external device. 
See Train and use your own models and Training an AutoML Edge model using Google Cloud console. For text, video, or tabular models, your model can tolerate inference latencies > 100ms. You can also train AutoML tabular models from the BigQuery ML environment. Vertex AI custom trained models Vertex lets you run your own custom training routines and deploy models of any type on serverless architecture. Vertex AI offers additional services, like hyperparameter tuning and monitoring, to make it easier to develop a model. See Choose a custom training method. Your problem does not match the criteria listed in this table for BigQuery ML or AutoML. You are already running training on-premises or on another cloud platform, and you need consistency across the platforms. ML environment setup We recommend that you use the following best practices when you set up your ML environment: Use Vertex AI Workbench instances for experimentation and development. Create a Vertex AI Workbench instance for each team member. Store your ML resources and artifacts based on your corporate policy. Use Vertex AI SDK for Python. Use Vertex AI Workbench instances for experimentation and development Regardless of your tooling, we recommend that you use Vertex AI Workbench instances for experimentation and development, including writing code, starting jobs, running queries, and checking status. Vertex AI Workbench instances let you access all of Google Cloud's data and AI services in a simple, reproducible way. Vertex AI Workbench instances also give you a secure set of software and access patterns right out of the box. It is a common practice to customize Google Cloud properties like network and Identity and Access Management, and software (through a container) associated with a Vertex AI Workbench instance. For more information, see Introduction to Vertex AI and Introduction to Vertex AI Workbench instances. Alternatively, you can use Colab Enterprise, which is a collaborative managed notebook environment that uses the security and compliance capabilities of Google Cloud. Create a Vertex AI Workbench instance for each team member Create a Vertex AI Workbench instance for each member of your data science team. If a team member is involved in multiple projects, especially projects that have different dependencies, we recommend using multiple instances, treating each instance as a virtual workspace. Note that you can stop Vertex AI Workbench instances when they are not being used. Store your ML resources and artifacts based on your corporate policy The simplest access control is to store both your raw and Vertex AI resources and artifacts, such as datasets and models, in the same Google Cloud project. More typically, your corporation has policies that control access. In cases where your resources and artifacts are stored across projects, you can configure your corporate cross-project access control with Identity and Access Management (IAM). Use Vertex AI SDK for Python Use Vertex AI SDK for Python, a Pythonic way to use Vertex AI for your end-to-end model building workflows, which works seamlessly with your favorite ML frameworks including PyTorch, TensorFlow, XGBoost, and scikit-learn. Alternatively, you can use the Google Cloud console, which supports the functionality of Vertex AI as a user interface through the browser. ML development We recommend the following best practices for ML development: Best practices: Prepare training data. Store structured and semi-structured data in BigQuery. 
Store image, video, audio, and unstructured data on Cloud Storage. Use Vertex AI Feature Store with structured data. Use Vertex AI TensorBoard and Vertex AI Experiments for analyzing experiments. Train a model within a Vertex AI Workbench instance for small datasets. Maximize your model's predictive accuracy with hyperparameter tuning. Use a Vertex AI Workbench instance to understand your models. Use feature attributions to gain insights into model predictions. ML development addresses preparing the data, experimenting, and evaluating the model. When solving an ML problem, it is typically necessary to build and compare many different models to figure out what works best. Typically, data scientists train models using different architectures, input datasets, hyperparameters, and hardware. Data scientists evaluate the resulting models by looking at aggregate performance metrics like accuracy, precision, and recall on test datasets. Finally, data scientists evaluate the performance of the models against particular subsets of their data, different model versions, and different model architectures. Prepare training data The data used to train a model can originate from any number of systems, for example, logs from an online service system, images from a local device, or documents scraped from the web. Regardless of your data's origin, extract data from the source systems and convert it to the format and storage (separate from the operational source) optimized for ML training. For more information on preparing training data for use with Vertex AI, see Train and use your own models. Store structured and semi-structured data in BigQuery If you're working with structured or semi-structured data, we recommend that you store all data in BigQuery, following BigQuery's recommendation for project structure. In most cases, you can store intermediate, processed data in BigQuery as well. For maximum speed, it's better to store materialized data instead of using views or subqueries for training data. Read data out of BigQuery using the BigQuery Storage API. For artifact tracking, consider using a managed tabular dataset. The following table lists Google Cloud tools that make it easier to use the API: If you're using... Use this Google Cloud tool TensorFlow for Keras tf.data.dataset reader for BigQuery TFX BigQuery client Dataflow Google BigQuery I/O Connector Any other framework (such as PyTorch, XGBoost, or scikit-learn) Importing models in BigQuery Store image, video, audio, and unstructured data on Cloud Storage Store this data in large container formats on Cloud Storage. This applies to sharded TFRecord files if you're using TensorFlow, or Avro files if you're using any other framework. Combine many individual images, videos, or audio clips into large files, as this will improve your read and write throughput to Cloud Storage. Aim for files of at least 100 MB, and between 100 and 10,000 shards. To enable data management, use Cloud Storage buckets and directories to group the shards. For more information, see Product overview of Cloud Storage. Use data labeling services with the Google Cloud console You can create and import training data through the Vertex AI page in the Google Cloud console. By using the prompt and tuning capabilities of Gemini, you can manage text data with customized classification, entity extraction, and sentiment analysis. There are also data labeling solutions on the Google Cloud console Marketplace, such as Labelbox and Snorkel Flow.
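As a minimal sketch of the BigQuery guidance above, assuming a materialized training table (the project, dataset, and table names below are placeholders), you might read training data through the BigQuery client library, which can use the BigQuery Storage API for fast reads:

from google.cloud import bigquery

# Placeholder project, dataset, and table names.
client = bigquery.Client(project="your-project-id")

query = """
    SELECT *
    FROM `your-project-id.your_dataset.training_examples`
"""

# to_dataframe() uses the BigQuery Storage API for faster reads when the
# google-cloud-bigquery-storage package is installed.
training_df = client.query(query).to_dataframe()
print(training_df.shape)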
Use Vertex AI Feature Store with structured data You can use Vertex AI Feature Store to create, maintain, share, and serve ML features in a central location. It's optimized to serve workloads that need low latency, and lets you store feature data in a BigQuery table or view. To use Vertex AI Feature Store, you must create an online store instance and define your feature views. BigQuery stores all the feature data, including historical feature data to allow you to work offline. Use Vertex AI TensorBoard and Vertex AI Experiments for analyzing experiments When developing models, use Vertex AI TensorBoard to visualize and compare specific experiments—for example, based on hyperparameters. Vertex AI TensorBoard is an enterprise-ready managed service with a cost-effective, secure solution that lets data scientists and ML researchers collaborate by making it seamless to track, compare, and share their experiments. Vertex AI TensorBoard enables tracking experiment metrics like loss and accuracy over time, visualizing the model graph, projecting embeddings to a lower dimensional space, and much more. Use Vertex AI Experiments to integrate with Vertex ML Metadata and to log and build linkage across parameters, metrics, and dataset and model artifacts. Train a model within a Vertex AI Workbench instance for small datasets Training a model within the Vertex AI Workbench instance may be sufficient for small datasets, or subsets of a larger dataset. It may be helpful to use the training service for larger datasets or for distributed training. Using the Vertex AI training service is also recommended to productionize training even on small datasets if the training is carried out on a schedule or in response to the arrival of additional data. Maximize your model's predictive accuracy with hyperparameter tuning To maximize your model's predictive accuracy use hyperparameter tuning, the automated model enhancer provided by the Vertex AI training service which takes advantage of the processing infrastructure of Google Cloud and Vertex AI Vizier to test different hyperparameter configurations when training your model. Hyperparameter tuning removes the need to manually adjust hyperparameters over the course of numerous training runs to arrive at the optimal values. To learn more about hyperparameter tuning, see Overview of hyperparameter tuning and Create a hyperparameter tuning job. Use a Vertex AI Workbench instance to understand your models Use a Vertex AI Workbench instance to evaluate and understand your models. In addition to built-in common libraries like scikit-learn, Vertex AI Workbench instances include the What-if Tool (WIT) and Language Interpretability Tool (LIT). WIT lets you interactively analyze your models for bias using multiple techniques, while LIT helps you understand natural language processing model behavior through a visual, interactive, and extensible tool. Use feature attributions to gain insights into model predictions Vertex Explainable AI is an integral part of the ML implementation process, offering feature attributions to provide insights into why models generate predictions. By detailing the importance of each feature that a model uses as input to make a prediction, Vertex Explainable AI helps you better understand your model's behavior and build trust in your models. Vertex Explainable AI supports custom-trained models based on tabular and image data. 
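As a minimal sketch, assuming a custom-trained model that was uploaded with an explanation specification and is already deployed to an endpoint (the endpoint ID and instance fields below are placeholders), you might request feature attributions for an online prediction like this:

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

# Placeholder endpoint ID; the instance fields stand in for your tabular features.
endpoint = aiplatform.Endpoint(
    "projects/your-project-id/locations/us-central1/endpoints/ENDPOINT_ID"
)

response = endpoint.explain(instances=[{"feature_a": 1.0, "feature_b": "red"}])

# Each explanation carries per-feature attribution values that show which
# inputs contributed most to the prediction.
for explanation in response.explanations:
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)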
For more information about Vertex Explainable AI, see: Introduction to Vertex Explainable AI Improving explanations Vertex Explainable AI code examples Data preparation We recommend the following best practices for data preparation: Best practices: Use BigQuery to process tabular data. Use Dataflow to process data. Use Dataproc for serverless Spark data processing. Use managed datasets with Vertex ML Metadata. The recommended approach for processing your data depends on the framework and data types you're using. This section provides high-level recommendations for common scenarios. Use BigQuery to process structured and semi-structured data Use BigQuery for storing unprocessed structured or semi-structured data. If you're building your model using BigQuery ML, use the transformations built into BigQuery for preprocessing data. If you're using AutoML, use the transformations built into AutoML for preprocessing data. If you're building a custom model, using the BigQuery transformations may be the most cost-effective method. For large datasets, consider using partitioning in BigQuery. This practice can improve query performance and cost efficiency. Use Dataflow to process data With large volumes of data, consider using Dataflow, which uses the Apache Beam programming model. You can use Dataflow to convert the unstructured data into binary data formats like TFRecord, which can improve performance of data ingestion during the training process. Use Dataproc for serverless Spark data processing Alternatively, if your organization has an investment in an Apache Spark codebase and skills, consider using Dataproc. Use one-off Python scripts for smaller datasets that fit into memory. If you need to perform transformations that are not expressible in Cloud SQL or are for streaming, you can use a combination of Dataflow and the pandas library. Use managed datasets with ML metadata After your data is pre-processed for ML, you may want to consider using a managed dataset in Vertex AI. Managed datasets enable you to create a clear link between your data and custom-trained models, and provide descriptive statistics and automatic or manual splitting into train, test, and validation sets. Managed datasets are not required; you may choose not to use them if you want more control over splitting your data in your training code, or if lineage between your data and model isn't critical to your application. For more information, see Datasets and Using a managed dataset in a custom training application. ML training We recommend the following best practices for ML training: Best practices: Run your code in a managed service. Operationalize job execution with training pipelines. Use training checkpoints to save the current state of your experiment. Prepare model artifacts for serving in Cloud Storage. Regularly compute new feature values. In ML training, operationalized training refers to the process of making model training repeatable by tracking repetitions, and managing performance. Although Vertex AI Workbench instances are convenient for iterative development on small datasets, we recommend that you operationalize your code to make it reproducible and able to scale to large datasets. In this section, we discuss tooling and best practices for operationalizing your training routines. Run your code in a managed service We recommend that you run your code in either Vertex AI training service or orchestrate with Vertex AI Pipelines. 
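For example, a minimal sketch of submitting a training script to the Vertex AI training service with the Vertex AI SDK for Python might look like the following; the script path, container image, bucket names, and machine type are placeholders:

from google.cloud import aiplatform

aiplatform.init(
    project="your-project-id",
    location="us-central1",
    staging_bucket="gs://your-staging-bucket",
)

job = aiplatform.CustomTrainingJob(
    display_name="example-training-job",
    script_path="trainer/task.py",  # your local training script
    container_uri="us-docker.pkg.dev/vertex-ai/training/CONTAINER:TAG",  # a prebuilt or custom training image
    requirements=["pandas", "gcsfs"],
)

# run() provisions managed infrastructure, executes the script, and writes
# outputs (including any checkpoints your script saves) under base_output_dir.
job.run(
    replica_count=1,
    machine_type="n1-standard-4",
    base_output_dir="gs://your-staging-bucket/training-output",
)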
Optionally, you can run your code directly in Deep Learning VM Images, Deep Learning Containers, or Compute Engine. However, we advise against this approach if you are using features of Vertex AI such as automatic scaling and burst capability. Operationalize job execution with training pipelines To operationalize training job execution on Vertex AI, you can create training pipelines. A training pipeline, which is different from a general ML pipeline, encapsulates training jobs. To learn more about training pipelines, see Creating training pipelines and REST Resource: projects.locations.trainingPipelines. Use training checkpoints to save the current state of your experiment The ML workflow in this document assumes that you're not training interactively. If your model fails and isn't checkpointed, the training job or pipeline will finish and the data will be lost because the model isn't in memory. To prevent this scenario, make it a practice to always use training checkpoints to ensure you don't lose state. We recommend that you save training checkpoints in Cloud Storage. Create a different folder for each experiment or training run. To learn more about checkpoints, see Training checkpoints for TensorFlow Core, Saving and loading a General Checkpoint in PyTorch, and ML Design Patterns. Prepare model artifacts for serving in Cloud Storage For custom-trained models or custom containers, store your model artifacts in a Cloud Storage bucket, where the bucket's region matches the regional endpoint you're using for production. See Bucket regions for more information. Cloud Storage supports object versioning. To provide a mitigation against accidental data loss or corruption, enable object versioning in Cloud Storage. Store your Cloud Storage bucket in the same Google Cloud project. If your Cloud Storage bucket is in a different Google Cloud project, you need to grant Vertex AI access to read your model artifacts. If you're using a Vertex AI prebuilt container, ensure that your model artifacts have filenames that exactly match these examples: TensorFlow SavedModel: saved_model.pb Scikit-learn: model.joblib or model.pkl XGBoost: model.bst PyTorch: model.pth To learn how to save your model in the form of one or more model artifacts, see Exporting model artifacts for prediction. Regularly compute new feature values Often, a model will use a subset of features sourced from Vertex AI Feature Store. The features in Vertex AI Feature Store will already be ready for online serving. For any new features created by data scientist by sourcing data from the data lake, we recommend scheduling the corresponding data processing and feature engineering jobs (or ideally Dataflow) to regularly compute the new feature values at the required cadence, depending upon feature freshness needs, and ingesting them into Vertex AI Feature Store for online or batch serving. Model deployment and serving We recommend the following best practices for model deployment and serving: Best practices: Specify the number and types of machines you need. Plan inputs to the model. Turn on automatic scaling. Monitor models by using BigQuery ML. Model deployment and serving refers to putting a model into production. The output of the training job is one or more model artifacts stored on Cloud Storage, which you can upload to Model Registry so the file can be used for prediction serving. 
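As a minimal sketch of that flow, with all names, URIs, and instance fields as placeholders, the following code uploads a model artifact from Cloud Storage to the Model Registry, deploys it to an endpoint, and requests an online prediction; machine types, input handling, and scaling settings are discussed in the sections that follow:

from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")

model = aiplatform.Model.upload(
    display_name="example-model",
    artifact_uri="gs://your-model-bucket/model-output/",  # folder containing saved_model.pb, model.joblib, or similar
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/CONTAINER:TAG",  # prebuilt or custom serving image
)

# Deploy with automatic scaling; a minimum of two replicas is recommended
# for high availability (see the scaling guidance later in this section).
endpoint = model.deploy(
    machine_type="n1-standard-4",
    min_replica_count=2,
    max_replica_count=5,
)

prediction = endpoint.predict(instances=[{"feature_a": 1.0, "feature_b": "red"}])
print(prediction.predictions)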
There are two types of prediction serving: batch prediction is used to score batches of data at a regular cadence, and online prediction is used for near real-time scoring of data for live applications. Both approaches let you obtain predictions from trained models by passing input data to a cloud-hosted ML model and getting inferences for each data instance.To learn more, see Getting batch predictions and Get online predictions from custom-trained models. To lower latency for peer-to-peer requests between the client and the model server, use Vertex AI private endpoints. Private endpoints are particularly useful if your application that makes the prediction requests and the serving binary are within the same local network. You can avoid the overhead of internet routing and make a peer-to-peer connection using Virtual Private Cloud. Specify the number and types of machines you need To deploy your model for prediction, choose hardware that is appropriate for your model, like different central processing unit (CPU) virtual machine (VM) types or graphics processing unit (GPU) types. For more information, see Specifying machine types or scale tiers. Plan inputs to the model In addition to deploying the model, you'll need to determine how you're going to pass inputs to the model. If you're using batch prediction you can fetch data from the data lake, or from the Vertex AI Feature Store batch serving API. If you are using online prediction, you can send input instances to the service and it returns your predictions in the response. For more information, see Response body details. If you are deploying your model for online prediction, you need a low latency, scalable way to serve the inputs or features that need to be passed to the model's endpoint. You can either do this by using one of the many Database services on Google Cloud, or you can use Vertex AI Feature Store's online serving API. The clients calling the online prediction endpoint can first call the feature serving solution to fetch the feature inputs, and then call the prediction endpoint with those inputs. You can serve multiple models to the same endpoint, for example, to gradually replace the model. Alternatively, you can deploy models to multiple endpoints,for example, in testing and production, by sharing resources across deployments. Streaming ingestion lets you make real-time updates to feature values. This method is useful when having the latest available data for online serving is a priority. For example, you can ingest streaming event data and, within a few seconds, Vertex AI Feature Store streaming ingestion makes that data available for online serving scenarios. You can additionally customize the input (request) and output (response) handling and format to and from your model server by using custom prediction routines. Turn on automatic scaling If you use the online prediction service, in most cases we recommend that you turn on automatic scaling by setting minimum and maximum nodes. For more information, see Get predictions for a custom trained model. To ensure a high availability service level agreement (SLA), set automatic scaling with a minimum of two nodes. To learn more about scaling options, see Scaling ML predictions. ML workflow orchestration We recommend the following best practices for ML workflow orchestration: Best practices: Use Vertex AI Pipelines to orchestrate the ML workflow. Use Kubeflow Pipelines for flexible pipeline construction. Use Ray on Vertex AI for distributed ML workflows. 
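As a minimal, hypothetical sketch of the first two practices in this list, the following code defines a two-step pipeline with the Kubeflow Pipelines SDK, compiles it, and submits it to Vertex AI Pipelines; the component bodies, names, and bucket paths are placeholders:

from kfp import compiler, dsl
from google.cloud import aiplatform


@dsl.component(base_image="python:3.10")
def preprocess(source_table: str) -> str:
    # Placeholder: read raw data and write processed training data.
    return "gs://your-bucket/data/processed"


@dsl.component(base_image="python:3.10")
def train(training_data_uri: str) -> str:
    # Placeholder: train a model and return the model artifact location.
    return "gs://your-bucket/models/example-model"


@dsl.pipeline(name="example-training-pipeline")
def training_pipeline(source_table: str = "your_dataset.training_examples"):
    preprocess_task = preprocess(source_table=source_table)
    train(training_data_uri=preprocess_task.output)


compiler.Compiler().compile(
    pipeline_func=training_pipeline,
    package_path="training_pipeline.json",
)

aiplatform.init(project="your-project-id", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="example-training-pipeline",
    template_path="training_pipeline.json",
    pipeline_root="gs://your-bucket/pipeline-root",
)
job.run()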
Vertex AI provides ML workflow orchestration to automate the ML workflow with Vertex AI Pipelines, a fully managed service that lets you retrain your models as often as necessary. While retraining enables your models to adapt to changes and maintain performance over time, consider how much your data will change when choosing the optimal model retraining cadence. ML orchestration workflows work best for customers who have already designed and built their model, put it into production, and want to determine what is and isn't working in the ML model. The code you use for experimentation will likely be useful for the rest of the ML workflow with some modification. To work with automated ML workflows, you need to be fluent in Python, understand basic infrastructure like containers, and have ML and data science knowledge. Use Vertex AI Pipelines to orchestrate the ML workflow While you can manually start each data process, training, evaluation, test, and deployment, we recommend that you use Vertex AI Pipelines to orchestrate the flow. For detailed information, see MLOps level 1: ML pipeline automation. Vertex AI Pipelines supports running DAGs generated by Kubeflow, TensorFlow Extended (TFX), and Airflow. Use Kubeflow Pipelines for flexible pipeline construction We recommend Kubeflow Pipelines SDK for most users who want to author managed pipelines. Kubeflow Pipelines is flexible, letting you use code to construct pipelines. It also provides Google Cloud pipeline components, which lets you include Vertex AI functionality like AutoML in your pipeline. To learn more about Kubeflow Pipelines, see Kubeflow Pipelines and Vertex AI Pipelines. Use Ray on Vertex AI for distributed ML workflows Ray provides a general and unified distributed framework to scale machine learning workflows through a Python open-source, scalable, and distributed computing framework. This framework can help to solve the challenges that come from having a variety of distributed frameworks in your ML ecosystem, such as having to deal with multiple modes of task parallelism, scheduling, and resource management. You can use Ray on Vertex AI to develop applications on Vertex AI. Artifact organization We recommend that you use the following best practices to organize your artifacts: Best practices: Organize your ML model artifacts. Use a source control repository for pipeline definitions and training code. Artifacts are outputs resulting from each step in the ML workflow. It's a best practice to organize them in a standardized way. Organize your ML model artifacts Store your artifacts in these locations: Storage location Artifacts Source control repository Vertex AI Workbench instances Pipeline source code Preprocessing Functions Model source code Model training packages Serving functions Experiments and ML metadata Experiments Parameters Hyperparameters Metaparameters Metrics Dataset artifacts Model artifacts Pipeline metadata Model Registry Trained models Artifact Registry Pipeline containers Custom training environments Custom prediction environments Vertex AI Prediction Deployed models Use a source control repository for pipeline definitions and training code You can use source control to version control your ML pipelines and the custom components you build for those pipelines. Use Artifact Registry to store, manage, and secure your Docker container images without making them publicly visible. Model monitoring Best practices: Use skew and drift detection. Fine tune alert thresholds. 
Use feature attributions to detect data drift or skew. Use BigQuery to support model monitoring. Once you deploy your model into production, you need to monitor performance to ensure that the model is performing as expected. Vertex AI provides two ways to monitor your ML models: Skew detection: This approach looks for the degree of distortion between your model training and production data Drift detection: In this type of monitoring, you're looking for drift in your production data. Drift occurs when the statistical properties of the inputs and the target, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions could become less accurate as time passes. Model monitoring works for structured data, like numerical and categorical features, but not for unstructured data, like images. For more information, see Monitoring models for feature skew or drift. Use skew and drift detection As much as possible, use skew detection because knowing that your production data has deviated from your training data is a strong indicator that your model isn't performing as expected in production. For skew detection, set up the model monitoring job by providing a pointer to the training data that you used to train your model. If you don't have access to the training data, turn on drift detection so that you'll know when the inputs change over time. Use drift detection to monitor whether your production data is deviating over time. For drift detection, enable the features you want to monitor and the corresponding thresholds to trigger an alert. Fine tune alert thresholds Tune the thresholds used for alerting so you know when skew or drift occurs in your data. Alert thresholds are determined by the use case, the user's domain expertise, and by initial model monitoring metrics. To learn how to use monitoring to create dashboards or configure alerts based on the metrics, see Cloud monitoring metrics. Use feature attributions to detect data drift or skew You can use feature attributions in Vertex Explainable AI to detect data drift or skew as an early indicator that model performance may be degrading. For example, if your model originally relied on five features to make predictions in your training and test data, but the model began to rely on entirely different features when it went into production, feature attributions would help you detect this degradation in model performance. Note: You can use feature attributions to detect model degradation regardless of the type of feature your model takes as input. This is particularly useful for complex feature types, like embeddings and time series, which are difficult to compare using traditional skew and drift methods. With Vertex Explainable AI, feature attributions can indicate when model performance is degrading. Use BigQuery to support model monitoring BigQuery ML model monitoring is a set of tools and functionalities that helps you track and evaluate the performance of your ML models over time. Model monitoring is essential for maintaining model accuracy and reliability in real-world applications. We recommend that you monitor for the following issues: Data skew: This issue happens when feature value distributions differ between training and serving data. Training statistics, which are saved during model training, enable skew detection without needing the original data. Data drift: Real-world data often changes over time. 
Model monitoring helps you identify when the input data that your model sees in production (serving data) starts to differ significantly from the data that it was trained on (training data). This drift can lead to degraded performance. Advanced data skew or drift: When you want fine-grained skew or drift statistics, monitor for advanced data skew or drift. What's next Vertex AI documentation Practitioners guide to Machine Learning Operations (MLOps): A framework for continuous delivery and automation of ML For an overview of architectual principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Best_practices_for_mitigating_compromised_OAuth_tokens_for_Google_Cloud_CLI.txt b/Best_practices_for_mitigating_compromised_OAuth_tokens_for_Google_Cloud_CLI.txt new file mode 100644 index 0000000000000000000000000000000000000000..ffb6866bd110d13bc8d620cb6d940e6a83a9f617 --- /dev/null +++ b/Best_practices_for_mitigating_compromised_OAuth_tokens_for_Google_Cloud_CLI.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/bps-for-mitigating-gcloud-oauth-tokens +Date Scraped: 2025-02-23T11:56:40.686Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for mitigating compromised OAuth tokens for Google Cloud CLI Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-15 UTC This document describes how to mitigate the impact of an attacker compromising the OAuth tokens that are used by the gcloud CLI. An attacker can compromise these OAuth tokens if they gain access to an endpoint where a legitimate user account or service account has already authenticated with the gcloud CLI. The attacker can then copy these tokens to another endpoint that they control to make requests that impersonate the legitimate identity. Even after you remove the attacker's access to the compromised endpoint, the attacker can continue to make authenticated API requests using the copied tokens. To help mitigate this risk, you can control access to your systems by using credentials that are short-lived and context-aware. This document is intended for security teams or cloud architects who are responsible for securing their cloud resources from illegitimate access. This document introduces the available controls that you can use to proactively reduce the impact of compromised gcloud CLI OAuth tokens and remediate your environment after an endpoint has been compromised. Overview To understand how this threat works, you must understand how the gcloud CLI stores OAuth 2.0 credentials and how those credentials can be abused if compromised by an attacker. Types of credentials stored by the gcloud CLI The gcloud CLI uses OAuth 2.0 access tokens to authenticate requests for Google Cloud APIs. The OAuth flow varies by the credential types used, but generally the access token and other credentials are accessible locally. In each case, the access token expires after 60 minutes, but other credential types might be persistent. When you authorize the gcloud CLI with a user account, the gcloud CLI initiates a three-legged OAuth consent flow to access Google Cloud APIs on the user's behalf. 
After the user completes the consent flow, the gcloud CLI receives an access token and a refresh token that allows it to request new access tokens. Under the default settings, the long-lived refresh token persists until its expiration conditions are met. When you authorize the gcloud CLI with a service account, the gcloud CLI initiates a two-legged OAuth flow to access Google Cloud APIs as the service account identity. After you activate a service account from a private key file, this key is used to periodically request an access token. The long-lived private key is stored in the gcloud CLI configuration and remains valid until you delete the service account key. When you run the gcloud CLI inside a Google Cloud environment, like Compute Engine or Cloud Shell, the application can automatically find credentials and authenticate as a service account. For example, in Compute Engine, an application like the gcloud CLI can query the metadata server for an access token. Google manages and rotates the private signing key that is used to create the access token, and the long-lived credentials aren't exposed to the application. When you authenticate with workload identity federation, applications authenticate based on a credential from an external identity provider and receive a federated short-lived access token. For more information on how to store and manage the long-lived credentials used by the external identity provider, see best practices for using workload identity federation. How an attacker can use compromised OAuth tokens If an attacker manages to compromise an endpoint, credentials such as OAuth tokens are valuable targets because they let attackers persist or escalate their access. A developer might have a legitimate need to view their own credentials when writing and debugging code. For example, a developer might need to authenticate for using REST requests to Google Cloud services when working with an unsupported client library. The developer can view the credentials through various methods, including the following: Viewing the gcloud CLI configuration files on the local filesystem Querying the Compute Engine metadata server Using commands like gcloud auth print-access-token or gcloud auth describe IDENTITY However, an attacker might use these same techniques after they have compromised an endpoint. If an attacker compromises an endpoint, such as a developer workstation, the primary threat is that the attacker can run gcloud CLI commands or other code with the legitimate credentials of the authenticated identity. In addition, the attacker might copy the OAuth tokens to another endpoint that they control to persist their access. When this credential theft happens, there is a secondary threat that the attacker can still use the long-lived OAuth tokens to have persistent access even after you remove access to the compromised endpoint. If the attacker manages to compromise OAuth tokens, they can complete the following actions: An attacker can impersonate the compromised user or service account. API traffic that uses the compromised tokens is logged as if it came from the compromised user or service account, making it difficult to distinguish normal and malicious activity in logs. An attacker can refresh the access token indefinitely using a persistent refresh token associated with a user or a private key associated with a service account. An attacker can bypass authentication with the user's password or 2-step verification because the tokens are granted after the sign-in flow. 
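To illustrate why copied credentials are valuable, the following minimal sketch loads whatever application default credentials are present on a machine (for example, the credentials cached by gcloud auth application-default login) and prints the resulting bearer token's prefix and expiry; any process that can read those credentials can mint the same short-lived token:

import google.auth
from google.auth.transport.requests import Request

# Loads whatever credentials are available on the machine, such as the
# application default credentials cached by gcloud auth application-default login.
credentials, project_id = google.auth.default()
credentials.refresh(Request())  # exchanges the stored credential for an access token

# The bearer token authenticates API calls for roughly 60 minutes and, if
# copied, continues to work from any machine until it expires or is revoked.
print("Token prefix:", credentials.token[:12], "...")
print("Expires at:", credentials.expiry)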
Best practices for mitigating risks Implement the controls described in the following sections to help mitigate the risk of compromised gcloud CLI tokens. If you're following the security best practices that are described in the enterprise foundations blueprint or landing zone design in Google Cloud, you might already have these controls in place. Set session length for Google Cloud services To reduce how long an attacker can exploit a compromised token, set the session length for Google Cloud services. By default, the refresh token that the system grants after authentication is a long-lived credential and a gcloud CLI session never requires reauthentication. Change this setting to configure a reauthentication policy with a session length that is between 1 and 24 hours. After the defined session length, the reauthentication policy invalidates the refresh token and forces the user to regularly reauthenticate the gcloud CLI with their password or security key. The session length for Google Cloud services is a distinct setting from session length for Google services, which controls web sessions for sign-in across Google Workspace services but doesn't control reauthentication for the Google Cloud. If you use Google Workspace services, set the session length for both. Configure VPC Service Controls Configure VPC Service Controls across your environment to help ensure that only Google Cloud API traffic that originates within your defined perimeter can access supported resources. The service perimeter limits the usefulness of compromised credentials because the perimeter blocks requests to restricted services that originate from attacker-controlled endpoints that are outside of your environment. Configure Chrome Enterprise Premium Configure Chrome Enterprise Premium policies to help secure the Google Cloud console and Google Cloud APIs. Configure a Chrome Enterprise Premium access level and binding to selectively allow attributes that are evaluated on every API request, including IP-based access or certificate-based access for mutual TLS. Requests that use compromised authorization credentials but don't meet the conditions that are defined in your Chrome Enterprise Premium policy are rejected. Note: Not all device attributes collected by endpoint verification are evaluated during every API request. Chrome Enterprise Premium is a user-centric control that rejects user API traffic that doesn't meet defined conditions. VPC Service Controls is a resource-centric control that defines the perimeters within which resources can communicate. VPC Service Controls applies to all user identities and service account identities, but Chrome Enterprise Premium applies only to user identities within your organization. When used together, Chrome Enterprise Premium and VPC Service Controls reduce the effectiveness of compromised credentials on an attacker-controlled machine that is outside of your environment. Enforce 2-step verification for remote server access If you let developers access Compute Engine resources using SSH, configure OS Login with 2-step verification. This enforces an additional checkpoint where a user must reauthenticate with their password or security key. An attacker with compromised OAuth tokens but no password or security key is blocked by this feature. Remote Desktop Protocol (RDP) access to Windows instances on Compute Engine doesn't support the OS Login service, so 2-step verification can't be granularly enforced for RDP sessions. 
When using IAP Desktop or Google Chrome-based RDP plugins, set coarse-grained controls like session length for Google services and 2-step verification settings for the user's web sessions and disable the Allow user to trust the device setting under 2-step verification. Restrict the use of service account keys When you use a service account key to authenticate, the key value is stored in the gcloud CLI configuration files, separately from the downloaded key file. An attacker with access to your environment could copy the key from the gcloud CLI configuration or copy the key file from your local filesystem or internal code repository. Therefore, in addition to your plan to mitigate compromised access tokens, consider how you manage downloaded service account key files. Review more secure alternatives for authentication to reduce or eliminate your use cases that depend on a service account key, and enforce the organization policy constraint iam.disableServiceAccountKeyCreation to disable service account key creation. Consider the principle of least privilege When designing IAM policies, consider least privilege. Grant users the roles that they require to accomplish a task at the smallest scope. Don't grant them roles that they don't require. Review and apply role recommendations to avoid IAM policies with unused and excessive roles in your environment. Protect your endpoints Consider how an attacker might gain physical access or remote access to your endpoints, like developer workstations or Compute Engine instances. While a plan to address the threat of compromised OAuth tokens is important, also consider how to respond to the threat of how an attacker can compromise your trusted endpoints in the first place. If an attacker has access to your trusted endpoints, they can run gcloud CLI commands or other code directly on the endpoint itself. Although comprehensive protection for developer workstations is beyond the scope of this document, evaluate how your security tools and operations can help protect and monitor your endpoints for compromise. Consider the following questions: How is the physical security of developer workstations protected? How do you identify and respond to network breaches? How do users get remote access to SSH or RDP sessions? How might other persistent credentials like SSH keys or service account keys be compromised? Are there workflows that use persistent credentials that could be replaced with short-lived credentials? Are there shared devices where someone could read another user's cached gcloud CLI credentials? Can a user authenticate with gcloud CLI from an untrusted device? How does approved traffic connect to resources inside your VPC Service Control perimeter? Ensure that your security operations address each of these questions. Align your response teams Ensure in advance that security teams who are responsible for incident response have appropriate access across the Google Cloud console and the Admin Console. If separate teams manage the Google Cloud console and the Admin Console, you might have a delayed response during an incident. To remove access from a compromised user account, your incident response team needs an Admin Console role, such as User Management Admin. To assess whether suspicious activity has occurred in your Google Cloud resources, this team might also need IAM roles, such as Security Reviewer across all projects or Logs Viewer on a centralized log sink. 
The necessary roles for your security team will vary based on the design and operation of your environment. Best practices for remediation after a security incident After an endpoint is compromised, as part of your incident management plan, determine how to respond to the primary threat of a compromised endpoint and how to mitigate potential ongoing damage from the secondary threat of compromised tokens. If an attacker has persistent access to the developer workstation, they might copy tokens again after the legitimate user reauthenticates. If you suspect that gcloud CLI tokens might be compromised, open a ticket with Cloud Customer Care and complete the recommendations in the following sections. These actions can help limit the impact of an event like this in your Google Cloud organization. The recommendations in this section overlap with the general guidance on handling compromised Google Cloud credentials, but focus specifically on the threat of gcloud CLI tokens that are copied from a compromised endpoint. Expire active tokens for all user accounts with Google Cloud session control If you haven't already enforced Google Cloud session control, immediately enable this with a short reauthentication frequency. This control helps ensure that all refresh tokens expire at the end of the duration that you define, which limits the duration that an attacker can use the compromised tokens. Manually invalidate tokens for compromised user accounts Review the guidance for handling compromised credentials for any user identities who could have been compromised. In particular, removing gcloud CLI credentials is the most effective method for a security team to address compromised OAuth tokens for user identities. To immediately invalidate refresh tokens and access tokens for the gcloud CLI and force the user to reauthenticate with their password or security key, remove the gcloud CLI from a user's list of connected applications. An individual user can also remove gcloud CLI credentials for their individual account. Other methods, like suspending the user, resetting the user's password, or resetting sign-in cookies don't specifically address the threat of compromised OAuth tokens. These methods are generally useful for incident response but don't invalidate the access tokens the attacker already controls. For example, if you chose to suspend a user during an investigation but don't revoke gcloud CLI tokens, the access tokens might still be valid if the suspended user is restored before the access tokens expire. Programmatically invalidate tokens for many user accounts If you suspect a breach but can't identify which users were affected, consider revoking active sessions for all users in your organization more quickly than the reauthentication policy allows. This approach can be disruptive to legitimate users and terminate long-running processes that depend on user credentials. If you choose to adopt this approach, prepare a scripted solution for your security operations center (SOC) to run in advance and test it with a few users. The following sample code uses the Workspace Admin SDK to identify all user identities in your Google Workspace or Cloud Identity account who have access to the gcloud CLI. If a user has authorized the gcloud CLI, the script revokes the refresh token and access token and forces them to reauthenticate with their password or security key. For instructions on how to enable the Admin SDK API and run this code, see the Google Apps Script Quickstart. 
/**
 * Remove access to the Google Cloud CLI for all users in an organization
 * @see https://developers.google.com/admin-sdk/directory/reference/rest/v1/tokens
 * @see https://developers.google.com/admin-sdk/directory/reference/rest/v1/users
 * @see https://developers.google.com/apps-script/guides/services/advanced#enabling_advanced_services
 */
function listUsersAndInvalidate() {
  const users = AdminDirectory.Users.list({
    customer: 'my_customer' // alias to represent your account's customerId
  }).users;
  if (!users || users.length === 0) {
    Logger.log('No users found.');
    return;
  }
  for (const user of users) {
    let tokens = AdminDirectory.Tokens.list(user.primaryEmail).items;
    if (!tokens || tokens.length === 0) {
      continue;
    }
    for (const token of tokens) {
      if (token.clientId === "32555940559.apps.googleusercontent.com") {
        AdminDirectory.Tokens.remove(user.primaryEmail, token.clientId);
        Logger.log('Invalidated the tokens granted to gcloud for user %s', user.primaryEmail);
      }
    }
  }
}
Invalidate and rotate credentials for service accounts Unlike access tokens that are granted to user identities, access tokens that are granted to service accounts can't be invalidated through the Admin Console or commands like gcloud auth revoke. Additionally, the session duration that you specify in Google Cloud session control applies to user accounts in your Cloud Identity or Google Workspace directory, but not to service accounts. Therefore, your incident response for compromised service accounts needs to address both the persistent key files and the short-lived access tokens. If you suspect credentials for a service account were compromised, disable the service account, delete service account keys if any exist, and then, after 60 minutes, enable the service account. Deleting a service account key can invalidate the long-lived credential so that an attacker can't request a new access token, but it doesn't invalidate the access tokens already granted. To ensure access tokens aren't abused within their 60 minute lifetime, you must disable the service account for 60 minutes. Alternatively, you can delete and replace the service account to immediately revoke all short-lived and long-lived credentials, but this might require more disruptive work to replace the service account in applications. What's next Handle compromised Google Cloud credentials Authorize the gcloud CLI Authenticate as a service account Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Best_practices_for_planning_accounts_and_organizations.txt b/Best_practices_for_planning_accounts_and_organizations.txt new file mode 100644 index 0000000000000000000000000000000000000000..adeaf310ebc0ae6a615616f779542d50df18d2c2 --- /dev/null +++ b/Best_practices_for_planning_accounts_and_organizations.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/best-practices-for-planning +Date Scraped: 2025-02-23T11:55:09.184Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for planning accounts and organizations Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document presents best practices for deciding how many Cloud Identity or Google Workspace accounts, Google Cloud organizations, and billing accounts you need to use.
The document provides guidance on identifying a design that satisfies your security and organizational requirements. In Google Cloud, identity and access management is based on two pillars: Cloud Identity and Google Workspace accounts are containers for users and groups. These accounts are therefore key to authenticating your corporate users and managing access to your Google Cloud resources. A Cloud Identity or Google Workspace account denotes a directory of users, not an individual user account. Individual user accounts are referred to as users or user accounts. Organizations are containers for projects and resources within Google Cloud. Organizations allow resources to be structured hierarchically and are key to centrally and efficiently managing resources. This document is intended primarily for customers who are setting up enterprise environments. Cloud Identity and Google Workspace accounts Google Workspace is an integrated suite of cloud-native collaboration and productivity apps. It includes tools that let you manage users, groups, and authentication. Cloud Identity provides a subset of the Google Workspace features. Like Google Workspace, Cloud Identity lets you manage users, groups, and authentication, but it doesn't include all of Google Workspace's collaboration and productivity features. Cloud Identity and Google Workspace share a common technical platform and use the same set of APIs and administrative tools. The products share the concept of an account as a container for users and groups. This container is identified by a domain name such as example.com. For managing users, groups, and authentication, the two products can be considered equivalent. Combine Cloud Identity and Google Workspace in a single account Because Cloud Identity and Google Workspace share a common platform, you can combine access to the products in a single account. If you already administer a Google Workspace account and want to enable more users to use Google Cloud, you might not want to assign all users a Google Workspace license. In this case, add Cloud Identity Free to your existing account. You can then onboard more users without additional charge and decide which users should have access to Google Workspace by assigning them a Google Workspace license. Similarly, if you already administer a Cloud Identity Free or Premium account, you might want to grant certain users the right to use Google Workspace. Rather than creating separate Google Workspace accounts for those users, you can add Google Workspace to an existing Cloud Identity account. After you've assigned the Google Workspace license to those selected users, they can use the productivity and collaboration apps. Use as few accounts as possible, but as many as necessary To let your users collaborate by using Google Workspace, and to minimize administrative overhead, it's best to manage all users through a single Cloud Identity or Google Workspace account and provide a single user account to each individual. This approach helps ensure that settings such as password policies, single sign-on, and two-step verification are consistently applied to all users. Despite these benefits of using a single Cloud Identity or Google Workspace account, you can gain flexibility and administrative autonomy by using multiple accounts. When you are deciding how many Cloud Identity or Google Workspace accounts to use, consider all requirements that suggest using multiple accounts. 
Then use the smallest number of Cloud Identity or Google Workspace accounts that satisfies your requirements. Use separate accounts for staging and production For most settings that you can configure in Cloud Identity and Google Workspace, you can choose the scope at which each setting should apply—for example, you can specify the geographic location for your data by user, by group, or by organizational unit. When you plan to apply a new configuration, you can initially choose a small scope (for example, by user) and verify whether the configuration meets your expectations. When you've verified the configuration, you can then apply the same setting to a larger set of groups or organizational units. Unlike most settings, integrating a Cloud Identity or Google Workspace account with an external identity provider (IdP) has a global impact: Enabling single sign-on is a global setting that applies to all users other than super admins. Depending on your external IdP, setting up user provisioning can also have global impact. An accidental misconfiguration in the external IdP might lead to users inadvertently being modified, suspended, or even deleted. To mitigate these risks, have separate staging and production Cloud Identity or Google Workspace accounts: Use the staging account to verify any risky configuration changes before applying the same change to your production account. Create a few test users in the staging accounts that you and other administrators can use to verify configuration changes. But don't grant your users access to the staging account. If you have separate staging and production instances of your external IdP, take these steps: Integrate your staging Cloud Identity or Google Workspace account with your staging IdP instance. Integrate your production Cloud Identity or Google Workspace account with your production IdP instance. For example, suppose you plan to set up federation with Active Directory and have a separate Active Directory forest for testing purposes. In this case, you integrate your staging Cloud Identity or Google Workspace account with the staging forest and the production Cloud Identity or Google Workspace account with your main forest. Apply the same mapping approach for DNS domains, users, and groups to both forest/account pairs, as shown in the following diagram: Your production and staging Active Directory forests and domains might use DNS domains that don't let you apply the same domain mapping approach for staging and production. In this case, consider registering more User Principal Name (UPN) suffix domains in your staging forest. Similarly, if you plan to set up federation with Azure Active Directory, it's best to take the following approach: Integrate the staging Cloud Identity or Google Workspace account with a staging Azure Active Directory tenant. Integrate the production Cloud Identity or Google Workspace account with your main Azure Active Directory tenant. Apply the same mapping approach for DNS domains, users, and groups to both tenant/account pairs: Your production and staging Azure Active Directory tenants might use DNS domains that don't let you apply the same domain mapping approach for staging and production. In this case, consider adding an extra domain to your staging tenant. Use disjoint DNS domains among Cloud Identity and Google Workspace accounts When you first add a domain such as example.com to your Cloud Identity or Google Workspace account, you need to verify that you own the domain. 
After you have added and verified a domain, you can add subdomains such as marketing.example.com and finance.example.com without having to verify each subdomain individually. However, if you add subdomains without verifying each one, conflicts can result if you have another Cloud Identity or Google Workspace account that already uses this subdomain. To avoid these conflicts, prefer using disjoint domains for each account. For example, if you need two Cloud Identity or Google Workspace accounts, try to avoid using two domains where one domain is a subdomain of the other. Instead, use two domains that are not subdomains of another. This practice applies to the primary domain and to secondary domains. Don't separate Google Workspace and Google Cloud If you already use Google Workspace, some users in your Google Workspace account might have been granted super admin privileges so that they can carry out administrative tasks. Users with super admin privileges are implicitly granted permission to modify the Identity and Access Management (IAM) policy of the organization node. This permission enables super admins to assign themselves the organization administrators role or any other role in the Google Cloud organization. Having super admin access to a Cloud Identity or Google Workspace account also enables a user to delete the account, its associated Google Cloud organization, and all its resources. You therefore have to assume that super admins have full access to all resources within an organization. The fact that super admins are also organization administrators might be a security concern if your existing Google Workspace administrators are a different set of users from the users who will be in charge of managing your Google Cloud organization. In this case, you might decide to create a separate Cloud Identity account that is dedicated to Google Cloud in order to limit the access of Google Workspace super admins to Google Cloud resources. Separating Google Workspace and Google Cloud can have several unintended consequences. For example, you might try creating separate users in the Cloud Identity account and then restricting access to the Google Cloud organization users to the Cloud Identity account. This would ensure that your Google Cloud deployment and Google Workspace are fully isolated. However, duplicating users negatively affects both user experience and administrative overhead. Additionally, you wouldn't be able to receive any notification emails sent by Google Cloud because the domain used by Cloud Identity must be different from the domain used for Google Workspace and is therefore unsuitable for routing email. If you instead reference users from the Google Workspace account in your Google Cloud organization, you undermine the isolation between Google Workspace and Google Cloud: Google Workspace super admins have the ability to use domain-wide delegation to impersonate any user in the same Google Workspace account. A rogue super admin could use their elevated privileges to regain access to Google Cloud. A more effective approach than using separate accounts is to segregate responsibilities between Google Workspace and Google Cloud administrators to reduce the number of super admins: Most administrative duties in Google Workspace don't require super admin privileges. Use pre-built administrative roles or custom administrator roles in place of super admin privileges to grant Google Workspace administrators the permissions necessary to conduct their duties. 
Retain only a minimal number of super-admin users and discourage everyday usage. Take advantage of auditing to detect suspicious activity among administrative users. Secure your external IdP when using single sign-on Cloud Identity and Google Workspace let you set up single sign-on with your external IdP such as Active Directory, Azure Active Directory, or Okta (to name a few). If single sign-on is enabled, Cloud Identity and Google Workspace trust the external IdP to authenticate users on Google's behalf. Enabling single sign-on offers several advantages: Users can use their existing credentials to sign on to Google services. Users don't need to enter their passwords again if they already have an existing sign-on session. You can apply consistent multi-factor authentication policies across your applications and all Google services. But enabling single sign-on also exposes you to risks. When single sign-on is enabled and a user needs to be authenticated, Cloud Identity or Google Workspace redirects the user to your external IdP. After authenticating the user, the IdP returns to Google a SAML assertion that states the user's identity. The SAML assertion is signed. Therefore, Google can verify the authenticity of the SAML assertion and verify that the correct identity provider has been used. However, there is no way for Cloud Identity or Google Workspace to verify that the IdP made the right authentication decision and correctly stated the user's identity. If single sign-on is enabled, the overall security and integrity of your Google deployment hinges on the security and integrity of your IdP. Your Cloud Identity or Google Workspace account and all resources that rely on the users managed by the account are at risk if any of the following are configured insecurely: The IdP itself The machines that the provider is running on The user database that the provider is getting user information from Any other system that the provider depends on To prevent single sign-on from becoming a weak link in your security posture, ensure that your IdP and all systems it depends on are configured securely: Limit the number of users with administrative access to your IdP or to any of the systems that the provider relies on. Don't grant any user administrative access to these systems to whom you wouldn't also give super admin privileges in Cloud Identity or Google Workspace. Don't use single sign-on if you are unsure about the security controls in place for your external IdP. Secure access to your DNS zones Cloud Identity and Google Workspace accounts are identified by a primary DNS domain name. When you create a new Cloud Identity or Google Workspace account, you must verify ownership of the DNS domain by creating a special DNS record in the corresponding DNS zone. The importance of DNS and the impact on the overall security of your Google deployment extends beyond the sign-up process: Google Workspace relies on DNS MX records for routing emails. By modifying these MX records, an attacker might be able to route emails to a different server and gain access to sensitive information. If an attacker is able to add records to the DNS zone, they might then be able to reset the password of a super admin user and gain access to the account. To prevent DNS from becoming a weak link in your security posture, ensure that your DNS server is configured securely: Limit the number of users that have administrative access to the DNS server that manages the primary domain used for Cloud Identity or Google Workspace. 
Don't grant administrative access to your DNS server to any user to whom you wouldn't also give super admin privileges in Cloud Identity or Google Workspace. If you plan to deploy a workload on Google Cloud that has security requirements that aren't met by your existing DNS infrastructure, consider registering for that workload a new DNS domain that uses separate DNS servers or a managed DNS service. Export audit logs to BigQuery Cloud Identity and Google Workspace maintain multiple audit logs to keep track of configuration changes and other activities that might be relevant to the security of your Cloud Identity or Google Workspace account. The following table summarizes these audit logs.

Log | Activities captured
Admin | Actions performed in your Google Admin Console
Login | Successful, unsuccessful, and suspicious sign-in attempts to your domain
Token | Instances of authorizing or revoking access to third-party mobile or web applications
Groups | Changes to groups and group memberships

You can access these audit logs either through the Admin Console or through the Reports API. For most audit logs, data is retained for 6 months. If you are operating in a regulated industry, retaining audit data for 6 months might not be sufficient. To retain data for a longer period of time, configure audit logs to be exported to BigQuery. To limit the risk of unauthorized access to the exported audit logs, use a dedicated Google Cloud project for the BigQuery dataset. To keep the audit data safe from tampering or deletion, grant access to the project and dataset on a least-privilege basis. Note: Super admins are allowed to grant themselves any role within the Google Cloud organization, which means you can't prevent them from being able to modify or delete audit logs. The Cloud Identity and Google Workspace audit logs are complementary to Cloud Audit Logs logs. If you also use BigQuery to collect Cloud Audit Logs logs and other application-specific audit logs, you can correlate login events to activities that a user has performed in Google Cloud or in your applications. Being able to correlate audit data across multiple sources can help you detect and analyze suspicious activity. Setting up BigQuery exporting requires a Google Workspace Enterprise license. After you set up the main audit logs, these are exported for all users, including those users without a Google Workspace license. Certain logs such as the Drive and Mobile audit logs are exported for users that have at least a Google Workspace Business license. If you use Cloud Identity Free for the majority of your users, you can still export to BigQuery by adding Google Workspace Enterprise to your existing Cloud Identity account and purchasing Google Workspace licenses for a selected set of administrators. Organizations Organizations let you organize resources into a hierarchy of projects and folders, with the organization node being the root. Organizations are associated with a Cloud Identity or Google Workspace account, and they derive their name from the primary domain name of the associated account. An organization is created automatically when a user from the Cloud Identity or Google Workspace account creates a first project. Each Cloud Identity or Google Workspace account can have only a single organization. In fact, it's not possible to create an organization without a corresponding account.
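Because the organization derives its name from the account's primary domain, you can look up the organization that corresponds to a given domain. The following sketch is illustrative only and assumes the google-cloud-resource-manager client library and a placeholder example.com domain; it returns only organizations that the caller has permission to see.

# Minimal sketch: find the organization that is associated with a
# Cloud Identity or Google Workspace account by its primary domain.
# Assumes the google-cloud-resource-manager client library and that the
# caller has permission to view the organization.
from google.cloud import resourcemanager_v3

def find_organization(domain: str) -> None:
    client = resourcemanager_v3.OrganizationsClient()
    # search_organizations only returns organizations that the caller can access.
    for org in client.search_organizations(request={"query": f"domain:{domain}"}):
        print(f"Organization {org.name} (display name: {org.display_name})")

if __name__ == "__main__":
    find_organization("example.com")  # placeholder domain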
Despite this dependency, you can grant users from multiple different accounts access to resources in a single organization: The flexibility to reference users from different Cloud Identity or Google Workspace accounts lets you separate how you treat accounts and organizations. You can separate the decision of how many Cloud Identity or Google Workspace accounts you need in order to manage your users from the decision of how many organizations you need in order to manage your resources. The number of organizations that you create and their purposes can profoundly impact your security posture, the autonomy of your teams and departments, and the consistency and efficiency of your administration. The following sections describe best practices for deciding how many organizations to use. Use as few organizations as possible, but as many as necessary The resource hierarchy of an organization lets you reduce the effort of managing IAM policies and organizational policies by taking advantage of inheritance. By configuring policies at the folder or organization level, you ensure that policies are applied consistently across a sub-hierarchy of resources, and you avoid repetitive configuration work. To minimize your administrative overhead, it's best to use as few organizations as possible. In contrast, you can gain additional flexibility and administrative autonomy by using multiple organizations. The following sections describe when you might require such additional flexibility or autonomy. In any case, when you are deciding how many organizations to use, consider all requirements that suggest using multiple organizations. Then use the smallest number of organizations that satisfies your requirements. Use organizations to delineate administrative authority Within a resource hierarchy, you can grant permissions at the resource, project, or folder level. The ultimate level at which you can grant a user permissions is the organization. A user who has been assigned the Organization Administrator role at the organization level has full control over the organization, its resources, and its IAM policies. An Organization Administrator can take control of any resource within the organization and is free to delegate administrative access to any other user. However, the capabilities of an Organization Administrator are confined to the organization, making the organization the ultimate scope of administrative authority: An Organization Administrator cannot access any resources in other organizations unless explicitly granted permission. No user outside the organization can take control away from an Organization Administrator of that organization. The right number of organizations to use depends on the number of independent groups of administrative users in your company: If your company is organized by function, you might have a single department that's in charge of overseeing all Google Cloud deployments. If your company is organized by division or owns a number of autonomously-run subsidiaries, then there might not be a single department that's in charge. If a single department is in charge of overseeing Google Cloud deployments, it's best to use a single Google Cloud organization node. Within the organization, use folders to structure resources and grant different levels of access to other teams and departments. 
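To make the folder-based approach concrete, the following sketch creates one top-level folder per department under a single organization, so that access can then be delegated per folder. It's a minimal example, assuming the google-cloud-resource-manager client library, a placeholder organization ID, and hypothetical department names.

# Minimal sketch: create a top-level folder per department under a single
# organization so that IAM policies can be delegated per folder.
# Assumes the google-cloud-resource-manager client library, a placeholder
# organization ID, and hypothetical department names.
from google.cloud import resourcemanager_v3

ORGANIZATION = "organizations/123456789012"            # placeholder organization ID
DEPARTMENTS = ["engineering", "marketing", "finance"]  # hypothetical departments

def create_department_folders() -> None:
    client = resourcemanager_v3.FoldersClient()
    for name in DEPARTMENTS:
        operation = client.create_folder(
            request={"folder": {"parent": ORGANIZATION, "display_name": name}}
        )
        folder = operation.result()  # folder creation is a long-running operation
        print(f"Created {folder.name} ({folder.display_name})")

if __name__ == "__main__":
    create_department_folders()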
If you are dealing with multiple independent departments, trying to centralize administration by using a single organization might cause friction: If you designate a single department to manage the organization, you might face resistance from other departments. If you let multiple teams or departments co-own a single organization, it might be difficult to maintain a consistent resource hierarchy and to ensure that IAM policies and organizational policies are used consistently across your resources. To prevent this kind of friction, use multiple organizations and create separate folder structures in each organization. By using separate organizations, you establish boundaries between different groups of administrative users, thus delineating their administrative authority. Use a separate staging organization To help ensure consistency and avoid repetitive configuration work, organize your projects into a hierarchy of folders and apply IAM policies and organizational policies at the folder or organization level. There is a downside of having policies that apply to many projects and resources. Any change to the policy itself or the resource hierarchy the policy applies to might have far-reaching and unintended consequences. To mitigate this risk, it's best to test policy changes before you apply them in your main organization. We recommend that you create a separate staging organization. This organization helps you test resource hierarchy changes, IAM policies, organizational policies, or other configuration that have potential organization-wide impact such as access levels and policies. To ensure that the results of tests or experiments conducted in the staging organization also apply to the production organization, configure the staging organization to use the same folder structure as your main organization, but with only a small number of projects. At any point, the IAM and organizational policies in the staging organization should either match the policies of your production organization or should reflect the next version of policies that you intend to apply to your production organization. As the following diagram shows, you use your staging Cloud Identity or Google Workspace account as a basis for the staging organization. You use the staging account (or the external IdP that your staging account is integrated with) to create dedicated test users and a group structure that mirrors the groups you use in your production Cloud Identity or Google Workspace account. You can then use these dedicated test users and groups to test IAM policies without impacting existing users. Avoid using a separate testing organization For each production workload that you plan to deploy on Google Cloud, you probably need one or more testing environments, which your development and DevOps teams can use to validate new releases. From a security perspective, to prevent untested releases from accidentally impacting production workloads, you want to cleanly separate your test environments from your production environments. Similarly, the two types of environments have different requirements for IAM policies. For example, in order for you to grant developers and testers the flexibility they need, the security requirements for your testing environments might be substantially looser than those of your production environments. As the following diagram shows, from a configuration perspective you want to keep your test environments as similar to your production environments as possible. 
Any divergence increases the risk that a deployment that worked properly in a test environment fails when deployed to a production environment. To strike a balance between keeping environments isolated and consistent, use folders within the same organization to manage testing and production environments: Apply any IAM policies or organizational policies that are common across environments at the organizational level (or by using a common parent folder). Use the individual folders to manage IAM policies and organization policies that differ between different types of environments. Don't use your staging organization for managing testing environments. Testing environments are critical to the productivity of development and DevOps teams. Managing such environments in your staging environment would restrict your ability to use the staging organization to experiment with and test policy changes; any such change might disrupt the work of these teams. In short, if you use a staging organization to manage testing environments, you undermine the purpose of your staging organization. Use a separate organization for experimenting To get the most out of Google Cloud, let your development, DevOps, and operations teams familiarize themselves with the platform and expand their experience by running tutorials. Encourage them to experiment with new features and services. Use a separate organization as the sandbox environment to support these types of experimental activities. By using a separate organization, you can keep experiments unencumbered by any policies, configuration, or automation that you use in your production organization. Avoid using your staging organization for experimenting. Your staging environment should use IAM and organization policies that are similar to your production organization. Therefore, the staging environment is likely to be subject to the same limitations as your production environment. At the same time, relaxing policies in order to allow experiments would undermine the purpose of your staging organization. To prevent your experimental organization from becoming disorganized, from generating excessive cost, or from becoming a security risk, issue guidelines that define how teams are allowed to use the organization, and who bears responsibility for maintaining the organization. Additionally, consider setting up automation to automatically delete resources, or even entire projects, after a certain number of days. As the following diagram shows, when you create an organization for experimenting, you first create a dedicated Cloud Identity account. You then use the existing users from your main Cloud Identity or Google Workspace account to grant users access to the experimental organization. To manage the additional Cloud Identity account, you need at least one super admin user account in the Cloud Identity account. For information about how to secure these super-admin user accounts, see Super administrator account best practices. Use domain-restricted sharing to enforce trust relationships IAM policies let you add any valid Google identity as a member. 
This means that when you edit the IAM policy of a resource, project, folder, or organization, you can add members from different sources, including the following: Users from the Cloud Identity or Google Workspace account that the organization is associated with, as well as service accounts from the same organization Users from other Cloud Identity or Google Workspace accounts Service accounts from other organizations Consumer user accounts, including gmail.com users and consumer accounts that use a corporate email address Being able to reference users from different sources is useful for scenarios and projects where you need to collaborate with affiliates or other companies. In most other cases, it's best to constrain IAM policies to only allow members from trusted sources. Use domain-restricted sharing to define a set of trusted Cloud Identity or Google Workspace accounts from which you want to allow members to be added to IAM policies. Define this organizational policy either at the organization level (so that it applies to all projects) or at a folder near the top of the resource hierarchy (to allow certain projects to be exempted). If you use separate Cloud Identity or Google Workspace accounts and organizations to segregate staging, production, and experimenting environments, use domain-restricted sharing policies to enforce the segregation as listed in the following table:

Organization | Trusted Cloud Identity or Google Workspace account
Staging | Staging
Production | Production
Experiments | Production

Use neutral domain names for organizations Organizations are identified by a DNS domain name such as example.com. The domain name is derived from the primary domain name of the associated Cloud Identity or Google Workspace account. If there is a canonical DNS domain name that is used throughout your company, use that domain as primary domain in your production Cloud Identity or Google Workspace account. By using the canonical DNS name, you ensure that employees can easily recognize the name of the organization node. If your company has several subsidiaries or owns a variety of different brands, there might not be a canonical domain name. Rather than arbitrarily picking one of the existing domains, consider registering a new DNS domain that is neutral and dedicated for use by Google Cloud. By using a neutral DNS name, you avoid expressing a preference for a specific brand, subsidiary, or department within your company, which could negatively affect cloud adoption. Add your other, brand-specific domains as secondary domains so that they can be used in user IDs and for single sign-on. Use billing accounts across Google Cloud organizations Billing accounts define who pays for a given set of Google Cloud resources. Although billing accounts can exist outside of a Google Cloud organization, they are most commonly associated with an organization. When billing accounts are associated with an organization, they are considered a sub-resource of the organization and are subject to IAM policies defined within the organization. Most importantly, this means that you can use the Billing IAM roles to define which users or groups are allowed to administer or use an account. A user who has been granted the Billing Account User role can link a project to a billing account, causing the resources to be billed to this account. Linking a project to a billing account works within the same organization, but also across organizations.
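As an illustration of cross-organization billing, the following sketch links a project to a billing account through the Cloud Billing API. It's a minimal example that assumes the google-api-python-client library, Application Default Credentials, and placeholder project and billing account IDs; the caller needs the Billing Account User role on the billing account and permission to manage billing on the project.

# Minimal sketch: link a project to a billing account, which works across
# organizations as well as within one. Assumes the google-api-python-client
# library, the Cloud Billing API, Application Default Credentials, and
# placeholder project and billing account IDs.
from googleapiclient.discovery import build

PROJECT = "projects/my-example-project"                   # placeholder project
BILLING_ACCOUNT = "billingAccounts/000000-000000-000000"  # placeholder billing account

def link_project_to_billing_account() -> None:
    billing = build("cloudbilling", "v1")
    body = {"billingAccountName": BILLING_ACCOUNT}
    info = billing.projects().updateBillingInfo(name=PROJECT, body=body).execute()
    print("Billing enabled:", info.get("billingEnabled"))

if __name__ == "__main__":
    link_project_to_billing_account()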
If you use multiple organizations, you can take advantage of the fact that billing accounts can be used across organizations. This lets you decide how many billing accounts you need independent of how many organizations you need. The right number of billing accounts depends exclusively on your commercial or contractual requirements such as currency, payment profile, and the number of separate invoices you need. As you did with accounts and organizations, in order to minimize complexity, you want to use the smallest number of billing accounts that satisfies your requirements. To break down costs accrued across multiple organizations, configure your billing account to export billing data to BigQuery. Each record exported to BigQuery not only contains the project name and ID, but also the ID of the organization the project is associated with (in the project.ancestry_numbers field). What's next Create a new Cloud Identity account Learn how Google Cloud lets you authenticate corporate users in a hybrid environment and common patterns. Review the best practices for federating Google Cloud with an external identity provider. Find out how to federate Google Cloud with Active Directory or Azure Active Directory. Send feedback \ No newline at end of file diff --git a/Best_practices_for_protecting_against_cryptocurrency_mining_attacks.txt b/Best_practices_for_protecting_against_cryptocurrency_mining_attacks.txt new file mode 100644 index 0000000000000000000000000000000000000000..579fa5aa4aad8264000da27e9984bfc66c97565a --- /dev/null +++ b/Best_practices_for_protecting_against_cryptocurrency_mining_attacks.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/bps-for-protecting-against-crytocurrency-attacks +Date Scraped: 2025-02-23T11:56:42.967Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for protecting against cryptocurrency mining attacks Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-10-20 UTC Cryptocurrency mining (also known as bitcoin mining) is the process used to create new cryptocoins and verify transactions. Cryptocurrency mining attacks occur when attackers who gain access to your environment exploit your resources to run their own mining operations at your expense. According to the November 2021 Threat Horizons report, cryptocurrency mining attacks are the most common way that attackers exploit your computing resources after they compromise your Google Cloud environment. The report also says that attackers typically download cryptocurrency mining software to your resources within 22 seconds of compromising your system. Cryptocurrency mining can rapidly increase costs, and a cryptocurrency mining attack can cause a much larger bill than you expected. Because costs can add up quickly, you must put in place protective, detective, and mitigation measures to protect your organization. This document is intended for security architects and administrators. It describes the best practices that you can take to help protect your Google Cloud resources from cryptocurrency mining attacks and to help mitigate the impact should an attack occur. For information about how to respond to cryptocurrency alerting, see Respond to abuse notifications and warnings. Identify your threat vectors To determine your organization's exposure to cryptocurrency mining attacks, you must identify the threat vectors that apply to your organization.
The November 2021 Threat Horizons report indicates that most attackers exploit vulnerabilities such as the following: Weak password or no password for user accounts Weak or no authentication for Google Cloud APIs Vulnerabilities in third-party software Misconfigurations in your Google Cloud environment or in third-party applications that you're running on Google Cloud Leaked credentials, such as service account keys published in public GitHub repositories In addition, you can subscribe to and review the following documents for a list of threat vectors: Your government's cybersecurity advisories Google Cloud security bulletins Compute Engine security bulletins The security bulletins for the third-party applications that you are running in Google Cloud Important Google Cloud notifications After you identify the threat vectors that apply to you, you can use the remaining best practices in this document to help address them. Protect accounts and account credentials Attackers can exploit unguarded or mismanaged accounts to gain access to your Compute Engine resources. Google Cloud includes different options that you can configure to manage accounts and groups. Restrict access to your cloud environment The following table describes the organizational policies that you can use to define who can access your cloud environment.

Organization policy constraint | Description
Domain restricted sharing | Specify which customer IDs for Cloud Identity or Google Workspace are valid.
Allowed AWS accounts that can be configured for workload identity federation in Cloud IAM | In a hybrid cloud environment, define which AWS accounts can use workload identity federation.
Allowed external identity providers for workloads | In a hybrid cloud environment, define which identity providers your workloads can use.

Set up MFA or 2FA Cloud Identity supports multi-factor authentication (MFA) using various methods. Configure MFA, particularly for your privileged accounts. For more information, see Enforce uniform MFA to company-owned resources. To help prevent phishing attacks that can lead to cryptocurrency mining attacks, use Titan Security Keys for two-factor authentication (2FA). Configure least privilege Least privilege ensures that users and services only have the access that they require to perform their specific tasks. Least privilege slows down the ability of attacks to spread throughout an organization because an attacker can't easily escalate their privileges. To meet your organization's needs, use the fine-grained policies, roles, and permissions in Identity and Access Management (IAM). In addition, analyze your permissions regularly using role recommender and Policy Analyzer. Role recommender uses machine learning to analyze your settings and provide recommendations to help ensure that your role settings adhere to the principle of least privilege. Policy Analyzer lets you see which accounts have access to your cloud resources. Monitor accounts If you use groups to assign IAM policies, monitor the group logs to ensure that non-corporate accounts aren't added. In addition, restrict the identities, based on Cloud Identity or Google Workspace domains, that can access your resources. For more information, see Restricting identities by domain. Ensure that your offboarding procedures include processes to deactivate accounts and reset permissions when employees leave your organization or change roles. For more information, see Revoking Access to Google Cloud.
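As one example of such an offboarding step, the following sketch suspends a user account through the Admin SDK Directory API. It's a minimal, non-authoritative example that assumes the google-api-python-client library, credentials authorized for the admin.directory.user scope, and a placeholder user email address.

# Minimal sketch: suspend a departed user's account as part of an
# offboarding process. Assumes the google-api-python-client library, the
# Admin SDK Directory API, credentials that are authorized for the
# admin.directory.user scope, and a placeholder user email address.
from googleapiclient.discovery import build

def suspend_user(user_email: str) -> None:
    # build() picks up Application Default Credentials; in practice you would
    # pass credentials for an administrator or a delegated service account.
    directory = build("admin", "directory_v1")
    directory.users().update(userKey=user_email, body={"suspended": True}).execute()
    print(f"Suspended {user_email}")

if __name__ == "__main__":
    suspend_user("former.employee@example.com")  # placeholder user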
To audit your users and groups, see Audit logs for Google Workspace. Reduce internet exposure to your Compute Engine and GKE resources Reducing internet exposure means that your attackers have fewer opportunities to find and exploit vulnerabilities. This section describes the best practices that help protect your Compute Engine VMs and your Google Kubernetes Engine (GKE) clusters from internet exposure. Restrict external traffic Do not assign external IP addresses to your VMs. You can use the Disable VPC External IPv6 usage organization policy constraint to deny external IP addresses to all VMs. To view which VMs have publicly accessible IP addresses, see View the network configuration for an instance. If your architecture requires external IP addresses for your VMs, use the Define allowed external IPs for VM instances organization policy, which lets you define a list of instance names that are permitted to have external IP addresses. Restrict GKE nodes to internal IP addresses only. For more information, see Creating a private cluster. Restrict inbound (ingress) and outbound (egress) traffic to the internet for all resources in your projects. For more information, see VPC firewall rules and Hierarchical firewall policies. For more information about restricting external traffic, such as configuring Cloud NAT to allow outgoing communications for VMs without external IP address or using a proxy load balancer for incoming communications, see Securely connecting to VM instances. Use service perimeters Create a service perimeter for your Compute Engine and GKE resources using VPC Service Controls. VPC Service Controls lets you control communications to your Compute Engine resources from outside of the perimeter. Service perimeters allow free communication within the perimeter, block data exfiltration, and block service communication from outside the perimeter. Use context-aware access attributes like IP addresses and users' identities to further control access to Google Cloud services from the internet. Set up zero trust security Set up zero trust security with Chrome Enterprise Premium. Chrome Enterprise Premium provides threat and data protection and access controls. If your workloads are located both on-premises and in Google Cloud, configure Identity-Aware Proxy (IAP). Configure TCP forwarding to control who can access administrative services like SSH and RDP on your Google Cloud resources from the public internet. TCP forwarding prevents these services from being openly exposed to the internet. Secure your Compute Engine and GKE resources Cryptocurrency mining requires access to your Compute Engine and GKE resources. This section describes the best practices that will help you secure your Compute Engine and GKE resources. Secure your VM images Use hardened and curated VM images by configuring Shielded VM. Shielded VM is designed to prevent malicious code such as kernel-level malware or rootkits from being loaded during the boot cycle. Shielded VM provides boot security, monitors integrity, and uses the Virtual Trusted Platform Module (vTPM). To restrict which images can be deployed, you can implement trusted image policies. The Define trusted image projects organization policy defines which projects can store images and persistent disks. Ensure that only trusted and maintained images exist in those projects. In GKE, ensure that your containers use base images, which are regularly updated with security patches. 
Also, consider distroless container images that include only your application and its runtime dependencies. Secure SSH access to VMs Configure OS Login to manage SSH access to the VMs running in Compute Engine. OS Login simplifies SSH access management by linking your administrator's Linux user account to their Google identity. OS Login works with IAM so that you can define the privileges that administrators have. For more information about how you can secure your VMs and containers, see Use hardened and attested infrastructure and services. Restrict service accounts A service account is a Google Cloud account that workloads use to call the Google API of a service. Do not permit Google Cloud to assign default service account roles to resources when they are created. For more information, see Restricting service account usage. If your applications are running outside of Google Cloud, and yet require access to Google Cloud resources, do not use service account keys. Instead, implement workload identity federation to manage external identities and the permissions that you associate with them. For GKE, you can implement workload identities. For more information, see Choose the right authentication method for your use case. For more best practices that help secure service accounts, see Best practices for working with service accounts. Monitor usage of service accounts and service account keys Set up monitoring so that you can track how service accounts and service account keys are being used in your organization. To get visibility into notable usage patterns, use service account insights. For example, you can use service account insights to track how permissions are used in your projects and to identify unused service accounts. To see when your service accounts and keys were last used to call a Google API for authentication activities, view recent usage for service accounts and service account keys. Monitor and patch VMs and containers To start a cryptocurrency mining attack, attackers often exploit misconfigurations and software vulnerabilities to gain access to Compute Engine and GKE resources. To obtain insight into the vulnerabilities and misconfigurations that apply to your environment, use Security Health Analytics to scan your resources. In particular, if you use Security Command Center Premium, review any Compute Engine instance findings and Container findings and set up processes to resolve them quickly. Use Artifact Analysis to check for vulnerabilities in the container images that you store in Artifact Registry or Container Registry. Ensure that your organization can deploy patches as soon as they are available. You can use OS patch management for Compute Engine. Google automatically patches vulnerabilities in GKE. For more information, see Use hardened and attested infrastructure and services. Protect your applications using a WAF Attackers can try to access your network by finding Layer 7 vulnerabilities within your deployed applications. To help mitigate against these attacks, configure Google Cloud Armor, which is a web application firewall (WAF) that uses Layer 7 filtering and security policies. Google Cloud Armor provides denial of service (DoS) and WAF protection for applications and services hosted on Google Cloud, on your premises, or on other clouds. Google Cloud Armor includes a WAF rule to help address Apache Log4j vulnerabilities. Attackers can use Log4j vulnerabilities to introduce malware that can perform unauthorized cryptocurrency mining. 
For more information, see Google Cloud Armor WAF rule to help address Apache Log4j vulnerability. Secure your supply chain Continuous integration and continuous delivery (CI/CD) provides a mechanism for getting your latest functionality to your customers quickly. To help prevent cryptocurrency mining attacks against your pipeline, perform code analysis and monitor your pipeline for malicious attacks. Implement Binary Authorization to ensure that all images are signed by trusted authorities during the development process and then enforce signature validation when you deploy the images. Move security checks to as early in the CI/CD process as possible (sometimes referred to as shifting left). For more information, see Shifting left on security: Securing software supply chains. For information on setting up a secure supply chain with GKE, see Software supply chain security. Manage secrets and keys A key attack vector for unauthorized cryptocurrency mining attacks is insecure or leaked secrets. This section describes the best practices that you can use to help protect your secrets and encryption keys. Rotate encryption keys regularly Ensure that all encryption keys are rotated regularly. If Cloud KMS manages your encryption keys, you can rotate your encryption keys automatically. If you use service accounts that have Google-owned and Google-managed encryption keys, the keys are also automatically rotated. Avoid downloading secrets Exposed secrets are a key attack vector for attackers. If at all possible, do not download encryption keys or other secrets, including service account keys. If you must download keys, ensure that your organization has a key rotation process in place. If you are using GitHub or other public repository, you must avoid leaking credentials. Implement tools such as secret scanning, which warns you about exposed secrets in your GitHub repositories. To stop keys from being committed to your GitHub repositories, consider using tools such as git-secrets. Use secret management solutions such as Secret Manager and Hashicorp Vault to store your secrets, rotate them regularly, and apply least privilege. Detect anomalous activity To monitor for anomalous activity, configure Google Cloud and third-party monitoring tools and set up alerts. For example, configure alerts based on administrator activity in Compute Engine audit logging information and GKE audit logs. In addition, use Event Threat Detection in Security Command Center to identify threats that are based on administrator activities, Google Groups changes, and IAM permission changes. Use Virtual Machine Threat Detection in Security Command Center to identify threats related to your Compute Engine VMs. For more information about Security Command Center services, see Security Command Center service tiers. To help detect network-based threats such as malware, configure Cloud IDS. Participate in the Security Command Center Cryptomining Protection Program If you are a Security Command Center Premium customer and use Compute Engine, you can participate in the Security Command Center Cryptomining Protection Program. This program lets you defray the Compute Engine VM costs related to undetected and unauthorized cryptomining attacks in your Compute Engine VM environment. You must implement the cryptomining detection best practices, some of which overlap with the other best practices that are described on this page. 
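To complement these detection services, you can also inspect the Admin Activity audit logs directly. The following sketch lists recent Compute Engine instance-creation entries, which can be one signal of unauthorized mining activity; it's a minimal example that assumes the google-cloud-logging client library, a placeholder project ID, and an illustrative log filter.

# Minimal sketch: scan recent Admin Activity audit log entries for
# Compute Engine instance creation, one signal that can accompany an
# unauthorized cryptocurrency mining attack. Assumes the
# google-cloud-logging client library and a placeholder project ID.
from google.cloud import logging

PROJECT_ID = "my-example-project"  # placeholder project

FILTER = (
    'logName="projects/{project}/logs/cloudaudit.googleapis.com%2Factivity" '
    'AND protoPayload.methodName:"compute.instances.insert"'
).format(project=PROJECT_ID)

def list_recent_instance_creations() -> None:
    client = logging.Client(project=PROJECT_ID)
    for entry in client.list_entries(filter_=FILTER, order_by=logging.DESCENDING, max_results=20):
        # entry.payload holds the audit log record; review who created what, and when.
        print(entry.timestamp, entry.payload)

if __name__ == "__main__":
    list_recent_instance_creations()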
Update your incident response plan Ensure that your incident response plan and your playbooks provide prescriptive guidance for how your organization will respond to cryptocurrency mining attacks. For example, ensure that your plan includes the following: How to file a support case with Cloud Customer Care and contact your Google technical account manager (TAM). If you do not have a support account, review the available support plans and create one. How to tell the difference between legitimate high performance computing (HPC) workloads and cryptocurrency mining attacks. For example, you can tag which projects have HPC enabled, and set up alerts for unexpected cost increases. How to deal with compromised Google Cloud credentials. How to quarantine infected systems and restore from healthy backups. Who in your organization must be notified to investigate and respond to the attack. What information needs to be logged for your retrospective activities. How to verify that your remediation activities effectively removed the mining activities and addressed the initial vulnerability that led to the attack. How to respond to an alert sent from Cloud Customer Care. For more information, see Policy violations FAQ. For more information, see Respond to and recover from attacks. Implement a disaster recovery plan To prepare for a cryptocurrency mining attack, complete business continuity and disaster recovery plans, create an incident response playbook, and perform tabletop exercises. If unauthorized cryptocurrency mining occurs, ensure that you can address the threat vector that caused the initial breach and that you can reconstruct your environment from a known good state. Your disaster recovery plan must provide for the ability to determine what a known good state is so that the attacker can't repeatedly use the same vulnerabilities to exploit your resources. What's next Find more security best practices in Google Cloud Architecture Framework: Security, privacy, and compliance. Protect against ransomware attacks. Deploy a secure baseline in Google Cloud, as described in the Google Cloud enterprise foundations blueprint. Send feedback \ No newline at end of file diff --git a/Best_practices_for_running_an_IoT_backend.txt b/Best_practices_for_running_an_IoT_backend.txt new file mode 100644 index 0000000000000000000000000000000000000000..43fc312a1e0d3c4082ec8996c117026c9fe0c39f --- /dev/null +++ b/Best_practices_for_running_an_IoT_backend.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/connected-devices/bps-running-iot-backend-securely +Date Scraped: 2025-02-23T11:48:15.929Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for running an IoT backend on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-06 UTC This document provides security best practices for managing and running an Internet of Things (IoT) backend on Google Cloud. In an IoT solution, an IoT backend connects edge devices to other resources. This document focuses on the following IoT backends: Message Queuing Telemetry Transport (MQTT) broker and the IoT platform. This document is part of a series of documents that provide information about IoT architectures on Google Cloud and about migrating from IoT Core. 
The other documents in this series include the following: Connected device architectures on Google Cloud overview Standalone MQTT broker architecture on Google Cloud IoT platform product architecture on Google Cloud Best practices for running an IoT backend on Google Cloud (this document) Device on Pub/Sub architecture to Google Cloud Best practices for automatically provisioning and configuring edge and bare metal systems and servers This document provides best practices for provisioning and managing device credentials, authenticating and accessing control edge devices, and letting IoT edge devices access Google Cloud resources. IoT architecture An IoT architecture includes services that let you provision and manage device credentials, authenticate and access control edge devices, and let edge devices access Google Cloud resources. This document discusses two IoT backend architectures: one using MQTT broker and the other using an IoT platform. The main security differences between these two backends are device identity and device management. IoT platforms provide these capabilities as part of their system, whereas MQTT brokers require you to provide these capabilities. The following diagram describes the IoT architecture. The architecture shows the services that are required for the following three processes: Certificate provisioning, which is the process that you must complete to prepare an edge device for configuration. Authentication and authorization, which include the authentication scheme that the edge device and the MQTT broker or IoT platform use to authenticate with each other. Connections between edge devices and Google Cloud services, which are tasks that the edge device completes to connect to cloud resources and upload or download data. This document focuses primarily on security best practices for provisioning and authentication. The architecture uses a combination of the following services and features: An edge device (such as a medical device) that you deploy at the edges of your environment and that is geographically close to the data that you want to process. The edge devices connect bi-directionally with your IoT backend, which means that they can send messages to and receive messages from the IoT backend. An IoT backend that can be an MQTT broker or an IoT platform. An MQTT broker provides a secure interface for edge devices to connect using the MQTT protocol. MQTT brokers lack capabilities for device identity and device management and rely on external systems to provide them. An IoT platform is a cloud application that edge devices connect and communicate with. IoT platforms provide a secure interface for edge devices to connect using the MQTT protocol. Each IoT platform has its own security implementation that determines how it authenticates and authorizes edge devices and how it manages device identities. A central certificate store that hosts the certificates for all edge devices. Cloud resources that edge devices must access. Provisioning an edge device Before an edge device can connect with your backend workloads, you must provision a certificate for the edge device. There are two main scenarios that decide how you provision the certificate: If your solution is based on commercial, generic devices, you have full control over the provisioning process after purchasing the device. 
If you use custom-built devices, the initial provisioning process happens during the manufacturing of the devices and you must integrate the provisioning process with your vendors and manufacturers. In either scenario, you must create device certificates with a chain of trust that links to a root certificate authority (CA). These certificates authenticate the device identity and help ensure that updates and modifications done on the device are by trusted actors. Use a CA such as Certificate Authority Service to complete the following tasks: Generate and store the root CA certificate in a secure manner. Generate and store subordinate CA certificates for signing of your device certificates, if necessary. Request and sign device certificates. Configure and distribute permissions to subordinate CAs to your vendors and manufacturers, if necessary. Revoke device certificates when they aren't required any longer or you suspect the device has been compromised. To provision a device certificate, you must complete the following tasks: If your device has a hardware-based, tamper-proof security solution, such as a secure element (SE) or a hardware secure module (HSM) that stores the private keys locally and the private keys are never exposed externally, do the following: Generate the public-private key pair using the hardware-based security solution that's supported by your device. Request a certificate by using a certificate signing request (CSR). If you aren't using a hardware-based security solution to generate a public-private key pair, use the CA to generate the keys and the certificate instead. For more information, see Using an auto-generated key. The certificate that you download by using this method is already signed. After you generate and sign a device certificate, you then install the signed device certificate on the edge device and store the certificate in a central certificate repository, such as Secret Manager. For more information, see How to deploy a secure and reliable public key infrastructure with Google Cloud CA Service (PDF). For information about other provisioning best practices, see Best practices for automatically provisioning and configuring edge and bare metal systems and servers. Three types of certificates are used to help secure an IoT solution: The root CA certificate provides the root for the chain of trust of all the other certificates in your system. The backend workloads use the root certificate to validate the client certificates and the edge devices use the root certificate to validate the server certificate. You must distribute the root certificate to both the IoT backend and to the edge devices. The intermediate CAs certificates provide a chain of trust that is rooted into the root CA. You can use intermediate CAs for provisioning, or for operational needs, such as granting access to intermediate CAs to manufacturers, or to implement flexible CA management processes. The server certificates are used to secure the endpoints that are exposed by the IoT backend. You have server certificates for the different encryption algorithms that your endpoints need to support. Server certificates are linked to the root CA. A secret manager manages and stores both the private and public portions of the server certificates. You must configure your IoT backend with the server certificates and their corresponding private keys. The client certificates are used to identify edge devices. 
Each edge device has at least one client certificate, which means that the number of certificates that you have increases with the number of edge devices in your environment. Client certificates are linked to the root CA. You must distribute client certificates to your edge devices and to the IoT backend. Process for generating a device certificate using an HSM or SE The following diagram shows how a device certificate is provisioned when using an HSM or SE. In this diagram, the following steps occur: The edge device generates the public key pair in the hardware. You download the public key and create the certificate signing request (CSR) for it. You send the CSR to the CA to request a certificate. The CA completes the following actions: Signs the certificate. Returns the signed certificate to the provisioner. The provisioner completes the following actions: Sends the signed certificate to the edge device. Stores the signed certificate in the central certificate store. The edge device stores the certificate in a secure location. Process for generating a device certificate using the CA The following diagram shows how a device certificate is provisioned when using a CA. In this diagram, the following steps occur: The provisioner requests that the CA send a signed certificate for the device. The CA completes the following actions: Generates a public-private key pair and signs the public key. Returns the device certificate and the private key to the provisioner. The provisioner completes the following actions: Sends the certificate and private key to the edge device. Stores the certificate and private key in the central certificate store. The edge device stores the certificate and the private key in a secure location. If you want to store the private key in a single place (the device), you should avoid storing the private key in the central secret store. However, if you store the private key outside the central secret store and you lose access to the private key, the device has to go through the provisioning process again. Authenticate devices before signing certificates The process to generate a device certificate (either on the device, or by using a CA) requires that the device and the CA communicate and authenticate one another. Without proper authentication, your CA might mistakenly trust a malicious device. For example, an attacker with knowledge about how to reach the certificate signing infrastructure of your CA might deploy a malicious device that asks your CA to sign a certificate. If you don't have device authentication in place, your CA might sign the certificate that the malicious device presents. If your CA does sign the certificate, the malicious device can then communicate with your backend as a trusted device. To help you prevent malicious devices from communicating with your CA, we recommend that you take the following actions: Implement an authentication mechanism for devices that aren't yet trusted. Establish the authenticity of any device requesting authentication. Establish device authenticity before a device asks the CA to generate a new certificate or sign an existing certificate. Implementing an authentication mechanism at this point of the provisioning process is challenging. You cannot rely on device certificates to authenticate devices because the device doesn't yet have a signed certificate from the CA. This lack of a signed certificate can occur for the following reasons: The device didn't yet generate a certificate. The device didn't yet send a CSR to the CA. 
The CA didn't yet send the signed certificate back to the device. One way to solve this problem is to extend your device provisioning process to do the following for each device that you want to authenticate or need to authenticate: Generate a provisioning certificate that you use only to authenticate the device against the certificate signing infrastructure. Sign the provisioning certificate with your CA. Store the signed provisioning certificate in the SE or HSM on the device. Store the signed provisioning certificate in your Google Cloud backend. Before the device is granted access to the certificate signing infrastructure of your CA, the device has to present the provisioning certificate. It has to present the certificate so that you can verify its integrity and authenticity, and determine whether the certificate matches one of the provisioning certificates stored in your Google Cloud backend. If the verification is successful, the device can access the certificate signing infrastructure of your CA and the certificate provisioning process can continue. There are differences between a provisioning certificate and a fully-trusted certificate. A provisioning certificate only grants access to a minimal amount of services and infrastructure. Creating a provisioning certificate lets the CA verify that the device is genuine before considering it fully trusted and issuing a fully-trusted certificate. An extension to this process is that you can use subordinate CAs that device manufacturers have access to, together with your CA, to sign provisioning certificates. For example, a manufacturer might sign the provisioning certificates of a device after it completes the manufacturing process for that device. You can then verify these signatures to help you validate that the device is genuine. If a device is compromised before it's provisioned, we recommend that you remove the corresponding provisioning certificate from your Google Cloud backend, so that the device can't initiate the process to obtain a fully-trusted certificate because it won't be able to authenticate against your CA. Best practices for device identity This section describes the best practices for device identities. Use an identity provider with MQTT brokers MQTT brokers authenticate edge devices by using device credentials provided by plugins, databases, and files. To manage your device identities in a systematic and scalable manner, use an identity provider (IdP). The IdP manages the identities and credentials for all devices and acts as the primary source of truth for device identities. To keep the device identity updated in the MQTT broker, implement a system-specific integration layer. For more information about managing device credentials, see Provisioning an edge device. Use the digital identities of the IoT platform as the source of truth The IoT platform has security features that manage the device identities and device credentials, and authenticate and authorize devices trying to access the platform. These security features help ensure that only authorized devices are allowed to access the IoT platform and help ensure data integrity. Verify that the device identities managed by the IoT platform represent the primary source of truth of all the devices the IoT platform manages. Other components in an IoT solution that need device identity information should rely on the security system of the IoT platform. The IoT platform grants access rights to devices and propagates any security changes throughout the IoT solution. 
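To make the CSR-based provisioning flow described earlier more concrete, the following sketch generates a device key pair and a certificate signing request that a provisioner could submit to your CA. It's a minimal example that assumes the third-party cryptography library and a placeholder device ID; in production, generate and keep the private key inside the device's secure element or HSM so that it's never exposed externally.

# Minimal sketch: generate a device key pair and a certificate signing
# request (CSR) that a provisioner can submit to the certificate authority.
# Assumes the third-party cryptography library and a placeholder device ID.
# In production, prefer generating and keeping the private key inside the
# device's secure element or HSM so that it is never exposed externally.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

DEVICE_ID = "device-0001"  # placeholder device identifier

def generate_key_and_csr() -> None:
    private_key = ec.generate_private_key(ec.SECP256R1())
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, DEVICE_ID)]))
        .sign(private_key, hashes.SHA256())
    )
    # The CSR is sent to the CA; the private key stays on the device.
    with open(f"{DEVICE_ID}.csr.pem", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))
    with open(f"{DEVICE_ID}.key.pem", "wb") as f:
        f.write(
            private_key.private_bytes(
                serialization.Encoding.PEM,
                serialization.PrivateFormat.PKCS8,
                serialization.NoEncryption(),
            )
        )

if __name__ == "__main__":
    generate_key_and_csr()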
Best practices for network connectivity Securing network connectivity is important for the following reasons: Secure networks help ensure that a device connects to the right backend. For example, a secure network can prevent DNS spoofing, which is an attack that tries to divert devices to connect to a rogue backend that is controlled by attackers. Secure networks help ensure third parties can't read your data traffic. For example, a secure network can prevent an attacker-in-the-middle attack, where attackers read the traffic between your device and the backend. Use Transport Layer Security (TLS) to protect network communication between your edge devices and backend workloads. Extend TLS with mTLS to implement a mutual authentication scheme that enables both connecting parties to establish the identity of each other. For instructions on using TLS, see Standalone MQTT broker architecture on Google Cloud and IoT platform product architecture on Google Cloud. Best practices for certificate management for MQTT brokers This section describes best practices for managing certificates when using MQTT brokers. Store certificates centrally Store and manage server certificates and device certificates in a central location. Specifically, ensure that you have the following controls in place: An inventory of all your devices and their certificates and the server endpoints and their certificates. Additional information about the certificates such as their validity. The ability to add and remove certificates for devices so that devices can connect using new certificates. Access rights to your central certificate store, to limit what the different roles in your backend can do with the certificates. Use a secret storage and management solution such as Secret Manager or HashiCorp Vault. Secret Manager lets you version, update, and invalidate device credentials, and to manage access policies to your credentials. For an IoT platform, implement access to the credentials using Secret Manager API access. Protect certificates on edge devices To store certificates and keys on the edge devices, use a local trusted execution environment or certificate store to protect the credential and block unauthorized accesses. If you need to store secret material on your devices, encrypt that material using techniques such as flash encryption, and store it on tamper-proof elements to help prevent unauthorized data extraction. Synchronize the central certificate store with the MQTT broker certificate store MQTT brokers must access client certificates for certificate-based authentication, so you must synchronize the certificate stores of MQTT brokers with the central certificate store. Verify that changes on the central certificate store, such as add, update, and delete certificates, are synchronized with the MQTT broker certificate store. MQTT brokers use certificate stores such as MySQL, PostgresDB, and Java Key Store. Depending on which certificate store your MQTT broker uses, ensure that the following processes exist: A process that monitors for changes in the central certificate store and notifies the synchronization process. A process that takes changes in the central certificate store and synchronizes the changes in the central certificate store with the certificate store used by the MQTT broker. When you use Secret Manager as your certificate store, you can use event notifications as the monitoring process. You can implement the synchronization process as a listener of the event notifications. 
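The following sketch outlines such a listener: it subscribes to the Pub/Sub topic that receives Secret Manager event notifications and triggers a synchronization step for the MQTT broker's certificate store. It's illustrative only and assumes the google-cloud-pubsub client library, a placeholder subscription, and a hypothetical sync_broker_certificate_store() integration that you would implement for your broker.

# Minimal sketch: listen for Secret Manager event notifications on a
# Pub/Sub subscription and trigger synchronization of the MQTT broker's
# certificate store. Assumes the google-cloud-pubsub client library, that
# event notifications already publish to the placeholder subscription
# below, and a hypothetical sync_broker_certificate_store() integration.
from google.cloud import pubsub_v1

SUBSCRIPTION = "projects/my-example-project/subscriptions/secret-events"  # placeholder

def sync_broker_certificate_store(secret_name: str, event_type: str) -> None:
    # Hypothetical integration point: push the change to the MQTT broker's
    # certificate store (for example, a SQL database or a Java KeyStore).
    print(f"Would sync {secret_name} after event {event_type}")

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    attrs = message.attributes
    sync_broker_certificate_store(attrs.get("secretId", ""), attrs.get("eventType", ""))
    message.ack()

def main() -> None:
    subscriber = pubsub_v1.SubscriberClient()
    future = subscriber.subscribe(SUBSCRIPTION, callback=callback)
    print(f"Listening on {SUBSCRIPTION} ...")
    future.result()  # block and process messages until interrupted

if __name__ == "__main__":
    main()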
Distribute certificates to edge devices securely When using MQTT brokers, distribute the root certificate and client certificates to your edge devices. When you distribute certificates, you must secure your communication channels so that the traffic doesn't get intercepted. The main communication channels for certificate distribution are the following: A direct path from the IoT backend to the edge devices over existing communication channels. An indirect path in which edge devices request and download the certificates. During certificate distribution, you require the following components: A certificate store where certificates are centrally managed. A distribution coordinator which sends the certificates and tracks the distribution process for each edge device. An update handler on the edge device that receives or downloads the certificates and stores them on the device. Distribute certificates during the provisioning processes for edge devices, and when you need to rotate certificates. During the provisioning process, ensure that the provisioner has direct access to edge devices over encrypted channels such as SSH and uses tools such as SCP. Because the devices are not in operation, you can push certificates directly to the edge devices. When rotating certificates, use the MQTT broker as the communication channel between the distribution coordinator and the edge devices. Use other channels to download certificates onto the device. To minimize disruption of the edge devices in operation, use an indirect certificate distribution path. The process would consist of the following logical steps: The distribution coordinator acquires access credentials from the certificate store. The distribution coordinator pushes the certificate access credentials to the edge devices together with additional information, such as the download URL. The on-device update handler receives the access credentials and temporarily stores the information and acknowledges receipt back. The update handler coordinates the certificate download when the device is not active. The update handler uses the access credentials to download certificates from the credential store. After the certificates are downloaded, the update handler continues with the certificate rotation process which is described in the certificate rotation section. When you use Secret Manager as the central certificate store, you can generate short-lived access tokens to grant and restrict access to certificates. For more information, see Distribute access tokens to devices securely. To help prevent the certificates from being exposed during transit, encrypt the connection between your edge devices and the MQTT broker. For more information, see Best practices for network connectivity. Rotate certificates automatically To limit the damage an exposed certificate can cause, generate certificates with a finite valid period and rotate the certificates before they expire. For large-scale IoT deployments, implement an automatic certificate rotation procedure to consistently update your devices with new certificates before the old ones expire. Deployed devices without valid certificates means that the devices can stop functioning, which can be costly to fix and negatively affect the overall functionality of your IoT solution. Your edge devices must connect bi-directionally with your MQTT broker to ensure that they can send messages to the MQTT broker and that they can receive messages from the MQTT broker. 
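As a concrete illustration of the update handler's download step in the indirect distribution path above (the same step can be reused when rotating certificates), the following minimal sketch fetches a certificate that is stored as a secret version in Secret Manager. The project and secret names are hypothetical, and the sketch assumes the google-cloud-secret-manager client library and that the device's environment supplies credentials, for example the short-lived token pushed by the distribution coordinator, through Application Default Credentials.

from google.cloud import secretmanager

def download_certificate(project_id: str, secret_id: str, version: str = "latest") -> bytes:
    """Fetches a PEM-encoded certificate stored as a Secret Manager secret version."""
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data

# Hypothetical identifiers; replace with the values sent by the distribution coordinator.
certificate_pem = download_certificate("my-iot-project", "device-001-client-cert")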
During certificate rotation, you require the following components: A monitoring process that recurrently scans through your certificate inventory and looks for certificates that are about to expire. The monitoring process triggers certificate rotation for expiring certificates. A rotation process that initializes and oversees certificate rotation. A device certificate rotation handler on the edge device that communicates with the MQTT broker and executes certificate rotation steps on the device. To rotate certificates, the IoT solution completes the following steps: The rotation process sends an initialization message to the edge device to start certificate rotation. The device certificate rotation handler acknowledges the initialization message by sending a response back to the rotation job. The rotation process requests a new certificate from the CA. This request is similar to the certificate provisioning request, except that the keys and CSR are sent as MQTT broker messages. After receiving the new certificate from the CA, the rotation job distributes the certificate to the central certificate store and to the edge device. It also synchronizes the certificate to the certificate store of the MQTT broker. The device certificate rotation handler stores the new certificate and initializes a new connection with the MQTT broker using the new certificate. After the new connection is established, the device certificate rotation handler sends a completed message to the MQTT broker. After receiving the completed message, the rotation process invalidates the old certificate in the central certificate store. To help protect the certificates that are being sent during the rotation process, use dedicated MQTT topics for certificate rotation. Limit access to these topics to only the rotation job and the edge device. To help protect the certificate rotation process from runtime failures, enable persistence for the changes and progress. For more information about rotating secrets using Secret Manager, see Rotation of secrets. Best practices for certificate management for IoT platforms If you're using an IoT platform, use the certificate update and distribution mechanisms provided by the platform. For backup purposes, you can regularly export the credentials from your IoT platform to a secondary secret storage, such as Secret Manager. Best practices for authentication with an MQTT broker During the mutual authentication process, backend workloads verify the identity of edge devices, and edge devices verify the identity of backend workloads. After the backend workloads confirm the identity of the edge device, the backend workloads authorize the device access to resources. The following sections provide best practices for authentication methods when using MQTT brokers. Note: Most commercial IoT platform applications support their own set of authentication schemes. Choose the authentication method for MQTT brokers Different IoT backends support different authentication methods. The commonly used methods are the following: Username and password authentication, where the edge device presents its username and password to verify its identity. Token-based authentication, where encrypted security tokens are used to verify the edge device's identity. Customized authentication schemes, where you implement a custom mechanism to verify the identity of the edge device. As part of the MQTT standard, MQTT brokers support username and password authentication as the default for MQTT CONNECT packets. 
The MQTT CONNECT packet also contains a Client Identifier field that you can use to uniquely identify the client to the MQTT broker. Edge devices send the MQTT CONNECT packet to the MQTT broker when they establish a connection. Besides the username, password, and client identifier fields in the MQTT CONNECT packet, MQTT 5.0 supports enhanced authentication that lets you build challenge-response authentication flows. MQTT 5.0 allows for multiple AUTH packet exchanges between the edge device and the MQTT broker. Use password stores with username and password authentication For username and password authentication, configure the MQTT broker to use a password store. The password store provides a centralized location for managing passwords for all the edge devices that connect to the MQTT broker. By default, the username, password, and client identifier fields are optional in the MQTT specification. Therefore, design your authentication mechanism to verify that the username, password, and client identifier fields are present in the MQTT CONNECT packet. Ensure that the passwords are protected at rest and in transit, as follows: At rest, store a cryptographically strong hash of the password that cannot be reversed. For more information about hashing passwords, see Account authentication and password management best practices. In transit, encrypt the connection between your edge devices and the MQTT broker. For more information, see Best practices for network connectivity. Consider token-based authentication With token-based authentication, edge devices send a token to the MQTT broker to authenticate. Devices can generate the token themselves or get the token from other authentication services. Compared to passwords, tokens are short-lived: tokens are only valid for a period with an explicit expiration date. Always check for expiration when validating tokens. JSON Web Tokens (JWT) are a way to implement token-based authentication. Edge devices can generate the JWT and authenticate with the MQTT broker. The JWT is embedded into the MQTT CONNECT packet as the password field. The advantages of JWT are the following: JWT lets you choose the signing algorithm that is used to sign the token. JWT works well with constrained edge devices, where you can use a less resource-intensive algorithm, such as an elliptic curve (ECC) algorithm, to sign the token. With public key cryptography, the private key is used only on the edge device and is never shared with other parties. Keeping the private key on the device helps make this method more secure than username and password authentication, where the credentials themselves are sent over the connection and must be protected by encrypting the data in transit. Consider custom authentication schemes Some MQTT brokers support different authentication mechanisms and protocols. For example, if your MQTT broker supports customized authentication schemes, you can configure it to support the following: Industry-standard authentication protocols such as OpenID Connect, Security Assertion Markup Language (SAML), LDAP, Kerberos, and Simple Authentication and Security Layer (SASL). These protocols delegate device authentication to your existing identity providers. Some MQTT brokers support enhanced authentication and extensible authentication mechanisms that you can use to extend the MQTT broker to support new protocols and identity providers. Certificate-based mutual authentication. Some MQTT brokers support a mutual authentication scheme, such as mTLS-based authentication.
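To make the token-based authentication approach described earlier in this section more concrete, the following minimal sketch mints a short-lived JWT with the device's private key and places it in the password field of the MQTT CONNECT packet. It assumes the PyJWT and paho-mqtt packages, an EC (ES256) device key, and hypothetical broker, key path, and audience values; the exact claims and username convention depend on what your MQTT broker's authenticator expects.

import datetime
import jwt                      # PyJWT
import paho.mqtt.client as mqtt

def create_device_jwt(private_key_pem: str, audience: str) -> str:
    """Mints a short-lived JWT signed with the device's EC private key (ES256)."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "iat": now,
        "exp": now + datetime.timedelta(minutes=20),  # keep tokens short-lived
        "aud": audience,
    }
    return jwt.encode(claims, private_key_pem, algorithm="ES256")

# Hypothetical key path and audience; the private key never leaves the device.
with open("/etc/device/device-key.pem") as key_file:
    token = create_device_jwt(key_file.read(), audience="my-mqtt-broker")

client = mqtt.Client(client_id="device-001")
client.username_pw_set(username="device-001", password=token)
# Also configure TLS for the connection, as shown in the network-connectivity sketch earlier.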
Best practices for device access control and authorization Because of the publisher and subscriber communication pattern of the MQTT protocol, device access control is defined using MQTT topics. MQTT topics control how a device can communicate with your IoT backend. Each IoT backend has different implementations for access control and authorization, so refer to your IoT backend documentation for options on how to set up MQTT topics. Use single-purpose service accounts to access Google Cloud resources Access to Google Cloud resources is managed by IAM allow policies that bind resource access permissions to a set of principals. Typical principals are user accounts, service accounts, and groups. Service accounts are typically used by an application or compute workload to make authorized API calls for cloud resources. Service accounts let IoT edge devices access cloud resources. Because the device identity is managed by the IoT backend, you must map an identity between the IoT backend and IAM so that the edge device can access Google Cloud resources. If you're managing a large set of devices, the limit on the number of service accounts for each Google Cloud project makes it infeasible to have a direct one-to-one mapping between devices and service accounts. Instead, create service accounts that are linked to the cloud resources that your IoT solution needs to access, as described in creating single-purpose service accounts. For example, create a unique service account for each of the following use cases: Downloading software update packages Uploading large media files Ingesting data from a low-latency stream To implement least privilege, ensure that each service account only has sufficient access rights to support its use case. For example, for a service account that is used to download software packages, only grant read access to the Cloud Storage bucket. Distribute access tokens to devices securely Usually, your edge devices communicate with your IoT platform using MQTT. However, for specific use cases, your devices might require direct access to Google Cloud resources. For example, consider the following: To download content, an edge device requires read-only access to a Cloud Storage bucket during the download process only. To upload data to a Cloud Storage bucket, an edge device requires write access to the bucket. For these use cases, use workload identity federation, where access to Google Cloud resources is granted through access tokens. Workload identity federation eliminates the need to provision cloud-specific credentials on the edge devices, and access is granted dynamically on demand. To distribute access tokens for cloud resources to your devices, configure workload identity federation between your device identity provider and Google Cloud. To support workload identity federation, ensure that your IoT backend meets the workload identity federation requirements and follows the security best practices that match your use cases. To access Google Cloud resources using workload identity federation, your edge devices must implement the OAuth 2.0 Token Exchange workflow, which involves the following steps: The device calls the Security Token Service and provides its own device credentials. The Security Token Service verifies the identity of the edge device by validating the credentials that the edge device provided against the device identity provider. If the identity verification is successful, the Security Token Service returns an access token back to the edge device.
The edge device uses that token to impersonate the single-purpose service account and obtains a short-lived OAuth 2.0 access token. The device uses the short-lived OAuth 2.0 access token to authenticate with Google Cloud APIs and get access to the required cloud resources. To restrict the access of the short-lived access token to specific buckets and objects in Cloud Storage, use Credential Access Boundaries. Credential Access Boundaries lets you limit the access of the short-lived credential and minimize the number of resources that are exposed in your Cloud Storage buckets when an access token gets compromised. Workload identity federation is a scalable way of securely distributing cloud access to edge devices. For more information about authentication, see Authentication at Google. Monitor and audit access to cloud resources Enable Cloud Audit Logs to create log entries when your edge devices access cloud resources through authenticated API requests. Cloud Audit Logs lets you monitor critical actions that are done by your edge devices on Google Cloud. In addition, Cloud Audit Logs creates the audit traces and logs that you need to investigate any issues. For more information, see Impersonating a service account to access Google Cloud. What's next Learn more about a Technical overview of Internet of Things. Read the remaining documents in the series: Standalone MQTT broker architecture on Google Cloud IoT platform product architecture on Google Cloud Learn more best practices for working with service accounts ContributorsAuthors: Charlie Wang | Cloud Solutions ArchitectMarco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Best_practices_for_securing_your_applications_and_APIs_using_Apigee.txt b/Best_practices_for_securing_your_applications_and_APIs_using_Apigee.txt new file mode 100644 index 0000000000000000000000000000000000000000..c8d16c29685b4a660994ec4b25c2d2b9f2a381c0 --- /dev/null +++ b/Best_practices_for_securing_your_applications_and_APIs_using_Apigee.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/best-practices-securing-applications-and-apis-using-apigee +Date Scraped: 2025-02-23T11:56:08.711Z + +Content: +Home Docs Cloud Architecture Center Send feedback Best practices for securing your applications and APIs using Apigee Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-19 UTC This document describes best practices that can help you to secure your applications and APIs using Apigee API management and the following Google Cloud products: Google Cloud Armor reCAPTCHA Cloud CDN This document is intended for API architects, security architects, and engineering leads who manage the infrastructure of an application and who want to expose secure, scalable, and performant APIs. This document uses a series of example architectures to demonstrate best practices for using Apigee API management. This document also discusses best practices for using web app and API protection (WAAP), a comprehensive security solution that you can use to help secure your applications and APIs. This document assumes that you are familiar with networking, APIs, and Google Cloud. Apigee API management Apigee is a platform for developing and managing APIs. By adding a proxy layer to your services, Apigee provides an abstraction or facade that helps you to secure your backend service APIs. Users can interact with applications using OAuth 2.0 and allow-listed IP address ranges. 
As shown in the following image, users can interact with an application, and data and services are exposed in a bidirectional flow. The security points are as follows: Users: OAuth 2.0 IP address access control Applications API keys OAuth 2.0 TLS Developers and partners SSO RBAC APIs OAuth 2.0 OpenID Connect Quotas Spike arrest Threat protection API team IAM RBAC Federated logic Data masking Audit logs Backend Private networking Mutual TLS IP address access control As the preceding image shows, you can use different security mechanisms in an application, such as API key or OAuth 2.0 with Transport Layer Security (TLS). You can also add rate limiting, threat protection policies, and configure mutual TLS to the backend of your API layer. To help you to manage access for an API team within the Apigee platform, Apigee has role-based access control (RBAC) and federated login. We recommend that you use the Apigee default policies to secure your APIs. The policies are as follows: Traffic Management. Helps you to configure caching, control quotas, mitigate the effects of spikes, and control API traffic. Message level protection. Lets you inspect and validate request payloads to help protect your backend from malicious attackers. Security. Helps you to control access to your APIs. You can attach one or more of these policies to your proxy layer. The following table lists the security use case for each policy, categorized by policy type. Policy type Policy name Security use case Traffic management SpikeArrest policy Applies rate limiting to the number of requests sent to the backend. Traffic management Quota policy Helps your organization to enforce quotas (the number of API calls made) for each consumer. Traffic management ResponseCache policy Caches responses, reducing the number of requests to the backend. Message-level protection OASValidation policy Validates incoming requests or response messages against an OpenAPI 3.0 Specification (JSON or YAML). Message-level protection SOAPMessageValidation policy Validates XML messages against a schema of your choice. Validates SOAP messages against a WSDL and determines whether JSON and XML messages are correctly formed. Message-level protection JSONThreatProtection policy Helps to mitigate the risk of content-level attacks by letting you specify limits on JSON structures like arrays and strings. Message-level protection XMLThreatProtection policy Helps you to address XML vulnerabilities and mitigate the risk of attacks by evaluating message content and detecting corrupt or malformed messages before they can be parsed. Message-level protection RegularExpressionProtection policy Evaluates content against predefined regular expressions and rejects it if the expression is true. Security BasicAuthentication policy Base64 encodes and decodes user credentials. Security VerifyAPIKey policy Enforces the verification and the validation of API keys at runtime. Only allows applications with approved API keys associated with your API products to access your APIs. Security OAuthV2 policy Performs OAuth 2.0 grant type operations to generate and validate access tokens. Security JWS and JWT policies Generates, verifies, and decodes JSON Web Tokens (JWT) and JSON Web Signatures (JWS). Security HMAC policy Computes and verifies hash-based message authentication code (HMAC) for authentication and application-level integrity checks. Security SAMLAssertion policy Validates incoming messages that contain a digitally signed SAML assertion. 
Generates SAML assertions to outbound XML requests. Security CORS policy Lets you set cross-origin resource sharing (CORS) headers for APIs that are consumed by web applications. We recommend that you use Google Cloud Armor for IP address-based and geo-based access control. However, in cases where it's not possible, you can use the AccessControl policy. To help you to secure the connections from Apigee to your backend, Apigee also provides keystore management, which lets you configure the keystore and truststore for TLS handshakes. You can use Apigee to create API products that let you bundle your API operations and make them available to application developers for consumption. An API product bundles together one or more operations. An operation specifies an API proxy and resource paths that can be accessed on that proxy. An operation can also limit access by HTTP methods and by quota. You use API products to control access to your APIs. By defining one or more API products in a developer application, you can restrict access to proxies with an API key. For example, mobile applications which are used by customers can only perform a POST operation on the /v1/payments endpoint, in this case, https://$DOMAIN/v1/payments. In another example, call center applications which are used by call center staff can perform operations like PUT or DELETE on the /payments endpoint, such as https://$DOMAIN/v1/payments/1234, to revert or reverse payments. Initial architecture This section describes an example microservices architecture with the services deployed in the data center and cloud provider. The following architecture best practices demonstrate how you can iterate and improve the initial architecture. The initial architecture is as follows: The payments and accounts services are hosted in the data center, and the money-transfer service is hosted in Google Cloud. The external Application Load Balancer controls and configures ingress to the services. The external Application Load Balancer forwards the request to the appropriate backend or third-party service and handles the TLS handshake. In its initial state, the example architecture has the following constraints: It's unlikely to scale. It's unlikely to protect a system from malicious attacks It doesn't reflect consistent best practices for security and logging because these services are developed and maintained by different teams within the organization. Architecture best practices Apigee can add value and make it easier to expose your services to your consumers by implementing a standard set of security policies across all APIs. This section discusses best practices for using Apigee to help secure your APIs. Use Apigee as a proxy layer The following diagram shows the initial architecture with the addition of Apigee as a proxy (facade) layer: Apigee is provisioned in a Google Cloud project and the runtime is provisioned and peered in a tenant project using VPC Network Peering. To help secure your system, instead of sending data through the internet, you can use Apigee as a proxy layer to establish a direct (private) connection to your data center using Cloud Interconnect. The request flow is as follows: The client sends the request to the external Application Load Balancer with the credentials for the application—for example, a key, token, or certificate. The load balancer routes the request to Apigee. Apigee processes the request, executes the security policies as described in Apigee API management, and allows or denies the request. 
Apigee can also be used to route the request to different backends based on the client, the request, or both the client and the request. Apigee forwards the request to the GKE backends directly through internal IP addresses. The communication between Apigee and the money-transfer service can happen over an RFC 1918 address (internal IP address) because they are within the peered network. Apigee sends the request to the private data center backends through Cloud Interconnect. Apigee sends the request to third-party services through Apigee NAT IP address provisioning. Use Google Cloud Armor as a WAF layer with Apigee You can add Google Cloud Armor to the architecture to increase your security perimeter. Google Cloud Armor is part of the global load-balancing infrastructure for Google Cloud. It provides web application firewall (WAF) capabilities and helps to prevent distributed denial of service (DDoS) attacks. It can also help you to mitigate the threat to applications from the risks listed in the OWASP Top 10. You can configure rules and policies in Google Cloud Armor to evaluate every call made by the client that hits the external Application Load Balancer. You can also automate the configuration of Google Cloud Armor policies. For more information about how to configure rules in Google Cloud Armor, see the Google Cloud Armor How-to guides. The following diagram shows the example architecture with both Apigee and Google Cloud Armor in place: The flow of events in this architecture is similar to those discussed in Use Apigee as a proxy layer earlier in this document. The request flow is as follows: The client sends the request to the external Application Load Balancer with the credentials for the application—for example, a key, token, or certificate. Google Cloud Armor filters the request because the external Application Load Balancer has it enabled. It enforces and evaluates all the configured rules and policies. If any rule is violated, Google Cloud Armor rejects the request and gives you an error message and status code. If there are no Google Cloud Armor rule violations, the external Application Load Balancer routes the request to Apigee. Apigee processes the request, executes the security policies, and allows or denies the request. It can also be used to route the request to different backends based on the client, the request, or both the client and the request. Apigee forwards the request to the GKE backends directly through internal IP addresses. The communication between Apigee and the money-transfer service can happen over an RFC 1918 address (internal IP address) because they are within the peered network. Apigee sends the request to the private data center backends through Cloud Interconnect. Apigee sends the request to third-party services through Apigee NAT IP address provisioning. Use WAAP To further enhance your security profile, you can also use WAAP, which brings together Google Cloud Armor, reCAPTCHA, and Apigee to help protect your system against DDoS attacks and bots. It also provides WAF and API protection. We recommend WAAP for enterprise use cases where the API calls are made from a website and mobile applications. You can set applications to load the reCAPTCHA libraries to generate a reCAPTCHA token and send it along when they make a request. This following diagram shows the workflow: The request flow in the preceding diagram is as follows: (1) All HTTP(S) requests by customers and API consumers are sent to the external Application Load Balancer. 
(2) The first point of contact on the WAAP solution is Google Cloud Armor. (2a) If none of these rules are triggered by the Google Cloud Armor policies, a request is sent to the reCAPTCHA API to evaluate whether the incoming traffic is a legitimate request or not. (3a) If it's a legitimate request, then the request is forwarded to the backend. (2b) If the request isn't legitimate, Google Cloud Armor can deny the request and send a 403 response code to the user. (3b) For any API requests, after the Google Cloud Armor OWASP rules and DDoS protection are evaluated, the request is then forwarded to Apigee to check the validity of the API request. (4) Apigee determines whether the API keys or access tokens used in the request are valid. If Apigee determines that the request isn't legitimate, Apigee can send a 403 response code. (5) If the request is legitimate, Apigee forwards the request to the backend. The following diagram shows the architecture of WAAP with Google Cloud Armor, reCAPTCHA, and Apigee for the API requests. The request flow in the preceding diagram is as follows: The client sends the request to the external Application Load Balancer with the credentials for the application—for example, a key, token, or certificate. Because the external Application Load Balancer has Google Cloud Armor enabled, Google Cloud Armor selects the request. It enforces and evaluates all the configured rules and policies. If any rule is violated, Google Cloud Armor rejects the request with an error message and status code. For website calls such as a form submission for a login page, Google Cloud Armor is integrated with reCAPTCHA. reCAPTCHA evaluates incoming traffic and adds risk scores to legitimate traffic. For traffic that isn't legitimate, Google Cloud Armor can deny the request. If there are no Google Cloud Armor rule violations, the external Application Load Balancer routes the API request to Apigee. Apigee processes the request, executes the security policies, and allows or denies the request. Apigee can also be used to route the request to different backends based on the client, the request, or both the client and the request. Apigee forwards the request to the GKE backends directly through internal IP addresses. The communication between Apigee and the money-transfer service can happen over the RFC 1918 address, which is an internal IP address, because they are both within the peered network. Apigee sends the request to the private data center backends through Cloud Interconnect. Apigee sends the request to third-party services through Apigee NAT IP address provisioning. Use Cloud CDN for caching Cloud CDN uses the Google global network to serve content closer to users, which accelerates response times for your websites and applications. Cloud CDN also offers caching capabilities that help you to secure the backend by returning the response from its cache. By caching frequently accessed data at a Google Front End (GFE) , which is at the edge of the Google network, it keeps the data as close as possible to users and allows for the fastest possible access. Cloud CDN also helps organizations seamlessly handle seasonal spikes in traffic —for example, spikes that might occur during the holiday or back-to-school seasons. This approach to caching helps to improve reliability and user experience in an ecosystem. It can also help to minimize web server load, compute, and network usage. To implement this architecture, you must enable Cloud CDN on the load balancer which serves traffic for Apigee. 
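As a hedged sketch of that last step, the following example enables Cloud CDN on the backend service that the external Application Load Balancer uses to reach Apigee, using the google-cloud-compute client library. The project and backend service names are hypothetical, and the cache settings shown are only an example; many teams apply the same change through the console, gcloud, or Terraform instead.

from google.cloud import compute_v1

def enable_cloud_cdn(project: str, backend_service_name: str) -> None:
    """Patches the load balancer's backend service so that Cloud CDN caches eligible responses."""
    client = compute_v1.BackendServicesClient()
    patch_body = compute_v1.BackendService(
        enable_cdn=True,
        cdn_policy=compute_v1.BackendServiceCdnPolicy(
            cache_mode="CACHE_ALL_STATIC",  # cache static responses by default
            default_ttl=3600,
        ),
    )
    operation = client.patch(
        project=project,
        backend_service=backend_service_name,
        backend_service_resource=patch_body,
    )
    operation.result()  # wait for the patch to finish

# Hypothetical identifiers; replace with your own project and backend service.
enable_cloud_cdn("my-apigee-project", "apigee-backend-service")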
Cloud CDN can be used with any of the options discussed in this document. The following diagram shows the initial example architecture of WAAP with the addition of Cloud CDN. The request flow shown in the preceding diagram is as follows: The client uses reCAPTCHA libraries to get a token and sends the request to the external Application Load Balancer with the credentials for the application—for example, a key, token, or certificate. Cloud CDN checks the cache with the cache key and returns the response if the cache hit is true. If the cache hit is false, Google Cloud Armor filters the request because the external Application Load Balancer has Google Cloud Armor enabled. Google Cloud Armor enforces and evaluates all the configured rules and policies. If any rule is violated, it rejects the request with an error message and status code. Google Cloud Armor is integrated with reCAPTCHA, which evaluates the legitimate incoming traffic with risk scores. For traffic that isn't legitimate, Google Cloud Armor can deny the request. If there are no Google Cloud Armor rule violations, the external Application Load Balancer routes the request to Apigee. Apigee processes the request, executes the security policies as described in Apigee API management, and allows or denies the request. It can also be used to route the request to different backends based on the client, the request, or both the client and the request. Apigee forwards the request to the GKE backends directly through internal IP addresses. The communication between Apigee and the money-transfer service can happen over the RFC 1918 address, which is an internal IP address, because they are within the peered network. Apigee sends the request to the private data center backends through Cloud Interconnect. Apigee sends the request to third-party services through Apigee NAT IP address provisioning. When a response flows back to the client, Cloud CDN caches it so that it can return the response from the cache for future calls. What's next Learn more about Apigee provisioning options. Read about multi-layer API security with Apigee and Google Cloud Armor. Learn how to deliver high-performing global APIs with Apigee X and Cloud CDN. Read and ask questions in the Apigee community. Explore the Apigee repository on GitHub. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Best_practices_for_validating_a_migration_plan.txt b/Best_practices_for_validating_a_migration_plan.txt new file mode 100644 index 0000000000000000000000000000000000000000..a387721781ea8dca89d6aa49bd2ea8a863c06a59 --- /dev/null +++ b/Best_practices_for_validating_a_migration_plan.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-google-cloud-best-practices +Date Scraped: 2025-02-23T11:51:48.302Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Best practices for validating a migration plan Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-08 UTC This document describes the best practices for validating the plan to migrate your workloads to Google Cloud. This document doesn't list all of the possible best practices for validating a migration plan, and it doesn't give you guarantees of success. Instead, it helps you to stimulate discussions about potential changes and improvements to your migration plan. 
This document is useful if you're planning a migration from an on-premises environment, from a private hosting environment, or from another cloud provider to Google Cloud. The document is also useful if you're evaluating the opportunity to migrate and want to explore what it might look like. This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan (this document) Migrate to Google Cloud: Minimize costs Assessment Performing a complete assessment of your workloads and environments helps to ensure that you develop a deep understanding of your workloads and environments. Developing this understanding helps you to minimize the risks of issues happening during and after your migration to Google Cloud. Make a complete assessment Before you proceed with the steps that follow the assessment phase, complete the assessment of your workloads and environments. To make a complete assessment, consider the following items, which are often overlooked: Inventory: Ensure that the inventory of the workloads to migrate is up to date and that you completed the assessment. For example, consider how fresh and reliable the source data is for your assessment, and what gaps might exist in the data. Downtimes: Assess which workloads can afford a downtime, and the maximum length of time that those downtimes can be. Migrating workloads while experiencing zero or nearly zero downtimes is harder than migrating workloads that can afford downtimes. To complete a zero-downtime migration, you need to design for and implement redundancy for each workload to migrate. You also need to coordinate these redundant instances. When you assess how much downtime a workload can tolerate, assess whether the business benefit of a zero-downtime migration is greater than the added migration complexity. Where possible, avoid creating a zero-downtime requirement for a workload. Clustering and redundancy: Assess which workloads support clustering and redundancy. If a workload supports clustering and redundancy, you can deploy multiple instances of that workload, even across different environments, such as the source environment and the target environment. Clustered and redundant deployments might simplify the migration because those workloads coordinate with each other with limited intervention. Configuration updates: Assess how you update the configuration of your workloads. For example, consider how you deliver updates to the configuration of each workload that you want to migrate. This consideration is critical for the success of your migration because you might have to update the configuration of your workloads while you migrate them to the target environment. Generate multiple assessment reports: During the assessment phase, it might be useful to generate more than one assessment report to account for different scenarios. For example, you can generate reports to take into account different load profiles for your workloads, such as at- and off-peak times. 
Assess the failure modes that your workloads support Knowing how your workloads behave under exceptional circumstances helps you to ensure that you don't expose them to conditions from which they can't recover. As part of the assessment, gather information about the failure modes and their effects that your workloads support and can automatically recover from, and which failure modes need your intervention. For example, you can start by considering questions about possible failure modes, such as the following: What happens if a workload loses connectivity to the network? Is a workload able to resume its work from where it left off after being stopped? What happens if the performance of a workload or its dependencies is inadequate? What happens if there are two workloads that have the same identifier in the architecture? What happens if a scheduled task doesn't run? What happens if two workloads process the same request? Another source for unsupported failure modes might be the migration plan itself. Determine whether your migration plan includes steps that depend on the success of a particular condition and whether it includes contingencies if the condition is not met. A plan that includes these types of conditions can indicate that the plan itself might fail or that individual components might fail during migration. After you assess those failure modes and their effects, validate your findings in a non-critical environment by simulating failures and injecting faults that emulate those failure modes. For example, if a workload is designed to automatically recover after a network connectivity loss, validate the automatic recovery by forcibly interrupting its connectivity and restoring it afterwards. Assess your data processing pipelines Your workload assessment should be able to answer the following questions: Are resources correctly sized for the migration? How much time is required to migrate the data that your workloads need? Can the target environment accommodate the full volume of data? How do your workloads behave when they have to accommodate spikes in demand or spikes in the amount of data that they produce in a given time window? If there are spikes in demand or spikes in the amount of data that your workloads produce, is there any adverse effect, such as increased latency or delays in responses? After your workloads start, do they need time to ramp up to the expected levels of performance? The results of this assessment are often models of the demand that your workloads satisfy and the data that the workloads produce in a given time window. When you gather data points to produce such models, consider that those data points might vary significantly between peak and non-peak time windows. For more information about how and what to monitor, see Service Level Objectives in the Site Reliability Engineering book. Ensure that you can update and deploy each workload to migrate During the migration, you might need to update some of the workloads that you're migrating. For example, you might need to deploy a fix for an issue, or roll back a recent change that is causing an issue. For each workload that you're migrating, ensure that you can apply and deploy changes. For example, if you're migrating a workload for which you have the source code, ensure that you can access that source code, and that you can build, package, and deploy the source code as needed. Your migration might include workloads that you can't apply and deploy changes to (such as proprietary software). 
In that scenario, refactor your migration plan to consider additional effort to mitigate the issues that might occur after you migrate those workloads. Assess your network infrastructure A functional network infrastructure is fundamental for the migration. You can use the network infrastructure as part of your migration tooling. For example, you can use load balancers and DNS servers to direct traffic according to your migration plan. To avoid issues during the migration, it's important to assess your network infrastructure and evaluate to what extent it can support your migration. For example, you can start by considering questions about your load-balancing infrastructure, such as the following: What happens when you reconfigure your load balancers? How long does it take for the updated configuration to be in effect? When migrating with zero downtime, what happens if you get a spike of traffic before the updated configuration is in place? After you consider questions about your load-balancing infrastructure, next consider questions about your DNS infrastructure, such as the following: Which DNS records should you update to point them to the target environment, and when should you update them? Which clients are using those DNS records? How is the time to live (TTL) configured for the DNS records to update? Can you set the DNS record TTL to its minimum during the migration? Do your DNS clients respect the TTL of the DNS records to update? For example, do your applications have client-side DNS caching that ignores the TTL that you've configured for the migration? Do you detect traffic directed at your source environment even after you completed the migration? Migration planning Thoroughly planning your migration helps you to avoid issues during and after the migration. Planning also helps you to avoid effort to deal with unanticipated tasks. Develop a rollback strategy for each step of the migration plan During the migration, any step of the migration plan that you execute might result in unanticipated issues. To ensure that you're able to recover from those issues, prepare a rollback strategy for each step of the migration plan. To avoid losing time during an outage, do the following: Ensure that your rollback strategies work by periodically reviewing and testing each rollback strategy. Set a maximum-allowed execution time for each migration step. After this allowed execution time expires, your teams start rolling back the migration step. Even if you have rollback strategies ready for each step of the migration plan, some of those steps might still be potentially disruptive. A potentially disruptive step might cause some kind of loss even if you roll it back, such as a data loss. Assess which steps of the migration plan are potentially disruptive. If you automated any step of the migration plan, ensure that you have a preplanned procedure for each automated step if there is a failure in the automation. As with rollback strategies, periodically review and test each preplanned procedure. If you set up communication channels as part of the migration, to ensure that you aren't locked out from your environment, provision backup channels that you can use to recover from a failure. For example, if you're setting up Partner Interconnect, during the migration you can also set up a backup access through the public internet in case you experience any issues during provisioning and configuration. 
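As a small aid for the DNS assessment questions above, the following minimal sketch reports the TTL that resolvers currently return for a record you plan to repoint during the migration, so you can confirm whether a lowered TTL has taken effect. It assumes the dnspython package and a hypothetical record name.

import dns.resolver  # dnspython

def report_record_ttl(hostname: str, record_type: str = "A") -> None:
    """Prints the TTL and values that the configured resolver currently reports for a record."""
    answer = dns.resolver.resolve(hostname, record_type)
    print(f"{hostname} {record_type} TTL={answer.rrset.ttl}s")
    for rdata in answer:
        print(f"  -> {rdata}")

# Hypothetical record that will be repointed to the target environment.
report_record_ttl("app.example.com")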
Plan for gradual rollouts and deployments To reduce the scope of issues and problems that might occur during the migration, avoid big-scale changes, and design your migration plan to gradually deploy changes. For example, plan for gradual deployments and configuration changes. If you plan for gradual rollouts, to lower the risk of unanticipated issues caused by the application of the changes, minimize the number and the size of those changes. After you identify and resolve issues in your first small rollout, you can make the subsequent rollouts for similar changes at larger scales. Alert development and operations teams To reduce the impact of issues that might occur during a migration, alert the teams that are responsible for any workload to migrate. Also alert the teams that are responsible for the infrastructure of both the source and target environments. If your teams work in different time zones, ensure the following: Your teams properly cover those time zones and they cover multiple consecutive shifts, because they might be unable to resolve issues during a single shift. Your teams are prepared to collect detailed information about the issues that they might face. This collection provides the engineers on the next shift a complete understanding of what the previous shift did, and why. Specific people in your teams are responsible for any given shift. Remove proof-of-concept resources from the target production environment As part of the assessment, you might have used the target environment to host experiments and proofs of concept. Before the migration, remove any resources that you created during those experiments and proofs of concept from the production area of the target environment. You can keep resources in a non-production area of the target environment while the migration is in progress because they might help you to gather information about any issue that might arise during the migration. For example, to diagnose issues that affect your production workloads after the migration, you can compare the configuration and data logs of the production workload against the configuration and data logs of the proofs of concept and experiments. After you complete the migration and you validate that the target environment works as expected, you can delete the resources in the non-production area of the target environment. Define criteria to safely retire the source environment To avoid the cost of running two environments indefinitely, define what conditions must be met for you to safely retire the source environment, such as the following: All workloads, including their backups and high availability and disaster recovery mechanisms, are successfully migrated to the target environment. The data migrated on the target environment is consistent, accessible, and usable. The accuracy and completeness of the migrated data fulfill the defined standard. Resources that remain in the source environment aren't dependencies for workloads that are out of the migration scope. The performance of your workloads on the target environment fulfill your SLA targets. Your monitoring systems report that there isn't any network traffic to the source environment that should be directed to the target environment. After the workloads are running without issue in the target environment for a period that you define, you are confident that you no longer need the ability to fall back to the source environment. 
Operations To efficiently manage the source environment and the target environment during the migration, you need to engineer your operational processes as well. Monitor your environments To observe how your source and target environments are behaving and to help you diagnose issues as they occur, set up the following: A monitoring system to gather metrics that are useful to your scenario. A logging system to observe the flow of operations that is performed by your workloads and other components of your environments. An alerting system that warns you before a problematic event occurs. Google Cloud Observability supports integrated monitoring, logging, and alerting for your Google Cloud environment. Because a workload and its dependencies span multiple environments, you might need to consider using multiple monitoring and alerting tools for different environments. Consider the timing of when you migrate the monitoring and alerting policies that support the workloads. For example, if your source environment is configured to alert when a particular server is down, the alert triggers when you intentionally turn down that server. The alert trigger is expected, but it's unhelpful behavior. As part of the migration, you need to continuously adjust the alerts for the source environment and reconfigure them for the target environment. Manage the migration To manage the migration, you review the performance of the migration to gather information that you can use as a retrospective after the migration is complete. After you gather information, you use it to analyze the migration performance and to prepare data points about potential improvements to your environments. For example, to start planning to manage the migration, consider the following questions: How long did each step of the migration plan take? Were there any steps of the migration plan that took more time to complete than anticipated? Were there any missing steps or checks? Did any adverse events occur during the migration? What's next Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/BigLake.txt b/BigLake.txt new file mode 100644 index 0000000000000000000000000000000000000000..d25a2be3a39352b405543a6d4adfe07f04df8379 --- /dev/null +++ b/BigLake.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/biglake +Date Scraped: 2025-02-23T12:03:43.087Z + +Content: +Google is named a leader in The Forrester Wave™: Data Lakehouses Q2 2024 report.Jump to BigLakeBigLake is a storage engine that provides a unified interface for analytics and AI engines to query multiformat, multicloud, and multimodal data in a secure, governed, and performant manner. 
Build a single-copy AI lakehouse designed to reduce management of and need for custom data infrastructure.Deploy in consoleContact salesContinuous innovation including new research BigQuery's Evolution toward a Multi-Cloud Lakehouse to be presented at the 2024 SIGMOD event.Deploy a Google-recommended solution that unifies data lakes and data warehouses for storing, processing, and analyzing both structured and unstructured dataStore a single copy of structured and unstructured data and query using analytics and AIFine-grained access control and multicloud governance over distributed dataFully managed experience with automatic data management for your open-format lakehouseVIDEOSee how BigLake unifies data lakes, warehouses, across clouds and data formats 2:00BenefitsFreedom of choiceUnlock analytics on distributed data regardless where and how it’s stored, while choosing the best analytics tools, open source or cloud native over a single copy of data. Secure and performant data lakesFine-grained access control across open source engines like Apache Spark, Presto and Trino, and open formats such as Parquet. Performant queries over data lakes powered by BigQuery.Unified governance & management at scaleIntegrates with Dataplex to provide management at scale, including logical data organization, centralized policy & metadata management, quality and lifecycle management for consistency across distributed data. Key featuresKey featuresFine grained security controlsBigLake eliminates the need to grant file level access to end users. Apply table, row, column level security policies on object store tables similar to existing BigQuery tables.Multi-compute analyticsMaintain a single copy of structured and unstructured data and make it uniformly accessible across Google Cloud and open source engines, including BigQuery, Vertex AI, Dataflow, Spark, Presto, Trino, and Hive using BigLake connectors. Centrally manage security policies in one place, and have it consistently enforced across the query engines by the API interface built into the connectors.Multicloud governanceDiscover all BigLake tables, including those defined over Amazon S3, Azure data lake Gen 2 in Data Catalog. Configure fine grained access control and have it enforced across clouds when querying with BigQuery Omni.Built for artificial intelligence (AI)Object tables enable use of multimodal data for governed AI workloads. Easily build AI use cases using BigQuery SQL and its Vertex AI integrations. Built on open formatsSupports open table and file formats including Parquet, Avro, ORC, CSV, JSON. The API serves multiple compute engines through Apache Arrow. Table format natively supports Apache Iceberg, Delta, and Hudi via manifest.As a rapidly growing e-commerce company, we have seen rapid growth in data. BigLake allows us to unlock the value of data lakes by enabling access control on our views while providing a unified interface to our users and keeping data storage costs low. 
This in turn allows quicker analysis on our datasets by our users.What's newWhat’s newBlog postUnify data lakes and warehouses with BigLake, now generally availableLearn moreVideoUnifying distributed data across lakes, warehouses, clouds & open formatsWatch videoBlog postUnifying data lakes and data warehouses across clouds with BigLakeRead the blogBlog postUnify your data for limitless innovation Read the blogDocumentationDocumentationGoogle Cloud BasicsIntroduction to BigLakeIntroduce BigLake concepts and learn what it can do for you to simplify your analytics experience.Learn moreQuickstartGetting started with BigLakeLearn how to create and manage BigLake tables, query a BigLake table through BigQuery or other open source engines using connectors.Learn moreQuickstartQuery Cloud Storage data in BigLake tablesLearn how to query data stored in a Cloud Storage BigLake table.Learn moreNot seeing what you’re looking for?View all product documentationPricingPricingBigLake pricing is based on querying BigLake tables, including:1. BigQuery pricing applies for queries over BigLake tables defined on Google Cloud Storage. 2. BigQuery Omni pricing applies for queries over BigLake tables defined on Amazon S3 & Azure data lake Gen 2.3. Queries from open-source engines using BigLake connectors: BigLake connectors use BigQuery Storage API, and corresponding prices apply - billed on bytes read, and Egress.4. Additional costs apply for query acceleration using metadata caching, object tables, and BigLake Metastore.Ex: * The first 1 TB of data processed with BigQuery each month is free.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/BigQuery(1).txt b/BigQuery(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..541262072c0d5cf157c7f965778add0bf1f1ac31 --- /dev/null +++ b/BigQuery(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/bigquery +Date Scraped: 2025-02-23T12:03:31.350Z + +Content: +Google is a leader, positioned furthest in vision, in the 2024 Gartner Magic Quadrant for Cloud DBMS. Learn more.BigQueryFrom data warehouse to a unified, AI-ready data platformBigQuery is a fully managed, AI-ready data analytics platform that helps you maximize value from your data and is designed to be multi-engine, multi-format, and multi-cloud.Store 10 GiB of data and run up to 1 TiB of queries for free per month.Try it in consoleContact salesProduct highlightsUnified data platform to connect all your data and activate with AIBuilt-in machine learning to create and run models using simple SQLReal-time analytics with streaming and built-in BISign up for our no-cost discovery workshopFeaturesPower your data agents with Gemini in BigQueryGemini in BigQuery provides AI-powered assistive and collaboration features, including code assist, visual data preparation, and intelligent recommendations that help enhance productivity and optimize costs.
BigQuery provides a single, unified workspace that includes a SQL interface, a notebook, and an NL-based canvas for data practitioners of varying coding skill levels, to simplify analytics workflows from data ingestion and preparation to data exploration and visualization to ML model creation and use.VIDEOLearn how to build data agents with Gemini in BigQuery3:42Bring multiple engines to a single copy of dataBigQuery has multi-engine capabilities that include SQL and serverless Apache Spark. A unified metastore for shared runtime metadata provides unified security and governance controls across all engines and storage types, and now supports Apache Iceberg. By bringing multiple engines, including SQL, Spark, and Python, to a single copy of data and metadata, you can break down data silos and increase efficiency.VIDEOWhat's New with BigQuery at Next46:22Manage all data types and open formatsUse BigQuery to manage all data types across clouds, structured and unstructured, with fine-grained access controls. Support for open table formats gives you the flexibility to use existing open source and legacy tools while getting the benefits of an integrated data platform. BigLake, BigQuery’s storage engine, gives you a common way to work with data in open formats like Apache Iceberg, Delta, and Hudi. Read new research on BigQuery's Evolution toward a Multi-Cloud Lakehouse.VIDEOBuild an open and fully managed lakehouse with BigQuery at Next42:06Built-in machine learningBigQuery ML provides built-in capabilities to create and run ML models on your BigQuery data. You can leverage a broad range of models for predictions, and access the latest Gemini models to derive insights from all data types and unlock generative AI tasks, such as text summarization, text generation, multimodal embeddings, and vector search. It increases model development speed by bringing ML directly to your data and eliminating the need to move data out of BigQuery.VIDEOAnalyze data in BigQuery using Gemini models7:10Built-in data governanceData governance is built into BigQuery, including full integration of Dataplex capabilities, such as a unified metadata catalog, data quality, lineage, and profiling. Customers can use rich AI-driven metadata search and discovery capabilities for assets, including dataset schemas, notebooks and reports, public and commercial dataset listings, and more. BigQuery users can also use governance rules to manage policies on BigQuery object tables.Data and AI governance at Next45:30Real-time analytics with streaming data pipelinesUse Managed Service for Apache Kafka to build and run real-time streaming applications. From SQL-based streaming with BigQuery continuous queries, to popular open-source Kafka platforms, to advanced multimodal data streaming with Dataflow, including support for Iceberg, you can make real-time data and AI a reality.Enterprise capabilities BigQuery continues to build new enterprise capabilities. Cross-region disaster recovery provides managed failover in the unlikely event of a regional disaster, as well as data backup and recovery features to help you recover from user errors. BigQuery operational health monitoring provides organization-wide views of your BigQuery operational environment.
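As a minimal sketch of the open-format support described above, the following Python snippet uses the google-cloud-bigquery client to define a BigLake-style external table over Parquet files in Cloud Storage and then query it with GoogleSQL. The project, dataset, connection, bucket, and column names are placeholders, not values from this page.

# Sketch: define a BigLake table over Parquet files in Cloud Storage, then
# query it with GoogleSQL. All resource names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project ID

ddl = """
CREATE OR REPLACE EXTERNAL TABLE `my-project.lakehouse.orders`
WITH CONNECTION `my-project.us.lake-connection`
OPTIONS (
  format = 'PARQUET',
  uris = ['gs://my-bucket/orders/*.parquet']
);
"""
client.query(ddl).result()  # wait for the DDL statement to finish

# The same single copy of data can now be read by BigQuery SQL, with access
# governed at the table level rather than at the file level.
rows = client.query(
    "SELECT order_id, status FROM `my-project.lakehouse.orders` LIMIT 10"
).result()
for row in rows:
    print(row.order_id, row.status)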
BigQuery Migration Services provides a comprehensive collection of tools for migrating to BigQuery from legacy or cloud data warehouses.Share insights with built-in business intelligenceWith built-in business intelligence, create and share insights in a few clicks with Looker Studio or build data-rich experiences that go beyond BI with Looker. Analyze billions of rows of live BigQuery data in Google Sheets with familiar tools, like pivot tables, charts, and formulas, to easily derive insights from big data with Connected Sheets. View all featuresHow It WorksBigQuery's serverless architecture lets you use SQL queries to analyze your data. You can store and analyze your data within BigQuery or use BigQuery to assess your data where it lives. To test how it works for yourself, query data—without a credit card—using the BigQuery sandbox.Run sample queryDemo: Solving business challenges with an end-to-end analysis in BigQueryCommon UsesGenerative AIUnlock generative AI use cases with BigQuery and Gemini modelsBuild data pipelines that blend structured data, unstructured data, and generative AI models together to create a new class of analytical applications. BigQuery integrates with Gemini 1.0 Pro using Vertex AI. The Gemini 1.0 Pro model is designed for higher input/output scale and better result quality across a wide range of tasks like text summarization and sentiment analysis. You can now access it using simple SQL statements or BigQuery’s embedded DataFrame API from right inside the BigQuery console.Learn more about BigQuery and Vertex AI integrationsLearn how to get started with Generative AI in BigQueryRead the latest innovations on BigQuery's integrations with Vertex AITutorials, quickstarts, & labsUnlock generative AI use cases with BigQuery and Gemini modelsBuild data pipelines that blend structured data, unstructured data, and generative AI models together to create a new class of analytical applications. BigQuery integrates with Gemini 1.0 Pro using Vertex AI. The Gemini 1.0 Pro model is designed for higher input/output scale and better result quality across a wide range of tasks like text summarization and sentiment analysis. You can now access it using simple SQL statements or BigQuery’s embedded DataFrame API from right inside the BigQuery console.Learn more about BigQuery and Vertex AI integrationsLearn how to get started with Generative AI in BigQueryRead the latest innovations on BigQuery's integrations with Vertex AIData warehouse migrationMigrate data warehouses to BigQuerySolve for today’s analytics demands and seamlessly scale your business by moving to Google Cloud’s enterprise data warehouse. Streamline your migration path from Netezza, Oracle, Redshift, Teradata, or Snowflake to BigQuery using the free and fully managed BigQuery Migration Service.Learn about BigQuery Migration Service for a comprehensive data warehouse migrationAmazon Redshift to BigQuery migration guideTeradata to BigQuery migration guideSnowflake to BigQuery migration guideTutorials, quickstarts, & labsMigrate data warehouses to BigQuerySolve for today’s analytics demands and seamlessly scale your business by moving to Google Cloud’s enterprise data warehouse. 
Streamline your migration path from Netezza, Oracle, Redshift, Teradata, or Snowflake to BigQuery using the free and fully managed BigQuery Migration Service.Learn about BigQuery Migration Service for a comprehensive data warehouse migrationAmazon Redshift to BigQuery migration guideTeradata to BigQuery migration guideSnowflake to BigQuery migration guideTransfer data into BigQueryBring any data into BigQueryMake analytics easier by bringing together data from multiple sources into BigQuery. You can upload data files from local sources, Google Drive, or Cloud Storage buckets, use BigQuery Data Transfer Service (DTS), Cloud Data Fusion plugins, replicate data from relational databases with Datastream for BigQuery, or leverage Google's industry-leading data integration partnerships. Learn about third-party transfersAutomate data movement into BigQuery with DTSDeploy data pipelines into BigQuery with Data FusionTutorials, quickstarts, & labsBring any data into BigQueryMake analytics easier by bringing together data from multiple sources into BigQuery. You can upload data files from local sources, Google Drive, or Cloud Storage buckets, use BigQuery Data Transfer Service (DTS), Cloud Data Fusion plugins, replicate data from relational databases with Datastream for BigQuery, or leverage Google's industry-leading data integration partnerships. Learn about third-party transfersAutomate data movement into BigQuery with DTSDeploy data pipelines into BigQuery with Data FusionUnlock value from all data typesDerive insights from images, documents, and audio files and combine with structured dataUnstructured data represents a large portion of untapped enterprise data. However, it can be challenging to interpret, making it difficult to extract meaningful insights from it. Leveraging the power of BigLake, you can derive insights from images, documents, and audio files using a broad range of AI models, including Vertex AI’s vision, document processing, and speech-to-text APIs, open-source TensorFlow Hub models, or your own custom models.Learn more about unstructured data analysis Tutorials, quickstarts, & labsDerive insights from images, documents, and audio files and combine with structured dataUnstructured data represents a large portion of untapped enterprise data. However, it can be challenging to interpret, making it difficult to extract meaningful insights from it. Leveraging the power of BigLake, you can derive insights from images, documents, and audio files using a broad range of AI models, including Vertex AI’s vision, document processing, and speech-to-text APIs, open-source TensorFlow Hub models, or your own custom models.Learn more about unstructured data analysis Pre-configured data solutionsDeploy a preconfigured data warehouse in the Google Cloud consoleDeploy an example data warehouse solution to explore, analyze, and visualize data using BigQuery and Looker Studio. Plus, apply generative AI to summarize the results of the analysis.Deploy in consoleDeploy a Google-recommended analytics lakehouse solutionSummarize large documents with AITutorials, quickstarts, & labsDeploy a preconfigured data warehouse in the Google Cloud consoleDeploy an example data warehouse solution to explore, analyze, and visualize data using BigQuery and Looker Studio. 
Plus, apply generative AI to summarize the results of the analysis.Deploy in consoleDeploy a Google-recommended analytics lakehouse solutionSummarize large documents with AIReal-time analyticsEvent-driven analysisGain a competitive advantage by responding to business events in real time with event-driven analysis. Built-in streaming capabilities automatically ingest streaming data and make it immediately available to query. This allows you to stay agile and make business decisions based on the freshest data. Or use Dataflow to enable fast, simplified streaming data pipelines for a comprehensive solution.Learn more about streaming data into BigQuerySQL-based streaming with continuous queriesReal-time data with BigQuery Pub/Sub subscriptionsReal-time analytics database solutions with BigQuery and BigtableTutorials, quickstarts, & labsEvent-driven analysisGain a competitive advantage by responding to business events in real time with event-driven analysis. Built-in streaming capabilities automatically ingest streaming data and make it immediately available to query. This allows you to stay agile and make business decisions based on the freshest data. Or use Dataflow to enable fast, simplified streaming data pipelines for a comprehensive solution.Learn more about streaming data into BigQuerySQL-based streaming with continuous queriesReal-time data with BigQuery Pub/Sub subscriptionsReal-time analytics database solutions with BigQuery and BigtablePredictive analyticsPredict business outcomes with leading AI/MLPredictive analytics can be used to streamline operations, boost revenue, and mitigate risk. BigQuery ML democratizes the use of ML by empowering data analysts to build and run models using existing business intelligence tools and spreadsheets. Predictive analytics can guide business decision-making across the organization.View analytics design patterns for predictive analytics use casesBuild an e-commerce recommendation systemPredict customer lifetime valueBuild a propensity to purchase solutionTutorials, quickstarts, & labsPredict business outcomes with leading AI/MLPredictive analytics can be used to streamline operations, boost revenue, and mitigate risk. BigQuery ML democratizes the use of ML by empowering data analysts to build and run models using existing business intelligence tools and spreadsheets. Predictive analytics can guide business decision-making across the organization.View analytics design patterns for predictive analytics use casesBuild an e-commerce recommendation systemPredict customer lifetime valueBuild a propensity to purchase solutionLog analyticsAnalyze log dataAnalyze and gain deeper insights into your logging data with BigQuery. You can store, explore, and run queries on generated data from servers, sensors, and other devices simply using GoogleSQL. Additionally, you can analyze log data alongside the rest of your business data for broader analysis all natively within BigQuery. Learn how to analyze logs using BigQueryVideo: How to analyze log data in BigQuerySample SQL queries for log analyticsPinpoint unique elements in dataTutorials, quickstarts, & labsAnalyze log dataAnalyze and gain deeper insights into your logging data with BigQuery. You can store, explore, and run queries on generated data from servers, sensors, and other devices simply using GoogleSQL. Additionally, you can analyze log data alongside the rest of your business data for broader analysis all natively within BigQuery. 
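To make the log analysis workflow above concrete, here is a small sketch that runs a GoogleSQL aggregation over a log table with the google-cloud-bigquery Python client. The table and column names are hypothetical and stand in for whatever schema your ingested logs use.

# Sketch: aggregate error counts per service from a hypothetical log table.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT
  resource_name AS service,
  COUNT(*) AS error_count
FROM `my-project.ops.app_logs`
WHERE severity = 'ERROR'
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY service
ORDER BY error_count DESC
LIMIT 20
"""
for row in client.query(sql).result():
    print(f"{row.service}: {row.error_count}")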
Learn how to analyze logs using BigQueryVideo: How to analyze log data in BigQuerySample SQL queries for log analyticsPinpoint unique elements in dataMarketing analyticsIncrease marketing ROI and performance with data and AIBring the power of Google AI to your marketing data by unifying marketing and business data sources in BigQuery. Get a holistic view of the business, increase marketing ROI and performance using more first-party data, and deliver personalized and targeting marketing at scale with ML/AI built-in. Share insights and performance with Looker Studio or Connected Sheets.Explore data and AI solutions built for marketing use casesTutorials, quickstarts, & labsIncrease marketing ROI and performance with data and AIBring the power of Google AI to your marketing data by unifying marketing and business data sources in BigQuery. Get a holistic view of the business, increase marketing ROI and performance using more first-party data, and deliver personalized and targeting marketing at scale with ML/AI built-in. Share insights and performance with Looker Studio or Connected Sheets.Explore data and AI solutions built for marketing use casesData clean roomsBigQuery data clean rooms for privacy-centric data sharingCreate a low-trust environment for you and your partners to collaborate without copying or moving the underlying data right within BigQuery. This allows you to perform privacy-enhancing transformations in BigQuery SQL interfaces and monitor usage to detect privacy threats on shared data. Benefit from BigQuery scale without needing to manage any infrastructure and built-in BI and AI/ML. Explore more use cases for data clean roomsTutorials, quickstarts, & labsBigQuery data clean rooms for privacy-centric data sharingCreate a low-trust environment for you and your partners to collaborate without copying or moving the underlying data right within BigQuery. This allows you to perform privacy-enhancing transformations in BigQuery SQL interfaces and monitor usage to detect privacy threats on shared data. Benefit from BigQuery scale without needing to manage any infrastructure and built-in BI and AI/ML. Explore more use cases for data clean roomsPricingHow BigQuery pricing worksBigQuery pricing is based on compute (analysis), storage, additional services, and data ingestion and extraction. Loading and exporting data are free.Services and usageSubscription typePrice (USD)Free tierThe BigQuery free tier gives customers 10 GiB storage, up to 1 TiB queries free per month, and other resources.FreeCompute (analysis)On-demandGenerally gives you access to up to 2,000 concurrent slots, shared among all queries in a single project.Starting at$6.25per TiB scanned. First 1 TiB per month is free.Standard editionLow-cost option for standard SQL analysis $0.04 per slot hourEnterprise editionSupports advanced enterprise analytics$0.06per slot hourEnterprise Plus editionSupports mission-critical enterprise analytics$0.10per slot hourStorageActive local storageBased on the uncompressed bytes used in tables or table partitions modified in the last 90 days. Starting at$0.02Per GiB. The first 10 GiB is free each month.Long-term logical storageBased on the uncompressed bytes used in tables or table partitions modified for 90 consecutive days. Starting at$0.01Per GiB. The first 10 GiB is free each month.Active physical storageBased on the compressed bytes used in tables or table partitions modified for 90 consecutive days.Starting at$0.04 Per GiB. 
The first 10 GiB is free each month.Long-term physical storageBased on compressed bytes in tables or partitions that have not been modified for 90 consecutive days.Starting at$0.02Per GiB. The first 10 GiB is free each month.Data ingestionBatch loading Import table from Cloud StorageFreeWhen using the shared slot poolStreaming insertsYou are charged for rows that are successfully inserted. Individual rows are calculated using a 1 KB minimum.$0.01per 200 MiBBigQuery Storage Write APIData loaded into BigQuery, is subject to BigQuery storage pricing or Cloud Storage pricing.$0.025per 1 GiB. The first 2 TiB per month are free.Data extractionBatch exportExport table data to Cloud Storage.FreeWhen using the shared slot poolStreaming readsUse the storage Read API to perform streaming reads of table data.Starting at$1.10per TiB readLearn more about BigQuery pricing. View all pricing detailsHow BigQuery pricing worksBigQuery pricing is based on compute (analysis), storage, additional services, and data ingestion and extraction. Loading and exporting data are free.Free tierSubscription typeThe BigQuery free tier gives customers 10 GiB storage, up to 1 TiB queries free per month, and other resources.Price (USD)FreeCompute (analysis)Subscription typeOn-demandGenerally gives you access to up to 2,000 concurrent slots, shared among all queries in a single project.Price (USD)Starting at$6.25per TiB scanned. First 1 TiB per month is free.Standard editionLow-cost option for standard SQL analysis Subscription type$0.04 per slot hourEnterprise editionSupports advanced enterprise analyticsSubscription type$0.06per slot hourEnterprise Plus editionSupports mission-critical enterprise analyticsSubscription type$0.10per slot hourStorageSubscription typeActive local storageBased on the uncompressed bytes used in tables or table partitions modified in the last 90 days. Price (USD)Starting at$0.02Per GiB. The first 10 GiB is free each month.Long-term logical storageBased on the uncompressed bytes used in tables or table partitions modified for 90 consecutive days. Subscription typeStarting at$0.01Per GiB. The first 10 GiB is free each month.Active physical storageBased on the compressed bytes used in tables or table partitions modified for 90 consecutive days.Subscription typeStarting at$0.04 Per GiB. The first 10 GiB is free each month.Long-term physical storageBased on compressed bytes in tables or partitions that have not been modified for 90 consecutive days.Subscription typeStarting at$0.02Per GiB. The first 10 GiB is free each month.Data ingestionSubscription typeBatch loading Import table from Cloud StoragePrice (USD)FreeWhen using the shared slot poolStreaming insertsYou are charged for rows that are successfully inserted. Individual rows are calculated using a 1 KB minimum.Subscription type$0.01per 200 MiBBigQuery Storage Write APIData loaded into BigQuery, is subject to BigQuery storage pricing or Cloud Storage pricing.Subscription type$0.025per 1 GiB. The first 2 TiB per month are free.Data extractionSubscription typeBatch exportExport table data to Cloud Storage.Price (USD)FreeWhen using the shared slot poolStreaming readsUse the storage Read API to perform streaming reads of table data.Subscription typeStarting at$1.10per TiB readLearn more about BigQuery pricing. 
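To make the list prices above concrete, the following back-of-the-envelope sketch combines the on-demand analysis rate ($6.25 per TiB scanned, first 1 TiB free) with the starting active logical storage rate ($0.02 per GiB, first 10 GiB free). It is only an illustration; actual bills depend on region, edition, and any discounts.

# Rough monthly estimate from the list prices above; illustrative only.
TIB = 1024  # GiB per TiB

def monthly_estimate(tib_scanned: float, gib_stored: float) -> float:
    query_cost = max(tib_scanned - 1, 0) * 6.25    # $/TiB after the 1 TiB free tier
    storage_cost = max(gib_stored - 10, 0) * 0.02  # $/GiB after the 10 GiB free tier
    return round(query_cost + storage_cost, 2)

# Example: 5 TiB scanned and 2 TiB (2,048 GiB) stored in a month.
print(monthly_estimate(5, 2 * TIB))  # (5-1)*6.25 + (2048-10)*0.02 = 65.76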
View all pricing detailsPricing calculatorEstimate your monthly BigQuery costs, including region specific pricing and fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptStore 10 GiB of data and run up to 1 TiB of queries for free per month.Try BigQuery in consoleHave a large project?Contact salesLearn how to locate and query public datasets in BigQueryRead guideLearn how to load data into BigQueryRead guideLearn how to create and use tables in BigQueryRead guidePartners & IntegrationWork with a partner with BigQuery expertiseETL and data integrationReverse ETL and MDMBI and data visualizationData governance and securityConnectors and developer toolsMachine learning and advanced analyticsData quality and observabilityConsulting partnersFrom data ingestion to visualization, many partners have integrated their data solutions with BigQuery. Listed above are partner integrations through Google Cloud Ready - BigQuery.Visit our partner directory to learn about these BigQuery partners.FAQExpand allWhat makes BigQuery different from other enterprise data warehouse alternatives?BigQuery is Google Cloud’s fully managed and completely serverless enterprise data warehouse. BigQuery supports all data types, works across clouds, and has built-in machine learning and business intelligence, all within a unified platform. What is an enterprise data warehouse?An enterprise data warehouse is a system used for the analysis and reporting of structured and semi-structured data from multiple sources. Many organizations are moving from traditional data warehouses that are on-premises to cloud data warehouses, which provide more cost savings, scalability, and flexibility.How secure is BigQuery?BigQuery offers robust security, governance, and reliability controls that offer high availability and a 99.99% uptime SLA. Your data is protected with encryption by default and customer-managed encryption keys.How can I get started with BigQuery?There are a few ways to get started with BigQuery. New customers get $300 in free credits to spend on BigQuery. All customers get 10 GB storage and up to 1 TB queries free per month, not charged against their credits. You can get these credits by signing up for the BigQuery free trial. Not ready yet? You can use the BigQuery sandbox without a credit card to see how it works. What is the BigQuery sandbox?The BigQuery sandbox lets you try out BigQuery without a credit card. You stay within BigQuery’s free tier automatically, and you can use the sandbox to run queries and analysis on public datasets to see how it works. You can also bring your own data into the BigQuery sandbox for analysis. There is an option to upgrade to the free trial where new customers get a $300 credit to try BigQuery.What are the most common ways companies use BigQuery?Companies of all sizes use BigQuery to consolidate siloed data into one location so you can perform data analysis and get insights from all of your business data. 
This allows companies to make decisions in real time, streamline business reporting, and incorporate machine learning into data analysis to predict future business opportunities.Other inquiries and supportBilling and troubleshootingAsk the community \ No newline at end of file diff --git a/BigQuery(2).txt b/BigQuery(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..037fedd93e64772893a55aecfbe8694ad2727c73 --- /dev/null +++ b/BigQuery(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/bigquery/pricing +Date Scraped: 2025-02-23T12:11:06.794Z + +Content: +Home Pricing BigQuery Send feedback Stay organized with collections Save and categorize content based on your preferences. BigQuery pricing BigQuery is a serverless data analytics platform. You don't need to provision individual instances or virtual machines to use BigQuery. Instead, BigQuery automatically allocates computing resources as you need them. You can also reserve compute capacity ahead of time in the form of slots, which represent virtual CPUs. The pricing structure of BigQuery reflects this design. Overview of BigQuery pricing BigQuery pricing has two main components: Compute pricing is the cost to process queries, including SQL queries, user-defined functions, scripts, and certain data manipulation language (DML) and data definition language (DDL) statements. Storage pricing is the cost to store data that you load into BigQuery. BigQuery charges for other operations, including using BigQuery Omni, BigQuery ML, BI Engine, and streaming reads and writes. In addition, BigQuery has free operations and a free usage tier. Every project that you create has a billing account attached to it. Any charges incurred by BigQuery jobs run in the project are billed to the attached billing account. BigQuery storage charges are also billed to the attached billing account. You can view BigQuery costs and trends by using the Cloud Billing reports page in the Google Cloud console. Key Point: Pricing models apply to accounts, not individual projects, unless otherwise specified. Compute pricing models BigQuery offers a choice of two compute pricing models for running queries: On-demand pricing (per TiB). With this pricing model, you are charged for the number of bytes processed by each query. The first 1 TiB of query data processed per month is free. Capacity pricing (per slot-hour). With this pricing model, you are charged for compute capacity used to run queries, measured in slots (virtual CPUs) over time. This model takes advantage of BigQuery editions. You can use the BigQuery autoscaler or purchase slot commitments, which are dedicated capacity that is always available for your workloads, at a lower price. For more information about which pricing to choose for your workloads, see Workload management using Reservations. Gemini in BigQuery pricing See Gemini in BigQuery Pricing Overview for information about pricing for Gemini in BigQuery. On-demand compute pricing By default, queries are billed using the on-demand (per TiB) pricing model, where you pay for the data scanned by your queries. With on-demand pricing, you will generally have access to up to 2,000 concurrent slots, shared among all queries in a single project. Periodically, BigQuery will temporarily burst beyond this limit to accelerate smaller queries. In addition, you might occasionally have fewer slots available if there is a high amount of contention for on-demand capacity in a specific location.
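Because on-demand charges are based on bytes scanned, it can help to estimate a query before running it. The sketch below uses the google-cloud-bigquery Python client's dry-run option to report how many bytes a query would process without executing it; the table name is a placeholder.

# Sketch: dry-run a query to see how many bytes it would scan on demand.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT order_id, status FROM `my-project.sales.orders` WHERE status = 'OPEN'",
    job_config=job_config,
)

tib = job.total_bytes_processed / 2**40
print(f"Would process {job.total_bytes_processed} bytes (~{tib:.4f} TiB)")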
On-demand (per TiB) query pricing is as follows: If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Pricing details Note the following regarding on-demand (per TiB) query charges: BigQuery uses a columnar data structure. You're charged according to the total data processed in the columns you select, and the total data per column is calculated based on the types of data in the column. For more information about how your data size is calculated, see Estimate query costs. You are charged for queries run against shared data. The data owner is not charged when their data is accessed. You aren't charged for queries that return an error or for queries that retrieve results from the cache. For procedural language jobs this consideration is provided at a per-statement level. Charges are rounded up to the nearest MB, with a minimum 10 MB data processed per table referenced by the query, and with a minimum 10 MB data processed per query. Canceling a running query job might incur charges up to the full cost for the query if you let the query run to completion. When you run a query, you're charged according to the data processed in the columns you select, even if you set an explicit LIMIT on the results. Partitioning and clustering your tables can help reduce the amount of data processed by queries. As a best practice, use partitioning and clustering whenever possible. On-demand (per TiB) pricing is referred to as analysis pricing on the Google Cloud SKUs page. When you run a query against a clustered table, and the query includes a filter on the clustered columns, BigQuery uses the filter expression to prune the blocks scanned by the query. This can reduce the number of scanned bytes. BigQuery provides cost control mechanisms that enable you to cap your query costs. You can set: User-level and project-level custom cost controls The maximum bytes billed by a query For detailed examples of how to calculate the number of bytes processed, see Query size calculation. Capacity compute pricing BigQuery offers a capacity-based compute pricing model for customers who need additional capacity or prefer a predictable cost for query workloads rather than the on-demand price (per TiB of data processed). The capacity compute model offers pay-as-you-go pricing (with autoscaling) and optional one year and three year commitments that provide discounted prices. You pay for query processing capacity, measured in slots (virtual CPUs) over time. To enable capacity pricing, use BigQuery reservations. BigQuery slot capacity: is available in 3 editions: Standard, Enterprise, and Enterprise Plus. applies to query costs, including BigQuery ML, DML, and DDL statements. does not apply to storage costs or BI Engine costs. does not apply to streaming inserts and using the BigQuery Storage API. can leverage the BigQuery autoscaler. is billed per second with a one minute minimum. Optional BigQuery slot commitments: are available for one or three year periods. are available in Enterprise and Enterprise Plus editions. are regional capacity. Commitments in one region or multi-region cannot be used in another region or multi-region and cannot be moved. can be shared across your entire organization. There is no need to buy slot commitments for every project. are offered with a 50-slot minimum and increments of 50 slots. are automatically renewed unless set to cancel at the end of the period. Standard Edition The following table shows the cost of slots in Standard edition. 
Enterprise Edition The following table shows the cost of slots in Enterprise edition. Enterprise Plus Edition The following table shows the cost of slots in Enterprise Plus edition. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Storage pricing Storage pricing is the cost to store data that you load into BigQuery. You pay for active storage and long-term storage. Active storage includes any table or table partition that has been modified in the last 90 days. Long-term storage includes any table or table partition that has not been modified for 90 consecutive days. The price of storage for that table automatically drops by approximately 50%. There is no difference in performance, durability, or availability between active and long-term storage. Metadata storage includes storage for logical and physical metadata for datasets, tables, partitions, models, and functions stored in the BigQuery metastore. The first 10 GiB of storage per month is free. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. See the physical storage documentation for eligibility criteria. Pricing details Storage pricing is based on the amount of data stored in your tables, temporary session tables, and temporary multi-statement tables. There are no storage charges for temporary cached query result tables. The size of the data is calculated based on the data types of the individual columns. For a detailed explanation of how data size is calculated, see Data size calculation. Storage pricing is prorated per MiB, per second. For example, if you are using active logical storage in us-central1: For 512 MiB for half a month, you pay $0.00575 USD For 100 GiB for half a month, you pay $1.15 USD For 1 TiB for a full month, you pay $23.552 USD Storage usage is calculated in gibibyte months (GiB months), where 1 GiB is 2^30 bytes (1,024 MiB). Similarly, 1 tebibyte (TiB) is 2^40 bytes (1,024 GiB). The final usage value is the product of data size in gibibytes and storage use time in months. If the data in a table is not modified or deleted within 90 consecutive days, it is billed at the long-term storage rate. There is no degradation of performance, durability, availability, or any other functionality when a table is considered long-term storage. Each partition of a partitioned table is considered separately for long-term storage pricing. If a partition hasn't been modified in the last 90 days, the data in that partition is considered long-term storage and is charged at the discounted price. If the table is edited, the price reverts to the regular storage pricing, and the 90-day timer starts counting from zero. Anything that modifies the data in a table resets the timer, including: Action Details Loading data into a table Any load or query job that appends data to a destination table or overwrites a destination table. Copying data into a table Any copy job that appends data to a destination table or overwrites a destination table. Writing query results to a table Any query job that appends data to a destination table or overwrites a destination table. Using data manipulation language (DML) Using a DML statement to modify table data. Using data definition language (DDL) Using a CREATE OR REPLACE TABLE statement to replace a table. Streaming data into the table Ingesting data using the tabledata.insertAll API call.
All other actions do not reset the timer, including the following: Querying a table Creating a view that queries a table Exporting data from a table Copying a table (to another destination table) Patching or updating a table resource For tables that reach the 90-day threshold during a billing cycle, the price is prorated accordingly. Long-term storage pricing applies only to BigQuery storage, not to data stored in external data sources such as Bigtable, Cloud Storage, and Google Drive. Data size calculation When you load data into BigQuery or query the data, you're charged according to the data size. Data size is calculated based on the size of each column's data type. The size of your stored data and the size of the data processed by your queries is calculated in gibibytes (GiB), where 1 GiB is 2^30 bytes (1,024 MiB). Similarly, 1 tebibyte (TiB) is 2^40 bytes (1,024 GiB). For more information, see Data type sizes. Metadata storage free tier BigQuery provides a usage-based free tier for metadata storage, where total_metadata_storage includes any data stored in the BigQuery metastore, total_bigquery_data_storage includes any data stored in BigQuery data storage, and metadata (%) = total_metadata_storage / total_bigquery_data_storage. If metadata (%) <= 2%, it is within the free tier and is not charged. If metadata (%) > 2%, the amount above the free tier is charged at the rate provided in the table above. The metadata free tier allowance is calculated at a per-project level. Data Transfer Service pricing The BigQuery Data Transfer Service charges monthly on a prorated basis. You are charged as follows: Data source Monthly charge (prorated) Notes Campaign Manager No charge. BigQuery Quotas and limits apply. 1 Cloud Storage No charge. BigQuery Quotas and limits apply. 1 Amazon S3 No charge. BigQuery Quotas and limits apply. 1,2,3 Google Ads No charge. BigQuery Quotas and limits apply. 1 Google Ad Manager No charge. BigQuery Quotas and limits apply. 1 Google Merchant Center No charge. BigQuery Quotas and limits apply. 1 Google Play $25 per unique Package Name in the Installs_country table. 1 Search Ads 360 No charge. BigQuery Quotas and limits apply. 1 YouTube Channel No charge. BigQuery Quotas and limits apply. 1 YouTube Content Owner No charge. BigQuery Quotas and limits apply. 1 Data warehouse Monthly charge (prorated) Notes Teradata No charge. BigQuery Quotas and limits apply. 1, 2, 3, 4 Amazon Redshift No charge. BigQuery Quotas and limits apply. 1, 2, 3 Third-party Connectors Costs apply See 5 for more details Notes on transfer pricing All transfers 1. After data is transferred to BigQuery, standard BigQuery storage and query pricing applies. Migrations from other platforms 2. Extraction, uploading to a Cloud Storage bucket, and loading data into BigQuery are free. 3. Costs can be incurred outside of Google Cloud by using the BigQuery Data Transfer Service, such as AWS or Azure data transfer charges. Teradata migrations 4. Data is not automatically deleted from your Cloud Storage bucket after it is uploaded to BigQuery. Consider deleting the data from your Cloud Storage bucket to avoid additional storage costs. See Cloud Storage pricing. Third-party Connectors 5. Costs apply for connectors provided by third-party partners. The pricing model differs for different partners and connectors. For more pricing details, refer to individual connectors when enrolling in Marketplace.
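As a small illustration of the metadata storage free tier rule described above, the following sketch computes the billable portion of metadata storage from made-up numbers; only the metadata above 2% of BigQuery data storage is charged.

# Illustration of the 2% metadata free tier rule, with made-up numbers.
def billable_metadata_gib(total_metadata_gib: float, total_data_gib: float) -> float:
    free_allowance = 0.02 * total_data_gib  # 2% of BigQuery data storage is free
    return max(total_metadata_gib - free_allowance, 0.0)

# 30 GiB of metastore metadata against 1,000 GiB of BigQuery data storage:
# the first 20 GiB (2%) is free, so 10 GiB is billable.
print(billable_metadata_gib(30, 1000))  # 10.0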
Google Play Package Name Every Android app has a unique application ID that looks like a Java package name, such as com.example.myapp. The Installs report contains a column of "Package Name". The number of unique package names is used for calculating usage of transfers. Each transfer you create generates one or more runs per day. Package names are only counted on the day a transfer run completes. For example, if a transfer run begins on July 14th but completes on July 15th, the package names are counted on July 15th. If a unique package name is encountered in more than one transfer run on a particular day, it is counted only once. Package names are counted separately for different transfer configurations. If a unique package name is encountered in runs for two separate transfer configurations, the package name is counted twice. If a package name appeared every day for an entire month, you would be charged the full $25 for that month. Otherwise, if it appeared for a part of the month, the charge would be prorated. Example#1: If we sync for 1 application - com.smule.singandroid, will it cost us $25 per month + storage price for BigQuery? The answer is $25 per month (prorated) + storage/querying costs from BigQuery. Example#2: If we sync all historic data (for 10 years), will we be charged for 120 months or for 1 month, because we transferred them at once? The answer is still $25 per month (prorated) + storage/querying costs from BigQuery, since we charge $25 per unique Package Name in the Installs_country table, regardless of how many years the historic data goes back to for that unique Package Name. BigQuery Omni pricing BigQuery Omni offers the following pricing models depending on your workloads and needs. On-Demand compute pricing Similar to BigQuery on-demand analysis model, BigQuery Omni queries, by default are billed using the on-demand (per TiB) pricing model, where you pay for the data scanned by your queries. With on-demand pricing, you will generally have access to a large pool of concurrent slots, shared among all queries in a single project. Periodically, BigQuery Omni will temporarily burst beyond this limit to accelerate smaller queries. In addition, you might occasionally have fewer slots available if there is a high amount of contention for on-demand capacity in a specific location. BigQuery Omni on-demand (per TiB) query pricing is as follows: Region Price per TiB AWS North Virginia (aws-us-east-1) $7.82 Azure North Virginia (azure-eastus2) $9.13 AWS Seoul (aws-ap-northeast-2) $10.00 AWS Oregon (aws-us-west-2) $7.82 AWS Ireland (aws-eu-west-1) $8.60 AWS Sydney (aws-ap-southeast-2) $10.55 AWS Frankfurt (aws-eu-central-1) $10.16 If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Pricing details The details and limitations are similar to BigQuery analysis pricing. Note the following regarding on-demand (per TiB) query charges: BigQuery uses a columnar data structure. You're charged according to the total data processed in the columns you select, and the total data per column is calculated based on the types of data in the column. For more information about how your data size is calculated, see Data size calculation. You aren't charged for queries that return an error or for queries that retrieve results from the cache. For procedural language jobs this consideration is provided at a per-statement level. 
Charges are rounded up to the nearest MB, with a minimum 10 MB data processed per table referenced by the query, and with a minimum 10 MB data processed per query. Canceling a running query job might incur charges up to the full cost for the query if you let the query run to completion. When you run a query, you're charged according to the data processed in the columns you select, even if you set an explicit LIMIT on the results. Partitioning and clustering your tables can help reduce the amount of data processed by queries. As a best practice, use partitioning and clustering whenever possible. On-demand (per TiB) pricing is referred to as analysis pricing on the Google Cloud SKUs page. When you run a query against a clustered table, and the query includes a filter on the clustered columns, BigQuery uses the filter expression to prune the blocks scanned by the query. This can reduce the number of scanned bytes. BigQuery provides cost control mechanisms that enable you to cap your query costs. You can set: User-level and project-level custom cost controls. The maximum bytes billed by a query BigQuery Omni with editions BigQuery Omni regions support BigQuery editions. At present only Enterprise Edition is supported in Omni regions The following table shows the cost of slots in Omni regions AWS North Virginia (aws-us-east-1) Commitment model Hourly cost Number of slots PAYG (no commitment) $7.50 (billed per second with a 1 minute minimum) 100 1 yr commit $6 (billed for 1 year) 100 3 yr commit $4.50 (billed for 3 years) 100 Azure North Virginia (azure-eastus2) Commitment model Hourly cost Number of slots PAYG (no commitment) $8.80 (billed per second with a 1 minute minimum) 100 1 yr commit $7 (billed for 1 year) 100 3 yr commit $5.30 (billed for 3 years) 100 AWS Seoul (aws-ap-northeast-2) Commitment model Hourly cost Number of slots PAYG (no commitment) $9.60 (billed per second with a 1 minute minimum) 100 1 yr commit $7.7 (billed for 1 year) 100 3 yr commit $5.80 (billed for 3 years) 100 AWS Oregon (aws-us-west-2) Commitment model Hourly cost Number of slots PAYG (no commitment) $7.50 (billed per second with a 1 minute minimum) 100 1 yr commit $6.00 (billed for 1 year) 100 3 yr commit $4.50 (billed for 3 years) 100 AWS Ireland (aws-eu-west-1) Commitment model Hourly cost Number of slots PAYG (no commitment) $8.25 (billed per second with a 1 minute minimum) 100 1 yr commit $6.60 (billed for 1 year) 100 3 yr commit $4.95 (billed for 3 years) 100 AWS Sydney (aws-ap-southeast-2) Commitment model Hourly cost Number of slots PAYG (no commitment) $10.13 (billed per second with a 1 minute minimum) 100 1 yr commit $8.10 (billed for 1 year) 100 3 yr commit $6.08 (billed for 3 years) 100 AWS Frankfurt (aws-eu-central-1) Commitment model Hourly cost Number of slots PAYG (no commitment) $9.75 (billed per second with a 1 minute minimum) 100 1 yr commit $7.80 (billed for 1 year) 100 3 yr commit $5.85 (billed for 3 years) 100 Omni Cross Cloud Data Transfer When using Omni’s Cross Cloud capabilities (Cross Cloud Transfer, Create Table as Select, Insert Into Select, Cross Cloud Joins, and Cross Cloud Materialized Views) that involve data moving from AWS or Azure to Google Cloud, there will be additional charges for data transfer. Specifically for Cross-Cloud Materialized Views, Create Table as Select, Insert Into Select, and Cross Cloud Joins there are no charges during Preview. Starting 29 February 2024, these services will be generally available and you will be charged for data transfer. 
You will be charged for data transfer only when using any of the above listed services from an AWS or Azure region to a Google Cloud BigQuery region. You will be charged on a per GiB rate based on the amount of data transferred from AWS or Azure to Google Cloud. SKU Billing model Meter List price Cross-cloud data transfer from AWS North Virginia (aws-us-east-1) to Google Cloud North America usage-based GiB transferred $.09 Cross-cloud data transfer from Azure North Virginia (azure-eastus2) to Google Cloud North America usage-based GiB transferred $.0875 Cross-cloud data transfer from AWS Seoul (aws-ap-northeast-2) to Google Cloud Asia usage-based GiB transferred $.126 Cross-cloud data transfer from AWS Oregon (aws-us-west-2) to Google Cloud North America usage-based GiB transferred $.09 Cross-cloud data transfer from AWS Ireland (aws-eu-west-1) to Google Cloud Europe usage-based GiB transferred $.09 Cross-cloud data transfer from AWS Sydney (aws-ap-southeast-2) to Google Cloud Oceania usage-based GiB transferred $.114 Cross-cloud data transfer from AWS Frankfurt (aws-eu-central-1) to Google Cloud Europe usage-based GiB transferred $.09 Omni Managed Storage When using Omni’s Cross Cloud Materialized Views capability, you will also be charged for creation of local materialized views which is on BigQuery Managed Storage on AWS. You will be charged a per GiB for the amount of physical storage that is used for the local materialized view. Operation Pricing Active physical storage (aws-us-east-1) $0.05 per GiB per month Long-term physical storage (aws-us-east-1) $0.025 per GiB per month Active physical storage (azure-eastus2) $0.05 per GiB per month Long-term physical storage (azure-eastus2) $0.025 per GiB per month Active physical storage (aws-ap-northeast-2) $0.052 per GiB per month Long-term physical storage (aws-ap-northeast-2) $0.026 per GiB per month Active physical storage (aws-us-west-2) $0.04 per GiB per month Long-term physical storage (aws-us-west-2) $0.02 per GiB per month Active physical storage (aws-eu-west-1) $0.044 per GiB per month Long-term physical storage (aws-eu-west-1) $0.022 per GiB per month Active physical storage (aws-ap-southeast-2) $0.052 per GiB per month Long-term physical storage (aws-ap-southeast-2) $0.026 per GiB per month Active physical storage (aws-eu-central-1) $0.052 per GiB per month Long-term physical storage (aws-eu-central-1) $0.026 per GiB per month Data ingestion pricing BigQuery offers two modes of data ingestion: Batch loading. Load source files into one or more BigQuery tables in a single batch operation. Streaming. Stream data one record at a time or in small batches using the BigQuery Storage Write API or the legacy streaming API. For more information about which mode to choose, see Introduction to loading data. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Pricing details By default, you are not charged for batch loading data from Cloud Storage or from local files into BigQuery. Load jobs by default use a shared pool of slots. BigQuery does not make guarantees about the available capacity of this shared pool or the throughput you will see. Alternatively, you can purchase dedicated slots to run load jobs. You are charged capacity-based pricing for dedicated slots. When load jobs are assigned to a reservation, they lose access to the free pool. For more information, see Assignments. Once your data is loaded into BigQuery, it is subject to BigQuery storage pricing. 
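The following sketch shows a batch load job from Cloud Storage using the google-cloud-bigquery Python client; by default such loads run on the shared slot pool at no charge, and the loaded data is then billed under BigQuery storage pricing. The bucket, dataset, and table names are placeholders.

# Sketch: batch-load Parquet files from Cloud Storage into a BigQuery table.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
load_job = client.load_table_from_uri(
    "gs://my-bucket/exports/orders-*.parquet",  # placeholder source files
    "my-project.sales.orders",                  # placeholder destination table
    job_config=job_config,
)
load_job.result()  # wait for the load job to complete

table = client.get_table("my-project.sales.orders")
print(f"Loaded {table.num_rows} rows into {table.full_table_id}")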
If you load data from Cloud Storage, you are charged for storing the data in Cloud Storage. For details, see Data storage on the Cloud Storage pricing page. Data extraction pricing BigQuery offers the following modes of data extraction: Batch export. Use an an extract job to export table data to Cloud Storage. There is no processing charge for exporting data from a BigQuery table using an extract job. Export query results. Use the EXPORT DATA statement to export query results to Cloud Storage, Bigtable, or Spanner. You are billed for the compute cost to process the query statement. Streaming reads. Use the Storage Read API to perform high-throughput reads of table data. You are billed for the amount of data read. Note: You are not charged for data extraction or data transfer when accessing query results in the Google Cloud console, BigQuery API, or any other clients, such as Looker. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Batch export data transfer pricing You are charged for data transfer when you export data in batch from BigQuery to a Cloud Storage bucket or Bigtable table in another region, as follows: Case Example Rate Export within the same location From us-east1 to us-east1 Free Export from BigQuery US multi-region From US multi-region to us-central1 (Iowa) Free Export from BigQuery US multi-region From US multi-region to any region (except us-central1 (Iowa)) See following table Export from BigQuery EU multi-region From EU multi-region to europe-west4 (Netherlands) Free Export from BigQuery EU multi-region From EU multi-region to any region (except europe-west4 (Netherlands)) See following table Export across locations From us-east1 to us-central1 See following table Note: All prices are in $/GiB and all GiB are in physical bytes. Source location Destination location Northern America Europe Asia Indonesia Oceania Middle East Latin America Africa Northern America $0.02/GiB $0.05/GiB $0.08/GiB $0.10/GiB $0.10/GiB $0.11/GiB $0.14/GiB $0.11/GiB Europe $0.05/GiB $0.02/GiB $0.08/GiB $0.10/GiB $0.10/GiB $0.11/GiB $0.14/GiB $0.11/GiB Asia $0.08/GiB $0.08/GiB $0.08/GiB $0.10/GiB $0.10/GiB $0.11/GiB $0.14/GiB $0.11/GiB Indonesia $0.10/GiB $0.10/GiB $0.10/GiB $0.08/GiB $0.08/GiB $0.11/GiB $0.14/GiB $0.14/GiB Oceania $0.10/GiB $0.10/GiB $0.10/GiB $0.08/GiB $0.08/GiB $0.11/GiB $0.14/GiB $0.14/GiB Middle East $0.11/GiB $0.11/GiB $0.11/GiB $0.11/GiB $0.11/GiB $0.08/GiB $0.14/GiB $0.11/GiB Latin America $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB Africa $0.11/GiB $0.11/GiB $0.11/GiB $0.14/GiB $0.14/GiB $0.11/GiB $0.14/GiB $0.11/GiB Storage Read API data transfer within Google Cloud Case Examples Rate Accessing cached query results from temporary tables Temporary tables "anonymous dataset" Free Data reads within the same location From us-east1 to us-east1 Free Data read from a BigQuery multi-region to a different BigQuery location, and both locations are on the same continent. From us to us-east1 From eu to europe-west1 Free Data read between different locations on the same continent (assuming none of the above free cases apply) From us-east1 to northamerica-northeast1 From europe-west1 to europe-north1 $0.01/GiB* Data moves between different continents within Google cloud and neither is Australia. From us to asia From europe-west1 to southamerica-east1 $0.08 per GiB Data moves between different continents within Google cloud and one is Australia. 
From us to australia-southeast1 From australia-southeast1 to europe-west1 $0.15 per GiB If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Storage Read API general network usage Monthly Usage Data Transfer to Worldwide Destinations (excluding Asia & Australia)(per GiB) Data Transfer to Asia Destinations (excluding China, but including Hong Kong)(per GiB) Data Transfer to China Destinations (excluding Hong Kong)(per GiB) Data Transfer to Australia Destinations (per GiB) Data Transfer in 0-1 TiB $0.12 $0.12 $0.19 $0.19 Free 1-10 TiB $0.11 $0.11 $0.18 $0.18 Free 10+ TiB $0.08 $0.08 $0.15 $0.15 Free If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Storage Read API pricing details The Storage Read API has an on-demand pricing model. With on-demand pricing, BigQuery charges for the number of bytes processed (also referred to as bytes read). On-demand pricing is solely based on usage, with a bytes-read free tier of 300 TiB per month for each billing account. Bytes scanned as part of reads from temporary tables are free and do not count against the 300 TiB free tier. This 300 TiB free tier applies only to the bytes-read component; it does not apply to the associated outbound data transfer. Note the following regarding Storage Read API charges: You are charged according to the total amount of data read. The total data read per column is calculated based on the column's data type. For a detailed explanation of how data size is calculated, see Data size calculation. You are charged for any data read in a read session even if a ReadRows call fails. If you cancel a ReadRows call before the end of the stream is reached, you are charged for any data read before the cancellation. Your charges can include data that was read but not returned to you before the cancellation of the ReadRows call. As a best practice, use partitioned and clustered tables whenever possible. You can reduce the amount of data read by using a WHERE clause to prune partitions. For more information, see Querying partitioned tables. When using Interconnect, Cloud Interconnect pricing applies instead of BigQuery Storage Read API general network usage prices. Data replication pricing BigQuery offers the following modes of replicating (copying) data between regions: Cross-region copy. One-time or scheduled copy of table data between regions or multi-regions; see copy datasets or copy tables. Cross-region replication. Ongoing, incremental replication of a dataset between two or more different regions or multi-regions; see cross-region dataset replication. Cross-region Turbo replication. High-performance, ongoing, incremental replication of a dataset between two or more different regions or multi-regions. Available only with managed disaster recovery. Storage for replicated data Replicated data stored in the destination region or multi-region is charged according to BigQuery storage pricing. Data replication data transfer pricing You are charged for data transfer for the volume of data replicated.
The use cases and breakdown of data transfer charges are provided as follows: Case Example Rate Replicate within the same location From us-east1 to us-east1 Free Replicate from BigQuery US multi-region From US multi-region to us-central1 (Iowa) Free Replicate from BigQuery US multi-region From US multi-region to any region (except us-central1 (Iowa)) See following table Replicate from BigQuery EU multi-region From EU multi-region to europe-west4 (Netherlands) Free Replicate from BigQuery EU multi-region From EU multi-region to any region (except europe-west4 (Netherlands)) See following table Replicate across locations From us-east1 to us-central1 See following table Note: All prices are in $/GiB and all GiB are in physical bytes. Source location Destination location Northern America Europe Asia Indonesia Oceania Middle East Latin America Africa Northern America $0.02/GiB $0.05/GiB $0.08/GiB $0.10/GiB $0.10/GiB $0.11/GiB $0.14/GiB $0.11/GiB Europe $0.05/GiB $0.02/GiB $0.08/GiB $0.10/GiB $0.10/GiB $0.11/GiB $0.14/GiB $0.11/GiB Asia $0.08/GiB $0.08/GiB $0.08/GiB $0.10/GiB $0.10/GiB $0.11/GiB $0.14/GiB $0.11/GiB Indonesia $0.10/GiB $0.10/GiB $0.10/GiB $0.08/GiB $0.08/GiB $0.11/GiB $0.14/GiB $0.14/GiB Oceania $0.10/GiB $0.10/GiB $0.10/GiB $0.08/GiB $0.08/GiB $0.11/GiB $0.14/GiB $0.14/GiB Middle East $0.11/GiB $0.11/GiB $0.11/GiB $0.11/GiB $0.11/GiB $0.08/GiB $0.14/GiB $0.11/GiB Latin America $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB $0.14/GiB Africa $0.11/GiB $0.11/GiB $0.11/GiB $0.14/GiB $0.14/GiB $0.11/GiB $0.14/GiB $0.11/GiB Data replication data transfer pricing for Turbo replication Note: All prices are in $/GiB and all GiB are in physical bytes. Source location Destination location Northern America Europe Asia Indonesia Oceania Middle East Latin America Africa Northern America $0.04/GiB $0.10/GiB $0.16/GiB $0.20/GiB $0.20/GiB $0.22/GiB $0.28/GiB $0.22/GiB Europe $0.10/GiB $0.04/GiB $0.16/GiB $0.20/GiB $0.20/GiB $0.22/GiB $0.28/GiB $0.22/GiB Asia $0.16/GiB $0.16/GiB $0.16/GiB $0.20/GiB $0.20/GiB $0.22/GiB $0.28/GiB $0.22/GiB Indonesia $0.20/GiB $0.20/GiB $0.20/GiB $0.16/GiB $0.16/GiB $0.22/GiB $0.28/GiB $0.28/GiB Oceania $0.20/GiB $0.20/GiB $0.20/GiB $0.16/GiB $0.16/GiB $0.22/GiB $0.28/GiB $0.28/GiB Middle East $0.22/GiB $0.22/GiB $0.22/GiB $0.22/GiB $0.22/GiB $0.16/GiB $0.28/GiB $0.22/GiB Latin America $0.28/GiB $0.28/GiB $0.28/GiB $0.28/GiB $0.28/GiB $0.28/GiB $0.28/GiB $0.28/GiB Africa $0.22/GiB $0.22/GiB $0.22/GiB $0.28/GiB $0.28/GiB $0.22/GiB $0.28/GiB $0.22/GiB Metastore access pricing Metastore access charges apply to Hive partitioned tables managed in BigQuery metastore accessed from services outside of BigQuery, such as Dataproc or Compute Engine. Access charges do not apply for accessing any other table metadata, including Apache Iceberg tables and Delta Lake tables managed in BigQuery metastore. When you are reading data from the metastore outside of BigQuery, you will be charged for metadata access via the BigQuery Read API. Similarly, when you are writing data to the metastore outside of BigQuery, you will be charged via the BigQuery Write API. If you are accessing the metastore from BigQuery SQL queries, access charges are included in the query processing charges, whether you are using the on-demand or capacity-based pricing. Notebook runtime pricing BigQuery Studio Notebooks rely on a default notebook runtime that uses a Colab Enterprise runtime to enable notebook code execution. 
The default Colab Enterprise runtime consumes compute and disk (both SSD and PD) resources. Billing for these resources is metered as pay-as-you-go slots as outlined below. These slots are billed under the BigQuery Standard Edition SKU. Notebooks that use a non-default runtime are billed under Vertex SKUs. The default notebook runtime is a Google-provisioned virtual machine (VM) that can run the code in your notebook (IPYNB file). This lets BigQuery customers execute Python scripts, and the runtime is not charged after it becomes idle. *The pay-as-you-go slots are metered in the edition that is being used at the project level. The default notebook allocates PD and SSD in the background to help users install new data science packages and maintain their work beyond the Python code they execute. Once the PD and SSD are released, you will not see charges. BigQuery Studio notebook pricing details: Default runtime configuration may change to improve usability. Details can be found here. Colab Enterprise runtimes shut down after 180 minutes of inactivity by default. This page describes the idle shutdown feature and how to change the default idle shutdown settings or turn the feature off when you create a runtime template. BigQuery ML pricing BigQuery ML models can be classified into two different categories: built-in models and external models. BigQuery ML built-in models are trained within BigQuery, such as linear regression, logistic regression, k-means, matrix factorization, PCA, and time series models (e.g., ARIMA_PLUS). BigQuery ML external models are trained using other Google Cloud services: DNN, boosted tree, and random forest models (which are trained on Vertex AI) and AutoML models (which are trained on the Vertex AI Tables backend). BigQuery ML model training pricing is based on the model type and your usage pattern, as well as your pricing model: editions or on-demand. BigQuery ML prediction and evaluation functions are executed within BigQuery ML for all model types, priced as explained below. BigQuery ML editions pricing BigQuery ML is available in Enterprise and Enterprise Plus Editions for customers who prefer a compute capacity (number of slots) based pricing model over the on-demand (number of bytes processed) model. Customers can use Enterprise or Enterprise Plus reservations to use all features of BigQuery ML. BigQuery ML usage will be included in the BigQuery Editions usage. Reservations to create built-in models BigQuery has three job types for reservation assignment: QUERY, PIPELINE, and ML_EXTERNAL. QUERY assignments, which are used for analytical queries, are also used to run CREATE MODEL queries for BigQuery ML built-in models. Built-in model training and analytical queries share the same pool of resources in their assigned reservations, and have the same behavior regarding being preemptible and using idle slots from other reservations. Reservations to create external models Because external models are trained outside of BigQuery, these workloads are not preemptible. As a result, to ensure other workloads are not impacted, only reservations with an ML_EXTERNAL job type assignment can be used for these external jobs. Reservations workload management describes how to create reservations for external model training jobs. The slot usage per job is calculated to maintain price parity between BigQuery slots and external Google Cloud service costs. Note: You can add an ML_EXTERNAL job type assignment to reservations that you want to use for building BigQuery ML external models.
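As a sketch of what adding such an assignment might look like programmatically, assuming the google-cloud-bigquery-reservation Python client library and that the ML_EXTERNAL job type is exposed in its Assignment.JobType enum; all resource names are placeholders, and the bq command-line tool or the Google Cloud console can be used instead.

```python
# Hypothetical sketch: assign the ML_EXTERNAL job type to an existing reservation
# so that BigQuery ML external model training jobs can run in it.
# Assumes the google-cloud-bigquery-reservation client library; names are placeholders.
from google.cloud import bigquery_reservation_v1 as reservation

client = reservation.ReservationServiceClient()

# Reservation that should accept external (non-preemptible) training jobs.
parent = client.reservation_path("my-admin-project", "US", "ml-external-reservation")

assignment = reservation.Assignment(
    job_type=reservation.Assignment.JobType.ML_EXTERNAL,
    assignee="projects/my-ml-project",
)

created = client.create_assignment(parent=parent, assignment=assignment)
print(created.name)
```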
When slots in the reservation are shared by both QUERY/PIPELINE and ML_EXTERNAL jobs, QUERY/PIPELINE jobs can use the slots only when they are available. Thus, it is important to ensure all critical QUERY and PIPELINE jobs run in their own reservations, as ML_EXTERNAL jobs are not preemptible. ML_EXTERNAL jobs cannot use idle slots from other reservations. On the other hand, if slots in this reservation (with ML_EXTERNAL assignment) are available, jobs from other reservations (with ignore_idle_slots=false) can use them. Note: To create an AutoML Tables model, you need approximately 1000 ML_EXTERNAL slots. The number of slots is subject to change based on Vertex AI Tables backend pricing. Meanwhile, the number of external slots used to train other BigQuery ML external models (such as DNN, boosted tree, and random forest) is dynamically determined in order to scale the training job horizontally. More specifically, the Vertex AI training scale tier (virtual machine configurations for the chief worker, parameter servers, and workers) is determined at runtime based on the training data size and model type. The BigQuery ML service then reserves a number of slots equivalent to the total VM price from your ML_EXTERNAL reservations. BigQuery ML on-demand pricing BigQuery ML pricing for on-demand queries depends on the model type and the type of operation: model creation, model evaluation, model inspection, or model prediction. For the models that support it, hyperparameter tuning is billed at the same rate as model creation. The cost of the model training associated with hyperparameter tuning is the sum of the cost of all executed trials. Note: Matrix factorization models are only available to editions customers or customers with reservations. BigQuery ML on-demand pricing is as follows: If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. 1 The CREATE MODEL statement stops at 50 iterations for iterative models. This applies to both on-demand and editions pricing. 2 For time series models, when auto-arima is enabled for automatic hyperparameter tuning, multiple candidate models are fitted and evaluated during the training phase. In this case, the number of bytes processed by the input SELECT statement is multiplied by the number of candidate models, which can be controlled by the AUTO_ARIMA_MAX_ORDER training option for ARIMA_PLUS and ARIMA_PLUS_XREG. This applies to both on-demand and editions pricing. The following notes apply to time series model creation: For single time series forecasting with auto-arima enabled, when AUTO_ARIMA_MAX_ORDER is (1, 2, 3, 4, 5), the number of candidate models is (6, 12, 20, 30, 42) respectively if non-seasonal d equals one; otherwise, the number of candidate models is (3, 6, 10, 15, 21). For multiple time series forecasting using TIME_SERIES_ID_COL, the charge is for (6, 12, 20, 30, 42) candidate models when AUTO_ARIMA_MAX_ORDER is (1, 2, 3, 4, 5) respectively. Note that this model selection only applies to model creation. For model evaluation, inspection, and prediction, only the selected model is used, with regular query pricing. 3 See BigQuery ML Remote Model Inference for details. BigQuery ML remote model training, inference, and tuning BigQuery ML lets customers create a remote model that targets a Vertex AI foundation model, a Vertex AI online prediction endpoint, or a Cloud AI API, for example, the Cloud Vision API.
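For orientation, here is a minimal sketch of creating and invoking a remote model over a Vertex AI foundation model, run through the google-cloud-bigquery Python client; the project, dataset, connection, and endpoint names are placeholders, and the exact CREATE MODEL options depend on the target endpoint.

```python
# Sketch only: create a remote model over a Vertex AI foundation model and call it.
# Project, dataset, connection, and endpoint names are placeholders; the charges
# described below (BigQuery bytes processed plus the remote endpoint's own pricing) apply.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

client.query("""
    CREATE OR REPLACE MODEL `my-project.my_dataset.text_model`
      REMOTE WITH CONNECTION `my-project.us.my_vertex_connection`
      OPTIONS (ENDPOINT = 'gemini-1.5-flash');  -- placeholder endpoint name
""").result()

rows = client.query("""
    SELECT ml_generate_text_llm_result
    FROM ML.GENERATE_TEXT(
      MODEL `my-project.my_dataset.text_model`,
      (SELECT 'Summarize BigQuery pricing in one sentence.' AS prompt),
      STRUCT(TRUE AS flatten_json_output));
""").result()

for row in rows:
    print(row.ml_generate_text_llm_result)
```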
The pricing for BigQuery ML remote model inference has the following parts: The bytes processed by BigQuery are billed according to standard pricing such as on-demand or editions pricing. In addition, costs are incurred for the remote endpoint as follows: Remote model types Inference functions Pricing Google models hosted on Vertex AI ML.GENERATE_TEXT ML.GENERATE_EMBEDDING Generative AI on Vertex AI pricing Anthropic Claude models enabled on Vertex AI ML.GENERATE_TEXT Generative AI on Vertex AI pricing Open models deployed to Vertex AI ML.GENERATE_TEXT Vertex AI pricing Vertex AI endpoints ML.PREDICT Vertex AI pricing Cloud Natural Language API ML.UNDERSTAND_TEXT Cloud Natural Language API pricing Cloud Translation API ML.TRANSLATE Cloud Translation API pricing Cloud Vision API ML.ANNOTATE_IMAGE Cloud Vision API pricing Document AI API ML.PROCESS_DOCUMENT Document AI API pricing Speech-to-Text API ML.TRANSCRIBE Speech-to-Text API pricing For remote endpoint model pricing, you are billed directly by the above services. You can use the billing labels billing_service = 'bigquery_ml' and bigquery_job_id to filter the exact charges. LLM supervised tuning costs When using supervised tuning with remote models over Vertex AI LLMs, costs are calculated based on the following: The bytes processed from the training data table specified in the AS SELECT clause. These charges are billed from BigQuery to your project. The GPU or TPU usage to tune the LLM. These charges are billed from Vertex AI to your project. For more information, see Vertex AI pricing. BigQuery ML dry run Because of the nature of the underlying algorithms of some model types and differences in billing, the bytes processed are not calculated for some model types until after training is completed; an initial estimate is too complex to calculate up front. BigQuery ML pricing example BigQuery ML charges are not itemized separately on your billing statement. For current models, if you have BigQuery Editions, BigQuery ML charges are included. If you are using on-demand pricing, BigQuery ML charges are included in the BigQuery analysis (query) charges. BigQuery ML jobs that perform inspection, evaluation, and prediction operations incur the same charges as on-demand query jobs. Because CREATE MODEL queries incur different charges, you must calculate CREATE MODEL job costs independently by using the Cloud Logging audit logs. Using the audit logs, you can determine the bytes billed by the BigQuery ML service for each BigQuery ML CREATE MODEL job. Then, multiply the bytes billed by the appropriate cost for CREATE MODEL queries in your regional or multi-regional location. For example, to determine the cost of a query job in the US that includes a BigQuery ML CREATE MODEL statement: Open the Cloud Logging page in the Google Cloud console. Verify that the product is set to BigQuery. Click the drop-down arrow in the "Filter by label or text search" box and choose Convert to advanced filter. This adds the following text to the filter: resource.type="bigquery_resource" Add the following text on line two below the resource.type line: protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.query.statementType="CREATE_MODEL" To the right of the Submit Filter button, choose the appropriate time frame from the drop-down list. For example, choosing Last 24 hours would display BigQuery ML CREATE MODEL jobs completed in the past 24 hours. Click Submit Filter to display the jobs completed in the given time window.
After the data is populated, click View Options and choose Modify custom fields. In the Add custom fields dialog, enter: protoPayload.serviceData.jobCompletedEvent.job.jobStatistics.totalBilledBytes Click Save to update the results. To calculate the charges for the BigQuery ML CREATE MODEL job, multiply the bytes billed by the BigQuery ML on-demand price. In this example, the CREATE MODEL job processed 100873011200 bytes. To calculate the cost of this job in the US multi-regional location, divide the billed bytes by the number of bytes per TiB, and multiply it by the model creation cost: 100873011200/1099511627776 x $312.5 = $28.669 Note: Currently, Cloud logging metrics are not available for BigQuery ML. When available, Cloud logging metrics will allow you to view BigQuery ML bytes billed in graphs with custom aggregations. BI Engine pricing BI Engine accelerates SQL queries by caching BigQuery data in memory. The amount of data stored is constrained by the amount of capacity you purchase. To purchase BI Engine capacity, create a BI Engine reservation in the project where queries will be run. When BI Engine accelerates a query, the query stage that reads table data is free. Subsequent stages depend on the type of BigQuery pricing you're using: For on-demand pricing, stages that use BI Engine are charged for 0 scanned bytes. Subsequent stages will not incur additional on-demand charges. For editions pricing, the first stage consumes no BigQuery reservation slots. Subsequent stages use slots from the BigQuery reservation. BI Engine pricing is as follows: If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Editions commitment bundle When you are using BigQuery capacity compute pricing with BigQuery editions commitments, you are eligible to receive a limited amount of BI Engine capacity as part of your editions price, at no extra cost, as shown in the following chart. To receive BI Engine capacity at no additional cost, follow the instructions to reserve capacity in a project within the same organization as your editions reservation. To ensure a particular project’s BI Engine reservation is discounted toward this bundled capacity, there should be some slots assigned to the project. BI Engine reservation in an 'on-demand analysis' project will not be counted towards the free capacity. Free capacity is shown in your Billing Reports as a normal cost, but it is discounted as a "Spending-Based Discount". Number of slots purchased No-cost, additional BI Engine capacity (GiB) 100 5 500 25 1000 50 1500 75 2000 100 (maximum per organization) Free operations The following BigQuery operations are free of charge in every location. Quotas and limits apply to these operations. If you are copying tables across regions, you are charged for data replication. Operation Details Batch load data You are not charged for batch loading data using the shared slot pool. You can also create a reservation using editions pricing for guaranteed capacity. Once the data is loaded into BigQuery, you are charged for storage. For details, see Batch loading data. Copy data You are not charged for copying a table, but you do incur charges for storing the new table. For more information, see Copying an existing table. Export data You are not charged for exporting data using the shared slot pool, but you do incur charges for storing the data in Cloud Storage. You can also create a reservation using editions pricing for guaranteed capacity. 
When you use the EXPORT DATA SQL statement, you are charged for query processing. For details, see Exporting data. Delete operations You are not charged for deleting datasets or tables, deleting individual table partitions, deleting views, or deleting user-defined functions. Metadata operations You are not charged for list, get, patch, update, and delete calls. Examples include (but are not limited to): listing datasets, listing table data, updating a dataset's access control list, updating a table's description, or listing user-defined functions in a dataset. Metadata caching operations for BigLake tables aren't included in free operations. Free usage tier As part of the Google Cloud Free Tier, BigQuery offers some resources free of charge up to a specific limit. These free usage limits are available during and after the free trial period. If you go over these usage limits and are no longer in the free trial period, you will be charged according to the pricing on this page. You can try BigQuery's free tier in the BigQuery sandbox without a credit card. Resource Monthly free usage limits Details Storage The first 10 GiB per month is free. BigQuery ML models and training data stored in BigQuery are included in the BigQuery storage free tier. Queries (analysis) The first 1 TiB of query data processed per month is free. BigQuery Editions pricing is also available for high-volume customers that prefer a stable, monthly cost. Flat-rate pricing Note: The BigQuery flat-rate pricing model is no longer offered as of July 5, 2023. It is described here for customers who have existing flat-rate commitments. Flat-rate compute pricing When you use the flat-rate compute pricing model, you purchase dedicated query processing capacity, measured in BigQuery slots. Your queries consume this capacity, and you are not billed for bytes processed. If your capacity demands exceed your committed capacity, BigQuery will queue up queries, and you will not be charged additional fees. Flat-rate compute pricing: Applies to query costs, including BigQuery ML, DML, and DDL statements. Does not apply to storage costs or BI Engine costs. Does not apply to streaming inserts or to using the BigQuery Storage API. Is purchased as a regional resource. Slot commitments purchased in one region or multi-region cannot be used in another region or multi-region and cannot be moved. Is available in per-second (flex), monthly, and annual commitments. Can be shared across your entire organization. There is no need to buy slot commitments for every project. Has a 100-slot minimum and is purchased in increments of 100 slots. Is billed per second until you cancel the commitment, which can be done at any time after the commitment end date. Monthly flat-rate commitments The following table shows the cost of your monthly flat-rate slot commitment. For more information, see Monthly commitments. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Annual flat-rate commitments The following table shows the cost of your annual flat-rate slot commitment. For more information, see Annual commitments. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Flex slots: short-term flat-rate commitments Flex slots are a special commitment type: Commitment duration is only 60 seconds. You can cancel flex slots any time thereafter. You are charged only for the seconds your commitment was deployed. Flex slots are subject to capacity availability.
When you attempt to purchase flex slots, success of this purchase is not guaranteed. However, once your commitment purchase is successful, your capacity is guaranteed until you cancel it. For more information, see flex slots. The following table shows the cost of your flex slot commitment. BigQuery Omni flat-rate pricing BigQuery Omni offers flat-rate pricing, which provides a predictable cost for queries. To enable flat-rate pricing, use BigQuery Reservations. When you enroll in flat-rate pricing for BigQuery Omni, you purchase dedicated query processing capacity, measured in slots, on Amazon Web Services or Microsoft Azure. Your queries consume this capacity, and you are not billed for bytes processed. BigQuery Omni flat-rate pricing: Applies to query costs. Does not apply to storage costs. Slot commitments are purchased for a single multi-cloud region. Slots purchased in one region cannot be used in another region. Is available in monthly and annual commitments. Is billed per second until you cancel the commitment, which can be done at any time after the commitment end date. Can be shared across your entire organization. There is no need to buy slot commitments for every project. Has a 100-slot minimum and is purchased in increments of 100 slots. Monthly flat-rate commitments The following table shows the cost of your monthly slot commitment. For more information, see Monthly commitments. Annual flat-rate commitments The following table shows the cost of your annual slot commitment. For more information, see Annual commitments. Flex slots: short-term commitments Flex slots are a special commitment type: Commitment duration is only 60 seconds. You can cancel flex slots any time thereafter. You are charged only for the seconds your commitment was deployed. Flex slots on BigQuery Omni are subject to capacity availability on AWS or Azure. When you attempt to purchase flex slots, success of this purchase is not guaranteed. However, once your commitment purchase is successful, your capacity is guaranteed until you cancel it. For more information, see flex slots. The following table shows the cost of your flex slot commitment. BI Engine flat-rate commitment bundle When you are using BigQuery flat-rate slot commitments, you are eligible to receive a limited amount of BI Engine capacity as part of your flat-rate price, at no extra cost, as shown in the following chart. To receive BI Engine capacity at no additional cost, follow the instructions to reserve capacity in a project within the same organization as your flat-rate reservation. To ensure a particular project's BI Engine reservation is discounted toward this bundled capacity, there should be some slots assigned to the project. A BI Engine reservation in an on-demand compute project doesn't count toward the free capacity. Free capacity is shown in your billing reports as a normal cost, but it is discounted as a "Spending-Based Discount". Number of slots purchased No-cost, additional BI Engine capacity (GiB) 100 5 500 25 1000 50 1500 75 2000 100 (maximum per organization) What's next For information on analyzing billing data using reports, see View your billing reports and cost trends. For information on analyzing your billing data in BigQuery, see Export Cloud Billing data to BigQuery. For information about estimating costs, see Estimating storage and query costs. Read the BigQuery documentation. Get started with BigQuery. Try the Pricing calculator. Learn about BigQuery solutions and use cases.
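As a quick companion to the cost-estimation guidance linked above, here is a minimal sketch of estimating an on-demand query's cost with a dry run through the google-cloud-bigquery Python client; the query is a public-dataset example, and the $6.25-per-TiB figure is the US on-demand list price, so substitute the rate for your own region or pricing model.

```python
# Sketch: estimate on-demand query cost before running it, using a dry run.
# The rate below is the US on-demand list price per TiB; replace it with the
# rate for your region or edition. A dry run processes no data and costs nothing.
from google.cloud import bigquery

ON_DEMAND_USD_PER_TIB = 6.25
BYTES_PER_TIB = 2**40

client = bigquery.Client(project="my-project")
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

job = client.query(
    "SELECT name, SUM(number) AS total "
    "FROM `bigquery-public-data.usa_names.usa_1910_2013` GROUP BY name",
    job_config=job_config,
)

estimated_cost = job.total_bytes_processed / BYTES_PER_TIB * ON_DEMAND_USD_PER_TIB
print(f"{job.total_bytes_processed} bytes processed, about ${estimated_cost:.4f} on demand")
```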
Request a custom quote With Google Cloud's pay-as-you-go pricing, you only pay for the services you use. Connect with our sales team to get a custom quote for your organization. Contact sales \ No newline at end of file diff --git a/BigQuery.txt b/BigQuery.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3db83ce19b1039c59be93aa9a6562c5b6690e36 --- /dev/null +++ b/BigQuery.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/bigquery +Date Scraped: 2025-02-23T12:01:28.165Z + +Content: +Google is a leader, positioned furthest in vision, in the 2024 Gartner Magic Quadrant for Cloud DBMS. Learn more.BigQueryFrom data warehouse to a unified, AI-ready data platformBigQuery is a fully managed, AI-ready data analytics platform that helps you maximize value from your data and is designed to be multi-engine, multi-format, and multi-cloud.Store 10 GiB of data and run up to 1 TiB of queries for free per month.Try it in consoleContact salesProduct highlightsUnified data platform to connect all your data and activate with AIBuilt-in machine learning to create and run models using simple SQLReal-time analytics with streaming and built-in BISign up for our no-cost discovery workshopFeaturesPower your data agents with Gemini in BigQueryGemini in BigQuery provides AI-powered assistive and collaboration features, including code assist, visual data preparation, and intelligent recommendations that help enhance productivity and optimize costs. BigQuery provides a single, unified workspace that includes a SQL, a notebook, and a NL-based canvas interface for data practitioners of various coding skills to simplify analytics workflows from data ingestion and preparation to data exploration and visualization to ML model creation and use.VIDEOLearn how to build data agents with Gemini in BigQuery3:42Bring multiple engines to a single copy of dataBigQuery has multi-engine capabilities that includes SQL and serverless Apache Spark. A unified metastore for shared runtime metadata provides unified security and governance controls across all engines and storage types, and now supports Apache Iceberg. By bringing multiple engines, including SQL, Spark, and Python to a single copy of data and metadata, you can break down data silos and increase efficiency.VIDEOWhat's New with BigQuery at Next46:22Manage all data types and open formatsUse BigQuery to manage all data types across clouds, structured and unstructured, with fine-grained access controls. Support for open table formats gives you the flexibility to use existing open source and legacy tools while getting the benefits of an integrated data platform. BigLake, BigQuery’s storage engine, lets you have a common way to work with data and makes open formats like Apache Iceberg, Delta, and Hudi. Read new research on BigQuery's Evolution toward a Multi-Cloud Lakehouse.VIDEOBuild an open and fully managed lakehouse with BigQuery at Next42:06Built-in machine learningBigQuery ML provides built-in capabilities to create and run ML models for your BigQuery data. You can leverage a broad range of models for predictions, and access the latest Gemini models to derive insights from all data types and unlock generative AI tasks, such as text summarization, text generation, multimodal embeddings, and vector search. 
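To give a flavor of BigQuery ML in practice, here is a minimal sketch that trains a built-in model and runs predictions entirely in SQL through the Python client; the project, dataset, table, and column names are placeholders.

```python
# Sketch: train a built-in BigQuery ML model and predict with it, all in SQL.
# Dataset, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

client.query("""
    CREATE OR REPLACE MODEL `my-project.my_dataset.churn_model`
      OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
    SELECT churned, tenure_months, monthly_spend, support_tickets
    FROM `my-project.my_dataset.customers`;
""").result()

rows = client.query("""
    SELECT customer_id, predicted_churned
    FROM ML.PREDICT(
      MODEL `my-project.my_dataset.churn_model`,
      (SELECT customer_id, tenure_months, monthly_spend, support_tickets
       FROM `my-project.my_dataset.customers`));
""").result()

for row in rows:
    print(row.customer_id, row.predicted_churned)
```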
It increases the model development speed by directly bringing ML to your data and eliminating the need to move data from BigQuery.VIDEOAnalyze data in BigQuery using Gemini models7:10Built-in data governanceData governance is built into BigQuery, including full integration of Dataplex capabilities, such as a unified metadata catalog, data quality, lineage, and profiling. Customers can use rich AI-driven metadata search and discovery capabilities for assets, including dataset schemas, notebooks and reports, public and commercial dataset listings, and more. BigQuery users can also use governance rules to manage policies on BigQuery object tables.Data and AI governance at Next45:30Real-time analytics with streaming data pipelinesUse Managed Service for Apache Kafka to build and run real-time streaming applications. From SQL-based easy streaming with BigQuery continuous queries, popular open-source Kafka platforms, and advanced multimodal data streaming with Dataflow, including support for Iceberg, you can make real-time data and AI a reality.Enterprise capabilities BigQuery continues to build new enterprise capabilities. Cross-region disaster recovery provides managed failover in the unlikely event of a regional disaster as well as data backup and recovery features to help you recover from user errors. BigQuery operational health monitoring provides organization-wide views of your BigQuery operational environment. BigQuery Migration Services provides a comprehensive collection of tools for migrating to BigQuery from legacy or cloud data warehouses.Share insights with built-in business intelligenceWith built-in business intelligence, create and share insights in a few clicks with Looker Studio or build data-rich experiences that go beyond BI with Looker. Analyze billions of rows of live BigQuery data in Google Sheets with familiar tools, like pivot tables, charts, and formulas, to easily derive insights from big data with Connected Sheets. View all featuresHow It WorksBigQuery's serverless architecture lets you use SQL queries to analyze your data. You can store and analyze your data within BigQuery or use BigQuery to assess your data where it lives. To test how it works for yourself, query data—without a credit card—using the BigQuery sandbox.Run sample queryDemo: Solving business challenges with an end-to-end analysis in BigQueryCommon UsesGenerative AIUnlock generative AI use cases with BigQuery and Gemini modelsBuild data pipelines that blend structured data, unstructured data, and generative AI models together to create a new class of analytical applications. BigQuery integrates with Gemini 1.0 Pro using Vertex AI. The Gemini 1.0 Pro model is designed for higher input/output scale and better result quality across a wide range of tasks like text summarization and sentiment analysis. You can now access it using simple SQL statements or BigQuery’s embedded DataFrame API from right inside the BigQuery console.Learn more about BigQuery and Vertex AI integrationsLearn how to get started with Generative AI in BigQueryRead the latest innovations on BigQuery's integrations with Vertex AITutorials, quickstarts, & labsUnlock generative AI use cases with BigQuery and Gemini modelsBuild data pipelines that blend structured data, unstructured data, and generative AI models together to create a new class of analytical applications. BigQuery integrates with Gemini 1.0 Pro using Vertex AI. 
The Gemini 1.0 Pro model is designed for higher input/output scale and better result quality across a wide range of tasks like text summarization and sentiment analysis. You can now access it using simple SQL statements or BigQuery’s embedded DataFrame API from right inside the BigQuery console.Learn more about BigQuery and Vertex AI integrationsLearn how to get started with Generative AI in BigQueryRead the latest innovations on BigQuery's integrations with Vertex AIData warehouse migrationMigrate data warehouses to BigQuerySolve for today’s analytics demands and seamlessly scale your business by moving to Google Cloud’s enterprise data warehouse. Streamline your migration path from Netezza, Oracle, Redshift, Teradata, or Snowflake to BigQuery using the free and fully managed BigQuery Migration Service.Learn about BigQuery Migration Service for a comprehensive data warehouse migrationAmazon Redshift to BigQuery migration guideTeradata to BigQuery migration guideSnowflake to BigQuery migration guideTutorials, quickstarts, & labsMigrate data warehouses to BigQuerySolve for today’s analytics demands and seamlessly scale your business by moving to Google Cloud’s enterprise data warehouse. Streamline your migration path from Netezza, Oracle, Redshift, Teradata, or Snowflake to BigQuery using the free and fully managed BigQuery Migration Service.Learn about BigQuery Migration Service for a comprehensive data warehouse migrationAmazon Redshift to BigQuery migration guideTeradata to BigQuery migration guideSnowflake to BigQuery migration guideTransfer data into BigQueryBring any data into BigQueryMake analytics easier by bringing together data from multiple sources into BigQuery. You can upload data files from local sources, Google Drive, or Cloud Storage buckets, use BigQuery Data Transfer Service (DTS), Cloud Data Fusion plugins, replicate data from relational databases with Datastream for BigQuery, or leverage Google's industry-leading data integration partnerships. Learn about third-party transfersAutomate data movement into BigQuery with DTSDeploy data pipelines into BigQuery with Data FusionTutorials, quickstarts, & labsBring any data into BigQueryMake analytics easier by bringing together data from multiple sources into BigQuery. You can upload data files from local sources, Google Drive, or Cloud Storage buckets, use BigQuery Data Transfer Service (DTS), Cloud Data Fusion plugins, replicate data from relational databases with Datastream for BigQuery, or leverage Google's industry-leading data integration partnerships. Learn about third-party transfersAutomate data movement into BigQuery with DTSDeploy data pipelines into BigQuery with Data FusionUnlock value from all data typesDerive insights from images, documents, and audio files and combine with structured dataUnstructured data represents a large portion of untapped enterprise data. However, it can be challenging to interpret, making it difficult to extract meaningful insights from it. Leveraging the power of BigLake, you can derive insights from images, documents, and audio files using a broad range of AI models, including Vertex AI’s vision, document processing, and speech-to-text APIs, open-source TensorFlow Hub models, or your own custom models.Learn more about unstructured data analysis Tutorials, quickstarts, & labsDerive insights from images, documents, and audio files and combine with structured dataUnstructured data represents a large portion of untapped enterprise data. 
However, it can be challenging to interpret, making it difficult to extract meaningful insights from it. Leveraging the power of BigLake, you can derive insights from images, documents, and audio files using a broad range of AI models, including Vertex AI’s vision, document processing, and speech-to-text APIs, open-source TensorFlow Hub models, or your own custom models.Learn more about unstructured data analysis Pre-configured data solutionsDeploy a preconfigured data warehouse in the Google Cloud consoleDeploy an example data warehouse solution to explore, analyze, and visualize data using BigQuery and Looker Studio. Plus, apply generative AI to summarize the results of the analysis.Deploy in consoleDeploy a Google-recommended analytics lakehouse solutionSummarize large documents with AITutorials, quickstarts, & labsDeploy a preconfigured data warehouse in the Google Cloud consoleDeploy an example data warehouse solution to explore, analyze, and visualize data using BigQuery and Looker Studio. Plus, apply generative AI to summarize the results of the analysis.Deploy in consoleDeploy a Google-recommended analytics lakehouse solutionSummarize large documents with AIReal-time analyticsEvent-driven analysisGain a competitive advantage by responding to business events in real time with event-driven analysis. Built-in streaming capabilities automatically ingest streaming data and make it immediately available to query. This allows you to stay agile and make business decisions based on the freshest data. Or use Dataflow to enable fast, simplified streaming data pipelines for a comprehensive solution.Learn more about streaming data into BigQuerySQL-based streaming with continuous queriesReal-time data with BigQuery Pub/Sub subscriptionsReal-time analytics database solutions with BigQuery and BigtableTutorials, quickstarts, & labsEvent-driven analysisGain a competitive advantage by responding to business events in real time with event-driven analysis. Built-in streaming capabilities automatically ingest streaming data and make it immediately available to query. This allows you to stay agile and make business decisions based on the freshest data. Or use Dataflow to enable fast, simplified streaming data pipelines for a comprehensive solution.Learn more about streaming data into BigQuerySQL-based streaming with continuous queriesReal-time data with BigQuery Pub/Sub subscriptionsReal-time analytics database solutions with BigQuery and BigtablePredictive analyticsPredict business outcomes with leading AI/MLPredictive analytics can be used to streamline operations, boost revenue, and mitigate risk. BigQuery ML democratizes the use of ML by empowering data analysts to build and run models using existing business intelligence tools and spreadsheets. Predictive analytics can guide business decision-making across the organization.View analytics design patterns for predictive analytics use casesBuild an e-commerce recommendation systemPredict customer lifetime valueBuild a propensity to purchase solutionTutorials, quickstarts, & labsPredict business outcomes with leading AI/MLPredictive analytics can be used to streamline operations, boost revenue, and mitigate risk. BigQuery ML democratizes the use of ML by empowering data analysts to build and run models using existing business intelligence tools and spreadsheets. 
Predictive analytics can guide business decision-making across the organization.View analytics design patterns for predictive analytics use casesBuild an e-commerce recommendation systemPredict customer lifetime valueBuild a propensity to purchase solutionLog analyticsAnalyze log dataAnalyze and gain deeper insights into your logging data with BigQuery. You can store, explore, and run queries on generated data from servers, sensors, and other devices simply using GoogleSQL. Additionally, you can analyze log data alongside the rest of your business data for broader analysis all natively within BigQuery. Learn how to analyze logs using BigQueryVideo: How to analyze log data in BigQuerySample SQL queries for log analyticsPinpoint unique elements in dataTutorials, quickstarts, & labsAnalyze log dataAnalyze and gain deeper insights into your logging data with BigQuery. You can store, explore, and run queries on generated data from servers, sensors, and other devices simply using GoogleSQL. Additionally, you can analyze log data alongside the rest of your business data for broader analysis all natively within BigQuery. Learn how to analyze logs using BigQueryVideo: How to analyze log data in BigQuerySample SQL queries for log analyticsPinpoint unique elements in dataMarketing analyticsIncrease marketing ROI and performance with data and AIBring the power of Google AI to your marketing data by unifying marketing and business data sources in BigQuery. Get a holistic view of the business, increase marketing ROI and performance using more first-party data, and deliver personalized and targeting marketing at scale with ML/AI built-in. Share insights and performance with Looker Studio or Connected Sheets.Explore data and AI solutions built for marketing use casesTutorials, quickstarts, & labsIncrease marketing ROI and performance with data and AIBring the power of Google AI to your marketing data by unifying marketing and business data sources in BigQuery. Get a holistic view of the business, increase marketing ROI and performance using more first-party data, and deliver personalized and targeting marketing at scale with ML/AI built-in. Share insights and performance with Looker Studio or Connected Sheets.Explore data and AI solutions built for marketing use casesData clean roomsBigQuery data clean rooms for privacy-centric data sharingCreate a low-trust environment for you and your partners to collaborate without copying or moving the underlying data right within BigQuery. This allows you to perform privacy-enhancing transformations in BigQuery SQL interfaces and monitor usage to detect privacy threats on shared data. Benefit from BigQuery scale without needing to manage any infrastructure and built-in BI and AI/ML. Explore more use cases for data clean roomsTutorials, quickstarts, & labsBigQuery data clean rooms for privacy-centric data sharingCreate a low-trust environment for you and your partners to collaborate without copying or moving the underlying data right within BigQuery. This allows you to perform privacy-enhancing transformations in BigQuery SQL interfaces and monitor usage to detect privacy threats on shared data. Benefit from BigQuery scale without needing to manage any infrastructure and built-in BI and AI/ML. Explore more use cases for data clean roomsPricingHow BigQuery pricing worksBigQuery pricing is based on compute (analysis), storage, additional services, and data ingestion and extraction. 
Loading and exporting data are free.Services and usageSubscription typePrice (USD)Free tierThe BigQuery free tier gives customers 10 GiB storage, up to 1 TiB queries free per month, and other resources.FreeCompute (analysis)On-demandGenerally gives you access to up to 2,000 concurrent slots, shared among all queries in a single project.Starting at$6.25per TiB scanned. First 1 TiB per month is free.Standard editionLow-cost option for standard SQL analysis $0.04 per slot hourEnterprise editionSupports advanced enterprise analytics$0.06per slot hourEnterprise Plus editionSupports mission-critical enterprise analytics$0.10per slot hourStorageActive local storageBased on the uncompressed bytes used in tables or table partitions modified in the last 90 days. Starting at$0.02Per GiB. The first 10 GiB is free each month.Long-term logical storageBased on the uncompressed bytes used in tables or table partitions modified for 90 consecutive days. Starting at$0.01Per GiB. The first 10 GiB is free each month.Active physical storageBased on the compressed bytes used in tables or table partitions modified for 90 consecutive days.Starting at$0.04 Per GiB. The first 10 GiB is free each month.Long-term physical storageBased on compressed bytes in tables or partitions that have not been modified for 90 consecutive days.Starting at$0.02Per GiB. The first 10 GiB is free each month.Data ingestionBatch loading Import table from Cloud StorageFreeWhen using the shared slot poolStreaming insertsYou are charged for rows that are successfully inserted. Individual rows are calculated using a 1 KB minimum.$0.01per 200 MiBBigQuery Storage Write APIData loaded into BigQuery, is subject to BigQuery storage pricing or Cloud Storage pricing.$0.025per 1 GiB. The first 2 TiB per month are free.Data extractionBatch exportExport table data to Cloud Storage.FreeWhen using the shared slot poolStreaming readsUse the storage Read API to perform streaming reads of table data.Starting at$1.10per TiB readLearn more about BigQuery pricing. View all pricing detailsHow BigQuery pricing worksBigQuery pricing is based on compute (analysis), storage, additional services, and data ingestion and extraction. Loading and exporting data are free.Free tierSubscription typeThe BigQuery free tier gives customers 10 GiB storage, up to 1 TiB queries free per month, and other resources.Price (USD)FreeCompute (analysis)Subscription typeOn-demandGenerally gives you access to up to 2,000 concurrent slots, shared among all queries in a single project.Price (USD)Starting at$6.25per TiB scanned. First 1 TiB per month is free.Standard editionLow-cost option for standard SQL analysis Subscription type$0.04 per slot hourEnterprise editionSupports advanced enterprise analyticsSubscription type$0.06per slot hourEnterprise Plus editionSupports mission-critical enterprise analyticsSubscription type$0.10per slot hourStorageSubscription typeActive local storageBased on the uncompressed bytes used in tables or table partitions modified in the last 90 days. Price (USD)Starting at$0.02Per GiB. The first 10 GiB is free each month.Long-term logical storageBased on the uncompressed bytes used in tables or table partitions modified for 90 consecutive days. Subscription typeStarting at$0.01Per GiB. The first 10 GiB is free each month.Active physical storageBased on the compressed bytes used in tables or table partitions modified for 90 consecutive days.Subscription typeStarting at$0.04 Per GiB. 
The first 10 GiB is free each month.Long-term physical storageBased on compressed bytes in tables or partitions that have not been modified for 90 consecutive days.Subscription typeStarting at$0.02Per GiB. The first 10 GiB is free each month.Data ingestionSubscription typeBatch loading Import table from Cloud StoragePrice (USD)FreeWhen using the shared slot poolStreaming insertsYou are charged for rows that are successfully inserted. Individual rows are calculated using a 1 KB minimum.Subscription type$0.01per 200 MiBBigQuery Storage Write APIData loaded into BigQuery, is subject to BigQuery storage pricing or Cloud Storage pricing.Subscription type$0.025per 1 GiB. The first 2 TiB per month are free.Data extractionSubscription typeBatch exportExport table data to Cloud Storage.Price (USD)FreeWhen using the shared slot poolStreaming readsUse the storage Read API to perform streaming reads of table data.Subscription typeStarting at$1.10per TiB readLearn more about BigQuery pricing. View all pricing detailsPricing calculatorEstimate your monthly BigQuery costs, including region specific pricing and fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptStore 10 GiB of data and run up to 1 TiB of queries for free per month.Try BigQuery in consoleHave a large project?Contact salesLearn how to locate and query public datasets in BigQueryRead guideLearn how to load data into BigQueryRead guideLearn how to create and use tables in BigQueryRead guidePartners & IntegrationWork with a partner with BigQuery expertiseETL and data integrationReverse ETL and MDMBI and data visualizationData governance and securityConnectors and developer toolsMachine learning and advanced analyticsData quality and observabilityConsulting partnersFrom data ingestion to visualization, many partners have integrated their data solutions with BigQuery. Listed above are partner integrations through Google Cloud Ready - BigQuery.Visit our partner directory to learn about these BigQuery partners.FAQExpand allWhat makes BigQuery different from other enterprise data warehouse alternatives?BigQuery is Google Cloud’s fully managed and completely serverless enterprise data warehouse. BigQuery supports all data types, works across clouds, and has built-in machine learning and business intelligence, all within a unified platform. What is an enterprise data warehouse?An enterprise data warehouse is a system used for the analysis and reporting of structured and semi-structured data from multiple sources. Many organizations are moving from traditional data warehouses that are on-premises to cloud data warehouses, which provide more cost savings, scalability, and flexibility.How secure is BigQuery?BigQuery offers robust security, governance, and reliability controls that offer high availability and a 99.99% uptime SLA. Your data is protected with encryption by default and customer-managed encryption keys.How can I get started with BigQuery?There are a few ways to get started with BigQuery. New customers get $300 in free credits to spend on BigQuery. All customers get 10 GB storage and up to 1 TB queries free per month, not charged against their credits. You can get these credits by signing up for the BigQuery free trial. Not ready yet? You can use the BigQuery sandbox without a credit card to see how it works. What is the BigQuery sandbox?The BigQuery sandbox lets you try out BigQuery without a credit card. 
You stay within BigQuery’s free tier automatically, and you can use the sandbox to run queries and analysis on public datasets to see how it works. You can also bring your own data into the BigQuery sandbox for analysis. There is an option to upgrade to the free trial where new customers get a $300 credit to try BigQuery.What are the most common ways companies use BigQuery?Companies of all sizes use BigQuery to consolidate siloed data into one location so you can perform data analysis and get insights from all of your business data. This allows companies to make decisions in real time, streamline business reporting, and incorporate machine learning into data analysis to predict future business opportunities.Other inquiries and supportBilling and troubleshootingAsk the community \ No newline at end of file diff --git a/BigQuery_Data_Transfer_Service.txt b/BigQuery_Data_Transfer_Service.txt new file mode 100644 index 0000000000000000000000000000000000000000..c40ab9ab9e49656c99bc12ed7e3651a139b4c952 --- /dev/null +++ b/BigQuery_Data_Transfer_Service.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/bigquery-transfer/docs/introduction +Date Scraped: 2025-02-23T12:06:54.325Z + +Content: +Home BigQuery Documentation Guides Send feedback Stay organized with collections Save and categorize content based on your preferences. What is BigQuery Data Transfer Service? The BigQuery Data Transfer Service automates data movement into BigQuery on a scheduled, managed basis. Your analytics team can lay the foundation for a BigQuery data warehouse without writing a single line of code. You can access the BigQuery Data Transfer Service using the: Google Cloud console bq command-line tool BigQuery Data Transfer Service API After you configure a data transfer, the BigQuery Data Transfer Service automatically loads data into BigQuery on a regular basis. You can also initiate data backfills to recover from any outages or gaps. You cannot use the BigQuery Data Transfer Service to transfer data out of BigQuery. In addition to loading data into BigQuery, BigQuery Data Transfer Service is used for two BigQuery operations: dataset copies and scheduled queries. Note: Subscribe to the BigQuery DTS announcements group to receive announcements related to the BigQuery Data Transfer Service. Supported data sources The BigQuery Data Transfer Service supports loading data from the following data sources: Amazon S3 Amazon Redshift Azure Blob Storage Campaign Manager Cloud Storage Comparison Shopping Service (CSS) Center (Preview) Display & Video 360 Facebook Ads (Preview) Google Ad Manager Google Ads Google Merchant Center (Preview) Google Play MySQL (Preview) Oracle (Preview) PostgreSQL (Preview) Salesforce (Preview) Salesforce Marketing Cloud (Preview) Search Ads 360 ServiceNow (Preview) Teradata YouTube Channel YouTube Content Owner Supported regions Like BigQuery, the BigQuery Data Transfer Service is a multi-regional resource, with many additional single regions available. A BigQuery dataset's locality is specified when you create a destination dataset to store the data transferred by the BigQuery Data Transfer Service. When you set up a transfer, the transfer configuration itself is set to the same location as the destination dataset. The BigQuery Data Transfer Service processes and stages data in the same location as the destination dataset. The data you want to transfer to BigQuery can also have a region.
In most cases, the region where your data is stored and the location of the destination dataset in BigQuery are irrelevant. In other kinds of transfers, the dataset and the source data must be colocated in the same region, or a compatible region. For detailed information about transfers and region compatibility for BigQuery Data Transfer Service, see Dataset locations and transfers. For supported regions for BigQuery, see Dataset locations. Pricing For information on BigQuery Data Transfer Service pricing, see the Pricing page. Once data is transferred to BigQuery, standard BigQuery storage and query pricing applies. Quotas For information on BigQuery Data Transfer Service quotas, see the Quotas and limits page. What's next To learn how to create a transfer, see the documentation for your data source. Send feedback \ No newline at end of file diff --git a/Bigtable.txt b/Bigtable.txt new file mode 100644 index 0000000000000000000000000000000000000000..fd74d40ea7390b139b3dfd16290c87114a213f41 --- /dev/null +++ b/Bigtable.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/bigtable +Date Scraped: 2025-02-23T12:04:02.967Z + +Content: +SQL is here and it comes with 100+ functions from JSON processing to kNN and HLL so you can do even more with your favorite NoSQL database.BigtableScale your latency-sensitive applications with the NoSQL pioneerEnterprise-grade, low-latency NoSQL database service for machine learning, operational analytics, and user-facing applications at scale.Go to consoleContact salesNew customers get $300 in free credits to spend on Bigtable.Product highlightsSupport multi-tenant, mixed operational, and real-time analytical workloads in a single platform with easeHigh-performance reads and writes even in globally distributed deploymentsBuild multicloud and hybrid cloud architectures with the open-source Apache HBase API and bi-directional syncWhat is Bigtable?VideoFeaturesLow latency and high throughputBigtable is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. This makes latency-sensitive workloads like personalization a perfect fit for Bigtable. However its distributed counters, high read and write throughput per dollar makes it also a great fit for clickstream and IoT use cases, and even batch analytics for high-performance computing (HPC) applications, including training ML models.How Segment drives real-time personalization with Bigtable41:18Write and read scalability with no limitsBigtable decouples compute resources from data storage, which makes it possible to transparently adjust processing resources. Each additional node can process reads and writes equally well, providing effortless horizontal scalability. Bigtable optimizes performance by automatically scaling resources to adapt to server traffic, handling the sharding, replication, and query processing.Data model flexibilityBigtable lets your data model evolve organically. Store anything from scalars, JSON, Protocol Buffers, Avro, Arrow to embeddings, images, and dynamically add/remove new columns as needed. Deliver low-latency serving or high-performance batch analytics over raw, unstructured data in a single database.Solana stores the world's largest blockchain ledger on Bigtable1:22From a single zone up to eight regions at onceApps backed by Bigtable can deliver low-latency reads and writes with globally distributed multi-primary configurations, no matter where your users may be. 
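As a minimal illustration of those low-latency point writes and reads, here is a sketch using the google-cloud-bigtable Python client; the project, instance, table, column family, and row key are placeholders, and the same code works whether the instance is zonal or replicated.

```python
# Sketch: a single-row write followed by a point read with the Bigtable Python client.
# Project, instance, table, column family, and row key are placeholders.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("user-profiles")

# Upsert one cell in a single row.
row = table.direct_row(b"user#1234")
row.set_cell("profile", "last_seen_region", "australia-southeast1")
row.commit()

# Point read of the same row key.
result = table.read_row(b"user#1234")
print(result.cells["profile"][b"last_seen_region"][0].value.decode())
```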
Zonal instances are great for cost savings and can be seamlessly scaled up to multi-region deployments with automatic replication. When running a multi-region instance, your database is protected against a regional failure and offers industry-leading 99.999% availability. Easy migration from NoSQL databasesLive migrations enable faster and simpler onboarding by ensuring accurate data migration with reduced effort. HBase Bigtable replication library allows for no-downtime live migrations with import and validation tools to easily load HBase snapshots into Bigtable while Dataflow templates simplify migrations from Cassandra to Bigtable. Zero-downtime HBase to Bigtable migrations12:42High-performance, workload-isolated data processingBigtable Data Boost enables users to run analytical queries, batch ETL processes, train ML models or export data faster without affecting transactional workloads. Data Boost does not require capacity planning or management. It allows directly querying data stored in Google’s distributed storage system, Colossus using on-demand capacity letting users easily handle mixed workloads and share data worry-free.Rich application and tool supportEasily connect to the open source ecosystem with the Apache HBase API. Build data-driven applications faster with seamless integrations with Apache Spark, Hadoop, GKE, Dataflow, Dataproc, Vertex AI Vector Search, and BigQuery. Meet development teams where they are with SQL and client libraries for Java, Go, Python, C#, Node.js, PHP, Ruby, C++, HBase and integration with LangChain.No hidden costsNo IOPS charges, no cost for taking or restoring backups, no disproportionate read/write pricing to impact your budget as your workloads evolve.Automated maintenanceReduce operational costs and improve reliability for any database size. Replication and maintenance are automatic and built-in with zero downtime.Real-time change data capture and eventingUse Bigtable change streams to capture change data from Bigtable databases and integrate it with other systems for analytics, event triggering, and compliance. Case studyMultiScale Health Networks' unified data hub securely hosts patient health information in Bigtable.Read moreEnterprise-grade security and controlsCustomer-managed encryption keys (CMEK) with Cloud External Key Manager support, IAM integration for access and controls, support for VPC-SC, Access Transparency, Access Approval and comprehensive audit logging help ensure your data is protected and complies with regulations. Fine-grained access control lets you authorize access at table, column, or row level.Visualizing Bigtable access patterns at Twitter for optimizing analytics35:20ObservabilityMonitor performance of Bigtable databases with server-side metrics. Analyze usage patterns with Key Visualizer interactive monitoring tool. Use query stats, table stats, and the hot tablets tool for troubleshooting query performance issues and quickly diagnose latency issues with client-side monitoring.Disaster recoveryTake instant, incremental backups of your database cost-effectively and restore on demand. 
Store backups in different regions for additional resilience, and easily restore between instances or projects for test versus production scenarios.

Vertex AI Vector Search integration
Use the Bigtable to Vertex AI Vector Search template to index data in your Bigtable database with Vertex AI and perform a similarity search over vector embeddings with Vertex AI Vector Search.

LangChain integration
Easily build generative AI applications that are more accurate, transparent, and reliable with built-in kNN nearest neighbor vector search (in preview) and LangChain integration. Visit the GitHub repository to learn more.

View all features

How It Works
Bigtable instances provide compute and storage in one or more regions. Each Bigtable cluster can receive both reads and writes. Data is automatically "split" for scalability and replicated between clusters asynchronously, so replication across clusters is eventually consistent.
View documentation
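To make the wide-column model concrete, the following is a minimal sketch of writing and reading a single row with the Python client library (google-cloud-bigtable). The project, instance, table, and column family names are placeholder assumptions, and the table and column family are assumed to already exist.

```python
# Minimal sketch: write one cell and read it back with the Python client.
# "my-project", "my-instance", "user-events", and the "events" column family
# are placeholders; the table and column family must already exist.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("my-instance").table("user-events")

# Row keys are byte strings; encoding user ID and date supports range scans.
row = table.direct_row(b"user123#20250223")
row.set_cell("events", b"page_view", b"/home")
row.commit()

# Read the row back and print the most recent cell value.
data = table.read_row(b"user123#20250223")
print(data.cells["events"][b"page_view"][0].value)  # b"/home"
```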
Common Uses

AdTech and retail: Personalize experiences in real time
Track customer behavior and preferences for personalized ads, news feeds, discount offers, and product or content recommendations. Ingest high-volume event streams and serve recommendations at low latency using a single database that automatically scales and rebalances for best performance. Bring data closer to your customers for the best latencies with multi-region, multi-primary deployments, and reduce risk and downtime with 99.999% availability and zero maintenance.
How Google scales ad personalization with Bigtable
Home Depot delivers personalized experiences with Bigtable
OpenX serves over 150 billion ad requests per day with Bigtable and Memorystore

Data fabric and operational analytics: Consolidate data silos and scale out legacy systems
Ingest and integrate data from multiple databases, streaming sources, and mainframes in bulk or real time using integrations with BigQuery, Dataflow, Cloud Composer, and Cloud Data Fusion to build customer data platforms, operational data stores, digital integration hubs, semantic layers, or data fabrics that support low-latency API access and scalable in-app reporting.
Equifax's data fabric converts data silos into data products with Bigtable
Macy's scales out their mainframe with Bigtable
Reltio's customer data platform (CDP) creates a unified view of data with Bigtable and BigQuery

Cybersecurity: Detect malware and payment fraud, stop spam and scams
Capture fraud signals like user activity, catalog files, malware signatures and blocklists, and unstructured content like product listings and reviews to identify counterfeit goods, spam accounts, scams, compromised hardware, and fraud in real time.
Ravelin scales fraud detection with Bigtable
Learn how Sift improved performance and reduced management overhead with Bigtable
How Stairwell built an ML-powered cybersecurity platform on Bigtable

Media: Deliver media content and engagement analytics
Manage playlists, augmented reality (AR) assets, book, audio, or video catalogs, watch histories, ratings and comments, track viewing progress, and serve personalized content feeds and analytics for content creators and advertisers.
How the YouTube data warehouse relies on Bigtable
Bigtable powers Twitter's ad engagement analytics platform
Spotify serves music recommendations at scale with Bigtable

Time series and IoT: Manage time series data at any scale
From financial time series to smart homes, weather sensors, online gaming logs, factory floor telemetry, connected cars, or event sourcing architectures, ingest large amounts of data without disrupting low-latency serving workloads to support real-time reporting, alerting, and predictive maintenance. Simplify data management with TTL rules, retain data cost-effectively using the storage medium of your choice at industry-leading physical storage prices, and achieve high scan throughput for batch analytics without breaking a sweat.
Learn how Cognite uses Bigtable to manage industrial time series data
Ecobee improved performance by 10x by migrating smart home data to Bigtable
MLB captures every movement of the ball and every player at 30 fps with Bigtable

Machine learning infrastructure: Scale model training and serving
Build feature stores to support low-latency predictions, cache data from GCS for fast access by HPC clusters and ML frameworks, and snapshot model weights during training with high-throughput, low-latency reads and writes, granular access control, and workload isolation.
Tamr and Discord use Bigtable to deliver ML-driven experiences
Credit Karma uses Bigtable to make 60 billion predictions per day
How Go-Jek uses Bigtable as their feature store
Learn how to use Bigtable with popular open-source feature stores.

Pricing
How Bigtable pricing works: Bigtable pricing is based on compute capacity, database storage, backup storage, and network usage. Committed use discounts reduce the price further.
Compute capacity: provisioned as nodes; starting at $0.65 per node per hour.
Data storage (SSD): based on the physical size of tables; each replica is billed separately; recommended for low-latency serving; starting at $0.17 per GB per month.
Data storage (HDD): based on the physical size of tables; each replica is billed separately; starting at $0.026 per GB per month.
Backups: based on the physical size of backups; Bigtable backups are incremental; starting at $0.026 per GB per month.
Network ingress: free.
Network egress within the same region: free.
Network egress between regions: starting at $0.10 per GB.
Replication within the same region: free.
Replication between regions: starting at $0.01 per GB.
Learn more about Bigtable pricing and committed use discounts.
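As a rough illustration of how these list prices combine, the sketch below estimates a monthly bill for a hypothetical replicated instance. The workload sizes are assumptions, the prices are the "starting at" figures shown above, and real bills also depend on region, network egress, and committed use discounts.

```python
# Back-of-the-envelope monthly estimate using the "starting at" list prices
# above. The cluster size and data volumes are illustrative assumptions.
NODE_HOUR = 0.65          # $ per node per hour
SSD_GB_MONTH = 0.17       # $ per GB per month, billed per replica
BACKUP_GB_MONTH = 0.026   # $ per GB per month
HOURS_PER_MONTH = 730

nodes = 6                 # for example, 3 nodes in each of 2 replicated clusters
ssd_gb = 2_000            # physical table size per replica
backup_gb = 500
replicas = 2

compute = nodes * NODE_HOUR * HOURS_PER_MONTH
storage = ssd_gb * SSD_GB_MONTH * replicas
backups = backup_gb * BACKUP_GB_MONTH
print(f"~${compute + storage + backups:,.0f} per month before network and discounts")
```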
Pricing calculator: Estimate your monthly Bigtable costs, including region-specific pricing and fees. Estimate your costs
Custom quote: Connect with our sales team to get a custom quote for your organization. Request a quote

Start your Bigtable proof of concept
Create your first instance: Try Bigtable
Learn how to use Bigtable: View quickstart
Federate queries from BigQuery into Bigtable: Learn more
Migrate from HBase, Cassandra, Aerospike, or DynamoDB to Bigtable: Read the guide
Dive into coding with examples: Use code samples

Business Case
Explore how other businesses built innovative apps to deliver great customer experiences, cut costs, and increase ROI with Bigtable.
Explore how Box modernized their NoSQL databases with Bigtable: Box enhanced scalability and availability while reducing cost to manage, through a seamless migration. Watch the video
Related content:
Sabre delivers travel search results with lower latency and TCO with Bigtable
How Plaid built its real-time analytics platform on BigQuery and Bigtable
Airship achieves over one million writes and 700K reads per second cost-effectively with Bigtable

Benefits and customers
Grow your business with innovative applications that scale limitlessly to meet any demand.
Get best-in-class price-performance and pay for what you use.
Migrate easily from other NoSQL databases and run hybrid or multicloud deployments with open source APIs and migration tools.

Partners & Integration
Take advantage of partners with Bigtable expertise to help you at every step of the journey, from assessments and business cases to migrations and building new apps on Bigtable.
System integrators: Want to get more details about which partner or third-party integration is best for your business? Go to the partner directory.

FAQ
What type of database is Bigtable?
Bigtable is a NoSQL database service, specifically a key-value store that allows for very wide tables with tens of thousands of columns, and hence is also referred to as a wide-column database or a distributed multi-dimensional map. It is a NoSQL database in the "Not only SQL" sense, rather than "zero SQL". Bigtable is most similar to the popular open source projects it inspired, such as Apache HBase and Cassandra, and hence is the most common destination for customers with large data volumes who are looking for a high-performance, cost-effective, fully managed NoSQL database solution on Google Cloud.

Does Bigtable support SQL?
In addition to its key-value APIs, Bigtable also supports SQL queries in three different ways:
For low-latency application development, Bigtable offers a SQL query API that builds upon GoogleSQL, with extensions for the wide-column data model resembling Cassandra Query Language (CQL).
For data science use cases or other kinds of batch processing and ETL, Bigtable supports Spark SQL using its Spark client.
For users who want to do post-hoc exploratory analysis or blend data from multiple sources for batch analytics, Bigtable data can also be accessed from BigQuery. Simply register your Bigtable tables in BigQuery and query them like any other BigQuery table, without any ETL or data duplication.

How do I migrate databases to Bigtable?
Bigtable offers migration tooling that enables faster and simpler onboarding by ensuring accurate data migration with reduced effort. The HBase Bigtable replication library allows for no-downtime live migrations, with import and validation tools to easily load HBase snapshots into Bigtable, while Dataflow templates simplify migrations from Cassandra to Bigtable.

Is Bigtable serverless?
Bigtable storage is billed per GB used, similar to a serverless model. Bigtable also offers linear horizontal scaling and can automatically scale compute resources up and down in response to demand fluctuations, so it doesn't require a long-term capacity commitment for storage or compute. However, pricing for low-latency compute is capacity based and billed per node, not per request; each node can serve up to 17K requests per second. This makes Bigtable pricing more favorable for larger workloads but less ideal for small applications, which may be better suited to Google Cloud databases such as Firestore. For batch data processing, Bigtable offers Data Boost, which bills in serverless processing units (SPUs).

Learn more: Discover more user stories | Ask the community
\ No newline at end of file diff --git a/Block_Storage.txt b/Block_Storage.txt new file mode 100644 index 0000000000000000000000000000000000000000..06c12c1b63b07db9b3ceab38891549d7f251e0f7 --- /dev/null +++ b/Block_Storage.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/block-storage +Date Scraped: 2025-02-23T12:09:50.660Z + +Content:
Hyperdisk Balanced is now available on C3, M3, and H3 VMs. Take advantage of higher flexibility and performance now.

Block storage: High-performance block storage for any use case
Reliable, high-performance block storage for virtual machine instances. Choose from Hyperdisk, Persistent Disk, or Local SSD. Go to console | Contact sales
New customers can try out Google Cloud today with $300 in free credits. Looking for something else?
Browse Google Cloud's various file and object storage options.

Block storage for any use case
Deliver 500K IOPS and 10 GiB/s to power the most demanding applications.
Reduce TCO by up to 40% for workloads like databases with Storage Pools.
Achieve a near-zero RPO with Instant Snapshots.
Video: Learn more about Hyperdisk, Google's next-generation block storage service

Features
Workload-optimize your storage
With Hyperdisk you can provision capacity and performance independently, so you can dynamically tune storage to your workload. Serve the most demanding applications with 10 GiB/s of throughput and 500,000 IOPS from a single Hyperdisk. Learn how to move to a new compute instance.

Lower TCO and increase your utilization with Storage Pools
With Hyperdisk Storage Pools you can go from managing hundreds of disks to a single pool, all while reducing your storage TCO by up to 40%. Provision and manage capacity and performance in aggregate, simplifying cloud operations. Use thin provisioning, data reduction, and capacity pooling to increase utilization.

Instantly protect your workloads
Radically shorten your backup window and recover easily and quickly with Instant Snapshots. Provide cross-zone high availability with low RPO and RTO with Regional Persistent Disk. Take advantage of built-in cross-region disaster recovery, providing geo-redundancy for the most sensitive workloads, with Asynchronous Persistent Disk.

Take advantage of Titanium, our next-generation, tiered offload architecture
Hyperdisk is built on Titanium. Titanium is a system of purpose-built custom silicon security microcontrollers and tiered scale-out offloads that works by offloading processing from the host hardware. This next-generation technology delivers up to 500K IOPS per instance for Hyperdisk Extreme block storage, the highest among leading hyperscalers.

View all features

Block storage options for any workload
Pick the best block storage for your performance needs and budget.
Hyperdisk Throughput: massive scale and high-bandwidth storage. Best for scale-out analytics, data drives for cost-sensitive apps, and cold disks.
Hyperdisk Balanced: general purpose volumes with a 3K IOPS baseline. Best for enterprise applications, databases, and boot disks.
Hyperdisk Extreme: guaranteed high-performance storage. Best for SAP HANA, SQL Server, and Oracle databases.
Hyperdisk Storage Pools: manage Hyperdisk capacity and performance in aggregate, with thin provisioning and data reduction. Best for simplified block storage management, lower TCO, and increased efficiency.
Local SSD: low-latency, high-performance temporary storage. Best for caches, scratch processing space, and temporary data storage.
Persistent Disk: Google Cloud's first-generation enterprise-ready block storage for existing VM families. Best for analytics, databases, boot disks, SAP HANA, Oracle, and virtual desktops.
See details about Hyperdisk capacities and compatible machine types.

How It Works
Hyperdisk is built on Titanium, which is a next-generation, tiered offload architecture. Titanium is a system of purpose-built custom silicon security microcontrollers and tiered scale-out offloads, which improve infrastructure performance, life cycle management, reliability, and security. Titanium underpins Google Cloud's latest compute instance types (C3, A3, H3), Hyperdisk block storage, networking, and more, and is available at no additional cost. Learn more about Titanium.
Diagram: Titanium scale-out offload tier that distributes I/O across Google's massive cluster-level filesystem.

Common Uses
Benefit from the highest performance for SAP HANA and SQL Server: Use Hyperdisk Extreme for workloads with the highest performance needs
Google Cloud Hyperdisk is the next generation of block storage for Compute Engine. It offloads and dynamically scales out storage processing, enabling substantially higher performance, flexibility, and efficiency. With Hyperdisk Extreme, you can take advantage of this new technology to run workloads like SAP HANA and Microsoft SQL Server with the fastest single-volume performance in the cloud. Start using Hyperdisk Extreme.
Serve the most demanding workloads in the cloud with 500K IOPS and 10 GiBps. See performance limits
Provision IOPS separately from capacity, saving 50% vs Persistent Disk Extreme. Learn how Hyperdisk works
Dynamically provision performance: adjust performance for running workloads without downtime. Learn more
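As a sketch of what provisioning IOPS and throughput separately from capacity can look like, the snippet below creates a Hyperdisk Balanced volume with the google-cloud-compute Python client. The project, zone, and performance values are placeholder assumptions, and the allowed ranges depend on the disk type and machine series.

```python
# Sketch: create a Hyperdisk Balanced volume with explicitly provisioned
# performance. Project, zone, capacity, and the IOPS/throughput values are
# illustrative assumptions; check the documented limits for your disk type.
from google.cloud import compute_v1

disk = compute_v1.Disk(
    name="db-data-1",
    size_gb=500,
    type_="zones/us-central1-a/diskTypes/hyperdisk-balanced",
    provisioned_iops=6000,        # tuned independently of capacity
    provisioned_throughput=290,   # MB per second
)

client = compute_v1.DisksClient()
operation = client.insert(
    project="my-project", zone="us-central1-a", disk_resource=disk
)
operation.result()  # block until the disk is created
```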
Lower your TCO and increase efficiency with Hyperdisk Storage Pools: Use thin provisioning and data reduction to lower your TCO
Hyperdisk Storage Pools enable you to purchase and manage capacity and performance in aggregate, making it even easier to plan and manage your workloads. With Storage Pools you purchase the total capacity and performance your workloads will need, then provision Hyperdisk volumes as normal. You only consume the Pool capacity as you write data, enabling you to increase efficiency and lower TCO. Learn more about Storage Pools.
Learn how Hyperdisk Storage Pools can help lower your TCO
Learn how to create, manage, and use Hyperdisk Storage Pools
Start using Storage Pools today

See TCO savings of up to 40% with Kafka and Hadoop: Use Hyperdisk Throughput for scale-out analytics workloads
Hyperdisk Throughput offers up to 3 GB/s of throughput with full read/write parity, 7.5x higher write throughput, and 2.5x higher read throughput compared to Standard PD. This level of performance allows you to effectively run your scale-out analytics workloads, such as Hadoop and Kafka. Start using Hyperdisk Throughput.
Dramatically improved TCO: over 60% savings compared to PD-Standard. Review pricing details
Substantially higher performance: up to 3 GB/s, 7.5x higher than PD-Standard. Learn more about supported values for Hyperdisk volumes
Dynamically provision performance: adjust performance for running workloads without downtime. Learn how to modify a Hyperdisk volume

Choose low-latency ephemeral storage for AI/ML: Use Local SSD for AI and machine learning workloads to download and process data
For AI/ML training and serving workloads that need high-performance, low-latency temporary storage, use Local solid-state drive (Local SSD) disks. Local SSD disks offer superior IOPS and very low latency. Learn more about Local SSD for AI workloads.
Download data quickly from Cloud Storage. Learn how
Maintain high GPU uptime. Learn more
Utilize built-in GKE support for Local SSD. Learn about Local SSD for GKE

Run PostgreSQL, MySQL, and modern databases: Modernize and scale your databases in the cloud, while lowering your TCO
Modernize and scale your databases in the cloud. Hyperdisk lets you provision the capacity and performance you need, while serving the most demanding workloads. Learn more about Hyperdisk Balanced for database workloads.

Modernize virtual desktop infrastructure (VDI) and IT infrastructure: Deliver virtual desktops, servers, and IT infrastructure across the globe while lowering TCO
Leverage the scale and flexibility of the cloud to deliver desktops and IT as a service. See Hyperdisk options.
Explore virtual desktop solutions in Google Cloud
Learn more about Hyperdisk Storage Pools and join the Preview. Talk to sales
Get started with Persistent Disk Balanced today

Configure a workload-level business continuity and data protection strategy: Protect your data and ensure your workloads are highly available
Ensure your data is protected using Instant Snapshots. Avoid outages by ensuring your data is available across zones and regions with replication and regional disks. Learn more about Instant Snapshots.
Understand your block storage data protection options. Compare available options
Back up your workloads with Google Cloud Backup and Disaster Recovery. Read launch announcement
Protect your data with Asynchronous Replication. Learn about this feature
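One simple building block for such a strategy is a scheduled standard snapshot. The sketch below takes a snapshot of an existing disk with the google-cloud-compute Python client; project, zone, and resource names are placeholder assumptions, and Instant Snapshots and Asynchronous Replication are configured through their own resources rather than this call.

```python
# Sketch: create a standard snapshot of an existing disk. Project, zone, and
# resource names are illustrative assumptions.
from google.cloud import compute_v1

disks = compute_v1.DisksClient()
disk = disks.get(project="my-project", zone="us-central1-a", disk="db-data-1")

snapshot = compute_v1.Snapshot(
    name="db-data-1-nightly",
    source_disk=disk.self_link,   # snapshot this disk
)
operation = compute_v1.SnapshotsClient().insert(
    project="my-project", snapshot_resource=snapshot
)
operation.result()
```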
Pricing
Block storage pricing: Pricing for Hyperdisk is based on provisioned capacity, throughput, and IOPS, standalone or in a Storage Pool with Advanced Capacity. Data protection products are priced on capacity and network charges. All prices below are for the Iowa (us-central1) region.
Hyperdisk Throughput: capacity, per GB provisioned: standalone $0.005, Advanced Pool $0.009; throughput, per MB/s provisioned: $0.250.
Hyperdisk Balanced: capacity, per GB provisioned: standalone $0.080, Advanced Pool $0.14; throughput, per MB/s provisioned: $0.040; per IOPS provisioned: $0.005.
Hyperdisk Extreme: capacity, per GB provisioned: $0.125; per IOPS provisioned: $0.032.
Local SSD: provisioned capacity: $0.036.
Persistent Disk Standard: capacity, per GB provisioned: $0.04.
Persistent Disk Balanced: capacity, per GB provisioned: $0.10.
Persistent Disk SSD: capacity, per GB provisioned: $0.17.
Persistent Disk Extreme: capacity, per GB provisioned: $0.125; per IOPS provisioned: $0.064.
Snapshots: standard snapshot storage, per GB: $0.065; archive snapshot storage, per GB: $0.019; archive snapshot retrieval, per GB retrieved: $0.019.
Regional Persistent Disk: regional capacity, per GB provisioned: $0.08.
Asynchronous Persistent Disk: Asynchronous Replication protection, per GB provisioned: $0.04; Asynchronous Replication networking, per GiB: $0.02.
For more detailed pricing information, please view the pricing guide.

Pricing calculator: Estimate your monthly block storage charges. Estimate your costs
Custom quote: Connect with our sales team to get a custom quote for your organization. Request a quote

Start your proof of concept
Get started in the Google Cloud console today: Go to console
Compare block storage options: View comparison
See how Hyperdisk can smash your price-performance goals: Watch video
Migrate a Persistent Disk volume to Hyperdisk: Explore guide
Read Hyperdisk announcement blog: Read blog
\ No newline at end of file diff --git a/Blockchain_Node_Engine.txt b/Blockchain_Node_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..beb94611fbfe1ffd6124f3359e149a9cf8097661 --- /dev/null +++ b/Blockchain_Node_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/blockchain-node-engine +Date Scraped: 2025-02-23T12:10:25.385Z + +Content:
Come build with the Google Cloud Web3 community. Apply now to our new Web3 startup program.

Blockchain Node Engine: Fully managed node hosting for developing on the blockchain
Secure, reliable access to the blockchain with predictable pricing. Get all of the flexibility of a self-hosted node with none of the operational overhead. Go to console
New customers get $300 in free credits to spend on Blockchain Node Engine.

Product highlights
Fully managed nodes so you can focus on building
Dedicated nodes for consistency and configurability
Enterprise-grade infrastructure for Web3
Read more: Introducing Blockchain Node Engine

Features
Streamlined provisioning: Deploying a node is a time-intensive process that involves provisioning a compute instance, installing blockchain clients, and waiting for the node to sync with the network. Blockchain Node Engine makes this process fast and easy by allowing developers to deploy a new node with the console or an API call.
Fully managed nodes: Blockchain Node Engine is a fully managed service, which means that you don't have to worry about availability. Google Cloud actively monitors your nodes and restarts or upgrades them as needed, all while maintaining the same endpoint so you're never disrupted. By reducing the need for a dedicated DevOps team, Blockchain Node Engine lets your team focus on your users instead of your infrastructure.
Dedicated and isolated: Blockchain Node Engine offers you all the flexibility and configurability of a self-managed node without any of the operational overhead. This means you can deploy your node in a variety of regions to meet your performance or compliance requirements. You can also control who can access your node, and how much, by issuing custom API keys with individual rate limits.
Enterprise-grade infrastructure: Blockchain Node Engine brings Google's expertise in reliability and security to Web3. We offer SLAs so that you can build mission-critical workloads on top of our infrastructure. Our RPC endpoints are TLS-enabled and are secured by Cloud Armor to prevent DDoS attacks.
Predictable pricing: With Blockchain Node Engine you pay a flat fee, billed hourly, no matter how many or what kinds of requests you make. You can index Mainnet, including logs and traces, without worrying about exceeding your budget.
View all features

How It Works
To use Blockchain Node Engine, you first create a node that syncs with the blockchain. Once syncing is complete, you can read and write data, and relay transactions, using the RPC and WebSocket endpoints. View documentation
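Because the endpoints speak the standard Ethereum JSON-RPC API, a synced node can be queried with any HTTP client. The sketch below uses Python's requests package; the endpoint URL is a placeholder, and authentication details (such as the API key) are omitted, so check the product documentation for the exact endpoint format.

```python
# Sketch: query a synced node over JSON-RPC with a standard Ethereum method.
# The endpoint URL is a placeholder copied from the node's details page, and
# any required API key is omitted here.
import requests

ENDPOINT = "https://YOUR-NODE-ENDPOINT"
payload = {"jsonrpc": "2.0", "id": 1, "method": "eth_blockNumber", "params": []}

response = requests.post(ENDPOINT, json=payload, timeout=10)
latest_block = int(response.json()["result"], 16)  # result is a hex string
print(f"Node is at block {latest_block}")
```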
Common Uses
Ingesting blockchain data: Use nodes to reliably process blockchain data. Blockchain Node Engine provides different node types that let you ingest both historical and real-time blockchain data. Archive nodes let you index the full history of data, or a subset, into a datastore of your choice. WebSocket endpoints on full nodes let you process new blocks from the tip of the chain.
Fast, reliable, and private transactions: Connect wallets to hosted nodes for increased performance. Shared RPC endpoints can suffer from congestion during peak times. By connecting your wallets to your own dedicated nodes, you get enhanced reliability and consistency. By avoiding third-party RPC services, you also ensure your transactions are privately sent to the blockchain. Finally, Blockchain Node Engine leverages Google's premium network for faster transaction processing.
Smart contract and dApp development: Configure your toolchain for control and reliability. Testing and deploying your smart contracts and dApps requires integration with an RPC node. By using your own node, you eliminate dependencies on third parties. By deploying nodes regionally, colocated with your other workloads, you benefit from enhanced performance.

Pricing
How Blockchain Node Engine pricing works: Pricing for Blockchain Node Engine is based on blockchain and node type.
Ethereum, Full node (best for building dApps or reading real-time data): $0.69 per node hour, about $503.70 per month based on 730 hours/month.
Ethereum, Archive node (best for reading full historical data): $2.74 per node hour, about $2,000.20 per month based on 730 hours/month.
Learn more about Blockchain Node Engine pricing. View pricing details.
Pricing calculator: Estimate your monthly Blockchain Node Engine costs. Estimate your costs
Custom quote: For high-volume pricing, connect with our sales team to get a custom quote. Request a quote

Start your proof of concept
New customers get $300 in free credits: Try Blockchain Node Engine
Learn how to use Blockchain Node Engine: View documentation
Create a blockchain node: Get started
View blockchain node details: Learn more
Read blockchain data from a node: Learn more
\ No newline at end of file diff --git a/Blockchain_RPC.txt b/Blockchain_RPC.txt new file mode 100644 index 0000000000000000000000000000000000000000..3eee1abf78347eb2aaadf55c378857557b8add7d --- /dev/null +++ b/Blockchain_RPC.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/blockchain-rpc +Date Scraped: 2025-02-23T12:10:28.099Z + +Content:
Google Cloud for Web3: See everything we have to offer on the Web3 Portal.

Blockchain RPC: Enterprise-grade RPC for building on the blockchain
A service that allows secure and performant read/write access to multiple blockchains using native APIs. Blockchain RPC is currently available in public Preview. Try it for free. During the public Preview, the product is free to use.

Product highlights
Enterprise-grade reliability
Scales automatically to grow with your business
Fully compatible with EIP-1474 for easy integration

Features
Instant access to the blockchain: With Blockchain RPC, you configure an endpoint and can immediately query the blockchain. There is no need to wait for nodes to be provisioned or synced. You access our fully managed cluster of nodes to build secure and reliable apps with 100 requests per second and 1 million requests per day.
WebSocket support: In addition to standard RPC support over HTTP, Blockchain RPC allows you to subscribe to new events using WebSockets. Your application is notified as soon as data is finalized on chain.
Enterprise-grade infrastructure: Blockchain RPC brings Google's expertise in reliability and security to Web3 so that you can build mission-critical workloads on top of our infrastructure. Our RPC endpoints support over 100 requests/sec and are TLS-enabled.
View all features

How It Works
To use Blockchain RPC, you configure an endpoint using your project ID and API key. You can then read and write data to the Ethereum mainnet or testnet. View documentation
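The WebSocket support described above can typically be exercised with the standard eth_subscribe method, so new-block notifications can be consumed with any WebSocket client. The sketch below uses the third-party websockets package; the endpoint URL is a placeholder and authentication is omitted.

```python
# Sketch: subscribe to new block headers over the WebSocket endpoint using the
# standard eth_subscribe method. The endpoint URL is a placeholder; requires
# the third-party "websockets" package (pip install websockets).
import asyncio
import json
import websockets

WS_ENDPOINT = "wss://YOUR-RPC-ENDPOINT"

async def watch_new_heads():
    async with websockets.connect(WS_ENDPOINT) as ws:
        await ws.send(json.dumps(
            {"jsonrpc": "2.0", "id": 1, "method": "eth_subscribe", "params": ["newHeads"]}
        ))
        print("subscription id:", json.loads(await ws.recv())["result"])
        while True:
            notification = json.loads(await ws.recv())
            head = notification["params"]["result"]
            print("new block:", int(head["number"], 16))

asyncio.run(watch_new_heads())
```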
Common Uses
Ingesting blockchain data: Blockchain RPC provides reliable access to blockchain data. Whether you're developing on testnet or launching your app on mainnet, Blockchain RPC provides you with both historical and real-time data. The service is backed by clusters of archive nodes to support a variety of use cases.
Fast, reliable, and private transactions: Connect wallets to hosted nodes for increased performance. By connecting your wallets to Blockchain RPC, you get the reliability and consistency of a dedicated node. The service leverages Google's premium network for faster transaction processing.
Smart contract and decentralized application development: Building from prototype to production. Blockchain RPC requires no provisioning or syncing, so you can prototype new ideas with no delay. And because the service scales to 100 requests per second with no manual effort, you can seamlessly launch your app to production.

Pricing
How Blockchain RPC pricing works: Pricing for Blockchain RPC is based on throughput and daily volume. For the Preview, the product is available at no cost.
Preview tier: 100 requests per second throughput, 1 million requests per day volume, $0.00 monthly price (USD).

Testnet Faucet: Receive testnet tokens with our faucet for Sepolia and Holesky. Get tokens
Web3 Portal: Discover all the Web3 infrastructure and analytics available to you on Google Cloud. Get started

Start building today
Learn how to use Blockchain RPC: View documentation
Contact our team: Contact us
What is Blockchain RPC: Learn more
Configure an RPC endpoint: Get started
RPC API details: Learn more
\ No newline at end of file diff --git a/Blog.txt b/Blog.txt new file mode 100644 index 0000000000000000000000000000000000000000..a51faa51e109a48315bb656a4de685d844c99915 --- /dev/null +++ b/Blog.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/blog +Date Scraped: 2025-02-23T12:11:14.710Z + +Content:
News, tips, and inspiration to accelerate your digital transformation
Security & Identity: Announcing quantum-safe digital signatures in Cloud KMS. New PQC news: We're introducing quantum-safe digital signatures in Cloud KMS, and we're sharing more on our PQC strategy for Google Cloud encryption products. By Jennifer Fernick • 4-minute read. Read article
Data & AI
AI & Machine Learning: Optimizing image generation pipelines on Google Cloud: A practical guide. By Gopala Dhar • 5-minute read
Data Analytics: How to reduce costs with Managed Service for Apache Kafka: CUDs, compression and more. By Qiqi Wu • 5-minute read
AI & Machine Learning: Unlock Inference-as-a-Service with Cloud Run and Vertex AI. By Jason (Jay) Smith • 4-minute read
Infrastructure
Networking: Introducing Cloud DNS public IP health checks, for more resilient multicloud deployments. By George Prokudin • 3-minute read
Compute: Introducing A4X VMs powered by NVIDIA GB200, now in preview. By George Elissaios • 7-minute read
Systems: Balance of power: A full-stack approach to power and thermal fluctuations in ML infrastructure. By Houle Gan • 6-minute read
Developers & Operators
DevOps & SRE: An SRE's guide to optimizing ML systems with MLOps pipelines. By Max Saltonstall • 5-minute read
Containers & Kubernetes: With MultiKueue, grab GPUs for your GKE cluster, wherever they may be. By Jean-Baptiste Leroy • 5-minute read
Application Development: Announcing Wasm support in Go 1.24. By Cameron Balahan • 2-minute read
Security & Identity
Security & Identity: Why you should check out our Next '25 Security Hub. By Robert Sadowski • 3-minute read
Security & Identity: Cloud CISO Perspectives: New AI, cybercrime reports underscore need for security best practices. By Phil Venables • 6-minute read
Threat Intelligence: Signals of Trouble: Multiple Russia-Aligned Threat Actors Actively Targeting Signal Messenger. By Google Threat Intelligence Group • 15-minute read
Join us at Google Cloud Next '25: April 9-11 in Las Vegas. Register
All stories
Databases: 8 steps to ensuring a smooth Spanner go-live. By Szabolcs Rozsnyai • 4-minute read
Chrome Enterprise: Taking Flight: How the RAF Air Cadets bridged the digital divide with ChromeOS Flex. By Dr. Jill Matterface • 4-minute read
Telecommunications: Rethinking 5G: The cloud imperative. By Eric Parsons • 4-minute read
Training and Certifications: Discover Google Cloud careers and credentials in our new Career Dreamer. By Erin Rifkin • 2-minute read
Data Analytics: How to use gen AI for better data schema handling, data quality, and data generation. By Deb Lee • 9-minute read
Data Analytics: BigQuery ML is now compatible with open-source gen AI models. By Vaibhav Sethi • 3-minute read
Developers & Practitioners: Deep dive into AI with Google Cloud's global generative AI roadshow. By Christina Lin • 4-minute read
Inside Google Cloud: What's new with Google Cloud. By Google Cloud Content & Editorial • 3-minute read
Application Modernization: Accelerate your cloud journey using a well-architected, principles-based framework. By Kumar Dhanagopal • 5-minute read
Databases: Where's the beef? For São Paulo's agricultural secretariat, it's on Cloud SQL for SQL Server. By Michel Martins da Silva • 6-minute read
\ No newline at end of file diff --git a/Build_a_hybrid_render_farm.txt b/Build_a_hybrid_render_farm.txt new file mode 100644 index 0000000000000000000000000000000000000000..6de9b7cf0153b423c961538539cb265f6863fe32 --- /dev/null +++ b/Build_a_hybrid_render_farm.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/building-a-hybrid-render-farm +Date Scraped: 2025-02-23T11:51:06.293Z + +Content:
Build a hybrid render farm
Last reviewed 2024-11-26 UTC
This document provides guidance on extending your existing, on-premises render farm to use compute resources on Google Cloud. The document assumes that you have already implemented a render farm on-premises and are familiar with the basic concepts of visual effects (VFX) and animation pipelines, queue management software, and common software licensing methods.
Overview
Rendering 2D or 3D elements for animation, film, commercials, or video games is both compute- and time-intensive. Rendering these elements requires a substantial investment in hardware and infrastructure, along with a dedicated team of IT professionals to deploy and maintain hardware and software. When an on-premises render farm is at 100-percent utilization, managing jobs can become a challenge. Task priorities and dependencies, restarting dropped frames, and network, disk, and CPU load all become part of the complex equation that you must closely monitor and control, often under tight deadlines.
To manage these jobs, VFX facilities have incorporated queue management software into their pipelines. Queue management software can:
Deploy jobs to on-premises and cloud-based resources.
Manage inter-job dependencies.
Communicate with asset management systems.
Provide users with a user interface and APIs for common languages such as Python (see the sketch after this list).
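Job submission details differ between queue managers (for example, OpenCue, Deadline, and Tractor each have their own Python API), but a submission typically carries the same kinds of information. The sketch below is purely hypothetical; the renderqueue module and all of its fields are stand-ins for illustration, not a real product API.

```python
# Hypothetical sketch only: "renderqueue" is a stand-in for your queue
# manager's Python API; every name and field here is illustrative.
import renderqueue  # hypothetical module

job = renderqueue.Job(
    name="shot_010_lighting_v003",
    command="render -r arnold -s <start> -e <end> /shows/proj/shot_010/scene.ma",
    frames="1001-1100",
    chunk_size=5,                    # frames per task
    pools=["onprem", "gcp-spot"],    # on-premises and cloud-based worker pools
    priority=75,
)
job.depends_on("shot_010_sim_v012")  # inter-job dependency
renderqueue.submit(job)
```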
While some queue management software can deploy jobs to cloud-based workers, you are still responsible for connecting to the cloud, synchronizing assets, choosing a storage framework, managing image templates, and providing your own software licensing. The following options are available for building and managing render pipelines and workflows in a cloud or hybrid cloud environment: If you don't already have on-premises or cloud resources, you can use a software as a service (SaaS) cloud-based render service such as Conductor. If you want to manage your own infrastructure, you can build and deploy the cloud resources described in this document. If you want to build a custom workflow based on your specific requirements, you can work with Google Cloud service integrator partners like Gunpowder or Qodea. This option has the benefit of running all the cloud services in your own secure Google Cloud environment. To help determine the ideal solution for your facility, contact your Google Cloud representative. Note: Production notes appear periodically throughout this document. These notes offer best practices to follow as you build your render farm. Connecting to the cloud Depending on your workload, decide how your facility connects to Google Cloud, whether through a partner ISP, a direct connection, or over the public internet. Connecting over the internet Without any special connectivity, you can connect to Google's network and use our end-to-end security model by accessing Google Cloud services over the internet. Utilities such as the Google Cloud CLI and resources such as the Compute Engine API all use secure authentication, authorization, and encryption to help safeguard your data. Cloud VPN No matter how you're connected, we recommend that you use a virtual private network (VPN) to secure your connection. Cloud VPN helps you securely connect your on-premises network to your Google Virtual Private Cloud (VPC) network through an IPsec VPN connection. Data that is in transit gets encrypted before it passes through one or more VPN tunnels. Learn how to create a VPN for your project. Customer-supplied VPN Although you can set up your own VPN gateway to connect directly with Google, we recommend using Cloud VPN, which offers more flexibility and better integration with Google Cloud. Cloud Interconnect Google supports multiple ways to connect your infrastructure to Google Cloud. These enterprise-grade connections, known collectively as Cloud Interconnect, offer higher availability and lower latency than standard internet connections, along with reduced egress pricing. Cross-Cloud Interconnect lets you establish high-bandwidth, dedicated connectivity to Google Cloud for your data in another cloud. Doing so reduces network complexity, reduces data transfer costs, and enables high-throughput, multicloud render farms. Dedicated Interconnect Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google's network. It delivers connection capacity over the following types of connections: One or more 10 Gbps Ethernet connections, with a maximum of eight connections or 80 Gbps total per interconnect. One or more 100 Gbps Ethernet connections, with a maximum of two connections or 200 Gbps total per interconnect. Dedicated Interconnect traffic is not encrypted. If you need to transmit data across Dedicated Interconnect in a secure manner, you must establish your own VPN connection. 
Cloud VPN is not compatible with Dedicated Interconnect, so you must supply your own VPN in this case. Partner Interconnect Partner Interconnect provides connectivity between your on-premises network and your VPC network through a supported service provider. A Partner Interconnect connection is useful if your infrastructure is in a physical location that can't reach a Dedicated Interconnect colocation facility or if your data needs don't warrant an entire 10-Gbps connection. Other connection types Other ways to connect to Google might be available in your specific location. For help in determining the best and most cost-effective way to connect to Google Cloud, contact your Google Cloud representative. Securing your content To run their content on any public cloud platform, content owners like major Hollywood studios require vendors to comply with security best practices that are defined both internally and by organizations such as the MPAA. Google Cloud offers zero-trust security models that are built into products like Google Workspace, Chrome Enterprise Premium, and BeyondProd. Each studio has different requirements for securing rendering workloads. You can find security whitepapers and compliance documentation at cloud.google.com/security. If you have questions about the security compliance audit process, contact your Google Cloud representative. Organizing your projects Projects are a core organizational component of Google Cloud. In your facility, you can organize jobs under their own project or break them apart into multiple projects. For example, you might want to create separate projects for the previsualization, research and development, and production phases of a film. Projects establish an isolation boundary for both network data and project administration. However, you can share networks across projects with Shared VPC, which provides separate projects with access to common resources. Production notes: Create a Shared VPC host project that contains resources with all your production tools. You can designate all projects that are created under your organization as Shared VPC service projects. This designation means that any project in your organization can access the same libraries, scripts, and software that the host project provides. The Organization resource You can manage projects under an Organization resource, which you might have established already. Migrating all your projects into an organization provides a number of benefits. Production notes: Designate production managers as owners of their individual projects and studio management as owners of the Organization resource. Defining access to resources Projects require secure access to resources coupled with restrictions on where users or services are permitted to operate. To help you define access, Google Cloud offers Identity and Access Management (IAM), which you can use to manage access control by defining which roles have what levels of access to which resources. Production notes: To restrict users' access to only the resources that are necessary to perform specific tasks based on their role, implement the principle of least privilege both on premises and in the cloud. For example, consider a render worker, which is a virtual machine (VM) that you can deploy from a predefined instance template that uses your custom image. The render worker that is running under a service account can read from Cloud Storage and write to attached storage, such as a cloud filer or persistent disk. 
However, you don't need to add individual artists to Google Cloud projects at all, because they don't need direct access to cloud resources. You can assign roles to render wranglers or project administrators who have access to all Compute Engine resources, which permits them to perform functions on resources that are inaccessible to other users. Define a policy to determine which roles can access which types of resources in your organization. The following list shows how typical production tasks map to IAM roles in Google Cloud (production task: role names, resource types):
Studio manager: resourcemanager.organizationAdmin, on the Organization and Project resources.
Production manager: owner, editor, on the Project resource.
Render wrangler: compute.admin and iam.serviceAccountActor, on the Project resource.
Queue management account: compute.admin and iam.serviceAccountActor, on the Organization and Project resources.
Individual artist: no access; not applicable.
Access scopes
Access scopes offer you a way to control the permissions of a running instance no matter who is logged in. You can specify scopes when you create an instance yourself or when your queue management software deploys resources from an instance template. Scopes take precedence over the IAM permissions of an individual user or service account. This precedence means that an access scope can prevent a project administrator from signing in to an instance to delete a storage bucket or change a firewall setting.
Production notes: By default, instances can read but not write to Cloud Storage. If your render pipeline writes finished renders back to Cloud Storage, add the scope devstorage.read_write to your instance at the time of creation.
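For illustration, the sketch below creates a render worker whose access scope includes devstorage.read_write, using the google-cloud-compute Python client. The project, zone, machine type, image, and service account are placeholder assumptions; in practice your queue management software would typically create workers from an instance template instead.

```python
# Sketch: create a render worker that can write finished frames to
# Cloud Storage (devstorage.read_write scope). All names are illustrative.
from google.cloud import compute_v1

instance = compute_v1.Instance(
    name="render-worker-001",
    machine_type="zones/us-central1-a/machineTypes/c2-standard-16",
    disks=[compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/my-project/global/images/render-worker-image",
        ),
    )],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    service_accounts=[compute_v1.ServiceAccount(
        email="render-worker@my-project.iam.gserviceaccount.com",
        scopes=["https://www.googleapis.com/auth/devstorage.read_write"],
    )],
)

operation = compute_v1.InstancesClient().insert(
    project="my-project", zone="us-central1-a", instance_resource=instance
)
operation.result()
```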
License server Cloud-based licensing Usage based Checked out only when a process runs on a cloud instance. When the process finishes or terminates, the license is released. Cloud-based license server Uptime based Checked out while an instance is active and running. When the instance is stopped or deleted, the license is released. Cloud-based license server Using instance-based licensing Some software programs or plugins are licensed directly to the hardware on which they run. This approach to licensing can present a problem in the cloud, where hardware identifiers such as MAC or IP addresses are assigned dynamically. MAC addresses When they are created, instances are assigned a MAC address that is retained so long as the instance is not deleted. You can stop or restart an instance, and the MAC address will be retained. You can use this MAC address for license creation and validation until the instance is deleted. Assigning a static IP address When you create an instance, it is assigned an internal and, optionally, an external IP address. To retain an instance's external IP address, you can reserve a static IP address and assign it to your instance. This IP address will be reserved only for this instance. Because static IP addresses are a project-based resource, they are subject to regional quotas. You can also assign an internal IP address when you create an instance, which is helpful if you want the internal IP addresses of a group of instances to fall within the same range. Hardware dongles Older software might still be licensed through a dongle, a hardware key that is programmed with a product license. Most software companies have stopped using hardware dongles, but some users might have legacy software that is keyed to one of these devices. If you encounter this problem, contact the software manufacturer to see if they can provide you with an updated license for your particular software. If the software manufacturer cannot provide such a license, you could implement a network-attached USB hub or USB over IP solution. Using a license server Most modern software offers a floating license option. This option makes the most sense in a cloud environment, but it requires stronger license management and access control to prevent overconsumption of a limited number of licenses. To help avoid exceeding your license capacity, you can as part of your job queue process choose which licenses to use and control the number of jobs that use licenses. On-premises license server You can use your existing, on-premises license server to provide licenses to instances that are running in the cloud. If you choose this method, you must provide a way for your render workers to communicate with your on-premises network, either through a VPN or some other secure connection. Cloud-based license server In the cloud, you can run a license server that serves instances in your project or even across projects by using Shared VPC. Floating licenses are sometimes linked to a hardware MAC address, so a small, long-running instance with a static IP address can easily serve licenses to many render instances. Hybrid license server Some software can use multiple license servers in a prioritized order. For example, a renderer might query the number of licenses that are available from an on-premises server, and if none are available, use a cloud-based license server. This strategy can help maximize your use of permanent licenses before you check out other license types. 
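As a sketch of running such a license server in the cloud—the region, subnet, and IP address below are placeholders, not prescribed values—you might reserve a static internal IP address and attach it to a small, long-running instance:
# Reserve a static internal address in the render subnet
gcloud compute addresses create license-server-ip \
    --region=us-west2 --subnet=render-subnet --addresses=10.0.0.10
# Create a small license server VM that uses the reserved address
gcloud compute instances create license-server \
    --zone=us-west2-a --machine-type=e2-small \
    --subnet=render-subnet --private-network-ip=10.0.0.10
Because the address is reserved, the license server keeps the same internal IP across restarts, and render workers can reference that stable address in their license environment variables.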
Production notes: Define one or more license servers in an environment variable and define the order of priority; Autodesk Arnold, a popular renderer, helps you do this. If the job cannot acquire a license by using the first server, the job tries to use any other servers that are listed, as in the following example: export solidangle_LICENSE=5053@x.x.0.1;5053@x.x.0.2 In the preceding example, the Arnold renderer tries to obtain a license from the server at x.x.0.1, port 5053. If that attempt fails, it then tries to obtain a license from the same port at the IP address x.x.0.2. Cloud-based licensing Some vendors offer cloud-based licensing that provides software licenses on demand for your instances. Cloud-based licensing is generally billed in two ways: usage based and uptime based. Usage-based licensing Usage-based licensing is billed based on how much time the software is in use. Typically with this type of licensing, a license is checked out from a cloud-based server when the process starts and is released when the process completes. So long as a license is checked out, you are billed for the use of that license. This type of licensing is typically used for rendering software. Uptime-based licensing Uptime-based or metered licenses are billed based on the uptime of your Compute Engine instance. The instance is configured to register with the cloud-based license server during the startup process. So long as the instance is running, the license is checked out. When the instance is stopped or deleted, the license is released. This type of licensing is typically used for render workers that a queue manager deploys. Important: Third-party software licenses might have different requirements for use on a public cloud, so consider working with your corporate counsel to determine if your licenses are compatible with that cloud. Choosing how to store your data The type of storage that you choose on Google Cloud depends on your chosen storage strategy along with factors such as durability requirements and cost. Persistent disk You might be able to avoid implementing a file server altogether by incorporating persistent disks (PDs) into your workload. PDs are a type of POSIX-compliant block storage, up to 64 TB in size, that are familiar to most VFX facilities. Persistent disks are available as both standard drives and solid-state drives (SSD). You can attach a PD in read-write mode to a single instance, or in read-only mode to a large number of instances, such as a group of render workers. Pros Cons Ideal use case Mounts as a standard NFS or SMB volume. Can dynamically resize. Up to 128 PDs can be attached to a single instance. The same PD can be mounted as read-only on hundreds or thousands of instances. Maximum size of 64 TB. Can write to PD only when attached to a single instance. Can be accessed only by resources that are in the same region. Advanced pipelines that can build a new disk on a per-job basis. Pipelines that serve infrequently updated data, such as software or common libraries, to render workers. Object storage Cloud Storage is highly redundant, highly durable storage that, unlike traditional file systems, is unstructured and practically unlimited in capacity. Files on Cloud Storage are stored in buckets, which are similar to folders, and are accessible worldwide. Unlike traditional storage, object storage cannot be mounted as a logical volume by an operating system (OS). 
If you decide to incorporate object storage into your render pipeline, you must modify the way that you read and write data, either through command-line utilities such as the gcloud CLI or through the Cloud Storage API. Pros Cons Ideal use case Durable, highly available storage for files of all sizes. Single API across storage classes. Inexpensive. Data is available worldwide. Virtually unlimited capacity. Not POSIX-compliant. Must be accessed through API or command-line utility. In a render pipeline, data must be transferred locally before use. Render pipelines with an asset management system that can publish data to Cloud Storage. Render pipelines with a queue management system that can fetch data from Cloud Storage before rendering. Other storage products Other storage products are available as managed services, through third-party channels such as the Cloud Marketplace, or as open source projects through software repositories or GitHub. Product Pros Cons Ideal use case Filestore Clustered file system that can support thousands of simultaneous NFS connections. Able to synchronize with on-premises NAS cluster. No way to selectively sync files. No bidirectional sync. Medium to large VFX facilities with hundreds of TBs of data to present on the cloud. Pixit Media, PixStor Scale-out file system that can support thousands of simultaneous NFS or POSIX clients. Data can be cached on demand from on-premises NAS, with updates automatically sent back to on-premises storage. Cost, third-party support from Pixit. Medium to large VFX facilities with hundreds of TBs of data to present on the cloud. Google Cloud NetApp Volumes Fully managed storage solution on Google Cloud. Supports NFS, SMB, and multiprotocol environments. Point in time snapshots with instance recovery Not available in all Google Cloud regions. VFX facilities with a pipeline capable of asset synchronization. Shared disk across virtual workstations. Cloud Storage FUSE Mount Cloud Storage buckets as file systems. Low cost. Not a POSIX-compliant file system. Can be difficult to configure and optimize. VFX facilities that are capable of deploying, configuring, and maintaining an open source file system, with a pipeline that is capable of asset synchronization. Other storage types are available on Google Cloud. For more information, contact your Google Cloud representative. Further reading on data storage options Storage options available on Google Cloud File storage on Compute Engine Design an optimal storage strategy for your cloud workload Implementing storage strategies You can implement a number of storage strategies in VFX or animation production pipelines by establishing conventions that determine how to handle your data, whether you access the data directly from your on-premises storage or synchronize between on-premises storage and the cloud. Strategy 1: Mount on-premises storage directly Mounting on-premises storage directly from cloud-based render workers If your facility has connectivity to Google Cloud of at least 10 Gbps and is in close proximity to a Google Cloud region, you can choose to mount your on-premises NAS directly on cloud render workers. While this strategy is straightforward, it can also be cost- and bandwidth- intensive, because anything that you create on the cloud and write back to storage is counted as egress data. Pros Cons Ideal use case Straightforward implementation. Read/write to common storage. Immediate availability of data, no caching or synchronization necessary. 
Can be more expensive than other options. Close proximity to a Google data center is necessary to achieve low latency. The maximum number of instances that you can connect to your on-premises NAS depends on your bandwidth and connection type. Facilities near a Google data center that need to burst render workloads to the cloud, where cost is not a concern. Facilities with connectivity to Google Cloud of at least 10 Gbps. Strategy 2: Synchronize on demand Synchronizing data between on-premises storage and cloud-based storage on demand You can choose to push data to the cloud or pull data from on-premises storage, or vice versa, only when data is needed, such as when a frame is rendered or an asset is published. If you use this strategy, synchronization can be triggered by a mechanism in your pipeline such as a watch script, by an event handler such as Pub/Sub, or by a set of commands as part of a job script. You can perform a synchronization by using a variety of commands, such as the gcloud CLI scp command, the gcloud CLI rsync command, or UDP-based data transfer protocols (UDT). If you choose to use a third-party UDT such as Aspera, Cloud FastPath, BitSpeed, or FDT to communicate with a Cloud Storage bucket, refer to the third party's documentation to learn about their security model and best practices. Google does not manage these third-party services. Push method You typically use the push method when you publish an asset, place a file in a watch folder, or complete a render job, after which time you push it to a predefined location. Examples: A cloud render worker completes a render job, and the resulting frames are pushed back to on-premises storage. An artist publishes an asset. Part of the asset-publishing process involves pushing the associated data to a predefined path on Cloud Storage. Pull method You use the pull method when a file is requested, typically by a cloud-based render instance. Example: As part of a render job script, all assets that are needed to render a scene are pulled into a file system before rendering, where all render workers can access them. Pros Cons Ideal use case Complete control over which data is synchronized and when. Ability to choose transfer method and protocol. Your production pipeline must be capable of event handling to trigger push/pull synchronizations. Additional resources might be necessary to handle the synchronization queue. Small to large facilities that have custom pipelines and want complete control over asset synchronization. Production notes: Manage data synchronization with the same queue management system that you use to handle render jobs. Synchronization tasks can use separate cloud resources to maximize available bandwidth and minimize network traffic. Strategy 3: On-premises storage, cloud-based read-through cache Using your on-premises storage with a cloud-based, read-through cache Google Cloud has extended and developed a KNFSD caching solution as an open source option. The solution can handle render farm performance demands that exceed the capabilities of storage infrastructure. KNFSD caching offers high-performance, read-through caching, which lets workloads scale to hundreds—or even thousands—of render nodes across multiple regions and hybrid storage pools. KNFSD caching is a scale-out solution that reduces load on the primary file-sharing service. KNFSD caching also reduces the overload effect when many render nodes all attempt to retrieve files from the file server at the same time. 
By using a caching layer on the same VPC network as the render nodes, read latency is reduced, which helps render jobs start and complete faster. Depending on how you've configured your caching file server, the data remains in the cache until: The data ages out, or remains untouched for a specified amount of time. Space is needed on the file server, at which time data is removed from the cache based on age. This strategy reduces the amount of bandwidth and complexity required to deploy many concurrent render instances. In some cases, you might want to pre-warm your cache to ensure that all job-related data is present before rendering. To pre-warm the cache, read the contents of a directory that is on your cloud file server by performing a read or stat of one or more files. Accessing files in this way triggers the synchronization mechanism. You can also add a physical on-premises appliance to communicate with the virtual appliance. For example, NetApp offers a storage solution that can further reduce latency between your on-premises storage and the cloud. Pros Cons Ideal use case Cached data is managed automatically. Reduces bandwidth requirements. Clustered cloud file systems can be scaled up or down depending on job requirements. Can incur additional costs. Pre-job tasks must be implemented if you choose to pre-warm the cache. Large facilities that deploy many concurrent instances and read common assets across many jobs. Filtering data You can build a database of asset types and associated conditions to define whether to synchronize a particular type of data. You might never want to synchronize some types of data, such as ephemeral data that is generated as part of a conversion process, cache files, or simulation data. Consider also whether to synchronize unapproved assets, because not all iterations will be used in final renders. Performing an initial bulk transfer When implementing your hybrid render farm, you might want to perform an initial transfer of all or part of your dataset to Cloud Storage, persistent disk, or other cloud-based storage. Depending on factors such as the amount and type of data to transfer and your connection speed, you might be able to perform a full synchronization over the course of a few days or weeks. The following figure compares typical times for online and physical transfers. Comparison of typical times for online and physical transfers If your transfer workload exceeds your time or bandwidth constraints, Google offers a number of transfer options to get your data into the cloud, including Google's Transfer Appliance. Archiving and disaster recovery It's worth noting the difference between archiving of data and disaster recovery. The former is a selective copy of finished work, while the latter is a state of data that can be recovered. You want to design a disaster recovery plan that fits your facility's needs and provides an off-site contingency plan. Consult with your on-premises storage vendor for help with a disaster recovery plan that suits your specific storage platform. Archiving data in the cloud After a project is complete, it is common practice to save finished work to some form of long-term storage, typically magnetic tape media such as LTO. These cartridges are subject to environmental requirements and, over time, can be logistically challenging to manage. Large production facilities sometimes house their entire archive in a purpose-built room with a full-time archivist to keep track of data and retrieve it when requested. 
Searching for specific archived assets, shots, or footage can be time-consuming, because data might be stored on multiple cartridges, archive indexing might be missing or incomplete, or there might be speed limitations on reading data from magnetic tape. Migrating your data archive to the cloud can not only eliminate the need for on-premises management and storage of archive media, but it can also make your data far more accessible and searchable than traditional archive methods can. A basic archiving pipeline might look like the following diagram, employing different cloud services to examine, categorize, tag, and organize archives. From the cloud, you can create an archive management and retrieval tool to search for data by using various metadata criteria such as date, project, format, or resolution. You can also use the Machine Learning APIs to tag and categorize images and videos, storing the results in a cloud-based database such as BigQuery. An asset archive pipeline that includes machine learning to categorize content Further topics to consider: Automate the generation of thumbnails or proxies for content that resides within Cloud Storage storage classes that have retrieval fees. Use these proxies within your media asset management system so that users can browse data while reading only the proxies, not the archived assets. Consider using machine learning to categorize live-action content. Use the Cloud Vision API to label textures and background plates, or the Video Intelligence API to help with the search and retrieval of reference footage. You can also use Vertex AI AutoML image to create a custom image model to recognize any asset, whether live action or rendered. For rendered content, consider saving a copy of the render worker's disk image along with the rendered asset. That way, if you ever need to re-render an archived shot, you have the correct software versions, plugins, OS libraries, and dependencies available to re-create the setup. Managing assets and production Working on the same project across multiple facilities can present unique challenges, especially when content and assets need to be available around the world. Manually synchronizing data across private networks can be expensive and resource-intensive, and is subject to local bandwidth limitations. If your workload requires globally available data, you might be able to use Cloud Storage, which is accessible from anywhere that you can access Google services. To incorporate Cloud Storage into your pipeline, you must modify your pipeline to understand object paths, and then pull or push your data to your render workers before rendering. Using this method provides global access to published data but requires your pipeline to deliver assets to where they're needed in a reasonable amount of time. For example, a texture artist in Los Angeles can publish image files to be used by a lighting artist in London. The process looks like this: Publishing assets to Cloud Storage The publish pipeline pushes files to Cloud Storage and adds an entry to a cloud-based asset database. An artist in London runs a script to gather assets for a scene. File locations are queried from the database and read from Cloud Storage to local disk. Queue management software gathers a list of assets that are required for rendering, queries them from the asset database, and downloads them from Cloud Storage to each render worker's local storage.
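For example, the pull step in this workflow can be a simple synchronization from a bucket path to the render worker's local disk; the bucket name and paths shown here are purely illustrative:
# Pull the assets for one shot onto the worker's local disk
gcloud storage rsync --recursive \
    gs://studio-assets/shows/alpha/shot_010/ /local/assets/shot_010/
The reverse direction—pushing finished frames back to the bucket—uses the same command with the source and destination swapped.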
Using Cloud Storage in this manner also provides you with an archive of all your published data on the cloud if you choose to use Cloud Storage as part of your archive pipeline. Managing databases Asset and production management software depends on highly available, durable databases that are served on hosts capable of handling hundreds or thousands of queries per second. Databases are typically hosted on an on-premises server that is running in the same rack as render workers, and are subject to the same power, network, and HVAC limitations. You might consider running your MySQL, NoSQL, and PostgreSQL production databases as managed, cloud-based services. These services are highly available and globally accessible, encrypt data both at rest and in transit, and offer built-in replication functionality. Managing queues Commercially available queue management software programs such as Qube!, Deadline, and Tractor are widely used in the VFX/animation industry. There are also open source software options available, such as OpenCue. You can use this software to deploy and manage any compute workload across a variety of workers, not just renders. You can deploy and manage asset publishing, particle and fluid simulations, texture baking, and compositing with the same scheduling framework that you use to manage renders. A few facilities have implemented general-purpose scheduling software such as HTCondor from the University of Wisconsin, Slurm from SchedMD, or Univa Grid Engine into their VFX pipelines. Software that is designed specifically for the VFX industry, however, pays special attention to features like the following: Job-, frame-, and layer-based dependency. Some tasks need to be completed before you can begin other jobs. For example, run a fluid simulation in its entirety before rendering. Job priority, which render wranglers can use to shift the order of jobs based on individual deadlines and schedules. Resource types, labels, or targets, which you can use to match specific resources with jobs that require them. For example, deploy GPU-accelerated renders only on VMs that have GPUs attached. Capturing historical data on resource usage and making it available through an API or dashboard for further analysis. For example, look at average CPU and memory usage for the last few iterations of a render to predict resource usage for the next iteration. Pre- and post-flight jobs. For example, a pre-flight job pulls all necessary assets onto the local render worker before rendering. A post-flight job copies the resulting rendered frame to a designated location on a file system and then marks the frame as complete in an asset management system. Integration with popular 2D and 3D software applications such as Maya, 3ds Max, Houdini, Cinema 4D, or Nuke. Production notes: Use queue management software to recognize a pool of cloud-based resources as if they were on-premises render workers. This method requires some oversight to maximize resource usage by running as many renders as each instance can handle, a technique known as bin packing. These operations are typically handled both algorithmically and by render wranglers. You can also automate the creation, management, and termination of cloud-based resources on an on-demand basis. This method relies on your queue manager to run pre- and post-render scripts that create resources as needed, monitor them during rendering, and terminate them when tasks are done. 
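A minimal sketch of that automation—assuming a hypothetical instance template named render-worker-template—might have the queue manager's pre-render script create a worker and its post-render script remove it:
# Pre-render: create a worker from a predefined instance template
gcloud compute instances create render-worker-0001 \
    --zone=us-west2-a --source-instance-template=render-worker-template
# Post-render: delete the worker when its tasks are done
gcloud compute instances delete render-worker-0001 \
    --zone=us-west2-a --quiet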
Job deployment considerations When you are implementing a render farm that uses both on-premises and cloud-based storage, here are some considerations that your queue manager might need to keep in mind: Licensing might differ between cloud and on-premises deployments. Some licenses are node based, some are process driven. Ensure that your queue management software deploys jobs to maximize the value of your licenses. Consider adding unique tags or labels to cloud-based resources to ensure that they get used only when assigned to specific job types. Use Cloud Logging to detect unused or idle instances. When launching dependent jobs, consider where the resulting data will reside and where it needs to be for the next step. If your path namespaces differ between on-premises and cloud-based storage, consider using relative paths to allow renders to be location agnostic. Alternatively, depending on the platform, you could build a mechanism to swap paths at render time. Some renders, simulations, or post-processes rely on random number generation, which can differ among CPU manufacturers. Even CPUs from the same manufacturer but different chip generations can produce different results. When in doubt, use identical or similar CPU types for all frames of a job. If you are using a read-through cache appliance, consider deploying a pre-flight job to pre-warm the cache and ensure that all assets are available on the cloud before you deploy cloud resources. This approach minimizes the amount of time that render workers are forced to wait while assets are moved to the cloud. Logging and monitoring Recording and monitoring resource usage and performance is an essential aspect of any render farm. Google Cloud offers a number of APIs, tools, and solutions to help provide insight into utilization of resources and services. The quickest way to monitor a VM's activity is to view its serial port output. This output can be helpful when an instance is unresponsive through typical service control planes such as your render queue management supervisor. Other ways to collect and monitor resource usage on Google Cloud include: Use Cloud Logging to capture usage and audit logs, and to export the resulting logs to Cloud Storage, BigQuery, and other services. Install the Cloud Monitoring agent on your VMs to collect system metrics. Incorporate the Cloud Logging API into your pipeline scripts to log directly to Cloud Logging by using client libraries for popular scripting languages. Use Cloud Monitoring to create charts to understand resource usage. Configuring your render worker instances For your workload to be truly hybrid, on-premises render nodes must be identical to cloud-based render nodes, with matching OS versions, kernel builds, installed libraries, and software. You might also need to reproduce mount points, path namespaces, and even user environments on the cloud exactly as they are on premises. Choosing a disk image You can choose from one of the public images or create your own custom image that is based on your on-premises render node image. Public images include a collection of packages that set up and manage user accounts and enable Secure Shell (SSH) key–based authentication. Creating a custom image If you choose to create a custom image, you will need to add more libraries to both Linux and Windows for them to function properly in the Compute Engine environment. Your custom image must comply with security best practices.
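For example—assuming you have already prepared a boot disk named render-node-disk, which is a hypothetical name—a custom image could be captured and grouped into an image family like this:
# Capture a custom image from a prepared disk and add it to a family
gcloud compute images create render-node-v1 \
    --source-disk=render-node-disk --source-disk-zone=us-west2-a \
    --family=render-node
Grouping images into a family lets instance templates reference the family so that they always use the most recent image version.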
If you are using Linux, install the Linux guest environment for Compute Engine to access the functionality that public images provide by default. By installing the guest environment, you can perform tasks, such as metadata access, system configuration, and optimizing the OS for use on Google Cloud, by using the same security controls on your custom image that you use on public images. Production notes: Manage your custom images in a separate project at the organization level. This approach gives you more precise control over how images are created or modified and lets you apply versions, which can be useful when using different software or OS versions across multiple productions. Automating image creation and instance deployment You can use tools such as Packer to make creating images more reproducible, auditable, configurable, and reliable. You can also use a tool like Ansible to configure your running render nodes and exercise fine-grained control over their configuration and lifecycle. Choosing a machine type On Google Cloud, you can choose one of the predefined machine types or specify a custom machine type. Using custom machine types gives you control over resources so you can customize instances based on the job types that you run on Google Cloud. When creating an instance, you can add GPUs and specify the number of CPUs, the CPU platform, the amount of RAM, and the type and size of disks. Production notes: For pipelines that deploy one instance per frame, consider customizing the instance based on historical job statistics like CPU load or memory use to optimize resource usage across all frames of a shot. For example, you might choose to deploy machines with higher CPU counts for frames that contain heavy motion blur to help normalize render times across all frames. Choosing between standard and preemptible VMs Preemptible VMs (PVMs) refers to excess Compute Engine capacity that is sold at a much lower price than standard VMs. Compute Engine might terminate or preempt these instances if other tasks require access to that capacity. PVMs are ideal for rendering workloads that are fault tolerant and managed by a queueing system that keeps track of jobs that are lost to preemption. Standard VMs can be run indefinitely and are ideal for license servers or queue administrator hosts that need to run in a persistent fashion. Preemptible VMs are terminated automatically after 24 hours, so don't use them to run renders or simulations that run longer. Preemption rates run from 5% to 15%, which for typical rendering workloads is probably tolerable given the reduced cost. Some preemptible best practices can help you decide the best way to integrate PVMs into your render pipeline. If your instance is preempted, Compute Engine sends a preemption signal to the instance, which you can use to trigger your scheduler to terminate the current job and requeue. Standard VM Preemptible VM Can be used for long-running jobs. Ideal for high-priority jobs with hard deadlines. Can be run indefinitely, so ideal for license servers or queue administrator hosting. Automatically terminated after 24 hours. Requires a queue management system to handle preempted instances. Production notes: Some renderers can perform a snapshot of an in-progress render at specified intervals, so if the VM gets preempted, you can pause and resume rendering without having to restart a frame from scratch. If your renderer supports snapshotting, and you choose to use PVMs, enable render snapshotting in your pipeline to avoid losing work. 
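As a hedged sketch of handling that signal, a render worker could poll the metadata server and ask the scheduler to requeue the current task when preemption is detected; requeue_current_task is a placeholder for your queue manager's own tooling:
# Check the metadata server for the preemption flag
if curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/preempted" | grep -q TRUE; then
  # Placeholder: notify your queue manager to requeue the current frame
  requeue_current_task
fi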
While snapshots are being written and updated, data can be written to Cloud Storage and, if the render worker gets preempted, retrieved when a new PVM is deployed. To avoid storage costs, delete snapshot data for completed renders. Granting access to render workers IAM helps you assign access to cloud resources to individuals who need access. For Linux render workers, you can use OS Login to further restrict access within an SSH session, giving you control over who is an administrator. Controlling costs of a hybrid render farm When estimating costs, you must consider many factors, but we recommend that you implement these common best practices as policy for your hybrid render farm: Use preemptible instances by default. Unless your render job is extremely long-running, four or more hours per frame, or you have a hard deadline to deliver a shot, use preemptible VMs. Minimize egress. Copy only the data that you need back to on-premises storage. In most cases, this data will be the final rendered frames, but it can also be separate passes or simulation data. If you are mounting your on-premises NAS directly, or using a storage product that synchronizes automatically, write all rendered data to the render worker's local file system, then copy what you need back to on-premises storage to avoid egressing temporary and unnecessary data. Right-size VMs. Make sure to create your render workers with optimal resource usage, assigning only the necessary number of vCPUs, the optimum amount of RAM, and the correct number of GPUs, if any. Also consider how to minimize the size of any attached disks. Consider the one-minute minimum. On Google Cloud, instances get billed on a per-second basis with a one-minute minimum. If your workload includes rendering frames that take less than one minute, consider chunking tasks together to avoid deploying an instance for less than one minute of compute time. Keep large datasets on the cloud. If you use your render farm to generate massive amounts of data, such as deep EXRs or simulation data, consider using a cloud-based workstation that is further down the pipeline. For example, an FX artist might run a fluid simulation on the cloud, writing cache files to cloud-based storage. A lighting artist could then access this simulation data from a virtual workstation that is on Google Cloud. For more information about virtual workstations, contact your Google Cloud representative. Take advantage of sustained and committed use discounts. If you run a pool of resources, sustained use discounts can save you up to 30% off the cost of instances that run for an entire month. Committed use discounts can also make sense in some cases. Extending your existing render farm to the cloud is a cost-effective way to use powerful, low-cost resources without capital expense. No two production pipelines are alike, so no document can cover every topic and unique requirement. For help with migrating your render workloads to the cloud, contact your Google Cloud representative. What's next Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. 
Send feedback \ No newline at end of file diff --git a/Build_data_products_in_a_data_mesh.txt b/Build_data_products_in_a_data_mesh.txt new file mode 100644 index 0000000000000000000000000000000000000000..697bb0782719ba2085786c6727f574c1047e30c9 --- /dev/null +++ b/Build_data_products_in_a_data_mesh.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/build-data-products-data-mesh +Date Scraped: 2025-02-23T11:48:57.888Z + +Content: +Home Docs Cloud Architecture Center Send feedback Build data products in a data mesh Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-03 UTC To ensure that the use cases of data consumers are met, it's essential that data products in a data mesh are designed and built with care. The design of a data product starts with the definition of how data consumers would use that product, and how that product then gets exposed to consumers. Data products in a data mesh are built on top of a datastore (for example, a domain data warehouse or data lake). When you create data products in a data mesh, there are some key factors that we recommend you consider throughout this process. These considerations are described in this document. This document is part of a series which describes how to implement a data mesh on Google Cloud. It assumes that you have read and are familiar with the concepts described in Architecture and functions in a data mesh and Build a modern, distributed Data Mesh with Google Cloud. The series has the following parts: Architecture and functions in a data mesh Design a self-service data platform for a data mesh Build data products in a data mesh (this document) Discover and consume data products in a data mesh When creating data products from a domain data warehouse, we recommend that data producers carefully design analytical (consumption) interfaces for those products. These consumption interfaces are a set of guarantees on the data quality and operational parameters, along with a product support model and product documentation. The cost of changing consumption interfaces is usually high because of the need for both the data producer and potentially multiple data consumers to change their consuming processes and applications. Given that the data consumers are most likely to be in organizational units which are separate to that of the data producers, coordinating the changes can be difficult. The following sections provide background information on what you must consider when creating a domain warehouse, defining consumption interfaces, and exposing those interfaces to data consumers. Create a domain data warehouse There's no fundamental difference between building a standalone data warehouse and building a domain data warehouse from which the data producer team creates data products. The only real difference between the two is that the latter exposes a subset of its data through the consumption interfaces. In many data warehouses, the raw data ingested from operational data sources goes through the process of enrichment and data quality verification (curation). In Dataplex-managed data lakes, curated data typically is stored in designated curated zones. When curation is complete, a subset of the data should be ready for external-to-the-domain consumption through several types of interfaces. To define those consumption interfaces, an organization should provide a set of tools to domain teams who are new to adopting a data mesh approach. 
These tools let data producers create new data products on a self-service basis. For recommended practices, see Design a self-service data platform. Additionally, data products must meet centrally defined data governance requirements. These requirements affect data quality, data availability, and lifecycle management. Because these requirements build the trust of data consumers in the data products and encourage data product usage, the benefits of implementing these requirements are worth the effort in supporting them. Define consumption interfaces We recommend that data producers use multiple types of interfaces, instead of defining just one or two. Each interface type in data analytics has advantages and disadvantages, and there's no single type of interface that excels at everything. When data producers assess the suitability of each interface type, they must consider the following: Ability to perform the data processing needed. Scalability to support current and future data consumer use cases. Performance required by data consumers. Cost of development and maintenance. Cost of running the interface. Support by the languages and tools that your organization uses. Support for separation of storage and compute. For example, if the business requirement is to be able to run analytical queries over a petabyte-size dataset, then the only practical interface is a BigQuery view. But if the requirements are to provide near real-time streaming data, then a Pub/Sub-based interface is more appropriate. Many of these interfaces don't require you to copy or replicate existing data. Most of them also let you separate storage and compute, a critical feature of Google Cloud analytical tools. Consumers of data exposed through these interfaces process the data using the compute resources available to them. There's no need for data producers to do any additional infrastructure provisioning. There's a wide variety of consumption interfaces. The following interfaces are the most common ones used in a data mesh and are discussed in the following sections: Authorized views and functions Direct read APIs Data as streams Data access API Looker blocks Machine learning (ML) models The list of interfaces in this document is not exhaustive. There are also other options that you might consider for your consumption interfaces (for example, Analytics Hub). However, these other interfaces are outside of the scope of this document. Authorized views and functions As much as possible, data products should be exposed through authorized views and authorized functions, including table-valued functions. Authorized datasets provide a convenient way to authorize several views automatically. Using authorized views prevents direct access to the base tables, and lets you optimize the underlying tables and queries against them, without affecting consumer use of these views. Consumers of this interface use SQL to query the data. The following diagram illustrates the use of authorized datasets as the consumption interface. Authorized datasets and views help to enable easy versioning of interfaces. As shown in the following diagram, there are two primary versioning approaches that data producers can take: The approaches can be summarized as follows: Dataset versioning: In this approach, you version the dataset name. You don't version the views and functions inside the dataset. You keep the same names for the views and functions regardless of version. 
For example, the first version of a sales dataset is defined in a dataset named sales_v1 with two views, catalog and orders. For its second version, the sales dataset has been renamed sales_v2, and any previous views in the dataset keep their previous names but have new schemas. The second version of the dataset might also have new views added to it, or may remove any of the previous views. View versioning: In this approach, the views inside the dataset are versioned instead of the dataset itself. For example, the sales dataset keeps the name of sales regardless of version. However, the names of the views inside the dataset change to reflect each new version of the view (such as catalog_v1, catalog_v2, orders_v1, orders_v2, and orders_v3). The best versioning approach for your organization depends on your organization's policies and the number of views that are rendered obsolete with the update to the underlying data. Dataset versioning is best when a major product update is needed and most views must change. View versioning leads to fewer identically named views in different datasets, but can lead to ambiguities, for example, how to tell if a join between datasets works correctly. A hybrid approach can be a good compromise. In a hybrid approach, compatible schema changes are allowed within a single dataset, and incompatible changes require a new dataset. BigLake table considerations Authorized views can be created not only on BigQuery tables, but also on BigLake tables. BigLake tables let consumers query the data stored in Cloud Storage by using the BigQuery SQL interface. BigLake tables support fine-grained access control without the need for data consumers to have read permissions for the underlying Cloud Storage bucket. Data producers must consider the following for BigLake tables: The design of the file formats and the data layout influences the performance of the queries. Column-based formats, for example, Parquet or ORC, generally perform much better for analytic queries than JSON or CSV formats. A Hive partitioned layout lets you prune partitions and speeds up queries which use partitioning columns. The number of files and the preferred query performance for the file size must also be taken into account in the design stage. If queries using BigLake tables don't meet service-level agreement (SLA) requirements for the interface and can't be tuned, then we recommend the following actions: For data that must be exposed to the data consumer, convert that data to BigQuery storage. Redefine the authorized views to use the BigQuery tables. Generally, this approach does not cause any disruption to the data consumers, or require any changes to their queries. The queries in BigQuery storage can be optimized using techniques that aren't possible with BigLake tables. For example, with BigQuery storage, consumers can query materialized views that have different partitioning and clustering than the base tables, and they can use the BigQuery BI Engine. Direct read APIs Although we don't generally recommend that data producers give data consumers direct read access to the base tables, it might occasionally be practical to allow such access for reasons such as performance and cost. In such cases, extra care should be taken to ensure that the table schema is stable. There are two ways to directly access data in a typical warehouse. Data producers can either use the BigQuery Storage Read API, or the Cloud Storage JSON or XML APIs. 
The following diagram illustrates two examples of consumers using these APIs. One is a machine learning (ML) use case, and the other is a data processing job. Versioning a direct-read interface is complex. Typically, data producers must create another table with a different schema. They must also maintain two versions of the table, until all the data consumers of the deprecated version migrate to the new one. If the consumers can tolerate the disruption of rebuilding the table and switching to the new schema, then it's possible to avoid the data duplication. In cases where schema changes can be backward compatible, the migration of the base table can be avoided. For example, you don't have to migrate the base table if only new columns are added and the data in these columns is backfilled for all the rows. The following is a summary of the differences between the Storage Read API and Cloud Storage API. In general, whenever possible, we recommend that data producers use BigQuery API for analytical applications. Storage Read API: Storage Read API can be used to read data in BigQuery tables and to read BigLake tables. This API supports filtering and fine-grained access control, and can be a good option for stable data analytics or ML consumers. Cloud Storage API: Data producers might need to share a particular Cloud Storage bucket directly with data consumers. For example, data producers can share the bucket if data consumers can't use the SQL interface for some reason, or the bucket has data formats that aren't supported by Storage Read API. In general, we don't recommend that data producers allow direct access through the storage APIs because direct access doesn't allow for filtering and fine-grained access control. However, the direct access approach can be a viable choice for stable, small-sized (gigabytes) datasets. Allowing Pub/Sub access to the bucket gives data consumers an easy way to copy the data into their projects and process it there. In general, we don't recommend data copying if it can be avoided. Multiple copies of data increase storage cost, and add to the maintenance and lineage tracking overhead. Data as streams A domain can expose streaming data by publishing that data to a Pub/Sub topic. Subscribers who want to consume the data create subscriptions to consume the messages published to that topic. Each subscriber receives and consumes data independently. The following diagram shows an example of such data streams. In the diagram, the ingest pipeline reads raw events, enriches (curates) them, and saves this curated data to the analytical datastore (BigQuery base table). At the same time, the pipeline publishes the enriched events to a dedicated topic. This topic is consumed by multiple subscribers, each of whom may be potentially filtering these events to get only the ones relevant to them. The pipeline also aggregates and publishes event statistics to its own topic to be processed by another data consumer. The following are example use cases for Pub/Sub subscriptions: Enriched events, such as providing full customer profile information along with data on a particular customer order. Close-to-real-time aggregation notifications, such as total order statistics for the last 15 minutes. Business-level alerts, such as generating an alert if order volume dropped by 20% compared to a similar period on the previous day. Data change notifications (similar in concept to change data capture notifications), such as a particular order changes status. 
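A minimal sketch of this pattern—the topic, subscription, and attribute names are hypothetical—creates a topic, attaches a filtered subscription for one consumer, and publishes an enriched event:
# Create the topic and a subscription that receives only alert events
gcloud pubsub topics create order-events
gcloud pubsub subscriptions create order-alerts-sub \
    --topic=order-events --message-filter='attributes.event_type = "alert"'
# Publish an enriched event with an attribute that the filter can match
gcloud pubsub topics publish order-events \
    --message='{"order_id": "1234", "status": "delayed"}' \
    --attribute=event_type=alert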
The data format that data producers use for Pub/Sub messages affects costs and how these messages are processed. For high-volume streams in a data mesh architecture, Avro or Protobuf formats are good options. If data producers use these formats, they can assign schemas to Pub/Sub topics. The schemas help to ensure that the consumers receive well-formed messages. Because a streaming data structure can be constantly changing, versioning of this interface requires coordination between the data producers and the data consumers. There are several common approaches data producers can take, which are as follows: A new topic is created every time the message structure changes. This topic often has an explicit Pub/Sub schema. Data consumers who need the new interface can start to consume the new data. The message version is implied by the name of the topic, for example, click_events_v1. Message formats are strongly typed. There's no variation on the message format between messages in the same topic. The disadvantage of this approach is that there might be data consumers who can't switch to the new subscription. In this case, the data producer must continue publishing events to all active topics for some time, and data consumers who subscribe to the topic must either deal with a gap in message flow, or de-duplicate the messages. Data is always published to the same topic. However, the structure of the message can change. A Pub/Sub message attribute (separate from the payload) defines the version of the message. For example, v=1.0. This approach removes the need to deal with gaps or duplicates; however, all data consumers must be ready to receive messages of a new type. Data producers also can't use Pub/Sub topic schemas for this approach. A hybrid approach. The message schema can have an arbitrary data section that can be used for new fields. This approach can provide a reasonable balance between having strongly typed data, and frequent and complex version changes. Data access API Data producers can build a custom API to directly access the base tables in a data warehouse. Typically, these producers expose this custom API as a REST or a gRPC API, and deploy on Cloud Run or a Kubernetes cluster. An API gateway like Apigee can provide other additional features, such as traffic throttling or a caching layer. These functionalities are useful when exposing the data access API to consumers outside of a Google Cloud organization. Potential candidates for a data access API are latency sensitive and high concurrency queries which both return a relatively small result in a single API and can be effectively cached. Examples of such a custom API for data access can be as follows: A combined view on the SLA metrics of the table or product. The top 10 (potentially cached) records from a particular table. A dataset of table statistics (total number of rows, or data distribution within key columns). Any guidelines and governance that the organization has around building application APIs are also applicable to the custom APIs created by data producers. The organization's guidelines and governance should cover issues such as hosting, monitoring, access control, and versioning. The disadvantage of a custom API is the fact that the data producers are responsible for any additional infrastructure that's required to host this interface, as well as custom API coding and maintenance. We recommend that data producers investigate other options before deciding to create custom data access APIs. 
For example, data producers can use BigQuery BI Engine to decrease response latency and increase concurrency. Looker Blocks For products such as Looker, which are heavily used in business intelligence (BI) tools, it might be helpful to maintain a set of BI tool-specific widgets. Because the data producer team knows the underlying data model that is used in the domain, that team is best placed to create and maintain a prebuilt set of visualizations. In the case of Looker, this visualization could be a set of Looker Blocks (prebuilt LookML data models). The Looker Blocks can be easily incorporated into dashboards hosted by consumers. ML models Because teams that work in data domains have a deep understanding and knowledge of their data, they are often the best teams to build and maintain ML models which are trained on the domain data. These ML Models can be exposed through several different interfaces, including the following: BigQuery ML models can be deployed in a dedicated dataset and shared with data consumers for BigQuery batch predictions. BigQuery ML models can be exported into Vertex AI to be used for online predictions. Data location considerations for consumption interfaces An important consideration when data producers define consumption interfaces for data products is data location. In general, to minimize costs, data should be processed in the same region that it's stored in. This approach helps to prevent cross-region data egress charges. This approach also has the lowest data consumption latency. For these reasons, data stored in multi-regional BigQuery locations is usually the best candidate for exposing as a data product. However, for performance reasons, data stored in Cloud Storage and exposed through BigLake tables or direct read APIs should be stored in regional buckets. If data exposed in one product resides in one region and needs to be joined with data in another domain in another region, data consumers must consider the following limitations: Cross-region queries that use BigQuery SQL are not supported. If the primary consumption method for the data is BigQuery SQL, all the tables in the query must be in the same location. BigQuery flat-rate commitments are regional. If a project uses only a flat-rate commitment in one region but queries a data product in another region, on-demand pricing applies. Data consumers can use direct read APIs to read data from another region. However, cross-regional network egress charges apply, and data consumers will most likely experience latency for large data transfers. Data that's frequently accessed across regions can be replicated to those regions to reduce the cost and latency of queries incurred by the product consumers. For example, BigQuery datasets can be copied to other regions. However, data should only be copied when it's required. We recommend that data producers only make a subset of the available product data available to multiple regions when you copy data. This approach helps to minimize replication latency and cost. This approach can result in the need to provide multiple versions of the consumption interface with the data location region explicitly called out. For example, BigQuery Authorized views can be exposed through naming such as sales_eu_v1 and sales_us_v1. Data stream interfaces using Pub/Sub topics don't need any additional replication logic to consume messages in regions that are not the same region as that where the message is stored. However, additional cross-region egress charges apply in this case. 
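For example—the project, dataset, and table names here are purely illustrative, and they assume an EU copy of the base data already exists—a producer might expose the replicated data through a region-suffixed view:
# Create the EU dataset and a view over the EU copy of the base table
bq mk --dataset --location=EU my-project:sales_eu_v1
bq query --use_legacy_sql=false \
  'CREATE OR REPLACE VIEW `my-project.sales_eu_v1.orders` AS
   SELECT * FROM `my-project.sales_eu_base.orders`'
Authorizing the view against the base dataset is a separate step that you perform through the base dataset's access settings.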
Expose consumption interfaces to data consumers This section discusses how to make consumption interfaces discoverable by potential consumers. Data Catalog is a fully managed service which organizations can use to provide data discovery and metadata management services. Data producers must make the consumption interfaces of their data products searchable and annotate them with the appropriate metadata to enable product consumers to access them in a self-service manner. The following sections discuss how each interface type is defined as a Data Catalog entry. BigQuery-based SQL interfaces Technical metadata, such as a fully qualified table name or table schema, is automatically registered for authorized views, BigLake views, and BigQuery tables that are available through the Storage Read API. We recommend that data producers also provide additional information in the data product documentation to help data consumers. For example, to help users find the product documentation for an entry, data producers can add a URL to one of the tags that has been applied to the entry. Producers can also provide the following: Sets of clustered columns, which should be used in query filters. Enumeration values for fields that have logical enumeration type, if the type is not provided as part of the field description. Supported joins with other tables. Data streams Pub/Sub topics are automatically registered with the Data Catalog. However, data producers must describe the schema in the data product documentation. Cloud Storage API Data Catalog supports the definition of Cloud Storage file entries and their schema. If a data lake fileset is managed by Dataplex, the fileset is automatically registered in the Data Catalog. Filesets that aren't associated with Dataplex are added using a different approach. Other interfaces You can add other interfaces which don't have built-in support from Data Catalog by creating custom entries. What's next See a reference implementation of the data mesh architecture. Learn more about BigQuery. Read about Dataplex. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Build_high_availability_through_redundancy.txt b/Build_high_availability_through_redundancy.txt new file mode 100644 index 0000000000000000000000000000000000000000..fd96930ff4ef5334235a9c7d633534da4446322c --- /dev/null +++ b/Build_high_availability_through_redundancy.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/build-highly-available-systems +Date Scraped: 2025-02-23T11:43:23.489Z + +Content: +Home Docs Cloud Architecture Center Send feedback Build highly available systems through resource redundancy Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to plan, build, and manage resource redundancy, which can help you to avoid failures. This principle is relevant to the scoping focus area of reliability. Principle overview After you decide the level of reliability that you need, you must design your systems to avoid any single points of failure. Every critical component in the system must be replicated across multiple machines, zones, and regions. For example, a critical database can't be located in only one region, and a metadata server can't be deployed in only a single zone or region.
In those examples, if the sole zone or region has an outage, the system has a global outage. Recommendations To build redundant systems, consider the recommendations in the following subsections. Identify failure domains and replicate services Map out your system's failure domains, from individual VMs to regions, and design for redundancy across the failure domains. To ensure high availability, distribute and replicate your services and applications across multiple zones and regions. Configure the system for automatic failover to make sure that the services and applications continue to be available in the event of zone or region outages. For examples of multi-zone and multi-region architectures, see Design reliable infrastructure for your workloads in Google Cloud. Detect and address issues promptly Continuously track the status of your failure domains to detect and address issues promptly. You can monitor the current status of Google Cloud services in all regions by using the Google Cloud Service Health dashboard. You can also view incidents relevant to your project by using Personalized Service Health. You can use load balancers to detect resource health and automatically route traffic to healthy backends. For more information, see Health checks overview. Test failover scenarios Like a fire drill, regularly simulate failures to validate the effectiveness of your replication and failover strategies. For more information, see Simulate a zone outage for a regional MIG and Simulate a zone failure in GKE regional clusters. Previous arrow_back Set realistic targets for reliability Next Take advantage of horizontal scalability arrow_forward Send feedback \ No newline at end of file diff --git a/Building_blocks.txt b/Building_blocks.txt new file mode 100644 index 0000000000000000000000000000000000000000..5fbc0e575debfa3b3665306c23d71ef20f420995 --- /dev/null +++ b/Building_blocks.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/dr-scenarios-building-blocks +Date Scraped: 2025-02-23T11:54:29.913Z + +Content: +Home Docs Cloud Architecture Center Send feedback Disaster recovery building blocks Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-01 UTC This document is the second part of a series that discusses disaster recovery (DR) in Google Cloud. This part discusses services and products that you can use as building blocks for your DR plan—both Google Cloud products and products that work across platforms. The series consists of these parts: Disaster recovery planning guide Disaster recovery building blocks (this article) Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Google Cloud has a wide range of products that you can use as part of your disaster recovery architecture. This section discusses DR-related features of the products that are most commonly used as Google Cloud DR building blocks. Many of these services have high availability (HA) features. HA doesn't entirely overlap with DR, but many of the goals of HA also apply to designing a DR plan. For example, by taking advantage of HA features, you can design architectures that optimize uptime and that can mitigate the effects of small-scale failures, such as a single VM failing. 
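One HA feature that helps to mitigate exactly this kind of small-scale failure is health checking: a load balancer or a managed instance group can use a health check to stop routing traffic to, or to recreate, an instance that stops responding. A minimal sketch follows; the health check name, port, and request path are illustrative and not taken from this guide:
# Hypothetical HTTP health check; backends that fail three consecutive probes are marked unhealthy.
gcloud compute health-checks create http web-basic-check \
  --port 80 \
  --request-path /healthz \
  --check-interval 10s \
  --unhealthy-threshold 3
You can attach a health check like this one to a backend service for traffic routing or to a managed instance group for autohealing.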
For more about the relationship of DR and HA, see the Disaster recovery planning guide. The following sections describe these Google Cloud DR building blocks and how they help you implement your DR goals. Compute and storage The following table provides a summary of the features in Google Cloud compute and storage services that serve as building blocks for DR: Product Feature Compute Engine Scalable compute resources Predefined and custom machine types Fast boot times Snapshots Instance templates Managed instance groups Reservations Persistent disks Transparent maintenance Live migration Cloud Storage Highly durable object store Redundancy across regions Storage classes Object lifecycle management Data transfer from other sources Encryption at rest by default Soft deletion Google Kubernetes Engine (GKE) Managed environment for deploying and scaling containerized applications Node auto-repair Liveness and readiness probes Persistent volumes Multi-zone and regional clusters Multi-cluster networking For more information about how the features and the design of these and other Google Cloud products might influence your DR strategy, see Architecting disaster recovery for cloud infrastructure outages: product reference. Compute Engine Compute Engine provides virtual machine (VM) instances; it's the workhorse of Google Cloud. In addition to configuring, launching, and monitoring Compute Engine instances, you typically use a variety of related features in order to implement a DR plan. For DR scenarios, you can prevent accidental deletion of VMs by setting the delete protection flag. This is particularly useful where you are hosting stateful services such as databases. For information about how to meet low RTO and RPO values, see Designing resilient systems. Instance templates You can use Compute Engine instance templates to save the configuration details of the VM and then create Compute Engine instances from existing instance templates. You can use the template to launch as many instances as you need, configured exactly the way you want when you need to stand up your DR target environment. Instance templates are globally replicated, so you can recreate the instance anywhere in Google Cloud with the same configuration. For more information, see the following resources: About instance templates Deterministic instance templates For details about using Compute Engine images, see the balancing image configuration and deployment speed section later in this document. Managed instance groups Managed instance groups work with Cloud Load Balancing (discussed later in this document) to distribute traffic to groups of identically configured instances that are copied across zones. Managed instance groups allow for features like autoscaling and autohealing, where the managed instance group can delete and recreate instances automatically. Reservations Compute Engine allows for the reservation of VM instances in a specific zone, using custom or predefined machine types, with or without additional GPUs or local SSDs. In order to assure capacity for your mission critical workloads for DR, you should create reservations in your DR target zones. Without reservations, there is a possibility that you might not get the on-demand capacity you need to meet your recovery time objective. Reservations can be useful in cold, warm, or hot DR scenarios. They let you keep recovery resources available for failover to meet lower RTO needs, without having to fully configure and deploy them in advance. 
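To tie these building blocks together, the following sketch creates an instance template, a regional MIG that uses it, and a reservation that holds capacity in a DR target zone. The names, machine type, image family, region, and zone are illustrative placeholders, not prescribed values:
# Hypothetical template that captures the VM configuration for the recovery environment.
gcloud compute instance-templates create dr-web-template \
  --machine-type e2-standard-2 \
  --image-family debian-12 \
  --image-project debian-cloud
# Regional MIG that spreads identical instances across zones in the recovery region.
gcloud compute instance-groups managed create dr-web-mig \
  --template dr-web-template \
  --size 2 \
  --region us-central1
# Reservation that holds on-demand capacity in a DR target zone for failover.
gcloud compute reservations create dr-capacity \
  --zone us-central1-b \
  --vm-count 2 \
  --machine-type e2-standard-2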
Persistent disks and snapshots Persistent disks are durable network storage devices that your instances can access. They are independent of your instances, so you can detach and move persistent disks to keep your data even after you delete your instances. You can take incremental backups or snapshots of Compute Engine VMs that you can copy across regions and use to recreate persistent disks in the event of a disaster. Additionally, you can create snapshots of persistent disks to protect against data loss due to user error. Snapshots are incremental, and take only minutes to create even if your snapshot disks are attached to running instances. Persistent disks have built-in redundancy to protect your data against equipment failure and to ensure data availability through data center maintenance events. Persistent disks are either zonal or regional. Regional persistent disks replicate writes across two zones in a region. In the event of a zonal outage, a backup VM instance can force-attach a regional persistent disk in the secondary zone. To learn more, see High availability options using regional persistent disks. Transparent maintenance Google regularly maintains its infrastructure by patching systems with the latest software, performing routine tests and preventative maintenance, and working to ensure that Google infrastructure is as fast and efficient as possible. By default, all Compute Engine instances are configured so that these maintenance events are transparent to your applications and workloads. For more information, see Transparent maintenance. When a maintenance event occurs, Compute Engine uses Live Migration to automatically migrate your running instances to another host in the same zone. Live Migration lets Google perform maintenance that's integral to keeping infrastructure protected and reliable without interrupting any of your VMs. Virtual disk import tool The virtual disk import tool lets you import file formats including VMDK, VHD, and RAW to create new Compute Engine virtual machines. Using this tool, you can create Compute Engine virtual machines that have the same configuration as your on-premises virtual machines. This is a good approach for when you are not able to configure Compute Engine images from the source binaries of software that's already installed on your images. Automated backups You can automate backups of your Compute Engine instances using tags. For example, you can create a backup plan template using Backup and DR Service, and automatically apply the template to your Compute Engine instances. For more information, see Automate protection of new Compute Engine instances. Cloud Storage Cloud Storage is an object store that's ideal for storing backup files. It provides different storage classes that are suited for specific use cases, as outlined in the following diagram. In DR scenarios, Nearline, Coldline, and Archive storage are of particular interest. These storage classes reduce your storage cost compared to Standard storage. However, there are additional costs associated with retrieving data or metadata stored in these classes, as well as minimum storage durations that you are charged for. Nearline is designed for backup scenarios where access is at most once a month, which is ideal for allowing you to undertake regular DR stress tests while keeping costs low. Nearline, Coldline, and Archive are optimized for infrequent access, and the pricing model is designed with this in mind. 
Therefore, you are charged for minimum storage durations, and there are additional costs for retrieving data or metadata in these classes earlier than the minimum storage duration for the class. To protect your data in a Cloud Storage bucket against accidental or malicious deletion, you can use the Soft Delete feature to preserve deleted and overwritten objects for a specified period, and the Object holds feature to prevent deletion or updates to objects. Storage Transfer Service lets you import data from Amazon S3, Azure Blob Storage, or on-premises data sources into Cloud Storage. In DR scenarios, you can use Storage Transfer Service to do the following: Back up data from other storage providers to a Cloud Storage bucket. Move data from a bucket in a dual-region or multi-region to a bucket in a region to lower your costs for storing backups. Filestore Filestore instances are fully managed NFS file servers for use with applications running on Compute Engine instances or GKE clusters. Filestore Basic and Zonal tiers are zonal resources and don't support replication across zones, while Filestore Enterprise tier instances are regional resources. To help you increase the resiliency of your Filestore environment, we recommend that you use Enterprise tier instances. Google Kubernetes Engine GKE is a managed, production-ready environment for deploying containerized applications. GKE lets you orchestrate HA systems, and includes the following features: Node auto repair. If a node fails consecutive health checks over an extended time period (approximately 10 minutes), GKE initiates a repair process for that node. Liveness and readiness probes. You can specify a liveness probe, which periodically tells GKE that the pod is running. If the pod fails the probe, it can be restarted. Multi-zone and regional clusters. You can distribute Kubernetes resources across multiple zones within a region. Multi-cluster Gateway lets you configure shared load balancing resources across multiple GKE clusters in different regions. Backup for GKE lets you back up and restore workloads in GKE clusters. Networking and data transfer The following table provides a summary of the features in Google Cloud networking and data transfer services that serve as building blocks for DR: Product Feature Cloud Load Balancing Health checks Global load balancing Regional load balancing Multi-region failover Multi-protocol load balancing External and internal load balancing Cloud Service Mesh Google-managed service mesh control plane Advanced request routing and rich traffic-control policies Cloud DNS Programmatic DNS management Access control Anycast to serve zones DNS policies Cloud Interconnect Cloud VPN (IPsec VPN) Direct peering Cloud Load Balancing Cloud Load Balancing provides HA for Google Cloud computing products by distributing user traffic across multiple instances of your applications. You can configure Cloud Load Balancing with health checks that determine whether instances are available to do work so that traffic is not routed to failing instances. Cloud Load Balancing provides a single anycast IP address to front your applications. Your applications can have instances running in different regions (for example, in Europe and in the US), and your end users are directed to the closest set of instances. In addition to providing load balancing for services that are exposed to the internet, you can configure internal load balancing for your services behind a private load-balancing IP address. 
This IP address is accessible only to VM instances that are internal to your Virtual Private Cloud (VPC). For more information see Cloud Load Balancing overview. Cloud Service Mesh Cloud Service Mesh is a Google-managed service mesh that's available on Google Cloud. Cloud Service Mesh provides in-depth telemetry to help you gather detailed insights about your applications. It supports services that run on a range of computing infrastructures. Cloud Service Mesh also supports advanced traffic management and routing features, such as circuit breaking and fault injection. With circuit breaking, you can enforce limits on requests to a particular service. When circuit breaking limits are reached, requests are prevented from reaching the service, which prevents the service from degrading further. With fault injection, Cloud Service Mesh can introduce delays or abort a fraction of requests to a service. Fault injection enables you to test your service's ability to survive request delays or aborted requests. For more information, see Cloud Service Mesh overview. Cloud DNS Cloud DNS provides a programmatic way to manage your DNS entries as part of an automated recovery process. Cloud DNS uses Google's global network of Anycast name servers to serve your DNS zones from redundant locations around the world, providing high availability and lower latency for your users. If you chose to manage DNS entries on-premises, you can enable VMs in Google Cloud to resolve these addresses through Cloud DNS forwarding. Cloud DNS supports policies to configure how it responds to DNS requests. For example, you can configure DNS routing policies to steer traffic based on specific criteria, such as enabling failover to a backup configuration to provide high availability, or to route DNS requests based on their geographic location. Cloud Interconnect Cloud Interconnect provides ways to move information from other sources to Google Cloud. We discuss this product later under Transferring data to and from Google Cloud. Management and monitoring The following table provides a summary of the features in Google Cloud management and monitoring services that serve as building blocks for DR: Product Feature Cloud Status Dashboard Status of Google Cloud services Google Cloud Observability Uptime monitoring Alerts Logging Error reporting Google Cloud Managed Service for Prometheus Google-managed Prometheus solution Cloud Status Dashboard The Cloud Status Dashboard shows you the current availability of Google Cloud services. You can view the status on the page, and you can subscribe to an RSS feed that is updated whenever there is news about a service. Cloud Monitoring Cloud Monitoring collects metrics, events, and metadata from Google Cloud, AWS, hosted uptime probes, application instrumentation, and a variety of other application components. You can configure alerting to send notifications to third-party tools such as Slack or Pagerduty in order to provide timely updates to administrators. Cloud Monitoring lets you create uptime checks for publicly available endpoints and for endpoints within your VPCs. For example, you can monitor URLs, Compute Engine instances, Cloud Run revisions, and third-party resources, such as Amazon Elastic Compute Cloud (EC2) instances. Google Cloud Managed Service for Prometheus Google Cloud Managed Service for Prometheus is a Google-managed, multi-cloud, cross-project solution for Prometheus metrics. 
It lets you globally monitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale. For more information, see Google Cloud Managed Service for Prometheus. Cross-platform DR building blocks When you run workloads across more than one platform, a way to reduce the operational overhead is to select tooling that works with all of the platforms you're using. This section discusses some tools and services that are platform-independent and therefore support cross-platform DR scenarios. Infrastructure as code By defining your infrastructure using code, instead of graphical interfaces or scripts, you can adopt declarative templating tools and automate the provisioning and configuration of infrastructure across platforms. For example, you can use Terraform and Infrastructure Manager to actuate your declarative infrastructure configuration. Configuration management tools For large or complex DR infrastructure, we recommend platform-agnostic software management tools such as Chef and Ansible. These tools ensure that reproducible configurations can be applied no matter where your compute workload is. Orchestrator tools Containers can also be considered a DR building block. Containers are a way to package services and introduce consistency across platforms. If you work with containers, you typically use an orchestrator. Kubernetes works not just to manage containers within Google Cloud (using GKE), but provides a way to orchestrate container-based workloads across multiple platforms. Google Cloud, AWS, and Microsoft Azure all provide managed versions of Kubernetes. To distribute traffic to Kubernetes clusters running in different cloud platforms, you can use a DNS service that supports weighted records and incorporates health checking. You also need to ensure you can pull the image to the target environment. This means you need to be able to access your image registry in the event of a disaster. A good option that's also platform-independent is Artifact Registry. Data transfer Data transfer is a critical component of cross-platform DR scenarios. Make sure that you design, implement, and test your cross-platform DR scenarios using realistic mockups of what the DR data transfer scenario calls for. We discuss data transfer scenarios in the next section. Backup and DR Service Backup and DR Service is a backup and DR solution for cloud workloads. It helps you recover data and resume critical business operation, and supports several Google Cloud products and third-party databases and data storage systems. For more information, see Backup and DR Service overview. Patterns for DR This section discusses some of the most common patterns for DR architectures based on the building blocks discussed earlier. Transferring data to and from Google Cloud An important aspect of your DR plan is how quickly data can be transferred to and from Google Cloud. This is critical if your DR plan is based on moving data from on-premises to Google Cloud or from another cloud provider to Google Cloud. This section discusses networking and Google Cloud services that can ensure good throughput. When you are using Google Cloud as the recovery site for workloads that are on-premises or on another cloud environment, consider the following key items: How do you connect to Google Cloud? How much bandwidth is there between you and the interconnect provider? What is the bandwidth provided by the provider directly to Google Cloud? What other data will be transferred using that link? 
For more information about transferring data to Google Cloud, see Migrate to Google Cloud: Transfer your large datasets. Balancing image configuration and deployment speed When you configure a machine image for deploying new instances, consider the effect that your configuration will have on the speed of deployment. There is a tradeoff between the amount of image preconfiguration, the costs of maintaining the image, and the speed of deployment. For example, if a machine image is minimally configured, the instances that use it will require more time to launch, because they need to download and install dependencies. On the other hand, if your machine image is highly configured, the instances that use it launch more quickly, but you must update the image more frequently. The time taken to launch a fully operational instance will have a direct correlation to your RTO. Maintaining machine image consistency across hybrid environments If you implement a hybrid solution (on-premises-to-cloud or cloud-to-cloud), you need to find a way to maintain image consistency across production environments. If a fully configured image is required, consider something like Packer, which can create identical machine images for multiple platforms. You can use the same scripts with platform-specific configuration files. In the case of Packer, you can put the configuration file in version control to keep track of what version is deployed in production. As another option, you can use configuration management tools such as Chef, Puppet, Ansible, or Saltstack to configure instances with finer granularity, creating base images, minimally-configured images, or fully-configured images as needed. You can also manually convert and import existing images such as Amazon AMIs, Virtualbox images, and RAW disk images to Compute Engine. Implementing tiered storage The tiered storage pattern is typically used for backups where the most recent backup is on faster storage, and you slowly migrate your older backups to lower cost (but slow) storage. By applying this pattern, you migrate backups between buckets of different storage classes, typically from Standard to lower cost storage classes, such as Nearline and Coldline. To implement this pattern, you can use Object Lifecycle Management. For example, you can automatically change the storage class of objects older than a certain amount of time to Coldline. What's next Read about Google Cloud geography and regions. Read other articles in this DR series: Disaster recovery planning guide Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthors: Grace Mollison | Solutions LeadMarco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Building_blocks_of_reliability.txt b/Building_blocks_of_reliability.txt new file mode 100644 index 0000000000000000000000000000000000000000..24bc00051d7d57ce403d1e168f366747f2d86131 --- /dev/null +++ b/Building_blocks_of_reliability.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/infra-reliability-guide/building-blocks +Date Scraped: 2025-02-23T11:54:06.507Z + +Content: +Home Docs Cloud Architecture Center Send feedback Building blocks of reliability in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC Google Cloud infrastructure services run in locations around the globe. The locations are divided into failure domains called regions and zones, which are the foundational building blocks for designing reliable infrastructure for your cloud workloads. A failure domain is a resource or a group of resources that can fail independently of other resources. A standalone Compute Engine VM is an example of a resource that's a failure domain. A Google Cloud region or zone is an example of a failure domain that consists of a group of resources. When an application is distributed redundantly across failure domains, it can achieve a higher aggregated level of availability than that provided by each failure domain. This part of the Google Cloud infrastructure reliability guide describes the building blocks of reliability in Google Cloud and how they affect the availability of your cloud resources. Regions and zones Regions are independent geographic areas that consist of zones. Zones and regions are logical abstractions of underlying physical resources. For more information about region-specific considerations, see Geography and regions. Platform availability Google Cloud infrastructure is designed to tolerate and recover from failures. Google continually invests in innovative approaches to maintain and improve the reliability of Google Cloud. The following capabilities of Google Cloud infrastructure help to provide a reliable platform for your cloud workloads: Geographically separated regions to mitigate the effects of natural disasters and region outages on global services. Hardware redundancy and replication to avoid single points of failure. Live migration of resources during maintenance events. For example, during planned infrastructure maintenance, Compute Engine VMs can be moved to another host in the same zone by using live migration. A secure-by-design infrastructure foundation for the physical infrastructure and software on which Google Cloud runs, and operational security controls to protect your data and workloads. For more information, see Google infrastructure security design overview. A high-performance backbone network that uses an advanced software-defined networking (SDN) approach to network management, with edge-caching services to deliver consistent performance that scales well. Continuous monitoring and reporting. You can view the status of Google Cloud services in every location by using the Google Cloud Service Health Dashboard. Annual, company-wide Disaster Recovery Testing (DiRT) events to ensure that Google Cloud services and internal business operations continue to run during a disaster. 
A change management approach that emphasizes reliability across all the phases of the software development lifecycle for any changes to the Google Cloud platform and services. Google Cloud infrastructure is designed to support the following target levels of availability for most customer workloads: Deployment location Availability (uptime) % Estimated maximum downtime Single zone 3 nines: 99.9% 43.2 minutes in a 30-day month Multiple zones in a region 4 nines: 99.99% 4.3 minutes in a 30-day month Multiple regions 5 nines: 99.999% 26 seconds in a 30-day month Note: For more information about region-specific considerations, see Geography and regions. The availability percentages in the preceding table are targets. The uptime Service Level Agreements (SLAs) for specific Google Cloud services might be different from these availability targets. For example, the uptime SLA for a Bigtable instance depends on the number of clusters, their distribution across locations, and the routing policy that you configure. The minimum uptime SLA for a Bigtable instance with clusters in three or more regions is 99.999% if the multi-cluster routing policy is configured. But, if the single-cluster routing policy is configured, then the minimum uptime SLA is 99.9% regardless of the number of clusters and their distribution. The diagrams in this section show Bigtable instances with varying cluster sizes and the consequent differences in their uptime SLAs. Single cluster The following diagram shows a single-cluster Bigtable instance, with a minimum uptime SLA of 99.9%: Multiple clusters The following diagram shows a multi-cluster Bigtable instance in multiple zones within a single region, with multi-cluster routing (minimum uptime SLA: 99.99%): Multiple clusters The following diagram shows a multi-cluster Bigtable instance in three regions, with multi-cluster routing (minimum uptime SLA: 99.999%): Aggregate infrastructure availability To run your applications in Google Cloud, you use infrastructure resources like VMs and databases. These infrastructure resources, together, constitute your application's infrastructure stack. The following diagram shows an example of an infrastructure stack in Google Cloud and the availability SLA for each resource in the stack: This example infrastructure stack includes the following Google Cloud resources: A regional external Application Load Balancer receives and responds to user requests. A regional managed instance group (MIG) is the backend for the regional external Application Load Balancer. The MIG contains two Compute Engine VMs in different zones. Each VM hosts an instance of a web server. An internal load balancer handles communication between the web server and the application server instances. A second regional MIG is the backend for the internal load balancer. This MIG has two Compute Engine VMs in different zones. Each VM hosts an instance of an application server. A Cloud SQL instance that's configured for HA is the database for the application. The primary database instance is replicated synchronously to a standby database instance. The aggregate availability that you can expect from an infrastructure stack like the preceding example depends on the following factors: Google Cloud SLAs Resource redundancy Stack depth Google Cloud SLAs The uptime SLAs of the Google Cloud services that you use in your infrastructure stack influence the minimum aggregate availability that you can expect from the stack. 
The following tables present a comparison of the uptime SLAs for some services: Compute services Monthly Uptime SLA Estimated maximum downtime in a 30-day month Compute Engine VM 99.9% 43.2 minutes GKE Autopilot pods in multiple zones 99.9% 43.2 minutes Cloud Run service 99.95% 21.6 minutes Database services Monthly Uptime SLA Estimated maximum downtime in a 30-day month Cloud SQL for PostgreSQL instance (Enterprise edition) 99.95% 21.6 minutes AlloyDB for PostgreSQL instance 99.99% 4.3 minutes Spanner multi-region instance 99.999% 26 seconds For the SLAs of other Google Cloud services, see Google Cloud Service Level Agreements. As the preceding tables show, the Google Cloud services that you choose for each tier of your infrastructure stack directly affect the overall uptime that you can expect from the infrastructure stack. To increase the expected availability of a workload that's deployed on a Google Cloud resource, you can provision redundant instances of the resource, as described in the next section. Resource redundancy Resource redundancy means provisioning two or more identical instances of a resource and deploying the same workload on all the resources in the group. For example, to host the web tier of an application, you might provision a MIG containing multiple, identical Compute Engine VMs. If you distribute a group of resources redundantly across multiple failure domains—for example, two Google Cloud zones—the resource availability that you can expect from that group is higher than the uptime SLA of each resource in the group. This higher availability is because the probability that every resource in the group fails at the same time is lower than the probability that resources in a single failure domain have a coordinated failure. For example, if the availability SLA for a resource is 99.9%, the probability that the resource fails is 0.001 (1 minus the SLA). If you distribute a workload across two instances of this resource that are provisioned in separate failure domains, then the probability that both the resources fail at the same time is 0.000001 (that is, 0.001 x 0.001). This failure probability translates to a theoretical availability of 99.9999% for the group of two resources. However, the actual availability that you can expect is limited to the target availability of the deployment location: 99.9% if the resources are in a single Google Cloud zone, 99.99% for a multi-zone deployment, and 99.999% if the redundant resources are distributed across multiple regions. Note: For more information about region-specific considerations, see Geography and regions. Stack depth The depth of an infrastructure stack is the number of distinct tiers (or layers) in the stack. Each tier in an infrastructure stack contains resources that provide a distinct function for the application. For example, the middle tier in a three-tier stack might use Compute Engine VMs or a GKE cluster to host application servers. Each tier in an infrastructure stack typically has a tight interdependence with its adjacent tiers. That means if any tier of the stack is unavailable, the entire stack becomes unavailable. You can calculate the expected aggregate availability of an N-tier infrastructure stack by using the following formula: $$ tier1\_availability * tier2\_availability * tierN\_availability $$ For example, if every tier in a three-tier stack is designed to provide 99.9% availability, then the aggregate availability of the stack is approximately 99.7% (0.999 x 0.999 x 0.999). 
That means, the aggregate availability of a multi-tier stack is lower than the availability of the tier that provides the least availability. As the number of interdependent tiers in a stack increases, the aggregate availability of the stack decreases, as shown in the following table. Each example stack in the table has a different number of tiers and every tier is assumed to provide 99.9% availability. Tier Stack A Stack B Stack C Frontend 99.9% 99.9% 99.9% Application tier 99.9% 99.9% 99.9% Middle tier – 99.9% 99.9% Data tier – – 99.9% Aggregate availability of the stack 99.8% 99.7% 99.6% Estimated maximum downtime of the stack in a 30-day month 86 minutes 130 minutes 173 minutes Summary of design considerations When you design your applications, consider the aggregate availability of the Google Cloud infrastructure stack. The availability of each Google Cloud resource in your infrastructure stack influences the aggregate availability of the stack. When you choose Google Cloud services to build your infrastructure stack, consider the availability SLA of the services. To improve the availability of the function (for example, compute or database) that's provided by a resource, you can provision redundant instances of the resource. When you design an architecture with redundant resources, besides the availability benefits, you must also consider the potential effects on operational complexity, latency, and cost. The number of tiers in an infrastructure stack (that is, the depth of the stack) has an inverse relationship with the aggregate availability of the stack. Consider this relationship when you design or modify your stack. For example calculations of aggregate availability, see the following sections: Example calculation: Single-zone deployment Example calculation: Multi-zone deployment Example calculation: Multi-region deployment with regional load balancing Example calculation: Multi-region deployment with global load balancing Location scopes The location scope of a Google Cloud resource determines the extent to which an infrastructure failure can affect the resource. Most resources that you provision in Google Cloud have one of the following location scopes: zonal, regional, multi-region, or global. The location scope of some resource types is fixed; that is, you can't choose or change the location scope. For example, Virtual Private Cloud (VPC) networks are global resources, and Compute Engine virtual machines (VMs) are zonal resources. For certain resources, you can choose the location scope while provisioning the resource. For example, when you create a Google Kubernetes Engine (GKE) cluster, you can choose to create a zonal or regional GKE cluster. The following sections describe location scopes in more detail. Zonal resources Zonal resources are deployed within a single zone in a Google Cloud region. The following are examples of zonal resources. This list is not exhaustive. Compute Engine VMs Zonal managed instance groups (MIGs) Zonal persistent disks Single-zone GKE clusters Filestore Basic and Zonal instances Dataflow jobs Cloud SQL instances Dataproc clusters on Compute Engine A failure in a zone might affect the zonal resources that are provisioned within that zone. Zones are designed to minimize the risk of correlated failures with other zones in the region. A failure in one zone usually does not affect the resources in the other zones in the region. Also, a failure in a zone doesn't necessarily cause all the infrastructure in that zone to be unavailable. 
The zone merely defines the expected boundary for the effect of a failure. To protect applications that use zonal resources against zonal incidents, you can distribute or replicate the resources across multiple zones or regions. For more information, see Design reliable infrastructure for your workloads in Google Cloud. Regional resources Regional resources are deployed redundantly across multiple zones within a region. The following are examples of regional resources. This list is not exhaustive. Regional MIGs Regional Cloud Storage buckets Regional persistent disks Regional GKE clusters with the default (multi-zone) configuration VPC subnets Regional external Application Load Balancers Regional Spanner instances Filestore Enterprise instances Cloud Run services Regional resources are resilient to incidents in a specific zone. A region outage can affect some or all the regional resources provisioned within that region. Such outages can be caused by natural disasters or by large-scale infrastructure failures. Note: For more information about region-specific considerations, see Geography and regions. Multi-region resources Multi-region resources are distributed across specific regions. The following are examples of multi-region resources. This list is not exhaustive. Dual-region and multi-region Cloud Storage buckets Multi-region Spanner instances Multi-cluster (multi-region) Bigtable instances Multi-region key rings in Cloud Key Management Service For a complete list of the Google Services that are available in multi-region configurations, see Products available by location. Multi-region resources are resilient to incidents in specific regions and zones. An infrastructure outage that occurs in multiple regions can affect the availability of some or all the multi-region resources that are provisioned in the affected regions. Global resources Global resources are available across all Google Cloud locations. The following are examples of global resources. This list is not exhaustive. Projects. For guidance and best practices about organizing your Google Cloud resources into folders and projects, see Decide a resource hierarchy for your Google Cloud landing zone. VPC networks, including associated routes and firewall rules Cloud DNS zones Global external Application Load Balancers Global key rings in Cloud Key Management Service Pub/Sub topics Secrets in Secret Manager For a complete list of the Google Services that are available globally, see Global products. Global resources are resilient to zonal and regional incidents. These resources don't rely on infrastructure in any specific region. Google Cloud has systems and processes that help to minimize the risk of global infrastructure outages. Google also continually monitors the infrastructure, and quickly resolves any global outages. The following table summarizes the relative resilience of zonal, regional, multi-region, and global resources to application and infrastructure issues. It also describes the effort required to set up these resources, and recommendations to mitigate the effects of outages. Resource scope Resilience Recommendations to mitigate the effects of infrastructure outages Zonal Low Deploy the resources redundantly in multiple zones or regions. Regional Medium Deploy the resources redundantly in multiple regions. Multi-region or global High Manage changes carefully, and use defense-in-depth fallbacks where possible. For more information, see Recommendations to manage the risk of outages of global resources. 
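To make the choice of location scope concrete, the following sketch shows the same GKE cluster created first with a zonal scope and then with a regional scope; the cluster names, zone, and region are illustrative placeholders:
# Zonal cluster: the control plane and nodes run in a single zone.
gcloud container clusters create demo-zonal --zone us-central1-a
# Regional cluster: the control plane and nodes are replicated across zones in the region.
gcloud container clusters create demo-regional --region us-central1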
Note: For more information about region-specific considerations, see Geography and regions. Recommendations to manage the risk of outages of global resources To take advantage of the resilience of global resources to zone and region outages, you might consider using certain global resources in your architecture. Google recommends the following approaches to manage the risk of outages of global resources: Careful management of changes to global resources Global resources are resilient to physical failures. The configuration for such resources is globally scoped. Thus, setting up and configuring a single global resource is easier than operating multiple regional resources. However, a critical error in the configuration of a global resource might make it a single point of failure (SPOF). For example, you might use a global load balancer as the frontend for a geographically-distributed application. A global load balancer is often a good choice for such an application. However, an error in the configuration of the load balancer can cause it to become unavailable across all geographies. To avoid this risk, you must manage configuration changes to global resources carefully. For more information, see Control changes to global resources. Use of regional resources as defense-in-depth fallbacks For applications that have exceptionally high availability requirements, regional defense-in-depth fallbacks can help minimize the effect of outages of global resources. Consider the example of a geographically-distributed application that has a global load balancer as the frontend. To ensure that the application remains accessible even if the global load balancer is affected by a global outage, you can deploy regional load balancers. You can configure the clients to prefer the global load balancer, but fail over to the nearest regional load balancer if the global load balancer is not available. Example architecture with zonal, regional, and global resources Your cloud topology can include a combination of zonal, regional, and global resources, as shown in the following diagram. The following diagram shows an example architecture for a multi-tier application that's deployed in Google Cloud. As shown in the preceding diagram, a global external HTTP/S load balancer receives client requests. The load balancer distributes the requests to the backend, which is a regional MIG that has two Compute Engine VMs. The application running on the VMs writes data to and reads from a Cloud SQL database. The database is configured for HA. The primary and standby instances of the database are provisioned in separate zones, and the primary database is replicated synchronously to the standby database. In addition, the database is backed up automatically to a multi-region bucket in Cloud Storage. The following table summarizes the Google Cloud resources in the preceding architecture and the resilience of each resource to zone and region outages: Resource Resilience to outages VPC network VPC networks, including associated routes and firewall rules, are global resources. They are resilient to zone and region outages. Subnets VPC subnets are regional resources. They are resilient to zone outages. Global external HTTP/S load balancer Global external HTTP/S load balancers are resilient to zone and region outages. Regional MIG Regional MIGs are resilient to zone outages. Compute Engine VMs Compute Engine VMs are zonal resources. If a zone outage occurs, the individual Compute Engine VMs might be affected. 
However, the application can continue to serve requests because the backend for the load balancer is a regional MIG, and not standalone VMs. Cloud SQL instances The Cloud SQL deployment in this architecture is configured for HA; that is, the deployment includes a primary-standby pair of database instances. The primary database is replicated synchronously to the standby database by using regional persistent disks. If an outage occurs in the zone that hosts the primary database, the Cloud SQL service fails over to the standby database automatically. If a region outage occurs, you can restore the database in a different region by using the database backups. Multi-region Cloud Storage bucket Data that's stored in multi-region Cloud Storage buckets is resilient to single-region outages. Persistent disks Persistent disks can be zonal or regional. Regional persistent disks are resilient to zone outages. To prepare for recovery from region outages, you can schedule snapshots of persistent disks and store the snapshots in a multi-region Cloud Storage bucket. Note: For more information about region-specific considerations, see Geography and regions. Previous arrow_back Overview of reliability Next Assess reliability requirements arrow_forward Send feedback \ No newline at end of file diff --git a/Building_internet_connectivity_for_private_VMs.txt b/Building_internet_connectivity_for_private_VMs.txt new file mode 100644 index 0000000000000000000000000000000000000000..feb271afd9f91447c8e42985ebdd73824ad22696 --- /dev/null +++ b/Building_internet_connectivity_for_private_VMs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/building-internet-connectivity-for-private-vms +Date Scraped: 2025-02-23T11:53:46.302Z + +Content: +Home Docs Cloud Architecture Center Send feedback Building internet connectivity for private VMs Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-05-10 UTC This document describes options for connecting to and from the internet using Compute Engine resources that have private IP addresses. This is helpful for developers who create Google Cloud services and for network administrators of Google Cloud environments. This tutorial assumes you are familiar with deploying VPCs, with Compute Engine, and with basic TCP/IP networking. Objectives Learn about the options available for connecting between private VMs outside their VPC. Create an instance of Identity-Aware Proxy (IAP) for TCP tunnels that's appropriate for interactive services such as SSH. Create a Cloud NAT instance to enable VMs to make outbound connections to the internet. Configure an HTTP load balancer to support inbound connections from the internet to your VMs. Costs This tutorial uses billable components of Google Cloud, including: Compute Engine Cloud NAT Cloud Load Balancing Use the pricing calculator to generate a cost estimate based on your projected usage. We calculate that the total to run this tutorial is less than US$5 per day. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. 
After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Introduction Private IP addresses provide a number of advantages over public (external) IP addresses, including: Reduced attack surface. Removing external IP addresses from VMs makes it more difficult for attackers to reach the VMs and exploit potential vulnerabilities. Increased flexibility. Introducing a layer of abstraction, such as a load balancer or a NAT service, allows more reliable and flexible service delivery when compared with static, external IP addresses. This solution discusses three scenarios, as described in the following table: Interactive Fetching Serving An SSH connection is initiated from a remote host directly to a VM using IAP for TCP. Example: Remote administration using SSH or RDP A connection is initiated by a VM to an external host on the internet using Cloud NAT. Example: OS updates, external APIs A connection is initiated by a remote host to a VM through a global Google Cloud load balancer. Example: Application frontends, WordPress Some environments might involve only one of these scenarios. However, many environments require all of these scenarios, and this is fully supported in Google Cloud. The following sections describe a multi-region environment with an HTTP load-balanced service backed by two VMs in two regions. These VMs use Cloud NAT for outgoing communications. For administration, the VMs are accessible through SSH tunneled over IAP. The following diagram provides an overview of all three use cases and the relevant components. Creating VM instances To begin the tutorial, you create a total of four virtual machine (VM) instances—two instances per region in two different regions. You give all of the instances the same tag, which is used later by a firewall rule to allow incoming traffic to reach your instances. The following diagram shows the VM instances and instance groups you create, distributed in two zones. The startup script that you add to each instance installs Apache and creates a unique home page for each instance. The procedure includes instructions for using both the Google Cloud console and gcloud commands. The easiest way to use gcloud commands is to use Cloud Shell. Console In the Google Cloud console, go to the VM instances page: GO TO THE VM INSTANCES PAGE Click Create instance. Set Name to www-1. Set the Zone to us-central1-b. Click Management, Security, Disks, Networking, Sole Tenancy. Click Networking and make the following settings: For HTTP traffic, in the Network tags box, enter http-tag. Under Network Interfaces, click edit. Under External IP, select None. Note: Don't select the firewall boxes that allow HTTP or HTTPS traffic, because doing so creates unneeded firewall rules. Click Management and set Startup script to the following: sudo apt-get update sudo apt-get install apache2 -y sudo a2ensite default-ssl sudo a2enmod ssl sudo service apache2 restart echo '
server 1
' | sudo tee /var/www/html/index.html Click Create. Create www-2 with the same settings, except set Startup script to the following: sudo apt-get update sudo apt-get install apache2 -y sudo a2ensite default-ssl sudo a2enmod ssl sudo service apache2 restart echo '
server 2
' | sudo tee /var/www/html/index.html Create www-3 with the same settings, except set Zone to europe-west1-b and set Startup script to the following: sudo apt-get update sudo apt-get install apache2 -y sudo a2ensite default-ssl sudo a2enmod ssl sudo service apache2 restart echo '
server 3
' | sudo tee /var/www/html/index.html Create www-4 with the same settings, except set Zone to europe-west1-b and set Startup script to the following: sudo apt-get update sudo apt-get install apache2 -y sudo a2ensite default-ssl sudo a2enmod ssl sudo service apache2 restart echo '
server 4
' | sudo tee /var/www/html/index.html gcloud Open Cloud Shell: OPEN Cloud Shell Create an instance named www-1 in us-central1-b with a basic startup script: gcloud compute instances create www-1 \ --image-family debian-9 \ --image-project debian-cloud \ --zone us-central1-b \ --tags http-tag \ --network-interface=no-address \ --metadata startup-script="#! /bin/bash sudo apt-get update sudo apt-get install apache2 -y sudo service apache2 restart echo '
www-1
' | tee /var/www/html/index.html EOF" Create an instance named www-2 in us-central1-b: gcloud compute instances create www-2 \ --image-family debian-9 \ --image-project debian-cloud \ --zone us-central1-b \ --tags http-tag \ --network-interface=no-address \ --metadata startup-script="#! /bin/bash sudo apt-get update sudo apt-get install apache2 -y sudo service apache2 restart echo '
www-2
' | tee /var/www/html/index.html EOF" Create an instance named www-3, this time in europe-west1-b: gcloud compute instances create www-3 \ --image-family debian-9 \ --image-project debian-cloud \ --zone europe-west1-b \ --tags http-tag \ --network-interface=no-address \ --metadata startup-script="#! /bin/bash sudo apt-get update sudo apt-get install apache2 -y sudo service apache2 restart echo '
www-3
' | tee /var/www/html/index.html EOF" Create an instance named www-4, this one also in europe-west1-b: gcloud compute instances create www-4 \ --image-family debian-9 \ --image-project debian-cloud \ --zone europe-west1-b \ --tags http-tag \ --network-interface=no-address \ --metadata startup-script="#! /bin/bash sudo apt-get update sudo apt-get install apache2 -y sudo service apache2 restart echo '
www-4
' | tee /var/www/html/index.html EOF" Terraform Open Cloud Shell: OPEN Cloud Shell Clone the repository from GitHub: git clone https://github.com/GoogleCloudPlatform/gce-public-connectivity-terraform Change the working directory to the repository directory: cd iap Install Terraform. Replace [YOUR-ORGANIZATION-NAME] in the scripts/set_env_vars.sh file with your Google Cloud organization name. Set environment variables: source scripts/set_env_vars.sh Apply the Terraform configuration: terraform apply Note: If you get an error, wait a few moments, and then run the terraform apply command a second time. Configuring IAP tunnels for interacting with instances To log in to VM instances, you connect to the instances using tools like SSH or RDP. In the configuration you're creating in this tutorial, you can't directly connect to instances. However, you can use TCP forwarding in IAP, which enables remote access for these interactive patterns. For this tutorial, you use SSH. In this section you do the following: Connect to a Compute Engine instance using the IAP tunnel. Add a second user with IAP tunneling permission in IAM. The following diagram illustrates the architecture that you build in this section. The grey areas are discussed in other parts of this tutorial. Limitations of IAP Bandwidth: The IAP TCP forwarding feature isn't intended for bulk transfer of data. IAP reserves the right to rate-limit users who are deemed to be abusing this service. Connection length: IAP won't disconnect active sessions unless required for maintenance. Protocol: IAP for TCP doesn't support UDP. Create firewall rules to allow tunneling In order to connect to your instances using SSH, you need to open an appropriate port on the firewall. IAP connections come from a specific set of IP addresses (35.235.240.0/20). Therefore, you can limit the rule to this CIDR range. Console In the Google Cloud console, go to the Firewall policies page: GO TO THE FIREWALL POLICIES PAGE Click Create firewall rule. Set Name to allow-ssh-from-iap. Leave VPC network as default. Under Targets, select Specified target tags. Set Target tags to http-tag. Leave Source filter set to IP ranges. Set Source IP Ranges to 35.235.240.0/20. Set Allowed protocols and ports to tcp:22. Click Create. It might take a moment for the new firewall rule to be displayed in the console. gcloud Create a firewall rule named allow-ssh-from-iap: gcloud compute firewall-rules create allow-ssh-from-iap \ --source-ranges 35.235.240.0/20 \ --target-tags http-tag \ --allow tcp:22 Terraform Copy the firewall rules Terraform file to the current directory: cp iap/vpc_firewall_rules.tf . Apply the Terraform configuration: terraform apply Note: If you get an error, wait a few moments, and then run the terraform apply command a second time. Test tunneling In Cloud Shell, connect to instance www-1 using IAP: gcloud compute ssh www-1 \ --zone us-central1-b \ --tunnel-through-iap If the connection succeeds, you have an SSH session that is tunneled through IAP directly to your private VM. Grant access to additional users IAP uses your existing project roles and permissions when you connect to VM instances. By default, instance owners are the only users that have the IAP Secured Tunnel User role. If you want to allow other users to access your VMs using IAP tunneling, you need to grant this role to those users. 
In the Google Cloud console, go to Security > Identity-Aware Proxy. If you see a message that tells you that you need to configure the OAuth consent screen, disregard the message; it's not relevant to IAP for TCP. Select the SSH and TCP Resources tab. Select the VMs you've created. On the right-hand side, click Add Principal. Add the users you want to grant permissions to, select the IAP-secured Tunnel User role, and then click Save. Summary You can now connect to your instances using SSH to administer the instances or troubleshoot them. Many applications need to make outgoing connections in order to download patches, connect with partners, or download resources. In the next section, you configure Cloud NAT to allow your VMs to reach these resources. Deploying Cloud NAT for fetching The Cloud NAT service allows Google Cloud VM instances that don't have external IP addresses to connect to the internet. Cloud NAT implements outbound NAT in conjunction with a default route to allow your instances to reach the internet. It doesn't implement inbound NAT. Hosts outside of your VPC network can respond only to established connections initiated by your instances; they cannot initiate their own connections to your instances using Cloud NAT. NAT is not used for traffic within Google Cloud. Cloud NAT is a regional resource. You can configure it to allow traffic from all primary and secondary IP address ranges of subnets in a region, or you can configure it to apply to only some of those ranges. In this section, you configure a Cloud NAT gateway in each region that you used earlier. The following diagram illustrates the architecture that you build in this section. The grey areas are discussed in other parts of this tutorial. Note: You can create Cloud NAT gateways in a subset of the regions you're using. You might do this if not all regions need to fetch data from the internet, or if some regions will use external IP addresses to access the internet. Create a NAT configuration using Cloud Router You must create the Cloud Router instance in the same region as the instances that need to use Cloud NAT. The Cloud Router is used only to hold the NAT configuration information (the control plane); it isn't part of the actual NAT gateway's data path. This configuration allows all instances in the region to use Cloud NAT for all primary and alias IP ranges. It also automatically allocates the external IP addresses for the NAT gateway. For more options, see the gcloud compute routers documentation. Note: Cloud NAT uses Cloud Router only to group NAT configuration information (that is, for the control plane). Cloud NAT does not direct a Cloud Router to use BGP or to add routes. NAT traffic does not pass through a Cloud Router (the data plane). Console Go to the Cloud NAT page: GO TO THE CLOUD NAT PAGE Click Get started or Create NAT gateway. Set Gateway name to nat-config. Set VPC network to default. Set Region to us-central1. Under Cloud Router, select Create new router, and then do the following: Set Name to nat-router-us-central1. Click Create. Click Create.
Repeat the procedure, but substitute these values: Name: nat-router-europe-west1 Region: europe-west1 gcloud Create Cloud Router instances in each region: gcloud compute routers create nat-router-us-central1 \ --network default \ --region us-central1 gcloud compute routers create nat-router-europe-west1 \ --network default \ --region europe-west1 Configure the routers for Cloud NAT: gcloud compute routers nats create nat-config \ --router-region us-central1 \ --router nat-router-us-central1 \ --nat-all-subnet-ip-ranges \ --auto-allocate-nat-external-ips gcloud compute routers nats create nat-config \ --router-region europe-west1 \ --router nat-router-europe-west1 \ --nat-all-subnet-ip-ranges \ --auto-allocate-nat-external-ips Terraform Copy the Terraform NAT configuration file to the current directory: cp nat/vpc_nat_gateways.tf . Apply the Terraform configuration: terraform apply Note: If you get an error, wait a few moments, and then run the terraform apply command a second time. Test Cloud NAT configuration You can now test that you're able to make outbound requests from your VM instances to the internet. Wait up to 3 minutes for the NAT configuration to propagate to the VM. In Cloud Shell, connect to your instance using the tunnel you created: gcloud compute ssh www-1 --tunnel-through-iap When you're logged in to the instance, use the curl command to make an outbound request: curl example.com You see the following output: Example Domain ... ... ...
Example Domain
This domain is established to be used for illustrative examples in documents. You may use this domain in examples without prior coordination or asking for permission.
More information...
If the command is successful, you've validated that your VMs can connect to the internet using Cloud NAT. Summary Your instances can now make outgoing connections in order to download patches, connect with partners, or download resources. In the next section, you add load balancing to your deployment and configure it to allow remote clients to initiate requests to your servers. Creating an HTTP load-balanced service for serving Using Cloud Load Balancing for your application has many advantages. It can provide seamless, scalable load balancing for over a million queries per second. It can also offload SSL overhead from your VMs, route queries to the best region for your users based on both location and availability, and support modern protocols such as HTTP/2 and QUIC. For this tutorial, you take advantage of another key feature: global anycast IP connection proxying. This feature provides a single public IP address that's terminated on Google's globally distributed edge. Clients can then connect to resources hosted on private IP addresses anywhere in Google Cloud. This configuration helps protect instances from DDoS attacks and direct attacks. It also enables features such as Google Cloud Armor for even more security. In this section of the tutorial, you do the following: Reset the VM instances to install the Apache web server. Create a firewall rule to allow access from load balancers. Allocate static, global IPv4 and IPv6 addresses for the load balancer. Create an instance group for your instances. Start sending traffic to your instances. Note: This is an example setup only. You should not run these commands on instances that serve production traffic. The following diagram illustrates the architecture that you build in this section. The grey areas are discussed in other parts of this tutorial. Reset VM instances When you created the VM instances earlier in this tutorial, they didn't have access to the internet, because no external IP address was assigned and Cloud NAT was not configured. Therefore, the startup script that installs Apache could not complete successfully. The easiest way to re-run the startup scripts is to reset those instances so that the Apache webserver can be installed and used in the next section. Console In the Google Cloud console, go to the VM instances page: Go to VM instances Select www-1,www-2,www-3, and www-4. Click the Reset button at the top of the page. If you don't see a Reset button, click More actionsmore_vert and choose Reset. Confirm the reset of the four instances by clicking Reset in the dialog. gcloud Reset the four instances: gcloud compute instances reset www-1 \ --zone us-central1-b gcloud compute instances reset www-2 \ --zone us-central1-b gcloud compute instances reset www-3 \ --zone europe-west1-b gcloud compute instances reset www-4 \ --zone europe-west1-b Open the firewall The next task is to create a firewall rule to allow traffic from the load balancers to your VM instances. This rule allows traffic from the Google Cloud address range that's used both by load balancers and health checks. The firewall rule uses the http-tag tag that you created earlier; the firewall rule allows traffic to the designated port to reach instances that have the tag. Console In the Google Cloud console, go to the Firewall policies page: GO TO THE FIREWALL POLICIES PAGE Click Create firewall rule. Set Name to allow-lb-and-healthcheck. Leave the VPC network as default. Under Targets, select Specified target tags. Set Target tags to http-tag. 
Leave Source filter set to IP ranges. Set Source IP Ranges to 130.211.0.0/22 and 35.191.0.0/16. Set Allowed protocols and ports to tcp:80. Click Create. It might take a moment for the new firewall rule to be displayed in the console. gcloud Create a firewall rule named allow-lb-and-healthcheck: gcloud compute firewall-rules create allow-lb-and-healthcheck \ --source-ranges 130.211.0.0/22,35.191.0.0/16 \ --target-tags http-tag \ --allow tcp:80 TerraformNote: These steps configure the whole load balancer. Once you've completed them, you can skip to the "Test the configuration" section. Copy the Terraform load-balancing configuration files to the current directory: cp lb/* . Apply the Terraform configuration: terraform apply Note: If you get an error, wait a few moments, and then run the terraform apply command a second time. Allocate an external IP address for load balancers If you're serving traffic to the internet, you need to allocate an external address for the load balancer. You can allocate an IPv4 address, an IPv6 address, or both. In this section, you reserve static IPv4 and IPv6 addresses suitable for adding to DNS. There is no additional charge for public IP addresses, because they are used with a load balancer. Console In the Google Cloud console, go to the External IP addresses page: GO TO THE EXTERNAL IP ADDRESSES PAGE Click Reserve static address to reserve an IPv4 address. Set Name to lb-ip-cr. Leave Type set to Global. Click Reserve. Click Reserve static address again to reserve an IPv6 address. Set Name to lb-ipv6-cr. Set IP version to IPv6. Leave Type set to Global. Click Reserve. gcloud Create a static IP address named lb-ip-cr for IPv4: gcloud compute addresses create lb-ip-cr \ --ip-version=IPV4 \ --global Create a static IP address named lb-ipv6-cr for IPv6: gcloud compute addresses create lb-ipv6-cr \ --ip-version=IPV6 \ --global Create instance groups and add instances Google Cloud load balancers require instance groups to act as backends for traffic. In this tutorial, you use unmanaged instance groups for simplicity. However, you could also use managed instance groups to take advantage of features such as autoscaling, autohealing, regional (multi-zone) deployment, and auto-updating. In this section, you create an instance group for each of the zones that you're using. Console In the Google Cloud console, go to the Instance groups page: GO TO THE INSTANCE GROUPS PAGE Click Create instance group. On the left-hand side, click New unmanaged instance group. Set Name to us-resources-w. Set Region to us-central1 Set Zone to us-central1-b. Select Network (default) and Subnetwork (default). Under VM instances, do the following: Click Add an instance, and then select www-1. Click Add an instance again, and then select www-2. Click Create. Repeat this procedure to create a second instance group, but use the following values: Name: europe-resources-w Zone: europe-west1-b Instances: www-3 and www-4 In the Instance groups page, confirm that you have two instance groups, each with two instances. 
gcloud Create the us-resources-w instance group: gcloud compute instance-groups unmanaged create us-resources-w \ --zone us-central1-b Add the www-1 and www-2 instances: gcloud compute instance-groups unmanaged add-instances us-resources-w \ --instances www-1,www-2 \ --zone us-central1-b Create the europe-resources-w instance group: gcloud compute instance-groups unmanaged create europe-resources-w \ --zone europe-west1-b Add the www-3 and www-4 instances: gcloud compute instance-groups unmanaged add-instances europe-resources-w \ --instances www-3,www-4 \ --zone europe-west1-b Configure the load balancing service Load balancer functionality involves several connected services. In this section, you set up and connect the services. The services you will create are as follows: Named ports, which the load balancer uses to direct traffic to your instance groups. A health check, which polls your instances to see if they are healthy. The load balancer sends traffic only to healthy instances. Backend services, which monitor instance usage and health. Backend services know whether the instances in the instance group can receive traffic. If the instances can't receive traffic, the load balancer redirects traffic, provided that instances elsewhere have sufficient capacity. A backend defines the capacity of the instance groups that it contains (maximum CPU utilization or maximum queries per second). A URL map, which parses the URL of the request and can forward requests to specific backend services based on the host and path of the request URL. In this tutorial, because you aren't using content-based forwarding, the URL map contains only the default mapping. A target proxy, which receives the request from the user and forwards it to the URL map. Two global forwarding rules, one each for IPv4 and IPv6, that hold the global external IP address resources. Global forwarding rules forward the incoming request to the target proxy. Create the load balancer In this section you create the load balancer and configure a default backend service to handle your traffic. You also create a health check. Note: You must configure at least one backend service to handle your traffic, or you won't be able to create the load balancer. Console Start your configuration In the Google Cloud console, go to the Load balancing page. Go to Load balancing Click Create load balancer. For Type of load balancer, select Application Load Balancer (HTTP/HTTPS) and click Next. For Public facing or internal, select Public facing (external) and click Next. For Global or single region deployment, select Best for global workloads and click Next. For Load balancer generation, select Global external Application Load Balancer and click Next. Click Configure. Basic configuration Set Load balancer name to web-map. Configure the load balancer In the left panel of the Create global external Application Load Balancer page, click Backend configuration. In the Create or select backend services & backend buckets list, select Backend services, and then Create a backend service. You see the Create Backend Service dialog box. Set Name to web-map-backend-service. Set the Protocol. For HTTP protocol, leave the values set to the defaults. For Backend type, select Instance Groups. Under Backends, set Instance group to us-resources-w. Click Add backend. Select the europe-resources-w instance group and then do the following: For HTTP traffic between the load balancer and the instances, make sure that Port numbers is set to 80. 
Leave the default values for the rest of the fields. Click Done. Under Health check, select Create a health check or Create another health check. Set the following health check parameters: Name: http-basic-check Protocol: HTTP Port: 80 Click Create. gcloud For each instance group, define an HTTP service and map a port name to the relevant port: gcloud compute instance-groups unmanaged set-named-ports us-resources-w \ --named-ports http:80 \ --zone us-central1-b gcloud compute instance-groups unmanaged set-named-ports europe-resources-w \ --named-ports http:80 \ --zone europe-west1-b Create a health check: gcloud compute health-checks create http http-basic-check \ --port 80 Create a backend service: gcloud compute backend-services create web-map-backend-service \ --protocol HTTP \ --health-checks http-basic-check \ --global You set the --protocol flag to HTTP because you're using HTTP to go to the instances. For the health check, you use the http-basic-check health check that you created earlier. Add your instance groups as backends to the backend services: gcloud compute backend-services add-backend web-map-backend-service \ --balancing-mode UTILIZATION \ --max-utilization 0.8 \ --capacity-scaler 1 \ --instance-group us-resources-w \ --instance-group-zone us-central1-b \ --global gcloud compute backend-services add-backend web-map-backend-service \ --balancing-mode UTILIZATION \ --max-utilization 0.8 \ --capacity-scaler 1 \ --instance-group europe-resources-w \ --instance-group-zone europe-west1-b \ --global Set host and path rules Console In the left panel of the Create global external Application Load Balancer page, click Host and path rules. For this tutorial, you don't need to configure any host or path rules, because all traffic will go to the default rule. Therefore, you can accept the pre-populated default values. gcloud Create a default URL map that directs all incoming requests to all of your instances: gcloud compute url-maps create web-map \ --default-service web-map-backend-service Create a target HTTP proxy to route requests to the URL map: gcloud compute target-http-proxies create http-lb-proxy \ --url-map web-map Configure the frontend and finalize your setup Console In the left panel of the Create global external Application Load Balancer page, click Frontend configuration. Set Name to http-cr-rule. Set Protocol to HTTP. Set IP version to IPv4. In the IP address list, select lb-ip-cr, the address that you created earlier. Confirm that Port is set to 80. Click Done. Click Add frontend IP and port. Set Name to http-cr-ipv6-rule. For Protocol, select HTTP. Set IP version to IPv6. In the IP address list, select lb-ipv6-cr, the other address that you created earlier. Confirm that Port is set to 80. Click Create. Click Done. In the left panel of the Create global external Application Load Balancer page, click Review and finalize. Compare your settings to what you intended to create. If the settings are correct, click Create. You are returned to the Load Balancing pages. After the load balancer is created, a green check mark next to it indicates that it's running. gcloud Get the static IP addresses that you created for your load balancer. Make a note of them, because you use them in the next step. gcloud compute addresses list Create two global forwarding rules to route incoming requests to the proxy, one for IPv4 and one for IPv6. Replace lb_ip_address in the command with the static IPv4 address you created, and replace lb_ipv6_address with the IPv6 address you created. 
gcloud compute forwarding-rules create http-cr-rule \ --address lb_ip_address \ --global \ --target-http-proxy http-lb-proxy \ --ports 80 gcloud compute forwarding-rules create http-cr-ipv6-rule \ --address lb_ipv6_address \ --global \ --target-http-proxy http-lb-proxy \ --ports 80 After you create the global forwarding rules, it can take several minutes for your configuration to propagate. Test the configuration In this section, you send an HTTP request to your instance to verify that the load-balancing configuration is working. Console In the Google Cloud console, go to the Load balancing page: GO TO THE LOAD BALANCING PAGE Select the load balancer named web-map to see details about the load balancer that you just created. In the Backend section of the page, confirm that instances are healthy by viewing the Healthy column. It can take a few moments for the display to indicate that the instances are healthy. When the display shows that the instances are healthy, copy the IP:Port value from the Frontend section and paste it into your browser. In your browser, you see your default content page. gcloud Get the IP addresses of your global forwarding rules, and make a note of them for the next step: gcloud compute forwarding-rules list Use the curl command to test the response for various URLs for your services. Try both IPv4 and IPv6. For IPv6, you must put [] around the address, such as http://[2001:DB8::]/. curl http://ipv4-address curl -g -6 "http://[ipv6-address]/" Summary Your VMs can now serve traffic to the internet and can fetch data from the internet. You can also access them using SSH in order to perform administration tasks. All of this functionality is achieved using only private IP addresses, which helps protect them from direct attacks by not exposing IP addresses that are reachable from the internet. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Note: if you have used Terraform to complete this solution issue the terraform destroy command to remove all resources. What's next Creating Cloud Load Balancing shows you how to create HTTPS and HTTP2 load balancers. Setting up a private cluster shows you how to set up a private Google Kubernetes Engine cluster. Using IAP for TCP forwarding describes other uses for IAP for TCP, such as RDP or remote command execution. Using Cloud NAT provides examples for Google Kubernetes Engine and describes how to modify parameter details. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. 
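If you keep the project and instead delete the individual resources, as mentioned in the Clean up section, the following sketch removes the resources that this tutorial created, in an order that respects their dependencies. It assumes that your default project is set and that you confirm each deletion prompt (or add --quiet).

# Delete the load balancer components.
gcloud compute forwarding-rules delete http-cr-rule http-cr-ipv6-rule --global
gcloud compute target-http-proxies delete http-lb-proxy
gcloud compute url-maps delete web-map
gcloud compute backend-services delete web-map-backend-service --global
gcloud compute health-checks delete http-basic-check
gcloud compute addresses delete lb-ip-cr lb-ipv6-cr --global

# Delete the instance groups, instances, Cloud NAT configuration, Cloud Routers, and firewall rules.
gcloud compute instance-groups unmanaged delete us-resources-w --zone us-central1-b
gcloud compute instance-groups unmanaged delete europe-resources-w --zone europe-west1-b
gcloud compute instances delete www-1 www-2 --zone us-central1-b
gcloud compute instances delete www-3 www-4 --zone europe-west1-b
gcloud compute routers nats delete nat-config --router nat-router-us-central1 --router-region us-central1
gcloud compute routers nats delete nat-config --router nat-router-europe-west1 --router-region europe-west1
gcloud compute routers delete nat-router-us-central1 --region us-central1
gcloud compute routers delete nat-router-europe-west1 --region europe-west1
gcloud compute firewall-rules delete allow-ssh-from-iap allow-lb-and-healthcheck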
Send feedback \ No newline at end of file diff --git a/Business_Intelligence.txt b/Business_Intelligence.txt new file mode 100644 index 0000000000000000000000000000000000000000..32cd363b770c4f060cd92b4e21f79852e0a1099a --- /dev/null +++ b/Business_Intelligence.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/business-intelligence +Date Scraped: 2025-02-23T11:59:10.091Z + +Content: +Business intelligence modernizationGoogle Cloud’s uniquely powerful and flexible BI modernization solutions help you develop a strategy to modernize BI and put data at the center of your business transformation.Contact usESG: Transforming the Business with Modern Business IntelligenceRead the reportBenefitsPowerful and flexible foundation for modernizing BI and transforming businessInnovate and differentiate with dataDeliver high-value user experiences and transform your business processes by developing innovative new ways to use data to inform or automate your decision-making.Strategically utilize your BI budgetEliminate overlapping legacy BI tools by utilizing Looker's SQL-based modeling layer which is more agile, collaborative, and flexible than the rigid semantic layers at the center of legacy BI architecture.Future-proof your BI strategyOur scalable, 100% cloud-native solutions easily adapt to your organization's evolving data strategy so you can quickly react to changing business requirements.Key featuresFuel your data-driven digital transformation Supply your employees, customers, products, and services with timely, trusted data to power experiences tailored for every part of your businessArchitecture with unmatched flexibility and securityThe unique in-database architecture of Looker leverages the power and scalability of modern MPP cloud data warehouses like BigQuery and allows for future flexibility by eliminating the chaos of cubes and extracts. Leave your data securely at its source and deliver answers consistently into your business operations and customer experiences or in Looker’s modern BI and analytics applications.An efficient and reusable agile semantic layer LookML provides governance for real-time data at scale and a single, shared place for people and apps to interact with that data. It’s an agile and collaborative, centrally managed Git version-controlled modeling layer that eliminates data silos and inconsistencies.Self-service business intelligence with built-in connectorsLooker Studio is an easy to use self-serve BI solution enabling ad-hoc reporting, analysis, and data mashups. It currently supports over 800 data sources with a catalog of more than 600 connectors, making it easy to explore data from different sources with just a few clicks without ever needing IT.A developer platform designed for new data experiencesAs the trusted API for all your data, Looker lets you focus on the answers you need and where to deliver them—not how to ask the questions. Go beyond static BI reports and dashboards by building best-in-class data apps that keep development teams lean and focused on core competencies.Multicloud and database access to support any environment The SQL-based modeling layer is more agile, collaborative, and flexible than the rigid semantic layers at the center of the legacy BI architecture, offering your team the necessary agility to react to changing business requirements quickly. Take advantage of the performance and scale of modern MPP multi- and hybrid cloud architectures.Ready to get started? 
Request a BI modernization solution demo or contact us.Key resources and considerations for a modern approach to BIGuide to modern BI evaluationsRegister to access reportBuilding better data experiencesDownload whitepaperFederal agencies leverage data as a strategic asset to meet goalsRead hereCustomersHow leading companies are transforming their business by modernizing BICase studyCreated more engaging, user-friendly data experiences for its 30 Clubs, sponsors, and fans.5-min readCase studyReal-time access to data helped teams optimize shipments, reduce costs, and meet budget goals.5-min readCase studyRealized 50% reduction in data warehouse design time.5-min readCase studyReduced platform licensing costs by almost 75%.5-min readCase studyQuick insight into how players interact with the game has increased agility in game development.5-min readBlog postRitual’s top 3 tips for nurturing a strong data culture.5-min readSee all customersRelated servicesProducts and features from across GoogleLookerAn enterprise platform for business intelligence, data applications, and embedded analytics.BigQueryServerless, highly scalable, and cost-effective multicloud data warehouse designed for business agility.Connected SheetsAnalyze billions of rows of BigQuery data with the familiarity and functions of Google Sheets.BigQuery BI EngineIn-memory analysis service built into BigQuery for sub-second query response and high concurrency.BigQuery MLBuilt-in machine learning model creation and execution using standard SQL queries.Looker StudioSelf-service business intelligence with intuitive dashboards and native connectors to Looker, BigQuery, Ads tools and more.What's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postBringing together the best of both sides of BI with Looker and Looker StudioRead the blogBlog postVisualize your model with the LookML Diagram ApplicationRead the blogBlog postAccelerate your Looker dashboards with BigQuery BI EngineRead the blogTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Business_continuity_hybrid_and_multicloud_patterns.txt b/Business_continuity_hybrid_and_multicloud_patterns.txt new file mode 100644 index 0000000000000000000000000000000000000000..14b2a710f1992ad3d2a31d8fe21f7e4a2a34b8c5 --- /dev/null +++ b/Business_continuity_hybrid_and_multicloud_patterns.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/business-continuity-patterns +Date Scraped: 2025-02-23T11:50:11.370Z + +Content: +Home Docs Cloud Architecture Center Send feedback Business continuity hybrid and multicloud patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The main driver of considering business continuity for mission-critical systems is to help an organization to be resilient and continue its business operations during and following failure events. By replicating systems and data over multiple geographical regions and avoiding single points of failure, you can minimize the risks of a natural disaster that affects local infrastructure. 
Other failure scenarios include severe system failure, a cybersecurity attack, or even a system configuration error. Optimizing a system to withstand failures is essential for establishing effective business continuity. System reliability can be influenced by several factors, including, but not limited to, performance, resilience, uptime availability, security, and user experience. For more information on how to architect and operate reliable services on Google Cloud, see the reliability pillar of the Google Cloud Architecture Framework and building blocks of reliability in Google Cloud. This architecture pattern relies on a redundant deployment of applications across multiple computing environments. In this pattern, you deploy the same applications in multiple computing environments with the aim of increasing reliability. Business continuity can be defined as the ability of an organization to continue its key business functions or services at predefined acceptable levels following a disruptive event. Disaster recovery (DR) is considered a subset of business continuity, explicitly focusing on ensuring that the IT systems that support critical business functions are operational as soon as possible after a disruption. In general, DR strategies and plans often help form a broader business continuity strategy. From a technology point of view, when you start creating disaster recovery strategies, your business impact analysis should define two key metrics: the recovery point objective (RPO) and the recovery time objective (RTO). For more guidance on using Google Cloud to address disaster recovery, see the Disaster recovery planning guide. The smaller the RPO and RTO target values are, the faster services might recover from an interruption with minimal data loss. However, this implies higher cost because it means building redundant systems. Redundant systems that are capable of performing near real-time data replication and that operate at the same scale following a failure event, increase complexity, administrative overhead, and cost. The decision to select a DR strategy or pattern should be driven by a business impact analysis. For example, the financial losses incurred from even a few minutes of downtime for a financial services organization might far exceed the cost of implementing a DR system. However, businesses in other industries might sustain hours of downtime without a significant business effect. When you run mission-critical systems in an on-premises data center, one DR approach is to maintain standby systems in a second data center in a different region. A more cost-effective approach, however, is to use a public cloud–based computing environment for failover purposes. This approach is the main driver of the business continuity hybrid pattern. The cloud can be especially appealing from a cost point of view, because it lets you turn off some of your DR infrastructure when it's not in use. To achieve a lower cost DR solution, a cloud solution lets a business accept the potential increase in RPO and RTO values. The preceding diagram illustrates the use of the cloud as a failover or disaster recovery environment to an on-premises environment. A less common (and rarely required) variant of this pattern is the business continuity multicloud pattern. In that pattern, the production environment uses one cloud provider and the DR environment uses another cloud provider. 
By deploying copies of workloads across multiple cloud providers, you might increase availability beyond what a multi-region deployment offers. Evaluating a DR across multiple clouds versus using one cloud provider with different regions requires a thorough analysis of several considerations, including the following: Manageability Security Overall feasibility. Cost The potential outbound data transfer charges from more than one cloud provider could be costly with continuous inter-cloud communication. There can be a high volume of traffic when replicating databases. TCO and the cost of managing inter-cloud network infrastructure. Note: For more information about how Cross-Cloud Interconnect might help you to lower TCO and reduce complexity, see Announcing Cross-Cloud Interconnect: seamless connectivity to all your clouds. If your data needs to stay in your country to meet regulatory requirements, using a second cloud provider that's also in your country as a DR can be an option. That use of a second cloud provider assumes that there's no option to use an on-premises environment to build a hybrid setup. To avoid rearchitecting your cloud solution, ideally your second cloud provider should offer all the required capabilities and services you need in-region. Design considerations DR expectation: The RPO and the RTO targets your business wants to achieve should drive your DR architecture and build planning. Solution architecture: With this pattern, you need to replicate the existing functions and capabilities of your on-premises environment to meet your DR expectations. Therefore, you need to assess the feasibility and viability of rehosting, refactoring, or rearchitecting your applications to provide the same (or more optimized) functions and performance in the cloud environment. Design and build: Building a landing zone is almost always a prerequisite to deploying enterprise workloads in a cloud environment. For more information, see Landing zone design in Google Cloud. DR invocation: It's important for your DR design and process to consider the following questions: What triggers a DR scenario? For example, a DR might be triggered by the failure of specific functions or systems in the primary site. How is the failover to the DR environment invoked? Is it a manual approval process, or can it be automated to achieve a low RTO target? How should system failure detection and notification mechanisms be designed to invoke failover in alignment with the expected RTO? How is traffic rerouted to the DR environment after the failure is detected? Validate your answers to these questions through testing. Testing: Thoroughly test and evaluate the failover to DR. Ensure that it meets your RPO and RTO expectations. Doing so could give you more confidence to invoke DR when required. Any time a new change or update is made to the process or technology solution, conduct the tests again. Team skills: One or more technical teams must have the skills and expertise to build, operate, and troubleshoot the production workload in the cloud environment, unless your environment is managed by a third party. Advantages Using Google Cloud for business continuity offers several advantages: Because Google Cloud has many regions across the globe to choose from, you can use it to back up or replicate data to a different site within the same continent. You can also back up or replicate data to a site on a different continent. 
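As a small, hedged illustration of cross-region backup, the following sketch snapshots a Compute Engine disk and stores the snapshot in a different region; the disk, snapshot, zone, and region names are hypothetical placeholders.

# Snapshot a disk in us-central1 and store the snapshot in europe-west1 (placeholder names).
gcloud compute disks snapshot example-app-disk \
    --zone=us-central1-b \
    --snapshot-names=example-app-disk-dr-backup \
    --storage-location=europe-west1

# Later, the snapshot can be used to recreate the disk (and a VM) in the DR region.
gcloud compute disks create example-app-disk-restored \
    --zone=europe-west1-b \
    --source-snapshot=example-app-disk-dr-backup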
Google Cloud offers the ability to store data in Cloud Storage in a dual-region or multi-region bucket. Data is stored redundantly in at least two separate geographic regions. Data stored in dual-region and multi-region buckets are replicated across geographic regions using default replication. Dual-region buckets provide geo-redundancy to support business continuity and DR plans. Also, to replicate faster, with a lower RPO, objects stored in dual-regions can optionally use turbo replication across those regions. Similarly multi-region replication provides redundancy across multiple regions, by storing your data within the geographic boundary of the multi-region. Provides one or more of the following options to reduce capital expenses and operating expenses to build a DR: Stopped VM instances only incur storage costs and are substantially cheaper than running VM instances. That means you can minimize the cost of maintaining cold standby systems. The pay-per-use model of Google Cloud means that you only pay for the storage and compute capacity that you actually use. Elasticity capabilities, like autoscaling, let you automatically scale or shrink your DR environment as needed. For example, the following diagram shows an application running in an on-premises environment (production) that uses recovery components on Google Cloud with Compute Engine, Cloud SQL, and Cloud Load Balancing. In this scenario, the database is pre-provisioned using a VM-based database or using a Google Cloud managed database, like Cloud SQL, for faster recovery with continuous data replication. You can launch Compute Engine VMs from pre-created snapshots to reduce cost during normal operations. With this setup, and following a failure event, DNS needs to point to the Cloud Load Balancing external IP address. To have the application operational in the cloud, you need to provision the web and application VMs. Depending on the targeted RTO level and company policies, the entire process to invoke a DR, provision the workload in the cloud, and reroute the traffic, can be completed manually or automatically. To speed up and automate the provisioning of the infrastructure, consider managing the infrastructure as code. You can use Cloud Build, which is a continuous integration service, to automatically apply Terraform manifests to your environment. For more information, see Managing infrastructure as code with Terraform, Cloud Build, and GitOps. Best practices When you're using the business continuity pattern, consider the following best practices: Create a disaster recovery plan that documents your infrastructure along with failover and recovery procedures. Consider the following actions based on your business impact analysis and the identified required RPO and RTO targets: Decide whether backing up data to Google Cloud is sufficient, or whether you need to consider another DR strategy (cold, warm, or hot standby systems). Define the services and products that you can use as building blocks for your DR plan. Frame the applicable DR scenarios for your applications and data as part of your selected DR strategy. Consider using the handover pattern when you're only backing up data. Otherwise, the meshed pattern might be a good option to replicate the existing environment network architecture. Minimize dependencies between systems that are running in different environments, particularly when communication is handled synchronously. These dependencies can slow performance and decrease overall availability. 
Avoid the split-brain problem. If you replicate data bidirectionally across environments, you might be exposed to the split-brain problem. The split-brain problem occurs when two environments that replicate data bidirectionally lose communication with each other. This split can cause systems in both environments to conclude that the other environment is unavailable and that they have exclusive access to the data. This can lead to conflicting modifications of the data. There are two common ways to avoid the split-brain problem: Use a third computing environment. This environment allows systems to check for a quorum before modifying data. Allow conflicting data modifications to be reconciled after connectivity is restored. With SQL databases, you can avoid the split-brain problem by making the original primary instance inaccessible before clients start using the new primary instance. For more information, see Cloud SQL database disaster recovery. Ensure that CI/CD systems and artifact repositories don't become a single point of failure. When one environment is unavailable, you must still be able to deploy new releases or apply configuration changes. Make all workloads portable when using standby systems. All workloads should be portable (where supported by the applications and feasible) so that systems remain consistent across environments. You can achieve this approach by considering containers and Kubernetes. By using Google Kubernetes Engine (GKE) Enterprise edition, you can simplify the build and operations. Integrate the deployment of standby systems into your CI/CD pipeline. This integration helps ensure that application versions and configurations are consistent across environments. Ensure that DNS changes are propagated quickly by configuring your DNS with a reasonably short time to live value so that you can reroute users to standby systems when a disaster occurs. Select the DNS policy and routing policy that align with your architecture and solution behavior. Also, you can combine multiple regional load balancers with DNS routing policies to create global load-balancing architectures for different use cases, including hybrid setups. Use multiple DNS providers. When using multiple DNS providers, you can: Improve the availability and resiliency of your applications and services. Simplify the deployment or migration of hybrid applications that have dependencies across on-premises and cloud environments with a multi-provider DNS configuration. Google Cloud offers an open source solution based on octoDNS to help you set up and operate an environment with multiple DNS providers. For more information, see Multi-provider public DNS using Cloud DNS. Use load balancers when using standby systems to create an automatic failover. Keep in mind that load balancer hardware can fail. Use Cloud Load Balancing instead of hardware load balancers to power some scenarios that occur when using this architecture pattern. Internal client requests or external client requests can be redirected to the primary environment or the DR environment based on different metrics, such as weight-based traffic splitting. For more information, see Traffic management overview for global external Application Load Balancer. Consider using Cloud Interconnect or Cross-Cloud Interconnect if the outbound data transfer volume from Google Cloud toward the other environment is high. 
Cloud Interconnect can help to optimize the connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing. Consider using your preferred partner solution on Google Cloud Marketplace to help facilitate the data backups, replications, and other tasks that meet your requirements, including your RPO and RTO targets. Test and evaluate DR invocation scenarios to understand how readily the application can recover from a disaster event when compared to the target RTO value. Encrypt communications in transit. To protect sensitive information, we recommend encrypting all communications in transit. If encryption is required at the connectivity layer, various options are available based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect. Previous arrow_back Environment hybrid pattern Next Cloud bursting pattern arrow_forward Send feedback \ No newline at end of file diff --git a/Business_continuity_with_CI-CD_on_Google_Cloud.txt b/Business_continuity_with_CI-CD_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..16fee67cf4cf3fddf0c9c81f9583bb935f7c01bd --- /dev/null +++ b/Business_continuity_with_CI-CD_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/business-continuity-with-cicd-on-google-cloud +Date Scraped: 2025-02-23T11:54:54.878Z + +Content: +Home Docs Cloud Architecture Center Send feedback Business continuity with CI/CD on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-27 UTC This document describes disaster recovery (DR) and business continuity planning in the context of continuous integration and continuous delivery (CI/CD). It also provides guidance about how to identify and mitigate dependencies when you develop a comprehensive business continuity plan (BCP). The document includes best practices that you can apply to your BCP, regardless of the tools and processes that you use. The document assumes that you are familiar with the basics of the software delivery and operations cycle, CI/CD, and DR. CI/CD pipelines are responsible for building and deploying your business critical applications. Thus, like your application infrastructure, your CI/CD process requires planning for DR and business continuity. When you think about DR and business continuity for CI/CD, it's important to understand each phase of the software delivery and operations cycle, and understand how they function together as a holistic process. The following diagram is a simplified view of the software development and operations cycle, which includes the following three phases: Development inner loop: code, try, and commit Continuous integration: build, test, and security Continuous delivery: promote, rollout, rollback, and metrics This diagram also shows that Google Kubernetes Engine (GKE), Cloud Run, and Google Distributed Cloud are possible deployment targets of the software development and operations cycle. Throughout the software development and operations cycle, you need to consider the impact of a disaster on the ability of teams to operate and maintain business-critical applications. Doing so will help you determine the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) for the tools in your CI/CD toolchain. 
In addition, most organizations have many different CI/CD pipelines for different applications and sets of infrastructure, and each pipeline has unique requirements for DR and business continuity planning. The recovery strategy that you choose for a pipeline will vary based on the RTO and RPO of your tools. For example, some pipelines are more critical than others, and they will have lower RTO and RPO requirements. It's important to identify the business-critical pipelines in your BCP, and they should also receive more attention when you implement best practices for testing and running recovery procedures. Because each CI/CD process and its toolchain are different, the goals of this guide are to help you identify single points of failure in your CI/CD process and develop a comprehensive BCP. The following sections help you do the following: Understand what it takes to recover from a DR event that affects your CI/CD process. Determine the RTO and RPO for the tools in your CI/CD process. Understand the failure modes and dependencies of your CI/CD process. Choose an appropriate recovery strategy for the tools in your toolchain. Understand general best practices for implementing a DR recovery plan for your CI/CD process. Understand the business continuity process Building a BCP is crucial for helping ensure that your organization can continue its operations in the event of disruptions and emergencies. It helps your organization quickly return to a state of normal operations for its CI/CD process. The following sections outline the high-level stages that include the steps that are involved in creating an effective BCP. Although many of these steps apply broadly to program management and DR, certain steps are more relevant to planning business continuity for your CI/CD process. The steps that are specifically relevant to planning business continuity for CI/CD are highlighted in the following sections, and they also form the basis for the guidance in the rest of this document. Initiation and planning In this initial stage, both technical and business teams work together to establish the foundation for the business continuity planning process and its continued maintenance. The key steps for this stage include the following: Leadership buy-in: ensure that senior management supports and champions the development of the BCP. Assign a dedicated team or an individual that is responsible for overseeing the plan. Resource allocation: allocate the necessary budget, personnel, and resources for developing and implementing the BCP. Scope and objectives: define the scope of your BCP and its objectives. Determine which business processes are critical and need to be addressed in the plan. Risk assessment: identify potential risks and threats that could disrupt your business, such as natural disasters, cybersecurity breaches, or supply chain interruptions. Impact analysis: assess the potential consequences of these risk assessment findings on your business operations, finances, reputation, and customer satisfaction. Business impact analysis In this stage, the business and technical teams analyze the business impact of disruptions to your customers and organization, and prioritize the recovery of critical business functions. These business functions are performed by different tools during the different phases of a build and deployment process. 
The business impact analysis is an important stage in the business continuity planning process for CI/CD, especially the steps for identifying critical business functions and tool dependencies. In addition, understanding your CI/CD toolchain–including its dependencies and how it functions within your DevOps lifecycle–is a foundational building block for developing a BCP for your CI/CD process. The key steps in the business impact analysis stage include the following: Critical functions: determine the key business functions and processes that must be prioritized for recovery. For example, if you determine that deploying applications is more critical than executing unit tests, you would prioritize recovery for application deployment processes and tools. Dependencies: identify internal and external dependencies that could affect the recovery of your critical functions. Dependencies are especially relevant for ensuring the continued operation of your CI/CD process through its toolchain. RTO and RPO: define acceptable downtime and data-loss limits for each critical function. These RTO and RPO targets are linked to the importance of a business function for continued operations, and they involve specific tools that are needed for the business function to operate smoothly. Strategy development In this stage, the technical team develops recovery strategies for critical business functions, such as restoring operations and data, and communicating with vendors and stakeholders. Strategy development is also a key part of planning business continuity for your CI/CD process, especially the step of selecting high-level recovery strategies for critical functions. The key steps in the strategy development stage include the following: Recovery strategies: develop strategies for restoring critical functions. These strategies might involve alternate locations, remote work, or backup systems. These strategies are tied to the RTO and RPO targets for each critical function. Vendor and supplier relationships: establish communication and coordination plans with key vendors and suppliers to keep the supply chain running during disruptions. Data and IT recovery: create plans for data backup, IT system recovery, and cybersecurity measures. Communication plan: develop a clear communication plan for internal and external stakeholders during and after a disruption. Plan development In this stage, the main step is to document the BCP. The technical team documents the tools, processes, recovery strategies, rationale, and procedures for each critical function. Plan development also includes writing step-by-step instructions for employees to follow during a disruption. During implementation and ongoing maintenance, changes might need to be introduced to the plan, and the plan should be treated as a living document. Implementation In this stage, you implement the plan for your organization by using the BCP that the technical team created. Implementation includes employee training and initial testing of the BCP. Implementation also includes using the plan if a disruption occurs to recover regular operations. Key implementation steps include the following: Initial testing and training: after the BCP is documented, test it through simulations and exercises to identify gaps and improve effectiveness. Train employees on their roles and responsibilities during a disruption. Activation: when a disruption occurs, initiate the BCP according to the predefined triggers and procedures. 
Communication: keep stakeholders informed about the situation and recovery efforts. Maintenance and review This stage isn't a defined process that occurs only one time–instead, it represents a continuous, ongoing effort that should become a normal part of your CI/CD operations. It's important to regularly review, test, and update the BCP within your organization so that it remains relevant and actionable if a disruption occurs. The key steps of maintenance and review include the following: Regular updates: review and update the BCP periodically so that it remains current and effective. Update it whenever there are changes in personnel, technology, or business processes. Lessons learned: after each disruption or test, conduct a debriefing to identify lessons learned and areas for improvement. Regulatory compliance: align your BCP with industry regulations and standards. Employee awareness: continuously educate employees about the BCP and their roles in its execution. Build a business continuity process for CI/CD This section provides specific guidelines for building a BCP that's specifically focused on restoring your CI/CD operations. The process of planning business continuity for CI/CD starts with a thorough understanding of your CI/CD toolchain, and how it ties into the software delivery and operations lifecycle. With this understanding as the foundation, you can then plan how your organization will recover its CI/CD operations from a disruption. To build a robust business continuity process for your CI/CD process, you need to take the following major steps: Understand the toolchain Identify data and dependencies Determine RTO and RPO targets Choose a high-level strategy for business continuity Document your BCP and implement best practices Test failure scenarios and maintain the plan The following sections provide more detail about each of these steps. Understand the toolchain CI/CD toolchains are composed of many different individual tools and the possible combinations of tools can seem endless. However, understanding your CI/CD toolchain and its dependencies is key to business continuity planning for CI/CD. The core mission of your CI/CD process is to deliver code to production systems for end-user consumption. Throughout that process, many different systems and data sources are used; knowing those data sources and dependencies is critical to developing a BCP. To begin creating your DR strategy, you first need to understand the different tools involved in your CI/CD process. To help you understand how to evaluate your own toolchain and develop your BCP, this document uses the example of an enterprise Java application that runs on GKE. The following diagram shows the first layer of data and systems in the toolchain. This first layer would be under your direct control and includes the following: The source for your applications Tools in your CI/CD platform, such as Cloud Build or Cloud Deploy Basic interconnections of the different tools As shown in the diagram, the main flow for the example application is the following: Code development events in the dev inner loop trigger Cloud Build. Cloud Build pulls the application source code from the source control repository. Cloud Build identifies any necessary dependencies that are specified in build configuration files, such as third-party JAR files from the Java repository in Artifact Registry. Cloud Build then pulls these dependencies from their source locations. 
Cloud Build runs the build and does the necessary validation, such as static analysis and unit testing. If the build is successful, Cloud Build creates the container image and pushes it to the container repository in Artifact Registry. A Cloud Deploy pipeline is triggered, and the pipeline pulls the container image from the repository and deploys it to a GKE environment. To understand the tools that are used in your CI/CD process, we suggest creating a diagram that shows your CI/CD process and the tools that are used in it, similar to the example in this document. You can then use your diagram to create a table that captures key information about your CI/CD toolchain, such as the phase of the process, the purpose of the tool, the tool itself, and the teams that are impacted by a failure of the tool. This table provides a mapping of the tools in your toolchain and identifies the tools with specific phases of the CI/CD process. Thus, the table can help you get an overall view of your toolchain and how it operates. The following tables map the previously mentioned example of an enterprise application to each tool in the diagram. To provide a more complete example of what a toolchain mapping might look like, these tables also include other tools that aren't mentioned in the diagram, such as security tools or test tools. The first table maps to tools that are used in the CI phase of the CI/CD process: Continuous integration Source Tools used Primary users Usage Phase: Source control Application code Application configuration files Secrets, passwords, and API keys Git repositories Secret Manager Cloud Key Management Service Developers Site reliability engineers (SREs) Control the version of all sources, including code, configuration files, and documentation, in a distributed source control tool. Perform backup and replication. Store all secrets (including keys, certificates, and passwords) in a secrets management tool. Phase: Build Container image build files Build configuration files Code analysis tools Cloud Build Artifact Registry Developers Execute repeatable builds in a consistent, on-demand platform. Check and store build artifacts in a reliable and secure repository. Phase: Test Test cases Test code Test configuration files Cloud Build Developers Run unit and integration tests in a consistent, on-demand platform. Phase: Security Security rules Security configuration files Security scanner Platform administrators SREs Scan code for security issues. The second table focuses on tools that are used in the CD phase of the CI/CD process: Continuous deployment Source Tools used Primary users Usage Phase: Deployment Deployment configuration files Cloud Deploy Application operators SREs Automate deployments to promote, approve, and manage traffic in a secure and consistent platform. Phase: Test Test cases Test code Test data Configuration files Cloud Build Selenium Apache JMeter Locust Developers Test integration and performance for quality and usability. Phase: Logging Log configuration files Queries Playbooks Cloud Logging Log agents Application operators SREs Keep logs for observability and troubleshooting. Phase: Monitoring Monitoring of configuration files, including the following: Queries Playbooks Dashboard sources Cloud Monitoring Cloud Trace OpenTelemetry Twilio Sendgrid Application operators SREs Use metrics for monitoring, observability, and alerting. Use distributed tracing. Send notifications. 
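To help populate a toolchain mapping table like the ones above, it can be useful to inventory the CI/CD resources that already exist in a project. The following gcloud sketch lists common toolchain components; it assumes the relevant APIs are enabled, and the region and location values are placeholders.

# List build, delivery, artifact, secret, and cluster resources to seed the toolchain mapping.
gcloud builds triggers list
gcloud deploy delivery-pipelines list --region=us-central1
gcloud artifacts repositories list --location=us-central1
gcloud secrets list
gcloud container clusters list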
As you continue to work on your BCP and your understanding of your CI/CD toolchain grows, you can update your diagram and mapping table. Identify data and dependencies After you complete your base inventory and map of your CI/CD toolchain, the next step is to capture any dependencies on metadata or configurations. When you implement your BCP, it's critical that you have a clear understanding of the dependencies within your CI/CD toolchain. Dependencies typically fall into one of two categories: internal (first-order) dependencies and external (second-order or third-order) dependencies. Internal dependencies Internal dependencies are systems that your toolchain uses and that you're directly in control of. Internal dependencies are also selected by your teams. These systems include your CI tool, key management store, and source control system. You can think of these systems as being in the next layer down from the toolchain itself. For example, the following diagram provides an example of how internal dependencies fit within a toolchain. The diagram expands upon the previous first-layer toolchain diagram for the example Java application to also include the toolchain's internal dependencies: application credentials, the deploy.yaml file, and the cloudbuild.yaml file. The diagram shows that in order to work successfully in the example Java application, tools like Cloud Build, Cloud Deploy, and GKE need access to non-toolchain dependencies like cloudbuild.yaml,deploy.yaml, and application credentials. When you analyze your own CI/CD toolchain, you assess whether a tool can run on its own, or if it needs to call another resource. Consider the documented internal dependencies for the example Java application. Credentials are stored in Secret Manager, which isn't part of the toolchain, but the credentials are required for the application to start up on deployment. Thus, the application credentials are included as a dependency for GKE. It's also important to include the deploy.yaml and cloudbuild.yaml files as dependencies, even though they are stored in source control with the application code, because they define the CI/CD pipeline for that application. The BCP for the example Java application should account for these dependencies on the deploy.yaml and cloudbuild.yaml files because they re-create the CI/CD pipeline after the tools are in place during the recovery process. Additionally, if these dependencies are compromised, the overall function of the pipeline would be impacted even if the tools themselves are still operational. External dependencies External dependencies are external systems that your toolchain relies on to operate, and they aren't under your direct control. External dependencies result from the tools and programming frameworks that you selected. You can think of external dependencies as being another layer down from internal dependencies. Examples of external dependencies include npm or Maven repositories, and monitoring services. Although external dependencies are outside your control, you can incorporate them into your BCP. The following diagram updates the example Java application by including external dependencies in addition to the internal ones: Java libraries in Maven Central and Docker images in Docker Hub. The Java libraries are used by Artifact Registry, and the Docker images are used by Cloud Build. 
The diagram shows that external dependencies can be important for your CI/CD process: both Cloud Build and GKE rely on two external services (Maven and Docker) to work successfully. When you assess your own toolchain, document both external dependencies that your tools need to access and procedures for handling dependency outages. In the example Java application, the Java libraries and Docker images can't be controlled directly, but you could still include them and their recovery procedures in the BCP. For example, consider the Java libraries in Maven. Although the libraries are stored on an external source, you can establish a process to periodically download and refresh Java libraries to a local Maven repository or Artifact Registry. By doing so, your recovery process doesn't need to rely on the availability of the third-party source. In addition, it's important to understand that external dependencies can have more than one layer. For example, you can think of the systems that are used by your internal dependencies as second-order dependencies. These second-order dependencies might have their own dependencies, which you can think of as third-order dependencies. Be aware that you might need to document and account for both second- and third-order external dependencies in your BCP in order to recover operations during a disruption. Determine RTO and RPO targets After you develop an understanding of your toolchain and dependencies, you define the RTO and RPO targets for your tools. The tools in the CI/CD process each perform a different action that can have a different impact on the business. Therefore, it's important to match the priority of the business function's RTO and RPO targets to its impact on the business. For example, building new versions of applications through the CI stage could be less impactful than deploying applications through the CD stage. Thus, deployment tools could have longer RTO and RPO targets than other functions. The following four-quadrant chart is a general example of how you might determine your RTO and RPO targets for each component of the CI/CD toolchain. The toolchain that is mapped in this chart includes tools like an IaC pipeline and test data sources. The tools weren't mentioned in the previous diagrams for the Java application, but they're included here to provide a more complete example. The chart shows quadrants that are based on the level of impact to developers and operations. In the chart, components are positioned as follows: Moderate developer impact, low operations impact: test data sources Moderate developer impact, moderate operations impact: Cloud Key Management Service, Cloud KMS Moderate developer impact, high operations impact: deployment pipeline High developer impact, low operations impact: dev inner loop High developer impact, moderate operations impact: CI pipeline, infrastructure as code (IaC) pipeline High developer impact, high operations impact: source control management (SCM), Artifact Registry Components like source control management and Artifact Registry that are high on developer impact and operations impact have the greatest impact on the business. These components should have the lowest RTO and RPO objectives. The components in the other quadrants have a lower priority, which means that the RTO and RPO objectives will be higher. 
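One way to keep these decisions actionable is to record them in a small, version-controlled inventory that accompanies the BCP. The following YAML sketch is purely illustrative: the component names mirror the quadrant chart, and the RTO and RPO values are assumptions that you would replace with targets agreed on with your business stakeholders.

# dr-targets.yaml (illustrative sketch; all values are assumptions)
components:
- name: source-control-management
  impact: {developer: high, operations: high}
  rto: 1h
  rpo: 0m   # no tolerance for losing committed code
- name: artifact-registry
  impact: {developer: high, operations: high}
  rto: 1h
  rpo: 15m
- name: ci-pipeline
  impact: {developer: high, operations: moderate}
  rto: 4h
  rpo: 1h
- name: deployment-pipeline
  impact: {developer: moderate, operations: high}
  rto: 2h
  rpo: 1h
- name: iac-pipeline
  impact: {developer: high, operations: moderate}
  rto: 8h
  rpo: 4h
- name: test-data-sources
  impact: {developer: moderate, operations: low}
  rto: 24h
  rpo: 24h

Because the inventory is versioned alongside the rest of your BCP material, tests and reviews can measure your recovery procedures against explicit targets rather than against a diagram.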
In general, the RTO and RPO objectives for your toolchain components should be set according to how much data or configuration loss can be tolerated compared to the amount of time it should take to restore service for that component. For example, consider the different locations of Artifact Registry and the IaC pipeline in the graph. A comparison of these two tools shows that an Artifact Registry outage has a larger impact on business operations than an outage in the IaC pipeline. Because an Artifact Registry outage significantly impacts your ability to deploy or autoscale your application, it would then have lower RTO and RPO targets compared to other tools. In contrast, the graph shows that an IaC pipeline outage has a smaller impact on business operations than other tools. The IaC pipeline would then have higher RTO and RPO objectives because you can use other methods to deploy or update infrastructure during an outage. Choose a high-level strategy for business continuity Business continuity processes for production applications often rely on one of three common DR strategies. However, for CI/CD, you can choose between two high-level strategies for business continuity: active/passive or backup/restore. The strategy that you choose will depend on your requirements and budget. Each strategy has trade-offs with complexity and cost, and you have different considerations for your CI/CD process. The following sections provide more details about each strategy and their trade-offs. In addition, when service-interrupting events happen, they might impact more than your CI/CD implementation. You should also consider all the infrastructure you need, including the network, computing, and storage. You should have a DR plan for those building blocks and test it regularly to ensure that it is effective. Active/passive With the active/passive (or warm standby) strategy, your applications and the passive CI/CD pipeline are mirrors. However, the passive pipeline isn't actually handling customer workload and any build or deployment, so it's in a scaled-down state. This strategy is most appropriate for business-critical applications where a small amount of downtime is tolerable. The following diagram shows an active/passive configuration for the example Java application that is used in this document. The passive pipeline fully duplicates the application toolchain in a different region. In this example, region1 hosts the active CI/CD pipeline and region2 has the passive counterpart. The code is hosted on an external Git service provider, such as GitHub or GitLab. A repository event (like a merge from a pull request) can trigger the CI/CD pipeline in region1 to build, test, and deploy to the multi-regional production environment. If a critical issue for the region1 pipeline occurs, such as a regional outage of a product, the result could be failed deployments or unsuccessful builds. To quickly recover from the problem, you can update the trigger for the Git repository and switch to the region2 pipeline, which then becomes the active one. After the issue is resolved in region1, you can keep the pipeline in region1 as passive. Advantages of the active/passive strategy include the following: Low downtime: because the passive pipeline has been deployed but is scaled down, the amount of downtime is limited to the time that is required to scale the pipeline up. Configurable tolerance for data loss: with this strategy, the configuration and artifact must be periodically synchronized. 
However, the amount is configurable based on your requirements, which can reduce complexity. Disadvantages of this strategy include the following: Cost: with duplicated infrastructure, this strategy increases the overall cost of your CI/CD infrastructure. Backup/restore With the backup/restore strategy, you create your failover pipeline only when needed during incident recovery. This strategy is most appropriate for lower-priority use cases. The following diagram shows a backup/restore configuration for the example Java application. The backup configuration duplicates only part of the application's CI/CD pipeline in a different region. Similar to the previous example, region1 hosts the active CI/CD pipeline. Instead of having a passive pipeline in region2, region2 only has backups of necessary regional data, such as the Maven packages and container images. If you host your source repositories in region1, you should also sync the data to your DR locations. Similarly, if a critical issue occurs in the region1 pipeline, such as a regional product outage, you can restore your CI/CD implementation in region2. If the infrastructure code is stored in the infrastructure code repository, you can run your automation script from the repository and rebuild the CI/CD pipeline in region2. If the outage is a large-scale event, you might compete with other customers for cloud resources. One way to mitigate this situation is to have multiple options for the DR location. For example, if your region1 pipeline is in us-east1, your failover region can be us-east4, us-central1, or us-west1. Advantages of the backup/restore strategy include the following: Cost: this strategy incurs the lowest cost because you are deploying the backup pipeline only during DR scenarios. Disadvantages of this strategy include the following: Downtime: this strategy takes more time to implement because you create the failover pipeline when you need it. Instead of having a prebuilt pipeline, the services need to be created and configured during incident recovery. Artifact build time and the time to retrieve external dependencies could be significantly longer as well. Document your BCP and implement best practices After you map your CI/CD toolchain, identify its dependencies, and determine RTO and RPO targets for critical functions, the next step is to document all the relevant information in a written BCP. When you create your BCP, document the strategies, processes, and procedures for restoring each critical function. This documentation process includes writing step-by-step procedures for employees in specific roles to follow during a disruption. After you define your BCP, you deploy or update your CI/CD toolchain by using best practices to achieve your RTO and RPO targets. Although CI/CD toolchains can be very different, two key patterns for best practices are common regardless of the toolchain: a comprehensive understanding of dependencies and implementing automation. With regard to dependencies, most BCPs address the systems directly within your control. However, recall that second or third-order external dependencies might be just as impactful, so it's important to implement best practices and redundancy measures for those critical dependencies as well. The external Java libraries in the example application are an example of third-order dependencies. If you don't have a local repository or backup for those libraries, you might be unable to build your application if the external source where you pull the libraries is disconnected. 
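Automation addresses the second pattern. If the CI/CD pipeline itself is captured as code, re-creating it in the failover region during a backup/restore recovery becomes a scripted, repeatable task. The following cloudbuild.yaml sketch illustrates the idea under several assumptions: a hypothetical infra/cicd Terraform configuration stored in the infrastructure code repository, a user-defined _DR_REGION substitution, and a Terraform state backend that remains reachable during the outage.

# dr-rebuild.cloudbuild.yaml (illustrative sketch; the Terraform configuration,
# directory, and substitution names are assumptions)
steps:
# Re-create the CI/CD resources (triggers, repositories, delivery pipeline)
# in the failover region from the infrastructure code repository.
- name: 'hashicorp/terraform:1.7'
  dir: 'infra/cicd'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    terraform init
    terraform apply -auto-approve -var "region=${_DR_REGION}"
substitutions:
  _DR_REGION: 'us-east4'   # one of several failover regions named in the BCP

Because the same configuration provisions the active pipeline during normal operations, the recovery path is exercised continuously instead of only during a disaster.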
In terms of automation, the implementation of best practices should be incorporated into your overall cloud IaC strategy. Your IaC solution should use tools such as Terraform to automatically provision the necessary resources of your CI/CD implementation and to configure the processes. IaC practices are highly effective recovery procedures because they are incorporated into the day-to-day functioning of your CI/CD pipelines. Additionally, IaC promotes the storage of your configuration files in source control, which in turn promotes adoption of the best practices for backups. After you implement your toolchain according to your BCP and the best practices for dependencies and automation, your CI/CD process and recovery strategies might change. Be sure to document any changes to recovery strategies, processes, and procedures that result from reviewing the BCP and implementing best practices. Test failure scenarios and maintain the plan It's critical to regularly review, test, and update your BCP on an ongoing basis. Testing the BCP and recovery procedures verifies that the plan is still valid and that the documented RPO and RTO targets are acceptable. Most importantly, however, regular testing, updating, and maintenance make executing the BCP a part of normal operations. Using Google Cloud, you can test recovery scenarios at minimal cost. We recommend that you do the following in order to help with your testing: Automate infrastructure provisioning with an IaC tool: you can use tools such as Terraform to automate the provisioning of CI/CD infrastructure. Monitor and debug your tests with Cloud Logging and Cloud Monitoring: Google Cloud Observability provides logging and monitoring tools that you can access through API calls, which means you can automate the deployment of recovery scenarios by reacting to metrics. When you're designing tests, make sure that you have appropriate monitoring and alerting in place that can trigger appropriate recovery actions. Perform the testing in your BCP: for example, you can test whether permissions and user access work in the DR environment like they do in the production environment. You can conduct integration and functional testing on your DR environment. You can also perform a test in which your usual access path to Google Cloud doesn't work. At Google, we regularly test our BCP through a process called DiRT (Disaster Recovery Testing). This testing helps Google verify impacts, automation, and expose unaccounted for risks. Changes to the automation and BCP that need to be implemented are an important output of DiRT. Best practices In this section, you learn about some best practices that you can implement to achieve your RTO and RPO objectives. These best practices apply broadly to DR for CI/CD, and not to specific tools. Regardless of your implementation, you should test your BCP regularly to ensure that high availability, RTO, and RPO meet your requirements. If an incident or disaster happens, you should also do a retrospective and analyze your process so that you can improve it. High availability For each tool, you should work to implement best practices for high availability. Following best practices for high availability puts your CI/CD process in a more proactive stance because these practices make the CI/CD process more resilient to failures. These proactive strategies should be used with more reactive controls and procedures for both recovery and backup. The following are a few best practices to achieve high availability. 
However, for more detailed guidance, consult the documentation for each tool in your CI/CD toolchain: Managed services: using managed services shifts the operational responsibility to Google Cloud. Autoscaling: where possible, use autoscaling. A key aspect of autoscaling is that worker instances are created dynamically, so recovery of failed nodes is automatic. Global and multi-region deployments: where possible, use global and multi-region deployments instead of regional deployments. For example, you can configure Artifact Registry for multi-region storage. Dependencies: understand all the dependencies of your tooling and ensure that those dependencies are highly available. For example, you can cache all the third-party libraries in your artifact registry. Backup procedures When you implement DR for CI/CD, some tools and processes are more suited to backup/restore strategies. A comprehensive backup strategy is the first step to effective reactive controls. Backups let you recover your CI/CD pipeline with minimal interruption in the case of bad actors or disaster scenarios. As a starting point, you should implement the following three best practices. However, for more detailed backup best practices, consult the documentation for each tool in your CI/CD process. Source control: store configuration files and anything you codify, such as automation scripts and policies, in source control. Examples include cloudbuild.yaml and Kubernetes YAML files. Redundancy: ensure that there is no single point of failure in how secrets such as passwords, certificates, and API keys are accessed. Examples of practices to avoid include only one person knowing the password or storing the API key on only a single server in a particular region. Backups: frequently verify the completeness and accuracy of your backups. Managed services such as Backup for GKE can help simplify your verification process. Recovery procedures DR also requires recovery procedures to complement backup processes. Your recovery procedures, combined with complete backups, determine how quickly you can respond to disaster scenarios. Dependency management Your CI/CD pipeline can have many dependencies, which can also be sources of failure. A full list of the dependencies should be identified as described earlier in this document in Identify data and dependencies. However, the two most common sources of dependencies are the following: Application artifacts: for example, packages, libraries, and images External systems: for example, ticketing and notification systems One way to mitigate the risks of dependencies is to adopt the practice of vendoring. Vendoring application packages or images is the process of creating and storing copies of them in a private repository. Vendoring removes the dependency on external sources for these packages or images, and it can also help prevent malware from being inserted into the software supply chain. Some of the benefits of vendoring application packages or images include the following: Security: vendoring removes the dependency on external sources for application packages or images, which can help prevent malware insertion attacks. Control: by vendoring their own packages or images, organizations have more control over the source of these packages and images. Compliance: vendoring can help organizations comply with industry regulations, such as the Cybersecurity Maturity Model Certification.
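For container images, vendoring can be as simple as copying the upstream image into a repository that you control and then referencing only that copy from your builds. The following cloudbuild.yaml sketch shows the idea; the upstream image, repository name, and region are assumptions, and you would typically run the job on a schedule so that the vendored copy stays current.

# vendor-base-image.cloudbuild.yaml (illustrative sketch; image, repository,
# and region are assumptions)
steps:
# Pull the upstream base image from Docker Hub.
- name: 'gcr.io/cloud-builders/docker'
  args: ['pull', 'docker.io/library/eclipse-temurin:17-jre']
# Re-tag it for the private Artifact Registry repository.
- name: 'gcr.io/cloud-builders/docker'
  args: ['tag', 'docker.io/library/eclipse-temurin:17-jre',
         'us-east1-docker.pkg.dev/$PROJECT_ID/vendored/eclipse-temurin:17-jre']
# Push the vendored copy; application builds reference this copy instead of Docker Hub.
# If the build is not already authenticated to Artifact Registry, add a
# 'gcloud auth configure-docker us-east1-docker.pkg.dev' step before this one.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push',
         'us-east1-docker.pkg.dev/$PROJECT_ID/vendored/eclipse-temurin:17-jre']

A similar pattern applies to language packages, for example by mirroring the Maven libraries that the example application needs into an Artifact Registry Maven repository.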
If your team decides to vendor application packages or images, follow these main steps: Identify the application packages or images that need to be vendored. Create a private repository for storing the vendored packages or images. Download the packages or images from the original source and store them in the private repository. Verify the integrity of the packages or images. Update the vendored packages or images as needed. CI/CD pipelines often call external systems to perform actions such as running scans, logging tickets, or sending notifications. In most cases, these external systems have their own DR strategies that should be implemented. However, in some cases they might not have a suitable DR plan, and those instances should be clearly documented in the BCP. You must then decide whether those stages in the pipeline can be skipped for availability reasons, or whether downtime for the CI/CD pipeline is acceptable. Monitoring and notifications Your CI/CD toolchain is a production system in its own right, so you also need to implement monitoring and notification techniques for your CI/CD tools. As a best practice, we recommend that you implement dashboards and alert notifications. The GitHub sample repository for Cloud Monitoring has many examples of dashboards and alerting policies. You can also implement additional levels of monitoring, such as Service Level Indicators (SLIs) and Service Level Objectives (SLOs). These monitoring levels help track the overall health and performance of your CI/CD pipelines. For example, SLOs can be implemented to track the latency of build and deployment stages. These SLOs help teams build and release applications at the rate and frequency that you expect. Emergency access procedures During a disaster, it might be necessary for operations teams to take action outside of standard procedures and gain emergency access to systems and tools. Such emergency actions are sometimes referred to as breakglass procedures. As a starting point, you should implement these three best practices: Have a clear escalation plan and procedure. A clear plan helps the operations team know when they need to use the emergency access procedures. Ensure that multiple people have access to critical information, such as configuration and secrets. Develop automated auditing methods, so that you can track when emergency access procedures were used and who used them. What's next Learn more about the Google Cloud products used in this document: Cloud Build Artifact Registry Cloud Deploy GKE Backup for GKE Google Cloud Backup and DR Service See the Disaster recovery planning guide. Try other Google Cloud features by completing our tutorials. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
ContributorsAuthors: Ben Good | Solutions ArchitectXiang Shen | Solutions ArchitectOther contributors: Marco Ferrari | Cloud Solutions ArchitectAnuradha Bajpai | Solutions ArchitectRodd Zurcher | Solutions Architect Send feedback \ No newline at end of file diff --git a/C3_AI_architecture_on_Google_Cloud.txt b/C3_AI_architecture_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..b4bf8a3abab0e39426c2b36a7e729bf0feb2db0a --- /dev/null +++ b/C3_AI_architecture_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/c3-ai-architecture +Date Scraped: 2025-02-23T11:46:44.067Z + +Content: +Home Docs Cloud Architecture Center Send feedback C3 AI architecture on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-05-02 UTC Authors: Scott Kurinskas, VP Platform Product Management, C3 AI | Siddharth Desai, Partner Engineer, Google This document describes the most effective ways to deploy C3 AI applications on Google Cloud. This document is intended to help systems administrators, cloud architects, and devops engineers streamline the deployment process of the C3 AI Platform on Google Cloud. It assumes that you are familiar with cloud computing concepts, Terraform, Google Kubernetes Engine (GKE), Virtual Private Cloud (VPC), and Private Service Connect (PSC). C3 AI is an enterprise AI software provider for accelerating digital transformation. The company offers a robust platform for developing AI applications and an expanding library of turnkey solutions. With an array of more than 40 prebuilt enterprise AI applications, C3 AI caters to the critical needs of global enterprises—spanning industries like manufacturing, finance, government, utilities, and others. Reference architecture The following architecture diagram shows the data flow when you deploy the C3 AI Platform on a C3 AI tenant: As shown in the preceding diagram, this example architecture contains the following features: Customer-centric projects: Each customer is provided with a dedicated project within C3 AI's Google Cloud organization. This strategic allocation enables personalized engagement while maintaining isolation between customers. This isolation helps ensure data integrity and confidentiality. VPC Service Control perimeter: In the customer's project, the VPC Service Control perimeter helps ensure that all infrastructure components remain securely contained within a defined perimeter. Using a defined perimeter helps prevent unauthorized data leaks and helps mitigate potential risks. In this architecture, requests to the C3 AI Platform originate securely from the customer's VPC using PSC. The requests are then load balanced and directed to GKE. As outlined in the preceding diagram, GKE serves as the processing infrastructure for the C3 AI Platform within the customer project. Enterprise data is hosted on Cloud SQL, enabling streamlined processing. Additionally, Cloud Storage offers object storage for managing application and platform configurations. Cloud Storage also supports various ancillary tasks. Architecture components The preceding diagram includes the following components: VPC network: A VPC network provides isolated, private spaces in Google Cloud for each customer where their resources can reside, helping to ensure a secure and controlled environment for data and applications. 
Cloud network address translation (NAT): Cloud NAT facilitates outbound internet connectivity for resources within the VPC network, maintaining security while enabling external communication. GKE: The foundation of C3 AI's architecture, GKE orchestrates and accelerates computations for the C3 AI Platform. It uses dedicated GKE node pools for specific accelerator types, ensuring efficient resource allocation and optimized platform performance. Cloud Key Management Service (KMS): Customer-Managed Encryption Keys (CMEK) from Cloud KMS help fortify data security within the customer's VPC on the C3 AI environment. CMEKs encrypt data at rest in Cloud SQL, Cloud Storage, and within GKE clusters. Cloud SQL: C3 AI extends the capabilities of Cloud SQL to furnish on-demand databases for storing metrics and model facets. Extending these capabilities helps to enhance the versatility and data management abilities of the architecture. PSC: Using PSC helps ensure secure communication between the customer's Google Cloud organization and their allocated project that's hosted by C3 AI. PSC establishes a private connection that doesn't rely on the public internet. That connection helps guarantee data privacy, integrity, and a seamless connection. Cloud Storage: Reliable and secure object storage used for the management of application and platform configuration and other ancillary tasks. The use of these components helps optimize application performance and helps adhere to the standards of C3 AI's terms of use. By deploying and managing applications on their dedicated tenant, C3 AI helps ensure that an environment has been fine-tuned to an application's unique requirements. Alternative approach This section contains an alternative approach to implementing a C3 AI architecture on a customer tenant. The following diagram shows the C3 AI Platform deployed within a VPC that's owned by the customer within their organization: In scenarios where stringent data security, compliance, or specialized operational needs come into play, C3 AI extends its support for deploying applications on the customer's tenant. This approach caters to cases where data sovereignty, regulatory demands, or unique operational frameworks require a more personalized deployment. An example of such a case is the General Data Protection Regulation (GDPR), which is enforced by the European Union (EU). The GDPR mandates that organizations operating within EU member states adhere to strict guidelines regarding the processing and storage of personal data. This mandate includes requirements for data localization, where sensitive information might need to be stored within the geographic boundaries of the jurisdiction where the data originates. By deploying applications on the customer's tenant, and by maintaining control over the storage and processing of their data within specific jurisdictions, organizations can help ensure compliance with GDPR and other similar regulations. Within the customer's organization, C3 AI has its own VPC. This dedicated environment helps ensure data isolation and serves as a customized space for C3 AI's solutions. PSC facilitates the establishment of a private, secure connection between the C3 AI VPC and the customer's network. This connection doesn't use the public internet, helping to safeguard data and enabling seamless communication. 
Products used This reference architecture uses the following Google Cloud products: Google Kubernetes Engine Virtual Private Cloud Cloud SQL Private Service Connect Cloud Storage Design considerations This section provides guidance to help you use this document as a starting point to develop an architecture that meets your specific requirements for security, reliability, operational efficiency, cost, and performance. Security, privacy, and compliance PSC sets up a dedicated, secure, and private connection that doesn't rely on the public internet between the customer's Google Cloud organization and their allocated project that's hosted on C3 AI's Google Cloud organization. By implementing VPC Service Controls, C3 AI helps to fortify your architecture against data exfiltration. This safeguard helps to ensure that data remains within the confines of C3 AI's designated perimeters. Reliability Google Cloud products help ensure reliability in the following ways: The GKE cluster is a regional GKE cluster which provides a control plane SLA of 99.95%. Cloud SQL instances are deployed regionally. Cloud KMS keys are regional keys. Cloud NAT is regional. Cloud Storage is configured regionally. Deployment For a comprehensive guide that explains how to install C3 AI on Google Cloud, see C3 AI Installation Guide – Google Cloud. What's next For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Scott Kurinskas | VP Platform Product Management (C3 AI)Siddharth Desai | Partner EngineerOther contributors: Zach Berger | Senior Cloud Infrastructure EngineerTanvi Desai | Technical Account ManagerEzra Uzosike | Cloud Solutions ArchitectEmily Qiao | AI/ML Customer EngineerJiyeon Kang | Customer Engineer Send feedback \ No newline at end of file diff --git a/CAMP.txt b/CAMP.txt new file mode 100644 index 0000000000000000000000000000000000000000..ad8621d9f13633943f49f0f2a71cd22d475afd34 --- /dev/null +++ b/CAMP.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/camp +Date Scraped: 2025-02-23T11:58:10.997Z + +Content: +Cloud App Modernization Program (CAMP)CAMP has been designed as an end-to-end framework to help guide organizations through their modernization journey by assessing where they are today and identifying their most effective path forward. Download CAMP WhitepaperBook a consultationCAMP overviewWatch a quick intro to CAMPSolutionsCommon patterns for modernizationApp Mod efforts fall into three common categories and CAMP provides customers with best practices and tooling for each in addition to assessments that guide them on where to start.Move and improveModernize traditional applicationsAnalyze, categorize, and get started with cloud migration on traditional workloads.Explore modernization options for traditional applications like Java and .NET.Migrate from PaaS: Cloud Foundry, OpenshiftMove your containers into Google's managed container services for a more stable and flexible experience.Learn about tools and migration options for modernizing your existing platforms.Unlock legacy with ApigeeAdapt to changing market needs while leveraging legacy systems.Learn how to connect legacy and modern services seamlessly.Migrate from mainframeMove on from proprietary mainframes and innovate with cloud-native services. 
Learn about Google's automated tools and prescriptive guidance for moving to the cloud.Build and operateModernize software delivery: Secure software supply chain, CI/CD best practices, Developer productivityOptimize your application development environment, improve your software delivery with modern CI/CD, and secure your software supply chain.Learn about modern and secure cloud-based software development environments. DevOps best practicesAchieve elite performance in your software development and delivery.Learn about industry best practices that can help improve your technical and cultural capabilities to drive improved performance. SRE principles Strike the balance between speed and reliability with proven SRE principals.Learn how Google Cloud helps you implement SRE principles through tooling, professional services, and other resources.Day 2 operations for GKESimplify your GKE platform operations and build an effective strategy for managing and monitoring activities.Learn how to create a unified approach for managing all of your GKE clusters for reduced risk and increased efficiency. FinOps and optimization of GKEContinuously deliver business value by running reliable, performant, and cost efficient applications on GKE.Learn how to make signal driven decisions and scale your GKE clusters based on actual usage and industry best practices.Cloud and beyondRun applications at the edgeUse Google's hardware agnostic edge solution to deploy and govern consistent, localized, and low latency applications.Learn how to enhance your customer experience and employee productivity using an edge strategy.Architect for multicloudManage workloads across multiple clouds with a consistent platform. Learn how Google allows for a flexible approach to multicloud environments for container management and application delivery.Go serverlessEasily build enterprise grade applications with Google Cloud Serverless technologies.Learn how to use tools like Cloud Build, Cloud Run, Cloud Functions, and more to speedup your application delivery. API management Leverage API life cycle management to support new business growth and empower your ecosystem.Learn about usage of APIs as a power tool for a flexible and expandable modern application environment. Guided assessments DevOps best practicesDORA assessmentCompare your DevOps capabilities to that of the industry based on the DORA research and find out how to improve. Learn about the DORA research and contact us to see if a DevOps assessment is right for you. Modernizing traditional apps mFit assessmentPlatform owners can leverage our fit assessment tool to evaluate large VMware workloads and determine if they are good candidates for containerization. Learn about mFit and Google cloud's container migration options and schedule a consultation with us to review your strategy. CAST assessmentThis code level analysis of your traditional applications allows you to identify your best per application modernization approach. Learn more about CAST and contact us to see if this is the right assessment for you.Modernizing mainframe platformsMainFrame application portfolio assessment (MAPA)This assessment is designed to help customers build a financial and strategic plan for their migration based on complexity, risk and cost for each application.Learn about this survey based application level assessment and contact us to start your mainframe migration today. Feeling inspired? Let’s solve your challenges togetherAre you ready to learn about the latest application development trends? 
Contact usCloud on Air: Watch this webinar to learn how you can build enterprise ready serverless applications. Watch the webinarCustomersCustomer storiesVideoHow British Telecom is leveraging loosely coupled architecture to transform their businessVideo (2:35)VideoHow CoreLogic is replatforming 10,000+ Cloud Foundry app-instances with GoogleVideo (16:16)VideoHow Schlumberger is using DORA recommendations to improve their software delivery and monitoring. Video (2:10)Case studyGordon Food Services goes from four to 2,920 deployments a year using GKE 5-min readSee all customersPartnersOur partnersTo help facilitate our customers' modernization journey, Google works closely with a set of experienced partners globally. See all partnersTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all solutionsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Carbon_Footprint.txt b/Carbon_Footprint.txt new file mode 100644 index 0000000000000000000000000000000000000000..21739517b1c0924eaf976f230b6fa3d45b649faf --- /dev/null +++ b/Carbon_Footprint.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/carbon-footprint +Date Scraped: 2025-02-23T12:05:51.152Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '24. Let's go.Jump to Carbon FootprintCarbon FootprintMeasure, report, and reduce your cloud carbon emissions. Go to consoleInclude both location-based and market-based carbon emissions data in reports and disclosures Visualize carbon insights using dashboards and chartsReduce the emissions of cloud applications and infrastructure8:02Watch this session and learn how to measure carbon emissions on Google CloudBenefitsAccurately measure your carbon footprint View the location-based and market-based emissions that are derived from your Google Cloud usage, providing transparency into emissions associated with your cloud applications.Track granular emissions profile of cloud projects Monitor your cloud emissions over time by project, product, and region—giving IT teams and developers metrics that can help them improve their carbon footprint. Share detailed methodology with reviewers Our detailed calculation methodology is published so that reviewers and reporting teams can verify that their emissions data meets GHG Protocol.Key featuresKey features[NEW] Dual-reporting of both location-based and market-based emissionsIn the in-console dashboard and BigQuery export, we provide both scope 2 market-based and scope 2 location-based emissions data, in addition to scope 1 and scope 3 emissions. Compared to only reporting market-based data for electricity-related (scope 2) emissions, the dual-reporting offers more transparency and comprehensive insights for your varying use cases.In-console dashboardThe dashboard data summary gives you a high-level overview of both location-based and market-based carbon emissions from all 3 scopes associated with the usage of covered Google Cloud services for your account. Automated and granular exports to BigQuery You can export your Carbon Footprint data to BigQuery in order to perform data analysis, create custom dashboards and reports, or include the data in your organization's emissions accounting tools. 
You can dive deeper and analyze your carbon emissions according to Google Cloud service, project, region, and month for the selected billing account over all available months.Location-based emission reduction estimatesCarbon Footprint data is integrated with unattended project recommender, which provides you with estimates of the location-based emission reductions you could achieve by removing idle projects. Third-party review of Carbon Footprint methodology We've published a review statement from third-party experts in sustainability consulting which concludes that Carbon Footprint's methodology is a reasonable and appropriate way to calculate and allocate emissions from Google Cloud products, per the GHG Protocol. CustomersHelping customers reduce their environmental impactGoogle Cloud is working with our customers to help them achieve their own sustainability goals. Blog postHelping Etsy achieve 100% renewable energy for their commerce platform5-min readBlog post SADA Systems is leveraging carbon-free energy scores to design cleaner applications 3-min readBlog postSunPower partners with Google Cloud to make solar more accessible5-min readSee all customersWhat's newGet the latest carbon reporting news and eventsSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.ReportSustainability is high priority in IT leaders research reportRead reportBlog postPicking the greenest region for your Google Cloud resources Read the blogVideoStephanie Wong explains how to use carbon-free energy scoresWatch videoBlog postA year of carbon-free energy at our data centersRead the blogDocumentationDocumentationWhitepaperCarbon Footprint reporting methodology Review the calculation methodology that Google Cloud uses to give our customers a report tailored to their specific gross carbon footprint. Learn moreGoogle Cloud BasicsExport your carbon footprintLearn how to export your Carbon Footprint data to BigQuery or Sheets in order to perform data analysis and create custom dashboards and reports.Learn moreBest PracticeHow to reduce your cloud carbon‌ ‌footprint‌ Read our best practices guide for building Google Cloud applications that have a minimal carbon footprint.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead the latest updates on Carbon Footprint reporting PricingPricingCarbon Footprint is provided for no charge to all Google Cloud customers.If you export your carbon emissions data to BigQuery you'll incur minimal fees. To get an idea of what charges you might expect, see estimating storage and query costs.View pricing detailsPartnersData, reporting, and consulting partners You can integrate your Google Cloud emissions data with popular carbon accounting tools. We also partner with organizations that specialize in emissions and environmental data. Expand allEmissions reporting or consulting These partners can help you report company-wide emissions and prepare carbon disclosures. Data and visualization These partners provide data and tools to help you better understand your environmental impact, risks, and opportunities. 
See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_APIs.txt b/Cloud_APIs.txt new file mode 100644 index 0000000000000000000000000000000000000000..9096c349de0b6923a59b0cdea3918d0feec66347 --- /dev/null +++ b/Cloud_APIs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/apis +Date Scraped: 2025-02-23T12:05:44.094Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayGoogle Cloud APIsGoogle Cloud APIs allow you to automate your workflows by using your favorite language. Use these Cloud APIs with REST calls or client libraries in popular programming languages.View API managerView documentationNavigate toCloud Architecture CenterUsing Cloud APIs Google Cloud Marketplace Filter byFiltersAI and machine learning APIsCompute APIsStorage and database APIsNetworking APIsData analytics APIsManagement tools APIsOperations APIsSecurity and identity APIsManaged infrastructure APIssearchsendAI and machine learning APIsVertex AI API Train high-quality custom machine learning models with minimal machine learning expertise and effort.Foundation model APIs Pretrained multitask large models, including Gemini models, that can be tuned or customized for specific tasks using Vertex AI. Vision APIIntegrates image labeling, face, logo, and landmark detection, OCR, and detection of explicit content in apps. Speech-to-Text APIUses fast and accurate speech recognition to convert audio to text in over 125 languages and variants.Cloud Natural Language APIAnalyzes the structure and meaning of text, including sentiment analysis, entity recognition, and text annotations.Cloud Translation APITranslates text from one language to another.Dialogflow APIAn end-to-end development suite for conversational interfaces like chatbots or voice-powered apps and devices.AutoMLTrain high-quality custom machine learning models with minimal effort and machine learning expertise.Learn more Compute APIsCloud Run Admin APIDeploy and manage user-provided container images that auto-scale based on HTTP traffic.OS Login APIAssociates SSH public keys with Google accounts for accessing Compute Engine instances.OS Config APIManages OS patches and configurations on Compute Engine instances.Compute Engine APICreates and runs virtual machines on Google Cloud.Kubernetes Engine APIBuilds and manages clusters that run container-based applications, powered by Kubernetes.Compute Engine Instance Group Updater APIUpdates groups of Compute Engine instances.Cloud Functions APIManages lightweight user-provided functions executed in response to events.App Engine Admin APIProvisions and manages App Engine applications.Storage and database APIsCloud Bigtable Admin APIManages your Cloud Bigtable instances, clusters, and tables.Cloud Bigtable Data APIAccesses the NoSQL big data solution for storing terabytes or petabytes of schemaless data.Datastore APIAccesses the schemaless NoSQL document database to provide fully managed, robust, scalable storage for your app.Cloud Spanner APICreates, deletes, modifies, and lists Cloud Spanner instances and databases. 
Executes transactions in Cloud Spanner databases.Cloud SQL Administration APICreates and configures Cloud SQL instances, which provide fully managed MySQL databases.Cloud Storage APIStores and retrieves potentially large, immutable data objects.Storage Transfer APITransfers data from external data sources to a Google Cloud Storage bucket or between Google Cloud Storage buckets.AlloyDB APICreates and configures AlloyDB clusters, which provide fully managed PostgreSQL databases.Networking APIsCloud DNS APIConfigures and serves authoritative DNS records.Cloud Domains APIEnables management and configuration of domain names.Compute Engine APICreates and runs virtual machines on Google Cloud.Network Connectivity APIProvides enterprise connectivity from your on-premises network or from another cloud provider to Google Cloud VPC network.Network Management APIProvides a collection of network performance monitoring and diagnostic capabilities.Serverless VPC AccessManage VPC access connectors.Service Directory APIDiscover, publish and connect services.Service Networking APIProvides automatic management of network configurations necessary for certain services.Data analytics APIsBigQuery APICreates, manages, shares, and queries data.BigQuery Data Transfer APISimplified data imports to BigQueryDataflow APIDevelops and executes data processing patterns like ETL, batch computation, and continuous computation.Dataproc APIManages Hadoop-based clusters and jobs on Google Cloud.Cloud Composer APIFully managed workflow orchestration service.Cloud Life Sciences APIProcess, analyze, and annotate genomics and biomedical data at scale using containerized workflows.Pub/Sub APIProvides reliable, many-to-many, asynchronous messaging between applications.Cloud Healthcare APIStandards-based APIs powering actionable healthcare insights for security and compliance-focused environments.Looker APIRun queries, manage content and administer Looker for embedded analytics.Looker Studio APISearch for and manage assets to automate management and migrations.Management tools APIsCloud Billing APIRetrieves Google Cloud Console billing accounts and associates them with projects.Cloud Billing BudgetView, create, and manage Cloud Billing budgets programmatically at scale.Cloud Billing Catalog APIProgrammatic access to the entire public Google Cloud catalog consisting of billable SKUs, public pricing, and relevant metadata.Cloud Build APIBuilds images and artifacts in the cloud.Deployment Manager APIDeclares, configures, and deploys complex solutions on Google Cloud.Cloud Runtime Configuration APIProvides capabilities for dynamic configuration and coordination for applications running on Google Cloud.Cloud Scheduler APIFully managed enterprise-grade cron job scheduler.Cloud Tasks APIAllows you to manage the execution, dispatch, and delivery of a large number of distributed tasks.Operations APIsCloud Logging APIWrites log entries and manages your logs, log exports, and logs-based metrics.Cloud Monitoring APIManages your Cloud Monitoring data and configurations.Error Reporting APIGroups and counts errors from cloud services, provides read access to error groups and their associated errors.Cloud Trace APISends and retrieves trace data from Cloud Trace for display, reporting, and analysis.Security and identity APIsResource Manager APIProvides methods for creating, reading, and updating project metadata.Identity and Access Management APIManages identity and access control for Google Cloud resources.Cloud Data Loss PreventionFully managed 
service designed to help you discover, classify, and protect your most sensitive data.Cloud Key Management Service APILets you manage cryptographic keys for your cloud services the same way you do on-premises.Binary Authorization APIManages policies, attestors, and attestations in Binary Authorization.Cloud Asset APIManages the history and inventory of cloud resources.Managed infrastructure APIsService Management APIProvides methods for publishing managed services and managing service configurations.Service Control APIProvides control plane functionality for managed services, including access control and integration with logging and monitoring services.Service Consumer Management APIProvides utilities to help managed service producers manage their relationships with their services' consumers.Service Usage APIProvides methods to list, enable, and disable APIs in Google Cloud projects.Take the next stepTell us what you're solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Architecture_Center.txt b/Cloud_Architecture_Center.txt new file mode 100644 index 0000000000000000000000000000000000000000..b29940a4f61f32a8fd6168de691d3a7aecd9c082 --- /dev/null +++ b/Cloud_Architecture_Center.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture +Date Scraped: 2025-02-23T12:11:23.552Z + +Content: +Home Stay organized with collections Save and categorize content based on your preferences. Cloud Architecture Center Discover reference architectures, design guidance, and best practices for building, migrating, and managing your cloud workloads. See what's new! construction Operational excellence security Security, privacy, compliance restore Reliability payment Cost optimization speed Performance optimization Explore the Architecture Framework Deployment archetypes Learn about the basic archetypes for building cloud architectures, and the use cases and design considerations for each archetype: zonal, regional, multi-regional, global, hybrid, and multicloud. Infrastructure reliability guide Design and build infrastructure to run your workloads reliably in the cloud. Landing zone design Design and build a landing zone that includes identity onboarding, resource hierarchy, network design, and security controls. Enterprise foundations blueprint Design and build a foundation that enables consistent governance, security controls, scale, visibility, and access. 
forward_circle Jump Start Solution guides 14 Learn and experiment with pre-built solution templates book Design guides 97 Build architectures using recommended patterns and practices account_tree Reference architectures 83 Deploy or adapt cloud topologies to meet your specific needs AI and machine learning 26 account_tree Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search book Build and deploy generative AI and machine learning models in an enterprise forward_circle Generative AI RAG with Cloud SQL More arrow_forward Application development 72 book Microservices overview account_tree Patterns for scalable and resilient apps More arrow_forward Databases 13 book Multi-cloud database management More arrow_forward Hybrid and multicloud 33 book Hybrid and multicloud overview account_tree Patterns for connecting other cloud service providers with Google Cloud account_tree Authenticate workforce users in a hybrid environment More arrow_forward Migration 26 book Migrate to Google Cloud book Database migration More arrow_forward Monitoring and logging 14 account_tree Stream logs from Google Cloud to Splunk More arrow_forward Networking 32 book Best practices and reference architectures for VPC design account_tree Hub-and-spoke network architecture account_tree Build internet connectivity for private VMs More arrow_forward Reliability and DR 11 book Google Cloud infrastructure reliability guide book Disaster recovery planning guide forward_circle Load balanced managed VMs More arrow_forward Security and IAM 42 book Identity and access management overview book Enterprise foundations blueprint account_tree Automate malware scanning for files uploaded to Cloud Storage More arrow_forward Storage 14 book Design an optimal storage strategy for your cloud workload More arrow_forward stars Google Cloud certification Demonstrate your expertise and validate your ability to transform businesses with Google Cloud technology. verified_user Google Cloud security best practices center Explore these best practices for meeting your security and compliance objectives as you deploy workloads on Google Cloud. Google Cloud Migration Center Accelerate your end-to-end migration journey from your current on-premises environment to Google Cloud. Google Cloud partner advantage Connect with a Google Cloud partner who can help you with your architecture needs. 
Send feedback \ No newline at end of file diff --git a/Cloud_Armor.txt b/Cloud_Armor.txt new file mode 100644 index 0000000000000000000000000000000000000000..4f6894b545357d49d043fdae2c859974fd9ed7db --- /dev/null +++ b/Cloud_Armor.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/armor +Date Scraped: 2025-02-23T12:07:04.872Z + +Content: +Jump to Cloud ArmorGoogle Cloud ArmorHelp protect your applications and websites against denial of service and web attacks.Go to consoleContact salesBenefit from DDoS protection and WAF at Google scaleDetect and mitigate attacks against your Cloud Load Balancing workloads or VMsAdaptive Protection ML-based mechanism to help detect and block Layer 7 DDoS attacksMitigate OWASP Top 10 risks and help protect workloads on-premises or in the cloudBot management to stop fraud at the edge through native integration with reCAPTCHA EnterpriseGoogle Cloud a Strong Performer in The Forrester Wave™: Web Application Firewalls, Q3 2022 reportRead the reportBenefitsBuilt-in DDoS defenseCloud Armor benefits from our experience of protecting key internet properties, such as Google Search, Gmail, and YouTube. It provides built-in defenses against L3 and L4 DDoS attacks.Mitigate OWASP Top 10 risksCloud Armor provides predefined rules to help defend against attacks, such as cross-site scripting (XSS) and SQL injection (SQLi) attacks.Enterprise-grade protectionWith Cloud Armor Enterprise tier, you will get access to DDoS and WAF services, curated rule sets, and other services for a predictable monthly price.Key featuresKey featuresAdaptive protectionAutomatically detect and help mitigate high volume Layer 7 DDoS attacks with an ML system trained locally on your applications. Learn more.Advanced network DDoS protectionAlways-on attack detection and mitigation to defend against volumetric network and protocol DDoS attacks to workloads using external network load balancers, protocol forwarding, and VMs with Public IP addresses. Learn more. Pre-configured WAF rulesOut-of-the-box rules based on industry standards to mitigate against common web-application vulnerabilities and help provide protection from the OWASP Top 10. Learn more in our WAF rules guide.Bot managementProvides automated protection for your apps from bots and helps stop fraud in line and at the edge through native integration with reCAPTCHA Enterprise. Learn more.Rate limitingRate-based rules help you protect your applications from a large volume of requests that flood your instances and block access for legitimate users. Learn more.View all features7:08Getting started with Cloud Armor Adaptive ProtectionCustomersLearn from customers using Cloud ArmorCase studyHow Google Cloud Armor helps Broadcom block DDoS Attacks5-min readBlog postCloud Armor Adaptive Protection5-min readBlog postCloud Armor helps mitigate a wide array of threats2-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postGoogle mitigated the largest DDoS attack to date, peaking above 398 million rpsRead the blogBlog postHow it works: The novel HTTP/2 ‘Rapid Reset’ DDoS attackLearn moreBlog postCloud Armor for regional application load balancersRead the blogBlog postIntroducing Cloud Armor WAF enhancements Read the blogBlog postIntroducing advanced DDoS protection with Cloud ArmorLearn moreBlog postHow Project Shield helped protect U.S. 
midterm elections from DDoS attacksRead the blogDocumentationDocumentationTutorialCloud Armor overviewLearn how Cloud Armor works and see an overview of Cloud Armor features and capabilities.Learn moreTutorialHands-on lab: Application load balancer with Cloud ArmorLearn how to configure an HTTP load balancer with global back ends, stress test the load balancer, and denylist the stress test IP.Learn moreGoogle Cloud BasicsCloud Armor security policy overviewUse Google Cloud Armor security policies to help protect your load-balanced applications from distributed denial of service (DDoS) and other web-based attacks.Learn moreTutorialCloud Armor EnterpriseCloud Armor Enterprise helps protect your web applications and services from distributed denial-of-service (DDoS) attacks and other threats from the internet. Learn moreTutorialBot managementProvides effective management of automated clients' requests toward your back ends through native integration with reCAPTCHA Enterprise.Learn moreTutorialRate limitingRate-based rules help you protect your applications from a large volume of requests that flood your instances and block access for legitimate users.Learn moreTutorialConfiguring Google Cloud Armor security policiesUse these instructions to filter incoming traffic to application load balancer by creating Google Cloud Armor security policies.Learn moreGoogle Cloud BasicsConfiguring Google Cloud Armor through GKE IngressLearn how to use a BackendConfig custom resource to configure Google Cloud Armor in Google Kubernetes Engine (GKE).Learn moreTutorialTuning Google Cloud Armor WAF rulesPreconfigured web application firewall (WAF) rules with dozens of signatures that are compiled from open source industry standards.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud ArmorAll featuresAll featuresPre-defined WAF rules to mitigate OWASP Top 10 risksOut-of-the-box rules based on industry standards to mitigate against common web-application vulnerabilities and help provide protection from the OWASP Top 10.Rich rules language for web application firewallCreate custom rules using any combination of L3–L7 parameters and geolocation to help protect your deployment with a flexible rules language.Visibility and monitoringEasily monitor all of the metrics associated with your security policies in the Cloud Monitoring dashboard. You can also view suspicious application traffic patterns from Cloud Armor directly in the Security Command Center dashboard.LoggingGet visibility into Cloud Armor decisions as well as the implicated policies and rules on a per-request basis via Cloud Logging.Preview modeDeploy Cloud Armor rules in preview mode to understand rule efficacy and impact on production traffic before enabling active enforcement.Policy framework with rulesConfigure one or more security policies with a hierarchy of rules. Apply a policy at varying levels of granularity to one or many workloads.IP-based and geo-based access controlFilter your incoming traffic based on IPv4 and IPv6 addresses or CIDRs. 
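As a concrete illustration of this kind of IP-based filtering, the following is a minimal sketch that creates a security policy and adds a denylist rule, assuming the google-cloud-compute Python client library; the project ID, policy name, and CIDR ranges are placeholders rather than values from this page.

    # Sketch: create a Cloud Armor security policy and add an IP denylist rule
    # with the google-cloud-compute client library. Project, policy name, and
    # ranges below are placeholders.
    from google.cloud import compute_v1

    project = "my-project"          # placeholder
    policy_name = "edge-policy"     # placeholder

    client = compute_v1.SecurityPoliciesClient()

    # Create an empty policy (it always contains a default rule at the lowest priority).
    policy = compute_v1.SecurityPolicy(
        name=policy_name,
        description="Example policy for IP-based access control",
    )
    op = client.insert(project=project, security_policy_resource=policy)
    op.result()  # wait for the operation to finish

    # Add a rule that denies traffic from specific source ranges.
    deny_rule = compute_v1.SecurityPolicyRule(
        priority=1000,
        action="deny(403)",
        description="Block known-bad ranges",
        match=compute_v1.SecurityPolicyRuleMatcher(
            versioned_expr="SRC_IPS_V1",
            config=compute_v1.SecurityPolicyRuleMatcherConfig(
                src_ip_ranges=["203.0.113.0/24", "198.51.100.0/24"],  # placeholders
            ),
        ),
    )
    op = client.add_rule(
        project=project,
        security_policy=policy_name,
        security_policy_rule_resource=deny_rule,
    )
    op.result()

Geo-based rules follow the same pattern, using a rules-language expression that matches origin.region_code instead of source IP ranges.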
Identify and enforce access control based on geographic location of incoming traffic.Support for hybrid and multicloud deploymentsHelp defend applications from DDoS or web attacks and enforce Layer 7 security policies whether your application is deployed on Google Cloud or in a hybrid or multicloud architecture.Named IP ListsAllow or deny traffic through a Cloud Armor security policy based on a curated Named IP List.PricingPricingGoogle Cloud Armor is offered in two service tiers, Standard and Cloud Armor Enterprise.View pricing detailsA product or feature listed on this page is in preview. Learn more about product launch stages.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Build(1).txt b/Cloud_Build(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..883c5170bc0e72d3a15a2c124252241234813d02 --- /dev/null +++ b/Cloud_Build(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/build +Date Scraped: 2025-02-23T12:04:19.792Z + +Content: +Jump to Cloud BuildCloud BuildBuild, test, and deploy on our serverless CI/CD platform.New customers get $300 in free credits to spend on Cloud Build. All customers get 2,500 build-minutes free per month, not charged against your credits. Go to consoleContact salesBuild software quickly across all programming languages, including Java, Go, Node.js, and moreChoose from 15 machine types and run hundreds of concurrent builds per pool Deploy across multiple environments such as VMs, serverless, Kubernetes, or FirebaseAccess cloud-hosted, fully managed CI/CD workflows within your private networkKeep your data at rest within a geographical region or specific location with data residencyAccelerate State of DevOps ReportDownload the reportBenefitsFully serverless platform for scaleCloud Build scales up and down with no infrastructure to set up, upgrade, or scale. Run builds in a fully managed environment in Google Cloud with connectivity to your own private network. Native enterprise source integrationsIntegrate with some of the most popular enterprise source control systems with Cloud Build’s out-of-the-box support for GitHub Enterprise, GitLab Enterprise and Bitbucket Data Center. Software supply chain security and complianceScan images locally or in your registry for vulnerabilities. Use provenance for auditing and control deployments to production. Protect against software supply chain attacks with SLSA level 3 build support.Key featuresKey featuresExtremely fast buildsAccess machines connected via Google’s global network to significantly reduce your build time. Run builds on high-CPU VMs or cache source code, images, or other dependencies to further increase your build speed.Automate your deploymentsCreate pipelines as a part of your build steps to automate deployments. Deploy using built-in integrations to Google Kubernetes Engine, Cloud Run, App Engine, Cloud Functions, and Firebase. Use Spinnaker with Cloud Build for creating and executing complex pipelines.CI/CD across your network at scaleChoose from default or private pools to run your workloads based on your networking and scaling needs. Default pool lets you run builds in a secure, hosted environment with access to the public internet. 
Private pools are private, dedicated pools of workers offering you greater flexibility over the build environment with greater concurrency, and the ability to access resources in a private network.Commit to deploy in minutesGoing from PR to build, test, and deploy can’t be simpler. Set up triggers to automatically build, test, or deploy source code when you push changes to GitHub, Cloud Source Repositories, GitLab, or a Bitbucket repository.SLSA level 3 compliance Automatically generate provenance metadata and attestations for container images and language packages at build time to trace binary to the source code and prevent tampering. Verify the attestations using the built-in integration with Binary Authorization to deploy images built and signed by Cloud Build. Scan your artifacts with on-demand scanning to shift security left. Trigger fully managed CI/CD workflows from private source code repositories hosted in private networks, including GitHub Enterprise. View all features8:42Vendasta reduces build time by 80%, with 30% more deployments with Cloud BuildCustomersLearn from customers using Cloud BuildVideoVodafone built an automated and scalable AI/ML platform with Google Cloud's AI and DevOps technologies16:33Case studyGordon Food Service goes from four deployments a year to 2,9205-min readCase studyMercari radically improved its feature development in the United States5-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoHow Vodafone is supercharging AI/ML at scale with Google CloudWatch videoVideoRepeatable Google Cloud environments with Cloud Build infra-as-code pipelinesWatch videoVideoShift left: Continuous integration testing with Cloud BuildWatch videoVideoMaintain control over hybrid workloads with DevOps best practices Watch videoDocumentationDocumentationGoogle Cloud Basics Cloud Build conceptsLearn more about Cloud Build, including build configurations, different types of cloud builders, and CMEK compliance.Learn moreTutorialBuilding and pushing images with Cloud BuildLearn how to enable Cloud Build, prepare source files to build, create a Docker repository in Artifact Registry, build an image, and view build results.Learn moreBest Practice Speeding up your buildsThis page provides best practices for speeding up Cloud Build builds.Learn moreTutorialCustom build steps with Cloud BuildLearn how to use community-contributed builders and custom builders in Cloud Build.Learn moreTutorialDeveloping applications with Google CloudLearn how to design, develop, and deploy applications that seamlessly integrate components from the Google Cloud ecosystem.Learn moreTutorial Implementing Binary Authorization using Cloud Build and GKESee how to set up, configure, and use Binary Authorization for Google Kubernetes Engine (GKE).Learn moreTutorial Infrastructure as code with Terraform, Cloud Build & GitOpsDiscover how to manage infrastructure as code with Terraform and Cloud Build using the popular GitOps methodology.Learn moreTutorial Continuous deployment from Git using Cloud BuildLearn how to use Cloud Build to automate builds and deployments using a Cloud Build trigger.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the 
latest releases for Cloud BuildAll featuresAll features Native Docker supportJust import your existing Docker file to get started. Push images directly to Docker image storage repositories such as Docker Hub and Artifact Registry. Automate deployments to Google Kubernetes Engine or Cloud Run for continuous delivery. Generous free tierSay goodbye to managing your own build servers with 2,500 free build-minutes per month and up to 10 concurrent builds included. Build-minutes are not incurred for the time a build is in queue. Powerful insightsGet detailed insights into build results along with build errors and warnings for easy debugging. Filter build results using tags or queries to learn about slow performing builds or time-consuming tests. Identify vulnerabilitiesIdentify package vulnerabilities for your container images and language packages. Automatically perform package vulnerability scanning for Ubuntu, Debian, and Alpine. Build locally or in the cloudRun builds locally before submitting to the cloud. Build and debug on your local machine with the open source local builder.Private Pools Use VPC Peering and VPC-SC to set up a secure private network for your CI/CD workloads. Choose from regions across the world to meet regulatory obligations. You can also limit public IPs or reserve static IP addresses. First class integrations with private source repositories are available out of the box. Run hundreds of concurrent builds per pool to speed up your build and tests.PricingPricingPay for what you use above the monthly free tier. For more details, see the pricing guide.FeaturePricing (USD)First 2,500 build-minutes per monthFreeAdditional build-minutes$0.006 per minuteIf you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.A product or feature listed on this page is in alpha. Learn more about product launch stages.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Build(2).txt b/Cloud_Build(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..22080fd45538a6530d45db830b55fa9fa76f9f0f --- /dev/null +++ b/Cloud_Build(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/build +Date Scraped: 2025-02-23T12:04:52.628Z + +Content: +Jump to Cloud BuildCloud BuildBuild, test, and deploy on our serverless CI/CD platform.New customers get $300 in free credits to spend on Cloud Build. All customers get 2,500 build-minutes free per month, not charged against your credits. Go to consoleContact salesBuild software quickly across all programming languages, including Java, Go, Node.js, and moreChoose from 15 machine types and run hundreds of concurrent builds per pool Deploy across multiple environments such as VMs, serverless, Kubernetes, or FirebaseAccess cloud-hosted, fully managed CI/CD workflows within your private networkKeep your data at rest within a geographical region or specific location with data residencyAccelerate State of DevOps ReportDownload the reportBenefitsFully serverless platform for scaleCloud Build scales up and down with no infrastructure to set up, upgrade, or scale. Run builds in a fully managed environment in Google Cloud with connectivity to your own private network. 
Native enterprise source integrationsIntegrate with some of the most popular enterprise source control systems with Cloud Build’s out-of-the-box support for GitHub Enterprise, GitLab Enterprise and Bitbucket Data Center. Software supply chain security and complianceScan images locally or in your registry for vulnerabilities. Use provenance for auditing and control deployments to production. Protect against software supply chain attacks with SLSA level 3 build support.Key featuresKey featuresExtremely fast buildsAccess machines connected via Google’s global network to significantly reduce your build time. Run builds on high-CPU VMs or cache source code, images, or other dependencies to further increase your build speed.Automate your deploymentsCreate pipelines as a part of your build steps to automate deployments. Deploy using built-in integrations to Google Kubernetes Engine, Cloud Run, App Engine, Cloud Functions, and Firebase. Use Spinnaker with Cloud Build for creating and executing complex pipelines.CI/CD across your network at scaleChoose from default or private pools to run your workloads based on your networking and scaling needs. Default pool lets you run builds in a secure, hosted environment with access to the public internet. Private pools are private, dedicated pools of workers offering you greater flexibility over the build environment with greater concurrency, and the ability to access resources in a private network.Commit to deploy in minutesGoing from PR to build, test, and deploy can’t be simpler. Set up triggers to automatically build, test, or deploy source code when you push changes to GitHub, Cloud Source Repositories, GitLab, or a Bitbucket repository.SLSA level 3 compliance Automatically generate provenance metadata and attestations for container images and language packages at build time to trace binary to the source code and prevent tampering. Verify the attestations using the built-in integration with Binary Authorization to deploy images built and signed by Cloud Build. Scan your artifacts with on-demand scanning to shift security left. Trigger fully managed CI/CD workflows from private source code repositories hosted in private networks, including GitHub Enterprise. 
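To make the "commit to deploy" trigger setup described above concrete, here is a minimal sketch that creates a push trigger for a Cloud Source Repositories repo, assuming the google-cloud-build client library; the project, repository, and trigger names are placeholders.

    # Sketch: create a trigger that runs the repository's cloudbuild.yaml on
    # every push to the main branch. Assumes google-cloud-build; names are
    # placeholders.
    from google.cloud.devtools import cloudbuild_v1

    client = cloudbuild_v1.CloudBuildClient()

    trigger = cloudbuild_v1.BuildTrigger(
        name="deploy-on-push",                      # placeholder
        description="Build on pushes to main",
        trigger_template=cloudbuild_v1.RepoSource(
            project_id="my-project",                # placeholder
            repo_name="my-repo",                    # placeholder
            branch_name="main",
        ),
        filename="cloudbuild.yaml",                 # build config checked into the repo
    )

    created = client.create_build_trigger(project_id="my-project", trigger=trigger)
    print(f"Created trigger {created.id}")

Triggers for GitHub, GitLab, or Bitbucket sources follow the same shape once the repository connection has been set up in the project.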
View all features8:42Vendasta reduces build time by 80%, with 30% more deployments with Cloud BuildCustomersLearn from customers using Cloud BuildVideoVodafone built an automated and scalable AI/ML platform with Google Cloud's AI and DevOps technologies16:33Case studyGordon Food Service goes from four deployments a year to 2,9205-min readCase studyMercari radically improved its feature development in the United States5-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoHow Vodafone is supercharging AI/ML at scale with Google CloudWatch videoVideoRepeatable Google Cloud environments with Cloud Build infra-as-code pipelinesWatch videoVideoShift left: Continuous integration testing with Cloud BuildWatch videoVideoMaintain control over hybrid workloads with DevOps best practices Watch videoDocumentationDocumentationGoogle Cloud Basics Cloud Build conceptsLearn more about Cloud Build, including build configurations, different types of cloud builders, and CMEK compliance.Learn moreTutorialBuilding and pushing images with Cloud BuildLearn how to enable Cloud Build, prepare source files to build, create a Docker repository in Artifact Registry, build an image, and view build results.Learn moreBest Practice Speeding up your buildsThis page provides best practices for speeding up Cloud Build builds.Learn moreTutorialCustom build steps with Cloud BuildLearn how to use community-contributed builders and custom builders in Cloud Build.Learn moreTutorialDeveloping applications with Google CloudLearn how to design, develop, and deploy applications that seamlessly integrate components from the Google Cloud ecosystem.Learn moreTutorial Implementing Binary Authorization using Cloud Build and GKESee how to set up, configure, and use Binary Authorization for Google Kubernetes Engine (GKE).Learn moreTutorial Infrastructure as code with Terraform, Cloud Build & GitOpsDiscover how to manage infrastructure as code with Terraform and Cloud Build using the popular GitOps methodology.Learn moreTutorial Continuous deployment from Git using Cloud BuildLearn how to use Cloud Build to automate builds and deployments using a Cloud Build trigger.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud BuildAll featuresAll features Native Docker supportJust import your existing Docker file to get started. Push images directly to Docker image storage repositories such as Docker Hub and Artifact Registry. Automate deployments to Google Kubernetes Engine or Cloud Run for continuous delivery. Generous free tierSay goodbye to managing your own build servers with 2,500 free build-minutes per month and up to 10 concurrent builds included. Build-minutes are not incurred for the time a build is in queue. Powerful insightsGet detailed insights into build results along with build errors and warnings for easy debugging. Filter build results using tags or queries to learn about slow performing builds or time-consuming tests. Identify vulnerabilitiesIdentify package vulnerabilities for your container images and language packages. Automatically perform package vulnerability scanning for Ubuntu, Debian, and Alpine. 
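The "powerful insights" filtering described above can also be done programmatically. The following is a small sketch, assuming the google-cloud-build client library; the project ID and tag value are placeholders, and the filter expression is illustrative of the Cloud Build list-filter syntax.

    # Sketch: query recent build results with a status/tag filter.
    from google.cloud.devtools import cloudbuild_v1

    client = cloudbuild_v1.CloudBuildClient()

    # List failed builds tagged "frontend" (filter string is illustrative).
    for build in client.list_builds(
        project_id="my-project",                    # placeholder
        filter='status="FAILURE" AND tags="frontend"',
    ):
        print(build.id, build.status, build.log_url)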
Build locally or in the cloudRun builds locally before submitting to the cloud. Build and debug on your local machine with the open source local builder.Private Pools Use VPC Peering and VPC-SC to set up a secure private network for your CI/CD workloads. Choose from regions across the world to meet regulatory obligations. You can also limit public IPs or reserve static IP addresses. First class integrations with private source repositories are available out of the box. Run hundreds of concurrent builds per pool to speed up your build and tests.PricingPricingPay for what you use above the monthly free tier. For more details, see the pricing guide.FeaturePricing (USD)First 2,500 build-minutes per monthFreeAdditional build-minutes$0.006 per minuteIf you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.A product or feature listed on this page is in alpha. Learn more about product launch stages.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Build.txt b/Cloud_Build.txt new file mode 100644 index 0000000000000000000000000000000000000000..337a5c02171ce174c7a3329d156679e20c6558d7 --- /dev/null +++ b/Cloud_Build.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/build +Date Scraped: 2025-02-23T12:03:01.350Z + +Content: +Jump to Cloud BuildCloud BuildBuild, test, and deploy on our serverless CI/CD platform.New customers get $300 in free credits to spend on Cloud Build. All customers get 2,500 build-minutes free per month, not charged against your credits. Go to consoleContact salesBuild software quickly across all programming languages, including Java, Go, Node.js, and moreChoose from 15 machine types and run hundreds of concurrent builds per pool Deploy across multiple environments such as VMs, serverless, Kubernetes, or FirebaseAccess cloud-hosted, fully managed CI/CD workflows within your private networkKeep your data at rest within a geographical region or specific location with data residencyAccelerate State of DevOps ReportDownload the reportBenefitsFully serverless platform for scaleCloud Build scales up and down with no infrastructure to set up, upgrade, or scale. Run builds in a fully managed environment in Google Cloud with connectivity to your own private network. Native enterprise source integrationsIntegrate with some of the most popular enterprise source control systems with Cloud Build’s out-of-the-box support for GitHub Enterprise, GitLab Enterprise and Bitbucket Data Center. Software supply chain security and complianceScan images locally or in your registry for vulnerabilities. Use provenance for auditing and control deployments to production. Protect against software supply chain attacks with SLSA level 3 build support.Key featuresKey featuresExtremely fast buildsAccess machines connected via Google’s global network to significantly reduce your build time. Run builds on high-CPU VMs or cache source code, images, or other dependencies to further increase your build speed.Automate your deploymentsCreate pipelines as a part of your build steps to automate deployments. Deploy using built-in integrations to Google Kubernetes Engine, Cloud Run, App Engine, Cloud Functions, and Firebase. 
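Before looking at more advanced pipeline options, here is a minimal sketch of submitting a build with an inline sequence of steps from Python instead of a checked-in cloudbuild.yaml. It assumes the google-cloud-build client library; the project ID is a placeholder, and the two echo steps stand in for real build and deploy stages (which would typically use builders such as gcr.io/cloud-builders/docker or gcr.io/cloud-builders/gcloud).

    # Sketch: submit a build whose steps are defined inline in code.
    from google.cloud.devtools import cloudbuild_v1

    client = cloudbuild_v1.CloudBuildClient()

    build = cloudbuild_v1.Build(
        steps=[
            cloudbuild_v1.BuildStep(
                name="ubuntu", entrypoint="bash",
                args=["-c", "echo 'build and test the app here'"],
            ),
            cloudbuild_v1.BuildStep(
                name="ubuntu", entrypoint="bash",
                args=["-c", "echo 'deploy the app here'"],
            ),
        ],
    )

    operation = client.create_build(project_id="my-project", build=build)  # placeholder project
    result = operation.result()   # blocks until the build finishes
    print(result.status)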
Use Spinnaker with Cloud Build for creating and executing complex pipelines.CI/CD across your network at scaleChoose from default or private pools to run your workloads based on your networking and scaling needs. Default pool lets you run builds in a secure, hosted environment with access to the public internet. Private pools are private, dedicated pools of workers offering you greater flexibility over the build environment with greater concurrency, and the ability to access resources in a private network.Commit to deploy in minutesGoing from PR to build, test, and deploy can’t be simpler. Set up triggers to automatically build, test, or deploy source code when you push changes to GitHub, Cloud Source Repositories, GitLab, or a Bitbucket repository.SLSA level 3 compliance Automatically generate provenance metadata and attestations for container images and language packages at build time to trace binary to the source code and prevent tampering. Verify the attestations using the built-in integration with Binary Authorization to deploy images built and signed by Cloud Build. Scan your artifacts with on-demand scanning to shift security left. Trigger fully managed CI/CD workflows from private source code repositories hosted in private networks, including GitHub Enterprise. View all features8:42Vendasta reduces build time by 80%, with 30% more deployments with Cloud BuildCustomersLearn from customers using Cloud BuildVideoVodafone built an automated and scalable AI/ML platform with Google Cloud's AI and DevOps technologies16:33Case studyGordon Food Service goes from four deployments a year to 2,9205-min readCase studyMercari radically improved its feature development in the United States5-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoHow Vodafone is supercharging AI/ML at scale with Google CloudWatch videoVideoRepeatable Google Cloud environments with Cloud Build infra-as-code pipelinesWatch videoVideoShift left: Continuous integration testing with Cloud BuildWatch videoVideoMaintain control over hybrid workloads with DevOps best practices Watch videoDocumentationDocumentationGoogle Cloud Basics Cloud Build conceptsLearn more about Cloud Build, including build configurations, different types of cloud builders, and CMEK compliance.Learn moreTutorialBuilding and pushing images with Cloud BuildLearn how to enable Cloud Build, prepare source files to build, create a Docker repository in Artifact Registry, build an image, and view build results.Learn moreBest Practice Speeding up your buildsThis page provides best practices for speeding up Cloud Build builds.Learn moreTutorialCustom build steps with Cloud BuildLearn how to use community-contributed builders and custom builders in Cloud Build.Learn moreTutorialDeveloping applications with Google CloudLearn how to design, develop, and deploy applications that seamlessly integrate components from the Google Cloud ecosystem.Learn moreTutorial Implementing Binary Authorization using Cloud Build and GKESee how to set up, configure, and use Binary Authorization for Google Kubernetes Engine (GKE).Learn moreTutorial Infrastructure as code with Terraform, Cloud Build & GitOpsDiscover how to manage infrastructure as code with Terraform and Cloud Build using the popular GitOps methodology.Learn moreTutorial Continuous deployment from Git using Cloud BuildLearn how to use Cloud Build to automate builds and deployments using a Cloud Build trigger.Learn 
moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud BuildAll featuresAll features Native Docker supportJust import your existing Docker file to get started. Push images directly to Docker image storage repositories such as Docker Hub and Artifact Registry. Automate deployments to Google Kubernetes Engine or Cloud Run for continuous delivery. Generous free tierSay goodbye to managing your own build servers with 2,500 free build-minutes per month and up to 10 concurrent builds included. Build-minutes are not incurred for the time a build is in queue. Powerful insightsGet detailed insights into build results along with build errors and warnings for easy debugging. Filter build results using tags or queries to learn about slow performing builds or time-consuming tests. Identify vulnerabilitiesIdentify package vulnerabilities for your container images and language packages. Automatically perform package vulnerability scanning for Ubuntu, Debian, and Alpine. Build locally or in the cloudRun builds locally before submitting to the cloud. Build and debug on your local machine with the open source local builder.Private Pools Use VPC Peering and VPC-SC to set up a secure private network for your CI/CD workloads. Choose from regions across the world to meet regulatory obligations. You can also limit public IPs or reserve static IP addresses. First class integrations with private source repositories are available out of the box. Run hundreds of concurrent builds per pool to speed up your build and tests.PricingPricingPay for what you use above the monthly free tier. For more details, see the pricing guide.FeaturePricing (USD)First 2,500 build-minutes per monthFreeAdditional build-minutes$0.006 per minuteIf you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.A product or feature listed on this page is in alpha. Learn more about product launch stages.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_CDN(1).txt b/Cloud_CDN(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..47f86dd9a68a8822c701a5c3174c05a093a8e40f --- /dev/null +++ b/Cloud_CDN(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/cdn +Date Scraped: 2025-02-23T12:06:17.127Z + +Content: +Build and run dynamic websites on Google Cloud using Cloud Run and Cloud CDN. Deploy in console.Cloud CDN and Media CDNLeverage Google’s decade of experience delivering contentGoogle's content delivery networks—Cloud CDN and Media CDN—scale to bring content closer to a global audience.Deploy in consoleLooking for something else? 
Browse our other networking products.Product highlightsUse the same network that serves 2 billion Google usersIncrease web performance with Cloud CDNScale video streaming to millions of users with Media CDNGoogle Cloud CDN in a minute2:44FeaturesCloud CDN and Media CDNCloud CDN accelerates web applications using Google's global edge network.Media CDN uses YouTube's infrastructure to bring video streams (VoD and live) and large file downloads closer to users for fast and reliable delivery.VIDEOGetting started with Cloud CDN3:13Universal support for any origin and back endPull content from any HTTP-capable origin, including Compute Engine, Cloud Storage, and Google Kubernetes Engine back ends and origins outside of Google Cloud, such as storage buckets in other clouds.VIDEO Introducing Media CDN1:41Security in-depthCloud CDN and Media CDN integrate with Cloud Armor. Protect content at the edge with Cloud Armor that has protected companies from some of the largest DDoS attacks. Fine-grain cache controlsConfigure caching behavior by origin that allows you to have fine-grained control over cache keys, TTLs, and other caching features based on the content type being served. Route matching and origin selectionCloud CDN and Media CDN provide advanced HTTP routing capabilities that allow you to map traffic to specific edge configurations and origins at a fine-grained level.Real-time logging and metricsUnderstand how traffic is being served by Cloud CDN and Media CDN with Cloud Logging and Cloud Monitoring.Modern protocols for better user experienceProtocols such as TLS version 1.3, QUIC, Global Anycast directly benefit the user experience by delivering render-blocking web content more quickly and reducing playback start time and rebuffering when serving video.View all featuresHow It WorksCloud CDN and Media CDN accelerate web and video content delivery by using Google's global edge network to bring content as close to your users as possible. Latency, cost, and load on your backend servers is reduced, making it easy to scale to millions of users.View documentationWhat is Cloud CDN?Common UsesStatic contentAccelerate and secure web content for global audiencesCloud CDN uses Google’s global edge network to serve content closer to your users. Originally built to serve Google’s core applications like Google Search, Gmail, and Maps, Cloud CDN gives you the same world-class infrastructure to accelerate and secure mission critical web experiences at a global scale.Learn more about using Cloud CDN for your web delivery workloadsLearn how to set up a backend bucket as an originGoogle Cloud Skills Boost: Configure Cloud CDN for a backend bucketTutorial: Host a static websiteTutorials, quickstarts, & labsAccelerate and secure web content for global audiencesCloud CDN uses Google’s global edge network to serve content closer to your users. Originally built to serve Google’s core applications like Google Search, Gmail, and Maps, Cloud CDN gives you the same world-class infrastructure to accelerate and secure mission critical web experiences at a global scale.Learn more about using Cloud CDN for your web delivery workloadsLearn how to set up a backend bucket as an originGoogle Cloud Skills Boost: Configure Cloud CDN for a backend bucketTutorial: Host a static websiteDynamic contentUse Cloud CDN to serve dynamic, user-specific contentBy caching frequently accessed data in Google's edge network, Cloud CDN keeps the data as close as possible to users and allows for the fastest possible access. 
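Tying together the backend-bucket origin setup and the cache controls described on this page, the following is a minimal sketch that creates a Cloud Storage-backed origin with Cloud CDN enabled and an explicit cache policy, assuming the google-cloud-compute Python client library; the project, bucket, and TTL values are placeholders.

    # Sketch: create a backend bucket origin with Cloud CDN enabled.
    from google.cloud import compute_v1

    client = compute_v1.BackendBucketsClient()

    backend_bucket = compute_v1.BackendBucket(
        name="static-assets-backend",             # placeholder
        bucket_name="my-static-assets-bucket",    # existing Cloud Storage bucket (placeholder)
        enable_cdn=True,
        cdn_policy=compute_v1.BackendBucketCdnPolicy(
            cache_mode="CACHE_ALL_STATIC",        # cache static responses by default
            default_ttl=3600,                     # seconds to cache when the origin sets no TTL
            max_ttl=86400,
        ),
    )

    operation = client.insert(
        project="my-project",                     # placeholder
        backend_bucket_resource=backend_bucket,
    )
    operation.result()   # wait for the operation to complete

The backend bucket is then referenced from an external Application Load Balancer's URL map, so cacheable responses are served from Google's edge rather than the origin.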
Cloud CDN offers configurable cache controls and optimizations like dynamic compression to help you accelerate your APIs and content.Learn more about using Cloud CDN for best web security practicesHow to use Cloud CDN with Cloud Armor and ApigeeSet up Cloud CDN cache mode for dynamic contentEnable dynamic compression to reduce size by up to 85%Tutorials, quickstarts, & labsUse Cloud CDN to serve dynamic, user-specific contentBy caching frequently accessed data in Google's edge network, Cloud CDN keeps the data as close as possible to users and allows for the fastest possible access. Cloud CDN offers configurable cache controls and optimizations like dynamic compression to help you accelerate your APIs and content.Learn more about using Cloud CDN for best web security practicesHow to use Cloud CDN with Cloud Armor and ApigeeSet up Cloud CDN cache mode for dynamic contentEnable dynamic compression to reduce size by up to 85%Video streamingStream live and recorded videos with Media CDNLeverage Media CDN’s cache deployments spanning 1,300+ cities to deliver performant, engaging streaming experiences as close to your viewers as possible.Learn more about how to use Media CDN for video streamingGet started: Set up a Media CDN serviceWatch a walkthrough demo video of Media CDNTutorial: Understanding origin shielding with deep tiered architectureTutorials, quickstarts, & labsStream live and recorded videos with Media CDNLeverage Media CDN’s cache deployments spanning 1,300+ cities to deliver performant, engaging streaming experiences as close to your viewers as possible.Learn more about how to use Media CDN for video streamingGet started: Set up a Media CDN serviceWatch a walkthrough demo video of Media CDNTutorial: Understanding origin shielding with deep tiered architectureLarge gaming and software downloadsStreamline delivery of downloadsDeliver performant and cost-effective downloads with Media CDN’s planet-scale capacity and optimized origin shielding architecture.Learn more about using Media CDN for large file cachingRun Cloud Run, Cloud Functions, or App Engine with Cloud CDNLearn how Media CDN uses origin shielding with deeply tiered edge infrastructureGet near real time metrics on Media CDN with Cloud LoggingLearning resourcesStreamline delivery of downloadsDeliver performant and cost-effective downloads with Media CDN’s planet-scale capacity and optimized origin shielding architecture.Learn more about using Media CDN for large file cachingRun Cloud Run, Cloud Functions, or App Engine with Cloud CDNLearn how Media CDN uses origin shielding with deeply tiered edge infrastructureGet near real time metrics on Media CDN with Cloud LoggingPricingHow Cloud CDN and Media CDN pricing worksPricing for Cloud CDN is based on bandwidth and HTTP/HTTPS requests. For Media CDN, please contact sales for details.Category or typeDescriptionPrice (USD)Cache egress<10 TiBStarting at$0.08GiB per month10 TiB-150 TiBStarting at$0.055 GiB per month150 TiB-500 TiBStarting at$0.03GiB per month>500 TiBContact us for a volume-based discount.Cache fillWithin North America or Europe (including Hawaii)$0.01per GiBWithin each of Asia Pacific, South America, Middle East, Africa, and Oceania (including Hong Kong)$0.02per GiBInter-region cache fill (for example: between Asia Pacific and North America)$0.04per GiBLookup requestsHTTP/HTTPS cache lookup requests$0.0075per 10,000 requestsLearn more about Cloud CDN pricing. 
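To make the tiered cache-egress pricing above concrete, here is a rough worked example using only the "starting at" rates listed in the table, under the assumption that each tier applies to its own slice of usage. Real prices vary by region and destination, cache-fill and lookup-request charges are excluded, and volumes above 500 TiB are priced by negotiated discount, so treat this as an illustration of how the tiers stack rather than a quote.

    # Sketch: rough monthly Cloud CDN cache-egress estimate from the tiered
    # "starting at" rates above ($0.08/GiB up to 10 TiB, $0.055/GiB for
    # 10-150 TiB, $0.03/GiB for 150-500 TiB). Usage above 500 TiB is not handled.
    TIB = 1024  # GiB per TiB

    # (tier ceiling in GiB, price per GiB in USD)
    TIERS = [(10 * TIB, 0.08), (150 * TIB, 0.055), (500 * TIB, 0.03)]

    def estimate_cache_egress_usd(egress_gib: float) -> float:
        cost, previous_ceiling = 0.0, 0.0
        for ceiling, price in TIERS:
            if egress_gib <= previous_ceiling:
                break
            billable = min(egress_gib, ceiling) - previous_ceiling
            cost += billable * price
            previous_ceiling = ceiling
        return cost

    # Example: 25 TiB of cache egress in a month
    # = 10 TiB at $0.08/GiB + 15 TiB at $0.055/GiB ≈ $1,664.
    print(round(estimate_cache_egress_usd(25 * TIB), 2))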
View all pricing details.How Cloud CDN and Media CDN pricing worksPricing for Cloud CDN is based on bandwidth and HTTP/HTTPS requests. For Media CDN, please contact sales for details.Cache egressDescription<10 TiBPrice (USD)Starting at$0.08GiB per month10 TiB-150 TiBDescriptionStarting at$0.055 GiB per month150 TiB-500 TiBDescriptionStarting at$0.03GiB per month>500 TiBDescriptionContact us for a volume-based discount.Cache fillDescriptionWithin North America or Europe (including Hawaii)Price (USD)$0.01per GiBWithin each of Asia Pacific, South America, Middle East, Africa, and Oceania (including Hong Kong)Description$0.02per GiBInter-region cache fill (for example: between Asia Pacific and North America)Description$0.04per GiBLookup requestsDescriptionHTTP/HTTPS cache lookup requestsPrice (USD)$0.0075per 10,000 requestsLearn more about Cloud CDN pricing. View all pricing details.Cloud CDNEstimate your monthly Cloud CDN charges with this calculator.Estimate your costsMedia CDNConnect with our sales team to get a quote for Media CDN.Request a quoteStart your proof of conceptTry Cloud CDN in the consoleGo to my consoleContact sales to get started on Media CDNContact salesGetting started with Cloud CDNWatch videoLearn best practices for optimizing and accelerating content deliveryRead guideLearn how to stream video with Media CDNRead guideBusiness CaseExplore how other businesses cut costs, increase ROI, and drive innovation with Cloud CDN and Media CDNGoogle Cloud’s Media CDN helps us efficiently scale our infrastructure.Rutong Li, Chief Technology Officer at U-NEXTU-NEXT reached a 98.3% cache rate, delivering a seamless experience for viewers.Read customer storiesRelated ContentMLB partners with Google Cloud for the ultimate fan experienceGoogle Cloud to power AppLovin’s next phase of growth with Cloud CDNEnable web-application firewall and DDoS mitigation with Cloud ArmorFeatured benefits and customersShape the content delivery platform to your needs with APIs and edge computing programmability.Build an end-to-end content delivery solution that integrates seamlessly with Cloud Storage, Google Ad Manager, and APIs.Get real-time observability that helps you make intelligent decisions based on data.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_CDN.txt b/Cloud_CDN.txt new file mode 100644 index 0000000000000000000000000000000000000000..b105a60f6e0a9841dca1cffd3aa8015b8d8cfccb --- /dev/null +++ b/Cloud_CDN.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/cdn +Date Scraped: 2025-02-23T12:01:51.805Z + +Content: +Build and run dynamic websites on Google Cloud using Cloud Run and Cloud CDN. Deploy in console.Cloud CDN and Media CDNLeverage Google’s decade of experience delivering contentGoogle's content delivery networks—Cloud CDN and Media CDN—scale to bring content closer to a global audience.Deploy in consoleLooking for something else? 
Browse our other networking products.Product highlightsUse the same network that serves 2 billion Google usersIncrease web performance with Cloud CDNScale video streaming to millions of users with Media CDNGoogle Cloud CDN in a minute2:44FeaturesCloud CDN and Media CDNCloud CDN accelerates web applications using Google's global edge network.Media CDN uses YouTube's infrastructure to bring video streams (VoD and live) and large file downloads closer to users for fast and reliable delivery.VIDEOGetting started with Cloud CDN3:13Universal support for any origin and back endPull content from any HTTP-capable origin, including Compute Engine, Cloud Storage, and Google Kubernetes Engine back ends and origins outside of Google Cloud, such as storage buckets in other clouds.VIDEO Introducing Media CDN1:41Security in-depthCloud CDN and Media CDN integrate with Cloud Armor. Protect content at the edge with Cloud Armor that has protected companies from some of the largest DDoS attacks. Fine-grain cache controlsConfigure caching behavior by origin that allows you to have fine-grained control over cache keys, TTLs, and other caching features based on the content type being served. Route matching and origin selectionCloud CDN and Media CDN provide advanced HTTP routing capabilities that allow you to map traffic to specific edge configurations and origins at a fine-grained level.Real-time logging and metricsUnderstand how traffic is being served by Cloud CDN and Media CDN with Cloud Logging and Cloud Monitoring.Modern protocols for better user experienceProtocols such as TLS version 1.3, QUIC, Global Anycast directly benefit the user experience by delivering render-blocking web content more quickly and reducing playback start time and rebuffering when serving video.View all featuresHow It WorksCloud CDN and Media CDN accelerate web and video content delivery by using Google's global edge network to bring content as close to your users as possible. Latency, cost, and load on your backend servers is reduced, making it easy to scale to millions of users.View documentationWhat is Cloud CDN?Common UsesStatic contentAccelerate and secure web content for global audiencesCloud CDN uses Google’s global edge network to serve content closer to your users. Originally built to serve Google’s core applications like Google Search, Gmail, and Maps, Cloud CDN gives you the same world-class infrastructure to accelerate and secure mission critical web experiences at a global scale.Learn more about using Cloud CDN for your web delivery workloadsLearn how to set up a backend bucket as an originGoogle Cloud Skills Boost: Configure Cloud CDN for a backend bucketTutorial: Host a static websiteTutorials, quickstarts, & labsAccelerate and secure web content for global audiencesCloud CDN uses Google’s global edge network to serve content closer to your users. Originally built to serve Google’s core applications like Google Search, Gmail, and Maps, Cloud CDN gives you the same world-class infrastructure to accelerate and secure mission critical web experiences at a global scale.Learn more about using Cloud CDN for your web delivery workloadsLearn how to set up a backend bucket as an originGoogle Cloud Skills Boost: Configure Cloud CDN for a backend bucketTutorial: Host a static websiteDynamic contentUse Cloud CDN to serve dynamic, user-specific contentBy caching frequently accessed data in Google's edge network, Cloud CDN keeps the data as close as possible to users and allows for the fastest possible access. 
Cloud CDN offers configurable cache controls and optimizations like dynamic compression to help you accelerate your APIs and content.Learn more about using Cloud CDN for best web security practicesHow to use Cloud CDN with Cloud Armor and ApigeeSet up Cloud CDN cache mode for dynamic contentEnable dynamic compression to reduce size by up to 85%Tutorials, quickstarts, & labsUse Cloud CDN to serve dynamic, user-specific contentBy caching frequently accessed data in Google's edge network, Cloud CDN keeps the data as close as possible to users and allows for the fastest possible access. Cloud CDN offers configurable cache controls and optimizations like dynamic compression to help you accelerate your APIs and content.Learn more about using Cloud CDN for best web security practicesHow to use Cloud CDN with Cloud Armor and ApigeeSet up Cloud CDN cache mode for dynamic contentEnable dynamic compression to reduce size by up to 85%Video streamingStream live and recorded videos with Media CDNLeverage Media CDN’s cache deployments spanning 1,300+ cities to deliver performant, engaging streaming experiences as close to your viewers as possible.Learn more about how to use Media CDN for video streamingGet started: Set up a Media CDN serviceWatch a walkthrough demo video of Media CDNTutorial: Understanding origin shielding with deep tiered architectureTutorials, quickstarts, & labsStream live and recorded videos with Media CDNLeverage Media CDN’s cache deployments spanning 1,300+ cities to deliver performant, engaging streaming experiences as close to your viewers as possible.Learn more about how to use Media CDN for video streamingGet started: Set up a Media CDN serviceWatch a walkthrough demo video of Media CDNTutorial: Understanding origin shielding with deep tiered architectureLarge gaming and software downloadsStreamline delivery of downloadsDeliver performant and cost-effective downloads with Media CDN’s planet-scale capacity and optimized origin shielding architecture.Learn more about using Media CDN for large file cachingRun Cloud Run, Cloud Functions, or App Engine with Cloud CDNLearn how Media CDN uses origin shielding with deeply tiered edge infrastructureGet near real time metrics on Media CDN with Cloud LoggingLearning resourcesStreamline delivery of downloadsDeliver performant and cost-effective downloads with Media CDN’s planet-scale capacity and optimized origin shielding architecture.Learn more about using Media CDN for large file cachingRun Cloud Run, Cloud Functions, or App Engine with Cloud CDNLearn how Media CDN uses origin shielding with deeply tiered edge infrastructureGet near real time metrics on Media CDN with Cloud LoggingPricingHow Cloud CDN and Media CDN pricing worksPricing for Cloud CDN is based on bandwidth and HTTP/HTTPS requests. For Media CDN, please contact sales for details.Category or typeDescriptionPrice (USD)Cache egress<10 TiBStarting at$0.08GiB per month10 TiB-150 TiBStarting at$0.055 GiB per month150 TiB-500 TiBStarting at$0.03GiB per month>500 TiBContact us for a volume-based discount.Cache fillWithin North America or Europe (including Hawaii)$0.01per GiBWithin each of Asia Pacific, South America, Middle East, Africa, and Oceania (including Hong Kong)$0.02per GiBInter-region cache fill (for example: between Asia Pacific and North America)$0.04per GiBLookup requestsHTTP/HTTPS cache lookup requests$0.0075per 10,000 requestsLearn more about Cloud CDN pricing. 
View all pricing details.How Cloud CDN and Media CDN pricing worksPricing for Cloud CDN is based on bandwidth and HTTP/HTTPS requests. For Media CDN, please contact sales for details.Cache egressDescription<10 TiBPrice (USD)Starting at$0.08GiB per month10 TiB-150 TiBDescriptionStarting at$0.055 GiB per month150 TiB-500 TiBDescriptionStarting at$0.03GiB per month>500 TiBDescriptionContact us for a volume-based discount.Cache fillDescriptionWithin North America or Europe (including Hawaii)Price (USD)$0.01per GiBWithin each of Asia Pacific, South America, Middle East, Africa, and Oceania (including Hong Kong)Description$0.02per GiBInter-region cache fill (for example: between Asia Pacific and North America)Description$0.04per GiBLookup requestsDescriptionHTTP/HTTPS cache lookup requestsPrice (USD)$0.0075per 10,000 requestsLearn more about Cloud CDN pricing. View all pricing details.Cloud CDNEstimate your monthly Cloud CDN charges with this calculator.Estimate your costsMedia CDNConnect with our sales team to get a quote for Media CDN.Request a quoteStart your proof of conceptTry Cloud CDN in the consoleGo to my consoleContact sales to get started on Media CDNContact salesGetting started with Cloud CDNWatch videoLearn best practices for optimizing and accelerating content deliveryRead guideLearn how to stream video with Media CDNRead guideBusiness CaseExplore how other businesses cut costs, increase ROI, and drive innovation with Cloud CDN and Media CDNGoogle Cloud’s Media CDN helps us efficiently scale our infrastructure.Rutong Li, Chief Technology Officer at U-NEXTU-NEXT reached a 98.3% cache rate, delivering a seamless experience for viewers.Read customer storiesRelated ContentMLB partners with Google Cloud for the ultimate fan experienceGoogle Cloud to power AppLovin’s next phase of growth with Cloud CDNEnable web-application firewall and DDoS mitigation with Cloud ArmorFeatured benefits and customersShape the content delivery platform to your needs with APIs and edge computing programmability.Build an end-to-end content delivery solution that integrates seamlessly with Cloud Storage, Google Ad Manager, and APIs.Get real-time observability that helps you make intelligent decisions based on data.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_CDN_and_Media_CDN.txt b/Cloud_CDN_and_Media_CDN.txt new file mode 100644 index 0000000000000000000000000000000000000000..b7d536851ed919ab043de033c0f2d7bc05860eef --- /dev/null +++ b/Cloud_CDN_and_Media_CDN.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/cdn +Date Scraped: 2025-02-23T12:07:05.997Z + +Content: +Build and run dynamic websites on Google Cloud using Cloud Run and Cloud CDN. Deploy in console.Cloud CDN and Media CDNLeverage Google’s decade of experience delivering contentGoogle's content delivery networks—Cloud CDN and Media CDN—scale to bring content closer to a global audience.Deploy in consoleLooking for something else? 
Browse our other networking products.Product highlightsUse the same network that serves 2 billion Google usersIncrease web performance with Cloud CDNScale video streaming to millions of users with Media CDNGoogle Cloud CDN in a minute2:44FeaturesCloud CDN and Media CDNCloud CDN accelerates web applications using Google's global edge network.Media CDN uses YouTube's infrastructure to bring video streams (VoD and live) and large file downloads closer to users for fast and reliable delivery.VIDEOGetting started with Cloud CDN3:13Universal support for any origin and back endPull content from any HTTP-capable origin, including Compute Engine, Cloud Storage, and Google Kubernetes Engine back ends and origins outside of Google Cloud, such as storage buckets in other clouds.VIDEO Introducing Media CDN1:41Security in-depthCloud CDN and Media CDN integrate with Cloud Armor. Protect content at the edge with Cloud Armor that has protected companies from some of the largest DDoS attacks. Fine-grain cache controlsConfigure caching behavior by origin that allows you to have fine-grained control over cache keys, TTLs, and other caching features based on the content type being served. Route matching and origin selectionCloud CDN and Media CDN provide advanced HTTP routing capabilities that allow you to map traffic to specific edge configurations and origins at a fine-grained level.Real-time logging and metricsUnderstand how traffic is being served by Cloud CDN and Media CDN with Cloud Logging and Cloud Monitoring.Modern protocols for better user experienceProtocols such as TLS version 1.3, QUIC, Global Anycast directly benefit the user experience by delivering render-blocking web content more quickly and reducing playback start time and rebuffering when serving video.View all featuresHow It WorksCloud CDN and Media CDN accelerate web and video content delivery by using Google's global edge network to bring content as close to your users as possible. Latency, cost, and load on your backend servers is reduced, making it easy to scale to millions of users.View documentationWhat is Cloud CDN?Common UsesStatic contentAccelerate and secure web content for global audiencesCloud CDN uses Google’s global edge network to serve content closer to your users. Originally built to serve Google’s core applications like Google Search, Gmail, and Maps, Cloud CDN gives you the same world-class infrastructure to accelerate and secure mission critical web experiences at a global scale.Learn more about using Cloud CDN for your web delivery workloadsLearn how to set up a backend bucket as an originGoogle Cloud Skills Boost: Configure Cloud CDN for a backend bucketTutorial: Host a static websiteTutorials, quickstarts, & labsAccelerate and secure web content for global audiencesCloud CDN uses Google’s global edge network to serve content closer to your users. Originally built to serve Google’s core applications like Google Search, Gmail, and Maps, Cloud CDN gives you the same world-class infrastructure to accelerate and secure mission critical web experiences at a global scale.Learn more about using Cloud CDN for your web delivery workloadsLearn how to set up a backend bucket as an originGoogle Cloud Skills Boost: Configure Cloud CDN for a backend bucketTutorial: Host a static websiteDynamic contentUse Cloud CDN to serve dynamic, user-specific contentBy caching frequently accessed data in Google's edge network, Cloud CDN keeps the data as close as possible to users and allows for the fastest possible access. 
Cloud CDN offers configurable cache controls and optimizations like dynamic compression to help you accelerate your APIs and content.Learn more about using Cloud CDN for best web security practicesHow to use Cloud CDN with Cloud Armor and ApigeeSet up Cloud CDN cache mode for dynamic contentEnable dynamic compression to reduce size by up to 85%Tutorials, quickstarts, & labsUse Cloud CDN to serve dynamic, user-specific contentBy caching frequently accessed data in Google's edge network, Cloud CDN keeps the data as close as possible to users and allows for the fastest possible access. Cloud CDN offers configurable cache controls and optimizations like dynamic compression to help you accelerate your APIs and content.Learn more about using Cloud CDN for best web security practicesHow to use Cloud CDN with Cloud Armor and ApigeeSet up Cloud CDN cache mode for dynamic contentEnable dynamic compression to reduce size by up to 85%Video streamingStream live and recorded videos with Media CDNLeverage Media CDN’s cache deployments spanning 1,300+ cities to deliver performant, engaging streaming experiences as close to your viewers as possible.Learn more about how to use Media CDN for video streamingGet started: Set up a Media CDN serviceWatch a walkthrough demo video of Media CDNTutorial: Understanding origin shielding with deep tiered architectureTutorials, quickstarts, & labsStream live and recorded videos with Media CDNLeverage Media CDN’s cache deployments spanning 1,300+ cities to deliver performant, engaging streaming experiences as close to your viewers as possible.Learn more about how to use Media CDN for video streamingGet started: Set up a Media CDN serviceWatch a walkthrough demo video of Media CDNTutorial: Understanding origin shielding with deep tiered architectureLarge gaming and software downloadsStreamline delivery of downloadsDeliver performant and cost-effective downloads with Media CDN’s planet-scale capacity and optimized origin shielding architecture.Learn more about using Media CDN for large file cachingRun Cloud Run, Cloud Functions, or App Engine with Cloud CDNLearn how Media CDN uses origin shielding with deeply tiered edge infrastructureGet near real time metrics on Media CDN with Cloud LoggingLearning resourcesStreamline delivery of downloadsDeliver performant and cost-effective downloads with Media CDN’s planet-scale capacity and optimized origin shielding architecture.Learn more about using Media CDN for large file cachingRun Cloud Run, Cloud Functions, or App Engine with Cloud CDNLearn how Media CDN uses origin shielding with deeply tiered edge infrastructureGet near real time metrics on Media CDN with Cloud LoggingPricingHow Cloud CDN and Media CDN pricing worksPricing for Cloud CDN is based on bandwidth and HTTP/HTTPS requests. For Media CDN, please contact sales for details.Category or typeDescriptionPrice (USD)Cache egress<10 TiBStarting at$0.08GiB per month10 TiB-150 TiBStarting at$0.055 GiB per month150 TiB-500 TiBStarting at$0.03GiB per month>500 TiBContact us for a volume-based discount.Cache fillWithin North America or Europe (including Hawaii)$0.01per GiBWithin each of Asia Pacific, South America, Middle East, Africa, and Oceania (including Hong Kong)$0.02per GiBInter-region cache fill (for example: between Asia Pacific and North America)$0.04per GiBLookup requestsHTTP/HTTPS cache lookup requests$0.0075per 10,000 requestsLearn more about Cloud CDN pricing. 
View all pricing details.How Cloud CDN and Media CDN pricing worksPricing for Cloud CDN is based on bandwidth and HTTP/HTTPS requests. For Media CDN, please contact sales for details.Cache egressDescription<10 TiBPrice (USD)Starting at$0.08GiB per month10 TiB-150 TiBDescriptionStarting at$0.055 GiB per month150 TiB-500 TiBDescriptionStarting at$0.03GiB per month>500 TiBDescriptionContact us for a volume-based discount.Cache fillDescriptionWithin North America or Europe (including Hawaii)Price (USD)$0.01per GiBWithin each of Asia Pacific, South America, Middle East, Africa, and Oceania (including Hong Kong)Description$0.02per GiBInter-region cache fill (for example: between Asia Pacific and North America)Description$0.04per GiBLookup requestsDescriptionHTTP/HTTPS cache lookup requestsPrice (USD)$0.0075per 10,000 requestsLearn more about Cloud CDN pricing. View all pricing details.Cloud CDNEstimate your monthly Cloud CDN charges with this calculator.Estimate your costsMedia CDNConnect with our sales team to get a quote for Media CDN.Request a quoteStart your proof of conceptTry Cloud CDN in the consoleGo to my consoleContact sales to get started on Media CDNContact salesGetting started with Cloud CDNWatch videoLearn best practices for optimizing and accelerating content deliveryRead guideLearn how to stream video with Media CDNRead guideBusiness CaseExplore how other businesses cut costs, increase ROI, and drive innovation with Cloud CDN and Media CDNGoogle Cloud’s Media CDN helps us efficiently scale our infrastructure.Rutong Li, Chief Technology Officer at U-NEXTU-NEXT reached a 98.3% cache rate, delivering a seamless experience for viewers.Read customer storiesRelated ContentMLB partners with Google Cloud for the ultimate fan experienceGoogle Cloud to power AppLovin’s next phase of growth with Cloud CDNEnable web-application firewall and DDoS mitigation with Cloud ArmorFeatured benefits and customersShape the content delivery platform to your needs with APIs and edge computing programmability.Build an end-to-end content delivery solution that integrates seamlessly with Cloud Storage, Google Ad Manager, and APIs.Get real-time observability that helps you make intelligent decisions based on data.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Code(1).txt b/Cloud_Code(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..ca7b344b7b06088d4c210b553b151d6726532e53 --- /dev/null +++ b/Cloud_Code(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/code +Date Scraped: 2025-02-23T12:04:17.357Z + +Content: +Jump to Cloud CodeCloud Code and Gemini Code Assist IDE PluginsCloud Code is a set of AI-assisted IDE plugins for popular IDEs that make it easier to create, deploy and integrate applications with Google Cloud. Gemini Code Assist is integrated with Cloud Code, providing AI assistance directly on your IDE.Get startedSupports your favorite IDE: VSCode, JetBrains IDEs, Cloud Workstations, and Cloud Shell EditorBrings Gemini Code Assist inside your favorite IDEsSpeeds up your GKE and Cloud Run development with Skaffold integrationSimplifies creating configuration files for Google Cloud services and technologiesMakes it easy to integrate Cloud APIs and work with Google Cloud services within in your IDEVIDEOCloud Code overview2:34BenefitsAI Powered assistant Gemini Code Assist, your AI-powered collaborator, is available across Google Cloud and your IDE to help you get more done, faster. 
Development made easyEasily test and debug apps on Google Cloud directly from your IDE. Supports workloads including Compute Engine, GKE, and Serverless. Help where you need itAvoid context switching through in-IDE assistance including templates for Google Cloud APIs. Supported on both VS Code and JetBrains IDEs. Key featuresCloud Code tools for maximizing developer productivityAI-powered assistanceGemini Code Assist comes integrated with Cloud Code, bringing AI assistance to your IDEs such as Visual Studio Code, Cloud Workstations, and JetBrains IDEs (IntelliJ, PyCharm, GoLand, WebStorm, and more). Gemini Code Assist can help complete your code while you write, generate code blocks based on comments, and also function as a chat assistant right in your IDE to help you write code faster and better. Remote debuggingIf you're looking for a way to debug your application from your IDE, try Cloud Code to emulate a local debugging experience in your IDE. Because Cloud Code leverages Skaffold, you can simply place breakpoints in your code. Once your breakpoint is triggered, you can step through the code, hover over variable properties, and view the logs from your container.Get startedReduce context switchingContext switching is time consuming and breaks up your workflow. While developing cloud based applications, you might switch between your IDE, Cloud Console, documentation, and logs. Cloud Code comes with built-in capabilities to reduce context switching. For example, with Cloud Code’s Kubernetes or Cloud Run explorers you can visualize, monitor, and view information about your cluster resources without running any CLI commands. YAML authoring supportGetting used to working with the Kubernetes YAML syntax and scheme takes time, and a lot of that time is trial and error. Cloud Code lets you spend more time writing code, thanks to its YAML authoring support features. Cloud Code’s inline documentation, snippets, completions, and schema validation, a.k.a. 
“Linting” make writing YAML files easier for developers.VIDEOCloud-native development in the IDE with Cloud Code4:17CustomersCustomers are using Gemini Code Assist to see increased productivityCase studyWayfair incorporated Gemini Code Assist in their efforts to have developers build applications incredibly fastVideo (44:17)Case studyTuring’s early experience with Gemini Code Assist was very promising, with productivity gains around 33%Video (32:17)See all customersWhat's newSee the latest updates for Cloud CodeSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog post5 tips to maximize the Kubernetes developer experience with Cloud CodeRead the blogVideoGoogle Cloud {Code_Love_Hack} Winners!Read the blogBlog postIntroducing Cloud Code Secret Manager IntegrationRead the blogBlog postExpanding YAML editing in Cloud CodeRead the blogBlog postFollow your org’s app dev best practices with Cloud Code custom samplesRead the blogDocumentationCloud Code quickstart guides and moreTutorialCode with Gemini Code Assist on VSCodeLearn how to use Gemini Code Assist in VSCode to write better code faster, including an overview of inline-suggestions, code generation, and chat.Learn moreQuickstartCode with Gemini Code Assist on JetBrains IDEs (IntelliJ, PyCharm...)Learn how to use Gemini Code Assist in JetBrains IDEs to write better code faster, including an overview of inline-suggestions, code generation, and chat.Learn moreQuickstartCloud Code for VS Code quickstart guidesLearn how to run a Kubernetes app with just Cloud Code, with Cloud Code and a remote development environment, or how to deploy a Cloud Run service with Cloud Code.Learn moreQuickstartCloud Code for IntelliJ quickstart guidesLearn how to create, locally develop, debug, and run a Google Kubernetes Engine application with Cloud Code or how to deploy a Cloud Run app with Cloud Code.Learn moreQuickstartCloud Code for Cloud Shell Editor quickstart guidesLearn how to create, locally develop, debug, and run a GKE application with Cloud Code for Cloud Shell or how to deploy a Cloud Run service with Cloud Code.Learn moreNot seeing what you’re looking for?View all product documentationAll featuresCloud Code featuresGemini Code Assist Gemini Code Assist is built-in on Cloud Code, AI-powered assistance right in your IDE such as AI code completion, code generation, and chat. Speed up Kubernetes developmentGet a fully integrated Kubernetes development and debugging environment within your IDE. Create and manage clusters directly from within the IDE.Deploy Cloud Run servicesBuild and deploy your code to Cloud Run or Cloud Run for Anthos in a few clicks.Easily integrate Google Cloud APIsFind, add, and configure Google Cloud APIs for your project from the built-in library manager and easily view associated documentation.Simplify Kubernetes local developmentUnder the covers, Cloud Code for IDEs uses popular tools such as Skaffold, Jib, and kubectl to provide continuous feedback on your code in real time.Easily extend to production deploymentWhen it comes to working with production clusters, we have you covered with support for Skaffold profiles, Kustomize-based environment management, and Cloud Build integration.Explore deploymentsView underlying resources and metadata for your Kubernetes clusters and Cloud Run services. 
You’re a click away from taking action on these resources; you can fetch a description, view logs, manage secrets, or get a terminal directly into a pod.Debug running applicationsDebug the code within your IDEs using Cloud Code for VS Code and Cloud Code for IntelliJ by leveraging built-in IDE debugging features.Access powerful IDE featuresWhile interacting with Google Cloud configuration files, get out-of-the-box support for IDE features including code completion, inline documentation, linting, and snippets.Develop from your browserStart using Cloud Code right away with just your browser. With the Cloud Shell Editor, you can access the same powerful features you'd experience in Cloud Code for VS Code but without having to set anything up.PricingPricingCloud Code is available to all Google Cloud customers free of charge.Take the next stepStart your next project, explore interactive tutorials, and manage your account.View quickstartsNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Code.txt b/Cloud_Code.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b2de0a33e0f594bfb97307f8fb510b1f1721593 --- /dev/null +++ b/Cloud_Code.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/code +Date Scraped: 2025-02-23T12:03:05.021Z + +Content: +Jump to Cloud CodeCloud Code and Gemini Code Assist IDE PluginsCloud Code is a set of AI-assisted IDE plugins for popular IDEs that make it easier to create, deploy and integrate applications with Google Cloud. Gemini Code Assist is integrated with Cloud Code, providing AI assistance directly on your IDE.Get startedSupports your favorite IDE: VSCode, JetBrains IDEs, Cloud Workstations, and Cloud Shell EditorBrings Gemini Code Assist inside your favorite IDEsSpeeds up your GKE and Cloud Run development with Skaffold integrationSimplifies creating configuration files for Google Cloud services and technologiesMakes it easy to integrate Cloud APIs and work with Google Cloud services within in your IDEVIDEOCloud Code overview2:34BenefitsAI Powered assistant Gemini Code Assist, your AI-powered collaborator, is available across Google Cloud and your IDE to help you get more done, faster. Development made easyEasily test and debug apps on Google Cloud directly from your IDE. Supports workloads including Compute Engine, GKE, and Serverless. Help where you need itAvoid context switching through in-IDE assistance including templates for Google Cloud APIs. Supported on both VS Code and JetBrains IDEs. Key featuresCloud Code tools for maximizing developer productivityAI-powered assistanceGemini Code Assist comes integrated with Cloud Code, bringing AI assistance to your IDEs such as Visual Studio Code, Cloud Workstations, and JetBrains IDEs (IntelliJ, PyCharm, GoLand, WebStorm, and more). Gemini Code Assist can help complete your code while you write, generate code blocks based on comments, and also function as a chat assistant right in your IDE to help you write code faster and better. Remote debuggingIf you're looking for a way to debug your application from your IDE, try Cloud Code to emulate a local debugging experience in your IDE. Because Cloud Code leverages Skaffold, you can simply place breakpoints in your code. 
Once your breakpoint is triggered, you can step through the code, hover over variable properties, and view the logs from your container.Get startedReduce context switchingContext switching is time consuming and breaks up your workflow. While developing cloud based applications, you might switch between your IDE, Cloud Console, documentation, and logs. Cloud Code comes with built-in capabilities to reduce context switching. For example, with Cloud Code’s Kubernetes or Cloud Run explorers you can visualize, monitor, and view information about your cluster resources without running any CLI commands. YAML authoring supportGetting used to working with the Kubernetes YAML syntax and scheme takes time, and a lot of that time is trial and error. Cloud Code lets you spend more time writing code, thanks to its YAML authoring support features. Cloud Code’s inline documentation, snippets, completions, and schema validation, a.k.a. “Linting” make writing YAML files easier for developers.VIDEOCloud-native development in the IDE with Cloud Code4:17CustomersCustomers are using Gemini Code Assist to see increased productivityCase studyWayfair incorporated Gemini Code Assist in their efforts to have developers build applications incredibly fastVideo (44:17)Case studyTuring’s early experience with Gemini Code Assist was very promising, with productivity gains around 33%Video (32:17)See all customersWhat's newSee the latest updates for Cloud CodeSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog post5 tips to maximize the Kubernetes developer experience with Cloud CodeRead the blogVideoGoogle Cloud {Code_Love_Hack} Winners!Read the blogBlog postIntroducing Cloud Code Secret Manager IntegrationRead the blogBlog postExpanding YAML editing in Cloud CodeRead the blogBlog postFollow your org’s app dev best practices with Cloud Code custom samplesRead the blogDocumentationCloud Code quickstart guides and moreTutorialCode with Gemini Code Assist on VSCodeLearn how to use Gemini Code Assist in VSCode to write better code faster, including an overview of inline-suggestions, code generation, and chat.Learn moreQuickstartCode with Gemini Code Assist on JetBrains IDEs (IntelliJ, PyCharm...)Learn how to use Gemini Code Assist in JetBrains IDEs to write better code faster, including an overview of inline-suggestions, code generation, and chat.Learn moreQuickstartCloud Code for VS Code quickstart guidesLearn how to run a Kubernetes app with just Cloud Code, with Cloud Code and a remote development environment, or how to deploy a Cloud Run service with Cloud Code.Learn moreQuickstartCloud Code for IntelliJ quickstart guidesLearn how to create, locally develop, debug, and run a Google Kubernetes Engine application with Cloud Code or how to deploy a Cloud Run app with Cloud Code.Learn moreQuickstartCloud Code for Cloud Shell Editor quickstart guidesLearn how to create, locally develop, debug, and run a GKE application with Cloud Code for Cloud Shell or how to deploy a Cloud Run service with Cloud Code.Learn moreNot seeing what you’re looking for?View all product documentationAll featuresCloud Code featuresGemini Code Assist Gemini Code Assist is built-in on Cloud Code, AI-powered assistance right in your IDE such as AI code completion, code generation, and chat. Speed up Kubernetes developmentGet a fully integrated Kubernetes development and debugging environment within your IDE. 
Create and manage clusters directly from within the IDE.Deploy Cloud Run servicesBuild and deploy your code to Cloud Run or Cloud Run for Anthos in a few clicks.Easily integrate Google Cloud APIsFind, add, and configure Google Cloud APIs for your project from the built-in library manager and easily view associated documentation.Simplify Kubernetes local developmentUnder the covers, Cloud Code for IDEs uses popular tools such as Skaffold, Jib, and kubectl to provide continuous feedback on your code in real time.Easily extend to production deploymentWhen it comes to working with production clusters, we have you covered with support for Skaffold profiles, Kustomize-based environment management, and Cloud Build integration.Explore deploymentsView underlying resources and metadata for your Kubernetes clusters and Cloud Run services. You’re a click away from taking action on these resources; you can fetch a description, view logs, manage secrets, or get a terminal directly into a pod.Debug running applicationsDebug the code within your IDEs using Cloud Code for VS Code and Cloud Code for IntelliJ by leveraging built-in IDE debugging features.Access powerful IDE featuresWhile interacting with Google Cloud configuration files, get out-of-the-box support for IDE features including code completion, inline documentation, linting, and snippets.Develop from your browserStart using Cloud Code right away with just your browser. With the Cloud Shell Editor, you can access the same powerful features you'd experience in Cloud Code for VS Code but without having to set anything up.PricingPricingCloud Code is available to all Google Cloud customers free of charge.Take the next stepStart your next project, explore interactive tutorials, and manage your account.View quickstartsNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Composer(1).txt b/Cloud_Composer(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..f984e1e53c8bec460c453395b88314e7b482c5ea --- /dev/null +++ b/Cloud_Composer(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/composer +Date Scraped: 2025-02-23T12:05:28.107Z + +Content: +Watch the Data Cloud Summit on demand and learn about the latest innovations in analytics, AI, BI, and databases.Jump to Cloud ComposerCloud ComposerA fully managed workflow orchestration service built on Apache Airflow.New customers get $300 in free credits to spend on Composer or other Google Cloud products during the first 90 days.Go to consoleContact salesAuthor, schedule, and monitor pipelines that span across hybrid and multi-cloud environmentsBuilt on the Apache Airflow open source project and operated using PythonFrees you from lock-in and is easy to use1:21An introduction to Cloud ComposerBenefitsFully managed workflow orchestrationCloud Composer's managed nature and Apache Airflow compatibility allows you to focus on authoring, scheduling, and monitoring your workflows as opposed to provisioning resources.Integrates with other Google Cloud productsEnd-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipeline.Supports hybrid and multi-cloudAuthor, schedule, and monitor your workflows through a single orchestration tool—whether your pipeline lives on-premises, in multiple clouds, or fully 
within Google Cloud.Key featuresKey featuresHybrid and multi-cloud Ease your transition to the cloud or maintain a hybrid data environment by orchestrating workflows that cross between on-premises and the public cloud. Create workflows that connect data, processing, and services across clouds to give you a unified data environment.Open sourceCloud Composer is built upon Apache Airflow, giving users freedom from lock-in and portability. This open source project, which Google is contributing back into, provides freedom from lock-in for customers as well as integration with a broad number of platforms, which will only expand as the Airflow community grows.Easy orchestrationCloud Composer pipelines are configured as directed acyclic graphs (DAGs) using Python, making it easy for any user. One-click deployment yields instant access to a rich library of connectors and multiple graphical representations of your workflow in action, making troubleshooting easy. Automatic synchronization of your directed acyclic graphs ensures your jobs stay on schedule.View all featuresBLOGArchitect your data lake with Data Fusion and ComposerCustomersLearn from customers using Cloud ComposerBlog postCloud Composer shows success with customers.5-min readSee all customersDocumentationDocumentationGoogle Cloud BasicsOverview of Cloud ComposerFind an overview of a Cloud Composer environment and the Google Cloud products used for an Apache Airflow deployment.Learn moreArchitecture Use a CI/CD pipeline for your data-processing workflowDiscover how to set up a continuous integration/continuous deployment (CI/CD) pipeline for processing data with managed products on Google Cloud.Learn morePatternPrivate IP Cloud Composer environmentFind information on using a private IP Cloud Composer environment.Learn moreTutorial Writing DAGs (workflows)Find out how to write an Apache Airflow directed acyclic graph (DAG) that runs in a Cloud Composer environment.Learn moreTutorial Google Cloud Skills Boost: Data engineering on Google CloudThis four-day instructor led class provides participants a hands-on introduction to designing and building data pipelines on Google Cloud.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud ComposerAll featuresAll featuresMulti-cloudCreate workflows that connect data, processing, and services across clouds, giving you a unified data environment.Open sourceCloud Composer is built upon Apache Airflow, giving users freedom from lock-in and portability.HybridEase your transition to the cloud or maintain a hybrid data environment by orchestrating workflows that cross between on-premises and the public cloud.IntegratedBuilt-in integration with BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, AI Platform, and more, giving you the ability to orchestrate end-to-end Google Cloud workloads.Python programming languageLeverage existing Python skills to dynamically author and schedule workflows within Cloud Composer.ReliabilityIncrease reliability of your workflows through easy-to-use charts for monitoring and troubleshooting the root cause of an issue.Fully managedCloud Composer's managed nature allows you to focus on authoring, scheduling, and monitoring your workflows as opposed to provisioning 
resources.Networking and securityDuring environment creation, Cloud Composer provides the following configuration options: Cloud Composer environment with a route-based GKE cluster (default), Private IP Cloud Composer environment, Cloud Composer environment with a VPC Native GKE cluster using alias IP addresses, Shared VPC.PricingPricingPricing for Cloud Composer is consumption based, so you pay for what you use, as measured by vCPU/hour, GB/month, and GB transferred/month. We have multiple pricing units because Cloud Composer uses several Google Cloud products as building blocks.Pricing is uniform across all levels of consumption and sustained usage. For more information, please see the pricing page.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Composer.txt b/Cloud_Composer.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d186a2952a7c33a347d2e5fd646f1c5f6e0d204 --- /dev/null +++ b/Cloud_Composer.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/composer +Date Scraped: 2025-02-23T12:03:41.239Z + +Content: +Watch the Data Cloud Summit on demand and learn about the latest innovations in analytics, AI, BI, and databases.Jump to Cloud ComposerCloud ComposerA fully managed workflow orchestration service built on Apache Airflow.New customers get $300 in free credits to spend on Composer or other Google Cloud products during the first 90 days.Go to consoleContact salesAuthor, schedule, and monitor pipelines that span across hybrid and multi-cloud environmentsBuilt on the Apache Airflow open source project and operated using PythonFrees you from lock-in and is easy to use1:21An introduction to Cloud ComposerBenefitsFully managed workflow orchestrationCloud Composer's managed nature and Apache Airflow compatibility allows you to focus on authoring, scheduling, and monitoring your workflows as opposed to provisioning resources.Integrates with other Google Cloud productsEnd-to-end integration with Google Cloud products including BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, and AI Platform gives users the freedom to fully orchestrate their pipeline.Supports hybrid and multi-cloudAuthor, schedule, and monitor your workflows through a single orchestration tool—whether your pipeline lives on-premises, in multiple clouds, or fully within Google Cloud.Key featuresKey featuresHybrid and multi-cloud Ease your transition to the cloud or maintain a hybrid data environment by orchestrating workflows that cross between on-premises and the public cloud. Create workflows that connect data, processing, and services across clouds to give you a unified data environment.Open sourceCloud Composer is built upon Apache Airflow, giving users freedom from lock-in and portability. This open source project, which Google is contributing back into, provides freedom from lock-in for customers as well as integration with a broad number of platforms, which will only expand as the Airflow community grows.Easy orchestrationCloud Composer pipelines are configured as directed acyclic graphs (DAGs) using Python, making it easy for any user. 
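To make the Python-based DAG authoring above concrete, here is a minimal, hypothetical Airflow DAG of the kind you might deploy to a Cloud Composer environment. The schedule, bucket names, and commands are placeholders, not part of the product documentation.

# Minimal Airflow DAG sketch for Cloud Composer (Airflow 2.x import paths).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_export_example",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",   # Composer triggers one run per day
    catchup=False,
) as dag:
    # Copy the day's export into a staging prefix; gsutil is available on
    # Composer workers. Replace the bucket names with your own.
    stage_files = BashOperator(
        task_id="stage_files",
        bash_command=(
            "gsutil cp gs://example-raw-bucket/{{ ds }}/*.csv "
            "gs://example-staging-bucket/{{ ds }}/"
        ),
    )

    validate = BashOperator(
        task_id="validate",
        bash_command="echo 'run validation for {{ ds }}'",
    )

    stage_files >> validate  # simple linear dependency

Once the file is synchronized to the environment's DAGs folder, Composer schedules and monitors the runs without any extra provisioning.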
One-click deployment yields instant access to a rich library of connectors and multiple graphical representations of your workflow in action, making troubleshooting easy. Automatic synchronization of your directed acyclic graphs ensures your jobs stay on schedule.View all featuresBLOGArchitect your data lake with Data Fusion and ComposerCustomersLearn from customers using Cloud ComposerBlog postCloud Composer shows success with customers.5-min readSee all customersDocumentationDocumentationGoogle Cloud BasicsOverview of Cloud ComposerFind an overview of a Cloud Composer environment and the Google Cloud products used for an Apache Airflow deployment.Learn moreArchitecture Use a CI/CD pipeline for your data-processing workflowDiscover how to set up a continuous integration/continuous deployment (CI/CD) pipeline for processing data with managed products on Google Cloud.Learn morePatternPrivate IP Cloud Composer environmentFind information on using a private IP Cloud Composer environment.Learn moreTutorial Writing DAGs (workflows)Find out how to write an Apache Airflow directed acyclic graph (DAG) that runs in a Cloud Composer environment.Learn moreTutorial Google Cloud Skills Boost: Data engineering on Google CloudThis four-day instructor led class provides participants a hands-on introduction to designing and building data pipelines on Google Cloud.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud ComposerAll featuresAll featuresMulti-cloudCreate workflows that connect data, processing, and services across clouds, giving you a unified data environment.Open sourceCloud Composer is built upon Apache Airflow, giving users freedom from lock-in and portability.HybridEase your transition to the cloud or maintain a hybrid data environment by orchestrating workflows that cross between on-premises and the public cloud.IntegratedBuilt-in integration with BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub, AI Platform, and more, giving you the ability to orchestrate end-to-end Google Cloud workloads.Python programming languageLeverage existing Python skills to dynamically author and schedule workflows within Cloud Composer.ReliabilityIncrease reliability of your workflows through easy-to-use charts for monitoring and troubleshooting the root cause of an issue.Fully managedCloud Composer's managed nature allows you to focus on authoring, scheduling, and monitoring your workflows as opposed to provisioning resources.Networking and securityDuring environment creation, Cloud Composer provides the following configuration options: Cloud Composer environment with a route-based GKE cluster (default), Private IP Cloud Composer environment, Cloud Composer environment with a VPC Native GKE cluster using alias IP addresses, Shared VPC.PricingPricingPricing for Cloud Composer is consumption based, so you pay for what you use, as measured by vCPU/hour, GB/month, and GB transferred/month. We have multiple pricing units because Cloud Composer uses several Google Cloud products as building blocks.Pricing is uniform across all levels of consumption and sustained usage. 
For more information, please see the pricing page.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Connectivity.txt b/Cloud_Connectivity.txt new file mode 100644 index 0000000000000000000000000000000000000000..119c37aea2188f3134acc3fdac299019382eb5c7 --- /dev/null +++ b/Cloud_Connectivity.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/hybrid-connectivity +Date Scraped: 2025-02-23T12:07:12.885Z + +Content: +Google Cloud ConnectivityConnect to Google Cloud on your terms, from anywhereExtend on-premises and other cloud networks to Google Cloud with highly available, low latency connections through Cloud VPN or Dedicated, Partner, or Cross-Cloud Interconnect.Go to consoleContact salesNew customers can use $300 in free credits to spend on Cloud VPN, Dedicated Interconnect, and Partner Interconnect.Product highlightsConnections for every workload to support any amount of dataUse Google Cloud to connect to Alibaba Cloud, AWS, Azure, and Oracle CloudGuaranteed uptime of 99.99% through Dedicated InterconnectConnect anywhere with Google's Cross-Cloud Network2:39FeaturesHigh bandwidth, dedicated connectivityEnable highly available, low-latency, and high throughput dedicated connections from your on-premises to Google Cloud, or between Google Cloud and another cloud provider.Flexible, low-cost VPNFor low-volume data connections, Cloud VPN offers a lower cost option that delivers 1.5–3.0 Gbps per tunnel over an encrypted public internet connection. Cloud VPN's flexible routing options allow you to use static or dynamic routing to connect to different VPN gateways.Service-centric connectivityEnable access from on-premises and public cloud to SaaS, Google Cloud services, or customer-managed services hosted on Google Cloud through Private Service Connect (PSC) over interconnect.18:37Demo: Service-centric Cross-Cloud Network between AWS and Google CloudEncrypted dedicated connectionsEnable encrypted access to your VPC from your on-premises environment through HA VPN (IPSec) over Cloud Interconnect, or MACsec on Cloud Interconnect.Traffic differentiation and prioritizationEnsure prioritization of your mission-critical application traffic with an industry first: application awareness on Cloud Interconnect. 
Watch a short demo2:37Application awareness on Cloud Interconnect explainedCentralized dynamic routingEnable centralized management of static and dynamic routing in a hub and spoke architecture - across both multiple VPCs and remote spokes.Bidirectional Forwarding DetectionCloud Router enables fast forwarding path outage detection on Border Gateway Protocol (BGP) sessions and failover traffic to alternate healthy links.View all featuresConnectivity optionsProduct family or capabilityGoogle Cloud offeringsDescriptionBest forCloud InterconnectDedicated InterconnectDirect physical connections between your on-premises network and Google's networkTransferring encrypted sensitive data that requires 10 GB or more pipePartner InterconnectConnects on-premises networks and Virtual Private Cloud (VPC) networks through a supported service providerData needs that don’t need a 10-Gbps connectionCross-Cloud InterconnectProvides a secure, high bandwidth, managed service connecting Google Cloud directly to third-party cloud providersConnecting to Alibaba Cloud, AWS, Azure, or Oracle Cloud Virtual Private Networking (VPN)Cloud VPNSecurely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connectionExtending your data center to the cloud for non-encrypted informationInternet connectivityNetwork Service Tiers Tiered service options to optimize connectivity between systems on the internet and your Google Cloud instances; choose between Premium Tier for applications demanding exceptional performance and Standard Tier for cost-effective, yet reliable connectivityImproving the performance and cost of your network traffic based on your application's requirementsPeeringVerified Peering ProviderInternet connectivity through a provider that has verified peering to Google's edge networkSimplified access to Google Workspace and any other internet facing Google resourceDirect Peering with Google Tiered service options to optimize connectivity between systems on the internet and your Google Cloud instances; choose between Premium Tier for applications demanding exceptional performance and Standard Tier for cost-effective, yet reliable connectivityImproving the performance and cost of your network traffic based on your application's requirementsCloud InterconnectGoogle Cloud offeringsDedicated InterconnectDescriptionDirect physical connections between your on-premises network and Google's networkBest forTransferring encrypted sensitive data that requires 10 GB or more pipeGoogle Cloud offeringsPartner InterconnectDescriptionConnects on-premises networks and Virtual Private Cloud (VPC) networks through a supported service providerBest forData needs that don’t need a 10-Gbps connectionGoogle Cloud offeringsCross-Cloud InterconnectDescriptionProvides a secure, high bandwidth, managed service connecting Google Cloud directly to third-party cloud providersBest forConnecting to Alibaba Cloud, AWS, Azure, or Oracle Cloud Virtual Private Networking (VPN)Google Cloud offeringsCloud VPNDescriptionSecurely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connectionBest forExtending your data center to the cloud for non-encrypted informationInternet connectivityGoogle Cloud offeringsNetwork Service Tiers DescriptionTiered service options to optimize connectivity between systems on the internet and your Google Cloud instances; choose between Premium Tier for applications demanding exceptional performance and Standard Tier for cost-effective, yet 
reliable connectivityBest forImproving the performance and cost of your network traffic based on your application's requirementsPeeringGoogle Cloud offeringsVerified Peering ProviderDescriptionInternet connectivity through a provider that has verified peering to Google's edge networkBest forSimplified access to Google Workspace and any other internet facing Google resourceGoogle Cloud offeringsDirect Peering with Google DescriptionTiered service options to optimize connectivity between systems on the internet and your Google Cloud instances; choose between Premium Tier for applications demanding exceptional performance and Standard Tier for cost-effective, yet reliable connectivityBest forImproving the performance and cost of your network traffic based on your application's requirementsHow It WorksGoogle Cloud customers can connect to Google through enterprise-grade connections with higher availability and/or lower latency than their existing internet connections. Options include Dedicated Interconnect, Partner Interconnect, Cross-Cloud Interconnect, and peering options.Choose a productCross-Cloud Networking: Hybrid ConnectivityCommon UsesConnecting on-prem to Google CloudConnecting on-premises environments to Google Cloud resourcesGet highly available, low latency connections to Google Cloud using Dedicated Interconnect or Partner Interconnect.Learn more about choosing the right hybrid connectivity optionFind a Dedicated Interconnect colocation facility Explore the list of supported service providers for Partner InterconnectLearning resourcesConnecting on-premises environments to Google Cloud resourcesGet highly available, low latency connections to Google Cloud using Dedicated Interconnect or Partner Interconnect.Learn more about choosing the right hybrid connectivity optionFind a Dedicated Interconnect colocation facility Explore the list of supported service providers for Partner InterconnectGet data into Google CloudMigration to Google Cloud: Transferring your large datasetsConnectivity is the foundation to the process of adding your data to Google Cloud. Explore different connectivity options depending on your data requirements for a secure migration.Learn more on transferring large datasets to Google CloudLearning resourcesMigration to Google Cloud: Transferring your large datasetsConnectivity is the foundation to the process of adding your data to Google Cloud. Explore different connectivity options depending on your data requirements for a secure migration.Learn more on transferring large datasets to Google CloudSecure connectivitySecurely connect to Google Cloud over dedicated connections or the public internetCloud VPN provides a low-cost option that securely connects you over IPSec tunnels over the public internet. Cloud Interconnect provides dedicated physical connections, and offers encryption both at the network layer with the IPSec protocol, and at the physical layer with the IEEE standard 802.1AE Media Access Control Security (MACsec).Get an overview of HA VPN over Cloud InterconnectGet an overview of MACsec for Cloud InterconnectGet an overview of Cloud VPNLearning resourcesSecurely connect to Google Cloud over dedicated connections or the public internetCloud VPN provides a low-cost option that securely connects you over IPSec tunnels over the public internet. 
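As a sketch of what the IPsec option above can look like in practice, the following hypothetical Python snippet uses the google-cloud-compute client to create a classic Cloud VPN tunnel. It is heavily simplified: a working setup also needs a target VPN gateway, ESP and UDP 500/4500 forwarding rules, and routes (or HA VPN with Cloud Router for dynamic routing), and every identifier shown is a placeholder.

# Simplified sketch of creating a classic Cloud VPN (IPsec) tunnel.
from google.cloud import compute_v1

def create_classic_vpn_tunnel(project: str, region: str) -> None:
    tunnel = compute_v1.VpnTunnel(
        name="on-prem-tunnel-1",
        peer_ip="203.0.113.5",                    # public IP of the on-prem VPN device
        shared_secret="replace-with-a-strong-secret",
        ike_version=2,
        target_vpn_gateway=(
            f"projects/{project}/regions/{region}/targetVpnGateways/classic-gw-1"
        ),
        local_traffic_selector=["10.10.0.0/16"],    # VPC ranges sent through the tunnel
        remote_traffic_selector=["192.168.0.0/16"], # on-prem ranges reachable via the peer
    )
    op = compute_v1.VpnTunnelsClient().insert(
        project=project, region=region, vpn_tunnel_resource=tunnel
    )
    op.result()  # block until the long-running operation finishes

create_classic_vpn_tunnel("example-project", "us-central1")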
Cloud Interconnect provides dedicated physical connections, and offers encryption both at the network layer with the IPSec protocol, and at the physical layer with the IEEE standard 802.1AE Media Access Control Security (MACsec).Get an overview of HA VPN over Cloud InterconnectGet an overview of MACsec for Cloud InterconnectGet an overview of Cloud VPNMulticloud networking connectivitySimple way to connect to multiple cloudsProvides a secure, high bandwidth, managed service connecting Google Cloud directly to third-party cloud providers: Alibaba Cloud, AWS, Azure, and Oracle Cloud.Watch a demo Cross-Cloud InterconnectIntroducing new capabilities for optimizing Cross-Cloud Network for distributed appsPartner Cross-Cloud Interconnect for Oracle Cloud Infrastructure overviewCross-Cloud Interconnect demo: Google Cloud and Amazon Web ServicesLearning resourcesSimple way to connect to multiple cloudsProvides a secure, high bandwidth, managed service connecting Google Cloud directly to third-party cloud providers: Alibaba Cloud, AWS, Azure, and Oracle Cloud.Watch a demo Cross-Cloud InterconnectIntroducing new capabilities for optimizing Cross-Cloud Network for distributed appsPartner Cross-Cloud Interconnect for Oracle Cloud Infrastructure overviewCross-Cloud Interconnect demo: Google Cloud and Amazon Web ServicesUse Google Cloud to connect sitesGlobal connectivity over reliable Google backboneNetwork Connectivity Center lets you use Google's network as part of a wide area network (WAN) that includes your external sites.Get an introduction to Network Connectivity CenterLearn more about site-to-site data transferLearn how to create hubs and spokesLearn how to connect two sites using VPN spokesLearning resourcesGlobal connectivity over reliable Google backboneNetwork Connectivity Center lets you use Google's network as part of a wide area network (WAN) that includes your external sites.Get an introduction to Network Connectivity CenterLearn more about site-to-site data transferLearn how to create hubs and spokesLearn how to connect two sites using VPN spokesPricingCloud Connectivity pricing OptionPricing detailsCloud Interconnect (Dedicated, Partner)Learn more about Cloud Interconnect pricingCross-Cloud InterconnectLearn more about Cross-Cloud Interconnect pricingCloud VPNLearn more about Cloud VPN pricingNetwork Service TiersLearn more about Network Service Tiers pricingCDN InterconnectLearn more about CDN Interconnect pricingDirect PeeringLearn more about Direct Peering pricingCustomers can leverage a Verified Peering Provider partner at no additional cost.Cloud Connectivity pricing Cloud Interconnect (Dedicated, Partner)Pricing detailsLearn more about Cloud Interconnect pricingCross-Cloud InterconnectPricing detailsLearn more about Cross-Cloud Interconnect pricingCloud VPNPricing detailsLearn more about Cloud VPN pricingNetwork Service TiersPricing detailsLearn more about Network Service Tiers pricingCDN InterconnectPricing detailsLearn more about CDN Interconnect pricingDirect PeeringPricing detailsLearn more about Direct Peering pricingCustomers can leverage a Verified Peering Provider partner at no additional cost.Pricing calculatorEstimate your monthly charges.Estimate your costsCustom QuoteRequest a quote.Connect with our sales team to get a custom quote for your organizationTechnical deep diveCloud Interconnect troubleshootingWatch the videoCloud Interconnect best practicesRead the guideNetwork Connectivity Center (NCC) codelabNetwork Connectivity Center as Spoke labArchitecture 
recommendationsRecommended topology for production-level applicationsCentralized organizational policies for Cloud Interconnect Manage Cloud Interconnect using custom constraintsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_DNS.txt b/Cloud_DNS.txt new file mode 100644 index 0000000000000000000000000000000000000000..e127d4c32baf5e635eec44abd79bf6bd711cef98 --- /dev/null +++ b/Cloud_DNS.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/dns +Date Scraped: 2025-02-23T12:07:07.946Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayJump to Cloud DNSCloud DNSReliable, resilient, low-latency DNS serving from Google's worldwide network with everything you need to register, manage, and serve your domains.New customers get $300 in free credits to spend on Cloud DNS.Go to consoleContact salesStart using Cloud DNS with this guideEasily publish and manage millions of DNS zones and records using our simple user interfaceExplore the latest news, articles, and videos for Cloud DNS and its featuresBLOGIncreasing robustness of serving public DNS namesBenefits100% availability and low latencyUse Google’s infrastructure for production quality and high-volume authoritative DNS serving. Your users will have reliable, low-latency access from anywhere in the world using our anycast name servers.Automatic scalingCloud DNS can scale to large numbers of DNS zones and records. You can reliably create and update millions of DNS records. Our name servers automatically scale to handle query volume.End-to-end domain managementUse Cloud Domains to register and manage domains in Google Cloud and automatically set up DNS zones for your domains.Key featuresKey featuresAuthoritative DNS lookupCloud DNS translates requests for domain names like www.google.com into IP addresses like 74.125.29.101.Domain registration and managementCloud Domains allow customers to register and manage domains on Google Cloud and provide tight integration with Cloud DNS.Fast anycast name serversCloud DNS uses our global network of anycast name servers to serve your DNS zones from redundant locations around the world, providing high availability and lower latency for your users.View all featuresLooking for other networking products, including Cloud CDN, Cloud Armor or Cloud Load Balancing?Browse productsWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postCreate a log-based metric using Cloud DNS public zone logs dataRead the blogVideoCloud DNS demo of multi-cloud private DNS between AWS and Google CloudWatch videoBlog postCloud Domains, now GA, makes it easy to register and manage custom domainsRead the blogBlog postCloud DNS ExplainedLearn moreBlog postIntroducing container-native Cloud DNS: Global DNS for KubernetesLearn moreDocumentationDocumentationGoogle Cloud BasicsCloud DNS overviewFind an overview of Cloud DNS features and capabilities.Learn moreGoogle Cloud BasicsCloud Domains overviewFind an overview of Cloud Domains features and capabilities.Learn moreTutorialSetting up a domain using Cloud DNSGet guided through the end-to-end process for registering a domain, setting up a sample web server, and using Cloud DNS to point the domain URL to the server.Learn moreQuickstartSet up a Cloud DNS managed zoneSee how to set up a Cloud DNS managed zone and resource record for your domain name.Learn moreQuickstartRegister a domain using Cloud DomainsSee how to 
search for an available domain, register it, and then verify the registration.Learn moreArchitectureReference architectures for Hybrid DNSExplore reference architectures for common scenarios using Cloud DNS private zones in a hybrid environment.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesGet the latest release notes for Cloud DNSAll featuresAll featuresAuthoritative DNS lookupCloud DNS translates requests for domain names like www.google.com into IP addresses like 74.125.29.101.Cloud IAMCloud Domains’ integration with Cloud IAM provides secure domains management with full control and visibility for domain resources.Cloud LoggingPrivate DNS logs a record for every DNS query received from VMs and inbound forwarding flows within your networks. You can view DNS logs in Cloud Logging and export logs to any destination that Cloud Logging export supports.DNS peeringDNS peering makes available a second method of sharing DNS data. All or a portion of the DNS namespace can be configured to be sent from one network to another and, once there, will respect all DNS configuration defined in the peered network.DNS forwardingIf you have a hybrid-cloud architecture, DNS forwarding can help bridge your on-premises and Google Cloud DNS environments. This fully managed product lets you use your existing DNS servers as authoritative, and intelligent caching makes sure your queries are performed efficiently—all without third-party software or the need to use your own compute resources.DNS registration and managementCloud Domains allow customers to register and manage domains on Google Cloud and provide tight integration with Cloud DNS.DNS security (DNSSEC)DNSSEC protects your domains from spoofing and cache poisoning attacks. With Cloud Domains, you can enable or disable managed DNSSEC when you create a public zone.Fast anycast name serversCloud DNS uses our global network of anycast name servers to serve your DNS zones from redundant locations around the world, providing high availability and lower latency for your users.Private zonesPrivate DNS zones provide an easy-to-manage internal DNS solution for your private Google Cloud networks, eliminating the need to provision and manage additional software and resources. And since DNS queries for private zones are restricted to a private network, hostile agents can’t access your internal network information.Zone and project managementCreate managed zones for your project, then add, edit, and delete DNS records. You can control permissions at a project level and monitor your changes as they propagate to DNS name servers.Container-native Cloud DNSThe native integration of Cloud DNS with Google Kubernetes Engine (GKE) provides in-cluster Service DNS resolution with Cloud DNS, enabling high-throughput, scalable DNS resolution for every GKE node.PricingPricingCloud DNS is a simple, cost-effective alternative to hosting your own DNS servers on-premises or using other third-party DNS services. For pricing information, please refer to our volume pricing tier.With Cloud Domains, domain registration pricing is simple and transparent with registration and renewal through Cloud Billing. .com and .net pricing starts at $12. 
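To illustrate the managed zone and record workflow described above, here is a minimal sketch using the google-cloud-dns Python client; the project, zone, domain, and IP address are placeholders.

# Minimal sketch: create a managed zone and add an A record with google-cloud-dns.
from google.cloud import dns

client = dns.Client(project="example-project")

# Create a public managed zone for the domain (note the trailing dot).
zone = client.zone("example-zone", dns_name="example.com.")
if not zone.exists():
    zone.create()

# Add an A record by submitting a change set to the zone.
record = zone.resource_record_set("www.example.com.", "A", 300, ["203.0.113.10"])
changes = zone.changes()
changes.add_record_set(record)
changes.create()  # returns once the change is submitted; propagation follows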
For detailed pricing information, please refer to the pricing table.If you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Data_Fusion(1).txt b/Cloud_Data_Fusion(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..f84368e740baad08236d25b0091193b13162d6bf --- /dev/null +++ b/Cloud_Data_Fusion(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/data-fusion +Date Scraped: 2025-02-23T12:05:27.438Z + +Content: +Watch the Data Cloud Summit on demand and learn about the latest innovations in analytics, AI, BI, and databases.Jump to Cloud Data Fusion Cloud Data FusionFully managed, cloud-native data integration at any scale.New customers get $300 in free credits to spend on Data Fusion. All customers get the first 120 hours of pipeline development free per month, per account, not charged against your credits.Go to consoleContact salesVisual point-and-click interface enabling code-free deployment of ETL/ELT data pipelinesBroad library of 150+ preconfigured connectors and transformations, at no additional costNatively integrated best-in-class Google Cloud servicesEnd-to-end data lineage for root cause and impact analysisBuilt with an open source core (CDAP) for pipeline portability1:54Introduction to Cloud Data FusionBenefitsAvoid technical bottlenecks and lift productivityData Fusion’s intuitive drag-and-drop interface, pre-built connectors, and self-service model of code-free data integration remove technical expertise-based bottlenecks and accelerate time to insight.Lower total cost of pipeline ownershipA serverless approach leveraging the scalability and reliability of Google services like Dataproc means Data Fusion offers the best of data integration capabilities with a lower total cost of ownership.Build with a data governance foundationWith built-in features like end-to-end data lineage, integration metadata, and cloud-native security and data protection services, Data Fusion assists teams with root cause or impact analysis and compliance.Key featuresKey featuresOpen core, delivering hybrid and multi-cloud integrationData Fusion is built using open source project CDAP, and this open core ensures data pipeline portability for users. CDAP’s broad integration with on-premises and public cloud platforms gives Cloud Data Fusion users the ability to break down silos and deliver insights that were previously inaccessible.Integrated with Google’s industry-leading big data toolsData Fusion’s integration with Google Cloud simplifies data security and ensures data is immediately available for analysis. Whether you’re curating a data lake with Cloud Storage and Dataproc, moving data into BigQuery for data warehousing, or transforming data to land it in a relational store like Spanner, Cloud Data Fusion’s integration makes development and iteration fast and easy.Data integration through collaboration and standardizationCloud Data Fusion offers pre-built transformations for both batch and real-time processing. It provides the ability to create an internal library of custom connections and transformations that can be validated, shared, and reused across teams. 
It lays the foundation of collaborative data engineering and improves productivity. That means less waiting for ETL developers and data engineers and, importantly, less sweating about code quality.View all featuresThe Economic Benefits of Data Fusion and its Data Integration Alternatives Download the reportCustomersLearn from customers using Cloud Data FusionBlog postLiveramp scales identity data management with Cloud Data Fusion 5-min readCase studyStar Media Group transforms into an engagement business with Cloud Data Fusion.5-min readSee all customersWhat's newExplore the latest updatesSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoHow to bring data from SAP to Google CloudWatch videoVideoEmbedded data wrangling with Data FusionWatch videoBlog postLower TCO for managing data pipelines by 80% with Cloud Data FusionLearn moreBlog postBridge Data Silos with Data FusionRead the blogBlog postReal-time Change Data Capture for data replication into BigQueryRead the blogBlog postBetter together: orchestrating your Data Fusion pipelines with Cloud ComposerRead the blogDocumentationDocumentationTutorial Enabling Cloud Data FusionLearn how to enable the Cloud Data Fusion API for your Google Cloud project.Learn moreTutorial Cloud Data Fusion concepts overviewLearn about Cloud Data Fusion concepts and features.Learn moreTutorialExploring data lineageThis tutorial shows how to use Cloud Data Fusion to explore data lineage: the data's origins and its movement over time.Learn moreTutorialUsing JDBC drivers with Cloud Data FusionDiscover how to use Java Database Connectivity (JDBC) drivers with Cloud Data Fusion pipelines.Learn moreTutorialData engineering on Google CloudLearn firsthand how to design and build data processing systems on Google Cloud with this four-day instructor-led class.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud Data FusionUse casesUse casesUse caseModern, more secure data lakes on Google CloudCloud Data Fusion helps users build scalable, distributed data lakes on Google Cloud by integrating data from siloed on-premises platforms. Customers can leverage the scale of the cloud to centralize data and drive more value out of their data as a result. The self-service capabilities of Cloud Data Fusion increase process visibility and lower the overall cost of operational support.Use caseAgile data warehouses with BigQueryCloud Data Fusion can help organizations better understand their customers by breaking down data silos and enabling development of agile, cloud-based data warehouse solutions in BigQuery. A trusted, unified view of customer engagement and behavior unlocks the ability to drive a better customer experience, which leads to higher retention and higher revenue per customer.Use caseUnified analytics environmentMany users today want to establish a unified analytics environment across a myriad of expensive, on-premises data marts. Employing a wide range of disconnected tools and stop-gap measures creates data quality and security challenges. 
Cloud Data Fusion’s vast variety of connectors, visual interfaces, and abstractions centered around business logic helps in lowering TCO, promoting self-service and standardization, and reducing repetitive work.View all technical guidesAll featuresAll features Code-free self-serviceRemove bottlenecks by enabling nontechnical users through a code-free graphical interface that delivers point-and-click data integration. Collaborative data engineeringCloud Data Fusion offers the ability to create an internal library of custom connections and transformations that can be validated, shared, and reused across an organization. Google Cloud-nativeFully managed Google Cloud-native architecture unlocks the scalability, reliability, security, and privacy features of Google Cloud.Real-time data integrationReplicate transactional and operational databases such as SQL Server, Oracle and MySQL directly into BigQuery with just a few clicks using Data Fusion’s replication feature. Integration with Datastream allows you to deliver change streams into BigQuery for continuous analytics. Use feasibility assessment for faster development iterations and performance/health monitoring for observability.Batch integrationDesign, run and operate high-volumes of data pipelines periodically with support for popular data sources including file systems and object stores, relational and NoSQL databases, SaaS systems, and mainframes. Enterprise-grade securityIntegration with Cloud Identity and Access Management (IAM), Private IP, VPC-SC and CMEK provides enterprise security and alleviates risks by ensuring compliance and data protection. Integration metadata and lineageSearch integrated datasets by technical and business metadata. Track lineage for all integrated datasets at the dataset and field level. Seamless operationsREST APIs, time-based schedules, pipeline state-based triggers, logs, metrics, and monitoring dashboards make it easy to operate in mission-critical environments. Comprehensive integration toolkitBuilt-in connectors to a variety of modern and legacy systems, code-free transformations, conditionals and pre/post processing, alerting and notifications, and error processing provide a comprehensive data integration experience. Hybrid enablementOpen source provides the flexibility and portability required to build standardized data integration solutions across hybrid and multi-cloud environments.PricingPricingCloud Data Fusion pricing is broken down by: 1. Design cost: based on the number of hours an instance is running and not the number of pipelines being developed and run. The Basic edition offers the first 120 hours per month per account at no cost.2. 
Processing cost: The cost of Dataproc clusters used to run the pipelines.EditionPrice per Cloud Data Fusion instance hourNumber of simultaneous pipelines supportedNumber of users supportedDeveloperUS$0.352 (Recommended)2 (Recommended)BasicUS$1.80UnlimitedUnlimitedEnterpriseUS$4.20UnlimitedUnlimitedView pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Data_Fusion.txt b/Cloud_Data_Fusion.txt new file mode 100644 index 0000000000000000000000000000000000000000..9ab8c68204e51184d09ce32f595e4bf5eacac9ca --- /dev/null +++ b/Cloud_Data_Fusion.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/data-fusion +Date Scraped: 2025-02-23T12:03:39.184Z + +Content: +Watch the Data Cloud Summit on demand and learn about the latest innovations in analytics, AI, BI, and databases.Jump to Cloud Data Fusion Cloud Data FusionFully managed, cloud-native data integration at any scale.New customers get $300 in free credits to spend on Data Fusion. All customers get the first 120 hours of pipeline development free per month, per account, not charged against your credits.Go to consoleContact salesVisual point-and-click interface enabling code-free deployment of ETL/ELT data pipelinesBroad library of 150+ preconfigured connectors and transformations, at no additional costNatively integrated best-in-class Google Cloud servicesEnd-to-end data lineage for root cause and impact analysisBuilt with an open source core (CDAP) for pipeline portability1:54Introduction to Cloud Data FusionBenefitsAvoid technical bottlenecks and lift productivityData Fusion’s intuitive drag-and-drop interface, pre-built connectors, and self-service model of code-free data integration remove technical expertise-based bottlenecks and accelerate time to insight.Lower total cost of pipeline ownershipA serverless approach leveraging the scalability and reliability of Google services like Dataproc means Data Fusion offers the best of data integration capabilities with a lower total cost of ownership.Build with a data governance foundationWith built-in features like end-to-end data lineage, integration metadata, and cloud-native security and data protection services, Data Fusion assists teams with root cause or impact analysis and compliance.Key featuresKey featuresOpen core, delivering hybrid and multi-cloud integrationData Fusion is built using open source project CDAP, and this open core ensures data pipeline portability for users. CDAP’s broad integration with on-premises and public cloud platforms gives Cloud Data Fusion users the ability to break down silos and deliver insights that were previously inaccessible.Integrated with Google’s industry-leading big data toolsData Fusion’s integration with Google Cloud simplifies data security and ensures data is immediately available for analysis. Whether you’re curating a data lake with Cloud Storage and Dataproc, moving data into BigQuery for data warehousing, or transforming data to land it in a relational store like Spanner, Cloud Data Fusion’s integration makes development and iteration fast and easy.Data integration through collaboration and standardizationCloud Data Fusion offers pre-built transformations for both batch and real-time processing. 
It provides the ability to create an internal library of custom connections and transformations that can be validated, shared, and reused across teams. It lays the foundation of collaborative data engineering and improves productivity. That means less waiting for ETL developers and data engineers and, importantly, less sweating about code quality.View all featuresThe Economic Benefits of Data Fusion and its Data Integration Alternatives Download the reportCustomersLearn from customers using Cloud Data FusionBlog postLiveramp scales identity data management with Cloud Data Fusion 5-min readCase studyStar Media Group transforms into an engagement business with Cloud Data Fusion.5-min readSee all customersWhat's newExplore the latest updatesSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoHow to bring data from SAP to Google CloudWatch videoVideoEmbedded data wrangling with Data FusionWatch videoBlog postLower TCO for managing data pipelines by 80% with Cloud Data FusionLearn moreBlog postBridge Data Silos with Data FusionRead the blogBlog postReal-time Change Data Capture for data replication into BigQueryRead the blogBlog postBetter together: orchestrating your Data Fusion pipelines with Cloud ComposerRead the blogDocumentationDocumentationTutorial Enabling Cloud Data FusionLearn how to enable the Cloud Data Fusion API for your Google Cloud project.Learn moreTutorial Cloud Data Fusion concepts overviewLearn about Cloud Data Fusion concepts and features.Learn moreTutorialExploring data lineageThis tutorial shows how to use Cloud Data Fusion to explore data lineage: the data's origins and its movement over time.Learn moreTutorialUsing JDBC drivers with Cloud Data FusionDiscover how to use Java Database Connectivity (JDBC) drivers with Cloud Data Fusion pipelines.Learn moreTutorialData engineering on Google CloudLearn firsthand how to design and build data processing systems on Google Cloud with this four-day instructor-led class.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud Data FusionUse casesUse casesUse caseModern, more secure data lakes on Google CloudCloud Data Fusion helps users build scalable, distributed data lakes on Google Cloud by integrating data from siloed on-premises platforms. Customers can leverage the scale of the cloud to centralize data and drive more value out of their data as a result. The self-service capabilities of Cloud Data Fusion increase process visibility and lower the overall cost of operational support.Use caseAgile data warehouses with BigQueryCloud Data Fusion can help organizations better understand their customers by breaking down data silos and enabling development of agile, cloud-based data warehouse solutions in BigQuery. A trusted, unified view of customer engagement and behavior unlocks the ability to drive a better customer experience, which leads to higher retention and higher revenue per customer.Use caseUnified analytics environmentMany users today want to establish a unified analytics environment across a myriad of expensive, on-premises data marts. Employing a wide range of disconnected tools and stop-gap measures creates data quality and security challenges. 
Cloud Data Fusion’s vast variety of connectors, visual interfaces, and abstractions centered around business logic helps in lowering TCO, promoting self-service and standardization, and reducing repetitive work.View all technical guidesAll featuresAll features Code-free self-serviceRemove bottlenecks by enabling nontechnical users through a code-free graphical interface that delivers point-and-click data integration. Collaborative data engineeringCloud Data Fusion offers the ability to create an internal library of custom connections and transformations that can be validated, shared, and reused across an organization. Google Cloud-nativeFully managed Google Cloud-native architecture unlocks the scalability, reliability, security, and privacy features of Google Cloud.Real-time data integrationReplicate transactional and operational databases such as SQL Server, Oracle and MySQL directly into BigQuery with just a few clicks using Data Fusion’s replication feature. Integration with Datastream allows you to deliver change streams into BigQuery for continuous analytics. Use feasibility assessment for faster development iterations and performance/health monitoring for observability.Batch integrationDesign, run and operate high-volumes of data pipelines periodically with support for popular data sources including file systems and object stores, relational and NoSQL databases, SaaS systems, and mainframes. Enterprise-grade securityIntegration with Cloud Identity and Access Management (IAM), Private IP, VPC-SC and CMEK provides enterprise security and alleviates risks by ensuring compliance and data protection. Integration metadata and lineageSearch integrated datasets by technical and business metadata. Track lineage for all integrated datasets at the dataset and field level. Seamless operationsREST APIs, time-based schedules, pipeline state-based triggers, logs, metrics, and monitoring dashboards make it easy to operate in mission-critical environments. Comprehensive integration toolkitBuilt-in connectors to a variety of modern and legacy systems, code-free transformations, conditionals and pre/post processing, alerting and notifications, and error processing provide a comprehensive data integration experience. Hybrid enablementOpen source provides the flexibility and portability required to build standardized data integration solutions across hybrid and multi-cloud environments.PricingPricingCloud Data Fusion pricing is broken down by: 1. Design cost: based on the number of hours an instance is running and not the number of pipelines being developed and run. The Basic edition offers the first 120 hours per month per account at no cost.2. 
Processing cost: The cost of Dataproc clusters used to run the pipelines.EditionPrice per Cloud Data Fusion instance hourNumber of simultaneous pipelines supportedNumber of users supportedDeveloperUS$0.352 (Recommended)2 (Recommended)BasicUS$1.80UnlimitedUnlimitedEnterpriseUS$4.20UnlimitedUnlimitedView pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Deploy(1).txt b/Cloud_Deploy(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..62dd46a18927b26d1e2bf04f1dbf1a6ac740d836 --- /dev/null +++ b/Cloud_Deploy(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/deploy +Date Scraped: 2025-02-23T12:04:20.473Z + +Content: +Jump to Cloud Deploy Cloud DeployDeliver continuously to Google Kubernetes Engine, Cloud Run, and more.Go to consoleView documentationCreate deployment pipelines for GKE, Cloud Run, and moreFully managed continuous delivery service for easy scalingEnterprise security and auditBuilt-in delivery metricsSnaps into your existing DevOps ecosystemKey featuresKey featuresStreamlined continuous deliveryCloud Deploy makes continuous delivery to GKE, Cloud Run services and jobs, and Anthos easy and powerful. Define releases and progress them through environments, such as test, stage, and production. Cloud Deploy provides easy one-step promotion and rollback of releases via the web console, CLI, or API. Built-in metrics enable insight into deployment frequency and success.Fully managed single pane of glassAs a fully managed service, Cloud Deploy has no infrastructure to set up and manage, while providing scale-up and scale-down automatically to optimize cost and performance. This centralization also provides a single pane of glass to monitor and control release candidates organization-wide as they progress toward production.Tightly integrated with Google CloudCloud Deploy is the most integrated GKE, Cloud Run, and Anthos deployment platform available. Lockdown release progression via IAM, monitor release events with Cloud Logging, and achieve traceability with Cloud Audit Logs. Connect monitoring to deployed resources.Integrates with the tools you loveCloud Deploy can be integrated with popular DevOps tools, such as CI and ticketing. Cloud Deploy brings Skaffold to your pipelines, which, in unison with Cloud Code, brings pipeline parity across dev and CI/CD.View all featuresCloud Deploy offers a streamlined approach to create CI/CD pipelines using Skaffold, along with advanced features like canary deployment and verification. 
Additionally, it offers a unified developers’ experience for GKE and Cloud Run, making it easy to choose the most suitable platform for applications.Jun Sakata - Head of Platform Engineering, UbieWhat's newWhat’s newBlog postCloud Deploy gains support for custom target typesRead the blogBlog postConnecting GitHub Actions and Cloud DeployRead the blogBlog postCloud Deploy adds pipeline automation and Cloud Run Jobs supportLearn moreBlog postDeploy a secure CI/CD pipeline using Google Cloud DevOpsRead the blogReport Cloud Deploy adds canary and parallel deployment supportRead the blogReport Promoting pre-prod to production in Cloud Run with Cloud DeployRead the blogDocumentationDocumentationGoogle Cloud BasicsCloud Deploy conceptsLearn more about Cloud Deploy, including how it works, terminology, and architecture.Learn moreQuickstartDeploy an application to two GKE targetsDeploy a simple app to a progression of Google Kubernetes Engine clusters.Learn moreQuickstartDeploy an application to two Cloud Run targetsDeploy a simple app to a progression of Cloud Run services.Learn moreTutorialCloud Deploy tutorialsThese hands-on tutorials guide you through setting up a pipeline and deploying a sample application using Cloud Deploy.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud DeployAll featuresLearn more about Cloud Deploy featuresPipeline visualizationVisualize the path to delivery. Define delivery pipelines and visualize the progression of candidate releases through to production.Easy rollout/rollbackRollout and rollback to GKE, Cloud Run, and Anthos user clusters has never been easier and more clear. Promote a release between target stages using a one-step operation in the web console, CLI, or API.Built-in approvalsCloud Deploy supports separation of duties and concerns with formal release promotion approvals, accessible via the web console, CLI, or API and integrated with IAM.Parallel deployDeploy to multiple GKE or Anthos clusters, or Cloud Run service regions, concurrently. Cloud Deploy orchestrates to ensure the deployment succeeds everywhere or is collectively rolled back.Canary deployProgressively deploy a new version of your application to a specified portion (such as 10%) of traffic. Deploy hooksConfigure Cloud Deploy to perform pre-deployment, post-deployment actions, or both.Deployment verificationIntegrate deployment and verification tests to have Cloud Deploy confirm rollout success. AutomationConfigure continuous deployment within your delivery pipeline through automation. Automatically promote releases from one target to the next as well as automate rollout canary percentages. Declarative configurationNever worry about the how, just define the what and let Cloud Deploy do the heavy lifting. Cloud Deploy fully manages GKE, Cloud Run, and Anthos user cluster deployments based on desired end states.Custom target typesCustom target types extend Cloud Deploy by allowing you to define and use a custom target type with its own renderer and deployer while continuing to make use of Cloud Deploy features, including approval and promotion.OpinionationCloud Deploy provides an opinionated on-ramp to GKE, Cloud Run, and Anthos via Skaffold. 
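As an illustration of driving Cloud Deploy from code rather than the web console, the sketch below enumerates the delivery pipelines in a project and the releases in each one. It assumes the google-cloud-deploy Python client library is installed and that Application Default Credentials are available; the project and region values are placeholders, and the method names follow the public v1 client surface.

# Minimal sketch: list Cloud Deploy delivery pipelines and their releases.
# Assumes: pip install google-cloud-deploy, and ADC credentials are configured.
from google.cloud import deploy_v1

PROJECT = "my-project"   # placeholder
REGION = "us-central1"   # placeholder

def list_pipelines_and_releases() -> None:
    client = deploy_v1.CloudDeployClient()
    parent = f"projects/{PROJECT}/locations/{REGION}"
    for pipeline in client.list_delivery_pipelines(parent=parent):
        print("pipeline:", pipeline.name)
        for release in client.list_releases(parent=pipeline.name):
            print("  release:", release.name)

if __name__ == "__main__":
    list_pipelines_and_releases()

The same client can be called from a CI job, which is how the "invoke Cloud Deploy from popular CI products using the CLI or API" integration described above is typically wired up.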
Leverage built-in best practices, which keep pipelines durable by insulating them against changes.Tightly integratedNo hand wiring required. Cloud Deploy comes pre-integrated to IAM, Cloud Logging, and Cloud Audit Logs.MetricsInsights at your fingertips. Know how frequently and successfully releases progress through delivery pipelines.Auditing and traceabilityCloud Deploy integrates with Cloud Logging to provide release auditability and traceability. Maintain clarity on which releases were promoted and by whom.IAM and execution permissioningLockdown release deployments with granular IAM permissioning and scoped service accounts for execution.Connect the tools you loveCloud Deploy extends your DevOps ecosystem and plays with all the tools you love. Invoke Cloud Deploy from popular CI products using the CLI or API and federate approvals to ticketing systems of choice.Unified with your developer experienceCloud Deploy brings Skaffold to your pipelines, enabling operators to achieve pipeline parity across dev and CI/CD, while developers remain productive and insulated from platform changes while leveraging the idiomatic developer experience of Cloud Code.PricingPricingCloud Deploy customers are charged a management fee per active delivery pipeline with more than one target (“multiple target delivery pipeline”). There is no charge for the first active multiple target delivery pipeline per billing account each month, with each additional active multiple target delivery pipeline charged at $5 per month. Delivery pipelines with only one target do not incur a management fee.For all delivery pipelines, charges for underlying services also apply.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Deploy.txt b/Cloud_Deploy.txt new file mode 100644 index 0000000000000000000000000000000000000000..09d3348f296f244652ff08d836a0fd63c85b6a61 --- /dev/null +++ b/Cloud_Deploy.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/deploy +Date Scraped: 2025-02-23T12:03:07.429Z + +Content: +Jump to Cloud Deploy Cloud DeployDeliver continuously to Google Kubernetes Engine, Cloud Run, and more.Go to consoleView documentationCreate deployment pipelines for GKE, Cloud Run, and moreFully managed continuous delivery service for easy scalingEnterprise security and auditBuilt-in delivery metricsSnaps into your existing DevOps ecosystemKey featuresKey featuresStreamlined continuous deliveryCloud Deploy makes continuous delivery to GKE, Cloud Run services and jobs, and Anthos easy and powerful. Define releases and progress them through environments, such as test, stage, and production. Cloud Deploy provides easy one-step promotion and rollback of releases via the web console, CLI, or API. Built-in metrics enable insight into deployment frequency and success.Fully managed single pane of glassAs a fully managed service, Cloud Deploy has no infrastructure to set up and manage, while providing scale-up and scale-down automatically to optimize cost and performance. This centralization also provides a single pane of glass to monitor and control release candidates organization-wide as they progress toward production.Tightly integrated with Google CloudCloud Deploy is the most integrated GKE, Cloud Run, and Anthos deployment platform available. 
Lockdown release progression via IAM, monitor release events with Cloud Logging, and achieve traceability with Cloud Audit Logs. Connect monitoring to deployed resources.Integrates with the tools you loveCloud Deploy can be integrated with popular DevOps tools, such as CI and ticketing. Cloud Deploy brings Skaffold to your pipelines, which, in unison with Cloud Code, brings pipeline parity across dev and CI/CD.View all featuresCloud Deploy offers a streamlined approach to create CI/CD pipelines using Skaffold, along with advanced features like canary deployment and verification. Additionally, it offers a unified developers’ experience for GKE and Cloud Run, making it easy to choose the most suitable platform for applications.Jun Sakata - Head of Platform Engineering, UbieWhat's newWhat’s newBlog postCloud Deploy gains support for custom target typesRead the blogBlog postConnecting GitHub Actions and Cloud DeployRead the blogBlog postCloud Deploy adds pipeline automation and Cloud Run Jobs supportLearn moreBlog postDeploy a secure CI/CD pipeline using Google Cloud DevOpsRead the blogReport Cloud Deploy adds canary and parallel deployment supportRead the blogReport Promoting pre-prod to production in Cloud Run with Cloud DeployRead the blogDocumentationDocumentationGoogle Cloud BasicsCloud Deploy conceptsLearn more about Cloud Deploy, including how it works, terminology, and architecture.Learn moreQuickstartDeploy an application to two GKE targetsDeploy a simple app to a progression of Google Kubernetes Engine clusters.Learn moreQuickstartDeploy an application to two Cloud Run targetsDeploy a simple app to a progression of Cloud Run services.Learn moreTutorialCloud Deploy tutorialsThese hands-on tutorials guide you through setting up a pipeline and deploying a sample application using Cloud Deploy.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud DeployAll featuresLearn more about Cloud Deploy featuresPipeline visualizationVisualize the path to delivery. Define delivery pipelines and visualize the progression of candidate releases through to production.Easy rollout/rollbackRollout and rollback to GKE, Cloud Run, and Anthos user clusters has never been easier and more clear. Promote a release between target stages using a one-step operation in the web console, CLI, or API.Built-in approvalsCloud Deploy supports separation of duties and concerns with formal release promotion approvals, accessible via the web console, CLI, or API and integrated with IAM.Parallel deployDeploy to multiple GKE or Anthos clusters, or Cloud Run service regions, concurrently. Cloud Deploy orchestrates to ensure the deployment succeeds everywhere or is collectively rolled back.Canary deployProgressively deploy a new version of your application to a specified portion (such as 10%) of traffic. Deploy hooksConfigure Cloud Deploy to perform pre-deployment, post-deployment actions, or both.Deployment verificationIntegrate deployment and verification tests to have Cloud Deploy confirm rollout success. AutomationConfigure continuous deployment within your delivery pipeline through automation. Automatically promote releases from one target to the next as well as automate rollout canary percentages. 
Declarative configurationNever worry about the how, just define the what and let Cloud Deploy do the heavy lifting. Cloud Deploy fully manages GKE, Cloud Run, and Anthos user cluster deployments based on desired end states.Custom target typesCustom target types extend Cloud Deploy by allowing you to define and use a custom target type with its own renderer and deployer while continuing to make use of Cloud Deploy features, including approval and promotion.OpinionationCloud Deploy provides an opinionated on-ramp to GKE, Cloud Run, and Anthos via Skaffold. Leverage built-in best practices, which keep pipelines durable by insulating them against changes.Tightly integratedNo hand wiring required. Cloud Deploy comes pre-integrated to IAM, Cloud Logging, and Cloud Audit Logs.MetricsInsights at your fingertips. Know how frequently and successfully releases progress through delivery pipelines.Auditing and traceabilityCloud Deploy integrates with Cloud Logging to provide release auditability and traceability. Maintain clarity on which releases were promoted and by whom.IAM and execution permissioningLockdown release deployments with granular IAM permissioning and scoped service accounts for execution.Connect the tools you loveCloud Deploy extends your DevOps ecosystem and plays with all the tools you love. Invoke Cloud Deploy from popular CI products using the CLI or API and federate approvals to ticketing systems of choice.Unified with your developer experienceCloud Deploy brings Skaffold to your pipelines, enabling operators to achieve pipeline parity across dev and CI/CD, while developers remain productive and insulated from platform changes while leveraging the idiomatic developer experience of Cloud Code.PricingPricingCloud Deploy customers are charged a management fee per active delivery pipeline with more than one target (“multiple target delivery pipeline”). There is no charge for the first active multiple target delivery pipeline per billing account each month, with each additional active multiple target delivery pipeline charged at $5 per month. Delivery pipelines with only one target do not incur a management fee.For all delivery pipelines, charges for underlying services also apply.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Deployment_Manager.txt b/Cloud_Deployment_Manager.txt new file mode 100644 index 0000000000000000000000000000000000000000..6096fd62c836e65cfb4ccc9ed2ea6af5f25a7868 --- /dev/null +++ b/Cloud_Deployment_Manager.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/deployment-manager/docs +Date Scraped: 2025-02-23T12:04:23.079Z + +Content: +Home Cloud Deployment Manager Documentation Stay organized with collections Save and categorize content based on your preferences. Google Cloud Deployment Manager documentation View all product documentation Google Cloud Deployment Manager is an infrastructure deployment service that automates the creation and management of Google Cloud resources. Write flexible template and configuration files and use them to create deployments that have a variety of Google Cloud services, such as Cloud Storage, Compute Engine, and Cloud SQL, configured to work together. 
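Deployment Manager templates can be written in Python as well as Jinja. The sketch below is a minimal Python template that declares a single Cloud Storage bucket, with the bucket name taken from the deployment properties; the property names are illustrative, and the YAML configuration that imports this template and supplies those properties is not shown.

# simple_bucket.py - minimal Cloud Deployment Manager template (Python).
# A YAML configuration imports this file; Deployment Manager calls
# GenerateConfig and creates the resources it returns.
def GenerateConfig(context):
    """Return a single Cloud Storage bucket resource."""
    resources = [{
        "name": context.properties["bucketName"],  # supplied by the config (illustrative)
        "type": "storage.v1.bucket",
        "properties": {
            "location": context.properties.get("location", "US"),
        },
    }]
    return {"resources": resources}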
Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstart: Manage Google Cloud resources as a deployment Creating a Basic Template Installation and Setup Updating a Deployment Creating a Basic Configuration Using Images from Other Projects Using deployment-specific environment variables Understanding reusable templates Deleting deployments find_in_page Reference Example templates Supported resource types Supported Google Cloud type providers Syntax Reference Authorizing Requests Libraries REST API Runtime Configurator API gcloud reference info Resources Quotas and Limits Release Notes Billing questions Getting support Tips and troubleshooting Training Training and tutorials Networking in Google Cloud Learn about common network design patterns and automated deployment using Cloud Deployment Manager or Terraform. Learn more arrow_forward Training Training and tutorials Google Cloud Fundamentals for Azure Professionals This course introduces Azure professionals to the core capabilities of Google Cloud. Learn more arrow_forward Training Training and tutorials Google Cloud Fundamentals for AWS Professionals This course introduces AWS professionals to the core capabilities of Google Cloud. Learn more arrow_forward Use case Use cases Deploying a Slurm cluster on Compute Engine Shows how to deploy a Slurm cluster on Compute Engine Slurm Compute Engine Learn more arrow_forward Use case Use cases Migrating two-tier web applications to Google Cloud Introduces Google Cloud options to organizations who are conducting an internal assessment of moving a two-tier web application to the cloud. Compute Engine Migration Learn more arrow_forward Use case Use cases Migrating a MySQL Cluster to Compute Engine Using HAProxy Walks you through the process of migrating a MySQL database to the Google Cloud service Compute Engine using native MySQL replication and HAProxy. Migration MySQL Learn more arrow_forward Use case Use cases Policy Design for Startup Customers Shows how to design a set of policies that enable a hypothetical startup customer named OurStartupOrg to use Google Cloud. IAM Security Billing Learn more arrow_forward Code Samples Code Samples Deployment Manager examples This repository contains example templates for use with Deployment Manager. Open GitHub arrow_forward Code Samples Code Samples Example templates A comprehensive set of production-ready resource templates that follow Google's best practices. Open GitHub arrow_forward Related videos \ No newline at end of file diff --git a/Cloud_Endpoints.txt b/Cloud_Endpoints.txt new file mode 100644 index 0000000000000000000000000000000000000000..e10a594d211ed698b46b335f3663429028e70828 --- /dev/null +++ b/Cloud_Endpoints.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/endpoints/docs +Date Scraped: 2025-02-23T12:05:39.216Z + +Content: +Home Cloud Endpoints Documentation Send feedback Cloud Endpoints documentation Stay organized with collections Save and categorize content based on your preferences. Endpoints is an API management system that helps you secure, monitor, analyze, and set quotas on your APIs using the same infrastructure Google uses for its own APIs. 
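To give a flavor of what a managed API looks like in code, the sketch below uses the Endpoints Frameworks Python library (one of the Endpoints options described in the next section, for the App Engine standard environment) to expose a single echo method. The API name, version, and message shape are illustrative; deploying it also requires an app.yaml and the generated OpenAPI document, which are not shown.

# Minimal sketch of an API built with Endpoints Frameworks for the
# App Engine standard environment (Python). Names are illustrative.
import endpoints
from protorpc import messages, remote


class EchoMessage(messages.Message):
    content = messages.StringField(1)


@endpoints.api(name="echo", version="v1")
class EchoApi(remote.Service):

    @endpoints.method(EchoMessage, EchoMessage,
                      path="echo", http_method="POST", name="echo")
    def echo(self, request):
        # Return the request body unchanged.
        return EchoMessage(content=request.content)


# WSGI application served by App Engine.
api = endpoints.api_server([EchoApi])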
The Endpoints options To have your API managed by Cloud Endpoints, you have three options, depending on where your API is hosted and the type of communications protocol your API uses: Cloud Endpoints for OpenAPI Cloud Endpoints for gRPC Cloud Endpoints Frameworks for the App Engine standard environment For help on deciding which option is right for you, see Choosing an Endpoints option. What's next See Endpoints features in action by doing the Quickstart for Endpoints, which uses scripts to deploy a sample API to the App Engine flexible environment. Get familiar with the deployments steps by doing one of the tutorials for the Endpoints option that you have chosen: Endpoints for OpenAPI: tutorials Endpoints for gRPC: tutorials Endpoints Frameworks for the App Engine standard environment: tutorials Send feedback \ No newline at end of file diff --git a/Cloud_Foundation_Toolkit.txt b/Cloud_Foundation_Toolkit.txt new file mode 100644 index 0000000000000000000000000000000000000000..1428c5d7386cd2ec0be763e5eaae8480e1292217 --- /dev/null +++ b/Cloud_Foundation_Toolkit.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/docs/terraform/blueprints/terraform-blueprints +Date Scraped: 2025-02-23T12:06:46.801Z + +Content: +Home Documentation Terraform on Google Cloud Guides Send feedback Terraform blueprints and modules for Google Cloud Stay organized with collections Save and categorize content based on your preferences. Blueprints and modules help you automate provisioning and managing Google Cloud resources at scale. A module is a reusable set of Terraform configuration files that creates a logical abstraction of Terraform resources. A blueprint is a package of deployable, reusable modules and policy that implements and documents a specific opinionated solution. Deployable configuration for all Terraform blueprints are packaged as Terraform modules. 
Select a category Compute Containers Data analytics Databases Developer tools End-to-end Healthcare and life sciences Networking Operations Security and identity Serverless computing Storage Workspace Category Blueprints and modules Description End-to-end, Data analytics ai-notebook Demonstrates how to protect confidential data in Vertex AI Workbench notebooks Data analytics, End-to-end crmint Deploy the marketing analytics application, CRMint End-to-end, Operations enterprise-application Deploy an enterprise developer platform on Google Cloud End-to-end, Operations example-foundation Shows how the CFT modules can be composed to build a secure cloud foundation End-to-end fabric Provides advanced examples designed for prototyping Developer tools, End-to-end, Security and identity secure-cicd Builds a secure CI/CD pipeline on Google Cloud End-to-end, Data analytics secured-data-warehouse Deploys a secured BigQuery data warehouse Data analytics, End-to-end, Security and identity secured-data-warehouse-onprem-ingest Deploys a secured data warehouse variant for ingesting encrypted data from on-prem sources End-to-end vertex-mlops Create a Vertex AI environment needed for MLOps Networking address Manages Google Cloud IP addresses Databases alloy-db Creates an AlloyDB for PostgreSQL instance Data analytics analytics-lakehouse Deploys a Lakehouse Architecture Solution Compute anthos-vm Creates VMs on Google Distributed Cloud clusters Developer tools apphub Creates and manages App Hub resources Containers, Developer tools artifact-registry Create and manage Artifact Registry repositories Developer tools, Operations, Security and identity bastion-host Generates a bastion host VM compatible with OS Login and IAP tunneling that can be used to access internal VMs Compute, Operations backup-dr Deploy Backup and DR appliances Data analytics bigquery Creates opinionated BigQuery datasets and tables Data analytics bigtable Create and manage Google Bigtable resources Developer tools, Operations bootstrap Bootstraps Terraform usage and related CI/CD in a new Google Cloud organization Compute, Networking cloud-armor Deploy Google Cloud Armor security policy Databases cloud-datastore Manages Datastore Developer tools cloud-deploy Create Cloud Deploy pipelines and targets Networking cloud-dns Creates and manages Cloud DNS public or private zones and their records Serverless computing cloud-functions Deploys Cloud Run functions (Gen 2) Networking, Security and identity cloud-ids Deploys a Cloud IDS instance and associated resources Networking cloud-nat Creates and configures Cloud NAT Operations cloud-operations Manages Cloud Logging and Cloud Monitoring Networking cloud-router Manages a Cloud Router on Google Cloud Serverless computing cloud-run Deploys apps to Cloud Run, along with option to map custom domain Databases cloud-spanner Deploys Spanner instances Storage cloud-storage Creates one or more Cloud Storage buckets and assigns basic permissions on them to arbitrary users Developer tools, Serverless computing cloud-workflows Manage Workflows with optional Cloud Scheduler or Eventarc triggers End-to-end, Data analytics, Operations composer Manages Cloud Composer v1 and v2 along with option to manage networking Compute, Containers container-vm Deploys containers on Compute Engine instances Data analytics data-fusion Manages Cloud Data Fusion Data analytics dataflow Handles opinionated Dataflow job configuration and deployments Data analytics datalab Creates DataLab instances with support for GPU instances 
Data analytics dataplex-auto-data-quality Move data between environments using Dataplex Serverless computing event-function Responds to logging events with a Cloud Run functions Developer tools folders Creates several Google Cloud folders under the same parent Developer tools gcloud Executes Google Cloud CLI commands within Terraform Developer tools github-actions-runners Creates self-hosted GitHub Actions Runners on Google Cloud Developer tools gke-gitlab Installs GitLab on Kubernetes Engine Workspace group Manages Google Groups Operations, Workspace gsuite-export Creates a Compute Engine VM instance and sets up a cronjob to export Google Workspace Admin SDK data to Cloud Logging on a schedule Healthcare and life sciences healthcare Handles opinionated Google Cloud Healthcare datasets and stores Security and identity iam Manages multiple IAM roles for resources on Google Cloud Developer tools jenkins Creates a Compute Engine instance running Jenkins Security and identity kms Allows managing a keyring, zero or more keys in the keyring, and IAM role bindings on individual keys Compute, Containers kubernetes-engine Configures opinionated GKE clusters Networking lb Creates a regional TCP proxy load balancer for Compute Engine by using target pools and forwarding rules Networking lb-http Creates a global HTTP load balancer for Compute Engine by using forwarding rules Networking lb-internal Creates an internal load balancer for Compute Engine by using forwarding rules Networking load-balanced-vms Creates a managed instance group with a load balancer Data analytics log-analysis Stores and analyzes log data Operations log-export Creates log exports at the project, folder, or organization level Operations media-cdn-vod Deploys Media CDN video-on-demand Databases memorystore Creates a fully functional Google Memorystore (redis) instance Compute, Networking netapp-volumes Deploy Google Cloud NetApp Volumes Networking network Sets up a new VPC network on Google Cloud Networking network-forensics Deploys Zeek on Google Cloud Security and identity org-policy Manages Google Cloud organization policies Networking out-of-band-security-3P Creates a 3P out-of-band security appliance deployment Security and identity pam Deploy Privileged Access Manager Operations project-factory Creates an opinionated Google Cloud project by using Shared VPC, IAM, and Google Cloud APIs Data analytics Pub/Sub Creates Pub/Sub topic and subscriptions associated with the topic Compute sap Deploys SAP products Serverless computing scheduled-function Sets up a scheduled job to trigger events and run functions Security and identity secret-manager Creates one or more Google Secret Manager secrets and manages basic permissions for them Networking, Security and identity secure-web-proxy Create and manage Secure Web Proxy on Google Cloud for secured egress web traffic Security and identity service-accounts Creates one or more service accounts and grants them basic roles Operations slo Creates SLOs on Google Cloud from custom Stackdriver metrics capability to export SLOs to Google Cloud services and other systems Databases sql-db Creates a Cloud SQL database instance Compute startup-scripts Provides a library of useful startup scripts to embed in VMs Operations, Security and identity tags Create and manage Google Cloud Tags Developer tools, Operations, Security and identity tf-cloud-agents Creates self-hosted Terraform Cloud Agent on Google Cloud Databases, Serverless computing three-tier-web-app Deploys a three-tier web application 
using Cloud Run and Cloud SQL Operations utils Gets the short names for a given Google Cloud region Developer tools, Operations, Security and identity vault Deploys Vault on Compute Engine Compute vertex-ai Deploy Vertex AI resources Compute vm Provisions VMs in Google Cloud Networking vpc-service-controls Handles opinionated VPC Service Controls and Access Context Manager configuration and deployments Networking vpn Sets up a Cloud VPN gateway Operations waap Deploys the WAAP solution on Google Cloud Send feedback \ No newline at end of file diff --git a/Cloud_Functions.txt b/Cloud_Functions.txt new file mode 100644 index 0000000000000000000000000000000000000000..994c4ac146ace47728509f5978f30acc8e9526ba --- /dev/null +++ b/Cloud_Functions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/functions +Date Scraped: 2025-02-23T12:09:39.993Z + +Content: +Tech leaders: Get an insider view of Google Cloud’s App Dev and Infrastructure solutions on Oct 30 at 9 AM PT. Save your seat.Jump to Cloud Run functions Cloud Run functionsYou bring the code, we handle the rest by making it simple to build and easy to maintain your platform.New customers get $300 in free credits to spend on Cloud Run functions. All customers get two million monthly invocations free, not charged against your credits.Go to consoleContact salesDeploy Google-recommended solutions that use AI and Cloud Run functions to analyze and annotate images and summarize large documentsBuild and deploy your first cloud function using only your web browser with this quickstartServe users from zero to planet-scale without even thinking about any infrastructure See how customers design applications using event-driven architectures with Cloud Run functionsBLOGCloud Functions is now Cloud Run functionsKey featuresKey featuresSimplified developer experience and increased developer velocityCloud Run functions has a simple and intuitive developer experience. Just write your code and let Google Cloud handle the operational infrastructure. Develop faster by writing and running small code snippets that respond to events. Streamline challenging orchestration problems by connecting Google Cloud products to one another or third-party services using events. Pay only for what you useYou are only billed for your function’s execution time, metered to the nearest 100 milliseconds. You pay nothing when your function is idle. Cloud Run functions automatically spins up and backs down in response to events.Avoid lock-in with open technologyUse open source FaaS (function as a service) framework to run functions across multiple environments and prevent lock-in. Supported environments include Cloud Run, Cloud Run functions, local development environment, on-premises, and other Knative-based serverless environments.View all featuresCustomersLearn from customers using Cloud Run functionsCase studyCommerzbank: Building security into the foundations of cloud banking with Google Cloud3-min readCase studyTelecom Argentina speeds up technical incident resolution using Google Cloud tools5-min readCase studyHomeAway halves development time, lowers cost by 66%+ with Cloud Functions and Firestore5-min readCase studyLucille Games automated infrastructure management with Cloud Functions5-min readCase studySmart Parking built core infrastructure from scratch in under four months5-min readCase studySemios makes real-time prediction requests to deployed machine learning models5-min readSee all customersWhat's newWhat's newCloud Functions is now Cloud Run functions. 
You can write and deploy functions with Cloud Run, giving you complete control over the underlying service configuration.Blog postCloud Functions second gen is GA, delivering more events, compute, and controlRead the blogVideoRun lightweight functions in different environments with Functions FrameworkWatch videoVideoGannett uses Google’s serverless platform to reach next generation of readersRead the blogDocumentationDocumentationQuickstartBuild simple, single-purpose functionsLearn how to create and deploy single-purpose functions that are attached to events emitted from your cloud infrastructure and services. Learn moreQuickstartDeploy your first functionLearn how to set up your development environment, create a new function, specify dependencies, deploy and test your function, and view logs with our quickstarts.Learn moreTutorialInteract with Firebase using HTTP-triggered Cloud Run functionsLearn how to use an HTTP-triggered Cloud Run function to interact with the Firebase Realtime Database.Learn moreTutorial Trigger a function that does ML to extract text from imagesLearn how to use a Cloud Run function to extract text from images using Cloud Vision API.Learn moreTutorialDeveloping applications with Cloud Run functionsIn this course, you'll learn to implement single-purpose function code that responds to HTTP requests and events from your cloud infrastructure.Start courseNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud Run functionsUse casesUse casesUse caseIntegration with third-party services and APIsUse Cloud Run functions to surface your own microservices via HTTP APIs or integrate with third-party services that offer webhook integrations to quickly extend your application with powerful capabilities, such as sending a confirmation email after a successful Stripe payment or responding to Twilio text message events.Use case Serverless mobile backendsUse Cloud Run functions directly from Firebase to extend your application functionality without spinning up a server. Run your code in response to user actions, analytics, and authentication events to keep your users engaged with event-based notifications and offload CPU- and networking-intensive tasks to Google Cloud.Use case Serverless IoT backendsUse Cloud Run functions with Cloud IoT Core and other fully managed services to build backends for Internet of Things (IoT) device telemetry data collection, real-time processing, and analysis. Cloud Run functions allows you to apply custom logic to each event as it arrives.Use case Real-time file processingExecute your code in response to changes in data. 
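For the real-time file processing pattern above, a function can subscribe to Cloud Storage object-finalized events. The sketch below uses the open source Functions Framework for Python (assumed installed as the functions-framework package); the bucket and name fields follow the Cloud Storage event payload, and the actual processing step is left as a placeholder.

# Minimal sketch: event-driven function that reacts to new objects in a
# Cloud Storage bucket. Deployed with a Cloud Storage (object finalized)
# trigger; assumes pip install functions-framework.
import functions_framework


@functions_framework.cloud_event
def on_object_finalized(cloud_event):
    data = cloud_event.data
    bucket = data["bucket"]
    name = data["name"]
    # Placeholder: generate a thumbnail, validate content, transcode, etc.
    print(f"New object gs://{bucket}/{name} received")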
Cloud Run functions can respond to events from Google Cloud services, such as Cloud Storage, Pub/Sub, and Cloud Firestore to process files immediately after upload and generate thumbnails from image uploads, process logs, validate content, transcode videos, validate, aggregate, and filter data in real time.Use case Real-time stream processingUse Cloud Run functions to respond to events from Pub/Sub to process, transform, and enrich streaming data in transaction processing, click-stream analysis, application activity tracking, IoT device telemetry, social media analysis, and other types of applications.Use caseVirtual assistants and conversational experiencesEasily build artificial intelligence into your applications. Cloud Run functions with Cloud Speech API and Dialogflow can extend your products and services with voice- and text-based natural conversational experiences that help users get things done. Connect with users on Google Assistant, Amazon Alexa, Facebook Messenger, and other popular platforms and devices.Use case Video and image analysisUse Cloud Run functions with Video Intelligence API and Cloud Vision API to retrieve relevant information from videos and images, enabling you to search, discover, and derive insight from your media content.Use caseSentiment analysisUse Cloud Run functions in combination with Cloud Natural Language API to reveal the structure and meaning of text and add powerful sentiment analysis and intent extraction capabilities to your applications.View all technical guidesAll featuresAll featuresConnects and extends services to build complex applicationsCloud Run functions lets you treat all Google and third-party cloud services as building blocks. Connect and extend them with code, and rapidly move from concept to production with end-to-end solutions and complex workflows. Further, integrate with third-party services that offer webhook integrations to quickly extend your application with powerful capabilities.End-to-end development and diagnosabilityGo from code to deploy, with integrated monitoring. Get full observability and diagnosability for your application with Cloud Trace. Additionally, get support for local and disconnected development/debugging using open sourced functions framework.Develop locally, scale globallyServe users from zero to planet-scale without even thinking about any infrastructure. Cloud Run functions automatically manages and scales underlying infrastructure with the size of workload.No server managementDeploy your code and let Google run and scale it for you. Cloud Run functions abstracts away all the underlying infrastructure, so that you can focus on your code and build applications faster than ever before.Runs code in response to eventsCloud Run functions allows you to trigger your code from Google Cloud, Firebase, and Google Assistant, or call it directly from any web, mobile, or backend application via HTTP.Pay only for what you useYou are only billed for your function’s execution time, metered to the nearest 100 milliseconds. You pay nothing when your function is idle. Cloud Run functions automatically spins up and backs down in response to events.Avoid lock-in with open technologyUse open source FaaS (function as a service) framework to run functions across multiple environments and prevent lock-in. 
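Because the Functions Framework mentioned above is open source, the same handler can run locally, on Cloud Run functions, or on other Knative-based environments. A minimal HTTP-triggered sketch, again assuming the functions-framework package is installed:

# Minimal HTTP-triggered function. Run locally with:
#   functions-framework --target=hello_http
import functions_framework


@functions_framework.http
def hello_http(request):
    # "request" is a Flask request object; return any Flask-compatible response.
    name = request.args.get("name", "world")
    return f"Hello, {name}!"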
Supported environments include Cloud Run functions, Cloud Run, local development environment, on-premises, and other Knative-based serverless environments.PricingPricingCloud Run functions is priced according to how long your function runs, how many times it's invoked, and how many resources you provision for the function.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_GPUs.txt b/Cloud_GPUs.txt new file mode 100644 index 0000000000000000000000000000000000000000..abb47be8c4096b3d14bda19f5c7d857d28756016 --- /dev/null +++ b/Cloud_GPUs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/gpu +Date Scraped: 2025-02-23T12:02:36.515Z + +Content: +Announcing A3 Ultra VMs, powered by NVIDIA H200 GPUs, and Hypercompute Cluster. Learn more.Jump to Cloud GPUsCloud GPUsHigh-performance GPUs on Google Cloud for machine learning, scientific computing, and generative AI.Go to consoleSpeed up compute jobs like generative AI, 3D visualization, and HPCA wide selection of GPUs to match a range of performance and price pointsFlexible pricing and machine customizations to optimize for your workloadBLOGPowerful infrastructure innovations for your AI-first futureKey featuresKey featuresA range of GPU typesNVIDIA H200, H100, L4, P100, P4, T4, V100, and A100 GPUs provide a range of compute options to cover your workloads for a broad set of cost and performance needs.Flexible performanceOptimally balance the processor, memory, high performance disk, and up to 8 GPUs per instance for your individual workload. All with the per-second billing, so you only pay only for what you need while you are using it.All the benefits of Google CloudRun GPU workloads on Google Cloud where you have access to industry-leading storage, networking, and data analytics technologies.What's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postPowerful infrastructure innovations for your AI-first futureRead the blogBlog postAccelerate your generative AI journey with NVIDIA NeMo framework on GKERead the blogBlog postA3 supercomputers with NVIDIA H100 GPUs, purpose-built for AIRead the blogReportIntroducing G2 VMs with NVIDIA L4 GPUs — a cloud-industry firstLearn moreBlog postCheaper Cloud AI deployments with NVIDIA T4Read the blogDocumentationDocumentationGoogle Cloud Basics GPUs on Compute EngineCompute Engine provides GPUs that you can add to your virtual machine instances. 
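To illustrate how a GPU is attached to a Compute Engine instance programmatically, the sketch below builds an instance request with a guestAccelerators entry and submits it through the Compute Engine v1 API via the google-api-python-client library. The project, zone, image, and accelerator values are placeholders chosen for illustration; GPU VMs cannot live-migrate, so host maintenance is set to TERMINATE.

# Minimal sketch: create a Compute Engine VM with one NVIDIA T4 attached.
# Assumes: pip install google-api-python-client, ADC credentials, and a zone
# that offers the chosen accelerator type. All identifiers are placeholders.
from googleapiclient import discovery

PROJECT = "my-project"
ZONE = "us-central1-a"

def create_gpu_vm(name: str = "gpu-vm") -> dict:
    compute = discovery.build("compute", "v1")
    config = {
        "name": name,
        "machineType": f"zones/{ZONE}/machineTypes/n1-standard-4",
        "disks": [{
            "boot": True,
            "autoDelete": True,
            "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-12",
            },
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
        "guestAccelerators": [{
            "acceleratorType": f"zones/{ZONE}/acceleratorTypes/nvidia-tesla-t4",
            "acceleratorCount": 1,
        }],
        # GPU VMs cannot live-migrate, so maintenance must terminate the VM.
        "scheduling": {"onHostMaintenance": "TERMINATE", "automaticRestart": True},
    }
    return compute.instances().insert(project=PROJECT, zone=ZONE, body=config).execute()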
Learn what you can do with GPUs and what types of GPU hardware are available.Learn moreTutorial Adding or removing GPUs on Compute EngineLearn how to add or remove GPUs from a Compute Engine VM.Learn more TutorialInstalling GPUs driversThis guide shows ways to install NVIDIA proprietary drivers after you’ve created an instance with one or more GPUs.Learn moreTutorialGPUs on Google Kubernetes EngineLearn how to use GPU hardware accelerators in your Google Kubernetes Engine clusters’ nodes.Learn moreGoogle Cloud Basics Using GPUs for training models in the cloudAccelerate the training process for many deep learning models, like image classification, video analysis, and natural language processing.Learn moreGoogle Cloud Basics Attaching GPUs to Dataproc clustersAttach GPUs to the master and worker Compute Engine nodes in a Dataproc cluster to accelerate specific workloads, such as machine learning and data processing.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.PricingPricingFor information about GPU pricing for the different GPU types and regions that are available on Compute Engine, refer to the GPU pricing document.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Healthcare_API.txt b/Cloud_Healthcare_API.txt new file mode 100644 index 0000000000000000000000000000000000000000..04baa4d48b511da7008dba19e26fff75eb376f05 --- /dev/null +++ b/Cloud_Healthcare_API.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/healthcare-api +Date Scraped: 2025-02-23T12:05:02.897Z + +Content: +Learn how customers and partners are using Cloud Healthcare API to drive innovation within their organization.Cloud Healthcare APIUnlock your healthcare data to power insights and AI A secure, compliant, fully managed service for ingesting, transforming and storing healthcare data in FHIR, HL7v2, and DICOM formats, and unstructured text.Go to consoleContact salesNew customers get $300 in free credits to spend on Cloud Healthcare API.View documentation for this product.Product highlightsCloud Healthcare API services: data modalities and capabilitiesIntegrations with Google Cloud solutionsCustomers using Cloud Healthcare APIWhat is the Cloud Healthcare API?3:28FeaturesIntegration with prebuilt AI and machine learning toolsCloud Healthcare API allows you to unlock the true value of your healthcare data by enabling integration with advanced analytics and machine learning solutions such as BigQuery, AutoML, and Vertex AI.Managed scalabilityCloud Healthcare API provides web-native, serverless scaling optimized by Google’s infrastructure. Simply activate the API and start sending requests—no initial capacity configuration required. 
Although some limits exist (such as Pub/Sub quotas), capacity can expand to match usage patterns.Enhanced data liquidityCloud Healthcare API supports bulk import and export of FHIR data and DICOM data, accelerating time-to-delivery for solutions with dependencies on existing datasets, and providing a convenient API for moving data between projects.Developer friendlyCloud Healthcare API organizes your healthcare information into datasets with one or more modality-specific store per set. Each store exposes both a REST and RPC interface. You can use Identity and Access Management to set fine-grained access policies.View all featuresServicesAPIDescriptionFHIR APICreate, retrieve, store, and search clinical data in the FHIR formatHL7v2 API Ingest, create, store, and search streaming clinical data DICOM APIIngest, retrieve, and store medical imaging data Healthcare Natural Language APIExtract and standardize medical concepts and relationships from unstructured healthcare dataDe-identification APIDe-identify healthcare data to meet compliance requirements when sharing data for research and analytics MedLM APIBuild healthcare solutions using the best of Google's generative AI capabilities FHIR APIDescriptionCreate, retrieve, store, and search clinical data in the FHIR formatHL7v2 API DescriptionIngest, create, store, and search streaming clinical data DICOM APIDescriptionIngest, retrieve, and store medical imaging data Healthcare Natural Language APIDescriptionExtract and standardize medical concepts and relationships from unstructured healthcare dataDe-identification APIDescriptionDe-identify healthcare data to meet compliance requirements when sharing data for research and analytics MedLM APIDescriptionBuild healthcare solutions using the best of Google's generative AI capabilities How It WorksWith support for healthcare data standards such as HL7® FHIR®, HL7® v2, and DICOM®, the Cloud Healthcare API provides a fully managed, highly scalable, enterprise-grade development environment for building clinical and analytics solutions securely on Google Cloud.Try in consoleHealthcare API and FHIR best practicesCommon UsesStore, manage, and query FHIR dataImport existing FHIR data into Cloud Healthcare APIBuild your primary and secondary clinical data repositories using any of the major versions of the FHIR standard (R4, STU3, and DSTU2). Leverage the Cloud Healthcare API for highly performant and scalable data storage, and access to your FHIR data via fully managed REST APIs.Learn how to create and manage FHIR storesHealthcare Data Engine, powered by Cloud Healthcare APIAutomate transformations and reconciliation of various healthcare data formats into FHIR to create a longitudinal patient record and accelerate application development, analytics, and AI/ML workflows with Google Cloud's Healthcare Data Engine.Get started with Healthcare Data EngineTutorials, quickstarts, & labsImport existing FHIR data into Cloud Healthcare APIBuild your primary and secondary clinical data repositories using any of the major versions of the FHIR standard (R4, STU3, and DSTU2). 
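As a concrete starting point for the FHIR workflow above, the sketch below creates an R4 FHIR store inside an existing Cloud Healthcare API dataset using the discovery-based Python client. The project, location, dataset, and store IDs are placeholders, and the dataset is assumed to already exist.

# Minimal sketch: create an R4 FHIR store in an existing Healthcare API dataset.
# Assumes: pip install google-api-python-client, ADC credentials, and a dataset
# at projects/{project}/locations/{location}/datasets/{dataset}.
from googleapiclient import discovery

PROJECT, LOCATION = "my-project", "us-central1"
DATASET, FHIR_STORE = "my-dataset", "my-fhir-store"

def create_fhir_store() -> dict:
    healthcare = discovery.build("healthcare", "v1")
    parent = f"projects/{PROJECT}/locations/{LOCATION}/datasets/{DATASET}"
    body = {"version": "R4"}  # STU3 and DSTU2 are also supported, per this page
    return (
        healthcare.projects()
        .locations()
        .datasets()
        .fhirStores()
        .create(parent=parent, body=body, fhirStoreId=FHIR_STORE)
        .execute()
    )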
Leverage the Cloud Healthcare API for highly performant and scalable data storage, and access to your FHIR data via fully managed REST APIs.Learn how to create and manage FHIR storesPartners & integrationsHealthcare Data Engine, powered by Cloud Healthcare APIAutomate transformations and reconciliation of various healthcare data formats into FHIR to create a longitudinal patient record and accelerate application development, analytics, and AI/ML workflows with Google Cloud's Healthcare Data Engine.Get started with Healthcare Data EngineStore, manage, and view DICOM dataIngest, retrieve, and store medical imaging data Integrate your existing imaging devices and PACS solutions with the Cloud Healthcare API’s open source DICOMweb endpoint or import your data directly into the Cloud Healthcare API from Cloud Storage. Get started with DICOM in the Cloud Healthcare APIMedical Imaging Suite, powered by Cloud Healthcare APIAs a core component of Google Cloud's Medical Imaging Suite, Cloud Healthcare API allows easy and secure data exchange using the international DICOMweb standard for imaging, making imaging data accessible, interoperable, and useful.Get started with Medical Imaging SuiteTutorials, quickstarts, & labsIngest, retrieve, and store medical imaging data Integrate your existing imaging devices and PACS solutions with the Cloud Healthcare API’s open source DICOMweb endpoint or import your data directly into the Cloud Healthcare API from Cloud Storage. Get started with DICOM in the Cloud Healthcare APIPartners & integrationsMedical Imaging Suite, powered by Cloud Healthcare APIAs a core component of Google Cloud's Medical Imaging Suite, Cloud Healthcare API allows easy and secure data exchange using the international DICOMweb standard for imaging, making imaging data accessible, interoperable, and useful.Get started with Medical Imaging SuiteGain insights from unstructured textHealthcare Natural Language APIGain real-time analysis of insights stored in unstructured medical text. Healthcare Natural Language API allows you to distill machine-readable medical insights from medical documents, while AutoML Entity Extraction for Healthcare makes it simple to build custom knowledge extraction models for healthcare and life sciences apps—no coding skills required.Get started with Healthcare Natural Language APIClaims Acceleration Suite, powered by Cloud Healthcare APIHealthcare Natural Language API, a core component of Google Cloud’s Claims Acceleration Suite, extracts essential data elements from unstructured medical text to help streamline health insurance and prior authorization claims processing.Get started with Claims Acceleration SuiteTutorials, quickstarts, & labsHealthcare Natural Language APIGain real-time analysis of insights stored in unstructured medical text. 
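The Healthcare Natural Language API described here is invoked one text record at a time. The sketch below sends a single analyzeEntities request using google-auth and an authorized session; the project and location are placeholders, and the URL path and field names (documentContent in, entityMentions out) are assumptions based on the v1 REST surface rather than values taken from this page.

# Minimal sketch: extract medical entities from unstructured text with the
# Healthcare Natural Language API. Assumes: pip install google-auth requests,
# and ADC credentials; identifiers and field names are assumptions.
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT, LOCATION = "my-project", "us-central1"

def analyze(text: str) -> dict:
    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    session = AuthorizedSession(credentials)
    url = (
        "https://healthcare.googleapis.com/v1/"
        f"projects/{PROJECT}/locations/{LOCATION}/services/nlp:analyzeEntities"
    )
    response = session.post(url, json={"documentContent": text})
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze("Patient reports insulin-dependent diabetes mellitus.")
    for mention in result.get("entityMentions", []):
        print(mention.get("type"), "-", mention.get("text", {}).get("content"))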
Healthcare Natural Language API allows you to distill machine-readable medical insights from medical documents, while AutoML Entity Extraction for Healthcare makes it simple to build custom knowledge extraction models for healthcare and life sciences apps—no coding skills required.Get started with Healthcare Natural Language APIPartners & integrationsClaims Acceleration Suite, powered by Cloud Healthcare APIHealthcare Natural Language API, a core component of Google Cloud’s Claims Acceleration Suite, extracts essential data elements from unstructured medical text to help streamline health insurance and prior authorization claims processing.Get started with Claims Acceleration SuitePricingHow Cloud Healthcare API pricing worksCloud Healthcare API pricing is based on region, services, and usage. Prices for US regions are shown below.Services and usageSubscription typePrice (USD)Data storageStructured storageStorage volume is based on bytes of data ingested plus indexing overhead (as measured by indexed bytes) and backup bytes.Ranges from$0.19-$0.39Per GB per month. The first GB is free.Blob storageBlob storage prices are based on the amount of non-structured or BLOB bytes that are ingested and stored.Ranges from$0.003-$0.026Per GB per month. The first GB is free.Request volumeStandard requestsDefault for all requests.Starting at$0.39Per 100,000 requests per month. The first 25,000 requests are free.Complex requestsCaptures API requests that are computationally intense compared with standard requests.Starting at$0.69Per 100,000 requests per month. The first 25,000 requests are free.Notification volumeStandard notificationsNotifications are streaming events that originate from data stores and are sent to Google Cloud or external endpoints.$0.29Per 1 million notifications per month. The first 100,000 notifications (per 1 million) are free.ETL operationsExport batchRanges from$0.09-$0.19Per GB per month.Export streamingRanges from$0.24-$0.34Per GB per month.De-identification operationsInspectionOccurs on free text or images to discover instances of sensitive data.Ranges from$0.10-$0.30Per giga unit (GU). The first GU is free.TransformationEncompasses redaction, replacement, hashing, or changes made to sensitive data as part of the de-identification process.Ranges from$1-$3Per GU. The first GU is free.ProcessingCovers the base cost of the operation. The charges vary for structured storage and blob storage.Ranges from$0.05-$0.60Per GB. The first GB is free.Healthcare Natural Language APIEntity analysisUsage of the Healthcare Natural Language API is calculated in terms of text record monthly volume. A text record contains 1,000 characters.Ranges from$0.03-$0.10Per 1 text record. The first 2,500 text records are free.View full pricing details by region, services, and usage.How Cloud Healthcare API pricing worksCloud Healthcare API pricing is based on region, services, and usage. Prices for US regions are shown below.Data storageSubscription typeStructured storageStorage volume is based on bytes of data ingested plus indexing overhead (as measured by indexed bytes) and backup bytes.Price (USD)Ranges from$0.19-$0.39Per GB per month. The first GB is free.Blob storageBlob storage prices are based on the amount of non-structured or BLOB bytes that are ingested and stored.Subscription typeRanges from$0.003-$0.026Per GB per month. The first GB is free.Request volumeSubscription typeStandard requestsDefault for all requests.Price (USD)Starting at$0.39Per 100,000 requests per month. 
The first 25,000 requests are free.Complex requestsCaptures API requests that are computationally intense compared with standard requests.Subscription typeStarting at$0.69Per 100,000 requests per month. The first 25,000 requests are free.Notification volumeSubscription typeStandard notificationsNotifications are streaming events that originate from data stores and are sent to Google Cloud or external endpoints.Price (USD)$0.29Per 1 million notifications per month. The first 100,000 notifications (per 1 million) are free.ETL operationsSubscription typeExport batchPrice (USD)Ranges from$0.09-$0.19Per GB per month.Export streamingSubscription typeRanges from$0.24-$0.34Per GB per month.De-identification operationsSubscription typeInspectionOccurs on free text or images to discover instances of sensitive data.Price (USD)Ranges from$0.10-$0.30Per giga unit (GU). The first GU is free.TransformationEncompasses redaction, replacement, hashing, or changes made to sensitive data as part of the de-identification process.Subscription typeRanges from$1-$3Per GU. The first GU is free.ProcessingCovers the base cost of the operation. The charges vary for structured storage and blob storage.Subscription typeRanges from$0.05-$0.60Per GB. The first GB is free.Healthcare Natural Language APISubscription typeEntity analysisUsage of the Healthcare Natural Language API is calculated in terms of text record monthly volume. A text record contains 1,000 characters.Price (USD)Ranges from$0.03-$0.10Per 1 text record. The first 2,500 text records are free.View full pricing details by region, services, and usage.Pricing calculatorEstimate your monthly costs, including region specific pricing and fees.Estimate your costsRequest a custom quoteConnect with our sales team to get a custom quote for your organization.Contact salesUnlock the power of your healthcare dataTry Cloud Healthcare API in consoleGo to consoleGet a quick intro to Cloud Healthcare APIView guideStream and synchronize FHIR stores with BigQueryGet startedExplore public imaging datasetsUse with your applicationTry the Healthcare Natural Language API demoRun demoBusiness CaseExplore how healthcare and life sciences organizations are using Cloud Healthcare APIVirta uses Cloud Healthcare API to establish an interoperable data platform and information layerRead customer storyRelated ContentCue Health simplifies secure access for patients and caregiversApollo 24|7 uses Healthcare NLP API to derive insights from medical textBuild longitudinal data pipelines with Cloud Healthcare API and ReltioFeatured customersPartners & IntegrationRecommended partnersTechnology partnersDelivery partnersSee all partnersGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_IAM(1).txt b/Cloud_IAM(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..fdd31f73a7e452e8d13da0102a9f7e512bb731d8 --- /dev/null +++ b/Cloud_IAM(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/iam +Date Scraped: 2025-02-23T12:09:00.957Z + +Content: +Stay organized with collections Save and categorize content based on your preferences. Forrester names Google a Leader in The Forrester Wave™: Infrastructure as a Service (IaaS) Platform Native Security Q2 2023. Access the report. Enterprise-grade access control Identity and Access Management (IAM) lets administrators authorize who can take action on specific resources, giving you full control and visibility to manage Google Cloud resources centrally. 
For enterprises with complex organizational structures, hundreds of workgroups, and many projects, IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes. Simplicity first We recognize that an organization’s internal structure and policies can get complex fast. Projects, workgroups, and managing who has authorization to do what all change dynamically. IAM is designed with simplicity in mind: a clean, universal interface lets you manage access control across all Google Cloud resources consistently. So you learn it once, then apply everywhere. The right roles IAM provides tools to manage resource permissions with minimum fuss and high automation. Map job functions within your company to groups and roles. Users get access only to what they need to get the job done, and admins can easily grant default permissions to entire groups of users. Smart access control Permissions management can be a time-consuming task. Recommender helps admins remove unwanted access to Google Cloud resources by using machine learning to make smart access control recommendations. With Recommender, security teams can automatically detect overly permissive access and rightsize them based on similar users in the organization and their access patterns. Get granular with context-aware access IAM enables you to grant access to cloud resources at fine-grained levels, well beyond project-level access. Create more granular access control policies to resources based on attributes like device security status, IP address, resource type, and date/time. These policies help ensure that the appropriate security controls are in place when granting access to cloud resources. Streamline compliance with a built-in audit trail A full audit trail history of permissions authorization, removal, and delegation gets surfaced automatically for your admins. IAM lets you focus on business policies around your resources and makes compliance easy. Enterprise identity made easy Leverage Cloud Identity, Google Cloud’s built-in managed identity to easily create or sync user accounts across applications and projects. It's easy to provision and manage users and groups, set up single sign-on, and configure two-factor authentication (2FA) directly from the Google Admin Console. You also get access to the Google Cloud Organization, which enables you to centrally manage projects using Resource Manager. Workforce Identity Federation Workforce Identity Federation lets you use an external identity provider (IdP) to authenticate and authorize a workforce—a group of users, such as employees, partners, and contractors—using IAM, so that the users can access Google Cloud services. Workforce Identity Federation uses an identity federation approach instead of directory synchronization, eliminating the need to maintain separate identities across multiple platforms. Organization Policies Organization Policies provides security guardrails to enforce which resource configurations are allowed or denied to help you address your cloud governance requirements. Organization policy service gives you centralized control over your cloud resources and lets you create granular resource policies to help you meet your security and compliance goals. Features Single access control interface IAM provides a simple and consistent access control interface for all Google Cloud services. Learn one access control interface and apply that knowledge to all Google Cloud resources. 
Fine-grained control Grant access to users at a resource level of granularity, rather than just project level. For example, you can create an IAM access control policy that grants the Subscriber role to a user for a particular Pub/Sub topic. Automated access control recommendations Remove unwanted access to Google Cloud resources with smart access control recommendations. Using Recommender, you can automatically detect overly permissive access and rightsize them based on similar users in the organization and their access patterns. Context-aware access Control access to resources based on contextual attributes like device security status, IP address, resource type, and date/time. Flexible roles Prior to IAM, you could only grant Owner, Editor, or Viewer roles to users. A wide range of services and resources now surface additional IAM roles out of the box. For example, the Pub/Sub service exposes Publisher and Subscriber roles in addition to the Owner, Editor, and Viewer roles. Web, programmatic, and command-line access Create and manage IAM policies using the Google Cloud Console, the IAM methods, and the gcloud command line tool. Built-in audit trail To ease compliance processes for your organization, a full audit trail is made available to admins without any additional effort. Support for Cloud Identity IAM supports standard Google Accounts. Create IAM policies granting permission to a Google group, a Google-hosted domain, a service account, or specific Google Account holders using Cloud Identity. Centrally manage users and groups through the Google Admin Console. Free of charge IAM is offered at no additional charge for all Google Cloud customers. You will be charged only for use of other Google Cloud services. For information on the pricing of other Google Cloud services, see the Google Cloud Pricing Calculator. "IAM will give Snapchat the ability to grant fine-grained access control to resources within a project. This allows us to compartmentalize access based on workgroups and to manage sensitive resources around individual access needs." Subhash Sankuratripati, Security Engineer, Snapchat Technical resources IAM concepts View documentation IAM how-to guides View documentation IAM quickstart View quickstart IAM client library quickstart tutorial View tutorial Introduction to IAM video Watch video Next '23: Full-stack Identity and Access Management (IAM) Watch video Pricing IAM is available to you at no additional charge. Take the next step Start building on Google Cloud with $300 in free credits and 20+ always free products. Try it free Need help getting started? Contact sales Work with a trusted partner Find a partner Continue browsing See all products Take the next step Start your next project, explore interactive tutorials, and manage your account. Go to console Need help getting started? Contact sales Work with a trusted partner Find a partner Get tips & best practices See tutorials A product or feature listed on this page is in beta. Learn more about product launch stages. \ No newline at end of file diff --git a/Cloud_IAM.txt b/Cloud_IAM.txt new file mode 100644 index 0000000000000000000000000000000000000000..68b7da1bb7581b3e37a7ad2042bb2b738e2d7e41 --- /dev/null +++ b/Cloud_IAM.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/iam +Date Scraped: 2025-02-23T12:05:40.979Z + +Content: +Stay organized with collections Save and categorize content based on your preferences. 
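The Recommender-driven cleanup of overly permissive access described in the IAM overview can also be scripted. The minimal sketch below assumes the google-cloud-recommender Python client library and a placeholder project ID; it simply lists the IAM policy recommendations that Recommender has generated.

# Minimal sketch: list IAM policy recommendations surfaced by Recommender,
# as described above ("detect overly permissive access"). Assumes the
# google-cloud-recommender client library and a placeholder project ID.
from google.cloud import recommender_v1

PROJECT_ID = "my-project"  # placeholder
parent = (
    f"projects/{PROJECT_ID}/locations/global/"
    "recommenders/google.iam.policy.Recommender"
)

client = recommender_v1.RecommenderClient()
for recommendation in client.list_recommendations(parent=parent):
    print(recommendation.name)
    print(recommendation.description)
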
Forrester names Google a Leader in The Forrester Wave™: Infrastructure as a Service (IaaS) Platform Native Security Q2 2023. Access the report. Enterprise-grade access control Identity and Access Management (IAM) lets administrators authorize who can take action on specific resources, giving you full control and visibility to manage Google Cloud resources centrally. For enterprises with complex organizational structures, hundreds of workgroups, and many projects, IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes. Simplicity first We recognize that an organization’s internal structure and policies can get complex fast. Projects, workgroups, and managing who has authorization to do what all change dynamically. IAM is designed with simplicity in mind: a clean, universal interface lets you manage access control across all Google Cloud resources consistently. So you learn it once, then apply everywhere. The right roles IAM provides tools to manage resource permissions with minimum fuss and high automation. Map job functions within your company to groups and roles. Users get access only to what they need to get the job done, and admins can easily grant default permissions to entire groups of users. Smart access control Permissions management can be a time-consuming task. Recommender helps admins remove unwanted access to Google Cloud resources by using machine learning to make smart access control recommendations. With Recommender, security teams can automatically detect overly permissive access and rightsize them based on similar users in the organization and their access patterns. Get granular with context-aware access IAM enables you to grant access to cloud resources at fine-grained levels, well beyond project-level access. Create more granular access control policies to resources based on attributes like device security status, IP address, resource type, and date/time. These policies help ensure that the appropriate security controls are in place when granting access to cloud resources. Streamline compliance with a built-in audit trail A full audit trail history of permissions authorization, removal, and delegation gets surfaced automatically for your admins. IAM lets you focus on business policies around your resources and makes compliance easy. Enterprise identity made easy Leverage Cloud Identity, Google Cloud’s built-in managed identity to easily create or sync user accounts across applications and projects. It's easy to provision and manage users and groups, set up single sign-on, and configure two-factor authentication (2FA) directly from the Google Admin Console. You also get access to the Google Cloud Organization, which enables you to centrally manage projects using Resource Manager. Workforce Identity Federation Workforce Identity Federation lets you use an external identity provider (IdP) to authenticate and authorize a workforce—a group of users, such as employees, partners, and contractors—using IAM, so that the users can access Google Cloud services. Workforce Identity Federation uses an identity federation approach instead of directory synchronization, eliminating the need to maintain separate identities across multiple platforms. Organization Policies Organization Policies provides security guardrails to enforce which resource configurations are allowed or denied to help you address your cloud governance requirements. 
Organization policy service gives you centralized control over your cloud resources and lets you create granular resource policies to help you meet your security and compliance goals. Features Single access control interface IAM provides a simple and consistent access control interface for all Google Cloud services. Learn one access control interface and apply that knowledge to all Google Cloud resources. Fine-grained control Grant access to users at a resource level of granularity, rather than just project level. For example, you can create an IAM access control policy that grants the Subscriber role to a user for a particular Pub/Sub topic. Automated access control recommendations Remove unwanted access to Google Cloud resources with smart access control recommendations. Using Recommender, you can automatically detect overly permissive access and rightsize them based on similar users in the organization and their access patterns. Context-aware access Control access to resources based on contextual attributes like device security status, IP address, resource type, and date/time. Flexible roles Prior to IAM, you could only grant Owner, Editor, or Viewer roles to users. A wide range of services and resources now surface additional IAM roles out of the box. For example, the Pub/Sub service exposes Publisher and Subscriber roles in addition to the Owner, Editor, and Viewer roles. Web, programmatic, and command-line access Create and manage IAM policies using the Google Cloud Console, the IAM methods, and the gcloud command line tool. Built-in audit trail To ease compliance processes for your organization, a full audit trail is made available to admins without any additional effort. Support for Cloud Identity IAM supports standard Google Accounts. Create IAM policies granting permission to a Google group, a Google-hosted domain, a service account, or specific Google Account holders using Cloud Identity. Centrally manage users and groups through the Google Admin Console. Free of charge IAM is offered at no additional charge for all Google Cloud customers. You will be charged only for use of other Google Cloud services. For information on the pricing of other Google Cloud services, see the Google Cloud Pricing Calculator. "IAM will give Snapchat the ability to grant fine-grained access control to resources within a project. This allows us to compartmentalize access based on workgroups and to manage sensitive resources around individual access needs." Subhash Sankuratripati, Security Engineer, Snapchat Technical resources IAM concepts View documentation IAM how-to guides View documentation IAM quickstart View quickstart IAM client library quickstart tutorial View tutorial Introduction to IAM video Watch video Next '23: Full-stack Identity and Access Management (IAM) Watch video Pricing IAM is available to you at no additional charge. Take the next step Start building on Google Cloud with $300 in free credits and 20+ always free products. Try it free Need help getting started? Contact sales Work with a trusted partner Find a partner Continue browsing See all products Take the next step Start your next project, explore interactive tutorials, and manage your account. Go to console Need help getting started? Contact sales Work with a trusted partner Find a partner Get tips & best practices See tutorials A product or feature listed on this page is in beta. Learn more about product launch stages. 
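To make the fine-grained control example above concrete (granting the Subscriber role on a single Pub/Sub topic), a minimal sketch with the google-cloud-pubsub client library might look like the following; the project, topic, and member values are placeholders.

# Minimal sketch of the fine-grained example above: grant roles/pubsub.subscriber
# on a single Pub/Sub topic. Project, topic, and member values are placeholders;
# assumes the google-cloud-pubsub client library.
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"
TOPIC_ID = "my-topic"
MEMBER = "user:alice@example.com"  # hypothetical principal

client = pubsub_v1.PublisherClient()
topic_path = client.topic_path(PROJECT_ID, TOPIC_ID)

policy = client.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(role="roles/pubsub.subscriber", members=[MEMBER])
policy = client.set_iam_policy(request={"resource": topic_path, "policy": policy})

for binding in policy.bindings:
    print(binding.role, list(binding.members))

A similar get, modify, and set pattern applies to IAM policies on many other resource types, which is one practical payoff of the single access control interface described above.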
\ No newline at end of file diff --git a/Cloud_Identity(1).txt b/Cloud_Identity(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..35bccfc87f694f3859d0948ecd84a9c143b7f7da --- /dev/null +++ b/Cloud_Identity(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/identity +Date Scraped: 2025-02-23T12:08:43.282Z + +Content: +Forrester names Google a Leader in The Forrester Wave™: Infrastructure as a Service (IaaS) Platform Native Security Q2 2023. Access the report.Jump to Cloud IdentityCloud IdentityA unified identity, access, app, and endpoint management (IAM/EMM) platform.Contact usTry Cloud Identity PremiumGive users easy access to apps with single sign-onMulti-factor authentication protects user and company dataEndpoint management enforces policies for personal and corporate devicesGoogle named a 2021 Gartner Peer Insights Customers’ Choice for Unified Endpoint Management (UEM)Read Endpoint Management reviewsBenefitsGoogle-grade securityDefend your organization with BeyondCorp and Google’s threat intelligence signals. Control access to SaaS apps, enforce multi-factor authentication (MFA), manage endpoints, and investigate threats. User and IT simplicityEfficiently enable intuitive user experiences on endpoint devices, and unify user, access, app, and endpoint management with a single console. Enable access to thousands of apps with single sign-on (SSO). Meeting you where you areExtend your on-premises directory to the cloud with Directory Sync, enable access to traditional apps and infrastructure with secure LDAP, and automatically synchronize user information with HR systems.Key featuresModernize IT and strengthen securityMulti-factor authentication (MFA)Help protect your user accounts and company data with a wide variety of MFA verification methods such as push notifications, Google Authenticator, phishing-resistant Titan Security Keys, and using your Android or iOS device as a security key.Endpoint managementImprove your company’s device security posture on Android, iOS, and Windows devices using a unified console. Set up devices in minutes and keep your company data more secure with endpoint management. Enforce security policies, wipe company data, deploy apps, view reports, and export details.Single sign-on (SSO)Enable employees to work from virtually anywhere, on any device, with single sign-on to thousands of pre-integrated apps, both in the cloud and on-premises. Works with your favorite appsCloud Identity integrates with hundreds of cloud applications out of the box—and we’re constantly adding more to the list so you can count on us to be your single identity platform today and in the future. 
See current list.View all featuresNewsGoogle earns position on Constellation ShortList™ for Cloud Identity ManagementWhat's newSee the latest updates about Cloud IdentitySign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoSimplify access to LDAP apps and infrastructure with Cloud IdentityWatch videoVideoUnifying user, app, and endpoint management with Cloud IdentityWatch videoReportAirAsia: adopting a modern identity solutionLearn moreVideoUnify identity, device, and app management with Cloud IdentityWatch videoReportVoyage to the modern workplace with Cloud IdentityLearn moreVideoReduce AD Dependency with Cloud Identity and Secure LDAPWatch videoDocumentationFind resources and documentation for Cloud IdentityTutorialActive Directory user account provisioningHow to set up user and group provisioning between Active Directory and your Cloud Identity or Google Workspace account by using Google Cloud Directory Sync (GCDS).Learn moreGoogle Cloud BasicsCloud Identity one-pagerLearn the basics of Cloud Identity: A simple, secure, and flexible approach to identity and endpoint management.Learn moreTutorialSign up for Cloud Identity from the Google Cloud consoleHow to sign up for Cloud Identity through the Google Cloud console.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for Cloud IdentityAll featuresLearn more about Cloud Identity featuresAccount security and MFAHelp to protect users from phishing attacks with Google’s intelligence and threat signals and multi-factor authentication (MFA), including push notifications, Google Authenticator, phishing-resistant Titan Security Keys, and using your Android or iOS device as a security key.Device security with endpoint managementImprove your company’s device security posture on Android, iOS, and Windows devices using a unified console. Set up devices in minutes and keep your company data more secure with endpoint management. Enforce security policies, wipe company data, deploy apps, view reports, and export details.Easy app access with SSOEnable employees to work from virtually anywhere, on any device, with single sign-on (SSO) to thousands of SaaS apps, including Salesforce, SAP SuccessFactors, Google Workspace, and more. Works with your favorite appsCloud Identity integrates with hundreds of cloud applications out of the box—and we’re constantly adding more to the list so you can count on us to be your single identity platform today and in the future. 
See current list.Digital workspaceEnable employees to set up quickly with a digital workspace—sign in once and access 5000+ apps, including pre-integrated SAML 2.0 and OpenID Connect (OIDC) apps, custom apps, and on-premises apps.Unified management consoleUse a single admin console to manage user, access, app, and device policies, monitor your security and compliance posture with reporting and auditing capabilities, and investigate threats with Security Center.Automated user provisioningReduce administrative overhead involved in managing your users in individual third-party cloud apps by automating user provisioning to create, update, or delete user profile information in one place and have it reflected in your cloud apps.Hybrid identity managementIncrease the ROI of your existing investments by extending your Microsoft Active Directory (AD) users to the cloud with Directory Sync and enabling simpler user access to traditional apps and infrastructure with secure LDAP.Context-aware accessA core component of Google’s BeyondCorp security model, context-aware access enables you to enforce granular and dynamic access controls based on a user’s identity and the context of the access request, without the need for a traditional VPN.Account takeover protectionStrengthen user security with Google’s automatic multilayered hijacking protection. Detect anomalous login behavior and present users with additional challenges to prevent account takeovers.Technical supportGet help when issues arise with 24/7 support from a real person. Phone, email, and chat support is available in 14 languages, included with your Cloud Identity subscription. Advanced Protection ProgramA constantly evolving and easy-to-use bundle of Google’s strongest account security settings, ensuring that your most at-risk users always have the strongest possible protection.Bring your own device (BYOD) supportEndpoint management supports and enables BYOD, making it easy to keep your company data safer while letting employees use their favorite personal devices to get work done. Quick and easy endpoint management deploymentAs soon as your employee’s device gets enrolled in endpoint management, all Wi-Fi and email configurations including server-side certificates get pushed to the device instantly.No agent requiredAgentless setup for basic device management offers wipe and inventory controls for all devices in your fleet, with no user setup or disruption. User-friendly MFA methodsCloud Identity supports a variety of MFA methods—hardware security keys, phone as a security key, mobile device push notifications, SMS, and voice calls—meaning you can choose the right option for your employees.Rich MFA auditing and reportingMonitor employee usage, set alerts, and examine potential risks via detailed reports and audit logs.Easy access to on-premises appsWith secure LDAP, users can securely access traditional LDAP-based apps and infrastructure, using their Cloud Identity credentials.Automate life cycle managementProvision and deprovision users in real time from a unified admin console.PricingCloud Identity pricing detailsCloud Identity is $7.2/mo per user. Try Cloud Identity Premium or learn more about Cloud Identity features and editions pricing.Gartner, Gartner Peer Insights ‘Voice of the Customer’: Unified Endpoint Management, Peer Contributors, 5 January 2021. The GARTNER PEER INSIGHTS CUSTOMERS’ CHOICE badge is a trademark and service mark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved. 
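The automated provisioning and lifecycle management described above is typically driven from the Admin console, but it can also be exercised programmatically. The sketch below is one possible read-only example using the Admin SDK Directory API; it assumes credentials that are authorized for the directory readonly scope (for example, a delegated administrator), and the values shown are placeholders.

# Minimal sketch: list Cloud Identity user accounts through the Admin SDK
# Directory API, one programmatic counterpart to the provisioning and
# lifecycle-management features above. Assumes administrator credentials
# authorized for the directory readonly scope; values are placeholders.
import google.auth
from googleapiclient import discovery

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.readonly"]
credentials, _ = google.auth.default(scopes=SCOPES)

service = discovery.build("admin", "directory_v1", credentials=credentials)
results = (
    service.users()
    .list(customer="my_customer", maxResults=10, orderBy="email")
    .execute()
)
for user in results.get("users", []):
    print(user["primaryEmail"], user["name"]["fullName"])
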
Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Contact usNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Identity.txt b/Cloud_Identity.txt new file mode 100644 index 0000000000000000000000000000000000000000..af69ff8196ce470842f8d2b417cfb2378f32e0cf --- /dev/null +++ b/Cloud_Identity.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/identity +Date Scraped: 2025-02-23T12:00:40.919Z + +Content: +Forrester names Google a Leader in The Forrester Wave™: Infrastructure as a Service (IaaS) Platform Native Security Q2 2023. Access the report.Jump to Cloud IdentityCloud IdentityA unified identity, access, app, and endpoint management (IAM/EMM) platform.Contact usTry Cloud Identity PremiumGive users easy access to apps with single sign-onMulti-factor authentication protects user and company dataEndpoint management enforces policies for personal and corporate devicesGoogle named a 2021 Gartner Peer Insights Customers’ Choice for Unified Endpoint Management (UEM)Read Endpoint Management reviewsBenefitsGoogle-grade securityDefend your organization with BeyondCorp and Google’s threat intelligence signals. Control access to SaaS apps, enforce multi-factor authentication (MFA), manage endpoints, and investigate threats. User and IT simplicityEfficiently enable intuitive user experiences on endpoint devices, and unify user, access, app, and endpoint management with a single console. Enable access to thousands of apps with single sign-on (SSO). Meeting you where you areExtend your on-premises directory to the cloud with Directory Sync, enable access to traditional apps and infrastructure with secure LDAP, and automatically synchronize user information with HR systems.Key featuresModernize IT and strengthen securityMulti-factor authentication (MFA)Help protect your user accounts and company data with a wide variety of MFA verification methods such as push notifications, Google Authenticator, phishing-resistant Titan Security Keys, and using your Android or iOS device as a security key.Endpoint managementImprove your company’s device security posture on Android, iOS, and Windows devices using a unified console. Set up devices in minutes and keep your company data more secure with endpoint management. Enforce security policies, wipe company data, deploy apps, view reports, and export details.Single sign-on (SSO)Enable employees to work from virtually anywhere, on any device, with single sign-on to thousands of pre-integrated apps, both in the cloud and on-premises. Works with your favorite appsCloud Identity integrates with hundreds of cloud applications out of the box—and we’re constantly adding more to the list so you can count on us to be your single identity platform today and in the future. 
See current list.View all featuresNewsGoogle earns position on Constellation ShortList™ for Cloud Identity ManagementWhat's newSee the latest updates about Cloud IdentitySign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoSimplify access to LDAP apps and infrastructure with Cloud IdentityWatch videoVideoUnifying user, app, and endpoint management with Cloud IdentityWatch videoReportAirAsia: adopting a modern identity solutionLearn moreVideoUnify identity, device, and app management with Cloud IdentityWatch videoReportVoyage to the modern workplace with Cloud IdentityLearn moreVideoReduce AD Dependency with Cloud Identity and Secure LDAPWatch videoDocumentationFind resources and documentation for Cloud IdentityTutorialActive Directory user account provisioningHow to set up user and group provisioning between Active Directory and your Cloud Identity or Google Workspace account by using Google Cloud Directory Sync (GCDS).Learn moreGoogle Cloud BasicsCloud Identity one-pagerLearn the basics of Cloud Identity: A simple, secure, and flexible approach to identity and endpoint management.Learn moreTutorialSign up for Cloud Identity from the Google Cloud consoleHow to sign up for Cloud Identity through the Google Cloud console.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for Cloud IdentityAll featuresLearn more about Cloud Identity featuresAccount security and MFAHelp to protect users from phishing attacks with Google’s intelligence and threat signals and multi-factor authentication (MFA), including push notifications, Google Authenticator, phishing-resistant Titan Security Keys, and using your Android or iOS device as a security key.Device security with endpoint managementImprove your company’s device security posture on Android, iOS, and Windows devices using a unified console. Set up devices in minutes and keep your company data more secure with endpoint management. Enforce security policies, wipe company data, deploy apps, view reports, and export details.Easy app access with SSOEnable employees to work from virtually anywhere, on any device, with single sign-on (SSO) to thousands of SaaS apps, including Salesforce, SAP SuccessFactors, Google Workspace, and more. Works with your favorite appsCloud Identity integrates with hundreds of cloud applications out of the box—and we’re constantly adding more to the list so you can count on us to be your single identity platform today and in the future. 
See current list.Digital workspaceEnable employees to set up quickly with a digital workspace—sign in once and access 5000+ apps, including pre-integrated SAML 2.0 and OpenID Connect (OIDC) apps, custom apps, and on-premises apps.Unified management consoleUse a single admin console to manage user, access, app, and device policies, monitor your security and compliance posture with reporting and auditing capabilities, and investigate threats with Security Center.Automated user provisioningReduce administrative overhead involved in managing your users in individual third-party cloud apps by automating user provisioning to create, update, or delete user profile information in one place and have it reflected in your cloud apps.Hybrid identity managementIncrease the ROI of your existing investments by extending your Microsoft Active Directory (AD) users to the cloud with Directory Sync and enabling simpler user access to traditional apps and infrastructure with secure LDAP.Context-aware accessA core component of Google’s BeyondCorp security model, context-aware access enables you to enforce granular and dynamic access controls based on a user’s identity and the context of the access request, without the need for a traditional VPN.Account takeover protectionStrengthen user security with Google’s automatic multilayered hijacking protection. Detect anomalous login behavior and present users with additional challenges to prevent account takeovers.Technical supportGet help when issues arise with 24/7 support from a real person. Phone, email, and chat support is available in 14 languages, included with your Cloud Identity subscription. Advanced Protection ProgramA constantly evolving and easy-to-use bundle of Google’s strongest account security settings, ensuring that your most at-risk users always have the strongest possible protection.Bring your own device (BYOD) supportEndpoint management supports and enables BYOD, making it easy to keep your company data safer while letting employees use their favorite personal devices to get work done. Quick and easy endpoint management deploymentAs soon as your employee’s device gets enrolled in endpoint management, all Wi-Fi and email configurations including server-side certificates get pushed to the device instantly.No agent requiredAgentless setup for basic device management offers wipe and inventory controls for all devices in your fleet, with no user setup or disruption. User-friendly MFA methodsCloud Identity supports a variety of MFA methods—hardware security keys, phone as a security key, mobile device push notifications, SMS, and voice calls—meaning you can choose the right option for your employees.Rich MFA auditing and reportingMonitor employee usage, set alerts, and examine potential risks via detailed reports and audit logs.Easy access to on-premises appsWith secure LDAP, users can securely access traditional LDAP-based apps and infrastructure, using their Cloud Identity credentials.Automate life cycle managementProvision and deprovision users in real time from a unified admin console.PricingCloud Identity pricing detailsCloud Identity is $7.2/mo per user. Try Cloud Identity Premium or learn more about Cloud Identity features and editions pricing.Gartner, Gartner Peer Insights ‘Voice of the Customer’: Unified Endpoint Management, Peer Contributors, 5 January 2021. The GARTNER PEER INSIGHTS CUSTOMERS’ CHOICE badge is a trademark and service mark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved. 
Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Contact usNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Key_Management.txt b/Cloud_Key_Management.txt new file mode 100644 index 0000000000000000000000000000000000000000..ff7e0ccdf9bfffa1630aedf0efe0cb5e58f24340 --- /dev/null +++ b/Cloud_Key_Management.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/security-key-management +Date Scraped: 2025-02-23T12:09:13.340Z + +Content: +Automatically provision and assign customer-managed encryption keys with Autokey.Jump to Cloud Key ManagementCloud Key ManagementManage encryption keys on Google Cloud.Go to consoleDeliver scalable, centralized, fast cloud key managementHelp satisfy compliance, privacy, and security needsApply hardware security modules (HSMs) effortlessly to your most sensitive dataUse an external KMS to protect your data in Google Cloud and separate the data from the keyApprove or deny any request for your encryption keys based on clear and precise justifications34:12Customer-managed encryption keys (CMEK)BenefitsScale your security globallyScale your application to Google’s global footprint while letting Google worry about the challenges of key management, including managing redundancy, latency, and data residency.Help achieve your compliance requirementsEasily encrypt your data in the cloud using software-backed encryption keys, FIPS 140-2 Level 3 validated HSMs, customer-provided keys or an External Key Manager. Leverage from integration with Google Cloud productsUse customer-managed encryption keys (CMEK) to control the encryption of data across Google Cloud products while benefiting from additional security features, such as Google Cloud IAM and audit logs.Key featuresCore featuresCentrally manage encryption keysA cloud-hosted key management service that lets you manage symmetric and asymmetric cryptographic keys for your cloud services the same way you do on-premises. You can generate, use, rotate, and destroy AES256, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 cryptographic keys.Deliver hardware key security with HSMHost encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 validated HSMs. With this fully managed service, you can protect your most sensitive workloads without the need to worry about the operational overhead of managing an HSM cluster.Provide support for external keys with EKMEncrypt data in integrated Google services with encryption keys that are stored and managed in a third-party key management system that’s deployed outside Google’s infrastructure. External Key Manager allows you to maintain separation between your data at rest and your encryption keys while still leveraging the power of cloud for compute and analytics.Be the ultimate arbiter of access to your data Key Access Justifications works with Cloud EKM to greatly advance the control you have over your data. 
It’s the only product that gives you visibility into every request for an encryption key, a justification for that request, and a mechanism to approve or deny decryption in the context of that request. These controls are covered by Google’s integrity commitments.View all featuresDocumentationDocumentationGoogle Cloud BasicsCloud Key Management Service documentationLearn how to create, import, and manage cryptographic keys and perform cryptographic operations in a single centralized cloud service.Learn moreGoogle Cloud BasicsCloud HSM documentationGet an overview of Cloud HSM and learn how to create and use HSM-protected encryption keys in Cloud Key Management Service.Learn moreGoogle Cloud BasicsCloud External Key Manager documentationFind an overview of Cloud External Key Manager (Cloud EKM).Learn moreWhitepaperCloud Key Management Service deep diveLearn more about the inner workings of the Cloud KMS platform and how it helps you protect the keys and other sensitive data that you store in Google Cloud.Learn moreBest Practice Using customer-managed encryption keys (CMEK) with GKELearn how to use customer-managed encryption keys (CMEK) on Google Kubernetes Engine (GKE).Learn moreGoogle Cloud Basics Using customer-managed encryption keys with Cloud SQLThe CMEK feature lets you use your own cryptographic keys for data at rest in Cloud SQL, including MySQL, PostgreSQL, and SQL Server.Learn moreGoogle Cloud Basics Using customer-managed encryption keys (CMEK) with DataprocSee how to use CMEK to encrypt data on the PDs associated with the VMs in your Dataproc cluster and/or the cluster metadata.Learn moreGoogle Cloud Basics Using customer-managed encryption keys with Data FusionLearn how customer-managed encryption keys provide user control over the data written by Cloud Data Fusion pipelines.Learn moreNot seeing what you’re looking for?View all product documentationUse casesUse casesUse caseSupport regulatory complianceCloud KMS, together with Cloud HSM and Cloud EKM, supports a wide range of compliance mandates that call for specific key management procedures and technologies. It does so in a scalable, cloud-native way, without undermining the agility of the cloud implementation. Various mandates call for hardware encryption (HSM), keys being separated from data (EKM), or keys being handled securely (KMS overall). Key management is compliant with FIPS 140-2.Use caseManage encryption keys via secure hardwareCustomers who are subject to compliance regulations may be required to store their keys and perform crypto operations in a FIPS 140-2 Level 3 validated device. By allowing customers to store their keys in a FIPS validated HSM, they are able to meet their regulator’s demand and maintain compliance in the cloud. This is also critical for customers seeking a level of assurance that the cloud provider cannot see or export their key material.Use caseManage encryption keys outside the cloudCustomers subject to regulatory or regional security requirements need to adopt cloud computing while retaining the encryption keys in their possession. External Key Manager allows them to maintain separation between data at rest and encryption keys while still leveraging the power of cloud for compute and analytics. 
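For the secure-hardware use case above, the request for HSM protection is made when the key is created. A minimal sketch with the google-cloud-kms Python client, assuming an existing key ring and placeholder IDs, might look like this:

# Minimal sketch for the secure-hardware use case above: create a symmetric
# encryption key whose versions are protected by Cloud HSM. Assumes the
# google-cloud-kms client library and an existing key ring; IDs are placeholders.
from google.cloud import kms

PROJECT_ID = "my-project"
LOCATION = "us-east1"
KEY_RING_ID = "my-key-ring"
KEY_ID = "my-hsm-key"

client = kms.KeyManagementServiceClient()
parent = client.key_ring_path(PROJECT_ID, LOCATION, KEY_RING_ID)

crypto_key = {
    "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
    "version_template": {
        "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION,
        "protection_level": kms.ProtectionLevel.HSM,
    },
}
key = client.create_crypto_key(
    request={"parent": parent, "crypto_key_id": KEY_ID, "crypto_key": crypto_key}
)
print(f"Created HSM-protected key: {key.name}")
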
This is accomplished with full visibility into who has access to the keys, when they have been used, and where they are located.Use caseKey Access Justifications and EKM data flowKey Access Justifications gives Google Cloud customers visibility into every request for an encryption key, a justification for that request, and a mechanism to approve or deny decryption in the context of that request. The use cases focus on both enforcement and visibility for data access.Use caseUbiquitous data encryptionSeamlessly encrypt data as it is sent to the cloud, using your external key management solution, in a way that only a confidential VM service can decrypt and compute on it.View all technical guidesAll featuresAll featuresSymmetric and asymmetric key supportCloud KMS allows you to create, use, rotate, automatically rotate, and destroy AES256 symmetric and RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 asymmetric cryptographic keys. With HSM, encrypt, decrypt, and sign with AES-256 symmetric and RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 asymmetric cryptographic keys.Create external keys with EKMGenerate your external key using one of the following external key managers: Equinix, Fortanix, Ionic, Thales, and Unbound. Once you have linked your external key with Cloud KMS, you can use it to protect data at rest in BigQuery and Compute Engine.Delay for key destructionCloud KMS has a built-in 24-hour delay for key material destruction, to prevent accidental or malicious data loss.Encrypt and decrypt via APICloud KMS is a REST API that can use a key to encrypt, decrypt, or sign data, such as secrets for storage.High global availabilityCloud KMS is available in several global locations and across multi-regions, allowing you to place your service where you want for low latency and high availability.Automated and at-will key rotationCloud KMS allows you to set a rotation schedule for symmetric keys to automatically generate a new key version at a fixed time interval. Multiple versions of a symmetric key can be active at any time for decryption, with only one primary key version used for encrypting new data. With EKM, create an externally managed key directly from the Cloud KSM console.Statement attestation with HSMWith Cloud HSM, verify that a key was created in the HSM with attestation tokens generated for key creation operations.Integration with GKEEncrypt Kubernetes secrets at the application-layer in GKE with keys you manage in Cloud KMS. In addition, you can store API keys, passwords, certificates, and other sensitive data with the Secret Manager storage system.Maintain key-data separationWith EKM, maintain separation between your data at rest and your encryption keys while still leveraging the power of cloud for compute and analytics.Key data residencyIf using Cloud KMS, your cryptographic keys will be stored in the region where you deploy the resource. You also have the option of storing those keys inside a physical Hardware Security Module located in the region you choose with Cloud HSM.Key importYou may be using existing cryptographic keys that were created on your premises or in an external key management system. You can import them into Cloud HSM keys or import software keys into Cloud KMS. Justified accessGet a clear reason for every decryption request that will cause your data to change state from at-rest to in-use with Key Access Justifications. 
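The "Encrypt and decrypt via API" capability listed above can be exercised with a few lines of Python. The sketch below assumes the google-cloud-kms client library and an existing symmetric key; the project, location, key ring, and key names are placeholders.

# Minimal sketch of the "encrypt and decrypt via API" feature above, using an
# existing symmetric Cloud KMS key. Assumes the google-cloud-kms client
# library; project, location, key ring, and key names are placeholders.
from google.cloud import kms

PROJECT_ID = "my-project"
LOCATION = "us-east1"
KEY_RING_ID = "my-key-ring"
KEY_ID = "my-key"

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path(PROJECT_ID, LOCATION, KEY_RING_ID, KEY_ID)

plaintext = b"example secret to protect"
encrypt_response = client.encrypt(request={"name": key_name, "plaintext": plaintext})
print(f"Ciphertext bytes: {len(encrypt_response.ciphertext)}")

decrypt_response = client.decrypt(
    request={"name": key_name, "ciphertext": encrypt_response.ciphertext}
)
assert decrypt_response.plaintext == plaintext
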
Automated policyKey Access Justifications lets you set automated policies that approve or deny access to keys based on specific justifications. Let your external key manager, provided by Google Cloud technology partners, take care of the rest.Integrity commitmentControls provided by Key Access Justifications are covered by Google’s integrity commitments, so that you know they can be trusted.PricingPricingCloud Key Management Service charges for usage and varies based on the following products: Cloud Key Management Service, Cloud External Key Manager, and Cloud HSM.ProductPrice (US$)Cloud KMS: active key versions$0.06 per monthCloud KMS: key use operations (Encrypt/Decrypt)$0.03 per 10,000 operationsCloud KMS: key admin operationsfreeCloud HSM: key versions (AES256, RSA2048)$1.00 per monthCloud HSM: key versions (RSA 3072, RSA 4096)0–2,000 key versions: $2.50 per month2,001+ key versions: $1.00 per monthCloud HSM: key versions (EC P256, EC P384)0–2,000 key versions: $2.50 per month2,001+ key versions: $1.00 per monthCloud EKM: key versions$3.00 per monthCloud EKM: key use operations$0.03 per 10,000 operationsIf you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.View pricing detailsPartnersPartnersImplement External Key Manager with one of these industry-leading key management vendors.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Load_Balancing.txt b/Cloud_Load_Balancing.txt new file mode 100644 index 0000000000000000000000000000000000000000..9bcd5cb07e47788e77838203685b777fc6732191 --- /dev/null +++ b/Cloud_Load_Balancing.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/load-balancing +Date Scraped: 2025-02-23T12:07:09.246Z + +Content: +Deploy a virtual machine cluster with a load balancer that instantaneously manages and distributes global traffic.Jump to Cloud Load BalancingCloud Load BalancingHigh performance, scalable load balancing on Google Cloud.Go to consoleContact salesScalable load balancing on Google Cloud with high performanceChoose the best load balancer type for you with this guideLearn how Google Cloud Load Balancing supports 1 million+ queries per secondGet metrics for your load balancer to understand app and system services performanceLearn how customers are building global businesses on Cloud Load BalancingBenefitsGlobal with single anycast IPProvides cross-region load balancing, including automatic multi-region failover. Cloud Load Balancing reacts instantaneously to changes in users, traffic, network, backend health, and other related conditions.Software-defined with flexibilityCloud Load Balancing is a fully distributed, software-defined, managed service for all your traffic. Apply Cloud Load Balancing to all of your traffic: HTTP(S), TCP/SSL, and UDP.Seamless autoscalingScale as your users and traffic grow. Easily handles huge, unexpected, and instantaneous spikes by diverting traffic to other regions in the world that can take traffic. Zero to full throttle in seconds.Key features Key featuresHTTP(S) load balancingApplication Load Balancers can balance HTTP and HTTPS traffic across multiple backend instances, across multiple regions. 
Your entire app is available using a single global IP address, resulting in a simplified DNS setup. Application Load Balancers are scalable, fault-tolerant, require no pre-warming, and enable content-based load balancing. For HTTPS traffic, they provide SSL termination and load balancing.TCP/SSL/UDP load balancingNetwork Load Balancers are Layer 4 load balancers that can distribute traffic to back ends located either in a single region or across multiple regions. These load balancers are scalable, don't require pre-warming, and use health checks to help ensure that only healthy instances receive traffic. Available Network Load Balancers: Proxy Network Load Balancers and Passthrough Network Load Balancers.SSL offloadSSL offload enables you to centrally manage SSL certificates and decryption. You can enable encryption between your load balancing layer and backends to ensure the highest level of security, with some additional overhead for processing on backends.Cloud CDN integrationEnable Cloud CDN with Application Load Balancers for optimizing application delivery for your users with a single checkbox.Extensibility and programmabilityService Extensions provide programmability and extensibility on load balancing data paths. Service Extensions callouts enable gRPC calls to user-managed services during data processing, while Service Extensions plugins allow the insertion of custom code into the networking data path using WebAssembly (Wasm).View all featuresLooking for other networking products, including Cloud CDN, Cloud Armor, or Cloud DNS?Explore all productsCustomersLearn from customers using Cloud Load BalancingVideoConnect anywhere with Google's Cross-Cloud Network2:39Blog postUcraft builds global website-builder business with the help of Google Cloud5-min readBlog postHow NCR and Opus built better availability and resilience for card management in the cloud5-min readBlog postHow Pokémon GO scales to millions of requests?5-min readSee all customersWhat's newWhat’s newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postMeet the nine new web delivery partner integrations coming via Service ExtensionsRead the blogEventTips for troubleshooting Google Cloud Load Balancing backendsRead the blogBlog postHow to choose the correct load balancer typeRead the blogBlog postExploring Google Cloud networking enhancements for generative AI applicationsRead the blogBlog postWhat’s new for the Google Cloud global front end for web delivery and protectionRead the blogBlog post 5 ways Service Extensions callouts can improve your Cloud Load Balancing environmentRead the blogDocumentationDocumentationGoogle Cloud BasicsOverview of Cloud Load BalancingUnderstand each type of Google Cloud load balancer and link to deeper documentation. Learn moreBest PracticeChoose a load balancerFind the right load balancer that fits your needs.Learn moreTutorialSet up Network and Application Load BalancersYou'll learn the differences between Network Load Balancers and Application Load Balancers and how to set them up for your workload running on Compute Engine VMs.Learn moreTutorialApplication Load Balancer with TerraformIn this lab, you will create an Application Load Balancer to forward traffic to a custom URL map. 
Learn moreTutorialApplication Load Balancer with Cloud ArmorIn this lab, you configure an Application Load Balancer with global backends and stress test the load balancer and denylist the stress test IP with Cloud Armor.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud Load BalancingAll featuresAll featuresHTTP(S) load balancingApplication Load Balancers can balance HTTP and HTTPS traffic across multiple backend instances, across multiple regions. Your entire app is available using a single global IP address, resulting in a simplified DNS setup. Application Load Balancers are scalable, fault-tolerant, require no pre-warming, and are content-based. For HTTPS traffic, they also provide SSL termination and load balancing.Cloud LoggingCloud Logging for load balancing logs all the load balancing requests sent to your load balancer. These logs can be used for debugging as well as analyzing your user traffic. You can view request logs and export them to Cloud Storage, BigQuery, or Pub/Sub for analysis.TCP/UDP/SSL load balancingNetwork Load Balancers are Layer 4 load balancers that can distribute traffic to back ends located either in a single region or across multiple regions. These load balancers are scalable, don't require pre-warming, and use health checks to help ensure that only healthy instances receive traffic. Available Network Load Balancers: Proxy Network Load Balancers and Passthrough Network Load Balancers.Seamless autoscalingAutoscaling helps your applications gracefully handle increases in traffic and reduces cost when the need for resources is lower. You just define the autoscaling policy and the autoscaler performs automatic scaling based on the measured load. No pre-warming required—go from zero to full throttle in seconds.SSL offloadSSL offload enables you to centrally manage SSL certificates and decryption. You can enable encryption between your load balancing layer and backends to ensure the highest level of security, with some additional overhead for processing on backends.High fidelity health checksHealth checks ensure that new connections are only load balanced to healthy backends that are up and ready to receive them. High fidelity health checks ensure that the probes mimic actual traffic to backends.Advanced feature supportCloud Load Balancing also includes advanced support features, such as IPv6 global load balancing, WebSockets, user-defined request headers, and protocol forwarding for private VIPs. AffinityCloud Load Balancing session affinity provides the ability to direct and stick user traffic to specific backend instances.Cloud CDN integrationEnable Cloud CDN for your Application Load Balancers to optimize application delivery for your users with a single checkbox.UDP load balancingPassthrough Network Load Balancers can spread UDP traffic over a pool of instances within a Compute Engine region. These load balancers are scalable, don't require pre-warming, and use health checks to help ensure that only healthy instances receive traffic.Extensibility and programmabilityService Extensions provide programmability and extensibility on load balancing data paths. 
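The high fidelity health checks described above are ordinary Compute Engine resources, so they can be created from code as well as from the console. The minimal sketch below uses the google-cloud-compute client library with a placeholder project ID; a complete Application Load Balancer additionally needs a backend service, URL map, target proxy, and forwarding rule, which are omitted here.

# Minimal sketch: create the kind of HTTP health check described above, which a
# backend service behind an Application Load Balancer can then reference.
# Assumes the google-cloud-compute client library; the project ID and health
# check name are placeholders.
from google.cloud import compute_v1

PROJECT_ID = "my-project"

health_check = compute_v1.HealthCheck(
    name="basic-http-health-check",
    type_="HTTP",
    http_health_check={"port": 80, "request_path": "/healthz"},
    check_interval_sec=5,
    timeout_sec=5,
    healthy_threshold=2,
    unhealthy_threshold=2,
)

client = compute_v1.HealthChecksClient()
operation = client.insert(project=PROJECT_ID, health_check_resource=health_check)
operation.result()  # wait for the global operation to finish
print("Health check created")
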
Service Extensions callouts enable gRPC calls to user-managed services during data processing, while Service Extensions plugins allow the insertion of custom code into the networking data path using WebAssembly (Wasm).Cloud ArmorGoogle Cloud Armor security policies enable you to rate-limit or redirect requests to your Application or Network Load Balancers at the Google Cloud edge, as close as possible to the source of incoming traffic. PricingPricingTo get a custom pricing quote, connect with a sales representative.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Cloud_Logging.txt b/Cloud_Logging.txt new file mode 100644 index 0000000000000000000000000000000000000000..c46bb3ab3bfb1bd964b3f033166322f3084602a2 --- /dev/null +++ b/Cloud_Logging.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/logging +Date Scraped: 2025-02-23T12:07:28.237Z + +Content: +Jump to Cloud loggingCloud LoggingFully managed, real-time log management with storage, search, analysis, and alerting at exabyte scale.New customers get $300 in free credits to spend on Cloud Logging.Go to consoleView documentationUse Logs Explorer quickstart to view your logs in the Google Cloud Console Learn how Cloud Logging helps customers improve their troubleshooting and reliability practicesStay up-to-date with the latest blogs, videos, and other logging resourcesReview the top 10 reasons to get started with Log Analytics, powered by BigQueryVIDEOCloud Logging in a minute1:50BenefitsGet started immediatelyPlatform logs are ingested from Google Cloud services and securely stored with no setup required. GKE workload logs are captured automatically and the Ops Agent captures workload logs from VMs.Quickly resolve issuesCloud Logging is integrated with Cloud Monitoring, Error Reporting, and Cloud Trace so you can troubleshoot issues across your services. Configure alerts for logs so you stay up to date on important events.Real-time insightsQuickly spot anomalies with real-time ingestion, and use log-based metrics to build Cloud Monitoring dashboards. Log Analytics brings the power of BigQuery to Cloud Logging for deeper insights.Key featuresKey featuresLogs ExplorerLogs Explorer enables you to search, sort, and analyze logs through flexible query statements, along with rich histogram visualizations, a simple field explorer, and ability to save the queries. Set alerts to notify you whenever a specific message appears in your included logs, or use Cloud Monitoring to alert on logs-based metrics you define.Regional log bucketsUse log buckets as part of your local or industry-specific compliance strategy. Log buckets store and process your workload’s logs data only in the region you specify. These buckets feature customizable access control and retention. Error ReportingError Reporting automatically analyzes your logs for exceptions and intelligently aggregates them into meaningful error groups. See your top or new errors at a glance and set up notifications to automatically alert you when a new error group is identified.Cloud Audit LogsCloud Audit Logs helps security teams maintain audit trails in Google Cloud. Achieve the same level of transparency over administrative activities and access to data in Google Cloud as in on-premises environments.
Every administrative activity is recorded on a hardened, always-on audit trail, which cannot be disabled by any rogue actor.Logs RouterCloud Logging receives log entries through the Cloud Logging API where they pass through the Logs Router. The Logs Router checks each log entry against existing inclusion filters and exclusion filters to determine which log entries to discard, which to ingest, and which to include in exports.View all featuresData governance: Principles for securing and managing logsGet the whitepaperCustomersLearn how Google Cloud customers are using Cloud Logging for better operationsCase studyADEO Services: Cloud Logging as a universal back end for logs across Google Cloud, on-prem, and SaaS4-min readCase studyGannett, America’s largest newspaper publisher, improves observability with Google Cloud's ops suite7-min readCase studyVolusion aggregates logs for easy ingestion and analysis of security events4-min readBlog postHow Psyonix wins with better logging4-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoLearn how to deep dive into your logs using SQLWatch videoBlog postUse Log Analytics for deeper network insightsRead the blogBlog postSecurity insights from audit logs with Log AnalyticsRead the blogEventListen to a Twitter Space event on Log Analytics by Cloud LoggingLearn moreBlog postLog Analytics in Cloud Logging is now GARead the blogBlog postTop 10 reasons to get started with Log Analytics todayRead the blogDocumentationDocumentationQuickstartQuickstart using Cloud Logging toolsThis quickstart introduces you to some of the capabilities of Cloud Logging.Learn moreAPIs & LibrariesUsing the Cloud Logging APIFind reference documentation and sample code and try out the API here.Learn moreGoogle Cloud BasicsCloud Audit LogsGain visibility into who did what, when, and where for all user activity on Google Cloud Platform.Learn moreGoogle Cloud BasicsAccess TransparencyLearn how Access Transparency provides you with logs that capture the actions Google personnel take when accessing your content.Learn moreGoogle Cloud BasicsAccess ApprovalSee how the Access Approval API enables controlling access to your organization’s data by Google personnel.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud LoggingAll featuresAll featuresLogs ExplorerSearch, sort, and query logs through flexible query statements, along with rich histogram visualizations, simple field explorers, and ability to save the queries.Custom logs / Ingestion APIWrite any custom log, from on-premises or another cloud, using our public write APIs.Logs alertingAlert on specific messages in your logs or alert on logs-based metrics with Cloud Monitoring.Log AnalyticsPlatform and workload logging data ingested into Cloud Logging is made available in Log Analytics, which is powered by BigQuery. Perform advanced analytics using SQL to query your logs. 
The log data is also made available directly in BigQuery so you can correlate your logs with other business data.Logs retentionConfigure different retention periods for logs in different log buckets, and criteria for different logs using the Logs Router.Logs-based metricsCreate metrics from log data which appears seamlessly in Cloud Monitoring, where you can visualize these metrics and create dashboards.Audit loggingAccess audit logs that capture all the admin and data access events within Google Cloud, with 400 days of data retention at no additional cost.Third-party integrationsIntegrate with external systems using Pub/Sub and configuring Logs Router to export the logs.Logs archivalStore logs for a longer duration at lower cost by easily exporting into Cloud Storage.Error ReportingError Reporting lets you see problems through the noise by automatically analyzing your logs for exceptions and intelligently aggregating them into meaningful error groups.Log buckets and viewsLog buckets provide a first-class logs storage solution that lets you centralize or subdivide your logs based on your needs. From there, use log views to specify which logs a user should have access to, all through standard IAM controls.PricingPricingThe pricing for Google Cloud Observability lets you control your usage and spending. Google Cloud Observability products are priced by data volume or usage. You can use the free data usage allotments to get started with no upfront fees or commitments.The following table summarizes the pricing information for Cloud Logging.FeaturePrice1Free allotment per monthEffective dateLogging storage* except for vended network logs.$0.50/GiB;One-time charge for streaming logs into log bucket storage for indexing, querying, and analysis; includes up to 30 days of storage in log buckets. No additional charges for querying and analyzing log data.First 50 GiB/project/monthJuly 1, 2018Vended network logs storage†$0.25/GiB;One-time charge for streaming network telemetry logs into log bucket storage for indexing, querying, and analysis; includes up to 30 days of storage in log buckets. No additional charges for querying and analyzing log data.Not applicableOctober 1, 2024Logging retention‡$0.01 per GiB per month for logs retained more than 30 days; billed monthly according to retention.Logs retained for the default retention period don't incur a retention cost.January 1, 2022Log Router♣ No additional chargeNot applicableNot applicableLog Analytics♥ No additional chargeNot applicableNot applicable* Storage volume counts the actual size of the log entries prior to indexing. There are no storage charges for logs stored in the _Required log bucket.† Vended logs are Google Cloud networking logs that are generated by Google Cloud services when the generation of these logs is enabled. Vended logs include VPC Flow Logs, Firewall Rules Logging, and Cloud NAT logs. These logs are also subject to Network telemetry pricing. For more information, see Vended logs.‡ There are no retention charges for logs stored in the _Required log bucket, which has a fixed retention period of 400 days.♣ Log routing is defined as forwarding logs received through the Cloud Logging API to a supported destination. Destination charges might apply to routed logs.♥ There is no charge to upgrade a log bucket to use Log Analytics or to issue SQL queries from the Log Analytics page.Note: The pricing language for Cloud Logging changed on July 19, 2023; however, the free allotments and the rates haven't changed. 
Your bill might refer to the old pricing language.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Cloud_Monitoring.txt b/Cloud_Monitoring.txt new file mode 100644 index 0000000000000000000000000000000000000000..4c696dccf7ca885796c44232130a89083eb24fd4 --- /dev/null +++ b/Cloud_Monitoring.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/monitoring +Date Scraped: 2025-02-23T12:07:30.287Z + +Content: +Jump to Cloud MonitoringCloud MonitoringGain visibility into the performance, availability, and health of your applications and infrastructure.Go to consoleView documentationGet started now with: dashboards, the Ops Agent for VMs, and Managed Service for PrometheusBuilt on the same backend used by Google that holds over 65 quadrillion points on diskLearn how Cloud Monitoring helps customers implement SRE principles to improve their businessesStay up-to-date with the latest blogs and resourcesVIDEOCloud Monitoring in a minute1:57BenefitsFor Google Cloud and other environmentsCloud Monitoring offers automatic out-of-the-box metric collection dashboards for Google Cloud services. It also supports monitoring of hybrid and multicloud environments.Identify trends, prevent issuesMetrics, events, and metadata are displayed with rich query language that helps identify issues and uncover patterns. Service-level objectives measure user experience and improve collaboration with developers.Reduce monitoring overheadOne integrated service for metrics, uptime monitoring, dashboards, and alerts reduces time spent navigating between systems. Observability in context makes metrics available within Google Cloud resource pages.Key featuresKey featuresSLO monitoringAutomatically infer or custom define service-level objectives (SLOs) for applications and get alerted when SLO violations occur. Check out our step-by-step guide to learn how to set SLOs, following SRE best practices.Managed metrics collection for Kubernetes and virtual machinesGoogle Cloud’s operations suite offers Managed Service for Prometheus for use with Kubernetes, which features self-deployed and managed collection options to simplify metrics collection, storage, and querying. For VMs, you can use the Ops Agent, which combines logging and metrics collection into a single agent that can be deployed at scale using popular configuration and management tools.
Google Cloud integrationDiscover and monitor all Google Cloud resources and services, with no additional instrumentation, integrated right into the Google Cloud console.View all featuresIncreasing business value with better IT operations: A guide to site reliability engineering (SRE)Download the whitepaperCustomersCustomers are using Cloud Monitoring to improve their operationsVideoHow The Home Depot scaled monitoring globally with Managed Service for PrometheusVideo (14:17)Blog postHow Sabre is using SRE to lead a successful digital transformation4-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postMonitor Google Compute Engine instances with Prometheus and the Ops AgentRead the blogBlog post Cloud Storage gets better observability with monitoring dashboardsRead the blogBlog postAdd severity levels to your alert policies in Cloud MonitoringRead the blogBlog postGoogle Cloud Managed Service for Prometheus is now GALearn moreBlog postWebhook, Pub/Sub, and Slack Alerting notification channels launchedRead the blogReportMonitoring GKE clusters for cost optimization using Cloud MonitoringLearn moreDocumentationDocumentationTutorialInstall the Ops Agent live in the ConsoleIn this tutorial you will work directly in the Google Cloud Console to create a Compute Engine instance (VM), install the Ops Agent, and test that it is working.Learn moreQuickstartMonitoring a Compute Engine instanceLearn how to monitor a Compute Engine virtual machine (VM) instance with Cloud Monitoring.Learn moreQuickstartSet up managed collection for Managed Service for PrometheusLearn how to set up the managed collector, which is best suited for applications you are building new on GKE or refactoring.Learn moreQuickstartIntroduction to the Cloud Monitoring APIThis page describes some of the features of the Cloud Monitoring API v3.Learn moreTutorialMonitoring your API usageLearn how to track overall consumption and monitor the performance of your APIs.Learn moreBest PracticeConcepts in service monitoringGet familiar with service-level indicators (SLIs) and service-level objectives (SLOs).Learn moreTutorialCreating a service-level indicatorCreate service-level objectives (SLOs) for custom and automatically detected services. Identify metrics you want to use in your service-level indicators (SLIs).Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud MonitoringAll featuresAll featuresSynthetic monitoringContinuously track the performance of your web applications and APIs using automated script based tests. Monitor for faulty behavior such as regressions, broken features, high response times, and unexpected status codes. 
Create alerts to be notified proactively in case of unexpected behavior.SLO monitoringAutomatically infer or custom define service-level objectives (SLOs) for applications and get alerted when SLO violations occur.Custom metricsInstrument your application to monitor application and business-level metrics via Cloud Monitoring.Google Cloud Console integrationDiscover and monitor all Google Cloud resources and services, with no additional configuration, integrated right into the Google Cloud console.Managed Service for PrometheusMonitor and alert on your workloads, using Prometheus, without having to manually manage and operate Prometheus at scale.Ops AgentDeploy the Ops Agent on your Google Cloud VMs to collect detailed metrics and logs from your applications and system. Try the in-console, step-by-step tutorial to experience installing the agent on a live VM. Logging integrationDrill down from dashboards and charts to logs. Create, visualize, and alert on metrics based on log data.DashboardsGet visibility into your cloud resources and services with no configuration. Define custom dashboards and take advantage of Google’s powerful data visualization tools.Multiple project and group/cluster supportCreate metrics scopes to monitor single or multiple projects together, and create resource groups to define relationships based on resource names, tags, security groups, projects, regions, accounts, and other criteria. Use those relationships to create targeted dashboards and topology-aware alerting policies.AlertingConfigure alerting policies to notify you when events occur or particular system or custom metrics violate rules that you define. Use multiple conditions to define complex alerting rules. Receive notifications via email, SMS, Slack, PagerDuty, and more.Uptime monitoringMonitor the availability of your internet-accessible URLs, VMs, APIs, and load balancers from probes around the globe with uptime checks. Create alerts to be notified proactively if there is an outage.PricingPricingThe pricing for Google Cloud Monitoring lets you control your usage and spending. You can use the free data usage allotments to get started with no upfront fees or commitments. Learn more in the pricing details guide.FeaturePrice1Free allotment per monthEffective dateAll Monitoring data except data ingested by using Managed Service for Prometheus$0.2580/MiB1: first 150–100,000 MiB$0.1510/MiB: next 100,000–250,000 MiB$0.0610/MiB: >250,000 MiBAll non-chargeable Google Cloud metricsFirst 150 MiB per billing account for metrics charged by bytes ingestedJuly 1, 2018Metrics ingested by using Google Cloud Managed Service for Prometheus, including GKE control plane metrics$0.060/million samples†: first 0-50 billion samples ingested#$0.048/million samples: next 50-250 billion samples ingested$0.036/million samples: next 250-500 billion samples ingested$0.024/million samples: >500 billion samples ingestedNot applicableAugust 8, 2023Monitoring data ingested by using GKE workload metricsGKE workload metrics is deprecated and removed in GKE 1.24. 
During the deprecation period, ingestion of GKE workload metrics is not charged.Not applicableNot applicableMonitoring API calls$0.01/1,000 Read API calls (Write API calls are free)First 1 million Read API calls included per billing accountJuly 1, 2018 through October 1, 2025Monitoring API callsNo charge for write API callsRead API calls: $0.50/million time series returned♥Write API calls: Not applicableRead API calls: First 1 million time series returned per billing accountOctober 2, 2025Execution of Monitoring uptime checks$0.30/1,000 executions‡1 million executions per Google Cloud projectOctober 1, 2022Execution of synthetic monitors$1.20/1,000 executions*100 executions per billing accountNovember 1, 2023 1 For pricing purposes, all units such as MB and GB represent binary measures. For example, 1 MB is 2^20 bytes. 1 GB is 2^30 bytes. These binary units are also known as mebibyte (MiB) and gibibyte (GiB).† Google Cloud Managed Service for Prometheus uses Cloud Monitoring storage for externally created metric data and uses the Monitoring API to retrieve that data. Managed Service for Prometheus meters based on samples ingested instead of bytes to align with Prometheus' conventions. For more information about sample-based metering, see Pricing for controllability and predictability. For computational examples, see Pricing examples based on samples ingested.# Samples are counted per billing account.♥ There is no charge for read API calls issued through the Google Cloud console, excluding those issued through the Cloud Shell. Read API calls that aren't issued through the Google Cloud console and that can return time-series data are charged by the number of time series that are returned or for one time series, whichever is larger. There is no charge for other read API calls. For more information, see Cloud Monitoring API pricing.‡ Executions are charged to the billing account in which they are defined. For more information, see Pricing for uptime-check execution.* Executions are charged to the billing account in which they are defined. For each execution, you might incur additional charges from other Google Cloud services, including services such as Cloud Functions, Cloud Storage, and Cloud Logging. For information about these additional charges, see the pricing document for the respective Google Cloud service.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Cloud_Monitoring_metric_export.txt b/Cloud_Monitoring_metric_export.txt new file mode 100644 index 0000000000000000000000000000000000000000..948245288e82062f272ac891d612c0990425de7f --- /dev/null +++ b/Cloud_Monitoring_metric_export.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/monitoring-metric-export +Date Scraped: 2025-02-23T11:53:12.456Z + +Content: +Home Docs Cloud Architecture Center Send feedback Cloud Monitoring metric export Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-14 UTC This article describes a solution for exporting Cloud Monitoring metrics for long-term analysis. Cloud Monitoring provides a monitoring solution for Google Cloud and Amazon Web Services (AWS).
Cloud Monitoring maintains metrics for six weeks because the value in monitoring metrics is often time-bound. Therefore, the value of historical metrics decreases over time. After the six-week window, aggregated metrics might still hold value for long-term analysis of trends that might not be apparent with short-term analysis. This solution provides a guide to understanding the metric details for export and a serverless reference implementation for metric export to BigQuery. The State of DevOps reports identified capabilities that drive software delivery performance. This solution will help you with the following capabilities: Monitoring and observability Monitoring systems to inform business decisions Visual management capabilities Exporting metrics use cases Cloud Monitoring collects metrics and metadata from Google Cloud, AWS, and app instrumentation. Monitoring metrics provide deep observability into performance, uptime, and overall health of cloud apps through an API, dashboards, and a metrics explorer. These tools provide a way to review the previous 6 weeks of metric values for analysis. If you have long-term metric analysis requirements, use the Cloud Monitoring API to export the metrics for long-term storage. Cloud Monitoring maintains the latest 6 weeks of metrics. It is frequently used for operational purposes such as monitoring virtual machine infrastructure (CPU, memory, network metrics) and application performance metrics (request or response latency). When these metrics exceed preset thresholds, an operational process is triggered through alerting. The captured metrics might also be useful for long-term analysis. For example, you might want to compare app performance metrics from Cyber Monday or other high-traffic events against metrics from the previous year to plan for the next high-traffic event. Another use case is to look at Google Cloud service usage over a quarter or year to better forecast cost. There might also be app performance metrics that you want to view across months or years. In these examples, maintaining the metrics for analysis over a long-term timeframe is required. Exporting these metrics to BigQuery provides the necessary analytical capabilities to address these examples. Requirements To perform long-term analysis on Monitoring metric data, there are 3 main requirements: Export the data from Cloud Monitoring. You need to export the Cloud Monitoring metric data as an aggregated metric value. Metric aggregation is required because storing raw timeseries data points, while technically feasible, doesn't add value. Most long-term analysis is performed at the aggregate level over a longer timeframe. The granularity of the aggregation is unique to your use case, but we recommend a minimum of 1 hour aggregation. Ingest the data for analysis. You need to import the exported Cloud Monitoring metrics to an analytics engine for analysis. Write queries and build dashboards against the data. You need dashboards and standard SQL access to query, analyze, and visualize the data. Functional steps Build a list of metrics to include in the export. Read metrics from the Monitoring API. Map the metrics from the exported JSON output from the Monitoring API to the BigQuery table format. Write the metrics to BigQuery. Create a programmatic schedule to regularly export the metrics. Architecture The design of this architecture leverages managed services to simplify your operations and management effort, reduces costs, and provides the ability to scale as required. 
The following technologies are used in the architecture: App Engine - Scalable platform as a service (PaaS) solution used to call the Monitoring API and write to BigQuery. BigQuery - A fully-managed analytics engine used to ingest and analyze the timeseries data. Pub/Sub - A fully-managed real-time messaging service used to provide scalable asynchronous processing. Cloud Storage - A unified object storage for developers and enterprises used to store the metadata about the export state. Cloud Scheduler - A cron-style scheduler used to execute the export process. Understanding Cloud Monitoring metrics details To understand how to best export metrics from Cloud Monitoring, it's important to understand how it stores metrics. Types of metrics There are 4 main types of metrics in Cloud Monitoring that you can export. Google Cloud metrics list are metrics from Google Cloud services, such as Compute Engine and BigQuery. Agent metrics list are metrics from VM instances running the Cloud Monitoring agents. AWS metrics list are metrics from AWS services such as Amazon Redshift and Amazon CloudFront. Metrics from external sources are metrics from third-party applications, and user-defined metrics, including custom metrics. Each of these metric types has a metric descriptor, which includes the metric type, as well as other metric metadata. The following example shows a metric descriptor returned by the Monitoring API projects.metricDescriptors.list method. { "metricDescriptors": [ { "name": "projects/sage-facet-201016/metricDescriptors/pubsub.googleapis.com/subscription/push_request_count", "labels": [ { "key": "response_class", "description": "A classification group for the response code. It can be one of ['ack', 'deadline_exceeded', 'internal', 'invalid', 'remote_server_4xx', 'remote_server_5xx', 'unreachable']." }, { "key": "response_code", "description": "Operation response code string, derived as a string representation of a status code (e.g., 'success', 'not_found', 'unavailable')." }, { "key": "delivery_type", "description": "Push delivery mechanism." } ], "metricKind": "DELTA", "valueType": "INT64", "unit": "1", "description": "Cumulative count of push attempts, grouped by result. Unlike pulls, the push server implementation does not batch user messages. So each request only contains one user message. The push server retries on errors, so a given user message can appear multiple times.", "displayName": "Push requests", "type": "pubsub.googleapis.com/subscription/push_request_count", "metadata": { "launchStage": "GA", "samplePeriod": "60s", "ingestDelay": "120s" } } ] } The important values to understand from the metric descriptor are the type, valueType, and metricKind fields. These fields identify the metric and impact the aggregation that is possible for a metric descriptor. Kinds of metrics Each metric has a metric kind and a value type. For more information, read Value types and metric kinds. The metric kind and the associated value type are important because their combination affects the way the metrics are aggregated. In the preceding example, the pubsub.googleapis.com/subscription/push_request_count metric type has a DELTA metric kind and an INT64 value type. In Cloud Monitoring, the metric kind and value types are stored in metricDescriptors, which are available in the Monitoring API. Timeseries A timeseries is a set of regular measurements for a metric type, stored over time, that contains the metric type, metadata, labels, and the individual measured data points.
Metrics collected automatically by Monitoring, such as Google Cloud and AWS metrics, are collected regularly. As an example, the appengine.googleapis.com/http/server/response_latencies metric is collected every 60 seconds. A collected set of points for a given timeseries might grow over time, based on the frequency of the data reported and any labels associated with the metric type. If you export the raw timeseries data points, this might result in a large export. To reduce the number of timeseries data points returned, you can aggregate the metrics over a given alignment period. For example, by using aggregation you can return one data point per hour for a given metric timeseries that has one data point per minute. This reduces the number of exported data points and reduces the analytical processing required in the analytics engine. In this article, timeseries are returned for each metric type selected. Metric aggregation You can use aggregation to combine data from several timeseries into a single timeseries. The Monitoring API provides powerful alignment and aggregation functions so that you don't have to perform the aggregation yourself, passing the alignment and aggregation parameters to the API call. For more details about how aggregation works for the Monitoring API, read Filtering and aggregation and this blog post. You map metric type to aggregation type to ensure that the metrics are aligned and that the timeseries is reduced to meet your analytical needs. There are lists of aligners and reducers that you can use to aggregate the timeseries. Aligners and reducers each support a specific set of metric kinds and value types that they can align or reduce. As an example, if you aggregate over 1 hour, then the result of the aggregation is 1 point returned per hour for the timeseries. Another way to fine-tune your aggregation is to use the Group By function, which lets you group the aggregated values into lists of aggregated timeseries. For example, you can choose to group App Engine metrics based on the App Engine module. Grouping by the App Engine module, in combination with the aligners and reducers aggregating to 1 hour, produces 1 data point per App Engine module per hour. Metric aggregation balances the increased cost of recording individual data points against the need to retain enough data for a detailed long-term analysis. Reference implementation details The reference implementation contains the same components as described in the Architecture design diagram. The functional and relevant implementation details in each step are described below. Build metric list Cloud Monitoring defines over a thousand metric types to help you monitor Google Cloud, AWS, and third-party software. The Monitoring API provides the projects.metricDescriptors.list method, which returns a list of metrics available to a Google Cloud project. The Monitoring API provides a filtering mechanism so that you can filter to a list of metrics that you want to export for long-term storage and analysis. The reference implementation in GitHub uses a Python App Engine app to get a list of metrics and then writes each metric to a Pub/Sub topic as a separate message. The export is initiated by a Cloud Scheduler job that generates a Pub/Sub notification to run the app. There are many ways to call the Monitoring API; in this case, the Cloud Monitoring and Pub/Sub APIs are called by using the Google API Client Library for Python because of its flexible access to the Google APIs.
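To make this step concrete, the following minimal Python sketch lists matching metric descriptors through the Monitoring API and publishes one Pub/Sub message per metric type, in the spirit of the reference implementation but not identical to it. The project ID, topic name, metric filter, and message fields are placeholder choices for illustration; replace them with values that match your own environment.

import base64
import json

from googleapiclient.discovery import build

# Placeholder values -- replace with your own project, topic, and filter.
PROJECT_ID = "your-project-id"
TOPIC = f"projects/{PROJECT_ID}/topics/metrics_export"
METRIC_FILTER = 'metric.type = starts_with("pubsub.googleapis.com")'


def list_and_publish_metrics():
    """Lists metric descriptors and publishes one Pub/Sub message per metric type."""
    monitoring = build("monitoring", "v3")
    pubsub = build("pubsub", "v1")

    request_kwargs = {"name": f"projects/{PROJECT_ID}", "filter": METRIC_FILTER}
    while True:
        response = monitoring.projects().metricDescriptors().list(**request_kwargs).execute()
        for descriptor in response.get("metricDescriptors", []):
            # Carry just enough metadata for the next step to pick an aligner and reducer.
            payload = {
                "metric_type": descriptor["type"],
                "metric_kind": descriptor["metricKind"],
                "value_type": descriptor["valueType"],
            }
            # The Pub/Sub REST API expects base64-encoded message data.
            body = {
                "messages": [
                    {"data": base64.b64encode(json.dumps(payload).encode("utf-8")).decode("utf-8")}
                ]
            }
            pubsub.projects().topics().publish(topic=TOPIC, body=body).execute()

        next_token = response.get("nextPageToken")
        if not next_token:
            break
        request_kwargs["pageToken"] = next_token


if __name__ == "__main__":
    list_and_publish_metrics()

Each published message carries only the metric type, kind, and value type needed by the downstream aggregation step; the actual reference implementation defines its own message format and configuration.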
Get timeseries You extract the timeseries for the metric and then write each timeseries to Pub/Sub. With the Monitoring API you can aggregate the metric values across a given alignment period by using the project.timeseries.list method. Aggregating data reduces your processing load, storage requirements, query times, and analysis costs. Data aggregation is a best practice for efficiently conducting long-term metric analysis. The reference implementation in GitHub uses a Python App Engine app to subscribe to the topic, where each metric for export is sent as a separate message. For each message that is received, Pub/Sub pushes the message to the App Engine app. The app gets the timeseries for a given metric aggregated based on the input configuration. In this case, the Cloud Monitoring and Pub/Sub APIs are called by using the Google API Client Library. Each metric can return 1 or more timeseries. Each metric is sent by a separate Pub/Sub message to insert into BigQuery. The metric type-to-aligner and metric type-to-reducer mapping is built into the reference implementation. The following table captures the mapping used in the reference implementation based on the classes of metric kinds and value types supported by the aligners and reducers. Value type GAUGE Aligner Reducer DELTA Aligner Reducer CUMULATIVE2 Aligner Reducer BOOL yes ALIGN_FRACTION_TRUE none no N/A N/A no N/A N/A INT64 yes ALIGN_SUM none yes ALIGN_SUM none yes none none DOUBLE yes ALIGN_SUM none yes ALIGN_SUM none yes none none STRING yes excluded excluded no N/A N/A no N/A N/A DISTRIBUTION yes ALIGN_SUM none yes ALIGN_SUM none yes none none MONEY no N/A N/A no N/A N/A no N/A N/A It's important to consider the mapping of valueType to aligners and reducers because aggregation is only possible for specific valueTypes and metricKinds for each aligner and reducer. For example, consider the pubsub.googleapis.com/subscription/push_request_count metric type. Based on the DELTA metric kind and INT64 value type, one way that you can aggregate the metric is: Alignment Period - 3600s (1 hour) Aligner = ALIGN_SUM - The resulting data point in the alignment period is the sum of all data points in the alignment period. Reducer = REDUCE_SUM - Reduce by computing the sum across a timeseries for each alignment period. Along with the alignment period, aligner, and reducer values, the project.timeseries.list method requires several other inputs: filter - Select the metric to return. startTime - Select the starting point in time for which to return timeseries. endTime - Select the last time point in time for which to return timeseries. groupBy - Enter the fields upon which to group the timeseries response. alignmentPeriod - Enter the periods of time into which you want the metrics aligned. perSeriesAligner - Align the points into even time intervals defined by an alignmentPeriod. crossSeriesReducer - Combine multiple points with different label values down to one point per time interval. The GET request to the API includes all the parameters described in the preceding list. https://monitoring.googleapis.com/v3/projects/sage-facet-201016/timeSeries? 
interval.startTime=START_TIME_VALUE& interval.endTime=END_TIME_VALUE& aggregation.alignmentPeriod=ALIGNMENT_VALUE& aggregation.perSeriesAligner=ALIGNER_VALUE& aggregation.crossSeriesReducer=REDUCER_VALUE& filter=FILTER_VALUE& aggregation.groupByFields=GROUP_BY_VALUE The following HTTP GET provides an example call to the projects.timeseries.list API method by using the input parameters: https://monitoring.googleapis.com/v3/projects/sage-facet-201016/timeSeries? interval.startTime=2019-02-19T20%3A00%3A01.593641Z& interval.endTime=2019-02-19T21%3A00%3A00.829121Z& aggregation.alignmentPeriod=3600s& aggregation.perSeriesAligner=ALIGN_SUM& aggregation.crossSeriesReducer=REDUCE_SUM& filter=metric.type%3D%22kubernetes.io%2Fnode_daemon%2Fmemory%2Fused_bytes%22+& aggregation.groupByFields=metric.labels.key The preceding Monitoring API call includes a crossSeriesReducer=REDUCE_SUM, which means that the metrics are collapsed and reduced into a single sum as shown in the following example. { "timeSeries": [ { "metric": { "type": "pubsub.googleapis.com/subscription/push_request_count" }, "resource": { "type": "pubsub_subscription", "labels": { "project_id": "sage-facet-201016" } }, "metricKind": "DELTA", "valueType": "INT64", "points": [ { "interval": { "startTime": "2019-02-08T14:00:00.311635Z", "endTime": "2019-02-08T15:00:00.311635Z" }, "value": { "int64Value": "788" } } ] } ] } This level of aggregation aggregates data into a single data point, making it an ideal metric for your overall Google Cloud project. However, it doesn't let you drill into which resources contributed to the metric. In the preceding example, you can't tell which Pub/Sub subscription contributed the most to the request count. If you want to review the details of the individual components generating the timeseries, you can remove the crossSeriesReducer parameter. Without the crossSeriesReducer, the Monitoring API doesn't combine the various timeseries to create a single value. The following HTTP GET provides an example call to the projects.timeseries.list API method by using the input parameters. The crossSeriesReducer isn't included. https://monitoring.googleapis.com/v3/projects/sage-facet-201016/timeSeries? interval.startTime=2019-02-19T20%3A00%3A01.593641Z& interval.endTime=2019-02-19T21%3A00%3A00.829121Z aggregation.alignmentPeriod=3600s& aggregation.perSeriesAligner=ALIGN_SUM& filter=metric.type%3D%22kubernetes.io%2Fnode_daemon%2Fmemory%2Fused_bytes%22+ In the following JSON response, the metric.labels.keys are the same across both of the results because the timeseries is grouped. Separate points are returned for each of the resource.labels.subscription_ids values. Review the metric_export_init_pub and metrics_list values in the following JSON. This level of aggregation is recommended because it allows you to use Google Cloud products, included as resource labels, in your BigQuery queries. 
{ "timeSeries": [ { "metric": { "labels": { "delivery_type": "gae", "response_class": "ack", "response_code": "success" }, "type": "pubsub.googleapis.com/subscription/push_request_count" }, "metricKind": "DELTA", "points": [ { "interval": { "endTime": "2019-02-19T21:00:00.829121Z", "startTime": "2019-02-19T20:00:00.829121Z" }, "value": { "int64Value": "1" } } ], "resource": { "labels": { "project_id": "sage-facet-201016", "subscription_id": "metric_export_init_pub" }, "type": "pubsub_subscription" }, "valueType": "INT64" }, { "metric": { "labels": { "delivery_type": "gae", "response_class": "ack", "response_code": "success" }, "type": "pubsub.googleapis.com/subscription/push_request_count" }, "metricKind": "DELTA", "points": [ { "interval": { "endTime": "2019-02-19T21:00:00.829121Z", "startTime": "2019-02-19T20:00:00.829121Z" }, "value": { "int64Value": "803" } } ], "resource": { "labels": { "project_id": "sage-facet-201016", "subscription_id": "metrics_list" }, "type": "pubsub_subscription" }, "valueType": "INT64" } ] } Each metric in the JSON output of the projects.timeseries.list API call is written directly to Pub/Sub as a separate message. There is a potential fan-out where 1 input metric generates 1 or more timeseries. Pub/Sub provides the ability to absorb a potentially large fan-out without exceeding timeouts. The alignment period provided as input means that the values over that timeframe are aggregated into a single value as shown in the preceding example response. The alignment period also defines how often to run the export. For example, if your alignment period is 3600s, or 1 hour, then the export runs every hour to regularly export the timeseries. Store metrics The reference implementation in GitHub uses a Python App Engine app to read each timeseries and then insert the records into the BigQuery table. For each message that is received, Pub/Sub pushes the message to the App Engine app. The Pub/Sub message contains metric data exported from the Monitoring API in a JSON format and needs to be mapped to a table structure in BigQuery. In this case, the BigQuery APIs are called using the Google API Client Library.. The BigQuery schema is designed to map closely to the JSON exported from the Monitoring API. When building the BigQuery table schema, one consideration is the scale of the data sizes as they grow over time. In BigQuery, we recommend that you partition the table based on a date field because it can make queries more efficient by selecting date ranges without incurring a full table scan. If you plan to run the export regularly, you can safely use the default partition based on ingestion date. If you plan to upload metrics in bulk or don't run the export periodically, partition on the end_time, which does require changes to the BigQuery schema. You can either move the end_time to a top-level field in the schema, where you can use it for partitioning, or add a new field to the schema. Moving the end_time field is required because the field is contained in a BigQuery record and partitioning must be done on a top-level field. For more information, read the BigQuery partitioning documentation. BigQuery also provides the ability to expire datasets, tables, and table partitions after an amount of time. Using this feature is a useful way to purge older data when the data is no longer useful. For example, if your analysis covers a 3-year time period, you can add a policy to delete data older than 3 years old. Schedule export Cloud Scheduler is a fully-managed cron job scheduler. 
Cloud Scheduler lets you use the standard cron schedule format to trigger an App Engine app, send a message by using Pub/Sub, or send a message to an arbitrary HTTP endpoint. In the reference implementation in GitHub, Cloud Scheduler triggers the list-metrics App Engine app every hour by sending a Pub/Sub message with a token that matches the App Engine app's configuration. The default aggregation period in the app configuration is 3600s, or 1 hour, which correlates to how often the app is triggered. A minimum of 1 hour aggregation is recommended because it provides a balance between reducing data volumes and still retaining high fidelity data. If you use a different alignment period, change the frequency of the export to correspond to the alignment period. The reference implementation stores the last end_time value in Cloud Storage and uses that value as the subsequent start_time unless a start_time is passed as a parameter. The following screenshot from Cloud Scheduler demonstrates how you can use the Google Cloud console to configure Cloud Scheduler to invoke the list-metrics App Engine app every hour. The Frequency field uses the cron-style syntax to tell Cloud Scheduler how frequently to execute the app. The Target specifies a Pub/Sub message that is generated, and the Payload field contains the data sent in the Pub/Sub message. Using the exported metrics With the exported data in BigQuery, you can now use standard SQL to query the data or build dashboards to visualize trends in your metrics over time. Sample query: App Engine latencies The following query finds the minimum, maximum, and average of the mean latency metric values for an App Engine app. The metric.type identifies the App Engine metric, and the labels identify the App Engine app based on the project_id label value. The point.value.distribution_value.mean is used because this metric is a DISTRIBUTION value in the Monitoring API, which is mapped to the distribution_value field object in BigQuery. The end_time field looks back over the values for the past 30 days. SELECT metric.type AS metric_type, EXTRACT(DATE FROM point.interval.start_time) AS extract_date, MAX(point.value.distribution_value.mean) AS max_mean, MIN(point.value.distribution_value.mean) AS min_mean, AVG(point.value.distribution_value.mean) AS avg_mean FROM `sage-facet-201016.metric_export.sd_metrics_export` CROSS JOIN UNNEST(resource.labels) AS resource_labels WHERE point.interval.end_time > TIMESTAMP(DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY)) AND point.interval.end_time <= CURRENT_TIMESTAMP AND metric.type = 'appengine.googleapis.com/http/server/response_latencies' AND resource_labels.key = "project_id" AND resource_labels.value = "sage-facet-201016" GROUP BY metric_type, extract_date ORDER BY extract_date Sample query: BigQuery query counts The following query returns the number of queries against BigQuery per day in a project. The int64_value field is used because this metric is an INT64 value in the Monitoring API, which is mapped to the int64_value field in BigQuery. The metric.type identifies the BigQuery metric, and the labels identify the project based on the project_id label value. The end_time field looks back over the values for the past 30 days.
SELECT EXTRACT(DATE FROM point.interval.end_time) AS extract_date, sum(point.value.int64_value) as query_cnt FROM `sage-facet-201016.metric_export.sd_metrics_export` CROSS JOIN UNNEST(resource.labels) AS resource_labels WHERE point.interval.end_time > TIMESTAMP(DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY)) AND point.interval.end_time <= CURRENT_TIMESTAMP and metric.type = 'bigquery.googleapis.com/query/count' AND resource_labels.key = "project_id" AND resource_labels.value = "sage-facet-201016" group by extract_date order by extract_date Sample query: Compute Engine instances The following query finds the weekly minimum, maximum, and average of the CPU usage metric values for Compute Engine instances of a project. The metric.type identifies the Compute Engine metric, and the labels identify the instances based on the project_id label value. The end_time field looks back over the values for the past 30 days. SELECT EXTRACT(WEEK FROM point.interval.end_time) AS extract_date, min(point.value.double_value) as min_cpu_util, max(point.value.double_value) as max_cpu_util, avg(point.value.double_value) as avg_cpu_util FROM `sage-facet-201016.metric_export.sd_metrics_export` WHERE point.interval.end_time > TIMESTAMP(DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY)) AND point.interval.end_time <= CURRENT_TIMESTAMP AND metric.type = 'compute.googleapis.com/instance/cpu/utilization' group by extract_date order by extract_date Data visualization BigQuery is integrated with many tools that you can use for data visualization. Looker Studio is a free tool built by Google where you can build data charts and dashboards to visualize the metric data, and then share them with your team. The following example shows a trendline chart of the latency and count for the appengine.googleapis.com/http/server/response_latencies metric over time. Colaboratory is a research tool for machine learning education and research. It's a hosted Jupyter notebook environment that requires no set up to use and access data in BigQuery. Using a Colab notebook, Python commands, and SQL queries, you can develop detailed analysis and visualizations. Monitoring the export reference implementation When the export is running, you need to monitor the export. One way to decide which metrics to monitor is to set a Service Level Objective (SLO). An SLO is a target value or range of values for a service level that is measured by a metric. The Site reliability engineering book describes 4 main areas for SLOs: availability, throughput, error rate, and latency. For a data export, throughput and error rate are two major considerations and you can monitor them through the following metrics: Throughput - appengine.googleapis.com/http/server/response_count Error rate - logging.googleapis.com/log_entry_count For example, you can monitor the error rate by using the log_entry_count metric and filtering it for the App Engine apps (list-metrics, get-timeseries, write-metrics) with a severity of ERROR. You can then use the Alerting policies in Cloud Monitoring to alert you of errors encountered in the export app. The Alerting UI displays a graph of the log_entry_count metric as compared to the threshold for generating the alert. What's next View the reference implementation on GitHub. Read the Cloud Monitoring docs. Explore the Cloud Monitoring v3 API docs. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Read our resources about DevOps. 
Learn more about the DevOps capabilities related to this solution: Monitoring and observability Monitoring systems to inform business decisions Visual management capabilities. Take the DevOps quick check to understand where you stand in comparison with the rest of the industry. Send feedback \ No newline at end of file diff --git a/Cloud_NAT.txt b/Cloud_NAT.txt new file mode 100644 index 0000000000000000000000000000000000000000..d129a2029c9c67ef2e7299377d84f1289b053d64 --- /dev/null +++ b/Cloud_NAT.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/nat +Date Scraped: 2025-02-23T12:07:10.816Z + +Content: +Learn how to protect your outbound connections without public IPsCloud NATCloud-first high performance network address translationCloud NAT provides fully managed, software-defined network address translation support for Google Cloud.Go to consoleContact salesProduct HighlightsCloud-managed service applied to each workloadSimplified deployment and ongoing operationUnparallelled network performance Cloud NAT and NGFWs Advanced Networking DemoFeaturesHigh performance network address translationCloud NAT delivers high performance, reliability and scalability, avoiding the need for middle proxies. It lets you restrict inbound communications to your app instances while allowing them to have outbound communications to the internet without using public IPs.Dynamic port allocationDynamic Port Allocation (DPA) feature enable Cloud NAT to scale up and down port allocations depending on-demand while providing operator controls to set minimum and maximum port limits.Flexible IP address allocationChoose your NAT IP allocation based on your network requirements. Manual mode gives you full control when specifying IPs, while Auto mode enables the NAT IPs to be allocated and scaled automaticallySimplified deployment and operationsA single Cloud NAT gateway can provide NAT for all subnets in a VPC region and be configured to support multiple NAT IP addresses. Cloud NAT simplifies application deployment and operation, requiring no changes to networking, forwarding, or routing configurations.Unparallelled network performance Autoscale your NAT operations with minimal impact to throughput and latency and a 99.99% SLA. View all featuresHow It WorksTo use Cloud NAT, you’ll first create a NAT gateway. Then you’ll be able to configure the NAT policies to safeguard outbound internet connections to GCE and GKE workloads from external threatsView documentationCommon UsesPublic NATLearn how to configure and manage network address translation in this quickstart guideLearn how to use Public NAT with Compute EngineLearn how to use Public NAT with GKEExplore Cloud NAT for Serverless use casesTutorials, quickstarts, & labsLearn how to configure and manage network address translation in this quickstart guideLearn how to use Public NAT with Compute EngineLearn how to use Public NAT with GKEExplore Cloud NAT for Serverless use casesPrivate NAT for Network Connectivity Center spokesPrivate NAT enables private-to-private translations across Google Cloud networks and other on-premises or cloud provider networks. 
Inter-VPC NAT lets you create a Private NAT gateway that works in conjunction with Network Connectivity Center Virtual Private Cloud (VPC) spokes to perform network address translation (NAT) between VPC networks.Learn how to use Private NAT with Network Connectivity Center Tutorials, quickstarts, & labsPrivate NAT enables private-to-private translations across Google Cloud networks and other on-premises or cloud provider networks. Inter-VPC NAT lets you create a Private NAT gateway that works in conjunction with Network Connectivity Center Virtual Private Cloud (VPC) spokes to perform network address translation (NAT) between VPC networks.Learn how to use Private NAT with Network Connectivity Center Hybrid NATHybrid NAT lets you perform network address translations (NAT) of IP addresses between a Virtual Private Cloud (VPC) network and the connected on-premises network or any other cloud provider network Tutorials, quickstarts, & labsHybrid NAT lets you perform network address translations (NAT) of IP addresses between a Virtual Private Cloud (VPC) network and the connected on-premises network or any other cloud provider network PricingHow Cloud NAT Pricing WorksHourly price for the NAT gateway based on the # of VM instances using the gateway. Per-GiB cost for data transfer processed by the NAT gateway and an hourly price for the external IP address used by the gateway ServicesDescriptionPrice (USD)NAT Gateway supporting up to 32 VM instances$0.0014 * the number of VM instances that are using the gateway$0.045Price per GiB processed, inbound and outbound data transferNAT Gateway supporting more than 32 VM instances$0.044$0.045Price per GiB processed, inbound and outbound data transferIP address used by a NAT gatewayPrice per hour for a static or an ephemeral external IP address used by a NAT gateway for Public NAT$0.005Price per hour for an external IP usedLearn more about Cloud NAT pricing.How Cloud NAT Pricing WorksHourly price for the NAT gateway based on the # of VM instances using the gateway. 
Per-GiB cost for data transfer processed by the NAT gateway and an hourly price for the external IP address used by the gateway NAT Gateway supporting up to 32 VM instancesDescription$0.0014 * the number of VM instances that are using the gatewayPrice (USD)$0.045Price per GiB processed, inbound and outbound data transferNAT Gateway supporting more than 32 VM instancesDescription$0.044Price (USD)$0.045Price per GiB processed, inbound and outbound data transferIP address used by a NAT gatewayDescriptionPrice per hour for a static or an ephemeral external IP address used by a NAT gateway for Public NATPrice (USD)$0.005Price per hour for an external IP usedLearn more about Cloud NAT pricing.Pricing CalculatorEstimate your monthly Google Cloud costs, including region specific pricing and fees.Estimate your costsCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptGet a quick intro to using Cloud NATView DocumentationSee Cloud NAT resources, release notes and moreSee resourcesLearn about Cloud NAT interactions with other Google productsRead guideLearn how Cloud NAT strengthens Macy's security Read blogLearn how to secure network access with Cloud NAT and cloud-based firewallsRead blogGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Profiler.txt b/Cloud_Profiler.txt new file mode 100644 index 0000000000000000000000000000000000000000..cb58695a87e3dfc6e7978be24da7cf0fe25b3fd3 --- /dev/null +++ b/Cloud_Profiler.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/profiler/docs +Date Scraped: 2025-02-23T12:07:38.765Z + +Content: +Home Profiler Documentation Stay organized with collections Save and categorize content based on your preferences. Cloud Profiler documentation View all product documentation Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production applications. It attributes that information to the application's source code, helping you identify the parts of the application consuming the most resources, and otherwise illuminating the performance characteristics of the code. Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Tutorial Quickstart: Measure app performance Profiling applications: Go, Java, Node.js, or Python Profiling applications running outside of Google Cloud Focus the flame graph Compare profiles View historical trends Troubleshoot emoji_objects Concepts Profiling concepts Cloud Profiler overview Flame graphs info Resources Quotas and limits Release notes Training Training and tutorials Analyze production performance with Cloud Profiler In this codelab, you learn how to set up Cloud Profiler for a Go program and then you learn how to collect, view and analyze the performance data with Cloud Profiler. Learn more arrow_forward Training Training and tutorials Optimizing a Go app In this tutorial, you download and run a Go application, and then you are guided through using profiling data to optimize that application. Learn more arrow_forward Code Samples Code Samples Go samples A collection of Go applications configured to collect profile data. 
Get started arrow_forward Code Samples Code Samples Profiler code samples A set of code samples for configuring Profiler for a variety of languages and environments. Get started arrow_forward Related videos \ No newline at end of file diff --git a/Cloud_Quotas.txt b/Cloud_Quotas.txt new file mode 100644 index 0000000000000000000000000000000000000000..0d55e674fa2a84cb225e8afb14b152ba5b7e4872 --- /dev/null +++ b/Cloud_Quotas.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/docs/quotas +Date Scraped: 2025-02-23T12:07:42.281Z + +Content: +Home Documentation Cloud Quotas Stay organized with collections Save and categorize content based on your preferences. Cloud Quotas documentation View all product documentation Cloud Quotas enables customers to manage quotas for all of their Google Cloud services. With Cloud Quotas, users are able to easily monitor quota usage, create and modify quota alerts, and request limit adjustments for quotas. Quotas are managed through the Cloud Quotas dashboard or the Cloud Quotas API. Learn more. Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Overview Understand quotas Predefined roles and permissions find_in_page Reference REST API Client libraries info Resources Release notes \ No newline at end of file diff --git a/Cloud_Run(1).txt b/Cloud_Run(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..afb2f8086203db5de2c9c939063e742bcf969a49 --- /dev/null +++ b/Cloud_Run(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/run +Date Scraped: 2025-02-23T12:02:51.138Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Cloud RunBuild apps or websites quickly on a fully managed platformRun frontend and backend services, batch jobs, host LLMs, and queue processing workloads without the need to manage infrastructure.Get two million requests free per month.Try it in consoleContact salesProduct highlightsThe flexibility of containers with the simplicity of serverlessStart with source code and run your app in 20+ regions at onceOnly pay when your code is runningDeploy and build web apps and websitesCloud Run in a minute1:32FeaturesAny language, any library, any binaryYou can write code using your favorite language, framework, and libraries, package it up as a container, run "gcloud run deploy," and your app will be live—provided with everything it needs to run in production. Building a container is completely optional. If you're using Go, Node.js, Python, Java, .NET Core, or Ruby, you can use the source-based deployment option that builds the container for you, using the best practices for the language you're using. Fast autoscalingWhether you own event-driven, long running services, or deploy containerized jobs to process data, Cloud Run automatically scales your containers up and down from zero—this means you only pay when your code is running.GPUs(Now in public preview) On-demand access to NVIDIA L4 GPUs for running AI inference workloads. 
GPU instances start in 5 seconds and scale to zero.Host your LLMs on Cloud RunRead blogCloud Run functions(Now in public preview) Write and deploy functions directly with Cloud Run, giving you complete control over the underlying service configuration.Cloud Functions is now Cloud Run functionsRead blogAutomatically build container images from your sourceCloud Run can also automate how you get to production, using buildpacks to enable you to deploy directly from source—without having to install Docker on your machine. You can automate your builds and deploy your code whenever new commits are pushed to a given branch of a Git repository.Run scheduled jobs to completionCloud Run jobs allow you to perform batch processing, with instances running in parallel. Execute run-to-completion jobs that do not respond to HTTP requests—all on a serverless platform. Let your jobs run for up to 24 hours.Direct VPC connectivitySend traffic to a VPC network directly and connect with all the services you have running on the VPC. View all featuresHow It WorksCloud Run is a fully managed platform that enables you to run your code directly on top of Google’s scalable infrastructure. Cloud Run is simple, automated, and designed to make you more productive.View documentationWhat is Cloud Run?Common UsesWebsites and web applicationsDeploy and host a website with Cloud RunBuild your web app using your favorite stack, access your SQL database, and render dynamic HTML pages. Cloud Run also gives you the ability to scale to zero when there are no requests coming to your website.In this codelab, you'll begin with a Docker image created via Cloud Build, which you trigger in Cloud Shell. Then, you'll deploy that image to Cloud Run with a command in Cloud Shell.Start codelabBuild and deploy a Python service to Cloud RunBuild and deploy a Node.js service to Cloud RunView full list of supported languages for web servicesTutorials, quickstarts, & labsDeploy and host a website with Cloud RunBuild your web app using your favorite stack, access your SQL database, and render dynamic HTML pages. Cloud Run also gives you the ability to scale to zero when there are no requests coming to your website.In this codelab, you'll begin with a Docker image created via Cloud Build, which you trigger in Cloud Shell. Then, you'll deploy that image to Cloud Run with a command in Cloud Shell.Start codelabBuild and deploy a Python service to Cloud RunBuild and deploy a Node.js service to Cloud RunView full list of supported languages for web servicesAI inference workloadsHost LLMs with Cloud Run GPUsPerform real-time AI inferencing using LLMs of your choice on Cloud Run, including Llama 3.1, Mistral, and Gemma 2. Also ideal for compute-intensive applications, such as image recognition, video transcoding, and streaming.Configure GPUsBest practices: AI inference on Cloud Run with GPUsLearn more about GPUs on Google CloudSign up for Cloud Run with GPUsTutorials, quickstarts, & labsHost LLMs with Cloud Run GPUsPerform real-time AI inferencing using LLMs of your choice on Cloud Run, including Llama 3.1, Mistral, and Gemma 2. 
Also ideal for compute-intensive applications, such as image recognition, video transcoding, and streaming.Configure GPUsBest practices: AI inference on Cloud Run with GPUsLearn more about GPUs on Google CloudSign up for Cloud Run with GPUsAPIs and microservicesYou can build a REST API, GraphQL API, or private microservices that communicate over HTTP or gRPC.Deploy to Cloud Run from a Git repositoryServe traffic from multiple regionsUsing WebSocketsUsing gRPC Tutorials, quickstarts, & labsYou can build a REST API, GraphQL API, or private microservices that communicate over HTTP or gRPC.Deploy to Cloud Run from a Git repositoryServe traffic from multiple regionsUsing WebSocketsUsing gRPC Streaming data processingCloud Run services can receive messages from Pub/Sub push subscriptions and events from Eventarc.Trigger from Pub/Sub pushTutorials, quickstarts, & labsCloud Run services can receive messages from Pub/Sub push subscriptions and events from Eventarc.Trigger from Pub/Sub pushBatch data processingRun scripts, cron jobs, or parallelized data processing workloads. Great for long running jobs or jobs where time to completion matters.Execute jobs on a scheduleTutorials, quickstarts, & labsRun scripts, cron jobs, or parallelized data processing workloads. Great for long running jobs or jobs where time to completion matters.Execute jobs on a schedulePricingHow Cloud Run pricing worksPay-per-use, with an always free tier, rounded up to the nearest 100 millisecond. If you don't use it, you don't pay for it. SKUPrice beyond free tier without discountFreeCPU$0.00001800 / vCPU-secondFirst 240,000 vCPU-seconds free per monthMemory$0.00000200 / GiB-secondFirst 450,000 GiB-seconds free per monthView pricing details Lower continuous use of Cloud Run by purchasing Committed use discounts.How Cloud Run pricing worksPay-per-use, with an always free tier, rounded up to the nearest 100 millisecond. If you don't use it, you don't pay for it. CPUPrice beyond free tier without discount$0.00001800 / vCPU-secondFreeFirst 240,000 vCPU-seconds free per monthMemoryPrice beyond free tier without discount$0.00000200 / GiB-secondFreeFirst 450,000 GiB-seconds free per monthView pricing details Lower continuous use of Cloud Run by purchasing Committed use discounts.PRICING CALCULATOREstimate your monthly Cloud Run costs, including region specific pricing and fees.Estimate your costsCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry using Cloud Run in the console, with two million requests free per month.Go to my consoleHave a large project?Contact salesDeploy a sample containerQuickstartSimple integrationsCloud Run integrationsMigrate to Cloud RunMigrate an existing serviceBusiness CaseExplore how other businesses cut costs, increase ROI, and drive innovation with Cloud RunBBC: Keeping up with a busy news day with an end-to-end serverless architectureThe BBC went from running 150-200 container instances to over 1,000 during a massive traffic spike. “And the infrastructure just worked.”Read the storyRelated contentCarrefour: Optimizing customer engagement and shopping experiencesThe L’Oréal Beauty Tech Data PlatformGroupe Les Echos: A web modernization storyFeatured customersPartners & IntegrationCloud Run easily integrates with a wide variety of partner technologies.Partners:See all partnersFAQExpand allCan you deploy and host a website with Cloud Run? 
With Cloud Run, you can manage and deploy your website without any of the overhead that you need for VM- or Kubernetes-based deployments. Not only is that a simpler approach from a management perspective, but it also gives you the ability to scale to zero when there are no requests coming to your website.Deploy and host a website with Cloud RunWhat's the difference between Cloud Run and App Engine? Cloud Run is designed to improve upon the App Engine experience, incorporating many of the best features of both App Engine standard environment and App Engine flexible environment. Cloud Run services can handle the same workloads as App Engine services, including deploying and hosting websites, but Cloud Run offers customers much more flexibility in implementing these services. Compare App Engine and Cloud RunOther resources and supportView resourcesCommited use discountsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Run(2).txt b/Cloud_Run(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..eeb00eae6e67a800b34d6bd9a7dc07c84e82891b --- /dev/null +++ b/Cloud_Run(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/run +Date Scraped: 2025-02-23T12:02:59.045Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Cloud RunBuild apps or websites quickly on a fully managed platformRun frontend and backend services, batch jobs, host LLMs, and queue processing workloads without the need to manage infrastructure.Get two million requests free per month.Try it in consoleContact salesProduct highlightsThe flexibility of containers with the simplicity of serverlessStart with source code and run your app in 20+ regions at onceOnly pay when your code is runningDeploy and build web apps and websitesCloud Run in a minute1:32FeaturesAny language, any library, any binaryYou can write code using your favorite language, framework, and libraries, package it up as a container, run "gcloud run deploy," and your app will be live—provided with everything it needs to run in production. Building a container is completely optional. If you're using Go, Node.js, Python, Java, .NET Core, or Ruby, you can use the source-based deployment option that builds the container for you, using the best practices for the language you're using. Fast autoscalingWhether you own event-driven, long running services, or deploy containerized jobs to process data, Cloud Run automatically scales your containers up and down from zero—this means you only pay when your code is running.GPUs(Now in public preview) On-demand access to NVIDIA L4 GPUs for running AI inference workloads. GPU instances start in 5 seconds and scale to zero.Host your LLMs on Cloud RunRead blogCloud Run functions(Now in public preview) Write and deploy functions directly with Cloud Run, giving you complete control over the underlying service configuration.Cloud Functions is now Cloud Run functionsRead blogAutomatically build container images from your sourceCloud Run can also automate how you get to production, using buildpacks to enable you to deploy directly from source—without having to install Docker on your machine. You can automate your builds and deploy your code whenever new commits are pushed to a given branch of a Git repository.Run scheduled jobs to completionCloud Run jobs allow you to perform batch processing, with instances running in parallel. 
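As a rough sketch of how the parallel instances mentioned above can split a workload, each task of a Cloud Run job can read the task index and task count that the environment provides and process only its own slice. The environment variable names below reflect Cloud Run's documented job variables; the work items and processing function are placeholders invented for the example.

```python
import os

# Sketch of how one task in a parallel Cloud Run job can shard work.
# Cloud Run jobs expose the task index and task count to each container
# through environment variables; the work items below are placeholders.

TASK_INDEX = int(os.environ.get("CLOUD_RUN_TASK_INDEX", 0))
TASK_COUNT = int(os.environ.get("CLOUD_RUN_TASK_COUNT", 1))

work_items = [f"record-{i}" for i in range(1000)]  # hypothetical input set

def process(item: str) -> None:
    # A real job would do actual work here (transform a file, call an API, etc.).
    print(f"task {TASK_INDEX}/{TASK_COUNT} processing {item}")

# Each task handles every TASK_COUNT-th item, starting at its own index,
# so the full set is covered exactly once across all parallel tasks.
for item in work_items[TASK_INDEX::TASK_COUNT]:
    process(item)
```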
Execute run-to-completion jobs that do not respond to HTTP requests—all on a serverless platform. Let your jobs run for up to 24 hours.Direct VPC connectivitySend traffic to a VPC network directly and connect with all the services you have running on the VPC. View all featuresHow It WorksCloud Run is a fully managed platform that enables you to run your code directly on top of Google’s scalable infrastructure. Cloud Run is simple, automated, and designed to make you more productive.View documentationWhat is Cloud Run?Common UsesWebsites and web applicationsDeploy and host a website with Cloud RunBuild your web app using your favorite stack, access your SQL database, and render dynamic HTML pages. Cloud Run also gives you the ability to scale to zero when there are no requests coming to your website.In this codelab, you'll begin with a Docker image created via Cloud Build, which you trigger in Cloud Shell. Then, you'll deploy that image to Cloud Run with a command in Cloud Shell.Start codelabBuild and deploy a Python service to Cloud RunBuild and deploy a Node.js service to Cloud RunView full list of supported languages for web servicesTutorials, quickstarts, & labsDeploy and host a website with Cloud RunBuild your web app using your favorite stack, access your SQL database, and render dynamic HTML pages. Cloud Run also gives you the ability to scale to zero when there are no requests coming to your website.In this codelab, you'll begin with a Docker image created via Cloud Build, which you trigger in Cloud Shell. Then, you'll deploy that image to Cloud Run with a command in Cloud Shell.Start codelabBuild and deploy a Python service to Cloud RunBuild and deploy a Node.js service to Cloud RunView full list of supported languages for web servicesAI inference workloadsHost LLMs with Cloud Run GPUsPerform real-time AI inferencing using LLMs of your choice on Cloud Run, including Llama 3.1, Mistral, and Gemma 2. Also ideal for compute-intensive applications, such as image recognition, video transcoding, and streaming.Configure GPUsBest practices: AI inference on Cloud Run with GPUsLearn more about GPUs on Google CloudSign up for Cloud Run with GPUsTutorials, quickstarts, & labsHost LLMs with Cloud Run GPUsPerform real-time AI inferencing using LLMs of your choice on Cloud Run, including Llama 3.1, Mistral, and Gemma 2. Also ideal for compute-intensive applications, such as image recognition, video transcoding, and streaming.Configure GPUsBest practices: AI inference on Cloud Run with GPUsLearn more about GPUs on Google CloudSign up for Cloud Run with GPUsAPIs and microservicesYou can build a REST API, GraphQL API, or private microservices that communicate over HTTP or gRPC.Deploy to Cloud Run from a Git repositoryServe traffic from multiple regionsUsing WebSocketsUsing gRPC Tutorials, quickstarts, & labsYou can build a REST API, GraphQL API, or private microservices that communicate over HTTP or gRPC.Deploy to Cloud Run from a Git repositoryServe traffic from multiple regionsUsing WebSocketsUsing gRPC Streaming data processingCloud Run services can receive messages from Pub/Sub push subscriptions and events from Eventarc.Trigger from Pub/Sub pushTutorials, quickstarts, & labsCloud Run services can receive messages from Pub/Sub push subscriptions and events from Eventarc.Trigger from Pub/Sub pushBatch data processingRun scripts, cron jobs, or parallelized data processing workloads. 
Great for long running jobs or jobs where time to completion matters.Execute jobs on a scheduleTutorials, quickstarts, & labsRun scripts, cron jobs, or parallelized data processing workloads. Great for long running jobs or jobs where time to completion matters.Execute jobs on a schedulePricingHow Cloud Run pricing worksPay-per-use, with an always free tier, rounded up to the nearest 100 millisecond. If you don't use it, you don't pay for it. SKUPrice beyond free tier without discountFreeCPU$0.00001800 / vCPU-secondFirst 240,000 vCPU-seconds free per monthMemory$0.00000200 / GiB-secondFirst 450,000 GiB-seconds free per monthView pricing details Lower continuous use of Cloud Run by purchasing Committed use discounts.How Cloud Run pricing worksPay-per-use, with an always free tier, rounded up to the nearest 100 millisecond. If you don't use it, you don't pay for it. CPUPrice beyond free tier without discount$0.00001800 / vCPU-secondFreeFirst 240,000 vCPU-seconds free per monthMemoryPrice beyond free tier without discount$0.00000200 / GiB-secondFreeFirst 450,000 GiB-seconds free per monthView pricing details Lower continuous use of Cloud Run by purchasing Committed use discounts.PRICING CALCULATOREstimate your monthly Cloud Run costs, including region specific pricing and fees.Estimate your costsCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry using Cloud Run in the console, with two million requests free per month.Go to my consoleHave a large project?Contact salesDeploy a sample containerQuickstartSimple integrationsCloud Run integrationsMigrate to Cloud RunMigrate an existing serviceBusiness CaseExplore how other businesses cut costs, increase ROI, and drive innovation with Cloud RunBBC: Keeping up with a busy news day with an end-to-end serverless architectureThe BBC went from running 150-200 container instances to over 1,000 during a massive traffic spike. “And the infrastructure just worked.”Read the storyRelated contentCarrefour: Optimizing customer engagement and shopping experiencesThe L’Oréal Beauty Tech Data PlatformGroupe Les Echos: A web modernization storyFeatured customersPartners & IntegrationCloud Run easily integrates with a wide variety of partner technologies.Partners:See all partnersFAQExpand allCan you deploy and host a website with Cloud Run? With Cloud Run, you can manage and deploy your website without any of the overhead that you need for VM- or Kubernetes-based deployments. Not only is that a simpler approach from a management perspective, but it also gives you the ability to scale to zero when there are no requests coming to your website.Deploy and host a website with Cloud RunWhat's the difference between Cloud Run and App Engine? Cloud Run is designed to improve upon the App Engine experience, incorporating many of the best features of both App Engine standard environment and App Engine flexible environment. Cloud Run services can handle the same workloads as App Engine services, including deploying and hosting websites, but Cloud Run offers customers much more flexibility in implementing these services. 
Compare App Engine and Cloud RunOther resources and supportView resourcesCommited use discountsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Run(3).txt b/Cloud_Run(3).txt new file mode 100644 index 0000000000000000000000000000000000000000..f2d6ea911876578c1b87d81a84b549f05ccb7a02 --- /dev/null +++ b/Cloud_Run(3).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/run +Date Scraped: 2025-02-23T12:09:38.737Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Cloud RunBuild apps or websites quickly on a fully managed platformRun frontend and backend services, batch jobs, host LLMs, and queue processing workloads without the need to manage infrastructure.Get two million requests free per month.Try it in consoleContact salesProduct highlightsThe flexibility of containers with the simplicity of serverlessStart with source code and run your app in 20+ regions at onceOnly pay when your code is runningDeploy and build web apps and websitesCloud Run in a minute1:32FeaturesAny language, any library, any binaryYou can write code using your favorite language, framework, and libraries, package it up as a container, run "gcloud run deploy," and your app will be live—provided with everything it needs to run in production. Building a container is completely optional. If you're using Go, Node.js, Python, Java, .NET Core, or Ruby, you can use the source-based deployment option that builds the container for you, using the best practices for the language you're using. Fast autoscalingWhether you own event-driven, long running services, or deploy containerized jobs to process data, Cloud Run automatically scales your containers up and down from zero—this means you only pay when your code is running.GPUs(Now in public preview) On-demand access to NVIDIA L4 GPUs for running AI inference workloads. GPU instances start in 5 seconds and scale to zero.Host your LLMs on Cloud RunRead blogCloud Run functions(Now in public preview) Write and deploy functions directly with Cloud Run, giving you complete control over the underlying service configuration.Cloud Functions is now Cloud Run functionsRead blogAutomatically build container images from your sourceCloud Run can also automate how you get to production, using buildpacks to enable you to deploy directly from source—without having to install Docker on your machine. You can automate your builds and deploy your code whenever new commits are pushed to a given branch of a Git repository.Run scheduled jobs to completionCloud Run jobs allow you to perform batch processing, with instances running in parallel. Execute run-to-completion jobs that do not respond to HTTP requests—all on a serverless platform. Let your jobs run for up to 24 hours.Direct VPC connectivitySend traffic to a VPC network directly and connect with all the services you have running on the VPC. View all featuresHow It WorksCloud Run is a fully managed platform that enables you to run your code directly on top of Google’s scalable infrastructure. Cloud Run is simple, automated, and designed to make you more productive.View documentationWhat is Cloud Run?Common UsesWebsites and web applicationsDeploy and host a website with Cloud RunBuild your web app using your favorite stack, access your SQL database, and render dynamic HTML pages. 
Cloud Run also gives you the ability to scale to zero when there are no requests coming to your website.In this codelab, you'll begin with a Docker image created via Cloud Build, which you trigger in Cloud Shell. Then, you'll deploy that image to Cloud Run with a command in Cloud Shell.Start codelabBuild and deploy a Python service to Cloud RunBuild and deploy a Node.js service to Cloud RunView full list of supported languages for web servicesTutorials, quickstarts, & labsDeploy and host a website with Cloud RunBuild your web app using your favorite stack, access your SQL database, and render dynamic HTML pages. Cloud Run also gives you the ability to scale to zero when there are no requests coming to your website.In this codelab, you'll begin with a Docker image created via Cloud Build, which you trigger in Cloud Shell. Then, you'll deploy that image to Cloud Run with a command in Cloud Shell.Start codelabBuild and deploy a Python service to Cloud RunBuild and deploy a Node.js service to Cloud RunView full list of supported languages for web servicesAI inference workloadsHost LLMs with Cloud Run GPUsPerform real-time AI inferencing using LLMs of your choice on Cloud Run, including Llama 3.1, Mistral, and Gemma 2. Also ideal for compute-intensive applications, such as image recognition, video transcoding, and streaming.Configure GPUsBest practices: AI inference on Cloud Run with GPUsLearn more about GPUs on Google CloudSign up for Cloud Run with GPUsTutorials, quickstarts, & labsHost LLMs with Cloud Run GPUsPerform real-time AI inferencing using LLMs of your choice on Cloud Run, including Llama 3.1, Mistral, and Gemma 2. Also ideal for compute-intensive applications, such as image recognition, video transcoding, and streaming.Configure GPUsBest practices: AI inference on Cloud Run with GPUsLearn more about GPUs on Google CloudSign up for Cloud Run with GPUsAPIs and microservicesYou can build a REST API, GraphQL API, or private microservices that communicate over HTTP or gRPC.Deploy to Cloud Run from a Git repositoryServe traffic from multiple regionsUsing WebSocketsUsing gRPC Tutorials, quickstarts, & labsYou can build a REST API, GraphQL API, or private microservices that communicate over HTTP or gRPC.Deploy to Cloud Run from a Git repositoryServe traffic from multiple regionsUsing WebSocketsUsing gRPC Streaming data processingCloud Run services can receive messages from Pub/Sub push subscriptions and events from Eventarc.Trigger from Pub/Sub pushTutorials, quickstarts, & labsCloud Run services can receive messages from Pub/Sub push subscriptions and events from Eventarc.Trigger from Pub/Sub pushBatch data processingRun scripts, cron jobs, or parallelized data processing workloads. Great for long running jobs or jobs where time to completion matters.Execute jobs on a scheduleTutorials, quickstarts, & labsRun scripts, cron jobs, or parallelized data processing workloads. Great for long running jobs or jobs where time to completion matters.Execute jobs on a schedulePricingHow Cloud Run pricing worksPay-per-use, with an always free tier, rounded up to the nearest 100 millisecond. If you don't use it, you don't pay for it. 
SKUPrice beyond free tier without discountFreeCPU$0.00001800 / vCPU-secondFirst 240,000 vCPU-seconds free per monthMemory$0.00000200 / GiB-secondFirst 450,000 GiB-seconds free per monthView pricing details Lower continuous use of Cloud Run by purchasing Committed use discounts.How Cloud Run pricing worksPay-per-use, with an always free tier, rounded up to the nearest 100 millisecond. If you don't use it, you don't pay for it. CPUPrice beyond free tier without discount$0.00001800 / vCPU-secondFreeFirst 240,000 vCPU-seconds free per monthMemoryPrice beyond free tier without discount$0.00000200 / GiB-secondFreeFirst 450,000 GiB-seconds free per monthView pricing details Lower continuous use of Cloud Run by purchasing Committed use discounts.PRICING CALCULATOREstimate your monthly Cloud Run costs, including region specific pricing and fees.Estimate your costsCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry using Cloud Run in the console, with two million requests free per month.Go to my consoleHave a large project?Contact salesDeploy a sample containerQuickstartSimple integrationsCloud Run integrationsMigrate to Cloud RunMigrate an existing serviceBusiness CaseExplore how other businesses cut costs, increase ROI, and drive innovation with Cloud RunBBC: Keeping up with a busy news day with an end-to-end serverless architectureThe BBC went from running 150-200 container instances to over 1,000 during a massive traffic spike. “And the infrastructure just worked.”Read the storyRelated contentCarrefour: Optimizing customer engagement and shopping experiencesThe L’Oréal Beauty Tech Data PlatformGroupe Les Echos: A web modernization storyFeatured customersPartners & IntegrationCloud Run easily integrates with a wide variety of partner technologies.Partners:See all partnersFAQExpand allCan you deploy and host a website with Cloud Run? With Cloud Run, you can manage and deploy your website without any of the overhead that you need for VM- or Kubernetes-based deployments. Not only is that a simpler approach from a management perspective, but it also gives you the ability to scale to zero when there are no requests coming to your website.Deploy and host a website with Cloud RunWhat's the difference between Cloud Run and App Engine? Cloud Run is designed to improve upon the App Engine experience, incorporating many of the best features of both App Engine standard environment and App Engine flexible environment. Cloud Run services can handle the same workloads as App Engine services, including deploying and hosting websites, but Cloud Run offers customers much more flexibility in implementing these services. 
Compare App Engine and Cloud RunOther resources and supportView resourcesCommited use discountsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Run.txt b/Cloud_Run.txt new file mode 100644 index 0000000000000000000000000000000000000000..126cb1969559dceac78e206429ae67a64e4cc45b --- /dev/null +++ b/Cloud_Run.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/run +Date Scraped: 2025-02-23T12:01:30.642Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Cloud RunBuild apps or websites quickly on a fully managed platformRun frontend and backend services, batch jobs, host LLMs, and queue processing workloads without the need to manage infrastructure.Get two million requests free per month.Try it in consoleContact salesProduct highlightsThe flexibility of containers with the simplicity of serverlessStart with source code and run your app in 20+ regions at onceOnly pay when your code is runningDeploy and build web apps and websitesCloud Run in a minute1:32FeaturesAny language, any library, any binaryYou can write code using your favorite language, framework, and libraries, package it up as a container, run "gcloud run deploy," and your app will be live—provided with everything it needs to run in production. Building a container is completely optional. If you're using Go, Node.js, Python, Java, .NET Core, or Ruby, you can use the source-based deployment option that builds the container for you, using the best practices for the language you're using. Fast autoscalingWhether you own event-driven, long running services, or deploy containerized jobs to process data, Cloud Run automatically scales your containers up and down from zero—this means you only pay when your code is running.GPUs(Now in public preview) On-demand access to NVIDIA L4 GPUs for running AI inference workloads. GPU instances start in 5 seconds and scale to zero.Host your LLMs on Cloud RunRead blogCloud Run functions(Now in public preview) Write and deploy functions directly with Cloud Run, giving you complete control over the underlying service configuration.Cloud Functions is now Cloud Run functionsRead blogAutomatically build container images from your sourceCloud Run can also automate how you get to production, using buildpacks to enable you to deploy directly from source—without having to install Docker on your machine. You can automate your builds and deploy your code whenever new commits are pushed to a given branch of a Git repository.Run scheduled jobs to completionCloud Run jobs allow you to perform batch processing, with instances running in parallel. Execute run-to-completion jobs that do not respond to HTTP requests—all on a serverless platform. Let your jobs run for up to 24 hours.Direct VPC connectivitySend traffic to a VPC network directly and connect with all the services you have running on the VPC. View all featuresHow It WorksCloud Run is a fully managed platform that enables you to run your code directly on top of Google’s scalable infrastructure. Cloud Run is simple, automated, and designed to make you more productive.View documentationWhat is Cloud Run?Common UsesWebsites and web applicationsDeploy and host a website with Cloud RunBuild your web app using your favorite stack, access your SQL database, and render dynamic HTML pages. 
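As a minimal sketch of the kind of web app this use case describes, assuming Flask as the framework and a placeholder response body, a Cloud Run service only needs to serve HTTP on the port passed in the PORT environment variable:

```python
import os
from flask import Flask  # assumes Flask is declared in the app's dependencies

app = Flask(__name__)

@app.route("/")
def index() -> str:
    # Placeholder page; a real app would render templates or query a database.
    return "Hello from Cloud Run!"

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via $PORT.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```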
Cloud Run also gives you the ability to scale to zero when there are no requests coming to your website.In this codelab, you'll begin with a Docker image created via Cloud Build, which you trigger in Cloud Shell. Then, you'll deploy that image to Cloud Run with a command in Cloud Shell.Start codelabBuild and deploy a Python service to Cloud RunBuild and deploy a Node.js service to Cloud RunView full list of supported languages for web servicesTutorials, quickstarts, & labsDeploy and host a website with Cloud RunBuild your web app using your favorite stack, access your SQL database, and render dynamic HTML pages. Cloud Run also gives you the ability to scale to zero when there are no requests coming to your website.In this codelab, you'll begin with a Docker image created via Cloud Build, which you trigger in Cloud Shell. Then, you'll deploy that image to Cloud Run with a command in Cloud Shell.Start codelabBuild and deploy a Python service to Cloud RunBuild and deploy a Node.js service to Cloud RunView full list of supported languages for web servicesAI inference workloadsHost LLMs with Cloud Run GPUsPerform real-time AI inferencing using LLMs of your choice on Cloud Run, including Llama 3.1, Mistral, and Gemma 2. Also ideal for compute-intensive applications, such as image recognition, video transcoding, and streaming.Configure GPUsBest practices: AI inference on Cloud Run with GPUsLearn more about GPUs on Google CloudSign up for Cloud Run with GPUsTutorials, quickstarts, & labsHost LLMs with Cloud Run GPUsPerform real-time AI inferencing using LLMs of your choice on Cloud Run, including Llama 3.1, Mistral, and Gemma 2. Also ideal for compute-intensive applications, such as image recognition, video transcoding, and streaming.Configure GPUsBest practices: AI inference on Cloud Run with GPUsLearn more about GPUs on Google CloudSign up for Cloud Run with GPUsAPIs and microservicesYou can build a REST API, GraphQL API, or private microservices that communicate over HTTP or gRPC.Deploy to Cloud Run from a Git repositoryServe traffic from multiple regionsUsing WebSocketsUsing gRPC Tutorials, quickstarts, & labsYou can build a REST API, GraphQL API, or private microservices that communicate over HTTP or gRPC.Deploy to Cloud Run from a Git repositoryServe traffic from multiple regionsUsing WebSocketsUsing gRPC Streaming data processingCloud Run services can receive messages from Pub/Sub push subscriptions and events from Eventarc.Trigger from Pub/Sub pushTutorials, quickstarts, & labsCloud Run services can receive messages from Pub/Sub push subscriptions and events from Eventarc.Trigger from Pub/Sub pushBatch data processingRun scripts, cron jobs, or parallelized data processing workloads. Great for long running jobs or jobs where time to completion matters.Execute jobs on a scheduleTutorials, quickstarts, & labsRun scripts, cron jobs, or parallelized data processing workloads. Great for long running jobs or jobs where time to completion matters.Execute jobs on a schedulePricingHow Cloud Run pricing worksPay-per-use, with an always free tier, rounded up to the nearest 100 millisecond. If you don't use it, you don't pay for it. 
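To make the pay-per-use arithmetic concrete, here is an illustrative sketch that applies the CPU and memory rates and monthly free tiers listed in the table that follows; the usage figures are assumptions chosen only for the example, and request-based charges are not included.

```python
# Illustrative Cloud Run cost estimate using the published CPU and memory rates
# and monthly free tiers shown in the pricing table. The usage figures
# (vCPU-seconds and GiB-seconds consumed) are assumptions for the example.

CPU_RATE = 0.000018         # USD per vCPU-second beyond the free tier
MEM_RATE = 0.000002         # USD per GiB-second beyond the free tier
FREE_CPU_SECONDS = 240_000  # vCPU-seconds free per month
FREE_MEM_SECONDS = 450_000  # GiB-seconds free per month

def estimate_cost(vcpu_seconds: float, gib_seconds: float) -> float:
    """Estimated monthly CPU plus memory cost in USD (request fees not included)."""
    billable_cpu = max(0.0, vcpu_seconds - FREE_CPU_SECONDS)
    billable_mem = max(0.0, gib_seconds - FREE_MEM_SECONDS)
    return round(billable_cpu * CPU_RATE + billable_mem * MEM_RATE, 2)

# Example: a service that consumed 1,000,000 vCPU-seconds and 2,000,000
# GiB-seconds of memory in a month.
print(estimate_cost(1_000_000, 2_000_000))  # about $16.78 under these assumptions
```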
SKUPrice beyond free tier without discountFreeCPU$0.00001800 / vCPU-secondFirst 240,000 vCPU-seconds free per monthMemory$0.00000200 / GiB-secondFirst 450,000 GiB-seconds free per monthView pricing details Lower continuous use of Cloud Run by purchasing Committed use discounts.How Cloud Run pricing worksPay-per-use, with an always free tier, rounded up to the nearest 100 millisecond. If you don't use it, you don't pay for it. CPUPrice beyond free tier without discount$0.00001800 / vCPU-secondFreeFirst 240,000 vCPU-seconds free per monthMemoryPrice beyond free tier without discount$0.00000200 / GiB-secondFreeFirst 450,000 GiB-seconds free per monthView pricing details Lower continuous use of Cloud Run by purchasing Committed use discounts.PRICING CALCULATOREstimate your monthly Cloud Run costs, including region specific pricing and fees.Estimate your costsCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry using Cloud Run in the console, with two million requests free per month.Go to my consoleHave a large project?Contact salesDeploy a sample containerQuickstartSimple integrationsCloud Run integrationsMigrate to Cloud RunMigrate an existing serviceBusiness CaseExplore how other businesses cut costs, increase ROI, and drive innovation with Cloud RunBBC: Keeping up with a busy news day with an end-to-end serverless architectureThe BBC went from running 150-200 container instances to over 1,000 during a massive traffic spike. “And the infrastructure just worked.”Read the storyRelated contentCarrefour: Optimizing customer engagement and shopping experiencesThe L’Oréal Beauty Tech Data PlatformGroupe Les Echos: A web modernization storyFeatured customersPartners & IntegrationCloud Run easily integrates with a wide variety of partner technologies.Partners:See all partnersFAQExpand allCan you deploy and host a website with Cloud Run? With Cloud Run, you can manage and deploy your website without any of the overhead that you need for VM- or Kubernetes-based deployments. Not only is that a simpler approach from a management perspective, but it also gives you the ability to scale to zero when there are no requests coming to your website.Deploy and host a website with Cloud RunWhat's the difference between Cloud Run and App Engine? Cloud Run is designed to improve upon the App Engine experience, incorporating many of the best features of both App Engine standard environment and App Engine flexible environment. Cloud Run services can handle the same workloads as App Engine services, including deploying and hosting websites, but Cloud Run offers customers much more flexibility in implementing these services. Compare App Engine and Cloud RunOther resources and supportView resourcesCommited use discountsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_SDK.txt b/Cloud_SDK.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae574ac29e666954121201f4424e9fb71d49e607 --- /dev/null +++ b/Cloud_SDK.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/sdk +Date Scraped: 2025-02-23T12:04:25.936Z + +Content: +Tech leaders: Get an insider view of Google Cloud’s App Dev and Infrastructure solutions on Oct 30 at 9 AM PT. Save your seat.Jump to Cloud SDK: Libraries and Command Line InterfaceCloud SDKLibraries and tools for interacting with Google Cloud products and services. 
Cloud SDK is available at no charge for users with a Google Cloud account.Install Google Cloud CLIContact salesIntegrate APIs using Client Libraries for Java, C++, Python, Node.js, Ruby, Go, .NET, PHP, and ABAPScript or interact with cloud resources at scale using the Google Cloud CLIAccelerate local development with emulators for Pub/Sub, Spanner, Bigtable, and Datastore VIDEOWhat is the Google Cloud SDK?3:00Key featuresKey featuresSDK Client Libraries for popular programming languagesCloud SDK provides language-specific Cloud Client Libraries supporting each language’s natural conventions and styles. This makes it easier for you to interact with Google Cloud APIs in your language of choice. Client libraries also handle authentication, reduce the amount of necessary boilerplate code, and provide helper functions for pagination of large datasets and asynchronous handling of long-running operations.Google Cloud Command Line Interface (gcloud CLI)The gcloud CLI manages authentication, local configuration, developer workflow, and general interactions with Google Cloud resources. With the Google Cloud CLI, it’s easy to perform many common cloud tasks like creating a Compute Engine VM instance, managing a Google Kubernetes Engine cluster, and deploying an App Engine application, either from the command line or in scripts and other automations.View all features5:53Learn about the Cloud SDK, the gcloud command line interface, and Cloud Shell.All featuresCloud SDKs by languageGoogle Cloud SDK: Tools for all languagesGoogle Cloud CLI lets you manage resources and services from the command line. It also contains service and data emulators to speed up local development.Cloud Shell lets you code or use a terminal directly in the web-browser.Cloud Code provides IDE extensions for VSCode and IntelliJ.Google Cloud SDK for JavaCloud SDK Libraries JavaSpring Framework SupportGoogle Cloud SDK for GoCloud SDK Libraries GoGoogle Cloud SDK for PythonCloud SDK Libraries PythonGoogle Cloud SDK for RubyCloud SDK LibrariesRubyGoogle Cloud SDK for PHPCloud SDK LibrariesPHPGoogle Cloud SDK for C#Cloud SDK Libraries C#Google Cloud SDK for C++Cloud SDK Libraries C++Google Cloud SDK for Node.jsCloud SDK LibrariesNode.jsGoogle Cloud SDK for ABAPABAP SDK for Google CloudABAPOtherAdditional (paid) tools for tracing, logging, monitoring, and error reporting.PricingPricingCloud SDK is available at no charge for users with a Google Cloud account.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Install Google Cloud CLINeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_SQL(1).txt b/Cloud_SQL(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..b6329677986b8c0c8c2bcca5c5c7d7e4dc2c8caf --- /dev/null +++ b/Cloud_SQL(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/sql +Date Scraped: 2025-02-23T12:03:55.810Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Cloud SQLFocus on your application, and leave the database to usFully managed, cost-effective relational database service for PostgreSQL, MySQL, and SQL Server. 
Try Enterprise Plus edition for a 99.99% availability SLA and category-leading performance.New customers get $300 in free credits to try Cloud SQL and other Google Cloud products.Try it in consoleContact salesEasily build gen AI applications that are accurate, transparent, and reliable with LangChain integration. Cloud SQL has three LangChain integrations - Document loader for loading and storing information from documents, Vector stores for enabling semantic search, and Chat Messages Memory for enabling chains to recall previous conversations. Visit the GitHub repository to learn more.Product highlightsSub-second downtime maintenance with Enterprise Plus editionPrice-performance options for every budget, app, and workloadGemini-assisted management and vector search for gen AI appsCloud SQL in a minute1:31FeaturesFully managedCloud SQL manages your databases so you don't have to, enabling your business to run without disruption. We automate all your backups, replication, patches, encryption, and storage capacity increases to give your applications the reliability, scalability, and security they need.VIDEOWhat is Cloud SQL?3:09Price performance options Cloud SQL provides the flexibility to choose the right capabilities based on your performance, availability, and data protection needs for your database workloads. The Enterprise edition provides the lowest cost with balanced performance, while the Enterprise Plus edition is suitable for the most demanding transactional workloads, providing the highest performance with a fully optimized software and hardware stack featuring a data cache option. The data cache leverages flash memory to lower read latency and improve throughput by intelligently caching data across memory and high speed local storage. SQL Server Enterprise Plus instances offers two new machine families, Performance optimized (1:8 core memory ratio) and Memory optimized (1:32 core memory ratio), that deliver better performance.Learn how Chess.com used Enterprise Plus to cut response time by 71%High availability with near sub-second downtime maintenanceEasily configure built-in high availability with automatic failover across zones to protect your applications from a variety of possible failures. The Enterprise edition provides a 99.95% availability SLA for general purpose workloads, while the Enterprise Plus edition offers a 99.99% availability SLA for the most demanding workloads featuring maintenance and instance scale-up with typically sub-second downtime. It also offers enhanced disaster recovery capabilities that allow failover and switchback to your original topology with zero data loss and no application changes.Learn about Cloud SQL's high availability capabilities in this whitepaperGemini in Cloud SQLSimplify all aspects of the database journey with AI-powered assistance, helping you focus on what matters most. Gemini in Cloud SQL, in preview, simplifies all aspects of database operations, including development, performance optimization, fleet management, governance, and migration.Gemini in Database Migration Service, in preview, helps you review and convert database-resident code like stored procedures, triggers, and functions and run it in Cloud SQL for PostgreSQL. Database Center, in Preview, helps proactively de-risk your fleet with intelligent performance and security recommendations. With Gemini enabled, Database Center makes optimizing your database fleet incredibly intuitive. 
Use a natural-language chat interface to ask questions, quickly resolve fleet issues, and get optimization recommendations.Learn how you can supercharge database development and management with Gemini in Cloud SQLFirebase Data ConnectFirebase Data Connect bridges the gap between Firebase's powerful mobile/web development features and PostgreSQL's rich data management capabilities. Leverage the power of Cloud SQL for PostgreSQL as your backend while enjoying the seamless Firebase development experience. Build faster, smarter, and more effectively.Learn more about Firebase Data ConnectVector SearchVector Search is a critical capability for building useful and accurate gen AI-powered apps. Cloud SQL for PostgreSQL and MySQL support two search approaches for balancing speed and accuracy. Approximate nearest neighbor (ANN) vector search is ideal for large datasets where close matches suffice, while exact nearest neighbor (KNN) vector search is typically used for high precision on smaller datasets. Integrate vector search directly into your existing AI-powered apps without needing to learn and manage a separate system.VIDEOUsing pgvector to build AI-powered apps7:52Using pgvector to build AI-powered appsCost-effectiveAccording to an IDC study, Cloud SQL customers achieved a three-year ROI of 246% and a payback period of 11 months. Committed use discounts offer additional savings for one to three year commitments.Learn how companies achieved a three-year ROI of 246% in this IDC reportOpen and standards-basedCloud SQL supports the most popular open source and commercial engines, including MySQL, PostgreSQL, and SQL Server with rich support for extensions, configuration flags, and popular developer tools. It's easy to get started—simply bring your existing skills over and enjoy the flexibility to work the way you want. You can create a database with just a few clicks in the console and connect your application.Choosing a PostgreSQL database on Google Cloud19:20LangChain integrationEasily build gen AI applications with LangChain. Cloud SQL features three LangChain integrations - Document loader for loading and storing information from documents, Vector stores for enabling semantic search, and Chat Messages History for saving/fetching previous chat conversations. Visit the Github repository for PostgreSQL, MySQL, and SQL Server to learn more.Easy migrationsNo matter where your database is located—whether on-premises, on Compute Engine, or in other clouds—Database Migration Service (DMS) can migrate it securely and with minimal downtime. DMS leverages the native replication capabilities of the source database to maximize the reliability of your migration. And lift and shift migrations to Cloud SQL are available at no additional charge.Learn how to connect to Cloud SQL from Kubernetes14:16IntegratedCloud SQL seamlessly integrates with Google Cloud services, such as Compute Engine, Cloud Run, Google Kubernetes Engine, and Cloud IAM, allowing developers to build and deploy applications with ease. Provision your database via APIs and third-party tools, and use federated queries from BigQuery and low-latency database replication with Datastream for near real-time insights on operational data.Learn how to connect to Cloud SQL from Kubernetes14:16Data protection and complianceConfigure backups for data protection and restore your instance to an earlier point in time with a retention period of up to 35 days. Configure where your data is stored to comply with data residency requirements. 
Cloud SQL automatically encrypts data, is SSAE 16, ISO 27001, and PCI DSS compliant, and supports HIPAA compliance.Learn how to secure a Cloud SQL for PostgreSQL instanceSecure access and connectivityCloud SQL data is encrypted when in transit on Google’s internal networks and when at rest in database tables, temporary files, and backups. It supports private connectivity with Virtual Private Cloud (VPC), and every Cloud SQL instance includes a network firewall, allowing you to control public network access. Cloud SQL also supports Private Service Connect, which allows you to access your database instances via private IP without going through the internet or using external IP addresses.ScalabilityEasily scale up as your data grows—add compute and storage, and scale out by adding read replicas to handle increasing read traffic. Cloud SQL can also automatically scale up storage capacity when you’re near your limit. Read replicas support high availability, can have their own read replicas, and can be located across regions.Change data capture and replicationStream data across heterogeneous databases, storage systems, and applications reliably and with minimal latency with Datastream and Datastream for BigQuery. Scale up or down with a serverless architecture and no resources to provision or manage, and enable near real-time insights on operational data.View all featuresCompare Cloud SQL to other Google Cloud databasesGoogle Cloud serviceOverviewKey benefitsCloud SQLFully managed, cost-effective relational database serviceTry Cloud SQL for the:Easiest lift and shift to the cloudSame management as the MySQL and SQL Server enginesLowest cost relational database optionAlloyDB for PostgreSQLFull PostgreSQL compatibility with superior performance and scaleLearn more about the benefits of AlloyDB for PostgreSQL, including:4x faster than standard PostgreSQL for transactional workloadsHybrid transactional and analytical processing (HTAP)Up to 100x faster analytical queries than standard PostgreSQLFull AI support, designed to run anywhereSpannerHigh performance and availability at virtually unlimited scale at zero-touch maintenanceLearn more about the benefits of Spanner, including: Strong consistency and global scaleRun large queries with zero impact on operational trafficHighly scalable database with relational and non-relational capabilities, offering 99.999% availabilityCloud SQLOverviewFully managed, cost-effective relational database serviceKey benefitsTry Cloud SQL for the:Easiest lift and shift to the cloudSame management as the MySQL and SQL Server enginesLowest cost relational database optionAlloyDB for PostgreSQLOverviewFull PostgreSQL compatibility with superior performance and scaleKey benefitsLearn more about the benefits of AlloyDB for PostgreSQL, including:4x faster than standard PostgreSQL for transactional workloadsHybrid transactional and analytical processing (HTAP)Up to 100x faster analytical queries than standard PostgreSQLFull AI support, designed to run anywhereSpannerOverviewHigh performance and availability at virtually unlimited scale at zero-touch maintenanceKey benefitsLearn more about the benefits of Spanner, including: Strong consistency and global scaleRun large queries with zero impact on operational trafficHighly scalable database with relational and non-relational capabilities, offering 99.999% availabilityHow It WorksCloud SQL scales up in minutes and replicates data across zones and regions. 
It uses agents for maintenance, logging, monitoring, and configuration, with services backed by a 24/7 SRE team. Manage your database through the console, CLI, or REST API and connect your app through standard database drivers. View documentationCommon UsesDatabase migrationMigrate to a fully managed database solutionSelf-managing a database, such as MySQL, PostgreSQL, or SQL Server, can be inefficient and expensive, with significant effort around patching, hardware maintenance, backups, and tuning. Migrating to a fully managed solution has never been simpler—you can lift and shift your database from any on-premises or cloud location using Database Migration Service with minimal downtime.Lab: Migrate to Cloud SQL for PostgreSQL using Database Migration ServiceDocumentation: Best practices for importing and exporting dataBlog: See how Broadcom migrated 40+ MySQL databases to Google Cloud with minimal downtimeLearning resourcesMigrate to a fully managed database solutionSelf-managing a database, such as MySQL, PostgreSQL, or SQL Server, can be inefficient and expensive, with significant effort around patching, hardware maintenance, backups, and tuning. Migrating to a fully managed solution has never been simpler—you can lift and shift your database from any on-premises or cloud location using Database Migration Service with minimal downtime.Lab: Migrate to Cloud SQL for PostgreSQL using Database Migration ServiceDocumentation: Best practices for importing and exporting dataBlog: See how Broadcom migrated 40+ MySQL databases to Google Cloud with minimal downtimeDatabase modernizationModernize your application with open sourceA cloud deployment is a good opportunity to modernize your database environment by transitioning off legacy, proprietary databases and onto open source databases, such as PostgreSQL. With open source databases having become enterprise-ready, you don't need to compromise on performance, reliability, or security.Learn how to get started with Database Migration ServiceLearning resourcesModernize your application with open sourceA cloud deployment is a good opportunity to modernize your database environment by transitioning off legacy, proprietary databases and onto open source databases, such as PostgreSQL. With open source databases having become enterprise-ready, you don't need to compromise on performance, reliability, or security.Learn how to get started with Database Migration ServiceNew application developmentBuild data-driven applicationsCloud SQL accelerates application development through integration with the larger ecosystem of Google Cloud services, Google partners, and the open source community, while giving you the freedom to work the way you want. Reuse your existing database skills while freeing yourself from mundane database administration tasks and leveraging AI/ML-driven insights and recommendations.Try the lab: Building a containerized app with a scalable databaseLab: Connect an app to a Cloud SQL for PostgreSQL instanceLab: Migrate MySQL database to Google Cloud SQL and then reconfigure the applicationLab: Create Cloud SQL instances with Terraform, then set up the Cloud SQL ProxyTutorials, quickstarts, & labsBuild data-driven applicationsCloud SQL accelerates application development through integration with the larger ecosystem of Google Cloud services, Google partners, and the open source community, while giving you the freedom to work the way you want. 
Reuse your existing database skills while freeing yourself from mundane database administration tasks and leveraging AI/ML-driven insights and recommendations.Try the lab: Building a containerized app with a scalable databaseLab: Connect an app to a Cloud SQL for PostgreSQL instanceLab: Migrate MySQL database to Google Cloud SQL and then reconfigure the applicationLab: Create Cloud SQL instances with Terraform, then set up the Cloud SQL ProxyThird-party applicationsDeploy applications with confidenceMany software vendors build and certify their applications for MySQL, PostgreSQL, and SQL Server. Since Cloud SQL offers standard versions of these databases, including extensions, configuration flags, and drivers, applications can run unmodified. Make your journey to the cloud and let us take tedious database administration tasks off your plate.Find a Google Cloud partnerLearning resourcesDeploy applications with confidenceMany software vendors build and certify their applications for MySQL, PostgreSQL, and SQL Server. Since Cloud SQL offers standard versions of these databases, including extensions, configuration flags, and drivers, applications can run unmodified. Make your journey to the cloud and let us take tedious database administration tasks off your plate.Find a Google Cloud partnerPricingHow Cloud SQL pricing worksPricing varies with editions, engine, and settings, including how much storage, memory, and CPU you provision. Cloud SQL offers per-second billing.ServiceDescriptionPriceComputeCloud SQL EnterpriseBest for general purpose workloads. It offers:1-96 vCPUs1:6.5 core memory ratio99.95% SLA< 60s of planned downtimeStarting at$0.0413per vCPU/hourCloud SQL Enterprise PlusBest for high performance workloads. It offers:Up to 128 vCPUs1:8 core memory ratio99.99% SLA< 10 seconds of planned downtime (MySQL, PostgreSQL)2x read and write performance (MySQL, PostgreSQL)Up to 4x better read performance (SQL Server)Memory optimized machines with 1:32 core memory ratio (SQL Server)Starting at$0.05369per vCPU/hourMemoryCloud SQL Enterpriseup to 624 GBStarting at$0.007per GB/hourCloud SQL Enterprise Plusup to 824 GB (includes performance-optimized machines on SQL Server)Starting at$0.0091per GB/hourCloud SQL Enterprise Plus (SQL Server only)Up to 512 GB for memory-optimized machinesStarting at$0.0161per GB/hourStorage - SSDStorage and networking prices depend on the region where the instance is located; Cloud SQL Enterprise pricing and Cloud SQL Enterprise Plus pricing are the same$0.17per GB/monthStorage - Local SSDThis is only available for Cloud SQL Enterprise Plus$0.16per GB/monthPITR Logs on Google Cloud StorageCloud SQL EnterpriseUp to 7 daysFreeCloud SQL Enterprise PlusUp to 35 daysFreeGet full details on pricing and learn about committed use discounts.Watch video: Supercharging your applications with Cloud SQL Enterprise Plus (2:58)How Cloud SQL pricing worksPricing varies with editions, engine, and settings, including how much storage, memory, and CPU you provision. Cloud SQL offers per-second billing.ComputeDescriptionCloud SQL EnterpriseBest for general purpose workloads. It offers:1-96 vCPUs1:6.5 core memory ratio99.95% SLA< 60s of planned downtimePriceStarting at$0.0413per vCPU/hourCloud SQL Enterprise PlusBest for high performance workloads. 
It offers:Up to 128 vCPUs1:8 core memory ratio99.99% SLA< 10 seconds of planned downtime (MySQL, PostgreSQL)2x read and write performance (MySQL, PostgreSQL)Up to 4x better read performance (SQL Server)Memory optimized machines with 1:32 core memory ratio (SQL Server)DescriptionStarting at$0.05369per vCPU/hourMemoryDescriptionCloud SQL Enterpriseup to 624 GBPriceStarting at$0.007per GB/hourCloud SQL Enterprise Plusup to 824 GB (includes performance-optimized machines on SQL Server)DescriptionStarting at$0.0091per GB/hourCloud SQL Enterprise Plus (SQL Server only)Up to 512 GB for memory-optimized machinesDescriptionStarting at$0.0161per GB/hourStorage - SSDDescriptionStorage and networking prices depend on the region where the instance is located; Cloud SQL Enterprise pricing and Cloud SQL Enterprise Plus pricing are the samePrice$0.17per GB/monthStorage - Local SSDDescriptionThis is only available for Cloud SQL Enterprise PlusPrice$0.16per GB/monthPITR Logs on Google Cloud StorageDescriptionCloud SQL EnterpriseUp to 7 daysPriceFreeCloud SQL Enterprise PlusUp to 35 daysDescriptionFreeGet full details on pricing and learn about committed use discounts.Watch video: Supercharging your applications with Cloud SQL Enterprise Plus (2:58)Pricing CalculatorEstimate your monthly Cloud SQL costs, including region specific pricing and fees.Estimate your costscustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry Cloud SQL in the consoleGo to my consoleLearn how to use Cloud SQLView trainingBuild a resilient architecture with Cloud SQLLearn moreFederate queries from BigQuery into Cloud SQLLearn moreMigrate a database to Cloud SQL using DMSLearn moreBusiness CaseMore than 95% of Google Cloud’s top 100 customers use Cloud SQL“Given Linear’s existing data volume and our goals for finding a cost-efficient solution, we selected Cloud SQL for PostgreSQL once support for pgvector was added. We were impressed by its scalability and reliability. This choice was also compatible with our existing database usage, models, ORM, etc. and this meant the learning curve was non-existent for our team.”Tom Moor, Head of US Engineering, Linear Leveraging the power of Cloud SQL for PostgreSQL with pgvector support, Linear was able to keep pace with its expanding customer base–improving the efficiency, scalability, and reliability of data management, scaling up into the tens of terabytes without increasing engineering effort.Read customer storyRelated contentFord achieves reductions in database management tasks using Google CloudHow Wayfair quickly moved from their on-premises data centers running SQL Server to Google CloudHow Renault migrated from Oracle databases to Cloud SQL for PostgreSQLFeatured benefitsStandard connection drivers and built-in migration tools allow you to create and connect to your first database in just a few minutes.Scale your instances effortlessly with a single API call whether you start with simple testing or you need a highly available database in production.Data encryption at rest and in transit. Private connectivity with Virtual Private Cloud and user-controlled network access with firewall protection. 
Compliant with SSAE 16, ISO 27001, PCI DSS, and supports HIPAA compliance.Partners & IntegrationAccelerate your workloads by working with a partnerBusiness intelligence and analyticsConsulting partnersData integration and migrationData quality and observabilityData security, caching, proxy servicesBuild data-driven applications with Google Cloud Ready - Cloud SQL validated partners, and visit the partner directory for a full list of Cloud SQL partners.FAQWhat is Cloud SQL?Cloud SQL is a service that delivers fully managed relational databases in the cloud. It offers MySQL, PostgreSQL, and SQL Server database engines.How is Cloud SQL different from other cloud databases?Cloud SQL is valued for its openness, ease of use, security, cost-efficiency, and Google Cloud integration—in fact, more than 95% of Google Cloud’s top 100 customers use it. If you're comparing PostgreSQL options on Google Cloud, view our comparison chart.What's the difference between the Enterprise and Enterprise Plus editions?For PostgreSQL, the Enterprise Plus edition brings enhanced availability, performance, and data protection capabilities. Specifically, it provides a 99.99% availability SLA with near-zero downtime maintenance, optimized hardware and software configurations, intelligent data caching for read-intensive transactional workloads, a configurable data cache option and 35 days of log retention.For MySQL, the Enterprise Plus edition brings enhanced availability, performance, and data protection capabilities. Specifically, it provides a 99.99% availability SLA with near-zero downtime maintenance, optimized hardware and software configurations, intelligent data caching for read-intensive transactional workloads, a configurable data cache option, 35 days of log retention and advanced disaster recovery capabilities like orchestrated failover and switchback.For SQL Server, the Enterprise Plus edition brings enhanced availability, performance, and disaster recovery capabilities. Specifically, it provides a 99.99% availability SLA, two machine families with performance-optimized and memory-optimized machines, a configurable data cache option for read-intensive workloads and advanced disaster recovery capabilities like orchestrated failover and switchback.How do I migrate databases to Cloud SQL?Use Database Migration Service to migrate securely and with minimal downtime, no matter where your source database is located.How can I get started with Cloud SQL?Go to the Cloud SQL console and create a database instance. Get up and running quickly with a Quickstart for MySQL, PostgreSQL, or SQL Server.New customers get $300 in free credits to spend on Cloud SQL. You won’t be charged until you upgrade.Other questions and supportCloud SQL FAQAsk the community \ No newline at end of file diff --git a/Cloud_SQL(2).txt b/Cloud_SQL(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..cee08f664454dec7c5733bf40af9ca00305dc3ca --- /dev/null +++ b/Cloud_SQL(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/sql/pricing +Date Scraped: 2025-02-23T12:10:56.598Z + +Content: +Home Pricing Cloud SQL Send feedback Stay organized with collections Save and categorize content based on your preferences. Cloud SQL pricing You can create an account to evaluate how Cloud SQL performs in real-world scenarios. New customers also get $300 in free credits to spend on Cloud SQL to run, test, and deploy workloads. You won't be charged until you upgrade.
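As a rough illustration of how the per-vCPU, per-GB, and per-month rates summarized above combine, the sketch below estimates the monthly cost of a single Cloud SQL Enterprise edition instance. The instance shape (4 vCPUs, 16 GB of memory, 100 GB of SSD) and the 730-hour month are assumptions for the example only; actual prices vary by region, and the pricing tables and Pricing Calculator on this page remain the authoritative sources.

# Back-of-the-envelope monthly estimate for one Cloud SQL Enterprise edition instance.
# Rates are the "starting at" prices quoted above; the instance shape (4 vCPUs, 16 GB RAM,
# 100 GB SSD) and the 730-hour month are illustrative assumptions only.
VCPU_PER_HOUR = 0.0413      # USD per vCPU per hour (Enterprise edition, starting price)
MEM_GB_PER_HOUR = 0.007     # USD per GB of memory per hour
SSD_GB_PER_MONTH = 0.17     # USD per GB of SSD storage per month

def monthly_estimate(vcpus: int, memory_gb: float, ssd_gb: float, hours: float = 730.0) -> float:
    compute = vcpus * VCPU_PER_HOUR * hours
    memory = memory_gb * MEM_GB_PER_HOUR * hours
    storage = ssd_gb * SSD_GB_PER_MONTH
    return compute + memory + storage

# Roughly $120.60 compute + $81.76 memory + $17.00 storage, or about $219 per month,
# before networking, high availability, committed use discounts, or regional differences.
print(f"Estimated monthly cost: ${monthly_estimate(4, 16, 100):,.2f}")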
Sign up to try Cloud SQL for free. This page contains information about pricing for Cloud SQL. Cloud SQL offers two editions, Enterprise and Enterprise Plus. These editions provide different levels of availability, performance and data protection. The pricing for the vCPUs and memory for each edition varies. Cloud SQL Enterprise edition and Cloud SQL Enterprise Plus edition are supported by Cloud SQL for MySQL, Cloud SQL for PostgreSQL, and Cloud SQL for SQL Server. Pricing for Cloud SQL depends on your instance type: MySQL and PostgreSQL SQL Server MySQL and PostgreSQL pricing Cloud SQL pricing is composed of the following charges: CPU and memory pricing Storage and networking pricing Instance pricing Cloud DNS pricing Extended support pricing CPU and memory pricing For dedicated-core instances, you choose the number of CPUs and the amount of memory you want, up to 96 CPUs and 624 GB of memory for Enterprise edition and up to 128 CPUs and 864 GB of memory for Enterprise Plus edition. For Cloud SQL Enterprise Plus edition for SQL Server instances, you can also choose from performance-optimized machines (up to 128 CPUs and 864 GB of memory) and memory-optimized machines (up to 16 CPUs and 512 GB of memory). Pricing for CPUs and memory depends on the region where your instance is located. Select your region in the dropdown on the pricing table. Read replicas and failover replicas are charged at the same rate as stand-alone instances. HA prices are applied for instances configured for high availability, also called regional instances. Learn more about high availability. Cloud SQL also offers committed use discounts (CUDs) that provide deeply discounted prices in exchange for your commitment to continuously use database instances in a particular region for a one- or three-year term. In the pricing tables on this page, the prices for CUDs are listed as commitments. For more information about these commitments, see Committed use discounts. In the following tables: Select your region from the dropdown menu to see the price for that region Use the slider to choose Monthly or Hourly pricing Compare pricing between per use, 1-year, and 3-year commitments Enterprise edition If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Enterprise Plus edition Storage and networking pricing Storage and networking prices depend on the region where the instance is located. Select your region in the dropdown on the pricing table. HA prices are applied for instances configured for high availability, also called regional instances. Learn more about high availability. Note: Committed use discounts do not apply to storage or network prices. In the following table, Select your region from the dropdown menu to see the price for that region. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Network Egress Pricing When network traffic leaves a Cloud SQL instance, the charge applied depends on the destination of the traffic, and in some cases, whether a partner is involved. Internet egress is network traffic leaving a Cloud SQL instance to a client that is not a Google product, such as using a local server to read data from Cloud SQL. 
Destination Price Compute Engine instances and Cloud SQL cross-region replicas Within the same region: free Between regions within North America: $0.12/GB Between regions outside of North America: $0.12/GB Google Products (except Compute Engine and traffic to Cloud SQL cross-region replicas) Intra-continental: free Inter-continental: $0.12/GB Internet egress using Cloud Interconnect $0.05/GB Internet egress (not using Cloud Interconnect) $0.19/GB If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Instance pricing Instance pricing applies only to shared-core instances. Dedicated-core instances, which can have up to 96 vCPUs and 624 GB of memory, are charged by the number of cores and amount of memory they have. Instance pricing is charged for every second that the instance is running (the activation policy is set to ALWAYS). Cloud SQL uses seconds as the time unit multiplier for usage. This means that each second of usage counts toward a full billable minute. For more details, see Billing on partial seconds. HA prices are applied for instances configured for high availability, also called regional instances. Learn more about high availability. Note: Committed use discounts do not apply to instance prices. In the following table: Select your region from the dropdown menu to see the price for that region Use the slider to choose Monthly or Hourly pricing Find the machine type you want to use to view pricing details If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. *Shared CPU machine types (db-f1-micro and db-g1-small) are not covered by the Cloud SQL SLA. Billing on partial seconds Milliseconds of usage are rounded to the nearest second. If usage is under half a second, (499ms or less), it rounds down to zero and does not count toward billable usage. For example: If you use an instance for 499ms, you are not billed for that second. If you use an instance for 500ms or 1.49 seconds, in both cases, you are billed for 1 second. If you use an instance for 1.5 seconds or 2.49 seconds, in both cases you are billed for 2 seconds. Serverless export pricing Serverless export prices depend on the region where the instance is located. Select your region using the dropdown on the pricing table. Note: Committed use discounts don't apply to serverless export prices. If you perform frequent exports of a subset of your Cloud SQL instance (for example, by using the database, table, or query parameter), then it's more cost effective for you to create a read replica for your primary instance and perform your exports from that instance. You avoid impacting your database with your exports, and lower your costs because you're paying for the read replica only, and not for each export. In the following table, select your region from the dropdown menu to see the price for that region. Note: The price per GB is calculated based on the disk size of the offload instance, which is the same as the disk size of the primary instance. This price isn't based on the amount of data exported. Cloud DNS pricing With Cloud DNS pricing, the charge is per zone per month (regardless of whether you use your zone), and you also pay for queries against your zones. For more information, see Cloud DNS pricing. Extended support pricing Note: From February 1, 2025 through April 30, 2025, Google has waived charges for extended support. 
Starting on May 1, 2025, all instances running on major versions that have reached end of life (EOL) will be charged for extended support. Extended support pricing applies to Cloud SQL instances that are running major versions in extended support. Refer to the Database version policies documentation for the extended support timelines of Cloud SQL major versions. For dedicated-core instances, extended support is priced per vCPU per hour and charged for every second that the instance is running. For shared-core instances, extended support is priced per instance per hour and charged for every second that the instance is running. Extended support is charged in addition to regular instance pricing. For more details, see Billing on partial seconds. Pricing depends on the region where your instance is located. Select your region in the dropdown on the pricing table. Read replicas are charged at the same rate as stand-alone instances. HA prices are applied for instances configured for high availability, also called regional instances. Learn more about high availability. Note: Committed use discounts don't apply to extended support prices. In the following table, select your region from the dropdown menu to see the price for that region. To see how you can calculate your potential costs, please refer to this pricing example. SQL Server pricing Cloud SQL for SQL Server is composed of the following charges: CPU and memory pricing Storage and networking pricing Licensing Cloud DNS pricing Try it for yourself If you're new to Google Cloud, create an account to evaluate how Cloud SQL performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads. Try Cloud SQL free CPU and memory pricing For dedicated-core instances, you choose the number of CPUs and the amount of memory you want, up to 96 CPUs and 624 GB of memory. For Cloud SQL Enterprise Plus edition instances, you can choose from Performance-optimized machines (up to 128 CPUs and 864 GB of memory) and Memory-optimized machines (up to 16 CPUs and 512 GB of memory). Pricing for CPUs and memory depends on the region where your instance is located. Select your region in the dropdown on the pricing table. Read replicas and failover replicas are charged at the same rate as stand-alone instances. HA prices are applied for instances configured for high availability, also called regional instances. Learn more about high availability. Cloud SQL also offers committed use discounts (CUDs) that provide deeply discounted prices in exchange for your commitment to continuously use database instances in a particular region for a one- or three-year term. In the pricing tables on this page, the prices for CUDs are listed as commitments. For more information about these commitments, see Committed use discounts. In the following tables: Select your region from the dropdown menu to see the price for that region Use the slider to choose Monthly or Hourly pricing Compare pricing between per use, 1-year, and 3-year commitments Enterprise edition If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Enterprise Plus edition If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Storage and networking pricing SQL server storage and networking prices depend on the region where the instance is located. Select your region in the dropdown on the pricing table. 
HA prices are applied for instances configured for high availability, also called regional instances. Learn more about high availability. Note: Committed use discounts do not apply to storage or network prices. In the following table, select your region from the dropdown menu to see the price for that region. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Network Egress Pricing When network traffic leaves a Cloud SQL instance, the charge applied depends on the destination of the traffic, and in some cases, whether a partner is involved. Internet egress is network traffic leaving a Cloud SQL instance to a client that is not a Google product, such as using a local server to read data from Cloud SQL. Destination Price Compute Engine instances Within the same region: free Between regions within North America: $0.12/GB Between regions outside of North America: $0.12/GB Google Products (except Compute Engine) Intra-continental: free Inter-continental: $0.12/GB Internet egress using Cloud Interconnect $0.05/GB Internet egress (not using Cloud Interconnect) $0.19/GB If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Licensing In addition to instance and resource pricing, SQL Server also has a licensing component. High availability, or regional instances, will only incur the cost for a single license for the active resource. Note: As a managed service, Cloud SQL does not support BYOL (Bring your own license). You can learn more about instance creation here. Note: Committed Use Discounts do not apply to licenses. License Price per Core Hour Enterprise $0.47 Standard $0.13 Web $0.01134 Express $0 Microsoft SQL Server licensing requires a core license to be assigned to each virtual CPU on your instance, with a four core minimum for each instance. Instances with fewer than 4 vCPUs will be charged for SQL Server at 4 times the license rate to comply with these requirements. For instances with 4 or more vCPUs, you will be charged for the number of SQL Server licenses that is equal to the number of vCPUs. Disabling simultaneous multithreading (SMT) reduces the number of virtual CPUs (vCPUs) per core from 2 to 1, which in turn might reduce your SQL Server licensing costs. Disabling SMT doesn't change the compute engine price for SQL Server. You are billed for the number of vCPUs defined in the SQL Server CPU configuration. The following examples show how disabling SMT affects your billing: User SMT: enabled or disabled Number of vCPUs on the instance Number of vCPUs the SQL Server licensing fees are calculated for Number of vCPUs the compute charges are calculated for User1 Enabled 8 8 8 User2 Disabled 8 4 8 User3 Disabled 6 4 6 Note that despite disabling SMT, User3's SQL Server license fees are calculated for 4 vCPUs because SQL Server licensing requires a core license to be assigned to each virtual CPU on your instance, with a minimum of four cores for each instance. SQL Server instance licenses are charged a 1 minute minimum. After 1 minute, SQL Server instance licenses are charged in 1 second increments. Cloud DNS pricing With Cloud DNS pricing, the charge is per zone per month (regardless of whether you use your zone), and you also pay for queries against your zones. For more information, see Cloud DNS pricing. What's next Refer to the Pricing Overview documentation. Visit the pricing examples page to see how you can calculate your potential costs. Try the Pricing Calculator. 
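The SQL Server licensing rules described above (a core license per vCPU, a four-core minimum per instance, and a halved license count when SMT is disabled) reduce to a small calculation. The sketch below reproduces the three worked examples from the table and uses the per-core-hour license prices listed above; it is an illustration of the stated rules, not an official pricing tool.

# Number of vCPUs that SQL Server licensing fees are calculated for, following the rules
# described above: licenses track vCPUs, disabling SMT halves the licensed vCPU count,
# and every instance is licensed for at least four cores.
LICENSE_PER_CORE_HOUR = {   # USD per core hour, from the licensing table above
    "enterprise": 0.47,
    "standard": 0.13,
    "web": 0.01134,
    "express": 0.0,
}

def licensed_vcpus(vcpus: int, smt_enabled: bool) -> int:
    effective = vcpus if smt_enabled else vcpus // 2
    return max(4, effective)

def license_cost_per_hour(vcpus: int, smt_enabled: bool, edition: str) -> float:
    return licensed_vcpus(vcpus, smt_enabled) * LICENSE_PER_CORE_HOUR[edition]

# The worked examples from the table above:
assert licensed_vcpus(8, smt_enabled=True) == 8    # User1
assert licensed_vcpus(8, smt_enabled=False) == 4   # User2
assert licensed_vcpus(6, smt_enabled=False) == 4   # User3: four-core minimum applies

print(f"Standard edition, 8 vCPUs, SMT disabled: ${license_cost_per_hour(8, False, 'standard'):.2f}/hour")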
Learn more about instance settings. Learn more about the high availability configuration. Request a custom quote With Google Cloud's pay-as-you-go pricing, you only pay for the services you use. Connect with our sales team to get a custom quote for your organization. Contact sales \ No newline at end of file diff --git a/Cloud_SQL.txt b/Cloud_SQL.txt new file mode 100644 index 0000000000000000000000000000000000000000..0b1cc79cd243eb4b8c3587ee45492d46724a3579 --- /dev/null +++ b/Cloud_SQL.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/sql +Date Scraped: 2025-02-23T12:01:46.336Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Cloud SQLFocus on your application, and leave the database to usFully managed, cost-effective relational database service for PostgreSQL, MySQL, and SQL Server. Try Enterprise Plus edition for a 99.99% availability SLA and category-leading performance.New customers get $300 in free credits to try Cloud SQL and other Google Cloud products.Try it in consoleContact salesEasily build gen AI applications that are accurate, transparent, and reliable with LangChain integration. Cloud SQL has three LangChain integrations - Document loader for loading and storing information from documents, Vector stores for enabling semantic search, and Chat Messages Memory for enabling chains to recall previous conversations. Visit the GitHub repository to learn more.Product highlightsSub-second downtime maintenance with Enterprise Plus editionPrice-performance options for every budget, app, and workloadGemini-assisted management and vector search for gen AI appsCloud SQL in a minute1:31FeaturesFully managedCloud SQL manages your databases so you don't have to, enabling your business to run without disruption. We automate all your backups, replication, patches, encryption, and storage capacity increases to give your applications the reliability, scalability, and security they need.VIDEOWhat is Cloud SQL?3:09Price performance options Cloud SQL provides the flexibility to choose the right capabilities based on your performance, availability, and data protection needs for your database workloads. The Enterprise edition provides the lowest cost with balanced performance, while the Enterprise Plus edition is suitable for the most demanding transactional workloads, providing the highest performance with a fully optimized software and hardware stack featuring a data cache option. The data cache leverages flash memory to lower read latency and improve throughput by intelligently caching data across memory and high speed local storage. SQL Server Enterprise Plus instances offers two new machine families, Performance optimized (1:8 core memory ratio) and Memory optimized (1:32 core memory ratio), that deliver better performance.Learn how Chess.com used Enterprise Plus to cut response time by 71%High availability with near sub-second downtime maintenanceEasily configure built-in high availability with automatic failover across zones to protect your applications from a variety of possible failures. The Enterprise edition provides a 99.95% availability SLA for general purpose workloads, while the Enterprise Plus edition offers a 99.99% availability SLA for the most demanding workloads featuring maintenance and instance scale-up with typically sub-second downtime. 
It also offers enhanced disaster recovery capabilities that allow failover and switchback to your original topology with zero data loss and no application changes.Learn about Cloud SQL's high availability capabilities in this whitepaperGemini in Cloud SQLSimplify all aspects of the database journey with AI-powered assistance, helping you focus on what matters most. Gemini in Cloud SQL, in preview, simplifies all aspects of database operations, including development, performance optimization, fleet management, governance, and migration.Gemini in Database Migration Service, in preview, helps you review and convert database-resident code like stored procedures, triggers, and functions and run it in Cloud SQL for PostgreSQL. Database Center, in Preview, helps proactively de-risk your fleet with intelligent performance and security recommendations. With Gemini enabled, Database Center makes optimizing your database fleet incredibly intuitive. Use a natural-language chat interface to ask questions, quickly resolve fleet issues, and get optimization recommendations.Learn how you can supercharge database development and management with Gemini in Cloud SQLFirebase Data ConnectFirebase Data Connect bridges the gap between Firebase's powerful mobile/web development features and PostgreSQL's rich data management capabilities. Leverage the power of Cloud SQL for PostgreSQL as your backend while enjoying the seamless Firebase development experience. Build faster, smarter, and more effectively.Learn more about Firebase Data ConnectVector SearchVector Search is a critical capability for building useful and accurate gen AI-powered apps. Cloud SQL for PostgreSQL and MySQL support two search approaches for balancing speed and accuracy. Approximate nearest neighbor (ANN) vector search is ideal for large datasets where close matches suffice, while exact nearest neighbor (KNN) vector search is typically used for high precision on smaller datasets. Integrate vector search directly into your existing AI-powered apps without needing to learn and manage a separate system.VIDEOUsing pgvector to build AI-powered apps7:52Using pgvector to build AI-powered appsCost-effectiveAccording to an IDC study, Cloud SQL customers achieved a three-year ROI of 246% and a payback period of 11 months. Committed use discounts offer additional savings for one to three year commitments.Learn how companies achieved a three-year ROI of 246% in this IDC reportOpen and standards-basedCloud SQL supports the most popular open source and commercial engines, including MySQL, PostgreSQL, and SQL Server with rich support for extensions, configuration flags, and popular developer tools. It's easy to get started—simply bring your existing skills over and enjoy the flexibility to work the way you want. You can create a database with just a few clicks in the console and connect your application.Choosing a PostgreSQL database on Google Cloud19:20LangChain integrationEasily build gen AI applications with LangChain. Cloud SQL features three LangChain integrations - Document loader for loading and storing information from documents, Vector stores for enabling semantic search, and Chat Messages History for saving/fetching previous chat conversations. Visit the Github repository for PostgreSQL, MySQL, and SQL Server to learn more.Easy migrationsNo matter where your database is located—whether on-premises, on Compute Engine, or in other clouds—Database Migration Service (DMS) can migrate it securely and with minimal downtime. 
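As a sketch of the exact nearest neighbor (KNN) vector search described above, the snippet below runs standard pgvector SQL against a Cloud SQL for PostgreSQL database using psycopg2. The connection details, the items table, and the 3-dimensional embeddings are illustrative assumptions, and the pgvector extension must be available on the instance.

# Minimal sketch of exact nearest neighbor (KNN) search with the pgvector extension on
# Cloud SQL for PostgreSQL. Connection details, the table name, and the 3-dimensional
# embeddings are placeholders for illustration only.
import psycopg2

conn = psycopg2.connect(host="127.0.0.1", dbname="app_db", user="app_user", password="change-me")

with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(3));")
    cur.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[2,2,2]'), ('[9,0,1]');")
    # "<->" is pgvector's Euclidean distance operator; ORDER BY ... LIMIT gives exact KNN.
    cur.execute("SELECT id, embedding <-> '[2,3,3]' AS distance FROM items ORDER BY distance LIMIT 2;")
    for row in cur.fetchall():
        print(row)

conn.close()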
DMS leverages the native replication capabilities of the source database to maximize the reliability of your migration. And lift and shift migrations to Cloud SQL are available at no additional charge.Learn how to connect to Cloud SQL from Kubernetes14:16IntegratedCloud SQL seamlessly integrates with Google Cloud services, such as Compute Engine, Cloud Run, Google Kubernetes Engine, and Cloud IAM, allowing developers to build and deploy applications with ease. Provision your database via APIs and third-party tools, and use federated queries from BigQuery and low-latency database replication with Datastream for near real-time insights on operational data.Learn how to connect to Cloud SQL from Kubernetes14:16Data protection and complianceConfigure backups for data protection and restore your instance to an earlier point in time with a retention period of up to 35 days. Configure where your data is stored to comply with data residency requirements. Cloud SQL automatically encrypts data, is SSAE 16, ISO 27001, and PCI DSS compliant, and supports HIPAA compliance.Learn how to secure a Cloud SQL for PostgreSQL instanceSecure access and connectivityCloud SQL data is encrypted when in transit on Google’s internal networks and when at rest in database tables, temporary files, and backups. It supports private connectivity with Virtual Private Cloud (VPC), and every Cloud SQL instance includes a network firewall, allowing you to control public network access. Cloud SQL also supports Private Service Connect, which allows you to access your database instances via private IP without going through the internet or using external IP addresses.ScalabilityEasily scale up as your data grows—add compute and storage, and scale out by adding read replicas to handle increasing read traffic. Cloud SQL can also automatically scale up storage capacity when you’re near your limit. Read replicas support high availability, can have their own read replicas, and can be located across regions.Change data capture and replicationStream data across heterogeneous databases, storage systems, and applications reliably and with minimal latency with Datastream and Datastream for BigQuery. 
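The federated queries from BigQuery mentioned above use the EXTERNAL_QUERY function. The sketch below assumes the google-cloud-bigquery client library is installed and that a BigQuery connection resource for the Cloud SQL instance already exists; the connection ID, table, and column names are placeholders.

# Minimal sketch: run a federated query from BigQuery against a Cloud SQL database using
# EXTERNAL_QUERY. The connection resource "my-project.us.my-cloudsql-connection" and the
# orders table are placeholders that would need to exist in your project.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT *
FROM EXTERNAL_QUERY(
  'my-project.us.my-cloudsql-connection',
  'SELECT customer_id, order_total FROM orders LIMIT 10'
)
"""

for row in client.query(sql).result():
    print(dict(row))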
Scale up or down with a serverless architecture and no resources to provision or manage, and enable near real-time insights on operational data.View all featuresCompare Cloud SQL to other Google Cloud databasesGoogle Cloud serviceOverviewKey benefitsCloud SQLFully managed, cost-effective relational database serviceTry Cloud SQL for the:Easiest lift and shift to the cloudSame management as the MySQL and SQL Server enginesLowest cost relational database optionAlloyDB for PostgreSQLFull PostgreSQL compatibility with superior performance and scaleLearn more about the benefits of AlloyDB for PostgreSQL, including:4x faster than standard PostgreSQL for transactional workloadsHybrid transactional and analytical processing (HTAP)Up to 100x faster analytical queries than standard PostgreSQLFull AI support, designed to run anywhereSpannerHigh performance and availability at virtually unlimited scale at zero-touch maintenanceLearn more about the benefits of Spanner, including: Strong consistency and global scaleRun large queries with zero impact on operational trafficHighly scalable database with relational and non-relational capabilities, offering 99.999% availabilityCloud SQLOverviewFully managed, cost-effective relational database serviceKey benefitsTry Cloud SQL for the:Easiest lift and shift to the cloudSame management as the MySQL and SQL Server enginesLowest cost relational database optionAlloyDB for PostgreSQLOverviewFull PostgreSQL compatibility with superior performance and scaleKey benefitsLearn more about the benefits of AlloyDB for PostgreSQL, including:4x faster than standard PostgreSQL for transactional workloadsHybrid transactional and analytical processing (HTAP)Up to 100x faster analytical queries than standard PostgreSQLFull AI support, designed to run anywhereSpannerOverviewHigh performance and availability at virtually unlimited scale at zero-touch maintenanceKey benefitsLearn more about the benefits of Spanner, including: Strong consistency and global scaleRun large queries with zero impact on operational trafficHighly scalable database with relational and non-relational capabilities, offering 99.999% availabilityHow It WorksCloud SQL scales up in minutes and replicates data across zones and regions. It uses agents for maintenance, logging, monitoring, and configuration, with services backed by a 24/7 SRE team. Manage your database through the console, CLI, or REST API and connect your app through standard database drivers. View documentationCommon UsesDatabase migrationMigrate to a fully managed database solutionSelf-managing a database, such as MySQL, PostgreSQL, or SQL Server, can be inefficient and expensive, with significant effort around patching, hardware maintenance, backups, and tuning. Migrating to a fully managed solution has never been simpler—you can lift and shift your database from any on-premises or cloud location using Database Migration Service with minimal downtime.Lab: Migrate to Cloud SQL for PostgreSQL using Database Migration ServiceDocumentation: Best practices for importing and exporting dataBlog: See how Broadcom migrated 40+ MySQL databases to Google Cloud with minimal downtimeLearning resourcesMigrate to a fully managed database solutionSelf-managing a database, such as MySQL, PostgreSQL, or SQL Server, can be inefficient and expensive, with significant effort around patching, hardware maintenance, backups, and tuning. 
Migrating to a fully managed solution has never been simpler—you can lift and shift your database from any on-premises or cloud location using Database Migration Service with minimal downtime.Lab: Migrate to Cloud SQL for PostgreSQL using Database Migration ServiceDocumentation: Best practices for importing and exporting dataBlog: See how Broadcom migrated 40+ MySQL databases to Google Cloud with minimal downtimeDatabase modernizationModernize your application with open sourceA cloud deployment is a good opportunity to modernize your database environment by transitioning off legacy, proprietary databases and onto open source databases, such as PostgreSQL. With open source databases having become enterprise-ready, you don't need to compromise on performance, reliability, or security.Learn how to get started with Database Migration ServiceLearning resourcesModernize your application with open sourceA cloud deployment is a good opportunity to modernize your database environment by transitioning off legacy, proprietary databases and onto open source databases, such as PostgreSQL. With open source databases having become enterprise-ready, you don't need to compromise on performance, reliability, or security.Learn how to get started with Database Migration ServiceNew application developmentBuild data-driven applicationsCloud SQL accelerates application development through integration with the larger ecosystem of Google Cloud services, Google partners, and the open source community, while giving you the freedom to work the way you want. Reuse your existing database skills while freeing yourself from mundane database administration tasks and leveraging AI/ML-driven insights and recommendations.Try the lab: Building a containerized app with a scalable databaseLab: Connect an app to a Cloud SQL for PostgreSQL instanceLab: Migrate MySQL database to Google Cloud SQL and then reconfigure the applicationLab: Create Cloud SQL instances with Terraform, then set up the Cloud SQL ProxyTutorials, quickstarts, & labsBuild data-driven applicationsCloud SQL accelerates application development through integration with the larger ecosystem of Google Cloud services, Google partners, and the open source community, while giving you the freedom to work the way you want. Reuse your existing database skills while freeing yourself from mundane database administration tasks and leveraging AI/ML-driven insights and recommendations.Try the lab: Building a containerized app with a scalable databaseLab: Connect an app to a Cloud SQL for PostgreSQL instanceLab: Migrate MySQL database to Google Cloud SQL and then reconfigure the applicationLab: Create Cloud SQL instances with Terraform, then set up the Cloud SQL ProxyThird-party applicationsDeploy applications with confidenceMany software vendors build and certify their applications for MySQL, PostgreSQL, and SQL Server. Since Cloud SQL offers standard versions of these databases, including extensions, configuration flags, and drivers, applications can run unmodified. Make your journey to the cloud and let us take tedious database administration tasks off your plate.Find a Google Cloud partnerLearning resourcesDeploy applications with confidenceMany software vendors build and certify their applications for MySQL, PostgreSQL, and SQL Server. Since Cloud SQL offers standard versions of these databases, including extensions, configuration flags, and drivers, applications can run unmodified. 
Make your journey to the cloud and let us take tedious database administration tasks off your plate.Find a Google Cloud partnerPricingHow Cloud SQL pricing worksPricing varies with editions, engine, and settings, including how much storage, memory, and CPU you provision. Cloud SQL offers per-second billing.ServiceDescriptionPriceComputeCloud SQL EnterpriseBest for general purpose workloads. It offers:1-96 vCPUs1:6.5 core memory ratio99.95% SLA< 60s of planned downtimeStarting at$0.0413per vCPU/hourCloud SQL Enterprise PlusBest for high performance workloads. It offers:Up to 128 vCPUs1:8 core memory ratio99.99% SLA< 10 seconds of planned downtime (MySQL, PostgreSQL)2x read and write performance (MySQL, PostgreSQL)Up to 4x better read performance (SQL Server)Memory optimized machines with 1:32 core memory ratio (SQL Server)Starting at$0.05369per vCPU/hourMemoryCloud SQL Enterpriseup to 624 GBStarting at$0.007per GB/hourCloud SQL Enterprise Plusup to 824 GB (includes performance-optimized machines on SQL Server)Starting at$0.0091per GB/hourCloud SQL Enterprise Plus (SQL Server only)Up to 512 GB for memory-optimized machinesStarting at$0.0161per GB/hourStorage - SSDStorage and networking prices depend on the region where the instance is located; Cloud SQL Enterprise pricing and Cloud SQL Enterprise Plus pricing are the same$0.17per GB/monthStorage - Local SSDThis is only available for Cloud SQL Enterprise Plus$0.16per GB/monthPITR Logs on Google Cloud StorageCloud SQL EnterpriseUp to 7 daysFreeCloud SQL Enterprise PlusUp to 35 daysFreeGet full details on pricing and learn about committed use discounts.Watch video: Supercharging your applications with Cloud SQL Enterprise Plus (2:58)How Cloud SQL pricing worksPricing varies with editions, engine, and settings, including how much storage, memory, and CPU you provision. Cloud SQL offers per-second billing.ComputeDescriptionCloud SQL EnterpriseBest for general purpose workloads. It offers:1-96 vCPUs1:6.5 core memory ratio99.95% SLA< 60s of planned downtimePriceStarting at$0.0413per vCPU/hourCloud SQL Enterprise PlusBest for high performance workloads. 
It offers:Up to 128 vCPUs1:8 core memory ratio99.99% SLA< 10 seconds of planned downtime (MySQL, PostgreSQL)2x read and write performance (MySQL, PostgreSQL)Up to 4x better read performance (SQL Server)Memory optimized machines with 1:32 core memory ratio (SQL Server)DescriptionStarting at$0.05369per vCPU/hourMemoryDescriptionCloud SQL Enterpriseup to 624 GBPriceStarting at$0.007per GB/hourCloud SQL Enterprise Plusup to 824 GB (includes performance-optimized machines on SQL Server)DescriptionStarting at$0.0091per GB/hourCloud SQL Enterprise Plus (SQL Server only)Up to 512 GB for memory-optimized machinesDescriptionStarting at$0.0161per GB/hourStorage - SSDDescriptionStorage and networking prices depend on the region where the instance is located; Cloud SQL Enterprise pricing and Cloud SQL Enterprise Plus pricing are the samePrice$0.17per GB/monthStorage - Local SSDDescriptionThis is only available for Cloud SQL Enterprise PlusPrice$0.16per GB/monthPITR Logs on Google Cloud StorageDescriptionCloud SQL EnterpriseUp to 7 daysPriceFreeCloud SQL Enterprise PlusUp to 35 daysDescriptionFreeGet full details on pricing and learn about committed use discounts.Watch video: Supercharging your applications with Cloud SQL Enterprise Plus (2:58)Pricing CalculatorEstimate your monthly Cloud SQL costs, including region specific pricing and fees.Estimate your costscustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry Cloud SQL in the consoleGo to my consoleLearn how to use Cloud SQLView trainingBuild a resilient architecture with Cloud SQLLearn moreFederate queries from BigQuery into Cloud SQLLearn moreMigrate a database to Cloud SQL using DMSLearn moreBusiness CaseMore than 95% of Google Cloud’s top 100 customers use Cloud SQL“Given Linear’s existing data volume and our goals for finding a cost-efficient solution, we selected Cloud SQL for PostgreSQL once support for pgvector was added. We were impressed by its scalability and reliability. This choice was also compatible with our existing database usage, models, ORM, etc. and this meant the learning curve was non-existent for our team.”Tom Moor, Head of US Engineering, Linear Leveraging the power of Cloud SQL for PostgreSQL with pgvector support, Linear was able to keep pace with its expanding customer base–improving the efficiency, scalability, and reliability of data management, scaling up into the tens of terabytes without increasing engineering effort.Read customer storyRelated contentFord achieves reductions in database management tasks using Google CloudHow Wayfair quickly moved from their on-premises data centers running SQL Server to Google CloudHow Renault migrated from Oracle databases to Cloud SQL for PostgreSQLFeatured benefitsStandard connection drivers and built-in migration tools allow you to create and connect to your first database in just a few minutes.Scale your instances effortlessly with a single API call whether you start with simple testing or you need a highly available database in production.Data encryption at rest and in transit. Private connectivity with Virtual Private Cloud and user-controlled network access with firewall protection. 
Compliant with SSAE 16, ISO 27001, PCI DSS, and supports HIPAA compliance.Partners & IntegrationAccelerate your workloads by working with a partnerBusiness intelligence and analyticsConsulting partnersData integration and migrationData quality and observabilityData security, caching, proxy servicesBuild data-driven applications with Google Cloud Ready - Cloud SQL validated partners, and visit the partner directory for a full list of Cloud SQL partners.FAQWhat is Cloud SQL?Cloud SQL is a service that delivers fully managed relational databases in the cloud. It offers MySQL, PostgreSQL, and SQL Server database engines.How is Cloud SQL different from other cloud databases?Cloud SQL is valued for its openness, ease of use, security, cost-efficiency, and Google Cloud integration—in fact, more than 95% of Google Cloud’s top 100 customers use it. If you're comparing PostgreSQL options on Google Cloud, view our comparison chart.What's the difference between the Enterprise and Enterprise Plus editions?For PostgreSQL, the Enterprise Plus edition brings enhanced availability, performance, and data protection capabilities. Specifically, it provides a 99.99% availability SLA with near-zero downtime maintenance, optimized hardware and software configurations, intelligent data caching for read-intensive transactional workloads, a configurable data cache option and 35 days of log retention.For MySQL, the Enterprise Plus edition brings enhanced availability, performance, and data protection capabilities. Specifically, it provides a 99.99% availability SLA with near-zero downtime maintenance, optimized hardware and software configurations, intelligent data caching for read-intensive transactional workloads, a configurable data cache option, 35 days of log retention and advanced disaster recovery capabilities like orchestrated failover and switchback.For SQL Server, the Enterprise Plus edition brings enhanced availability, performance, and disaster recovery capabilities. Specifically, it provides a 99.99% availability SLA, two machine families with performance-optimized and memory-optimized machines, a configurable data cache option for read-intensive workloads and advanced disaster recovery capabilities like orchestrated failover and switchback.How do I migrate databases to Cloud SQL?Use Database Migration Service to migrate securely and with minimal downtime, no matter where your source database is located.How can I get started with Cloud SQL?Go to the Cloud SQL console and create a database instance. Get up and running quickly with a Quickstart for MySQL, PostgreSQL, or SQL Server.New customers get $300 in free credits to spend on Cloud SQL. You won’t be charged until you upgrade.Other questions and supportCloud SQL FAQAsk the community \ No newline at end of file diff --git a/Cloud_Scheduler(1).txt b/Cloud_Scheduler(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..79e48e52410b8c4d376e63e291fc99c91e883c01 --- /dev/null +++ b/Cloud_Scheduler(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/scheduler/docs +Date Scraped: 2025-02-23T12:05:23.823Z + +Content: +Home Cloud Scheduler Documentation Stay organized with collections Save and categorize content based on your preferences. Google Cloud Scheduler documentation View all product documentation You can use Google Cloud Scheduler to schedule virtually any job, including batch, big data jobs, cloud infrastructure operations, and more.
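As one way to picture the cron-job guides on this page, here is a minimal sketch that creates an HTTP-target job with the google-cloud-scheduler client library. The project ID, region, job name, schedule, and target URL are all placeholders, and the exact client surface may differ slightly between library versions.

# Minimal sketch: create a Cloud Scheduler cron job that sends an HTTP POST every 10
# minutes. Assumes the google-cloud-scheduler client library is installed; the project,
# region, job name, and target URL below are placeholders.
from google.cloud import scheduler_v1

project_id = "my-project"            # placeholder
location_id = "us-central1"          # placeholder
parent = f"projects/{project_id}/locations/{location_id}"

client = scheduler_v1.CloudSchedulerClient()

job = scheduler_v1.Job(
    name=f"{parent}/jobs/nightly-cleanup",      # placeholder job name
    description="Example job created from the client library",
    schedule="*/10 * * * *",                    # standard cron syntax
    time_zone="Etc/UTC",
    http_target=scheduler_v1.HttpTarget(
        uri="https://example.com/tasks/cleanup",
        http_method=scheduler_v1.HttpMethod.POST,
    ),
)

created = client.create_job(parent=parent, job=job)
print(f"Created job: {created.name}")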
You can automate everything, including retries in case of failure to reduce manual toil and intervention. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Set up your environment Create and configure cron jobs Configure cron job schedules View logs Use authentication with HTTP targets Limit target types find_in_page Reference REST reference RPC reference gcloud command reference Cloud Client Libraries info Resources Release notes Quotas and limits Get support Pricing Training Training and tutorials Quickstart: Schedule and run a cron job Learn how to perform basic operations using Cloud Scheduler. Learn more arrow_forward Training Training and tutorials Use Cloud Scheduler and Pub/Sub to trigger a Cloud Run function Learn how to use Cloud Scheduler and Pub/Sub to trigger a Cloud Run function. Learn more arrow_forward Training Training and tutorials Schedule compute instances with Cloud Scheduler Learn how to use Cloud Scheduler and Cloud Run functions to automatically start and stop Compute Engine instances on a regular schedule using resource labels. Learn more arrow_forward Training Training and tutorials Schedule data exports from Firebase Learn how to schedule exports of your Firestore data. Learn more arrow_forward Training Training and tutorials Schedule Workflows with Cloud Scheduler Learn how to use Cloud Scheduler to automatically execute Workflows so that a workflow runs on a particular schedule. Learn more arrow_forward \ No newline at end of file diff --git a/Cloud_Scheduler.txt b/Cloud_Scheduler.txt new file mode 100644 index 0000000000000000000000000000000000000000..497211aa93ba1ee3da099487c7a3cbc61d3fc9e7 --- /dev/null +++ b/Cloud_Scheduler.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/scheduler/docs +Date Scraped: 2025-02-23T12:04:28.542Z + +Content: +Home Cloud Scheduler Documentation Stay organized with collections Save and categorize content based on your preferences. Google Cloud Scheduler documentation View all product documentation You can use Google Cloud Scheduler to schedule virtually any job, including batch, big data jobs, cloud infrastructure operations, and more. You can automate everything, including retries in case of failure to reduce manual toil and intervention. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Set up your environment Create and configure cron jobs Configure cron job schedules View logs Use authentication with HTTP targets Limit target types find_in_page Reference REST reference RPC reference gcloud command reference Cloud Client Libraries info Resources Release notes Quotas and limits Get support Pricing Training Training and tutorials Quickstart: Schedule and run a cron job Learn how to perform basic operations using Cloud Scheduler. 
Learn more arrow_forward Training Training and tutorials Use Cloud Scheduler and Pub/Sub to trigger a Cloud Run function Learn how to use Cloud Scheduler and Pub/Sub to trigger a Cloud Run function. Learn more arrow_forward Training Training and tutorials Schedule compute instances with Cloud Scheduler Learn how to use Cloud Scheduler and Cloud Run functions to automatically start and stop Compute Engine instances on a regular schedule using resource labels. Learn more arrow_forward Training Training and tutorials Schedule data exports from Firebase Learn how to schedule exports of your Firestore data. Learn more arrow_forward Training Training and tutorials Schedule Workflows with Cloud Scheduler Learn how to use Cloud Scheduler to automatically execute Workflows so that a workflow runs on a particular schedule. Learn more arrow_forward \ No newline at end of file diff --git a/Cloud_Service_Mesh.txt b/Cloud_Service_Mesh.txt new file mode 100644 index 0000000000000000000000000000000000000000..9535f82d3a8f22fe2744cfee9fde4e9705c9882c --- /dev/null +++ b/Cloud_Service_Mesh.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/service-mesh +Date Scraped: 2025-02-23T12:04:56.903Z + +Content: +Jump to Cloud Service MeshCloud Service MeshThe fully managed service mesh based on Envoy and Istio.Go to consoleView documentationTake a services-first approach Unify your service mesh across your app platforms: from VMs to containers to serverlessUnburden your operations and development teams with a fully managed serviceLeverage leading open source projects like Istio and Envoy36:04BenefitsFully managed, full stopAs a fully managed offering, Cloud Service Mesh takes all the guesswork and effort out of procuring and managing your service mesh solution. You focus on developing great apps; let us worry about your mesh.Hybrid and multicloudCloud Service Mesh gives you the flexibility to support your workloads in Google Cloud, other public clouds, and on-prem deployments.Modernize at your paceCloud Service Mesh works for VM-based (Compute Engine) and containerized applications (Cloud Run, Google Kubernetes Engine, or self-managed Kubernetes) and can be incrementally introduced for your services.Key featuresToil-free, secure service networking and traffic managementManaged by GoogleCloud Service Mesh is a Google-managed service: if there is a problem, our operators get paged, not yours. You don't have to worry about deploying and managing the control plane, which means your people can focus on your business.Sophisticated traffic management made easyWith Cloud Service Mesh, you can control traffic flows and API calls between services while also gaining visibility into your traffic. This makes calls more reliable and your network more robust, even in adverse conditions, while enabling you to catch issues before they become problems.Security simplifiedSecuring your service mesh can feel daunting. Cloud Service Mesh helps you embrace a zero-trust security model by giving you the tools to automatically and declaratively secure your services and their communication. You can manage authentication, authorization, and encryption between services with a diverse set of features—all with little or no changes to the applications themselves.Fault injection toolsEven with robust failure-recovery features, it’s critical to test your mesh’s resilience. That’s where fault injection comes in. 
You can easily configure delay and abort faults to be injected into requests that match certain conditions, and even restrict the percentage of requests that should be subjected to faults.Flexible authorizationDecide who has access to what services in your mesh with easy-to-use role-based access control (RBAC). You specify the permissions, then grant access to them at the level you choose, from namespace all the way down to users.What's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postAnnouncing Cloud Service MeshRead the blogVideoIntroducing Cloud Service Mesh: A fully managed global scale service meshWatch videoReportNext '24 Presentation: Introducing Cloud Service MeshLearn moreDocumentationFind documentation and resources for Cloud Service MeshGoogle Cloud BasicsCloud Service Mesh overviewGet an overview of Cloud Service Mesh and key implementation options. Learn moreGoogle Cloud BasicsSupported platformsLearn about various environments supported by Cloud Service Mesh.Learn moreGoogle Cloud BasicsObservability guideLearn how Cloud Service Mesh provides observability into the health and performance of your services.Learn moreGoogle Cloud BasicsSecurity guideLearn how Cloud Service Mesh helps you mitigate insider threats and reduce the risk of a data breach by ensuring that all communications between workloads are encrypted, mutually authenticated, and authorized.Learn moreQuickstartDeploy Cloud Service Mesh on GKEEnable and provision Cloud Service Mesh on Google Kubernetes Engine (GKE).Learn moreQuickstartDeploy Cloud Service Mesh on GCEEnable and provision Cloud Service Mesh on Google Compute Engine (GCE).Learn moreQuickstartDeploy Cloud Service Mesh outside Google CloudEnable and provision Cloud Service Mesh in hybrid or multicloud environments.Learn moreNot seeing what you’re looking for?View all product documentationPricingPricingCloud Service Mesh is available as part of GKE Enterprise or as a standalone offering on Google Cloud. Google APIs enabled on the project determine how you are billed. If you want to use Cloud Service Mesh on-premises or on other clouds, you must subscribe to GKE Enterprise. GKE Enterprise customers are not billed separately for Cloud Service Mesh because it is already included in the GKE Enterprise pricing. To use Cloud Service Mesh as a standalone service, don't enable the GKE Enterprise API on your project.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorials \ No newline at end of file diff --git a/Cloud_Shell.txt b/Cloud_Shell.txt new file mode 100644 index 0000000000000000000000000000000000000000..e89e0fa08bc29500ef22c64628f6416b0362168a --- /dev/null +++ b/Cloud_Shell.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/shell/docs +Date Scraped: 2025-02-23T12:05:34.414Z + +Content: +Home Cloud Shell Documentation Stay organized with collections Save and categorize content based on your preferences. Cloud Shell documentation View all product documentation Cloud Shell is an interactive shell environment for Google Cloud that lets you learn and experiment with Google Cloud and manage your projects and resources from your web browser.
With Cloud Shell, the Google Cloud CLI and other utilities you need are pre-installed, fully authenticated, up-to-date, and always available when you need them. Cloud Shell comes with a built-in code editor with an integrated Cloud Code experience, allowing you to develop, build, debug, and deploy your cloud-based apps entirely in the cloud. You can also launch interactive tutorials, open cloned repositories, and preview web apps on a Cloud Shell virtual machine instance. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstart: Run gcloud commands with Cloud Shell Quickstart: Deploy a GKE app by using Cloud Shell Use Cloud Shell Use Web Preview Launch the Cloud Shell Editor Cloud Shell Editor interface overview Get started with Cloud Code Configure Cloud Shell Disable or reset Cloud Shell find_in_page Reference Tutorial Markdown Reference REST API RPC API info Resources Pricing Get support Limitations and restrictions How Cloud Shell works Training Training and tutorials Google Cloud Fundamentals: Core Infrastructure These lectures, demos, and hands-on labs give you an overview of Google Cloud products and services so that you can learn the value of Google Cloud and how to incorporate cloud-based solutions into your business strategies. Learn more arrow_forward Training Training and tutorials Codelab: Build and launch an ASP.NET Core app from Cloud Shell Steps you through how to build and launch an ASP.NET Core app from Cloud Shell without ever leaving the browser. Learn more arrow_forward Training Training and tutorials Create interactive tutorials in the Cloud console Learn about interactive tutorials by working through an actual interactive tutorial. Learn more arrow_forward Training Training and tutorials Codelab: Get started with Cloud Shell and the gcloud CLI Steps you through connecting to computing resources hosted on Google Cloud using Cloud Shell command-line access. You'll learn how to use the Cloud SDK gcloud command. Learn more arrow_forward Related videos Try Google Cloud for yourself Create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads. Get started for free \ No newline at end of file diff --git a/Cloud_Source_Repositories.txt b/Cloud_Source_Repositories.txt new file mode 100644 index 0000000000000000000000000000000000000000..725b329d387d6284f3212438e28eb4e6bae61c1d --- /dev/null +++ b/Cloud_Source_Repositories.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/source-repositories/docs +Date Scraped: 2025-02-23T12:04:31.051Z + +Content: +Effective June 17, 2024, Cloud Source Repositories isn't available to new customers. If your organization hasn't previously used Cloud Source Repositories, you can't enable the API or use Cloud Source Repositories. New projects not connected to an organization can't enable the Cloud Source Repositories API. Organizations that have used Cloud Source Repositories prior to June 17, 2024 are not affected by this change. Home Cloud Source Repositories Documentation Stay organized with collections Save and categorize content based on your preferences. 
Cloud Source Repositories documentation View all product documentation Cloud Source Repositories are fully featured, private Git repositories hosted on Google Cloud. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Setting up local authentication Cloning a repository Mirroring a GitHub repository Adding a repository as a remote Pushing code from an existing repository Creating an empty repository Mirroring a Bitbucket repository find_in_page Reference REST API RPC API info Resources Pricing and quotas Troubleshooting Support Release notes Related videos \ No newline at end of file diff --git a/Cloud_Storage(1).txt b/Cloud_Storage(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..80325276fdb1798abc2107dec57830e960aa0a1b --- /dev/null +++ b/Cloud_Storage(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/storage +Date Scraped: 2025-02-23T12:09:49.056Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Cloud StorageObject storage for companies of all sizesCloud Storage is a managed service for storing unstructured data. Store any amount of data and retrieve it as often as you like.Try it in consoleContact salesLooking for something else? Check out personal, team, and block storage options.Product highlightsAutomatically transition to lower-cost storage classesStandard, nearline, coldline, and archive storage optionsFast, low-cost, highly durable archive and backup storageCloud Storage in a minute 1 min videoFeaturesLeading analytics and ML/AI toolsOnce your data is stored in Cloud Storage, easily plug into Google Cloud’s powerful tools to create your data warehouse with BigQuery, run open source analytics with Dataproc, or build and deploy machine learning (ML) models with Vertex AI. Start querying Cloud Storage data with BigQueryAutomatic storage class transitionsWith features like Object Lifecycle Management (OLM) and Autoclass you can easily optimize costs with object placement across storage classes. You can enable, at the bucket level, policy-based automatic object movement to colder storage classes based on the last access time. There are no early deletion or retrieval fees, nor class transition charges for object access in colder storage classes. IDC projects that Cloud Storage customers can realize average annual benefits worth $86,500 per petabyte. Read report.Continental-scale and SLA-backed replicationIndustry leading dual-region buckets support an expansive number of regions. A single, continental-scale bucket offers nine regions across three continents, providing a Recovery Time Objective (RTO) of zero. In the event of an outage, applications seamlessly access the data in the alternate region. There is no failover and failback process. For organizations requiring ultra availability, turbo replication with dual-region buckets offers a 15 minute Recovery Point Objective (RPO) SLA. Use Cloud Storage as a local filesystemCloud Storage is a common choice for storing training data, models, and checkpoints for machine learning projects in Cloud Storage buckets. 
With Cloud Storage FUSE, you can take advantage of the scale, affordability, throughput, and simplicity that Cloud Storage provides, while maintaining compatibility with applications that use or require filesystem semantics. Cloud Storage FUSE also now offers caching, which provides up to 2.2x faster time to train and 2.9x higher training throughput, compared to native ML framework dataloaders.Learn how Cloud Storage FUSE is optimized for GKE and AI workloadsManage your object storage with file inventory reportsInventory reports contain metadata information about your objects, such as the object's storage class, ETag, and content type. This information helps you analyze your storage costs, audit and validate your objects, and ensure data security and compliance. You can export inventory reports as comma-separated value (CSV) or Apache Parquet files so you can further analyze it using tools, such as BigQuery. Learn more about Storage Insights inventory reports.Fast and flexible transfer servicesStorage Transfer Service offers a highly performant, online pathway to Cloud Storage—both with the scalability and speed you need to simplify the data transfer process. For offline data transfer our Transfer Appliance is a shippable storage server that sits in your datacenter and then ships to an ingest location where the data is uploaded to Cloud Storage.VIDEOIntroducing Google Cloud’s Transfer Appliance2:54Default and configurable data securityCloud Storage offers secure-by-design features to protect your data and advanced controls and capabilities to keep your data private and secure against leaks or compromises. Security features include access control policies, data encryption, retention policies, retention policy locks, and signed URLs.Object lifecycle managementDefine conditions that trigger data deletion or transition to a cheaper storage class.Object Versioning Continue to store old copies of objects when they are deleted or overwritten.Soft deleteSoft delete offers improved protection against accidental and malicious data deletion by providing you with a way to retain and restore recently deleted data.Retention policiesDefine minimum retention periods that objects must be stored for before they’re deletable.Object holdsPlace a hold on an object to prevent its deletion.Customer-managed encryption keysEncrypt object data with encryption keys stored by the Cloud Key Management Service and managed by you.Customer-supplied encryption keys Encrypt object data with encryption keys created and managed by you.Uniform bucket-level accessUniformly control access to your Cloud Storage resources by disabling object ACLs.Requester paysRequire accessors of your data to include a project ID to bill for network charges, operation charges, and retrieval fees.Bucket LockBucket Lock allows you to configure a data retention policy for a Cloud Storage bucket that governs how long objects in the bucket must be retained.Pub/Sub notifications for Cloud Storage Send notifications to Pub/Sub when objects are created, updated, or deleted.Cloud Audit Logs with Cloud StorageMaintain admin activity logs and data access logs for your Cloud Storage resources.Object- and bucket-level permissionsCloud Identity and Access Management (IAM) allows you to control who has access to your buckets and objects.View all featuresStorage optionsStorage typeDescriptionBest forStandard storageStorage for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time."Hot" data, including websites, streaming 
videos, and mobile apps.Nearline storageLow cost, highly durable storage service for storing infrequently accessed data.Data that can be stored for 30 days.Coldline storageA very low cost, highly durable storage service for storing infrequently accessed data.Data that can be stored for 90 days.Archive storageThe lowest cost, highly durable storage service for data archiving, online backup, and disaster recovery.Data that can be stored for 365 days.Learn more about storage classes.Standard storageDescriptionStorage for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time.Best for"Hot" data, including websites, streaming videos, and mobile apps.Nearline storageDescriptionLow cost, highly durable storage service for storing infrequently accessed data.Best forData that can be stored for 30 days.Coldline storageDescriptionA very low cost, highly durable storage service for storing infrequently accessed data.Best forData that can be stored for 90 days.Archive storageDescriptionThe lowest cost, highly durable storage service for data archiving, online backup, and disaster recovery.Best forData that can be stored for 365 days.Learn more about storage classes.How It WorksTo use Cloud Storage, you’ll first create a bucket, a basic container that holds your data in Cloud Storage. You’ll then upload objects into that bucket—where you can download, share, and manage objects.View documentationWhat is Cloud Storage? (2:05)Common UsesBackups and archivesUse Cloud Storage for backup, archives, and recoveryCloud Storage's nearline storage provides fast, low-cost, highly durable storage for data accessed less than once a month, reducing the cost of backups and archives while still retaining immediate access. Backup data in Cloud Storage can be used for more than just recovery because all storage classes have ms latency and are accessed through a single API.Learn more about nearline storageTutorials, quickstarts, & labsUse Cloud Storage for backup, archives, and recoveryCloud Storage's nearline storage provides fast, low-cost, highly durable storage for data accessed less than once a month, reducing the cost of backups and archives while still retaining immediate access. Backup data in Cloud Storage can be used for more than just recovery because all storage classes have ms latency and are accessed through a single API.Learn more about nearline storageMedia content storage and deliveryStore data to stream audio or video Stream audio or video directly to apps or websites with Cloud Storage’s geo-redundant capabilities. Geo-redundant storage with the highest level of availability and performance is ideal for low-latency, high-QPS content serving to users distributed across geographic regions.Learn more about geo-redundant storageLearn how to set up a Media CDN, for planet-scale media delivery Tutorial: Design for scale and high availabilityGoogle Cloud Skills Boost: Storing Image and Video Files in Cloud Storage - PythonUnlock infinite capacity and innovationCloud Storage’s multi-regional performance and availability powers the world’s biggest media and entertainment companies. Build a hybrid render farm using Cloud StorageCreate a virtual workstation with Cloud StorageLearn how Vimeo uses multi-regional storage for fast, resumable uploadsTutorials, quickstarts, & labsStore data to stream audio or video Stream audio or video directly to apps or websites with Cloud Storage’s geo-redundant capabilities. 
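To make the "How It Works" description above (create a bucket, then upload, download, and manage objects in it) concrete, here is a minimal sketch using the google-cloud-storage Python client library. The bucket name, object path, local file names, and region are hypothetical placeholders, and the snippet assumes Application Default Credentials are already configured; it illustrates the basic flow under those assumptions rather than reproducing any official sample.

from google.cloud import storage

# Create a client; authentication comes from Application Default Credentials (assumed).
client = storage.Client()

# Create a bucket -- the basic container that holds your data in Cloud Storage.
bucket = client.bucket("example-bucket-name")   # placeholder bucket name
bucket.storage_class = "STANDARD"               # could also be NEARLINE, COLDLINE, or ARCHIVE
bucket = client.create_bucket(bucket, location="us-central1")

# Upload an object into the bucket, download it back, and list the bucket's objects.
blob = bucket.blob("reports/2025/example.txt")
blob.upload_from_filename("example.txt")        # placeholder local file
blob.download_to_filename("example-copy.txt")
for obj in client.list_blobs("example-bucket-name"):
    print(obj.name)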
Geo-redundant storage with the highest level of availability and performance is ideal for low-latency, high-QPS content serving to users distributed across geographic regions.Learn more about geo-redundant storageLearn how to set up a Media CDN, for planet-scale media delivery Tutorial: Design for scale and high availabilityGoogle Cloud Skills Boost: Storing Image and Video Files in Cloud Storage - PythonLearning resourcesUnlock infinite capacity and innovationCloud Storage’s multi-regional performance and availability powers the world’s biggest media and entertainment companies. Build a hybrid render farm using Cloud StorageCreate a virtual workstation with Cloud StorageLearn how Vimeo uses multi-regional storage for fast, resumable uploadsData lakes and big data analyticsCreate a data lake for analyticsDevelop and deploy data pipelines and storage to analyze large amounts of data. Cloud Storage offers high availability and performance while being strongly consistent, giving you confidence and accuracy in analytics workloads.Learn more about data lakes using Cloud Storage Run Apache Hadoop or Apache Spark on data with data connectorGoogle Cloud Skills Boost: Modernizing Data Lakes and Data Warehouses with Google CloudGoogle Cloud Skills Boost: Google Cloud Big Data and Machine Learning FundamentalsTutorials, quickstarts, & labsCreate a data lake for analyticsDevelop and deploy data pipelines and storage to analyze large amounts of data. Cloud Storage offers high availability and performance while being strongly consistent, giving you confidence and accuracy in analytics workloads.Learn more about data lakes using Cloud Storage Run Apache Hadoop or Apache Spark on data with data connectorGoogle Cloud Skills Boost: Modernizing Data Lakes and Data Warehouses with Google CloudGoogle Cloud Skills Boost: Google Cloud Big Data and Machine Learning FundamentalsMachine learning and AIDeploy an AI summarization solution in the Google Cloud consoleLaunch a Google-recommended, preconfigured solution that uses generative AI to quickly extract text and summarize large documents stored in Cloud Storage.Deploy in consoleWatch how to store data for machine learningStart using Vertex AI with your stored dataNuro powers its autonomous vehicles' rapid decision-making with Cloud StorageTutorials, quickstarts, & labsDeploy an AI summarization solution in the Google Cloud consoleLaunch a Google-recommended, preconfigured solution that uses generative AI to quickly extract text and summarize large documents stored in Cloud Storage.Deploy in consoleWatch how to store data for machine learningStart using Vertex AI with your stored dataNuro powers its autonomous vehicles' rapid decision-making with Cloud StorageHost a website Build, host, and run dynamic websites in the Google Cloud consoleLaunch a sample drop-ship retail product website that’s publicly accessible and customizable, leveraging Python and JavaScript.Deploy in consoleTutorial: Configure a Cloud Storage bucket to host a static website for a domain you ownVideo: 3 ways to run scalable web apps on Google CloudTutorials, quickstarts, & labsBuild, host, and run dynamic websites in the Google Cloud consoleLaunch a sample drop-ship retail product website that’s publicly accessible and customizable, leveraging Python and JavaScript.Deploy in consoleTutorial: Configure a Cloud Storage bucket to host a static website for a domain you ownVideo: 3 ways to run scalable web apps on Google CloudPricingHow Cloud Storage pricing worksPricing for Cloud Storage services 
is primarily based on location and storage class. Additional usage-based data processing and data transfer charges may also apply.Service usage and typeDescriptionPrice (USD)Always Free usageAll customers get 5 GiB of US regional storage free per month, not charged against your credits. Learn more about Always Free limits.FreeStorage classStandard storageBest for frequently accessed ("hot" data) and/or stored for only brief periods of time.Starting at$.02per GiB per monthNearline storageBest for service for storing infrequently accessed data.Starting at$.01per GiB per monthColdline storageBest for storing infrequently accessed data.Starting at$.004per GiB per monthArchive storageBest for data archiving, online backup, and disaster recovery.Starting at$.0012per GiB per monthData transfer and special network service ratesData transfer within Google CloudApplies when moving, copying, accessing data in Cloud Storage or between Google Cloud services.Check your network product for pricing information.General network usageGeneral network usage applies for any data that does not fall into one of the above categories or the Always Free usage limits.Ranges from $0.12-$0.20based on monthly usageLearn more about Cloud Storage pricing. View all pricing details.How Cloud Storage pricing worksPricing for Cloud Storage services is primarily based on location and storage class. Additional usage-based data processing and data transfer charges may also apply.Always Free usageDescriptionAll customers get 5 GiB of US regional storage free per month, not charged against your credits. Learn more about Always Free limits.Price (USD)FreeStorage classDescriptionStandard storageBest for frequently accessed ("hot" data) and/or stored for only brief periods of time.Price (USD)Starting at$.02per GiB per monthNearline storageBest for service for storing infrequently accessed data.DescriptionStarting at$.01per GiB per monthColdline storageBest for storing infrequently accessed data.DescriptionStarting at$.004per GiB per monthArchive storageBest for data archiving, online backup, and disaster recovery.DescriptionStarting at$.0012per GiB per monthData transfer and special network service ratesDescriptionData transfer within Google CloudApplies when moving, copying, accessing data in Cloud Storage or between Google Cloud services.Price (USD)Check your network product for pricing information.General network usageDescriptionGeneral network usage applies for any data that does not fall into one of the above categories or the Always Free usage limits.Price (USD)Ranges from $0.12-$0.20based on monthly usageLearn more about Cloud Storage pricing. View all pricing details.Pricing calculatorEstimate your monthly Cloud Storage charges, including cluster management fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry Cloud Storage in the consoleGo to my consoleHave a large project?Contact salesLearn how to get started with Cloud StorageRead guideIntroduction and best practices for Cloud StorageWatch videoHow to name bucketsRead guideBusiness CaseExplore how companies are using Cloud StorageVimeo is delivering more high-quality videos than ever while reducing costs.Vimeo replaced its servers for accepting video uploads with Cloud Storage, fronted by the Fastly edge cloud to help ensure regional routing and low-latency, high-throughput connections for Vimeo’s publishers. 
Multi-regional Cloud Storage offers fast, resumable upload capability that helps make for better user experience.Read customer storyRelated contentMixpanel maintains excellent performance and reliability while keeping its engineers focused on innovationRobust building blocks that exist on top of core data storage, computing, and network services help take away much of the backend hassle on the way to new product creationRedSalud meets telemedicine challenges by implementing SAP for Google CloudTrusted by top retailers Target and Carrefour.Used by top financial institutions KeyBank, PayPal, and Commerzbank.Powering top autonomous vehicle companies Cruise and Nuro.FAQDoes Cloud Storage work for personal photos, videos, and files?No. For personal storage options, including photos, device backup, VPN, and more, visit Google One. How does Cloud Storage differ from other types of storage? Cloud Storage is a service for storing objects in Google Cloud. An object is an immutable piece of data consisting of a file of any format. You store objects in containers called buckets. All buckets are associated with a project, and you can group your projects under an organization. Learn more about other Google Cloud storage products, including block storage, data transfer, and file storage. Other inquiries and supportBilling and troubleshootingAsk the community \ No newline at end of file diff --git a/Cloud_Storage(2).txt b/Cloud_Storage(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..3d1564c917d842d6b2cebf0d3c5f2a43cc5c2068 --- /dev/null +++ b/Cloud_Storage(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/storage/pricing +Date Scraped: 2025-02-23T12:11:03.846Z + +Content: +Home Pricing Cloud Storage Send feedback Stay organized with collections Save and categorize content based on your preferences. Cloud Storage pricing This document discusses pricing for Cloud Storage. For Google Drive, which offers simple online storage for your personal files, see Google Drive pricing. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Overview Cloud Storage pricing is based on the following components: Data storage: the amount of data stored in your buckets. Storage rates vary depending on the storage class of your data and location of your buckets. Data processing: the processing done by Cloud Storage, which includes operations charges, any applicable retrieval fees, and inter-region replication. Network usage: the amount of data read from or moved between your buckets. Try it for yourself If you're new to Google Cloud, create an account to evaluate how Cloud Storage performs in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads. Try Cloud Storage free Pricing tables Important: Pricing updates for Cloud Storage took effect on Oct. 1, 2022 and on Apr. 1, 2023. The pricing tables below show what charges apply when using Cloud Storage. For example scenarios that show usage and charges, see the Pricing examples page. For the Google Cloud pricing calculator, see the Calculator page.
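As a rough illustration of how the three components in the Overview combine, the following Python sketch estimates a monthly bill for a hypothetical single-region workload. The rates are examples taken from the tables below (us-central1 Standard storage, flat namespace, worldwide general network usage in the 0-1 TB tier), the workload numbers and function name are invented for illustration, and the sketch ignores retrieval fees (which are $0 for Standard storage) and Always Free allowances; the Pricing examples page and the pricing calculator remain the authoritative sources.

# Rough monthly cost sketch for a single-region (us-central1) Standard storage bucket.
# Example rates from the tables below; treat them as illustrative, not authoritative.
STORAGE_PER_GB_MONTH = 0.020   # Standard storage, Iowa (us-central1)
CLASS_A_PER_1000 = 0.0050      # Class A operations, flat namespace
CLASS_B_PER_1000 = 0.0004      # Class B operations, flat namespace
EGRESS_PER_GB = 0.12           # general network usage, 0-1 TB tier, worldwide (excl. Asia and Australia)

def estimate_monthly_cost(gb_stored, class_a_ops, class_b_ops, egress_gb):
    """Combine the data storage, data processing, and network usage components."""
    storage = gb_stored * STORAGE_PER_GB_MONTH
    processing = (class_a_ops / 1000) * CLASS_A_PER_1000 + (class_b_ops / 1000) * CLASS_B_PER_1000
    network = egress_gb * EGRESS_PER_GB
    return storage + processing + network

# Hypothetical workload: 500 GB stored, 100k uploads, 1M reads, 50 GB served to the internet.
print(round(estimate_monthly_cost(500, 100_000, 1_000_000, 50), 2))  # 16.9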
Data storage Click on a geographic area to view the at-rest costs for associated locations: Regions North America Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Iowa (us-central1) $0.020 $0.010 $0.004 $0.0012 South Carolina (us-east1) $0.020 $0.010 $0.004 $0.0012 Northern Virginia (us-east4) $0.023 $0.013 $0.006 $0.0025 Columbus (us-east5) $0.020 $0.010 $0.004 $0.0012 Oregon (us-west1) $0.020 $0.010 $0.004 $0.0012 Los Angeles (us-west2) $0.023 $0.016 $0.007 $0.0025 Salt Lake City (us-west3) $0.023 $0.016 $0.007 $0.0025 Las Vegas (us-west4) $0.023 $0.013 $0.006 $0.0025 Dallas (us-south1) $0.020 $0.010 $0.004 $0.0012 Montréal (northamerica-northeast1) $0.023 $0.013 $0.007 $0.0025 Toronto (northamerica-northeast2) $0.023 $0.013 $0.007 $0.0025 Mexico (northamerica-south1) $0.026 $0.018 $0.006 $0.0027 South America Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) São Paulo (southamerica-east1) $0.035 $0.020 $0.007 $0.0030 Santiago (southamerica-west1) $0.030 $0.018 $0.006 $0.0027 Europe Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Warsaw (europe-central2) $0.023 $0.013 $0.006 $0.0025 Finland (europe-north1) $0.020 $0.010 $0.004 $0.0012 Belgium (europe-west1) $0.020 $0.010 $0.004 $0.0012 London (europe-west2) $0.023 $0.013 $0.007 $0.0025 Frankfurt (europe-west3) $0.023 $0.013 $0.006 $0.0025 Netherlands (europe-west4) $0.020 $0.010 $0.004 $0.0012 Zürich (europe-west6) $0.025 $0.014 $0.007 $0.0025 Milan (europe-west8) $0.023 $0.013 $0.006 $0.0025 Paris (europe-west9) $0.023 $0.013 $0.006 $0.0025 Berlin (europe-west10) $0.025 $0.014 $0.007 $0.0024 Turin (europe-west12) $0.023 $0.013 $0.006 $0.0025 Madrid (europe-southwest1) $0.023 $0.013 $0.006 $0.0025 Middle East Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Doha (me-central1) $0.023 $0.013 $0.006 $0.0025 Dammam (me-central2) $0.03 $0.018 $0.006 $0.0027 Tel Aviv (me-west1) $0.021 $0.0125 $0.004 $0.0012 Asia Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Taiwan (asia-east1) $0.020 $0.010 $0.005 $0.0015 Hong Kong (asia-east2) $0.023 $0.016 $0.007 $0.0025 Tokyo (asia-northeast1) $0.023 $0.016 $0.006 $0.0025 Osaka (asia-northeast2) $0.023 $0.016 $0.006 $0.0025 Seoul (asia-northeast3) $0.023 $0.016 $0.006 $0.0025 Mumbai (asia-south1) $0.023 $0.016 $0.006 $0.0025 Delhi (asia-south2) $0.023 $0.016 $0.006 $0.0025 Singapore (asia-southeast1) $0.020 $0.010 $0.005 $0.0015 Indonesia Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Jakarta (asia-southeast2) $0.023 $0.016 $0.006 $0.0025 Africa Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Johannesburg (africa-south1) $0.025 $0.014 $0.007 $0.002499996 Australia Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Sydney (australia-southeast1) $0.023 $0.016 $0.006 $0.0025 Melbourne (australia-southeast2) $0.023 $0.016 $0.006 
$0.0025 Dual-regions United States Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Iowa (us-central1) $0.022 $0.011 $0.0044 $0.0014 South Carolina (us-east1) $0.022 $0.011 $0.0044 $0.0014 Northern Virginia (us-east4) $0.0253 $0.0143 $0.0066 $0.0028 Columbus (us-east5) $0.022 $0.011 $0.0044 $0.0014 Oregon (us-west1) $0.022 $0.011 $0.0044 $0.0014 Los Angeles (us-west2) $0.0253 $0.0176 $0.0077 $0.0028 Salt Lake City (us-west3) $0.0253 $0.0176 $0.0077 $0.0028 Las Vegas (us-west4) $0.0253 $0.0143 $0.0066 $0.0028 Dallas (us-south1) $0.022 $0.011 $0.0044 $0.0014 Iowa and South Carolina (nam4) $0.044 $0.022 $0.0088 $0.0028 Canada Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Montréal (northamerica-northeast1) $0.0253 $0.0143 $0.0077 $0.0028 Toronto (northamerica-northeast2) $0.0253 $0.0143 $0.0077 $0.0028 Europe Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Warsaw (europe-central2) $0.0253 $0.0143 $0.0066 $0.0028 Finland (europe-north1) $0.022 $0.011 $0.0044 $0.0014 Madrid (europe-southwest1) $0.0253 $0.0143 $0.0066 $0.0028 Belgium (europe-west1) $0.022 $0.011 $0.0044 $0.0014 Frankfurt (europe-west3) $0.0253 $0.0143 $0.0066 $0.0028 Netherlands (europe-west4) $0.022 $0.011 $0.0044 $0.0014 Milan (europe-west8) $0.0253 $0.0143 $0.0066 $0.0028 Paris (europe-west9) $0.0253 $0.0143 $0.0066 $0.0028 Finland and Netherlands (eur4) $0.044 $0.022 $0.0088 $0.0028 Belgium and London (eur5) $0.0473 $0.0253 $0.0121 $0.0041 Frankfurt and London (eur7) $0.0506 $0.0286 $0.0143 $0.0055 Frankfurt and Zürich (eur8) $0.0528 $0.0297 $0.0143 $0.0055 Asia Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Taiwan (asia-east1) $0.022 $0.011 $0.0055 $0.0017 Singapore (asia-southeast1) $0.022 $0.011 $0.0055 $0.0017 Tokyo and Osaka (asia1) $0.0506 $0.0352 $0.0132 $0.0056 India Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Mumbai (asia-south1) $0.0253 $0.0176 $0.0066 $0.0028 Delhi (asia-south2) $0.0253 $0.0176 $0.0066 $0.0028 Australia Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) Sydney (australia-southeast1) $0.0253 $0.0176 $0.0066 $0.0028 Melbourne (australia-southeast2) $0.0253 $0.0176 $0.0066 $0.0028 Multi-regions Location Standard storage(per GB per Month) Nearline storage(per GB per Month) Coldline storage(per GB per Month) Archive storage(per GB per Month) US (United States multi-region) $0.026 $0.015 $0.007 $0.0024 EU (European Union multi-region) $0.026 $0.015 $0.007 $0.0024 Asia (Asia multi-region) $0.026 $0.015 $0.00875 $0.0030 If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Dual-regions are billed to both underlying regions at the above prices. For example, Standard Storage in a dual-region comprised of Iowa and Oregon will be billed at $0.022 per GB per month for the us-central1 dual-region SKU and $0.022 per GB per month for the us-west1 dual-region SKU. 
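To make the Iowa and Oregon example above concrete, the short sketch below (illustrative only, using the $0.022 per GB per month rates quoted in that example and a made-up storage volume) shows that the same byte stored in a configurable dual-region is billed once against each underlying region's dual-region SKU, so the effective rate is the sum of the two.

# Dual-region billing sketch for the Iowa (us-central1) + Oregon (us-west1) example above.
US_CENTRAL1_DUAL_REGION_RATE = 0.022  # $/GB per month, Standard storage
US_WEST1_DUAL_REGION_RATE = 0.022     # $/GB per month, Standard storage

gb_stored = 1_000  # hypothetical: 1,000 GB of Standard storage kept for a full month
monthly_cost = gb_stored * (US_CENTRAL1_DUAL_REGION_RATE + US_WEST1_DUAL_REGION_RATE)
print(monthly_cost)  # 44.0 -> an effective $0.044 per GB per month across the two regional SKUs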
The six predefined dual-regions asia1, eur4, eur5, eur7, eur8, and nam4 bill usage against their locational SKUs at the prices listed. Data storage charges are prorated to the sub-second for each object, and data storage rates are based on the storage class of each object, not the default storage class set on the bucket that contains them. Data storage charges apply in the same way to live objects, noncurrent objects, and soft-deleted objects. In addition to the data contained in your uploaded objects, the following count toward your monthly storage usage: Custom metadata. For example, for the custom metadata NAME:VALUE, Cloud Storage counts each character in NAME and VALUE as a byte stored with the object. Uploaded parts of an XML API multipart upload, until the multipart upload is either completed or aborted. Minimum storage duration A minimum storage duration applies to data stored using Nearline storage, Coldline storage, or Archive storage. The following table shows the minimum storage duration for each storage class: Standard storage Nearline storage Coldline storage Archive storage None 30 days 90 days 365 days If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. You can delete, replace, or move an object before it has been stored for the minimum duration, but at the time you delete, replace, or move the object, you are charged as if the object was stored for the minimum duration. See the early deletion example to see how charges apply. Note the following regarding minimum storage durations and early deletion charges: Early deletion charges are billed through early delete SKUs. Early deletion charges apply when rewriting objects, such as when you change an object's storage class, because a rewrite replaces the existing object. Early deletion charges do not apply in the following cases: When Object Lifecycle Management changes an object's storage class. When the object exists in a bucket that has Autoclass enabled. In buckets that use Object Versioning, early deletion charges apply when a noncurrent object is deleted, not when it became noncurrent. That object will become soft deleted if Soft Delete is enabled. Otherwise it will be permanently deleted. In buckets that use Soft Delete, early deletion charges apply when an object is soft deleted. These charges are reduced based on the length of the Soft Delete retention duration. For XML API multipart uploads, a part is subject to early deletion charges if it's not used when assembling the final object, or if the part is overwritten by another part, or if the multipart upload is aborted. The storage duration for each part in a multipart upload begins at the time the upload of the part completes, and the storage duration for the overall object begins when the object is assembled. Tags Each tag that you attach to a bucket is charged at $0.005 per month. Data processing Data processing costs consist of the following: Operation charges for all requests made to Cloud Storage Retrieval fees for reading data stored in certain storage classes Inter-region replication charges for data written to dual-regions and multi-regions Autoclass charges for buckets with Autoclass enabled Operation charges Operation charges apply when you perform operations within Cloud Storage. An operation is an action that makes changes to or requests information about resources such as buckets and objects in Cloud Storage. Operations are divided into three categories: Class A, Class B, and free. 
See below for a breakdown of which operations fall into each class. For buckets located in a single region: Flat Namespace Storage Class1 Class A operations(per 1,000 operations) Class B operations(per 1,000 operations) Free operations Standard storage $0.0050 $0.0004 Free Nearline storage and Durable Reduced Availability (DRA) storage $0.0100 $0.0010 Free Coldline storage $0.0200 $0.0100 Free Archive storage $0.0500 $0.0500 Free Hierarchical Namespace (HNS) Storage Class1 Class A operations(per 1,000 operations) Class B operations(per 1,000 operations) Free operations Standard storage $0.0065 $0.0005 Free Nearline storage and Durable Reduced Availability (DRA) storage $0.0130 $0.0013 Free Coldline storage $0.0260 $0.0130 Free Archive storage $0.0650 $0.0650 Free If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. For buckets located in a dual-region or multi-region: Flat Namespace Storage Class1 Class A operations(per 1,000 operations) Class B operations(per 1,000 operations) Free operations Standard storage $0.0100 $0.0004 Free Nearline storage and Durable Reduced Availability (DRA) storage $0.0200 $0.0010 Free Coldline storage $0.0400 $0.0100 Free Archive storage $0.1000 $0.0500 Free Hierarchical Namespace (HNS) Storage Class1 Class A operations(per 1,000 operations) Class B operations(per 1,000 operations) Free operations Standard storage $0.0130 $0.0005 Free Nearline storage and Durable Reduced Availability (DRA) storage $0.0260 $0.0013 Free Coldline storage $0.0520 $0.0130 Free Archive storage $0.1300 $0.0650 Free If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. 1 The storage class for an operation is determined by the following considerations: When listing buckets in a project, the Class A Standard storage rate always applies. When an operation applies to a bucket, such as listing the objects in a bucket, the default storage class set for that bucket determines the operation cost. When an operation applies to a tag, such as attaching or detaching tags, the default storage class set for the tagged bucket determines the operation cost. When an operation applies to an object, the storage class of that object determines the operation cost. The following are exceptions to this rule: When changing the storage class of an object, either yourself or with Object Lifecycle Management, the Class A rate associated with the object's destination storage class applies. For example, changing an object from Standard storage to Coldline storage using Object Lifecycle Management counts as a Class A operation and is billed at the Class A operation rate for Coldline storage. Changing storage classes can have a significant billing impact, especially when a high proportion of objects are under 1 MB in size. When changing the storage class of an object using Autoclass, most transitions are free. However, the Class A Standard storage rate applies for transitions from Coldline storage or Archive storage to Standard storage or Nearline storage. In buckets with Autoclass enabled, operations are always charged at the Standard storage rate. In buckets with Soft Delete enabled, restore operations are always charged at the Standard storage rate. In buckets that use Soft Delete, one Class A Standard storage operation is charged per 1,000 objects processed as part of a bulk restore operation, rounded up so that at least one Class A operation is always billed. 
This is in addition to the operations charge assessed per object restored. Operations that fall into each class The following table lists the operations that fall into each class for the JSON API, XML API, and gRPC. Keep in mind the following: Except as noted in the footnotes, each request is considered one operation, regardless of the content sent or received as part of the request. Tools such as the Google Cloud console, the Google Cloud CLI, and the Cloud Storage client libraries might use two or more operations to perform a task. For example, when you click on a bucket name in the Google Cloud console, the system performs an operation to get the list of objects in the bucket and a separate operation to get the metadata for the bucket. The Google Cloud console uses the JSON API to make requests. Other tools might use either or both the JSON API and XML API. Consult the tool's reference documentation for information about the underlying API that it uses. API or Feature Class A Operations Class B Operations Free Operations JSON API or gRPC storage.*.insert1 storage.*.patch storage.*.update storage.*.setIamPolicy storage.buckets.list storage.buckets.lockRetentionPolicy storage.notifications.delete storage.objects.compose storage.objects.copy storage.objects.list storage.objects.restore storage.objects.rewrite1 storage.objects.watchAll storage.projects.hmacKeys.create storage.projects.hmacKeys.list storage.*AccessControls.delete storage.folders.list storage.folders.rename storage.*.get storage.*.getIamPolicy storage.*.testIamPermissions storage.*AccessControls.list storage.notifications.list Each object change notification2 storage.buckets.getStorageLayout storage.channels.stop storage.buckets.delete storage.objects.delete storage.projects.hmacKeys.delete storage.folders.delete XML API GET Service GET Bucket (when listing objects in a bucket) PUT POST GET Bucket (when retrieving bucket configuration or when listing ongoing multipart uploads) GET Object HEAD DELETE Object Lifecycle Management SetStorageClass AbortIncompleteMultipartUpload Delete Autoclass The following storage class transitions: Coldline to Standard Archive to Standard Coldline to Nearline Archive to Nearline The following storage class transitions: Nearline to Standard Standard to Nearline Nearline to Coldline Coldline to Archive Tags3 Attach a tag Detach a tag List tags attached to a bucket Soft Delete Process 1,000 objects during a bulk restore Restore a soft-deleted object List soft-deleted objects In Hierarchical Namespace buckets, iterative (recursive) folder operations are billed as class A for each child operation. There are two types of iterative folder operations: Operations that create missing parent folders automatically. Each parent folder created is considered a child operation. This includes the following operations (and their equivalents in the XML API, if applicable): a. storage.objects.insert b. storage.objects.compose c. storage.objects.copy d. storage.objects.rewrite e. storage.objects.restore f. storage.managedFolders.insert g. storage.folders.insert (when recursive is set to true) h. Complete a multipart upload (this is a POST operation in the XML API) The rename folder operation. Each child folder under the top-level folder being renamed is considered a child operation. 1 A rewrite or resumable upload of a single object performed using the JSON API or gRPC is billed as a single Class A operation, even though these actions can require multiple requests to complete. 
2 Applies specifically to Object Change Notifications. For Pub/Sub notifications, see Pub/Sub pricing. 3 Tag operations are not eligible for the Cloud Storage Always Free program. Note: Generally, you are not charged for operations that return 307, 4xx, or 5xx responses. The exception is 404 responses returned by buckets with Website Configuration enabled and the NotFoundPage property set to a public object in that bucket. Retrieval fees A retrieval fee applies when you read, copy, move, or rewrite object data or metadata that is stored using Nearline storage, Coldline storage, or Archive storage. This cost is in addition to any operations charges and network charges associated with reading the data. The following table shows the retrieval rates for each storage class: Standard storage Nearline storage Coldline storage Archive storage $0 per GB $0.01 per GB $0.02 per GB $0.05 per GB If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Retrieval fees do not apply when an object exists in a bucket that has Autoclass enabled. Retrieval fees do not apply when restoring soft-deleted objects. Inter-region replication Inter-region replication is billed on a per-GB basis for all data written to buckets located in a dual-region or multi-region location. Writes include puts, rewrites, copies, and any other actions that create new objects. Click on a geographic area to view the inter-region replication costs for associated locations: North America Location Default replication(per GB) Turbo replication(per GB) North American dual-regions, including nam4 $0.02 $0.04 US (multi-region) $0.02 Not available Europe Location Default replication(per GB) Turbo replication(per GB) European dual-regions, including eur4, eur5, eur7, and eur8 $0.02 $0.04 EU (multi-region) $0.02 Not available Asia Location Default replication(per GB) Turbo replication(per GB) Asian dual-regions, including asia1 $0.08 $0.11 Asia (multi-region) $0.08 Not available Oceania Location Default replication(per GB) Turbo replication(per GB) Oceania dual-regions $0.08 $0.12 Inter-region replication fees do not apply when restoring soft-deleted objects. Autoclass charges The following additional charges are associated with buckets that use the Autoclass feature: Autoclass management fee: Buckets that have Autoclass enabled incur a fee of $0.0025 for every 1000 objects stored for 30 days within them. Objects smaller than 128 kibibytes are not managed by Autoclass and are thus not counted when determining the fee. The fee is prorated to the millisecond for each object that isn't stored for the 30-day period. The fee is also prorated to the millisecond when disabling Autoclass. Soft-deleted objects do not incur this fee. Autoclass enablement charge: Buckets that enable Autoclass have a one-time charge for configuring existing objects to use Autoclass. 
This charge applies even if you immediately disable Autoclass and includes the following, as applicable: Early delete charges for objects that haven't met their minimum storage duration Retrieval fees for objects not currently in Standard storage A Class A operation charge for each object in the bucket, in order to transition them to Autoclass pricing and Standard storage Objects that are smaller than 128 kibibytes and already stored in Standard storage at the time Autoclass is enabled are excluded from this operation charge The Autoclass enablement charge does not apply to soft-deleted objects, which retain their existing storage classes and are billed as such until the end of their Soft Delete retention duration. Network Outbound data transfer represents data sent from Cloud Storage in HTTP responses. Data or metadata read from a Cloud Storage bucket are examples of data transfer. Inbound data transfer represents data sent to Cloud Storage in HTTP requests. Data or metadata written to a Cloud Storage bucket are examples of inbound data transfer. Network usage charges apply for data transfer and are divided into the following cases: Data transfer within Google Cloud, when data transfer is to other Cloud Storage buckets or to Google Cloud services. Specialty network services, when data transfer uses certain Google Cloud network products. General data transfer, when data transfer is out of Google Cloud or between continents. Data transfer within Google Cloud Data transfer within Google Cloud applies when you move or copy data from one Cloud Storage bucket to another or when another Google Cloud service accesses data in your Cloud Storage bucket. The following cases of data transfer from a Cloud Storage bucket to within Google Cloud are free: Case Examples Notes Data moves within the same location. US-EAST1 to US-EAST1 EU to EU A region is not considered the same location as a multi-region, even if the region is within the geographic limits of a multi-region. Starting February 21, 2025, for data transfer out to BigQuery datasets, Cloud Storage will consider BigQuery US to be equivalent to us-central1 and BigQuery EU to be equivalent to europe-west4. As an example, no data transfer out charges will be assessed when BigQuery US reads data from a bucket in us-central1, or from a dual-region bucket with one region set as us-central1. However, data transfer out charges will apply when BigQuery US reads data from any other Cloud Storage bucket. Data moves from a Cloud Storage bucket located in a dual-region to a different Google Cloud service located in one of the regions that make up the dual-region. Accessing data in an NAM4 bucket with an US-CENTRAL1 GKE instance This case does not include bucket-to-bucket data moves. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. 
For all other data transfer from your Cloud Storage buckets to within Google Cloud, pricing is determined by the bucket's location and the destination location, as defined in the following matrix: Bucket location Destination location Northern America Europe Asia Indonesia Oceania Middle East Latin America Africa Northern America $0.02/GB $0.05/GB $0.08/GB $0.10/GB $0.10/GB $0.11/GB $0.14/GB $0.11/GB Europe $0.05/GB $0.02/GB $0.08/GB $0.10/GB $0.10/GB $0.11/GB $0.14/GB $0.11/GB Asia $0.08/GB $0.08/GB $0.08/GB $0.10/GB $0.10/GB $0.11/GB $0.14/GB $0.11/GB Indonesia $0.10/GB $0.10/GB $0.10/GB N/A $0.08/GB $0.11/GB $0.14/GB $0.14/GB Oceania $0.10/GB $0.10/GB $0.10/GB $0.08/GB $0.08/GB $0.11/GB $0.14/GB $0.14/GB Middle East $0.11/GB $0.11/GB $0.11/GB $0.11/GB $0.11/GB $0.08/GB $0.14/GB $0.11/GB Latin America $0.14/GB $0.14/GB $0.14/GB $0.14/GB $0.14/GB $0.14/GB $0.14/GB $0.14/GB Africa $0.11/GB $0.11/GB $0.11/GB $0.14/GB $0.14/GB $0.11/GB $0.14/GB N/A Specialty network services If you have chosen to use certain Google Cloud network products, data transfer pricing is based on their pricing tables: For Cloud CDN, Cloud Storage data transfer charges are waived, but cache fill charges may apply. For more information, see Cloud CDN pricing. For Media CDN, Cloud Storage data transfer charges are waived. For more information, contact sales. For CDN Interconnect, see CDN Interconnect pricing. For Cloud Interconnect, see Cloud Interconnect pricing. For more information, see the Cloud Interconnect overview. For Direct Peering, see Direct Peering pricing. General network usage General network usage applies for any data read from your Cloud Storage bucket that does not fall into one of the above categories or the Always Free usage limits. For example, general network usage applies when data moves from a Cloud Storage bucket to the Internet. Monthly Usage Data transfer to Worldwide Destinations (excluding Asia & Australia)(per GB) Data transfer to Asia Destinations (excluding China, but including Hong Kong)(per GB) Data transfer to China Destinations (excluding Hong Kong)(per GB) Data transfer to Australia Destinations (per GB) Inbound data transfer 0-1 TB $0.12 $0.12 $0.23 $0.19 Free 1-10 TB $0.11 $0.11 $0.22 $0.18 Free 10+ TB $0.08 $0.08 $0.20 $0.15 Free If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. You can view your current usage in the billing details for your project. Pricing notes Storage and network usage are calculated in JEDEC binary gigabytes (GB), also known as IEC gibibytes (GiB), where 1 JEDEC GB is 230 bytes. Similarly, 1 JEDEC TB is 240 bytes, or 1024 JEDEC GBs. When rewriting or copying data from one Cloud Storage bucket to another, inter-region replication charges, if applicable, are billed to the billing account associated with the destination bucket. All other applicable charges are billed to the billing account associated with the source bucket. Data transfer costs and retrieval fees are based on the amount of data accessed, not the size of the entire object. For example, if you request only the first 8 MB of a 100 MB Nearline storage object or if the download connection is broken after 8 MB is served, the data transfer cost and the retrieval fee are based on 8 MB. Charges accrue daily, but Cloud Storage bills you only at the end of the billing period. You can view unbilled usage in your project's billing page in the Google Cloud console. 
For compressed objects that are transcoded during download, storage rates are based on the compressed size of the object. Data transfer rates are based on the uncompressed size of the object. For buckets with Object Versioning enabled, each noncurrent version of an object is charged at the same rate as the live version of the object. Cloud Storage also has the storage class Durable Reduced Availability (DRA) storage; however, you should use Standard storage in favor of DRA. Standard storage has lower pricing for operations but otherwise has the same price structure. Standard storage also provides better performance, particularly in terms of availability. There are no extra costs for using the Storage Transfer Service; however, normal Cloud Storage and external provider costs apply when using the Storage Transfer Service. See Storage Transfer Service pricing for a list of potential costs. Cloud Storage Always Free usage limits As part of the Google Cloud Free Tier, Cloud Storage provides resources that are free to use up to specific limits. These usage limits are available both during and after the free trial period. If you are no longer in the free trial period, usage beyond these Always Free limits is charged according to the pricing tables above. Resource Monthly Free Usage Limits1 Standard storage 5 GB-months Class A Operations 5,000 Class B Operations 50,000 Data transfer 100 GB from North America to each Google Cloud Data transfer destination (Australia and China excluded) 1Cloud Storage Always Free quotas apply to usage in US-WEST1, US-CENTRAL1, and US-EAST1 regions. Usage is aggregated across these 3 regions. Always Free is subject to change. Please see our FAQ for eligibility requirements and other restrictions. To prevent getting billed for usage beyond the Always Free usage limits, you can set API request caps. Usage Policies The use of this service must adhere to the Google Cloud Terms of Service and Program Policies, as well as Google's Privacy Policy. What's next Visit the Cloud Storage documentation. Explore Cloud Storage pricing examples. Get started with Cloud Storage by following the Quickstart using the Console. Get started with Cloud Storage using a Cloud Storage client library. Learn about Cloud Storage solutions and use cases. Request a custom quote With Google Cloud's pay-as-you-go pricing, you only pay for the services you use. Connect with our sales team to get a custom quote for your organization. Contact sales \ No newline at end of file diff --git a/Cloud_Storage.txt b/Cloud_Storage.txt new file mode 100644 index 0000000000000000000000000000000000000000..ad3c7ab066c7e14fe7d4c9cce4e530d0bba37c59 --- /dev/null +++ b/Cloud_Storage.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/storage +Date Scraped: 2025-02-23T12:01:25.818Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Cloud StorageObject storage for companies of all sizesCloud Storage is a managed service for storing unstructured data. Store any amount of data and retrieve it as often as you like.Try it in consoleContact salesLooking for something else? 
Check out personal, team, and block storage options.Product highlightsAutomatically transition to lower-cost storage classesStandard, nearline, coldline, and archive storage optionsFast, low-cost, highly durable archive and backup storageCloud Storage in a minute 1 min videoFeaturesLeading analytics and ML/AI toolsOnce your data is stored in Cloud Storage, easily plug into Google Cloud’s powerful tools to create your data warehouse with BigQuery, run open source analytics with Dataproc, or build and deploy machine learning (ML) models with Vertex AI. Start querying Cloud Storage data with BigQueryAutomatic storage class transitionsWith features like Object Lifecycle Management (OLM) and Autoclass you can easily optimize costs with object placement across storage classes. You can enable, at the bucket level, policy-based automatic object movement to colder storage classes based on the last access time. There are no early deletion or retrieval fees, nor class transition charges for object access in colder storage classes. IDC projects that Cloud Storage customers can realize average annual benefits worth $86,500 per petabyte. Read report.Continental-scale and SLA-backed replicationIndustry leading dual-region buckets support an expansive number of regions. A single, continental-scale bucket offers nine regions across three continents, providing a Recovery Time Objective (RTO) of zero. In the event of an outage, applications seamlessly access the data in the alternate region. There is no failover and failback process. For organizations requiring ultra availability, turbo replication with dual-region buckets offers a 15 minute Recovery Point Objective (RPO) SLA. Use Cloud Storage as a local filesystemCloud Storage is a common choice for storing training data, models, and checkpoints for machine learning projects in Cloud Storage buckets. With Cloud Storage FUSE, you can take advantage of the scale, affordability, throughput, and simplicity that Cloud Storage provides, while maintaining compatibility with applications that use or require filesystem semantics. Cloud Storage FUSE also now offers caching, which provides up to 2.2x faster time to train and 2.9x higher training throughput, compared to native ML framework dataloaders.Learn how Cloud Storage FUSE is optimized for GKE and AI workloadsManage your object storage with file inventory reportsInventory reports contain metadata information about your objects, such as the object's storage class, ETag, and content type. This information helps you analyze your storage costs, audit and validate your objects, and ensure data security and compliance. You can export inventory reports as comma-separated value (CSV) or Apache Parquet files so you can further analyze it using tools, such as BigQuery. Learn more about Storage Insights inventory reports.Fast and flexible transfer servicesStorage Transfer Service offers a highly performant, online pathway to Cloud Storage—both with the scalability and speed you need to simplify the data transfer process. For offline data transfer our Transfer Appliance is a shippable storage server that sits in your datacenter and then ships to an ingest location where the data is uploaded to Cloud Storage.VIDEOIntroducing Google Cloud’s Transfer Appliance2:54Default and configurable data securityCloud Storage offers secure-by-design features to protect your data and advanced controls and capabilities to keep your data private and secure against leaks or compromises. 
Security features include access control policies, data encryption, retention policies, retention policy locks, and signed URLs.Object lifecycle managementDefine conditions that trigger data deletion or transition to a cheaper storage class.Object Versioning Continue to store old copies of objects when they are deleted or overwritten.Soft deleteSoft delete offers improved protection against accidental and malicious data deletion by providing you with a way to retain and restore recently deleted data.Retention policiesDefine minimum retention periods that objects must be stored for before they’re deletable.Object holdsPlace a hold on an object to prevent its deletion.Customer-managed encryption keysEncrypt object data with encryption keys stored by the Cloud Key Management Service and managed by you.Customer-supplied encryption keys Encrypt object data with encryption keys created and managed by you.Uniform bucket-level accessUniformly control access to your Cloud Storage resources by disabling object ACLs.Requester paysRequire accessors of your data to include a project ID to bill for network charges, operation charges, and retrieval fees.Bucket LockBucket Lock allows you to configure a data retention policy for a Cloud Storage bucket that governs how long objects in the bucket must be retained.Pub/Sub notifications for Cloud Storage Send notifications to Pub/Sub when objects are created, updated, or deleted.Cloud Audit Logs with Cloud StorageMaintain admin activity logs and data access logs for your Cloud Storage resources.Object- and bucket-level permissionsCloud Identity and Access Management (IAM) allows you to control who has access to your buckets and objects.View all featuresStorage optionsStorage typeDescriptionBest forStandard storageStorage for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time."Hot" data, including websites, streaming videos, and mobile apps.Nearline storageLow cost, highly durable storage service for storing infrequently accessed data.Data that can be stored for 30 days.Coldline storageA very low cost, highly durable storage service for storing infrequently accessed data.Data that can be stored for 90 days.Archive storageThe lowest cost, highly durable storage service for data archiving, online backup, and disaster recovery.Data that can be stored for 365 days.Learn more about storage classes.Standard storageDescriptionStorage for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time.Best for"Hot" data, including websites, streaming videos, and mobile apps.Nearline storageDescriptionLow cost, highly durable storage service for storing infrequently accessed data.Best forData that can be stored for 30 days.Coldline storageDescriptionA very low cost, highly durable storage service for storing infrequently accessed data.Best forData that can be stored for 90 days.Archive storageDescriptionThe lowest cost, highly durable storage service for data archiving, online backup, and disaster recovery.Best forData that can be stored for 365 days.Learn more about storage classes.How It WorksTo use Cloud Storage, you’ll first create a bucket, a basic container that holds your data in Cloud Storage. You’ll then upload objects into that bucket—where you can download, share, and manage objects.View documentationWhat is Cloud Storage? 
(2:05)Common UsesBackups and archivesUse Cloud Storage for backup, archives, and recoveryCloud Storage's nearline storage provides fast, low-cost, highly durable storage for data accessed less than once a month, reducing the cost of backups and archives while still retaining immediate access. Backup data in Cloud Storage can be used for more than just recovery because all storage classes have ms latency and are accessed through a single API.Learn more about nearline storageTutorials, quickstarts, & labsUse Cloud Storage for backup, archives, and recoveryCloud Storage's nearline storage provides fast, low-cost, highly durable storage for data accessed less than once a month, reducing the cost of backups and archives while still retaining immediate access. Backup data in Cloud Storage can be used for more than just recovery because all storage classes have ms latency and are accessed through a single API.Learn more about nearline storageMedia content storage and deliveryStore data to stream audio or video Stream audio or video directly to apps or websites with Cloud Storage’s geo-redundant capabilities. Geo-redundant storage with the highest level of availability and performance is ideal for low-latency, high-QPS content serving to users distributed across geographic regions.Learn more about geo-redundant storageLearn how to set up a Media CDN, for planet-scale media delivery Tutorial: Design for scale and high availabilityGoogle Cloud Skills Boost: Storing Image and Video Files in Cloud Storage - PythonUnlock infinite capacity and innovationCloud Storage’s multi-regional performance and availability powers the world’s biggest media and entertainment companies. Build a hybrid render farm using Cloud StorageCreate a virtual workstation with Cloud StorageLearn how Vimeo uses multi-regional storage for fast, resumable uploadsTutorials, quickstarts, & labsStore data to stream audio or video Stream audio or video directly to apps or websites with Cloud Storage’s geo-redundant capabilities. Geo-redundant storage with the highest level of availability and performance is ideal for low-latency, high-QPS content serving to users distributed across geographic regions.Learn more about geo-redundant storageLearn how to set up a Media CDN, for planet-scale media delivery Tutorial: Design for scale and high availabilityGoogle Cloud Skills Boost: Storing Image and Video Files in Cloud Storage - PythonLearning resourcesUnlock infinite capacity and innovationCloud Storage’s multi-regional performance and availability powers the world’s biggest media and entertainment companies. Build a hybrid render farm using Cloud StorageCreate a virtual workstation with Cloud StorageLearn how Vimeo uses multi-regional storage for fast, resumable uploadsData lakes and big data analyticsCreate a data lake for analyticsDevelop and deploy data pipelines and storage to analyze large amounts of data. Cloud Storage offers high availability and performance while being strongly consistent, giving you confidence and accuracy in analytics workloads.Learn more about data lakes using Cloud Storage Run Apache Hadoop or Apache Spark on data with data connectorGoogle Cloud Skills Boost: Modernizing Data Lakes and Data Warehouses with Google CloudGoogle Cloud Skills Boost: Google Cloud Big Data and Machine Learning FundamentalsTutorials, quickstarts, & labsCreate a data lake for analyticsDevelop and deploy data pipelines and storage to analyze large amounts of data. 
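Because the data lake pattern described here typically pairs Cloud Storage with BigQuery, a short example may help. The following is a minimal sketch using the google-cloud-bigquery Python client to load a CSV object from a bucket into a table; the project, dataset, table, and object names are placeholders.

    # Minimal sketch: load a CSV object from Cloud Storage into BigQuery.
    # The URI, dataset, and table names are placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()
    table_id = "my-project.analytics_lake.events"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,  # Infer the schema from the file.
    )

    load_job = client.load_table_from_uri(
        "gs://my-example-bucket/events/2025-02-23.csv",
        table_id,
        job_config=job_config,
    )
    load_job.result()  # Wait for the load job to complete.
    print(f"Loaded {client.get_table(table_id).num_rows} rows.")

BigQuery can also query the objects in place as an external table, which avoids duplicating the data.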
Cloud Storage offers high availability and performance while being strongly consistent, giving you confidence and accuracy in analytics workloads.Learn more about data lakes using Cloud Storage Run Apache Hadoop or Apache Spark on data with data connectorGoogle Cloud Skills Boost: Modernizing Data Lakes and Data Warehouses with Google CloudGoogle Cloud Skills Boost: Google Cloud Big Data and Machine Learning FundamentalsMachine learning and AIDeploy an AI summarization solution in the Google Cloud consoleLaunch a Google-recommended, preconfigured solution that uses generative AI to quickly extract text and summarize large documents stored in Cloud Storage.Deploy in consoleWatch how to store data for machine learningStart using Vertex AI with your stored dataNuro powers its autonomous vehicles' rapid decision-making with Cloud StorageTutorials, quickstarts, & labsDeploy an AI summarization solution in the Google Cloud consoleLaunch a Google-recommended, preconfigured solution that uses generative AI to quickly extract text and summarize large documents stored in Cloud Storage.Deploy in consoleWatch how to store data for machine learningStart using Vertex AI with your stored dataNuro powers its autonomous vehicles' rapid decision-making with Cloud StorageHost a website Build, host, and run dynamic websites in the Google Cloud consoleLaunch a sample drop-ship retail product website that’s publicly accessible and customizable, leveraging Python and JavaScript.Deploy in consoleTutorial: Configure a Cloud Storage bucket to host a static website for a domain you ownVideo: 3 ways to run scalable web apps on Google CloudTutorials, quickstarts, & labsBuild, host, and run dynamic websites in the Google Cloud consoleLaunch a sample drop-ship retail product website that’s publicly accessible and customizable, leveraging Python and JavaScript.Deploy in consoleTutorial: Configure a Cloud Storage bucket to host a static website for a domain you ownVideo: 3 ways to run scalable web apps on Google CloudPricingHow Cloud Storage pricing worksPricing for Cloud Storage services is primarily based on location and storage class. Additional usage-based data processing and data transfer charges may also apply.Service usage and typeDescriptionPrice (USD)Always Free usageAll customers get 5 GiB of US regional storage free per month, not charged against your credits. Learn more about Always Free limits.FreeStorage classStandard storageBest for frequently accessed ("hot" data) and/or stored for only brief periods of time.Starting at$.02per GiB per monthNearline storageBest for service for storing infrequently accessed data.Starting at$.01per GiB per monthColdline storageBest for storing infrequently accessed data.Starting at$.004per GiB per monthArchive storageBest for data archiving, online backup, and disaster recovery.Starting at$.0012per GiB per monthData transfer and special network service ratesData transfer within Google CloudApplies when moving, copying, accessing data in Cloud Storage or between Google Cloud services.Check your network product for pricing information.General network usageGeneral network usage applies for any data that does not fall into one of the above categories or the Always Free usage limits.Ranges from $0.12-$0.20based on monthly usageLearn more about Cloud Storage pricing. View all pricing details.How Cloud Storage pricing worksPricing for Cloud Storage services is primarily based on location and storage class. 
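To make the per-GiB starting prices listed for each storage class concrete, a rough monthly estimate can be computed directly. This is illustrative arithmetic only: actual charges vary by location and also include operations, retrieval, and network costs that aren't modeled here.

    # Illustrative only: estimate monthly storage cost from the starting
    # per-GiB prices listed on this page (other charges excluded).
    STARTING_PRICE_PER_GIB = {
        "standard": 0.020,
        "nearline": 0.010,
        "coldline": 0.004,
        "archive": 0.0012,
    }

    def monthly_storage_cost(gib: float, storage_class: str) -> float:
        return gib * STARTING_PRICE_PER_GIB[storage_class]

    # Example: 10 TiB (10,240 GiB) kept in each class for one month.
    for cls in STARTING_PRICE_PER_GIB:
        print(f"{cls:>9}: ${monthly_storage_cost(10240, cls):,.2f} per month")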
Additional usage-based data processing and data transfer charges may also apply.Always Free usageDescriptionAll customers get 5 GiB of US regional storage free per month, not charged against your credits. Learn more about Always Free limits.Price (USD)FreeStorage classDescriptionStandard storageBest for frequently accessed ("hot" data) and/or stored for only brief periods of time.Price (USD)Starting at$.02per GiB per monthNearline storageBest for service for storing infrequently accessed data.DescriptionStarting at$.01per GiB per monthColdline storageBest for storing infrequently accessed data.DescriptionStarting at$.004per GiB per monthArchive storageBest for data archiving, online backup, and disaster recovery.DescriptionStarting at$.0012per GiB per monthData transfer and special network service ratesDescriptionData transfer within Google CloudApplies when moving, copying, accessing data in Cloud Storage or between Google Cloud services.Price (USD)Check your network product for pricing information.General network usageDescriptionGeneral network usage applies for any data that does not fall into one of the above categories or the Always Free usage limits.Price (USD)Ranges from $0.12-$0.20based on monthly usageLearn more about Cloud Storage pricing. View all pricing details.Pricing calculatorEstimate your monthly Cloud Storage charges, including cluster management fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry Cloud Storage in the consoleGo to my consoleHave a large project?Contact salesLearn how to get started with Cloud StorageRead guideIntroduction and best practices for Cloud StorageWatch videoHow to name bucketsRead guideBusiness CaseExplore how companies are using Cloud StorageVimeo is delivering more high-quality videos than ever while reducing costs.Vimeo replaced its servers for accepting video uploads with Cloud Storage, fronted by the Fastly edge cloud to help ensure regional routing and low-latency, high-throughput connections for Vimeo’s publishers. Multi-regional Cloud Storage offers fast, resumable upload capability that helps make for better user experience.Read customer storyRelated contentMixpanel maintains excellent performance and reliability while keeping its engineers focused on innovationRobust building blocks that exist on top of core data storage, computing, and network services help take away much of the backend hassle on the way to new product creationRedSalud meets telemedicine challenges by implementing SAP for Google CloudTrusted by top retailers Target and Carrefour.Used by top financial institutions KeyBank, PayPal, and Commerzbank.Powering top autonomous vehicle companies Cruise and Nuro.FAQExpand allDoes Cloud Storage work for personal photos, videos, and files?No. For personal storage options, including photos, device backup, VPN, and more, visit Google One. How does Cloud Storage differ from other types of storage? Cloud Storage is a service for storing objects in Google Cloud. An object is an immutable piece of data consisting of a file of any format. You store objects in containers called buckets. All buckets are associated with a project, and you can group your projects under an organization. Learn more about other Google Cloud storage products, including block storage, data transfer, and file storage. 
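Among the security controls listed earlier, signed URLs are a common way to grant time-limited access to a single object without making a bucket public. The following is a minimal sketch using the google-cloud-storage Python client; the bucket and object names are placeholders, and the client is assumed to run with credentials that can sign URLs (for example, a service account key).

    # Minimal sketch: generate a V4 signed URL that allows a download for
    # 15 minutes. Bucket and object names are placeholders.
    import datetime

    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("my-example-bucket").blob("invoices/2025-01.pdf")

    url = blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=15),
        method="GET",
    )
    print(url)  # Anyone with this URL can download the object until it expires.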
Other inquiries and supportBilling and troubleshootingAsk the communityGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_Tasks.txt b/Cloud_Tasks.txt new file mode 100644 index 0000000000000000000000000000000000000000..4457e1e15fcfebd78fe88c4836830abb136caf5a --- /dev/null +++ b/Cloud_Tasks.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/tasks/docs +Date Scraped: 2025-02-23T12:05:23.274Z + +Content: +Home Cloud Tasks Documentation Stay organized with collections Save and categorize content based on your preferences. Cloud Tasks documentation View all product documentation Cloud Tasks is a fully managed service that allows you to manage the execution, dispatch and delivery of a large number of distributed tasks. You can asynchronously perform work outside of a user request. Your tasks can be executed on App Engine or any arbitrary HTTP endpoint. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstart: Add a task to a Cloud Tasks queue Create queues Create HTTP Target tasks Create App Engine tasks Configure queues Manage queues and tasks find_in_page Reference RPC API REST API Client libraries info Resources Pricing Quotas and limits Release notes Get support Billing questions Training Training and tutorials Trigger Cloud Functions using Cloud Tasks This tutorial shows you how to use Cloud Tasks within a Google App Engine application to trigger a Cloud Function and send a scheduled email. Learn more arrow_forward Training Training and tutorials Google Cloud Fundamentals: Core Infrastructure These lectures, demos, and hands-on labs give you an overview of Google Cloud products and services so that you can learn the value of Google Cloud and how to incorporate cloud-based solutions into your business strategies. Learn more arrow_forward Training Training and tutorials Architecting with Google Cloud: Design and Process This course features a combination of lectures, design activities, and hands-on labs to show you how to use proven design patterns on Google Cloud to build highly reliable and efficient solutions and operate deployments that are highly available and cost-effective. Learn more arrow_forward Use case Use cases Build scalable and resilient apps Learn patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. A well-designed app scales up and down as demand increases and decreases, and is resilient enough to withstand service disruptions. Scaling Resilience Design Learn more arrow_forward Related videos \ No newline at end of file diff --git a/Cloud_Trace.txt b/Cloud_Trace.txt new file mode 100644 index 0000000000000000000000000000000000000000..1dd36e38fbd4b53a7cac50a9dc6746d56d4da0f4 --- /dev/null +++ b/Cloud_Trace.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/trace/docs +Date Scraped: 2025-02-23T12:07:36.531Z + +Content: +Home Google Cloud Observability Trace Documentation Stay organized with collections Save and categorize content based on your preferences. 
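Returning to the Cloud Tasks service described above: HTTP Target tasks can be created with the google-cloud-tasks Python client. The following is a minimal sketch; the project, location, queue name, and target URL are placeholders, and the queue is assumed to already exist.

    # Minimal sketch: enqueue an HTTP Target task on an existing queue.
    # Project, location, queue, and URL are placeholders.
    from google.cloud import tasks_v2

    client = tasks_v2.CloudTasksClient()
    parent = client.queue_path("my-project", "us-central1", "my-queue")

    task = tasks_v2.Task(
        http_request=tasks_v2.HttpRequest(
            http_method=tasks_v2.HttpMethod.POST,
            url="https://worker.example.com/process",
            headers={"Content-Type": "application/json"},
            body=b'{"order_id": 1234}',
        )
    )

    response = client.create_task(parent=parent, task=task)
    print(f"Created task {response.name}")

Cloud Tasks dispatches the request to the target endpoint and retries it according to the queue's retry configuration.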
Cloud Trace documentation View all product documentation Cloud Trace is a distributed tracing system for Google Cloud that collects latency data from applications and displays it in near real-time in the Google Cloud console. Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstart: View trace app latency Find and explore traces Instrument Node.js app with OpenTelemetry Instrument Go app with OpenTelemetry Instrument Java app with OpenTelemetry Instrument Python app with OpenTelemetry Using Cloud Trace with Zipkin Create and view reports Troubleshoot find_in_page Reference Cloud Trace API Authenticate to Trace Cloud Trace filters Client libraries RPC API REST API Trace labels info Resources Quotas and limits Release Notes Pricing Related videos \ No newline at end of file diff --git a/Cloud_Workstations.txt b/Cloud_Workstations.txt new file mode 100644 index 0000000000000000000000000000000000000000..dbd1ca6de026d7a718af86243e06074f85d107c3 --- /dev/null +++ b/Cloud_Workstations.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/workstations +Date Scraped: 2025-02-23T12:04:35.274Z + +Content: +Jump to Cloud WorkstationsCloud WorkstationsFully managed development environments built to meet the needs of security-sensitive enterprises. It enhances the security of development environments while accelerating developer onboarding and productivity, including a native integration with Gemini for Google Cloud.Try free trialAccess secure and fast development environments anytime using browser or local IDEEnable administrators to easily provision, scale, manage, and secure development environmentsCustomize development environments with your preferred IDE and through custom container imagesBuild applications faster with AI-powered assistance from Gemini Code AssistVideoForrester Total Economic Impact Study on Cloud Workstations52:10BenefitsEnhance security of your development environmentsExtend your security posture to your IDEs with centrally managed, yet configurable, security mechanisms based on industry best practices. Mitigate exfiltration by preventing the storage of source code locally.Increase developer productivity with custom toolsImprove developer productivity with secure and fast development environments accessible using browser or local IDE, while supporting multiple popular IDEs, customizable developer tools, and Gemini Code Assist. Simplify onboarding for new and remote developersOnboard your developers faster no matter where they are located, with managed cloud-based development environments, while ensuring replicability and consistency using container-defined environments.Key featuresKey featuresRun code in your real environment, inside your VPCCloud Workstations can run inside your VPC, allowing you to develop and run code inside your private network and in your staging environment, so you don’t need to emulate your services. 
You can also enforce “no source code on local devices” policies and bring the same security mechanisms used for production workloads to your development environments, such as VPC Service Controls (VPC SC), private ingress/egress, Cloud Audit Logs, and granular IAM controls.Multi-IDE supportCloud Workstations supports any code editors and applications that can be run in a container. IDEs can also be personalized and support extensions. Enjoy the benefits of remote development without needing to change your IDE or workflow using our managed IDEs such as Code OSS for Cloud Workstations, or multiple JetBrains IDEs such as IntelliJ IDEA, PyCharm, Rider, and CLion through JetBrains Gateway, as well as Posit Workbench (with RStudio Pro).Dev environments ready to go in minutesQuickly onboard developers using the Google Cloud console, and use shared workstation configurations to enable consistent development environment definitions, which can be easily updated and synchronized across all developers with a single action. Developers can create and start a workstation in minutes, where the workstation configuration will be automatically applied, addressing “works on my machine” and configuration drift problems.Consistent environments across teamsCloud Workstations provides a managed experience using predefined or custom containers to specify your environment configuration, such as pre-installed tools, libraries, IDE extensions, preloaded files, and start-up scripts. You can also ensure all developers get the latest versions and patches when they start working by setting a session limit and simply updating your container images. Cloud Workstations will then handle ensuring that they are all updated according to the container image you specified.Built-in Gemini Code Assist integrationsCloud Workstations supports Gemini Code Assist, which provides AI-powered assistance to developers, such as auto code completion, code generation, and chat. Developers can leverage these Gemini Code Assist capabilities directly in Cloud Workstations to build applications faster and more efficiently.View all featuresLearn more about Cloud Workstations, its benefits and use cases.7:38CustomersCheck out the stories from our customersBlog postL’Oreal uses Cloud Workstations to increase its developer productivity and security5-min readVideoCommerzbank chose Cloud Workstations to protect its development environmentsVideo (17:49)Blog postHow DZ BANK improved developer productivity5-min readSee all customersCloud Workstations removes the technical barriers by providing a powerful and scalable solution for all the developers we have across the world.Sebastian Moran, Head of Data Engineering, L’OréalCheck out the storyWhat's newWhat’s newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoIncrease developer productivity and security with cloud-based developmentWatch videoEventTwitter Space: Increase developer velocity with cloud-based app developmentLearn moreVideoHow to accelerate developer velocity with Cloud WorkstationsWatch videoVideoManaged and secure development environment on Google CloudWatch videoBlog postRemote Development in JetBrains IDEs now available in Google Cloud Workstations Learn moreDocumentationCloud Workstations quickstart, guides, and moreGoogle Cloud BasicsCreate a workstationLearn how to create your first workstation in Cloud Workstations. 
Learn moreArchitectureCloud Workstations architectureLearn about the architecture and resources used by Cloud Workstations.Learn moreGoogle Cloud BasicsBase editor overviewLearn about the Cloud Workstations base editor, its components, and helpful features. Learn moreGoogle Cloud BasicsDevelop remotely with JetBrains IDEsLearn about the plugin for JetBrains Gateway, which lets you develop with JetBrains IDEs, such as IntelliJ IDEA, PyCharm, Rider, CLion, PhpStorm, and WebStorm.Learn moreGoogle Cloud BasicsConfigure private clustersLearn how private clusters work and how to set them up in Cloud Workstations using Private Service Connect and VPC Service Controls. Learn moreTutorialCode with Gemini Code Assist Check this tutorial on how to create an application with Gemini Code Assist in Cloud Workstations.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud WorkstationsAll featuresAll featuresAny language, any library, any code editorInstall and customize Cloud Workstations to use any combination of languages, libraries, or even code editors of your choice. You can even bring your own internal tools.Support for self-hosted toolsSupports third-party developer and DevOps tools such as GitLab and TeamCity. You can configure access to tools that are either external, self-hosted, on-premises, or even in other clouds. Easy to scale and manageNo infrastructure to manage. You create a workstation configuration for each of your teams, and Cloud Workstations provides on-demand development environments according to the templates you define.Custom virtual machinesProvides flexible and configurable machine types to help you size workstations to your needs, with configurable CPU, RAM, and storage settings.GPU supportSupports GPU platforms, including NVIDIA A100, T4, V100, P100, and P4 to accelerate machine learning and data processing tasks.Persistent Disk supportCloud Workstations supports attaching a Persistent Disk to each workstation’s home folder, persisting data so you don’t need to keep your workstations running when not in use.Resource cost optimizationSet up inactivity timeouts to automatically shut down idle workstations and reduce unnecessary costs.Develop in your staging environment using VPC supportCloud Workstations can run inside your VPC, allowing you to develop and run code inside your private network so you don’t need to emulate your services.VPC Service ControlsDefine a security perimeter around your Cloud Workstations to constrain access to sensitive resources and mitigate data exfiltration risks.Private ingress and egressFor enhanced security, Cloud Workstations can be configured to limit access to only users with direct access to your private network.Granular IAM controlsCloud Workstations follows the principle of least privilege, whereby default users only have access to the workstations they created. Administrators have the option to grant additional access levels as needed.BeyondCorp Enterprise integrationContext-based access policies and prevention of code exfiltration on download, copy, paste, print, and more.Automatic environment updatesSet up a maximum workstation session limit. 
This ensures that all developers use the latest version of your development environment updates and patches automatically when they log in.Full customization using container imagesAll development environments in Cloud Workstations are defined as containers, which you can extend, modify, or even fully replace. This also gives you added flexibility to customize tools, libraries, IDE extensions, preloaded files, and start-up scripts.Access using multiple interfacesAccess Cloud Workstations through multiple different interfaces such as browsed IDEs, local IDEs, SSH tunnels, or even TCP tunnels, so you can develop from the interface that best suits your needs.Multi-IDE supportUse managed IDEs such as Code OSS for Cloud Workstations, or multiple JetBrains IDEs such as IntelliJ IDEA, PyCharm, Rider, CLion. You can also use Posit Workbench (with RStudio Pro), or bring your own code editor for extra flexibility when defining your development workflow and tooling.Web previewQuickly access any Cloud Workstations ports directly from your browser with built-in port forwarding, which IAM controls automatically enforce.JetBrains remote development supportJetBrains IDE developers can access Cloud Workstations using JetBrains Gateway, so they can quickly start developing with their preferred IDE, while having a remote backend in the cloud.Visual Studio Code remote development supportVisual Studio Code developers can access Cloud Workstations using remote SSH, so they can use Visual Studio Code locally while having a remote backend in the cloud.SSH access enforced using IAM policiesCloud Workstations supports SSH access, tunneled over a WebSocket connection. All SSH access is subject to Google Cloud authorization and IAM permissions, so you don’t need to manage SSH keys or store them locally while ensuring access controls.PricingPricingPricing for Cloud Workstations is based on per-hour usage of the Cloud Workstations VMs, disk storage, workstation management, control plane, and network traffic that you use to support your developer workstations. View pricing detailsPartnersOur partnersCloud Workstations integrates with leading developer solutions to bring a better experience to our customers.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Get startedNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_bursting_pattern.txt b/Cloud_bursting_pattern.txt new file mode 100644 index 0000000000000000000000000000000000000000..b193e16f1ab04f8f6176ee7f75c44a2b684fe383 --- /dev/null +++ b/Cloud_bursting_pattern.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/cloud-bursting-pattern +Date Scraped: 2025-02-23T11:50:13.222Z + +Content: +Home Docs Cloud Architecture Center Send feedback Cloud bursting pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC Internet applications can experience extreme fluctuations in usage. While most enterprise applications don't face this challenge, many enterprises must deal with a different kind of bursty workload: batch or CI/CD jobs. This architecture pattern relies on a redundant deployment of applications across multiple computing environments. The goal is to increase capacity, resiliency, or both. 
While you can accommodate bursty workloads in a data-center-based computing environment by overprovisioning resources, this approach might not be cost-effective. With batch jobs, you can optimize utilization by stretching their execution over longer time periods, although delaying jobs isn't practical if they're time sensitive. The idea of the cloud bursting pattern is to use a private computing environment for the baseline load and burst to the cloud temporarily when you need extra capacity. When capacity reaches its limit in the on-premises private environment, the system can gain extra capacity from a Google Cloud environment when needed. The key drivers of this pattern are saving money and reducing the time and effort needed to respond to changes in scale requirements. With this approach, you only pay for the resources used when handling extra loads. That means you don't need to overprovision your infrastructure. Instead, you can take advantage of on-demand cloud resources and scale them to fit demand, based on predefined metrics. As a result, your company might avoid service interruptions during peak demand times. A potential requirement for cloud bursting scenarios is workload portability. When you allow workloads to be deployed to multiple environments, you must abstract away the differences between the environments. For example, Kubernetes gives you the ability to achieve consistency at the workload level across diverse environments that use different infrastructures. For more information, see GKE Enterprise hybrid environment reference architecture. Design considerations The cloud bursting pattern applies to interactive and batch workloads. When you're dealing with interactive workloads, however, you must determine how to distribute requests across environments: You can route incoming user requests to a load balancer that runs in the existing data center, and then have the load balancer distribute requests across the local and cloud resources. This approach requires the load balancer or another system that is running in the existing data center to also track the resources that are allocated in the cloud. The load balancer or another system must also initiate the automatic upscaling or downscaling of resources. Using this approach, you can decommission all cloud resources during times of low activity. However, implementing mechanisms to track resources might exceed the capabilities of your load balancer solutions, and therefore increase overall complexity. Instead of implementing mechanisms to track resources, you can use Cloud Load Balancing with a hybrid connectivity network endpoint group (NEG) backend. You use this load balancer to route internal or external client requests to backends that are located both on-premises and in Google Cloud, based on different metrics, like weight-based traffic splitting. Also, you can scale backends based on load balancing serving capacity for workloads in Google Cloud. For more information, see Traffic management overview for global external Application Load Balancer. This approach has several additional benefits, such as taking advantage of Google Cloud Armor DDoS protection and WAF capabilities, and caching content at the cloud edge by using Cloud CDN. However, you need to size the hybrid network connectivity to handle the additional traffic.
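One way to realize the hybrid load-balancing approach just described is to create a hybrid connectivity NEG and use it as a backend alongside the cloud backends. The following is a minimal sketch using the google-cloud-compute Python client; it assumes that hybrid connectivity (Cloud VPN or Cloud Interconnect) to the on-premises network is already in place, and all names are placeholders.

    # Minimal sketch: create a hybrid connectivity NEG whose endpoints are
    # on-premises IP:port pairs reachable over Cloud VPN or Cloud Interconnect.
    # Project, zone, and network names are placeholders.
    from google.cloud import compute_v1

    client = compute_v1.NetworkEndpointGroupsClient()

    neg = compute_v1.NetworkEndpointGroup(
        name="on-prem-backends",
        network_endpoint_type="NON_GCP_PRIVATE_IP_PORT",  # Hybrid connectivity NEG.
        default_port=8080,
        network="projects/my-project/global/networks/hybrid-vpc",
    )

    operation = client.insert(
        project="my-project",
        zone="us-central1-a",
        network_endpoint_group_resource=neg,
    )
    operation.result()  # Wait for the NEG to be created.

    # Next steps (not shown): attach the on-premises IP:port endpoints to the
    # NEG, then add it and the Google Cloud backends to the load balancer's
    # backend service, optionally with weight-based traffic splitting.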
As highlighted in Workload portability, an application might be portable to a different environment with minimal changes to achieve workload consistency, but that doesn't mean that the application performs equally well in both environments. Differences in underlying compute, infrastructure security capabilities, or networking infrastructure, along with proximity to dependent services, typically determine performance. Through testing, you can gain more accurate visibility into the performance that you can expect in each environment. You can also use cloud infrastructure services to build an environment to host your applications without requiring workload portability. Use the following approaches to handle client requests when traffic is redirected during peak demand times: Use consistent tooling to monitor and manage these two environments. Ensure consistent workload versioning and that your data sources are current. You might need to add automation to provision the cloud environment and reroute traffic when demand increases and the cloud workload is expected to accept client requests for your application. If you intend to shut down all Google Cloud resources during times of low demand, using DNS routing policies primarily for traffic load balancing might not always be optimal. This is mainly because: Resources can require some time to initialize before they can serve users. DNS updates tend to propagate slowly over the internet. As a result: Users might be routed to the cloud environment even when no resources are available to process their requests. Users might continue to be routed to the on-premises environment temporarily while DNS updates propagate across the internet. With Cloud DNS, you can choose the DNS policy and routing policy that align with your solution architecture and behavior, such as geolocation DNS routing policies. Cloud DNS also supports health checks for the internal passthrough Network Load Balancer and the internal Application Load Balancer, in which case you can incorporate it into your overall hybrid DNS setup that's based on this pattern. In some scenarios, you can use Cloud DNS to distribute client requests with health checks on Google Cloud, like when using internal Application Load Balancers or cross-region internal Application Load Balancers. In this scenario, Cloud DNS checks the overall health of the internal Application Load Balancer, which itself checks the health of the backend instances. For more information, see Manage DNS routing policies and health checks. You can also use Cloud DNS split horizon. Cloud DNS split horizon is an approach for serving different DNS responses or records for the same domain name based on the specific location or network of the DNS query originator. This approach is commonly used when an application is designed to offer both a private and a public experience, each with unique features. The approach also helps to distribute traffic load across environments. Given these considerations, cloud bursting generally lends itself better to batch workloads than to interactive workloads.
Because you no longer have to maintain excess capacity to satisfy peak demands, you might be able to increase the use and cost effectiveness of your private computing environments. Cloud bursting lets you run batch jobs in a timely fashion without the need for overprovisioning compute resources. Best practices When implementing cloud bursting, consider the following best practices: To ensure that workloads running in the cloud can access resources in the same fashion as workloads running in an on-premises environment, use the meshed pattern with the least privileged security access principle. If the workload design permits it, you can allow access only from the cloud to the on-premises computing environment, not the other way round. To minimize latency for communication between environments, pick a Google Cloud region that is geographically close to your private computing environment. For more information, see Best practices for Compute Engine regions selection. When using cloud bursting for batch workloads only, reduce the security attack surface by keeping all Google Cloud resources private. Disallow any direct access from the internet to these resources, even if you're using Google Cloud external load balancing to provide the entry point to the workload. Select the DNS policy and routing policy that aligns with your architecture pattern and the targeted solution behavior. As part of this pattern, you can apply the design of your DNS policies permanently or when you need extra capacity using another environment during peak demand times. You can use geolocation DNS routing policies to have a global DNS endpoint for your regional load balancers. This tactic has many use cases for geolocation DNS routing policies, including hybrid applications that use Google Cloud alongside an on-premises deployment where Google Cloud region exists. If you need to provide different records for the same DNS queries, you can use split horizon DNS—for example, queries from internal and external clients. For more information, see reference architectures for hybrid DNS To ensure that DNS changes are propagated quickly, configure your DNS with a reasonably short time to live value so that you can reroute users to standby systems when you need extra capacity using cloud environments. For jobs that aren't highly time critical, and don't store data locally, consider using Spot VM instances, which are substantially cheaper than regular VM instances. A prerequisite, however, is that if the VM job is preempted, the system must be able to automatically restart the job. Use containers to achieve workload portability where applicable. Also, GKE Enterprise can be a key enabling technology for that design. For more information, see GKE Enterprise hybrid environment reference architecture. Monitor any traffic sent from Google Cloud to a different computing environment. This traffic is subject to outbound data transfer charges. If you plan to use this architecture long term with high outbound data transfer volume, consider using Cloud Interconnect. Cloud Interconnect can help to optimize the connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing. When Cloud Load Balancing is used, you should use its application capacity optimizations abilities where applicable. Doing so can help you address some of the capacity challenges that can occur in globally distributed applications. 
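For the time-flexible batch bursts mentioned above, Spot VMs can be created with the google-cloud-compute Python client. The following is a minimal sketch; the project, zone, instance name, machine type, and image are placeholders, and the job orchestration must be able to tolerate preemption and restart interrupted work.

    # Minimal sketch: create a Spot VM for interruptible burst or batch capacity.
    # Project, zone, instance name, machine type, and image are placeholders.
    from google.cloud import compute_v1

    def create_spot_worker(project: str, zone: str, name: str) -> None:
        instance = compute_v1.Instance(
            name=name,
            machine_type=f"zones/{zone}/machineTypes/e2-standard-4",
            disks=[
                compute_v1.AttachedDisk(
                    boot=True,
                    auto_delete=True,
                    initialize_params=compute_v1.AttachedDiskInitializeParams(
                        source_image="projects/debian-cloud/global/images/family/debian-12"
                    ),
                )
            ],
            network_interfaces=[
                compute_v1.NetworkInterface(network="global/networks/default")
            ],
            # Spot provisioning: much cheaper, but the VM can be preempted.
            scheduling=compute_v1.Scheduling(
                provisioning_model="SPOT",
                instance_termination_action="STOP",
            ),
        )
        operation = compute_v1.InstancesClient().insert(
            project=project, zone=zone, instance_resource=instance
        )
        operation.result()  # Wait for the instance to be created.

    create_spot_worker("my-project", "us-central1-a", "burst-worker-1")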
Authenticate the people who use your systems by establishing common identity between environments so that systems can securely authenticate across environment boundaries. To protect sensitive information, encrypting all communications in transit is highly recommended. If encryption is required at the connectivity layer, various options are available based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect. Previous arrow_back Business continuity hybrid and multicloud patterns Next What's next arrow_forward Send feedback \ No newline at end of file diff --git a/Cloud_computing_basics.txt b/Cloud_computing_basics.txt new file mode 100644 index 0000000000000000000000000000000000000000..b89b2a6e516b3336394cefa8b1cae2d894b3d9a7 --- /dev/null +++ b/Cloud_computing_basics.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/discover +Date Scraped: 2025-02-23T12:11:20.663Z + +Content: +Google Cloud Topics Learn the basics of cloud computing and how to get started.Contact usSee pricingFilter byFiltersAI and MLApp modernizationCloud basicsData analyticsDatabasesInfrastructureSecurityStoragesearchsendAI and MLApplications of AILearn about the applications of artificial intelligence in various industries.What is artificial intelligence?Learn about artificial intelligence benefits, use cases, and examples.What is artificial general intelligence?Learn about artificial general intelligence (AGI) and its use cases.Artificial intelligence versus machine learning Learn about the differences between AI and ML and how they're connected.What are AI agents?Learn what AI agents are including key features, benefits, and how they work.What are AI chatbots?Learn more about AI chatbots and their common use cases.What is AI code generation?Learn about AI code generation including what it is, and how to use it.What is AI data analytics?Learn about AI data analytics including what it is, how AI can be used in data analytics.What is AI in banking?Learn more about how AI is used in the banking industry.What is AI for developers?Learn about AI for developers including what the benefits are and use cases. What is AI in finance?Learn how AI is being used in the finance industry.What is AI in healthcare?Learn how AI is being used in the healthcare industry.What are AI hallucinations?Learn about AI hallucinations, how they happen, and ways to prevent them.What is AI summarization?Learn about AI summarization including what it is, its benefits, and how to use it.What is cognitive computing?Learn about cognitive computing including how it works and its benefits. What is conversational AI?Learn about conversational AI including what it is, how it works, and use cases.What is data labeling?Learn about data labeling, including its importance and how it works.What is deep learning?Learn about deep learning, how it works, its applications, algorithms, and more.Deep learning versus machine learning versus AILearn about the differences between deep learning, machine learning, and AI.What is enterprise AI?Learn about enterprise AI, including its advantages and how to implement it.What is full-text search?Learn about full-text search including how it works, benefits, and how to implement it.What is fuzzy search?Learn about fuzzy search including how it works, examples, and how to implement it. 
What is generative AI?Learn about generative AI, how it works, and its use cases, including with Google Cloud.What is GPT?Learn about generative pre-trained transformers (GPTs), including how they work and how they're trained. What is a GPU and its role in AI?Learn about GPUs and their importance in artificial intelligence.What is human-in-the-loop?Learn more about human-in-the-loop (HITL) and why it's important.What is Kubeflow?Learn about Kubeflow, what it's used for, and its benefits.What is LangChain?Learn about LangChain, how it works, and examples of LangChain.What is a large language model?Learn about large language models—how they work, benefits, use cases, and more.What is LLMOps?Learn about large language model operations (LLMOps), its benefits, and best practices.What is machine learning?Learn about machine learning including how it works, the different types, and its uses.What is MLOps?Learn about machine learning operations (MLOps) and why it's important.Multimodal AILearn about multimodal models in AI and how to use them.What is natural language processing (NLP)?Learn how NLP provides insights into the structure and meaning of text.What is a neural network?Learn about neural networks including the different types and their uses.Optical character recognition (OCR)Learn about OCR and how it's being used with Google AI.What is prompt engineering?Learn about prompt engineering, the types of prompts, and prompt engineering use cases. What is retrieval-augmented generation?Learn about retrieval-augmented generation (RAG), how it works, and its advantages.What is semantic search?Learn about semantic search including how it works and how it can be used.Supervised versus unsupervised learning Learn about the differences between supervised and unsupervised learning with AI and ML.What is supervised learning? Learn how supervised learning helps AI and ML models learn from labeled data.What is text-to-image AI?Learn about text-to-image AI including what it is, how it works, and use cases.What is time series? Learn how to model historical time-series data to make predictions about future time points.What is unsupervised learning?Learn more about how unsupervised learning works with ML models and unlabeled data.App modernizationWhat is API management?API management helps organizations with developing, designing, monitoring, testing, securing, and analyzing APIs.What is cloud FinOps?Learn about the value of FinOps, the five key building blocks of cloud FinOps, and the benefits. What are containers?Learn about containers—lightweight packages of software that contain all of the necessary elements to run in any environment.What are containerized applications?Learn about containerized applications including the benefits and how the technology works.What is container orchestration?Learn more about how container orchestration can help deploy, scale, and manage applications.What is hybrid cloud?Learn about a hybrid cloud approach where applications or their components run in a combination of on-premises and public cloud infrastructure.What is Istio?Learn about Istio and how it helps organizations run distributed, microservices-based apps anywhere.What is Kubernetes?Learn about Kubernetes (K8s), an open source system to deploy, scale, and manage containerized applications anywhere.What is OpenTelemetry? 
Learn about OpenTelemetry, software and tools for capturing and exporting telemetry data from your cloud-native software and infrastructure.What is a private cloud?Discover more about private clouds and the major differences versus public clouds, advantages, disadvantages, and more.What is a public cloud?Learn about what a public cloud is, and the difference between a public and private cloud.What is serverless computing?Learn more about serverless computing, how it works, examples, and advantages versus disadvantages.Containers versus VMs (Virtual machines) Learn about the key differences between containers and virtual machines.What is Prometheus?Learn about the open source monitoring and alerting toolset primarily used for Kubernetes applications.Cloud basicsWhat is cloud computing?Learn how cloud computing solves issues by offering scalable and on-demand services. Learn about the types and benefits of cloud computing.What are the advantages of cloud computing?Learn more about the advantages and disadvantages of cloud computing.What is a cloud service provider? Learn about the types of cloud service providers, examples, benefits, and how to choose a cloud service provider that works for you.What is cloud management? Learn more about cloud management, how it works, and the role cloud management plays.What is cloud native?Learn more about what it means to become "cloud native." What is digital transformation?Learn about business digital transformation for business processes, culture, and customer experience. What are the different types of cloud computing?Learn about the different types of cloud computing models, the types of cloud computing services, and how they differ.What is high performance computing (HPC)?Learn about high performance computing and how it's used.What is microservices architecture?Learn how microservices architecture allows large applications to be separated into single services that are developed, deployed, and maintained independently.What is multicloud?Learn about multicloud, its benefits, challenges, multicloud management, use cases, and more.What is a virtual private server (VPS)?Learn about virtual private servers, how they work, and the difference between a VPS and a dedicated server.Data analyticsWhat is Apache Hadoop?Learn the basics of Apache Hadoop, including what it is, how it’s used, and what advantages it brings to big data environments.What is Apache Kafka?Learn about Apache Kafka, a platform for collecting, processing, and storing streaming data.What is Apache Spark?Learn about Apache Spark, an analytics engine for large-scale data processing. What is big data?Learn about big data with an overview, characteristics, and examples. What is business intelligence?Learn about business intelligence (BI), the process of analyzing company data to improve operations.What is cloud analytics?Learn about cloud analytics including how it works and the different types available.What is a data cloud?Learn about how a data cloud works to provides an open, cloud-based data infrastructure.What is data governance?Learn about data governance—the practice of making data secure, accurate, and available.What is data integration?Learn about data integration—the process of unifying data from different sources into a more useful view.What is a data lake?Learn how data lakes store, process, and secure large amounts of data. 
What is a data lakehouse?Learn about data lakehouses, how they work, and some of the benefits of using one.What is a data warehouse?Learn about data warehouses (DW), which are systems for data analysis and reporting.What is data warehouse as a service (DWaaS)?Discover more about data warehouse as a service offerings, benefits, and more.What is ETL?Learn how ETL lets companies convert structured and unstructured data to drive business decisions.What is predictive analytics?Learn how predictive analytics uses data, statistics, modeling, and machine learning to help predict and plan for future events, or find opportunities.What is Presto?Learn how Presto, an open source distributed SQL query engine created by Facebook developers, runs interactive analytics against large volumes of data.What is streaming analytics?Learn about streaming analytics, which processes and analyzes data from sources that continuously send data.What is time series?Learn how to model historical time-series data in order to make predictions about future time points and common use cases.DatabasesWhat are the differences between PostgreSQL and SQL Server?Learn about the key differences between PostgreSQL and SQL Server, the pros and cons of each, and which one may be the right fit for your project.What is a cloud database?Learn about the different types of cloud databases, and how they're used in cloud computing.What is database migration?Learn about data and database migrations including the different types and how they work.What is a managed database service?Learn about managed database services including the basics, the benefits, and how to choose one.What is MySQL?Learn about MySQL and its benefits and use cases.What is NoSQL? Learn more about NoSQL databases, how they work, and advantages versus disadvantages.What is PostgreSQL?Learn about PostgreSQL including what it is, benefits, and use cases.What is Redis database?Learn about Redis including its definition, benefits, and use cases.What is a relational database?Learn about relational databases, the relational database model, its benefits, and examples.What are SQL databases?Learn about SQL and SQL databases including definitions, benefits, and use cases.What are transactional databases?Learn about transactional databases and their benefits. 
What is a vector database?Learn how vector databases store and query vector embeddings to enable efficient similarity search and build generative AI applications.InfrastructureWhat is cloud architecture?Learn about how cloud architecture works, its components, layers, and more.What is cloud hosting?Learn about cloud hosting, how it works, and how it differs from web hosting and VPS.What is cloud migration?Learn more about the types of cloud migration, how it works, and the benefits of migrating to the cloud.what is IaaS?Learn about Iaas (infrastructure as a service), a computing model that offers resources on demand to businesses and individuals using the cloud.What is load balancing?Learn about load balancing in the cloud, how it works, common uses, and more.What is PaaS (Platform as a Service)?Learn how PaaS solutions work, their benefits, and the differences between PaaS, IaaS, and SaaS.PaaS versus IaaS versus SaaS versus CaaS: How are they different?Learn the differences between the PaaS, IaaS, SaaS, and CaaS cloud computing models.What is software as a service (SaaS)?Learn about SaaS including definition, how it works and why it's important.What is serverless architecture?Learn about serverless architecture including how it works and examples of serverless architecture.What is video on demand (VOD)? Learn about how video on demand distribution systems and infrastructure help to deliver on-demand streaming globally.What are virtual machines?Learn how virtual machines (VM) run programs and operating systems, store data, connect to networks, and more.What is a virtual server?Learn about virtual servers, how they re-create the functionality of a physical server, and how they are configured so that multiple users can share their processing power.SecurityWhat is cloud security? Learn about cloud security, why it's important, how it works, and the most common cloud security strategies.What is cloud data security?Learn how cloud data security measures help to protect and secure data as it moves in and out of the cloud.What is cloud network security?Learn how cloud network security measures are used to protect public, private, and hybrid cloud networks. What is disaster recovery?Learn about disaster recovery and how Google Cloud covers cost, recovery, and scale, with disaster recovery as a service (DRaaS).What is encryption?Learn about the different types of encryption, how it works, the importance of data encryption, and more.What is zero-trust security?Learn more about the zero-trust security model, how it works, and why you may want to use a zero-trust model.StorageWhat is Cloud Storage?Learn more about the types of Cloud Storage, its benefits, and how it works.What is object storage?Learn about object storage and its most common use cases.Object versus block versus file storageLearn the differences between object storage, block storage, and file storage.Take the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cloud_console.txt b/Cloud_console.txt new file mode 100644 index 0000000000000000000000000000000000000000..ca3dab6da2925df3089931ca99991a6c3b3066d0 --- /dev/null +++ b/Cloud_console.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/cloud-console +Date Scraped: 2025-02-23T12:05:37.004Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayLog in to the Google Cloud consoleAccess and manage your apps, infrastructure, data, and more in our intuitive web UI.Go to my consoleContact salesMore ways you can get value from the Google Cloud consoleExperiment with the latest in AIBuild chatbots trained on your data, deploy an image recognition and classification pipeline, and summarize large documents with AI.Set up billing alerts to control costsCreate budget alerts that help monitor overall spend and trigger email notifications as you track towards your establish spend threshold.Explore step-by-step guides and tutorialsAccess easy-to-follow guides and documentation for configuring your platform setup—organized by job function.Prototype with over 150 products in the consolePopular use casesLaunch virtual machinesCreate and manage virtual machines (VMs) for running custom applications and workloads with Compute Engine.Deploy a sample prebuilt load balanced VMSpin up new instances and experiment with different configurationsGet one non-preemptible e2-micro VM instance free per monthSet up object storageStore any amount of data, integrate it with other Google Cloud services, and retrieve it as often as you like with Cloud Storage.Easily set up storage for static website assetsReliable and cost-effective standard, nearline, coldline, and archival storageGet 5 GB-months of regional storage (US regions only) free per monthBuild applications and websitesRun containers on a fully managed platform with Cloud Run—only pay when your code is running.Deploy web applications and APIs that scale automatically with traffic fluctuationsRun batch jobs and scheduled tasks without managing infrastructureGet 2 million requests free per monthRun event-driven functionsRun your code in the cloud with no servers or containers to manage with Cloud Run functions.Deploy sample prebuilt solutions that use AI and Cloud Run functions to analyze and annotate images and summarize large documentsBuild event-driven applications, webhooks, and APIsGet 2 million invocations free per monthBuild a data warehouseManage and analyze large datasets across cloud environments in BigQuery, a fully managed data warehouse.Leverage built-in machine learning for surfacing predictions and insightsGet up and running quickly with an intuitive, SQL interface Store 10 GiB of data and run up to 1 TiB of queries for free per monthPopular use casesLaunch virtual machinesCreate and manage virtual machines (VMs) for running custom applications and workloads with Compute Engine.Deploy a sample prebuilt load balanced VMSpin up new instances and experiment with different configurationsGet one non-preemptible e2-micro VM instance free per monthSet up object storageStore any amount of data, integrate it with other Google Cloud services, and retrieve it as often as you like with Cloud Storage.Easily set up storage for static 
website assetsReliable and cost-effective standard, nearline, coldline, and archival storageGet 5 GB-months of regional storage (US regions only) free per monthBuild applications and websitesRun containers on a fully managed platform with Cloud Run—only pay when your code is running.Deploy web applications and APIs that scale automatically with traffic fluctuationsRun batch jobs and scheduled tasks without managing infrastructureGet 2 million requests free per monthRun event-driven functionsRun your code in the cloud with no servers or containers to manage with Cloud Run functions.Deploy sample prebuilt solutions that use AI and Cloud Run functions to analyze and annotate images and summarize large documentsBuild event-driven applications, webhooks, and APIsGet 2 million invocations free per monthBuild a data warehouseManage and analyze large datasets across cloud environments in BigQuery, a fully managed data warehouse.Leverage built-in machine learning for surfacing predictions and insightsGet up and running quickly with an intuitive, SQL interface Store 10 GiB of data and run up to 1 TiB of queries for free per monthTake the next stepStart your next project, explore interactive tutorials, and manage your account.Got to my consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Comparative_analysis.txt b/Comparative_analysis.txt new file mode 100644 index 0000000000000000000000000000000000000000..0e390a7827d208c22d10a3f7013b46759bda452d --- /dev/null +++ b/Comparative_analysis.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes/comparison +Date Scraped: 2025-02-23T11:44:51.130Z + +Content: +Home Docs Cloud Architecture Center Send feedback Comparative analysis of Google Cloud deployment archetypes Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC This section of the Google Cloud deployment archetypes guide compares the deployment archetypes in terms of availability, robustness against outages, cost, and operational complexity. The following table summarizes the comparative analysis for the basic deployment archetypes: zonal, regional, multi-regional, and global. For the hybrid and multicloud topologies, the deployment archetype that's used for the Google Cloud part of the topology influences the availability, robustness against outages, cost, and operational complexity. Design consideration Zonal Regional Multi-regional Global Infrastructure availability 99.9% (3 nines) 99.99% (4 nines) 99.999% (5 nines) 99.999% (5 nines) Robustness of infrastructure against zone outages RTO of hours or days Near-zero RTO if replication is synchronous Near-zero RTO if replication is synchronous Near-zero RTO if replication is synchronous Robustness of infrastructure against region outages RTO of hours or days RTO of hours or days Near-zero RTO if replication is synchronous Near-zero RTO if replication is synchronous Cost of Google Cloud resources Low Medium High Medium Operational complexity Simpler than the other deployment archetypes More complex than zonal More complex than regional Potentially simpler than multi-regional Note: For more information about region-specific considerations, see Geography and regions. The following sections describe the comparative analysis that's summarized in the preceding table. 
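As a quick way to interpret the availability row in the preceding table, the targets can be converted into an implied downtime budget. This is illustrative arithmetic only; the percentages are platform-level infrastructure targets, not an SLA for your application.

    # Illustrative only: convert availability targets into allowed downtime.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for label, availability in [
        ("zonal (99.9%)", 0.999),
        ("regional (99.99%)", 0.9999),
        ("multi-regional / global (99.999%)", 0.99999),
    ]:
        downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
        print(f"{label}: about {downtime_minutes:.0f} minutes of downtime per year")

Roughly, that is about 8.8 hours per year at 99.9%, under an hour at 99.99%, and a few minutes at 99.999%.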
Infrastructure availability The following sections describe the differences in infrastructure availability between the deployment archetypes. Zonal, regional, multi-regional, and global deployment archetypes Google Cloud infrastructure is built to support a target availability of 99.9% for your workload when you use the zonal deployment archetype, 99.99% for regional deployments, and 99.999% for multi-regional and global deployments. These availability numbers are targets for the platform-level infrastructure. The availability that you can expect from an application that's deployed in Google Cloud depends on the following factors besides the deployment archetype: Design of the application Number of interdependent tiers in the application stack Uptime Service Level Agreements (SLAs) for the Google Cloud services used Amount of redundant resources Location scopes of the resources For more information, see Building blocks of reliability in Google Cloud. Hybrid and multicloud deployment archetypes For a hybrid or multicloud topology, the overall availability depends on the infrastructure in each environment and the interdependencies between the environments. If critical interdependencies exist between components in Google Cloud and components outside Google Cloud, the overall availability is lower than the availability of the component that provides the least availability across all the environments. If every component of the application is deployed redundantly across Google Cloud and on-premises or in other cloud platforms, the redundancy ensures high availability. Robustness of infrastructure against zone and region outages The following sections describe the differences between the deployment archetypes in terms of the ability of the infrastructure to continue to support your workloads in the event of Google Cloud zone and region outages. Zonal deployment archetype An architecture that uses the basic single-zone deployment archetype isn't robust against zone outages. You must plan for recovering from zone outages based on your recovery point objective (RPO) and recovery time objective (RTO). For example, you can maintain a passive or scaled-down replica of the infrastructure in another (failover) zone. If an outage occurs in the primary zone, you can promote the database in the failover zone to be the primary database and update the load balancer to send traffic to the frontend in the failover zone. Regional deployment archetype An architecture that uses the regional deployment archetype is robust against zone outages. A failure in one zone is unlikely to affect infrastructure in other zones. The RTO is near zero if data is replicated synchronously. However, when an outage affects an entire Google Cloud region, the application becomes unavailable. Plan for recovering from outages according to your RPO and RTO for the application. For example, you can provision a passive replica of the infrastructure in a different region, and activate the replica during region outages. Multi-regional and global deployment archetypes An architecture that uses the multi-regional or global deployment archetype is robust against zone and region outages. The RTO is near zero if data is replicated synchronously. An architecture where the application runs as a globally distributed location-unaware stack provides the highest level of robustness against region outages. 
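The recovery planning described above assumes that you can detect a zone outage promptly. The following is a minimal sketch, assuming the google-cloud-compute Python client library is installed and that my-project is replaced with a real project ID; it lists the zones of a region and their reported status, which could serve as one input, alongside application-level health checks, when deciding whether to activate a failover replica.

```python
# Minimal sketch: list zone status for one region using the Compute Engine API.
# Assumes `pip install google-cloud-compute`, application default credentials,
# and that "my-project" is replaced with a real project ID. Zone status is only
# one signal; real failover logic would also use application health checks.
from google.cloud import compute_v1


def zone_statuses(project_id: str, region_prefix: str) -> dict:
    """Return {zone_name: status} for zones whose names start with region_prefix."""
    client = compute_v1.ZonesClient()
    return {
        zone.name: zone.status
        for zone in client.list(project=project_id)
        if zone.name.startswith(region_prefix)
    }


if __name__ == "__main__":
    for name, status in zone_statuses("my-project", "us-central1").items():
        print(f"{name}: {status}")
```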
Hybrid and multicloud deployment archetypes The robustness of a hybrid and multicloud architecture depends on the robustness of each environment (Google Cloud, on-premises, and other cloud platforms), and the interdependencies between the environments. For example, if every component of an application runs redundantly across both Google Cloud and another environment (on-premises or another cloud platform), then the application is robust against any Google Cloud outages. If critical interdependencies exist between components in Google Cloud and components that are deployed on-premises or on other cloud platforms, the robustness against Google Cloud outages depends on the robustness of the deployment archetype that you use for the Google Cloud part of the architecture. Cost of Google Cloud resources The cost of the Google Cloud resources that are required for an application depends on the Google Cloud services that you use, the number of resources that you provision, the period for which you retain or use resources, and the deployment archetype that you choose. To estimate the cost of Google Cloud resources in an architecture based on any deployment archetype, you can use the Google Cloud Pricing Calculator. The following sections describe the differences in the cost of the Google Cloud resources between the various deployment archetypes. Zonal versus regional and multi-regional deployment archetypes When compared with an architecture that uses the zonal deployment archetype, an architecture that uses the multi-regional deployment archetype might incur extra costs for redundant storage. Also, for any network traffic that crosses region boundaries, you need to consider the cross-region data transfer costs. Global deployment archetype With this archetype, you have the opportunity to use highly available global resources, like a global load balancer. The cost of setting up and operating the cloud resources can be lower than a multi-regional deployment where you provision and configure multiple instances of regional resources. However, global resources might entail higher costs in some cases. For example, the global load balancer requires Premium Tier networking, but for regional load balancers, you can choose Standard Tier. Hybrid and multicloud deployment archetypes In a hybrid or multicloud deployment architecture, you need to consider additional costs along with the cost of the resources that you provision. For example, consider costs like hybrid or cross-cloud networking, and the cost of monitoring and managing the resources across multiple environments. Considerations for all the deployment archetypes When you assess the cost of running a cloud workload, you need to consider additional costs along with the cost of the Google Cloud resources that you provision. For example, consider personnel expenses and the overhead costs to design, build, and maintain your cloud deployment. To compare the cost of Google Cloud resources across the deployment archetypes, also consider the cost per unit of work that the application performs. Identify units of work that reflect the business drivers of the application, like the number of users the application serves or the number of requests processed. By carefully managing the utilization of your Google Cloud resources and adopting Google-recommended best practices, you can optimize the cost of your cloud deployments. For more information, see Google Cloud Architecture Framework: Cost optimization. 
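The cost-per-unit-of-work comparison suggested above is simple arithmetic once you have a cost estimate and a workload metric. The sketch below uses hypothetical monthly costs and request volumes rather than Google Cloud list prices; use the Google Cloud Pricing Calculator to obtain real estimates before making this kind of comparison.

```python
# Hypothetical numbers for illustration only; obtain real figures from the
# Google Cloud Pricing Calculator. The point is the metric: cost per unit of work.
deployments = {
    # archetype: (estimated monthly cost in USD, requests served per month)
    "zonal": (1_000, 30_000_000),
    "regional": (1_800, 30_000_000),
    "multi-regional": (3_200, 30_000_000),
}

for archetype, (monthly_cost, requests) in deployments.items():
    cost_per_million_requests = monthly_cost / (requests / 1_000_000)
    print(f"{archetype}: ${cost_per_million_requests:.2f} per million requests")
```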
Operational complexity The following sections describe the differences in operational complexity between the deployment archetypes, which depends on the number of infrastructure resources, features, and application stacks that you need to operate. Zonal versus regional and multi-regional deployment archetypes An architecture that's based on the zonal deployment archetype is easier to set up and operate when compared with the other deployment archetypes. An application that runs redundantly in multiple zones or regions requires higher operational effort, due to the following reasons: The status of the application stacks in multiple locations must be monitored, both at the stack level and for each component of the application. If a component becomes unavailable in any location, in-process requests must be handled gracefully. Application changes must be rolled out carefully. The databases must be synchronized across all the locations. Global deployment archetype The global deployment archetype lets you use highly available global resources like a global load balancer and a global database. The effort to set up and operate cloud resources can be lower than a multi-regional deployment where you need to manage multiple instances of regional resources. However, you must carefully manage changes to global resources. The effort to operate an architecture that uses the global deployment archetype also depends on whether you deploy a distributed location-unaware stack or multiple regionally isolated stacks: A distributed, location-unaware application can be expanded and scaled with greater flexibility. For example, if certain components have critical end-user latency requirements in only specific locations, you can deploy these components in the required locations and operate the remainder of the stack in other locations. An application that's deployed as multiple regionally isolated stacks requires higher effort to operate and maintain, due to the following factors: The status of the application stacks in multiple locations must be monitored, both at the stack level and for each component. If a component becomes unavailable in any location, in-process requests must be handled gracefully. Application changes must be rolled out carefully. The databases must be synchronized across all the locations. Hybrid and multicloud deployment archetypes Hybrid or multicloud topologies require more effort to set up and operate than an architecture that uses only Google Cloud. Resources must be managed consistently across the on-premises and Google Cloud topologies. To manage containerized hybrid applications, you can use solutions like GKE Enterprise, which provides a unified cloud operating model to provision, update, and optimize Kubernetes clusters in multiple locations. You need a way to efficiently provision and manage resources across multiple platforms. Tools like Terraform can help to reduce the provisioning effort. Security features and tools aren't standard across cloud platforms. Your security administrators need to acquire skills and expertise to manage the security of resources distributed across all the cloud platforms that you use. 
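One recurring source of the operational effort described above is monitoring the application stack in every location where it runs. The following is a minimal sketch with hypothetical per-region health endpoints; a production setup would more likely rely on Cloud Monitoring uptime checks, but the shape of the task is the same.

```python
# Minimal sketch of per-location stack monitoring. The endpoint URLs are
# hypothetical placeholders for whatever health URL each regional stack exposes.
import urllib.error
import urllib.request

HEALTH_ENDPOINTS = {
    "us-central1": "https://us-central1.app.example.com/healthz",
    "europe-west1": "https://europe-west1.app.example.com/healthz",
    "asia-east1": "https://asia-east1.app.example.com/healthz",
}


def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint responds with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False


if __name__ == "__main__":
    for location, url in HEALTH_ENDPOINTS.items():
        print(f"{location}: {'healthy' if is_healthy(url) else 'UNHEALTHY'}")
```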
Previous arrow_back Multicloud Next What's next arrow_forward Send feedback \ No newline at end of file diff --git a/Compute.txt b/Compute.txt new file mode 100644 index 0000000000000000000000000000000000000000..21e5ec62d86a55fcad1703ff26ffbaddf745adf7 --- /dev/null +++ b/Compute.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/compute +Date Scraped: 2025-02-23T12:02:31.063Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Compute Engine Virtual machines for any workloadEasily create and run online VMs on high-performance, reliable cloud infrastructure. Choose from preset or custom machine types for web servers, databases, AI, and more.Get one e2-micro VM instance, up to 30 GB storage, and up to 1 GB of outbound data transfers free per month.Try it in consoleContact salesProduct highlightsHow to choose and deploy your first VMEncrypt sensitive data with Confidential VMsCustomize your VMs for different compute, memory, performance, and cost requirementsHow to choose the right VM5-minute video explainerFeaturesPreset and custom configurationsDeploy an application in minutes with prebuilt samples called Jump Start Solutions. Create a dynamic website, load-balanced VM, three-tier web app, or ecommerce web app.Choose from predefined machine types, sizes, and configurations for any workload, from large enterprise applications, to modern workloads (like containers) or AI/ML projects that require GPUs and TPUs.For more flexibility, create a custom machine type between 1 and 96 vCPUs with up to 8.0 GB of memory per core. And leverage one of many block storage options, from flexible Persistent Disk to high performance and low-latency Local SSD.Industry-leading reliabilityCompute Engine offers the best single instance compute availability SLA of any cloud provider: 99.95% availability for memory-optimized VMs and 99.9% for all other VM families. Is downtime keeping you up at night? Maintain workload continuity during planned and unplanned events with live migration. When a VM goes down, Compute Engine performs a live migration to another host in the same zone.Automations and recommendations for resource efficiencyAutomatically add VMs to handle peak load and replace underperforming instances with managed instance groups. Manually adjust your resources using historical data with rightsizing recommendations, or guarantee capacity for planned demand spikes with future reservations.All of our latest compute instances (including C4A, C4, N4, C3D, X4, and Z3) run on Titanium, a system of purpose-built microcontrollers and tiered scale-out offloads to improve your infrastructure performance, life cycle management, and security.Transparent pricing and discountingReview detailed pricing guidance for any VM type or configuration, or use our pricing calculator to get a personalized estimate.To save on batch jobs and fault-tolerant workloads, use Spot VMs to reduce your bill from 60-91%.Receive automatic discounts for sustained use, or up to 70% off when you sign up for committed use discounts.Security controls and configurationsEncrypt data-in-use and while it’s being processed with Confidential VMs. 
Defend against rootkits and bootkits with Shielded VMs.Meet stringent compliance standards for data residency, sovereignty, access, and encryption with Assured Workloads.Workload ManagerNow available for SAP workloads, Workload Manager evaluates your application workloads by detecting deviations from documented standards and best practices to proactively prevent issues, continuously analyze workloads, and simplify system troubleshooting.VM ManagerVM Manager is a suite of tools that can be used to manage operating systems for large virtual machine (VM) fleets running Windows and Linux on Compute Engine.Sole-tenant nodesSole-tenant nodes are physical Compute Engine servers dedicated exclusively for your use. Sole-tenant nodes simplify deployment for bring-your-own-license (BYOL) applications. Sole-tenant nodes give you access to the same machine types and VM configuration options as regular compute instances.TPU acceleratorsCloud TPUs can be added to accelerate machine learning and artificial intelligence applications. Cloud TPUs can be reserved, used on-demand, or available as preemptible VMs.Linux and Windows supportRun your choice of OS, including Debian, CentOS Stream, Fedora CoreOS, SUSE, Ubuntu, Red Hat Enterprise Linux, FreeBSD, or Windows Server 2008 R2, 2012 R2, and 2016. You can also use a shared image from the Google Cloud community or bring your own.Container supportRun, manage, and orchestrate Docker containers on Compute Engine VMs with Google Kubernetes Engine.Placement policyUse placement policy to specify the location of your underlying hardware instances. Spread placement policy provides higher reliability by placing instances on distinct hardware, reducing the impact of underlying hardware failures. Compact placement policy provides lower latency between nodes by placing instances close together within the same network infrastructure. View all featuresChoose the right VM for your workload and requirementsOptimizationWorkloadsOur recommendationEfficientLowest cost per core.Web and app servers (low traffic)Dev and test environmentsContainerized microservicesVirtual desktopsGeneral purpose E-SeriesE2FlexibleBest price-performance for balanced and flexible workloads. 
Web and app servers (low to medium traffic)Containerized microservicesVirtual desktopsBack-office, CRM, or BI applicationsData pipelinesDatabases (small to medium sized)General purpose N-SeriesN4, N2, N2D, and N1PerformanceBest performance with advanced capabilities.Web and app servers (high traffic)Ad serversGame serversData analyticsDatabases (any size)In-memory cachesMedia streaming and transcodingCPU-based AI/MLGeneral purpose C-SeriesC4A, C4, C3, C3DComputeHighest compute per core.Web and app servers Game serversMedia streaming and transcoding Compute-bound workloads High performance computing (HPC)CPU-based AI/MLSpecialized H-SeriesH3MemoryHighest memory per core.Databases (large)In-memory cachesElectronic design automationModeling and simulationSpecialized M-Series X4, M3Storage Highest storage per core.Data analyticsDatabases (large horizontal scale-out, flash-optimized, data warehouses, and more)Specialized Z-SeriesZ3Inference and visualization with GPUsBest performance for inference and visualization tasks requiring GPUs.CUDA-enabled ML training and inferenceVideo transcodingSpecialized G-seriesG2All other GPU tasksHighest performing GPUs.Massively parallelized computation BERT natural language processing Deep learning recommendation model (DLRM)Specialized A-seriesA3Documentation: Machine families resource and comparison guideEfficientLowest cost per core.WorkloadsWeb and app servers (low traffic)Dev and test environmentsContainerized microservicesVirtual desktopsOur recommendationGeneral purpose E-SeriesE2FlexibleBest price-performance for balanced and flexible workloads. WorkloadsWeb and app servers (low to medium traffic)Containerized microservicesVirtual desktopsBack-office, CRM, or BI applicationsData pipelinesDatabases (small to medium sized)Our recommendationGeneral purpose N-SeriesN4, N2, N2D, and N1PerformanceBest performance with advanced capabilities.WorkloadsWeb and app servers (high traffic)Ad serversGame serversData analyticsDatabases (any size)In-memory cachesMedia streaming and transcodingCPU-based AI/MLOur recommendationGeneral purpose C-SeriesC4A, C4, C3, C3DComputeHighest compute per core.WorkloadsWeb and app servers Game serversMedia streaming and transcoding Compute-bound workloads High performance computing (HPC)CPU-based AI/MLOur recommendationSpecialized H-SeriesH3MemoryHighest memory per core.WorkloadsDatabases (large)In-memory cachesElectronic design automationModeling and simulationOur recommendationSpecialized M-Series X4, M3Storage Highest storage per core.WorkloadsData analyticsDatabases (large horizontal scale-out, flash-optimized, data warehouses, and more)Our recommendationSpecialized Z-SeriesZ3Inference and visualization with GPUsBest performance for inference and visualization tasks requiring GPUs.WorkloadsCUDA-enabled ML training and inferenceVideo transcodingOur recommendationSpecialized G-seriesG2All other GPU tasksHighest performing GPUs.WorkloadsMassively parallelized computation BERT natural language processing Deep learning recommendation model (DLRM)Our recommendationSpecialized A-seriesA3Documentation: Machine families resource and comparison guideHow It WorksCompute Engine is a computing and hosting service that lets you create and run virtual machines on Google infrastructure, comparable to Amazon EC2 and Azure Virtual Machines. 
Compute Engine offers scale, performance, and value that lets you easily launch large compute clusters with no up-front investment.Guides: How to get startedCompute Engine in 2-minutesCommon UsesCreate your first VMThree ways to get startedComplete a tutorial. Learn how to deploy a Linux VM, Windows Server VM, load balanced VM, Java app, custom website, LAMP stack, and much more.Deploy a pre-configured sample application—Jump Start Solution—in just a few clicks.Create a VM from scratch using the Google Cloud console, CLI, API, or Client Libraries like C#, Go, and Java. Use our documentation for step-by-step guidance.Documentation: Creating a VM instanceBlog: What is a Jump Start Solution?Documentation: Create custom VM images from source disks, images, and snapshotsDocumentation: Create multiple VMs that you can treat as a single entity with managed instance groups How to choose the right VMWith thousands of applications, each with different requirements, which VM is right for you?Video: Choose the right VMDocumentation: View available regions and zonesDocumentation: Choose a VM deployment strategyDocumentation: Understand networking for VMsTutorials, quickstarts, & labsThree ways to get startedComplete a tutorial. Learn how to deploy a Linux VM, Windows Server VM, load balanced VM, Java app, custom website, LAMP stack, and much more.Deploy a pre-configured sample application—Jump Start Solution—in just a few clicks.Create a VM from scratch using the Google Cloud console, CLI, API, or Client Libraries like C#, Go, and Java. Use our documentation for step-by-step guidance.Documentation: Creating a VM instanceBlog: What is a Jump Start Solution?Documentation: Create custom VM images from source disks, images, and snapshotsDocumentation: Create multiple VMs that you can treat as a single entity with managed instance groups Learning resourcesHow to choose the right VMWith thousands of applications, each with different requirements, which VM is right for you?Video: Choose the right VMDocumentation: View available regions and zonesDocumentation: Choose a VM deployment strategyDocumentation: Understand networking for VMsMigrate and optimize enterprise applicationsThree ways to get startedComplete a lab or tutorial. Generate a rapid estimate of your migration costs, learn how to migrate a Linux VM, VMware, SQL servers, and much more.Visit the Cloud Architecture Center for advice on how to plan, design, and implement your cloud migration.Apply for end-to-end migration and modernization support via Google Cloud’s Rapid Migration Program (RaMP).Guide: Migrate to Google Cloud Guide and checklist: Migrating workloads to the public cloudAnnounced at Next ’24: Optimize costs and efficiency with new compute operations solutionsGuide: Visit the Cloud Architecture CenterAccess documentation, guides, and reference architecturesMigration Center is Google Cloud's unified migration platform. With features like cloud spend estimation, asset discovery, and a variety of tooling for different migration scenarios, it provides you with what you need to get started.Start here: Google Cloud Migration CenterGuide: Bring your own licensesGuide: SAP HANA on Compute EngineGuide: Migrate VMware to Compute EngineTutorials, quickstarts, & labsThree ways to get startedComplete a lab or tutorial. 
Generate a rapid estimate of your migration costs, learn how to migrate a Linux VM, VMware, SQL servers, and much more.Visit the Cloud Architecture Center for advice on how to plan, design, and implement your cloud migration.Apply for end-to-end migration and modernization support via Google Cloud’s Rapid Migration Program (RaMP).Guide: Migrate to Google Cloud Guide and checklist: Migrating workloads to the public cloudAnnounced at Next ’24: Optimize costs and efficiency with new compute operations solutionsGuide: Visit the Cloud Architecture CenterLearning resourcesAccess documentation, guides, and reference architecturesMigration Center is Google Cloud's unified migration platform. With features like cloud spend estimation, asset discovery, and a variety of tooling for different migration scenarios, it provides you with what you need to get started.Start here: Google Cloud Migration CenterGuide: Bring your own licensesGuide: SAP HANA on Compute EngineGuide: Migrate VMware to Compute EngineBackup and restore your applicationsExplore your optionsCompute Engine offers ways to backup and restore:Virtual machine instancesPersistent Disk and Hyperdisk volumesWorkloads running in Compute Engine and on-premisesStart with a tutorial, or read the detailed options in our documentation.Documentation: Backup and restoreAccess a fully managed backup and disaster recovery serviceWe offer a managed backup and disaster recovery (DR) service for centralized data protection of VMs and other workloads running in Google Cloud and on-premises. It uses snapshots to incrementally backup data from your persistent disks at the instance level. Overview: Backup and DR serviceVideo: Rock-solid business continuity and data protection on Google CloudTutorials, quickstarts, & labsExplore your optionsCompute Engine offers ways to backup and restore:Virtual machine instancesPersistent Disk and Hyperdisk volumesWorkloads running in Compute Engine and on-premisesStart with a tutorial, or read the detailed options in our documentation.Documentation: Backup and restoreLearning resourcesAccess a fully managed backup and disaster recovery serviceWe offer a managed backup and disaster recovery (DR) service for centralized data protection of VMs and other workloads running in Google Cloud and on-premises. It uses snapshots to incrementally backup data from your persistent disks at the instance level. 
Overview: Backup and DR serviceVideo: Rock-solid business continuity and data protection on Google CloudRun modern container-based applicationsThree ways to deploy containersContainers let you run your apps with fewer dependencies on the host virtual machine and independently from other containerized apps using the same host.If you need complete control over your environment, run container images directly on Compute Engine.To simplify cluster management and container orchestration tasks, use Google Kubernetes Engine (GKE).To completely remove the need for clusters or infrastructure management, use Cloud Run.Guide: What are containers?Documentation: Deploy containers on Compute EngineDocumentation: Deploy containers on Google Kubernetes EngineDocumentation: Deploy containers on Google Cloud RunTutorials, quickstarts, & labsThree ways to deploy containersContainers let you run your apps with fewer dependencies on the host virtual machine and independently from other containerized apps using the same host.If you need complete control over your environment, run container images directly on Compute Engine.To simplify cluster management and container orchestration tasks, use Google Kubernetes Engine (GKE).To completely remove the need for clusters or infrastructure management, use Cloud Run.Guide: What are containers?Documentation: Deploy containers on Compute EngineDocumentation: Deploy containers on Google Kubernetes EngineDocumentation: Deploy containers on Google Cloud RunInfrastructure for AI workloadsAI-optimized hardware We designed the accelerator-optimized machine family to deliver the performance and efficiency you need for AI workloads. Start by comparing our GPUs, or learn about TPUs for large scale AI training and inference tasks.Documentation: Accelerator-optimized VMsArchitecture: Learn about Google Cloud’s supercomputer architecture, AI HypercomputerDocumentation: Understand and compare GPUsOverview: What is a TPU?What’s the difference between a CPU, GPU, and TPU?Learning resourcesAI-optimized hardware We designed the accelerator-optimized machine family to deliver the performance and efficiency you need for AI workloads. Start by comparing our GPUs, or learn about TPUs for large scale AI training and inference tasks.Documentation: Accelerator-optimized VMsArchitecture: Learn about Google Cloud’s supercomputer architecture, AI HypercomputerDocumentation: Understand and compare GPUsOverview: What is a TPU?What’s the difference between a CPU, GPU, and TPU?PricingHow Compute Engine pricing worksCompute Engine pricing varies based on your requirements for performance, storage, networking, location, and more.ServicesDescriptionPrice (USD)Get started freeNew users get $300 in free trial credits to use within 90 days.FreeThe Compute Engine free tier gives you one e2-micro VM instance, up to 30 GB storage, and up to 1 GB of outbound data transfers per month.FreeVM instancesPay-as-you-goOnly pay for the services you use. No up-front fees. No termination charges. Pricing varies by product and usage.Starting at$0.01(e2-micro)Confidential VMsEncrypt data-in-use and while it’s being processed.Starting at$0.936Per vCPU per monthSole tenant nodesPhysical servers dedicated to your project. 
Pay a premium on top of the standard price (pay-as-you-go rate for selected vCPU and memory resources).+10%On top of standard priceDiscount: Committed usePay less when you commit to a minimum spend in advance.Save up to 70%Discount: Spot VMsPay less when you run fault-tolerant jobs using excess Compute Engine capacity.Save up to 91%Discount: Sustained usePay less on resources that are used for more than 25% of a month (and are not receiving any other discounts).Save up to 30%StoragePersistent diskDurable network storage devices that your virtual machine (VM) instances can access. The data on each Persistent Disk volume is distributed across several physical disks.Starting at$0.04 Per GB per monthHyperdiskThe fastest persistent disk storage for Compute Engine, with configurable performance and volumes that can be dynamically resized.Starting at$0.125 Per GB per monthLocal SSDPhysically attached to the server that hosts your VM.Starting at$0.08Per GB per monthNetworkingStandard tierLeverage the public internet to carry traffic between your services and your users.FreeInbound transfers, always. Outbound transfers, up to 200 GB per month.Premium tierLeverage Google's premium backbone to carry traffic to and from your external users.Starting at$0.08Per GB per month for outbound data transfers. Inbound transfers remain free.To estimate costs based on your requirements, use our pricing calculator or reach out to our sales team to request a quote.How Compute Engine pricing worksCompute Engine pricing varies based on your requirements for performance, storage, networking, location, and more.Get started freeDescriptionNew users get $300 in free trial credits to use within 90 days.Price (USD)FreeThe Compute Engine free tier gives you one e2-micro VM instance, up to 30 GB storage, and up to 1 GB of outbound data transfers per month.DescriptionFreeVM instancesDescriptionPay-as-you-goOnly pay for the services you use. No up-front fees. No termination charges. Pricing varies by product and usage.Price (USD)Starting at$0.01(e2-micro)Confidential VMsEncrypt data-in-use and while it’s being processed.DescriptionStarting at$0.936Per vCPU per monthSole tenant nodesPhysical servers dedicated to your project. Pay a premium on top of the standard price (pay-as-you-go rate for selected vCPU and memory resources).Description+10%On top of standard priceDiscount: Committed usePay less when you commit to a minimum spend in advance.DescriptionSave up to 70%Discount: Spot VMsPay less when you run fault-tolerant jobs using excess Compute Engine capacity.DescriptionSave up to 91%Discount: Sustained usePay less on resources that are used for more than 25% of a month (and are not receiving any other discounts).DescriptionSave up to 30%StorageDescriptionPersistent diskDurable network storage devices that your virtual machine (VM) instances can access. The data on each Persistent Disk volume is distributed across several physical disks.Price (USD)Starting at$0.04 Per GB per monthHyperdiskThe fastest persistent disk storage for Compute Engine, with configurable performance and volumes that can be dynamically resized.DescriptionStarting at$0.125 Per GB per monthLocal SSDPhysically attached to the server that hosts your VM.DescriptionStarting at$0.08Per GB per monthNetworkingDescriptionStandard tierLeverage the public internet to carry traffic between your services and your users.Price (USD)FreeInbound transfers, always. 
Outbound transfers, up to 200 GB per month.Premium tierLeverage Google's premium backbone to carry traffic to and from your external users.DescriptionStarting at$0.08Per GB per month for outbound data transfers. Inbound transfers remain free.To estimate costs based on your requirements, use our pricing calculator or reach out to our sales team to request a quote.Pricing CalculatorEstimate your monthly Compute Engine charges, including cluster management fees.Estimate pricingNeed help?Chat with us online, call us directly, or request a call back.Contact usStart your proof of conceptTry Compute Engine in the console, with one e2-micro VM instance free per month.Go to my consoleHave a large project?Contact salesBrowse quickstarts, tutorials, or interactive walkthroughs for Compute EngineBrowse quickstartsChoose a learning path, build your skills, and validate your knowledge with Cloud Skills BoostBrowse learning pathsLearn and experiment with pre-built solution templates handpicked by our expertsBrowse Jump Start SolutionsBusiness CaseLearn from Compute Engine customers Migrating 40,000 on-prem VMs to the cloud, Sabre reduced their IT costs by 40%. Joe DiFonzo, CIO, Sabre“We’ve taken hundreds of millions of dollars of costs out of our business.” Watch the interviewRelated contentGamebear uses Cloud Load Balancing to decrease network latency by 10X.With a 99.99% uptime SLA, Macy's feels confident that its systems will run seamlessly.Wayfair uses GPUs to automate 3D model creation, reducing costs by $9M.Partners & IntegrationAccelerate your migration with partnersAssessment and planningMigrationReady to move your compute workloads to Google Cloud? These partners can guide you through every stage—from initial planning and assessment to migration.FAQExpand allWhat is Compute Engine? What can it do?Compute Engine is an Infrastructure-as-a-Service product offering flexible, self-managed virtual machines (VMs) hosted on Google's infrastructure. Compute Engine includes Linux and Windows-based VMs running on KVM, local and durable storage options, and a simple REST-based API for configuration and control. The service integrates with Google Cloud technologies, such as Cloud Storage, App Engine, and BigQuery to extend beyond the basic computational capability to create more complex and sophisticated apps.What is a virtual CPU in Compute Engine?On Compute Engine, each virtual CPU (vCPU) is implemented as a single hardware hyper-thread on one of the available CPU Platforms. On Intel Xeon processors, Intel Hyper-Threading Technology allows multiple application threads to run on each physical processor core. You configure your Compute Engine VMs with one or more of these hyper-threads as vCPUs. The machine type specifies the number of vCPUs that your instance has.How do App Engine and Compute Engine relate to each other?We see the two as being complementary. App Engine is Google's Platform-as-a-Service offering and Compute Engine is Google's Infrastructure-as-a-Service offering. App Engine is great for running web-based apps, line of business apps, and mobile backends. Compute Engine is great for when you need more control of the underlying infrastructure. For example, you might use Compute Engine when you have highly customized business logic or you want to run your own storage system.How do I get started?Try these getting started guides, or try one of our quickstart tutorials.How does pricing and purchasing work?Compute Engine charges based on compute instance, storage, and network use. 
VMs are charged on a per-second basis with a one minute minimum. Storage cost is calculated based on the amount of data you store. Network cost is calculated based on the amount of data transferred between VMs that communicate with each other and with the internet. For more information, review our price sheet.Do you offer paid support?Yes, we offer paid support for enterprise customers. For more information, contact our sales organization.Do you offer a Service Level Agreement (SLA)?Yes, we offer a Compute Engine SLA.Where can I send feedback?For billing-related questions, you can send questions to the appropriate support channel.For feature requests and bug reports, submit an issue to our issues tracker.How can I create a project?Go to the Google Cloud console. When prompted, select an existing project or create a new project.Follow the prompts to set up billing. If you are new to Google Cloud, you have free trial credit to pay for your instances.What is the difference between a project number and a project ID?Every project can be identified in two ways: the project number or the project ID. The project number is automatically created when you create the project, whereas the project ID is created by you, or whoever created the project. The project ID is optional for many services, but is required by Compute Engine. For more information, see Google Cloud console projects.What steps does Google take to protect my data?See disk encryption.How do I choose the right size for my persistent disk?Persistent disk performance scales with the size of the persistent disk. Use the persistent disk performance chart to help decide what size disk works for you. If you're not sure, read the documentation to decide how big to make your persistent disk.Where can I request more quota for my project?By default, all Compute Engine projects have default quotas for various resource types. However, these default quotas can be increased on a per-project basis. Check your quota limits and usage in the quota page on the Google Cloud console. If you reach the limit for your resources and need more quota, make a request to increase the quota for certain resources using the IAM quotas page. You can make a request using the Edit Quotas button on the top of the page.What kind of machine configuration (memory, RAM, CPU) can I choose for my instance?Compute Engine offers several configurations for your instance. You can also create custom configurations that match your exact instance needs. See the full list of available options on the machine types page.If I accidentally delete my instance, can I retrieve it?No, instances that have been deleted cannot be retrieved. However, if an instance is simply stopped, you can start it again.Do I have the option of using a regional data center in selected countries?Yes, Compute Engine offers data centers around the world. These data center options are designed to provide low latency connectivity options from those regions. For specific region information, including the geographic location of regions, see regions and zones.How can I tell if a zone is offline?The Compute Engine Zones section in the Google Cloud console shows the status of each zone. You can also get the status of zones through the command-line tool by running gcloud compute zones list, or through the Compute Engine API with the compute.zones.list method.What operating systems can my instances run on?Compute Engine supports several operating system images and third-party images. 
Additionally, you can create a customized version of an image or build your own image.What are the available zones I can create my instance in?For a list of available regions and zones, see regions and zones.What if my question wasn’t answered here?Take a look at a longer list of FAQs here.More ways to get your questions answeredAll Compute FAQsAsk us a questionGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Compute_Engine(1).txt b/Compute_Engine(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..2fe8406747023653ad200782c115ec5937f59f97 --- /dev/null +++ b/Compute_Engine(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/compute +Date Scraped: 2025-02-23T12:02:31.445Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Compute Engine Virtual machines for any workloadEasily create and run online VMs on high-performance, reliable cloud infrastructure. Choose from preset or custom machine types for web servers, databases, AI, and more.Get one e2-micro VM instance, up to 30 GB storage, and up to 1 GB of outbound data transfers free per month.Try it in consoleContact salesProduct highlightsHow to choose and deploy your first VMEncrypt sensitive data with Confidential VMsCustomize your VMs for different compute, memory, performance, and cost requirementsHow to choose the right VM5-minute video explainerFeaturesPreset and custom configurationsDeploy an application in minutes with prebuilt samples called Jump Start Solutions. Create a dynamic website, load-balanced VM, three-tier web app, or ecommerce web app.Choose from predefined machine types, sizes, and configurations for any workload, from large enterprise applications, to modern workloads (like containers) or AI/ML projects that require GPUs and TPUs.For more flexibility, create a custom machine type between 1 and 96 vCPUs with up to 8.0 GB of memory per core. And leverage one of many block storage options, from flexible Persistent Disk to high performance and low-latency Local SSD.Industry-leading reliabilityCompute Engine offers the best single instance compute availability SLA of any cloud provider: 99.95% availability for memory-optimized VMs and 99.9% for all other VM families. Is downtime keeping you up at night? Maintain workload continuity during planned and unplanned events with live migration. When a VM goes down, Compute Engine performs a live migration to another host in the same zone.Automations and recommendations for resource efficiencyAutomatically add VMs to handle peak load and replace underperforming instances with managed instance groups. 
Manually adjust your resources using historical data with rightsizing recommendations, or guarantee capacity for planned demand spikes with future reservations.All of our latest compute instances (including C4A, C4, N4, C3D, X4, and Z3) run on Titanium, a system of purpose-built microcontrollers and tiered scale-out offloads to improve your infrastructure performance, life cycle management, and security.Transparent pricing and discountingReview detailed pricing guidance for any VM type or configuration, or use our pricing calculator to get a personalized estimate.To save on batch jobs and fault-tolerant workloads, use Spot VMs to reduce your bill from 60-91%.Receive automatic discounts for sustained use, or up to 70% off when you sign up for committed use discounts.Security controls and configurationsEncrypt data-in-use and while it’s being processed with Confidential VMs. Defend against rootkits and bootkits with Shielded VMs.Meet stringent compliance standards for data residency, sovereignty, access, and encryption with Assured Workloads.Workload ManagerNow available for SAP workloads, Workload Manager evaluates your application workloads by detecting deviations from documented standards and best practices to proactively prevent issues, continuously analyze workloads, and simplify system troubleshooting.VM ManagerVM Manager is a suite of tools that can be used to manage operating systems for large virtual machine (VM) fleets running Windows and Linux on Compute Engine.Sole-tenant nodesSole-tenant nodes are physical Compute Engine servers dedicated exclusively for your use. Sole-tenant nodes simplify deployment for bring-your-own-license (BYOL) applications. Sole-tenant nodes give you access to the same machine types and VM configuration options as regular compute instances.TPU acceleratorsCloud TPUs can be added to accelerate machine learning and artificial intelligence applications. Cloud TPUs can be reserved, used on-demand, or available as preemptible VMs.Linux and Windows supportRun your choice of OS, including Debian, CentOS Stream, Fedora CoreOS, SUSE, Ubuntu, Red Hat Enterprise Linux, FreeBSD, or Windows Server 2008 R2, 2012 R2, and 2016. You can also use a shared image from the Google Cloud community or bring your own.Container supportRun, manage, and orchestrate Docker containers on Compute Engine VMs with Google Kubernetes Engine.Placement policyUse placement policy to specify the location of your underlying hardware instances. Spread placement policy provides higher reliability by placing instances on distinct hardware, reducing the impact of underlying hardware failures. Compact placement policy provides lower latency between nodes by placing instances close together within the same network infrastructure. View all featuresChoose the right VM for your workload and requirementsOptimizationWorkloadsOur recommendationEfficientLowest cost per core.Web and app servers (low traffic)Dev and test environmentsContainerized microservicesVirtual desktopsGeneral purpose E-SeriesE2FlexibleBest price-performance for balanced and flexible workloads. 
Web and app servers (low to medium traffic)Containerized microservicesVirtual desktopsBack-office, CRM, or BI applicationsData pipelinesDatabases (small to medium sized)General purpose N-SeriesN4, N2, N2D, and N1PerformanceBest performance with advanced capabilities.Web and app servers (high traffic)Ad serversGame serversData analyticsDatabases (any size)In-memory cachesMedia streaming and transcodingCPU-based AI/MLGeneral purpose C-SeriesC4A, C4, C3, C3DComputeHighest compute per core.Web and app servers Game serversMedia streaming and transcoding Compute-bound workloads High performance computing (HPC)CPU-based AI/MLSpecialized H-SeriesH3MemoryHighest memory per core.Databases (large)In-memory cachesElectronic design automationModeling and simulationSpecialized M-Series X4, M3Storage Highest storage per core.Data analyticsDatabases (large horizontal scale-out, flash-optimized, data warehouses, and more)Specialized Z-SeriesZ3Inference and visualization with GPUsBest performance for inference and visualization tasks requiring GPUs.CUDA-enabled ML training and inferenceVideo transcodingSpecialized G-seriesG2All other GPU tasksHighest performing GPUs.Massively parallelized computation BERT natural language processing Deep learning recommendation model (DLRM)Specialized A-seriesA3Documentation: Machine families resource and comparison guideEfficientLowest cost per core.WorkloadsWeb and app servers (low traffic)Dev and test environmentsContainerized microservicesVirtual desktopsOur recommendationGeneral purpose E-SeriesE2FlexibleBest price-performance for balanced and flexible workloads. WorkloadsWeb and app servers (low to medium traffic)Containerized microservicesVirtual desktopsBack-office, CRM, or BI applicationsData pipelinesDatabases (small to medium sized)Our recommendationGeneral purpose N-SeriesN4, N2, N2D, and N1PerformanceBest performance with advanced capabilities.WorkloadsWeb and app servers (high traffic)Ad serversGame serversData analyticsDatabases (any size)In-memory cachesMedia streaming and transcodingCPU-based AI/MLOur recommendationGeneral purpose C-SeriesC4A, C4, C3, C3DComputeHighest compute per core.WorkloadsWeb and app servers Game serversMedia streaming and transcoding Compute-bound workloads High performance computing (HPC)CPU-based AI/MLOur recommendationSpecialized H-SeriesH3MemoryHighest memory per core.WorkloadsDatabases (large)In-memory cachesElectronic design automationModeling and simulationOur recommendationSpecialized M-Series X4, M3Storage Highest storage per core.WorkloadsData analyticsDatabases (large horizontal scale-out, flash-optimized, data warehouses, and more)Our recommendationSpecialized Z-SeriesZ3Inference and visualization with GPUsBest performance for inference and visualization tasks requiring GPUs.WorkloadsCUDA-enabled ML training and inferenceVideo transcodingOur recommendationSpecialized G-seriesG2All other GPU tasksHighest performing GPUs.WorkloadsMassively parallelized computation BERT natural language processing Deep learning recommendation model (DLRM)Our recommendationSpecialized A-seriesA3Documentation: Machine families resource and comparison guideHow It WorksCompute Engine is a computing and hosting service that lets you create and run virtual machines on Google infrastructure, comparable to Amazon EC2 and Azure Virtual Machines. 
Compute Engine offers scale, performance, and value that lets you easily launch large compute clusters with no up-front investment.Guides: How to get startedCompute Engine in 2-minutesCommon UsesCreate your first VMThree ways to get startedComplete a tutorial. Learn how to deploy a Linux VM, Windows Server VM, load balanced VM, Java app, custom website, LAMP stack, and much more.Deploy a pre-configured sample application—Jump Start Solution—in just a few clicks.Create a VM from scratch using the Google Cloud console, CLI, API, or Client Libraries like C#, Go, and Java. Use our documentation for step-by-step guidance.Documentation: Creating a VM instanceBlog: What is a Jump Start Solution?Documentation: Create custom VM images from source disks, images, and snapshotsDocumentation: Create multiple VMs that you can treat as a single entity with managed instance groups How to choose the right VMWith thousands of applications, each with different requirements, which VM is right for you?Video: Choose the right VMDocumentation: View available regions and zonesDocumentation: Choose a VM deployment strategyDocumentation: Understand networking for VMsTutorials, quickstarts, & labsThree ways to get startedComplete a tutorial. Learn how to deploy a Linux VM, Windows Server VM, load balanced VM, Java app, custom website, LAMP stack, and much more.Deploy a pre-configured sample application—Jump Start Solution—in just a few clicks.Create a VM from scratch using the Google Cloud console, CLI, API, or Client Libraries like C#, Go, and Java. Use our documentation for step-by-step guidance.Documentation: Creating a VM instanceBlog: What is a Jump Start Solution?Documentation: Create custom VM images from source disks, images, and snapshotsDocumentation: Create multiple VMs that you can treat as a single entity with managed instance groups Learning resourcesHow to choose the right VMWith thousands of applications, each with different requirements, which VM is right for you?Video: Choose the right VMDocumentation: View available regions and zonesDocumentation: Choose a VM deployment strategyDocumentation: Understand networking for VMsMigrate and optimize enterprise applicationsThree ways to get startedComplete a lab or tutorial. Generate a rapid estimate of your migration costs, learn how to migrate a Linux VM, VMware, SQL servers, and much more.Visit the Cloud Architecture Center for advice on how to plan, design, and implement your cloud migration.Apply for end-to-end migration and modernization support via Google Cloud’s Rapid Migration Program (RaMP).Guide: Migrate to Google Cloud Guide and checklist: Migrating workloads to the public cloudAnnounced at Next ’24: Optimize costs and efficiency with new compute operations solutionsGuide: Visit the Cloud Architecture CenterAccess documentation, guides, and reference architecturesMigration Center is Google Cloud's unified migration platform. With features like cloud spend estimation, asset discovery, and a variety of tooling for different migration scenarios, it provides you with what you need to get started.Start here: Google Cloud Migration CenterGuide: Bring your own licensesGuide: SAP HANA on Compute EngineGuide: Migrate VMware to Compute EngineTutorials, quickstarts, & labsThree ways to get startedComplete a lab or tutorial. 
Generate a rapid estimate of your migration costs, learn how to migrate a Linux VM, VMware, SQL servers, and much more.Visit the Cloud Architecture Center for advice on how to plan, design, and implement your cloud migration.Apply for end-to-end migration and modernization support via Google Cloud’s Rapid Migration Program (RaMP).Guide: Migrate to Google Cloud Guide and checklist: Migrating workloads to the public cloudAnnounced at Next ’24: Optimize costs and efficiency with new compute operations solutionsGuide: Visit the Cloud Architecture CenterLearning resourcesAccess documentation, guides, and reference architecturesMigration Center is Google Cloud's unified migration platform. With features like cloud spend estimation, asset discovery, and a variety of tooling for different migration scenarios, it provides you with what you need to get started.Start here: Google Cloud Migration CenterGuide: Bring your own licensesGuide: SAP HANA on Compute EngineGuide: Migrate VMware to Compute EngineBackup and restore your applicationsExplore your optionsCompute Engine offers ways to backup and restore:Virtual machine instancesPersistent Disk and Hyperdisk volumesWorkloads running in Compute Engine and on-premisesStart with a tutorial, or read the detailed options in our documentation.Documentation: Backup and restoreAccess a fully managed backup and disaster recovery serviceWe offer a managed backup and disaster recovery (DR) service for centralized data protection of VMs and other workloads running in Google Cloud and on-premises. It uses snapshots to incrementally backup data from your persistent disks at the instance level. Overview: Backup and DR serviceVideo: Rock-solid business continuity and data protection on Google CloudTutorials, quickstarts, & labsExplore your optionsCompute Engine offers ways to backup and restore:Virtual machine instancesPersistent Disk and Hyperdisk volumesWorkloads running in Compute Engine and on-premisesStart with a tutorial, or read the detailed options in our documentation.Documentation: Backup and restoreLearning resourcesAccess a fully managed backup and disaster recovery serviceWe offer a managed backup and disaster recovery (DR) service for centralized data protection of VMs and other workloads running in Google Cloud and on-premises. It uses snapshots to incrementally backup data from your persistent disks at the instance level. 
Overview: Backup and DR serviceVideo: Rock-solid business continuity and data protection on Google CloudRun modern container-based applicationsThree ways to deploy containersContainers let you run your apps with fewer dependencies on the host virtual machine and independently from other containerized apps using the same host.If you need complete control over your environment, run container images directly on Compute Engine.To simplify cluster management and container orchestration tasks, use Google Kubernetes Engine (GKE).To completely remove the need for clusters or infrastructure management, use Cloud Run.Guide: What are containers?Documentation: Deploy containers on Compute EngineDocumentation: Deploy containers on Google Kubernetes EngineDocumentation: Deploy containers on Google Cloud RunTutorials, quickstarts, & labsThree ways to deploy containersContainers let you run your apps with fewer dependencies on the host virtual machine and independently from other containerized apps using the same host.If you need complete control over your environment, run container images directly on Compute Engine.To simplify cluster management and container orchestration tasks, use Google Kubernetes Engine (GKE).To completely remove the need for clusters or infrastructure management, use Cloud Run.Guide: What are containers?Documentation: Deploy containers on Compute EngineDocumentation: Deploy containers on Google Kubernetes EngineDocumentation: Deploy containers on Google Cloud RunInfrastructure for AI workloadsAI-optimized hardware We designed the accelerator-optimized machine family to deliver the performance and efficiency you need for AI workloads. Start by comparing our GPUs, or learn about TPUs for large scale AI training and inference tasks.Documentation: Accelerator-optimized VMsArchitecture: Learn about Google Cloud’s supercomputer architecture, AI HypercomputerDocumentation: Understand and compare GPUsOverview: What is a TPU?What’s the difference between a CPU, GPU, and TPU?Learning resourcesAI-optimized hardware We designed the accelerator-optimized machine family to deliver the performance and efficiency you need for AI workloads. Start by comparing our GPUs, or learn about TPUs for large scale AI training and inference tasks.Documentation: Accelerator-optimized VMsArchitecture: Learn about Google Cloud’s supercomputer architecture, AI HypercomputerDocumentation: Understand and compare GPUsOverview: What is a TPU?What’s the difference between a CPU, GPU, and TPU?PricingHow Compute Engine pricing worksCompute Engine pricing varies based on your requirements for performance, storage, networking, location, and more.ServicesDescriptionPrice (USD)Get started freeNew users get $300 in free trial credits to use within 90 days.FreeThe Compute Engine free tier gives you one e2-micro VM instance, up to 30 GB storage, and up to 1 GB of outbound data transfers per month.FreeVM instancesPay-as-you-goOnly pay for the services you use. No up-front fees. No termination charges. Pricing varies by product and usage.Starting at$0.01(e2-micro)Confidential VMsEncrypt data-in-use and while it’s being processed.Starting at$0.936Per vCPU per monthSole tenant nodesPhysical servers dedicated to your project. 
Pay a premium on top of the standard price (pay-as-you-go rate for selected vCPU and memory resources).+10%On top of standard priceDiscount: Committed usePay less when you commit to a minimum spend in advance.Save up to 70%Discount: Spot VMsPay less when you run fault-tolerant jobs using excess Compute Engine capacity.Save up to 91%Discount: Sustained usePay less on resources that are used for more than 25% of a month (and are not receiving any other discounts).Save up to 30%StoragePersistent diskDurable network storage devices that your virtual machine (VM) instances can access. The data on each Persistent Disk volume is distributed across several physical disks.Starting at$0.04 Per GB per monthHyperdiskThe fastest persistent disk storage for Compute Engine, with configurable performance and volumes that can be dynamically resized.Starting at$0.125 Per GB per monthLocal SSDPhysically attached to the server that hosts your VM.Starting at$0.08Per GB per monthNetworkingStandard tierLeverage the public internet to carry traffic between your services and your users.FreeInbound transfers, always. Outbound transfers, up to 200 GB per month.Premium tierLeverage Google's premium backbone to carry traffic to and from your external users.Starting at$0.08Per GB per month for outbound data transfers. Inbound transfers remain free.To estimate costs based on your requirements, use our pricing calculator or reach out to our sales team to request a quote.How Compute Engine pricing worksCompute Engine pricing varies based on your requirements for performance, storage, networking, location, and more.Get started freeDescriptionNew users get $300 in free trial credits to use within 90 days.Price (USD)FreeThe Compute Engine free tier gives you one e2-micro VM instance, up to 30 GB storage, and up to 1 GB of outbound data transfers per month.DescriptionFreeVM instancesDescriptionPay-as-you-goOnly pay for the services you use. No up-front fees. No termination charges. Pricing varies by product and usage.Price (USD)Starting at$0.01(e2-micro)Confidential VMsEncrypt data-in-use and while it’s being processed.DescriptionStarting at$0.936Per vCPU per monthSole tenant nodesPhysical servers dedicated to your project. Pay a premium on top of the standard price (pay-as-you-go rate for selected vCPU and memory resources).Description+10%On top of standard priceDiscount: Committed usePay less when you commit to a minimum spend in advance.DescriptionSave up to 70%Discount: Spot VMsPay less when you run fault-tolerant jobs using excess Compute Engine capacity.DescriptionSave up to 91%Discount: Sustained usePay less on resources that are used for more than 25% of a month (and are not receiving any other discounts).DescriptionSave up to 30%StorageDescriptionPersistent diskDurable network storage devices that your virtual machine (VM) instances can access. The data on each Persistent Disk volume is distributed across several physical disks.Price (USD)Starting at$0.04 Per GB per monthHyperdiskThe fastest persistent disk storage for Compute Engine, with configurable performance and volumes that can be dynamically resized.DescriptionStarting at$0.125 Per GB per monthLocal SSDPhysically attached to the server that hosts your VM.DescriptionStarting at$0.08Per GB per monthNetworkingDescriptionStandard tierLeverage the public internet to carry traffic between your services and your users.Price (USD)FreeInbound transfers, always. 
Pricing Calculator: Estimate your monthly Compute Engine charges, including cluster management fees. Estimate pricing
Need help? Chat with us online, call us directly, or request a call back. Contact us
Start your proof of concept: Try Compute Engine in the console, with one e2-micro VM instance free per month. Go to my console
Have a large project? Contact sales
Browse quickstarts, tutorials, or interactive walkthroughs for Compute Engine: Browse quickstarts
Choose a learning path, build your skills, and validate your knowledge with Cloud Skills Boost: Browse learning paths
Learn and experiment with pre-built solution templates handpicked by our experts: Browse Jump Start Solutions
Business Case
Learn from Compute Engine customers: Migrating 40,000 on-prem VMs to the cloud, Sabre reduced their IT costs by 40%. Joe DiFonzo, CIO, Sabre: “We’ve taken hundreds of millions of dollars of costs out of our business.” Watch the interview
Related content
Gamebear uses Cloud Load Balancing to decrease network latency by 10X. With a 99.99% uptime SLA, Macy's feels confident their systems will run seamlessly. Wayfair uses GPUs to automate 3D model creation, reducing costs by $9M.
Partners & Integration
Accelerate your migration with partners: assessment and planning, migration. Ready to move your compute workloads to Google Cloud? These partners can guide you through every stage, from initial planning and assessment to migration.
FAQ
What is Compute Engine? What can it do?Compute Engine is an Infrastructure-as-a-Service product offering flexible, self-managed virtual machines (VMs) hosted on Google's infrastructure. Compute Engine includes Linux and Windows-based VMs running on KVM, local and durable storage options, and a simple REST-based API for configuration and control. The service integrates with Google Cloud technologies, such as Cloud Storage, App Engine, and BigQuery, to extend beyond the basic computational capability and create more complex and sophisticated apps.What is a virtual CPU in Compute Engine?On Compute Engine, each virtual CPU (vCPU) is implemented as a single hardware hyper-thread on one of the available CPU platforms. On Intel Xeon processors, Intel Hyper-Threading Technology allows multiple application threads to run on each physical processor core. You configure your Compute Engine VMs with one or more of these hyper-threads as vCPUs. The machine type specifies the number of vCPUs that your instance has.How do App Engine and Compute Engine relate to each other?We see the two as being complementary. App Engine is Google's Platform-as-a-Service offering and Compute Engine is Google's Infrastructure-as-a-Service offering. App Engine is great for running web-based apps, line-of-business apps, and mobile backends. Compute Engine is great for when you need more control of the underlying infrastructure. For example, you might use Compute Engine when you have highly customized business logic or you want to run your own storage system.How do I get started?Try these getting started guides, or try one of our quickstart tutorials.How does pricing and purchasing work?Compute Engine charges based on compute instance, storage, and network use.
VMs are charged on a per-second basis with a one minute minimum. Storage cost is calculated based on the amount of data you store. Network cost is calculated based on the amount of data transferred between VMs that communicate with each other and with the internet. For more information, review our price sheet.Do you offer paid support?Yes, we offer paid support for enterprise customers. For more information, contact our sales organization.Do you offer a Service Level Agreement (SLA)?Yes, we offer a Compute Engine SLA.Where can I send feedback?For billing-related questions, you can send questions to the appropriate support channel.For feature requests and bug reports, submit an issue to our issues tracker.How can I create a project?Go to the Google Cloud console. When prompted, select an existing project or create a new project.Follow the prompts to set up billing. If you are new to Google Cloud, you have free trial credit to pay for your instances.What is the difference between a project number and a project ID?Every project can be identified in two ways: the project number or the project ID. The project number is automatically created when you create the project, whereas the project ID is created by you, or whoever created the project. The project ID is optional for many services, but is required by Compute Engine. For more information, see Google Cloud console projects.What steps does Google take to protect my data?See disk encryption.How do I choose the right size for my persistent disk?Persistent disk performance scales with the size of the persistent disk. Use the persistent disk performance chart to help decide what size disk works for you. If you're not sure, read the documentation to decide how big to make your persistent disk.Where can I request more quota for my project?By default, all Compute Engine projects have default quotas for various resource types. However, these default quotas can be increased on a per-project basis. Check your quota limits and usage in the quota page on the Google Cloud console. If you reach the limit for your resources and need more quota, make a request to increase the quota for certain resources using the IAM quotas page. You can make a request using the Edit Quotas button on the top of the page.What kind of machine configuration (memory, RAM, CPU) can I choose for my instance?Compute Engine offers several configurations for your instance. You can also create custom configurations that match your exact instance needs. See the full list of available options on the machine types page.If I accidentally delete my instance, can I retrieve it?No, instances that have been deleted cannot be retrieved. However, if an instance is simply stopped, you can start it again.Do I have the option of using a regional data center in selected countries?Yes, Compute Engine offers data centers around the world. These data center options are designed to provide low latency connectivity options from those regions. For specific region information, including the geographic location of regions, see regions and zones.How can I tell if a zone is offline?The Compute Engine Zones section in the Google Cloud console shows the status of each zone. You can also get the status of zones through the command-line tool by running gcloud compute zones list, or through the Compute Engine API with the compute.zones.list method.What operating systems can my instances run on?Compute Engine supports several operating system images and third-party images. 
Additionally, you can create a customized version of an image or build your own image.What are the available zones I can create my instance in?For a list of available regions and zones, see regions and zones.What if my question wasn’t answered here?Take a look at a longer list of FAQs here.More ways to get your questions answered: All Compute FAQs, Ask us a question \ No newline at end of file diff --git a/Compute_Engine(2).txt b/Compute_Engine(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..d6db885dccaca3c55e4f4a43bf58bbc2ceba61a6 --- /dev/null +++ b/Compute_Engine(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/compute/all-pricing +Date Scraped: 2025-02-23T12:10:54.662Z + +Content: +Compute Engine pricingThis page lists all the pricing for Compute Engine.Note: This page is a list of Compute Engine pricing in a single place for convenience. It is intended for reference purposes and does not provide detailed pricing explanations. For explanations on how pricing works, see the following pages: VM instance pricing, Spot VM instance pricing, Networking pricing, Sole-tenant node pricing, GPU pricing, Disk and image pricing, Confidential VM pricing, VM Manager pricing, TPU pricing. Compute Engine charges for usage based on the following price sheet. A bill is sent out at the end of each billing cycle, providing a sum of Google Cloud charges. Prices on this page are listed in U.S. dollars (USD). For Compute Engine, disk size, machine type memory, and network usage are calculated in JEDEC binary gigabytes (GB), or IEC gibibytes (GiB), where 1 GiB is 2^30 bytes. Similarly, 1 TiB is 2^40 bytes, or 1024 JEDEC GBs. If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. You can also find pricing information with the following options: see the estimated costs of your instances and Compute Engine resources when you create them in the Google Cloud console; estimate your total project costs with the Google Cloud Pricing Calculator; view and download prices from the Pricing Table in the Google Cloud console; view more information about costs and usage in Cloud Billing reports; use the Cloud Billing Catalog API for programmatic access to SKU information.
General-purpose machine type family
C4 machine types
Available regions: Iowa (us-central1), Taiwan (asia-east1), Hong Kong (asia-east2), Tokyo (asia-northeast1), Seoul (asia-northeast3), Mumbai (asia-south1), Singapore (asia-southeast1), Sydney (australia-southeast1), Melbourne (australia-southeast2), Stockholm (europe-north2), Madrid (europe-southwest1), Belgium (europe-west1), London (europe-west2), Frankfurt (europe-west3), Netherlands (europe-west4), Paris (europe-west9), Dammam (me-central2), Toronto (northamerica-northeast2), Mexico (northamerica-south1), Sao Paulo (southamerica-east1), South Carolina (us-east1), Northern Virginia (us-east4), Columbus (us-east5), Dallas (us-south1), Oregon (us-west1), Salt Lake City (us-west3), Las Vegas (us-west4). Hourly prices shown are for Iowa (us-central1).
Predefined vCPUs: on-demand $0.03465 per hour; 1-year resource-based commitment $0.02183 per hour; 3-year resource-based commitment $0.015593 per hour; 1-year flexible CUD consumption rate $0.024948 per hour; 3-year flexible CUD consumption rate $0.018711 per hour.
Predefined memory: on-demand $0.003938 per GiB-hour; 1-year resource-based commitment $0.002481 per GiB-hour; 3-year resource-based commitment $0.001772 per GiB-hour; 1-year flexible CUD consumption rate $0.00283536 per GiB-hour; 3-year flexible CUD consumption rate $0.00212652 per GiB-hour.
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs
apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C4 standard machine typesIowa (us-central1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Seoul (asia-northeast3)Mumbai (asia-south1)Singapore (asia-southeast1)Sydney (australia-southeast1)Melbourne (australia-southeast2)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Paris (europe-west9)Dammam (me-central2)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Salt Lake City (us-west3)Las Vegas (us-west4)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c4-standard-227 GiB$0.096866 / 1 hourc4-standard-4415 GiB$0.19767 / 1 hourc4-standard-8830 GiB$0.39534 / 1 hourc4-standard-161660 GiB$0.79068 / 1 hourc4-standard-3232120 GiB$1.58136 / 1 hourc4-standard-4848180 GiB$2.37204 / 1 hourc4-standard-9696360 GiB$4.74408 / 1 hourc4-standard-192192720 GiB$9.48816 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C4 high-memory machine typesIowa (us-central1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Seoul (asia-northeast3)Mumbai (asia-south1)Singapore (asia-southeast1)Sydney (australia-southeast1)Melbourne (australia-southeast2)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Paris (europe-west9)Dammam (me-central2)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Salt Lake City (us-west3)Las Vegas (us-west4)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c4-highmem-2215 GiB$0.12837 / 1 hourc4-highmem-4431 GiB$0.260678 / 1 hourc4-highmem-8862 GiB$0.521356 / 1 hourc4-highmem-1616124 GiB$1.042712 / 1 hourc4-highmem-3232248 GiB$2.085424 / 1 hourc4-highmem-4848372 GiB$3.128136 / 1 hourc4-highmem-9696744 GiB$6.256272 / 1 hourc4-highmem-1921921488 GiB$12.512544 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.C4 high-CPU machine typesIowa (us-central1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Seoul (asia-northeast3)Mumbai (asia-south1)Singapore (asia-southeast1)Sydney (australia-southeast1)Melbourne (australia-southeast2)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Paris (europe-west9)Dammam (me-central2)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Salt Lake City (us-west3)Las Vegas (us-west4)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c4-highcpu-224 GiB$0.085052 / 1 hourc4-highcpu-448 GiB$0.170104 / 1 hourc4-highcpu-8816 GiB$0.340208 / 1 hourc4-highcpu-161632 GiB$0.680416 / 1 hourc4-highcpu-323264 GiB$1.360832 / 1 hourc4-highcpu-484896 GiB$2.041248 / 1 hourc4-highcpu-9696192 GiB$4.082496 / 1 hourc4-highcpu-192192384 GiB$8.164992 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C4A machine typesIowa (us-central1)Taiwan (asia-east1)Singapore (asia-southeast1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Alabama (us-east7)Oregon (us-west1)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyItemOn-demand price (USD)1-year resource-based commitment price (USD)3-year resource-based commitment price (USD)1-year flexible CUD consumption rate (USD)3-year flexible CUD consumption rate (USD)Predefined vCPUs$0.03086 / 1 hour$0.02037 / 1 hour$0.01389 / 1 hour$0.0222192 / 1 hour$0.0166644 / 1 hourPredefined Memory$0.00351 / 1 gibibyte hour$0.00231 / 1 gibibyte hour$0.00158 / 1 gibibyte hour$0.0025272 / 1 hour$0.0018954 / 1 hourPredefined local Titanium SSD storage$0.00016438 / 1 gibibyte hour$0.00010849 / 1 gibibyte hour$0.00007397 / 1 gibibyte hour$0.00011836 / 1 gibibyte hour$0.00008877 / 1 gibibyte hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.C4A standard machine typesIowa (us-central1)Taiwan (asia-east1)Singapore (asia-southeast1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Alabama (us-east7)Oregon (us-west1)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c4a-standard-114 GiB$0.0449 / 1 hourc4a-standard-228 GiB$0.0898 / 1 hourc4a-standard-4416 GiB$0.1796 / 1 hourc4a-standard-8832 GiB$0.3592 / 1 hourc4a-standard-161664 GiB$0.7184 / 1 hourc4a-standard-3232128 GiB$1.4368 / 1 hourc4a-standard-4848192 GiB$2.1552 / 1 hourc4a-standard-6464256 GiB$2.8736 / 1 hourc4a-standard-7272288 GiB$3.2328 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C4A high-memory machine typesIowa (us-central1)Taiwan (asia-east1)Singapore (asia-southeast1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Alabama (us-east7)Oregon (us-west1)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c4a-highmem-118 GiB$0.05894 / 1 hourc4a-highmem-2216 GiB$0.11788 / 1 hourc4a-highmem-4432 GiB$0.23576 / 1 hourc4a-highmem-8864 GiB$0.47152 / 1 hourc4a-highmem-1616128 GiB$0.94304 / 1 hourc4a-highmem-3232256 GiB$1.88608 / 1 hourc4a-highmem-4848384 GiB$2.82912 / 1 hourc4a-highmem-6464512 GiB$3.77216 / 1 hourc4a-highmem-7272576 GiB$4.24368 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C4A high-CPU machine typesIowa (us-central1)Taiwan (asia-east1)Singapore (asia-southeast1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Alabama (us-east7)Oregon (us-west1)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c4a-highcpu-112 GiB$0.03788 / 1 hourc4a-highcpu-224 GiB$0.07576 / 1 hourc4a-highcpu-448 GiB$0.15152 / 1 hourc4a-highcpu-8816 GiB$0.30304 / 1 hourc4a-highcpu-161632 GiB$0.60608 / 1 hourc4a-highcpu-323264 GiB$1.21216 / 1 hourc4a-highcpu-484896 GiB$1.81824 / 1 hourc4a-highcpu-6464128 GiB$2.42432 / 1 hourc4a-highcpu-7272144 GiB$2.72736 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.C4A Standard with Local SSDIowa (us-central1)Singapore (asia-southeast1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Oregon (us-west1)HourlyHourlyMonthlyMonthlyVM ShapevCPUsMemoryLocal SSDPrice (USD)c4a-standard-4-lssd416 GiB375 GiB$0.24124384 / 1 hourc4a-standard-8-lssd832 GiB750 GiB$0.48248767 / 1 hourc4a-standard-16-lssd1664 GiB1500 GiB$0.96497534 / 1 hourc4a-standard-32-lssd32128 GiB2250 GiB$1.80666301 / 1 hourc4a-standard-48-lssd48192 GiB3750 GiB$2.77163836 / 1 hourc4a-standard-64-lssd64256 GiB5250 GiB$3.7366137 / 1 hourc4a-standard-72-lssd72288 GiB6000 GiB$4.21910137 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C4A Highmem with Local SSDIowa (us-central1)Singapore (asia-southeast1)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Oregon (us-west1)HourlyHourlyMonthlyMonthlyVM ShapevCPUsMemoryLocal SSDPrice (USD)c4a-highmem-4-lssd432 GiB375 GiB$0.29740384 / 1 hourc4a-highmem-8-lssd864 GiB750 GiB$0.59480767 / 1 hourc4a-highmem-16-lssd16128 GiB1500 GiB$1.18961534 / 1 hourc4a-highmem-32-lssd32256 GiB2250 GiB$2.25594301 / 1 hourc4a-highmem-48-lssd48384 GiB3750 GiB$3.44555836 / 1 hourc4a-highmem-64-lssd64512 GiB5250 GiB$4.6351737 / 1 hourc4a-highmem-72-lssd72576 GiB6000 GiB$5.22998137 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.N4 machine typesIowa (us-central1)Tokyo (asia-northeast1)Seoul (asia-northeast3)Mumbai (asia-south1)Singapore (asia-southeast1)Sydney (australia-southeast1)Stockholm (europe-north2)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Paris (europe-west9)Dammam (me-central2)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Salt Lake City (us-west3)HourlyHourlyMonthlyMonthlyItemOn-demand price (USD)1-year resource-based commitment price (USD)3-year resource-based commitment price (USD)1-year flexible CUD consumption rate (USD)3-year flexible CUD consumption rate (USD)Predefined vCPUs$0.032578 / 1 hour$0.02052414 / 1 hour$0.0146601 / 1 hour$0.02345616 / 1 hour$0.01759212 / 1 hourPredefined Memory$0.003702 / 1 gibibyte hour$0.00233226 / 1 gibibyte hour$0.0016659 / 1 gibibyte hour$0.00266544 / 1 hour$0.00199908 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.N4 standard machine typesIowa (us-central1)Tokyo (asia-northeast1)Seoul (asia-northeast3)Mumbai (asia-south1)Singapore (asia-southeast1)Sydney (australia-southeast1)Stockholm (europe-north2)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Paris (europe-west9)Dammam (me-central2)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Salt Lake City (us-west3)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)n4-standard-228 GiB$0.094772 / 1 hourn4-standard-4416 GiB$0.189544 / 1 hourn4-standard-8832 GiB$0.379088 / 1 hourn4-standard-161664 GiB$0.758176 / 1 hourn4-standard-3232128 GiB$1.516352 / 1 hourn4-standard-4848192 GiB$2.274528 / 1 hourn4-standard-6464256 GiB$3.032704 / 1 hourn4-standard-8080320 GiB$3.79088 / 1 hourCustom machine type: If your ideal machine shape is in between two predefined types, using a custom machine type could save you as much as 40%. For more information, see Custom vCPU and memory.If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.N4 high-memory machine typesIowa (us-central1)Tokyo (asia-northeast1)Seoul (asia-northeast3)Mumbai (asia-south1)Singapore (asia-southeast1)Sydney (australia-southeast1)Stockholm (europe-north2)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Paris (europe-west9)Dammam (me-central2)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Salt Lake City (us-west3)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)n4-highmem-2216 GiB$0.124388 / 1 hourn4-highmem-4432 GiB$0.248776 / 1 hourn4-highmem-8864 GiB$0.497552 / 1 hourn4-highmem-1616128 GiB$0.995104 / 1 hourn4-highmem-3232256 GiB$1.990208 / 1 hourn4-highmem-4848384 GiB$2.985312 / 1 hourn4-highmem-6464512 GiB$3.980416 / 1 hourn4-highmem-8080640 GiB$4.97552 / 1 hourCustom machine type: If your ideal machine shape is in between two predefined types, using a custom machine type could save you as much as 40%. For more information, see Custom vCPU and memory.If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.N4 high-CPU machine typesIowa (us-central1)Tokyo (asia-northeast1)Seoul (asia-northeast3)Mumbai (asia-south1)Singapore (asia-southeast1)Sydney (australia-southeast1)Stockholm (europe-north2)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Paris (europe-west9)Dammam (me-central2)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Salt Lake City (us-west3)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)n4-highcpu-224 GiB$0.079964 / 1 hourn4-highcpu-448 GiB$0.159928 / 1 hourn4-highcpu-8816 GiB$0.319856 / 1 hourn4-highcpu-161632 GiB$0.639712 / 1 hourn4-highcpu-323264 GiB$1.279424 / 1 hourn4-highcpu-484896 GiB$1.90803 / 1 hourn4-highcpu-6464128 GiB$2.558848 / 1 hourn4-highcpu-8080160 GiB$3.19856 / 1 hourCustom machine type: If your ideal machine shape is in between two predefined types, using a custom machine type could save you as much as 40%. For more information, see Custom vCPU and memory.If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.
N4 custom vCPUs and memory
Available in the same regions as the other N4 machine types listed above; hourly prices shown are for Iowa (us-central1).
Custom vCPUs: on-demand $0.0342069 per hour; resource-based commitment price premium $0.000733 per hour; 1-year flexible CUD consumption rate $0.02462897 per hour; 3-year flexible CUD consumption rate $0.01847173 per hour.
Custom memory: on-demand $0.0038871 per GiB-hour; resource-based commitment price premium $0.0000833 per GiB-hour; 1-year flexible CUD consumption rate $0.00279871 per GiB-hour; 3-year flexible CUD consumption rate $0.00209903 per GiB-hour.
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation. The CUD prices for custom machine types reflect a 5% premium over predefined shapes. The premium applies only for the duration and amount of custom machine types used and covered by the CUDs. For more details, refer to the documentation for resource-based committed use discounts.
N4 extended custom memory
For custom machine types, any memory up to and including 8 GiB of memory per vCPU is charged at the standard custom vCPU and memory pricing rate. Any memory above 8 GiB per vCPU is charged according to the following extended memory prices. To learn how to create instances with custom machine types and extended memory, see Adding extended memory to a machine type. The on-demand prices for custom machine types include a 5% premium over the on-demand prices for standard machine types.
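As a rough sanity check of the 5% premium described above, the N4 custom rates can be reconstructed from the predefined N4 rates listed earlier on this page, and a custom machine's on-demand hourly price is simply its vCPU count and memory size multiplied by those per-unit rates. The sketch below is only an illustration of that arithmetic using the Iowa (us-central1) figures, not an official pricing tool; the 6 vCPU / 24 GiB shape is a hypothetical example, and extended memory and discounts are ignored.

```python
# Illustrative arithmetic only (not an official pricing tool): reconstruct the N4
# custom machine rates from the predefined N4 rates shown earlier (Iowa, us-central1),
# then price an example custom shape. Extended memory above 8 GiB per vCPU and any
# discounts are deliberately ignored in this sketch.
N4_PREDEFINED_VCPU_HOUR = 0.032578  # USD per vCPU-hour, on demand (from this page)
N4_PREDEFINED_GIB_HOUR = 0.003702   # USD per GiB-hour, on demand (from this page)
CUSTOM_PREMIUM = 1.05               # custom machine types carry a 5% premium

custom_vcpu_rate = N4_PREDEFINED_VCPU_HOUR * CUSTOM_PREMIUM  # ~0.0342069, as listed
custom_gib_rate = N4_PREDEFINED_GIB_HOUR * CUSTOM_PREMIUM    # ~0.0038871, as listed

def n4_custom_hourly(vcpus: int, memory_gib: int) -> float:
    """On-demand hourly price of an N4 custom shape at Iowa rates, no discounts."""
    return vcpus * custom_vcpu_rate + memory_gib * custom_gib_rate

# Hypothetical example shape: 6 vCPUs with 24 GiB (4 GiB per vCPU, so no extended memory).
print(round(n4_custom_hourly(6, 24), 6))  # ~0.298532 USD per hour
```

Multiplying the predefined N4 rates by 1.05 reproduces the custom vCPU and memory rates in the table above, which is the relationship the 5% premium note describes.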
For an accurate estimate of your billing with custom machine types, use the Google Cloud Pricing Calculator.Iowa (us-central1)Tokyo (asia-northeast1)Seoul (asia-northeast3)Mumbai (asia-south1)Singapore (asia-southeast1)Sydney (australia-southeast1)Stockholm (europe-north2)Belgium (europe-west1)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Paris (europe-west9)Dammam (me-central2)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Salt Lake City (us-west3)HourlyHourlyMonthlyMonthlyItemPrice (USD)1 year resource-based commitment price (USD)3 year resource-based commitment price (USD)1-year flexible CUD consumption rate (USD)3-year flexible CUD consumption rate (USD)Extended custom memory$0.00874598 / 1 gibibyte hourUnavailableUnavailable$0.0062971 / 1 gibibyte hour$0.00472283 / 1 gibibyte hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices, which typically offer the largest discounts—up to 91% off of the corresponding on-demand price—are listed separately on the Spot VMs pricing page.C3 machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyItemOn-demand price (USD)1-year resource-based commitment price (USD)3-year resource-based commitment price (USD)1-year flexible CUD consumption rate (USD)3-year flexible CUD consumption rate (USD)Predefined vCPUs$0.03465 / 1 hour$0.02183 / 1 hour$0.015593 / 1 hour$0.024948 / 1 hour$0.018711 / 1 hourPredefined Memory$0.003938 / 1 gibibyte hour$0.002481 / 1 gibibyte hour$0.001772 / 1 gibibyte hour$0.00283536 / 1 hour$0.00212652 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.C3 standard machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c3-standard-4416 GiB$0.201608 / 1 hourc3-standard-8832 GiB$0.403216 / 1 hourc3-standard-222288 GiB$1.108844 / 1 hourc3-standard-4444176 GiB$2.217688 / 1 hourc3-standard-8888352 GiB$4.435376 / 1 hourc3-standard-176176704 GiB$8.870752 / 1 hourc3-standard-192-metal192768 GiB$9.677184 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C3 high-memory machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c3-highmem-4432 GiB$0.264616 / 1 hourc3-highmem-8864 GiB$0.529232 / 1 hourc3-highmem-2222176 GiB$1.455388 / 1 hourc3-highmem-4444352 GiB$2.910776 / 1 hourc3-highmem-8888704 GiB$5.821552 / 1 hourc3-highmem-1761761408 GiB$11.643104 / 1 hourc3-highmem-192-metal1921536 GiB$12.701568 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.C3 high-CPU machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c3-highcpu-448 GiB$0.170104 / 1 hourc3-highcpu-8816 GiB$0.340208 / 1 hourc3-highcpu-222244 GiB$0.935572 / 1 hourc3-highcpu-444488 GiB$1.871144 / 1 hourc3-highcpu-8888176 GiB$3.742288 / 1 hourc3-highcpu-176176352 GiB$7.484576 / 1 hourc3-highcpu-192-metal192512 GiB$8.669056 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C3D machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyItemOn-demand price (USD)1-year resource-based commitment price (USD)3-year resource-based commitment price (USD)1-year flexible CUD consumption rate (USD)3-year flexible CUD consumption rate (USD)Predefined vCPUs$0.029563 / 1 hour$0.013303 / 1 hour$0.018624 / 1 hour$0.02128536 / 1 hour$0.01596402 / 1 hourPredefined Memory$0.003959 / 1 gibibyte hour$0.002494 / 1 gibibyte hour$0.001781 / 1 gibibyte hour$0.00285048 / 1 hour$0.00213786 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand 
price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.C3D standard machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c3d-standard-4416 GiB$0.181596 / 1 hourc3d-standard-8816 GiB$0.363192 / 1 hourc3d-standard-161664 GiB$0.726384 / 1 hourc3d-standard-3030120 GiB$1.36197 / 1 hourc3d-standard-6060240 GiB$2.72394 / 1 hourc3d-standard-9090360 GiB$4.08591 / 1 hourc3d-standard-180180720 GiB$8.17182 / 1 hourc3d-standard-3603601440 GiB$16.34364 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.C3D high-memory machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c3d-highmem-4432 GiB$0.24494 / 1 hourc3d-highmem-8864 GiB$0.48988 / 1 hourc3d-highmem-1616128 GiB$0.97976 / 1 hourc3d-highmem-3030240 GiB$1.83705 / 1 hourc3d-highmem-6060480 GiB$3.6741 / 1 hourc3d-highmem-9090720 GiB$5.51115 / 1 hourc3d-highmem-1801801440 GiB$11.0223 / 1 hourc3d-highmem-3603602880 GiB$22.0446 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.C3D high-CPU machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)c3d-highcpu-448 GiB$0.149924 / 1 hourc3d-highcpu-8816 GiB$0.299848 / 1 hourc3d-highcpu-161632 GiB$0.599696 / 1 hourc3d-highcpu-303060 GiB$1.12443 / 1 hourc3d-highcpu-6060120 GiB$2.24886 / 1 hourc3d-highcpu-9090180 GiB$3.37329 / 1 hourc3d-highcpu-180180360 GiB$6.74658 / 1 hourc3d-highcpu-360360720 GiB$13.49316 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.E2 machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyItemOn-demand price1-year resource-based commitment price (USD)3-year resource-based commitment price (USD)1-year flexible CUD consumption rate (USD)3-year flexible CUD consumption rate (USD)Predefined vCPUs$0.02181159 / 1 hour$0.0137413 / 1 hour$0.00981522 / 1 hour$0.01570434 / 1 hour$0.01177826 / 1 hourPredefined Memory$0.00292353 / 1 gibibyte hour$0.00184182 / 1 gibibyte hour$0.00131559 / 1 gibibyte hour$0.00210494 / 1 hour$0.00157871 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.E2 standard machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)e2-standard-228 GiB$0.06701142 / 1 houre2-standard-4416 GiB$0.13402284 / 1 houre2-standard-8832 GiB$0.26804568 / 1 houre2-standard-161664 GiB$0.53609136 / 1 houre2-standard-3232128 GiB$1.07218272 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.E2 high-memory machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)e2-highmem-2216 GiB$0.09039966 / 1 houre2-highmem-4432 GiB$0.18079932 / 1 houre2-highmem-8864 GiB$0.36159864 / 1 houre2-highmem-1616128 GiB$0.72319728 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
For more information, see the Spot VMs documentation.E2 high-CPU machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyMachine typeVirtual CPUsMemoryPrice (USD)e2-highcpu-222 GiB$0.04947024 / 1 houre2-highcpu-444 GiB$0.09894048 / 1 houre2-highcpu-888 GiB$0.19788096 / 1 houre2-highcpu-161616 GiB$0.39576192 / 1 houre2-highcpu-323232 GiB$0.79152384 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.E2 custom vCPUs and memoryE2 custom and E2 shared-core custom machine types are subject to the same pricing rates. 
E2 shared-core custom machines have fractional vCPUs with a custom memory range.0.25 vCPU for micro machines0.50 vCPU for small machines1 vCPU for medium machinesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Stockholm (europe-north2)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Mexico (northamerica-south1)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyItemOn-demand price (USD)Resource-based commitment price premium (USD)1-year flexible CUD consumption rate (USD)3-year flexible CUD consumption rate (USD)Custom vCPUs$0.02290217 / 1 hour$0.00049076 / 1 hour$0.01648956 / 1 hour$0.01236717 / 1 hourCustom Memory$0.00306971 / 1 hour$0.00006578 / 1 gibibyte hour$0.00221019 / 1 hour$0.00165764 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. For more information, see the Spot VMs documentation.The CUDs prices for Custom Machine Types reflect a 5% premium over predefined shapes. The premium applies only for the duration and amount of CMTs used and covered by the CUDs. 
More details please refer to the documentation for Resource-based committed use discounts.N2 machine typesIowa (us-central1)Johannesburg (africa-south1)Taiwan (asia-east1)Hong Kong (asia-east2)Tokyo (asia-northeast1)Osaka (asia-northeast2)Seoul (asia-northeast3)Mumbai (asia-south1)Delhi (asia-south2)Singapore (asia-southeast1)Jakarta (asia-southeast2)Sydney (australia-southeast1)Melbourne (australia-southeast2)Warsaw (europe-central2)Finland (europe-north1)Madrid (europe-southwest1)Belgium (europe-west1)Berlin (europe-west10)Turin (europe-west12)London (europe-west2)Frankfurt (europe-west3)Netherlands (europe-west4)Zurich (europe-west6)Milan (europe-west8)Paris (europe-west9)Doha (me-central1)Dammam (me-central2)Tel Aviv (me-west1)Montreal (northamerica-northeast1)Toronto (northamerica-northeast2)Sao Paulo (southamerica-east1)Santiago (southamerica-west1)Iowa (us-central1)South Carolina (us-east1)Northern Virginia (us-east4)Columbus (us-east5)Dallas (us-south1)Oregon (us-west1)Los Angeles (us-west2)Salt Lake City (us-west3)Las Vegas (us-west4)Phoenix (us-west8)HourlyHourlyMonthlyMonthlyItemOn-demand price (USD)1 year resource-based commitment price (USD)3 year resource-based commitment price (USD)1-year flexible CUD consumption rate (USD)3-year flexible CUD consumption rate (USD)Predefined vCPUs$0.031611 / 1 hour$0.019915 / 1 hour$0.014225 / 1 hour$0.02275992 / 1 hour$0.01706994 / 1 hourPredefined Memory$0.004237 / 1 gibibyte hour$0.002669 / 1 gibibyte hour$0.001907 / 1 gibibyte hour$0.00305064 / 1 hour$0.00228798 / 1 hourIf you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for local SSDs and A3 machine types. 
N2 standard machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n2-standard-2 | 2 | 8 GiB | $0.097118
n2-standard-4 | 4 | 16 GiB | $0.194236
n2-standard-8 | 8 | 32 GiB | $0.388472
n2-standard-16 | 16 | 64 GiB | $0.776944
n2-standard-32 | 32 | 128 GiB | $1.553888
n2-standard-48 | 48 | 192 GiB | $2.330832
n2-standard-64 | 64 | 256 GiB | $3.107776
n2-standard-80 | 80 | 320 GiB | $3.88472
n2-standard-96 | 96 | 384 GiB | $4.661664
n2-standard-128 | 128 | 512 GiB | $6.215552

Custom machine type: If your ideal machine shape is between two predefined types, using a custom machine type could save you as much as 40%. For more information, see Custom vCPUs and memory.
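The 1-year and 3-year resource-based commitment rates from the N2 table further up can be applied to the same shapes. A short Python sketch comparing the committed hourly cost of an n2-standard-4 with its on-demand price; the percentages are derived here rather than taken from a table:

    # N2 per-resource rates from the table above: (USD per vCPU-hour, USD per GiB-hour).
    N2_RATES = {
        "on-demand": (0.031611, 0.004237),
        "1-year commitment": (0.019915, 0.002669),
        "3-year commitment": (0.014225, 0.001907),
    }

    def hourly(term, vcpus, memory_gib):
        vcpu_rate, gib_rate = N2_RATES[term]
        return vcpus * vcpu_rate + memory_gib * gib_rate

    on_demand = hourly("on-demand", 4, 16)              # n2-standard-4: $0.194236/hour
    for term in ("1-year commitment", "3-year commitment"):
        price = hourly(term, 4, 16)
        print(f"{term}: ${price:.6f}/hour, {1 - price / on_demand:.0%} below on-demand")
    # 1-year: ~$0.122364/hour (~37% below); 3-year: ~$0.087412/hour (~55% below)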
N2 high-memory machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n2-highmem-2 | 2 | 16 GiB | $0.131014
n2-highmem-4 | 4 | 32 GiB | $0.262028
n2-highmem-8 | 8 | 64 GiB | $0.524056
n2-highmem-16 | 16 | 128 GiB | $1.048112
n2-highmem-32 | 32 | 256 GiB | $2.096224
n2-highmem-48 | 48 | 384 GiB | $3.144336
n2-highmem-64 | 64 | 512 GiB | $4.192448
n2-highmem-80 | 80 | 640 GiB | $5.24056
n2-highmem-96 | 96 | 768 GiB | $6.288672
n2-highmem-128 | 128 | 864 GiB | $7.706976
N2 high-CPU machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n2-highcpu-2 | 2 | 2 GiB | $0.071696
n2-highcpu-4 | 4 | 4 GiB | $0.143392
n2-highcpu-8 | 8 | 8 GiB | $0.286784
n2-highcpu-16 | 16 | 16 GiB | $0.573568
n2-highcpu-32 | 32 | 32 GiB | $1.147136
n2-highcpu-48 | 48 | 48 GiB | $1.720704
n2-highcpu-64 | 64 | 64 GiB | $2.294272
n2-highcpu-80 | 80 | 80 GiB | $2.86784
n2-highcpu-96 | 96 | 96 GiB | $3.441408
N2 custom vCPUs and memory

Item | On-demand price (USD) | Resource-based commitment price premium (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Custom vCPUs | $0.03319155 per hour | $0.00071125 per hour | $0.02389792 per hour | $0.01792344 per hour
Custom memory | $0.00444885 per GiB-hour | $0.00009535 per GiB-hour | $0.01007917 per GiB-hour | $0.00755938 per GiB-hour
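A quick Python check of the 5% custom machine type premium noted earlier: the N2 custom per-vCPU and per-GiB rates above equal the predefined N2 rates multiplied by 1.05, and a custom shape is priced from them the same way. The 6 vCPU / 24 GiB shape below is hypothetical.

    # Predefined vs. custom N2 rates from the tables above (USD per vCPU-hour, USD per GiB-hour).
    N2_PREDEFINED = (0.031611, 0.004237)
    N2_CUSTOM     = (0.03319155, 0.00444885)

    # The custom rates are the predefined rates plus the 5% custom machine type premium.
    assert abs(N2_PREDEFINED[0] * 1.05 - N2_CUSTOM[0]) < 1e-9
    assert abs(N2_PREDEFINED[1] * 1.05 - N2_CUSTOM[1]) < 1e-9

    # Hypothetical N2 custom shape: 6 vCPUs, 24 GiB of memory (illustrative only).
    hourly = 6 * N2_CUSTOM[0] + 24 * N2_CUSTOM[1]
    print(f"${hourly:.7f}/hour on-demand")   # ~$0.3059217/hour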
N2 extended custom memory

Item | Price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Extended custom memory | $0.00955 per GiB-hour | $0.002669 per GiB-hour | $0.001907 per GiB-hour | $0.01007917 per GiB-hour | $0.00755938 per GiB-hour
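Extended custom memory is the memory configured above the series' standard per-vCPU limit, and that portion is billed at the extended rate instead of the regular custom memory rate. The Python sketch below treats the split as an input because the exact limit is series-specific and is not stated in these tables; the 96 GiB / 64 GiB split is purely illustrative.

    # Regular vs. extended custom memory rates for N2, from the tables above (USD per GiB-hour).
    N2_CUSTOM_GIB_HOUR   = 0.00444885
    N2_EXTENDED_GIB_HOUR = 0.00955

    def custom_memory_hourly(total_gib, standard_limit_gib):
        """Hourly memory cost when memory above the series' standard limit is billed as extended.
        The limit is series-specific and supplied by the caller; it is not taken from this page."""
        standard_gib = min(total_gib, standard_limit_gib)
        extended_gib = max(0.0, total_gib - standard_limit_gib)
        return standard_gib * N2_CUSTOM_GIB_HOUR + extended_gib * N2_EXTENDED_GIB_HOUR

    # Purely illustrative: 96 GiB of memory where 64 GiB is assumed to fall under the standard limit.
    print(round(custom_memory_hourly(96, 64), 7))   # 0.5903264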
N2D machine types

Item | On-demand price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Predefined vCPUs | $0.027502 per hour | $0.017326 per hour | $0.012376 per hour | $0.01980144 per hour | $0.01485108 per hour
Predefined memory | $0.003686 per GiB-hour | $0.002322 per GiB-hour | $0.001659 per GiB-hour | $0.00265392 per GiB-hour | $0.00199044 per GiB-hour
N2D standard machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n2d-standard-2 | 2 | 8 GiB | $0.084492
n2d-standard-4 | 4 | 16 GiB | $0.168984
n2d-standard-8 | 8 | 32 GiB | $0.337968
n2d-standard-16 | 16 | 64 GiB | $0.675936
n2d-standard-32 | 32 | 128 GiB | $1.351872
n2d-standard-48 | 48 | 192 GiB | $2.027808
n2d-standard-64 | 64 | 256 GiB | $2.703744
n2d-standard-80 | 80 | 320 GiB | $3.37968
n2d-standard-96 | 96 | 384 GiB | $4.055616
n2d-standard-128 | 128 | 512 GiB | $5.407488
n2d-standard-224 | 224 | 896 GiB | $9.463104

Custom machine type: If your ideal machine shape is between two predefined types, using a custom machine type could save you as much as 40%. For more information, see Custom vCPUs and memory.
N2D high-memory machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n2d-highmem-2 | 2 | 16 GiB | $0.11398
n2d-highmem-4 | 4 | 32 GiB | $0.22796
n2d-highmem-8 | 8 | 64 GiB | $0.45592
n2d-highmem-16 | 16 | 128 GiB | $0.91184
n2d-highmem-32 | 32 | 256 GiB | $1.82368
n2d-highmem-48 | 48 | 384 GiB | $2.73552
n2d-highmem-64 | 64 | 512 GiB | $3.64736
n2d-highmem-80 | 80 | 640 GiB | $4.5592
n2d-highmem-96 | 96 | 768 GiB | $5.47104
N2D high-CPU machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n2d-highcpu-2 | 2 | 2 GiB | $0.062376
n2d-highcpu-4 | 4 | 4 GiB | $0.124752
n2d-highcpu-8 | 8 | 8 GiB | $0.249504
n2d-highcpu-16 | 16 | 16 GiB | $0.499008
n2d-highcpu-32 | 32 | 32 GiB | $0.998016
n2d-highcpu-48 | 48 | 48 GiB | $1.497024
n2d-highcpu-64 | 64 | 64 GiB | $1.996032
n2d-highcpu-80 | 80 | 80 GiB | $2.49504
n2d-highcpu-96 | 96 | 96 GiB | $2.994048
n2d-highcpu-128 | 128 | 128 GiB | $3.992064
n2d-highcpu-224 | 224 | 224 GiB | $6.986112
N2D custom vCPUs and memory

Item | On-demand price (USD) | Resource-based commitment price premium (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Custom vCPUs | $0.0288771 per hour | $0.0006188 per hour | $0.02079151 per hour | $0.01559363 per hour
Custom memory | $0.0038703 per GiB-hour | $0.00008295 per GiB-hour | $0.0087691 per GiB-hour | $0.00657682 per GiB-hour
N2D extended custom memory

Item | Price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Extended custom memory | $0.008309 per GiB-hour | $0.002322 per GiB-hour | $0.001659 per GiB-hour | $0.0087691 per GiB-hour | $0.00657682 per GiB-hour
Tau T2D machine types

Item | On-demand price (USD) | 1-year commitment price (USD) | 3-year commitment price (USD)
Predefined vCPUs | $0.027502 per hour | $0.017326 per hour | $0.012376 per hour
Predefined memory | $0.003686 per GiB-hour | $0.002322 per GiB-hour | $0.001659 per GiB-hour

Tau T2D standard machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
t2d-standard-1 | 1 | 4 GiB | $0.042246
t2d-standard-2 | 2 | 8 GiB | $0.084492
t2d-standard-4 | 4 | 16 GiB | $0.168984
t2d-standard-8 | 8 | 32 GiB | $0.337968
t2d-standard-16 | 16 | 64 GiB | $0.675936
t2d-standard-32 | 32 | 128 GiB | $1.351872
t2d-standard-48 | 48 | 192 GiB | $2.027808
t2d-standard-60 | 60 | 240 GiB | $2.53476

Tau T2A machine types (priced in Iowa us-central1, Singapore asia-southeast1, Netherlands europe-west4, South Carolina us-east1, and Oregon us-west1)

Item | On-demand price (USD)
Predefined vCPUs | $0.0249 per hour
Predefined memory | $0.0034 per GiB-hour

Tau T2A standard machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
t2a-standard-1 | 1 | 4 GiB | $0.0385
t2a-standard-2 | 2 | 8 GiB | $0.077
t2a-standard-4 | 4 | 16 GiB | $0.154
t2a-standard-8 | 8 | 32 GiB | $0.308
t2a-standard-16 | 16 | 64 GiB | $0.616
t2a-standard-32 | 32 | 128 GiB | $1.232
t2a-standard-48 | 48 | 192 GiB | $1.848

N1 machine types

Item | On-demand price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Predefined vCPUs | $0.031611 per hour | $0.019915 per hour | $0.014225 per hour | $0.02275992 per hour | $0.01706994 per hour
Predefined memory | $0.004237 per GiB-hour | $0.002669 per GiB-hour | $0.001907 per GiB-hour | $0.00305064 per GiB-hour | $0.00228798 per GiB-hour
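For a rough monthly figure, the hourly rates can be multiplied out over a month of continuous use. The Python sketch below assumes roughly 730 hours per month, which is an approximation rather than a billing rule stated here, and uses a 1 vCPU / 3.75 GiB N1 shape as the example.

    # N1 on-demand rates from the table above.
    N1_VCPU_HOUR = 0.031611    # USD per vCPU-hour
    N1_GIB_HOUR  = 0.004237    # USD per GiB-hour
    HOURS_PER_MONTH = 730      # rough approximation of a month of continuous use

    hourly = 1 * N1_VCPU_HOUR + 3.75 * N1_GIB_HOUR   # the shape of an n1-standard-1
    print(f"${hourly:.8f}/hour, roughly ${hourly * HOURS_PER_MONTH:.2f}/month")
    # ~$0.04749975/hour, roughly $34.67/month before any discounts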
N1 standard machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n1-standard-1 | 1 | 3.75 GiB | $0.04749975
n1-standard-2 | 2 | 7.5 GiB | $0.0949995
n1-standard-4 | 4 | 15 GiB | $0.189999
n1-standard-8 | 8 | 30 GiB | $0.379998
n1-standard-16 | 16 | 60 GiB | $0.759996
n1-standard-32 | 32 | 120 GiB | $1.519992
n1-standard-64 | 64 | 240 GiB | $3.039984
n1-standard-96 (Skylake platform only) | 96 | 360 GiB | $4.559976

Custom machine type: If your ideal machine shape is between two predefined types, using a custom machine type could save you as much as 40%. Read more about custom machine types.

N1 high-memory machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n1-highmem-2 | 2 | 13 GiB | $0.118303
n1-highmem-4 | 4 | 26 GiB | $0.236606
n1-highmem-8 | 8 | 52 GiB | $0.473212
n1-highmem-16 | 16 | 104 GiB | $0.946424
n1-highmem-32 | 32 | 208 GiB | $1.892848
n1-highmem-64 | 64 | 416 GiB | $3.785696
n1-highmem-96 (Skylake platform only) | 96 | 624 GiB | $5.678544
N1 high-CPU machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
n1-highcpu-2 | 2 | 1.8 GiB | $0.0708486
n1-highcpu-4 | 4 | 3.60 GiB | $0.1416972
n1-highcpu-8 | 8 | 7.20 GiB | $0.2833944
n1-highcpu-16 | 16 | 14.40 GiB | $0.5667888
n1-highcpu-32 | 32 | 28.80 GiB | $1.1335776
n1-highcpu-64 | 64 | 57.6 GiB | $2.2671552
n1-highcpu-96 (Skylake platform only) | 96 | 86.4 GiB | $3.4007328
N1 custom vCPUs and memory

Item | On-demand price (USD) | Resource-based commitment price premium (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Custom vCPUs | $0.03319155 per hour | $0.00071125 per hour | $0.02389792 per hour | $0.01792344 per hour
Custom memory | $0.004446 per GiB-hour | $0.00009535 per GiB-hour | $0.01007712 per GiB-hour | $0.00755784 per GiB-hour
N1 extended custom memory

Item | Price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Extended custom memory | $0.00955 per GiB-hour | Unavailable | Unavailable | $0.01007712 per GiB-hour | $0.00755784 per GiB-hour

Compute-optimized machine type family

H3 machine types

H3 VMs are powered by 4th generation Intel Xeon Scalable processors (code-named Sapphire Rapids), DDR5 memory, and Google's custom Intel Infrastructure Processing Unit (IPU).
The following table describes the pricing per vCPU and GiB of memory for H3 machine types.

Item | On-demand price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD)
Compute-optimized cores | $0.04411 per hour | $0.03617 per hour | $0.02647 per hour
Compute-optimized memory | $0.00296 per GiB-hour | $0.00243 per GiB-hour | $0.00178 per GiB-hour

H3 Standard machine types

The following table shows the calculated cost for the h3-standard-88 machine type, which is the H3 predefined machine type.
The vCPUs and memory in this machine type are billed at the individual compute-optimized vCPU and memory prices, but the table below shows the total cost you can expect for the machine type.

Machine type | Cores | Memory | On-demand list price (USD per hour) | 1-year CUD list price (USD per hour) | 3-year CUD list price (USD per hour)
h3-standard-88 | 88 | 352 GiB | $4.9236 | $4.03832 | $2.95592

C2 machine types

Item | On-demand price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Compute-optimized vCPUs | $0.033982 per hour | $0.021409 per hour | $0.013593 per hour | $0.02446704 per hour | $0.01835028 per hour
Compute-optimized memory | $0.004555 per GiB-hour | $0.002869 per GiB-hour | $0.001822 per GiB-hour | $0.0032796 per GiB-hour | $0.0024597 per GiB-hour
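The same per-resource arithmetic applies to the compute-optimized families. A minimal Python sketch reproducing the c2-standard-4 on-demand price from the C2 rates above, plus a 1-year resource-based commitment figure for the same shape that is derived here rather than quoted from a table:

    # C2 per-resource rates from the table above.
    C2_ON_DEMAND = (0.033982, 0.004555)   # USD per vCPU-hour, USD per GiB-hour
    C2_COMMIT_1Y = (0.021409, 0.002869)   # 1-year resource-based commitment rates

    vcpus, memory_gib = 4, 16             # the shape of a c2-standard-4
    on_demand = vcpus * C2_ON_DEMAND[0] + memory_gib * C2_ON_DEMAND[1]
    committed = vcpus * C2_COMMIT_1Y[0] + memory_gib * C2_COMMIT_1Y[1]
    print(round(on_demand, 6))   # 0.208808 -- matches the c2-standard-4 price in the next table
    print(round(committed, 6))   # 0.13154  -- derived 1-year committed hourly cost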
C2 standard machine types

Machine type | vCPUs | Memory | On-demand price (USD per hour)
c2-standard-4 | 4 | 16 GiB | $0.208808
c2-standard-8 | 8 | 32 GiB | $0.417616
c2-standard-16 | 16 | 64 GiB | $0.835232
c2-standard-30 | 30 | 120 GiB | $1.56606
c2-standard-60 | 60 | 240 GiB | $3.13212

C2D machine types

Item | On-demand price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Compute-optimized vCPUs | $0.029563 per hour | $0.018624 per hour | $0.013303 per hour | $0.02128536 per hour | $0.01596402 per hour
Compute-optimized memory | $0.003959 per GiB-hour | $0.002494 per GiB-hour | $0.001781 per GiB-hour | $0.00285048 per GiB-hour | $0.00213786 per GiB-hour
C2D Standard machine types

Machine type | vCPUs | Memory | On-demand list price (USD per hour) | 1-year CUD list price (USD per hour) | 3-year CUD list price (USD per hour)
c2d-standard-2 | 2 | 8 GiB | $0.090798 | $0.0572 | $0.040854
c2d-standard-4 | 4 | 16 GiB | $0.181596 | $0.1144 | $0.081708
c2d-standard-8 | 8 | 32 GiB | $0.363192 | $0.2288 | $0.163416
c2d-standard-16 | 16 | 64 GiB | $0.726384 | $0.4576 | $0.326832
c2d-standard-32 | 32 | 128 GiB | $1.452768 | $0.9152 | $0.653664
c2d-standard-56 | 56 | 224 GiB | $2.542344 | $1.6016 | $1.143912
c2d-standard-112 | 112 | 448 GiB | $5.084688 | $3.2032 | $2.287824
C2D Highmem machine types

Machine type | vCPUs | Memory | On-demand list price (USD per hour) | 1-year CUD list price (USD per hour) | 3-year CUD list price (USD per hour)
c2d-highmem-2 | 2 | 16 GiB | $0.12247 | $0.077152 | $0.055102
c2d-highmem-4 | 4 | 32 GiB | $0.24494 | $0.154304 | $0.110204
c2d-highmem-8 | 8 | 64 GiB | $0.48988 | $0.308608 | $0.220408
c2d-highmem-16 | 16 | 128 GiB | $0.97976 | $0.617216 | $0.440816
c2d-highmem-32 | 32 | 256 GiB | $1.95952 | $1.234432 | $0.881632
c2d-highmem-56 | 56 | 448 GiB | $3.42916 | $2.160256 | $1.542856
c2d-highmem-112 | 112 | 896 GiB | $6.85832 | $4.320512 | $3.085712
C2D Highcpu machine types

Machine type | vCPUs | Memory | On-demand list price (USD per hour) | 1-year CUD list price (USD per hour) | 3-year CUD list price (USD per hour)
c2d-highcpu-2 | 2 | 4 GiB | $0.074962 | $0.047224 | $0.03373
c2d-highcpu-4 | 4 | 8 GiB | $0.149924 | $0.094448 | $0.06746
c2d-highcpu-8 | 8 | 16 GiB | $0.299848 | $0.188896 | $0.13492
c2d-highcpu-16 | 16 | 32 GiB | $0.599696 | $0.377792 | $0.26984
c2d-highcpu-32 | 32 | 64 GiB | $1.199392 | $0.755584 | $0.53968
c2d-highcpu-56 | 56 | 128 GiB | $2.098936 | $1.322272 | $0.94444
c2d-highcpu-112 | 112 | 224 GiB | $4.197872 | $2.644544 | $2.644544
Memory-optimized machine type family

M3 memory-optimized machine types

Machine type | vCPUs | Memory | Price (USD per hour) | 1-year commitment price (USD per hour) | 3-year commitment price (USD per hour)
m3-ultramem-32 | 32 | 976 GiB | $6.0912 | $3.60512 | $1.82768
m3-ultramem-64 | 64 | 1952 GiB | $12.1824 | $7.21024 | $3.65536
m3-ultramem-128 | 128 | 3904 GiB | $24.3648 | $14.42048 | $7.31072
m3-megamem-64 | 64 | 976 GiB | $7.2048 | $4.26272 | $2.16208
m3-megamem-128 | 128 | 1952 GiB | $14.4096 | $8.52544 | $4.32416

Committed use discounts apply to memory-optimized machine types only if you buy the commitment type specifically for M3 machine types.
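To put the M3 commitment prices above in context, a small Python sketch computing how much the 1-year and 3-year commitment prices for m3-ultramem-32 save relative to its on-demand price; the percentages are derived from the table values, not quoted from this page.

    # m3-ultramem-32 hourly prices from the M3 table above (USD).
    m3_ultramem_32 = {
        "on-demand": 6.0912,
        "1-year commitment": 3.60512,
        "3-year commitment": 1.82768,
    }

    on_demand = m3_ultramem_32["on-demand"]
    for term in ("1-year commitment", "3-year commitment"):
        discount = 1 - m3_ultramem_32[term] / on_demand
        print(f"{term}: {discount:.0%} off on-demand")
    # 1-year commitment: ~41% off; 3-year commitment: ~70% off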
M2 memory-optimized machine types

Machine type | vCPUs | Memory | Evaluative price (USD per hour) | 1-year commitment price (USD per hour) | 3-year commitment price (USD per hour)
m2-ultramem-208 | 208 | 5888 GiB | $42.111936 | $26.900896 | $16.026976
m2-ultramem-416 | 416 | 11776 GiB | $84.223872 | $53.801792 | $32.053952
m2-megamem-416 | 416 | 5888 GiB | $50.291328 | $32.116288 | $19.141568
m2-hypermem-416 | 416 | 8832 GiB | $67.2576 | $42.95904 | $25.59776
M1 memory-optimized machine types

Note: The prefix in the following machine names changed from "n1" to "m1" to more clearly identify the machines as members of the memory-optimized machine family:
n1-megamem-96 is now m1-megamem-96
n1-ultramem-40 is now m1-ultramem-40
n1-ultramem-80 is now m1-ultramem-80
n1-ultramem-160 is now m1-ultramem-160
The machines themselves did not change, and the former names are still supported as aliases for these machines.

Machine type | vCPUs | Memory | Price (USD per hour) | 1-year commitment price (USD per hour) | 3-year commitment price (USD per hour)
m1-ultramem-40 | 40 | 961 GiB | $6.2931 | $3.72422 | $1.88833
m1-ultramem-80 | 80 | 1922 GiB | $12.5862 | $7.44844 | $3.77666
m1-ultramem-160 | 160 | 3844 GiB | $25.1724 | $14.89688 | $7.55332
m1-megamem-96 | 96 | 1433.6 GiB | $10.65216 | $6.302272 | $3.196608

Committed use discounts apply to memory-optimized machine types only if you buy the commitment type specifically for memory-optimized machine types.
Storage-optimized machine type family
Z3 machine types
Prices shown are for the Iowa (us-central1) region.
Item | On-demand price (USD) | 1-year resource-based commitment price (USD) | 3-year resource-based commitment price (USD) | 1-year flexible CUD consumption rate (USD) | 3-year flexible CUD consumption rate (USD)
Storage-optimized vCPUs | $0.0496531 / 1 hour | $0.0312815 / 1 hour | $0.0223439 / 1 hour | $0.03575023 / 1 hour | $0.02681267 / 1 hour
Storage-optimized memory | $0.0066553 / 1 gibibyte hour | $0.0041928 / 1 gibibyte hour | $0.0029949 / 1 gibibyte hour | $0.00479182 / 1 gibibyte hour | $0.00359386 / 1 gibibyte hour
Storage-optimized Local SSD | $0.00010959 / 1 gibibyte hour | $0.00006904 / 1 gibibyte hour | $0.00004932 / 1 gibibyte hour | N/A | N/A
Z3 highmem machine types
Prices shown are for the Iowa (us-central1) region.
Machine type | Cores | Memory | On-demand list price | 1-year CUD list price | 3-year CUD list price
z3-highmem-88 | 88 | 704 GiB | $9.054804 / 1 hour | $5.7045032 / 1 hour | $4.0746728 / 1 hour
z3-highmem-176 | 176 | 1408 GiB | $18.109608 / 1 hour | $11.4090064 / 1 hour | $8.1493456 / 1 hour
Accelerator-optimized machine type family
A3 High machine types
The following table shows the total calculated cost that you can expect for predefined A3 High machine types.
This total cost includes the cost for the GPUs, vCPUs, memory, and Local SSD storage. Prices shown are for the Iowa (us-central1) region.
Machine type | GPUs | vCPUs | Memory | Local SSD | On-demand price (USD) | 1-year commitment price (USD) | 3-year commitment price (USD)
a3-highgpu-1g | 1 | 26 | 234 GiB | 750 GiB | $11.06125002 / 1 hour | $7.67295928 / 1 hour | $4.8580479 / 1 hour
a3-highgpu-2g | 2 | 52 | 468 GiB | 1500 GiB | $22.12250003 / 1 hour | $15.34591856 / 1 hour | $9.7160958 / 1 hour
a3-highgpu-4g | 4 | 104 | 936 GiB | 3000 GiB | $44.24500006 / 1 hour | $30.69183712 / 1 hour | $19.4321916 / 1 hour
a3-highgpu-8g | 8 | 208 | 1872 GiB | 6000 GiB | $88.49000012 / 1 hour | $61.38367423 / 1 hour | $38.8643832 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for Local SSDs and A3 machine types. For more information, see the Spot VMs documentation.
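Because the A3 rates above are hourly totals, a rough monthly figure can be obtained by multiplying by 730 hours, the convention used in the RHEL cost examples later on this page. A minimal sketch, with the a3-highgpu-8g on-demand rate taken from the table above:

```python
HOURS_PER_MONTH = 730  # convention used elsewhere on this page for monthly estimates

def monthly_estimate(hourly_usd: float, hours: int = HOURS_PER_MONTH) -> float:
    """Approximate monthly cost for a VM that runs continuously at the given hourly rate."""
    return hourly_usd * hours

print(f"a3-highgpu-8g on-demand, ~730 hours: ${monthly_estimate(88.49000012):,.2f} per month")
```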
A2 Ultra machine types (total cost)
The following table shows the total calculated cost that you can expect for these predefined A2 Ultra machine types. This total cost includes the cost for the GPUs, vCPUs, memory, and Local SSD storage. Prices shown are for the Iowa (us-central1) region.
Machine type | GPUs | vCPUs | Memory | Local SSD | On-demand price (USD)
a2-ultragpu-1g | 1 | 12 | 170 GiB | 375 GiB | $5.06879789 / 1 hour
a2-ultragpu-2g | 2 | 24 | 340 GiB | 750 GiB | $10.13759578 / 1 hour
a2-ultragpu-4g | 4 | 48 | 680 GiB | 1500 GiB | $20.27519156 / 1 hour
a2-ultragpu-8g | 8 | 96 | 1360 GiB | 3000 GiB | $40.55038312 / 1 hour
**For committed use discounts pricing on the A2 Ultra machine series, connect with your sales account team.
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for Local SSDs and A3 machine types. For more information, see the Spot VMs documentation.
A2 Standard machine types (total cost)
The following table shows the total calculated cost that you can expect for these predefined A2 machine types.
This total cost includes the cost for the GPUs, vCPUs, and memory. Prices shown are for the Iowa (us-central1) region.
Machine type | GPUs | vCPUs | Memory | On-demand price (USD) | 1-year commitment price (USD) | 3-year commitment price (USD)
a2-highgpu-1g | 1 | 12 | 85 GiB | $3.673385 / 1 hour | $2.31423255 / 1 hour | $1.28568475 / 1 hour
a2-highgpu-2g | 2 | 24 | 170 GiB | $7.34677 / 1 hour | $4.6284651 / 1 hour | $2.5713695 / 1 hour
a2-highgpu-4g | 4 | 48 | 340 GiB | $14.69354 / 1 hour | $9.2569302 / 1 hour | $5.142739 / 1 hour
a2-highgpu-8g | 8 | 96 | 680 GiB | $29.38708 / 1 hour | $18.5138604 / 1 hour | $10.285478 / 1 hour
a2-megagpu-16g | 16 | 96 | 1360 GiB | $55.739504 / 1 hour | $35.11588752 / 1 hour | $19.5088264 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for Local SSDs and A3 machine types. For more information, see the Spot VMs documentation.
G2 machine types (total cost)
The following table shows the total calculated cost that you can expect for these predefined G2 machine types. This total cost includes the cost for the GPUs, vCPUs, and default memory.
The on-demand prices for custom machine types include a 5% premium over the on-demand prices for standard machine types.
If you are specifying custom memory, use the Google Cloud Pricing Calculator to calculate the total cost. Prices shown are for the Iowa (us-central1) region.
Machine type | GPUs | vCPUs | Memory | On-demand price (USD) | 1-year commitment price (USD) | 3-year commitment price (USD)
g2-standard-4 | 1 | 4 | 16 GiB | $0.70683228 / 1 hour | $0.44530434 / 1 hour | $0.31807452 / 1 hour
g2-standard-8 | 1 | 8 | 32 GiB | $0.85362431 / 1 hour | $0.53778332 / 1 hour | $0.38413094 / 1 hour
g2-standard-12 | 1 | 12 | 48 GiB | $1.00041635 / 1 hour | $0.6302623 / 1 hour | $0.45018736 / 1 hour
g2-standard-16 | 1 | 16 | 64 GiB | $1.14720838 / 1 hour | $0.72274129 / 1 hour | $0.51624377 / 1 hour
g2-standard-24 | 2 | 24 | 96 GiB | $2.0008327 / 1 hour | $1.26052461 / 1 hour | $0.90037471 / 1 hour
g2-standard-32 | 1 | 32 | 128 GiB | $1.73437653 / 1 hour | $1.09265722 / 1 hour | $0.78046944 / 1 hour
g2-standard-48 | 4 | 48 | 192 GiB | $4.00166539 / 1 hour | $2.52104921 / 1 hour | $1.80074942 / 1 hour
g2-standard-96 | 8 | 96 | 384 GiB | $8.00333078 / 1 hour | $5.04209842 / 1 hour | $3.60149885 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for Local SSDs and A3 machine types. For more information, see the Spot VMs documentation.
Shared-core machine types
E2 shared-core machine types
E2 shared-core custom machine types are subject to the same pricing rates as E2 custom machines.
These instances have fractional vCPUs with a custom memory range. Prices shown are for the Iowa (us-central1) region.
Machine type | vCPUs | Memory | Price (USD) | 1-year commitment (USD) | 3-year commitment (USD)
e2-micro | 2 | 1 GiB | $0.00837643 / 1 hour | $0.00527715 / 1 hour | $0.00376939 / 1 hour
e2-small | 2 | 2 GiB | $0.01675286 / 1 hour | $0.0105543 / 1 hour | $0.00753879 / 1 hour
e2-medium | 2 | 4 GiB | $0.03350571 / 1 hour | $0.0211086 / 1 hour | $0.01507757 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for Local SSDs and A3 machine types. For more information, see the Spot VMs documentation.
N1 shared-core machine types
Prices shown are for the Iowa (us-central1) region.
Machine type | vCPUs | Memory | Price (USD)
f1-micro | 0.2 | 0.60 GiB | $0.0076 / 1 hour
g1-small | 0.5 | 1.70 GiB | $0.0257 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for Local SSDs and A3 machine types. For more information, see the Spot VMs documentation.
Tier_1 higher bandwidth network pricing
You can configure your N2, N2D, C2, C2D, C3, C3D, C4, C4A, Z3, and M3 machine types to use per-VM Tier_1 networking performance. The following machine series require a minimum number of vCPUs to use this feature (a small eligibility-check sketch follows the Z3 table later in this section):
- C2 and C3D: at least 30 vCPUs
- C2D, C4A, and N2: at least 32 vCPUs
- C3: at least 44 vCPUs
- C4 and N2D: at least 48 vCPUs
- M3: at least 64 vCPUs
Pricing also depends on the region and zone where the VM is located.
N2, N2D, C2, C2D, M3
Prices shown are for the Iowa (us-central1) region.
Item | Price (USD)
50 Gbps (N2, N2D, C2, C2D, M3) | $0.44895 / 1 hour
75 Gbps (N2 only) | $0.673425 / 1 hour
100 Gbps (N2, N2D, C2, C2D, M3) | $0.8979 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
*Spot prices are dynamic and can change up to once every 30 days. For more information about Spot VM pricing, see the Spot VMs documentation.
C3
The cost of using Tier_1 networking with C3 is described in the following table. Prices shown are for the Iowa (us-central1) region.
Item | Price (USD)
50 Gbps | $0.18 / 1 hour
100 Gbps | $0.38 / 1 hour
200 Gbps | $1 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
*Spot prices are dynamic and can change up to once every 30 days. For more information about Spot VM pricing, see the Spot VMs documentation.
C3D
The cost of using Tier_1 networking with C3D is described in the following table. Prices shown are for the Iowa (us-central1) region.
Item | Price (USD)
50 Gbps | $0.3 / 1 hour
75 Gbps | $0.35 / 1 hour
100 Gbps | $0.4 / 1 hour
150 Gbps | $0.5 / 1 hour
200 Gbps | $1 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
*Spot prices are dynamic and can change up to once every 30 days. For more information about Spot VM pricing, see the Spot VMs documentation.
C4
The cost of using Tier_1 networking with C4 is described in the following table. Prices shown are for the Iowa (us-central1) region.
Item | Price (USD)
50 Gbps | $0.16 / 1 hour
100 Gbps | $0.33 / 1 hour
200 Gbps | $1 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
*Spot prices are dynamic and can change up to once every 30 days. For more information about Spot VM pricing, see the Spot VMs documentation.
C4A
The cost of using Tier_1 networking with C4A is described in the following table. Prices shown are for the Iowa (us-central1) region.
Item | Price (USD)
50 Gbps (32 vCPUs) | $0.27 / 1 hour
50 Gbps (48 vCPUs) | $0.16 / 1 hour
75 Gbps | $0.3 / 1 hour
100 Gbps | $0.5 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
*Spot prices are dynamic and can change up to once every 30 days. For more information about Spot VM pricing, see the Spot VMs documentation.
Z3
The cost of using Tier_1 networking with Z3 is described in the following table. Prices shown are for the Iowa (us-central1) region.
Item | Price (USD)
50 Gbps | $0.18 / 1 hour
100 Gbps | $0.38 / 1 hour
200 Gbps | $1 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
*Spot prices are dynamic and can change up to once every 30 days. For more information about Spot VM pricing, see the Spot VMs documentation.
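The per-series vCPU minimums listed at the start of this Tier_1 section can be expressed as a small lookup. The sketch below is illustrative only: it covers just the series with a stated minimum, assumes simple machine type names such as n2-standard-32, and is not an official API.

```python
# Minimum vCPU counts listed above for per-VM Tier_1 networking, by machine series.
TIER1_MIN_VCPUS = {
    "c2": 30, "c3d": 30,
    "c2d": 32, "c4a": 32, "n2": 32,
    "c3": 44,
    "c4": 48, "n2d": 48,
    "m3": 64,
}

def tier1_vcpu_minimum_met(machine_type: str) -> bool:
    """Check a name like 'n2-standard-32' against the listed minimums (sketch only)."""
    series, _, vcpus = machine_type.split("-")
    return series in TIER1_MIN_VCPUS and int(vcpus) >= TIER1_MIN_VCPUS[series]

print(tier1_vcpu_minimum_met("n2-standard-32"))  # True: the N2 minimum is 32 vCPUs
print(tier1_vcpu_minimum_met("c3-highmem-22"))   # False: the C3 minimum is 44 vCPUs
```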
Simulated maintenance event pricing
Starting February 10, 2020, there is no cost to run simulated maintenance events.
Note: Normal 1-minute-minimum usage charges for machine types and premium images still apply to instances that you stop, and to Spot VMs (or preemptible VMs), during a simulated maintenance event. See the machine type billing model and premium image pricing for details.
Prior to this date, the following charges applied:
- Simulated maintenance on instances configured for live migration incurred costs for each of the following instance resources:
  - Price per vCPU on the instance, where f1-micro and g1-small each count as 1 vCPU: $0.040
  - Price per GB of memory: $0.010
  - Price per GB of Local SSD space: $0.001
- Simulated maintenance on Spot VMs (and preemptible VMs): free
- Simulated maintenance on instances configured to stop and restart: free
Suspended VM instances
When you suspend an instance, Compute Engine preserves its memory and device state. While you are not charged for the VM instance as if it were running, suspended instances still incur charges for the following:
- Suspended instance memory and device state.
- Suspended Local SSD data and metadata.
- Any Persistent Disk usage.
- Any static IPs attached to the VM instance.
Note: Suspended instance and Local SSD state are stored in Persistent Disk volumes and as such are subject to Persistent Disk pricing. All preserved-state charges in this section are prorated at per-second granularity. For example, if you suspend 1 GB of state for half the month, you are charged for only half of the month.
Prices shown are for the Johannesburg (africa-south1) region.
Type | Price (USD)
Instance memory and device state | $0.00023288 / 1 gibibyte hour
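The proration described above can be illustrated with the preserved-state rate from the table ($0.00023288 per GiB per hour); the figures and helper below are a sketch, not a billing tool.

```python
SUSPENDED_STATE_RATE = 0.00023288  # USD per GiB per hour, from the table above

def suspended_state_cost(gib: float, hours: float) -> float:
    """Prorated charge for preserved memory and device state while a VM is suspended."""
    return SUSPENDED_STATE_RATE * gib * hours

# Example: 32 GiB of preserved state, suspended for half of a 730-hour month.
print(f"${suspended_state_cost(32, 730 / 2):.2f}")
```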
Sole-tenant node pricing
Prices shown are for the Iowa (us-central1) region. All prices are per hour (USD).
Machine type | On-demand | 1-year resource-based commitment | 3-year resource-based commitment | 1-year flexible CUD consumption rate | 3-year flexible CUD consumption rate
c2-node-60-240 | $3.445332 | $2.286312 | $1.566072 | $2.48063904 | $1.86047928
c3-node-176-352 | $8.2330336 | $5.4638496 | $4.1165696 | Unavailable | Unavailable
c3-node-176-704 | $9.7578272 | $6.4757792 | $4.8789312 | Unavailable | Unavailable
c3-node-176-1408 | $12.8074144 | $8.4996384 | $6.4036544 | Unavailable | Unavailable
c4-node-192-384 | $8.9814912 | $5.9605632 | $4.4908032 | Unavailable | Unavailable
c4-node-192-720 | $10.436976 | $6.926496 | $5.218512 | Unavailable | Unavailable
c4-node-192-1488 | $13.7637984 | $9.1343424 | $6.8818464 | Unavailable | Unavailable
c4a-node-72-144 | $3.000096 | $2.072016 | $1.500336 | $2.16006912 | $1.62005184
c4a-node-72-288 | $3.55608 | $2.4552 | $1.7784 | $2.5603776 | $1.9202832
c4a-node-72-576 | $4.668048 | $3.221568 | $2.334528 | $3.36099456 | $2.52074592
g2-node-96-384 | $8.35563167 | $5.39439931 | $3.95379974 | Unavailable | Unavailable
g2-node-96-432 | $8.51020098 | $5.49697713 | $4.03108439 | Unavailable | Unavailable
m1-node-96-1433 | $11.71401 | $7.36537 | $4.2606 | Unavailable | Unavailable
m1-node-160-3844 | $27.68964 | $17.41412 | $10.07056 | Unavailable | Unavailable
m2-node-416-11776 | Unavailable | $51.56576 | $9.77696 | Unavailable | Unavailable
m2-node-416-8832 | Unavailable | $41.17344 | $23.81216 | Unavailable | Unavailable
m3-node-128-1952 | $15.85056 | $9.9664 | $5.76512 | Unavailable | Unavailable
m3-node-128-3904 | $26.80128 | $16.85696 | $9.7472 | Unavailable | Unavailable
n1-node-96-624 | $6.2463984 | $4.1451504 | $3.1234224 | $4.49740685 | $3.37305514
n2-node-80-640 | $5.764616 | $3.825416 | $2.882536 | $4.15052352 | $3.11289264
n2-node-128-864 | $8.4776736 | $5.6258336 | $4.2391456 | $6.10392499 | $4.57794374
n2d-node-224-896 | $10.4094144 | $6.9078464 | $5.2049984 | $7.49477837 | $5.62108378
n2d-node-224-1792 | $14.042336 | $9.318624 | $7.021728 | Unavailable | Unavailable
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Premium images
Red Hat Enterprise Linux (RHEL) and RHEL for SAP images
Price model update starting July 1, 2024: Red Hat announced a price model update for RHEL and RHEL for SAP that applies to all cloud providers, including Google Cloud.
Starting July 1, 2024, RHEL, RHEL for SAP, and the new RHEL 7 ELS license are billed on a core-hour model.
Pricing before July 1, 2024
RHEL images:
- $0.06 USD/hour for instances with 4 or fewer vCPUs
- $0.13 USD/hour for instances with more than 4 vCPUs
RHEL 6 ELS images:
- $0.02 USD/hour for instances with 4 or fewer vCPUs
- $0.05 USD/hour for instances with more than 4 vCPUs
RHEL for SAP with HA and Update Services images:
- $0.10 USD/hour for instances with 4 or fewer vCPUs
- $0.225 USD/hour for instances with more than 4 vCPUs
New pricing starting July 1, 2024
RHEL:
- $0.0144 USD per core, per hour for instances with 8 or fewer vCPUs
- $0.0108 USD per core, per hour for instances with 9 to 127 vCPUs
- $0.0096 USD per core, per hour for instances with 128 vCPUs or more
RHEL for SAP:
- $0.0225 USD per core, per hour for instances with 1 to 8 vCPUs
- $0.01625 USD per core, per hour for instances with 9 to 127 vCPUs
- $0.01500 USD per core, per hour for instances with 128 vCPUs or more
RHEL 7 ELS add-on:
- $0.0084 USD per core, per hour for instances with 1 to 8 vCPUs
- $0.0060 USD per core, per hour for instances with 9 to 127 vCPUs
- $0.0050 USD per core, per hour for instances with 128 vCPUs or more
Calculating costs
The following examples show how software costs are calculated under the new pricing model.
Red Hat Enterprise Linux software cost calculation
Instance size | [Current] instance-hour price model | [New] core-hour pricing model (effective July 1, 2024)
2 vCPUs | $0.06/hour * 730 hours/month = $43.80/month | $0.0144/hour * 730 hours/month * 2 cores = $21.02/month
4 vCPUs | $0.06/hour * 730 hours/month = $43.80/month | $0.0144/hour * 730 hours/month * 4 cores = $42.05/month
16 vCPUs | $0.13/hour * 730 hours/month = $94.90/month | $0.0108/hour * 730 hours/month * 16 cores = $126.14/month
128 vCPUs | $0.13/hour * 730 hours/month = $94.90/month | $0.0096/hour * 730 hours/month * 128 cores = $897.02/month
Red Hat Enterprise Linux for SAP software cost calculation
Instance size | [Current] instance-hour price model | [New] core-hour pricing model (effective July 1, 2024)
2 vCPUs | $0.100/hour * 730 hours/month = $73.00/month | $0.02250/hour * 730 hours/month * 2 cores = $32.85/month
4 vCPUs | $0.100/hour * 730 hours/month = $73.00/month | $0.02250/hour * 730 hours/month * 4 cores = $65.70/month
16 vCPUs | $0.225/hour * 730 hours/month = $164.25/month | $0.01625/hour * 730 hours/month * 16 cores = $189.80/month
128 vCPUs | $0.225/hour * 730 hours/month = $164.25/month | $0.01500/hour * 730 hours/month * 128 cores = $1,401.60/month
With this price update, RHEL and RHEL for SAP subscription costs scale linearly with machine size. Instances with 12 vCPUs or fewer are expected to see equivalent or reduced costs; instances with more than 12 vCPUs are expected to see increased costs. Use the tables below to estimate how this change will affect your RHEL and RHEL for SAP costs.
Red Hat Enterprise Linux
Instance vCPU count | Price per core
1, 2, 4, or 8 | $0.0144 / 1 hour
12, 16, 32, or 64 | $0.0108 / 1 hour
128, 256, or 512 | $0.0096 / 1 hour
Red Hat Enterprise Linux for SAP
Instance vCPU count | Price per core
1, 2, 4, or 8 | $0.0225 / 1 hour
12, 16, 32, or 64 | $0.01625 / 1 hour
128, 256, or 512 | $0.015 / 1 hour
Prices in these tables are estimates of the impact of the RHEL pricing update only.
All RHEL and RHEL for SAP images are charged a 1 minute minimum. After 1 minute, RHEL images are charged in 1 second increments. If you have concerns over RHEL and RHEL for SAP software costs, contact your Google Cloud account representative.
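The comparison tables above can be reproduced with a short script. This sketch assumes the rates and the 730 hours/month convention listed above; the helper names are illustrative.

```python
HOURS = 730  # hours/month convention used in the tables above

def rhel_old_monthly(vcpus: int) -> float:
    """Pre-July-2024 instance-hour model: $0.06/hour up to 4 vCPUs, $0.13/hour above."""
    return (0.06 if vcpus <= 4 else 0.13) * HOURS

def rhel_new_monthly(vcpus: int) -> float:
    """Core-hour model: tiered per-core rate multiplied by the vCPU count."""
    rate = 0.0144 if vcpus <= 8 else 0.0108 if vcpus <= 127 else 0.0096
    return rate * vcpus * HOURS

for vcpus in (2, 4, 16, 128):
    print(f"{vcpus:>3} vCPUs: old ${rhel_old_monthly(vcpus):8.2f}  "
          f"new ${rhel_new_monthly(vcpus):8.2f} per month")
```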
SLES and SLES for SAP images
SLES images:
- $0.02 USD/hour for f1-micro and g1-small machine types
- $0.11 USD/hour for all other machine types
SLES for SAP images:
- $0.17 USD/hour for instances with 1-2 vCPUs
- $0.34 USD/hour for instances with 3-4 vCPUs
- $0.41 USD/hour for instances with 5 or more vCPUs
All SLES images are charged a 1 minute minimum. After 1 minute, SLES images are charged in 1 second increments.
You can purchase commitments and receive committed use discounts (CUDs) for your SUSE Linux Enterprise Server (SLES) and SLES for SAP licenses. Because you commit to a minimum level of resource usage when you purchase your license commitment, you have to pay the agreed-upon prices for the duration of that commitment, even if your resource usage is lower than that minimum level. As a result, to maximize the benefit of your CUDs, ensure that you use all of your committed licenses and run VMs with those licenses for the full term of your commitment. To learn more about CUDs for licenses and how to purchase a commitment, see Purchase commitments for licenses.
Ubuntu Pro images
The following sections outline the license cost for using Ubuntu Pro images on Compute Engine.
Memory
The license cost for memory is charged at a flat rate of $0.000127 per GB per hour in USD.
vCPU
The license cost for vCPUs varies by the number of vCPUs that the Ubuntu Pro VM has. The following table summarizes the license cost per hour in USD.
Number of vCPUs | License cost (USD)/hour
1 | $0.00166 / 1 hour
2 | $0.002971 / 1 hour
4 | $0.005545 / 1 hour
6-8 | $0.00997 / 1 hour
10-16 | $0.018063 / 1 hour
18-48 | $0.033378 / 1 hour
50-78 | $0.060548 / 1 hour
80-96 | $0.077871 / 1 hour
98-222 | $0.102401 / 1 hour
>222 | $0.122063 / 1 hour
Example
For example, if your Ubuntu Pro VM has 64 GB RAM and 16 vCPUs, the license cost is calculated as follows:
Hourly license cost per VM = (0.000127 * 64) + 0.018063 = $0.026191
Monthly license cost (31-day month) per VM = 0.026191 * 744 = $19.486104
Ubuntu Pro GPU license
When running VMs that use Ubuntu Pro images with attached GPUs, you incur a license cost for the premium image and a GPU license, in addition to the regular cost of running the VM and the cost of the attached GPU. The following table summarizes the per-GPU license rates in USD for Ubuntu Pro VMs. The license fee varies based on the number of GPUs attached to the VM but is the same for all GPU models that are available on Compute Engine.
Number of GPUs | License cost (USD)
1 | $0.035 / 1 hour
2 | $0.066 / 1 hour
4 | $0.12 / 1 hour
8 | $0.208 / 1 hour
>8 | $0.3 / 1 hour
Windows Server images
If you use a Compute Engine Windows Server image, or you import a Windows Server image without bringing your own licenses, you are billed based on the machine types you use as follows:
- f1-micro and g1-small machine types: $0.023 USD/hour.
- All other machine types: $0.046 USD/hour per visible vCPU. For example, n2-highcpu-4 and n2-highmem-4 each have 4 vCPUs, so they are charged at $0.184 USD/hour (4 x $0.046 USD/hour).
Windows Server images are charged a 1 minute minimum. After 1 minute, Windows images are charged in 1 second increments. For information about licensing for Windows Server images, see Microsoft licenses.
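The Windows Server billing rule above (a flat rate for f1-micro and g1-small, otherwise $0.046 per visible vCPU per hour) can be sketched as follows; the function is illustrative only.

```python
def windows_license_hourly(machine_type: str, vcpus: int) -> float:
    """Hourly Windows Server license cost per the rates listed above (sketch only)."""
    if machine_type in ("f1-micro", "g1-small"):
        return 0.023
    return 0.046 * vcpus

# Reproduces the example above: a 4-vCPU machine such as n2-highmem-4.
print(f"${windows_license_hourly('n2-highmem-4', 4):.3f} per hour")  # $0.184 per hour
```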
SQL Server images
SQL Server images incur costs in addition to the cost for Windows Server images and the cost for the selected machine type:
- $0.399 USD per core/hour for SQL Server Enterprise
- $0.1200 USD per core/hour for SQL Server Standard
- $0.011 USD per core/hour for SQL Server Web
- No additional charge for SQL Server Express
Microsoft SQL Server licensing requires a core license to be assigned to each virtual CPU on your virtual machine instance, with a four-core minimum for each instance. Instances with fewer than 4 vCPUs are charged for 4 vCPUs to comply with these requirements.
Google recommends that you not use SQL Server images on f1-micro or g1-small machine types, based on Microsoft's minimum hardware and software recommendations.
SQL Server images are charged a 1 minute minimum. After 1 minute, SQL Server images are charged in 1 second increments.
For information about licensing for SQL Server OS images, see Microsoft licenses.
Disk pricing
The information included here doesn't cover all Persistent Disk features; basic disk pricing is included for convenience and reference. For detailed pricing information and explanations of how disk pricing works, see the Disk and image pricing page.
Persistent disk space pricing
Prices shown are for the Iowa (us-central1) region.
Item | Price (USD)
Standard provisioned space | $0.0000548 / 1 gibibyte hour
SSD provisioned space | $0.00023288 / 1 gibibyte hour
Balanced provisioned space | $0.00013699 / 1 gibibyte hour
Extreme provisioned space | $0.00017123 / 1 gibibyte hour
Extreme provisioned IOPS | $0.00008904 / 1 hour
Regional standard provisioned space | $0.00010959 / 1 gibibyte hour
Regional SSD provisioned space | $0.00046575 / 1 gibibyte hour
Regional balanced provisioned space | $0.00027397 / 1 gibibyte hour
Hyperdisk Extreme provisioned space | $0.00017123 / 1 gibibyte hour
Hyperdisk Extreme provisioned IOPS | $0.00004384 / 1 hour
Hyperdisk Throughput provisioned space | $0.00000685 / 1 gibibyte hour
Hyperdisk Throughput provisioned throughput | $0.00034247 / 1 hour
Hyperdisk Balanced provisioned space | $0.00010959 / 1 gibibyte hour
Hyperdisk Balanced provisioned IOPS | $0.00000685 / 1 hour
Hyperdisk Balanced provisioned throughput | $0.0000548 / 1 hour
Hyperdisk Balanced High Availability provisioned space | $0.00021918 / 1 gibibyte hour
Hyperdisk Balanced High Availability provisioned IOPS | $0.0000137 / 1 hour
Hyperdisk Balanced High Availability provisioned throughput | $0.00010959 / 1 hour
Hyperdisk Storage Pool Throughput provisioned space standard | $0.00000685 / 1 gibibyte hour
Hyperdisk Storage Pool Throughput provisioned space advanced | $0.00001233 / 1 gibibyte hour
Hyperdisk Storage Pool Throughput provisioned throughput standard | $0.00034247 / 1 hour
Hyperdisk Storage Pool Throughput provisioned throughput advanced | $0.00065068 / 1 hour
Hyperdisk Storage Pool Balanced provisioned space standard | $0.00010959 / 1 gibibyte hour
Hyperdisk Storage Pool Balanced provisioned space advanced | $0.00019178 / 1 gibibyte hour
Hyperdisk Storage Pool Balanced provisioned IOPS standard | $0.00000685 / 1 hour
Hyperdisk Storage Pool Balanced provisioned IOPS advanced | $0.00001233 / 1 hour
Hyperdisk Storage Pool Balanced provisioned throughput standard | $0.0000548 / 1 hour
Hyperdisk Storage Pool Balanced provisioned throughput advanced | $0.00010411 / 1 hour
Hyperdisk ML provisioned space | $0.00010959 / 1 gibibyte hour
Hyperdisk ML provisioned throughput | $0.00016438 / 1 hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
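Because the disk rates above are expressed per gibibyte hour, an approximate per-GiB monthly price falls out of multiplying by 730 hours. A minimal sketch using three of the rates from the table, for illustration only:

```python
HOURS = 730  # approximate hours in a month
RATES_PER_GIB_HOUR = {  # USD per GiB per hour, from the table above
    "Standard provisioned space": 0.0000548,
    "SSD provisioned space": 0.00023288,
    "Hyperdisk Balanced provisioned space": 0.00010959,
}

for item, rate in RATES_PER_GIB_HOUR.items():
    print(f"{item}: about ${rate * HOURS:.3f} per GiB per month")
```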
Local SSD pricing
Prices shown are for the Iowa (us-central1) region.
Type | Price (USD) | 1-year commitment price (USD) | 3-year commitment price (USD)
Local SSD provisioned space | $0.00010959 / 1 gibibyte hour | $0.00006904 / 1 gibibyte hour | $0.00004932 / 1 gibibyte hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Spot prices, which typically offer the largest discounts (up to 91% off of the corresponding on-demand price), are listed separately on the Spot VMs pricing page.
Custom image storage pricing
If you import or create custom images in Compute Engine, these images incur a storage cost.
The cost of these custom images depends on the location where you store the image. Prices shown are for the Iowa (us-central1) region.
Type | Price (USD)
Image storage | $0.00006849 / 1 gibibyte hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Machine image pricing
Prices shown are for the Iowa (us-central1) region.
Type | Price (USD)
Machine image | $0.00006849 / 1 gibibyte hour
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
GPU pricing
Prices shown are for the Iowa (us-central1) region.
Model | GPUs | GPU memory | GPU price (USD) | 1-year commitment price** (USD) | 3-year commitment price** (USD)
NVIDIA T4 | 1, 2, or 4 GPUs | 16, 32, or 64 GB GDDR6 | $0.35 / 1 hour | $0.22 / 1 hour | $0.16 / 1 hour
NVIDIA P4 | 1, 2, or 4 GPUs | 8, 16, or 32 GB GDDR5 | $0.6 / 1 hour | $0.378 / 1 hour | $0.27 / 1 hour
NVIDIA V100 | 1, 2, 4, or 8 GPUs | 16, 32, 64, or 128 GB HBM2 | $2.48 / 1 hour | $1.562 / 1 hour | $1.116 / 1 hour
NVIDIA P100 | 1, 2, or 4 GPUs | 16, 32, or 64 GB HBM2 | $1.46 / 1 hour | $0.919 / 1 hour | $0.657 / 1 hour
**For committed use discounts pricing on the A2 Ultra machine series, connect with your sales account team.
Spot prices are dynamic and can change up to once every 30 days, but provide discounts of 60-91% off of the corresponding on-demand price for most machine types and GPUs. Spot prices also provide smaller discounts for Local SSDs and A3 machine types. For more information, see the Spot VMs documentation.
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
NVIDIA RTX virtual workstations (formerly known as NVIDIA GRID)
Prices shown are for the Iowa (us-central1) region.
Model | GPUs | GPU memory | GPU price (USD) | 1-year commitment price** (USD) | 3-year commitment price** (USD)
NVIDIA T4 Virtual Workstation | 1, 2, or 4 GPUs | 16, 32, or 64 GB GDDR6 | $0.55 / 1 hour | $0.42 / 1 hour | $0.36 / 1 hour
NVIDIA P4 Virtual Workstation | 1, 2, or 4 GPUs | 8, 16, or 32 GB GDDR5 | $0.8 / 1 hour | $0.578 / 1 hour | $0.47 / 1 hour
NVIDIA P100 Virtual Workstation | 1, 2, or 4 GPUs | 16, 32, or 64 GB HBM2 | $1.66 / 1 hour | $1.119 / 1 hour | $0.857 / 1 hour
**For committed use discounts pricing on the A2 Ultra machine series, connect with your sales account team.
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Spot prices, which typically offer the largest discounts (up to 91% off of the corresponding on-demand price), are listed separately on the Spot VMs pricing page.
What's next
- Read the Compute Engine documentation.
- Get started with Compute Engine.
- Try the Pricing calculator.
- Learn about Compute Engine solutions and use cases.
Request a custom quote
With Google Cloud's pay-as-you-go pricing, you only pay for the services you use. Connect with our sales team to get a custom quote for your organization. Contact sales
\ No newline at end of file
diff --git a/Compute_Engine.txt b/Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..f8d4b54dde848b254c5f8797b1f04ff5e66a650d --- /dev/null +++ b/Compute_Engine.txt @@ -0,0 +1,5 @@
+URL: https://cloud.google.com/products/compute
+Date Scraped: 2025-02-23T12:01:23.712Z
+
+Content:
Compute Engine: Virtual machines for any workload. Easily create and run online VMs on high-performance, reliable cloud infrastructure.
Choose from preset or custom machine types for web servers, databases, AI, and more. Get one e2-micro VM instance, up to 30 GB of storage, and up to 1 GB of outbound data transfers free per month. Try it in console | Contact sales
Product highlights
- How to choose and deploy your first VM
- Encrypt sensitive data with Confidential VMs
- Customize your VMs for different compute, memory, performance, and cost requirements
How to choose the right VM: 5-minute video explainer
Features
Preset and custom configurations
Deploy an application in minutes with prebuilt samples called Jump Start Solutions. Create a dynamic website, load-balanced VM, three-tier web app, or ecommerce web app. Choose from predefined machine types, sizes, and configurations for any workload, from large enterprise applications to modern workloads (like containers) or AI/ML projects that require GPUs and TPUs. For more flexibility, create a custom machine type between 1 and 96 vCPUs with up to 8.0 GB of memory per core, and use one of many block storage options, from flexible Persistent Disk to high-performance, low-latency Local SSD.
Industry-leading reliability
Compute Engine offers the best single-instance compute availability SLA of any cloud provider: 99.95% availability for memory-optimized VMs and 99.9% for all other VM families. Is downtime keeping you up at night? Maintain workload continuity during planned and unplanned events with live migration. When a VM goes down, Compute Engine performs a live migration to another host in the same zone.
Automations and recommendations for resource efficiency
Automatically add VMs to handle peak load and replace underperforming instances with managed instance groups. Manually adjust your resources using historical data with rightsizing recommendations, or guarantee capacity for planned demand spikes with future reservations. All of our latest compute instances (including C4A, C4, N4, C3D, X4, and Z3) run on Titanium, a system of purpose-built microcontrollers and tiered scale-out offloads that improves your infrastructure performance, life cycle management, and security.
Transparent pricing and discounting
Review detailed pricing guidance for any VM type or configuration, or use our pricing calculator to get a personalized estimate. To save on batch jobs and fault-tolerant workloads, use Spot VMs to reduce your bill by 60-91%. Receive automatic discounts for sustained use, or up to 70% off when you sign up for committed use discounts.
Security controls and configurations
Encrypt data in use and while it's being processed with Confidential VMs. Defend against rootkits and bootkits with Shielded VMs. Meet stringent compliance standards for data residency, sovereignty, access, and encryption with Assured Workloads.
Workload Manager
Now available for SAP workloads, Workload Manager evaluates your application workloads by detecting deviations from documented standards and best practices, to proactively prevent issues, continuously analyze workloads, and simplify system troubleshooting.
VM Manager
VM Manager is a suite of tools for managing operating systems for large virtual machine (VM) fleets running Windows and Linux on Compute Engine.
Sole-tenant nodes
Sole-tenant nodes are physical Compute Engine servers dedicated exclusively for your use. Sole-tenant nodes simplify deployment for bring-your-own-license (BYOL) applications.
Sole-tenant nodes give you access to the same machine types and VM configuration options as regular compute instances.
TPU accelerators
Cloud TPUs can be added to accelerate machine learning and artificial intelligence applications. Cloud TPUs can be reserved, used on demand, or used as preemptible VMs.
Linux and Windows support
Run your choice of OS, including Debian, CentOS Stream, Fedora CoreOS, SUSE, Ubuntu, Red Hat Enterprise Linux, FreeBSD, or Windows Server 2008 R2, 2012 R2, and 2016. You can also use a shared image from the Google Cloud community or bring your own.
Container support
Run, manage, and orchestrate Docker containers on Compute Engine VMs with Google Kubernetes Engine.
Placement policy
Use placement policy to specify the location of your underlying hardware instances. Spread placement policy provides higher reliability by placing instances on distinct hardware, reducing the impact of underlying hardware failures. Compact placement policy provides lower latency between nodes by placing instances close together within the same network infrastructure.
View all features
Choose the right VM for your workload and requirements
Optimization | Workloads | Our recommendation
Efficient (lowest cost per core) | Web and app servers (low traffic); dev and test environments; containerized microservices; virtual desktops | General purpose E-Series: E2
Flexible (best price-performance for balanced and flexible workloads) | Web and app servers (low to medium traffic); containerized microservices; virtual desktops; back-office, CRM, or BI applications; data pipelines; databases (small to medium sized) | General purpose N-Series: N4, N2, N2D, and N1
Performance (best performance with advanced capabilities) | Web and app servers (high traffic); ad servers; game servers; data analytics; databases (any size); in-memory caches; media streaming and transcoding; CPU-based AI/ML | General purpose C-Series: C4A, C4, C3, C3D
Compute (highest compute per core) | Web and app servers; game servers; media streaming and transcoding; compute-bound workloads; high performance computing (HPC); CPU-based AI/ML | Specialized H-Series: H3
Memory (highest memory per core) | Databases (large); in-memory caches; electronic design automation; modeling and simulation | Specialized M-Series: X4, M3
Storage (highest storage per core) | Data analytics; databases (large horizontal scale-out, flash-optimized, data warehouses, and more) | Specialized Z-Series: Z3
Inference and visualization with GPUs (best performance for inference and visualization tasks requiring GPUs) | CUDA-enabled ML training and inference; video transcoding | Specialized G-series: G2
All other GPU tasks (highest performing GPUs) | Massively parallelized computation; BERT natural language processing; deep learning recommendation model (DLRM) | Specialized A-series: A3
Documentation: Machine families resource and comparison guide
How It Works
Compute Engine is a computing and hosting service that lets you create and run virtual machines on Google infrastructure, comparable to Amazon EC2 and Azure Virtual Machines. Compute Engine offers scale, performance, and value that let you launch large compute clusters with no up-front investment.
Guides: How to get started. Video: Compute Engine in 2 minutes.
Common Uses
Create your first VM
Three ways to get started: Complete a tutorial. Learn how to deploy a Linux VM, Windows Server VM, load-balanced VM, Java app, custom website, LAMP stack, and much more. Deploy a pre-configured sample application (a Jump Start Solution) in just a few clicks. Or create a VM from scratch using the Google Cloud console, CLI, API, or client libraries like C#, Go, and Java (a minimal client-library sketch appears at the end of this section). Use our documentation for step-by-step guidance.
Documentation: Creating a VM instance. Blog: What is a Jump Start Solution? Documentation: Create custom VM images from source disks, images, and snapshots. Documentation: Create multiple VMs that you can treat as a single entity with managed instance groups.
How to choose the right VM: With thousands of applications, each with different requirements, which VM is right for you?
Video: Choose the right VM. Documentation: View available regions and zones. Documentation: Choose a VM deployment strategy. Documentation: Understand networking for VMs.
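For readers who prefer the client-library route mentioned above, here is a minimal, illustrative sketch using the Python client library (google-cloud-compute). The project ID, zone, instance name, and image family are placeholders; treat this as a sketch and consult the Compute Engine documentation for the authoritative samples.

```python
from google.cloud import compute_v1

def create_small_vm(project_id: str, zone: str, instance_name: str) -> None:
    """Create an e2-micro VM with a Debian boot disk on the default network."""
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=10,
        ),
    )
    instance = compute_v1.Instance(
        name=instance_name,
        machine_type=f"zones/{zone}/machineTypes/e2-micro",
        disks=[boot_disk],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # Block until the insert operation finishes.

# Example (placeholder values): create_small_vm("my-project", "us-central1-a", "my-first-vm")
```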
Migrate and optimize enterprise applications
Three ways to get started: Complete a lab or tutorial. Generate a rapid estimate of your migration costs, and learn how to migrate a Linux VM, VMware, SQL servers, and much more. Visit the Cloud Architecture Center for advice on how to plan, design, and implement your cloud migration. Or apply for end-to-end migration and modernization support through Google Cloud’s Rapid Migration Program (RaMP).
Guide: Migrate to Google Cloud. Guide and checklist: Migrating workloads to the public cloud. Announced at Next ’24: Optimize costs and efficiency with new compute operations solutions. Guide: Visit the Cloud Architecture Center.
Access documentation, guides, and reference architectures: Migration Center is Google Cloud's unified migration platform. With features like cloud spend estimation, asset discovery, and a variety of tooling for different migration scenarios, it provides you with what you need to get started.
Start here: Google Cloud Migration Center. Guide: Bring your own licenses. Guide: SAP HANA on Compute Engine. Guide: Migrate VMware to Compute Engine.
Backup and restore your applications
Explore your options: Compute Engine offers ways to back up and restore virtual machine instances, Persistent Disk and Hyperdisk volumes, and workloads running in Compute Engine and on-premises. Start with a tutorial, or read the detailed options in our documentation.
Documentation: Backup and restore.
Access a fully managed backup and disaster recovery service: We offer a managed backup and disaster recovery (DR) service for centralized data protection of VMs and other workloads running in Google Cloud and on-premises. It uses snapshots to incrementally back up data from your persistent disks at the instance level.
Overview: Backup and DR service. Video: Rock-solid business continuity and data protection on Google Cloud.
Run modern container-based applications
Three ways to deploy containers: Containers let you run your apps with fewer dependencies on the host virtual machine and independently from other containerized apps that share the same host. If you need complete control over your environment, run container images directly on Compute Engine. To simplify cluster management and container orchestration tasks, use Google Kubernetes Engine (GKE). To completely remove the need for clusters or infrastructure management, use Cloud Run.
Guide: What are containers? Documentation: Deploy containers on Compute Engine. Documentation: Deploy containers on Google Kubernetes Engine. Documentation: Deploy containers on Cloud Run.
Infrastructure for AI workloads
AI-optimized hardware: We designed the accelerator-optimized machine family to deliver the performance and efficiency you need for AI workloads. Start by comparing our GPUs, or learn about TPUs for large-scale AI training and inference tasks.
Documentation: Accelerator-optimized VMs. Architecture: Learn about Google Cloud’s supercomputer architecture, AI Hypercomputer. Documentation: Understand and compare GPUs. Overview: What is a TPU? What’s the difference between a CPU, GPU, and TPU?
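As a small, hedged illustration of comparing GPU options programmatically, the sketch below lists the accelerator types offered in one zone using the google-cloud-compute Python client; the project and zone values are placeholders.

```python
from google.cloud import compute_v1

def list_accelerator_types(project_id: str, zone: str) -> None:
    """Print the GPU/accelerator types that Compute Engine offers in a zone."""
    client = compute_v1.AcceleratorTypesClient()
    for accelerator in client.list(project=project_id, zone=zone):
        print(f"{accelerator.name}: {accelerator.description}")

# Example (placeholder values): list_accelerator_types("my-project", "us-central1-a")
```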
Pricing
How Compute Engine pricing works: Compute Engine pricing varies based on your requirements for performance, storage, networking, location, and more. Prices below are in USD.
Get started free: New users get $300 in free trial credits to use within 90 days. The Compute Engine free tier gives you one e2-micro VM instance, up to 30 GB storage, and up to 1 GB of outbound data transfers per month.
VM instances, pay-as-you-go: Only pay for the services you use. No up-front fees. No termination charges. Pricing varies by product and usage. Starting at $0.01 (e2-micro).
Confidential VMs: Encrypt data in use, while it’s being processed. Starting at $0.936 per vCPU per month.
Sole-tenant nodes: Physical servers dedicated to your project. Pay a premium of +10% on top of the standard price (the pay-as-you-go rate for the selected vCPU and memory resources).
Discount, committed use: Pay less when you commit to a minimum spend in advance. Save up to 70%.
Discount, Spot VMs: Pay less when you run fault-tolerant jobs using excess Compute Engine capacity. Save up to 91%.
Discount, sustained use: Pay less on resources that are used for more than 25% of a month (and are not receiving any other discounts). Save up to 30%.
Storage, Persistent Disk: Durable network storage devices that your virtual machine (VM) instances can access. The data on each Persistent Disk volume is distributed across several physical disks. Starting at $0.04 per GB per month.
Storage, Hyperdisk: The fastest persistent disk storage for Compute Engine, with configurable performance and volumes that can be dynamically resized. Starting at $0.125 per GB per month.
Storage, Local SSD: Physically attached to the server that hosts your VM. Starting at $0.08 per GB per month.
Networking, Standard Tier: Leverage the public internet to carry traffic between your services and your users. Inbound transfers are always free; outbound transfers are free up to 200 GB per month.
Networking, Premium Tier: Leverage Google's premium backbone to carry traffic to and from your external users. Starting at $0.08 per GB per month for outbound data transfers; inbound transfers remain free.
To estimate costs based on your requirements, use our pricing calculator or reach out to our sales team to request a quote.
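As a rough, back-of-the-envelope sketch of how the per-hour and per-GB figures above combine into a monthly estimate (the rates are the "starting at" prices quoted in this table, with the e2-micro rate assumed to be per VM-hour; real charges depend on region, machine type, and any discounts, so use the pricing calculator for an actual quote):

```python
# Illustrative arithmetic only; substitute the rates for your machine type and region.
HOURS_PER_MONTH = 730  # average number of hours in a month

vm_hourly_rate = 0.01        # USD per hour, "starting at" rate quoted for e2-micro (assumed per VM-hour)
boot_disk_gb = 30            # Persistent Disk size in GB
disk_rate_gb_month = 0.04    # USD per GB per month, Persistent Disk starting rate

compute_cost = vm_hourly_rate * HOURS_PER_MONTH
storage_cost = boot_disk_gb * disk_rate_gb_month
print(f"Rough monthly estimate: ${compute_cost + storage_cost:.2f}")  # about $8.50
```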
Pricing calculator: Estimate your monthly Compute Engine charges. Need help? Chat with us online, call us directly, or request a call back.
Start your proof of concept: Try Compute Engine in the console, with one e2-micro VM instance free per month. Have a large project? Contact sales. Browse quickstarts, tutorials, or interactive walkthroughs for Compute Engine. Choose a learning path, build your skills, and validate your knowledge with Cloud Skills Boost. Learn and experiment with pre-built solution templates (Jump Start Solutions) handpicked by our experts.
Business Case
Learn from Compute Engine customers: Migrating 40,000 on-premises VMs to the cloud, Sabre reduced their IT costs by 40%. Joe DiFonzo, CIO, Sabre: “We’ve taken hundreds of millions of dollars of costs out of our business.” (Watch the interview.)
Related content: Gamebear uses Cloud Load Balancing to decrease network latency by 10x. With a 99.99% uptime SLA, Macy's feels confident their systems will run seamlessly. Wayfair uses GPUs to automate 3D model creation, reducing costs by $9M.
Partners & Integration
Accelerate your migration with partners: Ready to move your compute workloads to Google Cloud? These partners can guide you through every stage, from initial assessment and planning to migration.
FAQ
What is Compute Engine? What can it do? Compute Engine is an Infrastructure-as-a-Service product offering flexible, self-managed virtual machines (VMs) hosted on Google's infrastructure. Compute Engine includes Linux and Windows-based VMs running on KVM, local and durable storage options, and a simple REST-based API for configuration and control.
The service integrates with Google Cloud technologies, such as Cloud Storage, App Engine, and BigQuery, to extend beyond basic computational capability and create more complex and sophisticated apps.
What is a virtual CPU in Compute Engine? On Compute Engine, each virtual CPU (vCPU) is implemented as a single hardware hyper-thread on one of the available CPU platforms. On Intel Xeon processors, Intel Hyper-Threading Technology allows multiple application threads to run on each physical processor core. You configure your Compute Engine VMs with one or more of these hyper-threads as vCPUs. The machine type specifies the number of vCPUs that your instance has.
How do App Engine and Compute Engine relate to each other? We see the two as being complementary. App Engine is Google's Platform-as-a-Service offering and Compute Engine is Google's Infrastructure-as-a-Service offering. App Engine is great for running web-based apps, line-of-business apps, and mobile backends. Compute Engine is great for when you need more control of the underlying infrastructure. For example, you might use Compute Engine when you have highly customized business logic or you want to run your own storage system.
How do I get started? Try these getting started guides, or try one of our quickstart tutorials.
How does pricing and purchasing work? Compute Engine charges based on compute instance, storage, and network use. VMs are charged on a per-second basis with a one-minute minimum. Storage cost is calculated based on the amount of data you store. Network cost is calculated based on the amount of data transferred between VMs that communicate with each other and with the internet. For more information, review our price sheet.
Do you offer paid support? Yes, we offer paid support for enterprise customers. For more information, contact our sales organization.
Do you offer a Service Level Agreement (SLA)? Yes, we offer a Compute Engine SLA.
Where can I send feedback? For billing-related questions, you can send questions to the appropriate support channel. For feature requests and bug reports, submit an issue to our issue tracker.
How can I create a project? Go to the Google Cloud console. When prompted, select an existing project or create a new project. Follow the prompts to set up billing. If you are new to Google Cloud, you have free trial credit to pay for your instances.
What is the difference between a project number and a project ID? Every project can be identified in two ways: the project number or the project ID. The project number is automatically created when you create the project, whereas the project ID is created by you, or whoever created the project. The project ID is optional for many services, but is required by Compute Engine. For more information, see Google Cloud console projects.
What steps does Google take to protect my data? See disk encryption.
How do I choose the right size for my persistent disk? Persistent disk performance scales with the size of the persistent disk. Use the persistent disk performance chart to help decide what size disk works for you. If you're not sure, read the documentation to decide how big to make your persistent disk.
Where can I request more quota for my project? By default, all Compute Engine projects have default quotas for various resource types. However, these default quotas can be increased on a per-project basis. Check your quota limits and usage on the Quotas page in the Google Cloud console.
If you reach the limit for your resources and need more quota, make a request to increase the quota for certain resources using the IAM quotas page. You can make a request using the Edit Quotas button at the top of the page.
What kind of machine configuration (memory, RAM, CPU) can I choose for my instance? Compute Engine offers several configurations for your instance. You can also create custom configurations that match your exact instance needs. See the full list of available options on the machine types page.
If I accidentally delete my instance, can I retrieve it? No, instances that have been deleted cannot be retrieved. However, if an instance is simply stopped, you can start it again.
Do I have the option of using a regional data center in selected countries? Yes, Compute Engine offers data centers around the world. These data center options are designed to provide low-latency connectivity options from those regions. For specific region information, including the geographic location of regions, see regions and zones.
How can I tell if a zone is offline? The Compute Engine Zones section in the Google Cloud console shows the status of each zone. You can also get the status of zones through the command-line tool by running gcloud compute zones list, or through the Compute Engine API with the compute.zones.list method.
What operating systems can my instances run on? Compute Engine supports several operating system images and third-party images. Additionally, you can create a customized version of an image or build your own image.
What are the available zones I can create my instance in? For a list of available regions and zones, see regions and zones.
What if my question wasn’t answered here? Take a look at a longer list of FAQs here.
More ways to get your questions answered: All Compute FAQs. Ask us a question. \ No newline at end of file diff --git a/Concepts,_principles,_and_terminology.txt b/Concepts,_principles,_and_terminology.txt new file mode 100644 index 0000000000000000000000000000000000000000..a6594425596defe616b82dadec4074e5d422f603 --- /dev/null +++ b/Concepts,_principles,_and_terminology.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/database-migration-concepts-principles-part-1 +Date Scraped: 2025-02-23T11:52:27.902Z + +Content: +Home Docs Cloud Architecture Center Send feedback Database migration: Concepts and principles (Part 1) Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-07 UTC This document introduces concepts, principles, terminology, and architecture of near-zero downtime database migration for cloud architects who are migrating databases to Google Cloud from on-premises or other cloud environments. This document is part 1 of two parts. Part 2 discusses setting up and executing the migration process, including failure scenarios. Database migration is the process of migrating data from one or more source databases to one or more target databases by using a database migration service. When a migration is finished, the dataset in the source databases resides fully, though possibly restructured, in the target databases. Clients that accessed the source databases are then switched over to the target databases, and the source databases are turned down. The following diagram illustrates this database migration process. This document describes database migration from an architectural standpoint: The services and technologies involved in database migration.
The differences between homogeneous and heterogeneous database migration. The tradeoffs and selection of a migration downtime tolerance. A setup architecture that supports a fallback if unforeseen errors occur during a migration. This document does not describe how you set up a particular database migration technology. Rather, it introduces database migration in fundamental, conceptual, and principle terms. Architecture The following diagram shows a generic database migration architecture. A database migration service runs within Google Cloud and accesses both source and target databases. Two variants are represented: (a) shows the migration from a source database in an on-premises data center or a remote cloud to a managed database like Spanner; (b) shows a migration to a database on Compute Engine. Even though the target databases are different in type (managed and unmanaged) and setup, the database migration architecture and configuration is the same for both cases. Terminology The most important data migration terms for these documents are defined as follows: source database: A database that contains data to be migrated to one or more target databases. target database: A database that receives data migrated from one or more source databases. database migration: A migration of data from source databases to target databases with the goal of turning down the source database systems after the migration completes. The entire dataset, or a subset, is migrated. homogeneous migration: A migration from source databases to target databases where the source and target databases are of the same database management system from the same provider. heterogeneous migration: A migration from source databases to target databases where the source and target databases are of different database management systems from different providers. database migration system: A software system or service that connects to source databases and target databases and performs data migrations from source to target databases. data migration process: A configured or implemented process executed by the data migration system to transfer data from source to target databases, possibly transforming the data during the transfer. database replication: A continuous transfer of data from source databases to target databases without the goal of turning down the source databases. Database replication (sometimes called database streaming) is an ongoing process. Classification of database migrations There are different types of database migrations that belong to different classes. This section describes the criteria that defines those classes. Replication versus migration In a database migration, you move data from source databases to target databases. After the data is completely migrated, you delete source databases and redirect client access to the target databases. Sometimes you keep the source databases as a fallback measure if you encounter unforeseen issues with the target databases. However, after the target databases are reliably operating, you eventually delete the source databases. With database replication, in contrast, you continuously transfer data from the source databases to the target databases without deleting the source databases. Sometimes database replication is referred to as database streaming. While there is a defined starting time, there is typically no defined completion time. The replication might be stopped or become a migration. This document discusses only database migration. 
Partial versus complete migration Database migration is understood to be a complete and consistent transfer of data. You define the initial dataset to be transferred as either a complete database or a partial database (a subset of the data in a database) plus every change committed on the source database system thereafter. Heterogeneous migration versus homogeneous migration A homogeneous database migration is a migration between the source and target databases of the same database technology, for example, migrating from a MySQL database to a MySQL database, or from an Oracle® database to an Oracle database. Homogeneous migrations also include migrations from a self-hosted database system such as PostgreSQL to a managed version of it such as Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL. In a homogeneous database migration, the schemas for the source and target databases are likely identical. If the schemas are different, the data from the source databases must be transformed during migration. Heterogeneous database migration is a migration between source and target databases of different database technologies, for example, from an Oracle database to Spanner. Heterogeneous database migration can be between the same data models (for example, from relational to relational), or between different data models (for example, from relational to key-value). Migrating between different database technologies doesn't necessarily involve different data models. For example, Oracle, MySQL, PostgreSQL, and Spanner all support the relational data model. However, multi-model databases like Oracle, MySQL, or PostgreSQL support different data models. Data stored as JSON documents in a multi-model database can be migrated to MongoDB with little or no transformation necessary, as the data model is the same in the source and the target database. Although the distinction between homogeneous and heterogeneous migration is based on database technologies, an alternative categorization is based on the database models involved. For example, a migration from an Oracle database to Spanner is homogeneous when both use the relational data model; a migration is heterogeneous if, for example, data stored as JSON objects in Oracle is migrated to a relational model in Spanner. Categorizing migrations by data model more accurately expresses the complexity and effort required to migrate the data than basing the categorization on the database system involved. However, because the commonly used categorization in the industry is based on the database systems involved, the remaining sections are based on that distinction. Migration downtime: zero versus minimal versus significant After you successfully migrate a dataset from the source to the target database, you then switch client access over to the target database and delete the source database. Switching clients from the source databases to the target databases involves several processes: To continue processing, clients must close existing connections to the source databases and create new connections to the target databases. Ideally, closing connections is graceful, meaning that you don't unnecessarily roll back ongoing transactions. After closing connections on the source databases, you must migrate remaining changes from the source databases to the target databases (called draining) to ensure that all changes are captured.
You might need to test target databases to ensure that these databases are functional and that clients are functional and operate within their defined service level objectives (SLOs). In a migration, achieving truly zero downtime for clients is impossible; there are times when clients cannot process requests. However, you can minimize the duration that clients are unable to process requests in several ways (near-zero downtime): You can start your test clients in read-only mode against the target databases long before you switch the clients over. With this approach, testing is concurrent with the migration. You can configure the amount of data being migrated (that is, in flight between the source and target databases) to be as small as possible when the switch over period approaches. This step reduces the time for draining because there are fewer differences between the source databases and the target databases. If new clients operating on the target databases can be started concurrently with existing clients operating on the source databases, you can shorten the switch over time because the new clients are ready to execute as soon as all data is drained. While it's unrealistic to achieve zero downtime during a switch over, you can minimize the downtime by starting activities concurrently with the ongoing data migration when possible. In some database migration scenarios, significant downtime is acceptable. Typically, this allowance is a result of business requirements. In such cases, you can simplify your approach. For example, with a homogeneous database migration, you might not require data modification; export and import or backup and restore are perfect approaches. With heterogeneous migrations, the database migration system does not have to deal with updates of source database systems during the migration. However, you need to establish that the acceptable downtime is long enough for the database migration and follow-up testing to occur. If this downtime cannot be clearly established or is unacceptably long, you need to plan a migration that involves minimal downtime. Database migration cardinality In many situations database migration takes place between a single source database and a single target database. In such situations, the cardinality is 1:1 (direct mapping). That is, a source database is migrated without changes to a target database. A direct mapping, however, is not the only possibility. Other cardinalities include the following: Consolidation (n:1). In a consolidation, you migrate data from several source databases to a smaller number of target databases (or even one target). You might use this approach to simplify database management or employ a target database that can scale. Distribution (1:n). In a distribution, you migrate data from one source database to several target databases. For example, you might use this approach when you need to migrate a large centralized database containing regional data to several regional target databases. Re-distribution (n:m). In a re-distribution, you migrate data from several source databases to several target databases. You might use this approach when you have sharded source databases with shards of very different sizes. The re-distribution evenly distributes the sharded data over several target databases that represent the shards. Database migration provides an opportunity to redesign and implement your database architecture in addition to merely migrating data. 
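To make the re-distribution (n:m) case concrete, here is a minimal, hypothetical sketch: rows read from any source shard are assigned to a target shard by hashing their primary key, which spreads the data evenly across the targets. The shard names and key format are illustrative only.

```python
import hashlib

TARGET_SHARDS = ["target-db-0", "target-db-1", "target-db-2"]  # hypothetical target shards

def target_shard(primary_key: str) -> str:
    """Deterministically map a primary key to one of the target shards."""
    digest = hashlib.sha256(primary_key.encode("utf-8")).hexdigest()
    return TARGET_SHARDS[int(digest, 16) % len(TARGET_SHARDS)]

print(target_shard("customer-42"))  # always maps the same key to the same target shard
```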
Migration consistency The expectation is that a database migration is consistent. In the context of migration, consistent means the following: Complete. All data that is specified to be migrated is actually migrated. The specified data could be all data in a source database or a subset of the data. Duplicate free. Each piece of data is migrated once, and only once. No duplicate data is introduced into the target database. Ordered. The data changes in the source database are applied to the target database in the same order as the changes occurred in the source database. This aspect is essential to ensure data consistency. An alternative way to describe migration consistency is that after a migration completes, the data state between the source and the target databases is equivalent. For example, in a homogeneous migration that involves the direct mapping of a relational database, the same tables and rows must exist in the source and the target databases. This alternative way of describing migration consistency is important because not all data migrations are based on sequentially applying transactions in the source database to the target database. For example, you might back up the source database and use the backup to restore the source database content into the target database (when significant downtime is possible). Active-passive versus active-active migration An important distinction is whether the source and target databases are both open to query processing that modifies data. In an active-passive database migration, the source databases can be modified during the migration, while the target databases allow only read-only access. An active-active migration supports clients writing into both the source as well as the target databases during the migration. In this type of migration, conflicts can occur. For example, if the same data item is modified in the source and target databases in ways that conflict semantically, you might need to run conflict resolution rules to resolve the conflict. In an active-active migration, you must be able to resolve all data conflicts by using conflict resolution rules. If you cannot, you might experience data inconsistency. Database migration architecture A database migration architecture describes the various components required for executing a database migration. This section introduces a generic deployment architecture and treats the database migration system as a separate component. It also discusses the features of a database management system that support data migration as well as non-functional properties that are important for many use cases. Deployment architecture A database migration can occur between source and target databases located in any environment, like on-premises or different clouds. Each source and target database can be in a different environment; it is not necessary that all are collocated in the same environment. The following diagram shows an example of a deployment architecture involving several environments. DB1 and DB2 are two source databases, and DB3 and Spanner are the target databases. Two clouds and two on-premises data centers are involved in this database migration. The arrows represent the invocation relationships: the database migration service invokes interfaces of all source and target databases. A special case not discussed here is the migration of data from a database into the same database.
This special case uses the database migration system for data transformation only, not for migrating data between different systems across different environments. Fundamentally, there are three approaches to database migration, which this section discusses: Using a database migration system Using database management system replication functionality Using custom database migration functionality Database migration system The database migration system is at the core of database migration. The system executes the actual data extraction from the source databases, transports the data to the target databases, and optionally modifies the data during transit. This section discusses the basic database migration system features in general. Examples of database migration systems include Database Migration Service, Striim, Debezium, tcVision and Cloud Data Fusion. Data migration process The core technical building block of a database migration system is the data migration process. The data migration process is specified by a developer and defines the source databases from which data is extracted, the target databases into which data is migrated, and any data modification logic applied to the data during the migration. You can specify one or more data migration processes and execute them sequentially or concurrently depending on the needs of the migration. For example, if you migrate independent databases, the corresponding data migration processes can run in parallel. Data extraction and insertion You can detect changes (insertions, updates, deletions) in a database system in two ways: database-supported change data capture (CDC) based on a transaction log, and differential querying of data itself using the query interface of a database management system. CDC based on a transaction log Database-supported CDC is based on database management features that are separate from the query interface. One approach is based on transaction logs (for example the binary log in MySQL). A transaction log contains the changes made to data in the correct order. The transaction log is continuously read, and so every change can be observed. For database migration, this logging is extremely useful, as CDC ensures that each change is visible and is subsequently migrated to the target database without loss and in the correct order. CDC is the preferred approach for capturing changes in a database management system. CDC is built into the database itself and has the least load impact on the system. Differential querying If no database management system feature exists that supports observing all changes in the correct order, you can use differential querying as an alternative. In this approach, each data item in a database gets an additional attribute that contains a timestamp or a sequence number. Every time the data item is changed, the change timestamp is added or the sequence number is increased. A polling algorithm reads all data items since the last time it executed or since the last sequence number it used. Once the polling algorithm determines the changes, it records the current time or sequence number into its internal state and then passes on the changes to the target database. While this approach works without problems for inserts and updates, you need to carefully design deletes because a delete removes a data item from the database. After the data is deleted, it is impossible for the poller to detect that a deletion occurred. 
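The following minimal sketch illustrates the polling side of differential querying. It uses SQLite and a hypothetical customers table purely for illustration: each row carries a change sequence number, and the poller reads only the rows changed since its last checkpoint, in order. It also assumes the logical delete flag described in the next paragraph.

```python
import sqlite3

def apply_upsert_to_target(row_id, payload):
    print("upsert", row_id, payload)   # stand-in for writing to the target database

def apply_delete_to_target(row_id):
    print("delete", row_id)            # stand-in for removing the row from the target

def poll_changes(source: sqlite3.Connection, last_seq: int) -> int:
    """Fetch and apply all changes with a sequence number greater than last_seq."""
    rows = source.execute(
        "SELECT id, payload, is_deleted, change_seq FROM customers "
        "WHERE change_seq > ? ORDER BY change_seq",
        (last_seq,),
    )
    for row_id, payload, is_deleted, change_seq in rows:
        if is_deleted:                      # logical delete flag
            apply_delete_to_target(row_id)
        else:
            apply_upsert_to_target(row_id, payload)
        last_seq = change_seq
    return last_seq  # persist this checkpoint so a restart can resume here
```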
You implement a deletion by using an additional status field (a logical delete flag) that indicates the data is deleted. Alternatively, deleted data items can be collected into one or more tables, and the poller accesses those tables to determine if deletion occurred. For variants on differential querying, see Change data capture. Differential querying is the least preferred approach because it involves schema and functionality changes. Querying the database also adds a query load that does not relate to executing client logic. Adapter and agent The database migration system requires access to the source and to the target database systems. Adapters are the abstraction that encapsulates the access capabilities. In the simplest form, an adapter can be a JDBC driver for inserting data into a target database that supports JDBC. In a more complex case, an adapter is running in the environment of the target (sometimes called an agent), accessing a built-in database interface like log files. In an even more complex case, an adapter or agent interfaces with yet another software system, which in turn accesses the database. For example, an agent accesses Oracle GoldenGate, and that in turn accesses an Oracle database. The adapter or agent that accesses a source database implements the CDC interface or the differential querying interface, depending on the design of the database system. In both cases, the adapter or agent provides changes to the database migration system, and the database migration system is unaware of whether the changes were captured by CDC or differential querying. Data modification In some use cases, data is migrated from source databases to target databases unmodified. These straight-through migrations are typically homogeneous. Many use cases, however, require data to be modified during the migration process. Typically, modification is required when there are differences in schema, differences in data values, or opportunities to clean up data while it is in transit. The following sections discuss several types of modifications that can be required in a data migration—data transformation, data enrichment or correlation, and data reduction or filtering. Data transformation Data transformation transforms some or all data values from the source database. Some examples include the following: Data type transformation. Sometimes data types between the source and target databases are not equivalent. In these cases, data type transformation casts the source value into the target value based on type transformation rules. For example, a timestamp type from the source might be transformed into a string in the target. Data structure transformation. Data structure transformation modifies the structure in the same database model or between different database models. For example, in a relational system, one source table might be split into two target tables, or several source tables might be denormalized into one target table by using a join. A 1:n relationship in the source database might be transformed into a parent and child relationship in Spanner. Documents from a source document database system might be decomposed into a set of relational rows in a target system. Data value transformation. Data value transformation is separate from data type transformation. Data value transformation changes the value without changing the data type. For example, a timestamp in a local time zone is converted to Coordinated Universal Time (UTC).
Or a short zip code (five digits) represented as a string is converted to a long zip code (five digits followed by a dash followed by 4 digits, also known as ZIP+4). Data enrichment and correlation Data transformation is applied on the existing data without reference to additional, related reference data. With data enrichment, additional data is queried to enrich source data before it's stored in the target database. Data correlation. It is possible to correlate source data. For example, you can combine data from two tables in two source databases. In one target database, for example, you might relate a customer to all open, fulfilled, and canceled orders whereby the customer data and the order data originate from two different source databases. Data enrichment. Data enrichment adds reference data. For example, you might enrich records that only contain a zip code by adding the city name corresponding to the zip code. A reference table containing zip codes and the corresponding city names is a static dataset accessed for this use case. Reference data can be dynamic as well. For example, you might use a list of all known customers as reference data. Data reduction and filtering Another type of data transformation is reducing or filtering the source data before migrating it to a target database. Data reduction. Data reduction removes attributes from a data item. For example, if a zip code is present in a data item, the corresponding city name might not be required and is dropped, because it can be recalculated or because it is not needed anymore. Sometimes this information is kept for historical reasons to record the name of the city as entered by the user, even if the city name changes in time. Data filtering. Data filtering removes a data item altogether. For example, all canceled orders might be removed and not transferred to the target database. Data combination or recombination If data is migrated from different source databases to different target databases, it can be necessary to combine data differently between source and target databases. Suppose that customers and orders are stored in two different source databases. One source database contains all orders, and a second source database contains all customers. After migration, customers and their orders are stored in a 1:n relationship within a single target database schema—not in a single target database, however, but several target databases where each contains a partition of the data. Each target database represents a region and contains all customers and their orders located in that region. Target database addressing Unless there is only one target database, each data item that is migrated needs to be sent to the correct target database. A couple of approaches to addressing the target database include the following: Schema-based addressing. Schema-based addressing determines the target database based on the schema. For example, all data items of a customer collection or all rows of a customer table are migrated to the same target database storing customer information, even though this information was distributed in several source databases. Content-based routing. Content-based routing (using a content-based router, for example) determines the target database based on data values. For example, all customers located in the Latin America region are migrated to a specific target database that represents that region. You can use both types of addressing at the same time in a database migration. 
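For example, the following minimal sketch combines the two approaches: orders are routed by schema (table name), while customers are routed by a region value in the data. All table, region, and database names are hypothetical.

```python
SCHEMA_TARGETS = {"orders": "orders-db", "products": "catalog-db"}            # schema-based
REGION_TARGETS = {"LATAM": "latam-db", "EMEA": "emea-db", "APAC": "apac-db"}  # content-based

def route(table: str, item: dict) -> str:
    """Return the name of the target database for one migrated data item."""
    if table == "customers":                  # content-based routing by region value
        return REGION_TARGETS[item["region"]]
    return SCHEMA_TARGETS[table]              # schema-based addressing by table name

print(route("customers", {"id": 7, "region": "LATAM"}))  # -> latam-db
print(route("orders", {"id": 99}))                       # -> orders-db
```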
Regardless of the addressing type used, the target database must have the correct schema in place so that data items are stored. Persistence of in-transit data Database migration systems, or the environments on which they run, can fail during a migration, and in-transit data can be lost. When failures occur, you need to restart the database migration system and ensure that the data stored in the source database is consistently and completely migrated to the target databases. As part of the recovery, the database migration system needs to identify the last successfully migrated data item to determine where to begin extracting from the source databases. To resume at the point of failure, the system needs to keep an internal state on the migration progress. You can maintain state in several ways: You can store all extracted data items within the database migration system before any database modification, and then remove the data item once its modified version is successfully stored in the target database. This approach ensures that the database migration system can exactly determine what is extracted and stored. You can maintain a list of references to the data items in transit. One possibility is to store the primary keys or other unique identifiers of each data item together with a status attribute. After a failure, this state is the basis for recovering the system consistently. You can query the source and target databases after a failure to determine the difference between the source and target database systems. The next data item to be extracted is determined based on the difference. Other approaches to maintaining state can depend on the specific source databases. For example, a database migration system can keep track of which transaction log entries are fetched from the source database and which are inserted into the target database. If a failure occurs, the migration can be restarted from the last successful inserted entry. Persistence of in-transit data is also important for other reasons than errors or failures. For example, it might not be possible to query data from the source database to determine its state. If, for example, the source database contained a queue, the messages in that queue might have been removed at some point. Yet another use case for persistence of in-transit data is large window processing of the data. During data modification, data items can be transformed independently of each other. However, sometimes the data modification depends on several data items (for example, numbering the data items processed per day, starting at zero every day). A final use case for persistence of in-transit data is to provide repeatability of the data during data modification when the database system cannot access the source databases again. For example, you might need to re-execute the data modifications with different modification rules and then verify and compare the results with the initial data modifications. This approach might be necessary if you need to track any inconsistencies in the target database because of an incorrect data modification. Completeness and consistency verification You need to verify that your database migration is complete and consistent. This check ensures that each data item is migrated only once, and that the datasets in the source and target databases are identical and that the migration is complete. Depending on the data modification rules, it is possible that a data item is extracted but not inserted into a target database. 
For this reason, directly comparing the source and target databases is not a solid approach for verifying completeness and consistency. However, if the database migration system tracks the items that are filtered out, you can then compare the source and target databases along with the filtered items. Replication feature of the database management system A special use case in a homogeneous migration is where the target database is a copy of the source database. Specifically, the schemas in the source and target databases are the same, the data values are the same, and each source database is a direct mapping (1:1) to a target database. In this case, you can use the built-in replication feature that comes with most database management systems to replicate one database to another. There are two types of data replication: logical and physical. Logical replication: In the case of logical replication, changes in database objects are transferred based on their replication identifiers (usually primary keys). The advantages of logical replication are that it is flexible, granular, and you can customize it. In some cases, logical replication lets you replicate changes between different database engine versions. Many database engines support logical replication filters, where you can define the set of data to be replicated. The main disadvantages are that logical replication might introduce some performance overhead and the latency of this replication method is usually higher than that of physical replication. Physical replication: In contrast, physical replication works on the disk block level and offers better performance with lower replication latency. For large datasets, physical replication can be more straightforward and efficient, especially in the case of non-relational data structures. However, it is not customizable and depends highly on the database engine version. Examples are MySQL replication, PostgreSQL replication (see also pglogical), or Microsoft SQL Server replication. However, if data modification is required, or you have any cardinality other than a direct mapping, a database migration system's capabilities are needed to address such a use case. Custom database migration functionality Some reasons for building database migration features instead of using a database migration system or database management system include the following: You need full control over every detail. You want to reuse the database migration capabilities. You want to reduce costs or simplify your technological footprint. Building blocks for building migration features include the following: Export and import: If downtime is not a factor, you can use database export and database import to migrate data in homogenous database migrations. Export and import, however, requires that you quiesce the source database to prevent updates before you export the data. Otherwise, changes might not be captured in the export, and the target database won't be an exact copy of the source database. Backup and restore: Like in the case of export and import, backup and restore incurs downtime because you need to quiesce the source database so that the backup contains all data and the latest changes. The downtime continues until the restore is completed successfully on the target database. Differential querying: If changing the database schema is an option, you can extend the schema so that database changes can be queried at the query interface. An additional timestamp attribute is added, indicating the time of the last change. 
An additional delete flag can be added, indicating if the data item is deleted or not (logical delete). With these two changes, a poller executing in a regular interval can query all changes since its last execution. The changes are applied to the target database. Additional approaches are discussed in Change data capture. These are only a few of the possible options to build a custom database migration. Although a custom solution provides the most flexibility and control over implementation, it also requires constant maintenance to address bugs, scalability limitations, and other issues that might arise during a database migration. Additional considerations of database migration The following sections briefly discuss non-functional aspects that are important in the context of database migration. These aspects include error handling, scalability, high availability, and disaster recovery. Error handling Failures during database migration must not cause data loss or the processing of database changes out of order. Data integrity must be preserved regardless of what caused the failure (such as a bug in the system, a network interruption, a VM crash, or a zone failure). A data loss occurs when a migration system retrieves the data from the source databases and does not store it in the target databases because of some error. When data is lost, the target databases don't match the source databases and are thus inconsistent and incomplete. The completeness and consistency verification feature flags this state (Completeness and consistency verification). Scalability In a database migration, time-to-migrate is an important metric. In a zero downtime migration (in the sense of minimal downtime), the migration of the data occurs while the source databases continue to change. To migrate in a reasonable timeframe, the rate of data transfer must be significantly faster than the rate of updates of the source database systems, especially when the source database system is large. The higher the transfer rate, the faster the database migration can be completed. When the source database systems are quiesced and are not being modified, the migration might be faster because there are no changes to incorporate. In a homogeneous database, the time-to-migrate might be quite fast because you can use backup and restore or export and import features, and the transfer of files scales. High availability and disaster recovery In general, source and target databases are configured for high availability. A primary database has a corresponding read replica that is promoted to be the primary database when a failure occurs. When a zone fails, the source or target databases fail over to a different zone to be continuously available. If a zone failure occurs during a database migration, the migration system itself is impacted because several of the source or target databases it accesses become inaccessible. The migration system must reconnect to the newly promoted primary databases that are running after a failure. Once the database migration system is reconnected, it must recover the migration itself to ensure the completeness and consistency of the data in the target databases. The migration system must determine the last consistent transfer to establish where to resume. If the database migration system itself fails (for example, the zone it runs in becomes inaccessible), then it must be recovered. One recovery approach is a cold restart. 
In this approach, the database migration system is installed in an operational zone and restarted. The biggest issue to address is that the migration system must be able to determine the last consistent data transfer before the failure and continue from that point to ensure data completeness and consistency in the target databases. If the database migration system is enabled for high availability, it can fail over and continue processing afterwards. If limited downtime of the database migration system is important, you need to select a database and implement high availability. In terms of recovering the database migration, disaster recovery is very similar to high availability. Instead of reconnecting to newly promoted primary databases in a different zone, the database migration system must reconnect to databases in a different region (a failover region). The same holds true for the database migration system itself. If the region where the database migration system runs becomes inaccessible, the database migration system must fail over to a different region and continue from the last consistent data transfer. Pitfalls Several pitfalls can cause inconsistent data in the target databases. Some common ones to avoid are the following: Order violation. If scalability of the migration system is achieved by scaling out, then several data transfer processes are running concurrently (in parallel). Changes in a source database system are ordered according to committed transactions. If changes are picked up from the transaction log, the order must be maintained throughout the migration. Parallel data transfer can change the order because of varying speed between the underlying processes. It is necessary to ensure that the data is inserted into the target databases in the same order as it is received from the source databases. Consistency violation. With differential queries, the source databases have additional data attributes that contain, for example, commit timestamps. The target databases won't have commit timestamps because the commit timestamps are only put in place to establish change management in the source databases. It is important to ensure that inserts into the target databases must be timestamp consistent, which means all changes with the same timestamp must be in the same insert or update or upsert transaction. Otherwise, the target database might have an inconsistent state (temporarily) if some changes are inserted and others with the same timestamp are not. This temporary inconsistent state does not matter if the target databases are not accessed for processing. However, if they are used for testing, consistency is paramount. Another aspect is the creation of the timestamp values in the source database and how they relate to the commit time of the transaction in which they are set. Because of transaction commit dependencies, a transaction with an earlier timestamp might become visible after a transaction with a later timestamp. If the differential query is executed between the two transactions, it won't see the transaction with the earlier timestamp, resulting in an inconsistency on the target database. Missing or duplicate data. When a failover occurs, a careful recovery is required if some data is not replicated between the primary and the failover replica. For example, a source database fails over and not all data is replicated to the failover replica. At the same time, the data is already migrated to the target database before the failure. 
After failover, the newly promoted primary database is behind in terms of data changes to the target database (called flashback). A migration system needs to recognize this situation and recover from it in such a way that the target database and the source database get back into a consistent state. Local transactions. To have the source and target database receive the same changes, a common approach is to have clients write to both the source and target databases instead of using a data migration system. This approach has several pitfalls. One pitfall is that two database writes are two separate transactions; you might encounter a failure after the first finishes and before the second finishes. This scenario causes inconsistent data from which you must recover. Also, there are several clients in general, and they are not coordinated. The clients do not know the source database transaction commit order and therefore cannot write to the target databases implementing that transaction order. The clients might change the order, which can lead to data inconsistency. Unless all access goes through coordinated clients, and all clients ensure the target transaction order, this approach can lead to an inconsistent state with the target database. In general, there are other pitfalls to watch out for. The best way to find problems that might lead to data inconsistency is to do a complete failure analysis that iterates through all possible failure scenarios. If concurrency is implemented in the database migration system, all possible data migration process execution orders must be examined to ensure that data consistency is preserved. If high availability or disaster recovery (or both) is implemented, all possible failure combinations must be examined. What's next Read Database migrations: Concepts and principles (Part 2). Read about database migration in the following documents: Migrating from PostgreSQL to Spanner Migrating from an Oracle® OLTP system to Spanner See Database migration for more database migration guides. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Conduct_thorough_postmortems.txt b/Conduct_thorough_postmortems.txt new file mode 100644 index 0000000000000000000000000000000000000000..72be6891055d4b3ff2c8a65376b28a517469a4bf --- /dev/null +++ b/Conduct_thorough_postmortems.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/conduct-postmortems +Date Scraped: 2025-02-23T11:43:42.819Z + +Content: +Home Docs Cloud Architecture Center Send feedback Conduct thorough postmortems Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you conduct effective postmortems after failures and incidents. This principle is relevant to the learning focus area of reliability. Principle overview A postmortem is a written record of an incident, its impact, the actions taken to mitigate or resolve the incident, the root causes, and the follow-up actions to prevent the incident from recurring. The goal of a postmortem is to learn from mistakes and not assign blame. 
The following diagram shows the workflow of a postmortem: The workflow of a postmortem includes the following steps: Create postmortem Capture the facts Identify and analyze the root causes Plan for the future Execute the plan Conduct postmortem analyses after major events and non-major events like the following: User-visible downtimes or degradations beyond a certain threshold. Data losses of any kind. Interventions from on-call engineers, such as a release rollback or rerouting of traffic. Resolution times above a defined threshold. Monitoring failures, which usually imply manual incident discovery. Recommendations Define postmortem criteria before an incident occurs so that everyone knows when a post mortem is necessary. To conduct effective postmortems, consider the recommendations in the following subsections. Conduct blameless postmortems Effective postmortems focus on processes, tools, and technologies, and don't place blame on individuals or teams. The purpose of a postmortem analysis is to improve your technology and future, not to find who is guilty. Everyone makes mistakes. The goal should be to analyze the mistakes and learn from them. The following examples show the difference between feedback that assigns blame and blameless feedback: Feedback that assigns blame: "We need to rewrite the entire complicated backend system! It's been breaking weekly for the last three quarters and I'm sure we're all tired of fixing things piecemeal. Seriously, if I get paged one more time I'll rewrite it myself…" Blameless feedback: "An action item to rewrite the entire backend system might actually prevent these pages from continuing to happen. The maintenance manual for this version is quite long and really difficult to be fully trained up on. I'm sure our future on-call engineers will thank us!" Make the postmortem report readable by all the intended audiences For each piece of information that you plan to include in the report, assess whether that information is important and necessary to help the audience understand what happened. You can move supplementary data and explanations to an appendix of the report. Reviewers who need more information can request it. Avoid complex or over-engineered solutions Before you start to explore solutions for a problem, evaluate the importance of the problem and the likelihood of a recurrence. Adding complexity to the system to solve problems that are unlikely to occur again can lead to increased instability. Share the postmortem as widely as possible To ensure that issues don't remain unresolved, publish the outcome of the postmortem to a wide audience and get support from management. The value of a postmortem is proportional to the learning that occurs after the postmortem. When more people learn from incidents, the likelihood of similar failures recurring is reduced. 
Previous arrow_back Perform testing for recovery from data loss Send feedback \ No newline at end of file diff --git a/Confidential_computing_for_data_analytics_and_AI.txt b/Confidential_computing_for_data_analytics_and_AI.txt new file mode 100644 index 0000000000000000000000000000000000000000..79cc344b93ebe3266a5943ca3c1b91a2ee4a8120 --- /dev/null +++ b/Confidential_computing_for_data_analytics_and_AI.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/confidential-computing-analytics-ai +Date Scraped: 2025-02-23T11:46:19.573Z + +Content: +Home Docs Cloud Architecture Center Send feedback Confidential computing for data analytics, AI, and federated learning Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-20 UTC This document provides a general overview of confidential computing, including how you can use it for secure data collaboration, AI model training, and federated learning. The document also provides information about the Confidential Computing services in Google Cloud and architecture references for different use cases. This document is intended to help technology executives understand the business potential of confidential computing with generative AI and applied AI across various industries, including financial services and healthcare. What is confidential computing? Data security practices have conventionally centered on protecting data at rest and in transit through encryption. Confidential computing adds a new layer of protection by addressing the vulnerability of data during its active use. This technology ensures that sensitive information remains confidential even as it is being processed, thus helping to close a critical gap in data security. A confidential computing environment implements the protection of data in use with a hardware-based trusted execution environment (TEE). A TEE is a secure area within a processor that protects the confidentiality and integrity of code and data loaded inside it. TEE acts as a safe room for sensitive operations, which mitigates risk to data even if the system is compromised. With confidential computing, data can be kept encrypted in memory during processing. For example, you can use confidential computing for data analytics and machine learning to help achieve the following: Enhanced privacy: Perform analysis on sensitive datasets (for example, medical records or financial data) without exposing the data to the underlying infrastructure or the parties that are involved in the computation. Secure collaboration: Jointly train machine learning models or perform analytics on the combined datasets of multiple parties without revealing individual data to each other. Confidential computing fosters trust and enables the development of more robust and generalizable models, particularly in sectors like healthcare and finance. Improved data security: Mitigate the risk of data breaches and unauthorized access, ensuring compliance with data protection regulations — such as the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA). Increased trust and transparency: Provide verifiable proof that computations are performed on the intended data and in a secure environment, increasing trust among stakeholders. How a confidential computing environment works Confidential computing environments have the following properties: Runtime encryption: The processor keeps all confidential computing environment data encrypted in memory. 
Any system component or hardware attacker that attempts to read confidential computing environment data directly from memory only sees encrypted data. Likewise, encryption prevents the modification of confidential computing environment data through direct access to memory. Isolation: The processor blocks software-based access to the confidential computing environment. The operating system and other applications can only communicate with the confidential computing environment over specific interfaces. Attestation: In the context of confidential computing, attestation verifies the trustworthiness of the confidential computing environment. Using attestation, users can see the evidence that confidential computing is safeguarding their data because attestation lets you authenticate the TEE instance. During the attestation process, the CPU chip that supports the TEE produces a cryptographically signed report (known as an attestation report) of the measurement of the instance. The measurement is then sent to an attestation service. An attestation for process isolation authenticates an application. An attestation for VM isolation authenticates a VM, the virtual firmware that is used to launch the VM, or both. Data lifecycle security: Confidential computing creates a secure processing environment to provide hardware-backed protection for data in use. Confidential computing technology The following technologies enable confidential computing: Secure enclaves, also known as application-based confidential computing Confidential VMs and GPUs, also known as VM-based confidential computing Google Cloud uses Confidential VM to enable confidential computing. For more information, see Implement confidential computing on Google Cloud. Secure enclaves A secure enclave is a computing environment that provides isolation for code and data from the operating system using hardware-based isolation or isolating an entire VM by placing the hypervisor within the trusted computing base (TCB). Secure enclaves are designed to ensure that even users with physical or root access to the machines and operating system can't learn the contents of secure enclave memory or tamper with the execution of code inside the enclave. An example of a secure enclave is Intel Software Guard Extension (SGX). Confidential VMs and confidential GPUs A confidential VM is a type of VM that uses hardware-based memory encryption to help protect data and applications. Confidential VM offers isolation and attestation to improve security. Confidential VM computing technologies include AMD SEV, AMD SEV-SNP, Intel TDX, Arm CCA, IBM Z, IBM LinuxONE, and Nvidia Confidential GPU. Confidential GPUs help protect data and accelerate computing, especially in cloud and shared environments. They use hardware-based encryption and isolation techniques to help protect data while it's being processed on the GPU, ensuring that even the cloud provider or malicious actors cannot access sensitive information. Confidential data analytics, AI, and federated learning use cases The following sections provide examples of confidential computing use cases for various industries. Healthcare and life sciences Confidential computing enables secure data sharing and analysis across organizations while preserving patient privacy. Confidential computing lets healthcare organizations participate in collaborative research, disease modeling, drug discovery, and personalized treatment plans. The following table describes some example uses for confidential computing in healthcare. 
Use case Description Disease prediction and early detection Hospitals train a federated learning model to detect cancerous lesions from medical imaging data (for example, MRI scans or CT scans across multiple hospitals or hospital regions) while maintaining patient confidentiality. Real-time patient monitoring Health care providers analyze data from wearable health devices and mobile health apps for real-time monitoring and alerts. For example, wearable devices collect data on glucose levels, physical activity, and dietary habits to provide personalized recommendations and early warnings for blood sugar fluctuations. Collaborative drug discovery Pharmaceutical companies train models on proprietary datasets to accelerate drug discovery, enhancing collaboration while protecting intellectual property. Financial services Confidential computing lets financial institutions create a more secure and resilient financial system. The following table describes some example uses for confidential computing in financial services. Use case Description Financial crimes Financial institutions can collaborate on anti-money laundering (AML) or general fraud model efforts by sharing information about suspicious transactions while protecting customer privacy. Using confidential computing, institutions can analyze this shared data in a secure manner, and train the models to identify and disrupt complex money laundering schemes more effectively. Privacy-preserving credit risk assessment Lenders can assess credit risk using a wider range of data sources, including data from other financial institutions or even non-financial entities. Using confidential computing, lenders can access and analyze this data without exposing it to unauthorized parties, enhancing the accuracy of credit scoring models while maintaining data privacy. Privacy-preserving pricing discovery In the financial world, especially in areas like over-the-counter markets or illiquid assets, accurate pricing is crucial. Confidential computing lets multiple institutions calculate accurate prices collaboratively, without revealing their sensitive data to each other. Public sector Confidential computing lets governments create more transparent, efficient, and effective services, while retaining control and sovereignty of their data. The following table describes some example uses for confidential computing in the public sector. Use case Description Digital sovereignty Confidential computing ensures that data is always encrypted, even while being processed. It enables secure cloud migrations of citizens' data, with data being protected even when hosted on external infrastructure, across hybrid, public, or multi-cloud environments. Confidential computing supports and empowers digital sovereignty and digital autonomy, with additional data control and protection for data in use so that encryption keys are not accessible by the cloud provider. Multi-agency confidential analytics Confidential computing enables multi-party data analytics across multiple government agencies (for example, health, tax, and education), or across multiple governments in different regions or countries. Confidential computing helps ensure that trust boundaries and data privacy are protected, while enabling data analytics (using data loss prevention (DLP), large-scale analytics, and policy engines) and AI training and serving. Trusted AI Government data is critical and can be used to train private AI models in a trusted way to improve internal services as well as citizen interactions. 
Confidential computing allows for trusted AI frameworks, with confidential prompting or confidential retrieval augmented generation (RAG) training to keep citizen data and models private and secure. Supply chain Confidential computing lets organizations that manage supply chains and sustainability programs collaborate and share insights while maintaining data privacy. The following table describes some example uses for confidential computing in supply chains. Use case Description Demand forecasting and inventory optimization With confidential computing, each business trains its own demand forecasting model on its own sales and inventory data. These models are then securely aggregated into a global model, providing a more accurate and holistic view of demand patterns across the supply chain. Privacy-preserving supplier risk assessment Each organization involved in supplier risk assessment (for example, buyers, financial institutions, and auditors) trains its own risk-assessment model on its own data. These models are aggregated to create a comprehensive and privacy-preserving supplier risk profile, thereby enabling early identification of potential supplier risks, improved supply-chain resilience, and better decision making in supplier selection and management. Carbon footprint tracking and reduction Confidential computing offers a solution for tackling the challenges of data privacy and transparency in carbon footprint tracking and reduction efforts. Confidential computing lets organizations share and analyze data without revealing its raw form, which empowers organizations to make informed decisions and take effective action towards a more sustainable future. Digital advertising Digital advertising has moved away from third-party cookies and towards more privacy-safe alternatives, like Privacy Sandbox. Privacy Sandbox supports critical advertising use cases while limiting cross-site and application tracking. Privacy Sandbox uses TEEs to ensure secure processing of users' data by advertising firms. You can use TEEs in the following digital advertising use cases: Matching algorithms: Finding correspondences or relationships within datasets. Attribution: Linking effects or events back to their likely causes. Aggregation: Calculating summaries or statistics from the raw data. Implement confidential computing on Google Cloud Google Cloud includes the following services that enable confidential computing: Confidential VM: Enable encryption of data in use for workloads that use VMs Confidential GKE: Enable encryption of data in use for workloads that use containers Confidential Dataflow: Enable encryption of data in use for streaming analytics and machine learning Confidential Dataproc: Enable encryption of data in use for data processing Confidential Space: Enable encryption of data in use for joint data analysis and machine learning These services let you reduce your trust boundary so that fewer resources have access to your confidential data. For example, in a Google Cloud environment without Confidential Computing, the trust boundary includes the Google Cloud infrastructure (hardware, hypervisor, and host OS) and the guest OS. In a Google Cloud environment that includes Confidential Computing (without Confidential Space), the trust boundary includes only the guest OS and the application. In a Google Cloud environment with Confidential Space, the trust boundary is just the application and its associated memory space.
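The attestation flow described earlier is ultimately a signature check plus a policy decision: a relying party verifies that the measurement report really came from trusted hardware and that the measured workload is one it expects, and only then releases data into the environment. The following deliberately simplified, vendor-neutral sketch illustrates that decision; it generates its own stand-in signing key and does not use any real attestation service or Google Cloud API.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Stand-in for the hardware vendor's attestation signing key (illustration only).
vendor_key = ed25519.Ed25519PrivateKey.generate()
measurement = b"sha256:aabbcc"            # hypothetical hash of the launched workload
report_signature = vendor_key.sign(measurement)

# Allowlist of workload measurements the relying party is willing to trust.
EXPECTED_MEASUREMENTS = {b"sha256:aabbcc"}

def may_release_data(measurement, signature, vendor_public_key):
    # 1. Authenticity: the report must be signed by the trusted hardware key.
    try:
        vendor_public_key.verify(signature, measurement)
    except InvalidSignature:
        return False
    # 2. Policy: the measured workload must match an expected value.
    return measurement in EXPECTED_MEASUREMENTS

print(may_release_data(measurement, report_signature, vendor_key.public_key()))  # True

In the architectures later in this document, this is the role played by the step in which data is only released to the TEE after the attestation claims are validated.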
The following table shows how the trust boundary is reduced with Confidential Computing and Confidential Space.
Elements | Within trust boundary without using Confidential Computing | Within trust boundary when using Confidential Computing | Within trust boundary when using Confidential Space
Cloud stack and administrators | Yes | No | No
BIOS and firmware | Yes | No | No
Host OS and hypervisor | Yes | No | No
VM guest admin | Yes | Yes | No
VM guest OS | Yes | Yes | Yes, measured and attested
Applications | Yes | Yes | Yes, measured and attested
Confidential data | Yes | Yes | Yes
Confidential Space creates a secure area within a VM to provide the highest level of isolation and protection for sensitive data and applications. The main security benefits of Confidential Space include the following: Defense in depth: Adds an extra layer of security on top of existing confidential computing technologies. Reduced attack surface: Isolates applications from potential vulnerabilities in the guest OS. Enhanced control: Provides granular control over access and permissions within the secure environment. Stronger trust: Offers higher assurance of data confidentiality and integrity. Confidential Space is designed for handling highly sensitive workloads, especially in regulated industries or scenarios involving multi-party collaborations where data privacy is paramount. Architecture references for confidential analytics, AI, and federated learning You can implement confidential computing on Google Cloud to address the following use cases: Confidential analytics Confidential AI Confidential federated learning The following sections provide more information about the architecture for these use cases, including examples for financial and healthcare businesses. Confidential analytics architecture for healthcare institutions The confidential analytics architecture demonstrates how multiple healthcare institutions (such as providers, biopharmaceutical, and research institutions) can work together to accelerate drug research. This architecture uses confidential computing techniques to create a digital clean room for running confidential collaborative analytics. This architecture has the following benefits: Enhanced insights: Collaborative analytics lets health organizations gain broader insights and decrease time to market for enhanced drug discovery. Data privacy: Sensitive transaction data remains encrypted and is never exposed to other participants or the TEE, ensuring confidentiality. Regulatory compliance: The architecture helps health institutions comply with data protection regulations by maintaining strict control over their data. Trust and collaboration: The architecture enables secure collaboration between competing institutions, fostering a collective effort to discover drugs. The following diagram shows this architecture. The key components in this architecture include the following: TEE OLAP aggregation server: A secure, isolated environment where machine learning model training and inference occur. Data and code within the TEE are protected from unauthorized access, even from the underlying operating system or cloud provider. Collaboration partners: Each participating health institution has a local environment that acts as an intermediary between the institution's private data and the TEE. Provider-specific encrypted data: Each healthcare institution stores its own private, encrypted patient data that includes electronic health records. This data remains encrypted during the analytics process, which ensures data privacy.
The data is only released to the TEE after validating the attestation claims from the individual providers. Analytics client: Participating health institutions can run confidential queries against their data to gain immediate insights. Confidential AI architecture for financial institutions This architectural pattern demonstrates how financial institutions can collaboratively train a fraud detection model on transaction data that includes fraud labels, while preserving the confidentiality of that sensitive data. The architecture uses confidential computing techniques to enable secure, multi-party machine learning. This architecture has the following benefits: Enhanced fraud detection: Collaborative training uses a larger, more diverse dataset, leading to a more accurate and effective fraud detection model. Data privacy: Sensitive transaction data remains encrypted and is never exposed to other participants or the TEE, ensuring confidentiality. Regulatory compliance: The architecture helps financial institutions comply with data protection regulations by maintaining strict control over their data. Trust and collaboration: This architecture enables secure collaboration between competing institutions, fostering a collective effort to combat financial fraud. The following diagram shows this architecture. The key components of this architecture include the following: TEE OLAP aggregation server: A secure, isolated environment where machine learning model training and inference occur. Data and code within the TEE are protected from unauthorized access, even from the underlying operating system or cloud provider. TEE model training: The global fraud base model is packaged in containers to run the ML training. Within the TEE, the global model is further trained using the encrypted data from all participating banks. The training process employs techniques like federated learning or secure multi-party computation to ensure that no raw data is exposed. Collaboration partners: Each participating financial institution has a local environment that acts as an intermediary between the institution's private data and the TEE. Bank-specific encrypted data: Each bank holds its own private, encrypted transaction data that includes fraud labels. This data remains encrypted throughout the entire process, ensuring data privacy. The data is only released to the TEE after validating the attestation claims from individual banks. Model repository: A pre-trained fraud detection model that serves as the starting point for collaborative training. Global fraud-trained model and weights (symbolized by the green line): The improved fraud detection model, along with its learned weights, is securely exchanged back to the participating banks. They can then deploy this enhanced model locally for fraud detection on their own transactions. Confidential federated learning architecture for financial institutions Federated learning offers an advanced solution for customers who value stringent data privacy and data sovereignty. The confidential federated learning architecture provides a secure, scalable, and efficient way to use data for AI applications. This architecture brings the models to the location where the data is stored, rather than centralizing the data in a single location, thereby reducing the risks associated with data leakage.
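The aggregation at the heart of this pattern (and of the secure aggregation protocol described below) is an element-wise weighted average of the clients' model updates: the server never needs the banks' transaction data, only their updates and example counts. A minimal sketch of that arithmetic, with made-up weight vectors and counts, follows; the cryptographic protocol, differential privacy, and TEE protections discussed in this section are intentionally omitted.

import numpy as np

# Hypothetical updates from one round of local training at each bank.
client_updates = {
    "bank_a": np.array([0.21, -0.40, 0.05]),
    "bank_b": np.array([0.18, -0.35, 0.09]),
    "bank_c": np.array([0.25, -0.42, 0.02]),
}
# Number of local training examples, used to weight each contribution.
client_examples = {"bank_a": 120_000, "bank_b": 80_000, "bank_c": 200_000}

total_examples = sum(client_examples.values())
global_update = sum(
    (client_examples[name] / total_examples) * update
    for name, update in client_updates.items()
)
print(global_update)  # element-wise weighted average, sent back to every bank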
This architectural pattern demonstrates how multiple financial institutions can collaboratively train a fraud detection model while preserving the confidentiality of their sensitive transaction data, which includes fraud labels. It uses federated learning along with confidential computing techniques to enable secure, multi-party machine learning without moving the training data. This architecture has the following benefits: Enhanced data privacy and security: Federated learning enables data privacy and data locality by ensuring that sensitive data remains at each site. Additionally, financial institutions can use privacy-preserving techniques such as homomorphic encryption and differential privacy filters to further protect any transferred data (such as the model weights). Improved accuracy and diversity: By training with a variety of data sources across different clients, financial institutions can develop a robust and generalizable global model to better represent heterogeneous datasets. Scalability and network efficiency: With the ability to perform training at the edge, institutions can scale federated learning across the globe. Additionally, institutions only need to transfer the model weights rather than entire datasets, which enables efficient use of network resources. The following diagram shows this architecture. The key components of this architecture include the following: Federated server in the TEE cluster: A secure, isolated environment where the federated learning server orchestrates the collaboration of multiple clients by first sending an initial model to the federated learning clients. The clients perform training on their local datasets, then send the model updates back to the federated learning server for aggregation to form a global model. Federated learning model repository: A pre-trained fraud detection model that serves as the starting point for federated learning. Local application inference engine: An application that executes tasks, performs local computation and learning with local datasets, and submits results back to the federated learning server for secure aggregation. Local private data: Each bank holds its own private, encrypted transaction data that includes fraud labels. This data remains encrypted throughout the entire process, ensuring data privacy. Secure aggregation protocol (symbolized by the dotted blue line): The federated learning server doesn't need to access any individual bank's update to train the model; it requires only the element-wise weighted averages of the update vectors, taken from a random subset of banks or sites. Using a secure aggregation protocol to compute these weighted averages helps ensure that the server learns only the aggregated update from this randomly selected subset, and not the individual contribution of any single bank, thereby preserving the privacy of each participant in the federated learning process. Global fraud-trained model and aggregated weights (symbolized by the green line): The improved fraud detection model, along with its learned weights, is securely sent back to the participating banks. The banks can then deploy this enhanced model locally for fraud detection on their own transactions. What's next Read Confidential AI: Intel seeks to overcome AI's data protection problem. Read The Present and Future of Confidential Computing. View Enabling secure multi-party collaboration with confidential computing by Keith Moyer (Google) | OC3 (YouTube). View What's new in confidential computing? (YouTube).
Implement Confidential Computing and Confidential Space in your environment. Learn more about the fundamentals of Confidential Computing on Google Cloud. Learn more about enabling more private generative AI. Contributors Arun Santhanagopalan | Head of Technology and Incubation, Google Cloud Pablo Rodriguez | Technical Director, Office of CTO Vineet Dave | Head of Technology and Incubation, Google Cloud Send feedback \ No newline at end of file diff --git a/Config_Connector.txt b/Config_Connector.txt new file mode 100644 index 0000000000000000000000000000000000000000..5db5e51636cd34693822c1fb27db8cfdaa073e8e --- /dev/null +++ b/Config_Connector.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/config-connector/docs/overview +Date Scraped: 2025-02-23T12:05:53.810Z + +Content: +Home Config Connector Documentation Guides Send feedback Stay organized with collections Save and categorize content based on your preferences. Config Connector overview Standard Config Connector is an open source Kubernetes add-on that lets you manage Google Cloud resources through Kubernetes. Many cloud-native development teams work with a mix of configuration systems, APIs, and tools to manage their infrastructure. This mix is often difficult to understand, leading to reduced velocity and expensive mistakes. Config Connector provides a method to configure many Google Cloud services and resources using Kubernetes tooling and APIs. With Config Connector, your environments can use Kubernetes-managed Resources including: RBAC for access control. Events for visibility. Single source of configuration and desired state management for reduced complexity. Eventual consistency for loosely coupling dependencies. You can manage your Google Cloud infrastructure the same way you manage your Kubernetes applications, reducing the complexity and cognitive load for developers. How Config Connector works Config Connector provides a collection of Kubernetes Custom Resource Definitions (CRDs) and controllers. The Config Connector CRDs allow Kubernetes to create and manage Google Cloud resources when you configure and apply Objects to your cluster. For Config Connector CRDs to function correctly, Config Connector deploys Pods to your nodes that have elevated RBAC permissions, such as the ability to create, delete, get, and list CustomResourceDefinitions (CRDs). These permissions are required for Config Connector to create and reconcile Kubernetes resources. To get started, install Config Connector and create your first resource. Config Connector's controllers eventually reconcile your environment with your desired state. Customizing Config Connector's behavior Config Connector provides additional features beyond creating resources. For example, you can manage existing Google Cloud resources, and use Kubernetes Secrets to provide sensitive data, such as passwords, to your resources. For more information, see the list of how-to guides. In addition, you can learn more about how Config Connector uses Kubernetes constructs to manage Resources and see the Google Cloud resources Config Connector can manage. What's next Install Config Connector. Get started by creating your first resource. Explore Config Connector source code. Config Connector is fully open sourced on GitHub. 
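Because Config Connector resources are ordinary Kubernetes objects, declaring a Google Cloud resource means writing a manifest and applying it to the cluster, typically with kubectl apply. Purely as an illustration, and assuming Config Connector is installed, its StorageBucket CRD is available, and the target namespace is configured for your project, the same declaration made through the Kubernetes Python client might look like the following sketch.

from kubernetes import config, dynamic
from kubernetes.client import api_client

# Connect using the local kubeconfig (the cluster must have Config Connector installed).
client = dynamic.DynamicClient(
    api_client.ApiClient(configuration=config.load_kube_config())
)

# A Config Connector resource describing a Cloud Storage bucket (illustrative names).
bucket_manifest = {
    "apiVersion": "storage.cnrm.cloud.google.com/v1beta1",
    "kind": "StorageBucket",
    "metadata": {"name": "example-bucket-name", "namespace": "example-namespace"},
    "spec": {"location": "US"},
}

buckets = client.resources.get(
    api_version="storage.cnrm.cloud.google.com/v1beta1", kind="StorageBucket"
)
buckets.create(body=bucket_manifest, namespace="example-namespace")
# Config Connector's controllers then reconcile this object into an actual
# Cloud Storage bucket and keep it at the declared desired state.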
Send feedback \ No newline at end of file diff --git a/Configure_Active_Directory_for_VMs_to_automatically_join_a_domain.txt b/Configure_Active_Directory_for_VMs_to_automatically_join_a_domain.txt new file mode 100644 index 0000000000000000000000000000000000000000..1f9221812d040b65b7d90bc40226f4cc1990ccb3 --- /dev/null +++ b/Configure_Active_Directory_for_VMs_to_automatically_join_a_domain.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/configuring-active-directory-for-vms-to-automatically-join-the-domain +Date Scraped: 2025-02-23T11:51:18.633Z + +Content: +Home Docs Cloud Architecture Center Send feedback Configure Active Directory for VMs to automatically join a domain Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This document shows you how to configure Active Directory and Compute Engine so that Windows virtual machine (VM) instances can automatically join an Active Directory domain. Automating the process of joining Windows VMs to Active Directory helps you simplify the process of provisioning Windows servers. The approach also lets you take advantage of autoscaling without sacrificing the benefits of using Active Directory to manage access and configuration. This document is intended for system administrators and assumes that you are familiar with Active Directory and Google Cloud networking. The configuration that you create by following the procedure in this document can be the basis of additional work that you do with Windows Servers in Google Cloud. For example, after you've finished this procedure, you can deploy ASP.NET applications with Windows Authentication in Windows containers. If you're using Managed Microsoft AD and don't require automatic cleanup of stale computer accounts, consider joining the Windows VMs using the automated domain join feature. For more information, see Join a Windows VM automatically to a domain. Objectives Deploy a Cloud Run app that enables VM instances from selected projects to automatically join your Active Directory domain. Create a Cloud Scheduler job that periodically scans your Active Directory domain for stale computer accounts and removes them. Test the setup by creating an autoscaled managed instance group (MIGs) of domain-joined VM instances. Costs In this document, you use the following billable components of Google Cloud: Compute Engine Serverless VPC Access Cloud Run Artifact Registry Secret Manager Cloud Scheduler The instructions in this document are designed to keep your resource usage within the limits of Google Cloud's Always Free tier. To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish this document, you can avoid continued billing by deleting the resources you created. For more information, see Clean up. Before you begin This document assumes that you've already deployed Active Directory on Google Cloud by using Managed Service for Microsoft Active Directory (Managed Microsoft AD) or by deploying self-managed domain controllers on Google Cloud. To complete the procedures, ensure that you have the following: Administrative access to your Active Directory domain, including the ability to create users, groups, and organizational units (OUs). An unused /28 CIDR IP range in the VPC that your Active Directory domain controllers are deployed in. You use this IP range to configure Serverless VPC Access. 
A subnet into which you deploy Windows instances. The subnet must be configured to use Private Google Access. Note: Private Google Access is required so that all VM instances in the subnet can access Cloud Run. If you don't enable Private Google Access, only VM instances with internet access will be able to automatically join your Active Directory domain. If you use a self-managed domain controller, you also need the following: A private DNS forwarding zone that forwards DNS queries to your Active Directory domain controllers. Implementing this approach In an on-premises environment, you might rely on answer files (unattend.xml) and the JoinDomain customization to automatically join new computers to a domain. Although you can use the same process in Google Cloud, this approach has several limitations: Using a customized unattend.xml file requires that you maintain a custom Compute Engine image. Keeping a custom image current using Windows Updates requires either ongoing maintenance or initial work to set up automation. Unless you need to maintain a custom image for other reasons, this extra effort might not be justified. Using the JoinDomain customization ties an image to a single Active Directory domain because the domain name must be specified in unattend.xml. If you maintain multiple Active Directory domains or forests (for example, for separate testing and production environments), then you might need to maintain multiple custom images for each domain. Joining a Windows computer to a domain requires user credentials that have permissions to create a computer object in the directory. If you use the JoinDomain customization in unattend.xml, you must embed these credentials as plaintext in unattend.xml. These embedded credentials can turn the image into a potential target for attackers. Although you can control access to the image by setting appropriate Identity and Access Management (IAM) permissions, managing access to a custom image adds unnecessary complexity. The approach in this document doesn't use answer files and therefore does not require specially prepared images. Instead, you use the following sysprep specialize scriptlet when you create a VM instance: iex((New-Object System.Net.WebClient).DownloadString('https://DOMAIN')) This sysprep specialize scriptlet initiates a process that the following diagram illustrates. The process works as follows: After a VM instance is created, Windows boots for the first time. As part of the specialize configuration pass, Windows runs the sysprep specialize scriptlet. The specialize scriptlet invokes the register-computer Cloud Run app and downloads a PowerShell script that controls the domain joining process. Windows invokes the downloaded PowerShell script. The PowerShell script calls the metadata server to obtain an ID token that securely identifies the VM instance. The script calls the register-computer app again, passing the ID token to authenticate itself. The app validates the ID token and extracts the name, zone, and Google Cloud project ID of the VM instance. The app verifies that the Active Directory domain is configured to permit VM instances from the given project to join the domain. To complete this verification, the app locates and connects to an Active Directory domain controller to check for an organizational unit (OU) whose name matches the Google Cloud project ID from the ID token. If a matching OU is found, then VM instances of the project are authorized to join the Active Directory domain in the given OU. 
The app verifies that the Google Cloud project is configured to allow VM instances to join Active Directory. To complete this verification, the app checks whether it can access the VM instance by using the Compute Engine API. If all checks pass successfully, the app prestages a computer account in Active Directory. The app saves the VM instance's name, zone, and ID as attributes in the computer account object so that it can be associated with the VM instance. Using the Kerberos set password protocol, the app then assigns a random password to the computer account. The computer name and password are returned to the Windows instance over a TLS-secured channel. Using the prestaged computer account, the PowerShell script joins the computer to the domain. After the specialize configuration pass is complete, the machine reboots itself. The remainder of this procedure walks you through the steps that are required to set up automated domain joining. Preparing the Active Directory domain First, you prepare your Active Directory domain. To complete this step, you need a machine that has administrative access to your Active Directory domain. Optional: Limit who can join computers to the domain You might want to restrict who can join computers to the domain. By default, the Group Policy Object (GPO) configuration for the Default Domain Controller Policy grants the Add workstations to domain user right to all authenticated users. Anyone with that user right can join computers to the domain. Because you are automating the process of joining computers to your Active Directory domain, universally granting this level of access is an unnecessary security risk. Note: If you use Managed Microsoft AD, only members of the Cloud Service Domain Join Accounts group are allowed to join computers to your domain by default. To limit who can join computers to your Active Directory domain, change the default configuration of the Default Domain Controller Policy GPO: Using an RDP client, sign in to a machine that has administrative access to your Active Directory domain. Open the Group Policy Management Console (GPMC). Go to Forest > Domains > domain-name > Group Policy Objects, where domain-name is the name of your Active Directory domain. Right-click Default Domain Controller Policy and click Edit. In the Group Policy Management Editor console, go to Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > User Rights Assignment. Double-click Add workstations to domain. In Properties, remove Authenticated Users from the list. To let administrators join the domain manually (optional), click Add user or group, and then add an administrative group to the list. Click OK. You can now close the Group Policy Management Editor console and GPMC. Initialize a directory structure You now create an OU that serves as a container for all project-specific OUs: Using an RDP client, sign in to a machine that has administrative access to your Active Directory domain. Open an elevated PowerShell session. Create a new organizational unit: $ParentOrgUnitPath = (Get-ADDomain).ComputersContainer $ProjectsOrgUnitPath = New-ADOrganizationalUnit ` -Name 'Projects' ` -Path $ParentOrgUnitPath ` -PassThru Note: Adjust the value of $ParentOrgUnitPath to create the OU in a location other than the default container. 
Create an Active Directory user account To access Active Directory and prestage computer accounts, the register-computer app needs an Active Directory user account: Create an Active Directory user account named register-computer, assign it a random password, and then place it in the Projects OU: # Generate a random password $Password = [Guid]::NewGuid().ToString()+"-"+[Guid]::NewGuid().ToString() # Create user $UpnSuffix = (Get-ADDomain).DNSRoot $RegisterComputerUser = New-ADUser ` -Name "register-computer Cloud Run app" ` -GivenName "Register" ` -Surname "Computer" ` -Path $ProjectsOrgUnitPath ` -SamAccountName "register-computer" ` -UserPrincipalName "register-computer@$UpnSuffix" ` -AccountPassword (ConvertTo-SecureString "$Password" -AsPlainText -Force) ` -PasswordNeverExpires $True ` -Enabled $True ` -PassThru Grant the register-computer account the minimum set of permissions needed to manage computer accounts and groups in the Projects OU and sub-OUs: $AcesForContainerAndDescendents = @( "CCDC;Computer", # Create/delete computers "CCDC;Group" # Create/delete users ) $AcesForDescendents = @( "LC;;Computer" , # List child objects "RC;;Computer" , # Read security information "WD;;Computer" , # Change security information "WP;;Computer" , # Write properties "RP;;Computer" , # Read properties "CA;Reset Password;Computer", # ... "CA;Change Password;Computer", # ... "WS;Validated write to service principal name;Computer", "WS;Validated write to DNS host name;Computer", "LC;;Group", # List child objects "RC;;Group", # Read security information "WD;;Group", # Change security information "WP;;Group", # Write properties "RP;;Group" # Read properties ) $AcesForContainerAndDescendents | % { dsacls.exe $ProjectsOrgUnitPath /G "${RegisterComputerUser}:${_}" /I:T | Out-Null } $AcesForDescendents | % { dsacls.exe $ProjectsOrgUnitPath /G "${RegisterComputerUser}:${_}" /I:S | Out-Null } The command might take a few minutes to complete. Grant the register-computer account permission to delete DNS records: Managed Microsoft AD Add-ADGroupMember -Identity "Cloud Service DNS Administrators" -Members ${RegisterComputerUser} Self-managed domain $DnsPartition=(Get-ADDomain).SubordinateReferences | Where-Object {$_.StartsWith('DC=DomainDnsZones')} $DnsContainer="DC=$((Get-ADDomain).DNSRoot),CN=MicrosoftDNS,$DnsPartition" dsacls $DnsContainer /G "${RegisterComputerUser}:SD" /I:S Reveal the Projects OU path and the generated password of the register-computer Active Directory user account. Note the values because you will need them later. Write-Host "Password: $Password" Write-Host "Projects OU: $ProjectsOrgUnitPath" Preparing the Google Cloud project You now configure your domain project: If you use Managed Microsoft AD, your domain project is the project in which you deployed Managed Microsoft AD. If you use self-managed Active Directory, your domain project is the project that runs your Active Directory domain controllers. In the case of a Shared VPC, this project must be the same as the VPC host project. You use this domain project to do the following: Create a Secret Manager secret that contains the password of the register-computer Active Directory user account. Deploy the register-computer app. Configure Cloud Scheduler so that it triggers cleanups of stale computer accounts. We recommend that you grant access to the domain project on a least-privilege basis. Create a Secret Manager secret In the Google Cloud console, open Cloud Shell. 
Open Cloud Shell Launch PowerShell: pwsh Initialize the following variable, replacing domain-project-id with the ID of your domain project: $DomainProjectId = "domain-project-id" Set the domain project as the default project: & gcloud config set project $DomainProjectId Enable the Secret Manager API: & gcloud services enable secretmanager.googleapis.com Enter the password of the register-computer Active Directory user account and store it in a Secret Manager secret: $RegisterComputerCredential = (Get-Credential -Credential 'register-computer') $TempFile = New-TemporaryFile Set-Content $TempFile $($RegisterComputerCredential.GetNetworkCredential().Password) -NoNewLine & gcloud secrets create ad-password --data-file $TempFile Remove-Item $TempFile Grant access to Kerberos and LDAP To manage domain joins, the register-computer app accesses your domain controllers by using the following protocols: LDAP (TCP/389) or LDAPS (TCP/636) Kerberos (UDP/88, TCP/88) Kerberos password change (UDP/464, TCP/464) Managed Microsoft AD You don't need to configure any firewall rules. Self-managed domain Create a firewall rule that allows access to your domain controllers. You can apply the rule based on a network tag that you have assigned to your domain controllers, or you can apply it by using a service account. By network tag & gcloud compute firewall-rules create allow-adkrb-from-serverless-to-dc ` --direction INGRESS ` --action allow ` --rules udp:88,tcp:88,tcp:389,tcp:636,udp:464,tcp:464 ` --source-ranges $ServerlessIpRange ` --target-tags dc-tag ` --network $VpcName ` --project vpc-project-id ` --priority 10000 Replace the following: dc-tag: The network tag assigned to your domain controller VMs. vpc-project-id: The ID of the project the VPC is defined in. If you use a Shared VPC, use the VPC host project; otherwise, use the ID of the domain project. By service account & gcloud compute firewall-rules create allow-adkrb-from-serverless-to-dc ` --direction INGRESS ` --action allow ` --rules udp:88,tcp:88,tcp:389,tcp:636,udp:464,tcp:464 ` --source-ranges $ServerlessIpRange ` --target-service-accounts dc-sa ` --network $VpcName ` --project vpc-project-id ` --priority 10000 Replace the following: dc-sa: The email address of the service account that your domain controller VMs use. vpc-project-id: The ID of the project the VPC is defined in. If you use a Shared VPC, use the VPC host project; otherwise, use the ID of the domain project. Deploy the Cloud Run app You now set up Cloud Build to deploy the register-computer app to Cloud Run: In Cloud Shell, clone the GitHub repository: & git clone https://github.com/GoogleCloudPlatform/gce-automated-ad-join.git cd gce-automated-ad-join/ad-joining Initialize the following variables: $ServerlessRegion = "serverless-region" $VpcName = "vpc-name" $VpcSubnet = "subnet-name" $AdDomain = "dns-domain-name" $AdNetbiosDomain = "netbios-domain-name" $ProjectsOrgUnitPath = "projects-ou-distinguished-name" Replace the following: serverless-region: The region to deploy the register-computer app in. The region does not have to be the same region as the one you plan to deploy VM instances in. vpc-name: The name of the VPC network that contains your Active Directory domain controllers. subnet-name: The subnet of vpc-name to use for direct VPC access. The subnet must be in the same region as serverless-region. dns-domain-name: The DNS domain name of your Active Directory domain. netbios-domain-name: The NetBIOS name of your Active Directory domain. 
projects-ou-distinguished-name: The distinguished name of your Projects OU. Enable the Cloud Run and Cloud Build APIs: & gcloud services enable run.googleapis.com cloudbuild.googleapis.com Create a service account register-computer-app for the Cloud Run app: & gcloud iam service-accounts create register-computer-app ` --display-name="register computer Cloud Run app" Create a service account build-service for running Cloud Build triggers: & gcloud iam service-accounts create build-service ` --display-name="Cloud Build build agent" Allow the Cloud Run service account to read the secret that contains the Active Directory password: & gcloud secrets add-iam-policy-binding ad-password ` --member "serviceAccount:register-computer-app@$DomainProjectId.iam.gserviceaccount.com" ` --role "roles/secretmanager.secretAccessor" Grant Cloud Build the necessary permissions to deploy to Cloud Run: $DomainProjectNumber = (gcloud projects describe $DomainProjectId --format='value(projectNumber)') & gcloud iam service-accounts add-iam-policy-binding register-computer-app@$DomainProjectId.iam.gserviceaccount.com ` --member "serviceAccount:build-service@$DomainProjectId.iam.gserviceaccount.com" ` --role "roles/iam.serviceAccountUser" & gcloud projects add-iam-policy-binding $DomainProjectId ` --member "serviceAccount:build-service@$DomainProjectId.iam.gserviceaccount.com" ` --role roles/cloudbuild.builds.builder & gcloud projects add-iam-policy-binding $DomainProjectId ` --member "serviceAccount:build-service@$DomainProjectId.iam.gserviceaccount.com" ` --role roles/run.admin Use the file cloudbuild.yaml as a template to create a custom Cloud Run build config that matches your environment: $Build = (Get-Content cloudbuild.yaml) $Build = $Build.Replace('__SERVERLESS_REGION__', "$ServerlessRegion") $Build = $Build.Replace('__PROJECTS_DN__', "$ProjectsOrgUnitPath") $Build = $Build.Replace('__AD_DOMAIN__', "$AdDomain") $Build = $Build.Replace('__AD_NETBIOS_DOMAIN__', "$AdNetbiosDomain") $Build = $Build.Replace('__SERVICE_ACCOUNT_EMAIL__', "register-computer-app@$DomainProjectId.iam.gserviceaccount.com") $Build = $Build.Replace('__SERVERLESS_NETWORK__', "$VpcName") $Build = $Build.Replace('__SERVERLESS_SUBNET__', "$VpcSubnet") $Build | Set-Content .\cloudbuild.hydrated.yaml Note: The app provides additional configurable settings, such as using LDAPS instead of LDAP. For more information, see the app's README.md file. To apply additional configurations, edit the cloudbuild.yaml file and add the required environment variables to the cloud-sdk step arguments. Build the app and deploy it to Cloud Run: & gcloud builds submit . ` --config cloudbuild.hydrated.yaml ` --substitutions _IMAGE_TAG=$(git rev-parse --short HEAD) ` --service-account "projects/$DomainProjectId/serviceAccounts/build-service@$DomainProjectId.iam.gserviceaccount.com" ` --default-buckets-behavior regional-user-owned-bucket The deployment can take a couple of minutes to complete. Note: If you keep the file cloudbuild.hydrated.yaml, you can later redeploy the app by re-running the preceding command. Determine the URL of the Cloud Run app: $RegisterUrl = (gcloud run services describe register-computer ` --platform managed ` --region $ServerlessRegion ` --format=value`(status.url`)) Write-Host $RegisterUrl Note the URL. You will need it whenever you create a VM instance that should be joined to Active Directory. Invoke the Cloud Run app to verify that the deployment worked: Invoke-RestMethod $RegisterUrl A PowerShell script displays. 
The VM runs this script during the specialize phase that joins it to the domain. Enabling a project for automatic domain joining The register-computer app does not allow VM instances to join an Active Directory domain unless the VM's project is enabled for automatic domain joining. This security measure helps prevent VMs that are connected to unauthorized projects from accessing your domain. Note: If you deployed Active Directory into a Shared VPC, we recommend that you create a project for testing purposes and attach the project to the Shared VPC. Deploying Serverless VPC Access creates a subnet in your VPC. You don't need to share this subnet when you attach service projects to the VPC host project. To enable a project for automatic domain joining, you must do the following: Create an OU in Active Directory whose name matches your Google Cloud project ID. Grant the register-computer app access to the Google Cloud project. First, create the OU: Using an RDP client, sign in to a machine that has administrative access to your Active Directory domain. In the Active Directory Users and Computers MMC snap-in, go to the Projects OU. Right-click the OU and select New > Organizational Unit. In the New Object dialog, enter the ID for the Google Cloud project to deploy your VMs in. Click OK. Next, grant the register-computer app access to the Google Cloud project: In Cloud Shell, launch PowerShell: pwsh Initialize the following variables: $ProjectId = "project-id" $DomainProjectId = "domain-project-id" Replace project-id with the ID of the Google Cloud project to deploy your VMs in domain-project-id with the ID of your domain project Grant the register-computer-app service account the Compute Viewer role on the project: & gcloud projects add-iam-policy-binding $ProjectId ` --member "serviceAccount:register-computer-app@$DomainProjectId.iam.gserviceaccount.com" ` --role "roles/compute.viewer" Your project is now ready to support automatic domain joining. Testing domain joining You can now verify that the setup is working correctly by: Creating a single VM instance that automatically joins the Active Directory domain Creating a managed instance group of VM instances that automatically join the Active Directory domain Create and join a single VM instance Create a VM instance that automatically joins the Active Directory domain: Return to the PowerShell session in Cloud Shell and initialize the following variables: $Region = "vpc-region-to-deploy-vm" $Zone = "zone-to-deploy-vm" $Subnet = "vpc-subnet-to-deploy-vm" $ServerlessRegion = "serverless-region" Replace the following: vpc-region-to-deploy-vm: The region to deploy the VM instance in. vpc-subnet-to-deploy-vm: The subnet to deploy the VM instance in. zone-to-deploy-vm: The zone to deploy the VM instance in. serverless-region: The region you deployed the Cloud Run app in. 
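Before you create the test VM, you can optionally confirm that the register-computer-app service account holds the Compute Viewer role on the project. The following is a minimal check that uses gcloud's IAM policy filtering, assuming $ProjectId and $DomainProjectId are still set in your PowerShell session:
& gcloud projects get-iam-policy $ProjectId `
    --flatten="bindings[].members" `
    --filter="bindings.members:register-computer-app@$DomainProjectId.iam.gserviceaccount.com" `
    --format="table(bindings.role)"
The output should include roles/compute.viewer.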
Set the default project and zone: & gcloud config set project $ProjectId & gcloud config set compute/zone $Zone Lookup the URL of the Cloud Run app again: $RegisterUrl = (gcloud run services describe register-computer ` --platform managed ` --region $ServerlessRegion ` --format value`(status.url`) ` --project $DomainProjectId) Create an instance by passing the specialize scriptlet that causes the VM to join the domain: Shared VPC $VpchostProjectId = (gcloud compute shared-vpc get-host-project $ProjectId --format=value`(name`)) & gcloud compute instances create join-01 ` --image-family windows-2019-core ` --image-project windows-cloud ` --machine-type n1-standard-2 ` --no-address ` --subnet projects/$VpchostProjectId/regions/$Region/subnetworks/$Subnet ` --metadata "sysprep-specialize-script-ps1=iex((New-Object System.Net.WebClient).DownloadString('$RegisterUrl'))" Standalone VPC & gcloud compute instances create join-01 ` --image-family=windows-2019-core ` --image-project=windows-cloud ` --machine-type=n1-standard-2 ` --no-address ` --subnet $Subnet ` --metadata "sysprep-specialize-script-ps1=iex((New-Object System.Net.WebClient).DownloadString('$RegisterUrl'))" If you want to use a custom hostname, add a --hostname parameter to the command. If you use a Windows Server version prior to Windows Server 2019, TLS 1.2 might be disabled by default, which can cause the specialize scriptlet to fail. To enable TLS 1.2, use the following scriptlet instead: [Net.ServicePointManager]::SecurityProtocol=[Net.SecurityProtocolType]::Tls12;iex((New-Object System.Net.WebClient).DownloadString('$RegisterUrl')) Note: NetBIOS only supports computer names of up to 15 characters. If the hostname of a VM instance exceeds this maximum length, the specialize scriptlet automatically changes the computer name to a name that is unique and has less than 15 characters. When you create additional VM instances, make sure to keep the instance name under this character limit; otherwise, joining a VM instance to your domain is likely to fail. Monitor the boot process: & gcloud compute instances tail-serial-port-output join-01 After about one minute, the machine is joined to your Active Directory domain. The output is similar to the following: Domain : corp.example.com DomainController : dc-01.corp.example.com. OrgUnitPath : OU=test-project-123,OU=Projects,DC=corp,DC=example,DC=com WARNING: The changes will take effect after you restart the computer Computer successfully joined to domain To stop observing the boot process, press CTRL+C. Verify that the VM is joined to Active Directory Using an RDP client, sign in to a machine that has administrative access to your Active Directory domain. Open the Active Directory Users and Computers MMC snap-in. In the menu, ensure that View > Advanced Features is enabled. Go to the OU named after the Google Cloud project ID that you created a VM instance in. Double-click the join-01 account. In the Properties dialog, click the Attribute Editor tab. The computer account is annotated with additional LDAP attributes. These attributes let you track the association between the computer object and the Compute Engine instance. Verify that the list contains the following LDAP attributes and values. LDAP attribute Value msDS-cloudExtensionAttribute1 Google Cloud project ID msDS-cloudExtensionAttribute2 Compute Engine zone msDS-cloudExtensionAttribute3 Compute Engine instance name The msDS-cloudExtensionAttribute attributes are general-purpose attributes and are not used by Active Directory itself. 
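If you prefer to check these attributes from PowerShell rather than the Attribute Editor, you can query the computer account directly. The following is a minimal sketch, assuming you run it on a domain-joined admin machine that has the ActiveDirectory RSAT module installed:
Get-ADComputer -Identity join-01 `
    -Properties msDS-cloudExtensionAttribute1,msDS-cloudExtensionAttribute2,msDS-cloudExtensionAttribute3
The project ID, zone, and instance name listed in the preceding table appear in the returned attributes.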
Diagnose errors If your VM instance failed to join the domain, check the log of the register-computer app: In the Google Cloud console, go to Cloud Run. Go to Cloud Run Click the register-computer app. In the menu, click Logs. Delete the instance After you verify that the VM instance is joined to the Active Directory domain, you delete the instance. Delete the instance: & gcloud compute instances delete join-01 --quiet Create and join a managed instance group You can also verify that instances from a MIG can automatically join your domain. Create an instance template by passing the specialize script that causes the VM to join the domain: Shared VPC $VpchostProjectId = (gcloud compute shared-vpc get-host-project $ProjectId --format=value`(name`)) & gcloud compute instance-templates create ad-2019core-n1-std-2 ` --image-family windows-2019-core ` --image-project windows-cloud ` --no-address ` --machine-type n1-standard-2 ` --subnet projects/$VpchostProjectId/regions/$Region/subnetworks/$Subnet ` --metadata "sysprep-specialize-script-ps1=iex((New-Object System.Net.WebClient).DownloadString('$RegisterUrl'))" Standalone VPC & gcloud compute instance-templates create ad-2019core-n1-std-2 ` --image-family windows-2019-core ` --image-project windows-cloud ` --no-address ` --machine-type n1-standard-2 ` --subnet projects/$ProjectId/regions/$Region/subnetworks/$Subnet ` --metadata "sysprep-specialize-script-ps1=iex((New-Object System.Net.WebClient).DownloadString('$RegisterUrl'))" Create a managed instance group that uses the instance template: & gcloud compute instance-groups managed create group-01 ` --template ad-2019core-n1-std-2 ` --size=3 Wait a few minutes, and then use the Active Directory Users and Computers MMC snap-in to verify that four new objects have been created in Active Directory: 3 computer accounts corresponding to the 3 VM instances of the managed instance group. 1 group named group-01 that contains the 3 computer accounts. If you plan to use group managed service accounts (gMSA), you can use this group to grant access to the gMSA. Note: NetBIOS only supports computer names of up to 15 characters. If the hostname of a VM instance exceeds this maximum length, the specialize scriptlet automatically changes the computer name to a name that is unique and has less than 15 characters. Make sure to use short names for managed instance groups so that the names of VM instances don't exceed 15 characters. After you verify that the VM instances from your MIGs can join your Active Directory domain, you can delete the managed group and instance template by following these steps: In Cloud Shell, delete the instance group: & gcloud compute instance-groups managed delete group-01 --quiet Delete the instance template: & gcloud compute instance-templates delete ad-2019core-n1-std-2 --quiet Scheduling cleanup of stale computer accounts Automating the process of joining computers to the domain reduces the effort to set up new servers and lets you use domain-joined servers in managed instance groups. Over time, however, stale computer accounts can accumulate in the domain. To prevent this accumulation, we recommend that you set up the register-computer app to periodically scan your Active Directory domain to find and automatically remove stale accounts and their associated DNS records. The register-computer app can use the msDS-cloudExtensionAttribute attributes of computer accounts to identify which computer accounts are stale. 
These attributes contain the project, zone, and instance name of the corresponding VM instance in Compute Engine. For each computer account, the app can check if the corresponding VM instance is still available. If it is not, then the computer account is considered stale and removed. To trigger a computer account cleanup, you invoke the /cleanup endpoint of the Cloud Run app. To prevent unauthorized users from triggering a cleanup, this request must be authenticated by using the register-computer-app service account. Configure Cloud Scheduler The following steps show you how to set up Cloud Scheduler in conjunction with Pub/Sub to automatically trigger a cleanup once every 24 hours: In Cloud Shell, enable the Cloud Scheduler API in your domain project: & gcloud services enable cloudscheduler.googleapis.com Set AppEngineLocation to a valid App Engine location in which to deploy Cloud Scheduler: $AppEngineLocation = "location" Replace location with the App Engine region that you selected for your VPC resources, for example, us-central. If that region is not available as an App Engine location, choose a location that is geographically close to you. For more information, see Regions and zones. Initialize App Engine: & gcloud app create --region $AppEngineLocation --project $DomainProjectId Create a Cloud Scheduler job: & gcloud scheduler jobs create http cleanup-computer-accounts ` --schedule "every 24 hours" ` --uri "$RegisterUrl/cleanup" ` --oidc-service-account-email register-computer-app@$DomainProjectId.iam.gserviceaccount.com ` --oidc-token-audience "$RegisterUrl/" ` --project $DomainProjectId This job calls the register-computer app once every 24 hours and uses the register-computer-app service account for authentication. Trigger a cleanup To verify your configuration for cleaning up stale computer accounts, you can trigger the Cloud Scheduler job manually. In the Google Cloud console, go to Cloud Scheduler. Go to Cloud Scheduler For the cleanup-computer-accounts job that you created, click Run Now. After a few seconds, the Result column displays Success, indicating that the cleanup completed successfully. If the result column does not update automatically within a few seconds, click the Refresh button. For more details about which accounts were cleaned up, check the logs of the register-computer app. In the Google Cloud console, go to Cloud Run. Go to Cloud Run Click the register-computer app. In the menu, click Logs. Log entries indicate that the computer accounts of the VM instances you used to test domain joining were identified as stale and removed. Clean up If you are using this document as a baseline for other reference architectures and deployments, read the other documents about when to run the cleanup steps. If you don't want to keep the Google Cloud setup used in this document, you can revert this setup by doing the following: In Cloud Shell, delete the Cloud Scheduler job: & gcloud scheduler jobs delete cleanup-computer-accounts ` --project $DomainProjectId Delete the Cloud Run app: & gcloud run services delete register-computer ` --platform managed ` --project $DomainProjectId ` --region $ServerlessRegion Delete the Secret Manager secret: gcloud secrets delete ad-password --project $DomainProjectId Delete the firewall rule for LDAP and Kerberos access: gcloud compute firewall-rules delete allow-adkrb-from-serverless-to-dc --project=vpc-project-id Replace vpc-project-id with the ID of the project the VPC is defined in. 
If you use a Shared VPC, use the VPC host project; otherwise, use the ID of the domain project. Revert Active Directory changes Using an RDP client, sign in to a machine that has administrative access to your Active Directory domain. In the Active Directory Users and Computers MMC snap-in, go to the Projects OU. Delete the register-computer Active Directory user account. Delete the OU that you created for testing automated domain joining. What's next To join Windows VMs to a Managed Microsoft AD domain using automated domain join, see Join a Windows VM automatically to a domain. Review our best practices for deploying an Active Directory resource forest on Google Cloud. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Configure_SaaS_data_protection_for_Google_Workspace_data_with_SpinOne.txt b/Configure_SaaS_data_protection_for_Google_Workspace_data_with_SpinOne.txt new file mode 100644 index 0000000000000000000000000000000000000000..94faf3c29d364d9722678b0e09aa666c7ec19c45 --- /dev/null +++ b/Configure_SaaS_data_protection_for_Google_Workspace_data_with_SpinOne.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/backing-up-workspace-data-with-spinone +Date Scraped: 2025-02-23T11:56:24.690Z + +Content: +Home Docs Cloud Architecture Center Send feedback Configuring SaaS data protection for Google Workspace data with Spin.AI Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-07-12 UTC By: Spin.AI This document describes how to configure SpinOne - All-in-One SaaS Data Protection with Cloud Storage. When you configure SpinOne, including its cybersecurity features, you can store SpinOne for Google Workspace backup data in Google Cloud and restore data from Cloud Storage. The following steps are completed automatically as part of the SpinOne configuration and account creation process: Configures the required Google Cloud infrastructure and services such as Cloud Storage accounts, services, and API hooks. Integrates with Cloud Storage during the SpinOne registration process. Creates new Cloud Storage buckets to store data in Google Cloud. In this document, you complete the following: Set up SpinOne for Google Workspace. Back up and restore Google Workspace data using SpinOne. Perform a risk assessment of applications in your organization. View and configure ransomware monitoring and response. Enable data loss prevention (DLP) with data audit. SpinOne for Google Workspace SpinOne is a comprehensive SaaS data protection platform for your mission-critical Google Workspace data. SpinOne provides the following four core solutions: SaaS Apps Risk Assessment for Google Workspace SaaS Ransomware Protection for Google Workspace SaaS DLP for Google Workspace SaaS Backup & Recovery for Google Workspace Installing SpinOne for Google Workspace Go to the Google Workspace Marketplace to install SpinOne – Security & Backup and click Install. Sign in with your Google Workspace administrator account (make sure you are a Super Admin), and then click Next. In the Choose a secure cloud storage window, click a data center location for Google Cloud, and then click Get Started. Select a data center location based on your business policies or needs. These needs can include keeping data close to the source for performance reasons or keeping your backup data geographically separated from your production data. 
For more information, see Geography and Regions. Complete the registration process. Perform a risk assessment of the applications in your organization SpinOne lets you assess the risk of the applications that are installed in your organization. It provides an automated way to assess third-party applications' business, security, and compliance risks. SpinOne includes the following features: Continuous risk level analysis of applications: SpinAudit detects when new applications are installed or uninstalled. Then, it automatically reviews the application and identifies applications that have been blocked. After SpinAudit has blocked an application, its access is revoked whenever a user attempts to install it in the cloud SaaS environment. Implement security policies: Use granular policies to customize applications, data audits, and domain audit-related policies. It allows for specific rule scopes, exceptions, and notification settings on a per- rule basis. To view the applications in use in your organization, complete the following: In the Web Direct console, navigate to Risk Assessment > All Apps. Change the status to Active. It displays the scores, applications, states, types, users, and when access was last granted. By reviewing the score, you can determine the level of risks that were introduced in the environment by the applications. SpinOne automatically and continuously performs the risk assessment of third-party applications in Google Workspace Marketplace and browser plugins. Create a security policy to blocklist risky applications The risk assessment displays the risk score of applications that are installed in the environment. You can take that information and enforce governance policies based on risk score levels. Using the SpinOne security policies, you can create granular security policies to block applications based on their security score, among other factors. In the Web Direct console, navigate to Security Policies > Policies. Click the add icon (+) next to Create Policy. Select Apps Policy. Choose from several conditions, including the following: Application name Application category Application ID Developer Scope of permissions Application risk score You can choose to apply the policy to OAuth applications or Google Chrome extensions. After defining the conditions of the security policy, define the actions that you want to take. The following screenshot shows how to set the Blocklist action and the alert for each event. Click Next step. (Optional) In Scope and Exceptions, set the scope for the policy to specific users and enter user exceptions for the policy. Click Next step. The Preview screen appears. Click Create policy. View and configure ransomware monitoring and response You can view and configure ransomware protection in SpinOne. The ransomware protection dashboard provides visibility into malicious activity in the organization and proactive recovery of file resources. The ransomware protection dashboard gives visibility to the following information: Affected user Service affected Number of encrypted files Number of files recovered Unrecovered files When the attack started When the attack was stopped The type of ransomware In the Web Direct console, navigate to Ransomware protection. Click Settings to view the ransomware protection policy settings. If you have multiple policies configured for ransomware protection, you can change the priority of the policies. To launch the configuration settings for a ransomware protection policy, click the policy. 
The configuration for the ransomware protection policy includes: Policy type Description Scope Restore encrypted files automatically Revoke an access Restore file sharing permissions Send notification DLP with data audit SpinOne data audit lets you see the data that is shared internally and externally in your organization. It includes filtering the shared data reports to view sharing information. In addition, administrators can see personally identifiable information (PII) data in the form of transmitted credit card numbers (CCNs). Using data audit, administrators can create data policies in the security policy settings to enforce data governance and compliance requirements. In the Web Direct console, click Data audit > Shared data to view the shared data dashboard. Use the Owner, Shared to, Security Policies, and Date filters to filter data. To view the PII data, including the CCNs, click PII data. To create a new data policy, complete the following tasks: Go to Security policies > Policies. Click Create New > Data Policy. Configure the following conditions for the new data policy: Filename Check external domains Check domains, users, or groups Check shared by link Allowlist for domains, users, or groups Check for non-owner file sharing Click Next step. Configure the following actions for the new data policy: Revoke sharing permissions Send notification Send notification to owner Change the owner Suspend user Click Next step. (Optional) In Scope and Exceptions, set the scope for the data policy to specific users and enter any exceptions for the policy. Click Next step. The Preview screen appears. Click Create policy. Configuring Google Workspace backup In your SpinOne Dashboard, you can configure the following settings for your Google Workspace backup: Choose which Google Workspace services to back up. Configure backup frequency. Configure backup retention. Choose which Google Workspace services to back up During the initial installation and setup process for SpinOne, you can indicate which of the global Google Workspace services to include in the SpinOne backup of Google Workspace. You can change these preferences after the initial setup wizard or on a per-Google Workspace-user basis. For the purposes of this document, you configure your Google Workspace backup for all users. In the SpinOne Dashboard, in the Backup & Recovery section, click Users. To configure which of the following services are available for backup, expand the Users menu: Gmail Google Drive Google Calendar Google Contacts In Autobackup, select the services to back up. Configure automatic backup settings By default, the automatic backup settings are set up during the initial setup wizard. You can change the automatic backup settings for all users or a specific organization unit. Organizational units are configured in the Google Workspace environment and allow configuring different settings for different users. SpinOne automatically pulls the list of organizational units from your Google Workspace environment. In the SpinOne Dashboard, in the Backup & Recovery section, click Users. In the Update autobackup setting window, click Update all users, and then turn on the following services for backup: Gmail Google Drive Google Calendar Google Contacts Click Update. Configure backup frequency You can also configure your automatic backup frequency. You can choose to back up the environment either once or three times per day. The backups are fully automated. The backup times are set by our system. 
You may also trigger a manual backup at any time. In the SpinOne Dashboard, in the Backup & Recovery section, click Settings. In the Automated Backup Frequency section, click 3x/day, and then click Update. Configure backup retention By configuring the retention policy, you can choose to keep data indefinitely or to prune data after a specific number of months. Organizations may choose not to retain data due to business policies or other compliance regulations. The default is to keep backups indefinitely, but you can change the duration. In the SpinOne Dashboard, in the Backup & Recovery section, click Settings. In the Retention policy section, in the months from when data was backed up enter 12, and then click Update. What's next Install the application from Google Workspace Marketplace. Sign up for a free trial of SpinOne for Google Workspace. Find the right solution for you: SpinOne for Google Workspace. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Configure_networks_for_FedRAMP_and_DoD_in_Google_Cloud.txt b/Configure_networks_for_FedRAMP_and_DoD_in_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..89ddde00acf7f4248a15a7de26ccc9c245bc7b5e --- /dev/null +++ b/Configure_networks_for_FedRAMP_and_DoD_in_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/configure-networks-fedramp-dod-google-cloud +Date Scraped: 2025-02-23T11:56:19.570Z + +Content: +Home Docs Cloud Architecture Center Send feedback Configure networks for FedRAMP and DoD in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-28 UTC This document provides configuration guidance to help you to securely deploy Google Cloud networking policies in the United States (US) that comply with the design requirements for FedRAMP High and Department of Defense (DoD) Impact Level 2 (IL2), Impact Level 4 (IL4), and Impact Level 5 (IL5). This document is intended for solution architects, network engineers, and security engineers who design and deploy networking solutions on Google Cloud. The following diagram shows a landing zone network design for highly regulated workloads. Architecture The network design shown in the preceding diagram is aligned with the US compliance framework requirements for FedRAMP High, and DoD IL2, IL4, and IL5. This architecture includes the following components, which are described in greater detail later in this document: Virtual Private Cloud (VPC): These VPCs are global, however, you must only create subnets in US regions. Regional load balancers: These load balancers are regional, not global. They only support US deployments. Note that the use of external load balancers than can be directly accessed by the internet might need extra validation with DISA to assure DoD authorization for IL4 and IL5. Google Cloud Armor security policies: These policies can be used with supported regional load balancer security policies. Private Service Connect, Private Google Access (PGA), and Private service access (PSA): These options enable private connectivity to Google managed services within the region. You must enable private access to Google managed services and APIs within the region through the relevant option for your use case. 
Third-party services: For third-party producer-consumer services, you must ensure that both the producer service and the data that is in transit meet your compliance requirements. Non-prod: Provision other environments such as non-prod, testing, and quality assurance (QA) in accordance with your organization's VPC strategy. Use case Assured Workloads is a compliance framework that can help to provide the security controls that you need to support regulatory requirements for FedRAMP High, and DoD IL2, IL4, and IL5. After you deploy with Assured Workloads, you are responsible for setting up compliant and secure networking policies. For other compliance use cases, see Hosting FedRAMP Moderate and High Workloads on Google Cloud in the FedRAMP documentation. The scope of this guidance is limited to networking components. You must configure workloads in accordance with the shared responsibility model, FedRAMP Customer Responsibility Matrix, in-scope Google Cloud services, FedRAMP, and Assured Workloads guidelines. For more information about how to meet compliance requirements for other Google Cloud services, see the Compliance resource center. The services referenced in this document are for example purposes only. You must review the services that are in scope for compliance programs to assure the correct compliance level requirements for your workload. Out of scope products The following services don't meet FedRAMP High, or DoD IL2, IL4, and IL5 jurisdictional boundary compliance requirements: Global External Application Load Balancer Global Google Cloud Armor Global External Proxy Load Balancer We recommend that you discuss the risk of using these services in your network with your Google support team before you begin to make your network design. Design Considerations This section describes design considerations for which the configurations that are described in this document are an appropriate choice. Use Assured Workloads You must use Assured Workloads to meet compliance-based requirements on Google Cloud for regulations that have data sovereignty and residency requirements, such as FedRAMP High, and DoD IL4 and IL5. To understand if these principles apply to your compliance program on Google Cloud, we recommend that you review Overview of Assured Workloads in the early stages of your design phase. You are responsible for configuring your own network and IAM policies. You must configure an Assured Workloads folder and set the appropriate compliance program. In this case, either set the appropriate compliance program to FedRAMP High or IL2, IL4, IL5. This folder provides a regulatory boundary within an organization to identify regulated data types. By default, any project under this folder will inherit the security and compliance guardrails set at the Assured Workloads folder level. Assured Workloads restricts the regions that you can select for those resources based on the compliance program that you chose using the resource restriction Organization Policy Service. Regional alignment You must use one or more of the US Google regions to support the compliance programs in scope for this guidance. Note that FedRAMP High and DoD IL4 and IL5 have a general requirement that data be kept within a US geographical boundary. To learn which regions you can add, see Assured Workloads locations. Product-level compliance It's your responsibility to confirm that a product or service supports the appropriate data sovereignty and residency requirements for your use case. 
When you purchase or use your target compliance program, you must also follow these guidelines for each product that you use to meet applicable compliance requirements. Assured Workloads sets up a modifiable Organization Policy with a point in time resource usage restriction policy that reflects the services that are in compliance with the chosen compliance framework. Deployment To help you to meet your compliance requirements, we recommend that you follow the guidelines in this section for individual networking services. Virtual Private Cloud network configurations You must make the following Virtual Private Cloud configurations: Subnets: Create subnets in the US regions referenced in Regional alignment. Assured Workloads applies policies to restrict the creation of subnets in other locations. Firewall rules: You must configure VPC firewall rules to allow or deny connections only to or from virtual machine (VM) instances in your VPC network. Private Service Connect configurations Private Service Connect is a capability of Google Cloud networking that lets consumers access managed services privately from inside their VPC network. Both Private Service Connect types (Private Service Connect endpoints and Private Service Connect backends) support the controls that are described in this document when configured with regional load balancers. We recommend that you apply the configuration details described in the following table: Private Service Connect Type Supported load balancers Compliance status Private Service Connect endpoints for Google APIs Not applicable Not supported Private Service Connect backends for Google APIs Global External Application Load Balancer Regional External proxy Network Load Balancer or Internal Application Load Balancer Regional External proxy Network Load Balancer Compliant when used with either of the following regional load balancers: Regional external or Internal Application Load Balancer Regional External proxy Network Load Balancer Private Service Connect endpoints for published services Regional Internal Application Load Balancer Regional Internal passthrough Network Load Balancer Regional External proxy Network Load Balancer Compliant Private Service Connect backends for published services Global External Application Load Balancer Regional external or Internal Application Load Balancer Regional External proxy Network Load Balancer Regional Internal passthrough Network Load Balancer Compliant when used with the following regional load balancer: Regional external or Internal Application Load Balancer Regional External proxy Network Load Balancer Regional Internal passthrough Network Load Balancer Packet Mirroring Packet Mirroring is a VPC feature that you can use to help you to maintain compliance. Packet Mirroring captures all your traffic and packet data, including payloads and headers, and forwards it to target collectors for analysis. Packet Mirroring inherits VPC compliance status. Cloud Load Balancing Google Cloud offers different types of load balancers, as described in Application Load Balancer overview. For this architecture, you must use regional load balancers. Cloud DNS You can use Cloud DNS to help you to meet your compliance requirements. Cloud DNS is a managed DNS service in Google Cloud which supports private forwarding zones, peering zones, reverse lookup zones, and DNS server policies. Cloud DNS public zones don't comply with FedRAMP High, and DoD IL2, IL4, or IL5 controls. 
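For example, a private zone keeps DNS resolution inside your VPC network. The following sketch creates a private zone that is visible only to a single VPC network; the zone name, DNS name, and network name are placeholders:
gcloud dns managed-zones create corp-private-zone \
    --description="Private zone for regulated workloads" \
    --dns-name="corp.example.com." \
    --visibility=private \
    --networks=regulated-vpc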
Cloud Router Cloud Router is a regional product that you can configure for Cloud VPN, Cloud Interconnect, and Cloud NAT. You must only configure Cloud Router in US regions. When you create or edit a VPC network, you can set the dynamic routing mode to be regional or global. If you enable global routing mode, you must configure custom advertised mode to only include US networks. Cloud NAT Cloud NAT is a regional managed NAT product that you can use to enable outbound access to the internet for private resources with no external IP addresses. You must only configure Cloud NAT gateway in US regions that have the associated Cloud Router component. Cloud VPN You must use Cloud VPN endpoints located within the US. Ensure that your VPN gateway is configured only for use in the correct US region, as described in Regional alignment. We recommend that you use HA VPN type for Cloud VPN. For encryption, you must only use FIPS 140-2 compliant ciphers to create certificates and to configure your IP address security. To learn more about supported ciphers in Cloud VPN, see Supported IKE ciphers. For guidance about how to select a cipher that conforms to FIPS 140-2 standards, see FIPS 140-2 Validated. After you make a configuration, there is no way to change an existing cipher in Google Cloud. Ensure that you configure the same cipher on your third-party appliance that you use with Cloud VPN. Google Cloud Armor Google Cloud Armor is a DDoS mitigation and application protection service. It helps to protect against DDoS attacks on Google Cloud customer deployments with workloads that are exposed to the internet. Google Cloud Armor for Regional external Application Load Balancer is designed to provide the same protection and capabilities for regional load balanced workloads. Because Google Cloud Armor web-application firewalls (WAF) use a regional scope, your configurations and traffic reside in the region where the resources are created. You must create regional backend security policies and attach them to backend services which are regionally scoped. The new regional security policies can only be applied to regionally scoped backend services in the same region, and are stored, evaluated, and enforced in-region. Google Cloud Armor for Network Load Balancers and VMs extends Google Cloud Armor DDoS protection for workloads exposed to the internet through a Network Load Balancer (or protocol forwarding) forwarding rule, or through a VM that's directly exposed through a public IP. To enable this protection, you must configure advanced network DDoS protection. Dedicated Interconnect To use Dedicated Interconnect, your network must physically connect to Google's network in a supported colocation facility. The facility provider supplies a 10G or 100G circuit between your network and a Google Edge point-of-presence. You must only use Cloud Interconnect in colocation facilities within the US that serve Google Cloud US regions. When you use Partner Cloud Interconnect, you must consult with the service provider to confirm that their locations are within the US and connected to one of the Google Cloud US locations listed later in this section. By default, the traffic sent over Cloud Interconnect is unencrypted. If you want to encrypt traffic sent over Cloud Interconnect then you can configure VPN over Cloud Interconnect or MACsec. 
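Regardless of which encryption option you choose, compliance also depends on where your VLAN attachments terminate. As a quick check, you can list your attachments and confirm that every region shown is a US region; the output columns in this sketch are illustrative:
gcloud compute interconnects attachments list --format="table(name,region,type)"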
For the full list of supported regions and co-locations, see the following table: Region Location Facility name Facility us-east4 (Virginia) Ashburn iad-zone1-1 Equinix Ashburn (DC1-DC11) Ashburn iad-zone2-1 Equinix Ashburn (DC1-DC11) Ashburn iad-zone1-5467 CoreSite - Reston (VA3) Ashburn iad-zone2-5467 CoreSite - Reston (VA3) us-east5 (Columbus) Columbus cmh-zone1-2377 Cologix COL1 Columbus cmh-zone2-2377 Cologix COL1 us-central1 (Iowa) Council Bluffs cbf-zone1-575 Nebraska data centers (1623 Farnam) Council Bluffs cbf-zone2-575 Nebraska data centers (1623 Farnam) us-south1 (Dallas) Dallas dfw-zone1-4 Equinix Dallas (DA1) Dallas dfw-zone2-4 Equinix Dallas (DA1) us-west1 (Oregon) Portland pdx-zone1-1922 EdgeConneX Portland (EDCPOR01) Portland pdx-zone2-1922 EdgeConneX Portland (EDCPOR01) us-west2 (Los Angeles) Los Angeles lax-zone1-8 Equinix Los Angeles (LA1) Los Angeles lax-zone2-8 Equinix Los Angeles (LA1) Los Angeles lax-zone1-19 CoreSite - LA1 - One Wilshire Los Angeles lax-zone2-19 CoreSite - LA1 - One Wilshire Los Angeles lax-zone1-403 Digital Realty LAX (600 West 7th) Los Angeles lax-zone2-403 Digital Realty LAX (600 West 7th) Los Angeles lax-zone1-333 Equinix LA3/LA4 - Los Angeles, El Segundo Los Angeles lax-zone2-333 Equinix LA3/LA4 - Los Angeles, El Segundo us-west3 (Salt Lake City) Salt Lake City slc-zone1-99001 Aligned Salt Lake (SLC-01) Salt Lake City slc-zone2-99001 Aligned Salt Lake (SLC-01) us-west4 (Las Vegas) Las Vegas las-zone1-770 Switch Las Vegas Las Vegas las-zone2-770 Switch Las Vegas What's next Learn more about the Google Cloud products used in this design guide: Google Virtual Private Cloud Cloud DNS Cloud NAT Google Load Balancing Google Cloud Armor Private Service Connect Cloud Interconnect Cloud VPN For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Haider Witwit | Customer EngineerBhavin Desai | Product ManagerOther contributors: Ashwin Gururaghavendran | Software EngineerPercy Wadia | Group Product ManagerDaniel Lees | Cloud Security ArchitectMarquis Carroll | ConsultantMichele Chubirka | Cloud Security Advocate Send feedback \ No newline at end of file diff --git a/Connecting_visualization_software_to_Hadoop_on_Google_Cloud.txt b/Connecting_visualization_software_to_Hadoop_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..cfdbf2a9d6dc3c28c1671e28ac35287f2ebfafa0 --- /dev/null +++ b/Connecting_visualization_software_to_Hadoop_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hadoop/connecting-visualization-software-to-hadoop-on-google-cloud +Date Scraped: 2025-02-23T11:52:49.737Z + +Content: +Home Docs Cloud Architecture Center Send feedback Connecting your visualization software to Hadoop on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-04-17 UTC This tutorial is the second part of a series that shows you how to build an end-to-end solution to give data analysts secure access to data when using business intelligence (BI) tools. This tutorial is intended for operators and IT administrators who set up environments that provide data and processing capabilities to the business intelligence (BI) tools used by data analysts. Tableau is used as the BI tool in this tutorial. To follow along with this tutorial, you must have Tableau Desktop installed on your workstation. 
The series is made up of the following parts: The first part of the series, Architecture for connecting visualization software to Hadoop on Google Cloud, defines the architecture of the solution, its components, and how the components interact. This second part of the series tells you how to set up the architecture components that make up the end-to-end Hive topology on Google Cloud. The tutorial uses open source tools from the Hadoop ecosystem, with Tableau as the BI tool. The code snippets in this tutorial are available in a GitHub repository. The GitHub repository also includes Terraform configuration files to help you set up a working prototype. Throughout the tutorial, you use the name sara as the fictitious user identity of a data analyst. This user identity is in the LDAP directory that both Apache Knox and Apache Ranger use. You can also choose to configure LDAP groups, but this procedure is outside the scope of this tutorial. Objectives Create an end-to-end setup that enables a BI tool to use data from a Hadoop environment. Authenticate and authorize user requests. Set up and use secure communication channels between the BI tool and the cluster. Costs In this document, you use the following billable components of Google Cloud: Dataproc Cloud SQL Cloud Storage Network egress To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Dataproc, Cloud SQL, and Cloud Key Management Service (Cloud KMS) APIs. Enable the APIs In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Dataproc, Cloud SQL, and Cloud Key Management Service (Cloud KMS) APIs. Enable the APIs Initializing your environment In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell In Cloud Shell, set environment variables with your project ID, and the region and zones of the Dataproc clusters: export PROJECT_ID=$(gcloud info --format='value(config.project)') export REGION=us-central1 export ZONE=us-central1-b You can choose any region and zone, but keep them consistent as you follow this tutorial. Setting up a service account In Cloud Shell, create a service account. gcloud iam service-accounts create cluster-service-account \ --description="The service account for the cluster to be authenticated as." \ --display-name="Cluster service account" The cluster uses this account to access Google Cloud resources. Add the following roles to the service account: Dataproc Worker: to create and manage Dataproc clusters. 
Cloud SQL Editor: for Ranger to connect to its database using Cloud SQL Proxy. Cloud KMS CryptoKey Decrypter: to decrypt the passwords encrypted with Cloud KMS. bash -c 'array=( dataproc.worker cloudsql.editor cloudkms.cryptoKeyDecrypter ) for i in "${array[@]}" do gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member "serviceAccount:cluster-service-account@${PROJECT_ID}.iam.gserviceaccount.com" \ --role roles/$i done' Creating the backend cluster In this section, you create the backend cluster where Ranger is located. You also create the Ranger database to store the policy rules, and a sample table in Hive to apply the Ranger policies. Create the Ranger database instance Create a MySQL instance to store the Apache Ranger policies: export CLOUD_SQL_NAME=cloudsql-mysql gcloud sql instances create ${CLOUD_SQL_NAME} \ --tier=db-n1-standard-1 --region=${REGION} This command creates an instance called cloudsql-mysql with the machine type db-n1-standard-1 located in the region specified by the ${REGION} variable. For more information, see the Cloud SQL documentation. Set the instance password for the user root connecting from any host. You can use the example password for demonstrative purposes, or create your own. If you create your own password, use a minimum of eight characters, including at least one letter and one number. gcloud sql users set-password root \ --host=% --instance ${CLOUD_SQL_NAME} --password mysql-root-password-99 Encrypt the passwords In this section, you create a cryptographic key to encrypt the passwords for Ranger and MySQL. To prevent exfiltration, you store the cryptographic key in Cloud KMS. For security purposes, you can't view, extract, or export the key bits. You use the cryptographic key to encrypt the passwords and write them into files. You upload these files into a Cloud Storage bucket so that they are accessible to the service account that is acting on behalf of the clusters. The service account can decrypt these files because it has the cloudkms.cryptoKeyDecrypter role and access to the files and the cryptographic key. Even if a file is exfiltrated, the file can't be decrypted without the role and the key. As an extra security measure, you create separate password files for each service. This action minimizes the potential impacted area if a password is exfiltrated. For more information about key management, see the Cloud KMS documentation. In Cloud Shell, create a Cloud KMS key ring to hold your keys: gcloud kms keyrings create my-keyring --location global To encrypt your passwords, create a Cloud KMS cryptographic key: gcloud kms keys create my-key \ --location global \ --keyring my-keyring \ --purpose encryption Encrypt your Ranger admin user password using the key. You can use the example password or create your own. Your password must be a minimum of eight characters, including at least one letter and one number. 
echo "ranger-admin-password-99" | \ gcloud kms encrypt \ --location=global \ --keyring=my-keyring \ --key=my-key \ --plaintext-file=- \ --ciphertext-file=ranger-admin-password.encrypted Encrypt your Ranger database admin user password with the key: echo "ranger-db-admin-password-99" | \ gcloud kms encrypt \ --location=global \ --keyring=my-keyring \ --key=my-key \ --plaintext-file=- \ --ciphertext-file=ranger-db-admin-password.encrypted Encrypt your MySQL root password with the key: echo "mysql-root-password-99" | \ gcloud kms encrypt \ --location=global \ --keyring=my-keyring \ --key=my-key \ --plaintext-file=- \ --ciphertext-file=mysql-root-password.encrypted Create a Cloud Storage bucket to store encrypted password files: gcloud storage buckets create gs://${PROJECT_ID}-ranger --location=${REGION} Upload the encrypted password files to the Cloud Storage bucket: gcloud storage cp *.encrypted gs://${PROJECT_ID}-ranger Create the cluster In this section, you create a backend cluster with Ranger support. For more information about the Ranger optional component in Dataproc, see the Dataproc Ranger Component documentation page. In Cloud Shell, create a Cloud Storage bucket to store the Apache Solr audit logs: gcloud storage buckets create gs://${PROJECT_ID}-solr --location=${REGION} Export all the variables required in order to create the cluster: export BACKEND_CLUSTER=backend-cluster export PROJECT_ID=$(gcloud info --format='value(config.project)') export REGION=us-central1 export ZONE=us-central1-b export CLOUD_SQL_NAME=cloudsql-mysql export RANGER_KMS_KEY_URI=\ projects/${PROJECT_ID}/locations/global/keyRings/my-keyring/cryptoKeys/my-key export RANGER_ADMIN_PWD_URI=\ gs://${PROJECT_ID}-ranger/ranger-admin-password.encrypted export RANGER_DB_ADMIN_PWD_URI=\ gs://${PROJECT_ID}-ranger/ranger-db-admin-password.encrypted export MYSQL_ROOT_PWD_URI=\ gs://${PROJECT_ID}-ranger/mysql-root-password.encrypted For convenience, some of the variables that you set before are repeated in this command so you can modify them as you require. The new variables contain: The name of the backend cluster. The URI of the cryptographic key so that the service account can decrypt the passwords. The URI of the files containing the encrypted passwords. If you used a different key ring or key, or different filenames, use the corresponding values in your command. Create the backend Dataproc cluster: gcloud beta dataproc clusters create ${BACKEND_CLUSTER} \ --optional-components=SOLR,RANGER \ --region ${REGION} \ --zone ${ZONE} \ --enable-component-gateway \ --scopes=default,sql-admin \ --service-account=cluster-service-account@${PROJECT_ID}.iam.gserviceaccount.com \ --properties="\ dataproc:ranger.kms.key.uri=${RANGER_KMS_KEY_URI},\ dataproc:ranger.admin.password.uri=${RANGER_ADMIN_PWD_URI},\ dataproc:ranger.db.admin.password.uri=${RANGER_DB_ADMIN_PWD_URI},\ dataproc:ranger.cloud-sql.instance.connection.name=${PROJECT_ID}:${REGION}:${CLOUD_SQL_NAME},\ dataproc:ranger.cloud-sql.root.password.uri=${MYSQL_ROOT_PWD_URI},\ dataproc:solr.gcs.path=gs://${PROJECT_ID}-solr,\ hive:hive.server2.thrift.http.port=10000,\ hive:hive.server2.thrift.http.path=cliservice,\ hive:hive.server2.transport.mode=http" This command has the following properties: The final three lines in the command are the Hive properties to configure HiveServer2 in HTTP mode, so that Apache Knox can call Apache Hive through HTTP. 
The other parameters in the command operate as follows: The --optional-components=SOLR,RANGER parameter enables Apache Ranger and its Solr dependency. The --enable-component-gateway parameter enables the Dataproc Component Gateway to make the Ranger and other Hadoop user interfaces available directly from the cluster page in Google Cloud console. When you set this parameter, there is no need for SSH tunneling to the backend master node. The --scopes=default,sql-admin parameter authorizes Apache Ranger to access its Cloud SQL database. If you need to create an external Hive metastore that persists beyond the lifetime of any cluster and can be used across multiple clusters, see Using Apache Hive on Dataproc. To run the procedure, you must run the table creation examples directly on Beeline. While the gcloud dataproc jobs submit hive commands use Hive binary transport, these commands aren't compatible with HiveServer2 when it's configured in HTTP mode. Create a sample Hive table In Cloud Shell, create a Cloud Storage bucket to store a sample Apache Parquet file: gcloud storage buckets create gs://${PROJECT_ID}-hive --location=${REGION} Copy a publicly available sample Parquet file into your bucket: gcloud storage cp gs://hive-solution/part-00000.parquet \ gs://${PROJECT_ID}-hive/dataset/transactions/part-00000.parquet Connect to the master node of the backend cluster you created in the previous section using SSH: gcloud compute ssh --zone ${ZONE} ${BACKEND_CLUSTER}-m The name of your cluster master node is the name of the cluster followed by -m. The HA cluster master node names have an extra suffix. If it's your first time connecting to your master node from Cloud Shell, you are prompted to generate SSH keys. In the terminal you opened with SSH, connect to the local HiveServer2 using Apache Beeline, which is pre-installed on the master node: beeline -u "jdbc:hive2://localhost:10000/;transportMode=http;httpPath=cliservice admin admin-password"\ --hivevar PROJECT_ID=$(gcloud info --format='value(config.project)') This command starts the Beeline command-line tool and passes the name of your Google Cloud project in an environment variable. Hive doesn't perform any user authentication, but it requires a user identity to perform most tasks. The admin user here is a default user that's configured in Hive. The identity provider that you configure with Apache Knox later in this tutorial handles user authentication for any requests that come from BI tools. In the Beeline prompt, create a table using the Parquet file you previously copied to your Hive bucket: CREATE EXTERNAL TABLE transactions (SubmissionDate DATE, TransactionAmount DOUBLE, TransactionType STRING) STORED AS PARQUET LOCATION 'gs://${PROJECT_ID}-hive/dataset/transactions'; Verify that the table was created correctly: SELECT * FROM transactions LIMIT 10; SELECT TransactionType, AVG(TransactionAmount) AS AverageAmount FROM transactions WHERE SubmissionDate = '2017-12-22' GROUP BY TransactionType; The results of the two queries appear in the Beeline prompt. Exit the Beeline command-line tool: !quit Copy the internal DNS name of the backend master: hostname -A | tr -d '[:space:]'; echo You use this name in the next section as backend-master-internal-dns-name to configure the Apache Knox topology. You also use the name to configure a service in Ranger. Exit the terminal on the node: exit Creating the proxy cluster In this section, you create the proxy cluster that has the Apache Knox initialization action.
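If you no longer have the backend master's internal DNS name that you copied in the previous section, you can reconstruct it from Cloud Shell. The following is a minimal sketch, assuming your project uses the default zonal DNS naming scheme (INSTANCE.ZONE.c.PROJECT_ID.internal) and that the BACKEND_CLUSTER, ZONE, and PROJECT_ID variables are still set from the previous section:
export BACKEND_MASTER_INTERNAL_DNS=${BACKEND_CLUSTER}-m.${ZONE}.c.${PROJECT_ID}.internal
echo ${BACKEND_MASTER_INTERNAL_DNS}
You use this value as backend-master-internal-dns-name when you edit the topology in the next step.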
Create a topology In Cloud Shell, clone the Dataproc initialization-actions GitHub repository: git clone https://github.com/GoogleCloudDataproc/initialization-actions.git Create a topology for the backend cluster: export KNOX_INIT_FOLDER=`pwd`/initialization-actions/knox cd ${KNOX_INIT_FOLDER}/topologies/ mv example-hive-nonpii.xml hive-us-transactions.xml Apache Knox uses the name of the file as the URL path for the topology. In this step, you change the name to represent a topology called hive-us-transactions. You can then access the fictitious transaction data that you loaded into Hive in Create a sample Hive table Edit the topology file: vi hive-us-transactions.xml To see how backend services are configured, see the topology descriptor file. This file defines a topology that points to one or more backend services. Two services are configured with sample values: WebHDFS and HIVE. The file also defines the authentication provider for the services in this topology and the authorization ACLs. Add the data analyst sample LDAP user identity sara. hive.acl admin,sara;*;* Adding the sample identity lets the user access the Hive backend service through Apache Knox. Change the HIVE URL to point to the backend cluster Hive service. You can find the HIVE service definition at the bottom of the file, under WebHDFS. HIVE http://:10000/cliservice Replace the placeholder with the internal DNS name of the backend cluster that you acquired in Create a sample Hive table. Save the file and close the editor. To create additional topologies, repeat the steps in this section. Create one independent XML descriptor for each topology. In Create the proxy cluster you copy these files into a Cloud Storage bucket. To create new topologies, or change them after you create the proxy cluster, modify the files, and upload them again to the bucket. The Apache Knox initialization action creates a cron job that regularly copies changes from the bucket to the proxy cluster. Configure the SSL/TLS certificate A client uses an SSL/TLS certificate when it communicates with Apache Knox. The initialization action can generate a self-signed certificate, or you can provide your CA-signed certificate. In Cloud Shell, edit the Apache Knox general configuration file: vi ${KNOX_INIT_FOLDER}/knox-config.yaml Replace HOSTNAME with the external DNS name of your proxy master node as the value for the certificate_hostname attribute. For this tutorial, use localhost. certificate_hostname: localhost Later in this tutorial, you create an SSH tunnel and the proxy cluster for the localhost value. The Apache Knox general configuration file also contains the master_key that encrypts the certificates BI tools use to communicate with the proxy cluster. By default, this key is the word secret. If you are providing your own certificate, change the following two properties: generate_cert: false custom_cert_name: Save the file and close the editor. If you are providing your own certificate, you can specify it in the property custom_cert_name. Create the proxy cluster In Cloud Shell, create a Cloud Storage bucket: gcloud storage buckets create gs://${PROJECT_ID}-knox --location=${REGION} This bucket provides the configurations you created in the previous section to the Apache Knox initialization action. 
Copy all the files from the Apache Knox initialization action folder to the bucket: gcloud storage cp ${KNOX_INIT_FOLDER}/* gs://${PROJECT_ID}-knox --recursive Export all the variables required to create the cluster: export PROXY_CLUSTER=proxy-cluster export PROJECT_ID=$(gcloud info --format='value(config.project)') export REGION=us-central1 export ZONE=us-central1-b In this step, some of the variables that you set before are repeated so that you can make modifications as required. Create the proxy cluster: gcloud dataproc clusters create ${PROXY_CLUSTER} \ --region ${REGION} \ --zone ${ZONE} \ --service-account=cluster-service-account@${PROJECT_ID}.iam.gserviceaccount.com \ --initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/knox/knox.sh \ --metadata knox-gw-config=gs://${PROJECT_ID}-knox Verify the connection through the proxy After the proxy cluster is created, use SSH to connect to its master node from Cloud Shell: gcloud compute ssh --zone ${ZONE} ${PROXY_CLUSTER}-m From the terminal of the proxy cluster's master node, run the following query: beeline -u "jdbc:hive2://localhost:8443/;\ ssl=true;sslTrustStore=/usr/lib/knox/data/security/keystores/gateway-client.jks;trustStorePassword=secret;\ transportMode=http;httpPath=gateway/hive-us-transactions/hive"\ -e "SELECT SubmissionDate, TransactionType FROM transactions LIMIT 10;"\ -n admin -p admin-password This command has the following properties: The beeline command uses localhost instead of the internal DNS name because the certificate that you generated when you configured Apache Knox specifies localhost as the host name. If you are using your own DNS name or certificate, use the corresponding host name. The port is 8443, which corresponds to the Apache Knox default SSL port. The line that begins with ssl=true enables SSL and provides the path and password for the SSL Trust Store to be used by client applications such as Beeline. The transportMode line indicates that the request should be sent over HTTP and provides the path for the HiveServer2 service. The path is composed of the keyword gateway, the topology name that you defined in a previous section, and the service name configured in the same topology, in this case hive. The -e parameter provides the query to run on Hive. If you omit this parameter, you open an interactive session in the Beeline command-line tool. The -n parameter provides the user identity and the -p parameter provides the password. In this step, you are using the default Hive admin user. In the next sections, you create an analyst user identity and set up credentials and authorization policies for this user. Add a user to the authentication store By default, Apache Knox includes an authentication provider that is based on Apache Shiro. This authentication provider is configured with BASIC authentication against an ApacheDS LDAP store. In this section, you add a sample data analyst user identity sara to the authentication store.
From the terminal on the proxy cluster's master node, install the LDAP utilities: sudo apt-get install ldap-utils Create an LDAP Data Interchange Format (LDIF) file for the new user sara: export USER_ID=sara printf '%s\n'\ "# entry for user ${USER_ID}"\ "dn: uid=${USER_ID},ou=people,dc=hadoop,dc=apache,dc=org"\ "objectclass:top"\ "objectclass:person"\ "objectclass:organizationalPerson"\ "objectclass:inetOrgPerson"\ "cn: ${USER_ID}"\ "sn: ${USER_ID}"\ "uid: ${USER_ID}"\ "userPassword:${USER_ID}-password"\ > new-user.ldif Add the user ID to the LDAP directory: ldapadd -f new-user.ldif \ -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' \ -w 'admin-password' \ -H ldap://localhost:33389 The -D parameter specifies the distinguished name (DN) to bind as when ldapadd accesses the directory. The DN must be a user identity that is already in the directory, in this case the user admin. Verify that the new user is in the authentication store: ldapsearch -b "uid=${USER_ID},ou=people,dc=hadoop,dc=apache,dc=org" \ -D 'uid=admin,ou=people,dc=hadoop,dc=apache,dc=org' \ -w 'admin-password' \ -H ldap://localhost:33389 The user details appear in your terminal. Copy and save the internal DNS name of the proxy master node: hostname -A | tr -d '[:space:]'; echo You use this name in the next section as the proxy-master-internal-dns-name placeholder to configure the LDAP synchronization. Exit the terminal on the node: exit Setting up authorization In this section, you configure identity synchronization between the LDAP service and Ranger. Sync user identities into Ranger To ensure that Ranger policies apply to the same user identities as Apache Knox, you configure the Ranger UserSync daemon to sync the identities from the same directory. In this example, you connect to the local LDAP directory that is available by default with Apache Knox. However, in a production environment, we recommend that you set up an external identity directory. For more information, see the Apache Knox User's Guide and the Google Cloud documentation for Cloud Identity, Managed Active Directory, and Federated AD. Using SSH, connect to the master node of the backend cluster that you created: export BACKEND_CLUSTER=backend-cluster gcloud compute ssh --zone ${ZONE} ${BACKEND_CLUSTER}-m In the terminal, edit the UserSync configuration file: sudo vi /etc/ranger/usersync/conf/ranger-ugsync-site.xml Set the values of the following LDAP properties. Make sure that you modify the user properties and not the group properties, which have similar names.
ranger.usersync.sync.source = ldap
ranger.usersync.ldap.url = ldap://<proxy-master-internal-dns-name>:33389
ranger.usersync.ldap.binddn = uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
ranger.usersync.ldap.ldapbindpassword = admin-password
ranger.usersync.ldap.user.searchbase = dc=hadoop,dc=apache,dc=org
ranger.usersync.source.impl.class = org.apache.ranger.ldapusersync.process.LdapUserGroupBuilder
Replace the <proxy-master-internal-dns-name> placeholder with the internal DNS name of the proxy master node, which you retrieved in the last section. These properties are a subset of a full LDAP configuration that syncs both users and groups. For more information, see How to integrate Ranger with LDAP. Save the file and close the editor. Restart the ranger-usersync daemon: sudo service ranger-usersync restart Run the following command: grep sara /var/log/ranger-usersync/* If the identities are synced, you see at least one log line for the user sara. Creating Ranger policies In this section, you configure a new Hive service in Ranger.
You also set up and test a Ranger policy to limit access to the Hive data for a specific identity. Configure the Ranger service From the terminal of the backend cluster's master node, edit the Ranger Hive configuration: sudo vi /etc/hive/conf/ranger-hive-security.xml Edit the value of the ranger.plugin.hive.service.name property:
name: ranger.plugin.hive.service.name
value: ranger-hive-service-01
description: Name of the Ranger service containing policies for this YARN instance
Save the file and close the editor. Restart the HiveServer2 service: sudo service hive-server2 restart You are ready to create Ranger policies. Set up the service in the Ranger Admin console In the Google Cloud console, go to the Dataproc page. Click your backend cluster name, and then click Web Interfaces. Because you created your cluster with Component Gateway, you see a list of the Hadoop components that are installed in your cluster. Click the Ranger link to open the Ranger console. Log in to Ranger with the user admin and your Ranger admin password. The Ranger console shows the Service Manager page with a list of services. Click the plus sign in the HIVE group to create a new Hive service. In the form, set the following values: Service name: ranger-hive-service-01. You previously defined this name in the ranger-hive-security.xml configuration file. Username: admin Password: admin-password jdbc.driverClassName: keep the default value, org.apache.hive.jdbc.HiveDriver jdbc.url: jdbc:hive2://<backend-master-internal-dns-name>:10000/;transportMode=http;httpPath=cliservice Replace the <backend-master-internal-dns-name> placeholder with the internal DNS name that you retrieved in a previous section. Click Add. Each Ranger plugin installation supports a single Hive service. An easy way to configure additional Hive services is to start up additional backend clusters. Each cluster has its own Ranger plugin. These clusters can share the same Ranger DB, so that you have a unified view of all the services whenever you access the Ranger Admin console from any of those clusters. Set up a Ranger policy with limited permissions The policy gives the sample analyst LDAP user sara access to specific columns of the Hive table. On the Service Manager window, click the name of the service you created. The Ranger Admin console shows the Policies window. Click Add New Policy. With this policy, you give sara the permission to see only the columns submissionDate and transactionType from the table transactions. In the form, set the following values: Policy name: any name, for example allow-tx-columns Database: default Table: transactions Hive column: submissionDate, transactionType Allow conditions: Select user: sara Permissions: select At the bottom of the screen, click Add. Test the policy with Beeline In the master node terminal, start the Beeline command-line tool with the user sara: beeline -u "jdbc:hive2://localhost:10000/;transportMode=http;httpPath=cliservice sara user-password" Although the Beeline command-line tool doesn't enforce the password, you must provide a password to run the preceding command. Run the following query to verify that Ranger blocks it: SELECT * FROM transactions LIMIT 10; The query includes the column transactionAmount, which sara doesn't have permission to select. A Permission denied error appears. Verify that Ranger allows the following query: SELECT submissionDate, transactionType FROM transactions LIMIT 10; Exit the Beeline command-line tool: !quit Exit the terminal: exit In the Ranger console, click the Audit tab. Both denied and allowed events are displayed.
You can filter the events by the service name you previously defined, for example, ranger-hive-service-01. Connecting from a BI tool The final step in this tutorial is to query the Hive data from Tableau Desktop. Create a firewall rule Copy and save your public IP address. In Cloud Shell, create a firewall rule that opens TCP port 8443 for ingress from your workstation: gcloud compute firewall-rules create allow-knox \ --project=${PROJECT_ID} --direction=INGRESS --priority=1000 \ --network=default --action=ALLOW --rules=tcp:8443 \ --target-tags=knox-gateway \ --source-ranges=<your-public-ip>/32 Replace the <your-public-ip> placeholder with the public IP address that you saved. Apply the network tag from the firewall rule to the proxy cluster's master node: gcloud compute instances add-tags ${PROXY_CLUSTER}-m --zone=${ZONE} \ --tags=knox-gateway Create an SSH tunnel This procedure is only necessary if you're using a self-signed certificate valid for localhost. If you are using your own certificate or your proxy master node has its own external DNS name, you can skip to Connect to Hive. In Cloud Shell, generate the command to create the tunnel: echo "gcloud compute ssh ${PROXY_CLUSTER}-m \ --project ${PROJECT_ID} \ --zone ${ZONE} \ -- -L 8443:localhost:8443" Run gcloud init on your workstation to authenticate your user account and grant access permissions. Open a terminal on your workstation. Create an SSH tunnel to forward port 8443: copy the command generated in the first step, paste it into the workstation terminal, and then run the command. Leave the terminal open so that the tunnel remains active. Connect to Hive On your workstation, install the Hive ODBC driver. Open Tableau Desktop, or restart it if it was open. On the home page under Connect / To a Server, select More. Search for and then select Cloudera Hadoop. Using the sample data analyst LDAP user sara as the user identity, fill out the fields as follows: Server: If you created a tunnel, use localhost. If you didn't create a tunnel, use the external DNS name of your proxy master node. Port: 8443 Type: HiveServer2 Authentication: Username and Password Username: sara Password: sara-password HTTP Path: gateway/hive-us-transactions/hive Require SSL: yes Click Sign In. Query Hive data On the Data Source screen, click Select Schema and search for default. Double-click the default schema name. The Table panel loads. In the Table panel, double-click New Custom SQL. The Edit Custom SQL window opens. Enter the following query, which selects the date and transaction type from the transactions table: SELECT `submissiondate`, `transactiontype` FROM `default`.`transactions` Click OK. The metadata for the query is retrieved from Hive. Click Update Now. Tableau retrieves the data from Hive because sara is authorized to read these two columns from the transactions table. To try to select all columns from the transactions table, in the Table panel, double-click New Custom SQL again. The Edit Custom SQL window opens. Enter the following query: SELECT * FROM `default`.`transactions` Click OK. The following error message appears: Permission denied: user [sara] does not have [SELECT] privilege on [default/transactions/*]. Because sara doesn't have authorization from Ranger to read the transactionAmount column, this message is expected. This example shows how you can limit what data Tableau users can access. To see all the columns, repeat the steps using the user admin. Close Tableau and your terminal window.
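If you want to reproduce the same column-level check from the command line instead of (or in addition to) Tableau, a hypothetical variation of the earlier proxy verification command can be run on the proxy cluster's master node, this time authenticating as the LDAP user sara. The connection string, trust store path, and passwords are the ones used earlier in this tutorial:

# Hypothetical cross-check from the proxy cluster's master node: the same Ranger policy applies,
# so this restricted column list succeeds, while a SELECT * query would be denied.
beeline -u "jdbc:hive2://localhost:8443/;ssl=true;sslTrustStore=/usr/lib/knox/data/security/keystores/gateway-client.jks;trustStorePassword=secret;transportMode=http;httpPath=gateway/hive-us-transactions/hive" \
    -n sara -p sara-password \
    -e "SELECT SubmissionDate, TransactionType FROM transactions LIMIT 10;"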
Clean up To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. What's next Read the first part of this series: Architecture to connect your Visualization Software to Hadoop on Google Cloud. Read the Hadoop migration Security Guide. Learn how to migrate on-premises Hadoop infrastructure to Google Cloud. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Connectivity.txt b/Connectivity.txt new file mode 100644 index 0000000000000000000000000000000000000000..2576da0e06cd849cfc9a12f0e4f60665a3d664d8 --- /dev/null +++ b/Connectivity.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ccn-distributed-apps-design/connectivity +Date Scraped: 2025-02-23T11:50:52.171Z + +Content: +Home Docs Cloud Architecture Center Send feedback Network segmentation and connectivity for distributed applications in Cross-Cloud Network Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-30 UTC This document is part of a design guide series for Cross-Cloud Network. The series consists of the following parts: Cross-Cloud Network for distributed applications Network segmentation and connectivity for distributed applications in Cross-Cloud Network (this document) Service networking for distributed applications in Cross-Cloud Network Network security for distributed applications in Cross-Cloud Network This part explores the network segmentation structure and connectivity, which is the foundation of the design. This document explains the phases in which you make the following choices: The overall network segmentation and project structure. Where you place your workload. How your projects are connected to external on-premises and other cloud provider networks, including the design for connectivity, routing, and encryption. How your VPC networks are connected internally to each other. How your Google Cloud VPC subnets are connected to each other and to other networks, including how you set up service reachability and DNS. 
Network segmentation and project structure During the planning stage, you must decide between one of two project structures: A consolidated infrastructure host project, in which you use a single infrastructure host project to manage all networking resources for all applications Segmented host projects, in which you use an infrastructure host project in combination with a different host project for each application During the planning stage, we recommend that you also decide the administrative domains for your workload environments. Scope the permissions for your infrastructure administrators and developers based on the principle of least privilege, and scope application resources into different application projects. Because infrastructure administrators need to set up connectivity to share resources, infrastructure resources can be handled within an infrastructure project. For example, to set up connectivity to shared infrastructure resources, infrastructure administrators can use an infrastructure project to handle those shared resources. At the same time, the development team might manage their workloads in one project, and the production team might manage their workloads in a separate project. Developers would then use the infrastructure resources in the infrastructure project to create and manage resources, services, load balancing, and DNS routing policies for their workloads. In addition, you must decide how many VPC networks you will implement initially and how they will be organized in your resource hierarchy. For details about how to choose a resource hierarchy, see Decide a resource hierarchy for your Google Cloud landing zone. For details about how to choose the number of VPC networks, see Deciding whether to create multiple VPC networks. For the Cross-Cloud Network, we recommend using the following VPCs: One or more application VPCs to host the resources for the different applications. One or more transit VPCs, where all external connectivity is handled. One or more services-access VPCs, which can be used to consolidate the deployment of private access to published services. The following diagram shows a visual representation of the recommended VPC structure that was just described. You can use the VPC structure shown in the diagram with either a consolidated or segmented project structure, as described in subsequent sections. The diagram shown here doesn't show connectivity between the VPC networks. Consolidated infrastructure host project You can use a consolidated infrastructure host project to manage all networking resources such as VPC networks and subnets, Network Connectivity Center hubs, VPC Network Peering, and load balancers. Multiple application Shared VPCs with their corresponding application service projects can be created in the infrastructure host project to match the organization structure. Use multiple application service projects to delegate resource administration. All networking across all application VPCs is billed to the consolidated infrastructure host project. For this project structure, many application service projects can share a smaller number of application VPCs. The following diagram provides a visual representation of the consolidated infrastructure host project and multiple application service projects that were just described. The diagram does not show connectivity among all projects. Segmented host projects In this pattern, each group of applications has its own application host project and VPC networks. 
Multiple application service projects can be attached to the host project. Billing for network services is split between the infrastructure host project and application host projects. Infrastructure charges are billed to the infrastructure host project and network charges, such as those for data transfer for applications, are billed to each application host project. The following diagram provides a visual representation of the multiple host projects and multiple application service projects that were just described. The diagram does not show connectivity among all projects. Workload placement Many connectivity choices depend upon the regional locations of your workloads. For guidance on placing workloads, see Best practices for Compute Engine regions selection. You should decide where your workloads will be before choosing connectivity locations. External and hybrid connectivity This section describes the requirements and recommendations for the following connectivity paths: Private connections to other cloud providers Private connections to on-premises data centers Internet connectivity for workloads, particularly outbound connectivity Cross-Cloud Network involves the interconnection of multiple cloud networks or on-premises networks. External networks can be owned and managed by different organizations. These networks physically connect to each other at one or more network-to-network interfaces (NNIs). The combination of NNIs must be designed, provisioned, and configured for performance, resiliency, privacy, and security. For modularity, reusability, and the ability to insert security NVAs, place external connections and routing in a transit VPC, which then serves as a shared connectivity service for other VPCs. Routing policies for resiliency, failover, and path preference across domains can be configured once in the transit VPC and leveraged by many other VPC networks. The design of the NNIs and the external connectivity is used later for Internal connectivity and VPC networking. The following diagram shows a transit VPC serving as a shared connectivity service for other VPCs, which are connected using VPC Network Peering, Network Connectivity Center, or HA VPN. For illustrative simplicity, the diagram shows a single transit VPC, but you can use multiple transit VPCs for connectivity in different regions. Private connections to other cloud providers If you have services running in other cloud service provider (CSP) networks that you want to connect to your Google Cloud network, you can connect to them over the internet or through private connections. We recommend private connections. When choosing options, consider throughput, privacy, cost, and operational viability. To maximize throughput while enhancing privacy, use a direct high-speed connection between cloud networks. Direct connections remove the need for intermediate physical networking equipment. We recommend that you use Cross-Cloud Interconnect, which provides these direct connections, as well as MACsec encryption and a throughput rate of up to 100 Gbps per link. If you can't use Cross-Cloud Interconnect, you can use Dedicated Interconnect or Partner Interconnect through a colocation facility. Select the locations where you connect to the other CSPs based on the location's proximity to the target regions. 
For location selection, consider the following: Check the list of locations: For Cross-Cloud Interconnect, check the list of locations that are available for both Google Cloud and CSPs (availability varies by cloud provider). For Dedicated Interconnect or Partner Interconnect, choose a low-latency location for the colocation facility. Evaluate the latency between the given point of presence (POP) edge and the relevant region in each CSP. To maximize the reliability of your cross-cloud connections, we recommend a configuration that supports a 99.99% uptime SLA for production workloads. For details, see Cross-Cloud Interconnect High availability, Establish 99.99% availability for Dedicated Interconnect, and Establish 99.99% availability for Partner Interconnect. If you don't require high bandwidth between different CSPs, it's possible to use VPN tunnels. This approach can help you get started, and you can upgrade to Cross-Cloud Interconnect when your distributed applications use more bandwidth. VPN tunnels can also achieve a 99.99% SLA. For details, see HA VPN topologies. Private connections to on-premises data centers For connectivity to private data centers, you can use one of the following hybrid connectivity options: Dedicated Interconnect Partner Interconnect HA VPN The routing considerations for these connections are similar to those for Private connections to other cloud providers. The following diagram shows connections to on-premises networks and how on-premises routers can connect to Cloud Router through a peering policy: Inter-domain routing with external networks To increase resiliency and throughput between the networks, use multiple paths to connect the networks. When traffic is transferred across network domains, it must be inspected by stateful security devices. As a result, flow symmetry at the boundary between the domains is required. For networks that transfer data across multiple regions, the cost and service quality level of each network might differ significantly. You might decide to use some networks over others, based on these differences. Set up your inter-domain routing policy to meet your requirements for inter-regional transit, traffic symmetry, throughput, and resiliency. The configuration of the inter-domain routing policies depends on the available functions at the edge of each domain. Configuration also depends on how the neighboring domains are structured from an autonomous system and IP addressing (subnetting) perspective across different regions. To improve scalability without exceeding prefix limits on edge devices, we recommend that your IP addressing plan results in fewer aggregate prefixes for each region and domain combination. When designing inter-regional routing, consider the following: Google Cloud VPC networks and Cloud Router both support global cross-region routing. Other CSPs might have regional VPCs and Border Gateway Protocol (BGP) scopes. For details, see the documentation from your other CSP. Cloud Router automatically advertises routes with predetermined path preferences based on regional proximity. This routing behavior is dependent on the configured dynamic routing mode of the VPC. You might need to override these preferences, for the routing behavior that you want. Different CSPs support different BGP and Bidirectional Forwarding Detection (BFD) functions, and Google's Cloud Router also has specific route policy capabilities as described in Establish BGP sessions. 
Different CSPs might use different BGP tie-breaking attributes to dictate preference for routes. Consult your CSP's documentation for details. Single region inter-domain routing We suggest that you start with single-region inter-domain routing, and then build on it to create multi-region inter-domain routing. Designs that use Cloud Interconnect are required to have a minimum of two connection locations that are in the same region but different edge availability domains. Decide whether to configure these duplicate connections in an active/active or active/passive design: Active/active uses Equal Cost Multi-Path (ECMP) routing to aggregate the bandwidth of both paths and use them simultaneously for inter-domain traffic. Cloud Interconnect also supports the use of LACP-aggregated links to achieve up to 200 Gbps of aggregate bandwidth per path. Active/passive forces one link to act as a standby, taking on traffic only if the active link is interrupted. We recommend an active/active design for intra-regional links. However, certain on-premises networking topologies combined with the use of stateful security functions can necessitate an active/passive design. Cloud Router is instantiated across multiple zones, which provides higher resiliency than a single element would provide. The following diagram shows how all resilient connections converge at a single Cloud Router within a region. This design can support a 99.9% availability SLA within a single metropolitan area when following the guidelines to Establish 99.9% availability for Dedicated Interconnect. The following diagram shows two on-premises routers connected redundantly to the managed Cloud Router service in a single region: Multi-region inter-domain routing To provide backup connectivity, networks can peer at multiple geographical areas. By connecting the networks in multiple regions, the availability SLA can increase to 99.99%. The following diagram shows the 99.99% SLA architectures. It shows on-premises routers in two different locations connected redundantly to the managed Cloud Router services in two different regions. Beyond resiliency, the multi-regional routing design should accomplish flow symmetry. The design should also indicate the preferred network for inter-regional communications, which you can do with hot-potato and cold-potato routing. Pair cold-potato routing in one domain with hot-potato routing in the peer domain. For the cold-potato domain, we recommend using the Google Cloud network domain, which provides global VPC routing functionality. Flow symmetry isn't always mandatory, but flow asymmetry can cause issues with stateful security functions. The following diagram shows how you can use hot-potato and cold-potato routing to specify your preferred inter-regional transit network. In this case, traffic from prefixes X and Y stays on the originating network until it gets to the region closest to the destination (cold-potato routing). Traffic from prefixes A and B switches to the other network in the originating region, and then travels across the other network to the destination (hot-potato routing). Encryption of inter-domain traffic Unless otherwise noted, traffic is not encrypted on Cloud Interconnect connections between different CSPs or between Google Cloud and on-premises data centers.
If your organization requires encryption for this traffic, you can use the following capabilities: MACsec for Cloud Interconnect: Encrypts traffic over Cloud Interconnect connections between your routers and Google's edge routers. For details, see MACsec for Cloud Interconnect overview. HA VPN over Cloud Interconnect: Uses multiple HA VPN tunnels to be able to provide the full bandwidth of the underlying Cloud Interconnect connections. The HA VPN tunnels are IPsec encrypted and are deployed over Cloud Interconnect connections that may also be MACsec encrypted. In this configuration, Cloud Interconnect connections are configured to allow only HA VPN traffic. For details, see HA VPN over Cloud Interconnect overview. Internet connectivity for workloads For both inbound and outbound internet connectivity, reply traffic is assumed to follow statefully the reverse direction of the original request's direction. Generally, features that provide inbound internet connectivity are separate from outbound internet features, with the exception of external IP addresses which provide both directions simultaneously. Inbound internet connectivity Inbound internet connectivity is mainly concerned with providing public endpoints for services hosted on the cloud. Examples of this include internet connectivity to web application servers and game servers hosted on Google Cloud. The main features providing inbound internet connectivity are Google's Cloud Load Balancing products. The design of a VPC network is independent of its ability to provide inbound internet connectivity: Routing paths for external passthrough Network Load Balancers provide connectivity between clients and backend VMs. Routing paths between Google Front Ends (GFEs) and backends provide connectivity between GFE proxies for global external Application Load Balancers or global external proxy Network Load Balancers and backend VMs. A proxy-only subnet provides connectivity between Envoy proxies for regional external Application Load Balancers or regional external proxy Network Load Balancers and backend VMs. Outbound internet connectivity Examples of outbound internet connectivity (where the initial request originates from the workload to an internet destination) include workloads accessing third-party APIs, downloading software packages and updates, and sending push notifications to webhook endpoints on the internet. For outbound connectivity, you can use Google Cloud built-in options, as described in Building internet connectivity for private VMs. Alternatively, you can use central NGFW NVAs as described in Network security. The main path to provide outbound internet connectivity is the default internet gateway destination in the VPC routing table, which is often the default route in Google VPCs. Both external IPs and Cloud NAT (Google Cloud's managed NAT service), require a route pointing at the default internet gateway of the VPC. Therefore, VPC routing designs that override the default route must provide outbound connectivity through other means. For details, see Cloud Router overview. To secure outbound connectivity, Google Cloud offers both Cloud Next Generation Firewall enforcement and Secure Web Proxy to provide deeper filtering on HTTP and HTTPS URLs. In all cases, however, the traffic follows the default route out to the default internet gateway or through a custom default route in the VPC routing table. 
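As a concrete illustration of the outbound path described above, the following is a minimal, hypothetical sketch of providing outbound internet access for VMs that have no external IP addresses, using a Cloud Router and Cloud NAT in one region of an application VPC. The network, router, and gateway names and the region are placeholders, not values from this guide:

# Hypothetical sketch: Cloud NAT for outbound-only internet access in one region.
# This still relies on the VPC having a route to the default internet gateway.
gcloud compute routers create app-vpc-router \
    --network=app-vpc --region=us-central1

gcloud compute routers nats create app-vpc-nat \
    --router=app-vpc-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges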
Using your own IPs You can use Google-owned IPv4 addresses for internet connectivity or you can use Bring your own IP addresses (BYOIP) to use an IPv4 space that your organization owns. Most Google Cloud products that require an internet-routable IP address support using BYOIP ranges instead. You can also control the reputation of the IP space through the exclusive use of it. BYOIP helps with portability of connectivity, and can save IP address costs. Internal connectivity and VPC networking With the external and hybrid connectivity service configured, resources in the transit VPC can reach the external networks. The next step is to make this connectivity available to the resources that are hosted in other VPC networks. The following diagram shows the general structure of the VPC networks, regardless of how you enabled external connectivity. It shows a transit VPC that terminates external connections and hosts a Cloud Router in every region. Each Cloud Router receives routes from its external peers over the NNIs in each region. Application VPCs are connected to the transit VPC so they can share external connectivity. In addition, the transit VPC functions as a hub for the spoke VPCs. The spoke VPCs can host applications, services, or a combination of both. For optimal performance and scalability with the built-in cloud networking services, VPCs should be connected by using Network Connectivity Center as described in Inter-VPC connectivity with Network Connectivity Center. Network Connectivity Center provides the following: Transitive access to Private Service Connect L4 and L7 endpoints and their associated services Transitive access to on-premises networks learned over BGP VPC network scale of 250 networks per hub If you want to insert network virtual appliances (NVAs) for firewalling or other network functions, you have to use VPC Network Peering. Perimeter firewalls can remain on external networks. If NVA insertion is a requirement, then use the Inter-VPC connectivity with VPC Network Peering pattern to interconnect your VPCs. Configure DNS forwarding and peering in the transit VPC as well. For details, see the DNS infrastructure design section. The following sections discuss the possible designs for hybrid connectivity that support base IP connectivity as well as API access point deployments. Inter-VPC connectivity with Network Connectivity Center We recommend that the application VPCs, transit VPCs, and services-access VPCs all connect using Network Connectivity Center VPC spokes. Service consumer access points are deployed in services-access VPCs when they need to be reachable from other networks (other VPCs or external networks). You can deploy service consumer access points in application VPCs if these access points need to be reached only from within the application VPC. If you need to provide access to services behind private services access, create a services-access VPC that is connected to a transit VPC using HA VPN. Then, connect the managed services VPC to the services-access VPC. The HA VPN enables transitive routing from other networks. The design is a combination of two connectivity types: Network Connectivity Center: provides connectivity between transit VPCs, application VPCs, and services-access VPCs that host Private Service Connect endpoints. HA VPN inter-VPC connections: provide transitive connectivity for private services access subnets hosted on services-access VPCs. These services-access VPCs shouldn't be added as spokes of the Network Connectivity Center hub. A minimal sketch of creating such a hub and attaching a VPC spoke follows below.
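To make the hub-and-spoke recommendation above more tangible, here is a minimal, hypothetical sketch of creating a Network Connectivity Center hub and attaching one VPC as a spoke. The hub, spoke, project, and network names are placeholders; a real deployment would repeat the spoke step for each application, transit, and services-access VPC that belongs to the hub:

# Hypothetical sketch: create a Network Connectivity Center hub and attach one VPC spoke.
gcloud network-connectivity hubs create cross-cloud-hub \
    --description="Hub for application, transit, and services-access VPC spokes"

gcloud network-connectivity spokes linked-vpc-network create app-vpc-spoke \
    --hub=cross-cloud-hub \
    --global \
    --vpc-network=projects/PROJECT_ID/global/networks/app-vpc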
When you combine these connectivity types, plan for the following considerations: Redistribution of VPC Network Peering and Network Connectivity Center peer subnets into dynamic routing (to the services-access VPC over HA VPN and to external networks over hybrid interconnections) Multi-regional routing considerations Propagation of dynamic routes into VPC Network Peering and Network Connectivity Center peering (from the services-access VPC over HA VPN and from external networks over hybrid interconnections) The following diagram shows a services-access VPC hosting private services access subnets connected to the transit VPC with HA VPN. The diagram also shows the application VPCs, transit VPCs and services-access VPCs hosting Private Service Connect consumer endpoints connected using Network Connectivity Center: The structure shown in the preceding diagram contains these components: External network: A data center or remote office where you have network equipment. This example assumes that the locations are connected together using an external network. Transit VPC network: A VPC network in the hub project that lands connections from on-premises and other CSPs, then serves as a transit path from other VPCs to on-premises and CSP networks. App VPCs: Projects and VPC networks hosting various applications. Consumer VPC for private services access: A VPC network hosting centralized access over private services access to services needed by applications in other networks. Managed services VPC: Services provided and managed by other entities, but made accessible to applications running in VPC networks. Consumer VPC for Private Service Connect: A VPC network hosting Private Service Connect access points to services hosted in other networks. Use Network Connectivity Center to connect the application VPCs to the Cloud Interconnect VLAN attachments and HA VPN instances in the transit VPCs. Make all of the VPCs into spokes of the Network Connectivity Center hub, and make the VLAN attachments and HA VPNs into hybrid spokes of the same Network Connectivity Center hub. Use the default Network Connectivity Center mesh topology to enable communication amongst all spokes (VPC and hybrid). This topology also enables communication between the application VPCs that are subject to Cloud NGFW policies. Any consumer service VPCs connected over HA VPN must not be spokes of the Network Connectivity Center hub. Private Service Connect endpoints can be deployed in Network Connectivity Center VPC spokes and don't require an HA VPN connection for cross-VPC transitivity when using Network Connectivity Center. Inter-VPC connectivity with VPC Network Peering When published service consumer access points are deployed in a services access VPC, we recommend that the application VPCs connect using VPC Network Peering to the transit VPC and that the services-access VPCs connect to the transit VPC over HA VPN. In this design, the transit VPC is the hub, and you deploy the consumer access points for private service endpoints in a services access VPC. The design is a combination of two connectivity types: VPC Network Peering: provides connectivity between the transit VPC and the application VPCs. HA VPN inter-VPC connections: provide transitive connectivity between the services-access VPCs and the transit VPC. 
When you combine these architectures, plan for the following considerations: Redistribution of VPC peer subnets into dynamic routing (to the services-access VPC over HA VPN and to external networks over hybrid connections) Multi-regional routing considerations Propagation of dynamic routes into VPC peering (from the services-access VPC over HA VPN and from external networks over hybrid connections) The following diagram shows a services-access VPC, which is connected to the transit VPC with HA VPN, and the application VPCs, which are connected to the transit VPC with VPC Network Peering: The structure shown in the preceding diagram contains these components: Customer location: A data center or remote office where you have network equipment. This example assumes that the locations are connected together using an external network. Metro: A metropolitan area containing one or more Cloud Interconnect edge availability domains. Cloud Interconnect connects to other networks in such metropolitan areas. Hub project: A project hosting at least one VPC network that serves as a hub to other VPC networks. Transit VPC: A VPC network in the hub project that lands connections from on-premises and other CSPs, then serves as a transit path from other VPCs to on-premises and CSP networks. App host projects and VPCs: Projects and VPC networks hosting various applications. Services-access VPC: A VPC network hosting centralized access to services needed by applications in the application VPC networks. Managed services VPC: Services provided and managed by other entities, but made accessible to applications running in VPC networks. For the VPC Network Peering design, when application VPCs need to communicate with each other, you can connect the application VPCs to a Network Connectivity Center hub as VPC spokes. This approach provides connectivity amongst all the VPCs in the Network Connectivity Center hub. Subgroups of communication can be created by using multiple Network Connectivity Center hubs. Any communication restrictions required among endpoints within a particular hub can be achieved using firewall policies. Workload security for east-west connections between application VPCs can use the Cloud Next Generation Firewall. For detailed guidance and configuration blueprints to deploy these connectivity types, see Hub-and-spoke network architecture. DNS infrastructure design In a hybrid environment, either Cloud DNS or an external (on-premises or CSP) provider can handle a DNS lookup. External DNS servers are authoritative for external DNS zones, and Cloud DNS is authoritative for Google Cloud zones. DNS forwarding must be enabled bidirectionally between Google Cloud and the external networks, and firewalls must be set to allow the DNS resolution traffic. If you use a Shared VPC for your services-access VPC, in which administrators of different application service projects can instantiate their own services, use cross-project binding of DNS zones. Cross-project binding enables the segmentation and delegation of the DNS namespace to the service project administrators. In the transit case, where external networks are communicating with other external networks through Google Cloud, the external DNS zones should be configured to forward requests directly to each other. The Google Cross-Cloud Network would provide connectivity for the DNS requests and replies to complete, but Google Cloud DNS is involved in forwarding any of the DNS resolution traffic between zones in external networks. 
Any firewall rules enforced in the Cross-Cloud Network must allow the DNS resolution traffic between the external networks. The following diagram shows a DNS design can be used with any of the hub-and-spoke VPC connectivity configurations proposed in this design guide: The preceding diagram shows the following steps in the design flow: On-premises DNS Configure your on-premises DNS servers to be authoritative for on-premises DNS zones. Configure DNS forwarding (for Google Cloud DNS names) by targeting the Cloud DNS inbound forwarding IP address, which is created through the inbound server policy configuration in the hub VPC. This configuration allows the on-premises network to resolve Google Cloud DNS names. Transit VPC - DNS Egress Proxy Advertise the Google DNS egress proxy range 35.199.192.0/19 to the on-premises network using the Cloud Routers. Outbound DNS requests from Google to on-premises are sourced from this IP address range. Transit VPC - Cloud DNS Configure an inbound server policy for inbound DNS requests from on-premises. Configure Cloud DNS forwarding zone (for on-premises DNS names) targeting on-premises DNS servers. Services-access VPC - Cloud DNS Configure the services DNS peering zone (for on-premises DNS names) setting the hub VPC as the peer network. DNS resolution for on-premises and service resources go through the hub VPC. Configure services DNS private zones in the services host project and attach the services Shared VPC, application Shared VPC, and hub VPC to the zone. This allows all hosts (on-premises and in all service projects) to resolve the services DNS names. App host project - Cloud DNS Configure an App DNS peering zone for on-premises DNS names setting the hub VPC as the peer network. DNS resolution for on-premises hosts go through the hub VPC. Configure App DNS private zones in App Host Project and attach the application VPC, services Shared VPC and hub VPC to the zone. This configuration allows all hosts (on-premises and in all service projects) to resolve the App DNS names. For more information, see Hybrid architecture using a hub VPC network connected to spoke VPC networks. What's next Design the service networking for Cross-Cloud Network applications. Deploy Cross-Cloud Network inter-VPC connectivity using Network Connectivity Center. Deploy Cross-Cloud Network inter-VPC connectivity using VPC Network Peering. Learn more about the Google Cloud products used in this design guide: VPC networks VPC Network Peering Private Service Connect Private services access Cloud Interconnect HA VPN For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthors: Victor Moreno | Product Manager, Cloud NetworkingGhaleb Al-habian | Network SpecialistDeepak Michael | Networking Specialist Customer EngineerOsvaldo Costa | Networking Specialist Customer EngineerJonathan Almaleh | Staff Technical Solutions ConsultantOther contributors: Zach Seils | Networking SpecialistChristopher Abraham | Networking Specialist Customer EngineerEmanuele Mazza | Networking Product SpecialistAurélien Legrand | Strategic Cloud EngineerEric Yu | Networking Specialist Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMark Schlagenhauf | Technical Writer, NetworkingMarwan Al Shawi | Partner Customer EngineerAmmett Williams | Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Consumer_Packaged_Goods.txt b/Consumer_Packaged_Goods.txt new file mode 100644 index 0000000000000000000000000000000000000000..739c271afff19a76dcf29ec438ad72b8d04539b7 --- /dev/null +++ b/Consumer_Packaged_Goods.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/cpg +Date Scraped: 2025-02-23T11:57:46.132Z + +Content: +Stay up to speed on how generative AI is transforming the CPG industry.Google Cloud for consumer packaged goodsWe're helping CPGs to digitally transform through data-driven innovation beginning with the consumer and extending through the value chain.Unlock consumer growth, capture new routes to market, and drive connected operations with Google Cloud.Talk with an expert2:41Watch how brands can deliver personalized content to target audiences with generative AITransform your CPG organization with Google CloudUnlock consumer growth with data-powered insightsImprove consumer experiencesWith Google Cloud, brands can unleash rich consumer insights and AI/ML for marketing mix optimization, predictive marketing, personalization at scale, and faster innovation.Enhance data analytics for better marketing, forecasting, and insights.“We're always looking to ensure a great consumer experience across all our categories, from healthcare to beauty products and much more. As a leader in analytics and AI, Google Cloud is a strategic partner helping us offer our consumers superior products and services that provide value in a secure and transparent way.”Vittorio Cretella, CIO, Procter & GambleExplore our data offerings:Customer Data PlatformLearn moreGoogle Marketing PlatformLearn moreData cloud for CPGLearn moreTransform go-to-market in omnichannel ecosystemsKeep up with demandPower omnichannel commerce transformation and sales efficiency.Google Cloud helps accelerate direct to consumer and omnichannel initiatives, improve channel insights, brand store execution, and optimize sales route to market.“The migration to Google Cloud was surprisingly swift. In 22 days, we were up and running.”Jack Constantine, Chief Digital Officer, LushExplore our omnichannel offerings:Product discoveryLearn moreDigital commerce migrationLearn moreShelf executionLearn moreDrive connected and efficient operationsImprove overall operationsWith Google Cloud, brands can enable a more intelligent supply chain, smart factory, sustainability, systemic agility, and productivity. Drive operational productivity through increased organizational collaboration and agility.“Partnering with Google Cloud on this transformation journey is an important step in helping us boldly lead the CPG industry in this data and relationship-driven era. 
This will help modernize our infrastructure and deepen our connection with our consumers to better anticipate their needs.”Jaime Montemayor, Chief Digital and Technology Officer, General MillsExplore our offerings:Demand forecastingLearn moreManufacturing Data EngineLearn moreCortex FrameworkLearn moreLead with sustainability Meet your sustainability goalsPower sustainable practices within your organization.Google Cloud empowers CPG brands with the technology to do more for our environment and our shared future.“We will now be able to process and combine complex sets of data like never before. The combination of these sustainability insights with our commercial sourcing information is a significant step-change in transparency, which is crucial to better protect and regenerate nature.”Dave Ingram, Chief Procurement Officer, Unilever Explore our offerings:Sustainable sourcingLearn moreClimate EngineLearn moreUnlock consumer growth with data-powered insightsUnlock consumer growth with data-powered insightsImprove consumer experiencesWith Google Cloud, brands can unleash rich consumer insights and AI/ML for marketing mix optimization, predictive marketing, personalization at scale, and faster innovation.Enhance data analytics for better marketing, forecasting, and insights.“We're always looking to ensure a great consumer experience across all our categories, from healthcare to beauty products and much more. As a leader in analytics and AI, Google Cloud is a strategic partner helping us offer our consumers superior products and services that provide value in a secure and transparent way.”Vittorio Cretella, CIO, Procter & GambleExplore our data offerings:Customer Data PlatformLearn moreGoogle Marketing PlatformLearn moreData cloud for CPGLearn moreTransform go-to-market in omnichannel ecosystemsTransform go-to-market in omnichannel ecosystemsKeep up with demandPower omnichannel commerce transformation and sales efficiency.Google Cloud helps accelerate direct to consumer and omnichannel initiatives, improve channel insights, brand store execution, and optimize sales route to market.“The migration to Google Cloud was surprisingly swift. In 22 days, we were up and running.”Jack Constantine, Chief Digital Officer, LushExplore our omnichannel offerings:Product discoveryLearn moreDigital commerce migrationLearn moreShelf executionLearn moreDrive connected and efficient operationsDrive connected and efficient operationsImprove overall operationsWith Google Cloud, brands can enable a more intelligent supply chain, smart factory, sustainability, systemic agility, and productivity. Drive operational productivity through increased organizational collaboration and agility.“Partnering with Google Cloud on this transformation journey is an important step in helping us boldly lead the CPG industry in this data and relationship-driven era. This will help modernize our infrastructure and deepen our connection with our consumers to better anticipate their needs.”Jaime Montemayor, Chief Digital and Technology Officer, General MillsExplore our offerings:Demand forecastingLearn moreManufacturing Data EngineLearn moreCortex FrameworkLearn moreLead with sustainability Lead with sustainability Meet your sustainability goalsPower sustainable practices within your organization.Google Cloud empowers CPG brands with the technology to do more for our environment and our shared future.“We will now be able to process and combine complex sets of data like never before. 
The combination of these sustainability insights with our commercial sourcing information is a significant step-change in transparency, which is crucial to better protect and regenerate nature.”Dave Ingram, Chief Procurement Officer, Unilever Explore our offerings:Sustainable sourcingLearn moreClimate EngineLearn moreHow AI is shaping the future of consumer packaged goodsMultimodal AI, AI agents, and other groundbreaking innovations are poised to revolutionize the CPG industry. Learn how these emerging trends could impact your business and gain a competitive edge.AI is shaping the future of CPGCheck out the latest happenings in CPGFarm to table, digitally enabledLearn how winning food CPGs are tackling digital transformation across the value chainHow a first-party data strategy helped Kraft Heinz deliver personalized experiencesRead how Kraft Heinz developed personalized marketing strategies on a foundation of first-party dataLipstick, mascara, APIs: What’s the secret to L’Oreal’s beauty-tech transformation?Learn how L’Oréal is using APIs to build an open, scalable, secure marketplace for beautyHow Bayer used machine learning to predict cold and flu trendsPredicting colds with predictive modeling—read how Bayer got ahead of cold and flu seasonHow L’Oreal built a data warehouse on Google CloudHear how L’Oreal built a serverless, multicloud warehouse based on Google CloudHow Google Cloud is driving growth with MondelezHear insights and examples of how data and AI/ML are driving the next phase of growth for MondelezView MoreTop brands who choose Google CloudDiscover why many of the world’s leading brands are choosing Google Cloud to help them innovate faster, make smarter decisions, and collaborate from anywhere.See all customersRecommended CPG industry partnersOur large ecosystem of trusted technology and services partners can help you solve today’s most complex business problems and unlock value across the supply chain.See all partnersDiscover insights. Find solutions.See how you can transform your CPG organization with Google Cloud.Go to consoleWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleSee all industry solutionsContinue browsingGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Contact_Center_as_a_Service.txt b/Contact_Center_as_a_Service.txt new file mode 100644 index 0000000000000000000000000000000000000000..4ce868f1371fdacb74a3097c4a2486b461bbe629 --- /dev/null +++ b/Contact_Center_as_a_Service.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/contact-center-ai-platform +Date Scraped: 2025-02-23T12:02:24.155Z + +Content: +Contact Center as a Service (CCAI Platform)Delight customers while lowering your costs with a turnkey, enterprise-grade, omnichannel contact center solution that is native to the cloud.Formerly known as Contact Center AI Platform (CCAIP), this Contact Center as a Service (CCaaS) solution is built on Google Cloud’s foundational security, privacy, and AI innovation. Contact usVIDEOSee how Contact Center AI Platform works1:35BenefitsSupercharging the customer and agent experienceElevate CX and increase CSATModern digital and in-app experiences, eliminating channel switching between voice, digital, and AI-powered self-service. Fully supported engagement across apps, digital touchpoints with preserved context. AI-driven personalization and 24/7 availability boost customer satisfaction and loyalty. Achieve high agent satisfactionDecrease interaction volume with predictive AI driven routing. 
Agents are enabled with insights and can respond faster with automated responses using Contact Center AI building blocks. Reduce costs by improving operational efficiencyPlatform simplification reduces agent training time. One system of record to action insights, makes agent productivity higher. Automation of routine tasks, streamlined workflows, and AI-powered insights reduce costs, improve productivity, and optimize resource allocation. Key featuresProvides intelligent customer experience across channels and devicesManage multiple channels, large volume of interactions, complexity of interactions and agent workforce challenges all in a single platform.Multimodal, omnichannel customer experienceWeb and mobile SDKs (iOS and Android) embed the support experience across all channels (VoIP) via WebRTC and PSTN, chat, SMS, email, and social for consistent, personalized customer experience across all devices.Embeddable ExperiencesProvides the capabilities to not only embed voice and the digital channel suite into your app, but the entire customer journey - from visually navigating where they want to go, interacting with agents, sharing digital media, and making secure payments.AI-driven routing and visual IVRAI powered operations for contact deflection, predictive routing, agent productivity and operational efficiency. Reduce handle time by providing deep interaction context and turn-by-turn guidance on the conversation flow based on customer intent.Provides customers with self-service via Web or Mobile interfaces. Functions just like an IVR or Virtual Agent would function, just via a visual interface.Workforce and quality managementNative and third-party workforce management and quality management capabilities to support call and screen recording and agent coaching, agent scheduling, forecasting, adherence, and performance optimization.Inbound & Outbound Voice, SMS, & ChatAbility to handle multiple channels simultaneously and pivot between channels during an interaction.Ready to get started? Contact usCustomersSee how organizations are transforming their experiences for both their customers and their agents.Blog postSegra implements Google CCAI Platform3-min readVideoHow Gen Digital uses CCAI for their enterprise04:57VideoHow Loveholidays scaled with Contact Center AI04:02See all customersPartnersRecommended partnersExtend the capabilities of Contact Center AI Platform with selected partners.Expand allSystems Integration PartnersTechnology PartnersDocumentationExplore common use cases for Contact Center AI PlatformFuture-proof your business with a platform built to address your most complex challenges.Google Cloud BasicsCCAI Platform basicsGuide on how to operate CCAI Platform.Learn moreGoogle Cloud BasicsBuild virtual agents with Dialogflow CX prebuilt agentsLearn how to rapidly build and deploy advanced virtual agents using prebuilt agents for various use cases in healthcare, retail, travel, and more industries.Learn moreNot seeing what you’re looking for?View documentationFor more information on the availability of certain Cloud AI services, see our current Service Level Agreements.Take the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesStart using Google CloudGo to consoleContinue browsingSee all productsExplore the MarketplaceFind solutions \ No newline at end of file diff --git a/Containers.txt b/Containers.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a3b78bd2270bd440552129dce31d1e888c04be7 --- /dev/null +++ b/Containers.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/containers +Date Scraped: 2025-02-23T12:02:55.393Z + +Content: +Containers in CloudA better way to develop and deploy applications.Contact salesView documentationEverything at Google runs in containersContainerization helps our development teams move fast, deploy software efficiently, and operate at an unprecedented scale. We’ve packaged over a decade’s worth of experience launching several billion containers per week into Google Cloud so that developers and businesses of any size can easily tap the latest in container innovation.Learn more about containers and their benefits10xGoogle contributes 10x more code than any other public cloudBirthplace of Kubernetes, production-grade container orchestrationBuilt on the learnings from our internal cluster management system Borg, Kubernetes—our open source automated container orchestration project—manages your machines and services for you. Kubernetes improves your reliability and reduces the time and resources you need to spend on DevOps, not to mention relief from the stress attached to these tasks.Learn more about Kubernetes and its benefitsContainer-native networkingGoogle provides the Kubernetes Defined Network—a network that is fully integrated with GKE. This means easy-to-use integrations with load balancing, routing, security, and network observability. You also benefit from Google’s global network with multi-cluster networking for the highest level of resilience and availability.Dive deeper into networking best practices0zero-trust security framework modeled by GoogleDefense in depth for KubernetesGoogle Cloud gives you the ecosystem you need to develop and roll out software faster without compromising security. With GKE, you can uniformly and seamlessly establish policy guardrails and let the system declaratively enforce them for you. You can also easily implement a defense-in-depth architecture with zero trust built into every layer.Explore the fundamentals of container securityThe complete container solutionOur modern, end-to-end platform is built on cloud-native principles and prioritizes speed, security, and flexibility in highly differentiated ways. Google Cloud supports your journey, from writing code to running, operating, and managing it.Every company should move to Google Kubernetes Engine if they want to be competitive—and they all would if they realized how easy it is to do.Brian Morgan, CTO, CatalantFind your Kubernetes strategyKubernetes, anywhereKubernetes gives you a consistent platform for all your application deployments, both legacy as well as cloud-native, while offering a service-centric view of all your environments.
By decoupling the apps from the underlying infrastructure, Kubernetes gives you the flexibility to run your services across multiple clouds, on-premises, and even edge locations.Learn more about Anthos15Kindustry-first 4-way auto scaling with 15K node clusters on GKEAutopilot your containers on Google With GKE, the premier Kubernetes solution managed by Google reliability engineers, you can quickly set up your cluster and keep it production ready, highly available, and up to date. GKE integrates seamlessly with all Google Cloud services, including our operations suite, Identity and Access Management, and networking infrastructure.Learn more about GKEContainer to production in secondsWrite code your way by deploying any code or container that listens for requests or events. Built upon the container and Knative open standards, Cloud Run enables portability of your applications and abstracts away all infrastructure management for a simple, fully managed developer experience.Learn more about Cloud Run98%of users deploy an application on their first try in < 5 mins 14q14 quadrillion metric points handled by observability platformsOut-of-the-box logging and monitoringObservability is available with no configuration through GKE. Logs and metrics automatically flow to Cloud Logging and Cloud Monitoring where you can perform deep analyses, troubleshoot, set up alerts, create SLOs, and more. Learn more about Google Cloud’s operations suiteGet started with containers with Google CloudDiscover what sets Google Kubernetes Engine apartOrchestrate at Google scaleExplore best practices and tutorials with Kubernetes Engine docsJump in and get started todayWhat are containers and their benefits? Dive deeper into containersTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all products \ No newline at end of file diff --git a/Content_overview(1).txt b/Content_overview(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..91b192b568d1eae9c5e880863896919ae226d6ef --- /dev/null +++ b/Content_overview(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ai-ml +Date Scraped: 2025-02-23T11:45:46.899Z + +Content: +Home Docs Cloud Architecture Center Send feedback AI and machine learning resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-10 UTC The Architecture Center provides content resources across a wide variety of AI and machine learning subjects. This page provides information to help you get started with generative AI, traditional AI, and machine learning. It also provides a list of all the AI and machine learning (ML) content in the Architecture Center. Get started The documents listed on this page can help you get started with designing, building, and deploying AI and ML solutions on Google Cloud. Explore generative AI Start by learning about the fundamentals of generative AI on Google Cloud, on the Cloud documentation site: To learn the stages of developing a generative AI application and explore the products and tools for your use case, see Build a generative AI application on Google Cloud.
To identify when generative AI, traditional AI (which includes prediction and classification), or a combination of both might suit your business use case, see When to use generative AI or traditional AI. To define an AI business use case with a business value-driven decision approach, see Evaluate and define your generative AI business use case. To address the challenges of model selection, evaluation, tuning, and development, see Develop a generative AI application. To explore a generative AI and machine learning blueprint that deploys a pipeline for creating AI models, see Build and deploy generative AI and machine learning models in an enterprise. The guide explains the entire AI development lifecycle, from preliminary data exploration and experimentation through model training, deployment, and monitoring. Browse the following example architectures that use generative AI: Generative AI document summarization Generative AI knowledge base Generative AI RAG with Cloud SQL Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL Infrastructure for a RAG-capable generative AI application using GKE Model development and data labeling with Google Cloud and Labelbox For information about Google Cloud generative AI offerings, see Vertex AI and running your foundation model on GKE. Design and build To select the best combination of storage options for your AI workload, see Design storage for AI and ML workloads in Google Cloud. Google Cloud provides a suite of AI and machine learning services to help you summarize documents with generative AI, build image processing pipelines, and innovate with generative AI solutions. Keep exploring The documents that are listed later on this page and in the left navigation can help you build an AI or ML solution. The documents are organized in the following categories: Generative AI: Follow these architectures to design and build generative AI solutions. Model training: Implement machine learning, federated learning, and personalized intelligent experiences. MLOps: Implement and automate continuous integration, continuous delivery, and continuous training for machine learning systems. AI and ML applications: Build applications on Google Cloud that are customized for your AI and ML workloads. AI and machine learning resources in the Architecture Center You can filter the following list of AI and machine learning resources by typing a product name or a phrase that's in the resource title or description. Architecture for MLOps using TensorFlow Extended, Vertex AI Pipelines, and Cloud Build This document describes the overall architecture of a machine learning (ML) system using TensorFlow Extended (TFX) libraries. It also discusses how to set up a continuous integration (CI), continuous delivery (CD), and continuous training (CT) for... Products used: Cloud Build Best practices for implementing machine learning on Google Cloud Introduces best practices for implementing machine learning (ML) on Google Cloud, with a focus on custom-trained models based on your data and code. Products used: Vertex AI, Vertex AI, Vertex AI, Vertex Explainable AI, Vertex Feature Store, Vertex Pipelines, Vertex Tensorboard Build an ML vision analytics solution with Dataflow and Cloud Vision API How to deploy a Dataflow pipeline to process large-scale image files with Cloud Vision. 
Dataflow stores the results in BigQuery so that you can use them to train BigQuery ML pre-built models. Products used: BigQuery, Cloud Build, Cloud Pub/Sub, Cloud Storage, Cloud Vision, Dataflow Build and deploy generative AI and machine learning models in an enterprise Describes the generative AI and machine learning (ML) blueprint, which deploys a pipeline for creating AI models. Confidential computing for data analytics, AI, and federated learning Learn about how you can use confidential computing in Google Cloud to encrypt data in use for confidential data analytics, AI machine learning, and federated learning. Products used: Confidential Computing Cross-silo and cross-device federated learning on Google Cloud Provides guidance to help you create a federated learning platform that supports either a cross-silo or cross-device architecture. Data science with R on Google Cloud: Exploratory data analysis Shows you how to get started with data science at scale with R on Google Cloud. This document is intended for those who have some experience with R and with Jupyter notebooks, and who are comfortable with SQL. Products used: BigQuery, Cloud Storage, Notebooks, Vertex AI Deploy and operate generative AI applications Discusses techniques for building and operating generative AI applications using MLOps and DevOps principles. Design storage for AI and ML workloads in Google Cloud Map the AI and ML workload stages to Google Cloud storage options, and select the recommended storage options for your AI and ML workloads. Products used: Cloud Storage, Filestore, Persistent Disk Geospatial analytics architecture Learn about Google Cloud geospatial capabilities and how you can use these capabilities in your geospatial analytics applications. Products used: BigQuery, Dataflow Guidelines for developing high-quality, predictive ML solutions Collates some guidelines to help you assess, ensure, and control quality in machine learning (ML) solutions. Implement two-tower retrieval for large-scale candidate generation Learn how to implement an end-to-end two-tower candidate generation workflow with Vertex AI. Products used: Cloud Storage, Vertex AI Infrastructure for a RAG-capable generative AI application using GKE Shows you how to design the infrastructure for a generative AI application with RAG using GKE. Products used: Cloud SQL, Cloud Storage, Google Kubernetes Engine (GKE) Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL Design infrastructure to run a generative AI application with retrieval-augmented generation using AlloyDB as the vector store. Products used: AlloyDB for PostgreSQL, BigQuery, Cloud Logging, Cloud Monitoring, Cloud Pub/Sub, Cloud Run, Cloud Storage, Document AI, Vertex AI Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search Design infrastructure for a generative AI application with retrieval-augmented generation (RAG) using the vector search capabilities of Vertex AI. Products used: BigQuery, Cloud Logging, Cloud Monitoring, Cloud Pub/Sub, Cloud Run, Cloud Storage, Vertex AI Jump Start Solution: AI/ML image processing on Cloud Functions Analyze images by using pretrained machine learning models and an image-processing app deployed on Cloud Functions. Jump Start Solution: Analytics lakehouse Unify data lakes and data warehouses by creating an analytics lakehouse using BigQuery to store, process, analyze, and activate data. 
Jump Start Solution: Data warehouse with BigQuery Build a data warehouse with a dashboard and visualization tool using BigQuery. Jump Start Solution: Generative AI document summarization Process and summarize documents on demand by using Vertex AI Generative AI and large language models (LLMs). Jump Start Solution: Generative AI Knowledge Base Extract question and answer pairs from documents on demand by using Vertex AI Generative AI and large language models (LLMs)... Jump Start Solution: Generative AI RAG with Cloud SQL Deploy a retrieval augmented generation (RAG) application with vector embeddings and Cloud SQL. MLOps: Continuous delivery and automation pipelines in machine learning Discusses techniques for implementing and automating continuous integration (CI), continuous delivery (CD), and continuous training (CT) for machine learning (ML) systems. Model development and data labeling with Google Cloud and Labelbox Provides guidance for building a standardized pipeline to help accelerate the development of ML models. Optimize AI and ML workloads with Parallelstore Learn how to optimize performance for artificial intelligence (AI) or machine learning (ML) workloads by using Parallelstore. Products used: Cloud Storage, Google Kubernetes Engine (GKE), Parallelstore, Virtual Private Cloud Use generative AI for utilization management A reference architecture for health insurance companies to automate prior authorization (PA) request processing and improve their utilization review (UR) processes. Products used: BigQuery, Cloud Logging, Cloud Monitoring, Cloud Pub/Sub, Cloud Run, Cloud Storage, Document AI, Vertex AI Use Vertex AI Pipelines for propensity modeling on Google Cloud Describes an example of an automated pipeline in Google Cloud that performs propensity modeling. Products used: BigQuery, Cloud Functions, Vertex AI Send feedback \ No newline at end of file diff --git a/Content_overview(10).txt b/Content_overview(10).txt new file mode 100644 index 0000000000000000000000000000000000000000..8068a77fab16b8f647b906d7e02b67ee9e23f91e --- /dev/null +++ b/Content_overview(10).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-iam +Date Scraped: 2025-02-23T11:54:57.167Z + +Content: +Home Docs Cloud Architecture Center Send feedback Security and IAM resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-10 UTC The Architecture Center provides content resources across a wide variety of security and identity and access management (IAM) subjects. Get started If you are new to Google Cloud or new to designing for security and IAM on Google Cloud, begin with these resources: Enterprise foundations blueprint Identity and access management overview Landing zone design in Google Cloud Security and IAM resources in the Architecture Center You can filter the following list of security and IAM resources by typing a product name or a phrase that's in the resource title or description. Automate malware scanning for files uploaded to Cloud Storage This document shows you how to build an event-driven pipeline that can help you automate the evaluation of files for malicious code. Products used: Cloud Logging, Cloud Run, Cloud Storage, Eventarc Best practices for mitigating compromised OAuth tokens for Google Cloud CLI Describes how to mitigate the impact of an attacker compromising the OAuth tokens that are used by the gcloud CLI. 
Products used: Google Cloud CLI Best practices for protecting against cryptocurrency mining attacks Cryptocurrency mining (also known as bitcoin mining) is the process used to create new cryptocoins and verify transactions. Cryptocurrency mining attacks occur when attackers who gain access to your environment might also exploit your resources to... Products used: Cloud Key Management Service, Compute Engine, Google Cloud Armor, Identity and Access Management Best practices for securing your applications and APIs using Apigee Describes best practices that can help you to secure your applications and APIs using Apigee API management, Google Cloud Armor, reCAPTCHA Enterprise, and Cloud CDN. Products used: Cloud CDN Build and deploy generative AI and machine learning models in an enterprise Describes the generative AI and machine learning (ML) blueprint, which deploys a pipeline for creating AI models. Build hybrid and multicloud architectures using Google Cloud Provides practical guidance on planning and architecting your hybrid and multi-cloud environments using Google Cloud. Products used: Cloud Load Balancing, Compute Engine, GKE Enterprise, Google Kubernetes Engine (GKE) Building internet connectivity for private VMs Describes options for connecting to and from the internet using Compute Engine resources that have private IP addresses. Products used: Cloud Load Balancing, Cloud NAT, Compute Engine, Identity-Aware Proxy C3 AI architecture on Google Cloud Develop applications using C3 AI and Google Cloud. Products used: Cloud Key Management Service, Cloud NAT, Cloud Storage, Google Kubernetes Engine (GKE), Virtual Private Cloud Confidential computing for data analytics, AI, and federated learning Learn about how you can use confidential computing in Google Cloud to encrypt data in use for confidential data analytics, AI machine learning, and federated learning. Products used: Confidential Computing Configure networks for FedRAMP and DoD in Google Cloud Provides configuration guidance to help you to comply with design requirements for FedRAMP High and DoD IL2, IL4, and IL5 when you deploy Google Cloud networking policies. Configuring SaaS data protection for Google Workspace data with Spin.AI How to configure SpinOne - All-in-One SaaS Data Protection with Cloud Storage. Controls to restrict access to individually approved APIs Many organizations have a compliance requirement to restrict network access to an explicitly approved list of APIs, based on internal requirements or as part of adopting Assured Workloads. On-premises, this requirement is often addressed with proxy... Data management with Cohesity Helios and Google Cloud How Cohesity works with Google Cloud Storage. Cohesity is a hyperconverged secondary storage system for consolidating backup, test/dev, file services, and analytic datasets onto a scalable data platform. Products used: Cloud Storage De-identification and re-identification of PII in large-scale datasets using Sensitive Data Protection Discusses how to use Sensitive Data Protection to create an automated data transformation pipeline to de-identify sensitive data like personally identifiable information (PII). Products used: BigQuery, Cloud Pub/Sub, Cloud Storage, Dataflow, Identity and Access Management, Sensitive Data Protection Decide the network design for your Google Cloud landing zone This document describes four common network designs for landing zones, and helps you choose the option that best meets your requirements.
Products used: VPC Service Controls, Virtual Private Cloud Deploy a secured serverless architecture using Cloud Run Provides guidance on how to help protect serverless applications that use Cloud Run by layering additional controls onto your existing foundation. Products used: Cloud Run Deploy a secured serverless architecture using Cloud Run functions Provides guidance on how to help protect serverless applications that use Cloud Functions (2nd gen) by layering additional controls onto your existing foundation. Products used: Cloud Functions Deploy an enterprise developer platform on Google Cloud Describes the enterprise application blueprint, which deploys an internal developer platform that provides managed software development and delivery. Deploy network monitoring and telemetry capabilities in Google Cloud Network telemetry collects network traffic data from devices on your network so that the data can be analyzed. Network telemetry lets security operations teams detect network-based threats and hunt for advanced adversaries, which is essential for... Products used: Compute Engine, Google Kubernetes Engine (GKE), Virtual Private Cloud Design secure deployment pipelines Describes best practices for designing secure deployment pipelines based on your confidentiality, integrity, and availability requirements. Products used: App Engine, Cloud Run, Google Kubernetes Engine (GKE) Designing networks for migrating enterprise workloads: Architectural approaches This document introduces a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. These architectures emphasize advanced connectivity, zero-trust security principles, and... Products used: Cloud CDN, Cloud DNS, Cloud Interconnect, Cloud Intrusion Detection System (Cloud IDS), Cloud Load Balancing, Cloud NAT, Cloud Service Mesh, Cloud VPN, Google Cloud Armor, Identity-Aware Proxy, Network Connectivity Center, VPC Service Controls, Virtual Private Cloud Disaster recovery planning guide The first part of a series that discusses disaster recovery (DR) in Google Cloud. This part provides an overview of the DR planning process: what you need to know in order to design and implement a DR plan. Products used: Cloud Key Management Service, Cloud Storage, Spanner Edge hybrid pattern Discusses how the edge hybrid pattern addresses connectivity challenges by running time- and business-critical workloads locally, at the edge of the network. Products used: Cloud Pub/Sub, Cloud Storage, Google Kubernetes Engine (GKE) Enterprise foundations blueprint This series presents an opinionated view of Google Cloud security best practices, organized to allow users to deploy them for their workloads on Google Cloud. Environment hybrid pattern Discusses how to keep the production environment of a workload in the existing data center but use the public cloud for other, non-production environments. Products used: Cloud Pub/Sub, Cloud Storage, Google Kubernetes Engine (GKE) Federate Google Cloud with Active Directory Products used: Cloud Identity, Google Cloud Directory Sync Federate Google Cloud with Microsoft Entra ID (formerly Azure AD) Products used: Google Cloud Directory Sync FortiGate architecture in Google Cloud Describes the overall concepts around deploying a FortiGate Next Generation Firewall (NGFW) in Google Cloud. 
Products used: Cloud Load Balancing, Cloud NAT, Compute Engine, Virtual Private Cloud Google Cloud FedRAMP implementation guide This guide is intended for security officers, compliance officers, IT admins, and other employees who are responsible for Federal Risk and Authorization Management Program (FedRAMP) implementation and compliance on Google Cloud. This guide helps you... Products used: Cloud Identity, Cloud Logging, Cloud Monitoring, Cloud VPN, Google Cloud Armor, Google Workspace, Identity and Access Management, Identity-Aware Proxy, Security Command Center Hybrid and multicloud architecture patterns Discusses common hybrid and multicloud architecture patterns, and describes the scenarios that these patterns are best suited for. Products used: Cloud DNS, Cloud Interconnect, Cloud Pub/Sub, Cloud Run, Cloud SQL, Cloud Storage, Google Cloud Armor, Google Kubernetes Engine (GKE), Looker Identify and prioritize security risks with Wiz Security Graph and Google Cloud Describes how to identify and prioritize security risks in your cloud workloads with Wiz Security Graph and Google Cloud. Products used: Artifact Registry, Cloud Audit Logs, Cloud SQL, Cloud Storage, Compute Engine, Google Kubernetes Engine (GKE), Security Command Center Implement your Google Cloud landing zone network design This document provides steps and guidance to implement your chosen network design for your landing zone. Products used: Virtual Private Cloud Import data from an external network into a secured BigQuery data warehouse Describes an architecture that you can use to help secure a data warehouse in a production environment, and provides best practices for importing data into BigQuery from an external network such as an on-premises environment. Products used: BigQuery Import data from Google Cloud into a secured BigQuery data warehouse Describes an architecture that you can use to help secure a data warehouse in a production environment, and provides best practices for data governance of a data warehouse in Google Cloud. Products used: BigQuery, Cloud Key Management Service, Dataflow, Sensitive Data Protection Landing zone design in Google Cloud This series shows how to design and build a landing zone in Google Cloud, guiding you through high-level decisions about identity onboarding, resource hierarchy, network design, and security. Migrate to Google Cloud Helps you plan, design, and implement the process of migrating your application and infrastructure workloads to Google Cloud, including computing, database, and storage workloads. Products used: App Engine, Cloud Build, Cloud Data Fusion, Cloud Deployment Manager, Cloud Functions, Cloud Run, Cloud Storage, Container Registry, Data Catalog, Dataflow, Direct Peering, Google Kubernetes Engine (GKE), Transfer Appliance Mitigating ransomware attacks using Google Cloud Code created by a third party to infiltrate your systems to hijack, encrypt, and steal data is referred to as ransomware. To help you mitigate ransomware attacks, Google Cloud provides you with controls for identifying, protecting, detecting,... Products used: Google Security Operations, Google Workspace Overview of identity and access management Explores the general practice of identity and access management (generally referred to as IAM) and the individuals who are subject to it, including corporate identities, customer identities, and service identities. 
Products used: Cloud Identity, Identity and Access Management OWASP Top 10 2021 mitigation options on Google Cloud Helps you identify Google Cloud products and mitigation strategies that can help you defend against common application-level attacks that are outlined in OWASP Top 10. Products used: Google Cloud Armor, Security Command Center Secure virtual private cloud networks with the Palo Alto VM-Series NGFW Describes the networking concepts that you need to understand to deploy Palo Alto Networks VM-Series next generation firewall (NGFW) in Google Cloud. Products used: Cloud Storage Security log analytics in Google Cloud Shows how to collect, export, and analyze logs from Google Cloud to help you audit usage and detect threats to your data and workloads. Use the included threat detection queries for BigQuery or Chronicle, or bring your own SIEM. Products used: BigQuery, Cloud Logging, Compute Engine, Looker Studio Use Google Cloud Armor, load balancing, and Cloud CDN to deploy programmable global front ends Provides an architecture that uses a global front end which incorporates Google Cloud best practices to help scale, secure, and accelerate the delivery of your internet-facing applications. Send feedback \ No newline at end of file diff --git a/Content_overview(11).txt b/Content_overview(11).txt new file mode 100644 index 0000000000000000000000000000000000000000..ec53a29c2e29a3caec86e3a010ea35f374fa7f5d --- /dev/null +++ b/Content_overview(11).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/storage +Date Scraped: 2025-02-23T11:56:56.964Z + +Content: +Home Docs Cloud Architecture Center Send feedback Storage resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-10 UTC The Architecture Center provides content resources across a wide variety of storage subjects. Get started If you are new to Google Cloud or new to designing storage architectures on Google Cloud, begin by reading Design an optimal storage strategy for your cloud workload. Storage resources in the Architecture Center You can filter the following list of storage resources by typing a product name or a phrase that's in the resource title or description. Build a hybrid render farm Provides guidance on extending your existing, on-premises render farm to use compute resources on Google Cloud (Google Cloud). Products used: BigQuery, Cloud Interconnect, Cloud Storage, Cloud VPN, Compute Engine, Dedicated Interconnect C3 AI architecture on Google Cloud Develop applications using C3 AI and Google Cloud. Products used: Cloud Key Management Service, Cloud NAT, Cloud Storage, Google Kubernetes Engine (GKE), Virtual Private Cloud Configuring SaaS data protection for Google Workspace data with Spin.AI How to configure SpinOne - All-in-One SaaS Data Protection with Cloud Storage. Data management with Cohesity Helios and Google Cloud How Cohesity works with Google Cloud Storage. Cohesity is a hyperconverged secondary storage system for consolidating backup, test/dev, file services, and analytic datasets onto a scalable data platform. Products used: Cloud Storage Design an optimal storage strategy for your cloud workload Assess your workload's requirements, review the storage options in Google Cloud, and select an optimal storage strategy. 
Products used: Cloud Storage, Filestore, Persistent Disk Design storage for AI and ML workloads in Google Cloud Map the AI and ML workload stages to Google Cloud storage options, and select the recommended storage options for your AI and ML workloads. Products used: Cloud Storage, Filestore, Persistent Disk Disaster recovery planning guide The first part of a series that discusses disaster recovery (DR) in Google Cloud. This part provides an overview of the DR planning process: what you need to know in order to design and implement a DR plan. Products used: Cloud Key Management Service, Cloud Storage, Spanner File storage on Compute Engine Describes and compares options for file storage on Compute Engine. Products used: Compute Engine, Filestore Jump Start Solution: Dynamic web application with Java Run a dynamic web application built using Java and deployed on Google Kubernetes Engine (GKE). Jump Start Solution: Dynamic web application with JavaScript Run a dynamic web application built using JavaScript and deployed on Cloud Run. Jump Start Solution: Dynamic web application with Python and JavaScript Run a dynamic web application built using Python and JavaScript and deployed on Cloud Run. Parallel file systems for HPC workloads Review the storage options in Google Cloud for high performance computing (HPC) workloads, and learn when to use parallel file systems like Lustre and DDN EXAScaler Cloud for HPC workloads. Products used: Cloud Storage, Filestore, Persistent Disk Use Apache Hive on Dataproc Shows how to use Apache Hive on Dataproc in an efficient and flexible way by storing Hive data in Cloud Storage and hosting the Hive metastore in a MySQL database on Cloud SQL. Products used: Cloud SQL, Cloud Storage, Dataproc Website hosting How to host a website on Google Cloud. Google Cloud provides a robust, flexible, reliable, and scalable platform for serving websites. Products used: App Engine, Cloud Storage, Compute Engine, Google Kubernetes Engine (GKE) Send feedback \ No newline at end of file diff --git a/Content_overview(2).txt b/Content_overview(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..910de89e591766ec1ba498fb7c46b25e40a679c8 --- /dev/null +++ b/Content_overview(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/application-development +Date Scraped: 2025-02-23T11:46:46.140Z + +Content: +Home Docs Cloud Architecture Center Send feedback Application development resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-10 UTC The Architecture Center provides content resources across a wide variety of application development subjects. Application development resources in the Architecture Center You can filter the following list of application development resources by typing a product name or a phrase that's in the resource title or description. Apache Guacamole on GKE and Cloud SQL Describes an architecture for hosting Apache Guacamole on Google Kubernetes Engine (GKE) and Cloud SQL. Apache Guacamole offers a fully browser-based way to access remote desktops through Remote Desktop Protocol (RDP). Products used: Cloud SQL, Compute Engine, Google Kubernetes Engine (GKE) Architecture decision records overview Explains when and how to use architecture decision records (ADRs) as you build and run applications on Google Cloud. 
Products used: Cloud Pub/Sub, Cloud SQL, Google Kubernetes Engine (GKE) Architecture for MLOps using TensorFlow Extended, Vertex AI Pipelines, and Cloud Build This document describes the overall architecture of a machine learning (ML) system using TensorFlow Extended (TFX) libraries. It also discusses how to set up a continuous integration (CI), continuous delivery (CD), and continuous training (CT) for... Products used: Cloud Build Architectures for high availability of MySQL clusters on Compute Engine Describes several architectures that provide high availability (HA) for MySQL deployments on Google Cloud. Products used: Compute Engine Authenticate workforce users in a hybrid environment How to extend your identity management solution to Google Cloud to enable your workforce to authenticate and consume services in a hybrid computing environment. Products used: Cloud Identity Automate malware scanning for files uploaded to Cloud Storage This document shows you how to build an event-driven pipeline that can help you automate the evaluation of files for malicious code. Products used: Cloud Logging, Cloud Run, Cloud Storage, Eventarc Best practices and reference architectures for VPC design This guide introduces best practices and typical enterprise architectures for the design of virtual private clouds (VPCs) with Google Cloud. Products used: Cloud DNS, Cloud Interconnect, Cloud NAT, Cloud Router, Cloud VPN, Virtual Private Cloud Best practices for running cost-optimized Kubernetes applications on GKE A document describing Google Kubernetes Engine (GKE) features and options, and the best practices for running cost-optimized applications on GKE to take advantage of the elasticity provided by Google Cloud. Products used: Compute Engine, Google Kubernetes Engine (GKE) Build a hybrid render farm Provides guidance on extending your existing, on-premises render farm to use compute resources on Google Cloud (Google Cloud). Products used: BigQuery, Cloud Interconnect, Cloud Storage, Cloud VPN, Compute Engine, Dedicated Interconnect Build hybrid and multicloud architectures using Google Cloud Provides practical guidance on planning and architecting your hybrid and multi-cloud environments using Google Cloud. Products used: Cloud Load Balancing, Compute Engine, GKE Enterprise, Google Kubernetes Engine (GKE) Building internet connectivity for private VMs Describes options for connecting to and from the internet using Compute Engine resources that have private IP addresses. Products used: Cloud Load Balancing, Cloud NAT, Compute Engine, Identity-Aware Proxy C3 AI architecture on Google Cloud Develop applications using C3 AI and Google Cloud. Products used: Cloud Key Management Service, Cloud NAT, Cloud Storage, Google Kubernetes Engine (GKE), Virtual Private Cloud CI/CD pipeline for developing and delivering containerized apps Describes how to set up and use a development, continuous integration (CI), and continuous delivery (CD) system using an integrated set of Google Cloud tools. Products used: Artifact Registry, Cloud Build, Cloud Deploy, Google Kubernetes Engine (GKE) Cloud Monitoring metric export Describes a way to export Cloud Monitoring metrics for long-term analysis. Products used: App Engine, BigQuery, Cloud Monitoring, Cloud Pub/Sub, Cloud Scheduler, Datalab, Looker Studio Configuring SaaS data protection for Google Workspace data with Spin.AI How to configure SpinOne - All-in-One SaaS Data Protection with Cloud Storage. 
Connected device architectures on Google Cloud An overview on a series of approaches for connected device IoT architectures on Google Cloud. Controls to restrict access to individually approved APIs Many organizations have a compliance requirement to restrict network access to an explicitly approved list of APIs, based on internal requirements or as part of adopting Assured Workloads. On-premises, this requirement is often addressed with proxy... Deploy an Active Directory forest on Compute Engine Shows you how to deploy an Active Directory forest on Compute Engine in a way that follows the best practices. Products used: Cloud DNS, Compute Engine, Identity-Aware Proxy Deploy an enterprise developer platform on Google Cloud Describes the enterprise application blueprint, which deploys an internal developer platform that provides managed software development and delivery. Deploy and operate generative AI applications Discusses techniques for building and operating generative AI applications using MLOps and DevOps principles. Design secure deployment pipelines Describes best practices for designing secure deployment pipelines based on your confidentiality, integrity, and availability requirements. Products used: App Engine, Cloud Run, Google Kubernetes Engine (GKE) DevOps capabilities A set of capabilities that drive higher software delivery and organizational performance, as identified and validated by the DevOps Research and Assessment (DORA) team. Disaster recovery planning guide The first part of a series that discusses disaster recovery (DR) in Google Cloud. This part provides an overview of the DR planning process: what you need to know in order to design and implement a DR plan. Products used: Cloud Key Management Service, Cloud Storage, Spanner Distributed load testing using Google Kubernetes Engine This tutorial explains how to use Google Kubernetes Engine (GKE) to deploy a distributed load testing framework that uses multiple containers to create traffic for a simple REST-based API. This tutorial load-tests a web application deployed to App... Products used: Google Kubernetes Engine (GKE) Edge hybrid pattern Discusses how the edge hybrid pattern addresses connectivity challenges by running time- and business-critical workloads locally, at the edge of the network. Products used: Cloud Pub/Sub, Cloud Storage, Google Kubernetes Engine (GKE) Enterprise application on Compute Engine VMs with Oracle Exadata in Google Cloud Provides a reference architecture for an application that's hosted on Compute Engine VMs with connectivity to Oracle Cloud Infrastructure (OCI) Exadata databases in Google Cloud. Products used: Cloud Interconnect, Cloud Load Balancing, Cloud Monitoring, Cloud NAT, Cloud VPN, Compute Engine, Google Cloud Armor, Partner Interconnect, Virtual Private Cloud Enterprise application with Oracle Database on Compute Engine Provides a reference architecture to host an application that uses an Oracle database, deployed on Compute Engine VMs. Products used: Cloud Interconnect, Cloud Load Balancing, Cloud Logging, Cloud Monitoring, Cloud NAT, Cloud Storage, Cloud VPN, Compute Engine, Google Cloud Armor, Virtual Private Cloud Environment hybrid pattern Discusses how to keep the production environment of a workload in the existing data center but use the public cloud for other, non-production environments. 
Products used: Cloud Pub/Sub, Cloud Storage, Google Kubernetes Engine (GKE) Federate Google Cloud with Active Directory Products used: Cloud Identity, Google Cloud Directory Sync Federate Google Cloud with Microsoft Entra ID (formerly Azure AD) Products used: Google Cloud Directory Sync File storage on Compute Engine Describes and compares options for file storage on Compute Engine. Products used: Compute Engine, Filestore From edge to mesh: Deploy service mesh applications through GKE Gateway Products used: Cloud Load Balancing, Cloud Service Mesh, Google Kubernetes Engine (GKE) From edge to mesh: Expose service mesh applications through GKE Gateway Combines Cloud Service Mesh with Cloud Load Balancing to expose applications in a service mesh to internet clients. Products used: Cloud Load Balancing, Cloud Service Mesh, Google Kubernetes Engine (GKE) From edge to multi-cluster mesh: Deploy globally distributed applications through GKE Gateway and Cloud Service Mesh Products used: Certificate Manager, Cloud Endpoints, Cloud Load Balancing, Cloud Service Mesh, Google Cloud Armor, Google Kubernetes Engine (GKE) From edge to multi-cluster mesh: Globally distributed applications exposed through GKE Gateway and Cloud Service Mesh Describes exposing applications externally through Google Kubernetes Engine (GKE) Gateways running on multiple GKE clusters within a service mesh. Products used: Certificate Manager, Cloud Endpoints, Cloud Load Balancing, Cloud Service Mesh, Google Cloud Armor, Google Kubernetes Engine (GKE) Gated egress Discusses how the gated egress pattern is based on exposing select APIs from various environments to workloads that are deployed in Google Cloud. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Gated egress and gated ingress Discusses scenarios that demand bidirectional usage of selected APIs between workloads that run in various environments. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Gated ingress Discusses exposing select APIs of workloads running in Google Cloud to the private computing environment without exposing them to the public internet. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Hub-and-spoke network architecture Evaluate the architectural options for designing hub-and-spoke network topologies in Google Cloud. Products used: Cloud NAT, Cloud VPN, Virtual Private Cloud Hybrid and multicloud architecture patterns Discusses common hybrid and multicloud architecture patterns, and describes the scenarios that these patterns are best suited for. Products used: Cloud DNS, Cloud Interconnect, Cloud Pub/Sub, Cloud Run, Cloud SQL, Cloud Storage, Google Cloud Armor, Google Kubernetes Engine (GKE), Looker Hybrid and multicloud monitoring and logging patterns Discusses monitoring and logging architectures for hybrid and multicloud deployments, and provides best practices for implementing them by using Google Cloud. Products used: Cloud Logging, Cloud Monitoring, GKE Enterprise, Google Distributed Cloud, Google Kubernetes Engine (GKE) Hybrid and multicloud secure networking architecture patterns Discusses several common secure network architecture patterns that you can use for hybrid and multicloud architectures. 
Products used: Cloud DNS, Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Cloud Armor, Google Kubernetes Engine (GKE) Identify and prioritize security risks with Wiz Security Graph and Google Cloud Describes how to identify and prioritize security risks in your cloud workloads with Wiz Security Graph and Google Cloud. Products used: Artifact Registry, Cloud Audit Logs, Cloud SQL, Cloud Storage, Compute Engine, Google Kubernetes Engine (GKE), Security Command Center Infrastructure for a RAG-capable generative AI application using GKE Shows you how to design the infrastructure for a generative AI application with RAG using GKE. Products used: Cloud SQL, Cloud Storage, Google Kubernetes Engine (GKE) Introduction to microservices This document is the first in a four-part series about designing, building, and deploying microservices. Products used: Cloud SQL, Cloud Trace, Google Kubernetes Engine (GKE) Jump Start Solution: Cloud SDK Client Library Interact with Google Cloud using the Google Cloud SDK Client Libraries to transform and query information. Jump Start Solution: Dynamic web application with Java Run a dynamic web application built using Java and deployed on Google Kubernetes Engine (GKE). Jump Start Solution: Dynamic web application with JavaScript Run a dynamic web application built using JavaScript and deployed on Cloud Run. Jump Start Solution: Dynamic web application with Python and JavaScript Run a dynamic web application built using Python and JavaScript and deployed on Cloud Run. Jump Start Solution: Ecommerce platform with serverless computing Run a containerized ecommerce application in a serverless environment using Cloud Run. Jump Start Solution: Ecommerce web app deployed on Kubernetes Run a microservices-based ecommerce application deployed on Google Kubernetes Engine (GKE) clusters. Jump Start Solution: Three-tier web app Run a three-tier web app in a serverless environment using Cloud Run. Log and monitor on-premises resources with BindPlane Describes considerations and design patterns for using Cloud Logging, Cloud Monitoring, and BindPlane to provide logging and monitoring services for on-premises resources. Products used: Cloud Logging, Cloud Monitoring Manage and scale networking for Windows applications that run on managed Kubernetes Discusses how to manage networking for Windows applications that run on Google Kubernetes Engine using Cloud Service Mesh and Envoy gateways. Products used: Cloud Load Balancing, Cloud Service Mesh, Google Kubernetes Engine (GKE) Migrate to a Google Cloud VMware Engine platform Describes the VMware Engine blueprint, which deploys a platform for VM workloads. Products used: Google Cloud VMware Engine Migrate to Google Cloud Helps you plan, design, and implement the process of migrating your application and infrastructure workloads to Google Cloud, including computing, database, and storage workloads. Products used: App Engine, Cloud Build, Cloud Data Fusion, Cloud Deployment Manager, Cloud Functions, Cloud Run, Cloud Storage, Container Registry, Data Catalog, Dataflow, Direct Peering, Google Kubernetes Engine (GKE), Transfer Appliance Migrating On-Premises Hadoop Infrastructure to Google Cloud Guidance on moving on-premises Hadoop workloads to Google Cloud... 
Products used: BigQuery, Cloud Storage, Dataproc MLOps: Continuous delivery and automation pipelines in machine learning Discusses techniques for implementing and automating continuous integration (CI), continuous delivery (CD), and continuous training (CT) for machine learning (ML) systems. Onboarding best practices for state, local, and education organizations Defines onboarding considerations and best practices for creating a Google Cloud and Google Workspace environment for state, local, and education (SLED) organizations, which often have unique IT needs compared to other enterprises. Products used: Cloud Billing, Google Workspace, Identity and Access Management Overview of identity and access management Explores the general practice of identity and access management (generally referred to as IAM) and the individuals who are subject to it, including corporate identities, customer identities, and service identities. Products used: Cloud Identity, Identity and Access Management Patterns and practices for identity and access governance on Google Cloud There are a number of Google Cloud products and services that you can use to help your organization develop an approach for identity governance and access management for applications and workloads running on Google Cloud. This document is intended... Products used: Cloud Audit Logs, Google Groups, Identity and Access Management Patterns for connecting other cloud service providers with Google Cloud Helps cloud architects and operations professionals decide how to connect Google Cloud with other cloud service providers (CSP) such as Amazon Web Services (AWS) and Microsoft Azure. Products used: Cloud Interconnect, Dedicated Interconnect, Partner Interconnect Patterns for scalable and resilient apps Introduces some patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. Products used: Cloud Load Balancing, Cloud Monitoring, Cloud SQL, Cloud Storage, Compute Engine Patterns for using Active Directory in a hybrid environment Requirements to consider when you deploy Active Directory to Google Cloud and helps you choose the right architecture. Products used: Cloud Identity Reference architecture: Resource management with ServiceNow Provides architectural recommendations to integrate Google Cloud assets into ServiceNow discovery tools. Products used: Cloud Asset Inventory, Compute Engine Scalable BigQuery backup automation Build a solution to automate recurrent BigQuery backup operations at scale, with two backup methods: BigQuery snapshots and exports to Cloud Storage. Products used: BigQuery, Cloud Logging, Cloud Pub/Sub, Cloud Run, Cloud Scheduler, Cloud Storage Security log analytics in Google Cloud Shows how to collect, export, and analyze logs from Google Cloud to help you audit usage and detect threats to your data and workloads. Use the included threat detection queries for BigQuery or Chronicle, or bring your own SIEM. Products used: BigQuery, Cloud Logging, Compute Engine, Looker Studio Select a managed container runtime environment Learn about managed runtime environments and assess your requirements to choose between Cloud Run and GKE Autopilot. Products used: Cloud Run, Google Kubernetes Engine (GKE) Set up Chrome Remote Desktop for Linux on Compute Engine Shows you how to set up the Chrome Remote Desktop service on a Debian Linux virtual machine (VM) instance on Compute Engine. 
Chrome Remote Desktop allows you to remotely access applications with a graphical user interface. Products used: Compute Engine Set up Chrome Remote Desktop for Windows on Compute Engine Shows you how to set up the Chrome Remote Desktop service on a Microsoft Windows virtual machine (VM) instance on Compute Engine. Chrome Remote Desktop allows you to remotely access applications with a graphical user interface. Products used: Compute Engine Use Apache Hive on Dataproc Shows how to use Apache Hive on Dataproc in an efficient and flexible way by storing Hive data in Cloud Storage and hosting the Hive metastore in a MySQL database on Cloud SQL. Products used: Cloud SQL, Cloud Storage, Dataproc Website hosting How to host a website on Google Cloud. Google Cloud provides a robust, flexible, reliable, and scalable platform for serving websites. Products used: App Engine, Cloud Storage, Compute Engine, Google Kubernetes Engine (GKE) Send feedback \ No newline at end of file diff --git a/Content_overview(3).txt b/Content_overview(3).txt new file mode 100644 index 0000000000000000000000000000000000000000..e7c47f62dc5ce0fed5eb0deeb7dfac75e4612615 --- /dev/null +++ b/Content_overview(3).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/big-data-analytics +Date Scraped: 2025-02-23T11:48:48.146Z + +Content: +Home Docs Cloud Architecture Center Send feedback Big data and analytics resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-17 UTC The Architecture Center provides content resources across a wide variety of big data and analytics subjects. Big data and analytics resources in the Architecture Center You can filter the following list of big data and analytics resources by typing a product name or a phrase that's in the resource title or description. Analyzing FHIR data in BigQuery Explains the processes and considerations for analyzing Fast Healthcare Interoperability Resources (FHIR) data in BigQuery. Products used: BigQuery Architecture and functions in a data mesh A series that describes how to implement a data mesh that is internal to an organization. Build an ML vision analytics solution with Dataflow and Cloud Vision API How to deploy a Dataflow pipeline to process large-scale image files with Cloud Vision. Dataflow stores the results in BigQuery so that you can use them to train BigQuery ML pre-built models. Products used: BigQuery, Cloud Build, Cloud Pub/Sub, Cloud Storage, Cloud Vision, Dataflow Cloud Monitoring metric export Describes a way to export Cloud Monitoring metrics for long-term analysis. Products used: App Engine, BigQuery, Cloud Monitoring, Cloud Pub/Sub, Cloud Scheduler, Datalab, Looker Studio Continuous data replication to BigQuery using Striim Demonstrates how to migrate a MySQL database to BigQuery using Striim. Striim is a comprehensive streaming extract, transform, and load (ETL) platform. Products used: BigQuery, Cloud SQL for MySQL, Compute Engine Continuous data replication to Spanner using Striim How to migrate a MySQL database to Cloud Spanner using Striim. Products used: Cloud SQL, Cloud SQL for MySQL, Compute Engine, Spanner Data science with R on Google Cloud: Exploratory data analysis Shows you how to get started with data science at scale with R on Google Cloud. This document is intended for those who have some experience with R and with Jupyter notebooks, and who are comfortable with SQL. 
Products used: BigQuery, Cloud Storage, Notebooks, Vertex AI De-identification and re-identification of PII in large-scale datasets using Sensitive Data Protection Discusses how to use Sensitive Data Protection to create an automated data transformation pipeline to de-identify sensitive data like personally identifiable information (PII). Products used: BigQuery, Cloud Pub/Sub, Cloud Storage, Dataflow, Identity and Access Management, Sensitive Data Protection Geospatial analytics architecture Learn about Google Cloud geospatial capabilities and how you can use these capabilities in your geospatial analytics applications. Products used: BigQuery, Dataflow Import data from an external network into a secured BigQuery data warehouse Describes an architecture that you can use to help secure a data warehouse in a production environment, and provides best practices for importing data into BigQuery from an external network such as an on-premises environment. Products used: BigQuery Import data from Google Cloud into a secured BigQuery data warehouse Describes an architecture that you can use to help secure a data warehouse in a production environment, and provides best practices for data governance of a data warehouse in Google Cloud. Products used: BigQuery, Cloud Key Management Service, Dataflow, Sensitive Data Protection Jump Start Solution: Analytics lakehouse Unify data lakes and data warehouses by creating an analytics lakehouse using BigQuery to store, process, analyze, and activate data. Jump Start Solution: Data warehouse with BigQuery Build a data warehouse with a dashboard and visualization tool using BigQuery. Migrate to Google Cloud Helps you plan, design, and implement the process of migrating your application and infrastructure workloads to Google Cloud, including computing, database, and storage workloads. Products used: App Engine, Cloud Build, Cloud Data Fusion, Cloud Deployment Manager, Cloud Functions, Cloud Run, Cloud Storage, Container Registry, Data Catalog, Dataflow, Direct Peering, Google Kubernetes Engine (GKE), Transfer Appliance Migrating On-Premises Hadoop Infrastructure to Google Cloud Guidance on moving on-premises Hadoop workloads to Google Cloud... Products used: BigQuery, Cloud Storage, Dataproc Scalable BigQuery backup automation Build a solution to automate recurrent BigQuery backup operations at scale, with two backup methods: BigQuery snapshots and exports to Cloud Storage. Products used: BigQuery, Cloud Logging, Cloud Pub/Sub, Cloud Run, Cloud Scheduler, Cloud Storage Security log analytics in Google Cloud Shows how to collect, export, and analyze logs from Google Cloud to help you audit usage and detect threats to your data and workloads. Use the included threat detection queries for BigQuery or Chronicle, or bring your own SIEM. Products used: BigQuery, Cloud Logging, Compute Engine, Looker Studio Use Apache Hive on Dataproc Shows how to use Apache Hive on Dataproc in an efficient and flexible way by storing Hive data in Cloud Storage and hosting the Hive metastore in a MySQL database on Cloud SQL. 
Products used: Cloud SQL, Cloud Storage, Dataproc Send feedback \ No newline at end of file diff --git a/Content_overview(4).txt b/Content_overview(4).txt new file mode 100644 index 0000000000000000000000000000000000000000..1ae0875ab0ef016af5846a1327262c726f2064ea --- /dev/null +++ b/Content_overview(4).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/databases +Date Scraped: 2025-02-23T11:49:27.305Z + +Content: +Home Docs Cloud Architecture Center Send feedback Database resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC The Architecture Center provides content resources across a wide variety of database subjects. Database resources in the Architecture Center You can filter the following list of database resources by typing a product name or a phrase that's in the resource title or description. Analyzing FHIR data in BigQuery Explains the processes and considerations for analyzing Fast Healthcare Interoperability Resources (FHIR) data in BigQuery. Products used: BigQuery Apache Guacamole on GKE and Cloud SQL Describes an architecture for hosting Apache Guacamole on Google Kubernetes Engine (GKE) and Cloud SQL. Apache Guacamole offers a fully browser-based way to access remote desktops through Remote Desktop Protocol (RDP). Products used: Cloud SQL, Compute Engine, Google Kubernetes Engine (GKE) Architectures for high availability of MySQL clusters on Compute Engine Describes several architectures that provide high availability (HA) for MySQL deployments on Google Cloud. Products used: Compute Engine Architectures for high availability of PostgreSQL clusters on Compute Engine Several architectures that provide high availability (HA) for PostgreSQL deployments on Google Cloud. Products used: Compute Engine Continuous data replication to BigQuery using Striim Demonstrates how to migrate a MySQL database to BigQuery using Striim. Striim is a comprehensive streaming extract, transform, and load (ETL) platform. Products used: BigQuery, Cloud SQL for MySQL, Compute Engine Continuous data replication to Spanner using Striim How to migrate a MySQL database to Cloud Spanner using Striim. Products used: Cloud SQL, Cloud SQL for MySQL, Compute Engine, Spanner Database migration: Concepts and principles (Part 1) Introduces concepts, principles, terminology, and architecture of near-zero downtime database migration from on-premises or other cloud environments. Products used: Compute Engine, Spanner Disaster recovery planning guide The first part of a series that discusses disaster recovery (DR) in Google Cloud. This part provides an overview of the DR planning process: what you need to know in order to design and implement a DR plan. Products used: Cloud Key Management Service, Cloud Storage, Spanner Enterprise application on Compute Engine VMs with Oracle Exadata in Google Cloud Provides a reference architecture for an application that's hosted on Compute Engine VMs with connectivity to Oracle Cloud Infrastructure (OCI) Exadata databases in Google Cloud. Products used: Cloud Interconnect, Cloud Load Balancing, Cloud Monitoring, Cloud NAT, Cloud VPN, Compute Engine, Google Cloud Armor, Partner Interconnect, Virtual Private Cloud Enterprise application with Oracle Database on Compute Engine Provides a reference architecture to host an application that uses an Oracle database, deployed on Compute Engine VMs. 
Products used: Cloud Interconnect, Cloud Load Balancing, Cloud Logging, Cloud Monitoring, Cloud NAT, Cloud Storage, Cloud VPN, Compute Engine, Google Cloud Armor, Virtual Private Cloud Jump Start Solution: Dynamic web application with Java Run a dynamic web application built using Java and deployed on Google Kubernetes Engine (GKE). Multicloud database management: Architectures, use cases, and best practices Multicloud database management: Architectures, use cases, and best practices. Products used: GKE Enterprise, Google Kubernetes Engine (GKE), Spanner Use Apache Hive on Dataproc Shows how to use Apache Hive on Dataproc in an efficient and flexible way by storing Hive data in Cloud Storage and hosting the Hive metastore in a MySQL database on Cloud SQL. Products used: Cloud SQL, Cloud Storage, Dataproc Send feedback \ No newline at end of file diff --git a/Content_overview(5).txt b/Content_overview(5).txt new file mode 100644 index 0000000000000000000000000000000000000000..626d1697834d6d96e44e3f0218efbdd9e52d6970 --- /dev/null +++ b/Content_overview(5).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud +Date Scraped: 2025-02-23T11:49:38.364Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hybrid and multicloud resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-10 UTC The Architecture Center provides content resources across a wide variety of hybrid and multicloud subjects. Get started If you are new to Google Cloud or new to designing hybrid and multicloud architectures that include Google Cloud, begin by reading Hybrid and multicloud overview. Hybrid and multicloud resources in the Architecture Center You can filter the following list of hybrid and multicloud resources by typing a product name or a phrase that's in the resource title or description. Authenticate workforce users in a hybrid environment How to extend your identity management solution to Google Cloud to enable your workforce to authenticate and consume services in a hybrid computing environment. Products used: Cloud Identity Build a hybrid render farm Provides guidance on extending your existing, on-premises render farm to use compute resources on Google Cloud (Google Cloud). Products used: BigQuery, Cloud Interconnect, Cloud Storage, Cloud VPN, Compute Engine, Dedicated Interconnect Build hybrid and multicloud architectures using Google Cloud Provides practical guidance on planning and architecting your hybrid and multi-cloud environments using Google Cloud. Products used: Cloud Load Balancing, Compute Engine, GKE Enterprise, Google Kubernetes Engine (GKE) Configure Active Directory for VMs to automatically join a domain Shows you how to configure Active Directory and Compute Engine so that Windows virtual machine (VM) instances can automatically join an Active Directory domain. Products used: Cloud Run, Cloud Scheduler, Compute Engine, Container Registry, Secret Manager Cross-Cloud Network for distributed applications Describes how to design Cross-Cloud Network for distributed applications. Products used: Cloud Load Balancing, Virtual Private Cloud Cross-Cloud Network inter-VPC connectivity using Network Connectivity Center Describes how to design the network segmentation structure and connectivity of Cross-Cloud Network with Network Connectivity Center. 
Products used: Cloud Load Balancing, Network Connectivity Center, Virtual Private Cloud Cross-Cloud Network inter-VPC connectivity using Network Connectivity Center Describes how to design the network segmentation structure and connectivity of Cross-Cloud Network with Network Connectivity Center. Products used: Cloud Load Balancing, Network Connectivity Center, Virtual Private Cloud Cross-Cloud Network inter-VPC connectivity using VPC Network Peering Describes how to design the network segmentation structure and connectivity of Cross-Cloud Network for distributed applications. Products used: Cloud Load Balancing, Virtual Private Cloud Data management with Cohesity Helios and Google Cloud How Cohesity works with Google Cloud Storage. Cohesity is a hyperconverged secondary storage system for consolidating backup, test/dev, file services, and analytic datasets onto a scalable data platform. Products used: Cloud Storage Deploy an Active Directory forest on Compute Engine Shows you how to deploy an Active Directory forest on Compute Engine in a way that follows the best practices. Products used: Cloud DNS, Compute Engine, Identity-Aware Proxy Disaster recovery planning guide The first part of a series that discusses disaster recovery (DR) in Google Cloud. This part provides an overview of the DR planning process: what you need to know in order to design and implement a DR plan. Products used: Cloud Key Management Service, Cloud Storage, Spanner Edge hybrid pattern Discusses how the edge hybrid pattern addresses connectivity challenges by running time- and business-critical workloads locally, at the edge of the network. Products used: Cloud Pub/Sub, Cloud Storage, Google Kubernetes Engine (GKE) Environment hybrid pattern Discusses how to keep the production environment of a workload in the existing data center but use the public cloud for other, non-production environments. Products used: Cloud Pub/Sub, Cloud Storage, Google Kubernetes Engine (GKE) Federate Google Cloud with Active Directory Products used: Cloud Identity, Google Cloud Directory Sync Federate Google Cloud with Microsoft Entra ID (formerly Azure AD) Products used: Google Cloud Directory Sync File storage on Compute Engine Describes and compares options for file storage on Compute Engine. Products used: Compute Engine, Filestore FortiGate architecture in Google Cloud Describes the overall concepts around deploying a FortiGate Next Generation Firewall (NGFW) in Google Cloud. Products used: Cloud Load Balancing, Cloud NAT, Compute Engine, Virtual Private Cloud Gated egress Discusses how the gated egress pattern is based on exposing select APIs from various environments to workloads that are deployed in Google Cloud. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Gated egress and gated ingress Discusses scenarios that demand bidirectional usage of selected APIs between workloads that run in various environments. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Gated ingress Discusses exposing select APIs of workloads running in Google Cloud to the private computing environment without exposing them to the public internet. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Hub-and-spoke network architecture Evaluate the architectural options for designing hub-and-spoke network topologies in Google Cloud. 
Products used: Cloud NAT, Cloud VPN, Virtual Private Cloud Hybrid and multicloud architecture patterns Discusses common hybrid and multicloud architecture patterns, and describes the scenarios that these patterns are best suited for. Products used: Cloud DNS, Cloud Interconnect, Cloud Pub/Sub, Cloud Run, Cloud SQL, Cloud Storage, Google Cloud Armor, Google Kubernetes Engine (GKE), Looker Hybrid and multicloud monitoring and logging patterns Discusses monitoring and logging architectures for hybrid and multicloud deployments, and provides best practices for implementing them by using Google Cloud. Products used: Cloud Logging, Cloud Monitoring, GKE Enterprise, Google Distributed Cloud, Google Kubernetes Engine (GKE) Hybrid and multicloud secure networking architecture patterns Discusses several common secure network architecture patterns that you can use for hybrid and multicloud architectures. Products used: Cloud DNS, Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Cloud Armor, Google Kubernetes Engine (GKE) Identify and prioritize security risks with Wiz Security Graph and Google Cloud Describes how to identify and prioritize security risks in your cloud workloads with Wiz Security Graph and Google Cloud. Products used: Artifact Registry, Cloud Audit Logs, Cloud SQL, Cloud Storage, Compute Engine, Google Kubernetes Engine (GKE), Security Command Center Log and monitor on-premises resources with BindPlane Describes considerations and design patterns for using Cloud Logging, Cloud Monitoring, and BindPlane to provide logging and monitoring services for on-premises resources. Products used: Cloud Logging, Cloud Monitoring Migrate to a Google Cloud VMware Engine platform Describes the VMware Engine blueprint, which deploys a platform for VM workloads. Products used: Google Cloud VMware Engine Migrating On-Premises Hadoop Infrastructure to Google Cloud Guidance on moving on-premises Hadoop workloads to Google Cloud... Products used: BigQuery, Cloud Storage, Dataproc Overview of identity and access management Explores the general practice of identity and access management (generally referred to as IAM) and the individuals who are subject to it, including corporate identities, customer identities, and service identities. Products used: Cloud Identity, Identity and Access Management Patterns for connecting other cloud service providers with Google Cloud Helps cloud architects and operations professionals decide how to connect Google Cloud with other cloud service providers (CSP) such as Amazon Web Services (AWS) and Microsoft Azure. Products used: Cloud Interconnect, Dedicated Interconnect, Partner Interconnect Patterns for using Active Directory in a hybrid environment Requirements to consider when you deploy Active Directory to Google Cloud and helps you choose the right architecture. Products used: Cloud Identity Reference architecture: Resource management with ServiceNow Provides architectural recommendations to integrate Google Cloud assets into ServiceNow discovery tools. Products used: Cloud Asset Inventory, Compute Engine Secure virtual private cloud networks with the Palo Alto VM-Series NGFW Describes the networking concepts that you need to understand to deploy Palo Alto Networks VM-Series next generation firewall (NGFW) in Google Cloud. 
Products used: Cloud Storage Send feedback \ No newline at end of file diff --git a/Content_overview(6).txt b/Content_overview(6).txt new file mode 100644 index 0000000000000000000000000000000000000000..4f448628040b4e90c09ad66ba1c5395706679155 --- /dev/null +++ b/Content_overview(6).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrations +Date Scraped: 2025-02-23T11:51:28.841Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migration resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-10 UTC The Architecture Center provides content resources across a wide variety of migration subjects and scenarios to help you migrate workloads, data, and processes to Google Cloud. These resources are designed to help you complete these kinds of migrations: From an on-premises environment From a private hosting environment From another cloud provider Across Google Cloud regions A migration journey isn't unique to Google Cloud. Moving from one environment to another is a challenging task, so you need to plan and execute your migration carefully. No matter what you're migrating—whether apps, VMs, or containers—you need to complete tasks such as creating an inventory, establishing user and service identities, deploying your workloads, and optimizing for performance and scalability. As part of your migration journey, you have to make decisions that are dependent on the environment, the workloads, and the infrastructure that you're migrating to Google Cloud or to a hybrid cloud environment. The Migrate to Google Cloud series helps you choose the best path to suit your migration needs by establishing a migration framework. It's important to establish a migration framework because migration can be a repeatable task. For example, if you initially migrate your VMs to Google Cloud, you might also consider moving other data and workloads to Google Cloud. Establishing a general framework that can be applied to different workloads can make future migrations easier for you. Migration resources in the Architecture Center You can filter the following list of migration resources by typing a product name or a phrase that's in the resource title or description. Build hybrid and multicloud architectures using Google Cloud Provides practical guidance on planning and architecting your hybrid and multi-cloud environments using Google Cloud. Products used: Cloud Load Balancing, Compute Engine, GKE Enterprise, Google Kubernetes Engine (GKE) Continuous data replication to Spanner using Striim How to migrate a MySQL database to Cloud Spanner using Striim. Products used: Cloud SQL, Cloud SQL for MySQL, Compute Engine, Spanner Database migration: Concepts and principles (Part 1) Introduces concepts, principles, terminology, and architecture of near-zero downtime database migration from on-premises or other cloud environments. Products used: Compute Engine, Spanner Decide the network design for your Google Cloud landing zone This document describes four common network designs for landing zones, and helps you choose the option that best meets your requirements. Products used: VPC Service Controls, Virtual Private Cloud Designing networks for migrating enterprise workloads: Architectural approaches This document introduces a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. 
These architectures emphasize advanced connectivity, zero-trust security principles, and... Products used: Cloud CDN, Cloud DNS, Cloud Interconnect, Cloud Intrusion Detection System (Cloud IDS), Cloud Load Balancing, Cloud NAT, Cloud Service Mesh, Cloud VPN, Google Cloud Armor, Identity-Aware Proxy, Network Connectivity Center, VPC Service Controls, Virtual Private Cloud Federate Google Cloud with Microsoft Entra ID (formerly Azure AD) Products used: Google Cloud Directory Sync File storage on Compute Engine Describes and compares options for file storage on Compute Engine. Products used: Compute Engine, Filestore Global deployment with Compute Engine and Spanner Learn how to architect a multi-tier application that runs on Compute Engine VMs and Spanner in a global topology on Google Cloud. Products used: Cloud Load Balancing, Cloud Storage, Compute Engine, Spanner, Virtual Private Cloud Implement your Google Cloud landing zone network design This document provides steps and guidance to implement your chosen network design for your landing zone. Products used: Virtual Private Cloud Jump Start Solution: Load balanced managed VMs Deploy an autoscaling group of Compute Engine VMs with a load balancer as the frontend. Landing zone design in Google Cloud This series shows how to design and build a landing zone in Google Cloud, guiding you through high-level decisions about identity onboarding, resource hierarchy, network design, and security. Migrate across Google Cloud regions Start preparing your workloads and data for migration across Google Cloud regions. Products used: BigQuery, Bigtable, Cloud SQL, Cloud Storage, Compute Engine, Dataflow, Dataproc, Google Kubernetes Engine (GKE), Spanner Migrate containers to Google Cloud: Migrate from Kubernetes to GKE Describes how to design, implement, and validate a plan to migrate from Kubernetes to Google Kubernetes Engine (GKE). Products used: Google Kubernetes Engine (GKE) Migrate from AWS Describes how to design, implement, and validate a plan to migrate from AWS to Google Cloud. Products used: Cloud DNS, Cloud Domains, Cloud Firewall, Cloud Load Balancing, Cloud Storage, Cloud Storage, Compute Engine, VPC Service Controls Migrate from AWS to Google Cloud: Migrate from AWS Lambda to Cloud Run Describes how to design, implement, and validate a plan to migrate from AWS Lambda to Cloud Run. Products used: Cloud Run, Migration Center Migrate from AWS: Amazon EC2 to Compute Engine Describes how to design, implement, and validate a plan to migrate from Amazon EC2 to Compute Engine. Products used: Cloud DNS, Cloud Domains, Cloud Firewall, Cloud Load Balancing, Cloud Storage, Cloud Storage, Compute Engine, VPC Service Controls Migrate from AWS: Amazon S3 to Cloud Storage Describes how to design, implement, and validate a plan to migrate from Amazon S3 to Cloud Storage. Products used: Cloud DNS, Cloud Domains, Cloud Firewall, Cloud Load Balancing, Cloud Storage, Cloud Storage, Compute Engine, VPC Service Controls Migrate from AWS: Migrate from Amazon EKS to GKE Design, implement, and validate a plan to migrate from Amazon EKS to Google Kubernetes Engine. Products used: Cloud Storage, Compute Engine, Google Kubernetes Engine (GKE) Migrate to a Google Cloud VMware Engine platform Describes the VMware Engine blueprint, which deploys a platform for VM workloads. 
Products used: Google Cloud VMware Engine Migrate to Google Cloud Helps you plan, design, and implement the process of migrating your application and infrastructure workloads to Google Cloud, including computing, database, and storage workloads. Products used: App Engine, Cloud Build, Cloud Data Fusion, Cloud Deployment Manager, Cloud Functions, Cloud Run, Cloud Storage, Container Registry, Data Catalog, Dataflow, Direct Peering, Google Kubernetes Engine (GKE), Transfer Appliance Migrating On-Premises Hadoop Infrastructure to Google Cloud Guidance on moving on-premises Hadoop workloads to Google Cloud... Products used: BigQuery, Cloud Storage, Dataproc Multi-regional deployment on Compute Engine Provides a reference architecture for a multi-tier application that runs on Compute Engine VMs in multiple regions and describes the design factors to consider when you build a multi-regional architecture. Overview of identity and access management Explores the general practice of identity and access management (generally referred to as IAM) and the individuals who are subject to it, including corporate identities, customer identities, and service identities. Products used: Cloud Identity, Identity and Access Management Regional deployment on Compute Engine Learn how to architect a multi-tier application that runs on Compute Engine VMs in multiple zones within a Google Cloud region. Products used: Cloud Load Balancing, Cloud Storage, Compute Engine, Virtual Private Cloud Single-zone deployment on Compute Engine Provides a reference architecture for a multi-tier application that runs on Compute Engine VMs in a single-zone region and describes the design factors to consider when you build a single-zone architecture. Use RIOT Live Migration to migrate to Redis Enterprise Cloud Describes an architecture to migrate from Redis-compatible sources to fully managed Redis Enterprise Cloud in Google Cloud using RIOT Live Migration service. Products used: Compute Engine Send feedback \ No newline at end of file diff --git a/Content_overview(7).txt b/Content_overview(7).txt new file mode 100644 index 0000000000000000000000000000000000000000..d67057cef1a42191c4c53031a89cb9b64db6d120 --- /dev/null +++ b/Content_overview(7).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/monitoring +Date Scraped: 2025-02-23T11:53:10.369Z + +Content: +Home Docs Cloud Architecture Center Send feedback Monitoring and logging resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-10 UTC The Architecture Center provides content resources across a wide variety of monitoring and logging subjects. Monitoring and logging resources in the Architecture Center You can filter the following list of monitoring and logging resources by typing a product name or a phrase that's in the resource title or description. Automate malware scanning for files uploaded to Cloud Storage This document shows you how to build an event-driven pipeline that can help you automate the evaluation of files for malicious code. Products used: Cloud Logging, Cloud Run, Cloud Storage, Eventarc Best practices and reference architectures for VPC design This guide introduces best practices and typical enterprise architectures for the design of virtual private clouds (VPCs) with Google Cloud. 
Products used: Cloud DNS, Cloud Interconnect, Cloud NAT, Cloud Router, Cloud VPN, Virtual Private Cloud Cloud Monitoring metric export Describes a way to export Cloud Monitoring metrics for long-term analysis. Products used: App Engine, BigQuery, Cloud Monitoring, Cloud Pub/Sub, Cloud Scheduler, Datalab, Looker Studio Configuring SaaS data protection for Google Workspace data with Spin.AI How to configure SpinOne - All-in-One SaaS Data Protection with Cloud Storage. Deploy log streaming from Google Cloud to Datadog Learn how to deploy a solution that sends log files to a Cloud Logging sink and then to Datadog. Products used: Cloud Logging, Cloud Pub/Sub, Cloud Storage, Dataflow Disaster recovery planning guide The first part of a series that discusses disaster recovery (DR) in Google Cloud. This part provides an overview of the DR planning process: what you need to know in order to design and implement a DR plan. Products used: Cloud Key Management Service, Cloud Storage, Spanner Hybrid and multicloud monitoring and logging patterns Discusses monitoring and logging architectures for hybrid and multicloud deployments, and provides best practices for implementing them by using Google Cloud. Products used: Cloud Logging, Cloud Monitoring, GKE Enterprise, Google Distributed Cloud, Google Kubernetes Engine (GKE) Import logs from Cloud Storage to Cloud Logging Learn how to import logs that were previously exported to Cloud Storage back to Cloud Logging. Products used: BigQuery, Cloud Logging, Cloud Run, Cloud Storage Log and monitor on-premises resources with BindPlane Describes considerations and design patterns for using Cloud Logging, Cloud Monitoring, and BindPlane to provide logging and monitoring services for on-premises resources. Products used: Cloud Logging, Cloud Monitoring Migrate to Google Cloud Helps you plan, design, and implement the process of migrating your application and infrastructure workloads to Google Cloud, including computing, database, and storage workloads. Products used: App Engine, Cloud Build, Cloud Data Fusion, Cloud Deployment Manager, Cloud Functions, Cloud Run, Cloud Storage, Container Registry, Data Catalog, Dataflow, Direct Peering, Google Kubernetes Engine (GKE), Transfer Appliance Patterns for scalable and resilient apps Introduces some patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. Products used: Cloud Load Balancing, Cloud Monitoring, Cloud SQL, Cloud Storage, Compute Engine Security log analytics in Google Cloud Shows how to collect, export, and analyze logs from Google Cloud to help you audit usage and detect threats to your data and workloads. Use the included threat detection queries for BigQuery or Chronicle, or bring your own SIEM. Products used: BigQuery, Cloud Logging, Compute Engine, Looker Studio Stream logs from Google Cloud to Datadog Provides an architecture to send log event data from across your Google Cloud ecosystem to Datadog Log Management. Products used: Cloud Logging, Cloud Pub/Sub, Cloud Storage, Dataflow Stream logs from Google Cloud to Splunk Create a production-ready, scalable, fault-tolerant, log export mechanism that streams logs and events from your resources in Google Cloud into Splunk. 
Products used: Cloud Logging, Cloud Pub/Sub, Dataflow Send feedback \ No newline at end of file diff --git a/Content_overview(8).txt b/Content_overview(8).txt new file mode 100644 index 0000000000000000000000000000000000000000..ee527e831935060f3188bfb225c7388359479ea7 --- /dev/null +++ b/Content_overview(8).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/networking +Date Scraped: 2025-02-23T11:53:38.433Z + +Content: +Home Docs Cloud Architecture Center Send feedback Networking resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-10 UTC The Architecture Center provides content resources across a wide variety of networking subjects. This page provides information to help you get started and a list of all networking content in the Architecture Center. Get started Google Cloud provides a suite of networking services to help you run your enterprise in the cloud. This page can help you get started with both designing and building a new cloud network and with enhancing your existing network. Design and build There are two general approaches to create a network: Just get started by creating a simple, but recommended, landing zone deployment and build from there. Read all the materials up front, plan everything end-to-end, and then build your design. If you just want to get started: The fastest way is to use Google Cloud Setup in the Google Cloud console. The user interface shows you how to set up your organization, users and groups, administration, billing, networking, monitoring and security so you can get started with Google Cloud. Even if you already have an organization set up, you can use Google Cloud Setup to create well-crafted networks. Alternatively, you can follow a manual process using the Landing zone design in Google Cloud document set. In that document set, Decide the network design for your Google Cloud landing zone provides several options for your network design. If you want to read and plan first: For an end-to-end Google Cloud deployment based on security best practices, see the enterprise foundations blueprint. The entire deployment is available as a Terraform configuration, which you can use as is or modify to meet your needs. If you are migrating workloads from an existing installation, see Designing networks for migrating enterprise workloads: Architectural approaches. Enhance If you already have your Google Cloud network set up, but you want to enhance or modify your setup, the documents listed in the left navigation can help. The documents are organized in the following categories: Connect: Connect Google Cloud resources to resources in other clouds, in your on-premises data centers, and in other parts of your Google Cloud deployment. Scale: Use load balancing, content delivery networks, and DNS to deliver your applications to your customers at any scale. Secure: Protect your applications and network traffic. Observe: Monitor and inspect your network configuration and traffic. Networking resources in the Architecture Center You can filter the following list of networking resources by typing a product name or a phrase that's in the resource title or description. Best practices and reference architectures for VPC design This guide introduces best practices and typical enterprise architectures for the design of virtual private clouds (VPCs) with Google Cloud. 
Products used: Cloud DNS, Cloud Interconnect, Cloud NAT, Cloud Router, Cloud VPN, Virtual Private Cloud Build hybrid and multicloud architectures using Google Cloud Provides practical guidance on planning and architecting your hybrid and multi-cloud environments using Google Cloud. Products used: Cloud Load Balancing, Compute Engine, GKE Enterprise, Google Kubernetes Engine (GKE) Building internet connectivity for private VMs Describes options for connecting to and from the internet using Compute Engine resources that have private IP addresses. Products used: Cloud Load Balancing, Cloud NAT, Compute Engine, Identity-Aware Proxy Controls to restrict access to individually approved APIs Many organizations have a compliance requirement to restrict network access to an explicitly approved list of APIs, based on internal requirements or as part of adopting Assured Workloads. On-premises, this requirement is often addressed with proxy... Cross-Cloud Network for distributed applications Describes how to design Cross-Cloud Network for distributed applications. Products used: Cloud Load Balancing, Virtual Private Cloud Cross-Cloud Network inter-VPC connectivity using Network Connectivity Center Describes how to design the network segmentation structure and connectivity of Cross-Cloud Network with Network Connectivity Center. Products used: Cloud Load Balancing, Network Connectivity Center, Virtual Private Cloud Cross-Cloud Network inter-VPC connectivity using Network Connectivity Center Describes how to design the network segmentation structure and connectivity of Cross-Cloud Network with Network Connectivity Center. Products used: Cloud Load Balancing, Network Connectivity Center, Virtual Private Cloud Cross-Cloud Network inter-VPC connectivity using VPC Network Peering Describes how to design the network segmentation structure and connectivity of Cross-Cloud Network for distributed applications. Products used: Cloud Load Balancing, Virtual Private Cloud Decide the network design for your Google Cloud landing zone This document describes four common network designs for landing zones, and helps you choose the option that best meets your requirements. Products used: VPC Service Controls, Virtual Private Cloud Deploy network monitoring and telemetry capabilities in Google Cloud Network telemetry collects network traffic data from devices on your network so that the data can be analyzed. Network telemetry lets security operations teams detect network-based threats and hunt for advanced adversaries, which is essential for... Products used: Compute Engine, Google Kubernetes Engine (GKE), Virtual Private Cloud Deploying FortiGate-VM Next Generation Firewall using Terraform Shows you how to use Terraform to deploy a FortiGate reference architecture to help protect your applications against cyberattacks. Products used: Cloud Load Balancing, Cloud Storage, Compute Engine Design secure deployment pipelines Describes best practices for designing secure deployment pipelines based on your confidentiality, integrity, and availability requirements. Products used: App Engine, Cloud Run, Google Kubernetes Engine (GKE) Designing networks for migrating enterprise workloads: Architectural approaches This document introduces a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. These architectures emphasize advanced connectivity, zero-trust security principles, and... 
Products used: Cloud CDN, Cloud DNS, Cloud Interconnect, Cloud Intrusion Detection System (Cloud IDS), Cloud Load Balancing, Cloud NAT, Cloud Service Mesh, Cloud VPN, Google Cloud Armor, Identity-Aware Proxy, Network Connectivity Center, VPC Service Controls, Virtual Private Cloud FortiGate architecture in Google Cloud Describes the overall concepts around deploying a FortiGate Next Generation Firewall (NGFW) in Google Cloud. Products used: Cloud Load Balancing, Cloud NAT, Compute Engine, Virtual Private Cloud From edge to mesh: Deploy service mesh applications through GKE Gateway Products used: Cloud Load Balancing, Cloud Service Mesh, Google Kubernetes Engine (GKE) From edge to mesh: Expose service mesh applications through GKE Gateway Combines Cloud Service Mesh with Cloud Load Balancing to expose applications in a service mesh to internet clients. Products used: Cloud Load Balancing, Cloud Service Mesh, Google Kubernetes Engine (GKE) From edge to multi-cluster mesh: Deploy globally distributed applications through GKE Gateway and Cloud Service Mesh Products used: Certificate Manager, Cloud Endpoints, Cloud Load Balancing, Cloud Service Mesh, Google Cloud Armor, Google Kubernetes Engine (GKE) From edge to multi-cluster mesh: Globally distributed applications exposed through GKE Gateway and Cloud Service Mesh Describes exposing applications externally through Google Kubernetes Engine (GKE) Gateways running on multiple GKE clusters within a service mesh. Products used: Certificate Manager, Cloud Endpoints, Cloud Load Balancing, Cloud Service Mesh, Google Cloud Armor, Google Kubernetes Engine (GKE) Gated egress Discusses how the gated egress pattern is based on exposing select APIs from various environments to workloads that are deployed in Google Cloud. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Gated egress and gated ingress Discusses scenarios that demand bidirectional usage of selected APIs between workloads that run in various environments. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Gated ingress Discusses exposing select APIs of workloads running in Google Cloud to the private computing environment without exposing them to the public internet. Products used: Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Kubernetes Engine (GKE) Hub-and-spoke network architecture Evaluate the architectural options for designing hub-and-spoke network topologies in Google Cloud. Products used: Cloud NAT, Cloud VPN, Virtual Private Cloud Hybrid and multicloud monitoring and logging patterns Discusses monitoring and logging architectures for hybrid and multicloud deployments, and provides best practices for implementing them by using Google Cloud. Products used: Cloud Logging, Cloud Monitoring, GKE Enterprise, Google Distributed Cloud, Google Kubernetes Engine (GKE) Hybrid and multicloud secure networking architecture patterns Discusses several common secure network architecture patterns that you can use for hybrid and multicloud architectures. Products used: Cloud DNS, Cloud Interconnect, Cloud NAT, Cloud VPN, Compute Engine, Google Cloud Armor, Google Kubernetes Engine (GKE) Implement your Google Cloud landing zone network design This document provides steps and guidance to implement your chosen network design for your landing zone. 
Products used: Virtual Private Cloud Jump Start Solution: Load balanced managed VMs Deploy an autoscaling group of Compute Engine VMs with a load balancer as the frontend. Landing zone design in Google Cloud This series shows how to design and build a landing zone in Google Cloud, guiding you through high-level decisions about identity onboarding, resource hierarchy, network design, and security. Manage and scale networking for Windows applications that run on managed Kubernetes Discusses how to manage networking for Windows applications that run on Google Kubernetes Engine using Cloud Service Mesh and Envoy gateways. Products used: Cloud Load Balancing, Cloud Service Mesh, Google Kubernetes Engine (GKE) Patterns for connecting other cloud service providers with Google Cloud Helps cloud architects and operations professionals decide how to connect Google Cloud with other cloud service providers (CSP) such as Amazon Web Services (AWS) and Microsoft Azure. Products used: Cloud Interconnect, Dedicated Interconnect, Partner Interconnect Secure virtual private cloud networks with the Palo Alto VM-Series NGFW Describes the networking concepts that you need to understand to deploy Palo Alto Networks VM-Series next generation firewall (NGFW) in Google Cloud. Products used: Cloud Storage Use Google Cloud Armor, load balancing, and Cloud CDN to deploy programmable global front ends Provides an architecture that uses a global front end which incorporates Google Cloud best practices to help scale, secure, and accelerate the delivery of your internet-facing applications. VMware Engine network security using centralized appliances Design advanced network security for Google Cloud VMware Engine workloads to provide network protection features like DDoS mitigation, SSL offloading, NGFW, IPS/IDS, and DPI. Products used: Cloud CDN, Cloud Interconnect, Cloud Load Balancing, Cloud VPN, Google Cloud VMware Engine, Virtual Private Cloud Send feedback \ No newline at end of file diff --git a/Content_overview(9).txt b/Content_overview(9).txt new file mode 100644 index 0000000000000000000000000000000000000000..bd2ebc3b95530f161da2c248f9df145389703ce4 --- /dev/null +++ b/Content_overview(9).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/reliability +Date Scraped: 2025-02-23T11:54:01.527Z + +Content: +Home Docs Cloud Architecture Center Send feedback Reliability and disaster recovery resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-17 UTC The Architecture Center provides content resources across a wide variety of reliability and disaster recovery subjects. Get started If you are new to Google Cloud or new to designing for reliability and disaster recovery on Google Cloud, begin with these resources: Infrastructure reliability guide Disaster recovery planning guide Reliability and disaster recovery resources in the Architecture Center You can filter the following list of reliability and disaster recovery resources by typing a product name or a phrase that's in the resource title or description. Architectures for high availability of MySQL clusters on Compute Engine Describes several architectures that provide high availability (HA) for MySQL deployments on Google Cloud. Products used: Compute Engine Architectures for high availability of PostgreSQL clusters on Compute Engine Several architectures that provide high availability (HA) for PostgreSQL deployments on Google Cloud. 
Products used: Compute Engine Business continuity with CI/CD on Google Cloud Learn about developing a business continuity plan (BCP) for the CI/CD process. Products used: Artifact Registry, Backup and DR Service, Cloud Build, Cloud Deploy, Google Kubernetes Engine (GKE) Continuous data replication to Spanner using Striim How to migrate a MySQL database to Cloud Spanner using Striim. Products used: Cloud SQL, Cloud SQL for MySQL, Compute Engine, Spanner Database migration: Concepts and principles (Part 1) Introduces concepts, principles, terminology, and architecture of near-zero downtime database migration from on-premises or other cloud environments. Products used: Compute Engine, Spanner Design an optimal storage strategy for your cloud workload Assess your workload's requirements, review the storage options in Google Cloud, and select an optimal storage strategy. Products used: Cloud Storage, Filestore, Persistent Disk Disaster recovery planning guide The first part of a series that discusses disaster recovery (DR) in Google Cloud. This part provides an overview of the DR planning process: what you need to know in order to design and implement a DR plan. Products used: Cloud Key Management Service, Cloud Storage, Spanner Google Cloud infrastructure reliability guide Introduces the building blocks of reliability in Google Cloud, and provides architectural recommendations to design reliable infrastructure for your cloud workloads. Jump Start Solution: Load balanced managed VMs Deploy an autoscaling group of Compute Engine VMs with a load balancer as the frontend. Patterns for scalable and resilient apps Introduces some patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. Products used: Cloud Load Balancing, Cloud Monitoring, Cloud SQL, Cloud Storage, Compute Engine Patterns for using floating IP addresses in Compute Engine How to use floating IP address patterns when migrating applications to Compute Engine from an on-premises network. Products used: Compute Engine Send feedback \ No newline at end of file diff --git a/Content_overview.txt b/Content_overview.txt new file mode 100644 index 0000000000000000000000000000000000000000..2506aeb702914395573b1e269fb0ad39793fbe2f --- /dev/null +++ b/Content_overview.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/fundamentals +Date Scraped: 2025-02-23T11:42:32.520Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecture fundamentals Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-08 UTC The Architecture Center provides content for a variety of technology categories, like AI and ML, application development, and big data. Use the resources listed on this page to get fundamental architectural guidance that's applicable to all the technology categories. Architecture Framework Provides best practices and recommendations to help you build well-architected cloud topologies that are secure, efficient, resilient, high-performing, and cost-effective. Deployment archetypes Provides an overview of the basic archetypes for building cloud architectures (zonal, regional, multi-regional, global, hybrid, and multicloud), and describes the use cases and design considerations for each archetype. Landing zone design Describes how to design and build a landing zone that includes identity onboarding, resource hierarchy, network design, and security controls. 
Enterprise foundations blueprint Provides guidance to help you design and build a cloud foundation that enables consistent governance, security controls, scale, visibility, and access for your enterprise workloads. Send feedback \ No newline at end of file diff --git a/Continuous_data_replication_to_BigQuery_using_Striim.txt b/Continuous_data_replication_to_BigQuery_using_Striim.txt new file mode 100644 index 0000000000000000000000000000000000000000..c9a3997f5b5c6b7f9f0c0bba49c45f674407ed9c --- /dev/null +++ b/Continuous_data_replication_to_BigQuery_using_Striim.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/continuous-data-replication-bigquery-striim +Date Scraped: 2025-02-23T11:49:11.390Z + +Content: +Home Docs Cloud Architecture Center Send feedback Continuous data replication to BigQuery using Striim Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-13 UTC By: Edward Bell, Solutions Architect, Striim, Inc. This tutorial demonstrates how to migrate a MySQL database to BigQuery using Striim. Striim is a comprehensive streaming extract, transform, and load (ETL) platform that enables online database migrations and continuous streaming replication from on-premises and cloud data sources to Google Cloud data services. This tutorial focuses on the implementation of a continuous replication from Cloud SQL for MySQL to BigQuery. It is intended for database administrators, IT professionals, and data architects interested in taking advantage of BigQuery capabilities. Objectives Launch the Striim for BigQuery free trial. Use Striim to continuously replicate from Cloud SQL for MySQL to BigQuery. Costs In this document, you use the following billable components of Google Cloud: Cloud SQL for MySQL BigQuery To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. This tutorial also uses Striim, which includes a trial period. You can find Striim in the Cloud Marketplace. When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Compute Engine and BigQuery APIs. Enable the APIs In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell Set the default compute zone to us-central1-a: gcloud config set compute/zone us-central1-a export COMPUTE_ZONE=us-central1-a This zone is where you deploy your database. For more information about zones, see Geography and regions. Create a Cloud SQL for MySQL instance You create a Cloud SQL for MySQL instance that you later connect to Striim. In this case, the instance acts as the source transactional system that you later replicate. In a real-world scenario, the source database can be one of many transactional database systems.
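Before you create the source instance, you can optionally confirm from Cloud Shell that the required APIs are enabled and that the default zone and project are set. The following commands are a minimal sketch and are not part of the original tutorial; they use standard gcloud commands, and the Cloud SQL Admin API (sqladmin.googleapis.com) is included on the assumption that it is needed for the gcloud sql commands that follow:
# Enable the APIs used in this tutorial (assumption: sqladmin.googleapis.com is also required for the gcloud sql commands).
gcloud services enable compute.googleapis.com bigquery.googleapis.com sqladmin.googleapis.com
# Confirm the active project and the default compute zone that later commands rely on.
gcloud config get-value project
gcloud config get-value compute/zone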
In Cloud Shell, create the environment variables to create the instance: CSQL_NAME=striim-sql-src CSQL_USERNAME=striim-user CSQL_USER_PWD=$(openssl rand -base64 18) CSQL_ROOT_PWD=$(openssl rand -base64 18) If you close the Cloud Shell session, you lose the variables. Make a note of the generated CSQL_USER_PWD and CSQL_ROOT_PWD passwords. You can display them by running echo $CSQL_USER_PWD and echo $CSQL_ROOT_PWD Create the Cloud SQL for MySQL instance: gcloud sql instances create $CSQL_NAME \ --root-password=$CSQL_ROOT_PWD --zone=$COMPUTE_ZONE \ --tier=db-n1-standard-2 --enable-bin-log Create a Cloud SQL for MySQL user that Striim can connect to: gcloud sql users create $CSQL_USERNAME --instance $CSQL_NAME \ --password $CSQL_USER_PWD --host=% The Cloud SQL for MySQL database is set up for Striim to read. Find the IP address of the Cloud SQL for MySQL instance and make a note of it: gcloud sql instances describe $CSQL_NAME --format='get(ipAddresses.ipAddress)' Set up Striim To set up an instance of the Striim server software, you use the Cloud Marketplace. In the Google Cloud console, go to the Striim page in the Cloud Marketplace. Go to Striim in the Cloud Marketplace Click Launch. In the New Striim Deployment window, complete the following fields: Select the project that you created or selected to use for this tutorial. In the Zone drop-down menu, select us-central1-a. If you accept the terms of service, select the I accept the Google Cloud Marketplace Terms of Service checkbox. Cloud Marketplace solutions typically come with various resources that launch to support the software. Review the monthly billing estimate before launching the solution. Leave all other settings at their default values. Click Deploy. In the Google Cloud console, go to the Deployments page. Go to Deployments To review the deployment details of the Striim instance, click the name of the Striim instance. Make a note of the name of the deployment and the name of the VM that was deployed. To allow Striim to communicate with Cloud SQL for MySQL, add the Striim server's IP address to the Cloud SQL for MySQL instance's authorized networks: STRIIM_VM_NAME=STRIIM_VM_NAME STRIIM_VM_ZONE=us-central1-a gcloud sql instances patch $CSQL_NAME \ --authorized-networks=$(gcloud compute instances describe $STRIIM_VM_NAME \ --format='get(networkInterfaces[0].accessConfigs[0].natIP)' \ --zone=$STRIIM_VM_ZONE) Replace the following: STRIIM_VM_NAME: the name of the VM that you deployed with Striim. In the Google Cloud console, on the deployment instance details page, click Visit the site to open the Striim web UI. In the Striim configuration wizard, configure the following: Review the end-user license agreement. If you accept the terms, click Accept Striim EULA and Continue. Enter your contact information. Enter the Cluster Name, Admin, Sys, and Striim Key passwords of your choice. Make a note of these passwords. Click Save and Continue. Leave the key field blank to enable the trial, and then click Save and Continue. Click Launch. It takes about a minute for Striim to be configured. When done, click Log In. Log in to the Striim administrator console with the admin user and the administrator password that you previously set. Keep this window open because you return to it in a later step. Set up Connector/J Use MySQL Connector/J to connect Striim to your Cloud SQL for MySQL instance. As of this writing, 5.1.49 is the latest version of Connector/J. In the Google Cloud console, go to the Deployments page.
Go to Deployments For the Striim instance, click SSH to automatically connect to the instance. Download the Connector/J to the instance and extract it: wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.49.tar.gz tar -xvzf mysql-connector-java-5.1.49.tar.gz Copy the file to the Striim library path, allow it to be executable, and change ownership of the file that you downloaded: sudo cp ~/mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar /opt/striim/lib sudo chmod +x /opt/striim/lib/mysql-connector-java-5.1.49.jar sudo chown striim /opt/striim/lib/mysql-connector-java-5.1.49.jar To recognize the new library, restart the Striim server: sudo systemctl stop striim-node sudo systemctl stop striim-dbms sudo systemctl start striim-dbms sudo systemctl start striim-node Go back to the browser window with the administration console in it. Reload the page, and then log in using the admin user credentials. It can take a couple minutes for the server to complete its restart from the previous step, so you might get a browser error during that time. If you encounter an error, reload the page and log in again. Load sample transactions to Cloud SQL Before you can configure your first Striim app, load transactions into the MySQL instance. In Cloud Shell, connect to the instance using the Cloud SQL for MySQL instance credentials that you previously set: gcloud sql connect $CSQL_NAME --user=$CSQL_USERNAME Create a sample database and load some transactions into it: CREATE DATABASE striimdemo; USE striimdemo; CREATE TABLE ORDERS (ORDER_ID Integer, ORDER_DATE VARCHAR(50), ORDER_MODE VARCHAR(8), CUSTOMER_ID Integer, ORDER_STATUS Integer, ORDER_TOTAL Float, SALES_REP_ID Integer, PROMOTION_ID Integer, PRIMARY KEY (ORDER_ID)); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1001, 1568927976017, 'In-Store', 1001, 9, 34672.59, 331, 9404); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1002, 1568928036017, 'In-Store', 1002, 1, 28133.14, 619, 2689); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1003, 1568928096017, 'CompanyB', 1003, 1, 37367.95, 160, 30888); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1004, 1568928156017, 'CompanyA', 1004, 1, 7737.02, 362, 89488); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1005, 1568928216017, 'CompanyA', 1005, 9, 15959.91, 497, 78454); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1006, 1568928276017, 'In-Store', 1006, 1, 82531.55, 399, 22488); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1007, 1568928336017, 'CompanyA', 1007, 7, 52929.61, 420, 66256); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1008, 1568928396017, 'Online', 1008, 1, 26912.56, 832, 7262); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1009, 1568928456017, 'CompanyA', 1009, 1, 97706.08, 124, 12185); INSERT INTO ORDERS (ORDER_ID, 
ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1010, 1568928516017, 'CompanyB', 1010, 1, 47539.16, 105, 17868); To check the upload, count the records to ensure that 10 records were inserted: SELECT COUNT(*) FROM ORDERS; Leave the Cloud SQL for MySQL instance: Exit Create a BigQuery target dataset In this section, you create a BigQuery dataset, and load service account credentials so that Striim can write to the target database from the Google Cloud console. In Cloud Shell, create a BigQuery dataset: bq --location=US mk -d \ --description "Test Target for Striim." striimdemo For this tutorial, you deploy BigQuery in the US. Create a new target table: bq mk \ --table \ --description "Striim Table" \ --label organization:striimlab striimdemo.orders order_id:INTEGER,order_date:STRING,order_mode:STRING,customer_id:INTEGER,order_status:INTEGER,order_total:FLOAT,sales_rep_id:INTEGER,promotion_id:INTEGER Create a service account for Striim to connect to BigQuery: gcloud iam service-accounts create striim-bq \ --display-name striim-bq export sa_striim_bq=$(gcloud iam service-accounts list \ --filter="displayName:striim-bq" --format='value(email)') export PROJECT=$(gcloud info \ --format='value(config.project)') gcloud projects add-iam-policy-binding $PROJECT \ --role roles/bigquery.dataEditor \ --member serviceAccount:$sa_striim_bq gcloud projects add-iam-policy-binding $PROJECT \ --role roles/bigquery.user --member serviceAccount:$sa_striim_bq gcloud iam service-accounts keys create ~/striim-bq-key.json \ --iam-account $sa_striim_bq A key called striim-bq-key.json is created in your home path. Move the newly generated key to the server: gcloud compute scp ~/striim-bq-key.json $STRIIMVM_NAME:~ \ --zone=$COMPUTE_ZONE Move the key to the /opt/striim directory: gcloud compute ssh \ --zone=$COMPUTE_ZONE $STRIIMVM_NAME \ -- 'sudo cp ~/striim-bq-key.json /opt/striim && sudo chown striim /opt/striim/striim-bq-key.json' You are now ready to create a Striim app. Create an online database migration An online database migration moves data from a source database (either on-premises or hosted on a cloud provider) to a target database or data warehouse in Google Cloud. The source database remains fully accessible to the business app, with minimal performance impact on the source database during this time. In an online migration, you perform an initial bulk load, and also continuously capture any changes. You then synchronize the two databases to ensure that data isn't lost. If you want to focus on creating a change data capture (CDC) pipeline, see the Create a continuous data pipeline from Cloud SQL for MySQL to BigQuery section. Create the source connection In the Google Cloud console, on the instance details page, click Visit the site to open the Striim web UI. In the Striim web UI, click Apps. Click Add App. Click Start from Scratch. In the Name field, enter MySQLToBigQuery_initLoad. In the Namespace drop-down menu, select the default Admin namespace. This label is used to organize your apps. Click Save. On the Flow Designer page, to do a one-time initial bulk load of data, from the Sources pane, drag Database to the flow design palette in the center of the screen and enter the following connection properties: In the Name field, enter mysql_source. Leave the Adapter field at the default value of DatabaseReader. In the Connection URL field, enter jdbc:mysql://PRIMARY_ADDRESS:3306/striimdemo.
Replace PRIMARY_ADDRESS with the IP address of the Cloud SQL instance that you created in the previous section. In the Username field, enter the username that you set as the CSQL_USERNAME environment variable, striim-user. In the Password field, enter the CSQL_USER_PWD value that you made a note of when you created a Cloud SQL for MySQL instance. To see more configuration properties, click Show optional properties. In the Tables field, enter striimdemo.ORDERS. For Output to, select New output. In the New output field, enter stream_CloudSQLMySQLInitLoad. Click Save. To test the configuration settings to make sure that Striim can successfully connect to Cloud SQL for MySQL, click Created, and then select Deploy App. In the Deployment window, you can specify where parts of your app run in your deployment topology. For this tutorial, select Default, and click Deploy. To preview your data as it flows through the Striim pipeline, click waves mysql_source Database reader, and then click remove_red_eye Preview on run. Click Deployed, and then click Start App. The Striim app starts running, and data flows through the pipeline. Because the pipeline contains only a source component at this point, any errors indicate an issue connecting to the source database. If your app runs successfully but no data flows through, that typically means that you don't have any data in your database. After you've successfully connected to your source database and tested that it can read data, click Running, and then select Stop App. Click Stopped, and then select Undeploy App. You are now ready to connect this flow to BigQuery. Perform an initial load into BigQuery In the Striim web UI, click waves mysql_source Database reader. Click add Connect to next component, select Connect next Target component, and then complete the following fields: In the Name field, enter bq_target. In the Adapter field, enter BigQueryWriter. The Tables property is a source/target pair separated by commas. It is in the format of srcSchema1.srcTable1,tgtSchema1.tgtTable1;srcSchema2.srcTable2,tgtSchema2.tgtTable2. For this tutorial, enter striimdemo.ORDERS,striimdemo.orders. The Service Account Key requires the fully qualified path and name of the key file that was previously generated. For this tutorial, enter /opt/striim/striim-bq-key.json. In the Project ID field, enter your Google Cloud project ID. Click Save. To deploy the app and preview the data flow, do the following: Click Created, and then select Deploy App. In the Deployment window, select Default, and then click Deploy. To preview your data as it flows through the Striim pipeline, click waves mysql_source Database reader, and then click remove_red_eye Preview on run. Click Deployed, and then click Start App. In the Google Cloud console, go to the BigQuery page. Go to BigQuery Click the striimdemo database. In the query editor, enter SELECT COUNT(*) AS ORDERS, AVG(ORDER_TOTAL) AS ORDERS_AVE, SUM(ORDER_TOTAL) AS ORDERS_SUM FROM striimdemo.orders; and then click Run. It can take up to 90 seconds for the transactions to fully replicate to BigQuery due to the default configuration settings. After the data is successfully replicated, the results table shows the average order total, 43148.952, and the sum of the order totals, 431489.52. You have successfully set up your Striim environment and pipeline to perform a batch load.
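If you prefer to verify the load from Cloud Shell instead of the BigQuery console, you can run the same aggregation with the bq command-line tool. This is a minimal sketch that assumes your Cloud Shell session still points at the project that contains the striimdemo dataset:

# Run the verification query in standard SQL from Cloud Shell.
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) AS ORDERS, AVG(ORDER_TOTAL) AS ORDERS_AVE, SUM(ORDER_TOTAL) AS ORDERS_SUM FROM striimdemo.orders'

The output should match what the console shows after the initial load: 10 orders, an average of 43148.952, and a sum of 431489.52.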
Create a continuous data pipeline from Cloud SQL for MySQL to BigQuery With an initial one-time bulkload in place, you can now set up a continuous replication pipeline. This pipeline is similar to the bulk pipeline that you created, but with a different source object. Create a CDC source In the Striim web UI, click Home. Click add Apps. Click Start from Scratch. In the Name field, enter MySQLToBigQuery_cdc. In the Namespace drop-down menu, select Admin namespace. On the Flow Designer page, drag a MySQL CDC source reader to the center of the design palette. Configure your new MySQL CDC source with the following information: In the Name field, enter mysql_cdc_source. Leave the Adapter field at the default value of MysqlReader. In the Connection URL field, enter jdbc:mysql://PRIMARY_ADDRESS:3306/striimdemo. Enter the username and password that you used in the previous section. To see more configuration properties, click Show optional properties. In the Tables field, enter striimdemo.ORDERS. For Output to, select New output. In the New output field, enter stream_CloudSQLMySQLCDCLoad. Click Save. Load new transactions into BigQuery In the Striim web UI, click waves MysqlReader. Click add Connect to next component, and then select Connect next Target component. In the Name field, enter bq_cdc_target. In the Adapter field, enter BigQueryWriter. The Tables property is a source/target pair separated by commas. It is in the format of srcSchema1.srcTable1,tgtSchema1.tgtTable1;srcSchema2.srcTable2,tgtSchema2.tgtTable2. For this tutorial, use striimdemo.ORDERS,striimdemo.orders. The Service Account Key requires a fully qualified path and name of the key file that was previously generated. For this tutorial, enter /opt/striim/striim-bq-key.json In the Project ID field, enter your Google Cloud project ID. Click Save. To deploy the app and preview the data flow, do the following: Click Created, and then select Deploy App. In the Deployment window, select Default, and then click Deploy. To preview your data as it flows through the Striim pipeline, click waves MysqlReader, and then click remove_red_eye Preview on Run. Click Deployed, and then click Start App. In Cloud Shell, connect to your Cloud SQL for MySQL instance: gcloud sql connect $CSQL_NAME --user=$CSQL_USERNAME Connect to your database and load new transactions into it: USE striimdemo; INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1011, 1568928576017, 'In-Store', 1011, 9, 13879.56, 320, 88252); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1012, 1568928636017, 'CompanyA', 1012, 1, 19729.99, 76, 95203); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1013, 1568928696017, 'In-Store', 1013, 5, 7286.68, 164, 45162); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1014, 1568928756017, 'Online', 1014, 1, 87268.61, 909, 70407); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1015, 1568928816017, 'CompanyB', 1015, 1, 69744.13, 424, 79401); In the Striim web UI, on the Transactions view page, transactions now populate the page and show that data is flowing. In the Google Cloud console, go to the BigQuery page. 
Go to BigQuery Click the striimdemo database. To verify that your data is successfully replicated, in the Query Editor enter SELECT COUNT(*) AS ORDERS, AVG(ORDER_TOTAL) AS ORDERS_AVE, SUM(ORDER_TOTAL) AS ORDERS_SUM FROM striimdemo.orders; and then click Run. The results table outputs the average order of 43148.952 and the total size of the orders, 431489.52. It can take up to 90 seconds for the transactions to fully replicate to BigQuery due to the default configuration settings. Congratulations, you have successfully set up a streaming replication pipeline from Cloud SQL for MySQL to BigQuery. Clean up The easiest way to eliminate billing is to delete the Google Cloud project you created for the tutorial. Alternatively, you can delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. What's next Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Look at the Google Cloud Data Migration content. To learn about Striim, visit the website, schedule a demo with a Striim technologist, and subscribe to the Striim blog. To learn how to set up continuous data movement from Oracle to BigQuery, see Oracle to Google BigQuery – Continuous Movement of On-Premises Data via CDC and the Move Oracle to Google BigQuery in Real Time video. Send feedback \ No newline at end of file diff --git a/Continuous_data_replication_to_Cloud_Spanner_using_Striim.txt b/Continuous_data_replication_to_Cloud_Spanner_using_Striim.txt new file mode 100644 index 0000000000000000000000000000000000000000..0b6b06284bb71a62612f5cdd073abbff37bb2f77 --- /dev/null +++ b/Continuous_data_replication_to_Cloud_Spanner_using_Striim.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/continuous-data-replication-cloud-spanner-striim +Date Scraped: 2025-02-23T11:54:49.557Z + +Content: +Home Docs Cloud Architecture Center Send feedback Continuous data replication to Spanner using Striim Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-13 UTC By: Edward Bell, Solutions Architect, Striim, Inc. This tutorial demonstrates how to migrate a MySQL database to Spanner using Striim. Striim is a comprehensive streaming extract, transform, and load (ETL) platform that enables online database migrations and continuous streaming replication from on-premises and cloud data sources to Google Cloud data services. This tutorial focuses on the implementation of a continuous migration from Cloud SQL for MySQL to Spanner, and is not an explanation of migrations or replications, or why you might want to migrate your underlying database. 
This tutorial is intended for database administrators, IT professionals, and cloud architects interested in using Spanner—a scalable, enterprise-grade, globally distributed, and strongly consistent database service built for the cloud. Objectives Use Google Cloud Marketplace to deploy Striim. Use Striim to read from a source Cloud SQL for MySQL database. Use Striim to continuously replicate from Cloud SQL for MySQL to Spanner. Costs In this document, you use the following billable components of Google Cloud: Compute Engine Cloud SQL for MySQL Spanner To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. This tutorial also uses Striim, which includes a trial period through the Cloud Marketplace. When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Compute Engine and Spanner APIs. Enable the APIs In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell Set the default compute zone to us-central1-a: gcloud config set compute/zone us-central1-a export COMPUTE_ZONE=us-central1-a This zone is where you deploy your database and compute resources to. For more information about zones, see Geography and regions. Create a Cloud SQL for MySQL instance You create a Cloud SQL for MySQL virtual machine (VM) instance that you later connect to Striim. In this case, the instance acts as the source transactional system that you later replicate. In a real-world scenario, the source database can be one of many transactional database systems either on-premises or in other clouds. In Cloud Shell, create the environment variables to create the instance: CSQL_NAME=striim-sql-src CSQL_USERNAME=striim-user CSQL_USER_PWD=$(openssl rand -base64 18) CSQL_ROOT_PWD=$(openssl rand -base64 18) If you close the Cloud Shell session, you lose the variables. Make a note of the CSQL_USER_PWD and CSQL_ROOT_PWD passwords generated by the following commands: echo $CSQL_USER_PWD echo $CSQL_ROOT_PWD Create the Cloud SQL for MySQL instance: gcloud sql instances create $CSQL_NAME \ --root-password=$CSQL_ROOT_PWD --zone=$COMPUTE_ZONE \ --tier=db-n1-standard-2 --enable-bin-log Create a Cloud SQL for MySQL user that Striim can connect to: gcloud sql users create $CSQL_USERNAME --instance $CSQL_NAME \ --password $CSQL_USER_PWD --host=% The Cloud SQL for MySQL database is set up for Striim to read. Find the IP address of the Cloud SQL for MySQL instance: gcloud sql instances list Write down the IP address listed in the PRIMARY_ADDRESS column. Set up Striim To set up an instance of the Striim server software, you use the Cloud Marketplace. In the Google Cloud console, go to the Striim page in the Cloud Marketplace. Go to Striim in the Cloud Marketplace Click Launch. In the New Striim Deployment window, complete the following fields: Select the project that you created or selected to use for this tutorial. In the Zone drop-down menu, select us-central1-a. 
If you accept the terms of service, select the I accept the Google Cloud Marketplace Terms of Service checkbox. Cloud Marketplace solutions typically come with various resources that launch to support the software. Review the monthly billing estimate before launching the solution. Leave all other settings at their default values. Click Deploy. In the Google Cloud console, go to the Deployments page. Go to Deployments To review the deployment details of the Striim instance, click the name of the Striim instance. Write down the name of the deployment, as well as the name of the VM that has deployed. To allow Striim to communicate with Cloud SQL for MySQL, add the Striim server's IP address to the Cloud SQL for MySQL instance's authorized networks: STRIIMVM_NAME=STRIIM_VM_NAME STRIIMVM_ZONE=us-central1-a gcloud sql instances patch $CSQL_NAME \ --authorized-networks=$(gcloud compute instances describe $STRIIMVM_NAME \ --format='get(networkInterfaces[0].accessConfigs[0].natIP)' \ --zone=$STRIIMVM_ZONE) Replace the following: STRIIM_VM_NAME: the name of the VM that you deployed with Striim In the Google Cloud console, on the deployment instance details page, click Visit the site to open the Striim web UI. In the Striim configuration wizard, configure the following: Review the end user license agreement. If you accept the terms, click Accept Striim EULA and Continue. Enter your contact information. Enter the Cluster Name, Admin, Sys, and Striim Key passwords of your choice. Make a note of these passwords. Click Save and Continue. Leave the key field blank to enable the trial, and then click Save and Continue. Click Launch. It takes about a minute for Striim to be configured. When done, click Log In. To log in to the Striim administrator console, log in with the admin user and the administrator password that you previously set. Keep this window open because you return to it in a later step. Set up MySQL Connector/J Use MySQL Connector/J to connect Striim to your Cloud SQL for MySQL instance. As of this writing, 5.1.49 is the latest version of Connector/J. in the Google Cloud console, go to the Deployments page. Go to Deployments page For the Striim instance, click SSH to automatically connect to the instance. Download the Connector/J to the instance and extract the contents of the file: wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.49.tar.gz tar -xvzf mysql-connector-java-5.1.49.tar.gz Copy the file to the Striim library path, allow it to be executable, and change ownership of the file that you downloaded: sudo cp ~/mysql-connector-java-5.1.49/mysql-connector-java-5.1.49.jar /opt/striim/lib sudo chmod +x /opt/striim/lib/mysql-connector-java-5.1.49.jar sudo chown striim /opt/striim/lib/mysql-connector-java-5.1.49.jar To recognize the new library, restart the Striim server: sudo systemctl stop striim-node sudo systemctl stop striim-dbms sudo systemctl start striim-dbms sudo systemctl start striim-node Go back to the browser window with the administration console in it. Reload the page, and then log in using the admin user credentials. It can take a couple minutes for the server to complete its restart from the previous step, so you might get a browser error during that time. If you encounter an error, reload the page and log in again. Load sample transactions to Cloud SQL Before you can configure your first Striim app, load transactions into the Cloud SQL for MySQL instance. 
In Cloud Shell, connect to the instance using the Cloud SQL for MySQL instance credentials that you previously set: gcloud sql connect $CSQL_NAME --user=$CSQL_USERNAME Create a sample database and load some sample transactions into it: CREATE DATABASE striimdemo; USE striimdemo; CREATE TABLE ORDERS (ORDER_ID Integer, ORDER_DATE VARCHAR(50), ORDER_MODE VARCHAR(8), CUSTOMER_ID Integer, ORDER_STATUS Integer, ORDER_TOTAL Float, SALES_REP_ID Integer, PROMOTION_ID Integer, PRIMARY KEY (ORDER_ID)); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1001, 1568927976017, 'In-Store', 1001, 9, 34672.59, 331, 9404); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1002, 1568928036017, 'In-Store', 1002, 1, 28133.14, 619, 2689); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1003, 1568928096017, 'CompanyB', 1003, 1, 37367.95, 160, 30888); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1004, 1568928156017, 'CompanyA', 1004, 1, 7737.02, 362, 89488); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1005, 1568928216017, 'CompanyA', 1005, 9, 15959.91, 497, 78454); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1006, 1568928276017, 'In-Store', 1006, 1, 82531.55, 399, 22488); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1007, 1568928336017, 'CompanyA', 1007, 7, 52929.61, 420, 66256); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1008, 1568928396017, 'Online', 1008, 1, 26912.56, 832, 7262); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1009, 1568928456017, 'CompanyA', 1009, 1, 97706.08, 124, 12185); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1010, 1568928516017, 'CompanyB', 1010, 1, 47539.16, 105, 17868); To check the upload, count the records to ensure that 10 records were inserted: SELECT COUNT(*) FROM ORDERS; Leave the Cloud SQL for MySQL instance: Exit Create a Spanner target instance In this section, you create a Spanner instance and load service account credentials so that Striim can write to the target instance from the Google Cloud console. In Cloud Shell, create a Spanner instance: gcloud spanner instances create striim-spanner-demo \ --config=regional-us-central1 --nodes=1 \ --description="Test Target for Striim" For this tutorial, deploy Spanner in the same region as Cloud SQL. If you chose a different region than us-central1, change the region. For more information about Spanner and regions, see instances. 
Create a database in the new instance with the target table: gcloud spanner databases create striimdemo \ --instance=striim-spanner-demo \ --ddl="CREATE TABLE orders (ORDER_ID INT64,ORDER_DATE STRING(MAX),ORDER_MODE STRING(MAX),CUSTOMER_ID INT64,ORDER_STATUS INT64,ORDER_TOTAL FLOAT64,SALES_REP_ID INT64,PROMOTION_ID INT64) PRIMARY KEY (ORDER_ID)" Create a service account for Striim to connect to Spanner: gcloud iam service-accounts create striim-spanner \ --display-name striim-spanner export sa_striim_spanner=$(gcloud iam service-accounts list \ --filter="displayName:striim-spanner" --format='value(email)') export PROJECT=$(gcloud info \ --format='value(config.project)') gcloud projects add-iam-policy-binding $PROJECT \ --role roles/spanner.databaseUser \ --member serviceAccount:$sa_striim_spanner gcloud iam service-accounts keys create ~/striim-spanner-key.json \ --iam-account $sa_striim_spanner A key called striim-spanner-key.json is created in your home path. Move the newly generated key to the server: gcloud compute scp ~/striim-spanner-key.json $STRIIMVM_NAME:~ \ --zone=$STRIIMVM_ZONE gcloud compute ssh --zone=$STRIIMVM_ZONE $STRIIMVM_NAME \ -- 'sudo cp ~/striim-spanner-key.json /opt/striim && \ sudo chown striim /opt/striim/striim-spanner-key.json' You are now ready to create a Striim app. Create an online database migration An online database migration moves data from a source database (either on-premises or hosted on a cloud provider) to a target database in Google Cloud. The source database remains fully accessible by the business app and with minimal performance impact on the source database during this time. In an online migration, you perform an initial bulk load, and also continuously capture any changes. You then synchronize the two databases to ensure that data isn't lost. Typically both databases are retained for long periods of time to test and verify that the app and users aren't impacted by switching to a new cloud database. Create the source connection In the Google Cloud console, on the instance details page, click Visit the site to open the Striim web UI. In the Striim web UI, click Apps. Click Add App. Click Start from Scratch. In the Name field, enter demo_online. In the Namespace drop-down menu, select the default Admin namespace. This label is used to organize your apps. Click Save. On the Flow Designer page, to do a one-time bulk initial load of data, from the Sources pane, drag Database to the flow design palette in the center of the screen and enter the following connection properties: In the Name field, enter mysql_source. Leave the Adapter field at the default value of DatabaseReader. In the Connection URL field, enter jdbc:mysql://PRIMARY_ADDRESS:3306/striimdemo. Replace PRIMARY_ADDRESS with the IP address of the Cloud SQL instance that you created in the previous section. In the Username field, enter the username that you set as the CSQL_USER environment variable, striim-user. In the Password field, enter the CSQL_USER_PWD value that you made a note of when you created a Cloud SQL for MySQL instance. To see more configuration properties, click Show optional properties. In the Tables field, enter striimdemo.ORDERS. For Output to, select New output. In the New output field, enter stream_CloudSQLMySQLInitLoad. Click Save. To test the configuration settings to make sure that Striim can successfully connect to Cloud SQL for MySQL, click Created, and then select Deploy App. 
In the Deployment window, you can specify that you want to run parts of your app on some of your deployment topology. For this tutorial, select Default, and click Deploy. To preview your data as it flows through the Striim pipeline, click waves mysql_source DataBase reader, and then click remove_red_eye Preview on run. Click Deployed, and then click Start App. The Striim app starts running, and data flows through the pipeline. If there are any errors, there is an issue connecting to the source database because there is only a source component in the pipeline. If you see your app successfully run, but no data flows through, typically that means that you don't have any data in your database. After you've successfully connected to your source database and tested that it can read data, click Running, and then select Stop App. Click Stopped, and then select Undeploy App. You are now ready to connect this flow to Spanner. Perform an initial load into Spanner In the Striim web UI, click waves mysql_source Database reader. Click add Connect to next component, select Connect next Target component, and then complete the following fields: In the Name field, enter tgt_online_spanner. In the Adapter field, enter SpannerWriter. In the Instance ID field, enter striim-spanner-demo. The Tables property is a source/target pair separated by commas. It is in the format of srcSchema1.srcTable1,tgtSchema1.tgtTable1;srcSchema2.srcTable2,tgtSchema2.tgtTable2. For this tutorial, enter striimdemo.ORDERS,striimdemo.orders. The Service Account Key requires a fully qualified path and name of the key file that was previously generated. For this tutorial, enter /opt/striim/striim-spanner-key.json. Click Save. To deploy the app and preview the data flow, do the following: Click Created, and then select Deploy App. In the Deployment window, select Default, and then click Deploy. To preview your data as it flows through the Striim pipeline, click waves mysql_source Database reader, and then click remove_red_eye Preview on run. Click Deployed, and then click Start App. In the Google Cloud console, go to the Spanner page. Go to Spanner Click the striimdemo database. In the query editor, enter SELECT * from orders LIMIT 100, and then click Run. The results table outputs the replicated data. You have successfully set up your Striim environment and pipeline to perform a batch load. Create a continuous data pipeline from Cloud SQL for MySQL to Spanner With an initial one-time bulk load in place, you can now set up a continuous replication pipeline. This pipeline is similar to the bulk pipeline that you just created, but with a different source object. Create a CDC source In the Striim web UI, click Home. Click add Apps. Click Start from Scratch. In the Name field, enter MySQLToCloudSpanner_cdc. In the Namespace drop-down menu, select Admin namespace. On the Flow Designer page, drag a MySQL CDC source reader to the center of the design palette. Configure your new MySQL CDC source with the following information: In the Name field, enter mysql_cdc_source. Leave the Adapter field at the default value of MysqlReader. In the Connection URL field, enter jdbc:mysql://PRIMARY_ADDRESS:3306/striimdemo. Enter the username and password that you used in the previous section. To see more configuration properties, click Show optional properties. In the Tables field, enter striimdemo.ORDERS. For Output to, select New output. In the New output field, enter tgt_MySQLCDCSpanner. Click Save. 
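MySQL CDC readers such as MysqlReader rely on the MySQL binary log, which you enabled with the --enable-bin-log flag when you created the source instance. If the CDC app fails to deploy or start, a quick check is to confirm that binary logging is still enabled. The following is a sketch that assumes the CSQL_NAME variable from earlier in this tutorial is still set in your Cloud Shell session:

# Prints True when binary logging is enabled on the source instance.
gcloud sql instances describe $CSQL_NAME \
    --format='value(settings.backupConfiguration.binaryLogEnabled)'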
Load new transactions into Spanner Click waves MysqlReader. Click add Connect to next component, and then select Connect next Target component. In the Name field, enter tgt_cdc_spanner. In the Adapter field, enter SpannerWriter. In the Instance ID field, enter striim-spanner-demo. The Tables property is a source/target pair separated by commas. It is in the format of srcSchema1.srcTable1,tgtSchema1.tgtTable1;srcSchema2.srcTable2,tgtSchema2.tgtTable2. For this tutorial, use striimdemo.ORDERS,striimdemo.orders. The Service Account Key requires a fully qualified path and name of the key file that was previously generated. For this tutorial, enter /opt/striim/striim-spanner-key.json. Click Save. To deploy the app and preview the data flow, do the following: Click Created, and then select Deploy App. In the Deployment window, select Default, and then click Deploy. To preview your data as it flows through the Striim pipeline, click waves MysqlReader, and then click remove_red_eye Preview on Run. Click Deployed, and then click Start App. In Cloud Shell, connect to your Cloud SQL for MySQL instance: gcloud sql connect $CSQL_NAME --user=$CSQL_USERNAME Tell MySQL to use this database where the ORDERS table lives: USE striimdemo; INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1011, 1568928576017, 'In-Store', 1011, 9, 13879.56, 320, 88252); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1012, 1568928636017, 'CompanyA', 1012, 1, 19729.99, 76, 95203); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1013, 1568928696017, 'In-Store', 1013, 5, 7286.68, 164, 45162); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1014, 1568928756017, 'Online', 1014, 1, 87268.61, 909, 70407); INSERT INTO ORDERS (ORDER_ID, ORDER_DATE, ORDER_MODE, CUSTOMER_ID, ORDER_STATUS, ORDER_TOTAL, SALES_REP_ID, PROMOTION_ID) VALUES (1015, 1568928816017, 'CompanyB', 1015, 1, 69744.13, 424, 79401); In the Striim web UI, on the Transactions view page, transactions now populate the page and show that data is flowing. In the Google Cloud console, go to the Spanner page. Go to Spanner To see that the data is successfully replicated to the target, click the striimdemo database, and then click the Orders table. Click the Data tab and you now see that these transactions have successfully replicated to the target. You have successfully set up a streaming pipeline from Cloud SQL for MySQL to Spanner. Clean up The easiest way to eliminate billing is to delete the Google Cloud project you created for the tutorial. Alternatively, you can delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. In the Google Cloud console, go to the Manage resources page. 
Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. What's next Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Take a look at Google Cloud data migration. If you want to learn more about Striim, visit the Striim website, schedule a demo with a Striim technologist, and subscribe to the Striim blog. Send feedback \ No newline at end of file diff --git a/Continuously_improve_and_innovate.txt b/Continuously_improve_and_innovate.txt new file mode 100644 index 0000000000000000000000000000000000000000..242a7ef4411d96ba2090b591e946fc51015a8ae1 --- /dev/null +++ b/Continuously_improve_and_innovate.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/operational-excellence/continuously-improve-and-innovate +Date Scraped: 2025-02-23T11:42:50.439Z + +Content: +Home Docs Cloud Architecture Center Send feedback Continuously improve and innovate Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This principle in the operational excellence pillar of the Google Cloud Architecture Framework provides recommendations to help you continuously optimize cloud operations and drive innovation. Principle overview To continuously improve and innovate in the cloud, you need to focus on continuous learning, experimentation, and adaptation. This helps you to explore new technologies and optimize existing processes and it promotes a culture of excellence that enables your organization to achieve and maintain industry leadership. Through continuous improvement and innovation, you can achieve the following goals: Accelerate innovation: Explore new technologies and services to enhance capabilities and drive differentiation. Reduce costs: Identify and eliminate inefficiencies through process-improvement initiatives. Enhance agility: Adapt rapidly to changing market demands and customer needs. Improve decision making: Gain valuable insights from data and analytics to make data-driven decisions. Organizations that embrace the continuous improvement and innovation principle can unlock the full potential of the cloud environment and achieve sustainable growth. This principle maps primarily to the Workforce focus area of operational readiness. A culture of innovation lets teams experiment with new tools and technologies to expand capabilities and reduce costs. Recommendations To continuously improve and innovate your cloud workloads, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Foster a culture of learning Encourage teams to experiment, share knowledge, and learn continuously. Adopt a blameless culture where failures are viewed as opportunities for growth and improvement. This recommendation is relevant to the workforce focus area of operational readiness. When you foster a culture of learning, teams can learn from mistakes and iterate quickly. This approach encourages team members to take risks, experiment with new ideas, and expand the boundaries of their work. It also creates a psychologically safe environment where individuals feel comfortable sharing failures and learning from them. Sharing in this way leads to a more open and collaborative environment. 
To facilitate knowledge sharing and continuous learning, create opportunities for teams to share knowledge and learn from each other. You can do this through informal and formal learning sessions and conferences. By fostering a culture of experimentation, knowledge sharing, and continuous learning, you can create an environment where teams are empowered to take risks, innovate, and grow. This environment can lead to increased productivity, improved problem-solving, and a more engaged and motivated workforce. Further, by promoting a blameless culture, you can create a safe space for employees to learn from mistakes and contribute to the collective knowledge of the team. This culture ultimately leads to a more resilient and adaptable workforce that is better equipped to handle challenges and drive success in the long run. Conduct regular retrospectives Retrospectives give teams an opportunity to reflect on their experiences, identify what went well, and identify what can be improved. By conducting retrospectives after projects or major incidents, teams can learn from successes and failures, and continuously improve their processes and practices. This recommendation is relevant to these focus areas of operational readiness: processes and governance. An effective way to structure a retrospective is to use the Start-Stop-Continue model: Start: In the Start phase of the retrospective, team members identify new practices, processes, and behaviors that they believe can enhance their work. They discuss why the changes are needed and how they can be implemented. Stop: In the Stop phase, team members identify and eliminate practices, processes, and behaviors that are no longer effective or that hinder progress. They discuss why these changes are necessary and how they can be implemented. Continue: In the Continue phase, team members identify practices, processes, and behaviors that work well and must be continued. They discuss why these elements are important and how they can be reinforced. By using a structured format like the Start-Stop-Continue model, teams can ensure that retrospectives are productive and focused. This model helps to facilitate discussion, identify the main takeaways, and identify actionable steps for future enhancements. Stay up-to-date with cloud technologies To maximize the potential of Google Cloud services, you must keep up with the latest advancements, features, and best practices. This recommendation is relevant to the workforce focus area of operational readiness. Participating in relevant conferences, webinars, and training sessions is a valuable way to expand your knowledge. These events provide opportunities to learn from Google Cloud experts, understand new capabilities, and engage with industry peers who might face similar challenges. By attending these sessions, you can gain insights into how to use new features effectively, optimize your cloud operations, and drive innovation within your organization. To ensure that your team members keep up with cloud technologies, encourage them to obtain certifications and attend training courses. Google Cloud offers a wide range of certifications that validate skills and knowledge in specific cloud domains. Earning these certifications demonstrates commitment to excellence and provides tangible evidence of proficiency in cloud technologies. The training courses that are offered by Google Cloud and our partners delve deeper into specific topics. 
They provide direct experience and practical skills that can be immediately applied to real-world projects. By investing in the professional development of your team, you can foster a culture of continuous learning and ensure that everyone has the necessary skills to succeed in the cloud. Actively seek and incorporate feedback Collect feedback from users, stakeholders, and team members. Use the feedback to identify opportunities to improve your cloud solutions. This recommendation is relevant to the workforce focus area of operational readiness. The feedback that you collect can help you to understand the evolving needs, issues, and expectations of the users of your solutions. This feedback serves as a valuable input to drive improvements and prioritize future enhancements. You can use various mechanisms to collect feedback: Surveys are an effective way to gather quantitative data from a large number of users and stakeholders. User interviews provide an opportunity for in-depth qualitative data collection. Interviews let you understand the specific challenges and experiences of individual users. Feedback forms that are placed within the cloud solutions offer a convenient way for users to provide immediate feedback on their experience. Regular meetings with team members can facilitate the collection of feedback on technical aspects and implementation challenges. The feedback that you collect through these mechanisms must be analyzed and synthesized to identify common themes and patterns. This analysis can help you prioritize future enhancements based on the impact and feasibility of the suggested improvements. By addressing the needs and issues that are identified through feedback, you can ensure that your cloud solutions continue to meet the evolving requirements of your users and stakeholders. Measure and track progress Key performance indicators (KPIs) and metrics are crucial for tracking progress and measuring the effectiveness of your cloud operations. KPIs are quantifiable measurements that reflect the overall performance. Metrics are specific data points that contribute to the calculation of KPIs. Review the metrics regularly and use them to identify opportunities for improvement and measure progress. Doing so helps you to continuously improve and optimize your cloud environment. This recommendation is relevant to these focus areas of operational readiness: governance and processes. A primary benefit of using KPIs and metrics is that they enable your organization to adopt a data-driven approach to cloud operations. By tracking and analyzing operational data, you can make informed decisions about how to improve the cloud environment. This data-driven approach helps you to identify trends, patterns, and anomalies that might not be visible without the use of systematic metrics. To collect and analyze operational data, you can use tools like Cloud Monitoring and BigQuery. Cloud Monitoring enables real-time monitoring of cloud resources and services. BigQuery lets you store and analyze the data that you gather through monitoring. Using these tools together, you can create custom dashboards to visualize important metrics and trends. Operational dashboards can provide a centralized view of the most important metrics, which lets you quickly identify any areas that need attention. For example, a dashboard might include metrics like CPU utilization, memory usage, network traffic, and latency for a particular application or service. 
By monitoring these metrics, you can quickly identify any potential issues and take steps to resolve them. Previous arrow_back Automate and manage change Send feedback \ No newline at end of file diff --git a/Continuously_monitor_and_improve_performance.txt b/Continuously_monitor_and_improve_performance.txt new file mode 100644 index 0000000000000000000000000000000000000000..f4bf11b4427cceb4c18d63405d3be81ecdd778fb --- /dev/null +++ b/Continuously_monitor_and_improve_performance.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/performance-optimization/continuously-monitor-and-improve-performance +Date Scraped: 2025-02-23T11:44:12.910Z + +Content: +Home Docs Cloud Architecture Center Send feedback Continuously monitor and improve performance Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-06 UTC This principle in the performance optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you continuously monitor and improve performance. After you deploy applications, continuously monitor their performance by using logs, tracing, metrics, and alerts. As your applications grow and evolve, you can use the trends in these data points to re-assess your performance requirements. You might eventually need to redesign parts of your applications to maintain or improve their performance. Principle overview The process of continuous performance improvement requires robust monitoring tools and strategies. Cloud observability tools can help you to collect key performance indicators (KPIs) such as latency, throughput, error rates, and resource utilization. Cloud environments offer a variety of methods to conduct granular performance assessments across the application, the network, and the end-user experience. Improving performance is an ongoing effort that requires a multi-faceted approach. The following key mechanisms and processes can help you to boost performance: To provide clear direction and help track progress, define performance objectives that align with your business goals. Set SMART goals: specific, measurable, achievable, relevant, and time-bound. To measure performance and identify areas for improvement, gather KPI metrics. To continuously monitor your systems for issues, use visualized workflows in monitoring tools. Use architecture process mapping techniques to identify redundancies and inefficiencies. To create a culture of ongoing improvement, provide training and programs that support your employees' growth. To encourage proactive and continuous improvement, incentivize your employees and customers to provide ongoing feedback about your application's performance. Recommendations To promote modular designs, consider the recommendations in the following sections. Define clear performance goals and metrics Define clear performance objectives that align with your business goals. This requires a deep understanding of your application's architecture and the performance requirements of each application component. As a priority, optimize the most critical components that directly influence your core business functions and user experience. To help ensure that these components continue to run efficiently and meet your business needs, set specific and measurable performance targets. These targets can include response times, error rates, and resource utilization thresholds. 
This proactive approach can help you to identify and address potential bottlenecks, optimize resource allocation, and ultimately deliver a seamless and high-performing experience for your users. Monitor performance Continuously monitor your cloud systems for performance issues and set up alerts for any potential problems. Monitoring and alerts can help you to catch and fix issues before they affect users. Application profiling can help to identify bottlenecks and can help to optimize resource use. You can use tools that facilitate effective troubleshooting and network optimization. Use Google Cloud Observability to identify areas that have high CPU consumption, memory consumption, or network consumption. These capabilities can help developers improve efficiency, reduce costs, and enhance the user experience. Network Intelligence Center shows visualizations of the topology of your network infrastructure, and can help you to identify high-latency paths. Incentivize continuous improvement Create a culture of ongoing improvement that can benefit both the application and the user experience. Provide your employees with training and development opportunities that enhance their skills and knowledge in performance techniques across cloud services. Establish a community of practice (CoP) and offer mentorship and coaching programs to support employee growth. To prevent reactive performance management and encourage proactive performance management, encourage ongoing feedback from your employees, your customers, and your stakeholders. You can consider gamifying the process by tracking KPIs on performance and presenting those metrics to teams on a frequent basis in the form of a league table. To understand your performance and user happiness over time, we recommend that you measure user feedback quantitatively and qualitatively. The HEART framework can help you capture user feedback across five categories: Happiness Engagement Adoption Retention Task success By using such a framework, you can incentivize engineers with data-driven feedback, user-centered metrics, actionable insights, and a clear understanding of goals. Previous arrow_back Promote modular design Send feedback \ No newline at end of file diff --git a/Controls_to_restrict_access_to_individually_approved_APIs.txt b/Controls_to_restrict_access_to_individually_approved_APIs.txt new file mode 100644 index 0000000000000000000000000000000000000000..152c4ffb450487f591b60b4e8e10b32b585de53b --- /dev/null +++ b/Controls_to_restrict_access_to_individually_approved_APIs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/network-controls-limit-access-individually-approved-apis +Date Scraped: 2025-02-23T11:53:57.122Z + +Content: +Home Docs Cloud Architecture Center Send feedback Controls to restrict access to individually approved APIs Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-06 UTC Many organizations have a compliance requirement to restrict network access to an explicitly approved list of APIs, based on internal requirements or as part of adopting Assured Workloads. On-premises, this requirement is often addressed with proxy controls, but in your Google Virtual Private Cloud (VPC) you can address this requirement with the Restrict Resource Service Usage Organization Policy instead. 
This policy allows an administrator to define which Google Cloud resources can be created within their resource hierarchy, but to use this Organization Policy effectively, you must align various networking controls in your environment. This document describes how to restrict access to individually approved Google APIs using the Organization Policy Service and other network controls, as well as the challenges with applying the on-premises proxy-based approach to Google Cloud services. This document is intended for network administrators or security teams who want to restrict which Google Cloud APIs can be reached from their VPC network endpoints. Challenges with proxies for access control to Google APIs In an on-premises network, your enterprise might have a compliance requirement to allow egress traffic only to approved services and domains. This requirement can be enforced by filtering egress traffic through a web proxy or secure access gateway. This proxy intercepts all outgoing traffic and allows egress only to explicitly approved APIs. In some enterprises, you might similarly have a compliance requirement to restrict access from your VPC network to approved Google Cloud APIs. This compliance control is often seen in the following scenarios: An enterprise is adopting Assured Workloads for sensitive workloads and compliance controls. An enterprise has internal compliance requirements that network endpoints on Google Cloud are only allowed to access Google Cloud APIs approved through an internal process. An enterprise wants to migrate infrastructure-as-a-service (IaaS) workloads to Google Cloud with minimal refactoring. An enterprise has not yet developed controls for cloud and prefers to extend existing controls from the on-premises environment. Although your enterprise might use a web proxy to control egress from its on-premises network to web services, we don't recommend this approach for controlling access from your VPC network to Google Cloud APIs. Using this proxy approach introduces scalability concerns, creates a single point of failure, and does not address data exfiltration risks using Google Cloud APIs. We recommend using the Restrict Resource Service Usage Organization Policy instead of proxies to selectively allow access to individual Google Cloud APIs. The challenges related to building and maintaining a web proxy for access control to individual Google APIs are discussed in the following sections. Shared IP address ranges used by multiple Google APIs You can't control access to an individual Google API by a proxy or firewall rule that filters to a single IP address. Google uses a dynamic range of IP addresses for default domains. Within these IP addresses, there is not a one-to-one relationship between a dedicated IP address and a specific API. Shared domains used by Google APIs For some Google APIs, you can't control network access by filtering traffic on domains. Most Google APIs are reachable on endpoints which differentiate specific APIs by path and begin with a URI starting with www.googleapis.com. Certain Google APIs also use an endpoint with a dedicated subdomain. For example, the Cloud Storage API reference documents URIs in relation to the storage.googleapis.com/storage/v1 endpoint, but you could also use a URI starting with www.googleapis.com/storage/v1 to call the same API methods. When you use multiple APIs that only have endpoints on the domain www.googleapis.com, the egress proxy cannot distinguish between APIs based solely on the domain. 
For example, some Google Cloud APIs like Deployment Manager, and other Google APIs, like Tag Manager or Google Play Games, are only accessible on the www.googleapis.com domain. Additionally, all Google Cloud APIs use TLS encryption by default. If you want to allow one of these APIs but block the others, your proxy would have to decrypt the request to filter on the URI path, increasing complexity. Bottlenecks caused by proxies If all the traffic from your VPC network to Google APIs must go through an egress proxy, the proxy could become a bottleneck. If you do use an egress proxy for traffic from your VPC network to Google APIs, we recommend that you build the proxy for high availability to avoid service disruption. Maintaining and scaling the proxy can become complex because as your organization grows, the proxy could introduce a single point of failure, latency, and reduced throughput. There might be a particular impact on operations that transfer large volumes of data. Service-to-service exfiltration risk A web proxy can restrict whether a Google API is reachable from your VPC network, but does not address other exfiltration paths that use the Google API. For example, an employee within your enterprise might have legitimate IAM privileges to read internal Cloud Storage buckets. With this privilege, they could read internal data and then copy that data to a public bucket. The egress proxy cannot restrict API-to-API traffic that does not originate in your VPC, or control how internet traffic reaches the public endpoints for Google Cloud APIs. For sensitive data, a VPC Service Controls perimeter helps to mitigate this type of exfiltration. Enforcing a VPC Service Controls perimeter helps to protect resources inside the perimeter from misconfigured IAM policies, exfiltration, and compromised credentials. Configure network controls to restrict unapproved services When using the Restrict Resource Service Usage Organization Policy, to effectively restrict access to services, you must consider how your VPC network restricts egress traffic and exfiltration paths. The following sections describe best practices for network design to use the Restrict Resource Service Usage Organization Policy effectively. Configure VPC Service Controls When you're using the Restrict Resource Service Usage Organization Policy, we recommend that you also configure VPC Service Controls. By applying the Organization Policy in a project, you restrict which services can be used in that project, but the Organization Policy does not prevent services in this project from communicating with services in other projects. In comparison, VPC Service Controls lets you define a perimeter to prevent communication with services outside the perimeter. For example, if you define an Organization Policy to allow Compute Engine and deny Cloud Storage in your project, a VM in this project couldn't create or communicate with a Cloud Storage bucket in your project. However, the VM can make requests to a Cloud Storage bucket in another project, so exfiltration with the Cloud Storage service is still possible. To block communication with Cloud Storage or other Google services outside the perimeter, you must configure a VPC Service Controls perimeter. Use these controls together to selectively allow approved services in your environment and block a range of exfiltration paths to unapproved services. 
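To make this combination concrete, the following is a minimal sketch of how you might apply a Restrict Resource Service Usage policy that allows only Compute Engine and Cloud Storage in a project by using the gcloud CLI. The project ID, the file name, and the list of allowed services are illustrative placeholders; confirm the exact constraint value format against the Organization Policy documentation before you use it:

# Write an Organization Policy file that allows only the listed services in the project.
cat > restrict-service-usage.yaml <<'EOF'
name: projects/PROJECT_ID/policies/gcp.restrictServiceUsage
spec:
  rules:
  - values:
      allowedValues:
      - compute.googleapis.com
      - storage.googleapis.com
EOF

# Apply the policy (requires the Organization Policy Administrator role).
gcloud org-policies set-policy restrict-service-usage.yaml

A policy like this restricts which services can be created and used inside the project, while the VPC Service Controls perimeter restricts communication with services outside the perimeter.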
Remove paths to the internet If resources in your VPC network can communicate directly to the internet, there could be an alternative path to unapproved Google APIs and other services you want to block. Therefore, we recommend that you only use internal IP addresses on your VMs and don’t allow egress routes through a NAT or proxy solution. The Restrict Resource Service Usage Organization Policy does not mitigate exfiltration paths to the public internet, so unapproved services could still be accessed indirectly from a server outside of your environment. Configure a private endpoint for API access To control API endpoints in your network, we recommend that you access Google APIs using Private Service Connect. When you configure Private Google Access to allow VMs with only internal IP addresses to access Google APIs, this includes access to all Google APIs, including those not supported by VPC Service Controls or the Restrict Resource Service Usage Organization Policy. You can restrict Private Google Access to only supported APIs by additionally configuring Private Service Connect with the vpc-sc bundle. For example, enabling Private Google Access allows a private network path to all Google APIs, such as Google Drive or Google Maps Platform. An employee might copy data to their personal Google Drive from a Compute Engine instance using the Google Drive API. You can prevent this exfiltration path by configuring Private Service Connect with the vpc-sc bundle to provide access to the same set of services supported by the restricted virtual IP (VIP) at the restricted.googleapis.com endpoint. In comparison, a broader set of Google APIs can be reached using Private Google Access when you use the Google default domains, a Private Service Connect endpoint configured with the all-apis bundle, or the private VIP (private.googleapis.com). Alternatively, you can configure routes to restricted.googleapis.com. You might prefer to use the restricted VIP if you don't want to create a Private Service Connect endpoint for each region and each VPC network in your environment. However, we recommend the Private Service Connect approach because you can create an endpoint internal to your VPC network, compared to the restricted VIP approach, which requires a route to a publicly announced endpoint. What's next Read more about Restrict Resource Service Usage Organization Policy. Understand which APIs are supported by the Restrict Resource Service Usage Organization Policy. Understand which APIs are supported on VPC Service Controls. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Conversational_Agents.txt b/Conversational_Agents.txt new file mode 100644 index 0000000000000000000000000000000000000000..950da423366efeb049189b060fb86fbb07b80df5 --- /dev/null +++ b/Conversational_Agents.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/conversational-agents +Date Scraped: 2025-02-23T12:02:05.920Z + +Content: +Jump to Conversational Agents and DialogflowConversational Agents and DialogflowBuild hybrid conversational agents with both deterministic and generative AI functionality. This allows you to have strict controls and use generative AI to better meet customer needs.Dialogflow CX is generally available (GA) and Conversational Agents is currently in Preview. 
Go to consoleContact salesSupport rich, intuitive customer conversations, powered by Google's leading deterministic and generative AIOne comprehensive development platform for text and voice-based virtual agentsJoin a community of over 1.5 million developers building with Dialogflow and Conversational Agents Create agents with a few clicks using Vertex AI Agent Builder and the Conversational Agents UIPart of Customer Engagement Suite with Google AIDialogflow CX Introduction1:30BenefitsBoost your customer and agent experiencesPredictability and control with no / low codeOut of the box quality for customer engagementCustomer engagement integrationsShort time to productionKey featuresProvides intelligent customer experience across channels and devicesManage multiple channels, large volume of interactions, complexity of interactions and agent workforce challenges all in a single platformGenerative AI agents and Visual flow builderCreate virtual agents with just a few clicks. Connect your webpage or documents to your virtual agent and leverage foundation models for generating responses from the content, out of the box. You can also call a foundation model to perform specific tasks during a virtual agent conversation or respond to a query contextually, significantly reducing development effort and making virtual agents more conversational.Reduce development time with interactive flow visualizations that allow builders to quickly see, understand, edit, and share their work. It also allows for easy collaboration across teams.Customer steering and understandingAutomatically direct customer interactions to the appropriate specialist / pre-built flow at the right time based on the customer’s intent and conversation data. Building this experience can be low effort and done out of the box, and routing based on intent rather than static rules can help speed up the problem resolution process. Multimodal conversations and multi-lingual self serviceCall companion provides an interactive visual interface on a user’s phone during voice session. Users see options quickly while the conversational agent speaks with them, and they can share input via text, images, and visual elements such as clickable cards to support the conversation, all designed to help improve self-service abandonment rates, time to resolution, and cost per interaction.Improve abandonment rates, average speed to answer, and CSAT with Google’s advanced translation AI. This allows you to engage with customers in their native language and better empathize with them. Our Conversational Agents solution supports 100+ languages and can be deployed to your self-service agents.Omnichannel implementationBuild once, deploy everywhere—in your contact centers and digital channels. Seamlessly integrate your agents across platforms, including web, mobile, and messenger, with Google Cloud’s CCaaS, and with telephony partners, such as Avaya, Cisco, Genesys, and NICE.State-based data models and end-to-end managementReuse intents, intuitively define transitions and data conditions, and handle supplemental questions—allowing customers to deviate from the main topic, then gracefully return to the main flow. Add generators and generative fall-backs to increase the content coverage and handle in-flow “chit-chat” without expensive, complex training. 
Take care of all your agent management needs, including CI/CD, analytics, experiments, and bot evaluation inside Dialogflow—you don't need any other custom software.BLOGRespond faster and more accurately with Dialogflow CXCustomersSee how organizations are transforming their customer interactions.Blog postCommerzbank has Reimagined the Customer Experience with Google Contact Center AI2-min readCase studyM&S: Calling on Google Cloud for personalized customer service that’s both digital and human3-min readCase studyMalaysia Airlines streamlines flight search, booking, and payment for its customers with a chatbot.5-min readCase studyDPD UK uses Dialogflow to resolve over 32% of customer queries on its parcel tracking app.5-min readCase studyDomino's simplifies ordering pizza using Google Cloud's conversational technology.2-min readBlog postKLM builds booking and packing bot, BB, for messaging with Dialogflow.5-min readSee all customersDocumentationTechnical resourcesGoogle Cloud BasicsConversational Agents and Dialogflow CX basicsReview the basics of using Dialogflow CX and an overview of the most important concepts.Learn moreQuickstartConversational Agents and Dialogflow CX quickstartsLearn how to get started with Dialogflow CX from setting up, to building using the console, and to interacting with an agent using the API.Learn moreGoogle Cloud BasicsDialogflow ES basicsLearn the basics of using Dialogflow ES with an overview of the most important concepts.Learn moreQuickstartDialogflow ES quickstartsSee how you can get started and running with Dialogflow ES.Learn moreQuickstartGet started with Vertex AI Agent BuilderLearn how to create and use your first generative AI agent using Vertex AI Agent Builder.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for DialogflowUse casesUse casesUse caseVoicebots for customer serviceGive customers 24/7 access to immediate conversational self-service, with seamless handoffs to human agents for more complex issues by building virtual agents and interactive voice response (IVR) that can perform tasks, such as scheduling appointments, answering common questions, or assisting a customer with simple requests. Use caseChatbots for B2C conversationsConnect with your customers on their preferred platform, at any time, from anywhere in the world. Whether your customers want to ask common questions or access specific information, text virtual agents offer an instant and satisfying experience for customers who want quick and accurate responses.View all technical guidesCompare featuresDialogflow EditionsDialogflow is available in two editions. 
The agent type, features, pricing, and quotas vary for each edition.FeaturesConversational Agents and Dialogflow CXAdvanced agents with most innovative capabilities for large or complex use.Go to consoleView documentationDialogflow ES (Essentials)Standard agents for small to medium and simple to moderately complex use.Go to consoleView documentationMultilingual support: engage with your global user/customer base30+ languages and variants supported30+ languages and variants supportedAnalytics: gain insights into agent performance and customer engagementsAdvanced performance dashboards Data export to custom dashboardsState-based visualizationsPerformance dashboards Data export to custom dashboardsOmnichannel integration: build once, deploy across your contact center and digital channels Integration across digital channels, including web, mobile, messenger, and othersAdvanced one-click telephony integrationIntegration into popular channels, such as Google Assistant, Slack, Twitter, and othersOne-click telephony integrationPrebuilt agents: accelerate time to production with a library of agents prepared for common use cases 9 production-ready agents for use cases across industries: telco, retail, financial services, travel, and more40+ template agents for building conversations for dining out, hotel booking, navigation, IoT, and moreAdvanced AI: take advantage of best machine learning models developed by Google ResearchState-of-the-art BERT-based natural language understanding (NLU) modelsCutting-edge speech recognition and speech synthesis models Standard high-quality natural language understanding (NLU) modelsCutting-edge speech recognition and speech synthesis modelsVisual flow builder: quickly see, understand, edit, and share work with interactive flow visualizationsReduce development time by 30% with an intuitive visual builder for visual state machineForm-based bot builderState-based models: switch between topics and manage complex flows with easeReuse intents and intuitively define transitions and data conditionsFlat data model for simple use casesSupplemental questions: handle deviations in conversations, then gracefully return to the main flowModels capable of easily defining and detecting minor conversation detoursNot supportedNative IVR settings: optimize for Contact Center AI deploymentsSupports settings such as DTMF, live agent handoff, barge-in, speech timeoutsNot supportedFlow-based modules: manage your agents easily and work on independent flows simultaneouslySupports up to 20 independent conversation flows with 40,000 intentsShared intents and training phrases across flowsSupports up to 10 sub-agents with as many as 20,000 intentsIntents and training phrases not shared across sub-agentsTesting: evaluate the quality of your agents to uncover bugs and prevent regressionsAdvanced multi-turn simulatorCreate and manage test cases for continuous evaluationStandard simulatorTest cases not supportedEnd-to-end management: take care of all your agent management needs inside DialogflowFlow-level versions and environments support for testing and deploymentExperiments and traffic-splitting natively supportedBasic versions/environments supportExperiments and virtual agent evaluation not supportedSpeaker ID: leverage biometric voice identification to identify and verify usersQuickly identify users with just their voiceIncrease security with an extra layer of verificationReduce user frustration by eliminating pins and passcodesNot supportedConversational Agents and Dialogflow CXAdvanced agents 
with most innovative capabilities for large or complex use.Go to consoleView documentationMultilingual support: engage with your global user/customer base30+ languages and variants supportedAnalytics: gain insights into agent performance and customer engagementsAdvanced performance dashboards Data export to custom dashboardsState-based visualizationsOmnichannel integration: build once, deploy across your contact center and digital channels Integration across digital channels, including web, mobile, messenger, and othersAdvanced one-click telephony integrationPrebuilt agents: accelerate time to production with a library of agents prepared for common use cases 9 production-ready agents for use cases across industries: telco, retail, financial services, travel, and moreAdvanced AI: take advantage of best machine learning models developed by Google ResearchState-of-the-art BERT-based natural language understanding (NLU) modelsCutting-edge speech recognition and speech synthesis models Visual flow builder: quickly see, understand, edit, and share work with interactive flow visualizationsReduce development time by 30% with an intuitive visual builder for visual state machineState-based models: switch between topics and manage complex flows with easeReuse intents and intuitively define transitions and data conditionsSupplemental questions: handle deviations in conversations, then gracefully return to the main flowModels capable of easily defining and detecting minor conversation detoursNative IVR settings: optimize for Contact Center AI deploymentsSupports settings such as DTMF, live agent handoff, barge-in, speech timeoutsFlow-based modules: manage your agents easily and work on independent flows simultaneouslySupports up to 20 independent conversation flows with 40,000 intentsShared intents and training phrases across flowsTesting: evaluate the quality of your agents to uncover bugs and prevent regressionsAdvanced multi-turn simulatorCreate and manage test cases for continuous evaluationEnd-to-end management: take care of all your agent management needs inside DialogflowFlow-level versions and environments support for testing and deploymentExperiments and traffic-splitting natively supportedSpeaker ID: leverage biometric voice identification to identify and verify usersQuickly identify users with just their voiceIncrease security with an extra layer of verificationReduce user frustration by eliminating pins and passcodesDialogflow ES (Essentials)Standard agents for small to medium and simple to moderately complex use.Go to consoleView documentationMultilingual support: engage with your global user/customer base30+ languages and variants supportedAnalytics: gain insights into agent performance and customer engagementsPerformance dashboards Data export to custom dashboardsOmnichannel integration: build once, deploy across your contact center and digital channels Integration into popular channels, such as Google Assistant, Slack, Twitter, and othersOne-click telephony integrationPrebuilt agents: accelerate time to production with a library of agents prepared for common use cases 40+ template agents for building conversations for dining out, hotel booking, navigation, IoT, and moreAdvanced AI: take advantage of best machine learning models developed by Google ResearchStandard high-quality natural language understanding (NLU) modelsCutting-edge speech recognition and speech synthesis modelsVisual flow builder: quickly see, understand, edit, and share work with interactive flow 
visualizationsForm-based bot builderState-based models: switch between topics and manage complex flows with easeFlat data model for simple use casesSupplemental questions: handle deviations in conversations, then gracefully return to the main flowNot supportedNative IVR settings: optimize for Contact Center AI deploymentsNot supportedFlow-based modules: manage your agents easily and work on independent flows simultaneouslySupports up to 10 sub-agents with as many as 20,000 intentsIntents and training phrases not shared across sub-agentsTesting: evaluate the quality of your agents to uncover bugs and prevent regressionsStandard simulatorTest cases not supportedEnd-to-end management: take care of all your agent management needs inside DialogflowBasic versions/environments supportExperiments and virtual agent evaluation not supportedSpeaker ID: leverage biometric voice identification to identify and verify usersNot supportedPricingPricingConversational Agents and Dialogflow are priced monthly based on the edition and the number of requests made during the month.New customers receive a $600 credit for a $0 trial of Dialogflow CX. This credit is automatically activated upon using Dialogflow CX for the first time and expires after 12 months. This is a Dialogflow-specific extension of the Google Cloud $0 trial.View pricing detailsPartnersIntegrate Conversational Agents and Dialogflow into your contact center and channels with Google Cloud CCaaS or our trusted partners.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Get startedNeed help getting started?Contact salesGet tips & best practicesSee tutorialsExplore the MarketplaceFind solutions \ No newline at end of file diff --git a/Cost_Management.txt b/Cost_Management.txt new file mode 100644 index 0000000000000000000000000000000000000000..301aeb5b137eaf436918740f4cff45e138945f39 --- /dev/null +++ b/Cost_Management.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/cost-management +Date Scraped: 2025-02-23T12:05:48.175Z + +Content: +Reduce unexpected costs with AI-powered Cost Anomaly Detection. 
Learn moreJump to Cost ManagementCost ManagementTools for monitoring, controlling, and optimizing your costs.Go to consoleView documentationGain visibility into your current cost trends and forecastsDrive clear accountability for costs across your organizationControl your costs with strong financial governance policies and permissionsOptimize your cloud costs and usage with intelligent recommendations1:19Gain greater visibility and control with Google Cloud’s cost management tools.BenefitsOrganize resources and understand your costsDrive accountability for costs across your organization, and better understand your return on cloud investments with flexible options for organizing resources and allocating costs to departments and teams.Take control of your costsControl your costs and reduce the risk of overspending with strong financial governance policies and permissions that make it easy to control who can do the spending and view costs across your organization.Optimize spending and savingsStay smart about your spending with intelligent recommendations tailored to your business that help optimize usage, save time on management, and minimize costs.Key featuresA path to more predictable cloud costsResource hierarchy and access controlsStructure and organize your resource hierarchy for fine-grained management and cost allocation using organizations, folders, projects, and labels. Enforce organizational policies with granular permissions at different levels in the resource hierarchy to control who can spend and who has administrative and cost-viewing permissions.Reports, dashboards, budgets, and alertsGet at-a-glance views of your current cost trends and forecasts with intuitive reports in the Google Cloud console and set budgets to closely monitor your costs and alert stakeholders through email or Pub/Sub when exceeding defined budget thresholds. Create custom dashboards for your teams with Looker Studio.RecommendationsView intelligent recommendations for optimizing your costs and usage. Easily apply these changes for immediate cost savings and greater efficiency.Design a Cloud FinOps operating model with Google Cloud expertsGoogle’s Financial Operations team will work with customers to create the structure of their FinOps team and outline the roles and responsibilities for each member. We work with IT, finance, and business areas to enable a cost-conscious culture across your organization. Google’s FinOps experts will co-create a RACI model and the key process flows necessary to create and operationalize your FinOps team for success. 
Contact sales to get started or learn more about our entire consulting portfolio.View all featuresNewsA guide to financial governance in the cloudCustomersBusinesses are doing more with less using Cost Management toolsLearn about Google Cloud customers who are optimizing costs with our tools, products, and solutions.Case studyBlue-sky thinking: how Sky is reimagining their FinOps journey5-min readCase studyEtsy: Doing more with less cost and infrastructure5-min readCase studyVendasta: Reducing costs and improving quality with cloud cost management5-min readVideoLearn how OpenX piloted our Google Cloud Cost Optimization Program11-min watchSee all customersWhat's newStay up to dateSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postUnlocking cloud cost optimization: A guide to Google FinOps ResourcesRead the blogBlog postWhen they go closed, we go open–Google Cloud and open billing dataLearn moreBlog postCost optimization on Google Cloud for developers and operatorsRead the blogReportUnderstanding the principles of cost optimizationRead reportVideoCost management in the CloudWatch videoBlog postIT Cost Assessment program: Unlock value to reinvest for growthRead the blogDocumentationExplore popular quickstarts, tutorials, and FinOps best practicesGoogle Cloud BasicsCloud Billing overviewLearn about Cloud Billing accounts and discover Cloud Billing tools that help you track and understand your Google Cloud spend, pay your bill, and optimize costs.Learn moreBest PracticeBest practices for cost optimizationDesign recommendations and best practices to help architects, developers, and administrators to optimize the cost of workloads in Google Cloud.Learn moreBest PracticeGoogle Cloud setup checklistThis checklist helps you set up Google Cloud for scalable, production-ready enterprise workloads.Learn moreGoogle Cloud BasicsSolving for operational efficiencyLearn to solve your toughest technology challenges, reduce IT spend, and prepare for whatever comes next.Learn moreTutorialCloud Billing interactive tutorialsLearn more about Cloud Billing and get hands-on with interactive tutorials about billing reports, exporting billing data, budgets, managing accounts, and more.Learn moreGoogle Cloud BasicsGet help for billing and paymentsLearn how to contact Cloud Billing support if you need help with your Cloud Billing account and where to get information about managing your account.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesAnnouncements about new or updated featuresAll featuresCost Management featuresReports and dashboardsGet at-a-glance views of your current cost trends and forecasts with intuitive reports in the Google Cloud console. 
Create custom dashboards for your teams with Looker Studio.Resource hierarchyStructure and organize your resource hierarchy for fine-grained management and cost allocation using organizations, folders, projects, and labels.Billing access controlEnforce organizational policies with granular permissions at different levels in the resource hierarchy to control who can spend and who has administrative and cost-viewing permissions.Budgets and alertsSet budgets to closely monitor your costs and alert stakeholders through email or Pub/Sub when exceeding defined budget thresholds.Automated budget actionsConfigure automated actions using programmatic budget notifications to throttle resources and cap costs to prevent unexpected activity from affecting your budgeted cloud spend.Billing exportExport detailed usage, cost, and pricing data automatically to BigQuery so that you can use Looker Studio or your preferred analytics tool for further cost analysis.RecommendationsView intelligent recommendations for optimizing your costs and usage. Easily apply these changes for immediate cost savings and greater efficiency.Billing APIsProgrammatically access and manage your billing accounts with Billing APIs.QuotasSet quota limits to proactively control your spend rate and prevent unforeseen spikes in usage.PricingCost Management pricingCost Management tools and 24/7 billing support are offered at no additional charge for Google Cloud customers. You will be charged only for use of Google Cloud services, such as BigQuery, Pub/Sub, Cloud Functions, and Cloud Storage. For information on the pricing of Google Cloud services, see the Google Cloud Pricing Calculator.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file 
diff --git a/Cost_optimization.txt b/Cost_optimization.txt new file mode 100644 index 0000000000000000000000000000000000000000..163238730b28362abd0f1dc8e9701197e11ecbfe --- /dev/null +++ b/Cost_optimization.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/perspectives/ai-ml/cost-optimization +Date Scraped: 2025-02-23T11:44:25.553Z + +Content: +Home Docs Cloud Architecture Center Send feedback AI and ML perspective: Cost optimization Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC This document in Architecture Framework: AI and ML perspective provides an overview of principles and recommendations to optimize the cost of your AI systems throughout the ML lifecycle. By adopting a proactive and informed cost management approach, your organization can realize the full potential of AI and ML systems and also maintain financial discipline. The recommendations in this document align with the cost optimization pillar of the Architecture Framework. AI and ML systems can help you to unlock valuable insights and predictive capabilities from data. For example, you can reduce friction in internal processes, improve user experiences, and gain deeper customer insights. The cloud offers vast amounts of resources and quick time-to-value without large up-front investments for AI and ML workloads. 
To maximize business value and to align the spending with your business goals, you need to understand the cost drivers, proactively optimize costs, set up spending controls, and adopt FinOps practices. Define and measure costs and returns To effectively manage your AI and ML costs in Google Cloud, you must define and measure the expenses for cloud resources and the business value of your AI and ML initiatives. Google Cloud provides comprehensive tools for billing and cost management to help you to track expenses granularly. Business value metrics that you can measure include customer satisfaction, revenue, and operational costs. By establishing concrete metrics for both costs and business value, you can make informed decisions about resource allocation and optimization. Consider the following recommendations: Establish clear business objectives and key performance indicators (KPIs) for your AI and ML projects. Use the billing information provided by Google Cloud to implement cost monitoring and reporting processes that can help you to attribute costs to specific AI and ML activities. Establish dashboards, alerting, and reporting systems to track costs and returns against KPIs. Optimize resource allocation To achieve cost efficiency for your AI and ML workloads in Google Cloud, you must optimize resource allocation. By carefully aligning resource allocation with the needs of your workloads, you can avoid unnecessary expenses and ensure that your AI and ML systems have the resources that they need to perform optimally. Consider the following recommendations: Use autoscaling to dynamically adjust resources for training and inference. Start with small models and data. Save costs by testing hypotheses at a smaller scale when possible. Discover your compute needs through experimentation. Rightsize the resources that are used for training and serving based on your ML requirements. Adopt MLOps practices to reduce duplication, manual processes, and inefficient resource allocation. Enforce data management and governance practices Effective data management and governance practices play a critical role in cost optimization. Well-organized data helps your organization to avoid needless duplication, reduces the effort required to obtain high quality data, and encourages teams to reuse datasets. By proactively managing data, you can reduce storage costs, enhance data quality, and ensure that your ML models are trained and operate on the most relevant and valuable data. Consider the following recommendations: Establish and adopt a well-defined data governance framework. Apply labels and relevant metadata to datasets at the point of data ingestion. Ensure that datasets are discoverable and accessible across the organization. Make your datasets and features reusable throughout the ML lifecycle wherever possible. Automate and streamline with MLOps A primary benefit of adopting MLOps practices is a reduction in costs, both from a technology perspective and in terms of personnel activities. Automation helps you to avoid duplication of ML activities and improve the productivity of data scientists and ML engineers. Consider the following recommendations: Increase the level of automation and standardization in your data collection and processing technologies to reduce development effort and time. Develop automated training pipelines to reduce the need for manual interventions and increase engineer productivity. 
Implement mechanisms for the pipelines to reuse existing assets like prepared datasets and trained models. Use the model evaluation and tuning services in Google Cloud to increase model performance with fewer iterations. This enables your AI and ML teams to achieve more objectives in less time. Use managed services and pre-trained or existing models There are many approaches to achieving business goals by using AI and ML. Adopt an incremental approach to model selection and model development. This helps you to avoid excessive costs that are associated with starting fresh every time. To control costs, start with a simple approach: use ML frameworks, managed services, and pre-trained models. Consider the following recommendations: Enable exploratory and quick ML experiments by using notebook environments. Use existing and pre-trained models as a starting point to accelerate your model selection and development process. Use managed services to train or serve your models. Both AutoML and managed custom model training services can help to reduce the cost of model training. Managed services can also help to reduce the cost of your model-serving infrastructure. Foster a culture of cost awareness and continuous optimization Cultivate a collaborative environment that encourages communication and regular reviews. This approach helps teams to identify and implement cost-saving opportunities throughout the ML lifecycle. Consider the following recommendations: Adopt FinOps principles across your ML lifecycle. Ensure that all costs and business benefits of AI and ML projects have assigned owners with clear accountability. ContributorsAuthors: Isaac Lo | AI Business Development ManagerFilipe Gracio, PhD | Customer EngineerOther contributors: Kumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization Specialist Previous arrow_back Reliability Next Performance optimization arrow_forward Send feedback \ No newline at end of file diff --git a/Cost_optimization_framework.txt b/Cost_optimization_framework.txt new file mode 100644 index 0000000000000000000000000000000000000000..1b04abb3c87cf36cfc0db1c3f60d99df877a66db --- /dev/null +++ b/Cost_optimization_framework.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/cost-optimization +Date Scraped: 2025-02-23T12:10:41.138Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud Architecture Framework: Cost optimization Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC The cost optimization pillar in the Google Cloud Architecture Framework describes principles and recommendations to optimize the cost of your workloads in Google Cloud. The intended audience includes the following: CTOs, CIOs, CFOs, and other executives who are responsible for strategic cost management. Architects, developers, administrators, and operators who make decisions that affect cost at all the stages of an organization's cloud journey. The cost models for on-premises and cloud workloads differ significantly. On-premises IT costs include capital expenditure (CapEx) and operational expenditure (OpEx). On-premises hardware and software assets are acquired and the acquisition costs are depreciated over the operating life of the assets. In the cloud, the costs for most cloud resources are treated as OpEx, where costs are incurred when the cloud resources are consumed. 
This fundamental difference underscores the importance of the following core principles of cost optimization. Note: You might be able to classify the cost of some Google Cloud services (like Compute Engine sole-tenant nodes) as capital expenditure. For more information, see Sole-tenancy accounting FAQ. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Core principles The recommendations in the cost optimization pillar of the Architecture Framework are mapped to the following core principles: Align cloud spending with business value: Ensure that your cloud resources deliver measurable business value by aligning IT spending with business objectives. Foster a culture of cost awareness: Ensure that people across your organization consider the cost impact of their decisions and activities, and ensure that they have access to the cost information required to make informed decisions. Optimize resource usage: Provision only the resources that you need, and pay only for the resources that you consume. Optimize continuously: Continuously monitor your cloud resource usage and costs, and proactively make adjustments as needed to optimize your spending. This approach involves identifying and addressing potential cost inefficiencies before they become significant problems. These principles are closely aligned with the core tenets of cloud FinOps. FinOps is relevant to any organization, regardless of its size or maturity in the cloud. By adopting these principles and following the related recommendations, you can control and optimize costs throughout your journey in the cloud. ContributorsAuthor: Nicolas Pintaux | Customer Engineer, Application Modernization SpecialistOther contributors: Anuradha Bajpai | Solutions ArchitectDaniel Lees | Cloud Security ArchitectEric Lam | Head of Google Cloud FinOpsFernando Rubbo | Cloud Solutions ArchitectFilipe Gracio, PhD | Customer EngineerGary Harmson | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKent Hua | Solutions ManagerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerRadhika Kanakam | Senior Program Manager, Cloud GTMSteve McGhee | Reliability AdvocateSergei Lilichenko | Solutions ArchitectWade Holmes | Global Solutions DirectorZach Seils | Networking Specialist Next Align spending with business value arrow_forward Send feedback \ No newline at end of file diff --git a/Costs_and_attributions.txt b/Costs_and_attributions.txt new file mode 100644 index 0000000000000000000000000000000000000000..7e2c34fb18110c68bed90403a5a5a6e0fc69cee3 --- /dev/null +++ b/Costs_and_attributions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/manage-costs-attributions +Date Scraped: 2025-02-23T11:47:15.200Z + +Content: +Home Docs Cloud Architecture Center Send feedback Manage costs and attributions for the developer platform Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC To manage costs in GKE, you must plan, continuously monitor, and optimize your environment. This section describes how you can manage the GKE costs that are associated with the blueprint. Proactive costs planning To plan your costs proactively, you must set clear cost expectations for your GKE workloads. 
Requirements can include many factors, such as the number and type of VMs used, the level of network traffic, the rate of logging, and the frequency of builds. After you set your cost expectations, you can set budget alerts on your projects, as described in the enterprise foundation blueprint. To attribute costs between workloads, you must consider how different resource types have different granularities of cost attribution. For example, consider the following: Project costs: Some projects contain resources that are associated with a single tenant. The cost of a single project is attributed to a cost center using metadata labels in billing exports. Multi-tenant cluster costs: Some projects contain GKE clusters that are shared by multiple tenants. GKE cost allocation provides a granular breakdown of costs in billing exports for each namespace or label on Kubernetes resources. Shared costs: Some projects include shared resources that support many tenants, but billing reports can't granularly attribute usage to individual tenants. We recommend that you treat these as a shared cost of the developer platform. Depending on your internal processes for cost attribution, you might assign this to a shared IT cost center or split the cost proportionally among cost centers based on the number of workloads that use the platform. The following table shows which projects are associated with which type of cost attribution.
Project | Description | Types of charges
eab-infra-cicd | Automation workflow project | Shared costs
eab-app-factory | Application factory project | Shared costs
eab-gke-{env} | Virtual machines and persistent disks for GKE | Multi-tenant cluster costs
eab-gke-{env} | Network load balancer and traffic charges incurred by applications on GKE | Shared costs
eab-gke-{env} | Logging and monitoring | Shared costs
eab-{tenant} | CI/CD and application-owned resources, such as AlloyDB for PostgreSQL | Project costs
Continuous resource monitoring After you set a cost baseline for your GKE clusters, use Cloud Monitoring to monitor the use of your GKE clusters and look for underutilized resources as areas for potential optimization. In this blueprint, all costs are billed to a centralized billing account. To export your costs and do detailed analysis of your GKE billing usage, you can use Cloud Billing BigQuery exports, as described in the enterprise foundation blueprint. Optimization techniques After you create an operating baseline for your applications, you can apply different optimization techniques to the environment. These optimization techniques are designed to help reduce your costs. What's next Read about deployment methodology (next document in this series). 
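As an illustration of the billing-export analysis described above, the following minimal sketch queries a detailed Cloud Billing BigQuery export with the google-cloud-bigquery client and sums net cost per project and per label value. The export table name and the label key are placeholders; replace them with your own export table and with the label keys that your GKE cost allocation settings actually produce.

```python
# Hedged sketch: summarize costs from a detailed Cloud Billing BigQuery export.
# The table name and label key are placeholders; check your own export dataset
# and the label keys written by GKE cost allocation before running this.
from google.cloud import bigquery

BILLING_TABLE = "my-billing-project.billing_export.gcp_billing_export_resource_v1_XXXXXX"  # placeholder
LABEL_KEY = "k8s-namespace"  # placeholder; verify the key used in your export


def cost_by_project_and_label(invoice_month):
    client = bigquery.Client()
    query = f"""
        SELECT
          project.id AS project_id,
          (SELECT value FROM UNNEST(labels) WHERE key = @label_key) AS label_value,
          ROUND(SUM(cost)
                + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) AS c), 0)),
                2) AS net_cost
        FROM `{BILLING_TABLE}`
        WHERE invoice.month = @invoice_month
        GROUP BY project_id, label_value
        ORDER BY net_cost DESC
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("label_key", "STRING", LABEL_KEY),
            bigquery.ScalarQueryParameter("invoice_month", "STRING", invoice_month),
        ]
    )
    # Run the parameterized query and print one row per project/label pair.
    for row in client.query(query, job_config=job_config).result():
        print(row.project_id, row.label_value, row.net_cost)


if __name__ == "__main__":
    cost_by_project_and_label("202501")  # invoice month in YYYYMM format
```

The same query pattern can feed a Looker Studio dashboard or a scheduled report that compares attributed costs against the budgets you defined earlier.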
Send feedback \ No newline at end of file diff --git a/Cross-Cloud_Network.txt b/Cross-Cloud_Network.txt new file mode 100644 index 0000000000000000000000000000000000000000..2a404055dbb368432e8650c4f386fd09e6b1a030 --- /dev/null +++ b/Cross-Cloud_Network.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/cross-cloud-network +Date Scraped: 2025-02-23T11:59:54.778Z + +Content: +Cross-Cloud NetworkModernize your network with service centric Cross-Cloud NetworkSimplify hybrid and multicloud networking, and secure your workloads, data, and users with Google Cloud Cross-Cloud Network.Contact usSolution benefitsSimplify with service centric connectivity for hybrid and multicloud networksStrengthen security with ML-powered, real-time protectionCut costs with Google Cloud's global networkAccelerating the Enterprise AI Journey with Cross-Cloud NetworkOverviewConnect any app, any service, in any cloud To accelerate application roll out and performance, use service centric, any-to-any connectivity built on Google's foundational global network. Google Cloud provides an encrypted SLA-backed global backbone with extensive global presence, with 187+ PoPs in over 200 countries and territories. Get seamless cross-cloud connectivity and private access to cloud-native services for distributed applications hosted anywhere.Connect anywhere with Cross-Cloud Network2:39Watch the videoBoost security to minimize breaches With Cloud NGFW, you can protect your applications with industry leading threat efficacy. Improve network security posture control and simplify the integration of partner security solutions with Cross-Cloud Network.Priceline transforms its infrastructure with Google Cloud1:27Reduce costs and operating overheadSimplify hybrid and multicloud networking with Cross-Cloud Interconnect, a high-performance, natively encrypted network connection. Reduce operational overhead with an open, secure, and optimized network platform designed to help you accelerate your business.The economic benefits of Google Cloud Cross-Cloud NetworkLearn more about savings and benefits customers have seen using Cross-Cloud Network from ESGRead the studySimplify with service-centric networkingConnect managed SaaS and Google services with Private Service Connect everywhere. With service-centric Cross-Cloud Network, you can connect and secure services easily across clouds and on-prem.Service-centric Cross-Cloud Network demo - AWS and Google CloudSecurely connect services to other clouds with a private connectionWatch videoView moreHow It WorksCross-Cloud Network is a global network platform that is open, secure, and optimized for applications and users across on-prem and clouds. It uses Google Cloud’s planet-scale network for multicloud connectivity and to secure applications and users.Watch videoDiscover how Google Cloud's new networking platform redefines how modern distributed applications are connected, delivered, and accessed. Common UsesConnect and secure distributed appsConnect and protect your data across cloudsGoogle Cloud streamlines connections between you and other clouds with secure, high-speed direct connections able to prioritize your most mission-critical traffic. Cloud Next-Generation Firewall (NGFW) within Google Cloud, on-prem, and other clouds for full multicloud secure networking.A service-centric Cross-Cloud Network provides secure, bi-directional, end-to-end connectivity to services across clouds. 
Read solution briefConnect apps across multicloud networks with Cross-Cloud InterconnectLearn about application awareness on Cloud InterconnectSimplify connectivity for apps across on-prem and multicloud with Private Service ConnectCross-Cloud Networking for Distributed Applications - Solution Design GuideCross-Cloud Network enables you to distribute workloads and services across multiple cloud and on-premises networks. The Cross-Cloud Network provides developers and operators the experience of a single cloud across multiple clouds. The Solution Design Guide can be found on the Cloud Architecture Center. Read solution design guideWatch a demo on Cross-Cloud InterconnectLearn more about application awareness on Cloud InterconnectWatch a demo of application awareness on Cloud InterconnectHow-tosConnect and protect your data across cloudsGoogle Cloud streamlines connections between you and other clouds with secure, high-speed direct connections able to prioritize your most mission-critical traffic. Cloud Next-Generation Firewall (NGFW) within Google Cloud, on-prem, and other clouds for full multicloud secure networking.A service-centric Cross-Cloud Network provides secure, bi-directional, end-to-end connectivity to services across clouds. Read solution briefConnect apps across multicloud networks with Cross-Cloud InterconnectLearn about application awareness on Cloud InterconnectSimplify connectivity for apps across on-prem and multicloud with Private Service ConnectAdditional resourcesCross-Cloud Networking for Distributed Applications - Solution Design GuideCross-Cloud Network enables you to distribute workloads and services across multiple cloud and on-premises networks. The Cross-Cloud Network provides developers and operators the experience of a single cloud across multiple clouds. The Solution Design Guide can be found on the Cloud Architecture Center. Read solution design guideWatch a demo on Cross-Cloud InterconnectLearn more about application awareness on Cloud InterconnectWatch a demo of application awareness on Cloud InterconnectDeliver internet-facing apps and contentGlobal front end for internet-facing applicationsYou likely have applications in multiple environments and clouds, including some on-prem. Cross-Cloud Network improves the user experience of applications through our global front end. It provides you with built-in health checking capabilities so that you know your applications are served from the optimal location, and with Service Extensions you are able to customize the data processing of your load balancers to suit your specific application or business requirements.Read solution briefImprove user experience by optimizing app delivery with Global Load BalancingProtect your web applications and APIs against DDoS and other attacksAccelerate web and content delivery by using Google Cloud's content delivery networksConnect internet-facing applications anywhereGoogle Cloud's global front end solution helps organizations manage globally scaled applications across multiple origins, delivering high performance and protection from web attacks. 
It simplifies management of multicloud web applications.Global front end solution design guideWhat’s new for the Google Cloud global front end for web delivery and protectionRead the blogMeet the nine new web delivery partner integrations coming through Service ExtensionsRead the blogHow Meesho migrated a petabyte of data into Cloud CDN with zero downtimeHow-tosGlobal front end for internet-facing applicationsYou likely have applications in multiple environments and clouds, including some on-prem. Cross-Cloud Network improves the user experience of applications through our global front end. It provides you with built-in health checking capabilities so that you know your applications are served from the optimal location, and with Service Extensions you are able to customize the data processing of your load balancers to suit your specific application or business requirements.Read solution briefImprove user experience by optimizing app delivery with Global Load BalancingProtect your web applications and APIs against DDoS and other attacksAccelerate web and content delivery by using Google Cloud's content delivery networksAdditional resourcesConnect internet-facing applications anywhereGoogle Cloud's global front end solution helps organizations manage globally scaled applications across multiple origins, delivering high performance and protection from web attacks. It simplifies management of multicloud web applications.Global front end solution design guideWhat’s new for the Google Cloud global front end for web delivery and protectionRead the blogMeet the nine new web delivery partner integrations coming through Service ExtensionsRead the blogHow Meesho migrated a petabyte of data into Cloud CDN with zero downtimeSecure access for hybrid workforceConsistent security for users using SSE vendor of choiceProvide best-in-class security for hybrid workforce through an integrated open partner ecosystem so you can connect to leading Security Service Edge (SSE) providers. Secure users using managed and unmanaged devices with Secure Enterprise Browser (SEB).Read solution briefBroadcom SymantecPalo Alto Networks FortinetHow-tosConsistent security for users using SSE vendor of choiceProvide best-in-class security for hybrid workforce through an integrated open partner ecosystem so you can connect to leading Security Service Edge (SSE) providers. Secure users using managed and unmanaged devices with Secure Enterprise Browser (SEB).Read solution briefBroadcom SymantecPalo Alto Networks FortinetTake the next step with Google CloudReceive customized training with product specialistsContact sales Deep dive into the technical detailsRead the solution briefVertex AI with PSCGo to codelabAdvanced load balancing optimizations Go to codelabCloud Armor for NLB/VM with user defined rulesGo to codelabFAQExpand allWhat is Cross-Cloud Network?Cross-Cloud Network is a global cloud networking platform built on Google’s private backbone with seamless any-to-any connectivity, enhanced application experience, and ML-powered security across all users and workloads wherever they may be. When should I consider Cross-Cloud Network?Customers should consider Cross-Cloud Network for hybrid and/or multicloud environments where services or workloads need to connect securely. 
This includes running distributed applications, AI workloads for training and inference, internet-facing applications, and content delivery.
Which networking products should I use with Cross-Cloud Network? Cross-Cloud Network uses the entire Google Cloud Networking portfolio to offer solutions designed for customers, with simplified connectivity, performance, privacy, and operational efficiency. Customers with hybrid and multicloud networks can use SD-WAN, Cloud VPN, Cloud Interconnect, and Cross-Cloud Interconnect together with Network Connectivity Center. Best-in-class security, including partner security, is integrated natively in the Cross-Cloud Network with Cloud NGFW, Cloud Armor, Cloud Secure Web Gateway, Cloud NAT, Cloud DNS, NSIM, and the SSE partner ecosystem. The Google Cloud front end is secured with Cloud Armor, which provides web application firewalling and DDoS protection. Application resources and services are abstracted with Cloud CDN, Cloud Load Balancing, and Private Service Connect to enable a uniform application delivery model across multiple clouds and private data centers.
What are the benefits of Cross-Cloud Network? Cross-Cloud Network provides high-performance, low-latency interconnectivity for best-in-class application services across clouds. With application awareness on Cloud Interconnect, Google Cloud is the first major cloud service provider to offer a managed traffic differentiation solution. Cross-Cloud Network also offers cloud-native, best-in-class protection for user-to-app and app-to-app traffic, and you can manage security policies natively in Google Cloud or with the third-party tooling of your choice. The global front end unifies reachability, load balancing, and security across multiple clouds while optimizing connectivity paths and content caching for an optimal application experience. Finally, Cross-Cloud Network offers multi-regional reliability and resiliency across all relevant services.
Which clouds are supported with Cross-Cloud Network? Cross-Cloud Network can integrate with virtually any external network. Integration with Amazon Web Services, Microsoft Azure, Oracle Cloud Infrastructure, and Alibaba Cloud networks is streamlined with Cross-Cloud Interconnect.
What is the difference between cross-cloud and multicloud? Until now, businesses have had to build and maintain multicloud networks from a collection of independent, uncorrelated services and components. Multicloud networks put the responsibility on the business to integrate disparate technologies and broker agreements with a multitude of providers. Cross-Cloud Network, by contrast, is a purpose-built, integrated network infrastructure that enables cross-cloud application and user communications. It represents a new era in cloud networking and provides a true cloud-native cross-cloud infrastructure with performance, privacy, security, and simplicity.
How does Cross-Cloud Network improve user experience? Cross-Cloud Network improves the application experience by providing an optimal latency topology based on Google's pervasive connectivity and by natively integrating security services over high-speed connections.
This guarantees that application communications always follow the best available path without a performance penalty when flows are subject to security controls.
How does Cross-Cloud Network enable distributed or multicloud applications? Multicloud applications are enabled by assembling the best possible cross-cloud topology over best-in-class links, making security cloud native, and centralizing the abstraction of application services and workloads in other clouds.
Who are the Security Service Edge partners? Palo Alto Networks, Broadcom Symantec, and Fortinet. \ No newline at end of file diff --git a/Cross-Cloud_Network_inter-VPC_connectivity_using_Network_Connectivity_Center.txt b/Cross-Cloud_Network_inter-VPC_connectivity_using_Network_Connectivity_Center.txt new file mode 100644 index 0000000000000000000000000000000000000000..99b970fe4a2038068b7f34f43cac4f2fb74f6c03 --- /dev/null +++ b/Cross-Cloud_Network_inter-VPC_connectivity_using_Network_Connectivity_Center.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ccn-distributed-apps-design/ccn-ncc-vpn-ra +Date Scraped: 2025-02-23T11:51:02.094Z +Content: +Home Docs Cloud Architecture Center Send feedback Cross-Cloud Network inter-VPC connectivity using Network Connectivity Center Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-30 UTC This document provides a reference architecture that you can use to deploy a Cross-Cloud Network inter-VPC network topology in Google Cloud. This network design enables the deployment of software services across Google Cloud and external networks, such as on-premises data centers or other Cloud Service Providers (CSPs). The intended audience for this document includes network administrators, cloud architects, and enterprise architects who build out the network connectivity, as well as cloud architects who plan how workloads are deployed. The document assumes a basic understanding of routing and internet connectivity. This design supports multiple external connections, multiple services-access Virtual Private Cloud (VPC) networks that contain services and service access points, and multiple workload VPC networks. In this document, the term service access points refers to access points to services made available by using Google Cloud private services access and Private Service Connect. Network Connectivity Center is a hub-and-spoke control-plane model for network connectivity management in Google Cloud. The hub resource provides centralized connectivity management for Network Connectivity Center VPC spokes. The Network Connectivity Center hub is a global control plane that learns and distributes routes between the various spoke types that are connected to it. VPC spokes typically inject subnet routes into the centralized hub route table, and hybrid spokes typically inject dynamic routes into it. Using the hub's control-plane information, Google Cloud automatically establishes data-plane connectivity between Network Connectivity Center spokes. Network Connectivity Center is the recommended approach to interconnect VPCs for scalable growth on Google Cloud. When network virtual appliances must be inserted in the traffic path, use static or policy-based routes along with VPC Network Peering to interconnect VPCs. For more information, see Cross-Cloud Network inter-VPC connectivity with VPC Network Peering.
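To make the hub-and-spoke control-plane model described above concrete, the following minimal Python sketch models a hub route table that spokes inject routes into and learn routes from. It is an illustration only, using hypothetical spoke names and prefixes; the class shown is not a Google Cloud API.

class Hub:
    """Toy model of a Network Connectivity Center hub route table (illustrative only)."""
    def __init__(self):
        self.route_table = []  # entries of (prefix, origin_spoke, route_type)

    def inject(self, spoke, prefix, route_type):
        # VPC spokes typically inject subnet routes; hybrid spokes inject dynamic routes.
        self.route_table.append((prefix, spoke, route_type))

    def routes_for(self, spoke):
        # Each spoke learns every hub route except the ones it injected itself.
        return [entry for entry in self.route_table if entry[1] != spoke]

hub = Hub()
hub.inject("workload-vpc-1", "10.10.0.0/24", "subnet")                 # VPC spoke
hub.inject("workload-vpc-2", "10.20.0.0/24", "subnet")                 # VPC spoke
hub.inject("interconnect-attachment-a", "192.168.0.0/16", "dynamic")   # hybrid spoke

for prefix, origin, kind in hub.routes_for("workload-vpc-1"):
    print(f"workload-vpc-1 learns {prefix} ({kind} route) from {origin}")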
Architecture The following diagram shows a high-level view of the architecture of the networks and the different packet flows this architecture supports. The architecture contains the following high-level elements: Component Purpose Interactions External networks (On-premises or other CSP network) Hosts the clients of workloads that run in the workload VPCs and in the services-access VPCs. External networks can also host services. Exchanges data with Google Cloud's VPC networks through the transit network. Connects to the transit network by using Cloud Interconnect or HA VPN. Terminates one end of the following flows: External-to-external External-to-services-access External-to-Private-Service-Connect-consumer External-to-workload Transit VPC network (also known as a Routing VPC network in Network Connectivity Center) Acts as a hub for the external network, the services-access VPC network, and the workload VPC networks. Connects the external network, the services-access VPC network, Private Service Connect consumer network, and the workload VPC networks together through a combination of Cloud Interconnect, HA VPN, and Network Connectivity Center. Services-access VPC network Provides access to services that are needed by workloads that are running in the workload VPC networks or external networks. Also provides access points to managed services that are hosted in other networks. Exchanges data with the external, workload, and Private Service Connect consumer networks through the transit network. Connects to the transit VPC by using HA VPN. Transitive routing provided by HA VPN allows external traffic to reach managed services VPCs through the services-access VPC network. Terminates one end of the following flows: External-to-services-access Workload-to-services-access Services-access-to-Private-Service-Connect-consumer Managed services VPC network Hosts managed services that are needed by clients in other networks. Exchanges data with the external, services-access, Private Service Connect consumer, and workload networks. Connects to the services-access VPC network by using private services access, which uses VPC Network Peering. The managed services VPC can also connect to the Private Service Connect consumer VPC by using Private Service Connect or private services access. Terminates one end of flows from all other networks. Private Service Connect consumer VPC Hosts Private Service Connect endpoints that are accessible from other networks. This VPC might also be a workload VPC. Exchanges data with the external and services-access VPC networks through the transit VPC network. Connects to the transit network and other workload VPC networks by using Network Connectivity Center VPC spokes. Workload VPC networks Hosts workloads that are needed by clients in other networks. This architecture allows for multiple workload VPC networks. Exchanges data with the external and services-access VPC networks through the transit VPC network. Connects to the transit network, Private Service Connect consumer networks, and other workload VPC networks by using Network Connectivity Center VPC spokes. Terminates one end of the following flows: External-to-workload Workload-to-services-access Workload-to-Private-Service-Connect-consumer Workload-to-workload Network Connectivity Center The Network Connectivity Center hub incorporates a global routing database that serves as a network control plane for VPC subnet and hybrid connection routes across any Google Cloud region. 
Interconnects multiple VPC and hybrid networks in an any-to-any topology by building a datapath that uses the control plane routing table. The following diagram shows a detailed view of the architecture that highlights the four connections among the components: Connections descriptions This section describes the four connections that are shown in the preceding diagram. The Network Connectivity Center documentation refers to the transit VPC network as the routing VPC. While these networks have different names, they serve the same purpose. Connection 1: Between external networks and the transit VPC networks This connection between the external networks and the transit VPC networks happens over Cloud Interconnect or HA VPN. The routes are exchanged by using BGP between the Cloud Routers in the transit VPC network and between the external routers in the external network. Routers in the external networks announce the routes for the external subnets to the transit VPC Cloud Routers. In general, external routers in a given location announce routes from the same external location as more preferred than routes for other external locations. The preference of the routes can be expressed by using BGP metrics and attributes. Cloud Routers in the transit VPC network advertise routes for prefixes in Google Cloud's VPCs to the external networks. These routes must be announced using Cloud Router custom route announcements. Network Connectivity Center lets you transfer data between different on-premises networks by using the Google backbone network. When you configure the interconnect VLAN attachments as Network Connectivity Center hybrid spokes, you must enable site-to-site data transfer. Cloud Interconnect VLAN attachments that source the same external network prefixes are configured as a single Network Connectivity Center spoke. Connection 2: Between transit VPC networks and services-access VPC networks This connection between transit VPC networks and services-access VPC networks happens over HA VPN with separate tunnels for each region. Routes are exchanged by using BGP between the regional Cloud Routers in the transit VPC networks and in the services-access VPC networks. Transit VPC HA VPN Cloud Routers announce routes for external network prefixes, workload VPCs, and other services-access VPCs to the services-access VPC Cloud Router. These routes must be announced using Cloud Router custom route announcements. The services-access VPC announces its subnets and the subnets of any attached managed services VPC networks to the transit VPC network. Managed services VPC routes and the services-access VPC subnet routes must be announced using Cloud Router custom route announcements. Connection 3: Between the transit VPC network, workload VPC networks, and Private Service Connect services-access VPC networks The connection between the transit VPC network, workload VPC networks, and Private Service Connect consumer VPC networks occurs when subnets and prefix routes are exchanged using Network Connectivity Center. This connection enables communication between the workload VPC networks, services-access VPC networks that are connected as Network Connectivity Center VPC spokes, and other networks that are connected as Network Connectivity Center hybrid spokes. These other networks include the external networks and the services-access VPC networks that are using connection 1 and connection 2, respectively. 
The Cloud Interconnect or HA VPN attachments in the transit VPC network use Network Connectivity Center to export dynamic routes to the workload VPC networks. When you configure the workload VPC network as a spoke of the Network Connectivity Center hub, the workload VPC network automatically exports its subnets to the transit VPC network. Optionally, you can set up the transit VPC network as a VPC spoke. No static routes are exported from the workload VPC network to the transit VPC network. No static routes are exported from the transit VPC network to the workload VPC network. Connection 4: Private Service Connect Consumer VPC with Network Connectivity Center propagation Private Service Connect endpoints are organized in a common VPC that allows consumers access to first-party and third-party managed services. The Private Service Connect consumer VPC network is configured as a Network Connectivity Center VPC spoke. This spoke enables Private Service Connect propagation on the Network Connectivity Center hub. Private Service Connect propagation announces the host prefix of the Private Service Connect endpoint as a route into the Network Connectivity Center hub routing table. Private Service Connect services-access consumer VPC networks connect to workload VPC networks and to transit VPC networks. These connections enable transitive connectivity to Private Service Connect endpoints. The Network Connectivity Center hub must have Private Service Connect connection propagation enabled. Network Connectivity Center automatically builds a data path from all spokes to the Private Service Connect endpoint. Traffic flows The following diagram shows the flows that are enabled by this reference architecture. The following table describes the flows in the diagram: Source Destination Description External network Services-access VPC network Traffic follows routes over the external connections to the transit network. The routes are announced by the external-facing Cloud Router. Traffic follows the custom route to the services-access VPC network. The route is announced across the HA VPN connection. If the destination is in a managed-services VPC network that's connected to the services-access VPC network by private services access, then the traffic follows Network Connectivity Center custom routes to the managed services network. Services-access VPC network External network Traffic follows a custom route across the HA VPN tunnels to the transit network. Traffic follows routes across the external connections back to the external network. The routes are learned from the external routers over BGP. External network Workload VPC network or Private Service Connect consumer VPC network Traffic follows routes over the external connections to the transit network. The routes are announced by the external-facing Cloud Router. Traffic follows the subnet route to the relevant workload VPC network. The route is learned through Network Connectivity Center. Workload VPC network or Private Service Connect consumer VPC network External network Traffic follows a dynamic route back to the transit network. The route is learned through a Network Connectivity Center custom route export. Traffic follows routes across the external connections back to the external network. The routes are learned from the external routers over BGP. Workload VPC network Services-access VPC network Traffic follows routes to the transit VPC network. The routes are learned through a Network Connectivity Center custom route export. 
Traffic follows a route through one of the HA VPN tunnels to the services-access VPC network. The routes are learned from BGP custom route announcements. Services-access VPC network Workload VPC network Traffic follows a custom route to the transit network. The route is announced across the HA VPN tunnels. Traffic follows the subnet route to the relevant workload VPC network. The route is learned through Network Connectivity Center. Workload VPC network Workload VPC network Traffic that leaves one workload VPC follows the more specific route to the other workload VPC through Network Connectivity Center. Return traffic reverses this path. Products used Virtual Private Cloud (VPC): A virtual system that provides global, scalable networking functionality for your Google Cloud workloads. VPC includes VPC Network Peering, Private Service Connect, private services access, and Shared VPC. Network Connectivity Center: An orchestration framework that simplifies network connectivity among spoke resources that are connected to a central management resource called a hub. Cloud Interconnect: A service that extends your external network to the Google network through a high-availability, low-latency connection. Cloud VPN: A service that securely extends your peer network to Google's network through an IPsec VPN tunnel. Cloud Router: A distributed and fully managed offering that provides Border Gateway Protocol (BGP) speaker and responder capabilities. Cloud Router works with Cloud Interconnect, Cloud VPN, and Router appliances to create dynamic routes in VPC networks based on BGP-received and custom learned routes. Design considerations This section describes design factors, best practices, and design recommendations that you should consider when you use this reference architecture to develop a topology that meets your specific requirements for security, reliability, and performance. Security and compliance The following list describes the security and compliance considerations for this reference architecture: For compliance reasons, you might want to deploy workloads in a single region only. If you want to keep all traffic in a single region, you can use a 99.9% topology. Use Cloud Next Generation Firewall (Cloud NGFW) to secure traffic that enters and leaves the services-access and workload VPC networks. To inspect traffic that passes between hybrid networks through the transit network, you need to use external firewalls or NVA firewalls. Enable Connectivity Tests to ensure that traffic is behaving as expected. Enable logging and monitoring as appropriate for your traffic and compliance needs. To gain insights into your traffic patterns, use VPC Flow Logs along with Flow Analyzer. Use Cloud IDS to gather additional insight into your traffic. Reliability The following list describes the reliability considerations for this reference architecture: To get 99.99% availability for Cloud Interconnect, you must connect into two different Google Cloud regions from different metros across two distinct zones. To improve reliability and minimize exposure to regional failures, you can distribute workloads and other cloud resources across regions. To handle your expected traffic, create a sufficient number of VPN tunnels. Individual VPN tunnels have bandwidth limits. Performance optimization The following list describes the performance considerations for this reference architecture: You might be able to improve network performance by increasing the maximum transmission unit (MTU) of your networks and connections. 
For more information, see Maximum transmission unit. Communication between the transit VPC and workload resources is over a Network Connectivity Center connection. This connection provides a full-line-rate throughput for all VMs in the network at no additional cost. You have several choices for how to connect your external network to the transit network. For more information about how to balance cost and performance considerations, see Choosing a Network Connectivity product. Deployment This section discusses how to deploy the Cross-Cloud Network inter-VPC connectivity by using the Network Connectivity Center architecture described in this document. The architecture in this document creates three types of connections to a central transit VPC, plus a connection between workload VPC networks and workload VPC networks. After Network Connectivity Center is fully configured, it establishes communication between all networks. This deployment assumes that you are creating connections between the external and transit networks in two regions, although workload subnets can be in other regions. If workloads are placed in one region only, subnets need to be created in that region only. To deploy this reference architecture, complete the following tasks: Create network segmentation with Network Connectivity Center Identify regions to place connectivity and workloads Create your VPC networks and subnets Create connections between external networks and your transit VPC network Create connections between your transit VPC network and services-access VPC networks Establish connectivity between your transit VPC network and workload VPC networks Test connectivity to workloads Create network segmentation with Network Connectivity Center Before you create a Network Connectivity Center hub for the first time, you must decide whether you want to use a full mesh topology or a star topology. The decision to commit to a full-mesh of interconnected VPCs or a star topology of VPCs is irreversible. Use the following general guidelines to make this irreversible decision: If the business architecture of your organization permits traffic between any of your VPC networks, use the Network Connectivity Center mesh. If traffic flows between certain different VPC spokes aren't permitted, but these VPC spokes can connect to a core group of VPC spokes, use a Network Connectivity Center star topology. Identify regions to place connectivity and workloads In general, you want to place connectivity and Google Cloud workloads in close proximity to your on-premises networks or other cloud clients. For more information about placing workloads, see Google Cloud Region Picker and Best practices for Compute Engine regions selection. Create your VPC networks and subnets To create your VPC networks and subnets, complete the following tasks: Create or identify the projects where you will create your VPC networks. For guidance, see Network segmentation and project structure. If you intend to use Shared VPC networks, provision your projects as Shared VPC host projects. Plan your IP address allocations for your networks. You can preallocate and reserve your ranges by creating internal ranges. Doing so makes later configuration and operations more straightforward. Create a transit network VPC with global routing enabled. Create services-access VPC networks. If you plan to have workloads in multiple regions, enable global routing. Create workload VPC networks. If you will have workloads in multiple regions, enable global routing. 
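Before you create the VPC networks and subnets listed above, it helps to write down a non-overlapping IP plan for the transit, services-access, and workload networks. The following standard-library Python sketch shows one hypothetical way to carve per-VPC, per-region ranges and check that nothing overlaps; the supernet, VPC names, and region labels are placeholders rather than recommendations.

import ipaddress

SUPERNET = ipaddress.ip_network("10.0.0.0/8")      # hypothetical private range
REGIONS = ["region-a", "region-b"]
VPCS = ["transit", "services-access", "workload-1", "workload-2"]

# Reserve one /16 per VPC, then one /20 per region inside each /16.
vpc_blocks = list(SUPERNET.subnets(new_prefix=16))
plan = {}
for i, vpc in enumerate(VPCS):
    regional_blocks = list(vpc_blocks[i].subnets(new_prefix=20))
    plan[vpc] = {region: regional_blocks[j] for j, region in enumerate(REGIONS)}

for vpc, regions in plan.items():
    for region, cidr in regions.items():
        print(f"{vpc:16s} {region:10s} {cidr}")

# Sanity check: no two planned subnets overlap.
subnets = [cidr for regions in plan.values() for cidr in regions.values()]
assert not any(a.overlaps(b) for i, a in enumerate(subnets) for b in subnets[i + 1:])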
Create connections between external networks and your transit VPC network This section assumes connectivity in two regions and assumes that the external locations are connected and can fail over to each other. It also assumes that there is a preference for clients in an external location to reach services in the region where the external location exists. Set up the connectivity between external networks and your transit network. For an understanding of how to think about this, see External and hybrid connectivity. For guidance on choosing a connectivity product, see Choosing a Network Connectivity product. Configure BGP in each connected region as follows: Configure the router in the given external location as follows: Announce all subnets for that external location using the same BGP MED on both interfaces, such as 100. If both interfaces announce the same MED, then Google Cloud can use ECMP to load balance traffic across both connections. Announce all subnets from the other external location by using a lower-priority MED than that of the first region, such as 200. Announce the same MED from both interfaces. Configure the external-facing Cloud Router in the transit VPC of the connected region as follows: Set your Cloud Router with a private ASN. Use custom route advertisements, to announce all subnet ranges from all regions over both external-facing Cloud Router interfaces. Aggregate them if possible. Use the same MED on both interfaces, such as 100. Work with Network Connectivity Center hub and hybrid spokes, use the default parameters. Create a Network Connectivity Center hub. If your organization permits traffic between all of your VPC networks, use the default full-mesh configuration. If you are using Partner Interconnect, Dedicated Interconnect, HA-VPN, or a Router appliance to reach on-premises prefixes, configure these components as different Network Connectivity Center hybrid spokes. To announce the Network Connectivity Center hub route table subnets to remote BGP neighbors, set a filter to include all IPv4 address ranges. If hybrid connectivity terminates on a Cloud Router in a region that supports data transfer, configure the hybrid spoke with site-to-site data transfer enabled. Doing so supports site-to-site data transfer that uses Google's backbone network. Create connections between your transit VPC network and services-access VPC networks To provide transitive routing between external networks and the services-access VPC and between workload VPCs and the services-access VPC, the services-access VPC uses HA VPN for connectivity. Estimate how much traffic needs to travel between the transit and services-access VPCs in each region. Scale your expected number of tunnels accordingly. Configure HA VPN between the transit VPC network and the services-access VPC network in region A by using the instructions in Create HA VPN gateways to connect VPC networks. Create a dedicated HA VPN Cloud Router in the transit VPC network. Leave the external-network-facing router for external network connections. Transit VPC Cloud Router configuration: To announce external-network and workload VPC subnets to the services-access VPC, use custom route advertisements on the Cloud Router in the transit VPC. Services-access VPC Cloud Router configuration: To announce services-access VPC network subnets to the transit VPC, use custom route advertisements on the services-access VPC network Cloud Router. 
If you use private services-access to connect a managed services VPC network to the services-access VPC, use custom routes to announce those subnets as well. On the transit VPC side of the HA VPN tunnel, configure the pair of tunnels as a Network Connectivity Center hybrid spoke: To support inter-region data transfer, configure the hybrid spoke with site-to-site data transfer enabled. To announce the Network Connectivity Center hub route table subnets to remote BGP neighbors, set a filter to include all IPv4 address ranges. This action announces all IPv4 subnet routes to the neighbor. To install dynamic routes when capacity is limited on the external router, configure the Cloud Router to announce a summary route with a custom route advertisement. Use this approach instead of announcing the full route table of the Network Connectivity Center hub. If you connect a managed services VPC to the services-access VPC by using private services-access after the VPC Network Peering connection is established, you also have to update the services-access VPC side of the VPC Network Peering connection to export custom routes. Establish connectivity between your transit VPC network and workload VPC networks To establish inter-VPC connectivity at scale, use Network Connectivity Center with VPC spokes. Network Connectivity Center supports two different types of data plane models—the full-mesh data plane model or the star-topology data plane model. If all of your networks can be allowed to intercommunicate, then Establish full-mesh connectivity. If your business architecture requires a point-to-multipoint topology, then Establish star topology connectivity. Establish full-mesh connectivity The Network Connectivity Center VPC spokes include the transit VPCs, the Private Service Connect consumer VPCs, and all workload VPCs. Although Network Connectivity Center builds a fully meshed network of VPC spokes, the network operators must permit traffic flows between the source networks and the destination networks by using firewall rules or firewall policies. Configure all of the workload, transit, and Private Service Connect consumer VPCs as Network Connectivity Center VPC spokes. There can't be subnet overlaps across VPC spokes. When you configure the VPC spoke, announce non-overlapping IP address subnet ranges to the Network Connectivity Center hub route table: Include export subnet ranges. Exclude export subnet ranges. If VPC spokes are in different projects and the spokes are managed by administrators other than the Network Connectivity Center hub administrators, the VPC spoke administrators must initiate a request to join the Network Connectivity Center hub in the other projects. Use Identity and Access Management (IAM) permissions in the Network Connectivity Center hub project to grant the roles/networkconnectivity.groupUser role to that user. To enable private service connections to be transitively and globally accessible from other Network Connectivity Center spokes, enable the propagation of Private Service Connect connections on the Network Connectivity Center hub. If fully mesh inter-VPC communication between workload VPCs isn't allowed, consider using a Network Connectivity Center star topology. Establish star topology connectivity Centralized business architectures that require a point-to-multipoint topology can use a Network Connectivity Center star topology. 
To use a Network Connectivity Center star topology, complete the following tasks: In Network Connectivity Center, create a Network Connectivity Center hub and specify a star topology. To allow private service connections to be transitively and globally accessible from other Network Connectivity Center spokes, enable the propagation of Private Service Connect connections on the Network Connectivity Center hub. When you configure the Network Connectivity Center hub for a star topology, you can group VPCs in one of two predetermined groups: center groups or edge groups. To group VPCs in the center group, configure the transit VPC and Private Service Connect consumer VPCs as a Network Connectivity Center VPC spoke as part of the center group. Network Connectivity Center builds a fully meshed network between VPC spokes that are placed in the center group. To group workload VPCs in the edge group, configure each of these networks as Network Connectivity Center VPC spokes within that group. Network Connectivity Center builds a point-to-point data path from each Network Connectivity Center VPC spoke to all VPCs in the center group. Test connectivity to workloads If you have workloads that are already deployed in your VPC networks, test access to them now. If you connected the networks before you deployed workloads, you can deploy the workloads now and test. What's next Learn more about the Google Cloud products used in this design guide: VPC networks Network Connectivity Center Cloud Interconnect HA VPN For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Eric Yu | Networking Specialist Customer EngineerDeepak Michael | Networking Specialist Customer EngineerVictor Moreno | Product Manager, Cloud NetworkingOsvaldo Costa | Networking Specialist Customer EngineerOther contributors: Mark Schlagenhauf | Technical Writer, NetworkingAmmett Williams | Developer Relations EngineerGhaleb Al-habian | Network SpecialistTony Sarathchandra | Senior Product Manager Send feedback \ No newline at end of file diff --git a/Cross-Cloud_Network_inter-VPC_connectivity_with_VPC_Network_Peering.txt b/Cross-Cloud_Network_inter-VPC_connectivity_with_VPC_Network_Peering.txt new file mode 100644 index 0000000000000000000000000000000000000000..88590075f8e059830ebf9a133a6709d580098adf --- /dev/null +++ b/Cross-Cloud_Network_inter-VPC_connectivity_with_VPC_Network_Peering.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ccn-distributed-apps-design/ccn-vnp-vpn-ra +Date Scraped: 2025-02-23T11:51:04.262Z + +Content: +Home Docs Cloud Architecture Center Send feedback Cross-Cloud Network inter-VPC connectivity using VPC Network Peering Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-18 UTC This document provides a reference architecture that you can use to deploy a Cross-Cloud Network hub-and-spoke network topology in Google Cloud. This network design enables the deployment of software services across Google Cloud and external networks, like on-premises data centers or other Cloud Service Providers (CSPs). This design supports multiple external connections, multiple services-access Virtual Private Cloud (VPC) networks, and multiple workload VPC networks. The intended audience for this document is network administrators who build network connectivity and cloud architects who plan how workloads are deployed. 
The document assumes that you have a basic understanding of routing and internet connectivity. Architecture The following diagram shows a high-level view of the architecture of the networks and the four packet flows that this architecture supports. The architecture contains the following high-level elements: Component Purpose Interactions External networks (On-premises or other CSP network) Hosts the clients of workloads that run in the workload VPCs and in the services-access VPCs. External networks can also host services. Exchanges data with Google Cloud's Virtual Private Cloud networks through the transit network. Connects to the transit network by using Cloud Interconnect or HA VPN. Terminates one end of the following flows: External-to-shared-services External-to-workload Transit VPC network Acts as a hub for the external network, the services-access VPC network, and the workload VPC networks. Connects the external network, the services-access VPC network, and the workload VPC networks together through a combination of Cloud Interconnect, HA VPN, and VPC Network Peering. Services-access VPC network Provides access to services that are needed by workloads that are running in the workload VPC networks or external networks. Also provides access points to managed services that are hosted in other networks. Exchanges data with the external and workload networks through the transit network. Connects to the transit VPC by using HA VPN. Transitive routing provided by HA VPN allows external traffic to reach managed services VPCs through the services-access VPC network. Terminates one end of the following flows: External-to-shared-services Workload-to-shared-services Managed services VPC network Hosts managed services that are needed by clients in other networks. Exchanges data with the external, services-access, and workload networks. Connects to the services-access VPC network by using private services access, which uses VPC Network Peering, or by using Private Service Connect. Terminates one end of flows from all other networks. Workload VPC networks Hosts workloads that are needed by clients in other networks. Exchanges data with the external and services-access VPC networks through the transit VPC network. Connects to the transit network by using VPC Network Peering. Connects to other workload VPC networks by using Network Connectivity Center VPC spokes. Terminates one end of the following flows: External-to-workload Workload-to-shared-services Workload-to-workload The following diagram shows a detailed view of the architecture that highlights the four connections among the networks: Connections descriptions This section describes the four connections that are shown in the preceding diagram. Connection 1: Between external networks and the transit VPC network This connection between external networks and transit VPC networks happens over Cloud Interconnect or HA VPN. Routes are exchanged by using BGP between the Cloud Routers in the transit VPC network and the external routers in the external network. Routers in external networks announce the routes for external subnets to the transit VPC Cloud Routers. In general, external routers in a given location announce routes from the same external location as more preferred than routes for other external locations. The preference of the routes can be expressed by using BGP metrics and attributes. Cloud Routers in the transit VPC network advertise routes for prefixes in Google Cloud's VPCs to the external networks. 
These routes must be announced by using Cloud Router custom route announcements. Connection 2: Between transit VPC networks and services-access VPC networks This connection between transit VPC networks and services-access VPC networks happens over HA VPN with separate tunnels for each region. Routes are exchanged by using BGP between the regional Cloud Routers in the transit VPC networks and the services-access VPC networks. Transit VPC HA VPN Cloud Routers announce routes for external network prefixes, workload VPCs, and other services-access VPCs to the services-access VPC Cloud Router. These routes must be announced by using Cloud Router custom route announcements. The services-access VPC network announces its subnets and the subnets of any attached managed services VPC networks to the transit VPC network. Managed services VPC routes and the services-access VPC subnet routes must be announced by using Cloud Router custom route announcements. Connection 3: Between transit VPC networks and workload VPC networks This connection between transit VPC networks and workload VPC networks is implemented over VPC peering. Subnets and prefix routes are exchanged by using VPC peering mechanisms. This connection allows communication between the workload VPC networks and the other networks that are connected to the transit VPC network, including the external networks and the services-access VPC networks. The transit VPC network uses VPC Network Peering to export custom routes. These custom routes include all of the dynamic routes that have been learned by the transit VPC network. The workload VPC networks import those custom routes. The workload VPC network automatically exports subnets to the transit VPC network. No custom routes are exported from the workload VPCs to the transit VPC. Connection 4: Between workload VPC networks Workload VPC networks can be connected together by using Network Connectivity Center VPC spokes. This is an optional configuration. You can omit it if you don't want workload VPC networks to communicate with each other. Traffic flows The following diagram shows the four flows that are enabled by this reference architecture. The following table describes the flows in the diagram: Source Destination Description External network Services-access VPC network Traffic follows routes over the Cloud Interconnect connections to the transit network. The routes are announced by the external-facing Cloud Router. Traffic follows the custom route to the services-access VPC network. The route is announced across the HA VPN connection. If the destination is in a managed services VPC network that's connected to the services-access VPC network by private services access, then the traffic follows VPC Network Peering custom routes to the managed services network. Services-access VPC network External network Traffic follows a custom route across the HA VPN tunnels to the transit network. Traffic follows routes across the external connections back to the external network. The routes are learned from the external routers over BGP. External network Workload VPC network Traffic follows routes over the external connections to the transit network. The routes are announced by the external-facing Cloud Router. Traffic follows the subnet route to the relevant workload VPC network. The route is learned through VPC Network Peering. Workload VPC network External network Traffic follows a route back to the transit network. The route is learned through a VPC Network Peering custom route export. 
Traffic follows routes across the external connections back to the external network. The routes are learned from the external routers over BGP. Workload VPC network Services-access VPC network Traffic follows routes to the transit VPC. The routes are learned through a VPC Network Peering custom route export. Traffic follows a route through one of the HA VPN tunnels to the services-access VPC network. The route is learned from BGP custom route announcements. Services-access VPC network Workload VPC network Traffic follows a custom route to the transit network. The route is announced across the HA VPN tunnels. Traffic follows the subnet route to the relevant workload VPC network. The route is learned through VPC Network Peering. Workload VPC network Workload VPC network Traffic that leaves one workload VPC follows the more specific route to the other workload VPC through Network Connectivity Center. Return traffic reverses this path. Products used This reference architecture uses the following Google Cloud products: Virtual Private Cloud (VPC): A virtual system that provides global, scalable networking functionality for your Google Cloud workloads. VPC includes VPC Network Peering, Private Service Connect, private services access, and Shared VPC. Network Connectivity Center: An orchestration framework that simplifies network connectivity among spoke resources that are connected to a central management resource called a hub. Cloud Interconnect: A service that extends your external network to the Google network through a high-availability, low-latency connection. Cloud VPN: A service that securely extends your peer network to Google's network through an IPsec VPN tunnel. Cloud Router: A distributed and fully managed offering that provides Border Gateway Protocol (BGP) speaker and responder capabilities. Cloud Router works with Cloud Interconnect, Cloud VPN, and Router appliances to create dynamic routes in VPC networks based on BGP-received and custom learned routes. Design Considerations This section describes design factors, best practices, and design recommendations that you should consider when you use this reference architecture to develop a topology that meets your specific requirements for security, reliability, and performance. Security and compliance The following list describes the security and compliance considerations for this reference architecture: For compliance reasons, you might want to deploy workloads in a single region only. If you want to keep all traffic in a single region, you can use a 99.9% topology. For more information, see Establish 99.9% availability for Dedicated Interconnect and Establish 99.9% availability for Partner Interconnect. Use Cloud Next Generation Firewall to secure traffic that enters and leaves the services-access and workload VPC networks. To secure traffic that passes between external networks and the transit network, you need to use external firewalls or NVA firewalls. Enable logging and monitoring as appropriate for your traffic and compliance needs. You can use VPC Flow Logs to gain insights into your traffic patterns. Use Cloud IDS to gather additional insight into your traffic. Reliability The following list describes the reliability considerations for this reference architecture: To get 99.99% availability for Cloud Interconnect, you must connect to two different Google Cloud regions. To improve reliability and minimize exposure to regional failures, you can distribute workloads and other cloud resources across regions. 
To handle your expected traffic, create a sufficient number of VPN tunnels. Individual VPN tunnels have bandwidth limits. Performance optimization The following list describes the performance considerations for this reference architecture: You might be able to improve network performance by increasing the maximum transmission unit (MTU) of your networks and connections. For more information, see Maximum transmission unit. Communication between the transit VPC and workload resources is over VPC Network Peering, which provides full-line-rate throughput for all VMs in the network at no additional cost. Consider VPC Network Peering quotas and limits when you plan your deployment. You have several choices in connecting your external network to the transit network. For more information about balancing cost and performance considerations, see Choosing a Network Connectivity product. Deployment The architecture in this document creates three sets of connections to a central transit VPC network plus a different connection among workload VPC networks. After all of the connections are fully configured, all of the networks in the deployment can communicate with all other networks. This deployment assumes that you are creating connections between the external and transit networks in two regions. Workload subnets can be in any region, however. If you are placing workloads in one region only, you only need to create subnets in that region. To deploy this reference architecture, complete the following tasks: Identify regions to place connectivity and workloads Create your VPC networks and subnets Create connections between external networks and your transit VPC network Create connections between your transit VPC network and services-access VPC networks Create connections between your transit VPC network and workload VPC networks Connect your workload VPC networks Test connectivity to workloads Identify regions to place connectivity and workloads In general, you want to place connectivity and Google Cloud workloads in close proximity to your on-premises networks or other cloud clients. For more information about placing workloads, see Google Cloud Region Picker and Best practices for Compute Engine regions selection. Create your VPC networks and subnets To create your VPC networks and subnets, complete the following tasks: Create or identify the projects where you will create your VPC networks. For guidance, see Network segmentation and project structure. If you intend to use Shared VPC networks, provision your projects as Shared VPC host projects. Plan your IP address allocations for your networks. You can preallocate and reserve your ranges by creating internal ranges. Allocating address blocks that can be aggregated makes later configuration and operations more straightforward. Create a transit network VPC with global routing enabled. Create service VPC networks. If you will have workloads in multiple regions, enable global routing. Create workload VPC networks. If you will have workloads in multiple regions, enable global routing. Create connections between external networks and your transit VPC network This section assumes connectivity in two regions and assumes that the external locations are connected and can fail over to each other. It also assumes that there is a preference for clients in external location A to reach services in region A, and so on. Set up the connectivity between the external networks and your transit network. 
For an understanding of how to think about this, see External and hybrid connectivity. For guidance on choosing a connectivity product, see Choosing a Network Connectivity product. Configure BGP in each connected region as follows: Configure the router in the given external location as follows: Announce all subnets for that external location by using the same BGP MED on both interfaces, such as 100. If both interfaces announce the same MED, then Google Cloud can use ECMP to load balance traffic across both connections. Announce all subnets from the other external location by using a lower-priority MED than that of the first region, such as 200. Announce the same MED from both interfaces. Configure the external-facing Cloud Router in the transit VPC of the connected region as follows: Set your Cloud Router ASN to be 16550. Using custom route advertisements, announce all subnet ranges from all regions over both external-facing Cloud Router interfaces. Aggregate them if possible. Use the same MED on both interfaces, such as 100. Create connections between your transit VPC network and services-access VPC networks To provide transitive routing between external networks and the services-access VPC and between workload VPCs and the services-access VPC, the services-access VPC uses HA VPN for connectivity. Estimate how much traffic needs to travel between the transit and services-access VPCs in each region. Scale your expected number of tunnels accordingly. Configure HA VPN between the transit VPC and the services-access VPC in region A by using the instructions in Create HA VPN gateways to connect VPC networks. Create a dedicated HA VPN Cloud Router in the transit network. Leave the external-network-facing router for external network connections. Transit VPC Cloud Router configuration: To announce external-network and workload VPC subnets to the services-access VPC, use custom route advertisements on the Cloud Router in the transit VPC. Services-access VPC Cloud Router configuration: To announce services-access VPC subnets to the transit VPC, use custom route advertisements on the services-access VPC Cloud Router. If you use private services access to connect a managed services VPC to the services-access VPC, use custom routes to announce those subnets as well. If you connect a managed services VPC to the services-access VPC by using private services access, after the VPC Network Peering connection is established, update the services-access VPC side of the VPC Network Peering connection to export custom routes. Create connections between your transit VPC network and workload VPC networks Create VPC Network Peering connections between your transit VPC and each of your workload VPCs: Enable Export custom routes for the transit VPC side of each connection. Enable Import custom routes for the workload VPC side of each connection. In the default scenario, only the workload VPC subnet routes are exported to the Transit VPC. You don't need to export custom routes from the workload VPCs. Connect your workload VPC networks Connect the workload VPC networks together by using Network Connectivity Center VPC spokes. Make all spokes part of the same Network Connectivity Center spoke peer group. Use a core peer group to allow full mesh communication between the VPCs. The Network Connectivity Center connection announces specific routes among the workload VPC networks. Traffic between these networks follows those routes. 
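The workload-to-workload behavior in the traffic flows table relies on standard longest-prefix-match route selection: the specific subnet route announced through Network Connectivity Center is preferred over a broader custom route imported from the transit VPC. The following Python sketch illustrates that selection logic with hypothetical prefixes and next hops; it is not a Google Cloud API.

import ipaddress

# Hypothetical effective routes in a workload VPC: (prefix, next hop description).
route_table = [
    ("10.0.0.0/8", "transit-vpc (imported peering custom route)"),
    ("10.20.0.0/20", "workload-vpc-2 (Network Connectivity Center subnet route)"),
    ("192.168.0.0/16", "transit-vpc (imported peering custom route)"),
]

def select_route(destination, routes):
    dest = ipaddress.ip_address(destination)
    candidates = [(ipaddress.ip_network(prefix), next_hop)
                  for prefix, next_hop in routes
                  if dest in ipaddress.ip_network(prefix)]
    # The most specific (longest) matching prefix wins.
    return max(candidates, key=lambda candidate: candidate[0].prefixlen)

prefix, next_hop = select_route("10.20.3.7", route_table)
print(f"10.20.3.7 matches {prefix} via {next_hop}")  # the /20 wins over the /8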
Test connectivity to workloads If you have workloads already deployed in your VPC networks, test access to them now. If you connected the networks before deploying workloads, you can deploy them now and test. What's next Learn more about the Google Cloud products used in this design guide: VPC networks VPC Network Peering Cloud Interconnect HA VPN For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Deepak Michael | Networking Specialist Customer EngineerVictor Moreno | Product Manager, Cloud NetworkingOsvaldo Costa | Networking Specialist Customer EngineerOther contributors: Mark Schlagenhauf | Technical Writer, NetworkingAmmett Williams | Developer Relations EngineerGhaleb Al-habian | Network Specialist Send feedback \ No newline at end of file diff --git a/Cross-silo_and_cross-device_federated_learning_on_Google_Cloud.txt b/Cross-silo_and_cross-device_federated_learning_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..323fdd0dbf56f98c4226f7ad5522d46094e03676 --- /dev/null +++ b/Cross-silo_and_cross-device_federated_learning_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/cross-silo-cross-device-federated-learning-google-cloud +Date Scraped: 2025-02-23T11:46:08.117Z + +Content: +Home Docs Cloud Architecture Center Send feedback Cross-silo and cross-device federated learning on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-03 UTC This document describes two reference architectures that help you create a federated learning platform on Google Cloud using Google Kubernetes Engine (GKE). The reference architectures and associated resources that are described in this document support the following: Cross-silo federated learning Cross-device federated learning, building upon the cross-silo architecture The intended audiences for this document are cloud architects and AI and ML engineers who want to implement federated learning use cases on Google Cloud. It's also intended for decision-makers who are evaluating whether to implement federated learning on Google Cloud. Architecture The diagrams in this section show a cross-silo architecture and a cross-device architecture for federated learning. To learn about the different applications for these architectures, see Use cases. Cross-silo architecture The following diagram shows an architecture that supports cross-silo federated learning: The preceding diagram shows a simplistic example of a cross-silo architecture. In the diagram, all of the resources are in the same project in a Google Cloud organization. These resources include the local client model, the global client model, and their associated federated learning workloads. This reference architecture can be modified to support several configurations for data silos. Members of the consortium can host their data silos in the following ways: On Google Cloud, in the same Google Cloud organization, and same Google Cloud project. On Google Cloud, in the same Google Cloud organization, in different Google Cloud projects. On Google Cloud, in different Google Cloud organizations. In private, on-premises environments, or in other public clouds. For participating members to collaborate, they need to establish secure communication channels between their environments. 
For more information about the role of participating members in the federated learning effort, how they collaborate, and what they share with each other, see Use cases. The architecture includes the following components: A Virtual Private Cloud (VPC) network and subnet. A private GKE cluster that helps you to do the following: Isolate cluster nodes from the internet. Limit exposure of your cluster nodes and control plane to the internet by creating a private GKE cluster with authorized networks. Use shielded cluster nodes that use a hardened operating system image. Enable Dataplane V2 for optimized Kubernetes networking. Dedicated GKE node pools: You create a dedicated node pool to exclusively host tenant apps and resources. The nodes have taints to ensure that only tenant workloads are scheduled onto the tenant nodes. Other cluster resources are hosted in the main node pool. Data encryption (enabled by default): Data at rest. Data in transit. Cluster secrets at the application layer. In-use data encryption, by optionally enabling Confidential Google Kubernetes Engine Nodes. VPC Firewall rules which apply the following: Baseline rules that apply to all nodes in the cluster. Additional rules that only apply to nodes in the tenant node pool. These firewall rules limit ingress to and egress from tenant nodes. Cloud NAT to allow egress to the internet. Cloud DNS records to enable Private Google Access such that apps within the cluster can access Google APIs without going over the internet. Service accounts which are as follows: A dedicated service account for the nodes in the tenant node pool. A dedicated service account for tenant apps to use with Workload Identity Federation. Support for using Google Groups for Kubernetes role-based access control (RBAC). A Git repository to store configuration descriptors. An Artifact Registry repository to store container images. Config Sync and Policy Controller to deploy configuration and policies. Cloud Service Mesh gateways to selectively allow cluster ingress and egress traffic. Cloud Storage buckets to store global and local model weights. Access to other Google and Google Cloud APIs. For example, a training workload might need to access training data that's stored in Cloud Storage, BigQuery, or Cloud SQL. Cross-device architecture The following diagram shows an architecture that supports cross-device federated learning: The preceding cross-device architecture builds upon the cross-silo architecture with the addition of the following components: A Cloud Run service that simulates devices connecting to the server A Certificate Authority Service that creates private certificates for the server and clients to run A Vertex AI TensorBoard to visualize the result of the training A Cloud Storage bucket to store the consolidated model The private GKE cluster that uses confidential nodes as its primary pool to help secure the data in use The cross-device architecture uses components from the open source Federated Compute Platform (FCP) project. This project includes the following: Client code for communicating with a server and executing tasks on the devices A protocol for client-server communication Connection points with TensorFlow Federated to make it easier to define your federated computations The FCP components shown in the preceding diagram can be deployed as a set of microservices. These components do the following: Aggregator: This job reads device gradients and calculates aggregated result with Differential Privacy. 
Collector: This job runs periodically to query active tasks and encrypted gradients. This information determines when aggregation starts. Model uploader: This job listens to events and publishes results so that devices can download updated models. Task-assignment: This frontend service distributes training tasks to devices. Task-management: This job manages tasks. Task-scheduler: This job either runs periodically or is triggered by specific events. Products used The reference architectures for both federated learning use cases use the following Google Cloud components: Google Kubernetes Engine (GKE): GKE provides the foundational platform for federated learning. TensorFlow Federated (TFF): TFF provides an open-source framework for machine learning and other computations on decentralized data. GKE also provides the following capabilities to your federated learning platform: Hosting the federated learning coordinator: The federated learning coordinator is responsible for managing the federated learning process. This management includes tasks such as distributing the global model to participants, aggregating updates from participants, and updating the global model. GKE can be used to host the federated learning coordinator in a highly available and scalable way. Hosting federated learning participants: Federated learning participants are responsible for training the global model on their local data. GKE can be used to host federated learning participants in a secure and isolated way. This approach can help ensure that participants' data is kept local. Providing a secure and scalable communication channel: Federated learning participants need to be able to communicate with the federated learning coordinator in a secure and scalable way. GKE can be used to provide a secure and scalable communication channel between participants and the coordinator. Managing the lifecycle of federated learning deployments: GKE can be used to manage the lifecycle of federated learning deployments. This management includes tasks such as provisioning resources, deploying the federated learning platform, and monitoring the performance of the federated learning platform. In addition to these benefits, GKE also provides a number of features that can be useful for federated learning deployments, such as the following: Regional clusters: GKE lets you create regional clusters, helping you to improve the performance of federated learning deployments by reducing latency between participants and the coordinator. Network policies: GKE lets you create network policies, helping to improve the security of federated learning deployments by controlling the flow of traffic between participants and the coordinator. Load balancing: GKE provides a number of load balancing options, helping to improve the scalability of federated learning deployments by distributing traffic between participants and the coordinator. TFF provides the following features to facilitate the implementation of federated learning use cases: The ability to declaratively express federated computations, which are a set of processing steps that run on a server and a set of clients. These computations can be deployed to diverse runtime environments. Custom aggregators can be built using TFF open source. Support for a variety of federated learning algorithms, including the following algorithms: Federated averaging: An algorithm that averages the model parameters of participating clients.
It's particularly well-suited for use cases where the data is relatively homogeneous and the model is not too complex. Typical use cases are as follows: Personalized recommendations: A company can use federated averaging to train a model that recommends products to users based on their purchase history. Fraud detection: A consortium of banks can use federated averaging to train a model that detects fraudulent transactions. Medical diagnosis: A group of hospitals can use federated averaging to train a model that diagnoses cancer. Federated stochastic gradient descent (FedSGD): An algorithm that uses stochastic gradient descent to update the model parameters. It's well-suited for use cases where the data is heterogeneous and the model is complex. Typical use cases are as follows: Natural language processing: A company can use FedSGD to train a model that improves the accuracy of speech recognition. Image recognition: A company can use FedSGD to train a model that can identify objects in images. Predictive maintenance: A company can use FedSGD to train a model that predicts when a machine is likely to fail. Federated Adam: An algorithm that uses the Adam optimizer to update the model parameters. Typical use cases are as follows: Recommender systems: A company can use federated Adam to train a model that recommends products to users based on their purchase history. Ranking: A company can use federated Adam to train a model that ranks search results. Click-through rate prediction: A company can use federated Adam to train a model that predicts the likelihood that a user clicks an advertisement. Use cases This section describes use cases for which the cross-silo and cross-device architectures are appropriate choices for your federated learning platform. Federated learning is a machine learning setting where many clients collaboratively train a model. This process is led by a central coordinator, and the training data remains decentralized. In the federated learning paradigm, clients download a global model and improve the model by training locally on their data. Then, each client sends its calculated model updates back to the central server where the model updates are aggregated and a new iteration of the global model is generated. In these reference architectures, the model training workloads run on GKE. Federated learning embodies the privacy principle of data minimization, by restricting what data is collected at each stage of computation, limiting access to data, and processing then discarding data as early as possible. Additionally, the problem setting of federated learning is compatible with additional privacy preserving techniques, such as using differential privacy (DP) to improve the model anonymization to ensure the final model does not memorize individual user's data. Depending on the use case, training models with federated learning can have additional benefits: Compliance: In some cases, regulations might constrain how data can be used or shared. Federated learning might be used to comply with these regulations. Communication efficiency: In some cases, it's more efficient to train a model on distributed data than to centralize the data. For example, the datasets that the model needs to be trained on are too large to move centrally. Making data accessible: Federated learning allows organizations to keep the training data decentralized in per-user or per-organization data silos. 
Higher model accuracy: By training on real user data (while ensuring privacy) rather than synthetic data (sometimes referred to as proxy data), it often results in higher model accuracy. There are different kinds of federated learning, which are characterized by where the data originates and where the local computations occur. The architectures in this document focus on two types of federated learning: cross-silo and cross-device. Other types of federated learning are out of scope for this document. Federated learning is further categorized by how the datasets are partitioned, which can be as follows: Horizontal federated learning (HFL): Datasets with the same features (columns) but different samples (rows). For example, multiple hospitals might have patient records with the same medical parameters but different patient populations. Vertical federated learning (VFL): Datasets with the same samples (rows) but different features (columns). For example, a bank and an ecommerce company might have customer data with overlapping individuals but different financial and purchasing information. Federated Transfer Learning (FTL): Partial overlap in both samples and features among the datasets. For example, two hospitals might have patient records with some overlapping individuals and some shared medical parameters, but also unique features in each dataset. Cross-silo federated computation is where the participating members are organizations or companies. In practice, the number of members is usually small (for example, within one hundred members). Cross-silo computation is typically used in scenarios where the participating organizations have different datasets, but they want to train a shared model or analyze aggregated results without sharing their raw data with each other. For example, participating members can have their environments in different Google Cloud organizations, such as when they represent different legal entities, or in the same Google Cloud organization, such as when they represent different departments of the same legal entity. Participating members might not be able to consider each other's workloads as trusted entities. For example, a participating member might not have access to the source code of a training workload that they receive from a third party, such as the coordinator. Because they can't access this source code, the participating member can't ensure that the workload can be fully trusted. To help you prevent an untrusted workload from accessing your data or resources without authorization, we recommend that you do the following: Deploy untrusted workloads in an isolated environment. Grant untrusted workloads only the strictly necessary access rights and permissions to complete the training rounds assigned to the workload. To help you isolate potentially untrusted workloads, these reference architectures implement security controls, such as configuring isolated Kubernetes namespaces, where each namespace has a dedicated GKE node pool. Cross-namespace communication and cluster inbound and outbound traffic are forbidden by default, unless you explicitly override this setting. Example use cases for cross-silo federated learning are as follows: Fraud detection: Federated learning can be used to train a fraud detection model on data that is distributed across multiple organizations. For example, a consortium of banks could use federated learning to train a model that detects fraudulent transactions. 
Medical diagnosis: Federated learning can be used to train a medical diagnosis model on data that is distributed across multiple hospitals. For example, a group of hospitals could use federated learning to train a model that diagnoses cancer. Cross-device federated learning is a type of federated computation where the participating members are end-user devices such as mobile phones, vehicles, or IoT devices. The number of members can reach up to a scale of millions or even tens of millions. The process for cross-device federated learning is similar to that of cross-silo federated learning. However, it also requires you to adapt the reference architecture to accommodate some of the extra factors that you must consider when you are dealing with thousands to millions of devices. You must deploy administrative workloads to handle scenarios that are encountered in cross-device federated learning use cases. For example, you must coordinate the subset of clients that will take part in a round of training. The cross-device architecture provides this ability by letting you deploy the FCP services. These services have workloads that have connection points with TFF. TFF is used to write the code that manages this coordination. Example use cases for cross-device federated learning are as follows: Personalized recommendations: You can use cross-device federated learning to train a personalized recommendation model on data that's distributed across multiple devices. For example, a company could use federated learning to train a model that recommends products to users based on their purchase history. Natural language processing: Federated learning can be used to train a natural language processing model on data that is distributed across multiple devices. For example, a company could use federated learning to train a model that improves the accuracy of speech recognition. Predicting vehicle maintenance needs: Federated learning can be used to train a model that predicts when a vehicle is likely to need maintenance. This model could be trained on data that is collected from multiple vehicles. This approach lets the model learn from the experiences of all the vehicles, without compromising the privacy of any individual vehicle. The following table summarizes the features of the cross-silo and cross-device architectures, and shows you how to categorize the type of federated learning scenario that is applicable for your use case. Feature Cross-silo federated computations Cross-device federated computations Population size Usually small (for example, within one hundred devices) Scalable to thousands, millions, or hundreds of millions of devices Participating members Organizations or companies Mobile devices, edge devices, vehicles Most common data partitioning HFL, VFL, FTL HFL Data sensitivity Sensitive data that participants don't want to share with each other in raw format Data that's too sensitive to be shared with a central server Data availability Participants are almost always available Only a fraction of participants are available at any time Example use cases Fraud detection, medical diagnosis, financial forecasting Fitness tracking, voice recognition, image classification Design considerations This section provides guidance to help you use this reference architecture to develop one or more architectures that meet your specific requirements for security, reliability, operational efficiency, cost, and performance.
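Before moving into the design considerations, the following framework-agnostic sketch summarizes the round structure that both architecture types implement: sample a cohort of available clients, train locally, and aggregate weighted updates on the server. In the reference architectures this logic is provided by TensorFlow Federated and the FCP services rather than written by hand; the client interface, cohort size, and the optional clipping and noise parameters shown here are illustrative assumptions.

# Minimal, framework-agnostic sketch of one federated averaging round.
# All names are illustrative; client.train() stands in for local training
# that returns a weight delta and an example count.
import random
import numpy as np

def run_round(global_weights, clients, cohort_size=10,
              clip_norm=None, noise_stddev=0.0):
    # Sample a cohort; in cross-device settings only a fraction of the
    # population is available at any time.
    cohort = random.sample(clients, min(cohort_size, len(clients)))

    updates, sizes = [], []
    for client in cohort:
        # Each client trains locally and returns only a weight delta and
        # its example count; raw training data never leaves the client.
        delta, num_examples = client.train(global_weights)
        if clip_norm is not None:
            # Clip the update norm, a common building block of
            # differentially private aggregation.
            norm = np.linalg.norm(delta)
            delta = delta * min(1.0, clip_norm / (norm + 1e-12))
        updates.append(delta)
        sizes.append(num_examples)

    # Weighted average of the client updates (federated averaging).
    total = float(sum(sizes))
    aggregate = sum(w * (n / total) for w, n in zip(updates, sizes))
    if noise_stddev > 0.0:
        # Add Gaussian noise to the aggregate for additional privacy.
        aggregate = aggregate + np.random.normal(0.0, noise_stddev,
                                                 size=aggregate.shape)
    return global_weights + aggregate

The optional clipping and Gaussian noise illustrate where differential privacy can be layered onto the aggregation step, which aligns with the privacy-preserving mechanisms recommended in the security section of this document.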
Cross-silo architecture design considerations To implement a cross-silo federated learning architecture in Google Cloud, you must implement the following minimum prerequisites, which are explained in more detail in the following sections: Establish a federated learning consortium. Determine the collaboration model for the federated learning consortium to implement. Determine the responsibilities of the participant organizations. In addition to these prerequisites, there are other actions that the federation owner must take which are outside the scope of this document, such as the following: Manage the federated learning consortium. Design and implement a collaboration model. Prepare, manage, and operate the model training data and the model that the federation owner intends to train. Create, containerize, and orchestrate federated learning workflows. Deploy and manage federated learning workloads. Set up the communication channels for the participant organizations to securely transfer data. Establish a federated learning consortium A federated learning consortium is the group of organizations that participate in a cross-silo federated learning effort. Organizations in the consortium only share the parameters of the ML models, and you can encrypt these parameters to increase privacy. If the federated learning consortium allows the practice, organizations can also aggregate data that don't contain personally identifiable information (PII). Determine a collaboration model for the federated learning consortium The federated learning consortium can implement different collaboration models, such as the following: A centralized model that consists of a single coordinating organization, called the federation owner or orchestrator, and a set of participant organizations or data owners. A decentralized model that consists of organizations that coordinate as a group. A heterogeneous model that consists of a consortium of diverse participating organizations, all of which bring different resources to the consortium. This document assumes that the collaboration model is a centralized model. Determine the responsibilities of the participant organizations After choosing a collaboration model for the federated learning consortium, the federation owner must determine the responsibilities for the participant organizations. The federation owner must also do the following when they begin to build a federated learning consortium: Coordinate the federated learning effort. Design and implement the global ML model and the ML models to share with the participant organizations. Define the federated learning rounds—the approach for the iteration of the ML training process. Select the participant organizations that contribute to any given federated learning round. This selection is called a cohort. Design and implement a consortium membership verification procedure for the participant organizations. Update the global ML model and the ML models to share with the participant organizations. Provide the participant organizations with the tools to validate that the federated learning consortium meets their privacy, security, and regulatory requirements. Provide the participant organizations with secure and encrypted communication channels. Provide the participant organizations with all the necessary non-confidential, aggregated data that they need to complete each federated learning round. The participant organizations have the following responsibilities: Provide and maintain a secure, isolated environment (a silo). 
The silo is where participant organizations store their own data, and where ML model training is implemented. Participant organizations don't share their own data with other organizations. Train the models supplied by the federation owner using their own computing infrastructure and their own local data. Share model training results with the federation owner in the form of aggregated data, after removing any PII. The federation owner and the participant organizations can use Cloud Storage to share updated models and training results. The federation owner and the participant organizations refine the ML model training until the model meets their requirements. Implement federated learning on Google Cloud After establishing the federated learning consortium and determining how the federated learning consortium will collaborate, we recommend that participant organizations do the following: Provision and configure the necessary infrastructure for the federated learning consortium. Implement the collaboration model. Start the federated learning effort. Provision and configure the infrastructure for the federated learning consortium When provisioning and configuring the infrastructure for the federated learning consortium, it's the responsibility of the federation owner to create and distribute the workloads that train the federated ML models to the participant organizations. Because a third party (the federation owner) created and provided the workloads, the participant organizations must take precautions when deploying those workloads in their runtime environments. Participant organizations must configure their environments according to their individual security best practices, and apply controls that limit the scope and the permissions granted to each workload. In addition to following their individual security best practices, we recommend that the federation owner and the participant organizations consider threat vectors that are specific to federated learning. Implement the collaboration model After the federated learning consortium infrastructure is prepared, the federation owner designs and implements the mechanisms that let the participant organizations interact with each other. The approach follows the collaboration model that the federation owner chose for the federated learning consortium. Start the federated learning effort After implementing the collaboration model, the federation owner implements the global ML model to train, and the ML models to share with the participant organization. After those ML models are ready, the federation owner starts the first round of the federated learning effort. During each round of the federated learning effort, the federation owner does the following: Distributes the ML models to share with the participant organizations. Waits for the participant organizations to deliver the results of the training of the ML models that the federation owner shared. Collects and processes the training results that the participant organizations produced. Updates the global ML model when they receive appropriate training results from participating organizations. Updates the ML models to share with the other members of the consortium when applicable. Prepares the training data for the next round of federated learning. Starts the next round of federated learning. Security, privacy, and compliance This section describes factors that you should consider when you use this reference architecture to design and build a federated learning platform on Google Cloud. 
This guidance applies to both of the architectures that this document describes. The federated learning workloads that you deploy in your environments might expose you, your data, your federated learning models, and your infrastructure to threats that might impact your business. To help you increase the security of your federated learning environments, these reference architectures configure GKE security controls that focus on the infrastructure of your environments. These controls might not be enough to protect you from threats that are specific to your federated learning workloads and use cases. Given the specificity of each federated learning workload and use case, security controls aimed at securing your federated learning implementation are out of the scope of this document. For more information and examples about these threats, see Federated Learning security considerations. GKE security controls This section discusses the controls that you apply with these architectures to help you secure your GKE cluster. Enhanced security of GKE clusters These reference architectures help you create a GKE cluster which implements the following security settings: Limit exposure of your cluster nodes and control plane to the internet by creating a private GKE cluster with authorized networks. Use shielded nodes that use a hardened node image with the containerd runtime. Increased isolation of tenant workloads using GKE Sandbox. Encrypt data at rest by default. Encrypt data in transit by default. Encrypt cluster secrets at the application layer. Optionally encrypt data in-use by enabling Confidential Google Kubernetes Engine Nodes. For more information about GKE security settings, see Harden your cluster's security and About the security posture dashboard. VPC firewall rules Virtual Private Cloud (VPC) firewall rules govern which traffic is allowed to or from Compute Engine VMs. The rules let you filter traffic at VM granularity, depending on Layer 4 attributes. You create a GKE cluster with the default GKE cluster firewall rules. These firewall rules enable communication between the cluster nodes and GKE control plane, and between nodes and Pods in the cluster. You apply additional firewall rules to the nodes in the tenant node pool. These firewall rules restrict egress traffic from the tenant nodes. This approach can increase isolation of tenant nodes. By default, all egress traffic from the tenant nodes is denied. Any required egress must be explicitly configured. For example, you create firewall rules to allow egress from the tenant nodes to the GKE control plane, and to Google APIs using Private Google Access. The firewall rules are targeted to the tenant nodes by using the service account for the tenant node pool. Namespaces Namespaces let you provide a scope for related resources within a cluster—for example, Pods, Services, and replication controllers. By using namespaces, you can delegate administration responsibility for the related resources as a unit. Therefore, namespaces are integral to most security patterns. Namespaces are an important feature for control plane isolation. However, they don't provide node isolation, data plane isolation, or network isolation. A common approach is to create namespaces for individual applications. For example, you might create the namespace myapp-frontend for the UI component of an application. These reference architectures help you create a dedicated namespace to host the third-party apps. 
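For illustration, the following sketch shows what creating such a dedicated tenant namespace might look like with the Kubernetes Python client. In the reference architectures the namespace is created declaratively through Terraform and Config Sync; the names and labels used here are placeholders.

# Minimal sketch: creating a dedicated tenant namespace with the Kubernetes
# Python client. Names and labels are illustrative only.
from kubernetes import client, config

def create_tenant_namespace(tenant: str) -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a Pod
    namespace = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=f"fl-{tenant}",
            labels={"tenant": tenant, "purpose": "federated-learning"},
        )
    )
    client.CoreV1Api().create_namespace(body=namespace)

create_tenant_namespace("participant-1")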
The namespace and its resources are treated as a tenant within your cluster. You apply policies and controls to the namespace to limit the scope of resources in the namespace. Network policies Network policies enforce Layer 4 network traffic flows by using Pod-level firewall rules. Network policies are scoped to a namespace. In the reference architectures that this document describes, you apply network policies to the tenant namespace that hosts the third-party apps. By default, the network policy denies all traffic to and from pods in the namespace. Any required traffic must be explicitly added to an allowlist. For example, the network policies in these reference architectures explicitly allow traffic to required cluster services, such as the cluster internal DNS and the Cloud Service Mesh control plane. Config Sync Config Sync keeps your GKE clusters in sync with configs stored in a Git repository. The Git repository acts as the single source of truth for your cluster configuration and policies. Config Sync is declarative. It continuously checks cluster state and applies the state declared in the configuration file to enforce policies, which helps to prevent configuration drift. You install Config Sync into your GKE cluster. You configure Config Sync to sync cluster configurations and policies from a Cloud Source repository. The synced resources include the following: Cluster-level Cloud Service Mesh configuration Cluster-level security policies Tenant namespace-level configuration and policy including network policies, service accounts, RBAC rules, and Cloud Service Mesh configuration Policy Controller Google Kubernetes Engine (GKE) Enterprise edition Policy Controller is a dynamic admission controller for Kubernetes that enforces CustomResourceDefinition-based (CRD-based) policies that are executed by the Open Policy Agent (OPA). Admission controllers are Kubernetes plugins that intercept requests to the Kubernetes API server before an object is persisted, but after the request is authenticated and authorized. You can use admission controllers to limit how a cluster is used. You install Policy Controller into your GKE cluster. These reference architectures include example policies to help secure your cluster. You automatically apply the policies to your cluster using Config Sync. You apply the following policies: Selected policies to help enforce Pod security. For example, you apply policies that prevent pods from running privileged containers and that require a read-only root file system. Policies from the Policy Controller template library. For example, you apply a policy that disallows services with type NodePort. Cloud Service Mesh Cloud Service Mesh is a service mesh that helps you simplify the management of secure communications across services. These reference architectures configure Cloud Service Mesh so that it does the following: Automatically injects sidecar proxies. Enforces mTLS communication between services in the mesh. Limits outbound mesh traffic to only known hosts. Limits inbound traffic only from certain clients. Lets you configure network security policies based on service identity rather than based on the IP address of peers on the network. Limits authorized communication between services in the mesh. For example, apps in the tenant namespace are only allowed to communicate with apps in the same namespace, or with a set of known external hosts. Routes all inbound and outbound traffic through mesh gateways where you can apply further traffic controls. 
Supports secure communication between clusters. Node taints and affinities Node taints and node affinity are Kubernetes mechanisms that let you influence how pods are scheduled onto cluster nodes. Tainted nodes repel pods. Kubernetes won't schedule a Pod onto a tainted node unless the Pod has a toleration for the taint. You can use node taints to reserve nodes for use only by certain workloads or tenants. Taints and tolerations are often used in multi-tenant clusters. For more information, see the dedicated nodes with taints and tolerations documentation. Node affinity lets you constrain pods to nodes with particular labels. If a pod has a node affinity requirement, Kubernetes doesn't schedule the Pod onto a node unless the node has a label that matches the affinity requirement. You can use node affinity to ensure that pods are scheduled onto appropriate nodes. You can use node taints and node affinity together to ensure tenant workload pods are scheduled exclusively onto nodes reserved for the tenant. These reference architectures help you control the scheduling of the tenant apps in the following ways: Creating a GKE node pool dedicated to the tenant. Each node in the pool has a taint related to the tenant name. Automatically applying the appropriate toleration and node affinity to any Pod targeting the tenant namespace. You apply the toleration and affinity using PolicyController mutations. Least privilege It's a security best practice to adopt a principle of least privilege for your Google Cloud projects and resources like GKE clusters. By using this approach, the apps that run inside your cluster, and the developers and operators that use the cluster, have only the minimum set of permissions required. These reference architectures help you use least privilege service accounts in the following ways: Each GKE node pool receives its own service account. For example, the nodes in the tenant node pool use a service account dedicated to those nodes. The node service accounts are configured with the minimum required permissions. The cluster uses Workload Identity Federation for GKE to associate Kubernetes service accounts with Google service accounts. This way, the tenant apps can be granted limited access to any required Google APIs without downloading and storing a service account key. For example, you can grant the service account permissions to read data from a Cloud Storage bucket. These reference architectures help you restrict access to cluster resources in the following ways: You create a sample Kubernetes RBAC role with limited permissions to manage apps. You can grant this role to the users and groups who operate the apps in the tenant namespace. By applying this limited role of users and groups, those users only have permissions to modify app resources in the tenant namespace. They don't have permissions to modify cluster-level resources or sensitive security settings like Cloud Service Mesh policies. Binary Authorization Binary Authorization lets you enforce policies that you define about the container images that are being deployed in your GKE environment. Binary Authorization allows only container images that conform with your defined policies to be deployed. It disallows the deployment of any other container images. In this reference architecture, Binary Authorization is enabled with its default configuration. To inspect the Binary Authorization default configuration, see Export the policy YAML file. 
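To make the node taints and affinities described earlier in this section more concrete, the following sketch shows the scheduling constraints that the Policy Controller mutations effectively add to a tenant Pod: a toleration for the tenant node taint and a required node affinity for the tenant node pool. The taint key, node label, and tenant name are illustrative assumptions.

# Minimal sketch of the scheduling constraints applied to tenant Pods:
# a toleration for the tenant node taint plus a required node affinity
# for the tenant node pool. Keys, labels, and names are illustrative.
from kubernetes import client

TENANT = "participant-1"

pod_spec = client.V1PodSpec(
    containers=[client.V1Container(name="trainer", image="example/trainer:latest")],
    tolerations=[
        client.V1Toleration(
            key="tenant", operator="Equal", value=TENANT, effect="NoSchedule"
        )
    ],
    affinity=client.V1Affinity(
        node_affinity=client.V1NodeAffinity(
            required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
                node_selector_terms=[
                    client.V1NodeSelectorTerm(
                        match_expressions=[
                            client.V1NodeSelectorRequirement(
                                key="tenant", operator="In", values=[TENANT]
                            )
                        ]
                    )
                ]
            )
        )
    ),
)

Binary Authorization, discussed in the preceding paragraph, is configured separately from these manifests through its policy.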
For more information about how to configure policies, see the following specific guidance: Google Cloud CLI Google Cloud console The REST API The google_binary_authorization_policy Terraform resource Cross-organization attestation verification You can use Binary Authorization to verify attestations generated by a third-party signer. For example, in a cross-silo federated learning use case, you can verify attestations that another participant organization created. To verify the attestations that a third party created, you do the following: Receive the public keys that the third party used to create the attestations that you need to verify. Create the attestors to verify the attestations. Add the public keys that you received from the third party to the attestors you created. For more information about creating attestors, see the following specific guidance: Google Cloud CLI Google Cloud console the REST API the google_binary_authorization_attestor Terraform resource GKE Compliance dashboard The GKE Compliance dashboard provides actionable insights to strengthen your security posture, and helps you to automate compliance reporting for industry benchmarks and standards. You can enroll your GKE clusters to enable automated compliance reporting. For more information, see About the GKE Compliance dashboard. Federated learning security considerations Despite its strict data sharing model, federated learning isn't inherently secure against all targeted attacks, and you should take these risks into account when you deploy either of the architectures described in this document. There's also the risk of unintended information leaks about ML models or model training data. For example, an attacker might intentionally compromise the global ML model or rounds of the federated learning effort, or they might execute a timing attack (a type of side-channel attack) to gather information about the size of the training datasets. The most common threats against a federated learning implementation are as follows: Intentional or unintentional training data memorization. Your federated learning implementation or an attacker might intentionally or unintentionally store data in ways that might be difficult to work with. An attacker might be able to gather information about the global ML model or past rounds of the federated learning effort by reverse engineering the stored data. Extract information from updates to the global ML model. During the federated learning effort, an attacker might reverse engineer the updates to the global ML model that the federation owner collects from participant organizations and devices. The federation owner might compromise rounds. A compromised federation owner might control a rogue silo or device and start a round of the federated learning effort. At the end of the round, the compromised federation owner might be able to gather information about the updates that it collects from legitimate participant organizations and devices by comparing those updates to the one that the rogue silo produced. Participant organizations and devices might compromise the global ML model. During the federated learning effort, an attacker might attempt to maliciously affect the performance, the quality, or the integrity of the global ML model by producing rogue or inconsequential updates. To help mitigate the impact of the threats described in this section, we recommend the following best practices: Tune the model to reduce the memorization of training data to a minimum. 
Implement privacy-preserving mechanisms. Regularly audit the global ML model, the ML models that you intend to share, the training data, and the infrastructure that you implemented to achieve your federated learning goals. Implement a secure aggregation algorithm to process the training results that participant organizations produce. Securely generate and distribute data encryption keys using a public key infrastructure. Deploy infrastructure to a confidential computing platform. Federation owners must also take the following additional steps: Verify the identity of each participant organization and the integrity of each silo in case of cross-silo architectures, and the identity and integrity of each device in case of cross-device architectures. Limit the scope of the updates to the global ML model that participant organizations and devices can produce. Reliability This section describes design factors that you should consider when you use either of the reference architectures in this document to design and build a federated learning platform on Google Cloud. When designing your federated learning architecture on Google Cloud, we recommend that you follow the guidance in this section to improve the availability and scalability of the workload, and help make your architecture resilient to outages and disasters. GKE: GKE supports several different cluster types that you can tailor to the availability requirements of your workloads and to your budget. For example, you can create regional clusters that distribute the control plane and nodes across several zones within a region, or zonal clusters that have the control plane and nodes in a single zone. Both cross-silo and cross-device reference architectures rely on regional GKE clusters. For more information on the aspects to consider when creating GKE clusters, see cluster configuration choices. Depending on the cluster type and how the control plane and cluster nodes are distributed across regions and zones, GKE offers different disaster recovery capabilities to protect your workloads against zonal and regional outages. For more information on GKE's disaster recovery capabilities, see Architecting disaster recovery for cloud infrastructure outages: Google Kubernetes Engine. Google Cloud Load Balancing: GKE supports several ways of load balancing traffic to your workloads. The GKE implementations of the Kubernetes Gateway and Kubernetes Service APIs let you automatically provision and configure Cloud Load Balancing to securely and reliably expose the workloads running in your GKE clusters. In these reference architectures, all the ingress and egress traffic goes through Cloud Service Mesh gateways. These gateways let you tightly control how traffic flows inside and outside your GKE clusters. Reliability challenges for cross-device federated learning Cross-device federated learning has a number of reliability challenges that are not encountered in cross-silo scenarios. These include the following: Unreliable or intermittent device connectivity Limited device storage Limited compute and memory Unreliable connectivity can lead to issues such as the following: Stale updates and model divergence: When devices experience intermittent connectivity, their local model updates might become stale, representing outdated information compared to the current state of the global model. Aggregating stale updates can lead to model divergence, where the global model deviates from the optimal solution due to inconsistencies in the training process.
Imbalanced contributions and biased models: Intermittent communication can result in an uneven distribution of contributions from participating devices. Devices with poor connectivity might contribute fewer updates, leading to an imbalanced representation of the underlying data distribution. This imbalance can bias the global model towards the data from devices with more reliable connections. Increased communication overhead and energy consumption: Intermittent communication can lead to increased communication overhead, as devices might need to resend lost or corrupted updates. This issue can also increase the energy consumption on devices, especially for those with limited battery life, as they might need to maintain active connections for longer periods to ensure successful transmission of updates. To help mitigate some of the effects caused by intermittent communication, the reference architectures in this document can be used with the FCP. A system architecture that executes the FCP protocol can be designed to meet the following requirements: Handle long-running rounds. Enable speculative execution (rounds can start before the required number of clients has assembled, in anticipation of more clients checking in soon). Enable devices to choose which tasks they want to participate in. This approach can enable features like sampling without replacement, which is a sampling strategy where each sample unit of a population has only one chance to be selected. This approach helps to mitigate unbalanced contributions and biased models. Be extensible for anonymization techniques like differential privacy (DP) and trusted aggregation (TAG). To help mitigate limited device storage and compute capabilities, the following techniques can help: Understand the maximum capacity that's available to run the federated learning computation. Understand how much data can be held at any particular time. Design the client-side federated learning code to operate within the compute and RAM that are available on the clients. Understand the implications of running out of storage and implement a process to manage this. Cost optimization This section provides guidance to optimize the cost of creating and running the federated learning platform on Google Cloud that you establish by using this reference architecture. This guidance applies to both of the architectures that this document describes. Running workloads on GKE can help you make your environment more cost-optimized by provisioning and configuring your clusters according to your workloads' resource requirements. It also enables features that dynamically reconfigure your clusters and cluster nodes, such as automatically scaling cluster nodes and Pods, and right-sizing your clusters. For more information about optimizing the cost of your GKE environments, see Best practices for running cost-optimized Kubernetes applications on GKE. Operational efficiency This section describes the factors that you should consider to optimize efficiency when you use this reference architecture to create and run a federated learning platform on Google Cloud. This guidance applies to both of the architectures that this document describes. To increase the automation and monitoring of your federated learning architecture, we recommend that you adopt MLOps principles, which are DevOps principles in the context of machine learning systems.
Practicing MLOps means that you advocate for automation and monitoring at all steps of ML system construction, including integration, testing, releasing, deployment, and infrastructure management. For more information about MLOps, see MLOps: Continuous delivery and automation pipelines in machine learning. Performance optimization This section describes the factors that you should consider to optimize the performance of your workloads when you use this reference architecture to create and run a federated learning platform on Google Cloud. This guidance applies to both of the architectures that this document describes. GKE supports several features to automatically and manually right-size and scale your GKE environment to meet the demands of your workloads, and help you avoid over-provisioning resources. For example, you can use Recommender to generate insights and recommendations to optimize your GKE resource usage. When thinking about how to scale your GKE environment, we recommend that you design short, medium, and long-term plans for how you intend to scale your environments and workloads. For example, how do you intend to grow your GKE footprint in a few weeks, months, and years? Having a plan ready helps you take full advantage of the scalability features that GKE provides, optimize your GKE environments, and reduce costs. For more information about planning for cluster and workload scalability, see About GKE Scalability. To increase the performance of your ML workloads, you can adopt Cloud Tensor Processing Units (Cloud TPUs), Google-designed AI accelerators that are optimized for training and inference of large AI models. Deployment To deploy the cross-silo and cross-device reference architectures that this document describes, see the Federated Learning on Google Cloud GitHub repository. What's next Explore how you can implement your federated learning algorithms on the TensorFlow Federated platform. Learn about Advances and Open Problems in Federated Learning. Read about federated learning on the Google AI Blog. Watch how Google keeps privacy intact when using federated learning with de-identified, aggregated information to improve ML models. Read Towards Federated Learning at Scale. Explore how you can implement an MLOps pipeline to manage the lifecycle of the machine learning models. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Grace Mollison | Solutions LeadMarco Ferrari | Cloud Solutions ArchitectOther contributors: Chloé Kiddon | Staff Software Engineer and ManagerLaurent Grangeau | Solutions ArchitectLilian Felix | Cloud Engineer Send feedback \ No newline at end of file diff --git a/Customer_Engagement_Suite_with_Google_AI.txt b/Customer_Engagement_Suite_with_Google_AI.txt new file mode 100644 index 0000000000000000000000000000000000000000..b6a40ba8186521a27cfa6a6f7f5b91ccae198b14 --- /dev/null +++ b/Customer_Engagement_Suite_with_Google_AI.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/customer-engagement-ai +Date Scraped: 2025-02-23T11:58:40.474Z + +Content: +Learn how Customer Engagement Suite with Google AI can transform your business.
Watch the webinar.Customer Engagement Suite with Google AIDelight customers with an end-to-end application that combines our most advanced conversational AI, with multimodal and omnichannel functionality to deliver exceptional customer experiences at every touchpoint. Request consultationBenefitsDeliver exceptional self-service, agent assistance, and operational insightsImprove customer experience and reduce operating costs with hybrid virtual agentsImprove employee experience with AI that has been grounded for accuracyEmpower your teams and improve productivity with actionable insightsDeliver consistent omnichannel engagements across web, mobile, voice, email, and appsSupport multimodal information including text, voice, and imagesSupport an ecosystem of connectors, telephony systems, CRM, and WFM applicationsKey featuresSeamless end-to-end experiences with generative AI you can trustOur most advanced conversational AI products and Gemini models grounded in your organization's resources to ensure high accuracy.Conversational AgentsThe Conversational Agents product helps you create virtual agents with both deterministic (rules-based control) and generative AI functionality that provide dynamic, personalized self-service and take on a greater volume of inquiries, enabling customer care representatives to focus on more specialized calls.Agent AssistThe Agent Assist product provides customer care representatives with in-the-moment assistance, generated responses, and real-time coaching to help them resolve customer issues faster and with greater accuracy. New features include generative knowledge assist, coaching model, summarization, smart reply, and live translation.Conversational InsightsThe Conversational Insights product analyzes real-time data from across your customer operations to provide operations managers and quality assurance teams with KPIs, inquiry topic categories to prioritize, and areas of improvement. Identify call drivers and sentiment from conversations that help customer operations managers learn about customer engagements and improve call outcomes.Contact Center as a ServiceThe Contact Center as a Service offering delivers seamless and consistent customer interactions across all your channels with a turnkey, enterprise-grade, omnichannel contact center solution that is native to the cloud, and built on Google Cloud’s foundational security, privacy, and AI innovation.Customer Engagement ServicesLet our experts help you get the most out of your customer care operations. Our Customer Engagement Services will provide an evaluation of your current solution and a report on areas of improvement and suggested next steps. Learn how conversational AI can help you improve self-service, agent productivity, and gain richer operational insights.Ready to get started? 
Contact usCustomersExplore our customer engagement case studiesSee how organizations are transforming their customer operations with Google.Case studyVerizon uses Contact Center AI to delight customers5-min readCase studyMarks & Spencer automates calls to stores with Contact Center AI6-min readBlog posteasyJet uses Dialogflow to power Speak Now feature booking flights using voice5-min readCase studyGoogle and Automation Anywhere reimagine customer experience by giving virtual agents a boost5-min readCase studyIllinois is using CCAI to help more than 1 million citizens who lost their jobs5-min readSee all customersPartnersSeamless integration with our partner ecosystem Customer Engagement Suite with Google AI easily integrates with existing technologies and supports an ecosystem of third-party offerings including telephony, CRM, workforce management, and connectors.Explore our marketplaceDocumentationSee tutorials, guides, and resources for this solutionQuickstartCreate virtual agents with Conversational Agents Learn how to rapidly build and deploy virtual agents to handle the types of conversations required for your system.Learn moreGoogle Cloud BasicsAgent Assist basicsLearn the fundamentals of Agent Assist with simple explanations and links for more information.Learn moreGoogle Cloud BasicsCCAI Insights documentationSee CCAI tutorials, how-to guides, and concepts.Learn moreNot seeing what you’re looking for?View documentationWhat's newSee the latest customer engagement updatesVideoCustomer engagement for telecommunicationsWatch videoBlog postCommerzbank case study Read the blogVideoCustomer engagement for financial servicesWatch videoBlog postImproves sales development operationsRead the blogBlog postReimagine customer experience Read the blogBlog postPindrop partners with Google CloudRead the blogCloud AI products comply with the Google Cloud SLA policies. They may offer different latency or availability guarantees from other Google Cloud services.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesStart using Google CloudGo to consoleContinue browsingSee all productsExplore the MarketplaceFind solutionsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Customer_stories.txt b/Customer_stories.txt new file mode 100644 index 0000000000000000000000000000000000000000..05ceab4ceb0f8017fa59eafc44ef0d62b7f0cb1d --- /dev/null +++ b/Customer_stories.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/customers +Date Scraped: 2025-02-23T11:57:40.632Z + +Content: +Google Cloud customer storiesOrganizations worldwide are turning to Google Cloud for their digital transformations. Today, nearly 90% of generative AI unicorns and more than 60% of funded gen AI startups are Google Cloud customers. Read more about more than 300 gen AI use cases or watch customer success stories.Contact salesReal-world benefits for customersDriving growth and efficiencyMillennium BCP uses BigQuery to boost conversion rates 2.6x in digital sales efforts.Read the case studyBattling security threatsApex Fintech Solutions reduced its threat detection turnaround time by up to 75%. 
Watch the videoBetter inventory forecastingThe Super-Pharm pharmacy chain uses Vertex AI to improve inventory accuracy by up to 90%.Read the case studyFeatured storiesMillennium BCP boosts conversion loan rates by 2.6x using BigQueryLearn moreapree health improves healthcare service delivery with Google CloudLearn moreMercado Libre boosts real-time data delivery with Google CloudWatch videoCommerzbank uses gen AI to transform advisory workflowsRead blog postEstée Lauder brings more value for customers with AIWatch videoXometry revolutionizes custom manufacturing with Vertex AIWatch videoFilter byProduct CategoryAI and Machine LearningAPI ManagementComputeContainersData AnalyticsDatabasesDeveloper ToolsGoogle WorkspaceNetworkingOperationsSecurity and IdentityServerless ComputingStorageOtherIndustryAutomotiveEducationFinancial & Insurance ServicesGamingGovernment & Public SectorHealthcareLife SciencesManufacturingMedia & EntertainmentRetail & Consumer GoodsTechnologyTelecommunicationsTransportationOtherRegionAsia Pacific & JapanEurope, Middle East & AfricaLatin AmericaU.S. & CanadasearchsendAI and Machine LearningcloseEurope, Middle East & AfricacloseClear allRadisson Hotel Group automated ad creation, localization, and translation using Vertex AI and BigQuery, increasing ad-driven revenue by 22% and return on ad spend by 35%, while halving manual work and personalizing customer experiences.Oper Credits uses Google Cloud's Vertex AI and Kubernetes to automate mortgage processes, reduce errors, and improve borrower and bank experiences.ABANCA establishes a new subsidiary, a 100% digital banking brand called B100, in just eight months with support from SNGULAR and Google Cloud.Schibsted Marketplaces, a leading online classifieds group in the Nordic region, cut infrastructure costs by 70% and accelerated data insights and model development by adopting Bigtable and BigQuery. This led to faster, more relevant recommendations and a better user experience.Lightricks’ LTXV, an AI video generation model, leverages Google Cloud to enable users to create compelling visuals in seconds. Google Cloud provides the infrastructural and computational support that powers LTXV’s first-to-market, open source model.Founded over two decades ago, Us Media has shifted its focus to supporting NGOs in their digital transformation, leveraging robust and innovative technological solutions to maximize their social impact.Sound Particles, an audio technology innovator, created the industry’s first AI-powered binaural audio solution, disrupting the entertainment industry after being granted access to the Google for Startups Cloud Program.Barkyn, a pet care startup, transformed from a dog food company into a holistic pet health service with Google Cloud. 
Using Vertex AI and Gemini, they built an AI-powered Assistant that gives personalized insights to help pet owners proactively manage their dog's health.ADEO uses Gemini Flash to optimize and accelerate the large-scale creation of product listings, whose clarity and effectiveness have a big impact on conversion rates and are a key issue for the ecommerce players.For 13 years, Nomad Education has been on a mission to promote academic success for all and democratize access to education in French-speaking countries, using innovation to promote equal opportunities.Fetcherr uses BigQuery and Vertex AI to support an AI-centered, real-time price optimization platform for the airline industry.Fit Analytics delivers 250 million size recommendations a month with BigQuery, TensorFlow, and Google Kubernetes Engine.Show moreWe're ready to help you along your digital transformation journey.Contact salesRead more of our in depth customer storiesVisit our blogLearn what industry analysts are saying about Google CloudExplore analyst reports Network with other professionalsJoin the Google Cloud communityGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Cymbal_Bank_example.txt b/Cymbal_Bank_example.txt new file mode 100644 index 0000000000000000000000000000000000000000..ee7178316d132201fc3894881beb364bb7a4c590 --- /dev/null +++ b/Cymbal_Bank_example.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/cymbal-bank +Date Scraped: 2025-02-23T11:47:20.989Z + +Content: +Home Docs Cloud Architecture Center Send feedback Cymbal Bank application architecture Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC The blueprint includes a sample application that is named Cymbal Bank. Cymbal Bank demonstrates the best practices that are recommended for containerized applications. The Cymbal Bank application lets users create login accounts, sign in to their account, see their transaction history, make deposits, and transfer money to other users' accounts. Cymbal Bank services run as containers that connect to each other over REST APIs and gRPC APIs. The following diagram shows the Cymbal Bank application that is deployed on the blueprint developer platform. Each application is also a network service. Only the frontend application is exposed externally to the cluster through the GKE Gateway controller. All applications run as distributed services through the use of Cloud Service Mesh. For more information about the services that are included in the Cymbal Bank application, see the Cymbal Bank repository on GitHub. Cymbal Bank tenants To provide separation between tenants, each tenant in the developer platform has one team scope and at least one fleet namespace. Tenants never share a namespace. To deploy Cymbal Bank, each tenant only needs one namespace. In more complex scenarios, a tenant can have several namespaces. To illustrate how Cymbal Bank is deployed on the developer platform, this example assumes that there were three separate application development teams with different focus areas. The Terraform creates the following developer platform tenant for each of those teams: frontend tenant: A development team that focuses on the website and mobile application backends. accounts tenant: A development team that focuses on customer data. ledger tenant: A team that manages the ledger services. 
Cymbal Bank apps The Cymbal Bank application consists of six microservices: frontend, ledgerwriter, balancereader, transactionhistory, userservice, and contacts. Each microservice is mapped to an application within the tenant that owns it. The following table describes the mapping of the teams, team scope, fleet namespace, and microservices for Cymbal Bank. For the purpose of this mapping, this example assumes that Cymbal Bank is developed by three separate application operator teams. Teams manage a varying number of services. Each team is assigned a team scope. Team Team scope Fleet namespace Application - Microservice Kubernetes service account Frontend team frontend frontend frontend ksa-frontend Ledger team ledger ledger ledgerwriter ksa-ledgerwriter balancereader ksa-balancereader transactionhistory ksa-transactionhistory Accounts team accounts accounts userservice ksa-userservice contacts ksa-contacts Cymbal Bank database structure Cymbal Bank databases are deployed using AlloyDB for PostgreSQL. The databases are configured with a highly available primary instance in one region with redundant nodes in different zones, and cross-region replicas are used for disaster recovery. Cymbal Bank uses IAM database authentication to allow services access to the databases. The databases are encrypted using CMEK. Two PostgreSQL databases are used: ledger-db for the ledger, and accounts-db for user accounts. What's next Read about mapping BeyondProd security principles to the blueprint (next document in this series). Send feedback \ No newline at end of file diff --git a/Data_Analytics(1).txt b/Data_Analytics(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..03d540253c9987e71736fb8c3ba9d3d9f9af5508 --- /dev/null +++ b/Data_Analytics(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/data-analytics-and-ai +Date Scraped: 2025-02-23T12:03:31.097Z + +Content: +Google is a leader, positioned furthest in vision, in the 2024 Gartner Magic Quadrant for Cloud DBMS. Learn more.Unify data analytics and AI in a single platformBigQuery is the leading data analytics and AI platform for organizations seeking to unify their multimodal data, accelerate innovation with AI, and simplify data analytics across all users in the enterprise.Contact salesGo to console The BigQuery differenceExplore how BigQuery makes it easier for data teams to unify enterprise data and connect it with generative AI and new AI-powered experiences across our cloud portfolio.Manage all data and workloads in a single platformBigQuery simplifies data analytics by providing an integrated experience for data and AI. BigQuery has the simplicity and scale to manage multimodal data, with your choice of engines (SQL, Python, or Spark) at the best price and performance with data governance and security built-in. “Having a scalable place to land all our data helps us move quickly and gives us the confidence that as we get more data, we have the tools we need to go fast. To process, analyze, and act on our data to improve and grow the organization.”Rich Rubenstein VP of Data Analytics, General MillsRelated resourcesBigQuery offers up to 54% lower TCO over alternative cloud-based EDW solutionsGet the 2024 ESG reportHow Veo supercharged data strategy by migrating to BigQueryWatch the on-demand webinarConnect all of your enterprise data with AIBring gen AI to your data with scale and efficiency to leverage your business data with LLMs. 
BigQuery has first-party integration with Vertex AI to ground AI with your enterprise data.“BigQuery gave us a solid data foundation for AI. Our data was exactly where we needed it. We were able to connect millions of customer data points from hotel information, marketing content, and customer service chat and use our business data to ground LLMs.”Allie Surina Dixon, Director of Data, PricelineRelated resourcesLearn how your data holds the key to your AI successGet the guide5 steps for laying the right data foundation for AI successWatch the webinarEnsure your data and AI platform remains flexible, scalable, and adaptable to the ever-changing technological landscapeBigQuery's openness empowers data leaders with flexibility and scalability. Google Cloud's commitment to open source, open standards, and AI lets you choose best-fit solutions. Leverage OSS engines, open formats like Apache Iceberg, and your choice of AI models.“We are building our unified data and AI foundation using Google Cloud's lakehouse stack, where BigQuery and BigLake enable us to securely discover and manage all data types and formats in a single platform to build the best possible experiences for our patients, doctors, and nurses.”Mangesh Patil, Chief Analytics Officer, HCA HealthcareRelated resourcesConstellation Research: Google sets BigQuery apart with gen AI, open choices, and cross-cloud queryingBigQuery tables for Apache Iceberg: optimized storage for the open lakehouseRead the blogUnifiedManage all data and workloads in a single platformBigQuery simplifies data analytics by providing an integrated experience for data and AI. BigQuery has the simplicity and scale to manage multimodal data, with your choice of engines (SQL, Python, or Spark) at the best price and performance with data governance and security built-in. “Having a scalable place to land all our data helps us move quickly and gives us the confidence that as we get more data, we have the tools we need to go fast. To process, analyze, and act on our data to improve and grow the organization.”Rich Rubenstein VP of Data Analytics, General MillsRelated resourcesBigQuery offers up to 54% lower TCO over alternative cloud-based EDW solutionsGet the 2024 ESG reportHow Veo supercharged data strategy by migrating to BigQueryWatch the on-demand webinarIntelligentConnect all of your enterprise data with AIBring gen AI to your data with scale and efficiency to leverage your business data with LLMs. BigQuery has first-party integration with Vertex AI to ground AI with your enterprise data.“BigQuery gave us a solid data foundation for AI. Our data was exactly where we needed it. We were able to connect millions of customer data points from hotel information, marketing content, and customer service chat and use our business data to ground LLMs.”Allie Surina Dixon, Director of Data, PricelineRelated resourcesLearn how your data holds the key to your AI successGet the guide5 steps for laying the right data foundation for AI successWatch the webinarOpenEnsure your data and AI platform remains flexible, scalable, and adaptable to the ever-changing technological landscapeBigQuery's openness empowers data leaders with flexibility and scalability. Google Cloud's commitment to open source, open standards, and AI lets you choose best-fit solutions. 
Leverage OSS engines, open formats like Apache Iceberg, and your choice of AI models.“We are building our unified data and AI foundation using Google Cloud's lakehouse stack, where BigQuery and BigLake enable us to securely discover and manage all data types and formats in a single platform to build the best possible experiences for our patients, doctors, and nurses.”Mangesh Patil, Chief Analytics Officer, HCA HealthcareRelated resourcesConstellation Research: Google sets BigQuery apart with gen AI, open choices, and cross-cloud queryingBigQuery tables for Apache Iceberg: optimized storage for the open lakehouseRead the blogCustomer success storiesPriceline + Google cloudPriceline gets travelers to their happy places at happy prices with data and AIWith BigQuery at the heart of its data foundation, Priceline quickly innovates with gen AIVideo (2:50)Shopify is building the future of ecommerce on a modern data and AI platformShopify modernized its data foundation with BigQuery to deliver powerful AI innovationVideo (2:50)The Home Depot is renovating DIY experiences with data and AIThe Home Depot was able to quickly build and scale new AI-powered customer experience with BigQueryVideo (2:52)The Estée Lauder Companies use BigQuery and Vertex AI to deliver high touch customer experiencesLeveraging BigQuery has allowed them to deliver more for the business with less resourcesVideo (2:21)Mercado Libre is boxing up real-time data delivery with Google CloudMercado Libre can deliver real-time data to thousands of services across its platform with BigQueryVideo (2:20)McCormick is building the future of flavorMcCormick uses data and AI in BigQuery to meet the increasing global demand for flavorVideo (1:50)View MoreExecutive resourcesGoogle is a leader, positioned furthest in vision, in the 2024 Gartner Magic Quadrant for Cloud DBMSDownload the complimentary report10-min readBigQuery is named a Leader in The Forrester Wave™: Data Lakehouses, Q2 2024Download the report10-min readAn executive's guide to delivering value from data and AIGet the guide10-min readView MoreBigQuery offersJoin us at Google Cloud Next 2025Join interactive BigQuery demos, technical breakouts, and more at Next—live in Las Vegas April 9-11BigQuery migration incentives programSee if you qualify for incentives and credits while streamlining your migration to BigQueryData and AI Strategy AssessmentTake our assessment to understand your organization's AI readiness and get expert recommendationsView MoreTechnical resourcesBigQuery Integrations with Vertex AI YouTube playlistIn this playlist, learn how to use remote models to access Vertex AI resources and LLMs from BigQueryGoogle Gemini AI and data analytics in BigQueryExplore how Google Gemini in BigQuery transforms the experience through assistance and automationData analytics and AI technical blogsRead the latest technical blogs and innovations for BigQueryView MoreStart your data and AI journey todayBigQuery is a fully managed, unified data analytics and AI platform that helps you simplify analytics and is designed to be multi-engine, multimodal, and multicloud. 
Find out why tens of thousands of organizations choose BigQuery for their unified data and AI platform.Go to consoleContact salesGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Data_Analytics.txt b/Data_Analytics.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ab8375c87ad9b011461b436c22cd47b29b44922 --- /dev/null +++ b/Data_Analytics.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/data-analytics-and-ai +Date Scraped: 2025-02-23T11:59:00.234Z + +Content: +Google is a leader, positioned furthest in vision, in the 2024 Gartner Magic Quadrant for Cloud DBMS. Learn more.Unify data analytics and AI in a single platformBigQuery is the leading data analytics and AI platform for organizations seeking to unify their multimodal data, accelerate innovation with AI, and simplify data analytics across all users in the enterprise.Contact salesGo to console The BigQuery differenceExplore how BigQuery makes it easier for data teams to unify enterprise data and connect it with generative AI and new AI-powered experiences across our cloud portfolio.Manage all data and workloads in a single platformBigQuery simplifies data analytics by providing an integrated experience for data and AI. BigQuery has the simplicity and scale to manage multimodal data, with your choice of engines (SQL, Python, or Spark) at the best price and performance with data governance and security built-in. “Having a scalable place to land all our data helps us move quickly and gives us the confidence that as we get more data, we have the tools we need to go fast. To process, analyze, and act on our data to improve and grow the organization.”Rich Rubenstein VP of Data Analytics, General MillsRelated resourcesBigQuery offers up to 54% lower TCO over alternative cloud-based EDW solutionsGet the 2024 ESG reportHow Veo supercharged data strategy by migrating to BigQueryWatch the on-demand webinarConnect all of your enterprise data with AIBring gen AI to your data with scale and efficiency to leverage your business data with LLMs. BigQuery has first-party integration with Vertex AI to ground AI with your enterprise data.“BigQuery gave us a solid data foundation for AI. Our data was exactly where we needed it. We were able to connect millions of customer data points from hotel information, marketing content, and customer service chat and use our business data to ground LLMs.”Allie Surina Dixon, Director of Data, PricelineRelated resourcesLearn how your data holds the key to your AI successGet the guide5 steps for laying the right data foundation for AI successWatch the webinarEnsure your data and AI platform remains flexible, scalable, and adaptable to the ever-changing technological landscapeBigQuery's openness empowers data leaders with flexibility and scalability. Google Cloud's commitment to open source, open standards, and AI lets you choose best-fit solutions. 
Leverage OSS engines, open formats like Apache Iceberg, and your choice of AI models.“We are building our unified data and AI foundation using Google Cloud's lakehouse stack, where BigQuery and BigLake enable us to securely discover and manage all data types and formats in a single platform to build the best possible experiences for our patients, doctors, and nurses.”Mangesh Patil, Chief Analytics Officer, HCA HealthcareRelated resourcesConstellation Research: Google sets BigQuery apart with gen AI, open choices, and cross-cloud queryingBigQuery tables for Apache Iceberg: optimized storage for the open lakehouseRead the blogUnifiedManage all data and workloads in a single platformBigQuery simplifies data analytics by providing an integrated experience for data and AI. BigQuery has the simplicity and scale to manage multimodal data, with your choice of engines (SQL, Python, or Spark) at the best price and performance with data governance and security built-in. “Having a scalable place to land all our data helps us move quickly and gives us the confidence that as we get more data, we have the tools we need to go fast. To process, analyze, and act on our data to improve and grow the organization.”Rich Rubenstein VP of Data Analytics, General MillsRelated resourcesBigQuery offers up to 54% lower TCO over alternative cloud-based EDW solutionsGet the 2024 ESG reportHow Veo supercharged data strategy by migrating to BigQueryWatch the on-demand webinarIntelligentConnect all of your enterprise data with AIBring gen AI to your data with scale and efficiency to leverage your business data with LLMs. BigQuery has first-party integration with Vertex AI to ground AI with your enterprise data.“BigQuery gave us a solid data foundation for AI. Our data was exactly where we needed it. We were able to connect millions of customer data points from hotel information, marketing content, and customer service chat and use our business data to ground LLMs.”Allie Surina Dixon, Director of Data, PricelineRelated resourcesLearn how your data holds the key to your AI successGet the guide5 steps for laying the right data foundation for AI successWatch the webinarOpenEnsure your data and AI platform remains flexible, scalable, and adaptable to the ever-changing technological landscapeBigQuery's openness empowers data leaders with flexibility and scalability. Google Cloud's commitment to open source, open standards, and AI lets you choose best-fit solutions. 
Leverage OSS engines, open formats like Apache Iceberg, and your choice of AI models.“We are building our unified data and AI foundation using Google Cloud's lakehouse stack, where BigQuery and BigLake enable us to securely discover and manage all data types and formats in a single platform to build the best possible experiences for our patients, doctors, and nurses.”Mangesh Patil, Chief Analytics Officer, HCA HealthcareRelated resourcesConstellation Research: Google sets BigQuery apart with gen AI, open choices, and cross-cloud queryingBigQuery tables for Apache Iceberg: optimized storage for the open lakehouseRead the blogCustomer success storiesPriceline + Google cloudPriceline gets travelers to their happy places at happy prices with data and AIWith BigQuery at the heart of its data foundation, Priceline quickly innovates with gen AIVideo (2:50)Shopify is building the future of ecommerce on a modern data and AI platformShopify modernized its data foundation with BigQuery to deliver powerful AI innovationVideo (2:50)The Home Depot is renovating DIY experiences with data and AIThe Home Depot was able to quickly build and scale new AI-powered customer experience with BigQueryVideo (2:52)The Estée Lauder Companies use BigQuery and Vertex AI to deliver high touch customer experiencesLeveraging BigQuery has allowed them to deliver more for the business with less resourcesVideo (2:21)Mercado Libre is boxing up real-time data delivery with Google CloudMercado Libre can deliver real-time data to thousands of services across its platform with BigQueryVideo (2:20)McCormick is building the future of flavorMcCormick uses data and AI in BigQuery to meet the increasing global demand for flavorVideo (1:50)View MoreExecutive resourcesGoogle is a leader, positioned furthest in vision, in the 2024 Gartner Magic Quadrant for Cloud DBMSDownload the complimentary report10-min readBigQuery is named a Leader in The Forrester Wave™: Data Lakehouses, Q2 2024Download the report10-min readAn executive's guide to delivering value from data and AIGet the guide10-min readView MoreBigQuery offersJoin us at Google Cloud Next 2025Join interactive BigQuery demos, technical breakouts, and more at Next—live in Las Vegas April 9-11BigQuery migration incentives programSee if you qualify for incentives and credits while streamlining your migration to BigQueryData and AI Strategy AssessmentTake our assessment to understand your organization's AI readiness and get expert recommendationsView MoreTechnical resourcesBigQuery Integrations with Vertex AI YouTube playlistIn this playlist, learn how to use remote models to access Vertex AI resources and LLMs from BigQueryGoogle Gemini AI and data analytics in BigQueryExplore how Google Gemini in BigQuery transforms the experience through assistance and automationData analytics and AI technical blogsRead the latest technical blogs and innovations for BigQueryView MoreStart your data and AI journey todayBigQuery is a fully managed, unified data analytics and AI platform that helps you simplify analytics and is designed to be multi-engine, multimodal, and multicloud. 
Find out why tens of thousands of organizations choose BigQuery for their unified data and AI platform.Go to consoleContact salesGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Data_Center_Migration.txt b/Data_Center_Migration.txt new file mode 100644 index 0000000000000000000000000000000000000000..d93af5acea5674c12db045a3c88fc31abfefee21 --- /dev/null +++ b/Data_Center_Migration.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/data-center-migration +Date Scraped: 2025-02-23T11:59:42.590Z + +Content: +Data center migration into cloudWhether you need to exit or reduce on-premises data centers, migrate workloads as is, modernize apps, or leave another cloud, we’ll work together to craft the right cloud migration solutions for you and your business.Try Migration CenterFree migration cost assessmentTrade toil for innovation: IT leaders’ guide to leveling-up your infrastructureRegister to read the ebookBenefitsUnlock Google Cloud’s performance, scale, and security for your businessMigrate to a more secure cloudWith full visibility into our system, Google Cloud helps minimize threats and secure your data across our data centers, hardware, and network cables.Grow confidently on our purpose-built infrastructureGoogle Cloud runs the same products and services that support over one billion users without compromising performance, agility, or cost.Modernize at your own paceBuild, manage, and run modern hybrid applications on existing on-premises hardware or in multiple public clouds with Anthos.As part of our holistic cloud migration and modernization program, Google Cloud RaMP (Rapid Migration and Modernization Program), you can easily gain insight into your current landscape and estimate the total cost of migration with complimentary assessments from Google Cloud. Get a quick assessment on where you are in your cloud journey or sign up for a free, more comprehensive assessment of your IT landscape. Ready to dive in? Migration Center is Google Cloud's centralized, end-to-end hub for migration and modernization. Get started today or learn more with this 3-minute video. Key featuresCraft the right data center migration plan for your businessFrom assessment to optimization, RaMP helps you navigate your data center migration to cloud more quickly and simply than you thought. Rehost: lift and shiftMake as few changes during the migration as possible, redeploying applications to the cloud without making substantial changes to how they are configured. It is the most straightforward cloud migration strategy where administrators just “lift” their applications, workloads, virtual machines, and server operating systems and “shift” them to the new operating model in the public cloud. Replatform: lift and optimizeThe next step from a rehosting strategy, lifting the existing workloads and then optimizing them for the new cloud environment. For instance, a service may replatform a workload to the cloud to be able to take advantage of cloud-based microservice architecture, or containers in Kubernetes Engine. These applications will now have higher performance and more efficiency running in the cloud.Refactor: move and improveTaking applications and re-engineering them to be cloud-native, often changing the code of an application without altering its front-end behavior or experience. 
For example, a refactored application may be broken up from long strings of code into more modular pieces that can better take advantage of cloud capabilities, thus improving the performance of the code. Re-architect: continue to modernizeSimilar to refactoring but instead of restructuring how the application’s code works, it changes how that code functions in order to optimize the application and take advantage of cloud-native properties like scalability, security, and agility. One example of re-architecting an application is to take one large, monolithic application and turn it into several independent microservices. Rebuild: fully cloud optimizedTake an application and rewrite it entirely for the cloud. It is often easier to build an application from scratch than it is to refactor its old code to work in a cloud environment. A rebuilding strategy allows an organization to plan from the ground up, choosing which cloud-native tools and capabilities to utilize from the beginning. Looking for more information? Visit our migration strategies or migration architecture center pages, or check out these 30 guides to find additional reference architectures, guidance, and best practices for building or migrating your workloads on Google Cloud. Need some personalized guidance? Contact usYour migration into cloud starts nowBuild your cloud migration roadmap with this free guide and checklistGet the guide and checklistVideoHow to run an IT discovery and assessment with Migration CenterWatch videoTake a virtual tour of Google Cloud's Migration Center with this hands-on demoExplore the demoCustomersCustomers are growing their businesses through successful cloud migrationsCase studyHow Sabre migrated their apps and data centers to Google Cloud with speed and ease32-min watchBlog postThe Home Depot Canada migrates SAP to Google Cloud, improves efficiency and meets customer demand5-min readCase studyPayPal leverages Google Cloud to flawlessly manage surges in financial transactions4-min readCase studyViant partners with Slalom to migrate a data center with 600 VMs and 200+ TB of data to Google Cloud8-min readCase studyInvoca migrates 300+ server racks to Google Cloud in less than 4 months with help from Zencore7-min readCase studyMajor League Baseball migrates to Google Cloud to drive fan engagement, increase operational efficiency10-min readSee all customersPartnersFind the right experts to drive your success with cloud migration servicesExpand allCloud migration services at Google CloudFull-service migration partnersSee all partnersRelated servicesCloud migration products and solutionsMigration CenterReduce complexity, time, and cost with Migration Center's centralized, integrated migration and modernization experience.Application migrationLeverage Google Cloud’s proven infrastructure for all your apps and workloads to increase your performance, scalability, and security.Database migrationMoving your databases to the cloud can help you run and manage your applications at global scale while optimizing both efficiency and flexibility.Data warehouse modernizationAddress your current analytics demands while scaling as your data needs grow.VMware EngineBenefit from increased scale and agility while continuing to leverage the value of your existing VMware investments.Migrate to Virtual MachinesMigrate one application from on-premises or thousands of applications across multiple data centers and clouds safely and at scale.Migrate to ContainersEasily migrate into cloud and modernize your existing workloads to 
containers on a secure and managed Kubernetes service.Storage Transfer ServiceSecurely migrate petabyte-scale unstructured data from on-premises and other cloud providers to Google Cloud.Mainframe modernizationFoster innovation, increase agility, and reduce operational costs by modernizing your mainframes on Google Cloud. Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Data_Cloud.txt b/Data_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..3b62b728c77b266d47b33b7e96cfdeda35547235 --- /dev/null +++ b/Data_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/data-cloud +Date Scraped: 2025-02-23T11:57:17.524Z + +Content: +Google is a leader, positioned furthest in vision, in the 2024 Gartner Magic Quadrant for Cloud DBMS. Learn more.Data CloudTens of thousands of customers use Google's Data Cloud to unify data and connect it with groundbreaking AI to unleash transformative experiences. Google's Data Cloud delivers these capabilities with enterprise-grade efficiency, scalability, and security. This powerful combination helps organizations turn data into value and demonstrate ROI quickly.2024 Data and AI Trends ReportData and AI Strategy Assessment1:38Google Cloud customers have been able to quickly integrate generative AI into their products and servicesGet all of your data ready for AIWith the surge of new generative AI capabilities, companies and their customers can now interact with applications and data in new ways. With Google’s Data Cloud, you can use built-in generative AI capabilities to activate your enterprise data across BigQuery, AlloyDB for PostgreSQL, Spanner, Cloud SQL, and more, and use built-in features to apply AI/ML directly to your data. You can also easily integrate with Vertex AI to further enable differentiated AI experiences. Looker helps you build custom data experiences and data apps with powerful embedded capabilities and it offers an AI assistant that helps you chat with your data with conversational analytics.Google is a Leader and positioned furthest in vision among all vendors evaluated in the Gartner® 2023 Magic Quadrant™Manage all your data with a unified data platformGoogle's Data Cloud provides a unified data analytics foundation built on BigQuery that brings your data together into one place, integrating structured and unstructured data with AI to deliver insights quickly across your data estate. You can also unify your operational and analytical data with BigQuery federation for AlloyDB for PostgreSQL, Spanner, and Cloud SQL without moving or copying your data. Google's unified data platform allows you to manage your entire data life cycle and help make security and governance easier for different types of users within your organization.Video: Walmart uses Gemini to enrich its data and improve millions of product listings across its site 7:04Learn about the latest innovations and what's next for data analytics in the AI era9:48Dive into the world of Google Cloud databases and discover how to transform the way you build and deploy AI-powered applicationsRun all your data where it isGoogle Data Cloud is open to help you build modern, data-driven applications wherever your workloads are. 
With support for open source and open standards, you can build and modernize your applications with AlloyDB, a PostgreSQL compatible database for your most demanding enterprise workloads. We also offer AlloyDB Omni which runs across clouds, on-premises, and even on developer laptops and AlloyDB AI which is an integrated set of capabilities for building generative AI apps. With BigQuery Omni, you can also utilize data from multiple cloud and on-premises environments and access it in popular SaaS apps—without generally incurring the costs, security risks, and governance concerns associated with data migration.Video: How Bayer Crop Science unlocked harvest data efficiency with AlloyDBActivate AI with your business dataGoogle’s Data Cloud offers an AI-ready data platform with a seamless integration to Vertex AI for both your operational and analytical data. In BigQuery, you can use multimodal gen AI to build data pipelines that combine gen AI models with analytics, combine structured and unstructured data, and drive real time ML inference. We also support vector search across BigQuery, AlloyDB for PostgreSQL, Spanner, Cloud SQL and more. grounding generative AI in enterprise truth. With Gemini in BigQuery, we make the development of your AI scenarios simple by providing always-on intelligence and automation and accelerate the journey from data to insights. Gemini in Databases supercharges database development and management across every aspect of the database journey from development, performance optimization, fleet management to governance and migrations.Gemini for Google Cloud is your AI-powered assistant8:40Learn about Google Cloud's new generative AI capabilities It’s a whole new ballgame with Google Cloud AIBuilt on a culture of openness, data-driven decisions, and cost predictability, Google Cloud offers MLB innovation to drive growth with fans, now and in the future.Explore the interactive site to see how MLB uses Google AI and Google Data Cloud to power its experiences.BigQuery gave us a solid data foundation for AI. Our data was exactly where we needed it. We were able to connect millions of customer data points from hotel information, marketing content, and customer service chat and use our business data to ground LLMs.Allie Surina Dixon, Director of Data, PricelineWatch Priceline's story hereTens of thousands of organizations have adopted leading Google Cloud products to achieve lower TCO and improved productivityBigQueryBigQuery is a serverless, highly scalable, and cost-effective multicloud data warehouse designed for business agility.Learn more about BigQuery and start a free trial54%Lower TCO than cloud data warehouse alternatives99.99%Uptime SLA ensures reliable access to data and insightsAlloyDB for PostgreSQLA fully managed PostgreSQL—compatible database service designed for your most demanding enterprise workloads.See key features and benefits of AlloyDB4xFaster than standard PostgreSQL for transactional workloads100xFaster analytical queries than standard PostgreSQLLookerAn enterprise platform for business intelligence, data applications, and embedded analytics.Learn about cost savings and business benefits enabled by Looker99%Improvement on data teams' productivity with Looker24%Boosted employee productivity with self-service analyticsVertex AIBuild, deploy, and scale ML models faster, with pre-trained and custom tooling within a unified AI platform. 
Explore common use cases for Vertex AI80%Fewer lines of code needed to build custom models80%Fewer trials than traditional methods to find optimal parameters**For complex functions, with Vertex AI Vizier. We are continuing to evolve our architecture as we grow, and looking for ways to improve scalability, performance, and reliability. We believe that by adopting architecture that leverages the strengths of both AlloyDB and Spanner, we can build a system that can meet the needs of our users and handle our growth aspirations.James Groeneveld, Research Engineer, Character.AIWatch Character.ai's storyDiscover solutions that make data cloud a realityRun faster and smoother, while finding new ways to delight customers with AI and MLLearn more about AI and MLEmpower everyone to get insights with a fully managed, unified data analytics and BI platformLearn more about data analytics and AIUnlock generative AI potential and modernize your business with our database solutionsLearn more about databasesTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerTake the data cloud assessmentStart assessmentContinue browsingSee all productsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Data_Lake_Modernization.txt b/Data_Lake_Modernization.txt new file mode 100644 index 0000000000000000000000000000000000000000..6ab0aca2ba7520eb176d499c2ad5ab810eef5cc4 --- /dev/null +++ b/Data_Lake_Modernization.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/data-lake +Date Scraped: 2025-02-23T11:59:02.941Z + +Content: +Data lake modernizationGoogle Cloud’s data lake powers any analysis on any type of data. This empowers your teams to securely and cost-effectively ingest, store, and analyze large volumes of diverse, full-fidelity data.Contact usBenefitsMake the most of your data lakeRe-host your data lakeIf you don’t want to rebuild your on-premises data lake in the cloud, lift and shift your data to Google Cloud to unlock cost savings and scale.Burst a data lake workloadTake a resource-intensive data or analytic processing workload and burst it to the cloud to autoscale compute without provisioning new hardware.Build a cloud-native data lakeData lake turned into a swamp? A cloud-native data lake on Google Cloud can accelerate your data engineers’ and scientists’ analytics development.Key featuresMigrate Apache Spark and Hadoop based data lakes to Google CloudFully managed servicesProvision, autoscale, and govern purpose-built data and analytic open source software clusters such as Apache Spark for easier management in as little as 90 seconds.Integrated data science and analyticsBuild, train, and deploy analytics faster on a Google data lake with Apache Spark, BigQuery, AI Platform Notebooks, GPUs, and other analytics accelerators.Cost managementGoogle Cloud’s auto-scaling services let you decouple storage from compute to increase query speeds and manage cost at a per-gigabyte level. Use custom machines, idle cluster deletion, and more to see up to 54% lower costs than an on-premises Hadoop deployment.Ready to get started? Contact usCustomersSee how these customers migrated petabytes of data and reduced costsVideoSee how Twitter migrated 300 PB of Hadoop Data to Google Cloud.49:57VideoPandora migrates 7 PB of data from its on-premises data lake to Google Cloud.50:52Case studyMETRO’s data lake project reduces infrastructure costs by more than 30%. 
6-min readVideoHow Vodafone Group is axing 600+ Hadoop servers and moving to Google Cloud.47:17See all customersPartnersRecommended data lake partnersGoogle is just one piece of the data lake puzzle. Our key partners can help you unlock new capabilities that seamlessly integrate with the rest of your IT investments.See all partnersRelated servicesSee how these Google Cloud products can help modernize your data lakeDataprocMigrating Apache Hadoop or Spark workloads? Spin up a Dataproc cluster in seconds and copy/paste your existing Spark code.BigQueryRunning SQL queries on your data lake? Move to BigQuery to unlock SQL scale and speed.Cloud StorageReduce the cost of data storage and only pay for what you use with Cloud Storage.What's newExplore the latest updatesSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postCombining the power of Apache Spark and AI Platform Notebooks with Dataproc HubRead the blogBlog postBurst data lake processing to Dataproc using on-premises Hadoop dataRead the blogBlog postOptimize Apache Hadoop and Spark costs with flexible VM typesRead the blogTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Data_Migration.txt b/Data_Migration.txt new file mode 100644 index 0000000000000000000000000000000000000000..158026b3b850e4270e120c08f05ee37c37067b37 --- /dev/null +++ b/Data_Migration.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/data-migration +Date Scraped: 2025-02-23T11:59:01.476Z + +Content: +Google is positioned furthest in vision among all leaders in the 2023 Gartner® Magic Quadrant™ for Cloud DBMS.Cloud data migrationActivate your data with AI and build a strong AI-ready data foundation by moving to BigQuery, Google Cloud’s unified data platform. Streamline your migration path to BigQuery and accelerate your time to insights.Go to consoleMigration incentivesMigrations made easyWhy migrate your data to BigQuery?BigQuery Migration Service toolsMigration guidesLearn how to simplify your migration to BigQuery40:27OverviewWhy migrate your data to BigQuery?BigQuery is a unified AI-ready data platform that helps you maximize value from your data and is designed to be multi-engine, multi-format, and multicloud. It supports diverse data analytics and lakehouse workloads with a user-friendly interface that enables data analysts, data engineers, and data scientists to work across the same set of governed data. It's cost-effective compared to other solutions, making it a smart choice for businesses looking to leverage enterprise data for AI.The economic advantages of Google Cloud BigQuery versus alternative data and AI platformsGet the reportBigQuery Migration Service toolsBigQuery Migration Service streamlines your migration to BigQuery, a unified data and AI platform. Start with the migration assessment to understand your existing data warehouse and plan your move. Next, batch and interactive SQL translators prepare your queries and scripts for BigQuery, supporting various SQL dialects. The BigQuery Data Transfer Service automates and manages data migration, ensuring a smooth transition. 
Finally, the Data Validation Tool verifies the success of your migration.How Veo Supercharged Data Strategy by Migrating to BigQuery Webinar36:11Data platform migration incentives Migrating data platforms is not a trivial undertaking. We are introducing a new migration incentives program to help with your migration. This program includes Google Cloud credits to offset your migration costs, implementation services through Google or from eligible partners and cloud egress credits to cover costs associated with moving data from AWS or Azure.Data platform migration incentives programSign up todayView moreHow It WorksBigQuery Migration Service is a comprehensive, streamlined solution designed to migrate your data warehouse to BigQuery. This service provides free tools that guide you through every step of the migration process, from initial assessment and planning to data transfer and validation. These tools can help your organization plan for a faster, smoother, and less risky transition, accelerating the time it takes to realize the benefits of BigQuery, Google Cloud's unified, AI-ready data platform. Learn moreCommon UsesTerradata to BigQuery migrationMigrating from Teradata to BigQueryMigrating to BigQuery provides a modern, scalable, and cost-effective solution for advanced analytics, machine learning, and real-time insights. BigQuery eliminates the need for infrastructure management and scales automatically to meet your demands, allowing your team to focus on data analysis rather than system maintenance. Additionally, BigQuery's pay-as-you-go pricing model can lead to cost savings.Read the complete guide for migrating from Teradata to BigQueryTeradata to BigQuery migration tutorialMigrate schema and data from TeradataTeradata SQL translation guideHow-tosMigrating from Teradata to BigQueryMigrating to BigQuery provides a modern, scalable, and cost-effective solution for advanced analytics, machine learning, and real-time insights. BigQuery eliminates the need for infrastructure management and scales automatically to meet your demands, allowing your team to focus on data analysis rather than system maintenance. Additionally, BigQuery's pay-as-you-go pricing model can lead to cost savings.Read the complete guide for migrating from Teradata to BigQueryTeradata to BigQuery migration tutorialMigrate schema and data from TeradataTeradata SQL translation guideOracle to BigQuery migrationMigrating from Oracle to BigQueryBigQuery offers executives a serverless data warehouse solution that eliminates hardware limitations and infrastructure management. Its architecture automatically scales resources to meet query demands, allowing you to focus on data analysis rather than system maintenance. BigQuery handles administrative tasks like scaling and high availability, simplifying your cloud data operations and reducing overhead.Read the complete guide for migrating from Oracle to BigQueryOracle SQL translation guideSchema migrationData migrationHow-tosMigrating from Oracle to BigQueryBigQuery offers executives a serverless data warehouse solution that eliminates hardware limitations and infrastructure management. Its architecture automatically scales resources to meet query demands, allowing you to focus on data analysis rather than system maintenance. 
BigQuery handles administrative tasks like scaling and high availability, simplifying your cloud data operations and reducing overhead.Read the complete guide for migrating from Oracle to BigQueryOracle SQL translation guideSchema migrationData migrationAmazon Redshift to BigQuery migrationMigrating from Redshift to BigQueryBigQuery offers a serverless, fully managed data warehouse with automatic scaling and pay-as-you-go pricing, eliminating the infrastructure management and capacity planning headaches. This translates to reduced operational costs and fast time-to-insights for your team. Additionally, BigQuery's native integrations with other Google Cloud services like Vertex AI enhances your overall data and AI ecosystem.Read the complete guide for migrating from Redshift to BigQueryMigrate schema and data from Amazon RedshiftMigrating Amazon Redshift data with a VPC networkAmazon Redshift SQL translation guideHow-tosMigrating from Redshift to BigQueryBigQuery offers a serverless, fully managed data warehouse with automatic scaling and pay-as-you-go pricing, eliminating the infrastructure management and capacity planning headaches. This translates to reduced operational costs and fast time-to-insights for your team. Additionally, BigQuery's native integrations with other Google Cloud services like Vertex AI enhances your overall data and AI ecosystem.Read the complete guide for migrating from Redshift to BigQueryMigrate schema and data from Amazon RedshiftMigrating Amazon Redshift data with a VPC networkAmazon Redshift SQL translation guideSnowflake to BigQuery migration Migrating from Snowflake to BigQueryBigQuery offers a unified, cost-effective, serverless AI-ready data platform. With automatic scaling, it eliminates infrastructure management overhead and reduces costs. BigQuery's native integration with Google Cloud's AI capabilities, including Vertex AI and Gemini, streamlines your ability to bring AI to your data for the next generation of AI use cases and business transformation.Read the complete guide for migrating from Snowflake to BigQuerySnowflake SQL translation guideHow-tosMigrating from Snowflake to BigQueryBigQuery offers a unified, cost-effective, serverless AI-ready data platform. With automatic scaling, it eliminates infrastructure management overhead and reduces costs. BigQuery's native integration with Google Cloud's AI capabilities, including Vertex AI and Gemini, streamlines your ability to bring AI to your data for the next generation of AI use cases and business transformation.Read the complete guide for migrating from Snowflake to BigQuerySnowflake SQL translation guideApache Hive to BigQuery migrationMigrating from Apache Hive to BigQueryBigQuery is a fully managed, serverless cloud data warehouse offering automatic scaling, a simple SQL interface, and integrated machine learning capabilities. These simplify operations and reduce the need for specialized expertise, making it an accessible and cost-effective option for many organizations. BigQuery supports multiple engines, including streaming, Spark, Python, and SQL, to help data professionals across your organization work off the same data.Read the complete guide for migrating from Apache Hive to BigQuery Migrate schema and data from Apache HiveApache Hive SQL translation guideHow-tosMigrating from Apache Hive to BigQueryBigQuery is a fully managed, serverless cloud data warehouse offering automatic scaling, a simple SQL interface, and integrated machine learning capabilities. 
These simplify operations and reduce the need for specialized expertise, making it an accessible and cost-effective option for many organizations. BigQuery supports multiple engines, including streaming, Spark, Python, and SQL, to help data professionals across your organization work off the same data.Read the complete guide for migrating from Apache Hive to BigQuery Migrate schema and data from Apache HiveApache Hive SQL translation guideGet started with BigQuery todayExplore lakehouse jumpstart solutionGet startedSave costs with our new incentive programContact usSee the BigQuery differenceLearn moreBigQuery migration frameworkLearn moreTake the migration assessmentGet startedGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Data_management_with_Cohesity_Helios_and_Google_Cloud.txt b/Data_management_with_Cohesity_Helios_and_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..7b6b4df9c847cbaf5f4dc772345d30e2faa3ab87 --- /dev/null +++ b/Data_management_with_Cohesity_Helios_and_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/using-cohesity-with-cloud-storage-for-enterprise-hybrid-data-protection +Date Scraped: 2025-02-23T11:51:26.749Z + +Content: +Home Docs Cloud Architecture Center Send feedback Data management with Cohesity Helios and Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-27 UTC By Edwin Galang (Cohesity Cloud Solutions Architect) and Vikram Kanodia (Cohesity Cloud Solutions Architect) This document describes how you can use the Cohesity Helios data platform with Google Cloud. Using this platform has the following benefits: Long-term data retention with Cloud Storage. Backup and recovery of workloads and VMs in Compute Engine and Google Cloud VMware Engine using Cohesity DataProtect. File services in Google Cloud using Cohesity SmartFiles. Cohesity Helios is a data management platform that consolidates multiple functions like backup, recovery, analytics, and disaster recovery in a single scalable, secure, AI-driven platform. You can deploy Cohesity Helios at your network edge, in a data center, or in Google Cloud. Architecture The following diagram shows how Cohesity integrates with Google Cloud. As the diagram shows, Cohesity Helios is installed in your data center and in Google Cloud. It connects with Compute Engine and VMware Engine to obtain VM images and stores them in Cloud Storage. Cohesity Helios connects with Cloud Storage to provide a long-term archive for your data. You can deploy Cohesity Helios in Google Cloud to back up and recover: Compute Engine VMs VMware Engine VMs Application workloads SAP HANA Oracle database SQL Server With Cohesity SmartFiles, you can use protocols like network file system (NFS) and server message block (SMB) to provide file services in your Google Cloud environment. Cohesity Helios components Cohesity Helios has the following main components: Cohesity DataProtect: A software-defined backup and recovery solution that supports cloud environments. Cohesity DataProtect is designed for hyperscale and offers comprehensive policy-based protection for both on-premises and cloud data sources. Cohesity SmartFiles: A multiprotocol file and object solution that supports large enterprises and cloud environments. It's designed to let you scale, protect, and create multiple tiers for data management. 
Long-term data retention with Cloud Storage Cohesity supports the following Cloud Storage classes: Standard Nearline Coldline That means you can archive your data in the storage class that best meets your long-retention and cost requirements. Using Cohesity with Cloud Storage provides the following benefits: You don't need cloud gateways to connect Cohesity with Cloud Storage. You can use wildcards in searches to locate and restore archived data from Cloud Storage. You can recover individual VMs, restore files to source VMs, and recover individual application objects. Backup and recovery of Compute Engine VMs, VMware Engine VMs, and application workloads Cohesity DataProtect is designed to provide consistent and comprehensive protection for Compute Engine VMs, VMware Engine VMs, and application workloads. Using Cohesity DataProtect with Cloud Storage provides the following benefits: A single UI to manage and provision all your backup and recovery services. Policy automation to meet your business service level agreements (SLAs) and assign policies to a single job or to all jobs, globally. Immutable snapshots with Advanced Encryption Standard 256 encryption, multi-factor authentication, and Federal Information Processing Standards certification to help protect against ransomware. The use of strict consistency, erasure coding, and hardware fault tolerance to meet your SLAs and achieve higher data resiliency. Global, variable-length deduplication across workloads and protocols to maximize space. File services for Compute Engine VMs and VMware Engine Cohesity SmartFiles lets you consolidate data silos. It's designed to help securely manage the following unstructured content and application data types, including: Digital libraries Archives Rich media Video surveillance Big data Backup datasets Cohesity SmartFiles provides the following benefits: Broad compatibility across users and applications, whether on-premises or in the cloud, with support for the NFS and SMB protocols. Resiliency and consistency while eliminating disruptive upgrades, unique and distributed file systems. Ability to grow incrementally using linear performance and capacity that adds additional nodes. Space maximization using sliding window variable-length deduplication, Zstandard compression, and small file efficiency. Policies that transparently down-tier and up-tier data storage to optimize spending. An external network-attached storage tiering that coexists with your existing systems. A UI to manage, monitor, and search all your unstructured data. Machine-driven operational insights help predict future needs. Configure Cohesity with Cloud Storage This section describes how to configure Cohesity Helios with Cloud Storage and create a policy to archive data to Cloud Storage for long-term retention. Register Cloud Storage with Cohesity Helios To use Cloud Storage with Cohesity Helios, you must first register it as an external target. Log in to the Cohesity Helios console using your MyCohesity credentials or your SSO credentials if you are using another identity provider. In the Cohesity Dashboard, click Infrastructure > External Targets. Click Register External Target to access the Register External Target page. 
Register the Cloud Storage target using the following information: Purpose: select Archival New Target: enter a unique name for the target Type: select the Cloud Storage class Bucket name: enter the name of the Cloud Storage bucket Project ID: enter the name of the project that hosts Cloud Storage Client Email Address: enter the service account email address for Cloud Storage Client Private Key: enter the private key that is associated with the service account Encryption: enable to send and store data in an encrypted format Compression: enable to send and store the data in a compressed format Source Side Deduplication: enable to deduplicate data before sending it to the target Bandwidth Throttling: enable to limit the maximum data transfer rate to the target during a time window The following screenshot shows sample inputs. To create the new external target, click Register. Next, create a policy to archive data. Create a policy to archive data to Cloud Storage A policy is a reusable set of settings that define how and when data is protected, replicated, and archived. Log in to the Cohesity Helios console. In the Cohesity Dashboard, select Data Protection > Policies. Click Create Policy. Enter the following information to configure the policy: Policy name: enter a unique name for the policy DataLock: enable to create a write-once-read-many (WORM)-compliant backup Archive to: select the name of the Cloud Storage bucket Every: select how often backups are captured by the protection group Retain for: specify the number of days that backups are stored on the Cohesity cluster before the backups are deleted Click Add Archive. In the Archive to list, select the Cloud Storage bucket. Specify the archive schedule for copies of the snapshots that were created by this job. These copies are stored on the registered target. The following screenshot shows a sample policy. Click Save. Configure Cohesity to back up Compute Engine VMs The following sections describe how to register a Google Cloud source, create a policy, and create a protection group to back up Compute Engine VMs. Register a Google Cloud source Log in to the Cohesity Helios console. In the Cohesity Dashboard, select Data Protection > Sources. Select Register > Virtual Machines. Complete the following: Source Type: select Cloud > GCP: IAM User Use JSON Key File: enable to use the service account's JSON key file Service Account JSON Key File: select the service account's JSON key file or enter the information manually VPC: enter the VPC network Subnet: enter the VPC subnet The following screenshot shows a sample registration. Click Register. Create a policy to back up Compute Engine VMs A policy is a reusable set of settings that define how and when objects are protected, replicated, and archived. Log in to the Cohesity Helios console. In the Cohesity Dashboard, select Data Protection > Policies. Click Create Policy. Complete the following information: Policy name: enter a unique name for the policy DataLock: enable to create a WORM-compliant backup Archive to: select the name of the Cloud Storage bucket Every: select how often backups are captured by the protection group Retain for: specify the number of days that backups are stored on the Cohesity cluster before the backups are deleted Policy Name: enter a unique name for the policy DataLock: enable to create a WORM-compliant backup (Optional) To meet long-term retention requirements, archive Google Cloud data to Cloud Storage. Click Add Archive to add Cloud Storage to the policy. 
Enter the following information: Archive to: select the Cloud Storage bucket. Every: select when an archive is created. Retain for: select how long the copies are stored in the Cloud Storage bucket before they are deleted. The following screenshot shows a sample policy. Click Save. Create a protection group for Compute Engine VMs Log in to the Cohesity Helios console. In the Cohesity Dashboard, select Data Protection > Protection. Click Protect. Click Add Objects. In the Registered Source list, select the Google Cloud source. Select the VMs to protect and click Continue. The following screenshot shows some selected VMs. Enter the Protection Group name. The following screenshot shows a selected protection group. From the Policy list, select the policy. Click Protect. Recover Compute Engine VMs Cohesity can help you recover protected objects (such as VMs) from a snapshot created by a protection group. You can recover data from a Cohesity cluster or a Cloud Storage target. You can recover data to its original location or a new location. Log in to the Cohesity Helios console. In the Cohesity Dashboard, select Data Protection > Recoveries. Click Recover and select Virtual Machines > VMs. In the New Recovery wizard, search for the VMs. From the search results, select the VMs that you want to recover, the recovery point, and the location to recover the VM from. Click Next: Recover Options. Complete the following information: Recover to: select the location to recover to Recovery Options: select the recovery options Click Recover. What's next Review additional documentation on the Cohesity website: Next generation data management with Cohesity. Cohesity and Google Cloud. About Cohesity. Cohesity Blogs. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Data_science_with_R-_exploratory_data_analysis.txt b/Data_science_with_R-_exploratory_data_analysis.txt new file mode 100644 index 0000000000000000000000000000000000000000..a516cab3c8bf4e7bea1b1d22dfd6437abb5f358e --- /dev/null +++ b/Data_science_with_R-_exploratory_data_analysis.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/data-science-with-r-on-gcp-eda +Date Scraped: 2025-02-23T11:49:23.030Z + +Content: +Home Docs Cloud Architecture Center Send feedback Data science with R on Google Cloud: Exploratory data analysis Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-22 UTC This document shows you how to get started with data science at scale with R on Google Cloud. This is intended for those who have some experience with R and with Jupyter notebooks, and who are comfortable with SQL. This document focuses on performing exploratory data analysis using Vertex AI Workbench instances and BigQuery. You can find the accompanying code in a Jupyter notebook that's on GitHub. Overview R is one of the most widely used programming languages for statistical modeling. It has a large and active community of data scientists and machine learning (ML) professionals. With more than 20,000 packages in the open-source repository of the Comprehensive R Archive Network (CRAN), R has tools for all statistical data analysis applications, ML, and visualization. R has experienced steady growth in the last two decades due to its expressiveness of its syntax, and because of how comprehensive its data and ML libraries are. 
As a data scientist, you might want to know how you can make use of your R skill set, and how you can harness the advantages of scalable, fully managed cloud services for data science. Architecture In this walkthrough, you use Vertex AI Workbench instances as the data science environments to perform exploratory data analysis (EDA). You use R on data that you extract in this walkthrough from BigQuery, Google's serverless, highly scalable, and cost-effective cloud data warehouse. After you analyze and process the data, the transformed data is stored in Cloud Storage for further potential ML tasks. This flow is shown in the following diagram: Example data The example data for this document is the BigQuery New York City taxi trips dataset. This public dataset includes information about the millions of taxi rides that take place in New York City each year. In this document, you use the data from 2022, which is in the bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2022 table in BigQuery. This document focuses on EDA and on visualization using R and BigQuery. The steps in this document set you up for an ML goal of predicting taxi fare amount (the amount before taxes, fees, and other extras), given a number of factors about the trip. The actual model creation isn't covered in this document. Vertex AI Workbench Vertex AI Workbench is a service that offers an integrated JupyterLab environment, with the following features: One-click deployment. You can use a single click to start a JupyterLab instance that's preconfigured with the latest machine-learning and data-science frameworks. Scale on demand. You can start with a small machine configuration (for example, 4 vCPUs and 16 GB of RAM, as in this document), and when your data gets too big for one machine, you can scale up by adding CPUs, RAM, and GPUs. Google Cloud integration. Vertex AI Workbench instances are integrated with Google Cloud services like BigQuery. This integration makes it straightforward to go from data ingestion to preprocessing and exploration. Pay-per-use pricing. There are no minimum fees or up-front commitments. For information, see pricing for Vertex AI Workbench. You also pay for the Google Cloud resources that you use within the notebooks (such as BigQuery and Cloud Storage). Vertex AI Workbench instance notebooks run on Deep Learning VM Images. This document supports creating a Vertex AI Workbench instance that has R 4.3. Work with BigQuery using R BigQuery doesn't require infrastructure management, so you can focus on uncovering meaningful insights. You can analyze large amounts of data at scale and prepare datasets for ML by using the rich SQL analytical capabilities of BigQuery. To query BigQuery data using R, you can use bigrquery, an open-source R library. The bigrquery package provides the following levels of abstraction on top of BigQuery: The low-level API provides thin wrappers over the underlying BigQuery REST API. The DBI interface wraps the low-level API and makes working with BigQuery similar to working with any other database system. This is the most convenient layer if you want to run SQL queries in BigQuery or upload less than 100 MB of data. The dbplyr interface lets you treat BigQuery tables like in-memory data frames. This is the most convenient layer if you don't want to write SQL, but instead want dbplyr to write it for you. This document uses the low-level API from bigrquery, without requiring DBI or dbplyr. 
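To make the difference between these layers concrete, the following sketch shows the same small query issued through the low-level API (the approach used in this document) and through the DBI interface. This sketch isn't part of the accompanying notebook, and the YOUR_PROJECT_ID placeholder stands in for the billing project that you configure later in the walkthrough:

library(bigrquery)
library(DBI)

# Low-level API: run a query in a billing project, then download the result
sql <- "SELECT COUNT(*) AS num_trips FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2022`"
tb <- bq_project_query("YOUR_PROJECT_ID", query = sql)
bq_table_download(tb)

# DBI interface: treat BigQuery like any other DBI-compatible database
con <- dbConnect(
  bigrquery::bigquery(),
  project = "bigquery-public-data",
  dataset = "new_york_taxi_trips",
  billing = "YOUR_PROJECT_ID"
)
dbGetQuery(con, "SELECT COUNT(*) AS num_trips FROM tlc_yellow_trips_2022")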
Objectives Create a Vertex AI Workbench instance that has R support. Query and analyze data from BigQuery using the bigrquery R library. Prepare and store data for ML in Cloud Storage. Costs In this document, you use the following billable components of Google Cloud: BigQuery Vertex AI Workbench instances. You are also charged for resources used within notebooks, including compute resources, BigQuery, and API requests. Cloud Storage To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Compute Engine API. Enable the API Create a Vertex AI Workbench instance The first step is to create a Vertex AI Workbench instance that you can use for this walkthrough. In the Google Cloud console, go to the Workbench page. Go to Workbench On the Instances tab, click add_box Create New. On the New instance window, click Create. For this walkthrough, keep all of the default values. The Vertex AI Workbench instance can take 2-3 minutes to start. When it's ready, the instance is automatically listed in the Notebook instances pane, and an Open JupyterLab link is next to the instance name. If the link to open JupyterLab doesn't appear in the list after a few minutes, then refresh the page. Open JupyterLab and install R To complete the walkthrough in the notebook, you need to open the JupyterLab environment, install R, clone the vertex-ai-samples GitHub repository, and then open the notebook. In the instances list, click Open JupyterLab. This opens the JupyterLab environment in another tab in your browser. In the JupyterLab environment, click add_box New Launcher, and then on the Launcher tab, click Terminal. In the terminal pane, install R: conda create -n r conda activate r conda install -c r r-essentials r-base=4.3.2 During the installation, each time that you're prompted to continue, type y. The installation might take a few minutes to finish. When the installation is complete, the output is similar to the following: done Executing transaction: done (r) jupyter@instance-INSTANCE_NUMBER:~$ Where INSTANCE_NUMBER is the unique number that's assigned to your Vertex AI Workbench instance. After the commands finish executing in the terminal, refresh your browser page, and then open the Launcher by clicking add_box New Launcher. The Launcher tab shows options for launching R in a notebook or in the console, and to create an R file. 
Click the Terminal tab, and then clone the vertex-ai-samples GitHub repository: git clone https://github.com/GoogleCloudPlatform/vertex-ai-samples.git When the command finishes, you see the vertex-ai-samples folder in the file browser pane of the JupyterLab environment. In the file browser, open vertex-ai-samples > notebooks > community > exploratory_data_analysis. You see the eda_with_r_and_bigquery.ipynb notebook. Open the notebook and set up R In the file browser, open the eda_with_r_and_bigquery.ipynb notebook. This notebook goes through exploratory data analysis with R and BigQuery. Throughout the rest of this document, you work in the notebook, and you run the code that you see within the Jupyter notebook. Check the version of R that the notebook is using: version The version.string field in the output should show R version 4.3.2, which you installed in the previous section. Check for and install the necessary R packages if they aren't already available in the current session: # List the necessary packages needed_packages <- c("dplyr", "ggplot2", "bigrquery") # Check if packages are installed installed_packages <- .packages(all.available = TRUE) missing_packages <- needed_packages[!(needed_packages %in% installed_packages)] # If any packages are missing, install them if (length(missing_packages) > 0) { install.packages(missing_packages) } Load the required packages: # Load the required packages lapply(needed_packages, library, character.only = TRUE) Authenticate bigrquery using out-of-band authentication: bq_auth(use_oob = TRUE) Set the name of the project that you want to use for this notebook by replacing [YOUR-PROJECT-ID] with the ID of your project: # Set the project ID PROJECT_ID <- "[YOUR-PROJECT-ID]" Set the name of the Cloud Storage bucket in which to store output data by replacing [YOUR-BUCKET-NAME] with a globally unique name: BUCKET_NAME <- "[YOUR-BUCKET-NAME]" Set the default height and width for plots that will be generated later in the notebook: options(repr.plot.height = 9, repr.plot.width = 16) Query data from BigQuery In this section of the notebook, you read the results of executing a BigQuery SQL statement into R and take a preliminary look at the data. 
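If you want a quick look at the raw source table before writing the full extraction query, you can pull a handful of rows with the same low-level bigrquery calls that the notebook uses. This peek isn't part of the accompanying notebook, and the column list is only an illustrative subset:

# Preview a few raw rows from the public taxi table (optional check, not in the notebook)
peek <- bq_table_download(
  bq_project_query(
    PROJECT_ID,
    query = "
      SELECT pickup_datetime, dropoff_datetime, trip_distance, fare_amount
      FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2022`
      LIMIT 5
    "
  )
)
peek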
Create a BigQuery SQL statement that extracts some possible predictors and the target prediction variable for a sample of trips. The following query filters out some outlier or nonsensical values in the fields that are being read in for analysis. sql_query_template <- " SELECT TIMESTAMP_DIFF(dropoff_datetime, pickup_datetime, MINUTE) AS trip_time_minutes, passenger_count, ROUND(trip_distance, 1) AS trip_distance_miles, rate_code, /* Mapping from rate code to type from description column in BigQuery table schema */ (CASE WHEN rate_code = '1.0' THEN 'Standard rate' WHEN rate_code = '2.0' THEN 'JFK' WHEN rate_code = '3.0' THEN 'Newark' WHEN rate_code = '4.0' THEN 'Nassau or Westchester' WHEN rate_code = '5.0' THEN 'Negotiated fare' WHEN rate_code = '6.0' THEN 'Group ride' /* Several NULL AND some '99.0' values go here */ ELSE 'Unknown' END) AS rate_type, fare_amount, CAST(ABS(FARM_FINGERPRINT( CONCAT( CAST(trip_distance AS STRING), CAST(fare_amount AS STRING) ) )) AS STRING) AS key FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2022` /* Filter out some outlier or hard to understand values */ WHERE (TIMESTAMP_DIFF(dropoff_datetime, pickup_datetime, MINUTE) BETWEEN 0.01 AND 120) AND (passenger_count BETWEEN 1 AND 10) AND (trip_distance BETWEEN 0.01 AND 100) AND (fare_amount BETWEEN 0.01 AND 250) LIMIT %s " The key column is a generated row identifier based on the concatenated values of the trip_distance and fare_amount columns. Run the query and retrieve the data as an in-memory tibble, which is similar to a data frame. sample_size <- 10000 sql_query <- sprintf(sql_query_template, sample_size) taxi_trip_data <- bq_table_download( bq_project_query( PROJECT_ID, query = sql_query ) ) View the retrieved results: head(taxi_trip_data) The output is a table that's similar to the following image: The results show these columns of trip data: trip_time_minutes integer passenger_count integer trip_distance_miles double rate_code character rate_type character fare_amount double key character View the number of rows and data types of each column: str(taxi_trip_data) The output is similar to the following: tibble [10,000 x 7] (S3: tbl_df/tbl/data.frame) $ trip_time_minutes : int [1:10000] 52 19 2 7 14 16 1 2 2 6 ... $ passenger_count : int [1:10000] 1 1 1 1 1 1 1 1 3 1 ... $ trip_distance_miles: num [1:10000] 31.3 8.9 0.4 0.9 2 0.6 1.7 0.4 0.5 0.2 ... $ rate_code : chr [1:10000] "5.0" "5.0" "5.0" "5.0" ... $ rate_type : chr [1:10000] "Negotiated fare" "Negotiated fare" "Negotiated fare" "Negotiated fare" ... $ fare_amount : num [1:10000] 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.01 ... $ key : chr [1:10000] "1221969315200336084" "5007772749405424948" "3727452358632142755" "77714841168471205370" ... View a summary of the retrieved data: summary(taxi_trip_data) The output is similar to the following: trip_time_minutes passenger_count trip_distance_miles rate_code Min. : 1.00 Min. :1.000 Min. : 0.000 Length:10000 1st Qu.: 20.00 1st Qu.:1.000 1st Qu.: 3.700 Class :character Median : 24.00 Median :1.000 Median : 4.800 Mode :character Mean : 30.32 Mean :1.465 Mean : 9.639 3rd Qu.: 39.00 3rd Qu.:2.000 3rd Qu.:17.600 Max. :120.00 Max. :9.000 Max. :43.700 rate_type fare_amount key Length:10000 Min. : 0.01 Length:10000 Class :character 1st Qu.: 16.50 Class :character Mode :character Median : 16.50 Mode :character Mean : 31.22 3rd Qu.: 52.00 Max. :182.50 Visualize data using ggplot2 In this section of the notebook, you use the ggplot2 library in R to study some of the variables from the example dataset. 
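Before plotting, a quick aggregate view of a categorical field can help you decide what to visualize. The following sketch, which isn't part of the original notebook, uses dplyr (loaded earlier) to summarize the downloaded sample by rate_type:

# Summarize the in-memory sample by rate type (exploratory sketch, not in the notebook)
taxi_trip_data %>%
  group_by(rate_type) %>%
  summarise(
    num_trips = n(),
    avg_fare_amount = mean(fare_amount),
    avg_trip_distance_miles = mean(trip_distance_miles)
  ) %>%
  arrange(desc(num_trips))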
Display the distribution of the fare_amount values using a histogram: ggplot( data = taxi_trip_data, aes(x = fare_amount) ) + geom_histogram(bins = 100) The resulting plot is similar to the graph in the following image: Display the relationship between trip_distance and fare_amount using a scatter plot: ggplot( data = taxi_trip_data, aes(x = trip_distance_miles, y = fare_amount) ) + geom_point() + geom_smooth(method = "lm") The resulting plot is similar to the graph in the following image: Process the data in BigQuery from R When you're working with large datasets, we recommend that you perform as much analysis as possible (aggregation, filtering, joining, computing columns, and so on) in BigQuery, and then retrieve the results. Performing these tasks in R is less efficient. Using BigQuery for analysis takes advantage of the scalability and performance of BigQuery, and makes sure that the returned results can fit into memory in R. In the notebook, create a function that finds the number of trips and the average fare amount for each value of the chosen column: get_distinct_value_aggregates <- function(column) { query <- paste0( 'SELECT ', column, ', COUNT(1) AS num_trips, AVG(fare_amount) AS avg_fare_amount FROM `bigquery-public-data.new_york_taxi_trips.tlc_yellow_trips_2022` WHERE (TIMESTAMP_DIFF(dropoff_datetime, pickup_datetime, MINUTE) BETWEEN 0.01 AND 120) AND (passenger_count BETWEEN 1 AND 10) AND (trip_distance BETWEEN 0.01 AND 100) AND (fare_amount BETWEEN 0.01 AND 250) GROUP BY 1 ' ) bq_table_download( bq_project_query( PROJECT_ID, query = query ) ) } Invoke the function using the trip_time_minutes column that is defined using the timestamp functionality in BigQuery: df <- get_distinct_value_aggregates( 'TIMESTAMP_DIFF(dropoff_datetime, pickup_datetime, MINUTE) AS trip_time_minutes') ggplot( data = df, aes(x = trip_time_minutes, y = num_trips) ) + geom_line() ggplot( data = df, aes(x = trip_time_minutes, y = avg_fare_amount) ) + geom_line() The notebook displays two graphs. The first graph shows the number of trips by length of trip in minutes. The second graph shows the average fare amount of trips by trip time. The output of the first ggplot command is as follows, which shows the number of trips by length of trip (in minutes): The output of the second ggplot command is as follows, which shows the average fare amount of trips by trip time: To see more visualization examples with other fields in the data, refer to the notebook. Save data as CSV files to Cloud Storage The next task is to save extracted data from BigQuery as CSV files in Cloud Storage so you can use it for further ML tasks. 
In the notebook, load training and evaluation data from BigQuery into R: # Prepare training and evaluation data from BigQuery sample_size <- 10000 sql_query <- sprintf(sql_query_template, sample_size) # Split data into 75% training, 25% evaluation train_query <- paste('SELECT * FROM (', sql_query, ') WHERE MOD(CAST(key AS INT64), 100) <= 75') eval_query <- paste('SELECT * FROM (', sql_query, ') WHERE MOD(CAST(key AS INT64), 100) > 75') # Load training data to data frame train_data <- bq_table_download( bq_project_query( PROJECT_ID, query = train_query ) ) # Load evaluation data to data frame eval_data <- bq_table_download( bq_project_query( PROJECT_ID, query = eval_query ) ) Check the number of observations in each dataset: print(paste0("Training instances count: ", nrow(train_data))) print(paste0("Evaluation instances count: ", nrow(eval_data))) Approximately 75% of the total instances should be in training, with the remaining approximately 25% in evaluation. Write the data to local CSV files: # Write data frames to local CSV files, with headers dir.create(file.path('data'), showWarnings = FALSE) write.table(train_data, "data/train_data.csv", row.names = FALSE, col.names = TRUE, sep = ",") write.table(eval_data, "data/eval_data.csv", row.names = FALSE, col.names = TRUE, sep = ",") Upload the CSV files to Cloud Storage by wrapping gsutil commands that are passed to the system: # Upload CSV data to Cloud Storage by passing gsutil commands to system gcs_url <- paste0("gs://", BUCKET_NAME, "/") command <- paste("gsutil mb", gcs_url) system(command) gcs_data_dir <- paste0("gs://", BUCKET_NAME, "/data") command <- paste("gsutil cp data/*_data.csv", gcs_data_dir) system(command) command <- paste("gsutil ls -l", gcs_data_dir) system(command, intern = TRUE) You can also upload CSV files to Cloud Storage by using the googleCloudStorageR library, which invokes the Cloud Storage JSON API. You can also use bigrquery to write data from R back into BigQuery. Writing back to BigQuery is usually done after completing some preprocessing or generating results to be used for further analysis. 
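As a sketch of those two alternatives (not part of the accompanying notebook; the dataset and object names are illustrative, and authentication setup depends on your environment), an upload with googleCloudStorageR and a write-back with bigrquery might look like the following:

# Upload a local CSV to Cloud Storage through the JSON API (illustrative)
library(googleCloudStorageR)
gcs_auth()  # authenticate; how you supply credentials (for example, a service account key) varies
gcs_upload("data/train_data.csv", bucket = BUCKET_NAME, name = "data/train_data.csv")

# Write a data frame back to BigQuery with the low-level bigrquery API
# The "taxi_analysis" dataset and "train_data" table names are illustrative and must already exist or be created first
dest <- bq_table(PROJECT_ID, "taxi_analysis", "train_data")
bq_table_upload(dest, train_data)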
Clean up To avoid incurring charges to your Google Cloud account for the resources used in this document, you should remove them. Delete the project The easiest way to eliminate billing is to delete the project you created. If you plan to explore multiple architectures, tutorials, or quickstarts, then reusing projects can help you avoid exceeding project quota limits. Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. What's next Learn more about how you can use BigQuery data in your R notebooks in the bigrquery documentation. Learn about best practices for ML engineering in Rules of ML. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Author: Alok Pattani | Developer Advocate Other contributors: Jason Davenport | Developer Advocate Firat Tekiner | Senior Product Manager Send feedback \ No newline at end of file diff --git a/Data_warehouse_with_BigQuery.txt b/Data_warehouse_with_BigQuery.txt new file mode 100644 index 0000000000000000000000000000000000000000..80c54151ae92b26c521def9d49e0b7f68df2c619 --- /dev/null +++ b/Data_warehouse_with_BigQuery.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/big-data-analytics/data-warehouse +Date Scraped: 2025-02-23T11:49:04.394Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Data warehouse with BigQuery Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-03 UTC This guide helps you understand, deploy, and use the Data warehouse with BigQuery Jump Start Solution. This solution demonstrates how you can build a data warehouse in Google Cloud using BigQuery as your data warehouse, with Looker Studio as a dashboard and visualization tool. The solution also uses the generative AI capabilities of Vertex AI to generate text that summarizes the analysis. Common use cases for building a data warehouse include the following: Aggregating and creating marketing analytics warehouses to improve revenue or other customer metrics. Building financial reports and analyses. Building operational dashboards to improve corporate performance. This document is intended for developers who have some background with data analysis and have used a database to perform an analysis. It assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful but not required in order to deploy this solution through the console. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives Learn how data flows into a cloud data warehouse, and how the data can be transformed using SQL. Build dashboards from the data to perform data analysis. Schedule SQL statements to update data on a recurring basis. Create a machine learning model to predict data values over time. Use generative AI to summarize the results of your machine learning model. Products used The solution uses the following Google Cloud products: BigQuery: A fully managed, highly scalable data warehouse with built-in machine learning capabilities. Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. Looker Studio: Self-service business intelligence platform that helps you create and share data insights. Vertex AI: A machine learning (ML) platform that lets you train and deploy ML models and AI applications. 
The following Google Cloud products are used to stage data in the solution for first use: Workflows: A fully managed orchestration platform that executes services in a specified order as a workflow. Workflows can combine services, including custom services hosted on Cloud Run or Cloud Run functions, Google Cloud services such as BigQuery, and any HTTP-based API. Cloud Run functions: A serverless execution environment for building and connecting cloud services. Architecture The example warehouse that this solution deploys analyzes fictional ecommerce data from TheLook to understand company performance over time. The following diagram shows the architecture of the Google Cloud resources that the solution deploys. Solution flow The architecture represents a common data flow to populate and transform data for a data warehouse: Data is sent to a Cloud Storage bucket. Workflows facilitates the data movement. Data is loaded into BigQuery as a BigLake table using a SQL stored procedure. Data is transformed in BigQuery by using a SQL stored procedure. Dashboards are created from the data to further analyze with Looker Studio. Data is analyzed using a k-means model built with BigQuery ML. The analysis identifies common patterns, which are summarized by using the generative AI capabilities from Vertex AI through BigQuery. Cloud Run functions creates Python notebooks with additional learning content. Cost For an estimate of the cost of the Google Cloud resources that the data warehouse with BigQuery solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. The data region where the data is staged. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. 
Each row lists an IAM permission that's required, followed by a predefined role that includes that permission: serviceusage.services.enable: Service Usage Admin (roles/serviceusage.serviceUsageAdmin); iam.serviceAccounts.create: Service Account Admin (roles/iam.serviceAccountAdmin); resourcemanager.projects.setIamPolicy: Project IAM Admin (roles/resourcemanager.projectIamAdmin); config.deployments.create and config.deployments.list: Cloud Infrastructure Manager Admin (roles/config.admin); iam.serviceAccounts.actAs: Service Account User (roles/iam.serviceAccountUser). About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles that are assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/aiplatform.admin roles/bigquery.admin roles/cloudfunctions.admin roles/config.agent roles/datalineage.viewer roles/dataform.admin roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/iam.serviceAccountTokenCreator roles/logging.configWriter roles/resourcemanager.projectIamAdmin roles/run.invoker roles/serviceusage.serviceUsageAdmin roles/storage.admin roles/workflows.admin Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Data warehouse with BigQuery solution. Go to the Data warehouse with BigQuery solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. 
When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To verify the resources that are deployed, click the more_vert Actions menu, and then select View resources. The Asset Inventory page of the Google Cloud console is opened in a new browser tab. The page lists the BigQuery objects, the Cloud Run function, the Workflows workflow, the Pub/Sub topic, and the Eventarc trigger resources that are deployed by the solution. To view the details of each resource, click the name of the resource in the Display name column. To view and use the solution, return to the Solution deployments page in the console. Click the more_vert Actions menu. Select View Looker Studio Dashboard to open a dashboard that's built on top of the sample data that's transformed by using the solution. Select Open BigQuery Editor to run queries and build machine learning (ML) models using the sample data in the solution. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. 
For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" # Google Cloud region where you want to deploy the solution # Example: us-central1 region = "REGION" # Whether or not to enable underlying apis in this solution. # Example: true enable_apis = true # Whether or not to protect BigQuery resources from deletion when solution is modified or changed. # Example: false force_destroy = false # Whether or not to protect Cloud Storage resources from deletion when solution is modified or changed. # Example: true deletion_protection = true # Name of the BigQuery ML GenAI remote model used for text generation # Example: "text_generate_model" text_generation_model_name = "text_generate_model" For information about the values that you can assign to the required variables, see the following: project_id: Identifying projects region: Available regions and zones Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse. 
If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! The Terraform output also lists the following additional information that you'll need: The Looker Studio URL of the dashboard that was deployed. The link to open the BigQuery editor for some sample queries. The following example shows what the output looks like: lookerstudio_report_url = "https://lookerstudio.google.com/reporting/create?c.reportId=8a6517b8-8fcd-47a2-a953-9d4fb9ae4794&ds.ds_profit.datasourceName=lookerstudio_report_profit&ds.ds_profit.projectId=my-cloud-project&ds.ds_profit.type=TABLE&ds.ds_profit.datasetId=ds_edw&ds.ds_profit.tableId=lookerstudio_report_profit&ds.ds_dc.datasourceName=lookerstudio_report_distribution_centers&ds.ds_dc.projectId=my-cloud-project&ds.ds_dc.type=TABLE&ds.ds_dc.datasetId=ds_edw&ds.ds_dc.tableId=lookerstudio_report_distribution_centers" bigquery_editor_url = "https://console.cloud.google.com/bigquery?project=my-cloud-project&ws=!1m5!1m4!6m3!1smy-cloud-project!2sds_edw!3ssp_sample_queries" To view and use the dashboard and to run queries in BigQuery, copy the output URLs from the previous step and open the URLs in new browser tabs. The dashboard and the BigQuery editor appear in the new tabs. To see all of the Google Cloud resources that are deployed, take an interactive tour. Start the tour When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Customize the solution This section provides information that Terraform developers can use to modify the data warehouse with BigQuery solution in order to meet their own technical and business requirements. The guidance in this section is relevant only if you deploy the solution by using the Terraform CLI. Note: Changing the Terraform code for this solution requires familiarity with the Terraform configuration language. If you modify the Google-provided Terraform configuration, and then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. After you've seen how the solution works with the sample data, you might want to work with your own data. To use your own data, you put it into the Cloud Storage bucket named edw-raw-hash. The hash is a random set of 8 characters that's generated during the deployment. You can change the Terraform code in the following ways: Dataset ID. Change the Terraform code so that when the code creates the BigQuery dataset, it uses the dataset ID that you want to use for your data. Schema. Change the Terraform code so that it creates the BigQuery table ID that you want to use to store your data. 
This includes the external table schema so that BigQuery can read the data from Cloud Storage. Scheduled queries. Add stored procedures that perform the analysis that you're interested in. Looker dashboards. Change the Terraform code that creates a Looker dashboard so that the dashboard reflects the data that you're using. The following are common data warehouse objects, showing the Terraform example code in main.tf. BigQuery dataset: The schema where database objects are grouped and stored. resource "google_bigquery_dataset" "ds_edw" { project = module.project-services.project_id dataset_id = "DATASET_PHYSICAL_ID" friendly_name = "DATASET_LOGICAL_NAME" description = "DATASET_DESCRIPTION" location = "REGION" labels = var.labels delete_contents_on_destroy = var.force_destroy } BigQuery table: A database object that represents data that's stored in BigQuery or that represents a data schema that's stored in Cloud Storage. resource "google_bigquery_table" "tbl_edw_inventory_items" { dataset_id = google_bigquery_dataset.ds_edw.dataset_id table_id = "TABLE_NAME" project = module.project-services.project_id deletion_protection = var.deletion_protection ... } BigQuery stored procedure: A database object that represents one or more SQL statements to be executed when called. This could be to transform data from one table to another or to load data from an external table into a standard table. resource "google_bigquery_routine" "sp_sample_translation_queries" { project = module.project-services.project_id dataset_id = google_bigquery_dataset.ds_edw.dataset_id routine_id = "sp_sample_translation_queries" routine_type = "PROCEDURE" language = "SQL" definition_body = templatefile("${path.module}/assets/sql/sp_sample_translation_queries.sql", { project_id = module.project-services.project_id }) } BigQuery scheduled query: A utility to schedule a query or stored procedure to run at a specified frequency. resource "google_bigquery_data_transfer_config" "dts_config" { display_name = "TRANSFER_NAME" project = module.project-services.project_id location = "REGION" data_source_id = "scheduled_query" schedule = "every day 00:00" params = { query = "CALL ${module.project-services.project_id}.ds_edw.sp_lookerstudio_report()" } } To customize the solution, complete the following steps in Cloud Shell: Verify that the current working directory is $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse. If it isn't, go to that directory: cd $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse Open main.tf and make the changes you want to make. For more information about the effects of such customization on reliability, security, performance, cost, and operations, see Design recommendations. Validate and review the Terraform configuration. Provision the resources. Design recommendations This section provides recommendations for using the data warehouse with BigQuery solution to develop an architecture that meets your requirements for security, reliability, cost, and performance. As you begin to scale with BigQuery, you have available a number of ways to help improve your query performance and to reduce your total spend. These methods include changing how your data is physically stored, modifying your SQL queries, and using slot reservations to ensure cost performance. For more information about ways to help scale and run your data warehouse, see Introduction to optimizing query performance. 
Note the following: Before you make any design changes, assess the cost impact and consider potential trade-offs with other features. You can assess the cost impact of design changes by using the Google Cloud Pricing Calculator. To implement design changes in the solution, you need expertise in Terraform coding and advanced knowledge of the Google Cloud services that are used in the solution. If you modify the Google-provided Terraform configuration and if you then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For more information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Delete the deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-google-bigquery/modules/data_warehouse. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. 
The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Errors accessing data in BigQuery or Looker Studio There is a provisioning step that runs after the Terraform provisioning steps that loads data to the environment. 
If you get an error when the data is being loaded into the Looker Studio dashboard, or if there are no objects when you start exploring BigQuery, wait a few minutes and try again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. 
Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/modules/data_warehouse Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. 
What's next Learn about BigQuery Generate text by using the ML.GENERATE_TEXT function Learn about Looker Studio Send feedback \ No newline at end of file diff --git a/Database_Migration.txt b/Database_Migration.txt new file mode 100644 index 0000000000000000000000000000000000000000..0c7b5d8e63f34241288cd3dd897983b163561beb --- /dev/null +++ b/Database_Migration.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/database-migration +Date Scraped: 2025-02-23T11:59:15.971Z + +Content: +Google is positioned furthest in vision among all leaders in the 2023 Gartner® Magic Quadrant™ for Cloud DBMS. Get the report.Simplify your database migration journeyMigrate to Google Cloud to run and manage your databases at global scale while optimizing efficiency and flexibility. Take advantage of the new Database Migration Program, to quickly and cost-effectively get started.Contact usMigration guideJumpstart your migrations with the new Database Migration ProgramApply for the offer todayBenefitsAccelerate your database migration journey to Google CloudOptimize for cost and agilityStay flexible with fully managed, open-source-compatible databases. Get discounts and instance sizes for any budget without compromising performance.Innovate at scaleFocus on innovation with scalable, highly reliable databases. Develop apps with the scale, performance, and ease that powers Google’s services.One Google CloudIntegrate our databases with other Google Cloud solutions, including our data analytics, machine learning, and application development products.Key featuresDatabase migration strategiesMove to the same type of databaseLift and shift databases to Google Cloud using databases like Cloud SQL, AlloyDB for PostgreSQL, Memorystore for Valkey and Redis, and Bigtable, along with our open source partner databases like MongoDB, Datastax, Elastic, Neo4j, Influx Data, and Redis Enterprise. Database Migration Service can help minimize downtime during migration.Move to a new type of databaseWhether you’re moving from proprietary to open source databases or modernizing from traditional to scalable cloud-native databases, we have a solution for you. Leverage Database Migration Service to migrate from Oracle to PostgreSQL or use Datastream to synchronize data across databases, storage systems, and applications. Our migration assessment guides and partners can help you get started.Ready to get started? 
Contact usMigrate your databases to managed services on Google CloudVideoHow to migrate and modernize your applications with Google Cloud databasesWatch videoMigrating your databases to managed services on Google CloudRegister to read whitepaperCustomersSee how these customers migrated their databases to Google CloudCase studyEvernote moved 5 billion user notes to Google Cloud in only 70 days.4-min readCase studyLearn how Pantheon migrated 200K+ websites to Google Cloud in two weeks.5-min readSee all customersPartnersRecommended database migration partnersGoogle Cloud partners can help make even more of your migrations seamless.See all partnersDocumentationGuides for database migrationGoogle Cloud BasicsDatabase Migration Service OverviewLearn how Database Migration Service helps you lift and shift your workloads into Cloud SQL.Learn moreQuickstartMigrating MySQL to Cloud SQLLearn how to migrate your database from MySQL to Cloud SQL.Learn moreQuickstartMigrating PostgreSQL to Cloud SQLLearn how to migrate your database from PostgreSQL to Cloud SQL.Learn moreTutorialMigrating from MySQL to SpannerThis article explains how to migrate your online transactional processing (OLTP) database from MySQL to Spanner.Learn moreTutorialMigrating from Oracle to Cloud SQL with DatastreamUse Google’s Datastream-based, open-source toolkit to migrate from Oracle to Cloud SQL for PostgreSQL.Learn moreTutorialMigrating from PostgreSQL to SpannerThis article explains how to migrate your on-premises PostgreSQL instances to Spanner.Learn moreGoogle Cloud BasicsMigrating Oracle workloads to Google CloudLift and shift your Oracle workloads with the Google Cloud Bare Metal Solution. It provides hardware, hardware support, and integrated billing and support.Learn moreTutorialMigrating from DynamoDB to SpannerThis tutorial describes how to migrate from Amazon DynamoDB to Spanner.Learn moreBest PracticeMigrating data from HBase to BigtableThis article describes considerations and processes for migrating the data from an Apache HBase cluster to a Bigtable cluster on Google Cloud.Learn moreNot seeing what you’re looking for?View documentationTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Database_Migration_Service(1).txt b/Database_Migration_Service(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..ac7a933bd5b540ccadcb9e628bd519fdebb2f318 --- /dev/null +++ b/Database_Migration_Service(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/database-migration +Date Scraped: 2025-02-23T12:06:48.253Z + +Content: +Database Migration Service offers AI-assisted migrations with detailed explanations and recommendations. Watch the video.Jump to Database Migration ServiceDatabase Migration ServiceSimplify migrations to the cloud. 
Available now for MySQL, PostgreSQL, SQL Server, and Oracle databases.Get startedMigration guideMigrate to open source databases in the cloud from on-premises, Google Cloud, or other cloudsReplicate data continuously for minimal downtime migrationsSeamlessly convert your schema and code with AI-powered assistanceServerless and easy to set upSign up to try application code conversion, assisted by Gemini.Request accessBenefitsSimplified migrationStart migrating in just a few clicks with an integrated, Gemini-assisted conversion and migration experience. Reduced migration complexity means you can enjoy the benefits of the cloud sooner.Minimal downtimeEnjoy a fully-managed service that migrates your database with minimal downtime. Serverless migrations are highly performant at scale and can take an initial snapshot followed by continuous data replication.Fast track to Cloud SQL and AlloyDBGet the operational benefits of the open, standards-based Cloud SQL and AlloyDB for PostgreSQL services with enterprise availability, stability, and security.Key featuresKey featuresEasy to useA guided experience takes you through migration of MySQL, PostgreSQL, SQL Server, and Oracle databases and offers built-in, customized source configuration information, schema and code conversion, and setup of multiple secure networking connectivity options. Database Migration Service leverages native replication capabilities for highly accurate, high-fidelity migrations, and you can run a validation before the migration to ensure migration success.Gemini in Database Migration ServiceMake database migration simpler and faster with AI-powered assistance. With the help of Gemini in DMS, in preview, you can review and convert database-resident code like stored procedures, triggers, and functions to a PostgreSQL compatible dialect. To upskill and retrain SQL developers, explainability enables a side-by-side comparison of dialects, along with a detailed explanation of code and recommendations.Integrated conversion and migrationModernize your legacy Oracle workloads by migrating them to PostgreSQL. A built-in conversion workspace reduces the manual effort of heterogeneous migrations, and the Gemini-assisted conversion process guides you through schema and code conversion and highlights any conversion issues for your review. The conversion workspace is available for migrations from Oracle to Cloud SQL and AlloyDB.Serverless experienceEliminate the operational burdens of database migration. There are no migration servers to provision, manage, or monitor, and auto-scaling ensures high-performance, uninterrupted data replication at scale. The entire process is done with minimal downtime, from the initial snapshot of the source database to continuous replication of changes.Secure by designRest easy knowing your data is protected during migration. Database Migration Service supports multiple secure, private connectivity methods to protect your data in transit. 
Once migrated, all data is encrypted by default, and Google Cloud databases provide multiple layers of security to meet even the most stringent requirements.3:28Simplify database migrations with AICustomersLearn from customers using Database Migration ServiceBlog postAccenture migrated data to Cloud SQL and now queries it directly using federation from BigQuery5-min readBlog postFreedom Financial Network accelerated their move to Cloud SQL with Database Migration Service5-min readBlog postRyde migrated their production databases from Amazon RDS to Cloud SQL in less than a day5-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoModernizing Oracle workloads with DMS Conversion WorkspaceWatch videoBlog postWhat’s new in Oracle to PostgreSQL migrations with Database Migration ServiceRead the blogBlog postMigration from PostgreSQL sources to AlloyDB is generally availableRead the blogVideoGetting started with Database Migration ServiceWatch videoBlog postPrepare your source PostgreSQL instance and database(s) for migrationRead the blogBlog postMigrate your MySQL and PostgreSQL databases using Database Migration ServiceRead the blogDocumentationDatabase Migration Service documentationGoogle Cloud BasicsDatabase Migration Service overviewLearn how Database Migration Service helps you lift and shift your workloads into Cloud SQL.Learn moreQuickstartDatabase Migration Service for MySQL quickstartLearn how to seamlessly migrate your data from MySQL to Cloud SQL.Learn moreQuickstartDatabase Migration Service for PostgreSQL quickstartLearn how to seamlessly migrate your data from PostgreSQL to Cloud SQL.Learn moreTutorialDatabase Migration Service for MySQL how-to guidesLearn how to configure a source database, create a source connection profile, create a migration job, use the API, and more.Learn moreTutorialDatabase Migration Service for PostgreSQL how-to guidesLearn how to create a source, configure connectivity, create a migration job, use the API, and more.Learn moreTutorialMigrating MySQL to Cloud SQL with Database Migration ServiceGet started with migrating MySQL data to Cloud SQL using Database Migration Service in this short training path, a Google Cloud Skills Boost quest.Learn moreQuickstartPostgreSQL to AlloyDB quickstartLearn how to seamlessly migrate your data from PostgreSQL to AlloyDB for PostgreSQL.Learn moreTutorialAlloyDB how-to guideStep-by-step guide on migrating your data to AlloyDB with Database Migration Service.Learn moreGoogle Cloud BasicsMigrate Oracle to PostgreSQL with Database Migration ServiceLearn database migration strategies and how Database Migration Service, along with additional Google Cloud and partner tooling, can help.Learn moreNot seeing what you’re looking for?View all product documentationPricingPricingFor homogenous migrations, where the source and destination are the same database engine, Database Migration Service is offered at no additional charge. This includes migrations of MySQL, PostgreSQL, and SQL Server sources to Cloud SQL and AlloyDB destinations.Heterogeneous migrations between different database engines, are priced in per-byte increments on a per-migration job basis. 
Bytes are counted based on raw (uncompressed) data.View pricing detailsPartnersRecommended migration partnersGoogle Cloud partners can help make even more of your migrations seamless.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Database_Migration_Service.txt b/Database_Migration_Service.txt new file mode 100644 index 0000000000000000000000000000000000000000..8229888060d05736daa86c24f06edeff55425306 --- /dev/null +++ b/Database_Migration_Service.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/database-migration +Date Scraped: 2025-02-23T12:04:07.628Z + +Content: +Database Migration Service offers AI-assisted migrations with detailed explanations and recommendations. Watch the video.Jump to Database Migration ServiceDatabase Migration ServiceSimplify migrations to the cloud. Available now for MySQL, PostgreSQL, SQL Server, and Oracle databases.Get startedMigration guideMigrate to open source databases in the cloud from on-premises, Google Cloud, or other cloudsReplicate data continuously for minimal downtime migrationsSeamlessly convert your schema and code with AI-powered assistanceServerless and easy to set upSign up to try application code conversion, assisted by Gemini.Request accessBenefitsSimplified migrationStart migrating in just a few clicks with an integrated, Gemini-assisted conversion and migration experience. Reduced migration complexity means you can enjoy the benefits of the cloud sooner.Minimal downtimeEnjoy a fully-managed service that migrates your database with minimal downtime. Serverless migrations are highly performant at scale and can take an initial snapshot followed by continuous data replication.Fast track to Cloud SQL and AlloyDBGet the operational benefits of the open, standards-based Cloud SQL and AlloyDB for PostgreSQL services with enterprise availability, stability, and security.Key featuresKey featuresEasy to useA guided experience takes you through migration of MySQL, PostgreSQL, SQL Server, and Oracle databases and offers built-in, customized source configuration information, schema and code conversion, and setup of multiple secure networking connectivity options. Database Migration Service leverages native replication capabilities for highly accurate, high-fidelity migrations, and you can run a validation before the migration to ensure migration success.Gemini in Database Migration ServiceMake database migration simpler and faster with AI-powered assistance. With the help of Gemini in DMS, in preview, you can review and convert database-resident code like stored procedures, triggers, and functions to a PostgreSQL compatible dialect. To upskill and retrain SQL developers, explainability enables a side-by-side comparison of dialects, along with a detailed explanation of code and recommendations.Integrated conversion and migrationModernize your legacy Oracle workloads by migrating them to PostgreSQL. A built-in conversion workspace reduces the manual effort of heterogeneous migrations, and the Gemini-assisted conversion process guides you through schema and code conversion and highlights any conversion issues for your review. 
The conversion workspace is available for migrations from Oracle to Cloud SQL and AlloyDB.Serverless experienceEliminate the operational burdens of database migration. There are no migration servers to provision, manage, or monitor, and auto-scaling ensures high-performance, uninterrupted data replication at scale. The entire process is done with minimal downtime, from the initial snapshot of the source database to continuous replication of changes.Secure by designRest easy knowing your data is protected during migration. Database Migration Service supports multiple secure, private connectivity methods to protect your data in transit. Once migrated, all data is encrypted by default, and Google Cloud databases provide multiple layers of security to meet even the most stringent requirements.3:28Simplify database migrations with AICustomersLearn from customers using Database Migration ServiceBlog postAccenture migrated data to Cloud SQL and now queries it directly using federation from BigQuery5-min readBlog postFreedom Financial Network accelerated their move to Cloud SQL with Database Migration Service5-min readBlog postRyde migrated their production databases from Amazon RDS to Cloud SQL in less than a day5-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoModernizing Oracle workloads with DMS Conversion WorkspaceWatch videoBlog postWhat’s new in Oracle to PostgreSQL migrations with Database Migration ServiceRead the blogBlog postMigration from PostgreSQL sources to AlloyDB is generally availableRead the blogVideoGetting started with Database Migration ServiceWatch videoBlog postPrepare your source PostgreSQL instance and database(s) for migrationRead the blogBlog postMigrate your MySQL and PostgreSQL databases using Database Migration ServiceRead the blogDocumentationDatabase Migration Service documentationGoogle Cloud BasicsDatabase Migration Service overviewLearn how Database Migration Service helps you lift and shift your workloads into Cloud SQL.Learn moreQuickstartDatabase Migration Service for MySQL quickstartLearn how to seamlessly migrate your data from MySQL to Cloud SQL.Learn moreQuickstartDatabase Migration Service for PostgreSQL quickstartLearn how to seamlessly migrate your data from PostgreSQL to Cloud SQL.Learn moreTutorialDatabase Migration Service for MySQL how-to guidesLearn how to configure a source database, create a source connection profile, create a migration job, use the API, and more.Learn moreTutorialDatabase Migration Service for PostgreSQL how-to guidesLearn how to create a source, configure connectivity, create a migration job, use the API, and more.Learn moreTutorialMigrating MySQL to Cloud SQL with Database Migration ServiceGet started with migrating MySQL data to Cloud SQL using Database Migration Service in this short training path, a Google Cloud Skills Boost quest.Learn moreQuickstartPostgreSQL to AlloyDB quickstartLearn how to seamlessly migrate your data from PostgreSQL to AlloyDB for PostgreSQL.Learn moreTutorialAlloyDB how-to guideStep-by-step guide on migrating your data to AlloyDB with Database Migration Service.Learn moreGoogle Cloud BasicsMigrate Oracle to PostgreSQL with Database Migration ServiceLearn database migration strategies and how Database Migration Service, along with additional Google Cloud and partner tooling, can help.Learn moreNot seeing what you’re looking for?View all product documentationPricingPricingFor homogenous 
migrations, where the source and destination are the same database engine, Database Migration Service is offered at no additional charge. This includes migrations of MySQL, PostgreSQL, and SQL Server sources to Cloud SQL and AlloyDB destinations.Heterogeneous migrations between different database engines, are priced in per-byte increments on a per-migration job basis. Bytes are counted based on raw (uncompressed) data.View pricing detailsPartnersRecommended migration partnersGoogle Cloud partners can help make even more of your migrations seamless.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Database_Modernization.txt b/Database_Modernization.txt new file mode 100644 index 0000000000000000000000000000000000000000..37dfdba26e363eb851f5c17e1ba1a360ec39741f --- /dev/null +++ b/Database_Modernization.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/database-modernization +Date Scraped: 2025-02-23T11:59:18.001Z + +Content: +Oracle and Google Cloud announce a groundbreaking multicloud partnership. Read the blog.Database modernizationModernize your databases to a unified and open platform with built-in AI. Our fully managed database services reduce complexity, improve cost-efficiency, and increase agility, so you can focus on innovation.Contact usAccelerate your move to Google Cloud with the Database Modernization ProgramStart your journey todayBenefitsUpgrade the databases your applications are built onBe prepared for growth with quick, seamless scalingScale Google Cloud databases seamlessly and build cloud-native apps that are prepared to handle seasonal surges or unpredictable growth.Move faster and focus on business valueEnable developers to ship faster and perform less maintenance with database features like serverless management, auto scaling, and deep integrations.Build more powerful applications with Google CloudTransform your business with a robust ecosystem of services like GKE. 
Easily access data for analytics and AI with BigQuery and Vertex AI.See how you can modernize your database with Google CloudVideoLearn how to modernize your databases using Database Migration ServiceWatch videoEventRun AlloyDB Omni anywhere - in your datacenter, and in any cloudWatch videoAccelerate generative AI-driven transformation with databasesDownload the guideCustomersCustomers are driving innovation with database modernization from Google CloudBlog postAuto Trader: Charting the road from Oracle to PostgreSQL5-min readBlog post70 apps in 2 years: How Renault tackled database migration5-min readBlog postHow Deutsche Bank achieved high availability and scalability with Spanner5-min readCase studyLucille Games aids growth of 15M users in two weeks with Google Cloud5-min readBlog postThe New York Times uses Firestore for collaborative editing in their newsroom6-min readSee all customersPartnersRecommended database partnersGoogle Cloud partners can help make your migration seamless, from assessment to validation.See all partnersExplore our marketplaceRelated servicesFully managed database management servicesGoogle Cloud’s portfolio of database management services let you achieve the performance, scalability, and flexibility your application demands at a reasonable cost, and provide low-latency data access and migration across your databases.AlloyDB for PostgreSQLA fully managed PostgreSQL-compatible database service for your most demanding enterprise database workloads.Cloud SQLFully managed relational database service for MySQL, PostgreSQL, and SQL Server.Database Migration ServiceA serverless and easy to use database migration service for migrations to the cloud.SpannerA relational database with unlimited scale, strong consistency, and up to 99.999% availability.BigtableA NoSQL database service for large analytical and operational workloads with low latency and high throughput.FirestoreA serverless document database for developing applications with real-time sync for mobile and web apps.DocumentationSee tutorials, guides, and resources for this solutionQuickstartDatabase Migration Service for Oracle to AlloyDBThis document describes how you can get started with migration from Oracle to AlloyDB for PostgreSQL quickly.Learn moreTutorialMigrating from DynamoDB to SpannerThis tutorial describes how to migrate from Amazon DynamoDB to Spanner.Learn moreTutorialMigrating from an Oracle® OLTP system to SpannerThis article explains how to migrate your database from Oracle® Online Transaction Processing (OLTP) systems to Spanner.Learn moreTutorialMigrating Data from HBase to BigtableThis article describes considerations and processes for migrating data from an Apache HBase cluster to a Bigtable cluster on Google Cloud.Learn moreTutorialMigrating from MySQL to SpannerThis article explains how to migrate your Online Transactional Processing (OLTP) database from MySQL to Spanner.Learn moreNot seeing what you’re looking for?View documentationWhat's newSee the latest updates about database modernizationSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.ReportGoogle a Leader in the 2023 Gartner Magic Quadrant for Cloud Database ManagementLearn moreVideoThe future of databases with generative AIWatch videoVideoData and AI Trends Report 2024Read reportBlog postThe world’s unpredictable, your databases shouldn’t add to itRead the blogVideoSimplify complex application development using Cloud FirestoreWatch videoBlog postGannett uses Google’s 
serverless platform to reach next generation of readersRead the blogTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Databases(1).txt b/Databases(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..f93bbd4f00828e3b3de48a8ffa806d041c42228e --- /dev/null +++ b/Databases(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/databases +Date Scraped: 2025-02-23T12:03:52.607Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayGoogle Cloud databasesGoogle Cloud offers the only suite of industry-leading databases built on planet-scale infrastructure and for AI. Experience unmatched reliability, price performance, security, and global scale for all your applications. Go to consoleContact salesIndustry-leading databases for innovationRevolutionize customer experiences with operational databases you know and love in virtually any environment whether in the cloud or on-premises. And with Gemini in databases, you can simplify all aspects of the database journey with AI-powered assistance.Download our gen AI and databases white paperUnlocking data efficiency with AlloyDBLearn how Bayer Crop Science modernized their data solution tool with AlloyDB for PostgreSQL to handle increasing demands and improve collaboration.Scaling a generative AI platformLearn how Character.AI uses AlloyDB and Spanner to serve five times the query volume at half the query latency.Streamline database operationsLearn how Google Cloud helped Ford significantly reduce their management overhead and database-related operational tasks. 
Databases that fit your needsDatabase typeGoogle Cloud ServiceUse case examplesRelationalAlloyDB for PostgreSQLPower your most demanding enterprise workloads with AlloyDB, the PostgreSQL-compatible database built for the future.AlloyDB Omni is a downloadable edition designed to run anywhere—in your datacenter, your laptop, and in any cloud.Use AlloyDB AI to easily build enterprise generative AI applications.Simplify migrations to AlloyDB with Database Migration Service.Set up easy-to-use, low-latency database replication with Datastream.Heterogenous migrationsLegacy applicationsEnterprise workloadsHybrid cloud, multicloud, and edgeCloud SQLFully managed MySQL, PostgreSQL, and SQL Server.Simplify migrations to Cloud SQL from MySQL, PostgreSQL, and Oracle databases with Database Migration Service.Set up easy-to-use, low-latency database replication with Datastream.CRMERPEcommerce and webSaaS applicationSpannerCloud-native with unlimited scale, global consistency, and up to 99.999% availability.Processes more than three billion requests per second at peak.Create a 90-day Spanner free trial instance with 10 GB of storage at no cost.Migrate from databases like Oracle or DynamoDB.GamingRetailGlobal financial ledgerSupply chain/inventory managementBare Metal Solution for OracleLift and shift Oracle workloads to Google Cloud.Legacy applicationsData center retirementBigQueryServerless, highly scalable, and cost-effective multicloud data warehouse designed for business agility and offers up to 99.99% availability.Enable near real-time insights on operational data with Datastream for BigQuery.Multicloud analyticsReal-time processingBuilt-in machine learningKey-valueBigtableHighly performant, fully managed NoSQL database service for large analytical and operational workloads. Offers up to 99.999% availability. Processes more than 7 billion requests per second at peak, and with more than 10 Exabytes of data under management.Learn how to migrate from HBase or Cassandra.PersonalizationAdtechRecommendation enginesFraud detectionDocumentFirestoreHighly-scalable, massively popular document database service for mobile, web, and server development that offers richer, faster queries and high availability up to 99.999%. Has a thriving developer community of more than 250,000 monthly active developers. Mobile/web/IoT applicationsReal-time syncOffline syncFirebase Realtime DatabaseStore and sync data in real time.Mobile sign-insPersonalized applications and adsIn-app chatIn-memoryMemorystoreFully managed Redis and Memcached for sub-millisecond data access.Memorystore for Redis Cluster is a fully managed service that can easily scale to terabytes of keyspace and tens of millions of operations per second. 
CachingGamingLeaderboardSocial chat or news feedAdditional NoSQLMongoDB AtlasGlobal cloud database service for modern applications.Mobile/web/IoT applicationsGamingContent managementSingle viewGoogle Cloud Partner ServicesManaged offerings from our open source partner network, including MongoDB, Datastax, Redis Labs, and Neo4j.Leverage existing investmentsDatabases that fit your needsRelationalAlloyDB for PostgreSQLPower your most demanding enterprise workloads with AlloyDB, the PostgreSQL-compatible database built for the future.AlloyDB Omni is a downloadable edition designed to run anywhere—in your datacenter, your laptop, and in any cloud.Use AlloyDB AI to easily build enterprise generative AI applications.Simplify migrations to AlloyDB with Database Migration Service.Set up easy-to-use, low-latency database replication with Datastream.Heterogenous migrationsLegacy applicationsEnterprise workloadsHybrid cloud, multicloud, and edgeKey-valueBigtableHighly performant, fully managed NoSQL database service for large analytical and operational workloads. Offers up to 99.999% availability. Processes more than 7 billion requests per second at peak, and with more than 10 Exabytes of data under management.Learn how to migrate from HBase or Cassandra.PersonalizationAdtechRecommendation enginesFraud detectionDocumentFirestoreHighly-scalable, massively popular document database service for mobile, web, and server development that offers richer, faster queries and high availability up to 99.999%. Has a thriving developer community of more than 250,000 monthly active developers. Mobile/web/IoT applicationsReal-time syncOffline syncIn-memoryMemorystoreFully managed Redis and Memcached for sub-millisecond data access.Memorystore for Redis Cluster is a fully managed service that can easily scale to terabytes of keyspace and tens of millions of operations per second. CachingGamingLeaderboardSocial chat or news feedAdditional NoSQLMongoDB AtlasGlobal cloud database service for modern applications.Mobile/web/IoT applicationsGamingContent managementSingle viewReady to get started? Let’s solve your challenges together.See how tens of thousands of customers are building data-driven applications using Google's data and AI cloud.Learn moreMake your database your secret advantage with Google Cloud.Read the whitepaperLooking for technical resources? 
Explore our guides and tutorials.See how our customers saved time and money with Google Cloud databasesBlog postSabre chose Bigtable and Spanner to serve more than one billion travelers annually5-min readBlog postHow Renault migrated 70 applications from Oracle databases to Cloud SQL for PostgreSQL5-min readBlog postHow Bitly migrated 80 billion rows of core link data from a self-managed MySQL database to Bigtable5-min readBlog postHow Credit Karma uses Bigtable and BigQuery to store and analyze financial data for 130 million members5-min readBlog postHow ShareChat built a scalable data-driven social media platform with Google Cloud6-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Contact salesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Databases.txt b/Databases.txt new file mode 100644 index 0000000000000000000000000000000000000000..63d0e97e2fe829da4f99e105be6ad9306f3fab70 --- /dev/null +++ b/Databases.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/databases +Date Scraped: 2025-02-23T11:59:14.036Z + +Content: +Google is positioned furthest in vision among all leaders in the 2023 Gartner® Magic Quadrant™ for Cloud DBMS. Get the report. Grow your business. Not your overhead.Google Cloud databases provide you the best options for building enterprise gen AI apps for organizations of any size. See our database products.Request a demoContact salesData challenges meet data solutionsGoogle Cloud provides an intelligent, open, and unified data and AI cloud built from the same underlying architecture that powers Google’s most popular, global products, like YouTube, Search, and Maps. Revolutionize customer experiences with operational databases you know and love, in virtually any environment whether in the cloud or on-premises.SolutionsNo matter where you are in your cloud journey, Google Cloud's database solutions meet you where you are. Accelerate your business transformation with database migration, modernization solutions.SolutionsUse Case ExamplesDatabase migrationEasy and cost-effective homogeneous migrations to managed services.Lift-and-shift Oracle workloads to Bare Metal SolutionMove to managed MySQL, PostgreSQL, or SQL ServerMove to managed Redis or MemcachedDatabase modernizationHeterogeneous migrations to move off the old-guard vendors to enterprise-class, OSS-compatible databases.Move from proprietary databases to AlloyDB for PostgreSQLMove from traditional databases to cloud-nativeMove from other clouds to Google Cloud databasesDatabases for gamesExplore how to build global, live games that deliver exceptional user experiences with Google Cloud databases.Serve players at unmatched scale and availabilityBuild fast with increased developer productivityGenerate insights with market-leading solutionsOracle workload migrationReduce overhead, lower costs, and drive innovation with a variety of options for your Oracle workloads. 
Lift-and-shift Oracle to Bare Metal SolutionMove from Oracle to AlloyDB for PostgreSQLModernize Oracle workloads to Cloud SQL, Spanner, or BigQuerySQL Server on Google CloudMigrate and run Microsoft SQL Server on Compute Engine or use managed Cloud SQL for SQL Server.Migrate SQL Server to Cloud SQL for SQL ServerMigrate SQL Server to Google Compute EngineOpen source databasesBuild on fully managed open source databases designed for mission critical applications.Move to managed MySQL or PostgreSQLMove to managed Redis or MemcachedMove to managed open source partner databasesSolutionsDatabase migrationEasy and cost-effective homogeneous migrations to managed services.Lift-and-shift Oracle workloads to Bare Metal SolutionMove to managed MySQL, PostgreSQL, or SQL ServerMove to managed Redis or MemcachedDatabase modernizationHeterogeneous migrations to move off the old-guard vendors to enterprise-class, OSS-compatible databases.Move from proprietary databases to AlloyDB for PostgreSQLMove from traditional databases to cloud-nativeMove from other clouds to Google Cloud databasesDatabases for gamesExplore how to build global, live games that deliver exceptional user experiences with Google Cloud databases.Serve players at unmatched scale and availabilityBuild fast with increased developer productivityGenerate insights with market-leading solutionsOracle workload migrationReduce overhead, lower costs, and drive innovation with a variety of options for your Oracle workloads. Lift-and-shift Oracle to Bare Metal SolutionMove from Oracle to AlloyDB for PostgreSQLModernize Oracle workloads to Cloud SQL, Spanner, or BigQuerySQL Server on Google CloudMigrate and run Microsoft SQL Server on Compute Engine or use managed Cloud SQL for SQL Server.Migrate SQL Server to Cloud SQL for SQL ServerMigrate SQL Server to Google Compute EngineOpen source databasesBuild on fully managed open source databases designed for mission critical applications.Move to managed MySQL or PostgreSQLMove to managed Redis or MemcachedMove to managed open source partner databasesFeeling inspired? Let’s solve your challenges together.See how tens of thousands of customers are building data-driven applications using Google's data and AI cloud.Learn moreMake your database your secret advantage with Google Cloud.Read the whitepaperLearn from our customersBlog postHow Credit Karma uses Bigtable and BigQuery to store and analyze financial data for 130 million members.5-min readBlog postB4A achieves beautiful performance for its beauty platform with AlloyDB for PostgreSQL.5-min readBlog postDiscover how Spanner’s scalability and reliability enabled Google Photos to achieve 10x growth.5-min readSee all customersORACLE® is a registered trademark of Oracle Corporation. Redis is a trademark of Redis Labs Ltd. All rights therein are reserved to Redis Labs Ltd. 
Any use by Google is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Google.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Contact salesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Databases_for_Games.txt b/Databases_for_Games.txt new file mode 100644 index 0000000000000000000000000000000000000000..46cc5cf2d37046a519fa9c77e0d156bc58878661 --- /dev/null +++ b/Databases_for_Games.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/databases/games +Date Scraped: 2025-02-23T11:59:19.814Z + +Content: +Read about Google Cloud's strategy for live games.Databases for gamesBuild global, live games that deliver exceptional user experiences with Google Cloud databases. Serve millions of players at any scale with fast, reliable, and accurate data processing.Contact usBuild live games with SpannerBenefitsHow Google Cloud databases can help you scale to the next level of gamesUnmatched scale and availability with SpannerScale up and down seamlessly to tackle special events and sudden changes in game loads. Offer a consistent experience to millions of players across geographic locations.Spanner helps you lower cost and build fasterRemove operational toil, improve developer productivity with a self-tuning NoSQL-style horizontally scalable database that supports open standards and relational semantics.Elevate in-game experiences with AIAccurately predict skill levels and match players. Identify and remove toxic players to create a more positive community.Key featuresSee how Spanner compares to other databasesSpanner is an ideal database for your games backend Learn more about what makes our databases uniqueHarness world-class technology, performance, and scale from the same underlying architecture that powers Google’s most popular, billion-user products.Develop global multiplayer games with Spanner.Read the whitepaperMake your database your secret advantage with Google Cloud.Read the ebookAccelerate your move to the cloud with managed databases.Read the ebookCustomersThe most successful game developers build on Google CloudBlog postLearn how Pokémon GO scales to millions of requests with Google Cloud.5-min readCase studyEmbark Studios launches multiplayer playtests on demand within minutes.5-min readCase studyCOLOPL scales database resources to manage massive mobile game volumes.5-min readSee all customersRelated servicesGetting you startedBuild and operate global, always-on games with faster time-to-market. SpannerFully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability. 
BigtableHBase-compatible, NoSQL database service with single-digit millisecond latency, limitless scale, and 99.999% availability.FirestoreFully managed, scalable, and serverless document database for developing rich applications, with 99.999% availability.MemorystoreScalable, secure, and highly available in-memory service for Redis and Memcached.BigQueryServerless and cost-effective enterprise data warehouse with built-in machine learning and BI.Google Kubernetes EngineScalable and automated service for deploying, scaling, and managing Kubernetes.DocumentationExplore common use cases for gamesBuild transformative game experiences with a platform built to address your most complex challenges.TutorialDevelop a scalable player profile service with SpannerIn this codelab, you will be creating two Go services that interact with a regional Spanner database to enable players to sign up and start playing.Learn moreTutorialDevelop a game inventory management service with SpannerIn this codelab, you will be creating two Go services that interact with a regional Spanner database to enable a sample game trade post.Learn moreTutorialDevelop a persistent game leaderboard with SpannerIn this video, you will learn how to implement a persistent game leaderboard with Spanner.Learn moreNot seeing what you’re looking for?View documentationWhat's newLearn about the latest news in gamesSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postGame Industry Trends 2022: Millions more players, billions more streams.Read the blogVideoCreating, optimizing, and maintaining streaming data pipelines.Watch videoVideoHow to build an in-game currency platform with Spanner and Cloud Run.Watch videoTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact usWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Dataflow.txt b/Dataflow.txt new file mode 100644 index 0000000000000000000000000000000000000000..276a6797f1ac0117473ae9343fe4e64ca2fe6131 --- /dev/null +++ b/Dataflow.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/dataflow +Date Scraped: 2025-02-23T12:03:33.237Z + +Content: +The Economic Benefits of Dataflow: Reduce costs by up to 63% and improve business outcomes. Read the report.DataflowReal-time data intelligenceMaximize the potential of your real-time data. Dataflow is a fully managed streaming platform that is easy-to-use and scalable to help accelerate real-time decision making and customer experiences.Go to consoleContact salesNew customers get $300 in free credits to spend on Dataflow.Dataflow highlightsLeverage real-time data to power gen AI and ML use casesDeliver rich, personalized experiences to your customersReal-time ETL and data integration into BigQueryLearn Dataflow in under two minutes1:48 min videoFeaturesUse streaming AI and ML to power gen AI models in real timeReal-time data empowers AI/ML models with the latest information, enhancing prediction accuracy. Dataflow ML simplifies deployment and management of complete ML pipelines. We offer ready-to-use patterns for personalized recommendations, fraud detection, threat prevention, and more. Build streaming AI with Vertex AI, Gemini models, and Gemma models, run remote inference, and streamline data processing with MLTransform. 
Enhance MLOps and ML job efficiency with Dataflow GPU and right-fitting capabilities.BlogHow Shopify improved consumer search intent with real-time ML using DataflowRead the storyEnable advanced streaming use cases at enterprise scale Dataflow is a fully managed service that uses open source Apache Beam SDK to enable advanced streaming use cases at enterprise scale. It offers rich capabilities for state and time, transformations, and I/O connectors. Dataflow scales to 4K workers per job and routinely processes petabytes of data. It features autoscaling for optimal resource utilization in both batch and streaming pipelines.Learn more about Apache Beam and Dataflow5:39Deploy multimodal data processing for gen AIDataflow enables parallel ingestion and transformation of multimodal data like images, text, and audio. It applies specialized feature extraction for each modality, then fuses these features into a unified representation. This fused data feeds into generative AI models, empowering them to create new content from the diverse inputs. Internal Google teams leverage Dataflow and FlumeJava to organize and compute model predictions for a large pool of available input data with no latency requirements.Accelerate time to value with templates and notebooksDataflow has tools that make it easy to get started. Dataflow templates are pre-designed blueprints for stream and batch processing and are optimized for efficient CDC and BigQuery data integration. Iteratively build pipelines with the latest data science frameworks from the ground up with Vertex AI notebooks and deploy with the Dataflow runner. Dataflow job builder is a visual UI for building and running Dataflow pipelines in the Google Cloud console, without writing code.Save time with smart diagnostics and monitoring toolsDataflow offers comprehensive diagnostics and monitoring tools. Straggler detection automatically identifies performance bottlenecks, while data sampling allows observing data at each pipeline step. Dataflow Insights offer recommendations for job improvements. The Dataflow UI provides rich monitoring tools, including job graphs, execution details, metrics, autoscaling dashboards, and logging. Dataflow also features a job cost monitoring UI for easy cost estimation.Built-in governance and securityDataflow helps you protect your data in a number of ways: encrypting data in use with confidential VM support; customer managed encryption keys (CMEK); VPC Service Controls integration; turning off public IPs. Dataflow audit logging gives your organization the visibility into Dataflow usage and helps answer the question “Who did what, where, and when?" for better governance.View all featuresHow It WorksDataflow is a fully managed platform for batch and streaming data processing. It enables scalable ETL pipelines, real-time stream analytics, real-time ML, and complex data transformations using Apache Beam's unified model, all on serverless Google Cloud infrastructure.Try Dataflow freeLearn Dataflow in a minute, including how it works and common use casesCommon UsesReal-time analyticsBring in streaming data for real-time analytics and operational pipelinesStart your data streaming journey by integrating your streaming data sources (Pub/Sub, Kafka, CDC events, user clickstream, logs, and sensor data) into BigQuery, Google Cloud Storage data lakes, Spanner, Bigtable, SQL stores, Splunk, Datadog, and more. Explore optimized Dataflow templates to set up your pipelines in a few clicks, no code. 
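As a concrete example of that template workflow, the sketch below launches the Google-provided Pub/Sub Subscription to BigQuery classic template with the gcloud CLI. The project, subscription, dataset, table, and bucket names are placeholders, and the template path and parameter names should be checked against the current Dataflow templates documentation before use.

# Sketch: run the Google-provided "Pub/Sub Subscription to BigQuery" template.
# my-project, my-sub, my_dataset.events, and my-bucket are placeholders.
gcloud dataflow jobs run pubsub-to-bq-demo \
    --region=us-central1 \
    --gcs-location=gs://dataflow-templates-us-central1/latest/PubSub_Subscription_to_BigQuery \
    --staging-location=gs://my-bucket/tmp \
    --parameters=inputSubscription=projects/my-project/subscriptions/my-sub,outputTableSpec=my-project:my_dataset.events

Once the job is running, streaming rows should start appearing in the target BigQuery table; the same pattern applies to the other templates in the catalog.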
Add custom logic to your template jobs with integrated UDF builder or create custom ETL pipelines from scratch using the full power of Beam transforms and I/O connectors ecosystem. Dataflow is also commonly used to reverse ETL processed data from BigQuery to OLTP stores for fast lookups and serving end users. It is a common pattern for Dataflow to write streaming data to multiple storage locations.Explore Dataflow templates allow you to deploy a pre-packaged Dataflow pipeline for deployment Google Cloud Managed Service for Apache Kafka is now generally availableStream data with SQL using BigQuery continuous queries now in previewLaunch your first Dataflow job and take our self-guided course on Dataflow foundations.Tutorials, quickstarts, & labsBring in streaming data for real-time analytics and operational pipelinesStart your data streaming journey by integrating your streaming data sources (Pub/Sub, Kafka, CDC events, user clickstream, logs, and sensor data) into BigQuery, Google Cloud Storage data lakes, Spanner, Bigtable, SQL stores, Splunk, Datadog, and more. Explore optimized Dataflow templates to set up your pipelines in a few clicks, no code. Add custom logic to your template jobs with integrated UDF builder or create custom ETL pipelines from scratch using the full power of Beam transforms and I/O connectors ecosystem. Dataflow is also commonly used to reverse ETL processed data from BigQuery to OLTP stores for fast lookups and serving end users. It is a common pattern for Dataflow to write streaming data to multiple storage locations.Explore Dataflow templates allow you to deploy a pre-packaged Dataflow pipeline for deployment Google Cloud Managed Service for Apache Kafka is now generally availableStream data with SQL using BigQuery continuous queries now in previewLaunch your first Dataflow job and take our self-guided course on Dataflow foundations.Real-time ETL and data integrationModernize your data platform with real-time dataReal-time ETL and integration process and write data immediately, enabling rapid analysis and decision-making. Dataflow's serverless architecture and streaming capabilities make it ideal for building real-time ETL pipelines. Dataflow's ability to autoscale ensures efficiency and scale, while its support for various data sources and destinations simplifies integration.See the Google Cloud sketch for real-time ETL and data integration with DataflowVideo: Dataflow for real-time ETL and integrationSolution guide and GitHub repository: ETL and data integrationVideo: Replicate MySQL data to BigQuery using DataflowBuild your fundamentals with batch processing on Dataflow with this Google Cloud Skills Boost course.Tutorials, quickstarts, & labsModernize your data platform with real-time dataReal-time ETL and integration process and write data immediately, enabling rapid analysis and decision-making. Dataflow's serverless architecture and streaming capabilities make it ideal for building real-time ETL pipelines. 
Dataflow's ability to autoscale ensures efficiency and scale, while its support for various data sources and destinations simplifies integration.See the Google Cloud sketch for real-time ETL and data integration with DataflowVideo: Dataflow for real-time ETL and integrationSolution guide and GitHub repository: ETL and data integrationVideo: Replicate MySQL data to BigQuery using DataflowBuild your fundamentals with batch processing on Dataflow with this Google Cloud Skills Boost course.Real-time ML and gen AIAct in real time with streaming ML / AISplit second decisions drive business value. Dataflow Streaming AI and ML enable customers to implement low latency predictions and inferences, real-time personalization, threat detection, fraud prevention, and many more use cases where real-time intelligence matters. Preprocess data with MLTransform, which allows you to focus on transforming your data and away from writing complex code or managing underlying libraries. Make predictions to your generative AI model using RunInference.See the Google Cloud sketch for real-time ML and generative AI with DataflowVideo: Real-time ML and Generative AI with DataflowSolution guide and GitHub repository: Gen AI and machine learning inferenceReal-time anomaly detection with streaming and AITutorials, quickstarts, & labsAct in real time with streaming ML / AISplit second decisions drive business value. Dataflow Streaming AI and ML enable customers to implement low latency predictions and inferences, real-time personalization, threat detection, fraud prevention, and many more use cases where real-time intelligence matters. Preprocess data with MLTransform, which allows you to focus on transforming your data and away from writing complex code or managing underlying libraries. Make predictions to your generative AI model using RunInference.See the Google Cloud sketch for real-time ML and generative AI with DataflowVideo: Real-time ML and Generative AI with DataflowSolution guide and GitHub repository: Gen AI and machine learning inferenceReal-time anomaly detection with streaming and AIMarketing intelligenceTransform your marketing with real-time insightsReal-time marketing intelligence analyzes current market, customer, and competitor data for quick, informed decisions. It enables agile responses to trends, behaviors, and competitive actions, transforming marketing. Benefits include:Real-time omnichannel marketing with personalized offersImproved customer relationship management through personalized interactionsAgile marketing mix optimizationDynamic user segmentationCompetitive intelligence for staying aheadProactive crisis management on social mediaSee the Google Cloud sketch for marketing intelligence with DataflowVideo: Real-time marketing intelligence with DataflowSolution guide and GitHub repository: Marketing intelligence inferenceTutorials, quickstarts, & labsTransform your marketing with real-time insightsReal-time marketing intelligence analyzes current market, customer, and competitor data for quick, informed decisions. It enables agile responses to trends, behaviors, and competitive actions, transforming marketing. 
Benefits include:Real-time omnichannel marketing with personalized offersImproved customer relationship management through personalized interactionsAgile marketing mix optimizationDynamic user segmentationCompetitive intelligence for staying aheadProactive crisis management on social mediaSee the Google Cloud sketch for marketing intelligence with DataflowVideo: Real-time marketing intelligence with DataflowSolution guide and GitHub repository: Marketing intelligence inferenceClickstream analyticsOptimize and personalize web and app experiencesReal-time clickstream analytics empower businesses to analyze user interactions on websites and apps instantly. This unlocks real-time personalization, A/B testing, and funnel optimization, leading to increased engagement, faster product development, reduced churn, and enhanced product support. Ultimately, it enables a superior user experience and drives business growth through dynamic pricing and personalized recommendations.See the Google Cloud sketch for real-time clickstream analytics with DataflowVideo: Clickstream analytics with DataflowSolution guide and GitHub repository: Clickstream analyticsTutorials, quickstarts, & labsOptimize and personalize web and app experiencesReal-time clickstream analytics empower businesses to analyze user interactions on websites and apps instantly. This unlocks real-time personalization, A/B testing, and funnel optimization, leading to increased engagement, faster product development, reduced churn, and enhanced product support. Ultimately, it enables a superior user experience and drives business growth through dynamic pricing and personalized recommendations.See the Google Cloud sketch for real-time clickstream analytics with DataflowVideo: Clickstream analytics with DataflowSolution guide and GitHub repository: Clickstream analyticsReal-time log replication and analyticsCentralized log management and analyticsGoogle Cloud logs can be replicated to third-party platforms like Splunk using Dataflow for near real-time log processing and analytics. This solution provides centralized log management, compliance, auditing, and analytics capabilities while reducing cost and improving performance.See the Google Cloud sketch for real-time clickstream analytics with DataflowVideo: Log replication and analytics with DataflowSolution guide and GitHub repository: Log replication and analyticsTutorials, quickstarts, & labsCentralized log management and analyticsGoogle Cloud logs can be replicated to third-party platforms like Splunk using Dataflow for near real-time log processing and analytics. 
This solution provides centralized log management, compliance, auditing, and analytics capabilities while reducing cost and improving performance.See the Google Cloud sketch for real-time clickstream analytics with DataflowVideo: Log replication and analytics with DataflowSolution guide and GitHub repository: Log replication and analyticsPricingHow Dataflow pricing worksExplore the billing and resource model for Dataflow.Services and usageDescription PricingDataflow compute resourcesDataflow billing for compute resources includes: Worker CPU and memoryDataflow Shuffle data processed for batch workloadsStreaming Engine Compute UnitsStreaming Engine data processedLearn more on our detailed pricing pageOther Dataflow resourcesOther Dataflow resources that are billed for all jobs include Persistent Disk, GPUs, and snapshots.Learn more on our detailed pricing pageDataflow committed use discounts (CUDs)Dataflow CUDs offer two levels of discounts, depending on the commitment period:A one-year CUD gives you a 20% discount from the on-demand rateA three-year CUD gives you a 40% discount from the on-demand rateLearn more about Dataflow CUDsLearn more about Dataflow pricing. View all pricing details.How Dataflow pricing worksExplore the billing and resource model for Dataflow.Dataflow compute resourcesDescription Dataflow billing for compute resources includes: Worker CPU and memoryDataflow Shuffle data processed for batch workloadsStreaming Engine Compute UnitsStreaming Engine data processedPricingLearn more on our detailed pricing pageOther Dataflow resourcesDescription Other Dataflow resources that are billed for all jobs include Persistent Disk, GPUs, and snapshots.PricingLearn more on our detailed pricing pageDataflow committed use discounts (CUDs)Description Dataflow CUDs offer two levels of discounts, depending on the commitment period:A one-year CUD gives you a 20% discount from the on-demand rateA three-year CUD gives you a 40% discount from the on-demand ratePricingLearn more about Dataflow CUDsLearn more about Dataflow pricing. View all pricing details.Pricing calculatorEstimate your monthly Dataflow costs, including region specific pricing and fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptGet started with Dataflow todayTry Dataflow in consoleHave a large project? Contact salesHow to use DataflowLearn morePre-built Dataflow templatesGet startedBrowse Dataflow code samplesExplore nowBusiness CaseSee why leading customers choose Dataflow Namitha Vijaya Kumar, Product Owner, Google Cloud SRE at ANZ Bank“Dataflow is helping both our batch process and real-time data processing, thereby ensuring timeliness of data is maintained in the enterprise data lake. 
This in turn helps downstream usage of data for analytics/decisioning and delivery of real-time notifications for our retail customers.”Read Forrester WaveRelated contentGoogle has been Peer-recognized as a Customer’s Choice for Event Stream ProcessingGet the reportBringing the power of ML to the world of streaming data with SpotifyWatch the videoYahoo compares Dataflow versus self-managed Apache Flink for two streaming use casesRead the blogDataflow benefitsStreaming ML made easyTurnkey capabilities to bring streaming to AI/ML: RunInference for inference, MLTransform for model training pre-processing, Enrichment for feature store lookups, and dynamic GPU support all bring reduced toil with no wasted spend for limited GPU resources.Optimal price-performance with robust tooling Dataflow offers cost-effective streaming with automated optimization for maximum performance and resource usage. It scales effortlessly to handle any workload and features AI-powered self-healing. Robust tooling helps with operations and understanding. Open, portable, and extensibleDataflow is built for open source Apache Beam with unified batch and streaming support, making your workloads portable between clouds, on-premises, or to edge devices. Partners & IntegrationDataflow partnersGoogle Cloud partners have developed integrations with Dataflow to quickly and easily enable powerful data processing tasks of any size. See all partners to start your streaming journey today. Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Dataform.txt b/Dataform.txt new file mode 100644 index 0000000000000000000000000000000000000000..1cd8dcfaf9b970f0ae39f61c4b2fdedc0c4cec2c --- /dev/null +++ b/Dataform.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/dataform +Date Scraped: 2025-02-23T12:03:47.238Z + +Content: +Jump to DataformDataformDevelop and operationalize scalable data transformations pipelines in BigQuery using SQL.Go to consoleDevelop curated, up-to-date, trusted, and documented tables in BigQueryEnable data analysts and data engineers to collaborate on the same repositoryBuild scalable data pipelines in BigQuery using SQLIntegrate with GitHub and GitLabKeep tables updated without managing infrastructureBenefitsSimplify your data processing architectureDevelop and operationalize scalable data pipelines in BigQuery using SQL from a single environment and without additional dependencies. Collaborate using software development practicesWith Dataform, data teams manage their SQL code and data assets' definitions following software engineering best practices—such as version control, environments, testing, and documentation. Build production-grade SQL pipelinesDataform abstracts away the complexity of building SQL pipelines. Data analysts can manage dependencies, configure data quality tests, and orchestrate complex pipelines using SQL.Key featuresKey featuresOpen source, SQL-based language to manage data transformationsDataform Core enables data engineers and data analysts to centrally create table definitions, configure dependencies, add column descriptions, and configure data quality assertions in a single repository using just SQL.Dataform Core functions can be adopted incrementally and additively, without modifying existing code.Dataform Core is open source and can be used locally, giving users freedom from lock-in, and flexibility for more advanced use cases. 
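As a small illustration of that local workflow, the sketch below uses the open source Dataform CLI (distributed as the @dataform/cli npm package) to compile and run a project from a developer machine. Command arguments and the init prompts differ between CLI versions, so treat this as an outline and confirm each step with dataform help.

# Sketch: working with Dataform Core locally through the open source CLI.
npm install -g @dataform/cli

# Scaffold a new project in the current directory (arguments depend on CLI version).
dataform init

# Compile the SQLX definitions into executable SQL and surface any errors.
dataform compile

# Execute the compiled dependency graph against the configured warehouse
# (requires warehouse credentials, for example set up with "dataform init-creds").
dataform run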
Fully managed, serverless orchestration for data pipelinesDataform handles the operational infrastructure to update your tables following the dependencies between your tables and using the latest version of your code. Lineage and data information can be tracked seamlessly with Dataform integrations. Trigger SQL workflows manually, or schedule via Cloud Composer, Workflows, or third-party services.Fully featured cloud development environment to develop with SQLDefine tables, fix issues with real-time error messages, visualize dependencies, commit the changes to Git, and schedule pipelines in minutes, from a single interface, without leaving your web browser. Connect your repository with third-party providers such as GitHub and GitLab. Commit changes and push or open pull requests from the IDE. DocumentationDocumentationQuickstartCreate and execute a SQL workflowLearn how to create a SQL workflow and execute it in BigQuery by using Dataform and SQLX.Learn moreTutorialVersion control your codeLearn how to use version control in Dataform to keep track of development.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.PricingPricingDataform is a free service.There may be associated costs from other services when using the product. View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Dataplex.txt b/Dataplex.txt new file mode 100644 index 0000000000000000000000000000000000000000..03b65e6ae5acb3c283dc8b213626150f5373d192 --- /dev/null +++ b/Dataplex.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/dataplex +Date Scraped: 2025-02-23T12:03:45.489Z + +Content: +Google Cloud named a leader in the 2024 Gartner Magic Quadrant for Data Integration Tools. Get the report.DataplexIntelligent data to AI governanceCentrally discover, manage, monitor, and govern data and AI artifacts across your data platform, providing access to trusted data and powering analytics and AI at scale.Go to consoleContact salesProduct highlightsData governance to improve data quality and discoverabilityAn intelligent data fabric powered by AI/MLData to AI governanceLearn Dataplex in 2 minutes2 minute videoFeaturesData to AI governance with Vertex AI and DataplexInstantly discover AI models, datasets, features, and related data artifacts you need in a single search experience, spanning projects and regions while adhering to IAM permissions. Use Dataplex to enrich AI artifacts with critical business metadata for informed decision-making, such as ownership, key attributes, and relevant context.VIDEOData and AI governance at Next45:31Data governance in BigQueryTo support the end-to-end data lifecycle and make it easier for customers to manage, discover, and govern data, we’re bringing Dataplex capabilities directly into BigQuery, including data quality, data lineage, and profiling. 
Now you can apply data governance directly to your data without leaving BigQuery.How Dataplex Provides Data Governance for the AI eraRead about the newest innovationsRead the blogGen AI-powered insights and semantic searchJumpstart your analytics with a curated list of questions that you can ask of your data. Harnessing the power of metadata and cutting-edge Gemini models, Data Insights generates tailored queries to uncover hidden patterns and valuable insights from your data. Semantic Metadata search for data helps you discover data using the language of your choice. Users have the ability to search for data assets using natural language queries, eliminating the requirement to recall search syntax and qualifiers. Simplified data discovery with Data CatalogAutomate data discovery, classification, and metadata enrichment of structured, semi-structured, and unstructured data, stored in Google Cloud and beyond, with built-in data intelligence. Manage technical, operational, and business metadata, for all your data, in a unified, flexible, and powerful Data Catalog. Enrich metadata with relevant business context using a built-in business glossary. Easily search, find, and understand your data with a built-in global, faceted search.End-to-end data lineageEasily understand where your data comes from and the transformations it goes through with end-to-end data lineage. Automatically processed for Google Cloud data sources and extendible to third-party data sources.Automated data qualityUse automatically captured data lineage and built-in data profiling to better understand your data, trace dependencies, and effectively troubleshoot data issues. Automate data quality across distributed data and enable access to data you can trust.View all featuresHow It WorksDataplex enables you to manage, monitor, and govern data and AI artifacts across data lakes, warehouses, and databases. It helps users intelligently establish data profiles, assess data quality, determine data lineage, classify data, organize data into domains, and manage and govern the data life cycle.Dataplex quickstartWatch: Manage and govern distributed data with DataplexCommon UsesData to AI governanceData to AI governance with Dataplex and Vertex AIIn a single search experience, you can discover data and AI assets org-wide and instantly discover AI models, datasets, and related data artifacts, spanning projects and regions while adhering to IAM permissions. You can also augment assets with business context and enrich AI artifacts with business metadata for informed decision-making, such as ownership, key attributes, and relevant context.Use Data Catalog to search for Vertex AI model and dataset resources Watch the Next '24 panel of data leaders discuss data to AI governanceTutorials, quickstarts, & labsData to AI governance with Dataplex and Vertex AIIn a single search experience, you can discover data and AI assets org-wide and instantly discover AI models, datasets, and related data artifacts, spanning projects and regions while adhering to IAM permissions. 
You can also augment assets with business context and enrich AI artifacts with business metadata for informed decision-making, such as ownership, key attributes, and relevant context.Use Data Catalog to search for Vertex AI model and dataset resources Watch the Next '24 panel of data leaders discuss data to AI governanceBuild a data meshUse Dataplex to build a data meshA data mesh is a strategy where data ownership is decentralized and handled by domain data owners, where distributed datasets across locations can improve data accessibility and operational efficiency. Dataplex helps logically organize your data and related artifacts into a Dataplex Lake, or a data domain, that enables you to unify distributed data and organize it based on the business context.Read the guide on how to build a data mesh with DataplexVideo: Using Dataplex as a unified data mesh across distributed dataBlog: Build a data mesh on Google Cloud with DataplexDBInisght report: Google Dataplex and the data meshTutorials, quickstarts, & labsUse Dataplex to build a data meshA data mesh is a strategy where data ownership is decentralized and handled by domain data owners, where distributed datasets across locations can improve data accessibility and operational efficiency. Dataplex helps logically organize your data and related artifacts into a Dataplex Lake, or a data domain, that enables you to unify distributed data and organize it based on the business context.Read the guide on how to build a data mesh with DataplexVideo: Using Dataplex as a unified data mesh across distributed dataBlog: Build a data mesh on Google Cloud with DataplexDBInisght report: Google Dataplex and the data meshDemocratize data insightsDemocratize data insights with Dataplex Data CatalogSearch and discover your data and AI artifacts across silos using a fully managed, serverless Data Catalog within Dataplex. Data Catalog has built-in capabilities to automatically ingest technical metadata, enrich metadata with relevant business context, and empower every user in your organization to easily find and understand their data and AI artifacts with a powerful, faceted search interface.Read the guide on using Data Catalog for better data discovery, metadata management, and moreGuide: Search and view data assets with Data Catalog Guide: Cloud Storage data discovery with DataplexGet Started: Enrich table metadata with taggingTutorials, quickstarts, & labsDemocratize data insights with Dataplex Data CatalogSearch and discover your data and AI artifacts across silos using a fully managed, serverless Data Catalog within Dataplex. Data Catalog has built-in capabilities to automatically ingest technical metadata, enrich metadata with relevant business context, and empower every user in your organization to easily find and understand their data and AI artifacts with a powerful, faceted search interface.Read the guide on using Data Catalog for better data discovery, metadata management, and moreGuide: Search and view data assets with Data Catalog Guide: Cloud Storage data discovery with DataplexGet Started: Enrich table metadata with taggingPricingDataplex pricingDataplex pricing is based on pay-as-you-go usage.Service and usageDescriptionPrice (USD)Dataplex processingDataplex standard and premium processing are metered by the Data Compute Unit (DCU). 
DCU-hour is an abstract billing unit for Dataplex and the actual metering depends on the individual features you use.Free tier Dataplex processingFirst 100 DCU-hour per month for Dataplex standard processing.No chargeStandard Dataplex processing Dataplex standard tier covers the data discovery functionality that automatically discovers table and fileset metadata from Cloud Storage.Starting at$0.060per DCU-hourPremium Dataplex processingThe Dataplex premium processing tier covers the data exploration workbench, data lineage, data quality, and data profiling capabilities of Dataplex.Starting at$0.089per DCU-hourData Catalog pricingMetadata storage pricingData Catalog measures the average amount of the stored metadata during a short time interval. For billing, these measurements are combined into a one-month average, which is multiplied by the monthly rate.Dataplex free tierFirst 1 MiB monthly average storage.No chargeMetadata storage Over 1 MiB monthly average storage.Starting at$2per GiB per monthAPI chargesData Catalog charges for API calls made to the Data Catalog API and Data Lineage API.API callsFirst 1 million in a month.No chargeAPI callsOver 1 million in a month.Starting at$10per 100,000 API callsDataplex shuffle storage pricingShuffle storage pricing covers any disk storage specified in the environments configured for the data exploration workbench.Starting at$0.040per GB-monthOther usageData organization features in Dataplex (lake, zone, or asset setup) and security policy application and propagation, are provided free of charge.Some Dataplex functionalities trigger job execution via Dataproc, BigQuery, and Dataflow. Usages for those services are charged according to their respective pricing models, and charges will show up under those services as such.Dataplex pricingDataplex pricing is based on pay-as-you-go usage.Dataplex processingDescriptionDataplex standard and premium processing are metered by the Data Compute Unit (DCU). DCU-hour is an abstract billing unit for Dataplex and the actual metering depends on the individual features you use.Price (USD)Free tier Dataplex processingFirst 100 DCU-hour per month for Dataplex standard processing.DescriptionNo chargeStandard Dataplex processing Dataplex standard tier covers the data discovery functionality that automatically discovers table and fileset metadata from Cloud Storage.DescriptionStarting at$0.060per DCU-hourPremium Dataplex processingThe Dataplex premium processing tier covers the data exploration workbench, data lineage, data quality, and data profiling capabilities of Dataplex.DescriptionStarting at$0.089per DCU-hourData Catalog pricingDescriptionMetadata storage pricingData Catalog measures the average amount of the stored metadata during a short time interval. 
For billing, these measurements are combined into a one-month average, which is multiplied by the monthly rate.Price (USD)Dataplex free tierFirst 1 MiB monthly average storage.DescriptionNo chargeMetadata storage Over 1 MiB monthly average storage.DescriptionStarting at$2per GiB per monthAPI chargesData Catalog charges for API calls made to the Data Catalog API and Data Lineage API.DescriptionAPI callsFirst 1 million in a month.DescriptionNo chargeAPI callsOver 1 million in a month.DescriptionStarting at$10per 100,000 API callsDataplex shuffle storage pricingDescriptionShuffle storage pricing covers any disk storage specified in the environments configured for the data exploration workbench.Price (USD)Starting at$0.040per GB-monthOther usageDescriptionData organization features in Dataplex (lake, zone, or asset setup) and security policy application and propagation, are provided free of charge.Price (USD)Some Dataplex functionalities trigger job execution via Dataproc, BigQuery, and Dataflow. Usages for those services are charged according to their respective pricing models, and charges will show up under those services as such.DescriptionExplore pricingVisit the Dataplex pricing to see pricing per region and more.Visit pricing pageCustom QuoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptGet started with DataplexGo to consoleDataplex QuickstartGet startedHow Dataplex worksLearn moreDataplex best practicesRead the guideExplore Data Catalog code samplesGet started todayPartners & IntegrationPartnering with industry leadersPartnersExplore all partners in Google Cloud Partner Center.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Dataproc(1).txt b/Dataproc(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..78d6e610c81bc416e2201fcf0dcbe8b648079340 --- /dev/null +++ b/Dataproc(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/dataproc +Date Scraped: 2025-02-23T12:05:25.649Z + +Content: +Launch a preconfigured solution that unifies data lakes and data warehouses for storing, processing, and analyzing both structured and unstructured data. Deploy in console.Jump to DataprocDataprocDataproc is a fully managed and highly scalable service for running Apache Hadoop, Apache Spark, Apache Flink, Presto, and 30+ open source tools and frameworks. Use Dataproc for data lake modernization, ETL, and secure data science, at scale, integrated with Google Cloud, at a fraction of the cost.Go to consoleFlexible: Use serverless, or manage clusters on Google Compute and Kubernetes. Deploy a Google-recommended solution that unifies data lakes and data warehouses for storing, processing, and analyzing both structured and unstructured dataOpen: Run open source data analytics at scale, with enterprise grade securityIntelligent: Enable data users through integrations with Vertex AI, BigQuery, and DataplexSecure: Configure advanced security such as Kerberos, Apache Ranger and Personal AuthenticationCost-effective: Realize 54% lower TCO compared to on-prem data lakes with per-second pricingVIDEODataproc supports popular OSS like Apache Spark, Presto, Flink, and more.1:23BenefitsModernize your open source data processingServerless deployment, logging, and monitoring let you focus on your data and analytics, not on your infrastructure. Reduce TCO of Apache Spark management by up to 54%. 
Build and train models 5X faster.Intelligent and seamless OSS for data scienceEnable data scientists and data analysts to seamlessly perform data science jobs through native integrations with BigQuery, Dataplex, Vertex AI, and OSS notebooks like JupyterLab.Enterprise security integrated with Google CloudSecurity features such as default at-rest encryption, OS Login, VPC Service Controls, and customer-managed encryption keys (CMEK). Enable Hadoop Secure Mode via Kerberos by adding a security configuration. Key featuresKey features Fully managed and automated big data open source softwareServerless deployment, logging, and monitoring let you focus on your data and analytics, not on your infrastructure. Reduce TCO of Apache Spark management by up to 54%. Enable data scientists and engineers to build and train models 5X faster, compared to traditional notebooks, through integration with Vertex AI Workbench. The Dataproc Jobs API makes it easy to incorporate big data processing into custom applications, while Dataproc Metastore eliminates the need to run your own Hive metastore or catalog service. Containerize Apache Spark jobs with KubernetesBuild your Apache Spark jobs using Dataproc on Kubernetes so you can use Dataproc with Google Kubernetes Engine (GKE) to provide job portability and isolation. Enterprise security integrated with Google CloudWhen you create a Dataproc cluster, you can enable Hadoop Secure Mode via Kerberos by adding a Security Configuration. Additionally, some of the most commonly used Google Cloud-specific security features used with Dataproc include default at-rest encryption, OS Login, VPC Service Controls, and customer-managed encryption keys (CMEK).The best of open source with the best of Google CloudDataproc lets you take the open source tools, algorithms, and programming languages that you use today, but makes it easy to apply them on cloud-scale datasets. At the same time, Dataproc has out-of-the-box integration with the rest of the Google Cloud analytics, database, and AI ecosystem. Data scientists and engineers can quickly access data and build data applications connecting Dataproc to BigQuery, Vertex AI, Spanner, Pub/Sub, or Data Fusion. View all features3:39Demo: See how Dataproc and Cloud Storage can help accelerate loan processingCustomersLearn from customers using DataprocBlog postBroadcom modernizes its data lake with Dataproc and unlocks flexible data management 5-min readCase studyDataproc provides Wayfair high-performance, low-maintenance access to unstructured data at scale.8-min readVideo Vodafone Group moves 600 on-premises Apache Hadoop servers to the cloud. 47:17Case study Twitter moved from on-premises Hadoop to Google Cloud to more cost-effectively store and query data.49:57Case study Pandora migrated 7 PB+ of data from their on-prem Hadoop to Google Cloud to help scale and lower costs.50:51Case study Spinning up and down Dataproc clusters helped METRO reduce infrastructure costs by 30% to 50%.5-min readSee all customersWhat's newWhat's newServerless Spark is now Generally Available. Sign up for preview for other Spark on Google Cloud services. 
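As a concrete example of incorporating Dataproc into a custom application through the Jobs API mentioned above, the following minimal sketch submits a PySpark job with the google-cloud-dataproc Python client. The project ID, region, cluster name, and Cloud Storage path are placeholder assumptions.

```python
# Minimal sketch: submitting a PySpark job through the Dataproc Jobs API.
# Assumes an existing project, region, and cluster; the Cloud Storage path
# to the PySpark script is a hypothetical placeholder.
from google.cloud import dataproc_v1

project_id = "my-project"      # assumption: replace with your project ID
region = "us-central1"         # assumption: replace with your region
cluster_name = "my-cluster"    # assumption: an existing Dataproc cluster

job_client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": cluster_name},
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/wordcount.py"},
}

# Submit the job and block until it completes.
operation = job_client.submit_job_as_operation(
    request={"project_id": project_id, "region": region, "job": job}
)
response = operation.result()
print(f"Job finished with state: {response.status.state.name}")
```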
Blog postServerless Spark jobs made seamless for all data usersLearn moreBlog postConverging architectures: Bringing data lakes and data warehouses togetherRead the blogBlog postNew Dataproc best practices guideLearn moreBlog postNew GA Dataproc features extend data science and ML capabilitiesLearn moreDocumentationDocumentationGoogle Cloud BasicsServerless SparkSubmit Spark jobs which auto-provision and auto-scale. More details with the quickstart link below. Learn moreAPIs & Libraries Dataproc initialization actionsAdd other OSS projects to your Dataproc clusters with pre-built initialization actions.Learn moreAPIs & Libraries Open source connectorsLibraries and tools for Apache Hadoop interoperability.Learn moreAPIs & LibrariesDataproc Workflow Templates The Dataproc WorkflowTemplates API provides a flexible and easy-to-use mechanism for managing and executing workflows.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notes Read about the latest releases for Dataproc.Use casesUse casesUse case Move your Hadoop and Spark clusters to the cloudEnterprises are migrating their existing on-premises Apache Hadoop and Spark clusters over to Dataproc to manage costs and unlock the power of elastic scale. With Dataproc, enterprises get a fully managed, purpose-built cluster that can autoscale to support any data or analytics processing job. Best practice Apache Spark migration guideDon’t rewrite your Spark code in Google Cloud. Learn moreBest practice Migrate HDFS data to Google CloudLearn when and how you should migrate your on-premises HDFS data to Google Cloud Storage. Learn moreBest practice Moving security controls from on-premises to DataprocMigrate existing security controls to Dataproc to help achieve enterprise and industry compliance. Learn moreUse case Data science on DataprocCreate your ideal data science environment by spinning up a purpose-built Dataproc cluster. Integrate open source software like Apache Spark, NVIDIA RAPIDS, and Jupyter notebooks with Google Cloud AI services and GPUs to help accelerate your machine learning and AI development. TutorialUse Dataproc and Apache Spark ML for machine learningIntegrate Dataproc with other Google Cloud services to build an end-to-end data science experience. Learn moreBest practiceIT governed open source data science with Dataproc HubLearn how Dataproc Hub can provide your data scientist all the open source tools they need in an IT governed and cost control way. Learn moreTutorialDataproc meets TensorFlow on YARNLearn how to orchestrate distributed TensorFlow with TonY. Learn moreView all technical guidesAll featuresAll featuresServerless SparkDeploy Spark applications and pipelines that autoscale without any manual infrastructure provisioning or tuning. Resizable clustersCreate and scale clusters quickly with various virtual machine types, disk sizes, number of nodes, and networking options. Autoscaling clustersDataproc autoscaling provides a mechanism for automating cluster resource management and enables automatic addition and subtraction of cluster workers (nodes). Cloud integratedBuilt-in integration with Cloud Storage, BigQuery, Dataplex, Vertex AI, Composer, Bigtable, Cloud Logging, and Cloud Monitoring, giving you a more complete and robust data platform. 
Automatic or manual configurationDataproc automatically configures hardware and software but also gives you manual control. Developer toolsMultiple ways to manage a cluster, including an easy-to-use web UI, the Cloud SDK, RESTful APIs, and SSH access. Initialization actionsRun initialization actions to install or customize the settings and libraries you need when your cluster is created. Optional componentsUse optional components to install and configure additional components on the cluster. Optional components are integrated with Dataproc components and offer fully configured environments for Zeppelin, Presto, and other open source software components related to the Apache Hadoop and Apache Spark ecosystem. Custom containers and imagesDataproc serverless Spark can be provisioned with custom Docker containers. Dataproc clusters can be provisioned with a custom image that includes your pre-installed Linux operating system packages. Flexible virtual machinesClusters can use custom machine types and preemptible virtual machines to make them the perfect size for your needs. Workflow templatesDataproc workflow templates provide a flexible and easy-to-use mechanism for managing and executing workflows. A workflow template is a reusable workflow configuration that defines a graph of jobs with information on where to run those jobs. Automated policy management Standardize security, cost, and infrastructure policies across a fleet of clusters. You can create policies for resource management, security, or network at a project level. You can also make it easy for users to use the correct images, components, metastore, and other peripheral services, enabling you to manage your fleet of clusters and serverless Spark policies in the future. Smart alertsDataproc recommended alerts allow customers to adjust the thresholds for the pre-configured alerts to get alerts on idle clusters, runaway clusters and jobs, overutilized clusters, and more. Customers can further customize these alerts and even create advanced cluster and job management capabilities. These capabilities allow customers to manage their fleet at scale.Dataproc on Google Distributed Cloud (GDC)Dataproc on GDC enables you to run Spark on the GDC Edge Appliance in your data center. Now you can use the same Spark applications on Google Cloud as well as on sensitive data in your data center.Multi-regional Dataproc MetastoreDataproc Metastore is a fully managed, highly available Hive metastore (HMS) with fine-grained access control. Multi-regional Dataproc Metastore provides active-active DR and resilience against regional outages.PricingPricingDataproc pricing is based on the number of vCPUs and the duration of time that they run. While pricing shows an hourly rate, we charge down to the second, so you only pay for what you use.Ex: A cluster with 6 nodes (1 main + 5 workers) of 4 vCPUs each, running for 2 hours, would cost $0.48. Dataproc charge = # of vCPUs * hours * Dataproc price = 24 * 2 * $0.01 = $0.48. Please see the pricing page for details.View pricing detailsPartnersPartnersDataproc integrates with key partners to complement your existing investments and skill sets.
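As a small sketch of the worked pricing example above, the following reproduces the same calculation in Python, assuming the illustrative $0.01 per vCPU-hour rate used in that example (actual rates vary by region; see the pricing page).

```python
# Reproduces the worked pricing example above: 6 nodes x 4 vCPUs, running for
# 2 hours, at the illustrative $0.01 per vCPU-hour rate (actual rates vary by region).
def dataproc_charge(nodes: int, vcpus_per_node: int, hours: float,
                    price_per_vcpu_hour: float = 0.01) -> float:
    total_vcpus = nodes * vcpus_per_node
    return total_vcpus * hours * price_per_vcpu_hour

charge = dataproc_charge(nodes=6, vcpus_per_node=4, hours=2)
print(f"${charge:.2f}")  # $0.48, matching the example: 24 vCPUs * 2 hours * $0.01
```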
See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Dataproc.txt b/Dataproc.txt new file mode 100644 index 0000000000000000000000000000000000000000..def1f61f0a4d8486004456d11a29410d700c75ea --- /dev/null +++ b/Dataproc.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/dataproc +Date Scraped: 2025-02-23T12:03:37.167Z + +Content: +Launch a preconfigured solution that unifies data lakes and data warehouses for storing, processing, and analyzing both structured and unstructured data. Deploy in console.Jump to DataprocDataprocDataproc is a fully managed and highly scalable service for running Apache Hadoop, Apache Spark, Apache Flink, Presto, and 30+ open source tools and frameworks. Use Dataproc for data lake modernization, ETL, and secure data science, at scale, integrated with Google Cloud, at a fraction of the cost.Go to consoleFlexible: Use serverless, or manage clusters on Google Compute and Kubernetes. Deploy a Google-recommended solution that unifies data lakes and data warehouses for storing, processing, and analyzing both structured and unstructured dataOpen: Run open source data analytics at scale, with enterprise grade securityIntelligent: Enable data users through integrations with Vertex AI, BigQuery, and DataplexSecure: Configure advanced security such as Kerberos, Apache Ranger and Personal AuthenticationCost-effective: Realize 54% lower TCO compared to on-prem data lakes with per-second pricingVIDEODataproc supports popular OSS like Apache Spark, Presto, Flink, and more.1:23BenefitsModernize your open source data processingServerless deployment, logging, and monitoring let you focus on your data and analytics, not on your infrastructure. Reduce TCO of Apache Spark management by up to 54%. Build and train models 5X faster.Intelligent and seamless OSS for data scienceEnable data scientists and data analysts to seamlessly perform data science jobs through native integrations with BigQuery, Dataplex, Vertex AI, and OSS notebooks like JupyterLab.Enterprise security integrated with Google CloudSecurity features such as default at-rest encryption, OS Login, VPC Service Controls, and customer-managed encryption keys (CMEK). Enable Hadoop Secure Mode via Kerberos by adding a security configuration. Key featuresKey features Fully managed and automated big data open source softwareServerless deployment, logging, and monitoring let you focus on your data and analytics, not on your infrastructure. Reduce TCO of Apache Spark management by up to 54%. Enable data scientists and engineers to build and train models 5X faster, compared to traditional notebooks, through integration with Vertex AI Workbench. The Dataproc Jobs API makes it easy to incorporate big data processing into custom applications, while Dataproc Metastore eliminates the need to run your own Hive metastore or catalog service. Containerize Apache Spark jobs with KubernetesBuild your Apache Spark jobs using Dataproc on Kubernetes so you can use Dataproc with Google Kubernetes Engine (GKE) to provide job portability and isolation. Enterprise security integrated with Google CloudWhen you create a Dataproc cluster, you can enable Hadoop Secure Mode via Kerberos by adding a Security Configuration. 
Additionally, some of the most commonly used Google Cloud-specific security features used with Dataproc include default at-rest encryption, OS Login, VPC Service Controls, and customer-managed encryption keys (CMEK).The best of open source with the best of Google CloudDataproc lets you take the open source tools, algorithms, and programming languages that you use today, but makes it easy to apply them on cloud-scale datasets. At the same time, Dataproc has out-of-the-box integration with the rest of the Google Cloud analytics, database, and AI ecosystem. Data scientists and engineers can quickly access data and build data applications connecting Dataproc to BigQuery, Vertex AI, Spanner, Pub/Sub, or Data Fusion. View all features3:39Demo: See how Dataproc and Cloud Storage can help accelerate loan processingCustomersLearn from customers using DataprocBlog postBroadcom modernizes its data lake with Dataproc and unlocks flexible data management 5-min readCase studyDataproc provides Wayfair high-performance, low-maintenance access to unstructured data at scale.8-min readVideo Vodafone Group moves 600 on-premises Apache Hadoop servers to the cloud. 47:17Case study Twitter moved from on-premises Hadoop to Google Cloud to more cost-effectively store and query data.49:57Case study Pandora migrated 7 PB+ of data from their on-prem Hadoop to Google Cloud to help scale and lower costs.50:51Case study Spinning up and down Dataproc clusters helped METRO reduce infrastructure costs by 30% to 50%.5-min readSee all customersWhat's newWhat's newServerless Spark is now Generally Available. Sign up for preview for other Spark on Google Cloud services. Blog postServerless Spark jobs made seamless for all data usersLearn moreBlog postConverging architectures: Bringing data lakes and data warehouses togetherRead the blogBlog postNew Dataproc best practices guideLearn moreBlog postNew GA Dataproc features extend data science and ML capabilitiesLearn moreDocumentationDocumentationGoogle Cloud BasicsServerless SparkSubmit Spark jobs which auto-provision and auto-scale. More details with the quickstart link below. Learn moreAPIs & Libraries Dataproc initialization actionsAdd other OSS projects to your Dataproc clusters with pre-built initialization actions.Learn moreAPIs & Libraries Open source connectorsLibraries and tools for Apache Hadoop interoperability.Learn moreAPIs & LibrariesDataproc Workflow Templates The Dataproc WorkflowTemplates API provides a flexible and easy-to-use mechanism for managing and executing workflows.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notes Read about the latest releases for Dataproc.Use casesUse casesUse case Move your Hadoop and Spark clusters to the cloudEnterprises are migrating their existing on-premises Apache Hadoop and Spark clusters over to Dataproc to manage costs and unlock the power of elastic scale. With Dataproc, enterprises get a fully managed, purpose-built cluster that can autoscale to support any data or analytics processing job. Best practice Apache Spark migration guideDon’t rewrite your Spark code in Google Cloud. Learn moreBest practice Migrate HDFS data to Google CloudLearn when and how you should migrate your on-premises HDFS data to Google Cloud Storage. 
Learn moreBest practice Moving security controls from on-premises to DataprocMigrate existing security controls to Dataproc to help achieve enterprise and industry compliance. Learn moreUse case Data science on DataprocCreate your ideal data science environment by spinning up a purpose-built Dataproc cluster. Integrate open source software like Apache Spark, NVIDIA RAPIDS, and Jupyter notebooks with Google Cloud AI services and GPUs to help accelerate your machine learning and AI development. TutorialUse Dataproc and Apache Spark ML for machine learningIntegrate Dataproc with other Google Cloud services to build an end-to-end data science experience. Learn moreBest practiceIT governed open source data science with Dataproc HubLearn how Dataproc Hub can provide your data scientist all the open source tools they need in an IT governed and cost control way. Learn moreTutorialDataproc meets TensorFlow on YARNLearn how to orchestrate distributed TensorFlow with TonY. Learn moreView all technical guidesAll featuresAll featuresServerless SparkDeploy Spark applications and pipelines that autoscale without any manual infrastructure provisioning or tuning. Resizable clustersCreate and scale clusters quickly with various virtual machine types, disk sizes, number of nodes, and networking options. Autoscaling clustersDataproc autoscaling provides a mechanism for automating cluster resource management and enables automatic addition and subtraction of cluster workers (nodes). Cloud integratedBuilt-in integration with Cloud Storage, BigQuery, Dataplex, Vertex AI, Composer, Bigtable, Cloud Logging, and Cloud Monitoring, giving you a more complete and robust data platform. Automatic or manual configurationDataproc automatically configures hardware and software but also gives you manual control. Developer toolsMultiple ways to manage a cluster, including an easy-to-use web UI, the Cloud SDK, RESTful APIs, and SSH access. Initialization actionsRun initialization actions to install or customize the settings and libraries you need when your cluster is created. Optional componentsUse optional components to install and configure additional components on the cluster. Optional components are integrated with Dataproc components and offer fully configured environments for Zeppelin, Presto, and other open source software components related to the Apache Hadoop and Apache Spark ecosystem. Custom containers and imagesDataproc serverless Spark can be provisioned with custom docker containers. Dataproc clusters can be provisioned with a custom image that includes your pre-installed Linux operating system packages. Flexible virtual machinesClusters can use custom machine types and preemptible virtual machines to make them the perfect size for your needs. Workflow templatesDataproc workflow templates provide a flexible and easy-to-use mechanism for managing and executing workflows. A workflow template is a reusable workflow configuration that defines a graph of jobs with information on where to run those jobs. Automated policy management Standardize security, cost, and infrastructure policies across a fleet of clusters. You can create policies for resource management, security, or network at a project level. You can also make it easy for users to use the correct images, components, metastore, and other peripheral services, enabling you to manage your fleet of clusters and serverless Spark policies in the future. 
Smart alertsDataproc recommended alerts allow customers to adjust the thresholds for the pre-configured alerts to get alerts on idle, runaway clusters, jobs, overutilized clusters and more. Customers can further customize these alerts and even create advanced cluster and job management capabilities. These capabilities allow customers to manage their fleet at scale.Dataproc on Google Distributed Cloud (GDC)Dataproc on GDC enables you to run Spark on the GDC Edge Appliance in your data center. Now you can use the same Spark applications on Google Cloud as well as on sensitive data in your data center.Multi-regional Dataproc MetastoreDataproc Metastore is a fully managed, highly available Hive metastore (HMS) with fine-grained access control. Multi-regional Dataproc Metastore provides active-active DR and resilience against regional outages.PricingPricingDataproc pricing is based on the number of vCPU and the duration of time that they run. While pricing shows hourly rate, we charge down to the second, so you only pay for what you use.Ex: A cluster with 6 nodes (1 main + 5 workers) of 4 CPUs each ran for 2 hours would cost $0.48. Dataproc charge = # of vCPUs * hours * Dataproc price = 24 * 2 * $0.01 = $0.48Please see pricing page for details.View pricing detailsPartnersPartnersDataproc integrates with key partners to complement your existing investments and skill sets. See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Datasets.txt b/Datasets.txt new file mode 100644 index 0000000000000000000000000000000000000000..22953eafa5013350c2c640b850136d635c0800fe --- /dev/null +++ b/Datasets.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/datasets +Date Scraped: 2025-02-23T11:59:08.141Z + +Content: +DatasetsEnhance your analytics and AI initiatives with pre-built data solutions and valuable datasets powered by BigQuery, Cloud Storage, Earth Engine, and other Google Cloud services. Explore all datasetsGo to consoleDataset: Atlas of Human Settlements Turkey and SyriaAtlas AI, a leading provider of geospatial intelligence products, has released data layers covering Turkey and Syria. Find in Analytics Hub.Analytics Hub generally availableRead our blog about how Analytics Hub enables efficient and secure exchange of valuable data and analytics assets across organizational boundaries.Inside the public dataset onboarding pipelineCheck out our release notes for the latest updates and datasets onboarded using our public datasets pipeline repository.Featured datasetsCategoryFeatured datasetsSample use cases and insightsGoogle datasetsGoogle TrendsView the Top 25 and Top 25 rising queries from Google Trends from the past 30-days with this dataset. 
Each term includes 5 years of historical data across the 210 Designated Market Areas (DMAs) in the US and now over 50 countries across the globe.What are the most popular retail items people have searched for across the area?Google Analytics (Sample)The dataset provides 12 months (August 2016 to August 2017) of obfuscated Google Analytics 360 data from the Google Merchandise Store to show what an ecommerce website would see, including traffic source, content, and transactional data.What is the total number of transactions generated per device browser?Google Patents ResearchGoogle Patents Research Data contains the output of much of the data analysis work used in Google Patents (patents.google.com), including machine translations of titles and abstracts from Google Translate, embedding vectors, extracted top terms, similar documents, and forward references.What are the 20 most recent patents filed?Commercial datasetsCrux InformaticsCrux Deliver is a managed service for data engineering and operations. Crux wires up all of the traditional and alternative data providers on behalf of its clients and manages all aspects of onboarding, data engineering, and operations. Every dataset is validated so that we only deliver clean and actionable data. What are the datasets Crux can help me onboard into my data ecosystem?Exchange Data InternationalExchange Data International (EDI) helps the global financial and investment community make informed decisions. EDI’s extensive content database includes worldwide equity and fixed income corporate actions, dividends, static reference data, closing prices, and shares outstanding.Understand historical events that affect Equity Shares and ETFs.FactsetFactSet is a global provider of integrated financial information, analytical applications, and industry-leading service that delivers superior content, analytics, and flexible technology.Track multiple versions of merger deals to enhance your investment process.HouseCanaryInstant access to reliable property, loan and valuation information for 100M homes. ML algorithms process hundreds of data sources to provide Home Price Indices for 381 Metros, 18,300 ZIP codes and 4M blocks covering >95% of the US residential market. Make investment decisions from 40-year historical volatility or 3-year forecast.LinkUpLinkUp, the global leader in accurate, real-time, and predictive job market data and analytics offers proprietary data solutions that give customers the ability to derive valuable insights into the global labor market and help investors generate alpha at the macro, sector, geographic, and individual company level.Create models and signals to assess and predict job growth at the sector level.London Energy Brokers AssociationLEBA’s solution gives customers the ability to access a unique, consolidated view of the Energy markets from across the main energy brokers. Energy, Oil and Gas producers, wholesale users, utilities, and financial traders benefit from independent market information based on traded activity rather than price assessments.Understand the energy prices across countries in EuropeNeustarNeustar, Inc., a TransUnion company, is a leader in identity resolution providing the data and technology that enable trusted connections between companies and people at the moments that matter most. 
Neustar offers industry-leading solutions in marketing, risk and communications.Improve customer data assets and build privacy-focused consumer databasesRS MetricsRS Metrics, the leading company for asset-level, real-time, objective and verifiable ESG data, gives customers the ability to access accurate insights into EV manufacturers’ factory inventory levels.Create independent, verifiable, and objective benchmarks of EV car production.Ursa Space SystemsUrsa Space Systems, a global satellite intelligence infrastructure provider, gives customers the ability to monitor global economic trends with data derived from satellite imagery, updated on a weekly basis.What is the likely direction of oil price benchmarks and regional spreads?Public datasetsSevere Storm Event DetailsThe Storm Events Database is an integrated database of severe weather events across the United States from 1950 to this year, with information about a storm event's location, azimuth, distance, impact, and severity, including the cost of damages to property and crops.Use case: home improvement retailer understanding impact of storms on inventoryTechnical Reference Pattern: Dynamic insurance pricing model using this datasetCensus Bureau US BoundariesThese are full-resolution boundary files, derived from TIGER/Line Shapefiles, the fully supported, core geographic products from the US Census Bureau.These include information for the 50 states, the District of Columbia, Puerto Rico, and the outlying island areas.Use case: Developing an urbanization index for retailersAmerican Community SurveyThe American Community Survey (ACS) is an ongoing survey that provides vital information on a yearly basis about our nation and its people by contacting over 3.5 million households across the country. The resulting data provides incredibly detailed demographic information across the US aggregated at various geographic levels.Use case: Population growth trends as inputs to facility/site selection analysisAll public datasets Search for and access over 200 datasets listed in Google Cloud Marketplace.What datasets can help provide deeper context for our analytics or ai workflows?Earth Engine datasetsEarth EngineEarth Engine's public data archive includes more than forty years of historical imagery and scientific datasets, updated daily and available for online analysis.How has surface temperature changed over the past 30 years?What did this area look like before year 2000?Kaggle datasetsKaggle DatasetsInside Kaggle you’ll find all the code and data you need to do your data science work. Use over 80,000 public datasets and 400,000 public notebooks to conquer any analysis in no time.Can you tackle some of the most vexing and provocative problems in data science?Synthetic datasetsCymbal InvestmentsThe synthetic data represents transactions from automated trading bots operated by the fictional Cymbal Investments group, each using a single algorithm to guide its trading decisions. The records are derived from FIX protocol (version 4.4) Trade Capture Reports loaded into BigQuery. How much did traders make from each individual trade?Research datasetsDataset SearchGoogle's Dataset Search program has indexed almost 25 million datasets from across the web, giving you a single place to search for datasets and find links to where the data is. Filter by recency, format, topic, and more. What datasets exist for < keyword you're interested in >? 
Which sustainability datasets exist from last year are free for commercial use?Featured datasetsGoogle datasetsGoogle TrendsView the Top 25 and Top 25 rising queries from Google Trends from the past 30-days with this dataset. Each term includes 5 years of historical data across the 210 Designated Market Areas (DMAs) in the US and now over 50 countries across the globe.What are the most popular retail items people have searched for across the area?Commercial datasetsCrux InformaticsCrux Deliver is a managed service for data engineering and operations. Crux wires up all of the traditional and alternative data providers on behalf of its clients and manages all aspects of onboarding, data engineering, and operations. Every dataset is validated so that we only deliver clean and actionable data. What are the datasets Crux can help me onboard into my data ecosystem?Public datasetsSevere Storm Event DetailsThe Storm Events Database is an integrated database of severe weather events across the United States from 1950 to this year, with information about a storm event's location, azimuth, distance, impact, and severity, including the cost of damages to property and crops.Use case: home improvement retailer understanding impact of storms on inventoryTechnical Reference Pattern: Dynamic insurance pricing model using this datasetEarth Engine datasetsEarth EngineEarth Engine's public data archive includes more than forty years of historical imagery and scientific datasets, updated daily and available for online analysis.How has surface temperature changed over the past 30 years?What did this area look like before year 2000?Kaggle datasetsKaggle DatasetsInside Kaggle you’ll find all the code and data you need to do your data science work. Use over 80,000 public datasets and 400,000 public notebooks to conquer any analysis in no time.Can you tackle some of the most vexing and provocative problems in data science?Synthetic datasetsCymbal InvestmentsThe synthetic data represents transactions from automated trading bots operated by the fictional Cymbal Investments group, each using a single algorithm to guide its trading decisions. The records are derived from FIX protocol (version 4.4) Trade Capture Reports loaded into BigQuery. How much did traders make from each individual trade?Research datasetsDataset SearchGoogle's Dataset Search program has indexed almost 25 million datasets from across the web, giving you a single place to search for datasets and find links to where the data is. Filter by recency, format, topic, and more. What datasets exist for < keyword you're interested in >? Which sustainability datasets exist from last year are free for commercial use?Feeling inspired? Let’s solve your challenges together.Learn how Google Cloud datasets transform the way your business operates with data and pre-built solutions.Contact salesIf there is a public dataset you would like to see onboarded, please contact public-data-help@google.com.With BigQuery sandbox, you can try the full BigQuery experience without a billing account or credit card.Data partners and customer storiesLearn more from both sides of the dataset ecosystem: data providers and data consumers.Blog postNOAA and Google Cloud: A data match made in the cloud5-min readVideoData vs. 
COVID-19: How public data is helping flatten the curveVideo (19:35)Case StudyHCA Healthcare: Accelerating COVID-19 response through a national portal5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Contact salesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Datastream.txt b/Datastream.txt new file mode 100644 index 0000000000000000000000000000000000000000..2723e5a3d325d83f1e8a3c6b4863cd1965fe9390 --- /dev/null +++ b/Datastream.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/datastream +Date Scraped: 2025-02-23T12:04:06.014Z + +Content: +Datastream’s SQL Server source is available in preview. Read the blog.Jump to DatastreamDatastreamServerless and easy-to-use change data capture and replication service.Go to consoleAccess to streaming data from MySQL, PostgreSQL, AlloyDB, SQL Server, and Oracle databasesNear real-time analytics in BigQuery with Datastream for BigQueryEasy-to-use setup with built-in secure connectivity for faster time to valueServerless platform that automatically scales, with no resources to provision or manageLog-based mechanism to reduce the load and potential disruption on source databases5:39Introduction to Datastream for BigQueryBenefitsReplicate and synchronize data with minimal latencySynchronize data across heterogeneous databases, storage systems, and applications reliably, with low latency, while minimizing impact on source performance.Scale up or down with a serverless architectureGet up and running fast with a serverless and easy-to-use service that seamlessly scales up or down, and has no infrastructure to manage.Unmatched flexibility with Google Cloud servicesConnect and integrate data across your organization with the best of Google Cloud services like BigQuery, Spanner, Dataflow, and Data Fusion.Key featuresDatastream's differentiated approachStreaming data from relational databasesDatastream reads and delivers every change—insert, update, and delete—from your MySQL, PostgreSQL, AlloyDB, SQL Server, and Oracle databases to load data into BigQuery, Cloud SQL, Cloud Storage, and Spanner. Agentless and Google-native, it reliably streams every event as it happens. Datastream processes over half a trillion events per month.Reliable pipelines with advanced recoveryUnexpected interruptions can be costly. Datastream's robust stream recovery minimizes downtime and data loss, so you can maintain critical business operations and make informed decisions based on uninterrupted data pipelines.Schema drift resolutionAs source schemas change, Datastream allows for fast and seamless schema drift resolution. Datastream rotates files, creating a new file in the destination bucket, on every schema change. Original source data types are just an API-call away with an up-to-date, versioned Schema Registry.Secure by designDatastream supports multiple secure, private connectivity methods to protect data in transit. In addition, data is encrypted in transit and at rest so you can rest easy knowing your data is protected as it streams.BLOGUnlock the power of change data capture and replicationCustomersLearn from customers replicating data using DatastreamBlog postUnlock the power of change data capture and replication.5-min readBlog postDatastream helps Falabella get much quicker insights on their operational data. 
5-min readBlog postCogeco Communications, Inc. brings data from hundreds of Oracle tables at 10X time and effort efficiency.5-min readBlog postSchnuck Markets, Inc. replaced batch processing with Datastream for faster insights in BigQuery.5-min readSee all customersDatastream was instrumental in achieving high-quality, reconciled data replication from our various operational data sources into BigQuery. It is easy to use, serverless and highly scalable, allowing us to set up the first streams in a short timeframe. Speed was critical to us because the data ingestion was the foundation for our data platform migration.Oleksandr Kaleniuk, Data Ingestion Lead at Just Eat TakeawayWhat's newNews and events about DatastreamSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postStreamlining data integration with SQL Server source support in DatastreamRead the blogVideoGoogle I/O 2023: Using Google Cloud's latest services to build data-driven appsWatch videoBlog postIntroducing Datastream for BigQueryRead the blogBlog postDatastream is now generally available - read the launch announcementRead the blogVideoIntroducing DatastreamWatch videoBlog postMigrate from Oracle to PostgreSQL with DatastreamRead the blogDocumentationFind resources and documentation for DatastreamGoogle Cloud BasicsDatastream overviewLearn how Datastream helps you replicate and synchronize data across heterogeneous databases, storage systems, and applications.Learn moreQuickstartDatastream quickstart using the Cloud ConsoleLearn how to use the Google Cloud Console as a visual interface to start streaming data.Learn moreTutorialConfigure Datastream using the APILearn how to use the API to configure Datastream to transfer data from a source Oracle database into Cloud Storage.Learn moreGoogle Cloud BasicsHow to migrate from Oracle to Cloud SQL for PostgreSQLUse Google’s Datastream-based, open-source toolkit to migrate from Oracle to Cloud SQL for PostgreSQL.Learn moreTutorialReplicate data into BigQuery with Datastream and DataflowLearn how to combine Datastream with Dataflow templates to replicate data from a relational database.Learn moreTutorialHow to replicate data from an Oracle database into BigQueryWatch this video to learn how to replicate data in real time from Oracle into BigQuery using Data Fusion’s replication accelerator that's integrated with Datastream.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for DatastreamUse casesContinuous change data capture: Replicate every event as it happensUse caseReal time, anytime change streamsChange data capture integrates data by reading change events (inserts, updates, and deletes) from source databases and writing them to a data destination, so action can be taken. Datastream supports change streams from Oracle and MySQL databases into BigQuery, Cloud SQL, Cloud Storage, and Spanner, enabling real-time analytics, database replication, and other use cases. Additional sources and destinations are coming in the future.View all technical guidesPricingDatastream pricing detailsDatastream pricing is calculated based upon actual monthly data processed. 
Additional pricing details are available on the Datastream pricing page.Additional resources like Cloud Storage, Dataflow, and BigQuery are billed per that service’s pricing.View pricing detailsPartnersRecommended partnersGoogle Cloud partners can help you get the most out of your data with Datastream.See all partnersORACLE® is a registered trademark of Oracle Corporation.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Day_2_Operations_for_GKE.txt b/Day_2_Operations_for_GKE.txt new file mode 100644 index 0000000000000000000000000000000000000000..9b67a9ffd6fbde307711ef52562d4be852f80169 --- /dev/null +++ b/Day_2_Operations_for_GKE.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/day-2-operations-for-gke +Date Scraped: 2025-02-23T11:58:25.522Z + +Content: +Day 2 Ops for GKEHelping customers simplify how they operate their GKE platform and build an effective strategy for monitoring and managing it. To schedule a hands-on workshop please contact your Google Cloud account team.Contact usBenefitsSimplifying operations to manage the platform in a cost-effective mannerComprehensive SolutionGoogle Cloud’s Day 2 Ops solution provides an end-to-end approach for managing their GKE platform as well as monitoring and troubleshooting it to ensure required SLA.Minimize Operational RiskStreamline and standardize platform upgrades so that GKE clusters don’t stay on outdated versions for long and get exposed to security breaches and related vulnerabilities.Reduce Operational CostsOrganizations can reduce their operational costs by having an unified approach for monitoring and managing their various GKE environments.Key featuresHands-on workshop: Day 2 Ops for GKEOur solution uses a hands-on workshop to help customers understand the Day 2 strategies for GKE. Some aspects covered in the workshop are below.GKE cluster notifications with Pub/SubWhen certain events occur that are relevant to a GKE cluster, such as important scheduled upgrades or available security bulletins, GKE can publish cluster notifications about those events as messages to Pub/Sub topics. You can receive these notifications on a Pub/Sub subscription, integrate with third-party services, and filter for the notification types you want to receive.GKE release channels and cluster upgradesBy default, auto-upgrading nodes are enabled for Google Kubernetes Engine (GKE) clusters and node pools. GKE release channels offer you the ability to balance between stability and the feature set of the version deployed in the cluster. When you enroll a new cluster in a release channel, Google automatically manages the version and upgrade cadence for the cluster and its node pools.GKE maintenance windows and exclusionsA maintenance window is a repeating window of time during which automatic maintenance is permitted. A maintenance exclusion is a non-repeating window of time during which automatic maintenance is forbidden. These provide fine-grained control over when automatic maintenance can occur on your GKE clusters. GKE node pool updatesNode pools represent a subset of nodes within a cluster; a container cluster can contain one or more node pools. Dynamic configuration changes are limited to network tags, node labels, and node taints. 
Any other field changes in the UpdateNodePool API will not occur dynamically, and will result in node re-creation.GKE backup and restoreBackup for GKE is a service for backing up and restoring workloads in GKE clusters. Backups of your workloads may be useful for disaster recovery, CI/CD pipelines, cloning workloads, or upgrade scenarios. Protecting your workloads can help you achieve business-critical recovery point objectives.Ready to get started? Contact usTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/De-identification_and_re-identification_of_PII_in_large-scale_datasets_using_Cloud_DLP.txt b/De-identification_and_re-identification_of_PII_in_large-scale_datasets_using_Cloud_DLP.txt new file mode 100644 index 0000000000000000000000000000000000000000..7befd8581ff9277a311909e1287891a3e1de84ce --- /dev/null +++ b/De-identification_and_re-identification_of_PII_in_large-scale_datasets_using_Cloud_DLP.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/de-identification-re-identification-pii-using-cloud-dlp +Date Scraped: 2025-02-23T11:56:26.537Z
+Content: +Home Docs Cloud Architecture Center Send feedback De-identification and re-identification of PII in large-scale datasets using Sensitive Data Protection Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-07 UTC This document discusses how to use Sensitive Data Protection to create an automated data transformation pipeline to de-identify sensitive data like personally identifiable information (PII). De-identification techniques like tokenization (pseudonymization) let you preserve the utility of your data for joining or analytics while reducing the risk of handling the data by obfuscating the raw sensitive identifiers. To minimize the risk of handling large volumes of sensitive data, you can use an automated data transformation pipeline to create de-identified replicas. Sensitive Data Protection enables transformations such as redaction, masking, tokenization, bucketing, and other methods of de-identification. When a dataset hasn't been characterized, Sensitive Data Protection can also inspect the data for sensitive information by using more than 100 built-in classifiers. This document is intended for a technical audience whose responsibilities include data security, data processing, or data analytics. This guide assumes that you're familiar with data processing and data privacy, without the need to be an expert. Reference architecture The following diagram shows a reference architecture for using Google Cloud products to add a layer of security to sensitive datasets by using de-identification techniques. The architecture consists of the following: Data de-identification streaming pipeline: De-identifies sensitive data in text using Dataflow. You can reuse the pipeline for multiple transformations and use cases. Configuration (Sensitive Data Protection template and key) management: A managed de-identification configuration that is accessible by only a small group of people—for example, security admins—to avoid exposing de-identification methods and encryption keys.
Data validation and re-identification pipeline: Validates copies of the de-identified data and uses a Dataflow pipeline to re-identify data at a large scale. Helping to secure sensitive data One of the key tasks of any enterprise is to help ensure the security of their users' and employees' data. Google Cloud provides built-in security measures to facilitate data security, including encryption of stored data and encryption of data in transit. Encryption at rest: Cloud Storage Maintaining data security is critical for most organizations. Unauthorized access to even moderately sensitive data can damage the trust, relationships, and reputation that you have with your customers. Google encrypts data stored at rest by default. By default, any object uploaded to a Cloud Storage bucket is encrypted using a Google-owned and Google-managed encryption key. If your dataset uses a pre-existing encryption method and requires a non-default option before uploading, there are other encryption options provided by Cloud Storage. For more information, see Data encryption options. Encryption in transit: Dataflow When your data is in transit, the at-rest encryption isn't in place. In-transit data is protected by secure network protocols referred to as encryption in transit. By default, Dataflow uses Google-owned and Google-managed encryption keys. The tutorials associated with this document use an automated pipeline that uses the default Google-owned and Google-managed encryption keys. Sensitive Data Protection data transformations There are two main types of transformations performed by Sensitive Data Protection: recordTransformations infoTypeTransformations Both recordTransformations and infoTypeTransformations methods can de-identify and encrypt sensitive information in your data. For example, you can transform the values in the US_SOCIAL_SECURITY_NUMBER column to be unidentifiable or use tokenization to obscure it while keeping referential integrity. The infoTypeTransformations method enables you to inspect for sensitive data and transform the finding. For example, if you have unstructured or free-text data, the infoTypeTransformations method can help you identify an SSN inside of a sentence and encrypt the SSN value while leaving the rest of the text intact. You can also define custom infoTypes methods. The recordTransformations method enables you to apply a transformation configuration per field when using structured or tabular data. With the recordTransformations method, you can apply the same transformation across every value in that field such as hashing or tokenizing every value in a column with SSN column as the field or header name. With the recordTransformations method , you can also mix in the infoTypeTransformations method that only apply to the values in the specified fields. For example, you can use an infoTypeTransformations method inside of a recordTransformations method for the field named comments to redact any findings for US_SOCIAL_SECURITY_NUMBER that are found inside the text in the field. In increasing order of complexity, the de-identification processes are as follows: Redaction: Remove the sensitive content with no replacement of content. Masking: Replace the sensitive content with fixed characters. Encryption: Replace sensitive content with encrypted strings, possibly reversibly. Working with delimited data Often, data consists of records delimited by a selected character, with fixed types in each column, like a CSV file. 
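The sketch below is a minimal, hypothetical example of such a record-level transformation using the Sensitive Data Protection (DLP API) Python client; the project ID, the column name, and the sample value are placeholders, and a real pipeline would submit full tables rather than a single row.

from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
parent = "projects/example-project/locations/global"  # placeholder project

# One tabular record that stands in for a row of a delimited (CSV-style) file.
item = {
    "table": {
        "headers": [{"name": "ssn"}],
        "rows": [{"values": [{"string_value": "123-45-6789"}]}],
    }
}

# recordTransformations: mask every value in the "ssn" field, no inspection needed.
deidentify_config = {
    "record_transformations": {
        "field_transformations": [
            {
                "fields": [{"name": "ssn"}],
                "primitive_transformation": {
                    "character_mask_config": {"masking_character": "#"}
                },
            }
        ]
    }
}

response = dlp.deidentify_content(
    request={"parent": parent, "deidentify_config": deidentify_config, "item": item}
)
print(response.item.table)  # the SSN value comes back fully masked with "#"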
For this class of data, you can apply de-identification transformations (recordTransformations) directly, without inspecting the data. For example, you can expect a column labeled SSN to contain only SSN data. You don't need to inspect the data to know that the infoType detector is US_SOCIAL_SECURITY_NUMBER. However, free-form columns labeled Additional Details can contain sensitive information, but the infoType class is unknown beforehand. For a free-form column, you need to inspect the infoTypes detector (infoTypeTransformations) before applying de-identification transformations. Sensitive Data Protection allows both of these transformation types to co-exist in a single de-identification template. Sensitive Data Protection includes more than 100 built-in infoTypes detectors. You can also create custom types or modify built-in infoTypes detectors to find sensitive data that is unique to your organization. Determining transformation type Determining when to use the recordTransformations or infoTypeTransformations method depends on your use case. Because using the infoTypeTransformations method requires more resources and is therefore more costly, we recommend using this method only for situations where the data type is unknown. You can evaluate the costs of running Sensitive Data Protection using the Google Cloud pricing calculator. For examples of transformation, this document refers to a dataset that contains CSV files with fixed columns, as demonstrated in the following table. Column name Inspection infoType (custom or built-in) Sensitive Data Protection transformation type Card Number Not applicable Deterministic encryption (DE) Card Holder's Name Not applicable Deterministic encryption (DE) Card PIN Not applicable Crypto hashing SSN (Social Security Number) Not applicable Masking Age Not applicable Bucketing Job Title Not applicable Bucketing Additional Details Built-in: IBAN_CODE, EMAIL_ADDRESS, PHONE_NUMBER Custom: ONLINE_USER_ID Replacement This table lists the column names and describes which type of transformation is needed for each column. For example, the Card Number column contains credit card numbers that need to be encrypted; however, they don't need to be inspected, because the data type (infoType) is known. The only column where an inspection transformation is recommended is the Additional Details column. This column is free-form and might contain PII, which, for the purposes of this example, should be detected and de-identified. The examples in this table present five different de-identification transformations: Two-way tokenization: Replaces the original data with a token that is deterministic, preserving referential integrity. You can use the token to join data or use the token in aggregate analysis. You can reverse or de-tokenize the data using the same key that you used to create the token. There are two methods for two-way tokenizations: Deterministic encryption (DE): Replaces the original data with a base64-encoded encrypted value and doesn't preserve the original character set or length. Format-preserving encryption with FFX (FPE-FFX): Replaces the original data with a token generated by using format-preserving encryption in FFX mode. By design, FPE-FFX preserves the length and character set of the input text. It lacks authentication and an initialization vector, which can cause a length expansion in the output token. 
Other methods, like DE, provide stronger security and are recommended for tokenization use cases unless length and character-set preservation are strict requirements, such as backward compatibility with legacy data systems. One-way tokenization, using cryptographic hashing: Replaces the original value with a hashed value, preserving referential integrity. However, unlike two-way tokenization, a one-way method isn't reversible. The hash value is generated by using an SHA-256-based message authentication code (HMAC-SHA-256) on the input value. Masking: Replaces the original data with a specified character, either partially or completely. Bucketing: Replaces a more identifiable value with a less distinguishing value. Replacement: Replaces original data with a token or the name of the infoType if detected. Method selection Choosing the best de-identification method can vary based on your use case. For example, if a legacy app is processing the de-identified records, then format preservation might be important. If you're dealing with strictly formatted 10-digit numbers, FPE preserves the length (10 digits) and character set (numeric) of an input for legacy system support. However, if strict formatting isn't required for legacy compatibility, as is the case for values in the Card Holder's Name column, then DE is the preferred choice because it has a stronger authentication method. Both FPE and DE enable the tokens to be reversed or de-tokenized. If you don't need de-tokenization, then cryptographic hashing provides integrity but the tokens can't be reversed. Other methods—like masking, bucketing, date-shifting, and replacement—are good for values that don't need to retain full integrity. For example, bucketing an age value (for example, 27) to an age range (20-30) can still be analyzed while reducing the uniqueness that might lead to the identification of an individual. Token encryption keys For cryptographic de-identification transformations, a cryptographic key, also known as token encryption key, is required. The token encryption key that is used for de-identification encryption is also used to re-identify the original value. The secure creation and management of token encryption keys are beyond the scope of this document. However, there are some important principles to consider that are used later in the associated tutorials: Avoid using plaintext keys in the template. Instead, use Cloud KMS to create a wrapped key. Use separate token encryption keys for each data element to reduce the risk of compromising keys. Rotate token encryption keys. Although you can rotate the wrapped key, rotating the token encryption key breaks the integrity of the tokenization. When the key is rotated, you need to re-tokenize the entire dataset. Sensitive Data Protection templates For large-scale deployments, use Sensitive Data Protection templates to accomplish the following: Enable security control with Identity and Access Management (IAM). Decouple configuration information, and how you de-identify that information, from the implementation of your requests. Reuse a set of transformations. You can use the de-identify and re-identify templates over multiple datasets. BigQuery The final component of the reference architecture is viewing and working with the de-identified data in BigQuery. BigQuery is Google's data warehouse tool that includes serverless infrastructure, BigQuery ML, and the ability to run Sensitive Data Protection as a native tool. 
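Before looking at how BigQuery is used, here is a minimal, hypothetical sketch of the template reuse described earlier: it stores an inspect-and-transform configuration as a de-identify template with the DLP API Python client. The project ID, display name, and chosen infoTypes are illustrative only.

from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
parent = "projects/example-project/locations/global"  # placeholder project

# Inspect free text for two built-in infoTypes and replace any findings with
# the infoType name.
template = {
    "display_name": "redact-free-text",  # hypothetical name
    "deidentify_config": {
        "info_type_transformations": {
            "transformations": [
                {
                    "info_types": [
                        {"name": "EMAIL_ADDRESS"},
                        {"name": "PHONE_NUMBER"},
                    ],
                    "primitive_transformation": {
                        "replace_with_info_type_config": {}
                    },
                }
            ]
        }
    },
}

created = dlp.create_deidentify_template(
    request={"parent": parent, "deidentify_template": template}
)
print(created.name)  # pipelines reference this name instead of an inline config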
In the example reference architecture, BigQuery serves as a data warehouse for the de-identified data and as a backend to an automated re-identification data pipeline that can share data through Pub/Sub. What's next Learn about using Sensitive Data Protection to inspect storage and databases for sensitive data. Learn about other pattern recognition solutions. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Decide_identity_onboarding.txt b/Decide_identity_onboarding.txt new file mode 100644 index 0000000000000000000000000000000000000000..07f91b4304c6841e9c57e73f56f5210fe939879d --- /dev/null +++ b/Decide_identity_onboarding.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/landing-zones/decide-how-to-onboard-identities +Date Scraped: 2025-02-23T11:45:10.899Z + +Content: +Home Docs Cloud Architecture Center Send feedback Decide how to onboard identities to Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This document describes identity provisioning options for Google Cloud and the decisions that you must make when you onboard your users to Cloud Identity or Google Workspace. This document also provides guidance on where to find more information on how to deploy each option. This document is part of a series about landing zones, and is intended for architects and technical practitioners who are involved in managing identities for your organization and your Google Cloud deployment. Overview To let your organization's users access your Google Cloud resources, you must provide a way for them to authenticate themselves. Google Cloud uses Google Sign-In to authenticate users, which is the same identity provider (IdP) that other Google services such as Gmail or Google Ads use. Although some users in your organization might already have a private Google user account, we strongly advise against letting them use their private accounts when they access Google Cloud. Instead, you can onboard your users to Cloud Identity or Google Workspace, which lets you control the lifecycle and security of user accounts. Provisioning identities in Google Cloud is a complex topic and your exact strategy might require more detail than is in scope for this decision guide. For more best practices, planning, and deployment information, see overview of identity and access management. Decision points for identity onboarding To choose the best identity provisioning design for your organization, you must make the following decisions: Your identity architecture How to consolidate existing user accounts Decide on your identity architecture Managing the lifecycle and security of user accounts plays an important role in securing your Google Cloud deployment. A key decision that you must make is the role that Google Cloud should play in relation to your existing identity management systems and applications. The options are as follows: Use Google as your primary identity provider (IdP). Use federation with an external identity provider. The following sections provide more information about each option. Option 1: Use Google as your primary source for identities (no federation) When you create user accounts directly in Cloud Identity or Google Workspace, you can make Google your source of identities and primary IdP. Users can then use these identities and credentials to sign in to Google Cloud and other Google services. 
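As a hedged illustration of direct provisioning, the following sketch creates a managed user account with the Admin SDK Directory API Python client; the email address, names, and password handling are placeholders, and the call assumes credentials that are authorized for user management in your Cloud Identity or Google Workspace account.

import google.auth
from googleapiclient.discovery import build

# Placeholder setup: assumes credentials that the Admin SDK Directory API
# accepts for user management (for example, a delegated admin identity).
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/admin.directory.user"]
)
directory = build("admin", "directory_v1", credentials=credentials)

new_user = directory.users().insert(
    body={
        "primaryEmail": "alex@example.com",  # placeholder managed identity
        "name": {"givenName": "Alex", "familyName": "Example"},
        "password": "temporary-initial-password",  # placeholder only
        "changePasswordAtNextLogin": True,
    }
).execute()
print(new_user["primaryEmail"])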
Cloud Identity and Google Workspace provide a large selection of ready-to-use integrations for popular third-party applications. You can also use standard protocols such as SAML, OAuth, and OpenID Connect to integrate your custom applications with Cloud Identity or Google Workspace. Use this strategy when the following is true: Your organization already has user identities provisioned in Google Workspace. Your organization doesn't have an existing IdP. Your organization has an existing IdP but wants to start quickly with a small subset of users and federate identities later. Avoid this strategy when you have an existing IdP that you want to use as an authoritative source for identities. For more information, see the following: Preparing your Google Workspace or Cloud Identity Account Using Google as an IdP Option 2: Use federation with an external identity provider You can integrate Google Cloud with an existing external IdP by using federation. Identity federation establishes trust between two or more IdPs so that the multiple identities that a user might have in different identity management systems can be linked. When you federate a Cloud Identity or Google Workspace account with an external IdP, you let users use their existing identity and credentials to sign in to Google Cloud and other Google services. Important: Super administrator accounts always bypass single-sign-on (SSO). Use this strategy when the following is true: You have an existing IdP such as Active Directory, Azure AD, ForgeRock, Okta, or Ping Identity. You want employees to use their existing identity and credentials to sign in to Google Cloud and other Google services such as Google Ads and Google Marketing Platform. Avoid this strategy when your organization doesn't have an existing IdP. For more information, see the following: External identities - Overview of Google identity management Reference architectures: Using an external IdP Best practices for federating Google Cloud with an external identity provider Federating Google Cloud with Active Directory Federating Google Cloud with Azure AD Decide how to consolidate existing user accounts If you haven't been using Cloud Identity or Google Workspace, it's possible that your organization's employees are using consumer accounts to access your Google services. Consumer accounts are accounts that are fully owned and managed by the people who created them. Because those accounts are not under your organization's control and might include both personal and corporate data, you must decide how to consolidate these accounts with other corporate accounts. For details on consumer accounts, how to identify them, and what risk they might pose to your organization, see Assessing existing user accounts. The options for consolidating the accounts are as follows: Consolidate a relevant subset of consumer accounts. Consolidate all accounts through migration. Consolidate all accounts through eviction, by not migrating accounts before creating new ones. The following sections provide more information about each option. Option 1: Consolidate a relevant subset of consumer accounts If you want to keep consumer accounts and manage them and their data under corporate policies, you must migrate them to Cloud Identity or Google Workspace. However, the process of consolidating consumer accounts can be time consuming. Therefore, we recommend that you first evaluate which subset of users are relevant for your planned Google Cloud deployment, and then consolidate only those user accounts. 
Use this strategy when the following is true: The transfer tool for unmanaged user accounts shows many consumer user accounts in your domain, but only a subset of your users will use Google Cloud. You want to save time in the consolidation process. Avoid this strategy when the following is true: You don't have consumer user accounts in your domain. You want to ensure that all data from all consumer user accounts in your domain is consolidated to managed accounts before you start using Google Cloud. For more information, see Overview of consolidating accounts. Option 2: Consolidate all accounts through migration If you want to manage all user accounts in your domain, you can consolidate all consumer accounts by migrating them to managed accounts. Use this strategy when the following is true: The transfer tool for unmanaged user accounts shows only a few consumer accounts in your domain. You want to restrict the use of consumer accounts in your organization. Avoid this strategy when you want to save time in the consolidation process. For more information, see Migrating consumer accounts. Option 3: Consolidate all accounts through eviction You can evict consumer accounts in the following circumstances: You want users who created consumer accounts to keep full control over their accounts and data. You don't want to transfer any data to be managed by your organization. To evict consumer accounts, create a managed user identity of the same name without migrating the user account first. Use this strategy when the following is true: You want to create new managed accounts for your users without transferring any of the data that exists in their consumer accounts. You want to restrict the Google services that are available in your organization. You also want users to keep their data and keep using these services for the consumer accounts that they created. Avoid this strategy when consumer accounts have been used for corporate purposes and might have access to corporate data. For more information, see Evicting consumer accounts. Best practices for onboarding identities After you choose your identity architecture and your method to consolidate existing consumer accounts, consider the following identity best practices. Select a suitable onboarding plan that works for your organization Select a high-level plan to onboard your organization's identities to Cloud Identity or Google Workspace. For a selection of proven onboarding plans, along with guidance on how to select the plan that best suits your needs, see Assessing onboarding plans. If you plan to use an external IdP and have identified user accounts that need to be migrated, you might have additional requirements. For more information, see Assessing the impact of user account consolidation on federation. Protect user accounts After you've onboarded users to Cloud Identity or Google Workspace, you must put measures in place to help protect their accounts from abuse. For more information, see the following: Implement security best practices for Cloud Identity administrative accounts. Enforce uniform multifactor authentication rules and follow best practices when combined with federating identities. Export your Google Workspace or Cloud Identity audit logs to Cloud Logging by enabling data sharing. What's next Decide your resource hierarchy (next document in this series). Find out more about how users, Cloud Identity accounts, and Google Cloud organizations relate. Review recommended best practices for planning accounts and organizations. 
Read about best practices for federating Google Cloud with an external identity provider. Send feedback \ No newline at end of file diff --git a/Decide_network_design.txt b/Decide_network_design.txt new file mode 100644 index 0000000000000000000000000000000000000000..18c1bd4d139252097ebfde60148a1d66d5583301 --- /dev/null +++ b/Decide_network_design.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/landing-zones/decide-network-design +Date Scraped: 2025-02-23T11:45:16.207Z + +Content: +Home Docs Cloud Architecture Center Send feedback Decide the network design for your Google Cloud landing zone Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC When you design your landing zone, you must choose a network design that works for your organization. This document describes four common network designs, and helps you choose the option that best meets your organization's requirements, and your organization's preference for centralized control or decentralized control. It's intended for network engineers, architects, and technical practitioners who are involved in creating the network design for your organization's landing zone. This article is part of a series about landing zones. Choose your network design The network design that you choose depends primarily on the following factors: Centralized or decentralized control: Depending on your organization's preferences, you must choose one of the following: Centralize control over the network including IP addressing, routing, and firewalling between different workloads. Give your teams greater autonomy in running their own environments and building network elements within their environments themselves. On-premises or hybrid cloud connectivity options: All the network designs discussed in this document provide access from on-premises to cloud environments through Cloud VPN or Cloud Interconnect. However, some designs require you to set up multiple connections in parallel, while others use the same connection for all workloads. Security requirements: Your organization might require traffic between different workloads in Google Cloud to pass through centralized network appliances such as next generation firewalls (NGFW). This constraint influences your Virtual Private Cloud (VPC) network design. Scalability: Some designs might be better for your organization than others, based on the number of workloads that you want to deploy, and the number of virtual machines (VMs), internal load balancers, and other resources that they will consume. Decision points for network design The following flowchart shows the decisions that you must make to choose the best network design for your organization. The preceding diagram guides you through the following questions: Do you require Layer 7 inspection using network appliances between different workloads in Google Cloud? If yes, see Hub-and-spoke topology with centralized appliances. If no, proceed to the next question. Do many of your workloads require on-premises connectivity? If yes, go to decision point 4. If no, proceed to the next question. Can your workloads communicate using private endpoints in a service producer and consumer model? If yes, see Expose services in a consumer-producer model with Private Service Connect. If no, proceed to the next question. Do you want to administer firewalling and routing centrally? If yes, see Shared VPC network for each environment. If no, see Hub-and-spoke topology without appliances. 
This chart is intended to help you make a decision, however, it can often be the case that multiple designs might be suitable for your organization. In these instances, we recommend that you choose the design that fits best with your use case. Network design options The following sections describe four common design options. We recommend option 1 for most use cases. The other designs discussed in this section are alternatives that apply to specific organizational edge-case requirements. The best fit for your use case might also be a network that combines elements from multiple design options discussed in this section. For example, you can use Shared VPC networks in hub-and-spoke topologies for better collaboration, centralized control, and to limit the number of VPC spokes. Or, you might design most workloads in a Shared VPC topology but isolate a small number of workloads in separate VPC networks that only expose services through a few defined endpoints using Private Service Connect. Note: When the design options refer to connections to on-premises networks, you can use the same concepts for connections to other cloud service providers (CSPs). Option 1: Shared VPC network for each environment We recommend this network design for most use cases. This design uses separate Shared VPC networks for each deployment environment that you have in Google Cloud (development, testing, and production). This design lets you centrally manage network resources in a common network and provides network isolation between the different environments. Use this design when the following is true: You want central control over firewalling and routing rules. You need a simple, scalable infrastructure. You need centralized IP address space management. Avoid this design when the following is true: You want developer teams to have full autonomy, including the ability to manage their own firewall rules, routing, and peering to other team networks. You need Layer 7 inspection using NGFW appliances. The following diagram shows an example implementation of this design. The preceding diagram shows the following: The on-premises network is spread across two geographical locations. The on-premises network connects through redundant Cloud Interconnect instances to two separate Shared VPC networks, one for production and one for development. The production and development environments are connected to both Cloud Interconnect instances with different VLAN attachments. Each Shared VPC has service projects that host the workloads. Firewall rules are centrally administered in the host project. The development environment has the same VPC structure as the production environment. By design, traffic from one environment cannot reach another environment. However, if specific workloads must communicate with each other, you can allow data transfer through controlled channels on-premises, or you can share data between applications with Google Cloud services like Cloud Storage or Pub/Sub. We recommend that you avoid directly connecting separated environments through VPC Network Peering, because it increases the risk of accidentally mixing data between the environments. Using VPC Network Peering between large environments also increases the risk of hitting VPC quotas around peering and peering groups. 
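As an illustrative sketch only (values are placeholders, and the caller needs Shared VPC admin permissions), the following shows how a host project can be enabled for Shared VPC and a service project attached to it by using the Compute Engine API from Python.

from googleapiclient.discovery import build

compute = build("compute", "v1")  # uses Application Default Credentials

HOST_PROJECT = "prod-host-project"      # placeholder host project ID
SERVICE_PROJECT = "prod-app-project"    # placeholder service project ID

# Mark the host project as a Shared VPC (XPN) host. Returns an operation.
compute.projects().enableXpnHost(project=HOST_PROJECT).execute()

# Attach the service project so its workloads can use subnets in the host VPC.
compute.projects().enableXpnResource(
    project=HOST_PROJECT,
    body={"xpnResource": {"id": SERVICE_PROJECT, "type": "PROJECT"}},
).execute()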
For more information, see the following: Shared VPC overview Shared VPC architecture in the enterprise foundations guide Reference architecture in VPC design best practices Terraform deployment stage: Networking with separate environments as part of Fabric FAST framework Network stage for Terraform example foundation using Cloud Foundation toolkit To implement this design option, see Create option 1: Shared VPC network for each environment. Option 2: Hub-and-spoke topology with centralized appliances This network design uses hub-and-spoke topology. A hub VPC network contains a set of appliance VMs such as NGFWs that are connected to the spoke VPC networks that contain the workloads. Traffic between the workloads, on-premises networks, or the internet is routed through appliance VMs for inspection and filtering. Use this design when the following is true: You require Layer 7 inspection between different workloads or applications. You have a corporate mandate that specifies the security appliance vendor for all traffic. Avoid this design when the following is true: You don't require Layer 7 inspection for most of your workloads. You want workloads on Google Cloud to not communicate at all with each other. You only need Layer 7 inspection for traffic going to on-premises networks. The following diagram shows an example implementation of this pattern. The preceding diagram shows the following: A production environment which includes a hub VPC network and multiple spoke VPC networks that contain the workloads. The spoke VPC networks are connected with the hub VPC network by using VPC Network Peering. The hub VPC network has multiple instances of a virtual appliance in a managed instance group. Traffic to the managed instance group goes through an internal passthrough Network Load Balancer. The spoke VPC networks communicate with each other through the virtual appliances by using static routes with the internal load balancer as the next hop. Cloud Interconnect connects the transit VPC networks to on-premises locations. On-premises networks are connected through the same Cloud Interconnects using separate VLAN attachments. The transit VPC networks are connected to a separate network interface on the virtual appliances, which lets you inspect and filter all traffic to and from these networks by using your appliance. The development environment has the same VPC structure as the production environment. This setup doesn't use source network address translation (SNAT). SNAT isn't required because Google Cloud uses symmetric hashing. For more information see Symmetric hashing. By design, traffic from one spoke network cannot reach another spoke network. However, if specific workloads must communicate with each other, you can set up direct peering between the spoke networks using VPC Network Peering, or you can share data between applications with Google Cloud services like Cloud Storage or Pub/Sub. To maintain low latency when the appliance communicates between workloads, the appliance must be in the same region as the workloads. If you use multiple regions in your cloud deployment, you can have one set of appliances and one hub VPC for each environment in each region. Alternatively, you can use network tags with routes to have all instances communicate with the closest appliance. Firewall rules can restrict the connectivity within the spoke VPC networks that contain workloads. Often, teams who administer the workloads also administer these firewall rules. 
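For example, a workload team might manage a spoke-level rule like the following sketch, which uses the Compute Engine API from Python to allow only a specific frontend range to reach tagged backends; the project, network, range, tag, and port are placeholders.

from googleapiclient.discovery import build

compute = build("compute", "v1")  # uses Application Default Credentials

SPOKE_PROJECT = "spoke-app-project"  # placeholder project that owns the spoke VPC

# Allow only the app's frontend range to reach tagged backends on TCP 8443.
firewall_rule = {
    "name": "allow-frontend-to-backend",  # placeholder rule name
    "network": f"projects/{SPOKE_PROJECT}/global/networks/spoke-vpc",
    "direction": "INGRESS",
    "sourceRanges": ["10.10.0.0/24"],  # placeholder frontend subnet range
    "targetTags": ["backend"],
    "allowed": [{"IPProtocol": "tcp", "ports": ["8443"]}],
}

compute.firewalls().insert(project=SPOKE_PROJECT, body=firewall_rule).execute()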
For central policies, you can use hierarchical firewall policies. If you require a central network team to have full control over firewall rules, consider centrally deploying those rules in all VPC networks by using a GitOps approach. In this case, restrict the IAM permissions to only those administrators who can change the firewall rules. Spoke VPC networks can also be Shared VPC networks if multiple teams deploy in the spokes. In this design, we recommend that you use VPC Network Peering to connect the hub VPC network and spoke VPC networks because it adds minimum complexity. However, the maximum number of spokes is limited by the following: The limit on VPC Network Peering connections from a single VPC network. Peering group limits such as the maximum number of forwarding rules for the internal TCP/UDP Load Balancing for each peering group. If you expect to reach these limits, you can connect the spoke networks through Cloud VPN. Using Cloud VPN adds extra cost and complexity and each Cloud VPN tunnel has a bandwidth limit. For more information, see the following: Hub and spoke transitivity architecture in the enterprise foundations guide Terraform deployment stage: Networking with Network Virtual Appliance as part of the Fabric FAST framework Terraform hub-and-spoke transitivity module as part of the example foundation To implement this design option, see Create option 2: Hub-and-spoke topology with centralized appliances. Option 3: Hub-and-spoke topology without appliances This network design also uses a hub-and-spoke topology, with a hub VPC network that connects to on-premises networks and spoke VPC networks that contain the workloads. Because VPC Network Peering is non-transitive, spoke networks cannot communicate with each other directly. Use this design when the following is true: You want workloads or environments in Google Cloud to not communicate with each other at all using internal IP addresses, but you do want them to share on-premises connectivity. You want to give teams autonomy in managing their own firewall and routing rules. Avoid this design when the following is true: You require Layer 7 inspection between workloads. You want to centrally manage routing and firewall rules. You require communications from on-premises services to managed services that are connected to the spokes through another VPC Network Peering, because VPC Network Peering is non-transitive. The following diagram shows an example implementation of this design. The preceding diagram shows the following: A production environment which includes a hub VPC network and multiple spoke VPC networks that contain the workloads. The spoke VPC networks are connected with the hub VPC network by using VPC Network Peering. Connectivity to on-premises locations passes through Cloud Interconnect connections in the hub VPC network. On-premises networks are connected through the Cloud Interconnect instances using separate VLAN attachments. The development environment has the same VPC structure as the production environment. By design, traffic from one spoke network cannot reach another spoke network. However, if specific workloads must communicate with each other, you can set up direct peering between the spoke networks using VPC Network Peering, or you can share data between applications with Google Cloud services like Cloud Storage or Pub/Sub. This network design is often used in environments where teams act autonomously and there is no centralized control over firewall and routing rules. 
However, the scale of this design is limited by the following: The limit on VPC Network Peering connections from a single VPC network Peering group limits such as the maximum number of forwarding rules for the internal passthrough Network Load Balancer for each peering group Therefore, this design is not typically used in large organizations that have many separate workloads on Google Cloud. As a variation to the design, you can use Cloud VPN instead of VPC Network Peering. Cloud VPN lets you increase the number of spokes, but adds a bandwidth limit for each tunnel and increases complexity and costs. When you use custom advertised routes, Cloud VPN also allows for transitivity between the spokes without requiring you to directly connect all the spoke networks. For more information, see the following: Hub-and-spoke network architecture Hub-and-spoke architecture in the enterprise foundations guide Terraform deployment stage: Networking with VPC Network Peering as part of the Fabric FAST framework Terraform deployment stage: Networking with Cloud VPN as part of Fabric FAST framework To implement this design option, see Create option 3: Hub-and-spoke topology without appliances. Option 4: Expose services in a consumer-producer model with Private Service Connect In this network design, each team or workload gets their own VPC network, which can also be a Shared VPC network. Each VPC network is independently managed and uses Private Service Connect to expose all the services that need to be accessed from outside the VPC network. Use this design when the following is true: Workloads only communicate with each other and the on-premises environment through defined endpoints. You want teams to be independent of each other, and manage their own IP address space, firewalls, and routing rules. Avoid this design when the following is true: Communication between services and applications uses many different ports or channels, or ports and channels change frequently. Communication between workloads uses protocols other than TCP or UDP. You require Layer 7 inspection between workloads. The following diagram shows an example implementation of this pattern. The preceding diagram shows the following: Separate workloads are located in separate projects and VPC networks. A client VM in one VPC network can connect to a workload in another VPC network through a Private Service Connect endpoint. The endpoint is attached to a service attachment in the VPC network where the service is located. The service attachment can be in a different region from the endpoint if the endpoint is configured for global access. The service attachment connects to the workload through Cloud Load Balancing. Clients in the workload VPC can reach workloads that are located on-premises as follows: The endpoint is connected to a service attachment in a transit VPC network. The service attachment is connected to the on-premises network using Cloud Interconnect. An internal Application Load Balancer is attached to the service attachment and uses a hybrid network endpoint group to balance traffic load between the endpoints that are located on-premises. On-premises clients can also reach endpoints in the transit VPC network that connect to service attachments in the workload VPC networks. 
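As a rough, hedged sketch of the consumer side of this pattern, the following creates a Private Service Connect endpoint by inserting a forwarding rule whose target is the producer's service attachment, using the Compute Engine API from Python. Every identifier is a placeholder, the referenced internal IP address is assumed to be reserved already, and depending on the target you might need additional fields such as the load-balancing scheme.

from googleapiclient.discovery import build

compute = build("compute", "v1")  # uses Application Default Credentials

CONSUMER_PROJECT = "consumer-project"  # placeholder consumer project ID
REGION = "us-central1"                 # placeholder region

# A Private Service Connect endpoint is a forwarding rule whose target is the
# producer's service attachment. The internal IP address is assumed to be
# reserved already in the consumer VPC network.
endpoint = {
    "name": "psc-endpoint-workload-a",  # placeholder endpoint name
    "network": f"projects/{CONSUMER_PROJECT}/global/networks/consumer-vpc",
    "IPAddress": f"projects/{CONSUMER_PROJECT}/regions/{REGION}/addresses/psc-endpoint-ip",
    "target": (
        "projects/producer-project/regions/us-central1/"
        "serviceAttachments/workload-a-attachment"
    ),
}

compute.forwardingRules().insert(
    project=CONSUMER_PROJECT, region=REGION, body=endpoint
).execute()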
For more information, see the following: Publish managed services using Private Service Connect Access published services through endpoints To implement this design option, see Create option 4: Expose services in a consumer-producer model with Private Service Connect. Best practices for network deployment After you choose the best network design for your use case, we recommend implementing the following best practices: Use custom mode VPC networks and delete the default network to have better control over your network's IP addresses. Limit external access by using Cloud NAT for resources that need internet access and reducing the use of public IP addresses to resources accessible through Cloud Load Balancing. For more information, see building internet connectivity for private VMs. If you use Cloud Interconnect, make sure that you follow the recommended topologies for non-critical or production-level applications. Use redundant connections to meet the SLA for the service. Alternatively, you can connect Google Cloud to on-premises networks through Cloud VPN. Enforce the policies introduced in limit external access by using an organization policy to restrict direct access to the internet from your VPC. Use hierarchical firewall policies to inherit firewall policies consistently across your organization or folders. Follow DNS best practices for hybrid DNS between your on-premises network and Google Cloud. For more information, see Best practices and reference architectures for VPC design. What's next Implement your Google Cloud landing zone network design Decide the security for your Google Cloud landing zone (next document in this series). Read Best practices for VPC network design. Read more about Private Service Connect. Send feedback \ No newline at end of file diff --git a/Decide_resource_hierarchy.txt b/Decide_resource_hierarchy.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3750e34ca94da7121cf7faf28392c7424203638 --- /dev/null +++ b/Decide_resource_hierarchy.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/landing-zones/decide-resource-hierarchy +Date Scraped: 2025-02-23T11:45:12.813Z + +Content: +Home Docs Cloud Architecture Center Send feedback Decide a resource hierarchy for your Google Cloud landing zone Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC A resource hierarchy helps to organize your resources in Google Cloud. This document describes how to choose your resource hierarchy as part of your landing zone design. It's intended for cloud system administrators, architects, and technical practitioners who are involved in designing the resource hierarchy. This document is part of a series on landing zones. It includes sample hierarchies that demonstrate common ways that businesses can structure resources in Google Cloud. Design factors for resource hierarchy When you define your resource hierarchy in Google Cloud, you must consider how your organization works today and the ideal end state of your cloud transformation. The best way to manage resources is based on your organization's intended way of working in the cloud. As every organization is different, there is no single, best approach for resource hierarchy. However, we recommend that you avoid mapping your corporate organization structure to the resource hierarchy. Instead, focus on your business needs and operations in Google Cloud. 
Google Cloud resource hierarchy The resource hierarchy in Google Cloud starts at the root node, which is called the organization. We recommend that businesses have only one root node, except for in specific situations. You define lower levels of the hierarchy using folders and projects, and you create folders within folders to build your hierarchy. You can create the projects that host your workloads at any level of the hierarchy. The following diagram shows a root node called Organization, and folders at levels one, two, and three. Projects and subfolders are created under the folders in level two. Resource hierarchy factors When you decide your resource hierarchy, consider the following factors: Who is responsible for cloud resources? Is it your departments, subsidiaries, technical teams, or legal entities? What are your compliance needs? Do you have upcoming business events, such as mergers, acquisitions, and spin-offs? Understand resource interactions throughout the hierarchy Organization policies are inherited by descendants in the resource hierarchy, but can be superseded by policies defined at a lower level. For more information, see understanding hierarchy evaluation. You use organization policy constraints to set guidelines around the whole organization or significant parts of it and still allow for exceptions. Allow policies, formerly known as IAM policies are inherited by descendants, and allow policies at lower levels are additive. However, allow policies can be superseded by deny policies, which let you restrict permissions at the project, folder, and organization level. Deny policies are applied before allow policies. You also need to consider the following: Cloud Logging includes aggregated sinks that you can use to aggregate logs at the folder or organization level. For more information, see decide the security for your Google Cloud landing zone. Billing is not directly linked to the resource hierarchy, but assigned at the project level. However, to get aggregated information at the folder level, you can analyze your costs by project hierarchy using billing reports. Hierarchical firewall policies let you implement consistent firewall policies throughout the organization or in specific folders. Inheritance is implicit, which means that you can allow or deny traffic at any level or you can delegate the decision to a lower level. Decision points for resource hierarchy design The following flowchart shows the things that you must consider to choose the best resource hierarchy for your organization. The preceding diagram outlines the following decision points: Do different subsidiaries, regional groups, or business units have very different policy requirements? If yes, follow the design based on region or subsidiaries. If no, go to the next decision point. Do your workload or product teams require strong autonomy over their policies? For example, you don't have a central security team that determines policies for all workload or product teams. If yes, see the design based on accountability framework. If no, see the design based on application environment. Your specific use case might lead you to another resource hierarchy design than what the decision chart suggests. Most organizations choose a hybrid approach and select different designs at different levels of the resource hierarchy, starting with the design that most affects policies and access. 
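Whichever design you choose, the mechanics of building the hierarchy are the same. The following minimal sketch uses the Resource Manager Python client to create a top-level folder under the organization node; the organization ID and folder name are placeholders.

from google.cloud import resourcemanager_v3

folders_client = resourcemanager_v3.FoldersClient()

# Placeholder organization ID and display name; creation is a long-running
# operation, so wait for the result to get the generated folder ID.
operation = folders_client.create_folder(
    request={
        "folder": {
            "parent": "organizations/123456789012",
            "display_name": "production",
        }
    }
)
folder = operation.result()
print(folder.name)  # for example, "folders/<generated-id>"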
Option 1: Hierarchy based on application environments In many organizations, you define different policies and access controls for different application environments, such as development, production, and testing. Having separate policies that are standardized across each environment eases management and configuration. For example, you might have security policies that are more stringent in production environments than in testing environments. Use a hierarchy based on application environments if the following is true: You have separate application environments that have different policy and administration requirements. You have centralized security or audit requirements that a central security team must be able to enforce consistently on all production workloads and data. You require different Identity and Access Management (IAM) roles to access your Google Cloud resources in different environments. Avoid this hierarchy if the following is true: You don't run multiple application environments. You don't have varying application dependencies and business processes across environments. You have strong policy differences for different regions, teams, or applications. The following diagram shows a hierarchy for example.com, which is a fictitious financial technology company. As shown in the preceding diagram, example.com has three application environments that have different policies, access controls, regulatory requirements, and processes. The environments are as follows: Development and QA environment: This environment is managed by developers who are both internal employees and consultants. They continuously push code and are responsible for quality assurance. This environment is never available to your business' consumers. The environment has less strict integration and authentication requirements than the production environment, and developers are assigned approved roles with suitable permissions. The Development and QA environment is designed only for standard application offerings from example.com. Testing environment: This environment is used for regression and application testing, and supports the business-to-business (B2B) offerings of example.com clients who use example.com APIs. For this purpose, example.com creates two project types: Internal testing projects to complete internal regression, performance, and configuration for B2B offerings. Client UAT projects with multi-tenant support so that B2B clients can validate their configurations and align with example.com requirements for user experience, branding, workflows, reports, and so on. Production environment: This environment hosts all product offerings that are validated, accepted, and launched. This environment is subject to Payment Card Industry Data Security Standard (PCI DSS) regulations, uses hardware security modules (HSMs), and integrates with third-party processors for items such as authentication and payment settlements. The audit and compliance teams are critical stakeholders of this environment. Access to this environment is tightly controlled and limited mostly to automated deployment processes. For more information about designing a hierarchy that is based on application environments, see Organization structure in the enterprise foundations blueprint. Option 2: Hierarchy based on regions or subsidiaries Some organizations operate across many regions and have subsidiaries doing business in different geographies or have been a result of mergers and acquisitions. 
These organizations require a resource hierarchy that uses the scalability and management options in Google Cloud, and maintains the independence for different processes and policies that exist between the regions or subsidiaries. This hierarchy uses subsidiaries or regions as the highest folder level in the resource hierarchy. Deployment procedures are typically focused around the regions. Use this hierarchy if the following is true: Different regions or subsidiaries operate independently. Regions or subsidiaries have different operational backbones, digital platform offerings, and processes. Your business has different regulatory and compliance standards for regions or subsidiaries. The following diagram shows an example hierarchy of a global organization with two subsidiaries and a holding group with a regionalized structure. The preceding diagram has the following hierarchy structure: The following folders are on the first level: The Subsidiary 1 and Subsidiary 2 folders represent the two subsidiaries of the company that have different access permissions and policies from the rest of the organization. Each subsidiary uses IAM to restrict access to their projects and Google Cloud resources. The Holding group folder represents the groups that have high-level common policies across regions. The Bootstrap folder represents the common resources that are required to deploy your Google Cloud infrastructure, as described in the enterprise foundations blueprint. On the second level, within the Group Holding folder, there are the following folders: The APAC, EMEA, and Americas folders represent the various regions within the group that have different governance, access permissions, and policies. The Shared infrastructure folder represents resources that are used globally across all regions. Within each folder are various projects that contain the resources that these subsidiaries or regions are responsible for. You can add more folders to separate different legal entities, departments, and teams within your company. Option 3: Hierarchy based on an accountability framework A hierarchy based on an accountability framework works best when your products are run independently or organizational units have clearly defined teams who own the lifecycle of the products. In these organizations, the product owners are responsible for the entire product lifecycle, including its processes, support, policies, security, and access rights. Your products are independent from each other, and organization-wide guidelines and controls are uncommon. Use this hierarchy when the following is true: You run an organization that has clear ownership and accountability for each product. Your workloads are independent and don't share many common policies. Your processes and external developer platforms are offered as service or product offerings. The following diagram shows an example hierarchy for an ecommerce platform provider. The preceding diagram has the following hierarchy structure: The following folders are on the first level: The folders that are named Ecommerce Modules and Logistics and Warehousing Modules represent the modules within the platform offering that require the same access permissions and policies during the product lifecycle. The Reconciliation and Billing folder represents the product teams who are responsible for the end-to-end modules for specific business components within the platform offering. 
The Bootstrap folder represents the common resources that are required to deploy your Google Cloud infrastructure, as described in the enterprise foundations blueprint. Within each folder are various projects that contain the independent modules that different product teams are responsible for. For more information, see Fabric FAST Terraform framework resource hierarchy. Best practices for resource hierarchy The following sections describe the best practices for designing resource hierarchy that we recommend, regardless of the resource hierarchy that you choose. For more best practices on how to configure your Cloud Identity and Google Workspace accounts and organizations, see Best practices for planning accounts and organizations. Use a single organization node To avoid management overhead, use a single organization node whenever possible. However, consider using multiple organization nodes to address the following use cases: You want to test major changes to your IAM levels or resource hierarchy. You want to experiment in a sandbox environment that doesn't have the same organization policies. Your organization includes sub-companies that are likely to be sold off or run as completely separate entities in the future. Use standardized naming conventions Use a standardized naming convention throughout your organization. The security foundations blueprint has a sample naming convention that you can adapt to your requirements. Keep bootstrapping resources and common services separate Keep separate folders for bootstrapping the Google Cloud environment using infrastructure-as-code (IaC) and for common services that are shared between environments or applications. Place the bootstrap folder right below the organization node in the resource hierarchy. Place the folders for common services at different levels of the hierarchy, depending on the structure that you choose. Place the folder for common services right below the organization node when the following is true: Your hierarchy uses application environments at the highest level and teams or applications at the second layer. You have shared services such as monitoring that are common between environments. Place the folder for common services at a lower level, below the application folders, when the following is true: You have services that are shared between applications and you deploy a separate instance for each deployment environment. Applications share microservices that require development and testing for each deployment environment. The following diagram shows an example hierarchy where there is a shared infrastructure folder that is used by all environments and shared services folders for each application environment at a lower level in the hierarchy: The preceding diagram has the following hierarchy structure: The folders on the first level are as follows: The Development environment and Production environment folders contain the application environments. The Shared infrastructure folder contains common resources that are shared across environments, such as monitoring. The Bootstrap folder contains the common resources required to deploy your Google Cloud infrastructure, as described in the enterprise foundations blueprint. On the second level, there are the following folders: A folder in each environment for each application (App 1 and App 2) which contains the resources for these applications. 
A Shared folder for both application environments that contains services that are shared between the applications but are independent for each environment. For example, you might have a folder-level secrets project so that you can apply different allow policies to your production secrets and non-production secrets. Within each application folder are various projects that contain the independent modules that are part of each application. What's next Design the network for your landing zone (the next document in this series). Review the enterprise foundations blueprint. Read the blueprints and whitepapers that are available in the Google Cloud security best practices center. Send feedback \ No newline at end of file diff --git a/Decide_security.txt b/Decide_security.txt new file mode 100644 index 0000000000000000000000000000000000000000..f341d0b36200a7216492503178e0c72919a5691e --- /dev/null +++ b/Decide_security.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/landing-zones/decide-security +Date Scraped: 2025-02-23T11:45:20.862Z + +Content: +Home Docs Cloud Architecture Center Send feedback Decide the security for your Google Cloud landing zone Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-31 UTC This document introduces important security decisions and recommended options to consider when designing a Google Cloud landing zone. It's part of a series about landing zones, and is intended for security specialists, CISOs, and architects who want to understand the decisions that they need to make when designing a landing zone in Google Cloud. In this document, it's assumed that a central team, such as the security team or the platform team, enforces these landing zone security controls. Because the focus of this document is the design of enterprise-scale environments, some strategies that it describes might be less relevant for small teams. Decision points for securing your Google Cloud landing zone To choose the best security design for your organization, you must make the following decisions: How to limit persistent credentials for service accounts How to mitigate data exfiltration through Google APIs How to continuously monitor for insecure configurations and threats How to centrally aggregate necessary logs How to meet compliance requirements for encryption at rest How to meet compliance requirements for encryption in transit Which additional controls are necessary for managing cloud service provider access Architecture diagram The example architecture described in this document uses common security design patterns. Your specific controls might vary based on factors such as your organization's industry, target workloads, or additional compliance requirements. The following diagram shows the security controls architecture that you apply in your landing zone when you follow the recommendations in this document. The preceding diagram shows the following: Service account key management helps mitigate risk from long-lived service account credentials. VPC Service Controls defines a perimeter around sensitive resources that helps to restrict access from outside the perimeter. Security Command Center monitors the environment for insecure configurations and threats. A centralized log sink collects audit logs from all projects. Google default encryption at rest encrypts all data that persists to disk. Google default encryption in transit applies to layer 3 and layer 4 network paths. 
Access Transparency gives you visibility and control over how Google can access your environment. Decide how to limit persistent credentials for service accounts Service accounts are machine identities that you use to grant IAM roles to workloads and allow the workload to access Google Cloud APIs. A service account key is a persistent credential, and any persistent credentials are potentially high risk. We don't recommend that you let developers freely create service account keys. For example, if a developer accidentally commits the service account key to a public Git repository, an external attacker can authenticate using those credentials. As another example, if the service account key is stored in an internal repository, a malicious insider who can read the key could use the credentials to escalate their own Google Cloud privileges. To define a strategy to manage these persistent credentials, you must provide viable alternatives, limit the proliferation of persistent credentials, and manage how they are used. For information about alternatives to service account keys, see Choose the best authentication method for your use case. The following sections describe the options to limit persistent credentials. We recommend option 1 for most use cases. The other options discussed in the following sections are alternatives that you can consider if option 1 doesn't apply to your specific organization. All organizations created after May 23, 2024 have secure by default organization policies enforced when the organization resource is first created. This change makes option 1 the default option. Option 1: Restrict use of persistent service account keys We recommend that you do not permit any users to download service account keys because exposed keys are a common attack vector. Restricting the use of persistent service account keys is an option that can help reduce the risk and overhead of manually managing service account keys. To implement this option, consider the following: To prevent developers from creating and downloading persistent credentials, configure the organization policy constraint constraints/iam.disableServiceAccountKeyCreation. Educate your teams on more secure alternatives to service account keys. For example, when users and applications that are outside of your Google Cloud environment need to use a service account, they can authenticate with service account impersonation or workload identity federation instead of a service account key. Design a process for teams to request an exception to this policy when downloading a service account key is the only viable option. For example, a third-party SaaS product might require a service account key to read logs from your Google Cloud environment. Avoid this option when you already have tooling in place to generate short-lived API credentials for service accounts. For more information, see the following: Organization policy constraints in the Google Cloud enterprise foundations blueprint Best practices for working with service accounts Option 2: Use additional access management tools to generate short-lived credentials As an alternative to Restrict use of persistent service account keys, you can generate short-lived credentials for service accounts. Short-lived credentials create less risk than persistent credentials such as service account keys. You can develop your own tooling or use third-party solutions such as Hashicorp Vault to generate short-lived access credentials. 
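Before you invest in additional tooling, it can help to see what the built-in alternative from option 1 looks like in practice. The following is a minimal sketch, assuming placeholder organization, project, service account, and user names, of enforcing the key-creation constraint and then minting a short-lived token through service account impersonation instead of a downloaded key.

# Block creation of persistent service account keys across the organization
# (123456789012 is a placeholder organization ID; you need the Organization
# Policy Administrator role to run this).
gcloud resource-manager org-policies enable-enforce \
    iam.disableServiceAccountKeyCreation \
    --organization=123456789012

# Let a developer impersonate a workload service account instead of
# downloading a key (example-sa and example-project are placeholders).
gcloud iam service-accounts add-iam-policy-binding \
    example-sa@example-project.iam.gserviceaccount.com \
    --member="user:developer@example.com" \
    --role="roles/iam.serviceAccountTokenCreator"

# Mint a short-lived OAuth access token through impersonation; no persistent
# credential is stored on disk.
gcloud auth print-access-token \
    --impersonate-service-account=example-sa@example-project.iam.gserviceaccount.com

Third-party tooling such as HashiCorp Vault typically wraps the same token-generation APIs, so the comparison is mainly about operational fit rather than capability.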
Use this option when you already have invested in a third-party tool for generating short-lived credentials for access control, or have sufficient budget and capacity to develop your own solution. Avoid using this option when you don't have existing tooling to grant short-lived credentials, or don't have the capacity to build your own solution. For more information, see Creating short-lived service account credentials. Decide how to mitigate data exfiltration through Google APIs Google APIs have public endpoints that are available to all customers. While every API resource in your Google Cloud environment is subject to IAM access controls, there is a risk that data could be accessed using stolen credentials, exfiltrated by malicious insiders or compromised code, or exposed through a misconfigured IAM policy. VPC Service Controls is a solution that addresses these risks. However, VPC Service Controls also introduces complexity to your access model, so you must design VPC Service Controls to meet your unique environment and use case. The following sections describe the options to mitigate data exfiltration through Google APIs. We recommend option 1 for most use cases. The other options discussed in the following sections are alternatives that you can consider if option 1 doesn't apply to your specific use case. Option 1: Configure VPC Service Controls broadly across your environment We recommend that you design your environment within one or more VPC Service Controls perimeters that restrict all supported APIs. Configure exceptions to the perimeter with access levels or ingress policies so that developers can access the services that they require, including console access where needed. Use this option when the following is true: The services that you intend to use support VPC Service Controls, and your workloads do not require unrestricted internet access. You store sensitive data on Google Cloud that could be a significant loss if exfiltrated. You have consistent attributes for developer access that can be configured as exceptions to the perimeter, allowing users to access the data that they need. Avoid this option when your workloads require unrestricted internet access or services that are not supported by VPC Service Controls. For more information, see the following: Best practices for enabling VPC Service Controls Supported products and limitations for VPC Service Controls Design and architect service perimeters Option 2: Configure VPC Service Controls for a subset of your environment Instead of configuring VPC Service Controls broadly across your environment, you can configure VPC Service Controls only on the subset of projects that contain sensitive data and internal-only workloads. This option lets you use a simpler design and operation for most projects, while still prioritizing data protection for projects with sensitive data. For example, you might consider this alternative when a limited number of projects contain BigQuery datasets with sensitive data. You can define a service perimeter around just these projects, and define ingress and egress rules to allow narrow exceptions for the analysts who need to use these datasets. For another example, in an application with three-tier architecture, some components might be outside of the perimeter. The presentation tier that allows ingress from user traffic might be a project outside of the perimeter, and the application tier and data tier that contain sensitive data might be separate projects inside the service perimeter. 
You define ingress and egress rules to the perimeter so that the tiers can communicate across the perimeter with granular access. Use this option when the following is true: Only limited and well-defined projects contain sensitive data. Other projects contain data of lower risk. Some workloads are internal only, but some workloads require public internet access or have dependencies on services that are not supported by VPC Service Controls. Configuring VPC Service Controls across all projects creates too much overhead or requires too many workarounds Avoid this option when many projects could potentially contain sensitive data. For more information, see Best practices for enabling VPC Service Controls. Option 3: Don't configure VPC Service Controls As another alternative to configuring VPC Service Controls broadly across your environment, you can choose not to use VPC Service Controls, particularly if the operational overhead outweighs the value of VPC Service Controls. For example, your organization might not have a consistent pattern of developer access that could form the basis of an ingress policy. Perhaps your IT operations are outsourced to multiple third parties, so developers don't have managed devices or access from consistent IP addresses. In this scenario, you might not be able to define ingress rules to allow exceptions to the perimeter that developers need to complete their daily operations. Use this option when: You use services that do not support VPC Service Controls. Workloads are internet facing and don't contain sensitive data. You don't have consistent attributes of developer access like managed devices or known IP ranges. Avoid this option when you have sensitive data in your Google Cloud environment. Decide how to continuously monitor for insecure configurations and threats Adopting cloud services introduces new challenges and threats when compared to using services located on-premises. Your existing tools that monitor long-lived servers may not be appropriate for autoscaling or ephemeral services, and might not monitor serverless resources at all. Therefore, you should evaluate security tools that work with the full range of cloud services that you might adopt. You should also continuously monitor for cloud standards, like the CIS Benchmarks for Google Cloud. The following sections describe the options for continuous monitoring. We recommend option 1 for most use cases. The other options discussed in the following sections are alternatives that you can consider if option 1 doesn't apply to your specific use case. Option 1: Use Security Command Center We recommend that you activate Security Command Center at the organization level, which helps you strengthen your security posture by doing the following: Evaluating your security and data attack surface Providing asset inventory and discovery Identifying misconfigurations, vulnerabilities, and threats Helping you mitigate and remediate risks When you enable Security Command Center at the beginning of your landing zone build, your organization's security team has near real-time visibility on insecure configurations, threats, and remediation options. This visibility helps your security team assess whether the landing zone meets their requirements and is ready for developers to start deploying applications. Use this option when the following is true: You want a security posture management and threat detection tool that is integrated with all Google Cloud services without additional integration effort. 
You want to use the same threat intelligence, machine learning, and other advanced methods that Google uses to protect its own services. Your existing security operations center (SOC) doesn't have the skills or capacity to generate threat insights from a large volume of cloud logs. Avoid this option when your existing security tools can fully address ephemeral or serverless cloud resources, monitor for insecure configurations, and identify threats at scale in a cloud environment. Option 2: Use your existing security tools for cloud security posture management and threat detection As an alternative option to Use Security Command Center, you might consider other cloud security posture management tools. Various third-party tools exist that have similar functions to Security Command Center, and you might already have invested in cloud-native tools that are focused on multi-cloud environments. You can also use Security Command Center and third-party tools together. For example, you might ingest the finding notifications from Security Command Center to another tool, or you might add a third-party security service to the Security Command Center dashboard. As another example, you might have a requirement to store logs on an existing SIEM system for the SOC team to analyze for threats. You could configure your existing SIEM to ingest only the finding notifications that Security Command Center produces, instead of ingesting a large volume of logs and expecting a SOC team to analyze the raw logs for insight. Use this option when your existing security tools can fully address ephemeral or serverless cloud resources, monitor for insecure configurations, and identify threats at scale in a cloud environment. Avoid this option when the following is true: Your existing SOC doesn't have the skills or capacity to generate threat insights from the vast volume of cloud logs. Integrating multiple third-party tools with multiple Google Cloud services introduces more complexity than value. For more information, see the following: Enable finding notifications for Pub/Sub Managing security sources using the Security Command Center API Add a third-party security service Decide how to centrally aggregate necessary logs Most audit logs are stored in the Google Cloud project that produced them. As your environment grows, it can be untenable for an auditor to check logs in every individual project. Therefore, you need to make a decision on how logs will be centralized and aggregated to help your internal audit and security operations. The following sections describe the options for aggregating logs. We recommend option 1 for most use cases. The other options discussed in the following sections are alternatives that you can consider if option 1 doesn't apply to your specific use case. Option 1: Retain logs in Google Cloud by using aggregated logs sinks We recommend that you configure a centralized organization-wide log sink for audit logs and other logs that are required by your security team. You can reference the logs scoping tool to identify the logs that your security team requires and whether these log types require explicit enablement. For example, the security team expects a central record of any resources that your users create so that the security team can monitor and investigate suspicious changes. The security team also requires an immutable record of data access for certain highly sensitive workloads. 
Therefore, the security team configures one log sink to aggregate admin activity audit logs from all projects into a Log Analytics bucket in a central project that they can view for impromptu investigations. They then configure a second log sink for data access audit logs from projects with sensitive workloads into a Cloud Storage bucket for long-term retention. Use this option when the following is true: Your security team expects a central record of all audit logs or other specific log types. Your security team needs to store logs in an environment with restricted access, outside the control of the workload or teams who produced the log. Avoid this option when the following is true: Your organization doesn't have a central requirement for consistent audit logs across workloads. Individual project owners have full responsibility for managing their own audit logs. For more information, see the following: Detective controls in the enterprise foundations blueprint Best practices for Cloud Audit Logs Configure aggregated sinks Log scoping tool Retention policies and retention policy locks Option 2: Export required audit logs to storage outside of Google Cloud As an alternative to storing logs in Google Cloud only, consider exporting audit logs outside of Google Cloud. After you centralize necessary log types into an aggregate log sink in Google Cloud, ingest the contents of that sink to another platform outside of Google Cloud for storing and analyzing logs. For example, you might use a third-party SIEM to aggregate and analyze audit logs across multiple cloud providers. This tool has sufficient capabilities to work with serverless cloud resources, and your SOC team has the skills and capacity to generate insight from this large volume of logs. This option can potentially be very expensive because of the network egress cost in Google Cloud, as well as the storage cost and capacity in the other environment. Rather than exporting every available log, we recommend that you be selective about which logs are required in the external environment. Use this option when you have a requirement to store logs from all environments and cloud providers in a single central location. Avoid this option when the following is true: Your existing systems don't have the capacity or budget to ingest a large volume of additional cloud logs. Your existing systems require integration efforts for each log type and format. You are collecting logs without a clear goal of how they will be used. For more information, see Detective controls in the enterprise foundations blueprint. Decide how to meet compliance requirements for encryption at rest Google Cloud automatically encrypts all your content stored at rest, using one or more encryption mechanisms. Depending on your compliance requirements, you might have an obligation to manage the encryption keys yourself. The following sections describe the options for encryption at rest. We recommend option 1 for most use cases. The other options discussed in the following sections are alternatives that you can consider if option 1 doesn't apply to your specific use case. Option 1: Accept use of default encryption at rest Default encryption at rest is sufficient for many use cases that don't have particular compliance requirements regarding encryption key management. For example, the security team at an online gaming company requires all customer data to be encrypted at rest. 
They don't have regulatory requirements about key management, and after reviewing Google's default encryption at rest, they are satisfied that it's a sufficient control for their needs. Use this option when the following is true: You don't have particular requirements around how to encrypt data or how encryption keys are managed. You prefer a managed service over the cost and operational overhead of managing your own encryption keys. Avoid this option when you have compliance requirements to manage your own encryption keys. For more information, see Encryption at rest in Google Cloud. Option 2: Manage encryption keys using Cloud KMS In addition to default encryption at rest, you might require more control over the keys used to encrypt data at rest within a Google Cloud project. Cloud Key Management Service (Cloud KMS) offers the ability to protect your data using customer-managed encryption keys (CMEK). For example, in the financial services industry, you might have a requirement to report to your external auditors how you manage your own encryption keys for sensitive data. For additional layers of control, you can configure hardware security modules (HSM) or external key management (EKM) with CMEK. Customer-supplied encryption keys (CSEK) are not recommended; scenarios that historically were addressed by CSEK are now better addressed by Cloud External Key Manager (Cloud EKM) because Cloud EKM has support for more services and higher availability. This option shifts some responsibility to application developers to follow the key management that your security team mandates. The security team can enforce the requirement by blocking the creation of non-compliant resources with CMEK organization policies. Use this option when one or more of the following is true: You have a requirement to manage the lifecycle of your own keys. You have a requirement to generate cryptographic key material with a FIPS 140-2 Level 3 certified HSM. You have a requirement to store cryptographic key material outside of Google Cloud using Cloud EKM. Avoid this option when the following is true: You don't have particular requirements for how to encrypt data or how encryption keys are managed. You prefer a managed service over the cost and operational overhead of managing your own encryption keys. For more information, see the following: Manage encryption keys with Cloud Key Management Service in the enterprise foundations blueprint Customer-managed encryption keys (CMEK) Cloud HSM Cloud External Key Manager CMEK organization policies Option 3: Tokenize data at the application layer before persisting in storage In addition to the default encryption provided by Google, you can also develop your own solution to tokenize data before storing it in Google Cloud. For example, a data scientist must analyze a dataset that contains PII, but the data scientist should not have access to read the raw data of some sensitive fields. To control access to the raw data, you could develop an ingestion pipeline with Sensitive Data Protection to ingest and tokenize sensitive data, and then persist the tokenized data to BigQuery. Tokenizing data is not a control that you can centrally enforce in the landing zone. Instead, this option shifts the responsibility to application developers to configure their own encryption, or to a platform team who develops an encryption pipeline as a shared service for application developers to use.
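As a rough illustration of what such a pipeline calls under the hood, the following sketch sends a sample value to the Sensitive Data Protection content:deidentify method. PROJECT_ID and the sample text are placeholders, and replaceWithInfoTypeConfig masks values rather than producing reversible tokens; a production pipeline would typically use a crypto-based transformation with a key wrapped by Cloud KMS.

# De-identify a sample value before it is persisted to storage.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/PROJECT_ID/locations/global/content:deidentify" \
  -d '{
    "item": { "value": "Customer email: jane.doe@example.com" },
    "inspectConfig": { "infoTypes": [ { "name": "EMAIL_ADDRESS" } ] },
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [
          { "primitiveTransformation": { "replaceWithInfoTypeConfig": {} } }
        ]
      }
    }
  }'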
Use this option when particular workloads have highly sensitive data that must not be used in its raw form. Avoid this option when Cloud KMS is sufficient to meet your requirements, as described in Manage encryption keys using Cloud KMS. Moving data through additional encryption and decryption pipelines adds cost, latency, and complexity to applications. For more information, see the following: Sensitive Data Protection overview Take charge of your data: How tokenization makes data usable without sacrificing privacy Decide how to meet compliance requirements for encryption in transit Google Cloud has several security measures to help ensure the authenticity, integrity, and privacy of data in transit. Depending on your security and compliance requirements, you might also configure application layer encryption. The following sections describe the options for encryption in transit. We recommend option 1 for most use cases. The other option discussed in the following sections is an additional feature that you can consider if option 1 is insufficient for your specific use case. Option 1: Evaluate whether default encryption in transit is sufficient Many traffic patterns in your landing zone benefit from Google's default encryption in transit. All VM-to-VM traffic within a VPC network and connected VPC networks is encrypted at layer 3 or layer 4. Traffic from the internet to Google services terminates at the Google Front End (GFE). GFE also does the following: Terminates traffic for incoming HTTP(S), TCP, and TLS proxy traffic Provides DDoS attack countermeasures Routes and load balances traffic to the Google Cloud services A traffic pattern that requires configuration is Cloud Interconnect, because traffic from your on-premises environment to Google Cloud is not encrypted in transit by default. If using Cloud Interconnect, we recommend that you enable MACsec for Cloud Interconnect as part of your landing zone. Use this option when Google default encryption in transit is sufficient for your internal workloads. Avoid this option when the following is true: You allow internet ingress traffic into your VPC network. You require Layer 7 encryption in transit between all internal compute resources. In these cases, you should configure additional controls, as discussed in Option 2: Require applications to configure Layer 7 encryption in transit. For more information, see Encryption in transit in Google Cloud. Option 2: Require applications to configure Layer 7 encryption in transit In addition to default encryption in transit, you can configure Layer 7 encryption for application traffic. Google Cloud provides managed services to support applications that need application-layer encryption in transit, including managed certificates and Cloud Service Mesh. For example, a developer is building a new application that accepts ingress traffic from the internet. They use an external Application Load Balancer with Google-managed SSL certificates to run and scale services behind a single IP address. Application layer encryption is not a control that you can enforce centrally in the landing zone. Instead, this option shifts some responsibility to application developers to configure encryption in transit. Use this option when applications require HTTPS or SSL traffic between components.
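As an illustration of the load balancer example above, the following sketch shows how a developer might attach a Google-managed certificate to an external Application Load Balancer frontend. The certificate, proxy, forwarding-rule, and URL map names, the reserved IP address name, and the domain are placeholders; the URL map and reserved address are assumed to already exist.

# Create a Google-managed certificate for the application's public domain.
gcloud compute ssl-certificates create web-app-cert \
    --domains=www.example.com \
    --global

# Attach the certificate to an HTTPS proxy for the existing URL map.
gcloud compute target-https-proxies create web-app-https-proxy \
    --url-map=web-app-url-map \
    --ssl-certificates=web-app-cert

# Expose the proxy on port 443 through a global forwarding rule that uses a
# previously reserved external IP address.
gcloud compute forwarding-rules create web-app-https-rule \
    --global \
    --target-https-proxy=web-app-https-proxy \
    --ports=443 \
    --address=web-app-ip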
Consider allowing a limited exception to this option when you are migrating compute workloads to the cloud that did not previously require encryption in transit for internal traffic when the applications were on-premises. During a large-scale migration, forcing legacy applications to modernize before migration might cause an unacceptable delay to the program. For more information, see the following: Using Google-managed SSL certificates Using self-managed SSL certificates Cloud Service Mesh Cloud Service Mesh service security Decide which additional controls are necessary for managing cloud service provider access The need to audit cloud service provider (CSP) access can be a concern during cloud adoption. Google Cloud provides multiple layers of control that can enable verification of cloud provider access. The following sections describe the options for managing CSP access. We recommend option 1 for most use cases. The other option discussed in the following sections is an additional feature that you can consider if option 1 is insufficient for your specific use case. Option 1: Enable Access Transparency logs Access Transparency logs record the actions taken by Google Cloud personnel in your environment, such as when they troubleshoot a support case. For example, your developer raises a troubleshooting issue to Cloud Customer Care, and asks the support agent to help troubleshoot their environment. An Access Transparency log is generated to show what actions the support staff took, including the support case number that justified it. Use this option when the following is true: You have a requirement to verify that Google Cloud personnel are accessing your content only for valid business reasons, such as fixing an outage or attending to your support requests. You have a compliance requirement to track access to sensitive data. Option 2: Enable Access Transparency logs and provider access management If your business has a compliance requirement to grant explicit approval before a CSP can access your environment, in addition to Option 1, you can use Access Transparency with other controls that let you explicitly approve or deny access to your data. Access Approval is a manual solution that ensures that Customer Care and Google engineering require your explicit approval (through email or through a webhook) whenever they need to access your content. Key Access Justifications is a programmatic solution that adds a justification field to any requests for encryption keys that are configured with Cloud EKM. Use this option when the following is true: You want a central team to directly manage access to your content by Google personnel. For Access Approval, you can accept the risk that the central capability to approve or deny each access request is more important than the operational trade-off, which could be a slower resolution of support cases. For Key Access Justifications, your business is already using Cloud External Key Manager as part of your encryption strategy, and wants programmatic control over all types of access to encrypted data (not just Google personnel access to data). Avoid this option when the following is true: The operational delay that can result when a central team with Access Approval authority must respond is an unacceptable risk to workloads that require high availability and a rapid response from Customer Care. 
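If you do adopt this option, enrollment can be scripted. The following is a minimal sketch, assuming a placeholder project and notification mailbox, that enrolls all supported services in Access Approval; Key Access Justifications is configured separately through your Cloud EKM key settings.

# Enroll all supported services in Access Approval for a project and route
# approval requests to a security mailbox (both values are placeholders).
gcloud access-approval settings update \
    --project=example-project \
    --enrolled_services=all \
    --notification_emails="approvals@example.com"

# Review the approval requests that Google personnel have raised.
gcloud access-approval requests list --project=example-project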
For more information, see the following: Cloud provider access management Overview of Access Approval Security best practices for Google Cloud landing zones In addition to the decisions introduced in this document, consider the following security best practices: Provision the identities in your environment. For more information, see Decide how to onboard identities to Google Cloud. Design a network that meets your organization's use cases. For more information, see Decide the network design for your Google Cloud landing zone. Implement the organization policy constraints that are defined in the enterprise foundations blueprint. These constraints help to prevent common security issues such as creating unnecessary external IP addresses, or granting the Editor role to all service accounts. Review the full list of organization policy constraints to assess whether other controls are relevant for your requirements. All organizations created after May 23, 2024 have secure by default organization policies enforced when the organization resource is first created. This change makes option 1 the default option. What's next Review the enterprise foundations blueprint. Read more security best practices in the Google Cloud architecture framework. Read the blueprints and technical papers that are available in the Google Cloud security best practices center. Send feedback \ No newline at end of file diff --git a/Deep_Learning_Containers.txt b/Deep_Learning_Containers.txt new file mode 100644 index 0000000000000000000000000000000000000000..8d378ae8421de4e3a311a4f41fc70f985a5b4c63 --- /dev/null +++ b/Deep_Learning_Containers.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/deep-learning-containers/docs +Date Scraped: 2025-02-23T12:03:14.642Z + +Content: +Home Deep Learning Containers Documentation Stay organized with collections Save and categorize content based on your preferences. Deep Learning Containers documentation View all product documentation Deep Learning Containers are a set of Docker containers with key data science frameworks, libraries, and tools pre-installed. These containers provide you with performance-optimized, consistent environments that can help you prototype and implement workflows quickly. Learn more. Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Get started with a local deep learning container Deep Learning Containers overview emoji_objects Concepts Choose a container image Train in a container using Google Kubernetes Engine Create a derivative container info Resources Pricing Release notes Related videos \ No newline at end of file diff --git a/Define_migration_scope.txt b/Define_migration_scope.txt new file mode 100644 index 0000000000000000000000000000000000000000..49872abb9fbe69d1deda202534d0a9dc5a0447a0 --- /dev/null +++ b/Define_migration_scope.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/riot-live-migration-redis-enterprise-cloud/assessment +Date Scraped: 2025-02-23T11:53:08.115Z + +Content: +Home Docs Cloud Architecture Center Send feedback Define the scope of your migration to Redis Enterprise Cloud Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2024-01-29 UTC This document describes how you define the scope of your migration to deploy RIOT Live Migration to migrate to Redis Enterprise Cloud in a production environment. Database architects, DevOps and SRE teams, or Network administrators can use this architecture to offer near-zero downtime migrations to their teams. This document assumes that you're familiar with using the Google Cloud CLI and Compute Engine. To define the scope of your migration, you complete the following steps: Assess the source environment. Build an inventory of your source instances. Identify and document the migration scope and affordable downtime. Assess your deployment and administration process. Assess the source environment To assess your source environment, you determine the requirements and dependencies of the resources that you want to migrate from Redis OSS, AWS ElastiCache, and Azure Cache for Redis to a fully-managed Redis Enterprise Cloud instance in Google Cloud. The assessment phase consists of the following tasks: Build a comprehensive inventory of Redis-compatible workloads. Perform data sizing and Redis Cluster sizing: If you're using AWS ElastiCache, you can extract your database metrics by using the Redis tool ECstats. If you're using Azure Cache for Redis, you can extract raw usage data for your Redis instances by using the acrp2acre tool. Review networking requirements such as VPC peering or Private service connect. Calculate the total cost of ownership (TCO) of the target environment by visiting the Redis Enterprise Cloud Pricing page. Decide on the order and priority of the workloads that you want to migrate. Create different subscriptions to consolidate databases with similar purposes such as development or test, staging, and production. Build an inventory of your source instances To define the scope of your migration, you create an inventory of your source instances from Redis OSS, AWS ElastiCache, and Azure Cache for Redis. The goal of this step is to collect information about each database, such as memory limit, IOPS, and durability requirements. Generic properties at the subscription level: The region of your subscription Active-Active geo distribution Auto-tiering (receive lower total cost of ownership if memory limit is over 250GB or more) Configurations for each database: Memory limit and throughput (operations per second) High availability Durability requirements Advanced capabilities such as search, JSON, time series, and probabilistic for each database Connection information including port, user, and other security options Requirements and constraints: Recovery point objective (RPO) and recovery time objective (RTO) Service level agreements (SLAs) Regulatory and compliance requirements (see the Redis Customer Trust Center) Authentication and security requirements Identify and document the migration scope and affordable downtime To have a successful migration, you need to have a migration scope in place. To scope your migration, you document essential information that influences your migration strategy and tooling. By this stage of the assessment, you can answer the following questions: Are your databases larger than 250GB? If so, you will receive a lower total cost of ownership if auto-tiering is enabled. Where are the databases located (regions and zones) and what is their proximity to applications? How often does the data change? Many components of this effort are already described in the preceding section "Build an inventory of your source instances." 
However, there are other aspects that you need to consider in this step, like documenting the scalability, durability, and security requirements and constraints that need to be upheld. We recommend that you review the Redis Trust Center for industry and compliance certifications, and discuss them with your business owners and legal team if necessary. You should also define a thorough migration scope. You can use the output from tools like ECstats and acrp2acre to define the sizing requirements for your Redis Enterprise Cloud instances in Google Cloud. Review the attributes of each database instance, such as scalability, and security requirements. If the database size is greater than 250 GB, we recommend that you use auto-tiering. We also recommend that you group databases with similar characteristics and security profiles into a single subscription. Doing so will help ensure that your database migration doesn't affect your existing SLA and business operations. Assess your deployment and administration process To ensure that there aren't any unnecessary interruptions to your production environment, we recommend that you assess the operational and deployment processes of your database. The assessment should help you to determine how your databases need to be adapted to facilitate a successful migration. Assess how you define and enforce security policies for your database instance to control access to your database. Assess your monitoring and alerting requirements by defining notification emails sent to your account and the conditions that trigger them. Collect and visualize your Redis Cloud metrics by using the Redis Prometheus and Grafana integration. What's next Read Google Cloud data migration content. For more in-depth documentation and best practices, review RIOT documentation. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Saurabh Kumar | ISV Partner EngineerGilbert Lau | Principal Cloud Architect, RedisOther contributors: Chris Mague | Customer Engineer, Data ManagementGabe Weiss | Developer Advocacy ManagerMarco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Define_reliability_based_on_user-experience_goals.txt b/Define_reliability_based_on_user-experience_goals.txt new file mode 100644 index 0000000000000000000000000000000000000000..8a5a46dddddaf8372281be771bbf35d7f091033b --- /dev/null +++ b/Define_reliability_based_on_user-experience_goals.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/define-reliability-based-on-user-experience-goals +Date Scraped: 2025-02-23T11:43:17.475Z + +Content: +Home Docs Cloud Architecture Center Send feedback Define reliability based on user-experience goals Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework helps you to assess your users' experience, and then map the findings to reliability goals and metrics. This principle is relevant to the scoping focus area of reliability. Principle overview Observability tools provide large amounts of data, but not all of the data directly relates to the impacts on the users. For example, you might observe high CPU usage, slow server operations, or even crashed tasks. However, if these issues don't affect the user experience, then they don't constitute an outage. 
To measure the user experience, you need to distinguish between internal system behavior and user-facing problems. Focus on metrics like the success ratio of user requests. Don't rely solely on server-centric metrics, like CPU usage, which can lead to misleading conclusions about your service's reliability. True reliability means that users can consistently and effectively use your application or service. Recommendations To help you measure user experience effectively, consider the recommendations in the following sections. Measure user experience To truly understand your service's reliability, prioritize metrics that reflect your users' actual experience. For example, measure the users' query success ratio, application latency, and error rates. Ideally, collect this data directly from the user's device or browser. If this direct data collection isn't feasible, shift your measurement point progressively further away from the user in the system. For example, you can use the load balancer or frontend service as the measurement point. This approach helps you identify and address issues before those issues can significantly impact your users. Analyze user journeys To understand how users interact with your system, you can use tracing tools like Cloud Trace. By following a user's journey through your application, you can find bottlenecks and latency issues that might degrade the user's experience. Cloud Trace captures detailed performance data for each hop in your service architecture. This data helps you identify and address performance issues more efficiently, which can lead to a more reliable and satisfying user experience. Previous arrow_back Overview Next Set realistic targets for reliability arrow_forward Send feedback \ No newline at end of file diff --git a/Deploy_FortiGate-VM_Next_Generation_Firewall_using_Terraform.txt b/Deploy_FortiGate-VM_Next_Generation_Firewall_using_Terraform.txt new file mode 100644 index 0000000000000000000000000000000000000000..01887cbc1ddbe5c6d3ddc30131874b096f4dad68 --- /dev/null +++ b/Deploy_FortiGate-VM_Next_Generation_Firewall_using_Terraform.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/use-terraform-to-deploy-a-fortigate-ngfw +Date Scraped: 2025-02-23T11:53:48.628Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploying FortiGate-VM Next Generation Firewall using Terraform Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-10-20 UTC By Bartek Moczulski, Consulting System Engineer, Fortinet Inc. This tutorial shows you how to use Terraform to deploy a FortiGate reference architecture to help protect your applications against cyberattacks. FortiGate is a next-generation firewall (NGFW) with software-defined wide area network (SD-WAN) capabilities deployed as a network virtual appliance in Compute Engine. When deployed, FortiGate can help secure applications by inspecting all inbound traffic originating from the internet and outbound and internal traffic between application tiers. You can use the same FortiGate cluster as a secure web gateway to protect outbound traffic originating from your workloads. In this tutorial, you create several networks and deploy a high-availability (HA) FortiGate cluster. You also create a two-tier web application and configure FortiGate and Google Cloud to help enable secure inbound traffic, outbound traffic, and internal traffic. 
Architecture The following diagram shows that the architecture deployed in this tutorial consists of an HA cluster of FortiGate NGFWs that uses a pair of external and internal load balancers to direct traffic to the active FortiGate VM instance. A two-tier web application is deployed behind the NGFWs. Connections from the internet to the application frontend (Tier 1) pass through the active FortiGate instance, as indicated by the red path. The NGFWs also inspect connections from Tier1 to Tier2, as indicated by the purple path. For more information about testing NGFW connectivity and testing threat prevention, see Verify the FortiGate NGFW deployment. Objectives Prepare for the deployment. Clone the Terraform modules. Deploy an HA cluster of FortiGate VMs into new Virtual Private Cloud (VPC) networks. Verify that the appliances deployed successfully. Create sample workloads and help secure them using FortiGate. This tutorial is split into two separate deployments. In the first deployment, you use the day0 Terraform module to create a basic FortiGate deployment, form an HA cluster, and prepare load balancer resources. In the second deployment, you use an example configuration in the day1 Terraform module to create a simple two-tier web application. You also enable secure connectivity to, from, and within the application using the FortiGate NGFW deployed earlier. Costs FortiGate VM for Google Cloud supports both on-demand pay-as-you-go (PAYG) licensing and bring-your-own-license (BYOL) models. This tutorial uses the BYOL model. For more information, see FortiGate support. In this document, you use the following billable components of Google Cloud: Compute Engine Cloud Load Balancing Cloud Storage Cloud NAT To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. For more information about FortiGate licensing in the public cloud, see the Fortinet article on order types. If you're a new Google Cloud user, you might be eligible for a free trial. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Compute Engine API. Enable the API Upload FortiGate license files (.lic) to a local directory, like lic1.lic and lic2.lic. These files are referenced when deploying instances for license provisioning. To obtain your production-ready or evaluation licenses, contact your local Fortinet reseller. Ensure both lic1.lic and lic2.lic files are uploaded to the Day0 directory before you run your Terraform plan. If you prefer to use the PAYG licensing model, consult the documentation on GitHub regarding the required code modifications. (Optional) Create a dedicated custom role and a service account for FortiGate. This action isn't obligatory, although it's highly recommended that you create these roles. If the fortigatesdn-ro service account isn't found, the template attempts to detect it and falls back to the Compute Engine default service account. For more information, see the Create a custom role and a service account section. You must have the Compute Admin privileges to the Google Cloud project to deploy this tutorial. Note: Avoid using multiple copies of the same solution into the same project. Using duplicate resource names causes conflicts. This tutorial consists of Terraform templates that fully automate the deployment of all resources. 
The necessary files are available in the Fortinet GitHub repository: In Cloud Shell, clone the GitHub repository: git clone https://github.com/fortinet/fortigate-tutorial-gcp.git You can follow this tutorial using Cloud Shell, which comes preinstalled with the gcloud CLI, Git, Terraform, and text editors. If you use Cloud Shell, you don't need to install anything on your workstation. In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell A Cloud Shell session opens inside a new frame at the bottom of the Google Cloud console and displays a command-line prompt. Create a custom role and a service account FortiGate instances can query Google API Client Libraries to resolve dynamic addresses in a firewall policy. This popular functionality lets you build firewall policies based on network tags and other metadata rather than on static IP addresses. In this section, you create an IAM role and a service account with the minimum required privilege set, and a binding policy. After you create roles, they can't be deleted and recreated. In Cloud Shell, create a Fortinet IAM role: GCP_PROJECT_ID=$(gcloud config get-value project) gcloud iam roles create FortigateSdnReader --project=$GCP_PROJECT_ID \ --title="FortiGate SDN Connector Role (read-only)" \ --permissions="compute.zones.list,compute.instances.list,container.clusters.list,container.nodes.list,container.pods.list,container.services.list" Create a Fortinet service account: gcloud iam service-accounts create fortigatesdn-ro \ --display-name="FortiGate SDN Connector" Create a binding policy: gcloud projects add-iam-policy-binding $GCP_PROJECT_ID \ --member="serviceAccount:fortigatesdn-ro@$GCP_PROJECT_ID.iam.gserviceaccount.com" \ --role="projects/$GCP_PROJECT_ID/roles/FortigateSdnReader" Initialize Terraform In this section, you modify the files in the Day0 directory in the Terraform GitHub repository: Before you begin, find the public IP address of your computer by searching for My IP address. In Cloud Shell, update the terraform.tfvars file with your Google Cloud project ID and the us-central1 region: GCP_PROJECT = PROJECT_ID GCE_REGION = us-central1 Prefix = fgt- Add your public IP address to the admin_acl list in the main.tf file: module "fortigates" { source = "../modules/fgcp-ha-ap-lb" region = var.GCE_REGION service_account = data.google_service_account.fgt.email != null ? data.google_service_account.fgt.email : "" admin_acl = ["IP-ADDRESS", "${data.http.my_ip.body}/32"] api_acl = ["${data.http.my_ip.body}/32"] This template restricts access for the administrative interface and FortiGate API to the IP address that you run the template from. Initialize Terraform: terraform init terraform plan -out tf.plan ## verify the list of resources that will be created before moving to the next step. terraform apply tf.plan To connect to the primary FortiGate instance, use ssh with the default user ID: admin. The default password and primary IP address are visible in the Terraform output. When you connect for the first time, you must change the admin password.
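For example, the connection steps might look like the following. This is a sketch only: the Terraform output names vary by module version, so run terraform output to see the exact names that your day0 deployment exposes, and replace PRIMARY_IP with the address it reports.

# List the deployment outputs, which include the primary FortiGate address
# and the initial admin password (output names depend on the module).
terraform output

# Connect with the default admin user; you are prompted to set a new
# password on first login.
ssh admin@PRIMARY_IP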
Verify the status of the provisioning by checking the output of the following command: get system ha status The output should resemble the data in the following screenshot: At the end of the output, you should see two instances: Primary and Secondary. This output confirms that both instances were properly licensed and successfully formed an HA cluster. Note: The secondary instance might initially show up as out-of-sync. Further deployment and configuration steps can continue regardless. This section described the minimal deployment of a FortiGate cluster. In the next section, you add additional functionality such as routes, firewall policies, and peered VPCs. Deploy FortiGate NGFW workloads In this section, you update the files in the Day1 directory of the GitHub repository to deploy workloads with a new public IP address. You also update the configuration files of the FortiGate NGFW. The Day1 directory contains the following files: day0-import.tf: Imports data from the day0 deployment to identify FortiGate instances and their API key. It also indicates where the Terraform state file is pulled from. workloads.tf: Creates a proxy and web servers into new Tier 1 and Tier 2 VPC networks. main.tf: Connects Tier 1 and Tier 2 VPC networks with internal VPC of the FortiGate cluster. It also enables inbound and outbound connectivity and adds Tier 1 to the Tier 2 east-west firewall policy. Using the -parallelism=1 Terraform option in the code sample helps reduce or remove concurrency issues related to Google Cloud peering and routing operations for multiple VPC networks in a single Terraform deployment. In Cloud Shell, access the Day1 directory: cd ../day1 To deploy a sample web application and configure FortiGate to forward connections to it, run the following commands: terraform init terraform plan -out day1.plan terraform apply day1.plan -parallelism=1 The following screenshot shows a route operation in progress error. An error like this can occur if you don't use the -parallelism flag. Without this flag, you might run into concurrency issues. To recover from this error, run the terraform apply command again. Terraform automatically verifies which steps failed and adds the missing resources. Verify the FortiGate NGFW deployment At this point in the tutorial, you've deployed a multi-tier architecture with inbound, outbound, and internal connectivity, all secured using FortiGate. In this section, you verify your deployment of the FortiGate NGFW. You also attempt to upload a harmless virus file to ensure that the firewall is operating. To connect to the web server over FortiGate, check the public IP address of the external load balancer in Terraform outputs. Connect to it using your web browser. Wait a minute or two for the proxy and web server provisioning to finish before attempting the following verification steps: In Cloud Shell, copy the public IP address of the external load balancer from the public_ip output of the terraform apply command. Launch a web browser. Connect to the public IP address of the load balancer: http://Public IP address of the load balancer You should see the default Nginx welcome page. To download a harmless test virus file from the European Institute for Computer Anti-Virus Research, and to verify that FortiGate NGFW threat inspection is working, add /eicar.com to the IP address you entered in the previous step. 
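You can also run both checks from Cloud Shell instead of a browser. The following sketch assumes that LB_IP holds the load balancer address copied from the Terraform output (203.0.113.10 is a placeholder); the exact block page returned for the EICAR download depends on your FortiGate antivirus profile.

# Placeholder for the external load balancer address from the Terraform output.
LB_IP=203.0.113.10

# The Nginx welcome page should return HTTP 200 through the FortiGate cluster.
curl -s -o /dev/null -w "frontend status: %{http_code}\n" "http://${LB_IP}/"

# The EICAR test file download should be blocked by FortiGate threat
# inspection, so expect a block page or an error instead of the file body.
curl -s -i "http://${LB_IP}/eicar.com" | head -n 20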
To see more details about the inspected network traffic: In the web browser, enter the public IP address of the first FortiGate instance (the same IP address you used earlier for SSH connections) into the address bar. Connect to the FortiGate web console using a web browser. The HTTPS console is available on the standard port of 443 by default. Skip the following optional setup steps: Dashboard configuration Firmware updates Initial welcome video Select Log & Report > Forward Traffic from the FortiGate web console. A list of all the connections attempted through the FortiGate firewall appears. You should be able to identify the following: Successful connection from your own IP address to the web server followed by a successful internal connection. Multiple outbound connections with various applications detected. Another internal connection between 10.0.0.5 and 10.1.0.5 that is blocked because of UTM policy. This action proves that the east-west threat inspection is working correctly. To reveal details of the detected threat, double-click on the blocked connection in the Result column. A details page appears. Select Security. Security appears in Log details. Note: By default, FortiGate policy doesn't log all connections. To help you learn, the tutorial intentionally added more verbosity to the logging files. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources. Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Delete the individual resources To avoid incurring further charges, delete the individual resources that you use in this tutorial: Delete the configuration added in the Day1 directory Go to the Day1 directory from which you deployed the solution and run the following command: terraform destroy Confirm you want to delete the resources. Delete the FortiGate and the remaining resources Go to the Day0 directory and run the following command: terraform destroy Confirm you want to delete the resources. Note: If your public IP address changed after you reinitialized Cloud Shell, you might see the FortiOS provider error. To mitigate the FortiOS provider error, you have to update the API ACL: Check the public IP address of your Cloud Shell instance: curl https://api.ipify.org Enable connecting to serial ports in the primary FortiGate (fgt-vm1-us-central1) VM instance settings using the Google Cloud console. To connect to the FortiGate administrative interface using the serial console, click Connect to serial console. 
Issue the following commands in the FortiGate CLI, replacing IP_ADDRESS with Cloud Shell public IP address: config system api-user edit terraform config trusthost edit 0 set ipv4-trusthost IP_ADDRESS/32 next end next end Go back to the Day1 directory and re-run terraform destroy. What's next Read more about other Fortinet products for Google Cloud at fortinet.com. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Deploy_an_Active_Directory_forest_on_Compute_Engine.txt b/Deploy_an_Active_Directory_forest_on_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..5c67e337b010565520ee027fdaad8498bee88e61 --- /dev/null +++ b/Deploy_an_Active_Directory_forest_on_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deploy-an-active-directory-forest-on-compute-engine +Date Scraped: 2025-02-23T11:51:21.983Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy an Active Directory forest on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document describes how to deploy an Active Directory forest on Compute Engine in a way that follows the best practices described in Best practices for running Active Directory on Google Cloud. This guide is intended for administrators and DevOps engineers. It assumes that you have a solid understanding of Active Directory and basic knowledge of Google Cloud networking and security. Architecture The deployment consists of two projects: A host project that contains a Shared VPC network, a private DNS forwarding zone, and firewall rules for Active Directory. A service project that contains two domain controllers that are deployed across two zones. This architecture lets you do the following: Deploy additional Windows workloads in separate projects, and let them use the Shared VPC network and Active Directory forest. Integrate the Active Directory forest with an existing on-premises forest to implement the resource-forest pattern. Note: This guide describes how to deploy an Active Directory forest for which you manage domain controllers yourself. Consider using Managed AD to deploy a fully-managed Active Directory forest unless some of your requirements cannot be accommodated by using Managed AD. Before you begin To follow the instructions in this guide, make sure you have the following: Subnet CIDR ranges for two subnets: Domain controllers subnet. This subnet contains the domain controllers. Using a dedicated subnet for domain controllers helps you distinguish domain controller traffic from other server traffic when you manage firewall rules or analyzing network logs. We recommend a subnet CIDR range that's sized /28 or /29. Resource subnet. This subnet contains servers and administrative workstations. Use a subnet CIDR range that's large enough to accommodate all the servers that you plan to deploy. Make sure that your subnets don't overlap with any on-premises subnets, and allow sufficient room for growth. A DNS domain name and a NetBIOS domain name for the Active Directory forest root domain. For more information about choosing a name, see Microsoft naming conventions. Deploy a shared network In this section, you create a new project and use it to deploy a Shared VPC network. Later, you'll use this network to deploy the Active Directory domain controllers. 
Create a project You now create a new project and use it to deploy a Shared VPC network. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Compute Engine and Cloud DNS APIs. Enable the APIs To get the permissions that you need to deploy a shared network, ask your administrator to grant you the following IAM roles on the project or parent folder: Compute Network Admin (roles/compute.networkAdmin) Compute Security Admin (roles/compute.securityAdmin) Compute Shared VPC Admin (roles/compute.xpnAdmin) DNS Administrator (roles/dns.admin) For more information about granting roles, see Manage access to projects, folders, and organizations. You might also be able to get the required permissions through custom roles or other predefined roles. Delete the default VPC By default, Compute Engine creates a default network in each new project that you create. This network is configured in auto mode, which means a subnet is pre-allocated for each region and is automatically assigned a CIDR range. In this section, you replace this VPC network with a custom mode network that contains two subnets and that uses custom CIDR ranges. In the Google Cloud console, open Cloud Shell. Activate Cloud Shell Launch PowerShell: pwsh Configure the gcloud CLI to use the new project: gcloud config set project PROJECT_ID Replace PROJECT_ID with the ID of your project. Delete all firewall rules that are associated with the default VPC: $ProjectId = gcloud config get-value core/project & gcloud compute firewall-rules list ` --filter "network=default" ` --format "value(name)" | % { gcloud compute firewall-rules delete --quiet $_ --project $ProjectId } Delete the default VPC: & gcloud compute networks list --format "value(name)" | % { gcloud compute networks delete $_ --quiet } Create a custom mode VPC network You now create a custom mode VPC network in your VPC host project. In PowerShell, initialize the following variables: $VpcName = "VPC_NAME" $Region = "REGION" $SubnetRangeDomainControllers = "DC_CIDR" $SubnetRangeResources = "RESOURCES_CIDR" Replace the following: VPC_NAME: the name of the VPC. REGION: the region to deploy the Active Directory domain controllers in. DC_CIDR: the subnet range to use for the domain controllers subnet. RESOURCES_CIDR: the subnet range to use for the resource subnet. Example: $VpcName = "ad" $Region = "us-central1" $SubnetRangeDomainControllers = "10.0.0.0/28" $SubnetRangeResources = "10.0.1.0/24" Create the VPC and configure it to be used as a Shared VPC network: $ProjectId = gcloud config get-value core/project & gcloud compute networks create $VpcName --subnet-mode custom & gcloud compute shared-vpc enable $ProjectId Create the subnets and enable Private Google Access so that Windows can activate without internet access: & gcloud compute networks subnets create domain-controllers ` --network $VpcName ` --range $SubnetRangeDomainControllers ` --region $Region ` --enable-private-ip-google-access & gcloud compute networks subnets create resources ` --network $VpcName ` --range $SubnetRangeResources ` --region $Region ` --enable-private-ip-google-access Deploy subnets and firewall rules You now create firewall rules to allow Active Directory communication within the VPC.
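Optionally, before you create the firewall rules, you can confirm that both subnets exist with the expected ranges. This quick check isn't part of the original procedure; it reuses the $VpcName variable from the previous step:
& gcloud compute networks subnets list --filter "network:$VpcName" --format "table(name,region,ipCidrRange)"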
Allow RDP connections to all VM instances through Cloud IAP TCP forwarding: & gcloud compute firewall-rules create allow-rdp-ingress-from-iap ` --direction INGRESS ` --action allow ` --rules tcp:3389 ` --enable-logging ` --source-ranges 35.235.240.0/20 ` --network $VpcName ` --priority 10000 Allow DNS queries from Cloud DNS to domain controllers. & gcloud compute firewall-rules create allow-dns-ingress-from-clouddns ` --direction INGRESS ` --action=allow ` --rules udp:53,tcp:53 ` --enable-logging ` --source-ranges 35.199.192.0/19 ` --target-tags ad-domaincontroller ` --network $VpcName ` --priority 10000 This firewall rule is required in order for the private DNS forwarding zone to work. Allow Active Directory replication between domain controllers: & gcloud compute firewall-rules create allow-replication-between-addc ` --direction INGRESS ` --action allow ` --rules "icmp,tcp:53,udp:53,tcp:88,udp:88,udp:123,tcp:135,tcp:389,udp:389,tcp:445,udp:445,tcp:49152-65535" ` --enable-logging ` --source-tags ad-domaincontroller ` --target-tags ad-domaincontroller ` --network $VpcName ` --priority 10000 Allow Active Directory logons from VMs that are in the resources subnet to domain controllers: & gcloud compute firewall-rules create allow-logon-ingress-to-addc ` --direction INGRESS ` --action allow ` --rules "icmp,tcp:53,udp:53,tcp:88,udp:88,udp:123,tcp:135,tcp:389,udp:389,tcp:445,udp:445,tcp:464,udp:464,tcp:3268,udp:3268,tcp:9389,tcp:49152-65535" ` --enable-logging ` --source-ranges $SubnetRangeResources ` --target-tags ad-domaincontroller ` --network $VpcName ` --priority 10000 If you plan to configure Secure LDAP, allow Secure LDAP connections from VMs that are in the resources subnet to domain controllers: & gcloud compute firewall-rules create allow-ldaps-ingress-to-addc ` --direction INGRESS ` --action allow ` --rules tcp:636 ` --enable-logging ` --source-ranges $SubnetRangeResources ` --target-tags ad-domaincontroller ` --network $VpcName ` --priority 10000 You only need this firewall rule if you plan to configure Secure LDAP. (Optional) Create a firewall rule that logs all failed access attempts. The logs can be useful for diagnosing connectivity problems, but they might produce a significant volume of log data. & gcloud compute firewall-rules create deny-ingress-from-all ` --direction INGRESS ` --action deny ` --rules tcp:0-65535,udp:0-65535 ` --enable-logging ` --source-ranges 0.0.0.0/0 ` --network $VpcName ` --priority 65000 Deploy the Active Directory forest In this section, you create a new service project and attach it to the Shared VPC host project that you created previously. You then use the service project to deploy a new Active Directory forest with two domain controllers. Create a project You now create a new project and use it to deploy the Active Directory domain controller VMs. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Compute Engine and Secret Manager APIs. 
Enable the APIs To get the permissions that you need to deploy the Active Directory forest, ask your administrator to grant you the following IAM roles on the project: Compute Instance Admin (v1) (roles/compute.instanceAdmin.v1) Service Account Admin (roles/iam.serviceAccountAdmin) Service Account User (roles/iam.serviceAccountUser) Secret Manager Admin (roles/secretmanager.admin) IAP-secured Tunnel User (roles/iap.tunnelResourceAccessor) For more information about granting roles, see Manage access to projects, folders, and organizations. You might also be able to get the required permissions through custom roles or other predefined roles. Prepare the configuration The next step is to prepare the configuration for the Active Directory deployment. If you previously closed the PowerShell session, open Cloud Shell. Activate Cloud Shell Launch PowerShell: pwsh Configure the gcloud CLI to use the new project: gcloud config set project DC_PROJECT_ID Replace DC_PROJECT_ID with the ID of your project. Use PowerShell to create the following variables: $AdDnsDomain = "DNS_DOMAIN" $AdNetbiosDomain = "NETBIOS_DOMAIN" $VpcProjectId = "VPCHOST_PROJECT_ID" $VpcName = "VPC_NAME" $Region = "REGION" $Zones = "REGION-a", "REGION-b" Replace the following: DNS_DOMAIN: the forest root domain name of the Active Directory forest, for example cloud.example.com. NETBIOS_DOMAIN: the NetBIOS domain name for the forest root domain, for example CLOUD. VPCHOST_PROJECT_ID: the project ID of the VPC host project that you created previously. VPC_NAME: Name of the Shared VPC network that you created previously. REGION: Region to deploy the Active Directory domain controllers in. Notice that the names of the zones are based on the names of the region that you specify. You can extend the VPC and your domain to cover additional regions at any time. Example: $AdDnsDomain = "cloud.example.com" $AdNetbiosDomain = "CLOUD" $VpcProjectId = "vpc-project-123" $VpcName = "ad" $Region = "us-west1" $Zones = "us-west1-a", "us-west1-b" Create a private DNS forwarding zone You now reserve two static IP addresses for your domain controllers and create a private DNS forwarding zone that forwards all DNS queries for the Active Directory domain to these IP addresses. Attach the project to the Shared VPC network: $ProjectId = gcloud config get-value core/project & gcloud compute shared-vpc associated-projects add $ProjectId --host-project $VpcProjectId Reserve two static internal IP addresses in the domain controllers subnet: $AddressOfDc1 = gcloud compute addresses create dc-1 ` --region $Region ` --subnet "projects/$VpcProjectId/regions/$Region/subnetworks/domain-controllers" ` --format value`(address`) $AddressOfDc2 = gcloud compute addresses create dc-2 ` --region $Region ` --subnet "projects/$VpcProjectId/regions/$Region/subnetworks/domain-controllers" ` --format value`(address`) Create a Cloud DNS private forwarding zone in the VPC host project and configure the zone to forward DNS queries to the two reserved IP addresses: & gcloud dns managed-zones create $AdDnsDomain.Replace(".", "-") ` --project $VpcProjectId ` --dns-name $AdDnsDomain ` --description "Active Directory forwarding zone" ` --networks $VpcName ` --visibility private ` --forwarding-targets "$AddressOfDc1,$AddressOfDc2" Create a DSRM password You now define the Directory Service Restore Mode (DSRM) password and store it in Secret Manager. You then grant the domain controller VMs temporary access to this secret so that they can use it to deploy the Active Directory forest. 
Generate a random password and store it in a Secret Manager secret: # Generate a random password. $DsrmPassword = [Guid]::NewGuid().ToString()+"-"+[Guid]::NewGuid().ToString() $TempFile = New-TemporaryFile Set-Content $TempFile "$DsrmPassword" -NoNewLine & gcloud secrets create ad-password --data-file $TempFile Remove-Item $TempFile Create the service account for the domain controller VM instances: $DcServiceAccount = gcloud iam service-accounts create ad-domaincontroller ` --display-name "AD Domain Controller" ` --format "value(email)" Grant the service account permission to read the secret for the next hour: $Expiry = [DateTime]::UtcNow.AddHours(1).ToString("o") & gcloud secrets add-iam-policy-binding ad-password ` --member=serviceAccount:$($DcServiceAccount) ` --role=roles/secretmanager.secretAccessor ` --condition="title=Expires after 1h,expression=request.time < timestamp('$Expiry')" Deploy domain controllers You now deploy two VM instances and create a new Active Directory forest and domain. To minimize the number of manual steps, you use startup scripts. In PowerShell, run the following command to generate a startup script: ' $ErrorActionPreference = "Stop" # # Only run the script if the VM is not a domain controller already. # if ((Get-CimInstance -ClassName Win32_OperatingSystem).ProductType -eq 2) { exit } # # Read configuration from metadata. # Import-Module "${Env:ProgramFiles}\Google\Compute Engine\sysprep\gce_base.psm1" $ActiveDirectoryDnsDomain = Get-MetaData -Property "attributes/ActiveDirectoryDnsDomain" -instance_only $ActiveDirectoryNetbiosDomain = Get-MetaData -Property "attributes/ActiveDirectoryNetbiosDomain" -instance_only $ActiveDirectoryFirstDc = Get-MetaData -Property "attributes/ActiveDirectoryFirstDc" -instance_only $ProjectId = Get-MetaData -Property "project-id" -project_only $Hostname = Get-MetaData -Property "hostname" -instance_only $AccessToken = (Get-MetaData -Property "service-accounts/default/token" | ConvertFrom-Json).access_token # # Read the DSRM password from secret manager. # $Secret = (Invoke-RestMethod ` -Headers @{ "Metadata-Flavor" = "Google"; "x-goog-user-project" = $ProjectId; "Authorization" = "Bearer $AccessToken"} ` -Uri "https://secretmanager.googleapis.com/v1/projects/$ProjectId/secrets/ad-password/versions/latest:access") $DsrmPassword = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($Secret.payload.data)) $DsrmPassword = ConvertTo-SecureString -AsPlainText $DsrmPassword -force # # Promote. # Write-Host "Setting administrator password..." Set-LocalUser -Name Administrator -Password $DsrmPassword if ($ActiveDirectoryFirstDc -eq $env:COMPUTERNAME) { Write-Host "Creating a new forest $ActiveDirectoryDnsDomain ($ActiveDirectoryNetbiosDomain)..." Install-ADDSForest ` -DomainName $ActiveDirectoryDnsDomain ` -DomainNetbiosName $ActiveDirectoryNetbiosDomain ` -SafeModeAdministratorPassword $DsrmPassword ` -DomainMode Win2008R2 ` -ForestMode Win2008R2 ` -InstallDns ` -CreateDnsDelegation:$False ` -NoRebootOnCompletion:$True ` -Confirm:$false } else { do { Write-Host "Waiting for domain to become available..." Start-Sleep -s 60 & ipconfig /flushdns | Out-Null & nltest /dsgetdc:$ActiveDirectoryDnsDomain | Out-Null } while ($LASTEXITCODE -ne 0) Write-Host "Adding DC to $ActiveDirectoryDnsDomain ($ActiveDirectoryNetbiosDomain)..." 
Install-ADDSDomainController ` -DomainName $ActiveDirectoryDnsDomain ` -SafeModeAdministratorPassword $DsrmPassword ` -InstallDns ` -Credential (New-Object System.Management.Automation.PSCredential ("Administrator@$ActiveDirectoryDnsDomain", $DsrmPassword)) ` -NoRebootOnCompletion:$true ` -Confirm:$false } # # Configure DNS. # Write-Host "Configuring DNS settings..." Get-Netadapter| Disable-NetAdapterBinding -ComponentID ms_tcpip6 Set-DnsClientServerAddress ` -InterfaceIndex (Get-NetAdapter -Name Ethernet).InterfaceIndex ` -ServerAddresses 127.0.0.1 # # Enable LSA protection. # New-ItemProperty ` -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" ` -Name "RunAsPPL" ` -Value 1 ` -PropertyType DWord Write-Host "Restarting to apply all settings..." Restart-Computer ' | Out-File dc-startup.ps1 -Encoding ASCII The script does the following: Read the DSRM password from Secret Manager. Promote the VM to a domain controller. Configure DNS settings so that each domain controller uses the loopback address as a DNS server. Disable IPv6. Enable LSA protection. Create a VM instance for the first domain controller: $Subnet = "projects/$VpcProjectId/regions/$Region/subnetworks/domain-controllers" $Metadata = ` "ActiveDirectoryDnsDomain=$AdDnsDomain", "ActiveDirectoryNetbiosDomain=$AdNetbiosDomain", "ActiveDirectoryFirstDc=dc-1", "sysprep-specialize-script-ps1=Install-WindowsFeature AD-Domain-Services; Install-WindowsFeature DNS", "disable-account-manager=true" -join "," & gcloud compute instances create dc-1 ` --image-family windows-2022 ` --image-project windows-cloud ` --machine-type n2-standard-8 ` --tags ad-domaincontroller ` --metadata "$Metadata" ` --metadata-from-file windows-startup-script-ps1=dc-startup.ps1 ` --no-address ` --network-interface "no-address,private-network-ip=$AddressOfDc1,subnet=$Subnet" ` --service-account $DcServiceAccount ` --scopes cloud-platform ` --zone $Zones[0] ` --shielded-integrity-monitoring ` --shielded-secure-boot ` --shielded-vtpm ` --deletion-protection This command does the following: Create a shielded Windows Server 2022 VM. Assign the ad-domaincontroller service account to the VM so that it can access the DSRM password. Configure the guest agent to disable the account manager. For more information about configuring the guest agent, see Enabling and disabling Windows instance features. Let the VM install the Windows features AD-Domain-Services and DNS during the sysprep specialize phase. Let the VM run the startup script that you created previously. Create another VM instance for the second domain controller and place it in a different zone: & gcloud compute instances create dc-2 ` --image-family windows-2022 ` --image-project windows-cloud ` --machine-type n2-standard-8 ` --tags ad-domaincontroller ` --metadata "$Metadata" ` --metadata-from-file windows-startup-script-ps1=dc-startup.ps1 ` --no-address ` --network-interface "no-address,private-network-ip=$AddressOfDc2,subnet=$Subnet" ` --service-account $DcServiceAccount ` --scopes cloud-platform ` --zone $Zones[1] ` --shielded-integrity-monitoring ` --shielded-secure-boot ` --shielded-vtpm ` --deletion-protection Monitor the initialization process of the first domain controller by viewing its serial port output: & gcloud compute instances tail-serial-port-output dc-1 --zone $Zones[0] Wait about 10 minutes until you see the message Restarting to apply all settings..., then press Ctrl+C. 
Monitor the initialization process of the second domain controller by viewing its serial port output: & gcloud compute instances tail-serial-port-output dc-2 --zone $Zones[1] Wait about 10 minutes until you see the message Restarting to apply all settings..., then press Ctrl+C. The Active Directory forest and domain are now ready to use. Connect to a domain controller You can now customize the Active Directory forest by connecting to one of the domain controllers. In PowerShell, access the password for the Administrator user: gcloud secrets versions access latest --secret ad-password Connect to dc-1 by using RDP and log on as the Administrator user. Because the VM instance doesn't have a public IP addresses, you must connect through Identity-Aware Proxy TCP forwarding. What's next Learn more about patterns for using Active Directory in a hybrid environment. Configure Active Directory for VMs to automatically join a domain. Learn more about using Active Directory across firewalls. Send feedback \ No newline at end of file diff --git a/Deploy_an_enterprise_AI_and_ML_model.txt b/Deploy_an_enterprise_AI_and_ML_model.txt new file mode 100644 index 0000000000000000000000000000000000000000..91208329c40e2e1f1dcc9f84f7f83f25fd04d710 --- /dev/null +++ b/Deploy_an_enterprise_AI_and_ML_model.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/genai-mlops-blueprint +Date Scraped: 2025-02-23T11:46:17.390Z + +Content: +Home Docs Cloud Architecture Center Send feedback Build and deploy generative AI and machine learning models in an enterprise Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-28 UTC As generative AI and machine learning (ML) models become more common in enterprises' business activities and business processes, enterprises increasingly need guidance on model development to ensure consistency, repeatability, security, and safety. To help large enterprises build and deploy generative AI and ML models, we created the enterprise generative AI and machine learning blueprint. This blueprint provides you with a comprehensive guide to the entire AI development lifecycle, from preliminary data exploration and experimentation through model training, deployment, and monitoring. The enterprise generative AI and ML blueprint provides you with many benefits, including the following: Prescriptive guidance: Clear guidance on how you can create, configure, and deploy a generative AI and ML development environment that is based on Vertex AI. You can use Vertex AI to develop your own models. Increased efficiency: Extensive automation to help reduce the toil from deploying infrastructure and developing generative AI and ML models. Automation lets you focus on value-added tasks such as model design and experimentation. Enhanced governance and auditability: Reproducibility, traceability, and controlled deployment of models is incorporated into the design of this blueprint. This benefit lets you better manage your generative AI and ML model lifecycle and helps ensure you can re-train and evaluate models consistently, with clear audit trails. Security: The blueprint is designed to be aligned with the requirements of the National Institute of Standards and Technology (NIST) framework and the Cyber Risk Institute (CRI) framework. 
The enterprise generative AI and ML blueprint includes the following: A GitHub repository that contains a set of Terraform configurations, a Jupyter notebook, a Vertex AI Pipelines definition, a Cloud Composer directed acyclic graph (DAG), and ancillary scripts. The components in the repository do the following: The Terraform configuration sets up a Vertex AI model development platform that can support multiple model development teams. The Jupyter notebook lets you develop a model interactively. The Vertex AI Pipelines definition translates the Jupyter notebook into a reproducible pattern that can be used for production environments. The Cloud Composer DAG provides an alternative method to Vertex AI Pipelines. The ancillary scripts help deploy the Terraform code and pipelines. A guide to the architecture, design, security controls, and operational processes that you implement by using this blueprint (this document). The enterprise generative AI and ML blueprint is designed to be compatible with the enterprise foundations blueprint. The enterprise foundations blueprint provides a number of base-level services that this blueprint relies on, such as VPC networks. You can deploy the enterprise generative AI and ML blueprint without deploying the enterprise foundations blueprint if your Google Cloud environment provides the necessary functionality to support the enterprise generative AI and ML blueprint. This document is intended for cloud architects, data scientists, and data engineers who can use the blueprint to build and deploy new generative AI or ML models on Google Cloud. This document assumes that you are familiar with generative AI and ML model development and the Vertex AI machine learning platform. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. Enterprise generative AI and ML blueprint overview The enterprise generative AI and ML blueprint takes a layered approach to provide the capabilities that enable generative AI and ML model training. The blueprint is intended to be deployed and controlled through an ML operations (MLOps) workflow. The following diagram shows how the MLOps layer deployed by this blueprint relates to other layers in your environment. This diagram includes the following: The Google Cloud infrastructure provides you with security capabilities such as encryption at rest and encryption in transit, as well as basic building blocks such as compute and storage. The enterprise foundation provides you with a baseline of resources such as identity, networking, logging, monitoring, and deployment systems that enable you to adopt Google Cloud for your AI workloads. The data layer is an optional layer in the development stack that provides you with various capabilities such as data ingestion, data storage, data access control, data governance, data monitoring, and data sharing. The generative AI and ML layer (this blueprint) lets you build and deploy models. You can use this layer for preliminary data exploration and experimentation, model training, model serving, and monitoring. CI/CD provides you with the tools to automate the provisioning, configuration, management, and deployment of infrastructure, workflows, and software components. These components help you ensure consistent, reliable, and auditable deployments; minimize manual errors; and accelerate the overall development cycle.
To show how the generative AI and ML environment is used, the blueprint includes a sample ML model development. The sample model development takes you through building a model, creating operational pipelines, training the model, testing the model, and deploying the model. Architecture The enterprise generative AI and ML blueprint provides you with the ability to work directly with data. You can create models in an interactive (development) environment and promote the models into an operational (production or non-production) environment. In the interactive environment, you develop ML models using Vertex AI Workbench, which is a Jupyter Notebook service that is managed by Google. You build data extraction, data transformation, and model-tuning capabilities in the interactive environment and promote them into the operational environment. In the operational (non-production) environment, you use pipelines to build and test your models in a repeatable and controllable fashion. After you are satisfied with the performance of the model, you can deploy the model into the operational (production) environment. The following diagram shows the various components of the interactive and operational environments. This diagram includes the following: Deployment systems: Services such as Service Catalog and Cloud Build deploy Google Cloud resources into the interactive environment. Cloud Build also deploys Google Cloud resources and model-building workflows into the operational environment. Data sources: Services such as BigQuery, Cloud Storage, Spanner, and AlloyDB for PostgreSQL host your data. The blueprint provides example data in BigQuery and Cloud Storage. Interactive environment: An environment where you can interact directly with data, experiment on models, and build pipelines for use in the operational environment. Operational environment: An environment where you can build and test your models in a repeatable manner and then deploy models into production. Model services: The following services support various MLOps activities: Vertex AI Feature Store serves feature data to your model. Model Garden includes an ML model library that lets you use Google models and select open-source models. Vertex AI Model Registry manages the lifecycle of your ML models. Artifact storage: These services store the code and containers for your model development and pipelines. These services include the following: Artifact Registry stores containers that are used by pipelines in the operational environment to control the various stages of the model development. Git repository stores the code base of the various components that are used in the model development. Platform personas When you deploy the blueprint, you create four types of user groups: an MLOps engineer group, a DevOps engineer group, a data scientist group, and a data engineer group. The groups have the following responsibilities: The MLOps engineer group develops the Terraform templates used by the Service Catalog. This team provides templates used by many models. The DevOps engineer group approves the Terraform templates that the MLOps engineer group creates. The data scientist group develops models, pipelines, and the containers that are used by the pipelines. Typically, a single team is dedicated to building a single model. The data engineer group approves the use of the artifacts that the data scientist group creates.
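As an illustration, you could create these groups with Cloud Identity before you deploy the blueprint. This is only a hedged sketch: the group names and the example.com domain are placeholders, and your organization might already manage groups through a different process.
gcloud identity groups create mlops-engineers@example.com --organization="example.com" --display-name="MLOps engineers"
gcloud identity groups create devops-engineers@example.com --organization="example.com" --display-name="DevOps engineers"
gcloud identity groups create data-scientists@example.com --organization="example.com" --display-name="Data scientists"
gcloud identity groups create data-engineers@example.com --organization="example.com" --display-name="Data engineers"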
Organization structure This blueprint uses the organizational structure of the enterprise foundation blueprint as a basis for deploying AI and ML workloads. The following diagram shows the projects that are added to the foundation to enable AI and ML workloads. The following table describes the projects that are used by the generative AI and ML blueprint. Folder Project Description common prj-c-infra-pipeline Contains the deployment pipeline that's used to build out the generative AI and ML components of the blueprint. For more information, see the infrastructure pipeline in the enterprise foundation blueprint. prj-c-service-catalog Contains the infrastructure used by the Service Catalog to deploy resources in the interactive environment. development prj-d-machine-learning Contains the components for developing an AI and ML use case in an interactive mode. non-production prj-n-machine-learning Contains the components for testing and evaluating an AI and ML use case that can be deployed to production. production prj-p-machine-learning Contains the components for deploying an AI and ML use case into production. Networking The blueprint uses the Shared VPC network created in the enterprise foundation blueprint. In the interactive (development) environment, Vertex AI Workbench notebooks are deployed in service projects. On-premises users can access the projects using the private IP address space in the Shared VPC network. On-premises users can access Google Cloud APIs, such as Cloud Storage, through Private Service Connect. Each Shared VPC network (development, non-production, and production) has a distinct Private Service Connect endpoint. The operational environment (non-production and production) has two separate Shared VPC networks that on-premises resources can access through private IP addresses. The interactive and operational environments are protected using VPC Service Controls. Cloud Logging This blueprint uses the Cloud Logging capabilities that are provided by the enterprise foundation blueprint. Cloud Monitoring To monitor custom training jobs, the blueprint includes a dashboard that lets you monitor the following metrics: CPU utilization of each training node Memory utilization of each training node Network usage If a custom training job has failed, the blueprint uses Cloud Monitoring to provide you with an email alerting mechanism to notify you of the failure. For monitoring deployed models that use a Vertex AI endpoint, the blueprint comes with a dashboard that has the following metrics: Performance metrics: Predictions per second Model latency Resource usage: CPU usage Memory usage Organizational policy setup In addition to the organizational policies created by the enterprise foundation blueprint, this blueprint adds the organizational policies listed in predefined posture for secure AI, extended. Operations This section describes the environments that are included in the blueprint. Interactive environment To let you explore data and develop models while maintaining your organization's security posture, the interactive environment provides you with a controlled set of actions you can perform. You can deploy Google Cloud resources using one of the following methods: Using Service Catalog, which is preconfigured through automation with resource templates Building code artifacts and committing them to Git repositories using Vertex AI Workbench notebooks The following diagram depicts the interactive environment.
A typical interactive flow has the following steps and components associated with it: Service Catalog provides a curated list of Google Cloud resources that data scientists can deploy into the interactive environment. The data scientist deploys the Vertex AI Workbench notebook resource from the Service Catalog. Vertex AI Workbench notebooks are the main interface that data scientists use to work with Google Cloud resources that are deployed in the interactive environment. The notebooks enable data scientists to pull their code from Git and update their code as needed. Source data is stored outside of the interactive environment and managed separately from this blueprint. Access to the data is controlled by a data owner. Data scientists can request read access to source data, but data scientists can't write to the source data. Data scientists can transfer source data into the interactive environment into resources created through the Service Catalog. In the interactive environment, data scientists can read, write, and manipulate the data. However, data scientists can't transfer data out of the interactive environment or grant access to resources that are created by Service Catalog. BigQuery stores structured data and semi-structured data and Cloud Storage stores unstructured data. Feature Store provides data scientists with low-latency access to features for model training. Data scientists train models using Vertex AI custom training jobs. The blueprint also uses Vertex AI for hyperparameter tuning. Data scientists evaluate models through the use of Vertex AI Experiments and Vertex AI TensorBoard. Vertex AI Experiments lets you run multiple trainings against a model using different parameters, modeling techniques, architectures, and inputs. Vertex AI TensorBoard lets you track, visualize, and compare the various experiments that you ran and then choose the model with the best observed characteristics to validate. Data scientists validate their models with Vertex AI evaluation. To validate their models, data scientists split the source data into a training data set and a validation data set and run a Vertex AI evaluation against your model. Data scientists build containers using Cloud Build, store the containers in Artifact Registry, and use the containers in pipelines that are in the operational environment. Operational environment The operational environment uses a Git repository and pipelines. This environment includes the production environment and non-production environment of the enterprise foundation blueprint. In the non-production environment, the data scientist selects a pipeline from one of the pipelines that was developed in the interactive environment. The data scientist can run the pipeline in the non-production environment, evaluate the results, and then determine which model to promote into the production environment. The blueprint includes an example pipeline that was built using Cloud Composer and an example pipeline that was built using Vertex AI Pipelines. The diagram below shows the operational environment. A typical operational flow has the following steps: A data scientist merges a development branch successfully into a deployment branch. The merge into the deployment branch triggers a Cloud Build pipeline. One of the following items occurs: If a data scientist is using Cloud Composer as the orchestrator, the Cloud Build pipeline moves a DAG into Cloud Storage. 
If the data scientist is using Vertex AI Pipelines as the orchestrator, the pipeline moves a Python file into Cloud Storage. The Cloud Build pipeline triggers the orchestrator (Cloud Composer or Vertex AI Pipelines). The orchestrator pulls its pipeline definition from Cloud Storage and begins to execute the pipeline. The pipeline pulls a container from Artifact Registry that is used by all stages of the pipeline to trigger Vertex AI services. The pipeline, using the container, triggers a data transfer from the source data project into the operational environment. Data is transformed, validated, split, and prepared for model training and validation by the pipeline. If needed, the pipeline moves data into Vertex AI Feature Store for easy access during model training. The pipeline uses Vertex AI custom model training to train the model. The pipeline uses Vertex AI evaluation to validate the model. A validated model is imported into the Model Registry by the pipeline. The imported model is then used to generate predictions through online predictions or batch predictions. After the model is deployed into the production environment, the pipeline uses Vertex AI Model Monitoring to detect if the model's performance degrades by monitoring for training-serving skew and prediction drift. Deployment The blueprint uses a series of Cloud Build pipelines to provision the blueprint infrastructure, the pipeline in the operational environment, and the containers used to create generative AI and ML models. The pipelines used and the resources provisioned are the following: Infrastructure pipeline: This pipeline is part of the enterprise foundation blueprint. This pipeline provisions the Google Cloud resources that are associated with the interactive environment and operational environment. Interactive pipeline: The interactive pipeline is part of the interactive environment. This pipeline copies Terraform templates from a Git repository to a Cloud Storage bucket that Service Catalog can read. The interactive pipeline is triggered when a pull request is made to merge with the main branch. Container pipeline: The blueprint includes a Cloud Build pipeline to build containers used in the operational pipeline. Containers that are deployed across environments are immutable container images. Immutable container images help ensure that the same image is deployed across all environments and cannot be modified while they are running. If you need to modify the application, you must rebuild and redeploy the image. Container images that are used in the blueprint are stored in Artifact Registry and referenced by the configuration files that are used in the operational pipeline. Operational pipeline: The operational pipeline is part of the operational environment. This pipeline copies DAGs for Cloud Composer or Vertex AI Pipelines, which are then used to build, test, and deploy models. Service Catalog Service Catalog enables developers and cloud administrators to make their solutions usable by internal enterprise users. The Terraform modules in Service Catalog are built and published as artifacts to the Cloud Storage bucket with the Cloud Build CI/CD pipeline. After the modules are copied to the bucket, developers can use the modules to create Terraform solutions on the Service Catalog Admin page, add the solutions to Service Catalog and share the solutions with interactive environment projects so that users can deploy the resources. 
The interactive environment uses Service Catalog to let data scientists deploy Google Cloud resources in a manner that complies with their enterprise's security posture. When developing a model that requires Google Cloud resources, such as a Cloud Storage bucket, the data scientist selects the resource from the Service Catalog, configures the resource, and deploys the resource in the interactive environment. Service Catalog contains pre-configured templates for various Google Cloud resources that the data scientist can deploy in the interactive environment. The data scientist cannot alter the resource templates, but can configure the resources through the configuration variables that the template exposes. The following diagram shows the structure of how the Service Catalog and interactive environment interrelate. Data scientists deploy resources using the Service Catalog, as described in the following steps: The MLOps engineer puts a Terraform resource template for Google Cloud into a Git repository. The commit to Git triggers a Cloud Build pipeline. Cloud Build copies the template and any associated configuration files to Cloud Storage. The MLOps engineer sets up the Service Catalog solutions and Service Catalog manually. The engineer then shares the Service Catalog with a service project in the interactive environment. The data scientist selects a resource from the Service Catalog. Service Catalog deploys the template into the interactive environment. The resource pulls any necessary configuration scripts. The data scientist interacts with the resources. Repositories The pipelines described in Deployment are triggered by changes in their corresponding repository. To help ensure that no one can make independent changes to the production environment, there is a separation of responsibilities between users who can submit code and users who can approve code changes. The following table describes the blueprint repositories and their submitters and approvers. Repository Pipeline Description Submitter Approver ml-foundation Infrastructure Contains the Terraform code for the generative AI and ML blueprint that creates the interactive and operational environments. MLOps engineer DevOps engineer service-catalog Interactive Contains the templates for the resources that the Service Catalog can deploy. MLOps engineer DevOps engineer artifact-publish Container Contains the containers that pipelines in the operational environment can use. Data scientist Data engineer machine-learning Operational Contains the source code that pipelines in the operational environment can use. Data scientist Data engineer Branching strategy The blueprint uses persistent branching to deploy code to the associated environment. The blueprint uses three branches (development, non-production, and production) that reflect the corresponding environments. Security controls The enterprise generative AI and ML blueprint uses a layered defense-in-depth security model that uses default Google Cloud capabilities, Google Cloud services, and security capabilities that are configured through the enterprise foundation blueprint. The following diagram shows the layering of the various security controls for the blueprint. The functions of the layers are the following: Interface: provides data scientists with services that allow them to interact with the blueprint in a controlled manner. Deployment: provides a series of pipelines that deploy infrastructure, build containers, and create models. 
The use of pipelines allows for auditability, traceability, and repeatability. Networking: provides data exfiltration protections around the blueprint resources at the API layer and the IP layer. Access management: controls who can access what resources and helps prevent unauthorized use of your resources. Encryption: allows you to control your encryption keys and secrets, and helps protect your data through default encryption-at-rest and encryption-in-transit. Detective: helps you to detect misconfigurations and malicious activity. Preventive: provides you with the means to control and restrict how your infrastructure is deployed. The following table describes the security controls that are associated with each layer. Layer Resource Security control Interface Vertex AI Workbench Provides a managed notebook experience that incorporates user access control, network access control, IAM access control, and disabled file downloads. These features enable a more secure user experience. Git repositories Provides user access control to protect your repositories. Service Catalog Provides data scientists with a curated list of resources that can only be deployed in approved configurations. Deployment Infrastructure pipeline Provides a secured flow to deploy the blueprint infrastructure through the use of Terraform. Interactive pipeline Provides a secured flow to transfer templates from a Git repository into a bucket within your Google Cloud organization. Container pipeline Provides a secured flow to build containers that are used by the operational pipeline. Operational pipeline Provides a controlled flow to train, test, validate, and deploy models. Artifact Registry Stores container images in a secure manner using resource access control. Network Private Service Connect Lets you communicate with Google Cloud APIs using private IP addresses so that you can avoid exposing traffic to the internet. VPC with private IP addresses The blueprint uses VPCs with private IP addresses to help remove exposure to internet-facing threats. VPC Service Controls Helps protect protected resources against data exfiltration. Firewall Helps protect the VPC network against unauthorized access. Access management Cloud Identity Provides centralized user management, reducing the unauthorized access risk. IAM Provides fine-grained control of who can do what to which resources, thereby enabling least privilege in access management. Encryption Cloud KMS Lets you control the encryption keys that are used within your Google Cloud organization. Secret Manager Provides a secret store for your models that is controlled by IAM. Encryption-at-rest By default, Google Cloud encrypts data at rest. Encryption-in-transit By default, Google Cloud encrypts data in transit. Detective Security Command Center Provides threat detectors that help protect your Google Cloud organization. Continuous architecture Continually checks your Google Cloud organization against a series of Open Policy Agent (OPA) policies that you have defined. IAM Recommender Analyzes user permissions and provides suggestions about reducing permissions to help enforce the principle of least privilege. Firewall Insights Analyzes firewall rules, identifies overly permissive firewall rules, and suggests more restrictive firewall rules to help strengthen your overall security posture. Cloud Logging Provides visibility into system activity and helps enable the detection of anomalies and malicious activity.
Cloud Monitoring Tracks key signals and events that can help identify suspicious activity. Preventive Organization Policy Service Lets you restrict actions within your Google Cloud organization. What's next Deploy the Terraform associated with this blueprint. Learn more about the enterprise foundations blueprint. Read the Best practices for implementing machine learning on Google Cloud. Learn more about Vertex AI. Learn more about MLOps on Google Cloud: MLOps on Vertex AI Practitioners guide to MLOps: A framework for continuous delivery and automation of machine learning (PDF) MLOps: Continuous delivery and automation pipelines in machine learning Architecture for MLOps using TensorFlow Extended, Vertex AI Pipelines, and Cloud Build Vertex AI Pipelines sample code Vertex AI notebook sample code For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Deploy_and_operate_generative_AI_applications.txt b/Deploy_and_operate_generative_AI_applications.txt new file mode 100644 index 0000000000000000000000000000000000000000..bb7d3ece73a99c34f0d8ec86748612ff59296b6d --- /dev/null +++ b/Deploy_and_operate_generative_AI_applications.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deploy-operate-generative-ai-applications +Date Scraped: 2025-02-23T11:46:14.819Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy and operate generative AI applications Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-19 UTC Generative AI has introduced a new way to build and operate AI applications that is different from predictive AI. To build a generative AI application, you must choose from a diverse range of architectures and sizes, curate data, engineer optimal prompts, tune models for specific tasks, and ground model outputs in real-world data. This document describes how you can adapt DevOps and MLOps processes to develop, deploy, and operate generative AI applications on existing foundation models. For information on deploying predictive AI, see MLOps: Continuous delivery and automation pipelines in machine learning. What are DevOps and MLOps? DevOps is a software engineering methodology that connects development and operations. DevOps promotes collaboration, automation, and continuous improvement to streamline the software development lifecycle, using practices such as continuous integration and continuous delivery (CI/CD). MLOps builds on DevOps principles to address the challenges of building and operating machine learning (ML) systems. Machine learning systems typically use predictive AI to identify patterns and make predictions. The MLOps workflow includes the following: Data validation Model training Model evaluation and iteration Model deployment and serving Model monitoring What are foundation models? Foundation models are the core component in a generative AI application. These models are large programs that use datasets to learn and make decisions without human intervention. Foundation models are trained on many types of data, including text, images, audio, and video. Foundation models include large language models (LLMs) such as Llama 3.1 and multimodal models such as Gemini.
Unlike predictive AI models, which are trained for specific tasks on focused datasets, foundation models are trained on massive and diverse datasets. This training lets you use foundation models to develop applications for many different use cases. Foundation models have emergent properties (PDF), which let them provide responses to specific inputs without explicit training. Because of these emergent properties, foundation models are challenging to create and operate and require you to adapt your DevOps and MLOps processes. Developing a foundation model requires significant data resources, specialized hardware, significant investment, and specialized expertise. Therefore, many businesses prefer to use existing foundation models to simplify the development and deployment of their generative AI applications. Lifecycle of a generative AI application The lifecycle for a generative AI application includes the following phases: Discovery: Developers and AI engineers identify which foundation model is most suitable for their use case. They consider each model's strengths, weaknesses, and costs to make an informed decision. Development and experimentation: Developers use prompt engineering to create and refine input prompts to get the required output. When available, few-shot learning, parameter-efficient fine-tuning (PEFT), and model chaining help guide model behavior. Model chaining refers to orchestrating calls to multiple models in a specific sequence to create a workflow. Deployment: Developers must manage many artifacts in the deployment process, including prompt templates, chain definitions, embedded models, retrieval data stores, and fine-tuned model adapters. These artifacts have their own governance requirements and require careful management throughout development and deployment. Generative AI application deployment also must account for the technical capabilities of the target infrastructure, ensuring that application hardware requirements are met. Continuous monitoring in production: Administrators improve application performance and maintain safety standards through responsible AI techniques, such as ensuring fairness, transparency, and accountability in the model's outputs. Continuous improvement: Developers constantly adjust foundation models through prompting techniques, swapping the models out for newer versions, or even combining multiple models for enhanced performance, cost efficiency, or reduced latency. Conventional continuous training still holds relevance for scenarios when recurrent fine-tuning or incorporating human feedback loops are needed. Data engineering practices have a critical role across all development stages. To create reliable outputs, you must have factual grounding (which ensures that the model's outputs are based on accurate and up-to-date information) and recent data from internal and enterprise systems. Tuning data helps adapt models to specific tasks and styles, and rectifies persistent errors. Find the foundation model for your use case Because building foundation models is resource-intensive, most businesses prefer to use an existing foundation model that is optimal for their use case. Finding the right foundation model is difficult because there are many foundation models. Each model has different architectures, sizes, training datasets, and licenses. In addition, each use case presents unique requirements, demanding that you analyze available models across multiple dimensions. 
Consider the following factors when you assess models: Quality: Run test prompts to gauge output quality. Latency and throughput: Determine the correct latency and throughput that your use case requires, as these factors directly impact user experience. For example, a chatbot requires lower latency than batch-processed summarization tasks. Development and maintenance time: Consider the time investment for initial development and ongoing maintenance. Managed models often require less effort than openly available models that you deploy yourself. Usage cost: Consider the infrastructure and consumption costs that are associated with the model. Compliance: Assess the model's ability to adhere to relevant regulations and licensing terms. Develop and experiment When building generative AI applications, development and experimentation are iterative and orchestrated. Each experimental iteration involves refining data, adapting the foundation model, and evaluating results. Evaluation provides feedback that guides subsequent iterations in a continuous feedback loop. If performance doesn't match expectations, you can gather more data, augment the data, or further curate the data. In addition, you might need to optimize prompts, apply fine-tuning techniques, or change to another foundation model. This iterative refinement cycle, driven by evaluation insights, is just as important for optimizing generative AI applications as it is for machine learning and predictive AI. The foundation model paradigm Foundation models differ from predictive models because they are multi-purpose models. Instead of being trained for a single purpose on data specific to that task, foundation models are trained on broad datasets, which lets you apply a foundation model to many different use cases. Foundation models are also highly sensitive to changes in their input. The output of the model and the task that it performs are determined by the input to the model. A foundation model can translate text, generate videos, or classify data simply by changing the input. Even seemingly insignificant changes to the input can affect the model's ability to correctly perform that task. These properties of foundation models require different development and operational practices. Although models in the predictive AI context are self-sufficient and task-specific, foundation models are multi-purpose and need an additional element beyond the user input. Generative AI models require a prompt, and more specifically, a prompt template. A prompt template is a set of instructions and examples along with placeholders to accommodate user input. The application can combine the prompt template and the dynamic data (such as the user input) to create a complete prompt, which is the text that is passed as input to the foundation model. The prompted model component The presence of the prompt is a distinguishing feature of generative AI applications. Neither the model nor the prompt alone is sufficient for the generation of content; generative AI needs both. The combination of the model and the prompt is known as the prompted model component. The prompted model component is the smallest independent component that is sufficient to create a generative AI application. The prompt doesn't need to be complicated. For example, it can be a simple instruction, such as "translate the following sentence from English to French", followed by the sentence to be translated. However, without that preliminary instruction, a foundation model won't perform the required translation task.
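For example, a prompt template for that translation task might look like the following sketch, where the placeholder is filled in with the user's input at run time (the placeholder syntax is illustrative and isn't tied to any particular framework):
Translate the following sentence from English to French. Return only the French translation.
Sentence: {user_input}
At run time, the application replaces {user_input} with the sentence that the user submits, and the resulting complete prompt is the text that is passed to the foundation model.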
So a prompt, even just a basic instruction, is necessary along with the input to get the foundation model to do the task required by the application. The prompted model component creates an important distinction for MLOps practices when developing generative AI applications. In the development of a generative AI application, experimentation and iteration must be done in the context of a prompted model component. The generative AI experimentation cycle typically begins with testing variations of the prompt — changing the wording of the instructions, providing additional context, or including relevant examples — and evaluating the impact of those changes. This practice is commonly referred to as prompt engineering. Prompt engineering involves the following iterative steps: Prompting: Craft and refine prompts to elicit desired behaviors from a foundation model for a specific use case. Evaluation: Assess the model's outputs, ideally programmatically, to gauge its understanding and success in fulfilling the prompt's instructions. To track evaluation results, you can optionally register the results of an experiment. Because the prompt itself is a core element of the prompt engineering process, it becomes the most important artifact within the artifacts that are part of the experiment. However, to experiment with a generative AI application, you must identify the artifact types. In predictive AI, data, pipelines, and code are different. But with the prompt paradigm in generative AI, prompts can include context, instructions, examples, guardrails, and actual internal or external data pulled from somewhere else. To determine the artifact type, you must recognize that a prompt has different components and requires different management strategies. Consider the following: Prompt as data: Some parts of the prompt act just like data. Elements like few-shot examples, knowledge bases, and user queries are essentially data points. These components require data-centric MLOps practices such as data validation, drift detection, and lifecycle management. Prompt as code: Other components such as context, prompt templates, and guardrails are similar to code. These components define the structure and rules of the prompt itself and require more code-centric practices such as approval processes, code versioning, and testing. As a result, when you apply MLOps practices to generative AI, you must have processes that give developers an easy way to store, retrieve, track, and modify prompts. These processes allow for fast iteration and principled experimentation. Often one version of a prompt can work well with a specific version of the model and not as well with a different version. When you track the results of an experiment, you must record the prompt, the components' versions, the model version, metrics, and output data. Model chaining and augmentation Generative AI models, particularly large language models (LLMs), face inherent challenges in maintaining recency and avoiding hallucinations. Encoding new information into LLMs requires expensive and data-intensive pre-training before they can be deployed. Depending on the use case, using only one prompted model to perform a particular generation might not be sufficient. To solve this issue, you can connect several prompted models together, along with calls to external APIs and logic expressed as code. A sequence of prompted model components connected together in this way is commonly known as a chain. 
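As a concrete illustration of the chain concept described above, the following sketch connects two prompted model components with a small piece of code logic. The call_model function is a stand-in (an assumption) for whatever foundation model client you use; the point is that the chain as a whole, not any single component, produces the output that you evaluate end to end.

def call_model(prompt: str) -> str:
    """Stand-in for a foundation model call (assumption); a real chain would call a model endpoint."""
    return f"[model output for: {prompt.splitlines()[0][:60]}]"

def summarize(document: str) -> str:
    return call_model(f"Summarize the following document in two sentences:\n{document}")

def translate(text: str, language: str = "French") -> str:
    return call_model(f"Translate the following text to {language}:\n{text}")

def summarize_and_translate_chain(document: str) -> str:
    """A two-step chain: prompted summarization, code logic, then prompted translation."""
    summary = summarize(document)
    if not summary:  # code logic between the prompted model components
        return "No summary available."
    return translate(summary)

print(summarize_and_translate_chain("Quarterly report: revenue grew 12% while costs stayed flat."))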
The following diagram shows the components of a chain and the relative development process. Mitigation for recency and hallucination Two common chain-based patterns that can mitigate recency and hallucinations are retrieval-augmented generation (RAG) (PDF) and agents. RAG augments pre-trained models with knowledge retrieved from databases, which bypasses the need for pre-training. RAG enables grounding and reduces hallucinations by incorporating up-to-date factual information directly into the generation process. Agents, popularized by the ReAct prompting technique (PDF), use LLMs as mediators that interact with various tools, including RAG systems, internal or external APIs, custom extensions, or even other agents. Agents enable complex queries and real-time actions by dynamically selecting and using relevant information sources. The LLM, acting as an agent, interprets the user's query, decides which tool to use, and formulates the response based on the retrieved information. You can use RAG and agents to create multi-agent systems that are connected to large information networks, enabling sophisticated query handling and real-time decision making. The orchestration of different models, logic, and APIs is not new to generative AI applications. For example, recommendation engines combine collaborative filtering models, content-based models, and business rules to generate personalized product recommendations for users. Similarly, in fraud detection, machine learning models are integrated with rule-based systems and external data sources to identify suspicious activities. What makes these chains of generative AI components different is that you can't characterize the distribution of component inputs beforehand, which makes the individual components much harder to evaluate and maintain in isolation. Orchestration causes a paradigm shift in how you develop AI applications for generative AI. In predictive AI, you can iterate on the separate models and components in isolation and then chain them in the AI application. In generative AI, you develop a chain during integration, perform experimentation on the chain end-to-end, and iterate chaining strategies, prompts, foundation models, and other APIs in a coordinated manner to achieve a specific goal. You often don't need feature engineering, data collection, or further model training cycles; just changes to the wording of the prompt template. The shift towards MLOps for generative AI, in contrast to MLOps for predictive AI, results in the following differences: Evaluation: Because of the tight coupling of chains, chains require end-to-end evaluation, not just for each component, to gauge their overall performance and the quality of their output. In terms of evaluation techniques and metrics, evaluating chains is similar to evaluating prompted models. Versioning: You must manage a chain as a complete artifact in its entirety. You must track the chain configuration with its own revision history for analysis, for reproducibility, and to understand the effects of changes on output. Your logs must include the inputs, outputs, intermediate states of the chain, and any chain configurations that were used during each execution. Continuous monitoring: To detect performance degradation, data drift, or unexpected behavior in the chain, you must configure proactive monitoring systems. Continuous monitoring helps to ensure early identification of potential issues to maintain the quality of the generated output. 
Introspection: You must inspect the internal data flows of a chain (that is, the inputs and outputs from each component) as well as the inputs and outputs of the entire chain. By providing visibility into the data that flows through the chain and the resulting content, developers can pinpoint the sources of errors, biases, or undesirable behavior. The following diagram shows how chains, prompted model components, and model tuning work together in a generative AI application to reduce recency and hallucinations. Data is curated, models are tuned, and chains are added to further refine responses. After the results are evaluated, developers can log the experiment and continue to iterate. Fine-tuning When you are developing a generative AI use case that involves foundation models, it can be difficult, especially for complex tasks, to rely on only prompt engineering and chaining to solve the use case. To improve task performance, developers often need to fine-tune the model directly. Fine-tuning lets you actively change all the layers or a subset of layers (parameter efficient fine-tuning) of the model to optimize its ability to perform a certain task. The most common ways of tuning a model are the following: Supervised fine-tuning: You train the model in a supervised manner, teaching it to predict the right output sequence for a given input. Reinforcement learning from human feedback (RLHF): You train a reward model to predict what humans would prefer as a response. Then, you use this reward model to nudge the LLM in the right direction during the tuning process. This process is similar to having a panel of human judges guide the model's learning. The following diagram shows how tuning helps refine the model during the experimentation cycle. In MLOps, fine-tuning shares the following capabilities with model training: The ability to track the artifacts that are part of the tuning job. For example, artifacts include the input data or the parameters being used to tune the model. The ability to measure the impact of the tuning. This capability lets you evaluate the tuned model for the specific tasks that it was trained on and to compare results with previously tuned models or frozen models for the same task. Continuous training and tuning In MLOps, continuous training is the practice of repeatedly retraining machine learning models in a production environment. Continuous training helps to ensure that the model remains up-to-date and performs well as real-world data patterns change over time. For generative AI models, continuous tuning of the models is often more practical than a retraining process because of the high data and computational costs involved. The approach to continuous tuning depends on your specific use case and goals. For relatively static tasks like text summarization, the continuous tuning requirements might be lower. But for dynamic applications like chatbots that need constant human alignment, more frequent tuning using techniques like RLHF that are based on human feedback is necessary. To determine the right continuous tuning strategy, you must evaluate the nature of your use case and how the input data evolves over time. Cost is also a major consideration, as compute infrastructure greatly affects the speed and expense of tuning. Graphics processing units (GPUs) and tensor processing units (TPUs) are hardware that is required for fine-tuning. 
GPUs, known for their parallel processing power, are highly effective in handling the computationally intensive workloads and are often associated with training and running complex machine learning models. TPUs, on the other hand, are specifically designed by Google for accelerating machine learning tasks. TPUs excel in handling large-matrix operations that are common in deep learning neural networks. Data practices Previously, ML model behavior was dictated solely by its training data. While this still holds true for foundation models, the model behavior for generative AI applications that are built on top of foundation models is determined by how you adapt the model with different types of input data. Foundation models are trained on data such as the following: Pretraining datasets (for example, C4, The Pile, or proprietary data) Instruction tuning datasets Safety tuning datasets Human preference data Generative AI applications are adapted on data such as the following: Prompts Augmented or grounded data (for example, websites, documents, PDFs, databases, or APIs) Task-specific data for PEFT Task-specific evaluations Human preference data The main difference for data practices between predictive ML and generative AI is at the beginning of the lifecycle process. In predictive ML, you spend a lot of time on data engineering, and if you don't have the right data, you cannot build an application. In generative AI, you start with a foundation model, some instructions, and maybe a few example inputs (such as in-context learning). You can prototype and launch an application with very little data. The ease of prototyping, however, comes with the additional challenge of managing diverse data. Predictive AI relies on well-defined datasets. In generative AI, a single application can use various data types, from completely different data sources, all working together. Consider the following data types: Conditioning prompts: Instructions given to the foundation model to guide its output and set boundaries of what it can generate. Few-shot examples: A way to show the model what you want to achieve through input-output pairs. These examples help the model understand the specific tasks, and in many cases, these examples can boost performance. Grounding or augmentation data: The data that permits the foundation model to produce answers for a specific context and keep responses current and relevant without retraining the entire foundation model. This data can come from external APIs (like Google Search) or internal APIs and data sources. Task-specific datasets: The datasets that help fine-tune an existing foundation model for a particular task, improving its performance in that specific area. Full pre-training datasets: The massive datasets that are used to initially train foundation models. Although application developers might not have access to them or the tokenizers, the information encoded in the model itself influences the application's output and performance. This diverse range of data types adds a complexity layer in terms of data organization, tracking, and lifecycle management. For example, a RAG-based application can rewrite user queries, dynamically gather relevant examples using a curated set of examples, query a vector database, and combine the information with a prompt template. A RAG-based application requires you to manage multiple data types, including user queries, vector databases with curated few-shot examples and company information, and prompt templates. 
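To make this mix of data types concrete, the following sketch shows how a RAG-style application might combine a conditioning prompt, curated few-shot examples, and grounding data retrieved for the user query into one complete prompt. The in-memory store and retrieval function are illustrative stand-ins under stated assumptions, not a specific vector database API.

# Illustrative only: combining a conditioning prompt, few-shot examples, and grounding
# data into one complete prompt for a RAG-style request.
FEW_SHOT_EXAMPLES = [  # curated input/output pairs, versioned and managed like data
    {"question": "What is our refund window?", "answer": "30 days from delivery."},
]

def retrieve_grounding_data(query: str, top_k: int = 2) -> list:
    """Stand-in for a vector database lookup (assumption)."""
    knowledge_base = {
        "refund": "Policy doc v7: refunds are issued within 30 days of delivery.",
        "shipping": "Policy doc v7: standard shipping takes 3-5 business days.",
    }
    return [text for key, text in knowledge_base.items() if key in query.lower()][:top_k]

def assemble_prompt(user_query: str) -> str:
    conditioning = "Answer using only the provided context. If you are unsure, say so."
    examples = "\n".join(f"Q: {e['question']}\nA: {e['answer']}" for e in FEW_SHOT_EXAMPLES)
    context = "\n".join(retrieve_grounding_data(user_query))
    return f"{conditioning}\n\nExamples:\n{examples}\n\nContext:\n{context}\n\nQ: {user_query}\nA:"

print(assemble_prompt("How long do refunds take?"))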
Each data type needs careful organization and maintenance. For example, a vector database requires processing data into embeddings, optimizing chunking strategies, and ensuring only relevant information is available. A prompt template needs versioning and tracking, and user queries need rewriting. MLOps and DevOps best practices can help with these tasks. In predictive AI, you create data pipelines for extraction, transformation, and loading. In generative AI, you build pipelines to manage, evolve, adapt, and integrate different data types in a versionable, trackable, and reproducible way. Fine-tuning foundation models can boost generative AI application performance, but the models need data. You can get this data by launching your application and gathering real-world data, generating synthetic data, or a mix of both. Using large models to generate synthetic data is becoming popular because this method speeds up the deployment process, but it's still important to have humans check the results for quality assurance. The following are examples of how you can use large models for data engineering purposes: Synthetic data generation: This process involves creating artificial data that closely resembles real-world data in terms of its characteristics and statistical properties. Large and capable models often complete this task. Synthetic data serves as additional training data for generative AI, enabling it to learn patterns and relationships even when labeled real-world data is scarce. Synthetic data correction: This technique focuses on identifying and correcting errors and inconsistencies within existing labeled datasets. By using the power of larger models, generative AI can flag potential labeling mistakes and propose corrections to improve the quality and reliability of the training data. Synthetic data augmentation: This approach goes beyond generating new data. Synthetic data augmentation involves intelligently manipulating existing data to create diverse variations while preserving essential features and relationships. Generative AI can encounter a broader range of scenarios than predictive AI during training, which leads to improved generalization and the ability to generate nuanced and relevant outputs. Unlike predictive AI, it is difficult to evaluate generative AI. For example, you might not know the training data distribution of the foundation models. You must build a custom evaluation dataset that reflects all your use cases, including the essential, average, and edge cases. Similar to fine-tuning data, you can use powerful LLMs to generate, curate, and augment data for building robust evaluation datasets. Evaluation The evaluation process is a core activity of the development of generative AI applications. Evaluation might have different degrees of automation: from entirely driven by humans to entirely automated by a process. When you're prototyping a project, evaluation is often a manual process. Developers review the model's outputs, getting a qualitative sense of how it's performing. But as the project matures and the number of test cases increases, manual evaluation becomes a bottleneck. Automating evaluation has two big benefits: it lets you move faster and makes evaluations more reliable. It also takes human subjectivity out of the equation, which helps ensure that the results are reproducible. But automating evaluation for generative AI applications comes with its own set of challenges. 
For example, consider the following: Both the inputs (prompts) and outputs can be incredibly complex. A single prompt might include multiple instructions and constraints that the model must manage. The outputs themselves are often high-dimensional such as a generated image or a block of text. Capturing the quality of these outputs in a simple metric is difficult. Some established metrics, like BLEU for translations and ROUGE for summaries, aren't always sufficient. Therefore, you can use custom evaluation methods or another foundation model to evaluate your system. For example, you could prompt a large language model (such as AutoSxS) to score the quality of generated texts across various dimensions. Many evaluation metrics for generative AI are subjective. What makes one output better than another can be a matter of opinion. You must make sure that your automated evaluation aligns with human judgment because you want your metrics to be a reliable proxy of what people would think. To ensure comparability between experiments, you must determine your evaluation approach and metrics early in the development process. Lack of ground truth data, especially in the early stages of a project. One workaround is to generate synthetic data to serve as a temporary ground truth that you can refine over time with human feedback. Comprehensive evaluation is essential for safeguarding generative AI applications against adversarial attacks. Malicious actors can craft prompts to try to extract sensitive information or manipulate the model's outputs. Evaluation sets need to specifically address these attack vectors, through techniques like prompt fuzzing (feeding the model random variations on prompts) and testing for information leakage. To evaluate generative AI applications, implement the following: Automate the evaluation process to help ensure speed, scalability, and reproducibility. You can consider automation as a proxy for human judgment. Customize the evaluation process as required for your use cases. To ensure comparability, stabilize the evaluation approach, metrics, and ground truth data as early as possible in the development phase. Generate synthetic ground truth data to accommodate for the lack of real ground truth data. Include test cases of adversarial prompting as part of the evaluation set to test the reliability of the system itself against these attacks. Deploy Production-level generative AI applications are complex systems with many interacting components. To deploy a generative AI application to production, you must manage and coordinate these components with the previous stages of generative AI application development. For example, a single application might use several LLMs alongside a database, all fed by a dynamic data pipeline. Each of these components can require its own deployment process. Deploying generative AI applications is similar to deploying other complex software systems because you must deploy system components such as databases and Python applications. We recommend that you use standard software engineering practices such as version control and CI/CD. Version control Generative AI experimentation is an iterative process that involves repeated cycles of development, evaluation, and modification. To ensure a structured and manageable approach, you must implement strict versioning for all modifiable components. These components include the following: Prompt templates: Unless you use specific prompt management solutions, use version control tools to track versions. 
Chain definitions: Use version control tools to track versions of the code that defines the chain (including API integrations, database calls, and functions). External datasets: In RAG systems, external datasets play an important role. Use existing data analytics solutions such as BigQuery, AlloyDB for PostgreSQL, and Vertex AI Feature Store to track these changes and versions of these datasets. Adapter models: Techniques like LoRA tuning for adapter models are constantly evolving. Use established data storage solutions (for example, Cloud Storage) to manage and version these assets effectively. Continuous integration In a continuous integration framework, every code change goes through automatic testing before merging to catch issues early. Unit and integration testing are important for quality and reliability. Unit tests focus on individual code pieces, while integration testing verifies that different components work together. Implementing a continuous integration system helps to do the following: Ensure reliable, high-quality outputs: Rigorous testing increases confidence in the system's performance and consistency. Catch bugs early: Identifying issues through testing prevents them from causing bigger problems downstream. Catching bugs early makes the system more robust and resilient to edge cases and unexpected inputs. Lower maintenance costs: Well-documented test cases simplify troubleshooting and enable smoother modifications in the future, reducing overall maintenance efforts. These benefits are applicable to generative AI applications. Apply continuous integration to all elements of the system, including the prompt templates, chain, chaining logic, any embedded models, and retrieval systems. However, applying continuous integration to generative AI comes with the following challenges: Difficulty generating comprehensive test cases: The complex and open-ended nature of generative AI outputs makes it hard to define and create an exhaustive set of test cases that cover all possibilities. Reproducibility issues: Achieving deterministic, reproducible results is tricky because generative models often have intrinsic randomness and variability in their outputs, even for identical inputs. This randomness makes it harder to consistently test for expected behaviors. These challenges are closely related to the broader question of how to evaluate generative AI applications. You can apply many of the same evaluation techniques to the development of CI systems for generative AI. Continuous delivery After the code is merged, a continuous delivery process begins to move the built and tested code through environments that closely resemble production for further testing before the final deployment. As described in Develop and experiment, chain elements become one of the main components to deploy because they fundamentally constitute the generative AI application. The delivery process for the generative AI application that contains the chain might vary depending on the latency requirements and whether the use case is batch or online. Batch use cases require that you deploy a batch process that is executed on a schedule in production. The delivery process focuses on testing the entire pipeline in integration in an environment that is similar to production before deployment. As part of the testing process, developers can assert specific requirements around the throughput of the batch process itself and check that all components of the application are functioning correctly. 
(For example, developers can check permissions, infrastructure, and code dependencies.) Online use cases require that you deploy an API, which is the application that contains the chain and is capable of responding to users at low latency. Your delivery process involves testing the API in integration in an environment that is similar to production. These tests verify that all components of the application are functioning correctly. You can verify non-functional requirements (for example, scalability, reliability, and performance) through a series of tests, including load tests. Deployment checklist The following list describes the steps to take when you deploy a generative AI application using a managed service such as Vertex AI: Configure version control: Implement version control practices for model deployments. Version control lets you roll back to previous versions if necessary and track changes made to the model or deployment configuration. Optimize the model: Perform model optimization tasks (distillation, quantization, and pruning) before packaging or deploying the model. Containerize the model: Package the trained model into a container. Define the target hardware requirements: Ensure the target deployment environment meets the requirements for optimal performance of the model, such as GPUs, TPUs, and other specialized hardware accelerators. Define the model endpoint: Specify the model container, input format, output format, and any additional configuration parameters. Allocate resources: Allocate the appropriate compute resources for the endpoint based on the expected traffic and performance requirements. Configure access control: Set up access control mechanisms to restrict access to the endpoint based on authentication and authorization policies. Access control helps ensure that only authorized users or services can interact with the deployed model. Create model endpoint: Create an endpoint to deploy the model as a REST API service. The endpoint lets clients send requests to the endpoint and receive responses from the model. Configure monitoring and logging: Set up monitoring and logging systems to track the endpoint's performance, resource utilization, and error logs. Deploy custom integrations: Integrate the model into custom applications or services using the model's SDK or APIs. Deploy real-time applications: Create a streaming pipeline that processes data and generates responses in real time. Log and monitor Monitoring generative AI applications and their components requires techniques that you can add to the monitoring techniques that you use for conventional MLOps. You must log and monitor your application end-to-end, which includes logging and monitoring the overall input and output of your application and every component. Inputs to the application trigger multiple components to produce the outputs. If the output to a given input is factually inaccurate, you must determine which of the components didn't perform well. You require lineage in your logging for all components that were executed. You must also map the inputs and components with any additional artifacts and parameters that they depend on so that you can analyze the inputs and outputs. When applying monitoring, prioritize monitoring at the application level. If application-level monitoring proves that the application is performing well, it implies that all components are also performing well. 
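One lightweight way to capture the lineage described above is to log every component execution in the chain under a shared request ID, so that a factually inaccurate output can be traced back to the component (and the parameters) that produced it. The following sketch keeps the records in a local list for illustration; a real deployment would typically send them to Cloud Logging or a tracing backend.

import time
import uuid

def log_component_event(trace, request_id, component, inputs, outputs, params=None):
    """Record one component execution so that the full chain can be reconstructed later."""
    trace.append({
        "request_id": request_id,
        "component": component,
        "inputs": inputs,
        "outputs": outputs,
        "params": params or {},
        "timestamp": time.time(),
    })

trace = []
request_id = str(uuid.uuid4())

# Hypothetical two-step chain: retrieval followed by a prompted model call.
retrieved = ["Policy doc v7: refunds are issued within 30 days of delivery."]
log_component_event(trace, request_id, "retriever",
                    {"query": "refund window"}, retrieved, {"index_version": "2024-05-01"})

answer = "Refunds are issued within 30 days of delivery."
log_component_event(trace, request_id, "prompted_model",
                    {"context": retrieved}, answer,
                    {"prompt_template_version": "support-v2", "model_version": "002"})

# With a shared request_id, an application-level output can be joined back to every step.
for event in trace:
    print(event["component"], "->", event["outputs"])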
Afterwards, apply monitoring to the prompted model components to get more granular results and a better understanding of your application. As with conventional monitoring in MLOps, you must deploy an alerting process to notify application owners when drift, skew, or performance decay is detected. To set up alerts, you must integrate alerting and notification tools into your monitoring process. The following sections describe monitoring skew and drift and continuous evaluation tasks. In addition, monitoring in MLOps includes monitoring the metrics for overall system health, like resource utilization and latency. These efficiency metrics also apply to generative AI applications. Skew detection Skew detection in conventional ML systems refers to training-serving skew that occurs when the feature data distribution in production deviates from the feature data distribution that was observed during model training. For generative AI applications that use pretrained models in components that are chained together to produce the output, you must also measure skew. You can measure skew by comparing the distribution of the input data that you used to evaluate your application and the distribution of the inputs to your application in production. If the two distributions drift apart, you must investigate further. You can apply the same process to the output data as well. Drift detection Like skew detection, drift detection checks for statistical differences between two datasets. However, instead of comparing evaluation and serving inputs, drift detection looks for changes in the input data over time. Drift detection therefore lets you evaluate how the inputs, and consequently the behavior of your users, change over time. Given that the input to the application is typically text, you can use different methods to measure skew and drift. In general, these methods try to identify significant changes in production data, both textual (such as the size of the input) and conceptual (such as the topics in the input), when compared to the evaluation dataset. All these methods look for changes that could indicate that the application might not be prepared to successfully handle the nature of the new data that is now coming in. Some common methods include the following: Calculating embeddings and distances Counting text length and number of tokens Tracking vocabulary changes, new concepts and intents, and prompts and topics in datasets Using statistical approaches such as least-squares density difference (PDF), maximum mean discrepancy (MMD), learned kernel MMD (PDF), or context-aware MMD. Because generative AI use cases are so diverse, you might require additional custom metrics that better capture unexpected changes in your data. Continuous evaluation Continuous evaluation is another common approach to generative AI application monitoring. In a continuous evaluation system, you capture the model's production output and run an evaluation task using that output to keep track of the model's performance over time. You can collect direct user feedback, such as ratings, which provide immediate insight into the perceived quality of outputs. In parallel, comparing model-generated responses against established ground truth allows for deeper analysis of performance. You can collect ground truth through human assessment or as a result of an ensemble AI model approach that generates evaluation metrics. This process provides a view of how your evaluation metrics have changed from when you developed your model to what you have in production today.
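As a concrete example of the simpler text-based methods listed above, the following sketch compares input-length statistics between the evaluation dataset and recent production inputs and flags possible drift. The threshold is an arbitrary assumption; production systems would typically add token counts, embedding distances, or MMD-style tests.

import statistics

def length_stats(texts):
    """Mean and standard deviation of whitespace-token counts (a crude proxy for input size)."""
    lengths = [len(t.split()) for t in texts]
    return statistics.mean(lengths), statistics.pstdev(lengths)

def detect_drift(eval_inputs, production_inputs, threshold=0.5):
    """Flag drift when the mean input length shifts by more than `threshold` (relative)."""
    eval_mean, _ = length_stats(eval_inputs)
    prod_mean, _ = length_stats(production_inputs)
    relative_shift = abs(prod_mean - eval_mean) / max(eval_mean, 1e-9)
    return relative_shift > threshold, relative_shift

eval_inputs = ["Summarize this short note.", "Translate this sentence to French."]
production_inputs = ["Please summarize the attached 40-page quarterly compliance report in detail."]

drifted, shift = detect_drift(eval_inputs, production_inputs)
print(f"drift detected: {drifted} (relative shift in mean input length: {shift:.2f})")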
Govern In the context of MLOps, governance encompasses all the practices and policies that establish control, accountability, and transparency over the development, deployment, and ongoing management of machine learning models, including all the activities related to the code, data, and model lifecycles. In predictive AI applications, lineage focuses on tracking and understanding the complete journey of a machine learning model. In generative AI, lineage goes beyond the model artifact to extend to all the components in the chain. Tracking includes the data, models, model lineage, code, and the relative evaluation data and metrics. Lineage tracking can help you audit, debug, and improve your models. Along with these new practices, you can govern the data lifecycle and the generative AI component lifecycles using standard MLOps and DevOps practices. What's next Deploy a generative AI application using Vertex AI Authors: Anant Nawalgaria, Christos Aniftos, Elia Secchi, Gabriela Hernandez Larios, Mike Styer, and Onofrio Petragallo Send feedback \ No newline at end of file diff --git a/Deploy_network_monitoring_and_telemetry_capabilities_in_Google_Cloud.txt b/Deploy_network_monitoring_and_telemetry_capabilities_in_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..28004eef5d6c96fb87f3bbd4f63dbcaf9c791157 --- /dev/null +++ b/Deploy_network_monitoring_and_telemetry_capabilities_in_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deploy-network-telemetry-blueprint +Date Scraped: 2025-02-23T11:53:59.480Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy network monitoring and telemetry capabilities in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-13 UTC Network telemetry collects network traffic data from devices on your network so that the data can be analyzed. Network telemetry lets security operations teams detect network-based threats and hunt for advanced adversaries, which is essential for autonomic security operations. To obtain network telemetry, you need to capture and store network data. This blueprint describes how you can use Packet Mirroring and Zeek to capture network data in Google Cloud. This blueprint is intended for security analysts and network administrators who want to mirror network traffic, store this data, and forward it for analysis. This blueprint assumes that you have working knowledge of networking and network monitoring. This blueprint is part of a security blueprint that's made up of the following: A GitHub repository that contains a set of Terraform configurations and scripts. A guide to the architecture, design, and security controls that you implement with the blueprint (this document). Using this blueprint, you capture network packets (including network metadata) using Packet Mirroring, transform the network packets into Zeek logs, and then store them in Cloud Logging. The blueprint extracts metadata such as IP addresses, ports, protocols, and Layer 7 headers and requests. Storing network metadata as Zeek logs uses less data volume than storing raw packet data and is therefore more cost effective. This document assumes that you have already configured a foundational set of security controls, as described in the Google Cloud enterprise foundations guide. 
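After the blueprint is deployed, the Zeek logs that Fluentd forwards to Cloud Logging can also be read programmatically, for example by SOC tooling. The following sketch uses the Cloud Logging client library for Python; the project ID and the log filter are placeholder assumptions that you would adapt to the log names and labels in your deployment.

# A minimal sketch, assuming the google-cloud-logging client library is installed and
# that you have the Logs Viewer role on the project. The project ID and the filter are
# placeholders; narrow the filter to the log name and labels used by your collectors.
from google.cloud import logging

client = logging.Client(project="your-project-id")
log_filter = 'resource.type="gce_instance"'  # assumption: restrict further to your Zeek/Fluentd logs

for count, entry in enumerate(client.list_entries(filter_=log_filter, order_by=logging.DESCENDING)):
    print(entry.timestamp, entry.log_name, str(entry.payload)[:120])
    if count >= 9:  # inspect only the ten most recent matching entries
        break

For ongoing analysis, forwarding the logs to BigQuery, Google SecOps, or your SIEM as described later in this blueprint is usually more appropriate than ad hoc queries like this one.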
Supported use cases This blueprint supports the following use cases: Your security operations center (SOC) requires access to Google Cloud network log data in a centralized location so that they can investigate security incidents. This blueprint translates network packet data into logs that you can forward to your analysis and investigation tools. Analysis and investigation tools include BigQuery, Google Security Operations, Flowmon, ExtraHop, or security information and event management (SIEM). Your security teams require visibility into Google Cloud networks to perform threat hunting using tools such as Google SecOps. You can use this blueprint to create a pipeline for Google Cloud network traffic. You want to demonstrate how your organization meets compliance requirements for network detection and response. For example, your organization must demonstrate compliance with Memorandum M-21-31 from the United States Office of Management and Budget (OMB). Your network security analysts require long-term network log data. This blueprint supports both long-term monitoring and on-demand monitoring. If you also require packet capture (pcap) data, you need to use network protocol analyzer tools (for example, Wireshark or tcpdump). The use of network protocol analyzer tools is not in the scope of this blueprint. You can't deploy this blueprint with Cloud Intrusion Detection System. Both this solution and Cloud Intrusion Detection System use Packet Mirroring policies, and these policies can only be used by one service at a time. Costs This blueprint can affect your costs because you are adding computing resources and storing significant amounts of data in Cloud Logging. Consider the following when you deploy the blueprint: Each collector virtual machine (VM) in Compute Engine runs as an e2-medium instance. You can control storage costs with the following: Using Packet Mirroring filters. Not mirroring across zones to avoid inter-zonal egress charges. Storing data only as long as required by your organization. You can use the Pricing Calculator to get an estimate for your computing, logging, and storage costs. Architecture The following architectural diagram shows the infrastructure that you use this blueprint to implement: The architecture shown in the preceding image uses a combination of the following Google Cloud services and features: Two Virtual Private Cloud (VPC) networks: A Virtual Private Cloud network for the mirrored sources. A VPC network for the collector instances. These VPC networks must be in the same project. Compute Engine or Google Kubernetes Engine (GKE) instances (called the mirrored sources) in specific regions and subnets that are the source for the network packets. You identify which instances to mirror sources using one of the following methods: Network tags Compute instance names Subnet name Compute Engine instances that function as the collector instances behind an internal passthrough Network Load Balancer, in the same region as the mirrored sources. These instances run the Zeek-Fluentd Golden Image or your custom zeek-fluentd image. The VMs are e2-medium and the supported throughput is 4 Gbps. An internal passthrough Network Load Balancer that receives packets from the mirrored sources and forwards them to the collector instances for processing. The forwarding rule for the load balancer uses the --is-mirroring-collector flag. VPC firewall rules that permit the following: Egress from mirrored sources to the internal passthrough Network Load Balancer. 
Ingress from the collector instances to the mirrored instances. A Packet Mirroring policy that defines the region, subnet, mirrored instances, protocols, direction, and forwarding rule. Each region requires its own Packet Mirroring policy. VPC Network Peering to permit connectivity using internal IP addresses between highly available Compute Engine VMs across multiple regions. VPC Network Peering allows the mirrored sources to communicate with the internal passthrough Network Load Balancer. A Cloud Logging instance that collects all the packets for storage and retrieval by an analysis and investigation tool. Understand the security controls that you need This section discusses the security controls within Google Cloud that you can use to help secure the different components of the network monitoring architecture. VPC network security controls You create VPC networks around your mirrored sources and your collectors. When you create the VPC network for the collectors, you delete the system-generated default route, which means that all default internet gateway routes are turned off. Turning off default internet gateways helps reduce your network attack surface from external threat attackers. You create subnets in your VPC network for each region. Subnets let you control the flow of traffic between your workloads on Google Cloud and also from external sources. The subnets have Private Google Access enabled. Private Google Access also helps reduce your network attack surface, while permitting VMs to communicate to Google APIs and services. To permit communication between the VPC networks, you enable VPC Network Peering. VPC Network Peering uses subnet routes for internal IP address connectivity. You import and export custom routes to allow a direct connection between the mirrored sources and the collectors. You must restrict all communication to regional routes because the internal passthrough Network Load Balancer for the collectors doesn't support global routes. Firewall rules You use firewall rules to define the connections that the mirrored sources and collectors can make. You set up an ingress rule to allow for regular uptime health checks, an egress rule for all protocols on the mirrored sources, and an ingress rule for all protocols on the collectors. Collector VM security controls The collector VMs are responsible for receiving the packet data. The collector VMs are identical VMs that operate as managed instance groups (MIGs). You turn on health checks to permit automatic recreation of an unresponsive VM. In addition, you allow the collectors to autoscale based on your usage requirements. Each collector VM runs the zeek-fluentd Packer image. This image consists of Zeek, which generates the logs, and Fluentd, which forwards the logs to Cloud Logging. After you deploy the Terraform module, you can update the VM OS and Zeek packages and apply the security controls that are required for your organization. Internal load balancer security controls The internal passthrough Network Load Balancer directs network packet traffic from the mirrored sources to the collector VMs for processing. All the collector VMs must run in the same region as the internal passthrough Network Load Balancer. The forwarding rule for the internal passthrough Network Load Balancer defines that access is possible from all ports, but global access isn't allowed. In addition, the forwarding rule defines this load balancer as a mirroring collector, using the --is-mirroring-collector flag. 
You don't need to set up a load balancer for storage, as each collector VM directly uploads logs to Cloud Logging. Packet Mirroring Packet Mirroring requires you to identify the instances that you want to mirror. You can identify the instances that you want to mirror using network tags, instance names, or the subnet that the instances are located in. In addition, you can further filter traffic by using one or more of the following: Layer 4 protocols, such as TCP, UDP, or ICMP. IPv4 CIDR ranges in the IP headers, such as 10.0.0.0/8. Direction of the traffic that you want to mirror, such as ingress, egress, or both. Service accounts and access controls Service accounts are identities that Google Cloud can use to run API requests on your behalf. Service accounts ensure that user identities don't have direct access to services. To deploy the Terraform code, you must impersonate a service account that has the following roles in the project: roles/compute.admin roles/compute.networkAdmin roles/compute.packetMirroringAdmin roles/compute.packetMirroringUser roles/iam.serviceAccountTokenCreator roles/iam.serviceAccountUser roles/logging.logWriter roles/monitoring.metricWriter roles/storage.admin The collector VMs also require this service account so that they can authenticate to Google Cloud services, get the network packets, and forward them to Cloud Logging. Data retention practices You can specify how long Cloud Logging stores your network logs using retention rules for your log buckets. To determine how long to store the data, review your organization's regulatory requirements. Logging and auditing You can use Cloud Monitoring to analyze the performance of the collector VMs and set up alerts for uptime checks and performance conditions such as CPU load. You can track administrator access or changes to the data and configuration using Cloud Audit Logs. Audit logging is supported by Compute Engine, Cloud Load Balancing, and Cloud Logging. You can export monitoring information as follows: To Google SecOps for additional analysis. For more information, see Ingesting Google Cloud Logs in to Google SecOps. To a third-party SIEM, using Pub/Sub and Dataflow. For more information, see Export Google Cloud security data to your SIEM system. Bringing it all together To implement the architecture described in this document, do the following: Deploy a secure baseline in Google Cloud, as described in the Google Cloud enterprise foundations blueprint. If you choose not to deploy the enterprise foundations blueprint, ensure that your environment has a similar security baseline in place. Review the Readme for the blueprint and ensure that you meet all the prerequisites. In your testing environment, deploy one of the example network telemetry configurations to see the blueprint in action. As part of your testing process, do the following: Verify that the Packet Mirroring policies and subnets were created. Verify that you have the Logs Viewer (roles/logging.viewer) role and run a curl command to view your log data. For example: curl http://example.com/ You should see that log data is stored in Cloud Logging. Use Security Command Center to scan the newly created resources against your compliance requirements. Verify that your system is capturing and storing the appropriate network packets, and fine-tune the performance as necessary. Deploy the blueprint into your production environment. 
Connect Cloud Logging to your SIEM or Google SecOps so that your SOC and network security analysts can incorporate the new telemetry into their dashboards. What's next Work through the blueprint. Read about When to use five telemetry types in security threat monitoring. Read about Leveraging Network Telemetry in Google Cloud. Read about transforming your SOC using autonomic security operations. Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(1).txt b/Deploy_the_architecture(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..67288fb97880e8893207e3cc0910084762c61799 --- /dev/null +++ b/Deploy_the_architecture(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/exposing-service-mesh-apps-through-gke-ingress/deployment +Date Scraped: 2025-02-23T11:47:32.149Z + +Content: +Home Docs Cloud Architecture Center Send feedback From edge to mesh: Deploy service mesh applications through GKE Gateway Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-31 UTC This deployment shows how to combine Cloud Service Mesh with Cloud Load Balancing to expose applications in a service mesh to internet clients. You can expose an application to clients in many ways, depending on where the client is. This deployment shows you how to expose an application to clients by combining Cloud Load Balancing with Cloud Service Mesh to integrate load balancers with a service mesh. This deployment is intended for advanced practitioners who run Cloud Service Mesh, but it works for Istio on Google Kubernetes Engine too. Architecture The following diagram shows how you can use mesh ingress gateways to integrate load balancers with a service mesh: Note: For applications in Cloud Service Mesh, deployment of the external TCP/UDP load balancer is the default exposure type. You deploy the load balancer through the Kubernetes Service, which selects the Pods of the mesh ingress proxies. In the topology of the preceding diagram, the cloud ingress layer, which is programed through GKE Gateway, sources traffic from outside of the service mesh and directs that traffic to the mesh ingress layer. The mesh ingress layer then directs traffic to the mesh-hosted application backends. The preceding topology has the following considerations: Cloud ingress: In this reference architecture, you configure the Google Cloud load balancer through GKE Gateway to check the health of the mesh ingress proxies on their exposed health-check ports. Mesh ingress: In the mesh application, you perform health checks on the backends directly so that you can run load balancing and traffic management locally. The preceding diagram illustrates HTTPS encryption from the client to the Google Cloud load balancer, from the load balancer to the mesh ingress proxy, and from the ingress proxy to the sidecar proxy. Objectives Deploy a Google Kubernetes Engine (GKE) cluster on Google Cloud. Deploy an Istio-based Cloud Service Mesh on your GKE cluster. Configure GKE Gateway to terminate public HTTPS traffic and direct that traffic to service mesh-hosted applications. Deploy the Online Boutique application on the GKE cluster that you expose to clients on the internet. 
Cost optimization In this document, you use the following billable components of Google Cloud: Google Kubernetes Engine Compute Engine Cloud Load Balancing Certificate Manager Cloud Service Mesh Google Cloud Armor Cloud Endpoints To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell You run all of the terminal commands for this deployment from Cloud Shell. Upgrade to the latest version of the Google Cloud CLI: gcloud components update Set your default Google Cloud project: export PROJECT=PROJECT export PROJECT_NUMBER=$(gcloud projects describe ${PROJECT} --format="value(projectNumber)") gcloud config set project ${PROJECT} Replace PROJECT with the project ID that you want to use for this deployment. Create a working directory: mkdir -p ${HOME}/edge-to-mesh cd ${HOME}/edge-to-mesh export WORKDIR=`pwd` After you finish this deployment, you can delete the working directory. Create GKE clusters The features that are described in this deployment require a GKE cluster version 1.16 or later. In Cloud Shell, create a new kubeconfig file. This step ensures that you don't create a conflict with your existing (default) kubeconfig file. touch edge2mesh_kubeconfig export KUBECONFIG=${WORKDIR}/edge2mesh_kubeconfig Define environment variables for the GKE cluster: export CLUSTER_NAME=edge-to-mesh export CLUSTER_LOCATION=us-central1 Enable the Google Kubernetes Engine API: gcloud services enable container.googleapis.com Create a GKE Autopilot cluster: gcloud container --project ${PROJECT} clusters create-auto \ ${CLUSTER_NAME} --region ${CLUSTER_LOCATION} --release-channel rapid Ensure that the cluster is running: gcloud container clusters list The output is similar to the following: NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS edge-to-mesh us-central1 1.27.3-gke.1700 34.122.84.52 e2-medium 1.27.3-gke.1700 3 RUNNING Install a service mesh In this section, you configure the managed Cloud Service Mesh with fleet API. 
In Cloud Shell, enable the required APIs: gcloud services enable mesh.googleapis.com Enable Cloud Service Mesh on the fleet: gcloud container fleet mesh enable Register the cluster to the fleet: gcloud container fleet memberships register ${CLUSTER_NAME} \ --gke-cluster ${CLUSTER_LOCATION}/${CLUSTER_NAME} Apply the mesh_id label to the edge-to-mesh cluster: gcloud container clusters update ${CLUSTER_NAME} --project ${PROJECT} --region ${CLUSTER_LOCATION} --update-labels mesh_id=proj-${PROJECT_NUMBER} Enable automatic control plane management and managed data plane: gcloud container fleet mesh update \ --management automatic \ --memberships ${CLUSTER_NAME} After a few minutes, verify that the control plane status is ACTIVE: gcloud container fleet mesh describe The output is similar to the following: ... membershipSpecs: projects/892585880385/locations/us-central1/memberships/edge-to-mesh: mesh: management: MANAGEMENT_AUTOMATIC membershipStates: projects/892585880385/locations/us-central1/memberships/edge-to-mesh: servicemesh: controlPlaneManagement: details: - code: REVISION_READY details: 'Ready: asm-managed-rapid' state: ACTIVE dataPlaneManagement: details: - code: OK details: Service is running. state: ACTIVE state: code: OK description: 'Revision(s) ready for use: asm-managed-rapid.' updateTime: '2023-08-04T02:54:39.495937877Z' name: projects/e2m-doc-01/locations/global/features/servicemesh resourceState: state: ACTIVE ... Deploy GKE Gateway In the following steps, you deploy the external Application Load Balancer through the GKE Gateway controller. The GKE Gateway resource automates the provisioning of the load balancer and backend health checking. Additionally, you use Certificate Manager to provision and manage a TLS certificate, and Endpoints to automatically provision a public DNS name for the application. Install a service mesh ingress gateway As a security best practice, we recommend that you deploy the ingress gateway in a different namespace from the control plane. In Cloud Shell, create a dedicated asm-ingress namespace: kubectl create namespace asm-ingress Add a namespace label to the asm-ingress namespace: kubectl label namespace asm-ingress istio-injection=enabled The output is similar to the following: namespace/asm-ingress labeled Labeling the asm-ingress namespace with istio-injection=enabled instructs Cloud Service Mesh to automatically inject Envoy sidecar proxies when an application is deployed. Create a self-signed certificate used by the ingress gateway to terminate TLS connections between the Google Cloud load balancer (to be configured later through the GKE Gateway controller) and the ingress gateway, and store the self-signed certificate as a Kubernetes secret: openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \ -subj "/CN=frontend.endpoints.${PROJECT}.cloud.goog/O=Edge2Mesh Inc" \ -keyout frontend.endpoints.${PROJECT}.cloud.goog.key \ -out frontend.endpoints.${PROJECT}.cloud.goog.crt kubectl -n asm-ingress create secret tls edge2mesh-credential \ --key=frontend.endpoints.${PROJECT}.cloud.goog.key \ --cert=frontend.endpoints.${PROJECT}.cloud.goog.crt For more details about the requirements for the ingress gateway certificate, see the secure backend protocol considerations guide. 
Run the following commands to create the ingress gateway resource YAML: mkdir -p ${WORKDIR}/asm-ig/base cat < ${WORKDIR}/asm-ig/base/kustomization.yaml resources: - github.com/GoogleCloudPlatform/anthos-service-mesh-samples/docs/ingress-gateway-asm-manifests/base EOF mkdir ${WORKDIR}/asm-ig/variant cat < ${WORKDIR}/asm-ig/variant/role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: asm-ingressgateway namespace: asm-ingress rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "watch", "list"] EOF cat < ${WORKDIR}/asm-ig/variant/rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: asm-ingressgateway namespace: asm-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: asm-ingressgateway subjects: - kind: ServiceAccount name: asm-ingressgateway EOF cat < ${WORKDIR}/asm-ig/variant/service-proto-type.yaml apiVersion: v1 kind: Service metadata: name: asm-ingressgateway spec: ports: - name: status-port port: 15021 protocol: TCP targetPort: 15021 - name: http port: 80 targetPort: 8080 - name: https port: 443 targetPort: 8443 appProtocol: HTTP2 type: ClusterIP EOF cat < ${WORKDIR}/asm-ig/variant/gateway.yaml apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: asm-ingressgateway spec: servers: - port: number: 443 name: https protocol: HTTPS hosts: - "*" # IMPORTANT: Must use wildcard here when using SSL, as SNI isn't passed from GFE tls: mode: SIMPLE credentialName: edge2mesh-credential EOF cat < ${WORKDIR}/asm-ig/variant/kustomization.yaml namespace: asm-ingress resources: - ../base - role.yaml - rolebinding.yaml patches: - path: service-proto-type.yaml target: kind: Service - path: gateway.yaml target: kind: Gateway EOF Apply the ingress gateway CRDs: kubectl apply -k ${WORKDIR}/asm-ig/variant Ensure that all deployments are up and running: kubectl wait --for=condition=available --timeout=600s deployment --all -n asm-ingress The output is similar to the following: deployment.apps/asm-ingressgateway condition met Apply a service mesh ingress gateway health check When integrating a service mesh ingress gateway to a Google Cloud application load balancer, the application load balancer must be configured to perform health checks against the ingress gateway Pods. The HealthCheckPolicy CRD provides an API to configure that health check. In Cloud Shell, create the HealthCheckPolicy.yaml file: cat <${WORKDIR}/ingress-gateway-healthcheck.yaml apiVersion: networking.gke.io/v1 kind: HealthCheckPolicy metadata: name: ingress-gateway-healthcheck namespace: asm-ingress spec: default: checkIntervalSec: 20 timeoutSec: 5 #healthyThreshold: HEALTHY_THRESHOLD #unhealthyThreshold: UNHEALTHY_THRESHOLD logConfig: enabled: True config: type: HTTP httpHealthCheck: #portSpecification: USE_NAMED_PORT port: 15021 portName: status-port #host: HOST requestPath: /healthz/ready #response: RESPONSE #proxyHeader: PROXY_HEADER #requestPath: /healthz/ready #port: 15021 targetRef: group: "" kind: Service name: asm-ingressgateway EOF Apply the HealthCheckPolicy: kubectl apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml Define security policies Google Cloud Armor provides DDoS defense and customizable security policies that you can attach to a load balancer through Ingress resources. In the following steps, you create a security policy that uses preconfigured rules to block cross-site scripting (XSS) attacks. This rule helps block traffic that matches known attack signatures but allows all other traffic. 
Your environment might use different rules depending on your workload. In Cloud Shell, create a security policy that is called edge-fw-policy: gcloud compute security-policies create edge-fw-policy \ --description "Block XSS attacks" Create a security policy rule that uses the preconfigured XSS filters: gcloud compute security-policies rules create 1000 \ --security-policy edge-fw-policy \ --expression "evaluatePreconfiguredExpr('xss-stable')" \ --action "deny-403" \ --description "XSS attack filtering" Create the GCPBackendPolicy.yaml file to attach to the ingress gateway service: cat << EOF > ${WORKDIR}/cloud-armor-backendpolicy.yaml apiVersion: networking.gke.io/v1 kind: GCPBackendPolicy metadata: name: cloud-armor-backendpolicy namespace: asm-ingress spec: default: securityPolicy: edge-fw-policy targetRef: group: "" kind: Service name: asm-ingressgateway EOF Apply the GCPBackendPolicy.yaml file: kubectl apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml Configure IP addressing and DNS In Cloud Shell, create a global static IP address for the Google Cloud load balancer: gcloud compute addresses create e2m-gclb-ip --global This static IP address is used by the GKE Gateway resource and allows the IP address to remain the same, even if the external load balancer changes. Get the static IP address: export GCLB_IP=$(gcloud compute addresses describe e2m-gclb-ip \ --global --format "value(address)") echo ${GCLB_IP} To create a stable, human-friendly mapping to the static IP address of your application load balancer, you must have a public DNS record. You can use any DNS provider and automation that you want. This deployment uses Endpoints instead of creating a managed DNS zone. Endpoints provides a free Google-managed DNS record for a public IP address. Run the following command to create the YAML specification file named dns-spec.yaml: cat << EOF > ${WORKDIR}/dns-spec.yaml swagger: "2.0" info: description: "Cloud Endpoints DNS" title: "Cloud Endpoints DNS" version: "1.0.0" paths: {} host: "frontend.endpoints.${PROJECT}.cloud.goog" x-google-endpoints: - name: "frontend.endpoints.${PROJECT}.cloud.goog" target: "${GCLB_IP}" EOF The YAML specification defines the public DNS record in the form of frontend.endpoints.${PROJECT}.cloud.goog, where ${PROJECT} is your unique project identifier. Deploy the dns-spec.yaml file in your Google Cloud project: gcloud endpoints services deploy ${WORKDIR}/dns-spec.yaml The output is similar to the following: project [e2m-doc-01]... Operation "operations/acat.p2-892585880385-fb4a01ad-821d-4e22-bfa1-a0df6e0bf589" finished successfully. Service Configuration [2023-08-04r0] uploaded for service [frontend.endpoints.e2m-doc-01.cloud.goog] Now that the IP address and DNS are configured, you can generate a public certificate to secure the frontend. To integrate with GKE Gateway, you use Certificate Manager TLS certificates. Provision a TLS certificate In this section, you create a TLS certificate using Certificate Manager, and associate it with a certificate map through a certificate map entry. The application load balancer, configured through GKE Gateway, uses the certificate to provide secure communications between the client and Google Cloud. After it's created, the certificate map entry is referenced by the GKE Gateway resource.
In Cloud Shell, enable the Certificate Manager API: gcloud services enable certificatemanager.googleapis.com --project=${PROJECT} Create the TLS certificate: gcloud --project=${PROJECT} certificate-manager certificates create edge2mesh-cert \ --domains="frontend.endpoints.${PROJECT}.cloud.goog" Create the certificate map: gcloud --project=${PROJECT} certificate-manager maps create edge2mesh-cert-map Attach the certificate to the certificate map with a certificate map entry: gcloud --project=${PROJECT} certificate-manager maps entries create edge2mesh-cert-map-entry \ --map="edge2mesh-cert-map" \ --certificates="edge2mesh-cert" \ --hostname="frontend.endpoints.${PROJECT}.cloud.goog" Deploy the GKE Gateway and HTTPRoute resources In this section, you configure the GKE Gateway resource that provisions the Google Cloud application load balancer using the gke-l7-global-external-managed gatewayClass. Additionally, you configure HTTPRoute resources that both route requests to the application and perform HTTP-to-HTTPS redirects. In Cloud Shell, run the following command to create the Gateway manifest as gke-gateway.yaml: cat << EOF > ${WORKDIR}/gke-gateway.yaml kind: Gateway apiVersion: gateway.networking.k8s.io/v1 metadata: name: external-http namespace: asm-ingress annotations: networking.gke.io/certmap: edge2mesh-cert-map spec: gatewayClassName: gke-l7-global-external-managed # gke-l7-gxlb listeners: - name: http # list the port only so we can redirect any incoming http requests to https protocol: HTTP port: 80 - name: https protocol: HTTPS port: 443 addresses: - type: NamedAddress value: e2m-gclb-ip # reference the static IP created earlier EOF Apply the Gateway manifest to create a Gateway called external-http: kubectl apply -f ${WORKDIR}/gke-gateway.yaml Create the default HTTPRoute.yaml file: cat << EOF > ${WORKDIR}/default-httproute.yaml apiVersion: gateway.networking.k8s.io/v1 kind: HTTPRoute metadata: name: default-httproute namespace: asm-ingress spec: parentRefs: - name: external-http namespace: asm-ingress sectionName: https rules: - matches: - path: value: / backendRefs: - name: asm-ingressgateway port: 443 EOF Apply the default HTTPRoute: kubectl apply -f ${WORKDIR}/default-httproute.yaml Create an additional HTTPRoute.yaml file to perform HTTP-to-HTTPS redirects: cat << EOF > ${WORKDIR}/default-httproute-redirect.yaml kind: HTTPRoute apiVersion: gateway.networking.k8s.io/v1 metadata: name: http-to-https-redirect-httproute namespace: asm-ingress spec: parentRefs: - name: external-http namespace: asm-ingress sectionName: http rules: - filters: - type: RequestRedirect requestRedirect: scheme: https statusCode: 301 EOF Apply the redirect HTTPRoute: kubectl apply -f ${WORKDIR}/default-httproute-redirect.yaml Reconciliation takes a few minutes. Use the following command to watch the Gateway resource until it reports programmed=true: kubectl get gateway external-http -n asm-ingress -w Install the Online Boutique sample app In Cloud Shell, create a dedicated onlineboutique namespace: kubectl create namespace onlineboutique Add a label to the onlineboutique namespace: kubectl label namespace onlineboutique istio-injection=enabled Labeling the onlineboutique namespace with istio-injection=enabled instructs Cloud Service Mesh to automatically inject Envoy sidecar proxies when an application is deployed.
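If you want to verify the injection label before you deploy the app, the following optional check (not part of the original steps) lists the namespace labels: kubectl get namespace onlineboutique --show-labels
# The LABELS column should include istio-injection=enabled.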
Download the Kubernetes YAML files for the Online Boutique sample app: curl -LO \ https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml Deploy the Online Boutique app: kubectl apply -f kubernetes-manifests.yaml -n onlineboutique The output is similar to the following (including warnings about GKE Autopilot setting default resource requests and limits): Warning: autopilot-default-resources-mutator:Autopilot updated Deployment onlineboutique/emailservice: adjusted resources to meet requirements for containers [server] (see http://g.co/gke/autopilot-resources) deployment.apps/emailservice created service/emailservice created Warning: autopilot-default-resources-mutator:Autopilot updated Deployment onlineboutique/checkoutservice: adjusted resources to meet requirements for containers [server] (see http://g.co/gke/autopilot-resources) deployment.apps/checkoutservice created service/checkoutservice created Warning: autopilot-default-resources-mutator:Autopilot updated Deployment onlineboutique/recommendationservice: adjusted resources to meet requirements for containers [server] (see http://g.co/gke/autopilot-resources) deployment.apps/recommendationservice created service/recommendationservice created ... Ensure that all deployments are up and running: kubectl get pods -n onlineboutique The output is similar to the following: NAME READY STATUS RESTARTS AGE adservice-64d8dbcf59-krrj9 2/2 Running 0 2m59s cartservice-6b77b89c9b-9qptn 2/2 Running 0 2m59s checkoutservice-7668b7fc99-5bnd9 2/2 Running 0 2m58s ... Wait a few minutes for the GKE Autopilot cluster to provision the necessary compute infrastructure to support the application. Run the following command to create the VirtualService manifest as frontend-virtualservice.yaml: cat << EOF > frontend-virtualservice.yaml apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: frontend-ingress namespace: onlineboutique spec: hosts: - "frontend.endpoints.${PROJECT}.cloud.goog" gateways: - asm-ingress/asm-ingressgateway http: - route: - destination: host: frontend port: number: 80 EOF The VirtualService is created in the application namespace (onlineboutique). Typically, the application owner decides and configures how and what traffic gets routed to the frontend application, so the VirtualService is deployed by the app owner. Deploy frontend-virtualservice.yaml in your cluster: kubectl apply -f frontend-virtualservice.yaml Run the following command to get the application URL, and then open it in a browser: echo "https://frontend.endpoints.${PROJECT}.cloud.goog" Your Online Boutique frontend is displayed. Note: You might get an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. This error occurs when certificates have not yet propagated to all of the Google Front Ends (GFEs) globally. Wait a few minutes, and then try the link again. To display the details of your certificate, click lock View site information in your browser's address bar, and then click Certificate (Valid). The certificate viewer displays details for the managed certificate, including the expiration date and who issued the certificate. You now have a global HTTPS load balancer serving as a frontend to your service mesh-hosted application. Clean up After you've finished the deployment, you can clean up the resources you created on Google Cloud so you won't be billed for them in the future. You can either delete the project entirely or delete cluster resources and then delete the cluster.
Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Delete the individual resources If you want to keep the Google Cloud project you used in this deployment, delete the individual resources: In Cloud Shell, delete the HTTPRoute resources: kubectl delete -f ${WORKDIR}/default-httproute-redirect.yaml kubectl delete -f ${WORKDIR}/default-httproute.yaml Delete the GKE Gateway resource: kubectl delete -f ${WORKDIR}/gke-gateway.yaml Delete the TLS certificate resources (including the certificate map entry and its parent certificate map): gcloud --project=${PROJECT} certificate-manager maps entries delete edge2mesh-cert-map-entry --map="edge2mesh-cert-map" --quiet gcloud --project=${PROJECT} certificate-manager maps delete edge2mesh-cert-map --quiet gcloud --project=${PROJECT} certificate-manager certificates delete edge2mesh-cert --quiet Delete the Endpoints DNS entry: gcloud endpoints services delete "frontend.endpoints.${PROJECT}.cloud.goog" The output is similar to the following: Are you sure? This will set the service configuration to be deleted, along with all of the associated consumer information. Note: This does not immediately delete the service configuration or data and can be undone using the undelete command for 30 days. Only after 30 days will the service be purged from the system. When you are prompted to continue, enter Y. The output is similar to the following: Waiting for async operation operations/services.frontend.endpoints.edge2mesh.cloud.goog-5 to complete... Operation finished successfully. The following command can describe the Operation details: gcloud endpoints operations describe operations/services.frontend.endpoints.edge2mesh.cloud.goog-5 Delete the static IP address: gcloud compute addresses delete e2m-gclb-ip --global The output is similar to the following: The following global addresses will be deleted: - [e2m-gclb-ip] When you are prompted to continue, enter Y. The output is similar to the following: Deleted [https://www.googleapis.com/compute/v1/projects/edge2mesh/global/addresses/e2m-gclb-ip]. Delete the GKE cluster: gcloud container clusters delete $CLUSTER_NAME --region $CLUSTER_LOCATION The output is similar to the following: The following clusters will be deleted. - [edge-to-mesh] in [us-central1] When you are prompted to continue, enter Y. After a few minutes, the output is similar to the following: Deleting cluster edge-to-mesh...done. Deleted [https://container.googleapis.com/v1/projects/e2m-doc-01/zones/us-central1/clusters/edge-to-mesh]. What's next Learn about more features offered by GKE Ingress that you can use with your service mesh. Learn about the different types of cloud load balancing available for GKE. Learn about the features and functionality offered by Cloud Service Mesh.
See how to deploy Ingress across multiple GKE clusters for multi-regional load balancing. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(10).txt b/Deploy_the_architecture(10).txt new file mode 100644 index 0000000000000000000000000000000000000000..bbf690a38752970dc96073445bcf0697807f8dee --- /dev/null +++ b/Deploy_the_architecture(10).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/stream-logs-from-google-cloud-to-splunk/deployment +Date Scraped: 2025-02-23T11:53:22.715Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy log streaming from Google Cloud to Splunk Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-24 UTC This document describes how you deploy an export mechanism to stream logs from Google Cloud resources to Splunk. It assumes that you've already read the corresponding reference architecture for this use case. These instructions are intended for operations and security administrators who want to stream logs from Google Cloud to Splunk. You must be familiar with Splunk and the Splunk HTTP Event Collector (HEC) when using these instructions for IT operations or security use cases. Although not required, familiarity with Dataflow pipelines, Pub/Sub, Cloud Logging, Identity and Access Management, and Cloud Storage is useful for this deployment. To automate the deployment steps in this reference architecture using infrastructure as code (IaC), see the terraform-splunk-log-export GitHub repository. Architecture The following diagram shows the reference architecture and demonstrates how log data flows from Google Cloud to Splunk. As shown in the diagram, Cloud Logging collects the logs into an organization-level log sink and sends the logs to Pub/Sub. The Pub/Sub service creates a single topic and subscription for the logs and forwards the logs to the main Dataflow pipeline. The main Dataflow pipeline is a Pub/Sub to Splunk streaming pipeline that pulls logs from the Pub/Sub subscription and delivers them to Splunk. In parallel with the main Dataflow pipeline, the secondary Dataflow pipeline is a Pub/Sub to Pub/Sub streaming pipeline that replays messages if delivery fails. At the end of the process, Splunk Enterprise or Splunk Cloud Platform acts as an HEC endpoint and receives the logs for further analysis. For more details, see the Architecture section of the reference architecture. To deploy this reference architecture, you perform the following tasks: Perform setup tasks. Create an aggregated log sink in a dedicated project. Create a dead-letter topic. Set up a Splunk HEC endpoint. Configure the Dataflow pipeline capacity. Export logs to Splunk. Transform logs or events in-flight using user-defined functions (UDF) within the Splunk Dataflow pipeline. Handle delivery failures to avoid data loss from potential misconfiguration or transient network issues. Before you begin Complete the following steps to set up an environment for your Google Cloud to Splunk reference architecture: Bring up a project, enable billing, and activate the APIs. Grant IAM roles. Set up your environment. Set up secure networking. Bring up a project, enable billing, and activate the APIs In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Cloud Monitoring, Secret Manager, Compute Engine, Pub/Sub, and Dataflow APIs. Enable the APIs Grant IAM roles In the Google Cloud console, ensure that you have the following Identity and Access Management (IAM) permissions for organization and project resources. For more information, see Granting, changing, and revoking access to resources. Permissions Predefined roles Resource logging.sinks.create logging.sinks.get logging.sinks.update Logs Configuration Writer (roles/logging.configWriter) Organization compute.networks.* compute.routers.* compute.firewalls.* networkservices.* Compute Network Admin (roles/compute.networkAdmin) Compute Security Admin (roles/compute.securityAdmin) Project secretmanager.* Secret Manager Admin (roles/secretmanager.admin) Project If the predefined IAM roles don't include enough permissions for you to perform your duties, create a custom role. A custom role gives you the access that you need, while also helping you to follow the principle of least privilege. Note: To create the organization-level aggregated log sink, you must have the Logs Configuration Writer role assigned to you at the organization hierarchy level. If you don't have access to change permissions at the organization level, ask your Google Cloud organization administrator to create the organization-wide log sink for you. For more information about Cloud Logging access control, see Cloud Logging predefined roles. Set up your environment In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell Set the project for your active Cloud Shell session: gcloud config set project PROJECT_ID Replace PROJECT_ID with your project ID. Set up secure networking In this step, you set up secure networking before processing and exporting logs to Splunk Enterprise. Create a VPC network and a subnet: gcloud compute networks create NETWORK_NAME --subnet-mode=custom gcloud compute networks subnets create SUBNET_NAME \ --network=NETWORK_NAME \ --region=REGION \ --range=192.168.1.0/24 Replace the following: NETWORK_NAME: the name for your network SUBNET_NAME: the name for your subnet REGION: the region that you want to use for this network Create a firewall rule for Dataflow worker virtual machines (VMs) to communicate with one another: gcloud compute firewall-rules create allow-internal-dataflow \ --network=NETWORK_NAME \ --action=allow \ --direction=ingress \ --target-tags=dataflow \ --source-tags=dataflow \ --priority=0 \ --rules=tcp:12345-12346 This rule allows internal traffic between Dataflow VMs, which use TCP ports 12345-12346. The Dataflow service sets the dataflow tag on the worker VMs. Create a Cloud NAT gateway: gcloud compute routers create nat-router \ --network=NETWORK_NAME \ --region=REGION gcloud compute routers nats create nat-config \ --router=nat-router \ --nat-custom-subnet-ip-ranges=SUBNET_NAME \ --auto-allocate-nat-external-ips \ --region=REGION Enable Private Google Access on the subnet: gcloud compute networks subnets update SUBNET_NAME \ --enable-private-ip-google-access \ --region=REGION Create a log sink In this section, you create the organization-wide log sink and its Pub/Sub destination, along with the necessary permissions.
In Cloud Shell, create a Pub/Sub topic and associated subscription as your new log sink destination: gcloud pubsub topics create INPUT_TOPIC_NAME gcloud pubsub subscriptions create \ --topic INPUT_TOPIC_NAME INPUT_SUBSCRIPTION_NAME Replace the following: INPUT_TOPIC_NAME: the name for the Pub/Sub topic to be used as the log sink destination INPUT_SUBSCRIPTION_NAME: the name for the Pub/Sub subscription to the log sink destination Create the organization log sink: gcloud logging sinks create ORGANIZATION_SINK_NAME \ pubsub.googleapis.com/projects/PROJECT_ID/topics/INPUT_TOPIC_NAME \ --organization=ORGANIZATION_ID \ --include-children \ --log-filter='NOT logName:projects/PROJECT_ID/logs/dataflow.googleapis.com' Replace the following: ORGANIZATION_SINK_NAME: the name of your sink in the organization ORGANIZATION_ID: your organization ID The command consists of the following flags: The --organization flag specifies that this is an organization-level log sink. The --include-children flag is required and ensures that the organization-level log sink includes all logs across all subfolders and projects. The --log-filter flag specifies the logs to be routed. In this example, you exclude Dataflow operations logs specifically for the project PROJECT_ID, because the log export Dataflow pipeline generates more logs itself as it processes logs. The filter prevents the pipeline from exporting its own logs, avoiding a potentially exponential cycle. The output includes a service account in the form of o#####-####@gcp-sa-logging.iam.gserviceaccount.com. Grant the Pub/Sub Publisher IAM role to the log sink service account on the Pub/Sub topic INPUT_TOPIC_NAME. This role allows the log sink service account to publish messages on the topic. gcloud pubsub topics add-iam-policy-binding INPUT_TOPIC_NAME \ --member=serviceAccount:LOG_SINK_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com \ --role=roles/pubsub.publisher Replace LOG_SINK_SERVICE_ACCOUNT with the name of the service account for your log sink. Create a dead-letter topic To prevent potential data loss that results when a message fails to be delivered, you should create a Pub/Sub dead-letter topic and corresponding subscription. The failed message is stored in the dead-letter topic until an operator or site reliability engineer can investigate and correct the failure. For more information, see Replay failed messages section of the reference architecture. In Cloud Shell, create a Pub/Sub dead-letter topic and subscription to prevent data loss by storing any undeliverable messages: gcloud pubsub topics create DEAD_LETTER_TOPIC_NAME gcloud pubsub subscriptions create --topic DEAD_LETTER_TOPIC_NAME DEAD_LETTER_SUBSCRIPTION_NAME Replace the following: DEAD_LETTER_TOPIC_NAME: the name for the Pub/Sub topic that will be the dead-letter topic DEAD_LETTER_SUBSCRIPTION_NAME: the name for the Pub/Sub subscription for the dead-letter topic Set up a Splunk HEC endpoint In the following procedures, you set up a Splunk HEC endpoint and store the newly created HEC token as a secret in Secret Manager. When you deploy the Splunk Dataflow pipeline, you need to supply both the endpoint URL and the token. Configure the Splunk HEC If you don't already have a Splunk HEC endpoint, see the Splunk documentation to learn how to configure a Splunk HEC. Splunk HEC runs on the Splunk Cloud Platform service or on your own Splunk Enterprise instance. Keep Splunk HEC Indexer Acknowledgement option disabled, since it is not supported by Splunk Dataflow. 
In Splunk, after you create a Splunk HEC token, copy the token value. In Cloud Shell, save the Splunk HEC token value in a temporary file named splunk-hec-token-plaintext.txt. Note: For high availability and to learn how to scale with high-volume traffic, see the Splunk docs on how to scale HEC by distributing load to multiple nodes using an HTTP(S) load balancer. Store the Splunk HEC token in Secret Manager In this step, you create a secret and a single underlying secret version in which to store the Splunk HEC token value. Note: Secret Manager is supported as of Pub/Sub to Splunk Dataflow template version 2022-03-14-00_RC00. This reference architecture uses the latest version of this template. In Cloud Shell, create a secret to contain your Splunk HEC token: gcloud secrets create hec-token \ --replication-policy="automatic" For more information on the replication policies for secrets, see Choose a replication policy. Add the token as a secret version using the contents of the file splunk-hec-token-plaintext.txt: gcloud secrets versions add hec-token \ --data-file="./splunk-hec-token-plaintext.txt" Delete the splunk-hec-token-plaintext.txt file, as it is no longer needed. Configure the Dataflow pipeline capacity Note: Before proceeding, make sure you have enough CPU and IP quota in your chosen region to provision the maximum possible number of Dataflow VMs (set with the --max-workers flag). For more information about how to check and request an increase in quotas, see Allocation quotas. The following table summarizes the recommended general best practices for configuring the Dataflow pipeline capacity settings: Setting General best practice --worker-machine-type flag Set to baseline machine size n2-standard-4 for the best performance to cost ratio --max-workers flag Set to the maximum number of workers needed to handle the expected peak EPS per your calculations parallelism parameter Set to 2 x vCPUs/worker x the maximum number of workers to maximize the number of parallel Splunk HEC connections batchCount parameter Set to 10-50 events/request for logs, provided that the max buffering delay of two seconds is acceptable Remember to use your own unique values and calculations when you deploy this reference architecture in your environment. Set the values for machine type and machine count. To calculate values appropriate for your cloud environment, see Machine type and Machine count sections of the reference architecture. DATAFLOW_MACHINE_TYPE DATAFLOW_MACHINE_COUNT Set the values for Dataflow parallelism and batch count. To calculate values appropriate for your cloud environment, see the Parallelism and Batch count sections of the reference architecture. JOB_PARALLELISM JOB_BATCH_COUNT For more information on how to calculate Dataflow pipeline capacity parameters, see the Performance and cost optimization design considerations section of the reference architecture. Export logs by using the Dataflow pipeline In this section, you deploy the Dataflow pipeline with the following steps: Create a Cloud Storage bucket and Dataflow worker service account. Grant roles and access to the Dataflow worker service account. Deploy the Dataflow pipeline. View logs in Splunk. The pipeline delivers Google Cloud log messages to the Splunk HEC. 
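If you want to capture the capacity settings from the previous section as Cloud Shell variables before you create the pipeline resources, the following sketch shows one way to do it. The specific values are assumptions for illustration only; calculate your own values as described in the reference architecture, and note that the deployment commands later in this document show these names as placeholders to replace, so if you set them as shell variables, reference them with a $ prefix (for example, --worker-machine-type=${DATAFLOW_MACHINE_TYPE}):
# Example only: n2-standard-4 has 4 vCPUs per worker.
DATAFLOW_MACHINE_TYPE="n2-standard-4"
DATAFLOW_MACHINE_COUNT=2
# parallelism = 2 x vCPUs per worker x maximum number of workers
JOB_PARALLELISM=$(( 2 * 4 * DATAFLOW_MACHINE_COUNT ))
# 10-50 events per request is the recommended range for logs.
JOB_BATCH_COUNT=50
echo "parallelism=${JOB_PARALLELISM}, batchCount=${JOB_BATCH_COUNT}"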
Create a Cloud Storage bucket and Dataflow worker service account In Cloud Shell, create a new Cloud Storage bucket with a uniform bucket-level access setting: gcloud storage buckets create gs://PROJECT_ID-dataflow/ --uniform-bucket-level-access The Cloud Storage bucket that you just created is where the Dataflow job stages temporary files. In Cloud Shell, create a service account for your Dataflow workers: gcloud iam service-accounts create WORKER_SERVICE_ACCOUNT \ --description="Worker service account to run Splunk Dataflow jobs" \ --display-name="Splunk Dataflow Worker SA" Replace WORKER_SERVICE_ACCOUNT with the name that you want to use for the Dataflow worker service account. Grant roles and access to the Dataflow worker service account In this section, you grant the required roles to the Dataflow worker service account as shown in the following table. Role Path Purpose Dataflow Admin roles/dataflow.admin Enable the service account to act as a Dataflow admin. Dataflow Worker roles/dataflow.worker Enable the service account to act as a Dataflow worker. Storage Object Admin roles/storage.objectAdmin Enable the service account to access the Cloud Storage bucket that is used by Dataflow for staging files. Pub/Sub Publisher roles/pubsub.publisher Enable the service account to publish failed messages to the Pub/Sub dead-letter topic. Pub/Sub Subscriber roles/pubsub.subscriber Enable the service account to access the input subscription. Pub/Sub Viewer roles/pubsub.viewer Enable the service account to view the subscription. Secret Manager Secret Accessor roles/secretmanager.secretAccessor Enable the service account to access the secret that contains the Splunk HEC token. In Cloud Shell, grant the Dataflow worker service account the Dataflow Admin and Dataflow Worker roles that this account needs to execute Dataflow job operations and administration tasks: gcloud projects add-iam-policy-binding PROJECT_ID \ --member="serviceAccount:WORKER_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/dataflow.admin" gcloud projects add-iam-policy-binding PROJECT_ID \ --member="serviceAccount:WORKER_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/dataflow.worker" Grant the Dataflow worker service account access to view and consume messages from the Pub/Sub input subscription: gcloud pubsub subscriptions add-iam-policy-binding INPUT_SUBSCRIPTION_NAME \ --member="serviceAccount:WORKER_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/pubsub.subscriber" gcloud pubsub subscriptions add-iam-policy-binding INPUT_SUBSCRIPTION_NAME \ --member="serviceAccount:WORKER_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/pubsub.viewer" Grant the Dataflow worker service account access to publish any failed messages to the Pub/Sub unprocessed topic: gcloud pubsub topics add-iam-policy-binding DEAD_LETTER_TOPIC_NAME \ --member="serviceAccount:WORKER_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/pubsub.publisher" Grant the Dataflow worker service account access to the Splunk HEC token secret in Secret Manager: gcloud secrets add-iam-policy-binding hec-token \ --member="serviceAccount:WORKER_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/secretmanager.secretAccessor" Grant the Dataflow worker service account read and write access to the Cloud Storage bucket to be used by the Dataflow job for staging files: gcloud storage buckets add-iam-policy-binding gs://PROJECT_ID-dataflow/ \
--member="serviceAccount:WORKER_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com" --role="roles/storage.objectAdmin" Deploy the Dataflow pipeline In Cloud Shell, set the following environment variable for your Splunk HEC URL: export SPLUNK_HEC_URL=SPLUNK_HEC_URL Replace the SPLUNK_HEC_URL variable using the form protocol://host[:port], where: protocol is either http or https. host is the fully qualified domain name (FQDN) or IP address of either your Splunk HEC instance, or, if you have multiple HEC instances, the associated HTTP(S) (or DNS-based) load balancer. port is the HEC port number. It is optional, and depends on your Splunk HEC endpoint configuration. An example of a valid Splunk HEC URL input is https://splunk-hec.example.com:8088. If you are sending data to HEC on Splunk Cloud Platform, see Send data to HEC on Splunk Cloud to determine the host and port portions of your specific Splunk HEC URL. Caution: Failure to match the Splunk HEC URL format described in this step will cause Dataflow input validation errors, and the data will not be delivered to the Splunk HEC endpoint. Instead, your streaming logs will be sent to the unprocessed topic. The Splunk HEC URL must not include the HEC endpoint path, for example, /services/collector. The Pub/Sub to Splunk Dataflow template currently only supports the /services/collector endpoint for JSON-formatted events, and it automatically appends that path to your Splunk HEC URL input. To learn more about the HEC endpoint, see the Splunk documentation for the services/collector endpoint. Deploy the Dataflow pipeline using the Pub/Sub to Splunk Dataflow template: gcloud beta dataflow jobs run JOB_NAME \ --gcs-location=gs://dataflow-templates/latest/Cloud_PubSub_to_Splunk \ --staging-location=gs://PROJECT_ID-dataflow/temp/ \ --worker-machine-type=DATAFLOW_MACHINE_TYPE \ --max-workers=DATAFLOW_MACHINE_COUNT \ --region=REGION \ --network=NETWORK_NAME \ --subnetwork=regions/REGION/subnetworks/SUBNET_NAME \ --disable-public-ips \ --parameters \ inputSubscription=projects/PROJECT_ID/subscriptions/INPUT_SUBSCRIPTION_NAME,\ outputDeadletterTopic=projects/PROJECT_ID/topics/DEAD_LETTER_TOPIC_NAME,\ url=SPLUNK_HEC_URL,\ tokenSource=SECRET_MANAGER,\ tokenSecretId=projects/PROJECT_ID/secrets/hec-token/versions/1,\ batchCount=JOB_BATCH_COUNT,\ parallelism=JOB_PARALLELISM,\ javascriptTextTransformGcsPath=gs://splk-public/js/dataflow_udf_messages_replay.js,\ javascriptTextTransformFunctionName=process Replace JOB_NAME with a job name that follows the format pubsub-to-splunk-$(date +"%Y%m%d-%H%M%S"). The optional parameters javascriptTextTransformGcsPath and javascriptTextTransformFunctionName specify a publicly available sample UDF: gs://splk-public/js/dataflow_udf_messages_replay.js. The sample UDF includes code examples for event transformation and decoding logic that you use to replay failed deliveries. For more information about UDF, see Transform events in-flight with UDF. After the pipeline job starts, find the new job ID in the output, copy the job ID, and save it. You enter this job ID in a later step. View logs in Splunk It takes a few minutes for the Dataflow pipeline workers to be provisioned and ready to deliver logs to Splunk HEC. You can confirm that the logs are properly received and indexed in the Splunk Enterprise or Splunk Cloud Platform search interface. To see the number of logs per type of monitored resource: In Splunk, open Splunk Search & Reporting.
Run the search index=[MY_INDEX] | stats count by resource.type where the MY_INDEX index is configured for your Splunk HEC token. If you don't see any events, see Handle delivery failures. Transform events in-flight with UDF The Pub/Sub to Splunk Dataflow template supports a JavaScript UDF for custom event transformation, such as adding new fields or setting Splunk HEC metadata on an event basis. The pipeline you deployed uses this sample UDF. In this section, you first edit the sample UDF function to add a new event field. This new field specifies the value of the originating Pub/Sub subscription as additional contextual information. You then update the Dataflow pipeline with the modified UDF. Modify the sample UDF In Cloud Shell, download the JavaScript file that contains the sample UDF function: wget https://storage.googleapis.com/splk-public/js/dataflow_udf_messages_replay.js In the text editor of your choice, open the JavaScript file, locate the field event.inputSubscription, uncomment that line, and replace splunk-dataflow-pipeline with INPUT_SUBSCRIPTION_NAME: event.inputSubscription = "INPUT_SUBSCRIPTION_NAME"; Save the file. Upload the file to the Cloud Storage bucket: gcloud storage cp ./dataflow_udf_messages_replay.js gs://PROJECT_ID-dataflow/js/ Update the Dataflow pipeline with the new UDF In Cloud Shell, stop the pipeline by using the Drain option to ensure that the logs that were already pulled from Pub/Sub are not lost: gcloud dataflow jobs drain JOB_ID --region=REGION Run the Dataflow pipeline job with the updated UDF: gcloud beta dataflow jobs run JOB_NAME \ --gcs-location=gs://dataflow-templates/latest/Cloud_PubSub_to_Splunk \ --worker-machine-type=DATAFLOW_MACHINE_TYPE \ --max-workers=DATAFLOW_MACHINE_COUNT \ --region=REGION \ --network=NETWORK_NAME \ --subnetwork=regions/REGION/subnetworks/SUBNET_NAME \ --disable-public-ips \ --parameters \ inputSubscription=projects/PROJECT_ID/subscriptions/INPUT_SUBSCRIPTION_NAME,\ outputDeadletterTopic=projects/PROJECT_ID/topics/DEAD_LETTER_TOPIC_NAME,\ url=SPLUNK_HEC_URL,\ tokenSource=SECRET_MANAGER,\ tokenSecretId=projects/PROJECT_ID/secrets/hec-token/versions/1,\ batchCount=JOB_BATCH_COUNT,\ parallelism=JOB_PARALLELISM,\ javascriptTextTransformGcsPath=gs://PROJECT_ID-dataflow/js/dataflow_udf_messages_replay.js,\ javascriptTextTransformFunctionName=process Replace JOB_NAME with a job name that follows the format pubsub-to-splunk-$(date +"%Y%m%d-%H%M%S"). Handle delivery failures Delivery failures can happen due to errors in processing events or connecting to the Splunk HEC. In this section, you introduce a delivery failure to demonstrate the error handling workflow. You also learn how to view and trigger the re-delivery of the failed messages to Splunk. Trigger delivery failures To introduce a delivery failure manually in Splunk, do one of the following: If you run a single instance, stop the Splunk server to cause connection errors. Disable the relevant HEC token from your Splunk input configuration. Troubleshoot failed messages To investigate a failed message, you can use the Google Cloud console: In the Google Cloud console, go to the Pub/Sub Subscriptions page. Go to Pub/Sub Subscriptions Click the unprocessed subscription that you created. If you used the previous example, the subscription name is: projects/PROJECT_ID/subscriptions/DEAD_LETTER_SUBSCRIPTION_NAME. To open the messages viewer, click View Messages. To view messages, click Pull, making sure to leave Enable ack messages cleared. Inspect the failed messages.
Pay attention to the following: The Splunk event payload under the Message body column. The error message under the attribute.errorMessage column. The error timestamp under the attribute.timestamp column. The following screenshot shows an example of a failure message that you receive if the Splunk HEC endpoint is temporarily down or unreachable. Notice that the text of the errorMessage attribute reads The target server failed to respond. The message also shows the timestamp that is associated with each failure. You can use this timestamp to troubleshoot the root cause of the failure. Replay failed messages In this section, you restart the Splunk server or enable the Splunk HEC endpoint to fix the delivery error. You can then replay the unprocessed messages. In Splunk, use one of the following methods to restore the connection to Google Cloud: If you stopped the Splunk server, restart the server. If you disabled the Splunk HEC endpoint in the Trigger delivery failures section, check that the Splunk HEC endpoint is now operating. In Cloud Shell, take a snapshot of the unprocessed subscription before re-processing the messages in this subscription. The snapshot prevents the loss of messages if there's an unexpected configuration error. gcloud pubsub snapshots create SNAPSHOT_NAME \ --subscription=DEAD_LETTER_SUBSCRIPTION_NAME Replace SNAPSHOT_NAME with a name that helps you identify the snapshot, such as dead-letter-snapshot-$(date +"%Y%m%d-%H%M%S"). Use the Pub/Sub to Pub/Sub Dataflow template to create a Pub/Sub to Pub/Sub pipeline. The pipeline uses another Dataflow job to transfer the messages from the unprocessed subscription back to the input topic. DATAFLOW_INPUT_TOPIC="INPUT_TOPIC_NAME" DATAFLOW_DEADLETTER_SUB="DEAD_LETTER_SUBSCRIPTION_NAME" JOB_NAME=splunk-dataflow-replay-$(date +"%Y%m%d-%H%M%S") gcloud dataflow jobs run ${JOB_NAME} \ --gcs-location=gs://dataflow-templates/latest/Cloud_PubSub_to_Cloud_PubSub \ --worker-machine-type=n2-standard-2 \ --max-workers=1 \ --region=REGION \ --parameters \ inputSubscription=projects/PROJECT_ID/subscriptions/DEAD_LETTER_SUBSCRIPTION_NAME,\ outputTopic=projects/PROJECT_ID/topics/INPUT_TOPIC_NAME Copy the Dataflow job ID from the command output and save it for later. You'll enter this job ID as REPLAY_JOB_ID when you drain your Dataflow job. In the Google Cloud console, go to the Pub/Sub Subscriptions page. Go to Pub/Sub Subscriptions Select the unprocessed subscription. Confirm that the Unacked message count graph is down to 0, as shown in the following screenshot. In Cloud Shell, drain the Dataflow job that you created: gcloud dataflow jobs drain REPLAY_JOB_ID --region=REGION Replace REPLAY_JOB_ID with the Dataflow job ID you saved earlier. When messages are transferred back to the original input topic, the main Dataflow pipeline automatically picks up the failed messages and re-delivers them to Splunk. Confirm messages in Splunk To confirm that the messages have been re-delivered, in Splunk, open Splunk Search & Reporting. Run a search for delivery_attempts > 1. This is a special field that the sample UDF adds to each event to track the number of delivery attempts. Make sure to expand the search time range to include events that may have occurred in the past, because the event timestamp is the original time of creation, not the time of indexing. In the following screenshot, the two messages that originally failed are now successfully delivered and indexed in Splunk with the correct timestamp.
Notice that the insertId field value is the same as the value that appears in the failed messages when you view the unprocessed subscription. The insertId field is a unique identifier that Cloud Logging assigns to the original log entry. The insertId also appears in the Pub/Sub message body. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this reference architecture, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the organization-level sink Use the following command to delete the organization-level log sink: gcloud logging sinks delete ORGANIZATION_SINK_NAME --organization=ORGANIZATION_ID Delete the project With the log sink deleted, you can proceed with deleting resources created to receive and export logs. The easiest way is to delete the project you created for the reference architecture. Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. What's next For a full list of Pub/Sub to Splunk Dataflow template parameters, see the Pub/Sub to Splunk Dataflow documentation. For the corresponding Terraform templates to help you deploy this reference architecture, see the terraform-splunk-log-export GitHub repository. It includes a pre-built Cloud Monitoring dashboard for monitoring your Splunk Dataflow pipeline. For more details on Splunk Dataflow custom metrics and logging to help you monitor and troubleshoot your Splunk Dataflow pipelines, see the blog post New observability features for your Splunk Dataflow streaming pipelines. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(11).txt b/Deploy_the_architecture(11).txt new file mode 100644 index 0000000000000000000000000000000000000000..35cf097fd38973fccb79595e174a0884d5841dc6 --- /dev/null +++ b/Deploy_the_architecture(11).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/stream-cloud-logs-to-datadog/deployment +Date Scraped: 2025-02-23T11:53:35.803Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy log streaming from Google Cloud to Datadog Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-10 UTC This document describes how you deploy a Cloud Logging log sink and a Dataflow pipeline to stream logs from Google Cloud to Datadog. It assumes that you're familiar with the reference architecture in Stream logs from Google Cloud to Datadog. These instructions are intended for IT professionals who want to stream logs from Google Cloud to Datadog.
Although it's not required, having experience with the following Google products is useful for deploying this architecture: Dataflow pipelines Pub/Sub Cloud Logging Identity and Access Management (IAM) Cloud Storage You must have a Datadog account to complete this deployment. However, you don't need any familiarity with Datadog Log Management. Architecture The following diagram shows the architecture that's described in this document. This diagram demonstrates how log files that are generated by Google Cloud are ingested by Datadog and shown to Datadog users. As shown in the preceding diagram, the following events occur: Cloud Logging collects log files from a Google Cloud project into a designated Cloud Logging log sink and then forwards them to a Pub/Sub topic. A Dataflow pipeline pulls the logs from the Pub/Sub topic, batches them, compresses them into a payload, and then delivers them to Datadog. If there's a delivery failure, a secondary Dataflow pipeline sends messages from a dead-letter topic back to the primary log-forwarding topic to be redelivered. The logs arrive in Datadog for further analysis and monitoring. For more information, see the Architecture section of the reference architecture. Objectives Create the secure networking infrastructure. Create the logging and Pub/Sub infrastructure. Create the credentials and storage infrastructure. Create the Dataflow infrastructure. Validate that Datadog Log Explorer received logs. Manage delivery errors. Costs In this document, you use the following billable components of Google Cloud: Cloud Logging Cloud Storage Dataflow Pub/Sub To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. You also use the following billable components for Datadog: Datadog Log Management Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Cloud Monitoring, Secret Manager, Compute Engine, Pub/Sub, Logging, and Dataflow APIs. Enable the APIs
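If you prefer to work from Cloud Shell instead of the console, you can enable the same APIs with a single gcloud command. This is an optional alternative, not part of the original console-based steps:
gcloud services enable \
    monitoring.googleapis.com \
    secretmanager.googleapis.com \
    compute.googleapis.com \
    pubsub.googleapis.com \
    logging.googleapis.com \
    dataflow.googleapis.com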
IAM role requirements Make sure that you have the following role or roles on the project: Compute > Compute Network Admin, Compute > Compute Security Admin, Dataflow > Dataflow Admin, Dataflow > Dataflow Worker, IAM > Project IAM Admin, IAM > Service Account Admin, IAM > Service Account User, Logging > Logs Configuration Writer, Logging > Logs Viewer, Pub/Sub > Pub/Sub Admin, Secret Manager > Secret Manager Admin, Storage > Storage Admin Check for the roles In the Google Cloud console, go to the IAM page. Go to IAM Select the project. In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator. For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles. Grant the roles In the Google Cloud console, go to the IAM page. Go to IAM Select the project. Click person_add Grant access. In the New principals field, enter your user identifier. This is typically the email address for a Google Account. In the Select a role list, select a role. To grant additional roles, click add Add another role and add each additional role. Click Save. Create network infrastructure This section describes how to create your network infrastructure to support the deployment of a Cloud Logging log sink and a Dataflow pipeline to stream logs from Google Cloud to Datadog. Create a Virtual Private Cloud (VPC) network and subnet To host the Dataflow pipeline worker VMs, create a Virtual Private Cloud (VPC) network and subnet: In the Google Cloud console, go to the VPC networks page. Go to VPC networks Click Create VPC network. In the Name field, provide a name for the network. In the Subnets section, provide a name, region, and IP address range for the subnetwork. The size of the IP address range might vary based on your environment. A subnet mask of length /24 is sufficient for most use cases. In the Private Google Access section, select On. Click Done and then click Create. Create a VPC firewall rule To restrict traffic to the Dataflow VMs, create a VPC firewall rule: In the Google Cloud console, go to the Create a firewall rule page. Go to Create a firewall rule In the Name field, provide a name for the rule. In the Description field, explain what the rule does. In the Network list, select the network for your Dataflow VMs. In the Priority field, specify the order in which this rule is applied. Set the Priority to 0. Rules with lower numbers get prioritized first. The default value for this field is 1,000. In the Direction of traffic section, select Ingress. In the Action on match section, select Allow. Create targets, source tags, protocols, and ports In the Google Cloud console, go to the Create a firewall rule page. Go to Create a firewall rule Find the Targets list and select Specified target tags. In the Target tags field, enter dataflow. In the Source filter list, select Source tags. In the Source tags field, enter dataflow. In the Protocols and Ports section complete the following tasks: Select Specified protocols and ports. Select the TCP checkbox. In the Ports field, enter 12345-12346. Click Create. Create a Cloud NAT gateway To help enable secure outbound connections between Google Cloud and Datadog, create a Cloud NAT gateway. In the Google Cloud console, go to the Cloud NAT page. Go to Cloud NAT In the Cloud NAT page, click Create Cloud NAT gateway. In the Gateway name field, provide a name for the gateway.
In the NAT type section, select Public. In the Select Cloud Router section, in the Network list, select your network from the list of available networks. In the Region list, select the region that contains your Cloud Router. In the Cloud Router list, select or create a new router in the same network and region. In the Cloud NAT mapping section, in the Cloud NAT IP addresses list, select Automatic. Click Create. Create logging and Pub/Sub infrastructure Create Pub/Sub topics and subscriptions to receive and forward your logs, and to handle any delivery failures. In the Google Cloud console, go to the Create a Pub/Sub topic page. Go to Create a Pub/Sub topic In the Topic ID field, provide a name for the topic. Leave the Add a default subscription checkbox selected. Click Create. To handle any log messages that are rejected by the Datadog API, create an additional topic and default subscription. To create an additional topic and default subscription, repeat the steps in this procedure. The additional topic is used within the Datadog Dataflow template as part of the path configuration for the outputDeadletterTopic template parameter. Route the logs to Pub/Sub This deployment describes how to create a project-level Cloud Logging log sink. However, you can also create an organization-level aggregated sink that combines logs from multiple projects. Set the includeChildren parameter on the organization-level sink: In the Google Cloud console, go to the Create logs routing sink page. Go to Create logs routing sink In the Sink details section, in the Sink name field, enter a name. Optional: In the Sink description field, explain the purpose of the log sink. Click Next. In the Sink destination section, in the Select sink service list, select Cloud Pub/Sub topic. In the Select a Cloud Pub/Sub topic list, select the input topic that you just created. Click Next. Optional: In the Choose logs to include in sink section, in the Build inclusion filter field, specify which logs to include in the sink by entering your logging queries. For example, to include only 10% of the logs with a severity level of INFO, create an inclusion filter with severity=INFO AND sample(insertId, 0.1). For more information, see Logging query language. Note: If you don't set a filter, all logs for all resources in your project, including audit logs, are routed to the destination that you create in this section. Click Next. Optional: In the Choose logs to filter out of sink (optional) section, create logging queries to specify which logs to exclude from the sink: To build an exclusion filter, click add Add exclusion. In the Exclusion filter name field, enter a name. In the Build an exclusion filter field, enter a filter expression that matches the log entries that you want to exclude. You can also use the sample function to select a portion of the log entries to exclude. To create the sink with your new exclusion filter turned off, click Disable after you enter the expression. You can update the sink later to enable the filter. Click Create sink. Identify writer-identity values In the Google Cloud console, go to the Log Router page. Go to Log Router In the Log Router Sinks section, find your log sink and then click more_vert More actions. Click View sink details. In the Writer identity row, next to serviceAccount, copy the service account ID. You use the copied service account ID value in the next section. Add a principal value Go to the Pub/Sub Topics page. Go to Pub/Sub Topics Select your input topic. Click Show info panel. 
On the Info Panel, in the Permissions tab, click Add principal. In the Add principals section, in the New principals field, paste the Writer identity service account ID that you copied in the previous section. In the Assign roles section, in the Select a role list, point to Pub/Sub and click Pub/Sub Publisher. Click Save. Create credentials and storage infrastructure To store your Datadog API key value, create a secret in Secret Manager. This API key is used by the Dataflow pipeline to forward logs to Datadog. In the Google Cloud console, go to the Create secret page. Go to Create secret In the Name field, provide a name for your secret—for example, my_secret. A secret name can contain uppercase and lowercase letters, numerals, hyphens, and underscores. The maximum allowed length for a name is 255 characters. In the Secret value section, in the Secret value field, paste your Datadog API key value. You can find the Datadog API key value on the Datadog Organization Settings page. Click Create secret. Create storage infrastructure To stage temporary files for the Dataflow pipeline, create a Cloud Storage bucket with Uniform bucket-level access enabled: In the Google Cloud console, go to the Create a bucket page. Go to Create a bucket In the Get Started section, enter a globally unique, permanent name for the bucket. Click Continue. In the Choose where to store your data section, select Region, select a region for your bucket, and then click Continue. In the Choose a storage class for your data section, select Standard, and then click Continue. In the Choose how to control access to objects section, find the Access control section, select Uniform, and then click Continue. Optional: In the Choose how to protect object data section, configure additional security settings. Click Create. If prompted, leave the Enforce public access prevention on this bucket item selected. Create Dataflow infrastructure In this section you create a custom Dataflow worker service account. This account should follow the principle of least privilege. The default behavior for Dataflow pipeline workers is to use your project's Compute Engine default service account, which grants permissions to all resources in the project. If you are forwarding logs from a production environment, create a custom worker service account with only the necessary roles and permissions. Assign this service account to your Dataflow pipeline workers. The following IAM roles are required for the Dataflow worker service account that you create in this section. The service account uses these IAM roles to interact with your Google Cloud resources and to forward your logs to Datadog through Dataflow. Role Effect Dataflow Admin Dataflow Worker Allows creating, running, and examining Dataflow jobs. For more information, see Roles in the Dataflow access control documentation. Pub/Sub Publisher Pub/Sub Subscriber Pub/Sub Viewer Allows viewing subscriptions and topics, consuming messages from a subscription, and publishing messages to a topic. For more information, see Roles in the Pub/Sub access control documentation. Secret Manager Secret Accessor Allows accessing the payload of secrets. For more information, see Access control with IAM. Storage Object Admin Allows listing, creating, viewing, and deleting objects. For more information, see IAM roles for Cloud Storage. Create a Dataflow worker service account In the Google Cloud console, go to the Service Accounts page. Go to Service Accounts In the Select a recent project section, select your project. 
On the Service Accounts page, click Create service account. In the Service account details section, in the Service account name field, enter a name. Click Create and continue. In the Grant this service account access to project section, add the following project-level roles to the service account: Dataflow Admin Dataflow Worker Click Done. The Service Accounts page appears. On the Service Accounts page, click your service account. In the Service account details section, copy the Email value. You use this value in the following sections to grant the service account access to your Google Cloud resources so that it can interact with them. Provide access to the Dataflow worker service account To view and consume messages from the Pub/Sub input subscription, provide access to the Dataflow worker service account: In the Google Cloud console, go to the Pub/Sub Subscriptions page. Go to Pub/Sub Subscriptions Select the checkbox next to your input subscription. Click Show info panel. In the Permissions tab, click Add principal. In the Add principals section, in the New principals field, paste the email of the service account that you created earlier. In the Assign roles section, assign the following resource-level roles to the service account: Pub/Sub Subscriber Pub/Sub Viewer Click Save. Handle failed messages The Dataflow pipeline sends any messages that fail delivery to a dead-letter topic, so the Dataflow worker service account needs permission to publish to that topic. To send the messages back to the primary input topic after any issues are resolved, the service account also needs to view and consume messages from the dead-letter subscription. Grant access to the input topic In the Google Cloud console, go to the Pub/Sub Topics page. Go to Pub/Sub Topics Select the checkbox next to your input topic. Click Show info panel. In the Permissions tab, click Add principal. In the Add principals section, in the New principals field, paste the email of the service account that you created earlier. In the Assign roles section, assign the following resource-level role to the service account: Pub/Sub Publisher Click Save. Grant access to the dead-letter topic In the Google Cloud console, go to the Pub/Sub Topics page. Go to Pub/Sub Topics Select the checkbox next to your dead-letter topic. Click Show info panel. In the Permissions tab, click Add principal. In the Add principals section, in the New principals field, paste the email of the service account that you created earlier. In the Assign roles section, assign the following resource-level role to the service account: Pub/Sub Publisher Click Save. Grant access to the dead-letter subscription In the Google Cloud console, go to the Pub/Sub Subscriptions page. Go to Pub/Sub Subscriptions Select the checkbox next to your dead-letter subscription. Click Show info panel. In the Permissions tab, click Add principal. In the Add principals section, in the New principals field, paste the email of the service account that you created earlier. In the Assign roles section, assign the following resource-level roles to the service account: Pub/Sub Subscriber Pub/Sub Viewer Click Save. Grant access to the Datadog API key secret To read the Datadog API key secret from Secret Manager, the Dataflow worker service account needs access to the secret. In the Google Cloud console, go to the Secret Manager page. Go to Secret Manager Select the checkbox next to your secret. The following sketch shows gcloud CLI equivalents for the resource-level grants in this section; the console steps for the secret continue after it.
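As an alternative to the console, the resource-level grants above (and the secret and bucket grants that follow) can be applied with gcloud. The subscription, topic, secret, and bucket names below are hypothetical placeholders; substitute the names of the resources that you actually created.

# Placeholders; replace with your own resource names and service account email.
WORKER_SA_EMAIL=datadog-dataflow-worker@my-project-id.iam.gserviceaccount.com

# Input subscription: view and consume messages.
gcloud pubsub subscriptions add-iam-policy-binding datadog-logs-input-sub \
    --member="serviceAccount:${WORKER_SA_EMAIL}" --role=roles/pubsub.subscriber
gcloud pubsub subscriptions add-iam-policy-binding datadog-logs-input-sub \
    --member="serviceAccount:${WORKER_SA_EMAIL}" --role=roles/pubsub.viewer

# Input topic and dead-letter topic: publish messages.
gcloud pubsub topics add-iam-policy-binding datadog-logs-input \
    --member="serviceAccount:${WORKER_SA_EMAIL}" --role=roles/pubsub.publisher
gcloud pubsub topics add-iam-policy-binding datadog-logs-deadletter \
    --member="serviceAccount:${WORKER_SA_EMAIL}" --role=roles/pubsub.publisher

# Dead-letter subscription: view and consume messages.
gcloud pubsub subscriptions add-iam-policy-binding datadog-logs-deadletter-sub \
    --member="serviceAccount:${WORKER_SA_EMAIL}" --role=roles/pubsub.subscriber
gcloud pubsub subscriptions add-iam-policy-binding datadog-logs-deadletter-sub \
    --member="serviceAccount:${WORKER_SA_EMAIL}" --role=roles/pubsub.viewer

# Secret that holds the Datadog API key: read the secret payload.
gcloud secrets add-iam-policy-binding my_secret \
    --member="serviceAccount:${WORKER_SA_EMAIL}" --role=roles/secretmanager.secretAccessor

# Staging bucket: read and write staging objects.
gcloud storage buckets add-iam-policy-binding gs://my-project-id-dataflow-staging \
    --member="serviceAccount:${WORKER_SA_EMAIL}" --role=roles/storage.objectAdmin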
Click Show info panel. In the Permissions tab, click Add principal. In the Add principals section, in the New principals field, paste the email of the service account that you created earlier. In the Assign roles section, assign the following resource-level role to the service account: Secret Manager Secret Accessor Click Save. Stage files to the Cloud Storage bucket Give the Dataflow worker service account access to read and write the Dataflow job's staging files in the Cloud Storage bucket: In the Google Cloud console, go to the Buckets page. Go to Buckets Select the checkbox next to your bucket. Click Permissions. In the Add principals section, in the New principals field, paste the email of the service account that you created earlier. In the Assign roles section, assign the following role to the service account: Storage Object Admin Click Save. Export logs with the Pub/Sub-to-Datadog pipeline The following steps provide a baseline configuration for running the Pub/Sub to Datadog pipeline in a secure network with a custom Dataflow worker service account. If you expect to stream a high volume of logs, you can also configure the following parameters and features: batchCount: The number of messages in each batched request to Datadog (from 10 to 1,000 messages, with a default value of 100). To ensure a timely and consistent flow of logs, a batch is sent at least every two seconds. parallelism: The number of requests that are sent to Datadog in parallel, with a default value of 1 (no parallelism). Horizontal Autoscaling: Enabled by default for streaming jobs that use Streaming Engine. For more information, see Streaming autoscaling. User-defined functions: Optional JavaScript functions that you configure to act as extensions to the template (not enabled by default). For the Dataflow job's URL parameter, ensure that you select the Datadog logs API URL that corresponds to your Datadog site: US1: https://http-intake.logs.datadoghq.com; US3: https://http-intake.logs.us3.datadoghq.com; US5: https://http-intake.logs.us5.datadoghq.com; EU: https://http-intake.logs.datadoghq.eu; AP1: https://http-intake.logs.ap1.datadoghq.com; US1-FED: https://http-intake.logs.ddog-gov.com. Create your Dataflow job In the Google Cloud console, go to the Create job from template page. Go to Create job from template In the Job name field, enter a name for the job. From the Regional endpoint list, select a Dataflow endpoint. In the Dataflow template list, select Pub/Sub to Datadog. The Required Parameters section appears. Configure the Required Parameters section: In the Pub/Sub input subscription list, select the input subscription. In the Datadog Logs API URL field, enter the URL that corresponds to your Datadog site. In the Output deadletter Pub/Sub topic list, select the topic that you created to receive message failures. Configure the Streaming Engine section: In the Temporary location field, specify a path for temporary files in the storage bucket that you created for that purpose. Configure the Optional Parameters section: In the Google Cloud Secret Manager ID field, enter the resource name of the secret that you configured with your Datadog API key value. Configure your credentials, service account, and networking parameters In the Source of the API key passed field, select SECRET_MANAGER. In the Worker region list, select the region where you created your custom VPC and subnet. In the Service account email list, select the custom Dataflow worker service account that you created for that purpose.
In the Worker IP Address Configuration list, select Private. In the Subnetwork field, specify the private subnetwork that you created for the Dataflow worker VMs. For more information, see Guidelines for specifying a subnetwork parameter for Shared VPC. Optional: Customize other settings. Click Run job. The Dataflow service allocates resources to run the pipeline. Validate that Datadog Log Explorer received logs Open the Datadog Log Explorer, and ensure that the timeframe is expanded to encompass the timestamp of the forwarded logs. To validate that Datadog Log Explorer received logs from Google Cloud, search for logs with the gcp.dataflow.step source attribute, or any other log attribute. For example: source:gcp.dataflow.step The results display the log messages that the pipeline forwarded from Google Cloud to Datadog. For more information, see Search logs in the Datadog documentation. Manage delivery errors Log file delivery from the Dataflow pipeline that streams Google Cloud logs to Datadog can fail occasionally. Delivery errors can be caused by: 4xx errors from the Datadog logs endpoint (related to authentication or network issues). 5xx errors caused by server issues at the destination. Manage 401 and 403 errors If you encounter a 401 error or a 403 error, you must replace the primary log-forwarding job with a replacement job that has a valid API key value. You must then clear the messages generated by those errors from the dead-letter topic. To clear the error messages, follow the steps in the Troubleshoot failed messages section. For more information about replacing the primary log-forwarding job with a replacement job, see Launch a replacement job. Manage other 4xx errors To resolve all other 4xx errors, follow the steps in the Troubleshoot failed messages section. Manage 5xx errors For 5xx errors, delivery is automatically retried with exponential backoff, for a maximum of 15 minutes. This automatic process might not resolve all errors. To clear any remaining 5xx errors, follow the steps in the Troubleshoot failed messages section. Troubleshoot failed messages When you see failed messages in the dead-letter topic, examine them. To resolve the errors, and to forward the messages from the dead-letter topic to the primary log-forwarding pipeline, complete all of the following subsections in order. Review your dead-letter subscription In the Google Cloud console, go to the Pub/Sub Subscriptions page. Go to Pub/Sub Subscriptions Click the subscription ID of the dead-letter subscription that you created. Click the Messages tab. To view the messages, leave the Enable ack messages checkbox cleared and click Pull. Inspect the failed messages and resolve any issues. Reprocess dead-letter messages To reprocess dead-letter messages, first create a Dataflow job and then configure its parameters. Create your Dataflow job In the Google Cloud console, go to the Create job from template page. Go to Create job from template Give the job a name and specify the regional endpoint. Configure your messaging and storage parameters In the Create job from template page, in the Dataflow template list, select the Pub/Sub to Pub/Sub template. In the Source section, in the Pub/Sub input subscription list, select your dead-letter subscription. In the Target section, in the Output Pub/Sub topic list, select the primary input topic. If you prefer to inspect the dead-letter subscription and run this reprocessing job from the command line, a gcloud sketch follows; the console steps then continue.
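A command-line alternative for the last two subsections, shown only as a sketch: the subscription, topic, bucket, region, and template parameter names below are placeholders and assumptions to verify against the Pub/Sub to Pub/Sub template reference, not values defined in this guide.

# Inspect failed messages without acknowledging them (equivalent of the Pull step above).
gcloud pubsub subscriptions pull datadog-logs-deadletter-sub --limit=5

# Run the classic Pub/Sub to Pub/Sub template to replay dead-letter messages back to the
# primary input topic. Names, paths, and parameter names are placeholder assumptions.
gcloud dataflow jobs run datadog-deadletter-replay \
    --region=us-central1 \
    --gcs-location=gs://dataflow-templates-us-central1/latest/Cloud_PubSub_to_Cloud_PubSub \
    --staging-location=gs://my-project-id-dataflow-staging/temp \
    --parameters=inputSubscription=projects/my-project-id/subscriptions/datadog-logs-deadletter-sub,outputTopic=projects/my-project-id/topics/datadog-logs-input

# When the dead-letter subscription is empty, drain the replay job so that buffered data
# finishes processing before the job stops.
gcloud dataflow jobs drain JOB_ID --region=us-central1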
In the Streaming Engine section, in the Temporary location field, specify a path and filename prefix for temporary files in the storage bucket that you created for that purpose. For example, gs://my-bucket/temp. Configure your networking and service account parameters In the Create job from template page, find the Worker region list and select the region where you created your custom VPC and subnet. In the Service Account email list, select the custom Dataflow worker service account email address that you created for that purpose. In the Worker IP Address Configuration list, select Private. In the Subnetwork field, specify the private subnetwork that you created for the Dataflow worker VMs. For more information, see Guidelines for specifying a subnetwork parameter for Shared VPC. Optional: Customize other settings. Click Run job. Confirm the dead-letter subscription is empty Confirming that the dead-letter subscription is empty helps ensure that you have forwarded all messages from that Pub/Sub subscription to the primary input topic. In the Google Cloud console, go to the Pub/Sub Subscriptions page. Go to Pub/Sub Subscriptions Click the subscription ID of the dead-letter subscription that you created. Click the Messages tab. Confirm that there are no more unacknowledged messages through the Pub/Sub subscription metrics. For more information, see Monitor message backlog. Drain the backup Dataflow job After you have resolved the errors, and the messages in the dead-letter topic have returned to the log-forwarding pipeline, follow these steps to stop running the Pub/Sub to Pub/Sub template. Draining the backup Dataflow job ensures that the Dataflow service finishes processing the buffered data while also blocking the ingestion of new data. In the Google Cloud console, go to the Dataflow jobs page. Go to Dataflow jobs Select the job that you want to stop. The Stop Jobs window appears. To stop a job, the status of the job must be running. Select Drain. Click Stop job. Clean up If you don't plan to continue using the Google Cloud and Datadog resources deployed in this reference architecture, delete them to avoid incurring additional costs. There are no Datadog resources for you to delete. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. What's next To learn more about the benefits of the Pub/Sub to Datadog Dataflow template, read the Stream your Google Cloud logs to Datadog with Dataflow blog post. For more information about Cloud Logging, see Cloud Logging. To learn more about Datadog log management, see Best Practices for Log Management. For more information about Dataflow, see Dataflow. 
For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Ashraf Hanafy | Senior Software Engineer for Google Cloud Integrations, DatadogDaniel Trujillo | Engineering Manager, Google Cloud Integrations, DatadogBryce Eadie | Technical Writer, DatadogSriram Raman | Senior Product Manager, Google Cloud Integrations, DatadogOther contributors: Maruti C | Global Partner EngineerChirag Shankar | Data EngineerKevin Winters | Key Enterprise ArchitectLeonid Yankulin | Developer Relations EngineerMohamed Ali | Cloud Technical Solutions Developer Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(12).txt b/Deploy_the_architecture(12).txt new file mode 100644 index 0000000000000000000000000000000000000000..d0c1a0a42586c78f3be3356efd08b2babd9eb117 --- /dev/null +++ b/Deploy_the_architecture(12).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/automate-malware-scanning-for-documents-uploaded-to-cloud-storage/deployment +Date Scraped: 2025-02-23T11:56:38.850Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy automated malware scanning for files uploaded to Cloud Storage Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-02 UTC This document describes how you deploy the architecture in Automate malware scanning for files uploaded to Cloud Storage. This deployment guide assumes that you're familiar with the basic functionality of the following technologies: Cloud Storage Cloud Run Cloud Scheduler Eventarc Docker Node.js Architecture The following diagram shows the deployment architecture that you create in this document: The diagram shows the following two pipelines that are managed by this architecture: File scanning pipeline, which checks if an uploaded file contains malware. ClamAV malware database mirror update pipeline, which maintains an up-to-date mirror of the database of malware that ClamAV uses. For more information about the architecture, see Automate malware scanning for files uploaded to Cloud Storage. Objectives Build a mirror of the ClamAV malware definitions database in a Cloud Storage bucket. Build a Cloud Run service with the following functions: Scanning files in a Cloud Storage bucket for malware using ClamAV and move scanned files to clean or quarantined buckets based on the outcome of the scan. Maintaining a mirror of the ClamAV malware definitions database in Cloud Storage. Create an Eventarc trigger to trigger the malware-scanning service when a file is uploaded to Cloud Storage. Create a Cloud Scheduler job to trigger the malware-scanning service to refresh the mirror of the malware definitions database in Cloud Storage. Costs This architecture uses the following billable components of Google Cloud: Cloud Storage Cloud Run Eventarc To generate a cost estimate based on your projected usage, use the pricing calculator. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. 
Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Artifact Registry, Cloud Build, Resource Manager, Cloud Scheduler, Eventarc, Logging, Monitoring, Pub/Sub, Cloud Run, and Service Usage APIs. Enable the APIs In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Artifact Registry, Cloud Build, Resource Manager, Cloud Scheduler, Eventarc, Logging, Monitoring, Pub/Sub, Cloud Run, and Service Usage APIs. Enable the APIs In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. In this deployment, you run all commands from Cloud Shell. Deploy the architecture You can deploy the architecture described in this document by using one of the following methods: Use Cloud Shell: Use this method if you want to see how each component of the solution is deployed and configured using the Google Cloud CLI command line tool. To use this deployment method, follow the instructions in Deploy using Cloud Shell. Use the Terraform CLI: Use this method if you want to deploy the solution in as few manual steps as possible. This method relies on Terraform to deploy and configure the individual components. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy using Cloud Shell To manually deploy the architecture described in this document, complete the steps in the following subsections. Prepare your environment In this section, you assign settings for values that are used throughout the deployment, such as region and zone. In this deployment, you use us-central1 as the region for the Cloud Run service and us as the location for the Eventarc trigger and Cloud Storage buckets. In Cloud Shell, set common shell variables including region and location: REGION=us-central1 LOCATION=us PROJECT_ID=PROJECT_ID SERVICE_NAME="malware-scanner" SERVICE_ACCOUNT="${SERVICE_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" Replace PROJECT_ID with your project ID. Initialize the gcloud environment with your project ID: gcloud config set project "${PROJECT_ID}" Create three Cloud Storage buckets with unique names: gcloud storage buckets create "gs://unscanned-${PROJECT_ID}" --location="${LOCATION}" gcloud storage buckets create "gs://quarantined-${PROJECT_ID}" --location="${LOCATION}" gcloud storage buckets create "gs://clean-${PROJECT_ID}" --location="${LOCATION}" ${PROJECT_ID} is used to make sure that the bucket names are unique. These three buckets hold the uploaded files at various stages during the file scanning pipeline: unscanned-PROJECT_ID: Holds files before they're scanned. Your users upload their files to this bucket. quarantined-PROJECT_ID: Holds files that the malware-scanner service has scanned and deemed to contain malware. clean-PROJECT_ID: Holds files that the malware-scanner service has scanned and found to be uninfected. 
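Optionally, you can confirm that the buckets were created before you continue. A quick check that uses only the shell variables defined above:

# List the buckets in the current project.
gcloud storage ls

# Show the configuration of one of the new buckets, including its location.
gcloud storage buckets describe "gs://unscanned-${PROJECT_ID}"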
Create a fourth Cloud Storage bucket: gcloud storage buckets create "gs://cvd-mirror-${PROJECT_ID}" --location="${LOCATION}" ${PROJECT_ID} is used to make sure that the bucket name is unique. This bucket cvd-mirror-PROJECT_ID is used to maintain a local mirror of the malware definitions database, which prevents rate limiting from being triggered by the ClamAV CDN. Set up a service account for the malware-scanner service In this section, you create a service account to use for the malware scanner service. You then grant the appropriate roles to the service account so that it has permissions to read and write to the Cloud Storage buckets. The roles ensure that the account has minimal permissions and that it only has access to the resources that it needs. Create the malware-scanner service account: gcloud iam service-accounts create ${SERVICE_NAME} Grant the Object Admin role to the buckets. The role allows the service to read and delete files from the unscanned bucket, and to write files to the quarantined and clean buckets. gcloud storage buckets add-iam-policy-binding "gs://unscanned-${PROJECT_ID}" \ --member="serviceAccount:${SERVICE_ACCOUNT}" --role=roles/storage.objectAdmin gcloud storage buckets add-iam-policy-binding "gs://clean-${PROJECT_ID}" \ --member="serviceAccount:${SERVICE_ACCOUNT}" --role=roles/storage.objectAdmin gcloud storage buckets add-iam-policy-binding "gs://quarantined-${PROJECT_ID}" \ --member="serviceAccount:${SERVICE_ACCOUNT}" --role=roles/storage.objectAdmin gcloud storage buckets add-iam-policy-binding "gs://cvd-mirror-${PROJECT_ID}" \ --member="serviceAccount:${SERVICE_ACCOUNT}" --role=roles/storage.objectAdmin Grant the Metric Writer role, which allows the service to write metrics to Monitoring: gcloud projects add-iam-policy-binding \ "${PROJECT_ID}" \ --member="serviceAccount:${SERVICE_ACCOUNT}" \ --role=roles/monitoring.metricWriter Create the malware-scanner service in Cloud Run In this section, you deploy the malware-scanner service to Cloud Run. The service runs in a Docker container that contains the following: A Dockerfile to build a container image with the service, Node.js runtime, Google Cloud SDK, and ClamAV binaries. The TypeScript files for the malware-scanner Cloud Run service. A config.json configuration file to specify your Cloud Storage bucket names. A updateCvdMirror.sh shell script to refresh the ClamAV malware definitions database mirror in Cloud Storage. A bootstrap.sh shell script to run the necessary services on instance startup. To deploy the service, do the following: In Cloud Shell, clone the GitHub repository that contains the code files: git clone https://github.com/GoogleCloudPlatform/docker-clamav-malware-scanner.git Change to the cloudrun-malware-scanner directory: cd docker-clamav-malware-scanner/cloudrun-malware-scanner Create the config.json configuration file based on the config.json.tmpl template file in the GitHub repository: sed "s/-bucket-name/-${PROJECT_ID}/" config.json.tmpl > config.json The preceding command uses a search and replace operation to give the Cloud Storage buckets unique names that are based on the Project ID. Optional: View the updated configuration file: cat config.json Perform an initial population of the ClamAV malware database mirror in Cloud Storage: python3 -m venv pyenv . 
pyenv/bin/activate pip3 install crcmod cvdupdate ./updateCvdMirror.sh "cvd-mirror-${PROJECT_ID}" deactivate These commands performs a local install of the CVDUpdate tool, and then runs the updateCvdMirror.sh script which uses CVDUpdate to copy the ClamAV malware database to the cvd-mirror-PROJECT_ID bucket that you created earlier. You can check the contents of the mirror bucket: gcloud storage ls "gs://cvd-mirror-${PROJECT_ID}/cvds" The bucket should contain several CVD files that contain the full malware database, several .cdiff files that contain the daily differential updates, and two JSON files with configuration and state information. Create and deploy the Cloud Run service using the service account that you created earlier: gcloud beta run deploy "${SERVICE_NAME}" \ --source . \ --region "${REGION}" \ --no-allow-unauthenticated \ --memory 4Gi \ --cpu 1 \ --concurrency 20 \ --min-instances 1 \ --max-instances 5 \ --no-cpu-throttling \ --cpu-boost \ --timeout 300s \ --service-account="${SERVICE_ACCOUNT}" The command creates a Cloud Run instance that has 1 vCPU and uses 4 GiB of RAM. This size is acceptable for this deployment. However, in a production environment, you might want to choose a larger CPU and memory size for the instance, and a larger --max-instances parameter. The resource sizes that you might need depend on how much traffic the service needs to handle. The command includes the following specifications: The --concurrency parameter specifies the number of simultaneous requests that each instance can process. The --no-cpu-throttling parameter lets the instance perform operations in the background, such as updating malware definitions. The --cpu-boost parameter doubles the number of vCPUs on instance startup to reduce startup latency. The --min-instances 1 parameter maintains at least one instance active, because the startup time for each instance is relatively high. The --max-instances 5 parameter prevents the service from being scaled up too high. When prompted, enter Y to build and deploy the service. The build and deployment takes about 10 minutes. When it's complete, the following message is displayed: Service [malware-scanner] revision [malware-scanner-UNIQUE_ID] has been deployed and is serving 100 percent of traffic. Service URL: https://malware-scanner-UNIQUE_ID.a.run.app Store the Service URL value from the output of the deployment command in a shell variable. You use the value later when you create a Cloud Scheduler job. SERVICE_URL="SERVICE_URL" Optional: To check the running service and the ClamAV version, run the following command: curl -D - -H "Authorization: Bearer $(gcloud auth print-identity-token)" \ ${SERVICE_URL} The output looks like the following sample. It shows the version of the malware-scanner service, the version of ClamAV, and the version of the malware definitions with the date that they were last updated. gcs-malware-scanner version 3.2.0 Using Clam AV version: ClamAV 1.4.1/27479/Fri Dec 6 09:40:14 2024 The Cloud Run service requires that all invocations are authenticated, and the authenticating identities must have the run.routes.invoke permission on the service. You add the permission in the next section. Create an Eventarc Cloud Storage trigger In this section, you add permissions to allow Eventarc to capture Cloud Storage events and create a trigger to send these events to the Cloud Run malware-scanner service. 
If you're using an existing project that was created before April 8, 2021, add the iam.serviceAccountTokenCreator role to the Pub/Sub service account: PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)") PUBSUB_SERVICE_ACCOUNT="service-${PROJECT_NUMBER}@gcp-sa-pubsub.iam.gserviceaccount.com" gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member="serviceAccount:${PUBSUB_SERVICE_ACCOUNT}"\ --role='roles/iam.serviceAccountTokenCreator' This role addition is only required for older projects and allows Pub/Sub to invoke the Cloud Run service. In Cloud Shell, grant the Pub/Sub Publisher role to the Cloud Storage service account: STORAGE_SERVICE_ACCOUNT=$(gcloud storage service-agent --project="${PROJECT_ID}") gcloud projects add-iam-policy-binding "${PROJECT_ID}" \ --member "serviceAccount:${STORAGE_SERVICE_ACCOUNT}" \ --role "roles/pubsub.publisher" Allow the malware-scanner service account to invoke the Cloud Run service, and act as an Eventarc event receiver: gcloud run services add-iam-policy-binding "${SERVICE_NAME}" \ --region="${REGION}" \ --member "serviceAccount:${SERVICE_ACCOUNT}" \ --role roles/run.invoker gcloud projects add-iam-policy-binding "${PROJECT_ID}" \ --member "serviceAccount:${SERVICE_ACCOUNT}" \ --role "roles/eventarc.eventReceiver" Create an Eventarc trigger to capture the finalized object event in the unscanned Cloud Storage bucket and send it to your Cloud Run service. The trigger uses the malware-scanner service account for authentication: BUCKET_NAME="unscanned-${PROJECT_ID}" gcloud eventarc triggers create "trigger-${BUCKET_NAME}-${SERVICE_NAME}" \ --destination-run-service="${SERVICE_NAME}" \ --destination-run-region="${REGION}" \ --location="${LOCATION}" \ --event-filters="type=google.cloud.storage.object.v1.finalized" \ --event-filters="bucket=${BUCKET_NAME}" \ --service-account="${SERVICE_ACCOUNT}" If you receive one of the following errors, wait one minute and then run the commands again: ERROR: (gcloud.eventarc.triggers.create) INVALID_ARGUMENT: The request was invalid: Bucket "unscanned-PROJECT_ID" was not found. Please verify that the bucket exists. ERROR: (gcloud.eventarc.triggers.create) FAILED_PRECONDITION: Invalid resource state for "": Permission denied while using the Eventarc Service Agent. If you recently started to use Eventarc, it may take a few minutes before all necessary permissions are propagated to the Service Agent. Otherwise, verify that it has Eventarc Service Agent role. Change the message acknowledgement deadline to five minutes in the underlying Pub/Sub subscription that's used by the Eventarc trigger. The default value of 10 seconds is too short for large files or high loads. SUBSCRIPTION_NAME=$(gcloud eventarc triggers describe \ "trigger-${BUCKET_NAME}-${SERVICE_NAME}" \ --location="${LOCATION}" \ --format="get(transport.pubsub.subscription)") gcloud pubsub subscriptions update "${SUBSCRIPTION_NAME}" --ack-deadline=300 Although your trigger is created immediately, it can take up to two minutes for that trigger to be fully functional. Create a Cloud Scheduler job to trigger ClamAV database mirror updates Create a Cloud Scheduler job that executes an HTTP POST request on the Cloud Run service with a command to update the mirror of the malware definitions database. To avoid having too many clients use the same time slot, ClamAV requires that you schedule the job at a random minute between 3 and 57, avoiding multiples of 10. 
while : ; do # set MINUTE to a random number between 3 and 57 MINUTE="$((RANDOM%55 + 3))" # exit loop if MINUTE isn't a multiple of 10 [[ $((MINUTE % 10)) != 0 ]] && break done gcloud scheduler jobs create http \ "${SERVICE_NAME}-mirror-update" \ --location="${REGION}" \ --schedule="${MINUTE} */2 * * *" \ --oidc-service-account-email="${SERVICE_ACCOUNT}" \ --uri="${SERVICE_URL}" \ --http-method=post \ --message-body='{"kind":"schedule#cvd_update"}' \ --headers="Content-Type=application/json" The --schedule command-line argument defines when the job runs using the unix-cron string format. The value given indicates that the job should run at the specific randomly-generated minute every two hours. This job only updates the ClamAV mirror in Cloud Storage. The ClamAV freshclam daemon in each instance of the Cloud Run checks the mirror every 30 minutes for new definitions and updates the ClamAV daemon. Deploy using the Terraform CLI This section describes deploying the architecture described in this document by using the Terraform CLI. Clone the GitHub Repository In Cloud Shell, clone the GitHub repository that contains the code and Terraform files: git clone https://github.com/GoogleCloudPlatform/docker-clamav-malware-scanner.git Prepare the environment In this section, you assign settings for values that are used throughout the deployment, such as region and zone. In this deployment, you use us-central1 as the region for the Cloud Run service and us as the location for the Eventarc trigger and Cloud Storage buckets. In Cloud Shell, set common shell variables including region and location: REGION=us-central1 LOCATION=us PROJECT_ID=PROJECT_ID Replace PROJECT_ID with your project ID. Initialize the gcloud CLI environment with your project ID: gcloud config set project "${PROJECT_ID}" Create the config.json configuration file based on the config.json.tmpl template file in the GitHub repository: sed "s/-bucket-name/-${PROJECT_ID}/" \ docker-clamav-malware-scanner/cloudrun-malware-scanner/config.json.tmpl \ > docker-clamav-malware-scanner/cloudrun-malware-scanner/config.json The preceding command uses a search and replace operation to give the Cloud Storage buckets unique names that are based on the Project ID. Optional: View the updated configuration file: cat docker-clamav-malware-scanner/cloudrun-malware-scanner/config.json Configure the Terraform variables. The contents of the config.json configuration file are passed to Terraform by using the TF_VAR_config_json variable, so that Terraform knows which Cloud Storage buckets are to create. The value of this variable is also passed to Cloud Run to configure the service. TF_VAR_project_id=$PROJECT_ID TF_VAR_region=us-central1 TF_VAR_bucket_location=us TF_VAR_config_json="$(cat docker-clamav-malware-scanner/cloudrun-malware-scanner/config.json)" TF_VAR_create_buckets=true export TF_VAR_project_id TF_VAR_region TF_VAR_bucket_location TF_VAR_config_json TF_VAR_create_buckets Deploy the base infrastructure In Cloud Shell, run the following commands to deploy the base infrastructure: gcloud services enable \ cloudresourcemanager.googleapis.com \ serviceusage.googleapis.com cd docker-clamav-malware-scanner/terraform/infra terraform init terraform apply Respond yes when prompted. 
This Terraform script performs the following tasks: Creates the service accounts Creates the Artifact Registry Creates the Cloud Storage buckets Sets the appropriate roles and permissions Performs an initial population of the Cloud Storage bucket that contains the mirror of ClamAV malware definitions database Build the container for the service In Cloud Shell, run the following commands to launch a Cloud Build job to create the container image for the service: cd ../../cloudrun-malware-scanner gcloud builds submit \ --region="$TF_VAR_region" \ --config=cloudbuild.yaml \ --service-account="projects/$PROJECT_ID/serviceAccounts/malware-scanner-build@$PROJECT_ID.iam.gserviceaccount.com" \ . Wait a few minutes for the build to complete. Deploy the service and trigger In Cloud Shell, run the following commands to deploy the Cloud Run service: cd ../terraform/service/ terraform init terraform apply Respond yes when prompted. It can take several minutes for the service to deploy and start. This terraform script performs the following tasks: Deploys the Cloud Run service by using the container image that you just built. Sets up the Eventarc triggers on the unscanned Cloud Storage buckets. Although your trigger is created immediately, it can take up to two minutes for that trigger to be fully functional. Creates the Cloud Scheduler job to update to the ClamAV malware definitions mirror. If the deployment fails with one of the following errors, wait one minute and then run the terraform apply command again to retry creating the Eventarc trigger. Error: Error creating Trigger: googleapi: Error 400: Invalid resource state for "": The request was invalid: Bucket "unscanned-PROJECT_ID" was not found. Please verify that the bucket exists. Error: Error creating Trigger: googleapi: Error 400: Invalid resource state for "": Permission denied while using the Eventarc Service Agent. If you recently started to use Eventarc, it may take a few minutes before all necessary permissions are propagated to the Service Agent. Otherwise, verify that it has Eventarc Service Agent role.. Optional: To check the running service and the ClamAV version in use, run the following commands: MALWARE_SCANNER_URL="$(terraform output -raw cloud_run_uri)" curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" \ "${MALWARE_SCANNER_URL}" The output looks like the following sample. It shows the version of the malware-scanner service, the version of ClamAV, and the version of the malware definitions with the date that they were last updated. gcs-malware-scanner version 3.2.0 Using Clam AV version: ClamAV 1.4.1/27479/Fri Dec 6 09:40:14 2024 Test the pipeline by uploading files To test the pipeline, you upload one clean (malware-free) file and one test file that mimics an infected file: Create a sample text file or use an existing clean file to test the pipeline processes. In Cloud Shell, copy the sample data file to the unscanned bucket: gcloud storage cp FILENAME "gs://unscanned-${PROJECT_ID}" Replace FILENAME with the name of the clean text file. The malware-scanner service inspects each file and moves it to an appropriate bucket. This file is moved to the clean bucket. 
Give the pipeline a few seconds to process the file and then check your clean bucket to see if the processed file is there: gcloud storage ls "gs://clean-${PROJECT_ID}" --recursive You can check that the file was removed from the unscanned bucket: gcloud storage ls "gs://unscanned-${PROJECT_ID}" --recursive Upload a file called eicar-infected.txt that contains the EICAR standard anti-malware test signature to your unscanned bucket: echo -e 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' \ | gcloud storage cp - "gs://unscanned-${PROJECT_ID}/eicar-infected.txt" This text string has a signature that triggers malware scanners for testing purposes. This test file is a widely used test—it isn't actual malware and it's harmless to your workstation. If you try to create a file that contains this string on a computer that has a malware scanner installed, you can trigger an alert. Wait a few seconds and then check your quarantined bucket to see if your file successfully went through the pipeline: gcloud storage ls "gs://quarantined-${PROJECT_ID}" --recursive The service also logs a Logging log entry when a malware infected file is detected. You can check that the file was removed from the unscanned bucket: gcloud storage ls "gs://unscanned-${PROJECT_ID}" --recursive Test the malware definitions database update mechanism In Cloud Shell, trigger the check for updates by forcing the Cloud Scheduler job to run: gcloud scheduler jobs run "${SERVICE_NAME}-mirror-update" --location="${REGION}" The results of this command are only shown in the detailed logs. Monitor the service You can monitor the service by using Cloud Logging and Cloud Monitoring. View detailed logs In the Google Cloud console, go to the Cloud Logging Logs Explorer page. Go to Logs Explorer If the Log fields filter isn't displayed, click Log Fields. In the Log Fields filter, click Cloud Run Revision. In the Service Name section of the Log Fields filter, click malware-scanner. The logs query results shows the logs from the service, including several lines that show the scan requests and status for the two files that you uploaded: Scan request for gs://unscanned-PROJECT_ID/FILENAME, (##### bytes) scanning with clam ClamAV CLAMAV_VERSION_STRING Scan status for gs://unscanned-PROJECT_ID/FILENAME: CLEAN (##### bytes in #### ms) ... Scan request for gs://unscanned-PROJECT_ID/eicar-infected.txt, (69 bytes) scanning with clam ClamAV CLAMAV_VERSION_STRING Scan status for gs://unscanned-PROJECT_ID/eicar-infected.txt: INFECTED stream: Eicar-Signature FOUND (69 bytes in ### ms) The output shows the ClamAV version and malware database signature revision, along with the malware name for the infected test file. You can use these log messages to set up alerts for when malware has been found, or for when failures occurred while scanning. The output also shows the malware definitions mirror update logs: Starting CVD Mirror update CVD Mirror update check complete. output: ... If the mirror was updated, the output shows additional lines: CVD Mirror updated: DATE_TIME - INFO: Downloaded daily.cvd. 
Version: VERSION_INFO Freshclam update logs appear every 30 mins: DATE_TIME -> Received signal: wake up DATE_TIME -> ClamAV update process started at DATE_TIME DATE_TIME -> daily.cvd database is up-to-date (version: VERSION_INFO) DATE_TIME -> main.cvd database is up-to-date (version: VERSION_INFO) DATE_TIME -> bytecode.cvd database is up-to-date (version: VERSION_INFO) If the database was updated, the freshclam log lines are instead similar to the following: DATE_TIME -> daily.cld updated (version: VERSION_INFO) View Metrics The service generates the following metrics for monitoring and alerting purposes: Number of clean files processed: workload.googleapis.com/googlecloudplatform/gcs-malware-scanning/clean-files Number of infected files processed: workload.googleapis.com/googlecloudplatform/gcs-malware-scanning/infected-files Number of files ignored and not scanned: workload.googleapis.com/googlecloudplatform/gcs-malware-scanning/ignored-files Time spent scanning files: workload.googleapis.com/googlecloudplatform/gcs-malware-scanning/scan-duration Total number of bytes scanned: workload.googleapis.com/googlecloudplatform/gcs-malware-scanning/bytes-scanned Number of failed malware scans: workload.googleapis.com/googlecloudplatform/gcs-malware-scanning/scans-failed Number of CVD Mirror update checks: workload.googleapis.com/googlecloudplatform/gcs-malware-scanning/cvd-mirror-updates You can view these metrics in the Cloud Monitoring Metrics Explorer: In the Google Cloud console, go to the Cloud Monitoring Metrics Explorer page. Go to Metrics Explorer Click the Select a metric field and enter the filter string malware. Expand the Generic Task resource. Expand the Googlecloudplatform category. Select the googlecloudplatform/gcs-malware-scanning/clean-files metric. The graph shows a data point that indicates when the clean file was scanned. You can use metrics to monitor the pipeline and to create alerts for when malware is detected, or when files fail processing. The generated metrics have the following labels, which you can use for filtering and aggregation to view more fine-grained details with Metrics Explorer: source_bucket destination_bucket clam_version cloud_run_revision In the ignored_files metric, the following reason labels define why files are ignored: ZERO_LENGTH_FILE: If the ignoreZeroLengthFiles config value is set, and the file is empty. FILE_TOO_LARGE: When the file exceeds the maximum scan size of 500 MiB. REGEXP_MATCH: When the filename matches one of the patterns defined in fileExclusionPatterns. FILE_SIZE_MISMATCH: If the file size changes while it is being examined. Advanced configuration The following sections describe how you can configure the scanner with more advanced parameters. Handle multiple buckets The malware scanner service can scan files from multiple source buckets and send the files to separate clean and quarantined buckets. Although this advanced configuration is out of the scope of this deployment, the following is a summary of the required steps: Create unscanned, clean, and quarantined Cloud Storage buckets that have unique names. Grant the appropriate roles to the malware-scanner service account on each bucket. 
Edit the config.json configuration file to specify the bucket names for each configuration: { "buckets": [ { "unscanned": "unscanned-bucket-1-name", "clean": "clean-bucket-1-name", "quarantined": "quarantined-bucket-1-name" }, { "unscanned": "unscanned-bucket-2-name", "clean": "clean-bucket-2-name", "quarantined": "quarantined-bucket-2-name" } ], "ClamCvdMirrorBucket": "cvd-mirror-bucket-name" } For each of the unscanned buckets, create an Eventarc trigger. Make sure to create a unique trigger name for each bucket. The Cloud Storage bucket must be in the same project and region as the Eventarc trigger. If you are using the Terraform deployment, the steps in this section are automatically applied when you pass your updated config.json configuration file in the terraform configuration variable TF_VAR_config_json. Ignoring temporary files Some uploading services, such as SFTP to Cloud Storage gateways, create one or more temporary files during the upload process. These services then rename these files to the final filename once the upload is complete. The normal behavior of the scanner is to scan and move all files, including these temporary files as soon as they are written, which may cause the uploader service to fail when it can't find its temporary files. The fileExclusionPatterns section of the config.json configuration file lets you use regular expressions to specify a list of filename patterns to ignore. Any files matching these regular expressions are left in the unscanned bucket. When this rule is triggered, the ignored-files counter is incremented, and a message is logged to indicate that the file matching the pattern was ignored. The following code sample shows a config.json configuration file with the fileExclusionPatterns list set to ignore files ending in .tmp or containing the string .partial_upload.. { "buckets": [ { "unscanned": "unscanned-bucket-name", "clean": "clean-bucket-name", "quarantined": "quarantined-bucket-name" }, ], "ClamCvdMirrorBucket": "cvd-mirror-bucket-name", "fileExclusionPatterns": [ "\\.tmp$", "\\.partial_upload\\." ] } Take care when using \ characters in the regular expression as they will need to be escaped in the JSON file with another \. For example, to specify a literal . in a regular expression, the symbol needs to be escaped twice - once for the regular expression, and again for the text in the JSON file, therefore becoming \\., as in the last line of the preceding code sample. Ignore zero-length files Similarly to temporary files, some upload services create a zero-length file on Cloud Storage, then update this file later with more contents. These files can also be ignored by setting the config.json parameter ignoreZeroLengthFiles to true, for example: { "buckets": [ { "unscanned": "unscanned-bucket-name", "clean": "clean-bucket-name", "quarantined": "quarantined-bucket-name" }, ], "ClamCvdMirrorBucket": "cvd-mirror-bucket-name", "ignoreZeroLengthFiles": true } When this rule is triggered, the ignored-files metric is incremented, and a message is logged to indicate that a zero-length file was ignored. Maximum scan file size The default maximum scan file size is 500 MiB. This is chosen because it takes approximately 5 minutes to scan a file of this size. Files that are larger than 500 MiB are ignored, and are left in the unscanned bucket. The files-ignored metric is incremented and a message is logged. 
If you need to increase this limit, then update the following limits so they accommodate the new maximum file size and scan duration values: The Cloud Run service request timeout is 5 minutes The Pub/Sub subscription message acknowledgement deadline is 5 minutes The Scanner code has a MAX_FILE_SIZE constant of 500 MiB. The ClamAV service config has StreamMaxLength, MaxScanSize, and MaxFileSize settings of 512 MB. These settings are set by the bootstrap.sh script. Clean up The following section explains how you can avoid future charges for the Google Cloud project that you used in this deployment. Delete the Google Cloud project To avoid incurring charges to your Google Cloud account for the resources used in this deployment, you can delete the Google Cloud project. Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. What's next Explore Cloud Storage documentation. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(2).txt b/Deploy_the_architecture(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..bf3caae30474f8c3c1a9194d9d30d56e45507e03 --- /dev/null +++ b/Deploy_the_architecture(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/build-apps-using-gateway-and-cloud-service/deployment +Date Scraped: 2025-02-23T11:47:37.575Z + +Content: +Home Docs Cloud Architecture Center Send feedback From edge to multi-cluster mesh: Deploy globally distributed applications through GKE Gateway and Cloud Service Mesh Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-30 UTC This document shows you how accomplish the following tasks: Deploy globally distributed applications exposed through GKE Gateway and Cloud Service Mesh. Expose an application to multiple clients by combining Cloud Load Balancing with Cloud Service Mesh. Integrate load balancers with a service mesh deployed across multiple Google Cloud regions. This deployment guide is intended for platform administrators. It's also intended for advanced practitioners who run Cloud Service Mesh. The instructions also work for Istio on GKE. Architecture The following diagram shows the default ingress topology of a service mesh—an external TCP/UDP load balancer that exposes the ingress gateway proxies on a single cluster: This deployment guide uses Google Kubernetes Engine (GKE) Gateway resources. Specifically, it uses a multi-cluster gateway to configure multi-region load balancing in front of multiple Autopilot clusters that are distributed across two regions. 
The preceding diagram shows how data flows through cloud ingress and mesh ingress scenarios. For more information, see the explanation of the architecture diagram in the associated reference architecture document. Objectives Deploy a pair of GKE Autopilot clusters on Google Cloud to the same fleet. Deploy an Istio-based Cloud Service Mesh to the same fleet. Configure a load balancer using GKE Gateway to terminate public HTTPS traffic. Direct public HTTPS traffic to applications hosted by Cloud Service Mesh that are deployed across multiple clusters and regions. Deploy the whereami sample application to both Autopilot clusters. Cost optimization In this document, you use the following billable components of Google Cloud: Google Kubernetes Engine Cloud Load Balancing Cloud Service Mesh Multi Cluster Ingress Google Cloud Armor Certificate Manager Cloud Endpoints To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell You run all of the terminal commands for this deployment from Cloud Shell. Set your default Google Cloud project: export PROJECT=YOUR_PROJECT export PROJECT_NUMBER=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)") gcloud config set project PROJECT_ID Replace PROJECT_ID with the ID of the project that you want to use for this deployment. Create a working directory: mkdir -p ${HOME}/edge-to-mesh-multi-region cd ${HOME}/edge-to-mesh-multi-region export WORKDIR=`pwd` Create GKE clusters In this section, you create GKE clusters to host the applications and supporting infrastructure, which you create later in this deployment guide. In Cloud Shell, create a new kubeconfig file. This step ensures that you don't create a conflict with your existing (default) kubeconfig file. touch edge2mesh_mr_kubeconfig export KUBECONFIG=${WORKDIR}/edge2mesh_mr_kubeconfig Define the environment variables that are used when creating the GKE clusters and the resources within them. Modify the default region choices to suit your purposes. export CLUSTER_1_NAME=edge-to-mesh-01 export CLUSTER_2_NAME=edge-to-mesh-02 export CLUSTER_1_REGION=us-central1 export CLUSTER_2_REGION=us-east4 export PUBLIC_ENDPOINT=frontend.endpoints.PROJECT_ID.cloud.goog Enable the Google Cloud APIs that are used throughout this guide: gcloud services enable \ container.googleapis.com \ mesh.googleapis.com \ gkehub.googleapis.com \ multiclusterservicediscovery.googleapis.com \ multiclusteringress.googleapis.com \ trafficdirector.googleapis.com \ certificatemanager.googleapis.com Create a GKE Autopilot cluster with private nodes in CLUSTER_1_REGION. 
Use the --async flag to avoid waiting for the first cluster to provision and register to the fleet: gcloud container clusters create-auto --async \ ${CLUSTER_1_NAME} --region ${CLUSTER_1_REGION} \ --release-channel rapid --labels mesh_id=proj-${PROJECT_NUMBER} \ --enable-private-nodes --enable-fleet Create and register a second Autopilot cluster in CLUSTER_2_REGION: gcloud container clusters create-auto \ ${CLUSTER_2_NAME} --region ${CLUSTER_2_REGION} \ --release-channel rapid --labels mesh_id=proj-${PROJECT_NUMBER} \ --enable-private-nodes --enable-fleet Ensure that the clusters are running. It might take up to 20 minutes until all clusters are running: gcloud container clusters list The output is similar to the following: NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS edge-to-mesh-01 us-central1 1.27.5-gke.200 34.27.171.241 e2-small 1.27.5-gke.200 RUNNING edge-to-mesh-02 us-east4 1.27.5-gke.200 35.236.204.156 e2-small 1.27.5-gke.200 RUNNING Gather the credentials for CLUSTER_1_NAME.You created CLUSTER_1_NAMEasynchronously so you could run additional commands while the cluster provisioned. gcloud container clusters get-credentials ${CLUSTER_1_NAME} \ --region ${CLUSTER_1_REGION} To clarify the names of the Kubernetes contexts, rename them to the names of the clusters: kubectl config rename-context gke_PROJECT_ID_${CLUSTER_1_REGION}_${CLUSTER_1_NAME} ${CLUSTER_1_NAME} kubectl config rename-context gke_PROJECT_ID_${CLUSTER_2_REGION}_${CLUSTER_2_NAME} ${CLUSTER_2_NAME} Install a service mesh In this section, you configure the managed Cloud Service Mesh with fleet API. Using the fleet API to enable Cloud Service Mesh provides a declarative approach to provision a service mesh. In Cloud Shell, enable Cloud Service Mesh on the fleet: gcloud container fleet mesh enable Enable automatic control plane and data plane management: gcloud container fleet mesh update \ --management automatic \ --memberships ${CLUSTER_1_NAME},${CLUSTER_2_NAME} Wait about 20 minutes. Then verify that the control plane status is ACTIVE: gcloud container fleet mesh describe The output is similar to the following: createTime: '2023-11-30T19:23:21.713028916Z' membershipSpecs: projects/603904278888/locations/us-central1/memberships/edge-to-mesh-01: mesh: management: MANAGEMENT_AUTOMATIC projects/603904278888/locations/us-east4/memberships/edge-to-mesh-02: mesh: management: MANAGEMENT_AUTOMATIC membershipStates: projects/603904278888/locations/us-central1/memberships/edge-to-mesh-01: servicemesh: controlPlaneManagement: details: - code: REVISION_READY details: 'Ready: asm-managed-rapid' implementation: ISTIOD state: ACTIVE dataPlaneManagement: details: - code: OK details: Service is running. state: ACTIVE state: code: OK description: |- Revision ready for use: asm-managed-rapid. All Canonical Services have been reconciled successfully. updateTime: '2024-06-27T09:00:21.333579005Z' projects/603904278888/locations/us-east4/memberships/edge-to-mesh-02: servicemesh: controlPlaneManagement: details: - code: REVISION_READY details: 'Ready: asm-managed-rapid' implementation: ISTIOD state: ACTIVE dataPlaneManagement: details: - code: OK details: Service is running. state: ACTIVE state: code: OK description: |- Revision ready for use: asm-managed-rapid. All Canonical Services have been reconciled successfully. 
updateTime: '2024-06-27T09:00:24.674852751Z' name: projects/e2m-private-test-01/locations/global/features/servicemesh resourceState: state: ACTIVE spec: {} updateTime: '2024-06-04T17:16:28.730429993Z' Deploy an external Application Load Balancer and create ingress gateways In this section, you deploy an external Application Load Balancer through the GKE Gateway controller and create ingress gateways for both clusters. The gateway and gatewayClass resources automate the provisioning of the load balancer and backend health checking. To provide TLS termination on the load balancer, you create Certificate Manager resources and attach them to the load balancer. Additionally, you use Endpoints to automatically provision a public DNS name for the application. Install an ingress gateway on both clusters As a security best practice, we recommend that you deploy the ingress gateway in a different namespace from the mesh control plane. In Cloud Shell, create a dedicated asm-ingress namespace on each cluster: kubectl --context=${CLUSTER_1_NAME} create namespace asm-ingress kubectl --context=${CLUSTER_2_NAME} create namespace asm-ingress Add a namespace label to the asm-ingress namespaces: kubectl --context=${CLUSTER_1_NAME} label namespace asm-ingress istio-injection=enabled kubectl --context=${CLUSTER_2_NAME} label namespace asm-ingress istio-injection=enabled The output is similar to the following: namespace/asm-ingress labeled Labeling the asm-ingress namespaces with istio-injection=enabled instructs Cloud Service Mesh to automatically inject Envoy sidecar proxies when a pod is deployed. Generate a self-signed certificate for future use: openssl req -new -newkey rsa:4096 -days 365 -nodes -x509 \ -subj "/CN=frontend.endpoints.PROJECT_ID.cloud.goog/O=Edge2Mesh Inc" \ -keyout ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \ -out ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt The certificate provides an additional layer of encryption between the load balancer and the service mesh ingress gateways. It also enables support for HTTP/2-based protocols like gRPC. Instructions about how to attach the self-signed certificate to the ingress gateways are provided later in Create external IP address, DNS record, and TLS certificate resources. For more information about the requirements of the ingress gateway certificate, see Encryption from the load balancer to the backends. 
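If you want to confirm what you just generated before storing it in the clusters, the following optional checks are a minimal sketch: they print the subject and validity window of the self-signed certificate and confirm that sidecar injection is enabled on both asm-ingress namespaces. The file paths and labels match the commands above; nothing else is assumed.
# Optional: inspect the self-signed certificate generated above.
openssl x509 -noout -subject -dates \
    -in ${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt
# Optional: confirm that the asm-ingress namespaces carry the injection label.
kubectl --context=${CLUSTER_1_NAME} get namespace asm-ingress --show-labels
kubectl --context=${CLUSTER_2_NAME} get namespace asm-ingress --show-labels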
Create a Kubernetes secret on each cluster to store the self-signed certificate: kubectl --context ${CLUSTER_1_NAME} -n asm-ingress create secret tls \ edge2mesh-credential \ --key=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \ --cert=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt kubectl --context ${CLUSTER_2_NAME} -n asm-ingress create secret tls \ edge2mesh-credential \ --key=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.key \ --cert=${WORKDIR}/frontend.endpoints.PROJECT_ID.cloud.goog.crt To integrate with external Application Load Balancer, create a kustomize variant to configure the ingress gateway resources: mkdir -p ${WORKDIR}/asm-ig/base cat < ${WORKDIR}/asm-ig/base/kustomization.yaml resources: - github.com/GoogleCloudPlatform/anthos-service-mesh-samples/docs/ingress-gateway-asm-manifests/base EOF mkdir ${WORKDIR}/asm-ig/variant cat < ${WORKDIR}/asm-ig/variant/role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: asm-ingressgateway namespace: asm-ingress rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get", "watch", "list"] EOF cat < ${WORKDIR}/asm-ig/variant/rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: asm-ingressgateway namespace: asm-ingress roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: asm-ingressgateway subjects: - kind: ServiceAccount name: asm-ingressgateway EOF cat < ${WORKDIR}/asm-ig/variant/service-proto-type.yaml apiVersion: v1 kind: Service metadata: name: asm-ingressgateway namespace: asm-ingress spec: ports: - name: status-port port: 15021 protocol: TCP targetPort: 15021 - name: http port: 80 targetPort: 8080 appProtocol: HTTP - name: https port: 443 targetPort: 8443 appProtocol: HTTP2 type: ClusterIP EOF cat < ${WORKDIR}/asm-ig/variant/gateway.yaml apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: asm-ingressgateway namespace: asm-ingress spec: servers: - port: number: 443 name: https protocol: HTTPS hosts: - "*" # IMPORTANT: Must use wildcard here when using SSL, as SNI isn't passed from GFE tls: mode: SIMPLE credentialName: edge2mesh-credential EOF cat < ${WORKDIR}/asm-ig/variant/kustomization.yaml namespace: asm-ingress resources: - ../base - role.yaml - rolebinding.yaml patches: - path: service-proto-type.yaml target: kind: Service - path: gateway.yaml target: kind: Gateway EOF Apply the ingress gateway configuration to both clusters: kubectl --context ${CLUSTER_1_NAME} apply -k ${WORKDIR}/asm-ig/variant kubectl --context ${CLUSTER_2_NAME} apply -k ${WORKDIR}/asm-ig/variant Expose ingress gateway pods to the load balancer using a multi-cluster service In this section, you export the ingress gateway pods through a ServiceExport custom resource. You must export the ingress gateway pods through a ServiceExport custom resource for the following reasons: Allows the load balancer to address the ingress gateway pods across multiple clusters. Allows the ingress gateway pods to proxy requests to services running within the service mesh. 
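Before you create the ServiceExport, you can optionally confirm that the kustomize variant rendered and deployed as expected. The following check is a minimal sketch; it assumes the base manifests name the Deployment asm-ingressgateway, matching the Service and Role patches shown above.
# Optional: confirm the ingress gateway Deployment and Service in both clusters.
for ctx in ${CLUSTER_1_NAME} ${CLUSTER_2_NAME}; do
  kubectl --context=${ctx} -n asm-ingress get deploy,svc asm-ingressgateway
done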
In Cloud Shell, enable multi-cluster Services (MCS) for the fleet: gcloud container fleet multi-cluster-services enable Grant MCS the required IAM permissions to the project or fleet: gcloud projects add-iam-policy-binding PROJECT_ID \ --member "serviceAccount:PROJECT_ID.svc.id.goog[gke-mcs/gke-mcs-importer]" \ --role "roles/compute.networkViewer" Create the ServiceExport YAML file: cat < ${WORKDIR}/svc_export.yaml kind: ServiceExport apiVersion: net.gke.io/v1 metadata: name: asm-ingressgateway namespace: asm-ingress EOF Apply the ServiceExport YAML file to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/svc_export.yaml kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/svc_export.yaml If you receive the following error message, wait a few moments for the MCS custom resource definitions (CRDs) to install. Then re-run the commands to apply the ServiceExport YAML file to both clusters. error: resource mapping not found for name: "asm-ingressgateway" namespace: "asm-ingress" from "svc_export.yaml": no matches for kind "ServiceExport" in version "net.gke.io/v1" ensure CRDs are installed first Create external IP address, DNS record, and TLS certificate resources In this section, you create networking resources that support the load-balancing resources that you create later in this deployment. In Cloud Shell, reserve a static external IP address: gcloud compute addresses create mcg-ip --global A static IP address is used by the GKE Gateway resource. It lets the IP address remain the same, even if the external load balancer is recreated. Get the static IP address and store it as an environment variable: export MCG_IP=$(gcloud compute addresses describe mcg-ip --global --format "value(address)") echo ${MCG_IP} To create a stable, human-friendly mapping to your Gateway IP address, you must have a public DNS record. You can use any DNS provider and automation scheme that you want. This deployment uses Endpoints instead of creating a managed DNS zone. Endpoints provides a free Google-managed DNS record for an external IP address. Run the following command to create a YAML file named dns-spec.yaml: cat < ${WORKDIR}/dns-spec.yaml swagger: "2.0" info: description: "Cloud Endpoints DNS" title: "Cloud Endpoints DNS" version: "1.0.0" paths: {} host: "frontend.endpoints.PROJECT_ID.cloud.goog" x-google-endpoints: - name: "frontend.endpoints.PROJECT_ID.cloud.goog" target: "${MCG_IP}" EOF The dns-spec.yaml file defines the public DNS record in the form of frontend.endpoints.PROJECT_ID.cloud.goog, where PROJECT_ID is your unique project identifier. Deploy the dns-spec.yaml file to create the DNS entry. This process takes a few minutes. gcloud endpoints services deploy ${WORKDIR}/dns-spec.yaml Create a certificate using Certificate Manager for the DNS entry name you created in the previous step: gcloud certificate-manager certificates create mcg-cert \ --domains="frontend.endpoints.PROJECT_ID.cloud.goog" A Google-managed TLS certificate is used to terminate inbound client requests at the load balancer. Create a certificate map: gcloud certificate-manager maps create mcg-cert-map The load balancer references the certificate through the certificate map entry you create in the next step. 
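Optionally, before you continue, you can confirm that the Endpoints DNS record resolves to the reserved address and check the certificate state. This is a sketch only: the managed.state field name is an assumption about the gcloud output format, and because the certificate reports ACTIVE only after the load balancer starts serving the hostname, PROVISIONING is expected at this point.
# Optional: the Endpoints DNS record can take a few minutes to propagate.
nslookup frontend.endpoints.PROJECT_ID.cloud.goog
echo ${MCG_IP}
# Optional: check the managed certificate state (PROVISIONING is expected here).
gcloud certificate-manager certificates describe mcg-cert \
    --format="value(managed.state)"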
Create a certificate map entry for the certificate you created earlier in this section: gcloud certificate-manager maps entries create mcg-cert-map-entry \ --map="mcg-cert-map" \ --certificates="mcg-cert" \ --hostname="frontend.endpoints.PROJECT_ID.cloud.goog" Create backend service policies and load balancer resources In this section you accomplish the following tasks; Create a Google Cloud Armor security policy with rules. Create a policy that lets the load balancer check the responsiveness of the ingress gateway pods through the ServiceExport YAML file you created earlier. Use the GKE Gateway API to create a load balancer resource. Use the GatewayClass custom resource to set the specific load balancer type. Enable multi-cluster load balancing for the fleet and designate one of the clusters as the configuration cluster for the fleet. In Cloud Shell, create a Google Cloud Armor security policy: gcloud compute security-policies create edge-fw-policy \ --description "Block XSS attacks" Create a rule for the security policy: gcloud compute security-policies rules create 1000 \ --security-policy edge-fw-policy \ --expression "evaluatePreconfiguredExpr('xss-stable')" \ --action "deny-403" \ --description "XSS attack filtering" Create a YAML file for the security policy, and reference the ServiceExport YAML file through a corresponding ServiceImport YAML file: cat < ${WORKDIR}/cloud-armor-backendpolicy.yaml apiVersion: networking.gke.io/v1 kind: GCPBackendPolicy metadata: name: cloud-armor-backendpolicy namespace: asm-ingress spec: default: securityPolicy: edge-fw-policy targetRef: group: net.gke.io kind: ServiceImport name: asm-ingressgateway EOF Apply the Google Cloud Armor policy to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/cloud-armor-backendpolicy.yaml Create a custom YAML file that lets the load balancer perform health checks against the Envoy health endpoint (port 15021 on path /healthz/ready) of the ingress gateway pods in both clusters: cat < ${WORKDIR}/ingress-gateway-healthcheck.yaml apiVersion: networking.gke.io/v1 kind: HealthCheckPolicy metadata: name: ingress-gateway-healthcheck namespace: asm-ingress spec: default: config: httpHealthCheck: port: 15021 portSpecification: USE_FIXED_PORT requestPath: /healthz/ready type: HTTP targetRef: group: net.gke.io kind: ServiceImport name: asm-ingressgateway EOF Apply the custom YAML file you created in the previous step to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/ingress-gateway-healthcheck.yaml Enable multi-cluster load balancing for the fleet, and designate CLUSTER_1_NAME as the configuration cluster: gcloud container fleet ingress enable \ --config-membership=${CLUSTER_1_NAME} \ --location=${CLUSTER_1_REGION} Grant IAM permissions for the Gateway controller in the fleet: gcloud projects add-iam-policy-binding PROJECT_ID \ --member "serviceAccount:service-${PROJECT_NUMBER}@gcp-sa-multiclusteringress.iam.gserviceaccount.com" \ --role "roles/container.admin" Create the load balancer YAML file through a Gateway custom resource that references the gke-l7-global-external-managed-mc gatewayClass and the static IP address you created earlier: cat < ${WORKDIR}/frontend-gateway.yaml kind: Gateway apiVersion: gateway.networking.k8s.io/v1 metadata: name: external-http namespace: asm-ingress annotations: 
networking.gke.io/certmap: mcg-cert-map spec: gatewayClassName: gke-l7-global-external-managed-mc listeners: - name: http # list the port only so we can redirect any incoming http requests to https protocol: HTTP port: 80 - name: https protocol: HTTPS port: 443 allowedRoutes: kinds: - kind: HTTPRoute addresses: - type: NamedAddress value: mcg-ip EOF Apply the frontend-gateway YAML file to both clusters. Only CLUSTER_1_NAME is authoritative unless you designate a different configuration cluster as authoritative: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-gateway.yaml kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-gateway.yaml Create an HTTPRoute YAML file called default-httproute.yaml that instructs the Gateway resource to send requests to the ingress gateways: cat << EOF > ${WORKDIR}/default-httproute.yaml apiVersion: gateway.networking.k8s.io/v1 kind: HTTPRoute metadata: name: default-httproute namespace: asm-ingress spec: parentRefs: - name: external-http namespace: asm-ingress sectionName: https rules: - backendRefs: - group: net.gke.io kind: ServiceImport name: asm-ingressgateway port: 443 EOF Apply the HTTPRoute YAML file you created in the previous step to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/default-httproute.yaml kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/default-httproute.yaml To perform HTTP to HTTP(S) redirects, create an additional HTTPRoute YAML file called default-httproute-redirect.yaml: cat << EOF > ${WORKDIR}/default-httproute-redirect.yaml kind: HTTPRoute apiVersion: gateway.networking.k8s.io/v1 metadata: name: http-to-https-redirect-httproute namespace: asm-ingress spec: parentRefs: - name: external-http namespace: asm-ingress sectionName: http rules: - filters: - type: RequestRedirect requestRedirect: scheme: https statusCode: 301 EOF Apply the redirect HTTPRoute YAML file to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/default-httproute-redirect.yaml kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/default-httproute-redirect.yaml Inspect the Gateway resource to check the progress of the load balancer deployment: kubectl --context=${CLUSTER_1_NAME} describe gateway external-http -n asm-ingress The output shows the information you entered in this section. Deploy the whereami sample application This guide uses whereami as a sample application to provide direct feedback about which clusters are replying to requests. The following section sets up two separate deployments of whereami across both clusters: a frontend deployment and a backend deployment. The frontend deployment is the first workload to receive the request. It then calls the backend deployment. This model is used to demonstrate a multi-service application architecture. Both frontend and backend services are deployed to both clusters. 
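Before you deploy the sample application, you can optionally confirm that the Gateway in the configuration cluster has been programmed with the reserved static address. The following sketch uses a JSONPath query against the Gateway status; the address can take several minutes to appear.
# Optional: the two values printed below should match once the load balancer
# has been provisioned.
kubectl --context=${CLUSTER_1_NAME} -n asm-ingress get gateway external-http \
    -o jsonpath='{.status.addresses[0].value}{"\n"}'
echo ${MCG_IP}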
In Cloud Shell, create the namespaces for a whereami frontend and a whereami backend across both clusters and enable namespace injection: kubectl --context=${CLUSTER_1_NAME} create ns frontend kubectl --context=${CLUSTER_1_NAME} label namespace frontend istio-injection=enabled kubectl --context=${CLUSTER_1_NAME} create ns backend kubectl --context=${CLUSTER_1_NAME} label namespace backend istio-injection=enabled kubectl --context=${CLUSTER_2_NAME} create ns frontend kubectl --context=${CLUSTER_2_NAME} label namespace frontend istio-injection=enabled kubectl --context=${CLUSTER_2_NAME} create ns backend kubectl --context=${CLUSTER_2_NAME} label namespace backend istio-injection=enabled Create a kustomize variant for the whereami backend: mkdir -p ${WORKDIR}/whereami-backend/base cat < ${WORKDIR}/whereami-backend/base/kustomization.yaml resources: - github.com/GoogleCloudPlatform/kubernetes-engine-samples/quickstarts/whereami/k8s EOF mkdir ${WORKDIR}/whereami-backend/variant cat < ${WORKDIR}/whereami-backend/variant/cm-flag.yaml apiVersion: v1 kind: ConfigMap metadata: name: whereami data: BACKEND_ENABLED: "False" # assuming you don't want a chain of backend calls METADATA: "backend" EOF cat < ${WORKDIR}/whereami-backend/variant/service-type.yaml apiVersion: "v1" kind: "Service" metadata: name: "whereami" spec: type: ClusterIP EOF cat < ${WORKDIR}/whereami-backend/variant/kustomization.yaml nameSuffix: "-backend" namespace: backend commonLabels: app: whereami-backend resources: - ../base patches: - path: cm-flag.yaml target: kind: ConfigMap - path: service-type.yaml target: kind: Service EOF Apply the whereami backend variant to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -k ${WORKDIR}/whereami-backend/variant kubectl --context=${CLUSTER_2_NAME} apply -k ${WORKDIR}/whereami-backend/variant Create a kustomize variant for the whereami frontend: mkdir -p ${WORKDIR}/whereami-frontend/base cat < ${WORKDIR}/whereami-frontend/base/kustomization.yaml resources: - github.com/GoogleCloudPlatform/kubernetes-engine-samples/quickstarts/whereami/k8s EOF mkdir whereami-frontend/variant cat < ${WORKDIR}/whereami-frontend/variant/cm-flag.yaml apiVersion: v1 kind: ConfigMap metadata: name: whereami data: BACKEND_ENABLED: "True" BACKEND_SERVICE: "http://whereami-backend.backend.svc.cluster.local" EOF cat < ${WORKDIR}/whereami-frontend/variant/service-type.yaml apiVersion: "v1" kind: "Service" metadata: name: "whereami" spec: type: ClusterIP EOF cat < ${WORKDIR}/whereami-frontend/variant/kustomization.yaml nameSuffix: "-frontend" namespace: frontend commonLabels: app: whereami-frontend resources: - ../base patches: - path: cm-flag.yaml target: kind: ConfigMap - path: service-type.yaml target: kind: Service EOF Apply the whereami frontend variant to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -k ${WORKDIR}/whereami-frontend/variant kubectl --context=${CLUSTER_2_NAME} apply -k ${WORKDIR}/whereami-frontend/variant Create a VirtualService YAML file to route requests to the whereami frontend: cat << EOF > ${WORKDIR}/frontend-vs.yaml apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: whereami-vs namespace: frontend spec: gateways: - asm-ingress/asm-ingressgateway hosts: - 'frontend.endpoints.PROJECT_ID.cloud.goog' http: - route: - destination: host: whereami-frontend port: number: 80 EOF Apply the frontend-vs YAML file to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-vs.yaml kubectl --context=${CLUSTER_2_NAME} apply -f 
${WORKDIR}/frontend-vs.yaml Now that you have deployed frontend-vs.yaml to both clusters, attempt to call the public endpoint for your clusters: curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | jq The output is similar to the following: { "backend_result": { "cluster_name": "edge-to-mesh-02", "gce_instance_id": "8396338201253702608", "gce_service_account": "e2m-mcg-01.svc.id.goog", "host_header": "whereami-backend.backend.svc.cluster.local", "metadata": "backend", "node_name": "gk3-edge-to-mesh-02-pool-2-675f6abf-645h", "pod_ip": "10.124.0.199", "pod_name": "whereami-backend-7cbdfd788-8mmnq", "pod_name_emoji": "📸", "pod_namespace": "backend", "pod_service_account": "whereami-backend", "project_id": "e2m-mcg-01", "timestamp": "2023-12-01T03:46:24", "zone": "us-east4-b" }, "cluster_name": "edge-to-mesh-01", "gce_instance_id": "1047264075324910451", "gce_service_account": "e2m-mcg-01.svc.id.goog", "host_header": "frontend.endpoints.e2m-mcg-01.cloud.goog", "metadata": "frontend", "node_name": "gk3-edge-to-mesh-01-pool-2-d687e3c0-5kf2", "pod_ip": "10.54.1.71", "pod_name": "whereami-frontend-69c4c867cb-dgg8t", "pod_name_emoji": "🪴", "pod_namespace": "frontend", "pod_service_account": "whereami-frontend", "project_id": "e2m-mcg-01", "timestamp": "2023-12-01T03:46:24", "zone": "us-central1-c" } Note: It takes around 20 minutes for the certificate to be provisioned. If you don't receive a response, use the -v flag in the curl command in this step to verify whether the error is related to TLS errors. If you run the curl command a few times, you'll see that the responses (both from frontend and backend) come from different regions. The load balancer provides geo-routing: it sends each request from the client to the nearest healthy cluster. However, after a request enters the mesh, it can be proxied to frontend and backend pods in either cluster. When requests occasionally go from one region to another, it increases latency and cost. In the next section, you implement locality load balancing in the service mesh to keep requests local. Enable and test locality load balancing for whereami In this section, you implement locality load balancing in the service mesh to keep requests local. You also perform some tests to see how whereami handles various failure scenarios. When you make a request to the whereami frontend service, the load balancer sends the request to the cluster with the lowest latency relative to the client. However, the ingress gateway pods within the mesh then load-balance requests to whereami frontend pods across both clusters. This section addresses that behavior by enabling locality load balancing within the mesh. Note: The DestinationRule examples in the following section set an artificially low value for regional failover. When adapting these samples for your own purposes, test extensively to verify that you've found the appropriate value for your needs. In Cloud Shell, create a DestinationRule YAML file that enables locality load balancing and regional failover for the frontend service: cat << EOF > ${WORKDIR}/frontend-dr.yaml apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: frontend namespace: frontend spec: host: whereami-frontend.frontend.svc.cluster.local trafficPolicy: connectionPool: http: maxRequestsPerConnection: 0 loadBalancer: simple: LEAST_REQUEST localityLbSetting: enabled: true outlierDetection: consecutive5xxErrors: 1 interval: 1s baseEjectionTime: 1m EOF The preceding code sample enables local routing only for the frontend service; you also need a similar configuration for the backend service, which you create later in this section. The optional check that follows shows one way to observe the effect after both DestinationRules are applied.
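The following optional check is a sketch of one way to observe locality routing once both DestinationRules in this section are in place. It samples the public endpoint and counts which cluster serves the frontend and the backend; from a single client location, the counts should converge on one region. It relies only on jq, which is preinstalled in Cloud Shell.
# Optional: run after both DestinationRules are applied.
for i in $(seq 1 10); do
  curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | \
      jq -r '[.cluster_name, .backend_result.cluster_name] | @tsv'
done | sort | uniq -c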
Apply the frontend-dr YAML file to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/frontend-dr.yaml kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/frontend-dr.yaml Create a DestinationRule YAML file that enables locality load balancing and regional failover for the backend service: cat << EOF > ${WORKDIR}/backend-dr.yaml apiVersion: networking.istio.io/v1beta1 kind: DestinationRule metadata: name: backend namespace: backend spec: host: whereami-backend.backend.svc.cluster.local trafficPolicy: connectionPool: http: maxRequestsPerConnection: 0 loadBalancer: simple: LEAST_REQUEST localityLbSetting: enabled: true outlierDetection: consecutive5xxErrors: 1 interval: 1s baseEjectionTime: 1m EOF Apply the backend-dr YAML file to both clusters: kubectl --context=${CLUSTER_1_NAME} apply -f ${WORKDIR}/backend-dr.yaml kubectl --context=${CLUSTER_2_NAME} apply -f ${WORKDIR}/backend-dr.yaml With both sets of DestinationRule YAML files applied to both clusters, requests remain local to the cluster that the request is routed to. To test failover for the frontend service, reduce the number of replicas for the ingress gateway in your primary cluster. From the perspective of the multi-regional load balancer, this action simulates a cluster failure. It causes that cluster to fail its load balancer health checks. This example uses the cluster in CLUSTER_1_REGION. You should only see responses from the cluster in CLUSTER_2_REGION. Reduce the number of replicas for the ingress gateway in your primary cluster to zero and call the public endpoint to verify that requests have failed over to the other cluster: kubectl --context=${CLUSTER_1_NAME} -n asm-ingress scale --replicas=0 deployment/asm-ingressgateway Note: While there is a HorizontalPodAutoscaler configured with minReplicas:3 for the asm-ingressgateway deployment, scaling is temporarily deactivated when the deployment's replica count is set to 0. For more information, see Horizontal Pod Autoscaling - Implicit maintenance-mode deactivation.
The output should resemble the following: $ curl -s https://frontend.endpoints.PROJECT_ID.cloud.goog | jq { "backend_result": { "cluster_name": "edge-to-mesh-02", "gce_instance_id": "2717459599837162415", "gce_service_account": "e2m-mcg-01.svc.id.goog", "host_header": "whereami-backend.backend.svc.cluster.local", "metadata": "backend", "node_name": "gk3-edge-to-mesh-02-pool-2-675f6abf-dxs2", "pod_ip": "10.124.1.7", "pod_name": "whereami-backend-7cbdfd788-mp8zv", "pod_name_emoji": "🏌🏽‍♀", "pod_namespace": "backend", "pod_service_account": "whereami-backend", "project_id": "e2m-mcg-01", "timestamp": "2023-12-01T05:41:18", "zone": "us-east4-b" }, "cluster_name": "edge-to-mesh-02", "gce_instance_id": "6983018919754001204", "gce_service_account": "e2m-mcg-01.svc.id.goog", "host_header": "frontend.endpoints.e2m-mcg-01.cloud.goog", "metadata": "frontend", "node_name": "gk3-edge-to-mesh-02-pool-3-d42ddfbf-qmkn", "pod_ip": "10.124.1.142", "pod_name": "whereami-frontend-69c4c867cb-xf8db", "pod_name_emoji": "🏴", "pod_namespace": "frontend", "pod_service_account": "whereami-frontend", "project_id": "e2m-mcg-01", "timestamp": "2023-12-01T05:41:18", "zone": "us-east4-b" } To resume typical traffic routing, restore the ingress gateway replicas to the original value in the cluster: kubectl --context=${CLUSTER_1_NAME} -n asm-ingress scale --replicas=3 deployment/asm-ingressgateway Simulate a failure for the backend service by reducing the number of replicas in the primary region to 0: kubectl --context=${CLUSTER_1_NAME} -n backend scale --replicas=0 deployment/whereami-backend Call the public endpoint again and verify that the response from the frontend service comes from the primary region (us-central1) through the load balancer, and that the response from the backend service comes from the secondary region (us-east4), as expected. Restore the backend service replicas to the original value to resume typical traffic routing: kubectl --context=${CLUSTER_1_NAME} -n backend scale --replicas=3 deployment/whereami-backend You now have a global HTTP(S) load balancer serving as a frontend to your service-mesh-hosted, multi-region application. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this deployment, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project.
Delete the individual resources If you want to keep the Google Cloud project you used in this deployment, delete the individual resources: In Cloud Shell, delete the HTTPRoute resources: kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/default-httproute-redirect.yaml kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/default-httproute-redirect.yaml kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/default-httproute.yaml kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/default-httproute.yaml Delete the GKE Gateway resources: kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/frontend-gateway.yaml kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/frontend-gateway.yaml Delete the policies: kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/ingress-gateway-healthcheck.yaml kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/ingress-gateway-healthcheck.yaml kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/cloud-armor-backendpolicy.yaml kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/cloud-armor-backendpolicy.yaml Delete the service exports: kubectl --context=${CLUSTER_1_NAME} delete -f ${WORKDIR}/svc_export.yaml kubectl --context=${CLUSTER_2_NAME} delete -f ${WORKDIR}/svc_export.yaml Delete the Google Cloud Armor resources: gcloud --project=PROJECT_ID compute security-policies rules delete 1000 --security-policy edge-fw-policy --quiet gcloud --project=PROJECT_ID compute security-policies delete edge-fw-policy --quiet Delete the Certificate Manager resources: gcloud --project=PROJECT_ID certificate-manager maps entries delete mcg-cert-map-entry --map="mcg-cert-map" --quiet gcloud --project=PROJECT_ID certificate-manager maps delete mcg-cert-map --quiet gcloud --project=PROJECT_ID certificate-manager certificates delete mcg-cert --quiet Delete the Endpoints DNS entry: gcloud --project=PROJECT_ID endpoints services delete "frontend.endpoints.PROJECT_ID.cloud.goog" --quiet Delete the static IP address: gcloud --project=PROJECT_ID compute addresses delete mcg-ip --global --quiet Delete the GKE Autopilot clusters. This step takes several minutes. gcloud --project=PROJECT_ID container clusters delete ${CLUSTER_1_NAME} --region ${CLUSTER_1_REGION} --quiet gcloud --project=PROJECT_ID container clusters delete ${CLUSTER_2_NAME} --region ${CLUSTER_2_REGION} --quiet What's next Learn about more features offered by GKE Gateway that you can use with your service mesh. Learn about the different types of Cloud Load Balancing available for GKE. Learn about the features and functions offered by Cloud Service Mesh. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthors: Alex Mattson | Application Specialist EngineerMark Chilvers | Application Specialist EngineerOther contributors: Abdelfettah Sghiouar | Cloud Developer AdvocateGreg Bray | Customer EngineerPaul Revello | Cloud Solutions ArchitectValavan Rajakumar | Key Enterprise Architect Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(3).txt b/Deploy_the_architecture(3).txt new file mode 100644 index 0000000000000000000000000000000000000000..bf47fdfd60be8295a7759aa66dbc85c89bc3542a --- /dev/null +++ b/Deploy_the_architecture(3).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/app-development-and-delivery-with-cloud-code-gcb-cd-and-gke/deployment +Date Scraped: 2025-02-23T11:47:52.077Z + +Content: +Home Docs Cloud Architecture Center Send feedback Develop and deploy containerized apps using a CI/CD pipeline Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2022-11-18 UTC This deployment guide describes how to set up and use a development, continuous integration (CI), and continuous delivery (CD) system using an integrated set of Google Cloud tools. You can use this system to develop and deploy applications to Google Kubernetes Engine (GKE). This guide shows you how to create the architecture that's described in Deployment pipeline for developing and delivering containerized apps. This deployment guide is intended for both software developers and operators, and you play the following roles as you complete it: First you act as the operator to set up the CI/CD pipeline. The main components of this pipeline are Cloud Build, Artifact Registry, and Cloud Deploy. Then you act as a developer to change an application using Cloud Code. When you act as a developer, you see the integrated experience that this pipeline provides. Finally, you act as an operator and go through the steps to deploy an application into production. This deployment guide assumes you're familiar with running gcloud commands on Google Cloud and with deploying application containers to GKE. Architecture The following diagram shows the resources used in this deployment guide: For details about the components that are used in this architecture, see Deployment pipeline for developing and delivering containerized apps. Objectives Acting as an operator, you do the following: Set up the CI pipeline and CD pipeline. This setup includes the following: Set up the required permissions. Create the GKE clusters for the staging and production environments. Create a repository in Cloud Source Repositories for the source code. Create a repository in Artifact Registry for the application container. Create a Cloud Build trigger on the main GitHub repository. Create a Cloud Deploy delivery pipeline and targets. The targets are the staging and production environment. Start the CI/CD process to deploy to staging and then promote to production. Acting as a developer, you make a change to the application. To do so, you do the following: Clone the repository to work with a pre-configured development environment. Make a change to the application within your developer workspace. Build and test the change. The tests include a validation test for governance. View and validate the change in a dev cluster. This cluster runs on minikube. Commit the change into the main repository. 
Costs In this document, you use the following billable components of Google Cloud: Cloud Build Cloud Deploy Artifact Registry Google Kubernetes Engine Cloud Source Repositories Cloud Storage To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Artifact Registry, Cloud Build, Cloud Deploy, Cloud Source Repositories, Google Kubernetes Engine, Resource Manager, and Service Networking APIs. Enable the APIs In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Prepare your environment In this section, you act as the application operator and do the following: Set up the required permissions. Create the GKE clusters for the staging and production environments. Clone the source repository. Create a repository in Cloud Source Repositories for the source code. Create a repository in Artifact Registry for the container application. Set up permissions In this section, you grant the permissions that are needed to set up the CI/CD pipeline. If you're working in a new instance of Cloud Shell Editor, specify the project to use for this deployment guide: gcloud config set project PROJECT_ID Replace PROJECT_ID with the ID of the project you selected or created for this deployment guide. If a dialog is displayed, click Authorize. Make sure the default Compute Engine service account has sufficient permissions to run jobs in Cloud Deploy and pull containers from Artifact Registry. Cloud Build and Cloud Deploy use this default service account. This service account might already have the necessary permissions. This step ensures the necessary permissions are granted for projects that disable automatic role grants for default service accounts. 
gcloud projects add-iam-policy-binding PROJECT_ID \ --member=serviceAccount:$(gcloud projects describe PROJECT_ID \ --format="value(projectNumber)")-compute@developer.gserviceaccount.com \ --role="roles/clouddeploy.jobRunner" gcloud projects add-iam-policy-binding PROJECT_ID \ --member=serviceAccount:$(gcloud projects describe PROJECT_ID \ --format="value(projectNumber)")-compute@developer.gserviceaccount.com \ --role="roles/artifactregistry.reader" Grant the Cloud Build service account privilege to invoke deployments with Cloud Deploy and to update the delivery pipeline and the target definitions: gcloud projects add-iam-policy-binding PROJECT_ID \ --member=serviceAccount:$(gcloud projects describe PROJECT_ID \ --format="value(projectNumber)")@cloudbuild.gserviceaccount.com \ --role="roles/clouddeploy.operator" For more information about this IAM role, see the clouddeploy.operator role. Grant the Cloud Build and Cloud Deploy service account privilege to deploy to GKE: gcloud projects add-iam-policy-binding PROJECT_ID \ --member=serviceAccount:$(gcloud projects describe PROJECT_ID \ --format="value(projectNumber)")-compute@developer.gserviceaccount.com \ --role="roles/container.admin" For more details about this IAM role, see the container.admin role. Grant the Cloud Build service account the permissions needed to invoke Cloud Deploy operations: gcloud projects add-iam-policy-binding PROJECT_ID \ --member=serviceAccount:$(gcloud projects describe PROJECT_ID \ --format="value(projectNumber)")@cloudbuild.gserviceaccount.com \ --role="roles/iam.serviceAccountUser" When Cloud Build invokes Cloud Deploy, it uses a Compute Engine service account to create a release, which is why this permission is needed. For more details about this IAM role, see the iam.serviceAccountUser role. You have now granted the permissions that are needed for the CI/CD pipeline. Create the GKE clusters In this section, you create the staging and production environments, which are both GKE clusters. (You don't need to set up the development cluster here, because it uses minikube.) Create the staging and production GKE clusters: gcloud container clusters create-auto staging \ --region us-central1 \ --project=$(gcloud config get-value project) \ --async gcloud container clusters create-auto prod \ --region us-central1 \ --project=$(gcloud config get-value project) \ --async The staging cluster is where you test changes to your code. After you verify that the deployment in staging did not negatively affect the application, you deploy to production. Run the following command and ensure the output has STATUS: RUNNING for both the staging and production clusters: gcloud container clusters list Retrieve the credentials to your kubeconfig files for the staging and production clusters: gcloud container clusters get-credentials staging --region us-central1 gcloud container clusters get-credentials prod --region us-central1 You use these credentials to interact with the GKE clusters—for example, to check that an application is running properly. You have now created the GKE clusters for the staging and production environment. Open the IDE and clone the repository To clone the repository and view the application in your development environment, do the following: Clone the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell Editor opens and clones the sample repository. You can now view the application's code in your Cloud Shell Editor.
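Optionally, you can confirm that the retrieved credentials work before you continue. The following sketch lists the kubeconfig contexts and queries each cluster; the context names assume the standard gke_PROJECT_LOCATION_CLUSTER pattern, which the kubectl proxy commands later in this guide also rely on.
# Optional: confirm that kubectl can reach the staging and production clusters.
kubectl config get-contexts -o name
kubectl --context gke_$(gcloud config get-value project)_us-central1_staging get namespaces
kubectl --context gke_$(gcloud config get-value project)_us-central1_prod get namespaces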
Specify the project to use for this deployment guide: gcloud config set project PROJECT_ID If a dialog is displayed, click Authorize. You now have the source code for the application in your development environment. This source repository includes the Cloud Build and Cloud Deploy files needed for the CI/CD pipeline. Create repositories for the source code and for the containers In this section you set up a repository in Cloud Source Repositories for the source code, and a repository in Artifact Registry to store the containers built by the CI/CD pipeline. Create a repository in Cloud Source Repositories to store the source code and link it with the CI/CD process: gcloud source repos create cicd-sample Ensure the Cloud Deploy configurations target the correct project: sed -i s/project-id-placeholder/$(gcloud config get-value project)/g deploy/* git config --global credential.https://source.developers.google.com.helper gcloud.sh git remote add google https://source.developers.google.com/p/$(gcloud config get-value project)/r/cicd-sample Push your source code to the repository: git push --all google Create an image repository in Artifact Registry: gcloud artifacts repositories create cicd-sample-repo \ --repository-format=Docker \ --location us-central1 You now have a repository for source code in Cloud Source Repositories and one for the application container in Artifact Registry. The Cloud Source Repositories repository allows you to clone the source code and connect it to the CI/CD pipeline. Configure the CI/CD pipeline In this section, you act as the application operator and configure the CI/CD pipeline. The pipeline uses Cloud Build for CI and Cloud Deploy for CD. The steps of the pipeline are defined in the Cloud Build trigger. Create a Cloud Storage bucket for Cloud Build to store the artifacts.json file (which tracks the artifacts generated by Skaffold for each build): gcloud storage buckets create gs://$(gcloud config get-value project)-gceme-artifacts/ Storing each build's artifacts.json file in a central place is a good practice because it provides traceability, which makes troubleshooting easier. Review the cloudbuild.yaml file, which defines the Cloud Build trigger and is already configured in the source repository that you cloned. This file defines the trigger invoked whenever there is a new push to the main branch of the source-code repository. The following steps for the CI/CD pipeline are defined in the cloudbuild.yaml file: Cloud Build uses Skaffold to build the application container. Cloud Build places the build's artifacts.json file in the Cloud Storage bucket. Cloud Build places the application container in Artifact Registry. Cloud Build runs tests on the application container. The gcloud deploy apply command registers the following files with the Cloud Deploy service: deploy/pipeline.yaml, which is the delivery pipeline deploy/staging.yaml and deploy/prod.yaml, which are the target files When the files are registered, Cloud Deploy creates the pipeline and targets if they do not yet exist, or re-creates them if the configuration changed. The targets are the staging and production environments. Cloud Deploy creates a new release for the delivery pipeline. This release references the application container that was built and tested in the CI process. Cloud Deploy deploys the release to the staging environment. The delivery pipeline and targets are managed by Cloud Deploy and are decoupled from the source code. 
This decoupling means that you don't need to update the delivery pipeline and target files when a change is made to the application's source code. Create the Cloud Build trigger: gcloud beta builds triggers create cloud-source-repositories \ --name="cicd-sample-main" \ --repo="cicd-sample" \ --branch-pattern="main" \ --build-config="cloudbuild.yaml" This trigger tells Cloud Build to watch the source repository and to use the cloudbuild.yaml file to react to any changes to the repository. This trigger is invoked whenever there is a new push to the main branch. Go to the Cloud Build page in the Google Cloud console. Go to Cloud Build Notice that there are no builds for your application. You have now set up the CI and CD pipelines, and created a trigger on the main branch of the repository. Make a change to your application within your developer workspace In this section, you act as the application developer. As you develop your application, you make and verify iterative changes to the application using Cloud Code as your development workspace: Make a change to the application. Build and test the new code. Deploy the application to the minikube cluster and verify the user-facing changes. Submit the change to the main repository. When this change is committed into the main repository, the Cloud Build trigger starts the CI/CD pipeline. Build, test, and run the application In this section, you build, test, deploy, and access your application. Use the same instance of Cloud Shell Editor that you used in the preceding section. If you closed the editor, then in your browser open Cloud Shell Editor by going to ide.cloud.google.com. In the terminal, start minikube: minikube start minikube sets up a local Kubernetes cluster in your Cloud Shell. This setup takes a few minutes to run. After it's completed, the minikube process runs in the background on the Cloud Shell instance. In the pane at the bottom of Cloud Shell Editor, select Cloud Code. In the thin panel that appears between the terminal and the editor, select Run on Kubernetes. If you see a prompt that says Use current context (minikube) to run the app?, click Yes. This command builds the source code and runs tests. This can take a few minutes. The tests include unit tests and a pre-configured validation step that checks the rules set for the deployment environment. This ensures that you are warned about deployment issues even while you're still working in your development environment. The Output tab shows the Skaffold progress as it builds and deploys your application. Keep this tab open throughout this section. When the build and tests finish, the Output tab says Update succeeded, and shows two URLs. As you build and test your app, Cloud Code streams back the logs and URLs in the Output tab. As you make changes and run tests in your development environment, you can see your development environment's version of the app and verify that it's working correctly. The output also says Watching for changes..., which means that watch mode is enabled. While Cloud Code is in watch mode, the service detects any saved changes in your repository and automatically rebuilds and redeploys the app with the latest changes. In the Cloud Code terminal, hold the pointer over the first URL in the output (http://localhost:8080). In the tool tip that appears, select Open Web Preview. In the background, Cloud Code is automatically port-forwarding traffic to the cicd-sample service running on minikube. In your browser, refresh the page. 
The number next to Counter increases, showing that the app is responding to your refresh. In your browser, keep this page open so that you can view the application as you make any changes in your local environment. You have now built and tested your application in the development environment. You have deployed the application into the development cluster running on minikube, and viewed the user-facing behavior of the application. Make a change In this section, you make a change to the application and view the change as the app runs in the development cluster. In Cloud Shell Editor, open the index.html file. Search for the string Sample App Info, and change it to sample app info, so that the title now uses lowercase letters. The file is automatically saved, triggering a rebuild of the application container. Cloud Code detects the change and redeploys it automatically. The Output tab shows Update initiated. This redeployment takes a few minutes to run. This automatic redeploy feature is available for any application running on a Kubernetes cluster. When the build is done, go to your browser where you have the app open and refresh the page. When you refresh, see that the text now uses lowercase letters. This setup gives you automatic reloading for any architecture, with any components. When you use Cloud Code and minikube, anything that is running in Kubernetes has this hot code reloading functionality. You can debug applications that are deployed to a Kubernetes cluster in Cloud Code. These steps aren't covered in this deployment guide, but for details, see Debugging a Kubernetes application. Commit the code Now that you've made a change to the application, you can commit the code. Configure your git identity: git config --global user.email "YOU@EXAMPLE.COM" git config --global user.name "NAME" Replace the following: YOU@EXAMPLE.COM: the email address that's connected to your GitHub account. NAME: the name that's connected to your GitHub account. From the terminal, commit the code: git add . git commit -m "use lowercase for: sample app info" You don't need to run the git push command here. That comes later. Working in the development environment, you have now made a change to the application, built and tested the change, and verified the user-facing behavior of these changes. The tests in the development environment include governance checks, which let you fix issues that cause problems in the production environment. In this deployment guide, when you commit the code into the main repository, you don't go through a code review. However, a code review or change approval is a recommended process for software development. For more information about change approval best practices, see Streamlining change approval. Deploy a change into production In this section, you act as the application operator and do the following: Trigger the CI/CD pipeline, which deploys the release to the staging environment. Promote and approve the release to production. Start the CI/CD pipeline and deploy into staging In this section, you start the CI/CD pipeline by invoking the Cloud Build trigger. This trigger is invoked whenever a change is committed to the main repository. You can also initiate the CI system with a manual trigger. In the Cloud Shell Editor, run the following command to trigger a build: git push google This build includes the change you made to cicd-sample. Return to the Cloud Build dashboard and see that a build is created. 
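If you prefer to follow the build from the terminal instead of the console, the following commands are an optional sketch; they assume at least one build has already been triggered by the push. The console steps continue below.
# Optional: list recent builds and stream the logs of the most recent one.
gcloud builds list --limit=3
gcloud builds log --stream $(gcloud builds list --limit=1 --format="value(id)")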
Click Running: cicd-sample - cicd-sample-main in the build log on the right, and look for the blue text denoting the start and end of each step. Step 0 shows the output of the skaffold build and skaffold test instructions from the cloudbuild.yaml file. The build and test tasks in Step 0 (the CI part of the pipeline) passed, so the deployment tasks of Step 1 (the CD part of the pipeline) now run. This step finishes with the following message: Created Cloud Deploy rollout ROLLOUT_NAME in target staging Open the Cloud Deploy delivery pipelines page and click the cicd-sample delivery pipeline. The application is deployed in staging, but not in production. Verify that the application is working successfully in staging: kubectl proxy --port 8001 --context gke_$(gcloud config get-value project)_us-central1_staging This command sets up a kubectl proxy to access the application. Access the application from Cloud Shell: In Cloud Shell Editor, open a new terminal tab. Send a request to localhost to increment a counter: curl -s http://localhost:8001/api/v1/namespaces/default/services/cicd-sample:8080/proxy/ | grep -A 1 Counter You can run this command multiple times and watch the counter value increment each time. As you view the app, notice that the text that you changed is in the version of the application you deployed on staging. Close this second tab. In the first tab, press Control+C to stop the proxy. You have now invoked the Cloud Build trigger to start the CI process, which includes building the application, deploying it to the staging environment, and running tests to verify the application is working in staging. The CI process is successful when the code builds and tests pass in the staging environment. The success of the CI process then initiates the CD system in Cloud Deploy. Promote the release to production In this section, you promote the release from staging to production. The production target comes pre-configured to require approval, so you manually approve it. For your own CI/CD pipeline, you might want to use a deployment strategy that launches the deployment gradually before you do a full deployment into production. Launching the deployment gradually can make it easier to detect issues and, if needed, to restore a previous release. To promote the release to production, do the following: Open the Cloud Deploy delivery pipelines overview and select the cicd-sample pipeline. Promote the deployment from staging to production. To do so, do the following: In the pipeline diagram at the top of the page, click the blue Promote button in the staging box. In the window that opens, click the Promote button at the bottom. The deployment is not yet running in production. It's waiting for the required manual approval. Manually approve the deployment: In the pipeline visualization, click the Review button between the staging and production boxes. In the window that opens, click the Review button. In the next window, click Approve. Return to the Cloud Deploy delivery pipelines overview and select the cicd-sample pipeline. After the pipeline visualization shows the prod box as green (meaning a successful rollout), verify that the application is working in production by setting up a kubectl proxy that you use to access the application: kubectl proxy --port 8002 --context gke_$(gcloud config get-value project)_us-central1_prod Access the application from Cloud Shell: In Cloud Shell Editor, open a new terminal tab. 
Increment the counter: curl -s http://localhost:8002/api/v1/namespaces/default/services/cicd-sample:8080/proxy/ | grep -A 1 Counter You can run this command multiple times and watch the counter value increment each time. Close this second terminal tab. In the first tab, press Control+C to stop the proxy. You've now promoted and approved the production deployment. The application with your recent change is now running in production. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this deployment guide, either delete the project that contains the resources, or keep the project and delete the individual resources. Option 1: delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Option 2: delete the individual resources Delete the Cloud Deploy pipeline: gcloud deploy delivery-pipelines delete cicd-sample --region=us-central1 --force Delete the Cloud Build trigger: gcloud beta builds triggers delete cicd-sample-main Delete the staging and production clusters: gcloud container clusters delete staging gcloud container clusters delete prod Delete the repository in Cloud Source Repositories: gcloud source repos delete cicd-sample Delete the Cloud Storage buckets: gcloud storage rm -r gs://$(gcloud config get-value project)-gceme-artifacts/ gcloud storage rm -r gs://$(gcloud config get-value project)_clouddeploy/ Delete the repository in Artifact Registry: gcloud artifacts repositories delete cicd-sample-repo \ --location us-central1 What's next To learn how to deploy into a private GKE instance, see Deploying to a private cluster on a Virtual Private Cloud network. For information about how to implement, improve, and measure deployment automation, see Deployment automation. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(4).txt b/Deploy_the_architecture(4).txt new file mode 100644 index 0000000000000000000000000000000000000000..27176c9756c884ee36fe5dc3ae56693fc8a45f91 --- /dev/null +++ b/Deploy_the_architecture(4).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deploy-guacamole-gke/deployment +Date Scraped: 2025-02-23T11:47:59.887Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy Apache Guacamole on GKE and Cloud SQL Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-09 UTC This document describes how you deploy Apache Guacamole on GKE and Cloud SQL. 
These instructions are intended for server administrators and engineers who want to host Guacamole on GKE and Cloud SQL. The document assumes you are familiar with deploying workloads to Kubernetes and Cloud SQL for MySQL. We recommend that you be familiar with Identity and Access Management and Google Compute Engine as well. Architecture The following diagram shows how a Google Cloud load balancer is configured with IAP, to protect an instance of the Guacamole client running in GKE: The Guacamole client connects to the guacd backend service, which brokers remote desktop connections to one or more Compute Engine VMs. The scripts also deploy a Cloud SQL instance to manage configuration data for Guacamole. For details, see Apache Guacamole on GKE and Cloud SQL. Objectives Deploy the infrastructure by using Terraform. Create a Guacamole database in Cloud SQL. Deploy Guacamole to a GKE Cluster by using Skaffold. Test a connection to a VM through Guacamole. Costs In this document, you use the following billable components of Google Cloud: Compute Engine GKE Cloud SQL Artifact Registry To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Resource Manager, Service Usage, Artifact Registry, and Compute Engine APIs. Enable the APIs In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell Deploy the infrastructure In this section, you use Terraform to deploy the following resources: Virtual Private Cloud A firewall rule A GKE cluster An Artifact Registry repository Cloud SQL for MySQL A VM for managing the MySQL database Service accounts The Terraform configuration also enables the use of IAP in your project. In Cloud Shell, clone the GitHub repository: git clone https://github.com/GoogleCloudPlatform/guacamole-on-gcp.git Deploy the required infrastructure by using Terraform: cd guacamole-on-gcp/tf-infra unset GOOGLE_CLOUD_QUOTA_PROJECT terraform init -upgrade terraform apply Follow the instructions to enter your Google Cloud project ID. To approve Terraform's request to deploy resources to your project, enter yes. Deploying all resources takes several minutes to complete. Deploy the Guacamole database In this section, you create the Guacamole database and tables in Cloud SQL for MySQL, and populate the database with the administrator user information. In Cloud Shell, set environment variables and find the database root password: cd .. source bin/read-tf-output.sh Make a note of the database root password; you need it in the following steps. 
The script reads output variables from the Terraform run and sets the following environment variables, which are used throughout this procedure: CLOUD_SQL_INSTANCE ZONE REGION DB_MGMT_VM PROJECT_ID GKE_CLUSTER GUACAMOLE_URL SUBNET Copy the create-schema.sql and insert-admin-user.sql script files to the database management VM, and then connect to the VM: gcloud compute scp \ --tunnel-through-iap \ --zone=$ZONE \ create-schema.sql \ insert-admin-user.sql \ $DB_MGMT_VM: gcloud compute ssh $DB_MGMT_VM \ --zone=$ZONE \ --tunnel-through-iap A console session to the Database Management VM through Cloud Shell is now established. Install MySQL client tools: sudo apt-get update sudo apt-get install -y mariadb-client Connect to Cloud SQL and create the database. When prompted for a password, use the root password you noted earlier in this section. export CLOUD_SQL_PRIVATE_IP=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/cloud_sql_ip -H "Metadata-Flavor: Google") mysql -h $CLOUD_SQL_PRIVATE_IP -u root -p Grant the database user permissions over the newly created database: CREATE DATABASE guacamole; USE guacamole; GRANT SELECT,INSERT,UPDATE,DELETE ON guacamole.* TO 'guac-db-user'; FLUSH PRIVILEGES; SOURCE create-schema.sql; SOURCE insert-admin-user.sql; quit After the MySQL commands finish running, exit the VM SSH session: exit Deploy Guacamole to GKE by using Skaffold In this section, you deploy the Guacamole application to the GKE cluster, by using Skaffold. Skaffold handles the workflow for building, pushing, and deploying the Guacamole images to the GKE clusters. In Cloud Shell, deploy the GKE configuration by using terraform: cd tf-k8s terraform init -upgrade terraform apply -parallelism=1 Get credentials for the GKE cluster: gcloud container clusters get-credentials \ --region $REGION $GKE_CLUSTER Run Skaffold from the root of the cloned git repository: cd .. skaffold --default-repo $REGION-docker.pkg.dev/$PROJECT_ID/guac-repo run The Skaffold tool builds container images for Guacamole through Google Cloud Build (the command line includes a flag that specifies which repository to push the images to). The tool also runs a kustomize step to generate Kubernetes ConfigMaps and Secrets based on the output of the Terraform run. Verify that the certificate was provisioned: kubectl get -w managedcertificates/guacamole-client-cert \ -n guacamole \ -o jsonpath="{.spec.domains[0]} is {.status.domainStatus[0].status}" Provisioning the certificate can take up to 60 minutes to complete. Once the certificate is provisioned, you can visit your URL in a browser. View the URL from the terraform output: echo $GUACAMOLE_URL In a browser window, enter the URL that you got in the previous step. When IAP prompts you, sign in with your Google credentials. After you sign in, you are logged into Guacamole with administrative privileges, based on the insert-admin-user.sql script you ran previously in this procedure. Note: The OAuth configuration created by this procedure is set to internal. This means you must use a Google Account in the same organization as the one you used to deploy Guacamole in this procedure; otherwise, you receive an HTTP/403 org_internal error. If your browser session is already signed into a different Google Account, try connecting to the URL in an incognito mode tab. You can now add additional users based on their email address through the Guacamole user interface. For details, see Administration in the Guacamole documentation. 
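Each user you add this way also needs to be allowed through IAP in Google IAM, specifically with the IAP-secured Web App User role. A minimal sketch of that grant at the project level (the email address is a placeholder; roles/iap.httpsResourceAccessor is the role ID behind the IAP-secured Web App User display name, and you might prefer to scope the grant to the IAP-protected backend service rather than the whole project):
gcloud projects add-iam-policy-binding $PROJECT_ID --member="user:new-user@example.com" --role="roles/iap.httpsResourceAccessor"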
Without that IAM role, IAP blocks these additional users before their requests reach Guacamole. Test a connection to a VM After you deploy, configure, and successfully sign in to Guacamole, you can create a Windows VM and connect to the newly created VM through Guacamole. Create a VM In Cloud Shell, create a Windows VM to test connections to: export TEST_VM=windows-vm gcloud compute instances create $TEST_VM \ --project=$PROJECT_ID \ --zone=$ZONE \ --machine-type=n1-standard-1 \ --subnet=$SUBNET \ --no-address \ --image-family=windows-2019 \ --image-project=windows-cloud \ --boot-disk-size=50GB \ --boot-disk-type=pd-standard \ --shielded-secure-boot After running the command, you might need to wait a few minutes for Windows to finish initializing before you proceed to the next step. Reset the Windows password for the VM you just created: gcloud compute reset-windows-password $TEST_VM \ --user=admin \ --zone=$ZONE Add a new connection to the VM In a browser window, enter the Guacamole instance URL from Deploy Guacamole to GKE using Skaffold, and then sign in through IAP. In the Guacamole UI, click your username, and then click Settings. Under the Connections tab, click New Connection. In the Name field, enter a name for the connection. In the Location field, enter the location for the connection. From the Protocol drop-down list, select RDP. Under Network, in the Hostname field, enter the name of the VM you created, windows-vm. Your project DNS resolves this hostname to the instance's internal IP address. Note: If you choose to create your VM in a different zone than your Guacamole GKE cluster, you need to fully qualify the VM name. For details, see Internal DNS. In the Authentication section, set the following fields: Username: admin Password: the password you got when you reset the password for the VM Security mode: NLA (Network Level Authentication) Ignore server certificate: select the checkbox Compute Engine Windows VMs are provisioned with a self-signed certificate for Remote Desktop Services, so you need to instruct Guacamole to ignore certificate validation issues. Click Save. Click your username, and select Home. Click the connection you just created to test connectivity. After a few seconds, you should see the desktop of the VM instance. For more details on configuring Guacamole, see the Apache Guacamole Manual. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this procedure, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project.
Delete the new resources As an alternative to deleting the entire project, you can delete the individual resources created during this procedure. Note that the OAuth Consent Screen configuration cannot be removed from a project, only modified. In Cloud Shell, use terraform to delete the resources: cd ~/guacamole-on-gcp/tf-k8s terraform destroy cd ~/guacamole-on-gcp/tf-infra terraform destroy gcloud compute instances delete $TEST_VM --zone=$ZONE What's next Review the GKE guidance on Hardening your cluster's security. Review Encrypt secrets at the application layer to learn how to boost security for secrets, such as database credentials and OAuth credentials. Review IAM Conditions to learn how to provide more granular control over user access to Guacamole. Understand more about how IAP integration works by reviewing the custom authentication provider in the GitHub repository. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Author: Richard Grime | Principal Architect, UK Public Sector Other contributors: Aaron Lind | Solution Engineer, Application Innovation Eyal Ben Ivri | Cloud Solutions Architect Ido Flatow | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(5).txt b/Deploy_the_architecture(5).txt new file mode 100644 index 0000000000000000000000000000000000000000..2eded3696e3bb12adf2226d60cf78dd8c8791d6c --- /dev/null +++ b/Deploy_the_architecture(5).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/manage-and-scale-windows-networking/deployment +Date Scraped: 2025-02-23T11:48:30.226Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy Windows applications on managed Kubernetes Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-14 UTC This document describes how you deploy the reference architecture in Manage and scale networking for Windows applications that run on managed Kubernetes. These instructions are intended for cloud architects, network administrators, and IT professionals who are responsible for the design and management of Windows applications that run on Google Kubernetes Engine (GKE) clusters. Architecture The following diagram shows the reference architecture that you use when you deploy Windows applications that run on managed GKE clusters. As shown in the preceding diagram, an arrow represents the workflow for managing networking for Windows applications that run on GKE using Cloud Service Mesh and Envoy gateways. The regional GKE cluster includes both Windows and Linux node pools. Cloud Service Mesh creates and manages traffic routes to the Windows Pods. Objectives Create and set up a GKE cluster to run Windows applications and Envoy proxies. Deploy and verify the Windows applications. Configure Cloud Service Mesh as the control plane for the Envoy gateways. Use the Kubernetes Gateway API to provision the internal Application Load Balancer and expose the Envoy gateways. Understand the continual deployment operations you created. Costs Deployment of this architecture uses the following billable components of Google Cloud: Cloud Load Balancing Google Kubernetes Engine Cloud Service Mesh When you finish this deployment, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Cloud Shell and Cloud Service Mesh APIs. Enable the APIs In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell If running in a shared Virtual Private Cloud (VPC) environment, you also need to follow the instructions to manually create the proxy-only subnet and firewall rule for the Cloud Load Balancing responsiveness checks. Create a GKE cluster Use the following steps to create a GKE cluster. You use the GKE cluster to contain and run the Windows applications and Envoy proxies in this deployment. In Cloud Shell, run the following Google Cloud CLI command to create a regional GKE cluster with one node in each of the three zones: gcloud container clusters create my-cluster --enable-ip-alias \ --num-nodes=1 \ --release-channel stable \ --enable-dataplane-v2 \ --region us-central1 \ --scopes=cloud-platform \ --gateway-api=standard Add the Windows node pool to the GKE cluster: gcloud container node-pools create win-pool \ --cluster=my-cluster \ --image-type=windows_ltsc_containerd \ --no-enable-autoupgrade \ --region=us-central1 \ --num-nodes=1 \ --machine-type=n1-standard-2 \ --windows-os-version=ltsc2019 This operation might take around 20 minutes to complete. Store your Google Cloud project ID in an environment variable: export PROJECT_ID=$(gcloud config get project) Connect to the GKE cluster: gcloud container clusters get-credentials my-cluster --region us-central1 List all the nodes in the GKE cluster: kubectl get nodes The output should display three Linux nodes and three Windows nodes. After the GKE cluster is ready, you can deploy two Windows-based test applications. Deploy two test applications In this section, you deploy two Windows-based test applications. Both test applications print the hostname that the application runs on. You also create a Kubernetes Service to expose the application through standalone network endpoint groups (NEGs). When you deploy a Windows-based application and a Kubernetes Service on a regional cluster, it creates a NEG for each zone in which the application runs. Later, this deployment guide discusses how you can configure these NEGs as backends for Cloud Service Mesh services. In Cloud Shell, apply the following YAML file with kubectl to deploy the first test application. This command deploys three instances of the test application, one in each regional zone.
apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver-1 name: win-webserver-1 spec: replicas: 3 selector: matchLabels: app: win-webserver-1 template: metadata: labels: app: win-webserver-1 name: win-webserver-1 spec: containers: - name: windowswebserver image: k8s.gcr.io/e2e-test-images/agnhost:2.36 command: ["/agnhost"] args: ["netexec", "--http-port", "80"] topologySpreadConstraints: - maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app: win-webserver-1 nodeSelector: kubernetes.io/os: windows Apply the matching Kubernetes Service and expose it with a NEG: apiVersion: v1 kind: Service metadata: name: win-webserver-1 annotations: cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "win-webserver-1"}}}' spec: type: ClusterIP selector: app: win-webserver-1 ports: - name: http protocol: TCP port: 80 targetPort: 80 Verify the deployment: kubectl get pods The output shows that the application has three running Windows Pods. NAME READY STATUS RESTARTS AGE win-webserver-1-7bb4c57f6d-hnpgd 1/1 Running 0 5m58s win-webserver-1-7bb4c57f6d-rgqsb 1/1 Running 0 5m58s win-webserver-1-7bb4c57f6d-xp7ww 1/1 Running 0 5m58s Verify that the Kubernetes Service was created: $ kubectl get svc The output resembles the following: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.64.0.1 443/TCP 58m win-webserver-1 ClusterIP 10.64.6.20 80/TCP 3m35s Run the describe command for kubectl to verify that corresponding NEGs were created for the Kubernetes Service in each of the zones in which the application runs: $ kubectl describe service win-webserver-1 The output resembles the following: Name: win-webserver-1 Namespace: default Labels: Annotations: cloud.google.com/neg: {"exposed_ports": {"80":{"name": "win-webserver-1"}}} cloud.google.com/neg-status: {"network_endpoint_groups":{"80":"win-webserver-1"},"zones":["us-central1-a","us-central1-b","us-central1-c"]} Selector: app=win-webserver-1 Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.64.6.20 IPs: 10.64.6.20 Port: http 80/TCP TargetPort: 80/TCP Endpoints: 10.60.3.5:80,10.60.4.5:80,10.60.5.5:80 Session Affinity: None Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Create 4m25s neg-controller Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-a". Normal Create 4m18s neg-controller Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-b". Normal Create 4m11s neg-controller Created NEG "win-webserver-1" for default/win-webserver-1-win-webserver-1-http/80-80-GCE_VM_IP_PORT-L7 in "us-central1-c". Normal Attach 4m9s neg-controller Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-a") Normal Attach 4m8s neg-controller Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-c") Normal Attach 4m8s neg-controller Attach 1 network endpoint(s) (NEG "win-webserver-1" in zone "us-central1-b") The output from the preceding command shows you that a NEG was created for each zone. 
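Before wiring these NEGs into Cloud Service Mesh, you can optionally confirm that the Service responds from inside the cluster. A minimal sketch that uses a throwaway Linux Pod (the busybox image is an assumption, and the /hostName path simply mirrors the verification requests used later in this guide):
kubectl run neg-smoke-test --image=busybox --rm -it --restart=Never -- wget -qO- http://win-webserver-1/hostName
The command should print the hostname of one of the win-webserver-1 Pods and then delete the temporary Pod.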
Optional: Use gcloud CLI to verify that the NEGs were created: gcloud compute network-endpoint-groups list The output is as follows: NAME LOCATION ENDPOINT_TYPE SIZE win-webserver-1 us-central1-a GCE_VM_IP_PORT 1 win-webserver-1 us-central1-b GCE_VM_IP_PORT 1 win-webserver-1 us-central1-c GCE_VM_IP_PORT 1 To deploy the second test application, apply the following YAML file: apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver-2 name: win-webserver-2 spec: replicas: 3 selector: matchLabels: app: win-webserver-2 template: metadata: labels: app: win-webserver-2 name: win-webserver-2 spec: containers: - name: windowswebserver image: k8s.gcr.io/e2e-test-images/agnhost:2.36 command: ["/agnhost"] args: ["netexec", "--http-port", "80"] topologySpreadConstraints: - maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: app: win-webserver-2 nodeSelector: kubernetes.io/os: windows Create the corresponding Kubernetes Service: apiVersion: v1 kind: Service metadata: name: win-webserver-2 annotations: cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "win-webserver-2"}}}' spec: type: ClusterIP selector: app: win-webserver-2 ports: - name: http protocol: TCP port: 80 targetPort: 80 Verify the application deployment: kubectl get pods Check the output and verify that there are three running Pods. Verify that the Kubernetes Service and three NEGs were created: kubectl describe service win-webserver-2 Configure Cloud Service Mesh In this section, Cloud Service Mesh is configured as the control plane for the Envoy gateways. You map the Envoy gateways to the relevant Cloud Service Mesh routing configuration by specifying the scope_name parameter. The scope_name parameter lets you configure different routing rules for the different Envoy gateways. In Cloud Shell, create a firewall rule that allows incoming traffic from the Google services that are checking application responsiveness: gcloud compute firewall-rules create allow-health-checks \ --network=default \ --direction=INGRESS \ --action=ALLOW \ --rules=tcp \ --source-ranges="35.191.0.0/16,130.211.0.0/22,209.85.152.0/22,209.85.204.0/22" Check the responsiveness of the first application: gcloud compute health-checks create http win-app-1-health-check \ --enable-logging \ --request-path="/healthz" \ --use-serving-port Check the responsiveness of the second application: gcloud compute health-checks create http win-app-2-health-check \ --enable-logging \ --request-path="/healthz" \ --use-serving-port Create a Cloud Service Mesh backend service for the first application: gcloud compute backend-services create win-app-1-service \ --global \ --load-balancing-scheme=INTERNAL_SELF_MANAGED \ --port-name=http \ --health-checks win-app-1-health-check Create a Cloud Service Mesh backend service for the second application: gcloud compute backend-services create win-app-2-service \ --global \ --load-balancing-scheme=INTERNAL_SELF_MANAGED \ --port-name=http \ --health-checks win-app-2-health-check Add the NEGs you created previously. These NEGs are associated with the first application you created as a backend to the Cloud Service Mesh backend service. This code sample adds one NEG for each zone in the regional cluster you created. 
BACKEND_SERVICE=win-app-1-service APP1_NEG_NAME=win-webserver-1 MAX_RATE_PER_ENDPOINT=10 gcloud compute backend-services add-backend $BACKEND_SERVICE \ --global \ --network-endpoint-group $APP1_NEG_NAME \ --network-endpoint-group-zone us-central1-b \ --balancing-mode RATE \ --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT gcloud compute backend-services add-backend $BACKEND_SERVICE \ --global \ --network-endpoint-group $APP1_NEG_NAME \ --network-endpoint-group-zone us-central1-a \ --balancing-mode RATE \ --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT gcloud compute backend-services add-backend $BACKEND_SERVICE \ --global \ --network-endpoint-group $APP1_NEG_NAME \ --network-endpoint-group-zone us-central1-c \ --balancing-mode RATE \ --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT Add additional NEGs. These NEGs are associated with the second application you created as a backend to the Cloud Service Mesh backend service. This code sample adds one NEG for each zone in the regional cluster you created. BACKEND_SERVICE=win-app-2-service APP2_NEG_NAME=win-webserver-2 gcloud compute backend-services add-backend $BACKEND_SERVICE \ --global \ --network-endpoint-group $APP2_NEG_NAME \ --network-endpoint-group-zone us-central1-b \ --balancing-mode RATE \ --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT gcloud compute backend-services add-backend $BACKEND_SERVICE \ --global \ --network-endpoint-group $APP2_NEG_NAME \ --network-endpoint-group-zone us-central1-a \ --balancing-mode RATE \ --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT gcloud compute backend-services add-backend $BACKEND_SERVICE \ --global \ --network-endpoint-group $APP2_NEG_NAME \ --network-endpoint-group-zone us-central1-c \ --balancing-mode RATE \ --max-rate-per-endpoint $MAX_RATE_PER_ENDPOINT Configure additional Cloud Service Mesh resources Now that you've configured the Cloud Service Mesh services, you need to configure two additional resources to complete your Cloud Service Mesh setup. First, these steps show how to configure a Gateway resource. A Gateway resource is a virtual resource that's used to generate Cloud Service Mesh routing rules. Cloud Service Mesh routing rules are used to configure Envoy proxies as gateways. Next, the steps show how to configure an HTTPRoute resource for each of the backend services. The HTTPRoute resource maps HTTP requests to the relevant backend service. In Cloud Shell, create a YAML file called gateway.yaml that defines the Gateway resource: cat <<EOF > gateway.yaml name: gateway80 scope: gateway-proxy ports: - 8080 type: OPEN_MESH EOF Create the Gateway resource by importing the gateway.yaml file: gcloud network-services gateways import gateway80 \ --source=gateway.yaml \ --location=global The Gateway name will be projects/$PROJECT_ID/locations/global/gateways/gateway80. You use this Gateway name when you create HTTPRoutes for each backend service.
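Optionally, before creating the HTTPRoutes, you can confirm that the import succeeded. A small sketch, assuming the describe subcommand is available in your version of the gcloud CLI:
gcloud network-services gateways describe gateway80 --location=global
The output should echo the port and the gateway-proxy scope that you defined in gateway.yaml.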
Create the HTTPRoutes for each backend service: In Cloud Shell, store your Google Cloud project ID in an environment variable: export PROJECT_ID=$(gcloud config get project) Create the HTTPRoute YAML file for the first application: cat <<EOF > win-app-1-route.yaml name: win-app-1-http-route hostnames: - win-app-1 gateways: - projects/$PROJECT_ID/locations/global/gateways/gateway80 rules: - action: destinations: - serviceName: "projects/$PROJECT_ID/locations/global/backendServices/win-app-1-service" EOF Create the HTTPRoute resource for the first application: gcloud network-services http-routes import win-app-1-http-route \ --source=win-app-1-route.yaml \ --location=global Create the HTTPRoute YAML file for the second application: cat <<EOF > win-app-2-route.yaml name: win-app-2-http-route hostnames: - win-app-2 gateways: - projects/$PROJECT_ID/locations/global/gateways/gateway80 rules: - action: destinations: - serviceName: "projects/$PROJECT_ID/locations/global/backendServices/win-app-2-service" EOF Create the HTTPRoute resource for the second application: gcloud network-services http-routes import win-app-2-http-route \ --source=win-app-2-route.yaml \ --location=global Deploy and expose the Envoy gateways After you create the two Windows-based test applications and the Cloud Service Mesh, you deploy the Envoy gateways by creating a deployment YAML file. The deployment YAML file accomplishes the following tasks: Bootstraps the Envoy gateways. Configures the Envoy gateways to use Cloud Service Mesh as their control plane. Configures the Envoy gateways to use HTTPRoutes for the gateway named Gateway80. Deploys two replica Envoy gateways. This approach helps to make the gateways fault tolerant and provides redundancy. To automatically scale the Envoy gateways based on load, you can optionally configure a Horizontal Pod Autoscaler. If you decide to configure a Horizontal Pod Autoscaler, you must follow the instructions in Configuring horizontal Pod autoscaling. In Cloud Shell, create a YAML file: apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: td-envoy-gateway name: td-envoy-gateway spec: replicas: 2 selector: matchLabels: app: td-envoy-gateway template: metadata: creationTimestamp: null labels: app: td-envoy-gateway spec: containers: - name: envoy image: envoyproxy/envoy:v1.21.6 imagePullPolicy: Always resources: limits: cpu: "2" memory: 1Gi requests: cpu: 100m memory: 128Mi env: - name: ENVOY_UID value: "1337" volumeMounts: - mountPath: /etc/envoy name: envoy-bootstrap initContainers: - name: td-bootstrap-writer image: gcr.io/trafficdirector-prod/xds-client-bootstrap-generator imagePullPolicy: Always args: - --project_number='my_project_number' - --scope_name='gateway-proxy' - --envoy_port=8080 - --bootstrap_file_output_path=/var/lib/data/envoy.yaml - --traffic_director_url=trafficdirector.googleapis.com:443 - --expose_stats_port=15005 volumeMounts: - mountPath: /var/lib/data name: envoy-bootstrap volumes: - name: envoy-bootstrap emptyDir: {} Replace my_project_number with your project number. You can find your project number by running the following command: gcloud projects describe $(gcloud config get project) --format="value(projectNumber)" Port 15005 is used to expose the Envoy Admin endpoint named /stats. It's also used for the following purposes: As a responsiveness endpoint from the internal Application Load Balancer. As a way to consume Google Cloud Managed Service for Prometheus metrics from Envoy.
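The Deployment manifest needs to be applied before you continue, and the next step assumes its two Pods are up. A minimal sketch, assuming you saved the manifest as td-envoy-gateway.yaml (the file name is an assumption):
kubectl apply -f td-envoy-gateway.yaml
kubectl get pods -l app=td-envoy-gateway
The second command should eventually list two td-envoy-gateway Pods in the Running state.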
When the two Envoy Gateway Pods are running, create a service of type ClusterIP to expose them. You must also create a YAML file called BackendConfig. BackendConfig defines a non-standard responsiveness check. That check is used to verify the responsiveness of the Envoy gateways. To create the backend configuration with a non-standard responsiveness check, create a YAML file called envoy-backendconfig: apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: envoy-backendconfig spec: healthCheck: checkIntervalSec: 5 timeoutSec: 5 healthyThreshold: 2 unhealthyThreshold: 3 type: HTTP requestPath: /stats port: 15005 The responsiveness check will use the /stats endpoint on port 15005 to continuously check the responsiveness of the Envoy gateways. Create the Envoy gateways service: apiVersion: v1 kind: Service metadata: name: td-envoy-gateway annotations: cloud.google.com/backend-config: '{"default": "envoy-backendconfig"}' spec: type: ClusterIP selector: app: td-envoy-gateway ports: - name: http protocol: TCP port: 8080 targetPort: 8080 - name: stats protocol: TCP port: 15005 targetPort: 15005 View the Envoy gateways service you created: kubectl get svc td-envoy-gateway Create the Kubernetes Gateway resource Creating the Kubernetes Gateway resource provisions the internal Application Load Balancer to expose the Envoy gateways. Before creating that resource, you must create two sample self-signed certificates and then import them into the GKE cluster as Kubernetes Secrets. The certificates enable the following gateway architecture: Each application is served over HTTPS. Each application uses a dedicated certificate. When using self-managed certificates, the internal Application Load Balancer can use up to the maximum limit of certificates to expose applications with different fully qualified domain names. To create the certificates use openssl. 
In Cloud Shell, generate a configuration file for the first certificate: cat <<EOF > CONFIG_FILE [req] default_bits = 2048 req_extensions = extension_requirements distinguished_name = dn_requirements prompt = no [extension_requirements] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @sans_list [dn_requirements] 0.organizationName = example commonName = win-webserver-1.example.com [sans_list] DNS.1 = win-webserver-1.example.com EOF Generate a private key for the first certificate: openssl genrsa -out sample_private_key 2048 Generate a certificate request: openssl req -new -key sample_private_key -out CSR_FILE -config CONFIG_FILE Sign and generate the first certificate: openssl x509 -req -signkey sample_private_key -in CSR_FILE -out sample.crt -extfile CONFIG_FILE -extensions extension_requirements -days 90 Generate a configuration file for the second certificate: cat <<EOF > CONFIG_FILE2 [req] default_bits = 2048 req_extensions = extension_requirements distinguished_name = dn_requirements prompt = no [extension_requirements] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment subjectAltName = @sans_list [dn_requirements] 0.organizationName = example commonName = win-webserver-2.example.com [sans_list] DNS.1 = win-webserver-2.example.com EOF Generate a private key for the second certificate: openssl genrsa -out sample_private_key2 2048 Generate a certificate request: openssl req -new -key sample_private_key2 -out CSR_FILE2 -config CONFIG_FILE2 Sign and generate the second certificate: openssl x509 -req -signkey sample_private_key2 -in CSR_FILE2 -out sample2.crt -extfile CONFIG_FILE2 -extensions extension_requirements -days 90 Import certificates as Kubernetes Secrets In this section, you accomplish the following tasks: Import the self-signed certificates into the GKE cluster as Kubernetes Secrets. Create a static IP address for an internal VPC. Create the Kubernetes Gateway API resource. Verify that the certificates work. In Cloud Shell, import the first certificate as a Kubernetes Secret: kubectl create secret tls sample-cert --cert sample.crt --key sample_private_key Import the second certificate as a Kubernetes Secret: kubectl create secret tls sample-cert-2 --cert sample2.crt --key sample_private_key2 To enable internal Application Load Balancer, create a static IP address on the internal VPC: gcloud compute addresses create sample-ingress-ip --region us-central1 --subnet default Create the Kubernetes Gateway API resource YAML file: kind: Gateway apiVersion: gateway.networking.k8s.io/v1beta1 metadata: name: internal-https spec: gatewayClassName: gke-l7-rilb addresses: - type: NamedAddress value: sample-ingress-ip listeners: - name: https protocol: HTTPS port: 443 tls: mode: Terminate certificateRefs: - name: sample-cert - name: sample-cert-2 By default, a Kubernetes Gateway has no default routes. The gateway returns a page not found (404) error when requests are sent to it. Configure a default route YAML file for the Kubernetes Gateway that passes all incoming requests to the Envoy gateways: kind: HTTPRoute apiVersion: gateway.networking.k8s.io/v1beta1 metadata: name: envoy-default-backend spec: parentRefs: - kind: Gateway name: internal-https rules: - backendRefs: - name: td-envoy-gateway port: 8080 Verify the full flow by sending HTTP requests to both applications. To verify that the Envoy gateways route traffic to the correct application Pods, inspect the HTTP Host header.
Find and store the Kubernetes Gateway IP address in an environment variable: export EXTERNAL_IP=$(kubectl get gateway internal-https -o json | jq .status.addresses[0].value -r) Send a request to the first application: curl --insecure -H "Host: win-app-1" https://$EXTERNAL_IP/hostName Send a request to the second application: curl --insecure -H "Host: win-app-2" https://$EXTERNAL_IP/hostName Verify that the hostname returned from the request matches the Pods running win-app-1 and win-app-2: kubectl get pods The output should display win-app-1 and win-app-2. Monitor Envoy gateways Monitor your Envoy gateways with Google Cloud Managed Service for Prometheus. Google Cloud Managed Service for Prometheus should be enabled by default on the cluster that you created earlier. In Cloud Shell, create a PodMonitoring resource by applying the following YAML file: apiVersion: monitoring.googleapis.com/v1 kind: PodMonitoring metadata: name: prom-envoy spec: selector: matchLabels: app: td-envoy-gateway endpoints: - port: 15005 interval: 30s path: /stats/prometheus After applying the YAML file, the system begins to collect Google Cloud Managed Service for Prometheus metrics in a dashboard. To create the Google Cloud Managed Service for Prometheus metrics dashboard, follow these instructions: Sign in to the Google Cloud console. Open the menu menu. Click Operations > Monitoring > Dashboards. To import the dashboard, follow these instructions: On the Dashboards screen, click Sample Library. Enter envoy in the filter box. Click Istio Envoy Prometheus Overview. Select the checkbox. Click Import and then click Confirm to import the dashboard. To view the dashboard, follow these instructions: Click Dashboard List. Select Integrations. Click Istio Envoy Prometheus Overview to view the dashboard. You can now see the most important metrics of your Envoy gateways. You can also configure alerts based on your criteria. Before you clean up, send a few more test requests to the applications and see how the dashboard updates with the latest metrics. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this deployment, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. 
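Delete the individual resources If you want to keep the project, you can instead remove the resources that this deployment created. The following is a sketch rather than an exhaustive list, assuming the resource names used earlier in this guide and that the network-services delete subcommands are available in your version of the gcloud CLI; deletion order matters because routes and backend services reference other resources:
gcloud network-services http-routes delete win-app-1-http-route --location=global
gcloud network-services http-routes delete win-app-2-http-route --location=global
gcloud network-services gateways delete gateway80 --location=global
gcloud compute backend-services delete win-app-1-service --global
gcloud compute backend-services delete win-app-2-service --global
gcloud compute health-checks delete win-app-1-health-check win-app-2-health-check
gcloud compute firewall-rules delete allow-health-checks
gcloud container clusters delete my-cluster --region us-central1
gcloud compute addresses delete sample-ingress-ip --region us-central1
Deleting the GKE cluster also removes the test applications, the Envoy gateways, and the Kubernetes Gateway resources that run inside it.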
What's next Learn more about the Google Cloud products used in this deployment guide: GKE networking best practices Best practices for running cost effective Kubernetes applications on GKE Cloud Service Mesh control plane observability Using self-managed SSL certificates with load balancers For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Eitan Eibschutz | Staff Technical Solutions ConsultantOther contributors: John Laham | Solutions ArchitectKaslin Fields | Developer AdvocateMaridi (Raju) Makaraju | Supportability Tech LeadValavan Rajakumar | Key Enterprise ArchitectVictor Moreno | Product Manager, Cloud Networking Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(6).txt b/Deploy_the_architecture(6).txt new file mode 100644 index 0000000000000000000000000000000000000000..5b41c5d03489454e9a4e2251133eea41a78ceca6 --- /dev/null +++ b/Deploy_the_architecture(6).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/scalable-bigquery-backup-automation/deployment +Date Scraped: 2025-02-23T11:49:09.036Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy scalable BigQuery backup automation Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-17 UTC This document describes how you deploy Scalable BigQuery backup automation. This document is intended for cloud architects, engineers, and data governance officers who want to define and automate data policies in their organizations. Experience with Terraform is helpful. Architecture The following diagram shows the automated backup architecture: Cloud Scheduler triggers the run. The dispatcher service, using BigQuery API, lists the in-scope tables. Through a Pub/Sub message, the dispatcher service submits one request for each table to the configurator service. The configurator service determines the backup policies for the tables, and then submits one request for each table to the relevant Cloud Run service. The Cloud Run service then submits a request to the BigQuery API and runs the backup operations. Pub/Sub triggers the tagger service, which logs the results and updates the backup state in the Cloud Storage metadata layer. For details about the architecture, see Scalable BigQuery backup automation. Objectives Build Cloud Run services. Configure Terraform variables. Run the Terraform and manual deployment scripts. Run the solution. Costs In this document, you use the following billable components of Google Cloud: BigQuery Pub/Sub Cloud Logging Cloud Run Cloud Storage Cloud Scheduler Firestore in Datastore mode (Datastore) To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin If you're re-deploying the solution, you can skip this section (for example, after new commits). In this section, you create one-time resources. In one of the following development environments, set up the gcloud CLI: Cloud Shell: to use an online terminal with the gcloud CLI already set up, activate Cloud Shell. Local shell: to use a local development environment, install and initialize the gcloud CLI. 
If you want to create a new Google Cloud project to use as the host project for the deployment, use the gcloud projects create command: gcloud projects create PROJECT_ID Replace PROJECT_ID with the ID of the project you want to create. Install Maven: Download Maven. In Cloud Shell, add Maven to PATH: export PATH=/DOWNLOADED_MAVEN_DIR/bin:$PATH In Cloud Shell, clone the GitHub repository: git clone https://github.com/GoogleCloudPlatform/bq-backup-manager.git Set and export the following environment variables: export PROJECT_ID=PROJECT_ID export TF_SA=bq-backup-mgr-terraform export COMPUTE_REGION=COMPUTE_REGION export DATA_REGION=DATA_REGION export BUCKET_NAME=${PROJECT_ID}-bq-backup-mgr export BUCKET=gs://${BUCKET_NAME} export DOCKER_REPO_NAME=docker-repo export CONFIG=bq-backup-manager export ACCOUNT=ACCOUNT_EMAIL gcloud config configurations create $CONFIG gcloud config set project $PROJECT_ID gcloud config set account $ACCOUNT gcloud config set compute/region $COMPUTE_REGION gcloud auth login gcloud auth application-default login Replace the following: PROJECT_ID: the ID of the Google Cloud host project that you want to deploy the solution to. COMPUTE_REGION: the Google Cloud region where you want to deploy compute resources like Cloud Run and Identity and Access Management (IAM). DATA_REGION: the Google Cloud region you want to deploy data resources (such as buckets and datasets) to. ACCOUNT_EMAIL: the user account email address. Enable the APIs: ./scripts/enable_gcp_apis.sh The script enables the following APIs: Cloud Resource Manager API IAM API Data Catalog API Artifact Registry API BigQuery API Pub/Sub API Cloud Storage API Cloud Run Admin API Cloud Build API Service Usage API App Engine Admin API Serverless VPC Access API Cloud DNS API Prepare the Terraform state bucket: gsutil mb -p $PROJECT_ID -l $COMPUTE_REGION -b on $BUCKET Prepare the Terraform service account: ./scripts/prepare_terraform_service_account.sh To publish images that this solution uses, prepare a Docker repository: gcloud artifacts repositories create $DOCKER_REPO_NAME --repository-format=docker \ --location=$COMPUTE_REGION \ --description="Docker repository for backups" Deploy the infrastructure Make sure that you've completed Before you begin at least once. In this section, follow the steps to deploy or redeploy the latest codebase to the Google Cloud environment. Activate the gcloud CLI configuration In Cloud Shell, activate and authenticate the gcloud CLI configuration: gcloud config configurations activate $CONFIG gcloud auth login gcloud auth application-default login Build Cloud Run services images In Cloud Shell, build and deploy docker images to be used by the Cloud Run service: export DISPATCHER_IMAGE=${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-dispatcher-service:latest export CONFIGURATOR_IMAGE=${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-configurator-service:latest export SNAPSHOTER_BQ_IMAGE=${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-snapshoter-bq-service:latest export SNAPSHOTER_GCS_IMAGE=${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-snapshoter-gcs-service:latest export TAGGER_IMAGE=${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-tagger-service:latest ./scripts/deploy_services.sh Configure Terraform variables This deployment uses Terraform for configurations and a deployment script. 
In Cloud Shell, create a new Terraform TFVARS file in which you can override the variables in this section: export VARS=FILENAME .tfvars Replace FILENAME with the name of the variables file that you created (for example, my-variables). You can use the example-variables file as a reference. In the TFVARS file, configure the project variables: project = "PROJECT_ID" compute_region = "COMPUTE_REGION" data_region = "DATA_REGION" You can use the default values that are defined in the variables.tf file or change the values. Configure the Terraform service account, which you created and prepared earlier in Before you begin: terraform_service_account = "bq-backup-mgr-terraform@PROJECT_ID.iam.gserviceaccount.com" Make sure that you use the full email address of the account that you created. Configure the Cloud Run services to use the container images that you built and deployed earlier: dispatcher_service_image = "${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-dispatcher-service:latest" configurator_service_image = "${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-configurator-service:latest" snapshoter_bq_service_image = "${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-snapshoter-bq-service:latest" snapshoter_gcs_service_image = "${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-snapshoter-gcs-service:latest" tagger_service_image = "${COMPUTE_REGION}-docker.pkg.dev/${PROJECT_ID}/${DOCKER_REPO_NAME}/bqsm-tagger-service:latest" This script instructs Terraform to use these published images in the Cloud Run services, which Terraform creates later. Terraform only links a Cloud Run service to an existing image. It doesn't build the images from the codebase, because that was completed in a previous step. In the schedulers variable, define at least one scheduler. The scheduler periodically lists and checks tables for required backups, based on their table-level backup cron schedules. { name = "SCHEDULER_NAME" cron = "SCHEDULER_CRON" payload = { is_force_run = FORCE_RUN is_dry_run = DRY_RUN folders_include_list = [FOLDERS_INCLUDED] projects_include_list = [PROJECTS_INCLUDED] projects_exclude_list = [PROJECTS_EXCLUDED] datasets_include_list = [DATASETS_INCLUDED] datasets_exclude_list = [DATASETS_EXCLUDED] tables_include_list = [TABLES_INCLUDED] tables_exclude_list = [TABLES_EXCLUDED] } } Replace the following: SCHEDULER_NAME: the display name of the Cloud Scheduler. SCHEDULER_CRON: the frequency with which the scheduler checks whether a backup is due for the in-scope tables, based on their individual backup schedules. This can be any unix-cron compatible string. For example, 0 * * * * is an hourly frequency. FORCE_RUN: a boolean value. Set the value to false if you want the scheduler to use the tables' cron schedules. If set to true, all in-scope tables are backed up, regardless of their cron setting. DRY_RUN: a boolean value. When set to true, no actual backup operations take place. Only log messages are generated. Use true when you want to test and debug the solution without incurring backup costs. FOLDERS_INCLUDED: a list of numerical IDs for folders that contain BigQuery data (for example, 1234, 456). When set, the solution backs up the tables in the specified folders, and ignores the projects_include_list, datasets_include_list, and tables_include_list field settings. PROJECTS_INCLUDED: a list of project names (for example, "project1", "project2"). 
When set, the solution backs up the tables in the specified projects, and ignores the datasets_include_list and tables_include_list field settings. This setting is ignored if you set the folders_include_list field. PROJECTS_EXCLUDED: a list of project names or regular expression (for example, "project1", "regex:^test_"). When set, the solution does not take backups of the tables in the specified projects. You can use this setting in combination with the folders_include_list field. DATASETS_INCLUDED: a list of datasets (for example, "project1.dataset1", "project1.dataset2"). When set, the solution backs up the tables in the specified datasets, and ignores the tables_include_list field setting. This setting is ignored if you set the folders_include_list or projects_include_list fields. DATASETS_EXCLUDED: a list of datasets or regular expression (for example, "project1.dataset1", "regex:.*\\_landing$"). When set, the solution does not take backups of the tables in the specified datasets. You can use this setting in combination with the folders_include_list or projects_include_list fields. TABLES_INCLUDED: a list of tables (for example, "project1.dataset1.table 1", "project1.dataset2.table2"). When set, the solution backs up the specified tables. This setting is ignored if you set the folders_include_list, projects_include_list, or datasets_include_list fields. TABLES_EXCLUDED: a list of tables or regular expression (for example, "project1.dataset1.table 1", "regex:.*\_test"). When set, the solution does not take backups of the specified tables. You can use this setting in combination with the folders_include_list, projects_include_list, or datasets_include_list fields. All exclusion lists accept regular expressions in the form regex:REGULAR_EXPRESSION. If the fully qualified entry name (for example, "project.dataset.table") matches any of the supplied regular expression, it's excluded from the backup scope. The following are some common use cases: Exclude all dataset names that end with _landing: datasets_exclude_list = ["regex:.*\\_landing$"] Exclude all tables ending with _test, _tst, _bkp, or _copy: tables_exclude_list = ["regex:.*\_(test|tst|bkp|copy)"] Define fallback policies On each run, the solution needs to determine the backup policy of each in-scope table. For more information about the types of policies, see Backup policies. This section shows you how to define a fallback policy. A fallback policy is defined with a default_policy variable and a set of exceptions or overrides on different levels (folder, project, dataset, and table). This approach provides granular flexibility without the need for an entry for each table. There are additional sets of policy fields, depending on the backup method that you decide to use: BigQuery snapshots, exports to Cloud Storage, or both. In the TFVARS file, for the default_policy variable, set the following common fields for the default policy: fallback_policy = { "default_policy" : { "backup_cron" : "BACKUP_CRON" "backup_method" : "BACKUP_METHOD", "backup_time_travel_offset_days" : "OFFSET_DAYS", "backup_storage_project" : "BACKUP_STORAGE_PROJECT", "backup_operation_project" : "BACKUP_OPERATIONS_PROJECT", Replace the following: BACKUP_CRON: a cron expression to set the frequency with which a table is backed up (for example, for backups every 6 hours, specify 0 0 */6 * * *). This must be a Spring-Framework compatible cron expression. 
BACKUP_METHOD: the method, which you specify as BigQuery Snapshot, GCS Snapshot (to use the export to Cloud Storage method), or Both. You need to provide the required fields for each chosen backup method, as shown later. OFFSET_DAYS: the number of days in the past that determines the point in time from which to back up the tables. Values can be a number between 0 and 7. BACKUP_STORAGE_PROJECT: the ID of the project where all snapshot and export operations are stored. This is the same project where the bq_snapshot_storage_dataset and gcs_snapshot_storage_location resides. Small deployments can use the host project, but large scale deployments should use a separate project. BACKUP_OPERATIONS_PROJECT: an optional setting, where you specify the ID of the project where all snapshot and export operations run. Snapshot and export job quotas and limits are applicable to this project. This can be the same value as backup_storage_project. If not set, the solution uses the source table's project. If you specified BigQuery Snapshot or Both as the backup_method, add the following fields after the common fields, in the default_policy variable: "bq_snapshot_expiration_days" : "SNAPSHOT_EXPIRATION", "bq_snapshot_storage_dataset" : "DATASET_NAME", Replace the following: SNAPSHOT_EXPIRATION: the number of days to keep each snapshot (for example, 15). DATASET_NAME: the name of the dataset to store snapshots in (for example, backups). The dataset must already exist in the project specified for backup_storage_project. If you specified GCS Snapshot (to use the export to Cloud Storage method) or Both as the backup_method, add the following fields to the default_policy variable: "gcs_snapshot_storage_location" : "STORAGE_BUCKET", "gcs_snapshot_format" : "FILE_FORMAT", "gcs_avro_use_logical_types" : AVRO_TYPE, "gcs_csv_delimiter" : "CSV_DELIMITER", "gcs_csv_export_header" : CSV_EXPORT_HEADER Replace the following: STORAGE_BUCKET: the Cloud Storage bucket in which to store the exported data, in the format gs://bucket/path/. For example, gs://bucket1/backups/. FILE_FORMAT: the file format and compression used to export a BigQuery table to Cloud Storage. Available values are CSV, CSV_GZIP, JSON, JSON_GZIP, AVRO, AVRO_DEFLATE, AVRO_SNAPPY, PARQUET, PARQUET_SNAPPY, and PARQUET_GZIP. AVRO_TYPE: a boolean value. If set to false, the BigQuery types are exported as strings. If set to true, the types are exported as their corresponding Avro logical type. This field is required when the gcs_snapshot_format is any Avro type format. CSV_DELIMITER: the delimiter used for the exported CSV files, and the value can be any ISO-8859-1 single-byte character. You can use \t or tab to specify tab delimiters. This field is required when the gcs_snapshot_format is any CSV type format. CSV_EXPORT_HEADER: a boolean value. If set to true, the column headers are exported to the CSV files. This field is required when the gcs_snapshot_format is any CSV type format. 
For details and Avro type mapping, see the following table: BigQuery Type Avro Logical Type TIMESTAMP timestamp-micros (annotates Avro LONG) DATE date (annotates Avro INT) TIME timestamp-micro (annotates Avro LONG) DATETIME STRING (custom named logical type datetime) Add override variables for specific folders, projects, datasets, and tables: }, "folder_overrides" : { "FOLDER_NUMBER" : { }, }, "project_overrides" : { "PROJECT_NAME" : { } }, "dataset_overrides" : { "PROJECT_NAME.DATASET_NAME" : { } }, "table_overrides" : { "PROJECT_NAME.DATASET_NAME.TABLE_NAME" : { } } } Replace the following: FOLDER_NUMBER: specify the folder for which you want to set override fields. PROJECT_NAME: specify the project when you set override fields for a particular project, dataset, or table. DATASET_NAME: specify the dataset when you set override fields for a particular dataset or table. TABLE_NAME: specify the table for which you want to set override fields. For each override entry, such as a specific project in the project_overrides variable, add the common fields and the required fields for the backup method that you specified earlier in default_policy. If you don't want to set overrides for a particular level, set that variable to an empty map (for example, project_overrides : {}). In the following example, override fields are set for a specific table that uses the BigQuery snapshot method: }, "project_overrides" : {}, "table_overrides" : { "example_project1.dataset1.table1" : { "backup_cron" : "0 0 */5 * * *", # every 5 hours each day "backup_method" : "BigQuery Snapshot", "backup_time_travel_offset_days" : "7", "backup_storage_project" : "project name", "backup_operation_project" : "project name", # bq settings "bq_snapshot_expiration_days" : "14", "bq_snapshot_storage_dataset" : "backups2" }, } } For a full example of a fallback policy, see the example-variables file. Configure additional backup operation projects If you want to specify additional backup projects, such as those defined in external configurations (table-level backup policy) or the table source projects, configure the following variable: additional_backup_operation_projects = [ADDITIONAL_BACKUPS] Replace ADDITIONAL_BACKUPS with a comma-separated list of project names (for example, "project1", "project2"). If you're using only the fallback backup policy without table-level external policies, you can set the value to an empty list. If you don't add this field, any projects that are specified in the optional backup_operation_project field are automatically included as backup projects. Configure Terraform service account permissions In the previous steps, you configured the backup projects where the backup operations run. Terraform needs to deploy resources to those backup projects. The service account that Terraform uses must have the required permissions for these specified backup projects. In Cloud Shell, grant the service account permissions for all of the projects where backup operations run: ./scripts/prepare_backup_operation_projects_for_terraform.sh BACKUP_OPERATIONS_PROJECT DATA_PROJECTS ADDITIONAL_BACKUPS Replace the following: BACKUP_OPERATIONS_PROJECT: any projects defined in the backup_operation_project fields in any of the fallback policies and table-level policies. DATA_PROJECTS: if no backup_operation_project field is defined in a fallback or table-level policy, include the projects for those source tables. ADDITIONAL_BACKUPS: any projects that are defined in the additional_backup_operation_projects Terraform variable. 
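Purely as an illustration, a hypothetical invocation with made-up project IDs that follows the placeholder order shown above (the real argument format is defined by the script itself, so check it before running):
./scripts/prepare_backup_operation_projects_for_terraform.sh backup-ops-project data-project-1 data-project-2 additional-ops-project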
Run the deployment scripts In Cloud Shell, run the Terraform deployment script: cd terraform terraform init \ -backend-config="bucket=${BUCKET_NAME}" \ -backend-config="prefix=terraform-state" \ -backend-config="impersonate_service_account=$TF_SA@$PROJECT_ID.iam.gserviceaccount.com" terraform plan -var-file=$VARS terraform apply -var-file=$VARS Add the time to live (TTL) policies for Firestore: gcloud firestore fields ttls update expires_at \ --collection-group=project_folder_cache \ --enable-ttl \ --async \ --project=$PROJECT_ID The solution uses Datastore as a cache in some situations. To save costs and improve lookup performance, the TTL policy allows Firestore to automatically delete entries that are expired. Set up access to sources and destinations In Cloud Shell, set the following variables for the service accounts used by the solution: export SA_DISPATCHER_EMAIL=dispatcher@${PROJECT_ID}.iam.gserviceaccount.com export SA_CONFIGURATOR_EMAIL=configurator@${PROJECT_ID}.iam.gserviceaccount.com export SA_SNAPSHOTER_BQ_EMAIL=snapshoter-bq@${PROJECT_ID}.iam.gserviceaccount.com export SA_SNAPSHOTER_GCS_EMAIL=snapshoter-gcs@${PROJECT_ID}.iam.gserviceaccount.com export SA_TAGGER_EMAIL=tagger@${PROJECT_ID}.iam.gserviceaccount.com If you've changed the default names in Terraform, update the service account emails. If you've set the folders_include_list field, and want to set the scope of the BigQuery scan to include certain folders, grant the required permissions on the folder level: ./scripts/prepare_data_folders.sh FOLDERS_INCLUDED To enable the application to execute the necessary tasks in different projects, grant the required permissions on each of these projects: ./scripts/prepare_data_projects.sh DATA_PROJECTS ./scripts/prepare_backup_storage_projects.sh BACKUP_STORAGE_PROJECT ./scripts/prepare_backup_operation_projects.sh BACKUP_OPERATIONS_PROJECT Replace the following: DATA_PROJECTS: the data projects (or source projects) that contain the source tables that you want to back up (for example, project1 project2). Include the following projects: Projects that are specified in the inclusion lists in the Terraform variable schedulers. If you want to back up tables in the host project, include the host project. BACKUP_STORAGE_PROJECT: the backup storage projects (or destination projects) where the solution stores the backups (for example, project1 project2). You need to include the projects that are specified in the following fields: The backup_storage_project fields in all of the fallback policies. The backup_storage_project fields in all of the table-level policies. Include backup storage projects that are used in multiple fields or that are used as both the source and destination project BACKUP_OPERATIONS_PROJECT: the data operation projects where the solution runs the backup operations (for example, project1 project2). You need to include the projects that are specified in the following fields: The backup_operation_project fields in all of the fallback policies. All inclusion lists in the scope of the BigQuery scan (if you don't set the backup_operation_project field). The backup_operation_project fields in all of the table-level policies. Include backup operations projects that are used in multiple fields or that are used as both the source and destination project. 
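For example, with hypothetical project IDs that match the descriptions above, the three grant commands might look like the following sketch; each script accepts a space-separated list of project IDs:

./scripts/prepare_data_projects.sh data-project-1 data-project-2
./scripts/prepare_backup_storage_projects.sh backup-storage-project
./scripts/prepare_backup_operation_projects.sh backup-ops-project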
For tables that use column-level access control, identify all policy tag taxonomies that are used by your tables (if any), and grant the solution's service accounts access to the table data: TAXONOMY="projects/TAXONOMY_PROJECT/locations/TAXONOMY_LOCATION/taxonomies/TAXONOMY_ID" gcloud data-catalog taxonomies add-iam-policy-binding \ $TAXONOMY \ --member="serviceAccount:${SA_SNAPSHOTER_BQ_EMAIL}" \ --role='roles/datacatalog.categoryFineGrainedReader' gcloud data-catalog taxonomies add-iam-policy-binding \ $TAXONOMY \ --member="serviceAccount:${SA_SNAPSHOTER_GCS_EMAIL}" \ --role='roles/datacatalog.categoryFineGrainedReader' Replace the following: TAXONOMY_PROJECT: the project ID in the policy tag taxonomy TAXONOMY_LOCATION: the location specified in the policy tag taxonomy TAXONOMY_ID: the taxonomy ID of the policy tag taxonomy Repeat the previous step for each policy tag taxonomy. Run the solution After you deploy the solution, use the following sections to run and manage the solution. Set table-level backup policies In Cloud Shell, create a table-level policy with the required fields, and then store the policy in the Cloud Storage bucket for policies: # Use the default backup policies bucket unless overwritten in the .tfvars export POLICIES_BUCKET=${PROJECT_ID}-bq-backup-manager-policies # set target table info export TABLE_PROJECT='TABLE_PROJECT' export TABLE_DATASET='TABLE_DATASET' export TABLE='TABLE_NAME' # Config Source must be 'MANUAL' when assigned this way export BACKUP_POLICY="{ 'config_source' : 'MANUAL', 'backup_cron' : 'BACKUP_CRON', 'backup_method' : 'BACKUP_METHOD', 'backup_time_travel_offset_days' : 'OFFSET_DAYS', 'backup_storage_project' : 'BACKUP_STORAGE_PROJECT', 'backup_operation_project' : 'BACKUP_OPERATION_PROJECT', 'gcs_snapshot_storage_location' : 'STORAGE_BUCKET', 'gcs_snapshot_format' : 'FILE_FORMAT', 'gcs_avro_use_logical_types' : 'AVRO_TYPE', 'bq_snapshot_storage_dataset' : 'DATASET_NAME', 'bq_snapshot_expiration_days' : 'SNAPSHOT_EXPIRATION' }" # File name MUST BE backup_policy.json echo $BACKUP_POLICY >> backup_policy.json gsutil cp backup_policy.json gs://${POLICIES_BUCKET}/policy/project=${TABLE_PROJECT}/dataset=${TABLE_DATASET}/table=${TABLE}/backup_policy.json Replace the following: TABLE_PROJECT: the project in which the table resides TABLE_DATASET: the dataset of the table TABLE_NAME: the name of the table Trigger backup operations The Cloud Scheduler jobs that you configured earlier run automatically based on their cron expression. You can also manually run the jobs in the Google Cloud console. For more information, see Run your job. Monitor and report With your host project (PROJECT_ID) selected, you can run the following queries in BigQuery Studio to get reports and information. Get progress statistics of each run (including in-progress runs): SELECT * FROM `bq_backup_manager.v_run_summary_counts` Get all fatal (non-retryable errors) for a single run: SELECT * FROM `bq_backup_manager.v_errors_non_retryable` WHERE run_id = 'RUN_ID' Replace RUN_ID with the ID of the run. 
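If you prefer the command line to BigQuery Studio, you can run the same reporting views with the bq tool. The following is a minimal sketch, assuming that the host project is your active gcloud project and that the bq_backup_manager dataset was created there by the deployment:

bq query --nouse_legacy_sql \
  'SELECT * FROM `bq_backup_manager.v_run_summary_counts`'

bq query --nouse_legacy_sql \
  "SELECT * FROM \`bq_backup_manager.v_errors_non_retryable\` WHERE run_id = 'RUN_ID'"

Replace RUN_ID with the ID of the run, as described above.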
Get all runs on a table and their execution information: SELECT * FROM `bq_backup_manager.v_errors_non_retryable` WHERE tablespec = 'project.dataset.table' You can also specify a grouped version: SELECT * FROM `bq_backup_manager.v_audit_log_by_table_grouped`, UNNEST(runs) r WHERE r.run_has_retryable_error = FALSE For debugging, you can get detailed request and response information for each service invocation: SELECT jsonPayload.unified_target_table AS tablespec, jsonPayload.unified_run_id AS run_id, jsonPayload.unified_tracking_id AS tracking_id, CAST(jsonPayload.unified_is_successful AS BOOL) AS configurator_is_successful, jsonPayload.unified_error AS configurator_error, CAST(jsonPayload.unified_is_retryable_error AS BOOL) AS configurator_is_retryable_error, CAST(JSON_VALUE(jsonPayload.unified_input_json, '$.isForceRun') AS BOOL) AS is_force_run, CAST(JSON_VALUE(jsonPayload.unified_output_json, '$.isBackupTime') AS BOOL) AS is_backup_time, JSON_VALUE(jsonPayload.unified_output_json, '$.backupPolicy.method') AS backup_method, CAST(JSON_VALUE(jsonPayload.unified_input_json, '$.isDryRun') AS BOOL) AS is_dry_run, jsonPayload.unified_input_json AS request_json, jsonPayload.unified_output_json AS response_json FROM `bq_backup_manager.run_googleapis_com_stdout` WHERE jsonPayload.global_app_log = 'UNIFIED_LOG' -- 1= dispatcher, 2= configurator, 3=bq snapshoter, -3=gcs snapshoter and 4=tagger AND jsonPayload.unified_component = "2" Get the backup policies that are manually added or assigned by the system based on fallbacks: SELECT * FROM `bq_backup_manager.ext_backup_policies` Limitations For more information about limits and quotas for each project that is specified in the backup_operation_project fields, see Limits. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this deployment, either delete the projects that contain the resources, or keep the projects and delete the individual resources. Delete the projects Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Delete the new resources As an alternative to deleting the projects, you can delete the resources created during this procedure. In Cloud Shell, delete the Terraform resources: terraform destroy -var-file="${VARS}" The command deletes almost all of the resources. Check to ensure that all the resources you want to delete are removed. What's next Learn more about BigQuery: BigQuery table snapshots BigQuery table exports to Cloud Storage For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthor: Karim Wadie | Strategic Cloud EngineerOther contributors: Chris DeForeest | Site Reliability EngineerEyal Ben Ivri | Cloud Solutions ArchitectJason Davenport | Developer AdvocateJaliya Ekanayake | Engineering ManagerMuhammad Zain | Strategic Cloud Engineer Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(7).txt b/Deploy_the_architecture(7).txt new file mode 100644 index 0000000000000000000000000000000000000000..df570c54cccb8234e385cda87b415987e73f7c6c --- /dev/null +++ b/Deploy_the_architecture(7).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/using-apache-hive-on-cloud-dataproc/deployment +Date Scraped: 2025-02-23T11:49:17.215Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy Apache Hive on Dataproc Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-05-08 UTC Important: We recommend that you use Dataproc Metastore to manage Hive metadata on Google Cloud, rather than the legacy workflow described in this deployment. This document describes how you deploy the architecture in Use Apache Hive on Dataproc. This document is intended for cloud architects and data engineers who are interested in deploying Apache Hive on Dataproc and the Hive Metastore in Cloud SQL. Architecture In this deployment guide, you deploy all compute and storage services in the same Google Cloud region to minimize network latency and network transport costs. The following diagram shows the lifecycle of a Hive query. In the diagram, the Hive client submits a query, which is processed, fetched, and returned. Processing takes place in the Hive server. The data is requested and returned from a Hive warehouse stored in a regional bucket in Cloud Storage. Objectives Create a MySQL instance on Cloud SQL for the Hive metastore. Deploy Hive servers on Dataproc. Install the Cloud SQL Proxy on the Dataproc cluster instances. Upload Hive data to Cloud Storage. Run Hive queries on multiple Dataproc clusters. Costs This deployment uses the following billable components of Google Cloud: Dataproc Cloud Storage Cloud SQL You can use the pricing calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial. When you finish this deployment, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this deployment, create a project instead of selecting an existing project. After you finish this deployment, you can delete the project to remove all resources that are associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Learn how to check if billing is enabled on a project. Initialize the environment Start a Cloud Shell instance: Go to Cloud Shell In Cloud Shell, set the default Compute Engine zone to the zone where you are going to create your Dataproc clusters. export PROJECT=$(gcloud info --format='value(config.project)') export REGION=REGION export ZONE=ZONE gcloud config set compute/zone ${ZONE} Replace the following: REGION: The region where you want to create the cluster, such as us-central1. ZONE: The zone where you want to create the cluster, such as us-central1-a.
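For example, to keep all resources in one region, you might initialize the environment with the example region and zone mentioned above; substitute your own values if they differ:

export PROJECT=$(gcloud info --format='value(config.project)')
export REGION=us-central1
export ZONE=us-central1-a
gcloud config set compute/zone ${ZONE}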
Enable the Dataproc and Cloud SQL Admin APIs by running this command in Cloud Shell: gcloud services enable dataproc.googleapis.com sqladmin.googleapis.com (Optional) Creating the warehouse bucket If you don't have a Cloud Storage bucket to store Hive data, create a warehouse bucket. You can run the following commands in Cloud Shell, replacing BUCKET_NAME with a unique bucket name: export WAREHOUSE_BUCKET=BUCKET_NAME gcloud storage buckets create gs://${WAREHOUSE_BUCKET} --location=${REGION} Creating the Cloud SQL instance In this section, you create a new Cloud SQL instance that will later be used to host the Hive metastore. In Cloud Shell, create a new Cloud SQL instance: gcloud sql instances create hive-metastore \ --database-version="MYSQL_5_7" \ --activation-policy=ALWAYS \ --zone ${ZONE} This command might take a few minutes to complete. Creating a Dataproc cluster Create the first Dataproc cluster, replacing CLUSTER_NAME with a name such as hive-cluster: gcloud dataproc clusters create CLUSTER_NAME \ --scopes sql-admin \ --region ${REGION} \ --initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/cloud-sql-proxy/cloud-sql-proxy.sh \ --properties "hive:hive.metastore.warehouse.dir=gs://${WAREHOUSE_BUCKET}/datasets" \ --metadata "hive-metastore-instance=${PROJECT}:${REGION}:hive-metastore" \ --metadata "enable-cloud-sql-proxy-on-workers=false" \ --public-ip-address Notes: You provide the sql-admin access scope to allow cluster instances to access the Cloud SQL Admin API. You put your initialization action in a script that you store in a Cloud Storage bucket, and you reference that bucket with the --initialization-actions flag. See Initialization actions - Important considerations and guidelines for more information. You provide the URI to the Hive warehouse bucket in the hive:hive.metastore.warehouse.dir property. This configures the Hive servers to read from and write to the correct location. This property must contain at least one directory (for example, gs://my-bucket/my-directory); Hive will not work properly if this property is set to a bucket name without a directory (for example, gs://my-bucket). You specify enable-cloud-sql-proxy-on-workers=false to ensure that the Cloud SQL Proxy only runs on master nodes, which is sufficient for the Hive metastore service to function and avoids unnecessary load on Cloud SQL. You provide the Cloud SQL Proxy initialization action that Dataproc automatically runs on all cluster instances. The action does the following: Installs the Cloud SQL Proxy. Establishes a secure connection to the Cloud SQL instance specified in the hive-metastore-instance metadata parameter. Creates the hive user and the Hive metastore's database. You can see the full code for the Cloud SQL Proxy initialization action on GitHub. This deployment uses a Cloud SQL instance with a public IP address. If instead you use an instance with only a private IP address, then you can force the proxy to use the private IP address by passing the --metadata "use-cloud-sql-private-ip=true" parameter. The --public-ip-address flag is necessary to allow the Cloud SQL proxy to connect to the Cloud SQL instance. Creating a Hive table In this section, you upload a sample dataset to your warehouse bucket, create a new Hive table, and run some HiveQL queries on that dataset.
Copy the sample dataset to your warehouse bucket: gcloud storage cp gs://hive-solution/part-00000.parquet \ gs://${WAREHOUSE_BUCKET}/datasets/transactions/part-00000.parquet The sample dataset is compressed in the Parquet format and contains thousands of fictitious bank transaction records with three columns: date, amount, and transaction type. Create an external Hive table for the dataset: gcloud dataproc jobs submit hive \ --cluster CLUSTER_NAME \ --region ${REGION} \ --execute " CREATE EXTERNAL TABLE transactions (SubmissionDate DATE, TransactionAmount DOUBLE, TransactionType STRING) STORED AS PARQUET LOCATION 'gs://${WAREHOUSE_BUCKET}/datasets/transactions';" Running Hive queries You can use different tools inside Dataproc to run Hive queries. In this section, you learn how to perform queries using the following tools: Dataproc's Hive jobs API. Beeline, a popular command line client that is based on SQLLine. SparkSQL, Apache Spark's API for querying structured data. In each section, you run a sample query. Querying Hive with the Dataproc Jobs API Run the following simple HiveQL query to verify that the parquet file is correctly linked to the Hive table: gcloud dataproc jobs submit hive \ --cluster CLUSTER_NAME \ --region ${REGION} \ --execute " SELECT * FROM transactions LIMIT 10;" The output includes the following: +-----------------+--------------------+------------------+ | submissiondate | transactionamount | transactiontype | +-----------------+--------------------+------------------+ | 2017-12-03 | 1167.39 | debit | | 2017-09-23 | 2567.87 | debit | | 2017-12-22 | 1074.73 | credit | | 2018-01-21 | 5718.58 | debit | | 2017-10-21 | 333.26 | debit | | 2017-09-12 | 2439.62 | debit | | 2017-08-06 | 5885.08 | debit | | 2017-12-05 | 7353.92 | authorization | | 2017-09-12 | 4710.29 | authorization | | 2018-01-05 | 9115.27 | debit | +-----------------+--------------------+------------------+ Querying Hive with Beeline Open an SSH session with the Dataproc's master instance(CLUSTER_NAME-m): gcloud compute ssh CLUSTER_NAME-m In the master instance's command prompt, open a Beeline session: beeline -u "jdbc:hive2://localhost:10000" Notes: You can also reference the master instance's name as the host instead of localhost: beeline -u "jdbc:hive2://CLUSTER_NAME-m:10000" If you were using the high-availability mode with 3 masters, you would have to use the following command instead: beeline -u "jdbc:hive2://CLUSTER_NAME-m-0:2181,CLUSTER_NAME-m-1:2181,CLUSTER_NAME-m-2:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" When the Beeline prompt appears, run the following HiveQL query: SELECT TransactionType, AVG(TransactionAmount) AS AverageAmount FROM transactions WHERE SubmissionDate = '2017-12-22' GROUP BY TransactionType; The output includes the following: +------------------+--------------------+ | transactiontype | averageamount | +------------------+--------------------+ | authorization | 4890.092525252529 | | credit | 4863.769269565219 | | debit | 4982.781458176331 | +------------------+--------------------+ Close the Beeline session: !quit Close the SSH connection: exit Querying Hive with SparkSQL Open an SSH session with the Dataproc's master instance: gcloud compute ssh CLUSTER_NAME-m In the master instance's command prompt, open a new PySpark shell session: pyspark When the PySpark shell prompt appears, type the following Python code: from pyspark.sql import HiveContext hc = HiveContext(sc) hc.sql(""" SELECT SubmissionDate, AVG(TransactionAmount) as AvgDebit FROM 
transactions WHERE TransactionType = 'debit' GROUP BY SubmissionDate HAVING SubmissionDate >= '2017-10-01' AND SubmissionDate < '2017-10-06' ORDER BY SubmissionDate """).show() The output includes the following: +-----------------+--------------------+ | submissiondate | avgdebit | +-----------------+--------------------+ | 2017-10-01 | 4963.114920399849 | | 2017-10-02 | 5021.493300510582 | | 2017-10-03 | 4982.382279569891 | | 2017-10-04 | 4873.302702503676 | | 2017-10-05 | 4967.696333583777 | +-----------------+--------------------+ Close the PySpark session: exit() Close the SSH connection: exit Inspecting the Hive metastore You now verify that the Hive metastore in Cloud SQL contains information about the transactions table. In Cloud Shell, start a new MySQL session on the Cloud SQL instance: gcloud sql connect hive-metastore --user=root When you're prompted for the root user password, do not type anything and just press the RETURN key. For the sake of simplicity in this deployment, you did not set any password for the root user. For information about setting a password to further protect the metastore database, refer to the Cloud SQL documentation. The Cloud SQL Proxy initialization action also provides a mechanism for protecting passwords through encryption—for more information, see the action's code repository. In the MySQL command prompt, make hive_metastore the default database for the rest of the session: USE hive_metastore; Verify that the warehouse bucket's location is recorded in the metastore: SELECT DB_LOCATION_URI FROM DBS; The output looks like this: +-------------------------------------+ | DB_LOCATION_URI | +-------------------------------------+ | gs://[WAREHOUSE_BUCKET]/datasets | +-------------------------------------+ Verify that the table is correctly referenced in the metastore: SELECT TBL_NAME, TBL_TYPE FROM TBLS; The output looks like this: +--------------+----------------+ | TBL_NAME | TBL_TYPE | +--------------+----------------+ | transactions | EXTERNAL_TABLE | +--------------+----------------+ Verify that the table's columns are also correctly referenced: SELECT COLUMN_NAME, TYPE_NAME FROM COLUMNS_V2 c, TBLS t WHERE c.CD_ID = t.SD_ID AND t.TBL_NAME = 'transactions'; The output looks like this: +-------------------+-----------+ | COLUMN_NAME | TYPE_NAME | +-------------------+-----------+ | submissiondate | date | | transactionamount | double | | transactiontype | string | +-------------------+-----------+ Verify that the input format and location are also correctly referenced: SELECT INPUT_FORMAT, LOCATION FROM SDS s, TBLS t WHERE s.SD_ID = t.SD_ID AND t.TBL_NAME = 'transactions'; The output looks like this: +---------------------------------------------------------------+------------------------------------------------+ | INPUT_FORMAT | LOCATION | +---------------------------------------------------------------+------------------------------------------------+ | org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat | gs://[WAREHOUSE_BUCKET]/datasets/transactions | +---------------------------------------------------------------+------------------------------------------------+ Close the MySQL session: exit Creating another Dataproc cluster In this section, you create another Dataproc cluster to verify that the Hive data and Hive metastore can be shared across multiple clusters. 
Create a new Dataproc cluster: gcloud dataproc clusters create other-CLUSTER_NAME \ --scopes cloud-platform \ --image-version 2.0 \ --region ${REGION} \ --initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/cloud-sql-proxy/cloud-sql-proxy.sh \ --properties "hive:hive.metastore.warehouse.dir=gs://${WAREHOUSE_BUCKET}/datasets" \ --metadata "hive-metastore-instance=${PROJECT}:${REGION}:hive-metastore"\ --metadata "enable-cloud-sql-proxy-on-workers=false" Verify that the new cluster can access the data: gcloud dataproc jobs submit hive \ --cluster other-CLUSTER_NAME \ --region ${REGION} \ --execute " SELECT TransactionType, COUNT(TransactionType) as Count FROM transactions WHERE SubmissionDate = '2017-08-22' GROUP BY TransactionType;" The output includes the following: +------------------+--------+ | transactiontype | count | +------------------+--------+ | authorization | 696 | | credit | 1722 | | debit | 2599 | +------------------+--------+ Congratulations, you've completed the steps in the deployment. Clean up The following sections explain how you can avoid future charges for your Google Cloud project and the Apache Hive and Dataproc resources that you used in this deployment. Delete the Google Cloud project To avoid incurring charges to your Google Cloud account for the resources used in this deployment, you can delete the Google Cloud project. Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Deleting individual resources Run the following commands in Cloud Shell to delete individual resources instead of deleting the whole project: gcloud dataproc clusters delete CLUSTER_NAME --region ${REGION} --quiet gcloud dataproc clusters delete other-CLUSTER_NAME --region ${REGION} --quiet gcloud sql instances delete hive-metastore --quiet gcloud storage rm gs://${WAREHOUSE_BUCKET}/datasets --recursive What's next Try BigQuery, Google's serverless, highly scalable, low-cost enterprise data warehouse. Check out this guide on migrating Hadoop workloads to Google Cloud. Check out this initialization action for more details on how to use Hive HCatalog on Dataproc. Learn how to configure Cloud SQL for high availability to increase service reliability. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(8).txt b/Deploy_the_architecture(8).txt new file mode 100644 index 0000000000000000000000000000000000000000..e8a6f8319a5328993fc43968c3222b2ae5734ae6 --- /dev/null +++ b/Deploy_the_architecture(8).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/riot-live-migration-redis-enterprise-cloud/deployment +Date Scraped: 2025-02-23T11:53:05.857Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy RIOT Live Migration to migrate to Redis Enterprise Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-29 UTC This document describes how you deploy RIOT Live Migration to migrate to Redis Enterprise Cloud. Database architects, DevOps and SRE teams, or Network administrators can use this architecture to offer near-zero downtime migrations to their teams. This document assumes that you're familiar with using the Google Cloud CLI and Compute Engine. Architecture The following diagram shows the reference architecture that uses RIOT Live Migration Service to migrate Redis-compatible sources to Redis Enterprise Cloud. For details about the architecture, see RIOT Live Migration to migrate to Redis Enterprise Cloud. The sample deployment in this document uses the following architecture in which the source is a Redis OSS on a Compute Engine VM: In the diagram, a Redis OSS instance and RIOT are consolidated into a single Compute Engine VM for simplicity. In a production environment, we recommend that RIOT always runs on its own VM to help ensure better performance. The sample deployment architecture contains the following components: Source: Redis OSS instance running on Compute Engine VM. Target: Redis Enterprise Cloud running in the Redis managed VPC. Migration Service: RIOT running on the same Compute Engine VM as Redis OSS. Network Setup: VPC Peering between a managed VPC and the Redis managed VPC. The RIOT migration tool has near-zero downtime. During migration from Redis OSS (source) to Redis Enterprise Cloud (target), your applications can still access Redis OSS without impact or service disruption. During the migration process, after the initial load of data from Redis OSS, RIOT Live Migration continues to migrate changes from Redis OSS as they occur. Objectives Set up your Redis OSS source by creating and loading data. Set up a migration target cluster in Redis Enterprise Cloud. Use RIOT Live Migration to migrate data from Redis OSS to Redis Enterprise Cloud. Understand testing, cutover, and fallback strategies. Costs Deployment of this architecture uses the following billable components of Google Cloud: Compute Engine costs for running Redis OSS and RIOT instances. Redis Enterprise Cloud cost procured through Google Cloud Marketplace. Network charges incurred from data migration traffic between zones and regions. Before you begin Complete the following steps to set up an environment for your migration. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the BigQuery, Pub/Sub, Dataflow, and Compute Engine APIs. 
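As an alternative to enabling these APIs through the console, you can enable them from Cloud Shell. The following is a minimal sketch, assuming that the target project is already selected in the gcloud CLI:

gcloud services enable \
    bigquery.googleapis.com \
    pubsub.googleapis.com \
    dataflow.googleapis.com \
    compute.googleapis.com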
Enable the APIs To get the permissions that you need to complete this deployment, ask your administrator to grant you the Billing Administrator (roles/billing.admin) IAM role on your organization. For more information about granting roles, see Manage access. You might also be able to get the required permissions through custom roles or other predefined roles. Set up a Redis OSS instance To start the deployment, you install the Redis OSS instance on a Compute Engine VM. The instance serves as your source instance. Install the Redis OSS instance In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. In Cloud Shell, create an Ubuntu VM: gcloud compute instances create redis-oss-riot-vm \ --image-family=ubuntu-2004-lts \ --image-project=ubuntu-os-cloud \ --zone=us-central1-a \ --machine-type=e2-medium \ --network=VPC_NETWORK_NAME \ --subnet=VPC_SUBNETWORK_NAME \ --metadata=startup-script='#! /bin/bash apt-get update -y apt-get install redis-tools -y snap install redis' Replace the following: VPC_NETWORK_NAME: the name of your VPC network. VPC_SUBNETWORK_NAME: the name of your VPC subnetwork. Use an SSH connection to sign in to the Compute Engine instance that runs the Redis OSS instance: PROJECT_ID=$(gcloud info --format='value(config.project)') gcloud compute ssh redis-oss-riot-vm --project $PROJECT_ID --zone us-central1-a Enable keyspace notification for live migration, which is required by RIOT: redis-cli config set notify-keyspace-events KEA Verify that the Redis OSS instance is operational In Cloud Shell, launch the Redis CLI: redis-cli Set and get a simple key-value pair: set my_key my_value get my_key unlink my_key The output is the following: OK "my_value" (integer) 1 You have now created and accessed your Redis OSS instance and confirmed that it's operational. Insert sample data In this section, you insert sample data into the Redis OSS instance and verify that the data is successfully inserted. In Cloud Shell, launch the Redis CLI: redis-cli Add the following six key-value pairs as the initial dataset. Enter each command individually and wait for the output OK before you enter the next key-value pair. set tennis federer set soccer ronaldo set basketball curry set football montana set golf woods set swimmer phelps Verify that you added six key-value pairs successfully: SCAN 0 The output is the following: "swimmer" "soccer" "football" "golf" "tennis" "basketball" After you set up and start the RIOT migration, the data is migrated to the target Redis Enterprise Cloud instance. Install RIOT on the Redis OSS instance VM In order to run RIOT, you need to make sure that your Compute Engine VM is appropriately sized. In general, we recommend that you size your VM to 8 VCPUs or larger, depending on the amount of data to be transported and the update frequency. For more information, see Machine families resource and comparison guide. 
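If you follow the earlier recommendation to run RIOT on its own VM for a production migration, a sketch of creating an appropriately sized instance might look like the following. The instance name, zone, and machine type are illustrative; e2-standard-8 provides the suggested 8 vCPUs, and the network placeholders match the ones used earlier:

gcloud compute instances create riot-vm \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --zone=us-central1-a \
    --machine-type=e2-standard-8 \
    --network=VPC_NETWORK_NAME \
    --subnet=VPC_SUBNETWORK_NAME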
In Cloud Shell, use an SSH connection to sign in to the Compute Engine instance that runs the Redis OSS instance: PROJECT_ID=$(gcloud info --format='value(config.project)') gcloud compute ssh redis-oss-riot-vm --project $PROJECT_ID --zone us-central1-a Install JDK for RIOT: sudo apt install default-jre -y Download and install RIOT: sudo apt-get install unzip wget https://github.com/redis-developer/riot/releases/download/v2.19.0/riot-redis-2.19.0.zip unzip riot-redis-2.19.0.zip Verify that RIOT is installed correctly: ./riot-redis-2.19.0/bin/riot-redis -V The output is similar to the following, which shows a RIOT logo and a version number: You have now installed the RIOT migration tool on the Redis OSS instance and confirmed that it's operational. Create a Redis Enterprise Cloud instance Redis Enterprise Cloud is available through Cloud Marketplace. If you don't have a Redis Enterprise cluster set up as your target Redis Enterprise instance, follow the steps in this section. If you already have a Redis Enterprise cluster set up as your target database, you can skip this section and proceed to Start the RIOT live migration. In Cloud Marketplace, go to Redis Enterprise Cloud Flexible - Pay as You Go. Go to Redis Enterprise in Marketplace For more information, see the instructions in the Redis document Flexible subscriptions with Cloud Marketplace. Sign in to the Redis console using the Redis account information that you provided when you subscribed to Redis Enterprise Cloud Flexible. Create a Flexible subscription by following the instructions in the Redis document Create a Flexible subscription. Choose Google Cloud as your cloud vendor, and create a database with all the default settings. Create a VPC peering between your Google Virtual Private Cloud and the Redis managed VPC by following the instructions in the Redis document Enable VPC peering. In the Redis console, go to Subscription, and then locate your Redis Enterprise database connection string: Make a note of the Private endpoint IP and Port, in the format of: ENDPOINT_IP:ENDPOINT_PORT Where the values represent the following: ENDPOINT_IP: the private endpoint IP address for the Redis Enterprise database. ENDPOINT_PORT: the private endpoint port number for the Redis Enterprise database. Make a note of the database password. Start the RIOT live migration To migrate the data from the Redis OSS (source) to Redis Enterprise Cloud (target) instance, do the following: In Cloud Shell, use an SSH connection to sign in to the Compute Engine instance that runs the Redis OSS instance: PROJECT_ID=$(gcloud info --format='value(config.project)') gcloud compute ssh redis-oss-riot-vm --project $PROJECT_ID --zone us-central1-a Initiate a live migration between the source and target. If your Redis OSS instance is on Redis 7.2, you need to use type-based replication. For information about using the --type option, see the Redis documentation Type-based replication. ./riot-redis-2.19.0/bin/riot-redis -u redis://localhost:6379 replicate \ -u redis://ENDPOINT_IP:ENDPOINT_PORT \ -a REDIS_ENTERPRISE_DB_PASSWORD \ --mode live Replace the following with the values that you noted in the previous section: ENDPOINT_IP: the private endpoint IP address for the Redis Enterprise cluster database. ENDPOINT_PORT: the private endpoint port number for the Redis Enterprise cluster database. REDIS_ENTERPRISE_DB_PASSWORD: the password for the Redis Enterprise cluster database. The output is similar to the following: Listening ? % ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━0/? 
(0:00:00 / ?) ?/s\ Scanning 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6/6 (0:00:00 / 0:00:00) ?/s Verify the database migration It's important to design and implement a database migration verification strategy to confirm that the database migration is successful. Although the verification strategy that you use depends on your specific use case, we recommend that you perform these checks for all migrations: Completeness check: Verify that the initial key-value pairs successfully migrated from the Redis OSS to Redis Enterprise (initial load). Dynamic check: Verify that changes in the source are being transferred to the target instance (ongoing migration). Initial load In Cloud Shell, use an SSH connection to sign in to the Compute Engine instance that runs the Redis OSS instance: PROJECT_ID=$(gcloud info --format='value(config.project)') $ gcloud compute ssh redis-oss-riot-vm --project $PROJECT_ID --zone us-central1-a Launch the Redis CLI: redis-cli -u redis://ENDPOINT_IP:ENDPOINT_PORT \ -a REDIS_ENTERPRISE_DB_PASSWORD Verify that the six key-value pairs successfully migrated from Redis OSS to Redis Enterprise instance: SCAN 0 The output is the following: "swimmer" "soccer" "football" "golf" "tennis" "basketball" Get the value of the tennis key: get tennis The output is the following: [return federer] Exit the Redis CLI: exit Ongoing migration Verify that ongoing changes to the source Redis OSS are reflected in the target Redis Enterprise instance: In Cloud Shell, use an SSH connection to sign in to the Redis OSS VM. Launch the Redis CLI: redis-cli Add new key-value pairs: Add a new runner bolt pair: set runner bolt Upsert a new tennis alcaraz pair: set tennis alcaraz The output for each of these commands is the following: OK In the Redis Enterprise instance, observe that new key-value pairs are added: get runner The output is the following: [return bolt] To verify that all key-value pairs are present, check key counts: redis-cli info keyspace and redis-cli -u info keyspace The output is the following: # Keyspace db0:keys=7,expires=0,avg_ttl=0 You have now verified that RIOT Live Migration has automatically migrated all the key-value pairs from the source Redis OSS instance and any ongoing changes to the source. Cut over from the source to the target After you verify the database migration, you can perform a cutover from the source Redis OSS instance to the target Redis Enterprise instance: Suspend client write access to the source Redis OSS instance by using Redis Access Control List. Unless you need to preserve the source database for your fallback strategy, decommission the source Redis OSS by removing the VM instance. Migrate the client to the same region as the Redis Enterprise database instance. For information, see the documentation for your client host. In the Redis console, locate the private endpoint of the Redis Enterprise database instance and update your client's Redis connection to the private endpoint. For more information, see View and edit databases in the Redis documentation. In Cloud Shell, stop the RIOT process by pressing Ctrl+C. Prepare a fallback strategy After the cutover is finished, the target Redis Enterprise instance is the system of record; the source Redis OSS instance is out of date and eventually removed. However, you might want to fall back to the source Redis OSS instance in case of severe failures in the new target Redis Enterprise instance. 
To fall back from such failures, you might want to keep the original source Redis OSS instance up to date with the target database changes. When you're confident that the new target instance is reliable, you can shut down the source instance. Clean up The following sections explain how you can avoid future charges for your Google Cloud project and the Redis resources that you used in this deployment. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Delete the Redis Enterprise database To delete the Redis Enterprise database, see Delete Database in the Redis documentation What's next Learn how to Define the scope of your migration to Redis Enterprise Cloud. Read Google Cloud data migration content. For more in-depth documentation and best practices, review RIOT documentation. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Saurabh Kumar | ISV Partner EngineerGilbert Lau | Principal Cloud Architect, RedisOther contributors: Chris Mague | Customer Engineer, Data ManagementGabe Weiss | Developer Advocacy ManagerMarco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture(9).txt b/Deploy_the_architecture(9).txt new file mode 100644 index 0000000000000000000000000000000000000000..244f3bcf24a20577bd975a80beb57a65f15f563f --- /dev/null +++ b/Deploy_the_architecture(9).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/import-logs-from-storage-to-logging/deployment +Date Scraped: 2025-02-23T11:53:17.102Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy a job to import logs from Cloud Storage to Cloud Logging Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-19 UTC This document describes how you deploy the reference architecture described in Import logs from Cloud Storage to Cloud Logging. These instructions are intended for engineers and developers, including DevOps, site reliability engineers (SREs), and security investigators, who want to configure and run the log importing job. This document also assumes you are familiar with running Cloud Run import jobs, and how to use Cloud Storage and Cloud Logging. Architecture The following diagram shows how Google Cloud services are used in this reference architecture: For details, see Import logs from Cloud Storage to Cloud Logging. 
Objectives Create and configure a Cloud Run import job Create a service account to run the job Costs In this document, you use the following billable components of Google Cloud: Cloud Logging Cloud Run Cloud Storage To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. Before you begin Ensure that the logs you intend to import were previously exported to Cloud Storage, which means that they're already organized in the expected export format. In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell Create or select a Google Cloud project. Create a Google Cloud project: gcloud projects create PROJECT_ID Replace PROJECT_ID with a name for the Google Cloud project you are creating. Select the Google Cloud project that you created: gcloud config set project PROJECT_ID Replace PROJECT_ID with your Google Cloud project name. Replace PROJECT_ID with the destination project ID. Note: We recommend that you create a new designated project for the imported logs. If you use an existing project, imported logs might get routed to unwanted destinations, which can cause extra charges or accidental export. To ensure logs don't route to unwanted destinations, review the filters on all the sinks, including _Required and _Default. Ensure that sinks are not inherited from organizations or folders. Make sure that billing is enabled for your Google Cloud project. Enable the Cloud Run and Identity and Access Management (IAM) APIs: gcloud services enable run.googleapis.com iam.googleapis.com Required roles To get the permissions that you need to deploy this solution, ask your administrator to grant you the following IAM roles: To grant the Logs Writer role on the log bucket: Project IAM Admin (roles/resourcemanager.projectIamAdmin) on the destination project To grant the Storage Object Viewer role on the storage bucket: Storage Admin (roles/storage.admin) on the project where the storage bucket is hosted To create a service account: Create Service Accounts (roles/iam.serviceAccountCreator) on the destination project To enable services on the project: Service Usage Admin (roles/serviceusage.serviceUsageAdmin) on the destination project To upgrade the log bucket and delete imported logs: Logging Admin (roles/logging.admin) on the destination project To create, run, and modify the import job: Cloud Run Developer (roles/run.developer) on the destination project For more information about granting roles, see Manage access to projects, folders, and organizations. You might also be able to get the required permissions through custom roles or other predefined roles. Upgrade the log bucket to use Log Analytics We recommend that you use the default log bucket, and upgrade it to use Log Analytics. However, in a production environment, you can use your own log bucket if the default bucket doesn't meet your requirements. If you decide to use your own bucket, you must route logs that are ingested to the destination project to this log bucket. For more information, see Configure log buckets and Create a sink. When you upgrade the bucket, you can use SQL to query and analyze your logs. There's no additional cost to upgrade the bucket or use Log Analytics. Note: After you upgrade a bucket, it can't be downgraded. For details about settings and restrictions, see Upgrade a bucket to use Log Analytics. 
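Because an upgraded bucket can't be downgraded, you might want to check the bucket's current state first. The following is a minimal sketch, assuming the default _Default bucket in the global location; if Log Analytics is already enabled, the output typically includes analyticsEnabled: true:

gcloud logging buckets describe _Default \
    --location=global \
    --project=PROJECT_ID

Replace PROJECT_ID with the destination project ID.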
To upgrade the default log bucket in the destination project, do the following: Upgrade the default log bucket to use Log Analytics: gcloud logging buckets update BUCKET_ID --location=LOCATION --enable-analytics Replace the following: BUCKET_ID: the name of the log bucket (for example, _Default) LOCATION: a supported region (for example, global) Create the Cloud Run import job When you create the job, you can use the prebuilt container image that is provided for this reference architecture. If you need to modify the implementation to change the 30-day retention period or if you have other requirements, you can build your own custom image. In Cloud Shell, create the job with the configurations and environment variables: gcloud run jobs create JOB_NAME \ --image=IMAGE_URL \ --region=REGION \ --tasks=TASKS \ --max-retries=0 \ --task-timeout=60m \ --cpu=CPU \ --memory=MEMORY \ --set-env-vars=END_DATE=END_DATE,LOG_ID=LOG_ID,\ START_DATE=START_DATE,STORAGE_BUCKET_NAME=STORAGE_BUCKET_NAME,\ PROJECT_ID=PROJECT_ID Replace the following: JOB_NAME: the name of your job. IMAGE_URL: the reference to the container image; use us-docker.pkg.dev/cloud-devrel-public-resources/samples/import-logs-solution or the URL of the custom image, if you built one by using the instructions in GitHub. REGION: the region where you want your job to be located; to avoid additional costs, we recommend keeping the job region the same or within the same multi-region as the Cloud Storage bucket region. For example, if your bucket is multi-region US, you can use us-central1. For details, see Cost optimization. TASKS: the number of tasks that the job must run. The default value is 1. You can increase the number of tasks if timeouts occur. CPU: the CPU limit, which can be 1, 2, 4, 6, or 8 CPUs. The default value is 2. You can increase the number if timeouts occur; for details, see Configure CPU limits. MEMORY: the memory limit. The default value is 2Gi. You can increase the number if timeouts occur; for details, see Configure memory limits. END_DATE: the end of the date range in the format MM/DD/YYYY. Logs with timestamps earlier than or equal to this date are imported. LOG_ID: the log identifier of the logs you want to import. Log ID is a part of the logName field of the log entry. For example, cloudaudit.googleapis.com. START_DATE: the start of the date range in the format MM/DD/YYYY. Logs with timestamps later than or equal to this date are imported. STORAGE_BUCKET_NAME: the name of the Cloud Storage bucket where logs are stored (without the gs:// prefix). The max-retries option is set to zero to prevent retries for failed tasks, which can cause duplicate log entries. If the Cloud Run job fails due to a timeout, an incomplete import can result. To prevent incomplete imports due to timeouts, increase the tasks value, as well as the CPU and memory resources. Increasing these values might increase costs. For details about costs, see Cost optimization. Create a service account to run your Cloud Run job In Cloud Shell, create the user-managed service account: gcloud iam service-accounts create SA_NAME Replace SA_NAME with the name of the service account. Grant the Storage Object Viewer role on the storage bucket: gcloud storage buckets add-iam-policy-binding gs://STORAGE_BUCKET_NAME \ --member=serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com \ --role=roles/storage.objectViewer Replace the following: STORAGE_BUCKET_NAME: the name of the storage bucket that you used in the import job configuration. For example, my-bucket. 
PROJECT_ID: the destination project ID. Grant the Logs Writer role on the log bucket: gcloud projects add-iam-policy-binding PROJECT_ID \ --member=serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com \ --role=roles/logging.logWriter Set the service account for the Cloud Run job: gcloud run jobs update JOB_NAME \ --region=REGION \ --service-account SA_NAME@PROJECT_ID.iam.gserviceaccount.com Replace REGION with the same region where you deployed the Cloud Run import job. Run the import job In Cloud Shell, execute the created job: gcloud run jobs execute JOB_NAME \ --region=REGION For more information, see Execute jobs and Manage job executions. If you need to rerun the job, delete the previously imported logs to avoid creating duplicates. For details, see Delete imported logs later in this document. When you query the imported logs, duplicates don't appear in the query results. Cloud Logging removes duplicates (log entries from the same project, with the same insertion ID and timestamp) from query results. For more information, see the insert_id field in the Logging API reference. Verify results To validate that the job has completed successfully, in Cloud Shell, you can query import results: gcloud logging read 'log_id("imported_logs") AND timestamp<=END_DATE' The output shows the imported logs. If this project was used to run more than one import job within the specified timeframe, the output shows imported logs from those jobs as well. For more options and details about querying log entries, see gcloud logging read. Delete imported logs If you need to run the same job more than one time, delete the previously imported logs to avoid duplicated entries and increased costs. To delete imported logs, in Cloud Shell, execute the logs delete: gcloud logging logs delete imported_logs Be aware that deleting imported logs purges all log entries that were imported to the destination project and not only the results of the last import job execution. What's Next Review the implementation code in the GitHub repository. Learn how to analyze imported logs by using Log Analytics and SQL. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Leonid Yankulin | Developer Relations EngineerOther contributors: Summit Tuladhar | Senior Staff Software EngineerWilton Wong | Enterprise ArchitectXiang Shen | Solutions Architect Send feedback \ No newline at end of file diff --git a/Deploy_the_architecture.txt b/Deploy_the_architecture.txt new file mode 100644 index 0000000000000000000000000000000000000000..c4109ea62f75ee8a2baa2522ea28af72aa3cccc5 --- /dev/null +++ b/Deploy_the_architecture.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/building-a-vision-analytics-solution/deployment +Date Scraped: 2025-02-23T11:46:31.595Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy an ML vision analytics solution with Dataflow and Cloud Vision API Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-05-16 UTC This deployment document describes how to deploy a Dataflow pipeline to process image files at scale with Cloud Vision API. This pipeline stores the results of the processed files in BigQuery. You can use the files for analytical purposes or to train BigQuery ML models. The Dataflow pipeline you create in this deployment can process millions of images per day. The only limit is your Vision API quota. 
You can increase your Vision API quota based on your scale requirements. These instructions are intended for data engineers and data scientists. This document assumes you have basic knowledge of building Dataflow pipelines using Apache Beam's Java SDK, GoogleSQL for BigQuery, and basic shell scripting. It also assumes that you are familiar with Vision API. Architecture The following diagram illustrates the system flow for building an ML vision analytics solution. In the preceding diagram, information flows through the architecture as follows: A client uploads image files to a Cloud Storage bucket. Cloud Storage sends a message about the data upload to Pub/Sub. Pub/Sub notifies Dataflow about the upload. The Dataflow pipeline sends the images to Vision API. Vision API processes the images and then returns the annotations. The pipeline sends the annotated files to BigQuery for you to analyze. Objectives Create an Apache Beam pipeline for image analysis of the images loaded in Cloud Storage. Use Dataflow Runner v2 to run the Apache Beam pipeline in a streaming mode to analyze the images as soon as they're uploaded. Use Vision API to analyze images for a set of feature types. Analyze annotations with BigQuery. Costs In this document, you use the following billable components of Google Cloud: BigQuery Cloud Storage Dataflow Vision API Pub/Sub To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish building the example application, you can avoid continued billing by deleting the resources you created. For more information, see Clean up. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Clone the GitHub repository that contains the source code of the Dataflow pipeline: git clone https://github.com/GoogleCloudPlatform/dataflow-vision-analytics.git Go to the root folder of the repository: cd dataflow-vision-analytics Follow the instructions in the Getting started section of the dataflow-vision-analytics repository in GitHub to accomplish the following tasks: Enable several APIs. Create a Cloud Storage bucket.
Create a Pub/Sub topic and subscription. Create a BigQuery dataset. Set up several environment variables for this deployment. Running the Dataflow pipeline for all implemented Vision API features The Dataflow pipeline requests and processes a specific set of Vision API features and attributes within the annotated files. The parameters listed in the following table are specific to the Dataflow pipeline in this deployment. For the complete list of standard Dataflow execution parameters, see Set Dataflow pipeline options. Parameter name Description batchSize The number of images to include in a request to Vision API. The default is 1. You can increase this value to a maximum of 16. datasetName The name of the output BigQuery dataset. features A list of image-processing features. The pipeline supports the label, landmark, logo, face, crop hint, and image properties features. keyRange The parameter that defines the maximum number of parallel calls to Vision API. The default is 1. labelAnnottationTable, landmarkAnnotationTable, logoAnnotationTable, faceAnnotationTable, imagePropertiesTable, cropHintAnnotationTable, errorLogTable String parameters with table names for various annotations. The default values are provided for each table—for example, label_annotation. maxBatchCompletionDurationInSecs The length of time to wait before processing images when there is an incomplete batch of images. The default is 30 seconds. subscriberId The ID of the Pub/Sub subscription that receives input Cloud Storage notifications. visionApiProjectId The project ID to use for Vision API. Note: The code samples in step 1 and step 5 don't specify a dedicated service account to run the deployment pipeline. The pipeline uses the default Compute Engine service account for the project that launches it. In Cloud Shell, run the following command to process images for all feature types supported by the Dataflow pipeline: ./gradlew run --args=" \ --jobName=test-vision-analytics \ --streaming \ --runner=DataflowRunner \ --enableStreamingEngine \ --diskSizeGb=30 \ --project=${PROJECT} \ --datasetName=${BIGQUERY_DATASET} \ --subscriberId=projects/${PROJECT}/subscriptions/${GCS_NOTIFICATION_SUBSCRIPTION} \ --visionApiProjectId=${PROJECT} \ --features=IMAGE_PROPERTIES,LABEL_DETECTION,LANDMARK_DETECTION,LOGO_DETECTION,CROP_HINTS,FACE_DETECTION" The dedicated service account needs to have read access to the bucket containing the images. In other words, that account must have the roles/storage.objectViewer role granted on that bucket. For more information about using a dedicated service account, see Dataflow security and permissions. Open the displayed URL in a new browser tab, or go to the Dataflow Jobs page and select the test-vision-analytics pipeline. After a few seconds, the graph for the Dataflow job appears: The Dataflow pipeline is now running and waiting to receive input notifications from the Pub/Sub subscription. Trigger Dataflow image processing by uploading the six sample files into the input bucket: gcloud storage cp data-sample/* gs://${IMAGE_BUCKET} In the Google Cloud console, locate the Custom Counters panel and use it to review the custom counters in Dataflow and to verify that Dataflow has processed all six images. You can use the filter functionality of the panel to navigate to the correct metrics. To display only the counters that start with the numberOf prefix, type numberOf in the filter.
In Cloud Shell, validate that the tables were automatically created: bq query --nouse_legacy_sql "SELECT table_name FROM ${BIGQUERY_DATASET}.INFORMATION_SCHEMA.TABLES ORDER BY table_name" The output is as follows: +----------------------+ | table_name | +----------------------+ | crop_hint_annotation | | face_annotation | | image_properties | | label_annotation | | landmark_annotation | | logo_annotation | +----------------------+ View the schema for the landmark_annotation table. The LANDMARK_DETECTION feature captures the attributes returned from the API call. bq show --schema --format=prettyjson ${BIGQUERY_DATASET}.landmark_annotation The output is as follows: [ { "name":"gcs_uri", "type":"STRING" }, { "name":"feature_type", "type":"STRING" }, { "name":"transaction_timestamp", "type":"STRING" }, { "name":"mid", "type":"STRING" }, { "name":"description", "type":"STRING" }, { "name":"score", "type":"FLOAT" }, { "fields":[ { "fields":[ { "name":"x", "type":"INTEGER" }, { "name":"y", "type":"INTEGER" } ], "mode":"REPEATED", "name":"vertices", "type":"RECORD" } ], "name":"boundingPoly", "type":"RECORD" }, { "fields":[ { "fields":[ { "name":"latitude", "type":"FLOAT" }, { "name":"longitude", "type":"FLOAT" } ], "name":"latLon", "type":"RECORD" } ], "mode":"REPEATED", "name":"locations", "type":"RECORD" } ] View the annotation data produced by the API by running the following bq query commands to see all the landmarks found in these six images ordered by the most likely score: bq query --nouse_legacy_sql "SELECT SPLIT(gcs_uri, '/')[OFFSET(3)] file_name, description, score, locations FROM ${BIGQUERY_DATASET}.landmark_annotation ORDER BY score DESC" The output is similar to the following: +------------------+-------------------+------------+---------------------------------+ | file_name | description | score | locations | +------------------+-------------------+------------+---------------------------------+ | eiffel_tower.jpg | Eiffel Tower | 0.7251996 | ["POINT(2.2944813 48.8583701)"] | | eiffel_tower.jpg | Trocadéro Gardens | 0.69601923 | ["POINT(2.2892823 48.8615963)"] | | eiffel_tower.jpg | Champ De Mars | 0.6800974 | ["POINT(2.2986304 48.8556475)"] | +------------------+-------------------+------------+---------------------------------+ For detailed descriptions of all the columns that are specific to annotations, see AnnotateImageResponse. To stop the streaming pipeline, run the following command. The pipeline continues to run even though there are no more Pub/Sub notifications to process. gcloud dataflow jobs cancel --region ${REGION} $(gcloud dataflow jobs list --region ${REGION} --filter="NAME:test-vision-analytics AND STATE:Running" --format="get(JOB_ID)") The following section contains more sample queries that analyze different image features of the images. Analyzing a Flickr30K dataset In this section, you detect labels and landmarks in the public Flickr30k image dataset hosted on Kaggle. In Cloud Shell, change the Dataflow pipeline parameters so that it's optimized for a large dataset. To allow higher throughput, also increase the batchSize and keyRange values. 
Dataflow scales the number of workers as needed: ./gradlew run --args=" \ --runner=DataflowRunner \ --jobName=vision-analytics-flickr \ --streaming \ --enableStreamingEngine \ --diskSizeGb=30 \ --autoscalingAlgorithm=THROUGHPUT_BASED \ --maxNumWorkers=5 \ --project=${PROJECT} \ --region=${REGION} \ --subscriberId=projects/${PROJECT}/subscriptions/${GCS_NOTIFICATION_SUBSCRIPTION} \ --visionApiProjectId=${PROJECT} \ --features=LABEL_DETECTION,LANDMARK_DETECTION \ --datasetName=${BIGQUERY_DATASET} \ --batchSize=16 \ --keyRange=5" Because the dataset is large, you can't use Cloud Shell to retrieve the images from Kaggle and send them to the Cloud Storage bucket. You must use a VM with a larger disk size to do that. To retrieve Kaggle-based images and send them to the Cloud Storage bucket, follow the instructions in the Simulate the images being uploaded to the storage bucket section in the GitHub repository. To observe the progress of the copying process, look at the custom metrics available in the Dataflow UI: navigate to the Dataflow Jobs page and select the vision-analytics-flickr pipeline. The custom counters should change periodically until the Dataflow pipeline processes all the files. The output is similar to the following screenshot of the Custom Counters panel. One of the files in the dataset is of the wrong type, and the rejectedFiles counter reflects that. These counter values are approximate. You might see higher numbers. Also, the number of annotations will most likely change due to increased accuracy of the processing by Vision API. To determine whether you're approaching or exceeding the available resources, see the Vision API quota page. In our example, the Dataflow pipeline used only roughly 50% of its quota. Based on the percentage of the quota you use, you can decide to increase the parallelism of the pipeline by increasing the value of the keyRange parameter. Shut down the pipeline: gcloud dataflow jobs cancel --region $REGION $(gcloud dataflow jobs list --region $REGION --filter="NAME:vision-analytics-flickr AND STATE:Running" --format="get(JOB_ID)") Analyze annotations in BigQuery In this deployment, you've processed more than 30,000 images for label and landmark annotation. In this section, you gather statistics about those files. You can run these queries in the GoogleSQL for BigQuery workspace or you can use the bq command-line tool. Be aware that the numbers that you see can vary from the sample query results in this deployment. Vision API constantly improves the accuracy of its analysis; it can produce richer results by analyzing the same image after you initially test the solution.
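For example, you can run the first label query of the following steps with the bq command-line tool from Cloud Shell instead of the BigQuery console. This is a minimal sketch that assumes the BIGQUERY_DATASET environment variable from the earlier setup steps; the queries shown in the following steps reference the vision_analytics dataset name directly:
bq query --nouse_legacy_sql \
  "SELECT description, COUNT(*) AS count
   FROM ${BIGQUERY_DATASET}.label_annotation
   GROUP BY description
   ORDER BY count DESC
   LIMIT 20"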
In the Google Cloud console, go to the BigQuery Query editor page and run the following command to view the top 20 labels in the dataset: Go to Query editor SELECT description, COUNT(*) AS count FROM vision_analytics.label_annotation GROUP BY description ORDER BY count DESC LIMIT 20 The output is similar to the following: +------------------+-------+ | description | count | +------------------+-------+ | Leisure | 7663 | | Plant | 6858 | | Event | 6044 | | Sky | 6016 | | Tree | 5610 | | Fun | 5008 | | Grass | 4279 | | Recreation | 4176 | | Shorts | 3765 | | Happy | 3494 | | Wheel | 3372 | | Tire | 3371 | | Water | 3344 | | Vehicle | 3068 | | People in nature | 2962 | | Gesture | 2909 | | Sports equipment | 2861 | | Building | 2824 | | T-shirt | 2728 | | Wood | 2606 | +------------------+-------+ Determine which other labels are present on an image with a particular label, ranked by frequency: DECLARE label STRING DEFAULT 'Plucked string instruments'; WITH other_labels AS ( SELECT description, COUNT(*) count FROM vision_analytics.label_annotation WHERE gcs_uri IN ( SELECT gcs_uri FROM vision_analytics.label_annotation WHERE description = label ) AND description != label GROUP BY description) SELECT description, count, RANK() OVER (ORDER BY count DESC) rank FROM other_labels ORDER BY rank LIMIT 20; The output is as follows. For the Plucked string instruments label used in the preceding command, you should see: +------------------------------+-------+------+ | description | count | rank | +------------------------------+-------+------+ | String instrument | 397 | 1 | | Musical instrument | 236 | 2 | | Musician | 207 | 3 | | Guitar | 168 | 4 | | Guitar accessory | 135 | 5 | | String instrument accessory | 99 | 6 | | Music | 88 | 7 | | Musical instrument accessory | 72 | 8 | | Guitarist | 72 | 8 | | Microphone | 52 | 10 | | Folk instrument | 44 | 11 | | Violin family | 28 | 12 | | Hat | 23 | 13 | | Entertainment | 22 | 14 | | Band plays | 21 | 15 | | Jeans | 17 | 16 | | Plant | 16 | 17 | | Public address system | 16 | 17 | | Artist | 16 | 17 | | Leisure | 14 | 20 | +------------------------------+-------+------+ View the top 10 detected landmarks: SELECT description, COUNT(description) AS count FROM vision_analytics.landmark_annotation GROUP BY description ORDER BY count DESC LIMIT 10 The output is as follows: +--------------------+-------+ | description | count | +--------------------+-------+ | Times Square | 55 | | Rockefeller Center | 21 | | St.
Mark's Square | 16 | | Bryant Park | 13 | | Millennium Park | 13 | | Ponte Vecchio | 13 | | Tuileries Garden | 13 | | Central Park | 12 | | Starbucks | 12 | | National Mall | 11 | +--------------------+-------+ Determine the images that most likely contain waterfalls: SELECT SPLIT(gcs_uri, '/')[OFFSET(3)] file_name, description, score FROM vision_analytics.landmark_annotation WHERE LOWER(description) LIKE '%fall%' ORDER BY score DESC LIMIT 10 The output is as follows: +----------------+----------------------------+-----------+ | file_name | description | score | +----------------+----------------------------+-----------+ | 895502702.jpg | Waterfall Carispaccha | 0.6181358 | | 3639105305.jpg | Sahalie Falls Viewpoint | 0.44379658 | | 3672309620.jpg | Gullfoss Falls | 0.41680416 | | 2452686995.jpg | Wahclella Falls | 0.39005348 | | 2452686995.jpg | Wahclella Falls | 0.3792498 | | 3484649669.jpg | Kodiveri Waterfalls | 0.35024035 | | 539801139.jpg | Mallela Thirtham Waterfall | 0.29260656 | | 3639105305.jpg | Sahalie Falls | 0.2807213 | | 3050114829.jpg | Kawasan Falls | 0.27511594 | | 4707103760.jpg | Niagara Falls | 0.18691841 | +----------------+----------------------------+-----------+ Find images of landmarks within 3 kilometers of the Colosseum in Rome (the ST_GEOGPOINT function uses the Colosseum's longitude and latitude): WITH landmarksWithDistances AS ( SELECT gcs_uri, description, location, ST_DISTANCE(location, ST_GEOGPOINT(12.492231, 41.890222)) distance_in_meters, FROM `vision_analytics.landmark_annotation` landmarks CROSS JOIN UNNEST(landmarks.locations) AS location ) SELECT SPLIT(gcs_uri,"/")[OFFSET(3)] file, description, ROUND(distance_in_meters) distance_in_meters, location, CONCAT("https://storage.cloud.google.com/", SUBSTR(gcs_uri, 6)) AS image_url FROM landmarksWithDistances WHERE distance_in_meters < 3000 ORDER BY distance_in_meters LIMIT 100 When you run the query, you will see that there are multiple images of the Colosseum, but also images of the Arch of Constantine, the Palatine Hill, and a number of other frequently photographed places. You can visualize the data in BigQuery Geo Viz by pasting in the previous query. Select a point on the map to see its details. The image_url attribute contains a link to the image file. A note about the query results: location information is usually present for landmarks, and the same image can contain multiple locations for the same landmark. This behavior is described in the AnnotateImageResponse type. Multiple LocationInfo elements can be present because one location can indicate where the scene in the image is located, while another can indicate where the image was taken. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this guide, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the Google Cloud project The easiest way to eliminate billing is to delete the Google Cloud project you created for the tutorial. Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future.
To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. If you decide to delete resources individually, follow the steps in the Clean up section of the GitHub repository. What's next For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Masud Hasan | Site Reliability Engineering ManagerSergei Lilichenko | Solutions ArchitectLakshmanan Sethu | Technical Account ManagerOther contributors: Jiyeon Kang | Customer EngineerSunil Kumar Jang Bahadur | Customer Engineer Send feedback \ No newline at end of file diff --git a/Deploy_the_blueprint(1).txt b/Deploy_the_blueprint(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..ba60f0d44be5d3145bf9f5d2fdb8ce97665e4e91 --- /dev/null +++ b/Deploy_the_blueprint(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/deploy-blueprint +Date Scraped: 2025-02-23T11:47:25.777Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy the blueprint Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC This section describes the process that you can use to deploy the blueprint, its naming conventions, and alternatives to blueprint recommendations. Bringing it all together To implement the architecture described in this document, complete the steps in this section. Deploy the blueprint in a new organization To deploy the blueprint in a new Google Cloud organization, complete the following: Create your foundational infrastructure using the enterprise foundations blueprint. Complete the following: Create an organization structure, including folders for separation of environments. Configure foundational IAM permissions to grant access to developer platform administrators. Create the VPC network. Deploy the foundation infrastructure pipeline. If you don't use the enterprise foundations blueprint, see Deploy the blueprint without the enterprise foundations blueprint. Deploy the enterprise application blueprint as follows: The developer platform administrator uses the foundation infrastructure pipeline to create the multi-tenant infrastructure pipeline, application factory, and fleet-scope pipeline. The developer platform administrator uses the multi-tenant infrastructure pipeline to deploy GKE clusters and shared infrastructure. Application operators use the application factory to onboard new applications. Operators add one or more entries in the application factory repository, which triggers the creation of application-specific resources. Application developers use the application CI/CD pipeline within their application-specific infrastructure to deploy applications to the multi-tenant infrastructure. 
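As a purely illustrative sketch of the application-onboarding step above: adding an entry to the application factory repository is an ordinary Git change that the factory pipeline then acts on. The repository name, file layout, and branch shown here are hypothetical placeholders, not the blueprint's actual structure:
git clone https://source.developers.google.com/p/DEV_PLATFORM_PROJECT_ID/r/app-factory
cd app-factory
# Add or edit an entry that describes the new application (for example, its name,
# owning team, and target environments), then push to trigger resource creation.
git add .
git commit -m "Onboard example-app"
git push origin main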
Deploy the blueprint without the enterprise foundations blueprint If you don't deploy the enterprise application blueprint on the enterprise foundations blueprint, complete the following steps: Create the following resources: An organization hierarchy with development, nonproduction, and production folders A Shared VPC network in each folder An IP address scheme that takes into account the required IP ranges for your GKE clusters A DNS mechanism for your GKE clusters Firewall policies that are aligned with your security posture A mechanism to access Google Cloud APIs through private IP addresses A connectivity mechanism with your on-premises environment Centralized logging for security and audit Security Command Center for threat monitoring Organizational policies that are aligned with your security posture A pipeline that can be used to deploy the application factory, the multi-tenant infrastructure pipeline, and the fleet-scope pipeline After you deploy the resources, continue with step 2 in Deploy the blueprint in a new organization. Incorporate the blueprint into your existing GKE deployment This blueprint requires you to deploy the developer platform first, and then deploy applications onto the developer platform. The following table describes how you can use the blueprint if you already have containerized applications running on Google Cloud. Existing usage Migration strategy Already have a CI/CD pipeline. You can use the blueprint's fleet and cluster architecture even when different products are used for application build and deployment. At a minimum, we recommend that you mirror images to two regions. Have an existing organization structure that doesn't match the enterprise foundations blueprint. Having at least two environments is recommended for safer sequential deployments. You don't have to deploy environments in separate Shared VPCs or folders. However, don't deploy workloads that belong to different environments into the same cluster. Don't use IaC. If your current application deployment process doesn't use IaC, you can assess your readiness with the Terraform on Google Cloud maturity model. Import existing resources into a different Terraform project which is organized similarly to this blueprint, with the separation of multi-tenant and per-tenant pipelines. To create new clusters, you can use Terraform modules for Google Cloud. Clusters are spread across multiple projects within the same environment. You can group clusters from multiple projects into a fleet. Verify that your namespaces have unique meanings within the same fleet. Before adding clusters to a fleet, ask teams to move their applications to a namespace with a unique name (for example, not default).You can then group namespaces into scopes. Clusters are in a single region. You don't need to use multiple regions in production and non-production to adopt the blueprint. Different sets of environments exist. You can modify the blueprint to support more or less than three environments. Creation of clusters is delegated to application developer or application operator teams. For the most secure and consistent developer platform, you can try to move ownership of clusters from the application teams to the developer platform team. If you can't, you can still adopt many of the blueprint practices. For example, you can add the clusters owned by different application teams to a fleet. 
However, when combining clusters with independent ownership, don't use Workload Identity Federation for GKE or Cloud Service Mesh, because they don't provide enough control of who can assert what workload identities. Instead, use a custom organization policy to prevent teams from enabling these features on GKE clusters.When clusters are grouped into a fleet, you can still audit and enforce policies. You can use a custom organization policy to require clusters to be created within a fleet that corresponds to the environment folder that the cluster's project is under. You can use fleet default configuration to require that new clusters use policy control. Alternatives to default recommendations This section describes alternatives to the default recommendations that are included in this guide. Decision area Possible alternatives All applications run in the same set of five clusters. The blueprint uses a set of five clusters (two in production, two in non-production, and one in development). You can modify the blueprint to make additional sets of five clusters.Assign applications to sets of five clusters. Don't bind an application's scopes or fleet namespaces to clusters in the other sets. You may want to segregate applications into different cluster sets to complete activities such as the following: Group together a few applications with special regulatory concerns (for example, PCI-DSS). Isolate applications that require special handling during cluster upgrades (for example, stateful applications that use persistent volumes). Isolate applications with risky security profiles (for example, processing user-provided content in a memory-unsafe language). Isolate applications that require GPUs, cost sensitivity, and performance sensitivity. If you are nearing the GKE cluster limit on the number of nodes (65,000 nodes), you can create a new set of clusters. Use a different Shared VPC for applications that need to run within a VPC Service Controls perimeter. Create one cluster set in the perimeter and one cluster set outside of the perimeter. Avoid creating new clusters sets for every application or tenant, because this practice might result in one of the following circumstances: Applications that don't make good use of the minimum cluster size (3 VMs x 2 regions) even with the smallest available instance types. Missed potential for cost reduction from bin-packing different applications. Tedious and uncertain capacity growth planning because planning is applied to each application individually. Predictions of capacity are more accurate when done in aggregate for a broad set of applications. Delays in creating new clusters for new tenants or applications, reducing tenant satisfaction with the platform. For example, in some organizations, the required IP address allocations may take time and require extra approvals. Reaching the private cluster limit in a VPC network. Production and non-production environments have clusters in two regions. For lower latency to end users in multiple regions, you can deploy the production and non-production workloads across more than two regions (for example, three regions for production, three regions for non-production, and one region for development). This deployment strategy increases the cost and overhead of maintaining resources in additional regions. 
If all applications have lower availability requirements, you can deploy production and non-production workloads to only one region (one production environment, one non-production environment, and one development environment). This strategy helps reduce cost, but doesn't protect the same level of availability as a dual-regional or multi-regional architecture. If applications have different availability requirements, you can create different cluster sets for different applications (for example, cluster-set-1 with two production environments, two non-production environments, and one development environment and cluster-set-2 with one production environment, one non-production environment, and one development environment). Hub-and-spoke topology matches your requirements better than Shared VPC. You can deploy the blueprint in a hub-and-spoke configuration, where each environment (development, production, and non-production) is hosted in its own spoke. Hub-and-spoke topology can increase segregation of the environments. For more information, see Hub-and-spoke network topology. Each application has a separate Git repository. Some organizations use a single Git repository (a monorepo) for all source code instead of multiple repositories. If you use a monorepo, you can modify the application factory component of the blueprint to support your repository. Complete the following: Instead of creating a new repository for each new application, create a sub-directory in the existing repository. Instead of giving owner permissions on the new repository to the application developer group, give write permission on the existing repository and make the application developer group a required reviewer for changes to the new sub-directory. Use the CODEOWNERS feature and branch protection. What's next Learn more about the enterprise foundations blueprint. Learn more about software delivery on Google Cloud from the following: Best practices for continuous integration and delivery to GKE Software delivery framework Secure CI/CD pipeline repository Learn more about running applications on GKE from the following: Best practices for GKE networking Best practices for GKE enterprise multi-tenancy Best practices for running cost-optimized Kubernetes applications on GKE GKE safer cluster repository Harden your cluster's security Send feedback \ No newline at end of file diff --git a/Deploy_the_blueprint.txt b/Deploy_the_blueprint.txt new file mode 100644 index 0000000000000000000000000000000000000000..b79ba123ac71a3e78d7834183db60301bc9a3098 --- /dev/null +++ b/Deploy_the_blueprint.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations/summary +Date Scraped: 2025-02-23T11:45:40.735Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy the blueprint Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC This section describes the process that you can use to deploy the blueprint, its naming conventions, and alternatives to blueprint recommendations. Bringing it all together To deploy your own enterprise foundation in alignment with the best practices and recommendations from this blueprint, follow the high-level tasks summarized in this section. Deployment requires a combination of prerequisite setup steps, automated deployment through the terraform-example-foundation on GitHub, and additional steps that must be configured manually after the initial foundation deployment is complete. 
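Before you work through the stages listed in the following table, you can clone a local copy of the example code to review its structure. This is a sketch only; verify the repository location and stage directory names against the current GitHub project:
git clone https://github.com/terraform-google-modules/terraform-example-foundation.git
cd terraform-example-foundation
ls
# Stage directories such as 0-bootstrap, 1-org, 2-environments,
# 3-networks-dual-svpc (or 3-networks-hub-and-spoke), 4-projects, and 5-app-infra
# correspond to the deployment stages described in the table that follows.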
Process Steps Prerequisites before deploying the foundation pipeline resources Complete the following steps before you deploy the foundation pipeline: Create a Cloud Identity account and verify domain ownership. Apply for an invoiced billing account with your Google Cloud sales team or create a self-service billing account. Enforce security best practices for administrator accounts. Verify and reconcile issues with consumer user accounts. Configure your external identity provider as the source of truth for synchronizing user accounts and SSO. Provision the groups for access control that are required to run the blueprint. Determine the network topology that you will use. Decide your source code management tool. The instructions for terraform-example-foundation are written for a Git repository that is hosted in Cloud Source Repositories. Decide your CI/CD automation tools. The terraform-example-foundation provides different sets of directions for different automation tools. To connect to an existing on-premises environment, prepare the following: Plan your IP address allocation based on the number and size of ranges that are required by the blueprint. Order your Dedicated Interconnect connections. Steps to deploy the terraform-example-foundation from GitHub Follow the README directions for each stage to deploy the terraform-example-foundation from GitHub: Stage 0-bootstrap to create a foundation pipeline. If using a self-service billing account, you must request additional project quota before proceeding to the next stage. Stage 1-org to configure organization-level resources. Stage 2-environments to create environments. Stage either 3-networks-dual-svpc or 3-networks-hub-and-spoke to create networking resources in your preferred topology. Stage 4-projects to create an infrastructure pipeline. Optionally, stage 5-app-infra for sample usage of the infrastructure pipeline. Additional steps after IaC deployment After you deploy the Terraform code, complete the following: Complete the on-premises configuration changes. Activate Security Command Center Premium. Export Cloud Billing data to BigQuery. Sign up for a Cloud Customer Care plan. Enable Access Transparency logs. Share data from Cloud Identity with Google Cloud. Apply the administrative controls for Cloud Identity that aren't automated by the IaC deployment. Assess which of the additional administrative controls for customers with sensitive workloads are appropriate for your use case. Review operation best practices and plan how to connect your existing operations and capabilities to the foundation resources. Additional administrative controls for customers with sensitive workloads Google Cloud provides additional administrative controls that can help you meet your security and compliance requirements. However, some controls involve additional cost or operational trade-offs that might not be appropriate for every customer. These controls also require customized inputs for your specific requirements that can't be fully automated in the blueprint with a default value for all customers. This section introduces security controls that you apply centrally to your foundation. This section isn't intended to be an exhaustive list of all the security controls that you can apply to specific workloads. For more information on Google's security products and solutions, see Google Cloud security best practices center. Evaluate whether the following controls are appropriate for your foundation based on your compliance requirements, risk appetite, and sensitivity of data.
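As one concrete illustration of the controls in the following table, the resource locations constraint can be enforced with an organization policy from the gcloud CLI. This is a hedged sketch; ORGANIZATION_ID and the in:us-locations value group are placeholders that you would replace with your own organization ID and approved locations:
gcloud resource-manager org-policies allow \
    constraints/gcp.resourceLocations in:us-locations \
    --organization=ORGANIZATION_ID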
Control Description Protect your resources with VPC Service Controls VPC Service Controls lets you define security policies that prevent access to Google-managed services outside of a trusted perimeter, block access to data from untrusted locations, and mitigate data exfiltration risks. However, VPC Service Controls can cause existing services to break until you define exceptions to allow intended access patterns.Evaluate whether the value of mitigating exfiltration risks justifies the increased complexity and operational overhead of adopting VPC Service Controls. The blueprint prepares restricted networks and optional variables to configure VPC Service Controls, but the perimeter isn't enabled until you take additional steps to design and enable it. Restrict resource locations You might have regulatory requirements that cloud resources must only be deployed in approved geographical locations. This organization policy constraint enforces that resources can only be deployed in the list of locations you define. Enable Assured Workloads Assured Workloads provides additional compliance controls that help you meet specific regulatory regimes. The blueprint provides optional variables in the deployment pipeline for enablement. Enable data access logs You might have a requirement to log all access to certain sensitive data or resources.Evaluate where your workloads handle sensitive data that requires data access logs, and enable the logs for each service and environment working with sensitive data. Enable Access Approval Access Approval ensures that Cloud Customer Care and engineering require your explicit approval whenever they need to access your customer content.Evaluate the operational process required to review Access Approval requests to mitigate possible delays in resolving support incidents. Enable Key Access Justifications Key Access Justifications lets you programmatically control whether Google can access your encryption keys, including for automated operations and for Customer Care to access your customer content.Evaluate the cost and operational overhead associated with Key Access Justifications as well as its dependency on Cloud External Key Manager (Cloud EKM). Disable Cloud Shell Cloud Shell is an online development environment. This shell is hosted on a Google-managed server outside of your environment, and thus it isn't subject to the controls that you might have implemented on your own developer workstations.If you want to strictly control which workstations a developer can use to access cloud resources, disable Cloud Shell. You might also evaluate Cloud Workstations for a configurable workstation option in your own environment. Restrict access to the Google Cloud console Google Cloud lets you restrict access to the Google Cloud console based on access level attributes like group membership, trusted IP address ranges, and device verification. Some attributes require an additional subscription to Chrome Enterprise Premium.Evaluate the access patterns that you trust for user access to web-based applications such as the console as part of a larger zero trust deployment. Naming conventions We recommend that you have a standardized naming convention for your Google Cloud resources. The following table describes recommended conventions for resource names in the blueprint. Resource Naming convention Folder fldr-environmentenvironment is a description of the folder-level resources within the Google Cloud organization. 
For example, bootstrap, common, production, nonproduction, development, or network.For example: fldr-production Project ID prj-environmentcode-description-randomid environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). Shared VPC host projects use the environmentcode of the associated environment. Projects for networking resources that are shared across environments, like the interconnect project, use the net environment code. description is additional information about the project. You can use short, human-readable abbreviations. randomid is a randomized suffix to prevent collisions for resource names that must be globally unique and to mitigate against attackers guessing resource names. The blueprint automatically adds a random four-character alphanumeric identifier. For example: prj-c-logging-a1b2 VPC network vpc-environmentcode-vpctype-vpcconfig environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). vpctype is one of shared, float, or peer. vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not. For example: vpc-p-shared-base Subnet sn-environmentcode-vpctype-vpcconfig-region{-description} environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). vpctype is one of shared, float, or peer. vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not. region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid hitting character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast). description is additional information about the subnet. You can use short, human-readable abbreviations. For example: sn-p-shared-restricted-uswest1 Firewall policies fw-firewalltype-scope-environmentcode{-description} firewalltype is hierarchical or network. scope is global or the Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast). environmentcode is a short form of the environment field (one of b, c, p, n, d, or net) that owns the policy resource. description is additional information about the hierarchical firewall policy. You can use short, human-readable abbreviations. For example:fw-hierarchical-global-c-01fw-network-uswest1-p-shared-base Cloud Router cr-environmentcode-vpctype-vpcconfig-region{-description} environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). vpctype is one of shared, float, or peer. vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not. region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast). description is additional information about the Cloud Router. You can use short, human-readable abbreviations. 
For example: cr-p-shared-base-useast1-cr1 Cloud Interconnect connection ic-dc-colo dc is the name of your data center to which a Cloud Interconnect is connected. colo is the colocation facility name that the Cloud Interconnect from the on-premises data center is peered with. For example: ic-mydatacenter-lgazone1 Cloud Interconnect VLAN attachment vl-dc-colo-environmentcode-vpctype-vpcconfig-region{-description} dc is the name of your data center to which a Cloud Interconnect is connected. colo is the colocation facility name that the Cloud Interconnect from the on-premises data center is peered with. environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). vpctype is one of shared, float, or peer. vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not. region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast). description is additional information about the VLAN. You can use short, human-readable abbreviations. For example: vl-mydatacenter-lgazone1-p-shared-base-useast1-cr1 Group grp-gcp-description@example.com Where description is additional information about the group. You can use short, human-readable abbreviations.For example: grp-gcp-billingadmin@example.com Custom role rl-descriptionWhere description is additional information about the role. You can use short, human-readable abbreviations.For example: rl-customcomputeadmin Service account sa-description@projectid.iam.gserviceaccount.comWhere: description is additional information about the service account. You can use short, human-readable abbreviations. projectid is the globally unique project identifier. For example: sa-terraform-net@prj-b-seed-a1b2.iam.gserviceaccount.com Storage bucket bkt-projectid-descriptionWhere: projectid is the globally unique project identifier. description is additional information about the storage bucket. You can use short, human-readable abbreviations. For example: bkt-prj-c-infra-pipeline-a1b2-app-artifacts Alternatives to default recommendations The best practices that are recommended in the blueprint might not work for every customer. You can customize any of the recommendations to meet your specific requirements. The following table introduces some of the common variations that you might require based on your existing technology stack and ways of working. Decision area Possible alternatives Organization: The blueprint uses a single organization as the root node for all resources. Decide a resource hierarchy for your Google Cloud landing zone introduces scenarios in which you might prefer multiple organizations, such as the following: Your organization includes sub-companies that are likely to be sold in the future or that run as completely separate entities. You want to experiment in a sandbox environment with no connectivity to your existing organization. Folder structure: The blueprint has a simple folder structure, with workloads divided into production, non-production and development folders at the top layer. 
Decide a resource hierarchy for your Google Cloud landing zone introduces other approaches for structuring folders based on how you want to manage resources and inherit policies, such as: Folders based on application environments Folders based on regional entities or subsidiaries Folders based on accountability framework Organization policies: The blueprint enforces all organization policy constraints at the organization node. You might have different security policies or ways of working for different parts of the business. In this scenario, enforce organization policy constraints at a lower node in the resource hierarchy. Review the complete list of organization policy constraints that help meet your requirements. Deployment pipeline tooling: The blueprint uses Cloud Build to run the automation pipeline. You might prefer other products for your deployment pipeline, such as Terraform Enterprise, GitLab Runners, GitHub Actions, or Jenkins. The blueprint includes alternative directions for each product. Code repository for deployment: The blueprint uses Cloud Source Repositories as the managed private Git repository. Use your preferred version control system for managing code repositories, such as GitLab, GitHub, or Bitbucket.If you use a private repository that is hosted in your on-premises environment, configure a private network path from your repository to your Google Cloud environment. Identity provider: The blueprint assumes an on-premises Active Directory and federates identities to Cloud Identity using Google Cloud Directory Sync. If you already use Google Workspace, you can use the Google identities that are already managed in Google Workspace.If you don't have an existing identity provider, you might create and manage user identities directly in Cloud Identity.If you have an existing identity provider, such as Okta, Ping, or Azure Entra ID, you might manage user accounts in your existing identity provider and synchronize to Cloud Identity.If you have data sovereignty or compliance requirements that prevent you from using Cloud Identity, and if you don't require managed Google user identities for other Google services such as Google Ads or Google Marketing Platform, then you might prefer workforce identity federation. In this scenario, be aware of limitations with supported services. Multiple regions: The blueprint deploys regional resources into two different Google Cloud regions to help enable workload design with high availability and disaster recovery requirements in mind. If you have end users in more geographical locations, you might configure more Google Cloud regions to create resources closer to the end user with less latency.If you have data sovereignty constraints or your availability needs can be met in a single region, you might configure only one Google Cloud region. IP address allocation: The blueprint provides a set of IP address ranges. You might need to change the specific IP address ranges that are used based on the IP address availability in your existing hybrid environment. If you modify the IP address ranges, use the blueprint as guidance for the number and size of ranges required, and review the valid IP address ranges for Google Cloud. Hybrid networking: The blueprint uses Dedicated Interconnect across multiple physical sites and Google Cloud regions for maximum bandwidth and availability. 
Depending on your requirements for cost, bandwidth, and reliability, you might configure Partner Interconnect or Cloud VPN instead. If you need to start deploying resources with private connectivity before a Dedicated Interconnect can be completed, you might start with Cloud VPN and change to using Dedicated Interconnect later. If you don't have an existing on-premises environment, you might not need hybrid networking at all. VPC Service Controls perimeter: The blueprint recommends a single perimeter that includes all the service projects that are associated with a restricted VPC network. Projects that are associated with a base VPC network are not included inside the perimeter. You might have a use case that requires multiple perimeters for an organization or you might decide not to use VPC Service Controls at all. For information, see decide how to mitigate data exfiltration through Google APIs. Secret Manager: The blueprint deploys a project for using Secret Manager in the common folder for organization-wide secrets, and a project in each environment folder for environment-specific secrets. If you have a single team who is responsible for managing and auditing sensitive secrets across the organization, you might prefer to use only a single project for managing access to secrets. If you let workload teams manage their own secrets, you might not use a centralized project for managing access to secrets, and instead let teams use their own instances of Secret Manager in workload projects. Cloud KMS: The blueprint deploys a project for using Cloud KMS in the common folder for organization-wide keys, and a project for each environment folder for keys in each environment. If you have a single team who is responsible for managing and auditing encryption keys across the organization, you might prefer to use only a single project for managing access to keys. A centralized approach can help meet compliance requirements like PCI key custodians. If you let workload teams manage their own keys, you might not use a centralized project for managing access to keys, and instead let teams use their own instances of Cloud KMS in workload projects. Aggregated log sinks: The blueprint configures a set of log sinks at the organization node so that a central security team can review audit logs from across the entire organization. You might have different teams who are responsible for auditing different parts of the business, and these teams might require different logs to do their jobs. In this scenario, design multiple aggregated sinks at the appropriate folders and projects and create filters so that each team receives only the necessary logs, or design log views for granular access control to a common log bucket. Granularity of infrastructure pipelines: The blueprint uses a model where each business unit has a separate infrastructure pipeline to manage their workload projects. You might prefer a single infrastructure pipeline that is managed by a central team if you have a central team who is responsible for deploying all projects and infrastructure. This central team can accept pull requests from workload teams to review and approve before project creation, or the team can create the pull request themselves in response to a ticketing system. You might prefer more granular pipelines if individual workload teams have the ability to customize their own pipelines and you want to design more granular privileged service accounts for the pipelines.
SIEM exports:The blueprint manages all security findings in Security Command Center. Decide whether you will export security findings from Security Command Center to tools such as Google Security Operations or your existing SIEM, or whether teams will use the console to view and manage security findings. You might configure multiple exports with unique filters for different teams with different scopes and responsibilities. DNS lookups for Google Cloud services from on-premises: The blueprint configures a unique Private Service Connect endpoint for each Shared VPC, which can help enable designs with multiple VPC Service Controls perimeters. You might not require routing from an on-premises environment to Private Service Connect endpoints at this level of granularity if you don't require multiple VPC Service Control perimeters.Instead of mapping on-premises hosts to Private Service Connect endpoints by environment, you might simplify this design to use a single Private Service Connect endpoint with the appropriate API bundle, or use the generic endpoints for private.googleapis.com and restricted.googleapis.com. What's next Implement the blueprint using the Terraform example foundation on GitHub. Learn more about best practice design principles with the Google Cloud Architecture Framework. Review the library of blueprints to help you accelerate the design and build of common enterprise workloads, including the following: Import data from Google Cloud into a secured BigQuery data warehouse Import data from an external network into a secured BigQuery data warehouse Deploy a secured serverless architecture using Cloud Run functions Deploy a secured serverless architecture using Cloud Run See related solutions to deploy on top of your foundation environment. For access to a demonstration environment, contact us at security-foundations-blueprint-support@google.com. Send feedback \ No newline at end of file diff --git a/Deploy_your_workloads.txt b/Deploy_your_workloads.txt new file mode 100644 index 0000000000000000000000000000000000000000..63c9727141447def71491d487ea446a79d5ce811 --- /dev/null +++ b/Deploy_your_workloads.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-gcp-deploying-your-workloads +Date Scraped: 2025-02-23T11:51:39.400Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Deploy your workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-08 UTC This document can help you plan and design the deployment phase of your migration to Google Cloud. After you've assessed your current environment, planned the migration to Google Cloud, and built your Google Cloud foundation, you can deploy your workloads. This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads (this document) Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs The following diagram illustrates the path of your migration journey. 
The deployment phase is the third phase in your migration to Google Cloud where you design a deployment process for your workloads. This document is useful if you're planning a migration from an on-premises environment, from a private-hosting environment, from another cloud provider to Google Cloud, or if you're evaluating the opportunity to migrate and want to explore what it might look like. In this document, you review the different deployment process types, in order of flexibility, automation, and complexity, along with criteria on how to pick an approach that's right for you: Deploy manually. Deploy with configuration management (CM) tools. Deploy by using container orchestration tools. Deploy automatically. Before you deploy your workloads, plan and design your deployment phase. First, you should evaluate the different deployment process types that you can implement for your workloads. When you evaluate deployment process types, you can decide to start with a targeted process and move to a more complex one in the future. This approach can lead to quicker results, but can also introduce friction when you move to a more advanced process, because you have to absorb the technical debt you accumulated while using the targeted process. For example, if you move from fully manual deployments to an automated solution, you might have to manage upgrades to your deployment pipeline and apps. While it's possible to implement different types of deployment processes according to your workloads' needs, this approach can also increase the complexity of this phase. If you implement different types of deployment processes, you can benefit from the added flexibility, but you might need expertise, tooling, and resources tailored to each process, which translates to more effort on your side. Deploy manually A fully manual deployment is backed by a provisioning, configuration, and deployment process that is completely non-automated. While there might be specifications and checklists for each step of the process, there is no automated check or enforcement of those specifications. A manual process is prone to human error, not repeatable, and its performance is limited by the human factor. Fully manual deployment processes can be useful, for example, when you need to quickly instrument an experiment in a sandboxed environment. Setting up a structured, automated process for an experiment that lasts minutes can unnecessarily slow down your pace, especially in the early stages of your migration, when you might lack the necessary expertise in the tools and practices that let you build an automated process. While this limitation isn't the case with Google Cloud, fully manual deployments might be your only option when dealing with bare metal environments that lack the necessary management APIs. In this case, you cannot implement an automated process due to the lack of the necessary interfaces. If you have a legacy virtualized infrastructure that doesn't support any automation, you might be forced to implement a fully manual process. We recommend that you avoid a fully manual deployment unless you have no other option. You can implement a fully manual provisioning, configuration, and deployment process by using tools such as the Google Cloud console, Cloud Shell, the Cloud APIs, and the Google Cloud CLI. Deploy with configuration management tools CM tools let you configure an environment in a repeatable and controlled way. These tools include a set of plugins and modules that already implement common configuration operations.
These tools let you focus on the end state that you want to achieve for your environment, rather than implementing the logic to reach that end state. If the included operations set isn't enough, CM tools often feature an extension system that you can use to develop your own modules. While these extensions are possible, try to use the predefined modules and plugins where applicable, to avoid extra development and maintenance burden. You use CM tools when you need to configure environments. You can also use them to provision your infrastructure and to implement a deployment process for your workloads. CM tools are a better process compared to a fully manual provisioning, configuration, and deployment process because it's repeatable, controlled, and auditable. However, there are several downsides, because CM tools aren't designed for provisioning or deployment tasks. They usually lack built-in features to implement elaborate provisioning logic, such as detecting and managing differences between the real-world state of your infrastructure and the wanted state, or rich deployment processes, such as deployments with no downtime or blue-green deployments. You can implement the missing features using the previously mentioned extension points. These extensions can result in extra effort and can increase the overall complexity of the deployment process, because you need the necessary expertise to design, develop, and maintain a customized deployment solution. You can implement this type of provisioning, configuration, and deployment process by using tools such as Ansible, Chef, Puppet, and SaltStack. A basic deployment process that uses CM tools can prepare runtime environments and deploy workloads in those environments. For example, your process might create a Compute Engine instance, install the required software, and deploy your workloads. It takes time to configure a runtime environment that supports your workloads. To reduce the amount of time needed to configure a runtime environment, you can implement a process that runs CM tools to produce a template, such as an operating system (OS) image. You can use this template to create instances of your runtime environment that are ready for your workloads. For example, you can use Cloud Build to build Compute Engine images. These images are often called golden images or silver images, both of which are immutable templates, such as OS images, that you create for your runtime environments. The difference between the two depends on the amount of work a deployment process must complete before the images can run a workload: Golden image: A template that you create for your runtime environments or prepare from a base template. Golden images include all data and configuration information that your runtime environments need to accomplish their assigned tasks. You can prepare several types of golden images to accomplish different tasks. Synonyms for golden image types include flavors, spins, and archetypes. Silver image: A template that you create for your runtime environments by applying minimal changes to a golden image or a base template. Runtime environments running a silver image complete their provisioning and configuration upon the first boot, according to the needs of the use cases that those runtime environments must support. Deploy by using container orchestration tools If you already invested, or plan to invest in the containerization of your workloads, you can use a container orchestration tool to deploy your workloads. 
A container orchestration tool takes care of managing the infrastructure underpinning your environment, and supports a wide range of deployment operations and building blocks to implement your deployment logic that you can use when the built-in ones aren't enough. By using these tools, you can focus on composing the actual deployment logic using the provided mechanisms, instead of having to implement them. Container orchestration tools also provide abstractions that you can use to generalize your deployment processes to different underlying environments, so you don't have to design and implement multiple processes for each of your environments. For example, these tools usually include the logic for scaling and upgrading your deployments, so you don't have to implement them by yourself. You can even start leveraging these tools to implement your deployment processes in your current environment, and you can then port them to the target environment, because the implementation is largely the same, by design. By adopting these tools early, you gain the experience in administering containerized environments, and this experience is useful for your migration to Google Cloud. You use a container orchestration tool if your workloads are already containerized or if you can containerize them in the future and you plan to invest in this effort. In the latter case, you should run a thorough analysis of each workload to determine the following: Ensure that a containerization of the workload is possible. Assess the potential benefits that you could gain by containerizing the workload. If the potential pitfalls outweigh the benefits of containerization, you should only use a container orchestration tool if your teams are already committed to using them and if you don't want to manage heterogeneous environments. For example, data warehouse solutions aren't typically deployed using container orchestration tools, because they aren't designed to run in ephemeral containers. You can implement this deployment process using Google Kubernetes Engine (GKE) on Google Cloud. If you're interested in a serverless environment, you can use tools, such as Cloud Run. Deploy automatically Regardless of the provisioning, configuration, deployment, and orchestration tools you use in your environment, you can implement fully automated deployment processes to minimize human errors and to consolidate, streamline, and standardize the processes across your organization. You can also insert manual approval steps in the deployment process if needed, but every step is automated. The steps of a typical end-to-end deployment pipeline are as follows: Code review. Continuous integration (CI). Artifact production. Continuous deployment (CD), with eventual manual approvals. You can automate each of those steps independently from the others, so you can gradually migrate your current deployment processes towards an automated solution, or you can implement a new process directly in the target environment. For this process to be effective, you need testing and validation procedures in each step of the pipeline, not just during the code review step or the CI step. For each change in your codebase, you should perform a thorough review to assess the quality of the change. Most source code management tools have a top-level support for code reviews. 
They also often support the automatic creation and initialization of reviews by looking at the source code area that was modified, provided that you configured the teams responsible for each area of your codebase. In each review you can also run automated checks on the source code, such as linters and static analyzers to enforce consistency and quality standards across the codebase. After you review and integrate a change in the codebase, the CI tool can automatically run tests, evaluate the results, and then notify you about any issues with the current build. You can add value to this step by following a test-driven development process for a complete test coverage of the features of each workload. For each successful build, you can automate the creation of deployment artifacts. Such artifacts represent a ready-to-deploy version of your workloads, with the latest changes. As part of the artifact creation step, you can also perform an automated validation of the artifact itself. For example, you run a vulnerability scan against known issues and approve the artifact for deployment only if no vulnerabilities are found. For example, you can use Artifact Registry to scan your artifacts for known vulnerabilities. Finally, you can automate the deployment of each approved artifact in the target environment. If you have multiple runtime environments, you can also implement unique deployment logic for each one, even adding manual approval steps, if needed. For example, you can automatically deploy new versions of your workloads in your development, quality assurance, and pre-production environments, while still requiring a manual review and approval from your production control team to deploy in your production environment. While a fully automated end-to-end process is one of your best options if you need an automated, structured, streamlined, and auditable process, implementing this process isn't a trivial task. Before choosing this kind of process, you should have a clear view on the expected benefits, the costs involved, and if your current level of team knowledge and expertise is sufficient to implement a fully automated deployment process. You can implement fully automated deployment processes with Cloud Deploy. What's next Learn how to migrate your deployment processes. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Deployment_methodology(1).txt b/Deployment_methodology(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..e021b5fef1d445f10f1bd8af340f204fb007edc0 --- /dev/null +++ b/Deployment_methodology(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/deployment-methodology +Date Scraped: 2025-02-23T11:47:18.105Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deployment methodology Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC The enterprise application blueprint is deployed through a series of automated systems and pipelines. Each pipeline deploys a particular aspect of the blueprint. Pipelines provide a controllable, auditable, and repeatable mechanism for building out the blueprint. The following diagram shows the interaction of the various pipelines, repositories, and personas. 
The blueprint uses the following pipelines: The foundation infrastructure pipeline (part of the enterprise foundations blueprint) deploys the application factory, the multi-tenant infrastructure pipeline, and the fleet-scope pipeline. The multi-tenant infrastructure pipeline deploys the GKE clusters, and the other managed services that the enterprise application blueprint relies on. The fleet-scope pipeline configures fleet scopes, namespaces, and RBAC roles and bindings. The application factory provides a mechanism to deploy new application pipelines through a templated process. The application CI/CD pipeline provides a CI/CD pipeline to deploy services into GKE clusters. Config Sync deploys and maintains additional Kubernetes configurations, including Policy Controller constraints. Repositories, repository contributors, and repository change approvers The blueprint pipelines are triggered through changes to Git repositories. The following table describes the repositories that are used throughout the blueprint, who contributes to the repository, who approves changes to the repository, which pipeline uses the repository, and the description of what the repository contains. Repository Repository contributor Repository change approver Pipeline Description infra Developer platform developer Developer platform administrator Foundation infrastructure pipeline Repository that contains the code to deploy the multi-tenant infrastructure pipeline, the application, and the fleet-scope pipeline eab-infra Developer platform developer Developer platform administrator Multi-tenant infrastructure The Terraform modules that are used by developer platform teams when they create the infrastructure fleet-scope Developer platform developer Developer platform administrator Fleet-scope The repository that defines the fleet team scopes and namespaces in the fleet app-factory Developer platform developer Developer platform administrator Application factory The code that defines the application repository and references the modules in the terraform-modules repository app-template Application developer Application operator Application factory The base code that is placed in the app-code repository when the repository is first created terraform-modules Developer platform developer Developer platform administrator Application factoryMulti-tenant infrastructure The Terraform modules that define the application and the infrastructure app-code Application developer Application operator Application CI/CD The application code that is deployed into the GKE clusters config-policy Developer platform developer Developer platform administrator Config Sync The policies that are used by the GKE clusters to maintain their configurations Automated pipelines help build security, auditability, traceability, repeatability, controllability, and compliance into the deployment process. By using different systems that have different permissions and putting different people into different operating groups, you create separation of responsibilities and follow the principle of least privilege. Foundation infrastructure pipeline The foundation infrastructure pipeline is described in the enterprise foundations blueprint and is used as a generic entrypoint for further resource deployments. The following table describes the components that the pipeline creates. Component Description Multi-tenant infrastructure pipeline Creates the shared infrastructure that is used by all tenants of the developer platform. 
Fleet-scope pipeline Creates namespaces and RBAC role bindings. Application factory Creates the application CI/CD pipelines that are used to deploy the services. Multi-tenant infrastructure pipeline The multi-tenant infrastructure pipeline deploys fleets, GKE clusters, and related shared resources. The following diagram shows the components of the multi-tenant infrastructure pipeline. The following table describes the components that the multi-tenant infrastructure pipeline builds. Component Description GKE clusters Provides hosting for the services of the containerized application. Policy Controller Provides policies that help ensure the proper configuration of the GKE clusters and services. Config Sync Applies the Policy Controller policies to clusters and maintains consistent application of the policies. Cloud Key Management Service (Cloud KMS) key Creates the customer-managed encryption key (CMEK) that's used by GKE, AlloyDB for PostgreSQL, and Secret Manager. Secret Manager secret Provides a secret store for the RSA key pair that's used for user authentication with JSON Web Tokens (JWT). Google Cloud Armor security policy Provides the policy that's used by the Google Cloud Armor web-application firewall. Fleet-scope pipeline The fleet-scope pipeline is responsible for configuring the namespaces and RBAC bindings in the fleet's GKE clusters. The following table describes the components that the fleet-scope pipeline builds. Component Description Namespace Defines the logical clusters within the physical cluster. RBAC (roles and bindings) Defines the authorization that a Kubernetes service account has at the cluster level or namespace level. Application factory The application factory is deployed by the foundation infrastructure pipeline and is used to create infrastructure for each new application. This infrastructure includes a Google Cloud project that holds the application CI/CD pipeline. As engineering organizations scale, the application team can onboard new applications using the application factory. Scaling enables growth by adding discrete application CI/CD pipelines and the supporting infrastructure for deploying new applications within the multi-tenant architecture. The following diagram shows the application factory. The application factory has the following components: Application factory repository: A Git repository that stores the declarative application definition. Pipelines to create applications: Pipelines that use Cloud Build to complete the following: Create a declarative application definition and store it in the application catalog. Apply the declarative application definition to create the application resources. Starter application template repository: Templates for creating a simple application (for example, a Python, Golang, or Java microservice). Shared modules: Terraform modules that are created with standard practices and that are used for multiple purposes, including application onboarding and deployment. The following table lists the components that the application factory creates for each application. Component Description Application source code repository Contains source code and related configuration used for building and deploying a specific application. Application CI/CD pipeline A Cloud Build-based pipeline that is used to connect to the source code repository and provides a CI/CD pipeline for deploying application services.
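As a hedged illustration of the per-application components listed in the preceding table, the following Terraform sketch shows the kind of resources an application factory might create for one application: a source code repository, a container image repository, and a Cloud Build trigger for the application CI/CD pipeline. This is not the blueprint's actual code; the google provider is assumed to be configured, and the project, repository, and file names are placeholders.

# Illustrative per-application resources; in the blueprint these are driven
# from a declarative application definition rather than written by hand.
resource "google_sourcerepo_repository" "app_code" {
  project = "example-app-cicd-project" # placeholder
  name    = "frontend-app-code"
}

resource "google_artifact_registry_repository" "app_images" {
  project       = "example-app-cicd-project" # placeholder
  location      = "us-central1"
  repository_id = "frontend-app-images"
  format        = "DOCKER"
}

# Each push to the application's main branch starts the application CI/CD
# pipeline that is defined alongside the application code.
resource "google_cloudbuild_trigger" "app_ci_cd" {
  project  = "example-app-cicd-project" # placeholder
  name     = "frontend-app-ci-cd"
  location = "global"

  trigger_template {
    repo_name   = google_sourcerepo_repository.app_code.name
    branch_name = "^main$"
  }

  filename = "cloudbuild.yaml" # build and deployment steps live with the app code
}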
Application CI/CD pipeline The application CI/CD pipeline enables automated build and deployment of container-based applications. The pipeline consists of continuous integration (CI) and continuous deployment (CD) steps. The pipeline architecture is based on the Secure CI/CD blueprint. The application CI/CD pipeline uses immutable container images across your environments. Immutable container images help ensure that the same image is deployed across all environments and isn't modified while the container is running. If you must update the application code or apply a patch, you build a new image and redeploy it. The use of immutable container images requires you to externalize your container configuration so that configuration information is read during run time. To reach GKE clusters over a private network path and manage kubeconfig authentication, the application CI/CD pipeline interacts with the GKE clusters through the Connect gateway. The pipeline also uses private pools for the CI/CD environment. Each application source code repository includes Kubernetes configurations. These configurations enable applications to successfully run as Kubernetes services on GKE. The following table describes the types of Kubernetes configurations that the application CI/CD pipeline applies. Component Description Deployment Defines a scaled set of pods (containers). Service Makes a deployment reachable over the cluster network. Virtual service Makes a service part of the service mesh. Destination rule Defines how peers on the service mesh should reach a virtual service. Used in the blueprint to configure locality load balancing for east-west traffic. Authorization policy Sets access control between workloads in the service mesh. Kubernetes service account Defines the identity that's used by a Kubernetes service. Workload Identity Federation for GKE defines the Google Cloud service account that's used to access Google Cloud resources. Gateway Allows external ingress traffic to reach a service. The gateway is only required by deployments that receive external traffic. GCPBackendPolicy Configure SSL, Google Cloud Armor, session affinity, and connection draining for deployments that receive external traffic. GCPBackendPolicy is used only by deployments that receive external traffic. PodMonitoring Configures collection of Prometheus metrics exported by an application. Continuous integration The following diagram shows the continuous integration process. The process is the following: A developer commits application code to the application source repository. This operation triggers Cloud Build to begin the integration pipeline. Cloud Build creates a container image, pushes the container image to Artifact Registry, and creates an image digest. Cloud Build performs automated tests for the application. Depending on the application language, different testing packages may be performed. Cloud Build performs the following scans on the container image: Cloud Build analyzes the container using the Container Structure Tests framework. This framework performs command tests, file existence tests, file content tests, and metadata tests. Cloud Build uses vulnerability scanning to identify any vulnerabilities in the container image against a vulnerability database that's maintained by Google Cloud. Cloud Build approves the image to continue in the pipeline after successful scan results. Binary Authorization signs the image. 
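The following Terraform sketch illustrates the image-signing and admission control described in the continuous integration steps above. It is not the blueprint's own code: the google provider is assumed to be configured, the project and names are placeholders, and a real attestor would also need a registered public key plus a signing step in the pipeline.

# Container Analysis note that the attestor is based on.
resource "google_container_analysis_note" "build_attestor_note" {
  project = "example-cicd-project" # placeholder
  name    = "built-by-ci-note"

  attestation_authority {
    hint {
      human_readable_name = "Attestations created by the CI pipeline"
    }
  }
}

# Attestor representing the CI pipeline. In practice you would also register a
# public key here and have the pipeline sign each approved image digest.
resource "google_binary_authorization_attestor" "build_attestor" {
  project = "example-cicd-project" # placeholder
  name    = "built-by-ci"

  attestation_authority_note {
    note_reference = google_container_analysis_note.build_attestor_note.name
  }
}

# Project-wide policy: block deployment of images that lack the CI attestation.
resource "google_binary_authorization_policy" "policy" {
  project = "example-cicd-project" # placeholder

  default_admission_rule {
    evaluation_mode         = "REQUIRE_ATTESTATION"
    enforcement_mode        = "ENFORCED_BLOCK_AND_AUDIT_LOG"
    require_attestations_by = [google_binary_authorization_attestor.build_attestor.name]
  }

  # Allow Google-provided system images so cluster add-ons keep working.
  global_policy_evaluation_mode = "ENABLE"
}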
Binary Authorization is a service on Google Cloud that provides software supply-chain security for container-based applications by using policies, rules, notes, attestations, attestors, and signers. At deployment time, the Binary Authorization policy enforcer helps ensure the provenance of the container before allowing the container to deploy. Cloud Build creates a release in Cloud Deploy to begin the deployment process. To see the security information for a build, go to the Security insights panel. These insights include vulnerabilities that were detected using Artifact Analysis, and the build's level of security assurance denoted by SLSA guidelines. Continuous deployment The following diagram shows the continuous deployment process. The process is the following: At the end of the build process, the application CI/CD pipeline creates a new Cloud Deploy release to launch the newly built container images progressively to each environment. Cloud Deploy initiates a rollout to the first environment of the deployment pipeline, which is usually development. Each deployment stage is configured to require manual approval. The Cloud Deploy pipelines uses sequential deployment to deploy images to each cluster in an environment in order. At the end of each deployment stage, Cloud Deploy verifies the functionality of the deployed containers. These steps are configurable within the Skaffold configuration for the applications. Deploying a new application The following diagram shows how the application factory and application CI/CD pipeline work together to create and deploy a new application. The process for defining a new application is the following: An application operator defines a new application within their tenant by executing a Cloud Build trigger to generate the application definition. The trigger adds a new entry for the application in Terraform and commits the change to the application factory repository. The committed change triggers the creation of application-specific repositories and projects. Cloud Build completes the following: Creates two new Git repositories to host the application's source code and IaC. Pushes the Kubernetes manifests for network policies, and Workload Identity Federation for GKE to the Configuration management repository. Creates the application's CI/CD project and the Cloud Build IaC trigger. The Cloud Build IaC trigger for the application creates the application CI/CD pipeline and the Artifact Registry repository in the application's CI/CD project. Config Sync deploys the network policies and Workload Identity Federation for GKE configurations to the multi-tenant GKE clusters. The fleet scope creation pipeline creates the fleet scope and namespace for the application on multi-tenant GKE clusters. The application's CI/CD pipeline performs the initial deployment of the application to the GKE clusters. Optionally, the application team uses the Cloud Build IaC trigger to deploy projects and additional resources (for example, databases and other managed services) to dedicated single-tenant projects, one for each environment. GKE Enterprise configuration and policy management In the blueprint, developer platform administrators use Config Sync to create cluster-level configurations in each environment. Config Sync connects to a Git repository which serves as the source of truth for the chosen state of the cluster configuration. 
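As one way to make that connection concrete, the following Terraform sketch (illustrative only, not the blueprint's configuration) enables Config Sync on an existing fleet membership so that it syncs cluster configuration from a Git repository. The project, membership, repository URL, branch, and directory are placeholders, and the google provider is assumed to be configured.

# Minimal sketch: point Config Sync at the repository that holds the chosen
# cluster configuration. Assumes the configmanagement fleet feature is enabled
# and that a membership named dev-cluster-membership already exists.
resource "google_gke_hub_feature_membership" "config_sync" {
  project    = "example-fleet-project"   # placeholder
  location   = "global"
  feature    = "configmanagement"
  membership = "dev-cluster-membership"  # placeholder

  configmanagement {
    config_sync {
      source_format = "unstructured"

      git {
        sync_repo   = "https://example.com/acme/config-policy.git" # placeholder
        sync_branch = "development"
        policy_dir  = "clusters/dev"
        secret_type = "token" # how Config Sync authenticates to the repository
      }
    }
  }
}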
Config Sync continuously monitors the actual state of the configuration in the clusters and reconciles any discrepancies by applying updates to ensure adherence to the chosen state, despite manual changes. Configs are applied to the environments (development, non-production, and production) by using a branching strategy on the repository. In this blueprint, Config Sync applies Policy Controller constraints. These configurations define security and compliance controls as defined by developer platform administrators for the organization. This blueprint relies on other pipelines to apply other configurations: the application CI/CD pipelines apply application-specific configuration, and the fleet-scope pipeline creates namespaces and associated role bindings. What's next Read about the Cymbal Bank application architecture (next document in this series). Send feedback \ No newline at end of file diff --git a/Deployment_methodology.txt b/Deployment_methodology.txt new file mode 100644 index 0000000000000000000000000000000000000000..679c56d4b2cc61d5593b0f56961bbf6c9bd66178 --- /dev/null +++ b/Deployment_methodology.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations/deployment-methodology +Date Scraped: 2025-02-23T11:45:36.109Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deployment methodology Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC We recommend that you use declarative infrastructure to deploy your foundation in a consistent and controllable manner. This approach helps enable consistent governance by enforcing policy controls about acceptable resource configurations into your pipelines. The blueprint is deployed using a GitOps flow, with Terraform used to define infrastructure as code (IaC), a Git repository for version control and approval of code, and Cloud Build for CI/CD automation in the deployment pipeline. For an introduction to this concept, see managing infrastructure as code with Terraform, Cloud Build, and GitOps. The following sections describe how the deployment pipeline is used to manage resources in your organization. Pipeline layers To separate the teams and technology stack that are responsible for managing different layers of your environment, we recommend a model that uses different pipelines and different personas that are responsible for each layer of the stack. The following diagram introduces our recommended model for separating a foundation pipeline, infrastructure pipeline, and application pipeline. The diagram introduces the pipeline layers in this model: The foundation pipeline deploys the foundation resources that are used across the platform. We recommend that a single central team is responsible for managing the foundation resources that are consumed by multiple business units and workloads. The infrastructure pipeline deploys projects and infrastructure that are used by workloads, such as VM instances or databases. The blueprint sets up a separate infrastructure pipeline for each business unit, or you might prefer a single infrastructure pipeline used by multiple teams. The application pipeline deploys the artifacts for each workload, such as containers or images. You might have many different application teams with individual application pipelines. The following sections introduce the usage of each pipeline layer. The foundation pipeline The foundation pipeline deploys the foundation resources. 
It also sets up the infrastructure pipeline that is used to deploy infrastructure used by workloads. To create the foundation pipeline, you first clone or fork the terraform-example-foundation to your own Git repository. Follow the steps in the 0-bootstrap README file to configure your bootstrap folder and resources. Stage Description 0-bootstrap Bootstraps a Google Cloud organization. This step also configures a CI/CD pipeline for the blueprint code in subsequent stages. The CICD project contains the Cloud Build foundation pipeline for deploying resources. The seed project includes the Cloud Storage buckets that contain the Terraform state of the foundation infrastructure and includes highly privileged service accounts that are used by the foundation pipeline to create resources. The Terraform state is protected through storage Object Versioning. When the CI/CD pipeline runs, it acts as the service accounts that are managed in the seed project. After you create the foundation pipeline in the 0-bootstrap stage, the following stages deploy resources on the foundation pipeline. Review the README directions for each stage and implement each stage sequentially. Stage Description 1-org Sets up top-level shared folders, projects for shared services, organization-level logging, and baseline security settings through organization policies. 2-environments Sets up development, non-production, and production environments within the Google Cloud organization that you've created. 3-networks-dual-svpcor3-networks-hub-and-spoke Sets up shared VPCs in your chosen topology and the associated network resources. The infrastructure pipeline The infrastructure pipeline deploys the projects and infrastructure (for example, the VM instances and databases) that are used by workloads. The foundation pipeline deploys multiple infrastructure pipelines. This separation between the foundation pipeline and infrastructure pipeline allows for a separation between platform-wide resources and workload-specific resources. The following diagram describes how the blueprint configures multiple infrastructure pipelines that are intended for use by separate teams. The diagram describes the following key concepts: Each infrastructure pipeline is used to manage infrastructure resources independently of the foundation resources. Each business unit has its own infrastructure pipeline, managed in a dedicated project in the common folder. Each of the infrastructure pipelines has a service account with permission to deploy resources only to the projects that are associated with that business unit. This strategy creates a separation of duties between the privileged service accounts used for the foundation pipeline and those used by each infrastructure pipeline This approach with multiple infrastructure pipelines is recommended when you have multiple entities inside your organization that have the skills and appetite to manage their infrastructure separately, particularly if they have different requirements such as the types of pipeline validation policy they want to enforce. Alternatively, you might prefer to have a single infrastructure pipeline managed by a single team with consistent validation policies. In the terraform-example-foundation, stage 4 configures an infrastructure pipeline, and stage 5 demonstrates an example of using that pipeline to deploy infrastructure resources. Stage Description 4-projects Sets up a folder structure, projects, and an infrastructure pipeline. 
5-app-infra (optional) Deploys example workload projects with a Compute Engine instance by using the infrastructure pipeline. The application pipeline The application pipeline is responsible for deploying application artifacts for each individual workload, such as images or Kubernetes containers that run the business logic of your application. These artifacts are deployed to infrastructure resources that were deployed by your infrastructure pipeline. The enterprise foundation blueprint sets up your foundation pipeline and infrastructure pipeline, but doesn't deploy an application pipeline. For an example application pipeline, see the enterprise application blueprint. Automating your pipeline with Cloud Build The blueprint uses Cloud Build to automate CI/CD processes. The following table describes the controls that are built into the foundation pipeline and infrastructure pipeline that are deployed by the terraform-example-foundation repository. If you are developing your own pipelines using other CI/CD automation tools, we recommend that you apply similar controls. Control Description Separate build configurations to validate code before deploying The blueprint uses two Cloud Build build configuration files for the entire pipeline, and each repository that is associated with a stage has two Cloud Build triggers that are associated with those build configuration files. When code is pushed to a repository branch, the triggers first run cloudbuild-tf-plan.yaml, which validates your code with policy checks and runs terraform plan against that branch, and then cloudbuild-tf-apply.yaml runs terraform apply on the outcome of that plan. Terraform policy checks The blueprint includes a set of Open Policy Agent constraints that are enforced by the policy validation in the Google Cloud CLI. These constraints define the acceptable resource configurations that can be deployed by your pipeline. If a build doesn't meet policy in the first build configuration, then the second build configuration doesn't deploy any resources. The policies enforced in the blueprint are forked from GoogleCloudPlatform/policy-library on GitHub. You can write additional policies for the library to enforce custom policies that meet your requirements. Principle of least privilege The foundation pipeline has a different service account for each stage with an allow policy that grants only the minimum IAM roles for that stage. Each Cloud Build trigger runs as the specific service account for that stage. Using different accounts helps mitigate the risk that modifying one repository could impact the resources that are managed by another repository. To understand the particular IAM roles applied to each service account, see the sa.tf Terraform code in the bootstrap stage. Cloud Build private pools The blueprint uses Cloud Build private pools. Private pools let you optionally enforce additional controls such as restricting access to public repositories or running Cloud Build inside a VPC Service Controls perimeter. Cloud Build custom builders The blueprint creates its own custom builder to run Terraform. For more information, see 0-bootstrap/Dockerfile. This control enforces that the pipeline consistently runs with a known set of libraries at pinned versions. Deployment approval Optionally, you can add a manual approval stage to Cloud Build. This approval adds an additional checkpoint after the build is triggered but before it runs so that a privileged user can manually approve the build.
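The following Terraform sketch illustrates several of the controls in the preceding table: a trigger that runs as a stage-specific service account and requires manual approval, and a private worker pool. It is not the terraform-example-foundation code; the google provider is assumed to be configured, and the repository, project, and service account names are placeholders.

# Trigger on pushes to the development branch of a Cloud Source Repository,
# run as the service account for this stage only, and require approval.
resource "google_cloudbuild_trigger" "env_apply" {
  project  = "example-cicd-project" # placeholder
  name     = "tf-apply-environments"
  location = "global"

  trigger_template {
    repo_name   = "gcp-environments" # placeholder
    branch_name = "^development$"
  }

  # Build steps live in the repository rather than inline in Terraform.
  filename = "cloudbuild-tf-apply.yaml"

  # Least privilege: a stage-specific service account (placeholder email).
  service_account = "projects/example-cicd-project/serviceAccounts/tf-env-apply@example-cicd-project.iam.gserviceaccount.com"

  # A privileged user must approve each build before it runs.
  approval_config {
    approval_required = true
  }
}

# Private pool; the build configuration file would select it through its build
# options so that builds run on workers without external IP addresses.
resource "google_cloudbuild_worker_pool" "private_pool" {
  project  = "example-cicd-project" # placeholder
  name     = "foundation-private-pool"
  location = "us-central1"

  worker_config {
    machine_type   = "e2-standard-4"
    no_external_ip = true
  }
}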
Branching strategy We recommend a persistent branch strategy for submitting code to your Git system and deploying resources through the foundation pipeline. The following diagram describes the persistent branch strategy. The diagram describes three persistent branches in Git (development, non-production, and production) that reflect the corresponding Google Cloud environments. There are also multiple ephemeral feature branches that don't correspond to resources that are deployed in your Google Cloud environments. We recommend that you enforce a pull request (PR) process into your Git system so that any code that is merged to a persistent branch has an approved PR. To develop code with this persistent branch strategy, follow these high-level steps: When you're developing new capabilities or working on a bug fix, create a new branch based off of the development branch. Use a naming convention for your branch that includes the type of change, a ticket number or other identifier, and a human-readable description, like feature/123456-org-policies. When you complete the work in the feature branch, open a PR that targets the development branch. When you submit the PR, the PR triggers the foundation pipeline to perform terraform plan and terraform validate to stage and verify the changes. After you validate the changes to the code, merge the feature or bug fix into the development branch. The merge process triggers the foundation pipeline to run terraform apply to deploy the latest changes in the development branch to the development environment. Review the changes in the development environment using any manual reviews, functional tests, or end-to-end tests that are relevant to your use case. Then promote changes to the non-production environment by opening a PR that targets the non-production branch and merge your changes. To deploy resources to the production environment, repeat the same process as step 6: review and validate the deployed resources, open a PR to the production branch, and merge. What's next Read about operations best practices (next document in this series). Send feedback \ No newline at end of file diff --git a/Design_a_self-service_data_platform_for_a_data_mesh.txt b/Design_a_self-service_data_platform_for_a_data_mesh.txt new file mode 100644 index 0000000000000000000000000000000000000000..561376d4c99b0b032e2ef284d8f6995e33ba75f3 --- /dev/null +++ b/Design_a_self-service_data_platform_for_a_data_mesh.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/design-self-service-data-platform-data-mesh +Date Scraped: 2025-02-23T11:48:55.743Z + +Content: +Home Docs Cloud Architecture Center Send feedback Design a self-service data platform for a data mesh Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-03 UTC In a data mesh, a self-service data platform enables users to generate value from data by enabling them to autonomously build, share, and use data products. To fully realize these benefits, we recommend that your self-service data platform provide the capabilities described in this document. This document is part of a series which describes how to implement a data mesh on Google Cloud. It assumes that you have read and are familiar with the concepts described in Build a modern, distributed Data Mesh with Google Cloud and Architecture and functions in a data mesh. 
The series has the following parts: Architecture and functions in a data mesh Design a self-service data platform for a data mesh (this document) Build data products in a data mesh Discover and consume data products in a data mesh Data platform teams typically create central self-service data platforms, as described in this document. This team builds the solutions and components that domain teams (both data producers and data consumers) can use to both create and consume data products. Domain teams represent functional parts of a data mesh. By building these components, the data platform team enables a smooth development experience and reduces the complexity of building, deploying, and maintaining data products that are secure and interoperable. Ultimately, the data platform team should allow domain teams to move faster. They help increase the efficiency of domain teams by providing those teams with a limited set of tools that address their needs. In providing these tools, the data platform team removes the burden of having the domain team build and source these tools themselves. The tooling choices should be customizable to different needs and not force an inflexible way of working on the data domain teams. The data platform team shouldn't focus on building custom solutions for data pipeline orchestrators or for continuous integration and continuous deployment (CI/CD) systems. Solutions such as CI/CD systems are readily available as managed cloud services, for example, Cloud Build. Using managed cloud services can reduce operational overheads for the data platform team and let them focus on the specific needs of the data domain teams as the users of the platform. With reduced operational overhead, the data platform team can focus more time on addressing the specific needs of the data domain teams. Architecture The following diagram illustrates the architecture components of a self-service data platform. The diagram also shows how these components can support teams as they develop and consume data products across the data mesh. As shown in the preceding diagram, the self-service data platform provides the following: Platform solutions: These solutions consist of composable components for provisioning Google Cloud projects and resources, which users select and assemble in different combinations to meet their specific requirements. Instead of directly interacting with the components, users of the platform can interact with platform solutions to help them to achieve a specific goal. Data domain teams should design platform solutions to solve common pain-points and friction areas that cause slowdowns in data product development and consumption. For example, data domain teams onboarding onto the data mesh can use an infrastructure-as-code (IaC) template. Using IaC templates lets them quickly create a set of Google Cloud projects with standard Identity and Access Management (IAM) permissions, networking, security policies, and relevant Google Cloud APIs enabled for data product development. We recommend that each solution is accompanied with documentation such as "how to get started" guidance and code samples. Data platform solutions and their components must be secure and compliant by default. Common services: These services provide data product discoverability, management, sharing, and observability. These services facilitate data consumers' trust in data products, and are an effective way for data producers to alert data consumers to issues with their data products. 
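For example, an onboarding template of the kind described above might be sketched in Terraform as follows. This is a minimal illustration, not a prescribed solution: it assumes that the google provider is configured, and the folder, billing account, project, and group values are placeholders.

# A data-product development project with a standard set of APIs enabled and a
# baseline IAM binding for the producing domain team.
resource "google_project" "data_product_dev" {
  name            = "sales-data-product-dev"
  project_id      = "sales-data-product-dev-0001" # placeholder, must be globally unique
  folder_id       = "111111111111"                # placeholder
  billing_account = "000000-AAAAAA-BBBBBB"        # placeholder
}

# Enable the APIs that data product development typically needs.
resource "google_project_service" "services" {
  for_each = toset([
    "bigquery.googleapis.com",
    "storage.googleapis.com",
    "dataplex.googleapis.com",
    "logging.googleapis.com",
    "monitoring.googleapis.com",
  ])

  project = google_project.data_product_dev.project_id
  service = each.value
}

# Baseline IAM for the domain team's own workspace; organization-wide guardrails
# would come from organization policies rather than from this template.
resource "google_project_iam_member" "domain_team" {
  project = google_project.data_product_dev.project_id
  role    = "roles/editor"
  member  = "group:sales-data-producers@example.com" # placeholder
}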
Data platform solutions and common services might include the following: IaC templates to set up foundational data product development workspace environments, which include the following: IAM Logging and monitoring Networking Security and compliance guardrails Resource tagging for billing attribution Data product storage, transformation, and publishing Data product registration, cataloging, and metadata tagging IaC templates which follow organizational security guardrails and best practices that can be used to deploy Google Cloud resources into existing data product development workspaces. Application and data pipeline templates that can be used to bootstrap new projects or used as reference for existing projects. Examples of such templates include the following: Usage of common libraries and frameworks Integration with platform logging, monitoring, and observability tooling Build and test tooling Configuration management Packaging and CI/CD pipelines for deployment Authentication, deployment, and management of credentials Common services to provide data product observability and governance which can include the following: Uptime checks to show the overall state of data products. Custom metrics to give helpful indicators about data products. Operational support by the central team such that data consumer teams are alerted of changes in data products they use. Product scorecards to show how data products are performing. A metadata catalog for discovering data products. A centrally defined set of computational policies that can be applied globally across the data mesh. A data marketplace to facilitate data sharing across domain teams. Create platform components and solutions using IaC templates discusses the advantages of IaC templates to expose and deploy data products. Provide common services discusses why it's helpful to provide domain teams with common infrastructure components that have been built and are managed by the data platform team. Create platform components and solutions using IaC templates The goal of data platform teams is to set up self-service data platforms to get more value from data. To build these platforms, they create and provide domain teams with vetted, secure, and self-serviceable infrastructure templates. Domain teams use these templates to deploy their data development and data consumption environments. IaC templates help data platform teams achieve that goal and enable scale. Using vetted and trusted IaC templates simplifies the resource deployment process for domain teams by allowing those teams to reuse existing CI/CD pipelines. This approach lets domain teams quickly get started and become productive within the data mesh. IaC templates can be created using an IaC tool. Although there are multiple IaC tools, including Cloud Config Connector, Pulumi, Chef, and Ansible, this document provides examples for Terraform-based IaC tools. Terraform is an open source IaC tool that allows the data platform team to efficiently create composable platform components and solutions for Google Cloud resources. Using Terraform, the data platform team writes code that specifies the chosen end-state and lets the tool figure out how to achieve that state. This declarative approach lets the data platform team treat infrastructure resources as immutable artifacts for deployment across environments. It also helps to reduce the risk of inconsistencies arising between deployed resources and the declared code in source control (referred to as configuration drift). 
Configuration drift caused by ad hoc and manual changes to infrastructure hinders safe and repeatable deployment of IaC components into production environments. Common IaC templates for composable platform components include using Terraform modules for deploying resources such as a BigQuery dataset, Cloud Storage bucket, or Cloud SQL database. Terraform modules can be combined into end-to-end solutions for deploying complete Google Cloud projects, including relevant resources deployed using the composable modules. Example Terraform modules can be found in the Terraform blueprints for Google Cloud. Each Terraform module should by default satisfy security guardrails and compliance policies that your organization uses. These guardrails and policies can also be expressed as code and be automated using automated compliance verification tooling such as Google Cloud policy validation tool. Your organization should continuously test the platform-provided Terraform modules, using the same automated compliance guardrails that it uses to promote changes into production. To make IaC components and solutions discoverable and consumable for domain teams that have minimal experience with Terraform, we recommend that you use services such as Service Catalog. Users who have significant customization requirements should be allowed to create their own deployment solutions from the same composable Terraform templates used by existing solutions. When using Terraform, we recommend that you follow the Google Cloud best-practices as outlined in Best practices for using Terraform. To illustrate how Terraform can be used to create platform components, the following sections discuss examples of how Terraform can be used to expose consumption interfaces and to consume a data product. Expose a consumption interface A consumption interface for a data product is a set of guarantees on the data quality and operational parameters provided by the data domain team to enable other teams to discover and use their data products. Each consumption interface also includes a product support model and product documentation. A data product may have different types of consumption interfaces, such as APIs or streams, as described in Build data products in a data mesh. The most common consumption interface might be a BigQuery authorized dataset, authorized view, or authorized function. This interface exposes a read-only virtual table, which is expressed as a query into the data mesh. The interface does not grant reader permissions to directly access the underlying data. Google provides an example Terraform module for creating authorized views without granting teams permissions to the underlying authorized datasets. The following code from this Terraform module grants these IAM permissions on the dataset_id authorized view: module "add_authorization" { source = "terraform-google-modules/bigquery/google//modules/authorization" version = "~> 4.1" dataset_id = module.dataset.bigquery_dataset.dataset_id project_id = module.dataset.bigquery_dataset.project roles = [ { role = "roles/bigquery.dataEditor" group_by_email = "ops@mycompany.com" } ] authorized_views = [ { project_id = "view_project" dataset_id = "view_dataset" table_id = "view_id" } ] authorized_datasets = [ { project_id = "auth_dataset_project" dataset_id = "auth_dataset" } ] } If you need to grant users access to multiple views, granting access to each authorized view can be both time consuming and harder to maintain. 
Instead of creating multiple authorized views, you can use an authorized dataset to automatically authorize any views created in the authorized dataset. Consume a data product For most analytics use cases, consumption patterns are determined by the application that the data is being used in. The main use of a centrally provided consumption environment is for data exploration before the data is used within the consuming application. As discussed in Discover and consume products in a data mesh, SQL is the most commonly used method for querying data products. For this reason, the data platform should provide data consumers with a SQL application for exploration of the data. Depending on the analytics use case, you may be able to use Terraform to deploy the consumption environment for data consumers. For example, data science is a common use case for data consumers. You can use Terraform to deploy Vertex AI user-managed notebooks to be used as a data science development environment. From the data science notebooks, data consumers can use their credentials to sign in to the data mesh to explore data to which they have access and develop ML models based on this data. To learn how to use Terraform to deploy and help to secure a notebook environment on Google Cloud, see Build and deploy generative AI and machine learning models in an enterprise. Provide common services In addition to self-service IaC components and solutions, the data platform team might also take ownership over building and operating common shared platform services used by multiple data domain teams. Common examples of shared platform services include self-hosted third-party software such as business intelligence visualization tools or a Kafka cluster. In Google Cloud, the data platform team might choose to manage resources such as Dataplex and Cloud Logging sinks on behalf of data domain teams. Managing resources for the data domain teams lets the data platform team facilitate centralized policy management and auditing across the organization. The following sections show how to use Dataplex for central management and governance within a data mesh on Google Cloud, and the implementation of data observability features in a data mesh. Dataplex for data governance Dataplex provides a data management platform that helps you to build independent data domains within a data mesh that spans the organization. Dataplex lets you maintain central controls for governing and monitoring the data across domains. With Dataplex an organization can logically organize their data (supported data sources) and related artifacts such as code, notebooks, and logs, into a Dataplex lake that represents a data domain. In the following diagram, a sales domain uses Dataplex to organize its assets, including data quality metrics and logs, into Dataplex zones. As shown in the preceding diagram, Dataplex can be used to manage domain data across the following assets: Dataplex allows data domain teams to consistently manage their data assets in a logical group called a Dataplex lake. The data domain team can organize their Dataplex assets within the same Dataplex lake without physically moving data or storing it into a single storage system. Dataplex assets can refer to Cloud Storage buckets and BigQuery datasets stored in multiple Google Cloud projects other than the Google Cloud project containing the Dataplex lake. Dataplex assets can be structured or unstructured, or be stored in an analytical data lake or data warehouse. 
In the diagram, there are data lakes for the sales domain, supply chain domain, and product domain. Dataplex zones enable the data domain team to further organize data assets into smaller subgroups within the same Dataplex lake and add structures that capture key aspects of the subgroup. For example, Dataplex zones can be used to group associated data assets in a data product. Grouping data assets into a single Dataplex zone allows data domain teams to manage access policies and data governance policies consistently across the zone as a single data product. In the diagram, there are data zones for offline sales, online sales, supply chain warehouses, and products. Dataplex lakes and zones enable an organization to unify distributed data and organize it based on the business context. This arrangement forms the foundation for activities such as managing metadata, setting up governance policies, and monitoring data quality. Such activities allow the organization to manage its distributed data at scale, such as in a data mesh. Data observability Each data domain should implement its own monitoring and alerting mechanisms, ideally using a standardized approach. Each domain can apply the monitoring practices described in Concepts in service monitoring, making the necessary adjustments to the data domains. Observability is a large topic, and is outside of the scope of this document. This section only addresses patterns which are useful in data mesh implementations. For products with multiple data consumers, providing timely information to each consumer about the status of the product can become an operational burden. Basic solutions, such as manually managed email distributions, are typically prone to error. They can be helpful for notifying consumers of planned outages, upcoming product launches, and deprecations, but they don't provide real-time operational awareness. Central services can play an important role in monitoring the health and quality of the products in the data mesh. Although not a prerequisite for a successful implementation of the data mesh, implementing observability features can improve satisfaction of the data producers and consumers, and reduce overall operational and support costs. The following diagram shows an architecture of data mesh observability based on Cloud Monitoring. The following sections describe the components shown in the diagram, which are as follows: Uptime checks to show the overall state of data products. Custom metrics to give helpful indicators about data products. Operational support by the central data platform team to alert data consumers of changes in the data products that they use. Product scorecards and dashboards to show how data products are performing. Uptime checks Data products can create simple custom applications that implement uptime checks. These checks can serve as high-level indicators of the overall state of the product. For example, if the data product team discovers a sudden drop in data quality of its product, the team can mark that product unhealthy. Uptime checks that are close to real time are especially important to data consumers who have derived products that rely on the constant availability of the data in the upstream data product. Data producers should build their uptime checks to include checking their upstream dependencies, thus providing an accurate picture of the health of their product to their data consumers. Data consumers can include product uptime checks into their processing. 
For example, a composer job that generates a report based on the data provided by a data product can, as the first step, validate whether the product is in the "running" state. We recommend that your uptime check application returns a structured payload in the message body of its HTTP response. This structured payload should indicate whether there's a problem, the root cause of the problem in human readable form, and if possible, the estimated time to restore the service. This structured payload can also provide more fine-grained information about the state of the product. For example, it can contain the health information for each of the views in the authorized dataset exposed as a product. Custom metrics Data products can have various custom metrics to measure their usefulness. Data producer teams can publish these custom metrics to their designated domain-specific Google Cloud projects. To create a unified monitoring experience across all data products, a central data mesh monitoring project can be given access to those domain-specific projects. Each type of data product consumption interface has different metrics to measure its usefulness. Metrics can also be specific to the business domain. For example, the metrics for BigQuery tables exposed through views or through the Storage Read API can be as follows: The number of rows. Data freshness (expressed as the number of seconds before the measurement time). The data quality score. The data that's available. This metric can indicate that the data is available for querying. An alternative is to use the uptime checks mentioned earlier in this document. These metrics can be viewed as service level indicators (SLI) for a particular product. For data streams (implemented as Pub/Sub topics), this list can be the standard Pub/Sub metrics, which are available through topics. Operational support by the central data platform team The central data platform team can expose custom dashboards to display different levels of details to the data consumers. A simple status dashboard that lists the products in the data mesh and uptime status for those products can help answer multiple end-user requests. The central team can also serve as a notification distribution hub to notify data consumers about various events in the data products they use. Typically, this hub is made by creating alerting policies. Centralizing this function can reduce the work that must be done by each data producer team. Creating these policies doesn't require knowledge of the data domains and should help avoid bottlenecks in data consumption. An ideal end state for data mesh monitoring is for the data product tag template to expose the SLIs and service-level objectives (SLOs) that the product supports when the product becomes available. The central team can then automatically deploy the corresponding alerting using service monitoring with the Monitoring API. Product scorecards As part of the central governance agreement, the four functions in a data mesh can define the criteria to create scorecards for data products. These scorecards can become an objective measurement of data product performance. Many of the variables used to calculate the scorecards are the percentage of time that data products are meeting their SLO. Useful criteria can be the percentage of uptime, average data quality scores, and percentage of products with data freshness that does not fall below a threshold. 
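As a hedged sketch of the uptime checks and alerting policies discussed in the preceding sections, the following Terraform example defines an uptime check against a data product's health endpoint and an alerting policy in the central monitoring project that fires when the check fails. The endpoint, projects, and notification channel are placeholders, and the google provider is assumed to be configured.

# Uptime check against a hypothetical health endpoint exposed by the product team.
resource "google_monitoring_uptime_check_config" "data_product_health" {
  project      = "central-mesh-monitoring" # placeholder
  display_name = "sales-orders-data-product-health"
  timeout      = "10s"
  period       = "60s"

  http_check {
    path    = "/healthz" # hypothetical endpoint
    port    = 443
    use_ssl = true
  }

  monitored_resource {
    type = "uptime_url"
    labels = {
      project_id = "central-mesh-monitoring"  # placeholder
      host       = "sales-orders.example.com" # placeholder
    }
  }
}

# Notify subscribed consumers when the uptime check stops passing.
resource "google_monitoring_alert_policy" "data_product_down" {
  project      = "central-mesh-monitoring" # placeholder
  display_name = "sales-orders data product unavailable"
  combiner     = "OR"

  conditions {
    display_name = "Uptime check failing"
    condition_threshold {
      filter          = "metric.type=\"monitoring.googleapis.com/uptime_check/check_passed\" AND resource.type=\"uptime_url\""
      comparison      = "COMPARISON_LT"
      threshold_value = 1
      duration        = "300s"
      aggregations {
        alignment_period   = "300s"
        per_series_aligner = "ALIGN_FRACTION_TRUE"
      }
    }
  }

  notification_channels = ["projects/central-mesh-monitoring/notificationChannels/1234567890"] # placeholder
}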
To calculate these metrics automatically using Monitoring Query Language (MQL), the custom metrics and the results of the uptime checks from the central monitoring project should be sufficient. What's next Learn more about BigQuery. Read about Dataplex. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Design_an_optimal_storage_strategy_for_your_cloud_workload.txt b/Design_an_optimal_storage_strategy_for_your_cloud_workload.txt new file mode 100644 index 0000000000000000000000000000000000000000..a04399c967a81ae5a374c27cfc1a12728af77530 --- /dev/null +++ b/Design_an_optimal_storage_strategy_for_your_cloud_workload.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/storage-advisor +Date Scraped: 2025-02-23T11:56:58.847Z + +Content: +Home Docs Cloud Architecture Center Send feedback Design an optimal storage strategy for your cloud workload Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-22 UTC This guide helps you assess the storage requirements of your cloud workload, understand the available storage options in Google Cloud, and design a storage strategy that provides optimal business value. For a visual summary of the main design recommendations, see the decision tree diagram. For information about selecting storage services for AI and ML workloads, see Design storage for AI and ML workloads in Google Cloud. Overview of the design process As a cloud architect, when you plan storage for a cloud workload, you need to first consider the functional characteristics of the workload, security constraints, resilience requirements, performance expectations, and cost goals. Next, you need to review the available storage services and features in Google Cloud. Then, based on your requirements and the available options, you select the storage services and features that you need. The following diagram shows this three-phase design process: Note: The design process can be iterative. When reviewing or selecting storage options, you might discover features that could improve your application's behavior and decide to adjust your requirements to take advantage of those features. Define your requirements Use the questionnaires in this section to define the key storage requirements of the workload that you want to deploy in Google Cloud. Guidelines for defining storage requirements When answering the questionnaires, consider the following guidelines: Define requirements granularly For example, if your application needs Network File System (NFS)-based file storage, identify the required NFS version. Consider future requirements For example, your current deployment might serve users in countries within Asia, but you might plan to expand the business to other continents. In this case, consider any storage-related regulatory requirements of the new business territories. Consider cloud-specific opportunities and requirements Take advantage of cloud-specific opportunities. For example, to optimize the storage cost for data stored in Cloud Storage, you can control the storage duration by using data retention policies and lifecycle configurations. Consider cloud-specific requirements. For example, the on-premises data might exist in a single data center, and you might need to replicate the migrated data across two Google Cloud locations for redundancy. 
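As an example of the cloud-specific options mentioned above, the following Terraform sketch creates a dual-region Cloud Storage bucket with a retention policy and lifecycle rules that move aging objects to a colder storage class and later delete them. It assumes the google provider is configured; the bucket name and values are placeholders, not recommendations.

resource "google_storage_bucket" "workload_data" {
  project  = "example-project-id"        # placeholder
  name     = "example-workload-data-001" # placeholder, must be globally unique
  location = "NAM4"                      # dual-region (Iowa and South Carolina)

  storage_class               = "STANDARD"
  uniform_bucket_level_access = true

  # Objects cannot be deleted or overwritten for 90 days.
  retention_policy {
    retention_period = 7776000 # seconds
  }

  # Move objects to Nearline after 30 days.
  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
    condition {
      age = 30
    }
  }

  # Delete objects after a year.
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age = 365
    }
  }
}

For the replication requirement in the example above, the dual-region location provides geo-redundancy without any additional configuration.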
Questionnaires The questionnaires that follow are not exhaustive checklists for planning. Use them as a starting point to systematically analyze all the storage requirements of the workload that you want to deploy to Google Cloud. Assess your workload's characteristics What kind of data do you need to store? Examples Static website content Backups and archives for disaster recovery Audit logs for compliance Large data objects that users download directly Transactional data Unstructured, and heterogeneous data How much capacity do you need? Consider your current and future requirements. Should capacity scale automatically with usage? What are the access requirements? For example, should the data be accessible from outside Google Cloud? What are the expected read-write patterns? Examples Frequent writes and reads Frequent writes, but occasional reads Occasional writes and reads Occasional writes, but frequent reads Does the workload need file-based access, using NFS for example? Should multiple clients be able to read or write data simultaneously? Identify security constraints What are your data-encryption requirements? For example, do you need to use keys that you control? Are there any data-residency requirements? Define data-resilience requirements Does your workload need low-latency caching or scratch space? Do you need to replicate the data in the cloud for redundancy? Do you need strict read-write consistency for replicated datasets? Set performance expectations What is the required I/O rate? What levels of read and write throughput does your application need? What environments do you need storage for? For a given workload, you might need high-performance storage for the production environment, but could choose a lower-performance option for the non-production environments. Review the storage options Google Cloud offers storage services for all the key storage formats: block, file, and object. Review and evaluate the features, design options, and relative advantages of the services available for each storage format. Overview Block storage The data that you store in block storage is divided into chunks, each stored as a separate block with a unique address. Applications access data by referencing the appropriate block addresses. Block storage is optimized for high-IOPS workloads, such as transaction processing. It's similar to on-premises storage area network (SAN) and directly attached storage (DAS) systems. The block storage options in Google Cloud are a part of the Compute Engine service. Option Overview Persistent Disk Dedicated hard-disk drives (HDD) and solid-state drives (SSD) for enterprise and database applications deployed to Compute Engine VMs and Google Kubernetes Engine (GKE) clusters. Google Cloud Hyperdisk Fast and redundant network storage for Compute Engine VMs, with configurable performance and volumes that can be dynamically resized. Local SSD Ephemeral, locally attached block storage for high-performance applications. File storage Data is organized and represented in a hierarchy of files that are stored in folders, similar to on-premises network-attached storage (NAS). File systems can be mounted on clients using protocols such as NFS and Server Message Block (SMB). Applications access data using the relevant filename and directory path. Google Cloud provides a range of fully managed and third-party solutions for file storage. Solution Overview Filestore File-based storage using NFS file servers for Compute Engine VMs and Google Kubernetes Engine clusters. 
You can choose a service tier (Basic, Zonal, or Regional) that suits your use case. Parallelstore Low-latency parallel file system for AI, high performance computing (HPC), and data-intensive applications. NetApp Volumes File-based storage using NFS or SMB. You can choose a service level (Flex, Standard, Premium, or Extreme) that suits your use case. More options See Summary of file server options. Object storage Data is stored as objects in a flat hierarchy of buckets. Each object is assigned a globally unique ID. Objects can have system-assigned and user-defined metadata, to help you organize and manage the data. Applications access data by referencing the object IDs, using REST APIs or client libraries. Cloud Storage provides low-cost, highly durable, no-limit object storage for diverse data types. The data you store in Cloud Storage can be accessed from anywhere, within and outside Google Cloud. Optional redundancy across regions provides maximum reliability. You can select a storage class that suits your data-retention and access-frequency requirements. Comparative analysis The following table lists the key capabilities of the storage services in Google Cloud. Persistent Disk Hyperdisk Local SSD Filestore Parallelstore NetApp Volumes Cloud Storage Capacity fullscreen 10 GiB to 64 TiB per disk 257 TiB per VM 4 GiB to 64 TiB per disk 512 TiB per VM 10 TiB to 1 PiB per storage pool 375 GiB per disk 12 TiB per VM 1-100 TiB per instance 12-100 TiB 1 TiB to 10 PiB per storage pool 1 GiB to 100 TiB per volume No lower or upper limit Scaling aspect_ratio Scale up Add and remove disks Autoscale Scale up Not scalable Basic: scale up Zonal and Regional: scale up and down Not scalable Scale up and down Scales automatically based on usage Sharing share Supported Supported Not shareable Mountable on multiple Compute Engine VMs, remote clients, and GKE clusters Mountable on multiple Compute Engine VMs and GKE clusters. 
Mountable on multiple Compute Engine VMs and GKE clusters Read/write from anywhere Integrates with Cloud CDN and third-party CDNs Encryption key options enhanced_encryption Google-owned and Google-managed encryption keys Customer-managed Customer-supplied Google-owned and Google-managed encryption keys Customer-managed Customer-supplied Google-owned and Google-managed encryption keys Google-owned and Google-managed encryption keys Customer-managed (Zonal and Regional tiers) Google-owned and Google-managed encryption keys Google-owned and Google-managed encryption keys Customer-managed Google-owned and Google-managed encryption keys Customer-managed Customer-supplied Persistence save Lifetime of the disk Lifetime of the disk Ephemeral (data is lost when the VM is stopped or deleted) Lifetime of the Filestore instance Ephemeral (data is lost when the instance is deleted) Lifetime of the volume Lifetime of the bucket Availability content_copy Zonal and regional replication Snapshots (manual or scheduled) Disk cloning Zonal Disk cloning Zonal Regional or zonal based on tier Snapshots for Zonal and Regional tiers Backups Zonal Regional (Flex) or zonal (all levels) Backups Snapshots Cross-region replication Data redundant across zones Options for redundancy across regions Performance speed Linear scaling with disk size and CPU count Dynamic scaling persistent storage High-performance scratch storage Basic: consistent performance Zonal and Regional: linear scaling Linear scaling with provisioned capacity Scalable performance Expectations depend on the service level Autoscaling read-write rates, and dynamic load redistribution Management construction Manually format and mount Manually format and mount Manually format, stripe, and mount Fully managed Fully managed Fully managed Fully managed Note: To compare the costs of the storage options, use the Google Cloud Pricing Calculator. The following table lists the workload types that each Google Cloud storage option is appropriate for: Storage option Workload types Persistent Disk IOPS-intensive or latency-sensitive applications Databases Shared read-only storage Rapid, durable VM backups Hyperdisk Performance-intensive workloads Scale-out analytics Local SSD Flash-optimized databases Hot-caching for analytics Scratch disk Filestore Lift-and-shift on-premises file systems Shared configuration files Common tooling and utilities Centralized logs Parallelstore AI and ML workloads HPC NetApp Volumes Lift-and-shift on-premises file systems Shared configuration files Common tooling and utilities Centralized logs Windows workloads Cloud Storage Streaming videos Media asset libraries High-throughput data lakes Backups and archives Long-tail content Choose a storage option There are two parts to selecting a storage option: Deciding which storage services you need. Choosing the required features and design options in a given service. 
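For example, if you decide that you need Cloud Storage, the second part of the selection might translate into design options such as the bucket location, the default storage class, and the access control model. The following minimal sketch uses the google-cloud-storage Python client library; the bucket name, location, and storage class are hypothetical examples.

from google.cloud import storage

client = storage.Client()

# Hypothetical example: a bucket for infrequently accessed backups.
bucket = storage.Bucket(client, name="example-backup-bucket")
bucket.storage_class = "NEARLINE"            # design option: default storage class
client.create_bucket(bucket, location="EU")  # design option: location

# Design option: uniform (bucket-level) access control instead of per-object ACLs.
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.patch()

You make analogous choices for the other storage services, as the following examples show.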
Examples of service-specific features and design options Persistent Disk Deployment region and zone Regional replication Disk type, size, and IOPS (for Extreme Persistent Disk) Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied Snapshot schedule Hyperdisk Deployment zone Disk type, size, throughput (for Hyperdisk Throughput) and IOPS (for Hyperdisk Extreme) Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied Snapshot schedule Filestore Deployment region and zone Instance tier Capacity IP range: auto-allocated or custom Access control NetApp Volumes Deployment region Service level for the storage pool Pool and volume capacity Volume protocol Volume export rules Cloud Storage Location: multi-region, dual-region, single region Storage class: Standard, Nearline, Coldline, Archive Access control: uniform or fine-grained Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied Retention policy Storage recommendations Use the following recommendations as a starting point to choose the storage services and features that meet your requirements. For guidance that's specific to AI and ML workloads, see Design storage for AI and ML workloads in Google Cloud. General storage recommendations are also presented as a decision tree later in this document. Note: The recommendations in this document are based on key differentiators of each service, such as the unlimited scale of Cloud Storage. But as shown in the Comparative analysis , all the storage services in Google Cloud offer comparable enterprise-grade and cost-effective performance, reliability, flexibility, and security. When choosing the storage services that you need, consider your specific requirements, cost goals, and the detailed features of each service. For applications that need a parallel file system, use Parallelstore. For applications that need file-based access, choose a suitable file storage service based on your requirements for access protocol, availability, and performance. Access protocol Recommendation NFS If you need regional availability and high performance that scales with capacity, use Filestore Regional. If zonal availability is sufficient, but you need high performance that scales with capacity, use Filestore Zonal or NetApp Volumes Premium or Extreme. Otherwise, use either Filestore Basic or NetApp Volumes. For information about the differences between the Filestore service tiers, see Service tiers. SMB Use NetApp Volumes. For workloads that need primary storage with high performance, use local SSD, Persistent Disk, or Hyperdisk depending on your requirements. Requirement Recommendation Fast scratch disk or cache Use local SSD disks (ephemeral). Sequential IOPS Use Persistent Disks with the pd-standard disk type. IOPS-intensive workload Use Persistent Disks with the pd-extreme or pd-ssd disk type. Balance between performance and cost Use Persistent Disks with the pd-balanced disk type. Scale performance and capacity dynamically Use Hyperdisk. Choose a suitable Hyperdisk type: Hyperdisk Balanced is recommended for general-purpose workloads and highly available applications that need shared write access. Hyperdisk Extreme is recommended for workloads that need high I/O, such as high-performance databases. Hyperdisk Throughput is recommended for scale-out analytics, data drives for cost-sensitive apps, and for cold storage. 
Hyperdisk ML is recommended for ML workloads that need high throughput to multiple VMs in read-only mode. For more information, see About Google Cloud Hyperdisk. Depending on your redundancy requirements, choose between zonal and regional disks. Requirement Recommendation Redundancy within a single zone in a region Use zonal Persistent Disks or Hyperdisks. Redundancy across multiple zones within a region Use regional Persistent Disks. For a detailed comparative analysis, see Persistent Disk options. For unlimited-scale and globally available storage, use Cloud Storage. Depending on the data-access frequency and the storage duration, choose a suitable Cloud Storage class. Requirement Recommendation> Access frequency varies, or the data-retention period is unknown or not predictable. Use the Autoclass feature to automatically transition objects in a bucket to appropriate storage classes based on each object's access pattern. Storage for data that's accessed frequently, including for high-throughput analytics, data lakes, websites, streaming videos, and mobile apps. Use the Standard storage class. To cache frequently accessed data and serve it from locations that are close to the clients, use Cloud CDN. Low-cost storage for infrequently accessed data that can be stored for at least 30 days (for example, backups and long-tail multimedia content). Use the Nearline storage class. Low-cost storage for infrequently accessed data that can be stored for at least 90 days (for example, disaster recovery). Use the Coldline storage class. Lowest-cost storage for infrequently accessed data that can be stored for at least 365 days, including regulatory archives. Use the Archive storage class. For a detailed comparative analysis, see Cloud Storage classes. Data transfer options After you choose appropriate Google Cloud storage services, to deploy and run workloads, you need to transfer your data to Google Cloud. The data that you need to transfer might exist on-premises or on other cloud platforms. You can use the following methods to transfer data to Google Cloud: Transfer data online by using Storage Transfer Service: Automate the transfer of large amounts of data between object and file storage systems, including Cloud Storage, Amazon S3, Azure storage services, and on-premises data sources. Transfer data offline by using Transfer Appliance: Transfer and load large amounts of data offline to Google Cloud in situations where network connectivity and bandwidth are unavailable, limited, or expensive. Upload data to Cloud Storage: Upload data online to Cloud Storage buckets by using the Google Cloud console, gcloud CLI, Cloud Storage APIs, or client libraries. When you choose a data transfer method, consider factors like the data size, time constraints, bandwidth availability, cost goals, and security and compliance requirements. For information about planning and implementing data transfers to Google Cloud, see Migrate to Google Cloud: Transfer your large datasets. Storage options decision tree The following decision tree diagram guides you through the Google Cloud storage recommendations discussed earlier. For guidance that's specific to AI and ML workloads, see Design storage for AI and ML workloads in Google Cloud. View a larger image What's next Estimate storage cost using the Google Cloud Pricing Calculator. Learn about the best practices for building a cloud topology that's optimized for security, resilience, cost, and performance. 
Learn when to use parallel file systems like Lustre for HPC workloads. ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Brennan Doyle | Solutions ArchitectDean Hildebrand | Technical Director, Office of the CTOGeoffrey Noer | Group Product ManagerJack Zhou | Technical WriterJason Wu | Director, Product ManagementJeff Allen | Solutions ArchitectSean Derrington | Group Outbound Product Manager, Storage Send feedback \ No newline at end of file diff --git a/Design_considerations.txt b/Design_considerations.txt new file mode 100644 index 0000000000000000000000000000000000000000..2cba82b8ddc6e107952e9b25a87fb357ff020963 --- /dev/null +++ b/Design_considerations.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/design-considerations +Date Scraped: 2025-02-23T11:50:24.383Z + +Content: +Home Docs Cloud Architecture Center Send feedback Design considerations Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC When designing a hybrid and multicloud network, various factors influence your architectural choices. As you analyze your hybrid and multicloud networking design, think about the following design considerations. To build a cohesive architecture, assess these considerations collectively, not in isolation. Hybrid and multicloud connectivity Hybrid and multicloud connectivity refers to the communication connections that link on-premises, Google Cloud, and other cloud environments. Choosing the right connectivity method is essential to the success of hybrid and multicloud architectures, because these connections carry all inter-environment traffic. Any network performance issues, such as bandwidth, latency, packet loss, or jitter, can directly affect the performance of business applications and services. For the connectivity between an on-premises environment and Google Cloud or other clouds, Google Cloud offers multiple connectivity options to select from, including: Internet-based connectivity using public IP addresses: Transfer data between Google Cloud and an on-premises environment or another cloud environment over the internet. This option uses the public external IP addresses of an instance—ideally with application layer encryption in transit. Secure connectivity over APIs with Transport Layer Security (TLS) encryption over the public internet. This option requires the application or target APIs to be publicly reachable from the internet and that the application performs the encryption in transit. Private secure connectivity over the public internet using either Cloud VPN or customer-managed VPN gateways. This option includes using a network virtual appliance (NVA) including software-defined WAN (SD-WAN) solutions from Google Cloud partners. These solutions are available on Google Cloud Marketplace. Private connectivity over a private transport using Cloud Interconnect (Dedicated Interconnect or Partner Interconnect) that offers a more deterministic performance and has an SLA. If encryption in transit is required at the network connectivity layer, you can use HA VPN over Cloud Interconnect or MACsec for Cloud Interconnect. Cross-Cloud Interconnect provides enterprises that use multicloud environments the ability to enable private and secure connectivity across clouds (between Google Cloud and supported cloud service providers in certain locations). 
This option has line-rate performance with high availability options of 99.9% and 99.99%, which ultimately helps to lower total cost of ownership (TCO) without the complexity and cost of managing infrastructure. Also, if encryption in transit is required at the network connectivity layer for additional security, Cross-Cloud Interconnect supports MACsec for Cloud Interconnect encryption. Consider using Network Connectivity Center when it fits your cloud solution architecture use case. Network Connectivity Center is an orchestration framework that provides network connectivity among spoke resources, like virtual private clouds (VPCs), router appliances, or hybrid connections that are connected to a central management resource called a hub. A Network Connectivity Center hub supports either VPC spokes or hybrid spokes. For more information, see Route exchange with VPC connectivity. Also, to facilitate route exchange with the Cloud Router instance, Network Connectivity Center enables the integration of third-party network virtual appliances. That integration includes third-party SD-WAN routers that are supported by Google Cloud Network Connectivity Center partners. With the variety of hybrid and multicloud connectivity options available, selecting the most suitable one requires a thorough evaluation of your business and technical requirements. These requirements include the following factors: Network performance Security Cost Reliability and SLA Scalability For more information on selecting a connectivity option to Google Cloud, see Choosing a Network Connectivity product. For guidance on selecting a network connectivity option that meets the needs of your multicloud architecture, see Patterns for connecting other cloud service providers with Google Cloud. Google Cloud projects and VPCs You can use the networking architecture patterns discussed in this guide with either single or multiple projects where supported. A project in Google Cloud contains related services and workloads that have a single administrative domain. Projects form the basis for the following processes: Creating, enabling, and using Google Cloud services Managing services APIs Enabling billing Adding and removing collaborators Managing permissions A project can contain one or more VPC networks. Your organization, or the structure of the applications you use in a project, should determine whether to use a single project or multiple projects. Your organization, or the structure of the applications, should also determine how to use VPCs. For more information, see Decide a resource hierarchy for your Google Cloud landing zone. The following factors can influence whether you decide to use a single VPC, multiple VPCs, or a shared VPC with one or multiple projects: Organizational resource hierarchies. Network traffic, communication, and administrative domain requirements between workloads. Security requirements. Security requirements can require Layer 7 firewall inspection by third-party NVAs located in the path between certain networks or applications. Resource management. Enterprises that use an administrative model where the network operation team manages networking resources, can require workload separation at the team level. VPC use decisions. Using shared VPCs across multiple Google Cloud projects avoids the need to maintain many individual VPCs per workload or per team. 
Using shared VPCs enables centralized management for host VPC networking, including the following technical factors: Peering configuration Subnet configuration Cloud Firewall configuration Permission configuration Sometimes, you might need to use more than one VPC (or shared VPCs) to meet scale requirements without exceeding the resource limits of a single VPC. For more information, see Deciding whether to create multiple VPC networks. DNS resolution In a hybrid and multicloud architecture, it's essential that the Domain Name System (DNS) is extended and integrated between the environments where communication is permitted. This integration helps to provide seamless communication between various services and applications. It also maintains private DNS resolution between these environments. In a hybrid and multicloud architecture with Google Cloud, you can use DNS peering and DNS forwarding capabilities to enable DNS integration between environments. These DNS capabilities cover the different use cases that align with different networking communication models. Technically, you can use DNS forwarding zones to query on-premises DNS servers and inbound DNS server policies to allow queries from on-premises environments. You can also use DNS peering to forward DNS requests within Google Cloud environments. For more information, see Best practices for Cloud DNS and reference architectures for hybrid DNS with Google Cloud. To learn about redundancy mechanisms for maintaining Cloud DNS availability in a hybrid setup, see It's not DNS: Ensuring high availability in a hybrid cloud environment. Also watch this demonstration of how to design and set up a multicloud private DNS between AWS and Google Cloud. Cloud network security Cloud network security is a foundational layer of cloud security. To help manage the risks of the dissolving network perimeter, it enables enterprises to embed security monitoring, threat prevention, and network security controls. A standard on-premises approach to network security is primarily based on a distinct perimeter between the internet edge and the internal network of an organization. It uses various multi-layered preventive security systems in the network path, such as physical firewalls, routers, and intrusion detection systems. With cloud-based computing, this approach is still applicable in certain use cases. But by itself, this approach isn't enough to handle the scale and the distributed, dynamic nature of cloud workloads, such as autoscaling and containerized workloads. The cloud network security approach helps you minimize risk, meet compliance requirements, and ensure safe and efficient operations through several cloud-first capabilities. For more information, see Cloud network security benefits. To secure your network, also review Cloud network security challenges and the general Cloud network security best practices. Adopting a hybrid cloud architecture calls for a security strategy that goes beyond replicating the on-premises approach. Replicating that approach can limit design flexibility. It can also potentially expose the cloud environment to security threats. Instead, you should first identify the available cloud-first network security capabilities that meet the security requirements of your company. You might also need to combine these capabilities with third-party security solutions from Google Cloud technology partners, like network virtual appliances.
To design a consistent architecture across environments in a multicloud architecture, it's important to identify the different services and capabilities offered by each cloud provider. We recommend, in all cases, that you use a unified security posture that has visibility across all environments. To protect your hybrid cloud architecture environments, you should also consider using defense-in-depth principles. Finally, design your cloud solution with network security in mind from the start. Incorporate all required capabilities as part of your initial design. This initial work will help you avoid the need to make major changes to the design to integrate security capabilities later in your design process. However, cloud security isn't limited to networking security. It must be applied throughout the entire application development lifecycle across the entire application stack, from development to production and operation. Ideally, you should use multiple layers of protection (the defense-in-depth approach) and security visibility tools. For more information on how to architect and operate secure services on Google Cloud, see the Security, privacy, and compliance pillar of the Google Cloud Architecture Framework. To protect your valuable data and infrastructure from a wide range of threats, adopt a comprehensive approach to cloud security. To stay ahead of existing threats, continuously assess and refine your security strategy. Previous arrow_back Overview Next Architecture patterns arrow_forward Send feedback \ No newline at end of file diff --git a/Design_for_graceful_degradation.txt b/Design_for_graceful_degradation.txt new file mode 100644 index 0000000000000000000000000000000000000000..7f0d13c0b4b67784dc7326e94962ca235b990f23 --- /dev/null +++ b/Design_for_graceful_degradation.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/graceful-degradation +Date Scraped: 2025-02-23T11:43:34.089Z + +Content: +Home Docs Cloud Architecture Center Send feedback Design for graceful degradation Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you to design your Google Cloud workloads to fail gracefully. This principle is relevant to the response focus area of reliability. Principle overview Graceful degradation is a design approach where a system that experiences a high load continues to function, possibly with reduced performance or accuracy. Graceful degradation ensures continued availability of the system and prevents complete failure, even if the system's work isn't optimal. When the load returns to a manageable level, the system resumes full functionality. For example, during periods of high load, Google Search prioritizes results from higher-ranked web pages, potentially sacrificing some accuracy. When the load decreases, Google Search recomputes the search results. Recommendations To design your systems for graceful degradation, consider the recommendations in the following subsections. Implement throttling Ensure that your replicas can independently handle overloads and can throttle incoming requests during high-traffic scenarios. This approach helps you to prevent cascading failures that are caused by shifts in excess traffic between zones. Use tools like Apigee to control the rate of API requests during high-traffic times. 
You can configure policy rules to reflect how you want to scale back requests. Drop excess requests early Configure your systems to drop excess requests at the frontend layer to protect backend components. Dropping some requests prevents global failures and enables the system to recover more gracefully.With this approach, some users might experience errors. However, you can minimize the impact of outages, in contrast to an approach like circuit-breaking, where all traffic is dropped during an overload. Handle partial errors and retries Build your applications to handle partial errors and retries seamlessly. This design helps to ensure that as much traffic as possible is served during high-load scenarios. Test overload scenarios To validate that the throttle and request-drop mechanisms work effectively, regularly simulate overload conditions in your system. Testing helps ensure that your system is prepared for real-world traffic surges. Monitor traffic spikes Use analytics and monitoring tools to predict and respond to traffic surges before they escalate into overloads. Early detection and response can help maintain service availability during high-demand periods. Previous arrow_back Detect potential failures by using observability Next Perform testing for recovery from failures arrow_forward Send feedback \ No newline at end of file diff --git a/Design_reliable_infrastructure.txt b/Design_reliable_infrastructure.txt new file mode 100644 index 0000000000000000000000000000000000000000..7e0ebaf5dda4d6a5827c9a7fba03a902a39226aa --- /dev/null +++ b/Design_reliable_infrastructure.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/infra-reliability-guide/design +Date Scraped: 2025-02-23T11:54:12.291Z + +Content: +Home Docs Cloud Architecture Center Send feedback Design reliable infrastructure for your workloads in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC As described in Platform availability, Google Cloud infrastructure is designed to support a target availability of 99.9% for a workload that's deployed in a single zone. The target availability is 99.99% for a multi-zone deployment1 and 99.999% for a multi-region deployment. This part of the Google Cloud infrastructure reliability guide provides deployment guidance, example architectures, and design techniques that can help to protect your workloads against failures at the resource, zone, and region level. Avoid single points of failure Applications are typically composed of multiple interdependent components, each designed to perform a specific function. These components are typically grouped into tiers based on the function that they perform and their relationship with the other components. For example, a content-serving application might have three tiers: a web tier containing a load balancer and web servers; an app tier with a cluster of application servers; and a data tier for persistence. If any component of this application stack depends on a single infrastructure resource, a failure of that resource can affect the availability of the entire stack. For example, if the app tier runs on a single VM, and if the VM crashes, then the entire stack is effectively unavailable. Such a component is a single point of failure (SPOF). An application stack might have more than one SPOF. 
Consider the multi-tier application stack that's shown in the following diagram: As shown in the preceding diagram, this example architecture contains a single load balancer, two web servers, a single app server, and a single database. The load balancer, app server, and database in this example are SPOFs. A failure of any of these components can cause user requests to the application to fail. To remove the SPOFs in your application stack, distribute resources across locations and deploy redundant resources. Note: You might accept the Google Cloud global load balancer as a SPOF considering the stringent controls in Google Cloud and the defense-in-depth measures that Google implements to prevent global outages. To help reduce the risk of failure, manage changes to the load balancer carefully. Also, avoid or minimize dependencies on non-data plane actions, such as creating a new load balancer. Distribute resources and create redundancy Depending on the reliability requirements of your application, you can choose from the following deployment architectures: Architecture Workload recommendation Multi-region Workloads that are business-critical and where high availability is essential, such as retail and social media applications. Multi-zone Workloads that need resilience against zone outages but can tolerate some downtime caused by region outages. Single-zone Workloads that can tolerate downtime or can be deployed at another location when necessary with minimal effort. Cost, latency, and operational considerations When you design a distributed architecture with redundant resources, besides the availability requirements of the application, you must also consider the effects on operational complexity, latency, and cost. In a distributed architecture, you provision and manage a higher number of resources. The volume of cross-location network traffic is higher. You also store and replicate more data. As a result, the cost of your cloud resources in a distributed architecture is higher, and operating such deployments involves more complexity. For business-critical applications, the availability advantage of a distributed architecture might outweigh the increased cost and operational complexity. For applications that aren't business-critical, the high availability that a distributed architecture provides might not be essential. Certain applications have other requirements that are more important than availability. For example, batch computing applications require low-latency and high-bandwidth network connections between the VMs. A single-zone architecture might be well suited for such applications, and it can also help you reduce data transfer costs. Deployment architectures This section presents the following architectural options to build infrastructure for your workloads in Google Cloud: Single-zone deployment Multi-zone deployment Multi-region deployment with regional load balancing Multi-region deployment with global load balancing Single-zone deployment The following diagram shows a single-zone application architecture with redundancy in every tier, to achieve higher availability of the functions performed by each component: As shown in the preceding diagram, this example architecture includes the following components: A regional external HTTP/S load balancer to receive and respond to user requests. A zonal managed instance group (MIG) as the backend for the HTTP/S load balancer. The MIG has two Compute Engine VMs. Each VM hosts an instance of a web server. 
An internal load balancer to handle communication between the web server and the app server instances. A second zonal MIG as the backend for the internal load balancer. This MIG contains two Compute Engine VMs. Each VM hosts an instance of an application server. A Cloud SQL database instance (Enterprise edition) that the application writes data to and reads from. The database is replicated manually to a second Cloud SQL database instance in the same zone. Aggregate availability: Single-zone deployment The following table shows the availability of each tier in the preceding single-zone architecture diagram: Resource SLA External load balancer 99.99% Web tier: Compute Engine VMs in a single zone 99.9% Internal load balancer 99.99% Application tier: Compute Engine VMs in a single zone 99.9% Cloud SQL instance (Enterprise edition) 99.95% You can expect the Google Cloud infrastructure resources that are listed in the preceding table to provide the following aggregate availability and estimated maximum monthly downtime: Aggregate availability: 0.9999 x 0.999 x 0.9999 x 0.999 x 0.9995 = 99.73% Estimated maximum monthly downtime: Approximately 1 hour and 57 minutes This calculation considers only the infrastructure resources that are shown in the preceding architecture diagram. To assess the availability of an application in Google Cloud, you must also consider other factors, like the following: The internal design of the application The DevOps processes and tools used to build, deploy, and maintain the application, its dependencies, and the Google Cloud infrastructure For more information, see Factors that affect application reliability. Effects of outages, and guidance for recovery In a single-zone deployment architecture, if any component fails, the application can process requests if each tier contains at least one functioning component with adequate capacity. For example, if a web server instance fails, the load balancer forwards user requests to the other web server instances. If a VM that hosts a web server or app server instance crashes, the MIG ensures that a new VM is created automatically. If the database crashes, you must manually activate the second database and update the app server instances to connect to the database. A zone outage or region outage affects the Compute Engine VMs and the Cloud SQL database instances in a single-zone deployment. A zone outage doesn't affect the load balancer in this architecture because it is a regional resource. However, the load balancer can't distribute traffic, because there are no available backends. If a zone outage occurs, you must wait for Google to resolve the outage, and then verify that the application works as expected. The next section describes an architectural approach that you can use to distribute resources across multiple zones, which helps to improve the resilience of the application to zone outages. Multi-zone deployment In a single-zone deployment, if a zone outage occurs, the application might not be able to serve requests until the issue is resolved. To help to improve the resilience of your application against zone outages, you can provision multiple instances of zonal resources (such as Compute Engine VMs) across two or more zones. For services that support region-scoped resources (such as Cloud Storage buckets), you can deploy regional resources. 
The following diagram shows a highly available cross-zone architecture, with the components in each tier of the application stack distributed across two zones: As shown in the preceding diagram, this example architecture includes the following components: A regional external HTTP/S load balancer receives and responds to user requests. A regional MIG is the backend for the HTTP/S load balancer. The MIG contains two Compute Engine VMs in different zones. Each VM hosts an instance of a web server. An internal load balancer handles communication between the web server and the app server instances. A second regional MIG is the backend for the TCP load balancer. This MIG has two Compute Engine VMs in different zones. Each VM hosts an instance of an app server. A Cloud SQL instance (Enterprise edition) that's configured for HA is the database for the application. The primary database instance is replicated synchronously to a standby database instance. Aggregate availability: Multi-zone deployment The following table shows the availability of each tier in the preceding dual-zone architecture diagram: Resource SLA External load balancer 99.99% Web tier: Compute Engine VMs in separate zones 99.99% Internal load balancer 99.99% Application tier: Compute Engine VMs in separate zones 99.99% Cloud SQL instance (Enterprise edition) 99.95% You can expect the Google Cloud infrastructure resources that are listed in the preceding table to provide the following aggregate availability and estimated maximum monthly downtime: Aggregate availability: 0.9999 x 0.9999 x 0.9999 x 0.9999 x 0.9995 = 99.91% Estimated maximum monthly downtime: Approximately 39 minutes This calculation considers only the infrastructure resources that are shown in the preceding architecture diagram. To assess the availability of an application in Google Cloud, you must also consider other factors, like the following: The internal design of the application The DevOps processes and tools used to build, deploy, and maintain the application, its dependencies, and the Google Cloud infrastructure For more information, see Factors that affect application reliability. Effects of outages, and guidance for recovery In a dual-zone deployment, if any component fails, the application can process requests if at least one functioning component with adequate capacity exists in each tier. For example, if a web server instance fails, the load balancer forwards user requests to the web server instance in the other zone. If a VM that hosts a web server or app server instance crashes, the MIG ensures that a new VM is created automatically. If the primary Cloud SQL database crashes, Cloud SQL automatically fails over to the standby database instance. The following diagram shows the same architecture as the previous diagram and the effects of a zone outage on the availability of the application: As shown in the preceding diagram, if an outage occurs at one of the zones, the load balancer in this architecture is not affected, because it is a regional resource. A zone outage might affect individual Compute Engine VMs and one of the Cloud SQL database instances. But the application remains available and responsive, because the VMs are in regional MIGs and the Cloud SQL database is configured for HA. The MIGs ensure that new VMs are created automatically to maintain the configured minimum number of VMs. If the primary Cloud SQL database instance is affected by a zone outage, Cloud SQL fails over automatically to the standby instance in the other zone. 
After Google resolves the outage, you must verify that the application runs as expected in all the zones where it's deployed. For more information about region-specific considerations, see Geography and regions. If both the zones in this architecture have an outage, then the application is unavailable. The load balancer continues to be available unless a region-wide outage occurs. However, the load balancer can't distribute traffic, because there are no available backends. If a multi-zone outage or region outage occurs, you must wait for Google to resolve the outage, and then verify that the application works as expected. The next sections present architectural options to protect your application against multi-zone outages and region outages. Multi-region deployment with regional load balancing In a single-zone or multi-zone deployment, if a region outage occurs, the application can't serve requests until the issue is resolved. To protect your application against region outages, you can distribute the Google Cloud resources across two or more regions. The following diagram shows a highly available cross-region architecture, with the components in each tier of the application stack distributed across multiple regions: As shown in the preceding diagram, this example architecture includes the following components: A public Cloud DNS zone with a routing policy that steers traffic to two Google Cloud regions. A regional external HTTP/S load balancer in each region to receive and respond to user requests. The backend for each regional HTTP/S load balancer is a regional MIG. Each MIG contains two Compute Engine VMs in different zones. Each of these VMs hosts an instance of a web server. An internal load balancer in each region handles communication between the web server instances and the app server instances. A second pair of regional MIGs is the backend for the internal load balancers. Each of these MIGs contains two Compute Engine VMs in different zones. Each VM hosts an instance of an app server. The application writes data to and reads from a multi-region Spanner instance. The multi-region configuration that's used in this architecture (eur5) includes four read-write replicas. The read-write replicas are provisioned equally across two regions and in separate zones. The multi-region Spanner configuration also includes a witness replica in a third region. Aggregate availability: Multi-region deployment with regional load balancing In the multi-region deployment that's shown in the preceding diagram, the load balancers and the VMs are provisioned redundantly in two regions. The DNS zone is a global resource, and the Spanner instance is a multi-region resource. To calculate the aggregate availability of the Google Cloud infrastructure that's shown in this architecture, we must first calculate the aggregate availability of the resources in each region, and then consider the resources that span multiple regions. Use the following process: Calculate the aggregate availability of the infrastructure resources per region; that is, excluding the DNS and database resources: Resource and SLA SLA External load balancer 99.99% Web tier: Compute Engine VMs in separate zones 99.99% Internal load balancer 99.99% Application tier: Compute Engine VMs in separate zones 99.99% Aggregate availability per region: 0.9999 x 0.9999 x 0.9999 x 0.9999 = 99.96% Calculate the aggregate availability of the infrastructure resources considering the dual-region redundancy of the load balancers and the Compute Engine VMs. 
The theoretical availability is 1-(1-0.9996)(1-0.9996) = 99.999984%. However, the actual availability that you can expect is limited to the target availability for multi-region deployments, which is 99.999%. Calculate the aggregate availability of all the infrastructure resources, including the Cloud DNS and Spanner resources: Aggregate availability: 0.99999 x 1 x 0.99999 = 99.998% Estimated maximum monthly downtime: Approximately 52 seconds This calculation considers only the infrastructure resources that are shown in the preceding architecture diagram. To assess the availability of an application in Google Cloud, you must also consider other factors, like the following: The internal design of the application The DevOps processes and tools used to build, deploy, and maintain the application, its dependencies, and the Google Cloud infrastructure For more information, see Factors that affect application reliability. Effects of outages, and guidance for recovery If any component in this multi-region deployment fails but there is at least one functioning component with adequate capacity in each tier, the application continues to work. For example, if a web server instance fails, the regional external HTTP/S load balancer forwards user requests to the other web server instances in the region. Similarly, if one of the app server instances crashes, the internal load balancers send requests to the other app server instances. If any of the VMs crash, the MIGs ensure that new VMs are created automatically to maintain the minimum configured number of VMs. An outage at a single zone doesn't affect the load balancers, because they are regional resources and are resilient to zone outages. A zone outage might affect individual Compute Engine VMs. But the web server and app server instances remain available, because the VMs are part of regional MIGs. The MIGs ensure that new VMs are created automatically to maintain the minimum configured number of VMs. The Spanner instance in this architecture uses a multi-region configuration, which is resilient to zone outages. For information about how multi-region replication works in Spanner, see Regional and multi-region configurations and Demystifying Spanner multi-region configurations. The following diagram shows the same multi-region architecture as the previous diagram and the effects of a single-region outage on the availability of the application: As shown in the preceding diagram, even if an outage occurs at both the zones in any region, the application remains available, because an independent application stack is deployed in each region. The DNS zone steers user requests to the region that's not affected by the outage. The multi-region Spanner instance is resilient to region outages. After Google resolves the outage, you must verify that the application runs as expected in the region that had the outage. If any two of the regions in this architecture have outages, then the application is unavailable. Wait for Google to resolve the outages. Then, verify that the application runs as expected in all the regions where it's deployed. For multi-region deployments, instead of using regional load balancers, you can consider using a global load balancer. The next section presents a multi-region deployment architecture that uses a global load balancer and describes the benefits and risks of that approach. 
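Before moving on to the global load balancing variant, note that you can reproduce the aggregate availability arithmetic used in this guide with a short calculation: multiply the SLAs of the resources that are composed serially within a stack, combine redundant regional stacks as parallel resources, cap the result at the 99.999% multi-region target availability, and convert the result into an estimated maximum monthly downtime. The following minimal sketch applies this approach to the multi-region deployment with regional load balancing; it assumes a 30-day month and uses the SLA values listed in the preceding tables.

def serial(*availabilities):
    """Aggregate availability of resources that must all work (serial composition)."""
    result = 1.0
    for availability in availabilities:
        result *= availability
    return result

def parallel(availability, copies=2, cap=0.99999):
    """Availability of redundant, independent copies, capped at the 99.999%
    target availability for multi-region deployments."""
    return min(1.0 - (1.0 - availability) ** copies, cap)

def monthly_downtime_seconds(availability, days=30):
    return (1.0 - availability) * days * 24 * 60 * 60

# Per-region stack: external load balancer, web tier, internal load balancer, app tier.
per_region = serial(0.9999, 0.9999, 0.9999, 0.9999)  # approximately 0.9996
dual_region = parallel(per_region)                   # capped at 0.99999
total = serial(dual_region, 1.0, 0.99999)            # Cloud DNS, multi-region Spanner

print(f"Aggregate availability: {total:.5%}")        # about 99.998%
print(f"Estimated maximum monthly downtime: {monthly_downtime_seconds(total):.0f} seconds")

The same functions reproduce the single-zone, multi-zone, and global load balancing figures shown elsewhere in this guide when you substitute the corresponding SLA values.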
Multi-region deployment with global load balancing The following diagram shows an alternative multi-region deployment that uses a global load balancer instead of regional load balancers: As shown in the preceding diagram, this architecture uses a global external HTTP/S load balancer (with Cloud CDN enabled) to receive and respond to user requests. Each forwarding rule of the load balancer uses a single external IP address; you don't need to configure a separate DNS record for each region. The backends for the global external HTTP/S load balancer are two regional MIGs. The load balancer routes requests to the region that's closest to the users. All the other components in this architecture are identical to the architecture shown in Multi-region deployment with regional load balancing. Benefits and risks of global load balancing for multi-region deployments To load-balance external traffic to an application that's distributed across multiple regions, you can use either a global load balancer or multiple regional load balancers. The following are the benefits of an architecture that uses a global load balancer: You need to manage only a single load balancer. Global load balancers use a single anycast IP address to provide load balancing across Google Cloud regions. Global load balancers are resilient to region outages, and provide automatic cross-region failover. Global load balancers support the following features, which can help enhance the reliability of your deployments: Edge caching using Cloud CDN Ability to use highly durable Cloud Storage buckets as backends Google Cloud Armor security policies The following are the risks of an architecture that uses a global load balancer: An incorrect configuration change to the global load balancer might make the application unavailable to users. For example, while updating the frontend of the global load balancer, if you accidentally delete a forwarding rule, the load balancer stops receiving user requests. The effect of this risk is lower in the case of a multi-region architecture that uses regional load balancers, because even if the regional load balancer in one of the regions is affected by a configuration error, the load balancers in the other regions continue to work. An infrastructure outage that affects global resources might make the global load balancer unavailable. To mitigate these risks, you must manage changes to the global load balancer carefully, and consider using defense-in-depth fallbacks where possible. For more information, see Recommendations to manage the risk of outages of global resources. Aggregate availability: Multi-region deployment with global load balancing In the multi-region deployment that's shown in the preceding diagram, the VMs and the internal load balancers are distributed redundantly across two regions. The external load balancer is a global resource, and the Spanner instance is a multi-region resource. To calculate the aggregate availability of this deployment, we first calculate the aggregate availability of the resources in each region, and then consider the resources that span multiple regions. 
Calculate the aggregate availability of the infrastructure resources per region, excluding the external load balancer and the database: Resource SLA Web tier: Compute Engine VMs in separate zones 99.99% Internal load balancer 99.99% Application tier: Compute Engine VMs in separate zones 99.99% Aggregate availability per region: 0.9999 x 0.9999 x 0.9999 = 99.97% Calculate the aggregate availability of the infrastructure resources considering the dual-region redundancy of the internal load balancer and the Compute Engine VMs. The theoretical availability is 1-(1-0.9997)(1-0.9997) = 99.999991%. However, the actual availability that you can expect is limited to the target availability for multi-region deployments, which is 99.999%. Calculate the aggregate availability of all the infrastructure resources, including the global load balancer and Spanner resources: Aggregate availability: 0.99999 x 0.9999 x 0.99999 = 99.988% Estimated maximum monthly downtime: Approximately 5 minutes and 11 seconds This calculation considers only the infrastructure resources that are shown in the preceding architecture diagram. To assess the availability of an application in Google Cloud, you must also consider other factors, like the following: The internal design of the application The DevOps processes and tools used to build, deploy, and maintain the application, its dependencies, and the Google Cloud infrastructure For more information, see Factors that affect application reliability. Effects of outages, and guidance for recovery If any component in this architecture fails, the application continues to work if at least one functioning component with adequate capacity exists in each tier. For example, if a web server instance fails, the global external HTTP/S load balancer forwards user requests to the other web server instances. If an app server instance crashes, the internal load balancers send the requests to the other app server instances. If any of the VMs crash, the MIGs ensure that new VMs are created automatically to maintain the minimum configured number of VMs. If an outage occurs at one of the zones in any region, the load balancer is not affected. The global external HTTP/S load balancer is resilient to zone and region outages. The internal load balancers are regional resources; they're resilient to zone outages. A zone outage might affect individual Compute Engine VMs. But the web server and app server instances remain available, because the VMs are part of regional MIGs. The MIGs ensure that new VMs are created automatically to maintain the minimum configured number of VMs. The Spanner instance in this architecture uses a multi-region configuration, which is resilient to zone outages. The following diagram shows the same multi-region architecture as the previous diagram and the effects of a single-region outage on the availability of the application: As shown in the preceding diagram, even if an outage occurs at both the zones in any region, the application remains available, because an independent application stack is deployed in each region. The global external HTTP/S load balancer routes user requests to the application in the region that's not affected by the outage. The multi-region Spanner instance is resilient to region outages. After Google resolves the outage, you must verify that the application runs as expected in the region that had the outage. 
For information about how multi-region replication works in Spanner, see Regional and multi-region configurations and Demystifying Spanner multi-region configurations. If any two of the regions in this architecture have outages, then the application is unavailable. The global external HTTP/S load balancer is available, but it can't distribute traffic because there are no available backends. Wait for Google to resolve the outages. Then, verify that the application runs as expected in all the regions where it's deployed. Multi-region deployments can help ensure high availability for your most critical business applications. To ensure business continuity during failure events, besides deploying the application across multiple regions, you must take certain additional steps. For example, you must perform capacity planning to ensure that either sufficient capacity is reserved in all the regions or the risks associated with emergency autoscaling are acceptable. You must also implement operational practices for DR testing, managing incidents, verifying application status after incidents, and performing retrospectives. For more information about region-specific considerations, see Geography and regions. ↩ Previous arrow_back Assess reliability requirements Next Manage traffic and load arrow_forward Send feedback \ No newline at end of file diff --git a/Design_resilient_single-region_environments_on_Google_Cloud.txt b/Design_resilient_single-region_environments_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2bbaf0fb51f09bb92e6bc3c28319237a6e21088 --- /dev/null +++ b/Design_resilient_single-region_environments_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-across-regions/design-resilient-single-region-environments +Date Scraped: 2025-02-23T11:52:18.272Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate across Google Cloud regions: Design resilient single-region environments on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-08 UTC This document helps you design resilient, single-region environments on Google Cloud. This document is useful if you're planning to migrate a single-region environment or if you're evaluating the opportunity to do so in the future and want to explore what it might look like. This document is part of a series: Get started Architect single-region environments on Google Cloud (this document) Architect your workloads Prepare data and batch workloads for migration across regions This document aims to provide guidance about how to design resilient, single-region environments on Google Cloud, and it focuses on the following architectural components: Network services, such as Cloud Load Balancing. Computing services, such as Compute Engine, Google Kubernetes Engine (GKE), Google Cloud VMware Engine, and Cloud Run. Data storage services, such as Cloud Storage, Filestore, Bigtable, Firestore, Memorystore, and Spanner. Data analytics services, such as BigQuery, Pub/Sub, Dataproc, and Dataflow. Workloads that you deploy in the environment. The guidance in this document assumes that you're designing and implementing single-region environments. If you use a single-region environment now, in the future you can migrate to a multi-region environment. 
If you're considering a future migration and evolution of your zonal and single-region environments to multi-region environments, see Migrate across Google Cloud regions: Get started. Properties of different deployment archetypes Google Cloud products are provided across many regions and zones. When you design your Google Cloud environment, you can choose between the following deployment archetypes, presented in order of increasing reliability and operational overhead: Zonal: You provision Google Cloud resources in a single zone within a region, and you use zonal services where they're available. If zonal services aren't available, you use regional services. Regional: You provision Google Cloud resources in multiple zones within a region, and you use regional services when possible. Multi-regional: You provision Google Cloud resources in multiple zones across different regions. Zonal resources are provisioned in one or more zones in each region. Global: You provision Google Cloud resources in multiple zones across different regions worldwide. Zonal resources are provisioned in one or more zones in each region. The preceding deployment archetypes have different reliability properties, and you can use them to provide the reliability guarantees that your environment needs. For example, a multi-region environment is more likely to survive a regional outage compared to a single-region or zonal environment. For more information about the reliability properties of each deployment archetype, see How to leverage zones and regions to achieve reliability and the Google Cloud infrastructure reliability guide. Designing, implementing, and operating an environment based on these deployment archetypes requires different levels of effort due to the cost and complexity properties of each deployment archetype. For example, a zonal environment might be cheaper and easier to design, implement, and operate compared to a regional or a multi-region environment. The lower effort and cost of the zonal environment is possible because it avoids the additional overhead of coordinating workloads, data, and processes that reside in different zones and regions. The following table summarizes the resource distribution, the reliability properties, and the complexity of each deployment archetype. It also describes the effort that's required to design and implement an environment based on each. Deployment archetype name Resource distribution Helps to resist Design complexity Zonal environment In a single zone Resource failures Requires coordination inside a single zone Single-region environment Across multiple zones, in a single region Resource failures, zonal outages Requires coordination across multiple zones, in a single region Multi-region environment Across multiple zones, across multiple regions Resource failures, zonal outages, regional outages, multi-region outages Requires coordination across multiple zones, across multiple regions Global environment Across multiple zones, across multiple regions globally Resource failures, zonal outages, regional outages, multi-region outages Requires coordination across multiple zones, across multiple regions Note: For more information about region-specific considerations, see Geography and regions. For more information about these and other deployment archetypes, see Google Cloud deployment archetypes. 
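To make the preceding comparison easier to apply when you match failure models to archetypes in the next section, the following minimal sketch captures the table as a lookup. The dictionary contents mirror the table above; the structure and helper function are illustrative and not part of any Google Cloud API.

```python
# A minimal sketch that mirrors the comparison table above; the dictionary
# and the helper function are illustrative, not a Google Cloud API.
ARCHETYPES = {
    # name: (resource distribution, outage types the archetype helps to resist)
    "zonal": ("single zone", {"resource failure"}),
    "single-region": ("multiple zones, single region",
                      {"resource failure", "zonal outage"}),
    "multi-region": ("multiple zones, multiple regions",
                     {"resource failure", "zonal outage", "regional outage",
                      "multi-region outage"}),
    "global": ("multiple zones, multiple regions worldwide",
               {"resource failure", "zonal outage", "regional outage",
                "multi-region outage"}),
}

def least_complex_archetype(required_resistance: set[str]) -> str:
    """Return the first archetype, in order of increasing complexity, that
    resists every outage type in required_resistance."""
    for name, (_, resists) in ARCHETYPES.items():
        if required_resistance <= resists:
            return name
    raise ValueError("No archetype resists all the requested outage types")

print(least_complex_archetype({"zonal outage"}))     # single-region
print(least_complex_archetype({"regional outage"}))  # multi-region
```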
Choose deployment archetypes for your environments To choose the deployment archetype that best fits your needs, do the following: Define the failure models that you want to guard against. Evaluate the deployment archetypes to determine what will best fit your needs. Define failure models To define failure models, consider the following questions: Which components of your environment need failure models? Failure models can apply to anything that you provision or deploy on Google Cloud. A failure model can apply to an individual resource, or you can apply a failure model to all resources in an entire zone or region. We recommend that you apply a failure model to anything that provides you value, such as workloads, data, processes, and any Google Cloud resource. What are your high availability, business continuity, and disaster recovery requirements for these components? Each component of your environment might have its own service level objectives (SLOs) that define the acceptable service levels for that component, and its own disaster recovery requirements. For example, the Compute Engine SLA indicates that if you need to achieve more than 99.5% monthly uptime, you need to provision instances in multiple zones across a single region. For more information, see the Disaster recovery planning guide. How many failure models do you need to define? In a typical environment, not all components have to provide the same reliability guarantees. If you offer guarantees for higher uptime and stronger resilience, you usually have to expend more effort and resources. When you define your failure models, we recommend that you consider an approach where you define multiple failure models for each component, and not just one for all your components. For example, business-critical workloads usually need to offer higher reliability, although it might be acceptable to offer weaker reliability guarantees for other, less critical workloads. How many resources do the failure models need in order to guard against failures? To guard against the failure models that you defined, you expend resources such as the time and cost required for people to design, provision, and configure protection mechanisms and automated processes. We recommend that you assess how many resources you need to guard against each failure model that you define. How will you detect that a failure is happening? Being able to detect that a failure is happening or is about to happen is critical so that you can start mitigation, recovery, and reconciliation processes. For example, you can configure Google Cloud Observability to alert you about degraded performance. How can you test the failure models that you're defining? When you define failure models, we recommend that you think about how to continuously test each model to verify that it effectively guards against the failures that the models are aimed at. For example, you can inject faults in your environments, or adopt chaos engineering to assess the ability of your environments to tolerate failures. How much impact do you expect if a particular failure model occurs? To gain an understanding of the impact that a failure might have on your business, we recommend that, for each failure model, you estimate the consequences of each failure the model is designed against. This understanding is useful in establishing priorities and recovery orders so that you and your processes deal with the most critical components first. 
How long do you expect the failures to last in the failure models that you're defining? The duration of a failure can greatly affect mitigation and recovery plans. Therefore, when you define failure models, we recommend that you account for how much time a failure can last. When you consider how much time that a failure can last, also consider how much time it takes to: identify a failure, reconcile the failure, and to restore the resources that failed. For more considerations about failure models and how to design a reliable disaster recovery plan, see Architecting disaster recovery for cloud infrastructure outages. Evaluate deployment archetypes After you define the failure models that you want to guard against, you evaluate the deployment archetypes to determine what will best fit your needs. When you evaluate the deployment archetypes, consider the following questions: How many deployment archetypes do you need? You don't have to choose just one deployment archetype to fit all your environments. Instead, you can implement a hybrid approach where you pick multiple deployment archetypes according to the reliability guarantees that you need in order to guard against the failure models you defined. For example, if you defined two failure models—one that requires a zonal environment, and one that requires a regional environment—you might want to choose separate deployment archetypes to guard against each failure model. If you choose multiple deployment archetypes, we recommend that you evaluate the potentially increasing complexity of designing, implementing, and operating multiple environments. How many resources do you need to design and implement environments based on the deployment archetypes? Designing and implementing any kind of environment requires resources and effort. We recommend that you assess how many resources you think that you'll need in order to design and implement each environment based on the archetype you choose. When you have a complete understanding of how many resources you need, you can balance the trade-offs between the reliability guarantees that each deployment archetype offers, and the cost and the complexity of designing, implementing, and operating environments based on those archetypes. Do you expect to migrate an environment based on one deployment archetype to an environment based on a different archetype? In the future, you might migrate workloads, data, and processes from one Google Cloud environment to a different Google Cloud environment. For example, you might migrate from a zonal environment to a regional environment. How business-critical are the environments that you're designing and implementing? Business-critical environments likely need more reliability guarantees. For example, you might choose to design and implement a multi-region environment for business-critical workloads, data, and processes, and design a zonal or regional environment for less critical workloads, data, and processes. Do you need the features that are offered by particular deployment archetypes for certain environments? Aside from the reliability guarantees that each deployment archetype offers, the archetypes also offer different scalability, geographical proximity, latency, and data locality guarantees. We recommend that you consider those guarantees when you choose the deployment archetypes for your environments. 
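As a working aid for the questions in the preceding subsections, the following minimal sketch shows one way to record failure models and rank them by expected business impact so that recovery priorities are explicit. The field names and the impact scale are illustrative assumptions, not part of any Google Cloud tooling.

```python
# A minimal sketch for recording failure models; the field names and the
# impact scale (1 = minor, 5 = catastrophic) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FailureModel:
    component: str               # what the model applies to (workload, data, resource)
    failure: str                 # what fails (resource, zone, region)
    slo_availability: float      # acceptable availability for the component
    detection: str               # how you detect the failure (for example, an alert)
    expected_duration_min: int   # how long you expect the failure to last
    impact: int                  # estimated business impact, 1 to 5

failure_models = [
    FailureModel("checkout web tier", "zonal outage", 0.9999,
                 "uptime check alert", 60, 5),
    FailureModel("internal reporting batch", "zonal outage", 0.995,
                 "job failure notification", 240, 2),
]

# Deal with the most critical components first.
for model in sorted(failure_models, key=lambda m: m.impact, reverse=True):
    print(f"{model.component}: guard against {model.failure}, "
          f"target {model.slo_availability:.2%}, impact {model.impact}")
```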
Along with the technical aspects of the failure modes that you defined by following the preceding guidance, we recommend that you consider any non-functional requirements such as regulatory, locality, and sovereignty requirements. Those requirements can restrict the options that are available to you. For example, if you need to meet regulatory requirements that mandate the usage of a specific region, then you have to design and implement either a single-region environment, or a zonal environment in that region. Choose a Google Cloud region for your environment When you start designing your single-region environments, you have to determine the region that best fits the requirements of each environment. The following sections describe these two categories of selection criteria: Functional criteria. These criteria are about which Google Cloud products a particular region offers, and whether a particular region meets your latency and geographical proximity to users and other environments outside Google Cloud. For example, if your workloads and data have latency requirements for your users or other environments outside Google Cloud, you might need to choose the region that's closest to your users or other environments to minimize that latency. Non-functional criteria. These criteria are about the product prices that are associated with specific regions, carbon footprint requirements, and mandatory requirements and regulations that are in place for your business. For example, highly regulated markets such as banking and public sector have very stringent and specific requirements about data and workload locality, and how they share the cloud provider infrastructure with other customers. If you choose a particular Google Cloud region now, in the future you can migrate to different regions or to a multi-region environment. If you're considering a future migration to other regions, see Migrate across Google Cloud regions: Get started. Evaluate functional criteria To evaluate functional criteria, consider the following questions: What are your geographical proximity requirements? When you choose a Google Cloud region, you might need to place your workloads, data, and processes near your users or your environments outside Google Cloud, such as your on-premises environments. For example, if you're targeting a user base that's concentrated in a particular geographic area, we recommend that you choose a Google Cloud region that's closest to that geographic area. Choosing a Google Cloud region that best fits your geographical proximity requirements lets your environments guarantee lower latency and lower reaction times to requests from your users and from your environments outside Google Cloud. Tools like the Google Cloud latency dashboard, and unofficial tools such as GCPing and the Google Cloud Region Picker can give you a high-level idea of the latency characteristics of Google Cloud regions. However, we recommend that you perform a comprehensive assessment to evaluate if the latency properties fit your requirements, workloads, data, and processes. Which of the regions that you want to use offer the products that you need? We recommend that you assess the products that are available in each Google Cloud region, and which regions provide the services that you need to design and implement your environments. For more information about which products are available in each region and their availability timelines, see Cloud locations. 
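One way to keep the functional criteria comparable across candidate regions is to tabulate them and filter programmatically, as in the following minimal sketch. The region names are real Google Cloud regions, but the latency figures and product lists are placeholder values that you would replace with your own measurements and with the information on the Cloud locations page.

```python
# A minimal sketch for filtering candidate regions against functional criteria.
# The latency numbers and product lists are placeholders, not measurements.
candidate_regions = {
    "europe-west1": {"latency_ms": 20, "products": {"Compute Engine", "GKE", "Filestore"}},
    "europe-west8": {"latency_ms": 12, "products": {"Compute Engine", "GKE"}},
    "us-central1":  {"latency_ms": 110, "products": {"Compute Engine", "GKE", "Filestore"}},
}

required_products = {"Compute Engine", "GKE", "Filestore"}
max_latency_ms = 50  # derived from your own latency assessment

suitable = [
    region
    for region, info in candidate_regions.items()
    if required_products <= info["products"] and info["latency_ms"] <= max_latency_ms
]
print(suitable)  # ['europe-west1'] with the placeholder values above
```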
Additionally, some products might not offer all their features in every region where they're available. For example, the available regions and zones for Compute Engine offer specific machine types in specific Google Cloud regions. For more information about what features each product offers in each region, see the product documentation. Are the resources that you need in each Google Cloud region within the per-region quota limits? Google Cloud uses quotas to restrict how much of a shared Google Cloud resource that you can use. Some quotas are global and apply to your usage of the resource anywhere in Google Cloud, while others are regional or zonal and apply to your usage of the resource in a specific Google Cloud region. For example, most Compute Engine resource usage quotas, such as the number of virtual machines that you can create, are regional. For more information about quotas and how to increase them, see Working with quotas. Evaluate non-functional criteria To evaluate non-functional criteria, consider the following questions: Do you prefer a low carbon footprint? Google Cloud continuously invests in sustainability and in carbon-free energy for Google Cloud regions, and it's committed to carbon free energy for all cloud regions. Google Cloud regions have different carbon footprints. For information about the carbon footprint of each Google Cloud region, and how to incorporate carbon-free energy in your location strategy, see Carbon free energy for Google Cloud regions. Do your environments need to meet particular regulations? Governments and national and supranational entities often strictly regulate certain markets and business areas, such as banking and public sector. These regulations might mandate that workloads, data, and processes reside only in certain geographic regions. For example, your environments might need to comply with data, operational, and software sovereignty requirements to guarantee certain levels of control and transparency for sensitive data and workloads running in the cloud. We recommend that you assess your current and upcoming regulatory requirements when choosing the Google Cloud regions for your environments, and select the Google Cloud regions that best fit your regulatory requirements. Design and build your single-region environments To design a single-region environment, do the following: Build your foundation on Google Cloud. Provision and configure computing resources. Provision and configure data storage resources. Provision and configure data analytics resources. When you design your environment, consider the following general design principles: Provision regional resources. Many Google Cloud products support provisioning resources in multiple zones across a region. We recommend that you provision regional resources instead of zonal resources when possible. Theoretically, you might be able to provision zonal resources in multiple zones across a region and manage them yourself to achieve a higher reliability. However, that configuration wouldn't fully benefit from all the reliability features of the Google infrastructure that underpins Google Cloud services. Verify that the environments work as expected with the failure model assumptions. When you design and implement your single-region environments, we recommend that you verify whether those environments meet the requirements to guard against the failure models that you're considering, before you promote those environments as part of your production environment. 
For example, you can simulate zonal outages to verify that your single-region environments can survive with minimal disruption. For more general design principles for designing reliable single- and multi-region environments and for information about how Google achieves better reliability with regional and multi-region services, see Architecting disaster recovery for cloud infrastructure outages: Common themes. Build your foundation on Google Cloud To build the foundation of your single-region environments, see Migrate to Google Cloud: Plan and build your foundation. The guidance in that document is aimed at building a foundation for migrating workloads, data, and processes to Google Cloud, but it's also applicable to build the foundation for your single-region environments. After you read that document, continue to read this document. After you build your foundation on Google Cloud, you design and implement security controls and boundaries. Those security measures help to ensure that your workloads, data, and processes stay inside their respective regions. The security measures also help to ensure that your resources don't leak anything to other regions due to bugs, misconfigurations, or malicious attacks. Provision and configure computing resources After you build the foundation of your single-region environments, you provision and configure computing resources. The following sections describe the Google Cloud computing products that support regional deployments. Compute Engine Compute Engine is Google Cloud's infrastructure as a service (IaaS). It uses Google's worldwide infrastructure to offer virtual machines and related services to customers. Compute Engine resources are either zonal, such as virtual machines or zonal Persistent Disk; regional, such as static external IP addresses; or global, such as Persistent Disk snapshots. For more information about the zonal, regional, and global resources that Compute Engine supports, see Global, regional, and zonal resources. To allow for better flexibility and resource management of physical resources, Compute Engine decouples zones from their physical resources. For more information about this abstraction and what it might imply for you, see Zones and clusters. To increase the reliability of your environments that use Compute Engine, consider the following: Regional managed instance groups (MIGs). Compute Engine virtual machines are zonal resources, so they will be unavailable in the event of a zonal outage. To mitigate this issue, Compute Engine lets you create regional MIGs that provision virtual machines across multiple zones in a region automatically, according to demand and regional availability. Note: For more information about region-specific considerations, see Geography and regions. If your workloads are stateful, you can also create regional stateful MIGs to preserve stateful data and configurations. Regional MIGs support simulating zonal failures. For information about simulating a zonal failure when using a regional MIG, see Simulate a zone outage for a regional MIG. For information about how regional MIGs compare to other deployment options, see Choose a Compute Engine deployment strategy for your workload. Target distribution shape. Regional MIGs distribute virtual machines according to the target distribution shape. To ensure that virtual machine distribution doesn't differ by more than one unit between any two zones in a region, we recommend that you choose the EVEN distribution shape when you create regional MIGs. 
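If you provision regional MIGs programmatically, you can set the EVEN target distribution shape when you create the group. The following is a minimal sketch that assumes the google-cloud-compute Python client library and an existing instance template; the project, region, and resource names are placeholders, and details such as the returned operation type can vary between client library versions.

```python
# A minimal sketch, assuming the google-cloud-compute client library and an
# existing instance template; names and IDs are placeholders.
from google.cloud import compute_v1

def create_regional_mig(project_id: str, region: str, template_url: str) -> None:
    mig = compute_v1.InstanceGroupManager(
        name="web-tier-mig",               # hypothetical group name
        base_instance_name="web",
        instance_template=template_url,
        target_size=3,
        distribution_policy=compute_v1.DistributionPolicy(
            # EVEN keeps the VM count within one unit between any two zones.
            target_shape="EVEN",
        ),
    )
    client = compute_v1.RegionInstanceGroupManagersClient()
    operation = client.insert(
        project=project_id,
        region=region,
        instance_group_manager_resource=mig,
    )
    operation.result()  # wait for the regional MIG to be created

# Example usage with placeholder values:
# create_regional_mig(
#     "my-project", "us-central1",
#     "projects/my-project/global/instanceTemplates/web-template",
# )
```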
For information about the differences between target distribution shapes, see Comparison of shapes. Instance templates. To define the virtual machines to provision, MIGs use a global resource type called instance templates. Although instance templates are global resources, they might reference zonal or regional resources. When you create instance templates, we recommend that you reference regional resources over zonal resources when possible. If you use zonal resources, we recommend that you assess the impact of using them. For example, if you create an instance template that references a Persistent Disk volume that's available only in a given zone, you can't use that template in any other zones because the Persistent Disk volume isn't available in those other zones. Configure load balancing and scaling. Compute Engine supports load balancing traffic between Compute Engine instances, and it supports autoscaling to automatically add or remove virtual machines from MIGs, according to demand. To increase the reliability and the flexibility of your environments, and to avoid the management burden of self-managed solutions, we recommend that you configure load balancing and autoscaling. For more information about configuring load balancing and scaling for Compute Engine, see Load balancing and scaling. Configure resource reservations. To ensure that your environments have the necessary resources when you need them, we recommend that you configure resource reservations to provide assurance in obtaining capacity for zonal Compute Engine resources. For example, if there is a zonal outage, you might need to provision virtual machines in another zone to supply the necessary capacity to make up for the ones that are unavailable because of the outage. Resource reservations ensure that you have the resources available to provision the additional virtual machines. Use zonal DNS names. To mitigate the risk of cross-regional outages, we recommend that you use zonal DNS names to uniquely identify virtual machines that use DNS names in your environments. Google Cloud uses zonal DNS names for Compute Engine virtual machines by default. For more information about how the Compute Engine internal DNS works, see Internal DNS. To facilitate a future migration across regions, and to make your configuration more maintainable, we recommend that you consider zonal DNS names as configuration parameters that you can eventually change in the future. Choose appropriate storage options. Compute Engine supports several storage options for your virtual machines, such as Persistent Disk volumes and local solid state drives (SSDs): Persistent Disk volumes are distributed across several physical disks, and they're located independently from your virtual machines. Persistent disks can either be zonal or regional. Zonal persistent disks store data in a single zone, while regional persistent disks replicate data across two different zones. When you choose storage options for your single-region environments, we recommend that you choose regional persistent disks because they provide you with failover options if there are zonal failures. For more information about how to react to zonal failures when you use regional persistent disks, see High availability options using regional Persistent Disk and Regional Persistent Disk failover. Local SSDs have high throughput, but they store data only until an instance is stopped or deleted. 
Therefore, local SSDs are ideal to store temporary data, caches, and data that you can reconstruct by other means. Unlike local SSDs, persistent disks are durable storage devices that virtual machines can access like physical disks. Design and implement mechanisms for data protection. When you design your single-region environments, we recommend that you put in place automated mechanisms to protect your data if there are adverse events, such as zonal, regional, or multi-regional failures, or deliberate attacks by malicious third parties. Compute Engine provides several options to protect your data. You can use those options as building blocks to design and implement your data protection processes. GKE GKE helps you to deploy, manage, and scale containerized workloads on Kubernetes. GKE builds on top of Compute Engine, so the recommendations in the previous section about Compute Engine partially apply to GKE. To increase the reliability of your environments that use GKE, consider the following design points and GKE features: Use regional GKE clusters to increase availability. GKE supports different availability types for your clusters, depending on the type of cluster that you need. GKE clusters can have a zonal or regional control plane, and they can have nodes that run in a single zone or across multiple zones within a region. Different cluster types also offer different service level agreements (SLAs). To increase the reliability of your environments, we recommend that you choose regional clusters. Note: For more information about region-specific considerations, see Geography and regions. If you're using the GKE Autopilot feature, you can provision regional clusters only. Consider a multi-cluster environment. Deploying multiple GKE clusters can increase the flexibility and the availability properties of your environment, at the cost of increasing complexity. For example, if you need to use a new GKE feature that you can only enable when you create a GKE cluster, you can avoid downtime and reduce the complexity of the migration by adding a new GKE cluster to your multi-cluster environment, deploying workloads in the new cluster, and destroying the old cluster. For more information about the benefits of a multi-cluster GKE environment, see Multi-cluster use cases. To help you manage the complexity of the migration, Google Cloud offers Fleet management, a set of capabilities to manage a group of GKE clusters, their infrastructure, and the workloads that are deployed in those clusters. Set up Backup for GKE. Backup for GKE is a regional service for backing up workload configuration and volumes in a source GKE cluster, and restoring them in a target GKE cluster. To protect workload configuration and data from possible losses, we recommend that you enable and configure Backup for GKE. For more information, see Backup for GKE Overview. Cloud Run Cloud Run is a managed compute platform to run containerized workloads. Cloud Run uses services to provide you with the infrastructure to run your workloads. Cloud Run services are regional resources, and the services are replicated across multiple zones in the region that they're in. When you deploy a Cloud Run service, you can choose a region. Then, Cloud Run automatically chooses the zones inside that region in which to deploy instances of the service. Cloud Run automatically balances traffic across service instances, and it's designed to greatly mitigate the effects of a zonal outage. 
Note: For more information about region-specific considerations, see Geography and regions. VMware Engine VMware Engine is a fully managed service that lets you run the VMware platform in Google Cloud. To increase the reliability of your environments that use VMware Engine, we recommend the following: Provision multi-node VMware Engine private clouds. VMware Engine supports provisioning isolated VMware stacks called private clouds, and all nodes that compose a private cloud reside in the same region. Private cloud nodes run on dedicated, isolated bare-metal hardware nodes, and they're configured to eliminate single points of failure. VMware Engine supports single-node private clouds, but we only recommend using single-node private clouds for proofs of concept and testing purposes. For production environments, we recommend that you use the default, multi-node private clouds. Provision VMware Engine stretched private clouds. A stretched private cloud is a multi-node private cloud whose nodes are distributed across the zones in a region. A stretched private cloud protects your environment against zonal outages. For more information about the high-availability and redundancy features of VMware Engine, see Availability and redundancy. Provision and configure data storage resources After you provision and configure computing resources for your single-region environments, you provision and configure resources to store and manage data. The following sections describe the Google Cloud data storage and management products that support regional and multi-regional configurations. Cloud Storage Cloud Storage is a service to store objects, which are immutable pieces of data, in buckets, which are basic containers to hold your data. When you create a bucket, you select the bucket location type that best meets your availability, regulatory, and other requirements. Location types have different availability guarantees. To protect your data against failure and outages, Cloud Storage makes your data redundant across at least two zones for buckets that have a region location type, across two regions for buckets that have a dual-region location type, and across two or more regions for buckets that have a multi-region location type. For example, if you need to make a Cloud Storage bucket available if there are zonal outages, you can provision it with a region location type. For more information about how to design disaster recovery mechanisms for data stored in Cloud Storage, and about how Cloud Storage reacts to zonal and regional outages, see Architecting disaster recovery for cloud infrastructure outages: Cloud Storage. Filestore Filestore provides fully managed file servers on Google Cloud that can be connected to Compute Engine instances, GKE clusters, and your on-premises machines. Filestore offers several service tiers. Each tier offers unique availability, scalability, performance, capacity, and data-recovery features. When you provision Filestore instances, we recommend that you choose the Enterprise tier because it supports high availability and data redundancy across multiple zones in a region; instances that are in other tiers are zonal resources. Note: For more information about region-specific considerations, see Geography and regions. Bigtable Bigtable is a fully managed, high-performance, highly scalable database service for large analytical and operational workloads. Bigtable instances are zonal resources. 
To increase the reliability of your instances, you can configure Bigtable to replicate data across multiple zones within the same region or across multiple regions. Note: For more information about region-specific considerations, see Geography and regions. When replication is enabled, if there is an outage, Bigtable automatically fails requests over to another available cluster that you replicated data to. For more information about how replication works in Bigtable, see About replication and Architecting disaster recovery for cloud infrastructure outages: Bigtable. Firestore Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud. When you provision a Firestore database, you select its location. Locations can be either multi-region or regional, and they offer different reliability guarantees. If a database has a regional location, it replicates data across different zones within a region. A multi-region database replicates data across more than one region. For information about how replication works in Firestore, and about how Firestore reacts to zonal and regional outages, see Firestore locations and Architecting disaster recovery for cloud infrastructure outages: Firestore. Memorystore Memorystore lets you configure scalable, secure, and highly available in-memory data storage services. It supports data backends for Redis, Memcached, and Valkey. When you provision Memorystore for Redis instances, you select a service tier for that instance. Memorystore for Redis supports several instance service tiers, and each tier offers unique availability, node size, and bandwidth features. When you provision a Memorystore for Redis instance, we recommend that you choose the Standard tier or the Standard tier with read replicas. Memorystore instances in those two tiers automatically replicate data across multiple zones in a region. Note: For more information about region-specific considerations, see Geography and regions. For more information about how Memorystore for Redis achieves high availability, see High availability for Memorystore for Redis. When you provision Memorystore for Memcached instances, consider the following: Zone selection. When you provision Memorystore for Memcached instances, you select the region in which you want to deploy the instance. Then, you can either select the zones within that region where you want to deploy the nodes of that instance, or you can let Memorystore for Memcached automatically distribute the nodes across zones. To optimally place instances, and to avoid provisioning issues such as placing all the nodes inside the same zone, we recommend that you let Memorystore for Memcached automatically distribute nodes across zones within a region. Data replication across zones. Memorystore for Memcached instances don't replicate data across zones or regions. For more information about how Memorystore for Memcached instances work if there are zonal or regional outages, see Architecting disaster recovery for cloud infrastructure outages: Memorystore for Memcached. When you provision Memorystore for Valkey instances, you choose availability and reliability options. Memorystore for Valkey supports several configurations, such as zonal and multi-zonal instances. For more information about Memorystore for Valkey availability and reliability, see Memorystore for Valkey: High availability and replicas. 
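The Standard tier recommendation for Memorystore for Redis can be expressed as a short provisioning sketch. The following assumes the google-cloud-redis Python client library; the project, location, and instance ID are placeholders, and you should check the client library version you use for the exact request shape.

```python
# A minimal sketch, assuming the google-cloud-redis client library; the
# project, location, and instance ID below are placeholders.
from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()
operation = client.create_instance(
    request={
        "parent": "projects/my-project/locations/us-central1",  # hypothetical
        "instance_id": "session-cache",
        "instance": {
            # STANDARD_HA replicates data across zones within the region.
            "tier": redis_v1.Instance.Tier.STANDARD_HA,
            "memory_size_gb": 1,
        },
    }
)
instance = operation.result()  # wait for provisioning to finish
print(f"Created {instance.name} in tier {instance.tier.name}")
```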
Spanner Spanner is a fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability. To use Spanner, you provision Spanner instances. When you provision Spanner instances, consider the following: Instance configuration. An instance configuration defines the geographic placement and replication of the databases in a Spanner instance. When you create a Spanner instance, you configure it as either regional or multi-region. Replication. Spanner supports automatic, byte-level replication, and it supports the creation of replicas according to your availability, reliability, and scalability needs. You can distribute replicas across regions and environments. Spanner instances that have a regional configuration maintain one read-write replica for each zone within a region. Instances that have a multi-region configuration replicate data in multiple zones across multiple regions. Moving instances. Spanner lets you move an instance from any instance configuration to any other instance configuration without causing any downtime or disruption to transaction guarantees during the move. For more information about Spanner replication, and about how Spanner reacts to zonal and regional outages, see Spanner replication and Architecting disaster recovery for cloud infrastructure outages: Spanner. Provision and configure data analytics resources After you provision and configure data storage resources for your single-region environments, you provision and configure data analytics resources. The following sections describe the Google Cloud data analytics products that support regional configurations. BigQuery BigQuery is a fully managed enterprise data warehouse that helps you to manage and analyze your data with built-in features like machine learning, geospatial analysis, and business intelligence. To organize and control access to data in BigQuery, you provision top-level containers called datasets. When you provision BigQuery datasets, consider the following: Dataset location. To select the BigQuery location where you want to store your data, you configure the dataset location. A location can either be regional or multi-region. For either location type, BigQuery stores copies of your data in two different zones within the selected location. You can't change the dataset location after you create a dataset. Disaster planning. BigQuery is a regional service, and it handles zonal failures automatically, for computing and for data. However, there are certain scenarios that you have to plan for yourself, such as regional outages. We recommend that you consider those scenarios when you design your environments. For more information about BigQuery disaster recovery planning and features, see Understand reliability: Disaster planning in the BigQuery documentation, and see Architecting disaster recovery for cloud infrastructure outages: BigQuery. Dataproc Dataproc is a managed service that lets you take advantage of open source data tools for batch processing, querying, streaming, and machine learning. Dataproc builds on top of Compute Engine, so the recommendations in the previous section about Compute Engine partially apply to Dataproc as well. To use Dataproc, you create Dataproc clusters. Dataproc clusters are zonal resources. When you create Dataproc clusters, consider the following: Automatic zone placement. 
When you create a cluster, you can either specify the zone within a region where you want to provision the nodes of the cluster, or let Dataproc auto zone placement select the zone automatically. We recommend that you use auto zone placement unless you need to fine-tune the zone placement of cluster nodes inside the region. High availability mode. When you create a cluster, you can enable high availability mode. You can't enable high availability mode after you create a cluster. We recommend that you enable high availability mode if you need the cluster to be resilient to the failure of a single coordinator node, or to partial zonal outages. High availability Dataproc clusters are zonal resources. For more information about how Dataproc reacts to zonal and regional outages and how to increase the reliability of your Dataproc clusters if there are failures, see Architecting disaster recovery for cloud infrastructure outages: Dataproc. Dataflow Dataflow is a fully managed service for running stream and batch data processing pipelines. To use Dataflow, you create Dataflow pipelines, and then Dataflow runs jobs, which are instances of those pipelines, on worker nodes. Because jobs are zonal resources, when you use Dataflow resources, you should consider the following: Regional endpoints. When you create a job, Dataflow requires that you configure a regional endpoint. By configuring a regional endpoint for your job, you restrict the computing and data resource placement to a particular region. Zonal placement. Dataflow automatically distributes worker nodes either across all the zones within a region or in the best zone within a region, according to the job type. Dataflow lets you override the zonal placement of worker nodes by placing all of the worker nodes in the same zone within a region. To mitigate the issues caused by zonal outages, we recommend that you let Dataflow automatically select the best zone placement unless you need to place the worker nodes in a specific zone. For more information about how Dataflow reacts to zonal and regional outages and how to increase the reliability of your Dataflow jobs if there are failures, see Architecting disaster recovery for cloud infrastructure outages: Dataflow. Pub/Sub Pub/Sub is an asynchronous and scalable messaging service that decouples services that produce messages from the services that process those messages. Pub/Sub organizes messages in topics. Publishers (services that produce messages) send messages to topics, and subscribers receive messages from topics. Pub/Sub stores each message in a single region, and replicates it in at least two zones within that region. For more information, see Architectural overview of Pub/Sub. When you configure your Pub/Sub environment, consider the following: Global and regional endpoints. Pub/Sub supports global and regional endpoints to publish messages. When a publisher sends a message to the global endpoint, Pub/Sub automatically selects the closest region to process that message. When a publisher sends a message to a regional endpoint, Pub/Sub processes the message in that region. Message storage policies. Pub/Sub lets you configure message storage policies to restrict where Pub/Sub processes and stores messages, regardless of the origin of the request and the endpoint that the publisher used to publish the message. We recommend that you configure message storage policies to ensure that messages don't leave your single-region environment. 
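As an illustration of that recommendation, the following minimal sketch creates a topic with a message storage policy that is limited to one region. It assumes the google-cloud-pubsub Python client library; the project ID, topic ID, and region are placeholders.

```python
# A minimal sketch, assuming the google-cloud-pubsub client library; the
# project ID, topic ID, and region are placeholders.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "orders")  # hypothetical IDs

# Restrict message persistence to the environment's single region.
topic = publisher.create_topic(
    request={
        "name": topic_path,
        "message_storage_policy": {
            "allowed_persistence_regions": ["us-central1"],
        },
    }
)
print(f"Created topic {topic.name} with a regional message storage policy.")
```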
For more information about how Pub/Sub handles zonal and regional outages, see Architecting disaster recovery for cloud infrastructure outages: Pub/Sub. Adapt your workloads to single-region environments When you complete the provisioning and the configuration of your environments, you need to consider how to make your workloads more resilient to zonal and regional failures. Each workload can have its own availability and reliability requirements and properties, but there are a few design principles that you can apply, and strategies that you can adopt to improve your overall resilience posture in the unlikely event of zonal and regional failures. When you design and implement your workloads, consider the following: Implement Site Reliability Engineering (SRE) practices and principles. Automation and extensive monitoring are part of the core principles of SRE. Google Cloud provides the tools and the professional services to implement SRE to increase the resilience and the reliability of your environments and to reduce toil. Design for scalability and resiliency. When you design workloads aimed at cloud environments, we recommend that you consider scalability and resiliency to be inherent requirements that your workloads must respect. For more information about this kind of design, see Patterns for scalable and resilient apps. Design for recovering from cloud infrastructure outages. Google Cloud availability guarantees are defined by the Google Cloud Service Level Agreements. In the unlikely event that a zonal or a regional failure occurs, we recommend that you design your workloads so that they're resilient to zonal and regional failures. Implement load shedding and graceful degradation. If there are cloud infrastructure failures, or failures in other dependencies of your workloads, we recommend that you design your workloads so that they're resilient. Your workloads should maintain certain and well-defined levels of functionality even if there are failures (graceful degradation) and they should be able to drop some proportion of their load as they approach overload conditions (load shedding). Plan for regular maintenance. When you design your deployment processes and your operational processes, we recommend that you also think about all the activities that you need to perform as part of the regular maintenance of your environments. Regular maintenance should include activities like applying updates and configuration changes to your workloads and their dependencies, and how those activities might impact the availability of your environments. For example, you can configure a host maintenance policy for your Compute Engine instances. Adopt a test-driven development approach. When you design your workloads, we recommend that you adopt a test-driven development approach to ensure that your workloads behave as intended from all angles. For example, you can test if your workloads and cloud infrastructure meet the functional, non-functional, and security requirements that you require. What's next Learn about how to design scalable and resilient apps. Read about Google Cloud infrastructure reliability. To improve the reliability and the resilience of your environments, read about Site Reliability Engineering (SRE). Read about how to architect your disaster recovery for cloud infrastructure outages. To learn about the migration framework, read Migrate to Google Cloud: Get started. Learn when to find help for your migrations. 
For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions ArchitectOther contributors: Henry Bell | Cloud Solutions ArchitectElliot Eaton | Cloud Solutions ArchitectGrace Mollison | Solutions LeadIdo Flatow | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Design_secure_deployment_pipelines.txt b/Design_secure_deployment_pipelines.txt new file mode 100644 index 0000000000000000000000000000000000000000..aba1c3cf0d7cc2f87e70891b316f31acf49ab0d8 --- /dev/null +++ b/Design_secure_deployment_pipelines.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/design-secure-deployment-pipelines-bp +Date Scraped: 2025-02-23T11:56:10.550Z + +Content: +Home Docs Cloud Architecture Center Send feedback Design secure deployment pipelines Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-29 UTC A deployment pipeline is an automated process that takes code or prebuilt artifacts and deploys them to a test environment or a production environment. Deployment pipelines are commonly used to deploy applications, configuration, or cloud infrastructure (infrastructure as code), and they can play an important role in the overall security posture of a cloud deployment. This guide is intended for DevOps and security engineers and describes best practices for designing secure deployment pipelines based on your confidentiality, integrity, and availability requirements. Architecture The following diagram shows the flow of data in a deployment pipeline. It illustrates how you can turn your artifacts into resources. Deployment pipelines are often part of a larger continuous integration/continuous deployment (CI/CD) workflow and are typically implemented using one of the following models: Push model: In this model, you implement the deployment pipeline using a central CI/CD system such as Jenkins or GitLab. This CI/CD system might run on Google Cloud, on-premises, or on a different cloud environment. Often, the same CI/CD system is used to manage multiple deployment pipelines. The push model leads to a centralized architecture with a few CI/CD systems that are used for managing a potentially large number of resources or applications. For example, you might use a single Jenkins or GitLab instance to manage your entire production environment, including all its projects and applications. Pull model: In this model, the deployment process is implemented by an agent that is deployed alongside the resource–for example, in the same Kubernetes cluster. The agent pulls artifacts or source code from a centralized location, and deploys them locally. Each agent manages one or two resources. The pull model leads to a more decentralized architecture with a potentially large number of single-purpose agents. Compared to manual deployments, consistently using deployment pipelines can have the following benefits: Increased efficiency, because no manual work is required. Increased reliability, because the process is fully automated and repeatable. Increased traceability, because you can trace all deployments to changes in code or to input artifacts. To perform, a deployment pipeline requires access to the resources it manages: A pipeline that deploys infrastructure by using tools like Terraform might need to create, modify, or even delete resources like VM instances, subnets, or Cloud Storage buckets. 
A pipeline that deploys applications might need to upload new container images to Artifact Registry, and deploy new application versions to App Engine, Cloud Run, or Google Kubernetes Engine (GKE). A pipeline that manages settings or deploys configuration files might need to modify VM instance metadata, Kubernetes configurations, or modify data in Cloud Storage. If your deployment pipelines aren't properly secured, their access to Google Cloud resources can become a weak spot in your security posture. Weakened security can lead to several kinds of attacks, including the following: Pipeline poisoning attacks: Instead of attacking a resource directly, a bad actor might attempt to compromise the deployment pipeline, its configuration, or its underlying infrastructure. Taking advantage of the pipeline's access to Google Cloud, the bad actor could make the pipeline perform malicious actions on Cloud resources, as shown in the following diagram: Supply chain attacks: Instead of attacking the deployment pipeline, a bad actor might attempt to compromise or replace pipeline input—including source code, libraries, or container images, as shown in the following diagram: To determine whether your deployment pipelines are appropriately secured, it's insufficient to look only at the allow policies and deny policies of Google Cloud resources in isolation. Instead, you must consider the entire graph of systems that directly or indirectly grant access to a resource. This graph includes the following information: The deployment pipeline, its underlying CI/CD system, and its underlying infrastructure The source code repository, its underlying servers, and its underlying infrastructure Input artifacts, their storage locations, and their underlying infrastructure Systems that produce the input artifacts, and their underlying infrastructure Complex input graphs make it difficult to identify user access to resources and systemic weaknesses. The following sections describe best practices for designing deployment pipelines in a way that helps you manage the size of the graph, and reduce the risk of lateral movement and supply chain attacks. Assess security objectives Your resources on Google Cloud are likely to vary in how sensitive they are. Some resources might be highly sensitive because they're business critical or confidential. Other resources might be less sensitive because they're ephemeral or only intended for testing purposes. To design a secure deployment pipeline, you must first understand the resources the pipeline needs to access, and how sensitive these resources are. The more sensitive your resources, the more you should focus on securing the pipeline. The resources accessed by deployment pipelines might include: Applications, such as Cloud Run or App Engine Cloud resources, such as VM instances or Cloud Storage buckets Data, such as Cloud Storage objects, BigQuery records, or files Some of these resources might have dependencies on other resources, for example: Applications might access data, cloud resources, and other applications. Cloud resources, such as VM instances or Cloud Storage buckets, might contain applications or data. As shown in the preceding diagram, dependencies affect how sensitive a resource is. For example, if you use an application that accesses highly sensitive data, typically you should treat that application as highly sensitive. Similarly, if a cloud resource like a Cloud Storage bucket contains sensitive data, then you typically should treat the bucket as sensitive. 
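The dependency rule described above, where a resource is as sensitive as the most sensitive thing it hosts or accesses, can be captured in a few lines of code. The following minimal sketch uses the security categories that this document introduces in the next subsections; the class names and example resources are illustrative.

```python
# A minimal sketch of the dependency rule: a resource inherits the highest
# security category of anything it hosts or accesses. Names are illustrative.
from enum import IntEnum

class Category(IntEnum):
    NOT_APPLICABLE = 0
    LOW = 1
    MODERATE = 2
    HIGH = 3

def inherited_category(own: Category, dependencies: list[Category]) -> Category:
    """Highest category of the resource itself and everything it depends on."""
    return max([own, *dependencies])

# Example: an application that reads financial data (high confidentiality)
# and test data (low confidentiality) is treated as high confidentiality.
app_confidentiality = inherited_category(
    Category.LOW, [Category.HIGH, Category.LOW]
)

# The bucket that hosts that application's data inherits the application's
# category, and so does the deployment pipeline that can write to the bucket.
bucket_confidentiality = inherited_category(Category.LOW, [app_confidentiality])
print(app_confidentiality.name, bucket_confidentiality.name)  # HIGH HIGH
```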
Because of these dependencies, it's best to first assess the sensitivity of your data. Once you've assessed your data, you can examine the dependency chain and assess the sensitivity of your Cloud resources and applications. Categorize the sensitivity of your data To understand the sensitivity of the data in your deployment pipeline, consider the following three objectives: Confidentiality: You must protect the data from unauthorized access. Integrity: You must protect the data against unauthorized modification or deletion. Availability: You must ensure that authorized people and systems can access the data in your deployment pipeline. For each of these objectives, ask yourself what would happen if your pipeline was breached: Confidentiality: How damaging would it be if data was disclosed to a bad actor, or leaked to the public? Integrity: How damaging would it be if data was modified or deleted by a bad actor? Availability: How damaging would it be if a bad actor disrupted your data access? To make the results comparable across resources, it's useful to introduce security categories. Standards for Security Categorization (FIPS-199) suggests using the following four categories: High: Damage would be severe or catastrophic Moderate: Damage would be serious Low: Damage would be limited Not applicable: The standard doesn't apply Depending on your environment and context, a different set of categories could be more appropriate. The confidentiality and integrity of pipeline data exist on a spectrum, based on the security categories just discussed. The following subsections contain examples of resources with different confidentiality and integrity measurements: Resources with low confidentiality, but low, moderate, and high integrity The following resource examples all have low confidentiality: Low integrity: Test data Moderate integrity: Public web server content, policy constraints for your organization High integrity: Container images, disk images, application configurations, access policies (allow and deny lists), liens, access-level data Resources with medium confidentiality, but low, moderate, and high integrity The following resource examples all have medium confidentiality: Low integrity: Internal web server content Moderate integrity: Audit logs High integrity: Application configuration files Resources with high confidentiality, but low, moderate, and high integrity The following resource examples all have high confidentiality: Low integrity: Usage data and personally identifiable information Moderate integrity: Secrets High integrity: Financial data, KMS keys Categorize applications based on the data that they access When an application accesses sensitive data, the application and the deployment pipeline that manages the application can also become sensitive. To qualify that sensitivity, look at the data that the application and the pipeline need to access. Once you've identified and categorized all data accessed by an application, you can use the following categories to initially categorize the application before you design a secure deployment pipeline: Confidentiality: Highest category of any data accessed Integrity: Highest category of any data accessed Availability: Highest category of any data accessed This initial assessment provides guidance, but there might be additional factors to consider—for example: Two sets of data might have low-confidentiality in isolation. But when combined, they could reveal new insights. 
If an application has access to both sets of data, you might need to categorize it as medium- or high-confidentiality. If an application has access to high-integrity data, then you should typically categorize the application as high-integrity. But if that access is read only, a categorization of high-integrity might be too strict. For details on a formalized approach to categorize applications, see Guide for Mapping Types of Information and Information Systems to Security Categories (NIST SP 800-60 Vol. 2 Rev1). Categorize cloud resources based on the data and applications they host Any data or application that you deploy on Google Cloud is hosted by a Google Cloud resource: An application might be hosted by an App Engine service, a VM instance, or a GKE cluster. Your data might be hosted by a persistent disk, a Cloud Storage bucket, or a BigQuery dataset. When a cloud resource hosts sensitive data or applications, the resource and the deployment pipeline that manages the resource can also become sensitive. For example, you should consider a Cloud Run service and its deployment pipeline to be as sensitive as the application that it's hosting. After categorizing your data and your applications, create an initial security category for the application. To do so, determine a level from the following categories: Confidentiality: Highest category of any data or application hosted Integrity: Highest category of any data or application hosted Availability: Highest category of any data or application hosted When making your initial assessment, don't be too strict—for example: If you encrypt highly confidential data, treat the encryption key as highly confidential. But, you can use a lower security category for the resource containing the data. If you store redundant copies of data, or run redundant instances of the same applications across multiple resources, you can make the category of the resource lower than the category of the data or application it hosts. Constrain the use of deployment pipelines If your deployment pipeline needs to access sensitive Google Cloud resources, you must consider its security posture. The more sensitive the resources, the better you need to attempt to secure the pipeline. However, you might encounter the following practical limitations: When using existing infrastructure or an existing CI/CD system, that infrastructure might constrain the security level you can realistically achieve. For example, your CI/CD system might only support a limited set of security controls, or it might be running on infrastructure that you consider less secure than some of your production environments. When setting up new infrastructure and systems to run your deployment pipeline, securing all components in a way that meets your most stringent security requirements might not be cost effective. To deal with these limitations, it can be useful to set constraints on what scenarios should and shouldn't use deployment pipelines and a particular CI/CD system. For example, the most sensitive deployments are often better handled outside of a deployment pipeline. These deployments could be manual, using a privileged session management system or a privileged access management system, or something else, like tool proxies. To set your constraints, define which access controls you want to enforce based on your resource categories. 
Consider the guidance offered in the following table: Category of resource Access controls Low No approval required Moderate Team lead must approve High Multiple leads must approve and actions must be recorded Contrast these requirements with the capabilities of your source code management (SCM) and CI/CD systems by asking the following questions and others: Do your SCM or CI/CD systems support necessary access controls and approval mechanisms? Are the controls protected from being subverted if bad actors attack the underlying infrastructure? Is the configuration that defines the controls appropriately secured? Depending on the capabilities and limitations imposed by your SCM or CI/CD systems, you can then define your data and application constraints for your deployment pipelines. Consider the guidance offered in the following table: Category of resource Constraints Low Deployment pipelines can be used, and developers can self-approve deployments. Moderate Deployment pipelines can be used, but a team lead has to approve every commit and deployment. High Don't use deployment pipelines. Instead, administrators have to use a privileged access management system and session recording. Maintain resource availability Using a deployment pipeline to manage resources can impact the availability of those resources and can introduce new risks: Causing outages: A deployment pipeline might push faulty code or configuration files, causing a previously working system to break, or data to become unusable. Prolonging outages: To fix an outage, you might need to rerun a deployment pipeline. If the deployment pipeline is broken or unavailable for other reasons, that could prolong the outage. A pipeline that can cause or prolong outages poses a denial of service risk: A bad actor might use the deployment pipeline to intentionally cause an outage. Create emergency access procedures When a deployment pipeline is the only way to deploy or configure an application or resource, pipeline availability can become critical. In extreme cases, where a deployment pipeline is the only way to manage a business-critical application, you might also need to consider the deployment pipeline business-critical. Because deployment pipelines are often made from multiple systems and tools, maintaining a high level of availability can be difficult or uneconomical. You can reduce the influence of deployment pipelines on availability by creating emergency access procedures. For example, create an alternative access path that can be used if the deployment pipeline isn't operational. Creating an emergency access procedure typically requires most of the following processes: Maintain one or more user accounts with privileged access to relevant Google Cloud resources. Store the credentials of emergency-access user accounts in a safe location, or use a privileged access management system to broker access. Establish a procedure that authorized employees can follow to access the credentials. Audit and review the use of emergency-access user accounts. Ensure that input artifacts meet your availability demands Deployment pipelines typically need to download source code from a central source code repository before they can perform a deployment. If the source code repository isn't available, running the deployment pipeline is likely to fail. Many deployment pipelines also depend on third-party artifacts.
Such artifacts might include libraries from sources such as npm, Maven Central, or the NuGet Gallery, as well as container base images, and .deb, and .rpm packages. If one of the third-party sources is unavailable, running the deployment pipeline might fail. To maintain a certain level of availability, you must ensure that the input artifacts of your deployment pipeline all meet the same or higher availability requirements. The following list can help you ensure the availability of input artifacts: Limit the number of sources for input artifacts, particularly third-party sources Maintain a cache of input artifacts that deployment pipelines can use if source systems are unavailable Treat deployment pipelines and their infrastructure like production systems Deployment pipelines often serve as the connective tissue between development, staging, and production environments. Depending on the environment, they might implement multiple stages: In the first stage, the deployment pipeline updates a development environment. In the next stage, the deployment pipeline updates a staging environment. In the final stage, the deployment pipeline updates the production environment. When using a deployment pipeline across multiple environments, ensure that the pipeline meets the availability demands of each environment. Because production environments typically have the highest availability demands, you should treat the deployment pipeline and its underlying infrastructure like a production system. In other words, apply the same access control, security, and quality standards to the infrastructure running your deployment pipelines as you do for your production systems. Limit the scope of deployment pipelines The more resources that a deployment pipeline can access, the more damage it can possibly cause if compromised. A compromised deployment pipeline that has access to multiple projects or even your entire organization could, in the worst case, possibly cause lasting damage to all your data and applications on Google Cloud. To help avoid this worst-case scenario, limit the scope of your deployment pipelines. Define the scope of each deployment pipeline so it only needs access to a relatively small number of resources on Google Cloud: Instead of granting access on the project level, only grant deployment pipelines access to individual resources. Avoid granting access to resources across multiple Google Cloud projects. Split deployment pipelines into multiple stages if they need access to multiple projects or environments. Then, secure the stages individually. Maintain confidentiality A deployment pipeline must maintain the confidentiality of the data it manages. One of the primary risks related to confidentiality is data exfiltration. There are multiple ways in which a bad actor might attempt to use a deployment pipeline to exfiltrate data from your Google Cloud resources. These ways include: Direct: A bad actor might modify the deployment pipeline or its configuration so that it extracts data from your Google Cloud resources and then copies it elsewhere. Indirect: A bad actor might use the deployment pipeline to deploy compromised code, which then steals data from your Google Cloud environment. You can reduce confidentiality risks by minimizing access to confidential resources. Removing all access to confidential resources might not be practical, however. Therefore, you must design your deployment pipeline to meet the confidentiality demands of the resources it manages. 
To determine these demands, you can use the following approach: Determine the data, applications, and resources the deployment pipeline needs to access, and categorize them. Find the resource with the highest confidentiality category and use it as an initial category for the deployment pipeline. Similar to the categorization process for applications and cloud resources, this initial assessment isn't always appropriate. For example, you might use a deployment pipeline to create resources that will eventually contain highly confidential information. If you restrict the deployment pipeline so that it can create–but can't read–these resources, then a lower confidentiality category might be sufficient. To maintain confidentiality, the Bell–LaPadula model suggests that a deployment pipeline must not: Consume input artifacts of higher confidentiality Write data to a resource of lower confidentiality According to the Bell–LaPadula model, the preceding diagram shows how data should flow in the pipeline to help ensure data confidentiality. Don't let deployment pipelines read data they don't need Deployment pipelines often don't need access to data, but they might still have it. Such over-granting of access can result from: Granting incorrect access permissions. A deployment pipeline might be granted access to Cloud Storage on the project level, for example. As a result, the deployment pipeline can access all Cloud Storage buckets in the project, although access to a single bucket might be sufficient. Using an overly permissive role. A deployment pipeline might be granted a role that provides full access to Cloud Storage, for example. However, the permission to create new buckets would suffice. The more data that a pipeline can access, the higher the risk that someone or something can steal your data. To help minimize this risk, avoid granting deployment pipelines access to any data that they don't need. Many deployment pipelines don't need data access at all, because their sole purpose is to manage configuration or software deployments. Don't let deployment pipelines write to locations they don't need To remove data, a bad actor needs access and a way to transfer the data out of your environment. The more storage and network locations a deployment pipeline can send data to, the more likely it is that a bad actor can use one of those locations for exfiltration. You can help reduce risk by limiting the number of network and storage locations where a pipeline can send data: Revoke write access to resources that the pipeline doesn't need, even if the resources don't contain any confidential data. Block internet access, or restrict connections, to an allow-listed set of network locations. Restricting outbound access is particularly important for pipelines that you've categorized as moderately confidential or highly confidential because they have access to confidential data or cryptographic key material. Use VPC Service Controls to help prevent compromised deployments from stealing data Instead of letting the deployment pipeline perform data exfiltration, a bad actor might attempt to use the deployment pipeline to deploy compromised code. That compromised code can then steal data from within your Google Cloud environment. You can help reduce the risk of such data-theft threats by using VPC Service Controls. VPC Service Controls let you restrict the set of resources and APIs that can be accessed from within certain Google Cloud projects. 
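Before moving on to integrity, the Bell–LaPadula rules above can be expressed as a simple pre-deployment check. The following Python sketch is only an illustration with hypothetical levels and resource names; it isn't a Google Cloud API.

```python
# Minimal sketch of the Bell-LaPadula confidentiality checks described
# above: a pipeline shouldn't consume inputs of higher confidentiality
# than its own category, and shouldn't write to destinations of lower
# confidentiality. Levels and resource names are hypothetical.
LEVELS = {"low": 1, "moderate": 2, "high": 3}


def check_confidentiality(pipeline_level, inputs, outputs):
    """Return a list of violations for the declared data flows.

    inputs and outputs map a resource name to its confidentiality level.
    """
    violations = []
    p = LEVELS[pipeline_level]
    for name, level in inputs.items():
        if LEVELS[level] > p:  # don't consume inputs of higher confidentiality
            violations.append(f"input {name} ({level}) exceeds pipeline level")
    for name, level in outputs.items():
        if LEVELS[level] < p:  # don't write to resources of lower confidentiality
            violations.append(f"output {name} ({level}) is below pipeline level")
    return violations


print(check_confidentiality(
    "moderate",
    inputs={"source-repo": "low", "customer-dataset": "high"},
    outputs={"public-bucket": "low"},
))
```

A check like this could run as a gate before the pipeline is allowed to execute, alongside the access restrictions described above.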
Maintain integrity To keep your Google Cloud environment secure, you must protect its integrity. This includes: Preventing unauthorized modification or deletion of data or configuration Preventing untrusted code or configuration from being deployed Ensuring that all changes leave a clear audit trail Deployment pipelines can help you maintain the integrity of your environment by letting you: Implement approval processes—for example, in the form of code reviews Enforce a consistent process for all configuration or code changes Run automated tests or quick checks before each deployment To be effective, you must try to ensure that bad actors can't undermine or sidestep these measures. To prevent such activity, you must protect the integrity of: The deployment pipeline and its configuration The underlying infrastructure All inputs consumed by the deployment pipeline To prevent the deployment pipeline from becoming vulnerable, try to ensure that the integrity standards of the deployment pipeline match or exceed the integrity demands of the resources it manages. To determine these demands, you can use the following approach: Determine the data, applications, and resources the deployment pipeline needs to access, and categorize them. Find the resource with the highest integrity category and use it as the category for the deployment pipeline. To maintain the integrity of the deployment pipeline, the Biba model suggests that: The deployment pipeline must not consume input artifacts of lower integrity. The deployment pipeline must not write data to a resource of higher integrity. According to the Biba model, the preceding diagram shows how data should flow in the pipeline to help ensure data integrity. Verify the authenticity of input artifacts Many deployment pipelines consume artifacts from third-party sources. Such artifacts might include: Docker base images .rpm or .deb packages Maven, npm, or NuGet libraries A bad actor might attempt to modify your deployment pipeline so that it uses compromised versions of third-party artifacts by: Compromising the repository that stores the artifacts Modifying the deployment pipeline's configuration to use a different source repository Uploading malicious packages with similar names, or names that contain typos Many package managers let you verify the authenticity of a package by supporting code-signing. For example, you can use PGP to sign RPM and Maven packages. You can use Authenticode to sign NuGet packages. You can use code-signing to reduce the risk of falling victim to compromised third-party packages by: Requiring that all third-party artifacts are signed Maintaining a curated list of trusted publisher certificates or public keys Letting the deployment pipeline verify the signature of third-party artifacts against the trusted publishers list Alternatively, you can verify the hashes of artifacts. You can use this approach for artifacts that don't support code-signing and change infrequently. Ensure that underlying infrastructure meets your integrity demands Instead of compromising the deployment pipeline itself, bad actors might attempt to compromise its infrastructure, including: The CI/CD software that runs the deployment pipeline The tools used by the pipeline—for example, Terraform, kubectl, or Docker The operating system and all its components Because the infrastructure that underlies deployment pipelines is often complex and might contain components from various vendors or sources, this type of security breach can be difficult to detect.
You can help reduce the risk of compromised infrastructure by: Holding the infrastructure and all its components to the same integrity standards as the deployment pipeline and the Google Cloud resources that it manages Making sure tools come from a trusted source and verifying their authenticity Regularly rebuilding infrastructure from scratch Running the deployment pipeline on shielded VMs Apply integrity controls in the pipeline While bad actors are a threat, they aren't the only possible source of software or configuration changes that can impair the integrity of your Google Cloud environment. Such changes can also originate from developers and simply be accidental, due to unawareness, or the result of typos and other mistakes. You can help reduce the risk of inadvertently applying risky changes by configuring deployment pipelines to apply additional integrity controls. Such controls can include: Performing static analysis of code and configuration Requiring all changes to pass a set of rules (policy as code) Limiting the number of changes that can be done at the same time What's next Learn about our best practices for using service accounts in deployment pipelines. Review our best practices for securing service accounts. Learn more about Investigating and responding to threats. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Design_storage_for_AI_and_ML_workloads_in_Google_Cloud.txt b/Design_storage_for_AI_and_ML_workloads_in_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..18a9457480a96f4ca448ef40b575aed94c189dbd --- /dev/null +++ b/Design_storage_for_AI_and_ML_workloads_in_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ai-ml/storage-for-ai-ml +Date Scraped: 2025-02-23T11:46:33.900Z + +Content: +Home Docs Cloud Architecture Center Send feedback Design storage for AI and ML workloads in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-20 UTC When you choose Google Cloud storage services for your artificial intelligence (AI) and machine learning (ML) workloads, you must be careful to select the correct combination of storage options for each specific job. This need for careful selection applies when you upload your dataset, train and tune your model, place the model into production, or store the dataset and model in an archive. In short, you need to select the best storage services that provide the proper latency, scale, and cost for each stage of your AI and ML workloads. To help you make well-informed choices, this document provides design guidance on how to use and integrate the variety of storage options offered by Google Cloud for key AI and ML workloads. Figure 1 shows a summary of the primary storage choices. As shown in the diagram, you typically choose Cloud Storage when you have larger file sizes, lower input and output operations per second (IOPS), or higher latency. However, when you require higher IOPS, smaller file sizes, or lower latency, choose Filestore instead. Figure 1: Primary AI and ML storage considerations Overview of AI and ML workload stages AI and ML workloads consist of four primary stages: prepare, train, serve, and archive. These are the four times in the lifecycle of an AI and ML workload where you need to make a decision about which storage options you should choose to use. 
In most cases, we recommend that you continue to use the same storage choice that you select in the prepare stage for the remaining stages. Following this recommendation helps you to reduce the copying of datasets between storage services. However, there are some exceptions to this general rule, which are described later in this guide. Some storage solutions work better than others at each stage and might need to be combined with additional storage choices for the best results. The effectiveness of the storage choice depends on the dataset properties, scale of the required compute and storage resources, latency, and other factors. The following table describes the stages and a brief summary of the recommended storage choices for each stage. For a visual representation of this table and additional details, see the decision tree. Table 1: Storage recommendations for the stages and steps in AI and ML workloads Stages Steps Storage recommendations Prepare Data preparation Upload and ingest your data. Transform the data into the correct format before training the model. Cloud Storage Large files (50 MB or larger) that can tolerate higher storage latency (tens of milliseconds). Filestore Zonal Smaller datasets with smaller files (less than 50 MB) and lower storage latency (~ 1 millisecond). Train Model development Develop your model by using notebooks and applying iterative trial and error. Model training Use small-to-large scale numbers of graphics processing units (Cloud GPUs) or Tensor Processing Units (Cloud TPUs) to repeatedly read the training dataset. Apply an iterative process to model development and training. Cloud Storage If you select Cloud Storage in the prepare stage, it's best to train your data in Cloud Storage. Cloud Storage with Local SSD or Filestore If you select Cloud Storage in the prepare stage but need to support small I/O requests or small files, you can supplement your training tasks. To do so, move some of your data from Cloud Storage to Local SSD or Filestore Zonal. Filestore If you select Filestore in the prepare stage, it's best to train your data in Filestore. Create a Local SSD cache to supplement your Filestore training tasks. Checkpointing and restart Save state periodically during model training by creating a checkpoint so that the training can restart after a node failure. Make this selection based on the I/O pattern and the amount of data that needs to be saved at the checkpoint. Cloud Storage If you select Cloud Storage in the prepare stage, it's best to use Cloud Storage for checkpointing and restart. Good for throughput, and workloads that need large numbers of threads. Filestore Zonal If you select Filestore in the prepare stage, it's best to use Filestore for checkpointing and restart. Good for latency, high per-client throughput, and low numbers of threads. Serve Store the model. Load the model into an instance running Cloud GPUs or Cloud TPUs at startup. Store results of model inference, such as generated images. Optionally, store and load the dataset used for model inference. Cloud Storage If you train your model in Cloud Storage, it's best to use Cloud Storage to serve your model. Save the content generated by your model in Cloud Storage. Filestore If you train your model in Filestore, it's best to use Filestore for serving your model. If you need durability and low latency when generating small files, choose Filestore Zonal (zonal) or Filestore Enterprise (regional). Archive Retain the training data and the model for extended time periods. 
Cloud Storage Optimize storage costs with multiple storage classes, Autoclass, or object lifecycle management. If you use Filestore, you can use Filestore snapshots and backups, or copy the data to Cloud Storage. For more details about the underlying assumptions for this table, see the following sections: Criteria Storage options Map your storage choices to the AI and ML stages Storage recommendations for AI and ML Criteria To narrow your choices of which storage options to use for your AI and ML workloads, start by answering these questions: Are your AI and ML I/O request sizes and file sizes small, medium, or large in size? Are your AI and ML workloads sensitive to I/O latency and time to first byte (TTFB)? Do you require high read and write throughput for single clients, aggregated clients, or both? What is the largest number of Cloud GPUs or Cloud TPUs that your single largest AI and ML training workload requires? In addition to answering the previous questions, you also need to be aware of the compute options and accelerators that you can choose to help optimize your AI and ML workloads. Compute platform considerations Google Cloud supports three primary methods for running AI and ML workloads: Compute Engine: Virtual machines (VMs) support all Google managed storage services and partner offerings. Compute Engine provides support for Local SSD, Persistent Disk, Cloud Storage, Cloud Storage FUSE, NetApp Volumes, and Filestore. For large scale training jobs in Compute Engine, Google has partnered with SchedMD to deliver Slurm scheduler enhancements. Google Kubernetes Engine (GKE): GKE is a popular platform for AI that integrates with popular frameworks, workloads, and data processing tools. GKE provides support for Local SSD, persistent volumes, Cloud Storage FUSE, and Filestore. Vertex AI: Vertex AI is a fully managed AI platform that provides an end-to-end solution for AI and ML workloads. Vertex AI supports both Cloud Storage and Network File System (NFS) file-based storage, such as Filestore and NetApp Volumes. For both Compute Engine and GKE, we recommend using the Cluster Toolkit to deploy repeatable and turnkey clusters that follow Google Cloud best practices. Accelerator considerations When you select storage choices for AI and ML workloads, you also need to select the accelerator processing options that are appropriate for your task. Google Cloud supports two accelerator choices: NVIDIA Cloud GPUs and the custom-developed Google Cloud TPUs. Both types of accelerator are application-specific integrated circuits (ASICs) that are used to process machine learning workloads more efficiently than standard processors. There are some important storage differences between Cloud GPUs and Cloud TPU accelerators. Instances that use Cloud GPUs support Local SSD with up to 200 GBps remote storage throughput available. Cloud TPU nodes and VMs don't support Local SSD, and rely exclusively on remote storage access. For more information about accelerator-optimized machine types, see Accelerator-optimized machine family. For more information about Cloud GPUs, see Cloud GPUs platforms. For more information about Cloud TPUs, see Introduction to Cloud TPU. For more information about choosing between Cloud TPUs and Cloud GPUs, see When to use Cloud TPUs. Storage options As summarized previously in Table 1, use object storage or file storage with your AI and ML workloads and then supplement this storage option with block storage. 
Figure 2 shows three typical options that you can consider when selecting the initial storage choice for your AI and ML workload: Cloud Storage, Filestore, and Google Cloud NetApp Volumes. Figure 2: AI and ML appropriate storage services offered by Google Cloud If you need object storage, choose Cloud Storage. Cloud Storage provides the following: A storage location for unstructured data and objects. APIs, such as the Cloud Storage JSON API, to access your storage buckets. Persistent storage to save your data. Throughput of terabytes per second, but requires higher storage latency. If you need file storage, you have two choices–Filestore and NetApp Volumes–which offer the following: Filestore Enterprise, high-performance file storage based on NFS. Persistent storage to save your data. Low storage latency, and throughput of 26 GBps. NetApp Volumes File storage compatible with NFS and Server Message Block (SMB). Can be managed with the option to use NetApp ONTAP storage-software tool. Persistent storage to save your data. Throughput of 4.5 GBps. Use the following storage options as your first choice for AI and ML workloads: Cloud Storage Filestore Use the following storage options to supplement your AI and ML workloads: Google Cloud NetApp Volumes Block storage If you need to transfer data between these storage options, you can use the data transfer tools. Cloud Storage Cloud Storage is a fully managed object storage service that focuses on data preparation, AI model training, data serving, backup, and archiving for unstructured data. Some of the benefits of Cloud Storage include the following: Unlimited storage capacity that scales to exabytes on a global basis Ultra-high throughput performance Regional and dual-region storage options for AI and ML workloads Cloud Storage scales throughput to terabytes per second and beyond, but it has relatively higher latency (tens of milliseconds) than Filestore or a local file system. Individual thread throughput is limited to approximately 100-200 MB per second, which means that high throughput can only be achieved by using hundreds to thousands of individual threads. Additionally, high throughput also requires the use of large files and large I/O requests. Cloud Storage supports client libraries in a variety of programming languages, but it also supports Cloud Storage FUSE. Cloud Storage FUSE lets you mount Cloud Storage buckets to your local file system. Cloud Storage FUSE enables your applications to use standard file system APIs to read from a bucket or write to a bucket. You can store and access your training data, models, and checkpoints with the scale, affordability, and performance of Cloud Storage. Note: Cloud Storage FUSE doesn't support full Portable Operating System Interface (POSIX) semantics. For example, concurrent file writes are not supported. To learn more about Cloud Storage, use the following resources: Product overview of Cloud Storage Request rate and access distribution guidelines Cloud Storage FUSE Cloud Storage FUSE performance Filestore Filestore is a fully managed NFS file-based storage service. The Filestore service tiers used for AI and ML workloads include the following: Enterprise tier: Used for mission-critical workloads requiring regional availability. Zonal tier: Used for high-performance applications that require zonal availability with high IOPS and throughput performance requirements. Basic tier: Used for file sharing, software development, web hosting, and basic AI and ML workloads. 
Filestore delivers low latency I/O performance. It's a good choice for datasets with either small I/O access requirements or small files. However, Filestore can also handle large I/O or large file use cases as needed. Filestore can scale up to approximately 100 TB in size. For AI training workloads that read data repeatedly, you can improve read throughput by using FS-Cache with Local SSD. For more information about Filestore, see the Filestore overview. For more information about Filestore service tiers, see Service tiers. For more information about Filestore performance, see Optimize and test instance performance. Google Cloud NetApp Volumes NetApp Volumes is a fully managed service with advanced data management features that support NFS, SMB, and multiprotocol environments. NetApp Volumes supports low latency, multi-tebibyte volumes, and gigabytes per second of throughput. For more information about NetApp Volumes, see What is Google Cloud NetApp Volumes? For more information about NetApp Volumes performance, see Expected performance. Block storage After you select your primary storage choice, you can use block storage to supplement performance, transfer data between storage options, and take advantage of low latency operations. You have two storage options with block storage: Local SSD and Persistent Disk. Local SSD Local SSD provides local storage directly to a VM or a container. Most Google Cloud machine types that contain Cloud GPUs include some amount of Local SSD. Because Local SSD disks are attached physically to the Cloud GPUs, they provide low latency access with potentially millions of IOPS. In contrast, Cloud TPU-based instances don't include Local SSD. Although Local SSD delivers high performance, each storage instance is ephemeral. Thus, the data stored on a Local SSD drive is lost when you stop or delete the instance. Because of the ephemeral nature of Local SSD, consider other types of storage when your data requires better durability. However, when the amount of training data is very small, it's common to copy the training data from Cloud Storage to the Local SSD of a GPU. The reason is that Local SSD provides lower I/O latency and reduces training time. For more information about Local SSD, see About Local SSDs. For more information about the amount of Local SSD capacity available with Cloud GPUs instance types, see GPU platforms. Persistent Disk Persistent Disk is a network block storage service with a comprehensive suite of data persistence and management capabilities. In addition to its use as a boot disk, you can use Persistent Disk with AI workloads, such as scratch storage. Persistent Disk is available in the following options: Standard, which provides efficient and reliable block storage. Balanced, which provides cost-effective and reliable block storage. SSD, which provides fast and reliable block storage. Extreme, which provides the highest performance block storage option with customizable IOPS. For more information about Persistent Disk, see Persistent Disk. Data transfer tools When you perform AI and ML tasks, there are times when you need to copy your data from one location to another. For example, if your data starts in Cloud Storage, you might move it elsewhere to train the model, then copy the checkpoint snapshots or trained model back to Cloud Storage. You could also perform most of your tasks in Filestore, then move your data and model into Cloud Storage for archive purposes. 
This section discusses your options for moving data between storage services in Google Cloud. Storage Transfer Service With the Storage Transfer Service, you can transfer your data between Cloud Storage, Filestore, and NetApp Volumes. This fully-managed service also lets you copy data between your on-premises file storage and object storage repositories, your Google Cloud storage, and from other cloud providers. The Storage Transfer Service lets you copy your data securely from the source location to the target location, as well as perform periodic transfers of changed data. It also provides data integrity validation, automatic retries, and load balancing. For more information about Storage Transfer Service, see What is Storage Transfer Service? Command-line interface When you move data between Filestore and Cloud Storage, you should use the Google Cloud CLI. The gcloud CLI lets you create and manage Cloud Storage buckets and objects with optimal throughput and a full suite of commands. Map your storage choices to the AI and ML stages This section expands upon the summary provided in Table 1 to explore the specific recommendations and guidance for each stage of an AI and ML workload. The goal is to help you understand the rationale for these choices and select the best storage options for each AI and ML stage. This analysis results in three primary recommendations that are explored in the section, Storage recommendations for AI and ML. The following figure provides a decision tree that shows the recommended storage options for the four main stages of an AI and ML workload. The diagram is followed by a detailed explanation of each stage and the choices that you can make at each stage. Figure 3: Storage choices for each AI and ML stage Prepare At this initial stage, you need to select whether you want to use Cloud Storage or Filestore as your persistent source of truth for your data. You can also select potential optimizations for data-intensive training. Know that different teams in your organization can have varying workload and dataset types that might result in those teams making different storage decisions. To accommodate these varied needs, you can mix and match your storage choices between Cloud Storage and Filestore accordingly. Cloud Storage for the prepare stage Your workload contains large files of 50 MB or more. Your workload requires lower IOPS. Your workload can tolerate higher storage latency in the tens of milliseconds. You need to gain access to the dataset through Cloud Storage APIs, or Cloud Storage FUSE and a subset of file APIs. To optimize your workload in Cloud Storage, you can select regional storage and place your bucket in the same region as your compute resources. However, if you need higher reliability, or if you use accelerators located in two different regions, you'll want to select dual-region storage. Filestore for the prepare stage You should select Filestore to prepare your data if any of the following conditions apply: Your workload contains smaller file sizes of less than 50 MB. Your workload requires higher IOPS. Your workload needs lower latency of less than 1 millisecond to meet storage requirements for random I/O and metadata access. Your users need a desktop-like experience with full POSIX support to view and manage the data. Your users need to perform other tasks, such as software development. 
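The prepare-stage criteria above can be expressed as a simple rule of thumb. The following Python sketch only encodes the thresholds mentioned in this guide (50 MB file sizes, sub-millisecond latency, POSIX needs, and Filestore's roughly 100 TB ceiling); it's an illustration, not an official sizing tool.

```python
# Illustrative helper that encodes the prepare-stage rules of thumb from
# this section. Thresholds mirror the guidance above; adjust them for
# your workload. This is a sketch, not an official Google Cloud tool.
def recommend_prepare_storage(avg_file_mb, needs_high_iops=False,
                              needs_sub_ms_latency=False,
                              needs_posix=False, dataset_tb=1.0):
    # Filestore scales to roughly 100 TB, so larger datasets point to
    # Cloud Storage regardless of the other criteria.
    if dataset_tb > 100:
        return "Cloud Storage"
    if (avg_file_mb < 50 or needs_high_iops
            or needs_sub_ms_latency or needs_posix):
        return "Filestore"
    return "Cloud Storage"


print(recommend_prepare_storage(avg_file_mb=200))                   # Cloud Storage
print(recommend_prepare_storage(avg_file_mb=5, needs_posix=True))   # Filestore
print(recommend_prepare_storage(avg_file_mb=5, dataset_tb=500))     # Cloud Storage
```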
Other considerations for the prepare stage If you find it hard to choose an option at this stage, consider the following points to help you make your decision: If you want to use other AI and ML frameworks, such as Dataflow, Spark, or BigQuery on the dataset, then Cloud Storage is a logical choice because of the custom integration it has with these types of frameworks. Filestore has a maximum capacity of approximately 100 TB. If you need to train your model with datasets larger than this, or if you can't break the set into multiple 100 TB instances, then Cloud Storage is a better option. During the data preparation phase, many users reorganize their data into large chunks to improve access efficiency and avoid random read requests. To further reduce the I/O performance requirements on the storage system, many users use pipelining, training optimization to increase the number of I/O threads, or both. Train At the train stage, you typically reuse the primary storage option that you selected for the prepare stage. If your primary storage choice can't handle the training workload alone, you might need to supplement the primary storage. You can add supplemental storage as needed, such as Local SSDs, to balance the workload. In addition to providing recommendations for using either Cloud Storage or Filestore at this stage, this section also provides you with more details about these recommendations. The details include the following: Guidance for file sizes and request sizes Suggestions on when to supplement your primary storage choice An explanation of the implementation details for the two key workloads at this stage—data loading, and checkpointing and restart Cloud Storage for the train stage The main reasons to select Cloud Storage when training your data include the following: If you use Cloud Storage when you prepare your data, it's best to train your data in Cloud Storage. Cloud Storage is a good choice for throughput, workloads that don't require high single-VM throughput, or workloads that use many threads to increase throughput as needed. Cloud Storage with Local SSD or Filestore for the train stage The main reason to select Cloud Storage with Local SSD or Filestore when training your data occurs when you need to support small I/O requests or small files. In this case, you can supplement your Cloud Storage training task by moving some of the data to Local SSD or Filestore Zonal. Filestore for the train stage The main reasons to select Filestore when training your data include the following: If you use Filestore when you prepare your data, in most cases, you should continue to train your data in Filestore. Filestore is a good choice for low latency, high per-client throughput, and applications that use a low number of threads but still require high performance. If you need to supplement your training tasks in Filestore, consider creating a Local SSD cache as needed. File sizes and request sizes Once the dataset is ready for training, there are two main options that can help you evaluate the different storage options. Data sets containing large files and accessed with large request sizes Data sets containing small-to-medium sized files, or accessed with small request sizes Data sets containing large files and accessed with large request sizes In this option, the training job consists primarily of larger files of 50 MB or more. The training job ingests the files with 1 MB to 16 MB per request. 
We generally recommend Cloud Storage with Cloud Storage FUSE for this option because the files are large enough that Cloud Storage should be able to keep the accelerators supplied. Keep in mind that you might need hundreds to thousands of threads to achieve maximum performance with this option. However, if you require full POSIX APIs for other applications, or your workload isn't appropriate for the high number of required threads, then Filestore is a good alternative. Data sets containing small-to-medium sized files, or accessed with small request sizes With this option, you can classify your training job in one of two ways: Many small-to-medium sized files of less than 50 MB. A dataset with larger files, but the data is read sequentially or randomly with relatively small read request sizes (for example, less than 1 MB). An example of this use case is when the system reads less than 100 KB at a time from a multi-gigabyte or multi-terabyte file. If you already use Filestore for its POSIX capabilities, then we recommend keeping your data in Filestore for training. Filestore offers low I/O latency access to the data. This lower latency can reduce the overall training time and might lower the cost of training your model. If you use Cloud Storage to store your data, then we recommend that you copy your data to Local SSD or Filestore prior to training. Data loading During data loading, Cloud GPUs or Cloud TPUs import batches of data repeatedly to train the model. This phase can be cache friendly, depending on the size of the batches and the order in which you request them. Your goal at this point is to train the model with maximum efficiency but at the lowest cost. If the size of your training data scales to petabytes, the data might need to be re-read multiple times. Such a scale requires intensive processing by a GPU or TPU accelerator. However, you need to ensure that your Cloud GPUs and Cloud TPUs aren't idle, but process your data actively. Otherwise, you pay for an expensive, idle accelerator while you copy the data from one location to another. For data loading, consider the following: Parallelism: There are numerous ways to parallelize training, and each can have an impact on the overall storage performance required and the necessity of caching data locally on each instance. Maximum number of Cloud GPUs or Cloud TPUs for a single training job: As the number of accelerators and VMs increases, the impact on the storage system can be significant and might result in increased costs if the Cloud GPUs or Cloud TPUs are idle. However, there are ways to minimize costs as you increase the number of accelerators. Depending on the type of parallelism that you use, you can minimize costs by increasing the aggregate read throughput requirements that are needed to avoid idle accelerators. To support these improvements in either Cloud Storage or Filestore, you need to add Local SSD to each instance so that you can offload I/O from the overloaded storage system. However, preloading data into each instance's Local SSD from Cloud Storage has its own challenges. You risk incurring increased costs for the idle accelerators while the data is being transferred. If your data transfer times and accelerator idle costs are high, you might be able to lower costs by using Filestore with Local SSD instead. Number of Cloud GPUs per instance: When you deploy more Cloud GPUs to each instance, you can increase the inter-Cloud GPUs throughput with NVLink. 
However, the available Local SSD and storage networking throughput doesn't always increase linearly. Storage and application optimizations: Storage options and applications have specific performance requirements to be able to run optimally. Be sure to balance these storage and application system requirements with your data loading optimizations, such as keeping your Cloud GPUs or Cloud TPUs busy and operating efficiently. Checkpointing and restart For checkpointing and restart, training jobs need to periodically save their state so they can recover quickly from instance failures. When the failure happens, jobs must restart, ingest the latest checkpoint, and then resume training. The exact mechanism used to create and ingest checkpoints is typically specific to a framework, such as TensorFlow or PyTorch. Some users have built complex frameworks to increase the efficiency of checkpointing. These complex frameworks allow them to perform a checkpoint more frequently. However, most users typically use shared storage, such as Cloud Storage or Filestore. When saving checkpoints, you only need to save three to five checkpoints at any one point in time. Checkpoint workloads tend to consist of mostly writes, several deletes, and, ideally, infrequent reads when failures occur. During recovery, the I/O pattern includes intensive and frequent writes, frequent deletes, and frequent reads of the checkpoint. You also need to consider the size of the checkpoint that each GPU or TPU needs to create. The checkpoint size determines the write throughput that is required to complete the training job in a cost-effective and timely manner. To minimize costs, consider increasing the following items: The frequency of checkpoints The aggregate write throughput that is required for checkpoints Restart efficiency Serve When you serve your model, which is also known as AI inference, the primary I/O pattern is read-only to load the model into Cloud GPUs or Cloud TPU memory. Your goal at this stage is to run your model in production. The model is much smaller than the training data, so you can replicate and scale the model across multiple instances. High availability and protection against zonal and regional failures are important at this stage, so you must ensure that your model is available for a variety of failure scenarios. For many generative AI use cases, input data to the model might be quite small and might not need to be stored persistently. In other cases, you might need to run large volumes of data over the model (for example, scientific datasets). In this case, you need to select an option that can keep the Cloud GPUs or Cloud TPUs supplied during the analysis of the dataset, as well as select a persistent location to store the inference results. There are two primary choices when you serve your model. Cloud Storage for the serve stage The main reasons to select Cloud Storage when serving your data include the following: When you train your model in Cloud Storage, you can save on migration costs by leaving the model in Cloud Storage when you serve it. You can save your generated content in Cloud Storage. Cloud Storage is a good choice when AI inferencing occurs in multiple regions. You can use dual-region and multi-region buckets to provide model availability across regional failures. 
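As a sketch of the Cloud Storage serving pattern just described, the following Python snippet downloads a model's artifacts from a bucket to local disk at instance startup, before the model is loaded into Cloud GPUs or Cloud TPU memory. The bucket name, prefix, and paths are placeholders, and the snippet assumes the google-cloud-storage client library.

```python
# Sketch: pull model artifacts from Cloud Storage to local disk at startup,
# then hand the local copy to your serving framework. The bucket, prefix,
# and paths are placeholders; assumes the google-cloud-storage library.
import os

from google.cloud import storage


def download_model(bucket_name, prefix, local_dir):
    client = storage.Client()
    os.makedirs(local_dir, exist_ok=True)
    for blob in client.list_blobs(bucket_name, prefix=prefix):
        filename = os.path.basename(blob.name)
        if not filename:  # skip "directory" placeholder objects
            continue
        blob.download_to_filename(os.path.join(local_dir, filename))
    return local_dir


if __name__ == "__main__":
    model_dir = download_model("my-models-bucket", "resnet/v3/", "/tmp/model")
    # Load model_dir with your serving framework, and write any inference
    # results that need to persist back to Cloud Storage.
```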
Filestore for the serve stage The main reasons to select Filestore when serving your data include the following: When you train your model in Filestore, you can save on migration costs by leaving the model in Filestore when you serve it. Because its service level agreement (SLA) provides 99.99% availability, the Filestore Enterprise service tier is a good choice for high availability when you want to serve your model between multiple zones in a region. Note: For more information about region-specific considerations, see Geography and regions. The Filestore Zonal service tiers might be a reasonable lower-cost choice, but only if high availability is not a requirement for your AI and ML workload. If you require cross-region recovery, you can store the model in a remote backup location or a remote Cloud Storage bucket, and then restore the model as needed. Filestore offers a durable and highly available option that gives low latency access to your model when you generate small files or require file APIs. Archive The archive stage has an I/O pattern of "write once, read rarely." Your goal is to store the different sets of training data and the different versions of models that you generated. You can use these incremental versions of data and models for backup and disaster recovery purposes. You must also store these items in a durable location for a long period of time. Although you might not require access to the data and models very often, you do want these items to be available when you need them. Because of its extreme durability, expansive scale, and low cost, the best option for storing object data over a long period of time is Cloud Storage. Depending on the frequency of when you access the dataset, model, and backup files, Cloud Storage offers cost optimization through different storage classes with the following approaches: Place your frequently accessed data in Standard storage. Keep data that you access monthly in Nearline storage. Store data that you access every three months in Coldline storage. Preserve data that you access once a year in Archive storage. Using object lifecycle management, you can create policies to move data to colder storage classes or to delete data based on specific criteria. If you're not sure how often you'll access your data, you can use the Autoclass feature to move data between storage classes automatically, based on your access pattern. If your data is in Filestore, moving the data to Cloud Storage for archive purposes often makes sense. However, you can provide additional protection for your Filestore data by creating Filestore backups in another region. You can also take Filestore snapshots for local file and file system recovery. For more information about Filestore backups, see Backups overview. For more information about Filestore snapshots, see Snapshots overview. Storage recommendations for AI and ML This section summarizes the analysis provided in the previous section, Map your storage choices to the AI and ML stages. It provides details about the three primary storage option combinations that we recommend for most AI and ML workloads. The three options are as follows: Select Cloud Storage Select Cloud Storage and Local SSD or Filestore Select Filestore and Optional Local SSD Select Cloud Storage Cloud Storage provides the lowest cost-per-capacity storage offering when compared to all other storage offerings. 
It scales to large numbers of clients, provides regional and dual-region accessibility and availability, and can be accessed through Cloud Storage FUSE. You should select regional storage when your compute platform for training is in the same region, and choose dual-region storage if you need higher reliability or use Cloud GPUs or Cloud TPUs located in two different regions. Cloud Storage is the best choice for long term data retention, and for workloads with lower storage performance requirements. However, other options such as Filestore and Local SSD are valuable alternatives in specific cases where you require full POSIX support or Cloud Storage becomes a performance bottleneck. Select Cloud Storage with Local SSD or Filestore For data-intensive training or checkpoint and restart workloads, it can make sense to use a faster storage offering during the I/O intensive training phase. Typical choices include copying the data to a Local SSD or Filestore. This action reduces the overall job runtime by keeping the Cloud GPUs or Cloud TPUs supplied with data and prevents the instances from stalling while checkpoint operations complete. In addition, the more frequently you create checkpoints, the more checkpoints you have available as backups. This increase in the number of backups also increases the overall rate at which the useful data arrives (also known as goodput). This combination of optimizing the processors and increasing goodput lowers the overall costs of training your model. There are trade-offs to consider when using Local SSD or Filestore. The following section describes some advantages and disadvantages for each. Local SSD advantages High throughput and IOPS once the data has been transferred Low to minimal extra cost Local SSD disadvantages Cloud GPUs or Cloud TPUs remain idle while the data loads. Data transfer must happen on every job for every instance. Is only available for some Cloud GPUs instance types. Provides limited storage capacity. Supports checkpointing, but you must manually transfer the checkpoints to a durable storage option such as Cloud Storage. Filestore advantages Provides shared NFS storage that enables data to be transferred once and then shared across multiple jobs and users. There is no idle Cloud GPUs or Cloud TPUs time because the data is transferred before you pay for the Cloud GPUs or Cloud TPUs. Has a large storage capacity. Supports fast checkpointing for thousands of VMs. Supports Cloud GPUs, Cloud TPUs, and all other Compute Engine instance types. Filestore disadvantages High upfront cost, but the increased compute efficiency has the potential to reduce the overall training costs. Select Filestore with optional Local SSD Filestore is the best choice for AI and ML workloads that need low latency and full POSIX support. Beyond being the recommended choice for small file or small I/O training jobs, Filestore can deliver a responsive experience for AI and ML notebooks, software development, and many other applications. You can also deploy Filestore in a zone for high performance training and persistent storage of checkpoints. Deploying Filestore in a zone also offers fast restart upon failure. Alternatively, you can deploy Filestore regionally to support highly available inference jobs. The optional addition of FS-Cache to support Local SSD caching enables fast repeated reads of training data to optimize workloads. Note: Cloud Storage typically has a lower storage cost.
However, when compared to Filestore, Cloud Storage might be more expensive because of the higher latency and slower training times required to access compute resources. What's next For more information on storage options and AI and ML, see the following resources: Design an optimal storage strategy for your cloud workload Product overview of Cloud Storage Cloud Storage FUSE Filestore overview About Local SSDs Storage Transfer Service overview Introduction to Vertex AI Extending network reachability of Vertex AI Pipelines Video -- Access larger dataset faster and easier to accelerate your ML models training in Vertex AI | Google Cloud Cloud Storage as a File System in AI Training Reading and storing data for custom model training on Vertex AI | Google Cloud Blog For an overview of architectual principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. ContributorsAuthors: Dean Hildebrand | Technical Director, Office of the CTOSean Derrington | Group Outbound Product Manager, StorageRichard Hendricks | Architecture Center StaffOther contributor: Kumar Dhanagopal | Cross-Product Solution Developer Send feedback \ No newline at end of file diff --git a/Detect_potential_failures_by_using_observability.txt b/Detect_potential_failures_by_using_observability.txt new file mode 100644 index 0000000000000000000000000000000000000000..229d2beae862be8c69a82978af2327854b837098 --- /dev/null +++ b/Detect_potential_failures_by_using_observability.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/observability +Date Scraped: 2025-02-23T11:43:31.795Z + +Content: +Home Docs Cloud Architecture Center Send feedback Detect potential failures by using observability Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you proactively identify areas where errors and failures might occur. This principle is relevant to the observation focus area of reliability. Principle overview To maintain and improve the reliability of your workloads in Google Cloud, you need to implement effective observability by using metrics, logs, and traces. Metrics are numerical measurements of activities that you want to track for your application at specific time intervals. For example, you might want to track technical metrics like request rate and error rate, which can be used as service-level indicators (SLIs). You might also need to track application-specific business metrics like orders placed and payments received. Logs are time-stamped records of discrete events that occur within an application or system. The event could be a failure, an error, or a change in state. Logs might include metrics, and you can also use logs for SLIs. A trace represents the journey of a single user or transaction through a number of separate applications or the components of an application. For example, these components could be microservices. Traces help you to track what components were used in the journeys, where bottlenecks exist, and how long the journeys took. Metrics, logs, and traces help you monitor your system continuously. Comprehensive monitoring helps you find out where and why errors occurred. You can also detect potential failures before errors occur. 
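For example, a business metric such as orders placed can be published as a custom metric and then used for alerting or as an SLI. The following Python sketch follows the pattern of the Cloud Monitoring client library (google-cloud-monitoring); the project ID, metric type, and value are placeholders, and it's offered only as an illustration of writing a custom time series.

```python
# Sketch: publish a business metric (for example, orders placed) as a custom
# Cloud Monitoring metric. The project ID, metric type, and value are
# placeholders; assumes the google-cloud-monitoring client library.
import time

from google.cloud import monitoring_v3

project_id = "my-project"  # placeholder
client = monitoring_v3.MetricServiceClient()

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/orders_placed"
series.resource.type = "global"
series.resource.labels["project_id"] = project_id

now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": seconds, "nanos": nanos}}
)
point = monitoring_v3.Point({"interval": interval, "value": {"int64_value": 42}})
series.points = [point]

client.create_time_series(name=f"projects/{project_id}", time_series=[series])
```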
Recommendations To detect potential failures efficiently, consider the recommendations in the following subsections. Gain comprehensive insights To track key metrics like response times and error rates, use Cloud Monitoring and Cloud Logging. These tools also help you to ensure that the metrics consistently meet the needs of your workload. To make data-driven decisions, analyze default service metrics to understand component dependencies and their impact on overall workload performance. To customize your monitoring strategy, create and publish your own metrics by using the Google Cloud SDK. Perform proactive troubleshooting Implement robust error handling and enable logging across all of the components of your workloads in Google Cloud. Activate logs like Cloud Storage access logs and VPC Flow Logs. When you configure logging, consider the associated costs. To control logging costs, you can configure exclusion filters on the log sinks to exclude certain logs from being stored. Optimize resource utilization Monitor CPU consumption, network I/O metrics, and disk I/O metrics to detect under-provisioned and over-provisioned resources in services like GKE, Compute Engine, and Dataproc. For a complete list of supported services, see Cloud Monitoring overview. Prioritize alerts For alerts, focus on critical metrics, set appropriate thresholds to minimize alert fatigue, and ensure timely responses to significant issues. This targeted approach lets you proactively maintain workload reliability. For more information, see Alerting overview. Previous arrow_back Take advantage of horizontal scalability Next Design for graceful degradation arrow_forward Send feedback \ No newline at end of file diff --git a/Detective_controls.txt b/Detective_controls.txt new file mode 100644 index 0000000000000000000000000000000000000000..c9999ebc051b88d401968ed602cad469fc146771 --- /dev/null +++ b/Detective_controls.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations/detective-controls +Date Scraped: 2025-02-23T11:45:31.826Z + +Content: +Home Docs Cloud Architecture Center Send feedback Detective controls Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC Threat detection and monitoring capabilities are provided using a combination of built-in security controls from Security Command Center and custom solutions that let you detect and respond to security events. Centralized logging for security and audit The blueprint configures logging capabilities to track and analyze changes to your Google Cloud resources with logs that are aggregated to a single project. The following diagram shows how the blueprint aggregates logs from multiple sources in multiple projects into a centralized log sink. The diagram describes the following: Log sinks are configured at the organization node to aggregate logs from all projects in the resource hierarchy. Multiple log sinks are configured to send logs that match a filter to different destinations for storage and analytics. The prj-c-logging project contains all the resources for log storage and analytics. Optionally, you can configure additional tooling to export logs to a SIEM. The blueprint uses different log sources and includes these logs in the log sink filter so that the logs can be exported to a centralized destination. The following table describes the log sources. Log source Description Admin Activity audit logs You cannot configure, disable, or exclude Admin Activity audit logs. 
System Event audit logs You cannot configure, disable, or exclude System Event audit logs. Policy Denied audit logs You cannot configure or disable Policy Denied audit logs, but you can optionally exclude them with exclusion filters. Data Access audit logs By default, the blueprint doesn't enable data access logs because the volume and cost of these logs can be high. To determine whether you should enable data access logs, evaluate where your workloads handle sensitive data and consider whether you have a requirement to enable data access logs for each service and environment working with sensitive data. VPC Flow Logs The blueprint enables VPC Flow Logs for every subnet. The blueprint configures log sampling to sample 50% of logs to reduce cost. If you create additional subnets, you must ensure that VPC Flow Logs are enabled for each subnet. Firewall Rules Logging The blueprint enables Firewall Rules Logging for every firewall policy rule. If you create additional firewall policy rules for workloads, you must ensure that Firewall Rules Logging is enabled for each new rule. Cloud DNS logging The blueprint enables Cloud DNS logs for managed zones. If you create additional managed zones, you must enable those DNS logs. Google Workspace audit logging Requires a one-time enablement step that is not automated by the blueprint. For more information, see Share data with Google Cloud services. Access Transparency logs Requires a one-time enablement step that is not automated by the blueprint. For more information, see Enable Access Transparency. The following table describes the log sinks and how they are used with supported destinations in the blueprint. Sink Destination Purpose sk-c-logging-la Logs routed to Cloud Logging buckets with Log Analytics and a linked BigQuery dataset enabled Actively analyze logs. Run ad hoc investigations by using Logs Explorer in the console, or write SQL queries, reports, and views using the linked BigQuery dataset. sk-c-logging-bkt Logs routed to Cloud Storage Store logs long-term for compliance, audit, and incident-tracking purposes. Optionally, if you have compliance requirements for mandatory data retention, we recommend that you additionally configure Bucket Lock. sk-c-logging-pub Logs routed to Pub/Sub Export logs to an external platform such as your existing SIEM. This requires additional work to integrate with your SIEM, such as the following mechanisms: For many tools, third-party integration with Pub/Sub is the preferred method to ingest logs. For Google Security Operations, you can ingest Google Cloud data to Google Security Operations without provisioning additional infrastructure. For Splunk, you can stream logs from Google Cloud to Splunk using Dataflow. For guidance on enabling additional log types and writing log sink filters, see the log scoping tool. Threat monitoring with Security Command Center We recommend that you activate Security Command Center Premium for your organization to automatically detect threats, vulnerabilities, and misconfigurations in your Google Cloud resources. Security Command Center creates security findings from multiple sources including the following: Security Health Analytics: detects common vulnerabilities and misconfigurations across Google Cloud resources. Attack path exposure: shows a simulated path of how an attacker could exploit your high-value resources, based on the vulnerabilities and misconfigurations that are detected by other Security Command Center sources.
Event Threat Detection: applies detection logic and proprietary threat intelligence against your logs to identify threats in near-real time. Container Threat Detection: detects common container runtime attacks. Virtual Machine Threat Detection: detects potentially malicious applications that are running on virtual machines. Web Security Scanner: scans for OWASP Top Ten vulnerabilities in your web-facing applications on Compute Engine, App Engine, or Google Kubernetes Engine. For more information on the vulnerabilities and threats addressed by Security Command Center, see Security Command Center sources. You must activate Security Command Center after you deploy the blueprint. For instructions, see Activate Security Command Center for an organization. After you activate Security Command Center, we recommend that you export the findings that are produced by Security Command Center to your existing tools or processes for triaging and responding to threats. The blueprint creates the prj-c-scc project with a Pub/Sub topic to be used for this integration. Depending on your existing tools, use one of the following methods to export findings: If you use the console to manage security findings directly in Security Command Center, configure folder-level and project-level roles for Security Command Center to let teams view and manage security findings just for the projects for which they are responsible. If you use Google SecOps as your SIEM, ingest Google Cloud data to Google SecOps. If you use a SIEM or SOAR tool with integrations to Security Command Center, share data with Cortex XSOAR, Elastic Stack, ServiceNow, Splunk, or QRadar. If you use an external tool that can ingest findings from Pub/Sub, configure continuous exports to Pub/Sub and configure your existing tools to ingest findings from the Pub/Sub topic. Custom solution for automated log analysis You might have requirements to create alerts for security events that are based on custom queries against logs. Custom queries can help supplement the capabilities of your SIEM by analyzing logs on Google Cloud and exporting only the events that merit investigation, especially if you don't have the capacity to export all cloud logs to your SIEM. The blueprint helps enable this log analysis by setting up a centralized source of logs that you can query using a linked BigQuery dataset. To automate this capability, you must implement the code sample at bq-log-alerting and extend the foundation capabilities. The sample code lets you regularly query a log source and send a custom finding to Security Command Center. The following diagram introduces the high-level flow of the automated log analysis. The diagram shows the following concepts of automated log analysis: Logs from various sources are aggregated into a centralized logs bucket with log analytics and a linked BigQuery dataset. BigQuery views are configured to query logs for the security event that you want to monitor. Cloud Scheduler pushes an event to a Pub/Sub topic every 15 minutes and triggers Cloud Run functions. Cloud Run functions queries the views for new events. If it finds events, it pushes them to Security Command Center as custom findings. Security Command Center publishes notifications about new findings to another Pub/Sub topic. An external tool such as a SIEM subscribes to the Pub/Sub topic to ingest new findings. The sample has several use cases to query for potentially suspicious behavior. 
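To illustrate the query-and-report step in this flow, the following is a minimal hypothetical sketch; it is not the bq-log-alerting sample itself. A function, triggered by the Pub/Sub message that Cloud Scheduler publishes, queries a BigQuery view over the linked dataset and reports each matching row to Security Command Center as a custom finding. The view name, organization ID, source ID, and column names are placeholder assumptions.

# Hypothetical sketch of the query-and-report step, not the bq-log-alerting
# sample itself. The BigQuery view, organization ID, and Security Command
# Center source ID below are placeholders.
import datetime
import hashlib

from google.cloud import bigquery, securitycenter

BQ_VIEW = "prj-c-logging.log_views.admin_login_events"        # placeholder view
SCC_SOURCE = "organizations/123456789012/sources/9876543210"  # placeholder

def report_new_events(event=None, context=None):
    """Entry point invoked when Cloud Scheduler publishes to the Pub/Sub topic."""
    bq = bigquery.Client()
    scc = securitycenter.SecurityCenterClient()
    now = datetime.datetime.now(datetime.timezone.utc)

    # Look back slightly further than the 15-minute schedule to avoid gaps.
    rows = bq.query(
        f"SELECT principal, resource_name, event_time FROM `{BQ_VIEW}` "
        "WHERE event_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 20 MINUTE)"
    ).result()

    for row in rows:
        # Derive a stable finding ID so that reruns don't create duplicates.
        finding_id = hashlib.sha256(
            f"{row.principal}{row.event_time.isoformat()}".encode()
        ).hexdigest()[:32]
        scc.create_finding(
            request={
                "parent": SCC_SOURCE,
                "finding_id": finding_id,
                "finding": {
                    "state": securitycenter.Finding.State.ACTIVE,
                    "resource_name": row.resource_name,
                    "category": "CUSTOM_ADMIN_LOGIN",
                    "event_time": now,
                },
            }
        )

The use cases that ship with the sample follow this same query-then-report pattern.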
Examples include a login from a list of super admins or other highly privileged accounts that you specify, changes to logging settings, or changes to network routes. You can extend the use cases by writing new query views for your requirements. Write your own queries or reference security log analytics for a library of SQL queries to help you analyze Google Cloud logs. Custom solution to respond to asset changes To respond to events in real time, we recommend that you use Cloud Asset Inventory to monitor asset changes. In this custom solution, an asset feed is configured to trigger notifications to Pub/Sub about changes to resources in real time, and then Cloud Run functions runs custom code to enforce your own business logic based on whether the change should be allowed. The blueprint has an example of this custom governance solution that monitors for IAM changes that add highly sensitive roles including Organization Admin, Owner, and Editor. The following diagram describes this solution. The previous diagram shows these concepts: Changes are made to an allow policy. The Cloud Asset Inventory feed sends a real-time notification about the allow policy change to Pub/Sub. Pub/Sub triggers a function. Cloud Run functions runs custom code to enforce your policy. The example function has logic to assess if the change has added the Organization Admin, Owner, or Editor roles to an allow policy. If so, the function creates a custom security finding and sends it to Security Command Center. Optionally, you can use this model to automate remediation efforts. Write additional business logic in Cloud Run functions to automatically take action on the finding, such as reverting the allow policy to its previous state. In addition, you can extend the infrastructure and logic used by this sample solution to add custom responses to other events that are important to your business. What's next Read about preventative controls (next document in this series). Send feedback \ No newline at end of file diff --git a/DevOps_Best_Practices.txt b/DevOps_Best_Practices.txt new file mode 100644 index 0000000000000000000000000000000000000000..d3a1865c1a5451bfdcb202a6af2de14cb3dca244 --- /dev/null +++ b/DevOps_Best_Practices.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/devops +Date Scraped: 2025-02-23T11:58:20.163Z + +Content: +DevOpsTake a deep dive into DevOps, the organizational and cultural movement that aims to increase software delivery velocity, improve service reliability, and build shared ownership among software stakeholders.Take the quick checkReview 2021 DevOps AwardsAccelerate State of DevOps Report 2023Register to read the 2023 reportBenefitsIncrease your DevOps flow through improved software delivery and operationsIncrease the speed of your deploymentsThe best teams deploy 973x more frequently and have lead times 6750x faster when compared to low performers.Improve the stability of your softwareHigh performers don’t trade off speed and stability. 
The best teams recover from incidents 6570x faster and have change fail rates 3x lower.Build security in from the startHigh performers spend 50% less time fixing security issues compared to low performers.Source: 2017, 2019, 2021 State of DevOps ReportsKey featuresImprove your technical and cultural capabilities to drive improved performanceLearn how to improve the speed, stability, availability, and security of your software delivery capability.Explore our research programFor nearly a decade, Google Cloud’s DevOps Research and Assessment (DORA) team has collected insights from 36,000+ professionals. This research has validated technical, process, and cultural capabilities that drive higher software delivery and organizational performance. Explore DORA’s research program and discover these capabilities, how to implement them, and how to overcome common obstacles.Read DORA’s State of DevOps reports and DevOps ROI whitepaperDORA’s research reports give readers an in-depth understanding of practices and capabilities that drive performance, and how the industry continues to evolve. Read the 2023 report as well as reports from previous years. Learn how to forecast the value of DevOps transformations with our ROI whitepaper.Take the DORA DevOps Quick CheckMeasure your team's software delivery performance and compare it to the rest of the industry with the quick check.Measure your software delivery performance with the Four KeysDORA’s research identified four key metrics that indicate software delivery performance. Use our Four Keys open source project to gather and display this key DevOps performance data from your GitHub or GitLab repos. Measure your software delivery performance and track it over time.Ready to get started? Contact usSee how DevOps can help your companyWant to get better at software delivery? 
Start by taking our quick check and learning about our research program.VideoCreating a better team cultureWatch videoVideoSecuring your code without slowing downWatch videoVideoCreating reliability with your customersWatch videoCustomersSee how customers are improving their DevOps practices with DORAVideoAccelerating DevOps with DORAVideo (2:40)VideoUtilizing IT operations to drive informed business decisionsVideo (2:59)Case studyCapital One drives continuous delivery improvement with insights from DORA6-min readVideoVendasta uses DORA research and Google Cloud to drive faster software deliveryVideo(8:42)See all customersPartnersOur partnersOur knowledgeable partners are ready to step in and help address your DevOps challenges.See all partnersRelated servicesDevOps products and integrationsBuild and deploy new cloud applications, store artifacts, and monitor app security and reliability on Google Cloud.Cloud BuildDefine custom workflows for building, testing, and deploying across multiple environments.Artifact RegistryStore, manage, and secure your container images and language packages.Binary AuthorizationEnsure only trusted container images are deployed on Google Kubernetes Engine.TektonOpen source framework for creating continuous integration and delivery (CI/CD) systems.Google Cloud DeployFully managed continuous delivery for Google Kubernetes Engine with built-in metrics, approvals, and security.ObservabilityMonitor, troubleshoot, and improve infrastructure and app performance.DocumentationExplore common use cases for DevOpsBest PracticeDevOps capabilitiesExplore the technical, process, and cultural capabilities, which drive higher software delivery and organizational performance.Learn moreQuickstartGetting started with Cloud BuildUse Cloud Build to build a Docker image and push the image to Container Registry.Learn moreQuickstartGetting started with Artifact RegistryLearn about Artifact Registry, a universal package manager for all your build artifacts and dependencies. Try the Docker quickstart for an example of what it can do.Learn moreQuickstartGetting started with Cloud MonitoringUse Cloud Monitoring for visibility into the performance, availability, and overall health of your cloud-powered applications.Learn moreBest PracticeJenkins on Google Kubernetes EngineJenkins on Google Kubernetes Engine allows you to improve software delivery performance with continuous integration, continuous delivery, and automated deployment.Learn moreBest PracticeDefining SLOsService level objectives (SLOs) help teams define a target level of reliability. SLOs allow teams to monitor for business decisions and to experiment safely.Learn moreTutorialCreating continuous delivery pipelinesUse Google Cloud Deploy and Google Kubernetes Engine to create continuous delivery pipelines, allowing for change approvals and automated deployments and rollbacks.Learn moreTutorialCloud Monitoring metric exportExport metrics to BigQuery for long-term analysis, allowing you to improve monitoring and observability, monitor for business decisions, and create visual displays.Learn moreBest PracticeUsing Terraform with Google CloudProvision Google Cloud resources declaratively with Terraform.Learn moreNot seeing what you’re looking for?View documentationWhat's newSee the latest from DevOpsSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoLearn about the ROI of DevOps TransformationWatch videoBlog postAre you an Elite DevOps performer? 
Find out with the Four Keys projectRead the blogBlog post2023 State of DevOps Report: Culture is everythingRead the blogBlog postIntroducing the Dev(Sec)Ops toolkitRead the blogBlog post2022 State of DevOps Report data deep dive: Documentation is like sunshineRead the blogTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact usWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/DevOps_Research_and_Assessment_(DORA)_capabilities.txt b/DevOps_Research_and_Assessment_(DORA)_capabilities.txt new file mode 100644 index 0000000000000000000000000000000000000000..a0645ca51ac4bf0837f4b9d4c7cf9e5011a76389 --- /dev/null +++ b/DevOps_Research_and_Assessment_(DORA)_capabilities.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/devops +Date Scraped: 2025-02-23T11:47:54.941Z + +Content: +Home Docs Cloud Architecture Center Send feedback DevOps capabilities Stay organized with collections Save and categorize content based on your preferences. The DevOps Research and Assessment (DORA) team has identified and validated a set of capabilities that drive higher software delivery and organizational performance. These articles describe how to implement, improve, and measure these capabilities. Cloud infrastructure Find out how to manage cloud infrastructure effectively so you can achieve higher levels of agility, availability, and cost visibility. Code maintainability Make it easy for developers to find, reuse, and change code, and keep dependencies up-to-date. Continuous delivery Make deploying software a reliable, low-risk process that can be performed on demand at any time. Continuous integration Learn about common mistakes, ways to measure, and how to improve on your continuous integration efforts. Test automation Improve software quality by building reliable automated test suites and performing all kinds of testing throughout the software delivery lifecycle. Database change management Make sure database changes don't cause problems or slow you down. Deployment automation Best practices and approaches for deployment automation and reducing manual intervention in the release process. Empowering teams to choose tools Empower teams to make informed decisions on tools and technologies. Learn how these decisions drive more effective software delivery. Loosely coupled architecture Learn about moving from a tightly coupled architecture to service-oriented and microservice architectures without re-architecting everything at once. Monitoring and observability Learn how to build tooling to help you understand and debug your production systems. Shifting left on security Build security into the software development lifecycle without compromising delivery speed. Test data management Understand the right strategies for managing test data effectively along with approaches to provide fast, secure data access for testing. Trunk-based development Prevent merge-conflict hassles with trunk-based development practices. Version control A guide to implementing the right version control practices for reproducibility and traceability. Customer feedback Drive better organizational outcomes by gathering customer feedback and incorporating it into product and feature design. 
Monitoring systems to inform business decisions Improve monitoring across infrastructure platforms, middleware, and the application tier, so you can provide fast feedback to developers. Proactive failure notification Set proactive failure notifications to identify critical issues and act on problems before they arise. Streamlining change approval Replace heavyweight change-approval processes with peer review, to get the benefits of a more reliable, compliant release process without sacrificing speed. Team experimentation Innovate faster by building empowered teams that can try out new ideas without approval from people outside the team. Visibility of work in the value stream Understand and visualize the flow of work from idea to customer outcome in order to drive higher performance. Visual management Learn about the principles of visual management to promote information sharing, get a common understanding of where the team is, and how to improve. Work in process limits Prioritize work, limit the amount of things that people are working on, and focus on getting a small number of high-priority tasks done. Working in small batches Create shorter lead times and faster feedback loops by working in small batches. Learn common obstacles to this critical capability and how to overcome them. Generative organizational culture Discover how growing a generative, high-trust culture drives better organizational and software delivery performance. Job satisfaction Find out about the importance of ensuring your people have the tools and resources to do their job, and of making good use of their skills and abilities. Learning culture Grow a learning culture and understand its impact on your organizational performance. Transformational leadership Learn how effective leaders influence software delivery performance by driving the adoption of technical and product management capabilities. Send feedback \ No newline at end of file diff --git a/Developer_Center.txt b/Developer_Center.txt new file mode 100644 index 0000000000000000000000000000000000000000..23d34212a73fd1b33ed70085bf777c8a7307d54d --- /dev/null +++ b/Developer_Center.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/developers +Date Scraped: 2025-02-23T12:11:46.589Z + +Content: +Home Developer Center Stay organized with collections Save and categorize content based on your preferences. Register now for Google Cloud Next in Las Vegas April 9-11, 2025, to take advantage of early bird pricing. 
Experience the new way to cloud What's new in Google Cloud and Google Workspace Watch Google Cloud: The new way to cloud Watch Build AI-powered Gemini + Google Workspace solutions Watch Start building with Google Cloud Build intelligent apps using your favorite programming languages leveraging Google Cloud APIs/SDKs and code samples Programming languages APIs/SDKs Code Samples Compute APIs Quickly build and deploy applications using compute APIs such as Cloud Run Admin, Compute Engine, Kubernetes Engine, App Engine Admin APIs, and more Start building arrow_forward Database and storage APIs Discover APIs for Bigtable, Datastore, Spanner, Cloud SQL, Cloud Storage, and Storage transfer to automatically manage your workflows Start building arrow_forward Data analytics APIs Access your favorite data analytics APIs for BigQuery, Dataflow, Dataproc, Pub/Sub, and more Start building arrow_forward Machine learning APIs Use machine learning APIs including AutoML, Vision, Speech-to-Text, Natural Language APIs, and more to accelerate application development Start building arrow_forward Operations APIs Quickly access operations APIs for integrated monitoring, logging, and trace services Start building arrow_forward Security APIs Manage security and identity to meet your policy, regulatory, and business needs Start building arrow_forward Vertex AI Access Vertex AI code samples to build and deploy ML models faster Start building arrow_forward BigQuery Access BigQuery code samples to unlock insights with petabyte-scale analysis and built-in ML Start building arrow_forward Cloud Run Access Cloud Run code samples for containerized apps written in Go, Python, Java, Node.js, .NET, and Ruby Start building arrow_forward GKE Access GKE code samples to get to market faster with the lowest TCO on the most scalable and automated Kubernetes platform Start building arrow_forward Cloud SQL Access Cloud SQL code samples to set up, manage, and administer relational databases such as PostgreSQL, MySQL, and SQL Server Start building arrow_forward Cloud IAM Access Cloud IAM code samples for fine-grained access control and visibility across all your Google Cloud resources Start building arrow_forward Best of Google Cloud handpicked for you If you don't see a suggested topic below, search for more interests here. Video Say goodbye to dashboard delays! Upgrading to a proactive dashboard with AI puts the power of timely insights in your hands. In this video, watch as a Data Engineer streamlines a manufacturing plant's dashboard, giving employees real-time recommendations to optimize operations and stay ahead of the curve. Learning path Learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps analyze customer data and predict product sales. Tutorial Check out this whitepaper: Unlocking gen AI's full potential with operational databases. Your guide to generative AI app development. Product This guide provides a glimpse into the capabilities of Google Cloud Consulting and our commitment to developing solutions that not only solve immediate challenges, but also pave the way for future advancements. Video Troubleshoot with speed! Learn how AI can help users pinpoint and fix issues in a flash. Join a DevOps Engineer in a Gemini-powered rescue mission. Watch them use natural language to uncover the root cause of errors, slashing downtime and restoring order. Learning path Gemini, a generative AI-powered collaborator from Google Cloud, helps engineers manage infrastructure. 
You learn how to prompt Gemini to find and understand application logs, create a GKE cluster, and investigate how to create a build environment. Tutorial Check out this Innovators Live session on Building RAG Applications with Google's AI Powered Databases Product Build enterprise gen AI apps with Google Cloud databases. An increasingly popular approach to this problem is to "ground" LLMs by utilizing a technique called Retrieval Augmented Generation (RAG). Video Say goodbye to dashboard delays! Upgrading to a proactive dashboard with AI puts the power of timely insights in your hands. In this video, watch as a Data Engineer streamlines a manufacturing plant's dashboard, giving employees real-time recommendations to optimize operations and stay ahead of the curve. Learning path Learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps analyze customer data and predict product sales. Tutorial Check out this whitepaper: Unlocking gen AI's full potential with operational databases. Your guide to generative AI app development. Product Bring analytics to your data wherever it resides Video Tired of wrestling with web app deployment and cloud infrastructure? In this video, a Cloud Architect uses Gemini to revolutionize their workflow. Tune in as they demonstrate how to establish the proper organizational patterns for long-term web app success. Learning path Core Infrastructure introduces important concepts and terminology for working with Google Cloud. This course presents and compares many of Google Cloud's computing and storage services, along with important resource and policy management tools. Tutorial Check out this Innovators Live session on Unlocking AI Innovation: Deploying GenAI models on GKE Product Introduction to Cloud TPU which gives an overview of working with Cloud TPUs. Video At Google, we take responsible AI seriously. Discover why responsible AI matters more than ever, and why we care so much about it. Explore Google's 7 AI Principles, a framework designed to guide users toward a safer and more ethical use of AI. Learning path Earn a skill badge by completing the Introduction to Responsible AI on Cloud Skills Boost Tutorial Check out this Innovators Live session on Unlocking AI Innovation: Deploying GenAI models on GKE Product Google has a long history of supporting and powering innovation from startups, and the rise of generative AI has created a tremendous opportunity for these startups to bring new value to businesses through new models, solutions, and applications. Increasingly, they're doing so on Google Cloud. Video Say goodbye to dashboard delays! Upgrading to a proactive dashboard with AI puts the power of timely insights in your hands. In this video, watch as a Data Engineer streamlines a manufacturing plant's dashboard, giving employees real-time recommendations to optimize operations and stay ahead of the curve. Learning path Learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps analyze customer data and predict product sales. Tutorial Check out this Innovators Live session on Unlocking AI Innovation: Deploying GenAI models on GKE Product If you run workloads on Kubernetes, chances are you've experienced a 'cold start': a delay in launching an application that happens when workloads are scheduled to nodes that haven't hosted the workload before and the pods need to spin up from scratch. Video Tired of wrestling with web app deployment and cloud infrastructure? 
In this video, a Cloud Architect uses Gemini to revolutionize their workflow. Tune in as they demonstrate how to establish the proper organizational patterns for long-term web app success. Learning path Description: Gemini, a generative AI-powered collaborator from Google Cloud, helps network engineers create, update, and maintain VPC networks. You learn how to prompt Gemini to provide specific guidance for your networking tasks, beyond what you would receive from a search engine. Tutorial Check out this Innovators Live session on Building RAG Applications with Google's AI Powered Databases Product With the addition of custom target types, you can now create, manage, and observe delivery pipelines for any purpose with Cloud Deploy — whether it be application, infrastructure, AI model deployment or more. Video Experience the power of Google's AI assistant for software development! Check out how a developer utilizes Gemini, to streamline the process of bringing a new web app feature to life. Gemini assists with various tasks, including design collaboration, code generation, troubleshooting, and final impact analysis. Learning path Gemini, a generative AI-powered collaborator from Google Cloud, helps you use Google products and services to develop, test, deploy, and manage applications. With help from Gemini, you learn how to develop and build a web application, fix errors in the application, develop tests, and query data. Tutorial Check out this whitepaper: Unlocking gen AI's full potential with operational databases. Your guide to generative AI app development. Product Bring analytics to your data wherever it resides Video At Google, we take responsible AI seriously. Discover why responsible AI matters more than ever, and why we care so much about it. Explore Google's 7 AI Principles, a framework designed to guide users toward a safer and more ethical use of AI. Learning path Earn a skill badge by completing the Introduction to Responsible AI on Cloud Skills Boost Tutorial Check out this Innovators Live session: Reimagine supply chain management with Generative AI Product Build enterprise gen AI apps with Google Cloud databases. An increasingly popular approach to this problem is to “ground” LLMs by utilizing a technique called Retrieval Augmented Generation (RAG). Video Check out how large language models (LLMs) and generative AI intersect to push the boundaries of possibility. Unlock real-world use cases and learn how the power of a prompt can enhance LLM performance. You'll also explore Google tools to help you learn to develop your own gen AI apps. Learning path explores what large language models (LLM) are, the use cases where they can be utilized, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own Gen AI apps. Tutorial Check out this Innovators Live session on Unlocking AI Innovation: Deploying GenAI models on GKE Product If you run workloads on Kubernetes, chances are you've experienced a "cold start": a delay in launching an application that happens when workloads are scheduled to nodes that haven't hosted the workload before and the pods need to spin up from scratch. Video Troubleshoot with speed! Learn how AI can help users pinpoint and fix issues in a flash. Join a DevOps Engineer in a Gemini-powered rescue mission. Watch them use natural language to uncover the root cause of errors, slashing downtime and restoring order. 
Learning path Gemini, a generative AI-powered collaborator from Google Cloud, helps engineers manage infrastructure. You learn how to prompt Gemini to find and understand application logs, create a GKE cluster, and investigate how to create a build environment. Tutorial Check out this whitepaper: Unlocking gen AI's full potential with operational databases. Your guide to generative AI app development. Product Build enterprise gen AI apps with Google Cloud databases. An increasingly popular approach to this problem is to “ground” LLMs by utilizing a technique called Retrieval Augmented Generation (RAG). Video Say goodbye to dashboard delays! Upgrading to a proactive dashboard with AI puts the power of timely insights in your hands. In this video, watch as a Data Engineer streamlines a manufacturing plant's dashboard, giving employees real-time recommendations to optimize operations and stay ahead of the curve. Learning path Write SQL queries, query public tables, load sample data into BigQuery, troubleshoot common syntax errors with the query validator in BigQuery, and create reports in Looker Studio by connecting to BigQuery data. Tutorial Check out this whitepaper: Unlocking gen AI's full potential with operational databases. Your guide to generative AI app development. Product Build a Data Warehouse with BigQuery Video Discover how Gemini in BigQuery can unlock new levels of efficiency and productivity. Learning path We describe Vertex AI AutoML and how to build, train, and deploy an ML model without writing a single line of code, and help you understand the benefits of BigQuery ML.. Tutorial Redefining Productivity in the New Era of Gen AI Product Bring analytics to your data wherever it resides Video Unleash your network knowledge! AI is your handy tool for working with complex configurations. In this video, a Network Engineer utilizes Gemini for Google Cloud to tackle VPCs, dual-stack subnets, and on-premises connections. Get ready to level up your networking game! Learning path Gemini, a generative AI-powered collaborator from Google Cloud, helps network engineers create, update, and maintain VPC networks. You learn how to prompt Gemini to provide specific guidance for your networking tasks, beyond what you would receive from a search engine. Tutorial Check out this Innovators Live session: Accelerate Your AI Development with Google Cloud's Data Platform Product With the addition of custom target types, you can now create, manage, and observe delivery pipelines for any purpose with Cloud Deploy — whether it be application, infrastructure, AI model deployment or more. Video Upgrade your cyber defenses! With AI as your sidekick, finding and fixing vulnerabilities can be much simpler. In this video, a Cybersecurity Analyst harnesses Gemini for Google Cloud to become a threat-detector, safeguarding their cloud from relentless attacks. Learning path Explore the essentials of cybersecurity, including the security lifecycle, digital transformation, and key cloud computing concepts. Tutorial Check out this Innovators Live session on Unlocking AI Innovation: Deploying GenAI models on GKE Product Google Cloud's Organization Policy Service can help you control resource configurations and establish guardrails in your cloud environment. Video Get started with Google Cloud. Watch along as Googlers tell you about their top life hacks for getting started with Google Cloud. 
Learning path Learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps analyze customer data and predict product sales. Tutorial Check out this Innovators Live session: Accelerate Your AI Development with Google Cloud's Data Platform Product Build enterprise gen AI apps with Google Cloud databases. An increasingly popular approach to this problem is to “ground” LLMs by utilizing a technique called Retrieval Augmented Generation (RAG). Video Get started with Google Cloud. Watch along as Googlers tell you about their top life hacks for getting started with Google Cloud. Learning path Learn how Gemini, a generative AI-powered collaborator from Google Cloud, helps analyze customer data and predict product sales. Tutorial Check out this Innovators Live session: Accelerate Your AI Development with Google Cloud's Data Platform Product Introduction to Cloud TPU which gives an overview of working with Cloud TPUs. Video Dive into the world of event-driven architectures (EDAs) and discover how they can revolutionize your software applications. In this video, explore the key concepts of EDAs, their benefits, and how to effectively implement them using Google Cloud's powerful suite of tools. Learning path Core Infrastructure introduces important concepts and terminology for working with Google Cloud. This course presents and compares many of Google Cloud's computing and storage services, along with important resource and policy management tools. Tutorial Check out this Innovators Live session: Accelerate Your AI Development with Google Cloud's Data Platform Product Bring analytics to your data wherever it resides Video Dive into the world of event-driven architectures (EDAs) and discover how they can revolutionize your software applications. In this video, explore the key concepts of EDAs, their benefits, and how to effectively implement them using Google Cloud's powerful suite of tools. Learning path Core Infrastructure introduces important concepts and terminology for working with Google Cloud. This course presents and compares many of Google Cloud's computing and storage services, along with important resource and policy management tools. Tutorial Check out this Innovators Live session: Accelerate Your AI Development with Google Cloud's Data Platform Product Bring analytics to your data wherever it resides Select your persona to view curated content Personalize your experience Get the best of Google Cloud. Google Developer Profile is a way to learn about Google technologies and unlock achievements. Your profile captures your achievements with badges and saves your progress as you complete pathways, which include codelabs and videos. Access your Google Cloud toolkit Expand all Jump Start Solutions Access click-to-deploy sample applications and infrastructure best practices right in the Google Cloud Console. 
Deploy a dynamic website Build a dynamic website leveraging responsive web frameworks and familiar languages Learn more arrow_forward Deploy load-balanced managed virtual machines (VM) Create a VM with a load balancer, scale automatically, and efficiently manage traffic Learn more arrow_forward Create a log analytics pipeline Build a data pipeline that analyzes various data and logs of applications running across different environments Learn more arrow_forward Deploy a secure CI/CD pipeline Set up a secure CI/CD pipeline that follows building, scanning, storing, and deploying containers best practices Learn more arrow_forward Cloud architectures Discover reference architectures and best practices for your Google Cloud projects Application development Quickly build apps that scale no matter the load with Google Cloud application development reference architectures, tutorials, and more Learn more arrow_forward AI and Machine Learning Use these AI and machine learning reference architectures, tutorials, and more to deploy high quality AI solutions Learn more arrow_forward Security and IAM Browse featured IAM and security resources including reference architectures, tutorials, and more Learn more arrow_forward Grow your skills Gain new cloud skills and demonstrate your expertise Google Cloud tutorials Try out Google Cloud products with step-by-step product quickstarts, tutorials, or interactive walkthroughs Learn more arrow_forward Google Cloud Skills Boost Get hands-on experience through interactive labs and accelerate your cloud learning with Google Cloud Skills Boost Learn more arrow_forward Google Cloud Certification Show off your cloud expertise and skills with Google Cloud certifications and advance your career at the same time Learn more arrow_forward Go further with Google Developer Premium The Google Developer Program premium tier provides access to exclusive resources and opportunities to help you learn, build, and grow with Google. New Google Developer Premium Learn more arrow_forward Connect with Google Cloud experts Champion Innovators Connect with a community of technical and thought leaders from the Google Cloud Innovators program. Connect now Stay in the know keyboard_arrow_left keyboard_arrow_right Spotlight AI Skills Quest Start your quest Spotlight The next big thing is here Register today Whitepaper Gen AI App Development with Databases Read more Worldwide events Explore upcoming developer and community events and don't miss out on the opportunity to learnabout Google Cloud and connect developers like you. Event December 17, 2024 11:00 am - 12:00 pm PST Build AI Agents with Genkit Register now Event January 16 - 17, 2025 Launchpad for Women Register now Gen AI for Developers Gen AI for Developers | Virtual Workshop series Register now Innovators Events Google Cloud Weeklies | Big ideas in byte-sized sessions Register now Vertex Gemini Vertex Gemini | Prompt engineering series Register now Innovators Live Google Cloud Innovators Live | Interactive deep dives, simplified Register now Community Get to know the Google Cloud community by meeting, networking, and hearing from inspiring community members around the world. Blog 3 ways to use the new Multimodal Llama 3.2 on Vertex AI Check out three different ways you can get started building with Llama 3.2 on Vertex AI today. 
Read more Additional resources Get started with Google Cloud Learn more #Cloudbytes - product videos in a minute Go to playlist Build your startup on Google Cloud Learn more Google Cloud for Students Learn more 90 day, $300 free trial to Google Cloud Get started Dive into Google Cloud Documentation Start reading Explore the Cloud Architecture Center Learn more Go to Cloud console Get started \ No newline at end of file diff --git a/Developer_Tools.txt b/Developer_Tools.txt new file mode 100644 index 0000000000000000000000000000000000000000..6a95378f672b8714306ed0d4e30555c713468f82 --- /dev/null +++ b/Developer_Tools.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/tools +Date Scraped: 2025-02-23T12:04:15.301Z + +Content: +Gemini Code Assist is available to try at no cost until July 11, 2024, limited to one user per billing account. Get Started Google Cloud developer toolsAll of the tools developers and development teams need to be productive when writing, deploying, and debugging applications hosted in Google Cloud. New customers get $300 in free credits to run, test, and deploy workloads. Go to consoleGKE interactive tutorialLearn how to create and deploy a containerized web app with Cloud Shell Editor.Cloud Run interactive tutorialQuickly create and deploy a Cloud Run serverless service with Cloud Shell Editor.Click-to-deploy solutionsExplore pre-built solution samples for various use cases that you can deploy directly from the Google Cloud console. Explore our developer toolsCategoryProductsFeaturesCodeGemini Code AssistGemini Code Assist offers code recommendations in real time, suggests full function and code blocks, and identifies vulnerabilities and errors in the code—while suggesting fixes. Assistance can be accessed via a chat interface, Cloud Shell Editor, or Cloud Code IDE extensions for VSCode and JetBrains IDEs. Code assistance for Go, Java, Javascript, Python, and SQLSQL completions, query generation, and summarization using natural language Suggestions to structure, modify, or query your data during database migrationIdentify and troubleshoot errors using natural languageCloud WorkstationsFully-managed development environments on Google Cloud with built-in security and developer flexibility.Accessible anytime via browser or local IDEBuilt-in security measures such as VPC Service Controls and forced image updateMulti-editor support and container-based customizationSupport for 3rd party DevOps tools Cloud CodeWrite, debug, and run cloud-native applications, locally or in the cloud—quickly and easily. Extensions to IDEs such as Visual Studio Code and IntelliJ are provided to let you rapidly iterate, debug, and deploy code to Kubernetes. Cloud Shell Editor is an Eclipse Theia-based IDE in your browser that gives you instant access to Cloud Code.Service deployment to Cloud Run or Cloud Run for AnthosSkaffold, Jib, kubectl integration for real-time feedbackRun-ready samples and out-of-the-box configuration snippets Cloud SDKManage Google Cloud resources and applications with command-line tools and libraries. 
The Cloud SDK contains gcloud, gsutil, and bq command-line tools, which you can use to access Compute Engine, Cloud Storage, BigQuery, and more.VM orchestration directly from the command lineClient Libraries for Java, Python, Node.js, Ruby, Go, .NET, PHP, C++Local service emulators for Pub/Sub, Spanner, Bigtable, Datastore Spring Framework on Google CloudDeliver the simplicity and productivity of Spring—the most popular open source Java framework—to Java developers on Google Cloud.Google Cloud service integrations to extend SpringImplements existing abstractions and introduces new onesBuild Cloud BuildContinuously build, test, and deploy software across all languages and in multiple environments—including VMs, serverless, Kubernetes, and Firebase.Fully serverless platformSecurity scans as part of your CI/CD pipelinePackage source into containers or non-container artifactsTekton on Google CloudStandardize CI/CD pipelines across languages, and tools—on-premises or in the cloud—with a Kubernetes-native open-source framework.Native support for Jenkins, Skaffold, Knative, and Jenkins XBuilt-in best practices for KubernetesDeployment across hybrid or multicloud environments Jenkins on Google CloudGet more speed, scale, and security from your Jenkins pipeline. Leverage Compute Engine to seamlessly run your jobs and scale out your build farm.Easily set up a CI/CD pipeline with native Kubernetes supportGKE-based scaling and load balancingAutomatic security and compliance checks for artifactsBuilt-in CD best practices Manage artifacts Artifact RegistryManage container images and language packages—Maven and npm—in one place, fully integrated with Google Cloud’s tooling and runtimes.Native artifact format supportRegional and multi-regional repositoriesMultiple repositories per projectGranular access controlsDeployCloud DeployDeliver continuously to Google Kubernetes Engine using pipelines defined as code and let Google Cloud handle rollouts. ​​Create deployment pipelines for GKE within minutesFully managed continuous delivery service for easy scalingOpinionated control plane for rollout and rollback across the organizationEnterprise security and auditCloud BuildDeploy using built-in integrations to Google Kubernetes Engine, App Engine, Cloud Functions, and Firebase.Fully serverless platform for load-based scalingComplex pipeline creation support with SpinnakerCustom steps and extensions to third-party appsGoogle Cloud security protectionCloud Deployment ManagerCreate and manage cloud resources with simple templates. Specify all the resources needed for applications in a declarative format using yaml. Parallel resource deploymentPython and Jinja2 resource templatesJSON schema for managing parametersHierarchical deployment view in Cloud ConsoleExplore our developer toolsCodeGemini Code AssistGemini Code Assist offers code recommendations in real time, suggests full function and code blocks, and identifies vulnerabilities and errors in the code—while suggesting fixes. Assistance can be accessed via a chat interface, Cloud Shell Editor, or Cloud Code IDE extensions for VSCode and JetBrains IDEs. 
Code assistance for Go, Java, JavaScript, Python, and SQLSQL completions, query generation, and summarization using natural language Suggestions to structure, modify, or query your data during database migrationIdentify and troubleshoot errors using natural languageBuild Cloud BuildContinuously build, test, and deploy software across all languages and in multiple environments—including VMs, serverless, Kubernetes, and Firebase.Fully serverless platformSecurity scans as part of your CI/CD pipelinePackage source into containers or non-container artifacts Manage artifacts Artifact RegistryManage container images and language packages—Maven and npm—in one place, fully integrated with Google Cloud’s tooling and runtimes.Native artifact format supportRegional and multi-regional repositoriesMultiple repositories per projectGranular access controlsDeployCloud DeployDeliver continuously to Google Kubernetes Engine using pipelines defined as code and let Google Cloud handle rollouts. Create deployment pipelines for GKE within minutesFully managed continuous delivery service for easy scalingOpinionated control plane for rollout and rollback across the organizationEnterprise security and auditJava is a registered trademark of Oracle and/or its affiliates.Take the next stepStart your next project, explore interactive tutorials, and manage your account. Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Developer_platform_controls.txt b/Developer_platform_controls.txt new file mode 100644 index 0000000000000000000000000000000000000000..aa2469d6714fa3d8b4a9b85bfb651f2bf89ec3a1 --- /dev/null +++ b/Developer_platform_controls.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/developer-platform-controls +Date Scraped: 2025-02-23T11:47:05.248Z + +Content: +Home Docs Cloud Architecture Center Send feedback Developer platform controls Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC This section describes the controls that are used in the developer platform. Platform identity, roles, and groups Access to Google Cloud services requires Google Cloud identities. The blueprint uses fleet Workload Identity Federation for GKE to map the Kubernetes service accounts that are used as identities for pods to Google Cloud service accounts that control access to Google Cloud services. To help protect against cross-environment escalations, each environment has a separate identity pool (known as a set of trusted identity providers) for its Workload Identity Federation for GKE accounts. Platform personas When you deploy the blueprint, you create three types of user groups: a developer platform team, an application team (one team per application), and a security operations team. The developer platform team is responsible for development and management of the developer platform. The members of this team are the following: Developer platform developers: These team members extend the blueprint and integrate it into existing systems. This team also creates new application templates. Developer platform administrator: This team is responsible for the following: Approving the creation of new tenant teams.
Performing scheduled and unscheduled tasks that affect multiple tenants, including the following: Approving the promotion of applications to the nonproduction environment and the production environment. Coordinating infrastructure updates. Making platform-level capacity plans. A tenant of the developer platform is a single software development team and those responsible for the operation of that software. The tenant team consists of two groups: application developers and application operators. The duties of the two groups of the tenant team are as follows: Application developers: This team writes and debugs application code. They are sometimes also called software engineers or full-stack developers. Their responsibilities include the following: Performing testing and quality assurance on an application component when it is deployed into the development environment. Managing application-owned cloud resources (such as databases and storage buckets) in the development environment. Designing database or storage schemas for use by applications. Application operators or site reliability engineers (SREs): This team manages the reliability of applications that are running in the production environments, and the safe advancement of changes made by application developers into production. They are sometimes called service operators, systems engineers, or system administrators. Their responsibilities include the following: Planning application-level capacity needs. Creating alerting policies and setting service level objectives (SLOs) for services. Diagnosing service issues using logs and metrics that are specific to that application. Responding to alerts and pages, such as when a service doesn't meet its SLOs. Working with a group or several groups of application developers. Approving deployment of new versions to production. Managing application-owned cloud resources in the non-production and production environments (for example, backups and schema updates). Platform organization structure The enterprise application blueprint uses the organization structure that is provided by the enterprise foundation blueprint. The following diagram shows how the enterprise application blueprint projects fit within the structure of the foundation blueprint. Platform projects The following table describes the additional projects, beyond those provided by the foundation blueprint, that the application blueprint needs for deploying resources, configurations, and applications. Folder Project Description common eab-infra-cicd Contains the multi-tenant infrastructure pipeline to deploy the tenant infrastructure. eab-app-factory Contains the application factory , which is used to create single-tenant application architecture and continuous integration and continuous deployment (CI/CD) pipelines. This project also contains the Config Sync that's used for GKE cluster configuration. eab-{tenant}-cicd Contains the application CI/CD pipelines, which are in independent projects to enable separation between development teams. There is one pipeline for each application. development,nonproduction,production eab-gke Contains the GKE clusters for the developer platform and the logic that is used to register clusters for fleet management. eab-{tenant}(1-n) Contains any single-tenant application services such as databases or other managed services that are used by an application team. Platform cluster architecture The blueprint deploys applications across three environments: development, non-production, and production. 
Each environment is associated with a fleet. In this blueprint, a fleet is a project that includes one or more clusters. However, fleets can also group several projects. A fleet provides a logical security boundary for administrative control. A fleet provides a way to logically group and normalize Kubernetes clusters, and makes administration of infrastructure easier. The following diagram shows two GKE clusters, which are created in each environment to deploy applications. The two clusters act as identical GKE clusters in two different regions to provide multi-region resiliency. To take advantage of fleet capabilities, the blueprint uses the concept of sameness across namespace objects, services, and identity. The enterprise application blueprint uses GKE clusters with private spaces enabled through Private Service Connect access to the control plane and private node pools to remove potential attack surfaces from the internet. Neither the cluster nodes nor the control plane has a public endpoint. The cluster nodes run Container-Optimized OS to limit their attack surface and the cluster nodes use Shielded GKE Nodes to limit the ability of an attacker to impersonate a node. Administrative access to the GKE clusters is enabled through the Connect gateway. As part of the blueprint deployment, one Cloud NAT instance is used for each environment to give pods and Config Sync a mechanism to access resources on the internet such as the GitHub. Access to the GKE clusters is controlled by Kubernetes RBAC authorization that is based on Google Groups for GKE. Groups let you control identities using a central identity management system that's controlled by identity administrators. Platform GKE Enterprise components The developer platform uses GKE Enterprise components to enable you to build, deliver, and manage the lifecycle of your applications. The GKE Enterprise components that are used in the blueprint deployment are the following: GKE for container management Policy Controller for policy management and enforcement Cloud Service Mesh for service management Binary Authorization for container image attestation GKE Gateway controller for the multi-cluster gateway controller for GKE clusters Platform fleet management Fleets provide you with the ability to manage multiple GKE clusters in a single unified way. Fleet team management makes it easier for platform administrators to provision and manage infrastructure resources for developer platform tenants. Tenants have scoped control of resources within their own namespace, including their applications, logs, and metrics. To provision subsets of fleet resources on a per-team basis, administrators can use team scopes. Team scopes let you define subsets of fleet resources for each team, with each team scope associated with one or more fleet member clusters. Fleet namespaces provide control over who has access to specific namespaces within your fleet. The application uses two GKE clusters that are deployed on one fleet, with three team scopes, and each scope having one fleet namespace. The following diagram shows the fleet and scope resources that correspond to sample clusters in an environment, as implemented by the blueprint. Platform networking For networking, GKE clusters are deployed in a Shared VPC that's created as part of the enterprise foundation blueprint. GKE clusters require multiple IP address ranges to be assigned in the development, non-production, and production environments. 
Each GKE cluster that's used by the blueprint needs separate IP address ranges allocated for the nodes, pods, services, and control plane. AlloyDB for PostgreSQL instances also requires separate IP address ranges. The following table describes the VPC subnets and IP address ranges that are used in the different environments to deploy the blueprint clusters. For the development environment in Application GKE cluster region 2, the blueprint deploys only one IP address space even though there is IP address space allocated for two development GKE clusters. Resource IP address range type Development Nonproduction Production Application GKE cluster region 1 Primary IP address range 10.0.64.0/24 10.0.128.0/24 10.0.192.0/24 Pod IP address range 100.64.64.0/24 100.64.128.0/24 100.64.192.0/24 Service IP address range 100.0.80.0/24 100.0.144.0/24 100.0.208.0/24 GKE control plane IP address range 10.16.0.0/21 10.16.8.0/21 10.16.16.0/21 Application GKE cluster region 2 Primary IP address range 10.1.64.0/24 10.1.128.0/24 10.1.192.0/24 Pod IP address range 100.64.64.0/24 100.64.128.0/24 100.64.192.0/24 Service IP address range 100.1.80.0/24 100.1.144.0/24 100.1.208.0/24 GKE control plane IP address range 10.16.0.0/21 10.16.8.0/21 10.16.16.0/21 AlloyDB for PostgreSQL Database IP address range 10.9.64.0/18 10.9.128.0/18 10.9.192.0/18 If you need to design your own IP address allocation scheme, see IP address management in GKE and GKE IPv4 address planning. Platform DNS The blueprint uses Cloud DNS for GKE to provide DNS resolution for pods and Kubernetes services. Cloud DNS for GKE is a managed DNS that doesn't require a cluster-hosted DNS provider. In the blueprint, Cloud DNS is configured for VPC scope. VPC scope lets services in all GKE clusters in a project share a single DNS zone. A single DNS zone lets services be resolved across clusters, and VMs or pods outside the cluster can resolve services within the cluster. Platform firewalls GKE automatically creates firewall rules when creating GKE clusters, GKE services, GKE Gateway firewalls, and GKE Ingress firewalls that allow clusters to operate in your environments. The priority for all the automatically created firewall rules is 1000. These rules are needed as the enterprise foundation blueprint has a default rule to block traffic in the Shared VPC. Platform access to Google Cloud services Because the blueprint applications use private clusters, Private Google Access provides access to Google Cloud services. Platform high availability The blueprint was designed to be resilient to both zone and region outages. Resources needed to keep applications running are spread across two regions. You select the regions that you want to deploy the blueprint to. Resources that are not in the critical path for serving and responding to load are only one region or are global. The following table describes the resources and where they are deployed. 
Location Region 1 Region 2 Global Environments with resources in this location common development nonproduction production nonproduction production common development nonproduction production Projects with resources in this location eab-gke-{env} eab-infra-cicd eab-{ns}-cicd eab-gke-{env} eab-{ns}-cicd (only for the Artifact Registry mirror) eab-gke-{env} Resource types in this location GKE cluster (applications and the Gateway configuration) Artifact Registry AlloyDB for PostgreSQL Cloud Build Cloud Deploy GKE cluster (applications only) Artifact Registry AlloyDB for PostgreSQL Cloud Logging Cloud Monitoring Cloud Load Balancing Fleet scopes Fleet namespaces The following table summarizes how different components react to a region outage or a zone outage, and how you can mitigate these effects. Failure scope External services effects Database effects Build and deploy effects Terraform pipelines effects A zone of Region 1 Available. Available.The standby instance becomes active with zero RPO. Available, manual change might be needed.You might need to restart any terraform apply command that was in progress, but completed during the outage. Available, manual change might be needed.You might need to restart any terraform apply command that was in progress, but completed during the outage. A zone of Region 2 Available. Available. Available. Available, manual change might be needed.You might need to restart any terraform apply command that was in progress, but completed during the outage. Region 1 Available. Manual change needed.An operator must promote the secondary cluster manually. Unavailable. Unavailable. Region 2 Available. Available. Available, manual change might be neededBuilds remain available. You might need to deploy new builds manually. Existing builds might not complete successfully. Available. Cloud provider outages are only one source of downtime. High availability also depends on processes and operations that help make mistakes less likely. The following table describes all the decisions made in the blueprint that relate to high availability and the reasons for those decisions. Blueprint decision Availability impact Change management Use GitOps and IaC. Supports peer review of changes and supports reverting quickly to previous configurations. Promote changes gradually through environments. Lowers the impact of software and configuration errors. Make non-production and production environments similar. Ensures that differences don't delay discovery of an error. Both environments are dual-region. Change replicated resources one region at a time within an environment. Ensures that issues that aren't caught by gradual promotion only affect half of the run-time infrastructure. Change a service in one region at a time within an environment. Ensures that issues that aren't caught by gradual promotion only affect half of the service replicas. Replicated compute infrastructure Use a regional cluster control plane. Regional control plane is available during upgrade and resize. Create a multi-zone node pool. A cluster node pool has at least three nodes spread across three zones. For more information about region-specific considerations, see Geography and regions. Configure a Shared VPC network. The Shared VPC network covers two regions. A regional failure only affects network traffic to and from resources in the failing region. Replicate the image registry. 
Images are stored in Artifact Registry, which is configured to replicate to multiple regions so that a cloud region outage doesn't prevent application scale-up in the surviving region. Replicated services Deploy service replicas to two regions. In case of a regional outage, a Kubernetes service remains available in the production and non-production environments. Use rolling updates to service changes within a region. You can update Kubernetes services using a rolling update deployment pattern which reduces risk and downtime. Configure three replicas in a region for each service. A Kubernetes service has at least three replicas (pods) to support rolling updates in the production and non-production environment. Spread the deployment's pods across multiple zones. Kubernetes services are spread across VMs in different zones using an anti-affinity stanza. A single-node disruption or full zone outage can be tolerated without incurring additional cross-region traffic between dependent services. Replicated storage Deploy multi-zone database instances. AlloyDB for PostgreSQL offers high availability in a region. Its primary instance's redundant nodes are located in two different zones of the region. The primary instance maintains regional availability by triggering an automatic failover to the standby zone if the active zone encounters an issue. Regional storage helps provide data durability in the event of a single-zone loss. For more information about region-specific considerations, see Geography and regions. Replicate databases cross-region. AlloyDB for PostgreSQL uses cross-region replication to provide disaster recovery capabilities. The database asynchronously replicates your primary cluster's data into secondary clusters that are located in separate Google Cloud regions. Operations Provision applications for twice their expected load. If one cluster fails (for example, due to a regional service outage), the portion of the service that runs in the remaining cluster can fully absorb the load. Repair nodes automatically. The clusters are configured with node auto repair. If a node's consecutive health checks fail repeatedly over an extended time period, GKE initiates a repair process for that node. Ensure node pool upgrades are application-aware. Deployments define a pod disruption budget with maxUnavailable: 1 to allow parallel node pool upgrades in large clusters. No more than one of three (in the development environment) or one of six (in non-production and production) replicas are unavailable during node pool upgrades. Automatically restart deadlocked services. The deployment backing a service defines a liveness probe, which identifies and restarts deadlocked processes. Automatically check for replicas to be ready. The deployment backing a service defines a readiness probe, which identifies when an application is ready to serve after starting. A readiness probe eliminates the need for manual checks or timed-waits during rolling updates and node pool upgrades. The reference architecture is designed for applications with zonal and regional high availability requirements. Ensuring high availability does incur some costs (for example, standby spare costs or cross-region replication costs). The Alternatives section describes some ways to mitigate these costs. Platform quotas, performance limits, and scaling limits You can control quotas, performance, and scaling of resources in the developer platform. 
The following list describes some items to consider: The base infrastructure requires numerous projects, and each additional tenant requires four projects. You might need to request additional project quota before deploying and before adding more tenants. There is a limit of 100 MultiClusterGateway resources for each project. Each internet-facing Kubernetes service on the developer platform requires one MultiClusterGateway. Cloud Logging has a limit of 100 buckets in a project. The per-tenant log access in the blueprint relies on a bucket for each tenant. To create more than 20 tenants, you can request an increase to the project's quota for Scope and Scope Namespace resources. For instructions on viewing quotas, see View and manage quotas. Use a filter to find the gkehub.googleapis.com/global-per-project-scopes and gkehub.googleapis.com/global-per-project-scope-namespaces quota types. What's next Read about service architecture (next document in this series). Send feedback \ No newline at end of file diff --git a/Device_Connect_for_Fitbit.txt b/Device_Connect_for_Fitbit.txt new file mode 100644 index 0000000000000000000000000000000000000000..68c4c476b1dfd7d33b321f5b5308b0a4b7bbbf92 --- /dev/null +++ b/Device_Connect_for_Fitbit.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/device-connect +Date Scraped: 2025-02-23T12:05:04.848Z + +Content: +Jump to Device Connect for FitbitDevice Connect for FitbitEnable a more holistic view of patients with connected Fitbit data on Google Cloud.Contact usSimplify patient enrollment and consentEnable data interoperabilityAccelerate time to insights1:29Introducing Google Cloud\'s Device Connect for FitbitBenefitsEnable a more holistic view of the patient Device Connect for Fitbit can solve data integration challenges for Fitbit and other data with an open, standards-based approach.Help increase care team efficiencyBetter device interoperability can accelerate time to insight for care teams to help increase productivity, ease workloads and reduce burnout.Support more personalized care for better outcomesDevice Connect for Fitbit can empower the healthcare ecosystem with visibility to rich patient data to better understand what’s driving variability in outcomes and help deliver more personalized care.Key featuresGoogle Cloud's Device Connect for FitbitEnrollment and consent app for web and mobileThe pre-built patient enrollment and consent app enables organizations to provide their users with the permissions, transparency, and frictionless experience they expect. For example, users have control over what data they share and how that data is used.Fitbit data connectorOpen source data connector that provides automated data normalization and integration with Google Cloud BigQuery for advanced analytics. Can support emerging standards like Open mHealth and enables interoperability with clinical data when used with Cloud Healthcare API for cohort building and AI training pipelines. Pre-built analytics dashboardThe pre-built Looker interactive visualization dashboard can be easily customized for different clinical settings and use cases to provide faster time to insights.AI and machine learning toolsUse AutoML Tables to build advanced models directly from BigQuery or build custom models with 80% fewer lines of code using Vertex AI–the groundbreaking ML tools that power Google, developed by Google Research. 
Collaborating with Google Cloud allows us to do our research [on early identification and prevention of vascular disease], with the help of data analytics and AI, on a much greater scale. Being able to leverage the new solution makes it easier than ever to gain the insights we need.Dr. Ivo van der Bilt, Cardiologist Department Chair, Haga Teaching HospitalRead the blog postUse casesMake your Fitbit data accessible, interoperable, and usefulUse casePre- and post-surgerySupporting the patient journey before and after surgery can lead to higher patient engagement and more successful outcomes. However, many organizations lack a holistic view of patients.Fitbit tracks multiple behavioral metrics of interest, including activity level, sleep, weight and stress, and can provide visibility and new insights for care teams to what’s happening with patients outside of the hospital.Use caseChronic condition managementPromoting healthy behaviors can help improve outcomes for patients living with chronic diseases.Better understanding how lifestyle factors impact disease indicators can enable organizations to deliver more personalized care and tools to support healthy lifestyle changes. Use casePopulation healthSupport better management of community health outcomes with a focus on preventative care.Fitbit users can share data with partners that deliver lifestyle behavior change programs aimed at both prevention and management of chronic or acute conditions.Use caseClinical researchClinical trials depend on rich patient data. Collection in a physician’s office captures a snapshot of the participant’s data at one point in time and doesn’t account for daily lifestyle variables. Fitbit can enrich clinical trial endpoints with new insights from longitudinal lifestyle data and can improve patient retention and compliance with study protocols.Use caseHealth equityAddressing healthcare disparities is a priority across the healthcare ecosystem. Analyzing a variety of datasets, such as demographic, social determinants of health (SDOH) and Fitbit data, has the potential to provide organizations and researchers with new insights regarding disparities that may exist across populations.View all technical guidesPricingPricingPlease contact our sales team to discuss pricing for your organization.PartnersAn ecosystem of global service partners to help you deploy at scaleTake the next stepStart your next project, explore interactive tutorials, and manage your account.Contact usNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Device_to_Pub-Sub_connection_to_Google_Cloud.txt b/Device_to_Pub-Sub_connection_to_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..35b3c2c2a18ebe36aa1db7984ba7fd1064ef5847 --- /dev/null +++ b/Device_to_Pub-Sub_connection_to_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/connected-devices/device-pubsub-architecture +Date Scraped: 2025-02-23T11:48:13.190Z + +Content: +Home Docs Cloud Architecture Center Send feedback Device on Pub/Sub connection to Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-09 UTC Rather than implementing a specific architecture to connect devices to analytics applications, some organizations might benefit from connecting directly to Pub/Sub from edge devices.
We recommend this approach for organizations that have a small number of connected devices that aggregate data from a larger number of devices and sensors in a local or on-premises network. This approach is also recommended when your organization has connected devices that are in a more secure environment, such as a factory. This document outlines the high-level architectural considerations that you need to make to use this approach to connect devices to Google Cloud products. This document is part of a series of documents that provide information about IoT architectures on Google Cloud. The other documents in this series include the following: Connected device architectures on Google Cloud overview Standalone MQTT broker architecture on Google Cloud IoT platform product architecture on Google Cloud Best practices for running an IoT backend on Google Cloud Device on Pub/Sub architecture to Google Cloud (this document) Best practices for automatically provisioning and configuring edge and bare metal systems and servers Architecture The following diagram shows a connected aggregation device or gateway that connects directly to Pub/Sub. The flow of events in the preceding diagram is as follows: You use the Identity and Access Management API to create a new key pair for a service account. The public key is stored in IAM. However, you must download the private key securely and store it in the gateway device so that it can be used for authentication. The aggregation device collects data from multiple other remote devices and sensors located within a secure local network. The remote devices communicate with the gateway using a local edge protocol such as MODBUS, BACNET, OPC-UA, or another local protocol. The aggregation device sends data to Pub/Sub over either HTTPS or gRPC. These API calls are signed using the service account private key held on the aggregation device. Architectural considerations and choices Because Pub/Sub is a serverless data streaming service, you can use it to create bidirectional systems that are composed of event producers and consumers (known as publishers and subscribers). In some connected device scenarios, you only need a scalable publish and subscribe service to create an effective data architecture. The following sections describe the considerations and choices that you need to make when you implement a device to Pub/Sub architecture on Google Cloud. Ingestion endpoints Pub/Sub provides prebuilt client libraries in multiple languages that implement the REST and gRPC APIs. It supports two protocols for message ingestion: REST (HTTP) and gRPC. For a connected device to send and receive data through Pub/Sub, the device must be able to interact with one of these endpoints. Many software applications have built-in support for REST APIs, so connecting with the Pub/Sub REST API is often the easiest solution. In some use cases, however, gRPC can be a more efficient and faster alternative. Because it uses serialized protocol buffers for the message payload instead of JSON, XML, or another text-based format, gRPC is better suited for the low-bandwidth applications that are commonly found in connected device use cases. gRPC API connections are also faster than REST for data transmission, and gRPC supports simultaneous bidirectional communication. One study found that gRPC is up to seven times faster than REST. As a result, for many connected device scenarios, gRPC is a better option if a gRPC connector is available or can be implemented for the connected device application. 
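To make this flow concrete, the following minimal Python sketch shows how an aggregation device might publish an aggregated reading to Pub/Sub using the google-cloud-pubsub client library, which communicates over gRPC. The project ID, topic name, key file path, and payload fields are placeholders, and loading a downloaded service account key is only one of the authentication options discussed in the next section.

```python
# Minimal publisher sketch for an aggregation device (illustrative only).
# Assumes the google-cloud-pubsub library is installed and that a service
# account key file has been provisioned to the device; the project, topic,
# key path, and payload fields below are placeholders.
import json

from google.cloud import pubsub_v1
from google.oauth2 import service_account

PROJECT_ID = "my-project"                      # placeholder
TOPIC_ID = "device-telemetry"                  # placeholder
KEY_PATH = "/secure/storage/device-key.json"   # placeholder

# Authenticate the API calls with the private key stored on the device.
credentials = service_account.Credentials.from_service_account_file(KEY_PATH)
publisher = pubsub_v1.PublisherClient(credentials=credentials)
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# Example aggregated reading collected from local sensors (for example, over
# Modbus or OPC UA) before being forwarded to the cloud.
reading = {"device_id": "gateway-01", "temperature_c": 21.4, "pressure_kpa": 101.2}

future = publisher.publish(
    topic_path,
    data=json.dumps(reading).encode("utf-8"),
    origin="factory-gateway",  # optional message attribute
)
print(f"Published message ID: {future.result()}")
```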
Device authentication and credential management Pub/Sub supports a number of authentication methods for access from outside Google Cloud. If your architecture includes an external identity provider such as Active Directory or a local Kubernetes cluster, you can use workload identity federation to manage access to Pub/Sub. This approach lets you create short-lived access tokens for connected devices. You can also grant IAM roles to your connected devices, without the management and security overhead of using service account keys. In cases when an external identity provider is not available, service account keys are the only option for authentication. Service account keys can become a security risk if not managed correctly, so we recommend that you follow security best practices for deploying service account keys to connected devices. To learn more, see Best practices for managing service account keys. Service accounts are also a limited resource and any cloud project has a limited quota of user-managed service accounts. Consequently, this approach is only an option for deployments that have a small number of devices that need to be connected. Backend applications After data is ingested into a Pub/Sub topic, the data is available to any application that runs on Google Cloud that has the appropriate credentials and access privileges. No additional connectors are necessary other than the Pub/Sub API in your application. Messages can be made available to multiple applications across your backend infrastructure for parallel processing or alerting, as well as archival storage and other analytics. Use cases The following sections describe example scenarios where a direct connection from devices to Pub/Sub is well suited for connected device use cases. Bulk data ingestion from an on-premises data historian A device to Pub/Sub connection is best suited for applications which have a small number of endpoints that transmit large volumes of data. An operational data historian is a good example of an on-premises system that stores a lot of data which needs to be transmitted to Google Cloud. For this use case, a small number of endpoints must be authenticated, typically one to a few connected devices, which is within the typical parameters for service account authentication. These systems also commonly have modular architectures, which lets you implement the Pub/Sub API connection that you need to communicate with Google Cloud. Local gateway data aggregation for a factory Aggregation of factory sensor data in a local gateway is another use case well suited for a direct Pub/Sub connection. In this case, a local data management and aggregation system are deployed on a gateway device in the factory. This system is typically a software product that connects to a wide variety of local sensors and machines. The product collects the data and frequently transforms it into a standardized representation before passing it on to the cloud application. Many devices can be connected in this scenario. However, those devices are usually only connected to the local gateway and are managed by the software on that device, so there's no need for a cloud-based management application. Unlike in an MQTT broker architecture, in this use case, the gateway plays an active role in aggregating and transforming the data. When the gateway connects to Google Cloud, it authenticates with Pub/Sub through a service account key. The key sends the aggregated and transformed data to the cloud application for further processing. 
The number of connected gateways is also typically in the range of tens to hundreds of devices, which is within the typical range for service account authentication. What's next Learn best practices for managing service account keys. Read an overview of identity federation for external workloads. Learn more about Pub/Sub Explore reference architectures, diagrams, and best practices for Google Cloud. Take a look at our Cloud Architecture Center. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Discover_and_consume_data_products_in_a_data_mesh.txt b/Discover_and_consume_data_products_in_a_data_mesh.txt new file mode 100644 index 0000000000000000000000000000000000000000..8605656cc104a96a910e6b138532f9ef226a8f48 --- /dev/null +++ b/Discover_and_consume_data_products_in_a_data_mesh.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/discover-consume-data-products-data-mesh +Date Scraped: 2025-02-23T11:49:02.363Z + +Content: +Home Docs Cloud Architecture Center Send feedback Discover and consume data products in a data mesh Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-03 UTC We recommend that you design your data mesh to support a wide variety of use cases for data consumption. The most common data consumption use cases in an organization are described in this document. The document also discusses what information data consumers must consider when determining the right data product for their use case, and how they discover and use data products. Understanding these factors can help organizations to ensure that they have the right guidance and tooling in place to support data consumers. This document is part of a series which describes how to implement a data mesh on Google Cloud. It assumes that you have read and are familiar with the concepts described in Architecture and functions in a data mesh and Build a modern, distributed Data Mesh with Google Cloud. The series has the following parts: Architecture and functions in a data mesh Design a self-service data platform for a data mesh Build data products in a data mesh Discover and consume data products in a data mesh (this document) The design of a data consumption layer, specifically, how the data domain-based consumers use data products, depends on the data consumer requirements. As a prerequisite, it's assumed that consumers have a use case in mind. It's assumed that they have identified the data that they require, and can search the central data product catalog to find it. If that data is not in the catalog or is not in the preferred state (for example, if the interface is not appropriate, or the SLAs are insufficient), the consumer must contact the data producer. Alternatively, the consumer can contact the center of excellence (COE) for the data mesh for advice on which domain is the best suited to produce that data product. The data consumers can also ask how to make their request. If your organization is large, there should be a process to make data product requests in a self-service manner. Data consumers use data products through the applications that they run. The type of insights required drives the choice of design of the data-consuming application. When they develop the design of the application, the data consumer also identifies their preferred use of data products in the application. 
They establish the confidence that they need to have in the trustworthiness and reliability of that data. The data consumers can then establish a view on the data product interfaces and SLAs that the application requires. Data consumption use cases For data consumers to create data applications, sources could be one or more data products and, perhaps, the data from the data consumer's own domain. As described in Build data products in a data mesh, analytical data products could be made from data products which are based on various physical data repositories. Although data consumption can happen within the same domain, the most common consumption patterns are those that search for the right data product, regardless of domain, as the source for the application. When the right data product exists in another domain, the consumption pattern requires you to set up the subsequent mechanism for access and usage of the data across domains. The consumption of data products created in domains other than the consuming domain is discussed in Data consumption steps. Architecture The following diagram shows an example scenario in which consumers use data products through a range of interfaces, including authorized datasets and APIs. As shown in the preceding diagram, the data producer has exposed four data product interfaces: two BigQuery authorized datasets, a BigQuery dataset exposed by the BigQuery storage read API, and data access APIs hosted on Google Kubernetes Engine. In using the data products, data consumers use a range of applications that query or directly access the data resources within the data products. For this scenario, data consumers access data resources in one of two different ways based on their specific data access requirements. In the first way, Looker uses BigQuery SQL to query an authorized dataset. In the second way, Dataproc directly accesses a dataset through the BigQuery API and then processes that ingested data to train a machine learning (ML) model. The use of a data consumption application might not always result in a business intelligence (BI) report or a BI dashboard. Consumption of data from a domain can also result in ML models that further enrich analytical products, are used in data analysis, or are a part of operational processes, for example, fraud detection. Some typical data product consumption use cases are as follows: BI reporting and data analysis: In this case, data applications are built to consume data from multiple data products. For example, data consumers from the customer relationship management (CRM) team need access to data from multiple domains such as sales, customers, and finance. The CRM application that is developed by these data consumers might need to query both a BigQuery authorized view in one domain and extract data from a Cloud Storage Read API in another domain. For data consumers, the optimizing factors that influence their preferred consumption interface are computing costs and any additional data processing that is required after they query the data product. In BI and data analysis use cases, BigQuery authorized views are likely to be most commonly used. Data science use cases and model training: In this case, the data consuming team is using the data products from other domains to enrich their own analytical data product such as an ML model. By using Dataproc Serverless for Spark, Google Cloud provides data pre-processing and feature engineering capabilities to enable data enrichment before running ML tasks. 
The key considerations are availability of sufficient amounts of training data at a reasonable cost, and confidence that the training data is the appropriate data. To keep costs down, the preferred consumption interfaces are likely to be direct read APIs. It's possible for a data consuming team to build an ML model as a data product, and in turn, that data consuming team also becomes a new data producing team. Operator processes: Consumption is a part of the operational process within the data consuming domain. For example, a data consumer in a team that deals with fraud might be using transaction data coming from operational data sources in the merchant domain. By using a data integration method like change data capture, this transaction data is intercepted at near real time. You can then use Pub/Sub to define a schema for this data and expose that information as events. In this case, the appropriate interfaces would be data exposed as Pub/Sub topics. Data consumption steps Data producers document their data product in the central catalog, including guidance on how to consume the data. For an organization with multiple domains, this documentation approach creates an architecture that's different from the traditional centrally built ELT/ETL pipeline, where processors create outputs without the boundary of business domains. Data consumers in a data mesh must have a well-designed discovery and consumption layer to create a data consumption lifecycle. The layer should include the following: Step 1: Discover data products through declarative search and exploration of data product specifications: Data consumers are free to search for any data product that data producers have registered in the central catalog. For all data products, the data product tag specifies how to make data access requests and the mode to consume data from the required data product interface. The fields in the data product tags are searchable using a search application. Data product interfaces implement data URIs, which means data does not need to be moved to a separate consumption zone to service consumers. In situations when real-time data isn't needed, consumers query data products and create reports with the results that are generated. Step 2: Exploring data through interactive data access and prototyping: Data consumers use interactive tools like BigQuery Studio and Jupyter Notebooks to interpret and experiment with the data to refine the queries that they need for production use. Interactive querying enables data consumers to explore newer dimensions of data and improve the correctness of insights generated in production scenarios. Step 3: Consuming data product through an application, with programmatic access and production: BI reports. Batch and near-real time reports and dashboards are the most common group of analytic use cases required by data consumers. Reports might require cross-data product access to help facilitate decision making. For example, a customer data platform requires programmatically querying both orders and CRM data products in a scheduled fashion. The results from such an approach provide a holistic customer view to the business users who consume the data. AI/ML model for batch and real-time prediction. Data scientists use common MLOps principles to build and service ML models that consume data products made available by the data product teams. ML models provide real-time inference capabilities for transactional use-cases like fraud detection. 
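As a concrete illustration of the programmatic access described above, the following minimal Python sketch queries a data product that is exposed as a BigQuery authorized view, in the spirit of the orders and CRM example. The project, dataset, view, and column names are hypothetical, and the consumer is assumed to have already been granted access by the producing domain.

```python
# Minimal consumer sketch (illustrative only): programmatically query a data
# product exposed as a BigQuery authorized view. The project, dataset, view,
# and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="consumer-project")  # placeholder project

QUERY = """
    SELECT customer_id, COUNT(*) AS order_count
    FROM `producer-project.orders_product.orders_authorized_view`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY customer_id
    ORDER BY order_count DESC
    LIMIT 100
"""

# Run the query with the consumer's own credentials and iterate the results.
for row in client.query(QUERY).result():
    print(f"{row.customer_id}: {row.order_count} orders in the last 30 days")
```

A scheduled job that runs a query like this is one way a BI report can consume several data products without moving the underlying data out of the producing domains.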
Similarly, with exploratory data analysis, data consumers can enrich source data. For example, exploratory data analysis on sales and marketing campaigns data shows demographic customer segments where sales are expected to be highest and hence where campaigns should be run. What's next See a reference implementation of the data mesh architecture. Learn more about BigQuery. Read more about Vertex AI. Learn about data science on Dataproc. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Distributed_Cloud.txt b/Distributed_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..f1d25edab3214c3b02287a400006be4ff8c9270b --- /dev/null +++ b/Distributed_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/distributed-cloud +Date Scraped: 2025-02-23T12:04:41.878Z + +Content: +Download “The 2024 State of Edge Computing” report to leverage insights from 640 business leaders.Google Distributed CloudExtend Google Cloud AI and infrastructure on-premisesGoogle Distributed Cloud’s hardware and software bring Google Cloud to your data center, designed for sovereignty, regulatory, and low-latency requirements.Contact usView documentationProduct highlightsBuild apps with an air-gapped optionEnable modern retail experiences Implement modern manufacturing outcomesTransform telecommunicationsWhat is Google Distributed Cloud?1.5-minute video overviewOverviewEnable inference at the edgeOn-premises computing is not new, but extending AI-optimized cloud infrastructure from the cloud to on-premises deployments is. With Google Distributed Cloud, you can extend the latest in AI models and AI-optimized infrastructure without compromising on data residency, latency, or connectivity.12:56Run modern apps on-premisesBuild modern apps everywhere in a uniform developer environment from cloud to edge. Google Distributed Cloud enables your team to build, deploy, and scale with a Kubernetes-based developer workflow and leverage an active ecosystem of partners.12:55Address data residency and operational sovereignty needsGoogle Distributed Cloud comes with an air-gapped option, coupled with a partner-operated model, to support public sector and regulated industries to meet the strictest sovereignty regulations, including ensuring data residency, control of operational staffing, and limiting impacts of jurisdictional challenges.Strategic partnership to deliver sovereign cloud services in EuropeScale anywhere with cloud-native agilityScale from one to thousands of locations with flexible hardware and software options for your business. Google Distributed Cloud equips your team with fully managed software to quickly adapt and respond to the latest customer requirement and market changes.2:33View moreHow It WorksDrive modern business use cases that leverage Google's AI, security, and an open ecosystem for better experiences, improved growth, and optimized operations. Securely store data where you need it and run modern apps anywhere with Kubernetes-based flexible scaling for thousands of edge locations.Technical overviewGoogle Distributed CloudCommon UsesBuild apps with an air-gapped optionFull isolation with no connectivity to the public internetSafeguard sensitive data and adhere to strict regulations with Google Distributed Cloud's air-gapped option. 
The solution does not require connectivity to Google Cloud or the public internet to manage infrastructure, services, APIs, or tooling, and is built to remain disconnected in perpetuity. Our product's open architecture creates familiarity across both public and air-gapped private clouds.Learn more about our air-gapped solutions1:13Watch "Google Distributed Cloud: An air-gapped cloud solution"How-tosFull isolation with no connectivity to the public internetSafeguard sensitive data and adhere to strict regulations with Google Distributed Cloud's air-gapped option. The solution does not require connectivity to Google Cloud or the public internet to manage infrastructure, services, APIs, or tooling, and is built to remain disconnected in perpetuity. Our product's open architecture creates familiarity across both public and air-gapped private clouds.Learn more about our air-gapped solutions1:13Watch "Google Distributed Cloud: An air-gapped cloud solution"Enable modern retail experiencesDeliver modern store use cases to thousands of locationsBuilding, deploying, and scaling software for thousands of clusters and locations can be challenging. With Google Distributed Cloud, you can build, deploy, and scale configurations from a single to thousands of retail locations to support use cases like store analytics, fast checkout, and predictive analytics.Learn more about the key insights driving retail at the edgeDownload the ESG ShowcaseLearn how to modernize retail with AI, cloud, and edge computingDownload the ESG WhitepaperLearn how AI, cloud, and edge enable the store of the futureDownload the ESG Economic ValidationLearn Google Distributed Cloud’s economic benefits Learn more about GDC for retailRead our blog: Introducing Google Distributed Cloud for retailHow-tosDeliver modern store use cases to thousands of locationsBuilding, deploying, and scaling software for thousands of clusters and locations can be challenging. With Google Distributed Cloud, you can build, deploy, and scale configurations from a single to thousands of retail locations to support use cases like store analytics, fast checkout, and predictive analytics.Learn more about the key insights driving retail at the edgeDownload the ESG ShowcaseLearn how to modernize retail with AI, cloud, and edge computingDownload the ESG WhitepaperLearn how AI, cloud, and edge enable the store of the futureDownload the ESG Economic ValidationLearn Google Distributed Cloud’s economic benefits Additional resourcesLearn more about GDC for retailRead our blog: Introducing Google Distributed Cloud for retailEnable the modern factory floorDeliver factory floor use cases directly on-siteModernizing factory operations with legacy solutions can be difficult. 
With Google Distributed Cloud, you can enable modern industrial outcomes, such as process optimization, security, asset protection, visual inspection, and assisted workforce across the factory floor, while delivering improved customer experiences, such as predictable delivery lead times and real time updates.Learn more about the key insights driving manufacturing at the edgeRead our blog: Introducing Google Distributed Cloud for manufacturingInteractive demos and videosIn this interactive demo created in collaboration with Google Cloud partner ClearObject, you can experience real-time anomaly detection on the factory floor solution, built with GDC connected and ClearVision for manufacturing.0:36VIDEO - ClearVision and Google Cloud provide deep insights into production operationsHow-tosDeliver factory floor use cases directly on-siteModernizing factory operations with legacy solutions can be difficult. With Google Distributed Cloud, you can enable modern industrial outcomes, such as process optimization, security, asset protection, visual inspection, and assisted workforce across the factory floor, while delivering improved customer experiences, such as predictable delivery lead times and real time updates.Learn more about the key insights driving manufacturing at the edgeRead our blog: Introducing Google Distributed Cloud for manufacturingAdditional resourcesInteractive demos and videosIn this interactive demo created in collaboration with Google Cloud partner ClearObject, you can experience real-time anomaly detection on the factory floor solution, built with GDC connected and ClearVision for manufacturing.0:36VIDEO - ClearVision and Google Cloud provide deep insights into production operationsTransform telecommunicationsDigitally transform from IT to networks with the latest AIModernizing and monetizing workflows for cloud solution providers (CSPs) can be expensive. In the 5G era, CSPs focus on activating data and analytics with AI and ML. A common operating model is needed to manage networks running across premises-based and cloud environments—from core to edge. Google’s flexibility optimizes deployment and operating costs—both essential to monetizing 5G investments.Learn more about the best practices for monetizing cloud-based technology and 5G networks18:43Watch the Google Cloud Innovators Live session on telecommunicationsHow-tosDigitally transform from IT to networks with the latest AIModernizing and monetizing workflows for cloud solution providers (CSPs) can be expensive. In the 5G era, CSPs focus on activating data and analytics with AI and ML. A common operating model is needed to manage networks running across premises-based and cloud environments—from core to edge. 
Google’s flexibility optimizes deployment and operating costs—both essential to monetizing 5G investments.Learn more about the best practices for monetizing cloud-based technology and 5G networks18:43Watch the Google Cloud Innovators Live session on telecommunicationsPricingHow our pricing worksGoogle Distributed Cloud pricing is based on Google Cloud services, storage, networking, and hardware configurations.Services and configurationsDescriptionPrice ServerChoose your hardware configuration, starting with a 3-node configurationStarting at$165 per node per month 5-year commitment is $495/month for 3-node configurationSoftware license for managed Kubernetes, starting with a 3-node configurationStarting at$7 per vCPU per month5-year commitment is $672/month for 3-node configurationServer installation$2,300One-time feeRacksChoose your rack configuration, starting with 6 nodesContact sales for pricing3-year commitmentRequest a quote from sales for costs based on your requirements.How our pricing worksGoogle Distributed Cloud pricing is based on Google Cloud services, storage, networking, and hardware configurations. ServerDescriptionChoose your hardware configuration, starting with a 3-node configurationPriceStarting at$165 per node per month 5-year commitment is $495/month for 3-node configurationSoftware license for managed Kubernetes, starting with a 3-node configurationDescriptionStarting at$7 per vCPU per month5-year commitment is $672/month for 3-node configurationServer installationDescription$2,300One-time feeRacksDescriptionChoose your rack configuration, starting with 6 nodesPriceContact sales for pricing3-year commitmentRequest a quote from sales for costs based on your requirements.Explore additional purchase requirementsPremier Support Plan and Necessary StorageReview Google Distributed Cloud's requirements including size, power supply, support, and moreWhere can I get GDC?Now available in 25 countriesLearn more about GEOs and hardware orderingNeed help getting started?Contact GDC sales to beginContact usResources to learn more about our productView documentationUsing AI and edge infrastructure to analyze dataLearn how Orange analyzes a petabyte of data on-premises across 26 countries with Google Distributed CloudBuild cloud-native apps with AI and KubernetesLearn more about Google Distributed Cloud inventory detection appsGoogle in the public sector and WWTLearn how Google Distributed Cloud enhances cloud sovereignty for the public sectorPartners & IntegrationPartner with GDCAll partnersGoogle Cloud ReadyManaged Google Distributed Cloud providersHardware partnersService partnersVisit our partner directory to learn about these Google Distributed Cloud partners.FAQExpand allWhat is a Google Cloud Ready partner and how does my company become one?The Google Cloud Ready - Distributed Cloud validation recognizes partner solutions that have met a core set of functional and interoperability requirements. Through the process, partners closely collaborate with Google to provide new integrations to best support customer use cases. Learn more in the Google Cloud Ready - Distributed Cloud Partner Validation Guide.What is a Managed Google Distributed Cloud provider and how does my company become one?The Managed GDC Providers (MGP) is a strategic partnership initiative by Google Cloud Partner Advantage program designed to accelerate Google Distributed Cloud adoption by collaborating with specialized partners who are skilled in deploying, operating, and managing services. 
These MGPs form a comprehensive ecosystem, providing end-to-end Google Distributed Cloud solutions, including top-tier support, robust data security, and more. By offering Google Distributed Cloud as a managed service, MGPs empower businesses to scale efficiently while maintaining high service quality.Where can I find more videos about Google Distributed Cloud?Google Distributed Cloud YouTube playlist has a variety of content about driving data and AI transformation, accelerating cloud-native network adoption, and new monetization models. Learn more about our managed edge hardware and software product configurations for enterprises and public sector to innovate with AI, keep data secure, and modernize with a Kubernetes-based consistent and open developer experience from edge to cloud. \ No newline at end of file diff --git a/Distributed_architecture_patterns.txt b/Distributed_architecture_patterns.txt new file mode 100644 index 0000000000000000000000000000000000000000..03689b8b275167d33b888a2e5ff457177118a5fe --- /dev/null +++ b/Distributed_architecture_patterns.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/distributed-patterns +Date Scraped: 2025-02-23T11:49:57.589Z + +Content: +Home Docs Cloud Architecture Center Send feedback Distributed architecture patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-29 UTC When migrating from a non-hybrid or non-multicloud computing environment to a hybrid or multicloud architecture, first consider the constraints of your existing applications and how those constraints could lead to application failure. This consideration becomes more important when your applications or application components operate in a distributed manner across different environments. After you have considered your constraints, develop a plan to avoid or overcome them. Make sure to consider the unique capabilities of each computing environment in a distributed architecture. Note: You can apply different architecture patterns to different applications, based on their use cases and requirements. This means that you might have multiple applications with different hybrid and multicloud architecture patterns operating at the same time. Design considerations The following design considerations apply to distributed deployment patterns. Depending on the target solution and business objectives, the priority and the effect of each consideration can vary. Latency In any architecture pattern that distributes application components (frontends, backends, or microservices) across different computing environments, communication latency can occur. This latency is influenced by the hybrid network connectivity (Cloud VPN and Cloud Interconnect) and the geographical distance between the on-premises site and the cloud regions, or between cloud regions in a multicloud setup. Therefore, it's crucial to assess the latency requirements of your applications and their sensitivity to network delays. Applications that can tolerate latency are more suitable candidates for initial distributed deployment in a hybrid or multicloud environment. Temporary versus final state architecture To specify the expectations and any potential implications for cost, scale, and performance, it's important to analyze what type of architecture you need and the intended duration as part of the planning stage.
For example, if you plan to use a hybrid or multicloud architecture for a long time or permanently, you might want to consider using Cloud Interconnect. To reduce outbound data transfer costs and to optimize hybrid connectivity network performance, Cloud Interconnect discounts the outbound data transfer charges that meet the discounted data transfer rate conditions. Reliability Reliability is a major consideration when architecting IT systems. Uptime availability is an essential aspect of system reliability. In Google Cloud, you can increase the resiliency of an application by deploying redundant components of that application across multiple zones in a single region1, or across multiple regions, with switchover capabilities. Redundancy is one of the key elements to improve the overall availability of an application. For applications with a distributed setup across hybrid and multicloud environments, it's important to maintain a consistent level of availability. To enhance the availability of a system in an on-premises environment, or in other cloud environments, consider what hardware or software redundancy—with failover mechanisms—you need for your applications and their components. Ideally, you should consider the availability of a service or an application across the various components and supporting infrastructure (including hybrid connectivity availability) across all the environments. This concept is also referred to as the composite availability of an application or service. Based on the dependencies between the components or services, the composite availability for an application might be higher or lower than for an individual service or component. For more information, see Composite availability: calculating the overall availability of cloud infrastructure. To achieve the level of system reliability that you want, define clear reliability metrics and design applications to self-heal and endure disruptions effectively across the different environments. To help you define appropriate ways to measure the customer experience of your services, see Define reliability based on user-experience goals. Hybrid and multicloud connectivity The requirements of the communication between the distributed applications components should influence your selection of a hybrid network connectivity option. Each connectivity option has its advantages and disadvantages, as well as specific drivers to consider, such as cost, traffic volume, security, and so forth. For more information, see the connectivity design considerations section. Manageability Consistent and unified management and monitoring tools are essential for successful hybrid and multicloud setups (with or without workload portability). In the short term, these tools can add development, testing, and operations costs. Technically, the more cloud providers you use, the more complex managing your environments becomes. Most public cloud vendors not only have different features, but also have varying tools, SLAs, and APIs for managing cloud services. Therefore, weigh the strategic advantages of your selected architecture against the potential short-term complexity versus the long-term benefits. Cost Each cloud service provider in a multicloud environment has its own billing metrics and tools. To provide better visibility and unified dashboards, consider using multicloud cost management and optimization tooling. 
For example, when building cloud-first solutions across multiple cloud environments each provider's products, pricing, discounts, and management tools can create cost inconsistencies between those environments. We recommend having a single, well-defined method for calculating the full costs of cloud resources, and to provide cost visibility. Cost visibility is essential for cost optimization. For example, by combining billing data from the cloud providers you use and using Google Cloud Looker Cloud Cost Management Block, you can create a centralized view of your multicloud costs. This view can help provide a consolidated reporting view of your spend across multiple clouds. For more information, see The strategy for effectively optimizing cloud billing cost management. We also recommend using FinOps practice to make costs visible. As a part of a strong FinOps practice, a central team can delegate the decision making for resource optimization to any other teams involved in a project to encourage individual accountability. In this model, the central team should standardize the process, the reporting, and the tooling for cost optimization. For more information about the different cost optimization aspects and recommendations that you should consider, see Google Cloud Architecture Framework: Cost optimization. Data movement Data movement is an important consideration for hybrid and multicloud strategy and architecture planning, especially for distributed systems. Enterprises need to identify their different business use cases, the data that powers them, and how the data is classified (for regulated industries). They should also consider how data storage, sharing, and access for distributed systems across environments might affect application performance and data consistency. Those factors might influence the application and the data pipeline architecture. Google Cloud's comprehensive set of data movement options makes it possible for businesses to meet their specific needs and adopt hybrid and multicloud architectures without compromising simplicity, efficiency, or performance. Security When migrating applications to the cloud, it's important to consider cloud-first security capabilities like consistency, observability, and unified security visibility. Each public cloud provider has its own approach, best practices, and capabilities for security. It's important to analyze and align these capabilities to build a standard, functional security architecture. Strong IAM controls, data encryption, vulnerability scanning, and compliance with industry regulations are also important aspects of cloud security. When planning a migration strategy, we recommend that you analyze the previously mentioned considerations. They can help you minimize the chances of introducing complexities to the architecture as your applications or traffic volumes grow. Also, designing and building a landing zone is almost always a prerequisite to deploying enterprise workloads in a cloud environment. A landing zone helps your enterprise deploy, use, and scale cloud services more securely across multiple areas and includes different elements, such as identities, resource management, security, and networking. For more information, see Landing zone design in Google Cloud. 
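To make the composite availability concept from the Reliability consideration more concrete, the following minimal Python sketch shows the arithmetic under the simplifying assumption of independent failures. The availability figures are hypothetical examples, not Google Cloud SLAs.

```python
# Illustrative arithmetic only (not a Google Cloud API): composite availability
# under the common independence assumption. Serial dependencies multiply their
# availabilities; redundant replicas fail only if every replica fails.
from math import prod


def serial(*availabilities: float) -> float:
    """Composite availability of components that all must be up."""
    return prod(availabilities)


def redundant(availability: float, replicas: int) -> float:
    """Composite availability of N independent, redundant replicas."""
    return 1.0 - (1.0 - availability) ** replicas


# Hypothetical example: a frontend that depends on hybrid connectivity and an
# on-premises backend, each 99.9% available...
single_path = serial(0.999, 0.999, 0.999)  # ~0.997, roughly 26 hours/year of downtime
# ...compared with the same chain where the backend tier is deployed
# redundantly across two regions.
with_redundancy = serial(0.999, 0.999, redundant(0.999, 2))  # ~0.998

print(f"single path:     {single_path:.5f}")
print(f"with redundancy: {with_redundancy:.5f}")
```

A simple model like this can help you decide where adding redundancy across environments actually raises the composite availability enough to justify the extra cost.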
The following documents in this series describe other distributed architecture patterns: Tiered hybrid pattern Partitioned multicloud pattern Analytics hybrid and multicloud pattern Edge hybrid pattern For more information about region-specific considerations, see Geography and regions. ↩ Previous arrow_back Overview Next Tiered hybrid pattern arrow_forward Send feedback \ No newline at end of file diff --git a/Distributed_load_testing_using_Google_Kubernetes_Engine.txt b/Distributed_load_testing_using_Google_Kubernetes_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..4fe559a84846634125b142c937f13106bc3e8033 --- /dev/null +++ b/Distributed_load_testing_using_Google_Kubernetes_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/distributed-load-testing-using-gke +Date Scraped: 2025-02-23T11:48:21.515Z + +Content: +Home Docs Cloud Architecture Center Send feedback Distributed load testing using Google Kubernetes Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-13 UTC This document explains how to use Google Kubernetes Engine (GKE) to deploy a distributed load testing framework that uses multiple containers to create traffic for a simple REST-based API. This document load-tests a web application deployed to App Engine that exposes REST-style endpoints to respond to incoming HTTP POST requests. You can use this same pattern to create load testing frameworks for a variety of scenarios and applications, such as messaging systems, data stream management systems, and database systems. Objectives Define environment variables to control deployment configuration. Create a GKE cluster. Perform load testing. Optionally scale up the number of users or extend the pattern to other use cases. Costs In this document, you use the following billable components of Google Cloud: App Engine Artifact Registry Cloud Build Cloud Storage Google Kubernetes Engine To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the App Engine, Artifact Registry, Cloud Build, Compute Engine, Resource Manager, Google Kubernetes Engine, and Identity and Access Management APIs. Enable the APIs In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the App Engine, Artifact Registry, Cloud Build, Compute Engine, Resource Manager, Google Kubernetes Engine, and Identity and Access Management APIs. 
Enable the APIs When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Grant roles to your user account. Run the following command once for each of the following IAM roles: roles/serviceusage.serviceUsageAdmin, roles/container.admin, roles/appengine.appAdmin, roles/appengine.appCreator, roles/artifactregistry.admin, roles/resourcemanager.projectIamAdmin, roles/compute.instanceAdmin.v1, roles/iam.serviceAccountUser, roles/cloudbuild.builds.builder, roles/iam.serviceAccountAdmin gcloud projects add-iam-policy-binding PROJECT_ID --member="user:USER_IDENTIFIER" --role=ROLE Replace PROJECT_ID with your project ID. Replace USER_IDENTIFIER with the identifier for your user account. For example, user:myemail@example.com. Replace ROLE with each individual role. Example workload The following diagram shows an example workload where requests go from client to application. To model this interaction, you can use Locust, a distributed, Python-based load testing tool that can distribute requests across multiple target paths. For example, Locust can distribute requests to the /login and /metrics target paths. The workload is modeled as a set of tasks in Locust. Architecture This architecture involves two main components: The Locust Docker container image. The container orchestration and management mechanism. The Locust Docker container image contains the Locust software. The Dockerfile, which you get when you clone the GitHub repository that accompanies this document, uses a base Python image and includes scripts to start the Locust service and execute the tasks. To approximate real-world clients, each Locust task is weighted. For example, registration happens once per thousand total client requests. GKE provides container orchestration and management. With GKE, you can specify the number of container nodes that provide the foundation for your load testing framework. You can also organize your load testing workers into Pods, and specify how many Pods you want GKE to keep running. To deploy the load testing tasks, you do the following: Deploy a load testing primary, which is referred to as a master by Locust. Deploy a group of load testing workers. With these load testing workers, you can create a substantial amount of traffic for testing purposes. The following diagram shows the architecture that demonstrates load testing using a sample application. The master Pod serves the web interface used to operate and monitor load testing. The worker Pods generate the REST request traffic for the application undergoing test, and send metrics to the master. Note: Generating excessive amounts of traffic to external systems can resemble a denial-of-service attack. Be sure to review the Google Cloud Terms of Service and the Google Cloud Acceptable Use Policy. About the load testing master The Locust master is the entry point for executing the load testing tasks. The Locust master configuration specifies several elements, including the default ports used by the container: 8089 for the web interface 5557 and 5558 for communicating with workers This information is later used to configure the Locust workers. You deploy a Service to ensure that the necessary ports are accessible to other Pods within the cluster through hostname:port. These ports are also referenceable through a descriptive port name. 
This Service allows the Locust workers to easily discover and reliably communicate with the master, even if the master fails and is replaced with a new Pod by the Deployment. A second Service is deployed with the necessary annotation to create an internal passthrough Network Load Balancer that makes the Locust web application Service accessible to clients outside of your cluster that use the same VPC network and are located in the same Google Cloud region as your cluster. After you deploy the Locust master, you can open the web interface using the internal IP address provisioned by the internal passthrough Network Load Balancer. After you deploy the Locust workers, you can start the simulation and look at aggregate statistics through the Locust web interface. About the load testing workers The Locust workers execute the load testing tasks. You use a single Deployment to create multiple Pods. The Pods are spread out across the Kubernetes cluster. Each Pod uses environment variables to control configuration information, such as the hostname of the system under test and the hostname of the Locust master. The following diagram shows the relationship between the Locust master and the Locust workers. Initialize common variables You must define several variables that control where elements of the infrastructure are deployed. Open Cloud Shell: Open Cloud Shell You run all the terminal commands in this document from Cloud Shell. Set the environment variables that require customization: export GKE_CLUSTER=GKE_CLUSTER export AR_REPO=AR_REPO export REGION=REGION export ZONE=ZONE export SAMPLE_APP_LOCATION=SAMPLE_APP_LOCATION Replace the following: GKE_CLUSTER: the name of your GKE cluster. AR_REPO: the name of your Artifact Registry repository REGION: the region where your GKE cluster and Artifact Registry repository will be created ZONE: the zone in your region where your Compute Engine instance will be created SAMPLE_APP_LOCATION: the (regional) location where your sample App Engine application will be deployed The commands should look similar to the following example: export GKE_CLUSTER=gke-lt-cluster export AR_REPO=dist-lt-repo export REGION=us-central1 export ZONE=us-central1-b export SAMPLE_APP_LOCATION=us-central Set the following additional environment variables: export GKE_NODE_TYPE=e2-standard-4 export GKE_SCOPE="https://www.googleapis.com/auth/cloud-platform" export PROJECT=$(gcloud config get-value project) export SAMPLE_APP_TARGET=${PROJECT}.appspot.com Set the default zone so you don't have to specify these values in subsequent commands: gcloud config set compute/zone ${ZONE} Create a GKE cluster Create a service account with the minimum permissions required by the cluster: gcloud iam service-accounts create dist-lt-svc-acc gcloud projects add-iam-policy-binding ${PROJECT} --member=serviceAccount:dist-lt-svc-acc@${PROJECT}.iam.gserviceaccount.com --role=roles/artifactregistry.reader gcloud projects add-iam-policy-binding ${PROJECT} --member=serviceAccount:dist-lt-svc-acc@${PROJECT}.iam.gserviceaccount.com --role=roles/container.nodeServiceAccount Create the GKE cluster: gcloud container clusters create ${GKE_CLUSTER} \ --service-account=dist-lt-svc-acc@${PROJECT}.iam.gserviceaccount.com \ --region ${REGION} \ --machine-type ${GKE_NODE_TYPE} \ --enable-autoscaling \ --num-nodes 3 \ --min-nodes 3 \ --max-nodes 10 \ --scopes "${GKE_SCOPE}" Connect to the GKE cluster: gcloud container clusters get-credentials ${GKE_CLUSTER} \ --region ${REGION} \ --project ${PROJECT} Set up the environment 
Clone the sample repository from GitHub: git clone https://github.com/GoogleCloudPlatform/distributed-load-testing-using-kubernetes Change your working directory to the cloned repository: cd distributed-load-testing-using-kubernetes Build the container image Create an Artifact Registry repository: gcloud artifacts repositories create ${AR_REPO} \ --repository-format=docker \ --location=${REGION} \ --description="Distributed load testing with GKE and Locust" Build the container image and store it in your Artifact Registry repository: export LOCUST_IMAGE_NAME=locust-tasks export LOCUST_IMAGE_TAG=latest gcloud builds submit \ --tag ${REGION}-docker.pkg.dev/${PROJECT}/${AR_REPO}/${LOCUST_IMAGE_NAME}:${LOCUST_IMAGE_TAG} \ docker-image The accompanying Locust Docker image embeds a test task that calls the /login and /metrics endpoints in the sample application. In this example test task set, the respective ratio of requests submitted to these two endpoints will be 1 to 999. class MetricsTaskSet(TaskSet): _deviceid = None def on_start(self): self._deviceid = str(uuid.uuid4()) @task(1) def login(self): self.client.post( '/login', {"deviceid": self._deviceid}) @task(999) def post_metrics(self): self.client.post( "/metrics", {"deviceid": self._deviceid, "timestamp": datetime.now()}) class MetricsLocust(FastHttpUser): tasks = {MetricsTaskSet} Verify that the Docker image is in your Artifact Registry repository: gcloud artifacts docker images list ${REGION}-docker.pkg.dev/${PROJECT}/${AR_REPO} | \ grep ${LOCUST_IMAGE_NAME} The output is similar to the following: Listing items under project PROJECT, location REGION, repository AR_REPO REGION-docker.pkg.dev/PROJECT/AR_REPO/locust-tasks sha256:796d4be067eae7c82d41824791289045789182958913e57c0ef40e8d5ddcf283 2022-04-13T01:55:02 2022-04-13T01:55:02 Deploy the sample application Create and deploy the sample-webapp as App Engine: gcloud app create --region=${SAMPLE_APP_LOCATION} gcloud app deploy sample-webapp/app.yaml \ --project=${PROJECT} When prompted, type y to proceed with deployment. The output is similar to the following: File upload done. Updating service [default]...done. Setting traffic split for service [default]...done. 
Deployed service [default] to [https://PROJECT.appspot.com] The sample App Engine application implements /login and /metrics endpoints: @app.route('/login', methods=['GET', 'POST']) def login(): deviceid = request.values.get('deviceid') return '/login - device: {}\n'.format(deviceid) @app.route('/metrics', methods=['GET', 'POST']) def metrics(): deviceid = request.values.get('deviceid') timestamp = request.values.get('timestamp') return '/metrics - device: {}, timestamp: {}\n'.format(deviceid, timestamp) Deploy the Locust master and worker Pods Substitute the environment variable values for target host, project, and image parameters in the locust-master-controller.yaml and locust-worker-controller.yaml files, and create the Locust master and worker Deployments: envsubst < kubernetes-config/locust-master-controller.yaml.tpl | kubectl apply -f - envsubst < kubernetes-config/locust-worker-controller.yaml.tpl | kubectl apply -f - envsubst < kubernetes-config/locust-master-service.yaml.tpl | kubectl apply -f - Verify the Locust Deployments: kubectl get pods -o wide The output looks something like the following: NAME READY STATUS RESTARTS AGE IP NODE locust-master-87f8ffd56-pxmsk 1/1 Running 0 1m 10.32.2.6 gke-gke-load-test-default-pool-96a3f394 locust-worker-58879b475c-279q9 1/1 Running 0 1m 10.32.1.5 gke-gke-load-test-default-pool-96a3f394 locust-worker-58879b475c-9frbw 1/1 Running 0 1m 10.32.2.8 gke-gke-load-test-default-pool-96a3f394 locust-worker-58879b475c-dppmz 1/1 Running 0 1m 10.32.2.7 gke-gke-load-test-default-pool-96a3f394 locust-worker-58879b475c-g8tzf 1/1 Running 0 1m 10.32.0.11 gke-gke-load-test-default-pool-96a3f394 locust-worker-58879b475c-qcscq 1/1 Running 0 1m 10.32.1.4 gke-gke-load-test-default-pool-96a3f394 Verify the Services: kubectl get services The output looks something like the following: NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.87.240.1 443/TCP 12m locust-master ClusterIP 10.87.245.22 5557/TCP,5558/TCP 1m locust-master-web LoadBalancer 10.87.246.225 8089:31454/TCP 1m Run a watch loop while the internal passthrough Network Load Balancer's internal IP address (GKE external IP address) is provisioned for the Locust master web application Service: kubectl get svc locust-master-web --watch Press Ctrl+C to exit the watch loop once an EXTERNAL-IP address is provisioned. Connect to Locust web frontend You use the Locust master web interface to execute the load testing tasks against the system under test. Make a note of the internal load balancer IP address of the web host service: export INTERNAL_LB_IP=$(kubectl get svc locust-master-web \ -o jsonpath="{.status.loadBalancer.ingress[0].ip}") && \ echo $INTERNAL_LB_IP Depending on your network configuration, there are two ways that you can connect to the Locust web application through the provisioned IP address: Network routing. If your network is configured to allow routing from your workstation to your project VPC network, you can directly access the internal passthrough Network Load Balancer IP address from your workstation. Proxy & SSH tunnel. If there is not a network route between your workstation and your VPC network, you can route traffic to the internal passthrough Network Load Balancer's IP address by creating a Compute Engine instance with an nginx proxy and an SSH tunnel between your workstation and the instance. 
Network routing If there is a route for network traffic between your workstation and your Google Cloud project VPC network, open your browser and then open the Locust master web interface. To open the Locust interface, go to the following URL: http://INTERNAL_LB_IP:8089 Replace INTERNAL_LB_IP with the IP address that you noted in the previous step. Proxy & SSH tunnel Set an environment variable with the name of the instance. export PROXY_VM=locust-nginx-proxy Start an instance with an nginx Docker container configured to proxy the Locust web application port 8089 on the internal passthrough Network Load Balancer: gcloud compute instances create-with-container ${PROXY_VM} \ --zone ${ZONE} \ --container-image gcr.io/cloud-marketplace/google/nginx1:latest \ --container-mount-host-path=host-path=/tmp/server.conf,mount-path=/etc/nginx/conf.d/default.conf \ --metadata=startup-script="#! /bin/bash cat <<EOF > /tmp/server.conf server { listen 8089; location / { proxy_pass http://${INTERNAL_LB_IP}:8089; } } EOF" Open an SSH tunnel from Cloud Shell to the proxy instance: gcloud compute ssh --zone ${ZONE} ${PROXY_VM} \ -- -N -L 8089:localhost:8089 Click the Web Preview icon, and select Change Port from the options listed. On the Change Preview Port dialog, enter 8089 in the Port Number field, and select Change and Preview. In a moment, a browser tab will open with the Locust web interface. Run a basic load test on your sample application After you open the Locust frontend in your browser, you see a dialog that can be used to start a new load test. Specify the total Number of users (peak concurrency) as 10 and the Spawn rate (users started/second) as 5 users per second. Click Start swarming to begin the simulation. After requests start swarming, statistics begin to aggregate for simulation metrics, such as the number of requests and requests per second, as shown in the following image: View the deployed service and other metrics from the Google Cloud console. Note: After the swarm test, it might take the App Engine dashboard several minutes to show the metrics. When you have observed the behavior of the application under test, click Stop to terminate the test. Scale up the number of users (optional) If you want to test increased load on the application, you can add simulated users. Before you can add simulated users, you must ensure that there are enough resources to support the increase in load. With Google Cloud, you can add Locust worker Pods to the Deployment without redeploying the existing Pods, as long as you have the underlying VM resources to support an increased number of Pods. The initial GKE cluster starts with 3 nodes and can auto-scale up to 10 nodes. Scale the pool of Locust worker Pods to 20. kubectl scale deployment/locust-worker --replicas=20 It takes a few minutes to deploy and start the new Pods. If you see a Pod Unschedulable error, you must add more nodes to the cluster. For details, see resizing a GKE cluster. After the Pods start, return to the Locust master web interface and restart load testing. Extend the pattern To extend this pattern, you can create new Locust tasks or even switch to a different load testing framework. You can customize the metrics you collect. For example, you might want to measure the requests per second, or monitor the response latency as load increases, or check the response failure rates and types of errors. For information, see the Cloud Monitoring documentation.
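For example, a new task might exercise an additional endpoint of your own application. The following Python sketch is not part of the accompanying repository; it assumes a hypothetical /search endpoint on the system under test, so adjust the path, payload, and task weights to match your workload.

# locustfile sketch: adds a weighted task for a hypothetical /search endpoint.
# The sample application in this document only implements /login and /metrics.
import uuid

from locust import FastHttpUser, between, task


class SearchUser(FastHttpUser):
    # Each simulated user waits 1-3 seconds between tasks.
    wait_time = between(1, 3)

    def on_start(self):
        # Give each simulated user a stable device ID, as in the sample tasks.
        self.deviceid = str(uuid.uuid4())

    @task(10)
    def search(self):
        # Group all search requests under one name in the Locust statistics.
        self.client.get("/search?q=example", name="/search")

    @task(1)
    def login(self):
        self.client.post("/login", {"deviceid": self.deviceid})

Because the tasks are baked into the Locust container image in this setup, rebuild the image and redeploy the master and worker Deployments to pick up a change like this.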
Clean up To avoid incurring charges to your Google Cloud account for the resources used in this document, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Delete the GKE cluster If you don't want to delete the whole project, run the following command to delete the GKE cluster: gcloud container clusters delete ${GKE_CLUSTER} --region ${REGION} What's next Building Scalable and Resilient Web Applications. Review GKE documentation in more detail. Try tutorials on GKE. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Distributed_tracing_in_a_microservices_application.txt b/Distributed_tracing_in_a_microservices_application.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c885820138bce3dcfce30e686fce6f520ffa364 --- /dev/null +++ b/Distributed_tracing_in_a_microservices_application.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/microservices-architecture-distributed-tracing +Date Scraped: 2025-02-23T11:46:56.247Z + +Content: +Home Docs Cloud Architecture Center Send feedback Distributed tracing in a microservices application Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document is the fourth in a four-part series about designing, building, and deploying microservices. This series describes the various elements of a microservices architecture. The series includes information about the benefits and drawbacks of the microservices architecture pattern, and how to apply it. Introduction to microservices Refactor a monolith into microservices Interservice communication in a microservices setup Distributed tracing in a microservices application (this document) This series is intended for application developers and architects who design and implement the migration to refactor a monolith application to a microservices application. In a distributed system, it's important to know how a request flows from one service to another and how long it takes to perform a task in each service. Consider the microservices-based Online Boutique application that you deployed in the previous document, Refactor a monolith into microservices. The application is composed of multiple services. For example, the following screenshot shows the product details page, which fetches information from the frontend, recommendation, and ads services. 
To render the product details page, the frontend service communicates with the recommendation service and the ads service, as shown in the following diagram: Figure 1. Services written in different languages. In figure 1, the frontend service is written in Go. The recommendation service, which is written in Python, uses gRPC to communicate with the frontend service. The ads service, which is written in Java, also uses gRPC to communicate with the frontend service. Besides gRPC, the interservice communication can also use REST over HTTP. When you build such a distributed system, you want your observability tools to provide the following insights: The services that a request went through. Where delays occurred if a request was slow. Where an error occurred if the request failed. How the execution of the request was different from the normal behavior of the system. Whether differences in the execution of the request were related to performance (whether some service calls took a longer or shorter time than usual). Objectives Use kustomize manifest files to set up the infrastructure. Deploy the Online Boutique example application to Google Kubernetes Engine (GKE). Use Cloud Trace to review a user's journey in the example application. Costs In this document, you use the following billable components of Google Cloud: GKE Cloud SQL Container Registry Cloud Trace To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish this document, you can avoid continued billing by deleting the resources you created. For more information, see Cleaning up. Before you begin If you already set up a project by completing the previous document in this series, Interservice communication in a microservices setup, you can reuse the project. Complete the following steps to enable additional APIs and set environment variables. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Go to project selector Make sure that billing is enabled for your Google Cloud project. In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell Enable the APIs for Compute Engine, GKE, Cloud SQL, Artifact Analysis, Trace, and Container Registry: gcloud services enable \ compute.googleapis.com \ sql-component.googleapis.com \ servicenetworking.googleapis.com \ container.googleapis.com \ containeranalysis.googleapis.com \ containerregistry.googleapis.com \ sqladmin.googleapis.com Distributed tracing Distributed tracing attaches contextual metadata to each request and ensures that metadata is shared between requests. You use trace points to instrument distributed tracing. For example, you can instrument your services (frontend, recommendation, and ads) with two trace points to handle a client request to view a product's details: one trace point to send the request and another trace point to receive the response. The following diagram shows how this trace point instrumentation works: Figure 2. Each interservice call has two trace points that consist of a request-response pair. For trace points to understand which request to execute when the service is invoked, the originating service passes a trace ID along the execution flow. The process that passes the trace ID is called metadata propagation or distributed context propagation.
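As an illustration of this idea, the following Python sketch (which is not taken from the Online Boutique code) uses the OpenTelemetry API to inject the current trace context into outgoing request headers and to extract it again on the receiving side. The span names and helper functions are illustrative, and a TracerProvider with an exporter still needs to be configured for spans to be exported, as shown later in this document.

# Conceptual sketch of trace context (metadata) propagation with OpenTelemetry.
from opentelemetry import propagate, trace

tracer = trace.get_tracer(__name__)


def call_downstream(send_request):
    # Caller side: start a span and inject its context into the request headers.
    with tracer.start_as_current_span("frontend/get-product-details"):
        headers = {}
        # With the default propagator, this writes the W3C traceparent header.
        propagate.inject(headers)
        return send_request(headers=headers)


def handle_request(headers):
    # Callee side: extract the parent context and start a child span that
    # joins the caller's trace.
    parent_ctx = propagate.extract(headers)
    with tracer.start_as_current_span("recommendation/list", context=parent_ctx):
        ...  # handle the request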
Context propagation transfers metadata over network calls when services of a distributed application communicate with each other during the execution of a given request. The following diagram shows the metadata propagation: Figure 3. Trace metadata is passed between services. The metadata includes information such as which service calls which, and their timestamps. In the Online Boutique example, a trace begins when a user sends an initial request to fetch product details. A new trace ID is generated, and each successive request is decorated with headers that contain contextual metadata linking back to the original request. Each individual operation that is invoked as part of fulfilling the end user's request is called a span. The originating service tags each span with its own unique ID and the trace ID of the parent span. The following diagram shows a Gantt chart visualization of a trace: Figure 4. A parent span includes the response time of child spans. Figure 4 shows a trace tree in which the frontend service calls the recommendation service and the ads service. The frontend service is the parent span, which describes the response time as observed by the end user. The child spans describe how the recommendation service and the ads service were called and responded, including response time information. A service mesh like Istio enables distributed tracing of service-to-service traffic without the need for any dedicated instrumentation. However, there might be situations in which you want to have more control over the traces, or you might need to trace code that isn't running within a service mesh. This document uses OpenTelemetry to enable instrumentation of distributed microservice applications to collect traces and metrics. OpenTelemetry lets you collect metrics and traces and then export them to backends, such as Prometheus, Cloud Monitoring, Datadog, Graphite, Zipkin, and Jaeger. Instrumentation using OpenTelemetry The following sections show how to use context propagation to allow spans from multiple requests to be appended to a single parent trace. This example uses the OpenTelemetry JavaScript, Python, and Go libraries to instrument tracing for the payment, recommendation, and frontend services. Depending on the verbosity of the instrumentation, tracing data can affect the project's cost (Cloud Trace billing). To mitigate cost concerns, most tracing systems employ various forms of sampling to capture only a certain percentage of the observed traces. In your production environments, your organization might have its own reasons for what it wants to sample and why. You might want to customize your sampling strategy based on managing costs, focusing on interesting traces, or filtering out noise. To learn more about sampling, see OpenTelemetry Sampling. This document uses Trace to visualize distributed traces. You use an OpenTelemetry exporter to send traces to Trace. Register trace exporters This section shows how to register the trace exporter in each service by adding lines to the microservice code. For the frontend service (written in Go), the following code sample registers the exporter: [...]
exporter, err := otlptracegrpc.New( ctx, otlptracegrpc.WithGRPCConn(svc.collectorConn)) if err != nil { log.Warnf("warn: Failed to create trace exporter: %v", err) } tp := sdktrace.NewTracerProvider( sdktrace.WithBatcher(exporter), sdktrace.WithSampler(sdktrace.AlwaysSample())) otel.SetTracerProvider(tp) For the recommendation service (written in Python), the following code sample registers the exporter: if os.environ["ENABLE_TRACING"] == "1": trace.set_tracer_provider(TracerProvider()) otel_endpoint = os.getenv("COLLECTOR_SERVICE_ADDR", "localhost:4317") trace.get_tracer_provider().add_span_processor( BatchSpanProcessor( OTLPSpanExporter( endpoint = otel_endpoint, insecure = True ) ) ) For the payment service (written in Javascript), the following code sample registers the exporter: provider.addSpanProcessor(new SimpleSpanProcessor(new OTLPTraceExporter({url: collectorUrl}))); provider.register(); Set up context propagation The tracing system needs to follow a trace context specification that defines the format to propagate tracing context between services. Propagation format examples include Zipkin's B3 format and X-Google-Cloud-Trace. OpenTelemetry propagates context using the global TextMapPropagator. This example uses the Trace Context propagator, which uses the W3C traceparent format. Instrumentation libraries, such as OpenTelemetry's HTTP and gRPC libraries, use the global propagator to add trace context as metadata to HTTP or gRPC requests. For context propagation to succeed, the client and server must use the same propagation format. Context propagation over HTTP The frontend service injects a trace context into the HTTP request headers. The backend services extracts the trace context. The following code sample shows how the frontend service is instrumented to configure the trace context: otel.SetTextMapPropagator( propagation.NewCompositeTextMapPropagator( propagation.TraceContext{}, propagation.Baggage{})) if os.Getenv("ENABLE_TRACING") == "1" { log.Info("Tracing enabled.") initTracing(log, ctx, svc) } else { log.Info("Tracing disabled.") } ... var handler http.Handler = r handler = &logHandler{log: log, next: handler} // add logging handler = ensureSessionID(handler) // add session ID handler = otelhttp.NewHandler(handler, "frontend") // add OpenTelemetry tracing Context propagation over gRPC Consider the flow in which the checkout service places the order based on the product that a user selects. These services communicate over gRPC. The following code sample uses a gRPC call interceptor that intercepts the outgoing calls and injects the trace context: var srv *grpc.Server // Propagate trace context always otel.SetTextMapPropagator( propagation.NewCompositeTextMapPropagator( propagation.TraceContext{}, propagation.Baggage{})) srv = grpc.NewServer( grpc.UnaryInterceptor(otelgrpc.UnaryServerInterceptor()), grpc.StreamInterceptor(otelgrpc.StreamServerInterceptor()), ) After receiving the request, the payment or product catalogue service (ListProducts) extracts the context from the request headers and uses the parent trace metadata to spawn a child span. The following sections provide details of how to set up and review distributed tracing for the example Online Boutique application. Deploy the application If you already have a running application from completing the previous document in this series, Interservice communication in a microservices setup, you can skip to the next section, Review traces. 
Otherwise, complete the following steps to deploy the example Online Boutique example: To set up infrastructure, in Cloud Shell clone the GitHub repository: git clone https://github.com/GoogleCloudPlatform/microservices-demo.git For the new deployment, reset the environment variables: PROJECT_ID=PROJECT_ID REGION=us-central1 GSA_NAME=microservices-sa GSA_EMAIL=$GSA_NAME@$PROJECT_ID.iam.gserviceaccount.com Replace PROJECT_ID with the Google Cloud project ID that you want to use. Optional: Create a new cluster or reuse an existing cluster if it exists: gcloud container clusters create-auto online-boutique --project=${PROJECT_ID} --region=${REGION} Create a Google service account: gcloud iam service-accounts create $GSA_NAME \ --project=$PROJECT_ID Enable the APIs: gcloud services enable \ monitoring.googleapis.com \ cloudtrace.googleapis.com \ cloudprofiler.googleapis.com \ --project ${PROJECT_ID} Grant the roles required for Cloud Trace to the service account: gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member "serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \ --role roles/cloudtrace.agent gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member "serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \ --role roles/monitoring.metricWriter gcloud projects add-iam-policy-binding ${PROJECT_ID} \ --member "serviceAccount:${GSA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \ --role roles/cloudprofiler.agent gcloud iam service-accounts add-iam-policy-binding ${GSA_EMAIL} \ --role roles/iam.workloadIdentityUser \ --member "serviceAccount:${PROJECT_ID}.svc.id.goog[default/default]" Annotate your Kubernetes service account (default/default for the default namespace) to use the Identity and Access Management (IAM) service account: kubectl annotate serviceaccount default \ iam.gke.io/gcp-service-account=${GSA_EMAIL} Enable Google Cloud Observability for GKE configuration, which enables tracing: cd ~/microservices-demo/kustomize && \ kustomize edit add component components/google-cloud-operations The preceding command updates the kustomize/kustomization.yaml file, which is similar to the following: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - base components: - components/google-cloud-operations [...] Deploy the microservices: kubectl apply -k . Check the status of the deployment: kubectl rollout status deployment/frontend kubectl rollout status deployment/paymentservice kubectl rollout status deployment/recommendationservice kubectl rollout status deployment/adservice The output for each command looks like the following: Waiting for deployment "" rollout to finish: 0 of 1 updated replicas are available... deployment "" successfully rolled out Get the IP address of the deployed application: kubectl get service frontend-external | awk '{print $4}' Wait for the load balancer's IP address to be published. To exit the command, press Ctrl+C. Note the load balancer IP address and then access the application at the URL http://IP_ADDRESS. It might take some time for the load balancer to become healthy and start passing traffic. Review traces using Cloud Trace A user's purchase journey in the Online Boutique application has the following flow: The user sees a product catalog on the landing page. To make a purchase, the user clicks Buy. The user is redirected to a product details page where they add the item to their cart. The user is redirected to a checkout page where they can make a payment to complete the order. 
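If you want to generate a few traced requests without clicking through the UI, you can use a short script such as the following Python sketch. It uses only the standard library; IP_ADDRESS is the load balancer address that you noted earlier, and the product path is a placeholder that you should copy from a product link on your deployment's landing page.

# Sketch: request the landing page and a product details page a few times so
# that sampled traces appear in Cloud Trace. IP_ADDRESS and PRODUCT_ID are
# placeholders for your own deployment.
import time
import urllib.request

BASE_URL = "http://IP_ADDRESS"          # load balancer IP address noted earlier
PRODUCT_PATH = "/product/PRODUCT_ID"    # copy a real product link from the landing page

for _ in range(5):
    for path in ("/", PRODUCT_PATH):
        with urllib.request.urlopen(BASE_URL + path, timeout=10) as response:
            print(path, response.status)
    time.sleep(1)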
Consider a scenario in which you need to troubleshoot high response times when loading the product details page. As described earlier, the product details page is comprised of multiple microservices. To determine where and why the high latency is occurring, you can view distributed tracing graphs to review the performance of the entire request across the different services. To review the distributed tracing graphs, do the following: Access the application and then click any product. The product details page is displayed. In the Google Cloud console, go to the Trace list page and review the timeline. To see the distributed trace results, click Frontend in the URI column. The Trace Waterfall View displays the spans associated with the URI: In the preceding screenshot, the trace for a product contains the following spans: The Frontend span captures the end-to-end latency (150.349 ms) that the client observes in loading the product details page. The Recommendation Service span captures the latency of the backend calls in fetching recommendations (4.246 ms) that are related to the product. The Ad Service span captures the latency of the backend calls in fetching ads (4.511 ms) that are relevant to the product page. To troubleshoot high response times, you can review insights that include latency distribution graphs of any outlier requests when the service's dependencies aren't meeting their service level objectives (SLOs). You can also use Cloud Trace to get performance insights and create analysis reports from the sampled data. Troubleshooting If the traces in Application Performance Management don't appear, check Logs Explorer for a permission denied error. The permission denied occurs when the service account doesn't have access to export the traces. Review the steps on granting the roles required for Cloud Trace and make sure to annotate the service account with the correct namespace. After that, restart the opentelemetrycollector: kubectl rollout restart deployment opentelemetrycollector Clean up To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. 
Delete the resources If you want to keep the Google Cloud project that you used in this document, delete the individual resources: In Cloud Shell, delete the resources: gcloud container clusters delete online-boutique --project=${PROJECT_ID} --region=${REGION} What's next Read the first document in this series to learn about microservices, their benefits, challenges, and use cases. Read the second document in this series to learn about application refactoring strategies to decompose microservices. Read the third document in this series to learn about interservice communication in a microservices setup. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Document_AI(1).txt b/Document_AI(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..c90240fa307dfbe9ad155b953cd6cf0db930396e --- /dev/null +++ b/Document_AI(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/document-ai +Date Scraped: 2025-02-23T12:02:19.107Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceJump to Vertex AI's document processing solution Document AI Create document processors that help automate tedious tasks, improve data extraction, and gain deeper insights from unstructured or structured document information. Document AI helps developers create high-accuracy processors to extract, classify, and split documents.Try it in consoleContact salesNew customers get $300 in free credit to try Document AI and other Google Cloud productsSummarize large documents with a Google-recommended, pre-built solution—powered by generative AISeamlessly connect to BigQuery, Vertex Search, and other Google Cloud products Enterprise-ready, along with Google Cloud's data security and privacy commitmentsBuilt for developers; use the UI or API to easily create document processorsBLOGCustom Extractor, powered by generative AI, is GABenefitsFaster time to valueUse generative AI to extract data or classify documents out of the box, with no training necessary to get started. Simply post a document to an enterprise-ready API endpoint to get structured data in return.Higher accuracyDocument AI is powered by the latest foundation models, tuned for document tasks. Also, with powerful fine-tuning and auto-labeling features, the platform offers multiple paths to reach the required accuracy.Better decision-makingStructure and digitize information from documents to drive deeper insights using generative AI to help businesses make better decisions.DemoTry Document AI in your environmentExtract data from your documents using generative AI. 
For full product capabilities head to Document AI in the Google Cloud Console.sparkLooking to build a solution?I want to build a solution where users can upload documents on a case and the system gives them a summary of their caseI want to build a pipeline that automatically classifies scanned receipts into various expense typesI want to build a solution that can parse through documents, extract specific information to feed into a structured formatMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionSummarize documentsprompt_suggestionClassify documentsprompt_suggestionExtract informationKey featuresUse generative AI for document processingDocument AI WorkbenchDocument AI Workbench provides an easy way to build custom processors to classify, split, and extract structured data from documents. Workbench is powered by generative AI, which means it can be used out of the box to get accurate results across a wide array of documents. Furthermore, you can achieve higher accuracy by providing as few as 10 documents to fine-tune the large model—all with a simple click of a button or an API call. Try it now or learn more.Enterprise OCR With Enterprise Document OCR, users gain access to 25 years of optical character recognition (OCR) research at Google. OCR is powered by models trained on business documents and can detect text in PDFs and images of scanned documents in 200+ languages. The product can see the structure of a document to identify layout characteristics like blocks, paragraphs, lines, words, and symbols. Advanced features include best-in-class handwriting recognition (50 languages), recognizing math formulas, detecting font-style information, and extracting selection marks like checkboxes and radio buttons.Try Document OCR now for accurate text and layout extraction.Form ParserDevelopers use Form Parser to capture fields and values from standard forms, to extract generic entities, including names, addresses, and prices, and to structure data contained in tables. 
This product works out of the box and does not require any training or customization and is useful across a broad range of document customization.Explore document processing with Form Parser.PretrainedTry out pretrained models for commonly used document types including W2, paystub, bank statement, invoice, expense, US driver license, US passport, and identity proofing.Explore pretrained options in the processor gallery.VIDEODocument AI overview4:37CustomersHigher processing accuracy and faster development time Document AI is helping customers improve fraud detection, automate customer support, and process clinical trial data.Case studyDocument AI helps CheQ lower time to value and extract information from customer email and SMS accounts5-min readBlog postResistant AI improves customer fraud detection by 32%, reduces investigation time by 52 minutes per case5-min readNewsQuantiphi helps Cerevel Therapeutics improve clinical trial oversight with 93% extraction accuracy5-min readSee all customersWhat's newSee the latest updates about Document AISign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postAsk your documents: Document AI and PaLM2 for question answeringRead the blogBlog postMobilize your unstructured data with generative AIRead the blogBlog postDocument AI Workbench powered by generative AI to structure document data fasterRead the blogBlog postWhat is Optical Character Recognition? OCR explained by GoogleRead the blogDocumentationDocumentationGoogle Cloud BasicsDocument AI overviewGet an overview of the basics of Document AI, including extracting text from documents, classifying documents, and entity extraction.Learn moreTutorialDocument AI introduction videos and labsGet started learning about Document AI with our video series "The Future of Documents" and step-by-step codelabs.Learn moreQuickstartSetting up the Document AI APIThis guide provides all required setup steps to start using Document AI.Learn moreTutorialBuild custom processors with Document AI courseLearn how to extract data and classify documents by creating custom ML models that are specific to your business needs. Start courseNot seeing what you’re looking for?View all product documentationUse casesUse casesUse caseExtract data to drive automation and analyticsUse Document AI Workbench to automate data entry by extracting structured data from your documents. Typical applications include the mail room, shipping yards, mortgage processing divisions, procurement, and more. Use this data to make more efficient and effective business decisions.Try out the Custom Extractor. Use caseUncover insights buried in documents with BigQueryYou can now extract metadata from documents directly into a BigQuery objects table. Seamlessly join the parsed data with other BigQuery tables to combine structured and unstructured data, paving the way for comprehensive document analytics. Learn more about the BigQuery and Document AI integration here. Use caseClassify documentsAssigning categories or classes to documents as they flow into a business process makes them easier to manage, search, filter, or analyze. Custom Splitter and Classifier use machine learning to accurately predict and categorize single documents or multiple documents within a file. 
Use these products to improve efficiencies of the document processes.Learn more and try out the Custom Splitter and Custom Classifier.Use caseMake document processing applications smarterSaaS customers and ISV partners can improve and expand their document processing solutions, quickly with generative AI. With a simple API prediction endpoint and document response format, customers can take document applications to the next level.Use caseDigitize text for ML model trainingEnterprise Document OCR enables users to create value from archival content that is otherwise unusable for training machine learning models. OCR helps extract text from scanned documents, plots, reports, and presentations prior to saving on a cloud storage or a data warehouse. Use these high-quality OCR outputs to accelerate your digital transformation initiatives such as training ML models specific to your business. Use caseExpand business capabilities with generative AICapture document information for new generative AI architectures and frameworks. Combining OCR and the Vertex AI PaLM API lets users unlock valuable data from documents to build document Q&A experiences, perform automated document comparison, or even generate new documents. View all technical guidesPricingDocument AI pricingDocument AI offers transparent, cost effective pricing for all your document processing, model training, and storage needs. Visit our pricing page for more details.If you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.PartnersDocument AI partnersGet help implementing Document AI from these trusted partners. View full partner directory.See all partnersCloud AI products comply with our SLA policies. They may offer different latency or availability guarantees from other Google Cloud services.Take the next stepTry Document AI in the console.Go to my consoleHave a large project?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Document_AI.txt b/Document_AI.txt new file mode 100644 index 0000000000000000000000000000000000000000..e8b81e86b6cee46263d60e2499593ba7ea810057 --- /dev/null +++ b/Document_AI.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/document-ai +Date Scraped: 2025-02-23T11:58:42.190Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceJump to Vertex AI's document processing solution Document AI Create document processors that help automate tedious tasks, improve data extraction, and gain deeper insights from unstructured or structured document information. Document AI helps developers create high-accuracy processors to extract, classify, and split documents.Try it in consoleContact salesNew customers get $300 in free credit to try Document AI and other Google Cloud productsSummarize large documents with a Google-recommended, pre-built solution—powered by generative AISeamlessly connect to BigQuery, Vertex Search, and other Google Cloud products Enterprise-ready, along with Google Cloud's data security and privacy commitmentsBuilt for developers; use the UI or API to easily create document processorsBLOGCustom Extractor, powered by generative AI, is GABenefitsFaster time to valueUse generative AI to extract data or classify documents out of the box, with no training necessary to get started. 
Simply post a document to an enterprise-ready API endpoint to get structured data in return.Higher accuracyDocument AI is powered by the latest foundation models, tuned for document tasks. Also, with powerful fine-tuning and auto-labeling features, the platform offers multiple paths to reach the required accuracy.Better decision-makingStructure and digitize information from documents to drive deeper insights using generative AI to help businesses make better decisions.DemoTry Document AI in your environmentExtract data from your documents using generative AI. For full product capabilities head to Document AI in the Google Cloud Console.sparkLooking to build a solution?I want to build a solution where users can upload documents on a case and the system gives them a summary of their caseI want to build a pipeline that automatically classifies scanned receipts into various expense typesI want to build a solution that can parse through documents, extract specific information to feed into a structured formatMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionSummarize documentsprompt_suggestionClassify documentsprompt_suggestionExtract informationKey featuresUse generative AI for document processingDocument AI WorkbenchDocument AI Workbench provides an easy way to build custom processors to classify, split, and extract structured data from documents. Workbench is powered by generative AI, which means it can be used out of the box to get accurate results across a wide array of documents. Furthermore, you can achieve higher accuracy by providing as few as 10 documents to fine-tune the large model—all with a simple click of a button or an API call. Try it now or learn more.Enterprise OCR With Enterprise Document OCR, users gain access to 25 years of optical character recognition (OCR) research at Google. OCR is powered by models trained on business documents and can detect text in PDFs and images of scanned documents in 200+ languages. The product can see the structure of a document to identify layout characteristics like blocks, paragraphs, lines, words, and symbols. Advanced features include best-in-class handwriting recognition (50 languages), recognizing math formulas, detecting font-style information, and extracting selection marks like checkboxes and radio buttons.Try Document OCR now for accurate text and layout extraction.Form ParserDevelopers use Form Parser to capture fields and values from standard forms, to extract generic entities, including names, addresses, and prices, and to structure data contained in tables. 
This product works out of the box and does not require any training or customization and is useful across a broad range of document customization.Explore document processing with Form Parser.PretrainedTry out pretrained models for commonly used document types including W2, paystub, bank statement, invoice, expense, US driver license, US passport, and identity proofing.Explore pretrained options in the processor gallery.VIDEODocument AI overview4:37CustomersHigher processing accuracy and faster development time Document AI is helping customers improve fraud detection, automate customer support, and process clinical trial data.Case studyDocument AI helps CheQ lower time to value and extract information from customer email and SMS accounts5-min readBlog postResistant AI improves customer fraud detection by 32%, reduces investigation time by 52 minutes per case5-min readNewsQuantiphi helps Cerevel Therapeutics improve clinical trial oversight with 93% extraction accuracy5-min readSee all customersWhat's newSee the latest updates about Document AISign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postAsk your documents: Document AI and PaLM2 for question answeringRead the blogBlog postMobilize your unstructured data with generative AIRead the blogBlog postDocument AI Workbench powered by generative AI to structure document data fasterRead the blogBlog postWhat is Optical Character Recognition? OCR explained by GoogleRead the blogDocumentationDocumentationGoogle Cloud BasicsDocument AI overviewGet an overview of the basics of Document AI, including extracting text from documents, classifying documents, and entity extraction.Learn moreTutorialDocument AI introduction videos and labsGet started learning about Document AI with our video series "The Future of Documents" and step-by-step codelabs.Learn moreQuickstartSetting up the Document AI APIThis guide provides all required setup steps to start using Document AI.Learn moreTutorialBuild custom processors with Document AI courseLearn how to extract data and classify documents by creating custom ML models that are specific to your business needs. Start courseNot seeing what you’re looking for?View all product documentationUse casesUse casesUse caseExtract data to drive automation and analyticsUse Document AI Workbench to automate data entry by extracting structured data from your documents. Typical applications include the mail room, shipping yards, mortgage processing divisions, procurement, and more. Use this data to make more efficient and effective business decisions.Try out the Custom Extractor. Use caseUncover insights buried in documents with BigQueryYou can now extract metadata from documents directly into a BigQuery objects table. Seamlessly join the parsed data with other BigQuery tables to combine structured and unstructured data, paving the way for comprehensive document analytics. Learn more about the BigQuery and Document AI integration here. Use caseClassify documentsAssigning categories or classes to documents as they flow into a business process makes them easier to manage, search, filter, or analyze. Custom Splitter and Classifier use machine learning to accurately predict and categorize single documents or multiple documents within a file. 
Use these products to improve efficiencies of the document processes.Learn more and try out the Custom Splitter and Custom Classifier.Use caseMake document processing applications smarterSaaS customers and ISV partners can improve and expand their document processing solutions, quickly with generative AI. With a simple API prediction endpoint and document response format, customers can take document applications to the next level.Use caseDigitize text for ML model trainingEnterprise Document OCR enables users to create value from archival content that is otherwise unusable for training machine learning models. OCR helps extract text from scanned documents, plots, reports, and presentations prior to saving on a cloud storage or a data warehouse. Use these high-quality OCR outputs to accelerate your digital transformation initiatives such as training ML models specific to your business. Use caseExpand business capabilities with generative AICapture document information for new generative AI architectures and frameworks. Combining OCR and the Vertex AI PaLM API lets users unlock valuable data from documents to build document Q&A experiences, perform automated document comparison, or even generate new documents. View all technical guidesPricingDocument AI pricingDocument AI offers transparent, cost effective pricing for all your document processing, model training, and storage needs. Visit our pricing page for more details.If you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.PartnersDocument AI partnersGet help implementing Document AI from these trusted partners. View full partner directory.See all partnersCloud AI products comply with our SLA policies. They may offer different latency or availability guarantees from other Google Cloud services.Take the next stepTry Document AI in the console.Go to my consoleHave a large project?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Drivers,_considerations,_strategy,_and_patterns.txt b/Drivers,_considerations,_strategy,_and_patterns.txt new file mode 100644 index 0000000000000000000000000000000000000000..ad6f48008274a7874796af4790e92677e6d88893 --- /dev/null +++ b/Drivers,_considerations,_strategy,_and_patterns.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns/drivers +Date Scraped: 2025-02-23T11:49:42.599Z + +Content: +Home Docs Cloud Architecture Center Send feedback Drivers, considerations, strategy, and approaches Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC This document defines and discusses business objectives, drivers, and requirements, and how these factors can influence your design decisions when constructing hybrid and multicloud architectures. Objectives An organization can adopt a hybrid or multicloud architecture either as a permanent solution to meet specific business objectives, or as a temporary state to facilitate certain requirements, such as a migration to the cloud. Answering the following questions about your business is a good way to define your business requirements, and to establish specific expectations about how to achieve some or all of your business objectives. These questions focus on what's needed for your business, not how to achieve it technically. 
Which business goals are driving the decision to adopt a hybrid or multicloud architecture? What business and technical objectives is a hybrid or multicloud architecture going to help achieve? What business drivers influenced these objectives? What are the specific business requirements? In the context of hybrid and multicloud architectures, one business goal for an enterprise customer might be to expand online sales operations or markets from a single region to become one of the global leaders in their market segment. One of the business objectives might be to start accepting purchase orders from users across the globe (or from specific regions) within six months. To support the previously mentioned business requirements and objectives, one potential primary technical objective is to expand the IT infrastructure and applications architecture of a company from an on-premises-only model to a hybrid architecture, using the global capabilities and services of public clouds. This objective should be specific and measurable, clearly defining the expansion scope in terms of target regions and timelines. Note: Sometimes business requirements are defined to satisfy certain business strategies. A business strategy can be defined as a long term plan to draw a path to achieve certain business objectives. In general, a hybrid or multicloud architecture is rarely a goal in itself, but rather a means of meeting technical objectives driven by certain business requirements. Therefore, choosing the right hybrid or multicloud architecture requires first clarifying these requirements. It's important to differentiate between the business objectives and technical objectives of your IT project. Your business objectives should focus on the goal and mission of your organization. Your technical objectives should focus on building a technological foundation that enables your organization to meet their business requirements and objectives. Business drivers influence the achievement of the business objective and goals. Therefore, clearly identifying the business drivers can help shape the business objectives or goals to be more relevant to market needs and trends. The following flowchart illustrates business drivers, goals, objectives, and requirements, and the technical objectives and requirements, and how all these factors relate to each other: Business and technical drivers Consider how your business drivers influence your technical objectives. Some common, influencing, business drivers when choosing a hybrid architecture include the following: Heeding laws and regulations about data sovereignty. Reducing capital expenditure (CAPEX) or general IT spending with the support of cloud financial management and cost optimization disciplines like FinOps. Cloud adoption can be driven by scenarios that help reduce CAPEX, like building a Disaster Recovery solution in a hybrid or multicloud architecture. Improving the user experience. Increasing flexibility and agility to respond to changing market demands. Improving transparency about costs and resource consumption. Consider your list of business drivers for adopting a hybrid or multicloud architecture together. Don't consider them in isolation. Your final decision should depend on the balance of your business priorities. After your organization realizes the benefits of the cloud, it might decide to fully migrate if there are no constraints—like costs or specific compliance requirements that require highly secure data to be hosted on-premises—that prevent it from doing so. 
Although adopting a single cloud provider can offer several benefits, such as reduced complexity, built-in integrations among services, and cost optimization options like committed use discounts, there are still some scenarios where a multicloud architecture can be beneficial for a business. The following are the common business drivers for adopting a multicloud architecture, along with the associated considerations for each driver: Heeding laws and regulations about data sovereignty: The most common scenario is when an organization is expanding its business to a new region or country and has to comply with new data-hosting regulations. If the existing used cloud service provider (CSP) has no local cloud region in that country, then for compliance purposes the common solution is to use another CSP that has a local cloud region in that country. Reducing costs: Cost reduction is often the most common business driver for adopting a technology or architecture. However, it's important to consider more than just the cost of services and potential pricing discounts when deciding whether to adopt a multicloud architecture. Account for the cost of building and operating a solution across multiple clouds, and any architecture constraints that might arise from existing systems. Sometimes, the potential challenges associated with a multicloud strategy might outweigh the benefits. A multicloud strategy might introduce additional costs later on. Common challenges associated with developing a multicloud strategy include the following: Increasing management complexity. Maintaining consistent security. Integrating software environments. Achieving consistent cross-cloud performance and reliability. Building a technical team with multicloud skills might be expensive and might require expanding the team, unless it's managed by a third party company. Managing the product pricing and management tools from each CSP. Without a solution that can provide unified cost visibility and dashboards, it can be difficult to efficiently manage costs across multiple environments. In such cases, you might use the Looker cloud cost management solution where applicable. For more information, see The strategy for effectively optimizing cloud billing cost management. Using the unique capabilities from each CSP: A multicloud architecture enables organizations to use additional new technologies to improve their own business capability offerings without being limited to the choices offered by a single cloud provider. To avoid any unforeseen risk or complexity, assess your potential challenges through a feasibility and effectiveness assessment, including the common challenges mentioned previously. Avoiding vendor lock-in: Sometimes, enterprises want to avoid being locked into a single cloud provider. A multicloud approach lets them choose the best solution for their business needs. However, the feasibility of this decision depends on several factors, such as the following: Technical dependencies Interoperability considerations between applications Costs of rebuilding or refactoring applications Technical skill sets Consistent security and manageability Enhancing the reliability and availability level of business critical applications: In some scenarios, a multicloud architecture can provide resilience to outages. For example, if one region of a CSP goes down, traffic can be routed to another CSP in the same region. This scenario assumes that both cloud providers support the required capabilities or services in that region. 
When data residency regulations in a specific country or region mandate the storage of sensitive data—like personally identifiable information (PII)—within that location, a multicloud approach can provide a compliant solution. By using two CSPs in one region to provide resilience to outages, you can facilitate compliance with regulatory restrictions while also addressing availability requirements. The following are some resilience considerations to assess before adopting a multicloud architecture: Data movement: How often might data move within your multicloud environment? Might data movement incur significant data transfer charges? Security and manageability: Are there any potential security or manageability complexities? Capability parity: Do both CSPs in the selected region offer the required capabilities and services? Technical skill set: Does the technical team have the skills required to manage a multicloud architecture? Consider all these factors when assessing the feasibility of using a multicloud architecture to improve resilience. When assessing the feasibility of a multicloud architecture, it's important to consider the long-term benefits. For example, deploying applications on multiple clouds for disaster recovery or increased reliability might increase costs in the short term, but could prevent outages or failures. Such failures can cause long-term financial and reputational damage. Therefore, it's important to weigh short-term costs against the long-term potential value of adopting multicloud. Also, the long-term potential value can vary based on the organization size, technology scale, criticality of the technology solution, and the industry. Organizations that plan to successfully create a hybrid or multicloud environment should consider building a Cloud Center of Excellence (COE). A COE team can become the conduit for transforming the way that internal teams within your organization serve the business during your transition to the cloud. A COE is one of the ways that your organization can adopt the cloud faster, drive standardization, and maintain stronger alignment between your business strategy and your cloud investments. If the objective of the hybrid or multicloud architecture is to create a temporary state, common business drivers include: The need to reduce CAPEX or general IT spending for short-term projects. The ability to provision such infrastructure quickly to support a business use case. For example: This architecture might be used for limited-time projects. It could be used to support a project that requires a high-scale distributed infrastructure within a limited duration, while still using data that is on-premises. The need for multi-year digital transformation projects that require a large enterprise to use a hybrid architecture for some time, to help align infrastructure and application modernization with business priorities. The need to create a temporary hybrid, multicloud, or mixed architecture after a corporate merger. Doing so enables the new organization to define a strategy for the final state of its new cloud architecture. It's common for two merging companies to use different cloud providers, or for one company to use an on-premises private data center and the other to use the cloud. In either case, the first step in a merger or acquisition is almost always to integrate the IT systems. Technical drivers The preceding section discussed business drivers. 
To get approved, major architectural decisions almost always need the support of those drivers. However, technical drivers, which can be based on either a technical gain or a constraint, can also influence business drivers. In some scenarios, it's necessary to translate technical drivers into business drivers and explain how they might positively or negatively affect the business. The following non-exhaustive list contains some common technical drivers for adopting a hybrid or multicloud architecture: Building out technological capabilities, such as advanced analytics services and AI, that might be difficult to implement in existing environments. Improving the quality and performance of service. Automating and accelerating application rollouts to achieve a faster time to market and shorter cycle times. Using high-level APIs and services to speed up development. Accelerating the provisioning of compute and storage resources. Using serverless services to build elastic services and capabilities faster and at scale. Using global infrastructure capabilities to build global or multi-regional architectures to satisfy certain technical requirements. The most common technical driver for both temporary hybrid and temporary multicloud architectures is to facilitate a migration from on-premises to the cloud or to an extra cloud. In general, cloud migrations almost always naturally lead to hybrid cloud setup. Enterprises often have to systematically transition applications and data based on their priorities. Similarly, a short-term setup might be intended to facilitate a proof of concept using advanced technologies available in the cloud for a certain period. Technical design decisions The identified technical objective and its drivers are key to making a business-driven architecture decision and to selecting one of the architecture patterns discussed in this guide. For example, to support a specific business goal, a company might set a business objective to build a research and development practice for three to six months. The main business requirement to support this objective might be to build the required technology environment for research and design with the lowest possible CAPEX. The technical objective in this case is to have a temporary hybrid cloud setup. The driver for this technical objective is to take advantage of the on-demand pricing model of the cloud to meet the previously mentioned business requirement. Another driver is influenced by the specific technology requirements that require a cloud-based solution with high compute capacity and quick setup. Use Google Cloud for hybrid and multicloud architectures Using open source solutions can make it easier to adopt a hybrid and multicloud approach, and to minimize vendor lock-in. However, you should consider the following potential complexities when planning an architecture: Interoperability Manageability Cost Security Building on a cloud platform that contributes to and supports open source might help to simplify your path to adopting hybrid and multicloud architectures. Open cloud empowers you with an approach that provides maximum choice and abstracts complexity. In addition, Google Cloud offers the flexibility to migrate, build, and optimize applications across hybrid and multicloud environments while minimizing vendor lock-in, using best-in-breed solutions, and meeting regulatory requirements. 
Google is also one of the largest contributors to the open source ecosystem and works with the open source community to develop well-known open source technologies like Kubernetes. When rolled out as a managed service, Kubernetes can help reduce complexities around hybrid and multicloud manageability and security. Previous arrow_back Overview Next Plan a hybrid and multicloud strategy arrow_forward Send feedback \ No newline at end of file diff --git a/Dynamic_web_application_with_Java.txt b/Dynamic_web_application_with_Java.txt new file mode 100644 index 0000000000000000000000000000000000000000..4ff58dd69f2b2a9b40383c3007f901a0d1ad528a --- /dev/null +++ b/Dynamic_web_application_with_Java.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/application-development/gke-java-spring-dynamic-webapp +Date Scraped: 2025-02-23T11:48:32.224Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Dynamic web application with Java Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-21 UTC This guide helps you understand, deploy, and use the dynamic web application with Java Jump Start Solution. This solution deploys a dynamic web app named Point of Sale. Point of Sale is a web app written in Java that mimics a real-world point of sale screen for a retail store. After you deploy the web app, you can test the user experience on the Point of Sale screen. You deploy Point of Sale web app in Google Cloud using the Google-managed implementation of Kubernetes, Google Kubernetes Engine (GKE). With GKE, you can choose how granular the operational management of your infrastructure should be. This solution provides high-level requirements to start your application software design. At the end of this guide, you will be able to select the Google Cloud components required for a deployment with similar cost, performance, and scalability. This guide assumes that you're familiar with Java and basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives This solution guide helps you do the following: Deploy a publicly accessible web application with GKE by completing the following high-level tasks: Configure a GKE Autopilot cluster that responds to the scaling, security, and infrastructure needs of the cluster. Configure a Google Cloud LoadBalancer through Kubernetes Services to enable incoming and outgoing traffic to the web application. Connect to Spanner from a GKE Pod following Google Cloud's recommended security practices. Build and redeploy securely. Explore your deployment. Use Cloud Trace to understand and manage problems. Products used The solution uses the following Google Cloud products: Google Kubernetes Engine: A managed environment for deploying, managing, and scaling containerized applications using Google infrastructure. Spanner: A fully managed relational database that lets your application scale and ensure high availability. Virtual Private Cloud: A global Virtual Private Cloud network that spans all Google Cloud regions and that lets you interconnect your cloud resources. 
For information about how these products are configured and how they interact, see the next section. Architecture The following diagram shows the architecture of the Google Cloud resources that the solution deploys: Components and configuration The architecture includes the following components: A client request goes to Cloud Load Balancing, which distributes incoming traffic to a Virtual Private Cloud (VPC). Google Cloud assigns an external IP address to the VPC instance. The VPC provides connectivity to the resources of a GKE Autopilot cluster. The cluster has a Kubernetes Service of type LoadBalancer. This Service routes requests to the Pods running three Spring Boot Java service Pods. The Pods have the following characteristics: The api-server Pod hosts the static content for the Vue.js frontend and exposes an API for the frontend. Calls to these APIs trigger connections to the inventory and payment services as needed. The inventory service Pod connects to Spanner to store and retrieve inventory information. The payment service Pod connects to Spanner to store payment details and generates a purchase bill. The Spanner instance hosts inventory and payment data. Cost For an estimate of the cost of the Google Cloud resources that the dynamic web application with Java solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. 
IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles that are assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. Compute Engine Network Admin (roles/compute.networkAdmin) Kubernetes Engine Admin (roles/container.admin) GKE Hub Editor (roles/gkehub.editor) Service Account Admin (roles/iam.serviceAccountAdmin) Service Account User (roles/iam.serviceAccountUser) Project IAM Admin (roles/resourcemanager.projectIamAdmin) Spanner Admin (roles/storage.admin) Service Usage Admin (roles/serviceusage.serviceUsageAdmin) Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Dynamic web application with Java solution. Go to the Dynamic web application with Java solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. 
When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view and use the deployed Point of Sale web app, follow the instructions in Explore your deployed dynamic web application using Java. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour When you no longer need the solution, you can delete it to avoid continued billing for the Google Cloud resources. For more information, see Delete the solution deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-example-java-dynamic-point-of-sale/infra. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-example-java-dynamic-point-of-sale/infra Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-example-java-dynamic-point-of-sale/infra. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. 
To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-example-java-dynamic-point-of-sale/infra directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" For information about the values that you can assign to the required variables, see the following: PROJECT_ID: Identifying projects Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-example-java-dynamic-point-of-sale/infra. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-example-java-dynamic-point-of-sale/infra. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! To view and use the deployed Point of Sale web app, follow the instructions in Explore your deployed dynamic web application using Java. After the deployment is completed, the output is similar to the following: pos_application_url = "http://34.27.130.180/" The pos_application_url is the IP address of your application frontend. GKE assigns this IP address to the public endpoint of the load balancer that exposes your application to the internet. 
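Once terraform apply completes, one quick way to confirm that the frontend is serving traffic is to read the Terraform output and probe the URL. This is a sketch under a couple of assumptions: it is run from the infra directory, the -raw flag requires Terraform 0.14 or later (Cloud Shell ships a recent version), and the load balancer can take a few minutes before it starts returning 200.

  # Sketch: read the Terraform output and check that the Point of Sale frontend responds.
  APP_URL=$(terraform output -raw pos_application_url)
  echo "Point of Sale frontend: ${APP_URL}"
  curl -s -o /dev/null -w "%{http_code}\n" "${APP_URL}"   # expect 200 once the load balancer is ready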
To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour When you no longer need the solution, you can delete it to avoid continued billing for the Google Cloud resources. For more information, see Delete the solution deployment. Explore your deployed dynamic web app You have now deployed the Point of Sale dynamic web app! Visit the Point of Sale website and look around, and then explore how the solution works in Google Cloud console. Be aware that it can take a few minutes after deploying the application for the site to show up at the provided address. What is the Point of Sale web app? This Jump Start Solution uses a sample dynamic web app named Point of Sale to demonstrate how Google Cloud GKE infrastructure can help Java developers build, deploy, and manage web apps with static assets and dynamic content. Point of Sale is a web app that mimics a real-world checkout terminal for a retail store. The application frontend is used by a sales representative to check out items for a customer at a retail store. On this screen, the sales representative can perform the following actions: Add items to their cart and proceed to pay. Clear the cart or remove items from the cart. See a payment receipt. When the user pays, the web app displays a bill with the result of the transaction. Other edge cases are also covered. For example, if the user attempts to pay with no elements in the cart, the web app displays an error message. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour Explore the frontend To launch the deployed Point of Sale web app frontend: Go to the Services page in the Google Cloud console. Go to Services Click the IP address of the Endpoints for the api-server-lb external load balancer. The frontend of your Point of Sale web app opens in a new browser window. You can now interact with the Point of Sale web app just as its users would see it, including adding products, paying, or seeing the bill. Generate load to the web app To examine how GKE responds to normal increases in traffic on your web app, send traced requests to the web application. The following steps use hey to automatically send several requests. hey comes pre-installed on the Cloud Shell: In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-example-java-dynamic-point-of-sale/infra. If it isn't, go to that directory. Send 150 requests to the web application: export LB_IP_ADDRESS=$(gcloud compute addresses list --filter=name:jss-pos-1 --format='value(address)') hey -n 150 \ -m POST \ -H 'Content-Type: application/json' \ -d '{"paidAmount":14.59865,"type":"CASH","items":[{"itemId":"19a89a67-3958-46cf-9776-c29983871c93","itemCount":1},{"itemId":"729d0dd6-950e-4098-8f70-e7144076e899","itemCount":1}]}' \ http://$LB_IP_ADDRESS/api/pay The script assigns the IP address of the frontend of the dynamic web app to the LB_IP_ADDRESS variable. 
The output is the similar to the following: Summary: Total: 8.7963 secs Slowest: 6.0000 secs Fastest: 0.7981 secs Average: 2.7593 secs Requests/sec: 17.0527 Total data: 132600 bytes Size/request: 884 bytes Response time histogram: 0.798 [1] |■ 1.318 [24] |■■■■■■■■■■■■■■■■■■■■■■■ 1.838 [42] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■ 2.359 [26] |■■■■■■■■■■■■■■■■■■■■■■■■■ 2.879 [7] |■■■■■■■ 3.399 [0] | 3.919 [7] |■■■■■■■ 4.439 [11] |■■■■■■■■■■ 4.960 [6] |■■■■■■ 5.480 [9] |■■■■■■■■■ 6.000 [17] |■■■■■■■■■■■■■■■■ Latency distribution: 10% in 1.1932 secs 25% in 1.5938 secs 50% in 1.9906 secs 75% in 4.3013 secs 90% in 5.5936 secs 95% in 5.8922 secs 99% in 6.0000 secs Details (average, fastest, slowest): DNS+dialup: 0.0016 secs, 0.7981 secs, 6.0000 secs DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0000 secs req write: 0.0004 secs, 0.0000 secs, 0.0036 secs resp wait: 2.7565 secs, 0.7980 secs, 5.9930 secs resp read: 0.0001 secs, 0.0000 secs, 0.0002 secs Status code distribution: [200] 150 responses The Status code distribution field shows the 150 responses confirmation. This means that the script has executed 150 successful payments. Delete the solution deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-example-java-dynamic-point-of-sale/infra. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. 
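If you removed only the deployment and kept the project, a quick sanity check that the solution's main billable resources are gone is to list them with gcloud. The commands below are a sketch that assumes the gcloud CLI is authenticated against the project you used; the exact resource names depend on your deployment, so an empty (or unchanged) list for each is what you're looking for.

  # Sketch: confirm the solution's main resources were removed (run in Cloud Shell).
  gcloud config set project PROJECT_ID    # replace PROJECT_ID with your project ID
  gcloud container clusters list          # the solution's GKE cluster should no longer be listed
  gcloud spanner instances list           # the solution's Spanner instance should no longer be listed
  gcloud compute addresses list           # the load balancer address should no longer be listed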
If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. 
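One way to reduce the chance of hitting this error is to enable the required APIs before the first terraform apply in a new project. The sketch below is an assumption about which services matter most here: Compute Engine is the API named in the error message, and GKE and Spanner are the other major products this solution uses; adjust the list to whatever the Terraform configuration in your deployment actually requires.

  # Sketch: enable the main APIs up front, then retry terraform apply after a short wait.
  gcloud services enable \
      compute.googleapis.com \
      container.googleapis.com \
      spanner.googleapis.com \
      --project=PROJECT_ID    # replace PROJECT_ID with your project ID
  gcloud services list --enabled --project=PROJECT_ID | grep -E 'compute|container|spanner'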
Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. 
Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/infra Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next To continue learning more about deploying Java applications with Google Cloud products and capabilities, see: Deploy a Java service to Cloud Run for more automated infrastructure management. Build your first web app with Firebase. 
Deploy an app in a container image to a GKE cluster quickstart. Send feedback \ No newline at end of file diff --git a/Dynamic_web_application_with_JavaScript.txt b/Dynamic_web_application_with_JavaScript.txt new file mode 100644 index 0000000000000000000000000000000000000000..f3352a33914b11cefa3e94181033bac6477b90fe --- /dev/null +++ b/Dynamic_web_application_with_JavaScript.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/application-development/dynamic-app-javascript +Date Scraped: 2025-02-23T11:48:34.502Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Dynamic web application with JavaScript Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-06 UTC This guide helps you understand, deploy, and use the Dynamic web application with JavaScript Jump Start Solution. This solution shows you how to build and run dynamic websites on Google Cloud. You can use this as an example deployment for your own dynamic website to Google Cloud including the Google Cloud products you may need, and how they should communicate with each other. This solution guide deploys the Developer Journey web application to Google Cloud. Developer Journey is built with TypeScript, Next.js and React, and demonstrates static generation and server-side rendering techniques for pre-rendering cacheable pages. This document assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives This solution guide will help you learn how to use Google Cloud to: Deploy a publicly accessible web application to Cloud Run Connect an application to a Google Cloud database following Google Cloud's recommended security practices (like using secrets and assigning least privilege including appropriate IAM permissions) Deploy and operate the backend services Make content changes or modify the application to add a feature Build and redeploy securely Explore your deployment Customize your application We advise that: You are familiar with TypeScript and React programming. You are interested in using cloud infrastructures for your new JavaScript applications. You are familiar with database connections, CI/CD, debugging, and logging. You are familiar with the fundamentals of website development. Products used The following is a summary of the Google Cloud products that the Developer Journeys solution integrates: Cloud Run: A fully managed service that lets you build and deploy serverless containerized apps. Cloud Run handles scaling and other infrastructure tasks so you can focus on the business logic of your code. Firestore: a NoSQL document database built for automatic scaling, high performance, and ease of application development. Cloud Load Balancing : scalable, high performance load balancing on Google Cloud Secret Manager: A service that lets you store, manage, and access secrets as binary blobs or text strings. You can use Secret Manager to store database passwords, API keys, or TLS certificates that are needed by an application at runtime. 
Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. Cloud Build: A service that lets you import source code from repositories or Cloud Storage spaces, execute a build, and produce artifacts such as Docker containers or Java archives for continuous delivery. Architecture The following diagram shows the architecture of the solution: Request flow The architecture includes the following components: Mobile and web users connect to the application using a URL. Cloud CDN serves cached assets if available. If not, request is routed to Cloud Load Balancing. For static assets, Cloud Load Balancing pulls from Cloud Storage bucket. For dynamic resources, Cloud Load Balancing directs requests to Cloud Run. Sensitive values are provided to Cloud Run using environment variables stored in Secret Manager. Cloud Run queries user data from Firestore, which is a NoSQL database backend for the web application. Cost For an estimate of the cost of the Google Cloud resources that the dynamic web application with javascript solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. 
IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccounts.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/datastore.owner roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/resourcemanager.projectIamAdmin roles/run.admin roles/secretmanager.admin roles/storage.admin roles/compute.networkAdmin roles/compute.admin Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Dynamic web application with JavaScript solution. Go to the Dynamic web application with JavaScript solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. 
Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-dynamic-javascript-webapp/infra. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-dynamic-javascript-webapp/infra Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-javascript-webapp/infra. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-dynamic-javascript-webapp/infra directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. 
For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" # Google Cloud region where you want to deploy the solution # Example: us-central1 region = "REGION" # Google Cloud zone where you want to deploy the solution # Example: us-central1-a zone = "ZONE" For information about the values that you can assign to the required variables, see the following: project_id: Identifying projects region and zone: Available regions and zones Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-javascript-webapp/infra. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-javascript-webapp/infra. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Explore your deployment You've deployed your sample JavaScript dynamic web application. Your solution deployment consists of multiple primary services that have been integrated into a single Google Cloud project including the following: A frontend application written in TypeScript A Cloud Run service that uses the Next.js framework and the React framework. A Firestore database. 
A Cloud Storage enterprise-ready bucket. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour Optional: customize your application To customize the Dynamic web application with JavaScript solution, you can make changes to the application that is deployed on Cloud Run and then redeploy. To follow step-by-step guidance for this task directly in Cloud Shell Editor, click Guide me. Guide me This task takes about 30 minutes to complete. Delete the deployment When you no longer need the solution, to avoid continued billing for the resources that you created in this solution, delete all the resources. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-javascript-webapp/infra. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) 
A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. 
Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/infra Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! 
Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified code, create issues in the appropriate GitHub repository: Terraform code Application code GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next To continue learning more about Google Cloud products and capabilities, see: Google Cloud web hosting Send feedback \ No newline at end of file diff --git a/Dynamic_web_application_with_Python_and_JavaScript.txt b/Dynamic_web_application_with_Python_and_JavaScript.txt new file mode 100644 index 0000000000000000000000000000000000000000..5cee35aa4edeae55635210f63f3bd43008d212f3 --- /dev/null +++ b/Dynamic_web_application_with_Python_and_JavaScript.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/application-development/dynamic-app-python +Date Scraped: 2025-02-23T11:48:36.915Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Dynamic web application with Python and JavaScript Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-05-10 UTC This guide helps you understand, deploy, and use the Dynamic web application with Python and JavaScript Jump Start Solution. This solution demonstrates how to build and run dynamic websites on Google Cloud. The deployment and management of this application serves as a sample real-world implementation of building a dynamic web application with many of the tools and products that you can use for serverless applications. You might deploy this solution if you want to learn about the following: Django REST framework (DRF) application Storefront application Database-backed website This solution deploys the Avocano app. This is a faux storefront, where users can attempt to purchase an item. However, if a user adds an item to their cart, the store reveals it is fake (Avoca—no!). Although users can't actually purchase an avocado, the app illustrates inventory management, by decreasing the amount of available product by one. Avocano showcases the combination of a Cloud Run-powered API, a Cloud SQL for PostgreSQL database, and a Firebase frontend. See the Avocano application source code. 
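To make the backend pattern concrete before the deployment steps, the following minimal Python sketch shows one way a Cloud Run service like this can read a database password from Secret Manager before connecting to Cloud SQL for PostgreSQL. It is illustrative only and is not taken from the Avocano source code; the project ID, secret name, user, and database name are placeholders.

import os
from google.cloud import secretmanager

def read_secret(project_id: str, secret_id: str, version: str = "latest") -> str:
    # Fetch a secret value at runtime instead of baking it into the container image.
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode("utf-8")

# Hypothetical usage: build a PostgreSQL connection string for the backend's settings.
# PROJECT_ID and the secret name "db_password" are placeholders, not values from this solution.
db_password = read_secret(os.environ["PROJECT_ID"], "db_password")
database_url = f"postgres://app_user:{db_password}@127.0.0.1:5432/app_db"

Granting the Cloud Run service account only the Secret Manager Secret Accessor role on that one secret follows the least-privilege practice that this solution describes.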
Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives This solution guide will help you learn how to use Google Cloud to perform the following tasks: Deploy a publicly accessible web application Connect an application to a Google Cloud database following Google Cloud's recommended security practices: Using Secret Manager to store passwords, keys, and certificates. Granting only the IAM permissions required to perform the task. This is also known as applying the principle of least privilege. Deploy and operate the backend services. Customize your application Make content changes or modify the application to add a feature. Build and redeploy securely. Products used The following is a summary of the Google Cloud products that this solution uses: Cloud Run: A fully managed service that lets you build and deploy serverless containerized apps. Google Cloud handles scaling and other infrastructure tasks so that you can focus on the business logic of your code. Jobs: Container-based task processing. Cloud Build: A service that lets you import source code from repositories or Cloud Storage spaces, execute a build, and produce artifacts such as Docker containers or Java archives. Cloud SQL for PostgreSQL: A cloud-based PostgreSQL database that's fully managed on the Google Cloud infrastructure. Secret Manager: A service that lets you store, manage, and access secrets as binary blobs or text strings. You can use Secret Manager to store database passwords, API keys, or TLS certificates that are needed by an application at runtime. Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. Firebase: A development platform that lets you build, release, and monitor applications for iOS, Android, and the web. Architecture The following diagram shows the architecture of the solution: Components and configuration The architecture includes the following components: The web client is hosted on Firebase Hosting. The web client calls an API backend written in Python that runs as a service on Cloud Run. The configuration and other secrets for the Python application are stored in Secret Manager. Cloud SQL for PostgreSQL is used as the relational database backend for the Python application. Static assets for the application and container images are stored in Cloud Storage. Cost For an estimate of the cost of the Google Cloud resources that the dynamic web application solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. 
Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccounts.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/cloudsql.admin roles/firebasehosting.admin roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/resourcemanager.projectIamAdmin roles/run.admin roles/secretmanager.admin roles/storage.admin roles/compute.networkAdmin roles/compute.admin Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. 
Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Dynamic web application with Python and JavaScript solution. Go to the Dynamic web application with Python and JavaScript solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view and use the solution, click the more_vertActions menu and select Explore this solution. For more information on how to explore your solution deployment, see Explore your deployment. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. This is the directory that contains the Terraform configuration files for the solution. 
If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # ID of the project in which you want to deploy the solution. project_id = "PROJECT_ID" # Google Cloud region where you want to deploy the solution. # Example: us-central1 region = "REGION" # Whether or not to enable underlying APIs in this solution. # Example: true enable_apis = "ENABLE_APIS" # Initial image to deploy to Cloud Run service. # Example: gcr.io/hsa-public/developer-journey/app initial_run_image = "INITIAL_RUN_IMAGE" # Identifier for the deployment. Used in some resource names. # Example: dev-journey deployment_name = "DEPLOYMENT_NAME" # Whether or not to initialize a Firestore instance. # Example: true init_firestore = "INIT_FIRESTORE" For information about the values that you can assign to the required variables, see the following: project_id: Identifying projects. regions: Available regions. enable_apis: Set to true to enable underlying APIs in this solution. initial_run_image: The URL for the initial image to deploy to Cloud Run service. deployment_name: Identifier for the deployment. init_firestore: Set to true to enable the initialization of a Firestore instance. Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. 
If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Explore your deployment You've deployed your sample dynamic web application. Your solution deployment consists of multiple primary services that have been integrated into a single Google Cloud project, including the following: A Firebase Hosting client frontend, written using the Lit framework. A Cloud Run API server, written in Python using Django and the Django REST Framework. A Cloud SQL database, using PostgreSQL. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour Optional: customize your application To customize the Dynamic web application with Python and JavaScript solution, you can make changes to the application frontend and backend, and then redeploy. To follow step-by-step guidance for this task directly in Cloud Shell Editor, click Guide me. This task takes about 15 minutes to complete. Guide me Delete the deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete the deployment through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete the deployment using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. If it isn't, go to that directory. 
Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. 
The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. 
Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/infra Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. 
For unmodified code, create issues in the appropriate GitHub repository: Terraform code Application code GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next To continue learning more about Google Cloud products and capabilities, see: Google Cloud web hosting Getting started with Django Send feedback \ No newline at end of file diff --git a/Earth_Engine.txt b/Earth_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..5fbec505e0185962a5df3ff1f0ece3edc641ddcc --- /dev/null +++ b/Earth_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/earth-engine +Date Scraped: 2025-02-23T12:06:01.811Z + +Content: +Google Earth EngineAnalyze satellite imagery and geospatial data at planetary scaleImprove sustainability and climate resilience decision-making with Earth Engine's curated geospatial data catalog, and large-scale computing and advanced AI of Google Cloud.Go to consoleContact salesProduct highlightsA catalog of 90+ petabytes of analysis-ready geospatial data Analyze Earth Observation data and run Machine Learning on it Work in the JavaScript Code Editor or in PythonEarth Engine - Made to make a difference1:36 videoFeatures90+ petabytes of analysis-ready geospatial data The Earth Engine catalog is one of the largest publicly available data catalogs, with 90+ petabytes of analysis-ready satellite imagery and 1,000+ curated geospatial datasets. It includes 50+ years of historical imagery, updated and expanded daily, at resolutions as fine as one meter per pixel. Examples include Landsat, MODIS and Sentinel, the National Agriculture Imagery Program (NAIP), climate and weather data, geophysical data, including terrain, land cover, and cropland data. The catalog offers data on the entire planet, allowing users to understand earth changes relevant to their sustainability goals. Explore Earth Engine datasets Powerful computation platform at scaleGoogle Cloud empowers everyone to run large-scale parallel processing using many thousands of computers. Combining Earth Engine’s data catalog with Google Cloud’s computational capability and data analytics tools, makes Earth Engine a revolutionary platform to analyze and visualize earth data at scale. Faster access, processing, and analysis of data means faster innovations, informed decisions, and viable solutions. The U.S. Forest Service, for example, manages 193 million acres of U.S. forest lands, reducing the time to complete mission critical tasks from months to hours with Earth Engine’s superior data catalog and computational scalability.Learn more about Earth Engine on Google CloudCode EditorThe Earth Engine Code Editor is a web-based coding environment designed to make developing complex geospatial workflows fast and easy with the following elements:JavaScript Code EditorMap display for visualizing geospatial datasetsAPI reference documentation (Docs tab)Git-based Script Manager (Scripts tab)Console output (Console tab)Task Manager (Tasks tab) to handle long-running queriesInteractive map query (Inspector tab)Search of the data archive or saved scriptsGeometry drawing toolsLearn more about Earth Engine Code EditorEarth Engine Python API and Colab experience (geemap)Earth Engine Python API allows users to make use of Python tools for machine learning and analysis, including those for geospatial workloads, like Cloud Optimized GeoTiffs and GeoPandas. 
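As a minimal sketch of what an Earth Engine Python API workflow can look like (the Cloud project ID, region, dates, and cloud threshold below are placeholder values, not part of this page):

import ee

ee.Authenticate()                         # interactive sign-in; only needed once per environment
ee.Initialize(project="your-project-id")  # placeholder Cloud project ID

# Build a cloud-filtered Sentinel-2 median composite over a small example region.
region = ee.Geometry.BBox(-122.6, 37.6, -122.3, 37.9)
composite = (
    ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
    .filterDate("2023-06-01", "2023-09-01")
    .filterBounds(region)
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .median()
)

# Derive NDVI and pull a summary statistic; the heavy computation runs on Earth Engine servers.
ndvi = composite.normalizedDifference(["B8", "B4"]).rename("NDVI")
mean_ndvi = ndvi.reduceRegion(reducer=ee.Reducer.mean(), geometry=region, scale=30)
print(mean_ndvi.getInfo())

The same server-side objects can then be handed to the visualization and analysis tools described next.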
The geemap Python library is supported in Earth Engine, for visual workflows in Python, like panning, zooming, and drawing map polygons for zonal statistics.Xarray, a well-known Python package, enables working with multidimensional arrays. Xee, its Earth Engine integration, lets users work with Earth Engine ImageCollections as Xarray datasets. Learn about recent Earth Engine Python API developmentsEarth Engine and BigQuery interoperabilityUsing BigQuery with Earth Engine, users get the best of both worlds. Earth Engine focuses on image (raster) processing, whereas BigQuery is optimized for processing large tabular datasets. The `Export.table.toBigQuery()`function simplifies a number of workflows:Combining Earth Engine and BigQuery data to get a more complete picture of a particular problemUsing BigQuery's analysis tools to extract insights from Earth Engine dataSharing Earth Engine data with SQL users in a way that's accessible for themLearn how to use the Earth Engine to BigQuery connectorEarth Engine Machine LearningEarth Engine has built-in capabilities to allow users to train and use ML models for common scenarios with easy-to-use APIs. For instance, you can use a random forest algorithm to classify land in an area of interest. If you'd prefer to use a Deep Neural Network, you can also train a TensorFlow or PyTorch model, deploy it to Vertex AI and get predictions from within the Earth Engine Code Editor.Learn how to get predictions from hosted models on Vertex AIImport your own data for analysis with the Earth Engine data catalogUsers can import their own data (images and tables) and combine them with datasets from the Earth Engine data catalog to derive insights. Using the Asset Manager in Code Editor or command line interface (CLI), georeferenced raster datasets in GeoTIFF or TFRecord format and tabular data in Shapefile or CSV format can be imported to build data products, create models, and develop unique solutions to accelerate sustainability efforts.Learn how to upload and manage geospatial datasets in Code EditorExport your results for integration into other systems If you're training a TensorFlow model or want to run hydrology simulations outside Earth Engine you might want to get data out of Earth Engine into another system. The Earth Engine export API does the heavy lifting and our data extraction methods help solve scaling issues and work with frameworks like Apache Beam, Spark, or Dask. Our Python client library comes bundled with client-side logic to convert between Earth Engine objects and NumPy, Pandas, and GeoPandas types.Learn how to export images, map tiles, tables and video from Earth EngineEarth Engine AppsFor code-free, interactive visualizations, Earth Engine Apps are dynamic, shareable user interfaces for Earth Engine analyses. With Earth Engine Apps, developers can use simple UI elements to leverage Earth Engine's data catalog and analytical power, allowing stakeholders to interact with their data and delivering insights into the hands of decision-makers.Learn more about Engine AppsCloud Score+Cloud Score+ solves the issue of cloud cover in Sentinel-2 satellite data. It is a comprehensive QA score, powered by deep learning, which provides a per-pixel “usability” score to mask or weight observations based on overall quality.Learn more about Cloud Score+Dynamic WorldDynamic World is a global, near-real-time Land Cover dataset at a 10m resolution, powered by machine learning. 
View all featuresHow It WorksReference analysis-ready datasets in Earth Engine's petabyte-scale data catalog. Run raster and vector data analysis in parallel on thousands of machines. Iteratively test and tweak results in Code Editor by writing custom Python code with Earth Engine's client library or by extracting results from a model hosted in Vertex AI. Share results in BigQuery or in Earth Engine Apps.Get startedSolve for common geospatial use cases at scaleCommon UsesSustainable sourcingEnable global supply chain transparency and traceabilitySustainable supply chains are business critical. Earth Engine helps businesses analyze land cover and use at sourcing sites to highlight deforestation risk in their supply chains. The EC JRC global map of forest cover for 2020 is useful for this. A spatially explicit representation of forest presence/absence in 2020 at 10m resolution, this dataset corresponds to the EU Deforestation Regulation (EUDR), which will require companies to provide statements affirming goods sold or produced in the EU were not grown on land deforested after December 31, 2020.Learn more about the EC JRC global map of forest coverTraceMark: First-mile driven traceability for raw materialsTraceMark, built by Google Cloud Advantage partner NGIS, uses Earth Engine to map the sourcing of raw materials and potential risk through global supply chains, providing comprehensive first-mile monitoring and end-to-end traceability insights.TraceMark leverages leading frameworks and provides EU Deforestation Regulation (EUDR)-specific capabilities for risk mitigation and due diligence, including data exchange and engagement with suppliers, and sustainability metrics for reporting.TraceMark provides multi-commodity capability to address all EUDR-impacted products, including palm, coffee, cocoa, soy, and paper.Learn more about NGISVideo: Learn about the value of TraceMarkDemo: Delve deeper into TraceMark capabilitiesSee all Earth Engine partnersClimate riskSafeguard assets against extreme climate risks, such as firesDisaster response agencies require precise and timely data and insights in order to monitor fires, evaluate risks, and protect assets. Datasets in Earth Engine, such as GOES MCMIP (imagery), GOES FDC (fire detection), and FIRMS (Fire Information for Resource Management System), can be analyzed to monitor fires, as well as facilitate fire modeling and risk management. Analyzing this data helps improve the efficiency of response and disaster recovery efforts, making them more effective.See an example of mapping wildfires with the power of satellite dataCloud partners with climate risk expertiseClimate Engine's SpatiaFi solution links asset and geospatial data to support regulatory reporting, climate risk reductions, and sustainable finance.CARTO's cloud-native Location Intelligence platform helps organizations analyze climate impacts, optimize processes, and predict outcomes.Deloitte is building new geospatial planning solutions using Earth Engine and Google Cloud's GenAI to help clients build sustainable communities and infrastructure, enhance operational resilience, and prepare for climate change impacts.SIG, for 25 years, has honed expertise in mapping environmental change, specializing in evaluating risks like fire, drought, flood, agricultural disruptions, and health threats.Learn more about Climate EngineLearn more about CARTOLearn more about DeloitteSee all Earth Engine partners
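The fire-monitoring workflow described above can be sketched in the Python API as follows; this is an illustration only, assuming an initialized client, with a placeholder asset location and date range.

import ee

# Placeholder asset location with a 50 km buffer.
asset = ee.Geometry.Point([-120.5, 39.0]).buffer(50_000)

# FIRMS provides daily active-fire detections; T21 is the brightness temperature band.
fires = (
    ee.ImageCollection("FIRMS")
    .filterDate("2023-08-01", "2023-08-15")
    .select("T21")
)

# Count pixels near the asset with at least one detection during the period.
detections = fires.mosaic().reduceRegion(
    reducer=ee.Reducer.count(), geometry=asset, scale=1000, maxPixels=1e9
)
print(detections.getInfo())
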
Protect natural resourcesSustainable management and conservation of natural resourcesLeveraging the Hansen global forest change dataset in Earth Engine, users can carry out forest change analysis, quantifying change over time and charting yearly forest loss. Using the Forest Monitoring for Action (FORMA, Hammer et al. 2009) data from Global Forest Watch, users can filter by dates and configure alerts within specific areas of interest.Learn more about global forest changeTutorial: Forest Change AnalysisTutorial: Forest Cover and Loss EstimationCloud partners with protecting natural resources expertiseWith 25 years of expertise, SIG excels in protecting natural resources through comprehensive mapping of land cover and use change, real-time change detection, restoration potential assessment, and biodiversity monitoring, ensuring effective environmental conservation strategies.See all partnersLearn more about SIG
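The forest change analysis described above might look like the following sketch in the Python API; it assumes an initialized client, the area of interest is a placeholder, and the Hansen asset ID should be checked against the current catalog entry because the version string changes with each annual update.

import ee

aoi = ee.Geometry.Rectangle([-62.5, -10.5, -61.5, -9.5])  # placeholder AOI

# Hansen Global Forest Change (check the catalog for the latest version string).
gfc = ee.Image("UMD/hansen/global_forest_change_2023_v1_11")

# 'loss' is a binary band marking forest loss since 2000; weight it by pixel area.
loss_area = (
    gfc.select("loss")
    .multiply(ee.Image.pixelArea())
    .reduceRegion(
        reducer=ee.Reducer.sum(), geometry=aoi, scale=30, maxPixels=1e10
    )
)

# The sum is in square meters; convert to hectares.
loss_m2 = ee.Number(loss_area.get("loss"))
print("Forest loss (ha):", loss_m2.divide(10_000).getInfo())
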
AgricultureBuild toward a higher yield, lower impact food system with agriculture insightsEarth Engine can be used to surface insights into crop health, water consumption, and seasonal patterns of productivity. MOD13A2.061 Terra Vegetation Indices 16-Day Global 1km can be leveraged to generate a time-series animation representing 20-year median vegetation productivity. For more informed decision-making, users can analyze datasets like MODIS land surface temperature data or ERA5 composites to calculate Growing Degree Days (GDDs) and then apply machine learning in Vertex AI to predict when crops will reach maturity or to calculate optimal timing for pest management.See NDVI time series animation tutorialCase Study: Learn how Regrow Ag uses Earth Engine and Google CloudStory: Learn how Sayukt is changing the lives of farmersBlog: Learn how Earth Engine can inform more sustainable agriculture practicesCloud partners with agriculture expertiseWoolpert: A partner and leading provider of state-of-the-art geospatial services. Woolpert has helped several organizations in both the public and private sectors build their Earth Engine stack for geospatial intelligence, including agricultural analytics and other land management applications. NGIS: A dedicated geospatial company and Google Partner, NGIS has extensive experience in partnering with leading agriculture organizations to implement Earth Engine. Using Earth Engine, the NGIS team has developed solutions across agricultural nutrition, protection, production, and analytics to operationalize petabytes of satellite imagery to deliver new insights and products. Spatial Informatics Group (SIG): SIG has been providing environmental decision support tools for 25 years. Its expertise includes vegetation cover type identification through spectral discrimination, phenology and seasonal vegetation change analysis, and crop monitoring and yield estimation.Learn more about WoolpertLearn more about NGISLearn more about SIGSee all Earth Engine partnersEnvironmental impact Gather environmental insights; detect and monitor changeFor public sector organizations and companies seeking to tackle emissions and derive insights into the drivers of degradation and effectiveness of interventions, custom analysis can be applied to Earth Engine datasets to detect environmental impacts over time. For example, using annual segmented Landsat time series data from 1984-2019 to depict lake drying in Bolivia, or combining methane data with other datasets—like land cover, forests, water, ecosystems, regional borders, and more—to track methane emissions in a given area over time.See time series modeling tutorialCloud partners with expertise in environmental impactDeloitte's methane emissions quantification solution—built on Google Earth Engine—is a geospatial artificial intelligence (AI) and machine learning (ML) analytics tool designed for organizations to monitor, quantify, and prioritize closure of problematic orphan wells to reduce methane emissions, protect water and air, and mitigate safety risks to improve human and environmental health.Learn more about DeloitteSee all Earth Engine partners
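As a sketch of the methane-tracking idea mentioned in the Environmental impact use case above, the following snippet computes a monthly mean methane concentration over a region from Sentinel-5P data; it assumes an initialized Earth Engine Python client and uses a placeholder region.

import ee

region = ee.Geometry.Rectangle([5.0, 51.0, 7.0, 53.0])  # placeholder region

# Sentinel-5P offline L3 methane product; values are column-averaged dry-air mixing ratios (ppb).
ch4 = (
    ee.ImageCollection("COPERNICUS/S5P/OFFL/L3_CH4")
    .select("CH4_column_volume_mixing_ratio_dry_air")
    .filterDate("2023-01-01", "2024-01-01")
)

def monthly_mean(month):
    # Average all observations for the given calendar month over the region.
    month = ee.Number(month)
    image = ch4.filter(ee.Filter.calendarRange(month, month, "month")).mean()
    value = image.reduceRegion(
        reducer=ee.Reducer.mean(), geometry=region, scale=10_000, maxPixels=1e9
    ).get("CH4_column_volume_mixing_ratio_dry_air")
    return ee.Feature(None, {"month": month, "ch4_ppb": value})

series = ee.FeatureCollection(ee.List.sequence(1, 12).map(monthly_mean))
print(series.getInfo())
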
PricingHow Earth Engine pricing worksEarth Engine pricing is based on usage of Earth Engine resources (compute units and storage) and a monthly platform fee. Plans and usage: Basic - Best for organizations with small teams and small workloads. Includes 2 developer seats, 20 concurrent high-volume API requests, and up to 8 concurrent batch export tasks. Price (USD): $500 per month. Professional - Best for organizations with moderate-sized teams and predictable, time-sensitive, large-scale workloads. Includes 5 developer seats, 500 concurrent high-volume API requests, and up to 20 concurrent batch export tasks. Price (USD): $2,000 per month. Premium - Best for larger teams with business-critical, time-sensitive, large-scale workloads. Premium plan allocations can be customized. Please contact your Google Cloud sales representative for more information. Price (USD): Contact us. Compute (analysis): Earth Engine Compute Units (EECUs) consist of Earth Engine managed workers used to execute tasks. Compute pricing is charged by EECU-hour, and rates vary based on the processing environment you use. Online EECUs - Run computations synchronously and include the output directly in the response: $1.33 per EECU-hour. Batch EECUs - Run computations asynchronously and output results for later access (in Google Cloud Storage, the Earth Engine asset store, etc.): $0.40 per EECU-hour. Storage: $0.026 per GB-month. Additional users: First user free, $500 per month for each additional user*. Learn more about Earth Engine pricing. View all pricing detailsPricing calculatorCalculate your Google Cloud costsEstimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteGet started with Earth EngineQuick startRegister projectQuick introView guideExplore Earth Engine catalogBrowse datasetsFind best practicesBrowse guidesTake advantage of Google CloudLeverage Google Cloud featuresBusiness CaseExplore how enterprises and public sector organizations are leveraging Earth EngineRegrow Ag is accelerating the transition to sustainable food and fiber productionJohn Shriver, Director of Data Science, Regrow Ag"Our ultimate mission is to illuminate and accelerate the world's transition to sustainable food and fiber production. We believe advancing regenerative agriculture can bring resilience to business supply chains. Working with data specialists at Google and platforms such as Google Cloud and Google Earth Engine is key to achieving this goal."Read customer storyRelated contentUnilever and Google Cloud team up to reimagine the future of sustainable sourcingLearn moreHow the U.S.
Forest Service uses Earth Engine and Google Cloud tools to analyze a changing planetLearn morePartners & IntegrationWork with a partner that has deep Earth Engine expertiseEarth Engine Initiative partnersEarth Engine partners, with their geospatial expertise and scalable solutions, enhance Earth Engine's capabilities and help organizations mitigate impact, protect natural resources, and build a sustainable future.See all Earth Engine partners \ No newline at end of file diff --git a/Ecommerce_platform_with_serverless_computing.txt b/Ecommerce_platform_with_serverless_computing.txt new file mode 100644 index 0000000000000000000000000000000000000000..0460fb0277f8a8825d33df4b91eda3b652979530 --- /dev/null +++ b/Ecommerce_platform_with_serverless_computing.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/application-development/ecommerce-serverless +Date Scraped: 2025-02-23T11:48:23.396Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Ecommerce platform with serverless computing Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-24 UTC This guide helps you understand, deploy, and use the Ecommerce platform with serverless computing solution. This solution demonstrates how to build and run an ecommerce application for a retail organization, with a publicly visible online store website. It shows you how to create an application that scales to handle spikes in usage (for example, during peak scale events like a seasonal sale), and that can manage requests based on the visitor's location. This design helps the online store provide a consistent service to geographically distributed customers. This solution is a good starting point if you want to learn how to deploy scalable ecommerce web apps with serverless capabilities. If you want granular operational control, check out the Ecommerce web app deployed on Kubernetes solution. This document assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives This solution guide helps you do the following: Learn how to design a system architecture for an ecommerce website. Optimize an ecommerce website for performance, scale, and responsiveness. Monitor and anticipate load limitations. Use tracing and error reporting to understand and manage problems. Products The solution uses the following Google Cloud products: Cloud Run: A fully managed service that lets you build and deploy serverless containerized apps. Google Cloud handles scaling and other infrastructure tasks so that you can focus on the business logic of your code. Cloud SQL: A cloud-based PostgreSQL database that's fully managed on the Google Cloud infrastructure. Secret Manager: A service that lets you store, manage, and access secrets as binary blobs or text strings. You can use Secret Manager to store database passwords, API keys, or TLS certificates that are needed by an application at runtime. Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types.
Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. Firebase Hosting: A fully managed hosting service to deploy and serve your web applications and static content. Cloud Logging: A service that lets you store, search, analyze, monitor, and alert on logging data and events from Google Cloud and other clouds. Cloud Trace: A distributed tracing system for Google Cloud that helps you understand how long it takes your application to handle incoming requests from users or other applications, and how long it takes to complete operations like RPC calls performed when handling the requests. Error Reporting: This service aggregates and displays errors produced in your running cloud services. Error Reporting groups errors that are considered to have the same root cause. Architecture The following diagram shows the architecture of the solution: Request flow The following is the request processing flow of the ecommerce platform. The steps in the flow are numbered as shown in the preceding architecture diagram. A Firebase Hosting client frontend. The frontend uses Lit and web components for client-side rendering of API data. The web client calls an API backend that is running as a Cloud Run service. The Cloud Run API server is written in Django using the Django REST Framework. The configuration and other secrets for the Python application are stored in Secret Manager. Static assets for the application are stored in Cloud Storage. A Cloud SQL database, using PostgreSQL, is used as the relational database backend for the Python application. Cloud Logging, Cloud Trace, and Error Reporting store logs, OpenTelemetry traces, and error reports sent by other cloud products and the Cloud Run API server. This data enables you to monitor for correct application behavior and to troubleshoot unexpected behavior. Cost For an estimate of the cost of the Google Cloud resources that the Ecommerce platform with serverless computing solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions Note: This list of required IAM permissions is for deploying through the console.
If you deploy by using the Terraform CLI instead, your user account needs the same roles as the following, noted as being granted to the created service account. To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles that are assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. Cloud SQL Admin (roles/cloudsql.admin) Compute Engine Admin (roles/compute.admin) Compute Engine Network Admin (roles/compute.networkAdmin) Firebase Service Management Service Agent (roles/firebase.managementServiceAgent) Firebase Hosting Admin (roles/firebasehosting.admin) Service Account Admin (roles/iam.serviceAccountAdmin) Service Account User (roles/iam.serviceAccountUser) Project IAM Admin (roles/resourcemanager.projectIamAdmin) Cloud Run Admin (roles/run.admin) Secret Manager Admin (roles/secretmanager.admin) Cloud Storage Admin (roles/storage.admin) Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. 
After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Ecommerce platform with serverless computing solution. Go to the Ecommerce platform with serverless computing solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view and use the deployed ecommerce web app, follow the instructions in Explore your Avocano deployment. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! 
Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file.

# This is an example of the terraform.tfvars file.
# The values in this file must match the variable types declared in variables.tf.
# The values in this file override any defaults in variables.tf.

# ID of the project in which you want to deploy the solution
project_id = "PROJECT_ID"

# Google Cloud region where you want to deploy the solution
# Example: us-central1
region = "REGION"

# Google Cloud zone where you want to deploy the solution
# Example: us-central1-a
zone = "ZONE"

# Container Registry that hosts the client image
client_image_host = "hsa-public/serverless-ecommerce"

# Container Registry that hosts the server image
server_image_host = "hsa-public/serverless-ecommerce"

For information about the values that you can assign to the required variables, see the following: PROJECT_ID: Identifying projects REGION and ZONE: Available regions and zones Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again.
Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! To view and use the deployed ecommerce web app, follow the instructions in Explore your Avocano deployment. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Explore your Avocano deployment You have now deployed the Avocano website application! You can visit the Avocano website and look around, and then explore how the solution works in Google Cloud console. Be aware that it can take a few minutes after deploying the application for the site to show up at the provided address. What is Avocano? This solution uses a sample application named Avocano to demonstrate the deployment and management of a web app with many of the tools and products used for serverless applications. Avocano is an application that mimics a real-world implementation of an ecommerce web app. The Avocano frontend presents a faux storefront, where you can add items to your cart, and attempt to complete a checkout process, but then the store reveals it is fake (Avoca--no!). While you can't actually purchase an avocado, the application demonstrates inventory management by decreasing the amount of available product by one. Explore the frontend To launch the frontend of your solution deployment: Open the Firebase console. Select your existing project. Navigate to the Build > Hosting section. Select the Domain ending with web.app. The frontend of your sample application opens in a new browser window. You can now interact with the Avocano website just as its customers would see it, including browsing through products, adding products to the shopping cart, and checking out as a guest. View the autoscaling and concurrency configuration Cloud Run automatically scales up or down container instances from zero depending on traffic, to provide a fast startup time for your app. Understand the settings for autoscaling and concurrency It's important to understand that the settings for autoscaling and concurrency on Cloud Run can affect the performance and costs for your app: Minimum instances: You can set the minimum instances setting to enable idle instances for your service. By raising the minimum instances setting in anticipation of higher traffic, you can minimize response time for the first N users. If your service requires reduced latency, especially when scaling from zero active instances, you can specify a minimum number of container instances to be kept warm and ready to serve requests. 
Maximum instances: Increasing the maximum instances setting in Cloud Run can help you serve exceptionally high anticipated traffic. In such a scenario, you should also evaluate your current quota and consider requesting an increase. Reducing the maximum instances setting can help you avoid unexpected costs or a higher utilization of underlying backing infrastructure (such as your database capacity). Concurrency: The Cloud Run concurrency setting specifies the maximum number of requests that can be processed simultaneously by a given container instance. Optimizing the memory, CPU, and concurrency for your application's behavior ensures that each container instance has the best utilization, and minimizes the need to scale up to new instances. For more information, see Optimize concurrency. View autoscaling and concurrency settings To view the current minimum and maximum instances settings, and concurrency settings for your Cloud Run service: Go to Cloud Run Click the service you are interested in to open the Service details page. Click the Revisions tab. In the details panel at the right, the current minimum instances, maximum instances, and concurrency settings are listed under the Container tab. If you want to learn how to adjust these settings to optimize app performance, see the General development tips in the Cloud Run documentation. View traffic logs You can use the logging tools in Cloud Run to keep watch on traffic to your app and get alerts when problems arise. To view logs for your Cloud Run service: Go to Cloud Run Click the chosen service in the displayed list. Click the Logs tab to get the request and container logs for all revisions of this service. You can filter by log severity level. Cloud Run automatically captures logs from many places: anything written to standard out and standard error, anything in /var/log/, and others. Any manual logging done with the Cloud Logging libraries is also captured. You can also view the logs for this service directly in Cloud Logging by clicking View in Logs Explorer. In the Avocano app, try the following user actions to trigger corresponding output that you can view in the logs. User action: Purchase the shopping cart using Collect as the payment type, where the product amount in the cart does not exceed the inventory count. Log output: An Info log, with httpRequest status of 200. User action: Purchase the shopping cart using Collect as the payment type, where the product amount in the cart exceeds the inventory count. Log output: A Warning log, with httpRequest status of 400. User action: Purchase the shopping cart using Credit as the payment type. Log output: An Error log, with httpRequest status of 501. You can view the code that raises the error that leads to the 400/501 HTTP response in the serializers.py file. Cloud Run notes the response, and generates a corresponding request log entry. You can use log-based alerts to notify you whenever a specific message appears in your included logs. View trace instrumentation and captured traces This solution uses OpenTelemetry Python automatic instrumentation to capture telemetry data for the Avocano application. Understand how tracing is implemented The solution implements the following code and configuration settings in order to generate traces using automatic instrumentation: Add dependencies for Cloud Trace in the requirements.txt file, including the following: opentelemetry-distro: installs the OpenTelemetry API, SDK, and command-line tools. opentelemetry-instrumentation: adds support for Python auto-instrumentation. opentelemetry-exporter-gcp-trace: provides support for exporting traces to Cloud Trace. opentelemetry-resource-detector: provides support for detecting Google Cloud resources. opentelemetry-instrumentation-django: allows tracing of requests for the Django application. Set the IAM binding in the iam.tf file to enable the server to write to Cloud Trace. Configure the OTEL_TRACES_EXPORTER environment variable in the services.tf file to use the exporter for Cloud Trace. In the server/Procfile, configure the server to run the opentelemetry-instrument command on the Avocano app. This command detects packages in Avocano and applies automatic tracing instrumentation on them, if possible. To learn more about collecting Cloud Trace data for Python, see Python and OpenTelemetry.
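If you want to add custom spans alongside the automatic instrumentation described above, the following sketch shows one possible way to create a manual span and export it to Cloud Trace from Python; the tracer name and attributes are illustrative, and the deployed solution relies on the opentelemetry-instrument wrapper rather than this explicit setup.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

# Configure a tracer provider that exports spans to Cloud Trace.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("avocano.checkout")  # illustrative tracer name

def process_checkout(cart_size: int) -> None:
    # Each call produces a span that appears in the Trace list waterfall.
    with tracer.start_as_current_span("process_checkout") as span:
        span.set_attribute("cart.size", cart_size)  # illustrative attribute
        # ... business logic goes here ...

process_checkout(3)
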
View the latency data To view the latency data for requests, follow these steps: Go to Cloud Trace In the Select a trace section of the Trace list page, click the blue dot, which represents a captured trace. The Latency column displays the latency for the captured traces. You can also view trace data using the following visualizations in the Trace list page: Waterfall graph: represents a complete request through the application. Each step in the timeline is a span, which you can click to view details. Cloud Run automatically creates spans for internal operations, such as request handling and load balancing. These spans appear in the same waterfall graph as spans produced by Avocano, allowing you to view the full lifetime of the request. Span details: shows any labels or annotations you added to the app's code when you instrumented it for tracing. If you want to add customized traces, refer to Manual Instrumentation in the OpenTelemetry documentation. Design recommendations This section provides recommendations for using the Ecommerce platform with serverless computing solution to develop an architecture that meets your requirements for security, reliability, cost, and performance. To view the design recommendations for each area, click the appropriate tab. security Enhance security Design focus Recommendations Data encryption By default, Cloud Run encrypts data by using a Google-owned and Google-managed encryption key. To protect your containers by using a key that you control, you can use customer-managed encryption keys. For more information, see Using customer-managed encryption keys. Software supply-chain security To ensure that only authorized container images are deployed to the Cloud Run services, you can use Binary Authorization. restore Improve reliability Design focus Recommendations App scaling The Cloud Run services in the solution are configured to autoscale the container instances horizontally based on the request load. Review and adjust the autoscaling parameters based on your requirements. For more information, see About container instance autoscaling. Request handling To improve the responsiveness of Cloud Run services that store client-specific state on container instances, you can use session affinity. Requests from the same client are routed to the same container instance, on a best-effort basis. For more information, see Setting session affinity (services). Data durability To protect your data against loss, you can use automated backups of the Cloud SQL database. For more information, see About Cloud SQL backups. Database high availability (HA) The Cloud SQL database in the solution is deployed in a single zone.
For HA, you can use a multi-zone configuration. For more information, see About high availability. For more information about region-specific considerations, see Geography and regions. If database HA is a critical requirement, AlloyDB for PostgreSQL is an alternative Google Cloud service that you can consider. Database reliability The Cloud SQL instance in this solution uses the db-custom-2-4096 machine type, which uses two CPUs with 4 GB of memory. This machine type is designed to provide resources for a low-cost database that might be appropriate for test and development environments only. If you need production-grade reliability, consider using a machine type that provides more CPU and memory. A Cloud SQL instance that uses the db-g1-small machine type is not included in the Cloud SQL service level agreement (SLA). For more information about configurations that are excluded from the SLA, see Operational guidelines. Quotas and limits The number of Cloud Run resources is limited. If you anticipate a surge in traffic, for example due to a seasonal or sales event, you should request a quota increase. For more information, see How to increase quota. Some quota requests require manual approval, so you should plan ahead. You can also set alerts on your progress towards reaching your quota. payment Optimize cost Design focus Recommendations Resource efficiency Cloud Run determines the number of requests that should be sent to a container instance based on CPU usage and memory usage. By increasing the maximum concurrency setting, you can reduce the number of container instances that Cloud Run needs to create, and therefore reduce cost. For more information, see Maximum concurrent requests per instance (services). The Cloud Run services in this solution are configured to allocate CPUs only during request processing. When a Cloud Run service finishes handling a request, the container instance's access to CPUs is disabled. For information about the cost and performance effect of this configuration, see CPU allocation (services). speed Improve performance Design focus Recommendations App startup time To reduce the performance effect of cold starts, you can configure the minimum number of Cloud Run container instances to a non-zero value. For more information, see General development tips for Cloud Run. Tune concurrency This solution is tuned to maximize individual container throughput. Cloud Run automatically adjusts the concurrency for serving multiple requests. However, you should adjust the default maximum concurrency if your container is not able to process many concurrent requests, or if your container is able to handle a larger volume of requests. For more information, see Optimize concurrency. Database performance For performance-sensitive applications, you can improve performance of Cloud SQL by using a larger machine type and by increasing the storage capacity. If database performance is a critical requirement, AlloyDB for PostgreSQL is an alternative Google Cloud service that you can consider. Note the following: Before you make any design changes, assess the cost impact and consider potential trade-offs with other features. You can assess the cost impact of design changes by using the Google Cloud Pricing Calculator. To implement design changes in the solution, you need expertise in Terraform coding and advanced knowledge of the Google Cloud services that are used in the solution. 
If you modify the Google-provided Terraform configuration and if you then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For more information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Delete the solution deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-dynamic-python-webapp/infra. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) 
A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. 
Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/infra Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. 
If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified code, create issues in the appropriate GitHub repository: Terraform code Application code GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next This solution demonstrates how to deploy an ecommerce web application using Cloud Run. To continue learning more about Google Cloud products and capabilities, see: General development tips for Cloud Run. Load testing best practices for Cloud Run. End user authentication for Cloud Run tutorial. Serve dynamic content and host microservices using Firebase Hosting. Send feedback \ No newline at end of file diff --git a/Ecommerce_web_app_deployed_on_Kubernetes.txt b/Ecommerce_web_app_deployed_on_Kubernetes.txt new file mode 100644 index 0000000000000000000000000000000000000000..d4e6a13d31a4883a4f0c5466555768c6d1467c57 --- /dev/null +++ b/Ecommerce_web_app_deployed_on_Kubernetes.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/application-development/ecommerce-microservices +Date Scraped: 2025-02-23T11:48:25.751Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Ecommerce web app deployed on Kubernetes Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-29 UTC This guide helps you understand, deploy, and use the Ecommerce web app deployed on Kubernetes Jump Start Solution. This solution demonstrates how to build and run an ecommerce application for a retail organization, with a publicly visible shop website. It shows you how to create an application that can scale to handle spikes in usage (for example, during peak scale events like a sale), and can manage requests based on the visitor's location, helping the online shop provide a consistent service to a geographically distributed customer base. 
The application is deployed as multiple small services, or microservices, running on Google-managed Kubernetes clusters in Google Cloud. Each service performs a specific task such as providing the web frontend or managing the shopping cart. This solution is a good starting point if you need the configurability and flexibility offered by Kubernetes features when managing your website. A microservices architecture like this one is also particularly useful if you have a larger engineering team, as it lets different teams or developers create and manage different parts of the application separately. If this doesn't sound like your organization, or if you're not sure, consider also trying our Ecommerce web app deployed on Cloud Run solution. It uses Cloud Run to deploy a similar online shop application without the need for Kubernetes, and doesn't use microservices. This document assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. About Cymbal Shops The application used in this solution is a demo online shop for an imaginary retail chain called Cymbal Shops, with a website that visitors can use to browse through the company's products, add products to their cart, go to the checkout, and purchase products — you can try it yourself after deploying the solution (though sadly you can't really buy any of the products). Cymbal Shops have customers in both the US and Europe, so they need a website solution that isn't slower for some visitors than others. Cymbal Shops also often have sales, and get lots of shoppers around holidays, so they need their website to be able to cope with surges in traffic without slowing down or having other issues, and without having to spend money on Cloud resources that they don't actually need. Products used The solution uses the following Google Cloud products: Google Kubernetes Engine (GKE): A managed environment for deploying, managing, and scaling containerized applications using Google infrastructure. Multi Cluster Ingress: A Google-hosted service that supports deploying shared load balancing resources across clusters and across regions. For information about how these products are configured and how they interact, see the next section. Architecture The solution deploys an ecommerce application with a publicly accessible web interface. The following diagram shows the architecture of the Google Cloud resources that the solution deploys: Request flow The following is the request processing flow for the deployed application. The steps in the flow are numbered as shown in the preceding architecture diagram. A user interacts with the Cymbal Shops website in their browser, which sends an HTTP request to a Google Cloud Load Balancer. This is a load balancer that sits at the edge of Google's network and directs traffic to the appropriate destination within Google Cloud. The user request is directed to one of the two GKE clusters where the application frontend is running. By default this is the cluster nearest the user: in the preceding diagram, the nearest cluster to the user is in Europe, so that's where the request goes. 
You'll learn more about how this is configured using the Multi Cluster Ingress service in the next section. The request is handled by one or more of the backend microservices that make up the rest of the Cymbal Shops application. The application's cartservice stores the state of the user's shopping cart while they're visiting the site, using a Redis database. One Redis database is deployed to the US cluster only. Components and configuration The Cymbal Shops app runs on Google Kubernetes Engine (GKE) clusters. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications, where the application is packaged (or containerized) with its dependencies in a way that's independent of its environment. A Kubernetes cluster is a set of machines, called nodes, that you use to run your containers. GKE with Autopilot is Google's scalable and fully automated Kubernetes service, where your clusters are made up of Compute Engine virtual machines on Google Cloud. The Cymbal Shops solution includes the following components: Three GKE clusters, as follows: One cluster, known as a config cluster, that's used to manage the Multi Cluster Ingress service for the application. Multi Cluster Ingress is a service that lets you load balance traffic across a specified set of clusters, with a single virtual IP address for your application. Two clusters in different regions to run the Cymbal Shops microservices. Each cluster has identical Cymbal Shops services, running in the same Kubernetes namespaces. This lets Multi Cluster Ingress treat both frontend services as if they were the same service, choosing the cluster to send traffic to depending on proximity to the website visitor. Multi Cluster Ingress can also be used to make sure traffic is sent only to healthy clusters, perform gradual rollouts when upgrading, and more. All three GKE clusters have Autopilot enabled. Autopilot is a GKE feature that lets you create clusters where Google manages your cluster configuration, including your nodes, scaling, security, and other preconfigured settings. For Cymbal Shops, this means that when there are more visitors to the site than usual, the clusters can automatically scale up the amount of CPU, memory, and storage they use based on the application's needs. With Autopilot enabled, the Cymbal Shops platform administrator doesn't have to worry about requesting (and paying for) more Cloud resources than they actually need most of the time, or risk having clusters that are too under-resourced to cope with increased traffic on busy days. Cost See the Ecommerce web app deployed on Kubernetes page for an estimated monthly cost based on the default resource locations and estimated usage time. You can find out more about pricing for GKE, Autopilot, and Multi Cluster Ingress in the GKE pricing page. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. 
Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccounts.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles that are assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/container.admin roles/gkehub.editor roles/compute.networkAdmin roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/resourcemanager.projectIamAdmin roles/serviceusage.serviceUsageAdmin Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution.
Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Ecommerce web app deployed on Kubernetes solution. Go to the Ecommerce web app deployed on Kubernetes solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Continue to Explore Cymbal Shops to find out how to test and explore your solution. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. When you no longer need the deployment, you can delete it by using the Terraform CLI, as described in Delete using the Terraform CLI. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-ecommerce-microservices-on-gke/infra. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-ecommerce-microservices-on-gke/infra Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-ecommerce-microservices-on-gke/infra. 
If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-ecommerce-microservices-on-gke/infra directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" For information about the values that you can assign to the project_id variable, see Identifying projects. Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-ecommerce-microservices-on-gke/infra. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-ecommerce-microservices-on-gke/infra. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! 
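If you prefer not to store values in a terraform.tfvars file, Terraform also accepts variable values on the command line or through environment variables. The following is a minimal sketch that assumes the same project_id variable described earlier; PROJECT_ID is a placeholder for your own project ID:

# Pass the variable directly to the command
terraform apply -var="project_id=PROJECT_ID"

# Or export it in the TF_VAR_ form that Terraform reads automatically
export TF_VAR_project_id="PROJECT_ID"
terraform apply

Values set this way aren't persisted, so you need to supply them again for later terraform plan, apply, or destroy runs.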
When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Continue to Explore Cymbal Shops to find out how to test and explore your solution. Explore Cymbal Shops Congratulations, you have now deployed the Cymbal Shops website! You can visit the Cymbal Shops website and look around, then explore how the solution works in the Google Cloud console. Be aware that it can take about five minutes after successfully deploying the application for the site to appear at the provided address. Visit the Cymbal Shops site How you find the Cymbal Shops site depends on how you deployed the solution. Console deployments If you deployed the solution through the console, you can visit the site directly from the Solution deployments page. If you have just finished deploying the solution, click View web app to visit the site. Otherwise, click the more_vert Actions menu for the deployment, and then select View web app. Terraform deployments If you deployed the solution by using the Terraform CLI, first find the IP address for the frontend provided by Multi Cluster Ingress. You can do this from the command line by using the Google Cloud CLI (the simplest approach), or from the Google Cloud console. gcloud Make sure you have the latest version of the Google Cloud CLI installed. We recommend running the command from Cloud Shell, where the tool is already installed for you. Run the following command to get the IP address, replacing PROJECT_ID with the ID of your Google Cloud project: gcloud compute addresses list \ --filter="name=('multi-cluster-ingress-ip-address-1')" \ --project=PROJECT_ID Copy and paste the address returned by the command into your browser to visit the website. Console Go to the Google Kubernetes Engine page in the Google Cloud console. Go to Google Kubernetes Engine Select Object Browser in the navigation menu. In the Object Browser list, expand the networking.gke.io section, then select MultiClusterIngress. You may need to scroll further to find this section. In the MultiClusterIngress page, select frontend-multi-cluster-ingress. In the frontend-multi-cluster-ingress details page, find the IP address. Click this address to visit the website. Explore the website You can now interact with the Cymbal Shops website just as its customers would see it, including browsing through products, adding products to the cart, and checking out as a guest. Explore your solution To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour Delete the deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. 
For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-ecommerce-microservices-on-gke/infra. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs.
Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. Error creating Feature: Resource already exists If you deploy this solution into a project where Multi Cluster Ingress is already configured, you will see an error similar to the following: Error: Error creating Feature: Resource already exists - apply blocked by lifecycle params: &beta.Feature{Name:(*string)(0xc0017d1d00), Labels:map[string]string{}, ResourceState:(*beta.FeatureResourceState)(0xc001b9d890), Spec:(*beta.FeatureSpec)(0xc001792f00), State:(*beta.FeatureState)(0xc001792f50), CreateTime:(*string)(0xc001792fd0), UpdateTime:(*string)(0xc001792ff0), DeleteTime:(*string)(nil), Project:(*string)(0xc0017d1d40), Location:(*string)(0xc0017d1ca0)}. running tf apply: terraform apply failed: running terraform failed: exit status 1 This is because this solution deploys a new config GKE cluster into the selected project. A project (specifically, a project's fleet) can contain only a single config cluster for configuring Multi Cluster Ingress. To fix this issue, either remove the existing Multi Cluster Ingress config cluster, or start again in a new project. Error: job: default/kubernetes-manifests-deployer-job is in failed state This solution's Terraform deploys a Kubernetes Job called kubernetes-manifests-deployer-job. This Kubernetes Job deploys the Kubernetes resources (Cymbal Shops microservices, the Redis database, and so on) needed for this solution into all three clusters. Because this Kubernetes Job is complex and relies on the readiness of all three clusters, it may occasionally fail with an error message similar to: kubernetes_job.kubernetes_manifests_deployer_job: Creation errored after 5m8s ... Error: job: default/kubernetes-manifests-deployer-job is in failed state If you receive this error, it's likely that some, if not all, of the solution's Google Cloud infrastructure is already provisioned, even though the rest of the deployment did not complete successfully. We recommend deleting the project to avoid getting billed for these resources, and retrying the deployment in a new, separate project. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. 
After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. 
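Before you call the API in the next steps, you can optionally confirm that none of the variables are empty. This check isn't part of the documented procedure; it's a small sanity-check sketch that follows the same echo convention used later in this guide:

# Print the values; fix any variable that prints as blank before continuing
echo "Project: ${PROJECT_ID}  Region: ${REGION}  Deployment: ${DEPLOYMENT_NAME}"

If any value is missing, re-run the corresponding export command with the correct value.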
Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/infra Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. We also recommend reviewing the list of (both open and closed) issues in the solution's GitHub repository. What's next If you're new to containers and Kubernetes: Try this tutorial. 
It's aimed at Cloud Service Mesh users, but is useful for anyone who wants to see how to go from source code to a container running on GKE. Read our Kubernetes comic! Visit the Kubernetes documentation site. Learn about GKE Learn about Autopilot Learn about Multi Cluster Ingress Send feedback \ No newline at end of file diff --git a/Edge_hybrid_pattern.txt b/Edge_hybrid_pattern.txt new file mode 100644 index 0000000000000000000000000000000000000000..925fa6ab737ccadc82ac7627f971a8f27967b90a --- /dev/null +++ b/Edge_hybrid_pattern.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/edge-hybrid-pattern +Date Scraped: 2025-02-23T11:50:07.543Z + +Content: +Home Docs Cloud Architecture Center Send feedback Edge hybrid pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC Running workloads in the cloud requires that clients in some scenarios have fast and reliable internet connectivity. Given today's networks, this requirement rarely poses a challenge for cloud adoption. There are, however, scenarios when you can't rely on continuous connectivity, such as: Sea-going vessels and other vehicles might be connected only intermittently or have access only to high-latency satellite links. Factories or power plants might be connected to the internet, but these facilities might have reliability requirements that exceed the availability claims of their internet provider. Retail stores and supermarkets might be connected only occasionally or use links that don't provide the necessary reliability or throughput to handle business-critical transactions. The edge hybrid architecture pattern addresses these challenges by running time- and business-critical workloads locally, at the edge of the network, while using the cloud for all other kinds of workloads. In an edge hybrid architecture, the internet link is a noncritical component that is used for management purposes and to synchronize or upload data, often asynchronously, but isn't involved in time- or business-critical transactions. Advantages Running certain workloads at the edge and other workloads in the cloud offers several advantages: Inbound traffic—moving data from the edge to Google Cloud—might be free of charge. Running workloads that are business- and time-critical at the edge helps ensure low latency and self-sufficiency. If internet connectivity fails or is temporarily unavailable, you can still run all important transactions. At the same time, you can benefit from using the cloud for a significant portion of your overall workload. You can reuse existing investments in computing and storage equipment. Over time, you can incrementally reduce the fraction of workloads that are run at the edge and move them to the cloud, either by reworking certain applications or by equipping some edge locations with internet links that are more reliable. Internet of Things (IoT)-related projects can become more cost-efficient by performing data computations locally. This allows enterprises to run and process some services locally at the edge, closer to the data sources. It also allows enterprises to selectively send data to the cloud, which can help to reduce the capacity, data transfer, processing, and overall costs of the IoT solution. Edge computing can act as an intermediate communication layer between legacy and modernized services. For example, services at the edge might run a containerized API gateway such as Apigee hybrid.
Such a gateway enables legacy applications and systems to integrate with modernized services, like IoT solutions. Best practices Consider the following recommendations when implementing the edge hybrid architecture pattern: If communication is unidirectional, use the gated ingress pattern. If communication is bidirectional, consider the gated egress and gated ingress pattern. If the solution consists of many edge remote sites connecting to Google Cloud over the public internet, you can use a software-defined WAN (SD-WAN) solution. You can also use Network Connectivity Center with a third-party SD-WAN router supported by a Google Cloud partner to simplify the provisioning and management of secure connectivity at scale. Minimize dependencies between systems that are running at the edge and systems that are running in the cloud environment. Each dependency can undermine the reliability and latency advantages of an edge hybrid setup. To manage and operate multiple edge locations efficiently, you should have a centralized management plane and monitoring solution in the cloud. Ensure that CI/CD pipelines along with tooling for deployment and monitoring are consistent across cloud and edge environments. Consider using containers and Kubernetes when applicable and feasible, to abstract away differences among various edge locations and also among edge locations and the cloud. Because Kubernetes provides a common runtime layer, you can develop, run, and operate workloads consistently across computing environments. You can also move workloads between the edge and the cloud. To simplify the hybrid setup and operation, you can use GKE Enterprise for this architecture (if containers are used across the environments). Consider the possible connectivity options that you have to connect a GKE Enterprise cluster running in your on-premises or edge environment to Google Cloud. As part of this pattern, although some GKE Enterprise components might continue to function during a temporary connectivity interruption to Google Cloud, don't treat running GKE Enterprise while it's disconnected from Google Cloud as a nominal working mode. For more information, see Impact of temporary disconnection from Google Cloud. To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backend and edge services, we recommend, where applicable, deploying an API gateway or proxy as a unifying facade. This gateway or proxy acts as a centralized control point and performs the following functions: Implements additional security measures. Shields client apps and other services from backend code changes. Facilitates audit trails for communication between all cross-environment applications and their decoupled components. Acts as an intermediate communication layer between legacy and modernized services. Apigee and Apigee Hybrid let you host and manage enterprise-grade and hybrid gateways across on-premises environments, edge, other clouds, and Google Cloud environments. Establish common identity between environments so that systems can authenticate securely across environment boundaries. Because the data that is exchanged between environments might be sensitive, ensure that all communication is encrypted in transit by using VPN tunnels, TLS, or both.
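As a minimal illustration of the encryption-in-transit recommendation, the following sketch checks that an edge-facing endpoint negotiates TLS and that plaintext HTTP isn't served. The hostname edge-gateway.example.com is a hypothetical placeholder, not part of this pattern's reference resources:

# Confirm that the hypothetical edge gateway completes a TLS handshake (certificate details are printed on success)
openssl s_client -connect edge-gateway.example.com:443 -brief </dev/null

# Confirm that plaintext HTTP is refused or redirected (expect a 301/308 redirect or a connection failure)
curl -sS -o /dev/null -w "%{http_code}\n" http://edge-gateway.example.com/

Checks like these fit naturally into the consistent CI/CD and monitoring tooling that this pattern recommends across cloud and edge environments.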
Previous arrow_back Analytics hybrid and multicloud patterns Next Environment hybrid pattern arrow_forward Send feedback \ No newline at end of file diff --git a/Education.txt b/Education.txt new file mode 100644 index 0000000000000000000000000000000000000000..15dfefd9ff15ca251a4e40014ac352caaa9eea62 --- /dev/null +++ b/Education.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/edu/higher-education +Date Scraped: 2025-02-23T11:58:04.672Z + +Content: +Discover how generative AI can enhance efficiency, improve student services, and drive innovation at your institutions. Attend Google Public Sector's GenAI Live + Labs on February 27 to get started.Google Cloud for EducationWe’re committed to advancing learning for everyone. Explore our Cloud education solutions and success stories.Learn more about Cloud for edtech, researchers, faculty, and students.Contact salesVIDEOSee how institutions drive student success with Google Cloud3:06SolutionsEducation solutionsModernize and secure your institution and dataSecurity upgrades with Google Workspace for Education PlusGoogle Workspace for Education Plus offers you access to enterprise-grade tools, including the security center, security investigation tool, and anomaly detection.Security analytics and operationsGoogle's secure-by-design infrastructure and global network make it easy to detect, investigate, and help stop threats that target your school and your users.Backup and disaster recoveryStay as prepared as possible for all situations, from natural disasters to short notice data requests, with Google Cloud’s backup and disaster recovery solutions.Modernize your infrastructureMigrate and modernize your workloads on Google’s secure, reliable infrastructure while keeping up with educational complexity and your own goals.Application modernizationMeet your institution’s needs and build new apps across hybrid or multi-cloud—quickly, flexibly, and more securely with our application modernization portfolio.Data warehouse modernizationKeep pace with the growth of data, derive real-time insights, and break down silos when your institution modernizes with Google Cloud’s data management solutions.Improve student success with the cloud, AI, and machine learning Google Cloud's learning platformGoogle Cloud's learning platform is a suite of applications and APIs that provides new ways for educators to share curriculum and provide on-demand learning.Interactive tutorPart of our AI-powered learning platform, the interactive tutor can provide assistance to students in learning new skills and information.Virtual agentsGoogle Cloud's Contact Center AI powers virtual agents that can provide answers to students’ inquiries, while freeing up staff to help with more difficult issues.Collaboration toolsGoogle Workspace for Education enables remote learning, encourages innovation, and streamlines administrative tasks—without disrupting your current workflows.ChromebooksEducators and students alike can use these simple, secure, and shareable devices to create and collaborate.Virtual desktop infrastructureVirtual labs and remote access to compute power empower educators to offer distance learning without disruption.Preparing students for careersGet your instructors and students access to learning programs, curriculum, and communities that enhance learning and prepare students for life after graduation.Accelerate groundbreaking researchRAD Lab for research, development and prototypingGoogle Cloud’s new sandbox environment, RAD Lab, helps teams move quickly from research and 
development to production.High performance computingGoogle Cloud’s flexible and scalable offerings help accelerate time to completion, so you can convert ideas into discoveries and inspirations into products.Genomics and clinical researchGenomics data is growing dramatically, with petabytes becoming exabytes. Use Google Cloud to store, analyze, and share large, complex datasets seamlessly at scale.Healthcare and medical imagingWork more closely with researchers, clinicians, and imaging specialists by de-identifying and integrating your data with our APIs to derive new medical insights.Research data warehouseMoving from a traditional data warehouse to Google Cloud means reduced costs, faster data queries and discoveries, and scalable storage to meet research demands.Research credits and discountsApply for Google Cloud research credits and discounts to fuel new innovations, deepen your cloud expertise, and get hands-on training with online labs.Student Success ServicesLearn how you can transform the student experience with Google Cloud.Contact salesSee how you can use technology to engage and support students throughout their education.Watch the webinarCustomersEducation success storiesVideoUSC team screens 680M possible drug compounds in 1 day1:22Case studyPenn State World Campus pioneers using AI for academic advising support services5-minute readCase studyLafayette College rolls out Google Workspace for Education4-minute readBlog postCollegis helps schools unlock value from data5-min readSee all customersPartnersPartners for educationFind a trusted partner who can help you purchase and deploy Google Cloud for education solutions.Expand allSolution partnersTechnology partnersConsulting and implementation partnersContract vehicles and procurementSee all partnersSecuritySecure, enterprise-ready tools support your compliance needsGoogle is committed to building solutions and products that help protect student and educator privacy and provide best-in-class security for your institution. Our industry-leading safeguards and privacy policies put you in control of your school’s data. We understand the unique needs of education; we built Google Workspace for Education, Chromebooks, and Google Cloud to be used in compliance with state and federal regulations like COPPA and FERPA.Learn moreCOPPAFERPAHIPAAFedRAMPISO/IEC 27018SOC 3What's newEducation news and eventsEventJoin us at the Google Public Sector SummitRegister nowBlog postUC Riverside expands research capacity with Google CloudRead the blogBlog postPeer supported mental health with Project håpRead the blogBlog postIntroducing Student Success Services from Google CloudRead the blogBlog postProtect your institution with Google Cloud's Security Command CenterRead the blogBlog postLearningMate brings Student Success Services to more learnersRead the blogTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all solutions \ No newline at end of file diff --git a/Ensure_operational_readiness_and_performance_using_CloudOps.txt b/Ensure_operational_readiness_and_performance_using_CloudOps.txt new file mode 100644 index 0000000000000000000000000000000000000000..355f6f2835ee5f1e33b732b893f3ce6179c01e26 --- /dev/null +++ b/Ensure_operational_readiness_and_performance_using_CloudOps.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/operational-excellence/operational-readiness-and-performance-using-cloudops +Date Scraped: 2025-02-23T11:42:42.053Z + +Content: +Home Docs Cloud Architecture Center Send feedback Ensure operational readiness and performance using CloudOps Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This principle in the operational excellence pillar of the Google Cloud Architecture Framework helps you to ensure operational readiness and performance of your cloud workloads. It emphasizes establishing clear expectations and commitments for service performance, implementing robust monitoring and alerting, conducting performance testing, and proactively planning for capacity needs. Principle overview Different organizations might interpret operational readiness differently. Operational readiness is how your organization prepares to successfully operate workloads on Google Cloud. Preparing to operate a complex, multilayered cloud workload requires careful planning for both go-live and day-2 operations. These operations are often called CloudOps. Focus areas of operational readiness Operational readiness consists of four focus areas. Each focus area consists of a set of activities and components that are necessary to prepare to operate a complex application or environment in Google Cloud. The following table lists the components and activities of each focus area: Note: The recommendations in the operational excellence pillar of the Architecture Framework are relevant to one or more of these operational-readiness focus areas. Focus area of operational readiness Activities and components Workforce Defining clear roles and responsibilities for the teams that manage and operate the cloud resources. Ensuring that team members have appropriate skills. Developing a learning program. Establishing a clear team structure. Hiring the required talent. Processes Observability. Managing service disruptions. Cloud delivery. Core cloud operations. Tooling Tools that are required to support CloudOps processes. Governance Service levels and reporting. Cloud financials. Cloud operating model. Architectural review and governance boards. Cloud architecture and compliance. Recommendations To ensure operational readiness and performance by using CloudOps, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Define SLOs and SLAs A core responsibility of the cloud operations team is to define service level objectives (SLOs) and service level agreements (SLAs) for all of the critical workloads. This recommendation is relevant to the governance focus area of operational readiness.
SLOs must be specific, measurable, achievable, relevant, and time-bound (SMART), and they must reflect the level of service and performance that you want. Specific: Clearly articulates the required level of service and performance. Measurable: Quantifiable and trackable. Achievable: Attainable within the limits of your organization's capabilities and resources. Relevant: Aligned with business goals and priorities. Time-bound: Has a defined timeframe for measurement and evaluation. For example, an SLO for a web application might be "99.9% availability" or "average response time less than 200 ms." Such SLOs clearly define the required level of service and performance for the web application, and the SLOs can be measured and tracked over time. SLAs outline the commitments to customers regarding service availability, performance, and support, including any penalties or remedies for noncompliance. SLAs must include specific details about the services that are provided, the level of service that can be expected, the responsibilities of both the service provider and the customer, and any penalties or remedies for noncompliance. SLAs serve as a contractual agreement between the two parties, ensuring that both have a clear understanding of the expectations and obligations that are associated with the cloud service. Google Cloud provides tools like Cloud Monitoring and service level indicators (SLIs) to help you define and track SLOs. Cloud Monitoring provides comprehensive monitoring and observability capabilities that enable your organization to collect and analyze metrics that are related to the availability, performance, and latency of cloud-based applications and services. SLIs are specific metrics that you can use to measure and track SLOs over time. By utilizing these tools, you can effectively monitor and manage cloud services, and ensure that they meet the SLOs and SLAs. Clearly defining and communicating SLOs and SLAs for all of your critical cloud services helps to ensure reliability and performance of your deployed applications and services. Implement comprehensive observability To get real-time visibility into the health and performance of your cloud environment, we recommend that you use a combination of Google Cloud Observability tools and third-party solutions. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. Implementing a combination of observability solutions provides you with a comprehensive observability strategy that covers various aspects of your cloud infrastructure and applications. Google Cloud Observability is a unified platform for collecting, analyzing, and visualizing metrics, logs, and traces from various Google Cloud services, applications, and external sources. By using Cloud Monitoring, you can gain insights into resource utilization, performance characteristics, and overall health of your resources. To ensure comprehensive monitoring, monitor important metrics that align with system health indicators such as CPU utilization, memory usage, network traffic, disk I/O, and application response times. You must also consider business-specific metrics. By tracking these metrics, you can identify potential bottlenecks, performance issues, and resource constraints. Additionally, you can set up alerts to notify relevant teams proactively about potential issues or anomalies. To enhance your monitoring capabilities further, you can integrate third-party solutions with Google Cloud Observability. 
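To make an availability target like the 99.9% SLO example above concrete, it can help to translate the SLO into an error budget. The following bash sketch is purely illustrative arithmetic (the target and the 30-day window are assumptions), and the resulting budget is what many teams track against SLIs in Cloud Monitoring:

# Illustrative error-budget calculation for a 99.9% availability SLO
# measured over a 30-day rolling window.
SLO_TARGET=0.999
WINDOW_MINUTES=$((30 * 24 * 60))   # 43200 minutes in a 30-day window
awk -v t="$SLO_TARGET" -v w="$WINDOW_MINUTES" \
  'BEGIN { printf "Allowed downtime: %.1f minutes per 30 days\n", (1 - t) * w }'

Tracking how much of this budget has been consumed is a common way to decide when to prioritize reliability work over new features.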
These solutions can provide additional functionality, such as advanced analytics, machine learning-powered anomaly detection, and incident management capabilities. This combination of Google Cloud Observability tools and third-party solutions lets you create a robust and customizable monitoring ecosystem that's tailored to your specific needs. By using this combination approach, you can proactively identify and address issues, optimize resource utilization, and ensure the overall reliability and availability of your cloud applications and services. Implement performance and load testing Performing regular performance testing helps you to ensure that your cloud-based applications and infrastructure can handle peak loads and maintain optimal performance. Load testing simulates realistic traffic patterns. Stress testing pushes the system to its limits to identify potential bottlenecks and performance limitations. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. Tools like Cloud Load Balancing and load testing services can help you to simulate real-world traffic patterns and stress-test your applications. These tools provide valuable insights into how your system behaves under various load conditions, and can help you to identify areas that require optimization. Based on the results of performance testing, you can make decisions to optimize your cloud infrastructure and applications for optimal performance and scalability. This optimization might involve adjusting resource allocation, tuning configurations, or implementing caching mechanisms. For example, if you find that your application is experiencing slowdowns during periods of high traffic, you might need to increase the number of virtual machines or containers that are allocated to the application. Alternatively, you might need to adjust the configuration of your web server or database to improve performance. By regularly conducting performance testing and implementing the necessary optimizations, you can ensure that your cloud-based applications and infrastructure always run at peak performance, and deliver a seamless and responsive experience for your users. Doing so can help you to maintain a competitive advantage and build trust with your customers. Plan and manage capacity Proactively planning for future capacity needs—both organic or inorganic—helps you to ensure the smooth operation and scalability of your cloud-based systems. This recommendation is relevant to the processes focus area of operational readiness. Planning for future capacity includes understanding and managing quotas for various resources like compute instances, storage, and API requests. By analyzing historical usage patterns, growth projections, and business requirements, you can accurately anticipate future capacity requirements. You can use tools like Cloud Monitoring and BigQuery to collect and analyze usage data, identify trends, and forecast future demand. Historical usage patterns provide valuable insights into resource utilization over time. By examining metrics like CPU utilization, memory usage, and network traffic, you can identify periods of high demand and potential bottlenecks. Additionally, you can help to estimate future capacity needs by making growth projections based on factors like growth in the user base, new products and features, and marketing campaigns. When you assess capacity needs, you should also consider business requirements like SLAs and performance targets. 
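As one input to the capacity-planning activities described above, you can inspect current quota usage and limits before you forecast demand. The following is a minimal sketch; the region name is an assumption, and the output is only one signal alongside the historical metrics and growth projections that this section discusses:

# Current usage and limits for Compute Engine quotas in one region.
gcloud compute regions describe us-central1 --format="yaml(quotas)"

# Project-wide Compute Engine quotas, such as those for global resources.
gcloud compute project-info describe --format="yaml(quotas)"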
When you determine the resource sizing for a workload, consider factors that can affect utilization of resources. Seasonal variations like holiday shopping periods or end-of-quarter sales can lead to temporary spikes in demand. Planned events like product launches or marketing campaigns can also significantly increase traffic. To make sure that your primary and disaster recovery (DR) system can handle unexpected surges in demand, plan for capacity that can support graceful failover during disruptions like natural disasters and cyberattacks. Autoscaling is an important strategy for dynamically adjusting your cloud resources based on workload fluctuations. By using autoscaling policies, you can automatically scale compute instances, storage, and other resources in response to changing demand. This ensures optimal performance during peak periods while minimizing costs when resource utilization is low. Autoscaling algorithms use metrics like CPU utilization, memory usage, and queue depth to determine when to scale resources. Continuously monitor and optimize To manage and optimize cloud workloads, you must establish a process for continuously monitoring and analyzing performance metrics. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. To establish a process for continuous monitoring and analysis, you track, collect, and evaluate data that's related to various aspects of your cloud environment. By using this data, you can proactively identify areas for improvement, optimize resource utilization, and ensure that your cloud infrastructure consistently meets or exceeds your performance expectations. An important aspect of performance monitoring is regularly reviewing logs and traces. Logs provide valuable insights into system events, errors, and warnings. Traces provide detailed information about the flow of requests through your application. By analyzing logs and traces, you can identify potential issues, identify the root causes of problems, and get a better understanding of how your applications behave under different conditions. Metrics like the round-trip time between services can help you to identify and understand bottlenecks that are in your workloads. Further, you can use performance-tuning techniques to significantly enhance application response times and overall efficiency. The following are examples of techniques that you can use: Caching: Store frequently accessed data in memory to reduce the need for repeated database queries or API calls. Database optimization: Use techniques like indexing and query optimization to improve the performance of database operations. Code profiling: Identify areas of your code that consume excessive resources or cause performance issues. By applying these techniques, you can optimize your applications and ensure that they run efficiently in the cloud. 
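As a starting point for the log review described above, you can query recent error-level entries from your Compute Engine VMs. This is a minimal sketch; the filter, freshness window, and limit are assumptions that you would adapt to your own workloads:

# List recent ERROR (or higher severity) log entries from Compute Engine VMs.
gcloud logging read \
  'resource.type="gce_instance" AND severity>=ERROR' \
  --freshness=1h \
  --limit=20 \
  --format="table(timestamp, resource.labels.instance_id, severity, textPayload)"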
Previous arrow_back Overview Next Manage incidents and problems arrow_forward Send feedback \ No newline at end of file diff --git a/Enterprise_application_on_Compute_Engine_VMs_with_Oracle_Exadata_in_Google_Cloud.txt b/Enterprise_application_on_Compute_Engine_VMs_with_Oracle_Exadata_in_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..040a552db20d78468b105d23f0ff7ea78fd31844 --- /dev/null +++ b/Enterprise_application_on_Compute_Engine_VMs_with_Oracle_Exadata_in_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-app-oracle-exadata-database-compute-engine +Date Scraped: 2025-02-23T11:49:33.497Z + +Content: +Home Docs Cloud Architecture Center Send feedback Enterprise application on Compute Engine VMs with Oracle Exadata in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-30 UTC This document provides a reference architecture for a highly available enterprise application that is hosted on Compute Engine virtual machines (VMs) with low-latency connectivity to Oracle Cloud Infrastructure (OCI) Exadata databases that run in Google Cloud. The intended audience for this document is cloud architects and Oracle database administrators. The document assumes that you're familiar with Compute Engine and Oracle Exadata Database Service. If you use Oracle Exadata or Oracle Real Application Clusters (Oracle RAC) to run Oracle databases on-premises, you can efficiently migrate your applications to Google Cloud and run your databases on Oracle Database@Google Cloud. Oracle Database@Google Cloud is a Google Cloud Marketplace offering that lets you run Oracle Exadata Database Service and Oracle Autonomous Database directly inside Google Cloud. If you don't need the Oracle RAC capability or if you need an Oracle Database version other than 19c and 23ai, then you can run self-managed Oracle databases on Compute Engine VMs. For more information, see Enterprise application with Oracle Database on Compute Engine. Architecture The following diagram shows a high-level view of the architecture: In the preceding diagram, an external load balancer receives requests from users of a public-facing application and it distributes the requests to frontend web servers. The web servers forward the user requests through an internal load balancer to application servers. The application servers read data from and write to databases in Oracle Database@Google Cloud. Administrators and OCI services can connect and interact with the Oracle databases. The following diagram shows a detailed view of the architecture: In this architecture, the web tier and application tier run in active-active mode on Compute Engine VMs that are distributed across two zones within a Google Cloud region. The application uses Oracle Exadata databases in the same Google Cloud region. All the components in the architecture are in a single Google Cloud region. This architecture is aligned with the regional deployment archetype. You can adapt this architecture to build a topology that is robust against regional outages by using the multi-regional deployment archetype. For more information, see Multi-regional deployment on Compute Engine and also the guidance in the Reliability section later in this document. 
The architecture that's shown in the preceding diagram includes the following components: Component Purpose Regional external Application Load Balancer The regional external Application Load Balancer receives user requests and distributes them to the web tier VMs. Google Cloud Armor security policy The Google Cloud Armor security policy helps to protect your application stack against threats like distributed denial-of-service (DDoS) attacks and cross-site scripting (XSS). Regional managed instance group (MIG) for the web tier The web tier of the application is deployed on Compute Engine VMs that are part of a regional MIG. This MIG is the backend for the external Application Load Balancer. The MIG contains Compute Engine VMs in two zones. Each of these VMs hosts an independent instance of the web tier of the application. Regional internal Application Load Balancer The regional internal Application Load Balancer distributes traffic from the web tier VMs to the application tier VMs. Regional MIG for the application tier The application tier, such as an Oracle WebLogic Server cluster, is deployed on Compute Engine VMs that are part of a regional MIG. This MIG is the backend for the internal Application Load Balancer. The MIG contains Compute Engine VMs in two zones. Each VM hosts an independent instance of the application server. Virtual Private Cloud (VPC) network and subnet All of the Google Cloud resources in the architecture use a single VPC network. Depending on your requirements, you can choose to build an architecture that uses multiple networks. For more information, see Deciding whether to create multiple VPC networks. Oracle Database@Google Cloud The application servers read data from and write to Oracle databases in Oracle Exadata Database Service. You provision Oracle Exadata Database Service by using Oracle Database@Google Cloud, a Cloud Marketplace offering that lets you run Oracle databases on Oracle-managed hardware within a Google Cloud data center. You use Google Cloud interfaces like the Google Cloud console, Google Cloud CLI, and APIs to create Exadata Infrastructure instances. Oracle sets up and manages the required compute, storage, and networking infrastructure in a data center within a Google Cloud region on hardware that's dedicated for your project. Exadata Infrastructure instances Each Exadata Infrastructure instance contains two or more physical database servers and three or more storage servers. These servers, which aren't shown in the diagram, are interconnected using a low-latency network fabric. When you create an Exadata Infrastructure instance, you specify the number of database servers and storage servers that must be provisioned. Exadata VM Clusters Within an Exadata Infrastructure instance, you create one or more Exadata VM Clusters. For example, you can choose to create and use a separate Exadata VM Cluster to host the databases that are required for each of your business units. Each Exadata VM Cluster contains one or more Oracle Linux VMs that host Oracle Database instances. When you create an Exadata VM Cluster, you specify the following: The number of database servers. The compute, memory, and storage capacity to be allocated to each VM in the cluster. The VPC network that the cluster must connect to. IP address ranges of the backup and client subnets for the cluster. The VMs within Exadata VM Clusters are not Compute Engine VMs. Oracle Database instances You create and manage Oracle databases through the OCI console and other OCI interfaces. 
Oracle Database software runs on the VMs within the Exadata VM Cluster. When you create the Exadata VM Cluster, you specify the Oracle Grid Infrastructure version. You also choose the license type: either bring your own licenses (BYOL) or opt for the license-included model. OCI VCN and subnets When you create an Exadata VM Cluster, an OCI virtual cloud network (VCN) is created automatically. The VCN has a client subnet and a backup subnet with IP address ranges that you specify. The client subnet is used for connectivity from your VPC network to the Oracle databases. The backup subnet is used to send database backups to OCI Object Storage. Cloud Router, Partner Interconnect, and OCI DRG Traffic between your VPC network and the VCN is routed by a Cloud Router that's attached to the VPC and through a dynamic routing gateway (DRG) that's attached to the VCN. The traffic flows through a low-latency connection that Google sets up using Partner Interconnect. Private Cloud DNS zone When you create an Exadata VM Cluster, a Cloud DNS private zone is created automatically. When your application servers send read and write requests to the Oracle databases, Cloud DNS resolves the database hostnames to the corresponding IP addresses. OCI Object Storage and OCI Service Gateway By default, backups of the Oracle Exadata databases are stored in OCI Object Storage. Database backups are routed to OCI Object Storage through a Service Gateway. Public Cloud NAT gateway The architecture includes a public Cloud NAT gateway to enable secure outbound connections from the Compute Engine VMs, which have only internal IP addresses. Cloud Interconnect and Cloud VPN To connect your on-premises network to the VPC network in Google Cloud, you can use Cloud Interconnect or Cloud VPN. For information about the relative advantages of each approach, see Choosing a Network Connectivity product. Cloud Monitoring You can use Cloud Monitoring to observe the behavior, health, and performance of your application and Google Cloud resources, including the Oracle Exadata resources. You can also monitor the resources in Oracle Exadata resources by using the OCI Monitoring service. Products used This reference architecture uses the following Google Cloud products: Compute Engine: A secure and customizable compute service that lets you create and run VMs on Google's infrastructure. Cloud Load Balancing: A portfolio of high performance, scalable, global and regional load balancers. Virtual Private Cloud (VPC): A virtual system that provides global, scalable networking functionality for your Google Cloud workloads. VPC includes VPC Network Peering, Private Service Connect, private services access, and Shared VPC. Google Cloud Armor: A network security service that offers web application firewall (WAF) rules and helps to protect against DDoS and application attacks. Cloud NAT: A service that provides Google Cloud-managed high-performance network address translation. Cloud Monitoring: A service that provides visibility into the performance, availability, and health of your applications and infrastructure. Cloud Interconnect: A service that extends your external network to the Google network through a high-availability, low-latency connection. Partner Interconnect: A service that provides connectivity between your on-premises network and your Virtual Private Cloud networks and other networks through a supported service provider. Cloud VPN: A service that securely extends your peer network to Google's network through an IPsec VPN tunnel. 
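To illustrate how one of the components listed above is typically provisioned, the following is a minimal sketch of creating the Cloud Router and public Cloud NAT gateway that enable outbound connectivity for the internal-only VMs. The network, router, NAT, and region names are assumptions:

# Create a Cloud Router in the VPC network, then a public Cloud NAT gateway
# so that VMs with only internal IP addresses can make outbound connections.
gcloud compute routers create example-router \
  --network=example-vpc \
  --region=us-central1

gcloud compute routers nats create example-nat \
  --router=example-router \
  --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges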
This reference architecture uses the following OCI products: Exadata Database Service on Dedicated Infrastructure: A service that lets you run Oracle Database instances on Exadata hardware that's dedicated for you. Object Storage: A service for storing large amounts of structured and unstructured data as objects. VCN and subnets: A VCN is a virtual and private network for resources in an OCI region. A subnet is a contiguous range of IP addresses with a VCN. Dynamic Routing Gateway: A virtual router for traffic between a VCN and external networks. Service Gateway: A gateway to let resources in a VCN access specific Oracle services privately. Design considerations This section describes design factors, best practices, and design recommendations that you should consider when you use this reference architecture to develop a topology that meets your specific requirements for security, reliability, operational efficiency, cost, and performance. The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud and third-party products and features that you use, there might be additional design factors and trade-offs that you should consider. System design This section provides guidance to help you to choose Google Cloud regions for your deployment and to select appropriate Google Cloud services. Region selection When you choose the Google Cloud region for your deployment, consider the following factors and requirements: Availability of Google Cloud services in each region. For more information, see Products available by location. Availability of Compute Engine machine types in each region. For more information, see Regions and zones. Availability of Oracle Database@Google Cloud in each region. For more information, see Available configurations. End-user latency requirements. Cost of Google Cloud resources. Regulatory requirements. Some of these factors and requirements might involve trade-offs. For example, the most cost-efficient region might not have the lowest carbon footprint. For more information, see Best practices for Compute Engine regions selection. Compute infrastructure The reference architecture in this document uses Compute Engine VMs to host the web tier and application tier. Depending on the requirements of your application, you can choose the following other Google Cloud compute services: Containers: You can run containerized applications in Google Kubernetes Engine (GKE) clusters. GKE is a container-orchestration engine that automates deploying, scaling, and managing containerized applications. Serverless: If you prefer to focus your IT efforts on your data and applications instead of on setting up and operating infrastructure resources, then you can use serverless services like Cloud Run and Cloud Run functions. The decision of whether to use VMs, containers, or serverless services involves a trade-off between configuration flexibility and management effort. VMs and containers provide more configuration flexibility and control, but you're responsible for managing the resources. In a serverless architecture, you deploy workloads to a preconfigured platform that requires minimal management effort. The design guidance for those services is outside the scope of this document. For more information about service options, see Hosting Applications on Google Cloud. 
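To support the region-selection factors discussed above, you can check which zones and machine types are available before you commit to a region. A minimal sketch, where the region, zone, and machine series are assumptions:

# Zones that are available in a candidate region.
gcloud compute zones list --filter="region:us-central1"

# Machine types of a given series that are offered in a specific zone.
gcloud compute machine-types list \
  --zones=us-central1-a \
  --filter="name~'^n2-'"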
Storage options To provide persistent storage for the Compute Engine VMs in the web tier and application tier, choose an appropriate Persistent Disk or Google Cloud Hyperdisk type based on your application's requirements for capacity, scaling, availability, and performance. For more information, see Storage options. If you need low-cost storage that's redundant across the zones within a region, use Cloud Storage regional buckets. To store data that's shared across multiple VMs in a region, like configuration files for all the VMs in the web tier, you can use a Filestore Regional instance. The data that you store in a Filestore Regional instance is replicated synchronously across three zones within the region1. This replication helps to ensure high availability and robustness against zone outages for the data that you store in Filestore. You can store shared configuration files, common tools and utilities, and centralized logs in the Filestore instance, and mount the instance on multiple VMs. When you design storage for your workloads, consider the functional characteristics of the workloads, resilience requirements, performance expectations, and cost goals. For more information, see Design an optimal storage strategy for your cloud workload. Network design When you build infrastructure for a multi-tier application stack, you must choose a network design that meets your business and technical requirements. The architecture that's shown in this document uses a simple network topology with a single VPC network. Depending on your requirements, you can choose to use multiple networks. For more information, see the following documentation: Deciding whether to create multiple VPC networks Decide the network design for your Google Cloud landing zone When you assign IP address ranges for the client and backup subnets to be used for the Exadata VM Clusters, consider the minimum subnet size requirements. For more information, see Plan for IP Address Space in Oracle Database@Google Cloud. Database migration When you plan to migrate on-premises databases to Oracle Database@Google Cloud, assess your current database environment and get configuration and sizing recommendations by using the Database Migration Assessment (DMA) tool. To migrate on-premises data to Oracle database deployments in Google Cloud, you can use standard Oracle tools like Oracle GoldenGate. Before you use the migrated databases in a production environment, verify connectivity from your applications to the databases. Security This section describes factors to consider when you use this reference architecture to design a topology in Google Cloud that meets the security and compliance requirements of your workloads. Protection against external threats To protect your application against external threats like DDoS attacks and XSS, define appropriate Google Cloud Armor security policies based on your requirements. Each policy is a set of rules that specifies the conditions to be evaluated and actions to take when the conditions are met. For example, a rule could specify that if the source IP address of incoming traffic matches a specific IP address or CIDR range, then the traffic must be denied. You can also apply preconfigured WAF rules. For more information, see Security policy overview. External access for VMs In the reference architecture that this document describes, the VMs that host the web tier and the application tier don't need direct inbound access from the internet. Don't assign external IP addresses to those VMs. 
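A minimal sketch of how you might enforce this when you define the instance templates for the web tier and application tier MIGs; the template name, machine type, and image are assumptions:

# Instance template whose VMs receive only internal IP addresses.
# Outbound internet access is provided by the Cloud NAT gateway instead.
gcloud compute instance-templates create app-tier-template \
  --machine-type=n2-standard-4 \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --no-address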
Google Cloud resources that have only private, internal IP addresses can still access certain Google APIs and services by using Private Service Connect or Private Google Access. For more information, see Private access options for services. To enable secure outbound connections from Google Cloud resources that have only private IP addresses, like the Compute Engine VMs in this reference architecture, you can use Cloud NAT as shown in the preceding architecture diagram, or use Secure Web Proxy. For the subnets that are used by the Exadata VMs, Oracle recommends that you assign private IP address ranges. VM image security Approved images are images with software that meets your policy or security requirements. To ensure that your VMs in the web tier and application tier use only approved images, you can define an organization policy that restricts the use of images in specific public image projects. For more information, see Setting up trusted image policies. Service account privileges In Google Cloud projects where the Compute Engine API is enabled, a default service account is created automatically. For Google Cloud organizations that were created before May 3, 2024, this default service account is granted the Editor IAM role (roles/editor), unless this behavior is disabled. By default, the default service account is attached to all Compute Engine VMs that you create by using the gcloud CLI or the Google Cloud console. The Editor role includes a broad range of permissions, so attaching the default service account to VMs creates a security risk. To avoid this risk, you can create and use dedicated service accounts for each tier of the application stack. To specify the resources that the service account can access, use fine-grained policies. For more information, see Limit service account privileges. Network security To control network traffic between the resources in the web tier and application tier of the architecture, you must configure appropriate Cloud Next Generation Firewall (NGFW) policies. Database security and compliance The Exadata Database service includes Oracle Data Safe, which helps you manage security and compliance requirements for Oracle databases. You can use Oracle Data Safe to evaluate security controls, monitor user activity, and mask sensitive data. For more information, see Manage Database Security with Oracle Data Safe. More security considerations When you build the architecture for your workload, consider the platform-level security best practices and recommendations that are provided in the Enterprise foundations blueprint. Reliability This section describes design factors to consider when you use this reference architecture to build and operate reliable infrastructure for your deployment in Google Cloud. Robustness against VM failures In the architecture that's shown in this document, if a Compute Engine VM in the web tier or application tier crashes, the relevant MIG recreates the VM automatically. The load balancers forward requests to only the currently available web server instances and application server instances. VM autohealing Sometimes the VMs that host your web tier and application tier might be running and available, but there might be issues with the application itself. The application might freeze, crash, or not have enough memory. In this scenario, the VMs won't respond to load balancer health checks, and the load balancer won't route traffic to the unresponsive VMs. 
To help ensure that applications respond as expected, you can configure application-based health checks as part of the autohealing policy of your MIGs. If the application on a particular VM isn't responding, the MIG autoheals (repairs) the VM. For more information about configuring autohealing, see About repairing VMs for high availability. Robustness against region outages If a region outage occurs, then the application is unavailable. To reduce the downtime caused by region outages, you can implement the following approach: Maintain a passive (failover) replica of the web tier and application tier in another Google Cloud region. Create a standby Exadata Infrastructure instance with the required Exadata VM Clusters in the same region that has the passive replica of the application stack. Use Oracle Data Guard for data replication and automatic failover to the standby Exadata databases. If your application needs a lower recovery point objective (RPO), you can backup and recover the databases by using Oracle Autonomous Recovery Service. If an outage occurs in the primary region, use the database replica or backup to restore the database to production and to activate the application in the failover region. Use DNS routing policies to route traffic to an external load balancer in the failover region. For business-critical applications that must continue to be available even when a region outage occurs, consider using the multi-regional deployment archetype. You can use Oracle Active Data Guard to provide a read-only standby database in the failover region. Oracle manages the infrastructure in Oracle Database@Google Cloud. For information about the service level objectives (SLOs) for Oracle Exadata Database Service on Dedicated Infrastructure, see Service Level Objectives for Oracle PaaS and IaaS Public Cloud Services. MIG autoscaling The architecture in this document uses regional MIGs for the web tier and application tier. The autoscaling capability of stateless MIGs ensures that the Compute Engine VMs that host the web tier and application tier aren't affected by single-zone outages. Stateful MIGs can't be autoscaled. To control the autoscaling behavior of your MIGs, you can specify target utilization metrics, such as average CPU utilization. You can also configure schedule-based autoscaling. For more information, see Autoscaling groups of instances. VM placement In the architecture that this document describes, the application tier and web tier run on Compute Engine VMs that are distributed across multiple zones. This distribution helps to ensure that your web tier and your application tier are robust against single-zone outages. To improve this robustness further, you can create a spread placement policy and apply it to the MIG template. With a spread placement policy, when the MIG creates VMs, it places them within each zone on different physical servers (called hosts), so your VMs are robust against failures of individual hosts. However, a trade-off with this approach is that the latency for inter-VM network traffic might increase. For more information, see Placement policies overview. VM capacity planning To make sure that capacity for Compute Engine VMs is available when required for MIG autoscaling, you can create reservations. A reservation provides assured capacity in a specific zone for a specified number of VMs of a machine type that you choose. A reservation can be specific to a project, or it can be shared across multiple projects. 
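A minimal sketch of creating such a reservation; the name, zone, machine type, and VM count are assumptions:

# Reserve zonal capacity for four VMs of a specific machine type.
# By default, VMs with matching properties in the project consume the reservation.
gcloud compute reservations create app-tier-reservation \
  --zone=us-central1-a \
  --vm-count=4 \
  --machine-type=n2-standard-8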
You incur charges for reserved resources even if the resources aren't provisioned or used. For more information about reservations, including billing considerations, see Reservations of Compute Engine zonal resources. Stateful storage A best practice in application design is to avoid the need for stateful local disks. But if the requirement exists, you can configure your disks to be stateful to ensure that the data is preserved when the VMs are repaired or recreated. However, we recommend that you keep the boot disks stateless, so that you can update them easily to the latest images with new versions and security patches. For more information, see Configuring stateful persistent disks in MIGs. Database capacity You can scale Exadata Infrastructure by adding database servers and storage servers as needed. After you add the required database servers or storage servers to Exadata Infrastructure, to be able to use the additional CPU or storage resources, you must add the capacity to the associated Exadata VM cluster. For more information, see Scaling Exadata Compute and Storage. Backup and recovery You can use Backup and DR Service to create, store, and manage backups of Compute Engine VMs. Backup and DR Service stores backup data in its original, application-readable format. When required, you can restore workloads to production by directly using data from long-term backup storage without time-consuming data-movement or preparation activities. For more information, see Backup and DR Service for Compute Engine instance backups. By default, backups of databases in Oracle Exadata Database Service on Dedicated Infrastructure are stored in OCI Object Storage. To achieve a lower RPO, you can backup and recover the databases by using Oracle Autonomous Recovery Service. More reliability considerations When you build the cloud architecture for your workload, review the reliability-related best practices and recommendations that are provided in the following documentation: Google Cloud infrastructure reliability guide Patterns for scalable and resilient apps Designing resilient systems Cost optimization This section provides guidance to optimize the cost of setting up and operating a Google Cloud topology that you build by using this reference architecture. VM machine types To help you optimize the utilization of your VMs, Compute Engine provides machine type recommendations. Use the recommendations to choose machine types that match the compute requirements of your web tier and application tier VMs. For workloads that have predictable resource requirements, you can customize the machine type to your needs and save money by using custom machine types. VM provisioning model If your application is fault tolerant, then Spot VMs can help to reduce the Compute Engine costs for your VMs in the web tier and application tier. The cost of Spot VMs is significantly lower than regular VMs. However, Compute Engine might preemptively stop or delete Spot VMs to reclaim capacity. Spot VMs are suitable for batch jobs that can tolerate preemption and don't have high availability requirements. Spot VMs offer the same machine types, options, and performance as regular VMs. However, when the resource capacity in a zone is limited, the MIGs might not be able to scale out (that is, create VMs) automatically to reach the specified target size until the required capacity becomes available again. 
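If parts of your web tier or application tier are fault tolerant, the following hedged sketch shows an instance template that provisions Spot VMs; the template name, machine type, and image are assumptions:

# Instance template that provisions Spot VMs. Compute Engine can preempt
# these VMs, so use this template only for workloads that tolerate preemption.
gcloud compute instance-templates create web-tier-spot-template \
  --machine-type=e2-standard-4 \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --provisioning-model=SPOT \
  --instance-termination-action=DELETE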
VM resource utilization The autoscaling capability of stateless MIGs enables your application to gracefully handle increases in traffic to the web tier and application tier. Autoscaling also helps you to reduce cost when the need for resources is low. Stateful MIGs can't be autoscaled. Database costs When you create an Exadata VM Cluster, you can choose to BYOL or to provision license-included Oracle databases. Networking charges for data transfer between your applications and Oracle Exadata databases that are within the same region are included in the price of the Oracle Database@Google Cloud offering. More cost considerations When you build the architecture for your workload, also consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Cost optimization. Operational efficiency This section describes the factors to consider when you use this reference architecture to design a Google Cloud topology that you can operate efficiently. VM configuration updates To update the configuration of the VMs in a MIG (like the machine type or boot-disk image), you create a new instance template with the required configuration and then apply the new template to the MIG. The MIG updates the VMs by using an update method that you specify: automatic or selective. Choose an appropriate method based on your requirements for availability and operational efficiency. For more information about these MIG update methods, see Apply new VM configurations in a MIG. VM images For your MIG instance templates, instead of using Google-provided public images, we recommend that you create and use custom OS images that include the configurations and software that your applications require. You can group your custom images into a custom image family. An image family always points to the most recent image in that family, so your instance templates and scripts can use that image without you having to update references to a specific image version. You must regularly update your custom images to include security updates and patches that are provided by the OS vendor. Deterministic instance templates If the instance templates that you use for your MIGs include startup scripts (for example, to install third-party software), make sure that the scripts explicitly specify the software-installation parameters, like the software version. Otherwise, when the MIG creates the VMs, the software that's installed on the VMs might not be consistent. For example, if your instance template includes a startup script to install Apache HTTP Server 2.0 (the apache2 package), then make sure that the script specifies the exact apache2 version that should be installed, such as version 2.4.53. For more information, see Deterministic instance templates. Database administration Oracle manages the physical database servers, storage servers, and networking hardware in Oracle Exadata Database Service on Dedicated Infrastructure. You can manage the Exadata Infrastructure instances and the Exadata VM Clusters through the OCI or Google Cloud interfaces. You create and manage databases through the OCI interfaces. The Google Cloud console pages for Oracle Database@Google Cloud include links that you can use to go directly to the relevant pages in the OCI console. To avoid the need to sign in again to OCI, you can configure identity federation between OCI and Google Cloud. 
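To illustrate the deterministic instance templates recommendation earlier in this section, the following sketch pins an exact package version in a startup script and attaches the script to a template through instance metadata. The package version string, template name, and image are assumptions; pin whatever exact version your OS repository actually provides:

# startup.sh: install a pinned web server version so that every VM created
# from this template gets an identical software configuration.
cat > startup.sh <<'EOF'
#!/bin/bash
set -euo pipefail
apt-get update
# Pin the exact package version (illustrative version string).
apt-get install -y apache2=2.4.53-1
systemctl enable --now apache2
EOF

gcloud compute instance-templates create web-tier-template-v2 \
  --machine-type=n2-standard-4 \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --metadata-from-file=startup-script=startup.sh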
More operational considerations When you build the architecture for your workload, consider the general best practices and recommendations for operational efficiency that are described in Google Cloud Architecture Framework: Operational excellence. Performance optimization This section describes the factors to consider when you use this reference architecture to design a topology in Google Cloud that meets the performance requirements of your workloads. Compute performance Compute Engine offers a wide range of predefined and customizable machine types that you can choose from depending on the performance requirements of your workloads. For the VMs that host the web tier and application tier, choose an appropriate machine type based on your performance requirements for those tiers. For more information, see Machine series comparison. Network performance For workloads that need low inter-VM network latency within the application and web tiers, you can create a compact placement policy and apply it to the MIG template that's used for those tiers. When the MIG creates VMs, it places the VMs on physical servers that are close to each other. While a compact placement policy helps improve inter-VM network performance, a spread placement policy can help improve VM availability as described earlier. To achieve an optimal balance between network performance and availability, when you create a compact placement policy, you can specify how far apart the VMs must be placed. For more information, see Placement policies overview. Compute Engine has a per-VM limit for egress network bandwidth. This limit depends on the VM's machine type and whether traffic is routed through the same VPC network as the source VM. For VMs with certain machine types, to improve network performance, you can get a higher maximum egress bandwidth by enabling Tier_1 networking. For more information, see Configure per VM Tier_1 networking performance. Network traffic between the application tier VMs and the Oracle Exadata network is routed through a low-latency Partner Interconnect connection that Google sets up. Exadata Infrastructure uses RDMA over Converged Ethernet (RoCE) for high bandwidth and low latency networking among the database servers and storage servers. The servers exchange data directly in main memory without involving the processor, cache, or operating system. More performance considerations When you build the architecture for your workload, consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Performance optimization. What's next Accelerating cloud transformation with Google Cloud and Oracle Oracle documentation Overview of Oracle Database@Google Cloud Plan for IP Address Space in Oracle Database@Google Cloud Learn about selecting network topologies for Oracle Database@Google Cloud Deploy Oracle Database@Google Cloud Google documentation Oracle Database@Google Cloud overview Available configurations for Oracle Database@Google Cloud For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Andy Colvin | Database Black Belt Engineer (Oracle on Google Cloud)Jeff Welsch | Director, Product ManagementLee Gates | Group Product ManagerMarc Fielding | Data Infrastructure ArchitectMark Schlagenhauf | Technical Writer, NetworkingMichelle Burtoft | Senior Product ManagerRajesh Kasanagottu | Engineering ManagerRoberto Mendez | Staff Network Implementation EngineerSekou Page | Outbound Product ManagerSouji Madhurapantula | Group Product ManagerVictor Moreno | Product Manager, Cloud Networking For more information about region-specific considerations, see Geography and regions. ↩ Send feedback \ No newline at end of file diff --git a/Enterprise_application_with_Oracle_Database_on_Compute_Engine.txt b/Enterprise_application_with_Oracle_Database_on_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..8336e50ef74cf8d09168db07d7dc4e61fe18ac64 --- /dev/null +++ b/Enterprise_application_with_Oracle_Database_on_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-app-oracle-database-compute-engine +Date Scraped: 2025-02-23T11:49:31.145Z + +Content: +Home Docs Cloud Architecture Center Send feedback Enterprise application with Oracle Database on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-02 UTC This document provides a reference architecture to help you build the infrastructure to host a highly available enterprise application that uses an Oracle database, with the entire stack deployed on Compute Engine VMs. You can use this reference architecture to efficiently rehost (lift and shift) on-premises applications that use Oracle databases to Google Cloud. This document also includes guidance to help you build an Oracle Database topology in Google Cloud that meets Oracle's maximum availability architecture (MAA) requirements. The intended audience for this document is cloud architects and Oracle database administrators. The document assumes that you're familiar with Compute Engine and Oracle Database. If you use Oracle Exadata or Oracle Real Application Clusters (Oracle RAC) to run Oracle databases on-premises, you can efficiently migrate your applications to Google Cloud and run your databases on Oracle Database@Google Cloud. For more information, see Enterprise application on Compute Engine VMs with Oracle Exadata in Google Cloud. Architecture The following diagram shows the infrastructure for a multi-tier enterprise application that uses Oracle Database. The web tier, application tier, and Oracle Database instances are hosted on Compute Engine VMs. The web tier and application tier run in active-active mode on VMs that are distributed across two zones within a Google Cloud region. The primary and standby database instances are deployed in separate zones. This architecture is aligned with the regional deployment archetype, which helps to ensure that your Google Cloud topology is robust against single-zone outages1. Note: You can adapt the architecture in this document to build a topology that's robust against regional outages by using the multi-regional deployment archetype. For more information, see Multi-regional deployment on Compute Engine and also the guidance in the Reliability section later in this document. 
The architecture that's shown in the preceding diagram includes the following components: Component Purpose Regional external Application Load Balancer The regional external Application Load Balancer receives and distributes user requests to the web tier VMs. Google Cloud Armor security policy The Google Cloud Armor security policy helps to protect your application stack against threats like distributed denial-of-service (DDoS) attacks and cross-site scripting (XSS). Regional managed instance group (MIG) for the web tier The web tier of the application is deployed on Compute Engine VMs that are part of a regional MIG. This MIG is the backend for the external Application Load Balancer. The MIG contains Compute Engine VMs in two zones. Each of these VMs hosts an independent instance of the web tier of the application. Regional internal Application Load Balancer The regional internal Application Load Balancer distributes traffic from the web tier VMs to the application tier VMs. Regional MIG for the application tier The application tier, such as an Oracle WebLogic Server cluster, is deployed on Compute Engine VMs that are part of a regional MIG. This MIG is the backend for the internal Application Load Balancer. The MIG contains Compute Engine VMs in two zones. Each VM hosts an independent instance of the application server. Oracle Database instances deployed on Compute Engine VMs The application in this architecture uses a primary-standby pair of Oracle Database instances that are deployed on Compute Engine VMs in separate zones. You bring your own licenses (BYOL) for these Oracle Database instances, and you manage the VMs and database instances. Hyperdisk Storage Pools The VMs in each zone (across all the tiers in the application stack) use Hyperdisk Balanced volumes from a Hyperdisk Storage Pool. By creating and managing all the disks in a single storage pool, you improve capacity utilization and reduce operational complexity while maintaining the storage capacity and performance that the VMs need. Oracle Data Guard FSFO observer The Oracle Data Guard Fast-Start Failover (FSFO) observer is a lightweight program that initiates automatic failover to the standby Oracle Database instance when the primary instance is unavailable. The observer runs on a Compute Engine VM in a zone that's different from the zones where the primary and standby database instances are deployed. Cloud Storage bucket To store backups of the Oracle Database instances, this architecture uses a Cloud Storage bucket. To facilitate recovery of the database during a region outage, you can store the backups geo-redundantly in a dual-region or multi-region bucket. Virtual Private Cloud (VPC) network and subnet All the Google Cloud resources in the architecture use a single VPC network and subnet. Depending on your requirements, you can choose to build an architecture that uses multiple VPC networks or multiple subnets. For more information, see Deciding whether to create multiple VPC networks. Public Cloud NAT gateway The architecture includes a public Cloud NAT gateway to enable secure outbound connections from the Compute Engine VMs that have only internal IP addresses. Cloud Interconnect and Cloud VPN To connect your on-premises network to the VPC network in Google Cloud, you can use Cloud Interconnect or Cloud VPN. For information about the relative advantages of each approach, see Choosing a Network Connectivity product. 
Cloud Monitoring and Cloud Logging Cloud Monitoring helps you to observe the behavior, health, and performance of your application and Google Cloud resources. Ops Agent collects metrics and logs from the Compute Engine VMs, including the VMs that host the Oracle Database instances. The agent sends logs to Cloud Logging and sends metrics to Cloud Monitoring. Products used This reference architecture uses the following Google Cloud products: Compute Engine: A secure and customizable compute service that lets you create and run VMs on Google's infrastructure. Google Cloud Hyperdisk: A network storage service that you can use to provision and dynamically scale block storage volumes with configurable and predictable performance. Cloud Load Balancing: A portfolio of high performance, scalable, global and regional load balancers. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Virtual Private Cloud (VPC): A virtual system that provides global, scalable networking functionality for your Google Cloud workloads. VPC includes VPC Network Peering, Private Service Connect, private services access, and Shared VPC. Google Cloud Armor: A network security service that offers web application firewall (WAF) rules and helps to protect against DDoS and application attacks. Cloud NAT: A service that provides Google Cloud-managed high-performance network address translation. Cloud Monitoring: A service that provides visibility into the performance, availability, and health of your applications and infrastructure. Cloud Logging: A real-time log management system with storage, search, analysis, and alerting. Cloud Interconnect: A service that extends your external network to the Google network through a high-availability, low-latency connection. Cloud VPN: A service that securely extends your peer network to Google's network through an IPsec VPN tunnel. This reference architecture uses the following Oracle products: Oracle Database: A relational database management system (RDBMS) that extends the relational model to an object-relational model. Oracle Data Guard: A set of services to create, maintain, manage, and monitor one or more standby databases. You're responsible for procuring licenses for the Oracle products that you deploy in Google Cloud, and you're responsible for complying with the terms and conditions of the Oracle licenses. Design considerations This section describes design factors, best practices, and design recommendations that you should consider when you use this reference architecture to develop a topology that meets your specific requirements for security, reliability, operational efficiency, cost, and performance. The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud and third-party products and features that you use, there might be additional design factors and trade-offs that you should consider. System design This section provides guidance to help you to choose Google Cloud regions for your deployment and to select appropriate Google Cloud services. Region selection When you choose the Google Cloud region for your deployment, consider the following factors and requirements: Availability of Google Cloud services in each region. For more information, see Products available by location. Availability of Compute Engine machine types in each region. 
For more information, see Regions and zones. End-user latency requirements. Cost of Google Cloud resources. Regulatory requirements. Some of these factors and requirements might involve trade-offs. For example, the most cost-efficient region might not have the lowest carbon footprint. For more information, see Best practices for Compute Engine regions selection. Compute infrastructure The reference architecture in this document uses Compute Engine VMs to host all the tiers of the application. Depending on the requirements of your application, you can choose the following other Google Cloud compute services: Containers: You can run containerized applications in Google Kubernetes Engine (GKE) clusters. GKE is a container-orchestration engine that automates deploying, scaling, and managing containerized applications. Serverless: If you prefer to focus your IT efforts on your data and applications instead of setting up and operating infrastructure resources, then you can use serverless services like Cloud Run. The decision of whether to use VMs, containers, or serverless services involves a trade-off between configuration flexibility and management effort. VMs and containers provide more configuration flexibility and control, but you're responsible for managing the resources. In a serverless architecture, you deploy workloads to a preconfigured platform that requires minimal management effort. The design guidance for those services is outside the scope of this document. For more information about service options, see Application Hosting Options. Storage options The architecture shown in this document uses a Hyperdisk Storage Pool in each zone, with Hyperdisk Balanced volumes for the VMs in all the tiers. Hyperdisk volumes provide better performance, flexibility, and efficiency than Persistent Disk. For information about Hyperdisk types and features, see About Hyperdisk. To store data that's shared across multiple VMs in a region, like configuration files for all the VMs in the web tier, you can use a Filestore regional instance. The data that you store in a Filestore regional instance is replicated synchronously across three zones within the region. This replication ensures high availability and robustness against zone outages. You can store shared configuration files, common tools and utilities, and centralized logs in the Filestore instance, and mount the instance on multiple VMs. When you design storage for your workloads, consider the functional characteristics of the workloads, resilience requirements, performance expectations, and cost goals. For more information, see Design an optimal storage strategy for your cloud workload. Network design When you build infrastructure for a multi-tier application stack, you must choose a network design that meets your business and technical requirements. The architecture that's shown in this document uses a simple network topology with a single VPC network and subnet. Depending on your requirements, you can choose to use multiple VPC networks or multiple subnets. For more information, see the following documentation: Deciding whether to create multiple VPC networks Decide the network design for your Google Cloud landing zone Security, privacy, and compliance This section describes factors to consider when you use this reference architecture to design a topology in Google Cloud that meets the security and compliance requirements of your workloads. 
Protection against external threats To help protect your application against external threats like DDoS attacks and XSS, define appropriate Google Cloud Armor security policies based on your requirements. Each policy is a set of rules that specifies the conditions to be evaluated and actions to take when the conditions are met. For example, a rule could specify that if the source IP address of incoming traffic matches a specific IP address or CIDR range, then the traffic must be denied. You can also apply preconfigured web application firewall (WAF) rules. For more information, see Security policy overview. External access for VMs In the reference architecture that this document describes, the VMs that host the web tier, application tier, and Oracle Database instances don't need direct inbound access from the internet. Don't assign external IP addresses to those VMs. Google Cloud resources that have only private, internal IP addresses can still access certain Google APIs and services by using Private Service Connect or Private Google Access. For more information, see Private access options for services. To enable secure outbound connections from Google Cloud resources that have only private IP addresses, like the Compute Engine VMs in this reference architecture, you can use Secure Web Proxy or Cloud NAT. VM image security Approved images are images with software that meets your policy or security requirements. To ensure that your VMs use only approved images, you can define an organization policy that restricts the use of images in specific public image projects. For more information, see Setting up trusted image policies. Service account privileges In Google Cloud projects where the Compute Engine API is enabled, a default service account is created automatically. For Google Cloud organizations that were created before May 3, 2024, this default service account is granted the Editor IAM role (roles/editor), unless this behavior is disabled. By default, the default service account is attached to all VMs that you create by using the Google Cloud CLI or the Google Cloud console. The Editor role includes a broad range of permissions, so attaching the default service account to VMs creates a security risk. To avoid this risk, you can create and use dedicated service accounts for each tier of the application stack. To specify the resources that the service account can access, use fine-grained policies. For more information, see Limit service account privileges. Disk encryption By default, the data that's stored in Hyperdisk volumes is encrypted using Google-owned and Google-managed encryption keys. As an additional layer of protection, you can choose to encrypt the Google-owned data encryption keys by using keys that you own and manage in Cloud Key Management Service (Cloud KMS). For more information, see About disk encryption. Network security To control network traffic between the resources in the architecture, you must configure appropriate Cloud Next Generation Firewall (NGFW) policies. More security considerations When you build the architecture for your workload, consider the platform-level security best practices and recommendations that are provided in the Enterprise foundations blueprint. Reliability This section describes design factors to consider when you use this reference architecture to build and operate reliable infrastructure for your deployment in Google Cloud. 
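As a companion to the Service account privileges guidance earlier in this document, the following minimal sketch creates a dedicated service account for one tier by calling the IAM API through the google-api-python-client library. The project ID, account ID, and display name are placeholders, and the caller is assumed to have permission to create service accounts in the project.

import google.auth
from googleapiclient import discovery

project_id = "example-project"  # placeholder project ID
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
iam = discovery.build("iam", "v1", credentials=credentials)

# Create a service account that's dedicated to the application tier.
service_account = (
    iam.projects()
    .serviceAccounts()
    .create(
        name=f"projects/{project_id}",
        body={
            "accountId": "app-tier-sa",
            "serviceAccount": {"displayName": "Application tier service account"},
        },
    )
    .execute()
)
print("Created:", service_account["email"])

After you create an account like this for each tier, grant it only the narrowly scoped IAM roles that the tier needs, and reference it in the tier's instance template instead of the default service account.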
Robustness against VM failures In the architecture that's shown in this document, if a Compute Engine VM in any of the tiers fails, the application can continue to process requests. If a VM in the web tier or application tier crashes, the relevant MIG recreates the VM automatically. The load balancers forward requests to only the currently available web server instances and application server instances. If the VM that hosts the primary Oracle Database instance fails, the Oracle Data Guard FSFO observer initiates an automatic failover to the standby Oracle Database instance. VM autohealing Sometimes the VMs that host your web tier and application tier might be running and available, but there might be issues with the application itself. The application might freeze, crash, or not have enough memory. In this scenario, the VMs won't respond to load balancer health checks, and the load balancer won't route traffic to the unresponsive VMs. To help ensure that applications respond as expected, you can configure application-based health checks as part of the autohealing policy of your MIGs. If the application on a particular VM isn't responding, the MIG autoheals (repairs) the VM. For more information about configuring autohealing, see About repairing VMs for high availability. Robustness against zone outages If a zone outage occurs, the application remains available. The web tier and application tier are available (and responsive) because the VMs are in regional MIGs. The regional MIGs ensure that new VMs are created automatically in the other zone to maintain the configured minimum number of VMs. The load balancers forward requests to the available web server VMs and application server VMs. If an outage affects the zone that has the primary Oracle Database instance, then the Oracle Data Guard FSFO observer initiates an automatic failover to the standby Oracle Database instance. The FSFO observer runs on a VM in a zone that's different from the zones that have the primary and standby database instances. To ensure high availability of data in Hyperdisk volumes during a single-zone outage, you can use Hyperdisk Balanced High Availability. When data is written to a volume, the data is replicated synchronously between two zones in the same region. Robustness against region outages If both of the zones in the architecture have an outage or if a region outage occurs, then the application is unavailable. To reduce the downtime caused by multi-zone or region outages, you can implement the following approach: Maintain a passive (failover) replica of the infrastructure stack in another Google Cloud region. Use a dual-region or multi-region Cloud Storage bucket to store the Oracle Database backups. The backups are replicated asynchronously across at least two geographic locations. With replicated database backups, your architecture maps to Oracle's Maximum Availability Architecture (MAA) Silver tier. To achieve faster replication for backups stored in dual-region buckets, you can use turbo replication. For more information, see Data availability and durability. If an outage occurs in the primary region, use the database backup to restore the database and activate the application in the failover region. Use DNS routing policies to route traffic to the load balancer in the failover region. For business-critical applications that must continue to be available even when a region outage occurs, consider using the multi-regional deployment archetype. 
For the database tier, you can use Oracle Active Data Guard FSFO to automatically failover to a standby Oracle Database instance in the failover region. This approach maps to Oracle's MAA Gold tier. MIG autoscaling When you run your application on VMs in a regional MIG, the application remains available during isolated zone outages. The autoscaling capability of stateless MIGs lets you maintain application availability and performance at predictable levels. Stateful MIGs can't be autoscaled. To control the autoscaling behavior of your MIGs, you can specify target utilization metrics, such as average CPU utilization. You can also configure schedule-based autoscaling. For more information, see Autoscaling groups of instances. VM placement In the architecture that this document describes, the application tier and web tier run on Compute Engine VMs that are distributed across multiple zones. This distribution ensures that your application is robust against single-zone outages. To improve this robustness further, you can create a spread placement policy and apply it to the MIG template. With a spread placement policy, when the MIG creates VMs, it places them within each zone on different physical servers (called hosts), so your VMs are robust against failures of individual hosts. However, a trade-off with this approach is that the latency for inter-VM network traffic might increase. For more information, see Placement policies overview. VM capacity planning To make sure that capacity for Compute Engine VMs is available when required for MIG autoscaling, you can create reservations. A reservation provides assured capacity in a specific zone for a specified number of VMs of a machine type that you choose. A reservation can be specific to a project, or it can be shared across multiple projects. You incur charges for reserved resources even if the resources aren't provisioned or used. For more information about reservations, including billing considerations, see Reservations of Compute Engine zonal resources. Block storage availability The architecture in this document uses a Hyperdisk Storage Pool in each zone to provide block storage for the Compute Engine VMs. You create a pool of block storage capacity for a zone. You then create Hyperdisk volumes in the storage pool and attach the volumes to VMs in the zone. The storage pool attempts to add capacity automatically to ensure that the utilization rate doesn't exceed 80% of the pool's provisioned capacity. This approach ensures that block storage space is available when required. For more information, see How Hyperdisk Storage Pools work. Stateful storage A best practice in application design is to avoid the need for stateful local disks. But if the requirement exists, you can configure your disks to be stateful to ensure that the data is preserved when the VMs are repaired or recreated. However, we recommend that you keep the boot disks stateless, so that you can update them easily to the latest images with new versions and security patches. For more information, see Configuring stateful persistent disks in MIGs. Backup and recovery The architecture in this document uses Cloud Storage to store database backups. If you choose the dual-region or multi-region location type for the Cloud Storage bucket, the backups are replicated asynchronously across at least two geographic locations. If a region outage occurs, you can use the backups to restore the database in another region. 
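As a concrete illustration of the backup flow described above, the following sketch uploads an Oracle Database backup file to a multi-region Cloud Storage bucket by using the google-cloud-storage Python client. The bucket name, object path, and local file path are placeholders, and the backup file itself is assumed to be produced by your existing backup tooling (for example, RMAN scripts).

from google.cloud import storage

client = storage.Client()

# Create the bucket once, in a multi-region (or dual-region) location, so
# that backups are replicated across at least two geographic locations.
bucket = client.create_bucket("example-oracle-db-backups", location="US")

# Upload a backup file that your backup tooling produced.
blob = bucket.blob("prod-db/2025-02-23/full_backup_001.bkp")
blob.upload_from_filename("/u01/backups/full_backup_001.bkp")
print(f"Uploaded gs://{bucket.name}/{blob.name}")

You can also configure a retention policy or lifecycle rules on the bucket to control how long backups are kept.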
With a dual-region bucket, you can achieve faster replication by enabling the turbo replication option. For more information, see Data availability and durability. You can use Backup and DR Service to create, store, and manage backups of Compute Engine VMs. Backup and DR Service stores backup data in its original, application-readable format. When required, you can restore workloads to production by directly using data from long-term backup storage without time-consuming data-movement or preparation activities. For more information, see the following documentation: Backup and DR for Compute Engine instance backups Backup and DR for Oracle More reliability considerations When you build the cloud architecture for your workload, review the reliability-related best practices and recommendations that are provided in the following documentation: Google Cloud infrastructure reliability guide Patterns for scalable and resilient apps Designing resilient systems Cost optimization This section provides guidance to optimize the cost of setting up and operating a Google Cloud topology that you build by using this reference architecture. VM machine types To help you optimize the utilization of your VM resources, Compute Engine provides machine type recommendations. Use the recommendations to choose machine types that match your workload's compute requirements. For workloads that have predictable resource requirements, you can customize the machine type to your needs and save money by using custom machine types. VM provisioning model If your application is fault tolerant, then Spot VMs can help to reduce the Compute Engine costs for your VMs in the web tier and application tier. The cost of Spot VMs is significantly lower than regular VMs. However, Compute Engine might preemptively stop or delete Spot VMs to reclaim capacity. Spot VMs are suitable for batch jobs that can tolerate preemption and that don't have high availability requirements. Spot VMs offer the same machine types, options, and performance as regular VMs. However, when the resource capacity in a zone is limited, MIGs with Spot VMs might not be able to scale out (that is, create VMs) automatically to reach the specified target size until the required capacity becomes available again. Don't use Spot VMs for the VMs that host the Oracle Database instances. VM resource utilization The autoscaling capability of stateless MIGs enables your application to gracefully handle increases in traffic to the web tier and application tier. Autoscaling also helps you to reduce cost when the need for resources is low. Stateful MIGs can't be autoscaled. Oracle Database licensing You're responsible for procuring licenses for the Oracle products that you deploy on Compute Engine, and you're responsible for complying with the terms and conditions of the Oracle licenses. When you calculate the Oracle Database licensing cost, consider the number of Oracle Processor licenses that are required based on the machine type that you choose for the Compute Engine VMs that host the Oracle Database instances. For more information, see Licensing Oracle Software in the Cloud Computing Environment. Block storage resource utilization The architecture in this document uses a Hyperdisk Storage Pool in each zone to provide block storage for the Compute Engine VMs. You can improve the overall utilization of block storage capacity and reduce cost by using Advanced capacity storage pools, which use thin provisioning and data reduction technologies to improve storage efficiency. 
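To make the VM provisioning model and custom machine type guidance above more concrete, the following sketch creates an instance template for a fault-tolerant web tier that uses Spot VMs and a custom machine type. It assumes a recent version of the google-cloud-compute Python client; the project ID, template name, machine type, image, and disk size are placeholders. As noted earlier, don't use a Spot-based template for the VMs that host the Oracle Database instances.

from google.cloud import compute_v1

project = "example-project"  # placeholder project ID

template = compute_v1.InstanceTemplate(
    name="web-tier-spot-template",
    properties=compute_v1.InstanceProperties(
        machine_type="n2-custom-4-8192",  # custom: 4 vCPUs, 8 GB memory
        scheduling=compute_v1.Scheduling(
            provisioning_model="SPOT",
            instance_termination_action="STOP",
        ),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                    disk_size_gb=50,
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    ),
)

operation = compute_v1.InstanceTemplatesClient().insert(
    project=project, instance_template_resource=template
)
operation.result()  # Wait for the template to be created.

You can then reference a template like this from the regional MIG that manages the web tier.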
More cost considerations When you build the architecture for your workload, also consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Cost optimization. Operational efficiency This section describes the factors to consider when you use this reference architecture to design a Google Cloud topology that you can operate efficiently. VM configuration updates To update the configuration of the VMs in a MIG (like the machine type or boot-disk image), you create a new instance template with the required configuration and then apply the new template to the MIG. The MIG updates the VMs by using an update method that you specify: automatic or selective. Choose an appropriate method based on your requirements for availability and operational efficiency. For more information about these MIG update methods, see Apply new VM configurations in a MIG. VM images For your MIG instance templates, instead of using Google-provided public images, we recommend that you create and use custom OS images that include the configurations and software that your applications require. You can group your custom images into a custom image family. An image family always points to the most recent image in that family, so your instance templates and scripts can use that image without you having to update references to a specific image version. You must regularly update your custom images to include the security updates and patches that are provided by the OS vendor. Deterministic instance templates If the instance templates that you use for your MIGs include startup scripts (for example, to install third-party software), make sure that the scripts explicitly specify the software-installation parameters, like the software version. Otherwise, when the MIG creates the VMs, the software that's installed on the VMs might not be consistent. For example, if your instance template includes a startup script to install Apache HTTP Server 2.0 (the apache2 package), then make sure that the script specifies the exact apache2 version that should be installed, such as version 2.4.53. For more information, see Deterministic instance templates. Block storage management The architecture in this document uses a Hyperdisk Storage Pool in each zone to provide block storage for the Compute Engine VMs. Hyperdisk Storage Pools help simplify storage management. Instead of allocating and managing capacity individually for numerous disks, you define a pool of capacity that can be shared across multiple workloads in a zone. You then create Hyperdisk volumes in the storage pool and attach the volumes to the VMs in the zone. The storage pool attempts to add capacity automatically to ensure that the utilization rate doesn't exceed 80% of the pool's provisioned capacity. Application server to database connectivity For connections from your application to Oracle Database, we recommend that you use the database VM's zonal internal DNS name rather than its IP address. Google Cloud automatically resolves the DNS name to the VM's primary internal IP address. An added advantage with this approach is that you don't need to reserve and assign static internal IP addresses for the database VMs. Oracle Database administration and support When you run a self-managed Oracle Database instance on a Compute Engine VM, there are similar operational concerns as when you run Oracle Database on-premises. 
However, with a Compute Engine VM you no longer need to manage the underlying compute, networking, and storage infrastructure. For guidance about operating and managing your Oracle Database instances, see the Oracle-provided documentation for the relevant release. For information about Oracle's support policy for Oracle Database instances that you deploy in Google Cloud, see Oracle Database Support for Non-Oracle Public Cloud Environments (Doc ID 2688277.1). More operational considerations When you build the architecture for your workload, consider the general best practices and recommendations for operational efficiency that are described in Google Cloud Architecture Framework: Operational excellence. Performance optimization This section describes the factors to consider when you use this reference architecture to design a topology in Google Cloud that meets the performance requirements of your workloads. Compute performance Compute Engine offers a wide range of predefined and customizable machine types that you can choose from depending on the performance requirements of your workloads. For the VMs that host the web tier and application tier, choose an appropriate machine type based on your performance requirements for those tiers. To get a list of the available machine types that support Hyperdisk volumes and that meet your performance and other requirements, use the Machine series comparison table. For the VMs that host the Oracle Database instances, we recommend that you use a machine type in the C4 machine series from the general-purpose machine family. C4 machine types provide consistently high performance for database workloads. Network performance For workloads that need low inter-VM network latency, you can create a compact placement policy and apply it to the MIG template that's used for the application tier. When the MIG creates VMs, it places the VMs on physical servers that are close to each other. While a compact placement policy helps improve inter-VM network performance, a spread placement policy can help improve VM availability as described earlier. To achieve an optimal balance between network performance and availability, when you create a compact placement policy, you can specify how far apart the VMs must be placed. For more information, see Placement policies overview. Compute Engine has a per-VM limit for egress network bandwidth. This limit depends on the VM's machine type and whether traffic is routed through the same VPC network as the source VM. For VMs with certain machine types, to improve network performance, you can get a higher maximum egress bandwidth by enabling Tier_1 networking. For more information, see Configure per VM Tier_1 networking performance. Hyperdisk storage performance The architecture that's described in this document uses Hyperdisk volumes for the VMs in all the tiers. Hyperdisk lets you scale performance and capacity dynamically. You can adjust the provisioned IOPS, throughput, and the size of each volume to match your workload's storage performance and capacity needs. The performance of Hyperdisk volumes depends on the Hyperdisk type and the machine type of the VMs to which the volumes are attached. 
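The following sketch shows one way to create a Hyperdisk Balanced volume with explicit IOPS and throughput targets and attach it to a database VM, using the google-cloud-compute Python client. It assumes a recent client library version that exposes the provisioned IOPS and throughput fields; the project, zone, disk name, instance name, and performance values are placeholders that you should replace based on your own workload measurements and the limits in the documentation that's listed below.

from google.cloud import compute_v1

project, zone = "example-project", "us-central1-a"  # placeholders

disk = compute_v1.Disk(
    name="oracle-data-disk-1",
    size_gb=500,
    type_=f"projects/{project}/zones/{zone}/diskTypes/hyperdisk-balanced",
    provisioned_iops=6000,
    provisioned_throughput=400,  # MiB per second
)

disks_client = compute_v1.DisksClient()
disks_client.insert(project=project, zone=zone, disk_resource=disk).result()

# Attach the volume to an existing VM (placeholder instance name).
attachment = compute_v1.AttachedDisk(
    source=f"projects/{project}/zones/{zone}/disks/{disk.name}",
    device_name="oracle-data-disk-1",
)
compute_v1.InstancesClient().attach_disk(
    project=project,
    zone=zone,
    instance="oracle-db-vm-1",
    attached_disk_resource=attachment,
).result()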
For more information about Hyperdisk performance limits and tuning, see the following documentation: Machine types that support Hyperdisk Hyperdisk performance limits Optimize Hyperdisk performance More performance considerations When you build the architecture for your workload, consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Performance optimization. What's next Accelerating cloud transformation with Google Cloud and Oracle Oracle MAA Reference Architectures For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Andy Colvin | Database Black Belt Engineer (Oracle on Google Cloud)Jeff Welsch | Director, Product ManagementLee Gates | Group Product ManagerMarc Fielding | Data Infrastructure ArchitectMark Schlagenhauf | Technical Writer, NetworkingMichelle Burtoft | Senior Product ManagerRajesh Kasanagottu | Engineering ManagerSekou Page | Outbound Product ManagerSouji Madhurapantula | Group Product ManagerVictor Moreno | Product Manager, Cloud NetworkingYeonsoo Kim | Product Manager For more information about region-specific considerations, see Geography and regions. ↩ Send feedback \ No newline at end of file diff --git a/Environment_hybrid_pattern.txt b/Environment_hybrid_pattern.txt new file mode 100644 index 0000000000000000000000000000000000000000..6cc3fc79a0e6b13bc9172329cbc9ce5a174b9cdb --- /dev/null +++ b/Environment_hybrid_pattern.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/environment-hybrid-pattern +Date Scraped: 2025-02-23T11:50:09.312Z + +Content: +Home Docs Cloud Architecture Center Send feedback Environment hybrid pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC With the environment hybrid architecture pattern, you keep the production environment of a workload in the existing data center. You then use the public cloud for your development and testing environments, or other environments. This pattern relies on the redundant deployment of the same applications across multiple computing environments. The goal of the deployment is to help increase capacity, agility, and resiliency. When assessing which workloads to migrate, you might notice cases when running a specific application in the public cloud presents challenges: Jurisdictional or regulatory constraints might require that you keep data in a specific country. Third-party licensing terms might prevent you from operating certain software in a cloud environment. An application might require access to hardware devices that are available only locally. In such cases, consider not only the production environment but all environments that are involved in the lifecycle of an application, including development, testing, and staging systems. These restrictions often apply to the production environment and its data. They might not apply to other environments that don't use the actual data. Check with the compliance department of your organization or the equivalent team. The following diagram shows a typical environment hybrid architecture pattern: Running development and test systems in different environments than your production systems might seem risky and could deviate from your existing best practices or from your attempts to minimize differences between your environments. 
While such concerns are justified, they don't apply if you distinguish between the stages of the development and testing processes: Although development, testing, and deployment processes differ for each application, they usually involve variations of the following stages: Development: Creating a release candidate. Functional testing or user acceptance testing: Verifying that the release candidate meets functional requirements. Performance and reliability testing: Verifying that the release candidate meets nonfunctional requirements. It's also known as load testing. Staging or deployment testing: Verifying that the deployment procedure works. Production: Releasing new or updated applications. Performing more than one of these stages in a single environment is rarely practical, so each stage usually requires one or more dedicated environments. Note: The term staging environment is often confused with testing environment. The primary purpose of a testing environment is to run functional tests. The primary purpose of a staging environment is to test if your application deployment procedures work as intended. By the time a release reaches a staging environment, your functional testing should be complete. Staging is the last step before you deploy software to your production deployment. To ensure that test results are meaningful and that they apply to the production deployment, the set of environments that you use throughout an application's lifecycle must satisfy the following rules, to the extent possible: All environments are functionally equivalent. That is, the architecture, APIs, and versions of operating systems and libraries are equivalent, and systems behave the same across environments. This equivalence avoids situations where applications work in one environment but fail in another, or where defects aren't reproducible. Environments that are used for performance and reliability testing, staging, and production are non-functionally equivalent. That is, their performance, scale, and configuration, and the way they're operated and maintained, are either the same or differ only in insignificant ways. Otherwise, performance and staging tests become meaningless. In general, it's fine if the environments that are used for development and functional testing differ non-functionally from the other environments. As illustrated in the following diagram, the test and development environments are built on Google Cloud. A managed database, like Cloud SQL, can be used as an option for development and testing in Google Cloud. Development and testing can use the same database engine and version in the on-premises environment, one that's functionally equivalent, or a new version that's rolled out to the production environment after the testing stage. However, because the underlying infrastructure of the two environments aren't identical, this approach to performance load testing isn't valid. The following scenarios can fit well with the environment hybrid pattern: Achieve functional equivalence across all environments by relying on Kubernetes as a common runtime layer where applicable and feasible. Google Kubernetes Engine (GKE) Enterprise edition can be a key enabling technology for this approach. Ensure workload portability and abstract away differences between computing environments. With a zero trust service mesh, you can control and maintain the required communication separation between the different environments. Run development and functional testing environments in the public cloud. 
These environments can be functionally equivalent to the remaining environments but might differ in nonfunctional aspects, like performance. This concept is illustrated in the preceding diagram. Run environments for production, staging, and performance (load testing) and reliability testing in the private computing environment, ensuring functional and nonfunctional equivalence. Design Considerations Business needs: Each deployment and release strategy for applications has its own advantages and disadvantages. To ensure that the approach that you select aligns with your specific requirements, base your selections on a thorough assessment of your business needs and constraints. Environment differences: As part of this pattern, the main goal of using this cloud environment is for development and testing. The final state is to host the tested application in the private on-premises environment (production). To avoid developing and testing a capability that might function as expected in the cloud environment and fail in the production environment (on-premises), the technical team must know and understand the architectures and capabilities of both environments. This includes dependencies on other applications and on the hardware infrastructure—for example, security systems that perform traffic inspection. Governance: To control what your company is allowed to develop in the cloud and what data they can use for testing, use an approval and governance process. This process can also help your company make sure that it doesn't use any cloud features in your development and testing environments that don't exist in your on-premises production environment. Success criteria: There must be clear, predefined, and measurable testing success criteria that align with the software quality assurance standards for your organization. Apply these standards to any application that you develop and test. Redundancy: Although development and testing environments might not require as much reliability as the production environment, they still need redundant capabilities and the ability to test different failure scenarios. Your failure-scenario requirements might drive the design to include redundancy as part of your development and testing environment. Advantages Running development and functional testing workloads in the public cloud has several advantages: You can automatically start and stop environments as the need arises. For example, you can provision an entire environment for each commit or pull request, allow tests to run, and then turn it off again. This approach also offers the following advantages: You can reduce costs by stopping virtual machine (VM) instances when they're inactive, or by provisioning environments only on demand. You can speed up development and testing by starting ephemeral environments for each pull-request. Doing so also reduces maintenance overhead and reduces inconsistencies in the build environment. Running these environments in the public cloud helps build familiarity and confidence in the cloud and related tools, which might help with migrating other workloads. This approach is particularly helpful if you decide to explore Workload portability using containers and Kubernetes—for example, using GKE Enterprise across environments. Best practices To implement the environment hybrid architecture pattern successfully, consider the following recommendations: Define your application communication requirements, including the optimal network and security design. 
Then, use the mirrored network pattern to help you design your network architecture to prevent direct communications between systems from different environments. If communication is required across environments, it has to be in a controlled manner. The application deployment and testing strategy you choose should align with your business objectives and requirements. This might involve rolling out changes without downtime or implementing features gradually to a specific environment or user group before wider release. To make workloads portable and to abstract away differences between environments, you might use containers with Kubernetes. For more information, see GKE Enterprise hybrid environment reference architecture. Establish a common tool chain that works across computing environments for deploying, configuring, and operating workloads. Using Kubernetes gives you this consistency. Ensure that CI/CD pipelines are consistent across computing environments, and that the exact same set of binaries, packages, or containers is deployed across those environments. When using Kubernetes, use a CI system such as Tekton to implement a deployment pipeline that deploys to clusters and works across environments. For more information, see DevOps solutions on Google Cloud. To help you with the continuous release of secure and reliable applications, incorporate security as an integral part of the DevOps process (DevSecOps). For more information, see Deliver and secure your internet-facing application in less than an hour using Dev(Sec)Ops Toolkit. Use the same tools for logging and monitoring across Google Cloud and existing cloud environments. Consider using open source monitoring systems. For more information, see Hybrid and multicloud monitoring and logging patterns. If different teams manage test and production workloads, using separate tooling might be acceptable. However, using the same tools with different view permissions can help reduce your training effort and complexity. When you choose database, storage, and messaging services for functional testing, use products that have a managed equivalent on Google Cloud. Relying on managed services helps decrease the administrative effort of maintaining development and testing environments. To protect sensitive information, we recommend encrypting all communications in transit. If encryption is required at the connectivity layer, various options are available that are based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect. The following table shows which Google Cloud products are compatible with common OSS products. 
OSS product Compatible with Google Cloud product Apache HBase Bigtable Apache Beam Dataflow CDAP Cloud Data Fusion Apache Hadoop Dataproc MySQL, PostgreSQL Cloud SQL Redis Cluster, Redis, Memcached Memorystore Network File System (NFS) Filestore JMS, Kafka Pub/Sub Kubernetes GKE Enterprise Previous arrow_back Edge hybrid pattern Next Business continuity hybrid and multicloud patterns arrow_forward Send feedback \ No newline at end of file diff --git a/Error_Reporting.txt b/Error_Reporting.txt new file mode 100644 index 0000000000000000000000000000000000000000..13a323fe5478881afd74c5b8505b288220c724d4 --- /dev/null +++ b/Error_Reporting.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/error-reporting/docs/grouping-errors +Date Scraped: 2025-02-23T12:07:32.090Z + +Content: +Home Google Cloud Observability Error Reporting Documentation Guides Send feedback Error Reporting overview Stay organized with collections Save and categorize content based on your preferences. Error Reporting aggregates errors produced in your running cloud services. These errors are either reported by the Error Reporting API or are inferred to be errors when Error Reporting inspects log entries for common text patterns such as stack traces. Error Reporting groups errors which are considered to have the same root cause. Error Reporting is automatically enabled. Error Reporting samples up to 1,000 errors per hour. When this limit is reached, the displayed counts are estimated. If too many events are received, then Error Reporting samples up to 100 errors per hour and continue to extrapolate the counts. When Error Reporting analyzes log entries Error Reporting is a global service built on Cloud Logging and can analyze log entries when all of the following are true: Assured workloads are disabled. For more information, see Overview of Assured Workloads. Customer-managed encryption keys (CMEK) is disabled on all log buckets that store the log entry. For information about how to determine the CMEK configuration for a log bucket, see Verify key enablement. The log bucket satisfies one of the following: The log bucket is stored in the same project where the log entries originated. The log entries were routed to a project, and then that project stored those log entries in a log bucket that it owns. If you are storing your log entries in log buckets with CMEK enabled, then you can still use Error Reporting. However, you must use the Error Reporting client libraries or the Error Reporting API. For more information, see the Error Reporting API overview and Error Reporting client libraries. How errors are grouped When Error Reporting evaluates log entries, it ignores log entries with the following conditions: On App Engine standard environment, errors logged with a severity lower than ERROR are ignored. Stack frames which are not owned by the user are ignored (for instance, those that belong to public libraries). Any repeating sequence of one or more stack frames is replaced by a single occurrence of that sequence. Compiler-introduced methods and symbols are removed. Next, Error Reporting follows these rules to group errors: Exceptions are grouped together if they have the same exception type and similar stacks. The stack trace is ignored for exceptions that are typically unrelated to the source location where they occur. Errors without an exception stack are grouped together if they were created by the same log entry, approximated by the source location it was reported from (reportLocation). 
Specifically, the following grouping rules are applied in this order: Error type Grouped by Errors caused by a general problem in the environment. For example, App Engine specific problems: com.google.apphosting.runtime.HardDeadlineExceededError com.google.appengine.api.datastore.DatastoreTimeoutException Java problems: java.util.concurrent.CancellationException Grouped by exception type. Errors with a stack trace. In the case of nested exceptions, the innermost exception is considered. For example: runtime error: index out of range package1.func1() file1:20 package2.func2() file2:33 Grouped by exception type and the 5 top-most frames. Errors without a stack trace, but with a message. For example: runtime error: index out of range func1() Grouped by message and (if present) function name. Only the first 3 literal tokens of the message are considered. In the example to the left, these are runtime, error, and index. Data regionality If you set up Assured Workloads for data-residency or Impact Level 4 (IL4) requirements, then Google Cloud automatically disables Error Reporting. In Cloud Logging, you can regionalize your logs by routing them to a specific location. On the Error Groups page, Error Reporting organizes and shows error groups based on the region of the log bucket that contains the log entries. For example, an error group listed under us-central-1 contains only error logs that are part of a log bucket in us-central-1. Global error groups contain only error logs that are part of a log bucket in the global region. To filter the region of the error groups displayed on the Error Groups page, select a value from the Region menu. This menu has a default value of global. Note: Because Error Reporting is a global service, error groups can be accessed from any region. This behavior isn't configurable. What's next Collect error data by using Error Reporting View errors Send feedback \ No newline at end of file diff --git a/Eventarc.txt b/Eventarc.txt new file mode 100644 index 0000000000000000000000000000000000000000..e5f2b2e2300c8681da16cb43b9567cec2dcb6a43 --- /dev/null +++ b/Eventarc.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/eventarc/docs +Date Scraped: 2025-02-23T12:05:32.078Z + +Content: +Home Documentation Eventarc Send feedback Eventarc overview Stay organized with collections Save and categorize content based on your preferences. Advanced Standard Preview — Eventarc Advanced This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions. Eventarc lets you build event-driven architectures without having to implement, customize, or maintain the underlying infrastructure. Eventarc is offered in two editions: Eventarc Advanced and Eventarc Standard. Both editions offer a scalable, serverless, and fully managed eventing solution that lets you asynchronously route messages from sources to targets using loosely coupled services that are triggered by and react to state changes known as events. Both editions support a range of event providers and destinations—including Google Cloud services, custom applications, SaaS applications, and third-party services—while managing delivery, security, authorization, observability, and error-handling for you. Note that the underlying data model for both editions of Eventarc is the same. 
As a use case grows in complexity, you have the option of seamlessly transitioning from using Eventarc Standard to using Eventarc Advanced. Editions overview The following is an overview of both editions. For more detailed information, see the Eventarc Advanced overview and the Eventarc Standard overview. Eventarc Advanced Eventarc Advanced is a fully managed platform for building event-driven architectures. It lets you collect events that occur in a system and publish them to a central bus. Interested services can subscribe to specific messages by creating enrollments. You can use the bus to route events from multiple sources in real time and publish them to multiple destinations, and optionally transform events prior to delivery to a target. Eventarc Advanced is feature rich and is ideal for organizations with complex eventing and messaging needs, particularly those grappling with managing numerous Pub/Sub topics, Kafka queues, or other third-party messaging systems. By providing administrators with enhanced and centralized visibility and control, Eventarc Advanced enables organizations to connect multiple teams across different projects. Eventarc Advanced lets you receive, filter, transform, route, and deliver messagesbetween different event providers and destinations (click diagram to enlarge). Eventarc Standard Eventarc Standard is recommended for applications where the focus is on simply delivering events from event provider to event destination. It lets you quickly and easily consume Google events by defining triggers that filter inbound events according to their source, type, and other attributes, and then route them to a specified destination. Eventarc Standard lets you filter and route eventsfrom event providers to event destinations (click diagram to enlarge). Features comparison table The following table can help you choose between Eventarc Advanced and Eventarc Standard. It assumes your familiarity with the basic concepts of event-driven architectures. 
Feature Eventarc Advanced Eventarc Standard Access control Per message access control and central governance with IAM See Access control with IAM See Access control with IAM Capacity Automatically provisioned Automatically provisioned Client library languages Java, Python, Go, Node.js, C++, C#, PHP, RubySee Eventarc client libraries Java, Python, Go, Node.js, C++, C#, PHP, RubySee Eventarc client libraries Compliance standards Doesn't apply to any feature in Preview See Compliance standards Cross-project event delivery SupportedSee Publish events from Google sources Not supported Customer managed encryption keys YesSee Use customer-managed encryption keys YesSee Use customer-managed encryption keys Dead letter queues supported No Yes, through Pub/Sub dead letter topicSee Retry events Event format Events are delivered to the destination in a CloudEvents formatSee Event format Optionally, you can override this behavior by defining an HTTP binding Events are delivered to the destination in a CloudEvents format See Event format Event size 1 MB maximumSee Quotas and limits 512 KB maximumSee Quotas and limits Locations See Eventarc Advanced locations See Eventarc Standard locations Message filtering Filtering on any and all event attributes Filtering on event type and specific attributes Message routing Many providers to many destinations Provider to destination Message schema conversion YesSee Convert the format of received events No Message transformation Yes, through CEL expressionsSee Transform received events No Observability Through Google Cloud Observability such as Cloud Logging and Cloud MonitoringSee Eventarc audit logging Through Google Cloud Observability such as Cloud Logging and Cloud MonitoringSee Eventarc audit logging Ordered delivery There is no in-order, first-in-first-out delivery guarantee There is no in-order, first-in-first-out delivery guarantee Pricing See Eventarc pricing See Eventarc pricing Regionality RegionalSee Understand regionality Regional, GlobalSee Understand Eventarc locations REST endpoints https://eventarc.googleapis.comSee Eventarc API https://eventarcpublishing.googleapis.comSee Eventarc Publishing API https://eventarc.googleapis.comSee Eventarc API Retry and retention At-least-once event delivery to targets; default message retention duration is 24 hours with an exponential backoff delay See Retry events At-least-once event delivery to targets; default message retention duration is 24 hours with an exponential backoff delay See Retry events Service limits One bus per Google Cloud project100 pipelines per Google Cloud project per regionSee Quotas and limits 500 triggers per location per Google Cloud projectSee Quotas and limits Service perimeter using VPC Service Controls YesSee Set up a service perimeter using VPC Service Controls YesSee Set up a service perimeter using VPC Service Controls Supported sources Google providersDirect publishers using the Eventarc Publishing APISee Event providers and destinations Google providersGoogle providers through audit logsThird-party providersSee Event providers and destinations Supported targets Cloud Run functions (including 1st gen)Cloud Run jobs and services Eventarc Advanced busesInternal HTTP endpoints in VPC networksPub/Sub topicsWorkflowsSee Event providers and destinations Cloud Run functionsCloud Run servicesInternal HTTP endpoints in VPC networksPublic endpoints of private and public GKE servicesWorkflowsSee Event providers and destinations Send feedback \ No newline at end of file diff --git 
a/Events_and_webinars.txt b/Events_and_webinars.txt new file mode 100644 index 0000000000000000000000000000000000000000..de863ed4751cf214688700c5ae41846efc139ba3 --- /dev/null +++ b/Events_and_webinars.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/events +Date Scraped: 2025-02-23T12:11:47.895Z + +Content: +Events for everyone.Explore cloud events and get insights into how the most successful companies transformed their business with Google Cloud.Filter byTypeUpcoming in personUpcoming digitalOn demandTopicsApplication ModernizationArtificial Intelligence & Machine LearningData ManagementIT InfrastructureInfrastructure ModernizationProductivity & CollaborationSecuritySmart AnalyticsRegionsAMEREMEAAPACJapanDatePast 60+ daysPast 60 days30-60 days away60+ days awayDurationLess than an hour1 - 3 hours3 - 5 hours5+ hourssearchcloseinstant_mixEMEAcloseClear allFeatured filtersSummit Seriesinstant_mixOpportunities for you to engage with experts in live Q&As, explore demos, and gain insights from our partners.For developersinstant_mixBringing you best practices, tips and tricks, segments on what’s new, and more through live interactive discussions.Gemini at Work - AI in Action 2025 - MunichFeb 25Die Anmeldungen sind geschlossen, da die Veranstaltung ausgebucht ist. Ausgewählte Präsentationen werden jedoch kurz nach der Veranstaltung on demand verfügbar sein.Google Munich6 hr 30 minTechByte: Understanding the current cyber threat landscapeFeb 25Cybersecurity threats are constantly evolving. When it comes to preparation, looking ahead means staying ahead. Join our experts as they explore the Cybersecurity Forecast 2025 report to help you get your cyber defences ready for the year ahead. This webinar is ideal for: Cybersecurity professionals, IT leaders, and anyone interested in enhancing their understanding of the current and future cyber threat landscape.Online1 hrTechByte: Production ready generative AI web apps with Genkit and Firebase App HostingFeb 25Tired of your generative AI apps not making it past the ideation phase? You could be saving time and effort by using the same tools and services to take them from ideation, iteration to live users. In this webinar, we’ll guide you through building and deploying production-ready generative AI modern web apps using Node.js with Genkit and Firebase App Hosting. This webinar is ideal for: developers, product managers, and anyone interested in leveraging generative AI to build cutting-edge web applications. Remember; production or it didn’t happen.Online1 hrTechByte: Google Workspace: AI-powered collaboration for organizations of all sizesFeb 25Join our webinar to learn how Google Workspace can unlock the power of AI and collaboration to supercharge your organization's productivity. This webinar is ideal for: anyone who wants to benefit from generative AI within the workplace.Online1 hrTechByte: Cloud Roadmap Series: AI InnovationFeb 26The latest products and features from Google Cloud driving true AI-driven business results. It’s all right here in our Cloud Roadmap series. Join our product management leadership for an invitation-only Vertex AI Roadmap session packed with exciting new announcements you can’t afford to miss. 
This session is ideal for: anyone interested in learning about the latest breakthroughs in enterprise-ready generative AI.Online1 hrGoogle Cloud LiftOff - February 26 & 27Feb 26 – 27Boost your Google Cloud KnowledgeOnline Spanish1 day 3 hr 30 minTechByte: John Lewis Partnership: platform engineering in actionFeb 26Join us for a conversation between Darren Evans of Google Cloud and Alex Moss of the John Lewis Partnership, focusing on Platform Engineering. Together, they’ll discuss the reasons behind JLP's adoption of Platform Engineering, the challenges they faced, and the strategies they employed which resulted in an increase in both developer productivity and operational efficiency. This webinar is ideal for: architects, developers, tech leads, product managers, and anyone looking to accelerate application delivery, reduce operational complexity, and empower developers through self-service capabilities with platform engineering.Online1 hrTechByte: Modernise your database with Cloud SQL: reduce costs and boost performanceFeb 26Strained staff. Lost productivity. Risk of security breaches. Self-managing databases isn’t the quick, safe, and easy option anymore – it’s a burden. In this webinar, we’ll explore the value of fully managed Cloud SQL and how it can help you achieve improved performance, increase data security, and access higher availability for your mission-critical applications, while freeing your IT team to focus on what matters most. This webinar is ideal for: Anyone with an interest in ensuring their database management is fully optimised and easily scalable.Online1 hrGoogle Agentspace: Unlock your organization’s collective brilliance with AI agentsFeb 26 – 27Is your organization's expertise trapped in information silos? Google Agentspace unlocks it by combining Gemini's reasoning, Google-quality search, and your enterprise data. Agentspace increases employee productivity by offering new ways of interacting with data with NotebookLM, AI-powered information discovery and custom AI agents that apply generative AI contextually. Join Google Cloud experts to learn how to unlock your organization's collective brilliance.Online1 hrGemini at Work - AI in Action 2025 - BerlinFeb 27Erfahren Sie, wie Gemini für Google Workspace die Art und Weise, wie Organisationen arbeiten, revolutioniert. Lernen Sie reale Anwendungsfälle kennen und erleben Sie die Leistungsfähigkeit von KI.Google Berlin6 hr 30 minGoogle /dev/cloud day Warsaw | Tango 2025Mar 4Looking to boost your AI and cloud skills? Join us for Google /dev/cloud day Warsaw! Learn, connect, and get hands-on experience. Register today.Google Campus for Startups8 hrDevelop Secure AI Solutions with your DataMar 4 – 5Explore how to build and deploy effective AI solutions with your data. Topics include leveraging natural language for data discovery and exploration, using SQL based ML to build your first ML Model, Generative AI Data Agents for automation, and implementing security and governance best practices to effectively deploy and manage with confidence. 
Join Google Cloud experts to gain practical knowledge and tools essential to rapidly develop and scale AI Solutions in your organization.Online1 hrShow moreOther resourcesGoogle Cloud TrainingLearn moreGoogle Developer CenterLearn moreGoogle Cloud PodcastsLearn more \ No newline at end of file diff --git a/Evict_unwanted_consumer_accounts.txt b/Evict_unwanted_consumer_accounts.txt new file mode 100644 index 0000000000000000000000000000000000000000..3361dc05e822a0abc6118cc51f3baf4d57fdd7cf --- /dev/null +++ b/Evict_unwanted_consumer_accounts.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/evicting-consumer-accounts +Date Scraped: 2025-02-23T11:55:54.822Z + +Content: +Home Docs Cloud Architecture Center Send feedback Evict consumer accounts Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC If you haven't been using Cloud Identity or Google Workspace, it's possible that your organization's employees have been using consumer accounts to access Google services. Some of these consumer accounts might use a corporate email address such as alice@example.com as a primary or alternate email address. This document describes how you can evict, or get rid of, these types of consumer accounts by removing the corporate email address from them. Although the consumer accounts will still exist, removing the corporate email address helps you mitigate a social engineering risk—as long as a consumer account has a seemingly trustworthy email address like alice@example.com, the owner of the account might be able to convince current employees or business partners to grant them access to resources they shouldn't be allowed to access. Alternatively, by migrating consumer accounts, you can keep these accounts and turn them into managed accounts. But there might be some accounts that you don't want to migrate, such as the following: Consumer accounts that are used by former employees. Consumer accounts that are used by employees that are not supposed to access Google services. Consumer accounts for which you cannot recognize the owner. Before you begin To evict offending consumer accounts, you must satisfy the following prerequisites: You have identified a suitable onboarding plan and have completed all prerequisites for consolidating your existing user accounts. You have created a Cloud Identity or Google Workspace account. The primary or alternate email address of the consumer account must correspond to one of the domains that you have added to your Cloud Identity or Google Workspace account. Both primary and secondary domains qualify, but alias domains are not supported. Process Evicting unwanted consumer accounts works similarly to migrating consumer accounts, but it is based on deliberately creating a conflicting account. The following diagram illustrates the process. Boxes on the Administrator side denote actions a Cloud Identity or Google Workspace administrator takes; rectangular boxes on the User account owner side denote actions only the owner of a consumer account can perform. Find unmanaged user accounts You can use the transfer tool for unmanaged users to find consumer accounts that use a primary email address that matches one of the verified domains of your Cloud Identity or Google Workspace account.
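If you need to evict many accounts, you can script the create-then-delete sequence that this process relies on by using the Admin SDK Directory API, as in the following minimal sketch. The service account key file, administrator email address, user names, and target email address are placeholders, and the service account is assumed to have domain-wide delegation for the Directory API user scope. The equivalent Admin console steps are described in the next section; because the API doesn't show the interactive conflict warning, confirm your list of accounts to evict before you run a script like this.

import secrets

from google.oauth2 import service_account
from googleapiclient import discovery

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]
credentials = service_account.Credentials.from_service_account_file(
    "admin-sa-key.json", scopes=SCOPES  # placeholder key file
).with_subject("admin@example.com")  # placeholder administrator

directory = discovery.build("admin", "directory_v1", credentials=credentials)

conflicting_email = "alice@example.com"  # placeholder account to evict

# Create the managed account with a random password so that it can't be used.
directory.users().insert(
    body={
        "primaryEmail": conflicting_email,
        "name": {"givenName": "Alice", "familyName": "Example"},
        "password": secrets.token_urlsafe(24),
    }
).execute()

# Delete it immediately, which forces the consumer-account owner to rename
# their account without being shown the ballot screen.
directory.users().delete(userKey=conflicting_email).execute()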
Create a conflicting account When you have identified a consumer account that you want to evict, do the following: Create a user account in Cloud Identity or Google Workspace that has the same corporate email address as the account you want to evict. If the consumer account uses the corporate email address as the primary email address, the Admin Console warns you about an impending conflict. Because you are intentionally creating the conflicting account, select Create new user. Because you don't want the managed user account to ever be used, assign a random password. Delete the user account that you just created. By creating a conflicting account and immediately deleting it, you force the owner to rename that user account. But you avoid that owner being shown a ballot screen that prompts them to choose between the managed and consumer account. Rename the user account For the owner of the evicted user account, the next time they sign in, they see the following message: As the screenshot suggests, they have three options for proceeding: Convert the user account into a Gmail account. Associate a different email address with the account. Postpone the rename. This causes the user account to use a temporary gtempaccount.com email address in the meantime. All configuration and data that was created by using this consumer account is unaffected by the rename. Best practices We recommend the following best practices when you evict unwanted consumer accounts: Ensure that affected users can no longer receive email on their (former) corporate email address. Otherwise, a user might be able to undo the rename and switch the primary email address back to the corporate email address. Prevent other users from signing up for new consumer accounts by proactively provisioning user accounts to Cloud Identity or Google Workspace. What's next Review how you can assess existing user accounts. Learn how to remove a corporate email address from a Gmail account. Send feedback \ No newline at end of file diff --git a/Example_announcement.txt b/Example_announcement.txt new file mode 100644 index 0000000000000000000000000000000000000000..6b5333d1de61ca470464233b1f414fee82bd8f4a --- /dev/null +++ b/Example_announcement.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/example-announcement +Date Scraped: 2025-02-23T11:56:04.987Z + +Content: +Home Docs Cloud Architecture Center Send feedback Example announcement Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-29 UTC This document contains an example email that you can customize and use to send an announcement to users whose consumer accounts you plan to migrate to managed accounts. Migrating consumer accounts to managed accounts requires the consent of affected users. A migration might also affect these users personally if they have previously used these accounts for private purposes. To make sure that users have time to prepare for a migration and are well-informed about the reasons behind a migration, it's important to announce the migration to them in advance. Example announcement email Subject: Upcoming transfer of your Google user account At Example Organization, we use a number of Google services to operate our business, including Google Cloud, Google Marketing Platform, and others. We have noticed that you currently access these services using Google user accounts that you have signed up for by yourself. 
Starting on March 1, Example Organization IT will centralize the management of all Google user accounts that use example.com email addresses. What you need to know This migration only affects your affected username@example.com user account. Other Google user accounts, including any private Gmail accounts that you might use, are not affected. When the migration is complete, your example.com user account will be managed by Example Organization IT and will be subject to the Example Organization [link to acceptable use policy]. The email address of your Google user account will stay the same and all your data and settings will be retained. After your user account has been migrated, you must use your corporate credentials for future sign-ins. How to prepare for the migration If you have been using your example.com Google user account for private purposes, do the following: Go to Google TakeOut. Verify that you're signed in with your example.com user account. If you're signed in with a different user account, switch to your example.com user account. Follow the instructions on the Google TakeOut page to download your personal data. You must perform this download before March 1. What will happen after March 1 You will receive an email requesting you to transfer your user account to ExampleOrganization IT: When you receive the email, click Transfer account. This link opens a browser window. In the browser window, click Sign in and transfer account. The website guides you through the migration process, which takes only a few minutes to complete. Please help us to perform the user transition quickly by taking action immediately when you receive the email. If you think that your user account should not be transferred, contact it-support@example.com before you decline the transfer request. Declining or delaying the user account transfer will slow down the migration, and we might have to revoke your access to Google services. Your ExampleOrganization team. Send feedback \ No newline at end of file diff --git a/Executive_insights.txt b/Executive_insights.txt new file mode 100644 index 0000000000000000000000000000000000000000..8ce103d91e75d7551b3c693990551e5beec51d5d --- /dev/null +++ b/Executive_insights.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/executive-insights +Date Scraped: 2025-02-23T11:57:36.397Z + +Content: +Prefer to listen to your content? Check out our podcast, AI That Means Business, in partnership with Custom Content from WSJ.Executive insightsGoogle Cloud's curated C-suite perspectives on generative AI, cloud computing, cybersecurity, and innovation.Subscribe to the Leader's DigestAI Business Trends 2025Five AI trends are set to shape business in 2025. Get a glimpse of what the future holds in the AI Business Trends 2025 report.Get the reportThe ROI of Gen AIWe surveyed 2,500 senior leaders to uncover the ROI of gen AI on business and financial performance.Get the reportCEO insightsFind out how CEOs around the globe are thinking about AI, machine learning, business resilience, and productivity.Executive's guide to gen AISee how your industry peers are starting with generative AI and use our 10-step, 30-day plan to hit the ground running with your first use case.10-min readGen AI is the Future. 
But How do Companies Start Achieving ROI Today?Hear insights from Oliver Parker, VP of Google Cloud, as he shares the business areas where gen AI is already creating value.Video (1:00)Gen AI for Social GoodDiscover how nonprofits are tackling climate action, education, crisis response, and more.2-min readMark Cuban’s Cost Plus Drugs switched to Google WorkspaceLearn why they migrated from Microsoft to Google Workspace to boost collaboration and productivity.5-min readCrossing the generative AI tipping pointExplore strategies that drive immediate value, while laying the foundation for future breakthroughs.15-min readManufacturers are seeing ROI on gen AIManufacturers are already putting gen AI into production — and seeing real returns.5-min readView MoreCTO insightsGet the latest CTO insights, analyst reports, and technical leadership resources. Explore testimonials from industry-leading CTOs around the globe.Google is a Leader in The Forrester Wave™: AI Foundation Models For Language, Q2 2024Discover how Google Cloud’s capabilities compare with other vendors in the market.11-min readGoogle named Leader in The Forrester Wave: AI Infrastructure Solutions, Q1 2024Google receives the highest scores of any vendor evaluated in both Current Offering and Strategy.12-min readGoogle is a Leader for the 5th consecutive year, in the 2024 Gartner® Magic Quadrant™ for Cloud AI Developer ServicesLearn why Google is a Leader based on 'Ability to Execute' and 'Completeness of Vision8-min readView MoreCIO insightsGet the latest CIO insights to help you solve your organization's toughest challenges. Learn how your peers increase speed of IT, organizational agility, and cybersecurity.Mandiant M-Trends 2024 Special ReportExpert insights into today’s top cybersecurity trends and attacker developments for 15 years running20-min readDeloitte: Entering the era of generative AI-enabled securityDiscover how generative AI can help solve some of security’s toughest challenges.10-min readThe CIO's guide to digital business ecosystemsKey findings from the Google Cloud and Oxford Economics survey of 1,000 CIOs of large organizations.12-min readView MoreFind more informative contentGoogle Cloud newsletterSign up for Google Cloud newsletters with product updates, event information, special offers, and more.Subscribe nowOn-demand webinarsTune in every week to hear from our experts and partners who will help you discover how to get the most from Google Cloud.Watch nowTransform with Google CloudDiscover blogs inside the moments when cloud computing changed everything.Read nowPress cornerGet the latest news and find everything you need about Google Cloud.Read nowTell us what you're solving forA Google Cloud executive will help you find the right solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudTry it freeDeploy ready-to-go solutionsExplore the marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Federate_with_Active_Directory.txt b/Federate_with_Active_Directory.txt new file mode 100644 index 0000000000000000000000000000000000000000..9eaa96cfea3349e6ad811dc9ebfe354a3fef884b --- /dev/null +++ b/Federate_with_Active_Directory.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/federating-gcp-with-active-directory-introduction +Date Scraped: 2025-02-23T11:55:15.830Z + +Content: +Home Docs Cloud Architecture Center Send feedback Federate Google Cloud with Active Directory Stay organized with collections Save and categorize 
content based on your preferences. Last reviewed 2024-06-26 UTC This document describes how you can configure Cloud Identity or Google Workspace to use Active Directory as IdP and authoritative source. The document compares the logical structure of Active Directory with the structure used by Cloud Identity and Google Workspace and describes how you can map Active Directory forests, domains, users, and groups. The document also provides a flowchart that helps you determine the best mapping approach for your scenario. This document assumes that you're familiar with Active Directory. Implement federation Google Cloud uses Google identities for authentication and access management. Manually maintaining Google identities for each employee can add unnecessary management overhead when all employees already have an account in Active Directory. By federating user identities between Google Cloud and your existing identity management system, you can automate the maintenance of Google identities and tie their lifecycle to existing users in Active Directory. Setting up federation between Active Directory and Cloud Identity or Google Workspace entails two pieces: Provisioning users: Relevant users and groups are synchronized periodically from Active Directory to Cloud Identity or Google Workspace. This process ensures that when you create a new user in Active Directory, it can be referenced in Google Cloud even before the associated user has logged in for the first time. This process also ensures that user deletions are being propagated. Provisioning works one way, which means changes in Active Directory are replicated to Google Cloud but not the other way around. Also, provisioning does not include passwords. In a federated setup, Active Directory remains the only system that manages these credentials. Single sign-on: Whenever a user needs to authenticate, Google Cloud delegates the authentication to Active Directory by using the Security Assertion Markup Language (SAML) protocol. This delegation ensures that only Active Directory manages user credentials and that any applicable policies or multi-factor authentication (MFA) mechanisms are being enforced. For a sign-on to succeed, however, the respective user must have been provisioned before. To implement federation, you can use the following tools: Google Cloud Directory Sync (GCDS) is a free Google-provided tool that implements the synchronization process. GCDS communicates with Google Cloud over Secure Sockets Layer (SSL) and usually runs in the existing computing environment. Active Directory Federation Services (AD FS) is provided by Microsoft as part of Windows Server. With AD FS, you can use Active Directory for federated authentication. AD FS usually runs within the existing computing environment. Because APIs for Google Cloud are publicly available and SAML is an open standard, many tools are available to implement federation. This document focuses on using GCDS and AD FS. Logical structure of Active Directory In an Active Directory infrastructure, the top-level component is the forest. The forest serves as a container for one or more domains and derives its name from the forest root domain. Domains in an Active Directory forest trust each other, allowing users who are authenticated in one domain to access resources that are in another domain. 
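Before you decide how to map forests and domains, it can help to inventory the forest topology, its domains, UPN suffixes, and trust relationships, directly from PowerShell. The following is a minimal sketch, assuming the ActiveDirectory module (part of RSAT) is installed and you have read access to the forest; its output feeds the mapping decisions discussed in the rest of this document.

Import-Module ActiveDirectory

# Summarize the forest: root domain, member domains, and UPN suffix domains.
$forest = Get-ADForest
"Forest root domain : $($forest.RootDomain)"
"Domains in forest  : $($forest.Domains -join ', ')"
"UPN suffix domains : $($forest.UPNSuffixes -join ', ')"

# List trust relationships, which indicate whether separate forests are connected.
Get-ADTrust -Filter * | Select-Object Name, Direction, ForestTransitive | Format-Table -AutoSize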
Unless forests are connected by using cross-forest trusts, separate forests don't trust each other by default, and users who are authenticated in one forest cannot access resources that are in a different forest. Active Directory domains are containers for managing resources and are considered administrative boundaries. Having multiple domains in a forest is one way to simplify administration or enforce additional structure, but domains in a forest don't represent security boundaries. Logical structure of Google Cloud In Google Cloud, organizations serve as containers for all resources, and they can be further segmented by using folders and projects. Organizations, folders, and projects therefore serve a purpose similar to Active Directory domains. Active Directory treats users as resources, so user management and authentication are tied to domains. In contrast, Google Cloud doesn't manage users in an organization, except for service accounts. Instead, Google Cloud relies on Cloud Identity or Google Workspace to manage users. A Cloud Identity or Google Workspace account serves as a private directory for users and groups. As an administrator of the account, you can control the lifecycle and the configuration of users and groups and define how authentication can be performed. When you create a Cloud Identity or Google Workspace account, a Google Cloud organization is created automatically for you. The Cloud Identity or Google Workspace account and the Google Cloud organization that's associated with it share the same name and are tied to each other. However, a Google Cloud organization is allowed to reference users and groups from other Cloud Identity or Google Workspace accounts. Integrate Active Directory and Google Cloud Despite certain similarities between the logical structure of Active Directory and Google Cloud, no single mapping between the two structures works equally well in all scenarios. Instead, the right approach to integrating the two systems and mapping the structure depends on multiple factors: How to map domains and forests to Cloud Identity or Google Workspace accounts How to map DNS domains How to map users How to map groups The following sections look at each of these factors. Map forests Especially in larger organizations, you often use more than one Active Directory domain to manage identities and access across the enterprise. When you are planning to federate Active Directory and Google Cloud, the first factor to look at is the topology of your Active Directory infrastructure. Single forest, single domain When a forest includes just one domain, you can map the entire Active Directory forest to a single Cloud Identity or Google Workspace account. This account then provides the basis for a single Google Cloud organization that you can use to manage your Google Cloud resources. In a single-domain environment, domain controllers and global catalog servers both provide access to all objects that are managed in Active Directory. In most cases, you can run a single instance of GCDS to synchronize user accounts and groups to Google Cloud, and to maintain a single AD FS instance or fleet to handle single sign-on. Single forest, multiple domains When a forest contains multiple Active Directory domains, you can organize them in one or more domain trees. In both cases, you can map the entire forest to a single Cloud Identity or Google Workspace account. 
This account then provides the basis for a single Google Cloud organization that you can use to manage your Google Cloud resources. In a multi-domain environment, there is a difference between what information can be retrieved from a domain controller and what can be queried from a global catalog server. While domain controllers serve data only from their local domain, global catalog servers provide access to information from all domains in the forest. Crucially, the data that is served by global catalog servers is partial and lacks certain LDAP attributes. This limitation can affect how you configure GCDS to synchronize groups. Depending on how you plan to map groups, federating a multi-domain forest with Google Cloud requires one or more GCDS instances but only a single AD FS instance or fleet to handle single sign-on. Multiple forests with cross-forest trust In larger organizations, it's not uncommon to have more than one Active Directory forest, often as a result of a merger or acquisition. You can combine these forests by using a two-way, cross-forest trust so that users can share and access resources across the boundaries of a single forest. If all forests have a bidirectional trust relationship with another forest, you can map the entire environment to a single Cloud Identity or Google Workspace account. This account provides the basis for a single Google Cloud organization that you can use to manage your Google Cloud resources. Although global catalog servers provide access to data from multiple domains, their scope is limited to a single forest. So in a multi-forest environment, you must query multiple domain controllers or global catalog servers to obtain, for example, a complete list of users. As a result of this limitation, federating a multi-forest environment with Google Cloud requires at least one GCDS instance per forest. Cross-forest trusts enable user authentication to work across forest boundaries, so a single AD FS instance or fleet is sufficient to handle single sign-on. If your environment spans multiple forests without cross-forest trust, but all Active Directory domains that are relevant for federation with Google Cloud are connected through external trusts, then the same considerations apply. Multiple forests without cross-forest trust In the environment illustrated here, it's not possible to authenticate or access resources across the forest boundaries. It's also not possible for a single AD FS instance or fleet to handle single sign-on requests for users from all forests. Therefore, it's not possible to map multiple forests that lack cross-forest trusts to a single Cloud Identity or Google Workspace account. Instead, each forest must be mapped to a separate Cloud Identity or Google Workspace account, which involves running at least one GCDS instance and one AD FS server or fleet per forest. In Google Cloud, a separate organization is created for each Cloud Identity or Google Workspace account. In most cases, you don't need to maintain multiple, separate organizations. You can select one of the organizations and associate it with the other Cloud Identity or Google Workspace accounts, effectively creating an organization that is federated with multiple Active Directory forests. The other organizations remain unused. Map DNS domains DNS plays a crucial role both in Active Directory and for Cloud Identity and Google Workspace. 
The second factor to look at when you're planning to federate Active Directory and Google Cloud is how to share or map DNS domains between Active Directory and Google Cloud. Usage of DNS domains in Active Directory In an Active Directory forest, DNS domains are used in multiple places: Active Directory DNS domains: Each Active Directory domain corresponds to a DNS domain. This domain might be global, like corp.example.com, or can be a local domain name like corp.local or corp.internal. Mail exchange (MX) domains: Email addresses use a DNS domain. In some cases, this domain is the same as the Active Directory DNS domain, but in many cases, a different, often shorter, domain such as example.com is used. Ideally, users in Active Directory have the email address that is associated with the optional mail attribute. UPN suffix domains: These domains are used for User Principal Names (UPN). By default, the Active Directory DNS domain of the user's domain is used to build a UPN. For a user john in the domain corp.example.com, the default UPN therefore reads john@corp.example.com. However, you can configure a forest to use additional DNS domains as UPN suffixes that correspond to neither Active Directory DNS domains nor MX domains. UPNs are optional and are stored in the userPrincipalName field of the user. Endpoint domains: Public-facing servers such as AD FS servers are usually assigned a DNS name, such as login.external.example.com. The domain that is used for these purposes can overlap with the MX, UPN suffix, or Active Directory DNS domain, or it can be an entirely different domain. Usage of DNS domains in Google Cloud Google Sign-In, which Google Cloud relies on for authentication, uses email addresses to identify users. Using email addresses not only guarantees that they are globally unique, but also enables Google Cloud to send notification messages to the addresses. Google Sign-In determines the directory or identity provider to use for authenticating a user based on the domain part of the email addresses, which follows the @. For an email address that uses the gmail.com domain, for example, Google Sign-In uses the directory of Gmail users for authentication. When you sign up for a Google Workspace or Cloud Identity account, you're creating a private directory that Sign-In can use for authentication. In the same way that the Gmail directory is associated with the gmail.com domain, Google Workspace and Cloud Identity accounts need to be associated with a custom domain. Three different kinds of domains are used: Primary domain: This domain identifies the Cloud Identity or Google Workspace account and is used as the name for the organization in Google Cloud. When signing up for Cloud Identity or Google Workspace, you must specify this domain name. Secondary domain: Along with the primary domain, you can associate other, secondary domains with a Cloud Identity or Google Workspace account. Each user in the directory is associated with either the primary domain or one of the secondary domains. Two users, johndoe@example.com and johndoe@secondary.example.com, are considered separate users if example.com is the primary domain and secondary.example.com is a secondary domain. Alias domain: An alias domain is an alternate name for another domain. That is, johndoe@example.com and johndoe@alias.example.com refer to the same user if alias.example.com is set up as an alias domain for example.com. All domains must satisfy the following requirements: They must be valid, global DNS domain names. 
During setup, you might need administrative access to the respective DNS zones in order to verify domain ownership. A domain, such as example.com, can refer only to a single directory. However, you can use different subdomains, such as subdomain.example.com, to refer to different directories. Primary and secondary domains should have a valid MX record so that messages sent to email addresses that are formed by using this domain name can be delivered properly. In order to enable synchronizing between the directories, some mapping is required between the Active Directory domains and the domains that Cloud Identity or Google Workspace uses. Determining the right mapping depends on how you use Active Directory and requires a closer look at how users are identified in an Active Directory forest and how they can be mapped to Cloud Identity or Google Workspace. Map users The third factor to look at when planning to federate Active Directory and Google Cloud is how to map users between Active Directory and Cloud Identity or Google Workspace. Identify users in Active Directory Internally, Active Directory uses two identifiers to uniquely identify users: objectGUID: This globally unique ID is generated when a user is created, and never changes. objectSID: The SID, or security identifier, is used for all access checks. While this ID is unique and stable within a domain, it's possible that when moved to a different domain in the forest, a user might be assigned a new SID. Neither of these IDs is meaningful to users, so Active Directory offers two human-friendly ways to identify users: UPN (userPrincipalName): The preferred way to identify a user is by UPN. UPNs follow the RFC 822 format of email addresses and are created by combining the username with a UPN suffix domain, as in johndoe@corp.example.com. Despite being the preferred way to identify users, UPNs are optional, so some users in your Active Directory forest might lack a UPN. Although it's considered a best practice that UPNs be valid email addresses, Active Directory does not enforce this practice. Pre–Windows 2000 logon name (sAMAccountName): This name combines the NetBIOS domain name and username by using the format domain\user, as in corp\johndoe. Although these names are considered legacy, they are still commonly used and are the only mandatory identifier of a user. Notably, Active Directory does not use the user's email address (mail) to identify users. Consequently, this field is neither mandatory nor required to be unique in a forest. All of these identifiers can be changed at any time. Map user identities Mapping Active Directory users to Cloud Identity or Google Workspace users requires two pieces of information for each user: A stable, unique ID that you can use during synchronization to track which Active Directory user corresponds to which user in Cloud Identity or Google Workspace. On the AD side, the objectGUID is perfectly suited for this purpose. An email address for which the domain part corresponds to a primary, secondary, or alias domain of your Cloud Identity or Google Workspace account. Because this email address will be used throughout Google Cloud, make sure the address is meaningful. Deriving an address from the objectGUID is impractical, as are other automatically generated email addresses. For an Active Directory user, two fields are candidates for providing a Cloud Identity or Google Workspace email address: userPrincipalName and mail.
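Before choosing between these two fields, it can be useful to export both candidates, together with the stable objectGUID, for all users and review how complete and unique each field is. A minimal sketch, assuming the ActiveDirectory PowerShell module; the output file name is just an example:

Import-Module ActiveDirectory

# Export the stable identifier together with the two candidate address fields.
Get-ADUser -Filter * -Properties userPrincipalName, mail |
    Select-Object ObjectGUID, UserPrincipalName, Mail |
    Export-Csv -Path user-identity-candidates.csv -NoTypeInformation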
Map by user principal name Using the userPrincipalName field requires that two criteria be met for all users that are subject to synchronization: User principal names (UPNs) must be valid email addresses. All domains that are used as UPN suffix domains also must be MX domains and must be set up so that an email that is sent to a user's UPN is delivered to their email inbox. UPN assignments must be complete. All users that are subject to synchronization must have a UPN assigned. The following PowerShell command can help you find users that lack a UPN: Get-ADUser -LDAPFilter "(!userPrincipalName=*)" If these two criteria are met, you can safely map UPNs to Cloud Identity or Google Workspace email addresses. You can use one of the UPN suffix domains as the primary domain in Cloud Identity or Google Workspace and add any other UPN suffix domains as secondary domains. If one of the criteria is not met, you can still map UPNs to Cloud Identity or Google Workspace email addresses, but the following caveats apply: If UPNs are not valid email addresses, users might not receive notification emails that are sent by Google Cloud, which might cause users to miss important information. Users without UPNs are ignored during synchronization. You can configure the synchronization to replace the UPN suffix domain with a different domain. When you're using multiple UPN suffix domains in a forest, this approach can create duplicates, however, because all UPN suffix domains will be replaced by a single domain. In case of duplicates, only a single user can be synchronized. A major advantage of using UPNs to map users is that UPNs are guaranteed to be unique across a forest, and they use a curated set of domains, which helps avoid potential synchronization problems. Map by email address Using the mail field requires meeting the following criteria for all users that are subject to synchronization: Email assignments must be complete. All users that are subject to synchronization must have the mail field populated. The following PowerShell command can help you find users for which this field is not populated: Get-ADUser -LDAPFilter "(!mail=*)" Email addresses must use a curated set of domains, all of which are owned by you. If some of your users use email addresses that refer to partner companies or consumer email providers, those email addresses cannot be used. All email addresses must be unique across the forest. Because Active Directory does not enforce uniqueness, you might have to implement custom checks or policies. If all relevant users meet these criteria, you can identify all domains that are used by these email addresses and use them as primary and secondary domains in Cloud Identity or Google Workspace. If one of the criteria is not met, you can still map email addresses to Cloud Identity or Google Workspace email addresses, with the following caveats: During synchronization, users without email addresses will be ignored, as will users with email addresses that use domains that are not associated with the Cloud Identity or Google Workspace account. When two users share the same email address, only one user will be synchronized. You can configure the synchronization to replace the domain of email addresses with a different domain. This process can create duplicates, in which case only one user will be synchronized. 
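The uniqueness requirement in particular often calls for a custom check, because Active Directory doesn't enforce it. A minimal sketch, assuming the ActiveDirectory PowerShell module; in a multi-domain forest, run it against each domain:

# Find email addresses that are assigned to more than one user; for such
# duplicates, only one of the affected users would be synchronized.
Get-ADUser -Filter 'mail -like "*"' -Properties mail |
    Group-Object -Property mail |
    Where-Object { $_.Count -gt 1 } |
    Select-Object Name, Count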
Map groups The fourth factor to look at when you're planning to federate Active Directory and Google Cloud is whether to synchronize groups between Active Directory and Google Cloud and how to map them. In Google Cloud, groups are commonly used as a way to manage access efficiently across projects. Rather than assigning individual users to IAM roles in each project, you define a set of groups that model common roles in your organization, and then assign those groups to a set of IAM roles. By modifying the membership of the groups, you can control users' access to an entire set of resources. Active Directory distinguishes between two kinds of groups: distribution groups and security groups. Distribution groups are used to manage email lists. Synchronizing distribution groups is relevant when you're migrating from Microsoft Exchange to Google Workspace, so GCDS can handle both regular and dynamic distribution groups. Distribution groups aren't a concern in identity and access management for Google Cloud, however, so this discussion focuses exclusively on security groups. Mapping groups between Active Directory and Google Cloud is optional. Once you've set up user provisioning, you can create and manage groups directly in Cloud Identity or Google Workspace, which means that Active Directory remains the central system for identity management but not for access management. To maintain Active Directory as the central system for identity management and access management, we recommend that you synchronize security groups from Active Directory instead of managing them in Cloud Identity or Google Workspace. With this approach, you can set up IAM so that you can use group memberships in Active Directory to control who has access to certain resources in Google Cloud. Security groups in Active Directory Security groups play a foundational role in Windows security and Active Directory access management. This role is facilitated by three different types of Active Directory groups: domain local groups, global groups, and universal groups. Domain local groups These groups are relevant only within the scope of a single domain and cannot be referenced in other domains. Because their list of members does not need to be replicated across the forest, domain local groups are the most flexible with respect to the types of members that they can include. Global groups These groups are surfaced to and can be referenced in other domains. Their member list is not replicated, however. This limitation restricts the types of members that these groups can include. These groups can only include users and other global groups from the same domain. Universal groups These groups, along with their member lists, are replicated across the forest. They can therefore be referenced in other domains and can include not only users and other universal groups but also global groups from all domains. If your Active Directory forest contains only a single domain, you can synchronize all three types of security groups by using GCDS. If your Active Directory forest uses more than one domain, the type of a group determines whether and how it can be synchronized to Cloud Identity or Google Workspace. Because domain local and global groups aren't fully replicated across a forest, global catalog servers contain incomplete information about them. Although this limitation is deliberate and helps to speed up directory replication, it's an obstacle when you want to synchronize such groups to Cloud Identity or Google Workspace. 
Specifically, if you configure GCDS to use a global catalog server as a source, then the tool will be able to find groups from all domains across the forest. But only groups that are in the same domain as the global catalog server will contain a membership list and be suitable for replication. To synchronize domain local or global groups in a multi-domain forest, you must run a separate GCDS instance per domain. Because universal groups are fully replicated across the forest, they don't have this restriction. A single GCDS instance can synchronize universal groups from multiple domains. Before concluding that you need multiple GCDS instances to synchronize multiple Active Directory domains to Cloud Identity or Google Workspace, keep in mind that not all groups might need to be synchronized. For this reason, it's worthwhile to look at how different types of security groups are typically used across your Active Directory forest. Usage of security groups in Active Directory The following sections describe the types of security groups that are used in Active Directory. Resource groups Windows uses an access model based on access control lists (ACLs). Each resource like a file or LDAP object has an associated ACL that controls which users have access to it. Resources and ACLs are very fine grained, so there are many of them. To simplify the maintenance of ACLs, it's common to create resource groups to bundle resources that are frequently used and accessed together. You add the resource group to all affected ACLs once, and manage further access by altering membership of the resource group, not by altering the ACLs. The resources that are bundled this way typically reside in a single domain. Consequently, a resource group also tends to be referenced only in a single domain, either in ACLs or by other resource groups. Because most resource groups are local, they are usually implemented by using domain local groups in Active Directory. Role and organization groups Resource groups help simplify access management, but in a large environment, you might need to add a new user to a large number of resource groups. For this reason, resource groups are commonly complemented by role groups or organization groups. Role groups aggregate the permissions that a specific role requires in the organization. A role group that is named Engineering Documentation Viewer, for example, might give members read-only access to all engineering documentation. Practically, you would implement this by creating a role group and making it a member of all resource groups that are used to control access to various kinds of documentation. In a similar way, organization groups aggregate the permissions that are required by departments within an organization. For example, an organization group that is named Engineering might grant access to all resources that members of the Engineering department commonly require. Technically, there is no difference between role groups and resource groups, and the two are commonly used in concert. Unlike resource groups, however, role and organization groups can have relevance beyond the boundaries of a domain. To allow such groups to be referenced by resource groups in other domains, role and organization groups are usually implemented by using global groups, which are constrained to members of a single domain, and sometimes complemented by universal groups, which allow members from different domains. 
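Because the number of GCDS instances you need hinges on which group scopes you actually use, a quick inventory can help. A minimal sketch, assuming the ActiveDirectory PowerShell module; in a multi-domain forest, repeat it per domain:

# Count security groups by scope (DomainLocal, Global, Universal). The scopes
# in use determine whether a single GCDS instance can synchronize your role
# and organization groups or whether you need one instance per domain.
Get-ADGroup -Filter 'GroupCategory -eq "Security"' |
    Group-Object -Property GroupScope |
    Select-Object Name, Count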
The following diagram shows a nesting pattern that is commonly used in multi-domain Active Directory environments. Groups in Cloud Identity and Google Workspace In Cloud Identity and Google Workspace, there is only a single type of group. Groups in Cloud Identity and Google Workspace aren't confined to the scope of the Cloud Identity or Google Workspace account where they were defined. Instead, they can include users from different Cloud Identity or Google Workspace accounts, support being referenced and nested in other accounts, and be used across any Google Cloud organization. As it does with users, Cloud Identity and Google Workspace identifies groups by email address. The email address doesn't have to correspond to an actual mailbox, but it must use one of the domains registered for the respective Cloud Identity account. Unlike Active Directory groups, members of a Cloud Identity or Google Workspace group are not implicitly granted permission to list other members of the same group. Instead, querying group membership generally requires explicit authorization. Usage of groups in Google Cloud Google Cloud uses a role-based access model instead of an ACL-based access model. Roles apply to all resources of a certain type that fall within a certain scope. For example, the Kubernetes Engine Developer role has full access to Kubernetes API objects inside all clusters in a given project. Due to the nature of roles, there is little need to maintain resource groups on Google Cloud, and rarely a need to synchronize resource groups to Google Cloud. Role groups and organization groups are just as relevant in Google Cloud as they are in Active Directory, because they make it easier to manage access for large numbers of users. Synchronizing role and organization groups helps maintain Active Directory as the primary place for managing access. If you consistently implement resource groups as domain local groups, and role and organization groups as global or universal groups, you can use the group type to ensure that only role and organization groups are synchronized. The question of whether it's sufficient to run a single GCDS instance per multi-domain forest or whether you need multiple GCDS instances then becomes the question of whether you use global groups. If you implement all your role and organization groups by using universal groups, a single GCDS instance is sufficient; otherwise, you'll need a GCDS instance per domain. Map group identities Mapping Active Directory security groups to Cloud Identity or Google Workspace groups requires a common identifier. In Cloud Identity and Google Workspace, this identifier must be an email address for which the domain part corresponds to a the primary, secondary, or alias domain of the Cloud Identity or Google Workspace account. Because this email address will be used throughout Google Cloud, the address must be human-readable. The email address doesn't need to correspond to a mailbox. In Active Directory, groups are identified either by their common name (cn) or by a pre–Windows 2000 logon name (sAMAccountName). Similar to user accounts, groups can also have an email address (mail), but email addresses are an optional attribute for groups, and Active Directory does not verify uniqueness. You have two options for mapping group identities between Active Directory and Cloud Identity or Google Workspace. Map by common name The advantage of using the common name (cn) is that it's guaranteed to be available and, at least within an organizational unit, unique. 
However, the common name is not an email address, so it needs a suffix @DOMAIN appended to turn into an email address. You can configure GCDS to automatically take care of appending a suffix to the group name. Because Active Directory ensures that group names and user names don't conflict, an email address that is derived this way is also unlikely to cause any conflicts. If your Active Directory forest contains more than a single domain, the following caveats apply: If two groups in different domains share a common name, the derived email address will conflict, causing one group to be ignored during synchronization. You can only synchronize groups from domains of a single forest. If you synchronize groups from multiple forests by using separate GCDS instances, the email addresses that are derived from the common name don't reflect which forest they correspond to. This ambiguity will cause a GCDS instance to delete any group that has previously been created from a different forest by another GCDS instance. You cannot map groups by common name if you use domain substitution for mapping users. Map by email address Using the email address (mail) to map group identities means you must satisfy the same criteria as when using the email address to map users: Email assignments must be complete. Although it's common for distribution groups to have an email address, security groups often lack this attribute. To use the email address for mapping identities, security groups that are subject to synchronization must have the mail field populated. The following PowerShell command can help you find accounts for which this field is not populated: Get-ADGroup -LDAPFilter "(!mail=*)" Email addresses must use a curated set of domains, all of which you own. If some of your users use email addresses that refer to partner companies or consumer email providers, you cannot use those addresses. All email addresses must be unique across the forest. Because Active Directory does not enforce uniqueness, you might have to implement custom checks or policies. If all relevant groups meet these criteria, you can identify all domains that are used by these email addresses and ensure that the list of DNS domains registered in Cloud Identity or Google Workspace covers these domains. If one of the criteria is not met, you can still map group email addresses to Cloud Identity or Google Workspace email addresses, with the following caveats: Groups without email addresses will be ignored during synchronization, as will email addresses that use domains that aren't associated with the Cloud Identity or Google Workspace account. When two groups share the same email address, only one of them will be synchronized. Mapping groups by email address is not supported if your Active Directory forest contains more than a single domain and you use domain substitution for mapping users. Map organizational units Most Active Directory domains make extensive use of organizational units to cluster and organize resources hierarchically, control access, and enforce policies. In Google Cloud, folders and projects serve a similar purpose, although the kinds of resources that are managed within a Google Cloud organization are very different from the resources that are managed in Active Directory. As a result, an appropriate Google Cloud folder hierarchy for an enterprise tends to differ significantly from the structure of organizational units in Active Directory.
Automatically mapping organizational units to folders and projects is therefore rarely practical and not supported by GCDS. Unrelated to folders, Cloud Identity and Google Workspace support the concept of organizational units. Organizational units are created to cluster and organize users, similar to Active Directory. But unlike in Active Directory, they apply only to users, not to groups. GCDS offers the option of synchronizing Active Directory organizational units to Cloud Identity or Google Workspace. In a setup where Cloud Identity is merely used to extend Active Directory identity management to Google Cloud, mapping organizational units is usually not necessary. Choose the right mapping As noted at the beginning of this document, there is no single best way to map the structures of Active Directory and Google Cloud. To help you choose the right mapping for your scenario, the following decision graphs summarize the criteria that were discussed in the previous sections. First, refer to the following chart to identify how many Cloud Identity or Google Workspace accounts, GCDS instances, and AD FS instances or fleets you will need. Then refer to the second chart to identify the domains to configure in your Cloud Identity or Google Workspace account. What's next Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external identity provider. Configure GCDS to synchronize Active Directory users and groups to Cloud Identity. Configure single sign-on between Active Directory and Google Cloud. Learn about best practices for managing super administrator accounts. Send feedback \ No newline at end of file diff --git a/Federate_with_Microsoft_Entra_ID.txt b/Federate_with_Microsoft_Entra_ID.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1dfed8aac2a626c15c96f150ab0f1b9aa1887e8 --- /dev/null +++ b/Federate_with_Microsoft_Entra_ID.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/federating-gcp-with-azure-active-directory +Date Scraped: 2025-02-23T11:55:18.429Z + +Content: +Home Docs Cloud Architecture Center Send feedback Federate Google Cloud with Microsoft Entra ID (formerly Azure AD) Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document describes how you can configure Cloud Identity or Google Workspace to use Microsoft Entra ID (formerly Azure AD) as IdP and source for identities. The document compares the logical structure of Microsoft Entra ID with the structure used by Cloud Identity and Google Workspace and describes how you can map Microsoft Entra ID tenants, domains, users, and groups. Implement federation Google Cloud uses Google identities for authentication and access management. Manually maintaining Google identities for each employee can add unnecessary management overhead when all employees already have an account in Microsoft Entra ID. By federating user identities between Google Cloud and your existing identity management system, you can automate the maintenance of Google identities and tie their lifecycle to existing users in Microsoft Entra ID. Setting up federation between Microsoft Entra ID and Cloud Identity or Google Workspace entails two pieces: Provisioning users: Relevant users and groups are synchronized periodically from Microsoft Entra ID to Cloud Identity or Google Workspace.
This process ensures that when you create a new user in Microsoft Entra ID or synchronize a new user from Active Directory to Microsoft Entra ID, it's also made available in Google Cloud so that it can be referenced in Google Cloud even before the associated user has logged in for the first time. This process also ensures that user deletions are being propagated. Provisioning works one way, which means changes in Microsoft Entra ID are replicated to Google Cloud but not vice versa. Also, provisioning doesn't include passwords. Single sign-on: Whenever a user needs to authenticate, Google Cloud delegates the authentication to Microsoft Entra ID by using the Security Assertion Markup Language (SAML) protocol. Depending on your Microsoft Entra ID configuration, Microsoft Entra ID might do one of the following: Perform authentication itself. Use pass-through authentication or password hash synchronization. Delegate authentication to an on-premises AD FS server. Having Cloud Identity or Google Workspace delegate authentication to Microsoft Entra ID not only avoids having to synchronize passwords to Google Cloud, it also ensures that any applicable policies or multi-factor authentication (MFA) mechanisms you have configured in Microsoft Entra ID or AD FS are enforced. Note: This document refers to the Google Cloud/G Suite Connector by Microsoft gallery app from the Microsoft Azure marketplace. This app is a Microsoft product and is not maintained or supported by Google. Logical structure of Microsoft Entra ID When you use Azure, you use one or more Microsoft Entra ID tenants (also referred to as directories). Microsoft Entra ID tenants are a top-level structure. They are considered administrative boundaries, and serve as containers for users, groups, as well as resources and resource groups. Each Microsoft Entra ID tenant has at least one DNS domain associated with it. This DNS domain is reflected in usernames (also referred to as User Principal Names or UPNs), which take the form of name@domain, where domain is one of the DNS domains associated with the corresponding tenant. An enterprise might maintain multiple Microsoft Entra ID tenants. Common reasons for having multiple tenants include distinguishing between different parts of the organization, separating production resources from development and testing resources, and using separate tenants to integrate multiple forests from an on-premises Active Directory. Logical structure of Google Cloud In Google Cloud, organizations serve as containers for all resources, and they can be further segmented by using folders and projects. However, except for service accounts, organizations aren't used to manage users and groups, making organizations different from Microsoft Entra ID tenants. For managing users and groups, Google Cloud relies on Cloud Identity or Google Workspace. A Cloud Identity or Google Workspace account serves as a private directory for users and groups. As an administrator of the account, you can control the lifecycle and the configuration of users and groups and define how authentication can be performed. When you create a Cloud Identity or Google Workspace account, a Google Cloud organization is created automatically for you. The Cloud Identity or Google Workspace account and the Google Cloud organization that's associated with it share the same name and are tied to each other. However, a Google Cloud organization is allowed to reference users and groups from other Cloud Identity or Google Workspace accounts. 
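Before working through the factors that follow, it can help to review what a tenant actually contains: its verified domains and the UPN and Mail values of its users. A minimal sketch, assuming the Microsoft Graph PowerShell SDK (Connect-MgGraph, Get-MgDomain, Get-MgUser) is installed and consent has been granted for the Domain.Read.All and User.Read.All scopes:

# Sign in and inspect the tenant's domains and a sample of user identifiers.
Connect-MgGraph -Scopes "Domain.Read.All","User.Read.All"

# Verified custom domains can be registered in Cloud Identity or Google Workspace;
# the initial *.onmicrosoft.com domain cannot.
Get-MgDomain | Select-Object Id, IsVerified, IsInitial

# UPN and Mail values drive the user mapping decision discussed below.
Get-MgUser -All -Property "userPrincipalName,mail" |
    Select-Object UserPrincipalName, Mail -First 20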
Integrate Microsoft Entra ID and Google Cloud Despite certain similarities between the logical structure of Microsoft Entra ID and Google Cloud, no single mapping between the two structures works equally well in all scenarios. Instead, the right approach to integrating the two systems and mapping the structure depends on multiple factors: How to map Microsoft Entra ID tenants to Cloud Identity or Google Workspace accounts How to map DNS domains How to map users How to map groups The following sections look at each of these factors. Google Cloud interacts only with Microsoft Entra ID, not with your on-premises Active Directory. So the way you've mapped forests and domains of your on-premises Active Directory to Microsoft Entra ID is of minor concern. Map Microsoft Entra ID tenants The following sections describe how to map Microsoft Entra ID tenants for these scenarios: Single tenant Multiple tenants External users Single tenant If you use only a single Microsoft Entra ID tenant, you can map the tenant to a single Cloud Identity or Google Workspace account and set up federation between the two. This Cloud Identity or Google Workspace account then provides the basis for a single Google Cloud organization that you can use to manage all Google Cloud resources. Multiple tenants In larger organizations, it's not uncommon to have more than one Microsoft Entra ID tenant. Multiple tenants might be used to differentiate between testing and production environments, or to differentiate between different parts of an organization. It's possible to map multiple Microsoft Entra ID tenants to a single Cloud Identity or Google Workspace account, and to set up user provisioning and single sign-on accordingly. However, such an N:1 mapping can weaken the isolation between Microsoft Entra ID tenants: If you grant multiple Microsoft Entra ID tenants permission to create and modify users in a single Cloud Identity or Google Workspace account, then these tenants can interfere and tamper with each other's changes. Typically, a better approach to integrate with multiple Microsoft Entra ID tenants is to create a separate Cloud Identity or Google Workspace account for each tenant and set up federation between each pair. In Google Cloud, a separate organization is created for each Cloud Identity or Google Workspace account. Because Google Cloud allows resources to be organized using projects and folders within a single organization, having more than one organization is often impractical. However, you can select one of the organizations and associate it with the other Cloud Identity or Google Workspace accounts, effectively creating an organization that is federated with multiple Active Directory forests. The other organizations remain unused. External users With Microsoft Entra ID B2B, you can invite external users as guests to your Microsoft Entra ID tenant. A new user is created for each guest and Microsoft Entra ID automatically assigns a UPN to these guest users. The UPN that Microsoft Entra ID generates uses a prefix derived from the invitee's email address, combined with the tenant's initial domain: prefix#EXT#@tenant.onmicrosoft.com. Provisioning guest users from Microsoft Entra ID to Cloud Identity or Google Workspace is subject to certain limitations: You cannot map the UPN of guest users to email addresses in Cloud Identity or Google Workspace because onmicrosoft.com is a domain that cannot be added and verified in Cloud Identity or Google Workspace. 
You therefore have to map users by using an attribute other than the UPN. If a guest user is suspended in its home tenant, then the corresponding user in Cloud Identity or Google Workspace might remain active and access to Google Cloud resources might not be properly revoked. A better way to deal with guest users that originate from a different Microsoft Entra ID tenant is to create multiple Cloud Identity or Google Workspace accounts as outlined in the previous section, each federated with a different Microsoft Entra ID tenant. For more information, see Microsoft Entra ID B2B user provisioning and single sign-on To grant an external user access to certain Google Cloud resources, it's not a prerequisite for the user to have a user account in Cloud Identity or Google Workspace. Unless restricted by policy, you can also add the external user directly as a member to the respective projects, folders, or organization, provided that the user has a Google identity. Map DNS domains DNS plays a crucial role, both for Microsoft Entra ID and for Cloud Identity and Google Workspace. The second factor to look at when planning to federate Microsoft Entra ID and Google Cloud is how you can share or map DNS domains between Microsoft Entra ID and Google Cloud. Usage of DNS domains in Google Cloud Google Sign-In, which Google Cloud relies on for authentication purposes, uses email addresses to identify users. Using email addresses not only guarantees that they are globally unique, but also enables Google Cloud to send notification messages to the addresses. Google Sign-In determines the directory or identity provider to use for authenticating a user based on the domain part of the email addresses, which follows the @. For an email address that uses the gmail.com domain, for example, Google Sign-In uses the directory of Gmail users for authentication. When you sign up for a Google Workspace or Cloud Identity account, you're creating a private directory that Sign-In can use for authentication. In the same way that the Gmail directory is associated with the gmail.com domain, Google Workspace and Cloud Identity accounts need to be associated with a custom domain. Three different kinds of domains are used: Primary domain: This domain identifies the Cloud Identity or Google Workspace account and is used as the name for the organization in Google Cloud. When signing up for Cloud Identity or Google Workspace, you must specify this domain name. Secondary domain: Along with the primary domain, you can associate other, secondary domains with a Cloud Identity or Google Workspace account. Each user in the directory is associated with either the primary domain or one of the secondary domains. Two users, johndoe@example.com and johndoe@secondary.example.com, are considered separate users if example.com is the primary domain and secondary.example.com is a secondary domain. Alias domain: An alias domain is an alternate name for another domain. That is, johndoe@example.com and johndoe@alias.example.com refer to the same user if alias.example.com is set up as an alias domain for example.com. All domains must satisfy the following requirements: They must be valid, global DNS domain names. During setup, you might need administrative access to the respective DNS zones in order to verify domain ownership. A domain, such as example.com, can refer only to a single directory. However, you can use different subdomains, such as subdomain.example.com, to refer to different directories. 
Primary and secondary domains should have a valid MX record so that messages sent to email addresses that are formed by using this domain name can be delivered properly. Usage of DNS domains in Microsoft Entra ID The way Cloud Identity and Google Workspace uses DNS is similar to how Microsoft Entra ID relies on DNS to distinguish Microsoft Entra ID tenants and associate usernames with tenants. But be aware of two notable differences: Initial domains: When you create a Microsoft Entra ID tenant, the tenant is associated with an initial domain, which is a subdomain of onmicrosoft.com. This domain is different from any custom domain name that you might register in that you don't own the domain and that you don't have administrative access to the respective DNS zone. Cloud Identity doesn't have an equivalent of an initial domain and instead requires that you own all domains that you associate with a Cloud Identity account. This requirement means that you cannot register an onmicrosoft.com subdomain as a Cloud Identity domain. Domains used in user identifiers: Microsoft Entra ID distinguishes between email addresses MOERAs (Microsoft Online Email Routing Addresses) and UPNs. Both follow an RFC 822 format (user@domain), but they serve different purposes: Where UPNs are used to identify users and are used in the sign-on form, only MOERAs are used for delivering email. The domain suffix used by UPNs is required to match one of the registered domains of your Microsoft Entra ID tenant—the same requirement does not apply to email addresses. Cloud Identity and Google Workspace doesn't rely on the concept of a UPN and instead use a user's email address as an identifier. Consequently, the domain used by the email address must match one of the registered domains of the Cloud Identity or Google Workspace account. A Microsoft Entra ID tenant and a Cloud Identity or Google Workspace account can use the same DNS domains. If using the same domains isn't possible, you can configure user provisioning and single sign-on to substitute domain names. Considering the two differences outlined above, you should base your domain mapping on how you plan to map users between Microsoft Entra ID and Cloud Identity or Google Workspace. Map users The third factor to look at when planning to federate Microsoft Entra ID and Google Cloud is how to map users between Microsoft Entra ID and Cloud Identity or Google Workspace. Successfully mapping Microsoft Entra ID users to users in Cloud Identity or Google Workspace requires two pieces of information for each user: A stable, unique ID that you can use during synchronization to track which Microsoft Entra ID user corresponds to which user in Cloud Identity or Google Workspace. An email address for which the domain part corresponds to a Cloud Identity primary, secondary, or alias domain. Because this email address will be used throughout Google Cloud, the address should be human-readable. Internally, Microsoft Entra ID identifies users by ObjectId, which is a randomly generated, globally unique ID. While this ID is stable and therefore meets the first requirement, it's meaningless to humans and doesn't follow the format of an email address. To map users, the only practical options are to map by UPN or by email address. If the user enters an email address that belongs to a Cloud Identity or Google Workspace account that is configured to use single sign-on with Microsoft Entra ID, the user is then redirected to the Microsoft Entra ID sign-on screen. 
Microsoft Entra ID uses UPNs, not email addresses, to identify users, so the sign-on screen prompts the user to provide a UPN. If the Microsoft Entra ID tenant is configured to use AD FS for sign-on, another redirect takes the user to the AD FS sign-on screen. AD FS can identify users either by their Active Directory UPN or by their Pre–Windows 2000 logon name (domain\user). If the email address used for Cloud Identity or Google Workspace, the UPN used by Microsoft Entra ID, and the UPN used by Active Directory all differ, the sequence of sign-on screens can easily become confusing for end users. In contrast, if all three identifiers are the same (for example, johndoe@example.com), then users aren't exposed to the technical differences between UPNs and email addresses, minimizing potential confusion among your users. Map by UPN From a user-experience perspective, mapping Microsoft Entra ID UPNs to Cloud Identity or Google Workspace email addresses might be considered the best option. Another benefit of relying on UPNs is that Microsoft Entra ID guarantees uniqueness, so you avoid the risk of naming clashes. However, in order to map Microsoft Entra ID UPNs to Cloud Identity email addresses, you must meet these requirements: The Microsoft Entra ID UPNs should be valid email addresses so that any notification emails sent by Google Cloud are correctly delivered to the user's mailbox. If this isn't the case already, you might be able to configure alias email addresses to ensure that the user receives such email. The UPNs of all relevant users in Microsoft Entra ID must use a custom domain as a suffix (user@custom-domain). UPNs that use the Microsoft Entra ID initial domain (user@tenant.onmicrosoft.com) cannot be mapped to email addresses in Cloud Identity, because the initial domain isn't owned by you, but by Microsoft. All custom domains used by Microsoft Entra ID for UPNs must be registered in Cloud Identity as well. This requirement means that you must have access to the respective DNS zones in order to perform domain validation. To avoid overriding existing TXT DNS records you might have created for verifying domain ownership in Microsoft Entra ID, you can use a CNAME record to verify domain ownership in Cloud Identity. Map users by email address If mapping Microsoft Entra ID UPNs to Cloud Identity or Google Workspace email addresses isn't an option, you can map users by email address. When you specify an email address in Active Directory, it's stored in the mail attribute of the respective user object, and Microsoft Entra ID Connect synchronizes the value to the Mail attribute in Microsoft Entra ID. Microsoft Entra ID also makes the attribute available for user provisioning so that you can map it to the email address in Cloud Identity or Google Workspace. Crucially, the Microsoft Entra ID Mail attribute currently isn't shown in the Azure portal and can only be viewed and edited through APIs or PowerShell. Although the user interface lets you specify an email address and an alternate email address, neither of these values is stored in the Mail attribute, so they can't be used for provisioning to Cloud Identity or Google Workspace. As a result, mapping users by email address is practical only when you do most of your user creation and editing in Active Directory, not in Microsoft Entra ID. Mapping users by email address requires that you meet the following additional requirements: Email assignments must be complete.
All users that are subject to synchronization must be assigned an email address. Especially when users are synchronized from an on-premises Active Directory, this might not be the case because mail is an optional user attribute in Active Directory. Users that lack an email address in the Microsoft Entra ID Mail attribute are silently ignored during synchronization. Email addresses must be unique across the Microsoft Entra ID tenant. Although Active Directory doesn't check uniqueness on user creation, Microsoft Entra ID Connect detects collisions by default, which might cause the synchronization of affected users to fail. The domains used by email addresses don't need to be registered in Microsoft Entra ID, but they must be registered in Cloud Identity or Google Workspace. This requirement means that you must have access to the respective DNS zones in order to perform domain validation, and it rules out the use of email addresses with domains that you don't own (like gmail.com). Map the user lifecycle After you've defined a mapping between Microsoft Entra ID users and users in Cloud Identity or Google Workspace, you must decide which set of users you want to provision. Refer to our best practices on mapping identity sets for guidance on making this decision. If you don't want to provision all users of your Microsoft Entra ID tenant, you can enable provisioning for a subset of users by using user assignments or scoping filters. The following table summarizes the default behavior of Microsoft Entra ID provisioning, and shows how enabling or disabling provisioning for a user controls which actions Microsoft Entra ID will perform in Cloud Identity or Google Workspace. Provisioning enabled for Microsoft Entra ID user User state in Cloud Identity or Google Workspace Action performed in Microsoft Entra ID Action performed in Cloud Identity or Google Workspace No (does not exist) Enable provisioning Create new user (*) No Exists (active) Enable provisioning Update existing user No Exists (suspended) Enable provisioning Update existing user, keep suspended Yes Exists Modify user Update existing user Yes Exists Rename UPN/email address Change primary email address, keep previous address as alias Yes Exists Disable provisioning Suspend user Yes Exists Soft-delete user Suspend user (*) If a consumer account with the same email address exists, the consumer account is evicted. Map groups The fourth factor to look at when you are planning to federate Microsoft Entra ID and Google Cloud is whether to synchronize groups between Microsoft Entra ID and Google Cloud and how to map them. In Google Cloud, groups are commonly used as a way to manage access efficiently across projects. Rather than assigning individual users to IAM roles in each project, you define a set of groups that model common roles in your organization. You then assign those groups to a set of IAM roles. By modifying the membership of the groups, you can control users' access to an entire set of resources. Mapping groups between Microsoft Entra ID and Google Cloud is optional. Once you've set up user provisioning, you can create and manage groups directly in Cloud Identity or Google Workspace, which means that Active Directory or Microsoft Entra ID remains the central system for identity management but not for Google Cloud access management. 
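In IAM, a group is always referenced by its email address, whether the group is managed directly in Cloud Identity or Google Workspace or synchronized from Microsoft Entra ID. As a minimal sketch (assuming the google-api-python-client package, Application Default Credentials, and placeholder project, group, and role names), granting a role to such a group on a project could look like the following:

# Minimal sketch: grant an IAM role to a group on a project.
# Assumes Application Default Credentials with permission to change IAM policies
# and the google-api-python-client package. All names below are placeholders.
import google.auth
from googleapiclient.discovery import build

PROJECT_ID = "my-example-project"                    # placeholder
GROUP_EMAIL = "gcp-project-viewers@example.com"      # synchronized group (placeholder)
ROLE = "roles/viewer"                                # role to grant (placeholder)

credentials, _ = google.auth.default()
crm = build("cloudresourcemanager", "v1", credentials=credentials)

# Read the current IAM policy for the project.
policy = crm.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()

# Add the group to the binding for the role, creating the binding if needed.
member = f"group:{GROUP_EMAIL}"
binding = next((b for b in policy.get("bindings", []) if b["role"] == ROLE), None)
if binding is None:
    policy.setdefault("bindings", []).append({"role": ROLE, "members": [member]})
elif member not in binding["members"]:
    binding["members"].append(member)

# Write the updated policy back to the project.
crm.projects().setIamPolicy(resource=PROJECT_ID, body={"policy": policy}).execute()

After such a binding exists, and once group synchronization is in place, changing the group's membership in Microsoft Entra ID is enough to change who effectively holds the role; no further IAM changes are needed.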
To maintain Active Directory or Microsoft Entra ID as the central system for identity management and access management, we recommend that you synchronize groups from Microsoft Entra ID instead of managing them in Cloud Identity or Google Workspace. With this approach, you can set up IAM so that you can use group memberships in Active Directory or Microsoft Entra ID to control who has access to resources in Google Cloud. Groups in Cloud Identity and Google Workspace In Cloud Identity and Google Workspace, there is only a single type of group. Groups in Cloud Identity and Google Workspace aren't confined to the scope of the Cloud Identity or Google Workspace account where they were defined. Instead, they can include users from different Cloud Identity or Google Workspace accounts, support being referenced and nested in other accounts, and be used across any Google Cloud organization. As they do with users, Cloud Identity and Google Workspace identify groups by email address. The email address doesn't have to correspond to an actual mailbox, but it must use one of the domains registered for the respective Cloud Identity account. When working with groups in IAM, you often need to specify groups by email address rather than by name. So it's best for groups to have not only a meaningful name, but a meaningful and recognizable email address. Groups in Microsoft Entra ID Microsoft Entra ID supports multiple types of groups, each catering to different use cases. Groups are scoped to a single tenant and identified by an Object ID, which is a randomly generated, globally unique ID. Groups can be nested and can contain either users from the same tenant or users invited from other tenants using Azure B2B. Microsoft Entra ID also supports dynamic groups, whose members are determined automatically based on a query. In the context of integrating Microsoft Entra ID with Cloud Identity or Google Workspace, two properties of groups are of primary interest—whether a group is mail-enabled and whether it is security-enabled: A security-enabled group can be used to manage access to resources in Microsoft Entra ID. Any security-enabled group is therefore potentially relevant in the context of Google Cloud. A mail-enabled group has an email address, which is relevant because Cloud Identity and Google Workspace require groups to be identified by an email address. In Microsoft Entra ID, you can create groups of type Security or Office 365 (sometimes called unified groups). When synchronizing groups from an on-premises Active Directory or when using the Office 365 type, you can also create groups of type Distribution list. The following table summarizes the differences between these different kinds of groups regarding being mail-enabled or security-enabled, and how they map to Active Directory group types, assuming a default Microsoft Entra ID Connect configuration: Source Active Directory group type Microsoft Entra ID group type Mail-enabled Security-enabled Microsoft Entra ID - Office 365 group Always Optional Microsoft Entra ID - Security group Optional Always on-premises Security group (with email) Security group Yes Yes on-premises Security group (without email) Security group No Yes on-premises Distribution list (with email) Distribution list Yes No on-premises Distribution list (without email) (ignored by Microsoft Entra ID Connect) Unlike for users, Microsoft Entra ID requires that email addresses assigned to groups use a domain that has been registered as a custom domain in Microsoft Entra ID. 
This requirement results in the following default behavior: If a group in Active Directory uses an email address that uses a domain that has previously been registered in Microsoft Entra ID, then the email address is properly maintained during synchronization to Microsoft Entra ID. If a group in Active Directory uses an email address that has not been registered in Microsoft Entra ID, then Microsoft Entra ID auto-generates a new email address during synchronization. This email address uses the tenant's default domain. If your tenant uses the initial domain as the default domain, the resulting email address will be in the form of [name]@[tenant].onmicrosoft.com. If you create an Office 365 group in Microsoft Entra ID, then Microsoft Entra ID also auto-generates an email address that uses the tenant's default domain. Map group identities Successfully mapping Microsoft Entra ID groups to Cloud Identity or Google Workspace groups requires a common identifier, and this identifier must be an email address. On the Microsoft Entra ID side, this requirement leaves you with two options: You can use the email address of a group in Microsoft Entra ID and map it to a Cloud Identity or Google Workspace email address. You can derive an email address from the name of the group in Microsoft Entra ID and map the resulting address to an email address in Cloud Identity or Google Workspace. Map by email address Mapping groups by email address is the most obvious choice, yet it requires you to meet several requirements: All groups that are subject to synchronization must have an email address. In practice, this means that you can only synchronize mail-enabled groups. Groups that lack an email address are ignored during provisioning. The email addresses must be unique across the tenant. Because Microsoft Entra ID doesn't enforce uniqueness, you might have to implement custom checks or policies. The domains used by email addresses must be registered in both Microsoft Entra ID and Cloud Identity or Google Workspace. Any groups with email addresses that use domains not registered in Cloud Identity or Google Workspace, including the tenant.onmicrosoft.com domain, will fail to synchronize. If all relevant groups meet these criteria, identify the domains that are used by these email addresses and ensure that the list of DNS domains registered in Cloud Identity or Google Workspace covers these domains. Map by name Meeting the criteria required to map groups by email address can be challenging, particularly if many of the security groups you intend to provision to Cloud Identity or Google Workspace aren't mail-enabled. In this case, it might be better to automatically derive an email address from the group's display name. Deriving an email address presents two challenges: An email address derived from a group's name might clash with an email address of a user. Microsoft Entra ID doesn't enforce uniqueness for group names. If multiple groups in your Microsoft Entra ID tenant share the same name, email addresses derived from this name will clash, causing only one group to synchronize successfully. You can overcome the first challenge by using a domain for the generated email address that is different than any of the domains used by users. For example, if all your users use email addresses with example.com as the domain, then you could use groups.example.com for all groups. 
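As an illustration of this approach (the sanitization rules and the groups.example.com subdomain below are assumptions, not a prescribed naming scheme), deriving a candidate group address from a display name could look like this:

# Illustrative sketch: derive a candidate group email address from a
# Microsoft Entra ID group display name, using a dedicated subdomain so that
# generated addresses can't clash with user addresses. The subdomain and the
# sanitization rules are assumptions, not a prescribed naming scheme.
import re

GROUP_DOMAIN = "groups.example.com"  # dedicated subdomain registered in Cloud Identity

def derive_group_email(display_name: str) -> str:
    local_part = display_name.strip().lower()
    local_part = re.sub(r"\s+", "-", local_part)           # spaces to hyphens
    local_part = re.sub(r"[^a-z0-9._-]", "", local_part)   # drop characters not valid in an address
    local_part = re.sub(r"-{2,}", "-", local_part)         # collapse repeated hyphens
    return f"{local_part}@{GROUP_DOMAIN}"

print(derive_group_email("Finance Admins"))   # finance-admins@groups.example.com
print(derive_group_email("SRE / On-Call"))    # sre-on-call@groups.example.com

Whatever rules you choose, apply them consistently so that the derived address stays stable across synchronization runs, because the email address is the identifier that ties the Microsoft Entra ID group to its counterpart in Cloud Identity or Google Workspace.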
Registering subdomains in Cloud Identity or Google Workspace doesn't require domain verification, so the DNS zone groups.example.com doesn't even have to exist. You can overcome the second challenge, duplicate group names, only technically by deriving the group email address from the Object ID. Because the resulting group names are rather cryptic and difficult to work with, it's better to identify and rename duplicate group names in Microsoft Entra ID before setting up provisioning to Cloud Identity or Google Workspace. Map the group lifecycle After you've defined a mapping between Microsoft Entra ID groups and groups in Cloud Identity or Google Workspace, you must decide which set of groups you want to provision. Similar to users, you can enable provisioning for a subset of groups by using group assignments or scoping filters. The following table summarizes the default behavior of Microsoft Entra ID provisioning, and shows how enabling or disabling provisioning for a group controls which actions Microsoft Entra ID will perform in Cloud Identity or Google Workspace. Provisioning enabled for Microsoft Entra ID group Group state in Cloud Identity or Google Workspace Action performed in Microsoft Entra ID Action performed in Cloud Identity or Google Workspace No (does not exist) Enable provisioning Create new group No Exists Enable provisioning Add member, retain all existing members Yes Exists Rename group Rename group Yes Exists Modify description Update description Yes Exists Add member Add member, retain all existing members Yes Exists Remove member Remove member Yes Exists Disable provisioning Group left as-is (incl. members) Yes Exists Delete group Group left as-is (incl. members) Note: When you delete a group in Microsoft Entra ID, Microsoft Entra ID doesn't propagate the deletion to Cloud Identity or Google Workspace. Similarly, if you enable provisioning for a group and later disable provisioning, the group in Cloud Identity or Google Workspace remains active. This behavior can result in users inadvertently retaining access rights in Google Cloud. To prevent this from happening, remove all members from a group first, wait for changes to be propagated, and only then delete the group in Microsoft Entra ID or disable provisioning for the group in Microsoft Entra ID. What's next Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external identity provider. Learn how to configure provisioning and single sign-on between Microsoft Entra ID and Cloud Identity. Learn about best practices for managing super administrator accounts. Send feedback \ No newline at end of file diff --git a/File_storage_on_Compute_Engine.txt b/File_storage_on_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2f491e041478434c990315c2bc013dcb1e49078 --- /dev/null +++ b/File_storage_on_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/filers-on-compute-engine +Date Scraped: 2025-02-23T11:57:01.406Z + +Content: +Home Docs Cloud Architecture Center Send feedback File storage on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-07 UTC File storage, also known as network-attached storage (NAS), provides file-level access to applications to read and update information that can be shared across multiple machines. 
Some on-premises file storage solutions have a scale-up architecture and simply add storage to a fixed amount of compute resources. Other file storage solutions have a scale-out architecture where capacity and compute (performance) can be incrementally added to an existing file system as needed. In both storage architectures, one or multiple virtual machines (VMs) can access the storage. Although some file systems use a native POSIX client, many storage systems use a protocol that enables client machines to mount a file system and access the files as if they were hosted locally. The most common protocols for exporting file shares are Network File System (NFS) for Linux (and in some cases Windows) and Server Message Block (SMB) for Windows. This document describes the following options for sharing files: Compute Engine Persistent Disk and local SSDs Managed solutions: Filestore Basic Filestore Zonal Filestore Regional Google Cloud NetApp Volumes Partner solutions in Google Cloud Marketplace: NetApp Cloud Volumes ONTAP DDN EXAScaler Cloud Nasuni Cloud File Storage Sycomp Storage Fueled by IBM Storage Scale An underlying factor in the performance and predictability of all of the Google Cloud services is the network stack that Google evolved over many years. With the Jupiter Fabric, Google built a robust, scalable, and stable networking stack that can continue to evolve without affecting your workloads. As Google improves and bolsters its network abilities internally, your file-sharing solution benefits from the added performance. One feature of Google Cloud that can help you get the most out of your investment is the ability to specify Custom VM types. When choosing the size of your filer, you can pick exactly the right mix of memory and CPU, so that your filer is operating at optimal performance without being oversubscribed. Furthermore, it is important to choose the correct Compute Engine persistent disk capacity and number of vCPUs to ensure that your file server's storage devices receive the required storage bandwidth and IOPs as well as network bandwidth. A VM receives 2 Gbps of network throughput for every vCPU (up to the max). For tuning persistent disk, see Optimizing persistent disk and local SSD performance. Note that Cloud Storage is also a great way to store petabytes of data with high levels of redundancy at a low cost, but Cloud Storage has a different performance profile and API than the file servers discussed here. Summary of file-server solutions The following table summarizes the file-server solutions and features: Solution Optimal dataset Throughput Managed support Export protocols Filestore Basic 1 TiB to 64 TiB Up to 1.2 GiB/s Fully managed by Google NFSv3 Filestore Zonal 1 TiB to 100 TiB Up to 26 GiB/s Fully managed by Google NFSv4.1 Filestore Regional 1 TiB to 100 TiB Up to 26 GiB/s Fully managed by Google NFSv4.1 Google Cloud NetApp Volumes 1 GiB to 100 TiB MBs/s to 4.5 GiB/s Fully managed by Google NFSv3, NFSv4.1, SMB2, SMB3 NetApp Cloud Volumes ONTAP 1 GiB to 1 PiB varies Customer-managed NFSv3, NFSv4.1, SMB2, SMB3, iSCSI Nasuni 10s of TB to > 1 PB Up to 1.2 GBps Nasuni- and customer-managed NFSv3, NFSv4, NFSv4.1, NFSv4.2, SMB2, SMB3 Read-only Persistent Disk < 64 TB 240 to 1,200 MBps No Direct attachment Persistent Disk and local SSD If you have data that only needs to be accessed by a single VM or doesn't change over time, you might use Compute Engine Persistent Disk volumes, and avoid a file server altogether. 
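As a brief sketch of this option (assuming the google-cloud-compute Python client library and placeholder project, zone, disk, and instance names), attaching an existing disk in read-only mode to several instances could look like the following; the read-only sharing pattern itself is described next.

# Minimal sketch, assuming the google-cloud-compute client library and
# Application Default Credentials. Project, zone, disk, and instance names
# are placeholders. The disk is attached in read-only mode so that many
# instances can share the same data.
from google.cloud import compute_v1

PROJECT = "my-example-project"
ZONE = "us-central1-a"
DISK = "shared-reference-data"
INSTANCES = ["render-node-1", "render-node-2", "render-node-3"]

instances_client = compute_v1.InstancesClient()
disk_url = f"projects/{PROJECT}/zones/{ZONE}/disks/{DISK}"

for instance in INSTANCES:
    attached_disk = compute_v1.AttachedDisk(
        source=disk_url,
        mode="READ_ONLY",   # lets the same disk be attached to many VMs at once
        auto_delete=False,
        device_name=DISK,
    )
    operation = instances_client.attach_disk(
        project=PROJECT,
        zone=ZONE,
        instance=instance,
        attached_disk_resource=attached_disk,
    )
    operation.result()  # wait for the attach operation to complete
    print(f"Attached {DISK} read-only to {instance}")

Each VM still needs to mount the attached device from within the guest operating system, typically read-only, before applications can read the data.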
With persistent disks, you can format them with a file system such as Ext4 or XFS and attach volumes in either read-write or read-only mode. This means that you can first attach a volume to an instance, load it with the data you need, and then attach it as a read-only disk to hundreds of VMs simultaneously. Employing read-only persistent disks does not work for all use cases, but it can greatly reduce complexity, compared to using a file server. Persistent disks deliver consistent performance. All disks of the same size (and for SSD persistent disks, the same number of vCPUs) that you attach to your instance have the same performance characteristics. You don't need to pre-warm or test your persistent disks before using them in production. The cost of persistent disks is simple to determine because there are no I/O costs to consider after provisioning your volume. Persistent disks can also be resized when required. This lets you start with a low-cost and low-capacity volume, and you need not create additional instances or disks to scale your capacity. If total storage capacity is the main requirement, you can use low-cost standard persistent disks. For the best performance while continuing to be durable, you can use SSD persistent disks. If your data is ephemeral and requires sub-millisecond latency and high I/O operations per second (IOPS), you can take advantage of up to 9 TB of local SSDs for extreme performance. Local SSDs provide GBps of bandwidth and millions of IOPS, all while not using up your instances' allotted network bandwidth. It is important to remember though that local SSDs have certain trade-offs in availability, durability, and flexibility. For a comparison of the many disk types available to Compute Engine instances, see the documentation for block storage. Considerations when choosing a file storage solution Choosing a file storage solution requires you to make tradeoffs regarding manageability, cost, performance, and scalability. Making the decision is easier if you have a well-defined workload, which isn't often the case. Where workloads evolve over time or are highly variant, it's prudent to trade cost savings for flexibility and elasticity, so you can grow into your solution. On the other hand, if you have a temporal and well-known workload, you can create a purpose-built file storage architecture that you can tear down and rebuild to meet your immediate storage needs. One of the first decisions to make is whether you want to pay for a managed storage service, a solution that includes product support, or an unsupported solution. Managed file storage services are the easiest to operate, because either Google or a partner is handling all operations. These services might even provide a service level agreement (SLA) for availability like most other Google Cloud services. Unmanaged, yet supported, solutions provide additional flexibility. Partners can help with any issues, but the day-to-day operation of the storage solution is left to the user. Unsupported solutions require the most effort to deploy and maintain, leaving all issues to the user. These solutions are not covered in this document. Your next decision involves determining the solution's durability and availability requirements. Most file solutions are zonal solutions and don't provide protection by default if the zone fails. So it's important to consider if a disaster recovery (DR) solution that protects against zonal failures is required. 
It's also important to understand the application requirements for durability and availability. For example, the choice of local SSDs or persistent disks in your deployment has a big impact, as does the configuration of the file solution software. Each solution requires careful planning to achieve high durability, availability, and even protection against zonal and regional failures. Finally, consider the locations (that is, zones, regions, on-premises data centers) where you need to access the data. The locations of the compute farms that access your data influence your choice of filer solution because only some solutions allow hybrid on-premises and in-cloud access. Managed file storage solutions This section describes the Google-managed solutions for file storage. Filestore Basic Filestore is Google's fully managed NAS solution. Filestore Basic instances are suitable for file sharing, software development, and GKE workloads. You can choose either HDD or SSD for storing data. SSD provides better performance. With either option, capacity scales up incrementally, and you can protect the data by using backups. Filestore Zonal Filestore Zonal simplifies enterprise storage and data management on Google Cloud and across hybrid clouds. Filestore Zonal delivers cost-effective, high-performance parallel access to global data while maintaining strict consistency powered by a dynamically scalable, distributed file system. With Filestore Zonal, existing NFS applications and NAS workflows can run in the cloud without requiring refactoring, yet retain the benefits of enterprise data services (for example, snapshots and backups). The Filestore CSI driver allows seamless data persistence, portability, and sharing for containerized workloads. You can scale Filestore Zonal instances on demand. This lets you create and expand file system infrastructure when required, ensuring that storage performance and capacity always align with your dynamic workflow requirements. As a Filestore Zonal cluster expands, both metadata and I/O performance scale linearly. This scaling lets you enhance and accelerate a broad range of data-intensive workflows, including high performance computing, analytics, cross-site data aggregation, DevOps, and many more. As a result, Filestore Zonal is a great fit for use in data-centric industries such as life sciences (for example, genome sequencing), financial services, and media and entertainment. Filestore Regional Filestore Regional is a fully managed cloud-native NFS solution that lets you deploy critical file- based applications in Google Cloud, backed by an SLA that delivers 99.99% regional availability. With a 99.99% regional-availability SLA, Filestore Regional is designed for applications that demand high availability. With a few mouse clicks (or a few gcloud commands or API calls), you can provision NFS shares that are synchronously replicated across three zones within a region. If any zone within the region becomes unavailable, Filestore Regional continues to transparently serve data to the application with no operational intervention. To further protect critical data, Filestore also lets you take and keep periodic snapshots of the file system. With Filestore, you can recover an individual file or an entire file system in less than 10 minutes from any of the prior recovery points. For critical applications like SAP, both the database and application tiers need to be highly available. 
To satisfy this requirement, you can deploy the SAP database tier to Google Cloud Hyperdisk Extreme, in multiple zones using built-in database high availability. Similarly, the NetWeaver application tier, which requires shared executables across many VMs, can be deployed to Filestore Regional, which replicates the Netweaver data across multiple zones within a region. The end result is a highly available three-tier mission-critical application architecture. Note: For more information about region-specific considerations, see Geography and regions. IT organizations are also increasingly deploying stateful applications in containers on Google Kubernetes Engine (GKE). This often causes them to rethink which storage infrastructure to use to support those applications. You can use block storage (Persistent Disk), file storage (Filestore Basic, Zonal, or Regional), or object storage (Cloud Storage). Filestore multishares for GKE combined with the Filestore CSI driver lets organizations that require multiple GKE Pods have shared file access, providing an increased level of availability for mission-critical workloads. NetApp Volumes NetApp Volumes is a fully managed Google service that lets you quickly mount shared file storage to your Google Cloud compute instances. NetApp Volumes supports SMB, NFS, and multi-protocol access. NetApp Volumes delivers high performance to your applications at low latency, with robust data-protection capabilities: snapshots, copies, cross-region replication, and backup. The service is suitable for applications requiring both sequential and random workloads, which can scale across hundreds or thousands of Compute Engine instances. In seconds, volumes that range in size from 100 GiB to 100 TiB can be provisioned and protected with robust data protection capabilities. With three service levels (Standard, Premium, and Extreme) that you can change on demand, NetApp Volumes delivers the appropriate performance for your workload, without affecting availability. For information about the Google Cloud locations where NetApp Volumes is available, see NetApp Volumes locations. Partner solutions in Cloud Marketplace The following partner-provided solutions are available in Cloud Marketplace. NetApp Cloud Volumes ONTAP NetApp Cloud Volumes ONTAP (NetApp CVO) is a customer-managed, cloud-based solution that brings the full feature set of ONTAP, NetApp's leading data management operating system, to Google Cloud. NetApp CVO is deployed within your VPC, with billing and support from Google. The ONTAP software runs on a Compute Engine VM, and uses a combination of persistent disks and Cloud Storage buckets (if tiering is enabled) to store the NAS data. The built-in filer accommodates the NAS volumes using thin provisioning so that you pay only for the storage you use. As the data grows, additional persistent disks are added to the aggregate capacity pool. NetApp CVO abstracts the underlying infrastructure and let you create virtual data volumes carved out of the aggregate pool that are consistent with all other ONTAP volumes on any cloud or on-premises environment. The data volumes you create support all versions of NFS, SMB, multi-protocol NFS/SMB, and iSCSI. They support a broad range of file-based workloads, including web and rich media content, used across many industries such as electronic design automation (EDA) and media and entertainment. 
NetApp CVO supports instant, space-saving point-in-time snapshots, built-in block-level, incremental forever backup to Cloud Storage and cross-region asynchronous replication for disaster recovery. The option to select the type of Compute Engine instance and persistent disks lets you achieve the performance you want for your workloads. Even when operating in a high-performance configuration, NetApp CVO implements storage efficiencies such as deduplication, compaction, and compressions as well as auto-tiering infrequently-used data to the Cloud Storage bucket enabling you to store petabytes of data while significantly reducing overall storage costs. DDN EXAScaler Cloud DDN's EXAScaler Cloud platform is an industry leading parallel shared file solution for high-performance data processing and for managing the large volumes of data required to support AI, HPC, and analytics workloads. Life sciences, energy, autonomous vehicles, financial services, and other data-intensive customers can take advantage of EXAScaler Cloud for AI and analytics in the cloud to maximize their return from Google Cloud resources and create agile workflows with cloud bursting and long-term data retention. Ideal uses of EXAScaler Cloud include deep learning and inference AI applications, hybrid-cloud architectures for cloud bursting to take advantage of on-demand high-performance processing, and as a repository to hold longer-term assets from an on-premises EXAScaler deployment. The cloud-based EXAScaler is simple to deploy and takes advantage of DDN's parallel file system, which powers over two thirds of the top 100 supercomputers. EXAScaler Cloud is designed to optimize data-intensive cloud workloads to reduce time to insight by reducing I/O contention and delivering resilient access to shared storage for a large number of clients. EXAScaler Cloud optimizes the whole environment for high performance from application through to the storage devices, including the network and the compute instances themselves. With flexible configurations, EXAScaler Cloud is useful for high-performance scratch workloads, more persistent IOPS or throughput-oriented applications, and even long-term persistent data. By mimicking on-premises architectures in the cloud, customers can transition workloads seamlessly, helping minimize end-user application disruption as workloads move. DDN EXAScaler Cloud handles scalable workloads and is backed with the expertise learned supporting the largest data environments in the world. With premium support options, customers get the same expert support experience on-premises and in the cloud. For more information, see the following: DDN for Google Cloud web page Parallel file systems for HPC workloads Nasuni Cloud File Storage Nasuni replaces enterprise file servers and NAS devices and all associated infrastructures, including backup and DR hardware, with a simpler, low-cost cloud alternative. Nasuni uses Google Cloud object storage to deliver a more efficient software-as-a-service (SaaS) storage solution that scales to handle rapid, unstructured file data growth. Nasuni is designed to handle department, project, and organizational file shares and application workflows for every employee, wherever they work. Nasuni offers three packages, with pricing for companies and organizations of all sizes so they can grow and expand as needed. Its benefits include the following: Cloud-based primary file storage for up to 70% less. 
Nasuni's architecture takes advantage of built-in object lifecycle management policies. These policies allow complete flexibility for use with Cloud Storage classes, including Standard, Nearline, Coldline, and Archive. By using the immediate-access Archive class for primary storage with Nasuni, you can realize cost savings of up to 70%. Departmental and organizational file shares in the cloud. Nasuni's cloud-based architecture offers a single global namespace across Google Cloud regions, with no limits on the number of files, file sizes, or snapshots, letting you store files directly from your desktop into Google Cloud through standard NAS (SMB) drive-mapping protocols. Built-in backup and disaster recovery. Nasuni's "set-it and forget-it" operations make it simple to manage global file storage. Backup and DR is included, and a single management console lets you oversee and control the environment anywhere, anytime. Replaces aging file servers. Nasuni makes it simple to migrate Microsoft Windows file servers and other existing file storage systems to Google Cloud, reducing costs and management complexity of these environments. For more information, see the following: Nasuni guided tour Nasuni and Google Cloud partnership Nasuni Enterprise File Storage for Google Cloud solution brief (PDF) Nasuni Cloud File Storage in Cloud Marketplace Nasuni and Google Cloud blog Sycomp Storage Fueled by IBM Storage Scale Sycomp Storage Fueled by IBM Storage Scale in Google Cloud Marketplace lets you run your high performance computing (HPC), artificial intelligence (AI), machine learning (ML), and big data workloads in Google Cloud. With Sycomp Storage you can concurrently access data from thousands of VMs, reduce costs by automatically managing tiers of storage, and run your application on-premises or in Google Cloud. Sycomp Storage Fueled by IBM Storage Scale is available in Cloud Marketplace, can be quickly deployed, and supports access to your data through NFS and the IBM Storage Scale client. IBM Storage Scale is a parallel file system that helps to securely manage large volumes (PBs) of data. Sycomp Storage Scale is a parallel file system that's well suited for HPC, AI, ML, big data, and other applications that require a POSIX-compliant shared file system. With adaptable storage capacity and performance scaling, Sycomp Storage can support small to large HPC, AI, and ML workloads. After you deploy a cluster in Google Cloud, you decide how you want to use it. Choose whether you want to use the cluster only in the cloud or in hybrid mode by connecting to existing on-premises IBM Storage Scale clusters, third-party NFS NAS solutions, or other object-based storage solutions. 
For more information, see the following: IBM Storage Scale is now available in Google Cloud Sycomp Storage Fueled by IBM Storage Scale on Google Cloud Sycomp Storage Fueled by IBM Storage Scale in Cloud Marketplace ContributorsAuthor: Sean Derrington | Group Outbound Product Manager, StorageOther contributors: Dean Hildebrand | Technical Director, Office of the CTOKumar Dhanagopal | Cross-Product Solution Developer Send feedback \ No newline at end of file diff --git a/Filestore.txt b/Filestore.txt new file mode 100644 index 0000000000000000000000000000000000000000..96e7d39d7541d69bd7ae0cd3fe2eac56f8930351 --- /dev/null +++ b/Filestore.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/filestore +Date Scraped: 2025-02-23T12:09:52.324Z + +Content: +Tech leaders: Get an insider view of Google Cloud’s App Dev and Infrastructure solutions on Oct 30 at 9AM PT. Save your seat.Jump to FilestoreFilestoreHigh-performance, fully managed file storage.Go to consoleView documentationLearn to perform Filestore operations with the quickstart guideHear how Sabre intends to use Filestore for SAP and their business-critical appsExplore the latest news, articles, and videos for Filestore99.99% regional availability SLA 4:35 What is Filestore Enterprise?BenefitsExpedite your migration to cloudFilestore enables application migration to cloud without requiring you to rewrite or rearchitect, accelerating and simplifying your migration. Simple to manageDeploy Filestore instances easily, from the console, gCloud CLI, or using APIs. Spend less time configuring and monitoring your file storage, and more time focused on driving value for your business.Scale capacity up or down as you needPay for what you use, not what you don't. Automatically scale capacity up and down based on the demand of your applications.Key featuresFilestore meets the needs of the most demanding applications Scales to meet needs of high performance workloadsFilestore offers low latency storage operations for applications. For workloads that are latency sensitive, like high performance computing, data analytics, or other metadata intensive applications, Filestore supports capacity up to 100 TB and throughput of 25 GB/s and 920K IOPS.99.99% regional availability SLA supports enterprise appsFilestore Enterprise is built for critical applications (e.g., SAP) requiring regional availability to ensure the applications are unphased in a zonal outage. Avoid the need to rewrite your applications, and jump start your migration to the cloud. Protect your data with backups and snapshotsFilestore offers instantaneous backups and snapshots to help you protect your data easily. Back up data and metadata of the file share, set up a regular backup schedule, or take snapshots of your instances anytime you need. When it comes to recovering your data, recover some or all of your data from a prior snapshot recovery point in 10 minutes or less.Support GKE workloads with FilestoreFor apps running in GKE that require file storage, the fully managed NFS solution supports stateful and stateless applications. With an integrated and managed GKE Container Storage Interface (CSI) driver, multiple pods can have shared file system access to the same data. Scale GCVE datastore capacity independently from computeFilestore Zonal and Filestore Enterprise are VMware-certified as an NFS datastores with Google Cloud VMware Engine. Right-size vCPUs and storage capacity independently to meet compute and storage requirements for your storage intensive VMs. 
Leverage vSAN for low latency VM requirements and scale Filestore from TBs to PBs for the capacity-hungry VMs. Our team is looking forward to working with our customers to leverage Filestore Enterprise for their GKE workloads. The 99.99% regional availability is extremely valuable to our customers looking to run mission critical applications in Google Cloud.Miles Ward, CTO, SADARead the blogWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoStorage best practices for data analytics, GKE, and critical applicationsWatch videoBlog postAnnouncing Filestore EnterpriseLearn moreBlog postFilestore Backups eases migration of file-based apps to cloudLearn moreBlog postSpeeding up, scaling out: Filestore now supports high performanceRead the blogBlog postAnnouncement: Google to acquire ElastifileRead the blogDocumentationDocumentationQuickstartUsing the Google Cloud ConsoleLearn how to create, mount, and delete a Filestore instance using the Google Cloud Console.Learn moreTutorialCreating Filestore Persistent Volumes for GKEAll Filestore tiers can be utilized with GKE through the CSI driver, including Filestore Enterprise and the multishare feature.Learn moreQuickstartUsing the gcloud command-line toolLearn how to create, mount, and delete a Filestore instance using the gcloud command-line tool.Learn moreTutorial Copying dataUse gsutil to copy data between a Filestore instance and a Cloud Storage bucket. If migrating file systems, leverage our file transfer service.Learn moreTutorialUsing Filestore datastores with Google Cloud VMware EngineLearn how to create Filestore shares for GCVE datastores, augmenting vSAN storage in the cluster.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notes Read about the latest releases for FilestoreUse casesUse casesUse case Enterprise application migrations (SAP) Many on-premises applications require a file system interface. We've made it easy to migrate your enterprise applications to the cloud with our fully managed storage service. Filestore Enterprise is built for critical applications requiring regional availability and having unstructured NFS data requirements.Use caseFinancial services and technologyQuantitative researchers and modelers require ready access to powerful compute and storage resources to deliver useful, time-to-value trading insights. Filestore Zonal is built for high performance computing (HPC) applications to access, sort, process, model, and deliver the right information to the right decision-maker at the right time.Use caseGoogle Cloud VMware EngineUtilize Filestore as NFS datastores shares with GCVE to scale NFS storage alongside vSAN storage with your VMware clusters. Use case Data analyticsCompute complex financial models or analyze environmental data with Filestore. As capacity or performance needs change, easily grow or shrink your instances as needed. As a persistent and shareable storage layer, Filestore enables immediate access to data for high-performance, smart analytics without the need to lose valuable time on loading and off-loading data to clients’ drives.Use case Genomics processingGenome sequencing requires an incredible amount of raw data, on the order of billions of data points per person. 
This type of analysis requires speed, scalability, and security. Filestore meets the needs of companies and research institutions performing scientific research, while also offering predictable prices for the performance.Use caseWeb content managementWeb developers and large hosting providers rely on Filestore to manage and serve web content, including needs such as WordPress hosting.View all technical guidesCompare featuresFilestore offers three performance tiers:Each Filestore service tier provides a different level of performance. The performance of any given instance might vary from the expected numbers due to various factors, such as the use of caching, the number of client VMs, the machine type of the client VMs, and the workload tested. Learn more about expected performance.FeaturesFilestore Basic (HDD & SSD)Filestore RegionalFilestore ZonalBest for:File sharing, GKE, software development, and web hostingCritical applications (e.g., SAP), Compute Engine, and GKE workloadsHigh performance computing including genome sequencing, financial services trading analysis, and other high performance workloads.Capacity1-63.9 TiB (HDD)2.5-63.9 TiB (SSD)1-100TiB1-100TiBMax sequential read throughput (MB/s)180 (HDD)1,200 (SSD)26,00026,000Max random read IOPS1,000 (HDD)60,000 (SSD)920,000920,000Filestore Basic (HDD & SSD)Best for:File sharing, GKE, software development, and web hostingCapacity1-63.9 TiB (HDD)2.5-63.9 TiB (SSD)Max sequential read throughput (MB/s)180 (HDD)1,200 (SSD)Max random read IOPS1,000 (HDD)60,000 (SSD)Filestore RegionalBest for:Critical applications (e.g., SAP), Compute Engine, and GKE workloadsCapacity1-100TiBMax sequential read throughput (MB/s)26,000Max random read IOPS920,000Filestore ZonalBest for:High performance computing including genome sequencing, financial services trading analysis, and other high performance workloads.Capacity1-100TiBMax sequential read throughput (MB/s)26,000Max random read IOPS920,000PricingPricingFilestore pricing is based on the following elements:Service tier: Whether the service tier of your instance is Basic HDD (Standard), Basic SSD (Premium), Enterprise, or Zonal SSD.Instance capacity: You are charged for the allocated storage capacity, even if it is unused.Region: The location where your instance is provisioned.New customers get $300 in free credits to spend on Google Cloud during the first 90 days.View pricing detailsPartnersExplore our file storage partner ecosystemSee all partnersExplore our marketplaceTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/FinOps_and_Optimization_of_GKE.txt b/FinOps_and_Optimization_of_GKE.txt new file mode 100644 index 0000000000000000000000000000000000000000..984cd902317b4751b35bb7f193297a73d82d8705 --- /dev/null +++ b/FinOps_and_Optimization_of_GKE.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/finops-optimize-gke +Date Scraped: 2025-02-23T11:58:27.734Z + +Content: +Business Oriented GKE OptimizationThe Google framework to build optimal compute platforms, designed to support the business. 
Provides three key pillars which help optimize for reliability, performance, innovation, and cost-efficiency with GKE.Contact usDo you want help from experts to help you optimize your GKE environment?Sign-up hereBenefitsThe Google framework to build optimal compute platforms, to support the businessSREProvides signals to measure the health of user-facing systems.DORAProvides signals to measure software delivery performance.Cost OptimizationProvide signals to measure cost-efficiency within your Kubernetes environments.Key featuresMore than just about cost-efficient workloadsCost optimization remains a key initiative for most organizations, the following are critical pillars: customer, innovation, and cloud cost.Pillars of the Google framework for business optimized computeTo support your critical business needs, focus needs to be on reliable systems, while providing an environment to innovate faster in a competitive world and all in a cost-efficient manner.How do you implement this framework with GKEThe GKE ecosystem provides you out of the box tooling for implementing the Google framework to build optimal compute platforms.Anthos Service Mesh and Cloud Monitoring can help provide details and alerting on signals for reliability.Learn moreGoogle Cloud Deploy provides you both automation and monitoring capabilities for your delivery pipelines.Learn moreGKE cost insights provides better visibility and recommendations for your GKE clusters and workloads.Learn moreTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Financial_Services.txt b/Financial_Services.txt new file mode 100644 index 0000000000000000000000000000000000000000..321ac1d94b333b5d83f864115f99974b6a0fa703 --- /dev/null +++ b/Financial_Services.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/financial-services +Date Scraped: 2025-02-23T11:57:48.719Z + +Content: +Accelerate your understanding of generative AI. Gain insights from industry experts and watch the 10 essential talks for the financial services industry.Google Cloud for financial servicesEmpower your transformation and growth.Maximize data-driven innovation, actionable insights, and exceptional customer experiences while meeting your security and compliance needs with Google Cloud.Contact sales0:44Gen AI opens a new world of possibilities for the growth potential of financial services companies. Explore 5 ways Generative AI can transform your business. How AI is shaping the future of financial servicesMultimodal AI, AI agents, and other groundbreaking innovations are poised to revolutionize financial services. Learn how these emerging trends could impact your business and gain a competitive edge.Read it hereWhy Google Cloud for financial servicesMore than cost savings and convenience, digital transformation is about empowering teams with insights to drive continuous innovation and scale. 
That's why organizations across the financial ecosystem solve their biggest challenges with Google Cloud.Shift to changing customer demandsTailored solutions can help financial services improve their business KPIsInnovate by redesigning omnichannel customer experiences, transforming data and analytics for smarter decision-making, improving operational efficiencies, and better managing risk, regulations, and fraud.Learn more“The speed itself is mind blowing. You should have seen the faces of some of our guys when they saw the numbers come out in 15 minutes.”Ajay Yadav, Global Head of Fixed Income for Traded Risk, HSBCWhat's newDiscover Financial Services deploys Google Cloud's generative AI to transform customer serviceNewsTD and Google Cloud enter into a strategic relationship to power innovative banking experiencesNewsCiti will use Google Cloud's Vertex AI platform to deliver generative AI capabilities across the companyNewsBuild resilient, secure, and scaled infrastructureImprove time-to-investment decisions through faster and easier data delivery, consumption, and analytics.Transform your entire business using modern cloud-based architectures, high-performance computing, and AI/ML for quantitative research, risk simulation, market data in the cloud, regulatory reporting, and more.Find out more“…Google Cloud’s new specialized platform will extend the benefits we can provide to our clients through next-generation cloud technology, expanded access and efficiencies, a broader range of customized connectivity options, and faster product development, with minimal disruption to their current operations.”Terry Duffy, Chairman & CEO, CME GroupWhat's newHow Citadel Securities is reimagining quantitative research on the cloudBlogCME Group and Google Cloud announce new Chicago area private cloud region and co-location facilityNewsListen to the fireside chat with former SEC Commissioner, Troy Paredes, at FIA Boca 2022VideoAchieve growth through innovative business modelsMaximize data’s impact for valuable business insights and smart decisionsLeverage data and analytics at scale to accelerate business growth, mitigate risk, and drive customer-centric business models to deliver better customer experiences. Learn more“Whether it was raw or curated data, the Google Cloud team and BigQuery really helped us consolidate and leverage the horsepower of Google’s data cloud to stitch our data together into a global-360 view”Santosh Bardwaj, Senior Vice President and Global Chief Data and Analytics Officer, CNAWhat's newCNA Insurance moves rapidly from data foundation to analytic products on Google CloudCase studyUSAA and Google Cloud: modernizing insurance operations with machine learningBlogSecurely grow revenues and reduce operating costsEnabling digital merchants worldwide to offer customers superior shopping experiences and simpler, safer ways to make online transactionsEfficiently monetize data, expand global revenue streams securely, and easily scale to transaction volume surges while staying compliant and preventing fraud. Learn more“We can only develop fast, build fast, and deploy fast if we have infrastructure that’s as nimble as we are...Google Cloud allows us to innovate and serve our customers at the speed we require. 
We can bring as much data as we want to the platform and manage it in a secure, cost-effective way.”Sri Shivananda, CTO, PayPal What's newGlobal Payments and Google form strategic partnership to reimagine digital paymentsNewsRevolut: Building the first global financial super app with Google CloudCase studyBankingShift to changing customer demandsTailored solutions can help financial services improve their business KPIsInnovate by redesigning omnichannel customer experiences, transforming data and analytics for smarter decision-making, improving operational efficiencies, and better managing risk, regulations, and fraud.Learn more“The speed itself is mind blowing. You should have seen the faces of some of our guys when they saw the numbers come out in 15 minutes.”Ajay Yadav, Global Head of Fixed Income for Traded Risk, HSBCWhat's newDiscover Financial Services deploys Google Cloud's generative AI to transform customer serviceNewsTD and Google Cloud enter into a strategic relationship to power innovative banking experiencesNewsCiti will use Google Cloud's Vertex AI platform to deliver generative AI capabilities across the companyNewsCapital MarketsBuild resilient, secure, and scaled infrastructureImprove time-to-investment decisions through faster and easier data delivery, consumption, and analytics.Transform your entire business using modern cloud-based architectures, high-performance computing, and AI/ML for quantitative research, risk simulation, market data in the cloud, regulatory reporting, and more.Find out more“…Google Cloud’s new specialized platform will extend the benefits we can provide to our clients through next-generation cloud technology, expanded access and efficiencies, a broader range of customized connectivity options, and faster product development, with minimal disruption to their current operations.”Terry Duffy, Chairman & CEO, CME GroupWhat's newHow Citadel Securities is reimagining quantitative research on the cloudBlogCME Group and Google Cloud announce new Chicago area private cloud region and co-location facilityNewsListen to the fireside chat with former SEC Commissioner, Troy Paredes, at FIA Boca 2022VideoInsuranceAchieve growth through innovative business modelsMaximize data’s impact for valuable business insights and smart decisionsLeverage data and analytics at scale to accelerate business growth, mitigate risk, and drive customer-centric business models to deliver better customer experiences. Learn more“Whether it was raw or curated data, the Google Cloud team and BigQuery really helped us consolidate and leverage the horsepower of Google’s data cloud to stitch our data together into a global-360 view”Santosh Bardwaj, Senior Vice President and Global Chief Data and Analytics Officer, CNAWhat's newCNA Insurance moves rapidly from data foundation to analytic products on Google CloudCase studyUSAA and Google Cloud: modernizing insurance operations with machine learningBlogPaymentsSecurely grow revenues and reduce operating costsEnabling digital merchants worldwide to offer customers superior shopping experiences and simpler, safer ways to make online transactionsEfficiently monetize data, expand global revenue streams securely, and easily scale to transaction volume surges while staying compliant and preventing fraud. Learn more“We can only develop fast, build fast, and deploy fast if we have infrastructure that’s as nimble as we are...Google Cloud allows us to innovate and serve our customers at the speed we require. 
We can bring as much data as we want to the platform and manage it in a secure, cost-effective way.”Sri Shivananda, CTO, PayPal What's newGlobal Payments and Google form strategic partnership to reimagine digital paymentsNewsRevolut: Building the first global financial super app with Google CloudCase studyThe ROI of gen AI in financial services In the finance industry, early adopters are reaping the rewards of gen AI. Get all the insights from our survey of 340 senior leaders and start generating ROI on your gen AI investments now.Get the reportSolutionsEnrich experiences, revenues, and efficiency.Detect suspicious, potential money laundering activity faster and more precisely with AI.AML AIRisk and regulatoryDeliver exceptional, natural customer interactions with expert assistance for complex cases.Contact Center AICustomer experience and growthDrive personalized, privacy-centric, omnichannel customer experiences and predictive marketing.Customer Data Platform (CDP)Customer experience and growthExchange and monetize historical and real-time market datasets—securely, easily, and at scale.DatashareCustomer experience and growthDeliver relevant and contextualized financial services within any customer service offering.Embedded Finance APICustomer experience and growthDrive profitable strategies by accelerating and scaling your quantitative research and backtesting.HPC for Quantitative ResearchOperational and cost efficiencyEfficiently calculate and simulate VaR at scale, in real time or on demand.HPC for Value at Risk (VaR)Operational and cost efficiencyReimagine and scale your regulatory reporting with cloud-based architecture and granular data.Regulatory Reporting PlatformRisk and regulatoryUse in-stream analytics to detect anomalies within time- series data.Time Series Anomaly DetectionRisk and regulatoryCustomer storiesDive into more customer case studiesGoldman Sachs Chairman & CEO talks about how the global investment banking firm is at the cutting edge of technology.Enhanced customer experiencesHSBC launched a scenario risk-modeling tool to accelerate climate risk and trading decision-making.Risk modeling and analysisCME Group flexible, cost-effective market data access helps firms mitigate risk and drive revenue.Real-time market data accessPayPal efficiently aggregates and analyzes customer and merchant data to identify and prevent fraud.Preventing fraudScotiabank accelerates its Cloud Adoption Strategy Through an Expanded Partnership with Google CloudEnhanced customer experiencesKeyBank uses our data and AI solutions to create flexible, personalized digital banking experiences.Enhanced customer experiencesBNY Mellon is transforming its U.S. 
Treasury market settlement and clearance process.Transforming clearance and settlementGenerali builds it's next-generation insurance company.Enhanced customer experiencesBloomberg's B-PIPE enables customers to receive real-time market data via a private connection.Real-time market data accessDeutsche Börse is providing safe and fast infrastructure for financial markets.Reducing security riskMax Life Insurance built a chatbot powered by Google Cloud to answer customer questions and generate stronger leads.Enhanced customer experiencesAllianz & Munich Re are helping insurance customers reduce cloud security risk and costs.Reducing security riskAllstate helps customers uncover risks to their homes with location-specific claims information.Insurance policy claimsUSAA creates machine learning models to improve cost estimate accuracy and reduce time to payment.Insurance policy claimsHudson River Trading uses high-performance computing to develop trading models faster.Real-time market data accessView MoreTrusted by top financial institutionsSome of the world’s most successful banks, investment firms, insurance companies, and payments providers use Google Cloud to transform and grow their businesses. Here are just a few.See more customersSecurity and complianceProtect your sensitive data—including customer personally identifiable information (PII), transaction data, and payment card details—through identity management, network security, threat detection and response.Learn moreGoogle Cybersecurity Action Team (GCAT)Compliance resource centerDrive innovation and scaleAccelerate digital transformationTake advantage of cloud computing to drive innovation through data democratization, app and infrastructure modernization, people and workflow connections, and trusted transactions.Read the blogMake faster, smarter decisionsFuel data-driven transformation with speed, scale, and security while leveraging built-in AI and ML. Break down operational and analytical silos and turn data into real-time insights.Read nowQuickly build, run, and scale your appsSpeed up development, effortlessly run and scale workloads where you want, faster and more cost-effectively, and turn your ideas into profitable strategies and innovative products. Read the blogProtect what’s importantDefend your data and apps with advanced security that complies with key global industry standards and requirements. Google keeps more people safe online than anyone else in the world.Read the blogReach sustainability goalsBuild a carbon-free future on the cleanest cloud in the industry. Google Cloud is the only major provider with enough renewable energy to cover our operations, including your workloads.Read nowSimplify your compliance journeyTrust the world's premier security advisory team with the singular mission of supporting the security and digital transformation of critical infrastructure, enterprises, and small businesses.Learn more about our Google Cybersecurity Action Team (GCAT)Trust the gold standard in security, privacy, and compliance controlsGoogle Cloud’s industry-leading security, third-party audits and certifications, documentation, and legal commitments help support your compliance.Visit our Compliance resource centerWhether it was raw or curated data, the Google Cloud team and BigQuery really helped us consolidate and leverage the horsepower of Google’s data cloud to stitch our data together into a global-360 view. Every few minutes, new real-time data feeds land in BigQuery. 
This is a radical improvement from the daily and weekly data loads that CNA did previously.Santosh Bardwaj, Senior Vice President and Global Chief Data and Analytics Officer, CNACase studyTake the next stepTell us what you’re solving for and a Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerFinancial services customers and storiesLearn moreGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Firestore.txt b/Firestore.txt new file mode 100644 index 0000000000000000000000000000000000000000..48ca2cb29bcfb439e0d1f12e6c8b0771d454ddc9 --- /dev/null +++ b/Firestore.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/firestore +Date Scraped: 2025-02-23T12:03:57.598Z + +Content: +Learn more about the latest generative AI launches, including LangChain integrations across Google Cloud databases. Read the blog. Jump to FirestoreFirestoreEasily develop rich applications using a fully managed, scalable, and serverless document database.New customers get $300 in free credits to spend on Firestore. All customers get 50,000 reads, 10,000 writes, 10,000 deletes, and 1 GB storage free per day, not charged against your credits.Go to consoleDeploy a dynamic websiteDeploy and run a dynamic web app with this interactive solution that uses Firestore to run a sample application (Developer Journey App) built with JavaScriptServerless, JSON-compatible database that easily scales, with no partitioning or maintenanceEasily build gen AI applications with Firestore vector search, LangChain, and LlamaindexFully customizable security and data validation rules to ensure the data is always protectedAccelerate development of mobile, web, and IoT apps with direct connectivity to the databaseVIDEOIntroducing Firestore: A document database that simplifies data for your apps2:22BenefitsEnterprise-grade document databaseFirestore is a JSON-compatible document database that offers rich queryability, high volumes of global indexes, serializable ACID-transactions and is fully integrated with Google Cloud’s governance tools.Pay as you go, and effortlessly scalePay only for what you use—no up-front expenditure or underutilized resources. Automatically scale up and down as needed. No partitioning or maintenance required and achieve up to 99.999% availability SLA.Launch applications and features fasterFirestore offers a great developer experience that helps you build applications faster. It offers built-in live synchronization, offline support, and ACID transactions—across client and server-side libraries.Key featuresKey featuresServerlessFocus on your application development using a fully managed, serverless document database that effortlessly scales up or down to meet any demand, with no partitioning, maintenance windows, or downtime.Powerful query engineFirestore allows you to run sophisticated queries, including Vector Search, and ACID transactions against your JSON-compatible document data. 
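To make the query engine description above concrete, the following is a minimal sketch that writes a document, runs a filtered and ordered query, and updates a document inside a transaction using the google-cloud-firestore Python client. The project ID, the players collection, and the field names are hypothetical, and the FieldFilter form assumes a recent client-library version.

```python
# pip install google-cloud-firestore
from google.cloud import firestore
from google.cloud.firestore_v1.base_query import FieldFilter

db = firestore.Client(project="my-project")  # hypothetical project ID

# Write a JSON-like document.
db.collection("players").document("alice").set(
    {"displayName": "Alice", "score": 1200, "guild": "blue"}
)

# Query: indexed filter plus ordering against the same field.
query = (
    db.collection("players")
    .where(filter=FieldFilter("score", ">=", 1000))
    .order_by("score", direction=firestore.Query.DESCENDING)
    .limit(10)
)
for snapshot in query.stream():
    print(snapshot.id, snapshot.to_dict())

# Transactions provide ACID semantics across reads and writes.
transaction = db.transaction()

@firestore.transactional
def add_points(tx, ref, points):
    current = ref.get(transaction=tx).to_dict() or {}
    tx.update(ref, {"score": current.get("score", 0) + points})

add_points(transaction, db.collection("players").document("alice"), 50)
```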
This gives you more flexibility in the way you structure your data.Live synchronization and offline modeBuilt-in live synchronization and offline mode makes it easy to build multi-user, collaborative applications on mobile web, and IoT devices, including workloads consisting of live asset tracking, activity tracking, real-time analytics, media and product catalogs, communications, social user profiles, and gaming leaderboards.View all featuresVIDEO How do queries work in Cloud Firestore?17:15Once we implemented our new statistics processing system, we were able to update our contributors’ site metrics much faster…By providing this granular level of data to our contributors, we are helping them better optimize their content and deliver the best possible pieces to their readers.Benjamin Harrigan, Software Architect, ForbesRead the blogWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoBuild an AI marketing platform with Highlevel and FirestoreWatch videoVideoBuilding a next-generation scalable gaming platform using FirestoreWatch videoVideoGoogle I/O 2023: What's new in Cloud FirestoreWatch videoVideoLearn how startups use FirestoreWatch videoVideoSee how developers benefit from building apps in the Cloud using FirestoreWatch videoVideoNYT: Building a real-time collaborative editor with FirestoreWatch videoDocumentationDocumentationQuickstartQuickstart using a mobile/web client librarySet up a Firestore database, add data, and read data using the Android, iOS, or Web client libraries.Learn moreQuickstartQuickstart using a server client librarySet up a Firestore database, add data, and read data using the C#, Go, Java, Node.js, PHP, Python, or Ruby server client library.Learn moreTutorialBuilding scalable applications with FirestoreBest practices for building apps that use Firestore, including data location, document IDs, field names, indexes, read and write operations, and designing for scale.Learn moreTutorialFirestore sample appThis document describes when to use Firestore to build large applications.Learn moreAPIs & LibrariesFirestore client librariesBuild a sample app for Android, iOS, Web, or Java.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for FirebaseAll featuresAll featuresServerlessFully managed, serverless database that effortlessly scales up or down to meet any demand, with no partitioning, maintenance windows, or downtime.Powerful query engineFirestore allows you to run sophisticated queries, including Vector Search, and ACID transactions against your JSON-compatible document data. This gives you more flexibility in the way you structure your data.AI functionalityBuilt-in Vector Search and turnkey extensions to integrate Firestore with popular AI services with a few clicks. Enables use cases, such as automated embedding generation, language translations, image classification, and more.Gen AI library integrationsEasily build gen AI applications that are more accurate, transparent, and reliable with integrations in popular libraries, including LangChain and Llamaindex. 
Firestore’s integration supports common patterns—Document loader for loading and storing information from documents, Vector Store, and Memory (such as Chat Messages Memory).Easily share data between Firestore and BigQueryCapture changes to your documents in Firestore and replicate changes to BigQuery. Easily pull data from BigQuery into Firestore to add analytics to your apps.SecurityFirestore seamlessly integrates with Identity Platform and Firebase Authentication, to enable customizable identity-based security access controls and enables data validation using a configuration language.Multi-region replicationWith automatic multi-region replication and strong consistency, your data is safe and has a 99.999% availability guarantee, even when disasters strike.Live synchronization and offline modeBuilt-in live synchronization and offline mode make it easy to build multi-user, collaborative applications on mobile web, and IoT devices, including workloads consisting of live asset tracking, activity tracking, real-time analytics, media and product catalogs, communications, social user profiles, and gaming leaderboards.Libraries for popular languagesFocus on your application development using Firestore client-side development libraries for Web, iOS, Android, Flutter, C++, and Unity. Firestore also supports traditional server-side development libraries using Node.js, Java, Go, Ruby, and PHP.Datastore modeFirestore supports the Datastore API. You won't need to make any changes to your existing Datastore apps, and you can expect the same performance characteristics and pricing with the added benefit of strong consistency.PricingPricingCloud Firestore detailed pricing is available on our pricing page.FeaturePriceStored data$0.18/GBBandwidthGoogle Cloud pricingDocument writes$0.18/100KDocument reads$0.06/100KDocument deletes$0.02/100KTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Fortigate_architecture_in_Google_Cloud.txt b/Fortigate_architecture_in_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..35b841db40405dcd5b55db378313e5a3b0f69baa --- /dev/null +++ b/Fortigate_architecture_in_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/fortigate-architecture-in-cloud +Date Scraped: 2025-02-23T11:53:50.955Z + +Content: +Home Docs Cloud Architecture Center Send feedback FortiGate architecture in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-21 UTC Authors: Bartek Moczulski, Sherelle Farrington - Fortinet Inc. | Jimit Modi - Google This document describes the overall concepts around deploying a FortiGate Next Generation Firewall (NGFW) in Google Cloud. It's intended for network administrators, solution architects, and security professionals who are familiar with Compute Engine and Virtual Private Cloud networking. Architecture The following architecture diagram shows a highly available cloud security solution that helps to reduce costs and to detect threats. The recommended deployment of FortiGate NGFW on Google Cloud consists of a pair of FortiGate virtual machine (VM) instances deployed in an active-passive high-availability (HA) cluster across two availability zones in the same region. 
In the previous diagram, traffic is forwarded through the cluster using an external passthrough Network Load Balancer and an internal passthrough Network Load Balancer. The external passthrough Network Load Balancer binds to external IP addresses and forwards all traffic to the external network interface (port1) of the active FortiGate instance. The internal passthrough Network Load Balancer is used as the next hop for the custom static route from the shared Virtual Private Clouds (VPCs) in the host project. The load balancer forwards outbound traffic to the internal network interface (port2) of the active FortiGate VM. To determine which instance is active, both load balancers use health checks. The FortiGate Clustering Protocol (FGCP) requires a dedicated network interface for configuration and session synchronization. Each network interface must be connected to a separate VPC network. Standard deployments use port3 for clustering (HA sync in the diagram). Both VM instances are managed through dedicated management network interfaces (port4) that are directly reachable using either external IP addresses or over a VLAN attachment. The VLAN attachment isn't marked in the diagram. Newer firmware versions (7.0 and later) allow using dedicated HA sync interfaces for management. Use Cloud Next Generation Firewall rules to restrict access to management interfaces. When outbound traffic originates from FortiGate VMs—for example, updates from the FortiGate Intrusion Prevention System (IPS), or communications with Compute Engine APIs—that traffic is forwarded to the internet using the Cloud NAT service. In environments isolated from the internet, the outbound traffic that originates from a FortiGate VM can also use a combination of Private Google Access and FortiManager. Private Google Access accesses Compute Engine APIs. FortiManager is a central management appliance that acts as a proxy for accessing FortiGuard Security Services. Workloads protected by FortiGate NGFWs are typically located in separate VPC networks, either simple or shared. Those separate VPC networks are connected to the internal VPC using VPC Network Peering. To apply the custom default route from the internal VPC to all the workloads, enable export custom routes or import custom routes in the peering options for both the internal network VPC and the workload VPCs. Placing security features in this central hub enables different independent DevOps teams to deploy and operate their workload projects behind a common security management zone. This dedicated VPC forms the basis of the Fortinet Cloud Security Services Hub concept. This hub is designed to offload effective and scalable secure connectivity and policy enforcement to the network security team. For more details about the different traffic flows, see the Use cases section. Product capabilities FortiGate on Google Cloud delivers NGFW, VPN, advanced routing, and software-defined wide area network (SD-WAN) capabilities for organizations. FortiGate reduces complexity with automated threat protection, visibility into applications and users, and network and security ratings that enable security best practices. Close integration with Google Cloud and management automation using Terraform, Ansible, and robust APIs help you maintain a strong network security posture in agile modern cloud deployments. 
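The return path described above depends on a custom static route in the internal VPC whose next hop is the internal passthrough Network Load Balancer. The following sketch creates such a route with the google-cloud-compute Python client; the project, network, region, and forwarding-rule names are hypothetical, and the next_hop_ilb field name follows the Compute Engine routes API (nextHopIlb), so verify it against your client-library version.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

PROJECT = "net-security-project"          # hypothetical host project
ROUTE_NAME = "default-via-fortigate-ilb"  # hypothetical route name

route = compute_v1.Route(
    name=ROUTE_NAME,
    network=f"projects/{PROJECT}/global/networks/internal-vpc",
    dest_range="0.0.0.0/0",
    priority=900,
    # Forwarding rule of the internal passthrough Network Load Balancer
    # that fronts the active FortiGate instance (port2 side).
    next_hop_ilb=(
        f"projects/{PROJECT}/regions/us-central1/"
        "forwardingRules/fgt-internal-ilb"
    ),
)

operation = compute_v1.RoutesClient().insert(
    project=PROJECT, route_resource=route
)
operation.result()  # wait for the route to be created
print("Created route", ROUTE_NAME)
```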
Features highlight FortiGate NGFW is a virtual appliance that offers the following features: Full traffic visibility Granular access control based on metadata, geolocation, protocols, and applications Threat protection powered by threat intelligence security services from FortiGuard Labs Encrypted traffic inspection Secure remote access options, including the following: Zero-trust network access (ZTNA) SSL VPN for individual users IPsec and award-winning secure SD-WAN connectivity for distributed locations and data centers By combining FortiGate NGFW with optional additional elements of the Fortinet Security Fabric, like FortiManager, FortiAnalyzer, FortiSandbox, and FortiWeb, administrators can expand security capabilities in the targeted areas. The following screenshot displays FortiGate firewall policies using geolocation and Google Cloud network tags. Use cases You can deploy FortiGate to secure your environment and apply deep packet inspection to both incoming, outgoing, and local traffic. In the following sections, the document discusses packet inspection: North-south traffic inspection Inbound traffic Private service access Outbound traffic East-west traffic inspection North-south traffic inspection When you're running your workloads on Google Cloud you should protect your applications against cyberattacks. When deployed in a VPC, a FortiGate VM can help secure applications by inspecting all inbound traffic originating from the internet. You can also use a FortiGate VM as a secure web gateway that protects the outbound traffic originating from workloads. Note: To provide enhanced protection of the services that are based on HTTP and HTTPS protocols, it's recommended that you deploy a WAAP (web application and API protection) solution. Fortinet offers a dedicated WAAP product for Google Cloud as a VM deployment (FortiWeb), and as a service (FortiWeb Cloud). Inbound traffic To enable secure connectivity to your workloads, assign one or more external IP addresses to the external passthrough Network Load Balancer that forwards the traffic to FortiGate instances. There, the incoming connections are: Translated according to DNAT (Virtual IP) configuration Matched to the access policy Inspected for threats according to the security profile Forwarded to the workloads using their internal IP addresses or FQDN Don't translate source IP addresses. Not translating a source IP address ensures that the original client IP address is visible to the workloads. When using internal domain names for service discovery, you must configure FortiGate VMs to use the following internal Cloud DNS service (169.254.169.254) instead of the FortiGuard servers. The return traffic flow is ensured because of a custom static route in the internal VPC. That route is configured with the internal passthrough Network Load Balancer as a next hop. The workloads might be deployed directly into the internal network, or connected to it using VPC Network Peering. The following diagram shows an example of the traffic packet flow for inbound traffic inspection with FortiGate: Private service access You can make some Google Cloud services accessible to the workloads in a VPC network using private services access (PSA). Examples of these services include—but aren't limited to—Cloud SQL and Vertex AI. Because the VPC Network Peering is non-transitive, services connected using PSA aren't directly available from other VPC networks over peering. 
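As noted above, internal service discovery should resolve through the VPC's internal Cloud DNS service at 169.254.169.254 rather than through the FortiGuard DNS servers. The following dnspython sketch, run from a VM in the same VPC, is one way to confirm that an internal name resolves against that resolver; the record name is hypothetical.

```python
# pip install dnspython
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["169.254.169.254"]  # VPC-internal Cloud DNS resolver

# Hypothetical internal record for a backend workload.
answer = resolver.resolve("app1.backend.internal.example.", "A")
for record in answer:
    print("app1 resolves to", record.address)
```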
In the recommended architecture design with peered internal and workload VPCs, the services connected to the workload VPCs aren't available from or through FortiGate NGFWs. To make a single instance of the service available through FortiGate NGFWs, connect it using PSA to the internal VPC. For multiple service instances, connect each instance to workload VPCs and create an extra FortiGate network interface for each instance. The following diagram shows additional network interfaces for PSA in a network security project: Outbound traffic Any outbound traffic initiated by workloads and headed to the active FortiGate instance uses a custom route through an internal passthrough Network Load Balancer. The load balancer forwards the traffic to the active FortiGate instance. FortiGate inspects the traffic according to its policy and source-NATs (SNATs) to one of the external IP addresses of the external passthrough Network Load Balancer. It's also possible to SNAT to an internal IP address. In that case, Cloud NAT handles the translation to an external IP address. Using IP address pools in FortiGate for SNATing offers better control over the external IP addresses assigned to individual groups of outbound connections when compared to using directly attached external IP addresses or first-party NAT solutions. East-west traffic inspection East-west traffic inspection, or segmentation, is a common requirement for three-tier architecture (presentation, application, data) deployments that are used to detect and block lateral movement, in case one of the tiers is compromised. It's also commonly used to provide control and visibility of the connections between multiple internal applications or services. Each tier must be put in a separate VPC and peered with the firewall's internal VPC to form a hub-and-spoke architecture with an internal VPC in the role of a hub. To make connections between the spokes to be passed through FortiGate for inspection, perform the following steps: Create a custom route in the internal VPC. Apply the custom route to the spokes. To apply the route, use the Export custom route and Import custom route options in the peering settings for both the hub and the spokes. You can add and remove workload VPCs at any time. Doing so doesn't affect traffic between the other spoke networks. If the peering group size limit enforced by Google Cloud is too low for a specific deployment, you can deploy FortiGate VMs with multiple internal network interfaces. You can group these interfaces into a single zone to simplify the firewall configuration. Note: Google Cloud recently introduced policy-based routes, which makes it possible to inspect traffic using FortiGate NGFWs between the workloads in the same VPC. As of November 2023, this feature is in preview, but you might consider it as an alternative to a hub-and-spoke approach. In the preceding diagram, the connection initiated from the Tier 1 VPC toward the Tier 2 VPC matches the imported default custom route. The connection is sent through the internal passthrough Network Load Balancer to the active FortiGate for inspection. Once inspected, the connection is forwarded by FortiGate using the same internal port2, that matches the peering route. That route is more specific than the default custom route and is sent over VPC Network Peering to its destination in the Tier 2 VPC. Additional use cases This section contains some additional use cases for deploying a FortiGate NGFW in Google Cloud. 
Secure a hybrid cloud Because of regulatory or compliance requirements, you might be required to perform a thorough inspection of the connectivity between an on-premises data center and the cloud. FortiGate can provide inline IPS inspection and enforce access policies for Cloud Interconnect links. To avoid direct traffic flow between Cloud Interconnect and your workloads, don't place VLAN attachments in the workload VPC networks. Instead, place them in an external VPC that's only connected to the firewalls. You can redirect traffic between Cloud Interconnect and the workload VPCs by using a set of custom routes that point to the two internal passthrough Network Load Balancers placed on the external side and the internal side of the FortiGate cluster. To enable traffic flow, and to match the IP address space of the workloads, you must manually update the custom route (or routes) on the external (Cloud Interconnect) side of the FortiGate cluster, and the custom advertisement list of the Cloud Router. While organizations can automate this process, those steps are beyond the scope of this document. Note: Sometimes, high throughput requirements can only be met using an active-active deployment. In these cases, the flow symmetry necessary for full bi-directional IPS inspection is maintained by the symmetric hashing feature of internal passthrough Network Load Balancer. This feature is enabled by default. The following diagram shows an example of the packet flow from an on-premises location to Google Cloud, when those packets are protected by FortiGate: A network packet sent from the on-premises location is directed by the custom BGP advertisement configured on the Cloud Router to a Cloud Interconnect link. Once the packet reaches the external VPC, it's sent using a custom static route to the active FortiGate instance. The FortiGate instance is located behind an internal network pass-through load balancer. FortiGate inspects the connection and passes the packet to the internal VPC network. The packet then reaches its destination in a peered network. The return packet is routed through the Internal Network Load Balancer in the Internal VPC and through the FortiGate instance to the external VPC network. After that, the packet is sent back to the on-premises location through Cloud Interconnect. Connect with shared VPCs Larger organizations that have multiple projects hosting application workloads often use Shared VPCs to centralize network management in a single host project and extend subnets to individual service projects. In projects used by DevOps teams, this approach helps increase operational security by restricting unnecessary access to network functions. It's also aligned to Fortinet Security Services Hub best practices. Fortinet recommends using VPC peering between FortiGate internal VPC and Shared VPC networks within the same project. But only if those networks are controlled by the network security team. And only if that team also manages network operations. If network security and network operations are managed by separate teams, implement the peering across different projects. Using individual FortiGate NICs to connect directly to the Shared VPCs–while necessary in some architectures to access services attached to different Shared VPCs using PSA–negatively affects performance. This performance issue can be caused by the distribution of NIC queues across multiple interfaces and the reduced number of vCPUs available to each NIC. 
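Both the hub-and-spoke segmentation described earlier and the peering guidance above rely on VPC Network Peering with custom-route exchange enabled. The following sketch creates one side of such a peering with the google-cloud-compute Python client; the project, network, and peering names are hypothetical, and a matching call is needed from the other network with the roles reversed.

```python
from google.cloud import compute_v1

def peer_networks(project: str, local_network: str, peer_network_url: str,
                  peering_name: str) -> None:
    """Create one side of a VPC peering with custom-route exchange enabled."""
    request = compute_v1.NetworksAddPeeringRequest(
        network_peering=compute_v1.NetworkPeering(
            name=peering_name,
            network=peer_network_url,
            exchange_subnet_routes=True,
            # Needed so the custom default route that points at the FortiGate
            # internal load balancer propagates between hub and spokes.
            export_custom_routes=True,
            import_custom_routes=True,
        )
    )
    op = compute_v1.NetworksClient().add_peering(
        project=project,
        network=local_network,
        networks_add_peering_request_resource=request,
    )
    op.result()

# Hub side: peer the FortiGate internal VPC with a workload (spoke) VPC.
peer_networks(
    project="net-security-project",
    local_network="internal-vpc",
    peer_network_url="projects/workload-project/global/networks/tier1-vpc",
    peering_name="hub-to-tier1",
)
```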
Connect with the Network Connectivity Center To connect between branch offices and to provide access to cloud resources, Network Connectivity Center (NCC) provides an advanced multi-regional architecture that uses dynamic routing and the Google backbone network. To manage SD-WAN connectivity between Google Cloud and on-premises locations equipped with FortiGate appliances, FortiGate NGFWs are certified for use as NCC router appliances. FortiGate clusters, as described in this document, are deployed into NCC spokes to form regional SD-WAN hubs that connect on-premises locations in the region. Locations within the same region can use FortiGate Auto-Discovery VPN technology to create on-demand direct links between each location. This practice can help deliver minimum latency. Another advantage is that locations in different regions can connect over the Google backbone to deliver fast and stable connectivity that's constantly monitored by FortiGate SD-WAN performance SLAs. Design considerations This section discusses the Google Cloud and FortiGate components that you need to consider when designing systems and deploying a FortiGate NGFW in Google Cloud. FortiGate VMs FortiGate VMs are configured in a unicast FortiGate Clustering Protocol (FGCP) active-passive cluster. The cluster heartbeat is on port3. Dedicated management is on port4. Port1 and port2 are used for forwarding traffic. Optionally, port4 can be linked to an external IP address unless the internal IP addresses of the interfaces are available in another way—for example, by VLAN attachment. No other NICs need external IP addresses. Port3 needs a static internal IP address because it's part of the configuration of the peer instance in the cluster. Other NICs can be configured with static IP addresses for consistency. Cloud Load Balancing To identify the current active instance, Cloud Load Balancing health checks constantly probe FortiGate VMs. A built-in FortiGate probe responder responds only when a cluster member is active. During an HA failover, the load balancer identifies the active instance and redirects traffic through it. During a failover, the connection tracking feature of Cloud Load Balancing helps sustain existing TCP connections. Cloud NAT Both FortiGate VMs in a cluster periodically connect to Compute Engine API and to FortiGuard services to accomplish the following tasks: Verify licenses Access workload metadata Update signature databases The external network interfaces of the FortiGate VMs don't have an attached external IP address. To enable connectivity to these services, use Cloud NAT. Or, you can use a combination of PSA for Compute Engine API and a FortiManager acting as a proxy for accessing FortiGuard services. Service Account VM instances in Google Cloud can be bound to a service account and use the IAM privileges of the account when interacting with Google Cloud APIs. Create and assign to FortiGate VM instances a dedicated service account with a custom role. This custom role should enable the reading of metadata information for VMs and Kubernetes clusters in all projects which host protected workloads. To build dynamic firewall policies without explicitly providing IP addresses, you can use metadata from Compute Engine and Google Kubernetes Engine. For example, only VMs with the network tag frontend can connect to VMs with the network tag backend. Workloads Workloads are any VM instances or Kubernetes clusters providing services that are protected by FortiGate VMs. 
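The frontend and backend network-tag example above is implemented inside FortiGate as a dynamic policy built from instance metadata; a complementary guardrail at the VPC layer can be expressed as a firewall rule keyed on the same tags before traffic reaches the workloads. A sketch with the google-cloud-compute Python client follows; the project, network, port, and rule name are hypothetical.

```python
from google.cloud import compute_v1

PROJECT = "workload-project"  # hypothetical service project
NETWORK = f"projects/{PROJECT}/global/networks/tier1-vpc"

allow_tcp = compute_v1.Allowed()
allow_tcp.I_p_protocol = "tcp"  # field name as generated by the client library
allow_tcp.ports = ["8443"]

rule = compute_v1.Firewall(
    name="allow-frontend-to-backend",
    network=NETWORK,
    direction="INGRESS",
    priority=1000,
    allowed=[allow_tcp],
    source_tags=["frontend"],   # only instances tagged frontend...
    target_tags=["backend"],    # ...may reach instances tagged backend
)

op = compute_v1.FirewallsClient().insert(project=PROJECT, firewall_resource=rule)
op.result()
print("Created firewall rule", rule.name)
```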
You can deploy these instances or clusters into the internal VPC that's connected directly to the FortiGate VM instances. For role separation and flexibility reasons, Fortinet recommends using a separate VPC network that's peered with the FortiGate internal VPC. Costs FortiGate VM for Google Cloud supports both the on-demand, pay-as-you-go model and the bring-your-own-license model. For more information, see FortiGate support. The FortiGate NGFW uses the following billable components of Google Cloud: Compute Engine Cloud Load Balancing Cloud Storage Cloud NAT For more information on FortiGate licensing in Google Cloud, see Fortinet's article on order types. To generate a cost estimate based on your projected usage, use the Google Cloud pricing calculator. New Google Cloud users might be eligible for a free trial. High-performance deployments By default, FortiGate NGFWs are deployed in an active-passive high-availability cluster that can only scale up vertically and that's limited by the maximum throughput of a single active instance. While that's sufficient for many deployments, it's possible to deploy active-active clusters and even autoscaling groups for organizations that require higher performance. Using an active-active architecture might enforce additional limitations to the configuration and licensing of your FortiGate NGFWs—for example, requirements for a central management system or the source-NAT. When autoscaling a deployment, NGFW only supports pay-as-you-go licensing. Fortinet recommends following the active-passive approach. Contact the Fortinet support team with questions. Because of the way the Google Cloud algorithm assigns queues to VMs and the way individual queues get assigned to CPUs in the Linux kernel, your business can help increase performance by using custom queue assignment in Compute Engine and fine-tuning interrupt affinity settings in FortiGate. For more information on performance offered by a single appliance, see FortiGate®-VM on Google Cloud. Deployment Standalone FortiGate virtual appliances are available in Google Cloud Marketplace. You can use the following methods to automate the deployment of the high-availability architectures described in this document: FortiGate reference architecture deployment with Terraform FortiGate reference architecture deployment with Deployment Manager FortiGate reference architecture deployment with Google Cloud CLI For a customizable experience without deploying a described solution in your own project, see the FortiGate: Protecting Google Compute resources lab in the Fortinet Qwiklabs portal. What's next Read more about FortiGate in their Google Cloud administration guide. Read more about other Fortinet products for Google Cloud at fortinet.com. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthors: Bartek Moczulsk | Consulting System Engineer (Fortinet)Jimit Modi | Cloud Partner EngineerOther contributors: Deepak Michael | Networking Specialist Customer EngineerAmmett Williams | Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Foster_a_culture_of_cost_awareness.txt b/Foster_a_culture_of_cost_awareness.txt new file mode 100644 index 0000000000000000000000000000000000000000..8f73fd1a0343edd4a2a645250ad7e06ad8b625e6 --- /dev/null +++ b/Foster_a_culture_of_cost_awareness.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/cost-optimization/foster-culture-cost-awareness +Date Scraped: 2025-02-23T11:43:52.034Z + +Content: +Home Docs Cloud Architecture Center Send feedback Foster a culture of cost awareness Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-25 UTC This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to promote cost awareness across your organization and ensure that team members have the cost information that they need to make informed decisions. Conventionally, the responsibility for cost management might be centralized to a few select stakeholders and primarily focused on initial project architecture decisions. However, team members across all cloud user roles (analyst, architect, developer, or administrator) can help to reduce the cost of your resources in Google Cloud. By sharing cost data appropriately, you can empower team members to make cost-effective decisions throughout their development and deployment processes. Principle overview Stakeholders across various roles – product owners, developers, deployment engineers, administrators, and financial analysts – need visibility into relevant cost data and its relationship to business value. When provisioning and managing cloud resources, they need the following data: Projected resource costs: Cost estimates at the time of design and deployment. Real-time resource usage costs: Up-to-date cost data that can be used for ongoing monitoring and budget validation. Costs mapped to business metrics: Insights into how cloud spending affects key performance indicators (KPIs), to enable teams to identify cost-effective strategies. Every individual might not need access to raw cost data. However, promoting cost awareness across all roles is crucial because individual decisions can affect costs. By promoting cost visibility and ensuring clear ownership of cost management practices, you ensure that everyone is aware of the financial implications of their choices and everyone actively contributes to the organization's cost optimization goals. Whether through a centralized FinOps team or a distributed model, establishing accountability is crucial for effective cost optimization efforts. Recommendations To promote cost awareness and ensure that your team members have the cost information that they need to make informed decisions, consider the following recommendations. Provide organization-wide cost visibility To achieve organization-wide cost visibility, the teams that are responsible for cost management can take the following actions: Standardize cost calculation and budgeting: Use a consistent method to determine the full costs of cloud resources, after factoring in discounts and shared costs. Establish clear and standardized budgeting processes that align with your organization's goals and enable proactive cost management. 
Use standardized cost management and visibility tools: Use appropriate tools that provide real-time insights into cloud spending and generate regular (for example, weekly) cost progression snapshots. These tools enable proactive budgeting, forecasting, and identification of optimization opportunities. The tools could be cloud provider tools (like the Google Cloud Billing dashboard), third-party solutions, or open-source solutions like the Cost Attribution solution. Implement a cost allocation system: Allocate a portion of the overall cloud budget to each team or project. Such an allocation gives the teams a sense of ownership over cloud spending and encourages them to make cost-effective decisions within their allocated budget. Promote transparency: Encourage teams to discuss cost implications during the design and decision-making processes. Create a safe and supportive environment for sharing ideas and concerns related to cost optimization. Some organizations use positive reinforcement mechanisms like leaderboards or recognition programs. If your organization has restrictions on sharing raw cost data due to business concerns, explore alternative approaches for sharing cost information and insights. For example, consider sharing aggregated metrics (like the total cost for an environment or feature) or relative metrics (like the average cost per transaction or user). Understand how cloud resources are billed Pricing for Google Cloud resources might vary across regions. Some resources are billed monthly at a fixed price, and others might be billed based on usage. To understand how Google Cloud resources are billed, use the Google Cloud pricing calculator and product-specific pricing information (for example, Google Kubernetes Engine (GKE) pricing). Understand resource-based cost optimization options For each type of cloud resource that you plan to use, explore strategies to optimize utilization and efficiency. The strategies include rightsizing, autoscaling, and adopting serverless technologies where appropriate. The following are examples of cost optimization options for a few Google Cloud products: Cloud Run lets you configure always-allocated CPUs to handle predictable traffic loads at a fraction of the price of the default allocation method (that is, CPUs allocated only during request processing). You can purchase BigQuery slot commitments to save money on data analysis. GKE provides detailed metrics to help you understand cost optimization options. Understand how network pricing can affect the cost of data transfers and how you can optimize costs for specific networking services. For example, you can reduce the data transfer costs for external Application Load Balancers by using Cloud CDN or Google Cloud Armor. For more information, see Ways to lower external Application Load Balancer costs. Understand discount-based cost optimization options Familiarize yourself with the discount programs that Google Cloud offers, such as the following examples: Committed use discounts (CUDs): CUDs are suitable for resources that have predictable and steady usage. CUDs let you get significant reductions in price in exchange for committing to specific resource usage over a period (typically one to three years). You can also use CUD auto-renewal to avoid having to manually repurchase commitments when they expire. 
Sustained use discounts: For certain Google Cloud products like Compute Engine and GKE, you can get automatic discount credits after continuous resource usage beyond specific duration thresholds. Spot VMs: For fault-tolerant and flexible workloads, Spot VMs can help to reduce your Compute Engine costs. The cost of Spot VMs is significantly lower than regular VMs. However, Compute Engine might preemptively stop or delete Spot VMs to reclaim capacity. Spot VMs are suitable for batch jobs that can tolerate preemption and don't have high availability requirements. Discounts for specific product options: Some managed services like BigQuery offer discounts when you purchase dedicated or autoscaling query processing capacity. Evaluate and choose the discounts options that align with your workload characteristics and usage patterns. Incorporate cost estimates into architecture blueprints Encourage teams to develop architecture blueprints that include cost estimates for different deployment options and configurations. This practice empowers teams to compare costs proactively and make informed decisions that align with both technical and financial objectives. Use a consistent and standard set of labels for all your resources You can use labels to track costs and to identify and classify resources. Specifically, you can use labels to allocate costs to different projects, departments, or cost centers. Defining a formal labeling policy that aligns with the needs of the main stakeholders in your organization helps to make costs visible more widely. You can also use labels to filter resource cost and usage data based on target audience. Use automation tools like Terraform to enforce labeling on every resource that is created. To enhance cost visibility and attribution further, you can use the tools provided by the open-source cost attribution solution. Share cost reports with team members By sharing cost reports with your team members, you empower them to take ownership of their cloud spending. This practice enables cost-effective decision making, continuous cost optimization, and systematic improvements to your cost allocation model. Cost reports can be of several types, including the following: Periodic cost reports: Regular reports inform teams about their current cloud spending. Conventionally, these reports might be spreadsheet exports. More effective methods include automated emails and specialized dashboards. To ensure that cost reports provide relevant and actionable information without overwhelming recipients with unnecessary detail, the reports must be tailored to the target audiences. Setting up tailored reports is a foundational step toward more real-time and interactive cost visibility and management. Automated notifications: You can configure cost reports to proactively notify relevant stakeholders (for example, through email or chat) about cost anomalies, budget thresholds, or opportunities for cost optimization. By providing timely information directly to those who can act on it, automated alerts encourage prompt action and foster a proactive approach to cost optimization. Google Cloud dashboards: You can use the built-in billing dashboards in Google Cloud to get insights into cost breakdowns and to identify opportunities for cost optimization. Google Cloud also provides FinOps hub to help you monitor savings and get recommendations for cost optimization. An AI engine powers the FinOps hub to recommend cost optimization opportunities for all the resources that are currently deployed. 
To control access to these recommendations, you can implement role-based access control (RBAC). Custom dashboards: You can create custom dashboards by exporting cost data to an analytics database, like BigQuery. Use a visualization tool like Looker Studio to connect to the analytics database to build interactive reports and enable fine-grained access control through role-based permissions. Multicloud cost reports: For multicloud deployments, you need a unified view of costs across all the cloud providers to ensure comprehensive analysis, budgeting, and optimization. Use tools like BigQuery to centralize and analyze cost data from multiple cloud providers, and use Looker Studio to build team-specific interactive reports. Previous arrow_back Align spending with business value Next Optimize resource usage arrow_forward Send feedback \ No newline at end of file diff --git a/Games.txt b/Games.txt new file mode 100644 index 0000000000000000000000000000000000000000..ce8d5b64605b2d05e8579330a57712ccb6b0cad5 --- /dev/null +++ b/Games.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/games +Date Scraped: 2025-02-23T11:57:57.049Z + +Content: +Google Cloud for GamesGoogle Cloud for Games helps you create exceptional player experiences by unifying performance and insights.Contact us1:51Watch how Capcom uses Google Cloud for the renowned fighting series Street FighterAn ecosystem for living gamesGoogle operates live services used by billions. Harness world-class technology, performance, and scale with Google Cloud. Elevate your live games to next generation living games with Google’s generative AI capabilities.Empower your games with Google CloudGoogle Cloud's solutions helps you serve exceptional games and uncover new ways to delight players, so they come back for more.Cloud infrastructure for gamesThe world has more players with higher expectationsGoogle Cloud's ecosystem of solutions helps you serve players reliably everywhere in the world. Create, scale, and react—fast.“Our goal is to continually find new ways to provide the highest-quality, most seamless services to our players so that they can focus on their games. …This collaboration makes it possible to combine Google Cloud’s expertise in deploying Kubernetes at scale with our deep knowledge of game development pipelines and technologies.”Carl Dionne, Development Director, Online Technology Group, UbisoftTop solutions for gamesCompute EngineServe game experiencesGoogle Kubernetes EngineScale game experiencesCloud ArmorShield game experiencesDatabases and analytics for gamesAs audiences grow, there’s more to knowInitially built for Google's own consumer services, Google Cloud's real-time databases scale like no other. Harness, secure, and scale all your data across regions and games.“Our infrastructure needs to support hundreds of thousands of concurrent connections per second, as well as our data warehouse, and we saw that Google has the capability to handle our needs.”Jacques Erasmus, CIO, KingTop solutions for gamesSpannerMaximum scale for dataBigtableScale and speed; NoSQLPub/SubHarness game dataAnalytics and AI for gamesThe clues to gain and retain players are in your dataSuccessful games generate vast amounts of data. Google Cloud's AI and analytics solutions help you distill swaths of data into facts and insights. 
Top solutions for gamesBigQueryManage your game dataLookerUncover game insightsVertex AIHarness ML for gamesOpen source for gamesThe largest games must integrate with existing tools and technologyCreate custom solutions without starting from scratch. Use Google-founded open source projects: from game-specific projects like Open Match to industry standards like Kubernetes. Top solutions for gamesKubernetesOrchestrate workloadsAgonesKubernetes for GamesOpen MatchMatchmake your playersAI for gamesBring your games to life with cutting-edge generative AIIntegrate generative AI into your development pipeline with Vertex AI. Deploy AI models onto Google Kubernetes Engine to create entirely new player experiences.Top solutions for gamesVertex AITune AI modelsGoogle Kubernetes EngineOrchestrate AI workloadsGoogle Cloud MarketplaceDiscover AI SolutionsServe your playersCloud infrastructure for gamesThe world has more players with higher expectationsGoogle Cloud's ecosystem of solutions helps you serve players reliably everywhere in the world. Create, scale, and react—fast.“Our goal is to continually find new ways to provide the highest-quality, most seamless services to our players so that they can focus on their games. …This collaboration makes it possible to combine Google Cloud’s expertise in deploying Kubernetes at scale with our deep knowledge of game development pipelines and technologies.”Carl Dionne, Development Director, Online Technology Group, UbisoftTop solutions for gamesCompute EngineServe game experiencesGoogle Kubernetes EngineScale game experiencesCloud ArmorShield game experiencesKnow your gameDatabases and analytics for gamesAs audiences grow, there’s more to knowInitially built for Google's own consumer services, Google Cloud's real-time databases scale like no other. Harness, secure, and scale all your data across regions and games.“Our infrastructure needs to support hundreds of thousands of concurrent connections per second, as well as our data warehouse, and we saw that Google has the capability to handle our needs.”Jacques Erasmus, CIO, KingTop solutions for gamesSpannerMaximum scale for dataBigtableScale and speed; NoSQLPub/SubHarness game dataUnderstand your playersAnalytics and AI for gamesThe clues to gain and retain players are in your dataSuccessful games generate vast amounts of data. Google Cloud's AI and analytics solutions help you distill swaths of data into facts and insights. Top solutions for gamesBigQueryManage your game dataLookerUncover game insightsVertex AIHarness ML for gamesBolster your techOpen source for gamesThe largest games must integrate with existing tools and technologyCreate custom solutions without starting from scratch. Use Google-founded open source projects: from game-specific projects like Open Match to industry standards like Kubernetes. Top solutions for gamesKubernetesOrchestrate workloadsAgonesKubernetes for GamesOpen MatchMatchmake your playersCreate living gamesAI for gamesBring your games to life with cutting-edge generative AIIntegrate generative AI into your development pipeline with Vertex AI. Deploy AI models onto Google Kubernetes Engine to create entirely new player experiences.Top solutions for gamesVertex AITune AI modelsGoogle Kubernetes EngineOrchestrate AI workloadsGoogle Cloud MarketplaceDiscover AI SolutionsLive service games need more than serversPlayers level up, make purchases, and forge relationships. 
Games need a database that remembers everything, fast, at scale.Databases for gamesWatch how you can use Google Cloud for GamesUnderstand your 3d assetsLearn how Tencent Games used generative AI and Gemini to categorize 3d assets.Video (14:36)Generative AI for GamingCreate novel player experiences while improving productivity. Learn 3 strategies you can implement, today.Spanner handles millions of playersLearn how Spanner supports large player populations without sharding or overprovisioning.Video (18:28)Analytics and AI for games help you monetizeLearn how to build a churn prediction solution using Google Analytics and BigQuery ML for games.Video (17:50)Open source for games integrates with your techLearn how Embark Studios used open source tools to create their multiplayer backend stack.Video (22:53)View MorePowering everlasting love for the gameThe game industry's most successful game publishers, developers, and studios build on Google Cloud.Creating better, togetherGoogle Cloud collaborates with some of the game industry's most prominent enablers.Google Cloud for GamesEmpower your game with Google Cloud.Contact usGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Gated_egress.txt b/Gated_egress.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab97332fea5f343767a42cc7eecf2ae9ca2aca91 --- /dev/null +++ b/Gated_egress.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/gated-egress +Date Scraped: 2025-02-23T11:50:34.160Z + +Content: +Home Docs Cloud Architecture Center Send feedback Gated egress Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The architecture of the gated egress networking pattern is based on exposing select APIs from the on-premises environment or another cloud environment to workloads that are deployed in Google Cloud. It does so without directly exposing them to the public internet from an on-premises environment or from other cloud environments. You can facilitate this limited exposure through an API gateway or proxy, or a load balancer that serves as a facade for existing workloads. You can deploy the API gateway functionality in an isolated perimeter network segment, like a perimeter network. The gated egress networking pattern applies primarily to (but isn't limited to) tiered application architecture patterns and partitioned application architecture patterns. When deploying backend workloads within an internal network, gated egress networking helps to maintain a higher level of security within your on-premises computing environment. The pattern requires that you connect computing environments in a way that meets the following communication requirements: Workloads that you deploy in Google Cloud can communicate with the API gateway or load balancer (or a Private Service Connect endpoint) that exposes the application by using internal IP addresses. Other systems in the private computing environment can't be reached directly from within Google Cloud. Communication from the private computing environment to any workloads deployed in Google Cloud isn't allowed. Traffic to the private APIs in other environments is only initiated from within the Google Cloud environment. The focus of this guide is on hybrid and multicloud environments connected over a private hybrid network. 
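The communication requirements above are deliberately one-directional: only workloads in Google Cloud initiate traffic, and only toward the exposed API frontends. The allowlist filtering that the architecture later applies with firewall rules can be illustrated in plain Python with the standard ipaddress module; the address ranges below are hypothetical RFC 1918 examples.

```python
import ipaddress

# Hypothetical ranges: Google Cloud workload subnets (sources) and the
# on-premises API gateway or load balancer range (destinations).
ALLOWED_SOURCES = [ipaddress.ip_network("10.10.0.0/20")]
ALLOWED_API_DESTINATIONS = [ipaddress.ip_network("192.168.50.0/28")]

def egress_allowed(src: str, dst: str) -> bool:
    """Gated egress: permit only Cloud-initiated traffic to the exposed APIs."""
    src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(src_ip in net for net in ALLOWED_SOURCES) and any(
        dst_ip in net for net in ALLOWED_API_DESTINATIONS
    )

print(egress_allowed("10.10.3.7", "192.168.50.5"))   # True: Cloud -> API facade
print(egress_allowed("192.168.50.5", "10.10.3.7"))   # False: reverse direction is blocked
```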
If the security requirements of your organization permit it, API calls to remote target APIs with public IP addresses can be directly reached over the internet. But you must consider the following security mechanisms: API OAuth 2.0 with Transport Layer Security (TLS). Rate limiting. Threat protection policies. Mutual TLS configured to the backend of your API layer. IP address allowlist filtering configured to only allow communication with predefined API sources and destinations from both sides. To secure an API proxy, consider these other security aspects. For more information, see Best practices for securing your applications and APIs using Apigee. Architecture The following diagram shows a reference architecture that supports the communication requirements listed in the previous section: Data flows through the preceding diagram as follows: On the Google Cloud side, you can deploy workloads into virtual private clouds (VPCs). The VPCs can be single or multiple (shared or non-shared). The deployment should be in alignment with the projects and resource hierarchy design of your organization. The VPC networks of the Google Cloud environment are extended to the other computing environments. The environments can be on-premises or in another cloud. To facilitate the communication between environments using internal IP addresses, use a suitable hybrid and multicloud networking connectivity. To limit the traffic that originates from specific VPC IP addresses, and is destined for remote gateways or load balancers, use IP address allowlist filtering. Return traffic from these connections is allowed when using stateful firewall rules. You can use any combination of the following capabilities to secure and limit communications to only the allowed source and destination IP addresses: Firewall rules or firewall policies. Network virtual appliance (NVA) with next generation firewall (NGFW) inspection capabilities that are placed in the network path. Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention. All environments share overlap-free RFC 1918 IP address space. Variations The gated egress architecture pattern can be combined with other approaches to meet different design requirements that still consider the communication requirements of this pattern. The pattern offers the following options: Use Google Cloud API gateway and global frontend Expose remote services using Private Service Connect Use Google Cloud API gateway and global frontend With this design approach, API exposure and management reside within Google Cloud. As shown in the preceding diagram, you can accomplish this through the implementation of Apigee as the API platform. The decision to deploy an API gateway or load balancer in the remote environment depends on your specific needs and current configuration. Apigee provides two options for provisioning connectivity: With VPC peering Without VPC peering Google Cloud global frontend capabilities like Cloud Load Balancing, Cloud CDN (when accessed over Cloud Interconnect), and Cross-Cloud Interconnect enhance the speed with which users can access applications that have backends hosted in your on-premises environments and in other cloud environments. Optimizing content delivery speeds is achieved by delivering those applications from Google Cloud points of presence (PoP). Google Cloud PoPs are present on over 180 internet exchanges and at over 160 interconnection facilities around the world. 
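The architecture above also assumes that all environments share overlap-free RFC 1918 address space. A small, illustrative pre-check like the following can catch overlaps before hybrid connectivity is provisioned; it uses only the Python standard library, and the environment names and CIDR ranges are hypothetical placeholders.

# Illustrative sketch: verify that the RFC 1918 ranges used by each environment
# don't overlap before hybrid connectivity is established. All CIDRs below are
# hypothetical placeholders.
import itertools
import ipaddress

environment_ranges = {
    "google-cloud-app-vpc": ["10.10.0.0/16"],
    "on-premises": ["10.20.0.0/16", "172.16.0.0/20"],
    "other-cloud": ["10.30.0.0/16"],
}

def find_overlaps(ranges_by_env):
    """Return (env_a, cidr_a, env_b, cidr_b) tuples for any cross-environment overlap."""
    flat = [(env, ipaddress.ip_network(cidr))
            for env, cidrs in ranges_by_env.items() for cidr in cidrs]
    return [(a_env, str(a_net), b_env, str(b_net))
            for (a_env, a_net), (b_env, b_net) in itertools.combinations(flat, 2)
            if a_env != b_env and a_net.overlaps(b_net)]

overlaps = find_overlaps(environment_ranges)
print("No overlaps found" if not overlaps else f"Overlapping ranges: {overlaps}")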
To see how PoPs help to deliver high-performing APIs when using Apigee with Cloud CDN to accomplish the following, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube: Reduce latency. Host APIs globally. Increase availability for peak traffic. The design example illustrated in the preceding diagram is based on Private Service Connect without VPC peering. The northbound networking in this design is established through: A load balancer (LB in the diagram), where client requests terminate, processes the traffic and then routes it to a Private Service Connect backend. A Private Service Connect backend lets a Google Cloud load balancer send client requests over a Private Service Connect connection associated with a producer service attachment to the published service (Apigee runtime instance) using Private Service Connect network endpoint groups (NEGs). The southbound networking is established through: A Private Service Connect endpoint that references a service attachment associated with an internal load balancer (ILB in the diagram) in the customer VPC. The ILB is deployed with hybrid connectivity network endpoint groups (hybrid connectivity NEGs). Hybrid services are accessed through the hybrid connectivity NEG over hybrid network connectivity, like VPN or Cloud Interconnect. For more information, see Set up a regional internal proxy Network Load Balancer with hybrid connectivity and Private Service Connect deployment patterns. Note: Depending on your requirements, the APIs of the on-premises backends can be exposed through Apigee Hybrid, a third-party API gateway or proxy, or a load balancer. Expose remote services using Private Service Connect Use the Private Service Connect option to expose remote services for the following scenarios: You aren't using an API platform or you want to avoid connecting your entire VPC network directly to an external environment for the following reasons: You have security restrictions or compliance requirements. You have an IP address range overlap, such as in a merger and acquisition scenario. To enable secure uni-directional communications between clients, applications, and services across the environments even when you have a short deadline. You might need to provide connectivity to multiple consumer VPCs through a service-producer VPC (transit VPC) to offer highly scalable multi-tenant or single-tenant service models, to reach published services on other environments. Using Private Service Connect for applications that are consumed as APIs provides an internal IP address for the published applications, enabling secure access within the private network across regions and over hybrid connectivity. This abstraction facilitates the integration of resources from diverse clouds and on-premises environments over a hybrid and multicloud connectivity model. You can accelerate application integration and securely expose applications that reside in an on-premises environment, or another cloud environment, by using Private Service Connect to publish the service with fine-grained access. In this case, you can use the following option: A service attachment that references a regional internal proxy Network Load Balancer or an internal Application Load Balancer. The load balancer uses a hybrid network endpoint group (hybrid connectivity NEG) in a producer VPC that acts in this design as a transit VPC.
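To make the southbound path above concrete, the following illustrative Python sketch creates a hybrid connectivity NEG and registers an on-premises API endpoint in it using the Compute Engine v1 API; the internal load balancer behind the service attachment would then use this NEG as a backend. The project, zone, network, names, and the on-premises IP:port are hypothetical placeholders.

# Illustrative sketch: create a hybrid connectivity NEG and register an
# on-premises API endpoint in it. Zone, names, and the on-premises IP:port are
# hypothetical placeholders; the NEG would then be attached as a backend of the
# internal load balancer referenced by the Private Service Connect service
# attachment described above.
from googleapiclient import discovery

PROJECT = "my-project"          # hypothetical
ZONE = "us-central1-a"          # hypothetical
NETWORK = "projects/my-project/global/networks/transit-vpc"  # hypothetical

compute = discovery.build("compute", "v1")

neg_body = {
    "name": "onprem-api-hybrid-neg",
    "networkEndpointType": "NON_GCP_PRIVATE_IP_PORT",  # hybrid connectivity NEG
    "network": NETWORK,
    "defaultPort": 443,
}
compute.networkEndpointGroups().insert(
    project=PROJECT, zone=ZONE, body=neg_body).execute()

# Register the on-premises endpoint that is reachable over VPN or Cloud Interconnect.
attach_body = {"networkEndpoints": [{"ipAddress": "10.20.0.10", "port": 443}]}
compute.networkEndpointGroups().attachNetworkEndpoints(
    project=PROJECT, zone=ZONE,
    networkEndpointGroup="onprem-api-hybrid-neg", body=attach_body).execute()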
In the preceding diagram, the workloads in the VPC network of your application can reach the hybrid services running in your on-premises environment, or in other cloud environments, through the Private Service Connect endpoint, as illustrated in the following diagram. This design option for uni-directional communications provides an alternative to peering with a transit VPC. As part of the design in the preceding diagram, multiple frontends, backends, or endpoints can connect to the same service attachment, which lets multiple VPC networks or multiple consumers access the same service. As illustrated in the following diagram, you can make the application accessible to multiple VPCs. This accessibility can help in multi-tenant services scenarios where your service is consumed by multiple consumer VPCs even if their IP address ranges overlap. IP address overlap is one of the most common issues when integrating applications that reside in different environments. The Private Service Connect connection in the following diagram helps to avoid the IP address overlap issue. It does so without requiring you to provision or manage any additional networking components, like Cloud NAT or an NVA, to perform the IP address translation. For an example configuration, see Publish a hybrid service by using Private Service Connect. The design has the following advantages: Avoids potential shared scaling dependencies and complex manageability at scale. Improves security by providing fine-grained connectivity control. Reduces IP address coordination between the producer and consumer of the service and the remote external environment. The design approach in the preceding diagram can expand at later stages to integrate Apigee as the API platform by using the networking design options discussed earlier, including the Private Service Connect option. You can make the Private Service Connect endpoint accessible from other regions by using Private Service Connect global access. The client connecting to the Private Service Connect endpoint can be in the same region as the endpoint or in a different region. This approach might be used to provide high availability across services hosted in multiple regions, or to access services available in a single region from other regions. When a Private Service Connect endpoint is accessed by resources hosted in other regions, inter-regional outbound charges apply to the traffic destined to endpoints with global access. Note: To achieve distributed health checks and to facilitate connecting multiple VPCs to on-premises environments over multiple hybrid connections, chain an internal Application Load Balancer with an external Application Load Balancer. For more information, see Explicit Chaining of Google Cloud L7 Load Balancers with PSC. Best practices Considering Apigee and Apigee Hybrid as your API platform solution offers several benefits. It provides a proxy layer, and an abstraction or facade, for your backend service APIs combined with security capabilities, rate limiting, quotas, and analytics. Use Apigee Adapter for Envoy with an Apigee Hybrid deployment with Kubernetes architecture where applicable to your requirements and the architecture. VPCs and project design in Google Cloud should be driven by your resource hierarchy and your secure communication model requirements. When APIs with API gateways are used, you should also use an IP address allowlist.
An allowlist limits communications to the specific IP address sources and destinations of the API consumers and API gateways that might be hosted in different environments. Use VPC firewall rules or firewall policies to control access to Private Service Connect resources through the Private Service Connect endpoint. If an application is exposed externally through an application load balancer, consider using Google Cloud Armor as an extra layer of security to protect against DDoS and application layer security threats. If instances require internet access, use Cloud NAT in the application (consumer) VPC to allow workloads to access the internet. Doing so lets you avoid assigning VM instances with external public IP addresses in systems that are deployed behind an API gateway or a load balancer. For outbound web traffic, you can use Google Cloud Secure Web Proxy. The proxy offers several benefits. Review the general best practices for hybrid and multicloud networking patterns. Previous arrow_back Gated patterns Next Gated ingress arrow_forward Send feedback \ No newline at end of file diff --git a/Gated_egress_and_gated_ingress.txt b/Gated_egress_and_gated_ingress.txt new file mode 100644 index 0000000000000000000000000000000000000000..17fee7b0b73a391395cfa7c3e72fef41d316e069 --- /dev/null +++ b/Gated_egress_and_gated_ingress.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/gated-egress-ingress +Date Scraped: 2025-02-23T11:50:38.519Z + +Content: +Home Docs Cloud Architecture Center Send feedback Gated egress and gated ingress Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The gated egress and gated ingress pattern uses a combination of gated egress and gated ingress for scenarios that demand bidirectional usage of selected APIs between workloads. Workloads can run in Google Cloud, in private on-premises environments, or in other cloud environments. In this pattern, you can use API gateways, Private Service Connect endpoints, or load balancers to expose specific APIs and optionally provide authentication, authorization, and API call audits. The key distinction between this pattern and the meshed pattern lies in its application to scenarios that solely require bidirectional API usage or communication with specific IP address sources and destinations—for example, an application published through a Private Service Connect endpoint. Because communication is restricted to the exposed APIs or specific IP addresses, the networks across the environments don't need to align in your design. Common applicable scenarios include, but aren't limited to, the following: Mergers and acquisitions. Application integrations with partners. Integrations between applications and services of an organization with different organizational units that manage their own applications and host them in different environments. The communication works as follows: Workloads that you deploy in Google Cloud can communicate with the API gateway (or specific destination IP addresses) by using internal IP addresses. Other systems deployed in the private computing environment can't be reached. Conversely, workloads that you deploy in other computing environments can communicate with the Google Cloud-side API gateway (or a specific published endpoint IP address) by using internal IP addresses. Other systems deployed in Google Cloud can't be reached. 
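For the "application published through a Private Service Connect endpoint" case mentioned above, the consumer side boils down to reserving an internal IP address and creating a forwarding rule that targets the producer's service attachment. The following Python sketch shows that flow with the Compute Engine v1 API; all names, the region, and the service attachment URI are hypothetical, and the exact field values (for example, the empty loadBalancingScheme) should be checked against the current Private Service Connect reference before use.

# Illustrative sketch: create a Private Service Connect endpoint (a forwarding
# rule that targets a producer's service attachment) so that workloads in this
# VPC reach the published API at an internal IP address. Project, region,
# subnet, address, and service attachment URI are hypothetical placeholders.
from googleapiclient import discovery

PROJECT = "consumer-project"     # hypothetical
REGION = "us-central1"           # hypothetical
NETWORK = "projects/consumer-project/global/networks/app-vpc"                     # hypothetical
SUBNET = "projects/consumer-project/regions/us-central1/subnetworks/app-subnet"   # hypothetical
SERVICE_ATTACHMENT = ("projects/producer-project/regions/us-central1/"
                      "serviceAttachments/partner-api")                            # hypothetical

compute = discovery.build("compute", "v1")

# Reserve an internal IP address for the endpoint.
compute.addresses().insert(project=PROJECT, region=REGION, body={
    "name": "partner-api-psc-ip",
    "addressType": "INTERNAL",
    "subnetwork": SUBNET,
}).execute()

# Create the endpoint itself.
compute.forwardingRules().insert(project=PROJECT, region=REGION, body={
    "name": "partner-api-psc-endpoint",
    "network": NETWORK,
    "IPAddress": "projects/consumer-project/regions/us-central1/addresses/partner-api-psc-ip",
    "target": SERVICE_ATTACHMENT,
    "loadBalancingScheme": "",   # assumed: an empty scheme is used for PSC consumer endpoints
}).execute()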
Architecture The following diagram shows a reference architecture for the gated egress and gated ingress pattern: The design approach in the preceding diagram has the following elements: On the Google Cloud side, you deploy workloads in a VPC (or shared VPC) without exposing them directly to the internet. The Google Cloud environment network is extended to other computing environments. That environment can be on-premises or on another cloud. To extend the environment, use a suitable hybrid and multicloud connectivity communication pattern to facilitate the communication between environments so they can use internal IP addresses. Optionally, by enabling access to specific target IP addresses, you can use a transit VPC to help add a perimeter security layer outside of your application VPC. You can use Cloud Next Generation Firewall or network virtual appliances (NVAs) with next generation firewalls (NGFWs) at the transit VPC to inspect traffic and to allow or prohibit access to certain APIs from specific sources before reaching your application VPC. APIs should be accessed through an API gateway or a load balancer to provide a proxy layer, and an abstraction or facade for your service APIs. For applications consumed as APIs, you can also use Private Service Connect to provide an internal IP address for the published application. All environments use overlap-free RFC 1918 IP address space. A common application of this pattern involves deploying application backends (or a subset of application backends) in Google Cloud while hosting other backend and frontend components in on-premises environments or in other clouds (tiered hybrid pattern or partitioned multicloud pattern). As applications evolve and migrate to the cloud, dependencies and preferences for specific cloud services often emerge. Sometimes these dependencies and preferences lead to the distribution of applications and backends across different cloud providers. Also, some applications might be built with a combination of resources and services distributed across on-premises environments and multiple cloud environments. For distributed applications, the capabilities of external Cloud Load Balancing hybrid and multicloud connectivity can be used to terminate user requests and route them to frontends or backends in other environments. This routing occurs over a hybrid network connection, as illustrated in the following diagram. This integration enables the gradual distribution of application components across different environments. Requests from the frontend to backend services hosted in Google Cloud communicate securely over the established hybrid network connection facilitated by an internal load balancer (ILB in the diagram). Using the Google Cloud design in the preceding diagram helps with the following: Facilitates two-way communication between Google Cloud, on-premises, and other cloud environments using predefined APIs on both sides that align with the communication model of this pattern. To provide global frontends for internet-facing applications with distributed application components (frontends or backends), and to accomplish the following goals, you can use the advanced load balancing and security capabilities of Google Cloud distributed at points of presence (PoPs): Reduce capital expenses and simplify operations by using serverless managed services. Optimize connections to application backends globally for speed and latency. 
Google Cloud Cross-Cloud Network enables multicloud communication between application components over optimal private connections. Cache high demand static content and improve application performance for applications using global Cloud Load Balancing by providing access to Cloud CDN. Secure the global frontends of the internet facing applications by using Google Cloud Armor capabilities that provide globally distributed web application firewall (WAF) and DDoS mitigation services. Optionally, you can incorporate Private Service Connect into your design. Doing so enables private, fine-grained access to Google Cloud service APIs or your published services from other environments without traversing the public internet. Variations The gated egress and gated ingress architecture patterns can be combined with other approaches to meet different design requirements, while still considering the communication requirements of this pattern. The patterns offer the following options: Distributed API gateways Bidirectional API communication using Private Service Connect Bidirectional communication using Private Service Connect endpoints and interfaces Distributed API gateways In scenarios like the one based on the partitioned multicloud pattern, applications (or application components) can be built in different cloud environments—including a private on-premises environment. The common requirement is to route client requests to the application frontend directly to the environment where the application (or the frontend component) is hosted. This kind of communication requires a local load balancer or an API gateway. These applications and their components might also require specific API platform capabilities for integration. The following diagram illustrates how Apigee and Apigee Hybrid are designed to address such requirements with a localized API gateway in each environment. API platform management is centralized in Google Cloud. This design helps to enforce strict access control measures where only pre-approved IP addresses (target and destination APIs or Private Service Connect endpoint IP addresses) can communicate between Google Cloud and the other environments. The following list describes the two distinct communication paths in the preceding diagram that use Apigee API gateway: Client requests arrive at the application frontend directly in the environment that hosts the application (or the frontend component). API gateways and proxies within each environment handle client and application API requests in different directions across multiple environments. The API gateway functionality in Google Cloud (Apigee) exposes the application (frontend or backend) components that are hosted in Google Cloud. The API gateway functionality in another environment (Hybrid) exposes the application frontend (or backend) components that are hosted in that environment. Optionally, you can consider using a transit VPC. A transit VPC can provide flexibility to separate concerns and to perform security inspection and hybrid connectivity in a separate VPC network. From an IP address reachability standpoint, a transit VPC (where the hybrid connectivity is attached) facilitates the following requirements to maintain end-to-end reachability: The IP addresses for target APIs need to be advertised to the other environments where clients/requesters are hosted. 
The IP addresses for the hosts that need to communicate with the target APIs have to be advertised to the environment where the target API resides—for example, the IP addresses of the API requester (the client). The exception is when communication occurs through a load balancer, proxy, Private Service Connect endpoint, or NAT instance. To extend connectivity to the remote environment, this design uses direct VPC peering with custom route exchange capability. The design lets specific API requests that originate from workloads hosted within the Google Cloud application VPC route through the transit VPC. Alternatively, you can use a Private Service Connect endpoint in the application VPC that's associated with a load balancer with a hybrid network endpoint group backend in the transit VPC. That setup is described in the next section: Bidirectional API communication using Private Service Connect. Bidirectional API communication using Private Service Connect Sometimes, enterprises might not need to use an API gateway (like Apigee) immediately, or might want to add it later. However, there might be business requirements to enable communication and integration between certain applications in different environments. For example, if your company acquired another company, you might need to expose certain applications to that company, and that company might need to expose applications to yours. Each company might have its own workloads hosted in different environments (Google Cloud, on-premises, or in other clouds), and both must avoid IP address overlap. In such cases, you can use Private Service Connect to facilitate effective communication. For applications consumed as APIs, you can also use Private Service Connect to provide a private address for the published applications, enabling secure access within the private network across regions and over hybrid connectivity. This abstraction facilitates the integration of resources from diverse clouds and on-premises environments over a hybrid and multicloud connectivity model. It also enables the assembly of applications across multicloud and on-premises environments. This can satisfy different communication requirements, like integrating secure applications where an API gateway isn't used or isn't planned to be used. By using Private Service Connect with Cloud Load Balancing, as shown in the following diagram, you can achieve two distinct communication paths. Each path is initiated from a different direction for a separate connectivity purpose, ideally through API calls. All the design considerations and recommendations of Private Service Connect discussed in this guide apply to this design. If additional Layer 7 inspection is required, you can integrate NVAs with this design (at the transit VPC). This design can be used with or without API gateways. The two connectivity paths depicted in the preceding diagram represent independent connections and don't illustrate two-way communication of a single connection or flow. Bidirectional communication using Private Service Connect endpoints and interfaces As discussed in the gated ingress pattern, one of the options to enable client-service communication is by using a Private Service Connect endpoint to expose a service in a producer VPC to a consumer VPC. That connectivity can be extended to an on-premises environment or even another cloud provider environment over hybrid connectivity. However, in some scenarios, the hosted service can also require private communication.
To access a certain service, like retrieving data from data sources that can be hosted within the consumer VPC or outside it, this private communication can be between the application (producer) VPC and a remote environment, such as an on-premises environment. In such a scenario, Private Service Connect interfaces enable a service producer VM instance to access a consumer's network. It does so by sharing a network interface, while still maintaining the separation of producer and consumer roles. With this network interface in the consumer VPC, the application VM can access consumer resources as if they resided locally in the producer VPC. A Private Service Connect interface is a network interface attached to the consumer (transit) VPC. It's possible to reach external destinations that are reachable from the consumer (transit) VPC where the Private Service Connect interface is attached. Therefore, this connection can be extended over hybrid connectivity to an external environment, such as an on-premises environment, as illustrated in the following diagram: If the consumer VPC is an external organization or entity, like a third-party organization, typically you won't have the ability to secure the communication to the Private Service Connect interface in the consumer VPC. In such a scenario, you can define security policies in the guest OS of the Private Service Connect interface VM. For more information, see Configure security for Private Service Connect interfaces. Or, you might consider an alternative approach if this one doesn't comply with the security standards or compliance requirements of your organization. Best practices For situations where client requests from the internet need to be received locally by a frontend hosted in a private on-premises or other cloud environment, consider using Apigee Hybrid as an API gateway solution. This approach also facilitates a migration of the solution to a fully Google Cloud-hosted environment while maintaining the consistency of the API platform (Apigee). To minimize latency and optimize costs for high volumes of outbound data transfers to your other environments when those environments are in long-term or permanent hybrid or multicloud setups, consider the following: Use Cloud Interconnect or Cross-Cloud Interconnect. To terminate user connections at the targeted frontend in the appropriate environment, use Apigee Hybrid. Where applicable to your requirements and the architecture, use Apigee Adapter for Envoy with an Apigee Hybrid deployment with Kubernetes. Before designing the connectivity and routing paths, you first need to identify what traffic or API requests need to be directed to a local or remote API gateway, along with the source and destination environments. Use VPC Service Controls to protect Google Cloud services in your projects and to mitigate the risk of data exfiltration, by specifying service perimeters at the project or VPC network level. You can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls. Use Virtual Private Cloud (VPC) firewall rules or firewall policies to control network-level access to Private Service Connect resources through the Private Service Connect endpoint. For example, outbound firewall rules at the application (consumer) VPC can restrict access from VM instances to the IP address or subnet of your endpoints.
When using a Private Service Connect interface, you must protect the communication to the interface by configuring security for the Private Service Connect interface. If a workload in a private subnet requires internet access, use Cloud NAT to avoid assigning an external IP address to the workload and exposing it to the public internet. For outbound web traffic, use Secure Web Proxy. The proxy offers several benefits. Review the general best practices for hybrid and multicloud networking patterns. Previous arrow_back Gated ingress Next Handover pattern arrow_forward Send feedback \ No newline at end of file diff --git a/Gated_ingress.txt b/Gated_ingress.txt new file mode 100644 index 0000000000000000000000000000000000000000..d226cd3f9fb787e2850ff7f1b2594e2d9d0990ce --- /dev/null +++ b/Gated_ingress.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/gated-ingress +Date Scraped: 2025-02-23T11:50:36.268Z + +Content: +Home Docs Cloud Architecture Center Send feedback Gated ingress Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The architecture of the gated ingress pattern is based on exposing select APIs of workloads running in Google Cloud to the private computing environment without exposing them to the public internet. This pattern is the counterpart to the gated egress pattern and is well suited for edge hybrid, tiered hybrid, and partitioned multicloud scenarios. As with the gated egress pattern, you can facilitate this limited exposure through an API gateway or load balancer that serves as a facade for existing workloads or services. Doing so makes it accessible to private computing environments, on-premises environments, or other cloud environments, as follows: Workloads that you deploy in the private computing environment or other cloud environments are able to communicate with the API gateway or load balancer by using internal IP addresses. Other systems deployed in Google Cloud can't be reached. Communication from Google Cloud to the private computing environment or to other cloud environments isn't allowed. Traffic is only initiated from the private environment or other cloud environments to the APIs in Google Cloud. Architecture The following diagram shows a reference architecture that meets the requirements of the gated ingress pattern. The description of the architecture in the preceding diagram is as follows: On the Google Cloud side, you deploy workloads into an application VPC (or multiple VPCs). The Google Cloud environment network extends to other computing environments (on-premises or on another cloud) by using hybrid or multicloud network connectivity to facilitate the communication between environments. Optionally, you can use a transit VPC to accomplish the following: Provide additional perimeter security layers to allow access to specific APIs outside of your application VPC. Route traffic to the IP addresses of the APIs. You can create VPC firewall rules to prevent some sources from accessing certain APIs through an endpoint. Inspect Layer 7 traffic at the transit VPC by integrating a network virtual appliance (NVA). Access APIs through an API gateway or a load balancer (proxy or application load balancer) to provide a proxy layer, and an abstraction layer or facade for your service APIs. If you need to distribute traffic across multiple API gateway instances, you could use an internal passthrough Network Load Balancer.
Provide limited and fine-grained access to a published service through a Private Service Connect endpoint, by using Private Service Connect with a load balancer to expose the application or service. All environments should use an overlap-free RFC 1918 IP address space. The following diagram illustrates the design of this pattern using Apigee as the API platform. In the preceding diagram, using Apigee as the API platform provides the following features and capabilities to enable the gated ingress pattern: Gateway or proxy functionality Security capabilities Rate limiting Analytics In the design: The northbound networking connectivity (for traffic coming from other environments) passes through a Private Service Connect endpoint in your application VPC that's associated with the Apigee VPC. At the application VPC, an internal load balancer is used to expose the application APIs through a Private Service Connect endpoint presented in the Apigee VPC. For more information, see Architecture with VPC peering disabled. Configure firewall rules and traffic filtering at the application VPC. Doing so provides fine-grained and controlled access. It also helps stop systems from directly reaching your applications without passing through the Private Service Connect endpoint and API gateway. Also, you can restrict the advertisement of the internal IP address subnet of the backend workload in the application VPC to the on-premises network to avoid direct reachability without passing through the Private Service Connect endpoint and the API gateway. Certain security requirements might require perimeter security inspection outside the application VPC, including inspection of hybrid connectivity traffic. In such cases, you can incorporate a transit VPC to implement additional security layers. These layers, like NVAs with next generation firewall (NGFW) capabilities and multiple network interfaces, or Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS), perform deep packet inspection outside of your application VPC, as illustrated in the following diagram: As illustrated in the preceding diagram: The northbound networking connectivity (for traffic coming from other environments) passes through a separate transit VPC toward the Private Service Connect endpoint in the transit VPC that's associated with the Apigee VPC. At the application VPC, an internal load balancer (ILB in the diagram) is used to expose the application through a Private Service Connect endpoint in the Apigee VPC. You can provision several endpoints in the same VPC network, as shown in the following diagram. To cover different use cases, you can control the different possible network paths using Cloud Router and VPC firewall rules. For example, if you're connecting your on-premises network to Google Cloud using multiple hybrid networking connections, you could send some traffic from on-premises to specific Google APIs or published services over one connection and the rest over another connection. Also, you can use Private Service Connect global access to provide failover options. Variations The gated ingress architecture pattern can be combined with other approaches to meet different design requirements, while still considering the communication requirements of the pattern.
The pattern offers the following options: Access Google APIs from other environments Expose application backends to other environments using Private Service Connect Use a hub and spoke architecture to expose application backends to other environments Access Google APIs from other environments For scenarios requiring access to Google services, like Cloud Storage or BigQuery, without sending traffic over the public internet, Private Service Connect offers a solution. As shown in the following diagram, it enables reachability to the supported Google APIs and services (including Google Maps, Google Ads, and Google Cloud) from on-premises or other cloud environments through a hybrid network connection using the IP address of the Private Service Connect endpoint. For more information about accessing Google APIs through Private Service Connect endpoints, see About accessing Google APIs through endpoints. In the preceding diagram, your on-premises network must be connected to the transit (consumer) VPC network using either Cloud VPN tunnels or a Cloud Interconnect VLAN attachment. Google APIs can be accessed by using endpoints or backends. Endpoints let you target a bundle of Google APIs. Backends let you target a specific regional Google API. Note: Private Service Connect endpoints are registered with Service Directory for Google APIs where you can store, manage, and publish services. Expose application backends to other environments using Private Service Connect In specific scenarios, as highlighted by the tiered hybrid pattern, you might need to deploy backends in Google Cloud while maintaining frontends in private computing environments. While less common, this approach is applicable when dealing with heavyweight, monolithic frontends that might rely on legacy components. Or, more commonly, when managing distributed applications across multiple environments, including on-premises and other clouds, that require connectivity to backends hosted in Google Cloud over a hybrid network. In such an architecture, you can use a local API gateway or load balancer in the private on-premises environment, or other cloud environments, to directly expose the application frontend to the public internet. Using Private Service Connect in Google Cloud facilitates private connectivity to the backends that are exposed through a Private Service Connect endpoint, ideally using predefined APIs, as illustrated in the following diagram: The design in the preceding diagram uses an Apigee Hybrid deployment consisting of a management plane in Google Cloud and a runtime plane hosted in your other environment. You can install and manage the runtime plane on a distributed API gateway on one of the supported Kubernetes platforms in your on-premises environment or in other cloud environments. Based on your requirements for distributed workloads across Google Cloud and other environments, you can use Apigee on Google Cloud with Apigee Hybrid. For more information, see Distributed API gateways. Use a hub and spoke architecture to expose application backends to other environments Exposing APIs from application backends hosted in Google Cloud across different VPC networks might be required in certain scenarios. As illustrated in the following diagram, a hub VPC serves as a central point of interconnection for the various VPCs (spokes), enabling secure communication over private hybrid connectivity. 
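For the "Access Google APIs from other environments" variation described above, the change on the client side is typically just the endpoint that the client library talks to. The following illustrative Python sketch points the Cloud Storage client at a private DNS name that is assumed to resolve to the Private Service Connect endpoint's internal IP over the hybrid connection (for example, a record in a p.googleapis.com private zone); the hostname and bucket name are hypothetical placeholders.

# Illustrative sketch: an on-premises client reaching Cloud Storage through a
# Private Service Connect endpoint instead of the public googleapis.com front
# end. The hostname below is an assumed private DNS record that resolves to the
# endpoint's internal IP over the hybrid connection; the bucket name is also a
# placeholder.
from google.cloud import storage

client = storage.Client(
    client_options={"api_endpoint": "https://storage-examplepsc.p.googleapis.com"}
)

bucket = client.bucket("example-analytics-bucket")
for blob in bucket.list_blobs(max_results=5):
    print(blob.name)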
Optionally, local API gateway capabilities in other environments, such as Apigee Hybrid, can be used to terminate client requests locally where the application frontend is hosted. As illustrated in the preceding diagram: To provide additional NGFW Layer 7 inspection abilities, the NVA with NGFW capabilities is optionally integrated with the design. You might require these abilities to comply with specific security requirements and the security policy standards of your organization. This design assumes that spoke VPCs don't require direct VPC to VPC communication. If spoke-to-spoke communication is required, you can use the NVA to facilitate such communication. If you have different backends in different VPCs, you can use Private Service Connect to expose these backends to the Apigee VPC. If VPC peering is used for the northbound and southbound connectivity between spoke VPCs and hub VPC, you need to consider the transitivity limitation of VPC networking over VPC peering. To overcome this limitation, you can use any of the following options: To interconnect the VPCs, use an NVA. Where applicable, consider the Private Service Connect model. To establish connectivity between the Apigee VPC and backends that are located in other Google Cloud projects in the same organization without additional networking components, use Shared VPC. If NVAs are required for traffic inspection—including traffic from your other environments—the hybrid connectivity to on-premises or other cloud environments should be terminated on the hybrid-transit VPC. If the design doesn't include the NVA, you can terminate the hybrid connectivity at the hub VPC. If certain load-balancing functionalities or security capabilities are required, like adding Google Cloud Armor DDoS protection or WAF, you can optionally deploy an external Application Load Balancer at the perimeter through an external VPC before routing external client requests to the backends. Best practices For situations where client requests from the internet need to be received locally by a frontend hosted in a private on-premises or other cloud environment, consider using Apigee Hybrid as an API gateway solution. This approach also facilitates a seamless migration of the solution to a completely Google Cloud-hosted environment while maintaining the consistency of the API platform (Apigee). Use Apigee Adapter for Envoy with an Apigee Hybrid deployment with Kubernetes architecture where applicable to your requirements and the architecture. The design of VPCs and projects in Google Cloud should follow the resource hierarchy and secure communication model requirements, as described in this guide. Incorporating a transit VPC into this design provides the flexibility to provision additional perimeter security measures and hybrid connectivity outside the workload VPC. Use Private Service Connect to access Google APIs and services from on-premises environments or other cloud environments using the internal IP address of the endpoint over a hybrid connectivity network. For more information, see Access the endpoint from on-premises hosts. To help protect Google Cloud services in your projects and help mitigate the risk of data exfiltration, use VPC Service Controls to specify service perimeters at the project or VPC network level. When needed, you can extend service perimeters to a hybrid environment over a VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls. 
Use VPC firewall rules or firewall policies to control network-level access to Private Service Connect resources through the Private Service Connect endpoint. For example, outbound firewall rules at the application (consumer) VPC can restrict access from VM instances to the IP address or subnet of your endpoints. For more information about VPC firewall rules in general, see VPC firewall rules. When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by your NVA vendor. To strengthen perimeter security and secure your API gateway that's deployed in the respective environment, you can optionally implement load balancing and web application firewall mechanisms in your other computing environment (hybrid or other cloud). Implement these options at the perimeter network that's directly connected to the internet. If instances require internet access, use Cloud NAT in the application VPC to allow workloads to access the internet. Doing so lets you avoid assigning VM instances with external public IP addresses in systems that are deployed behind an API gateway or a load balancer. For outbound web traffic, use Secure Web Proxy. The proxy offers several benefits. Review the general best practices for hybrid and multicloud networking patterns. Previous arrow_back Gated egress Next Gated egress and gated ingress arrow_forward Send feedback \ No newline at end of file diff --git a/Gated_patterns.txt b/Gated_patterns.txt new file mode 100644 index 0000000000000000000000000000000000000000..baeeccb44df687ba857469ead2bde22bd938fae6 --- /dev/null +++ b/Gated_patterns.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/gated-patterns +Date Scraped: 2025-02-23T11:50:32.072Z + +Content: +Home Docs Cloud Architecture Center Send feedback Gated patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The gated pattern is based on an architecture that exposes select applications and services in a fine-grained manner, based on specific exposed APIs or endpoints between the different environments. This guide categorizes this pattern into three possible options, each determined by the specific communication model: Gated egress Gated ingress Gated egress and ingress (bidirectional gated in both directions) As previously mentioned in this guide, the networking architecture patterns described here can be adapted to various applications with diverse requirements. To address the specific needs of different applications, your main landing zone architecture might incorporate one pattern or a combination of patterns simultaneously. The specific deployment of the selected architecture is determined by the specific communication requirements of each gated pattern. Note: In general, the gated pattern can be applied or incorporated with the landing zone design option that exposes the services in a consumer-producer model. This series discusses each gated pattern and its possible design options. However, one common design option applicable to all gated patterns is the Zero Trust Distributed Architecture for containerized applications with microservice architecture. 
This option is powered by Cloud Service Mesh, Apigee, and Apigee Adapter for Envoy—a lightweight Apigee gateway deployment within a Kubernetes cluster. Envoy is a popular, open source edge and service proxy that's designed for cloud-first applications. This architecture controls allowed secure service-to-service communications and the direction of communication at a service level. Traffic communication policies can be designed, fine-tuned, and applied at the service level based on the selected pattern. Gated patterns allow for the implementation of Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to perform deep packet inspection for threat prevention without any design or routing modifications. That inspection is subject to the specific applications being accessed, the communication model, and the security requirements. If security requirements demand Layer 7 and deep packet inspection with advanced firewalling mechanisms that surpass the capabilities of Cloud Next Generation Firewall, you can use a centralized next generation firewall (NGFW) hosted in a network virtual appliance (NVA). Several Google Cloud security partners offer NGFW appliances that can meet your security requirements. Integrating NVAs with these gated patterns can require introducing multiple security zones within the network design, each with distinct access control levels. Previous arrow_back Meshed pattern Next Gated egress arrow_forward Send feedback \ No newline at end of file diff --git a/Gemini.txt b/Gemini.txt new file mode 100644 index 0000000000000000000000000000000000000000..5872b2f6d4ea98d536af18748bd921217e7ee25d --- /dev/null +++ b/Gemini.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/ai/gemini +Date Scraped: 2025-02-23T12:01:48.755Z + +Content: +The Gemini era for developers and businessesGemini's ecosystem of products and models can help developers and businesses get the most out of Google AI, from building with Gemini models to using Gemini as your AI assistant.Try Gemini 2.0 models—the latest and most advanced multimodal models from Google. See what you can build with up to a 2M token context window.6:35Learn how customers are using Google AI today to reshape their businesses.BUILD WITH GEMINI MODELS:Google AI StudioExperiment, prototype, and deploy. Google AI Studio is the fast path for developers, students, and researchers who want to try Gemini models and get started building with the Gemini Developer API.Vertex AITo build AI agents and integrate generative AI into your applications, Google Cloud offers Vertex AI, a single, fully managed, unified development platform for using Gemini models and other third-party models at scale.USE GEMINI AS YOUR AI ASSISTANT:Gemini for Google CloudYour always-on assistant for building or monitoring anything built on Google Cloud, Gemini for Google Cloud helps you code more efficiently, gain deeper data insights, navigate security challenges, and more.Gemini for Google WorkspaceYour AI-powered assistant built right into Gmail, Docs, Slides, Sheets, and more, to help boost your productivity and creativity.Gemini offering Use caseResources Gemini Developer APIDiscover the power of Gemini with Google AI Studio and start buildingStart using Gemini with Google AI Studio, a web-based tool that lets you prototype and run prompts right in your browser.
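As a concrete illustration of the Gemini Developer API path described above, the following minimal Python sketch uses the google-generativeai SDK with an API key created in Google AI Studio; the environment variable name and the model ID are assumptions and may need to be adjusted to whatever is currently available.

# Illustrative sketch: a minimal Gemini Developer API call with the
# google-generativeai Python SDK. Assumes an API key created in Google AI
# Studio is exported as GEMINI_API_KEY; the model name is an example and may
# differ from what is currently offered.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Explain the difference between Google AI Studio and Vertex AI in two sentences.")
print(response.text)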
Google AI Studio is the fast path for developers, students, and researchers who want to try Gemini models and get started building with the Gemini Developer API.Try Gemini 2.0 FlashGet your no cost Gemini API keyGemini Developer API quickstartGemini API in Vertex AI Access, customize, and deploy Gemini models at scale with Vertex AIEnterprise customers and developers can experience Google's largest and most capable AI models through the Gemini API in Vertex AI. Vertex AI makes it possible to discover, customize, tune, and deploy Gemini models at scale, empowering developers to build new and differentiated applications that can process information across text, code, images, and video.Guide to get started using the Vertex AI Gemini APIView examples of prompts and responses using Gemini modelsGemini for Google CloudAccelerate software delivery with Gemini Code AssistOur AI assistant for developers. Gemini Code Assist offers AI-powered assistance to help developers build applications with higher velocity and quality. Code assistance is available in many popular IDEs, such as Visual Studio Code, JetBrains IDEs, Cloud Workstations, Cloud Shell Editor, and supports 20+ programming languages. Set up Gemini Code Assist for a cloud project Use Gemini Code Assist in your IDEEfficiently manage your application lifecycle with Gemini Cloud AssistGemini Cloud Assist (in private preview) helps cloud teams design, operate, and optimize their application lifecycle. Gemini’s contextual and personalized AI guidance understands your Google Cloud resources to help you craft new designs, deploy workloads, manage applications, troubleshoot issues, and optimize performance and costs.Learn how Gemini supports observability needs Elevate security expertise with Gemini in SecurityGemini in Security provides generative AI-powered assistance to cloud defenders where and when they need it. Gemini in Security Operations and Gemini in Threat Intelligence are now generally available, while Gemini in Security Command Center is currently in preview.Set up Gemini in Security Command CenterLearn about our vision for AI-powered securityAccelerate analytics and workflows with Gemini in BigQueryGemini in BigQuery (in preview) provides AI-powered assistive and collaboration features including code assist, visual data preparation, and intelligent recommendations that help enhance productivity and optimize costs.Set up Gemini in BigQueryIntroduction to Gemini in BigQueryAutomate insights with Gemini in LookerGemini in Looker provides an always-on intelligent assistant that enables you to have conversations with your data and helps you automatically create and share your insights even faster, with automated reports and visualizations.Introducing Gemini in LookerSupercharge database development and management with Gemini in DatabasesGemini in Databases is an AI-powered database assistant that helps you optimize your database fleet and work with the data in your databases. Gemini in Databases helps simplify all aspects of database operations, including programming, performance optimization, fleet management, governance, and migrations.Set up Gemini in DatabasesFeatures of Gemini in DatabasesGemini for Google Workspace Gemini is your always-on AI assistant across Google Workspace appsGemini for Google Workspace is a powerful collaborative partner that can act as a coach, thought partner, source of inspiration, and productivity booster—all while ensuring every user and organization has control over their data.
Sign up for your 14-day free trial of Gemini for Google WorkspaceStart your AI journey today Try Google Cloud AI and machine learning products in the console.Go to my console Have a large project? Contact salesContinue browsing See all AI/ML productsWork with a trusted partnerFind a partner \ No newline at end of file diff --git a/Gemini_Code_Assist.txt b/Gemini_Code_Assist.txt new file mode 100644 index 0000000000000000000000000000000000000000..b65878fb7cad137b8aa454207623720e8f6e8210 --- /dev/null +++ b/Gemini_Code_Assist.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/gemini/code-assist +Date Scraped: 2025-02-23T12:04:37.817Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.Gemini Code AssistAI-assisted application developmentIncrease software development and delivery velocity using generative AI assistance, with enterprise security and privacy protection.Try it nowContact salesWant to build with Gemini models? Try Gemini API in Vertex AIProduct highlightsAI-powered code assistance and natural language chatCode customization with private codebasesAvailable in multiple IDEs like VS Code and JetBrains IDEsEnterprise security and data privacyGet to know Gemini Code Assist Enterprise0:50FeaturesAI code assistanceGemini Code Assist completes your code as you write, and generates whole code blocks or functions on demand.
Code assistance is available in many popular IDEs, such as Visual Studio Code, JetBrains IDEs (IntelliJ, PyCharm, GoLand, WebStorm, and more), Cloud Workstations, and Cloud Shell Editor, and supports 20+ programming languages, including Java, JavaScript, Python, C, C++, Go, PHP, and SQL.Natural language chatThrough a natural language chat interface, you can quickly chat with Gemini Code Assist to get answers to your coding questions, or receive guidance on coding best practices. Chat is available in all supported IDEs.Google Cloud Skills BoostGemini for Application Developers learning courseCode customizationCustomize Gemini Code Assist using your organization’s private codebases for more tailored assistance. Your developers can get code suggestions more pertinent to your private codebases.Watch demo: code customization with private codebasesLocal codebase awarenessGemini Code Assist generates code that’s more relevant to your application by grounding responses with context from your local codebase and current development session. Perform large-scale changes to your codebase, including adding new features, updating cross-file dependencies, helping with version upgrades, comprehensive code reviews, and more. This capability is powered by Google’s Gemini 1.5 Pro model.Watch Demo: code transformation with full codebase awarenessCode transformationGemini Code Assist comes with contextual smart actions and smart commands, quick shortcuts to automate tasks such as fixing code errors, code generation, and code explanation. You can also just select your code and use natural language to quickly take action on the code selected. Because these smart actions and commands are available right in the IDE, it minimizes the context switching of copying/pasting, making the user experience much smoother for developers. Additionally, because Gemini is in your IDE, it has the context of all your files and can assist you in a more personalized way.API development (Preview)Using Gemini Code Assist in Apigee, you can create APIs consistent with your enterprise standards without specialized expertise. If an existing API specification in API Hub doesn’t meet your requirements, you can create a new one with just a prompt. Gemini Code Assist considers artifacts such as your security schemas or API objects in API Hub, and uses them to suggest a specification tailored to your enterprise, saving time in review cycles and development. Furthermore, Gemini helps you easily spin up a mock server for simulating real-world behavior and to build a proxy from your specification.App development in FirebaseGemini Code Assist includes Gemini in Firebase, which is integrated within the Firebase console to help streamline your development process. Chat with Gemini to plan and design your application, troubleshoot issues, and get recommendations based on best practices. Get insights into your app's crashes with AI assistance in Crashlytics, which provides crash summaries, possible root causes, and suggested fixes.Gemini and FirebaseUnlock AI-assisted app development with Gemini in Firebase10:59Data analysis with BigQueryUnlock deeper insights from your data by using natural language to explore, transform, and visualize data within BigQuery. Generate insightful queries, and prompt Gemini to create efficient SQL and Python code for you.
Troubleshoot Apache Spark workloads, optimize your data infrastructure with recommendations for partitioning, clustering, and materialized views, and even customize your SQL translations with Gemini Code Assist.App integration and workflow automationUsing Gemini Code Assist in Application Integration, you can build end-to-end automation flows from prompts or one-click suggestions. Using the prompts and existing enterprise assets like APIs or applications, Gemini Code Assist suggests multiple flows tailored for your use case. Automatically create variables, preconfigure tasks, and complete documentation in accordance with your enterprise context. Get suggested optimizations and extend existing flows in a single click, to significantly reduce your maintenance efforts.Enterprise security and privacyOur data governance policy helps ensure customer code, customers' inputs, as well as the recommendations generated will not be used to train any shared models nor used to develop any products. Customers control and own their data and IP. Gemini Code Assist also comes with security features like Private Google Access, VPC Service Controls, and Enterprise Access Controls with granular IAM permissions to help enterprises adopt AI assistance at scale without compromising on security and privacy.Respect intellectual property Gemini Code Assist provides source citation so that code suggestions are automatically flagged when directly quoting at length from a source to help enterprises comply with license requirements. Google’s IP indemnification policy helps protect Gemini Code Assist licensed users from potential legal ramifications concerning copyright infringements.Industry certificatesGemini Code Assist has achieved multiple industry certifications such as SOC 1/2/3, ISO/IEC 27001 (Information Security Management), 27017 (Cloud Security), 27018 (Protection of PII), and 27701 (Privacy Information Management). 
More details are available at Certifications and security for Gemini. Compare Gemini Code Assist editions: Gemini Code Assist Standard provides business-ready AI coding assistance, with enterprise-grade security, for building and running applications. Standard includes: code completion, generation, and chat in the IDE; local codebase awareness; code transformation; enterprise-grade security, generative AI indemnification, and cited sources; Gemini in Firebase; Gemini in Colab Enterprise (Vertex AI); and Gemini in Databases. Gemini Code Assist Enterprise is a comprehensive AI-powered application development solution that can be customized based on your private source code repositories and integrated with many Google Cloud services for building applications across the tech stack. Enterprise includes everything in Standard, plus: customized code suggestions based on your private source code repositories; Gemini in BigQuery; Gemini in Apigee; and Gemini in Application Integration. How it works: Gemini Code Assist uses large language models (LLMs) from Google. The LLMs are fine-tuned with billions of lines of open source code, security data, and Google Cloud documentation and sample code. Paired with Gemini Code Assist, these models give developers code completion, code generation, natural language chat, and more, in their IDE and in Google Cloud services including Firebase, Colab Enterprise (Vertex AI), Databases, BigQuery, Apigee, and Application Integration. Setup guide: Learn how to prompt Gemini Code Assist. Common uses: Code faster with AI assistance. Expedite coding with AI code completion, generation, and chat. Application developers can use Gemini Code Assist to auto-complete code inline while coding in IDEs, or generate code blocks using natural language comments.
They can also chat with Gemini Code Assist for any code-related questions in IDE.Tutorial: Develop an app with assistance from Gemini Code AssistGemini in Visual Studio CodeGemini in JetBrains IDEsGemini in Cloud WorkstationsAutomate developer inner loop tasksUse smart actions to further expedite development processDevelopers can use Gemini Code Assist's smart actions to automate frequent developer inner loop tasks such as test generation and code explanation. These prebuilt, one-click shortcuts help expedite the development process even further.Tutorial: How to use smart actionsTutorials, quickstarts, & labsUse smart actions to further expedite development processDevelopers can use Gemini Code Assist's smart actions to automate frequent developer inner loop tasks such as test generation and code explanation. These prebuilt, one-click shortcuts help expedite the development process even further.Tutorial: How to use smart actionsUplevel coding and technical skillsLearn about coding and new tools faster with AI assistanceWhether you are looking for answers on how to write specific queries or scripts, for guidance on the best tools or libraries to solve your problems, or searching for coding best practices, you can seek expert-level advice from Gemini Code Assist by chatting with it in natural language right in the IDE, minimizing context-switching.Tutorials, quickstarts, & labsLearn about coding and new tools faster with AI assistanceWhether you are looking for answers on how to write specific queries or scripts, for guidance on the best tools or libraries to solve your problems, or searching for coding best practices, you can seek expert-level advice from Gemini Code Assist by chatting with it in natural language right in the IDE, minimizing context-switching.Application development in FirebaseUse Gemini in Firebase to speed up application developmentIntegrated within the Firebase console, Gemini in Firebase streamlines the development process by providing quick answers, generating Firebase integration code snippets, offering troubleshooting support, giving app quality insights, and more. This integration simplifies the learning curve, enabling developers to build, launch, and scale their apps with Firebase more rapidly.Get started with Gemini in FirebaseTutorials, quickstarts, & labsUse Gemini in Firebase to speed up application developmentIntegrated within the Firebase console, Gemini in Firebase streamlines the development process by providing quick answers, generating Firebase integration code snippets, offering troubleshooting support, giving app quality insights, and more. This integration simplifies the learning curve, enabling developers to build, launch, and scale their apps with Firebase more rapidly.Get started with Gemini in FirebaseBuild APIs and automations without special expertiseAI-powered assistance, tailored to your enterpriseUsing Gemini Code Assist in Apigee API Management and Application Integration, you can build APIs from ideas, integrations between applications, and automate your SaaS app workflows. Gemini understands your enterprise context such as security schema, APIs, app usage, and more, and uses them to provide tailored recommendations and proactive suggestions for your use case. 
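To illustrate the test-generation smart action mentioned above, the snippet below is a hand-written stand-in for the kind of pytest tests such an action might propose for a small utility like the date-range helper sketched earlier; the module name date_utils is a hypothetical placeholder.

import pytest

from date_utils import days_in_range  # hypothetical module holding the earlier helper

def test_single_day_range():
    assert days_in_range("2024-05-01..2024-05-01") == 1

def test_leap_year_quarter():
    assert days_in_range("2024-01-01..2024-03-31") == 91

def test_reversed_range_raises():
    with pytest.raises(ValueError):
        days_in_range("2024-02-01..2024-01-01")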
Using Gemini Code Assist in Apigee API Management and Application IntegrationTutorials, quickstarts, & labsAI-powered assistance, tailored to your enterpriseUsing Gemini Code Assist in Apigee API Management and Application Integration, you can build APIs from ideas, integrations between applications, and automate your SaaS app workflows. Gemini understands your enterprise context such as security schema, APIs, app usage, and more, and uses them to provide tailored recommendations and proactive suggestions for your use case. Using Gemini Code Assist in Apigee API Management and Application IntegrationPricingHow Gemini Code Assist pricing worksPricing is based on per user per month licenses, with annual commitment terms for Enterprise and monthly or annual commitment terms for Standard. ServicePriceGemini Code Assist Standard (monthly)$22.80 per user per month.Gemini Code Assist Standard (annual)$19 per user per month with an upfront annual commitment.Gemini Code Assist Enterprise (monthly)$54 per user per month.Gemini Code Assist Enterprise (annual)$45 per user per month with an upfront annual commitment.Gemini Code Assist Enterprise is available for $19 per month per user on a 12-month commitment until March 31, 2025. Connect with our sales team to take advantage of this promotional offer.Learn more about Gemini Code Assist pricing.How Gemini Code Assist pricing worksPricing is based on per user per month licenses, with annual commitment terms for Enterprise and monthly or annual commitment terms for Standard. Gemini Code Assist Standard (monthly)Price$22.80 per user per month.Gemini Code Assist Standard (annual)Price$19 per user per month with an upfront annual commitment.Gemini Code Assist Enterprise (monthly)Price$54 per user per month.Gemini Code Assist Enterprise (annual)Price$45 per user per month with an upfront annual commitment.Gemini Code Assist Enterprise is available for $19 per month per user on a 12-month commitment until March 31, 2025. Connect with our sales team to take advantage of this promotional offer.Learn more about Gemini Code Assist pricing.Pricing CalculatorEstimate your monthly costs for Google Cloud, including region-specific pricing and fees.Estimate your costCUSTOM QUOTE Connect with our sales team to get a custom quote for your organization. Request a quoteGemini Code AssistAccess Gemini Code Assist todayTry it nowLearn more about Gemini Code AssistRead moreData governance policyLearn more Set up a projectView tutorialResponsible AILearn moreBusiness CaseLearn how businesses leverage Gemini Code Assist to increase their developers' productivity and well-being.Fiona Tan, CTO, Wayfair"Gemini Code Assist brought in significant improvements across the spectrum. For example, developers were able to set up environments 55% faster than before, there was over 48% increase in unit test coverage for the code, and 60% of developers reported that they were now able to focus on more satisfying work." Watch this video to learn more about Wayfair's storyMore customer storiesPayPal chose Gemini Code Assist to accelerate software development. Capgemini sharing experiences of using Gemini Code AssistCommerzbank's take on using Cloud Workstations and Gemini Code AssistPartners & IntegrationGemini Code Assist partner ecosystemTechnology partnersService partnersWe’re working with an ecosystem of partners on Gemini Code Assist. 
Technology partners are providing us with additional documentation and data on their products so we can optimize Gemini Code Assist to provide better code assistance and general responses for their products over time. Service partners will play an important role in helping customers adopt Gemini Code Assist.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Gemini_for_Databases.txt b/Gemini_for_Databases.txt new file mode 100644 index 0000000000000000000000000000000000000000..26869e929f14c0069e0648fb36060581e3685b25 --- /dev/null +++ b/Gemini_for_Databases.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/gemini/databases +Date Scraped: 2025-02-23T11:59:29.405Z + +Content: +Register today for an exclusive live webinar on Databases Roadmap 2024. Learn about the latest innovations and plans for Google Cloud databases.Gemini in DatabasesSupercharge database development and management with AI. Gemini in Databases (in Preview) is an AI-powered assistant designed to be your ultimate database companion. It helps you in every aspect of the database journey across development, performance optimization, fleet management, governance, and migrations.Try it nowContact usAI-powered assistance where and when you need itAccelerate application development Database Studio empowers developers to easily generate and summarize SQL code with intelligent code assistance, code completion, and guidance directly in the editor. A context-aware chat interface helps developers build database applications faster using natural language.ResourcesManage your data using AlloyDB StudioGet startedUtilize Cloud SQL Studio to work with your dataGet startedStay ahead of potential performance issuesGemini in Databases automatically analyzes your workloads, highlights problems, and provides recommendations to resolve them. It simplifies the intricate troubleshooting process by providing contextual explanations for complex database concepts and metrics.ResourcesEnhanced Query Insights in AlloyDBOverviewUsing the index advisor with Query InsightsGet startedDe-risk and optimize your database fleetDatabase Center offers reliability engineers and platform engineers a single pane of glass to view their entire database fleet. This new dashboard provides a central view on database performance, security, cost, and availability to proactively monitor and identify instances that need attention.ResourcesDatabase Center provides a centralized view of your fleetRead the overviewUnderstanding database health issues with Database CenterRead the guideProactively mitigate risks and ensure complianceDatabase Center delivers simplified security configuration with automated scans, providing recommendations to mitigate risks and promote industry best practices. These recommendations help secure databases by hardening network security, access management, data protection, and audit capabilities.ResourcesDatabase security posture management with GeminiLearn moreStreamline and accelerate database migrationsWith the help of Gemini, database administrators can examine and convert database-resident code such as stored procedures, triggers, and functions, to a PostgreSQL-compatible dialect. 
The explainability feature helps developers easily learn new PostgreSQL dialects and optimize SQL code.ResourcesPerform code conversion with Gemini assistanceGet startedSimplify migrations to the cloud with Database Migration ServiceLearn moreAssisted developmentAccelerate application development Database Studio empowers developers to easily generate and summarize SQL code with intelligent code assistance, code completion, and guidance directly in the editor. A context-aware chat interface helps developers build database applications faster using natural language.ResourcesManage your data using AlloyDB StudioGet startedUtilize Cloud SQL Studio to work with your dataGet startedPerformance optimizationStay ahead of potential performance issuesGemini in Databases automatically analyzes your workloads, highlights problems, and provides recommendations to resolve them. It simplifies the intricate troubleshooting process by providing contextual explanations for complex database concepts and metrics.ResourcesEnhanced Query Insights in AlloyDBOverviewUsing the index advisor with Query InsightsGet startedFleet managementDe-risk and optimize your database fleetDatabase Center offers reliability engineers and platform engineers a single pane of glass to view their entire database fleet. This new dashboard provides a central view on database performance, security, cost, and availability to proactively monitor and identify instances that need attention.ResourcesDatabase Center provides a centralized view of your fleetRead the overviewUnderstanding database health issues with Database CenterRead the guideGovernanceProactively mitigate risks and ensure complianceDatabase Center delivers simplified security configuration with automated scans, providing recommendations to mitigate risks and promote industry best practices. These recommendations help secure databases by hardening network security, access management, data protection, and audit capabilities.ResourcesDatabase security posture management with GeminiLearn moreMigrationsStreamline and accelerate database migrationsWith the help of Gemini, database administrators can examine and convert database-resident code such as stored procedures, triggers, and functions, to a PostgreSQL-compatible dialect. The explainability feature helps developers easily learn new PostgreSQL dialects and optimize SQL code.ResourcesPerform code conversion with Gemini assistanceGet startedSimplify migrations to the cloud with Database Migration ServiceLearn more“With Gemini in Databases, we can get answers on fleet health in seconds and proactively mitigate potential risks to our applications more swiftly than ever before.” Bogdan Capatina, Technical Expert in Database Technologies, Ford Motor CompanyResources to learn more Gemini for Google Cloud is a new generation of AI assistantsLearn moreTake a deep dive into Gemini in DatabasesRead the blogFind the right database for your needsLearn moreView MoreWhy you should careProductivity unleashed—build fasterGenerate, understand, and optimize SQL code using natural language instructions directly in Database Studio. 
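As a rough sketch of how such assistance might be used from application code, the example below wraps an illustrative generated query (for a hypothetical orders table) in the Cloud SQL Python Connector. The instance connection name, credentials, and schema are placeholders, and the SQL itself is a hand-written stand-in rather than actual Gemini output.

from google.cloud.sql.connector import Connector

# Hand-written stand-in for SQL that a prompt like "total order value per customer
# over the last 30 days" might produce against a hypothetical `orders` table.
GENERATED_SQL = """
SELECT customer_id, SUM(total_amount) AS revenue
FROM orders
WHERE order_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY customer_id
ORDER BY revenue DESC
LIMIT 20
"""

def run_report() -> None:
    connector = Connector()
    conn = connector.connect(
        "my-project:us-central1:my-instance",  # placeholder instance connection name
        "pg8000",
        user="app-user",          # placeholder credentials
        password="app-password",
        db="appdb",
    )
    try:
        cursor = conn.cursor()
        cursor.execute(GENERATED_SQL)
        for customer_id, revenue in cursor.fetchall():
            print(customer_id, revenue)
    finally:
        conn.close()
        connector.close()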
Use "Help me code" to ask Gemini to generate SQL statements that work with your schema.Learn more about AlloyDB for PostgreSQLNo more surprises—always in controlDatabase Center provides intelligent dashboards that proactively assess availability, data protection, security, and compliance across your database fleet, and offers recommendations to fix issues.Learn more about Cloud SQLFast-track your modernizationGemini in Databases makes migrations simpler and faster with AI-assisted code conversion. It provides side-by-side comparison of SQL dialects and detailed explanations of the code and recommendations.Learn more about Database Migration ServiceGoogle's Database Migration Service is addressing key capabilities for minimal downtime migrations with advanced features like Gemini-assisted code conversion.Jeff Carroll, SVP - Platform and Cloud Engineering, SabreStart your Gemini in Databases journey todayExplore more in the Preview.Get startedRead our latest AI announcementsGoogle Cloud blogGet updates with the Google Cloud newsletterSubscribeGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Gemini_for_Google_Cloud.txt b/Gemini_for_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..c0449a73a5338859bbbc6c13e05974b2d63ea9dc --- /dev/null +++ b/Gemini_for_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/gemini +Date Scraped: 2025-02-23T11:58:46.321Z + +Content: +Gemini for Google Cloud is your AI-powered assistant Gemini for Google Cloud helps you be more productive and creative. It can be your writing and coding assistant, creative designer, expert adviser, or even your data analyst. Seamlessly integrated across Google Cloud and Google Workspace, Gemini is always at your side, helping you accomplish more in less time.Contact salesGenerally AvailableAccelerate software delivery with AI assistanceGemini Code Assist offers AI-powered assistance to help developers build applications with higher velocity, quality, and security in popular code editors like VS Code and JetBrains, and on developer platforms like Firebase. Built with robust enterprise features, it enables organizations to adopt AI assistance at scale while meeting security, privacy, and compliance requirements. “Gemini Code Assist is one of the top coding assistants we’ve tried. Our early experience with it has been very promising with productivity gains around 33%. We’re trying out newer features right now like indexing and debugging, which we expect to push productivity even higher.”Kai Du Director of Engineering, TuringTry Gemini Code Assist Get startedWant to build with Gemini models? Try Gemini API in Vertex AIPrivate PreviewEfficiently manage cloud applicationsGemini Cloud Assist helps cloud teams design, operate, and optimize their application lifecycle. Gemini’s contextual and personalized AI guidance understands your Google Cloud resources to help you craft new designs, deploy workloads, manage applications, troubleshoot issues, and optimize performance and costs. This means you can accelerate complex tasks and boost productivity, so you can stay focused on your business.Gemini Cloud Assist is directly accessible through a chat interface in the Google Cloud console, and directly embedded into the interfaces where you manage different cloud products and resources. 
Gemini Cloud Assist is in private preview.Google named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024Download reportGenerative AI on Google CloudLearn moreGenerally AvailableSupercharge threat detection, investigation, and responseGemini in Security provides generative AI-powered assistance to cloud defenders where and when they need it.Gemini in Security Operations can help transform threat detection, investigation, and response for cyber defenders by simplifying search, complex data analysis, and threat remediation.Gemini in Security Command Center can help teams stay one step ahead of adversaries with near-instant analysis of security findings and possible attack paths.Gemini in Threat Intelligence can help surface prevalent tactics, techniques, and procedures (TTPs) used by threat actors by summarizing our frontline threat intelligence into an easy-to-comprehend format.“We are excited about Gemini's potential to dramatically increase the effectiveness of our team. The ability to use natural language queries to perform complex analysis will help new analysts on-board faster and let seasoned analysts rapidly pursue advanced threats across our complex environment.”Vice President, Global Information Security, PfizerGemini in Security Operations is now generally availableRead the blogSupercharge security with AILearn moreFast-track data analysis and automate data insightsGemini in BigQuery provides a NL-based experience along with semantic search, assisted data preparation, SQL and Python code assist capabilities, query performance improvement, and cost optimization recommendations. Gemini in BigQuery is now generally available. Gemini in Looker (preview) provides an always-on intelligent assistant that enables you to have conversations with your data and helps you automatically create and share your insights even faster, with automated reports and visualizations.“The natural language-based experiences, low-code data preparation tools, and automatic code generation features streamline high-priority analytics workflows, enhancing the productivity of data practitioners and providing the space to focus on high impact initiatives.”Tim Velasquez, Head of Analytics, VeoDiscover how Gemini in BigQuery accelerates analytics workflows with AIWatch nowThe future of Looker is AIWatch nowPreviewSupercharge database development and managementGemini in Databases delivers AI-powered assistance that simplifies all aspects of the database journey, helping teams focus on what matters most. Gemini in databases helps developers, operators, and database administrators build applications faster using natural language, manage, optimize, and govern an entire fleet of databases with a single pane of glass, and accelerate database migrations for the last-mile conversions.Write SQL with Gemini in Databases assistanceDocumentationDatabase management with Gemini in DatabasesWatch nowSoftware development Generally AvailableAccelerate software delivery with AI assistanceGemini Code Assist offers AI-powered assistance to help developers build applications with higher velocity, quality, and security in popular code editors like VS Code and JetBrains, and on developer platforms like Firebase. Built with robust enterprise features, it enables organizations to adopt AI assistance at scale while meeting security, privacy, and compliance requirements. “Gemini Code Assist is one of the top coding assistants we’ve tried. 
Our early experience with it has been very promising with productivity gains around 33%. We’re trying out newer features right now like indexing and debugging, which we expect to push productivity even higher.”Kai Du Director of Engineering, TuringTry Gemini Code Assist Get startedWant to build with Gemini models? Try Gemini API in Vertex AIApplication lifecyclePrivate PreviewEfficiently manage cloud applicationsGemini Cloud Assist helps cloud teams design, operate, and optimize their application lifecycle. Gemini’s contextual and personalized AI guidance understands your Google Cloud resources to help you craft new designs, deploy workloads, manage applications, troubleshoot issues, and optimize performance and costs. This means you can accelerate complex tasks and boost productivity, so you can stay focused on your business.Gemini Cloud Assist is directly accessible through a chat interface in the Google Cloud console, and directly embedded into the interfaces where you manage different cloud products and resources. Gemini Cloud Assist is in private preview.Google named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024Download reportGenerative AI on Google CloudLearn moreSecurity Generally AvailableSupercharge threat detection, investigation, and responseGemini in Security provides generative AI-powered assistance to cloud defenders where and when they need it.Gemini in Security Operations can help transform threat detection, investigation, and response for cyber defenders by simplifying search, complex data analysis, and threat remediation.Gemini in Security Command Center can help teams stay one step ahead of adversaries with near-instant analysis of security findings and possible attack paths.Gemini in Threat Intelligence can help surface prevalent tactics, techniques, and procedures (TTPs) used by threat actors by summarizing our frontline threat intelligence into an easy-to-comprehend format.“We are excited about Gemini's potential to dramatically increase the effectiveness of our team. The ability to use natural language queries to perform complex analysis will help new analysts on-board faster and let seasoned analysts rapidly pursue advanced threats across our complex environment.”Vice President, Global Information Security, PfizerGemini in Security Operations is now generally availableRead the blogSupercharge security with AILearn moreData analyticsFast-track data analysis and automate data insightsGemini in BigQuery provides a NL-based experience along with semantic search, assisted data preparation, SQL and Python code assist capabilities, query performance improvement, and cost optimization recommendations. Gemini in BigQuery is now generally available. 
Gemini in Looker (preview) provides an always-on intelligent assistant that enables you to have conversations with your data and helps you automatically create and share your insights even faster, with automated reports and visualizations.“The natural language-based experiences, low-code data preparation tools, and automatic code generation features streamline high-priority analytics workflows, enhancing the productivity of data practitioners and providing the space to focus on high impact initiatives.”Tim Velasquez, Head of Analytics, VeoDiscover how Gemini in BigQuery accelerates analytics workflows with AIWatch nowThe future of Looker is AIWatch nowDatabasesPreviewSupercharge database development and managementGemini in Databases delivers AI-powered assistance that simplifies all aspects of the database journey, helping teams focus on what matters most. Gemini in databases helps developers, operators, and database administrators build applications faster using natural language, manage, optimize, and govern an entire fleet of databases with a single pane of glass, and accelerate database migrations for the last-mile conversions.Write SQL with Gemini in Databases assistanceDocumentationDatabase management with Gemini in DatabasesWatch nowGemini for Google Workspace The AI-powered assistant from Google, built right into Gmail, Docs, Sheets, and more, with enterprise-grade security and privacy.Learn more Key features of Gemini for Google Cloud Conversational assistance through a chat interfaceYou can quickly chat with Gemini for Google Cloud to get answers to cloud questions, or receive guidance on best practices. Gemini for Google Cloud is specifically trained on Google Cloud content like docs and sample code.Fully managed, ready to useYou don't have to know anything about the world of AI to take advantage of Gemini for Google Cloud. As a fully managed service, it is regularly updated, monitored, and uses the latest-tested AI technology from Google.AI-powered code assistanceWhether you are writing apps, calling APIs, or querying data, Gemini Code Assist can help complete your code while you write, or generate code blocks based on comments. Supports 20+ programming languages.Code assistance is available in multiple IDEsVisual Studio Code, JetBrains IDEs (IntelliJ, PyCharm, GoLand, WebStorm, and more), Colab for Enterprise, Cloud Workstations, Cloud Shell Editor, Cloud Spanner, and BigQuery.Designed to respect intellectual propertyYou can rest assured that your code, inputs, and recommendations will not be used for any product and model learning and development. 
Your data and IP remain exclusively yours.For more details, check our data governance policySafe and responsible AIGemini for Google Cloud has achieved ISO/IEC 27001 (Information Security Management), 27017 (Cloud Security), 27018 (Protection of PII), and 27701 (Privacy Information Management) certifications.Learn about Google’s AI principlesStart your Gemini for Google Cloud journey todayContact salesLearn about our AI-powered code assistanceGemini Code AssistRead our latest AI announcementsGoogle Cloud BlogGet updates with the Google Cloud newsletterSubscribeGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/General_best_practices.txt b/General_best_practices.txt new file mode 100644 index 0000000000000000000000000000000000000000..83f9a698d82e5abe98ec6d464e9b2e80f6014093 --- /dev/null +++ b/General_best_practices.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/general-best-practices +Date Scraped: 2025-02-23T11:50:42.249Z + +Content: +Home Docs Cloud Architecture Center Send feedback General best practices Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC When designing and onboarding cloud identities, resource hierarchy, and landing zone networks, consider the design recommendations in Landing zone design in Google Cloud and the Google Cloud security best practices covered in the enterprise foundations blueprint. Validate your selected design against the following documents: Best practices and reference architectures for VPC design Decide a resource hierarchy for your Google Cloud landing zone Google Cloud Architecture Framework: Security, privacy, and compliance Also, consider the following general best practices: When choosing a hybrid or multicloud network connectivity option, consider business and application requirements such as SLAs, performance, security, cost, reliability, and bandwidth. For more information, see Choosing a Network Connectivity product and Patterns for connecting other cloud service providers with Google Cloud. Use shared VPCs on Google Cloud instead of multiple VPCs when appropriate and aligned with your resource hierarchy design requirements. For more information, see Deciding whether to create multiple VPC networks. Follow the best practices for planning accounts and organizations. Where applicable, establish a common identity between environments so that systems can authenticate securely across environment boundaries. To securely expose applications to corporate users in a hybrid setup, and to choose the approach that best fits your requirements, you should follow the recommended ways to integrate Google Cloud with your identity management system. Also, see Patterns for authenticating workforce users in a hybrid environment. When designing your on-premises and cloud environments, consider IPv6 addressing early on, and account for which services support it. For more information, see An Introduction to IPv6 on Google Cloud. It summarizes the services that were supported when the blog was written. When designing, deploying, and managing your VPC firewall rules, you can: Use service-account-based filtering over network-tag-based filtering if you need strict control over how firewall rules are applied to VMs. Use firewall policies when you group several firewall rules, so that you can update them all at once. You can also make the policy hierarchical. 
For hierarchical firewall policy specifications and details, see Hierarchical firewall policies. Use geo-location objects in firewall policy when you need to filter external IPv4 and external IPv6 traffic based on specific geographic locations or regions. Use Threat Intelligence for firewall policy rules if you need to secure your network by allowing or blocking traffic based on Threat Intelligence data, such as known malicious IP addresses or based on public cloud IP address ranges. For example, you can allow traffic from specific public cloud IP address ranges if your services need to communicate with that public cloud only. For more information, see Best practices for firewall rules. You should always design your cloud and network security using a multilayer security approach by considering additional security layers, like the following: Google Cloud Armor Cloud Intrusion Detection System Cloud Next Generation Firewall IPS Threat Intelligence for firewall policy rules These additional layers can help you filter, inspect, and monitor a wide variety of threats at the network and application layers for analysis and prevention. When deciding where DNS resolution should be performed in a hybrid setup, we recommend using two authoritative DNS systems for your private Google Cloud environment and for your on-premises resources that are hosted by existing DNS servers in your on-premises environment. For more information see, Choose where DNS resolution is performed. Where possible, always expose applications through APIs using an API gateway or load balancer. We recommend that you consider an API platform like Apigee. Apigee acts as an abstraction or facade for your backend service APIs, combined with security capabilities, rate limiting, quotas, and analytics. An API platform (gateway or proxy) and Application Load Balancer aren't mutually exclusive. Sometimes, using both API gateways and load balancers together can provide a more robust and secure solution for managing and distributing API traffic at scale. Using Cloud Load Balancing API gateways lets you accomplish the following: Deliver high-performing APIs with Apigee and Cloud CDN, to: Reduce latency Host APIs globally Increase availability for peak traffic seasons For more information, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube. Implement advanced traffic management. Use Google Cloud Armor as DDoS protection, WAF, and network security service to protect your APIs. Manage efficient load balancing across gateways in multiple regions. For more information, watch Securing APIs and Implementing multi-region failover with PSC and Apigee. To determine which Cloud Load Balancing product to use, you must first determine what traffic type your load balancers must handle. For more information, see Choose a load balancer. When Cloud Load Balancing is used, you should use its application capacity optimization abilities where applicable. Doing so can help you address some of the capacity challenges that can occur in globally distributed applications. For a deep dive on latency, see Optimize application latency with load balancing. While Cloud VPN encrypts traffic between environments, with Cloud Interconnect you need to use either MACsec or HA VPN over Cloud Interconnect to encrypt traffic in transit at the connectivity layer. For more information, see How can I encrypt my traffic over Cloud Interconnect. You can also consider service layer encryption using TLS. 
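As a minimal sketch of the service-account-based firewall filtering recommended earlier in this list, the example below creates an ingress allow rule scoped to a target service account by using the google-cloud-compute client library. The project, network, source range, and service account are placeholders, and the field names should be checked against the current client library before use.

from google.cloud import compute_v1

PROJECT_ID = "my-project"  # placeholder

def create_sa_scoped_rule() -> None:
    # Ingress rule applied only to VMs running as the frontend service account,
    # rather than relying on network tags.
    firewall = compute_v1.Firewall(
        name="allow-https-to-frontend-sa",
        network=f"projects/{PROJECT_ID}/global/networks/my-vpc",
        direction="INGRESS",
        priority=1000,
        source_ranges=["203.0.113.0/24"],  # example on-premises range
        target_service_accounts=[f"frontend@{PROJECT_ID}.iam.gserviceaccount.com"],
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
    )
    operation = compute_v1.FirewallsClient().insert(
        project=PROJECT_ID, firewall_resource=firewall
    )
    operation.result()  # wait for the rule to be created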
For more information, see Decide how to meet compliance requirements for encryption in transit. If you need more traffic volume over a VPN hybrid connectivity than a single VPN tunnel can support, you can consider using active/active HA VPN routing option. For long-term hybrid or multicloud setups with high outbound data transfer volumes, consider Cloud Interconnect or Cross-Cloud Interconnect. Those connectivity options help to optimize connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing. When connecting to Google Cloud resources and trying to choose between Cloud Interconnect, Direct Peering, or Carrier Peering, we recommend using Cloud Interconnect, unless you need to access Google Workspace applications. For more information, you can compare the features of Direct Peering with Cloud Interconnect and Carrier Peering with Cloud Interconnect. Allow enough IP address space from your existing RFC 1918 IP address space to accommodate your cloud-hosted systems. If you have technical restrictions that require you to keep your IP address range, you can: Use the same internal IP addresses for your on-premises workloads while migrating them to Google Cloud, using hybrid subnets. Provision and use your own public IPv4 addresses for Google Cloud resources using bring your own IP (BYOIP) to Google. If the design of your solution requires exposing a Google Cloud-based application to the public internet, consider the design recommendations discussed in Networking for internet-facing application delivery. Where applicable, use Private Service Connect endpoints to allow workloads in Google Cloud, on-premises, or in another cloud environment with hybrid connectivity, to privately access Google APIs or published services, using internal IP addresses in a fine-grained fashion. When using Private Service Connect, you must control the following: Who can deploy Private Service Connect resources. Whether connections can be established between consumers and producers. Which network traffic is allowed to access those connections. For more information, see Private Service Connect security. To achieve a robust cloud setup in the context of hybrid and multicloud architecture: Perform a comprehensive assessment of the required levels of reliability of the different applications across environments. Doing so can help you meet your objectives for availability and resilience. Understand the reliability capabilities and design principles of your cloud provider. For more information, see Google Cloud infrastructure reliability. Cloud network visibility and monitoring are essential to maintain reliable communications. Network Intelligence Center provides a single console for managing network visibility, monitoring, and troubleshooting. 
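As one hedged example of such monitoring, the sketch below pulls recent Cloud VPN throughput data with the Cloud Monitoring API to complement the Network Intelligence Center console. The metric type shown is an assumption to verify against the current metrics list, and the project ID is a placeholder.

import time

from google.cloud import monitoring_v3

def print_vpn_sent_bytes(project_id: str = "my-project") -> None:
    client = monitoring_v3.MetricServiceClient()
    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
    )
    series = client.list_time_series(
        request={
            "name": f"projects/{project_id}",
            # Assumed Cloud VPN metric; confirm the exact type in the metrics list.
            "filter": 'metric.type = "vpn.googleapis.com/network/sent_bytes_count"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for ts in series:
        print(dict(ts.resource.labels), "points:", len(ts.points))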
Previous arrow_back Handover pattern Next What's next arrow_forward Send feedback \ No newline at end of file diff --git a/Generative_AI_RAG_with_Cloud_SQL.txt b/Generative_AI_RAG_with_Cloud_SQL.txt new file mode 100644 index 0000000000000000000000000000000000000000..d3af98f5d6a0c838165e2f7c1d36c12e8bcc9b4c --- /dev/null +++ b/Generative_AI_RAG_with_Cloud_SQL.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ai-ml/generative-ai-rag +Date Scraped: 2025-02-23T11:45:52.024Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Generative AI RAG with Cloud SQL Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-06 UTC This guide helps you understand and deploy the Generative AI RAG with Cloud SQL solution. This solution is based on the reference architecture Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL, but it's designed to help you get started and learn how to use RAG at a lower cost. This solution demonstrates how you can create a chat application that uses retrieval-augmented generation (RAG). When users ask questions in the app, it provides responses that are based on the information stored as vectors in a database. This document is intended for application developers and data scientists who have some background in application development and interacting with an LLM, such as Gemini. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives This solution guide helps you do the following: Deploy a three-tier app that uses RAG as a way to provide input to an LLM. The app has a frontend service and a backend service (both built using Python), and uses a managed database. Learn how to use an LLM with RAG and unstructured text. Architecture The following diagram shows the architecture of the solution: The following sections describe the request flow and the Google Cloud resources that are shown in the diagram. Request flow The following is the request processing flow of this solution. The steps in the flow are numbered as shown in the preceding architecture diagram. Data is uploaded to a Cloud Storage bucket. Data is loaded to a PostgreSQL database in Cloud SQL. Embeddings of text fields are created by using Vertex AI and stored as vectors. You open the application in a browser. The frontend service communicates with the backend service for a generative AI call. The backend service converts the request to an embedding and searches existing embeddings. Natural language results from the embeddings search, along with the original prompt, are sent to Vertex AI to create a response. Products used The solution uses the following Google Cloud products: Vertex AI: A machine learning (ML) platform that lets you train and deploy ML models and AI applications, and customize LLMs for use in applications. Cloud SQL: A cloud-based service for MySQL, PostgreSQL and SQL Server databases that's fully managed on the Google Cloud infrastructure. Cloud Run: A fully managed service that lets you build and deploy serverless containerized apps. Google Cloud handles scaling and other infrastructure tasks. 
Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Cost For an estimate of the cost of the Google Cloud resources that the generative AI RAG with Cloud SQL solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. The amount of data stored in Cloud Storage. The CPU and memory allocation for Cloud Run. The CPU, memory, and storage allocation for Cloud SQL. The number of calls to Vertex AI model endpoints. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles that are assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. 
roles/aiplatform.admin roles/artifactregistry.admin roles/cloudfunctions.admin roles/cloudsql.admin roles/compute.networkAdmin roles/config.agent roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/iam.serviceAccountTokenCreator roles/logging.configWriter roles/resourcemanager.projectIamAdmin roles/run.admin roles/servicenetworking.serviceAgent roles/serviceusage.serviceUsageAdmin roles/storage.admin roles/workflows.admin roles/vpcaccess.admin Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Generative AI RAG with Cloud SQL solution. Go to the Generative AI RAG with Cloud SQL solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view the solution, return to the Solution deployments page in the console. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour This task takes about 10 minutes to complete. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. 
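Before moving to the Terraform CLI path, it can help to see the request flow described earlier in rough code form. The sketch below embeds a question with Vertex AI, runs a vector similarity search in a pgvector-enabled PostgreSQL database, and asks Gemini to answer from the retrieved context. The table name, column names, model IDs, and the passed-in database connection are illustrative assumptions, not the solution's actual backend code.

import vertexai
from vertexai.generative_models import GenerativeModel
from vertexai.language_models import TextEmbeddingModel

def answer_question(db_conn, question: str, project_id: str, region: str) -> str:
    vertexai.init(project=project_id, location=region)

    # 1. Convert the question into an embedding vector.
    embedder = TextEmbeddingModel.from_pretrained("textembedding-gecko@003")  # assumed model ID
    vector = embedder.get_embeddings([question])[0].values
    vector_literal = "[" + ",".join(str(v) for v in vector) + "]"

    # 2. Retrieve the most similar stored chunks (pgvector cosine-distance operator <=>).
    cursor = db_conn.cursor()
    cursor.execute(
        "SELECT content FROM documents "
        f"ORDER BY embedding <=> '{vector_literal}'::vector LIMIT 5"
    )
    context = "\n".join(row[0] for row in cursor.fetchall())

    # 3. Send the retrieved context plus the original question to Gemini.
    model = GenerativeModel("gemini-1.5-flash")  # assumed model ID
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return model.generate_content(prompt).text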
Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-genai-rag. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-genai-rag Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-genai-rag. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-genai-rag directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" # The following variables have default values. You can set your own values or remove them to accept the defaults. # Google Cloud region where you want to deploy the solution. # Example: us-central1 region = "REGION" # Whether or not to enable underlying apis in this solution. # Example: true enable_apis = "BOOL" # Whether or not to protect Cloud SQL resources from deletion when solution is modified or changed. 
# Example: false deletion_protection = "BOOL" # A map of key/value label pairs to assign to the resources. # Example: "team"="monitoring", "environment"="test" labels = {"KEY1"="VALUE1",..."KEYn"="VALUEn"} For information about the values that you can assign to the required variables, see the following: project_id: Identifying projects. The other variables have default values. You might change some of them (for example, disable_services_on_destroy and labels). Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-genai-rag. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-genai-rag. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! To view the solution, return to the Solution deployments page in the console. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour This task takes about 15 minutes to complete. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Customize the solution The solution uses a base flights and airports dataset. While the application container code is specific to this dataset, you can use your own data to create a private RAG. To add your data to the existing SQL instance: Upload your data in CSV format to a Cloud Storage bucket. Import the data into Cloud SQL. Create embeddings of the columns you will search. Query the data using SQL. Delete the deployment When you no longer need the solution, to avoid continued billing for the resources that you created in this solution, delete all the resources. 
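The customization steps listed above (import your CSV, embed the searchable columns, and query with SQL) might look roughly like the following sketch. The table and column names, the embedding model ID, and the %s parameter style (a pg8000-style DB-API driver) are assumptions to adapt to your own schema and driver before use.

import csv

import vertexai
from vertexai.language_models import TextEmbeddingModel

def load_csv_with_embeddings(db_conn, csv_path: str, project_id: str, region: str) -> None:
    vertexai.init(project=project_id, location=region)
    embedder = TextEmbeddingModel.from_pretrained("textembedding-gecko@003")  # assumed model ID
    cursor = db_conn.cursor()

    with open(csv_path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            text = row["description"]  # hypothetical searchable column
            vector = embedder.get_embeddings([text])[0].values
            vector_literal = "[" + ",".join(str(v) for v in vector) + "]"
            # Assumes a `documents` table with a text column and a pgvector column.
            cursor.execute(
                "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
                (text, vector_literal),
            )
    db_conn.commit()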
Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-genai-rag. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. 
The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. 
In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! 
Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Errors when deleting a deployment through the console If the Cloud SQL instance is not running, you might receive the following error when deleting a deployment through the console: error_description: "Error: Error when reading or editing SQL User \"retrieval-service\" in instance \"genai-rag-db-GENERATED_ID\": googleapi: Error 400: Invalid request: Invalid request since instance is not running." To resolve the error, start the Cloud SQL instance and then retry deleting the deployment. Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next Discover how to Build a generative AI application on Google Cloud. Learn to Build generative AI applications using AlloyDB AI. Learn how to build infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. Contributors Author: Jason Davenport | Developer Advocate Other contributors: Kumar Dhanagopal | Cross-Product Solution Developer Geoffrey Anderson | Product Manager Send feedback \ No newline at end of file diff --git a/Generative_AI_document_summarization.txt b/Generative_AI_document_summarization.txt new file mode 100644 index 0000000000000000000000000000000000000000..c42391590986a4745ad1270bd7d2271032ddbc5a --- /dev/null +++ b/Generative_AI_document_summarization.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ai-ml/generative-ai-document-summarization +Date Scraped: 2025-02-23T11:45:49.612Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Generative AI document summarization Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-07-21 UTC This guide helps you understand, deploy, and use the Generative AI document summarization solution, which leverages Vertex AI Generative AI Large Language Models (LLM) to process and summarize documents on demand.
This solution deploys a pipeline that is triggered when you add a new PDF document to your Cloud Storage bucket. The pipeline extracts text from your document, creates a summary from the extracted text, and stores the summary in a database for you to view and search. This guide is intended for developers who have some background with large language models. It assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives This solution guide helps you do the following: Understand how the Generative AI document summarization application works. Deploy an application that orchestrates the document summarization process. Trigger the pipeline with a PDF upload and view a generated summary. Products used This section describes the products that the solution uses. Component Product description Purpose in this solution Cloud Storage An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Stores the PDF documents and extracted text. Eventarc A service that manages the flow of state changes (events) between decoupled microservices, routing events to various destinations while managing delivery, security, authorization, observability, and error handling. Watches for new documents in the Cloud Storage bucket and triggers an event in Cloud Run functions. Cloud Run functions A lightweight serverless compute service that lets you create single-purpose, standalone functions that respond to Google Cloud events without the need to manage a server or runtime environment. Orchestrates the document processing steps. Document AI A document understanding platform that takes unstructured data from documents and transforms it into structured data. You can automate tedious tasks, improve data extraction, and gain deeper insights from data. Extracts the text from the documents. Vertex AI Generative AI Generative AI support on Vertex AI gives you access to Google's large generative AI models so you can test, tune, and deploy them for use in your AI-powered applications. Creates a summary from the extracted text stored in Cloud Storage. BigQuery A fully managed, highly scalable data warehouse with built-in machine learning capabilities. Handles the storage of the generated summary. Architecture This solution deploys a document summarization application using code that already exists. The following diagram shows the architecture of the application infrastructure: Request flow The following steps detail the request processing flow of the application. The steps in the flow are numbered as shown in the preceding architecture diagram. When the document is uploaded, it triggers a Cloud Run function. This function orchestrates the document summarization process. The Cloud Run function uses Document AI Optical Character Recognition (OCR) to extract all text from the document. The Cloud Run function uses Vertex AI Gemini to create a summary from the extracted text. The Cloud Run function stores the textual summaries of PDFs inside a BigQuery table. The Cloud Run function stores the extracted text inside a Cloud Storage bucket.
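To make the request flow concrete, the following is a rough, unofficial Python sketch of the kind of logic the Cloud Run function performs. It is not the solution's actual code (that lives in the solution's GitHub repository), and the processor name, bucket names, BigQuery table ID, Gemini model name, and prompt shown here are placeholder assumptions.

# Hypothetical sketch of the document summarization flow; all names are placeholders.
from google.cloud import bigquery, documentai, storage
import vertexai
from vertexai.generative_models import GenerativeModel

PROJECT_ID = "PROJECT_ID"
REGION = "us-central1"  # assumed region
OCR_PROCESSOR = "projects/PROJECT_ID/locations/us/processors/PROCESSOR_ID"  # Document AI OCR processor
OUTPUT_BUCKET = "OUTPUT_BUCKET"  # bucket for extracted text
SUMMARY_TABLE = "PROJECT_ID.summarization.summaries"  # assumed BigQuery table

def summarize_pdf(bucket_name: str, object_name: str) -> None:
    """Runs when a new PDF lands in the input bucket."""
    # 1. Read the uploaded PDF from Cloud Storage.
    storage_client = storage.Client(project=PROJECT_ID)
    pdf_bytes = storage_client.bucket(bucket_name).blob(object_name).download_as_bytes()

    # 2. Extract the text with Document AI OCR.
    docai_client = documentai.DocumentProcessorServiceClient()
    response = docai_client.process_document(
        request=documentai.ProcessRequest(
            name=OCR_PROCESSOR,
            raw_document=documentai.RawDocument(content=pdf_bytes, mime_type="application/pdf"),
        )
    )
    extracted_text = response.document.text

    # 3. Create a summary of the extracted text with a Vertex AI Gemini model.
    vertexai.init(project=PROJECT_ID, location=REGION)
    model = GenerativeModel("gemini-1.5-flash")  # model choice is an assumption
    summary = model.generate_content(f"Summarize the following document:\n\n{extracted_text}").text

    # 4. Store the summary in a BigQuery table.
    bigquery.Client(project=PROJECT_ID).insert_rows_json(
        SUMMARY_TABLE, [{"document": object_name, "summary": summary}]
    )

    # 5. Store the extracted text in a Cloud Storage bucket.
    storage_client.bucket(OUTPUT_BUCKET).blob(f"{object_name}.txt").upload_from_string(extracted_text)

In the deployed solution, the equivalent function is triggered automatically through Eventarc whenever a new document is uploaded to the input bucket, so no manual invocation is needed.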
Cost For an estimate of the cost of the Google Cloud resources that the generative AI document summarization solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. The amount of data stored in Cloud Storage. The number of times the document summarization application is invoked. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles that are assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. 
roles/clouddeploymentmanager.serviceAgent roles/cloudfunctions.serviceAgent roles/config.agent roles/documentai.editor roles/resourcemanager.projectIamAdmin roles/serviceusage.serviceUsageViewer Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Generative AI document summarization solution. Go to the Generative AI document summarization solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. Next, to try the solution out yourself, see Explore the solution. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. 
This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-gen-ai-document-summarization/. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-gen-ai-document-summarization/ Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-gen-ai-document-summarization/. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-gen-ai-document-summarization/ directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-gen-ai-document-summarization/. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. 
If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-gen-ai-document-summarization/. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! Next, you can explore the solution and see how it works. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Explore the solution Once the solution is deployed, you can upload a PDF document and view a summary of the document in BigQuery. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour in the console. Start the tour Upload a document and query the model Begin using this solution by uploading a document, then ask the pre-trained LLM questions about the document. To follow step-by-step guidance for this task directly in Google Cloud console, click Guide me. Guide me This task takes about 10 minutes to complete. Delete the deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-gen-ai-document-summarization/. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. 
After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. 
API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Configuration error If you modify resource arguments in your main.tf file using unsupported values, an error like the following occurs: Error: Error creating Instance: googleapi: Error 400: Provided Redis version is not supported: REDIS_5_X │ com.google.apps.framework.request.StatusException: generic::INVALID_ARGUMENT: Provided Redis version is not supported: REDIS_5_X Details: │ [ │ { │ "@type": "type.googleapis.com/google.rpc.BadRequest", │ "fieldViolations": [ │ { │ "description": "Invalid value: REDIS_5_X", │ "field": "instance.redis_version" │ } │ ] │ } │ ] │ │ with google_redis_instance.main, │ on main.tf line 96, in resource "google_redis_instance" "main": │ 96: resource "google_redis_instance" "main" { In this case, the intent was to use Redis version 5, but the value specified for the instance.redis_version argument (REDIS_5_X) in the main.tf file is not valid. The correct value is REDIS_5_0, as enumerated in the Memorystore REST API documentation. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. 
This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/ Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! 
Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next Learn more about Generative AI on Vertex AI. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. Send feedback \ No newline at end of file diff --git a/Generative_AI_knowledge_base.txt b/Generative_AI_knowledge_base.txt new file mode 100644 index 0000000000000000000000000000000000000000..e6d35f8fffdb96ed317497c3ba6fc6d22746bf7d --- /dev/null +++ b/Generative_AI_knowledge_base.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ai-ml/generative-ai-knowledge-base +Date Scraped: 2025-02-23T11:45:54.450Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Generative AI Knowledge Base Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-01 UTC This guide helps you understand and deploy the Generative AI Knowledge Base solution. This solution demonstrates how to build an extractive question-answering (EQA) pipeline to produce content for an internal knowledge base. An EQA pipeline processes documents, extracts question-and-answer pairs from the documents, and provides you with structured information to fine-tune a large language model (LLM). For example, you could upload a document containing many frequently asked questions (FAQs), tune an LLM using those FAQs, and then use the trained model to help customer support agents find information while resolving cases. This document is intended for developers who have some background with LLMs. It assumes that you're familiar with basic cloud concepts. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist.
Objectives This solution guide helps you do the following: Deploy an application that extracts question-and-answer pairs from your documents. Deploy a pipeline that triggers your application when a document is uploaded. Train a prompt-based AI model using the output from your application. Architecture This solution deploys a Generative AI Knowledge Base application. The following diagram shows the architecture of the application infrastructure: Request flow The following steps detail the request processing flow of the application. The steps in the flow are numbered as shown in the preceding architecture diagram. You start the Generative AI Knowledge Base application by uploading a document directly to a Cloud Storage bucket, either through the Google Cloud console or gcloud CLI. When the document is uploaded, it triggers a Cloud Run function. This function runs the Extractive Question-Answering process. The function uses Document AI OCR to extract all text from the document. The function indexes the document into Vector Search. The Vector Search index provides context for the LLM to extract question-and-answer pairs based only on content that's extracted directly from the uploaded documents. The function uses Vertex AI to extract and generate questions and answers from the document. The function stores the extracted question-and-answer pairs in Firestore. A JSONL fine tuning dataset is generated from the Firestore database and stored in Cloud Storage. After manually validating that you are satisfied with the dataset, you can launch a fine tuning job on Vertex AI. When the tuning job is complete, the tuned model is deployed to an endpoint. After it's deployed to an endpoint, you can submit queries to the tuned model in a Colab notebook, and compare it with the foundation model. Products used This section describes the products that the solution uses. If you're familiar with the Terraform configuration language, you can change some of the settings for the services. Component Product description Purpose in this solution Cloud Storage An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Stores the PDF documents, extracted text, tuning dataset, and tuned model. Eventarc A service that manages the flow of state changes (events) between decoupled microservices, routing events to various destinations while managing delivery, security, authorization, observability, and error handling. Watches for new documents in the Cloud Storage bucket and triggers an event in Cloud Run functions. Cloud Run functions A lightweight serverless compute service that lets you create single-purpose, standalone functions that respond to Google Cloud events without the need to manage a server or runtime environment. Orchestrates the document processing steps. Document AI A document understanding platform that takes unstructured data from documents and transforms it into structured data. You can automate tedious tasks, improve data extraction, and gain deeper insights from data. Extracts the text from the documents. Vertex AI A machine learning platform that lets you train, test, tune, and deploy LLMs and generative AI applications. Generates questions and answers from the documents. Vector Search A service that lets you use the same infrastructure that provides a foundation for Google products such as Google Search, YouTube, and Play. Lets you search embeddings to find semantically similar or related entities. 
Firestore A fully managed, scalable, serverless NoSQL document database. Stores the generated questions and answers. Cost For an estimate of the cost of the Google Cloud resources that the Generative AI Knowledge Base solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. The amount of data stored in Cloud Storage. The number of times the knowledge base application is invoked. The computing resources used for tuning the model. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccounts.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles that are assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information.
roles/aiplatform.user roles/artifactregistry.admin roles/documentai.editor roles/firebase.admin roles/iam.serviceAccountUser roles/serviceusage.serviceUsageAdmin roles/iam.serviceAccountAdmin roles/resourcemanager.projectIamAdmin roles/config.agent Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Generative AI Knowledge Base solution. Go to the Generative AI Knowledge Base solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour in the console. Start the tour For next steps, see Use the solution. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. 
Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-genai-knowledge-base/. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-genai-knowledge-base/ Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-genai-knowledge-base/. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-genai-knowledge-base/ directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-genai-knowledge-base/. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. 
Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-genai-knowledge-base/. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! To view the Google Cloud resources that are deployed and their configuration, take an interactive tour in the console. Start the tour Next, you can use the solution and see how it works. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Use the solution Once the solution is deployed, you can upload a document to index it and ask questions about it. Additionally, a JSON lines (JSONL) tuning dataset file is generated, which you can use to prompt-tune an LLM. Upload a document and query the model Begin using this solution by uploading a document, then ask the pre-trained LLM questions about the document. To follow step-by-step guidance for this task directly in Google Cloud console, click Guide me. Guide me This task takes about 10 minutes to complete. Tune the LLM After you upload documents for the solution, you can use Vertex AI to tune an LLM with your question-and-answer pairs. Tuning the LLM is not an automated process. Before you tune the LLM, inspect your data and make sure that it's valid and accurate. After you are satisfied with the data, you can manually launch a tuning job and launch the LLM from the Model Registry. The JSONL tuning file contains extracted content from your question-and-answer pairs. Each line in the file is a JSON entry with input_text and output_text fields. The input_text field contains the content from each question, and the output_text contains content from each respective answer. For example, the following JSONL file contains the question "How many people live in Beijing" and the respective answer: {"input_text": "CONTEXT: With over 21 million residents, Beijing is the world's most populous national capital city and is China's second largest city after Shanghai. 
QUESTION: How many people live in Beijing?", "output_text": "21 million people"} To follow step-by-step guidance for tuning your model directly in Google Cloud console, click Guide me. Guide me This walkthrough takes about 10 minutes to complete, but the model tuning can take an hour or more to finish processing. Delete the deployment When you no longer need the solution, delete the deployment. When you delete the deployment, you are no longer billed for the resources that you created. Before you delete Before you delete this solution, delete the Vector Search index deployment: Go to the Vector Search page. Go to Vector Search Click knowledge-base-index. Under Deployed indexes, click more_vert More. Click Undeploy. You don't need to wait for the undeploy operation to finish. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-genai-knowledge-base/. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete the tuned model You must manually delete the tuned model. To delete the tuned model, see Delete a model from Vertex AI Model Registry. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. 
The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. 
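Both of the preceding errors are transient: the documented fix is to wait and then run the terraform apply command again. If you script your deployments, a bounded retry loop such as the following minimal Python sketch (an illustration only, not part of the solution) can apply that guidance automatically. It assumes that the Terraform CLI is on your PATH, that you run it from the configuration directory, and that non-interactive approval is acceptable in your environment.

# Hypothetical retry wrapper for transient "terraform apply" failures, such as the
# API-propagation and "cannot assign requested address" errors described above.
import subprocess
import time

MAX_ATTEMPTS = 3
WAIT_SECONDS = 120  # allow newly enabled APIs a few minutes to propagate

for attempt in range(1, MAX_ATTEMPTS + 1):
    # -auto-approve skips the interactive "yes" prompt; use it only where that is acceptable.
    result = subprocess.run(["terraform", "apply", "-auto-approve"])
    if result.returncode == 0:
        print("Apply complete.")
        break
    print(f"Attempt {attempt} failed with exit code {result.returncode}; "
          f"retrying in {WAIT_SECONDS} seconds...")
    time.sleep(WAIT_SECONDS)
else:
    raise SystemExit("terraform apply did not succeed after retries; review the Terraform output.")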
Configuration error If you modify resource arguments in your main.tf file using unsupported values, an error like the following occurs: Error: Error creating Instance: googleapi: Error 400: Provided Redis version is not supported: REDIS_5_X │ com.google.apps.framework.request.StatusException: generic::INVALID_ARGUMENT: Provided Redis version is not supported: REDIS_5_X Details: │ [> │ { │ "@type": "type.googleapis.com/google.rpc.BadRequest", │ "fieldViolations": [ │ { │ "description": "Invalid value: REDIS_5_X", │ "field": "instance.redis_version" │ } │ ] │ } │ ] │ │ with google_redis_instance.main, │ on main.tf line 96, in resource "google_redis_instance" "main": │ 96: resource "google_redis_instance" "main" { In this case, the intent was to use Redis version 5, but the value specified for the instance.redis_version argument (REDIS_5_X) in the main.tf file is not valid. The correct value is REDIS_5_0, as enumerated in the Memorystore REST API documentation. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. 
Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/ Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. 
GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next Learn more about Generative AI on Vertex AI. For an overview of architectual principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. Send feedback \ No newline at end of file diff --git a/Generative_AI_on_Google_Cloud.txt b/Generative_AI_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..35077028e300ed900487ad90ad515274d9b1c41f --- /dev/null +++ b/Generative_AI_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/use-cases/generative-ai +Date Scraped: 2025-02-23T11:58:48.583Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceGenerative AI use casesGenerate text, images, code, and more with Google Cloud AITransform content creation and discovery, research, customer service, and developer efficiency—all with the power of Google Cloud generative AI.Try it in consoleRequest a demoWant training? Start a free course on generative AI.HighlightsHow businesses use generative AIHow does generative AI work?Google Cloud's generative AI offeringsVideo: Introduction to Generative AI22:54OverviewWhat is generative AI?Generative AI or generative artificial intelligence refers to the use of AI to create new content, like text, images, music, audio, and videos.Generative AI is powered by foundation models (large AI models) that can multi-task and perform out-of-the-box tasks, including summarization, Q&A, classification, and more. Plus, with minimal training required, foundation models can be adapted for targeted use cases with very little example data. Explore 300+ real-world gen AI use cases from the world's leading organizationsRead the blogHow does generative AI work?Generative AI works by using an ML model to learn the patterns and relationships in a dataset of human-created content. It then uses the learned patterns to generate new content. The most common way to train a generative AI model is to use supervised learning - the model is given a set of human-created content and corresponding labels. It then learns to generate content that is similar to the human-created content and labeled with the same labels.Common generative AI applicationsGenerative AI processes vast content, creating insights and answers via text, images, and user-friendly formats. Generative AI can be used to:- Improve customer interactions through enhanced chat and search experiences - Explore vast amounts of unstructured data through conversational interfaces and summarizations - Assist with repetitive tasks like replying to requests for proposals (RFPs), localizing marketing content in five languages, and checking customer contracts for compliance, and moreQuick start guides for key business areasExplore the value of gen AIWhat generative AI offerings does Google Cloud have?With Vertex AI, interact with, customize, and embed foundation models including Gemini into your applications—no ML expertise required. Access foundation models on Model Garden, tune models via a simple UI on Vertex AI Studio, or use models in a data science notebook.Vertex AI Agent Builder offers developers the fastest way to build generative AI powered search engines and AI agents. 
And, Gemini for Google Cloud serves as an always-on AI collaborator that helps users of all skill levels where they need it.Need help implementing generative AI solutions? Work with Google's experts to get up and running quickly with generative AI. Explore our consulting service offerings in generative AI which can help your organization with:- Creating new text and image content- Discovering trends and gaining insights from data - Summarizing for faster decision-making- Automating solutions and processes- Productionizing your generative AI solutionsCheck out the entire Google Cloud Consulting portfolio of service offerings. Google’s approach to responsible AIGoogle Cloud products and product updates have safety and security built in, are guided by Google's AI Principles, and are focused on helping enterprises control their use of IP, data, and individual privacy.Introduction to Responsible AI 9:37View moreHow It WorksGenerative AI is poised to usher in a new wave of interactive, multimodal experiences that transform how we interact with information, brands, and one another. Harnessing the power of decades of Google’s research, innovation, and investment in AI, Google Cloud is bringing businesses and governments the ability to generate text, images, code, videos, audio, and more from simple natural language prompts.Try it in Vertex AI StudioLatest on generative AI with Google CloudCommon UsesModernize customer serviceImprove employee productivity, customer analytics, and deflection rates with AIGenerative AI can support customer service agents by enabling them to quickly synthesize answers from internal knowledge bases and external references, as well as summarize conversations to quickly resolve issues. Analysts can easily view call center and customer analytics with predictive models, enabling them to personalize a customer’s experience. To improve deflection rates, organizations can use gen AI to auto-generate FAQ responses and enhance their websites with multimodal search and content summarization.8:14Improving live customer service experiences with generative AIDeployment guide: Github repositoryDocumentation: Get started with conversational AICodelab: Create a generative chat app How-tosImprove employee productivity, customer analytics, and deflection rates with AIGenerative AI can support customer service agents by enabling them to quickly synthesize answers from internal knowledge bases and external references, as well as summarize conversations to quickly resolve issues. Analysts can easily view call center and customer analytics with predictive models, enabling them to personalize a customer’s experience. To improve deflection rates, organizations can use gen AI to auto-generate FAQ responses and enhance their websites with multimodal search and content summarization.8:14Improving live customer service experiences with generative AIDeployment guide: Github repositoryDocumentation: Get started with conversational AICodelab: Create a generative chat app Improve developer efficiency Drive developer productivity with code assistanceGemini Code Assist offers AI-powered assistance to help developers build applications with higher velocity and quality in popular code editors like VS Code and JetBrains. It supports your private codebase wherever it lives — even across multiple repositories. With gen AI support in Vertex AI, enterprises can tune Codey, a family of code models, using their own code base. 
The ability to customize these models helps you generate code that complies with established coding standards and conventions while leveraging custom endpoints and proprietary codebases for code generation tasks. 3:12Improving developer efficiency with generative AIDeployment guide: Github repositoryGemini Code Assist set-up guideHow-tosDrive developer productivity with code assistanceGemini Code Assist offers AI-powered assistance to help developers build applications with higher velocity and quality in popular code editors like VS Code and JetBrains. It supports your private codebase wherever it lives — even across multiple repositories. With gen AI support in Vertex AI, enterprises can tune Codey, a family of code models, using their own code base. The ability to customize these models helps you generate code that complies with established coding standards and conventions while leveraging custom endpoints and proprietary codebases for code generation tasks. 3:12Improving developer efficiency with generative AIDeployment guide: Github repositoryGemini Code Assist set-up guideGen AI for marketing Supercharge creativity, productivity, and impact at scale Generative AI helps marketers engage with customers and drive positive business outcomes.Top use cases include:Utilizing chatbots and AI agents that react in real time to deliver more accurate and personalized responses using your own datasets.Synthesizing data to understand customer profiles and generating content to reach target customers.Generating new product concepts and designs using textual descriptions and parameters.Summarize large documents with a preconfigured generative AI solutionTry Gemini in Vertex AI 2:44Transform marketing with Generative AI Deployment guide: Github repository View sample reference architecture Guide to generative AI on Vertex AI How-tosSupercharge creativity, productivity, and impact at scale Generative AI helps marketers engage with customers and drive positive business outcomes.Top use cases include:Utilizing chatbots and AI agents that react in real time to deliver more accurate and personalized responses using your own datasets.Synthesizing data to understand customer profiles and generating content to reach target customers.Generating new product concepts and designs using textual descriptions and parameters.Summarize large documents with a preconfigured generative AI solutionTry Gemini in Vertex AI 2:44Transform marketing with Generative AI Deployment guide: Github repository View sample reference architecture Guide to generative AI on Vertex AI Website modernization Unlock business value across the consumer journeyTransform your website and mobile app experience with gen AI. Using Vertex AI's content generation tools, powered by Gemini, you can generate text, audio, and images at scale. Localize and translate content to engage with customers in new markets with cultural relevance. Build next-generation search experiences on your website to enable users to find accurate information quickly. Deploy AI agents to retrieve information and submit basic transactions, while having granular control over what your AI agent says. Start website modernization discovery 3:53Enhance content and improve navigation with generative AIDeployment guide: Github repositoryDeploy a gen AI FAQ knowledge base View sample reference architecture How-tosUnlock business value across the consumer journeyTransform your website and mobile app experience with gen AI. 
Using Vertex AI's content generation tools, powered by Gemini, you can generate text, audio, and images at scale. Localize and translate content to engage with customers in new markets with cultural relevance. Build next-generation search experiences on your website to enable users to find accurate information quickly. Deploy AI agents to retrieve information and submit basic transactions, while having granular control over what your AI agent says. Start website modernization discovery 3:53Enhance content and improve navigation with generative AIDeployment guide: Github repositoryDeploy a gen AI FAQ knowledge base View sample reference architecture Take the next step with Google CloudTry Google Cloud AI products in the consoleGo to my consoleHave a large project?Contact salesMeet Vertex AI, our unified platform for using generative AILearn moreEasily build no code conversational AI agents with Agent BuilderLearn moreView examples of prompts and responses from our generative AI modelsExplore sample promptsBusiness CaseUnlock the full potential of gen AIGrounding generative AI in enterprise truthHow grounding your models helps you gain a competitive edge.Download the guideAdditional resourcesBetter search, better business: How gen AI takes enterprise search to a whole new level.Download the guideGen AI Navigator: Personalized recommendations for getting startedTake the assessmentThe ROI of Gen AI: The results are inGet the reportAnalyst ReportsGoogle is a Leader in The Forrester Wave™: AI Foundation Models For Language, Q2 2024. Read the report.Google named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024, receiving the highest scores of any vendor evaluated in both Current Offering and Strategy.Google named a leader in the Forrester Wave: AI/ML Platforms, Q3 2024. Learn more. \ No newline at end of file diff --git a/Geospatial_analytics.txt b/Geospatial_analytics.txt new file mode 100644 index 0000000000000000000000000000000000000000..b04b6b2b5f7cc9ba46fa158fd356d8ceee68170c --- /dev/null +++ b/Geospatial_analytics.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/geospatial-analytics-architecture +Date Scraped: 2025-02-23T11:49:24.810Z + +Content: +Home Docs Cloud Architecture Center Send feedback Geospatial analytics architecture Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-25 UTC This document helps you understand Google Cloud geospatial capabilities and how you can use these capabilities in your geospatial analytics applications. This document is intended for geographic information systems (GIS) professionals, data scientists, and application developers who want to learn how to use the products and services available in Google Cloud to deliver geospatial insights to business stakeholders. Overview Google Cloud provides a comprehensive suite of geospatial analytics and machine learning capabilities that can help you develop insights to understand more about the world, your environment, and your business. Geospatial insights that you get from these Google Cloud capabilities can help you make more accurate and sustainable business decisions without the complexity and expense of managing traditional GIS infrastructure. Geospatial analytics use cases Many critical business decisions revolve around location data. 
Insights gleaned from geospatial analytics are applicable across a number of industries, businesses, and markets, as described in the following examples: Assessing environmental risk. Understand the risks posed by environmental conditions by predicting natural disasters like flooding and wildfires, which can help you more effectively anticipate risk and plan for it. Optimizing site selection. Combine proprietary site metrics with publicly available data like traffic patterns and geographic mobility, and then use geospatial analytics to find the optimum locations for your business and to predict financial outcomes. Planning logistics and transport. Better manage fleet operations such as last-mile logistics, analyze data from autonomous vehicles, manage precision railroading, and improve mobility planning by incorporating geospatial data into business decision-making. Understanding and improving soil health and yield. Analyze millions of acres of land to understand soil characteristics and help farmers analyze the interactions among variables that affect crop production. Managing sustainable development. Map economic, environmental, and social conditions to determine focus areas for protecting and preserving the environment. Geospatial cloud building blocks Your geospatial analytics architecture can consist of one or more geospatial cloud components, depending on your use case and requirements. Each component provides different capabilities, and these components work together to form a unified, scalable geospatial cloud analytics architecture. Data is the raw material for delivering geospatial insights. Quality geospatial data is available from a number of public and proprietary sources. Public data sources include BigQuery public datasets, the Earth Engine catalog, and the United States Geological Survey (USGS). Proprietary data sources include internal systems such as SAP and Oracle, and internal GIS tooling such as Esri ArcGIS Server, Carto, and QGIS. You can aggregate data from multiple business systems, such as inventory management, marketing analytics, and supply chain logistics, and then combine that data with geospatial source data and send the results to your geospatial data warehouse. Depending on a source's data type and destination, you might be able to load geospatial data sources directly into your analytics data warehouse. For example, BigQuery has built-in support for loading newline-delimited GeoJSON files, and Earth Engine has an integrated data catalog with a comprehensive collection of analysis-ready datasets. You can load other data in other formats through a geospatial data pipeline that preprocesses the geospatial data and loads it into your enterprise data warehouse in Google Cloud. You can build production-ready data pipelines using Dataflow. Alternatively, you can use a partner solution such as FME Spatial ETL. The enterprise data warehouse is the core of your geospatial analytics platform. After geospatial data is loaded into your data warehouse, you can start building geospatial applications and insights by using some of the following capabilities: The machine learning capabilities that are available in BigQuery ML and Vertex AI. Reporting and business intelligence tools like BigQuery GeoViz, Looker Studio, and Looker. The API services that are available in Google Cloud such as Apigee. The geospatial query and analysis capabilities available in BigQuery. The SQL geography functions in BigQuery, which let you perform geospatial computations and queries. 
The machine learning capabilities built into Earth Engine. Your architecture then serves as a single system that you can use to store, process, and manage data at scale. The architecture also lets you build and deploy advanced analytics solutions that can produce insights that are not feasible on systems that don't include these features. Geospatial data types, formats, and coordinate systems To aggregate your geospatial data into a data warehouse like BigQuery, you must understand the geospatial data formats that you're likely to encounter in internal systems and from public sources. Data types Geospatial data types fall into two categories: vector and raster. Vector data is composed of vertices and line segments, as shown in the following diagram. Examples of vector data include parcel boundaries, public rights-of-way (roads), and asset locations. Because vector data can be stored in a tabular (row and column) format, geospatial databases such as BigQuery and PostGIS in Cloud SQL excel at storing, indexing, and analyzing vector data. Raster data is composed of grids of pixels. Examples of raster data include atmospheric measurements and satellite imagery, as shown in the following examples. Earth Engine is designed for planetary-scale storage and analysis of raster data. Earth Engine includes the ability to vectorize rasters, which can help you classify regions and understand patterns in raster data. For example, by analyzing atmospheric raster data over time, you can extract vectors that represent prevailing wind currents. You can load each individual raster pixel into BigQuery by using a process called polygonization, which converts each pixel directly to a vector shape. Geospatial cloud applications often combine both types of data to produce holistic insights that leverage the strengths of data sources from each category. For example, a real-estate application that helps identify new development sites might combine vector data such as parcel boundaries with raster data such as elevation data to minimize flood risk and insurance costs. Data formats The following table lists popular geospatial data formats and ways in which they can be used in your analytics platform. Data source format Description Examples Shapefile A vector data format that was developed by Esri. It lets you store geometric locations and associate attributes. Census tract geometries, building footprints WKT A human-readable vector data format that's published by OGC. Support for this format is built into BigQuery. Representation of geometries in CSV files WKB A storage-efficient binary equivalent of WKT. Support for this format is built into BigQuery. Representation of geometries in CSV files and databases KML An XML-compatible vector format used by Google Earth and other desktop tools. The format is published by OGC. 3D building shapes, roads, land features Geojson An open vector data format that's based on JSON. Features in web browsers and mobile applications GeoTIFF A widely used raster data format. This format lets you map pixels in a TIFF image to geographic coordinates. Digital elevation models, Landsat Coordinate reference systems All geospatial data, regardless of type and format, includes a coordinate reference system that lets geospatial analytics tools such as BigQuery and Earth Engine associate coordinates with a physical location on the earth's surface. There are two basic types of coordinate reference systems: geodesic and planar. 
Geodesic data takes the curvature of the earth into account, and uses a coordinate system based on geographic coordinates (longitude and latitude). Geodesic shapes are commonly referred to as geographies. The WGS 84 coordinate reference system that's used by BigQuery is a geodesic coordinate system. Planar data is based on a map projection such as Mercator that maps geographic coordinates to a two-dimensional plane. To load planar data into BigQuery, you need to reproject planar data into the WGS 84 coordinate system. You can do this reprojection manually by using your existing GIS tooling, or by using a geospatial cloud data pipeline (see the next section). Considerations for building a geospatial cloud data pipeline As noted, you can load some geospatial data directly into BigQuery and Earth Engine, depending on data type. BigQuery lets you load vector data in the WKT, WKB, and GeoJSON file formats if the data uses the WGS 84 reference system. Earth Engine integrates directly with the data that's available in the Earth Engine catalog and supports loading raster images directly in the GeoTIFF file format. You might encounter geospatial data that's stored in other formats and that can't be loaded directly into BigQuery. Or the data might be in a coordinate reference system that you must first reproject into the WGS 84 reference system. Similarly, you might encounter data that needs to be preprocessed, simplified, and corrected for errors. You can load preprocessed geospatial data into BigQuery by building geospatial data pipelines using Dataflow. Dataflow is a managed analytics service that supports streaming and batch processing of data at scale. You can use the geobeam Python library that extends Apache Beam and adds geospatial processing capabilities to Dataflow. The library lets you read geospatial data from a variety of sources. The library also helps you process and transform the data and load it into BigQuery to use as your geospatial cloud data warehouse. The geobeam library is open source, so you can modify and extend it to support additional formats and preprocessing tasks. Using Dataflow and the geobeam library, you can ingest and analyze massive amounts of geospatial data in parallel. The geobeam library works by implementing custom I/O connectors. The geobeam library includes GDAL, PROJ, and other related libraries to make it easier to process geospatial data. For example, geobeam automatically reprojects all input geometries to the WGS84 coordinate system used by BigQuery to store, cluster, and process spatial data. The geobeam library follows Apache Beam design patterns, so your spatial pipelines work similar to non-spatial pipelines. The difference is that you use the geobeam custom FileBasedSource classes to read from spatial source files. You can also use the built-in geobeam transform functions to process your spatial data and to implement your own functions. The following example shows how you can create a pipeline that reads a raster file, polygonizes the raster, reprojects it to WGS 84, and writes the polygons to BigQuery. 
with beam.Pipeline(options=pipeline_options) as p: (p | beam.io.Read(GeotiffSource(known_args.gcs_url)) | 'MakeValid' >> beam.Map(geobeam.fn.make_valid) | 'FilterInvalid' >> beam.Filter(geobeam.fn.filter_invalid) | 'FormatRecords' >> beam.Map(geobeam.fn.format_record, known_args.band_column, known_args.band_type) | 'WriteToBigQuery' >> beam.io.WriteToBigQuery('DATASET.TABLE')) Geospatial data analysis in BigQuery When the data is in BigQuery, you can transform, analyze, and model the data. For example, you can query the average elevation of a land parcel by computing the intersection of those geographies and joining the tables using standard SQL. BigQuery offers many functions that let you construct new geography values, compute the measurements of geographies, explore the relationship between two geographies, and more. You can do hierarchical geospatial indexing with S2 grid cells using BigQuery S2 functions. In addition, you can use the machine learning features of BigQuery ML to identify patterns in the data, such as creating a k-means machine learning model to cluster geospatial data. Geospatial visualization, reports, and deployment Google Cloud provides several options for visualizing and reporting your spatial data and insights in order to deliver them to users and applications. The methods you use to represent your spatial insights depend on your business requirements and objectives. Not all spatial insights are represented graphically. Many insights are best delivered through an API service like Apigee, or by saving them into an application database like Firestore so that the insights can power features in your user-facing applications. While you're testing and prototyping your geospatial analyses, you can use BigQuery GeoViz as a way to validate your queries and to generate a visual output from BigQuery. For business intelligence reporting, you can use Looker Studio or Looker to connect to BigQuery and combine your geospatial visualizations with a wide variety of other report types in order to present a unified view of the insights you need. You can also build applications that let your users interact with geospatial data and insights and incorporate those insights into your business applications. For example, by using the Google Maps Platform, you can combine geospatial analytics, machine learning, and data from the Maps API into a single map-based application. By using open source libraries like deck.gl, you can include high-performance visualizations and animations to tell map-based stories and better represent your data. Google also has a robust and growing ecosystem of partner offerings that can help you make the most of your geospatial insights. Carto, NGIS, Climate Engine, and others each have specialized capabilities and offerings that you can customize to your industry and business. Reference architecture The following diagram shows a reference architecture that illustrates how the geospatial cloud components interact. The architecture has two key components: the geospatial data pipeline and the geospatial analytics platform. As the diagram shows, geospatial source data is loaded into Cloud Storage and Earth Engine. From either of these products, the data can be loaded through a Dataflow pipeline using geobeam to perform common preprocessing operations such as feature validation and geometry reprojection. Dataflow writes the pipeline output into BigQuery. 
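After the pipeline writes vector data to BigQuery, you can query it with the geography functions described earlier. The following Python sketch uses the google-cloud-bigquery client library as a minimal illustration; the project, dataset, table, and column names are hypothetical placeholders rather than objects that the pipeline above creates.

# Minimal illustration of querying geospatial data in BigQuery from Python.
# The project, dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Find parcels whose boundary lies within 1 km of a point of interest.
# ST_GEOGPOINT builds a GEOGRAPHY from longitude and latitude (WGS 84), and
# ST_DWITHIN tests distance in meters.
sql = """
    SELECT
      parcel_id,
      ST_AREA(boundary) AS area_m2
    FROM `my_project.my_dataset.parcels`
    WHERE ST_DWITHIN(boundary, ST_GEOGPOINT(@lng, @lat), 1000)
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("lng", "FLOAT64", -122.4194),
        bigquery.ScalarQueryParameter("lat", "FLOAT64", 37.7749),
    ]
)

for row in client.query(sql, job_config=job_config).result():
    print(row.parcel_id, row.area_m2)

The same SQL runs unchanged in the BigQuery console or in BigQuery GeoViz, which is a convenient way to spot-check the geometries visually.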
When the data is in BigQuery, it can be analyzed in place using BigQuery analytics and machine learning, or it can be accessed by other services such as Looker Studio, Looker, Vertex AI, and Apigee. What's next Getting started with geospatial analytics BigQuery geospatial tutorials Earth Engine tutorials Geospatial analytics and AI For an overview of architectual principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. Send feedback \ No newline at end of file diff --git a/Get_started(1).txt b/Get_started(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..040304454571032b9e42afba58786f12d3085c39 --- /dev/null +++ b/Get_started(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-from-aws-get-started +Date Scraped: 2025-02-23T11:51:52.765Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate from AWS to Google Cloud: Get started Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-21 UTC Google Cloud provides tools, products, guidance, and professional services to help you migrate workloads, data, and processes from Amazon Web Services (AWS) to Google Cloud. This document introduces a discussion about how to design, implement, and validate a plan to migrate from AWS to Google Cloud. The discussion is intended for cloud administrators who want details about how to plan and implement a migration process. It's also intended for decision-makers who are evaluating the opportunity to migrate and who want to explore what migration looks like. This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents: Get started (this document) Migrate from Amazon EC2 to Compute Engine Migrate from Amazon S3 to Cloud Storage Migrate from Amazon EKS to Google Kubernetes Engine Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server Migrate from AWS Lambda to Cloud Run Migrating your workloads, data, and processes from AWS to Google Cloud can be a challenging task and can take days, weeks, or months to complete, depending on the scope and size of the migration. Therefore, we recommend that you plan and execute your migration carefully. AWS and Google Cloud products that target similar use cases might seem similar in the way you provision, configure, and use the resources that they provide. However, despite similarities, these resources might differ significantly in the way they work and in their requirements. We therefore recommend that you assess the ways in which similar AWS and Google Cloud resources work before you plan your migration. 
This series provides guidance about the following migration journeys: From AWS compute services to Google Cloud Migrate from Amazon EC2 to Compute Engine Migrate VMware VMs to your Google Cloud VMware Engine private cloud Migrate from Amazon Elastic Kubernetes Service (Amazon EKS) to Google Kubernetes Engine Migrate from AWS Lambda to Cloud Run From AWS data storage services to Google Cloud Migrate from Amazon S3 to Cloud Storage From AWS database services to Google Cloud Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server From AWS data analytics services to Google Cloud Amazon Redshift to BigQuery migration What's next Read about other AWS to Google Cloud migration journeys. Learn how to compare AWS and Azure services to Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Get_started(2).txt b/Get_started(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..43f8fdf2c4621f73f9b28c559699681cc8e1a70f --- /dev/null +++ b/Get_started(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-across-regions +Date Scraped: 2025-02-23T11:52:16.231Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate across Google Cloud regions: Get started Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-08 UTC This document series helps you prepare for migrating your workloads and data across Google Cloud regions. In this series, you learn how to design reliable, single-region environments, and how to set up security and compliance controls to help ensure that your workloads and data stay inside specific Google Cloud regions. This series also provides guidance on how to expand your environment to multiple regions. This series is useful if you're planning to do any of these actions, or if you're evaluating the opportunity to do so in the future and want to explore what it might look like. This document is part of the following series: Get started (this document) Design resilient single-region environments on Google Cloud Architect your workloads Prepare data and batch workloads for the migration This series assumes that you have read and are familiar with Migrate to Google Cloud: Get started, which describes the general migration framework used in this series. This series is also part of a larger set of migration content. For an overview of all related migration content, see Migration resources. Use the following list to help you get started with the documents in this series and with related migration content: Architecting your environments for reliability. When designing your Google Cloud environments, there are several architectural choices to make, including how to distribute resources across Google Cloud regions and zones. For more information about how to design your Google Cloud environments, and about how to design reliable single-region environments, refer to Design resilient single-region environments on Google Cloud. Preparing for and migrating across Google Cloud regions. 
Preparing for a migration across Google Cloud regions helps you avoid potential issues during such a migration, and also reduces the time and effort required to complete the migration. For example, you might want to prepare for a future migration to another region because you don't yet know if a particular region would be the best fit for your environment. Or you might prepare for a migration because you anticipate that your requirements about the preferred region might change in the future. You might even prepare for a migration because you're considering a migration to a soon-to-be-opened region. The guidance in this series is also useful if you didn't plan in advance for a migration across regions or for an expansion to multiple regions. In this case, you might need to spend additional effort to prepare your infrastructure, workloads, and data for the migration across regions and for the expansion to multiple regions. For more information about preparing for a migration across Google Cloud regions, see the following documents: Design resilient single-region environments Prepare data and batch workloads for migration across regions Setting up boundaries, and security and compliance controls to ensure that your workloads and data stay inside certain Google Cloud regions. You might have to comply with security and regulatory requirements so that your workloads and data can only reside in specific Google Cloud regions. If you have this compliance requirement, you might need to implement boundaries, controls, and auditing to ensure locality, sovereignty, privacy, and confidentiality on top of the guarantees that Google Cloud offers. For example, you might need to make your environment compliant with regulations and security requirements that mandate certain workloads and data never leave a specific region residing in a particular political, administrative, or state entity. Minimizing the costs of your single- and multi-region environments, and of migrations across Google Cloud regions. As your environments and your Google Cloud footprint grow, you might consider implementing mechanisms and processes to control and reduce the costs that are associated with your environments and your migrations across regions. For example, you can provision resources in specific regions, optimize network traffic patterns, streamline and automate processes, and automatically scale resources with demand. For more information about reducing costs, refer to Migrate to Google Cloud: Minimize costs. What's next Learn how to design resilient single-region environments on Google Cloud. Learn how to prepare data and batch workloads for the migration. Learn how to minimize costs of your Google Cloud environment and your migration across regions. Read an in-depth analysis of deployment archetypes for cloud applications. Learn about the migration framework by reading Migrate to Google Cloud: Get started. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthor: Marco Ferrari | Cloud Solutions ArchitectOther contributor: Lee Gates | Group Product Manager Send feedback \ No newline at end of file diff --git a/Get_started.txt b/Get_started.txt new file mode 100644 index 0000000000000000000000000000000000000000..1a21aecff8b8fdb332cb8e4e4b76ca42e98b2e09 --- /dev/null +++ b/Get_started.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-gcp-getting-started +Date Scraped: 2025-02-23T11:51:30.753Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Get started Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC This document helps you plan, design, and implement the process of migrating your workloads to Google Cloud. Moving apps from one environment to another is a challenging task, even for experienced teams, so you need to plan and execute your migration carefully. This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started (this document) Migrate to Google Cloud: Assess and discover your workloads Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs This document is useful if you're planning a migration from an on-premises environment, from a private hosting environment, from another cloud provider to Google Cloud, or if you're evaluating the opportunity to migrate and want to explore what it might look like. Beginning the journey When planning your migration to Google Cloud, you start by defining the environments that are involved in the migration. Your starting point can be an on-premises environment, a private hosting environment, or another public cloud environment. An on-premises environment is an environment where you have full ownership and responsibility. You retain full control over every aspect of the environment, such as cooling, physical security, and hardware maintenance. In a private hosting environment such as a colocation facility, you outsource part of the physical infrastructure and its management to an external party. This infrastructure is typically shared between customers. In a private hosting environment, you don't have to manage the physical security and safety services. Some hosting environments let you manage part of the physical hardware, such as servers, racks, and network devices, while others manage that hardware for you. Typically, power and network cabling are provided as a service so you don't have to manage them. You maintain full control over hypervisors that virtualize physical resources, the virtualized infrastructure that you provision, and workloads that you run on that infrastructure. A public cloud environment has the advantage that you don't have to manage the whole resource stack by yourself. You can focus on the aspect of the stack that is most valuable to you. Like in a private hosting environment, you don't have to manage the underlying physical infrastructure. Additionally, you don't have to manage the resource virtualization hypervisor. You can build a virtualized infrastructure and can deploy your workloads in this new infrastructure. 
You can also buy fully managed services, where you care only about your workloads, handing off the operational burden of managing runtime environments. For each environment, this document evaluates the following aspects as well as who should provide and manage the relevant services: Resources On-premises environment Private hosting environment Public cloud environment Physical security and safety You Service provider Service provider Power and network cabling You Service provider Service provider Hardware (incl. maintenance) You Depends on service provider Service provider Virtualization platform You You Service provider App resources You You You (eventually leveraging fully managed services) In this document, the target environment is Google Cloud. After you define your starting and target environments, you define the workload types and the related operational processes that are in scope for the migration. This document considers two types of workloads and operations: legacy and cloud-optimized. Legacy workloads and operations are developed without any consideration for cloud environments. These workloads and operations can be difficult to modify and expensive to run and maintain because they usually don't support any type of scalability. Cloud-optimized workloads and operations are natively scalable, portable, available, and secure. The workloads and operations can help increase developer productivity and agility, because developers can focus on the actual workloads, rather than spending effort to manage development and runtime environments, or dealing with manual and cumbersome deployment processes. Google Cloud also has a shared responsibility model for security. Google Cloud is responsible for the physical security and the security of the infrastructure, while you're responsible for the security of the workloads you deploy to the infrastructure. Considering these environment and workload types, your starting situation is one of the following: On-premises or private hosting environment with legacy workloads and operations. On-premises or private hosting environment with cloud-optimized workloads and operations. Public cloud or private hosting environment with legacy workloads and operations. Public cloud or private hosting environment with cloud-optimized workloads and operations. The migration process depends on your starting point. Migrating a workload from a legacy on-premises environment or private hosting environment to a cloud-optimized environment, such as a public cloud, can be challenging and risky. Successful migrations change the workload to migrate as little as possible during the migration operations. Moving legacy on-premises apps to the cloud often requires multiple migration steps. Types of migrations This document defines the following major types of migrations: Rehost: lift and shift Replatform: lift and optimize Refactor: move and improve Re-architect: continue to modernize Rebuild: remove and replace, sometimes called rip and replace Repurchase In the following sections, each type of migration is defined with examples of when to use each type. Rehost: lift and shift In a rehost migration, you move workloads from a source environment to a target environment with minor or no modifications or refactoring. The modifications you apply to the workloads to migrate are only the minimum changes you need to make in order for the workloads to operate in the target environment. 
A rehost migration is ideal when a workload can operate as-is in the target environment, or when there is little or no business need for change. This migration is the type that requires the least amount of time because the amount of refactoring is kept to a minimum. There might be technical issues that force a rehost migration. If you cannot refactor a workload to migrate and cannot decommission the workload, you must use a rehost migration. For example, it can be difficult or impossible to modify the source code of the workload, or the build process isn't straightforward so producing new artifacts after refactoring the source code might not be possible. Rehost migrations are the easiest to perform because your team can continue to use the same set of tools and skills that they were using before. These migrations also support ready-made software. Because you migrate existing workloads with minimal refactoring, rehost migrations tend to be the quickest, compared to refactor or rebuild migrations. However, after a rehost migration, the workloads that are running in the target environment aren't optimized for the cloud. These workloads don't take full advantage of cloud platform features, such as horizontal scalability, fine-grained pricing, and highly managed services. Replatform: lift and optimize In a replatform migration, you lift the existing workloads and then optimize them for the new cloud environment. A replatform migration is best for organizations that want to take advantage of all the core competencies of the cloud. These competencies include elastic computing, redundancy, improved performance, and security. For example, you might replatform a workload to the cloud in order to take advantage of a cloud-based microservice architecture or containers in Google Kubernetes Engine. These workloads will then have higher performance and more efficiency running in the cloud. However, replatform migrations take more work to accomplish than rehost migrations. The new cloud platform will have a different underlying codebase, which requires several rounds of testing to make sure that everything is running at its optimal level. Refactor: move and improve In a refactor migration, you modify the workloads to take advantage of cloud capabilities, and not just modify the workloads to make them work in the new environment. You can improve each workload for performance, features, cost, and user experience. You can modify the workloads while you're migrating them to the cloud, or even before migrating them. For example, if you don't have substantial experience with cloud migrations, you might prefer to modify the workloads while you're migrating. However, if you have cloud migration experience, you may already have an idea of the modifications that the workloads need to take full advantage of cloud capabilities. If the current architecture or infrastructure of an app isn't supported in the target environment as it is, a certain amount of refactoring is necessary to overcome these limits. Another reason to choose the refactor approach is when a major update to the workload is necessary in addition to the updates you need to make to migrate. Refactor migrations let your app take advantage of features of a cloud platform, such as scalability and high availability. You can also architect the improvement to increase the portability of the app. However, refactor migrations take longer than rehost migrations because the workloads must be refactored in order for the app to migrate. 
A refactor migration also requires that you learn new skills. Re-architect: continue to modernize Re-architect migrations are similar to refactor migrations. However, instead of restructuring how the workload code works, re-architect migrations change how that code functions. Those code changes optimize the workload and take advantage of cloud-optimized properties such as scalability, security, and agility. For example, a re-architect migration can take one large, monolithic workload and turn it into several independent microservices that you deploy on Google Cloud. A re-architect migration is more complex than a refactor migration, so it takes more time and effort. A re-architect migration can also potentially introduce bugs or security issues into the new workloads. Thus, a re-architect migration requires several rounds of testing to make sure that everything is running at its optimal level. Rebuild: remove and replace In a rebuild migration, you decommission an existing app and completely redesign and rewrite it as a fully cloud-optimized app. If the current app isn't meeting your goals—for example, you don't want to maintain it, it's too costly to migrate using one of the previously mentioned approaches, or it's not supported on Google Cloud—you can do a rebuild migration. Rebuild migrations let your app take full advantage of Google Cloud features, such as horizontal scalability, highly managed services, and high availability. Because you're rewriting the app from scratch, you also remove the technical debt of the existing, legacy version. However, rebuild migrations can take longer than rehost or refactor migrations. Moreover, this type of migration isn't suitable for ready-made apps because it requires rewriting the app. You need to evaluate the extra time and effort to redesign and rewrite the app as part of its lifecycle. A rebuild migration also requires new skills. You need to use new toolchains to provision and configure the new environment and to deploy the app in that environment. Repurchase A repurchase migration is when you move from a purchased on-premises workload to a cloud-hosted software-as-a-service (SaaS) equivalent. For example, you can move from on-premises collaboration software and local storage to Google Workspace. From a resources perspective, a repurchase migration might be a lot easier than refactoring, rebuilding, or re-architecting. However, a repurchase migration might be a lot more expensive and you might not get the granular features of controlling your own cloud environments. Google Cloud Adoption Framework Before starting your migration, you should evaluate the maturity of your organization in adopting cloud technologies. The Google Cloud Adoption Framework serves both as a map for determining where your business information technology capabilities are now, and as a guide to where you want to be. You can use this framework to assess your organization's readiness for Google Cloud and what you need to do to fill in the gaps and develop new competencies, as illustrated in the following diagram. The framework assesses four themes: Learn. The quality and scale of your learning programs. Lead. The extent to which your IT departments are supported by a mandate from leadership to migrate to Google Cloud. Scale. The extent to which you use cloud-optimized services, and how much operational automation you have in place. Secure. The capability to protect your current environment from unauthorized and inappropriate access. 
For each theme, you should be in one of the following three phases, according to the framework: Tactical. There are no coherent plans covering all the individual workloads you have in place. You're mostly interested in a quick return on investments and little disruption to your IT organization. Strategic. There is a plan in place to develop individual workloads with an eye to future scaling needs. You're interested in the mid-term goal to streamline operations to be more efficient than they are today. Transformational. Cloud operations work smoothly, and you use data that you gather from those operations to improve your IT business. You're interested in the long-term goal of making the IT department one of the engines of innovation in your organization. When you evaluate the four topics in terms of the three phases, you get the Cloud Maturity Scale. In each theme, you can see what happens when you move from adopting new technologies when needed, to working with them more strategically across the organization—which naturally means deeper, more comprehensive, and more consistent training for your teams. The migration path It's important to remember that a migration is a journey. You are at point A with your existing infrastructure and environments, and you want to reach point B. To get from A to B, you can choose any of the options previously described. The following diagram illustrates the path of this journey. There are four phases of your migration: Assess. In this phase, you perform a thorough assessment and discovery of your existing environment in order to understand your app and environment inventory, identify app dependencies and requirements, perform total cost of ownership calculations, and establish app performance benchmarks. For more information about the assess phase, see Migrate to Google Cloud: Assess and discover your workloads, Migrate to Google Cloud: Best practices, and Migration Center: Start an asset discovery. Plan. In this phase, you create the basic cloud infrastructure for your workloads to live in and plan how you will move apps. This planning includes identity management, organization and project structure, networking, sorting your apps, and developing a prioritized migration strategy. For more information about planning and building your foundation, see Migrate to Google Cloud: Plan and build your foundation. Deploy. In this phase, you design, implement and execute a deployment process to move workloads to Google Cloud. You might also have to refine your cloud infrastructure to deal with new needs. For more information about how to deploy your workloads to Google Cloud and how to migrate data to Google Cloud, see Migrate to Google Cloud: Deploy your workloads, Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments, and Migrate to Google Cloud: Transfer your large datasets. Optimize. In this phase, you begin to take full advantage of cloud-optimized technologies and capabilities to expand your business's potential to things such as performance, scalability, disaster recovery, costs, training, as well as opening the doors to machine learning and artificial intelligence integrations for your app. For more information about how to optimize your environment, see Migrate to Google Cloud: Optimize your environment. For more information about costs, see Migrate to Google Cloud: Minimize costs. 
Finding help Google Cloud offers various options and resources for you to find the necessary help and support to best take advantage of Google Cloud services. Self-service resources If you don't need dedicated support, you can use these self-service resources: Product documentation. Google Cloud provides documentation for each of its products and services, as well as for APIs. Architecture Center documentation. The migration section of the Architecture Center covers many migration scenarios. For example, Migration resources provides guidance about your migration journey to Google Cloud. Tools. Google Cloud provides several products and services to help you migrate. For example: Google Cloud Migration Center is a unified platform that helps you accelerate your end-to-end cloud journey from your current on-premises or cloud environments to Google Cloud. Migrate to Virtual Machines is a product for migrating physical servers and virtual machines from on-premises and cloud environments to Google Cloud. Migrate to VMs lets you migrate a virtual machine to Google Cloud in a few minutes, where the data is copied in the background but the virtual machines are completely operational. Storage Transfer Service lets you bring data to Cloud Storage from other cloud providers, online resources, or local data. Database Migration Service is a product that helps you migrate your databases to Google Cloud. Transfer Appliance is a hardware appliance you can use to migrate large volumes of data (from hundreds of terabytes up to 1 petabyte) to Google Cloud without disrupting business operations. BigQuery Migration Service is a comprehensive solution for migrating your data warehouse to BigQuery. Whitepapers. These papers include reference architectures, case studies, best practices, and advanced tutorials. Media content. You can listen to the Google Cloud podcast or watch any of the videos on the Google Cloud YouTube channel. These resources discuss a wide range of topics from product explanations to development strategies. Online courses and labs. Google Cloud has several courses on Coursera that include video content, reading materials, and labs. You can also take labs using Google Cloud Skills Boost or participate in live online classes. Technology partners Google Cloud has partnered with multiple companies to enable you to use their products. Some of the offerings might be free to use so ask the company and your Google Cloud account manager. System integrators Google Cloud partners not just with product and technology companies, but with system integrators that can provide hands-on-keyboard assistance. In the partners list, you can find a list of system integrators that specialize in cloud migrations. Google Cloud Professional Services Our Professional Services team is here to help you get the most out of your investment in Google Cloud. Cloud Plan and Foundations: get help with your migration Professional Services can help you plan your migration and deploy your workloads in production with our Cloud Plan and Foundations offering. These experts provide your team with guidance through each phase of migrating your workload into production, from setting up Google Cloud foundations to optimize the platform for your unique workload needs and deploying the workload. The objectives of Cloud Plan and Foundations are: Set up the Google Cloud foundation. Create design documentation. Plan deployment and migration activities. Deploy workloads into production. Track issues and risks. 
Professional Services guides your team through the following activities and deliverables: Conducting technical kickoff workshops. Building a technical design document. Creating a migration plan. Creating a program charter. Providing project management. Providing technical expertise. Cloud Sprint: accelerate your migration to Google Cloud Cloud Sprint is an intensive workshop that accelerates your app migration to Google Cloud. In this workshop, Google Cloud Professional Services leads one of your teams through interactive discussions, whiteboarding sessions, and reviewing target apps to migrate to Google Cloud. During the Cloud Sprint, Professional Services works alongside your team members to help you gain first-hand experience with cloud solutions with required deployment activities to help you understand your next steps for future Google Cloud migrations. Training: Develop your team's skills Google Cloud Professional Services can provide training in fields based on your team's needs. What's next Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Global.txt b/Global.txt new file mode 100644 index 0000000000000000000000000000000000000000..1c6829c7bd6ee27f4a8a02cc568f3bbffac374b8 --- /dev/null +++ b/Global.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes/global +Date Scraped: 2025-02-23T11:44:44.485Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud global deployment archetype Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC This section of the Google Cloud deployment archetypes guide describes the global deployment archetype. In an architecture that's based on the global deployment archetype, the application runs in multiple Google Cloud regions across the globe. You can deploy the application either as a distributed location-unaware stack or as multiple regionally isolated stacks. In either case, a global anycast load balancer distributes traffic to the appropriate region. The application writes data to, and reads from, a synchronously replicated database that's available in all the regions, like Spanner with multi-region configuration. Other components of the application stack can also be global, such as the cache and object store. The following diagram shows the distributed location-unaware variant of the global deployment archetype: The preceding diagram shows a location-unaware application stack, with frontend and backend instances (typically microservices) that are distributed across multiple zones in three Google Cloud regions. A global anycast load balancer distributes incoming traffic to an appropriate frontend instance. This distribution is based on the availability and capacity of the instances and their geographical proximity to the source of the traffic. Cross-region internal load balancers distribute traffic from the frontend instances to appropriate backend instances based on their availability and capacity. The application uses a database that is synchronously replicated and available across regions. 
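To make the data path of this archetype concrete, the following minimal Python sketch shows a location-unaware backend instance writing to and reading from a multi-region Spanner database by using the google-cloud-spanner client library. The project, instance, database, table, and column names are hypothetical placeholders.

```python
# Minimal sketch (hypothetical names): any backend instance, in any region,
# talks to the same synchronously replicated Spanner database.
from google.cloud import spanner
from google.cloud.spanner_v1 import param_types

client = spanner.Client(project="my-project")
instance = client.instance("global-app-instance")  # assumed multi-region configuration
database = instance.database("app-db")

def record_order(transaction):
    # The write is acknowledged only after a quorum of replicas in separate
    # regions has committed it.
    transaction.execute_update(
        "INSERT INTO Orders (OrderId, Status) VALUES (@id, @status)",
        params={"id": "order-123", "status": "NEW"},
        param_types={"id": param_types.STRING, "status": param_types.STRING},
    )

database.run_in_transaction(record_order)

# Strong reads return the latest committed data, regardless of the region
# that served the request.
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(
        "SELECT OrderId, Status FROM Orders WHERE OrderId = @id",
        params={"id": "order-123"},
        param_types={"id": param_types.STRING},
    )
    for row in results:
        print(row)
```

Because every instance of the stack runs the same code against the same globally available database, the application logic doesn't need region-specific branching; only the latency characteristics differ by location.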
The following diagram shows a variant of the global deployment archetype with regionally isolated application stacks: The preceding diagram shows regionally isolated application stacks that run in multiple zones in two Google Cloud regions. This topology is similar to the multi-regional deployment archetype, but it uses a global anycast load balancer instead of DNS routing. The global load balancer distributes incoming traffic to a frontend in the region that's nearest to the user. Both the application stacks write data to, and read from, a database that is synchronously replicated and available across both the regions. If an outage occurs in any one of the two regions, the global load balancer sends user requests to a frontend in the other region. Use cases The following sections provide examples of use cases for which the global deployment archetype is an appropriate choice. Highly available application for a global audience We recommend the global deployment archetype for applications that serve users across the world and, therefore, need high availability and robustness against outages in multiple regions. Opportunity to optimize cost and simplify operations With the global deployment archetype, you can use highly available global resources like a global load balancer and a global database. Compared to a multi-regional deployment, a global deployment can help lower costs and simplify operations because you provision and manage fewer resources. Design considerations When you build an architecture that's based on the global deployment archetype, consider the following design factors. Storage, replication, and networking costs In a globally distributed architecture, the volume of cross-location network traffic can be high compared to a regional deployment. You might also store and replicate more data. When you build an architecture that's based on the global deployment archetype, consider the potentially higher cost for data storage and networking. For business-critical applications, the availability advantage of a globally distributed architecture might outweigh the higher networking and storage costs. Managing changes to global resources The opportunity to use highly available global resources can help you to optimize cost and simplify operations. However, to ensure that the global resources don't become single points of failure (SPOF), you must carefully manage configuration changes to global resources. Reference architecture For a reference architecture that you can use to design a global deployment, see Global deployment with Compute Engine and Spanner. Previous arrow_back Multi-regional Next Hybrid arrow_forward Send feedback \ No newline at end of file diff --git a/Global_deployment_on_Compute_Engine_and_Spanner.txt b/Global_deployment_on_Compute_Engine_and_Spanner.txt new file mode 100644 index 0000000000000000000000000000000000000000..5c1c33309f33bd050b421f19331bd052f9dad973 --- /dev/null +++ b/Global_deployment_on_Compute_Engine_and_Spanner.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/global-deployment-compute-engine-spanner +Date Scraped: 2025-02-23T11:45:07.150Z + +Content: +Home Docs Cloud Architecture Center Send feedback Global deployment with Compute Engine and Spanner Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-05-12 UTC This document provides a reference architecture for a multi-tier application that runs on Compute Engine VMs and Spanner in a global topology in Google Cloud. 
The document also provides guidance to help you build an architecture that uses other Google Cloud infrastructure services. It describes the design factors that you should consider when you build a global architecture for your cloud applications. The intended audience for this document is cloud architects. This architecture is aligned with the global deployment archetype. We recommend this archetype for applications that serve users across the world and need high availability and robustness against outages in multiple regions. This architecture supports elastic scaling at the network, application, and database levels. It lets you align costs with usage without having to compromise on performance, availability, or scalability. Architecture The following diagram shows an architecture for an application that runs on infrastructure that's distributed globally across multiple Google Cloud regions. In this architecture, a global load balancer distributes incoming requests to web servers in appropriate regions based on their availability, capacity, and proximity to the source of the traffic. A cross-regional internal load balancing layer handles distribution of traffic from the web servers to the appropriate application servers based on their availability and capacity. The application servers write data to, and read from, a synchronously replicated database that's available in all the regions. The architecture includes the following Google Cloud resources: Component Purpose Global external load balancer The global external load balancer receives and distributes user requests to the application. The global external load balancer advertises a single anycast IP address, but the load balancer is implemented as a large number of proxies on Google Front Ends (GFEs). Client requests are directed to the GFE that's closest to the client. Depending on your requirements, you can use a global external Application Load Balancer or a global external proxy Network Load Balancer. For more information, see Choose a load balancer. To protect your application against threats like distributed denial-of-service (DDoS) attacks and cross-site scripting (XSS), you can use Google Cloud Armor security policies. Regional managed instance groups (MIGs) for the web tier The web tier of the application is deployed on Compute Engine VMs that are part of regional MIGs. These MIGs are the backends for the global load balancer. Each MIG contains Compute Engine VMs in three different zones. Each of these VMs hosts an independent instance of the web tier of the application. Cross-region internal load balancing layer Internal load balancers with cross-regional backends handle the distribution of traffic from the web tier VMs in any region to the application tier VMs across all the regions. Depending on your requirements, you can use a cross-region internal Application Load Balancer or a cross-region internal proxy Network Load Balancer. For more information, see Choose a load balancer. Regional MIGs for the application tier The application tier is deployed on Compute Engine VMs that are part of regional MIGs. These MIGs are the backends for the internal load balancing layer. Each MIG contains Compute Engine VMs in three different zones. Each VM hosts an independent instance of the application tier. Spanner multi-region instance The application writes data to and reads from a multi-region Spanner instance. 
The multi-region configuration in this architecture includes the following replicas: Four read-write replicas in separate zones across two regions. A witness replica in a third region. Virtual Private Cloud (VPC) network and subnets All the resources in the architecture use a single VPC network. The VPC network has the following subnets: A subnet in each region for the web server VMs. A subnet in each region for the application server VMs. (Not shown in the architecture diagram) A proxy-only subnet in each region for the cross-region internal load balancer. Instead of using a single VPC network, you can create a separate VPC network in each region and connect the networks by using Network Connectivity Center. Products used This reference architecture uses the following Google Cloud products: Compute Engine: A secure and customizable compute service that lets you create and run VMs on Google's infrastructure. Cloud Load Balancing: A portfolio of high performance, scalable, global and regional load balancers. Spanner: A highly scalable, globally consistent, relational database service. Design considerations This section provides guidance to help you use this reference architecture to develop an architecture that meets your specific requirements for system design, security and compliance, reliability, cost, operational efficiency, and performance. Note: The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud products and features that you use, there might be additional design factors and trade-offs that you should consider. System design This section provides guidance to help you to choose Google Cloud regions for your global deployment and to select appropriate Google Cloud services. Region selection When you choose the Google Cloud regions where your applications must be deployed, consider the following factors and requirements: Availability of Google Cloud services in each region. For more information, see Products available by location. Availability of Compute Engine machine types in each region. For more information, see Regions and zones. End-user latency requirements. Cost of Google Cloud resources. Cross-regional data transfer costs. Regulatory requirements. Some of these factors and requirements might involve trade-offs. For example, the most cost-efficient region might not have the lowest carbon footprint. Compute services The reference architecture in this document uses Compute Engine VMs for the web and application tiers. Depending on the requirements of your application, you can choose from other Google Cloud compute services: You can run containerized applications in Google Kubernetes Engine (GKE) clusters. GKE is a container-orchestration engine that automates deploying, scaling, and managing containerized applications. If you prefer to focus your IT efforts on your data and applications instead of setting up and operating infrastructure resources, then you can use serverless services like Cloud Run and Cloud Run functions. The decision of whether to use VMs, containers, or serverless services involves a trade-off between configuration flexibility and management effort. VMs and containers provide more configuration flexibility, but you're responsible for managing the resources. In a serverless architecture, you deploy workloads to a preconfigured platform that requires minimal management effort. 
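To illustrate how the web or application tier described above might be provisioned, the following sketch creates one regional MIG with the google-cloud-compute Python client library. It assumes an instance template already exists; the project, region, template, and group names are hypothetical placeholders.

```python
# Minimal sketch (hypothetical names): create a regional MIG from an existing
# instance template. By default, a regional MIG distributes its VMs across
# multiple zones in the region.
from google.cloud import compute_v1

project = "my-project"
region = "us-central1"
template = f"projects/{project}/global/instanceTemplates/web-tier-template"

mig = compute_v1.InstanceGroupManager(
    name="web-tier-mig",
    base_instance_name="web",
    instance_template=template,
    target_size=3,
)

client = compute_v1.RegionInstanceGroupManagersClient()
operation = client.insert(
    project=project,
    region=region,
    instance_group_manager_resource=mig,
)
operation.result()  # wait for the create operation to complete
```

In the full architecture, a MIG like this is then registered, together with a health check, as a backend of the global external load balancer (web tier) or of the cross-region internal load balancing layer (application tier).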
For more information about choosing appropriate compute services for your workloads in Google Cloud, see Hosting Applications on Google Cloud in the Google Cloud Architecture Framework. Storage services The architecture shown in this document uses regional Persistent Disk volumes for the VMs. Regional Persistent Disk volumes provide synchronous replication of data across two zones within a region. Data in Persistent Disk volumes is not replicated across regions. Other storage options for multi-regional deployments include Cloud Storage dual-region or multi-region buckets. Objects that are stored in a dual-region or multi-region bucket are stored redundantly in at least two separate geographic locations. Metadata is written synchronously across regions, and data is replicated asynchronously. For dual-region buckets, you can use turbo replication, which ensures faster replication across regions. For more information, see Data availability and durability. To store files that are shared across multiple VMs in a region, such as across all the VMs in the web tier or application tier, you can use a Filestore Enterprise instance. The files that you store in a Filestore Enterprise instance are replicated synchronously across three zones within the region. This replication ensures high availability and robustness against zone outages. You can store shared configuration files, common tools and utilities, and centralized logs in the Filestore instance, and mount the instance on multiple VMs. When you design storage for your multi-regional workloads, consider the functional characteristics of the workloads, resilience requirements, performance expectations, and cost goals. For more information, see Design an optimal storage strategy for your cloud workload. Database services The reference architecture in this document uses Spanner, a fully managed, horizontally scalable, globally distributed, and synchronously-replicated database. We recommend a multi-regional Spanner configuration for mission-critical deployments that require strong cross-region consistency. Spanner supports synchronous cross-region replication without downtime for failover, maintenance, or resizing. For information about other managed database services that you can choose from based on your requirements, see Google Cloud databases. When you choose and configure the database for a multi-regional deployment, consider your application's requirements for cross-region data consistency, and be aware of the performance and cost trade-offs. External load balancing options An architecture that uses a global external load balancer, such as the architecture in this document, supports certain features that help you to enhance the reliability of your deployments. For example, if you use the global external Application Load Balancer, you can implement edge caching by using Cloud CDN. If your application requires Transport Layer Security (TLS) to be terminated in a specific region, or if you need the ability to serve content from specific regions, you can use regional load balancers with Cloud DNS to route traffic to different regions. 
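As a brief aside on the storage options discussed earlier in this section, the following minimal sketch creates a dual-region Cloud Storage bucket with the google-cloud-storage Python client library. The project and bucket names are hypothetical placeholders; NAM4 is one of the predefined dual-regions (us-central1 and us-east1).

```python
# Minimal sketch (hypothetical names): create a dual-region bucket whose
# objects are stored redundantly in two geographic locations.
from google.cloud import storage

client = storage.Client(project="my-project")
bucket = storage.Bucket(client, name="my-shared-assets-bucket")
bucket.storage_class = "STANDARD"
new_bucket = client.create_bucket(bucket, location="NAM4")
print(f"Created bucket {new_bucket.name} in {new_bucket.location}")
```

If you need faster cross-region redundancy, you can additionally enable turbo replication on a dual-region bucket, as mentioned above.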
For information about the differences between regional and global load balancers, see the following documentation: Global versus regional load balancing in "Choose a load balancer" Modes of operation in "External Application Load Balancer overview" Security and compliance This section describes factors that you should consider when you use this reference architecture to design and build a global topology in Google Cloud that meets the security and compliance requirements of your workloads. Protection against threats To protect your application against threats like DDoS attacks and XSS, you can use Google Cloud Armor security policies. Each policy is a set of rules that specifies certain conditions that should be evaluated and actions to take when the conditions are met. For example, a rule could specify that if the incoming traffic's source IP address matches a specific IP address or CIDR range, then the traffic must be denied. You can also apply preconfigured web application firewall (WAF) rules. For more information, see Security policy overview. External access for VMs In the reference architecture that this document describes, the VMs that host the application tier and the web tier don't need inbound access from the internet. Don't assign external IP addresses to those VMs. Google Cloud resources that have only a private, internal IP address can still access certain Google APIs and services by using Private Service Connect or Private Google Access. For more information, see Private access options for services. To enable secure outbound connections from Google Cloud resources that have only private IP addresses, like the Compute Engine VMs in this reference architecture, you can use Secure Web Proxy or Cloud NAT. VM image security To ensure that your VMs use only approved images (that is, images with software that meets your policy or security requirements), you can define an organization policy that restricts the use of images in specific public image projects. For more information, see Setting up trusted image policies. Service account privileges In Google Cloud projects where the Compute Engine API is enabled, a default service account is created automatically. For Google Cloud organizations that were created before May 3, 2024, this default service account is granted the Editor IAM role (roles/editor) unless this behavior is disabled. By default, the default service account is attached to all VMs that you create by using the Google Cloud CLI or the Google Cloud console. The Editor role includes a broad range of permissions, so attaching the default service account to VMs creates a security risk. To avoid this risk, you can create and use dedicated service accounts for each application. To specify the resources that the service account can access, use fine-grained policies. For more information, see Limit service account privileges in "Best practices for using service accounts." More security considerations When you build the architecture for your workload, consider the platform-level security best practices and recommendations provided in the Enterprise foundations blueprint. Reliability This section describes design factors that you should consider when you use this reference architecture to build and operate reliable infrastructure for a global deployment in Google Cloud. MIG autoscaling When you run your application on multiple regional MIGs, the application remains available during isolated zone outages or region outages. 
The autoscaling capability of stateless MIGs lets you maintain application availability and performance at predictable levels. To control the autoscaling behavior of your stateless MIGs, you can specify target utilization metrics, such as average CPU utilization. You can also configure schedule-based autoscaling for stateless MIGs. Stateful MIGs can't be autoscaled. For more information, see Autoscaling groups of instances. VM autohealing Sometimes the VMs that host your application might be running and available, but there might be issues with the application itself. It might freeze, crash, or not have sufficient memory. To verify whether an application is responding as expected, you can configure application-based health checks as part of the autohealing policy of your MIGs. If the application on a particular VM isn't responding, the MIG autoheals (repairs) the VM. For more information about configuring autohealing, see Set up an application health check and autohealing. VM placement In the architecture that this document describes, the application tier and web tier run on Compute Engine VMs that are distributed across multiple zones. This distribution ensures that your application is robust against zone outages. To improve this robustness further, you can create a spread placement policy and apply it to the MIG template. When the MIG creates VMs, it places the VMs within each zone on different physical servers (called hosts), so your VMs are robust against failures of individual hosts. For more information, see Apply spread placement policies to VMs. VM capacity planning To make sure that capacity for Compute Engine VMs is available when required for MIG autoscaling, you can create reservations. A reservation provides assured capacity in a specific zone for a specified number of VMs of a machine type that you choose. A reservation can be specific to a project, or shared across multiple projects. For more information about reservations, including billing considerations, see Reservations of Compute Engine zonal resources. Persistent Disk state A best practice in application design is to avoid the need for stateful local disks. But if the requirement exists, you can configure your persistent disks to be stateful to ensure that the data is preserved when the VMs are repaired or recreated. However, we recommend that you keep the boot disks stateless, so that you can update them easily to the latest images with new versions and security patches. For more information, see Configuring stateful persistent disks in MIGs. Data durability You can use Backup and DR Service to create, store, and manage backups of the Compute Engine VMs. Backup and DR stores backup data in its original, application-readable format. When required, you can restore your workloads to production by directly using data from long-term backup storage without time-consuming data movement or preparation activities. Compute Engine provides the following options to help you to ensure the durability of data that's stored in Persistent Disk volumes: Standard snapshots let you capture the point-in-time state of Persistent Disk volumes. The snapshots are stored redundantly in multiple regions, with automatic checksums to ensure the integrity of your data. Snapshots are incremental by default, so they use less storage space and you save money. Snapshots are stored in a Cloud Storage location that you can configure. For more recommendations about using and managing snapshots, see Best practices for Compute Engine disk snapshots. 
Regional Persistent Disk volumes let you run highly available applications that aren't affected by failures in persistent disks. When you create a regional Persistent Disk volume, Compute Engine maintains a replica of the disk in a different zone in the same region. Data is replicated synchronously to the disks in both zones. If any one of the two zones has an outage, the data remains available. You can use the backup and restore feature in Spanner to help protect against data corruption caused by operator errors and application issues. For more information, see Spanner backup and restore overview. Database reliability Data that's stored in a multi-region Spanner instance is replicated synchronously across multiple regions. The Spanner configuration that's shown in the preceding architecture diagram includes the following replicas: Four read-write replicas in separate zones across two regions. A witness replica in a third region. A write operation to a multi-region Spanner instance is acknowledged after at least three replicas—in separate zones across two regions—have committed the operation. If a zone or region failure occurs, Spanner has access to all of the data, including data from the latest write operations, and it continues to serve read and write requests. Spanner uses disaggregated storage where the compute and storage resources are decoupled. You don't have to move data when you add compute capacity for HA or scaling. The new compute resources get data when they need it from the closest Colossus node. This makes failover and scaling faster and less risky. Spanner provides external consistency, which is a stricter property than serializability for transaction-processing systems. For more information, see the following: Spanner: TrueTime and external consistency Demystifying Spanner multi-region configurations Inside Spanner and the CAP Theorem More reliability considerations When you build the cloud architecture for your workload, review the reliability-related best practices and recommendations that are provided in the following documentation: Google Cloud infrastructure reliability guide Patterns for scalable and resilient apps Designing resilient systems Cost optimization This section provides guidance to optimize the cost of setting up and operating a global Google Cloud topology that you build by using this reference architecture. VM machine types To help you optimize the utilization of your VM resources, Compute Engine provides machine type recommendations. Use the recommendations to choose machine types that match your workload's compute requirements. For workloads with predictable resource requirements, you can customize the machine type to your needs and save money by using custom machine types. VM provisioning model If your application is fault tolerant, then Spot VMs can help to reduce your Compute Engine costs for the VMs in the application and web tiers. The cost of Spot VMs is significantly lower than regular VMs. However, Compute Engine might preemptively stop or delete Spot VMs to reclaim capacity. Spot VMs are suitable for batch jobs that can tolerate preemption and don't have high availability requirements. Spot VMs offer the same machine types, options, and performance as regular VMs. However, when the resource capacity in a zone is limited, MIGs might not be able to scale out (that is, create VMs) automatically to the specified target size until the required capacity becomes available again. 
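To make the MIG autoscaling behavior that's referenced in the reliability and cost sections more concrete, the following sketch attaches a CPU-based autoscaling policy to the web-tier regional MIG by using the google-cloud-compute Python client library. The project, region, resource names, and threshold values are hypothetical placeholders.

```python
# Minimal sketch (hypothetical names and thresholds): attach an autoscaler to
# a stateless regional MIG so that it scales with average CPU utilization.
from google.cloud import compute_v1

project = "my-project"
region = "us-central1"
mig_url = f"projects/{project}/regions/{region}/instanceGroupManagers/web-tier-mig"

autoscaler = compute_v1.Autoscaler(
    name="web-tier-autoscaler",
    target=mig_url,
    autoscaling_policy=compute_v1.AutoscalingPolicy(
        min_num_replicas=3,    # keep at least one VM per zone
        max_num_replicas=12,   # cap scale-out to bound cost
        cool_down_period_sec=120,
        cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
            utilization_target=0.6,  # target 60% average CPU utilization
        ),
    ),
)

operation = compute_v1.RegionAutoscalersClient().insert(
    project=project,
    region=region,
    autoscaler_resource=autoscaler,
)
operation.result()  # wait for the operation to complete
```

Each regional MIG in the architecture gets its own autoscaler, and, as noted earlier, stateful MIGs can't be autoscaled.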
VM resource utilization The autoscaling capability of stateless MIGs enables your application to handle increases in traffic gracefully, and it helps you to reduce cost when the need for resources is low. Stateful MIGs can't be autoscaled. Database cost Spanner helps ensure that your database costs are predictable. The compute capacity that you specify (number of nodes or processing units) determines the storage capacity. The read and write throughputs scale linearly with compute capacity. You pay for only what you use. When you need to align costs with the needs of your workload, you can adjust the size of your Spanner instance. Third-party licensing When you migrate third-party workloads to Google Cloud, you might be able to reduce cost by bringing your own licenses (BYOL). For example, to deploy Microsoft Windows Server VMs, instead of using a premium image that incurs additional cost for the third-party license, you can create and use a custom Windows BYOL image. You then pay only for the VM infrastructure that you use on Google Cloud. This strategy helps you continue to realize value from your existing investments in third-party licenses. If you decide to use the BYOL approach, we recommend that you do the following: Provision the required number of compute CPU cores independently of memory by using custom machine types. By doing this, you limit the third-party licensing cost to the number of CPU cores that you need. Reduce the number of vCPUs per core from 2 to 1 by disabling simultaneous multithreading (SMT), and reduce your licensing costs by 50%. More cost considerations When you build the architecture for your workload, also consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Cost optimization. Operational efficiency This section describes the factors that you should consider when you use this reference architecture to design and build a global Google Cloud topology that you can operate efficiently. VM configuration updates To update the configuration of the VMs in a MIG (such as the machine type or boot-disk image), you create a new instance template with the required configuration and then apply the new template to the MIG. The MIG updates the VMs by using the update method that you choose: automatic or selective. Choose an appropriate method based on your requirements for availability and operational efficiency. For more information about these MIG update methods, see Apply new VM configurations in a MIG. VM images For your MIG instance templates, instead of using Google-provided public images, we recommend that you create and use custom images that contain the configurations and software that your applications require. You can group your custom images into a custom image family. An image family always points to the most recent image in that family, so your instance templates and scripts can use that image without you having to update references to a specific image version. Deterministic instance templates If the instance templates that you use for your MIGs include startup scripts to install third-party software, make sure that the scripts explicitly specify software-installation parameters, like the software version. Otherwise, when the MIG creates the VMs, the software that's installed on the VMs might not be consistent. 
For example, if your instance template includes a startup script to install Apache HTTP Server 2.0 (the apache2 package), then make sure that the script specifies the exact apache2 version that should be installed, such as version 2.4.53. For more information, see Deterministic instance templates. Migration to Spanner You can migrate your data to Spanner from other databases like MySQL, SQL Server, and Oracle Database. The migration process depends on factors like the source database, the size of your data, downtime constraints, and complexity of the application code. To help you plan and implement the migration to Spanner efficiently, we provide a range of Google Cloud and third-party tools. For more information, see Migration overview. Database administration With Spanner, you don't need to configure or monitor replication or failover. Synchronous replication and automatic failover are built-in. Your application experiences zero downtime for database maintenance and failover. To further reduce operational complexity, you can configure autoscaling. With autoscaling enabled, you don't need to monitor and scale the instance size manually. More operational considerations When you build the architecture for your workload, consider the general best practices and recommendations for operational efficiency that are described in Google Cloud Architecture Framework: Operational excellence. Performance optimization This section describes the factors that you should consider when you use this reference architecture to design and build a global topology in Google Cloud that meets the performance requirements of your workloads. VM placement For workloads that require low inter-VM network latency, you can create a compact placement policy and apply it to the MIG template. When the MIG creates VMs, it places the VMs on physical servers that are close to each other. For more information, see Reduce latency by using compact placement policies. VM machine types Compute Engine offers a wide range of predefined and customizable machine types that you can choose from depending on your cost and performance requirements. The machine types are grouped into machine series and families. The following table provides a summary of the recommended machine families for different workload types: Requirement Recommended machine family Best price-performance ratio for a variety of workloads General-purpose machine family Highest performance per core and optimized for compute-intensive workloads Compute-optimized machine family High memory-to-vCPU ratio for memory-intensive workloads Memory-optimized machine family GPUs for massively parallelized workloads Accelerator-optimized machine family Low core usage and high storage density Storage-optimized machine family For more information, see Machine families resource and comparison guide. VM multithreading Each virtual CPU (vCPU) that you allocate to a Compute Engine VM is implemented as a single hardware multithread. By default, two vCPUs share a physical CPU core. For workloads that are highly parallel or that perform floating point calculations (such as genetic sequence analysis and financial risk modeling), you can improve performance by reducing the number of threads that run on each physical CPU core. For more information, see Set the number of threads per core. Network Service Tiers Network Service Tiers lets you optimize the network cost and performance of your workloads. Depending on your requirements, you can choose Premium Tier or Standard Tier. 
The architecture in this document uses a global external load balancer with an external IP address and backends in multiple regions. This architecture requires you to use Premium Tier, which uses Google's highly reliable global backbone to help you achieve minimal packet loss and latency. If you use regional external load balancers and route traffic to regions by using Cloud DNS, then you can choose Premium Tier or Standard Tier depending on your requirements. The pricing for Standard Tier is lower than Premium Tier. Standard Tier is suitable for traffic that isn't sensitive to packet loss and that doesn't have low latency requirements. Spanner performance When you provision a Spanner instance, you specify the compute capacity of the instance in terms of the number of nodes or processing units. Monitor the resource utilization of your Spanner instance, and scale the capacity based on the expected load and your application's performance requirements. You can scale the capacity of a Spanner instance manually or automatically. For more information, see Autoscaling overview. With a multi-region configuration, Spanner replicates data synchronously across multiple regions. This replication enables low-latency read operations from multiple locations. The trade-off is higher latency for write operations, because the quorum replicas are spread across multiple regions. To minimize the latency for read-write transactions in a multi-region configuration, Spanner uses leader-aware routing (enabled by default). For recommendations to optimize the performance of your Spanner instance and databases, see the following documentation: Performance best practices for multi-region configurations Schema design best practices Bulk loading best practices Data Manipulation Language best practices SQL best practices Caching If your application serves static website assets and if your architecture includes a global external Application Load Balancer, then you can use Cloud CDN to cache frequently accessed static content closer to your users. Cloud CDN can help to improve performance for your users, reduce your infrastructure resource usage in the backend, and reduce your network delivery costs. For more information, see Faster web performance and improved web protection for load balancing. More performance considerations When you build the architecture for your workload, consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Performance optimization. What's next Learn more about the Google Cloud products used in this reference architecture: Cloud Load Balancing Compute Engine managed instance groups Spanner multi-region configurations Learn about replication and consistency in Spanner: Demystifying Spanner multi-region configurations Inside Spanner and the CAP Theorem Get started with migrating your workloads to Google Cloud. Explore and evaluate deployment archetypes that you can choose to build architectures for your cloud workloads. Review architecture options for designing reliable infrastructure for your workloads in Google Cloud. Deploy programmable GFEs using Google Cloud Armor, load balancing, and Cloud CDN. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Ben Good | Solutions ArchitectDaniel Lees | Cloud Security ArchitectGleb Otochkin | Cloud Advocate, DatabasesJustin Makeig | Product ManagerMark Schlagenhauf | Technical Writer, NetworkingSekou Page | Outbound Product ManagerSteve McGhee | Reliability AdvocateVictor Moreno | Product Manager, Cloud Networking Send feedback \ No newline at end of file diff --git a/Global_infrastructure.txt b/Global_infrastructure.txt new file mode 100644 index 0000000000000000000000000000000000000000..8f0967703cc40805a2e3688be0dddc94013730ce --- /dev/null +++ b/Global_infrastructure.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/infrastructure +Date Scraped: 2025-02-23T11:57:15.203Z + +Content: +Deploy a virtual machine cluster with a load balancer that instantaneously manages and distributes global traffic.Google Cloud infrastructureOur planet-scale infrastructure delivers the highest level of performance and availability in a secure, sustainable way. Deploy a preconfigured VM cluster solution recommended by Google experts. Plus, new customers get $300 in free credits on signup.Contact salesDeploy a load balanced VMTrusted global presenceGoogle Cloud’s locations underpin all of the important work we do for our customers. From redundant cloud regions to high-bandwidth connectivity via subsea cables, every aspect of our infrastructure is designed to deliver your services to your users, no matter where they are around the world. Secure, efficient data centersGoogle Cloud’s global network of data centers—among the world’s most secure and energy-efficient facilities—run your services 24x7 with the highest possible speed and reliability. Our data centers employ layered security and built-in redundancy and fault tolerance, and strictly limit employee access.Fast, reliable global networkWith a highly provisioned, low-latency network—the same network that powers products like Gmail, Google Search, and YouTube—your traffic stays on Google’s private backbone for most of its journey, ensuring exceptional user experience and high performance. PayPal processes billions of transactions across the globe. With Google Cloud, we have access to the world’s largest network, which helps us reach our infrastructure goals and best serve millions of users.Sri Shivananda, Senior Vice President and Chief Technology Officer, PayPalWatch videoMultilayered securityWe protect your data through progressive infrastructure layers that deliver defense-in-depth. Google Cloud’s adherence to data privacy and security standards has earned the trust of third-party auditors who attest that our infrastructure and operations keep user data more secure and compliant.Designed for high availabilityOur data centers and network architecture are designed for maximum reliability and uptime. Your workloads are securely distributed across multiple regions, availability zones, points of presence, and network cables to provide strong built-in redundancy and application availability.Sustainability built inGoogle matches 100% of the energy consumed by our global operations with purchases of renewable energy, so every Google Cloud product you use has zero net carbon emissions. And our hyper-efficient data centers use 50% less energy than most systems. Learn more about our commitment to sustainability. More Google Cloud infrastructure resourcesGet the latest Google Cloud infrastructure news.Read the blogDeploy resources in specific zones, regions, and multi-regions. 
View products by locationLearn more about standards, regulations, and certifications.Explore our offeringsTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all productsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Go_Serverless.txt b/Go_Serverless.txt new file mode 100644 index 0000000000000000000000000000000000000000..2be5a1270a72fee9d76d177cff355575ee7d9dfc --- /dev/null +++ b/Go_Serverless.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/serverless +Date Scraped: 2025-02-23T11:58:35.245Z + +Content: +Go ServerlessGoogle Cloud Serverless provides the fastest path to cloud native applications, bringing speed and scalability without worrying about managing infrastructure.Contact usBenefitsQuickly build and deploy applications with Google Cloud ServerlessFully managed infrastructureNo setup, maintenance, or patching of serverless infrastructure. Scale up and down based on traffic.Portability with containersBuild and deploy serverless applications as containers or bring your own Open Container Initiative compliant containers to run.Leverage integration with Google Cloud servicesIntegrate with AI/ML technologies from Google, such as vision, video processing, Speech-to-Text, and other APIs to build smart applications.Key featuresSolutionsLearn to onboard applications, understand serverless patterns, and architect enterprise ready applications for Google Cloud serverless.Cloud native principles workshopLearn cloud native principles and a fast path approach to serverless cloud native applications with Cloud Run. Enterprise serverless workshopLearn common serverless patterns and to build enterprise ready applications using Google Cloud serverless technologies. This solution covers several technologies: Cloud Run, Cloud Run functions, Cloud Run jobs, Cloud SQL, Spanner, Firestore, Eventarc, and more.Ready to get started? Contact usCustomersCustomersEnterprises can innovate without worrying about provisioning machines, clusters, or autoscaling. No knowledge of containers or Kubernetes is required.Blog postLeverages serverless and BigQuery to empower product innovation through their Beauty Tech Data Platform5-min readBlog postLes Echos Le Parisien Annonces scales quickly to new markets with Cloud Run5-min readVideoVeolia uses Cloud Run to remove the barriers of managed platformsVideo (48:32)See all customersRelated servicesProductsCloud RunDevelop and deploy highly scalable containerized applications using your favorite language on a fully managed serverless platform.Cloud Run functionsDevelop and deploy highly scalable applications as functions on a fully managed serverless platform. 
EventarcAsynchronously deliver events from Google services, SaaS, and your own apps using loosely coupled services that react to state changes.WorkflowsCombine Google Cloud services and APIs to easily build reliable applications, process automation, and data and machine learning pipelines.FirestoreEasily develop rich applications using a fully managed, scalable, and serverless document database.Cloud SQLFully managed relational database service for MySQL, PostgreSQL, and SQL Server.SpannerFully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability.Cloud SchedulerFully managed cron job service.DocumentationCommon use cases for serverless workloadsPatternWeb services: website hostingBuild your website with Cloud Run using your favorite language or framework (Go, Python, Java, Node.js, .NET, and more), access SQL database, render dynamic pages.Learn morePatternIT process automationAutomate cloud infrastructure with Eventarc triggers and workflows that control Google Cloud services.Learn morePatternIntegration with third-party services and APIsUse Cloud Run functions to integrate with third-party services that offer webhook integrations to quickly extend your application.Learn morePatternReal-time analyticsRespond to events from Pub/Sub to process, transform, and enrich streaming data.Learn morePatternWeb services: REST APIs backendModern mobile apps commonly rely on RESTful backend APIs to provide current views of application data and separation for frontend and backend development teams.Learn moreNot seeing what you’re looking for?View documentationTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Google_Cloud_Community.txt b/Google_Cloud_Community.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec8f3173d72ad549520eafb9705a086a17ca1f01 --- /dev/null +++ b/Google_Cloud_Community.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/communities +Date Scraped: 2025-02-23T12:11:49.752Z + +Content: +Welcome to the Google Cloud CommunityJoin our community of Google Cloud experts and peers to ask questions, collaborate on answers, and connect with Googlers who are making the products you use every day.Browse community postsLogin to start a conversationGet support from our global community of Google Cloud experts and peersWe're excited to help you get started with Google CloudSign up for the Google Cloud free trial, which has no cost or obligation. Ask questions and get answers in our vast collection of discussion forums. Google Cloud experts and peers are eagerly willing to collaborate with you on your Google Cloud journey. 
Other resources to help you get startedOnboarding essentialsQuickstarts & tutorialsGoogle Cloud eventsEvents & webinarsEngage directly with experts, increase your product knowledge, develop new skills, and get your questions answered.Ask questions, find expert recommendations, and connect with your peers using the Google Cloud Architecture Framework.Browse our vast collection of discussion forums and ask questions to our active global community.Other support resourcesSupport CenterFile a support caseCloud Architecture CenterReference architecturesRegardless if you use Cloud to build, modernize, train, teach, or even for fun - in our eyes you are an Innovator.Learn about our Google Cloud Innovators program and join our Google Cloud learning community.ResourcesTrainingCertificationHot TopicsDiscover a few of the most popular resources and conversations across the Google Cloud Community. 
How to Maximize Service ReliabilityLearn how to build and operate reliable services with Architecture Framework principles.Managing Capacity, Quota, and Stockouts in the CloudLearn how to effectively plan for and manage capacity, quotas, and stockouts in the cloud.How to Optimize Google Cloud CostsLearn cloud cost optimization principles for compute, networking, and storage.View MoreGoogle Cloud ForumsGet answers to your questions and share your knowledge about the Google Cloud.⚡Cloud HubJoin our virtual watercooler to connect on all Google Cloud topics.Architecture FrameworkGet advice using the Google Cloud Architecture Framework.Learning and Certification HubPrepare for certification and stay up to date on what’s next.AI/MLAsk questions about developing solutions with Artificial Intelligence and Machine Learning.AnthosAsk questions about using Anthos to manage infrastructure and apps.ApigeeConnect on questions about Apigee.DatabasesFind best practices about databases, including Cloud Spanner, Bigtable, and Firestore.Data AnalyticsAsk questions about using BigQuery, Dataflow, or Pub/Sub.Developer ToolsDiscuss developer tools used to build, test, and deploy, such as Cloud Build and Artifact Registry.Google Cloud's operations suiteGet operations questions answered about logging, monitoring, performance, and cost optimization.Google Kubernetes Engine (GKE)Ask questions about Kubernetes, GKE, or GKE Autopilot.Infrastructure: Compute, Storage, NetworkingJoin the conversation about infrastructure, including Compute Engine, Cloud Storage, and Networking.SecurityLearn security and compliance best practices.ServerlessExplore topics about Cloud Run, Cloud Functions, App Engine, Workflows, and Eventarc.View MoreFind more resources to help you get startedDeveloper CenterStay in the know with the latest developer news and become a Google Cloud Innovator. Visit the Developer CenterTrainingTransform your career with in-demand skills through learning paths and live training sessions for individuals or teams.Browse upcoming training sessionsArchitecture CenterDiscover reference architectures, diagrams, design patterns, guidance, and best practices for building or migrating your workloads on Google Cloud.Explore the Architecture CenterOnboarding EssentialsDiscover quickstarts, interactive tutorials, technical documentation, and videos to help you get started using Google Cloud.Get started nowSolve real business challenges on Google CloudBuild apps faster, make smarter business decisions, and connect people anywhere.Go to consoleGoogle CloudLearn more about our productsCloud EventsRegister for upcoming eventsNews and AnnouncementsSee news and announcementsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Google_Cloud_Consulting.txt b/Google_Cloud_Consulting.txt new file mode 100644 index 0000000000000000000000000000000000000000..e77adddf6f565568b28cd6a94511e8fe968b5fa6 --- /dev/null +++ b/Google_Cloud_Consulting.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/consulting +Date Scraped: 2025-02-23T12:11:51.942Z + +Content: +Ready to disrupt your industry? 
Google Cloud Consulting can show you howGoogle Cloud Consulting is your team of experts and innovators to guide you through the moments that matter most in your Cloud journey to help your business thrive.Contact salesExplore services1:18Google Cloud Consulting is focused on helping you accelerate time to value at every stage of your cloud journey, in all the moments that matterAccelerate time to value at every stage of your cloud journeyUse cutting-edge technology: Including AI-focused service offerings and Google engineering experience to help you reach your goals in the cloudExpedite your adoption timelines: Customers are 10x more likely to implement cloud in two years, with strategy customized from the start1Boost your return on investment: 90% of customers report key benefits across protecting data, automating processes, and utilizing resources2We'll help you get the most from your cloud experienceHarness Google’s top technology and culture of innovation with Google Cloud Consulting architects, engineers, delivery executives, trainers, partners, and other experts. We collaborate with our specialists to work alongside you to understand your business goals and help drive your cloud journey forward.Speed up your journeyWe can help you accelerate your transformation journey with a roadmap tailored to your business goals.Take our assessment to determine where you are on your cloud journey todayOptimize strategyWe collaborate with Google Cloud partners to drive a smarter approach to deployment.Get leadership insights from our partner network to boost data performance and leverage AI Unlock more valueWe guide you through the “firsts” in your journey with expertise to help you make the most of your cloud investment.Download our executive's guide on generative AI to kickstart your planEmpower your teamWe create tailored upskilling opportunities to help you achieve your business and technology objectives.Choose your learning path, build your skills, and validate your knowledgeReady to strategize your success?Let’s work together to build your customized strategy with Google Cloud Consulting.Contact salesSee how our innovation and transformation experts help solve your biggest challenges to create products, solutions, and applications that accelerate impact, development, execution, and change in our customers' organizations.Learn moreWe guide you through every stage of your cloud journeyLEARNEmpower your team with training tailored to the key stages in your cloud journey.Learn the new way to cloudBUILDEnsure work gets done right the first time with a cloud strategy aligned to your goals.Build transformative business innovationsOPERATEMaximize your operational efficiency with proactive monitoring, support, and expertise.Operate with confidence and efficiencySUCCEEDMake better decisions with strategic planning, integration of services, and ongoing guidance.Succeed with Google CloudWork with Google Cloud experts to achieve your goalsGoogle Cloud Consulting works with you from the get-go to design a Google Cloud services strategy, inclusive of partners, to deliver business outcomes that resonate with your vision and specific needs.How we work with youWe proactively engage with you to co-develop the right delivery approach based on your needs and your position in the Google Cloud lifecycle. 
We help you define the outcomes you want to achieve, and we continuously align on what’s working and how we can optimize your strategy to ensure your continued success with Google Cloud.How we work with partnersGoogle Cloud Consulting and Google Cloud partners are better together. Our goal is to co-design the best delivery strategy and approach for your business. We combine our deep knowledge of Google Cloud technology with our partners’ industry and technical expertise to drive consistently high-quality experiences across the lifecycle.Hear more about how we work with partnersSee how customers are succeeding with usVideoXometry is revolutionizing custom manufacturing with Google Cloud Consulting and Vertex AI2:11VideoBelk is discovering innovative ways to connect with customers using generative AIVideo (2:21)Blog postUber's sustainable engineering saved an estimated hundreds of thousands of kilogram of CO2 per year5-min readVideoGinkgo Bioworks is building a new way to speak DNA with gen AIVideo (2:37)Blog postWayfair trains their ML and data engineers and application developers with Google Cloud5-min readBlog postHow HSBC empowers employees with a culture of learning5-min readView all customersGoogle Cloud and Deloitte brought us a technology architecture and application framework that we could implement in record time. We’re already seeing results across our stores, with associate tasks being optimized and overall productivity increasing.Jim Clendenen, VP, Enterprise Retail Systems, KrogerLearn more1. IDC Command Paper, sponsored by Google Cloud Learning: “To Maximize Your Cloud Benefits, Maximize Training” Doc #US48867222, March 20222. IDC, “To Maximize Your Cloud Benefits, Maximize Training”Take the next stepTell us what you’d like to achieve with Google Cloud, and an expert will let you know how Google Cloud Consulting can help you get there.Contact salesLearn more about Google Cloud ConsultingExplore servicesGet tips and insights from usRead the blogWork with a trusted partnerFind a partnerGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git "a/Google_Cloud_Cybershield\342\204\242.txt" "b/Google_Cloud_Cybershield\342\204\242.txt" new file mode 100644 index 0000000000000000000000000000000000000000..c39a27ccdc0d4079e45e4d8739585d89214c9e44 --- /dev/null +++ "b/Google_Cloud_Cybershield\342\204\242.txt" @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/solutions/secops-cybershield +Date Scraped: 2025-02-23T12:01:12.839Z + +Content: +Help governments defend against threats at national scaleGoogle Cloud Cybershield™ provides AI and intel-driven cyber defense at national scale with tailored and applied threat intelligence, streamlined security operations, and capability excellence.Get an in-depth look at Google Cloud Cybershield™Google Cloud Cybershield™Raise the bar for cyber defenseDefending against today’s increasingly disruptive and sophisticated adversaries requires united national cyber defense with near real-time knowledge sharing and unparalleled situational awareness of the threat landscape. Google Cloud Cybershield™ ensures the technology, best practices, and expertise are in place to combat modern threats. 
Google Cloud Cybershield™ enables governments to build an enhanced cyber threat capability, get actionable insights in real time, and develop skills and processes that drive effective security operations.Request a demo50:13How Google Cloud Cybershield can transform government security operations (starting at 28:19)0:33Tailor and apply threat intelligenceEquip governments with real-time, actionable insights on the threats most relevant to their environment to proactively uncover and defend against new and novel threats.1:19Streamline security operationsModernize government security operations centers with capabilities to enhance detection, protect against major threats, and automate response and incident management.Develop capability excellenceDevelop the governance, processes, and skills required to build and operate modern security operations delivered through a nationwide cybersecurity capability with help from Gemini, and Google Cloud and Mandiant consultants.Our vision for the future of national cyber defense is aligned with Google’s. The focus on threat intelligence—combined with monitoring and incident management and continuous validation—will allow us to take our capabilities to the next level, and provide a federated national security operations center and more.Dr. Ammar Alhusaini, Director General of the Central Agency for IT, KuwaitExplore the solutions that bring Google Cloud Cybershield™ to lifeGoogle Threat IntelligenceLearn moreGoogle Security OperationsLearn moreMandiant Cybersecurity ConsultingLearn moreTake the next stepContact us today for more information on Google Cloud Cybershield™.Contact usGet more detailsRead the data sheetExplore it in-depthRead the blogHear from our customersWatch the videoGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Google_Cloud_Databases.txt b/Google_Cloud_Databases.txt new file mode 100644 index 0000000000000000000000000000000000000000..1bd23a7928c3954f5a0619af85d3b813d773a618 --- /dev/null +++ b/Google_Cloud_Databases.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/databases +Date Scraped: 2025-02-23T11:59:22.061Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayGoogle Cloud databasesGoogle Cloud offers the only suite of industry-leading databases built on planet-scale infrastructure and for AI. Experience unmatched reliability, price performance, security, and global scale for all your applications. Go to consoleContact salesIndustry-leading databases for innovationRevolutionize customer experiences with operational databases you know and love in virtually any environment whether in the cloud or on-premises. And with Gemini in databases, you can simplify all aspects of the database journey with AI-powered assistance.Download our gen AI and databases white paperUnlocking data efficiency with AlloyDBLearn how Bayer Crop Science modernized their data solution tool with AlloyDB for PostgreSQL to handle increasing demands and improve collaboration.Scaling a generative AI platformLearn how Character.AI uses AlloyDB and Spanner to serve five times the query volume at half the query latency.Streamline database operationsLearn how Google Cloud helped Ford significantly reduce their management overhead and database-related operational tasks. 
Databases that fit your needsDatabase typeGoogle Cloud ServiceUse case examplesRelationalAlloyDB for PostgreSQLPower your most demanding enterprise workloads with AlloyDB, the PostgreSQL-compatible database built for the future.AlloyDB Omni is a downloadable edition designed to run anywhere—in your datacenter, your laptop, and in any cloud.Use AlloyDB AI to easily build enterprise generative AI applications.Simplify migrations to AlloyDB with Database Migration Service.Set up easy-to-use, low-latency database replication with Datastream.Heterogenous migrationsLegacy applicationsEnterprise workloadsHybrid cloud, multicloud, and edgeCloud SQLFully managed MySQL, PostgreSQL, and SQL Server.Simplify migrations to Cloud SQL from MySQL, PostgreSQL, and Oracle databases with Database Migration Service.Set up easy-to-use, low-latency database replication with Datastream.CRMERPEcommerce and webSaaS applicationSpannerCloud-native with unlimited scale, global consistency, and up to 99.999% availability.Processes more than three billion requests per second at peak.Create a 90-day Spanner free trial instance with 10 GB of storage at no cost.Migrate from databases like Oracle or DynamoDB.GamingRetailGlobal financial ledgerSupply chain/inventory managementBare Metal Solution for OracleLift and shift Oracle workloads to Google Cloud.Legacy applicationsData center retirementBigQueryServerless, highly scalable, and cost-effective multicloud data warehouse designed for business agility and offers up to 99.99% availability.Enable near real-time insights on operational data with Datastream for BigQuery.Multicloud analyticsReal-time processingBuilt-in machine learningKey-valueBigtableHighly performant, fully managed NoSQL database service for large analytical and operational workloads. Offers up to 99.999% availability. Processes more than 7 billion requests per second at peak, and with more than 10 Exabytes of data under management.Learn how to migrate from HBase or Cassandra.PersonalizationAdtechRecommendation enginesFraud detectionDocumentFirestoreHighly-scalable, massively popular document database service for mobile, web, and server development that offers richer, faster queries and high availability up to 99.999%. Has a thriving developer community of more than 250,000 monthly active developers. Mobile/web/IoT applicationsReal-time syncOffline syncFirebase Realtime DatabaseStore and sync data in real time.Mobile sign-insPersonalized applications and adsIn-app chatIn-memoryMemorystoreFully managed Redis and Memcached for sub-millisecond data access.Memorystore for Redis Cluster is a fully managed service that can easily scale to terabytes of keyspace and tens of millions of operations per second. 
CachingGamingLeaderboardSocial chat or news feedAdditional NoSQLMongoDB AtlasGlobal cloud database service for modern applications.Mobile/web/IoT applicationsGamingContent managementSingle viewGoogle Cloud Partner ServicesManaged offerings from our open source partner network, including MongoDB, Datastax, Redis Labs, and Neo4j.Leverage existing investmentsDatabases that fit your needsRelationalAlloyDB for PostgreSQLPower your most demanding enterprise workloads with AlloyDB, the PostgreSQL-compatible database built for the future.AlloyDB Omni is a downloadable edition designed to run anywhere—in your datacenter, your laptop, and in any cloud.Use AlloyDB AI to easily build enterprise generative AI applications.Simplify migrations to AlloyDB with Database Migration Service.Set up easy-to-use, low-latency database replication with Datastream.Heterogenous migrationsLegacy applicationsEnterprise workloadsHybrid cloud, multicloud, and edgeKey-valueBigtableHighly performant, fully managed NoSQL database service for large analytical and operational workloads. Offers up to 99.999% availability. Processes more than 7 billion requests per second at peak, and with more than 10 Exabytes of data under management.Learn how to migrate from HBase or Cassandra.PersonalizationAdtechRecommendation enginesFraud detectionDocumentFirestoreHighly-scalable, massively popular document database service for mobile, web, and server development that offers richer, faster queries and high availability up to 99.999%. Has a thriving developer community of more than 250,000 monthly active developers. Mobile/web/IoT applicationsReal-time syncOffline syncIn-memoryMemorystoreFully managed Redis and Memcached for sub-millisecond data access.Memorystore for Redis Cluster is a fully managed service that can easily scale to terabytes of keyspace and tens of millions of operations per second. CachingGamingLeaderboardSocial chat or news feedAdditional NoSQLMongoDB AtlasGlobal cloud database service for modern applications.Mobile/web/IoT applicationsGamingContent managementSingle viewReady to get started? Let’s solve your challenges together.See how tens of thousands of customers are building data-driven applications using Google's data and AI cloud.Learn moreMake your database your secret advantage with Google Cloud.Read the whitepaperLooking for technical resources? 
Explore our guides and tutorials.See how our customers saved time and money with Google Cloud databasesBlog postSabre chose Bigtable and Spanner to serve more than one billion travelers annually5-min readBlog postHow Renault migrated 70 applications from Oracle databases to Cloud SQL for PostgreSQL5-min readBlog postHow Bitly migrated 80 billion rows of core link data from a self-managed MySQL database to Bigtable5-min readBlog postHow Credit Karma uses Bigtable and BigQuery to store and analyze financial data for 130 million members5-min readBlog postHow ShareChat built a scalable data-driven social media platform with Google Cloud6-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Contact salesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Google_Cloud_FedRAMP_implementation_guide.txt b/Google_Cloud_FedRAMP_implementation_guide.txt new file mode 100644 index 0000000000000000000000000000000000000000..ae180f1f74b5a579974b50b76df68341a6695f9d --- /dev/null +++ b/Google_Cloud_FedRAMP_implementation_guide.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/fedramp-implementation-guide +Date Scraped: 2025-02-23T11:56:22.186Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud FedRAMP implementation guide Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-27 UTC This guide is intended for security officers, compliance officers, IT admins, and other employees who are responsible for Federal Risk and Authorization Management Program (FedRAMP) implementation and compliance on Google Cloud. This guide helps you understand how Google is able to support FedRAMP compliance and which Google Cloud tools, products, and services to configure to help meet your responsibilities under FedRAMP. Overview Google Cloud supports FedRAMP compliance, and provides specific details on the approach to security and data protection in the Google security whitepaper and in the Google Infrastructure Security Design Overview. Although Google provides a secure and compliant cloud infrastructure, you are ultimately responsible for evaluating your own FedRAMP compliance. You're also responsible for ensuring that the environment and applications that you build on top of Google Cloud are properly configured and secured according to FedRAMP requirements. This document outlines the FedRAMP Authority to Operate (ATO) phases at a high level, explains the Google Cloud shared responsibility model, highlights customer-specific responsibilities, and suggests how to meet these requirements and guidelines on Google Cloud. FedRAMP The Federal Risk and Authorization Management Program (FedRAMP) is a government-wide program that standardizes how the Federal Information Security Modernization Act (FISMA) applies to cloud computing. It establishes a repeatable approach to security assessment, authorization, and continuous monitoring for cloud-based services. Using FedRAMP's standards and guidelines, you can secure sensitive, mission-essential, and mission-critical data in the cloud, making it possible to detect cybersecurity vulnerabilities quickly. At a high level, FedRAMP has the following goals: Ensure that cloud services and systems used by government agencies have adequate safeguards. 
De-duplicate efforts and reduce risk management costs. Enable government agencies to rapidly and cost-effectively procure information systems and services. In adherence to FedRAMP, federal government agencies must do the following: Ensure that all cloud systems that process, transmit, and store government data use the FedRAMP security controls baseline. Use the security assessment plan when granting security authorizations under FISMA. Enforce FedRAMP requirements through contracts with cloud service providers (CSPs). Authority to Operate (ATO) Successful implementation and execution of the FedRAMP accreditation process culminates with an Authority to Operate (ATO) in the cloud. There are two paths for FedRAMP ATO: P-ATO and Agency ATO. P-ATO, or Provisional Authority to Operate, is granted by the FedRAMP Joint Authorization Board (JAB). The JAB is composed of CIOs from the Department of Homeland Security (DHS), the General Services Administration (GSA), and the Department of Defense (DoD). The board defines the baseline FedRAMP security controls and establishes the FedRAMP accreditation criteria for third-party assessment organizations (3PAOs). Organizations and agencies request to have their information system security package processed by the JAB, and the JAB then issues a P-ATO to use cloud services. With Agency ATO, the internal organization or agency designates authorizing officials (AOs) to conduct a risk review of the information system security package. The AO can engage 3PAOs or non-accredited, independent assessors (IAs) to review the information system security package. The AO, and later the agency or organization, then authorizes the information system's use of cloud services. The security package is also sent to the FedRAMP Program Management Office (PMO) for review; GSA is the PMO for FedRAMP. After review, the PMO publishes the security package for other agencies and organizations to use. Security assessment framework Authorizing Officials (AOs) at agencies and organizations must incorporate the FedRAMP Security Assessment Framework (SAF) into their internal authorization processes to ensure that they meet FedRAMP requirements for cloud services use. The SAF is implemented in four phases: You or your AO categorize your information system as a Low, Moderate, or High impact system according to FIPS PUB 199 security objectives for confidentiality, integrity, and availability. Based on the system's FIPS categorization, select the FedRAMP security controls baseline that correlates with the FIPS 199 categorization level of low, moderate, or high. You must then implement the security controls captured in the respective controls baseline. Alternative implementations and justification for why a control can't be met or implemented are also acceptable. Capture the details of the security controls implementation in a System Security Plan (SSP). We recommend that you select the SSP template according to the FedRAMP compliance level (Low, Moderate, or High). The SSP does the following: Describes the security authorization boundary. Explains how the system implementation addresses each FedRAMP security control. Outlines system roles and responsibilities. Defines expected system user behavior. Exhibits how the system is architected and what the supporting infrastructure looks like. You use the FedRAMP authorization review template to track your ATO progress. For more details about the implementation phases, see FedRAMP's agency authorization process. 
Cloud responsibility model Conventional infrastructure technology (IT) required organizations and agencies to purchase, physical data center or colocation space, physical servers, networking equipment, software, licenses, and other devices for building systems and services. With cloud computing, a CSP invests in the physical hardware, data center, and global networking, while also providing virtual equipment, tools, and services for customers to use. Three cloud computing models exist: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS): In the IaaS model, CSPs essentially supply a virtual data center in the cloud, and they deliver virtualized computing infrastructure such as servers, networks, and storage. Although CSPs manage the physical equipment and data centers for these resources, you are responsible for configuring and securing any of the platform or application resources that you run on the virtualized infrastructure. In the PaaS model, CSPs not only provide and manage the infrastructure and virtualization layer, they also provide customers with a pre-developed, pre-configured platform for creating software, applications, and web services. PaaS makes it easy for developers to create applications and middleware without worrying about security and configuration of the underlying hardware. In the SaaS model, CSPs manage the physical and virtual infrastructure and the platform layer while delivering cloud-based applications and services for customers to consume. Internet applications that run directly from the web browser or by going to a website are SaaS applications. With this model, organizations and agencies don't have to worry about installing, updating, or supporting applications; they simply manage system and data access policies. The following figure highlights CSP responsibilities and your responsibilities both on-premises and across cloud computing models: FedRAMP responsibility You can view the cloud IT stack relative to four layers: the physical infrastructure layer, the cloud infrastructure layer, the cloud platform layer, and the cloud software layer. The following diagram shows these layers. The numbered layers in the diagram correspond to the following: Software as a service. Google Workspace is also certified as FedRAMP Moderate. In order to inherit these SaaS security controls, you can request a copy of Google's ATO package from the JAB and include a copy of Google's attestation letter in your package. Platform as a service. In addition to Google Cloud's FedRAMP certified physical infrastructure, additional PaaS products and services are covered by FedRAMP, including App Engine, Cloud Storage, and database services. Use these pre-certified products and services wherever possible. Infrastructure as a service. In addition to Google Cloud's FedRAMP certified physical infrastructure, additional IaaS products and services are covered by FedRAMP, including Google Kubernetes Engine (GKE) and Compute Engine. Use these pre-certified products and services wherever possible. Physical infrastructure. Google Cloud is certified by JAB as FedRAMP Moderate. In order to inherit these physical security controls, you can request a copy of Google's ATO package and include Google's attestation letter in your package. With respect to FedRAMP ATO, each layer of the cloud IT stack is considered an independent control boundary, and each control boundary requires a separate ATO. 
This means that despite Google Cloud's FedRAMP compliance and having dozens of Google Cloud services that are covered by FedRAMP, you are still required to implement FedRAMP security baseline controls and the SAF process to qualify your cloud systems and workloads as FedRAMP compliant. There are two types of FedRAMP security controls across Low, Moderate, and High compliance baselines: controls implemented by the information system, and controls implemented by the organization. As your organization or agency builds out FedRAMP-compliant systems on Google Cloud, you inherit the physical infrastructure security controls that Google meets under its FedRAMP certification. You also inherit any physical infrastructure, IaaS, and PaaS security controls that are built into Google's FedRAMP compliant products and services, and into all SaaS controls when using Google Workspace. However, you are required to implement all other security controls and configurations at the IaaS, PaaS, and SaaS levels, as defined by the FedRAMP security controls baseline. FedRAMP implementation recommendations As mentioned, you inherit some security controls from the CSP. For other controls, you must specifically configure them and create organization-defined policies, rules, and regulations to meet each control. This section recommends aids for implementing NIST 800-53 security controls in the cloud by using organization-defined policies with Google Cloud tools, services, and best practices. Note: Services listed in this section marked with * are not currently covered by FedRAMP, and services marked with + are not built-in Google Cloud services. Access control To manage access control in Google Cloud, define organization admins who will manage information system accounts in the cloud. Place those admins in access control groups using Cloud Identity, Admin Console, or some other identity provider (for example, Active Directory or LDAP), ensuring that third-party identity providers are federated with Google Cloud. Use Identity and Access Management (IAM) to assign roles and permissions to administrative groups, implementing least privilege and separation of duties. Develop an organization-wide access control policy for information system accounts in the cloud. Define the parameters and procedures by which your organization creates, enables, modifies, disables, and removes information system accounts. Account management, separation of duties, and least privilege In the access control policy, define the parameters and procedures by which your organization will create, enable, modify, disable, and remove information system accounts. Define the conditions under which information system accounts should be used. Also, identify the time period of inactivity in which users will be required to log out of a system (for example, after *x* minutes, hours, or days). Use Cloud Identity, Admin Console, or application configurations to force users to sign out or re-authenticate after the defined time period. Define what actions should be taken when privileged role assignments are no longer appropriate for a user in your organization. Google's *Policy Intelligence has an IAM Recommender feature that helps you remove unwanted access to Google Cloud resources by using machine learning to make smart access control recommendations. Define conditions under which groups accounts are appropriate. Use Cloud Identity or Admin Console to create groups or service accounts. 
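For example, the following minimal Python sketch, which assumes the google-cloud-resource-manager client library, grants a single predefined role to an administrative group at the project level; the project ID, group address, and role shown are placeholders, and you should substitute the narrowest role that matches the duty being assigned:

from google.cloud import resourcemanager_v3
from google.iam.v1 import iam_policy_pb2, policy_pb2

PROJECT_ID = "my-project"                    # placeholder project ID
ADMIN_GROUP = "group:sysadmins@example.com"  # hypothetical Cloud Identity group
ROLE = "roles/compute.instanceAdmin.v1"      # example role; grant the narrowest role that fits

def grant_role_to_group():
    """Append a single role binding for an admin group to the project IAM policy."""
    client = resourcemanager_v3.ProjectsClient()
    resource = f"projects/{PROJECT_ID}"

    policy = client.get_iam_policy(
        request=iam_policy_pb2.GetIamPolicyRequest(resource=resource)
    )
    policy.bindings.append(policy_pb2.Binding(role=ROLE, members=[ADMIN_GROUP]))

    client.set_iam_policy(
        request=iam_policy_pb2.SetIamPolicyRequest(resource=resource, policy=policy)
    )

if __name__ == "__main__":
    grant_role_to_group()

Granting roles to groups rather than to individual users keeps account lifecycle management centralized in Cloud Identity or the Admin Console and makes separation of duties easier to audit.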
Assign roles and permissions to shared groups and service accounts by using IAM. Use service accounts whenever possible. Specify what atypical use of an information system account is for your organization. When you detect atypical use, use tools such as Google Cloud Observability or *Security Command Center to alert information system admins. Follow these guidelines to aid in implementing these security controls: AC-02, AC-02 (04), AC-02 (05), AC-02 (07), AC-02 (09), AC-02 (11), AC-02 (12), AC-05, AC-06 (01), AC-06 (03), AC-06 (05), AU-2, AU-3, AU-6, AU-12, SI-04, SI-04 (05), SI-04 (11), SI-04 (18), SI-04 (19), SI-04 (20), SI-04 (22), SI-04 (23). Information flow enforcement and remote access In the organization-wide access control policy, define information-flow control policies for your organization. Identify prohibited or restricted ports, protocols, and services. Define requirements and restrictions for interconnections to internal and external systems. Use tools such as Virtual Private Cloud to create firewalls, logically isolated networks, and subnetworks. Help control the flow of information by implementing Cloud Load Balancing, *Cloud Service Mesh, and VPC Service Controls. When setting information-flow control policies, identify controlled network access points for your organization. Use tools such as Identity-Aware Proxy to provide context-based access to cloud resources for remote and onsite users. Use Cloud VPN or Cloud Interconnect to provide secure, direct access to VPCs. Set organization-wide policies for executing privileged commands and accessing secure data over remote access. Use IAM and VPC Service Controls to restrict access to sensitive data and workloads. Follow these guidelines to aid in implementing these security controls: AC-04, AC-04 (08), AC-04 (21), AC-17 (03), AC-17 (04), CA-03 (03), CA-03 (05), CM-07, CM-07(01), CM-07(02). Logon attempts, system-use notification, and session termination In the access control policy, specify how long a user should be delayed from accessing a login prompt when 3 unsuccessful login attempts have been attempted in a 15-minute period. Define conditions and triggers under which user sessions are terminated or disconnected. Use Cloud Identity Premium Edition or Admin Console to manage mobile devices that connect to your network, including BYOD. Create organization-wide security policies that apply to mobile devices. Outline requirements and procedures for purging and wiping mobile devices after consecutive unsuccessful login attempts. Develop organization-wide language and system-use notifications that provide privacy policies, terms of use, and security notices to users who are accessing the information system. Define the conditions under which organization-wide notifications are displayed before granting users access. Pub/Sub is a global messaging and event ingestion system that you can use to push notifications to applications and end users. You can also use *Chrome Enterprise Suite, including *Chrome Browser and *Chrome OS, with the *Push API and *Notifications API to send notifications and updates to users. Follow these guidelines to aid in implementing these security controls: AC-07, AC-07 (02), AC-08, AC-12, AC-12 (01). Permitted actions, mobile devices, information sharing In the access control policy, define user actions that can be performed on an information system without identification and authentication. Use IAM to regulate user access to view, create, delete, and modify specific resources. 
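To illustrate the information flow enforcement guidance above, the following sketch, which assumes the google-cloud-compute client library, creates a VPC firewall rule that allows only HTTPS ingress to instances carrying a specific network tag; the project ID, network name, rule name, source ranges, and tag are hypothetical and should be replaced with the values that your information-flow control policy defines:

from google.cloud import compute_v1

PROJECT_ID = "my-project"   # placeholder project ID
NETWORK = "restricted-vpc"  # hypothetical VPC network name

def create_https_only_ingress_rule():
    """Create an ingress rule that permits only TCP 443 to tagged backends.

    Combine narrow allow rules like this one with the implied (or an explicit
    low-priority) deny rule so that unapproved ports and protocols are blocked.
    """
    firewall = compute_v1.Firewall(
        name="allow-https-ingress-only",
        network=f"projects/{PROJECT_ID}/global/networks/{NETWORK}",
        direction="INGRESS",
        priority=1000,
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
        source_ranges=["0.0.0.0/0"],   # tighten to the CIDR ranges your policy approves
        target_tags=["web-frontend"],  # hypothetical network tag
    )
    client = compute_v1.FirewallsClient()
    operation = client.insert(project=PROJECT_ID, firewall_resource=firewall)
    operation.result()  # wait for the global operation to complete

if __name__ == "__main__":
    create_https_only_ingress_rule()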
Develop organization-wide policies for information sharing. Determine circumstances under which information can be shared and when user discretion is required for sharing information. Employ processes to assist users with sharing information and collaborating across the organization. Google Workspace has a great feature set for controlled collaboration and engagement across teams. Follow these guidelines to aid in implementing these security controls: AC-14, AC-19 (05), AC-21. Awareness and training Create security policies and associated training materials to disseminate to users and security groups across your organization at least annually. Google offers Professional Services options for educating users on cloud security, including but not limited to a Cloud Discover Security engagement and a Google Workspace Security Assessment. Update security policies and training at least annually. Follow these guidelines to aid in implementing security control AT-01. Auditing and accountability Create organization-wide auditing policies and accountability controls that address procedures and implementation requirements for auditing personnel, events, and actions that are tied to cloud information systems. In the organization-wide auditing policy, outline events that should be audited in your organization's information systems, and the auditing frequency. Examples of logged events include successful and unsuccessful account login events, account management events, object access, policy change, privilege functions, process tracking, and system events. For web applications, examples include admin activity, authentication checks, authorization checks, data deletions, data access, data changes, and permission changes. Define additional events of interest for your organization. For the auditing policy, we also recommend that you specify indications of inappropriate or unusual activity for your organization. Monitor, log, and flag these activities regularly (at least weekly). Use Google Cloud Observability to manage logging, monitoring, and alerting for your Google Cloud, on-premises, or other cloud environments. Use Google Cloud Observability to configure and track security events in your organization. You can also use Cloud Monitoring to set custom metrics to monitor for organization-defined events in audit records. Enable information systems to alert admins of audit processing failures. You can implement these alerts by using tools like Pub/Sub and alerting. Set standards for alerting admins within a set time period (for example, within 15 minutes), in the event of a system or functional failure, to include when audit records reach a set threshold or volume capacity. Determine an organization-wide granularity of time measurement, by which audit records should be time-stamped and logged. Define the level of tolerance for time-stamped records in the information system audit trail (for example, nearly real-time or within 20 minutes). Set VPC resource quotas to establish the capacity thresholds for audit record storage. Configure budget alerts to notify admins when a percentage of a resource limit has been reached or exceeded. Define organization-wide storage requirements for audit data and records, to include audit log availability and retention requirements. Use Cloud Storage to store and archive audit logs, and BigQuery to perform further log analysis. 
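For example, the following sketch, which assumes the google-cloud-logging client library and a pre-created Cloud Storage bucket, defines a log sink that routes Cloud Audit Logs entries to that bucket for archiving; the sink and bucket names are placeholders:

from google.cloud import logging

PROJECT_ID = "my-project"            # placeholder project ID
ARCHIVE_BUCKET = "my-audit-archive"  # hypothetical, pre-created Cloud Storage bucket

def create_audit_log_sink():
    """Route audit log entries to a Cloud Storage bucket for long-term retention."""
    client = logging.Client(project=PROJECT_ID)
    sink = client.sink(
        "fedramp-audit-archive",  # hypothetical sink name
        filter_='logName:"cloudaudit.googleapis.com"',
        destination=f"storage.googleapis.com/{ARCHIVE_BUCKET}",
    )
    if not sink.exists():
        sink.create(unique_writer_identity=True)
    # The service account below must be granted write access on the bucket.
    print(sink.writer_identity)

if __name__ == "__main__":
    create_audit_log_sink()

After the sink is created, grant its writer identity the Storage Object Creator role on the bucket, and consider a second sink with a BigQuery destination to support the log analysis mentioned above.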
Follow these guidelines to aid in implementing these security controls: AU-01, AU-02, AU-04, AU-05, AU-05 (01), AU-06, AU-07 (01), AU-08, AU-08 (01), AU-09 (04), AU-09 (04), AU-12, AU-12 (01), AU-12 (03), CA-07. Security assessment and authorization Develop an organization-wide security assessment and authorization policy that defines the procedures and implementation requirements of organization security assessments, security controls, and authorization controls. In the security assessment and authorization policy, define the level of independence required for security assessment teams to conduct impartial assessments of information systems in the cloud. Identify the information systems that need to be assessed by an independent assessor. Security assessments should minimally cover the following: In-depth monitoring Vulnerability scanning Malicious user testing Insider threat assessment Performance and load testing Your organization should define additional requirements and forms of security assessment. Make sure that your security assessment and authorization policy specifies security system classifications and requirements, including requirements for unclassified and non-national security systems. In the information flow control policies for your organization, outline requirements and restrictions for interconnections to internal and external systems. Set VPC firewall rules to allow and deny traffic to information systems, and use VPC Service Controls to protect sensitive data by using security parameters. Set organization-wide auditing and accountability policies that enforce continuous monitoring requirements (CA-07). Follow these guidelines to aid in implementing these security controls: CA-01, CA-02, CA-02 (01), CA-02 (02), CA-02 (03), CA-03 (03), CA-03 (05), CA-07, CA-07 (01), CA-08, CA-09. Configuration management Create an organization-wide configuration management policy that defines the procedures and implementation requirements for organization-wide configuration management controls, roles, responsibilities, scope, and compliance. Standardize configuration setting requirements for organization-owned information systems and system components. Provide operational requirements and procedures for configuring information systems. Explicitly call out how many previous versions of a baseline configuration the system admins are required to retain for information system rollback support. Use Google's suite of configuration management tools to control IT system configurations as code, and monitor configuration changes by using *Policy Intelligence or *Security Command Center. Specify configuration requirements for each type of information system in your organization (for example, cloud, on-premises, hybrid, unclassified, controlled unclassified information (CUI), or classified). Also define security safeguard requirements for organization-owned and Bring Your Own Device (BYOD) devices to include identifying safe and unsafe geographic locations. Use Identity-Aware Proxy to enforce context-based access controls to organization-owned data, including access controls by geographic location. Use Cloud Identity Premium edition or Admin Console to enforce security configurations on mobile devices that connect to the corporate network. In the configuration management policy, define an organization-wide configuration change-control element, such as a change-control committee or board. Document how frequently the committee meets and under which conditions. 
Establish a formal body for reviewing and approving configuration changes. Identify the configuration management approval authorities for your organization. These admins review requests for changes to information systems. Define the time period that authorities have to approve or disapprove change requests. Provide guidance for change implementers to notify approval authorities when information system changes have been completed. Set restrictions on the use of open source software across your organization, to include the specification of what software is approved and not approved for use. Use Cloud Identity or Admin Console to enforce approved applications and software for your organization. With Cloud Identity Premium, you can enable single sign-on and multi-factor authentication for third-party applications. Use tools such as alerting to send notifications to security admins when configuration changes are logged. Give admin access to tools like *Security Command Center to monitor configuration changes in near real-time. Using *Policy Intelligence, you can use machine learning to study configurations defined by your organization, raising awareness about when configurations change from the baseline. Enforce least functionality across your organization using information-flow control policies. Follow these guidelines to aid in implementing these security controls: CM-01, CM-02 (03), CM-02 (07), CM-03, CM-03 (01), CM-05 (02), CM-05 (03), CM-06, CM-06 (01), CM-06 (02), CM-07, CM-07 (01), CM-07 (02), CM-07 (05), CM-08, CM-08 (03), CM-10 (01), CM-11, CM-11 (01), SA-10. Contingency planning Develop a contingency plan for your organization that defines the procedures and implementation requirements for contingency planning controls across your organization. Identify key contingency personnel, roles, and responsibilities across organizational elements. Highlight the mission-essential and business-essential information system operations within your organization. Outline recovery time objectives (RTO) and recovery point objectives (RPO) for resuming essential operations when the contingency plan has been activated. Document critical information systems and associated software. Identify any additional security-related information, and provide guidance and requirements for storing backup copies of critical system components and data. Deploy Google's global, regional, and zonal resources and world-wide locations for high availability. Use Cloud Storage classes for multi-regional, regional, backup, and archive options. Implement global network autoscaling and load balancing with Cloud Load Balancing. Follow these guidelines to aid in implementing these security controls: CP-01, CP-02, CP-02 (03), CP-07, CP-08, CP-09 (03). Identification and authentication Create an identification and authentication policy for your organization that specifies identification and authentication procedures, scopes, roles, responsibilities, management, entities, and compliance. Specify identification and authentication controls that your organization requires. Use Cloud Identity Premium or Admin Console to identify corporate and personal devices that can connect to your organization's resources. Use Identity-Aware Proxy to enforce context-aware access to resources. Include guidance around authenticator content for your organization, authentication reuse conditions, standards for protecting authenticators, and standards for changing or refreshing authenticators. Also, capture requirements for using cached authenticators. 
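As one way to implement the alerting guidance above, the following sketch, which assumes the google-cloud-logging client library, creates a log-based metric that counts configuration-change events in the admin activity audit log; the metric name and the method names in the filter are illustrative and should be adjusted to the configuration changes that your change-control policy tracks:

from google.cloud import logging

PROJECT_ID = "my-project"  # placeholder project ID

# Audit-log methods that indicate IAM or firewall configuration changes; adjust
# the filter to the configuration events your change-control policy cares about.
CONFIG_CHANGE_FILTER = (
    'logName:"cloudaudit.googleapis.com%2Factivity" AND '
    '(protoPayload.methodName="SetIamPolicy" OR '
    'protoPayload.methodName:"firewalls.patch" OR '
    'protoPayload.methodName:"firewalls.insert")'
)

def create_config_change_metric():
    """Create a log-based metric that counts configuration-change audit events."""
    client = logging.Client(project=PROJECT_ID)
    metric = client.metric(
        "config-change-events",  # hypothetical metric name
        filter_=CONFIG_CHANGE_FILTER,
        description="Counts IAM and firewall configuration changes.",
    )
    if not metric.exists():
        metric.create()

if __name__ == "__main__":
    create_config_change_metric()

You can then attach a Cloud Monitoring alerting policy and notification channel to the resulting user-defined metric so that security admins are notified within the time period that your policy requires.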
Specify time limits for using cached authenticators and create definitions for when to expire cached authenticators. Define the minimum and maximum lifetime requirements and refresh time periods that should be enforced by information systems in your organization. Use Cloud Identity or Admin Console to enforce password policies for sensitivity, character usage, new password creation or reuse, password lifetime, storage, and transmission requirements. Outline hardware and software token authentication requirements for authentication across your organization, including but not limited to PIV card and PKI requirements. You can use *Titan Security Keys to enforce additional authentication requirements for admins and privileged personnel. In the identification and authentication policy, outline the Federal Identity, Credential, and Access Management (FICAM) information system components that are allowable for accepting third parties in your organization. Google's Identity Platform is a customer identity and access management (CIAM) platform that helps organizations add identity and access management functionality to applications that are being accessed by external entities. Follow these guidelines to aid in implementing these security controls: IA-01, IA-03, IA-04, IA-05, IA-05 (01), IA-05 (03), IA-05 (04), IA-05 (11), IA-05 (13), IA-08 (03). Incident response Establish an incident response policy for your organization, including procedures to facilitate and implement incident response controls. Create security groups for your organization's incident response teams and authorities. Use tools such as Google Cloud Observability or *Security Command Center to share incident events, logs, and details. Develop an incident response test plan, procedures and checklists, and requirements and benchmarks for success. Specify classes of incidents that your organization should recognize, and outline the associated actions to take in response to such incidents. Define the actions that you expect authorized personnel to take if an incident occurs. These actions might be steps for managing information spills, cybersecurity vulnerabilities, and attacks.Take advantage of capabilities in Google Workspace to scan and quarantine email content, block phishing attempts, and set restrictions on attachments. Use Sensitive Data Protection to inspect, classify, and de-identify sensitive data to help restrict exposure. Specify organization-wide requirements for incident response training, including training requirements for general users and privileged roles and responsibilities. Enforce time-period requirements for taking training (for example, within 30 days of joining, quarterly, or annually). Follow these guidelines to aid in implementing these security controls: IR-01, IR-02, IR-03, IR-04 (03), IR-04 (08), IR-06, IR-08, IR-09, IR-09 (01), IR-09 (03), IR-09 (04). System maintenance Create a system maintenance policy for your organization, documenting system maintenance controls, roles, responsibilities, management, coordination requirements, and compliance. Define parameters for controlled maintenance, including approval processes for conducting off-site maintenance and repairs, and organization-wide turnaround times for replacing failed devices and parts. Your organization will benefit from Data deletion on Google Cloud data and equipment sanitization, and Google's data center security and innovation for off-site maintenance and repairs. 
Media protection
As part of Google Cloud's FedRAMP ATO, we meet media protection requirements for physical infrastructure. Review Google's Infrastructure Security Design and Security Overview. You remain responsible for meeting the security requirements of your virtual infrastructure. Develop a media protection policy for your organization, documenting media controls, protection policies and procedures, compliance requirements, and management roles and responsibilities. Document procedures for facilitating and implementing media protections across your organization. Create security groups that identify personnel and roles for managing media and their protections. Specify approved media types and accesses for your organization, including digital and nondigital media restrictions. Set media markings and media-handling exceptions that must be implemented across your organization, including security marking requirements inside and outside of controlled access areas. Use *Data Catalog to manage cloud resource metadata and simplify data discovery. Control cloud resource compliance across your organization, regulating the distribution and discovery of cloud resources with *Service Catalog. Identify how to sanitize, dispose of, or reuse media that your organization manages. Outline the use cases and circumstances where sanitization, disposal, or reuse of media and devices is required or acceptable. Define the media safeguard methods and mechanisms that your organization deems acceptable. With Google, you benefit from Google's data deletion and equipment sanitization practices and from Google's data center security and innovation. In addition, Cloud KMS and Cloud HSM provide FIPS-compliant cryptographic protection, and you can use *Titan Security Keys to enforce additional physical authentication requirements for admins and privileged personnel. Follow these guidelines to aid in implementing these security controls: MP-01, MP-02, MP-03, MP-04, MP-06, MP-06 (03), MP-07.

Physical and environmental protection
As part of Google Cloud's FedRAMP ATO, we meet physical and environmental protection requirements for physical infrastructure. Review Google's Infrastructure Security Design and Security Overview. You remain responsible for meeting the security requirements of your virtual infrastructure. Establish a physical and environmental protection policy for your organization, outlining protection controls, protection entities, compliance standards, roles, responsibilities, and management requirements. Outline how to implement physical and environmental protection across your organization. Create security groups that identify personnel and roles for managing physical and environmental protections. Require admins who access sensitive computational resources to use *Titan Security Keys or another form of MFA to verify access integrity. In the physical and environmental protection policy, define physical access control requirements for your organization. Identify facility entry and exit points for information system sites, access-control safeguards for such facilities, and inventory requirements. Take advantage of tools such as *Google Maps Platform to visually display and track facilities and their entry and exit points for locational mappings. Use Resource Manager and *Service Catalog to control access to cloud resources, making them organized and easily discoverable.
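As a minimal sketch of the preceding point about using Resource Manager to control and review access to cloud resources, the following lists the projects under a folder and prints each project's IAM bindings. The folder ID is a placeholder; the google-cloud-resource-manager client library, Application Default Credentials, and permission to list projects and read IAM policies are assumed.

# Minimal sketch: review which principals hold which roles on the projects
# under a folder. FOLDER_ID is a placeholder folder number.
from google.cloud import resourcemanager_v3

FOLDER_ID = "123456789012"  # placeholder
client = resourcemanager_v3.ProjectsClient()

for project in client.list_projects(request={"parent": f"folders/{FOLDER_ID}"}):
    policy = client.get_iam_policy(request={"resource": project.name})
    print(project.project_id)
    for binding in policy.bindings:
        # Each binding maps one role to the members that hold it.
        print("  ", binding.role, list(binding.members))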
Use Cloud Monitoring to configure loggable events, accesses, and incidents. Define organization-wide physical access events that should be logged in Cloud Logging. Use the physical and environmental protection policy to account for emergency situations, such as emergency shutoff of information systems, emergency power, fire suppression, and emergency response. Identify points of contact for emergency response, including local emergency responders and physical security personnel for your organization. Outline requirements and locations for alternate work sites. Specify security controls and personnel for primary and alternate work sites. Deploy across Google's global, regional, and zonal resources and worldwide locations for high availability. Use Cloud Storage classes for multi-regional, regional, backup, and archive options. Implement global network autoscaling and load balancing with Cloud Load Balancing. Create declarative deployment templates to establish a repeatable, template-driven deployment process. Follow these guidelines to aid in implementing these security controls: PE-01, PE-03, PE-03 (01), PE-04, PE-06, PE-06 (04), PE-10, PE-13 (02), PE-17.

System security planning
Develop a security-planning policy for your organization, outlining security-planning controls, roles, responsibilities, management, security-planning entities, and compliance requirements for your organization. Outline how you expect security planning to be implemented across your organization. Create groups to define security-planning personnel accordingly. Specify security groups for security assessments, audits, hardware and software maintenance, patch management, and contingency planning for your organization. Use tools such as Google Cloud Observability or *Security Command Center to monitor security, compliance, and access control across your organization. Follow these guidelines to aid in implementing these security controls: PL-01, PL-02, PL-02 (03).

Personnel security
Create a personnel security policy that identifies security personnel, their roles and responsibilities, how you expect personnel security to be implemented, and what personnel security controls to enforce across your organization. Capture the conditions that would require individuals to go through organizational security screening, re-screening, and investigation. Outline requirements for security clearances in your organization. Include guidance for addressing personnel termination and transfer. Define needs and parameters for exit interviews and the security topics that should be discussed during such interviews. Specify when you expect security and admin entities in your organization to be notified of personnel termination, transfer, or reassignment (for example, within 24 hours). Specify the actions that you expect personnel and the organization to complete for a transfer, reassignment, or termination. Also, cover requirements for enforcing formal employee sanctions. Explain when you expect security personnel and admins to be notified of employee sanctions, and explain the sanction processes. Use IAM to assign roles and permissions to personnel. Add, remove, disable, and enable personnel profiles and accesses in Cloud Identity or Admin Console. Enforce additional physical authentication requirements for admins and privileged personnel using *Titan Security Keys. Follow these guidelines to aid in implementing these security controls: PS-01, PS-03, PS-04, PS-05, PS-07, PS-08.
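To make the IAM guidance above concrete, the following minimal sketch uses the Cloud Resource Manager API's getIamPolicy/setIamPolicy read-modify-write pattern to grant a role to a new team member on a project; revoking access on transfer or termination removes the member from the same binding. The project ID, member, and role are placeholders, and the google-api-python-client library with Application Default Credentials is assumed.

# Minimal sketch: grant a role to a user with the getIamPolicy/setIamPolicy
# read-modify-write pattern. PROJECT_ID, MEMBER, and ROLE are placeholders.
from googleapiclient import discovery

PROJECT_ID = "example-project"        # placeholder
MEMBER = "user:new.hire@example.com"  # placeholder
ROLE = "roles/viewer"                 # placeholder

service = discovery.build("cloudresourcemanager", "v1")

# Read the current policy, add the member to the role binding, write it back.
policy = service.projects().getIamPolicy(resource=PROJECT_ID, body={}).execute()
bindings = policy.setdefault("bindings", [])
binding = next((b for b in bindings if b["role"] == ROLE), None)
if binding is None:
    binding = {"role": ROLE, "members": []}
    bindings.append(binding)
members = binding.setdefault("members", [])
if MEMBER not in members:
    members.append(MEMBER)

service.projects().setIamPolicy(
    resource=PROJECT_ID, body={"policy": policy}
).execute()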
Risk assessment
Implement a risk assessment policy that identifies risk assessment personnel, the risk assessment controls that you expect to be enforced across your organization, and procedures for carrying out risk assessments in your organization. Define how you expect risk assessments to be documented and reported. Use tools such as *Security Command Center to automatically notify security personnel of security risks and of your organization's overall security posture. Leverage Google's suite of risk assessment tools, such as Web Security Scanner, Artifact Analysis, Google Cloud Armor, and Google Workspace phishing and malware protection, to scan for and report on vulnerabilities across your organization's information systems. Make these tools available to risk assessment personnel and admins to help identify and eliminate vulnerabilities. Follow these guidelines to aid in implementing these security controls: RA-01, RA-03, RA-05.

System and services acquisition
Develop a system and services acquisition policy that outlines key personnel's roles and responsibilities, acquisition and services management, compliance, and entities. Outline system and services acquisition procedures and implementation guidelines for your organization. Define your organization's system development lifecycle for information systems and information security. Outline information security roles and responsibilities, personnel, and how you expect your organization's risk assessment policy to drive and influence system development lifecycle activities. Highlight the procedures that you expect to be carried out within your organization when information system documentation is not available or undefined. Engage your organization's information system admins and system services personnel as required. Define any required training for admins and users who are implementing or accessing information systems in your organization. Use tools such as *Security Command Center to track security compliance, findings, and security control policies for your organization. Google outlines all of its security standards, regulations, and certifications to help educate customers on how to meet compliance requirements and laws on Google Cloud. In addition, Google offers a suite of security products to help customers continuously monitor their information systems, communications, and data both in the cloud and on-premises. Specify any locational restrictions for your organization's data, services, and information processing, and the conditions under which data can be stored elsewhere. Google offers global, regional, and zonal options for data storage, processing, and services utilization in Google Cloud. Leverage the configuration management policy to regulate developer configuration management for system and services acquisition controls, and use the security assessment and authorization policy to enforce developer security testing and evaluation requirements. Follow these guidelines to aid in implementing these security controls: SA-01, SA-03, SA-05, SA-09, SA-09 (01), SA-09 (04), SA-09 (05), SA-10, SA-11, SA-16.
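Referring back to the data-location guidance above, here is a minimal sketch with the Cloud Storage client library that creates a bucket pinned to a specific region with an archival storage class, so that backup data stays within an approved location. The bucket name and region are placeholders, and Application Default Credentials are assumed.

# Minimal sketch: create a bucket pinned to an approved region with an
# archival storage class. BUCKET_NAME and LOCATION are placeholders.
from google.cloud import storage

BUCKET_NAME = "example-compliance-archive"  # placeholder; must be globally unique
LOCATION = "europe-west3"                   # placeholder approved region

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)
bucket.storage_class = "ARCHIVE"  # long-term, low-cost backup storage

new_bucket = client.create_bucket(bucket, location=LOCATION)
print(new_bucket.name, new_bucket.location, new_bucket.storage_class)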
System and communications protection
Create a system and communications protection policy that outlines key personnel's roles and responsibilities, implementation requirements for system and communications protection policies, and the required protection controls for your organization. Identify the types of denial-of-service attacks that your organization recognizes and monitors for, and outline DoS protection requirements for your organization. Use Google Cloud Observability to log, monitor, and alert on predefined security attacks against your organization. Implement tools such as Cloud Load Balancing and Google Cloud Armor to safeguard your cloud perimeter, and leverage VPC services such as firewalls and network security controls to protect your internal cloud network. Identify your organization's resource availability requirements; define how you expect cloud resources to be allocated across your organization and what constraints to implement in order to restrict over-utilization. Use tools such as Resource Manager to control access to resources at the organization, folder, project, and individual resource level. Set resource quotas to manage API requests and resource utilization in Google Cloud. Establish boundary protection requirements for your information systems and system communications. Define requirements for internal communications traffic and how you expect internal traffic to engage with external networks. Specify requirements for proxy servers and other network routing and authentication components. Take advantage of *Cloud Service Mesh to manage network traffic and communications flow for your organization. Use Identity-Aware Proxy to control access to cloud resources based on authentication, authorization, and context, including geographic location or device fingerprint. Implement *Private Google Access, *Cloud VPN, or *Cloud Interconnect to secure network traffic and communications between internal and external resources. Use VPC to define and secure your organization's cloud networks; establish subnetworks to further isolate cloud resources and network perimeters. Google offers global software-defined networks with multi-regional, regional, and zonal options for high availability and failover. Define failure requirements for your organization to ensure that your information systems fail to a known state. Capture requirements for preserving information system state information. Use managed instance groups and Deployment Manager templates to re-instantiate failed or unhealthy resources. Give admins access to *Security Command Center to actively monitor your organization's confidentiality, integrity, and availability posture. In the policy, outline your organization's requirements for managing cryptographic keys, including requirements for key generation, distribution, storage, access, and destruction. Use Cloud KMS and Cloud HSM to manage, generate, use, rotate, store, and destroy FIPS-compliant security keys in the cloud (see the sketch below). Google encrypts data at rest by default; however, you can use Cloud KMS with Compute Engine and Cloud Storage to further encrypt data by using cryptographic keys. You can also deploy Shielded VMs to enforce kernel-level integrity controls on Compute Engine. Follow these guidelines to aid in implementing these security controls: SC-01, SC-05, SC-06, SC-07 (08), SC-07 (12), SC-07 (13), SC-07 (20), SC-07 (21), SC-12, SC-24, SC-28, SC-28 (01).
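To illustrate the key-management guidance above, here is a minimal sketch that uses the Cloud KMS client library to create a symmetric encryption key with an automatic 90-day rotation schedule. The project, location, key ring, and key names are placeholders; the key ring is assumed to already exist, and Application Default Credentials are assumed.

# Minimal sketch: create a symmetric key with 90-day automatic rotation in an
# existing key ring. All resource names below are placeholders.
import time
from google.cloud import kms

PROJECT_ID = "example-project"   # placeholder
LOCATION = "us-central1"         # placeholder
KEY_RING = "compliance-keyring"  # placeholder; assumed to already exist
KEY_ID = "data-at-rest-key"      # placeholder

client = kms.KeyManagementServiceClient()
key_ring_name = client.key_ring_path(PROJECT_ID, LOCATION, KEY_RING)

key = {
    "purpose": kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT,
    "version_template": {
        "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
    },
    "rotation_period": {"seconds": 60 * 60 * 24 * 90},            # rotate every 90 days
    "next_rotation_time": {"seconds": int(time.time()) + 86400},  # first rotation in 24 hours
}

created_key = client.create_crypto_key(
    request={"parent": key_ring_name, "crypto_key_id": KEY_ID, "crypto_key": key}
)
print("Created key:", created_key.name)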
System and information integrity
Implement a system and information integrity policy that outlines key personnel's roles and responsibilities, integrity implementation procedures and requirements, compliance standards, and security controls for your organization. Create security groups for the personnel in your organization who are responsible for system and information integrity. Outline flaw-remediation requirements for your organization, including guidelines for monitoring, assessing, authorizing, implementing, planning, benchmarking, and remediating security flaws across your organization and its information systems. Take advantage of Google's suite of security tools, including but not limited to the following:
- Chrome Browser
- Web Security Scanner
- Artifact Analysis
- Google Workspace phishing and malware protections
- Google Workspace security center
- Google Cloud Armor
Use these tools to do the following:
- Protect against malicious code, cyber attacks, and common vulnerabilities.
- Quarantine spam and set spam and malware policies.
- Alert admins about vulnerabilities.
- Gain insights across your organization for central management.
Use tools such as Google Cloud Observability or *Security Command Center to centrally manage, alert on, and monitor your organization's security controls and findings. More specifically, use Google Cloud Observability to log administrative actions, data accesses, and system events initiated by privileged users and personnel across your organization. Notify admins about error messages and information system error handling. Define security-relevant events relative to your organization's software, firmware, and information (for example, zero-day vulnerabilities, unauthorized data deletion, or installation of new hardware, software, or firmware). Explain the steps to take when these types of security-relevant changes occur. Specify monitoring objectives and indicators of attack for admins to pay special attention to, including the essential information that should be monitored within information systems across your organization. Define system and information monitoring roles and responsibilities, as well as monitoring and reporting frequency (for example, real-time, every 15 minutes, every hour, or quarterly). Capture requirements for analyzing communications traffic for information systems across your organization. Specify requirements for discovering anomalies, including system points for monitoring. Google's *Network Intelligence Center services make it possible to conduct in-depth network performance and security monitoring. Google also has strong third-party partnerships that integrate with Google Cloud for scanning and protecting cloud endpoints and hosts, such as +Aqua Security and +CrowdStrike. Shielded VMs make it possible to harden devices, verify authentication, and ensure secure boot processes. Define how you expect your organization to check for and safeguard against security anomalies and integrity violations. Use tools such as *Security Command Center or *Policy Intelligence to monitor and detect configuration changes. Use +configuration management tools or Deployment Manager templates to re-instantiate resources or to halt changes to cloud resources. In the system and information integrity policy, specify requirements for authorizing and approving network services in your organization. Outline approval and authorization processes for network services. VPC is essential for defining cloud networks and subnetworks, using firewalls to protect network perimeters. VPC Service Controls makes it possible to enforce additional network security perimeters for sensitive data in the cloud. On top of all of this, you automatically inherit Google's secure boot stack and trusted, defense-in-depth infrastructure. Follow these guidelines to aid in implementing these security controls: SI-01, SI-02 (01), SI-02 (03), SI-03 (01), SI-04, SI-04 (05), SI-04 (11), SI-04 (18), SI-04 (19), SI-04 (20), SI-04 (22), SI-04 (23), SI-05, SI-06, SI-07, SI-07 (01), SI-07 (05), SI-07 (07), SI-08 (01), SI-10, SI-11, SI-16.
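As one way to act on the guidance above about logging administrative actions and reviewing configuration changes, the following minimal sketch lists recent Admin Activity audit log entries for a project with the Cloud Logging client library. The project ID is a placeholder and Application Default Credentials are assumed.

# Minimal sketch: list recent Admin Activity audit log entries so security
# admins can review administrative actions and configuration changes.
# PROJECT_ID is a placeholder.
from google.cloud import logging

PROJECT_ID = "example-project"  # placeholder
client = logging.Client(project=PROJECT_ID)

# The Admin Activity audit log records privileged operations such as IAM and
# configuration changes.
log_filter = (
    f'logName="projects/{PROJECT_ID}/logs/cloudaudit.googleapis.com%2Factivity"'
)

for entry in client.list_entries(
    filter_=log_filter, order_by=logging.DESCENDING, max_results=20
):
    print(entry.timestamp, entry.log_name)
    print(entry.payload)  # includes the caller, method name, and resource name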
Conclusion
Security and compliance in the cloud is a joint effort between you and your cloud service provider (CSP). While Google ensures that the physical infrastructure and corresponding services support compliance against dozens of third-party standards, regulations, and certifications, you are required to ensure that anything you build in the cloud is compliant. Google Cloud supports you in your compliance efforts by providing the same set of security products and capabilities that Google uses to protect its infrastructure.

What's next
Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Google_Cloud_Free_Program.txt b/Google_Cloud_Free_Program.txt new file mode 100644 index 0000000000000000000000000000000000000000..3e5cc520bfc605d24e9213af86d6da1fae2e62dd --- /dev/null +++ b/Google_Cloud_Free_Program.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/free +Date Scraped: 2025-02-23T12:11:09.686Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayBuild what’s next in generative AITry Gemini 2.0 models, the latest and most advanced multimodal models in Vertex AI. See what you can build with up to a 2M token context window, starting as low as $0.0001.Try it in consoleContact salesMore ways you can build and scale with Google CloudTry Gemini Code Assist to accelerate developer productivityAuto-complete code, automate inner loop tasks, speed up app development with Gemini Code Assist.Get a $1,000 credit for Vertex AI Agent BuilderNew Vertex AI Agent Builder customers get a one-time $1,000 credit per Google Cloud billing account to create search, chat, and other gen AI apps.Start your next project with 20+ free productsGet free usage of Compute Engine, Cloud Storage, BigQuery, AI APIs, and more popular products up to monthly limits.Free Tier productsThere is no charge to use these products up to their specified free usage limit. The free usage limit does not expire, but is subject to change. Available for eligible customers.Compute Engine Scalable, high-performance virtual machines.1 e2-micro instance per monthCloud StorageBest-in-class performance, reliability, and pricing for all your storage needs.5 GB-months Standard StorageBigQueryFully managed, petabyte scale, analytics data warehouse.1 TB queries per monthCloud RunA fully managed environment to run stateless containers, build apps, or host and deploy websites.
2 million requests per monthGoogle Kubernetes EngineOne-click container orchestration via Kubernetes clusters, managed by Google.One Autopilot or Zonal cluster per monthCloud BuildFast, consistent, reliable builds on Google Cloud.120 build-minutes per dayOperations (formerly Stackdriver)Monitoring, logging, and diagnostics for applications on Google Cloud.Monthly allotments for logging and monitoringFirestoreNoSQL document database that simplifies storing, syncing, and querying data for apps.1 GB storagePub/SubA global service for real-time and reliable messaging and streaming data.10 GB messages per monthCloud Run functionsA serverless environment to build and connect cloud services with code.2 million invocations per monthVision AILabel detection, OCR, facial detection, and more.1,000 units per monthSpeech-to-TextSpeech-to-text transcription — the same that powers Google's own products.60 minutes per monthNatural Language APIDerive insights from unstructured text using Google machine learning.5,000 units per monthAutoML TranslationCreate custom ML models for translation queries.500,000 translated characters per month Video Intelligence APIPre-trained ML models that recognize objects, places, and actions in stored and streaming video.1,000 units per monthWorkflowsRun fully-managed sequences of service calls across Google Cloud and any HTTP APIs.5,000 free internal steps per monthCloud Source RepositoriesMultiple private Git repositories hosted on Google Cloud.Free access for up to five usersGoogle Cloud MarketplaceClick-to-deploy, production-grade solutions from Google Cloud partners.Free trials of select apps and servicesSecret ManagerSecurely store API keys, passwords, certificates, and other sensitive data.6 secret versions per monthCloud ShellOnline development and operations environment accessible anywhere with your browser.Includes 5 GB of persistent disk storageExplore more offers from Google CloudOFFERKEY BENEFITSGKE Enterprise Free TrialTry GKE Enterprise with the full set of product capabilities for 90 days for no fee.Fleet-based team management Managed GitOps-based configuration management GKE on hybrid and multicloudSpanner Free TrialCreate a 90-day Spanner free trial instance with 10 GB of storage at no cost. Plus, new Google Cloud customers get $300 in free credits on signup.Always on database with virtually unlimited scaleIndustry-leading 99.999% availability SLABrings together relational, graph, key value, and searchAlloyDB for PostgreSQL Free TrialStart a 30-day AlloyDB free trial with an 8 vCPU basic primary instance and up to 1 TB of storage capacity. Plus, new Google Cloud customers get $300 in free credits on signup.PostgreSQL database with increased transactional performance and scalabilityBuilt-in generative AI with Vertex AI integrationAccelerated analytical queries with columnar engineLooker Free TrialCreate a 30-day Looker trial instance with all standard features at no cost. Connect to data in 50+ different databasesBuild and manage centralized data modelsAccess and analyze your trusted dataOFFERGKE Enterprise Free TrialTry GKE Enterprise with the full set of product capabilities for 90 days for no fee.Fleet-based team management Managed GitOps-based configuration management GKE on hybrid and multicloudSpanner Free TrialCreate a 90-day Spanner free trial instance with 10 GB of storage at no cost. 
Plus, new Google Cloud customers get $300 in free credits on signup.Always on database with virtually unlimited scaleIndustry-leading 99.999% availability SLABrings together relational, graph, key value, and searchAlloyDB for PostgreSQL Free TrialStart a 30-day AlloyDB free trial with an 8 vCPU basic primary instance and up to 1 TB of storage capacity. Plus, new Google Cloud customers get $300 in free credits on signup.PostgreSQL database with increased transactional performance and scalabilityBuilt-in generative AI with Vertex AI integrationAccelerated analytical queries with columnar engineLooker Free TrialCreate a 30-day Looker trial instance with all standard features at no cost. Connect to data in 50+ different databasesBuild and manage centralized data modelsAccess and analyze your trusted dataTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesManage your accountGo to consoleWork with a trusted partnerFind a partnerGoogle Maps Platform usage at no costSee pricing details \ No newline at end of file diff --git a/Google_Cloud_Marketplace.txt b/Google_Cloud_Marketplace.txt new file mode 100644 index 0000000000000000000000000000000000000000..8babe25a7d37eafb0575f9bdaa00f93df09bb2b0 --- /dev/null +++ b/Google_Cloud_Marketplace.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/marketplace +Date Scraped: 2025-02-23T12:11:54.307Z + +Content: +Announcing partner-delivered professional services on Google Cloud Marketplace.Google Cloud MarketplaceDiscover, try, buy, and use industry-leading data, DevOps, AI, security, and business applications that have been validated to run on Google Cloud. Find professional services delivered by qualified providers. Get trusted technology in action fast while optimizing your cloud spend.Explore the cloud marketplaceSell on Google Cloud MarketplaceWhat is Google Cloud Marketplace?Google Cloud Marketplace offers a universal catalog of solutions from Google and our partner ecosystem for customers to easily discover, try, buy, and use.Discover industry-leading technologyEasily browse our catalog of validated software and services that have undergone validation to integrate with your Google Cloud environment.Simplify procurement Eliminate long procurement cycles and accelerate vendor reviews with flexible payment and fulfillment options from trusted technology vendors and channel partners.
Optimize your cloud spendGet the most out of your cloud investment through qualifying purchases of third-party Google Cloud Marketplace solutions that can draw down on your Google Cloud commitments.Streamline deploymentAccelerate time-to-value with streamlined deployment options for AI agents, SaaS, APIs, VM products, Google Kubernetes Engine (GKE) apps, datasets, and foundational AI models that deploy to Vertex AI or GKE.Maintain governanceLeverage a comprehensive governance toolkit to manage user access, create a curated Private Marketplace of approved products, and allow developers to safely request products be made available for use.Consolidate billingIntegrated cost reporting in Google Cloud console allows you to maintain visibility over your Google Cloud and third-party Google Cloud Marketplace spend.What's new with Google Cloud MarketplaceAI Agent Space, a new category in Google Cloud Marketplace to easily find and deploy partner-built AI agents.Read the blogExplore how companies are building AI applications faster with solutions on Google Cloud Marketplace.Watch the videosLearn why Google Cloud Marketplace is the smarter way to find, try, buy, and use apps.Get the infographicGet the Google Cloud Marketplace products you need through your chosen channel partners.Learn moreCustomer success storiesVideoSevenRooms saves time procuring Fivetran with Google Cloud Marketplace42 secondsBlog postAccelerating operations at Delta Dental New Jersey with Google Cloud and Datadog4 minute readBlog postADT has tapped into Google Cloud Marketplace for on-demand technologies and services5 minute readVideoInfobip uses Google Cloud Marketplace to get HumanFirst's AI solution fast45 secondsBlog postFrom an IT management perspective, purchasing CockroachDB through Google Cloud Marketplace gives Booksy unified billing for all its IT costs5 minute readSee all customersFeatured partner solutionsExplore thousands of industry-leading solutions on Google Cloud Marketplace that have undergone technical validation to enable seamless integration with your Google Cloud environment.Expand allAI AgentsFind more AI agentsData solutionsFind no-cost and commercial datasets that are instantly accessible through BigQuery. See all datasets.Database and analytics solutionsDatabase, data lake, and analytics solutions that integrate with Google Cloud. See all database and analytics solutions.DevOpsSee all DevOps toolsFinancial ServicesSee all financial services solutionsGenerative AIFind AI SaaS solutions, generative AI foundational models, and large language models (LLMs) that deploy to Vertex AI and Google Kubernetes Engine. See all generative AI solutions.Healthcare and Life SciencesSee all healthcare and life sciences solutionsInfrastructure solutionsOperating systems, backup and storage, compute, and networking solutions to support infrastructure modernization.Line-of-Business applicationsSee all business applicationsProfessional servicesSee all professional servicesRetail and Consumer Packaged Goods (CPG)See all retail and CPG solutionsSecuritySee all security solutionsTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact usExplore the cloud marketplaceFind solutionsWork with a trusted partnerFind a partnerLearn how to discover, buy, and manage solutions on the MarketplaceView documentation \ No newline at end of file diff --git a/Google_Cloud_NetApp_Volumes.txt b/Google_Cloud_NetApp_Volumes.txt new file mode 100644 index 0000000000000000000000000000000000000000..045aafe6703ad3484768d2fe74829316b58e36a7 --- /dev/null +++ b/Google_Cloud_NetApp_Volumes.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/netapp-volumes +Date Scraped: 2025-02-23T12:10:14.391Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Google Cloud NetApp VolumesNetApp's best-in-class file storage service, in Google CloudSecure and performant file storage that supports SAP, Microsoft, and Linux-based applications and makes it easy to migrate without refactoring or redesigning.Go to consoleProduct highlightsSupports NFS, SMB, and multiprotocol environmentsMinimize storage costs by nearly 90% with Instant SnapshotsInstantly restore data after ransomware, corruption, or lossView documentationFeaturesRun enterprise apps faster and more efficientlyGoogle Cloud NetApp Volumes supports most on-prem applications. Seamlessly lift-and-optimize mission-critical Linux, Windows, VMware, and SAP shared file workloads. Integrate and run complex, performance-intensive, and latency-sensitive applications with rapid response times and up to 4.5 GiB/sec of throughput.Reduce cloud storage costs by up to 90%Set automation and optimization controls that fit your business, and reduce storage needed with instant, zero-footprint snapshots and volume clones.Meet mandates for uptime, availability, and securityInstantly restore data from in-place, near-zero-footprint snapshots or remote backup copies in the event of unplanned data loss, ransomware, or data corruption. Provide business continuity for your cloud application using cross-region replication, cloning and backups.View all featuresHow It WorksIDC’s analysis found that interviewed organizations realized an average annual savings of $4.7 million and a three-year ROI of 457% from NetApp Cloud Volumes.
These savings were achieved in two ways: by reducing the occurrence of unplanned outages and improving the productivity of IT teams.Download guideCommon UsesData sharing for Windows applicationsNetApp Volumes enables data sharing for Windows applications, making it useful for user and group shares, application shares for unstructured data, SAP shared files, VDI, and shared storage for MS-SQL.Learn moreLearning resourcesNetApp Volumes enables data sharing for Windows applications, making it useful for user and group shares, application shares for unstructured data, SAP shared files, VDI, and shared storage for MS-SQL.Learn moreData sharing for Linux applicationsNetApp Volumes enables data sharing for Linux applications, making it good for application shares for unstructured data; SAP shared files: binaries, log files, config files; user and group shares; shared machine learning data; EDA-shared chip design data; and PACS images.Learn moreLearning resourcesNetApp Volumes enables data sharing for Linux applications, making it good for application shares for unstructured data; SAP shared files: binaries, log files, config files; user and group shares; shared machine learning data; EDA-shared chip design data; and PACS images.Learn moreEasy data recovery with snapshots and backupsGoogle Cloud NetApp Volumes enables easy data recovery if a user or application deletes data accidentally, and scheduled snapshots provide convenient, low RPO/RTO recovery points. Users can access snapshots and backups and self-restore data quickly and easily from within the console.Learn moreLearning resourcesGoogle Cloud NetApp Volumes enables easy data recovery if a user or application deletes data accidentally, and scheduled snapshots provide convenient, low RPO/RTO recovery points. Users can access snapshots and backups and self-restore data quickly and easily from within the console.Learn moreQuickly recover from ransomware attacksNetApp Volumes enables fast ransomware recovery. Easily revert any volume back to a snapshot taken before the ransomware hit. Recovery time of volume is less than one minute versus restoring from backup which can take hours.Learn moreLearning resourcesNetApp Volumes enables fast ransomware recovery. Easily revert any volume back to a snapshot taken before the ransomware hit. Recovery time of volume is less than one minute versus restoring from backup which can take hours.Learn moreAsynchronous replication for disaster recoveryProtect your data through cross-location volume replication, which asynchronously replicates a source volume in one location to a destination volume in a different location. This capability lets you use the replicated volume for critical application activity in case of a location-wide outage or disaster. The replicated volume can also be used as a read-only copy during normal usage.Learn moreLearning resourcesProtect your data through cross-location volume replication, which asynchronously replicates a source volume in one location to a destination volume in a different location. This capability lets you use the replicated volume for critical application activity in case of a location-wide outage or disaster. 
The replicated volume can also be used as a read-only copy during normal usage.Learn morePricingService levelsService level fit is evaluated by performance—as a combination of throughput, R/W mix, and latency.Service levelFlexFlex RegionalStandardPremiumExtremePerformance16 MiB/sec per TiB (Throughput)16 MiB/sec per TiB (Throughput)16 MiB/sec per TiB (Throughput)64 MiB/sec per TiB, Max 4.5 GiBps (Throughput)128 MiB/sec per TiB, Max 4.5 GiBps (Throughput)Price$0.20/GiB$0.40/GiB$0.20/GiB$0.29/GiB$0.39/GiBVolume replication$0.11-$0.14/GiB (depending on RPO)$0.11-$0.14/GiB (depending on RPO)$0.11-$0.14/GiB (depending on RPO)$0.11-$0.14/GiB (depending on RPO)$0.11-$0.14/GiB (depending on RPO)Regional availability40 Google Cloud regions (see list)40 Google Cloud regions (see list)15 Google Cloud regions15 Google Cloud regions15 Google Cloud regionsUptime / SLA99.9%99.99%99.9%99.95%99.95%ProtocolsNFSv3/v4.1, SMBNFSv3/v4.1, SMBNFSv3/v4.1, SMB, Dual (SMB/NFS)NFSv3/v4.1, SMB, Dual (SMB/NFS)NFSv3/v4.1, SMB, Dual (SMB/NFS)Cost for us-central1 region in USD. Service levelsService level fit is evaluated by performance—as a combination of throughput, R/W mix, and latency.PerformanceFlex16 MiB/sec per TiB (Throughput)Flex Regional16 MiB/sec per TiB (Throughput)Standard16 MiB/sec per TiB (Throughput)Premium64 MiB/sec per TiB, Max 4.5 GiBps (Throughput)Extreme128 MiB/sec per TiB, Max 4.5 GiBps (Throughput)PriceFlex$0.20/GiBFlex Regional$0.40/GiBStandard$0.20/GiBPremium$0.29/GiBExtreme$0.39/GiBVolume replicationFlex$0.11-$0.14/GiB (depending on RPO)Flex Regional$0.11-$0.14/GiB (depending on RPO)Standard$0.11-$0.14/GiB (depending on RPO)Premium$0.11-$0.14/GiB (depending on RPO)Extreme$0.11-$0.14/GiB (depending on RPO)Regional availabilityFlex40 Google Cloud regions (see list)Flex Regional40 Google Cloud regions (see list)Standard15 Google Cloud regionsPremium15 Google Cloud regionsExtreme15 Google Cloud regionsUptime / SLAFlex99.9%Flex Regional99.99%Standard99.9%Premium99.95%Extreme99.95%ProtocolsFlexNFSv3/v4.1, SMBFlex RegionalNFSv3/v4.1, SMBStandardNFSv3/v4.1, SMB, Dual (SMB/NFS)PremiumNFSv3/v4.1, SMB, Dual (SMB/NFS)ExtremeNFSv3/v4.1, SMB, Dual (SMB/NFS)Cost for us-central1 region in USD. 
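As a rough illustration of how the service-level table above combines capacity, throughput, and price, the sketch below estimates monthly cost and available throughput for a hypothetical volume using the published us-central1 list prices; the 2 TiB capacity is only an example, and the Premium and Extreme levels remain subject to the 4.5 GiBps throughput cap.

# Rough estimate from the published us-central1 list prices in the table
# above. The 2 TiB capacity is a placeholder example.
CAPACITY_GIB = 2048  # 2 TiB volume (placeholder)

price_per_gib = {"Flex": 0.20, "Standard": 0.20, "Premium": 0.29, "Extreme": 0.39}
throughput_per_tib = {"Flex": 16, "Standard": 16, "Premium": 64, "Extreme": 128}  # MiB/s per TiB

for level, price in price_per_gib.items():
    monthly_cost = CAPACITY_GIB * price
    throughput_mib_s = (CAPACITY_GIB / 1024) * throughput_per_tib[level]
    print(f"{level}: ~${monthly_cost:,.2f}/month, ~{throughput_mib_s:.0f} MiB/s")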
Pricing detailLearn more about NetApp Volumes pricing.See pricing detailsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteGet started todayNew customers get $300 in free creditsTry Google Cloud freeNot sure which storage option is best?See all storage productsTalk to an expert sales representativeStart nowSee technical detailsView documentationRead blog post announcementVisit blog \ No newline at end of file diff --git a/Google_Cloud_certification.txt b/Google_Cloud_certification.txt new file mode 100644 index 0000000000000000000000000000000000000000..00e431e46a99dad9af86f80070feebb12b1b93a2 --- /dev/null +++ b/Google_Cloud_certification.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/certification +Date Scraped: 2025-02-23T12:11:18.727Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayGoogle Cloud CertificationDemonstrate your knowledge and skills with an industry-recognized Google Cloud certification.View certifications0:55Why get Google Cloud certified87% of Google Cloud certified individuals are more confident in their cloud skills.1Google Cloud certifications are among the highest paying IT certifications of 2023.2More than 1 in 4 of Google Cloud certified individuals took on more responsibility or leadership roles at work.1Which Google Cloud certification is right for you?Foundational certificationValidates broad knowledge of cloud concepts and the products, services, tools, features, benefits, and use cases of Google Cloud.Recommended candidate:Has fundamental understanding of Google Cloud products, concepts and servicesCollaborative role with technical professionalsNo technical prerequisitesRoleCloud Digital LeaderAssociate certificationValidates fundamental skills to deploy and maintain cloud projects.Recommended candidate:Has experience deploying cloud applications and monitoring operationsHas experience managing cloud enterprise solutionsRoleCloud EngineerGoogle Workspace AdministratorData PractitionerProfessional certificationValidates key technical job functions and advanced skills in design, implementation and management of Google Cloud products. This includes specialized certifications.Recommended candidate:Has in-depth experience setting up cloud environments for an organization Has in-depth experience deploying services and solutions based on business requirementsRoleCloud ArchitectCloud Database EngineerCloud DeveloperData EngineerCloud DevOps EngineerCloud Security EngineerCloud Network EngineerMachine Learning EngineerShowcase your skillsShowcase, share and manage your verified Google Cloud credentials with your Credential Wallet, powered by Credly.Credential WalletOpt-in to the Skills Directory to help connect to job opportunities, and boost your visibility with cloud talent seekers.Skills DirectoryLearn more about Google Cloud certificationsLearn more about Google Cloud Certification and find answers to frequently asked questions.Go to the help centerTake one class or follow a full learning path.
Develop your cloud skills through virtual or in-person training.See training optionsTune in live to Cloud OnAir to learn more about certifications, get exam tips and tricks, and hear insights from industry experts.Join usMore than 1 in 4 of Google Cloud certified individuals reported taking on more responsibility or leadership roles at work.Read the full reportCertification stories and resourcesBlog postA visual tour of Google Cloud certificationsRead blogBlog postDiscover the value of Google Cloud certifications for you and your organization Read blogBlog postShowcase your skills: Discover new ways to skill up with Google Cloud credentialsRead blogBlog postPrepare for Google Cloud certification with no-cost training resources Read blogCase study90% agree skill badges have helped in their Google Cloud certification journey Download reportVideoHear from a Data Engineer and Educator on how Google Cloud Certification helps her serve her communityVideo (1:32)See exam terms & conditions1. Based on survey responses from the 2020 Google Cloud certification impact report.2. Based on responses from the Global Knowledge 2023 IT Skills and Salary Survey.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all products \ No newline at end of file diff --git a/Google_Cloud_free_tier.txt b/Google_Cloud_free_tier.txt new file mode 100644 index 0000000000000000000000000000000000000000..42380f7818d48e8173a1f354e1804ad8a04ac38f --- /dev/null +++ b/Google_Cloud_free_tier.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/free +Date Scraped: 2025-02-23T12:10:38.665Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayBuild what’s next in generative AITry Gemini 2.0 models, the latest and most advanced multimodal models in Vertex AI. See what you can build with up to a 2M token context window, starting as low as $0.0001.Try it in consoleContact salesMore ways you can build and scale with Google CloudTry Gemini Code Assist to accelerate developer productivityAuto-complete code, automate inner loop tasks, speed up app development with Gemini Code Assist.Get a $1,000 credit for Vertex AI Agent BuilderNew Vertex AI Agent Builder customers get a one-time $1,000 credit per Google Cloud billing account to create search, chat, and other gen AI apps.Start your next project with 20+ free productsGet free usage of Compute Engine, Cloud Storage, BigQuery, AI APIs, and more popular products up to monthly limits.Free Tier productsThere is no charge to use these products up to their specified free usage limit. The free usage limit does not expire, but is subject to change. Available for eligible customers.Compute Engine Scalable, high-performance virtual machines.1 e2-micro instance per monthCloud StorageBest-in-class performance, reliability, and pricing for all your storage needs.5 GB-months Standard StorageBigQueryFully managed, petabyte scale, analytics data warehouse.1 TB queries per monthCloud RunA fully managed environment to run stateless containers, build apps, or host and deploy websites.
2 million requests per monthGoogle Kubernetes EngineOne-click container orchestration via Kubernetes clusters, managed by Google.One Autopilot or Zonal cluster per monthCloud BuildFast, consistent, reliable builds on Google Cloud.120 build-minutes per dayOperations (formerly Stackdriver)Monitoring, logging, and diagnostics for applications on Google Cloud.Monthly allotments for logging and monitoringFirestoreNoSQL document database that simplifies storing, syncing, and querying data for apps.1 GB storagePub/SubA global service for real-time and reliable messaging and streaming data.10 GB messages per monthCloud Run functionsA serverless environment to build and connect cloud services with code.2 million invocations per monthVision AILabel detection, OCR, facial detection, and more.1,000 units per monthSpeech-to-TextSpeech-to-text transcription — the same that powers Google's own products.60 minutes per monthNatural Language APIDerive insights from unstructured text using Google machine learning.5,000 units per monthAutoML TranslationCreate custom ML models for translation queries.500,000 translated characters per month Video Intelligence APIPre-trained ML models that recognize objects, places, and actions in stored and streaming video.1,000 units per monthWorkflowsRun fully-managed sequences of service calls across Google Cloud and any HTTP APIs.5,000 free internal steps per monthCloud Source RepositoriesMultiple private Git repositories hosted on Google Cloud.Free access for up to five usersGoogle Cloud MarketplaceClick-to-deploy, production-grade solutions from Google Cloud partners.Free trials of select apps and servicesSecret ManagerSecurely store API keys, passwords, certificates, and other sensitive data.6 secret versions per monthCloud ShellOnline development and operations environment accessible anywhere with your browser.Includes 5 GB of persistent disk storageExplore more offers from Google CloudOFFERKEY BENEFITSGKE Enterprise Free TrialTry GKE Enterprise with the full set of product capabilities for 90 days for no fee.Fleet-based team management Managed GitOps-based configuration management GKE on hybrid and multicloudSpanner Free TrialCreate a 90-day Spanner free trial instance with 10 GB of storage at no cost. Plus, new Google Cloud customers get $300 in free credits on signup.Always on database with virtually unlimited scaleIndustry-leading 99.999% availability SLABrings together relational, graph, key value, and searchAlloyDB for PostgreSQL Free TrialStart a 30-day AlloyDB free trial with an 8 vCPU basic primary instance and up to 1 TB of storage capacity. Plus, new Google Cloud customers get $300 in free credits on signup.PostgreSQL database with increased transactional performance and scalabilityBuilt-in generative AI with Vertex AI integrationAccelerated analytical queries with columnar engineLooker Free TrialCreate a 30-day Looker trial instance with all standard features at no cost. Connect to data in 50+ different databasesBuild and manage centralized data modelsAccess and analyze your trusted dataOFFERGKE Enterprise Free TrialTry GKE Enterprise with the full set of product capabilities for 90 days for no fee.Fleet-based team management Managed GitOps-based configuration management GKE on hybrid and multicloudSpanner Free TrialCreate a 90-day Spanner free trial instance with 10 GB of storage at no cost. 
Plus, new Google Cloud customers get $300 in free credits on signup.Always on database with virtually unlimited scaleIndustry-leading 99.999% availability SLABrings together relational, graph, key value, and searchAlloyDB for PostgreSQL Free TrialStart a 30-day AlloyDB free trial with an 8 vCPU basic primary instance and up to 1 TB of storage capacity. Plus, new Google Cloud customers get $300 in free credits on signup.PostgreSQL database with increased transactional performance and scalabilityBuilt-in generative AI with Vertex AI integrationAccelerated analytical queries with columnar engineLooker Free TrialCreate a 30-day Looker trial instance with all standard features at no cost. Connect to data in 50+ different databasesBuild and manage centralized data modelsAccess and analyze your trusted dataTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesManage your accountGo to consoleWork with a trusted partnerFind a partnerGoogle Maps Platform usage at no costSee pricing details \ No newline at end of file diff --git a/Google_Cloud_partners.txt b/Google_Cloud_partners.txt new file mode 100644 index 0000000000000000000000000000000000000000..ba6411f2f85c388ff1e63102f5a6a6c1c519bb64 --- /dev/null +++ b/Google_Cloud_partners.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/partners +Date Scraped: 2025-02-23T12:11:55.831Z + +Content: +Interested in becoming a Google Cloud partner? Learn more.Google Cloud Partner AdvantageOur partners have the certified experience to successfully deliver Google Cloud solutions to our customers.Find a partnerExplore the marketplaceWhy work with a Google Cloud partner?Innovation: Accelerate your digital transformation with best-in-class data analytics, workplace collaboration, and security—strengthened with AI.Trust: Our global partner ecosystem has been specially trained and certified to deliver cloud solutions that address industry-specific needs.Speed and quality: Expedite time to value with partner-led and delivered services and seamless integration of partner technology on Google Cloud.Partners you can trustPartners that have earned Expertise, Specialization, and solution designations have the Google-validated skills to help you achieve your goals.SpecializationSpecialization is the highest technical designation a partner can earn.
Partners who have achieved a Specialization in a solution area have an established Google Cloud services practice, consistent customer success, and proven technical capabilities, vetted by Google and a third-party assessor.Find partnersApplication DevelopmentCloud MigrationContact Center AIData AnalyticsData Center ModernizationData ManagementDevOpsEducationInfrastructureLocation-Based Services (Google Maps Platform and Google Cloud)Machine LearningMarketing AnalyticsSAP on Google CloudSecurityTraining in Data Analytics, Infrastructure, or SecurityWork Transformation - SMBWork Transformation - EnterpriseExpertisePartners with the Expertise designation have demonstrated proficiency and have exhibited customer success through the combination of experience in a specific industry, solution, or product.Find partnersIndustryGoogle Cloud product or technologySolutionSolution designationsGoogle Cloud solution designations are awarded to partner-built solutions that have met Google Cloud Ready or Solution Validation requirements.FIND PARTNERSGoogle Cloud Ready - AlloyDB for PostgreSQLGoogle Cloud Ready - BigQueryGoogle Cloud Ready - Cloud SQLGoogle Cloud Ready - Distributed CloudGoogle Cloud Ready - Regulated and Sovereignty SolutionGoogle Cloud Ready - SustainabilityGoogle Cloud ValidatedLook for the badgeFind Google Cloud partners with the experience you’re looking for and the badges that prove it.Google Cloud Partner Find a partnerPartner with SpecializationFind a partner with a SpecializationGoogle Cloud certifiedLearn about certificationsSee how customers found success with partnersBlog postPrestaShop works with Fivetran and Hightouch to enable a beyond-BI data strategy7-min readCase studyAyoconnect turns to Searce to support a 1000X increase in API hits with Apigee5-min readCase studySADA helps Tassat streamline $24 trillion B2B bank payments market with blockchain-based solution6-min readSee all customersGrowing and retaining talent is woven into everything we do. Google Cloud training and certification has been key to our strategy of developing talent internally. The program has been instrumental in supporting our rapid growth.Simon Lewis, Head of Engineering Delivery, AppsbrokerLearn moreFeatured partnersExpand allFeatured partners with SpecializationsWhen you see a partner with a Specialization, it indicates the strongest signal of proficiency and experience with Google Cloud solutions. Meet featured partners with specialization in artificial intelligence, collaboration, data analytics, infrastructure, and security, respectively.Featured partners with ExpertiseWhen you see a partner with an Expertise designation, it signals that this partner has achieved a high level of proficiency in a Google Cloud product/technology, solution, or industry. Meet featured partners with expertise in artificial intelligence, collaboration, data analytics, infrastructure, and security, respectively.See all partnersExplore partner solutions*Does not currently apply to the Partner Advantage program requirements.Take the next stepTell us what you’re solving for. 
A Google Cloud partner will help you find the best solution.Find a partnerDiscover partner solutionsExplore our MarketplaceInterested in becoming a partner?Join Partner AdvantageAlready a partner?Sign in \ No newline at end of file diff --git a/Google_Cloud_pricing.txt b/Google_Cloud_pricing.txt new file mode 100644 index 0000000000000000000000000000000000000000..6f3d0b4f07ec44df0f61c3106a86118f268811e7 --- /dev/null +++ b/Google_Cloud_pricing.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/pricing +Date Scraped: 2025-02-23T12:10:33.980Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayGoogle Cloud pricingSave money with Google Cloud’s transparent and innovative approach to pricing. Estimate your costs with our pricing calculator or contact us to get a quote for your organization.Request a quote Detailed price list Start running workloads for freeCreate an account to evaluate how Google Cloud products perform in real-world scenarios. New customers get $300 in free credits to run, test, and deploy workloads, including Google-recommended, pre-built solutions. All customers can use 20+ products for free, up to monthly usage limits.$300in free credits 20+free productsOnly pay for what you useWith Google Cloud’s pay-as-you-go pricing structure, you only pay for the services you use. No up-front fees. No termination charges. Pricing varies by product and usage—view detailed price list.Save up to 57% on workloadsGoogle Cloud saves you money over other providers through automatic savings based on monthly usage and by pre-paying for resources at discounted rates. For example, save up to 57% with committed use discounts on Compute Engine resources like machine types or GPUs.Stay in control of your spendingControl your spending with budgets, alerts, quota limits, and other free cost management tools. Optimize costs with actionable, AI-powered intelligent recommendations and custom dashboards that display cost trends and forecasts. We’re here to help with free 24/7 billing support.Estimate your costs Understand how your costs can fluctuate based on location, workloads, and other variables with the pricing calculator. Or get a custom quote by connecting with a sales representative.Want more guidance? We’ve got you covered.Migration assessment: Estimate the cost of migration and get an end-to-end plan.Request free migration assessment Discover five ways to reduce overall cloud spend with a cost optimization strategy.Download the whitepaperPartner with trained consultants for help with implementation, migration & more.Find a partner for any budgetTake the next stepTell us what you’re solving for.
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerBuild with Google CloudGo to my consoleContinue browsingSee all products \ No newline at end of file diff --git a/Google_Distributed_Cloud.txt b/Google_Distributed_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..6fd0b92fcbd16231a29afbab5a95f9915e744164 --- /dev/null +++ b/Google_Distributed_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/distributed-cloud +Date Scraped: 2025-02-23T12:04:58.360Z + +Content: +Download “The 2024 State of Edge Computing” report to leverage insights from 640 business leaders.Google Distributed CloudExtend Google Cloud AI and infrastructure on-premisesGoogle Distributed Cloud’s hardware and software bring Google Cloud to your data center, designed for sovereignty, regulatory, and low-latency requirements.Contact usView documentationProduct highlightsBuild apps with an air-gapped optionEnable modern retail experiences Implement modern manufacturing outcomesTransform telecommunicationsWhat is Google Distributed Cloud?1.5-minute video overviewOverviewEnable inference at the edgeOn-premises computing is not new, but extending AI-optimized cloud infrastructure from the cloud to on-premises deployments is. With Google Distributed Cloud, you can extend the latest in AI models and AI-optimized infrastructure without compromising on data residency, latency, or connectivity.12:56Run modern apps on-premisesBuild modern apps everywhere in a uniform developer environment from cloud to edge. Google Distributed Cloud enables your team to build, deploy, and scale with a Kubernetes-based developer workflow and leverage an active ecosystem of partners.12:55Address data residency and operational sovereignty needsGoogle Distributed Cloud comes with an air-gapped option, coupled with a partner-operated model, to support public sector and regulated industries to meet the strictest sovereignty regulations, including ensuring data residency, control of operational staffing, and limiting impacts of jurisdictional challenges.Strategic partnership to deliver sovereign cloud services in EuropeScale anywhere with cloud-native agilityScale from one to thousands of locations with flexible hardware and software options for your business. Google Distributed Cloud equips your team with fully managed software to quickly adapt and respond to the latest customer requirements and market changes.2:33View moreHow It WorksDrive modern business use cases that leverage Google's AI, security, and an open ecosystem for better experiences, improved growth, and optimized operations. Securely store data where you need it and run modern apps anywhere with Kubernetes-based flexible scaling for thousands of edge locations.Technical overviewGoogle Distributed CloudCommon UsesBuild apps with an air-gapped optionFull isolation with no connectivity to the public internetSafeguard sensitive data and adhere to strict regulations with Google Distributed Cloud's air-gapped option. The solution does not require connectivity to Google Cloud or the public internet to manage infrastructure, services, APIs, or tooling, and is built to remain disconnected in perpetuity.
Our product's open architecture creates familiarity across both public and air-gapped private clouds.Learn more about our air-gapped solutions1:13Watch "Google Distributed Cloud: An air-gapped cloud solution"How-tosFull isolation with no connectivity to the public internetSafeguard sensitive data and adhere to strict regulations with Google Distributed Cloud's air-gapped option. The solution does not require connectivity to Google Cloud or the public internet to manage infrastructure, services, APIs, or tooling, and is built to remain disconnected in perpetuity. Our product's open architecture creates familiarity across both public and air-gapped private clouds.Learn more about our air-gapped solutions1:13Watch "Google Distributed Cloud: An air-gapped cloud solution"Enable modern retail experiencesDeliver modern store use cases to thousands of locationsBuilding, deploying, and scaling software for thousands of clusters and locations can be challenging. With Google Distributed Cloud, you can build, deploy, and scale configurations from a single to thousands of retail locations to support use cases like store analytics, fast checkout, and predictive analytics.Learn more about the key insights driving retail at the edgeDownload the ESG ShowcaseLearn how to modernize retail with AI, cloud, and edge computingDownload the ESG WhitepaperLearn how AI, cloud, and edge enable the store of the futureDownload the ESG Economic ValidationLearn Google Distributed Cloud’s economic benefits Learn more about GDC for retailRead our blog: Introducing Google Distributed Cloud for retailHow-tosDeliver modern store use cases to thousands of locationsBuilding, deploying, and scaling software for thousands of clusters and locations can be challenging. With Google Distributed Cloud, you can build, deploy, and scale configurations from a single to thousands of retail locations to support use cases like store analytics, fast checkout, and predictive analytics.Learn more about the key insights driving retail at the edgeDownload the ESG ShowcaseLearn how to modernize retail with AI, cloud, and edge computingDownload the ESG WhitepaperLearn how AI, cloud, and edge enable the store of the futureDownload the ESG Economic ValidationLearn Google Distributed Cloud’s economic benefits Additional resourcesLearn more about GDC for retailRead our blog: Introducing Google Distributed Cloud for retailEnable the modern factory floorDeliver factory floor use cases directly on-siteModernizing factory operations with legacy solutions can be difficult. With Google Distributed Cloud, you can enable modern industrial outcomes, such as process optimization, security, asset protection, visual inspection, and assisted workforce across the factory floor, while delivering improved customer experiences, such as predictable delivery lead times and real time updates.Learn more about the key insights driving manufacturing at the edgeRead our blog: Introducing Google Distributed Cloud for manufacturingInteractive demos and videosIn this interactive demo created in collaboration with Google Cloud partner ClearObject, you can experience real-time anomaly detection on the factory floor solution, built with GDC connected and ClearVision for manufacturing.0:36VIDEO - ClearVision and Google Cloud provide deep insights into production operationsHow-tosDeliver factory floor use cases directly on-siteModernizing factory operations with legacy solutions can be difficult. 
With Google Distributed Cloud, you can enable modern industrial outcomes, such as process optimization, security, asset protection, visual inspection, and assisted workforce across the factory floor, while delivering improved customer experiences, such as predictable delivery lead times and real time updates.Learn more about the key insights driving manufacturing at the edgeRead our blog: Introducing Google Distributed Cloud for manufacturingAdditional resourcesInteractive demos and videosIn this interactive demo created in collaboration with Google Cloud partner ClearObject, you can experience real-time anomaly detection on the factory floor solution, built with GDC connected and ClearVision for manufacturing.0:36VIDEO - ClearVision and Google Cloud provide deep insights into production operationsTransform telecommunicationsDigitally transform from IT to networks with the latest AIModernizing and monetizing workflows for cloud solution providers (CSPs) can be expensive. In the 5G era, CSPs focus on activating data and analytics with AI and ML. A common operating model is needed to manage networks running across premises-based and cloud environments—from core to edge. Google’s flexibility optimizes deployment and operating costs—both essential to monetizing 5G investments.Learn more about the best practices for monetizing cloud-based technology and 5G networks18:43Watch the Google Cloud Innovators Live session on telecommunicationsHow-tosDigitally transform from IT to networks with the latest AIModernizing and monetizing workflows for cloud solution providers (CSPs) can be expensive. In the 5G era, CSPs focus on activating data and analytics with AI and ML. A common operating model is needed to manage networks running across premises-based and cloud environments—from core to edge. Google’s flexibility optimizes deployment and operating costs—both essential to monetizing 5G investments.Learn more about the best practices for monetizing cloud-based technology and 5G networks18:43Watch the Google Cloud Innovators Live session on telecommunicationsPricingHow our pricing worksGoogle Distributed Cloud pricing is based on Google Cloud services, storage, networking, and hardware configurations.Services and configurationsDescriptionPrice ServerChoose your hardware configuration, starting with a 3-node configurationStarting at$165 per node per month 5-year commitment is $495/month for 3-node configurationSoftware license for managed Kubernetes, starting with a 3-node configurationStarting at$7 per vCPU per month5-year commitment is $672/month for 3-node configurationServer installation$2,300One-time feeRacksChoose your rack configuration, starting with 6 nodesContact sales for pricing3-year commitmentRequest a quote from sales for costs based on your requirements.How our pricing worksGoogle Distributed Cloud pricing is based on Google Cloud services, storage, networking, and hardware configurations. 
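As a rough illustration of how these list prices combine, the following sketch estimates the monthly cost of the minimum 3-node server configuration. The node price, per-vCPU software price, and installation fee come from the table above; the 96-vCPU total is an assumption inferred from the quoted $672/month software commitment (96 x $7 per vCPU), and any real deployment should be priced through a sales quote.

```python
# Illustrative estimate only: list prices from the table above; actual quotes
# come from Google Cloud sales and depend on the exact hardware configuration.

NODE_PRICE_PER_MONTH = 165        # USD per node per month (hardware), starting price
SOFTWARE_PRICE_PER_VCPU = 7       # USD per vCPU per month (managed Kubernetes license)
SERVER_INSTALL_FEE = 2_300        # USD, one-time fee

nodes = 3                         # minimum server configuration
vcpus_total = 96                  # assumption: implied by the quoted $672/month commitment

hardware_monthly = nodes * NODE_PRICE_PER_MONTH            # 3 * 165 = 495
software_monthly = vcpus_total * SOFTWARE_PRICE_PER_VCPU   # 96 * 7 = 672
first_year_total = 12 * (hardware_monthly + software_monthly) + SERVER_INSTALL_FEE

print(f"Hardware: ${hardware_monthly}/month, software: ${software_monthly}/month")
print(f"Estimated first-year total (excl. rack options): ${first_year_total:,}")
```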
Explore additional purchase requirementsPremier Support Plan and Necessary StorageReview Google Distributed Cloud's requirements including size, power supply, support, and moreWhere can I get GDC?Now available in 25 countriesLearn more about GEOs and hardware orderingNeed help getting started?Contact GDC sales to beginContact usResources to learn more about our productView documentationUsing AI and edge infrastructure to analyze dataLearn how Orange analyzes a petabyte of data on-premises across 26 countries with Google Distributed CloudBuild cloud-native apps with AI and KubernetesLearn more about Google Distributed Cloud inventory detection appsGoogle in the public sector and WWTLearn how Google Distributed Cloud enhances cloud sovereignty for the public sectorPartners & IntegrationPartner with GDCAll partnersGoogle Cloud ReadyManaged Google Distributed Cloud providersHardware partnersService partnersVisit our partner directory to learn about these Google Distributed Cloud partners.FAQExpand allWhat is a Google Cloud Ready partner and how does my company become one?The Google Cloud Ready - Distributed Cloud validation recognizes partner solutions that have met a core set of functional and interoperability requirements. Through the process, partners closely collaborate with Google to provide new integrations to best support customer use cases. Learn more in the Google Cloud Ready - Distributed Cloud Partner Validation Guide.What is a Managed Google Distributed Cloud provider and how does my company become one?The Managed GDC Providers (MGP) is a strategic partnership initiative by Google Cloud Partner Advantage program designed to accelerate Google Distributed Cloud adoption by collaborating with specialized partners who are skilled in deploying, operating, and managing services. These MGPs form a comprehensive ecosystem, providing end-to-end Google Distributed Cloud solutions, including top-tier support, robust data security, and more. By offering Google Distributed Cloud as a managed service, MGPs empower businesses to scale efficiently while maintaining high service quality.Where can I find more videos about Google Distributed Cloud?Google Distributed Cloud YouTube playlist has a variety of content about driving data and AI transformation, accelerating cloud-native network adoption, and new monetization models.
Learn more about our managed edge hardware and software product configurations for enterprises and public sector to innovate with AI, keep data secure, and modernize with a Kubernetes-based consistent and open developer experience from edge to cloud.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Google_Distributed_Cloud_Air-gapped.txt b/Google_Distributed_Cloud_Air-gapped.txt new file mode 100644 index 0000000000000000000000000000000000000000..3e4b866ebff370599a8d1e4b015877d00f4a4c92 --- /dev/null +++ b/Google_Distributed_Cloud_Air-gapped.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/distributed-cloud-air-gapped +Date Scraped: 2025-02-23T12:04:46.047Z + +Content: +Gulf Edge and Google Cloud partner to deliver AI-enabled sovereign cloud for Thailand. Learn more.Google Distributed Cloud air-gappedDisconnected sovereign cloud solutionGoogle Distributed Cloud air-gapped delivers a fully managed cloud experience for organizations that require complete isolation to meet stringent sovereignty and regulatory requirements.View documentationContact salesProduct highlightsFully disconnectedInnovative AI and database servicesSmall initial footprintGDC air-gapped in 90 seconds1.5 min videoFeaturesFull isolationThe solution does not require connectivity to Google Cloud or the public internet to manage the infrastructure, services, APIs, or tooling, and is built to remain disconnected in perpetuity. It is designed to support strict requirements in alignment with NIST SP 800-53-FedRAMP high security controls.VIDEOHow governments are solving for strict data residency and security requirements37:31Integrated cloud servicesGoogle Distributed Cloud air-gapped delivers advanced cloud services, including many of our data and machine learning technologies. Customers can use built-in AI solutions, such as Translation API, Speech-to-Text, or optical character recognition (OCR), which are features of our Vertex AI product that follows our AI Principles. The solution is built to be extensible and enables a catalog of independent software vendors' (ISVs) applications through our marketplace.VIDEOBring Google Cloud into your data centers or other locations of your choice1:24Open ecosystemGoogle Distributed Cloud is designed around Google Cloud’s open cloud strategy. It is built on the Kubernetes API, and uses industry-leading open source components in the platform and managed services. Open software accelerates developer adoption by leveraging existing expertise and tools versus requiring customers to learn new, proprietary systems.How customers and partners deliver modern applications with GDC1:25Flexible hardware optionsGoogle Distributed Cloud provides customers with industry-leading flexibility for hardware, including general purpose compute and GPUs. Customers can start small with an appliance or as few as four racks and grow to hundreds as their workloads grow. The air-gapped offering provides a fully redundant, high-availability architecture for mission-critical systems.An air-gapped solution for emergency response at the edge2:18Configurable operationsWhile the technology at the core of every Google Distributed Cloud deployment is the same, the operating model can be configured to each customer’s unique needs. Customers enjoy a consistent developer experience and access to a robust set of managed services, while being able to tailor deployment and operations to address their specific requirements. 
Google Distributed Cloud's air-gapped solution can be operated by Google, a trusted partner, or a combination of the two with the ability to customize elements like operator citizenship and clearances.Explore how we bring zero trust to help protect sensitive data in air-gapped environments16:29View all featuresHow It WorksGoogle Distributed Cloud air-gapped includes the hardware, software, local control plane, and operational tooling necessary to deploy, operate, scale, and secure a complete private managed cloud.Contact usWhat is Google Distributed Cloud air-gapped?Common UsesAI-enabled offline visual and audio translationBuild in secure, offline environments with AI-enabled translationUse the latest optical character recognition and speech-to-text technologies developed by Google Cloud while running in a fully disconnected environment for live translation of text within images, PDFs, or audio files. The translated text is summarized within the UI and available for download, reducing the need for manual transcription and translation while also saving valuable time and resources.Solution guide: Learn how the AI-enabled offline visual and audio translation solution worksAI-enabled offline video analysisAutomate analysis of video content and real-time streamsOur ready-to-deploy, end-to-end video analysis platform features AI-enabled search using natural language; intelligent video ingestion that supports both batch uploads and real-time streams for the majority of video and stream types; and an advanced model repository serving Google’s latest models, open source models, and custom models. It also includes a fully automated workflow management system that orchestrates the full solution lifecycle, and extensive reporting and metrics capabilities to deliver full visibility into the solution components.Solution guide: Learn how the AI-enabled offline video analysis solution worksGenerative AINeed to quickly find relevant content from on-prem data?GDC offers a generative AI search packaged solution enabled by Gemma, our state-of-the-art open models. The solution is designed to help customers easily retrieve and analyze data at the edge or on-premises with GDC.
The generative AI search packaged solution is a ready-to-deploy on-prem conversational search solution that uses the Gemma 7B model.Demo video: Generative AI Search Packaged solutionUsing the natural language search solution can boost employee productivity and knowledge sharing, while ensuring data remains on-premises.AI at the edgeAI at the edge and field operations centersThe air-gapped appliance is a configuration that brings Google’s cloud and AI capabilities to tactical edge environments. The integrated hardware and software solution unlocks real-time local data processing for AI use cases such as object detection, medical imaging analysis, and predictive maintenance for critical infrastructure. The appliance can be conveniently transported in a rugged case or mounted in a rack within customer-specific local operating environments.Blog: Bringing cloud and AI capabilities to the tactical edge: Google Distributed Cloud air-gapped appliance is GAInfographic with use cases: Beyond the grid: air-gapped appliances to keep your operations runningPricing Pricing is based on the services you intend to consume, and the capacity that you will use to run those services.Billing is computed in an air-gapped environment that is not connected to Google Cloud. Consumption information is not visible in the Google Cloud console.
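To make the generative AI search pattern described above more concrete, here is a minimal, generic sketch of conversational search over local documents using the open Gemma 7B instruction-tuned model from Hugging Face. This is not the GDC packaged solution or its API, only an illustration of the retrieve-then-answer pattern; it assumes you have accepted the Gemma license on Hugging Face, installed transformers, accelerate, torch, and scikit-learn, and have a GPU with enough memory for the model.

```python
# Generic illustration of on-prem retrieval + Gemma answering; NOT the GDC
# packaged solution or its API. Assumes a local GPU and Gemma license acceptance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

documents = [
    "GDC air-gapped runs fully disconnected from Google Cloud and the public internet.",
    "The air-gapped appliance can be transported in a rugged case for field operations.",
    "Billing for GDC air-gapped is computed inside the air-gapped environment.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (simple TF-IDF retrieval)."""
    vectorizer = TfidfVectorizer().fit(docs + [query])
    doc_vecs = vectorizer.transform(docs)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

model_id = "google/gemma-7b-it"  # requires accepting the Gemma terms on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16)

question = "Does GDC air-gapped need an internet connection?"
context = "\n".join(retrieve(question, documents))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

In the packaged solution this retrieve-and-generate loop runs entirely inside the air-gapped environment, so neither documents nor queries leave the premises.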
LOOKING FOR SOMETHING DIFFERENT?Discover our full portfolio of Google Distributed Cloud offeringsLearn moreGoogle Sovereign CloudExplore our complete suite of sovereign cloud solutionsLearn moreGet startedReady to begin?Download our beginners guideStart your project todayContact our sales teamDocumentationRead nowDigital Sovereignty Explorer toolDiscover which Google Sovereign Cloud solution is right for youRelease notesRead nowBusiness CaseDeliver innovation while addressing data, operational, and software sovereignty requirementsTop Secret and Secret cloud authorizationGoogle Distributed Cloud is authorized to host Top Secret and Secret missions for the U.S. Intelligence Community, and Top Secret missions for the Department of Defense (DoD). This authorization allows the U.S. Intelligence and DoD agencies to host, control, and manage their infrastructure and services in a highly secure environment, while leveraging the power of advanced cloud capabilities like data analytics, machine learning, and artificial intelligence. Read the announcementRelated contentPress release: CSIT and Google Cloud collaborate to pilot sovereign cloud solution in SingaporeThe Centre for Strategic Infocomm Technologies (CSIT) is piloting the use of Google Distributed Cloud air-gapped to support CSIT's effort to harness AI in tackling Singapore's defense and security challenges.Blog: How European customers benefit today from the power of choice with Google Sovereign CloudLearn how our comprehensive set of sovereign capabilities is allowing organizations to adopt the right controls on a per-workload basis to meet their digital sovereignty requirements.Blog: Our latest announcements and innovations from Google Cloud NEXT '24Learn about our generative AI packaged solution, product enhancements, and extended partnership programs bringing the power of Google's AI services wherever you need them—in your own data center or at the edge. Partnerships with leading independent software solutions, including Canonical, CIQ, Cockroach Labs, Confluent, Elastic, IS Decisions, Maria DB, Starburst, Syntasa, and Trellix to validate compatible solutions for easy in air-gapped environments.Available Managed GDC Providers, including Clarence, Gulf Edge, T-Systems, and WWT, specialize in deploying, operating, and managing Google Distributed Cloud services.Read more about our Managed Google Distributed Cloud Provider initiative in this blog.Partners & IntegrationCheck out our partnersGoogle Cloud Ready PartnersManaged Google Distributed Cloud ProvidersHardware PartnersVisit our partner directory to learn about these Google Distributed Cloud air-gapped partners.FAQExpand allWhat is a Google Cloud Ready Partner and how does my company become one?The Google Cloud Ready - Distributed Cloud recognizes partner solutions that have met a core set of functional and interoperability requirements. Through the process, partners closely collaborate with Google to provide new integrations to best support customer use cases. Learn more in the Google Cloud Ready - Distributed Cloud Partner Validation Guide.What is a Managed Google Distributed Cloud Provider and how does my company become one?The Managed GDC Providers (MGP) is a strategic partnership initiative by Google Cloud Partner Advantage Program designed to accelerate Google Distributed Cloud adoption by collaborating with specialized partners who are skilled in deploying, operating, and managing services. 
These MGPs form a comprehensive ecosystem, providing end-to-end Google Distributed Cloud solutions, including top-tier support and robust data security. By offering Google Distributed Cloud as a managed service, MGPs empower businesses to scale efficiently while maintaining high service quality.This initiative addresses digital (data, operational, software) sovereignty, latency, and connectivity challenges, making it ideal for sectors, such as public sector, finance, and manufacturing.Becoming an MGP is by invitation only. Interested partners should contact your account team or email gdc-partners@google.com to express their interest and undergo an initial technical and business capability qualification. Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Google_Distributed_Cloud_Connected.txt b/Google_Distributed_Cloud_Connected.txt new file mode 100644 index 0000000000000000000000000000000000000000..b4b17d5da82a396199e374501d610b01be4a3f1a --- /dev/null +++ b/Google_Distributed_Cloud_Connected.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/distributed-cloud-connected +Date Scraped: 2025-02-23T12:04:44.433Z + +Content: +Download “The 2024 State of Edge Computing” report to leverage insights from 640 business leaders.Google Distributed Cloud connectedUnleash your data at the edgeGoogle's fully managed hardware and software product that delivers modern applications equipped with AI, security, and open source at the edge.Go to consoleContact salesProduct highlightsInnovate faster with the latest in AIEnsure the highest levels of data security without compromiseRun modern apps from edge to cloud in a uniform environmentLeverage AI anywhere for retailClick here to view retail use cases and trends with AI and Google Distributed Cloud connected.FeaturesHardware delivery options designed for your needsDeploy and scale AI-ready infrastructure on-premises: choose between sourcing Google-provided or customer-sourced hardware.VIDEOBuilding a cloud-native inventory detection app with AI and Kubernetes on Google Distributed Cloud12:55Consistent applications platform: common tools and processesTake advantage of Google Cloud services at your data center and edge locations to quickly modernize your apps. Google Distributed Cloud connected uses a cloud-backed control plane that provides a consistent management experience at scale. Leverage common tools, policies, and processes that you use in the cloud for mission-critical use cases running on the edge, such as computer vision and Google AI edge inferencing.Gain real-time insights from data locally with low latencyUse the power of Google’s AI, data analytics, and databases solutions to uncover insights and remove traditional constraints of scale, performance, and cost when you're processing data, no matter where that data is generated. Bring only insights and value-added data to the regional cloud without uploading low-value data from your edge cloud.VIDEOUsing AI and Edge Infrastructure to Dynamically Analyze Petabyte Scale Data1:06Support regulatory and sovereignty requirementsMaintain autonomy and control over your infrastructure and data while adhering to strict sovereignty, data security, and privacy requirements. 
Remove any PII and other sensitive information at the edge while still leveraging regional clouds for any data in highly regulated verticals.VIDEODeliver modern apps equipped with AI, security, and open source at the edge1:25Optimized managed service for network functions and RANWe offer customized hardware that leverages our optimized data plane for high throughput packet processing to enable telecom use cases, including 5G core network functions and virtual radio access network (vRAN) functions. Communication service providers (CSPs) can run these functions at the edge while maintaining an open architecture that does not lock them into proprietary technology.View all featuresHow It WorksGoogle Distributed Cloud connected is a portfolio of fully managed hardware and software solutions that extend Google Cloud’s infrastructure and services to the edge and data centers. It is ideal for running local data processing, low latency edge workloads, and modernizing telecom networks.View documentationGoogle Distributed Cloud connected technical overviewCommon UsesModern customer experiences with AIDeliver engaging customer experiencesJoin experts and learn how to deploy a modern, fully managed application platform to thousands of locations, run business-critical applications locally, and keep operations running even during times of limited or unstable internet connectivity. Google Cloud offers powerful AI capabilities, including LLMs, NLP, PaLM2, Med-PaLM2 to manage apps at scale, reduce TCO, and improve business outcomes.Learn more about delivering modern customer experiences from the experts at Google Cloud, Deloitte, Orange, and KrogerLearning resourcesDeliver engaging customer experiencesJoin experts and learn how to deploy a modern, fully managed application platform to thousands of locations, run business-critical applications locally, and keep operations running even during times of limited or unstable internet connectivity. Google Cloud offers powerful AI capabilities, including LLMs, NLP, PaLM2, Med-PaLM2 to manage apps at scale, reduce TCO, and improve business outcomes.Learn more about delivering modern customer experiences from the experts at Google Cloud, Deloitte, Orange, and KrogerMeet regulatory and privacy needsStore and leverage your data from edge to cloudMeeting regulatory and privacy regulations can be difficult, ensuring these measures over multiple countries and changing regulations adds to the complexity. Google Cloud provides you with the independence to store and leverage your data where you need it from edge to cloud, while enabling you to meet security and data privacy for both highly regulated and public sector requirements.Learn more about meeting multicountry regulatory requirements at the edge Learning resourcesStore and leverage your data from edge to cloudMeeting regulatory and privacy regulations can be difficult, ensuring these measures over multiple countries and changing regulations adds to the complexity. Google Cloud provides you with the independence to store and leverage your data where you need it from edge to cloud, while enabling you to meet security and data privacy for both highly regulated and public sector requirements.Learn more about meeting multicountry regulatory requirements at the edge Optimized network functionsCreating the foundation for cloud networksGoogle Distributed Cloud connected transforms silos in existing telecom infrastructure models and builds a service-based architecture to run core network functions in the cloud. 
With an open Kubernetes-based offering tuned for telecom network workloads, network functions (CNFs/VNFs) from network equipment partners leverage cloud-native network automation and are optimized for Google Cloud.Learn more in Google Distributed Cloud's telecom podcastSimplifying cloud-native network functions deploymentsTelenet builds powerful foundation for their 5G networkRead our blog about creating the foundation for 5G cloud networksLearning resourcesCreating the foundation for cloud networksGoogle Distributed Cloud connected transforms silos in existing telecom infrastructure models and builds a service-based architecture to run core network functions in the cloud. With an open Kubernetes-based offering tuned for telecom network workloads, network functions (CNFs/VNFs) from network equipment partners leverage cloud-native network automation and are optimized for Google Cloud.Learn more in Google Distributed Cloud's telecom podcastSimplifying cloud-native network functions deploymentsTelenet builds powerful foundation for their 5G networkRead our blog about creating the foundation for 5G cloud networksRadio access networksDeploying vRAN and O-RAN at the edgeCSPs can transform at the far edge of their radio access networks (RAN) for both vRAN and O-RAN deployments. Google Distributed Cloud connected provides CSPs with a common and agile operating model that extends from the core of the network to the edge, for a high degree of programmability, flexibility, and low operating expenses. Learn more about our cloud-native RAN strategyAccelerate Cloud RAN with Google Cloud and NokiaDeutsche Telekom, Google Cloud, and Ericsson Demonstrate Network Transformation MilestoneDelivering cloud-native 5G core and RANLearning resourcesDeploying vRAN and O-RAN at the edgeCSPs can transform at the far edge of their radio access networks (RAN) for both vRAN and O-RAN deployments. Google Distributed Cloud connected provides CSPs with a common and agile operating model that extends from the core of the network to the edge, for a high degree of programmability, flexibility, and low operating expenses. Learn more about our cloud-native RAN strategyAccelerate Cloud RAN with Google Cloud and NokiaDeutsche Telekom, Google Cloud, and Ericsson Demonstrate Network Transformation MilestoneDelivering cloud-native 5G core and RANData processing at the edgeRemoving PII and other sensitive information at the edgeUsing Sensitive Data Protection on the edge provides several benefits to customers operating in heavily regulated environments. Data can be audited, classified, and anonymized on-premises before loading to the cloud. This is achieved with a relatively low level of development effort as customers can leverage 150+ built-in classifiers with the option to extend the model with custom classifiers.Learn about modern manufacturing insights from 100+ tech and business leaders across the globeCreating Sensitive Data Protection de-identification templatesCodelab: Create a de-identified copy of Data in Cloud StorageLearning resourcesRemoving PII and other sensitive information at the edgeUsing Sensitive Data Protection on the edge provides several benefits to customers operating in heavily regulated environments. Data can be audited, classified, and anonymized on-premises before loading to the cloud. 
This is achieved with a relatively low level of development effort as customers can leverage 150+ built-in classifiers with the option to extend the model with custom classifiers.Learn about modern manufacturing insights from 100+ tech and business leaders across the globeCreating Sensitive Data Protection de-identification templatesCodelab: Create a de-identified copy of Data in Cloud StorageCamera-based industrial hazard detectionLeveraging existing CCTV to track hazards in warehousesCustomers can use existing CCTV infrastructure with Vision AI/ML models to track moving machinery at industrial sites. Deploying GPU-powered Google Distributed Cloud connected racks on-site can eliminate concerns about connectivity issues, bandwidth cost, and availability. Models can be deployed on the edge for inferencing and trained in the cloud to leverage its power and flexible scalability.Learn how to manage GPU workloads on Google Distributed Cloud connectedLearning resourcesLeveraging existing CCTV to track hazards in warehousesCustomers can use existing CCTV infrastructure with Vision AI/ML models to track moving machinery at industrial sites. Deploying GPU-powered Google Distributed Cloud connected racks on-site can eliminate concerns about connectivity issues, bandwidth cost, and availability. Models can be deployed on the edge for inferencing and trained in the cloud to leverage its power and flexible scalability.Learn how to manage GPU workloads on Google Distributed Cloud connectedModern point-of-sale solutionsImprove your solutions with a modern infrastructure approachConsolidate your infrastructure to streamline management of VM applications, such as point-of-sale and back-office services. Windows and Linux VMs can run on top of Kubernetes in the same way as containers, and work with the same management, logging, and monitoring tools in Google Distributed Cloud connected. Service availability is improved as high availability enables VMs to failover gracefully.Learn more about leveraging Google Distributed Cloud connected to run VMs at the edgeLearn more about managing VMs on Google Distributed Cloud connectedTutorial: How to migrate VM workloads to containersOverview of Google Distributed Cloud connected VM RuntimeLearning resourcesImprove your solutions with a modern infrastructure approachConsolidate your infrastructure to streamline management of VM applications, such as point-of-sale and back-office services. Windows and Linux VMs can run on top of Kubernetes in the same way as containers, and work with the same management, logging, and monitoring tools in Google Distributed Cloud connected. Service availability is improved as high availability enables VMs to failover gracefully.Learn more about leveraging Google Distributed Cloud connected to run VMs at the edgeLearn more about managing VMs on Google Distributed Cloud connectedTutorial: How to migrate VM workloads to containersOverview of Google Distributed Cloud connected VM RuntimeSelf-checkout solutionsImproving shopper experience through frictionless checkoutUsing a variety of sensors and cameras, retailers streamline the customer experience by eliminating the need for barcode scanning and enabling contactless checkout. Individual products can be identified with machine learning leveraging Google Distributed Cloud connected. 
Shoppers enjoy more convenient shopping experiences without queues and employees can provide more value away from a cash register.Check out new AI tools for retailLearn more about Google Cloud solutions for retailSee our extensive list of partners that could help you implement retail solutionsIntroducing Google Distributed Cloud for retail and manufacturingPricingGoogle Distributed Cloud connected pricingGoogle Distributed Cloud pricing is a single monthly fee for hardware lease and software licenses based on hardware configuration and time commitment.Service usage and typeDescriptionPrice (USD)Google Distributed Cloud connected (rack)Configuration 1Three-year commitmentStarting at $10,864 per rack per monthConfiguration 2Three-year commitmentStarting at $12,513 per rack per monthView Google Distributed Cloud connected pricing details.Looking for something different?Learn more about the full portfolio of Google Distributed Cloud products.ExploreDiscover the possibilities with 5G edgeHear from 466 business and tech leaders from around the globe.Download reportGet started with Google Distributed Cloud connectedTalk to a Google Cloud sales specialistContact salesHow Google Distributed Cloud worksView documentationExplore enterprise 5G edge possibilitiesDiscover moreExplore industry use cases for Google Distributed Cloud connectedRead the blogNavigating Multi-Access Edge ComputingDownload the studyPartners & IntegrationWork with a partner with Google Distributed Cloud connected expertisePartnersVisit our partner directory to learn about these Google Distributed Cloud connected partners. \ No newline at end of file diff --git a/Google_Kubernetes_Engine(1).txt b/Google_Kubernetes_Engine(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..208797c08cbdf63f21a13d3b36647557184c1ed1 --- /dev/null +++ b/Google_Kubernetes_Engine(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/kubernetes-engine +Date Scraped: 2025-02-23T12:02:57.182Z + +Content: +Learn how gen AI can help streamline software delivery, application migration, and modernization.
Register now.Google Kubernetes Engine (GKE) The most scalable and fully automated Kubernetes servicePut your containers on autopilot and securely run your enterprise workloads at scale—with little to no Kubernetes expertise required.Get one Zonal or Autopilot cluster free per month.Try it in consoleContact salesProduct highlightsIncreased velocity, reduced risk, and lower TCOBuilt-in security posture and observability tooling With support for up to 65,000 nodes, we believe GKE offers more than 10X larger scale than the other two largest public cloud providersWhy choose GKE?3:48 min videoFeaturesSupport for 65,000-node clusters for next gen AIIn anticipation of even larger models, we are introducing support for 65,000-node clusters. To develop cutting-edge AI models, teams need to be able to allocate computing resources across diverse workloads. This includes not only model training but also serving, inference, conducting ad hoc research, and managing auxiliary tasks. Centralizing computing power within the smallest number of clusters provides the flexibility to quickly adapt to changes in demand from inference serving, research, and training workloads.Increased velocity, reduced risk, and lower TCOWith the new premium GKE Enterprise edition, platform teams benefit from increased velocity by configuring and observing multiple clusters from one place, defining configuration for teams rather than clusters, and providing self-service options for developers for deployment and management of apps. You can reduce risk using advanced security and GitOps-based configuration management. Lower total cost of ownership (TCO) with a fully integrated and managed solution—adding up to a 196% ROI in three years.VIDEOIntroducing GKE Enterprise edition6:15Flexible editionsGKE Standard edition provides fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization. It includes all the existing benefits of GKE and offers both the Autopilot and Standard operation modes. The new premium GKE Enterprise edition offers all of the above, plus management, governance, security, and configuration for multiple teams and clusters—all with a unified console experience and integrated service mesh.Standard edition for single cluster and Enterprise edition for multiple clustersServerless Kubernetes experience using AutopilotGKE Autopilot is a hands-off operations mode that manages your cluster’s underlying compute (without you needing to configure or monitor)—while still delivering a complete Kubernetes experience. And with per-pod billing, Autopilot ensures you pay only for your running pods, not system components, operating system overhead, or unallocated capacity for up to 85% savings from resource and operational efficiency. Both Autopilot and Standard operations mode are available as part of the GKE Enterprise edition.VIDEOWatch to learn about the Autopilot mode of operation in GKE 6:36Automated security and compliance monitoringGKE threat detection is powered by Security Command Center (SCC), and surfaces threats affecting your GKE clusters in near real-time by continuously monitoring GKE audit logs.GKE compliance provides streamlined real-time insights, automated reports, and the freedom to innovate securely on Google Cloud.Explore GKE’s robust security configurationsPod and cluster autoscalingGKE implements the full Kubernetes API, four-way autoscaling, release channels, and multi-cluster support. 
Horizontal pod autoscaling can be based on CPU utilization or custom metrics. Cluster autoscaling works on a per-node-pool basis, and vertical pod autoscaling continuously analyzes the CPU and memory usage of pods, automatically adjusting CPU and memory requests.Explore GKE’s robust security configurationsContainer-native networking and securityPrivately networked clusters in GKE can be restricted to a private endpoint or a public endpoint that only certain address ranges can access. GKE Sandbox for the Standard mode of operation provides a second layer of defense between containerized workloads on GKE for enhanced workload security. GKE clusters inherently support Kubernetes Network Policy to restrict traffic with pod-level firewall rules.Prebuilt Kubernetes applications and templatesGet access to enterprise-ready containerized solutions with prebuilt deployment templates, featuring portability, simplified licensing, and consolidated billing. These are not just container images, but open source, Google-built, and commercial applications that increase developer productivity. Click to deploy on-premises or in third-party clouds from Google Cloud Marketplace.Browse 100+ click-to-deploy and third-party services for GKEGPU and TPU supportGKE supports GPUs and TPUs and makes it easy to run ML, GPGPU, HPC, and other workloads that benefit from specialized hardware accelerators.Multi-team management using fleet team scopesUse fleets to organize clusters and workloads, and assign resources to multiple teams easily to improve velocity and delegate ownership. Team scopes let you define subsets of fleet resources on a per-team basis, with each scope associated with one or more fleet member clusters.Multi-cluster management using fleetsYou might choose multiple clusters to separate services across environments, tiers, locales, teams, or infrastructure providers. Fleets and the Google Cloud components and features that support them strive to make managing multiple clusters as easy as possible.Backup for GKEBackup for GKE is an easy way for customers running stateful workloads on GKE to protect, manage, and restore their containerized applications and data.Multi-cloud support with workload portabilityGKE runs Certified Kubernetes, enabling workload portability to other Kubernetes platforms across clouds and on-premises. You can also run your apps anywhere with consistency using GKE on Google Cloud, GKE on AWS, or GKE on Azure.Hybrid supportTake advantage of Kubernetes and cloud technology in your own data center through Google Distributed Cloud. Get the GKE experience with quick, managed, and simple installs as well as upgrades validated by Google.Managed service meshManage, observe, and secure your services with Google’s implementation of the powerful Istio open source project. 
Simplify traffic management and monitoring with a fully managed service mesh.Managed GitOpsCreate and enforce consistent configurations and security policies across clusters, fleets, and teams with managed GitOps config deployment.Identity and access managementControl access in the cluster with your Google accounts and role permissions.Hybrid networkingReserve an IP address range for your cluster, allowing your cluster IPs to coexist with private network IPs using Google Cloud VPN.Security and complianceGKE is backed by a Google security team of over 750 experts and is both HIPAA and PCI DSS compliant.Integrated logging and monitoringEnable Cloud Logging and Cloud Monitoring with simple checkbox configurations, making it easy to gain insight into how your application is running.Cluster optionsChoose clusters tailored to the availability, version stability, isolation, and pod traffic requirements of your workloads.Auto scaleAutomatically scale your application deployment up and down based on resource utilization (CPU, memory).Auto upgradeAutomatically keep your cluster up to date with the latest release version of Kubernetes.Auto repairWhen auto repair is enabled, if a node fails a health check, GKE initiates a repair process for that node.Resource limitsKubernetes allows you to specify how much CPU and memory (RAM) each container needs, which is used to better organize workloads within your cluster.Container isolationUse GKE Sandbox for a second layer of defense between containerized workloads on GKE for enhanced workload security.Stateful application supportGKE isn't just for 12-factor apps. You can attach persistent storage to containers, and even host complete databases.Docker image supportGKE supports the common Docker container format.OS built for containersGKE runs on Container-Optimized OS, a hardened OS built and managed by Google.Private container registryIntegrating with Google Container Registry makes it easy to store and access your private Docker images.Fast, consistent buildsUse Cloud Build to reliably deploy your containers on GKE without needing to set up authentication.Built-in dashboard Google Cloud console offers useful dashboards for your project's clusters and their resources. You can use these dashboards to view, inspect, manage, and delete resources in your clusters.Spot VMsAffordable compute instances suitable for batch jobs and fault-tolerant workloads. Spot VMs provide significant savings of up to 91% while still getting the same performance and capabilities as regular VMs.Persistent disks support Durable, high-performance block storage for container instances. Data is stored redundantly for integrity, flexibility to resize storage without interruption, and automatic encryption. You can create persistent disks in HDD or SSD formats. You can also take snapshots of your persistent disk and create new persistent disks from that snapshot.Local SSD supportGKE offers always encrypted, local, solid-state drive (SSD) block storage. 
Local SSDs are physically attached to the server that hosts the virtual machine instance for very high input/output operations per second (IOPS) and very low latency compared to persistent disks.Global load balancingGlobal load-balancing technology helps you distribute incoming requests across pools of instances across multiple regions, so you can achieve maximum performance, throughput, and availability at low cost.Linux and Windows supportFully supported for both Linux and Windows workloads, GKE can run both Windows Server and Linux nodes.Serverless containersRun stateless serverless containers abstracting away all infrastructure management and automatically scale them with Cloud Run.Usage meteringFine-grained visibility to your Kubernetes clusters. See your GKE clusters' resource usage broken down by namespaces and labels, and attribute it to meaningful entities.Release channels Release channels provide more control over which automatic updates a given cluster receives, based on the stability requirements of the cluster and its workloads. You can choose rapid, regular, or stable. Each has a different release cadence and targets different types of workloads.Software supply chain securityVerify, enforce, and improve security of infrastructure components and packages used for container images with Artifact Analysis.Per-second billingGoogle bills in second-level increments. You pay only for the compute time that you use.View all featuresHow It WorksA GKE cluster has a control plane and machines called nodes. Nodes run the services supporting the containers that make up your workload. The control plane decides what runs on those nodes, including scheduling and scaling. Autopilot mode manages this complexity; you simply deploy and run your apps.View documentationGoogle Kubernetes Engine in a minute (1:21)Common UsesManage multi-cluster infrastructureSimplify multi-cluster deployments with fleetsUse fleets to simplify how you manage multi-cluster deployments—such as separating production from non-production environments, or separating services across tiers, locations, or teams. Fleets let you group and normalize Kubernetes clusters, making it easier to administer infrastructure and adopt Google best practices.Learn about fleet managementFind the right partner to manage multi-cluster infraSecurely manage multi-cluster infrastructure and workloads with the help of Enterprise edition launch partners.Find a GKE partnerTutorials, quickstarts, & labsSimplify multi-cluster deployments with fleetsUse fleets to simplify how you manage multi-cluster deployments—such as separating production from non-production environments, or separating services across tiers, locations, or teams. 
Fleets let you group and normalize Kubernetes clusters, making it easier to administer infrastructure and adopt Google best practices.Learn about fleet managementPartners & integrationsFind the right partner to manage multi-cluster infraSecurely manage multi-cluster infrastructure and workloads with the help of Enterprise edition launch partners.Find a GKE partnerSecurely run optimized AI workloadsRun optimized AI workloads with platform orchestrationA robust AI/ML platform considers the following layers: (i) Infrastructure orchestration that support GPUs for training and serving workloads at scale, (ii) Flexible integration with distributed computing and data processing frameworks, and (iii) Support for multiple teams on the same infrastructure to maximize utilization of resources.Learn more about AI/ML orchestration on GKEWhat is Kubeflow and how is it used?Building a Machine Learning Platform with Kubeflow and Ray on GKETensorFlow on GKE Autopilot with GPU accelerationGKE shared-GPU helps to search for neutrinosHear from the San Diego Supercomputer Center (SDSC) and University of Wisconsin-Madison about how GPU sharing in Google Kubernetes Engines is helping them detect neutrinos at the South Pole with the gigaton-scale IceCube Neutrino Observatory.Read to learn moreLearn how to request hardware accelerators (GPUs) in your GKE Autopilot workloadsTutorials, quickstarts, & labsRun optimized AI workloads with platform orchestrationA robust AI/ML platform considers the following layers: (i) Infrastructure orchestration that support GPUs for training and serving workloads at scale, (ii) Flexible integration with distributed computing and data processing frameworks, and (iii) Support for multiple teams on the same infrastructure to maximize utilization of resources.Learn more about AI/ML orchestration on GKEWhat is Kubeflow and how is it used?Building a Machine Learning Platform with Kubeflow and Ray on GKETensorFlow on GKE Autopilot with GPU accelerationLearning resourcesGKE shared-GPU helps to search for neutrinosHear from the San Diego Supercomputer Center (SDSC) and University of Wisconsin-Madison about how GPU sharing in Google Kubernetes Engines is helping them detect neutrinos at the South Pole with the gigaton-scale IceCube Neutrino Observatory.Read to learn moreLearn how to request hardware accelerators (GPUs) in your GKE Autopilot workloadsContinuous integration and deliveryCreate a continuous delivery pipeline This hands-on lab shows you how to create a continuous delivery pipeline using Google Kubernetes Engine, Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker. After you create a sample application, you configure these services to automatically build, test, and deploy it. Start hands-on labBest practices for continuous integration and delivery to Google Kubernetes EngineWatch to learn continuous delivery best practices for Jenkins and GKETutorials, quickstarts, & labsCreate a continuous delivery pipeline This hands-on lab shows you how to create a continuous delivery pipeline using Google Kubernetes Engine, Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker. After you create a sample application, you configure these services to automatically build, test, and deploy it. 
Start hands-on labBest practices for continuous integration and delivery to Google Kubernetes EngineWatch to learn continuous delivery best practices for Jenkins and GKEDeploying and running applicationsDeploy a containerized web applicationCreate a containerized web app, test it locally, and then deploy to a Google Kubernetes Engine (GKE) cluster—all directly in the Cloud Shell Editor. By the end of this short tutorial, you'll understand how to build, edit, and debug a Kubernetes app.Start tutorialQuickstart: Deploying a language-specific appDeploy WordPress on GKE with Persistent Disk and Cloud SQLCourse: get started with GKEFind the right partner to deploy and runDeploy and run on GKE with the help of our trusted partners, including WALT Labs, Zencore, FTG, and more.Find a GKE partnerCurrent deploys and runs on GKECurrent, a leading challenger bank based in New York City, now hosts most of its apps in Docker containers, including its business-critical GraphQL API, using GKE to automate cluster deployment and management of containerized apps.Read how Current deployed apps with GKEBest practices for running cost-optimized Kubernetes applications on GKEHands-on Lab: Optimizing Your Costs on Google Kubernetes EngineKubernetes Engine API for building and managing applicationsTutorials, quickstarts, & labsDeploy a containerized web applicationCreate a containerized web app, test it locally, and then deploy to a Google Kubernetes Engine (GKE) cluster—all directly in the Cloud Shell Editor. By the end of this short tutorial, you'll understand how to build, edit, and debug a Kubernetes app.Start tutorialQuickstart: Deploying a language-specific appDeploy WordPress on GKE with Persistent Disk and Cloud SQLCourse: get started with GKEPartners & integrationsFind the right partner to deploy and runDeploy and run on GKE with the help of our trusted partners, including WALT Labs, Zencore, FTG, and more.Find a GKE partnerLearning resourcesCurrent deploys and runs on GKECurrent, a leading challenger bank based in New York City, now hosts most of its apps in Docker containers, including its business-critical GraphQL API, using GKE to automate cluster deployment and management of containerized apps.Read how Current deployed apps with GKEBest practices for running cost-optimized Kubernetes applications on GKEHands-on Lab: Optimizing Your Costs on Google Kubernetes EngineKubernetes Engine API for building and managing applicationsMigrate workloadsMigrating a two-tier application to GKEUse Migrate to Containers to move and convert workloads directly into containers in GKE. Migrate a two-tiered LAMP stack application, with both app and database VMs, from VMware to GKE.Modernization path for .NET applications on Google CloudMigrate traditional workloads to GKE containers with easeMigration partners and servicesWork with a trusted partner to get Google Kubernetes Engine on-prem and bring Kubernetes' world-class management to private infrastructure. Or tap into migration services from the Google Cloud Marketplace.Find a migration partnerHybrid and multicloud partnerCloud-native application migrationTutorials, quickstarts, & labsMigrating a two-tier application to GKEUse Migrate to Containers to move and convert workloads directly into containers in GKE. 
Migrate a two-tiered LAMP stack application, with both app and database VMs, from VMware to GKE.Modernization path for .NET applications on Google CloudMigrate traditional workloads to GKE containers with easePartners & integrationsMigration partners and servicesWork with a trusted partner to get Google Kubernetes Engine on-prem and bring Kubernetes' world-class management to private infrastructure. Or tap into migration services from the Google Cloud Marketplace.Find a migration partnerHybrid and multicloud partnerCloud-native application migration
Pricing
How GKE pricing works
After free credits are used, total cost is based on edition, cluster operation mode, cluster management fees, and applicable inbound data transfer fees.
Service | Description | Price (USD)
Free tier | The GKE free tier provides $74.40 in monthly credits per billing account that are applied to zonal and Autopilot clusters. | Free
Kubernetes, Enterprise edition | Includes standard edition features and multi-team, multi-cluster, self-service operations, advanced security, service mesh, configuration, and a unified console experience. | $0.0083 per vCPU per hour
Kubernetes, Standard edition | Includes fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization. | $0.10 per cluster per hour
Compute | Autopilot mode: CPU, memory, and compute resources that are provisioned for your Pods. Standard mode: You are billed for each instance according to Compute Engine's pricing. | Refer to Compute Engine pricing
Learn more about GKE pricing.
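As a rough illustration of how the fees in the table above combine, the following minimal Python sketch applies the Standard edition cluster management fee and the monthly free tier credit. It assumes a 730-hour month, ignores Enterprise edition vCPU fees and all compute charges, and is not an official calculator; use the pricing calculator below for real estimates.

# Rough sketch only: combine the $0.10/cluster/hour management fee from the table
# above with the $74.40 monthly free tier credit, which applies to zonal and
# Autopilot clusters. Compute charges are billed separately and are not modeled.
# Actual GKE billing is per second.

HOURS_PER_MONTH = 730          # assumption used for this estimate
CLUSTER_FEE_PER_HOUR = 0.10    # Standard edition cluster management fee
FREE_TIER_CREDIT = 74.40       # monthly credit for zonal and Autopilot clusters

def monthly_cluster_fee(zonal_or_autopilot_hours, regional_hours):
    """Estimate the monthly cluster management fee after the free tier credit."""
    creditable = zonal_or_autopilot_hours * CLUSTER_FEE_PER_HOUR
    non_creditable = regional_hours * CLUSTER_FEE_PER_HOUR
    credit = min(FREE_TIER_CREDIT, creditable)
    return creditable + non_creditable - credit

# One zonal cluster running all month: the credit covers the whole fee.
print(monthly_cluster_fee(zonal_or_autopilot_hours=730, regional_hours=0))    # 0.0
# Add a regional cluster: only the regional cluster's fee remains payable.
print(monthly_cluster_fee(zonal_or_autopilot_hours=730, regional_hours=730))  # 73.0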
View all pricing detailsPricing calculatorEstimate your monthly GKE costs, including region specific pricing and fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry GKE in the console, with one Zonal or Autopilot cluster free per month.Go to my consoleHave a large project?Contact salesDeploy an app to a GKE clusterView quickstartClick to deploy Kubernetes applicationsGo to MarketplaceGet expert help evaluating and implementing GKEFind a partner \ No newline at end of file diff --git a/Google_Kubernetes_Engine(2).txt b/Google_Kubernetes_Engine(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..2cc458903d104b6961d793046096915af6c923de --- /dev/null +++ b/Google_Kubernetes_Engine(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/kubernetes-engine +Date Scraped: 2025-02-23T12:04:47.895Z + +Content: +Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Google Kubernetes Engine (GKE) The most scalable and fully automated Kubernetes servicePut your containers on autopilot and securely run your enterprise workloads at scale—with little to no Kubernetes expertise required.Get one Zonal or Autopilot cluster free per month.Try it in consoleContact salesProduct highlightsIncreased velocity, reduced risk, and lower TCOBuilt-in security posture and observability tooling With support for up to 65,000 nodes, we believe GKE offers more than 10X larger scale than the other two largest public cloud providersWhy choose GKE?3:48 min videoFeaturesSupport for 65,000-node clusters for next gen AIIn anticipation of even larger models, we are introducing support for 65,000-node clusters. To develop cutting-edge AI models, teams need to be able to allocate computing resources across diverse workloads. This includes not only model training but also serving, inference, conducting ad hoc research, and managing auxiliary tasks. Centralizing computing power within the smallest number of clusters provides the flexibility to quickly adapt to changes in demand from inference serving, research, and training workloads.Increased velocity, reduced risk, and lower TCOWith the new premium GKE Enterprise edition, platform teams benefit from increased velocity by configuring and observing multiple clusters from one place, defining configuration for teams rather than clusters, and providing self-service options for developers for deployment and management of apps. You can reduce risk using advanced security and GitOps-based configuration management. Lower total cost of ownership (TCO) with a fully integrated and managed solution—adding up to a 196% ROI in three years.VIDEOIntroducing GKE Enterprise edition6:15Flexible editionsGKE Standard edition provides fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization. It includes all the existing benefits of GKE and offers both the Autopilot and Standard operation modes.
The new premium GKE Enterprise edition offers all of the above, plus management, governance, security, and configuration for multiple teams and clusters—all with a unified console experience and integrated service mesh.Standard edition for single cluster and Enterprise edition for multiple clustersServerless Kubernetes experience using AutopilotGKE Autopilot is a hands-off operations mode that manages your cluster’s underlying compute (without you needing to configure or monitor)—while still delivering a complete Kubernetes experience. And with per-pod billing, Autopilot ensures you pay only for your running pods, not system components, operating system overhead, or unallocated capacity for up to 85% savings from resource and operational efficiency. Both Autopilot and Standard operations mode are available as part of the GKE Enterprise edition.VIDEOWatch to learn about the Autopilot mode of operation in GKE 6:36Automated security and compliance monitoringGKE threat detection is powered by Security Command Center (SCC), and surfaces threats affecting your GKE clusters in near real-time by continuously monitoring GKE audit logs.GKE compliance provides streamlined real-time insights, automated reports, and the freedom to innovate securely on Google Cloud.Explore GKE’s robust security configurationsPod and cluster autoscalingGKE implements the full Kubernetes API, four-way autoscaling, release channels, and multi-cluster support. Horizontal pod autoscaling can be based on CPU utilization or custom metrics. Cluster autoscaling works on a per-node-pool basis, and vertical pod autoscaling continuously analyzes the CPU and memory usage of pods, automatically adjusting CPU and memory requests.Explore GKE’s robust security configurationsContainer-native networking and securityPrivately networked clusters in GKE can be restricted to a private endpoint or a public endpoint that only certain address ranges can access. GKE Sandbox for the Standard mode of operation provides a second layer of defense between containerized workloads on GKE for enhanced workload security. GKE clusters inherently support Kubernetes Network Policy to restrict traffic with pod-level firewall rules.Prebuilt Kubernetes applications and templatesGet access to enterprise-ready containerized solutions with prebuilt deployment templates, featuring portability, simplified licensing, and consolidated billing. These are not just container images, but open source, Google-built, and commercial applications that increase developer productivity. Click to deploy on-premises or in third-party clouds from Google Cloud Marketplace.Browse 100+ click-to-deploy and third-party services for GKEGPU and TPU supportGKE supports GPUs and TPUs and makes it easy to run ML, GPGPU, HPC, and other workloads that benefit from specialized hardware accelerators.Multi-team management using fleet team scopesUse fleets to organize clusters and workloads, and assign resources to multiple teams easily to improve velocity and delegate ownership. Team scopes let you define subsets of fleet resources on a per-team basis, with each scope associated with one or more fleet member clusters.Multi-cluster management using fleetsYou might choose multiple clusters to separate services across environments, tiers, locales, teams, or infrastructure providers. 
Fleets and the Google Cloud components and features that support them strive to make managing multiple clusters as easy as possible.Backup for GKEBackup for GKE is an easy way for customers running stateful workloads on GKE to protect, manage, and restore their containerized applications and data.Multi-cloud support with workload portabilityGKE runs Certified Kubernetes, enabling workload portability to other Kubernetes platforms across clouds and on-premises. You can also run your apps anywhere with consistency using GKE on Google Cloud, GKE on AWS, or GKE on Azure.Hybrid supportTake advantage of Kubernetes and cloud technology in your own data center through Google Distributed Cloud. Get the GKE experience with quick, managed, and simple installs as well as upgrades validated by Google.Managed service meshManage, observe, and secure your services with Google’s implementation of the powerful Istio open source project. Simplify traffic management and monitoring with a fully managed service mesh.Managed GitOpsCreate and enforce consistent configurations and security policies across clusters, fleets, and teams with managed GitOps config deployment.Identity and access managementControl access in the cluster with your Google accounts and role permissions.Hybrid networkingReserve an IP address range for your cluster, allowing your cluster IPs to coexist with private network IPs using Google Cloud VPN.Security and complianceGKE is backed by a Google security team of over 750 experts and is both HIPAA and PCI DSS compliant.Integrated logging and monitoringEnable Cloud Logging and Cloud Monitoring with simple checkbox configurations, making it easy to gain insight into how your application is running.Cluster optionsChoose clusters tailored to the availability, version stability, isolation, and pod traffic requirements of your workloads.Auto scaleAutomatically scale your application deployment up and down based on resource utilization (CPU, memory).Auto upgradeAutomatically keep your cluster up to date with the latest release version of Kubernetes.Auto repairWhen auto repair is enabled, if a node fails a health check, GKE initiates a repair process for that node.Resource limitsKubernetes allows you to specify how much CPU and memory (RAM) each container needs, which is used to better organize workloads within your cluster.Container isolationUse GKE Sandbox for a second layer of defense between containerized workloads on GKE for enhanced workload security.Stateful application supportGKE isn't just for 12-factor apps. You can attach persistent storage to containers, and even host complete databases.Docker image supportGKE supports the common Docker container format.OS built for containersGKE runs on Container-Optimized OS, a hardened OS built and managed by Google.Private container registryIntegrating with Google Container Registry makes it easy to store and access your private Docker images.Fast, consistent buildsUse Cloud Build to reliably deploy your containers on GKE without needing to set up authentication.Built-in dashboard Google Cloud console offers useful dashboards for your project's clusters and their resources. You can use these dashboards to view, inspect, manage, and delete resources in your clusters.Spot VMsAffordable compute instances suitable for batch jobs and fault-tolerant workloads. 
Spot VMs provide significant savings of up to 91% while still getting the same performance and capabilities as regular VMs.Persistent disks support Durable, high-performance block storage for container instances. Data is stored redundantly for integrity, flexibility to resize storage without interruption, and automatic encryption. You can create persistent disks in HDD or SSD formats. You can also take snapshots of your persistent disk and create new persistent disks from that snapshot.Local SSD supportGKE offers always encrypted, local, solid-state drive (SSD) block storage. Local SSDs are physically attached to the server that hosts the virtual machine instance for very high input/output operations per second (IOPS) and very low latency compared to persistent disks.Global load balancingGlobal load-balancing technology helps you distribute incoming requests across pools of instances across multiple regions, so you can achieve maximum performance, throughput, and availability at low cost.Linux and Windows supportFully supported for both Linux and Windows workloads, GKE can run both Windows Server and Linux nodes.Serverless containersRun stateless serverless containers abstracting away all infrastructure management and automatically scale them with Cloud Run.Usage meteringFine-grained visibility to your Kubernetes clusters. See your GKE clusters' resource usage broken down by namespaces and labels, and attribute it to meaningful entities.Release channels Release channels provide more control over which automatic updates a given cluster receives, based on the stability requirements of the cluster and its workloads. You can choose rapid, regular, or stable. Each has a different release cadence and targets different types of workloads.Software supply chain securityVerify, enforce, and improve security of infrastructure components and packages used for container images with Artifact Analysis.Per-second billingGoogle bills in second-level increments. You pay only for the compute time that you use.View all featuresHow It WorksA GKE cluster has a control plane and machines called nodes. Nodes run the services supporting the containers that make up your workload. The control plane decides what runs on those nodes, including scheduling and scaling. Autopilot mode manages this complexity; you simply deploy and run your apps.View documentationGoogle Kubernetes Engine in a minute (1:21)Common UsesManage multi-cluster infrastructureSimplify multi-cluster deployments with fleetsUse fleets to simplify how you manage multi-cluster deployments—such as separating production from non-production environments, or separating services across tiers, locations, or teams. Fleets let you group and normalize Kubernetes clusters, making it easier to administer infrastructure and adopt Google best practices.Learn about fleet managementFind the right partner to manage multi-cluster infraSecurely manage multi-cluster infrastructure and workloads with the help of Enterprise edition launch partners.Find a GKE partnerTutorials, quickstarts, & labsSimplify multi-cluster deployments with fleetsUse fleets to simplify how you manage multi-cluster deployments—such as separating production from non-production environments, or separating services across tiers, locations, or teams. 
Fleets let you group and normalize Kubernetes clusters, making it easier to administer infrastructure and adopt Google best practices.Learn about fleet managementPartners & integrationsFind the right partner to manage multi-cluster infraSecurely manage multi-cluster infrastructure and workloads with the help of Enterprise edition launch partners.Find a GKE partnerSecurely run optimized AI workloadsRun optimized AI workloads with platform orchestrationA robust AI/ML platform considers the following layers: (i) Infrastructure orchestration that supports GPUs for training and serving workloads at scale, (ii) Flexible integration with distributed computing and data processing frameworks, and (iii) Support for multiple teams on the same infrastructure to maximize utilization of resources.Learn more about AI/ML orchestration on GKEWhat is Kubeflow and how is it used?Building a Machine Learning Platform with Kubeflow and Ray on GKETensorFlow on GKE Autopilot with GPU accelerationGKE shared-GPU helps to search for neutrinosHear from the San Diego Supercomputer Center (SDSC) and University of Wisconsin-Madison about how GPU sharing in Google Kubernetes Engine is helping them detect neutrinos at the South Pole with the gigaton-scale IceCube Neutrino Observatory.Read to learn moreLearn how to request hardware accelerators (GPUs) in your GKE Autopilot workloadsTutorials, quickstarts, & labsRun optimized AI workloads with platform orchestrationA robust AI/ML platform considers the following layers: (i) Infrastructure orchestration that supports GPUs for training and serving workloads at scale, (ii) Flexible integration with distributed computing and data processing frameworks, and (iii) Support for multiple teams on the same infrastructure to maximize utilization of resources.Learn more about AI/ML orchestration on GKEWhat is Kubeflow and how is it used?Building a Machine Learning Platform with Kubeflow and Ray on GKETensorFlow on GKE Autopilot with GPU accelerationLearning resourcesGKE shared-GPU helps to search for neutrinosHear from the San Diego Supercomputer Center (SDSC) and University of Wisconsin-Madison about how GPU sharing in Google Kubernetes Engine is helping them detect neutrinos at the South Pole with the gigaton-scale IceCube Neutrino Observatory.Read to learn moreLearn how to request hardware accelerators (GPUs) in your GKE Autopilot workloadsContinuous integration and deliveryCreate a continuous delivery pipeline This hands-on lab shows you how to create a continuous delivery pipeline using Google Kubernetes Engine, Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker. After you create a sample application, you configure these services to automatically build, test, and deploy it. Start hands-on labBest practices for continuous integration and delivery to Google Kubernetes EngineWatch to learn continuous delivery best practices for Jenkins and GKETutorials, quickstarts, & labsCreate a continuous delivery pipeline This hands-on lab shows you how to create a continuous delivery pipeline using Google Kubernetes Engine, Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker. After you create a sample application, you configure these services to automatically build, test, and deploy it.
Start hands-on labBest practices for continuous integration and delivery to Google Kubernetes EngineWatch to learn continuous delivery best practices for Jenkins and GKEDeploying and running applicationsDeploy a containerized web applicationCreate a containerized web app, test it locally, and then deploy to a Google Kubernetes Engine (GKE) cluster—all directly in the Cloud Shell Editor. By the end of this short tutorial, you'll understand how to build, edit, and debug a Kubernetes app.Start tutorialQuickstart: Deploying a language-specific appDeploy WordPress on GKE with Persistent Disk and Cloud SQLCourse: get started with GKEFind the right partner to deploy and runDeploy and run on GKE with the help of our trusted partners, including WALT Labs, Zencore, FTG, and more.Find a GKE partnerCurrent deploys and runs on GKECurrent, a leading challenger bank based in New York City, now hosts most of its apps in Docker containers, including its business-critical GraphQL API, using GKE to automate cluster deployment and management of containerized apps.Read how Current deployed apps with GKEBest practices for running cost-optimized Kubernetes applications on GKEHands-on Lab: Optimizing Your Costs on Google Kubernetes EngineKubernetes Engine API for building and managing applicationsTutorials, quickstarts, & labsDeploy a containerized web applicationCreate a containerized web app, test it locally, and then deploy to a Google Kubernetes Engine (GKE) cluster—all directly in the Cloud Shell Editor. By the end of this short tutorial, you'll understand how to build, edit, and debug a Kubernetes app.Start tutorialQuickstart: Deploying a language-specific appDeploy WordPress on GKE with Persistent Disk and Cloud SQLCourse: get started with GKEPartners & integrationsFind the right partner to deploy and runDeploy and run on GKE with the help of our trusted partners, including WALT Labs, Zencore, FTG, and more.Find a GKE partnerLearning resourcesCurrent deploys and runs on GKECurrent, a leading challenger bank based in New York City, now hosts most of its apps in Docker containers, including its business-critical GraphQL API, using GKE to automate cluster deployment and management of containerized apps.Read how Current deployed apps with GKEBest practices for running cost-optimized Kubernetes applications on GKEHands-on Lab: Optimizing Your Costs on Google Kubernetes EngineKubernetes Engine API for building and managing applicationsMigrate workloadsMigrating a two-tier application to GKEUse Migrate to Containers to move and convert workloads directly into containers in GKE. Migrate a two-tiered LAMP stack application, with both app and database VMs, from VMware to GKE.Modernization path for .NET applications on Google CloudMigrate traditional workloads to GKE containers with easeMigration partners and servicesWork with a trusted partner to get Google Kubernetes Engine on-prem and bring Kubernetes' world-class management to private infrastructure. Or tap into migration services from the Google Cloud Marketplace.Find a migration partnerHybrid and multicloud partnerCloud-native application migrationTutorials, quickstarts, & labsMigrating a two-tier application to GKEUse Migrate to Containers to move and convert workloads directly into containers in GKE. 
Migrate a two-tiered LAMP stack application, with both app and database VMs, from VMware to GKE.Modernization path for .NET applications on Google CloudMigrate traditional workloads to GKE containers with easePartners & integrationsMigration partners and servicesWork with a trusted partner to get Google Kubernetes Engine on-prem and bring Kubernetes' world-class management to private infrastructure. Or tap into migration services from the Google Cloud Marketplace.Find a migration partnerHybrid and multicloud partnerCloud-native application migration
Pricing
How GKE pricing works
After free credits are used, total cost is based on edition, cluster operation mode, cluster management fees, and applicable inbound data transfer fees.
Service | Description | Price (USD)
Free tier | The GKE free tier provides $74.40 in monthly credits per billing account that are applied to zonal and Autopilot clusters. | Free
Kubernetes, Enterprise edition | Includes standard edition features and multi-team, multi-cluster, self-service operations, advanced security, service mesh, configuration, and a unified console experience. | $0.0083 per vCPU per hour
Kubernetes, Standard edition | Includes fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization. | $0.10 per cluster per hour
Compute | Autopilot mode: CPU, memory, and compute resources that are provisioned for your Pods. Standard mode: You are billed for each instance according to Compute Engine's pricing. | Refer to Compute Engine pricing
Learn more about GKE pricing.
View all pricing detailsPricing calculatorEstimate your monthly GKE costs, including region specific pricing and fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry GKE in the console, with one Zonal or Autopilot cluster free per month.Go to my consoleHave a large project?Contact salesDeploy an app to a GKE clusterView quickstartClick to deploy Kubernetes applicationsGo to MarketplaceGet expert help evaluating and implementing GKEFind a partner \ No newline at end of file diff --git a/Google_Kubernetes_Engine(3).txt b/Google_Kubernetes_Engine(3).txt new file mode 100644 index 0000000000000000000000000000000000000000..5fa0c15bfd951e2bcd5554a32c2a555c452d41a4 --- /dev/null +++ b/Google_Kubernetes_Engine(3).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/kubernetes-engine/pricing +Date Scraped: 2025-02-23T12:10:59.322Z + +Content: +Home Pricing Google Kubernetes Engine (GKE) Send feedback Stay organized with collections Save and categorize content based on your preferences. Google Kubernetes Engine pricing This page explains pricing for edition, compute resources, cluster operation mode, cluster management fees, and applicable ingress fees in Google Kubernetes Engine (GKE). Standard edition Includes fully automated cluster lifecycle management, pod and cluster autoscaling, cost visibility and automated infrastructure cost optimization. Priced at $0.10 per cluster per hour. Enterprise edition Includes standard edition features and multi-team, multi-cluster, self-service operations, advanced security, service mesh, configuration, best practice observability metrics, and a unified console experience. Priced at $0.00822 per vCPU per hour. Enabling GKE Enterprise in your project entitles you to full usage of the GKE Enterprise platform, including hybrid and multi-cloud features. Once enabled, GKE Enterprise charges apply to all GKE Enterprise clusters and are based on the number of GKE Enterprise cluster vCPUs, charged on an hourly basis. A vCPU is considered "under management" by GKE Enterprise when it is seen as schedulable compute capacity by the GKE Enterprise control plane, meaning all vCPUs in the relevant user cluster, and excluding (for on-premises options) both the admin cluster and the control plane nodes. For details of available GKE Enterprise features in each environment see the deployment options guide. If you use Autopilot clusters with GKE Enterprise, you're billed per vCPU for the Enterprise tier in addition to the pricing model described in Autopilot pricing. Pricing table GKE Enterprise offers pay-as-you-go pricing, where you are billed for GKE Enterprise clusters as you use them at the rates listed below. You can start using pay-as-you-go GKE Enterprise whenever you like by following the instructions in our setup guides. Prices are listed in U.S. dollars (USD). If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. A bill is sent out at the end of each billing cycle, listing previous usage and charges.
Public Cloud Environments | Pay-as-you-go list price (hourly) | Pay-as-you-go list price (monthly) M
GKE Enterprise GC | $0.00822 / vCPU | $6 / vCPU
GKE Enterprise Multicloud (AWS) AWS | $0.00822 / vCPU | $6 / vCPU
GKE Enterprise Multicloud (Azure) AZ | $0.00822 / vCPU | $6 / vCPU
GKE Enterprise Multicloud (Attached Clusters) AC | $0.00822 / vCPU | $6 / vCPU
On-premises Environments | Pay-as-you-go price (hourly) | Pay-as-you-go price (monthly) M
GDC (vSphere) | $0.03288 / vCPU | $24 / vCPU
GDC (Bare Metal) BM, BM2 | $0.03288 / vCPU | $24 / vCPU
M - Estimated monthly price based on 730 hours in a month.
GC - GKE Enterprise on Google Cloud pricing does not include charges for Google Cloud resources such as Compute Engine, Cloud Load Balancing, and Cloud Storage.
AWS - GKE Enterprise on AWS pricing does not include any costs associated with AWS resources such as EC2, ELB, and S3. The customer is responsible for any charges for their AWS resources.
AZ - GKE Enterprise on Azure pricing does not include any costs associated with Azure resources such as VMs, LBs, and Azure Storage. The customer is responsible for any charges for their Azure resources.
BM - For GKE Enterprise / GDC (software only) on bare metal, if hyperthreading is enabled one CPU is equivalent to two vCPUs for pricing purposes. If hyperthreading is not enabled, one CPU is equivalent to one vCPU.
BM2 - VM Runtime is a feature that can be enabled on GKE Enterprise / GDC (software only) on bare metal. It doesn't require an alternate SKU or additional pricing to be used.
AC - For CNCF compliant clusters. Learn more.
If you are a new GKE Enterprise customer, you can try GKE Enterprise for free for a maximum of 90 days. You are still billed for applicable infrastructure usage during the trial. To sign up, go to the GKE Console and enable trial. If at any time you want to stop using GKE Enterprise, follow the instructions in Disabling GKE Enterprise. Autopilot mode Autopilot clusters accrue a flat fee of $0.10/hour for each cluster after the free tier, in addition to charges for the workloads that you run. Autopilot uses a workload-driven provisioning model, where resources are provisioned based on the requirements specified in the Pod specification of your workloads. GKE includes a Service Level Agreement (SLA) that's financially backed providing availability of 99.95% for the control plane, and 99.9% for Autopilot Pods in multiple zones, or 99.99% for GKE Enterprise Autopilot Pods in multiple regions. Committed use discounts (CUDs) can be used to reduce costs for workloads that have predictable resource usage. By default, workloads that you create are provisioned on our general-purpose computing platform where you're billed only for the resources that Pods request (and not for spare compute capacity or system overhead). If your workloads need to scale beyond 28 vCPU, you can use the Balanced or Scale-Out compute classes, which use the same approach of Pod-based compute provisioning and billing. If you enable Confidential GKE Nodes, additional charges apply. For more information, see Confidential GKE Nodes on GKE Autopilot pricing. You can also request specific hardware like accelerators or Compute Engine machine series for your workloads. For these specialized workloads, Autopilot provisions nodes that have at least the requested compute capacity for the workload and bills you for the entire node. This node-based compute model is ideal for workloads that have specific hardware requirements, but requires you to consider how to fully utilize the resources provisioned.
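To make the utilization point concrete, here is a minimal, illustrative Python sketch of node-based billing for Pods that request specific hardware. The machine sizes and the per-vCPU rate below are hypothetical placeholders, not published Autopilot prices, and the node management premium described later on this page is omitted.

# Illustrative sketch only: with node-based billing, GKE provisions a machine at
# least as large as the request and bills the entire node, so unrequested capacity
# on that node is still paid for. Machine shapes and the rate are hypothetical.

HYPOTHETICAL_MACHINE_VCPUS = [4, 8, 16, 32]   # placeholder machine shapes
HYPOTHETICAL_RATE_PER_VCPU_HOUR = 0.04        # placeholder rate, not a real price

def node_based_cost(requested_vcpus, hours):
    # Pick the smallest placeholder machine that fits the request; real Autopilot
    # chooses from predefined Compute Engine machine types.
    provisioned = next((v for v in HYPOTHETICAL_MACHINE_VCPUS if v >= requested_vcpus),
                       requested_vcpus)
    cost = provisioned * HYPOTHETICAL_RATE_PER_VCPU_HOUR * hours
    utilization = requested_vcpus / provisioned
    return provisioned, round(cost, 2), round(utilization, 3)

# Requesting 10 vCPU lands on a 16-vCPU node: you pay for all 16 vCPU at 62.5% utilization.
print(node_based_cost(requested_vcpus=10, hours=730))  # (16, 467.2, 0.625)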
These compute provisioning and billing approaches mean that you can use specific compute hardware for specialized workloads while using the simpler Pod-based compute provisioning approach for everything else. Pods without specific hardware requirements The default general-purpose platform and the Balanced and Scale-Out compute classes use a Pod-based billing model. You are charged in one-second increments for the CPU, memory, and ephemeral storage resources that your running Pods request in the Pod resource requests, with no minimum duration. This billing model applies to the default general-purpose platform and to the Balanced and Scale-Out compute classes. This model has the following considerations: Autopilot sets a default value if no resource request was defined, and scales up values that don't meet the required minimums or CPU-to-memory ratio. Set the resource requests to what your workloads require to get the most optimal price. You are only billed for Pods that are being created or are currently running (those in the Running phase, and those with the ContainerCreating status in the Pending phase). Pods waiting to be scheduled, and those that have terminated (like Pods marked as Succeeded or Failed) are not billed. You are not charged for system Pods, operating system overhead, unallocated space, or unscheduled Pods. Set appropriate resource requests and Pod replica counts for your workloads for optimal cost. With the Pod-based billing model, the underlying node size or quantity doesn't matter for billing. Note: Prior to version 1.29.2-gke.1060000, and for clusters that were originally created on a version earlier than 1.26, Autopilot sets the resources requested to equal the resource limits values in cases where only the limit is set.Note: GPU Pods that run on GKE version 1.29.4-gke.1427000 and earlier are billed according to the Autopilot GPU Pod SKUs and use the Pod-based billing model. General-purpose (default) Pods If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. *Spot prices are dynamic and can change up to once every 30 days, but always provide discounts of 60-91% off of the corresponding regular price for CPU, memory and GPU. Balanced and Scale-Out compute class Pods If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. *Spot prices are dynamic and can change up to once every 30 days, but always provide discounts of 60-91% off of the corresponding regular price for CPU, memory and GPU. Pods that have specific hardware requirements Autopilot uses a node-based billing model when you request specific hardware like accelerators or Compute Engine machine series. When your Pods request these types of hardware resources, GKE allocates predefined Compute Engine machine types that most closely fit the requests (as a result, they can be larger than what your Pod requested). You're charged for the underlying VM resources, to which any applicable discounts like Compute Engine CUDs apply, plus a management premium on the compute resources. Because you're billed for the entire machine, ensure that these specialized workloads efficiently utilize the full resources of the provisioned machines. 
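For contrast, the following minimal Python sketch illustrates the default Pod-based model described earlier in this section, where you pay per second only for the CPU, memory, and ephemeral storage that running Pods request. The unit rates are hypothetical placeholders, not published Autopilot prices.

# Illustrative sketch only: Pod-based Autopilot billing charges per second for the
# resources that running Pods request; terminated or unscheduled Pods are not billed.
# The unit rates below are hypothetical placeholders.

HYPOTHETICAL_RATE_PER_HOUR = {
    "vcpu": 0.045,            # placeholder $/vCPU-hour
    "memory_gib": 0.005,      # placeholder $/GiB-hour
    "ephemeral_gib": 0.0001,  # placeholder $/GiB-hour
}

def pod_based_cost(vcpu, memory_gib, ephemeral_gib, seconds_running):
    """Cost of one running Pod replica, billed on its resource requests."""
    hours = seconds_running / 3600
    hourly = (vcpu * HYPOTHETICAL_RATE_PER_HOUR["vcpu"]
              + memory_gib * HYPOTHETICAL_RATE_PER_HOUR["memory_gib"]
              + ephemeral_gib * HYPOTHETICAL_RATE_PER_HOUR["ephemeral_gib"])
    return round(hours * hourly, 4)

# A Pod requesting 0.5 vCPU, 2 GiB memory, and 1 GiB ephemeral storage for 90 minutes.
print(pod_based_cost(0.5, 2.0, 1.0, seconds_running=90 * 60))  # 0.0489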
This consideration doesn't apply to the default Pod-based billing model, which is ideal for smaller workloads (those that request significantly less resources than the size of the smallest machine size in the machine series) and workloads that don't efficiently fit into the predefined Compute Engine machine types. Note: GPU Pods prior to version 1.29.4-gke.1427000 that don't specify the Accelerator compute class are billed using Pod-based billing on these legacy SKUs instead. Node management premiums for accelerators and specific machine series If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. *Spot prices are dynamic and can change up to once every 30 days, but always provide discounts of 60-91% off of the corresponding regular price for CPU, memory and GPU. Standard mode Clusters created in Standard mode accrue a management fee of $0.10 per cluster per hour, irrespective of cluster size or topology, after the free tier. GKE cluster management fees do not apply to GKE Enterprise clusters. In Standard mode, GKE uses Compute Engine instances as worker nodes in the cluster. You are billed for each of those instances according to Compute Engine's pricing, until the nodes are deleted. Compute Engine resources are billed on a per-second basis with a one-minute minimum usage cost. Compute Engine offers committed use discounts that apply to the Compute Engine instances in the cluster. To learn more, see Committed use discounts in Compute Engine. GKE includes a Service Level Agreement (SLA) that's financially backed providing availability of 99.95% for the control plane of Regional clusters, and 99.5% for the control plane of Zonal clusters. Cluster management fee and free tier The cluster management fee of $0.10 per cluster per hour (charged in 1 second increments) applies to all GKE clusters irrespective of the mode of operation, cluster size or topology. The GKE free tier provides $74.40 in monthly credits per billing account that are applied to zonal and Autopilot clusters. If you only use a single Zonal or Autopilot cluster, this credit will at least cover the complete cost of that cluster each month. Unused free tier credits are not rolled over, and cannot be applied to any other SKUs (for example, they cannot be applied to compute charges, or the cluster fee for Regional clusters). The following conditions apply to the cluster management fee: The fee is flat, irrespective of cluster size and topology—whether it is a single-zone cluster, multi-zonal cluster, regional or Autopilot cluster, all accrue the same flat fee per cluster. The fee does not apply to GKE Enterprise clusters. The following example demonstrates how the cluster management fee and free tier credit are applied for an organization's billing accounts. In this example, the organization's regional and zonal cluster hours are listed excluding GKE Enterprise cluster hours. The total billable amount is calculated per month, with the monthly free tier credit applied.
Organization's billing accounts | Autopilot cluster hours per month | Regional cluster hours per month | Zonal cluster hours per month | Free tier credit used | Total monthly GKE cluster management fee (at $0.10/hour per cluster)
account_1 | 744 | 0 | 0 | $74.40 | $0
account_2 | 0 | 1000 | 500 | $50 | $100
account_3 | 1000 | 1000 | 1000 | $74.40 | $225.60
Extended support period Clusters on the Extended release channel can stay on their GKE minor version and receive extended support beyond the standard support period.
With extended support, you can stay on a GKE minor release, supported, for up to 24 months. You will be charged an additional GKE extended period cluster management fee after the cluster has reached the end of standard support. There is no additional charge for using the Extended release channel during the standard support period and you can upgrade to a minor version covered under the standard support period at any time. To learn more, see Get long-term support with the Extended channel. Priced at $0.50 per cluster per hour. The GKE extended period cluster management fee is in addition to the GKE cluster management fee at $0.10 per cluster per hour, for a total of $0.60 per cluster per hour. The GKE extended period cluster management fee is included in the GKE Enterprise edition. Multi Cluster Ingress Multi Cluster Ingress is included as part of GKE Enterprise, so there is no additional charge to use Multi Cluster Ingress in a GKE Enterprise on Google Cloud cluster. If you have GKE clusters that are not licensed for GKE Enterprise, you are billed at the standalone pricing rate when you use Multi Cluster Ingress. The functionality of Multi Cluster Ingress is the same whether you use it with GKE Enterprise licensing or standalone pricing. You can change how you are billed at any time by enrolling or unenrolling in GKE Enterprise. In all cases, the load balancers and traffic for MultiClusterIngress resources are charged separately, according to load balancer pricing. GKE Enterprise licensing Multi Cluster Ingress is included as part of GKE Enterprise. If you enable the GKE Enterprise API (gcloud services enable anthos.googleapis.com) and your clusters are registered to a fleet, then there is no additional charge to use Multi Cluster Ingress. Standalone pricing Multi Cluster Ingress standalone pricing is based on the number of Pods that are considered Multi Cluster Ingress backends at a cost of $3 per backend Pod per month (730 hours). This pricing is approximately $0.0041096 per backend Pod per hour, billed in 5 minute increments. The number of backend Pods is the total number of Pods that are members of MultiClusterService resources across all GKE clusters in your project. The following example shows how backend Pods are counted: Multi Cluster Ingress only charges Pods which are direct backends of MultiClusterIngress resources. Pods which are not Multi Cluster Ingress backends are not charged. In this example, there are three MultiClusterService resources across two clusters with Pod backends. Pods that are members of Service A, B, and C are not direct backends and are not billed against the standalone pricing. Any Pod that is a member of multiple MultiClusterService resources is billed for each MultiClusterService that it is a member of. Two Pods are members of both the D and E MultiClusterService. The following table summarizes the monthly cost total for standalone billing for the two clusters in the example:
MultiClusterService | Pods | Cost per month
D | 3 | $9
E | 4 | $12
F | 1 | $3
Total | 8 | $24
For more information on how to configure Multi Cluster Ingress billing, see API enablement. Backup for GKE Backup for GKE is a separate service from GKE that can be used to protect and manage GKE data. Backup for GKE accrues fees along two dimensions: first, there is a GKE backup management fee, based on the number of GKE pods protected, and second, there is a backup storage fee, based on the amount of data (GiB) stored. Both fees are calculated on a monthly basis, similar to other GKE feature billing.
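As a small illustration of how these two fee dimensions add up, here is a minimal Python sketch using the Iowa (us-central1) rates from the worked example that follows in the text ($1.00 per protected pod per month and $0.028 per GiB-month of backup storage). Cross-region network charges from the table below are not included, and rates in other regions differ.

# Sketch of the two Backup for GKE fee dimensions: a management fee per protected
# pod and a storage fee per GiB-month. Rates here are the us-central1 example rates
# quoted in the text; they are not valid for every region.

MGMT_FEE_PER_POD_MONTH = 1.00       # GKE backup management fee (example rate)
STORAGE_FEE_PER_GIB_MONTH = 0.028   # backup storage fee (example rate)

def backup_for_gke_monthly_cost(avg_protected_pods, stored_gib):
    management = avg_protected_pods * MGMT_FEE_PER_POD_MONTH
    storage = stored_gib * STORAGE_FEE_PER_GIB_MONTH
    return management + storage

# Matches the worked example below: 20 protected pods and 200 GiB stored.
print(backup_for_gke_monthly_cost(avg_protected_pods=20, stored_gib=200))  # 25.6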
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply. Hourly pricing varies by month. The exact hourly pricing can be calculated by dividing the monthly pricing by the number of hours in the desired month. As an example, a customer with a single backup plan in Iowa (us-central1) that backs up an average of 20 pods during a month, storing 200GiB of backup storage data in Iowa, would be charged $25.60 in fees. This $25.60 would include $20 for the month for GKE backup management (20 x $1.00 / pod-month) and $5.60 for backup storage (200 * $0.028 / GiB-month). Starting June 26 2023, new network outbound data transfer charges will be introduced for backups that are stored in a different region from their source GKE cluster. These charges will be based on the source and destination region and the number of bytes transferred for each such "cross-region" backup operation:
GKE cluster location (rows) vs. Backup location (columns) | Northern America | Europe | Asia | Indonesia | Oceania | Middle East | Latin America | Africa
Northern America | $0.02/GiB | $0.05/GiB | $0.08/GiB | $0.10/GiB | $0.10/GiB | $0.11/GiB | $0.14/GiB | $0.11/GiB
Europe | $0.05/GiB | $0.02/GiB | $0.08/GiB | $0.10/GiB | $0.10/GiB | $0.11/GiB | $0.14/GiB | $0.11/GiB
Asia | $0.08/GiB | $0.08/GiB | $0.08/GiB | $0.10/GiB | $0.10/GiB | $0.11/GiB | $0.14/GiB | $0.11/GiB
Indonesia | $0.10/GiB | $0.10/GiB | $0.10/GiB | N/A | $0.08/GiB | $0.11/GiB | $0.14/GiB | $0.14/GiB
Oceania | $0.10/GiB | $0.10/GiB | $0.10/GiB | $0.08/GiB | $0.08/GiB | $0.11/GiB | $0.14/GiB | $0.14/GiB
Middle East | $0.11/GiB | $0.11/GiB | $0.11/GiB | $0.11/GiB | $0.11/GiB | $0.08/GiB | $0.14/GiB | $0.11/GiB
Latin America | $0.14/GiB | $0.14/GiB | $0.14/GiB | $0.14/GiB | $0.14/GiB | $0.14/GiB | $0.14/GiB | $0.14/GiB
Africa | $0.11/GiB | $0.11/GiB | $0.11/GiB | $0.14/GiB | $0.14/GiB | $0.11/GiB | $0.14/GiB | N/A
Pricing calculator You can use the Google Cloud pricing calculator to estimate your monthly GKE charges, including cluster management fees and worker node pricing. What's next Read the Google Kubernetes Engine documentation. Get started with Google Kubernetes Engine. Try the Pricing calculator. Learn about Google Kubernetes Engine solutions and use cases. Request a custom quote With Google Cloud's pay-as-you-go pricing, you only pay for the services you use. Connect with our sales team to get a custom quote for your organization. Contact sales \ No newline at end of file diff --git a/Google_Kubernetes_Engine.txt b/Google_Kubernetes_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d37d1b7d60c279bf7ca7263e77895de894a6438 --- /dev/null +++ b/Google_Kubernetes_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/kubernetes-engine +Date Scraped: 2025-02-23T12:01:33.676Z + +Content: +Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Google Kubernetes Engine (GKE) The most scalable and fully automated Kubernetes servicePut your containers on autopilot and securely run your enterprise workloads at scale—with little to no Kubernetes expertise required.Get one Zonal or Autopilot cluster free per month.Try it in consoleContact salesProduct highlightsIncreased velocity, reduced risk, and lower TCOBuilt-in security posture and observability tooling With support for up to 65,000 nodes, we believe GKE offers more than 10X larger scale than the other two largest public cloud providersWhy choose GKE?3:48 min videoFeaturesSupport for 65,000-node clusters for next gen AIIn anticipation of even larger models, we are introducing support for 65,000-node clusters.
To develop cutting-edge AI models, teams need to be able to allocate computing resources across diverse workloads. This includes not only model training but also serving, inference, conducting ad hoc research, and managing auxiliary tasks. Centralizing computing power within the smallest number of clusters provides the flexibility to quickly adapt to changes in demand from inference serving, research, and training workloads.Increased velocity, reduced risk, and lower TCOWith the new premium GKE Enterprise edition, platform teams benefit from increased velocity by configuring and observing multiple clusters from one place, defining configuration for teams rather than clusters, and providing self-service options for developers for deployment and management of apps. You can reduce risk using advanced security and GitOps-based configuration management. Lower total cost of ownership (TCO) with a fully integrated and managed solution—adding up to a 196% ROI in three years.VIDEOIntroducing GKE Enterprise edition6:15Flexible editionsGKE Standard edition provides fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization. It includes all the existing benefits of GKE and offers both the Autopilot and Standard operation modes. The new premium GKE Enterprise edition offers all of the above, plus management, governance, security, and configuration for multiple teams and clusters—all with a unified console experience and integrated service mesh.Standard edition for single cluster and Enterprise edition for multiple clustersServerless Kubernetes experience using AutopilotGKE Autopilot is a hands-off operations mode that manages your cluster’s underlying compute (without you needing to configure or monitor)—while still delivering a complete Kubernetes experience. And with per-pod billing, Autopilot ensures you pay only for your running pods, not system components, operating system overhead, or unallocated capacity for up to 85% savings from resource and operational efficiency. Both Autopilot and Standard operations mode are available as part of the GKE Enterprise edition.VIDEOWatch to learn about the Autopilot mode of operation in GKE 6:36Automated security and compliance monitoringGKE threat detection is powered by Security Command Center (SCC), and surfaces threats affecting your GKE clusters in near real-time by continuously monitoring GKE audit logs.GKE compliance provides streamlined real-time insights, automated reports, and the freedom to innovate securely on Google Cloud.Explore GKE’s robust security configurationsPod and cluster autoscalingGKE implements the full Kubernetes API, four-way autoscaling, release channels, and multi-cluster support. Horizontal pod autoscaling can be based on CPU utilization or custom metrics. Cluster autoscaling works on a per-node-pool basis, and vertical pod autoscaling continuously analyzes the CPU and memory usage of pods, automatically adjusting CPU and memory requests.Explore GKE’s robust security configurationsContainer-native networking and securityPrivately networked clusters in GKE can be restricted to a private endpoint or a public endpoint that only certain address ranges can access. GKE Sandbox for the Standard mode of operation provides a second layer of defense between containerized workloads on GKE for enhanced workload security. 
GKE clusters inherently support Kubernetes Network Policy to restrict traffic with pod-level firewall rules.Prebuilt Kubernetes applications and templatesGet access to enterprise-ready containerized solutions with prebuilt deployment templates, featuring portability, simplified licensing, and consolidated billing. These are not just container images, but open source, Google-built, and commercial applications that increase developer productivity. Click to deploy on-premises or in third-party clouds from Google Cloud Marketplace.Browse 100+ click-to-deploy and third-party services for GKEGPU and TPU supportGKE supports GPUs and TPUs and makes it easy to run ML, GPGPU, HPC, and other workloads that benefit from specialized hardware accelerators.Multi-team management using fleet team scopesUse fleets to organize clusters and workloads, and assign resources to multiple teams easily to improve velocity and delegate ownership. Team scopes let you define subsets of fleet resources on a per-team basis, with each scope associated with one or more fleet member clusters.Multi-cluster management using fleetsYou might choose multiple clusters to separate services across environments, tiers, locales, teams, or infrastructure providers. Fleets and the Google Cloud components and features that support them strive to make managing multiple clusters as easy as possible.Backup for GKEBackup for GKE is an easy way for customers running stateful workloads on GKE to protect, manage, and restore their containerized applications and data.Multi-cloud support with workload portabilityGKE runs Certified Kubernetes, enabling workload portability to other Kubernetes platforms across clouds and on-premises. You can also run your apps anywhere with consistency using GKE on Google Cloud, GKE on AWS, or GKE on Azure.Hybrid supportTake advantage of Kubernetes and cloud technology in your own data center through Google Distributed Cloud. Get the GKE experience with quick, managed, and simple installs as well as upgrades validated by Google.Managed service meshManage, observe, and secure your services with Google’s implementation of the powerful Istio open source project. 
Simplify traffic management and monitoring with a fully managed service mesh.Managed GitOpsCreate and enforce consistent configurations and security policies across clusters, fleets, and teams with managed GitOps config deployment.Identity and access managementControl access in the cluster with your Google accounts and role permissions.Hybrid networkingReserve an IP address range for your cluster, allowing your cluster IPs to coexist with private network IPs using Google Cloud VPN.Security and complianceGKE is backed by a Google security team of over 750 experts and is both HIPAA and PCI DSS compliant.Integrated logging and monitoringEnable Cloud Logging and Cloud Monitoring with simple checkbox configurations, making it easy to gain insight into how your application is running.Cluster optionsChoose clusters tailored to the availability, version stability, isolation, and pod traffic requirements of your workloads.Auto scaleAutomatically scale your application deployment up and down based on resource utilization (CPU, memory).Auto upgradeAutomatically keep your cluster up to date with the latest release version of Kubernetes.Auto repairWhen auto repair is enabled, if a node fails a health check, GKE initiates a repair process for that node.Resource limitsKubernetes allows you to specify how much CPU and memory (RAM) each container needs, which is used to better organize workloads within your cluster.Container isolationUse GKE Sandbox for a second layer of defense between containerized workloads on GKE for enhanced workload security.Stateful application supportGKE isn't just for 12-factor apps. You can attach persistent storage to containers, and even host complete databases.Docker image supportGKE supports the common Docker container format.OS built for containersGKE runs on Container-Optimized OS, a hardened OS built and managed by Google.Private container registryIntegrating with Google Container Registry makes it easy to store and access your private Docker images.Fast, consistent buildsUse Cloud Build to reliably deploy your containers on GKE without needing to set up authentication.Built-in dashboard Google Cloud console offers useful dashboards for your project's clusters and their resources. You can use these dashboards to view, inspect, manage, and delete resources in your clusters.Spot VMsAffordable compute instances suitable for batch jobs and fault-tolerant workloads. Spot VMs provide significant savings of up to 91% while still getting the same performance and capabilities as regular VMs.Persistent disks support Durable, high-performance block storage for container instances. Data is stored redundantly for integrity, flexibility to resize storage without interruption, and automatic encryption. You can create persistent disks in HDD or SSD formats. You can also take snapshots of your persistent disk and create new persistent disks from that snapshot.Local SSD supportGKE offers always encrypted, local, solid-state drive (SSD) block storage. 
Local SSDs are physically attached to the server that hosts the virtual machine instance for very high input/output operations per second (IOPS) and very low latency compared to persistent disks.Global load balancingGlobal load-balancing technology helps you distribute incoming requests across pools of instances across multiple regions, so you can achieve maximum performance, throughput, and availability at low cost.Linux and Windows supportFully supported for both Linux and Windows workloads, GKE can run both Windows Server and Linux nodes.Serverless containersRun stateless serverless containers abstracting away all infrastructure management and automatically scale them with Cloud Run.Usage meteringFine-grained visibility to your Kubernetes clusters. See your GKE clusters' resource usage broken down by namespaces and labels, and attribute it to meaningful entities.Release channels Release channels provide more control over which automatic updates a given cluster receives, based on the stability requirements of the cluster and its workloads. You can choose rapid, regular, or stable. Each has a different release cadence and targets different types of workloads.Software supply chain securityVerify, enforce, and improve security of infrastructure components and packages used for container images with Artifact Analysis.Per-second billingGoogle bills in second-level increments. You pay only for the compute time that you use.View all featuresHow It WorksA GKE cluster has a control plane and machines called nodes. Nodes run the services supporting the containers that make up your workload. The control plane decides what runs on those nodes, including scheduling and scaling. Autopilot mode manages this complexity; you simply deploy and run your apps.View documentationGoogle Kubernetes Engine in a minute (1:21)Common UsesManage multi-cluster infrastructureSimplify multi-cluster deployments with fleetsUse fleets to simplify how you manage multi-cluster deployments—such as separating production from non-production environments, or separating services across tiers, locations, or teams. Fleets let you group and normalize Kubernetes clusters, making it easier to administer infrastructure and adopt Google best practices.Learn about fleet managementFind the right partner to manage multi-cluster infraSecurely manage multi-cluster infrastructure and workloads with the help of Enterprise edition launch partners.Find a GKE partnerTutorials, quickstarts, & labsSimplify multi-cluster deployments with fleetsUse fleets to simplify how you manage multi-cluster deployments—such as separating production from non-production environments, or separating services across tiers, locations, or teams. 
Fleets let you group and normalize Kubernetes clusters, making it easier to administer infrastructure and adopt Google best practices.Learn about fleet managementPartners & integrationsFind the right partner to manage multi-cluster infraSecurely manage multi-cluster infrastructure and workloads with the help of Enterprise edition launch partners.Find a GKE partnerSecurely run optimized AI workloadsRun optimized AI workloads with platform orchestrationA robust AI/ML platform considers the following layers: (i) Infrastructure orchestration that supports GPUs for training and serving workloads at scale, (ii) Flexible integration with distributed computing and data processing frameworks, and (iii) Support for multiple teams on the same infrastructure to maximize utilization of resources.Learn more about AI/ML orchestration on GKEWhat is Kubeflow and how is it used?Building a Machine Learning Platform with Kubeflow and Ray on GKETensorFlow on GKE Autopilot with GPU accelerationGKE shared-GPU helps to search for neutrinosHear from the San Diego Supercomputer Center (SDSC) and University of Wisconsin-Madison about how GPU sharing in Google Kubernetes Engine is helping them detect neutrinos at the South Pole with the gigaton-scale IceCube Neutrino Observatory.Read to learn moreLearn how to request hardware accelerators (GPUs) in your GKE Autopilot workloadsTutorials, quickstarts, & labsRun optimized AI workloads with platform orchestrationA robust AI/ML platform considers the following layers: (i) Infrastructure orchestration that supports GPUs for training and serving workloads at scale, (ii) Flexible integration with distributed computing and data processing frameworks, and (iii) Support for multiple teams on the same infrastructure to maximize utilization of resources.Learn more about AI/ML orchestration on GKEWhat is Kubeflow and how is it used?Building a Machine Learning Platform with Kubeflow and Ray on GKETensorFlow on GKE Autopilot with GPU accelerationLearning resourcesGKE shared-GPU helps to search for neutrinosHear from the San Diego Supercomputer Center (SDSC) and University of Wisconsin-Madison about how GPU sharing in Google Kubernetes Engine is helping them detect neutrinos at the South Pole with the gigaton-scale IceCube Neutrino Observatory.Read to learn moreLearn how to request hardware accelerators (GPUs) in your GKE Autopilot workloadsContinuous integration and deliveryCreate a continuous delivery pipeline This hands-on lab shows you how to create a continuous delivery pipeline using Google Kubernetes Engine, Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker. After you create a sample application, you configure these services to automatically build, test, and deploy it. Start hands-on labBest practices for continuous integration and delivery to Google Kubernetes EngineWatch to learn continuous delivery best practices for Jenkins and GKETutorials, quickstarts, & labsCreate a continuous delivery pipeline This hands-on lab shows you how to create a continuous delivery pipeline using Google Kubernetes Engine, Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker. After you create a sample application, you configure these services to automatically build, test, and deploy it.
Start hands-on labBest practices for continuous integration and delivery to Google Kubernetes EngineWatch to learn continuous delivery best practices for Jenkins and GKEDeploying and running applicationsDeploy a containerized web applicationCreate a containerized web app, test it locally, and then deploy to a Google Kubernetes Engine (GKE) cluster—all directly in the Cloud Shell Editor. By the end of this short tutorial, you'll understand how to build, edit, and debug a Kubernetes app.Start tutorialQuickstart: Deploying a language-specific appDeploy WordPress on GKE with Persistent Disk and Cloud SQLCourse: get started with GKEFind the right partner to deploy and runDeploy and run on GKE with the help of our trusted partners, including WALT Labs, Zencore, FTG, and more.Find a GKE partnerCurrent deploys and runs on GKECurrent, a leading challenger bank based in New York City, now hosts most of its apps in Docker containers, including its business-critical GraphQL API, using GKE to automate cluster deployment and management of containerized apps.Read how Current deployed apps with GKEBest practices for running cost-optimized Kubernetes applications on GKEHands-on Lab: Optimizing Your Costs on Google Kubernetes EngineKubernetes Engine API for building and managing applicationsTutorials, quickstarts, & labsDeploy a containerized web applicationCreate a containerized web app, test it locally, and then deploy to a Google Kubernetes Engine (GKE) cluster—all directly in the Cloud Shell Editor. By the end of this short tutorial, you'll understand how to build, edit, and debug a Kubernetes app.Start tutorialQuickstart: Deploying a language-specific appDeploy WordPress on GKE with Persistent Disk and Cloud SQLCourse: get started with GKEPartners & integrationsFind the right partner to deploy and runDeploy and run on GKE with the help of our trusted partners, including WALT Labs, Zencore, FTG, and more.Find a GKE partnerLearning resourcesCurrent deploys and runs on GKECurrent, a leading challenger bank based in New York City, now hosts most of its apps in Docker containers, including its business-critical GraphQL API, using GKE to automate cluster deployment and management of containerized apps.Read how Current deployed apps with GKEBest practices for running cost-optimized Kubernetes applications on GKEHands-on Lab: Optimizing Your Costs on Google Kubernetes EngineKubernetes Engine API for building and managing applicationsMigrate workloadsMigrating a two-tier application to GKEUse Migrate to Containers to move and convert workloads directly into containers in GKE. Migrate a two-tiered LAMP stack application, with both app and database VMs, from VMware to GKE.Modernization path for .NET applications on Google CloudMigrate traditional workloads to GKE containers with easeMigration partners and servicesWork with a trusted partner to get Google Kubernetes Engine on-prem and bring Kubernetes' world-class management to private infrastructure. Or tap into migration services from the Google Cloud Marketplace.Find a migration partnerHybrid and multicloud partnerCloud-native application migrationTutorials, quickstarts, & labsMigrating a two-tier application to GKEUse Migrate to Containers to move and convert workloads directly into containers in GKE. 
Migrate a two-tiered LAMP stack application, with both app and database VMs, from VMware to GKE.Modernization path for .NET applications on Google CloudMigrate traditional workloads to GKE containers with easePartners & integrationsMigration partners and servicesWork with a trusted partner to get Google Kubernetes Engine on-prem and bring Kubernetes' world-class management to private infrastructure. Or tap into migration services from the Google Cloud Marketplace.Find a migration partnerHybrid and multicloud partnerCloud-native application migrationPricingHow GKE pricing worksAfter free credits are used, total cost is based on edition, cluster operation mode, cluster management fees, and applicable inbound data transfer fees. ServiceDescriptionPrice (USD)Free tierThe GKE free tier provides $74.40 in monthly credits per billing account that are applied to zonal and Autopilot clusters.FreeKubernetesEnterprise editionIncludes standard edition features and multi-team, multi-cluster, self-service operations, advanced security, service mesh, configuration, and a unified console experience.$0.0083Per vCPU per hourStandard editionIncludes fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization.$0.10Per cluster per hourComputeAutopilot mode: CPU, memory, and compute resources that are provisioned for your Pods.Standard mode: You are billed for each instance according to Compute Engine's pricing.Refer to Compute Engine pricingLearn more about GKE pricing. View all pricing detailsHow GKE pricing worksAfter free credits are used, total cost is based on edition, cluster operation mode, cluster management fees, and applicable inbound data transfer fees. Free tierDescriptionThe GKE free tier provides $74.40 in monthly credits per billing account that are applied to zonal and Autopilot clusters.Price (USD)FreeKubernetesDescriptionEnterprise editionIncludes standard edition features and multi-team, multi-cluster, self-service operations, advanced security, service mesh, configuration, and a unified console experience.Price (USD)$0.0083Per vCPU per hourStandard editionIncludes fully automated cluster life cycle management, pod and cluster autoscaling, cost visibility, and automated infrastructure cost optimization.Description$0.10Per cluster per hourComputeDescriptionAutopilot mode: CPU, memory, and compute resources that are provisioned for your Pods.Standard mode: You are billed for each instance according to Compute Engine's pricing.Price (USD)Refer to Compute Engine pricingLearn more about GKE pricing. 
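The pricing summary above supports a quick back-of-the-envelope estimate. The following sketch shows how the $74.40 monthly free-tier credit offsets the $0.10 per-cluster-per-hour Standard edition management fee for a single zonal or Autopilot cluster; the 730-hour month is an assumption, and compute, networking, and Enterprise edition vCPU fees are billed separately and are not modeled here.

# Rough monthly estimate of the GKE Standard edition cluster management fee,
# based on the list prices above. Compute resources are billed separately
# according to Compute Engine pricing and are not included.
HOURS_PER_MONTH = 730            # assumption: average hours in a month
FEE_PER_CLUSTER_HOUR = 0.10      # Standard edition management fee (USD)
FREE_TIER_CREDIT = 74.40         # monthly credit per billing account (USD)

def monthly_management_fee(clusters: int) -> float:
    """Management fee after applying the free-tier credit."""
    gross = clusters * FEE_PER_CLUSTER_HOUR * HOURS_PER_MONTH
    return max(0.0, gross - FREE_TIER_CREDIT)

print(monthly_management_fee(1))  # 0.0    -> the credit covers one cluster
print(monthly_management_fee(3))  # ~144.6 -> additional clusters are billed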
View all pricing detailsPricing calculatorEstimate your monthly GKE costs, including region-specific pricing and fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry GKE in the console, with one zonal or Autopilot cluster free per month.Go to my consoleHave a large project?Contact salesDeploy an app to a GKE clusterView quickstartClick to deploy Kubernetes applicationsGo to MarketplaceGet expert help evaluating and implementing GKEFind a partner \ No newline at end of file diff --git a/Google_Security_Operations.txt b/Google_Security_Operations.txt new file mode 100644 index 0000000000000000000000000000000000000000..f61ccd82f51b216528d5dc83a2a40a553733e6b7 --- /dev/null +++ b/Google_Security_Operations.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/security-operations +Date Scraped: 2025-02-23T12:09:31.904Z + +Content: +Google named a Leader in the IDC MarketScape: Worldwide SIEM for Enterprise 2024 Vendor Assessment. Get your complimentary excerpt.Google Security OperationsThe intelligence-driven and AI-powered security operations platformGoogle SecOps empowers security teams to better detect, investigate, and respond to threats.Contact usSee the valueProduct highlightsIngest and analyze your data at Google speed and scaleApply Google's threat intelligence to uncover and defend against the latest threatsElevate your team's talent and productivity with generative AIIntroducing Google Security OperationsFeaturesDetect more threats with less effortGoogle SecOps provides a rich and growing set of curated detections out of the box. These detections are developed and continuously maintained by our team of threat researchers.Leverage Gemini to search your data, iterate, and drill down using natural language and to create detections.Google SecOps also allows for custom detection authoring using the intuitive YARA-L language, in a fraction of the time (and the code).Reduce preparation and make your data actionable. Route, filter, redact, and transform your security telemetry with data pipeline management capabilities.Learn moreLearn how Google SecOps empowers security teams to confidently detect threats4:01Investigate with the right contextGoogle SecOps offers a streamlined and intuitive analyst experience that includes threat-centric case management, interactive, context-rich alert graphing, and automatic stitching together of entities.Investigate more efficiently with AI-generated summaries of what’s happening in cases, along with recommendations on how to respond.Google SecOps enables lightning-fast, flexible, and context-rich search capabilities to surface any additional data that is needed as part of an investigation.Learn moreExplore how Google SecOps streamlines investigations with real-time context6:36Respond with speed and precisionGoogle SecOps includes full-fledged security orchestration, automation, and response (SOAR) capabilities.
Build playbooks that automate common response actions, orchestrate over 300 tools (EDRs, identity management, network security, and more), and collaborate with other members of the team using an auto-documenting case wall.Interact with a context-aware AI-powered chat to easily create playbooks.Google SecOps makes it easy to track and measure the effectiveness of response efforts, such as analyst productivity and MTTR, and communicate that with stakeholders.Learn moreSee how Google SecOps enables security teams to respond with speed and precision4:38View all featuresHow It WorksGoogle Security Operations offers a unified experience across SIEM, SOAR, and threat intelligence to drive better detection, investigation, and response. Collect security telemetry data, apply threat intel to identify high-priority threats, and drive response with playbook automation, case management, and collaboration.See it in actionHow Google Security Operations worksCommon UsesSIEM migrationRethink your existing SecOps platformIdentify shortcomings in your current SIEM and move to selecting Google SecOps and executing a successful migration. Learn more about migrating to Google SecOpsSOC modernizationDrive SOC modernizationProtect your organization against modern-day threats by transforming your security operations.Learn more about ditching your SIEM dinosaurGoogle Cloud Cybershield™Defend against threats at national scaleTransform government security operations to provide cyber defense at national scale with tailored and applied threat intelligence, streamlined security operations, and capability excellence.Learn more about Google Cloud Cybershield™PricingAbout Google Security Operations pricingGoogle Security Operations is available in packages and based on ingestion.
Includes one year of security telemetry retention at no additional cost.Package typeFeatures includedPricingStandardBase SIEM and SOAR capabilitiesIncludes the core capabilities for data ingestion, threat detection, investigation and response with 12 months hot data retention, full access to our 700+ parsers and 300+ SOAR integrations and 1 environment with remote agent.The detection engine for this package supports up to 1,000 single-event and 75 multi-event rules.Threat intelligenceBring your own threat intelligence feeds.Contact sales for pricingEnterpriseIncludes everything in the Standard package plus:Base SIEM and SOAR capabilitiesExpanded support to unlimited environments with remote agent and a detection engine that supports up to 2,000 single-event and 125 multi-event rules.UEBAUse YARA-L to create rules for your own user and entity behavior analytics, plus get a risk dashboard and out of the box user and entity behavior-style detections.Threat intelligenceAdds curation of enriched open source intelligence that can be used for filtering, detections, investigation context and retro-hunts. Enriched open source intelligence includes Google Safe Browsing, remote access, Benign, and OSINT Threat Associations.Google curated detectionsAccess out-of-the-box detections maintained by Google experts, covering on-prem and cloud threats.Gemini in security operationsTake productivity to the next level with AI. Gemini in security operations provides natural language, an interactive investigation assistant, contextualized summaries, recommended response actions and detection and playbook creation.Contact sales for pricingEnterprise PlusIncludes everything in the Enterprise package plus:Base SIEM and SOAR capabilitiesExpanded detection engine supporting up to 3,500 single-event rules and 200 multi-event rules.Applied threat intelligenceFull access to Google Threat Intelligence (which includes Mandiant, VirusTotal, and Google threat intel) including intelligence gathered from active Mandiant incident response engagements.On top of the unique sources, Applied Threat Intelligence provides turnkey prioritization of IoC matches with ML-base prioritization that factors in each customer's unique environment. We will also go beyond IoCs to include TTPs in understanding how an adversary behaves and operates.Google curated detectionsAdditional access to emerging threat detections based on Mandiant's primary research and frontline threats seen in active incident response engagements.BigQuery UDM storageFree storage for BigQuery exports for Google SecOps data up to your retention period (12 months by default).Contact sales for pricingContact sales for full pricing detailsAbout Google Security Operations pricingGoogle Security Operations is available in packages and based on ingestion. 
Includes one year of security telemetry retention at no additional cost.StandardFeatures includedBase SIEM and SOAR capabilitiesIncludes the core capabilities for data ingestion, threat detection, investigation and response with 12 months hot data retention, full access to our 700+ parsers and 300+ SOAR integrations and 1 environment with remote agent.The detection engine for this package supports up to 1,000 single-event and 75 multi-event rules.Threat intelligenceBring your own threat intelligence feeds.PricingContact sales for pricingEnterpriseFeatures includedIncludes everything in the Standard package plus:Base SIEM and SOAR capabilitiesExpanded support to unlimited environments with remote agent and a detection engine that supports up to 2,000 single-event and 125 multi-event rules.UEBAUse YARA-L to create rules for your own user and entity behavior analytics, plus get a risk dashboard and out of the box user and entity behavior-style detections.Threat intelligenceAdds curation of enriched open source intelligence that can be used for filtering, detections, investigation context and retro-hunts. Enriched open source intelligence includes Google Safe Browsing, remote access, Benign, and OSINT Threat Associations.Google curated detectionsAccess out-of-the-box detections maintained by Google experts, covering on-prem and cloud threats.Gemini in security operationsTake productivity to the next level with AI. Gemini in security operations provides natural language, an interactive investigation assistant, contextualized summaries, recommended response actions and detection and playbook creation.PricingContact sales for pricingEnterprise PlusFeatures includedIncludes everything in the Enterprise package plus:Base SIEM and SOAR capabilitiesExpanded detection engine supporting up to 3,500 single-event rules and 200 multi-event rules.Applied threat intelligenceFull access to Google Threat Intelligence (which includes Mandiant, VirusTotal, and Google threat intel) including intelligence gathered from active Mandiant incident response engagements.On top of the unique sources, Applied Threat Intelligence provides turnkey prioritization of IoC matches with ML-base prioritization that factors in each customer's unique environment. 
We will also go beyond IoCs to include TTPs in understanding how an adversary behaves and operates.Google curated detectionsAdditional access to emerging threat detections based on Mandiant's primary research and frontline threats seen in active incident response engagements.BigQuery UDM storageFree storage for BigQuery exports for Google SecOps data up to your retention period (12 months by default).PricingContact sales for pricingContact sales for full pricing detailsGet a demoSee Google Security Operations in actionContact usThe Business Value of Google SecOpsExplore the business value customers derive from Google Security OperationsRead the IDC studyLearn what Google Security Operations can do for you"Our SOC and analysts are able to prioritize work and respond with the attention that is needed"See Charles Schwab's story"When you run a search, all the data just pops up from a contextual enrichment perspective"See why customers love Google SecOpsJoin the Google Cloud Security communityInteract with your peers, access best practices, documentation, and moreLearn the technical aspects of Google Security OperationsCheck out the Google Security Operations learning pathNew to Google Security Operations?Check out product documentation for assistanceBusiness CaseExplore how organizations like yours cut costs, increase ROI, and drive innovation with Google Security OperationsIDC Study: Customers cite 407% ROI with Google Security OperationsCISO, multi-billion dollar automotive company"Our cybersecurity teams deal with issues faster with Google Security Operations, but they also identify more issues. The real question is, 'how much safer do I feel as a CISO with Google Security Operations versus my old platform?' and I would say 100 times safer."Read the studyRelated contentIDC Report: Google named a Leader in the 2024 IDC MarketScape for SIEM Gartner Report: Google named a Visionary in the 2024 Gartner Magic Quadrant for SIEMWhite paper: The Great SIEM Migration: A Guide to Ditching DinosaursTrusted and loved by security teams around the world"With the traditional SIEM, it would typically take five to seven people with an environment our size. With Google Security Operations, we’re logging approximately 22 times the amount of data, we're seeing three times the events, and we're closing investigations in half the time." - Mike Orosz, CISO, VertivHear their story"Historically, our legacy SIEM, we had to feed it a lot of the contextual enrichment and all of that threat intelligence stuff. It was data engineering to make it sing, where on the Google side, the product is more baked in, purpose-built for us to use it. It’s so intuitive and the speed was certainly really beneficial for us as well."- Mark Ruiz, Head of Cybersecurity Analytics, PfizerHear their story"When we moved to Google Security Operations, we were able to reduce the time to detect and time to investigate from 2 hours to about 15 to 30 minutes. No longer spending time in disparate tools but performing the job functions of a security operations analyst, it empowers them to work on more advanced workflows." - Hector Peña, Senior Director of Information Security, Apex FinTech SolutionsHear their storyFAQExpand allIs Google Security Operations only relevant for Google Cloud?No. Google SecOps ingests and analyzes security telemetry from across your environment, including on-premises and all major cloud providers, to help you detect, investigate and respond to cyberthreats across your organization. 
Check out the complete list of supported log types and parsers.Can I bring my own threat intelligence feeds to Google Security Operations?Yes. You can integrate any threat intelligence feeds with Google SecOps. Note that the automatic application of threat intelligence for threat detection is only supported for Google’s threat intelligence feeds. Does Google Security Operations support data residency for specific regions?Yes. The full list of available regions can be found here.Does Google SecOps include AI?Yes. We leverage AI to supercharge productivity including: the ability to use natural language to search your data, iterate, and drill down. Gemini generates underlying queries and presents full mapped syntax; the ability to investigate more efficiently with AI-generated summaries of what’s happening in cases, along with recommendations on how to respond; and the ability to interact with Google SecOps using a context-aware AI-powered chat, including the ability to create detections and playbooks.Does Google SecOps include SIEM capabilities?Yes. Google SecOps includes SIEM (Security Information and Event Management), SOAR (Security Orchestration, Automation, and Response), and applied threat intelligence capabilities. Let's work togetherContact usJoin the Google Cloud Security CommunityGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Google_Threat_Intelligence.txt b/Google_Threat_Intelligence.txt new file mode 100644 index 0000000000000000000000000000000000000000..70ac8dbc373cc30e2bf7b05f0221d67200350c47 --- /dev/null +++ b/Google_Threat_Intelligence.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/threat-intelligence +Date Scraped: 2025-02-23T12:09:08.379Z + +Content: +See where Google was ranked in the Forrester Wave™: External Threat Intelligence Service Providers, Q3 2023.Google Threat IntelligenceActionable threat intelligence at Google scaleGet comprehensive visibility and context on the threats that matter most to your organization.Find out who's targeting youFrontline intelligence from MandiantCrowdsource intelligence from VirusTotalProduct highlightsKnow who's targeting you with unmatched visibilityKnow and track the biggest threats to your org everydayMake Google part of your security team with deep expertiseWhy Google Threat Intelligence?FeaturesKnow who’s targeting you with unmatched visibilityGoogle Threat Intelligence provides unmatched visibility into threats enabling us to deliver detailed and timely threat intelligence to security teams around the world. By defending billions of users, seeing millions of phishing attacks, and spending hundreds of thousands of hours investigating incidents we have the visibility to see across the threat landscape to keep the most important organizations protected, yours.Turn insights into actionFocus on the most relevant threats to your organization by understanding the threat actors and their ever changing tactics, techniques, and procedures (TTPs). Leverage these insights to proactively set your defenses, hunt efficiently, and respond to new and novel threats in minutes. Make Google part of your security teamGrow your team’s capabilities with Mandiant’s industry leading threat analysts. Our threat intel experts are on your side and here to help. 
Whether you're looking for cyber threat intel (CTI) training for your team, needing a deeper understanding of threats you should prioritize and action, or needing a CTI expert to sit with your team, we have the expertise to help you maximize threat intel.Get help directly within the console from a Mandiant expert to address threats faster and move to your next task.Supercharge your team with Gemini Optimize your work flows with the help of AI. Gemini in Threat Intelligence analyzes vast datasets and acts as a force multiplier, immediately surfacing threats most relevant to your unique risk profile, reducing the noise of generic alerts. It continuously learns from your actions, tailoring its output to become increasingly relevant to your specific needs over time.Simplify workflows and collaboration with a workbenchTake command of your threat analysis. Our threat intelligence workbench puts everything you need in one place: a vast malware database, powerful tools, insightful context, and effortless collaboration. Customize workflows with graphs, hunting results, rule sharing, and collections to maximize efficiency.Trust a unified verdictWhen it comes to security and the threats you are facing, you need to have confidence in the threat intelligence you use. Google Threat Intelligence pulls together inputs from Google’s vast threat insights, Mandiant’s frontline and human curated threat intelligence, and VirusTotal’s massive threat database to deliver a unified verdict. This provides you with a single answer on whether an indicator or suspicious object is something you should prioritize as a threat to your organization.View all featuresHow It WorksGet ahead of the latest threats and respond in minutes, not weeks with Google Threat Intelligence. Leverage our broad visibility to blanket your event data with threat intel associations to reduce gaps. Fill in your blind spots maximizing containment and minimizing the potential impact of a breach.Watch demoCommon UsesAutomate IOC enrichment and alert prioritizationEnrich and prioritize SIEM alerts Google Threat Intelligence helps efficiently manage the overwhelming volume of alerts. By providing a unified score that aggregates hundreds of technical details, Google Threat Intelligence simplifies alert prioritization. It offers curated threat intel details from Mandiant experts, community intel, and associated IOC info, allowing you to connect alerts and identify priority threats more effectively. Learn more about IOC enrichment and prioritizationLearning resourcesEnrich and prioritize SIEM alerts Google Threat Intelligence helps efficiently manage the overwhelming volume of alerts. By providing a unified score that aggregates hundreds of technical details, Google Threat Intelligence simplifies alert prioritization. It offers curated threat intel details from Mandiant experts, community intel, and associated IOC info, allowing you to connect alerts and identify priority threats more effectively. Learn more about IOC enrichment and prioritizationRespond to incidents with confidenceEmpower Incident response (IR) and Forensic capabilitiesGoogle Threat Intelligence empowers IR and forensic investigators with comprehensive and actionable insights for efficient threat analysis. 
With outstanding technical pivoting capabilities, curated and crowdsourced threat intelligence, and interactive graph visualizations, teams can quickly assess incident severity and identify additional indicators of compromise, context, and attribution.Learn how you can respond fasterLearning resourcesEmpower Incident response (IR) and Forensic capabilitiesGoogle Threat Intelligence empowers IR and forensic investigators with comprehensive and actionable insights for efficient threat analysis. With outstanding technical pivoting capabilities, curated and crowdsourced threat intelligence, and interactive graph visualizations, teams can quickly assess incident severity and identify additional indicators of compromise, context, and attribution.Learn how you can respond fasterThreat intelligence and advanced huntingEfficiently hunt for threatsGoogle Threat Intelligence elevates the value of threat hunting by providing tailored risk profiles, including actors, campaigns, and malware families, to enable proactive threat tracking and mitigation. Detailed malicious activity reports and TTP analysis help refine detection and prevention strategies, while crowdsourced detection rules and YARA hunting capabilities uncover threats and malicious activity. Learn advanced hunting methodsLearning resourcesEfficiently hunt for threatsGoogle Threat Intelligence elevates the value of threat hunting by providing tailored risk profiles, including actors, campaigns, and malware families, to enable proactive threat tracking and mitigation. Detailed malicious activity reports and TTP analysis help refine detection and prevention strategies, while crowdsourced detection rules and YARA hunting capabilities uncover threats and malicious activity. Learn advanced hunting methodsUncover external threatsStay ahead of the threatsProactively detect potential external threats by monitoring exposed data, your attack surface, and brand impersonation. Receive early warnings of potential breaches by identifying compromised credentials, websites, and phishing attacks abusing your brands. Monitor malware or malicious abuse of your infrastructure, assets, or image. Get notifications if your assets are found in a malware configuration.Learn how to uncover external threatsLearning resourcesStay ahead of the threatsProactively detect potential external threats by monitoring exposed data, your attack surface, and brand impersonation. Receive early warnings of potential breaches by identifying compromised credentials, websites, and phishing attacks abusing your brands. Monitor malware or malicious abuse of your infrastructure, assets, or image. Get notifications if your assets are found in a malware configuration.Learn how to uncover external threatsOptimized vulnerability managementPut our resources where they are most neededChange your approach to vulnerability management by combining asset exposure detection, vulnerability intelligence, and early threat detection. Proactively identify and prioritize vulnerabilities based on real-world exploitation data, including associated campaigns, and threat actors. This approach enables efficient allocation of resources, to prioritize the most critical vulnerabilities.Learn intelligence-led prioritizationLearning resourcesPut our resources where they are most neededChange your approach to vulnerability management by combining asset exposure detection, vulnerability intelligence, and early threat detection. 
Proactively identify and prioritize vulnerabilities based on real-world exploitation data, including associated campaigns, and threat actors. This approach enables efficient allocation of resources, to prioritize the most critical vulnerabilities.Learn intelligence-led prioritizationAI-driven threat intelligence Know and track the biggest threats to your org everydayQuickly get a grasp of your threat landscape and what has changed. In a single dashboard, see an up-to-date view of who’s targeting you, active campaigns, malware, and relevant vulnerabilities. Receive daily or weekly notifications on changes to your threat landscape to prepare the organization and stay ahead of the threats.Learning resourcesKnow and track the biggest threats to your org everydayQuickly get a grasp of your threat landscape and what has changed. In a single dashboard, see an up-to-date view of who’s targeting you, active campaigns, malware, and relevant vulnerabilities. Receive daily or weekly notifications on changes to your threat landscape to prepare the organization and stay ahead of the threats.Understand threat intel faster with AI generated summariesLeverage the power of Gemini in Threat Intelligence Save time and reduce complexity when researching threats or geopolitical topics. Leverage Gemini in Threat Intelligence, an always-on AI collaborator that provides generative AI-powered assistance to help you distill Mandiant’s industry-leading corpus of threat intel information into easy to comprehend, natural language summaries, allowing you to quickly understand how adversaries may be targeting your organization and impacting the threat landscape.Learning resourcesLeverage the power of Gemini in Threat Intelligence Save time and reduce complexity when researching threats or geopolitical topics. Leverage Gemini in Threat Intelligence, an always-on AI collaborator that provides generative AI-powered assistance to help you distill Mandiant’s industry-leading corpus of threat intel information into easy to comprehend, natural language summaries, allowing you to quickly understand how adversaries may be targeting your organization and impacting the threat landscape.Get visibility into the threat actor’s playbookKnow how the attack will happen before it startsSet a proactive security strategy by mapping the TTPs used against organizations just like yours. By mapping the TTPs with the MITRE ATT&CK framework you will be able to prioritize tasks, make adjustments to security settings, and make security investments with more confidence.Learning resourcesKnow how the attack will happen before it startsSet a proactive security strategy by mapping the TTPs used against organizations just like yours. By mapping the TTPs with the MITRE ATT&CK framework you will be able to prioritize tasks, make adjustments to security settings, and make security investments with more confidence.Anticipate, identify and respond to threats with confidenceVisibility into active threat campaignsThreat intelligence can be helpful to proactively set your security strategy. When you need to know if there are any active threat campaigns targeting your region, industry, or vulnerabilities, Google Threat Intelligence can provide actionable insight into these campaigns. 
With this knowledge you can adjust your strategy quickly, driving better prioritization and mitigation of current and future threats.Learn more about threat campaignsLearning resourcesVisibility into active threat campaignsThreat intelligence can be helpful to proactively set your security strategy. When you need to know if there are any active threat campaigns targeting your region, industry, or vulnerabilities, Google Threat Intelligence can provide actionable insight into these campaigns. With this knowledge you can adjust your strategy quickly, driving better prioritization and mitigation of current and future threats.Learn more about threat campaignsPricingHow Google Threat Intelligence pricing worksSubscriptions are priced on a flat annual rate with a set number of API calls per subscription level. More API call packs can be added separately. Product/subscriptionDescriptionPricingGoogle Threat Intelligence - StandardFor organisations looking for threat intelligence driven event triage and detections to improve their security posture.Contact sales for pricingGoogle Threat Intelligence - EnterpriseFor organizations who want to use threat intelligence to be more proactive, know more about threat actors targeting them and conduct efficient hunting exercises.Contact sales for pricingGoogle Threat Intelligence - Enterprise+For organizations with a strong cyber threat intelligence teams who see threat intelligence as a critical tool to understand and stay ahead of their adversaries. Contact sales for pricingHow Google Threat Intelligence pricing worksSubscriptions are priced on a flat annual rate with a set number of API calls per subscription level. More API call packs can be added separately. Google Threat Intelligence - StandardDescriptionFor organisations looking for threat intelligence driven event triage and detections to improve their security posture.PricingContact sales for pricingGoogle Threat Intelligence - EnterpriseDescriptionFor organizations who want to use threat intelligence to be more proactive, know more about threat actors targeting them and conduct efficient hunting exercises.PricingContact sales for pricingGoogle Threat Intelligence - Enterprise+DescriptionFor organizations with a strong cyber threat intelligence teams who see threat intelligence as a critical tool to understand and stay ahead of their adversaries. PricingContact sales for pricingGet a demoSee what you can learn from Google Threat Intelligence.Contact usContact usContact us today for more information on Google Threat Intelligence.Contact usLearn more about threat intelJoin the Google Threat Intel communityJoin nowGet a tour of Google Threat IntelligenceWatch the videoGoogle Threat Intelligence overviewRead nowWhat's your security maturity level?Intelligence Capability Discovery Threat intelligence newsletterSign up nowFAQExpand allWhat is cyber threat intelligence (CTI)?CTI is a refined insight into cyber threats. Intelligence teams use credible insights from multiple sources to create actionable context on the threat landscape, threat actors and their tactics, techniques, and procedures (TTPs). The effective use of CTI allows organizations to make the shift from being reactive to becoming more proactive against threat actors.How can CTI be used to make security more proactive?Credible threat intelligence can be used to understand the malware and TTPs threat actors use and the vulnerabilities they exploit to target specific industries and regions. 
Organizations use this intelligence to implement, configure, and adjust security tools, and train staff to thwart attacks.What is a threat actor?A threat actor is a person or group of people who conduct malicious targeting or attacks on others. Typically motivated by espionage, financial gain, or publicity, threat actors may conduct a full campaign alone or work with other groups who specialize in specific aspects of an attack.How can you identify active threats?Assuming we all agree that a “threat” is defined as a plan or inclination to attack as opposed to an “attack” which is an existing or previously successful breach. Identifying active threats can be done using threat intelligence which will help provide context into the threat actors and malware impacting your specific region or industry. Another method to identify active threats is by scanning the open, deep, and dark web for chatter around your organization, personnel, technology, or partners. By identifying these threats, security professionals can proactively adjust their defenses to block or reduce the impact of a potential attack.What are the three types of CTI?Strategic – High level trends used to drive business decisions and security investments.Operational – Contextual information on impending threats to the organization, used by security professionals to understand more about threat actors and their TTPs.Tactical – Understanding of the threat actor TTPs, used by security professionals to stop incidents and make defensive adjustments.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Government.txt b/Government.txt new file mode 100644 index 0000000000000000000000000000000000000000..18b9439a22ac96dab374fc0362c4ed3f53f118a6 --- /dev/null +++ b/Government.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/gov +Date Scraped: 2025-02-23T11:58:02.859Z + +Content: +Explore Google Public Sector CEO Karen Dahut’s reflections on AI transformation in 2024.Google for GovernmentGoogle helps government agencies meet their mission with the power of AI, infrastructure, security, and collaboration solutions.See our offerings for state and local, civilian, and defense.Talk to an expertExplore Generative AI in Public SectorLearn how to use Google's Generative AI to accelerate your missionGet startedSolutionsGovernment solutionsMeet the mission with the help of artificial intelligence and machine learningGenerative AI for organizationsImprove constituent experience, productivity, and learning with the power of Gen AI.Take the free introductory courseVirtual agents for 24/7 supportExpand access with human-like AI-powered contact center experiences, lower costs, and free up your human agents' time.Blog post: State of HawaiiAnalytics for mission ownersBigQuery is a completely serverless and cost-effective enterprise data warehouse that works across clouds.Blog post: how agencies use analytics to drive the missionTranslation at scaleWith Translation Hub, easily translate content into 135 languages with an intuitive, business-user-friendly interface, and integrate human feedback where required.Document AI for governmentAccelerate time to delivery of critical services by decreasing the manual labor needed to process documents.Learn more: Document intake acceleratorTransportation solutionsImprove operations, maximize the efficiency of your resources, and meet constituent needs with the power of analytics and AI.Case study: Chicago Department of TransportationClimate and sustainabilityMonitor and manage the 
impacts of climate change with the help of Google Cloud.Case study: Battelle NEONModernize infrastructure and improve operationsGoogle Distributed Cloud HostedGDC Hosted is an air-gapped private cloud that enables public sector organizations to address strict data residency and security requirements.GDCH now generally availableHybrid cloud managementAnthos is the leading cloud-centric container platform to run modern apps anywhere consistently at scale.Apigee API managementBuild, manage, and secure APIs—for any use case, environment, or scale. Operate your APIs with enhanced scale, security, and automation.Constituent platformsDeliver personalized customer experiences with smarter digital intake, responsive service, and modern application experiences powered by Google Cloud.Data center migrationExit or reduce on-premises data centers, migrate workloads as is, modernize apps, or leave another cloud. We’ll craft the right cloud migration solutions.Research computingAccelerate research breakthroughs and scientific collaboration with Google's high Performance computing. Use RAD Lab for rapid application development and testing.Google WorkspaceUtilize the best of Google’s collaboration and communication tools in offerings—and with pricing—that meets your needs.Blog post: U.S. Navy Strengthen cyber defenses, safeguard critical data, and gain complianceFrontline intelligence and Mandiant expertiseUnderstand threat actors to mitigate risk and minimize the impact of a breach.Visit our Threat Intelligence blog for the latest insights and storiesModern SecOps for GovernmentBoost defenses with a modern approach to threat detection, investigation, and response.Blog post: Improving situational threat awareness with Google CyberShieldSecure your cloud transformationEmbrace new approaches to cloud security; provide security access to systems, data and resources; and protect business critical apps from fraud and web attacks.Zero Trust and more with Google CloudProvides a secure-by-design foundation: a shared fate model for risk management supported by products, services, frameworks, best practices, and controls.Supercharge your security with generative AIGemini in Security Operations is now generally available—a modern, cloud-native SecOps platform that empowers security teams to better defend against threats.To securely build AI on Google Cloud, follow these best practicesSecure data warehouseLearn best practices to create a secure data warehouse with our blueprint that helps you protect your data.Continuity of operationsGoogle Workspace can help government organizations with business continuity needs.Continue exploring our government solutionsLearn more about our offerings for the Federal Government.Visit the websiteLearn more about our offerings for State and Local Governments.Visit the websiteCustomersCustomer storiesBlog postU.S. 
Army chooses Google Workspace to deliver cutting-edge collaboration3-minute readCase studyNYC Cyber Command: Keeping digital city services secure6-minute readCase studyLos Angeles: Using Maps to share visual info with citizens6-minute readVideoThe State of Hawaii improves economic development and climate resilience with the help of GoogleVideo (2:01)Case studyState of Arizona: Productivity and security via the cloud6-minute readCase studyNational Institute on Aging: Speeding the Parkinson’s fight5-min readSee all customersPartnersPublic sector badge expertise partnersOur trusted industry partners can help your organization navigate every stage of digital transformation to achieve your mission and better serve your constituents. Expand allAI and MLSecurityCustomer engagement Work transformationMaps and geospatialData analyticsSee all partnersSecuritySecurity and complianceGoogle Cloud’s defense-in-depth security protects your data at global scale. We can help you meet your technical needs and detect and manage against security risks before they become threats.Our certifications, technical capabilities, guidance documents, and legal commitments support your compliance requirements globally.In the United States, Google Cloud has Department of Defense Impact Level 5 (IL5) provisional authorization (PA).Learn moreISO/IEC 27001ISO/IEC 27017ISO/IEC 27018FedRAMPHIPAANIST 800-53What's newNews and eventsEventAnnouncing the Google Public Sector Summit On DemandRegister nowNewsForbes selects Google on Best Employers for Veterans rankingsRead reportBlog postHelping businesses with generative AIRead the blogBlog postDearborn transforms its digital services, becoming a model for citiesRead the blogVideoThe State of Hawaii is supporting economic and climate resilience with GoogleWatch videoBlog postModern workforce development solutions for the evolving worldRead the blogTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all solutionsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Guidelines_for_high-quality,_predictive_ML_solutions.txt b/Guidelines_for_high-quality,_predictive_ML_solutions.txt new file mode 100644 index 0000000000000000000000000000000000000000..56db5c5298fe72d2d6212868d6cb57445f53befa --- /dev/null +++ b/Guidelines_for_high-quality,_predictive_ML_solutions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/guidelines-for-developing-high-quality-ml-solutions +Date Scraped: 2025-02-23T11:46:24.908Z + +Content: +Home Docs Cloud Architecture Center Send feedback Guidelines for developing high-quality, predictive ML solutions Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-08 UTC This document collates some guidelines to help you assess, ensure, and control quality in building predictive machine learning (ML) solutions. It provides suggestions for every step of the process, from developing your ML models to deploying your training systems and serving systems to production. The document extends the information that's discussed in Practitioners Guide to MLOps by highlighting and distilling the quality aspects in each process of the MLOps lifecycle. This document is intended for anyone who is involved in building, deploying, and operating ML solutions. The document assumes that you're familiar with MLOps in general. 
It does not assume that you have knowledge of any specific ML platform. Overview of machine learning solution quality In software engineering, many standards, processes, tools, and practices have been developed to ensure software quality. The goal is to make sure that the software works as intended in production, and that it meets both functional and non-functional requirements. These practices cover topics like software testing, software verification and validation, and software logging and monitoring. In DevOps, these practices are typically integrated and automated in CI/CD processes. MLOps is a set of standardized processes and capabilities for building, deploying, and operating ML systems rapidly and reliably. As with other software solutions, ML software solutions require you to integrate these software quality practices and apply them throughout the MLOps lifecycle. By applying these practices, you help make sure the trustworthiness and predictability of your models, and that the models conform to your requirements. However, the tasks of building, deploying, and operating ML systems present additional challenges that require certain quality practices that might not be relevant to other software systems. In addition to the characteristics of most of the other software systems, ML systems have the following characteristics: Data-dependent systems. The quality of the trained models and of their predictions depends on the validity of the data that's used for training and that's submitted for prediction requests. Any software system depends on valid data, but ML systems deduce the logic for decision-making from the data automatically, so they are particularly dependent on the quality of the data. Dual training-serving systems. ML workloads typically consist of two distinct but related production systems: the training system and the serving system. A continuous training pipeline produces newly trained models that are then deployed for prediction serving. Each system requires a different set of quality practices that balance effectiveness and efficiency in order to produce and maintain a performant model in production. In addition, inconsistencies between these two systems result in errors and poor predictive performance. Prone to staleness. Models often degrade after they're deployed in production because the models fail to adapt to changes in the environment that they represent, such as seasonal changes in purchase behavior. The models can also fail to adapt to changes in data, such as new products and locations. Thus, keeping track of the effectiveness of the model in production is an additional challenge for ML systems. Automated decision-making systems. Unlike other software systems, where actions are carefully hand-coded for a set of requirements and business rules, ML models learn rules from data to make a decision. Implicit bias in the data can lead models to produce unfair outcomes. When a deployed ML model produces bad predictions, the poor ML quality can be the result of a wide range of problems. Some of these problems can arise from the typical bugs that are in any program. But ML-specific problems can also include data skews and anomalies, along with the absence of proper model evaluation and validation procedures as a part of the training process. Another potential issue is inconsistent data format between the model's built-in interface and the serving API. 
In addition, model performance degrades over time even without these problems, and it can fail silently if it's not properly monitored. Therefore, you should include different kinds of testing and monitoring for ML models and systems during development, during deployment, and in production. Quality guidelines for model development When you develop an ML model during the experimentation phase, you have the following two sets of target metrics that you can use to assess the model's performance: The model's optimizing metrics. This metric reflects the model's predictive effectiveness. The metric includes accuracy and f-measure in classification tasks, mean absolute percentage error in regression and forecasting tasks, discounted cumulative gain in ranking tasks, and perplexity and BLEU scores in language models. The better the value of this metric, the better the model is for a given task. In some use cases, to ensure fairness, it's important to achieve similar predictive effectiveness on different slices of the data—for example, on different customer demographics. The model's satisficing metrics. This metric reflects an operational constraint that the model needs to satisfy, such as prediction latency. You set a latency threshold to a particular value, such as 200 milliseconds. Any model that doesn't meet the threshold is not accepted. Another example of a satisficing metric is the size of the model, which is important when you want to deploy your model to low-powered hardware like mobile and embedded devices. During experimentation, you develop, train, evaluate, and debug your model to improve its effectiveness with respect to the optimizing metrics, without violating the satisficing metric thresholds. Guidelines for experimentation Have predefined and fixed thresholds for optimizing metrics and for satisficing metrics. Implement a streamlined evaluation routine that takes a model and data and produces a set of evaluation metrics. Implement the routine so it works regardless of the type of the model (for example, decision trees or neural networks) or the model's framework (for example, TensorFlow or Scikit-learn). Make sure that you have a baseline model to compare with. This baseline can consist of hardcoded heuristics or it can be a simple model that predicts the mean or the mode target value. Use the baseline model to check the performance of the ML model. If the ML model isn't better than the baseline model, there is a fundamental problem in the ML model. Track every experiment that has been done to help you with reproducibility and incremental improvement. For each experiment, store hyperparameter values, feature selection, and random seeds. Guidelines for data quality Address any imbalanced classes early in your experiments by choosing the right evaluation metric. In addition, apply techniques like upweighting minority class instances or downsampling majority class instances. Make sure that you understand the data source at hand, and perform the relevant data preprocessing and feature engineering to prepare the training dataset. This type of process needs to be repeatable and automatable. Make sure that you have a separate testing data split (holdout) for the final evaluation of the model. The test split should not be seen during training, and don't use it for hyperparameter tuning. Make sure that training, validation, and test splits are equally representative of your input data. Sampling such a test split depends on the nature of the data and of the ML task at hand. 
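As a concrete illustration of the experimentation guidelines above (a streamlined, framework-agnostic evaluation routine and a simple baseline to compare against), here is a minimal Python sketch; the metric choices and the scikit-learn helpers are assumptions for the example, and any model object that exposes a predict method would work.

# Minimal sketch: one evaluation routine for any classifier with predict(),
# plus a most-frequent-class baseline used as a sanity check.
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, f1_score

def evaluate(model, X, y) -> dict:
    """Compute a fixed set of evaluation metrics, regardless of model type."""
    predictions = model.predict(X)
    return {
        "accuracy": accuracy_score(y, predictions),
        "f1_macro": f1_score(y, predictions, average="macro"),
    }

def beats_baseline(model, X_train, y_train, X_test, y_test) -> bool:
    """A model that cannot beat a trivial baseline has a fundamental problem."""
    baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
    return (evaluate(model, X_test, y_test)["f1_macro"]
            > evaluate(baseline, X_test, y_test)["f1_macro"])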
For example, stratified splitting is relevant to classification tasks, while chronological splitting is relevant to time-series tasks. Make sure that the validation and test splits are preprocessed separately from the training data split. If the splits are preprocessed in a mixture, it leads to data leakage. For example, when you use statistics to transform data for normalization or for bucketizing numerical features, compute the statistics from the training data and apply them to normalize the validation and test splits. Generate a dataset schema that includes the data types and some statistical properties of the features. You can use this schema to find anomalous or invalid data during experimentation and training. Make sure that your training data is properly shuffled in batches but that it also still meets the model training requirements. For example, this task can apply to positive and negative instance distributions. Have a separate validation dataset for hyperparameter tuning and model selection. You can also use the validation dataset to perform early stopping. Otherwise, you can let the model train for the entirety of the given set of maximum iterations. However, only save a new snapshot of the model if its performance on the validation dataset improves relative to the previous snapshot. Guidelines for model quality Make sure that your models don't have any fundamental problems that prevent them from learning any relationship between the inputs and the outputs. You can achieve this goal by training the model with very few examples. If the model doesn't achieve high accuracy for these examples, there might be a bug in your model implementation or training routine. When you're training neural networks, monitor for NaN values in your loss and for the percentage of weights that have zero values throughout your model training. These NaN or zero values can be indications of erroneous arithmetic calculations, or of vanishing or exploding gradients. Visualizing changes in weight-values distribution over time can help you detect the internal covariate shifts that slow down the training. You can apply batch normalization to alleviate this reduction in speed. Compare your model performance on the training data and on the test data to understand if your model is overfitting or underfitting. If you see either of these issues, perform the relevant improvements. For example, if there is underfitting, you might increase the model's learning capacity. If there was overfitting, you might apply regularization. Analyze misclassified instances, especially the instances that have high prediction confidence and the most-confused classes in the multi-class confusion matrix. These errors can be an indication of mislabeled training examples. The errors can also identify an opportunity for data preprocessing, such as removing outliers, or for creating new features to help discriminate between such classes. Analyze feature importance scores and clean up features that don't add enough improvement to the model's quality. Parsimonious models are preferred over complex ones. Quality guidelines for training pipeline deployment As you implement your model and model training pipeline, you need to create a set of tests in a CI/CD routine. These tests run automatically as you push new code changes, or they run before you deploy your training pipeline to the target environment. Guidelines Unit-test the feature engineering functionality. Unit-test the encoding of the inputs to the model. 
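To sketch the first two checks in the list above (unit-testing the feature engineering and the input encoding), the following pytest-style example uses hypothetical bucketize_age and encode_payment_type helpers as stand-ins for your own preprocessing code.

# Hypothetical pytest unit tests for feature engineering and input encoding.
# bucketize_age and encode_payment_type stand in for your own functions.
def bucketize_age(age: int) -> str:
    return "minor" if age < 18 else "adult" if age < 65 else "senior"

def encode_payment_type(value: str) -> int:
    vocabulary = {"cash": 0, "credit_card": 1, "other": 2}
    return vocabulary.get(value, vocabulary["other"])

def test_bucketize_age_boundaries():
    assert bucketize_age(17) == "minor"
    assert bucketize_age(18) == "adult"
    assert bucketize_age(65) == "senior"

def test_encoding_maps_unknown_categories_to_other():
    assert encode_payment_type("cash") == 0
    assert encode_payment_type("never_seen_before") == 2

Run tests like these with pytest in the CI/CD routine described above so they execute on every code change.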
Unit-test user-implemented (custom) modules of the models independently—for example, unit-test custom graph convolution and pooling layers, or custom attention layers. Unit-test any custom loss or evaluation functions. Unit-test the output types and shapes of your model against expected inputs. Unit-test that the fit function of the model works without any errors on a couple of small batches of data. The tests should make sure that the loss decreases and that the execution time of the training step is as expected. You make these checks because changes in model code can introduce bugs that slow down the training process. Unit-test the model's save and load functionality. Unit-test the exported model-serving interfaces against raw inputs and against expected outputs. Test the components of the pipeline steps with mock inputs and with output artifacts. Deploy the pipeline to a test environment and perform integration testing of the end-to-end pipeline. For this process, use some testing data to make sure that the workflow executes properly throughout and that it produces the expected artifacts. Use shadow deployment when you deploy a new version of the training pipeline to the production environment. A shadow deployment helps you make sure that the newly deployed pipeline version is executed on live data in parallel to the previous pipeline version. Quality guidelines for continuous training The continuous training process is about orchestrating and automating the execution of training pipelines. Typical training workflows include steps like data ingestion and splitting, data transformation, model training, model evaluation, and model registration. Some training pipelines consist of more complex workflows. Additional tasks can include performing self-supervised model training that uses unlabeled data, or building an approximate nearest neighbor index for embeddings. The main input of any training pipeline is new training data, and the main output is a new candidate model to deploy in production. The training pipeline runs in production automatically, based on a schedule (for example, daily or weekly) or based on a trigger (for example, when new labeled data is available). Therefore, you need to add quality-control steps to the training workflow, specifically data-validation steps and model-validation steps. These steps validate the inputs and the outputs of the pipelines. You add the data-validation step after the data-ingestion step in the training workflow. The data-validation step profiles the new input training data that's ingested into the pipeline. During profiling, the pipeline uses a predefined data schema, which was created during the ML development process, to detect anomalies. Depending on the use case, you can ignore or just remove some invalid records from the dataset. However, other issues in the newly ingested data might halt the execution of the training pipeline, so you must identify and address those issues. Guidelines for data validation Verify that the features of the extracted training data are complete and that they match the expected schema—that is, there are no missing features and no added ones. Also verify that features match the projected volumes. Validate the data types and the shapes of the features in the dataset that are ingested into the training pipeline. Verify that the formats of particular features (for example, dates, times, URLs, postcodes, and IP addresses) match the expected regular expressions. Also verify that features fall within valid ranges. 
Validate the maximum fraction of the missing values for each feature. A large fraction of missing values in a particular feature can affect the model training. Missing values usually indicate an unreliable feature source. Validate the domains of the input features. For example, check if there are changes in a vocabulary of categorical features or changes in the range of numerical features, and adjust data preprocessing accordingly. As another example, ranges for numerical features might change if an update in the upstream system that populates the features uses different units of measure. For example, the upstream system might change currency from dollars to yen, or it might change distances from kilometers to meters. Verify that the distributions of each feature match your expectations. For example, you might test that the most common value of a feature for payment type is cash and that this payment type accounts for 50% of all values. However, this test can fail if there's a change in the most common payment type to credit_card. An external change like this might require changes in your model. You add a model validation step before the model registration step to make sure that only models that pass the validation criteria are registered for production deployment. Guidelines for model validation For the final model evaluation, use a separate test split that hasn't been used for model training or for hyperparameter tuning. Score the candidate model against the test data split, compute the relevant evaluation metrics, and verify that the candidate model surpasses predefined quality thresholds. Make sure that the test data split is representative of the data as a whole to account for varying data patterns. For time-series data, make sure that the test split contains more recent data than the training split. Test model quality on important data slices like users by country or movies by genre. By testing on sliced data, you avoid a problem where fine-grained performance issues are masked by a global summary metric. Evaluate the current (champion) model against the test data split, and compare it to the candidate (challenger) model that the training pipeline produces. Validate the model against fairness indicators to detect implicit bias—for example, implicit bias might be induced by insufficient diversity in the training data. Fairness indicators can reveal root-cause issues that you must address before you deploy the model to production. During continuous training, you can validate the model against both optimizing metrics and satisficing metrics. Alternatively, you might validate the model only against the optimizing metrics and defer validating against the satisficing metric until the model deployment phase. If you plan to deploy variations of the same model to different serving environments or workloads, it can be more suitable to defer validation against the satisficing metric. Different serving environments or workloads (such as cloud environments versus on-device environments, or real-time environments versus batch serving environments) might require different satisficing metric thresholds. If you're deploying to multiple environments, your continuous training pipeline might train two or more models, where each model is optimized for its target deployment environment. For more information and an example, see Dual deployments on Vertex AI. 
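To make the model-validation gate concrete, the following Python sketch shows one possible challenger-versus-champion check that applies an absolute quality threshold and per-slice checks on a held-out test split. The helper names, thresholds, and synthetic data are illustrative assumptions, not part of any Google Cloud product or of the pipeline tooling referenced in this document.

```python
# Minimal model-validation gate: a candidate (challenger) model is accepted only
# if it meets an absolute quality threshold, beats the current champion on the
# held-out test split, and holds up on each data slice. Thresholds, helper names,
# and the synthetic data are illustrative, not from any specific pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

MIN_F1 = 0.80        # predefined threshold for the optimizing metric
MIN_SLICE_F1 = 0.70  # per-slice threshold to catch regressions hidden by the global metric

def validate(champion, challenger, X_test, y_test, slice_ids):
    """Return True if the challenger may replace the champion."""
    champion_f1 = f1_score(y_test, champion.predict(X_test))
    challenger_f1 = f1_score(y_test, challenger.predict(X_test))
    if challenger_f1 < MIN_F1 or challenger_f1 <= champion_f1:
        return False
    # Check important slices (for example, users by country) separately so that
    # fine-grained performance issues are not masked by the global summary metric.
    for s in np.unique(slice_ids):
        mask = slice_ids == s
        if f1_score(y_test[mask], challenger.predict(X_test[mask])) < MIN_SLICE_F1:
            return False
    return True

# Synthetic stand-in for the held-out test split and an arbitrary slice column.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
slice_ids = np.random.RandomState(0).randint(0, 3, size=len(y_test))

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)
challenger = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Register challenger:", validate(champion, challenger, X_test, y_test, slice_ids))
```

In a real pipeline, a check like this would typically gate the model-registration step, and its verdict would be recorded as part of the pipeline's metadata.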
As you put more continuous-training pipelines with complex workflows into production, you must track the metadata and the artifacts that the pipeline runs produce. Tracking this information helps you trace and debug any issue that might arise in production. Tracking the information also helps you reproduce the outputs of the pipelines so that you can improve their implementation in subsequent ML development iterations. Guidelines for tracking ML metadata and artifacts Track lineage of the source code, deployed pipelines, components of the pipelines, pipeline runs, the dataset in use, and the produced artifacts. Track the hyperparameters and the configurations of the pipeline runs. Track key inputs and output artifacts of the pipeline steps, like dataset statistics, dataset anomalies (if any), transformed data and schemas, model checkpoints, and model evaluation results. Track that conditional pipeline steps run in response to the conditions, and ensure observability by adding alerting mechanisms in case key steps don't run or fail. Quality guidelines for model deployment Assume that you have a trained model that's been validated from an optimizing metrics perspective, and that the model is approved from a model governance perspective (as described later in the model governance section). The model is stored in the model registry and is ready to be deployed to production. At this point, you need to implement a set of tests to verify that the model is fit to serve in its target environment. You also need to automate these tests in a model CI/CD routine. Guidelines Verify that the model artifact can be loaded and invoked successfully with its runtime dependencies. You can perform this verification by staging the model in a sandboxed version of the serving environment. This verification helps you make sure that the operations and binaries that are used by the model are present in the environment. Validate the model's satisficing metrics (if any), such as model size and latency, in a staging environment. Unit-test the model-artifact-serving interfaces in a staging environment against raw inputs and against expected outputs. Unit-test the model artifact in a staging environment for a set of typical and edge cases of prediction requests. For example, unit-test for a request instance where all features are set to None. Smoke-test the model service API after it's been deployed to its target environment. To perform this test, send a single instance or a batch of instances to the model service and validate the service response. Canary-test the newly deployed model version on a small stream of live serving data. This test makes sure that the new model service doesn't produce errors before the model is exposed to a large number of users. Test in a staging environment that you can roll back to a previous serving model version quickly and safely. Perform online experimentation to test the newly trained model using a small subset of the serving population. This test measures the performance of the new model compared to the current one. After you compare the new model's performance to the performance of the current model, you might decide to fully release the new model to serve all of your live prediction requests. Online experimentation techniques include A/B testing and Multi-Armed Bandit (MAB). Quality guidelines for model serving The predictive performance of the ML models that are deployed and are serving in production usually degrades over time.
This degradation can be due to inconsistencies that have been introduced between the serving features and the features that are expected by the model. These inconsistencies are called training-serving skew. For example, a recommendation model might be expecting an alphanumeric input value for a feature like a most-recently-viewed product code. But instead, the product name rather than the product code is passed during serving, due to an update to the application that's consuming the model service. In addition, the model can go stale as the statistical properties of the serving data drift over time, and the patterns that were learned by the current deployed model are no longer accurate. In both cases, the model can no longer provide accurate predictions. To avoid this degradation of the model's predictive performance, you must perform continuous monitoring of the model's effectiveness. Monitoring lets you regularly and proactively verify that the model's performance doesn't degrade. Guidelines Log a sample of the serving request-response payloads in a data store for regular analysis. The request is the input instance, and the response is the prediction that's produced by the model for that data instance. Implement an automated process that profiles the stored request-response data by computing descriptive statistics. Compute and store these serving statistics at regular intervals. Identify training-serving skew that's caused by data shift and drift by comparing the serving data statistics to the baseline statistics of the training data. In addition, analyze how the serving data statistics change over time. Identify concept drift by analyzing how feature attributions for the predictions change over time. Identify serving data instances that are considered outliers with respect to the training data. To find these outliers, use novelty detection techniques and track how the percentage of outliers in the serving data changes over time. Set alerts for when the model reaches skew-score thresholds on the key predictive features in your dataset. If labels are available (that is, ground truth), join the true labels with the predicted labels of the serving instances to perform continuous evaluation. This approach is similar to the evaluation system that you implement as A/B testing during online experimentation. Continuous evaluation can identify not only the predictive power of your model in production, but also identify which type of request it performs well with and performs poorly with. Set objectives for system metrics that are important to you, and measure the performance of the models according to those objectives. Monitor service efficiency to make sure that your model can serve in production at scale. This monitoring also helps you predict and manage capacity planning, and it helps you estimate the cost of your serving infrastructure. Monitor efficiency metrics, including CPU utilization, GPU utilization, memory utilization, service latency, throughputs, and error rate. Model governance Model governance is a core function in companies that provides guidelines and processes to help employees implement the company's AI principles. These principles can include avoiding models that create or enforce bias, and being able to justify AI-made decisions. The model governance function makes sure that there is a human in the loop. Having human review is particularly important for sensitive and high-impact workloads (often user-facing ones). 
Workloads like this can include scoring credit risk, ranking job candidates, approving insurance policies, and propagating information on social media. Guidelines Have a responsibility assignment matrix for each model by task. The matrix should consider cross-functional teams (lines of business, data engineering, data science, ML engineering, risk and compliance, and so on) along the entire organization hierarchy. Maintain model documentation and reporting in the model registry that's linked to a model's version—for example, by using model cards. Such metadata includes information about the data that was used to train the model, about model performance, and about any known limitations. Implement a review process for the model before you approve it for deployment in production. In this type of process, you keep versions of the model's checklist, supplementary documentation, and any additional information that stakeholders might request. Evaluate the model on benchmark datasets (also known as golden datasets), which cover both standard cases and edge cases. In addition, validate the model against fairness indicators to help detect implicit bias. Explain to the model's users the model's predictive behavior as a whole and on specific sample input instances. Providing this information helps you understand important features and possible undesirable behavior of the model. Analyze the model's predictive behavior using what-if analysis tools to understand the importance of different data features. This analysis can also help you visualize model behavior across multiple models and subsets of input data. Test the model against adversarial attacks to help make sure that the model is robust against exploitation in production. Track alerts on the predictive performance of models that are in production, on dataset shifts, and on drift. Configure the alerts to notify model stakeholders. Manage online experimentation, rollout, and rollback of the models. What's next Read The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction from Google Research. Read A Brief Guide to Running ML Systems in Production from O'Reilly. Read Rules for Machine Learning. Try the Testing and Debugging in Machine Learning training. Read the Data Validation in Machine Learning paper. See the E2E MLOps on Google Cloud code repository. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Mike Styer | Generative AI Solution ArchitectOther contributor: Amanda Brinhosa | Customer Engineer Send feedback \ No newline at end of file diff --git a/Handover_pattern.txt b/Handover_pattern.txt new file mode 100644 index 0000000000000000000000000000000000000000..e98513d883a71a3c7eae267a85d6cfd4d98726d1 --- /dev/null +++ b/Handover_pattern.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/handover-pattern +Date Scraped: 2025-02-23T11:50:40.714Z + +Content: +Home Docs Cloud Architecture Center Send feedback Handover patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC With the handover pattern, the architecture is based on using Google Cloud-provided storage services to connect a private computing environment to projects in Google Cloud.
This pattern applies primarily to setups that follow the analytics hybrid multicloud architecture pattern, where: Workloads that are running in a private computing environment or in another cloud upload data to shared storage locations. Depending on use cases, uploads might happen in bulk or in smaller increments. Google Cloud-hosted workloads or other Google services (data analytics and artificial intelligence services, for example) consume data from the shared storage locations and process it in a streaming or batch fashion. Architecture The following diagram shows a reference architecture for the handover pattern. The preceding architecture diagram shows the following workflows: On the Google Cloud side, you deploy workloads into an application VPC. These workloads can include data processing, analytics, and analytics-related frontend applications. To securely expose frontend applications to users, you can use Cloud Load Balancing or API Gateway. A set of Cloud Storage buckets or Pub/Sub queues uploads data from the private computing environment and makes it available for further processing by workloads deployed in Google Cloud. Using Identity and Access Management (IAM) policies, you can restrict access to trusted workloads. Use VPC Service Controls to restrict access to services and to minimize unwarranted data exfiltration risks from Google Cloud services. In this architecture, communication with Cloud Storage buckets, or Pub/Sub, is conducted over public networks, or through private connectivity using VPN, Cloud Interconnect, or Cross-Cloud Interconnect. Typically, the decision on how to connect depends on several aspects, such as the following: Expected traffic volume Whether it's a temporary or permanent setup Security and compliance requirements Variation The design options outlined in the gated ingress pattern, which uses Private Service Connect endpoints for Google APIs, can also be applied to this pattern. Specifically, it provides access to Cloud Storage, BigQuery, and other Google Service APIs. This approach requires private IP addressing over a hybrid and multicloud network connection such as VPN, Cloud Interconnect and Cross-Cloud Interconnect. Best practices Lock down access to Cloud Storage buckets and Pub/Sub topics. When applicable, use cloud-first, integrated data movement solutions like the Google Cloud suite of solutions. To meet your use case needs, these solutions are designed to efficiently move, integrate, and transform data. Assess the different factors that influence the data transfer options, such as cost, expected transfer time, and security. For more information, see Evaluating your transfer options. To minimize latency and prevent high-volume data transfer and movement over the public internet, consider using Cloud Interconnect or Cross-Cloud Interconnect, including accessing Private Service Connect endpoints within your Virtual Private Cloud for Google APIs. To protect Google Cloud services in your projects and to mitigate the risk of data exfiltration, use VPC Service Controls. These service controls can specify service perimeters at the project or VPC network level. You can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls. Communicate with publicly published data analytics workloads that are hosted on VM instances through an API gateway, a load balancer, or a virtual network appliance. 
Use one of these communication methods for added security and to avoid making these instances directly reachable from the internet. If internet access is required, Cloud NAT can be used in the same VPC to handle outbound traffic from the instances to the public internet. Review the general best practices for hybrid and multicloud networking topologies. Previous arrow_back Gated egress and gated ingress Next General best practices arrow_forward Send feedback \ No newline at end of file diff --git a/Healthcare_and_Life_Sciences.txt b/Healthcare_and_Life_Sciences.txt new file mode 100644 index 0000000000000000000000000000000000000000..26dc0e52cabe5dd20609e188f28bdeadbedac28b --- /dev/null +++ b/Healthcare_and_Life_Sciences.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/healthcare-life-sciences +Date Scraped: 2025-02-23T11:57:50.905Z + +Content: +Google Cloud for healthcare and life sciencesEmpower data-driven innovation, transform the patient and caregiver experience, and enable operational efficiencies across your organization.Contact sales3:58Improve responsive patient service with generative AI on Google Cloud1:24Vertex AI Search for HealthcareVertex AI Search for Healthcare is a Google-quality search experience specifically designed for healthcare and life science enterprises. Using our proprietary, clinical knowledge graph, Vertex AI Search for Healthcare can sift through large volumes of healthcare data and provide natural language answers grounded on patient data. Vertex AI Search for Healthcare empowers provider, payer, and biotech knowledge workers to quickly find the precise information they need.Read moreWhy Google Cloud for healthcare and life sciencesProvide secure, continuous patient careUnlock the value of clinical data and enable a longitudinal record of the patient across all encounters within the health system.“Google Cloud's tools have the potential to unlock sources of information that typically aren't searchable in a conventional manner, or are difficult to access or interpret. Accessing insights more quickly and easily could drive more cures, create more connections with patients, and transform healthcare.” Cris Ross, CIO, Mayo ClinicDiscover solutions:Generate a longitudinal patient recordHealthcare Data EngineUnlock the power of your healthcare dataCloud Healthcare APIA medically-tuned Google search experience on healthcare dataVertex AI Search for HealthcareMake data-driven operational decisionsUnlock the value of claims and clinical data to deliver insights to improve efficiency, stakeholder engagement, health outcomes, and opportunities for growth.“What sets Google Cloud apart is their commitment not only to technical capabilities but also to connecting the healthcare ecosystem through interoperability and using open standards.”Lisa Davis, SVP and CIO, Blue Shield of CaliforniaDiscover solutions:Streamline prior authorization claims processingClaims Acceleration SuiteEnable interoperability at enterprise scaleHealthcare Data EngineDerive insights from medical textCloud Healthcare APIAccelerate drug discovery and precision medicineOptimize workloads and advance drug discovery, development, manufacturing, and marketing at scale with AI-led solutions and high performance computing.“Bayer’s aspiration to be among the leading innovators drives us to continue to invest in novel and disruptive technologies to solve complex problems. 
Partnering with Google Cloud on TPU-powered quantum chemistry complements our ambition to work with industry leaders and experts to quickly deliver on digital transformation.”Bijoy Sagar, Chief Information and Digital Transformation Officer, Bayer AGDiscover solutions:Enable more efficient in silico drug designTarget and Lead ID SuiteTransform genomic data into insightsMultiomics SuiteAccelerate AI developmentGoogle Cloud TPUsOptimize operations and accelerate time-to-marketDrive operational efficiencies and build patient-centered growth models with data-driven solutions and services.“By complementing our expertise in diagnostics and AI with Google Cloud's expertise in AI, deep learning, and their cloud-based technologies for imaging storage, we're evolving our market-leading technology to improve cervical cancer diagnostics.”Michael Quick, Vice President of Research and Development / Innovation, HologicDiscover solutions:Enable a more holistic view of patientsDevice Connect for FitbitTransform imaging diagnosticsMedical Imaging SuiteEasily and securely enable developers to build FHIR API-based digital servicesApigee Health APIxHealthcare providersProvide secure, continuous patient careUnlock the value of clinical data and enable a longitudinal record of the patient across all encounters within the health system.“Google Cloud's tools have the potential to unlock sources of information that typically aren't searchable in a conventional manner, or are difficult to access or interpret. Accessing insights more quickly and easily could drive more cures, create more connections with patients, and transform healthcare.” Cris Ross, CIO, Mayo ClinicDiscover solutions:Generate a longitudinal patient recordHealthcare Data EngineUnlock the power of your healthcare dataCloud Healthcare APIA medically-tuned Google search experience on healthcare dataVertex AI Search for HealthcareHealthcare payersMake data-driven operational decisionsUnlock the value of claims and clinical data to deliver insights to improve efficiency, stakeholder engagement, health outcomes, and opportunities for growth.“What sets Google Cloud apart is their commitment not only to technical capabilities but also to connecting the healthcare ecosystem through interoperability and using open standards.”Lisa Davis, SVP and CIO, Blue Shield of CaliforniaDiscover solutions:Streamline prior authorization claims processingClaims Acceleration SuiteEnable interoperability at enterprise scaleHealthcare Data EngineDerive insights from medical textCloud Healthcare APIPharmaceuticals and biotechAccelerate drug discovery and precision medicineOptimize workloads and advance drug discovery, development, manufacturing, and marketing at scale with AI-led solutions and high performance computing.“Bayer’s aspiration to be among the leading innovators drives us to continue to invest in novel and disruptive technologies to solve complex problems. 
Partnering with Google Cloud on TPU-powered quantum chemistry complements our ambition to work with industry leaders and experts to quickly deliver on digital transformation.”Bijoy Sagar, Chief Information and Digital Transformation Officer, Bayer AGDiscover solutions:Enable more efficient in silico drug designTarget and Lead ID SuiteTransform genomic data into insightsMultiomics SuiteAccelerate AI developmentGoogle Cloud TPUsMedical devicesOptimize operations and accelerate time-to-marketDrive operational efficiencies and build patient-centered growth models with data-driven solutions and services.“By complementing our expertise in diagnostics and AI with Google Cloud's expertise in AI, deep learning, and their cloud-based technologies for imaging storage, we're evolving our market-leading technology to improve cervical cancer diagnostics.”Michael Quick, Vice President of Research and Development / Innovation, HologicDiscover solutions:Enable a more holistic view of patientsDevice Connect for FitbitTransform imaging diagnosticsMedical Imaging SuiteEasily and securely enable developers to build FHIR API-based digital servicesApigee Health APIxWorking with leading healthcare and life sciences organizationsDive into more customer case storiesBayer accelerates drug discovery with Google Cloud’s high performance compute power3-min readBlue Shield of California collaborates with Google Cloud to expedite the prior authorization process5-min readCardinal Health uses Google Cloud to modernize their SAP environment3-min readHackensack Meridian Health deploys Google Cloud’s generative AI tools to improve caregiver and patient experiences3-min readMayo Clinic collaborates with Google Cloud to improve clinical workflows using generative AI3-min readHologic advances innovation and healthcare accessibility with Google Cloud and SlalomVideo (8:04)HCA Healthcare is collaborating with Google Cloud on the use of generative AI to support caregivers to reduce the burden of administrative tasks9-min readALS TDI uses Google Cloud's cloud computing to advance neurodegenerative disease research5-min readSanitas reduces server provisioning from four weeks to two hours on Google Cloud5-min readWhat's new with Google Cloud healthcare and life sciencesSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Cybersecurity assistance for rural hospitalsThis tailored initiative to improve cybersecurity is specially designed for rural hospitals.3-min readPredict protein structures with AlphaFold on Vertex AIRun custom queries on BigQuery and scale to hundreds of experiments using Vertex AI PipelinesVideo (7:32)4 industry trends to help modernize healthcare data management with ReltioBuild trusted longitudinal data pipelines to help improve patient health outcomes6-min readTake a deep dive into Google Cloud for life sciencesLearn how to transform life sciences from the ground up with this four-part video series5-min readDriving operational insights with Healthcare Data Engine AcceleratorsGenerate a longitudinal patient view and improve patient care with our new accelerators5-min readView MoreRecommended healthcare and life sciences partnersOur global service and technology partner ecosystem can accelerate your transformation through their experts, professional services, and industry-specific solutions.Expand allHealthcare partnersLife sciences partnersService partnersTechnology partnersSee all partnersSecurity and complianceProtect your sensitive data—including protected 
health information (PHI), R&D and IP data, and clinical trial results—through identity management, network security, threat detection and response.Learn moreCompliance offeringsWatch the webinarGenerative AI for healthcareThe ROI of Gen AI in Healthcare and Life SciencesIn the healthcare and life sciences industry, early adopters are reaping the rewards of gen AI. Get all the insights from our survey of 305 senior leaders and start generating ROI on your gen AI investments now.Read it hereTake the next stepTell us what you’re solving for and a Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all solutionsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/High_Performance_Computing.txt b/High_Performance_Computing.txt new file mode 100644 index 0000000000000000000000000000000000000000..12d3f8f69d407ee51d04ce6f34aefcfc199d8b5c --- /dev/null +++ b/High_Performance_Computing.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/hpc +Date Scraped: 2025-02-23T11:59:38.997Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Google CloudHigh performance computingGoogle Cloud's HPC solutions are easy to use, built on the latest technology, and cost-optimized to provide a flexible and powerful HPC foundation.Contact usRead the docsBlog: Introducing H3 compute-optimized VMs for high performance computing (HPC)Blog: Using clusters for large-scale technical computing in the cloudHighlightsRapid, turnkey HPC environment creation with Cloud HPC ToolkitWorkload-specific HPC solutions and customer success casesGoogle Cloud HPC partnersFrequently asked questionsOverviewPowerful HPC infrastructureEnable your team to run your most intensive workloads on the latest and greatest infrastructure. CPUs from Intel, AMD, and Arm. GPUs from NVIDIA, including the A100 and H100. High-performance storage options spanning object, block, and file storage.Advanced HPC services and toolsDeploy HPC quickly with the Cloud HPC Toolkit's prebuilt best-practices HPC blueprints. Deploy preconfigured modules for Compute Engine, Google Kubernetes Engine, Batch, or individual VMs.Access data on Parallelstore, Filestore, Cloud Storage, or partner storage offerings.Speed up tightly coupled workloads with compact placement policies, gVNIC, and the HPC VM image.Cost-optimized HPCManage costs as you scale with budgets and committed use discounts. Save up to 91% with Spot VMs for flexible, check-pointable, and fault-tolerant workloads. Report on costs at a granular level easily with built-in labels in the Cloud HPC Toolkit.View moreHow It WorksGoogle Cloud’s HPC solutions are easy to use, built on the latest technology, and cost-optimized to provide a flexible and powerful HPC foundation.The Cloud HPC Toolkit enables you to easily launch new HPC environments.Read the docsCommon UsesDrug discoveryExpand horizonsDrug discovery is a challenging but life-saving workload in high performance computing. 
Google Cloud has the tools and expertise to deliver results when it matters most.Our HPC solutions make it easy and quick to run popular computational chemistry, molecular dynamics, and virtual drug screening workloads.Dive in to learn how customers have accelerated their drug discovery workloads on Google Cloud, and about the Cloud HPC Toolkit's best practices blueprints for drug discovery workloads.Watch the overview videoCloud HPC Toolkit - GROMACS blueprintHow HPC and generative AI will accelerate drug discovery and precision medicineAccelerate drug development - Google Cloud solution pageCheck the other tabs for more information on drug discovery workloads.How-tosExpand horizonsDrug discovery is a challenging but life-saving workload in high performance computing. Google Cloud has the tools and expertise to deliver results when it matters most.Our HPC solutions make it easy and quick to run popular computational chemistry, molecular dynamics, and virtual drug screening workloads.Dive in to learn how customers have accelerated their drug discovery workloads on Google Cloud, and about the Cloud HPC Toolkit's best practices blueprints for drug discovery workloads.Watch the overview videoCloud HPC Toolkit - GROMACS blueprintHow HPC and generative AI will accelerate drug discovery and precision medicineAccelerate drug development - Google Cloud solution pageCheck the other tabs for more information on drug discovery workloads.Financial servicesRisk simulation and quantitative researchQuantitative managers are increasingly relying on access to high performance computing (HPC) resources to develop their investment strategies. While buy-side firms predominantly used on-premises infrastructure in the past, both quants and risk professionals can benefit from the computational power of the cloud.Learn more about how Google Cloud is enabling financial services organizations to accelerate their toughest workloads.Read the whitepaperMonte Carlo methods using Dataproc and Apache SparkThe new frontier: High performance computing for quantitative research and risk simulationGoogle Cloud tick data analytics performance improves up to 18x in latest STAC benchmarkHow-tosRisk simulation and quantitative researchQuantitative managers are increasingly relying on access to high performance computing (HPC) resources to develop their investment strategies. While buy-side firms predominantly used on-premises infrastructure in the past, both quants and risk professionals can benefit from the computational power of the cloud.Learn more about how Google Cloud is enabling financial services organizations to accelerate their toughest workloads.Read the whitepaperMonte Carlo methods using Dataproc and Apache SparkThe new frontier: High performance computing for quantitative research and risk simulationGoogle Cloud tick data analytics performance improves up to 18x in latest STAC benchmarkElectronics design automationChip design and verification in the cloudGoogle Cloud HPC solutions can help electronics design automation (EDA) companies accelerate their design and verification cycles, improve product quality, and reduce costs. Google Cloud offers a wide range of HPC services and solutions, including the Cloud HPC Toolkit, Batch, and partner integrations. 
These solutions can be used to create powerful and flexible HPC environments that can run EDA workloads in a license-optimized manner.Learn moreAccelerating HPC and chip design with AMDSynopsys aids chip designers to accelerate development on Google CloudGoogle's chip design team benefits from move to Google CloudHow-tosChip design and verification in the cloudGoogle Cloud HPC solutions can help electronics design automation (EDA) companies accelerate their design and verification cycles, improve product quality, and reduce costs. Google Cloud offers a wide range of HPC services and solutions, including the Cloud HPC Toolkit, Batch, and partner integrations. These solutions can be used to create powerful and flexible HPC environments that can run EDA workloads in a license-optimized manner.Learn moreAccelerating HPC and chip design with AMDSynopsys aids chip designers to accelerate development on Google CloudGoogle's chip design team benefits from move to Google CloudComputer-aided engineeringFluid dynamics, structural mechanics, energy explorationWhether you're running computational fluid dynamics, finite element analysis, or reservoir simulations, computer-aided engineering (CAE) is at the heart of what you do. Running these simulations efficiently and in a timely manner is critical.Google Cloud has the HPC solutions to meet your needs, and deliver on-time performance to edge out the competition.Read the CAE solution guideBest practices for running Simcenter STAR CCM+ workloadsBest practices for running Ansys Fluent workloadsCloud HPC Toolkit blueprint for OpenFOAMHow-tosFluid dynamics, structural mechanics, energy explorationWhether you're running computational fluid dynamics, finite element analysis, or reservoir simulations, computer-aided engineering (CAE) is at the heart of what you do. 
Running these simulations efficiently and in a timely manner is critical.Google Cloud has the HPC solutions to meet your needs, and deliver on-time performance to edge out the competition.Read the CAE solution guideBest practices for running Simcenter STAR CCM+ workloadsBest practices for running Ansys Fluent workloadsCloud HPC Toolkit blueprint for OpenFOAMWeather forecastingClimate modeling on Google CloudWeather forecasters can run popular climate models, like the Weather Research and Forecasting (WRF) modeling system, GFS FV3, ECMWF, and more, easily on Google Cloud using the Google Cloud HPC Toolkit, and achieve the performance of an on-premises supercomputer for a fraction of the price.Get your climate simulations up and running on Google Cloud in minutes, and respond to new data in record time.Deploy WRF with the Cloud HPC Toolkit - tutorialTomorrow.io Collaboration With Google Cloud: Weather Forecasting With 5X ResolutionNOAA and Google Cloud: A data match made in the cloudGuest blog: "Clouds in the cloud: Weather forecasting in Google Cloud" - Fluid NumericsHow-tosClimate modeling on Google CloudWeather forecasters can run popular climate models, like the Weather Research and Forecasting (WRF) modeling system, GFS FV3, ECMWF, and more, easily on Google Cloud using the Google Cloud HPC Toolkit, and achieve the performance of an on-premises supercomputer for a fraction of the price.Get your climate simulations up and running on Google Cloud in minutes, and respond to new data in record time.Deploy WRF with the Cloud HPC Toolkit - tutorialTomorrow.io Collaboration With Google Cloud: Weather Forecasting With 5X ResolutionNOAA and Google Cloud: A data match made in the cloudGuest blog: "Clouds in the cloud: Weather forecasting in Google Cloud" - Fluid NumericsLife sciences and genomicsMake new discoveriesGoogle Cloud's HPC solutions can help researchers in genomics and life sciences make new discoveries. By providing access to powerful computing resources, Google Cloud can help researchers analyze large datasets, run simulations, and develop new medical treatments more quickly and efficiently. Google Cloud also offers a variety of HPC- and life sciences-specific tools and services, such as Batch and the Google Multiomics Suite.Read about the Google Multiomics SuiteOrchestrate jobs by running Nextflow pipelines on BatchBroad Institute: Discovering human health revelations hidden in DNAFreenome is building the next generation of early cancer detection technology with GoogleHow-tosMake new discoveriesGoogle Cloud's HPC solutions can help researchers in genomics and life sciences make new discoveries. By providing access to powerful computing resources, Google Cloud can help researchers analyze large datasets, run simulations, and develop new medical treatments more quickly and efficiently. Google Cloud also offers a variety of HPC- and life sciences-specific tools and services, such as Batch and the Google Multiomics Suite.Read about the Google Multiomics SuiteOrchestrate jobs by running Nextflow pipelines on BatchBroad Institute: Discovering human health revelations hidden in DNAFreenome is building the next generation of early cancer detection technology with GoogleEnergyAccelerating energy workloadsGoogle Cloud helps power and utility companies return results faster with a range of industry-leading and energy-specific solutions, partners, and services.Customers like PGS, TGS, and Schlumberger rely on Google Cloud to handle their mission-critical, time-sensitive workloads. 
Check out the Customer examples tab to learn how PGS replaced their 260,000 core Cray supercomputers with Google Cloud.Read more about our energy solutionsTGS: Up in the cloud – a seismic shift in geophysical computingSchlumberger chooses Google Cloud to deliver new oil and gas technology platformHow-tosAccelerating energy workloadsGoogle Cloud helps power and utility companies return results faster with a range of industry-leading and energy-specific solutions, partners, and services.Customers like PGS, TGS, and Schlumberger rely on Google Cloud to handle their mission-critical, time-sensitive workloads. Check out the Customer examples tab to learn how PGS replaced their 260,000 core Cray supercomputers with Google Cloud.Read more about our energy solutionsTGS: Up in the cloud – a seismic shift in geophysical computingSchlumberger chooses Google Cloud to deliver new oil and gas technology platformCloud HPC ToolkitLearn how the toolkit works, then deploy a blueprintCloud HPC Toolkit overviewRead the latest HPC blogsGoogle Cloud HPC blogsArchitect your HPC environmentLarge-scale technical computing in the CloudLearn the best practicesBest practices for running HPC workloadsDeploy a blueprintReview the blueprint catalogPartners & IntegrationGoogle Cloud HPC partnersPartner HighlightPartner HighlightPartner HighlightPartner HighlightSchedulers and platformsPartner HighlightIntegratorsIndependent software vendorsPartner HighlightStoragePartner HighlightInfrastructurePartner HighlightSee the rest of Google Cloud's HPC partners at Find a PartnerFAQExpand allHow should I choose between these Google Cloud services to run high performance computing (HPC) workloads: Compute Engine, Google Kubernetes Engine, Batch, and Cloud Run?The best Google Cloud service for running HPC workloads depends on your specific needs. There are a number of factors to consider when architecting your HPC environment. Some of those factors include:Control: How much control do you need over your HPC environment?Scalability: How scalable does your HPC environment need to be?Cost: How much are you willing to spend on your HPC environment?Ease of use: How easy do you need your HPC environment to be to use?Once you have considered these factors, you can choose the Google Cloud service that is best for you. Here is a brief overview of each service, and how they relate to the above factors:Compute Engine: Compute Engine is an infrastructure as a service (IaaS) offering that provides virtual machines (VMs) that can be used to run HPC workloads. Compute Engine gives you the most control and scalability over your HPC environment.Google Kubernetes Engine: Google Kubernetes Engine is a managed Kubernetes service that can be used to run containerized HPC workloads. Google Kubernetes Engine is a good option if you want to use containerized applications, or want the ease of use that Kubernetes brings to managing your compute resources.Batch: Batch is a managed service for running batch jobs. Batch is a good option if you want to run on Compute Engine, have a large number of HPC jobs that you need to run on a regular basis, and don't need deep customization of the infrastructure or scheduling policies.Cloud Run: Cloud Run is a serverless platform that can be used to run small, simple HPC workloads. Cloud Run is a good option if you want to run HPC workloads without having to manage infrastructure. 
See the Cloud Run resource limits to understand the limitations.Don't hesitate to reach out to the Google Cloud HPC team to discuss your requirements in depth.How should I choose between the Cloud HPC Toolkit and an HPC as a service platform partner for my HPC workloads?The choice between the Cloud HPC Toolkit and an HPC as a service platform partner depends on your specific needs. Here are some factors to consider when choosing between the Cloud HPC Toolkit and an HPC as a service platform partner:Control: How much control do you need over your HPC environment?Ease of use: How easy do you need your HPC environment to be to use?Cost: How much are you willing to spend on your HPC environment?Expertise: How much expertise do you have in HPC?Once you have considered these factors, you can choose the option that is best for you. Here is a brief overview of each option:Cloud HPC ToolkitThe Cloud HPC Toolkit is a set of open source tools that can be used to deploy and manage HPC workloads on Google Cloud. The toolkit provides a number of features, including being open source, Terraform and Cloud Foundation Toolkit based, composable, and integrated with Google Cloud services and popular HPC tooling and applications. The Cloud HPC Toolkit can be used through a web-based user interface with the open front end.The Cloud HPC Toolkit is a good option if you want to have a high degree of control over your HPC environment. The Cloud HPC Toolkit was built to provide better ease of use than building DIY HPC environments. It both provides more configurability and requires more configuration than an HPC as a service platform from a partner, and therefore is best suited for users with more HPC expertise.HPC as a service platform partnerAn HPC as a service platform partner is a third-party company that provides a managed HPC platform on Google Cloud. These platforms typically provide a number of features, including preconfigured HPC environments, user-friendly interfaces, and technical support.An HPC as a service platform partner is a good option if you want to get started with HPC quickly and easily, or want to provide users a simple, GUI-based user experience. However, they can be less flexible, or include extra costs.In general, the Cloud HPC Toolkit is a good option for users who have a high degree of expertise in HPC and want a high degree of control over their HPC environment. HPC as a service platform partners are a good option for users who want to get started with HPC quickly and easily.Didn't find your answer? Contact the Google Cloud teamContact usGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/High_availability_of_MySQL_clusters_on_Compute_Engine.txt b/High_availability_of_MySQL_clusters_on_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..f096df22c755fd577b0212655304ac34b584e5c4 --- /dev/null +++ b/High_availability_of_MySQL_clusters_on_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/architectures-high-availability-mysql-clusters-compute-engine +Date Scraped: 2025-02-23T11:49:29.169Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architectures for high availability of MySQL clusters on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-21 UTC This document describes several architectures that provide high availability (HA) for MySQL deployments on Google Cloud. 
HA is the measure of system resiliency in response to underlying infrastructure failure. In this document, HA addresses the availability of MySQL clusters within a single cloud region. This document is intended for database administrators, cloud architects, and DevOps engineers, who want to learn how to increase MySQL data-tier reliability by improving overall system uptime. This document is intended for you if you are running MySQL on Compute Engine. If you use Cloud SQL for MySQL, this document is not intended for you. For a system or application that requires a persistent state to handle requests or transactions, the data persistence layer must be available to successfully handle requests for data queries or mutations. If the application must interact with the data tier to service requests, any downtime in the data tier prevents the application from performing the necessary tasks. Depending on the system service level objectives (SLOs) of your system, you might require an architectural topology that provides a higher level of availability. There is more than one way to achieve HA, but in general you provision redundant infrastructure that you can quickly make accessible to your application. This document discusses the following topics: Define terms to help you understand HA database concepts. Help you understand several options for HA MySQL topologies. Provide contextual information to help you understand what to consider in each option. Terminology There are several terms and concepts that are industry-standard and useful to understand for purposes beyond the scope of this document. Replication. The process by which write transactions (INSERT, UPDATE, or DELETE) are reliably captured, logged, and then serially applied to all database nodes in the topology. Source node. All database writes must be directed to a source node. The source node provides a read with the most up-to-date state of persisted data. Replica node. An online copy of the source database node. Changes are near-synchronously replicated to replica nodes from the source node. You can read from replica nodes with the understanding that the data might be slightly delayed due to replication lag. Replication lag. A time measurement that expresses the difference between when a transaction is applied to the replica compared to when it is applied to the source node. Uptime. The percent of time that a resource is working and capable of delivering a response to a request. Failure detection. The process of identifying that an infrastructure failure has occurred. Failover. The process to promote the backup or standby infrastructure (in this case, the replica node) to become the primary infrastructure. In other words, during failover, the replica node becomes the source node. Recovery time objective (RTO). The duration, in elapsed real time, that is acceptable, from a business perspective, for the data tier failover process to complete. Fallback. The process to reinstate the former source node after a failover has occurred. Self-healing. The capability of a system to resolve issues without external actions by a human operator. Network partition. A condition when two nodes in a topology, for example the source and replica nodes, can't communicate with one another over the network. Split brain. A condition that occurs when two nodes simultaneously believe that they are the source node. Node group. A set of compute resource tasks that provide a service. For this document, that service is the data persistence tier. 
Witness or quorum node. A separate compute resource that helps a node group determine what to do when a split-brain condition occurs. Source or leader election. The process by which a group of peer-aware nodes, including witness nodes, determine which node should be the source node. Hot standby. A node that represents a close copy of another source node and can become the new source node with minimal downtime. When to consider an HA architecture HA architectures provide increased protection against data-tier downtime. Understanding your tolerance for downtime, and the respective tradeoffs of the various architectures, is paramount to choosing the option that is right for your business use case. Use an HA topology when you want to provide increased data-tier uptime to meet the reliability requirements for your workloads and services. For environments where some amount of downtime is tolerated, an HA topology introduces unnecessary cost and complexity. For example, development or test environments infrequently need high database tier availability. Consider your requirements for HA Cost is a notable consideration, because you should expect your compute infrastructure and storage costs to at least double, in order to provide HA. When you assess the possible MySQL HA options, consider the following questions: What services or customers rely on your data tier? What is your operational budget? What is the cost to your business in the event of downtime in the data persistence tier? How automated does the process need to be? What level of availability do you hope to achieve, 99.5%, 99.9%, or 99.99%? How quickly do you need to fail over? What is your RTO? The following contribute to recovery time and should be considered when establishing your RTO: Detection of the outage Secondary virtual machine (VM) instance readiness Storage failover Database recovery time Application recovery time MySQL HA architectures At the most basic level, HA in the data tier consists of the following: A mechanism to identify that a failure of the source node has occurred. A process to perform a failover where the replica node is promoted to be a source node. A process to change the query routing so that application requests reach the new source node. Optionally, a method to fall back to the original topology using source and replica nodes. This document discusses the following three HA architectures: Regional Persistent Disk Hot standby and witness node Orchestrator and ProxySQL In addition to infrastructure failure, each of these architectures can help minimize downtime in the unlikely event of a zonal outage. You use these architectures with Domain Name System (DNS) changes to provide multi-region HA to guard against regional service interruption, but this topic is out of scope for this document. HA with regional Persistent Disks HA in the data tier always relies on some type of data replication. The simplest replication is one that you don't have to manage. With the regional Persistent Disk storage option from Compute Engine, you can provision a block storage device that provides synchronous data replication between two zones in a region. Regional Persistent Disks provide a strong foundational building block for implementing HA services in Compute Engine. The following diagram illustrates the architecture of HA with regional Persistent Disks.
If your source node VM instance becomes unavailable due to infrastructure failure or zonal outage, you can force the regional Persistent Disk to attach to a VM instance in your backup zone in the same region. To perform this task, you must do one of the following: Start another VM instance in the backup zone where access to the shared regional Persistent Disk is available. Maintain a hot standby VM instance in the backup zone. A hot standby VM instance is a running VM instance that is identical to the instance that you are using. After you attach the regional Persistent Disk, you can start the database engine. If the data service outage is promptly identified, the force-attach operation typically completes in less than one minute, which means that an RTO measured in minutes is attainable. If your business can tolerate the additional downtime required for you to detect and communicate an outage, and for you to perform the failover manually, then there is no need to automate the process. If your RTO tolerance is lower, you can automate the detection and failover process. If you automate this architecture, the system is further complicated because there are several edge cases in the failover and fallback process that you need to consider. For more information about a fully automated implementation of this architecture, see the Cloud SQL high availability configuration. Advantages There are several advantages of achieving HA by using regional Persistent Disks due to the following features: This architecture provides simultaneous protection against several failure modes: primary-zone server infrastructure failure, single-zone block-storage degradation, or full-zone outage. Note: For more information about region-specific considerations, see Geography and regions. The application or database layer replication is not required because regional Persistent Disks provide continuous and synchronous block-level data replication, which is fully managed by Google Cloud. A regional Persistent Disk automatically detects errors and slowness, switches the replication mode, and performs catch up of data that is replicated to only one zone. If there are storage problems in a primary zone, a regional Persistent Disk automatically performs reads from the secondary zone. This operation can result in increased read latency, but your application can continue to operate without any manual action. Considerations The limitations of this architecture are related to the single region nature of this topology and some of the following inherent constraints of regional Persistent Disks: The regional Persistent Disk can only be mounted to one database. Even if the standby database VM instance is running, that instance cannot be used to serve database reads. The foundational technology behind this architecture allows only for replication between zones in the same region. As a result, regional failover is not an option when solely using this architecture. The regional Persistent Disk's write throughput is halved compared to zonal Persistent Disks. Make sure that throughput limits are within your required tolerance. The regional Persistent Disk's write latency is slightly higher than zonal Persistent Disk. We recommend that you test your workload to verify that the write performance is acceptable for your requirements. During a failure event and the resulting cutover, you need to force the regional Persistent Disk to attach to the standby zone VM. 
The force-attach operation typically executes in less than one minute, so you must consider this time when assessing your RTO. The RTO estimate must account for the time required to force-attach the regional Persistent Disk and for the VM file system to detect the hot-attached disk. HA with hot standby and witness node If you want an automated failover, a different architecture is required. One option is to deploy a group of at least two database nodes, configure database asynchronous replication, and launch witness nodes to help ensure that a quorum can be reached during a source node election. The source database node processes write transactions and serves read queries. The database replication process transmits changes to the online hot standby replica node. Because the witness node can be a small virtual machine, it provides a low-cost mechanism to ensure that a group majority is available for a source node election. Group nodes continually assess the status of the other group nodes. These status checks, exchanged every few seconds, are called heartbeats because they are used to assess the health of the other group nodes. A timely assessment of database node health is important because an unhealthy source database node must be quickly identified so that a failover to the hot standby can be initiated. The node group quorum is determined by the number of voting elements that must be part of active cluster membership for that cluster to start properly or continue running. For a node group to reach a quorum in a source database node election, a majority of the nodes in the group must participate. To guard against a split-brain condition, the majority requirement ensures that in the event of a network partition, two voting groups cannot concurrently have enough nodes to vote. A group majority consists of (n+1)/2 nodes, where n is the total number of nodes in the group. For example, if there are three nodes in a group, at least two nodes must be operating for a source node election. If there are five nodes in a group, at least three nodes are required. Groups are sized with an odd number of nodes in case there is a network partition that prevents communication between subgroups of the node group. If the group size is even, there is a greater chance that neither subgroup has a majority after a partition. If the group size is odd, it is more likely that one of the subgroups has a majority, or else neither subgroup does. The following diagram compares a healthy node group with a degraded node group. The diagram shows two node groups: a functional node group and a degraded node group. The fully functional and healthy node group has three group members. In this state, the source and replica database nodes are serving their expected purpose. The necessary quorum for this node group is two nodes. The degraded node group shows the state where the source node heartbeats are no longer sent due to an infrastructure failure. This state might be the result of a source database node instance failure, or the source node might still be running. Alternatively, a network partition might prevent communication between the source node and the other nodes in the group. Regardless of the cause, the result is that both the replica and the witness determine that the source node is no longer healthy. At this point, the group majority conducts a source node election, determines that the hot standby node should become the source node, and initiates a failover.
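MySQL Group Replication, mentioned below as one implementation option for this architecture, expresses the node group and its quorum behavior through a handful of server settings. The following fragment is a minimal, illustrative sketch for one group member; the group name UUID, host names, and port are example values, not recommendations:

# Append minimal Group Replication settings for one member of a three-node group.
cat >> /etc/mysql/my.cnf <<'EOF'
[mysqld]
server_id                       = 1
gtid_mode                       = ON
enforce_gtid_consistency        = ON
plugin_load_add                 = group_replication.so
group_replication_group_name    = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot = OFF
group_replication_local_address = "mysql-node-1:33061"
group_replication_group_seeds   = "mysql-node-1:33061,mysql-node-2:33061,mysql-node-3:33061"
EOF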
The following diagram shows the database transaction, replication, and heartbeat flow in the witness node architecture. In the preceding diagram, this HA architecture relies on the hot-standby replica node to quickly start processing production writes upon a failover. The mechanics of the failover (for example, source node promotion) are carried out by the database nodes in the group. To implement this architecture, consider the following projects: MySQL Group Replication is an open source plugin for MySQL that facilitates the creation of HA topologies. Galera Cluster and Percona XtraDB Cluster are other open source options that can provide high availability. Advantages The hot standby architecture has few moving parts, is straightforward to deploy, and provides several advantages: With only one additional, low-cost witness node, fully automated failover is provided. This architecture can address long-term infrastructure failure as easily as it can a transient failure (for example, due to a system reboot). With some associated replication latency, multi-region HA is provided. Considerations The failover is automatic; however, the following operational tasks remain: You manage the replication between the source and replica nodes. You manage the witness nodes. You must deploy and manage the connection routing using a load balancer. Without changes to your application logic, which are out of scope for this document, you can't direct reads to the replica node. HA with Orchestrator and ProxySQL If you combine the open source components Orchestrator and ProxySQL, you have an architecture that can detect outages and automatically fail over traffic from an afflicted source node to a newly promoted healthy replica. Furthermore, you can transparently route queries to the appropriate read-only or read-write nodes to improve steady-state data tier performance. Orchestrator is an open source MySQL replication topology manager and failover solution. The software lets you detect, query, and refactor complex replication topologies, and provides reliable failure detection, intelligent recovery, and promotion. ProxySQL is an open source, high-performance, highly available, database-protocol-aware proxy for MySQL. ProxySQL scales to millions of connections across hundreds of thousands of backend servers. The following diagram shows the combined Orchestrator and ProxySQL architecture. In this architecture, as illustrated by the preceding diagram, database-bound traffic is routed by an internal load balancer to redundant ProxySQL instances. These instances route traffic to a write-capable or read-capable database instance based on the ProxySQL configuration. Orchestrator provides the following failure detection and recovery steps: Orchestrator determines that the source database node is not available. All replica nodes are queried to provide a second opinion about the status of the source node. If the replicas provide a consistent assessment that the source is not available, the failover proceeds. As defined by the topology, the promoted node becomes the new source node during the failover. When the failover is complete, Orchestrator helps ensure that the correct number of new replication nodes are provisioned according to the topology. Ongoing replication between the source database in zone A and the database replicas in alternate zones keeps the replicas up-to-date with any writes routed to the source. Orchestrator checks the health of the source and replica databases by continually sending heartbeats.
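The ProxySQL routing behavior described in this section is defined through ProxySQL's admin interface. The following sketch shows one possible read/write split; the hostgroup IDs, host names, and admin credentials are placeholders, and the command assumes ProxySQL's default admin port:

# Define a writer hostgroup (10) and a reader hostgroup (20), then route
# SELECT statements to the readers (all values are examples).
mysql -h 127.0.0.1 -P 6032 -u admin -p <<'SQL'
INSERT INTO mysql_replication_hostgroups (writer_hostgroup, reader_hostgroup, comment)
  VALUES (10, 20, 'mysql-ha-cluster');
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, 'mysql-source', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, 'mysql-replica', 3306);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
  VALUES (1, 1, '^SELECT', 20, 1);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
SQL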
Orchestrator application state is persisted in a separate Cloud SQL database. If changes in the topology are required, Orchestrator can also send commands to the databases. ProxySQL routes the traffic appropriately to the new source and replica nodes when the failover is complete. Services continue to address the data tier using the IP address of the load balancer. The virtual IP address is switched seamlessly from the earlier source node to the new source node. Advantages The architectural components and automation provides the following advantages: The software used in this architecture provides various observability features including replication topology graphs, and query traffic visibility. ProxySQL and Orchestrator coordinate to provide automatic replica promotion and failover. The replica promotion policy is fully configurable. Unlike other HA configurations, you can choose to promote a specific replica node to source in case of failover. After a failover, new replicas are provisioned declaratively according to the topology. ProxySQL provides a supplementary load-balancing benefit as it transparently routes read and write requests to the appropriate replica and source nodes based on the configured policies. Considerations This architecture increases operational responsibility and incurs additional hosting cost due to the following considerations: Both Orchestrator and ProxySQL must be deployed and maintained. Orchestrator needs a separate database for maintaining state. Both Orchestrator and ProxySQL need to be set up for HA, so there is additional configuration and deployment complexity. Further, Orchestrator doesn't support multi-source replications, doesn't support all types of parallel replication, and can't be combined with clustering software such as Galera or Percona XtraDB. For more information about the current limitations, see the Orchestrator FAQ. What's next Read about the Cloud SQL high availability configuration. Learn more about High availability options using regional persistent disks. Review the MySQL Group Replication documentation. Learn about Galera Cluster or the related Percona XtraDB Cluster. Review the Orchestrator documentation. Learn more about ProxySQL. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/High_availability_of_PostgreSQL_clusters_on_Compute_Engine.txt b/High_availability_of_PostgreSQL_clusters_on_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..3f945d270188b1de31b450cd995ff9f0f76e2cf1 --- /dev/null +++ b/High_availability_of_PostgreSQL_clusters_on_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/architectures-high-availability-postgresql-clusters-compute-engine +Date Scraped: 2025-02-23T11:54:52.389Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architectures for high availability of PostgreSQL clusters on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-03 UTC This document describes several architectures that provide high availability (HA) for PostgreSQL deployments on Google Cloud. HA is the measure of system resiliency in response to underlying infrastructure failure. In this document, HA refers to the availability of PostgreSQL clusters either within a single cloud region or between multiple regions, depending on the HA architecture. 
This document is intended for database administrators, cloud architects, and DevOps engineers who want to learn how to increase PostgreSQL data-tier reliability by improving overall system uptime. This document discusses concepts relevant to running PostgreSQL on Compute Engine. The document doesn't discuss using managed databases such as Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL. If a system or application requires a persistent state to handle requests or transactions, the data persistence layer (the data tier) must be available to successfully handle requests for data queries or mutations. Downtime in the data tier prevents the system or application from performing the necessary tasks. Depending on the service level objectives (SLOs) of your system, you might require an architecture that provides a higher level of availability. There is more than one way to achieve HA, but in general you provision redundant infrastructure that you can quickly make accessible to your application. This document discusses the following topics: Definition of terms related to HA database concepts. Options for HA PostgreSQL topologies. Contextual information for consideration of each architecture option. Terminology The following terms and concepts are industry-standard, and they are useful to understand for purposes beyond the scope of this document. replication The process by which write transactions (INSERT, UPDATE, or DELETE) and schema changes (data definition language (DDL)) are reliably captured, logged, and then serially applied to all downstream database replica nodes in the architecture. primary node The node that provides a read with the most up-to-date state of persisted data. All database writes must be directed to a primary node. replica (secondary) node An online copy of the primary database node. Changes are either synchronously or asynchronously replicated to replica nodes from the primary node. You can read from replica nodes with the understanding that the data might be slightly delayed due to replication lag. replication lag A measurement, in log sequence number (LSN), transaction ID, or time. Replication lag expresses the difference between when change operations are applied to the replica compared to when they are applied to the primary node. continuous archiving An incremental backup in which the database continuously saves sequential transactions to a file. write-ahead log (WAL) A write-ahead log (WAL) is a log file that records changes to data files before any changes are actually made to the files. In case of a server crash, the WAL is a standard way to help ensure the data integrity and durability of your writes. WAL record A record of a transaction that was applied to the database. A WAL record is formatted and stored as a series of records that describe data file page-level changes. Log Sequence Number (LSN) Transactions create WAL records which are appended to the WAL file. The position where the insert occurs is called Log Sequence Number (LSN). It is a 64-bit integer, represented as two hexadecimal numbers separated by a slash (XXXXXXXX/YYZZZZZZ). The 'Z' represents the offset position in the WAL file. segment files Files that contain as many WAL records as possible, depending on the file size that you configure. Segment files have monotonically increasing filenames and a default file size of 16 MB. 
synchronous replication A form of replication in which the primary server waits for the replica to confirm that data was written to the replica transaction log before confirming a commit to the client. When you run streaming replication, you can use the PostgreSQL synchronous_commit option, which helps to ensure consistency between your primary server and replica. asynchronous replication A form of replication in which the primary server doesn't wait for the replica to confirm that the transaction was successfully received before confirming a commit to the client. Asynchronous replication has lower latency when compared with synchronous replication. However, if the primary crashes and its committed transactions aren't transferred to the replica, there is a chance of data loss. Asynchronous replication is the default mode of replication on PostgreSQL, either using file-based log shipping or streaming replication. file-based log shipping A replication method in PostgreSQL that transfers the WAL segment files from the primary database server to the replica. The primary operates in continuous archiving mode, while each standby service operates in continuous recovery mode to read the WAL files. This type of replication is asynchronous. streaming replication A replication method wherein the replica connects to the primary and continuously receives a continuous sequence of changes. Because updates arrive through a stream, this method keeps the replica more up-to-date with the primary when compared with log-shipping replication. Although replication is asynchronous by default, you can alternatively configure synchronous replication. physical streaming replication A replication method that transports changes to the replica. This method uses the WAL records that contain the physical data changes in the form of disk block addresses and byte-by-byte changes. logical streaming replication A replication method that captures changes based on their replication identity (primary key) which allows for more control over how the data is replicated compared to physical replication. Because of restrictions in PostgreSQL logical replication, logical streaming replication requires special configuration for an HA setup. This guide discusses the standard physical replication and doesn't discuss logical replication. uptime The percent of time that a resource is working and capable of delivering a response to a request. failure detection The process of identifying that an infrastructure failure has occurred. failover The process of promoting the backup or standby infrastructure (in this case, the replica node) to become the primary infrastructure. During failover, the replica node becomes the primary node. switchover The process of running a manual failover on a production system. A switchover either tests that the system is working well, or takes the current primary node out of the cluster for maintenance. recovery time objective (RTO) The elapsed, real-time duration for the data tier failover process to complete. RTO depends on the amount of time that's acceptable from a business perspective. recovery point objective (RPO) The amount of data loss (in elapsed real time) for the data tier to sustain as a result of failover. RPO depends on the amount of data loss that's acceptable from a business perspective. fallback The process of reinstating the former primary node after the condition that caused a failover is remedied. 
self-healing The capability of a system to resolve issues without external actions by a human operator. network partition A condition when two nodes in an architecture—for example the primary and replica nodes—can't communicate with one another over the network. split brain A condition that occurs when two nodes simultaneously believe that they are the primary node. node group A set of compute resources that provide a service. In this document, that service is the data persistence tier. witness or quorum node A separate compute resource that helps a node group determine what to do when a split-brain condition occurs. primary or leader election The process by which a group of peer-aware nodes, including witness nodes, determine which node should be the primary node. When to consider an HA architecture HA architectures provide increased protection against data-tier downtime when compared to single node database setups. To select the best option for your business use case, you need to understand your tolerance for downtime, and the respective tradeoffs of the various architectures. Use an HA architecture when you want to provide increased data-tier uptime to meet the reliability requirements for your workloads and services. If your environment tolerates some amount of downtime, an HA architecture might introduce unnecessary cost and complexity. For example, development or test environments infrequently need high database tier availability. Consider your requirements for HA Following are several questions to help you decide what PostgreSQL HA option is best for your business: What level of availability do you hope to achieve? Do you require an option that allows your service to continue to function during only a single zone or complete regional failure? Some HA options are limited to a region while others can be multi-region. What services or customers rely on your data tier, and what is the cost to your business if there is downtime in the data persistence tier? If a service caters only to internal customers who require occasional use of the system, it likely has lower availability requirements than an end-customer facing service that serves customers continually. What is your operational budget? Cost is a notable consideration: to provide HA, your infrastructure and storage costs are likely to increase. How automated does the process need to be, and how quickly do you need to fail over? (What is your RTO?) HA options vary by how quickly the system can failover and be available to customers. Can you afford to lose data as a result of the failover? (What is your RPO?) Because of the distributed nature of HA topologies, there is a tradeoff between commit latency and risk of data loss due to a failure. How HA works This section describes streaming and synchronous streaming replication that underlie PostgreSQL HA architectures. Streaming replication Streaming replication is a replication approach in which the replica connects to the primary and continuously receives a stream of WAL records. Compared to log-shipping replication, streaming replication allows the replica to stay more up-to-date with the primary. PostgreSQL offers built-in streaming replication beginning in version 9. Many PostgreSQL HA solutions use the built-in streaming replication to provide the mechanism for multiple PostgreSQL replica nodes to be kept in sync with the primary. Several of these options are discussed in the PostgreSQL HA architectures section later in this document. 
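As a brief illustration of built-in streaming replication, a standby is commonly seeded from the primary with pg_basebackup. The host names, replication user, and data directory below are placeholders, and the exact steps vary by PostgreSQL version and distribution:

# On the primary, first create a replication role and allow it in pg_hba.conf.
# Then, on the standby, seed the data directory and generate replication settings.
pg_basebackup --host=pg-primary --username=replicator \
    --pgdata=/var/lib/postgresql/15/main \
    --wal-method=stream --write-recovery-conf

# --write-recovery-conf creates standby.signal and writes primary_conninfo,
# for example: primary_conninfo = 'host=pg-primary port=5432 user=replicator'.
# Synchronous replication (discussed later in this document) is controlled on
# the primary with settings such as synchronous_standby_names and synchronous_commit.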
Each replica node requires dedicated compute and storage resources. Replica node infrastructure is independent from the primary. You can use replica nodes as hot standbys to serve read-only client queries. This approach allows read-only query load-balancing across the primary and one or more replicas. Streaming replication is by default asynchronous; the primary doesn't wait for a confirmation from a replica before it confirms a transaction commit to the client. If a primary suffers a failure after it confirms the transaction, but before a replica receives the transaction, asynchronous replication can result in a data loss. If the replica is promoted to become a new primary, such a transaction wouldn't be present. Synchronous streaming replication You can configure streaming replication as synchronous by choosing one or more replicas to be a synchronous standby. If you configure your architecture for synchronous replication, the primary doesn't confirm a transaction commit until after the replica acknowledges the transaction persistence. Synchronous streaming replication provides increased durability in return for a higher transaction latency. The synchronous_commit configuration option also lets you configure the following progressive replica durability levels for the transaction: local: synchronous standby replicas are not involved in the commit acknowledgement. The primary acknowledges transaction commits after WAL records are written and flushed to its local disk. Transaction commits on the primary don't involve standby replicas. Transactions can be lost if there is any failure on the primary. on [default]: synchronous standby replicas write the committed transactions to their WAL before they send acknowledgment to the primary. Using the on configuration ensures that the transaction can only be lost if the primary and all synchronous standby replicas suffer simultaneous storage failures. Because the replicas only send an acknowledgment after they write WAL records, clients that query the replica won't see changes until the respective WAL records are applied to the replica database. remote_write: synchronous standby replicas acknowledge receipt of the WAL record at the OS level, but they don't guarantee that the WAL record was written to disk. Because remote_write doesn't guarantee that the WAL was written, the transaction can be lost if there is any failure on both the primary and secondary before the records are written. remote_write has lower durability than the on option. remote_apply: synchronous standby replicas acknowledge transaction receipt and successful application to the database before they acknowledge the transaction commit to the client. Using the remote_apply configuration ensures that the transaction is persisted to the replica, and that client query results immediately include the effects of the transaction. remote_apply provides increased durability and consistency compared to on and remote_write. The synchronous_commit configuration option works with the synchronous_standby_names configuration option that specifies the list of standby servers which take part in the synchronous replication process. If no synchronous standby names are specified, transaction commits don't wait for replication. PostgreSQL HA architectures At the most basic level, data tier HA consists of the following: A mechanism to identify if a failure of the primary node occurs. A process to perform a failover in which the replica node is promoted to be a primary node. 
A process to change the query routing so that application requests reach the new primary node. Optionally, a method to fallback to the original architecture using pre-failover primary and replica nodes in their original capacities. The following sections provide an overview of the following HA architectures: The Patroni template pg_auto_failover extension and service Stateful MIGs and regional persistent disk These HA solutions minimize downtime if there is an infrastructure or zonal outage. When you choose between these options, balance commit latency and durability according to your business needs. A critical aspect of an HA architecture is the time and manual effort that are required to prepare a new standby environment for subsequent failover or fallback. Otherwise, the system can only withstand one failure, and the service doesn't have protection from an SLA violation. We recommend that you select an HA architecture that can perform manual failovers, or switchovers, with the production infrastructure. HA using the Patroni template Patroni is a mature and actively maintained, open source (MIT licensed) software template that provides you with the tools to configure, deploy, and operate a PostgreSQL HA architecture. Patroni provides a shared cluster state and an architecture configuration that is persisted in a distributed configuration store (DCS). Options for implementing a DCS include: etcd, Consul, Apache ZooKeeper, or Kubernetes. The following diagram shows the major components of a Patroni cluster. Figure 1. Diagram of the major components of a Patroni cluster. In figure 1, the load balancers front the PostgreSQL nodes, and the DCS and the Patroni agents operate on the PostgreSQL nodes. Patroni runs an agent process on each PostgreSQL node. The agent process manages the PostgreSQL process and data node configuration. The Patroni agent coordinates with other nodes through the DCS. The Patroni agent process also exposes a REST API that you can query to determine the PostgreSQL service health and configuration for each node. To assert its cluster membership role, the primary node regularly updates the leader key in the DCS. The leader key includes a time to live (TTL). If the TTL elapses without an update, the leader key is evicted from the DCS, and the leader election starts to select a new primary from the candidate pool. The following diagram shows a healthy cluster in which Node A successfully updates the leader lock. Figure 2. Diagram of a healthy cluster. Figure 2 shows a healthy cluster: Node B and Node C watch while Node A successfully updates leader key. Failure detection The Patroni agent continuously telegraphs its health by updating its key in the DCS. At the same time, the agent validates PostgreSQL health; if the agent detects an issue, it either self-fences the node by shutting itself down, or demotes the node to a replica. As shown in the following diagram, if the impaired node is the primary, its leader key in the DCS expires, and a new leader election occurs. Figure 3. Diagram of an impaired cluster. Figure 3 shows an impaired cluster: a down primary node hasn't recently updated its leader key in the DCS, and the non-leader replicas are notified that the leader key has expired. On Linux hosts, Patroni also runs an OS-level watchdog on primary nodes. This watchdog listens for keep-alive messages from the Patroni agent process. If the process becomes unresponsive, and the keep alive isn't sent, the watchdog restarts the host. 
The watchdog helps prevent a split brain condition in which the PostgreSQL node continues to serve as the primary, but the leader key in the DCS expired due to agent failure, and a different primary (leader) was elected. Failover process If the leader lock expires in the DCS, the candidate replica nodes begin a leader election. When a replica discovers a missing leader lock, it checks its replication position compared to the other replicas. Each replica uses the REST API to get the WAL log positions of the other replica nodes, as shown in the following diagram. Figure 4. Diagram of the Patroni failover process. Figure 4 shows WAL log position queries and respective results from the active replica nodes. Node A isn't available, and the healthy nodes B and C return the same WAL position to each other. The most up-to-date node (or nodes if they are at the same position) simultaneously attempt to acquire the leader lock in the DCS. However, only one node can create the leader key in the DCS. The first node to successfully create the leader key is the winner of the leader race, as shown in the following diagram. Alternatively, you can designate preferred failover candidates by setting the failover_priority tag in the configuration files. Figure 5. Diagram of the leader race. Figure 5 shows a leader race: two leader candidates try to obtain the leader lock, but only one of the two nodes, Node C, successfully sets the leader key and wins the race. Upon winning the leader election, the replica promotes itself to be the new primary. Starting at the time that the replica promotes itself, the new primary updates the leader key in the DCS to retain the leader lock, and the other nodes serve as replicas. Patroni also provides the patronictl control tool that lets you run switchovers to test the nodal failover process. This tool helps operators to test their HA setups in production. Query routing The Patroni agent process that runs on each node exposes REST API endpoints that reveal the current node role: either primary or replica. REST endpoint HTTP return code if primary HTTP return code if replica /primary 200 503 /replica 503 200 Because the relevant health checks change their responses if a particular node changes its role, a load balancer health check can use these endpoints to inform primary and replica node traffic routing. The Patroni project provides template configurations for a load balancer such as HAProxy . The internal passthrough Network Load Balancer can use these same health checks to provide similar capabilities. Fallback process If there is a node failure, a cluster is left in a degraded state. Patroni's fallback process helps to restore an HA cluster back to a healthy state after a failover. The fallback process manages the return of the cluster to its original state by automatically initializing the impacted node as a cluster replica. For example, a node might restart due to a failure in the operating system or underlying infrastructure. If the node is the primary and takes longer than the leader key TTL to restart, a leader election is triggered and a new primary node is selected and promoted. When the stale primary Patroni process starts, it detects that it doesn't have the leader lock, automatically demotes itself to a replica, and joins the cluster in that capacity. If there is an unrecoverable node failure, such as an unlikely zonal failure, you need to start a new node. 
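To make the query routing and switchover capabilities described above concrete, the following sketch shows how a health check or an operator might interact with a Patroni node. The host name and configuration file path are placeholders, and the commands assume Patroni's default REST API port (8008):

# A load balancer health check (or curl) receives HTTP 200 only from the node
# that currently holds the probed role, and 503 otherwise.
curl -s -o /dev/null -w '%{http_code}\n' http://patroni-node-1:8008/primary
curl -s -o /dev/null -w '%{http_code}\n' http://patroni-node-1:8008/replica

# Run a manual switchover to exercise the failover path before you rely on it.
patronictl -c /etc/patroni/patroni.yml switchover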
A database operator can manually start a new node, or you can use a stateful regional managed instance group (MIG) with a minimum node count to automate the process. After the new node is created, Patroni detects that the new node is part of an existing cluster and automatically initializes the node as a replica. HA using the pg_auto_failover extension and service pg_auto_failover is an actively developed, open source (PostgreSQL license) PostgreSQL extension. pg_auto_failover configures an HA architecture by extending existing PostgreSQL capabilities. pg_auto_failover doesn't have any dependencies other than PostgreSQL. To use the pg_auto_failover extension with an HA architecture, you need at least three nodes, each running PostgreSQL with the extension enabled. Any of the nodes can fail without affecting the uptime of the database group. A collection of nodes managed by pg_auto_failover is called a formation. The following diagram shows a pg_auto_failover architecture. Figure 6. Diagram of a pg_auto_failover architecture. Figure 6 shows a pg_auto_failover architecture that consists of two main components: the Monitor service and the Keeper agent. Both the Keeper and Monitor are contained in the pg_auto_failover extension. Monitor service The pg_auto_failover Monitor service is implemented as a PostgreSQL extension; when the service creates a Monitor node, it starts a PostgreSQL instance with the pg_auto_failover extension enabled. The Monitor maintains the global state for the formation, obtains health check status from the member PostgreSQL data nodes, and orchestrates the group using the rules established by a finite state machine (FSM). According to the FSM rules for state transitions, the Monitor communicates instructions to the group nodes for actions like promote, demote, and configuration changes. Keeper agent On each pg_auto_failover data node, the extension starts a Keeper agent process. This Keeper process observes and manages the PostgreSQL service. The Keeper sends status updates to the Monitor node, and receives and executes actions that the Monitor sends in response. By default, pg_auto_failover sets up all group secondary data nodes as synchronous replicas. The number of synchronous replicas required for a commit are based on the number_sync_standby configuration that you set on the Monitor. Failure detection The Keeper agents on primary and secondary data nodes periodically connect to the Monitor node to communicate their current state, and check whether there are any actions to be executed. The Monitor node also connects to the data nodes to perform a health check by executing the PostgreSQL protocol (libpq) API calls, imitating the pg_isready() PostgreSQL client application. If neither of these actions are successful after a period of time (30 seconds by default), the Monitor node determines that a data node failure occurred. You can change the PostgreSQL configuration settings to customize monitor timing and number of retries. For more information, see Failover and fault tolerance. If a single-node failure occurs, one of the following is true: If the unhealthy data node is a primary, the Monitor starts a failover. If the unhealthy data node is a secondary, the Monitor disables synchronous replication for the unhealthy node. If the failed node is the Monitor node, automated failover isn't possible. To avoid this single point of failure, you need to ensure that the right monitoring and disaster recovery is in place. 
The following diagram shows the failure scenarios and the formation result states that are described in the preceding list. Figure 7. Diagram of the pg_auto_failover failure scenarios. Failover process Each database node in the group has the following configuration options that determine the failover process: replication_quorum: a boolean option. If replication_quorum is set to true, the node is considered a potential failover candidate. candidate_priority: an integer value from 0 through 100. candidate_priority has a default value of 50 that you can change to affect failover priority. Nodes are prioritized as potential failover candidates based on the candidate_priority value. Nodes that have a higher candidate_priority value have a higher priority. The failover process requires that at least two nodes have a nonzero candidate priority in any pg_auto_failover formation. If there is a primary node failure, secondary nodes are considered for promotion to primary if they have active synchronous replication and if they are members of the replication_quorum. Secondary nodes are considered for promotion according to the following progressive criteria: Nodes with the highest candidate priority. The standby with the most advanced WAL log position published to the Monitor. Random selection as a final tie break. A failover candidate is a lagging candidate when it hasn't published the most advanced LSN position in the WAL. In this scenario, pg_auto_failover orchestrates an intermediate step in the failover mechanism: the lagging candidate fetches the missing WAL bytes from a standby node that has the most advanced LSN position. The lagging candidate is then promoted. Postgres allows this operation because cascading replication lets any standby act as the upstream node for another standby. Query routing pg_auto_failover doesn't provide any server-side query routing capabilities. Instead, pg_auto_failover relies on client-side query routing that uses the official PostgreSQL client driver libpq. When you define the connection URI, the driver can accept multiple hosts in its host keyword. The client library that your application uses must either wrap libpq or implement the ability to supply multiple hosts for the architecture to support a fully automated failover. Fallback and switchover processes When the Keeper process restarts a failed node or starts a new replacement node, the process checks the Monitor node to determine the next action to perform. If a failed, restarted node was formerly the primary, and the Monitor has already picked a new primary according to the failover process, the Keeper reinitializes this stale primary as a secondary replica. pg_auto_failover provides the pg_autoctl tool, which lets you run switchovers to test the node failover process. Along with letting operators test their HA setups in production, the tool helps you restore an HA cluster back to a healthy state after a failover. HA using stateful MIGs and regional persistent disk This section describes an HA approach that uses the following Google Cloud components: regional persistent disk. When you use regional persistent disks, data is synchronously replicated between two zones in a region, so you don't need to use streaming replication. However, HA is limited to exactly two zones in a region. Stateful managed instance groups. A pair of stateful MIGs are used as part of a control plane to keep one primary PostgreSQL node running. When the stateful MIG starts a new instance, it can attach the existing regional persistent disk.
At a single point in time, only one of the two MIGs will have a running instance. Cloud Storage. An object in a Cloud Storage bucket contains a configuration that indicates which of the two MIGs is running the primary database node, and in which MIG a failover instance should be created. MIG health checks and autohealing. The health check monitors instance health. If the running node becomes unhealthy, the health check initiates the autohealing process. Logging. When autohealing stops the primary node, an entry is recorded in Logging. The pertinent log entries are exported to a Pub/Sub sink topic using a filter. Event-driven Cloud Run functions. The Pub/Sub message triggers Cloud Run functions. Cloud Run functions uses the config in Cloud Storage to determine what actions to take for each stateful MIG. Internal passthrough Network Load Balancer. The load balancer provides routing to the running instance in the group. This ensures that an instance IP address change that's caused by instance recreation is abstracted from the client. The following diagram shows an example of an HA using stateful MIGs and regional persistent disks: Figure 8. Diagram of an HA that uses stateful MIGs and regional persistent disks. Figure 8 shows a healthy Primary node serving client traffic. Clients connect to the internal passthrough Network Load Balancer's static IP address. The load balancer routes client requests to the VM that is running as part of the MIG. Data volumes are stored on mounted regional persistent disks. To implement this approach, create a VM image with PostgreSQL that starts up on initialization to be used as the instance template of the MIG. You also need to configure an HTTP-based health check (such as HAProxy or pgDoctor) on the node. An HTTP-based health check helps ensure that both the load balancer and the instance group can determine the health of the PostgreSQL node. Regional persistent disk To provision a block storage device that provides synchronous data replication between two zones in a region, you can use the Compute Engine regional persistent disk storage option. A regional persistent disk can provide a foundational building block for you to implement a PostgreSQL HA option that doesn't rely on PostgreSQL's built-in streaming replication. If your primary node VM instance becomes unavailable due to infrastructure failure or zonal outage, you can force the regional persistent disk to attach to a VM instance in your backup zone in the same region. To attach the regional persistent disk to a VM instance in your backup zone, you can do one of the following: Maintain a cold standby VM instance in the backup zone. A cold standby VM instance is a stopped VM instance that doesn't have a regional Persistent Disk mounted to it but is an identical VM instance to the primary node VM instance. If there is a failure, the cold standby VM is started and the regional persistent disk is mounted to it. The cold standby instance and the primary node instance have the same data. Create a pair of stateful MIGs using the same instance template. The MIGs provide health checks and serve as part of the control plane. If the primary node fails, a failover instance is created in the target MIG declaratively. The target MIG is defined in the Cloud Storage object. A per-instance configuration is used to attach the regional persistent disk. If the data service outage is promptly identified, the force-attach operation typically completes in less than one minute, so an RTO measured in minutes is attainable. 
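As a sketch of the HTTP-based health check mentioned earlier in this section, the following commands create a health check and attach it to a regional stateful MIG as its autohealing policy. The resource names, port, request path, and thresholds are placeholder values, not recommendations:

# Create an HTTP health check that probes the node's health endpoint
# (for example, one exposed by HAProxy or pgDoctor on the VM).
gcloud compute health-checks create http postgres-health-check \
    --port=8080 \
    --request-path=/ \
    --check-interval=10s \
    --timeout=5s \
    --unhealthy-threshold=3

# Use the health check for autohealing on one of the regional stateful MIGs.
gcloud compute instance-groups managed update postgres-mig-a \
    --region=us-central1 \
    --health-check=postgres-health-check \
    --initial-delay=300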
If your business can tolerate the additional downtime required for you to detect and communicate an outage, and for you to perform the failover manually, then you don't need to automate the force-attach process. If your RTO tolerance is lower, you can automate the detection and failover process. Alternatively Cloud SQL for PostgreSQL also provides a fully managed implementation of this HA approach. Failure detection and failover process The HA approach uses the autohealing capabilities of instance groups to monitor node health using a health check. If there is a failed health check, the existing instance is considered unhealthy, and the instance is stopped. This stop initiates the failover process using Logging, Pub/Sub, and the triggered Cloud Run functions function. To fulfill the requirement that this VM always has the regional disk mounted, one of the two the MIGs will be configured by the Cloud Run functions to create an instance in one of the two zones where the regional persistent disk is available. Upon a node failure, the replacement instance is started, according to the state persisted in Cloud Storage, in the alternate zone. Figure 9. Diagram of a zonal failure in a MIG. In figure 9, the former primary node in Zone A has experienced a failure and the Cloud Run functions has configured MIG B to launch a new primary instance in Zone B. The failure detection mechanism is automatically configured to monitor the health of the new primary node. Query routing The internal passthrough Network Load Balancer routes clients to the instance that is running the PostgreSQL service. The load balancer uses the same health check as the instance group to determine whether the instance is available to serve queries. If the node is unavailable because it's being recreated, the connections fail. After the instance is back up, health checks start passing and the new connections are routed to the available node. There are no read-only nodes in this setup because there is only one running node. Fallback process If the database node is failing a health check due to an underlying hardware issue, the node is recreated on a different underlying instance. At that point, the architecture is returned to its original state without any additional steps. However, if there is a zonal failure, the setup continues to run in a degraded state until the first zone recovers. While highly unlikely, if there are simultaneous failures in both zones that are configured for the regional Persistent Disk replication and stateful MIG, the PostgreSQL instance can't recover–the database is unavailable to serve requests during the outage. Comparison between the HA options The following tables provide a comparison of the HA options available from Patroni, pg_auto_failover, and Stateful MIGs with regional persistent disks. Setup and architecture Patroni pg_auto_failover Stateful MIGs with regional persistent disks Requires an HA architecture, DCS setup, and monitoring and alerting. Agent setup on data nodes is relatively straightforward. Doesn't require any external dependencies other than PostgreSQL. Requires a node dedicated as a monitor. The monitor node requires HA and DR to ensure that it isn't a single point of failure (SPOF). Architecture that consists exclusively of Google Cloud services. You only run one active database node at a time. 
High availability configurability Patroni pg_auto_failover Stateful MIGs with regional persistent disks Extremely configurable: supports both synchronous and asynchronous replication, and lets you specify which nodes are to be synchronous and asynchronous. Includes automatic management of the synchronous nodes. Allows for multiple zone and multi-region HA setups. The DCS must be accessible. Similar to Patroni: very configurable. However, because the monitor is only available as a single instance, any type of setup needs to consider access to this node. Limited to two zones in a single region with synchronous replication. Ability to handle network partition Patroni pg_auto_failover Stateful MIGs with regional persistent disks Self-fencing along with an OS-level monitor provides protection against split brain. Any failure to connect to the DCS results in the primary demoting itself to a replica and triggering a failover to ensure durability over availability. Uses a combination of health checks from the primary to the monitor and to the replica to detect a network partition, and demotes itself if necessary. Not applicable: there is only one active PostgreSQL node at a time, so there isn't a network partition. Cost Patroni pg_auto_failover Stateful MIGs with regional persistent disks High cost because it depends on the DCS that you choose and the number of PostgreSQL replicas. The Patroni architecture doesn't add significant cost. However, the overall expense is affected by the underlying infrastructure, which uses multiple compute instances for PostgreSQL and the DCS. Because it uses multiple replicas and a separate DCS cluster, this option can be the most expensive. Medium cost because it involves running a monitor node and at least three PostgreSQL nodes (one primary and two replicas). Low cost because only one PostgreSQL node is actively running at any given time. You only pay for a single compute instance. Client configuration Patroni pg_auto_failover Stateful MIGs with regional persistent disks Transparent to the client because it connects to a load balancer. Requires client library to support multiple host definition in setup because it isn't easily fronted with a load balancer. Transparent to the client because it connects to a load balancer. Scalability Patroni pg_auto_failover Stateful MIGs with regional persistent disks High flexibility in configuring for scalability and availability tradeoffs. Read scaling is possible by adding more replicas. Similar to Patroni: Read scaling is possible by adding more replicas. Limited scalability due to only having one active PostgreSQL node at a time. Automation of PostgreSQL node initialization, configuration management Patroni pg_auto_failover Stateful MIGs with regional persistent disks Provides tools to manage PostgreSQL configuration (patronictl edit-config) and automatically initializes new nodes or restarted nodes in the cluster. You can initialize nodes using pg_basebackup or other tools like barman. Automatically initializes nodes, but limited to only using pg_basebackup when initializing a new replica node. Configuration management is limited to pg_auto_failover-related configurations. Stateful instance group with shared disk removes the need for any PostgreSQL node initialization. Because there is only ever one node running, configuration management is on a single node. 
Customizability and feature richness Patroni pg_auto_failover Stateful MIGs with regional persistent disks Provides a hook interface to allow for user definable actions to be called at key steps, such as on demotion or on promotion. Feature-rich configurability like support for different types of DCS, different means to initialize replicas, and different ways to provide PostgreSQL configuration. Lets you set up standby clusters that allow for cascaded replica clusters to ease migration between clusters. Limited because it's a relatively new project. Not applicable. Maturity Patroni pg_auto_failover Stateful MIGs with regional persistent disks Project has been available since 2015 and it's used in production by large companies like Zalando and GitLab. Relatively new project announced early 2019. Composed entirely of generally available Google Cloud products. Best Practices for Maintenance and Monitoring Maintaining and monitoring your PostgreSQL HA cluster is crucial for ensuring high availability, data integrity, and optimal performance. The following sections provide some best practices for monitoring and maintaining a PostgreSQL HA cluster. Perform regular backups and recovery testing Regularly back up your PostgreSQL databases and test the recovery process. Doing so helps to ensure data integrity and minimizes downtime in case of an outage. Test your recovery process to validate your backups and identify potential issues before an outage occurs. Monitor PostgreSQL servers and replication lag Monitor your PostgreSQL servers to verify that they're running. Monitor the replication lag between the primary and replica nodes. Excessive lag can lead to data inconsistency and increased data loss in case of a failover. Set up alerts for significant lag increases and investigate the root cause promptly. Using views like pg_stat_replication and pg_replication_slots can help you to monitor replication lag. Implement connection pooling Connection pooling can help you to efficiently manage database connections. Connection pooling helps to reduce the overhead of establishing new connections, which improves application performance and database server stability. Tools such as PGBouncer and Pgpool-II can provide connection pooling for PostgreSQL. Implement comprehensive monitoring To gain insights into your PostgreSQL HA clusters, establish robust monitoring systems as follows: Monitor key PostgreSQL and system metrics, such as CPU utilization, memory usage, disk I/O, network activity, and active connections. Collect PostgreSQL logs, including server logs, WAL logs, and autovacuum logs, for deep analysis and troubleshooting. Use monitoring tools and dashboards to visualize metrics and logs for rapid issue identification. Integrate metrics and logs with alerting systems for proactive notification of potential problems. For more information about monitoring a Compute Engine instance, see Cloud Monitoring overview. What's next Read about the Cloud SQL high availability configuration. Learn more about High availability options using regional persistent disk. Read about Patroni. Read about pg_auto_failover. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthor: Alex Cârciu | Solutions Architect Send feedback \ No newline at end of file diff --git a/Hub-and-spoke_network_architecture.txt b/Hub-and-spoke_network_architecture.txt new file mode 100644 index 0000000000000000000000000000000000000000..95cdada385c8504927082cc807f9866e49ac97e6 --- /dev/null +++ b/Hub-and-spoke_network_architecture.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deploy-hub-spoke-vpc-network-topology +Date Scraped: 2025-02-23T11:53:43.255Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hub-and-spoke network architecture Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-29 UTC This document presents two architectural options for setting up a hub-and-spoke network topology in Google Cloud. One option uses the network peering capability of Virtual Private Cloud (VPC), and the other uses Cloud VPN. Enterprises can separate workloads into individual VPC networks for purposes of billing, environment isolation, and other considerations. However, the enterprise might also need to share specific resources across these networks, such as a shared service or a connection to on-premises. In such cases, it can be useful to place the shared resource in a hub network and to attach the other VPC networks as spokes. The following diagram shows an example of the resulting hub-and-spoke network, sometimes called a star topology. In this example, separate spoke VPC networks are used for the workloads of individual business units within a large enterprise. Each spoke VPC network is connected to a central hub VPC network that contains shared services and can serve as the sole entry point to the cloud from the enterprise's on-premises network. Architecture using VPC Network Peering The following diagram shows a hub-and-spoke network using VPC Network Peering. VPC Network Peering enables communication using internal IP addresses between resources in separate VPC networks. Traffic stays on Google's internal network and does not traverse the public internet. In this architecture, the resources that need network-level isolation use separate spoke VPC networks. For example, the architecture shows a Compute Engine VM in the spoke-1 VPC network. The spoke-2 VPC network has a Compute Engine VM and a Google Kubernetes Engine (GKE) cluster. Each spoke VPC network in this architecture has a peering relationship with a central hub VPC network. VPC Network Peering does not constrain VM bandwidth. Each VM can send traffic at the full bandwidth of that individual VM. Each spoke VPC network has a Cloud NAT gateway for outbound communication with the internet. VPC Network Peering does not provide for transitive route announcements. Unless an additional mechanism is used, the VM in the spoke-1 network cannot send traffic to the VM in the spoke-2 network. To work around this non-transitivity constraint, the architecture shows the option of using Cloud VPN to forward routes between the networks. In this example, VPN tunnels between the spoke-2 VPC network and the hub VPC network enable reachability to the spoke-2 VPC network from the other spokes. If you need connectivity between only a few specific spokes, you can peer those VPC network pairs directly. Architecture using Cloud VPN The scalability of a hub-and-spoke topology that uses VPC Network Peering is subject to VPC Network Peering limits. 
And as noted earlier, VPC Network Peering connections don't allow transitive traffic beyond the two VPC networks that are in a peering relationship. The following diagram shows an alternative hub-and-spoke network architecture that uses Cloud VPN to overcome the limitations of VPC Network Peering. The resources that need network-level isolation use separate spoke VPC networks. IPSec VPN tunnels connect each spoke VPC network to a hub VPC network. A DNS private zone in the hub network and a DNS peering zone and private zone exist in each spoke network. Bandwidth between networks is limited by the total bandwidths of the tunnels. When choosing between the two architectures discussed so far, consider the relative merits of VPC Network Peering and Cloud VPN: VPC Network Peering has the non-transitivity constraint, but it supports the full bandwidth defined by the machine type of the VMs and other factors that determine network bandwidth. However, you can add transitive routing by adding VPN tunnels. Cloud VPN allows transitive routing, but the total bandwidth (ingress plus egress) is limited to the bandwidths of the tunnels. Design alternatives Consider the following architectural alternatives for interconnecting resources that are deployed in separate VPC networks in Google Cloud: Inter-spoke connectivity using a gateway in the hub VPC network To enable inter-spoke communication, you can deploy a network virtual appliance (NVA) or a next-generation firewall (NGFW) on the hub VPC network, to serve as a gateway for spoke-to-spoke traffic. VPC Network Peering without a hub If you don't need centralized control over on-premises connectivity or sharing services across VPC networks, then a hub VPC network isn't necessary. You can set up peering for the VPC network pairs that require connectivity, and manage the interconnections individually. Do consider the limits on the number of peering relationships per VPC network. Multiple Shared VPC networks Create a Shared VPC network for each group of resources that you want to isolate at the network level. For example, to separate the resources used for production and development environments, create a Shared VPC network for production and another Shared VPC network for development. Then, peer the two VPC networks to enable inter-VPC network communication. Resources in individual projects for each application or department can use services from the appropriate Shared VPC network. For connectivity between the VPC networks and your on-premises network, you can use either separate VPN tunnels for each VPC network, or separate VLAN attachments on the same Dedicated Interconnect connection. What's next Deploy a hub-and-spoke network using VPC Network Peering. Deploy a hub-and-spoke network using Cloud VPN. Learn about more design options for connecting multiple VPC networks. Learn about the best practices for building a secure and resilient cloud topology that's optimized for cost and performance. Send feedback \ No newline at end of file diff --git a/Hybrid.txt b/Hybrid.txt new file mode 100644 index 0000000000000000000000000000000000000000..fe04511064ff34523c3a64d08585dfc85b909d64 --- /dev/null +++ b/Hybrid.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes/hybrid +Date Scraped: 2025-02-23T11:44:46.733Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud hybrid deployment archetype Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2024-11-20 UTC This section of the Google Cloud deployment archetypes guide describes the hybrid deployment archetype, provides examples of use cases, and discusses design considerations. In an architecture that's based on the hybrid deployment archetype, some parts of the application are deployed in Google Cloud, and other parts run on-premises. Use cases The following sections provide examples of use cases for which the hybrid deployment archetype is an appropriate choice. Note: For each of these use cases, the Google Cloud part of the architecture can use the zonal, regional, multi-regional, or global deployment archetype. Disaster recovery (DR) site for an on-premises application For mission-critical applications that you run on-premises, you can back up the data to Google Cloud and maintain a replica in the cloud, as shown in the following diagram. The backup frequency and whether the replica needs to be active or passive depends on your recovery time objective (RTO) and recovery point objective (RPO). When the on-premises application is down due to planned or unplanned events, you can activate the replica in Google Cloud to restore the application to production. On-premises development for cloud applications For an application that runs in Google Cloud, you can keep the development environments on-premises, and use a CI/CD pipeline to push updates to the cloud, as shown in the following diagram. This architecture lets you retain control over your development activities while enjoying the benefits that Google Cloud offers for scalability, cost optimization, and reliability. Enhancing on-premises applications with cloud capabilities Google Cloud offers advanced capabilities in many areas, including storage, artificial intelligence (AI) and machine learning (ML), big data, and analytics. The hybrid deployment archetype lets you use these advanced Google Cloud capabilities even for applications that you run on-premises. The following are examples of these capabilities: Low-cost, unlimited archive storage in the cloud for an on-premises application. AI and ML applications in the cloud for data generated by an on-premises application. Cloud-based data warehouse and analytics processes using BigQuery for data ingested from on-premises data sources. Cloud bursting, to handle overflow traffic when the load on the on-premises application reaches peak capacity. The following diagram shows a hybrid topology where data from an on-premises application is uploaded to Google Cloud. Data analysts analyze the uploaded data by using advanced AI, ML, big data, and analytics capabilities in Google Cloud. Tiered hybrid topology In this topology, which is sometimes called a split-stack deployment, the application's frontend is in Google Cloud, and the backend is on-premises. The frontend might include capabilities like load balancing, CDN, DDoS protection, and access policies. The frontend sends traffic to the on-premises backend for processing, as shown in the following diagram: This architecture might be suitable when an application is used globally but the backend needs to be within a single, controlled environment. A variation of this use case is to run the frontend on-premises and deploy the backend in Google Cloud. More information For more information about the rationale and use cases for the hybrid deployment archetype, see Build hybrid and multicloud architectures using Google Cloud. 
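To make the disaster recovery backup use case described earlier more concrete, the following commands sketch one way to copy an on-premises backup directory into a Cloud Storage bucket. This is a minimal, hedged illustration rather than part of the original guidance: the bucket name, location, storage class, and local path are hypothetical placeholders, and you should confirm the exact gcloud storage flags for your environment.

# Create a bucket to hold on-premises backups (name, location, and storage class are examples).
gcloud storage buckets create gs://example-onprem-dr-backups \
    --location=us-central1 \
    --default-storage-class=NEARLINE \
    --uniform-bucket-level-access

# Periodically synchronize the local backup directory to the bucket,
# for example from a scheduled job on the on-premises backup server.
gcloud storage rsync --recursive /srv/backups gs://example-onprem-dr-backups/backups

How often you run the synchronization, and whether you keep the cloud replica active or passive, follows from the RPO and RTO that you define for the application.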
Design considerations When you build an architecture that's based on the hybrid deployment archetype, consider the following design factors. On-premises to cloud network connection For efficient network communication between your on-premises environment and the resources in Google Cloud, you need a network connection that's reliable and secure. For more information about hybrid connectivity options offered by Google Cloud, see Choosing a Network Connectivity product. Setup effort and operational complexity Setting up and operating a hybrid topology requires more effort than an architecture that uses only Google Cloud. To operate this topology, you need to manage resources consistently across the on-premises and Google Cloud environments. To manage containerized hybrid applications, you can use GKE Enterprise, which is a unified orchestration platform to manage Kubernetes clusters in multiple locations. Cost of redundant resources A hybrid deployment is potentially more expensive than a cloud-only deployment, because data might need to be stored redundantly on-premises and in the cloud. Also, some of the redundant resources might be underutilized. When you build an architecture that's based on the hybrid deployment archetype, consider the potentially higher overall cost of resources. Example architectures For examples of architectures that use the hybrid deployment archetype, see Build hybrid and multicloud architectures using Google Cloud. Previous arrow_back Global Next Multicloud arrow_forward Send feedback \ No newline at end of file diff --git a/Hybrid_and_multicloud_monitoring_and_logging_patterns.txt b/Hybrid_and_multicloud_monitoring_and_logging_patterns.txt new file mode 100644 index 0000000000000000000000000000000000000000..f2c843e38381286c4ef111f1fd5ff12a5a24ed2c --- /dev/null +++ b/Hybrid_and_multicloud_monitoring_and_logging_patterns.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-and-multi-cloud-monitoring-and-logging-patterns +Date Scraped: 2025-02-23T11:53:24.965Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hybrid and multicloud monitoring and logging patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-11 UTC This document discusses monitoring and logging architectures for hybrid and multicloud deployments, and provides best practices for implementing them by using Google Cloud. With this document, you can identify which patterns and products are best suited for your environments. Every enterprise has a unique portfolio of application workloads that place requirements and constraints on the architecture of a hybrid or multicloud setup. Although you must design and tailor your architecture to meet these constraints and requirements, you can rely on some common patterns. The patterns covered in this document fall into two categories: In a single pane of glass architecture, all monitoring and logging is centralized, with the aim of providing a single point of access and control. In a separate application and operations architecture, sensitive application data is segregated from less sensitive operations data, with the aim of meeting compliance requirements for sensitive data. Choosing your architecture pattern You can use the decision tree in the following diagram to identify the best architecture for your use case. 
Details of each architecture are discussed further in this document, but at a high level, your choices are as follows: Export from Monitoring to legacy solution. Export directly to legacy solution. Use Monitoring with Prometheus and Fluentd or Fluent Bit. Use Monitoring with observIQ BindPlane. Single pane of glass architecture A common goal for a hybrid system is to integrate monitoring and logging information from various sources across multiple applications and environments into a single display. This type of display is called a single pane of glass. The following diagram illustrates this pattern where monitoring and logging data from all applications, both on-premises and in the cloud, is centralized into a single repository hosted in the cloud. This architecture has the following advantages: You have a single, consistent view for all monitoring and logging. You have a single place to manage data storage and retention. You get centralized access control and auditing. However, you still need to ensure the security of data in transit to the central repository. Monitoring as a single pane of glass Cloud Monitoring is a Google-managed monitoring and management solution for services, containers, applications, and infrastructure. For a single pane of glass and a robust storage solution for metrics, logs, traces, and events, use Google Cloud Observability. The suite also provides a complete suite of observability tooling, such as dashboards, reporting, and alerting. All Google Cloud products and services support integration with Monitoring. In addition, there are several integrated tools that you can use to extend Monitoring to hybrid and on-premises resources. The following best practices apply to all architectures using Monitoring as a single pane of glass: To fulfill compliance requirements for log retention, set up log sinks for your organization. For fast analysis of log events, set up log exports to BigQuery for security and access analytics. To analyze logs that are stored in your log bucket, run SQL queries through Log Analytics. For projects containing sensitive data, consider enabling Data Access audit logs, so you can track who has accessed the data. To remove sensitive information, such as Social Security numbers, credit card numbers, and email addresses, you can filter log data. You can filter by using a custom Fluent Bit configuration or ingest with logs exclusions. You can also export unfiltered logs separately to meet compliance requirements. Hybrid monitoring and logging with Monitoring and BindPlane by observIQ With BindPlane from Google's partner observIQ, you can import monitoring and logging data from both on-premises VMs and other cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, Alibaba Cloud, and IBM Cloud into Google Cloud. The following diagram shows how Monitoring and BindPlane can provide a single pane of glass for a hybrid cloud. This architecture has the following advantages: In addition to monitoring resources like VMs, BindPlane has built-in deep integration for over 50 popular data sources. There are no additional licensing costs for using BindPlane. BindPlane metrics are imported into Monitoring as custom metrics, which are chargeable. Likewise, BindPlane logs are charged at the same rate as other Logging logs. For more details about implementing this pattern, see Logging and monitoring on-premises resources with BindPlane. 
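The organization-level log sink and BigQuery export that are recommended in the best practices earlier in this section can be implemented with a single aggregated sink. The following hedged sketch is an illustration only: the sink name, project, dataset, and filter are hypothetical placeholders, and ORGANIZATION_ID is the numeric ID of your organization.

# Create an aggregated sink that exports audit logs from the whole organization to BigQuery.
gcloud logging sinks create example-org-audit-sink \
    bigquery.googleapis.com/projects/example-logging-project/datasets/org_audit_logs \
    --organization=ORGANIZATION_ID \
    --include-children \
    --log-filter='logName:"cloudaudit.googleapis.com"'

The command prints a writer identity service account. Grant that identity the BigQuery Data Editor role on the destination dataset so that the sink can write log entries.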
Hybrid Google Kubernetes Engine monitoring with Prometheus and Monitoring With Google Cloud Managed Service for Prometheus, a fully managed service for the popular open source Prometheus monitoring solution, you can monitor applications running on multiple Kubernetes clusters with Monitoring. This architecture is useful when running Kubernetes workloads distributed across Google Kubernetes Engine (GKE) on Google Cloud and Google Distributed Cloud in your on-premises data center, because it provides a unified interface across both. The following diagram shows how to use Prometheus and the Monitoring collectors for data collection. This architecture has the following advantages: Consistent Kubernetes metrics across cloud and on-premises environments. It lets you globally monitor and alert on your workloads by using Prometheus, without having to manually manage and operate Prometheus at scale. There are no additional licensing costs for using Prometheus. Prometheus metrics are imported into Monitoring. The imports are chargeable and priced by the number of samples ingested. This architecture has the following disadvantages: Prometheus supports monitoring only, so logging has to be configured separately. The following section discusses a common option for logging using either Fluentd or Fluent Bit. We recommend the following best practice: By default, Prometheus collects all exposed metrics, each of which becomes a chargeable metric. To avoid unexpected costs, consider implementing Monitoring cost controls. Hybrid GKE logging with Fluentd or Fluent Bit and Cloud Logging With Fluentd or Fluent Bit (popular open source logging agents) and Cloud Logging, you can ingest logs from applications running on multiple GKE clusters into Cloud Logging. This architecture is useful when running Kubernetes workloads distributed across GKE on Google Cloud and Google Distributed Cloud in your on-premises data center, because it provides a unified interface across both. The following diagram illustrates the flow of logs. This architecture has the following advantages: You can have consistent Kubernetes logging across cloud and on-premises environments. You can customize Logging to filter out sensitive information. There are no additional licensing costs for using Fluentd or Fluent Bit. Logs that are imported into Logging by using Fluentd or Fluent Bit are chargeable. This architecture has the following disadvantages: Fluentd and Fluent Bit support logging only, so monitoring has to be configured separately. The previous section discusses a common option for monitoring with Prometheus. For more details about implementing this pattern, see Customizing Fluent Bit for Google Kubernetes Engine logs. Partner services as single panes of glass If you are already using a third-party monitoring or logging service such as Datadog or Splunk, you might not want to move to Logging. If so, you can export data from Google Cloud to many common monitoring and logging services. You can choose to use an integrated monitoring and logging service, or select separate monitoring and logging services that best fit your needs. Export from Logging to partner services In this pattern, you authorize the partner's monitoring service, such as Datadog, to connect to the Cloud Monitoring API. This authorization lets the service ingest all the metrics available to Monitoring, so Datadog can function as a single pane of glass for monitoring. For logging data, Logging provides exports (log sinks) to Pub/Sub.
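As a hedged sketch of such an export, the following commands create a Pub/Sub topic and an aggregated log sink that publishes log entries to it; the topic, sink, and project names are hypothetical placeholders, and the partner service reads the logs through a subscription on the topic.

# Create the topic that receives exported log entries.
gcloud pubsub topics create example-partner-export --project=example-logging-project

# Create an organization-level sink that publishes log entries to the topic.
gcloud logging sinks create example-partner-sink \
    pubsub.googleapis.com/projects/example-logging-project/topics/example-partner-export \
    --organization=ORGANIZATION_ID \
    --include-children

As with the BigQuery export shown earlier, grant the sink's writer identity the Pub/Sub Publisher role on the topic before the partner integration starts consuming messages.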
These exports provide a performant and resilient method for partner logging services such as Elastic and Splunk to ingest large volumes of logs from Logging in real time, so these partner services can serve a single pane of glass for logs. The combined architecture for logging and monitoring is shown in the following diagram. This architecture has the following advantages: You can continue to use familiar existing tools. Google Cloud Support continues to have access to Logging logs for troubleshooting. This architecture has the following disadvantages: Partner solutions are typically externally hosted, which means they might not be available or collect data if network connections are disrupted. Sometimes, you can mitigate this risk by self-hosting, but at the cost of having to maintain the infrastructure for the solution yourself. Externally hosted dashboards aren't directly available to Google Cloud Support. This lack of availability can slow down troubleshooting and mitigation. Commercial partner solutions might entail more licensing fees. Some detailed example integrations include the following: Datadog: Monitoring Compute Engine metrics and Collect Logging Logs Elastic: Exporting Logging logs to Elastic Cloud Splunk: Scenarios for exporting Logging Analyze metrics from Prometheus and Logging with Grafana Grafana is a popular open source monitoring tool commonly paired with Prometheus for metrics collection. In this architecture, you use Prometheus as the on-premises collection layer and use Grafana as a single pane of glass for both Google Cloud and on-premises resources. The following diagram shows a sample architecture that analyzes metrics from Google Cloud and on-premises. This architecture has the following advantages: It's suitable for hybrid environments with both VMs and containers. If your organization is already using Prometheus and Grafana, your users can continue to use them. This architecture has the following disadvantages: Prometheus supports monitoring only, so logging has to be configured separately, for example, using Fluentd or the Cloud Logging plugin for Grafana. Prometheus is open source and extensible, but supports only a limited range of enterprise software integrations. Prometheus and Grafana are third-party tools and not official Google products. Google doesn't offer support for Prometheus or Grafana. For more information, see Better troubleshooting with a Cloud Logging plugin for Grafana. Export logs using Fluentd An earlier pattern covered using Fluentd or Fluent Bit as a log collector for Logging. The same basic architecture can also be used for other logging or data analytics systems that support Fluentd or Fluent Bit, including BigQuery, Elastic, and Splunk. The following diagram illustrates this pattern. This architecture has the following advantages: It's suitable for hybrid environments with both VMs and containers. Fluentd can read from many data sources, including system logs. Fluentd offers output plugins for many popular third-party logging and data analytics systems. Fluent Bit can also read from many inputs, including system logs. Fluent Bit offers outputs for many popular third-party logging and data analytics systems. This architecture has the following disadvantages: Fluentd and Fluent Bit support logs only, so monitoring has to be configured separately. The previous section discusses common options for monitoring with Prometheus and Grafana. Fluentd and Fluent Bit are third-party tools and not official Google products. 
Google doesn't offer support for them. Exported logs are not available to Google Cloud Support for troubleshooting. In particular, Google does not offer support for Google Distributed Cloud clusters without Logging enabled. Separate application and operations data Single pane of glass architectures require streaming application monitoring and logging data to the cloud. However, you might have regulatory or compliance requirements that either require keeping customer data on-premises or place strict constraints on what data can be stored in the public cloud. A useful pattern for these hybrid environments is to separate sensitive application data from lower-risk operations data, as illustrated in the following diagram. Separate application and system data with GKE Enterprise GKE Enterprise on VMware, a part of the GKE Enterprise suite, includes Grafana for monitoring on-premises clusters. In addition, you can opt to install a partner solution such as Elastic Stack or Splunk for logging. Using these solutions, you can ingest and view sensitive application data entirely on-premises, while still exporting system data to Logging on Google Cloud. The following diagram illustrates this architecture. This architecture has the following advantages: Sensitive application data is kept entirely on-premises. On-premises monitoring and logging have no cloud dependencies and remain available even if the network connection is interrupted. All GKE system data, both on-premises and Google Cloud, is centralized in Logging and is also accessible to Google Cloud Support as needed. For an example implementation using Elastic Stack as the partner solution, see Monitoring GKE Enterprise with the Elastic Stack. What's next Learn more about hybrid and multicloud best practices with the Hybrid and multicloud patterns and practices series, including architecture patterns and secure networking architecture patterns. Enroll in the Cloud Kubernetes Best Practice quest for hands-on exercises about observability and more on GKE. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Identify_and_prioritize_security_risks_with_Wiz_Security_Graph_and_Google_Cloud.txt b/Identify_and_prioritize_security_risks_with_Wiz_Security_Graph_and_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..cd1b784a8cb220f21890e06c80fa57e564927882 --- /dev/null +++ b/Identify_and_prioritize_security_risks_with_Wiz_Security_Graph_and_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/id-prioritize-security-risks-with-wiz +Date Scraped: 2025-02-23T11:56:52.237Z + +Content: +Home Docs Cloud Architecture Center Send feedback Identify and prioritize security risks with Wiz Security Graph and Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-21 UTC By Mason Yan - Partner Solution Engineer, Wiz; Jimit Modi - ISV Partner Engineer, Google Wiz helps organizations secure their cloud environments. This document describes how to identify and prioritize security risks in your cloud workloads with Wiz Security Graph and Google Cloud. Security Command Center Premium, Google's built-in CNAPP for Google Cloud, and Wiz's Cloud add-on capability are better together, giving you contextual visibility across your cloud organization with the ability to prioritize risk mitigation. 
For more information about Wiz, see the Wiz website and the Google Cloud case study. Architecture The following diagram shows how Wiz connects to your Google Cloud infrastructure and how Wiz is integrated with built-in Google Cloud tools. Wiz Cloud Security Platform is a software-as-a-service (SaaS) solution. This architecture diagram demonstrates the following workflow: Wiz API scanning collects Google Cloud services and their configuration metadata to build a complete inventory. Wiz Workload scanning collects the metadata of operating systems, apps, packages, secrets, and container files to create a list of vulnerabilities and misconfigurations. Wiz Data scanning scans Cloud Storage, non-OS volumes, tables in Cloud SQL, and collects metadata from sensitive data objects. Wiz uses Identity and Access Management (IAM) Recommender to find excessive and unused permissions to create Wiz findings. Wiz ingests Admin Activity audit logs and Security Command Center findings to add threat events to create Wiz findings. Wiz uses the information that it gathers to create a node-and-edge graph of your assets and their interconnections. This graph, called the Wiz Security Graph, lets Wiz identify the most toxic risk combinations. These combinations are then flagged as Wiz Issues, which trigger alerts and automated workflows for remediation. Connect the Wiz tenant to your Google Cloud infrastructure Before you connect Wiz to your Google Cloud infrastructure, ensure that you've met the following prerequisites. To deploy on the organization level, the person who performs the connection must have sufficient permissions, either as a Google Cloud owner or as a user with the following organization-level roles: roles/iam.serviceAccountAdmin roles/iam.organizationRoleAdmin roles/iam.securityAdmin Note: Wiz strongly recommends that you connect on the organization level. Doing so lets all Google Cloud projects, both existing and future, be detected and scanned automatically. To deploy on the project level, the person who performs the connection must have the rights of a Google Cloud project roles/owner role (or better). Other prerequisites include the following: Knowledge of your organization or project ID. Permissions to enable Google API services. Access to Wiz as a Global Admin, Global Contributor, Settings Admin, or Connector Admin. Create a Google Cloud connector Wiz provides auto-generated scripts that you can run to create a service account and all the roles that you need to use Wiz with Google Cloud. The following screenshot shows a Wiz service account created by the scripts: Use Cloud APIs to scan your organization This section explains the interactions between Wiz and Google Cloud components to help you understand the necessity of permissions and roles to be granted. Use IAM to grant Wiz with read-only roles to scan your Google Cloud organization using Cloud APIs. Wiz requires read-only roles to call Cloud APIs. Wiz collects the metadata and configuration information from all the Google services in your Google Cloud organization, including firewall rules and access controls. Wiz then builds an inventory of all Google Cloud services. For more information, see Supported Cloud Services Google Cloud. Use Wiz Workload scanning To create and delete a snapshot, Wiz requires proper IAM roles. After the API scan is complete, Wiz automatically creates a Workload scanner in the same region where the Compute Engine instance resides. 
The Wiz Workload scanner creates a snapshot of the boot volume of each Compute Engine instance. It does so by mounting a read-only volume that is backed by the snapshot. After the scan is complete, Wiz deletes the snapshot. Workload scanning detects several coding languages or frameworks like Go, Python, or React. It also detects operating systems or applications on a VM, container, or serverless function–for example, Linux, NGINX, Docker, or Ansible. Workload scanning collects metadata on packages, misconfigurations, and secrets, including SSH keys, cloud keys, and container files from the boot volumes. The following screenshot of the Wiz UI shows the results of a Wiz Workload scan: Use Wiz Data scanning Data scanning is an extension of Wiz Workload scanning. If you enable data discovery, the scanner performs a sampling scan of the files. Files are scanned in the following locations: Buckets Hosted databases on non-OS disks (if scanning non-OS disks is enabled) Tables in SQL servers Wiz searches files for the following sensitive data: Payment card industry data Personally identifiable information (PII) Protected health information (PHI) When a match is detected with high confidence, metadata about these objects is added to the Security Graph as a Data Finding. After an object is added to the Security Graph, queries and Controls are used for risk analysis. A Control consists of a pre-defined Security Graph query and a Severity level. If a Control's query returns any results, an Issue is generated for every result. Note: As with all other Wiz assessments, Wiz doesn't store the data. Wiz only stores the metadata results. Wiz highlights only a partially redacted sample of the detected Data Finding so that you can identify, verify, and remediate the Data Finding. Wiz requires additional permissions to scan your private buckets and platform-as-a-service (PaaS) databases like Cloud SQL for protected data and secrets. Ingest Event Threat Detection from Security Command Center Wiz integrates Google Cloud's Event Threat Detection with Security Command Center to add correlation and context to Security Command Center events and highlight the critical risks on the security graph. For example, Wiz Security Graph visualizes an exposed Compute Engine instance with a critical vulnerability and a lateral movement finding on which Event Threat Detection has also detected potentially suspicious events. This integration requires Security Command Center Premium, because this is the tier that detects threats. The Google Cloud Security Reviewer role that is attached to the Wiz service account grants the permissions to list the Security Command Center findings. The following diagram illustrates the attack path analysis of a hypothetical critical issue in Security Graph: The diagram shows how Wiz ingests threat events from Event Threat Detection and adds them to Security Graph. There is an SSH Brute Force finding associated with a hypothetical Compute Engine instance. The hypothetical Compute Engine instance has Common Vulnerabilities and Exposures (CVE) and is exposed to the internet. The Compute Engine instance also has an access key that leads to data access. The combination of multiple risks represents a high probability for a malicious actor to access this instance and exfiltrate any data it contains. The SSH Brute Force alert on this hypothetical Compute Engine instance needs immediate attention. 
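The Security Reviewer grant mentioned above is normally created for you by the auto-generated Wiz onboarding scripts. As a hedged illustration only, an equivalent manual binding at the organization level might look like the following; the service account email is a hypothetical placeholder.

gcloud organizations add-iam-policy-binding ORGANIZATION_ID \
    --member="serviceAccount:wiz-service-account@example-wiz-project.iam.gserviceaccount.com" \
    --role="roles/iam.securityReviewer"

The role is read-only, which is consistent with the read-only scanning model described in the Security section later in this document.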
Connect Cloud Audit Logs to Wiz Cloud Detection and Response Wiz Cloud Detection and Response (CDR) detects, investigates, and responds to cloud threats. Wiz CDR ingests cloud events from Cloud Audit Logs and Google Workspace audit logs. These logs are streamed to a Pub/Sub topic that you create. Here are the deployment steps: Create a Pub/Sub topic and subscription in a Google Cloud project. Stream admin activity and data access audit logs to the created topic. Stream audit logs from Google Cloud to Wiz. Connect Wiz to the Pub/Sub topic. The following diagram shows that Wiz connects to Pub/Sub to read logs that are published by the connected projects or organization: For more information about configuration details, see the Wiz documentation. Deploy the Wiz Runtime Sensor to a GKE cluster The following diagram shows how the Wiz Runtime Sensor in the Google Cloud environment connects to the Wiz container registry and the Wiz cloud backend: In the diagram, the Wiz Runtime Sensor is an eBPF-based executable designed to offer real-time visibility into your Google Cloud and Kubernetes workloads. The Container Security section provides additional detail. Prerequisites To deploy the Wiz Runtime Sensor to a GKE cluster, you must meet the following prerequisites: Google Cloud is connected to Wiz. GKE is running version 1.20.9-gke.1000 or later. The GKE cluster allows outbound HTTPS connectivity. For more information, see Outbound Communication. Deployment Here are the steps to use a Helm chart to deploy the Wiz Runtime Sensor daemon set to a GKE cluster: Create a Service Account for the Runtime Sensor in Wiz. Obtain the Runtime Sensor Helm chart. Obtain the Runtime Sensor image pull key from Wiz. Install the Runtime Sensor on a GKE cluster. Perform a confidence check. For more information about deployment details, see Install Runtime Sensor. Security As a customer, you own your Wiz tenant. Wiz doesn't have access to your tenant. To list your cloud resources and interrogate the control plane for their configuration, the cloud scanner connects using read-only permissions. The read-only permissions grant access to your cloud APIs (like AWS, Azure, Google Cloud, OCI, Kubernetes, and others). By default, the read-only permissions that you created for the Wiz scanner grant access to all the Google Cloud services that you use. The goal is complete visibility into your environment. To exclude some of the services, you can modify your Wiz role-creation scripts. VPC Service Controls If your organization uses VPC Service Controls to restrict Google services in projects that you want Wiz to scan, you need to add an access rule for each perimeter. Users that have either the roles/accesscontextmanager.policyAdmin role or the roles/accesscontextmanager.policyEditor role can perform this operation. Because Wiz initiates requests from outside of the VPC Service Controls perimeters, you might need to add Wiz to your ingress rules. When you use an Access Level while setting up your Ingress Policy, you might see the FROM attributes of the API client. If your access level selection restricts source IP addresses, you can add IP addresses for Wiz data centers. Ingress policy In the Google Cloud console, navigate to Security > VPC Service Controls. Each perimeter protects one or more projects. For each perimeter that restricts Google services in projects that you want Wiz to scan, click Edit > Ingress Policy. As shown in the preceding screenshot, leave the Source, Project, and Services fields with their Default values. 
Select the Wiz Service Account (default name: wiz-service-account), then click Add Rule. Click Save. Navigate to Access Level Manager and add Wiz Data Center Scan IP addresses to the Access Level. As shown in the preceding screenshots, you create an access level with a custom name, and then you add the IP addresses of the data center where your Wiz tenant is located. In the Conditions section: Add each Wiz data center IP address as an IP address subnetwork in this format: 3.132.145.19/24. Set When condition is met, return to TRUE. Click Save to update the perimeter settings. Deployment can take up to 20 minutes. Repeat the previous three steps for every access level used by a perimeter that affects a project that you want Wiz to scan. Use Wiz Outpost to keep data in your project As shown in the following screenshot, Wiz Outpost is an alternative deployment model to the standard SaaS deployment mode that's described in this document. It lets you perform all Workload scanning in your own environment, using your own infrastructure. The following screenshot shows how to connect Outpost in the Wiz portal. As shown in the preceding screenshot, you must provide the Google Cloud organization ID you want to scan using this connector. You must also provide the project ID of the project where the Wiz outpost was deployed. The following sample command creates a Wiz service account with a read-only role to all the cloud services and a role to create snapshots and delete snapshots. For more information about the deployment details, see the Wiz documentation on Google Cloud connector. After determining the correct deployment information for your use case, use Cloud Shell to run the following command: curl -fsSL https://SERVER_URL/deployment-v2/gcp/cli/wiz-gcp.sh | bash /dev/stdin managed-standard organization-deployment --organization-id=ORGANIZATION_ID --wiz-managed-id=WIZ_MANAGED_ID Replace the following: SERVER_URL: The server URL that appears in the Wiz console. ORGANIZATION_ID: The Google Cloud organization ID. WIZ_MANAGED_ID: The account ID for the Wiz managed service account in Google Cloud. Using the Wiz Outpost deployment model, all Workload scanner functionality is pulled out of the Wiz backend and recreated in your own environment. This process is shown in the Architecture section diagram. The Wiz workload scan of the GKE cluster in Wiz project should be deployed to a project in the Customer Organization. Snapshots in the Outpost deployment model are still created, scanned, and deleted in the same manner, but all analysis occurs in your environment. Only the metadata results are sent to the Wiz backend. Examples of metadata results include the following: Installed packages Exposed secrets Malware detection Use case: Agentless scanning provides full stack multi-cloud visibility in minutes Wiz scans the entire cloud stack in read-only mode, including all VMs, containers, serverless apps, data repositories, and PaaS services that use the Cloud APIs. For example, customers use Wiz to search for resources that are associated with Log4j vulnerabilities. The Wiz cloud risk engine compiles multiple layers of configuration, network and identity data, and cloud events from Cloud Audit Logs to surface effective external exposure, unintentionally excessive permissions, exposed secrets, and lateral movement paths. 
As shown in the following diagram, Wiz Security Graph shows the cloud resources that are associated with Log4j vulnerabilities: Compliance Wiz automatically assesses your compliance posture against more than 100 industry compliance frameworks or your custom frameworks. That assessment helps eliminate the manual effort and complexity of achieving compliance in dynamic and multi-cloud environments. The following screenshot shows a compliance heatmap, which lets you survey your Google Cloud environment across all compliance frameworks–including CIS and NIST–at a high level and quickly determine where your security teams should focus. Container security Wiz assesses and correlates container security risk across container images, identities, the Kubernetes network, and the cloud environment. Wiz also enables comprehensive, end-to-end Kubernetes security posture management and compliance. Wiz Guardrails enable organizations to enact a single policy framework that spans the development lifecycle (CI/CD pipeline) to runtime. Wiz Runtime Sensor is a lightweight eBPF-based agent that can be deployed within Kubernetes clusters as a daemon set to provide real-time visibility and monitoring of running processes, network connections, file activity, system calls–and more–to detect malicious behavior affecting the workload. The following diagram shows the Wiz Guardrails that are in place. These guardrails span the development cycle to runtime. Deploy Wiz Connectors let Wiz access your cloud environment to assume roles, create snapshots, share snapshots with Wiz for analysis, and delete snapshots. You'll need to create a Cloud Connector for your Google Cloud organization or project. As mentioned previously, Wiz supports both shell script and Terraform. The following is a sample script for Cloud Shell that connects Wiz to an organization. It also includes an ID test for standard deployment: curl -fsSL https://SERVER_URL/deployment-v2/gcp/cli/wiz-gcp.sh | bash /dev/stdin managed-standard organization-deployment --organization-id=test --wiz-managed-id=WIZ_MANAGED_ID Replace the following: SERVER_URL: The server URL that appears in the Wiz console. WIZ_MANAGED_ID: The account ID for the Wiz managed service account in Google Cloud. The following is a sample Terraform script that connects Wiz to an organization. It also includes an ID test for standard deployment: module "wiz" { source = "https://SERVER_URL/deployment-v2/gcp/wiz-gcp-org-terraform-module.zip" org_id = "test" wiz_managed_identity_external_id = WIZ_MANAGED_ID data_scanning = false } For more information, see the Wiz documentation on How to connect to Google Cloud. What's next Learn more about how to See and secure your Google Cloud environment with Wiz Read the Wiz Solution Briefing for Google Cloud. Learn more about Wiz and Google Cloud's Security Command Center: Modern threat detection and response rooted in risk prioritization. Learn how to Accelerate your cloud Journey with Wiz and Google Cloud. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
Send feedback \ No newline at end of file diff --git a/Immersive_Stream_for_XR.txt b/Immersive_Stream_for_XR.txt new file mode 100644 index 0000000000000000000000000000000000000000..9a17004aa013a71aed9f102d22a23038e426601b --- /dev/null +++ b/Immersive_Stream_for_XR.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/immersive-stream/xr +Date Scraped: 2025-02-23T12:07:01.317Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '23. Let's go.Jump to Immersive Stream for XRImmersive Stream for XRGoogle Cloud's Immersive Stream for XR hosts, renders, and streams 3D and eXtended reality (XR) experiences. It immediately engages users in an immersive, interactive, and photorealistic experience without having to download an app. Simply create content once and run on any device.Go to consoleGet started with your quickstart guideSee what experiences our customers are creatingCheck out our resources below developed by Google experts1:15What will you create with Immersive Stream for XR?BenefitsSimple user experienceUsers can enter 3D and AR experiences in seconds without having to wait for new apps to download.Broad device compatibilityExperiences run on iOS, Android, and web, so developers do not need to build for each OS, model, or year.Photorealistic qualityThe experience is rendered on powerful, cloud-based GPUs and then streamed to any device. Augmenting and offloading the processing from the mobile device provides the best user experience.Key featuresKey featuresNative support for a variety of devicesNative support for augmented reality (AR) with the Google app for iOS and Android. Support for landscape and portrait orientation on mobile, tablet, or desktop. Support for embedding experiences into third-party websites using iFrames.Simple customer journey integrationAbility to host a single URL or QR code that works for almost any device. Users can interact with experiences using touch gestures and device movement.Powerful, cloud-based GPUsCloud-based GPUs automatically provide the optimal computing power to render AR and 3D experiences.DocumentationCheck out our resources below developed by Google expertsTutorialGitHub templates and demosGet started quickly with the Immersive Stream for XR template project, with built-in feature demos.Learn moreQuickstartImmersive Stream for XR quickstart guideLearn how to get started with Immersive Stream for XR. You will use Unreal Engine® and the Immersive Stream for XR template project.Learn moreNot seeing what you’re looking for?View all product documentationUse casesSee what experiences our customers are creatingUse caseVW promotes the ID. BuzzVolkswagen uses Immersive Stream for XR to promote their highly anticipated new electric minivan, the ID. Buzz. Try out different color and configuration options, place the car in your own or virtual environments. Experience it for yourself.Use caseExplore historic sites with Virtual WorldsVirtual Worlds created an educational tour of the Great Sphinx of Giza, and made it accessible by anyone using Immersive Stream for XR. Elliot Mizroch, CEO of Virtual Worlds, explains, "We've captured incredible sites from Machu Picchu to the Pyramids of Giza and we want everyone to be able to explore these monuments and learn about our heritage. Immersive Stream for XR finally gives us this opportunity." Experience it for yourself.Use caseKia used Immersive Stream to promote SportageKia used Immersive Stream to promote their Sportage vehicle in Germany. 
"At Kia Germany, we are excited to use Google Immersive Stream for XR to reach new consumers and provide them the perfect experience to discover our Sportage vehicle," said Jean-Philippe Pottier, Manager of Digital Platforms at Kia. "Our users love that they can change colors, engines, and interact with the model in 3D and augmented reality." Learn more.Use caseF-150 Lightning "Strike Anywhere" experienceAllows users to learn about and experience Ford’s new, all-electric F-150 Lightning with 13 interactive animations, including the innovative Mega Power Frunk, and more. Experience it for yourself.Use caseExperience BMW i4 and iX through augmented realityBMW is using Immersive Stream for XR to allow users to digitally place the BMW iX and BMW i4 into any real-world environment in vivid detail with usable virtual features such as changing paint colors, lighting elements, and more. Learn more.Use caseNext-gen home improvement experience with Lowe’sLowe’s Innovation Labs is using Immersive Stream for XR to help customers design and visualize their dream kitchen. Coming soon: Walk through your design in AR! Learn more.Use caseUsing AR, Aosom users place furniture virtuallyAosom created an experience that allows users to configure and place furniture items in either a virtual living room or in their own space using AR. "Home & Garden shoppers are always looking for offerings that are unique and compatible with their own living space,” said Chunhua Wang, CEO, Aosom, “Google Cloud's Immersive Stream for XR has enabled Aosom to deliver a visually vivid and immersive shopping experience to our customers." Learn more.Use caseImmersive virtual shopping journey with KDDIUse any device to start shopping from anywhere, browse the types, configure the outfits as you like, and get the real feeling of how it really looks in the real world using AR technology. Learn more.Use caseSingapore Tourism Board transforms with technologyOn the forefront of innovation in travel, the Singapore Tourism Board is building a next-generation travel planning experience using Immersive Stream for XR, providing cinematic discovery of Singapore wherever you are. Learn more.View all technical guidesPricingPricingImmersive Stream for XR from Google Cloud charges using a pay-as-you-go model based on configured streaming capacity, which represents the maximum number of concurrent users that the experience can support.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Implement_network_design.txt b/Implement_network_design.txt new file mode 100644 index 0000000000000000000000000000000000000000..325b5c5da31b78f1adf4428031a8e85a1db1b550 --- /dev/null +++ b/Implement_network_design.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/landing-zones/implement-network-design +Date Scraped: 2025-02-23T11:45:18.843Z + +Content: +Home Docs Cloud Architecture Center Send feedback Implement your Google Cloud landing zone network design Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This document provides steps and guidance to implement your chosen network design after you review Decide the network design for your Google Cloud landing zone. 
If you have not already done so, review Landing zone design in Google Cloud before you choose an option. These instructions are intended for network engineers, architects, and technical practitioners who are involved in creating the network design for your organization's landing zone. Network design options Based on your chosen network design, complete one of the following: Create option 1: Shared VPC network for each environment Create option 2: Hub-and-spoke topology with centralized appliances Create option 3: Hub-and-spoke topology without appliances Create option 4: Expose services in a consumer-producer model with Private Service Connect Create option 1: Shared VPC network for each environment If you have chosen to create the Shared VPC network for each environment in "Decide the network design for your Google Cloud landing zone", follow this procedure. The following steps create a single instance of a VPC. When you need multiple instances of a VPC, such as for development and production environments, repeat the steps for each VPC. Limit external access by using an organization policy We recommend that you limit direct access to the internet to only the resources that need it. Resources without external addresses can still access many Google APIs and services through Private Google Access. Private Google Access is enabled at the subnet level and lets resources interact with key Google services, while isolating them from the public internet. For usability, the default functionality of Google Cloud lets users create resources in all projects, as long as they have the correct IAM permissions. For improved security, we recommend that you restrict the default permissions for resource types that can cause unintended internet access. You can then authorize specific projects only to allow the creation of these resources. Use the instructions at Creating and managing organization policies to set the following constraints. Restrict Protocol Forwarding Based on type of IP Address Protocol forwarding establishes a forwarding rule resource with an external IP address and lets you direct the traffic to a VM. The Restrict Protocol Forwarding Based on type of IP Address constraint prevents the creation of forwarding rules with external IP addresses for the entire organization. For projects authorized to use external forwarding rules, you can modify the constraint at the folder or project level. Set the following values to configure this constraint: Applies to: Customize Policy enforcement: Replace Policy values: Custom Policy type: Deny Custom value: IS:EXTERNAL Define allowed external IPs for VM instances By default, individual VM instances can acquire external IP addresses, which allows both outbound and inbound connectivity with the internet. Enforcing the Define allowed external IPs for VM instances constraint prevents the use of external IP addresses with VM instances. For workloads that require external IP addresses on individual VM instances, modify the constraint at a folder or project level to specify the individual VM instances. Or, override the constraint for the relevant projects. Applies to: Customize Policy enforcement: Replace Policy values: Deny All Disable VPC External IPv6 usage The Disable VPC External IPv6 usage constraint, when set to True, prevents the configuration of VPC subnets with external IPv6 addresses for VM instances. Applies to: Customize Enforcement: On Disable default network creation When a new project is created, a default VPC is automatically created. 
This is useful for quick experiments that don't require specific network configuration or integration with a larger enterprise networking environment. Configure the Skip default network creation constraint to disable default VPC creation for new projects. You can manually create the default network within a project, if needed. Applies to: Customize Enforcement: On Design firewall rules Firewall rules let you allow or deny traffic to or from your VMs based on a configuration you define. Hierarchical firewall policies are implemented at the organization and folder levels, and network firewall policies are implemented at the VPC network level in the resource hierarchy. Together, these provide an important capability to help secure your workloads. Regardless of where the firewall policies are applied, use the following guidelines when designing and evaluating your firewall rules: Implement least-privilege (also referred to as microsegmentation) principles. Block all traffic by default and only allow the specific traffic you need. This includes limiting the rules to only the protocols and ports you need for each workload. Enable firewall rules logging for visibility into firewall behavior and to use Firewall Insights. Define a numbering methodology for allocating firewall rule priorities. For example, it's best practice to reserve a range of low numbers in each policy for rules needed during incident response. We also recommend that you prioritize more specific rules higher than more general rules, to ensure that the specific rules aren't shadowed by the general rules. The following example shows a possible approach for firewall rule priorities: Firewall rule priority range Purpose 0-999 Reserved for incident response 1000-1999 Always blocked traffic 2000-1999999999 Workload-specific rules 2000000000-2100000000 Catch-all rules 2100000001-2147483643 Reserved Configure hierarchical firewall policies Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. For examples of using hierarchical firewall policies, see Hierarchical firewall policy examples. Define hierarchical firewall policies to implement the following network access controls: Identity-Aware Proxy (IAP) for TCP forwarding. IAP for TCP forwarding is allowed through a security policy that permits ingress traffic from IP range 35.235.240.0/20 for TCP ports 22 and 3389. Health checks for Cloud Load Balancing. The well-known ranges that are used for health checks are allowed. For most Cloud Load Balancing instances (including Internal TCP/UDP Load Balancing, Internal HTTP(S) Load Balancing, External TCP Proxy Load Balancing, External SSL Proxy Load Balancing, and HTTP(S) Load Balancing), a security policy is defined that allows ingress traffic from the IP ranges 35.191.0.0/16 and 130.211.0.0/22 for ports 80 and 443. For Network Load Balancing, a security policy is defined that enables legacy health checks by allowing ingress traffic from IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22 for ports 80 and 443. Configure your Shared VPC environment Before implementing a Shared VPC design, decide how to share subnets with service projects. You attach a service project to a host project. To determine which subnets are available for the service project, you assign IAM permissions to the host project or individual subnets. For example, you can choose to dedicate a different subnet to each service project, or share the same subnets between service projects. 
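Before you work through the following steps, it can help to see this sharing model in command form. The following hedged sketch designates a host project, attaches one service project, and shares a single subnet with that service project's developers. The project IDs, subnet, region, and group are hypothetical placeholders; the detailed setup is covered by the steps that follow.

# Designate the host project and attach a service project.
gcloud compute shared-vpc enable example-host-project
gcloud compute shared-vpc associated-projects add example-service-project \
    --host-project=example-host-project

# Share one subnet with the service project's developers by granting
# the Compute Network User role on that subnet only.
gcloud compute networks subnets add-iam-policy-binding example-subnet \
    --project=example-host-project \
    --region=us-central1 \
    --member="group:example-service-project-devs@example.com" \
    --role="roles/compute.networkUser"

Granting the role on individual subnets, rather than on the whole host project, keeps each service project limited to the subnets that it needs.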
Create a new project for the Shared VPC. Later in this process, this project becomes the host project and contains the networks and networking resources to be shared with the service projects. Enable the Compute Engine API for the host project. Configure Shared VPC for the project. Create the custom-mode VPC network in the host project. Create subnets in the region where you plan to deploy workloads. For each subnet, enable Private Google Access to allow VM instances without external IP addresses to reach Google services. Configure Cloud NAT Follow these steps if the workloads in specific regions require outbound internet access—for example, to download software packages or updates. Create a Cloud NAT gateway in the regions where workloads require outbound internet access. You can customize the Cloud NAT configuration to only allow outbound connectivity from specific subnets, if needed. At a minimum, enable Cloud NAT logging for the gateway to log ERRORS_ONLY. To include logs for translations performed by Cloud NAT, configure each gateway to log ALL. Configure hybrid connectivity You can use Dedicated Interconnect, Partner Interconnect, or Cloud VPN to provide hybrid connectivity to your landing zone. The following steps create the initial hybrid connectivity resources required for this design option: If you're using Dedicated Interconnect, do the following. If you're using Partner Interconnect or Cloud VPN, you can skip these steps. Create a separate project for the physical interconnect ports. Enable the Compute Engine API for the project. Create Dedicated Interconnect connections. For each region where you're terminating hybrid connectivity in the VPC network, do the following: Create two Dedicated or Partner VLAN attachments, one for each edge availability zone. As part of this process, you select Cloud Routers and create BGP sessions. Configure the peer network (on-premises or other cloud) routers. Configure workload projects Create a separate service project for each workload: Create a new project to function as one of the service projects for the Shared VPC. Enable the Compute Engine API for the service project. Attach the project to the host project. Configure access to all subnets in the host project or some subnets in the host project. Configure observability Network Intelligence Center provides a cohesive way to monitor, troubleshoot, and visualize your cloud networking environment. Use it to ensure that your design functions with the desired intent. The following configurations support the analysis of logging and metrics enabled. You must enable the Network Management API before you can run Connectivity Tests. Enabling the API is required to use the API directly, the Google Cloud CLI, or the Google Cloud console. You must enable the Firewall Insights API before you can perform any tasks using Firewall Insights. Next steps The initial configuration for this network design option is now complete. You can now either repeat these steps to configure an additional instance of the landing zone environment, such as a staging or production environment, or continue to Decide the security for your Google Cloud landing zone. Create option 2: Hub-and-spoke topology with centralized appliances If you have chosen to create the hub-and-spoke topology with centralized appliances in "Decide the network design for your Google Cloud landing zone", follow this procedure. The following steps create a single instance of a VPC. 
When you need multiple instances of a VPC, such as for development and production environments, repeat the steps for each VPC. Limit external access by using an organization policy We recommend that you limit direct access to the internet to only the resources that need it. Resources without external addresses can still access many Google APIs and services through Private Google Access. Private Google Access is enabled at the subnet level and lets resources interact with key Google services, while isolating them from the public internet. For usability, the default functionality of Google Cloud lets users create resources in all projects, as long as they have the correct IAM permissions. For improved security, we recommend that you restrict the default permissions for resource types that can cause unintended internet access. You can then authorize specific projects only to allow the creation of these resources. Use the instructions at Creating and managing organization policies to set the following constraints. Restrict Protocol Forwarding Based on type of IP Address Protocol forwarding establishes a forwarding rule resource with an external IP address and lets you direct the traffic to a VM. The Restrict Protocol Forwarding Based on type of IP Address constraint prevents the creation of forwarding rules with external IP addresses for the entire organization. For projects authorized to use external forwarding rules, you can modify the constraint at the folder or project level. Set the following values to configure this constraint: Applies to: Customize Policy enforcement: Replace Policy values: Custom Policy type: Deny Custom value: IS:EXTERNAL Define allowed external IPs for VM instances By default, individual VM instances can acquire external IP addresses, which allows both outbound and inbound connectivity with the internet. Enforcing the Define allowed external IPs for VM instances constraint prevents the use of external IP addresses with VM instances. For workloads that require external IP addresses on individual VM instances, modify the constraint at a folder or project level to specify the individual VM instances. Or, override the constraint for the relevant projects. Applies to: Customize Policy enforcement: Replace Policy values: Deny All Disable VPC External IPv6 usage The Disable VPC External IPv6 usage constraint, when set to True, prevents the configuration of VPC subnets with external IPv6 addresses for VM instances. Applies to: Customize Enforcement: On Disable default network creation When a new project is created, a default VPC is automatically created. This is useful for quick experiments that don't require specific network configuration or integration with a larger enterprise networking environment. Configure the Skip default network creation constraint to disable default VPC creation for new projects. You can manually create the default network within a project, if needed. Applies to: Customize Enforcement: On Design firewall rules Firewall rules let you allow or deny traffic to or from your VMs based on a configuration you define. Hierarchical firewall policies are implemented at the organization and folder levels, and network firewall policies are implemented at the VPC network level in the resource hierarchy. Together, these provide an important capability to help secure your workloads. 
Regardless of where the firewall policies are applied, use the following guidelines when designing and evaluating your firewall rules: Implement least-privilege (also referred to as microsegmentation) principles. Block all traffic by default and only allow the specific traffic you need. This includes limiting the rules to only the protocols and ports you need for each workload. Enable firewall rules logging for visibility into firewall behavior and to use Firewall Insights. Define a numbering methodology for allocating firewall rule priorities. For example, it's best practice to reserve a range of low numbers in each policy for rules needed during incident response. We also recommend that you prioritize more specific rules higher than more general rules, to ensure that the specific rules aren't shadowed by the general rules. The following example shows a possible approach for firewall rule priorities: Firewall rule priority range Purpose 0-999 Reserved for incident response 1000-1999 Always blocked traffic 2000-1999999999 Workload-specific rules 2000000000-2100000000 Catch-all rules 2100000001-2147483643 Reserved Configure hierarchical firewall policies Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. For examples of using hierarchical firewall policies, see Hierarchical firewall policy examples. Define hierarchical firewall policies to implement the following network access controls: Identity-Aware Proxy (IAP) for TCP forwarding. IAP for TCP forwarding is allowed through a security policy that permits ingress traffic from IP range 35.235.240.0/20 for TCP ports 22 and 3389. Health checks for Cloud Load Balancing. The well-known ranges that are used for health checks are allowed. For most Cloud Load Balancing instances (including Internal TCP/UDP Load Balancing, Internal HTTP(S) Load Balancing, External TCP Proxy Load Balancing, External SSL Proxy Load Balancing, and HTTP(S) Load Balancing), a security policy is defined that allows ingress traffic from the IP ranges 35.191.0.0/16 and 130.211.0.0/22 for ports 80 and 443. For Network Load Balancing, a security policy is defined that enables legacy health checks by allowing ingress traffic from IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22 for ports 80 and 443. Configure your VPC environment The transit and hub VPC networks provide the networking resources to enable connectivity between workload spoke VPC networks and on-premises or multi-cloud networks. Create a new project for transit and hub VPC networks. Both VPC networks are part of the same project to support connectivity through the virtual network appliances. Enable the Compute Engine API for the project. Create the transit custom mode VPC network. In the transit VPC network, create a subnet in the regions where you plan to deploy the virtual network appliances. Create the hub custom mode VPC network. In the hub VPC network, create a subnet in the regions where you plan to deploy the virtual network appliances. Configure global or regional network firewall policies to allow ingress and egress traffic for the network virtual appliances. Create a managed instance group for the virtual network appliances. Configure the internal TCP/UDP load balancing resources for the transit VPC. This load balancer is used for routing traffic from the transit VPC to the hub VPC through the virtual network appliances. Configure the internal TCP/UDP load balancing resources for the hub VPC. 
This load balancer is used for routing traffic from the hub VPC to the transit VPC through the virtual network appliances. Configure Private Service Connect for Google APIs for the hub VPC. Modify VPC routes to send all traffic through the network virtual appliances: Delete the 0.0.0.0/0 route with next-hop default-internet-gateway from the hub VPC. Configure a new route with destination 0.0.0.0/0 and a next-hop of the forwarding rule for the load balancer in the hub VPC. Configure Cloud NAT Follow these steps if the workloads in specific regions require outbound internet access—for example, to download software packages or updates. Create a Cloud NAT gateway in the regions where workloads require outbound internet access. You can customize the Cloud NAT configuration to only allow outbound connectivity from specific subnets, if needed. At a minimum, enable Cloud NAT logging for the gateway to log ERRORS_ONLY. To include logs for translations performed by Cloud NAT, configure each gateway to log ALL. Configure hybrid connectivity You can use Dedicated Interconnect, Partner Interconnect, or Cloud VPN to provide hybrid connectivity to your landing zone. The following steps create the initial hybrid connectivity resources required for this design option: If you're using Dedicated Interconnect, do the following. If you're using Partner Interconnect or Cloud VPN, you can skip these steps. Create a separate project for the physical interconnect ports. Enable the Compute Engine API for the project. Create Dedicated Interconnect connections. For each region where you're terminating hybrid connectivity in the VPC network, do the following: Create two Dedicated or Partner VLAN attachments, one for each edge availability zone. As part of this process, you select Cloud Routers and create BGP sessions. Configure the peer network (on-premises or other cloud) routers. Configure custom advertised routes in the Cloud Routers for the subnet ranges in the hub and workload VPCs. Configure workload projects Create a separate spoke VPC for each workload: Create a new project to host your workload. Enable the Compute Engine API for the project. Configure VPC Network Peering between the workload spoke VPC and hub VPC with the following settings: Enable custom route export on the hub VPC. Enable custom route import on the workload spoke VPC. Create subnets in the regions where you plan to deploy workloads. For each subnet, enable Private Google Access to allow VM instances with only internal IP addresses to reach Google services. Configure Private Service Connect for Google APIs. To route all traffic through the virtual network appliances in the hub VPC, delete the 0.0.0.0/0 route with next-hop default-internet-gateway from the workload spoke VPC. Configure global or regional network firewall policies to allow ingress and egress traffic for your workload. Configure observability Network Intelligence Center provides a cohesive way to monitor, troubleshoot, and visualize your cloud networking environment. Use it to ensure that your design functions with the desired intent. The following configurations support the analysis of the logging and metrics that you enabled in the preceding steps. You must enable the Network Management API before you can run Connectivity Tests. Enabling the API is required whether you use the API directly, the Google Cloud CLI, or the Google Cloud console. You must enable the Firewall Insights API before you can perform any tasks using Firewall Insights.
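To make the workload spoke configuration in this option more concrete, the following Terraform sketch shows one possible shape of a spoke VPC that removes the default internet route and peers with the hub with custom route import and export, as described above. The project IDs, network and subnet names, region, and IP range are placeholders, and the hub VPC is assumed to already exist; treat this as an illustration of the steps, not a complete or production-ready configuration.

```
# Sketch: workload spoke VPC for the hub-and-spoke design described above.
# Project IDs, names, regions, and IP ranges are placeholders.

resource "google_compute_network" "spoke" {
  project                         = "workload-project-id"
  name                            = "spoke-vpc"
  auto_create_subnetworks         = false # custom-mode VPC
  delete_default_routes_on_create = true  # no 0.0.0.0/0 route to default-internet-gateway
}

resource "google_compute_subnetwork" "spoke_subnet" {
  project                  = "workload-project-id"
  name                     = "spoke-subnet-us-central1"
  region                   = "us-central1"
  network                  = google_compute_network.spoke.id
  ip_cidr_range            = "10.10.0.0/24"
  private_ip_google_access = true # Private Google Access for internal-only VMs
}

# Peering from the spoke to the hub; the spoke imports the custom routes
# advertised from the hub, including routes to on-premises networks.
resource "google_compute_network_peering" "spoke_to_hub" {
  name                 = "spoke-to-hub"
  network              = google_compute_network.spoke.id
  peer_network         = "projects/hub-project-id/global/networks/hub-vpc"
  import_custom_routes = true
}

# Peering from the hub to the spoke; the hub exports its custom routes.
resource "google_compute_network_peering" "hub_to_spoke" {
  name                 = "hub-to-spoke"
  network              = "projects/hub-project-id/global/networks/hub-vpc"
  peer_network         = google_compute_network.spoke.id
  export_custom_routes = true
}
```

Setting delete_default_routes_on_create on the spoke network corresponds to deleting the 0.0.0.0/0 route with next-hop default-internet-gateway, so that egress traffic follows the routes imported from the hub instead of going directly to the internet.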
Next steps The initial configuration for this network design option is now complete. You can now either repeat these steps to configure an additional instance of the landing zone environment, such as a staging or production environment, or continue to Decide the security for your Google Cloud landing zone. Create option 3: Hub-and-spoke topology without appliances If you have chosen to create the hub-and-spoke topology without appliances in "Decide the network design for your Google Cloud landing zone", follow this procedure. The following steps create a single instance of a VPC. When you need multiple instances of a VPC, such as for development and production environments, repeat the steps for each VPC. Limit external access by using an organization policy We recommend that you limit direct access to the internet to only the resources that need it. Resources without external addresses can still access many Google APIs and services through Private Google Access. Private Google Access is enabled at the subnet level and lets resources interact with key Google services, while isolating them from the public internet. For usability, the default functionality of Google Cloud lets users create resources in all projects, as long as they have the correct IAM permissions. For improved security, we recommend that you restrict the default permissions for resource types that can cause unintended internet access. You can then authorize specific projects only to allow the creation of these resources. Use the instructions at Creating and managing organization policies to set the following constraints. Restrict Protocol Forwarding Based on type of IP Address Protocol forwarding establishes a forwarding rule resource with an external IP address and lets you direct the traffic to a VM. The Restrict Protocol Forwarding Based on type of IP Address constraint prevents the creation of forwarding rules with external IP addresses for the entire organization. For projects authorized to use external forwarding rules, you can modify the constraint at the folder or project level. Set the following values to configure this constraint: Applies to: Customize Policy enforcement: Replace Policy values: Custom Policy type: Deny Custom value: IS:EXTERNAL Define allowed external IPs for VM instances By default, individual VM instances can acquire external IP addresses, which allows both outbound and inbound connectivity with the internet. Enforcing the Define allowed external IPs for VM instances constraint prevents the use of external IP addresses with VM instances. For workloads that require external IP addresses on individual VM instances, modify the constraint at a folder or project level to specify the individual VM instances. Or, override the constraint for the relevant projects. Applies to: Customize Policy enforcement: Replace Policy values: Deny All Disable VPC External IPv6 usage The Disable VPC External IPv6 usage constraint, when set to True, prevents the configuration of VPC subnets with external IPv6 addresses for VM instances. Applies to: Customize Enforcement: On Disable default network creation When a new project is created, a default VPC is automatically created. This is useful for quick experiments that don't require specific network configuration or integration with a larger enterprise networking environment. Configure the Skip default network creation constraint to disable default VPC creation for new projects. You can manually create the default network within a project, if needed. 
Applies to: Customize Enforcement: On Design firewall rules Firewall rules let you allow or deny traffic to or from your VMs based on a configuration you define. Hierarchical firewall policies are implemented at the organization and folder levels, and network firewall policies are implemented at the VPC network level in the resource hierarchy. Together, these provide an important capability to help secure your workloads. Regardless of where the firewall policies are applied, use the following guidelines when designing and evaluating your firewall rules: Implement least-privilege (also referred to as microsegmentation) principles. Block all traffic by default and only allow the specific traffic you need. This includes limiting the rules to only the protocols and ports you need for each workload. Enable firewall rules logging for visibility into firewall behavior and to use Firewall Insights. Define a numbering methodology for allocating firewall rule priorities. For example, it's best practice to reserve a range of low numbers in each policy for rules needed during incident response. We also recommend that you prioritize more specific rules higher than more general rules, to ensure that the specific rules aren't shadowed by the general rules. The following example shows a possible approach for firewall rule priorities: Firewall rule priority range Purpose 0-999 Reserved for incident response 1000-1999 Always blocked traffic 2000-1999999999 Workload-specific rules 2000000000-2100000000 Catch-all rules 2100000001-2147483643 Reserved Configure hierarchical firewall policies Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. For examples of using hierarchical firewall policies, see Hierarchical firewall policy examples. Define hierarchical firewall policies to implement the following network access controls: Identity-Aware Proxy (IAP) for TCP forwarding. IAP for TCP forwarding is allowed through a security policy that permits ingress traffic from IP range 35.235.240.0/20 for TCP ports 22 and 3389. Health checks for Cloud Load Balancing. The well-known ranges that are used for health checks are allowed. For most Cloud Load Balancing instances (including Internal TCP/UDP Load Balancing, Internal HTTP(S) Load Balancing, External TCP Proxy Load Balancing, External SSL Proxy Load Balancing, and HTTP(S) Load Balancing), a security policy is defined that allows ingress traffic from the IP ranges 35.191.0.0/16 and 130.211.0.0/22 for ports 80 and 443. For Network Load Balancing, a security policy is defined that enables legacy health checks by allowing ingress traffic from IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22 for ports 80 and 443. Configure the hub VPC environment The hub VPC provides the networking resources to enable connectivity between workload spoke VPC networks and on-premises or multi-cloud networks. Create a new project for the hub VPC network. Enable the Compute Engine API for the project. Create the hub custom mode VPC network. Configure Private Service Connect for Google APIs for the hub VPC. Configure hybrid connectivity You can use Dedicated Interconnect, Partner Interconnect, or Cloud VPN to provide hybrid connectivity to your landing zone. The following steps create the initial hybrid connectivity resources required for this design option: If you're using Dedicated Interconnect, do the following. If you're using Partner Interconnect or Cloud VPN, you can skip these steps. 
Create a separate project for the physical interconnect ports. Enable the Compute Engine API for the project. Create Dedicated Interconnect connections. For each region where you're terminating hybrid connectivity in the VPC network, do the following: Create two Dedicated or Partner VLAN attachments, one for each edge availability zone. As part of this process, you select Cloud Routers and create BGP sessions. Configure the peer network (on-premises or other cloud) routers. Configure custom advertised routes in the Cloud Routers for the subnet ranges in the hub and workload VPCs. Configure workload projects Create a separate spoke VPC for each workload: Create a new project to host your workload. Enable the Compute Engine API for the project. Configure VPC Network Peering between the workload spoke VPC and hub VPC, with the following settings: Enable custom route export on the hub VPC. Enable custom route import on the workload spoke VPC. Create subnets in the regions where you plan to deploy workloads. For each subnet, enable Private Google Access to allow VM instances with only internal IP addresses to reach Google services. Configure Private Service Connect for Google APIs. Configure Cloud NAT Follow these steps if the workloads in specific regions require outbound internet access—for example, to download software packages or updates. Create a Cloud NAT gateway in the regions where workloads require outbound internet access. You can customize the Cloud NAT configuration to only allow outbound connectivity from specific subnets, if needed. At a minimum, enable Cloud NAT logging for the gateway to log ERRORS_ONLY. To include logs for translations performed by Cloud NAT, configure each gateway to log ALL. Configure observability Network Intelligence Center provides a cohesive way to monitor, troubleshoot, and visualize your cloud networking environment. Use it to ensure that your design functions with the desired intent. The following configurations support the analysis of the logging and metrics that you enabled in the preceding steps. You must enable the Network Management API before you can run Connectivity Tests. Enabling the API is required whether you use the API directly, the Google Cloud CLI, or the Google Cloud console. You must enable the Firewall Insights API before you can perform any tasks using Firewall Insights. Next steps The initial configuration for this network design option is now complete. You can now either repeat these steps to configure an additional instance of the landing zone environment, such as a staging or production environment, or continue to Decide the security for your Google Cloud landing zone. Create option 4: Expose services in a consumer-producer model with Private Service Connect If you have chosen to expose services in a consumer-producer model with Private Service Connect for your landing zone, as described in "Decide the network design for your Google Cloud landing zone", follow this procedure. The following steps create a single instance of a VPC. When you need multiple instances of a VPC, such as for development and production environments, repeat the steps for each VPC. Limit external access by using an organization policy We recommend that you limit direct access to the internet to only the resources that need it. Resources without external addresses can still access many Google APIs and services through Private Google Access. Private Google Access is enabled at the subnet level and lets resources interact with key Google services, while isolating them from the public internet.
For usability, the default functionality of Google Cloud lets users create resources in all projects, as long as they have the correct IAM permissions. For improved security, we recommend that you restrict the default permissions for resource types that can cause unintended internet access. You can then authorize specific projects only to allow the creation of these resources. Use the instructions at Creating and managing organization policies to set the following constraints. Restrict Protocol Forwarding Based on type of IP Address Protocol forwarding establishes a forwarding rule resource with an external IP address and lets you direct the traffic to a VM. The Restrict Protocol Forwarding Based on type of IP Address constraint prevents the creation of forwarding rules with external IP addresses for the entire organization. For projects authorized to use external forwarding rules, you can modify the constraint at the folder or project level. Set the following values to configure this constraint: Applies to: Customize Policy enforcement: Replace Policy values: Custom Policy type: Deny Custom value: IS:EXTERNAL Define allowed external IPs for VM instances By default, individual VM instances can acquire external IP addresses, which allows both outbound and inbound connectivity with the internet. Enforcing the Define allowed external IPs for VM instances constraint prevents the use of external IP addresses with VM instances. For workloads that require external IP addresses on individual VM instances, modify the constraint at a folder or project level to specify the individual VM instances. Or, override the constraint for the relevant projects. Applies to: Customize Policy enforcement: Replace Policy values: Deny All Disable VPC External IPv6 usage The Disable VPC External IPv6 usage constraint, when set to True, prevents the configuration of VPC subnets with external IPv6 addresses for VM instances. Applies to: Customize Enforcement: On Disable default network creation When a new project is created, a default VPC is automatically created. This is useful for quick experiments that don't require specific network configuration or integration with a larger enterprise networking environment. Configure the Skip default network creation constraint to disable default VPC creation for new projects. You can manually create the default network within a project, if needed. Applies to: Customize Enforcement: On Design firewall rules Firewall rules let you allow or deny traffic to or from your VMs based on a configuration you define. Hierarchical firewall policies are implemented at the organization and folder levels, and network firewall policies are implemented at the VPC network level in the resource hierarchy. Together, these provide an important capability to help secure your workloads. Regardless of where the firewall policies are applied, use the following guidelines when designing and evaluating your firewall rules: Implement least-privilege (also referred to as microsegmentation) principles. Block all traffic by default and only allow the specific traffic you need. This includes limiting the rules to only the protocols and ports you need for each workload. Enable firewall rules logging for visibility into firewall behavior and to use Firewall Insights. Define a numbering methodology for allocating firewall rule priorities. For example, it's best practice to reserve a range of low numbers in each policy for rules needed during incident response. 
We also recommend that you prioritize more specific rules higher than more general rules, to ensure that the specific rules aren't shadowed by the general rules. The following example shows a possible approach for firewall rule priorities: Firewall rule priority range Purpose 0-999 Reserved for incident response 1000-1999 Always blocked traffic 2000-1999999999 Workload-specific rules 2000000000-2100000000 Catch-all rules 2100000001-2147483643 Reserved Configure hierarchical firewall policies Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. For examples of using hierarchical firewall policies, see Hierarchical firewall policy examples. Define hierarchical firewall policies to implement the following network access controls: Identity-Aware Proxy (IAP) for TCP forwarding. IAP for TCP forwarding is allowed through a security policy that permits ingress traffic from IP range 35.235.240.0/20 for TCP ports 22 and 3389. Health checks for Cloud Load Balancing. The well-known ranges that are used for health checks are allowed. For most Cloud Load Balancing instances (including Internal TCP/UDP Load Balancing, Internal HTTP(S) Load Balancing, External TCP Proxy Load Balancing, External SSL Proxy Load Balancing, and HTTP(S) Load Balancing), a security policy is defined that allows ingress traffic from the IP ranges 35.191.0.0/16 and 130.211.0.0/22 for ports 80 and 443. For Network Load Balancing, a security policy is defined that enables legacy health checks by allowing ingress traffic from IP ranges 35.191.0.0/16, 209.85.152.0/22, and 209.85.204.0/22 for ports 80 and 443. Configure the VPC environment The transit VPC provides the networking resources to enable connectivity between workload spoke VPC networks and on-premises or multi-cloud networks. Create a new project for the transit VPC network. Enable the Compute Engine API for the project. Create the transit custom mode VPC network. Create a Private Service Connect subnet in each region where you plan to publish services running in your hub VPC or on-premises environment. Consider Private Service Connect subnet sizing when deciding your IP addressing plan. For each on-premises service you want to expose to workloads running in Google Cloud, create an internal HTTP(S) or TCP proxy load balancer and expose the services using Private Service Connect. Configure Private Service Connect for Google APIs for the transit VPC. Configure hybrid connectivity You can use Dedicated Interconnect, Partner Interconnect, or Cloud VPN to provide hybrid connectivity to your landing zone. The following steps create the initial hybrid connectivity resources required for this design option: If you're using Dedicated Interconnect, do the following. If you're using Partner Interconnect or Cloud VPN, you can skip these steps. Create a separate project for the physical interconnect ports. Enable the Compute Engine API for the project. Create Dedicated Interconnect connections. For each region where you're terminating hybrid connectivity in the VPC network, do the following: Create two Dedicated or Partner VLAN attachments, one for each edge availability zone. As part of this process, you select Cloud Routers and create BGP sessions. Configure the peer network (on-premises or other cloud) routers. Configure workload projects Create a separate VPC for each workload: Create a new project to host your workload. Enable the Compute Engine API for the project. Create a custom-mode VPC network. 
Create subnets in the regions where you plan to deploy workloads. For each subnet, enable Private Google Access to allow VM instances with only internal IP addresses to reach Google services. Configure Private Service Connect for Google APIs. For each workload you're consuming from a different VPC or your on-premises environment, create a Private Service Connect consumer endpoint. For each workload you're producing for a different VPC or your on-premises environment, create an internal load balancer and service attachment for the service. Consider Private Service Connect subnet sizing when deciding your IP addressing plan. If the service should be reachable from your on-premises environment, create a Private Service Connect consumer endpoint in the transit VPC. Configure Cloud NAT Follow these steps if the workloads in specific regions require outbound internet access—for example, to download software packages or updates. Create a Cloud NAT gateway in the regions where workloads require outbound internet access. You can customize the Cloud NAT configuration to only allow outbound connectivity from specific subnets, if needed. At a minimum, enable Cloud NAT logging for the gateway to log ERRORS_ONLY. To include logs for translations performed by Cloud NAT, configure each gateway to log ALL. Configure observability Network Intelligence Center provides a cohesive way to monitor, troubleshoot, and visualize your cloud networking environment. Use it to ensure that your design functions with the desired intent. The following configurations support the analysis of the logging and metrics that you enabled in the preceding steps. You must enable the Network Management API before you can run Connectivity Tests. Enabling the API is required whether you use the API directly, the Google Cloud CLI, or the Google Cloud console. You must enable the Firewall Insights API before you can perform any tasks using Firewall Insights. Next steps The initial configuration for this network design option is now complete. You can now either repeat these steps to configure an additional instance of the landing zone environment, such as a staging or production environment, or continue to Decide the security for your Google Cloud landing zone. What's next Decide the security for your Google Cloud landing zone (next document in this series). Read Best practices for VPC network design. Read more about Private Service Connect. Send feedback \ No newline at end of file diff --git a/Implement_preemptive_cyber_defense.txt b/Implement_preemptive_cyber_defense.txt new file mode 100644 index 0000000000000000000000000000000000000000..887bec3e24bf8f94d92a08e538951f74aff9d794 --- /dev/null +++ b/Implement_preemptive_cyber_defense.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security/implement-preemptive-cyber-defense +Date Scraped: 2025-02-23T11:43:00.869Z + +Content: +Home Docs Cloud Architecture Center Send feedback Implement preemptive cyber defense Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations to build robust cyber-defense programs as part of your overall security strategy. This principle emphasizes the use of threat intelligence to proactively guide your efforts across the core cyber-defense functions, as defined in The Defender's Advantage: A guide to activating cyber defense.
Principle overview When you defend your system against cyber attacks, you have a significant, underutilized advantage against attackers. As the founder of Mandiant states, "You should know more about your business, your systems, your topology, your infrastructure than any attacker does. This is an incredible advantage." To help you use this inherent advantage, this document provides recommendations about proactive and strategic cyber-defense practices that are mapped to the Defender's Advantage framework. Recommendations To implement preemptive cyber defense for your cloud workloads, consider the recommendations in the following sections: Integrate the functions of cyber defense Use the Intelligence function in all aspects of cyber defense Understand and capitalize on your defender's advantage Validate and improve your defenses continuously Manage and coordinate cyber-defense efforts Integrate the functions of cyber defense This recommendation is relevant to all of the focus areas. The Defender's Advantage framework identifies six critical functions of cyber defense: Intelligence, Detect, Respond, Validate, Hunt, and Mission Control. Each function focuses on a unique part of the cyber-defense mission, but these functions must be well-coordinated and work together to provide an effective defense. Focus on building a robust and integrated system where each function supports the others. If you need a phased approach for adoption, consider the following suggested order. Depending on your current cloud maturity, resource topology, and specific threat landscape, you might want to prioritize certain functions. Intelligence: The Intelligence function guides all the other functions. Understanding the threat landscape—including the most likely attackers, their tactics, techniques, and procedures (TTPs), and the potential impact—is critical to prioritizing actions across the entire program. The Intelligence function is responsible for stakeholder identification, definition of intelligence requirements, data collection, analysis and dissemination, automation, and the creation of a cyber threat profile. Detect and Respond: These functions make up the core of active defense, which involves identifying and addressing malicious activity. These functions are necessary to act on the intelligence that's gathered by the intelligence function. The Detect function requires a methodical approach that aligns detections to attacker TTPs and ensures robust logging. The Respond function must focus on initial triage, data collection, and incident remediation. Validate: The Validate function is a continuous process that provides assurance that your security control ecosystem is up-to-date and operating as designed. This function ensures that your organization understands the attack surface, knows where vulnerabilities exist, and measures the effectiveness of controls. Security validation is also an important component of the detection engineering lifecycle and must be used to identify detection gaps and create new detections. Hunt: The Hunt function involves proactively searching for active threats within an environment. This function must be implemented when your organization has a baseline level of maturity in the Detect and Respond functions. The Hunt function expands the detection capabilities and helps to identify gaps and weaknesses in controls. The Hunt function must be based on specific threats. This advanced function benefits from a foundation of robust intelligence, detection, and response capabilities. 
Mission Control: The Mission Control function acts as the central hub that connects all of the other functions. This function is responsible for strategy, communication, and decisive action across your cyber-defense program. It ensures that all of the functions are working together and that they're aligned with your organization's business goals. You must focus on establishing a clear understanding of the purpose of the Mission Control function before you use it to connect the other functions. Use the Intelligence function in all aspects of cyber defense This recommendation is relevant to all of the focus areas. This recommendation highlights the Intelligence function as a core part of a strong cyber-defense program. Threat intelligence provides knowledge about threat actors, their TTPs, and indicators of compromise (IOCs). This knowledge should inform and prioritize actions across all cyber-defense functions. An intelligence-driven approach helps you align defenses to meet the threats that are most likely to affect your organization. This approach also helps with efficient allocation and prioritization of resources. The following Google Cloud products and features help you take advantage of threat intelligence to guide your security operations. Use these features to identify and prioritize potential threats, vulnerabilities, and risks, and then plan and implement appropriate actions. Google Security Operations (Google SecOps) helps you store and analyze security data centrally. Use Google SecOps to map logs into a common model, enrich the logs, and link the logs to timelines for a comprehensive view of attacks. You can also create detection rules, set up IoC matching, and perform threat-hunting activities. The platform also provides curated detections, which are predefined and managed rules to help identify threats. Google SecOps can also integrate with Mandiant frontline intelligence. Google SecOps uniquely integrates industry-leading AI, along with threat intelligence from Mandiant and Google VirusTotal. This integration is critical for threat evaluation and understanding who is targeting your organization and the potential impact. Security Command Center Enterprise, which is powered by Google AI, enables security professionals to efficiently assess, investigate, and respond to security issues across multiple cloud environments. The security professionals who can benefit from Security Command Center include security operations center (SOC) analysts, vulnerability and posture analysts, and compliance managers. Security Command Center Enterprise enriches security data, assesses risk, and prioritizes vulnerabilities. This solution provides teams with the information that they need to address high-risk vulnerabilities and to remediate active threats. Chrome Enterprise Premium offers threat and data protection, which helps to protect users from exfiltration risks and prevents malware from getting onto enterprise-managed devices. Chrome Enterprise Premium also provides visibility into unsafe or potentially unsafe activity that can happen within the browser. Network monitoring, through tools like Network Intelligence Center, provides visibility into network performance. Network monitoring can also help you detect unusual traffic patterns or detect data transfer amounts that might indicate an attack or data exfiltration attempt. Understand and capitalize on your defender's advantage This recommendation is relevant to all of the focus areas. 
As mentioned earlier, you have an advantage over attackers when you have a thorough understanding of your business, systems, topology, and infrastructure. To capitalize on this knowledge advantage, utilize this data about your environments during cyberdefense planning. Google Cloud provides the following features to help you proactively gain visibility to identify threats, understand risks, and respond in a timely manner to mitigate potential damage: Chrome Enterprise Premium helps you enhance security for enterprise devices by protecting users from exfiltration risks. It extends Sensitive Data Protection services into the browser, and prevents malware. It also offers features like protection against malware and phishing to help prevent exposure to unsafe content. In addition, it gives you control over the installation of extensions to help prevent unsafe or unvetted extensions. These capabilities help you establish a secure foundation for your operations. Security Command Center Enterprise provides a continuous risk engine that offers comprehensive and ongoing risk analysis and management. The risk engine feature enriches security data, assesses risk, and prioritizes vulnerabilities to help fix issues quickly. Security Command Center enables your organization to proactively identify weaknesses and implement mitigations. Google SecOps centralizes security data and provides enriched logs with timelines. This enables defenders to proactively identify active compromises and adapt defenses based on attackers' behavior. Network monitoring helps identify irregular network activity that might indicate an attack and it provides early indicators that you can use to take action. To help proactively protect your data from theft, continuously monitor for data exfiltration and use the provided tools. Validate and improve your defenses continuously This recommendation is relevant to all of the focus areas. This recommendation emphasizes the importance of targeted testing and continuous validation of controls to understand strengths and weaknesses across the entire attack surface. This includes validating the effectiveness of controls, operations, and staff through methods like the following: Penetration tests Red-blue team and purple team exercises Tabletop exercises You must also actively search for threats and use the results to improve detection and visibility. Use the following tools to continuously test and validate your defenses against real-world threats: Security Command Center Enterprise provides a continuous risk engine to evaluate vulnerabilities and prioritize remediation, which enables ongoing evaluation of your overall security posture. By prioritizing issues, Security Command Center Enterprise helps you to ensure that resources are used effectively. Google SecOps offers threat-hunting and curated detections that let you proactively identify weaknesses in your controls. This capability enables continuous testing and improvement of your ability to detect threats. Chrome Enterprise Premium provides threat and data protection features that can help you to address new and evolving threats, and continuously update your defenses against exfiltration risks and malware. Cloud Next Generation Firewall (Cloud NGFW) provides network monitoring and data-exfiltration monitoring. These capabilities can help you to validate the effectiveness of your current security posture and identify potential weaknesses. 
Data-exfiltration monitoring helps you to validate the strength of your organization's data protection mechanisms and make proactive adjustments where necessary. When you integrate threat findings from Cloud NGFW with Security Command Center and Google SecOps, you can optimize network-based threat detection, optimize threat response, and automate playbooks. For more information about this integration, see Unifying Your Cloud Defenses: Security Command Center & Cloud NGFW Enterprise. Manage and coordinate cyber-defense efforts This recommendation is relevant to all of the focus areas. As described earlier in Integrate the functions of cyber defense, the Mission Control function interconnects the other functions of the cyber-defense program. This function enables coordination and unified management across the program. It also helps you coordinate with other teams that don't work on cybersecurity. The Mission Control function promotes empowerment and accountability, facilitates agility and expertise, and drives responsibility and transparency. The following products and features can help you implement the Mission Control function: Security Command Center Enterprise acts as a central hub for coordinating and managing your cyber-defense operations. It brings tools, teams, and data together, along with the built-in Google SecOps response capabilities. Security Command Center provides clear visibility into your organization's security state and enables the identification of security misconfigurations across different resources. Google SecOps provides a platform for teams to respond to threats by mapping logs and creating timelines. You can also define detection rules and search for threats. Google Workspace and Chrome Enterprise Premium help you to manage and control end-user access to sensitive resources. You can define granular access controls based on user identity and the context of a request. Network monitoring provides insights into the performance of network resources. You can import network monitoring insights into Security Command Center and Google SecOps for centralized monitoring and correlation against other timeline based data points. This integration helps you to detect and respond to potential network usage changes caused by nefarious activity. Data-exfiltration monitoring helps to identify possible data loss incidents. With this feature, you can efficiently mobilize an incident response team, assess damages, and limit further data exfiltration. You can also improve current policies and controls to ensure data protection. Product summary The following table lists the products and features that are described in this document and maps them to the associated recommendations and security capabilities. Google Cloud product Applicable recommendations Google SecOps Use the Intelligence function in all aspects of cyber defense: Enables threat hunting and IoC matching, and integrates with Mandiant for comprehensive threat evaluation. Understand and capitalize on your defender's advantage: Provides curated detections and centralizes security data for proactive compromise identification. Validate and improve your defenses continuously: Enables continuous testing and improvement of threat detection capabilities. Manage and coordinate cyber-defense efforts through Mission Control: Provides a platform for threat response, log analysis, and timeline creation. 
Security Command Center Enterprise Use the Intelligence function in all aspects of cyber defense: Uses AI to assess risk, prioritize vulnerabilities, and provide actionable insights for remediation. Understand and capitalize on your defender's advantage: Offers comprehensive risk analysis, vulnerability prioritization, and proactive identification of weaknesses. Validate and improve your defenses continuously: Provides ongoing security posture evaluation and resource prioritization. Manage and coordinate cyber-defense efforts through Mission Control: Acts as a central hub for managing and coordinating cyber-defense operations. Chrome Enterprise Premium Use the Intelligence function in all aspects of cyber defense: Protects users from exfiltration risks, prevents malware, and provides visibility into unsafe browser activity. Understand and capitalize on your defender's advantage: Enhances security for enterprise devices through data protection, malware prevention, and control over extensions. Validate and improve your defenses continuously: Addresses new and evolving threats through continuous updates to defenses against exfiltration risks and malware. Manage and coordinate cyber-defense efforts through Mission Control: Manage and control end-user access to sensitive resources, including granular access controls. Google Workspace Manage and coordinate cyber-defense efforts through Mission Control: Manage and control end-user access to sensitive resources, including granular access controls. Network Intelligence Center Use the Intelligence function in all aspects of cyber defense: Provides visibility into network performance and detects unusual traffic patterns or data transfers. Cloud NGFW Validate and improve your defenses continuously: Optimizes network-based threat detection and response through integration with Security Command Center and Google SecOps. Previous arrow_back Implement shift-left security Next Use AI securely and responsibly arrow_forward Send feedback \ No newline at end of file diff --git a/Implement_security_by_design.txt b/Implement_security_by_design.txt new file mode 100644 index 0000000000000000000000000000000000000000..e3c85cf1ac0170893a1b83f72f7623994f7203d6 --- /dev/null +++ b/Implement_security_by_design.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security/implement-security-by-design +Date Scraped: 2025-02-23T11:42:54.556Z + +Content: +Home Docs Cloud Architecture Center Send feedback Implement security by design Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations to incorporate robust security features, controls, and practices into the design of your cloud applications, services, and platforms. From ideation to operations, security is more effective when it's embedded as an integral part of every stage of your design process. Principle overview As explained in An Overview of Google's Commitment to Secure by Design, secure by default and secure by design are often used interchangeably, but they represent distinct approaches to building secure systems. Both approaches aim to minimize vulnerabilities and enhance security, but they differ in scope and implementation: Secure by default: focuses on ensuring that a system's default settings are set to a secure mode, minimizing the need for users or administrators to take actions to secure the system. 
This approach aims to provide a baseline level of security for all users. Secure by design: emphasizes proactively incorporating security considerations throughout a system's development lifecycle. This approach is about anticipating potential threats and vulnerabilities early and making design choices that mitigate risks. This approach involves using secure coding practices, conducting security reviews, and embedding security throughout the design process. The secure-by-design approach is an overarching philosophy that guides the development process and helps to ensure that security isn't an afterthought but is an integral part of a system's design. Recommendations To implement the secure by design principle for your cloud workloads, consider the recommendations in the following sections: Choose system components that help to secure your workloads Build a layered security approach Use hardened and attested infrastructure and services Encrypt data at rest and in transit Choose system components that help to secure your workloads This recommendation is relevant to all of the focus areas. A fundamental decision for effective security is the selection of robust system components—including both hardware and software components—that constitute your platform, solution, or service. To reduce the security attack surface and limit potential damage, you must also carefully consider the deployment patterns of these components and their configurations. In your application code, we recommend that you use straightforward, safe, and reliable libraries, abstractions, and application frameworks in order to eliminate classes of vulnerabilities. To scan for vulnerabilities in software libraries, you can use third-party tools. You can also use Assured Open Source Software, which helps to reduce risks to your software supply chain by using open source software (OSS) packages that Google uses and secures. Your infrastructure must use networking, storage, and compute options that support safe operation and align with your security requirements and risk acceptance levels. Infrastructure security is important for both internet-facing and internal workloads. For information about other Google solutions that support this recommendation, see Implement shift-left security. Build a layered security approach This recommendation is relevant to the following focus areas: AI and ML security Infrastructure security Identity and access management Data security We recommend that you implement security at each layer of your application and infrastructure stack by applying a defense-in-depth approach. Use the security features in each component of your platform. To limit access and identify the boundaries of the potential impact (that is, the blast radius) in the event of a security incident, do the following: Simplify your system's design to accommodate flexibility where possible. Document the security requirements of each component. Incorporate a robust secured mechanism to address resiliency and recovery requirements. When you design the security layers, perform a risk assessment to determine the security features that you need in order to meet internal security requirements and external regulatory requirements. We recommend that you use an industry-standard risk assessment framework that applies to cloud environments and that is relevant to your regulatory requirements. For example, the Cloud Security Alliance (CSA) provides the Cloud Controls Matrix (CCM). 
Your risk assessment provides you with a catalog of risks and corresponding security controls to mitigate them. When you perform the risk assessment, remember that you have a shared responsibility arrangement with your cloud provider. Therefore, your risks in a cloud environment differ from your risks in an on-premises environment. For example, in an on-premises environment, you need to mitigate vulnerabilities to your hardware stack. In contrast, in a cloud environment, the cloud provider bears these risks. Also, remember that the boundaries of shared responsibilities differ between IaaS, PaaS, and SaaS services for each cloud provider. After you identify potential risks, you must design and create a mitigation plan that uses technical, administrative, and operational controls, as well as contractual protections and third-party attestations. In addition, a threat modeling method, such as the OWASP application threat modeling method, helps you to identify potential gaps and suggest actions to address the gaps. Use hardened and attested infrastructure and services This recommendation is relevant to all of the focus areas. A mature security program mitigates new vulnerabilities as described in security bulletins. The security program should also provide remediation to fix vulnerabilities in existing deployments and secure your VM and container images. You can use hardening guides that are specific to the OS and application of your images, as well as benchmarks like the one provided by the Center for Internet Security (CIS). If you use custom images for your Compute Engine VMs, you need to patch the images yourself. Alternatively, you can use Google-provided curated OS images, which are patched regularly. To run containers on Compute Engine VMs, use Google-curated Container-Optimized OS images. Google regularly patches and updates these images. If you use GKE, we recommend that you enable node auto-upgrades so that Google updates your cluster nodes with the latest patches. Google manages GKE control planes, which are automatically updated and patched. To further reduce the attack surface of your containers, you can use distroless images. Distroless images are ideal for security-sensitive applications, microservices, and situations where minimizing the image size and attack surface is paramount. For sensitive workloads, use Shielded VM, which prevents malicious code from being loaded during the VM boot cycle. Shielded VM instances provide boot security, monitor integrity, and use the Virtual Trusted Platform Module (vTPM). To help secure SSH access, OS Login lets your employees connect to your VMs by using Identity and Access Management (IAM) permissions as the source of truth instead of relying on SSH keys. Therefore, you don't need to manage SSH keys throughout your organization. OS Login ties an administrator's access to their employee lifecycle, so when employees change roles or leave your organization, their access is revoked with their account. OS Login also supports Google two-factor authentication, which adds an extra layer of security against account takeover attacks. In GKE, application instances run within Docker containers. To enable a defined risk profile and to restrict employees from making changes to containers, ensure that your containers are stateless and immutable. The immutability principle means that your employees don't modify the container or access it interactively. If the container must be changed, you build a new image and redeploy that image.
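As an illustration of the Shielded VM and OS Login controls mentioned above, the following Terraform sketch shows how those settings might look on a single Compute Engine instance. The project, zone, machine type, subnet, and image are placeholders, and OS Login can equally be enabled through project-wide metadata instead of per instance; treat this as a sketch rather than a hardened baseline.

```
# Sketch: a Compute Engine instance with Shielded VM features and OS Login enabled.
# Project, zone, machine type, subnet, and image are placeholders.

resource "google_compute_instance" "hardened_vm" {
  project      = "workload-project-id"
  name         = "hardened-vm"
  zone         = "us-central1-a"
  machine_type = "e2-standard-2"

  boot_disk {
    initialize_params {
      # A Google-curated image family that is patched regularly.
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    subnetwork = "spoke-subnet-us-central1"
    # No access_config block, so the VM gets no external IP address.
  }

  shielded_instance_config {
    enable_secure_boot          = true
    enable_vtpm                 = true
    enable_integrity_monitoring = true
  }

  metadata = {
    enable-oslogin = "TRUE" # Use IAM-based OS Login instead of managing SSH keys.
  }
}
```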
Enable SSH access to the underlying containers only in specific debugging scenarios. To help globally secure configurations across your environment, you can use organization policies to set constraints or guardrails on resources that affect the behavior of your cloud assets. For example, you can define the following organization policies and apply them either globally across a Google Cloud organization or selectively at the level of a folder or project: Disable external IP address allocation to VMs. Restrict resource creation to specific geographical locations. Disable the creation of Service Accounts or their keys. Encrypt data at rest and in transit This recommendation is relevant to the following focus areas: Infrastructure security Data security Data encryption is a foundational control to protect sensitive information, and it's a key part of data governance. An effective data protection strategy includes access control, data segmentation and geographical residency, auditing, and encryption implementation that's based on a careful assessment of requirements. By default, Google Cloud encrypts customer data that's stored at rest, with no action required from you. In addition to default encryption, Google Cloud provides options for envelope encryption and encryption key management. You must identify the solutions that best fit your requirements for key generation, storage, and rotation, whether you're choosing the keys for your storage, for compute, or for big data workloads. For example, Customer-managed encryption keys (CMEKs) can be created in Cloud Key Management Service (Cloud KMS). The CMEKs can be either software-based or HSM-protected to meet your regulatory or compliance requirements, such as the need to rotate encryption keys regularly. Cloud KMS Autokey lets you automate the provisioning and assignment of CMEKs. In addition, you can bring your own keys that are sourced from a third-party key management system by using Cloud External Key Manager (Cloud EKM). We strongly recommend that data be encrypted in-transit. Google encrypts and authenticates data in transit at one or more network layers when data moves outside physical boundaries that aren't controlled by Google or on behalf of Google. All VM-to-VM traffic within a VPC network and between peered VPC networks is encrypted. You can use MACsec for encryption of traffic over Cloud Interconnect connections. IPsec provides encryption for traffic over Cloud VPN connections. You can protect application-to-application traffic in the cloud by using security features like TLS and mTLS configurations in Apigee and Cloud Service Mesh for containerized applications. By default, Google Cloud encrypts data at rest and data in transit across the network. However, data isn't encrypted by default while it's in use in memory. If your organization handles confidential data, you need to mitigate any threats that undermine the confidentiality and integrity of either the application or the data in system memory. To mitigate these threats, you can use Confidential Computing, which provides a trusted execution environment for your compute workloads. For more information, see Confidential VM overview. 
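To illustrate the CMEK approach described above, the following Terraform sketch creates a key ring and a software-protected key with automatic rotation, and uses the key as the default encryption key for a Cloud Storage bucket. The project IDs, names, location, 90-day rotation period, and the Cloud Storage service agent address are placeholders that you would replace with your own values, and an HSM-protected key or Cloud KMS Autokey might be more appropriate depending on your compliance requirements.

```
# Sketch: a customer-managed encryption key (CMEK) with automatic rotation,
# applied as the default key for a Cloud Storage bucket. Values are placeholders.

resource "google_kms_key_ring" "primary" {
  project  = "security-project-id"
  name     = "primary-keyring"
  location = "us-central1"
}

resource "google_kms_crypto_key" "bucket_key" {
  name            = "bucket-cmek"
  key_ring        = google_kms_key_ring.primary.id
  rotation_period = "7776000s" # 90 days; align with your key-rotation policy.
}

# The Cloud Storage service agent needs permission to use the key.
resource "google_kms_crypto_key_iam_member" "storage_agent" {
  crypto_key_id = google_kms_crypto_key.bucket_key.id
  role          = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member        = "serviceAccount:service-123456789012@gs-project-accounts.iam.gserviceaccount.com" # placeholder project number
}

resource "google_storage_bucket" "sensitive_data" {
  project  = "workload-project-id"
  name     = "example-sensitive-data-bucket" # bucket names must be globally unique
  location = "US-CENTRAL1"

  encryption {
    default_kms_key_name = google_kms_crypto_key.bucket_key.id
  }

  depends_on = [google_kms_crypto_key_iam_member.storage_agent]
}
```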
Previous arrow_back Overview Next Implement zero trust arrow_forward Send feedback \ No newline at end of file diff --git a/Implement_shift-left_security.txt b/Implement_shift-left_security.txt new file mode 100644 index 0000000000000000000000000000000000000000..eb8221c56050b7a267c98960e4714729b0c1c11e --- /dev/null +++ b/Implement_shift-left_security.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security/implement-shift-left-security +Date Scraped: 2025-02-23T11:42:58.795Z + +Content: +Home Docs Cloud Architecture Center Send feedback Implement shift-left security Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This principle in the security pillar of the Google Cloud Architecture Framework helps you identify practical controls that you can implement early in the software development lifecycle to improve your security posture. It provides recommendations that help you implement preventive security guardrails and post-deployment security controls. Principle overview Shift-left security means adopting security practices early in the software development lifecycle. This principle has the following goals: Avoid security defects before system changes are made. Implement preventive security guardrails and adopt practices such as infrastructure as code (IaC), policy as code, and security checks in the CI/CD pipeline. You can also use other platform-specific capabilities like Organization Policy Service and hardened GKE clusters in Google Cloud. Detect and fix security bugs early, fast, and reliably after any system changes are committed. Adopt practices like code reviews, post-deployment vulnerability scanning, and security testing. The Implement security by design and shift-left security principles are related but they differ in scope. The security-by-design principle helps you to avoid fundamental design flaws that would require re-architecting the entire system. For example, a threat-modeling exercise reveals that the current design doesn't include an authorization policy, and all users would have the same level of access without it. Shift-left security helps you to avoid implementation defects (bugs and misconfigurations) before changes are applied, and it enables fast, reliable fixes after deployment. Recommendations To implement the shift-left security principle for your cloud workloads, consider the recommendations in the following sections: Adopt preventive security controls Automate provisioning and management of cloud resources Automate secure application releases Ensure that application deployments follow approved processes Scan for known vulnerabilities before application deployment Monitor your application code for known vulnerabilities Adopt preventive security controls This recommendation is relevant to the following focus areas: Identity and access management Cloud governance, risk, and compliance Preventive security controls are crucial for maintaining a strong security posture in the cloud. These controls help you proactively mitigate risks. You can prevent misconfigurations and unauthorized access to resources, enable developers to work efficiently, and help ensure compliance with industry standards and internal policies. Preventive security controls are more effective when they're implemented by using infrastructure as code (IaC). With IaC, preventive security controls can include more customized checks on the infrastructure code before changes are deployed. 
When combined with automation, preventive security controls can run as part of your CI/CD pipeline's automatic checks. The following products and Google Cloud capabilities can help you implement preventive controls in your environment: Organization Policy Service constraints: configure predefined and custom constraints with centralized control. VPC Service Controls: create perimeters around your Google Cloud services. Identity and Access Management (IAM), Privileged Access Manager, and principal access boundary policies: restrict access to resources. Policy Controller and Open Policy Agent (OPA): enforce IaC constraints in your CI/CD pipeline and avoid cloud misconfigurations. IAM lets you authorize who can act on specific resources based on permissions. For more information, see Access control for organization resources with IAM. Organization Policy Service lets you set restrictions on resources to specify how they can be configured. For example, you can use an organization policy to do the following: Limit resource sharing based on domain. Limit the use of service accounts. Restrict the physical location of newly created resources. In addition to using organizational policies, you can restrict access to resources by using the following methods: Tags with IAM: assign a tag to a set of resources and then set the access definition for the tag itself, rather than defining the access permissions on each resource. IAM Conditions: define conditional, attribute-based access control for resources. Defense in depth: use VPC Service Controls to further restrict access to resources. For more information about resource management, see Decide a resource hierarchy for your Google Cloud landing zone. Automate provisioning and management of cloud resources This recommendation is relevant to the following focus areas: Application security Cloud governance, risk, and compliance Automating the provisioning and management of cloud resources and workloads is more effective when you also adopt declarative IaC, as opposed to imperative scripting. IaC isn't a security tool or practice on its own, but it helps you to improve the security of your platform. Adopting IaC lets you create repeatable infrastructure and provides your operations team with a known good state. IaC also improves the efficiency of rollbacks, audit changes, and troubleshooting. When combined with CI/CD pipelines and automation, IaC also gives you the ability to adopt practices such as policy as code with tools like OPA. You can audit infrastructure changes over time and run automatic checks on the infrastructure code before changes are deployed. To automate the infrastructure deployment, you can use tools like Config Controller, Terraform, Jenkins, and Cloud Build. To help you build a secure application environment using IaC and automation, Google Cloud provides the enterprise foundations blueprint. This blueprint is Google's opinionated design that follows all of our recommended practices and configurations. The blueprint provides step-by-step instructions to configure and deploy your Google Cloud topology by using Terraform and Cloud Build. You can modify the scripts of the enterprise foundations blueprint to configure an environment that follows Google recommendations and meets your own security requirements. You can further build on the blueprint with additional blueprints or design your own automation. The Google Cloud Architecture Center provides other blueprints that can be implemented on top of the enterprise foundations blueprint. 
The following are a few examples of these blueprints: Deploy an enterprise developer platform on Google Cloud Deploy a secured serverless architecture using Cloud Run Build and deploy generative AI and machine learning models in an enterprise Import data from Google Cloud into a secured BigQuery data warehouse Deploy network monitoring and telemetry capabilities in Google Cloud Automate secure application releases This recommendation is relevant to the following focus area: Application security. Without automated tools, it can be difficult to deploy, update, and patch complex application environments to meet consistent security requirements. We recommend that you build automated CI/CD pipelines for your software development lifecycle (SDLC). Automated CI/CD pipelines help you to remove manual errors, provide standardized development feedback loops, and enable efficient product iterations. Continuous delivery is one of the best practices that the DORA framework recommends. Automating application releases by using CI/CD pipelines helps to improve your ability to detect and fix security bugs early, fast, and reliably. For example, you can scan for security vulnerabilities automatically when artifacts are created, narrow the scope of security reviews, and roll back to a known and safe version. You can also define policies for different environments (such as development, test, or production environments) so that only verified artifacts are deployed. To help you automate application releases and embed security checks in your CI/CD pipeline, Google Cloud provides multiple tools including Cloud Build, Cloud Deploy, Web Security Scanner, and Binary Authorization. To establish a process that verifies multiple security requirements in your SDLC, use the Supply-chain Levels for Software Artifacts (SLSA) framework, which has been defined by Google. SLSA requires security checks for source code, build process, and code provenance. Many of these requirements can be included in an automated CI/CD pipeline. To understand how Google applies these practices internally, see Google Cloud's approach to change. Ensure that application deployments follow approved processes This recommendation is relevant to the following focus area: Application security. If an attacker compromises your CI/CD pipeline, your entire application stack can be affected. To help secure the pipeline, you should enforce an established approval process before you deploy the code into production. If you use Google Kubernetes Engine (GKE), GKE Enterprise, or Cloud Run, you can establish an approval process by using Binary Authorization. Binary Authorization attaches configurable signatures to container images. These signatures (also called attestations) help to validate the image. At deployment time, Binary Authorization uses these attestations to determine whether a process was completed. For example, you can use Binary Authorization to do the following: Verify that a specific build system or CI pipeline created a container image. Validate that a container image is compliant with a vulnerability signing policy. Verify that a container image passes the criteria for promotion to the next deployment environment, such as from development to QA. By using Binary Authorization, you can enforce that only trusted code runs on your target platforms. Scan for known vulnerabilities before application deployment This recommendation is relevant to the following focus area: Application security. 
We recommend that you use automated tools that can continuously perform vulnerability scans on application artifacts before they're deployed to production. For containerized applications, use Artifact Analysis to automatically run vulnerability scans for container images. Artifact Analysis scans new images when they're uploaded to Artifact Registry. The scan extracts information about the system packages in the container. After the initial scan, Artifact Analysis continuously monitors the metadata of scanned images in Artifact Registry for new vulnerabilities. When Artifact Analysis receives new and updated vulnerability information from vulnerability sources, it does the following: Updates the metadata of the scanned images to keep them up to date. Creates new vulnerability occurrences for new notes. Deletes vulnerability occurrences that are no longer valid. Monitor your application code for known vulnerabilities This recommendation is relevant to the following focus area: Application security. Use automated tools to constantly monitor your application code for known vulnerabilities such as the OWASP Top 10. For more information about Google Cloud products and features that support OWASP Top 10 mitigation techniques, see OWASP Top 10 mitigation options on Google Cloud. Use Web Security Scanner to help identify security vulnerabilities in your App Engine, Compute Engine, and GKE web applications. The scanner crawls your application, follows all of the links within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible. It can automatically scan for and detect common vulnerabilities, including cross-site scripting, code injection, mixed content, and outdated or insecure libraries. Web Security Scanner provides early identification of these types of vulnerabilities without distracting you with false positives. In addition, if you use GKE Enterprise to manage fleets of Kubernetes clusters, the security posture dashboard shows opinionated, actionable recommendations to help improve your fleet's security posture. Previous arrow_back Implement zero trust Next Implement preemptive cyber defense arrow_forward Send feedback \ No newline at end of file diff --git a/Implement_two-tower_retrieval_with_large-scale_candidate_generation.txt b/Implement_two-tower_retrieval_with_large-scale_candidate_generation.txt new file mode 100644 index 0000000000000000000000000000000000000000..463b5ef50438475fbe559f6fa207ca076cbed531 --- /dev/null +++ b/Implement_two-tower_retrieval_with_large-scale_candidate_generation.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/implement-two-tower-retrieval-large-scale-candidate-generation +Date Scraped: 2025-02-23T11:46:36.545Z + +Content: +Home Docs Cloud Architecture Center Send feedback Implement two-tower retrieval for large-scale candidate generation Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-16 UTC This document provides a reference architecture that shows you how to implement an end-to-end two-tower candidate generation workflow with Vertex AI. The two-tower modeling framework is a powerful retrieval technique for personalization use cases because it learns the semantic similarity between two different entities, such as web queries and candidate items. 
This document is for technical practitioners like data scientists and machine learning engineers who are developing large-scale recommendation applications with low-latency serving requirements. For more information about the modeling techniques, problem framing, and data preparation for building a two-tower model, see Scaling deep retrieval with TensorFlow Recommenders and Vector Search. Architecture The following diagram shows an architecture to train a two-tower model and deploy each tower separately for different deployment and serving tasks: The architecture in the diagram includes the following components: Training data: Training files are stored in Cloud Storage. Two-tower training: The combined two-tower model is trained offline using the Vertex AI Training service; each tower is saved separately and used for different tasks. Registered query and candidate towers: After the towers are trained, each tower is separately uploaded to Vertex AI Model Registry. Deployed query tower: The registered query tower is deployed to a Vertex AI online endpoint. Batch predict embeddings: The registered candidate tower is used in a batch prediction job to precompute the embedding representations of all available candidate items. Embeddings JSON: The predicted embeddings are saved to a JSON file in Cloud Storage. ANN index: Vertex AI Vector Search is used to create a serving index that's configured for approximate nearest neighbor (ANN) search. Deployed index: The ANN index is deployed to a Vertex AI Vector Search index endpoint. Products used This reference architecture uses the following Google Cloud products: Vertex AI Training: A fully managed training service that lets you operationalize large-scale model training. Vector Search: A vector similarity-matching service that lets you store, index, and search semantically similar or related data. Vertex AI Model Registry: A central repository where you can manage the lifecycle of your ML models. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Use case To meet low-latency serving requirements, large-scale recommenders are often deployed to production as two-stage systems or sometimes as multi-stage systems. The goal of the first stage, candidate generation, is to sift through a large collection of candidate items and retrieve a relevant subset of hundreds of items for downstream filtering and ranking tasks. To optimize this retrieval task, consider these two core objectives: During model training, learn the best representation of the problem or task to be solved, and compile this representation into embeddings. During model serving, retrieve relevant items fast enough to meet latency requirements. The following diagram shows the conceptual components of a two-stage recommender: In the diagram, candidate generation filters millions of candidate items. Ranking then filters the resulting hundreds of candidate items to return dozens of recommended items. The reference architecture in this document trains a two-tower-based retrieval model. In the architecture, each tower is a neural network that processes either query or candidate item features, and then produces an embedding representation of those features. Each tower is deployed separately, because each tower will be used for different tasks in production: Candidate tower: The candidate tower is used to precompute embeddings for all candidate items. 
The precomputed embeddings are deployed to a Vertex AI Vector Search index endpoint that's optimized for low-latency retrieval. Query tower: During online serving, the deployed query tower converts raw user queries to embedding representations. The embedding representations are then used to look up similar item embeddings in the deployed index. Two-tower architectures are ideal for many retrieval tasks because a two-tower architecture captures the semantic relationship of query and candidate entities, and maps them to a shared embedding space. When the entities are mapped to a shared embedding space, semantically similar entities are clustered closer together. Therefore, if you compute the vector embeddings of a given query, you can search the embedding space for the closest (most similar) candidate items. The primary benefit of such an architecture is the ability to decouple the inference of query and candidate representations. The advantages of this decoupling are mainly twofold: You can serve new (fresh) items without retraining a new item vocabulary. By feeding any set of item features to the candidate item tower, you can compute the item embeddings for any set of candidates, even those that aren't seen during training. Performing this computation helps to address the cold-start problem. The candidate tower can support an arbitrary set of candidate items, including items that haven't yet interacted with the recommendation system. This support is possible because two-tower architectures process rich content and metadata features about each pair. This kind of processing lets the system describe an unknown item in terms of items that it knows. You can optimize the retrieval inference by precomputing all candidate item embeddings. These precomputed embeddings can be indexed and deployed to a serving infrastructure that's optimized for low-latency retrieval. The co-learning of the towers lets you describe items in terms of queries and vice versa. If you have one half of a pair, like a query, and you need to look for the other corresponding item, you can precompute half of the equation ahead of time. The precomputation lets you make the rest of the decision as quickly as possible. Design considerations This section provides guidance to help you develop a candidate-generation architecture in Google Cloud that meets your security and performance needs. The guidance in this section isn't exhaustive. Depending on your specific requirements, you might choose to consider additional design factors and trade-offs. Security Vertex AI Vector Search supports both public and Virtual Private Cloud (VPC) endpoint deployments. If you want to use a VPC network, get started by following Set up a VPC Network Peering connection. If the Vector Search index is deployed within a VPC perimeter, users must access the associated resources from within the same VPC network. For example, if you're developing from Vertex AI Workbench, you need to create the workbench instance within the same VPC network as the deployed index endpoint. Similarly, any pipeline that's expected to create an endpoint, or deploy an index to an endpoint, should run within the same VPC network. Performance optimization This section describes the factors to consider when you use this reference architecture to design a topology in Google Cloud that meets the performance requirements of your workloads. 
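Serving latency in this architecture comes from two online calls: embedding the raw query with the deployed query tower, and then searching the deployed index. The following sketch shows that path with the Vertex AI SDK for Python; the project, endpoint and index resource names, deployed index ID, and instance payload are placeholders, and the real payload must match the query tower's serving signature.

```python
# Minimal online-retrieval sketch using the Vertex AI SDK for Python.
# All resource names, IDs, and the instance payload are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# 1. Embed the raw query with the deployed query tower.
query_endpoint = aiplatform.Endpoint("QUERY_TOWER_ENDPOINT_ID")  # placeholder
prediction = query_endpoint.predict(instances=[{"query_text": "wireless headphones"}])
query_embedding = prediction.predictions[0]  # a list of floats

# 2. Look up approximate nearest neighbors in the deployed ANN index.
index_endpoint = aiplatform.MatchingEngineIndexEndpoint(
    index_endpoint_name="INDEX_ENDPOINT_ID"  # placeholder
)
neighbors = index_endpoint.find_neighbors(
    deployed_index_id="two_tower_candidates",  # placeholder
    queries=[query_embedding],
    num_neighbors=100,
)
for neighbor in neighbors[0]:
    print(neighbor.id, neighbor.distance)
```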
Profile training jobs To optimize data input pipelines and the overall training graph, we recommend that you profile training performance with Cloud Profiler. Profiler is a managed implementation of the open source TensorBoard Profiler. By passing the --profiler argument to the training job, you enable the TensorFlow callback to profile a set number of batches for each epoch. The profile captures traces from the host CPU and from the device GPU or TPU hardware. The traces provide information about the resource consumption of the training job. To avoid out-of-memory errors, we recommend that you start with a profile duration between 2 and 10 training steps, and increase as needed. To learn how to use Profiler with Vertex AI Training and Vertex AI TensorBoard, see Profile model training performance. For debugging best practices, see Optimize GPU performance. For information about how to optimize performance, see Optimize TensorFlow performance using the Profiler. Fully utilize accelerators When you attach training accelerators such as NVIDIA GPUs or Cloud TPUs, it's important to keep them fully utilized. Full utilization of training accelerators is a best practice for cost management because accelerators are the most expensive component in the architecture. Full utilization of training accelerators is also a best practice for job efficiency because having no idle time results in less overall resource consumption. To keep an accelerator fully utilized, you typically perform a few iterations of finding the bottleneck, optimizing the bottleneck, and then repeating these steps until the accelerator device utilization is acceptable. Because many of the datasets for this use case are too large to fit into memory, bottlenecks are typically found between storage, host VMs, and the accelerator. The following diagram shows the conceptual stages of an ML training input pipeline: In the diagram, data is read from storage and preprocessed. After the data is preprocessed, it's sent to the device. To optimize performance, start by determining if overall performance is bounded by the host CPU or by the accelerator device (GPU or TPU). The device is responsible for accelerating the training loop, while the host is responsible for feeding training data to the device and receiving results from the device. The following sections describe how to resolve bottlenecks by improving input pipeline performance and device performance. Improve input pipeline performance Reading data from storage: To improve data reads, try caching, prefetching, sequential access patterns, and parallel I/O. Preprocessing data: To improve data preprocessing, configure parallel processing for data extraction and transformation, and tune the interleave transformation in the data input pipeline. Sending data to device: To reduce overall job time, transfer data from the host to multiple devices in parallel. Improve device performance Increase the mini-batch size. The mini-batch size is the number of training samples that are used by each device in one iteration of a training loop. By increasing mini-batch size, you increase parallelism between operations and improve data reuse. However, the mini-batch must be able to fit into memory with the rest of the training program. If you increase mini-batch size too much, you can experience out-of-memory errors and model divergence. Vectorize user-defined functions. Typically, data transformations can be expressed as a user-defined function that describes how to transform each element of an input dataset. 
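The following sketch brings several of these recommendations together in a tf.data input pipeline: parallel interleaved reads from Cloud Storage, shuffling, batching, a batch-level (vectorized) parse function, and prefetching so that the host prepares the next batch while the device trains on the current one. The file pattern and feature schema are hypothetical.

```python
# Sketch of an optimized tf.data input pipeline for training files in
# Cloud Storage. The file pattern and feature schema are hypothetical.
import tensorflow as tf

FEATURES = {
    "query_id": tf.io.FixedLenFeature([], tf.int64),
    "candidate_id": tf.io.FixedLenFeature([], tf.int64),
}

def parse_batch(serialized_batch):
    # Vectorized user-defined function: parses a whole batch at once, so the
    # per-call overhead is paid once per batch instead of once per example.
    return tf.io.parse_example(serialized_batch, FEATURES)

files = tf.data.Dataset.list_files("gs://my-bucket/training/*.tfrecord")  # placeholder
dataset = (
    files.interleave(                      # parallel, interleaved reads from storage
        tf.data.TFRecordDataset,
        num_parallel_calls=tf.data.AUTOTUNE,
        deterministic=False,
    )
    .shuffle(100_000)
    .batch(4096)                           # mini-batch size: tune against device memory
    .map(parse_batch, num_parallel_calls=tf.data.AUTOTUNE)  # batch-level (vectorized) map
    .prefetch(tf.data.AUTOTUNE)            # overlap host preprocessing with device training
)
```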
To vectorize this function, you apply the transform operation over a batch of inputs at once instead of transforming one element at a time. Any user-defined function has overhead that's related to scheduling and execution. When you transform a batch of inputs, you incur the overhead once per batch, instead of once per dataset element. Scale up before scaling out When you configure the compute resources for your training jobs, we recommend that you scale up before scaling out. This means that you should choose a larger, more powerful device before you use multiple less powerful devices. We recommend that you scale in the following way: Single worker + single device Single worker + more powerful device Single worker + multiple devices Distributed training Evaluate recall against latency for ANN vector search To evaluate the benefits of ANN search, you can measure the latency and recall of a given query. To help with index tuning, Vertex AI Vector Search provides the ability to create a brute-force index. Brute-force indexes will perform an exhaustive search, at the expense of higher latency, to find the true nearest neighbors for a given query vector. Brute-force indexes aren't intended for production use, but they provide a good baseline when you compute recall during index tuning. To evaluate recall against latency, you deploy the precomputed candidate embeddings to one index that's configured for ANN search and to another index that's configured for brute-force search. The brute-force index will return the absolute nearest neighbors, but it will typically take longer than an ANN search. You might be willing to sacrifice some retrieval recall for gains in retrieval latency, but this tradeoff should be evaluated. Additional characteristics that impact recall and latency include the following: Modeling parameters: Many modeling decisions impact the embedding space, which ultimately becomes the serving index. Compare the candidates that are retrieved for indexes that are built from both shallow and deep retrieval models. Dimensions: The number of dimensions is another aspect that's ultimately determined by the model. The dimensions of the ANN index must match the dimensions of the query and candidate tower vectors. Crowding and filtering tags: Tags can provide powerful capabilities to tailor results for different production use cases. It's a best practice to understand how tags influence the retrieved candidates and impact performance. ANN count: Increasing this value increases recall and can proportionately increase latency. Percentage of leaf nodes to search: The percentage of leaf nodes to search is the most critical option for evaluating the tradeoff between recall and latency. Increasing this value increases recall and can proportionately increase latency. What's next For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthors: Jordan Totten | Customer EngineerJeremy Wortz | Customer EngineerLakshmanan Sethu | Technical Account ManagerOther contributor: Kaz Sato | Staff Developer Advocate Send feedback \ No newline at end of file diff --git a/Implement_zero_trust.txt b/Implement_zero_trust.txt new file mode 100644 index 0000000000000000000000000000000000000000..a79596ac7c21a28dfde777a0266b4e3472035ca5 --- /dev/null +++ b/Implement_zero_trust.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security/implement-zero-trust +Date Scraped: 2025-02-23T11:42:56.087Z + +Content: +Home Docs Cloud Architecture Center Send feedback Implement zero trust Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This principle in the security pillar of the Google Cloud Architecture Framework helps you ensure comprehensive security across your cloud workloads. The principle of zero trust emphasizes the following practices: Eliminating implicit trust Applying the principle of least privilege to access control Enforcing explicit validation of all access requests Adopting an assume-breach mindset to enable continuous verification and security posture monitoring Principle overview The zero-trust model shifts the security focus from perimeter-based security to an approach where no user or device is considered to be inherently trustworthy. Instead, every access request must be verified, regardless of its origin. This approach involves authenticating and authorizing every user and device, validating their context (location and device posture), and granting least privilege access to only the necessary resources. Implementing the zero-trust model helps your organization enhance its security posture by minimizing the impact of potential breaches and protecting sensitive data and applications against unauthorized access. The zero-trust model helps you ensure confidentiality, integrity, and availability of data and resources in the cloud. Recommendations To implement the zero-trust model for your cloud workloads, consider the recommendations in the following sections: Secure your network Verify every access attempt explicitly Monitor and maintain your network Secure your network This recommendation is relevant to the following focus area: Infrastructure security. Transitioning from conventional perimeter-based security to a zero-trust model requires multiple steps. Your organization might have already integrated certain zero-trust controls into its security posture. However, a zero-trust model isn't a singular product or solution. Instead, it's a holistic integration of multiple security layers and best practices. This section describes recommendations and techniques to implement zero trust for network security. Access control: Enforce access controls based on user identity and context by using solutions like Chrome Enterprise Premium and Identity-Aware Proxy (IAP). By doing this, you shift security from the network perimeter to individual users and devices. This approach enables granular access control and reduces the attack surface. Network security: Secure network connections between your on-premises, Google Cloud, and multicloud environments. Use the private connectivity methods from Cloud Interconnect and IPsec VPNs. To help secure access to Google Cloud services and APIs, use Private Service Connect. To help secure outbound access from workloads deployed on GKE Enterprise, use Cloud Service Mesh egress gateways. 
Network design: Prevent potential security risks by deleting default networks in existing projects and disabling the creation of default networks in new projects. To avoid conflicts, plan your network and IP address allocation carefully. To enforce effective access control, limit the number of Virtual Private Cloud (VPC) networks per project. Segmentation: Isolate workloads but maintain centralized network management. To segment your network, use Shared VPC. Define firewall policies and rules at the organization, folder, and VPC network levels. To prevent data exfiltration, establish secure perimeters around sensitive data and services by using VPC Service Controls. Perimeter security: Protect against DDoS attacks and web application threats. To protect against threats, use Google Cloud Armor. Configure security policies to allow, deny, or redirect traffic at the Google Cloud edge. Automation: Automate infrastructure provisioning by embracing infrastructure as code (IaC) principles and by using tools like Terraform, Jenkins, and Cloud Build. IaC helps to ensure consistent security configurations, simplified deployments, and rapid rollbacks in case of issues. Secure foundation: Establish a secure application environment by using the Enterprise foundations blueprint. This blueprint provides prescriptive guidance and automation scripts to help you implement security best practices and configure your Google Cloud resources securely. Verify every access attempt explicitly This recommendation is relevant to the following focus areas: Identity and access management Security operations (SecOps) Logging, auditing, and monitoring Implement strong authentication and authorization mechanisms for any user, device, or service that attempts to access your cloud resources. Don't rely on location or network perimeter as a security control. Don't automatically trust any user, device, or service, even if they are already inside the network. Instead, every attempt to access resources must be rigorously authenticated and authorized. You must implement strong identity verification measures, such as multi-factor authentication (MFA). You must also ensure that access decisions are based on granular policies that consider various contextual factors like user role, device posture, and location. To implement this recommendation, use the following methods, tools, and technologies: Unified identity management: Ensure consistent identity management across your organization by using a single identity provider (IdP). Google Cloud supports federation with most IdPs, including on-premises Active Directory. Federation lets you extend your existing identity management infrastructure to Google Cloud and enable single sign-on (SSO) for users. If you don't have an existing IdP, consider using Cloud Identity Premium or Google Workspace. Limited service account permissions: Use service accounts carefully, and adhere to the principle of least privilege. Grant only the necessary permissions required for each service account to perform its designated tasks. Use Workload Identity Federation for applications that run on Google Kubernetes Engine (GKE) or run outside Google Cloud to access resources securely. Robust processes: Update your identity processes to align with cloud security best practices. To help ensure compliance with regulatory requirements, implement identity governance to track access, risks, and policy violations. Review and update your existing processes for granting and auditing access-control roles and permissions. 
Strong authentication: Implement SSO for user authentication and implement MFA for privileged accounts. Google Cloud supports various MFA methods, including Titan Security Keys, for enhanced security. For workload authentication, use OAuth 2.0 or signed JSON Web Tokens (JWTs). Least privilege: Minimize the risk of unauthorized access and data breaches by enforcing the principles of least privilege and separation of duties. Avoid overprovisioning user access. Consider implementing just-in-time privileged access for sensitive operations. Logging: Enable audit logging for administrator and data access activities. For analysis and threat detection, scan the logs by using Security Command Center Enterprise or Google Security Operations. Configure appropriate log retention policies to balance security needs with storage costs. Monitor and maintain your network This recommendation is relevant to the following focus areas: Logging, auditing, and monitoring Application security Security operations (SecOps) Infrastructure security When you plan and implement security measures, assume that an attacker is already inside your environment. This proactive approach involves using the following multiple tools and techniques to provide visibility into your network: Centralized logging and monitoring: Collect and analyze security logs from all of your cloud resources through centralized logging and monitoring. Establish baselines for normal network behavior, detect anomalies, and identify potential threats. Continuously analyze network traffic flows to identify suspicious patterns and potential attacks. Insights into network performance and security: Use tools like Network Analyzer. Monitor traffic for unusual protocols, unexpected connections, or sudden spikes in data transfer, which could indicate malicious activity. Vulnerability scanning and remediation: Regularly scan your network and applications for vulnerabilities. Use Web Security Scanner, which can automatically identify vulnerabilities in your Compute Engine instances, containers, and GKE clusters. Prioritize remediation based on the severity of vulnerabilities and their potential impact on your systems. Intrusion detection: Monitor network traffic for malicious activity and automatically block or get alerts for suspicious events by using Cloud IDS and Cloud NGFW intrusion prevention service. Security analysis: Consider implementing Google SecOps to correlate security events from various sources, provide real-time analysis of security alerts, and facilitate incident response. Consistent configurations: Ensure that you have consistent security configurations across your network by using configuration management tools. Previous arrow_back Implement security by design Next Implement shift-left security arrow_forward Send feedback \ No newline at end of file diff --git a/Implementation_patterns.txt b/Implementation_patterns.txt new file mode 100644 index 0000000000000000000000000000000000000000..30bcf4d63e85ec6876b3e5b5c85319e4b0ac59e7 --- /dev/null +++ b/Implementation_patterns.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/patterns-for-authenticating-corporate-users-in-a-hybrid-environment +Date Scraped: 2025-02-23T11:51:13.740Z + +Content: +Home Docs Cloud Architecture Center Send feedback Patterns for authenticating workforce users in a hybrid environment Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2024-06-26 UTC This document is the second part of a multi-part series that discusses how to extend your identity management solution to Google Cloud to enable your workforce users to authenticate and consume services in a hybrid computing environment. The series consists of the following documents: Authenticating workforce users in a hybrid environment Patterns for authenticating workforce users in a hybrid environment (this document) Introduction When you extend your IT landscape to Google Cloud as part of a hybrid strategy, we recommend that you take a consistent approach to managing identities across environments. As you design and tailor your architecture to meet these constraints and requirements, you can rely on some common patterns. These patterns fall into two categories: Patterns for federating an external identity provider (IdP) with Google Cloud. The aim of these patterns is to enable Google to become an IdP for your workforce users so that Google identities are maintained automatically and your IdP remains the source of truth. Patterns for extending an IdP to Google Cloud. In these patterns, you let applications deployed on Google Cloud reuse your IdP—either by connecting to it directly or by maintaining a replica of your IdP on Google Cloud. Patterns for federating an external IdP with Google Cloud To enable access to the Google Cloud console, the Google Cloud CLI, or any other resource that uses Google as IdP, a workforce user must have a Google identity. Maintaining Google identities for each employee would be cumbersome when all employees already have an account in an IdP. By federating user identities between your IdP and Google Cloud, you can automate the maintenance of Google Accounts and tie their lifecycle to accounts that exist. Federation helps ensure the following: Your IdP remains the single source of truth for identity management. For all user accounts that your IdP manages, or a selected subset of those accounts, a Google Account is created automatically. If an account is disabled or deleted in your IdP, the corresponding Google Account is suspended or deleted. To prevent passwords or other credentials from being copied, the act of authenticating a user is delegated to your IdP. Federate Active Directory with Cloud Identity by using Google Cloud Directory Sync and AD FS If you use Active Directory as IdP, you can federate Active Directory with Cloud Identity by using Google Cloud Directory Sync (GCDS) and Active Directory Federation Services (AD FS): GCDS is a free Google-provided tool that implements the synchronization process. GCDS communicates with Identity Platform over Secure Sockets Layer (SSL) and usually runs in the existing computing environment. AD FS is provided by Microsoft as part of Windows Server. With AD FS, you can use Active Directory for federated authentication. AD FS usually runs in the existing computing environment. For more detailed information about this approach, see Federate Google Cloud with Active Directory. For a variation of this pattern, you can also use Active Directory Lightweight Directory Services (AD LDS) or a different LDAP directory with either AD FS or another SAML-compliant IdP. User experience When you request the protected resource, you are redirected to the Google sign-on screen, which prompts you for your email address. If the email address is known to be associated with an account that has been synchronized from Active Directory, you are redirected to AD FS. 
Depending on the configuration of AD FS, you might see a sign-on screen prompting for your Active Directory username and password. Or AD FS might attempt to sign you in automatically based on your Windows login (IWA). When AD FS has authenticated you, you are redirected back to the protected resource. Advantages The approach enables a single sign-on experience across on-premises applications and resources on Google Cloud. If you configured AD FS to require multi-factor authentication, that configuration automatically applies to Google Cloud. You don't need to synchronize passwords or other credentials to Google. Because the Cloud Identity API is publicly accessible, there's no need to set up hybrid connectivity between your on-premises network and Google Cloud. Best practices Active Directory and Cloud Identity use a different logical structure. Make sure you understand the differences and assess which way of mapping domains, identities, and groups suits your situation best. Refer to our guide on federating Google Cloud with Active Directory for more detailed information. Synchronize groups in addition to users. With this approach, you can set up IAM so that you can use group memberships in Active Directory to control who has access to resources in Google Cloud. Deploy and expose AD FS so that workforce users can access it, but don't expose it more than necessary. Although workforce users must be able to access AD FS, there's no requirement for AD FS to be reachable from Google or from any application deployed on Google Cloud. Consider enabling Integrated Windows Authentication (IWA) in AD FS to allow users to sign in automatically based on their Windows login. If AD FS becomes unavailable, users might not be able to use the Google Cloud console or any other resource that uses Google as IdP. So ensure that AD FS and the domain controllers AD FS relies on are deployed and sized to meet your availability objectives. If you use Google Cloud to help ensure business continuity, relying on an on-premises AD FS might undermine the intent of using Google Cloud as an independent copy of your deployment. In this case, consider deploying replicas of all relevant systems on Google Cloud: Replicate your Active Directory to Google Cloud and deploy GCDS to run on Google Cloud. Run dedicated AD FS servers on Google Cloud. These servers use the Active Directory domain controllers running on Google Cloud. Configure Cloud Identity to use the AD FS servers deployed on Google Cloud for single sign-on. Federate Azure AD with Cloud Identity If you are a Microsoft Office 365 or Azure customer, you might have connected your on-premises Active Directory to Azure AD. If all user accounts that potentially need access to Google Cloud are already being synchronized to Azure AD, you can reuse this integration by federating Cloud Identity with Azure AD, as the following diagram shows. For more detailed information about this approach, see Federate Google Cloud with Azure Active Directory. User experience When you request the protected resource, you are redirected to the Google sign-on screen, which prompts you for your email address. If the email address is associated with an account that has been synchronized from Azure AD, you are redirected to Azure AD. Depending on how your on-premises Active Directory is connected to Azure AD, Azure AD might prompt you for a username and password. Or it might redirect you to an on-premises AD FS. 
After successfully authenticating with Azure AD, you are redirected back to the protected resource. Advantages You don't need to install any additional software on-premises. The approach enables a single sign-on experience across Office 365, Azure, and resources on Google Cloud. If you configured Azure AD to require multi-factor (MFA) authentication, MFA automatically applies to Google Cloud. If your on-premises Active Directory uses multiple domains or forests and you have set up a custom Azure AD Connect configuration to map this structure to an Azure AD tenant, you can take advantage of this integration work. You don't need to synchronize passwords or other credentials to Google. Because the Cloud Identity API is publicly accessible, there's no need to set up hybrid connectivity between your on-premises network and Google Cloud or between Azure and Google Cloud. You can surface the Google Cloud console as a tile in the Office 365 portal. Best practices Because Azure AD and Cloud Identity use a different logical structure, make sure you understand the differences. Assess which way of mapping domains, identities, and groups suits your situation best. For more detailed information, see federating Google Cloud with Azure AD. Synchronize groups in addition to users. With this approach, you can set up IAM so that you can use group memberships in Azure AD to control who has access to resources in Google Cloud. If you use Google Cloud to help ensure business continuity, relying on Azure AD for authentication might undermine the intent of using Google Cloud as an independent copy of your deployment. Patterns for extending an external IdP to Google Cloud Some of the applications you plan to deploy on Google Cloud might require the use of authentication protocols not offered by Cloud Identity. To support these workloads, you must allow these applications to use your IdP from within Google Cloud. The following sections describe common patterns for allowing your IdP to be used by workloads deployed on Google Cloud. Expose an on-premises AD FS to Google Cloud If an application requires the use of WS-Trust or WS-Federation, or relies on AD FS-specific features or claims when using OpenID Connect, you can allow the application to directly use AD FS for authentication. By using AD FS, an application can authenticate a user. However, because authentication is not based on a Google identity, the application won't be able to perform any API calls authenticated with user credentials. Instead, any calls to Google Cloud APIs must be authenticated using a service account. User experience When you request the protected resource, you are redirected to the ADFS sign-on screen, which prompts you for your email address. If AD FS isn't publicly exposed over the internet, accessing AD FS might require you to be connected to your company network or corporate VPN. Depending on the configuration of AD FS, you might see a sign-on screen prompting for your Active Directory username and password. Or AD FS might attempt to sign you in automatically based on your Windows login. When AD FS has authenticated you, you are redirected back to the protected resource. Advantages You can use authentication protocols that aren't supported by Cloud Identity, including WS-Trust and WS-Federation. If the application has been developed and tested against AD FS, you can avoid risks that might arise from switching the application to use Cloud Identity. 
There's no need to set up hybrid connectivity between your on-premises network and Google Cloud. Best practices Deploy and expose AD FS so that workforce users can access it, but don't expose it more than necessary. Although workforce users must be able to access AD FS, there's no requirement for AD FS to be reachable from Google or from any application deployed on Google Cloud. If AD FS becomes unavailable, users might not be able to use the application anymore. Ensure that AD FS and the domain controllers it relies on are deployed and sized to meet your availability objectives. Consider refactoring applications that rely on WS-Trust and WS-Federation to use SAML or OpenID Connect instead. If the application relies on group information being exposed as claims in IdTokens issued by AD FS, consider retrieving group information from a different source such as the Directory API. Querying the Directory API is a privileged operation that requires using a service account that is enabled for Google Workspace domain-wide delegation. Expose an on-premises LDAP directory to Google Cloud Some of your applications might require users to provide their username and password and use these credentials to attempt an LDAP bind operation. If you cannot modify these applications to use other means such as SAML to perform authentication, you can grant them access to an on-premises LDAP directory. Advantages You don't need to change your application. Best practices Use Cloud VPN or Cloud Interconnect to establish hybrid connectivity between Google Cloud and your on-premises network so that you don't need to expose the LDAP directory over the internet. Verify that the latency introduced by querying an on-premises LDAP directory doesn't negatively impact user experience. Ensure that the communication between the application and the LDAP directory is encrypted. You can achieve this encryption by using Cloud VPN or by using Cloud Interconnect with LDAP/S. If the LDAP directory or the private connectivity between Google Cloud and your on-premises network becomes unavailable, users might not be able to use an LDAP-based application anymore. Therefore, ensure that the respective servers are deployed and sized to meet your availability objectives, and consider using redundant VPN tunnels or interconnects. If you use Google Cloud to ensure business continuity, relying on an on-premises LDAP directory might undermine the intent of using Google Cloud as an independent copy of your existing deployment. In this case, consider replicating the LDAP directory to Google Cloud instead. If you use Active Directory, consider running a replica on Google Cloud instead, particularly if you plan to domain-join Windows machines running on Google Cloud to Active Directory. Replicate an on-premises LDAP directory to Google Cloud Replicating an on-premises LDAP directory to Google Cloud is similar to the pattern of Exposing an on-premises LDAP directory to Google Cloud. For applications that use LDAP to verify usernames and passwords, the intent of this approach is to be able to run those applications on Google Cloud. Instead of allowing such applications to query your on-premises LDAP directory, you can maintain a replica of the on-premises directory on Google Cloud. Advantages You don't need to change your application. The availability of LDAP-based applications running on Google Cloud doesn't depend on the availability of the on-premises directory or connectivity to the on-premises network. 
This pattern is well-suited for business continuity hybrid scenarios. Best practices Use Cloud VPN or Cloud Interconnect to establish hybrid connectivity between Google Cloud and your on-premises network so that you don't need to expose the LDAP directory over the internet. Ensure that the replication between the on-premises LDAP directory is conducted over a secure channel. Deploy multiple LDAP directory replicas across multiple zones or regions to meet your availability objectives. You can use an internal load balancer to distribute LDAP connections among multiple replicas deployed in the same region. Use a separate Google Cloud project with a Shared VPC to deploy LDAP replicas and grant access to this project on a least-privilege basis. Extend an on-premises Active Directory to Google Cloud Some of the workloads that you plan to deploy on Google Cloud might depend on Active Directory Domain Services, for example: Windows machines that need to be domain-joined Applications that use Kerberos or NTLM for authentication Applications that use Active Directory as an LDAP directory to verify usernames and passwords To support such workloads, you can extend your on-premises Active Directory forest to Google Cloud—for example, by deploying a resource forest to Google Cloud and connecting it to your on-premises Active Directory forest, as in the following diagram. For more detail about this approach and other ways to deploy Active Directory in a hybrid environment, see Patterns for using Active Directory in a hybrid environment. Advantages Your workloads can take full advantage of Active Directory, including the ability to join Windows machines to the Active Directory domain. The availability of Active Directory-based applications running on Google Cloud doesn't depend on the availability of on-premises resources or connectivity to the on-premises network. The pattern is well-suited for business continuity hybrid scenarios. Best practices Use Cloud VPN or Cloud Interconnect to establish hybrid connectivity between Google Cloud and your on-premises network. To minimize communication between Google Cloud and your on-premises network, create a separate Active Directory site for Google Cloud deployments. You can use either a single site per Shared VPC or, to minimize inter-region communication, one site per Shared VPC and region. Create a separate Active Directory domain dedicated to resources deployed on Google Cloud and add the domain to the existing forest. Using a separate domain helps reduce replication overhead and partition sizes. To increase availability, deploy at least two domain controllers, spread over multiple zones1. If you use multiple regions, consider deploying domain controllers in each region. Use a separate Google Cloud project with a Shared VPC to deploy domain controllers and grant access to this project on a least-privilege basis. By generating a password or accessing the serial console of domain controller instances, rogue project members might otherwise be able to compromise the domain. Consider deploying an AD FS server farm and GCDS on Google Cloud. This approach lets you federate Active Directory with Cloud Identity without depending on the availability of resources or connectivity to the on-premises network. What's next Learn more about federating Google Cloud with Active Directory. Learn more about patterns for using Active Directory in a hybrid environment. Find out how you can federate Cloud Identity with Azure AD. 
Design the resource hierarchy for your Google Cloud landing zone. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. For more information about region-specific considerations, see Geography and regions. ↩ Send feedback \ No newline at end of file diff --git a/Import_from_a_Google_Cloud_source.txt b/Import_from_a_Google_Cloud_source.txt new file mode 100644 index 0000000000000000000000000000000000000000..6102228b3c34c2291d3917af16e2ccd36a519001 --- /dev/null +++ b/Import_from_a_Google_Cloud_source.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/confidential-data-warehouse-blueprint +Date Scraped: 2025-02-23T11:56:28.330Z + +Content: +Home Docs Cloud Architecture Center Send feedback Import data from Google Cloud into a secured BigQuery data warehouse Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2021-12-16 UTC Many organizations deploy data warehouses that store confidential information so that they can analyze the data for a variety of business purposes. This document is intended for data engineers and security administrators who deploy and secure data warehouses using BigQuery. It's part of a security blueprint that's made up of the following: A GitHub repository that contains a set of Terraform configurations and scripts. The Terraform configuration sets up an environment in Google Cloud that supports a data warehouse that stores confidential data. A guide to the architecture, design, and security controls that you use this blueprint to implement (this document). A walkthrough that deploys a sample environment. This document discusses the following: The architecture and Google Cloud services that you can use to help secure a data warehouse in a production environment. Best practices for data governance when creating, deploying, and operating a data warehouse in Google Cloud, including data de-identification, differential handling of confidential data, and column-level access controls. This document assumes that you have already configured a foundational set of security controls as described in the Google Cloud enterprise foundations blueprint. It helps you to layer additional controls onto your existing security controls to help protect confidential data in a data warehouse. Data warehouse use cases The blueprint supports the following use cases: Import data from Google Cloud into a secured BigQuery data warehouse (this document) Import data from an on-premises environment or another cloud into a BigQuery warehouse Overview Data warehouses such as BigQuery let businesses analyze their business data for insights. Analysts access the business data that is stored in data warehouses to create insights. If your data warehouse includes confidential data, you must take measures to preserve the security, confidentiality, integrity, and availability of the business data while it is stored, while it is in transit, or while it is being analyzed. In this blueprint, you do the following: Configure controls that help secure access to confidential data. Configure controls that help secure the data pipeline. Configure an appropriate separation of duties for different personas. Set up templates to find and de-identify confidential data. Set up appropriate security controls and logging to help protect confidential data. Use data classification and policy tags to restrict access to specific columns in the data warehouse. 
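To give a sense of the de-identification that the blueprint automates, the following minimal sketch calls the Sensitive Data Protection API directly to replace detected values with their infoType names. The project ID, infoTypes, and sample text are placeholders, and the blueprint itself applies de-identification templates inside a Dataflow pipeline rather than making one-off calls like this.

```python
# Minimal Sensitive Data Protection (Cloud DLP) sketch: inspect a string for a
# few infoTypes and replace any findings with the infoType name.
# Project ID, infoTypes, and the sample text are placeholders.
import google.cloud.dlp_v2

dlp = google.cloud.dlp_v2.DlpServiceClient()
parent = "projects/my-project"  # placeholder

inspect_config = {
    "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}]
}
deidentify_config = {
    "info_type_transformations": {
        "transformations": [
            {"primitive_transformation": {"replace_with_info_type_config": {}}}
        ]
    }
}
item = {"value": "Contact jane.doe@example.com or call 555-0100."}

response = dlp.deidentify_content(
    request={
        "parent": parent,
        "inspect_config": inspect_config,
        "deidentify_config": deidentify_config,
        "item": item,
    }
)
print(response.item.value)  # e.g., "Contact [EMAIL_ADDRESS] or call [PHONE_NUMBER]."
```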
Architecture To create a confidential data warehouse, you need to categorize data as confidential and non-confidential, and then store the data in separate perimeters. The following image shows how ingested data is categorized, de-identified, and stored. It also shows how you can re-identify confidential data on demand for analysis. The architecture uses a combination of the following Google Cloud services and features: Identity and Access Management (IAM) and Resource Manager restrict access and segment resources. The access controls and resource hierarchy follow the principle of least privilege. VPC Service Controls creates security perimeters that isolate services and resources by setting up authorization, access controls, and secure data exchange. The perimeters are as follows: A data ingestion perimeter that accepts incoming data (in batch or stream) and de-identifies it. A separate landing zone helps to protect the rest of your workloads from incoming data. A confidential data perimeter that can re-identify the confidential data and store it in a restricted area. A governance perimeter that stores the encryption keys and defines what is considered confidential data. These perimeters are designed to protect incoming content, isolate confidential data by setting up additional access controls and monitoring, and separate your governance from the actual data in the warehouse. Your governance includes key management, data catalog management, and logging. Cloud Storage and Pub/Sub receive data as follows: Cloud Storage: receives and stores batch data before de-identification. Cloud Storage uses TLS to encrypt data in transit and encrypts data in storage by default. The encryption key is a customer-managed encryption key (CMEK). You can help to secure access to Cloud Storage buckets using security controls such as Identity and Access Management, access control lists (ACLs), and policy documents. For more information about supported access controls, see Overview of access control. Pub/Sub: receives and stores streaming data before de-identification. Pub/Sub uses authentication, access controls, and message-level encryption with a CMEK to protect your data. Two Dataflow pipelines de-identify and re-identify confidential data as follows: The first pipeline de-identifies confidential data using pseudonymization. The second pipeline re-identifies confidential data when authorized users require access. To protect data, Dataflow uses a unique service account and encryption key for each pipeline, and access controls. To help secure pipeline execution, Dataflow uses Streaming Engine, which moves pipeline execution to the backend service. For more information, see Dataflow security and permissions. Sensitive Data Protection de-identifies confidential data during ingestion. Sensitive Data Protection de-identifies structured and unstructured data based on the infoTypes or records that are detected. Cloud HSM hosts the key encryption key (KEK). Cloud HSM is a cloud-based Hardware Security Module (HSM) service. Data Catalog automatically categorizes confidential data with metadata, also known as policy tags, during ingestion. Data Catalog also uses metadata to manage access to confidential data. For more information, see Data Catalog overview. To control access to data within the data warehouse, you apply policy tags to columns that include confidential data. BigQuery stores the confidential data in the confidential data perimeter. 
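As a small illustration of the policy-tag approach described above, the following sketch attaches a policy tag to one column when a BigQuery table is created, so that only principals with fine-grained read access on that tag can read the column. The dataset, table, and policy tag resource names are placeholders; the blueprint provisions the taxonomy, tags, and tables through Terraform.

```python
# Sketch: attach a policy tag to a confidential column when creating a
# BigQuery table, so that column-level access control applies to it.
# Dataset, table, and policy tag resource names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-confidential-project")  # placeholder

confidential_tag = bigquery.PolicyTagList(
    names=[
        "projects/my-governance-project/locations/us/taxonomies/1234/policyTags/5678"  # placeholder
    ]
)

schema = [
    bigquery.SchemaField("transaction_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
    # Only principals with the Fine-Grained Reader role on the policy tag can read this column.
    bigquery.SchemaField("card_number", "STRING", policy_tags=confidential_tag),
]

table = bigquery.Table("my-confidential-project.warehouse.transactions", schema=schema)
table = client.create_table(table)
print(f"Created table {table.full_table_id}")
```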
BigQuery uses various security controls to help protect content, including access controls, column-level security for confidential data, and data encryption. Security Command Center monitors and reviews security findings from across your Google Cloud environment in a central location. Organization structure You group your organization's resources so that you can manage them and separate your testing environments from your production environment. Resource Manager lets you logically group resources by project, folder, and organization. The following diagram shows you a resource hierarchy with folders that represent different environments such as bootstrap, common, production, non-production (or staging), and development. You deploy most of the projects in the blueprint into the production folder, and the data governance project in the common folder which is used for governance. Folders You use folders to isolate your production environment and governance services from your non-production and testing environments. The following table describes the folders from the enterprise foundations blueprint that are used by this blueprint. Folder Description Prod Contains projects that have cloud resources that have been tested and are ready to use. Common Contains centralized services for the organization, such as the governance project. You can change the names of these folders to align with your organization's folder structure, but we recommend that you maintain a similar structure. For more information, see the Google Cloud enterprise foundations blueprint. Projects You isolate parts of your environment using projects. The following table describes the projects that are needed within the organization. You create these projects when you run the Terraform code. You can change the names of these projects, but we recommend that you maintain a similar project structure. Project Description Data ingestion Contains services that are required in order to receive data and de-identify confidential data. Governance Contains services that provide key management, logging, and data cataloging capabilities. Non-confidential data Contains services that are required in order to store data that has been de-identified. Confidential data Contains services that are required in order to store and re-identify confidential data. In addition to these projects, your environment must also include a project that hosts a Dataflow Flex Template job. The Flex Template job is required for the streaming data pipeline. Mapping roles and groups to projects You must give different user groups in your organization access to the projects that make up the confidential data warehouse. The following sections describe the blueprint recommendations for user groups and role assignments in the projects that you create. You can customize the groups to match your organization's existing structure, but we recommend that you maintain a similar segregation of duties and role assignment. Data analyst group Data analysts analyze the data in the warehouse. This group requires roles in different projects, as described in the following table. 
Project mapping Roles Data ingestion roles/dataflow.developer roles/dataflow.viewer roles/logging.viewer Additional role for data analysts that require access to confidential data: roles/datacatalog.categoryFineGrainedReader Confidential data roles/bigquery.dataViewer roles/bigquery.jobUser roles/bigquery.user roles/dataflow.viewer roles/dataflow.developer roles/logging.viewer Non-confidential data roles/bigquery.dataViewer roles/bigquery.jobUser roles/bigquery.user roles/logging.viewer Data engineer group Data engineers set up and maintain the data pipeline and warehouse. This group requires roles in different projects, as described in the following table. Project mapping Roles Data ingestion roles/cloudbuild.builds.editor roles/cloudkms.viewer roles/composer.user roles/compute.networkUser roles/dataflow.admin roles/logging.viewer Confidential data roles/bigquery.dataEditor roles/bigquery.jobUser roles/cloudbuild.builds.editor roles/cloudkms.viewer roles/compute.networkUser roles/dataflow.admin roles/logging.viewer Non-confidential data roles/bigquery.dataEditor roles/bigquery.jobUser roles/cloudkms.viewer roles/logging.viewer Network administrator group Network administrators configure the network. Typically, they are members of the networking team. Network administrators require the following roles at the organization level: roles/compute.networkAdmin roles/logging.viewer Security administrator group Security administrators administer security controls such as access, keys, firewall rules, VPC Service Controls, and the Security Command Center. Security administrators require the following roles at the organization level: roles/accesscontextmanager.policyAdmin roles/cloudasset.viewer roles/cloudkms.admin roles/compute.securityAdmin roles/datacatalog.admin roles/dlp.admin roles/iam.securityAdmin roles/logging.admin roles/orgpolicy.policyAdmin Security analyst group Security analysts monitor and respond to security incidents and Sensitive Data Protection findings. Security analysts require the following roles at the organization level: roles/accesscontextmanager.policyReader roles/datacatalog.viewer roles/cloudkms.viewer roles/logging.viewer roles/orgpolicy.policyViewer roles/securitycenter.adminViewer roles/securitycenter.findingsEditor One of the following Security Command Center roles: roles/securitycenter.findingsBulkMuteEditor roles/securitycenter.findingsMuteSetter roles/securitycenter.findingsStateSetter Understanding the security controls you need This section discusses the security controls within Google Cloud that you use to help to secure your data warehouse. The key security principles to consider are as follows: Secure access by adopting least privilege principles. Secure network connections through segmentation design and policies. Secure the configuration for each of the services. Classify and protect data based on its risk level. Understand the security requirements for the environment that hosts the data warehouse. Configure sufficient monitoring and logging for detection, investigation, and response. Security controls for data ingestion To create your data warehouse, you must transfer data from another Google Cloud source (for example, a data lake). You can use one of the following options to transfer your data into the data warehouse on BigQuery: A batch job that uses Cloud Storage. A streaming job that uses Pub/Sub. To help protect data during ingestion, you can use firewall rules, access policies, and encryption. 
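As a sketch of what the two ingestion entry points can look like in Terraform, the following hypothetical snippet declares a CMEK-protected Cloud Storage bucket for batch data and a CMEK-protected Pub/Sub topic for streaming data. The names and the key reference are placeholders, not the blueprint's actual resource definitions:

resource "google_storage_bucket" "batch_ingest" {
  name                        = "example-data-ingestion-bucket"   # placeholder bucket name
  location                    = "US"
  uniform_bucket_level_access = true

  encryption {
    # CMEK created in Cloud HSM, as described in the key management section
    default_kms_key_name = "projects/data-governance-project-id/locations/us/keyRings/data-ingestion/cryptoKeys/ingestion-cmek"
  }
}

resource "google_pubsub_topic" "stream_ingest" {
  name         = "example-data-ingestion-topic"   # placeholder topic name
  kms_key_name = "projects/data-governance-project-id/locations/us/keyRings/data-ingestion/cryptoKeys/ingestion-cmek"
}

For CMEK to work, the Cloud Storage and Pub/Sub service agents also need the Cloud KMS CryptoKey Encrypter/Decrypter role on the key. The remaining ingestion controls are described in the following sections.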
Network and firewall rules Virtual Private Cloud (VPC) firewall rules control the flow of data into the perimeters. You create firewall rules that deny all egress, except for specific TCP port 443 connections from the restricted.googleapis.com special domain names. The restricted.googleapis.com domain has the following benefits: It helps reduce your network attack surface by using Private Google Access when workloads communicate to Google APIs and services. It ensures that you only use services that support VPC Service Controls. For more information, see Configuring Private Google Access. You must configure separate subnets for each Dataflow job. Separate subnets ensure that data that is being de-identified is properly separated from data that is being re-identified. The data pipeline requires you to open TCP ports in the firewall, as defined in the dataflow_firewall.tf file in the dwh-networking module repository. For more information, see Configuring internet access and firewall rules. To deny resources the ability to use external IP addresses, the compute.vmExternalIpAccess organization policy is set to deny all. Perimeter controls As shown in the architecture diagram, you place the resources for the confidential data warehouse into separate perimeters. To enable services in different perimeters to share data, you create perimeter bridges. Perimeter bridges let protected services make requests for resources outside of their perimeter. These bridges make the following connections: They connect the data ingestion project to the governance project so that de-identification can take place during ingestion. They connect the non-confidential data project and the confidential data project so that confidential data can be re-identified when a data analyst requests it. They connect the confidential project to the data governance project so that re-identification can take place when a data analyst requests it. In addition to perimeter bridges, you use egress rules to let resources protected by service perimeters access resources that are outside the perimeter. In this solution, you configure egress rules to obtain the external Dataflow Flex Template jobs that are located in Cloud Storage in an external project. For more information, see Access a Google Cloud resource outside the perimeter. Access policy To help ensure that only specific identities (user or service) can access resources and data, you enable IAM groups and roles. To help ensure that only specific sources can access your projects, you enable an access policy for your Google organization. We recommend that you create an access policy that specifies the allowed IP address range for requests and only allows requests from specific users or service accounts. For more information, see Access level attributes. Key management and encryption for ingestion Both ingestion options use Cloud HSM to manage the CMEK. You use the CMEK keys to help protect your data during ingestion. Sensitive Data Protection further protects your data by encrypting confidential data, using the detectors that you configure. To ingest data, you use the following encryption keys: A CMEK key for the ingestion process that's also used by the Dataflow pipeline and the Pub/Sub service. The ingestion process is sometimes referred to as an extract, transform, load (ETL) process. The cryptographic key wrapped by Cloud HSM for the data de-identification process using Sensitive Data Protection. 
Two CMEK keys, one for the BigQuery warehouse in the non-confidential data project, and the other for the warehouse in the confidential data project. For more information, see Key management. You specify the CMEK location, which determines the geographical location that the key is stored and is made available for access. You must ensure that your CMEK is in the same location as your resources. By default, the CMEK is rotated every 30 days. If your organization's compliance obligations require that you manage your own keys externally from Google Cloud, you can enable Cloud External Key Manager. If you use external keys, you are responsible for key management activities, including key rotation. Service accounts and access controls Service accounts are identities that Google Cloud can use to run API requests on your behalf. Service accounts ensure that user identities do not have direct access to services. To permit separation of duties, you create service accounts with different roles for specific purposes. These service accounts are defined in the data-ingestion module and the confidential-data module. The service accounts are as follows: A Dataflow controller service account for the Dataflow pipeline that de-identifies confidential data. A Dataflow controller service account for the Dataflow pipeline that re-identifies confidential data. A Cloud Storage service account to ingest data from a batch file. A Pub/Sub service account to ingest data from a streaming service. A Cloud Scheduler service account to run the batch Dataflow job that creates the Dataflow pipeline. The following table lists the roles that are assigned to each service account: Service Account Name Project Roles Dataflow controller This account is used for de-identification. sa-dataflow-controller Data ingestion roles/pubsub.subscriber roles/bigquery.admin roles/cloudkms.admin roles/cloudkms.cryptoKeyDecrypter roles/dlp.admin roles/storage.admin roles/dataflow.serviceAgent roles/dataflow.worker roles/compute.viewer Dataflow controller This account is used for re-identification. sa-dataflow-controller-reid Confidential data roles/pubsub.subscriber roles/bigquery.admin roles/cloudkms.admin roles/cloudkms.cryptoKeyDecrypter roles/dlp.admin roles/storage.admin roles/dataflow.serviceAgent roles/dataflow.worker roles/compute.viewer Cloud Storage sa-storage-writer Data ingestion roles/storage.objectViewer roles/storage.objectCreator For descriptions of these roles, see IAM roles for Cloud Storage. Pub/Sub sa-pubsub-writer Data ingestion roles/pubsub.publisher roles/pubsub.subscriber For descriptions of these roles, see IAM roles for Pub/Sub. Cloud Scheduler sa-scheduler-controller Data ingestion roles/compute.viewer roles/dataflow.developer Data de-identification You use Sensitive Data Protection to de-identify your structured and unstructured data during the ingestion phase. For structured data, you use record transformations based on fields to de-identify data. For an example of this approach, see the /examples/de_identification_template/ folder. This example checks structured data for any credit card numbers and card PINs. For unstructured data, you use information types to de-identify data. To de-identify data that is tagged as confidential, you use Sensitive Data Protection and a Dataflow pipeline to tokenize it. This pipeline takes data from Cloud Storage, processes it, and then sends it to the BigQuery data warehouse. For more information about the data de-identification process, see data governance. 
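As a concrete illustration of the key management that this section describes, an HSM-protected CMEK with the default 30-day rotation could be declared in Terraform roughly as follows. This is only a sketch; the project, key ring, and key names are hypothetical, and the blueprint's own modules define the actual keys:

resource "google_kms_key_ring" "data_warehouse" {
  project  = "data-governance-project-id"   # placeholder project ID
  name     = "data-warehouse-keyring"       # placeholder key ring name
  location = "us"
}

resource "google_kms_crypto_key" "bigquery_confidential_cmek" {
  name            = "bigquery-confidential-cmek"          # placeholder key name
  key_ring        = google_kms_key_ring.data_warehouse.id
  rotation_period = "2592000s"                            # 30 days, matching the default rotation described above

  version_template {
    protection_level = "HSM"                              # the key material is protected by Cloud HSM
    algorithm        = "GOOGLE_SYMMETRIC_ENCRYPTION"
  }
}

You would reference a key like this from the resources that it protects, for example as the default_kms_key_name of a Cloud Storage bucket or in the default encryption configuration of a BigQuery dataset.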
Security controls for data storage You configure the following security controls to help protect data in the BigQuery warehouse: Column-level access controls Service accounts with limited roles Organizational policies VPC Service Controls perimeters between the confidential project and the non-confidential project, with appropriate perimeter bridges Encryption and key management Column-level access controls To help protect confidential data, you use access controls for specific columns in the BigQuery warehouse. In order to access the data in these columns, a data analyst must have the Fine-Grained Reader role. To define access for columns in BigQuery, you create policy tags. For example, the taxonomy.tf file in the bigquery-confidential-data example module creates the following tags: A 3_Confidential policy tag for columns that include very sensitive information, such as credit card numbers. Users who have access to this tag also have access to columns that are tagged with the 2_Private or 1_Sensitive policy tags. A 2_Private policy tag for columns that include sensitive personally identifiable information (PII), such as a person's first name. Users who have access to this tag also have access to columns that are tagged with the 1_Sensitive policy tag. Users do not have access to columns that are tagged with the 3_Confidential policy tag. A 1_Sensitive policy tag for columns that include data that cannot be made public, such as the credit limit. Users who have access to this tag do not have access to columns that are tagged with the 2_Private or 3_Confidential policy tags. Anything that is not tagged is available to all users who have access to the data warehouse. These access controls ensure that, even after the data is re-identified, the data still cannot be read until access is explicitly granted to the user. Note: You can use the default definitions to run the examples. For more best practices, see Best practices for using policy tags in BigQuery. Service accounts with limited roles You must limit access to the confidential data project so that only authorized users can view the confidential data. To do so, you create a service account with the roles/iam.serviceAccountUser role that authorized users must impersonate. Service account impersonation helps users to use service accounts without downloading the service account keys, which improves the overall security of your project. Impersonation creates a short-term token that authorized users who have the roles/iam.serviceAccountTokenCreator role are allowed to download. Organizational policies This blueprint includes the organization policy constraints that the enterprise foundations blueprint uses and adds additional constraints. For more information about the constraints that the enterprise foundations blueprint uses, see Organization policy constraints. The following table describes the additional organizational policy constraints that are defined in the org_policies module: Policy Constraint name Recommended value Restrict resource deployments to specific physical locations. For additional values, see Value groups. gcp.resourceLocations One of the following: in:us-locations in:eu-locations in:asia-locations Disable service account creation iam.disableServiceAccountCreation true Enable OS Login for VMs created in the project. For more information, see Managing OS Login in an organization and OS Login. compute.requireOsLogin true Restrict new forwarding rules to be internal only, based on IP address. 
compute.restrictProtocolForwardingCreationForTypes INTERNAL Define the set of shared VPC subnetworks that Compute Engine resources can use. compute.restrictSharedVpcSubnetworks projects/PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK-NAME. Replace SUBNETWORK-NAME with the resource ID of the private subnet that you want the blueprint to use. Disable serial port output logging to Cloud Logging. compute.disableSerialPortLogging true Key management and encryption for storage and re-identification You manage separate CMEK keys for your confidential data so that you can re-identify the data. You use Cloud HSM to protect your keys. To re-identify your data, use the following keys: A CMEK key that the Dataflow pipeline uses for the re-identification process. The original cryptographic key that Sensitive Data Protection uses to de-identify your data. A CMEK key for the BigQuery warehouse in the confidential data project. As mentioned earlier in Key management and encryption for ingestion, you can specify the CMEK location and rotation periods. You can use Cloud EKM if it is required by your organization. Operational controls You can enable logging and Security Command Center Premium tier features such as security health analytics and threat detection. These controls help you to do the following: Monitor who is accessing your data. Ensure that proper auditing is put in place. Support the ability of your incident management and operations teams to respond to issues that might occur. Access Transparency Access Transparency provides you with real-time notification in the event Google support personnel require access to your data. Access Transparency logs are generated whenever a human accesses content, and only Google personnel with valid business justifications (for example, a support case) can obtain access. We recommend that you enable Access Transparency. Logging To help you to meet auditing requirements and get insight into your projects, you configure Google Cloud Observability with data logs for services you want to track. The centralized-logging module configures the following best practices: Creating an aggregated log sink across all projects. Storing your logs in the appropriate region. Adding CMEK keys to your logging sink. For all services within the projects, your logs must include information about data reads and writes, and information about what administrators read. For additional logging best practices, see Detective controls. Alerts and monitoring After you deploy the blueprint, you can set up alerts to notify your security operations center (SOC) that a security incident might be occurring. For example, you can use alerts to let your security analyst know when an IAM permission has changed. For more information about configuring Security Command Center alerts, see Setting up finding notifications. For additional alerts that are not published by Security Command Center, you can set up alerts with Cloud Monitoring. Additional security considerations The security controls in this blueprint have been reviewed by both the Google Cybersecurity Action Team and a third-party security team. To request access under NDA to both a STRIDE threat model and the summary assessment report, send an email to secured-dw-blueprint-support@google.com. In addition to the security controls described in this solution, you should review and manage the security and risk in key areas that overlap and interact with your use of this solution. 
These include the following: The code that you use to configure, deploy, and run Dataflow jobs. The data classification taxonomy that you use with this solution. The content, quality, and security of the datasets that you store and analyze in the data warehouse. The overall environment in which you deploy the solution, including the following: The design, segmentation, and security of networks that you connect to this solution. The security and governance of your organization's IAM controls. The authentication and authorization settings for the actors to whom you grant access to the infrastructure that's part of this solution, and who have access to the data that's stored and managed in that infrastructure. Bringing it all together To implement the architecture described in this document, do the following: Determine whether you will deploy the blueprint with the enterprise foundations blueprint or on its own. If you choose not to deploy the enterprise foundations blueprint, ensure that your environment has a similar security baseline in place. Review the Readme for the blueprint and ensure that you meet all the prerequisites. In your testing environment, deploy the walkthrough to see the solution in action. As part of your testing process, consider the following: Use Security Command Center to scan the newly created projects against your compliance requirements. Add your own sample data into the BigQuery warehouse. Work with a data analyst in your enterprise to test their access to the confidential data and whether they can interact with the data from BigQuery in the way that they would expect. Deploy the blueprint into your production environment. What's next Review the Google Cloud enterprise foundations blueprint for a baseline secure environment. To see the details of the blueprint, read the Terraform configuration readme. For more best practices and blueprints, see the security best practices center. Send feedback \ No newline at end of file diff --git a/Import_from_an_external_source.txt b/Import_from_an_external_source.txt new file mode 100644 index 0000000000000000000000000000000000000000..fd0c8b9d2970899103679bd1dad95a39604b8d4a --- /dev/null +++ b/Import_from_an_external_source.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/secured-data-warehouse-blueprint-onprem +Date Scraped: 2025-02-23T11:56:31.258Z + +Content: +Home Docs Cloud Architecture Center Send feedback Import data from an external network into a secured BigQuery data warehouse Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-15 UTC Many organizations deploy data warehouses that store confidential information so that they can analyze the data for various business purposes. This document is intended for data engineers and security administrators who deploy and secure data warehouses using BigQuery. It's part of a security blueprint that includes the following: A GitHub repository that contains a set of Terraform configurations and scripts. The Terraform configuration sets up an environment in Google Cloud that supports a data warehouse that stores confidential data. A guide to the architecture, design, and security controls that you use this blueprint to implement (this document). This document discusses the following: The architecture and Google Cloud services that you can use to help secure a data warehouse in a production environment. Best practices for importing data into BigQuery from an external network such as an on-premises environment. 
Best practices for data governance when creating, deploying, and operating a data warehouse in Google Cloud, including column-level encryption, differential handling of confidential data, and column-level access controls. This document assumes that you have already configured a foundational set of security controls as described in the Google Cloud enterprise foundations blueprint. It helps you to layer additional controls onto your existing security controls to help protect confidential data in a data warehouse. Data warehouse use cases The blueprint supports the following use cases: Import data from an on-premises environment or another cloud into a BigQuery warehouse (this document) Import data from Google Cloud into a secured BigQuery data warehouse Overview Data warehouses such as BigQuery let businesses analyze their business data for insights. Analysts access the business data that is stored in data warehouses to create insights. If your data warehouse includes data that you consider confidential, you must take measures to preserve the security, confidentiality, integrity, and availability of the business data while it's imported and stored, while it's in transit, or while it's being analyzed. In this blueprint, you do the following: Encrypt your source data that's located outside of Google Cloud (for example, in an on-premises environment) and import it into BigQuery. Configure controls that help secure access to confidential data. Configure controls that help secure the data pipeline. Configure an appropriate separation of duties for different personas. Set up appropriate security controls and logging to help protect confidential data. Use data classification, policy tags, dynamic data masking, and column-level encryption to restrict access to specific columns in the data warehouse. Architecture To create a confidential data warehouse, you need to import data securely and then store the data in a VPC Service Controls perimeter. The following image shows how data is ingested and stored. The architecture uses a combination of the following Google Cloud services and features: Dedicated Interconnect lets you move data between your network and Google Cloud. You can use another connectivity option, as described in Choosing a Network Connectivity product. Identity and Access Management (IAM) and Resource Manager restrict access and segment resources. The access controls and resource hierarchy follow the principle of least privilege. VPC Service Controls creates security perimeters that isolate services and resources by setting up authorization, access controls, and secure data exchange. The perimeters are as follows: A data ingestion perimeter that accepts incoming data (in batch or stream). A separate perimeter helps to protect the rest of your workloads from incoming data. A data perimeter that isolates the encryption data from other workloads. A governance perimeter that stores the encryption keys and defines what is considered confidential data. These perimeters are designed to protect incoming content, isolate confidential data by setting up additional access controls and monitoring, and separate your governance from the actual data in the warehouse. Your governance includes key management, data catalog management, and logging. Cloud Storage and Pub/Sub receive data as follows: Cloud Storage: receives and stores batch data. By default, Cloud Storage uses TLS to encrypt data in transit and AES-256 to encrypt data in storage. 
The encryption key is a customer-managed encryption key (CMEK). For more information about encryption, see Data encryption options. You can help to secure access to Cloud Storage buckets using security controls such as Identity and Access Management, access control lists (ACLs), and policy documents. For more information about supported access controls, see Overview of access control. Pub/Sub: receives and stores streaming data. Pub/Sub uses authentication, access controls, and message-level encryption with a CMEK to protect your data. Cloud Run functions is triggered by Cloud Storage and writes the data that Cloud Storage uploads to the ingestion bucket into BigQuery. A Dataflow pipeline writes streaming data into BigQuery. To protect data, Dataflow uses a unique service account and access controls. To help secure pipeline execution by moving it to the backend service, Dataflow uses Streaming Engine. For more information, see Dataflow security and permissions. Sensitive Data Protection scans data that is stored in BigQuery to find any sensitive data that isn't protected. For more information, see Using Sensitive Data Protection to scan BigQuery data. Cloud HSM hosts the key encryption key (KEK). Cloud HSM is a cloud-based Hardware Security Module (HSM) service. You use Cloud HSM to generate the encryption key that you use to encrypt the data in your network before sending it to Google Cloud. Data Catalog automatically categorizes confidential data with metadata, also known as policy tags, when it's discovered in BigQuery. Data Catalog also uses metadata to manage access to confidential data. For more information, see Data Catalog overview. To control access to data within the data warehouse, you apply policy tags to columns that include confidential data. BigQuery stores the encrypted data and the wrapped encryption key in separate tables. BigQuery uses various security controls to help protect content, including access controls, column-level encryption, column-level security, and data encryption. Security Command Center monitors and reviews security findings from across your Google Cloud environment in a central location. Cloud Logging collects all the logs from Google Cloud services for storage and retrieval by your analysis and investigation tools. Cloud Monitoring collects and stores performance information and metrics about Google Cloud services. Data Profiler for BigQuery automatically scans for sensitive data in all BigQuery tables and columns across the entire organization, including all folders and projects. Organization structure You group your organization's resources so that you can manage them and separate your testing environments from your production environment. Resource Manager lets you logically group resources by project, folder, and organization. The following diagram shows you a resource hierarchy with folders that represent different environments such as bootstrap, common, production, non-production (or staging), and development. This hierarchy aligns with the organization structure used by the enterprise foundations blueprint. You deploy most of the projects in the blueprint into the production folder, and the Data governance project in the common folder which is used for governance. For alternative resource hierarchies, see Decide a resource hierarchy for your Google Cloud landing zone. Folders You use folders to isolate your production environment and governance services from your non-production and testing environments. 
The following table describes the folders from the enterprise foundations blueprint that are used by this blueprint. Folder Description Bootstrap Contains resources required to deploy the enterprise foundations blueprint. Common Contains centralized services for the organization, such as the Data governance project. Production Contains projects that have cloud resources that have been tested and are ready to use. In this blueprint, the Production folder contains the Data ingestion project and Data project. Non-production Contains projects that have cloud resources that are currently being tested and staged for release. In this blueprint, the Non-production folder contains the Data ingestion project and Data project. Development Contains projects that have cloud resources that are currently being developed. In this blueprint, the Development folder contains the Data ingestion project and Data project. You can change the names of these folders to align with your organization's folder structure, but we recommend that you maintain a similar structure. For more information, see the Google Cloud enterprise foundations blueprint. Projects You isolate parts of your environment using projects. The following table describes the projects that are needed within the organization. You create these projects when you run the Terraform code. You can change the names of these projects, but we recommend that you maintain a similar project structure. Project Description Data ingestion Contains services that are required to receive data and write it to BigQuery. Data governance Contains services that provide key management, logging, and data cataloging capabilities. Data Contains services that are required to store data. In addition to these projects, your environment must also include a project that hosts a Dataflow Flex Template job. The Flex Template job is required for the streaming data pipeline. Mapping roles and groups to projects You must give different user groups in your organization access to the projects that make up the confidential data warehouse. The following sections describe the blueprint recommendations for user groups and role assignments in the projects that you create. You can customize the groups to match your organization's existing structure, but we recommend that you maintain a similar segregation of duties and role assignment. Data analyst group Data analysts view and analyze the data in the warehouse. This group can view data after it has been loaded into the data warehouse and perform the same operations as the Encrypted data viewer group. This group requires roles in different projects, as described in the following table. Scope of assignment Roles Data ingestion project Dataflow Developer (roles/dataflow.developer) Dataflow Viewer (roles/dataflow.viewer) Logs Viewer (roles/logging.viewer) Data project BigQuery Data Viewer (roles/bigquery.dataViewer) BigQuery Job User (roles/bigquery.jobUser) BigQuery User (roles/bigquery.user) Dataflow Developer (roles/dataflow.developer) Dataflow Viewer (roles/dataflow.viewer) DLP Administrator (roles/dlp.admin) Logs Viewer (roles/logging.viewer) Data policy level Masked Reader (roles/bigquerydatapolicy.maskedReader) Encrypted data viewer group The Encrypted data viewer group can view encrypted data from BigQuery reporting tables through Cloud Looker Studio and other reporting tools, such as SAP Business Objects. The encrypted data viewer group can't view cleartext data from encrypted columns. 
This group requires the BigQuery Job User (roles/bigquery.jobUser) role in the Data project. This group also requires the Masked Reader (roles/bigquerydatapolicy.maskedReader) role at the data policy level. Plaintext reader group The Plaintext reader group has the required permission to call the decryption user-defined function (UDF) to view plaintext data and the additional permission to read unmasked data. This group requires the following roles in the Data project: BigQuery User (roles/bigquery.user) BigQuery Job User (roles/bigquery.jobUser) Cloud KMS Viewer (roles/cloudkms.viewer) In addition, this group requires the Fine-Grained Reader (roles/datacatalog.categoryFineGrainedReader) role at the Data Catalog level. Data engineer group Data engineers set up and maintain the data pipeline and warehouse. This group requires roles in different projects, as described in the following table. Scope of assignment Roles Data ingestion project Cloud Build Editor (roles/cloudbuild.builds.editor) Cloud KMS Viewer (roles/cloudkms.viewer) Composer User (roles/composer.user) Compute Network User (roles/compute.networkUser) Dataflow Admin (roles/dataflow.admin) Logs Viewer (roles/logging.viewer) Data project BigQuery Data Editor (roles/bigquery.dataEditor) BigQuery Job User (roles/bigquery.jobUser) Cloud Build Editor (roles/cloudbuild.builds.editor) Cloud KMS Viewer (roles/cloudkms.viewer) Compute Network User (roles/compute.networkUser) Dataflow Admin (roles/dataflow.admin) DLP Administrator (roles/dlp.admin) Logs Viewer (roles/logging.viewer) Network administrator group Network administrators configure the network. Typically, they are members of the networking team. Network administrators require the following roles at the organization level: Compute Network Admin (roles/compute.networkAdmin) Logs Viewer (roles/logging.viewer) Security administrator group Security administrators administer security controls such as access, keys, firewall rules, VPC Service Controls, and the Security Command Center. Security administrators require the following roles at the organization level: Access Context Manager Admin (roles/accesscontextmanager.policyAdmin) Cloud Asset Viewer (roles/cloudasset.viewer) Cloud KMS Admin (roles/cloudkms.admin) Compute Security Admin (roles/compute.securityAdmin) Data Catalog Admin (roles/datacatalog.admin) DLP Administrator (roles/dlp.admin) Logging Admin (roles/logging.admin) Organization Policy Administrator (roles/orgpolicy.policyAdmin) Security Admin (roles/iam.securityAdmin) Security analyst group Security analysts monitor and respond to security incidents and Sensitive Data Protection findings. 
Security analysts require the following roles at the organization level: Access Context Manager Reader (roles/accesscontextmanager.policyReader) Compute Network Viewer (roles/compute.networkViewer) Data Catalog Viewer (roles/datacatalog.viewer) Cloud KMS Viewer (roles/cloudkms.viewer) Logs Viewer (roles/logging.viewer) Organization Policy Viewer (roles/orgpolicy.policyViewer) Security Center Admin Viewer (roles/securitycenter.adminViewer) Security Center Findings Editor (roles/securitycenter.findingsEditor) One of the following Security Command Center roles: Security Center Findings Bulk Mute Editor (roles/securitycenter.findingsBulkMuteEditor) Security Center Findings Mute Setter (roles/securitycenter.findingsMuteSetter) Security Center Findings State Setter (roles/securitycenter.findingsStateSetter) Example group access flows The following sections describe access flows for two groups within the secured data warehouse solution. Access flow for Encrypted data viewer group The following diagram shows what occurs when a user from the Encrypted data viewer group tries to access encrypted data in BigQuery. The steps to access data in BigQuery are as follows: The Encrypted data viewer executes the following query on BigQuery to access confidential data: SELECT ssn, pan FROM cc_card_table BigQuery verifies access as follows: The user is authenticated using valid, unexpired Google Cloud credentials. The user identity and the IP address that the request originated from are part of the allowlist in the Access Level/Ingress rule on the VPC Service Controls perimeter. IAM verifies that the user has the appropriate roles and is authorized to access selected encrypted columns on the BigQuery table. BigQuery returns the confidential data in encrypted format. Access flow for Plaintext reader group The following diagram shows what occurs when a user from the Plaintext reader group tries to access encrypted data in BigQuery. The steps to access data in BigQuery are as follows: The Plaintext reader executes the following query on BigQuery to access confidential data in decrypted format: SELECT decrypt_ssn(ssn) FROM cc_card_table BigQuery calls the decrypt user-defined function (UDF) within the query to access protected columns. Access is verified as follows: IAM verifies that the user has appropriate roles and is authorized to access the decrypt UDF on BigQuery. The UDF retrieves the wrapped data encryption key (DEK) that was used to protect sensitive data columns. The decrypt UDF calls the key encryption key (KEK) in Cloud HSM to unwrap the DEK. The decrypt UDF uses the BigQuery AEAD decrypt function to decrypt the sensitive data columns. The user is granted access to the plaintext data in the sensitive data columns. Understanding the security controls you need This section discusses the security controls within Google Cloud that you use to help to secure your data warehouse. The key security principles to consider are as follows: Secure access by adopting least privilege principles. Secure network connections through segmentation design and policies. Secure the configuration for each of the services. Classify and protect data based on its risk level. Understand the security requirements for the environment that hosts the data warehouse. Configure sufficient monitoring and logging for detection, investigation, and response. 
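To make the least-privilege grants in the preceding access flows concrete, the following hypothetical Terraform sketch gives a Plaintext reader group the Fine-Grained Reader role on a policy tag and the BigQuery Job User role in the Data project. The group address, project ID, and policy tag resource name are placeholders, not values defined by the blueprint:

resource "google_data_catalog_policy_tag_iam_member" "plaintext_reader_fine_grained" {
  # Resource name of the policy tag that protects the encrypted columns (placeholder)
  policy_tag = "projects/data-governance-project-id/locations/us/taxonomies/TAXONOMY_ID/policyTags/POLICY_TAG_ID"
  role       = "roles/datacatalog.categoryFineGrainedReader"
  member     = "group:plaintext-readers@example.com"   # placeholder group
}

resource "google_project_iam_member" "plaintext_reader_job_user" {
  project = "data-project-id"                          # placeholder project ID
  role    = "roles/bigquery.jobUser"
  member  = "group:plaintext-readers@example.com"      # placeholder group
}

Binding roles to groups rather than to individual users keeps the role assignments described earlier in this document easier to audit and to revoke.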
Security controls for data ingestion To create your data warehouse, you must transfer data from another source in your on-premises environment, another cloud, or another Google Cloud source. This document focuses on transferring data from your on-premises environment or another cloud; if you're transferring data from another Google Cloud source, see Import data from Google Cloud into a secured BigQuery data warehouse. You can use one of the following options to transfer your data into the data warehouse on BigQuery: A batch job that loads data to a Cloud Storage bucket. A streaming job that uses Pub/Sub. To help protect data during ingestion, you can use client-side encryption, firewall rules, and access level policies. The ingestion process is sometimes referred to as an extract, transform, load (ETL) process. Encrypted connection to Google Cloud You can use Cloud VPN or Cloud Interconnect to protect all data that flows between Google Cloud and your environment. This blueprint recommends Dedicated Interconnect, because it provides a direct connection and high throughput, which are important if you're streaming a lot of data. To permit access to Google Cloud from your environment, you must define allowlisted IP addresses in the access levels policy rules. Network and firewall rules Virtual Private Cloud (VPC) firewall rules control the flow of data into the perimeters. You create firewall rules that deny all egress, except for specific TCP port 443 connections from the restricted.googleapis.com special domain names. The restricted.googleapis.com domain has the following benefits: It helps reduce your network attack surface by using Private Google Access when workloads communicate to Google APIs and services. It ensures that you only use services that support VPC Service Controls. For more information, see Configuring Private Google Access. The data pipeline requires you to open TCP ports in the firewall, as defined in the dataflow_firewall.tf file in the harness-projects module repository. For more information, see Configuring internet access and firewall rules. To deny resources the ability to use external IP addresses, the Define allowed external IPs for VM instances (compute.vmExternalIpAccess) organization policy is set to deny all. Perimeter controls As shown in the architecture diagram, you place the resources for the data warehouse into separate perimeters. To enable services in different perimeters to share data, you create perimeter bridges. Perimeter bridges let protected services make requests for resources outside of their perimeter. These bridges make the following connections: They connect the Data ingestion project to the Data project so that data can be ingested into BigQuery. They connect the Data project to the Data governance project so that Sensitive Data Protection can scan BigQuery for unprotected confidential data. They connect the Data ingestion project to the Data governance project for access to logging, monitoring, and encryption keys. In addition to perimeter bridges, you use egress rules to let resources that are protected by perimeters access resources that are outside the perimeter. In this solution, you configure egress rules to obtain the external Dataflow Flex Template jobs that are located in Cloud Storage in an external project. For more information, see Access a Google Cloud resource outside the perimeter. Access policy To help ensure that only specific identities (user or service) can access resources and data, you enable IAM groups and roles. 
To help ensure that only specific sources can access your projects, you enable an access policy for your Google organization. We recommend that you create an access policy that specifies the allowed IP address range for requests originating from your on-premises environment and only allows requests from specific users or service accounts. For more information, see Access level attributes. Client-side encryption Before you move your sensitive data into Google Cloud, encrypt your data locally to help protect it at rest and in transit. You can use the Tink encryption library, or you can use other encryption libraries. The Tink encryption library is compatible with BigQuery AEAD encryption, which the blueprint uses to decrypt column-level encrypted data after the data is imported. The Tink encryption library uses DEKs that you can generate locally or from Cloud HSM. To wrap or protect the DEK, you can use a KEK that is generated in Cloud HSM. The KEK is a symmetric CMEK encryption keyset that is stored securely in Cloud HSM and managed using IAM roles and permissions. During ingestion, both the wrapped DEK and the data are stored in BigQuery. BigQuery includes two tables: one for the data and the other for the wrapped DEK. When analysts need to view confidential data, BigQuery can use AEAD decryption to unwrap the DEK with the KEK and decrypt the protected column. Also, client-side encryption using Tink further protects your data by encrypting sensitive data columns in BigQuery. The blueprint uses the following Cloud HSM encryption keys: A CMEK key for the ingestion process that's also used by Pub/Sub, Dataflow pipeline for streaming, Cloud Storage batch upload, and Cloud Run functions artifacts for subsequent batch uploads. The cryptographic key wrapped by Cloud HSM for the data encrypted on your network using Tink. CMEK key for the BigQuery warehouse in the Data project. You specify the CMEK location, which determines the geographical location that the key is stored and is made available for access. You must ensure that your CMEK is in the same location as your resources. By default, the CMEK is rotated every 30 days. If your organization's compliance obligations require that you manage your own keys externally from Google Cloud, you can enable Cloud External Key Manager. If you use external keys, you're responsible for key management activities, including key rotation. Service accounts and access controls Service accounts are identities that Google Cloud can use to run API requests on your behalf. Service accounts ensure that user identities don't have direct access to services. To permit separation of duties, you create service accounts with different roles for specific purposes. These service accounts are defined in the data-ingestion-sa module and the data-governance-sa module. The service accounts are as follows: Cloud Storage service account runs the automated batch data upload process to the ingestion storage bucket. Pub/Sub service account enables streaming of data to Pub/Sub service. Dataflow controller service account is used by the Dataflow pipeline to transform and write data from Pub/Sub to BigQuery. Cloud Run functions service account writes subsequent batch data uploaded from Cloud Storage to BigQuery. Storage Upload service account allows the ETL pipeline to create objects. Pub/Sub Write service Account lets the ETL pipeline write data to Pub/Sub. 
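For illustration, dedicated service accounts like the ones described above might be declared in Terraform as follows; the project ID and account IDs are hypothetical, and the blueprint's data-ingestion-sa module remains the authoritative definition:

resource "google_service_account" "dataflow_controller" {
  project      = "data-ingestion-project-id"     # placeholder project ID
  account_id   = "sa-dataflow-controller"        # hypothetical account ID
  display_name = "Dataflow controller service account"
}

resource "google_service_account" "pubsub_writer" {
  project      = "data-ingestion-project-id"     # placeholder project ID
  account_id   = "sa-pubsub-writer"              # hypothetical account ID
  display_name = "Pub/Sub Write service account"
}

Each account then receives only the roles listed in the table that follows, typically through google_project_iam_member resources scoped to the appropriate project.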
The following table lists the roles that are assigned to each service account: Name Roles Scope of Assignment Dataflow controller service account BigQuery Data Editor (roles/bigquery.dataEditor) BigQuery Job User (roles/bigquery.jobUser) Dataflow Developer (roles/dataflow.developer) Dataflow Worker (roles/dataflow.worker) Pub/Sub Editor (roles/pubsub.editor) Pub/Sub Subscriber (roles/pubsub.subscriber) Service Usage Consumer (roles/serviceusage.serviceUsageConsumer) Storage Object Viewer (roles/storage.objectViewer) Data ingestion project BigQuery Data Editor (roles/bigquery.dataEditor) BigQuery Metadata Viewer (roles/bigquery.metadataViewer) Data project DLP De-identify Templates Reader (roles/dlp.deidentifyTemplatesReader) DLP Inspect Templates Reader (roles/dlp.inspectTemplatesReader) DLP User (roles/dlp.user) Data governance Cloud Run functions service account BigQuery Data Editor (roles/bigquery.dataEditor) BigQuery Job User (roles/bigquery.jobUser) Cloud Run Invoker (roles/run.invoker) Eventarc Event Receiver (roles/eventarc.eventReceiver) Data ingestion project BigQuery Data Editor (roles/bigquery.dataEditor) BigQuery Metadata Viewer (roles/bigquery.metadataViewer) Data project Storage Upload service account Storage Object Creator (roles/storage.objectCreator) Storage Object Viewer (roles/storage.objectViewer) Data ingestion project Pub/Sub Write service account Pub/Sub Publisher (roles/pubsub.publisher) Pub/Sub Subscriber (roles/pubsub.subscriber) Data ingestion project Security controls for data storage You configure the following security controls to help protect data in the BigQuery warehouse: Column-level access controls Service accounts with limited roles Dynamic data masking of sensitive fields Organization policies Sensitive Data Protection automatic scanning and data profiler VPC Service Controls perimeters between the Data ingestion project and the Data project, with appropriate perimeter bridges Encryption and key management, as follows: Encryption at rest with CMEK keys that are stored in Cloud HSM Column-level encryption using Tink and BigQuery AEAD Encryption Dynamic data masking To help with sharing and applying data access policies at scale, you can configure dynamic data masking. Dynamic data masking lets existing queries automatically mask column data using the following criteria: The masking rules that are applied to the column at query runtime. The roles that are assigned to the user who is running the query. To access unmasked column data, the data analyst must have the Fine-Grained Reader role. To define access for columns in BigQuery, you create policy tags. For example, the taxonomy created in the standalone example creates the 1_Sensitive policy tag for columns that include data that cannot be made public, such as the credit limit. The default data masking rule is applied to these columns to hide the value of the column. Anything that isn't tagged is available to all users who have access to the data warehouse. These access controls ensure that, even after the data is written to BigQuery, the data in sensitive fields still cannot be read until access is explicitly granted to the user. Column-level encryption and decryption Column-level encryption lets you encrypt data in BigQuery at a more granular level. Instead of encrypting an entire table, you select the columns that contain sensitive data within BigQuery, and only those columns are encrypted. 
BigQuery uses AEAD encryption and decryption functions that create the keysets that contain the keys for encryption and decryption. These keys are then used to encrypt and decrypt individual values in a table, and rotate keys within a keyset. Column-level encryption provides dual-access control on encrypted data in BigQuery, because the user must have permissions to both the table and the encryption key to read data in cleartext. Data profiler for BigQuery with Cloud DLP Data profiler lets you identify the locations of sensitive and high-risk data in BigQuery tables. Data profiler automatically scans and analyzes all BigQuery tables and columns across the entire organization, including all folders and projects. Data profiler then outputs metrics such as the predicted infoTypes, the assessed data risk and sensitivity levels, and metadata about your tables. Using these insights, you can make informed decisions about how you protect, share, and use your data. Service accounts with limited roles You must limit access to the Data project so that only authorized users can view the sensitive data fields. To do so, you create a service account with the roles/iam.serviceAccountUser role that authorized users must impersonate. Service account impersonation helps users to use service accounts without downloading the service account keys, which improves the overall security of your project. Impersonation creates a short-term token that authorized users who have the roles/iam.serviceAccountTokenCreator role are allowed to download. Organization policies This blueprint includes the organization policy constraints that the enterprise foundations blueprint uses and adds additional constraints. For more information about the constraints that the enterprise foundations blueprint uses, see Organization policy constraints. The following table describes the additional organizational policy constraints that are defined in the organization-policies module. Policy Constraint Name Recommended Value Restrict resource deployments to specific physical locations gcp.resourceLocations One of the following: in:us-locations in:eu-locations in:asia-locations Require CMEK protection gcp.restrictNonCmekServices bigquery.googleapis.com Disable service account creation iam.disableServiceAccountCreation true Disable service account key creation iam.disableServiceAccountKeyCreation true Enable OS Login for VMs created in the project compute.requireOsLogin true Disable automatic role grants to default service account iam.automaticIamGrantsForDefaultServiceAccounts true Allowed ingress settings (Cloud Run functions) cloudfunctions.allowedIngressSettings ALLOW_INTERNAL_AND_GCLB Restrict new forwarding rules to be internal only, based on IP address compute.restrictProtocolForwardingCreationForTypes INTERNAL Disable serial port output logging to Cloud Logging compute.disableSerialPortLogging true Define the set of shared VPC subnetworks that Compute Engine resources can use compute.restrictSharedVpcSubnetworks projects/PROJECT_ID/regions/REGION/subnetworks/SUBNETWORK-NAME Replace SUBNETWORK-NAME with the resource ID of the private subnet that you want the blueprint to use. Operational controls You can enable logging and Security Command Center Premium tier features such as Security Health Analytics and Event Threat Detection. These controls help you to do the following: Monitor who is accessing your data. Ensure that proper auditing is put in place. Generate findings for misconfigured cloud resources. 
Support the ability of your incident management and operations teams to respond to issues that might occur. Access Transparency Access Transparency provides you with real-time notification in the event Google support personnel require access to your data. Access Transparency logs are generated whenever a human accesses content, and only Google personnel with valid business justifications (for example, a support case) can obtain access. We recommend that you enable Access Transparency. Logging To help you to meet auditing requirements and get insight into your projects, you configure the Google Cloud Observability with data logs for services you want to track. The harness-logging module configures the following best practices: Creating an aggregated log sink across all projects. Storing your logs in the appropriate region. Adding CMEK keys to your logging sink. For all services within the projects, your logs must include information about data reads and writes, and information about what administrators read. For additional logging best practices, see Detective controls in the enterprise foundations blueprint. Alerts and monitoring After you deploy the blueprint, you can set up alerts to notify your security operations center (SOC) that a security incident might be occurring. For example, you can use alerts to let your security analyst know when an IAM permission has changed. For more information about configuring Security Command Center alerts, see Setting up finding notifications. For additional alerts that aren't published by Security Command Center, you can set up alerts with Cloud Monitoring. Additional security considerations In addition to the security controls described in this solution, you should review and manage the security and risk in key areas that overlap and interact with your use of this solution. These security considerations include the following: The security of the code that you use to configure, deploy, and run Dataflow jobs and Cloud Run functions. The data classification taxonomy that you use with this solution. Generation and management of encryption keys. The content, quality, and security of the datasets that you store and analyze in the data warehouse. The overall environment in which you deploy the solution, including the following: The design, segmentation, and security of networks that you connect to this solution. The security and governance of your organization's IAM controls. The authentication and authorization settings for the actors to whom you grant access to the infrastructure that's part of this solution, and who have access to the data that's stored and managed in that infrastructure. Bringing it all together To implement the architecture described in this document, do the following: Determine whether you will deploy the blueprint with the enterprise foundations blueprint or on its own. If you choose not to deploy the enterprise foundations blueprint, ensure that your environment has a similar security baseline in place. Set up a Dedicated Interconnect connection with your network. Review the README for the blueprint and ensure that you meet all the prerequisites. Verify that your user identity has the iam.serviceAccountUser and iam.serviceAccountTokenCreator roles for your organization's development folder, as described in Organization structure. If you don't have a folder that you use for testing, create a folder and configure access. 
Record your billing account ID, organization's display name, folder ID for your test or demo folder, and the email addresses for the following user groups: Data analysts Encrypted data viewer Plaintext reader Data engineers Network administrators Security administrators Security analysts Create the Data, Data governance, Data ingestion, and Flex template projects. For a list of APIs that you must enable, see the README. Create the service account for Terraform and assign the appropriate roles for all projects. Set up the Access Control Policy. In your testing environment, deploy the solution: Clone and run the Terraform scripts to set up an environment in Google Cloud. Install the Tink encryption library on your network. Set up Application Default Credentials so that you can run the Tink library on your network. Create encryption keys with Cloud KMS. Generate encrypted keysets with Tink. Encrypt data with Tink using one of the following methods: Using deterministic encryption. Using a helper script with sample data. Upload encrypted data to BigQuery using streaming or batch uploads. Verify that authorized users can read unencrypted data from BigQuery using the BigQuery AEAD decrypt function. For example, run the following query to create the decryption function:
CREATE OR REPLACE FUNCTION `{project_id}.{bigquery_dataset}.decrypt`(encodedText STRING) RETURNS STRING
AS (
  AEAD.DECRYPT_STRING(
    KEYS.KEYSET_CHAIN('gcp-kms://projects/myProject/locations/us/keyRings/myKeyRing/cryptoKeys/myKeyName', b'\012\044\000\321\054\306\036\026…..'),
    FROM_BASE64(encodedText),
    "")
);
Run the following query to create the view:
CREATE OR REPLACE VIEW `{project_id}.{bigquery_dataset}.decryption_view` AS
SELECT
  Card_Type_Code,
  Issuing_Bank,
  Card_Number,
  `{project_id}.{bigquery_dataset}.decrypt`(Card_Number) AS Card_Number_Decrypted
FROM `{project_id}.{bigquery_dataset}.{table_name}`
Run the following query to select data from the view:
SELECT
  Card_Type_Code,
  Issuing_Bank,
  Card_Number,
  Card_Number_Decrypted
FROM `{project_id}.{bigquery_dataset}.decryption_view`
For additional queries and use cases, see Column-level encryption with Cloud KMS. Use Security Command Center to scan the newly created projects against your compliance requirements. Deploy the blueprint into your production environment. What's next Review the Google Cloud enterprise foundations blueprint for a baseline secure environment. To see the details of the blueprint, read the Terraform configuration README. To ingest data that is stored in Google Cloud into a BigQuery data warehouse, see Import data from Google Cloud into a secured BigQuery data warehouse. For more best practices and blueprints, see the security best practices center.
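As a companion to the deployment steps above, the following minimal Python sketch illustrates the Tink deterministic encryption step. It is offered only as an illustration: the sample plaintext, associated data, and variable names are assumptions, the exact Tink API surface can vary between releases, and the sketch skips the step in which the blueprint's helper scripts wrap the keyset with the Cloud KMS key (the gcp-kms:// key that KEYS.KEYSET_CHAIN references in the decryption function above). Note that AES-SIV keysets pair with BigQuery's deterministic decryption functions, while AES-GCM keysets pair with AEAD.DECRYPT_STRING.
# Minimal sketch only. Install with `pip install tink`.
import base64

import tink
from tink import daead

# Register the deterministic AEAD primitives before use.
daead.register()

# Generate a new AES256-SIV keyset. In the blueprint, this keyset is wrapped
# (encrypted) with the Cloud KMS key so that BigQuery can unwrap it at query
# time through KEYS.KEYSET_CHAIN; that wrapping step is not shown here.
keyset_handle = tink.new_keyset_handle(
    daead.deterministic_aead_key_templates.AES256_SIV)

# Deterministic encryption produces the same ciphertext for the same input,
# which keeps encrypted columns joinable and groupable.
primitive = keyset_handle.primitive(daead.DeterministicAead)
ciphertext = primitive.encrypt_deterministically(b'4111111111111111', b'')

# Base64-encode the ciphertext so that BigQuery can recover the raw bytes
# with FROM_BASE64(encodedText) before decrypting.
encoded_card_number = base64.b64encode(ciphertext).decode('utf-8')
print(encoded_card_number)  # Load this value into the encrypted column.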
Send feedback \ No newline at end of file diff --git a/Industry_Solutions.txt b/Industry_Solutions.txt new file mode 100644 index 0000000000000000000000000000000000000000..d60adeb0cb57eac935105f665bb70927560b0b6c --- /dev/null +++ b/Industry_Solutions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions#industry-solutions +Date Scraped: 2025-02-23T11:57:42.514Z + +Content: +Google Cloud solutionsBrowse Google Cloud solutions or visit our Solutions Center to discover and deploy solutions based on your readiness.Visit Solutions Center Contact Sales Navigate toReference architecturesProducts directoryPricing informationFilter byFiltersIndustry solutionsJump Start SolutionsApplication modernizationArtificial intelligenceAPIs and applicationsData analyticsDatabasesInfrastructure modernizationProductivity and collaborationSecurityStartups and small and medium-sized businessesFeatured partner solutionssearchsendIndustry solutionsWhatever your industry's challenge or use case, explore how Google Cloud solutions can help improve efficiency and agility, reduce cost, participate in new business models, and capture new market opportunities.RetailAnalytics and collaboration tools for the retail value chain.Consumer packaged goodsSolutions for CPG digital transformation and brand growth.ManufacturingMigration and AI tools to optimize the manufacturing value chain.AutomotiveDigital transformation along the automotive value chain.Supply chain and logisticsEnable sustainable, efficient, and resilient data-driven operations across supply chain and logistics operations.EnergyMulticloud and hybrid solutions for energy companies.Healthcare and life sciencesAdvance R&D and improve the clinician and patient experience with AI-driven tools.Media and entertainmentSolutions for content production and distribution operations.GamesAI-driven solutions to build and scale games faster.TelecommunicationsHybrid and multicloud services to deploy and monetize 5G.Financial servicesComputing, databases, and analytics tools for financial services.Capital marketsModern cloud-based architectures, high performance computing, and AI/ML.BankingReduce risk, improve customer experiences and data insights.InsuranceStrengthen decision making and deliver customer-centric experiences.PaymentsAdd new revenue streams, ensure secure transactions and scale globally.Government and public sectorGovernmentData storage, AI, and analytics solutions for government agencies.State and local governmentCloud platform helps public sector workforces better serve constituents.Federal governmentTools that increase federal agencies’ innovation and operational effectiveness.Federal cybersecuritySolutions spanning Zero Trust, analytics, and asset protection.EducationTeaching tools to provide more engaging learning experiences.Education technologyAI, analytics, and app development solutions designed for EdTech.Canada public sectorSolutions that help keep your organization secure and compliant.Department of DefenseGoogle Cloud supports the critical missions of the DoD by providing them with the most secure, reliable, innovative cloud solutions.Google Workspace for GovernmentSecure collaboration solutions and program management resources to help fulfill the unique needs of today's government.Jump Start SolutionsTo get started, choose pre-configured, interactive solutions that you can deploy directly from the Google Cloud console.Deploy a dynamic websiteBuild, deploy, and operate a sample dynamic website using responsive web frameworks.Deploy 
load-balanced virtual machinesLearn best practices for creating and deploying a sample load-balanced VM cluster.Summarize large documentsLearn how to detect text in raw files and automate document summaries with generative AI.Deploy an AI/ML image processing pipelineRecognize and classify images using pre-trained AI models and serverless functions.Build a three-tier web appCreate a model web app using a three-tiered architecture (frontend, middle tier, backend).Deploy an ecommerce web app Build and run a simple ecommerce application for retail organizations using Kubernetes.Create an analytics lakehouse Store, process, analyze, and activate data using a unified data stack.Deploy a data warehouse using BigQueryLearn the basics of building a data warehouse and visualizing data.Build an internal knowledge base Extract question-and-answer pairs from your documents for a knowledge base.Deploy a RAG application Learn how to use retrieval-augmented generation (RAG) to create a chat application.Deploy a Java application Learn to deploy a dynamic web app that mimics a real-world point of sale screen for a retail store.Deploy an ecommerce platform with serverless computing Build and run a simple ecommerce application for retail organizations using serverless capabilities.Build a secure CI/CD pipeline Set up a secure CI/CD pipeline for building, scanning, storing, and deploying containers to GKE.Use a cloud SDK client library Learn key skills for successfully making API calls to identify trends and observations on aggregate data.See all solutions in consoleApplication modernizationAssess, plan, implement, and measure software practices and capabilities to modernize and simplify your organization’s business application portfolios.CAMPProgram that uses DORA to improve your software delivery capabilities.Modernize Traditional ApplicationsAnalyze, categorize, and get started with cloud migration on traditional workloads.Migrate from PaaS: Cloud Foundry, OpenshiftTools for moving your existing containers into Google's managed container services.Migrate from MainframeAutomated tools and prescriptive guidance for moving your mainframe apps to the cloud.Modernize Software DeliverySoftware supply chain best practices - innerloop productivity, CI/CD and S3C.DevOps Best PracticesProcesses and resources for implementing DevOps in your org.SRE PrinciplesTools and resources for adopting SRE in your org.Day 2 Operations for GKETools and guidance for effective GKE management and monitoring.FinOps and Optimization of GKEBest practices for running reliable, performant, and cost effective applications on GKE.Run Applications at the EdgeGuidance for localized and low latency apps on Google’s hardware agnostic edge solution.Architect for MulticloudManage workloads across multiple clouds with a consistent platform.Go ServerlessFully managed environment for developing, deploying and scaling apps.API ManagementModernize old applications and accelerate new development with an API-FIRST approach. Learn moreArtificial intelligenceAdd intelligence and efficiency to your business with AI and machine learning.AI HypercomputerAI optimized hardware, software, and consumption, combined to improve productivity and efficiency.Contact Center AIAI model for speaking with customers and assisting human agents.Document AIMachine learning and AI to unlock insights from your documents.Gemini for Google CloudAI-powered collaborator integrated across Google Workspace and Google Cloud. 
Vertex AI Search for commerceGoogle-quality search and recommendations for retailers' digital properties help increase conversions and reduce search abandonment.Learn moreAPIs and applicationsSecurely unlock your data with APIs, automate processes, and create applications across clouds and on-premises without coding.New business channels using APIsAttract and empower an ecosystem of developers and partners.Open Banking APIxSimplify and accelerate secure delivery of open banking compliant APIs.Unlocking legacy applications using APIsCloud services for extending and modernizing legacy apps.Learn moreData analyticsGenerate instant insights from data at any scale with a serverless, fully managed analytics platform that significantly simplifies analytics.Data warehouse modernizationData warehouse to jumpstart your migration and unlock insights.Data lake modernizationServices for building and modernizing your data lake. Spark on Google CloudRun and write Spark where you need it, serverless and integrated.Stream analyticsInsights from ingesting, processing, and analyzing event streams.Business intelligenceSolutions for modernizing your BI stack and creating rich data experiences.Data sciencePut your data to work with Data Science on Google Cloud.Marketing analyticsSolutions for collecting, analyzing, and activating customer data. Geospatial analytics and AISolutions for building a more prosperous and sustainable business.DatasetsData from Google, public, and commercial providers to enrich your analytics and AI initiatives.Cortex FrameworkReduce the time to value with reference architectures, packaged services, and deployment templates.Learn moreDatabasesMigrate and manage enterprise data with security, reliability, high availability, and fully managed data services.Databases for GamesBuild global, live games with Google Cloud databases.Database migrationGuides and tools to simplify your database migration life cycle. Database modernizationUpgrades to modernize your operational database infrastructure.Google Cloud database portfolioDatabase services to migrate, manage, and modernize data.Migrate Oracle workloads to Google CloudRehost, replatform, rewrite your Oracle workloads.Open source databasesFully managed open source databases with enterprise-grade support.SQL Server on Google CloudOptions for running SQL Server virtual machines on Google Cloud. 
Learn moreInfrastructure modernizationMigrate and modernize workloads on Google's global, secure, and reliable infrastructure.Active AssistAutomatic cloud resource optimization and increased security.Application migrationDiscovery and analysis tools for moving to the cloud.Backup and Disaster RecoveryEnsure your business continuity needs are met.Data center migrationMigration solutions for VMs, apps, databases, and more.Rapid Migration and Modernization ProgramSimplify your path to success in the cloud.High performance computingCompute, storage, and networking options to support any workload.Mainframe modernizationAutomated tools and prescriptive guidance for moving to the cloud.ObservabilityDeliver deep cloud observability with Google Cloud and partners.SAP on Google CloudCertifications for running SAP applications and SAP HANA.Virtual desktopsRemote work solutions for desktops and applications (VDI & DaaS).Windows on Google CloudTools and partners for running Windows workloads.Red Hat on Google CloudEnterprise-grade platform for traditional on-prem and custom applications.Cross-cloud NetworkSimplify hybrid and multicloud networking and secure your workloads, data, and users.Learn moreProductivity and collaborationChange the way teams work with solutions designed for humans and built for impact.Google WorkspaceCollaboration and productivity tools for enterprises. Chrome EnterpriseChrome OS, Chrome Browser, and Chrome devices built for business. Google Workspace EssentialsSecure video meetings and modern collaboration for teams.Cloud IdentityUnified platform for IT admins to manage user devices and apps.Cloud SearchEnterprise search for employees to quickly find company information.Learn moreSecurityDetect, investigate, and protect against online threats.Digital SovereigntyA comprehensive set of sovereign capabilities, allowing you to adopt the right controls on a workload-by-workload basis.Security FoundationSolution with recommended products and guidance to help achieve a strong security posture.Security analytics and operationsSolution for analyzing petabytes of security telemetry.Web App and API Protection (WAAP)Threat and fraud protection for your web applications and APIs.Security and resilience frameworkSolutions for each phase of the security and resilience life cycle.Risk and compliance as code (RCaC)Solution to modernize your governance, risk, and compliance function with automation. Software Supply Chain SecuritySolution for strengthening end-to-end software supply chain security.Google Cloud Cybershield™Strengthen nationwide cyber defense.Learn moreStartups and small and medium-sized businessesAccelerate startup and small and medium-sized businesses growth with tailored solutions and programs.Google Cloud for Web3Build and scale faster with simple, secure tools, and infrastructure for Web3.Startup solutions Grow your startup and solve your toughest challenges using Google’s proven technology.Startup programGet financial, business, and technical support to take your startup to the next level.Small and medium-sized businessesExplore solutions for web hosting, app development, AI, and analytics.Software as a serviceBuild better SaaS products, scale efficiently, and grow your business. Featured partner solutionsGoogle Cloud works with some of the most trusted, innovative partners to help enterprises innovate faster, scale smarter, and stay secure. 
Here are just a few of them.CiscoCombine Cisco's networking, multicloud, and security portfolio with Google Cloud services to innovate on your own terms.DatabricksDatabricks on Google Cloud offers enterprise flexibility for AI-driven analytics on one open cloud platform.Dell TechnologiesThe Dell and Google Cloud partnership delivers a variety of solutions to help transform how enterprises operate their business.IntelGet performance on your own terms with customizable Google Cloud and Intel technologies designed for the most demanding enterprise workloads and applications.MongoDBMongoDB Atlas provides customers a fully managed service on Google’s globally scalable and reliable infrastructure.NetAppDiscover advanced hybrid cloud data services that simplify how you migrate and run enterprise workloads in the cloud.Palo Alto NetworksCombine Google’s secure-by-design infrastructure with dedicated protection from Palo Alto Networks to help secure your applications and data in hybrid environments and on Google Cloud.SAPDrive agility and economic value with VM-based infrastructure, analytics, and machine learning innovations.SplunkSplunk and Google Cloud have partnered to help organizations ingest, normalize, and analyze data at scale. VMwareMigrate and run your VMware workloads natively on Google Cloud.Red HatEnterprise-grade platform for traditional on-prem and custom applications with the security, performance, scalability, and simplicity of Google Cloud.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Infrastructure_Manager.txt b/Infrastructure_Manager.txt new file mode 100644 index 0000000000000000000000000000000000000000..2ddd676191b3c7809cc0d3e98208b183553e9a83 --- /dev/null +++ b/Infrastructure_Manager.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/infrastructure-manager/docs +Date Scraped: 2025-02-23T12:04:32.540Z + +Content: +Home Infrastructure Manager Documentation Stay organized with collections Save and categorize content based on your preferences. Infrastructure Manager documentation View all product documentation Infrastructure Manager (Infra Manager) is a managed service that automates the deployment and management of Google Cloud infrastructure resources. Infrastructure is defined using Terraform and deployed onto Google Cloud by Infra Manager, enabling you to manage resources using Infrastructure as Code (IaC). Learn more. Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more.
format_list_numbered Guides Quickstart: Deploy a VPC with Infra Manager Overview Deploy infrastructure resources View resources deployed Infra Manager and Terraform find_in_page Reference gcloud commands REST API RPC API Client libraries info Resources Pricing Release notes Quotas and Limits Locations \ No newline at end of file diff --git a/Infrastructure_Modernization.txt b/Infrastructure_Modernization.txt new file mode 100644 index 0000000000000000000000000000000000000000..b4ff705646ae0f833f38b1133ac1a06d4121e78d --- /dev/null +++ b/Infrastructure_Modernization.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/infrastructure-modernization +Date Scraped: 2025-02-23T11:59:32.098Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Infrastructure modernizationGoogle Cloud and our partners offer flexible infrastructure modernization approaches from rehost to replatform. Once built, you can leverage the innovation we’ve built into our technology—from AI to streaming analytics.Request a demoContact salesGoogle Cloud infrastructure modernization solutionsSolutionsBenefitsCross-Cloud NetworkConnect any app, any cloud, securely with a global cloud networking platform built on Google’s private backbone. Gain seamless any-to-any connectivity, enhanced application experience, and ML-powered security across all users and workloads anywhere. Simplify with service-centric connectivity for hybrid and multicloud networksStrengthen security with ML-powered real-time protectionCut costs and operating overhead with Google Cloud's global networkObservabilityUse Google Cloud's intelligent solutions along with partner offerings to monitor, troubleshoot, and improve application performance across all workloads. Monitor, troubleshoot, and improve app performance with end-to-end visibilityChoose from a wide range of partners for observability on Google CloudApplication migrationMigrating applications to cloud helps you avoid expensive refresh cycles. Use discovery and assessment tools to understand the cost benefit of moving to the cloud, then execute using our portfolio of migration solutions.Greater user and customer satisfaction with improved app performanceMaintain app security with data that is encrypted both in motion and at restSave big with no up-front costs, better density, and rightsizing recommendationsSAP on Google CloudMaintain business continuity on a secure cloud with advanced reliability, network, and uptime performance. 
Drive agility and economic value with VM-based infrastructure, analytics, and machine learning innovations.Focus on your business, not infrastructureAccelerate business agility and insightsGet help at every step of your cloud journeyVMware as a serviceMigrate your VMware environment with just a few clicks and run your VMware workloads natively on Google Cloud without refactoring your applications.Unlock intelligent insights with native servicesEnjoy a fully integrated experienceSimplify management across hybrid cloud environments Backup and disaster recoveryEnsure business continuity by incorporating a cloud-integrated backup and disaster recovery strategy.The simplest, easiest first step to cloud adoptionScalable, economical cloud backup for VMs and databasesDeliver recovery times aligned with the needs of your businessExtend backup usage for ransomware recovery, test/dev and analyticsHigh performance computingSolve the most complex computing challenges with cost-effective, powerful, and flexible infrastructure to support scalable workloads.Reduce queue times for large-batch workloadsPay only for what you needInnovate with high-performance, on-demand resourcesMicrosoft and Windows on Google CloudA first-class experience for Windows workloads. Self-manage or leverage managed services. Use license-included images or bring your own. Migrate, optimize, and modernize with enterprise-class support backed by Microsoft.Migrate to increase IT agility and reduce on-premises footprintOptimize license usage to reduce costModernize to reduce single-vendor dependencyVirtual desktopsCompanies today are confronted with a challenge of balancing security and IT resources with demands for remote work. Enabling secure and scalable access to corporate resources can help you keep business moving forward.User authentication through Google Workspace, IAP, or Active DirectoryHigh-performance virtual desktop partnersIncrease business agility by migrating and replatforming apps to the cloudBare Metal SolutionBring your specialized workloads to Google Cloud, allowing you access and integration with Google Cloud services with minimal latency.Flexibility and agility for specialized workloadsSubscription pricingData center migrationChoose your path to the cloud—lift and shift, application change, or hybrid.Migrate to a more secure cloudGrow confidently on our purpose-built infrastructureModernize at your own paceIntelligent operations with Active AssistActive Assist is a portfolio of tools that use data, intelligence, and machine learning to reduce cloud complexity and administrative toil, making it easy to optimize your cloud's security, performance, and cost.Find the ideal performance-to-cost balance with automatic recommendationsImprove your cloud with proactive alerts on potential gaps or future issuesDrive innovation by reducing administrative time and laborRed Hat on Google CloudGoogle and Red Hat provide an enterprise-grade platform for traditional on-prem and custom applications, with the familiarity of Red Hat, and the security, performance, scalability, and simplicity of Google Cloud.Optimized for Google CloudIntegrated supportBring your own subscription (BYOS) or Procure from Google Cloud
Need help getting started? Tell us what you’re solving for. A Google expert will help you find the best solution.Contact usGoogle Cloud Rapid Assessment and Migration Program helps customers accelerate their path to success.Start your assessmentLearn from our customersVideoPayPal works with Google Cloud to add services and boost speed and security10:42VideoGoogle Cloud helps McKesson gain healthcare insights and serve patients better02:08Case StudyAirAsia refines pricing, increases revenue, and improves customer experience with Google Cloud5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Contact salesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorials \ No newline at end of file diff --git a/Infrastructure_for_a_RAG-capable_generative_AI_application_using_GKE.txt b/Infrastructure_for_a_RAG-capable_generative_AI_application_using_GKE.txt new file mode 100644 index 0000000000000000000000000000000000000000..a100144ab1aa7388487fc05ae3b1bb9fbb8bd253 --- /dev/null +++ b/Infrastructure_for_a_RAG-capable_generative_AI_application_using_GKE.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/rag-capable-gen-ai-app-using-gke +Date Scraped: 2025-02-23T11:46:01.087Z + +Content: +Home Docs Cloud Architecture Center Send feedback Infrastructure for a RAG-capable generative AI application using GKE Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-11 UTC This document provides a reference architecture that you can use to design the infrastructure to run a generative AI application with retrieval-augmented generation (RAG) using Google Kubernetes Engine (GKE), Cloud SQL, and open source tools like Ray, Hugging Face, and LangChain. To help you experiment with this reference architecture, a sample application and Terraform configuration are provided in GitHub.
This document is for developers who want to rapidly build and deploy RAG-capable generative AI applications by using open source tools and models. It assumes that you have experience with using GKE and Cloud SQL and that you have a conceptual understanding of AI, machine learning (ML), and large language models (LLMs). This document doesn't provide guidance about how to design and develop a generative AI application. Architecture The following diagram shows a high-level view of an architecture for a RAG-capable generative AI application in Google Cloud: The architecture contains a serving subsystem and an embedding subsystem. The serving subsystem handles the request-response flow between the application and its users. The subsystem includes a frontend server, an inference server, and a responsible AI (RAI) service. The serving subsystem interacts with the embedding subsystem through a vector database. The embedding subsystem enables the RAG capability in the architecture. This subsystem does the following: Ingests data from data sources in Google Cloud, on-premises, and other cloud platforms. Converts the ingested data to vector embeddings. Stores the embeddings in a vector database. The following diagram shows a detailed view of the architecture: As shown in the preceding diagram, the frontend server, inference server, and embedding service are deployed in a regional GKE cluster in Autopilot mode. Data for RAG is ingested through a Cloud Storage bucket. The architecture uses a Cloud SQL for PostgreSQL instance with the pgvector extension as the vector database to store embeddings and perform semantic searches. Vector databases are designed to efficiently store and retrieve high-dimensional vectors. Note: The preceding diagram doesn't show the architecture for networking resources. For guidance to design and configure networking for your GKE cluster, see Best practices for GKE networking. The following sections describe the components and data flow within each subsystem of the architecture. Embedding subsystem The following is the flow of data in the embedding subsystem: Data from external and internal sources is uploaded to the Cloud Storage bucket by human users or programmatically. The uploaded data might be in files, databases, or streamed data. (Not shown in the architecture diagram.) The data upload activity triggers an event that's published to a messaging service like Pub/Sub. The messaging service sends a notification to the embedding service. When the embedding service receives a notification of a data upload event, it does the following: Retrieves data from the Cloud Storage bucket through the Cloud Storage FUSE CSI driver. Reads the uploaded data and preprocesses it using Ray Data. The preprocessing can include chunking the data and transforming it into a suitable format for embedding generation. Runs a Ray job to create vectorized embeddings of the preprocessed data by using an open-source model like intfloat/multilingual-e5-small that's deployed in the same cluster. Writes the vectorized embeddings to the Cloud SQL for PostgreSQL vector database. As described in the following section, when the serving subsystem processes user requests, it uses the embeddings in the vector database to retrieve relevant domain-specific data. Serving subsystem The following is the request-response flow in the serving subsystem: A user submits a natural-language request to a frontend server through a web-based chat interface. The frontend server runs on GKE. 
The frontend server runs a LangChain process that does the following: Converts the natural-language request to embeddings by using the same model and parameters that the embedding service uses. Retrieves relevant grounding data by performing a semantic search for the embeddings in the vector database. Semantic search helps find embeddings based on the intent of a prompt rather than its textual content. Constructs a contextualized prompt by combining the original request with the grounding data that was retrieved. Sends the contextualized prompt to the inference server, which runs on GKE. The inference server uses the Hugging Face TGI serving framework to serve an open-source LLM like Mistral-7B-Instruct or a Gemma open model. The LLM generates a response to the prompt, and the inference server sends the response to the frontend server. You can store and view logs of the request-response activity in Cloud Logging, and you can set up logs-based monitoring by using Cloud Monitoring. You can also load the generated responses into BigQuery for offline analytics. The frontend server invokes an RAI service to apply the required safety filters to the response. You can use tools like Sensitive Data Protection and Cloud Natural Language API to discover, filter, classify, and de-identify sensitive content in the responses. The frontend server sends the filtered response to the user. Products used The following is a summary of the Google Cloud and open-source products that the preceding architecture uses: Google Cloud products Google Kubernetes Engine (GKE): A Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Cloud SQL: A fully managed relational database service that helps you provision, operate, and manage your MySQL, PostgreSQL, and SQL Server databases on Google Cloud. Open-source products Hugging Face Text Generation Inference (TGI): A toolkit for deploying and serving LLMs. Ray: An open-source unified compute framework that helps you scale AI and Python workloads. LangChain: A framework for developing and deploying applications powered by LLMs. Use cases RAG is an effective technique to improve the quality of output that's generated from an LLM. This section provides examples of use cases for which you can use RAG-capable generative AI applications. Personalized product recommendations An online shopping site might use an LLM-powered chatbot to assist customers with finding products or getting shopping-related help. The questions from a user can be augmented by using historical data about the user's buying behavior and website interaction patterns. The data might include user reviews and feedback that's stored in an unstructured datastore or search-related metrics that are stored in a web analytics data warehouse. The augmented question can then be processed by the LLM to generate personalized responses that the user might find more appealing and compelling. Clinical assistance systems Doctors in hospitals need to quickly analyze and diagnose a patient's health condition to make decisions about appropriate care and medication. A generative AI application that uses a medical LLM like Med-PaLM can be used to assist doctors in their clinical diagnosis process. 
The responses that the application generates can be grounded in historical patient records by contextualizing the doctors' prompts with data from the hospital's electronic health record (EHR) database or from an external knowledge base like PubMed. Efficient legal research Generative AI-powered legal research lets lawyers quickly query large volumes of statutes and case laws to identify relevant legal precedents or summarize complex legal concepts. The output of such research can be enhanced by augmenting a lawyer's prompts with data that's retrieved from the law firm's proprietary corpus of contracts, past legal communication, and internal case records. This design approach ensures that the generated responses are relevant to the legal domain that the lawyer specializes in. Design alternatives This section presents alternative design approaches that you can consider for your RAG-capable generative AI application in Google Cloud. Fully managed vector search If you need an architecture that uses a fully managed vector search product, you can use Vertex AI and Vector Search, which provides optimized serving infrastructure for very large-scale vector search. For more information, see Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search. Vector-enabled Google Cloud database If you want to take advantage of the vector store capabilities of a fully managed Google Cloud database like AlloyDB for PostgreSQL or Cloud SQL for your RAG application, then see Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL. Other options For information about other infrastructure options, supported models, and grounding techniques that you can use for generative AI applications in Google Cloud, see Choose models and infrastructure for your generative AI application. Design considerations This section provides guidance to help you develop and run a GKE-hosted RAG-capable generative AI architecture that meets your specific requirements for security and compliance, reliability, cost, and performance. The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud products and features that you use, you might need to consider additional design factors and trade-offs. For design guidance related to the open-source tools in this reference architecture, like Hugging Face TGI, see the documentation for those tools. Security, privacy, and compliance This section describes factors that you should consider when you design and build a RAG-capable generative AI application in Google Cloud that meets your security, privacy, and compliance requirements. Product Design considerations GKE In the Autopilot mode of operation, GKE pre-configures your cluster and manages nodes according to security best practices, which lets you focus on workload-specific security. For more information, see the following: GKE Autopilot security capabilities Move-in ready Kubernetes security with GKE Autopilot To ensure enhanced access control for your applications running in GKE, you can use Identity-Aware Proxy (IAP). IAP integrates with the GKE Ingress resource and ensures that only authenticated users with the correct Identity and Access Management (IAM) role can access the applications. For more information, see Enabling IAP for GKE. By default, your data in GKE is encrypted at rest and in transit using Google-owned and Google-managed encryption keys. 
As an additional layer of security for sensitive data, you can encrypt data at the application layer by using a key that you own and manage with Cloud KMS. For more information, see Encrypt secrets at the application layer. If you use a Standard GKE cluster, then you can use the following additional data-encryption capabilities: Encrypt data in use (that is, in memory) by using Confidential GKE Nodes. For more information about the features, availability, and limitations of Confidential GKE Nodes, see Encrypt workload data in-use with Confidential Google Kubernetes Engine Nodes. If you need more control over the encryption keys that are used to encrypt Pod traffic across GKE nodes, then you can encrypt the data in transit by using keys that you manage. For more information, see Encrypt your data in-transit in GKE with user-managed encryption keys. Cloud SQL The Cloud SQL instance in the architecture doesn't need to be accessible from the public internet. If external access to the Cloud SQL instance is necessary, you can encrypt external connections by using SSL/TLS or the Cloud SQL Auth Proxy connector. The Auth Proxy connector provides connection authorization by using IAM. The connector uses a TLS 1.3 connection with a 256-bit AES cipher to verify client and server identities and encrypt data traffic. For connections created by using Java, Python, Go, or Node.js, use the appropriate Language Connector instead of the Auth Proxy connector. By default, Cloud SQL uses Google-owned and Google-managed data encryption keys (DEK) and key encryption keys (KEK) to encrypt data at rest. If you need to use KEKs that you control and manage, you can use customer-managed encryption keys (CMEKs). To prevent unauthorized access to the Cloud SQL Admin API, you can create a service perimeter by using VPC Service Controls. For information about configuring Cloud SQL to help meet data residency requirements, see Data residency overview. Cloud Storage By default, the data that's stored in Cloud Storage is encrypted using Google-owned and Google-managed encryption keys. If required, you can use CMEKs or your own keys that you manage by using an external management method like customer-supplied encryption keys (CSEKs). For more information, see Data encryption options. Cloud Storage supports two methods for controlling user access to your buckets and objects: IAM and access control lists (ACLs). In most cases, we recommend using IAM, which lets you grant permissions at the bucket and project levels. For more information, see Overview of access control. The data that you load into the data ingestion subsystem through Cloud Storage might include sensitive data. To protect such data, you can use Sensitive Data Protection to discover, classify, and de-identify the data. For more information, see Using Sensitive Data Protection with Cloud Storage. To mitigate the risk of data exfiltration from Cloud Storage, you can create a service perimeter by using VPC Service Controls. Cloud Storage helps you meet data residency requirements. Data is stored or replicated within the regions that you specify. All of the products in this architecture Admin Activity audit logs are enabled by default for all of the Google Cloud services that are used in this reference architecture. You can access the logs through Cloud Logging and use the logs to monitor API calls or other actions that modify the configuration or metadata of Google Cloud resources. 
Data Access audit logs are also enabled by default for all of the Google Cloud services in this architecture. You can use these logs to monitor the following: API calls that read the configuration or metadata of resources. User requests to create, modify, or read user-provided resource data. Google doesn't access or use the data in Cloud Logging. For security principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Security in the Architecture Framework. Reliability This section describes design factors that you should consider to build and operate reliable infrastructure for a RAG-capable generative AI application in Google Cloud. Product Design considerations GKE With the Autopilot mode of operation that's used in this architecture, GKE provides the following built-in reliability capabilities: Your workload uses a regional GKE cluster. The control plane and worker nodes are spread across three different zones within a region. Your workloads are robust against zone outages. Regional GKE clusters have a higher uptime SLA than zonal clusters. You don't need to create nodes or manage node pools. GKE automatically creates the node pools and scales them automatically based on the requirements of your workloads. To ensure that sufficient GPU capacity is available when required for autoscaling the GKE cluster, you can create and use reservations. A reservation provides assured capacity in a specific zone for a specified resource. A reservation can be specific to a project, or shared across multiple projects. You incur charges for reserved resources even if the resources aren't provisioned or used. For more information, see Consuming reserved zonal resources. Cloud SQL To ensure that the vector database is robust against database failures and zone outages, use an HA-configured Cloud SQL instance. In the event of a failure of the primary database or a zone outage, Cloud SQL fails over automatically to the standby database in another zone. You don't need to change the IP address for the database endpoint. To ensure that your Cloud SQL instances are covered by the SLA, follow the recommended operational guidelines. For example, ensure that CPU and memory are properly sized for the workload, and enable automatic storage increases. For more information, see Operational guidelines. Cloud Storage You can create Cloud Storage buckets in one of three location types: regional, dual-region, or multi-region. Data that's stored in regional buckets is replicated synchronously across multiple zones within a region. For higher availability, you can use dual-region or multi-region buckets, where data is replicated asynchronously across regions. For reliability principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Reliability in the Architecture Framework. Cost optimization This section provides guidance to help you optimize the cost of setting up and operating a RAG-capable generative AI application in Google Cloud. Product Design considerations GKE In Autopilot mode, GKE optimizes the efficiency of your cluster's infrastructure based on workload requirements. You don't need to constantly monitor resource utilization or manage capacity to control costs. If you can predict the CPU, memory, and ephemeral storage usage of your GKE Autopilot cluster, then you can save money by getting discounts for committed usage. For more information, see GKE committed use discounts. 
To reduce the cost of running your application, you can use Spot VMs for your GKE nodes. Spot VMs are priced lower than standard VMs, but provide no guarantee of availability. For information about the benefits of nodes that use Spot VMs, how they work in GKE, and how to schedule workloads on such nodes, see Spot VMs. For more cost-optimization guidance, see Best practices for running cost-optimized Kubernetes applications on GKE. Cloud SQL A high availability (HA) configuration helps to reduce downtime for your Cloud SQL database when the zone or instance becomes unavailable. However, the cost of an HA-configured instance is higher than that of a standalone instance. If you don't need HA for the vector database, then you can reduce cost by using a standalone instance, which isn't robust against zone outages. You can detect whether your Cloud SQL instance is over-provisioned and optimize billing by using Cloud SQL cost insights and recommendations powered by Active Assist. For more information, see Reduce over-provisioned Cloud SQL instances. If you can predict the CPU and memory requirements of your Cloud SQL instance, then you can save money by getting discounts for committed usage. For more information, see Cloud SQL committed use discounts. Cloud Storage For the Cloud Storage bucket that you use to load data into the data ingestion subsystem, choose an appropriate storage class. When you choose the storage class, consider the data-retention and access-frequency requirements of your workloads. For example, to control storage costs, you can choose the Standard class and use Object Lifecycle Management. Doing so enables automatic downgrade of objects to a lower-cost storage class or deletion of objects based on conditions that you set. To estimate the cost of your Google Cloud resources, use the Google Cloud Pricing Calculator. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Performance optimization This section describes the factors that you should consider when you design and build a RAG-capable generative AI application in Google Cloud that meets your performance requirements. Product Design considerations GKE Choose appropriate compute classes for your Pods based on the performance requirements of the workloads. For the Pods that run the inference server and the embedding service, we recommend that you use a GPU machine type like nvidia-l4. Cloud SQL To optimize the performance of your Cloud SQL instance, ensure that the CPU and memory that are allocated to the instance are adequate for the workload. For more information, see Optimize underprovisioned Cloud SQL instances. To improve the response time for approximate nearest neighbor (ANN) vector search, use the Inverted File with Flat Compression (IVFFlat) index or Hierarchical Navigable Small World (HNSW) index To help you analyze and improve the query performance of the databases, Cloud SQL provides a Query Insights tool. You can use this tool to monitor performance and trace the source of a problematic query. For more information, see Use Query insights to improve query performance. To get an overview of the status and performance of your databases and to view detailed metrics such as peak connections and disk utilization, you can use the System Insights dashboard. For more information, see Use System insights to improve system performance. 
Cloud Storage To upload large files, you can use a method called parallel composite uploads. With this strategy, the large file is split into chunks. The chunks are uploaded to Cloud Storage in parallel and then the data is recomposed in the cloud. When network bandwidth and disk speed aren't limiting factors, parallel composite uploads can be faster than regular upload operations. However, this strategy has some limitations and cost implications. For more information, see Parallel composite uploads. For performance optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Performance optimization in the Architecture Framework. Deployment To deploy a topology that's based on this reference architecture, you can download and use the open-source sample code that's available in a repository in GitHub. The sample code isn't intended for production use cases. You can use the code to experiment with setting up AI infrastructure for a RAG-enabled generative AI application. The sample code does the following: Provisions a Cloud SQL for PostgreSQL instance to serve as the vector database. Deploys Ray, JupyterHub, and Hugging Face TGI to a GKE cluster that you specify. Deploys a sample web-based chatbot application to your GKE cluster to let you verify the RAG capability. Note: The sample code doesn't set up a messaging component to notify the embedding service when data is uploaded to Cloud Storage. The code includes a Jupyter notebook that you use to upload data to Cloud Storage and trigger the Ray job to generate embeddings. For instructions to use the sample code, see the README for the code. If any errors occur when you use the sample code, and if open GitHub issues don't exist for the errors, then create issues in GitHub. The sample code deploys billable Google Cloud resources. When you finish using the code, remove any resources that you no longer need. What's next Review the following GKE best practices guides: Best practices for GKE networking Best practices for running cost-optimized Kubernetes applications on GKE Learn how to serve Gemma open models using GPUs on GKE with Hugging Face TGI. Review the Google Cloud options for grounding generative AI responses. Learn how to build infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search. Learn how to build infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
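As a companion to the serving flow that's described earlier in this document (embed the user request, run a semantic search in the pgvector database, and build a contextualized prompt), the following Python sketch shows a simplified, hypothetical stand-in for the LangChain-based sample application. The connection string, table name (documents), and column names (content, embedding) are assumptions, and in a real deployment you would connect to Cloud SQL through the Cloud SQL Auth Proxy or a language connector. The sketch uses the same intfloat/multilingual-e5-small model that the embedding subsystem uses, via the sentence-transformers library.
# Minimal sketch only. Install with `pip install sentence-transformers psycopg2-binary`.
import psycopg2
from sentence_transformers import SentenceTransformer

# Assumption: this matches the open-source embedding model that the embedding
# service in this architecture uses.
model = SentenceTransformer('intfloat/multilingual-e5-small')

def retrieve_context(question: str, top_k: int = 4) -> list[str]:
    """Embeds the question and runs a pgvector similarity search in Cloud SQL."""
    # E5 models expect a "query: " prefix for search queries.
    query_embedding = model.encode(f'query: {question}', normalize_embeddings=True)
    vector_literal = '[' + ','.join(str(x) for x in query_embedding) + ']'

    # Assumption: the DSN, table, and column names are illustrative only; in
    # practice, connect through the Cloud SQL Auth Proxy or a connector.
    with psycopg2.connect('host=127.0.0.1 dbname=rag user=rag-app') as conn:
        with conn.cursor() as cur:
            cur.execute(
                'SELECT content FROM documents '
                'ORDER BY embedding <=> %s::vector LIMIT %s',
                (vector_literal, top_k),
            )
            return [row[0] for row in cur.fetchall()]

def build_prompt(question: str) -> str:
    """Combines the user request with the retrieved grounding data (the RAG step)."""
    context = '\n\n'.join(retrieve_context(question))
    return (
        'Answer the question using only the context below.\n\n'
        f'Context:\n{context}\n\nQuestion: {question}'
    )

if __name__ == '__main__':
    # The contextualized prompt would then be sent to the TGI inference server.
    print(build_prompt('What is the return policy for online orders?'))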
ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Anna Berenberg | Engineering FellowAli Zaidi | Solutions ArchitectBala Narasimhan | Group Product ManagerBill Bernsen | Security EngineerBrandon Royal | Outbound Product ManagerCynthia Thomas | Product ManagerGeoffrey Anderson | Product ManagerGleb Otochkin | Cloud Advocate, DatabasesJack Wotherspoon | Software EngineerJulie Amundson | Senior Staff Software EngineerKent Hua | Solutions ManagerKavitha Rajendran | AI/ML Specialist, Solutions ArchitectMark Schlagenhauf | Technical Writer, NetworkingMegan O'Keefe | Head of Industry Compete, Cloud Platform Evaluations TeamMofi Rahman | Google Cloud Advocate Send feedback \ No newline at end of file diff --git a/Infrastructure_for_a_RAG-capable_generative_AI_application_using_Vertex_AI_and_AlloyDB.txt b/Infrastructure_for_a_RAG-capable_generative_AI_application_using_Vertex_AI_and_AlloyDB.txt new file mode 100644 index 0000000000000000000000000000000000000000..b40cffce276f6dde80b174d1299292d456af7788 --- /dev/null +++ b/Infrastructure_for_a_RAG-capable_generative_AI_application_using_Vertex_AI_and_AlloyDB.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/rag-capable-gen-ai-app-using-vertex-ai +Date Scraped: 2025-02-23T11:45:58.712Z + +Content: +Home Docs Cloud Architecture Center Send feedback Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-11 UTC This document provides a reference architecture that you can use to design the infrastructure to run a generative artificial intelligence (AI) application with retrieval-augmented generation (RAG). The intended audience for this document includes developers and administrators of generative AI applications and cloud architects. The document assumes a basic understanding of AI, machine learning (ML), and large language model (LLM) concepts. This document doesn't provide guidance about how to design and develop a generative AI application. Note: To learn how you can use open source tools like Ray, Hugging Face, and LangChain to rapidly build and deploy a RAG-capable generative AI application on Google Kubernetes Engine (GKE), see Infrastructure for a RAG-capable generative AI application using GKE . Architecture The following diagram shows a high-level view of an architecture for a RAG-capable generative AI application in Google Cloud: The architecture contains the following interconnected components: Component Purpose Interactions Data ingestion subsystem Prepare and process the external data that's used to enable the RAG capability. The data ingestion subsystem interacts with the other subsystems in the architecture through the database layer. Serving subsystem Handle the request-response flow between the generative AI application and its users. The serving subsystem interacts with the data ingestion subsystem through the database layer. Quality evaluation subsystem Evaluate the quality of responses that the serving subsystem generates. The quality evaluation subsystem interacts with the serving subsystem directly and with the data ingestion subsystem through the database layer. Databases Store the following data: Prompts Vectorized embeddings of the data used for RAG Configuration of the serverless jobs in the data ingestion and quality evaluation subsystems All the subsystems in the architecture interact with the databases. 
The following diagram shows a detailed view of the architecture: The following sections provide detailed descriptions of the components and data flow within each subsystem of the architecture. Data ingestion subsystem The data ingestion subsystem ingests data from external sources such as files, databases, and streaming services. The uploaded data includes prompts for quality evaluation. The data ingestion subsystem provides the RAG capability in the architecture. The following diagram shows details of the data ingestion subsystem in the architecture: The following are the steps in the data-ingestion flow: Data is uploaded to a Cloud Storage bucket. The data source might be an application user performing an upload, database ingestion, or streaming data. When data is uploaded, a notification is sent to a Pub/Sub topic. Pub/Sub triggers a Cloud Run job to process the uploaded data. Cloud Run starts the job by using configuration data that's stored in an AlloyDB for PostgreSQL database. The Cloud Run job uses Document AI to prepare the data for further processing. For example, the preparation can include parsing the data, converting the data to the required format, and dividing the data into chunks. The Cloud Run job uses the Vertex AI Embeddings for Text model to create vectorized embeddings of the ingested data. Note: The generative AI application that you develop should use the same Embeddings for Text model and parameters to convert the natural-language requests to embeddings. Cloud Run stores the embeddings in an AlloyDB for PostgreSQL database that has the pgvector extension enabled. As described in the following section, when the serving subsystem processes user requests, it uses the embeddings in the vector database to retrieve relevant domain-specific data. Serving subsystem The serving subsystem handles the request-response flow between the generative AI application and its users. The following diagram shows details of the serving subsystem in the architecture: The following are the steps in the request-response flow in the serving subsystem: Users submit requests to the generative AI application through a frontend (for example, a chatbot or mobile app). The generative AI application converts the natural-language request to embeddings. Note: To create the embeddings, the generative AI application that you develop should use the same Embeddings for Text model and parameters that are used to convert the external data to embeddings. The application completes the retrieval part of the RAG approach: The application performs a semantic search for the embedding in the AlloyDB for PostgreSQL vector store that's maintained by the data ingestion subsystem. Semantic search helps find embeddings based on the intent of a prompt rather than its textual content. The application combines the original request with the raw data that's retrieved based on the matching embedding to create a contextualized prompt. The application sends the contextualized prompt to an LLM inference stack that runs on Vertex AI. The LLM inference stack uses a generative AI LLM, which can be a foundation LLM or a custom LLM, and generates a response that's constrained to the provided context. The application can store logs of the request-response activity in Cloud Logging. You can view and use the logs for monitoring using Cloud Monitoring. Google doesn't access or use log data. The application loads the responses to BigQuery for offline analytics. The application screens the responses by using responsible AI filters. 
The application sends the screened responses to users through the frontend. Quality evaluation subsystem The following diagram shows details of the quality evaluation subsystem in the architecture: When the quality evaluation subsystem receives a request, it does the following: Pub/Sub triggers a Cloud Run job. Cloud Run starts the job by using configuration data that's stored in an AlloyDB for PostgreSQL database. The Cloud Run job pulls evaluation prompts from an AlloyDB for PostgreSQL database. The prompts were previously uploaded to the database by the data ingestion subsystem. The Cloud Run job uses the evaluation prompts to assess the quality of the responses that the serving subsystem generates. The output of this evaluation consists of evaluation scores for metrics like factual accuracy and relevance. Cloud Run loads the evaluation scores and the prompts and responses that were evaluated to BigQuery for future analysis. Products used The following is a summary of all the Google Cloud products that the preceding architecture uses: Vertex AI: An ML platform that lets you train and deploy ML models and AI applications, and customize LLMs for use in AI-powered applications. Cloud Run: A serverless compute platform that lets you run containers directly on top of Google's scalable infrastructure. BigQuery: An enterprise data warehouse that helps you manage and analyze your data with built-in features like machine learning geospatial analysis, and business intelligence. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. AlloyDB for PostgreSQL: A fully managed, PostgreSQL-compatible database service that's designed for your most demanding workloads, including hybrid transactional and analytical processing. Document AI: A document processing platform that takes unstructured data from documents and transforms it into structured data. Pub/Sub: An asynchronous and scalable messaging service that decouples services that produce messages from services that process those messages. Cloud Logging: A real-time log management system with storage, search, analysis, and alerting. Cloud Monitoring: A service that provides visibility into the performance, availability, and health of your applications and infrastructure. Use cases RAG is an effective technique to improve the quality of output that's generated from an LLM. This section provides examples of use cases for which you can use RAG-capable generative AI applications. Personalized product recommendations An online shopping site might use an LLM-powered chatbot to assist customers with finding products or getting shopping-related help. The questions from a user can be augmented by using historical data about the user's buying behavior and website interaction patterns. The data might include user reviews and feedback that's stored in an unstructured datastore or search-related metrics that are stored in a web analytics data warehouse. The augmented question can then be processed by the LLM to generate personalized responses that the user might find more appealing and compelling. Clinical assistance systems Doctors in hospitals need to quickly analyze and diagnose a patient's health condition to make decisions about appropriate care and medication. A generative AI application that uses a medical LLM like Med-PaLM can be used to assist doctors in their clinical diagnosis process. 
The responses that the application generates can be grounded in historical patient records by contextualizing the doctors' prompts with data from the hospital's electronic health record (EHR) database or from an external knowledge base like PubMed. Efficient legal research Generative AI-powered legal research lets lawyers quickly query large volumes of statutes and case laws to identify relevant legal precedents or summarize complex legal concepts. The output of such research can be enhanced by augmenting a lawyer's prompts with data that's retrieved from the law firm's proprietary corpus of contracts, past legal communication, and internal case records. This design approach ensures that the generated responses are relevant to the legal domain that the lawyer specializes in. Design alternatives This section presents alternative design approaches that you can consider for your RAG-capable generative AI application in Google Cloud. Fully managed vector search If you need an architecture that uses a fully managed vector search product, you can use Vertex AI and Vector Search, which provides optimized serving infrastructure for very large-scale vector search. For more information, see Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search. Open-source tools and models If you want to rapidly build and deploy RAG-capable generative AI applications by using open source tools and models Ray, Hugging Face, and LangChain, see Infrastructure for a RAG-capable generative AI application using GKE. Other options For information about other infrastructure options, supported models, and grounding techniques that you can use for generative AI applications in Google Cloud, see Choose models and infrastructure for your generative AI application. Design considerations This section provides guidance to help you develop a RAG-capable generative AI architecture in Google Cloud that meets your specific requirements for security and compliance, reliability, cost, and performance. The guidance in this section isn't exhaustive. Depending on the specific requirements of your generative AI application and the Google Cloud products and features that you use, you might need to consider additional design factors and trade-offs. Security and compliance This section describes factors that you should consider when you design and build a RAG-capable generative AI application in Google Cloud that meets your security and compliance requirements. Product Design considerations Vertex AI Vertex AI supports Google Cloud security controls that you can use to meet your requirements for data residency, data encryption, network security, and access transparency. For more information, see Security controls for Vertex AI and Security controls for Generative AI. Cloud Run By default, Cloud Run encrypts data by using a Google-owned and Google-managed encryption key. To protect your containers by using a key that you control, you can use customer-managed encryption keys (CMEK). For more information, see Using customer managed encryption keys. To ensure that only authorized container images are deployed to the Cloud Run jobs, you can use Binary Authorization. Cloud Run helps you meet data residency requirements. Cloud Run container instances run within the region that you select. AlloyDB for PostgreSQL By default, data that's stored in AlloyDB for PostgreSQL is encrypted using Google-owned and Google-managed encryption keys. 
If you need to use encryption keys that you control and manage, you can use CMEKs. For more information, see About CMEK. To mitigate the risk of data exfiltration from AlloyDB for PostgreSQL databases, you can create a service perimeter by using VPC Service Controls. By default, an AlloyDB for PostgreSQL instance accepts only connections that use SSL. To further secure connections to your AlloyDB for PostgreSQL databases, you can use the AlloyDB for PostgreSQL Auth Proxy connector. The Auth Proxy connector provides Identity and Access Management (IAM)-based connection authorization and uses a TLS 1.3 connection with a 256-bit AES cipher to verify client and server identities and encrypt data traffic. For more information, see About the AlloyDB for PostgreSQL Auth Proxy. For connections created by using Java, Python, or Go, use the appropriate Language Connector instead of the Auth Proxy connector. AlloyDB for PostgreSQL helps you meet data residency requirements. Data is stored or replicated within the regions that you specify. BigQuery BigQuery provides many features that you can use to control access to data, protect sensitive data, and ensure data accuracy and consistency. For more information, see Introduction to data governance in BigQuery. BigQuery helps you meet data residency requirements. Data is stored within the region that you specify. Cloud Storage By default, the data that's stored in Cloud Storage is encrypted using Google-owned and Google-managed encryption keys. If required, you can use CMEKs or your own keys that you manage by using an external management method like customer-supplied encryption keys (CSEKs). For more information, see Data encryption options. Cloud Storage supports two methods for granting users access to your buckets and objects: IAM and access control lists (ACLs). In most cases, we recommend using IAM, which lets you grant permissions at the bucket and project levels. For more information, see Overview of access control. The data that you load into the data ingestion subsystem through Cloud Storage might include sensitive data. To protect such data, you can use Sensitive Data Protection to discover, classify, and de-identify the data. For more information, see Using Sensitive Data Protection with Cloud Storage. Cloud Storage helps you meet data residency requirements. Data is stored or replicated within the regions that you specify. Pub/Sub By default, Pub/Sub encrypts all messages, both at rest and in transit, by using Google-owned and Google-managed encryption keys. Pub/Sub supports the use of CMEKs for message encryption at the application layer. For more information, see Configuring message encryption. If you have data residency requirements, to ensure that message data is stored in specific locations, you can configure message storage policies. Document AI By default, data at rest is encrypted using Google-managed encryption keys. If you need to use encryption keys that you control and manage, you can use CMEKs. For more information, see Document AI Security & Compliance. Cloud Logging Admin Activity audit logs are enabled by default for all the Google Cloud services that are used in this reference architecture. These logs record API calls or other actions that modify the configuration or metadata of Google Cloud resources. Data Access audit logs are enabled by default for BigQuery. For the other services that are used in this architecture, you can enable Data Access audit logs. 
The logs let you track API calls that read the configuration or metadata of resources or user requests to create, modify, or read user-provided resource data. To help meet data residency requirements, you can configure Cloud Logging to store log data in the region that you specify. For more information, see Regionalize your logs. For security principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Security in the Architecture Framework. Reliability This section describes design factors that you should consider to build and operate reliable infrastructure for a RAG-capable generative AI application in Google Cloud. Product Design considerations Cloud Run Cloud Run is a regional service. Data is stored synchronously across multiple zones within a region. Traffic is automatically load-balanced across the zones. If a zone outage occurs, Cloud Run jobs continue to run and data isn't lost. If a region outage occurs, the Cloud Run jobs stop running until Google resolves the outage. Individual Cloud Run jobs or tasks might fail. To handle such failures, you can use task retries and checkpointing. For more information, see Jobs retries and checkpoints best practices. AlloyDB for PostgreSQL By default, AlloyDB for PostgreSQL clusters provide high availability (HA) with automatic failover. The primary instance has redundant nodes that are located in two different zones within a region. This redundancy ensures that the clusters are robust against zone outages. To plan for recovery from region outages, you can use cross-region replication. BigQuery Data that you load into BigQuery is stored synchronously in two zones within the region that you specify. This redundancy helps ensure that your data isn't lost when a zone outage occurs. For more information about reliability features in BigQuery, see Understand reliability. Cloud Storage You can create Cloud Storage buckets in one of three location types: regional, dual-region, or multi-region. Data stored in regional buckets is replicated synchronously across multiple zones within a region. For higher availability, you can use dual-region or multi-region buckets, where data is replicated asynchronously across regions. Pub/Sub To manage transient spikes in message traffic, you can configure flow control in the publisher settings. To handle failed publishes, adjust the retry-request variables as necessary. For more information, see Retry requests. Document AI Document AI is a regional service. Data is stored synchronously across multiple zones within a region. Traffic is automatically load-balanced across the zones. If a zone outage occurs, data isn't lost. If a region outage occurs, the Document AI is unavailable until Google resolves the outage. Note: For more information about region-specific considerations, see Geography and regions. For reliability principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Reliability in the Architecture Framework. Cost optimization This section provides guidance to help you optimize the cost of setting up and operating a RAG-capable generative AI application in Google Cloud. Product Design considerations Cloud Run When you create Cloud Run jobs, you specify the amount of memory and CPU to be allocated to the container instance. To control costs, start with the default (minimum) CPU and memory allocations. To improve performance, you can increase the allocation by configuring the CPU limit and memory limit. 
If you can predict the CPU and memory requirements of your Cloud Run jobs, then you can save money by getting discounts for committed usage. For more information, see Cloud Run committed use discounts. AlloyDB for PostgreSQL By default, a primary instance of an AlloyDB for PostgreSQL cluster is highly available (HA). The instance has an active node and a standby node. If the active node fails, AlloyDB for PostgreSQL fails over to the standby node automatically. If you don't need HA for the databases, then you can reduce cost by making the cluster's primary instance a basic instance. A basic instance isn't robust against zone outages and it has longer downtime during maintenance operations. For more information, see Reduce costs using basic instances. If you can predict the CPU and memory requirements of your AlloyDB for PostgreSQL instance, then you can save money by getting discounts for committed usage. For more information, see AlloyDB for PostgreSQL committed use discounts. BigQuery BigQuery lets you estimate the cost of queries before running them. To optimize query costs, you need to optimize storage and query computation. For more information, see Estimate and control costs. Cloud Storage For the Cloud Storage bucket that you use to load data into the data ingestion subsystem, choose an appropriate storage class based on the data-retention and access-frequency requirements of your workloads. For example, you can choose the Standard storage class, and use Object Lifecycle Management to control storage costs by automatically downgrading objects to a lower-cost storage class or deleting objects based on conditions that you set. Cloud Logging To control the cost of storing logs, you can do the following: Reduce the volume of logs by excluding or filtering unnecessary log entries. For more information, see Exclusion filters. Reduce the period for which log entries are retained. For more information, see Configure custom retention. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Performance This section describes the factors that you should consider when you design and build a RAG-capable generative AI application in Google Cloud that meets your performance requirements. Product Design considerations Cloud Run By default, each Cloud Run container instance is allocated one CPU and 512 MiB of memory. Depending on your performance requirements for your Cloud Run jobs, you can configure the CPU limit and memory limit. AlloyDB for PostgreSQL To help you analyze and improve query performance of the databases, AlloyDB for PostgreSQL provides a Query Insights tool. You can use this tool to monitor performance and trace the source of a problematic query. For more information, see Query Insights overview. To get an overview of the status and performance of your databases and to view detailed metrics such as peak connections and maximum replication lag, you can use the System Insights dashboard. For more information, see Monitor an instance using the AlloyDB for PostgreSQL System Insights dashboard. To reduce the load on your primary AlloyDB for PostgreSQL instance and to scale out the capacity to handle read requests, you can add read pool instances to the cluster. For more information, see AlloyDB for PostgreSQL nodes and instances. 
BigQuery BigQuery provides a query execution graph that you can use to analyze query performance and get performance insights for issues like slot contention and insufficient shuffle quota. For more information, see Get query performance insights. After you address the issues that you identify through query performance insights, you can further optimize queries by using techniques like reducing the volume of input and output data. For more information, see Optimize query computation. Cloud Storage To upload large files, you can use a method called parallel composite uploads. With this strategy, the large file is split into chunks. The chunks are uploaded to Cloud Storage in parallel and then the data is recomposed in the cloud. Parallel composite uploads can be faster than regular upload operations when network bandwidth and disk speed aren't limiting factors. However, this strategy has some limitations and cost implications. For more information, see Parallel composite uploads. For performance optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Performance optimization in the Architecture Framework. Deployment To get started and experiment with building infrastructure on Google Cloud for RAG-capable generative AI applications, you can use Jump Start Solution: Generative AI RAG with Cloud SQL. This solution deploys a Python-based chat application on Cloud Run and uses a fully managed Cloud SQL database for vector search. The sample code for this solution is available in GitHub. Note: The Jump Start Solution uses Cloud SQL as the vector database, whereas this reference architecture uses AlloyDB for PostgreSQL. Both Cloud SQL and AlloyDB for PostgreSQL support the pgvector extension, which lets you run vector search in a PostgreSQL database. What's next Learn how to Build Generative AI applications with Vertex AI PaLM API and LangChain. Learn how to Build enterprise gen AI apps with Google Cloud databases. Learn how New GenAI Databases Retrieval App helps improve LLM answers. Try the Codelab to Build an LLM and RAG-based chat application using AlloyDB for PostgreSQL AI and LangChain. Try Generative AI document summarization. Read about Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Read about Retrieval-Augmented Generation for Large Language Models. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
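The cost optimization guidance earlier in this document notes that BigQuery lets you estimate the cost of queries before you run them. The following Python sketch shows one way to do that with a dry-run query by using the BigQuery client library; the project, dataset, and table names are placeholders, and the bytes scanned are what drive on-demand query cost.

```python
# Sketch: estimate how much data a query would scan before running it.
# The table name is a placeholder for an analytics table, such as one that
# stores responses and evaluation scores loaded by this architecture.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
query = """
    SELECT response_text, evaluation_score
    FROM `my-project.rag_analytics.responses`   -- placeholder table
    WHERE evaluation_score < 0.5
"""

dry_run_job = client.query(query, job_config=job_config)
gib_processed = dry_run_job.total_bytes_processed / 1024**3
print(f"This query would process about {gib_processed:.2f} GiB.")
```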
ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Andrew Brook | Engineering DirectorAnna Berenberg | Engineering FellowAssaf Namer | Principal Cloud Security ArchitectBalachandar Krishnamoorthy | Principal Software EngineerDaniel Lees | Cloud Security ArchitectDerek Downey | Developer Relations EngineerEran Lewis | Senior Product ManagerGeoffrey Anderson | Product ManagerGleb Otochkin | Cloud Advocate, DatabasesHamsa Buvaraghan | AI Product ManagerIrina Sigler | Product ManagerJack Wotherspoon | Software EngineerJason Davenport | Developer AdvocateJordan Totten | Customer EngineerJulia Wiesinger | Product ManagerKara Greenfield | Customer EngineerKurtis Van Gent | Staff Software EngineerPer Jacobsson | Software EngineerPranav Nambiar | DirectorRichard Hendricks | Architecture Center StaffSafiuddin Khaja | Cloud EngineerSandy Ghai | Group Product ManagerVladimir Vuskovic | Product Management DirectorSteren Giannini | Group Product ManagerWietse Venema | Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Infrastructure_for_a_RAG-capable_generative_AI_application_using_Vertex_AI_and_Vector_Search.txt b/Infrastructure_for_a_RAG-capable_generative_AI_application_using_Vertex_AI_and_Vector_Search.txt new file mode 100644 index 0000000000000000000000000000000000000000..7afcfeb04550d2cd09cfc8d0ca284c92d92e42f7 --- /dev/null +++ b/Infrastructure_for_a_RAG-capable_generative_AI_application_using_Vertex_AI_and_Vector_Search.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/gen-ai-rag-vertex-ai-vector-search +Date Scraped: 2025-02-23T11:45:56.577Z + +Content: +Home Docs Cloud Architecture Center Send feedback Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-06 UTC This document provides a reference architecture that you can use to design the infrastructure for a generative AI application with retrieval-augmented generation (RAG) by using Vector Search. Vector Search is a fully managed Google Cloud service that provides optimized serving infrastructure for very large-scale vector-similarity matching. The intended audience for this document includes architects, developers, and administrators of generative AI applications. The document assumes a basic understanding of AI, machine learning (ML), and large language model (LLM) concepts. This document doesn't provide guidance about how to design and develop a generative AI application. Note: Depending on your requirements for application hosting and AI infrastructure, you can consider other architectural approaches for your RAG-capable generative AI application. For more information, see the "Design alternatives" section later in this document. Architecture The following diagram shows a high-level view of the architecture that this document presents: The architecture in the preceding diagram has two subsystems: data ingestion and serving. The data ingestion subsystem ingests data that's uploaded from external sources. The subsystem prepares the data for RAG and interacts with Vertex AI to generate embeddings for the ingested data and to build and update the vector index. The serving subsystem contains the generative AI application's frontend and backend services. The frontend service handles the query-response flow with application users and forwards queries to the backend service. 
The backend service uses Vertex AI to generate query embeddings, perform vector-similarity search, and apply Responsible AI safety filters and system instructions. The following diagram shows a detailed view of the architecture: The following sections describe the data flow within each subsystem of the preceding architecture diagram. Data ingestion subsystem The data ingestion subsystem ingests data from external sources and prepares the data for RAG. The following are the steps in the data-ingestion and preparation flow: Data is uploaded from external sources to a Cloud Storage bucket. The external sources might be applications, databases, or streaming services. When data is uploaded to Cloud Storage, a message is published to a Pub/Sub topic. When the Pub/Sub topic receives a message, it triggers a Cloud Run job. The Cloud Run job parses the raw data, formats it as required, and divides it into chunks. The Cloud Run job uses the Vertex AI Embeddings API to create embeddings of the chunks by using an embedding model that you specify. Vertex AI supports text and multimodal embedding models. The Cloud Run job builds a Vector Search index of the embeddings and then deploys the index. When new data is ingested, the preceding steps are performed for the new data and the index is updated using streaming updates. When the serving subsystem processes user requests, it uses the Vector Search index for vector-similarity search. The next section describes the serving flow. Serving subsystem The serving subsystem handles the query-response flow between the generative AI application and its users. The following are the steps in the serving flow: A user submits a natural-language query to a Cloud Run service that provides a frontend interface (such as a chatbot) for the generative AI application. The frontend service forwards the user query to a backend Cloud Run service. The backend service processes the query by doing the following: Converts the query to embeddings by using the same embeddings model and parameters that the data ingestion subsystem uses to generate embeddings of the ingested data. Retrieves relevant grounding data by performing a vector-similarity search for the query embeddings in the Vector Search index. Constructs an augmented prompt by combining the original query with the grounding data. Sends the augmented prompt to an LLM that's deployed on Vertex AI. The LLM generates a response. For each prompt, Vertex AI applies the Responsible AI safety filters that you've configured and then sends the filtered response and AI safety scores to the Cloud Run backend service. The application sends the response to the user through the Cloud Run frontend service. You can store and view logs of the query-response activity in Cloud Logging, and you can set up logs-based monitoring by using Cloud Monitoring. You can also load the generated responses into BigQuery for offline analytics. The Vertex AI prompt optimizer helps you improve prompts at scale, both during initial prompt design and for ongoing prompt tuning. The prompt optimizer evaluates your model's response to a set of sample prompts that ML engineers provide. The output of the evaluation includes the model's responses to the sample prompts, scores for metrics that the ML engineers specify, and a set of optimized system instructions that you can consider using. 
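To illustrate the retrieval step that the backend service performs in the serving flow above, the following Python sketch uses the Vertex AI SDK to convert a query to an embedding and look up its nearest neighbors in a deployed Vector Search index. This is a minimal sketch under several assumptions: the project ID, region, index endpoint resource name, deployed index ID, and embedding model name are placeholders, and the mapping from neighbor IDs back to source text chunks depends on how your ingestion job stores that data.

```python
# Sketch of the backend retrieval step. Resource names, IDs, and the
# embedding model are placeholders; use the same embedding model and
# parameters that the data ingestion subsystem used.
import vertexai
from google.cloud import aiplatform
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="my-project", location="us-central1")  # placeholders


def retrieve_context(query: str, num_neighbors: int = 5) -> list[str]:
    # 1. Embed the user query.
    model = TextEmbeddingModel.from_pretrained("text-embedding-004")
    query_embedding = model.get_embeddings([query])[0].values

    # 2. Vector-similarity search against the deployed Vector Search index.
    endpoint = aiplatform.MatchingEngineIndexEndpoint(
        index_endpoint_name=(
            "projects/my-project/locations/us-central1/indexEndpoints/1234567890"
        )
    )
    neighbors = endpoint.find_neighbors(
        deployed_index_id="rag_deployed_index",  # placeholder
        queries=[query_embedding],
        num_neighbors=num_neighbors,
    )

    # 3. Return the matched datapoint IDs; the application maps these IDs
    #    back to the stored text chunks to build the augmented prompt.
    return [neighbor.id for neighbor in neighbors[0]]
```

The backend service then combines the retrieved chunks with the original query into an augmented prompt and sends it to the LLM, as described in the serving flow above.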
Products used This reference architecture uses the following Google Cloud products: Vertex AI: An ML platform that lets you train and deploy ML models and AI applications, and customize LLMs for use in AI-powered applications. Vector Search: A vector similarity-matching service that lets you store, index, and search semantically similar or related data. Cloud Run: A serverless compute platform that lets you run containers directly on top of Google's scalable infrastructure. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Pub/Sub: An asynchronous and scalable messaging service that decouples services that produce messages from services that process those messages. Cloud Logging: A real-time log management system with storage, search, analysis, and alerting. Cloud Monitoring: A service that provides visibility into the performance, availability, and health of your applications and infrastructure. BigQuery: An enterprise data warehouse that helps you manage and analyze your data with built-in features like machine learning geospatial analysis, and business intelligence. Use cases RAG is an effective technique to improve the quality of output that's generated from an LLM. This section provides examples of use cases for which you can use RAG-capable generative AI applications. Personalized product recommendations An online shopping site might use an LLM-powered chatbot to assist customers with finding products or getting shopping-related help. The questions from a user can be augmented by using historical data about the user's buying behavior and website interaction patterns. The data might include user reviews and feedback that's stored in an unstructured datastore or search-related metrics that are stored in a web analytics data warehouse. The augmented question can then be processed by the LLM to generate personalized responses that the user might find more appealing and compelling. Clinical assistance systems Doctors in hospitals need to quickly analyze and diagnose a patient's health condition to make decisions about appropriate care and medication. A generative AI application that uses a medical LLM like Med-PaLM can be used to assist doctors in their clinical diagnosis process. The responses that the application generates can be grounded in historical patient records by contextualizing the doctors' prompts with data from the hospital's electronic health record (EHR) database or from an external knowledge base like PubMed. Efficient legal research Generative AI-powered legal research lets lawyers quickly query large volumes of statutes and case laws to identify relevant legal precedents or summarize complex legal concepts. The output of such research can be enhanced by augmenting a lawyer's prompts with data that's retrieved from the law firm's proprietary corpus of contracts, past legal communication, and internal case records. This design approach ensures that the generated responses are relevant to the legal domain that the lawyer specializes in. Design alternatives This section presents alternative design approaches that you can consider for your RAG-capable generative AI application in Google Cloud. 
AI infrastructure alternatives If you want to take advantage of the vector store capabilities of a fully managed Google Cloud database like AlloyDB for PostgreSQL or Cloud SQL for your RAG application, then see Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL. If you want to rapidly build and deploy RAG-capable generative AI applications by using open source tools and models Ray, Hugging Face, and LangChain, see Infrastructure for a RAG-capable generative AI application using Google Kubernetes Engine (GKE). Application hosting options In the architecture that's shown in this document, Cloud Run is the host for the generative AI application services and the data processing job. Cloud Run is a developer-focused, fully managed application platform. If you need greater configuration flexibility and control over the compute infrastructure, you can deploy your application to GKE clusters or to Compute Engine VMs. The decision of whether to use Cloud Run, GKE, or Compute Engine as your application host involves trade-offs between configuration flexibility and management effort. With the serverless Cloud Run option, you deploy your application to a preconfigured environment that requires minimal management effort. With Compute Engine VMs and GKE containers, you're responsible for managing the underlying compute resources, but you have greater configuration flexibility and control. For more information about choosing an appropriate application hosting service, see the following documents: Is my app a good fit for Cloud Run? Select a managed container runtime environment Hosting Applications on Google Cloud Other options For information about other infrastructure options, supported models, and grounding techniques that you can use for generative AI applications in Google Cloud, see Choose models and infrastructure for your generative AI application. Design considerations This section describes design factors, best practices, and design recommendations that you should consider when you use this reference architecture to develop a topology that meets your specific requirements for security, reliability, cost, and performance. The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud and third-party products and features that you use, there might be additional design factors and trade-offs that you should consider. Security, compliance, and privacy This section describes design considerations and recommendations to design a topology in Google Cloud that meets the security and compliance requirements of your workloads. Product Design considerations and recommendations Vertex AI Security controls: Vertex AI supports Google Cloud security controls that you can use to meet your requirements for data residency, data encryption, network security, and access transparency. For more information, see Security controls for Vertex AI and Security controls for Generative AI. Model access: You can set up organization policies to limit the type and versions of LLMs that can be used in a Google Cloud project. For more information, see Control access to Model Garden models. Shared responsibility: Vertex AI secures the underlying infrastructure and provides tools and security controls to help you protect your data, code, and models. For more information, see Vertex AI shared responsibility. 
Data protection: Use the Cloud Data Loss Prevention API to discover and de-identify sensitive data, such as personally identifiable information (PII), in the prompts and responses and in log data. For more information, see this video: Protecting sensitive data in AI apps. Cloud Run Ingress security (frontend service): To control external access to the application, disable the default run.app URL of the frontend Cloud Run service and set up a regional external Application Load Balancer. Along with load-balancing incoming traffic to the application, the load balancer handles SSL certificate management. For added protection, you can use Google Cloud Armor security policies to provide request filtering, DDoS protection, and rate limiting for the service. Ingress security (backend service): The Cloud Run service for the application's backend in this architecture doesn't need access from the internet. To ensure that only internal clients can access the service, set the ingress parameter to internal. For more information, see Restrict network ingress for Cloud Run. Data encryption: By default, Cloud Run encrypts data by using a Google-owned and Google-managed encryption key. To protect your containers by using a key that you control, you can use customer-managed encryption keys (CMEK). For more information, see Using customer managed encryption keys. Container image security: To ensure that only authorized container images are deployed to the Cloud Run jobs and services, you can use Binary Authorization. Data residency: Cloud Run helps you to meet data residency requirements. Cloud Run container instances run within the region that you select. For more guidance about container security, see General Cloud Run development tips. Cloud Storage Data encryption: By default, the data that's stored in Cloud Storage is encrypted using Google-owned and Google-managed encryption keys. If required, you can use CMEKs or your own keys that you manage by using an external management method like customer-supplied encryption keys (CSEKs). For more information, see Data encryption options. Access control: Cloud Storage supports two methods for controlling user access to your buckets and objects: Identity and Access Management (IAM) and access control lists (ACLs). In most cases, we recommend using IAM, which lets you grant permissions at the bucket and project levels. For more information, see Overview of access control. Data protection: The data that you load into the data ingestion subsystem through Cloud Storage might include sensitive data. To protect such data, you can use Sensitive Data Protection to discover, classify, and de-identify the data. For more information, see Using Sensitive Data Protection with Cloud Storage. Network control: To mitigate the risk of data exfiltration from Cloud Storage, you can create a service perimeter by using VPC Service Controls. Data residency: Cloud Storage helps you to meet data residency requirements. Data is stored or replicated within the regions that you specify. Pub/Sub Data encryption: By default, Pub/Sub encrypts all messages, both at rest and in transit, by using Google-owned and Google-managed encryption keys. Pub/Sub supports the use of CMEKs for message encryption at the application layer. For more information, see Configure message encryption. Data residency: If you have data residency requirements, in order to ensure that message data is stored in specific locations, you can configure message storage policies. 
Cloud Logging Administrative activity audit: Logging of administrative activity is enabled by default for all of the Google Cloud services that are used in this reference architecture. You can access the logs through Cloud Logging and use the logs to monitor API calls or other actions that modify the configuration or metadata of Google Cloud resources. Data access audit: Logging of data access events is enabled by default for BigQuery. For the other services that are used in this architecture, you can enable Data Access audit logs. You can use these logs to monitor the following: API calls that read the configuration or metadata of resources. User requests to create, modify, or read user-provided resource data. Security of log data: Google doesn't access or use the data in Cloud Logging. Data residency: To help meet data residency requirements, you can configure Cloud Logging to store log data in the region that you specify. For more information, see Regionalize your logs. All of the products in the architecture Mitigate data exfiltration risk: To reduce the risk of data exfiltration, create a VPC Service Controls perimeter around the infrastructure. VPC Service Controls supports all of the services that are used in this reference architecture. Post-deployment optimization: After you deploy your application in Google Cloud, use the Active Assist service to get recommendations that can help you to further optimize the security of your cloud resources. Review the recommendations and apply them as appropriate for your environment. For more information, see Find recommendations in Recommendation Hub. Access control: Follow the principle of least privilege for every cloud service. For general guidance regarding security for AI and ML deployments in Google Cloud, see the following resources: (Blog) Introducing Google's Secure AI Framework (Documentation) AI and ML security perspective in the Google Cloud Architecture Framework (Documentation) Vertex AI shared responsibility (Whitepaper) Generative AI, Privacy, and Google Cloud (Video) Protecting sensitive data in AI apps Reliability This section describes design considerations and recommendations to build and operate reliable infrastructure for your deployment in Google Cloud. Product Design considerations and recommendations Vector Search Query scaling: To make sure that the Vector Search index can handle increases in query load, you can configure autoscaling for the index endpoint. When the query load increases, the number of nodes is increased automatically up to the maximum that you specify. For more information, see Enable autoscaling. Cloud Run Robustness to infrastructure outages: Cloud Run is a regional service. Data is stored synchronously across multiple zones within a region. Traffic is automatically load-balanced across the zones. If a zone outage occurs, Cloud Run continues to run and data isn't lost. If a region outage occurs, Cloud Run stops running until Google resolves the outage. Failure handling: Individual Cloud Run jobs or tasks might fail. To handle such failures, you can use task retries and checkpointing. For more information, see Jobs retries and checkpoints best practices. Cloud Storage Data availability: You can create Cloud Storage buckets in one of three location types: regional, dual-region, or multi-region. Data that's stored in regional buckets is replicated synchronously across multiple zones within a region. 
For higher availability, you can use dual-region or multi-region buckets, where data is replicated asynchronously across regions. Pub/Sub Rate control: To avoid errors during periods of transient spikes in message traffic, you can limit the rate of publish requests by configuring flow control in the publisher settings. Failure handling: To handle failed publish attempts, adjust the retry-request variables as necessary. For more information, see Retry requests. BigQuery Robustness to infrastructure outages: Data that you load into BigQuery is stored synchronously in two zones within the region that you specify. This redundancy helps to ensure that your data isn't lost when a zone outage occurs. For more information about reliability features in BigQuery, see Understand reliability. All of the products in the architecture Post-deployment optimization: After you deploy your application in Google Cloud, use the Active Assist service to get recommendations to further optimize the reliability of your cloud resources. Review the recommendations and apply them as appropriate for your environment. For more information, see Find recommendations in Recommendation Hub. For reliability principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Reliability in the Architecture Framework. Cost optimization This section provides guidance to optimize the cost of setting up and operating a Google Cloud topology that you build by using this reference architecture. Product Design considerations and recommendations Vector Search Billing for Vector Search depends on the size of your index, queries per second (QPS), and the number and machine type of the nodes that you use for the index endpoint. For high-QPS workloads, batching the queries can help to reduce cost. For information about how you can estimate Vector Search cost, see Vector Search pricing examples. To improve the utilization of the compute nodes on which the Vector Search index is deployed, you can configure autoscaling for the index endpoint. When demand is low, the number of nodes is reduced automatically to the minimum that you specify. For more information, see Enable autoscaling. Cloud Run When you create Cloud Run jobs and services, you specify the amount of memory and CPU to be allocated to the container instance. To control costs, start with the default (minimum) CPU and memory allocations. To improve performance, you can increase the allocation by configuring the CPU limit and memory limit. For more information, see the following documentation: Configure memory limits for services Configure CPU limits for services Configure memory limits for jobs Configure CPU limits for jobs If you can predict the CPU and memory requirements of your Cloud Run jobs and services, then you can save money by getting discounts for committed usage. For more information, see Cloud Run committed use discounts. Cloud Storage For the Cloud Storage bucket that you use to load data into the data ingestion subsystem, choose an appropriate storage class. When you choose the storage class, consider the data-retention and access-frequency requirements of your workloads. For example, to control storage costs, you can choose the Standard class and use Object Lifecycle Management. Doing so enables automatic downgrade of objects to a lower-cost storage class or deletion of objects based on conditions that you set. 
Cloud Logging To control the cost of storing logs, you can do the following: Reduce the volume of logs by excluding or filtering unnecessary log entries. For more information, see Exclusion filters. Reduce the period for which log entries are retained. For more information, see Configure custom retention. BigQuery BigQuery lets you estimate the cost of queries before you run them. To optimize query costs, you need to optimize storage and query computation. For more information, see Estimate and control costs. All of the products in the architecture After you deploy your application in Google Cloud, use the Active Assist service to get recommendations to further optimize the cost of your cloud resources. Review the recommendations and apply them as appropriate for your environment. For more information, see Find recommendations in Recommendation Hub. To estimate the cost of your Google Cloud resources, use the Google Cloud Pricing Calculator. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Performance optimization This section describes design considerations and recommendations to design a topology in Google Cloud that meets the performance requirements of your workloads. Product Design considerations and recommendations Vector Search When you create the index, set the shard size, distance measure type, and number of embeddings for each leaf node based on your performance requirements. For example, if your application is extremely sensitive to latency variability, we recommend a large shard size. For more information, see Configuration parameters that affect performance. When you configure the compute capacity of the nodes on which the Vector Search index is deployed, consider your requirements for performance. Choose an appropriate machine type and set the maximum number of nodes based on the query load that you expect. For more information, see Deployment settings that affect performance. Configure the query parameters for the Vector Search index based on your requirements for query performance, availability, and cost. For example, the approximateNeighborsCount parameter specifies the number of neighbors that must be retrieved before exact reordering is performed. Decreasing the value of this parameter can help to reduce latency and cost. For more information, see Query-time settings that affect performance. An index that's up-to-date helps to improve the accuracy of the generated responses. You can update your Vector Search index by using batch or streaming updates. Streaming updates let you perform near real-time queries on updated data. For more information, see Update and rebuild an active index. Cloud Run By default, each Cloud Run container instance is allocated one CPU and 512 MiB of memory. Depending on the performance requirements, you can configure the CPU limit and the memory limit. For more information, see the following documentation: Configure memory limits for services Configure CPU limits for services Configure memory limits for jobs Configure CPU limits for jobs To ensure optimal latency even after a period of no traffic, you can configure a minimum number of instances. When such instances are idle, the CPU and memory that are allocated to the instances are billed at a lower price. For more performance optimization guidance, see General Cloud Run development tips.
Cloud Storage To upload large files, you can use a method called parallel composite uploads. With this strategy, the large file is split into chunks. The chunks are uploaded to Cloud Storage in parallel and then the data is recomposed in the cloud. When network bandwidth and disk speed aren't limiting factors, parallel composite uploads can be faster than regular upload operations. However, this strategy has some limitations and cost implications. For more information, see Parallel composite uploads. BigQuery BigQuery provides a query execution graph that you can use to analyze query performance and get performance insights for issues like slot contention and insufficient shuffle quota. For more information, see Get query performance insights. After you address the issues that you identify through query performance insights, you can further optimize queries by using techniques like reducing the volume of input and output data. For more information, see Optimize query computation. All of the products in the architecture After you deploy your application in Google Cloud, use the Active Assist service to get recommendations to further optimize the performance of your cloud resources. Review the recommendations and apply them as appropriate for your environment. For more information, see Find recommendations in Recommendation Hub. For performance optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Performance optimization in the Architecture Framework. What's next Choose models and infrastructure for your generative AI application Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL Infrastructure for a RAG-capable generative AI application using GKE For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
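The reliability and performance guidance earlier in this document recommends enabling autoscaling for the Vector Search index endpoint and choosing deployment settings, such as the machine type and replica counts, based on the query load that you expect. The following Python sketch shows one way to deploy an index to an endpoint with autoscaling bounds by using the Vertex AI SDK; the resource names, machine type, and replica counts are placeholder values that you would size for your own workload.

```python
# Sketch: deploy a Vector Search index to an endpoint with autoscaling
# bounds. Resource names, machine type, and replica counts are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

index = aiplatform.MatchingEngineIndex(
    index_name="projects/my-project/locations/us-central1/indexes/9876543210"
)
endpoint = aiplatform.MatchingEngineIndexEndpoint(
    index_endpoint_name=(
        "projects/my-project/locations/us-central1/indexEndpoints/1234567890"
    )
)

# With min and max replica counts set, the endpoint scales the number of
# nodes with query load, within the bounds that you specify.
endpoint.deploy_index(
    index=index,
    deployed_index_id="rag_deployed_index",
    machine_type="e2-standard-16",
    min_replica_count=2,
    max_replica_count=10,
)
```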
ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Assaf Namer | Principal Cloud Security ArchitectDeepak Michael | Networking Specialist Customer EngineerDivam Anand | Product Strategy and Operations LeadEran Lewis | Senior Product ManagerJerome Simms | Director, Product ManagementMark Schlagenhauf | Technical Writer, NetworkingNicholas McNamara | Product and Commercialization Strategy PrincipalPreston Holmes | Outbound Product Manager - App AccelerationRob Edwards | Technology Practice Lead, DevOpsVictor Moreno | Product Manager, Cloud NetworkingWietse Venema | Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Integration_Services.txt b/Integration_Services.txt new file mode 100644 index 0000000000000000000000000000000000000000..3e1ad18cc7195a63309001489213765d23cb62c3 --- /dev/null +++ b/Integration_Services.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/integration-services +Date Scraped: 2025-02-23T12:05:14.941Z + +Content: +Integration servicesIntegrate your applications, data, and processes with Google Cloud.Get startedUnlock the value of existing applications and dataConfigure and connect enterprise applications like Salesforce, databases like MySQL, and event-driven systems like Pub/Sub.Deploy and execute processes that connect a series of Google Cloud services together.Build a real-time intelligence platform for your business by unifying stream and batch data processing. Our solutionsGoogle Cloud's integration services consist of application and data integration products.CategoryProductDocumentationService and Application IntegrationApplication IntegrationConnect to third-party applications and enable data consistency without code.Create an integration with an API triggerDeveloper ConnectConnect to third-party source code management platforms.Connect to a GitHub source code repositoryWorkflowsCombine Google Cloud services and APIs to build applications and data pipelines.Create a workflowApigeeManage your APIs with enhanced scale, security, and automation.Build your first API proxyEventarcBuild an event-driven architecture that can connect any service.Receive direct events from Cloud StorageCloud TasksEnqueue requests to protect downstream services and control delivery flow rates.Add a task to a Cloud Tasks queueSchedulerSchedule any job using a fully managed cron job service.Schedule and run a cron jobData integrationDataflowUnified stream and batch data processing that's serverless, fast, and cost-effective.Create a Dataflow pipeline using PythonDataprocProcess and integrate Hadoop/Spark workloads.Create a Dataproc cluster by using a templateCloud Data FusionManage and develop ETL/ELT pipelines.Create a data pipelineCloud ComposerRun a fully managed Apache Airflow service at scale to build data pipelines.Run an Apache Airflow DAG in Cloud ComposerDatastreamStreaming data ingestion and replication that's serverless and easy to use.Replicate data to BigQuery in near-real timeCross-categoryPub/SubIngest data without any transforms to get raw data into Google Cloud.Publish and receive messages in Pub/Sub by using a client libraryOur solutionsService and Application IntegrationApplication IntegrationConnect to third-party applications and enable data consistency without code.Create an integration with an API triggerData integrationDataflowUnified stream and batch data processing that's serverless, fast, and cost-effective.Create a Dataflow pipeline using PythonCross-categoryPub/SubIngest data without any 
transforms to get raw data into Google Cloud.Publish and receive messages in Pub/Sub by using a client libraryKey benefits Simplify integration operations and managementFocus on building business and IT processes instead of building connectors and managing server clusters, as Google Cloud removes operational overhead from data and application workloads.Release new processes and apps fasterAdd new services to applications without re-architecting your entire system. Build a standardized foundational mesh of real-time data, applications, and services that can power innovation. Leverage integration services fit for purposeBuild integrations using the solutions that best fit your business needs from a large portfolio of data, service, and application integration tools that offer virtually limitless capacity to manage your workloads.Increase integration developer productivityConnect data and applications sources to seamlessly build integrations without siloed and overlapping development. Increase productivity across your organization by standardizing integrations.Learn moreCase studyDow Jones brings key historical events datasets to life with Dataflow5-min readBlog postBuild an event-driven orchestration with Eventarc and Workflows5-min readBlog postThe next generation of Dataflow: Dataflow Prime, Dataflow Go, and Dataflow ML5-min readBlog postNow available to try: Application Integration and Integration Connectors - Preview5-min readMore stories from our blogWith Google Cloud’s Application Integration and API Management, we're planning to facilitate our API integration approach by connecting, securing, and managing the multitude of data and applications required to support digital experiences at ATB Financial.Innes Holman, VP Tech Strategy and Architecture, ATB FinancialTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Get startedWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all products \ No newline at end of file diff --git a/Interservice_communication_in_a_microservices_setup.txt b/Interservice_communication_in_a_microservices_setup.txt new file mode 100644 index 0000000000000000000000000000000000000000..4e963f94c578c28a7304fcbfb7ffda8a1a498684 --- /dev/null +++ b/Interservice_communication_in_a_microservices_setup.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/microservices-architecture-interservice-communication +Date Scraped: 2025-02-23T11:46:53.215Z + +Content: +Home Docs Cloud Architecture Center Send feedback Interservice communication in a microservices setup Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document is the third in a four-part series about designing, building, and deploying microservices. This series describes the various elements of a microservices architecture. The series includes information about the benefits and drawbacks of the microservices architecture pattern, and how to apply it. Introduction to microservices Refactor a monolith into microservices Interservice communication in a microservices setup (this document) Distributed tracing in a microservices application This series is intended for application developers and architects who design and implement the migration to refactor a monolith application to a microservices application.
This document describes the tradeoffs between asynchronous messaging compared to synchronous APIs in microservices. The document walks you through deconstruction of a monolithic application and shows you how to convert a synchronous request in the original application to asynchronous flow in the new microservices-based setup. This conversion includes implementing distributed transactions between services. Example application In this document, you use a prebuilt ecommerce application called Online Boutique. The application implements basic ecommerce flows such as browsing items, adding products to a cart, and checkout. The application also features recommendations and ads based on user selection. Logical separation of service In this document, you isolate the payment service from the rest of the application. All flows in the original Online Boutique application are synchronous. In the refactored application, the payment process is converted to an asynchronous flow. Therefore, when you receive a purchase request, instead of processing it immediately, you provide a "request received" confirmation to the user. In the background, an asynchronous request is triggered to the payment service to process the payment. Before you move payment data and logic into a new service, you isolate the payment data and logic from the monolith. When you isolate the payment data and logic in the monolith, it's easier to refactor your code in the same codebase if you get payment service boundaries wrong (business logic or data). The components of the monolith application in this document are already modularized, so they're isolated from each other. If your application has tighter interdependencies, you need to isolate the business logic and create separate classes and modules. You also need to decouple any database dependencies into their own tables and create separate repository classes. When you decouple database dependencies, there can be foreign key relationships between the split tables. However, after you completely decouple the service from the monolith, these dependencies cease to exist, and the service interacts exclusively through predefined API or RPC contracts. Distributed transactions and partial failures After you isolate the service and split it from the monolith, a local transaction in the original monolithic system is distributed among multiple services. In the monolith implementation, the checkout process followed the sequence shown in the following diagram. Figure 1. A checkout process sequence in a monolith implementation. In figure 1, when the application receives a purchase order, the checkout controller calls the payment service and order service to process payment and save the order respectively. If any step fails, the database transaction can be rolled back. Consider an example scenario in which the order request is successfully stored in the order table, but the payment fails. In this scenario, the entire transaction is rolled back and the entry is removed from the order table. After you decouple payment into its own service, the modified checkout flow is similar to the following diagram: Figure 2. A checkout process sequence after payment is decoupled into its own service. In figure 2, the transaction now spans multiple services and their corresponding databases, so it's a distributed transaction. On receiving an order request, the checkout controller saves the order details in its local database and calls other services to complete the order. 
These services, such as the payment service, can use their own local database to store details about the order. In the monolithic application, the database system ensures that the local transactions are atomic. However, by default, the microservice-based system that has a separate database for each service doesn't have a global transaction coordinator that spans the different databases. Because transactions aren't centrally coordinated, a failure in processing a payment doesn't roll back changes that were committed in the order service. Therefore, the system is in an inconsistent state. The following patterns are commonly used to handle distributed transactions: Two-phase commit protocol (2PC): Part of a family of consensus protocols, 2PC coordinates the commit of a distributed transaction and maintains atomicity, consistency, isolation, durability (ACID) guarantees. The protocol is divided into the prepare and commit phases. A transaction is committed only if all the participants voted for it. If the participants don't reach a consensus, then the entire transaction is rolled back. Saga: The Saga pattern consists of running local transactions within each microservice that make up the distributed transaction. An event is triggered at the end of every successful or failed operation. All microservices involved in the distributed transaction subscribe to these events. If the following microservices receive a success event, they execute their operation. If there is a failure, the preceding microservices complete compensating actions to undo changes. Saga provides a consistent view of the system by guaranteeing that when all steps are complete, either all operations succeed or compensating actions undo all the work. We recommend Saga for long-lived transactions. In a microservices-based application, you expect interservice calls and communication with third-party systems. Therefore, it's best to design for eventual consistency: retry for recoverable errors and expose compensating events that eventually amend non-recoverable errors. There are various ways to implement a Saga—for example, you can use task and workflow engines such as Apache Airflow or Apache Camel. You can also write your own event handlers using systems based on Kafka, RabbitMQ, or ActiveMQ. The Online Boutique application uses the checkout service to orchestrate the payment, shipping, and email notification services. The checkout service also handles the business and order workflow. As an alternative to building your own workflow engine, you can use a third-party component such as Zeebe. Zeebe provides a UI-based modeler. We recommend that you carefully evaluate the choices for a microservices orchestrator based on your application's requirements. This choice is a critical part of running and scaling your microservices. Refactored application To enable distributed transactions in the refactored application, the checkout service handles the communication between the payment, shipping, and email services. The generic Business Process Model and Notation (BPMN) workflow uses the following flow: Figure 3. An order workflow that helps ensure distributed transactions in typical microservices. The preceding diagram shows the following workflow: The frontend service receives an order request and then does the following: Sends the order items to the cart service. The cart service then saves the order details (Redis). Redirects to the checkout page. 
The checkout service pulls orders from the cart service, sets the order status to Pending, and asks the customer for payment. Confirms that the user paid. Once confirmed, the checkout service tells the email service to generate a confirmation email and send it to the customer. The payment service subsequently processes the request. If the payment request succeeds, the payment service updates the order status to Complete. If the payment request fails, then the payment service initiates a compensating transaction. The payment request is canceled. The checkout service changes the order status to Failed. If the payment service is unavailable, the request times out after N seconds and the checkout service initiates a compensating transaction. The checkout service changes the order status to Failed. Objectives Deploy the monolithic Online Boutique application on Google Kubernetes Engine (GKE). Validate the monolithic checkout flow. Deploy the microservices version of the refactored monolithic application. Verify that the new checkout flow works. Verify that the distributed transaction and compensation actions work if there is a failure. Costs In this document, you use the following billable components of Google Cloud: GKE Cloud SQL Container Registry To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. When you finish this document, you can avoid continued billing by deleting the resources you created. For more information, see Clean up. Before you begin In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Enable the APIs for Compute Engine, Google Kubernetes Engine, Cloud SQL, Artifact Analysis, and Container Registry: gcloud services enable \ compute.googleapis.com \ sql-component.googleapis.com \ servicenetworking.googleapis.com \ container.googleapis.com \ containeranalysis.googleapis.com \ containerregistry.googleapis.com \ sqladmin.googleapis.com Export the following environment variables: export PROJECT=$(gcloud config get-value project) export CLUSTER=$PROJECT-gke export REGION="us-central1" Deploy the ecommerce monolith In this section, you deploy the monolithic Online Boutique application in a GKE cluster. The application uses Cloud SQL as its relational database. The following diagram illustrates the monolithic application architecture: Figure 4. A client connects to the application in a GKE cluster, and the application connects to a Cloud SQL database. 
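Optionally, before you deploy, you can confirm that the required APIs are enabled and that the environment variables are set. The following commands are a minimal, illustrative check (the grep pattern is not exhaustive):
# Optional check: confirm key APIs are enabled and the variables are set.
gcloud services list --enabled | grep -E 'container|sqladmin|containerregistry'
echo "PROJECT=$PROJECT CLUSTER=$CLUSTER REGION=$REGION"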
To deploy the application, complete the following steps: Clone the GitHub repository: git clone https://github.com/GoogleCloudPlatform/monolith-to-microservices-example Replace the PROJECT_ID placeholder in the Terraform variables manifest file: cd monolith-to-microservices-example/setup && \ sed -i -e "s/\[PROJECT_ID\]/$PROJECT/g" terraform.tfvars Run the Terraform scripts to set up and deploy the infrastructure. To learn more about Terraform, see Getting started with Terraform on Google Cloud: terraform init && terraform apply -auto-approve The Terraform script creates the following: A VPC network named PROJECT_ID-vpc A GKE cluster named PROJECT_ID-gke A Cloud SQL instance named PROJECT_ID-mysql A database named ecommerce that the application uses A user root with the password set to password You can modify the Terraform script to auto-generate a password. This setup uses a simplified example that you shouldn't use in production. Infrastructure provisioning can take up to 10 minutes. When the script is successful, the output looks like the following: ... Apply complete! Resources: 8 added, 0 changed, 0 destroyed. Outputs: kubernetes_cluster_name = PROJECT_ID-gke sql_database_name = PROJECT_ID-mysql vpc_name = PROJECT_ID-vpc Connect to the cluster and create a namespace named monolith. You deploy the application in its own namespace in the GKE cluster: gcloud container clusters get-credentials $CLUSTER \ --region $REGION \ --project $PROJECT && \ kubectl create ns monolith The application running on GKE uses Kubernetes Secrets to access the Cloud SQL database. Create a secret that uses the user credentials for the database: kubectl create secret generic dbsecret \ --from-literal=username=root \ --from-literal=password=password -n monolith Build the monolith image and upload it to Container Registry: cd ~/monolith-to-microservices-example/monolith gcloud builds submit --tag gcr.io/$PROJECT/ecomm Update the reference in the deploy.yaml file to the newly created Docker image: cd ~/monolith-to-microservices-example/monolith sed -i -e "s/\[PROJECT_ID\]/$PROJECT/g" deploy.yaml Replace placeholders in deployment manifest files and then deploy the application: cd .. && \ DB_IP=$(gcloud sql instances describe $PROJECT-mysql | grep "ipAddress:" | tail -1 | awk -F ":" '{print $NF}') sed -i -e "s/\[DB_IP\]/$DB_IP/g" monolith/deploy.yaml kubectl apply -f monolith/deploy.yaml Check the status of the deployment: kubectl rollout status deployment/ecomm -n monolith The output looks like the following: Waiting for deployment "ecomm" rollout to finish: 0 of 1 updated replicas are available... deployment "ecomm" successfully rolled out Get the IP address of the deployed application: kubectl get svc ecomm -n monolith \ -o jsonpath="{.status.loadBalancer.ingress[*].ip}" -w Wait for the load balancer IP address to be published. To exit the command, press Ctrl+C. Note the load balancer IP address and then access the application at the URL http://IP_ADDRESS. It might take some time for the load balancer to become healthy and start passing traffic. Validate the monolith checkout flow In this section, you create a test order to validate the checkout flow. Go to the URL that you noted in the previous section, http://IP_ADDRESS. On the application home page that appears, select any product and then click Add to Cart. To create a test purchase, click Place your order: When checkout is successful, the order confirmation window appears and displays an Order Confirmation ID. 
To view order details, connect to the database: gcloud sql connect $PROJECT-mysql --user=root You can also use any other supported methods to connect to the database. When prompted, enter the password as password. To view saved order details, run the following command: select cart_id from ecommerce.cart; The output looks like the following: +--------------------------------------+ | cart_id | +--------------------------------------+ | 7cb9ab11-d268-477f-bf4d-4913d64c5b27 | +--------------------------------------+ Deploy the microservices-based ecommerce application In this section, you deploy the refactored application. This document focuses only on decoupling the frontend and payment services. The next document in this series, Distributed tracing in a microservices application, describes other services, such as recommendation and ads services, that you can decouple from the monolith. The checkout service handles the distributed transactions between the frontend and the payment services and is deployed as a Kubernetes service in the GKE cluster, as shown in the following diagram: Figure 5. The checkout service orchestrates transactions between the cart, payment, and email services. Deploy the microservices In this section, you deploy the microservices to a new GKE cluster: Ensure that you have the following requirements: A Google Cloud project A shell environment with gcloud, git, and kubectl In Cloud Shell, clone the microservices repository: git clone https://github.com/GoogleCloudPlatform/microservices-demo cd microservices-demo/ Set the Google Cloud project and region and ensure the GKE API is enabled: export PROJECT_ID=PROJECT_ID export REGION=us-central1 gcloud services enable container.googleapis.com \ --project=${PROJECT_ID} Replace PROJECT_ID with the ID of your Google Cloud project. Create a GKE cluster and get the credentials for it: gcloud container clusters create-auto online-boutique \ --project=${PROJECT_ID} --region=${REGION} Creating the cluster may take a few minutes. Deploy microservices to the cluster: kubectl apply -f ./release/kubernetes-manifests.yaml Wait for the pods to be ready: kubectl get pods After a few minutes, you see the Pods in a Running state. Access the web frontend in a browser using the frontend's external IP address: kubectl get service frontend-external | awk '{print $4}' Visit http://EXTERNAL_IP in a web browser to access your instance of Online Boutique. Note: It might take some time for the load balancer to become healthy and start passing traffic. Validate the new checkout flow To verify the checkout process flow, select a product and place an order, as described in the earlier section Validate the monolith checkout flow. When you complete order checkout, the confirmation window doesn't display a confirmation ID. Instead, the confirmation window directs you to check your email for confirmation details. To verify that the order was received, that the payment service processed the payment, and that the order details were updated, run the following command: kubectl logs -f deploy/checkoutservice --tail=100 The output looks like the following: [...] 
{"message":"[PlaceOrder] user_id=\"98828e7a-b2b3-47ce-a663-c2b1019774a3\" user_currency=\"CAD\"","severity":"info","timestamp":"2023-08-10T04:19:20.498893921Z"} {"message":"payment went through (transaction_id: f0b4a592-026f-4b4a-9892-ce86d2711aed)","severity":"info","timestamp":"2023-08-10T04:19:20.528338189Z"} {"message":"order confirmation email sent to \"someone@example.com\"","severity":"info","timestamp":"2023-08-10T04:19:20.540275988Z"} To exit the logs, press Ctrl+C. Verify that the payment was successful: kubectl logs -f deploy/paymentservice -n --tail=100 The output looks like the following: [...] {"severity":"info","time":1691641282208,"pid":1,"hostname":"paymentservice-65cc7795f6-r5m8r","name":"paymentservice-charge","message":"Transaction processed: visa ending 0454 Amount: CAD119.30128260"} {"severity":"info","time":1691641300051,"pid":1,"hostname":"paymentservice-65cc7795f6-r5m8r","name":"paymentservice-server","message":"PaymentService#Charge invoked with request {\"amount\":{\"currency_code\":\"USD\",\"units\":\"137\",\"nanos\":850000000},\"credit_card\":{\"credit_card_number\":\"4432-8015-6152-0454\",\"credit_card_cvv\":672,\"credit_card_expiration_year\":2039,\"credit_card_expiration_month\":1}}"} To exit the logs, press Ctrl+C. Verify that order confirmation email is sent: kubectl logs -f deploy/emailservice -n --tail=100 The output looks like the following: [...] {"timestamp": 1691642217.5026057, "severity": "INFO", "name": "emailservice-server", "message": "A request to send order confirmation email to kalani@examplepetstore.com has been received."} Note: The example app performs transactions automatically. You can therefore ignore the repeated logs of similar activities. The log messages for each microservices indicate that the distributed transaction across the checkout, payment, and email services have completed successfully. Validate compensation action in a distributed transaction This section simulates a scenario in which a customer is placing an order and the payment service goes down. To simulate the service's unavailability, delete the payment deployment and service: kubectl delete deploy paymentservice && \ kubectl delete svc paymentservice Access the application again and complete the checkout flow. In this example, if the payment service doesn't respond, the request times out and a compensation action is triggered. In the UI frontend, click the Place Order button. The output resembles the following: HTTP Status: 500 Internal Server Error rpc error: code = Internal desc = failed to charge card: could not charge the card: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp: lookup paymentservice on 34.118.224.10:53: no such host" failed to complete the order main.(*frontendServer).placeOrderHandler /src/handlers.go:360 Review the frontend service logs: kubectl logs -f deploy/frontend --tail=100 The output resembles the following: [...] 
{"error":"failed to complete the order: rpc error: code = Internal desc = failed to charge card: could not charge the card: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp: lookup paymentservice on 34.118.224.10:53: no such host\"","http.req.id":"0a4cb058-ee9b-470a-9bb1-3a965636022e","http.req.method":"POST","http.req.path":"/cart/checkout","message":"request error","session":"96c94881-a435-4490-9801-c788dc400cc1","severity":"error","timestamp":"2023-08-11T18:25:47.127294259Z"} Review the Checkout Service logs: kubectl logs -f deploy/frontend --tail=100 The output resembles the following: [...] {"message":"[PlaceOrder] user_id=\"96c94881-a435-4490-9801-c788dc400cc1\" user_currency=\"USD\"","severity":"info","timestamp":"2023-08-11T18:25:46.947901041Z"} {"message":"[PlaceOrder] user_id=\"96c94881-a435-4490-9801-c788dc400cc1\" user_currency=\"USD\"","severity":"info","timestamp":"2023-08-11T19:54:21.796343643Z"} Notice that there's no subsequent call to email service to send notification. There is no transaction log, like payment went through (transaction_id: 06f0083f-fa47-4d91-8258-6d61edfab1ca). Review the email service logs: kubectl logs -f deploy/emailservice --tail=100 Notice that there are no log entries created for the fail transaction on email service. As an orchestrator, if a service call fails, the checkout service returns an error status and exits the checkout process. Clean up If you plan to complete the steps in the next document of this series, Distributed tracing in a microservices application, you can reuse the project and resources instead of deleting them. Delete the project Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Delete the resources If you want to keep the Google Cloud project that you used in this document, delete the individual resources. In Cloud Shell, run the following command: cd setup && terraform destroy -auto-approve To delete the microservices cluster using the Google Cloud CLI, run the following command: gcloud container clusters delete online-boutique \ --location $REGION What's next Learn more about microservices architecture. Read the first document in this series to learn about microservices, their benefits, challenges, and use cases. Read the second document in this series to learn about application refactoring strategies to decompose microservices. Read the final document in this series to learn about distributed tracing of requests between microservices. 
Send feedback \ No newline at end of file diff --git a/IoT_platform_product.txt b/IoT_platform_product.txt new file mode 100644 index 0000000000000000000000000000000000000000..ee46fb6c1f8afd74a303b2bebab5671c76d6a592 --- /dev/null +++ b/IoT_platform_product.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/connected-devices/iot-platform-product-architecture +Date Scraped: 2025-02-23T11:48:10.942Z + +Content: +Home Docs Cloud Architecture Center Send feedback IoT platform product architecture on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-09 UTC IoT platform products typically provide basic MQTT and HTTPS data connectivity. They also let you provision devices, and they provide authentication and management, telemetry storage and visualization, data processing, and alerting. Organizations often use IoT platforms when a standalone MQTT broker isn't sufficient for a use case and a more complete IoT platform product is needed. An IoT platform provides a unified interface for managing a heterogeneous collection of devices. This interface is important for many connected device applications, and it's a key difference between an IoT platform and a standalone MQTT broker. This document outlines the basic architectural considerations and recommendations that you need to make before you deploy an IoT platform product architecture on Google Cloud. This document is part of a series of documents that provide information about IoT architectures on Google Cloud. The other documents in this series include the following: Connected device architectures on Google Cloud overview Standalone MQTT broker architecture on Google Cloud IoT platform product architecture on Google Cloud (this document) Best practices for running an IoT backend on Google Cloud Device on Pub/Sub architecture to Google Cloud Best practices for automatically provisioning and configuring edge and bare metal systems and servers The following diagram shows an example architecture with a generic IoT platform product running on Google Cloud. As shown in the preceding diagram, the IoT platform deploys an MQTT broker or endpoint for device connectivity. The IoT platform is connected to an external proxy Network Load Balancer to distribute traffic from the edge devices. Additional IoT applications can connect to the IoT platform through Pub/Sub, or by using the Dataflow MQTT connector. The IoT platform provides a set of device management services. As shown in the diagram, these services are as follows: Device credential store Rules engine Device authentication and authorization Device configuration management Device registry Device update management IoT platform products also generally include services such as digital twin features, low-code development interfaces, alerting and notification capabilities, and other analytics functionality. Architectural considerations and choices The following sections describe the architectural choices that you can make for an IoT platform product architecture, and the impact of these choices. Ingestion endpoints Most commercial IoT platform applications include an MQTT endpoint, and usually also an HTTPS endpoint for data ingestion from connected devices. 
MQTT An IoT platform implements an MQTT endpoint in one of the following ways: A connector between MQTT and another message service An MQTT broker that implements the full MQTT specification When you evaluate commercial IoT platforms, it's important to know which of the preceding approaches the vendor has chosen for the product so that you can determine the implications for your use case. In some cases, the MQTT endpoint only connects the MQTT clients with a backend messaging service, such as Kafka or Pub/Sub. This type of endpoint usually does not implement the complete MQTT protocol specification, and often doesn't include features such as QoS levels 1 and 2, or shared subscriptions. The advantage of this approach is that it decreases complexity in the IoT platform, because there's no separate MQTT broker application. Operational costs are lower, and maintenance is simpler than if the platform uses a separate MQTT broker. However, because of the reduced support for more advanced MQTT protocol features, this approach means that there is less flexibility and functionality for MQTT message transport than a standalone MQTT broker that implements the complete MQTT specification. Other IoT platforms provide a complete MQTT broker as part of the platform, as shown in the example architecture in this document. This broker might be one of the existing open source brokers, or a proprietary broker implementation. A full MQTT broker provides the full bidirectional MQTT capability described earlier, but a full broker can add complexity and operational costs to the management of the IoT platform. HTTPS and other supplementary protocols In addition to MQTT, many IoT platforms provide more data ingestion endpoints than those that are shown in the main architecture that this document describes. HTTPS is a common alternative protocol to MQTT for connected device use cases. It has higher overhead than MQTT, but it's more widely supported by mobile devices such as phones, and by web browsers and other applications. It's frequently used in certain connected device applications, and it's supported by open source platforms like Eclipse Hono and many commercial products. Many constrained device applications use the Constrained Application Protocol (CoAP), defined in RFC 7252, as an MQTT alternative. CoAP targets low-overhead and small-footprint clients for embedded devices and sensors. Many commercial IoT platform applications also provide a CoAP endpoint. Load balancing For more information about choosing the best load balancer for your architecture, see the load balancing section of the Standalone MQTT broker architecture on Google Cloud, because those considerations also apply to this architecture. Device authentication and credential management Management of device credentials and authentication is a key part of the operation of an IoT platform. The authentication methods supported by connected devices vary widely across applications and device form factors. It's important to select the appropriate authentication method for the target use case, and implement the chosen authentication scheme correctly. Unlike a standalone MQTT broker, an IoT platform provides integrated services to manage device identity and credentials. Most IoT platforms support X.509 client certificate authentication, JWT token-based authentication (often combined with OAuth 2.0), and username and password authentication. Some platforms also support integration with an external LDAP authentication provider. 
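As an illustration of the X.509 client certificate approach, the following sketch uses the open source mosquitto_pub client to publish a single telemetry message over an mTLS connection. The broker hostname, port, topic, and credential file names are placeholder values, and the exact topic structure and certificate requirements depend on the IoT platform that you choose:
# Publish one telemetry message over mTLS with an X.509 client certificate.
# The hostname, topic, and file names are placeholders.
mosquitto_pub \
  --host mqtt.example-iot-platform.net \
  --port 8883 \
  --cafile ca.pem \
  --cert device-001-cert.pem \
  --key device-001-key.pem \
  --topic devices/device-001/telemetry \
  --message '{"temperature": 21.5}'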
For some constrained devices, JWT or username and password authentication might be more appropriate, because these schemes require fewer resources on a connected device. When you use either JWT or username and password authentication, it's important that you encrypt the network connection separately, because, unlike mTLS authentication, neither of these authentication methods requires an encrypted connection. X.509 certificate authentication, by contrast, consumes more resources on the connected device, but is typically used in an mTLS-encrypted connection and thus provides a high level of security. Provisioning the authentication credentials on the edge device at manufacturing time is also an important part of the connected device authentication scheme, but is outside the scope of this document. For more information about authentication and credential management, see Best practices for running an IoT backend on Google Cloud. Manage connected devices Typically, connected devices publish telemetry events and state information to the platform through one of the ingestion endpoints, such as MQTT. If you're using a multi-protocol IoT platform, devices can communicate using any of the supported protocols. We recommend that your organization use an IoT platform that has the following capabilities: Software and system updates: The delivery and rollback of firmware, software, and application updates to the connected devices. These updates usually also involve storage and management of the updates themselves. Configuration updates: The delivery, storage, and rollback of updates to the configuration of applications deployed on the connected devices. Credential creation and management: The creation of new device credentials, delivery of those credentials to the connected device, auditing of device access attempts and activity, and revocation of compromised or expired credentials at the appropriate time. Rules engine and data processing: The definition and execution of data-driven rules and other data processing steps. This capability often includes some type of low-code interface for defining rules and data processing pipelines. Backend workloads Most IoT platforms provide their own internal data storage and transport capabilities that let you connect to your backend workloads and applications. AMQP, RabbitMQ, and Kafka are commonly used to provide internal data transport. These can all be connected to Pub/Sub using the Pub/Sub SDK. You can also use an integrated database system such as PostgreSQL to store data in the platform. In many cases, the IoT platform can be configured to use one of the Google Cloud data storage products directly, such as Cloud SQL, Firebase, or BigQuery. If the IoT platform has a complete MQTT broker, backend applications can also communicate with devices by using the MQTT capability of the platform. If the application supports MQTT, the application can connect with the broker as a subscriber. If there is no MQTT support, Apache Beam provides an MQTT driver, which enables bidirectional integration with Dataflow as well as other Beam deployments. Use cases The following sections describe example scenarios where an IoT platform is a better architectural choice than a standalone MQTT broker or a direct connection to Pub/Sub. Smart appliance management Applications that manage multiple smart appliances are well-suited to an IoT platform. An example of such an application is a platform to manage kitchen appliances such as dishwashers and coffee makers. 
These devices generally connect to a cloud-based application, either directly over Wi-Fi or through a local gateway that uses a Bluetooth Low Energy (BLE) or another local protocol. The management capabilities of an IoT platform are important here, for monitoring the state of each device, managing software updates and security patches, and capturing device activity to provide critical intelligence to the manufacturer and the customer. These capabilities are beyond the scope of a basic MQTT broker. At a minimum, a device information repository, a device state database, a telemetry datastore, and an analytics interface are all critical to building a successful smart appliance platform. Logistics and asset tracking For a logistics and asset tracking application, an IoT platform product offers more complete functionality than a basic MQTT broker, so is a better choice for this use case. Monitoring the current and past state and location of a large fleet of assets depends on a robust device state database and identity management system. As new assets are deployed, they need to be connected to the platform with as little friction as possible, and subsequently monitored throughout the asset lifecycle. In many cases, the application also collects other sensor information about the asset, such as local temperature, humidity and atmospheric pressure, or 3D positioning and acceleration data to detect unexpected movements or drops. All this data must be ingested and associated with the correct asset for analysis in any backend application, so the full-featured device management provided by the IoT platform is an important capability. What's next Read about connecting devices and building IoT applications on Google Cloud using Intelligent Products Essentials. Learn about practices for automatically provisioning and configuring edge and bare metal systems and servers. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Kerberized_data_lake_on_Dataproc.txt b/Kerberized_data_lake_on_Dataproc.txt new file mode 100644 index 0000000000000000000000000000000000000000..2fbe4bed35475b36820f7923f9761a1e595b473c --- /dev/null +++ b/Kerberized_data_lake_on_Dataproc.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hadoop/kerberized-data-lake-dataproc +Date Scraped: 2025-02-23T11:52:52.157Z + +Content: +Home Docs Cloud Architecture Center Send feedback Kerberized data lake on Dataproc Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-04-16 UTC This document describes the concepts, best practices, and reference architecture for the networking, authentication, and authorization of a Kerberized data lake on Google Cloud using Dataproc on-cluster Key Distribution Center (KDC) and Apache Ranger. Dataproc is Google Cloud's managed Hadoop and Spark service. This document is intended for Apache Hadoop administrators, cloud architects, and big data teams who are migrating their traditional Hadoop and Spark clusters to a modern data lake powered by Dataproc. A Kerberized data lake on Google Cloud helps organizations with hybrid and multi-cloud deployments to extend and use their existing IT investments in identity and access control management. On Google Cloud, organizations can provide their teams with as many job-scoped ephemeral clusters as needed. 
This approach removes much of the complexity of maintaining a single cluster with growing dependencies and software configuration interactions. Organizations can also create longer-running clusters for multiple users and services to access. This document shows how to use industry standard tools, such as Kerberos and Apache Ranger, to help ensure fine-grained user security (authentication, authorization, and audit) for both cluster cases on Dataproc. Customer use case Enterprises are migrating their on-premises Hadoop-based data lakes to public cloud platforms to solve the challenges they are facing managing their traditional clusters. One of these organizations, a large technology leader in Enterprise Software and Hardware, decided to migrate their on-premises Hadoop system to Google Cloud. Their on-premises Hadoop environment served the analytics needs of hundreds of teams and business units, including their cybersecurity team that had 200 data analytics team members. When one team member ran a large query with their legacy data lake, they experienced issues due to the rigid nature of their resources. The organization struggled to keep up with the analytics needs of the team using their on-premises environment, so they moved to Google Cloud. By moving to Google Cloud, the organization was able to reduce the number of issues being reported on their on-premises data lake by 25% a month. The foundation of the organization's migration plan to Google Cloud was the decision to reshape and optimize their large monolithic clusters according to teams' workloads, and shift the focus from cluster management to unlocking business value. The few large clusters were broken into smaller, cost-effective Dataproc clusters, while workloads and teams were migrated to the following types of models: Ephemeral job-scoped clusters: With only a few minutes spin-up time, the ephemeral model allows a job or a workflow to have a dedicated cluster that is shut down upon job completion. This pattern decouples storage from compute nodes by substituting Hadoop Distributed File System (HDFS) with Cloud Storage, using Dataproc's built-in Cloud Storage Connector for Hadoop. Semi-long-running clusters: When ephemeral job-scoped clusters can't serve the use case, then Dataproc clusters can be long running. When the cluster's stateful data is offloaded to Cloud Storage, the cluster can be easily shut down, and they are considered as semi-long running. Smart cluster autoscaling also allows these clusters to start small and to optimize their compute resources for specific applications. This autoscaling replaces management of YARN queues. The hybrid security challenge In the preceding customer scenario, the customer migrated their substantial data management system to the cloud. However, other parts of the organization's IT needed to remain on-premises (for example, some of the legacy operational systems that feed the data lake). The security architecture needed to help ensure the on-premises central LDAP-based identity provider (IdP) remains the authoritative source for their corporate identities using the data lake. On-premises Hadoop security is based on Kerberos and LDAP for authentication (often as part of the organization's Microsoft Active Directory (AD)) and on several other open source software (OSS) products, such as Apache Ranger. This security approach allows for fine-grained authorization and audit of users' activities and teams' activities in the data lake clusters. 
On Google Cloud, Identity and Access Management (IAM) is used to manage access to specific Google Cloud resources, such as Dataproc and Cloud Storage. This document discusses a security approach that uses the best of on-premises and OSS Hadoop security (focusing on Kerberos, corporate LDAP, and Apache Ranger) along with IAM to help secure workloads and data both inside and outside the Hadoop clusters. Architecture The following diagram shows the high-level architecture: In the preceding diagram, clients run jobs on multi-team or single-team clusters. The clusters use a central Hive metastore and Kerberos authentication with a corporate identity provider. Components The architecture proposes a combination of industry standard open source tools and IAM to authenticate and authorize the different ways to submit jobs that are described later in this document. The following are the main components that work together to provide fine-grained security of teams' and users' workloads in the Hadoop clusters: Kerberos: Kerberos is a network authentication protocol that uses secret-key cryptography to provide strong authentication for client/server applications. The Kerberos server is known as Key Distribution Center (KDC). Kerberos is widely used in on-premises systems like AD to authenticate human users, services, and machines (client entities are denoted as user principals). Enabling Kerberos on Dataproc uses the free MIT distribution of Kerberos to create an on-cluster KDC. Dataproc's on-cluster KDC serves user principals' requests to access resources inside the cluster, like Apache Hadoop YARN, HDFS, and Apache Spark (server resources are denoted as service principals). Kerberos cross-realm trust lets you connect the user principals of one realm to another. Apache Ranger: Apache Ranger provides fine-grained authorization for users to perform specific actions on Hadoop services. It also audits user access and implements administrative actions. Ranger can synchronize with an on-premises corporate LDAP server or with AD to get user and services identities. Shared Hive metastore: The Hive metastore is a service that stores metadata for Apache Hive and other Hadoop tools. Because many of these tools are built around it, the Hive metastore has become a critical component of many data lakes. In the proposed architecture, a centralized and Kerberized Hive metastore allows multiple clusters to share metadata in a secure manner. While Kerberos, Ranger, and a shared Hive metastore work together to allow fine-grained user security within the Hadoop clusters, IAM controls access to Google Cloud resources. For example, Dataproc uses the Dataproc Service Account to perform reads and writes on Cloud Storage buckets. Cluster dimensions The following dimensions characterize a Dataproc cluster: Tenancy: A cluster is multi-tenant if it serves the requests of more than one human user or service, or single-tenant if it serves the requests of a single user or service. Kerberos: A cluster can be Kerberized if you enable Kerberos on Dataproc or non-Kerberized if you don't enable Kerberos on Dataproc. Lifecycle: A cluster is ephemeral if it's created only for the duration of a specific job or workflow, contains only the resources needed to run the job, and it's shut down upon job completion. Otherwise, the cluster is considered semi-long running. Different combinations of these dimensions determine the use cases that a specific cluster is best suited for. 
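For example, the Kerberos dimension is set when you create a cluster. The following sketch creates a Kerberized cluster with internal IP addresses only; it assumes that you have already encrypted a root principal password with Cloud KMS and stored it in Cloud Storage, and all resource names and the region are illustrative:
# Create a Kerberized Dataproc cluster with internal IP addresses only.
# The bucket, key ring, key, and cluster names are illustrative.
gcloud dataproc clusters create multi-team-cluster \
  --region=us-central1 \
  --no-address \
  --enable-kerberos \
  --kerberos-root-principal-password-uri=gs://example-secrets/root-password.encrypted \
  --kerberos-kms-key=projects/example-project/locations/global/keyRings/example-ring/cryptoKeys/example-key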
This document discusses the following representative examples: The sample multi-team clusters shown in the architecture are Kerberized, multi-tenant, semi-long-running clusters. These clusters are best suited for interactive query workloads, for example, they serve long-term data analytics and business intelligence (BI) exploration. In the architecture, the clusters are located in a Google Cloud project that's shared by several teams and serves the requests of those teams, hence the name. In this document, the term team or application team describes a group of people in an organization who are working on the same software component or acting as one functional team. For example, a team might refer to backend developers of a microservice, BI analysts of a business application, or even cross-functional teams, such as Big Data infrastructure teams. The sample single-team clusters shown in the architecture are clusters that can be multi-tenant (for members of the same team) or single-tenant. As ephemeral clusters, single-team clusters can be used, for example, by data engineers to run Spark batch processing jobs or by data scientists to run model training jobs. As semi-long-running clusters, single-team clusters can serve data analytics and BI workloads that are scoped for a single team or person. The single-team clusters are located in Google Cloud projects that belong to a single team, which simplifies usage auditing, billing, and resource isolation. For example, only members of the single team can access the Cloud Storage buckets that are used for persisting the cluster's data. In this approach, application teams have dedicated projects, so the single-team clusters aren't Kerberized. We recommend that you analyze your particular requirements and choose the best dimension combinations for your situation. Submitting jobs Users can submit jobs to both types of clusters using various tools, including the following: The Dataproc API, using REST calls or client libraries. The Google Cloud CLI gcloud command-line tool in a local terminal window or from the Google Cloud console in Cloud Shell, opened in a local browser. A Dataproc Workflow Template, which is a reusable workflow configuration that defines a graph of jobs with information about where to run those jobs. If the workflow uses the managed cluster option, it uses an ephemeral cluster. Cloud Composer using the Dataproc Operator. Composer directed acyclic graphs (DAGs) can also be used to orchestrate Dataproc Workflow Templates. Opening an SSH session into the master node in the cluster, and submitting a job directly, or by using tools like Apache Beeline. This approach is usually reserved only for administrators and power users. An example of a power user is a developer who wants to troubleshoot the configuration parameters for a service and verify them by running test jobs directly on the master node. Networking The following diagram highlights the networking concepts of the architecture: In the preceding diagram, the networking architecture uses a meshed hybrid pattern, in which some resources are located on Google Cloud, and some are located on-premises. The meshed hybrid pattern uses a Shared VPC, with a common host project and separate projects for each Dataproc cluster type and team. The architecture is described in detail in the following On Google Cloud and On-premises sections. On Google Cloud On Google Cloud, the architecture is structured using a Shared VPC. 
A Shared VPC lets resources from multiple projects connect to a common VPC network. Using a common VPC network lets resources communicate with each other securely and efficiently using internal IP addresses from that network. To set up a Shared VPC, you create the following projects: Host project: The host project contains one or more Shared VPC networks used by all the service projects. Service projects: a service project contains related Google Cloud resources. A Shared VPC Admin attaches the service projects to the Host Project to allow them to use subnets and resources in the Shared VPC network. This attachment is essential for the single-team clusters to be able to access the centralized Hive metastore. As shown in the Networking diagram, we recommend creating separate service projects for the Hive metastore cluster, the multi-team clusters, and clusters for each individual team. Members of each team in your organization can then create single-team clusters within their respective projects. To allow the components within the hybrid network to communicate, you must configure firewall rules to allow the following traffic: Internal cluster traffic for Hadoop services including HDFS NameNode to communicate with HDFS DataNodes, and for YARN ResourceManager to communicate with YARN NodeManagers. We recommend using filtering with the cluster service account for these rules. External cluster traffic on specific ports to communicate with the Hive metastore (port tcp:9083,8020), on-premises KDC (port tcp:88), and LDAP (port 636), and other centralized external services that you use in your particular scenario, for example Kafka on Google Kubernetes Engine (GKE). All Dataproc clusters in this architecture are created with internal IP addresses only. To allow cluster nodes to access Google APIs and services, you must enable Private Google Access for the cluster subnets. To allow administrators and power users access to the private IP address VM instances, use IAP TCP forwarding to forward SSH, RDP, and other traffic over an encrypted tunnel. The cluster web interfaces of the cluster applications and optional components (for example Spark, Hadoop, Jupyter, and Zeppelin) are securely accessed through the Dataproc Component Gateway. The Dataproc Component Gateway is an HTTP-inverting proxy that is based on Apache Knox. On-premises This document assumes that the resources located on-premises are the corporate LDAP directory service and the corporate Kerberos Key Distribution Center (KDC) where the user and team service principals are defined. If you don't need to use an on-premises identity provider, you can simplify the setup by using Cloud Identity and a KDC on a Dataproc cluster or on a virtual machine. To communicate with the on-premises LDAP and KDC, you use either Cloud Interconnect or Cloud VPN. This setup helps ensure that communication between environments uses private IP addresses if the subnetworks in the RFC 1918 IP address don't overlap. For more information about the different connection options, see Choosing a Network Connectivity product. In a hybrid scenario, your authentication requests must be handled with minimal latency. To achieve this goal, you can use the following techniques: Serve all authentication requests for service identities from the on-cluster KDC, and only use an identity provider external to the cluster for user identities. Most of the authentication traffic is expected to be requests from service identities. 
If you're using AD as your identity provider, User Principal Names (UPNs) represent the human users and AD service accounts. We recommend that you replicate the UPNs from your on-premises AD into a Google Cloud region that is close to your data lake clusters. This AD replica handles authentication requests for UPNs, so the requests never transit to your on-premises AD. Each on-cluster KDC handles the Service Principal Names (SPNs) using the first technique. The following diagram shows an architecture that uses both techniques: In the preceding diagram, an on-premises AD synchronizes UPNs to an AD replica in a Google Cloud region. The AD replica authenticates UPNs, and an on-cluster KDC authenticates SPNs. For information about deploying AD on Google Cloud, see Deploying a fault-tolerant Microsoft Active Directory environment. For information about how to size the number of instances for domain controllers, see Integrating MIT Kerberos and Active Directory. Authentication The following diagram shows the components that are used to authenticate users in the different Hadoop clusters. Authentication lets users use services such as Apache Hive or Apache Spark. In the preceding diagram, clusters in Kerberos realms can set up cross-realm trust to use services on other clusters, such as the Hive metastore. Non-kerberized clusters can use a Kerberos client and an account keytab to use services on other clusters. Shared and secured Hive metastore The centralized Hive metastore allows multiple clusters that are running different open source query engines—such as Apache Spark, Apache Hive/Beeline, and Presto—to share metadata. You deploy the Hive metastore server on a Kerberized Dataproc cluster and deploy the Hive metastore database on a remote RDBMS, such as a MySQL instance on Cloud SQL. As a shared service, a Hive metastore cluster only serves authenticated requests. For more information about configuring the Hive metastore, see Using Apache Hive on Dataproc. Instead of Hive metastore, you can use the Dataproc Metastore, which is a fully managed, highly available (within a region), autohealing, serverless Apache Hive metastore. You can also enable Kerberos for the Dataproc Metastore service, as explained in Configuring Kerberos for a service. Kerberos In this architecture, the multi-team clusters are used for analytics purposes and they are Kerberized by following the guide to Dataproc security configuration. The Dataproc secure mode creates an on-cluster KDC and it manages the cluster's service principals and keytabs as required by the Hadoop secure mode specification. A keytab is a file that contains one or more pairs of Kerberos principals and an encrypted copy of that principal's key. A keytab allows programmatic Kerberos authentication when interactive login with the kinit command is infeasible. Access to a keytab means the ability to impersonate the principals that are contained in it. Therefore, a keytab is a highly sensitive file that needs to be securely transferred and stored. We recommend using Secret Manager to store the contents of keytabs before they are transferred to their respective clusters. For an example of how to store the contents of a keytab, see Configuring Kerberos for a service. After a keytab is downloaded to the cluster master node, the file must have restricted file access permissions. The on-cluster KDC handles the authentication requests for all services within that cluster. Most authentication requests are expected to be this type of request. 
To minimize latency, it is important for the KDC to resolve those requests without them leaving the cluster. The remaining requests are from human users and AD service accounts. The AD replica on Google Cloud or the central ID provider on-premises handles these requests, as explained in the preceding On-premises section. In this architecture, the single-team clusters aren't Kerberized, so there is no KDC present. To allow these clusters to access the shared Hive metastore, you only need to install a Kerberos client. To automate access, you can use the team's keytab. For more information, see the Identity mapping section later in this document. Kerberos cross-realm trust in multi-team clusters Cross-realm trust is highly relevant when your workloads are hybrid or multi-cloud. Cross-realm trust lets you integrate central corporate identities into shared services available in your Google Cloud data lake. In Kerberos, a realm defines a group of systems under a common KDC. Cross-realm authentication enables a user principal from one realm to authenticate in another realm and use its services. This configuration requires you to establish trust between realms. In the architecture, there are three realms: EXAMPLE.COM: is the corporate realm, where all Kerberos user principals for users, teams, and services are defined (UPNs). MULTI.EXAMPLE.COM: is the realm that contains the multi-team clusters. The cluster is preconfigured with service principals (SPNs) for the Hadoop services, such as Apache Spark and Apache Hive. METASTORE.EXAMPLE.COM: is a realm for the Hive metastore. The single-team clusters aren't Kerberized, so they don't belong to a realm. To be able to use the corporate user principals across all clusters, you establish the following unidirectional cross-realm trusts: Configure the multi-team realm and metastore realm to trust the corporate realm. With this configuration, the principals that are defined in the corporate realm can access the multi-team clusters and metastore. Although trust can be transitive, we recommend that you configure the metastore realm to have a direct trust to the corporate realm. This configuration avoids depending on the availability of the multi-team realm. Configure the metastore realm to trust the multi-team realm so that the multi-team clusters can access the metastore. This configuration is necessary to permit HiveServer2 principal access to the metastore. For more information see Getting Started with Kerberized dataproc clusters with cross-realm trust, and its corresponding Terraform config files in the GitHub repository. If you prefer a built-in, or cloud-native, IAM approach to multi-team clusters and if you don't need hybrid identity management, consider using Dataproc service-account based, secure multi-tenancy. In these clusters, multiple IAM identities can access Cloud Storage and other Google resources as different service accounts. Identity mapping in single-team clusters The preceding sections described the configuration of the Kerberized side of the architecture. However, single-team clusters aren't Kerberized, so they require a special technique to allow them to participate in this ecosystem. This technique uses the Google Cloud project separation property and IAM service accounts to isolate and help secure application teams' Hadoop workloads. As described in the preceding Networking section, each team has a corresponding Google Cloud project where it can create single-team clusters. 
Within the single-team clusters, one or more members of the team are the tenants of the cluster. This method of segregation by projects also restricts access to the Cloud Storage buckets (used by these clusters) to the respective teams. An administrator creates the service project and provisions the team service account in that project. When creating a cluster, this service account is specified as the cluster service account. The administrator also creates a Kerberos principal for the team in the corporate realm, and creates its corresponding keytab. The keytab is securely stored in Secret Manager and the administrator copies the keytab into the cluster master node. The Kerberos principal allows access from the single-team cluster to the Hive metastore. To facilitate automated provisioning and to easily recognize these pairs of related identities, the identities should follow a common naming convention—for example: Team service account: revenue-reporting-app@proj-A.iam.gserviceaccount.com Kerberos team principal: revenue_reporting/app@EXAMPLE.COM This identity-mapping technique offers a unique approach to map a Kerberos identity to a service account, both belonging to the same team. These identities are used in different ways: The Kerberos identity gives the cluster access to shared Hadoop resources, such as the Hive metastore. The service account gives the cluster access to shared Google Cloud resources, such as Cloud Storage. This technique avoids the need to create a similar mapping for each member of the team. However, because this technique uses one service account or one Kerberos principal for the entire team, actions in the Hadoop cluster can't be tracked to individual members of the team. To manage access to cloud resources in the team's project that are outside the scope of the Hadoop clusters (such as Cloud Storage buckets, managed services, and VMs), an administrator adds the team members to a Google group. The administrator can use the Google group to manage IAM permissions for the entire team to access cloud resources. When you grant IAM permissions to a group and not to individual members of the team, you simplify management of user access when members join or leave the application team. By granting direct IAM permissions to the team's Google group, you can disable impersonation to the service account, which helps to simplify traceability of actions in Cloud Audit Logs. When you define the members of your team, we recommend that you balance the level of granularity that you require for access management, resource usage, and auditing against the administrative efforts derived from those tasks. If a cluster serves strictly one human user (a single-tenant cluster), instead of a team, then consider using Dataproc Personal Cluster Authentication. Clusters with Personal Cluster Authentication enabled are intended only for short-term interactive workloads to securely run as one end-user identity. These clusters use the IAM end-user credentials to interact with other Google Cloud resources, such as Cloud Storage. The cluster authenticates as the end user, instead of authenticating as the cluster service account. In this type of cluster, Kerberos is automatically enabled and configured for secure communication within the personal cluster. Authentication flow To run a job on a Dataproc cluster, users have many options, as described in the preceding Submitting jobs section. In all cases, their user account or service account must have access to the cluster. 
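As a concrete illustration of the Identity mapping approach described in the preceding section, the following is a minimal shell sketch of how an administrator might provision a team's paired identities and store the keytab in Secret Manager. All names are hypothetical, and the kadmin commands assume an MIT Kerberos KDC for the corporate realm:

# Hypothetical names that follow the naming convention shown earlier.
TEAM_SA="revenue-reporting-app"
TEAM_PROJECT="proj-a"
TEAM_PRINCIPAL="revenue_reporting/app@EXAMPLE.COM"

# Create the team service account in the team's service project.
gcloud iam service-accounts create "${TEAM_SA}" \
    --project="${TEAM_PROJECT}" \
    --display-name="Revenue reporting team"

# On the corporate KDC, create the team principal and export its keytab
# (run by a Kerberos administrator).
kadmin -q "addprinc -randkey ${TEAM_PRINCIPAL}"
kadmin -q "ktadd -k /tmp/revenue_reporting.keytab ${TEAM_PRINCIPAL}"

# Store the keytab in Secret Manager so that it can be copied to the
# cluster master node when the cluster is provisioned.
gcloud secrets create revenue-reporting-keytab \
    --project="${TEAM_PROJECT}" \
    --replication-policy="automatic"
gcloud secrets versions add revenue-reporting-keytab \
    --project="${TEAM_PROJECT}" \
    --data-file="/tmp/revenue_reporting.keytab"

With these two identities in place, the team's Kerberos principal grants the cluster access to shared Hadoop resources such as the Hive metastore, and the team service account grants access to shared Google Cloud resources such as Cloud Storage, as described earlier.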
When a user runs a job on a multi-team cluster, there are two choices for a Kerberos principal, as shown in the following diagram: The preceding diagram shows the following options: If the user uses a Google Cloud tool—such as the Dataproc API, Cloud Composer, or gcloud command-line tool—to submit the job request, the cluster uses the Dataproc Kerberos principal to authenticate with the KDC and access the Hive metastore. If the user runs a job from an SSH session, they can submit jobs directly in that session using their own user Kerberos principal. When a user runs a job on a single-team cluster, there is only one possible Kerberos principal—the team Kerberos principal—as shown in the following diagram: In the preceding diagram, a job uses the team Kerberos principal when the job needs access to the shared Hive metastore. Otherwise, because single-team clusters are non-Kerberized, users can start jobs using their own user account. For single-team clusters, it is a good practice to run kinit for the team principal as part of an initialization action at the time of cluster creation. After cluster creation, rerun kinit on a cron schedule that matches the ticket lifetime that you define with the lifetime (-l) command option. To run kinit, the initialization action first downloads the keytab to the cluster master node. For security purposes, it is imperative that you restrict access to the keytab. Grant POSIX read permissions to only the account that runs kinit. If possible, delete the keytab and use a cron job to renew the ticket with kinit -R until it reaches the maximum ticket lifetime. At that time, the cron job can re-download the keytab, rerun kinit, and restart the renewal cycle. Both multi-team and single-team cluster types use service accounts to access other Google Cloud resources, such as Cloud Storage. Single-team clusters use the team service account as the custom cluster's service account. Authorization This section describes the coarse- and fine-grained authorization mechanisms for each cluster, as shown in the following diagram: The architecture in the preceding diagram uses IAM for coarse-grained authorization, and Apache Ranger for fine-grained authorization. These authorization methods are described in the following sections: Coarse-grained authorization and Fine-grained authorization. Coarse-grained authorization IAM lets you control user and group access to your project's resources. IAM defines permissions, and the roles that grant those permissions. There are four predefined Dataproc roles: admin, editor, viewer, and worker. These roles grant Dataproc permissions that let users and service accounts perform specific actions on clusters, jobs, operations, and workflow templates. The roles grant access to the Google Cloud resources they need to accomplish their tasks. One of these resources is Cloud Storage, which we recommend using as the cluster storage layer, instead of HDFS. The granularity of the IAM Dataproc permissions doesn't extend to the level of the services that are running on each cluster, such as Apache Hive or Apache Spark. For example, you can authorize a certain account to access data from a Cloud Storage bucket or to run jobs. However, you cannot specify which Hive columns that account is allowed to access with that job. The next section describes how you can implement that kind of fine-grained authorization in your clusters.
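Before moving on, the following is a minimal sketch of what the coarse-grained grants described above might look like with the gcloud CLI. The project, group, and bucket names are hypothetical:

# Let the team submit and manage Dataproc jobs in its own project.
gcloud projects add-iam-policy-binding proj-a \
    --member="group:revenue-reporting-team@example.com" \
    --role="roles/dataproc.editor"

# Let the team read data in its Cloud Storage bucket, which serves as
# the cluster storage layer instead of HDFS.
gcloud storage buckets add-iam-policy-binding gs://proj-a-datalake \
    --member="group:revenue-reporting-team@example.com" \
    --role="roles/storage.objectViewer"

Granting the roles to the team's Google group, rather than to individual users, follows the group-based access management recommendation from the Identity mapping section.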
Fine-grained authorization To authorize fine-grained access, you use the Dataproc Ranger optional component in the architecture described in Best practices to use Apache Ranger on Dataproc. In that architecture, Apache Ranger is installed in each of your clusters, both single and multi-team. Apache Ranger provides fine-grained access control for your Hadoop applications (authorization and audit) at the service, table, or column level. Apache Ranger uses identities that are provided by the corporate LDAP repository and defined as Kerberos principals. In multi-team clusters, the Ranger UserSync daemon periodically updates the Kerberized identities from the corporate LDAP server. In single-team clusters, only the team identity is required. Ephemeral clusters present a challenge because they shut down after their jobs are completed, but must not lose their Ranger policies and admin logs. To address this challenge, you externalize policies to a central Cloud SQL database, and externalize audit logs to Cloud Storage folders. For more information, see Best practices to use Apache Ranger on Dataproc. Authorization flow When a user wants to access one or more of the cluster services to process data, the request goes through the following flow: The user authenticates through one of the options described in the Authentication flow section. The target service receives the user request and calls the corresponding Apache Ranger plugin. The plugin periodically retrieves the policies from the Ranger Policy Server. These policies determine if the user identity is allowed to perform the requested action on the specific service. If the user identity is allowed to perform the action, then the plugin allows the service to process the request and the user gets the results. Every user interaction with a Hadoop service, whether allowed or denied, is written to Ranger logs by the Ranger Audit Server. Each cluster has its own logs folder in Cloud Storage. Dataproc also writes job and cluster logs that you can view, search, filter, and archive in Cloud Logging. In this document, the reference architectures use two types of Dataproc clusters: multi-team clusters and single-team clusters. By using multiple Dataproc clusters that can be easily provisioned and secured, an organization can use a job, product, or domain-focused approach instead of traditional, centralized clusters. This approach works well with an overall Data Mesh architecture, which lets teams fully own and secure their data lake and its workloads and offer data as a service. What's next Read about getting started with Kerberized Dataproc clusters with cross-realm trust, and its corresponding Terraform config files in the GitHub repository. Read about the best practices for using Apache Ranger on Dataproc. Read about how to set up secure data access on Dataproc for data analysts using business intelligence (BI) tools like Tableau and Looker. Read about how to move your on-premises Apache Hadoop system to Google Cloud. Read about Best practices for federating Google Cloud with an external identity provider. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
Send feedback \ No newline at end of file diff --git a/Keycloak_single_sign-on.txt b/Keycloak_single_sign-on.txt new file mode 100644 index 0000000000000000000000000000000000000000..1d43a32a4df3591d5096bb9807b071a5c99bcfa5 --- /dev/null +++ b/Keycloak_single_sign-on.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/keycloak-single-sign-on +Date Scraped: 2025-02-23T11:55:44.755Z + +Content: +Home Docs Cloud Architecture Center Send feedback Keycloak single sign-on Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This guide shows how to set up single sign-on (SSO) between Keycloak and your Cloud Identity or Google Workspace account by using SAML federation. The document assumes you have installed and are using Keycloak. Note: Keycloak does not provide built-in integration for automatically provisioning users and groups to Cloud Identity or Google Workspace. To automate user and group provisioning, you must combine Keycloak with a provisioning tool such as Google Cloud Directory Sync, which lets you provision users and groups from an LDAP server. Objectives Configure your Keycloak server so that it can be used as an identity provider (IdP) by Cloud Identity or Google Workspace. Configure your Cloud Identity or Google Workspace account so that it uses Keycloak for SSO. Before you begin If you don't have a Cloud Identity account, sign up for an account. Make sure your Cloud Identity account has super-admin privileges. If your Keycloak server is used to manage more than one realm, decide which realm you want to use for the federation. Ensure that you have admin access to the selected realm. Create a SAML profile To configure single sign-on with Keycloak, you first create a SAML profile in your Cloud Identity or Google Workspace account. The SAML profile contains the settings related to your Keycloak server, including its URL and signing certificate. You later assign the SAML profile to certain groups or organizational units. To create a new SAML profile in your Cloud Identity or Google Workspace account, do the following: In the Admin Console, go to Security > Authentication > SSO with third-party IdP. Go to SSO with third-party IdP Click Third-party SSO profiles > Add SAML profile. On the SAML SSO profile page, enter the following settings: Name: Keycloak IDP entity ID: Keycloak 17 or later https://KEYCLOAK/realms/REALM Keycloak 16 or earlier https://KEYCLOAK/auth/realms/REALM Sign-in page URL: Keycloak 17 or later https://KEYCLOAK/realms/REALM/protocol/saml Keycloak 16 or earlier https://KEYCLOAK/auth/realms/REALM/protocol/saml Sign-out page URL: Keycloak 17 or later https://KEYCLOAK/realms/REALM/protocol/openid-connect/logout Keycloak 16 or earlier https://KEYCLOAK/auth/realms/REALM/protocol/openid-connect/logout?redirect_uri=https://KEYCLOAK/auth/realms/REALM/account/ Change password URL: Keycloak 17 or later https://KEYCLOAK/realms/REALM/account Keycloak 16 or earlier https://KEYCLOAK/auth/realms/REALM/account In all URLs, replace the following: KEYCLOAK: the fully qualified domain name of your Keycloak server REALM: the name of your selected realm Don't upload a verification certificate yet. Click Save. The SAML SSO profile page that appears contains two URLs: Entity ID ACS URL You need these URLs in the next section when you configure Keycloak. Configure Keycloak You configure your Keycloak server by creating a client. 
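Optionally, before you create the client, you can confirm that the realm URLs that you entered in the SAML profile are correct by fetching the realm's SAML IdP metadata descriptor. The following is a minimal shell sketch; the host and realm values are placeholders, and the path shown assumes Keycloak 17 or later (the descriptor location can differ in older versions):

# Replace with your Keycloak host and realm.
KEYCLOAK="keycloak.example.com"
REALM="corp"

# The descriptor lists the realm's entity ID, SAML endpoints, and
# signing certificate, which you can compare against the SAML profile.
curl -s "https://${KEYCLOAK}/realms/${REALM}/protocol/saml/descriptor"

If the command returns XML metadata, the entity ID and sign-in URL in your SAML profile should match the values that it reports.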
Create a client Create a new SAML client in Keycloak: Sign in to Keycloak and open the administration console. Select the realm that you want to use for federation. In the menu, select Clients. Click Create client. Configure the following settings for the client: Keycloak 19 or later Client type: SAML Client ID: Entity ID from your SSO profile. Name: Google Cloud Keycloak 18 or earlier Client ID: Entity ID from your SSO profile. Client Protocol: saml Client SAML Endpoint: leave blank Click Save. Specify the details for the client by configuring the following settings: Keycloak 19 or later On the Settings tab: Valid Redirect URIs: ACS URL from your SSO profile Name ID Format: email Force Name ID Format: on Sign documents: off Sign Assertions: on On the Keys tab: Client Signature Required: off Keycloak 18 or earlier Name: A name such as Google Cloud Sign Assertions: on Client Signature Required: off Force Name ID Format: on Name ID Format: email Valid Redirect URIs: ACS URL from your SSO profile Keep the default values for all other settings. Click Save. Export the signing certificate After Keycloak authenticates a user, it passes a SAML assertion to Cloud Identity or Google Workspace. To enable Cloud Identity and Google Workspace to verify the integrity and authenticity of that assertion, Keycloak signs the assertion with a special token-signing key and provides a certificate that enables Cloud Identity or Google Workspace to check the signature. You now export the signing certificate from Keycloak: In the menu, select Realm settings. Select the Keys tab. Find the row for Algorithm: RS256. If there is more than one row, use the one with Use: SIG. Then select Certificate. A dialog that contains a base64-encoded certificate appears. Copy the base64-encoded certificate value to the clipboard. Before you can use the signing certificate, you must convert it into PEM format by adding a header and footer. Open a text editor such as Notepad or vim. Paste the following header, followed by a newline: -----BEGIN CERTIFICATE----- Paste the base64-encoded certificate from the clipboard. Add a newline and paste the following footer: -----END CERTIFICATE----- After the change, the file looks similar to the following: -----BEGIN CERTIFICATE----- MIICmzCCAYMCBgF7v8/V1TANBgkq... -----END CERTIFICATE----- Save the file to a temporary location on your computer. Complete the SAML profile You use the signing certificate to complete the configuration of your SAML profile: Return to the Admin Console and go to Security > Authentication > SSO with third-party IdP. Go to SSO with third-party IdP Open the Keycloak SAML profile that you created earlier. Click the IDP details section to edit the settings. Click Upload certificate and pick the token signing certificate that you downloaded previously. Click Save. Your SAML profile is complete, but you still need to assign it. Assign the SAML profile Select the users to whom the new SAML profile should apply: In the Admin Console, on the SSO with third-party IDPs page, click Manage SSO profile assignments > Manage. Go to Manage SSO profile assignments In the left pane, select the group or organizational unit to which you want to apply the SSO profile. To apply the profile to all users, select the root organizational unit. In the right pane, select Another SSO profile. In the menu, select the Keycloak - SAML SSO profile that you created earlier. Click Save. Repeat the steps to assign the SAML profile to another group or organizational unit.
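As an alternative to the manual text-editor steps in the Export the signing certificate section, the following shell sketch wraps the copied base64 value in PEM markers. The input file name is hypothetical; it assumes that you saved the clipboard contents to a file named keycloak-cert.b64:

# Wrap the base64-encoded certificate in PEM markers,
# folding the base64 body to the conventional 64-character lines.
{
  echo "-----BEGIN CERTIFICATE-----"
  fold -w 64 keycloak-cert.b64
  echo "-----END CERTIFICATE-----"
} > keycloak-cert.pem

# Optionally confirm that the result parses as an X.509 certificate.
openssl x509 -in keycloak-cert.pem -noout -subject -enddate

The resulting keycloak-cert.pem file is what you upload in the Complete the SAML profile step.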
Test single sign-on You've completed the single sign-on configuration. You can now check whether SSO works as intended. Choose a Keycloak user that satisfies the following criteria: The user has an email address. The email address corresponds to the primary email address of an existing user in your Cloud Identity or Google Workspace account. The Cloud Identity user does not have super-admin privileges. User accounts that have super-admin privileges must always sign in by using Google credentials, so they aren't suitable for testing single sign-on. Open a new browser window and go to the Google Cloud console. On the Google sign-in page, enter the email address of the user account, and then click Next. You are redirected to Keycloak. Enter your Keycloak credentials, and then click Sign in. After successful authentication, Keycloak redirects you back to the Google Cloud console. Because this is the first login for this user, you're asked to accept the Google terms of service and privacy policy. If you agree to the terms, click Accept. You are redirected to the Google Cloud console, which asks you to confirm preferences and accept the Google Cloud terms of service. If you agree to the terms, click Yes, and then click Agree and Continue. Click the avatar icon, and then click Sign out. You are redirected to Keycloak. If you have trouble signing in, keep in mind that user accounts with super-admin privileges can bypass SSO, so you can still use the Admin console to verify or change settings. Optional: Configure redirects for domain-specific service URLs When you link to the Google Cloud console from internal portals or documents, you can improve the user experience by using domain-specific service URLs. Unlike regular service URLs such as https://console.cloud.google.com/, domain-specific service URLs include the name of your primary domain. Unauthenticated users who click a link to a domain-specific service URL are immediately redirected to Keycloak instead of being shown a Google sign-in page first. Examples of domain-specific service URLs include the following: Google service URL Google Cloud console https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://console.cloud.google.com Google Docs https://docs.google.com/a/DOMAIN Google Sheets https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://sheets.google.com Google Sites https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://sites.google.com Google Drive https://drive.google.com/a/DOMAIN Gmail https://mail.google.com/a/DOMAIN Google Groups https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://groups.google.com Google Keep https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://keep.google.com Looker Studio https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://lookerstudio.google.com YouTube https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://www.youtube.com/ To configure domain-specific service URLs so that they redirect to Keycloak, do the following: In the Admin Console, on the SSO with third-party IDPs page, click Domain-specific service URLs > Edit. Go to domain-specific service URLs Set Automatically redirect users to the third-party IdP in the following SSO profile to enabled. Set SSO profile to Keycloak. Click Save. Optional: Configure login challenges Google sign-in might ask users for additional verification when they sign in from unknown devices or when their sign-in attempt looks suspicious for other reasons.
These login challenges help improve security, and we recommend that you leave them enabled. If you find that login challenges cause too much friction, you can disable login challenges by doing the following: In the Admin Console, go to Security > Authentication > Login challenges. In the left pane, select an organizational unit for which you want to disable login challenges. To disable login challenges for all users, select the root organizational unit. Under Settings for users signing in using other SSO profiles, select Don't ask users for additional verifications from Google. Click Save. What's next Learn more about Identity and Access Management (IAM). Read about best practices for setting up an enterprise organization in Google Cloud. Send feedback \ No newline at end of file diff --git a/Landing_zones_overview.txt b/Landing_zones_overview.txt new file mode 100644 index 0000000000000000000000000000000000000000..813cc189948b8f8ce8f39a4281756abf1b664300 --- /dev/null +++ b/Landing_zones_overview.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/landing-zones +Date Scraped: 2025-02-23T11:45:08.955Z + +Content: +Home Docs Cloud Architecture Center Send feedback Landing zone design in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This document provides an overview of how to design landing zones in Google Cloud. A landing zone, also called a cloud foundation, is a modular and scalable configuration that enables organizations to adopt Google Cloud for their business needs. A landing zone is often a prerequisite to deploying enterprise workloads in a cloud environment. A landing zone is not the same as a Google Cloud zone or zonal resources. This document is aimed at solutions architects, technical practitioners, and executive stakeholders who want an overview of the following: Typical elements of landing zones in Google Cloud Where to find detailed information on landing zone design How to deploy a landing zone for your enterprise, including options to deploy pre-built solutions This document is part of a series that helps you understand how to design and build a landing zone. The other documents in this series help guide you through the high-level decisions that you need to make when you design your organization's landing zone. In this series, you learn about the following: Landing zone design in Google Cloud (this document) Decide how to onboard identities to Google Cloud Decide the resource hierarchy for your Google Cloud landing zone Decide the network design for your Google Cloud landing zone Decide the security for your Google Cloud landing zone This series does not specifically address compliance requirements from regulated industries such as financial services or healthcare. What is a Google Cloud landing zone? Landing zones help your enterprise deploy, use, and scale Google Cloud services more securely. Landing zones are dynamic and grow as your enterprise adopts more cloud-based workloads over time. To deploy a landing zone, you must first create an organization resource and create a billing account, either online or invoiced. A landing zone spans multiple areas and includes different elements, such as identities, resource management, security, and networking. Many other elements can also be part of a landing zone, as described in Elements of a landing zone. The following diagram shows a sample implementation of a landing zone.
It shows an Infrastructure as a Service (IaaS) use case with hybrid cloud and on-premises connectivity in Google Cloud: The example architecture in the preceding diagram shows a Google Cloud landing zone that includes the following Google Cloud services and features: Resource Manager defines a resource hierarchy with organizational policies. A Cloud Identity account synchronizes with an on-premises identity provider, and Identity and Access Management (IAM) provides granular access to Google Cloud resources. A network deployment that includes the following: A Shared VPC network for each environment (production, development, and testing) connects resources from multiple projects to the VPC network. Virtual Private Cloud (VPC) firewall rules control connectivity to and from workloads in the Shared VPC networks. A Cloud NAT gateway allows outbound connections to the internet from resources in these networks without external IP addresses. Cloud Interconnect connects on-premises applications and users. (You can choose between different Cloud Interconnect options, including Dedicated Interconnect or Partner Interconnect.) Cloud VPN connects to other cloud service providers. A Cloud DNS private zone hosts DNS records for your deployments in Google Cloud. Multiple service projects are configured to use the Shared VPC networks. These service projects host your application resources. Google Cloud Observability includes Cloud Monitoring for monitoring and Cloud Logging for logging. Cloud Audit Logs, Firewall Rules Logging, and VPC Flow Logs help ensure all necessary data is logged and available for analysis. A VPC Service Controls perimeter includes Shared VPC and the on-premises environment. A security perimeter isolates services and resources, which helps to mitigate the risk of data exfiltration from supported Google Cloud services. The preceding diagram is only an example, because there is no single or standard implementation of a landing zone. Your business must make many design choices, depending on different factors, including the following: Your industry Your organizational structure and processes Your security and compliance requirements The workloads that you want to move to Google Cloud Your existing IT infrastructure and other cloud environments The location of your business and customers When to build a landing zone We recommend that you build a landing zone before you deploy your first enterprise workload on Google Cloud, because a landing zone provides the following: A foundation that's designed to be secure The network for enterprise workloads The tools that you require to govern your internal cost distribution However, because a landing zone is modular, your first iteration of a landing zone is often not your final version. Therefore, we recommend that you design a landing zone with scalability and growth in mind. For example, if your first workload does not require access to on-premises network resources, you could build connectivity to your on-premises environment later. Depending on your organization and the type of workloads that you plan to run on Google Cloud, some workloads might have very different requirements. For example, some workloads might have unique scalability or compliance requirements. In these cases, you might require more than one landing zone for your organization: one landing zone to host most of the workloads and a separate landing zone to host the unique workloads.
You can share some elements such as identities, billing, and the organization resource across your landing zones. However, other elements, such as the network setup, deployment mechanisms, and folder-level policies, might vary. Elements of a landing zone A landing zone requires you to design the following core elements on Google Cloud: Identity provisioning Resource hierarchy Network Security controls In addition to these core elements, your business might have additional requirements. The following table describes these elements and where you can find more information about them. Landing zone element Description Monitoring and logging Design a monitoring and logging strategy that helps ensure all relevant data is logged and that you have dashboards that visualize the data and alerts that notify you of any actionable exceptions. For more information, see Google Cloud Observability documentation Backup and disaster recovery Design a strategy for backups and disaster recovery. For more information, see the following: Disaster recovery planning guide Backup and DR Service Compliance Follow the compliance frameworks that are relevant to your organization. For more information, see the Compliance resource center. Cost efficiency and control Design capabilities to monitor and optimize cost for workloads in your landing zone. For more information, see the following: Overview of cloud billing concepts Google Cloud Architecture Framework: Cost optimization Cost management API management Design a scalable solution for APIs that you develop. For more information, see Apigee API Management. Cluster management Design Google Kubernetes Engine (GKE) clusters that follow best practices to build scalable, resilient, and observable services. For more information, see the following: GKE best practices GKE Autopilot mode Multi-cluster Services About Cloud Service Mesh Best practices for designing and deploying a landing zone Designing and deploying a landing zone requires planning. You must have the right team to perform the tasks, and use a project management process. We also recommend that you follow the technical best practices that are described in this series. Build a team Bring together a team that includes people from multiple technical functions across the organization. The team must include people who can build all landing zone elements, including security, identity, networks, and operations. Identify a cloud practitioner who understands Google Cloud to lead the team. Your team should include members who manage the project and track achievements, and members who collaborate with application or business owners. Make sure that all stakeholders are involved early in the process. Your stakeholders must come to a common understanding of the scope of the process and make high-level decisions when the project gets kicked off. Apply project management to your landing zone deployment Designing and deploying your landing zone can take multiple weeks, so project management is essential. Ensure that project goals are clearly defined and communicated to all stakeholders and that all parties receive updates on any project changes. Define regular checkpoints and agree on milestones with realistic timelines that take operational processes and unexpected delays into account. To best align with business requirements, plan the initial landing zone deployment around the use cases that you want to deploy first in Google Cloud. 
We recommend that you first deploy workloads that can most easily run on Google Cloud, such as horizontally scaling multi-tier web applications. These workloads might be new or existing workloads. To assess existing workloads for migration readiness, see Migration to Google Cloud: Getting started. Because landing zones are modular, center the initial design around the elements that are required to migrate your first workloads and plan to add other elements later. Follow technical best practices Consider using Infrastructure as Code (IaC), with, for example, Terraform. IaC helps you make your deployment repeatable and modular. Having a CI/CD pipeline that deploys cloud infrastructure changes using GitOps helps you ensure that you follow internal guidelines and put the right controls in place. When you design your landing zone, ensure that you and your team take technical best practices into consideration. For more information on decisions to make in your landing zone, see the other guides in this series. In addition to this series, the following table describes frameworks, guides, and blueprints that can also help you follow best practices, depending on your use cases. Related documentation Description Google Cloud setup checklist A high-level checklist to help you set up Google Cloud for scalable, production-ready, enterprise workloads. Enterprise foundation blueprint An opinionated view of Google Cloud security best practices, aimed at CISO, security practitioners, risk managers, or compliance officers. Google Cloud architecture framework Recommendations and best practices to help architects, developers, administrators, and other cloud practitioners design and operate a cloud topology that's secure, efficient, resilient, high-performing, and cost-effective. Terraform blueprints A list of blueprints and modules that are packaged as Terraform modules and that you can use to create resources for Google Cloud. Identify resources to help implement your landing zone Google Cloud offers the following options to help you set up your landing zone: Design and deploy a landing zone that is customized to your requirements with Google Cloud partners or Google Cloud professional services. Onboard a workload with the Google Cloud Customer Onboarding program. Deploy a generic landing zone with the setup guide in the Google Cloud console. Deploy a highly opinionated landing zone that is aligned to the security foundations blueprint by using the Terraform example foundation. All these offerings have approaches that are designed specifically to meet the needs of different industries and business sizes, across the globe. To help you make the best selection for your use case, we recommend that you work with your Google Cloud account team to make the selection and help to ensure a successful project. What's next Decide how to onboard identities to Google Cloud (next document in this series). Decide the resource hierarchy for your Google Cloud landing zone. Decide the network design for your Google Cloud landing zone. Decide the security for your Google Cloud landing zone. 
Send feedback \ No newline at end of file diff --git a/Learn_more.txt b/Learn_more.txt new file mode 100644 index 0000000000000000000000000000000000000000..e634f2fbb4517d592d4e13d2e45aca711220ea92 --- /dev/null +++ b/Learn_more.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/transform +Date Scraped: 2025-02-23T11:57:05.414Z + +Content: +transformwithInside the moments when cloud computing changed everythingAI & Machine LearningFrom dark data to bright insights: How AI agents make data simpleRead articleBy Yasmeen Ahmad • 4-minute readAI & Machine LearningFrom dark data to bright insights: How AI agents make data simpleBy Yasmeen Ahmad • 4-minute readAI & Machine LearningOn the smartchain: How Web3 and gen AI make each other betterRead articleBy Matt A.V. Chaban • 8-minute readAI & Machine LearningOn the smartchain: How Web3 and gen AI make each other betterBy Matt A.V. Chaban • 8-minute readAI & Machine LearningYour data is the fuel for AI: 5 steps to build strong data foundationsRead articleBy Ami Dave • 5-minute readAI & Machine LearningYour data is the fuel for AI: 5 steps to build strong data foundationsBy Ami Dave • 5-minute readLatest storiesAI & Machine LearningHealthcare’s AI transformation: Agents, search, and platformsBy Aashima Gupta • 6-minute readAI & Machine LearningHundreds of organizations are fine-tuning Gemini models. Here's their favorite use cases.By Mikhail Chrestkha • 9-minute readSecurity & IdentityHow Google Does It: Finding, tracking, and fixing vulnerabilitiesBy Ana Oprea • 5-minute readAI & Machine LearningAI expectations 2025: Here's what thousands of cloud customers reveal they're looking forBy Matt A.V. Chaban • 4-minute readAI & Machine LearningThe Prompt: Prototype to productionBy Warren Barkley • 6-minute readSecurity & IdentityHow Google Does It: How we secure our own cloudBy Seth Vargo • 4-minute readLoad more stories Featured videosGen AI unicorns leading the way2-min videoAcross Asia, better begins with Google Cloud2-min videoHow TIME is using cloud to tell the stories that matter3-min videoSustainable fishing and empowering local fishermen through data3-min videoBrowse all videos \ No newline at end of file diff --git a/Learning_Hub.txt b/Learning_Hub.txt new file mode 100644 index 0000000000000000000000000000000000000000..87a04a510af58b12712d0eb5784068965f12dc35 --- /dev/null +++ b/Learning_Hub.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/learn +Date Scraped: 2025-02-23T12:11:16.128Z + +Content: +Check out the latest AI training courses and new Google Cloud Certificates to build your future. Learn more.Discover your learning pathExplore role-based learning paths, skill badges and certifications. 
Invest in your learning through Google Developer Program and discover more about team training opportunities.What's new The latest learning newsGEN AI TRAININGCheck out the latest introductory gen AI training and earn a skill badge to shareGet startedCERTIFICATESPrepare for in-demand entry-level job roles with new #GoogleCloudCertificatesExplore certificatesCREDLYShare your Google Cloud credentials with your network, now available through CredlyLearn moreCloud CredentialsBuild your future with credentialsThrough Google Cloud Certificates, skill badges and certifications, gain the in-demand cloud skills to build your future and prepare for the cloud of tomorrow.Explore credentialsBuild your future with credentialsPrepare for a career in cloud with Google Cloud CertificatesEarn a Google Cloud Certificate, and develop your cloud knowledge and skills, to prepare for in-demand entry level cloud roles, in areas like data analytics and cybersecurity.Discover CertificatesEarn skill badgesSkill badges earned exclusively on Google Cloud Skills Boost, are digital credentials that demonstrate an individual’s skills in the latest, in-demand cloud technologies needed by organizations.Earn a skill badgeGoogle Cloud certificationsFoundational certificationThis certification is for anyone who wishes to demonstrate their knowledge of cloud capabilities and how Google Cloud products and services can be used to achieve an organization’s digital transformation goals.Associate certificationThis certification is for technical individuals, with experience deploying Google Cloud applications, monitoring operations, and managing cloud enterprise solutions.Professional certificationThese certifications are for experienced technical individuals who have in-depth, hands-on experience configuring Google Cloud environments for an organization, and deploying services and solutions based on business requirements.Learn moreLearn by roleDevelop your cloud career by roleAPI DeveloperCitizen DeveloperCloud Digital LeaderCloud EngineerCloud ArchitectCloud DeveloperContact Center EngineerData AnalystData EngineerDatabase EngineerDevOps EngineerGoogle Workspace AdministratorHybrid and Multi-Cloud ArchitectMachine Learning EngineerNetwork EngineerSecurity EngineerStartup Cloud EngineerAPI DeveloperAPI DeveloperCitizen DeveloperCloud Digital LeaderCloud EngineerCloud ArchitectCloud DeveloperContact Center EngineerData AnalystData EngineerDatabase EngineerDevOps EngineerGoogle Workspace AdministratorHybrid and Multi-Cloud ArchitectMachine Learning EngineerNetwork EngineerSecurity EngineerStartup Cloud EngineerAPI DeveloperAn API Developer designs, builds, and maintains API proxies. 
Their work can involve multiple areas including authentication, authorization, monitoring, logging, governance, or documentation.This learning path for API Developers includes a combination of on-demand courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnDeveloping and designing APIs with Google Cloud’s Apigee API PlatformDeveloper and secure APIs with Apigee XDeploy and manage Apigee XCitizen DeveloperCitizen Developers create mobile and desktop applications using AppSheet, an intelligent, no-code platform for creating apps.This learning path for Citizen Developers includes a combination of on-demand courses to watch and hands-on learning to earn Google Cloud completion badges.Start learningWhat you will learnFoundations of Building No-Code Apps with AppSheetImplementation of Building No-Code Apps with AppSheetAutomation of Building No-Code Apps with AppSheetCloud Digital LeaderA Cloud Digital Leader can articulate the capabilities of core cloud products and services, and understands how they benefit organizations.This foundational no-cost learning path is designed for business practitioners working in the cloud. It includes a combination of on-demand courses to watch and earn Google Cloud completion badges. It is designed to develop knowledge in broad cloud literacy and helps prepare for the Google Cloud Digital Leader certification exam.Start learningWhat you will learnDigital Transformation with Google CloudInnovating with Data and Google CloudInfrastructure and Application Modernization with Google CloudUnderstanding Google Cloud Security and OperationsCloud EngineerA Cloud Engineer deploys applications, monitors operations, and manages enterprise solutions.This learning path for Cloud Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnGoogle Cloud Infrastructure Foundations and AutomationGoogle Compute Engine, Cloud Logging, Cloud Monitoring, NetworkingCloud Shell, Cloud IAM, Cloud SDKGoogle Cloud Storage, Cloud Identity, Load BalancingResource Manager, Deployment Manager, VPC, Google Kubernetes EnginePrepare for the Associate Cloud Engineer certification examCloud ArchitectA Cloud Architect designs, develops, and manages robust and scalable cloud architecture solutions.This learning path for Cloud Architects covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnGoogle Cloud Infrastructure Foundations and AutomationGoogle Compute Engine, Cloud Logging, Cloud Monitoring, NetworkingCloud Shell, Cloud IAM, Cloud SDKGoogle Cloud Storage, Cloud Identity, Load BalancingResource Manager, Deployment Manager, VPC, Google Kubernetes EnginePrepare for the Professional Cloud Architect certification examCloud DeveloperA Cloud Developer designs, builds, analyzes, and maintains cloud-native applications.This learning path for Cloud Developers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnCloud Shell, Cloud SDK, Cloud Storage, App EngineCloud Datastore, Firebase, Firebase SDK, Cloud SQL, Cloud IAMCloud Functions, Cloud EndPoints, Google Kubernetes Engine, Container Registry, Cloud Logging and MonitoringCloud Pub/Sub, Cloud Spanner, Spring Boot, Spring CloudContact Center EngineerLearn how to design, develop, and deploy customer conversational solutions using Contact 
Center Artificial Intelligence (CCAI). Explore best practices for integrating conversational solutions with your existing contact center software, establishing a framework for human agent assistance, and implementing solutions securely and at scale.This learning path for Contact Center Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnContact Center Artificial IntelligenceDialogflowNatural Language Understanding (NLU)Data AnalystData Analysts gather data, analyze that data and translate the results into insights to share with business stakeholders.This learning path for Data Analysts covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnBigQuery, SQL, BigQuery MLData VisualizationCloud APIs, AI Platform, Cloud Data FusionData EngineerData Engineers design and build systems that collect and transform the data used to inform business decisions.This learning path for Data Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnBigQuery, Dataflow, Data FusionCloud Composer, BigQuery ML, IoTTensorFlow, Dataproc, workload migrationDatabase EngineerDatabase Engineers design, plan, test, implement, and monitor database migrations.This learning path for Database Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnCompute EngineCloud SQLBigQuery, Cloud SpannerDevOps EngineerCloud DevOps Engineers are responsible for maintaining the efficient operations of the full software delivery pipeline. They oversee that production balances both service reliability and delivery speed.This learning path for Cloud DevOps Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnGoogle App Engine, Google Compute EngineGoogle Kubernetes Engine, Site Reliability Engineering (SRE)Cloud Debugger, Cloud Data Loss Prevention (DLP), Google Cloud Operations Suite, Error ReportingGoogle Workspace AdministratorGoogle Workspace Administrators transform business objectives into tangible configurations, policies, and security practices as they relate to their organization’s users, content, and integrations. 
Workspace Administrators use tools, programming languages, and APIs to automate workflows and enable efficient and secure communication and data access.This learning path for Google Workspace Administrators covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnApp Script, Google Workspace Admin consoleGmail, Google Maps, Google Drive, Calendar, Docs, SheetsApp Maker, Google Workspace API and IntegrationsHybrid and Multi-Cloud ArchitectHybrid and Multi-Cloud Architects manage applications across multiple clouds, or between on-premises and cloud environments.This learning path for Hybrid and Multi-Cloud Architects covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnContainers, Google Kubernetes Engine (GKE), Google Cloud IAMAnthos, Istio, Docker, Cloud IAM, Cloud SDKContainer Registry, VPC, Resource Manager, Cloud Logging, Cloud MonitoringMachine Learning EngineerA Machine Learning Engineer designs, builds, productionizes, optimizes, operates, and maintains ML systems.This learning path for ML Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnTensorFlow, AI Platform NotebooksCloud Dataflow, Cloud DataFusion, Vertex AI, BigQuery, BigQuery MLCloud ML APIs, Kubeflow PipelinesNetwork EngineerCloud Network Engineers configure, maintain, and troubleshoot network components of their cloud-based infrastructure.This learning path for Cloud Network Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnCloud CDN, Cloud DNS, Cloud IAM, NetworkingCloud Armor, Cloud Shell, Cloud Console, Cloud InterconnectCloud Load Balancing, Network Service Tiers, Cloud Deployment Manager, VPC, Cloud NATSecurity EngineerSecurity Engineers actively assess existing cloud implementations, identifying potential security issues, and prioritizing solutions. Learn best practices in cloud security and how the Google Cloud security model can help protect your technology stack. Start with the Network Engineer learning path to support your learning on this path.This learning path for Security Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnGoogle Kubernetes Engine, Cloud ShellVirtual machines, Stackdriver, Cloud Functions, IAMCloud DNS, Cloud CDN, and Cloud NATStartup Cloud EngineerA Startup Cloud Engineer needs to be agile and understand how to implement solutions quickly and economically. 
This learning path exposes you to many parts of the Google Cloud Platform, giving you the ability to solve your unique problems.This learning path for Startup Cloud Engineers covers a range of courses to watch and hands-on learning to complete and earn Google Cloud skill badges.Start learningWhat you will learnCloud Shell, Cloud SDK, Cloud Storage, App EngineCloud Datastore, Firebase, Firebase SDK, Cloud SQL, Cloud IAM, Cloud FunctionsCloud EndPoints, Google Kubernetes Engine, BigQuery, BigQuery MLInvest in a learning subscriptionStart your learning subscriptionGo Further with Google Developer Program premium tier.The Google Developer Program premium tier unlocks exclusive benefits, resources, and opportunities to help you learn, build, and grow with Google.Learn moreTraining for teamsUpskill your teamDiscover how our customers are transforming their teams with Google Cloud training and certification. Train your team with the latest cloud skills through engaging with Google Cloud Consulting. Discover the full range of consulting services available today. Learn more.Samsung upskills their Big Data team to transform their businessHow HSBC is upskilling at scale with Google CloudWayfair trains their ML and data engineers, and application developers with Google CloudMore ways to learnTraining options to meet your learning styleJoin our Google Cloud online eventsFind a training class near youRead recent training and certification blogs \ No newline at end of file diff --git a/Live_Stream_API.txt b/Live_Stream_API.txt new file mode 100644 index 0000000000000000000000000000000000000000..bf0340eb871bcfa41eea98bac60556124155fc30 --- /dev/null +++ b/Live_Stream_API.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/livestream/docs +Date Scraped: 2025-02-23T12:06:18.986Z + +Content: +Home Live Stream API Documentation Stay organized with collections Save and categorize content based on your preferences. Live Stream API documentation View all product documentation With Live Stream API, you can transcode live, linear video streams into a variety of formats. Live Stream API benefits broadcasters, production companies, businesses, and individuals looking to transform their live video content for use across a variety of user devices. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Quickstart for an MPEG-DASH live stream Quickstart for an HLS live stream Product overview find_in_page Reference REST API DRM protocol documentation info Resources Pricing Release notes Troubleshoot \ No newline at end of file diff --git a/Load_balanced_managed_VMs.txt b/Load_balanced_managed_VMs.txt new file mode 100644 index 0000000000000000000000000000000000000000..0ea276eba624c2764d5cf39b0eb56ec0dbcef99c --- /dev/null +++ b/Load_balanced_managed_VMs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/reliability/load-balanced-vms +Date Scraped: 2025-02-23T11:54:44.175Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Load balanced managed VMs Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2023-04-10 UTC This guide helps you understand, deploy, and use the Load balanced managed VMs Jump Start Solution, which demonstrates how to create a virtual machine cluster with a load balancer, make VMs globally available, and instantaneously manage traffic. You can deploy the solution to help you do the following: Create redundant versions of an application that is hosted on multiple VMs. Automatically scale the number of VMs to meet user demand. Automatically heal failing copies of an application. Distribute traffic to multiple locations. Migrate an existing load-balanced implementation to the cloud with minor modifications (lift and shift). This document is intended for developers who have some background with load balancers. It assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives This solution guide helps you do the following: Learn about load balancer features and configurations, including auto-scaling and auto-healing. Deploy two or more VMs that can potentially serve an application, and use a load balancer to manage traffic. Modify the deployment location and the number of nodes. Understand load balancer design considerations. Architecture This solution deploys a group of VMs that are managed by a load balancer. The following diagram shows the architecture of the Google Cloud resources: Request flow The following is the request processing flow of the topology that the load balanced managed VMs solution deploys. The steps in the flow are numbered as shown in the preceding architecture diagram. The user makes a request to the application, which is deployed on Compute Engine. The request first lands on Cloud Load Balancing. Cloud Load Balancing distributes traffic to the Compute Engine managed instance group (MIG), which scales the number of instances based on traffic volume. Components and configuration The architecture includes the following components: Component Product description Purpose in this solution Compute Engine A secure and customizable compute service that lets you create and run virtual machines on Google's infrastructure. Multiple virtual machines in a MIG create redundant versions of a prospective application. Cloud Load Balancing A service that provides high performance, scalable load balancing on Google Cloud. Process incoming user requests, and distribute to nodes based on configured settings. Cost For an estimate of the cost of the Google Cloud resources that the load balanced managed VMs solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Deploy the solution This section guides you through the process of deploying the solution. 
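For orientation only (this is not one of the deployment steps that follow), the gcloud sketch below approximates the core pieces of the architecture described earlier: an instance template, a managed instance group with autohealing, and an autoscaling policy. The solution itself provisions these resources through Terraform, and every name, zone, and threshold here is a hypothetical example:

# Instance template that the MIG uses to create identical VMs.
gcloud compute instance-templates create lb-vms-template \
    --machine-type=e2-small \
    --image-family=debian-12 \
    --image-project=debian-cloud

# Health check used for autohealing.
gcloud compute health-checks create http lb-vms-health-check --port=80

# Managed instance group with two VMs and autohealing enabled.
gcloud compute instance-groups managed create lb-vms-mig \
    --zone=us-central1-a \
    --template=lb-vms-template \
    --size=2 \
    --health-check=lb-vms-health-check \
    --initial-delay=300s

# Scale between 2 and 5 VMs based on load balancer utilization.
gcloud compute instance-groups managed set-autoscaling lb-vms-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=5 \
    --target-load-balancing-utilization=0.8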
Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles assigned to the service account These roles are listed here in case your administrator needs this information. roles/compute.instanceAdmin.v1 roles/editor roles/iam.serviceAccountActor roles/iam.serviceAccountUser Choose a deployment method To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. 
Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure-as-code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Load balanced managed VMs page. Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step interactive guide is displayed. Complete the steps in the interactive guide: Select a project where you want to create resources that are deployed by the solution and click Continue. In the Deployment name field, type a name you have not previously used in this project. Optionally, add an identifying label to the deployment. (Solution indicator and deployment name labels are automatically added.) You can use labels to organize resources by criteria such as cost center, environment, or state. For more information about labels, see Creating and managing labels From the Region and Zone drop-down lists, select the desired location where resources will be created. For more information about regions and zones, see Geography and regions In the Number of nodes field, type the minimum number of virtual machines in the MIG. The load balancer is configured to scale the number of virtual machines based on user traffic volume. For this deployment, you can use the default value of 2 nodes. For more information about creating multiple VMs, see Basic scenarios for creating managed instance groups (MIGs). Click Continue. When you've finished specifying options, click Deploy. The Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying from the console After the deployment is completed, the Status field changes to Deployed. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour You deployed the example solution, viewed the load balancer configuration, and viewed the application site that is served by VMs. To learn about design recommendations to address your organization's unique load balancing needs, see Design recommendations. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. 
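The steps that follow assume a shell where both Terraform and the Google Cloud CLI are installed and authenticated. Cloud Shell already satisfies this; if you run the commands from your own workstation instead, a quick check like the following can save time. This is only a suggestion, not part of the official procedure, and PROJECT_ID is a placeholder.
# Confirm that Terraform and the gcloud CLI are available in this shell.
terraform version
gcloud --version
# Check which account and project the gcloud CLI is currently configured to use.
gcloud config list
# If needed, set up Application Default Credentials for Terraform and select your project.
gcloud auth application-default login
gcloud config set project PROJECT_ID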
Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-google-load-balanced-vms/. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-google-load-balanced-vms/ Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-load-balanced-vms/. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-google-load-balanced-vms/ directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values that you set in this file must match the variable types, as declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" # Google Cloud region where you want to deploy the solution # Example: us-central1 region = "REGION" # Google Cloud zone where you want to deploy the solution # Example: us-central1-a zone = "ZONE" # The number of Cloud Compute nodes you want to deploy (minimum of 2) # Example: 2 nodes = "NODES" # The name of this particular deployment, will get added as a prefix to most resources # Example: load-balanced-vms deployment_name = "DEPLOYMENT_NAME" # The following variables have default values. 
You can set your own values or remove them to accept the defaults # A set of key/value label pairs to assign to the resources that are deployed by this solution # Example: {"team"="monitoring", "environment"="test"} labels = {"KEY1"="VALUE1",..."KEYn"="VALUEn"} # Whether to enable underlying APIs # Example: true enable_apis = "ENABLE_APIS" # If you want to deploy to an existing network, enter your network details in the following variables: # VPC network to deploy VMs in. A VPC will be created if not specified network_id = "NETWORK_ID" # Subnetwork to deploy VMs in. A Subnetwork will be created if not specified subnet_self_link = "SUBNET_SELF_LINK" #Shared VPC host project ID, if a Shared VPC is provided via network_id network_project_id = "NETWORK_PROJECT_ID" For information about the values that you can assign to the required variables, see the following: project_id, region, and zone are required. For information about the values that you can use for these variables, see the following: Identifying projects Available regions and zones The other variables have default values. You might change some of them (for example, deployment_name and labels). Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-load-balanced-vms/. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-load-balanced-vms/. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! 
The following additional output is displayed: Outputs: console_page_for_load_balancer = "https://console.cloud.google.com/net-services/loadbalancing/details/http/-lb-url-map?project=" load_balancer_endpoint = "" To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Design recommendations This section provides recommendations for using the load balanced managed VMs solution to develop an architecture that meets your requirements for security, reliability, cost, and performance. For a high level overview of best practices, see Patterns for scalable and resilient apps . Security Implement the recommendations in the following guides to help secure your architecture: Implement security by design Implement shift-left security For example, your architecture might have the following requirements: You might require security features that are only available on a specific operating system. For more information, see Operating system details You might need to fine-tune subnet details in a custom network. For more information about creating networks, see Create and manage VPC networks Reliability Use the following guidelines to create reliable services: Designing resilient systems on Compute Engine Using load balancing for highly available applications For example, you might fine-tune your VM health check details to ensure that timing is in line with your organization's commitments to customers. For more information about configuring health checks, see Set up an application health check and autohealing . Performance Optimize performance by adhering to the best practices described in Google Cloud Architecture Framework: Performance optimization. For example, the application that you deploy might require specific hardware requirements. For more information about configuring disk, memory, and CPU details on Compute Engine, see Machine families resource and comparison guide . Cost Use the best practices in the following guide to optimize the cost of your workflows: Google Cloud Architecture Framework: Cost optimization For example, you might set the maximum number of nodes in your MIG based on a maximum cost you would prefer to incur for Compute Engine instances. For more information about setting the target size of the autoscaler, see Turning off or restricting an autoscaler. Note the following: Before you make any design changes, assess the cost impact and consider potential trade-offs with other features. You can assess the cost impact of design changes by using the Google Cloud Pricing Calculator. To implement design changes in the solution, you need expertise in Terraform coding and advanced knowledge of the Google Cloud services that are used in the solution. If you modify the Google-provided Terraform configuration and if you then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For more information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Delete the deployment When you no longer need the solution, to avoid continued billing for the resources that you created in this solution, delete all the resources. 
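If you deployed by using the Terraform CLI and want to confirm exactly what will be removed before you continue, the following optional commands preview the deletion without changing anything. This is only a suggested check, not part of the official procedure.
cd $HOME/cloudshell_open/terraform-google-load-balanced-vms/
# List the resources that are currently tracked in the Terraform state.
terraform state list
# Show the plan that terraform destroy would execute, without destroying anything.
terraform plan -destroy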
Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-google-load-balanced-vms/. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. 
The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. 
Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/ Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! 
Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next Review the following documentation to learn about architectural and operational best practices for products used in this solution: Patterns for scalable and resilient apps Compute Engine: Create and start a VM Compute Engine: Create custom images Compute Engine: Create instance templates Compute Engine: Basic scenarios for creating managed instance groups (MIGs) Send feedback \ No newline at end of file diff --git a/Local_SSD.txt b/Local_SSD.txt new file mode 100644 index 0000000000000000000000000000000000000000..57a0eb14670573f0ebfdf83a4173bfad82417e48 --- /dev/null +++ b/Local_SSD.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/local-ssd +Date Scraped: 2025-02-23T12:10:09.994Z + +Content: +Block storageLocal SSD storage for virtual machine instancesHigh-performance, ephemeral block storage.Try it in consoleView documentationHow to use Local SSDsCaches or storage for transient, low value dataScratch processing space for high performance computing or data analyticsTemporary data storage for Microsoft SQL ServerSee all block storage options >FeaturesFastest IOPS and high-speed throughputChoose Local SSDs when you need Google's highest-speed ephemeral storage. Reach IOPS of 3,200,000 / 1,600,000 (read / write) and throughput up to 12,480 MiB per second / 6,240MiB per second on general purpose instances for 12 TB instances. On the newest storage-optimized instances, reach an industry-leading IOPS of 6,000,000 read and write and throughput up to 36,000 MiB per second / 30,000 MiB per second (read / write) on 36 TB instances.Scale as you need itAttach up to 32 Local SSD disks for 12 TB of total Local SSD storage space per general purpose instance, you can format and mount multiple Local SSD partitions into a single logical volume.Customizable VM shapesDepending on your workload, you may require specific memory-to-storage configuration to achieve the best performance at the right price. 
With Local SSDs, attach custom-sized disks to your VMs, allowing you to tailor your storage to your use case, needs, and budget.View all featuresHow It WorksLocal SSD in a minuteWanting to find a tool that gives you extra storage for your VM instances? In this episode of Cloud Bytes, Jen Person explains what a Local SSD is and the different use cases for this tool. Watch to learn if this ephemeral storage option fits best with your developer projects. Play videoCommon UsesCaches or storage for transient, low value dataUsing Local SSDs for caching allows you to store data in a lower-latency drive so responses to requests for that data can occur fast.Learn how to use Local SSDs for data cachingLearning resourcesUsing Local SSDs for caching allows you to store data in a lower-latency drive so responses to requests for that data can occur fast.Learn how to use Local SSDs for data cachingScratch processing space for high performance computing or data analyticsLocal SSDs are designed for temporary storage use cases such as scratch processing space. Because Local SSDs are located on the physical machine where your VM is running, they allow for very fast temporary storage space for HPC or data analytics.Learn how to use Local SSDs for scratch spaceLearning resourcesLocal SSDs are designed for temporary storage use cases such as scratch processing space. Because Local SSDs are located on the physical machine where your VM is running, they allow for very fast temporary storage space for HPC or data analytics.Learn how to use Local SSDs for scratch spaceTemporary data storage for Microsoft SQL ServerLocal SSDs can be a high-performance option for Microsoft SQL Server's tempdb system database and Windows pagefile. Local SSDs are physically connected to the server that hosts the VM instance, which can lead to better performance, lower latency, and higher input/output operations per second (IOPS) compared to other block storage options. However, local SSDs are removed when the instance is shut down or reset using the API.Learn how to create a high performance SQL Server instanceLearning resourcesLocal SSDs can be a high-performance option for Microsoft SQL Server's tempdb system database and Windows pagefile. Local SSDs are physically connected to the server that hosts the VM instance, which can lead to better performance, lower latency, and higher input/output operations per second (IOPS) compared to other block storage options. However, local SSDs are removed when the instance is shut down or reset using the API.Learn how to create a high performance SQL Server instancePricingHow Local SSD pricing worksPricing for Local SSDs is based on usage, and is typically billed per GB per month.ProductPricingLocal SSDSee pricing pageHow Local SSD pricing worksPricing for Local SSDs is based on usage, and is typically billed per GB per month.Local SSDPricingSee pricing pageGet started with Local SSDsLearn more about options and pricingVisit documentationCreate a VM instance and start using Local SSDsOpen consoleLearn how to optimize Local SSD performanceExplore guideHow to choose a disk interface Learn moreUnderstand how to choose a valid number of disksView detailsBusiness CaseTop companies are using Local SSDMixpanel: Scalable, high-performance product analyticsJoe Xavier, VP of Engineering, Mixpanel"We saw the pace of innovation as faster at Google, which opens up possibilities for us to do things we otherwise couldn’t do. 
With Google Cloud doing a lot of the heavy lifting, we can concentrate on making our analytics products the best they can be."Learn moreMore storiesIntegraGen created scalable, high-performance analytics tools that enable researchers and oncologists to rapidly interpret genomic data.Planet Labs, Inc. has launched the largest constellation of satellites in human history to capture images of the entire Earth’s landmass on a daily basis.Technical resourcesAdding Local SSDsOptimizing Local SSD performanceLearn more about the ephemeral nature of Local SSDs \ No newline at end of file diff --git a/Log_on-premises_resources.txt b/Log_on-premises_resources.txt new file mode 100644 index 0000000000000000000000000000000000000000..3175ab0b4466e7d33cb27cb8674a6f09a8a3070a --- /dev/null +++ b/Log_on-premises_resources.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/logging-on-premises-resources-with-bindplane +Date Scraped: 2025-02-23T11:53:28.954Z + +Content: +Home Docs Cloud Architecture Center Send feedback Log on-premises resources with BindPlane Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-02 UTC This document is one part of a two-part series on extending Cloud Logging and Cloud Monitoring to include on-premises infrastructure and apps. Log on-premises resources with BindPlane (this document): Read about how Logging supports logging from on-premises resources. Monitor on-premises resources with BindPlane: Read about how Monitoring supports monitoring of on-premises resources. You might consider using Logging and Monitoring for logging and monitoring of your on-premises resources for the following reasons: You want a temporary solution while you move infrastructure to Google Cloud and you want to log and monitor your on-premises resources until they're decommissioned. You might have a diverse computing environment with multiple clouds and on-premises resources. In either case, with the Logging and Monitoring APIs and BindPlane, you can gain visibility into your on-premises resources. This document is intended for DevOps practitioners, managers, and executives who are interested in a logging strategy for resources in Google Cloud and their remaining on-premises infrastructure and apps. Ingesting logs with Logging You can get logs into Logging by using the API in two supported ways: Use BindPlane from observIQ to ingest logs from your on-premises or other cloud sources. Use the Cloud Logging API directly from your app or by using a custom agent. Using BindPlane to ingest logs into Logging The following diagram shows the architecture of how BindPlane ingests logs and then how those logs are ingested into Logging. BindPlane enables users to remotely deploy and manage agents on the hosts they want to gather logs from. For more information, read about the architecture of BindPlane. This option requires the least effort to deploy because it needs only configuration rather than development. Advantages: Requires configuration, not development. Included in the cost of using Logging. Is a configuration that is supported by the Logging product and support teams. Can extend to logs not provided by the default configuration. Disadvantages: Requires the use of a third-party tool. Might need to provide a custom configuration if the log source isn't provided by default. The provided list of logs is available in Sources.
Using the Logging API directly The following diagram shows the architecture of how logs are collected by instrumentation and ingested into Logging. Using the API directly means that you need to instrument your applications to send logs to the API, or develop a custom agent that sends logs to the API. This option requires the most effort because it involves development work. Advantages: Provides flexibility because you can implement the instrumentation with client logging libraries. Disadvantages: Requires a separate solution for infrastructure logs, such as a custom agent. Requires code instrumentation, which might mean a higher cost to implement. Requires the use of batching and other scalable ingestion techniques for proper ingestion performance. Support is provided for the Logging API only, not custom-developed code. Using BindPlane This document covers using BindPlane from observIQ to ingest logs into Logging. Because it's included in the cost of Logging, BindPlane doesn't require development and provides a product-supported solution. Agents, sources, and destinations For detailed information about agents, sources, and destinations, see the BindPlane Quickstart Guide. Example use case Enterprise customers use BindPlane to ingest logs in the following on-premises logging scenarios: Custom parsing and filtering of log data from custom application logs. Collection of operating-system events from Linux or Windows virtual machines. Ingestion of syslog streams from network or other compatible devices. Collection of Kubernetes system and application logs. Send logs from on-premises to Logging After you set up BindPlane and start sending logs, those logs are sent to Logging. To view, process, and export logs, go to the Google Cloud console. The logs are listed as the generic_node or generic_task resource types. For more information about the labels included in each resource type, see the Logging resource list. Cloud Logging supports logs from non-Google Cloud resources through the use of two resource types: Generic Node: Identifies a machine or other computational resource for which no other resource type is applicable. The label values must uniquely identify the node. Generic Task: Identifies an app process for which no other resource is applicable, such as a process scheduled by a custom orchestration system. The label values must uniquely identify the task. View logs in Logging On the Logs Explorer page, the All resources list includes Generic Node as a resource type. The logs that appear on the page were captured as the generic_node resource type. Expand a row to see log entry details. The log entries use a structured logging format, which provides a richer format for searching the logs because the log payload is stored as a jsonPayload. The structured logging format makes logs more accessible because you can use the fields in the payload as a part of the search. The BindPlane agent provides a mapping from the original log entry to the structured log entry in Logging. Conclusion With logs available in Logging, you can take full advantage of the Logging features. Logs appear in the Google Cloud console. You can export logs with Logging exports and use them to create metrics and alerts in Monitoring by using logs-based metrics. What's next Logging and Monitoring BindPlane set-up instructions for Cloud Monitoring and Logging Set up logs-based metrics in Logging For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
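As a quick command-line check that your on-premises logs are arriving, you can also query them with the gcloud CLI. The following is a minimal sketch: the project ID and the node_id value are placeholders, and node_id is one of the labels of the generic_node resource type described earlier.
# Read recent entries that were ingested against the generic_node resource type.
gcloud logging read 'resource.type="generic_node"' --project=PROJECT_ID --limit=10 --format=json
# Narrow the query to a single on-premises host by its node_id label (example value).
gcloud logging read 'resource.type="generic_node" AND resource.labels.node_id="on-prem-host-01"' \
  --project=PROJECT_ID --limit=10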
Send feedback \ No newline at end of file diff --git a/Logging_and_monitoring.txt b/Logging_and_monitoring.txt new file mode 100644 index 0000000000000000000000000000000000000000..2a11d3ba0f225e74b3247af91c5ce823d7bd7f5e --- /dev/null +++ b/Logging_and_monitoring.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/logging-monitoring +Date Scraped: 2025-02-23T11:47:10.219Z + +Content: +Home Docs Cloud Architecture Center Send feedback Logging and monitoring Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC This section describes how logging and monitoring work in the enterprise application blueprint for both the developer platform and the applications. Google Cloud Observability for GKE provides Cloud Logging and Cloud Monitoring services for blueprint applications. By default, the base source code in the application templates sends logs to stdout. Using stdout is a best practice for containerized applications because stdout lets the platform handle the application logs. The application code is instrumented with Prometheus client libraries to export application-specific metrics. GKE automatically provides metrics for each application, including Kube State metrics, resource utilization, SRE golden metrics, and database instance metrics. For the developer platform team, the platform provides infrastructure, usage, and cross-application traffic metrics. Logging storage Cloud Operations for GKE also lets you collect system and application logs into central log buckets. The blueprint also includes a project in each environment folder that's used for storing logs. The enterprise foundation blueprint has a separate logging project where the aggregate Cloud Audit Logs logs from across the entire Google Cloud organization are exported. The log types most needed by tenants are also separated by tenant. For example, an application developer who works on the frontend application might be granted access to only frontend container logs and pod logs, and only in the development and non-production environments. The following table lists log types, locations, and access control granularity. Access control granularity Log types Log storage location Developer platform Multi-tenant infrastructure logs Project: eab-infra-cicd Application factory logs Project: eab-app-factory By environment Node Cluster control plane Non-tenant containers or pods Project: eab-gke-{env}Bucket: _Default Compute Engine resources that are used by GKE Cloud Service Mesh traffic Project: eab-gke-{env} By environment and tenant Tenant containers or pods Project: eab-gke-{env}Bucket: per-tenant (scope) Alloy DB sessions Other tenant-owned resources Project: eab-app-{appname}-{env} By tenant Application builds Application deploys Project: eab-app-cicd-{appname} Application monitoring Google Cloud Observability for GKE provides predefined monitoring dashboards for GKE. The blueprint also enables Google Cloud Managed Service for Prometheus, which collects metrics from Prometheus exporters and lets you query the data globally using PromQL. PromQL means that you can use familiar tools like Grafana dashboards and PromQL-based alerts. Cloud Service Mesh is enabled to provide you with dashboards in the Google Cloud console to observe and troubleshoot interactions between services and across tenants. The blueprint also includes a project for a multi-project monitoring metrics scope. 
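To make the logging storage model above more concrete, the following is a hedged sketch of how a per-tenant log bucket, routing sink, and log view could be created with the gcloud CLI. It is illustrative only: the blueprint provisions these resources through Terraform, and the project ID (following the eab-gke-{env} pattern), bucket name, tenant namespace, and retention value below are assumptions.
# Create a dedicated log bucket for a tenant in the environment's GKE project.
gcloud logging buckets create frontend-tenant-logs \
  --location=global --retention-days=30 --project=eab-gke-dev
# Route the tenant's container and pod logs to that bucket with a sink.
gcloud logging sinks create frontend-tenant-sink \
  logging.googleapis.com/projects/eab-gke-dev/locations/global/buckets/frontend-tenant-logs \
  --project=eab-gke-dev \
  --log-filter='resource.type="k8s_container" AND resource.labels.namespace_name="frontend"'
# Optionally, narrow read access for the tenant's developers with a log view on the bucket.
gcloud logging views create frontend-dev-view \
  --bucket=frontend-tenant-logs --location=global --project=eab-gke-dev \
  --log-filter='resource.labels.namespace_name="frontend"'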
Threat and vulnerability monitoring Security Command Center provides insight into the overall security posture of the blueprint. Security Command Center Premium tier provides Container Threat Detection for active container-based workloads in GKE. Web Security Scanner is used to detect vulnerabilities in your internet-facing services. Web Security Scanner detects vulnerabilities by crawling an HTTP service and following all links, starting at the base URL. Web Security Scanner then exercises as many user inputs and event handlers as possible. What's next Read about operations for both the developer platform and applications (next document in this series). Send feedback \ No newline at end of file diff --git a/Looker(1).txt b/Looker(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..d0910838eb21222dd95ba155dfc7a4dc4c87c378 --- /dev/null +++ b/Looker(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/looker +Date Scraped: 2025-02-23T12:02:28.352Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.LookerAnalyze governed data, deliver business insights, and build AI-powered applicationsBuild the foundation for responsible data insights with Looker. Leveraging Google’s deep roots and track record of AI-led innovation, Looker delivers the most intelligent BI solution by combining foundational AI, cloud-first infrastructure, industry leading APIs, and our flexible semantic layer.Try it freeRequest demoProduct highlights:API-first platform with composable BIBusiness-friendly AI powered analytics with governed, modeled dataOpen and trusted semantic modelGoogle-easy dashboardingUnleash Data's Full Potential: The Looker Story2:54 videoFeaturesTransform your data landscape into a unified, trusted source for AI and human analysis alike with Looker’s universal semantic modeling layerLooker’s trusted modeling layer provides a single place to curate and govern the metrics most important to your business. This means that users will see consistent results regardless of where they are consumed. In order for you to use this consistent trustworthy information in your tools of choice, we have opened up the Looker modeling layer to our ecosystem and are actively engaged with our partners to ensure the information in your Looker platform doesn’t stay there. Gemini in Looker: AI-powered analytics for self-service BIAn AI assistant that helps accelerate analytical workflows in Looker like the creation and configuration of visualizations, formula creation, data modeling, report creation, and more — all underpinned by both a robust semantic and Gemini foundational models. With Gemini foundational models, RAG, and more, you’ll see relentless focus on the quality outputs of the models and rapid innovation on a reimagined UX that is both collaborative and conversational for mass appeal.Build custom data experiences and data apps with Looker’s powerful embedded capabilitiesEmbedded analytics goes beyond placing dashboards in apps. It's about transforming data into deeply integrated, value-driving experiences. With Looker, fully interactive dashboards can be seamlessly integrated into your applications. The robust API coverage allows you to do just about anything from the UI through API. This unlocks limitless possibilities to create data experiences beyond a Looker dashboard. 
In addition, Looker extensions integrate directly with Vertex AI, enabling powerful custom AI workflows and advanced analytics within your Looker instance.Bring your insights to life with out-of-the-box visual, real-time, and self-service analyticsLooker offers two ways to empower your users with self-service capabilities — Looker and Looker Studio.Looker offers enterprise dashboards that are real-time, built on governed data, offers repeatable analysis, and inspires in-depth understanding of the data. Users can explore existing tiles, asking new questions, expanding filters, and drilling down to row level detail to fully understand the data behind a metric.Looker Studio offers interactive, collaborative, and ad hoc reports and dashboards, with access to over 800 data sources and connectors and a flexible drag-and-drop canvas. You have the option to perform ad hoc analysis on both governed and unmodeled data. It's free, fast, and easy to start. Simplify and secure your analytics with Looker on Google CloudLooker on Google Cloud takes the existing powerful capabilities from the Looker platform and seamlessly integrates it into the secure Google Cloud ecosystem. Features our customers have come to love and expect in all Google Cloud products are now also available with Looker on Google Cloud, including SSO with Google Cloud IAM, private networking, seamless integration with BigQuery, and a unified Terms of Service.View all featuresHow It WorksLooker is Google for your business data and enables you to make your organization's information accessible and useful. Looker Studio enables you to tell impactful, insightful stories with engaging reports and data visualizations.Together, you put inspiration and opportunity into the hands of the people who make things happen.Get started with LookerLooker: Google for your business dataCommon UsesReduce cross-cloud spendingLooker for cloud cost managementMany large companies now use multiple cloud platforms (multicloud or hybrid cloud). This leads to complex billing from multiple providers, and current solutions often lack transparency or are too expensive. Companies need a cost-effective way to gain insights into their cross-cloud spending and optimize their costs.With Looker, you get robust out-of-the-box reporting for your day-to-day needs and long-term optimization initiatives. Looker helps you easily identify the one-off high cost problems faster and slow burn global inefficiencies sooner.Looker for cloud cost managementTutorials, quickstarts, & labsLooker for cloud cost managementMany large companies now use multiple cloud platforms (multicloud or hybrid cloud). This leads to complex billing from multiple providers, and current solutions often lack transparency or are too expensive. Companies need a cost-effective way to gain insights into their cross-cloud spending and optimize their costs.With Looker, you get robust out-of-the-box reporting for your day-to-day needs and long-term optimization initiatives. Looker helps you easily identify the one-off high cost problems faster and slow burn global inefficiencies sooner.Looker for cloud cost managementAnalyze marketing dataLooker for Google Marketing PlatformMarketers need to analyze and activate their first-party data but lack a friendly, interactive UI from which to do it from. While this has always been a desire/need, the shifts in data regulations and browser privacy has made this a must-solve problem.Looker acts as a user interface for this solution. 
With out-of-the-box analytics (data exploration and dashboards), you can build custom segments from first party data. Looker also offers packaged activation paths to places like Google Analytics, so you can activate those segments.Looker for Google Marketing PlatformTutorials, quickstarts, & labsLooker for Google Marketing PlatformMarketers need to analyze and activate their first-party data but lack a friendly, interactive UI from which to do it from. While this has always been a desire/need, the shifts in data regulations and browser privacy has made this a must-solve problem.Looker acts as a user interface for this solution. With out-of-the-box analytics (data exploration and dashboards), you can build custom segments from first party data. Looker also offers packaged activation paths to places like Google Analytics, so you can activate those segments.Looker for Google Marketing PlatformCreate custom data and AI applicationsLooker for gen AI applicationsCompanies are eager to increase data analytics adoption, but the technical learning curve hinders widespread use. While generative AI (gen AI) promises to transform data interaction, businesses struggle to get started and have concerns about the reliability of AI-generated results.Looker adds an API-first platform that is easy to create and customize data applications, a semantic layer to provide trusted metrics, an in-database architecture to optimize the performance and scalability of the cloud, and gen AI integration to provide a natural language interface into the application.Looker gen AI extension on GitHubTutorials, quickstarts, & labsLooker for gen AI applicationsCompanies are eager to increase data analytics adoption, but the technical learning curve hinders widespread use. While generative AI (gen AI) promises to transform data interaction, businesses struggle to get started and have concerns about the reliability of AI-generated results.Looker adds an API-first platform that is easy to create and customize data applications, a semantic layer to provide trusted metrics, an in-database architecture to optimize the performance and scalability of the cloud, and gen AI integration to provide a natural language interface into the application.Looker gen AI extension on GitHubMonetize your business dataLooker for data monetizationData monetization is the process of leveraging the data your organization already collects to create new revenue streams or business value. Looker is ideal for data monetization because it allows you to create tailored data products, embed analytics seamlessly, scale with your business, and maintain data security.Monetize data with embedded analyticsTutorials, quickstarts, & labsLooker for data monetizationData monetization is the process of leveraging the data your organization already collects to create new revenue streams or business value. Looker is ideal for data monetization because it allows you to create tailored data products, embed analytics seamlessly, scale with your business, and maintain data security.Monetize data with embedded analyticsMore productive collaborationLooker for WorkspaceCustomers struggle to understand the usage of productivity and collaboration solutions like Google Workspace across the organization, and are often unprepared for security audits of these tools. With the Looker Block for Workspace, you get modeled schemas for Gmail, login, Drive, Meet, and rules. 
In addition, you get two out-of-the-box dashboards on Security Audit and Adoption and Collaboration, where you can drill down to row level audit log details.Other Looker for Workspace solutions include: Looker for Connected Sheets, to explore modeled data from Looker within Sheets and bring the power of LookML to the spreadsheet, and Looker Studio integration with Google Sheets, transforming Sheets data into interactive visualizations.Visit the Looker MarketplaceTutorials, quickstarts, & labsLooker for WorkspaceCustomers struggle to understand the usage of productivity and collaboration solutions like Google Workspace across the organization, and are often unprepared for security audits of these tools. With the Looker Block for Workspace, you get modeled schemas for Gmail, login, Drive, Meet, and rules. In addition, you get two out-of-the-box dashboards on Security Audit and Adoption and Collaboration, where you can drill down to row level audit log details.Other Looker for Workspace solutions include: Looker for Connected Sheets, to explore modeled data from Looker within Sheets and bring the power of LookML to the spreadsheet, and Looker Studio integration with Google Sheets, transforming Sheets data into interactive visualizations.Visit the Looker MarketplaceA dynamic data analytics powerhouseLooker for BigQueryLooker and BigQuery together form a dynamic data analytics powerhouse, transforming raw information into actionable insights that drive business growth. BigQuery's unparalleled ability to store and process massive datasets seamlessly integrates with Looker's intuitive, Google-friendly platform.With Looker's semantic modeling layer, complex data relationships are simplified, ensuring a single source of truth and fostering collaboration across your organization. From interactive dashboards and custom applications to real-time insights and AI-powered analytics, Looker and BigQuery empower you to make smarter decisions, faster, and stay ahead of the competition.Unlock BigQuery's full potential with LookerTutorials, quickstarts, & labsLooker for BigQueryLooker and BigQuery together form a dynamic data analytics powerhouse, transforming raw information into actionable insights that drive business growth. BigQuery's unparalleled ability to store and process massive datasets seamlessly integrates with Looker's intuitive, Google-friendly platform.With Looker's semantic modeling layer, complex data relationships are simplified, ensuring a single source of truth and fostering collaboration across your organization. From interactive dashboards and custom applications to real-time insights and AI-powered analytics, Looker and BigQuery empower you to make smarter decisions, faster, and stay ahead of the competition.Unlock BigQuery's full potential with LookerPricingLooker pricingLooker pricing has two components: platform pricing and user pricing. Contact sales to identify a solution that works for you.Looker productDetailsCostLooker (Google Cloud core)Platform pricing is the cost to run a Looker (Google Cloud core) instance and includes platform administration, integrations, and semantic modeling capabilities.User pricing is the cost for licensing individual users to access the Looker platform. 
These costs will vary based on the type of user and their permissions within the Looker (Google Cloud core) platform.Work with sales to identify a solution that works for you.Edition pricingStandard: A Looker (Google Cloud core) product for small organizations or teams with fewer than 50 users that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 1,000 query-based API calls per month, and up to 1,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.Enterprise: A Looker (Google Cloud core) product with enhanced security features for a wide variety of internal BI and analytics use cases that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 100,000 query-based API calls per month, and up to 10,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.Embed: A Looker (Google Cloud core) product for deploying and maintaining external analytics and custom applications at scale that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 500,000 query-based API calls per month, and up to 100,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.User licensingDeveloper user: An end user provisioned on a Looker (Google Cloud core) platform for access to any combination of Looker interfaces including Administration, LookML Models (including Development Mode), Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, Scheduling, the Looker API interfaces, and access to Support.Connect with our sales team to get a custom quote for your organization.Standard user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, and Scheduling Looker interfaces. Standard user privileges include data filtering, drill-to-row-level-detail, data downloads, Dashboard or Look creation, and view-only access to LookML. Standard user privileges do not include access to Development Mode, Administration, the Looker API interfaces, or Support.Connect with our sales team to get a custom quote for your organization.Viewer user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, and Looks (individual reports and charts) Looker interfaces. Viewer User privileges include data filtering, drill-to-row-level-detail, Scheduling, data downloads, and view-only access to LookML. Viewer user privileges do not include Dashboard or Look creation, or access to Development Mode, Administration, SQL Runner, Explore, the Looker API interfaces, or Support.Connect with our sales team to get a custom quote for your organization.Learn more about Looker pricing. View all pricing detailsLooker pricingLooker pricing has two components: platform pricing and user pricing. Contact sales to identify a solution that works for you.Looker (Google Cloud core)DetailsPlatform pricing is the cost to run a Looker (Google Cloud core) instance and includes platform administration, integrations, and semantic modeling capabilities.User pricing is the cost for licensing individual users to access the Looker platform. 
These costs will vary based on the type of user and their permissions within the Looker (Google Cloud core) platform.CostWork with sales to identify a solution that works for you.Edition pricingDetailsStandard: A Looker (Google Cloud core) product for small organizations or teams with fewer than 50 users that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 1,000 query-based API calls per month, and up to 1,000 administrative API calls per month.CostAnnual commitment. Work with sales to identify a solution that works for you.Enterprise: A Looker (Google Cloud core) product with enhanced security features for a wide variety of internal BI and analytics use cases that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 100,000 query-based API calls per month, and up to 10,000 administrative API calls per month.DetailsAnnual commitment. Work with sales to identify a solution that works for you.Embed: A Looker (Google Cloud core) product for deploying and maintaining external analytics and custom applications at scale that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 500,000 query-based API calls per month, and up to 100,000 administrative API calls per month.DetailsAnnual commitment. Work with sales to identify a solution that works for you.User licensingDetailsDeveloper user: An end user provisioned on a Looker (Google Cloud core) platform for access to any combination of Looker interfaces including Administration, LookML Models (including Development Mode), Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, Scheduling, the Looker API interfaces, and access to Support.CostConnect with our sales team to get a custom quote for your organization.Standard user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, and Scheduling Looker interfaces. Standard user privileges include data filtering, drill-to-row-level-detail, data downloads, Dashboard or Look creation, and view-only access to LookML. Standard user privileges do not include access to Development Mode, Administration, the Looker API interfaces, or Support.DetailsConnect with our sales team to get a custom quote for your organization.Viewer user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, and Looks (individual reports and charts) Looker interfaces. Viewer User privileges include data filtering, drill-to-row-level-detail, Scheduling, data downloads, and view-only access to LookML. Viewer user privileges do not include Dashboard or Look creation, or access to Development Mode, Administration, SQL Runner, Explore, the Looker API interfaces, or Support.DetailsConnect with our sales team to get a custom quote for your organization.Learn more about Looker pricing. 
View all pricing detailsPricing CalculatorEstimate your monthly Looker costs.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptRequest a demoRequest a demoHave a unique project or use case?Contact salesLearn how to send and share content in LookerRead guideLearn how to create and edit dashboards and reports in LookerRead guideLearn how to use embedding and the API in LookerRead guide \ No newline at end of file diff --git a/Looker(2).txt b/Looker(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..3a893afa088ed30e5d28a84cda764a3aebb39ffb --- /dev/null +++ b/Looker(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/looker +Date Scraped: 2025-02-23T12:03:31.834Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.LookerAnalyze governed data, deliver business insights, and build AI-powered applicationsBuild the foundation for responsible data insights with Looker. Leveraging Google’s deep roots and track record of AI-led innovation, Looker delivers the most intelligent BI solution by combining foundational AI, cloud-first infrastructure, industry leading APIs, and our flexible semantic layer.Try it freeRequest demoProduct highlights:API-first platform with composable BIBusiness-friendly AI powered analytics with governed, modeled dataOpen and trusted semantic modelGoogle-easy dashboardingUnleash Data's Full Potential: The Looker Story2:54 videoFeaturesTransform your data landscape into a unified, trusted source for AI and human analysis alike with Looker’s universal semantic modeling layerLooker’s trusted modeling layer provides a single place to curate and govern the metrics most important to your business. This means that users will see consistent results regardless of where they are consumed. In order for you to use this consistent trustworthy information in your tools of choice, we have opened up the Looker modeling layer to our ecosystem and are actively engaged with our partners to ensure the information in your Looker platform doesn’t stay there. Gemini in Looker: AI-powered analytics for self-service BIAn AI assistant that helps accelerate analytical workflows in Looker like the creation and configuration of visualizations, formula creation, data modeling, report creation, and more — all underpinned by both a robust semantic and Gemini foundational models. With Gemini foundational models, RAG, and more, you’ll see relentless focus on the quality outputs of the models and rapid innovation on a reimagined UX that is both collaborative and conversational for mass appeal.Build custom data experiences and data apps with Looker’s powerful embedded capabilitiesEmbedded analytics goes beyond placing dashboards in apps. It's about transforming data into deeply integrated, value-driving experiences. With Looker, fully interactive dashboards can be seamlessly integrated into your applications. The robust API coverage allows you to do just about anything from the UI through API. This unlocks limitless possibilities to create data experiences beyond a Looker dashboard.
In addition, Looker extensions integrate directly with Vertex AI, enabling powerful custom AI workflows and advanced analytics within your Looker instance.Bring your insights to life with out-of-the-box visual, real-time, and self-service analyticsLooker offers two ways to empower your users with self-service capabilities — Looker and Looker Studio.Looker offers enterprise dashboards that are real-time, built on governed data, offers repeatable analysis, and inspires in-depth understanding of the data. Users can explore existing tiles, asking new questions, expanding filters, and drilling down to row level detail to fully understand the data behind a metric.Looker Studio offers interactive, collaborative, and ad hoc reports and dashboards, with access to over 800 data sources and connectors and a flexible drag-and-drop canvas. You have the option to perform ad hoc analysis on both governed and unmodeled data. It's free, fast, and easy to start. Simplify and secure your analytics with Looker on Google CloudLooker on Google Cloud takes the existing powerful capabilities from the Looker platform and seamlessly integrates it into the secure Google Cloud ecosystem. Features our customers have come to love and expect in all Google Cloud products are now also available with Looker on Google Cloud, including SSO with Google Cloud IAM, private networking, seamless integration with BigQuery, and a unified Terms of Service.View all featuresHow It WorksLooker is Google for your business data and enables you to make your organization's information accessible and useful. Looker Studio enables you to tell impactful, insightful stories with engaging reports and data visualizations.Together, you put inspiration and opportunity into the hands of the people who make things happen.Get started with LookerLooker: Google for your business dataCommon UsesReduce cross-cloud spendingLooker for cloud cost managementMany large companies now use multiple cloud platforms (multicloud or hybrid cloud). This leads to complex billing from multiple providers, and current solutions often lack transparency or are too expensive. Companies need a cost-effective way to gain insights into their cross-cloud spending and optimize their costs.With Looker, you get robust out-of-the-box reporting for your day-to-day needs and long-term optimization initiatives. Looker helps you easily identify the one-off high cost problems faster and slow burn global inefficiencies sooner.Looker for cloud cost managementTutorials, quickstarts, & labsLooker for cloud cost managementMany large companies now use multiple cloud platforms (multicloud or hybrid cloud). This leads to complex billing from multiple providers, and current solutions often lack transparency or are too expensive. Companies need a cost-effective way to gain insights into their cross-cloud spending and optimize their costs.With Looker, you get robust out-of-the-box reporting for your day-to-day needs and long-term optimization initiatives. Looker helps you easily identify the one-off high cost problems faster and slow burn global inefficiencies sooner.Looker for cloud cost managementAnalyze marketing dataLooker for Google Marketing PlatformMarketers need to analyze and activate their first-party data but lack a friendly, interactive UI from which to do it from. While this has always been a desire/need, the shifts in data regulations and browser privacy has made this a must-solve problem.Looker acts as a user interface for this solution. 
With out-of-the-box analytics (data exploration and dashboards), you can build custom segments from first party data. Looker also offers packaged activation paths to places like Google Analytics, so you can activate those segments.Looker for Google Marketing PlatformTutorials, quickstarts, & labsLooker for Google Marketing PlatformMarketers need to analyze and activate their first-party data but lack a friendly, interactive UI from which to do it from. While this has always been a desire/need, the shifts in data regulations and browser privacy has made this a must-solve problem.Looker acts as a user interface for this solution. With out-of-the-box analytics (data exploration and dashboards), you can build custom segments from first party data. Looker also offers packaged activation paths to places like Google Analytics, so you can activate those segments.Looker for Google Marketing PlatformCreate custom data and AI applicationsLooker for gen AI applicationsCompanies are eager to increase data analytics adoption, but the technical learning curve hinders widespread use. While generative AI (gen AI) promises to transform data interaction, businesses struggle to get started and have concerns about the reliability of AI-generated results.Looker adds an API-first platform that is easy to create and customize data applications, a semantic layer to provide trusted metrics, an in-database architecture to optimize the performance and scalability of the cloud, and gen AI integration to provide a natural language interface into the application.Looker gen AI extension on GitHubTutorials, quickstarts, & labsLooker for gen AI applicationsCompanies are eager to increase data analytics adoption, but the technical learning curve hinders widespread use. While generative AI (gen AI) promises to transform data interaction, businesses struggle to get started and have concerns about the reliability of AI-generated results.Looker adds an API-first platform that is easy to create and customize data applications, a semantic layer to provide trusted metrics, an in-database architecture to optimize the performance and scalability of the cloud, and gen AI integration to provide a natural language interface into the application.Looker gen AI extension on GitHubMonetize your business dataLooker for data monetizationData monetization is the process of leveraging the data your organization already collects to create new revenue streams or business value. Looker is ideal for data monetization because it allows you to create tailored data products, embed analytics seamlessly, scale with your business, and maintain data security.Monetize data with embedded analyticsTutorials, quickstarts, & labsLooker for data monetizationData monetization is the process of leveraging the data your organization already collects to create new revenue streams or business value. Looker is ideal for data monetization because it allows you to create tailored data products, embed analytics seamlessly, scale with your business, and maintain data security.Monetize data with embedded analyticsMore productive collaborationLooker for WorkspaceCustomers struggle to understand the usage of productivity and collaboration solutions like Google Workspace across the organization, and are often unprepared for security audits of these tools. With the Looker Block for Workspace, you get modeled schemas for Gmail, login, Drive, Meet, and rules. 
In addition, you get two out-of-the-box dashboards on Security Audit and Adoption and Collaboration, where you can drill down to row level audit log details.Other Looker for Workspace solutions include: Looker for Connected Sheets, to explore modeled data from Looker within Sheets and bring the power of LookML to the spreadsheet, and Looker Studio integration with Google Sheets, transforming Sheets data into interactive visualizations.Visit the Looker MarketplaceTutorials, quickstarts, & labsLooker for WorkspaceCustomers struggle to understand the usage of productivity and collaboration solutions like Google Workspace across the organization, and are often unprepared for security audits of these tools. With the Looker Block for Workspace, you get modeled schemas for Gmail, login, Drive, Meet, and rules. In addition, you get two out-of-the-box dashboards on Security Audit and Adoption and Collaboration, where you can drill down to row level audit log details.Other Looker for Workspace solutions include: Looker for Connected Sheets, to explore modeled data from Looker within Sheets and bring the power of LookML to the spreadsheet, and Looker Studio integration with Google Sheets, transforming Sheets data into interactive visualizations.Visit the Looker MarketplaceA dynamic data analytics powerhouseLooker for BigQueryLooker and BigQuery together form a dynamic data analytics powerhouse, transforming raw information into actionable insights that drive business growth. BigQuery's unparalleled ability to store and process massive datasets seamlessly integrates with Looker's intuitive, Google-friendly platform.With Looker's semantic modeling layer, complex data relationships are simplified, ensuring a single source of truth and fostering collaboration across your organization. From interactive dashboards and custom applications to real-time insights and AI-powered analytics, Looker and BigQuery empower you to make smarter decisions, faster, and stay ahead of the competition.Unlock BigQuery's full potential with LookerTutorials, quickstarts, & labsLooker for BigQueryLooker and BigQuery together form a dynamic data analytics powerhouse, transforming raw information into actionable insights that drive business growth. BigQuery's unparalleled ability to store and process massive datasets seamlessly integrates with Looker's intuitive, Google-friendly platform.With Looker's semantic modeling layer, complex data relationships are simplified, ensuring a single source of truth and fostering collaboration across your organization. From interactive dashboards and custom applications to real-time insights and AI-powered analytics, Looker and BigQuery empower you to make smarter decisions, faster, and stay ahead of the competition.Unlock BigQuery's full potential with LookerPricingLooker pricingLooker pricing has two components: platform pricing and user pricing. Contact sales to identify a solution that works for you.Looker productDetailsCostLooker (Google Cloud core)Platform pricing is the cost to run a Looker (Google Cloud core) instance and includes platform administration, integrations, and semantic modeling capabilities.User pricing is the cost for licensing individual users to access the Looker platform. 
These costs will vary based on the type of user and their permissions within the Looker (Google Cloud core) platform.Work with sales to identify a solution that works for you.Edition pricingStandard: A Looker (Google Cloud core) product for small organizations or teams with fewer than 50 users that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 1,000 query-based API calls per month, and up to 1,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.Enterprise: A Looker (Google Cloud core) product with enhanced security features for a wide variety of internal BI and analytics use cases that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 100,000 query-based API calls per month, and up to 10,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.Embed: A Looker (Google Cloud core) product for deploying and maintaining external analytics and custom applications at scale that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 500,000 query-based API calls per month, and up to 100,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.User licensingDeveloper user: An end user provisioned on a Looker (Google Cloud core) platform for access to any combination of Looker interfaces including Administration, LookML Models (including Development Mode), Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, Scheduling, the Looker API interfaces, and access to Support.Connect with our sales team to get a custom quote for your organization.Standard user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, and Scheduling Looker interfaces. Standard user privileges include data filtering, drill-to-row-level-detail, data downloads, Dashboard or Look creation, and view-only access to LookML. Standard user privileges do not include access to Development Mode, Administration, the Looker API interfaces, or Support.Connect with our sales team to get a custom quote for your organization.Viewer user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, and Looks (individual reports and charts) Looker interfaces. Viewer User privileges include data filtering, drill-to-row-level-detail, Scheduling, data downloads, and view-only access to LookML. Viewer user privileges do not include Dashboard or Look creation, or access to Development Mode, Administration, SQL Runner, Explore, the Looker API interfaces, or Support.Connect with our sales team to get a custom quote for your organization.Learn more about Looker pricing. View all pricing detailsLooker pricingLooker pricing has two components: platform pricing and user pricing. Contact sales to identify a solution that works for you.Looker (Google Cloud core)DetailsPlatform pricing is the cost to run a Looker (Google Cloud core) instance and includes platform administration, integrations, and semantic modeling capabilities.User pricing is the cost for licensing individual users to access the Looker platform. 
These costs will vary based on the type of user and their permissions within the Looker (Google Cloud core) platform.CostWork with sales to identify a solution that works for you.Edition pricingDetailsStandard: A Looker (Google Cloud core) product for small organizations or teams with fewer than 50 users that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 1,000 query-based API calls per month, and up to 1,000 administrative API calls per month.CostAnnual commitment. Work with sales to identify a solution that works for you.Enterprise: A Looker (Google Cloud core) product with enhanced security features for a wide variety of internal BI and analytics use cases that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 100,000 query-based API calls per month, and up to 10,000 administrative API calls per month.DetailsAnnual commitment. Work with sales to identify a solution that works for you.Embed: A Looker (Google Cloud core) product for deploying and maintaining external analytics and custom applications at scale that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 500,000 query-based API calls per month, and up to 100,000 administrative API calls per month.DetailsAnnual commitment. Work with sales to identify a solution that works for you.User licensingDetailsDeveloper user: An end user provisioned on a Looker (Google Cloud core) platform for access to any combination of Looker interfaces including Administration, LookML Models (including Development Mode), Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, Scheduling, the Looker API interfaces, and access to Support.CostConnect with our sales team to get a custom quote for your organization.Standard user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, and Scheduling Looker interfaces. Standard user privileges include data filtering, drill-to-row-level-detail, data downloads, Dashboard or Look creation, and view-only access to LookML. Standard user privileges do not include access to Development Mode, Administration, the Looker API interfaces, or Support.DetailsConnect with our sales team to get a custom quote for your organization.Viewer user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, and Looks (individual reports and charts) Looker interfaces. Viewer User privileges include data filtering, drill-to-row-level-detail, Scheduling, data downloads, and view-only access to LookML. Viewer user privileges do not include Dashboard or Look creation, or access to Development Mode, Administration, SQL Runner, Explore, the Looker API interfaces, or Support.DetailsConnect with our sales team to get a custom quote for your organization.Learn more about Looker pricing. 
View all pricing detailsPricing CalculatorEstimate your monthly Looker costs.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptRequest a demoRequest a demoHave a unique project or use case?Contact salesLearn how to send and share content in LookerRead guideLearn how to create and edit dashboards and reports in LookerRead guideLearn how to use embedding and the API in LookerRead guide \ No newline at end of file diff --git a/Looker.txt b/Looker.txt new file mode 100644 index 0000000000000000000000000000000000000000..4b5bb46c8243fa6c93dd3c91f3dd23fb74ab4884 --- /dev/null +++ b/Looker.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/looker +Date Scraped: 2025-02-23T12:01:37.716Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register today.LookerAnalyze governed data, deliver business insights, and build AI-powered applicationsBuild the foundation for responsible data insights with Looker. Leveraging Google’s deep roots and track record of AI-led innovation, Looker delivers the most intelligent BI solution by combining foundational AI, cloud-first infrastructure, industry leading APIs, and our flexible semantic layer.Try it freeRequest demoProduct highlights:API-first platform with composable BIBusiness-friendly AI powered analytics with governed, modeled dataOpen and trusted semantic modelGoogle-easy dashboardingUnleash Data's Full Potential: The Looker Story2:54 videoFeaturesTransform your data landscape into a unified, trusted source for AI and human analysis alike with Looker’s universal semantic modeling layerLooker’s trusted modeling layer provides a single place to curate and govern the metrics most important to your business. This means that users will see consistent results regardless of where they are consumed. In order for you to use this consistent trustworthy information in your tools of choice, we have opened up the Looker modeling layer to our ecosystem and are actively engaged with our partners to ensure the information in your Looker platform doesn’t stay there. Gemini in Looker: AI-powered analytics for self-service BIAn AI assistant that helps accelerate analytical workflows in Looker like the creation and configuration of visualizations, formula creation, data modeling, report creation, and more — all underpinned by both a robust semantic and Gemini foundational models. With Gemini foundational models, RAG, and more, you’ll see relentless focus on the quality outputs of the models and rapid innovation on a reimagined UX that is both collaborative and conversational for mass appeal.Build custom data experiences and data apps with Looker’s powerful embedded capabilitiesEmbedded analytics goes beyond placing dashboards in apps. It's about transforming data into deeply integrated, value-driving experiences. With Looker, fully interactive dashboards can be seamlessly integrated into your applications. The robust API coverage allows you to do just about anything from the UI through API. This unlocks limitless possibilities to create data experiences beyond a Looker dashboard.
In addition, Looker extensions integrate directly with Vertex AI, enabling powerful custom AI workflows and advanced analytics within your Looker instance.Bring your insights to life with out-of-the-box visual, real-time, and self-service analyticsLooker offers two ways to empower your users with self-service capabilities — Looker and Looker Studio.Looker offers enterprise dashboards that are real-time, built on governed data, offers repeatable analysis, and inspires in-depth understanding of the data. Users can explore existing tiles, asking new questions, expanding filters, and drilling down to row level detail to fully understand the data behind a metric.Looker Studio offers interactive, collaborative, and ad hoc reports and dashboards, with access to over 800 data sources and connectors and a flexible drag-and-drop canvas. You have the option to perform ad hoc analysis on both governed and unmodeled data. It's free, fast, and easy to start. Simplify and secure your analytics with Looker on Google CloudLooker on Google Cloud takes the existing powerful capabilities from the Looker platform and seamlessly integrates it into the secure Google Cloud ecosystem. Features our customers have come to love and expect in all Google Cloud products are now also available with Looker on Google Cloud, including SSO with Google Cloud IAM, private networking, seamless integration with BigQuery, and a unified Terms of Service.View all featuresHow It WorksLooker is Google for your business data and enables you to make your organization's information accessible and useful. Looker Studio enables you to tell impactful, insightful stories with engaging reports and data visualizations.Together, you put inspiration and opportunity into the hands of the people who make things happen.Get started with LookerLooker: Google for your business dataCommon UsesReduce cross-cloud spendingLooker for cloud cost managementMany large companies now use multiple cloud platforms (multicloud or hybrid cloud). This leads to complex billing from multiple providers, and current solutions often lack transparency or are too expensive. Companies need a cost-effective way to gain insights into their cross-cloud spending and optimize their costs.With Looker, you get robust out-of-the-box reporting for your day-to-day needs and long-term optimization initiatives. Looker helps you easily identify the one-off high cost problems faster and slow burn global inefficiencies sooner.Looker for cloud cost managementTutorials, quickstarts, & labsLooker for cloud cost managementMany large companies now use multiple cloud platforms (multicloud or hybrid cloud). This leads to complex billing from multiple providers, and current solutions often lack transparency or are too expensive. Companies need a cost-effective way to gain insights into their cross-cloud spending and optimize their costs.With Looker, you get robust out-of-the-box reporting for your day-to-day needs and long-term optimization initiatives. Looker helps you easily identify the one-off high cost problems faster and slow burn global inefficiencies sooner.Looker for cloud cost managementAnalyze marketing dataLooker for Google Marketing PlatformMarketers need to analyze and activate their first-party data but lack a friendly, interactive UI from which to do it from. While this has always been a desire/need, the shifts in data regulations and browser privacy has made this a must-solve problem.Looker acts as a user interface for this solution. 
With out-of-the-box analytics (data exploration and dashboards), you can build custom segments from first party data. Looker also offers packaged activation paths to places like Google Analytics, so you can activate those segments.Looker for Google Marketing PlatformTutorials, quickstarts, & labsLooker for Google Marketing PlatformMarketers need to analyze and activate their first-party data but lack a friendly, interactive UI from which to do it from. While this has always been a desire/need, the shifts in data regulations and browser privacy has made this a must-solve problem.Looker acts as a user interface for this solution. With out-of-the-box analytics (data exploration and dashboards), you can build custom segments from first party data. Looker also offers packaged activation paths to places like Google Analytics, so you can activate those segments.Looker for Google Marketing PlatformCreate custom data and AI applicationsLooker for gen AI applicationsCompanies are eager to increase data analytics adoption, but the technical learning curve hinders widespread use. While generative AI (gen AI) promises to transform data interaction, businesses struggle to get started and have concerns about the reliability of AI-generated results.Looker adds an API-first platform that is easy to create and customize data applications, a semantic layer to provide trusted metrics, an in-database architecture to optimize the performance and scalability of the cloud, and gen AI integration to provide a natural language interface into the application.Looker gen AI extension on GitHubTutorials, quickstarts, & labsLooker for gen AI applicationsCompanies are eager to increase data analytics adoption, but the technical learning curve hinders widespread use. While generative AI (gen AI) promises to transform data interaction, businesses struggle to get started and have concerns about the reliability of AI-generated results.Looker adds an API-first platform that is easy to create and customize data applications, a semantic layer to provide trusted metrics, an in-database architecture to optimize the performance and scalability of the cloud, and gen AI integration to provide a natural language interface into the application.Looker gen AI extension on GitHubMonetize your business dataLooker for data monetizationData monetization is the process of leveraging the data your organization already collects to create new revenue streams or business value. Looker is ideal for data monetization because it allows you to create tailored data products, embed analytics seamlessly, scale with your business, and maintain data security.Monetize data with embedded analyticsTutorials, quickstarts, & labsLooker for data monetizationData monetization is the process of leveraging the data your organization already collects to create new revenue streams or business value. Looker is ideal for data monetization because it allows you to create tailored data products, embed analytics seamlessly, scale with your business, and maintain data security.Monetize data with embedded analyticsMore productive collaborationLooker for WorkspaceCustomers struggle to understand the usage of productivity and collaboration solutions like Google Workspace across the organization, and are often unprepared for security audits of these tools. With the Looker Block for Workspace, you get modeled schemas for Gmail, login, Drive, Meet, and rules. 
In addition, you get two out-of-the-box dashboards on Security Audit and Adoption and Collaboration, where you can drill down to row level audit log details.Other Looker for Workspace solutions include: Looker for Connected Sheets, to explore modeled data from Looker within Sheets and bring the power of LookML to the spreadsheet, and Looker Studio integration with Google Sheets, transforming Sheets data into interactive visualizations.Visit the Looker MarketplaceTutorials, quickstarts, & labsLooker for WorkspaceCustomers struggle to understand the usage of productivity and collaboration solutions like Google Workspace across the organization, and are often unprepared for security audits of these tools. With the Looker Block for Workspace, you get modeled schemas for Gmail, login, Drive, Meet, and rules. In addition, you get two out-of-the-box dashboards on Security Audit and Adoption and Collaboration, where you can drill down to row level audit log details.Other Looker for Workspace solutions include: Looker for Connected Sheets, to explore modeled data from Looker within Sheets and bring the power of LookML to the spreadsheet, and Looker Studio integration with Google Sheets, transforming Sheets data into interactive visualizations.Visit the Looker MarketplaceA dynamic data analytics powerhouseLooker for BigQueryLooker and BigQuery together form a dynamic data analytics powerhouse, transforming raw information into actionable insights that drive business growth. BigQuery's unparalleled ability to store and process massive datasets seamlessly integrates with Looker's intuitive, Google-friendly platform.With Looker's semantic modeling layer, complex data relationships are simplified, ensuring a single source of truth and fostering collaboration across your organization. From interactive dashboards and custom applications to real-time insights and AI-powered analytics, Looker and BigQuery empower you to make smarter decisions, faster, and stay ahead of the competition.Unlock BigQuery's full potential with LookerTutorials, quickstarts, & labsLooker for BigQueryLooker and BigQuery together form a dynamic data analytics powerhouse, transforming raw information into actionable insights that drive business growth. BigQuery's unparalleled ability to store and process massive datasets seamlessly integrates with Looker's intuitive, Google-friendly platform.With Looker's semantic modeling layer, complex data relationships are simplified, ensuring a single source of truth and fostering collaboration across your organization. From interactive dashboards and custom applications to real-time insights and AI-powered analytics, Looker and BigQuery empower you to make smarter decisions, faster, and stay ahead of the competition.Unlock BigQuery's full potential with LookerPricingLooker pricingLooker pricing has two components: platform pricing and user pricing. Contact sales to identify a solution that works for you.Looker productDetailsCostLooker (Google Cloud core)Platform pricing is the cost to run a Looker (Google Cloud core) instance and includes platform administration, integrations, and semantic modeling capabilities.User pricing is the cost for licensing individual users to access the Looker platform. 
These costs will vary based on the type of user and their permissions within the Looker (Google Cloud core) platform.Work with sales to identify a solution that works for you.Edition pricingStandard: A Looker (Google Cloud core) product for small organizations or teams with fewer than 50 users that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 1,000 query-based API calls per month, and up to 1,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.Enterprise: A Looker (Google Cloud core) product with enhanced security features for a wide variety of internal BI and analytics use cases that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 100,000 query-based API calls per month, and up to 10,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.Embed: A Looker (Google Cloud core) product for deploying and maintaining external analytics and custom applications at scale that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 500,000 query-based API calls per month, and up to 100,000 administrative API calls per month.Annual commitment. Work with sales to identify a solution that works for you.User licensingDeveloper user: An end user provisioned on a Looker (Google Cloud core) platform for access to any combination of Looker interfaces including Administration, LookML Models (including Development Mode), Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, Scheduling, the Looker API interfaces, and access to Support.Connect with our sales team to get a custom quote for your organization.Standard user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, and Scheduling Looker interfaces. Standard user privileges include data filtering, drill-to-row-level-detail, data downloads, Dashboard or Look creation, and view-only access to LookML. Standard user privileges do not include access to Development Mode, Administration, the Looker API interfaces, or Support.Connect with our sales team to get a custom quote for your organization.Viewer user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, and Looks (individual reports and charts) Looker interfaces. Viewer User privileges include data filtering, drill-to-row-level-detail, Scheduling, data downloads, and view-only access to LookML. Viewer user privileges do not include Dashboard or Look creation, or access to Development Mode, Administration, SQL Runner, Explore, the Looker API interfaces, or Support.Connect with our sales team to get a custom quote for your organization.Learn more about Looker pricing. View all pricing detailsLooker pricingLooker pricing has two components: platform pricing and user pricing. Contact sales to identify a solution that works for you.Looker (Google Cloud core)DetailsPlatform pricing is the cost to run a Looker (Google Cloud core) instance and includes platform administration, integrations, and semantic modeling capabilities.User pricing is the cost for licensing individual users to access the Looker platform. 
These costs will vary based on the type of user and their permissions within the Looker (Google Cloud core) platform.CostWork with sales to identify a solution that works for you.Edition pricingDetailsStandard: A Looker (Google Cloud core) product for small organizations or teams with fewer than 50 users that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 1,000 query-based API calls per month, and up to 1,000 administrative API calls per month.CostAnnual commitment. Work with sales to identify a solution that works for you.Enterprise: A Looker (Google Cloud core) product with enhanced security features for a wide variety of internal BI and analytics use cases that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 100,000 query-based API calls per month, and up to 10,000 administrative API calls per month.DetailsAnnual commitment. Work with sales to identify a solution that works for you.Embed: A Looker (Google Cloud core) product for deploying and maintaining external analytics and custom applications at scale that includes one production instance, 10 standard users, 2 developer users, upgrades, up to 500,000 query-based API calls per month, and up to 100,000 administrative API calls per month.DetailsAnnual commitment. Work with sales to identify a solution that works for you.User licensingDetailsDeveloper user: An end user provisioned on a Looker (Google Cloud core) platform for access to any combination of Looker interfaces including Administration, LookML Models (including Development Mode), Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, Scheduling, the Looker API interfaces, and access to Support.CostConnect with our sales team to get a custom quote for your organization.Standard user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, Looks (individual reports and charts), Explore, SQL Runner, and Scheduling Looker interfaces. Standard user privileges include data filtering, drill-to-row-level-detail, data downloads, Dashboard or Look creation, and view-only access to LookML. Standard user privileges do not include access to Development Mode, Administration, the Looker API interfaces, or Support.DetailsConnect with our sales team to get a custom quote for your organization.Viewer user: An end user provisioned on a Looker (Google Cloud core) platform for access to Folders, Boards, Dashboards, and Looks (individual reports and charts) Looker interfaces. Viewer User privileges include data filtering, drill-to-row-level-detail, Scheduling, data downloads, and view-only access to LookML. Viewer user privileges do not include Dashboard or Look creation, or access to Development Mode, Administration, SQL Runner, Explore, the Looker API interfaces, or Support.DetailsConnect with our sales team to get a custom quote for your organization.Learn more about Looker pricing. 
View all pricing detailsPricing CalculatorEstimate your monthly Looker costs.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptRequest a demoRequest a demoHave a unique project or use case?Contact salesLearn how to send and share content in LookerRead guideLearn how to create and edit dashboards and reports in LookerRead guideLearn how to use embedding and the API in LookerRead guide \ No newline at end of file diff --git a/Looker_Studio.txt b/Looker_Studio.txt new file mode 100644 index 0000000000000000000000000000000000000000..38277fdfbd09ff6e237c74eea636307e67b21a56 --- /dev/null +++ b/Looker_Studio.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/looker-studio +Date Scraped: 2025-02-23T12:02:30.234Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayJump to Looker StudioLooker StudioSelf-service business intelligence with unmatched flexibility for smarter business decisions.Get startedDeploy nowDeploy an example data warehouse or analytics lakehouse solution to store, analyze, and visualize data using BigQuery and Looker Studio.Tell impactful stories by creating and sharing engaging reports and data visualizationsUnite your data by easily connecting to more than 800 data sourcesTransform your data to impactful business metrics and dimensions with intuitive smart reportsCreate meaningful, shareable, customizable charts and graphs with just a few clicksVIDEOLooker and Looker Studio: Google for your business data1:34BenefitsYour data is beautiful—use itLooker Studio unlocks the power of your data by making it easy to create interactive dashboards and compelling reports from a wide variety of sources, driving smarter business decisions.Connect to data without limitsYou can access a wide variety of data sources through the more than 600 partner connectors that instantly enable you to connect virtually any kind of data, without any coding or software.Share your data storyYou can share your compelling reports with your team or with the world, collaborate in real time, or embed your reports on the web.Key featuresKey featuresAn easy-to-use web interfaceLooker Studio is designed to be intuitive and easy to use. The report editor features simple drag-and-drop objects with fully custom property panels and a snap-to-grid canvas.Report templatesWith a robust library of report templates to choose from, you can visualize your data in minutes. Connect your data sources and customize the design to meet your needs.Data connectorsData sources act as pipes to connect a Looker Studio report to underlying data. Each source has a unique, prebuilt connector to ensure your data is easy to access and use.Looker Studio APIThe Looker Studio API allows Google Workspace or Cloud Identity organizations to automate management and migration of Looker Studio assets.
You can configure an application to use the Looker Studio API quickly and easily.Report embeddingEmbedding allows you to include your Looker Studio report in any web page or intranet, making it easier for you to tell your data story to your team or the world.Combine data from different sources and display it in an insightful dashboardWhat's newGet the latest Looker Studio news, updates, features, benefits, and resourcesSign up for the monthly Looker Studio newsletter to learn about product updates, read the release notes, or learn to create reports with Looker studio. Blog postGetting started with Looker Studio ProLearn moreBlog postIntroducing the next evolution of Looker, your unified BI platformLearn moreBlog postIntroducing Looker Studio as our newest Google Cloud serviceLearn moreDocumentationDocumentationGoogle Cloud BasicsPut the power of your marketing insights in everyone’s handsSee how you can build vibrant reports and dashboards with just a few clicks, and leverage reusable templates to generate fast and professional visualizations.Learn moreQuickstartLearn the fundamentals of working with Looker StudioLeverage our quickstart guide to rapidly get up and running with Looker Studio, sparking your data visualization journey.Learn moreUse CaseLooker Studio ProLooker Studio Pro makes it easy to manage access to your reports and data source at scale with two new features: team workspaces and Google Cloud project linking.Learn moreUse CaseDeveloping solutions with BigQuery and Looker StudioThis video walks through how to use BigQuery with Looker Studio and leverage Google Cloud’s cost-effective data warehouse to bring big data into valuable insights.Learn moreGoogle Cloud BasicsBigQuery data visualization in Looker Studio tutorialExplore BigQuery data using Looker Studio. Connect your data, create visualizations, and share your insights with others. Learn moreTutorialExplore and Create Reports with Looker StudioTake this lab to learn how to create new reports and explore your ecommerce dataset visually for insights.Start labNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesWhat's new in Looker Studio?PricingPricingLooker Studio is available at no charge for creators and report viewers.Enterprise customers who upgrade to Looker Studio Pro will receive support and expanded administrative features, including team content management. You can upgrade to Looker Studio Pro directly from the Looker Studio website.Looker Studio pricingPricing per user per month for Looker Studio Pro varies depending on the length of the customer’s subscription. 
Our self-service tier remains available at no cost.ServicePriceLooker StudioSelf-service business intelligenceNo chargeLooker Studio Pro(Project subscription)Department level business intelligence with Google Cloud support and system administration.$9 per user per project per monthTake the next stepStart your next project, explore interactive tutorials, and manage your account.Launch Looker StudioNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/MLOps-_Continuous_delivery_and_automation_pipelines_in_machine_learning.txt b/MLOps-_Continuous_delivery_and_automation_pipelines_in_machine_learning.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1cc1c5056c86f08c43856b85f3875a414bc6445 --- /dev/null +++ b/MLOps-_Continuous_delivery_and_automation_pipelines_in_machine_learning.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning +Date Scraped: 2025-02-23T11:46:12.867Z + +Content: +Home Docs Cloud Architecture Center Send feedback MLOps: Continuous delivery and automation pipelines in machine learning Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-28 UTC This document discusses techniques for implementing and automating continuous integration (CI), continuous delivery (CD), and continuous training (CT) for machine learning (ML) systems. This document applies primarily to predictive AI systems. Data science and ML are becoming core capabilities for solving complex real-world problems, transforming industries, and delivering value in all domains. Currently, the ingredients for applying effective ML are available to you: Large datasets Inexpensive on-demand compute resources Specialized accelerators for ML on various cloud platforms Rapid advances in different ML research fields (such as computer vision, natural language understanding, generative AI, and recommendations AI systems). Therefore, many businesses are investing in their data science teams and ML capabilities to develop predictive models that can deliver business value to their users. This document is for data scientists and ML engineers who want to apply DevOps principles to ML systems (MLOps). MLOps is an ML engineering culture and practice that aims at unifying ML system development (Dev) and ML system operation (Ops). Practicing MLOps means that you advocate for automation and monitoring at all steps of ML system construction, including integration, testing, releasing, deployment, and infrastructure management. Data scientists can implement and train an ML model with predictive performance on an offline holdout dataset, given relevant training data for their use case. However, the real challenge isn't building an ML model; the challenge is building an integrated ML system and continuously operating it in production. With the long history of production ML services at Google, we've learned that there can be many pitfalls in operating ML-based systems in production. Some of these pitfalls are summarized in Machine Learning: The High Interest Credit Card of Technical Debt. As shown in the following diagram, only a small fraction of a real-world ML system is composed of the ML code. The required surrounding elements are vast and complex. Figure 1. Elements for ML systems.
Adapted from Hidden Technical Debt in Machine Learning Systems. The preceding diagram displays the following system components: Configuration Automation Data collection Data verification Testing and debugging Resource management Model analysis Process and metadata management Serving infrastructure Monitoring To develop and operate complex systems like these, you can apply DevOps principles to ML systems (MLOps). This document covers concepts to consider when setting up an MLOps environment for your data science practices, such as CI, CD, and CT in ML. The following topics are discussed: DevOps versus MLOps Steps for developing ML models MLOps maturity levels MlOps for generative AI DevOps versus MLOps DevOps is a popular practice in developing and operating large-scale software systems. This practice provides benefits such as shortening the development cycles, increasing deployment velocity, and dependable releases. To achieve these benefits, you introduce two concepts in the software system development: Continuous integration (CI) Continuous delivery (CD) An ML system is a software system, so similar practices apply to help guarantee that you can reliably build and operate ML systems at scale. However, ML systems differ from other software systems in the following ways: Team skills: In an ML project, the team usually includes data scientists or ML researchers, who focus on exploratory data analysis, model development, and experimentation. These members might not be experienced software engineers who can build production-class services. Development: ML is experimental in nature. You should try different features, algorithms, modeling techniques, and parameter configurations to find what works best for the problem as quickly as possible. The challenge is tracking what worked and what didn't, and maintaining reproducibility while maximizing code reusability. Testing: Testing an ML system is more involved than testing other software systems. In addition to typical unit and integration tests, you need data validation, trained model quality evaluation, and model validation. Deployment: In ML systems, deployment isn't as simple as deploying an offline-trained ML model as a prediction service. ML systems can require you to deploy a multi-step pipeline to automatically retrain and deploy models. This pipeline adds complexity and requires you to automate steps that are manually done before deployment by data scientists to train and validate new models. Production: ML models can have reduced performance not only due to suboptimal coding, but also due to constantly evolving data profiles. In other words, models can decay in more ways than conventional software systems, and you need to consider this degradation. Therefore, you need to track summary statistics of your data and monitor the online performance of your model to send notifications or roll back when values deviate from your expectations. ML and other software systems are similar in continuous integration of source control, unit testing, integration testing, and continuous delivery of the software module or the package. However, in ML, there are a few notable differences: CI is no longer only about testing and validating code and components, but also testing and validating data, data schemas, and models. CD is no longer about a single software package or a service, but a system (an ML training pipeline) that should automatically deploy another service (model prediction service). 
CT is a new property, unique to ML systems, that's concerned with automatically retraining and serving the models. The following section discusses the typical steps for training and evaluating an ML model to serve as a prediction service. Data science steps for ML In any ML project, after you define the business use case and establish the success criteria, the process of delivering an ML model to production involves the following steps. These steps can be completed manually or can be completed by an automatic pipeline. Data extraction: You select and integrate the relevant data from various data sources for the ML task. Data analysis: You perform exploratory data analysis (EDA) to understand the available data for building the ML model. This process leads to the following: Understanding the data schema and characteristics that are expected by the model. Identifying the data preparation and feature engineering that are needed for the model. Data preparation: The data is prepared for the ML task. This preparation involves data cleaning, where you split the data into training, validation, and test sets. You also apply data transformations and feature engineering to the model that solves the target task. The output of this step are the data splits in the prepared format. Model training: The data scientist implements different algorithms with the prepared data to train various ML models. In addition, you subject the implemented algorithms to hyperparameter tuning to get the best performing ML model. The output of this step is a trained model. Model evaluation: The model is evaluated on a holdout test set to evaluate the model quality. The output of this step is a set of metrics to assess the quality of the model. Model validation: The model is confirmed to be adequate for deployment—that its predictive performance is better than a certain baseline. Model serving: The validated model is deployed to a target environment to serve predictions. This deployment can be one of the following: Microservices with a REST API to serve online predictions. An embedded model to an edge or mobile device. Part of a batch prediction system. Model monitoring: The model predictive performance is monitored to potentially invoke a new iteration in the ML process. The level of automation of these steps defines the maturity of the ML process, which reflects the velocity of training new models given new data or training new models given new implementations. The following sections describe three levels of MLOps, starting from the most common level, which involves no automation, up to automating both ML and CI/CD pipelines. MLOps level 0: Manual process Many teams have data scientists and ML researchers who can build state-of-the-art models, but their process for building and deploying ML models is entirely manual. This is considered the basic level of maturity, or level 0. The following diagram shows the workflow of this process. Figure 2. Manual ML steps to serve the model as a prediction service. Characteristics The following list highlights the characteristics of the MLOps level 0 process, as shown in Figure 2: Manual, script-driven, and interactive process: Every step is manual, including data analysis, data preparation, model training, and validation. It requires manual execution of each step, and manual transition from one step to another. This process is usually driven by experimental code that is interactively written and executed in notebooks by data scientists, until a workable model is produced. 
Disconnection between ML and operations: The process separates data scientists who create the model and engineers who serve the model as a prediction service. The data scientists hand over a trained model as an artifact to the engineering team to deploy on their API infrastructure. This handoff can include putting the trained model in a storage location, checking the model object into a code repository, or uploading it to a model registry. Then engineers who deploy the model need to make the required features available in production for low-latency serving, which can lead to training-serving skew. Infrequent release iterations: The process assumes that your data science team manages a few models that don't change frequently—either changing model implementation or retraining the model with new data. A new model version is deployed only a couple of times per year. No CI: Because few implementation changes are assumed, CI is ignored. Usually, testing the code is part of the notebooks or script execution. The scripts and notebooks that implement the experiment steps are source controlled, and they produce artifacts such as trained models, evaluation metrics, and visualizations. No CD: Because there aren't frequent model version deployments, CD isn't considered. Deployment refers to the prediction service: The process is concerned only with deploying the trained model as a prediction service (for example, a microservice with a REST API), rather than deploying the entire ML system. Lack of active performance monitoring: The process doesn't track or log the model predictions and actions, which are required in order to detect model performance degradation and other model behavioral drifts. The engineering team might have their own complex setup for API configuration, testing, and deployment, including security, regression, and load and canary testing. In addition, production deployment of a new version of an ML model usually goes through A/B testing or online experiments before the model is promoted to serve all the prediction request traffic. Challenges MLOps level 0 is common in many businesses that are beginning to apply ML to their use cases. This manual, data-scientist-driven process might be sufficient when models are rarely changed or trained. In practice, models often break when they are deployed in the real world. The models fail to adapt to changes in the dynamics of the environment, or changes in the data that describes the environment. For more information, see Why Machine Learning Models Crash and Burn in Production. To address these challenges and to maintain your model's accuracy in production, you need to do the following: Actively monitor the quality of your model in production: Monitoring lets you detect performance degradation and model staleness. It acts as a cue to a new experimentation iteration and (manual) retraining of the model on new data. Frequently retrain your production models: To capture the evolving and emerging patterns, you need to retrain your model with the most recent data. For example, if your app recommends fashion products using ML, its recommendations should adapt to the latest trends and products. Continuously experiment with new implementations to produce the model: To harness the latest ideas and advances in technology, you need to try out new implementations such as feature engineering, model architecture, and hyperparameters. 
For example, if you use computer vision in face detection, face patterns are fixed, but new and better techniques can improve the detection accuracy. To address the challenges of this manual process, MLOps practices for CI/CD and CT are helpful. By deploying an ML training pipeline, you can enable CT, and you can set up a CI/CD system to rapidly test, build, and deploy new implementations of the ML pipeline. These features are discussed in more detail in the next sections. MLOps level 1: ML pipeline automation The goal of level 1 is to perform continuous training of the model by automating the ML pipeline; this lets you achieve continuous delivery of the model prediction service. To automate the process of using new data to retrain models in production, you need to introduce automated data and model validation steps to the pipeline, as well as pipeline triggers and metadata management. The following figure is a schematic representation of an automated ML pipeline for CT. Figure 3. ML pipeline automation for CT. Characteristics The following list highlights the characteristics of the MLOps level 1 setup, as shown in Figure 3: Rapid experiment: The steps of the ML experiment are orchestrated. The transition between steps is automated, which leads to rapid iteration of experiments and better readiness to move the whole pipeline to production. CT of the model in production: The model is automatically trained in production using fresh data based on live pipeline triggers, which are discussed in the next section. Experimental-operational symmetry: The pipeline implementation that is used in the development or experiment environment is used in the preproduction and production environment, which is a key aspect of MLOps practice for unifying DevOps. Modularized code for components and pipelines: To construct ML pipelines, components need to be reusable, composable, and potentially shareable across ML pipelines. Therefore, while the EDA code can still live in notebooks, the source code for components must be modularized. In addition, components should ideally be containerized to do the following: Decouple the execution environment from the custom code runtime. Make code reproducible between development and production environments. Isolate each component in the pipeline. Components can have their own version of the runtime environment, and have different languages and libraries. Continuous delivery of models: An ML pipeline in production continuously delivers prediction services to new models that are trained on new data. The model deployment step, which serves the trained and validated model as a prediction service for online predictions, is automated. Pipeline deployment: In level 0, you deploy a trained model as a prediction service to production. For level 1, you deploy a whole training pipeline, which automatically and recurrently runs to serve the trained model as the prediction service. Additional components This section discusses the components that you need to add to the architecture to enable ML continuous training. Data and model validation When you deploy your ML pipeline to production, one or more of the triggers discussed in the ML pipeline triggers section automatically executes the pipeline. The pipeline expects new, live data to produce a new model version that is trained on the new data (as shown in Figure 3). 
Therefore, automated data validation and model validation steps are required in the production pipeline to ensure the following expected behavior: Data validation: This step is required before model training to decide whether you should retrain the model or stop the execution of the pipeline. This decision is made automatically if the pipeline identifies either of the following. Data schema skews: These skews are considered anomalies in the input data, which means that the downstream pipeline steps, including the data processing and model training steps, receive data that doesn't comply with the expected schema. In this case, you should stop the pipeline so the data science team can investigate. The team might release a fix or an update to the pipeline to handle these changes in the schema. Schema skews include receiving unexpected features, not receiving all the expected features, or receiving features with unexpected values. Data values skews: These skews are significant changes in the statistical properties of data, which means that data patterns are changing, and you need to trigger a retraining of the model to capture these changes. Model validation: This step occurs after you successfully train the model given the new data. You evaluate and validate the model before it's promoted to production. This offline model validation step consists of the following. Producing evaluation metric values using the trained model on a test dataset to assess the model's predictive quality. Comparing the evaluation metric values produced by your newly trained model to those of the current model, for example, the production model, a baseline model, or other business-requirement models. You make sure that the new model produces better performance than the current model before promoting it to production. Making sure that the performance of the model is consistent on various segments of the data. For example, your newly trained customer churn model might produce an overall better predictive accuracy compared to the previous model, but the accuracy values per customer region might have large variance. Making sure that you test your model for deployment, including infrastructure compatibility and consistency with the prediction service API. In addition to offline model validation, a newly deployed model undergoes online model validation—in a canary deployment or an A/B testing setup—before it serves predictions for the online traffic. Feature store An optional additional component for level 1 ML pipeline automation is a feature store. A feature store is a centralized repository where you standardize the definition, storage, and access of features for training and serving. A feature store needs to provide an API for both high-throughput batch serving and low-latency real-time serving for the feature values, and to support both training and serving workloads. The feature store helps data scientists do the following: Discover and reuse available feature sets for their entities, instead of re-creating the same or similar ones. Avoid having similar features that have different definitions by maintaining features and their related metadata. Serve up-to-date feature values from the feature store. Avoid training-serving skew by using the feature store as the data source for experimentation, continuous training, and online serving. 
This approach makes sure that the features used for training are the same ones used during serving: For experimentation, data scientists can get an offline extract from the feature store to run their experiments. For continuous training, the automated ML training pipeline can fetch a batch of the up-to-date feature values of the dataset that are used for the training task. For online prediction, the prediction service can fetch a batch of the feature values related to the requested entity, such as customer demographic features, product features, and current session aggregation features. For online prediction and feature retrieval, the prediction service identifies the relevant features for an entity. For example, if the entity is a customer, relevant features might include age, purchase history, and browsing behavior. The service batches these feature values together and retrieves all the needed features for the entity at once, rather than individually. This retrieval method helps with efficiency, especially when you need to manage multiple entities. Metadata management Information about each execution of the ML pipeline is recorded in order to help with data and artifacts lineage, reproducibility, and comparisons. It also helps you debug errors and anomalies. Each time you execute the pipeline, the ML metadata store records the following metadata: The pipeline and component versions that were executed. The start and end date, time, and how long the pipeline took to complete each of the steps. The executor of the pipeline. The parameter arguments that were passed to the pipeline. The pointers to the artifacts produced by each step of the pipeline, such as the location of prepared data, validation anomalies, computed statistics, and extracted vocabulary from the categorical features. Tracking these intermediate outputs helps you resume the pipeline from the most recent step if the pipeline stopped due to a failed step, without having to re-execute the steps that have already completed. A pointer to the previous trained model if you need to roll back to a previous model version or if you need to produce evaluation metrics for a previous model version when the pipeline is given new test data during the model validation step. The model evaluation metrics produced during the model evaluation step for both the training and the testing sets. These metrics help you compare the performance of a newly trained model to the recorded performance of the previous model during the model validation step. ML pipeline triggers You can automate the ML production pipelines to retrain the models with new data, depending on your use case: On demand: Ad hoc manual execution of the pipeline. On a schedule: New, labeled data is systematically available for the ML system on a daily, weekly, or monthly basis. The retraining frequency also depends on how frequently the data patterns change, and how expensive it is to retrain your models. On availability of new training data: New data isn't systematically available for the ML system and instead is available on an ad hoc basis when new data is collected and made available in the source databases. On model performance degradation: The model is retrained when there is noticeable performance degradation. On significant changes in the data distributions (concept drift): It's hard to assess the complete performance of the online model, but you notice significant changes in the data distributions of the features that are used to perform the prediction. 
These changes suggest that your model has gone stale and that it needs to be retrained on fresh data. Challenges Assuming that new implementations of the pipeline aren't frequently deployed and you are managing only a few pipelines, you usually manually test the pipeline and its components. In addition, you manually deploy new pipeline implementations. You also submit the tested source code for the pipeline to the IT team to deploy to the target environment. This setup is suitable when you deploy new models based on new data, rather than based on new ML ideas. However, you need to try new ML ideas and rapidly deploy new implementations of the ML components. If you manage many ML pipelines in production, you need a CI/CD setup to automate the build, test, and deployment of ML pipelines. MLOps level 2: CI/CD pipeline automation For a rapid and reliable update of the pipelines in production, you need a robust automated CI/CD system. This automated CI/CD system lets your data scientists rapidly explore new ideas around feature engineering, model architecture, and hyperparameters. They can implement these ideas and automatically build, test, and deploy the new pipeline components to the target environment. The following diagram shows the implementation of the ML pipeline using CI/CD, which has the characteristics of the automated ML pipelines setup plus the automated CI/CD routines. Figure 4. CI/CD and automated ML pipeline. This MLOps setup includes the following components: Source control Test and build services Deployment services Model registry Feature store ML metadata store ML pipeline orchestrator Characteristics The following diagram shows the stages of the ML CI/CD automation pipeline: Figure 5. Stages of the CI/CD automated ML pipeline. The pipeline consists of the following stages: Development and experimentation: You iteratively try out new ML algorithms and new modeling where the experiment steps are orchestrated. The output of this stage is the source code of the ML pipeline steps that are then pushed to a source repository. Pipeline continuous integration: You build source code and run various tests. The outputs of this stage are pipeline components (packages, executables, and artifacts) to be deployed in a later stage. Pipeline continuous delivery: You deploy the artifacts produced by the CI stage to the target environment. The output of this stage is a deployed pipeline with the new implementation of the model. Automated triggering: The pipeline is automatically executed in production based on a schedule or in response to a trigger. The output of this stage is a trained model that is pushed to the model registry. Model continuous delivery: You serve the trained model as a prediction service for the predictions. The output of this stage is a deployed model prediction service. Monitoring: You collect statistics on the model performance based on live data. The output of this stage is a trigger to execute the pipeline or to execute a new experiment cycle. The data analysis step is still a manual process for data scientists before the pipeline starts a new iteration of the experiment. The model analysis step is also a manual process. Continuous integration In this setup, the pipeline and its components are built, tested, and packaged when new code is committed or pushed to the source code repository. Besides building packages, container images, and executables, the CI process can include the following tests: Unit testing your feature engineering logic. 
Unit testing the different methods implemented in your model. For example, you have a function that accepts a categorical data column and you encode it as a one-hot feature. Testing that your model training converges (that is, the loss of your model goes down over iterations and the model overfits a few sample records). Testing that your model training doesn't produce NaN values due to dividing by zero or manipulating small or large values. Testing that each component in the pipeline produces the expected artifacts. Testing integration between pipeline components. (A minimal sketch of such tests appears at the end of this document.) Continuous delivery In this level, your system continuously delivers new pipeline implementations to the target environment, which in turn delivers prediction services of the newly trained model. For rapid and reliable continuous delivery of pipelines and models, you should consider the following: Verifying the compatibility of the model with the target infrastructure before you deploy your model. For example, you need to verify that the packages that are required by the model are installed in the serving environment, and that the required memory, compute, and accelerator resources are available. Testing the prediction service by calling the service API with the expected inputs, and making sure that you get the response that you expect. This test usually captures problems that might occur when you update the model version and it expects a different input. Testing prediction service performance, which involves load testing the service to capture metrics such as queries per second (QPS) and model latency. Validating the data either for retraining or batch prediction. Verifying that models meet the predictive performance targets before they are deployed. Automated deployment to a test environment, for example, a deployment that is triggered by pushing code to the development branch. Semi-automated deployment to a pre-production environment, for example, a deployment that is triggered by merging code to the main branch after reviewers approve the changes. Manual deployment to a production environment after several successful runs of the pipeline on the pre-production environment. To summarize, implementing ML in a production environment doesn't only mean deploying your model as an API for prediction. Rather, it means deploying an ML pipeline that can automate the retraining and deployment of new models. Setting up a CI/CD system lets you automatically test and deploy new pipeline implementations. This system lets you cope with rapid changes in your data and business environment. You don't have to immediately move all of your processes from one level to another. You can gradually implement these practices to help improve the automation of your ML system development and production. What's next Learn more about Architecture for MLOps using TensorFlow Extended, Vertex AI Pipelines, and Cloud Build. Learn about the Practitioners Guide to Machine Learning Operations (MLOps). Watch the MLOps Best Practices on Google Cloud (Cloud Next '19) on YouTube. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
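The continuous integration tests listed earlier can be written with any standard Python testing framework. The following is a minimal, hypothetical sketch using pytest and NumPy; the one_hot_encode helper and the tiny gradient-descent loop are illustrative stand-ins, not part of any library or pipeline named in this document.

```python
# Minimal sketch of CI-style unit tests for ML code, assuming pytest and NumPy.
# one_hot_encode is a hypothetical helper defined here only for illustration.
import numpy as np


def one_hot_encode(column, categories):
    """Encode a categorical column as a one-hot matrix."""
    index = {cat: i for i, cat in enumerate(categories)}
    encoded = np.zeros((len(column), len(categories)))
    for row, value in enumerate(column):
        encoded[row, index[value]] = 1.0
    return encoded


def test_one_hot_encoding_shape_and_values():
    encoded = one_hot_encode(["red", "blue", "red"], categories=["red", "blue"])
    assert encoded.shape == (3, 2)
    # Exactly one hot value per row.
    assert np.array_equal(encoded.sum(axis=1), np.ones(3))


def test_features_contain_no_nan():
    features = one_hot_encode(["red", "blue"], categories=["red", "blue"])
    assert not np.isnan(features).any()


def test_training_overfits_a_few_records():
    # Convergence smoke test: loss should decrease when overfitting a tiny sample.
    rng = np.random.default_rng(seed=0)
    x = rng.normal(size=(8, 3))
    y = rng.normal(size=(8,))
    weights = np.zeros(3)
    losses = []
    for _ in range(200):  # plain gradient descent on a linear model
        error = x @ weights - y
        losses.append(float(np.mean(error ** 2)))
        weights -= 0.1 * (x.T @ error) / len(y)
    assert losses[-1] < losses[0]
    assert not np.isnan(losses[-1])
```

In a level 2 setup, tests like these would run automatically, for example in a CI service, whenever new code is committed or pushed to the source code repository.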
Contributors Authors: Jarek Kazmierczak | Solutions Architect Khalid Salama | Staff Software Engineer, Machine Learning Valentin Huerta | AI Engineer Other contributor: Sunil Kumar Jang Bahadur | Customer Engineer Send feedback \ No newline at end of file diff --git a/MLOps_using_TensorFlow_Extended,_Vertex_AI_Pipelines,_and_Cloud_Build.txt b/MLOps_using_TensorFlow_Extended,_Vertex_AI_Pipelines,_and_Cloud_Build.txt new file mode 100644 index 0000000000000000000000000000000000000000..a561c9a523ac4194ea67e715d5e888ed8fcda85a --- /dev/null +++ b/MLOps_using_TensorFlow_Extended,_Vertex_AI_Pipelines,_and_Cloud_Build.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/architecture-for-mlops-using-tfx-kubeflow-pipelines-and-cloud-build +Date Scraped: 2025-02-23T11:46:21.862Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecture for MLOps using TensorFlow Extended, Vertex AI Pipelines, and Cloud Build Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-28 UTC This document describes the overall architecture of a machine learning (ML) system using TensorFlow Extended (TFX) libraries. It also discusses how to set up continuous integration (CI), continuous delivery (CD), and continuous training (CT) for the ML system using Cloud Build and Vertex AI Pipelines. In this document, the terms ML system and ML pipeline refer to ML model training pipelines, rather than model scoring or prediction pipelines. This document is for data scientists and ML engineers who want to adapt their CI/CD practices to move ML solutions to production on Google Cloud, and who want to help ensure the quality, maintainability, and adaptability of their ML pipelines. This document covers the following topics: Understanding CI/CD and automation in ML. Designing an integrated ML pipeline with TFX. Orchestrating and automating the ML pipeline using Vertex AI Pipelines. Setting up a CI/CD system for the ML pipeline using Cloud Build. MLOps To integrate an ML system in a production environment, you need to orchestrate the steps in your ML pipeline. In addition, you need to automate the execution of the pipeline for the continuous training of your models. To experiment with new ideas and features, you need to adopt CI/CD practices in the new implementations of the pipelines. The following sections give a high-level overview of CI/CD and CT in ML. ML pipeline automation In some use cases, the manual process of training, validating, and deploying ML models can be sufficient. This manual approach works if your team manages only a few ML models that aren't retrained or aren't changed frequently. In practice, however, models often break down when deployed in the real world because they fail to adapt to changes in the dynamics of environments, or the data that describes such dynamics. For your ML system to adapt to such changes, you need to apply the following MLOps techniques: Automate the execution of the ML pipeline to retrain new models on new data to capture any emerging patterns. CT is discussed later in this document in the ML with Vertex AI Pipelines section. Set up a continuous delivery system to frequently deploy new implementations of the entire ML pipeline. CI/CD is discussed later in this document in the CI/CD setup for ML on Google Cloud section. You can automate the ML production pipelines to retrain your models with new data. 
You can trigger your pipeline on demand, on a schedule, on the availability of new data, on model performance degradation, on significant changes in the statistical properties of the data, or based on other conditions. CI/CD pipeline compared to CT pipeline The availability of new data is one trigger to retrain the ML model. The availability of a new implementation of the ML pipeline (including new model architecture, feature engineering, and hyperparameters) is another important trigger to re-execute the ML pipeline. This new implementation of the ML pipeline serves as a new version of the model prediction service, for example, a microservice with a REST API for online serving. The difference between the two cases is as follows: To train a new ML model with new data, the previously deployed CT pipeline is executed. No new pipelines or components are deployed; only a new prediction service or newly trained model is served at the end of the pipeline. To train a new ML model with a new implementation, a new pipeline is deployed through a CI/CD pipeline. To deploy new ML pipelines quickly, you need to set up a CI/CD pipeline. This pipeline is responsible for automatically deploying new ML pipelines and components when new implementations are available and approved for various environments (such as development, test, staging, pre-production, and production). The following diagram shows the relationship between the CI/CD pipeline and the ML CT pipeline. Figure 1. CI/CD and ML CT pipelines. The output for these pipelines is as follows: Given a new implementation, a successful CI/CD pipeline deploys a new ML CT pipeline. Given new data, a successful CT pipeline trains a new model and deploys it as a prediction service. Designing a TFX-based ML system The following sections discuss how to design an integrated ML system using TensorFlow Extended (TFX) to set up a CI/CD pipeline for the ML system. Although there are several frameworks for building ML models, TFX is an integrated ML platform for developing and deploying production ML systems. A TFX pipeline is a sequence of components that implement an ML system. This TFX pipeline is designed for scalable, high-performance ML tasks. These tasks include modeling, training, validation, serving inference, and managing deployments. The key libraries of TFX are as follows: TensorFlow Data Validation (TFDV): Used for detecting anomalies in the data. TensorFlow Transform (TFT): Used for data preprocessing and feature engineering. TensorFlow Estimators and Keras: Used for building and training ML models. TensorFlow Model Analysis (TFMA): Used for ML model evaluation and analysis. TensorFlow Serving (TFServing): Used for serving ML models as REST and gRPC APIs. TFX ML system overview The following diagram shows how the various TFX libraries are integrated to compose an ML system. Figure 2. A typical TFX-based ML system. Figure 2 shows a typical TFX-based ML system. The following steps can be completed manually or by an automated pipeline: Data extraction: The first step is to extract the new training data from its data sources. The outputs of this step are data files that are used for training and evaluating the model. Data validation: TFDV validates the data against the expected (raw) data schema. The data schema is created and fixed during the development phase, before system deployment. The data validation steps detect anomalies related to both data distribution and schema skews. 
The outputs of this step are the anomalies (if any) and a decision on whether to execute downstream steps or not. Data transformation: After the data is validated, the data is split and prepared for the ML task by performing data transformations and feature engineering operations using TFT. The outputs of this step are data files to train and evaluate the model, usually transformed into TFRecord format. In addition, the transformation artifacts that are produced help with constructing the model inputs and with embedding the transformation process in the exported saved model after training. Model training and tuning: To implement and train the ML model, use the tf.keras API with the transformed data produced by the previous step. To select the parameter settings that lead to the best model, you can use Keras Tuner, a hyperparameter tuning library for Keras. Alternatively, you can use other services like Katib, Vertex AI Vizier, or the hyperparameter tuner from Vertex AI. The output of this step is a saved model that is used for evaluation, and another saved model that is used for online serving of the model for prediction. Model evaluation and validation: When the model is exported after the training step, it's evaluated on a test dataset to assess the model quality by using TFMA. TFMA evaluates the model quality as a whole, and identifies which parts of the data the model isn't performing well on. This evaluation helps guarantee that the model is promoted for serving only if it satisfies the quality criteria. The criteria can include fair performance on various data subsets (for example, demographics and locations), and improved performance compared to previous models or a benchmark model. The output of this step is a set of performance metrics and a decision on whether to promote the model to production. Model serving for prediction: After the newly trained model is validated, it's deployed as a microservice to serve online predictions by using TensorFlow Serving. The output of this step is a deployed prediction service of the trained ML model. You can replace this step by storing the trained model in a model registry. Subsequently, a separate model-serving CI/CD process is launched. For an example of how to use the TFX libraries, see the official TFX Keras Component tutorial. TFX ML system on Google Cloud In a production environment, the components of the system have to run at scale on a reliable platform. The following diagram shows how each step of the TFX ML pipeline runs using a managed service on Google Cloud, which ensures agility, reliability, and performance at a large scale. Figure 3. TFX-based ML system on Google Cloud. The following table describes the key Google Cloud services shown in figure 3:
Step | TFX library | Google Cloud service
Data extraction and validation | TensorFlow Data Validation | Dataflow
Data transformation | TensorFlow Transform | Dataflow
Model training and tuning | TensorFlow | Vertex AI Training
Model evaluation and validation | TensorFlow Model Analysis | Dataflow
Model serving for predictions | TensorFlow Serving | Vertex AI Prediction
Model storage | N/A | Vertex AI Model Registry
Dataflow is a fully managed, serverless, and reliable service for running Apache Beam pipelines at scale on Google Cloud. Dataflow is used to scale the following processes: Computing the statistics to validate the incoming data. Performing data preparation and transformation. Evaluating the model on a large dataset. Computing metrics on different aspects of the evaluation dataset. 
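As an illustration of the first of these processes (computing statistics to validate incoming data), the following is a minimal, hypothetical sketch that uses TensorFlow Data Validation. The bucket, path, and project names are placeholders, and running the computation on Dataflow instead of locally is shown only as a commented-out option under the assumption that standard Apache Beam pipeline options are used.

```python
# Minimal sketch: compute statistics for new data and validate it against a schema,
# similar to the TFDV-based data validation step. All paths and IDs are placeholders.
import tensorflow_data_validation as tfdv

TRAIN_STATS_PATH = "gs://example-bucket/stats/train_stats"  # placeholder
NEW_DATA = "gs://example-bucket/data/new/*.tfrecord"        # placeholder

# Optional: scale the statistics computation on Dataflow instead of running locally.
# from apache_beam.options.pipeline_options import PipelineOptions
# dataflow_options = PipelineOptions(
#     runner="DataflowRunner",
#     project="example-project",            # placeholder
#     region="us-central1",
#     temp_location="gs://example-bucket/tmp",
# )

# Compute statistics over the newly arrived data.
new_stats = tfdv.generate_statistics_from_tfrecord(
    data_location=NEW_DATA,
    # pipeline_options=dataflow_options,
)

# The schema is normally created and fixed during development; here it is inferred
# from previously stored training statistics purely for illustration.
train_stats = tfdv.load_statistics(TRAIN_STATS_PATH)
schema = tfdv.infer_schema(train_stats)

# Detect schema skews (unexpected, missing, or out-of-domain features).
anomalies = tfdv.validate_statistics(statistics=new_stats, schema=schema)
if anomalies.anomaly_info:
    # Stop the pipeline so the data science team can investigate.
    raise RuntimeError(f"Data validation failed: {dict(anomalies.anomaly_info)}")
```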
Cloud Storage is a highly available and durable storage for binary large objects. Cloud Storage hosts artifacts produced throughout the execution of the ML pipeline, including the following: Data anomalies (if any) Transformed data and artifacts Exported (trained) model Model evaluation metrics Vertex AI Training is a managed service to train ML models at scale. You can execute model training jobs with pre-built containers for TensorFlow, scikit-learn, XGBoost, and PyTorch. You can also run any framework using your own custom containers. For your training infrastructure, you can use accelerators and multiple nodes for distributed training. In addition, a scalable, Bayesian optimization-based service for hyperparameter tuning is available. Vertex AI Prediction is a managed service to run batch predictions using your trained models and online predictions by deploying your models as a microservice with a REST API. The service also integrates with Vertex Explainable AI and Vertex AI Model Monitoring to understand your models and receive alerts when there is feature or feature attribution skew and drift. Vertex AI Model Registry lets you manage the lifecycle of your ML models. You can version your imported models and view their performance metrics. A model can then be used for batch predictions or deployed for online serving using Vertex AI Prediction. Orchestrating the ML system using Vertex AI Pipelines This document has covered how to design a TFX-based ML system, and how to run each component of the system at scale on Google Cloud. However, you need an orchestrator in order to connect these different components of the system together. The orchestrator runs the pipeline in a sequence, and automatically moves from one step to another based on the defined conditions. For example, a defined condition might be executing the model serving step after the model evaluation step if the evaluation metrics meet predefined thresholds. Steps can also run in parallel in order to save time, for example, validating the deployment infrastructure and evaluating the model. Orchestrating the ML pipeline is useful in both the development and production phases: During the development phase, orchestration helps the data scientists to run the ML experiment, instead of manually executing each step. During the production phase, orchestration helps automate the execution of the ML pipeline based on a schedule or certain triggering conditions. ML with Vertex AI Pipelines Vertex AI Pipelines is a Google Cloud managed service that lets you orchestrate and automate ML pipelines where each component of the pipeline can run containerized on Google Cloud or other cloud platforms. Pipeline parameters and generated artifacts are automatically stored in Vertex ML Metadata, which allows lineage and execution tracking. The Vertex AI Pipelines service consists of the following: A user interface for managing and tracking experiments, jobs, and runs. An engine for scheduling multistep ML workflows. A Python SDK for defining and manipulating pipelines and components. Integration with Vertex ML Metadata to save information about executions, models, datasets, and other artifacts. The following constitutes a pipeline executed on Vertex AI Pipelines: A set of containerized ML tasks, or components. A pipeline component is self-contained code that is packaged as a Docker image. A component performs one step in the pipeline. It takes input arguments and produces artifacts. 
A specification of the sequence of the ML tasks, defined through a Python domain-specific language (DSL). The topology of the workflow is implicitly defined by connecting the outputs of an upstream step to the inputs of a downstream step. A step in the pipeline definition invokes a component in the pipeline. In a complex pipeline, components can execute multiple times in loops, or they can be executed conditionally. A set of pipeline input parameters, whose values are passed to the components of the pipeline, including the criteria for filtering data and where to store the artifacts that the pipeline produces. The following diagram shows a sample graph of Vertex AI Pipelines. Figure 4. A sample graph of Vertex AI Pipelines. Kubeflow Pipelines SDK The Kubeflow Pipelines SDK lets you create components, define their orchestration, and run them as a pipeline. For details about Kubeflow Pipelines components, see Create components in the Kubeflow documentation. You can also use the TFX Pipeline DSL and use TFX components. A TFX component encapsulates metadata capabilities. The driver supplies metadata to the executor by querying the metadata store. The publisher accepts the results of the executor and stores them in metadata. You can also implement your custom component, which has the same integration with the metadata. You can compile your TFX pipelines to a Vertex AI Pipelines-compatible YAML using tfx.orchestration.experimental.KubeflowV2DagRunner. Then, you can submit the file to Vertex AI Pipelines for execution. The following diagram shows how, in Vertex AI Pipelines, a containerized task can invoke other services such as BigQuery jobs, Vertex AI (distributed) training jobs, and Dataflow jobs. Figure 5. Vertex AI Pipelines invoking Google Cloud managed services. Vertex AI Pipelines lets you orchestrate and automate a production ML pipeline by executing the required Google Cloud services. In figure 5, Vertex ML Metadata serves as the ML metadata store for Vertex AI Pipelines. Pipeline components aren't limited to executing TFX-related services on Google Cloud. These components can execute any data-related and compute-related services, including Dataproc for SparkML jobs, AutoML, and other compute workloads. Containerizing tasks in Vertex AI Pipelines has the following advantages: Decouples the execution environment from your code runtime. Provides reproducibility of the code between the development and production environment, because the things you test are the same in production. Isolates each component in the pipeline; each can have its own version of the runtime, different languages, and different libraries. Helps with composition of complex pipelines. Integrates with Vertex ML Metadata for traceability and reproducibility of pipeline executions and artifacts. For a comprehensive introduction to Vertex AI Pipelines, see the list of available notebook examples. Triggering and scheduling Vertex AI Pipelines When you deploy a pipeline to production, you need to automate its executions, depending on the scenarios discussed in the ML pipeline automation section. The Vertex AI SDK lets you operate the pipeline programmatically. The google.cloud.aiplatform.PipelineJob class includes APIs to create experiments, and to deploy and run pipelines. By using the SDK, you can therefore invoke Vertex AI Pipelines from another service to achieve scheduler-based or event-based triggers. Figure 6. Flow diagram demonstrating multiple triggers for Vertex AI Pipelines using Pub/Sub and Cloud Run functions. 
Figure 6 shows an example of how to trigger the Vertex AI Pipelines service to execute a pipeline. The pipeline is triggered using the Vertex AI SDK from a Cloud Run function. The Cloud Run function itself is a subscriber to a Pub/Sub topic and is triggered by new messages. Any service that wants to trigger the execution of the pipeline can publish to the corresponding Pub/Sub topic. The preceding example has three publishing services: Cloud Scheduler is publishing messages on a schedule and therefore triggering the pipeline. Cloud Composer is publishing messages as part of a larger workflow, like a data ingestion workflow that triggers the training pipeline after BigQuery ingests new data. Cloud Logging publishes a message based on logs that meet some filtering criteria. You can set up the filters to detect the arrival of new data or even skew and drift alerts generated by the Vertex AI Model Monitoring service. A minimal code sketch of this triggering flow appears after the CI/CD architecture discussion that follows. Setting up CI/CD for ML on Google Cloud Vertex AI Pipelines lets you orchestrate ML systems that involve multiple steps, including data preprocessing, model training and evaluation, and model deployment. In the data science exploration phase, Vertex AI Pipelines helps with rapid experimentation of the whole system. In the production phase, Vertex AI Pipelines lets you automate the pipeline execution based on new data to train or retrain the ML model. CI/CD architecture The following diagram shows a high-level overview of CI/CD for ML with Vertex AI Pipelines. Figure 7: High-level overview of CI/CD with Vertex AI Pipelines. At the core of this architecture is Cloud Build. Cloud Build can import source from Artifact Registry, GitHub, or Bitbucket, and then execute a build to your specifications, and produce artifacts such as Docker containers or Python tar files. Cloud Build executes your build as a series of build steps, defined in a build configuration file (cloudbuild.yaml). Each build step runs in a Docker container. You can either use the supported build steps provided by Cloud Build, or write your own build steps. The Cloud Build process, which performs the required CI/CD for your ML system, can be executed either manually or through automated build triggers. Triggers execute your configured build steps whenever changes are pushed to the build source. You can set a build trigger to execute the build routine on changes to the source repository, or to execute the build routine only when changes match certain criteria. In addition, you can have build routines (Cloud Build configuration files) that are executed in response to different triggers. For example, you can have build routines that are triggered when commits are made to the development branch or to the main branch. You can use configuration variable substitutions to define the environment variables at build time. These substitutions are captured from triggered builds. These variables include $COMMIT_SHA, $REPO_NAME, $BRANCH_NAME, $TAG_NAME, and $REVISION_ID. Other non-trigger-based variables are $PROJECT_ID and $BUILD_ID. Substitutions are helpful for variables whose value isn't known until build time, or to reuse an existing build request with different variable values. 
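To make the triggering flow of figure 6 concrete, the following is a minimal sketch of a Cloud Run function that is subscribed to a Pub/Sub topic and submits a run of an already compiled pipeline by using the google.cloud.aiplatform.PipelineJob class. The project, region, bucket, and file names are placeholders, and the message payload format (a JSON object of pipeline parameters) is an assumption made only for illustration.

```python
# Minimal sketch of the flow in figure 6: a Cloud Run function, subscribed to a
# Pub/Sub topic, submits a run of an already compiled pipeline with the Vertex AI SDK.
# Project, region, bucket, and file names are placeholders.
import base64
import json

import functions_framework
from google.cloud import aiplatform

PROJECT_ID = "example-project"                                  # placeholder
REGION = "us-central1"                                          # placeholder
PIPELINE_SPEC = "gs://example-bucket/pipelines/pipeline.json"   # compiled pipeline, placeholder
PIPELINE_ROOT = "gs://example-bucket/pipeline-root"             # placeholder


@functions_framework.cloud_event
def trigger_pipeline(cloud_event):
    """Submits a Vertex AI pipeline run for each incoming Pub/Sub message."""
    # Pub/Sub message payloads are base64-encoded; an empty payload is tolerated.
    raw = cloud_event.data["message"].get("data", "")
    params = json.loads(base64.b64decode(raw)) if raw else {}

    aiplatform.init(project=PROJECT_ID, location=REGION)
    job = aiplatform.PipelineJob(
        display_name="ml-training-pipeline",
        template_path=PIPELINE_SPEC,
        pipeline_root=PIPELINE_ROOT,
        parameter_values=params,  # for example, the location of newly ingested data
    )
    job.submit()  # non-blocking; the function returns while the pipeline runs
```

Any of the publishers described earlier (Cloud Scheduler, Cloud Composer, or Cloud Logging) could publish to the topic that this function subscribes to, so the same pipeline can be triggered by a schedule, a workflow, or a log-based alert.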
CI/CD workflow use case A source code repository typically includes the following items: The Python pipeline workflow source code, where the pipeline workflow is defined. The Python pipeline components source code and the corresponding component specification files for the different pipeline components, such as data validation, data transformation, model training, model evaluation, and model serving. Dockerfiles that are required to create Docker container images, one for each pipeline component. Python unit and integration tests to test the methods implemented in the component and overall pipeline. Other scripts, including the cloudbuild.yaml file, for test triggers and pipeline deployments. Configuration files (for example, the settings.yaml file), including configurations to the pipeline input parameters. Notebooks used for exploratory data analysis, model analysis, and interactive experimentation on models. In the following example, a build routine is triggered when a developer pushes source code to the development branch from their data science environment. Figure 8. Example build steps performed by Cloud Build. Cloud Build typically performs the following build steps, which are also shown in figure 7: The source code repository is copied to the Cloud Build runtime environment, under the /workspace directory. Run unit and integration tests. Optional: Run static code analysis by using an analyzer such as Pylint. If the tests pass, the Docker container images are built, one for each pipeline component. The images are tagged with the $COMMIT_SHA parameter. The Docker container images are uploaded to Artifact Registry (as shown in figure 7). The image URL is updated in each of the component.yaml files with the created and tagged Docker container images. The pipeline workflow is compiled to produce the pipeline.json file. The pipeline.json file is uploaded to Artifact Registry. Optional: Run the pipeline with the parameter values as part of an integration test or production execution. The executed pipeline generates a new model and could also deploy the model as an API on Vertex AI Prediction. For a production-ready end-to-end MLOps example that includes CI/CD using Cloud Build, see Vertex Pipelines End-to-end Samples on GitHub. Additional considerations When you set up the ML CI/CD architecture on Google Cloud, consider the following: For the data science environment, you can use a local machine or Vertex AI Workbench. You can configure the automated Cloud Build pipeline to skip triggers, for example, if only documentation files are edited, or if the experimentation notebooks are modified. You can execute the pipeline for integration and regression testing as a build test. Before the pipeline is deployed to the target environment, you can use the wait() method to wait for the submitted pipeline run to complete. As an alternative to using Cloud Build, you can use other build systems such as Jenkins. A ready-to-go deployment of Jenkins is available on Google Cloud Marketplace. You can configure the pipeline to deploy automatically to different environments, including development, test, and staging, based on different triggers. In addition, you can deploy to particular environments manually, such as pre-production or production, typically after getting a release approval. You can have multiple build routines for different triggers or for different target environments. 
You can use Apache Airflow, a popular orchestration and scheduling framework, for general-purpose workflows, which you can run using the fully managed Cloud Composer service. When you deploy a new version of the model to production, deploy it as a canary release to get an idea of how it will perform (CPU, memory, and disk usage). Before you configure the new model to serve all live traffic, you can also perform A/B testing. Configure the new model to serve 10% to 20% of the live traffic. If the new model performs better than the current one, you can configure the new model to serve all traffic. Otherwise, the serving system rolls back to the current model. What's next Learn more about GitOps-style continuous delivery with Cloud Build. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Authors: Ross Thomson | Cloud Solutions Architect Khalid Salama | Staff Software Engineer, Machine Learning Other contributor: Wyatt Gorman | HPC Solutions Manager Send feedback \ No newline at end of file diff --git a/Manage_and_monitor_infrastructure.txt b/Manage_and_monitor_infrastructure.txt new file mode 100644 index 0000000000000000000000000000000000000000..6427af12b6149dd397c2546993de38c6e9696ab3 --- /dev/null +++ b/Manage_and_monitor_infrastructure.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/infra-reliability-guide/manage-and-monitor +Date Scraped: 2025-02-23T11:54:17.152Z + +Content: +Home Docs Cloud Architecture Center Send feedback Manage and monitor your Google Cloud infrastructure Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC After you deploy an application to production in Google Cloud, you might need to modify the infrastructure that it uses. For example, you might need to change the machine types of your VMs or change the storage class of the Cloud Storage buckets. This part of the Google Cloud infrastructure reliability guide summarizes change-management guidelines that you can follow to reduce the reliability risk of the infrastructure resources. This part also describes how you can monitor the availability of Google Cloud infrastructure. Deploy infrastructure changes progressively When you need to change your Google Cloud infrastructure, as much as possible, deploy the changes to production progressively. For example, if you need to change the machine types of the VMs, deploy the changes to a few VMs in one zone, and monitor the effects of the changes. If you observe any issues, revert the infrastructure quickly to the previous stable state. Diagnose and resolve the issues, and then restart the progressive deployment process. After verifying that your workload runs as expected, gradually deploy the changes across all of your infrastructure. Control changes to global resources When you modify global resources such as VPC networks and global load balancers, take extra care to verify the changes before deploying them to production. Because global resources are resilient to zone and region outages, you might decide to use single instances of certain global resources in your architecture. In such deployments, the global resources can become single points of failure. 
For example, if you inadvertently misconfigure a forwarding rule of your global load balancer, the frontend can stop receiving or processing user requests. Effectively, the application is unavailable to users in this case though the backend is intact. To avoid such situations, exercise rigorous control over changes to global resources. For example, in your change-review process, you can classify any modifications to global resources as high-risk changes that additional reviewers must verify and approve. Monitor availability of Google Cloud infrastructure You can monitor the current status of the Google Cloud services across all the regions by using the Google Cloud Service Health Dashboard. You can also view a history of the infrastructure failures (called incidents) for each service. The history page provides the details of each incident, such as the incident duration, affected zones and regions, affected services, and any recommended workarounds. You can also view incidents relevant to your project using Personalized Service Health. Service Health also lets you request incident information using an API on a per-project or per-organization basis and lets you configure alerts. Google provides regular updates about the status of each incident, including an estimated time for the next update. You can programmatically get status updates for incidents by using an RSS feed. For more information, see Incidents and the Google Cloud Service Health Dashboard. Note: Even when there's no infrastructure outage, your application might be unavailable due to errors in the application or configuration issues. For example, a software update might have caused the app servers to crash, or an administrator might have inadvertently deleted the load balancer forwarding rules. For help with troubleshooting issues with specific Google Cloud resources, see the documentation for the appropriate service. Previous arrow_back Manage traffic and load Next What's next arrow_forward Send feedback \ No newline at end of file diff --git a/Manage_and_optimize_cloud_resources.txt b/Manage_and_optimize_cloud_resources.txt new file mode 100644 index 0000000000000000000000000000000000000000..892ca10cdd1a939aa07c4d48ec7c9bd395ce53ee --- /dev/null +++ b/Manage_and_optimize_cloud_resources.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/operational-excellence/manage-and-optimize-cloud-resources +Date Scraped: 2025-02-23T11:42:45.355Z + +Content: +Home Docs Cloud Architecture Center Send feedback Manage and optimize cloud resources Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This principle in the operational excellence pillar of the Google Cloud Architecture Framework provides recommendations to help you manage and optimize the resources that are used by your cloud workloads. It involves right-sizing resources based on actual usage and demand, using autoscaling for dynamic resource allocation, implementing cost optimization strategies, and regularly reviewing resource utilization and costs. Many of the topics that are discussed in this principle are covered in detail in the Cost optimization pillar. Principle overview Cloud resource management and optimization play a vital role in optimizing cloud spending, resource usage, and infrastructure efficiency. It includes various strategies and best practices aimed at maximizing the value and return from your cloud spending. This pillar's focus on optimization extends beyond cost reduction. 
It emphasizes the following goals: Efficiency: Using automation and data analytics to achieve peak performance and cost savings. Performance: Scaling resources effortlessly to meet fluctuating demands and deliver optimal results. Scalability: Adapting infrastructure and processes to accommodate rapid growth and diverse workloads. By focusing on these goals, you achieve a balance between cost and functionality. You can make informed decisions regarding resource provisioning, scaling, and migration. Additionally, you gain valuable insights into resource consumption patterns, which lets you proactively identify and address potential issues before they escalate. Recommendations To manage and optimize resources, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Right-size resources Continuously monitoring resource utilization and adjusting resource allocation to match actual demand are essential for efficient cloud resource management. Over-provisioning resources can lead to unnecessary costs, and under-provisioning can cause performance bottlenecks that affect application performance and user experience. To achieve an optimal balance, you must adopt a proactive approach to right-sizing cloud resources. This recommendation is relevant to the governance focus area of operational readiness. Cloud Monitoring and Recommender can help you to identify opportunities for right-sizing. Cloud Monitoring provides real-time visibility into resource utilization metrics. This visibility lets you track resource usage patterns and identify potential inefficiencies. Recommender analyzes resource utilization data to make intelligent recommendations for optimizing resource allocation. By using these tools, you can gain insights into resource usage and make informed decisions about right-sizing the resources. In addition to Cloud Monitoring and Recommender, consider using custom metrics to trigger automated right-sizing actions. Custom metrics let you track specific resource utilization metrics that are relevant to your applications and workloads. You can also configure alerts to notify administrators when predefined thresholds are met. The administrators can then take necessary actions to adjust resource allocation. This proactive approach ensures that resources are scaled in a timely manner, which helps to optimize cloud costs and prevent performance issues. Use autoscaling Autoscaling compute and other resources helps to ensure optimal performance and cost efficiency of your cloud-based applications. Autoscaling lets you dynamically adjust the capacity of your resources based on workload fluctuations, so that you have the resources that you need when you need them and you can avoid over-provisioning and unnecessary costs. This recommendation is relevant to the processes focus area of operational readiness. To meet the diverse needs of different applications and workloads, Google Cloud offers various autoscaling options, including the following: Compute Engine managed instance groups (MIGs) are groups of VMs that are managed and scaled as a single entity. With MIGs, you can define autoscaling policies that specify the minimum and maximum number of VMs to maintain in the group, and the conditions that trigger autoscaling. 
For example, you can configure a policy to add VMs in a MIG when the CPU utilization reaches a certain threshold and to remove VMs when the utilization drops below a different threshold. Google Kubernetes Engine (GKE) autoscaling dynamically adjusts your cluster resources to match your application's needs. It offers the following tools: Cluster Autoscaler adds or removes nodes based on Pod resource demands. Horizontal Pod Autoscaler changes the number of Pod replicas based on CPU, memory, or custom metrics. Vertical Pod Autoscaler fine-tunes Pod resource requests and limits based on usage patterns. Node Auto-Provisioning automatically creates optimized node pools for your workloads. These tools work together to optimize resource utilization, ensure application performance, and simplify cluster management. Cloud Run is a serverless platform that lets you run code without having to manage infrastructure. Cloud Run offers built-in autoscaling, which automatically adjusts the number of instances based on the incoming traffic. When the volume of traffic increases, Cloud Run scales up the number of instances to handle the load. When traffic decreases, Cloud Run scales down the number of instances to reduce costs. By using these autoscaling options, you can ensure that your cloud-based applications have the resources that they need to handle varying workloads, while avoiding overprovisioning and unnecessary costs. Using autoscaling can lead to improved performance, cost savings, and more efficient use of cloud resources. Leverage cost optimization strategies Optimizing cloud spending helps you to effectively manage your organization's IT budgets. This recommendation is relevant to the governance focus area of operational readiness. Google Cloud offers several tools and techniques to help you optimize cloud costs. By using these tools and techniques, you can get the best value from your cloud spending. These tools and techniques help you to identify areas where costs can be reduced, such as identifying underutilized resources or recommending more cost-effective instance types. Google Cloud options to help optimize cloud costs include the following: Committed use discounts (CUDs) are discounts for committing to a certain level of usage over a period of time. Sustained use discounts in Compute Engine provide discounts for consistent usage of a service. Spot VMs provide access to unused VM capacity at a lower cost compared to regular VMs. Pricing models might change over time, and new features might be introduced that offer better performance or lower cost compared to existing options. Therefore, you should regularly review pricing models and consider alternative features. By staying informed about the latest pricing models and features, you can make informed decisions about your cloud architecture to minimize costs. Google Cloud's Cost Management tools, such as budgets and alerts, provide valuable insights into cloud spending. Budgets and alerts let users set budgets and receive alerts when the budgets are exceeded. These tools help users track their cloud spending and identify areas where costs can be reduced. Track resource usage and costs You can use tagging and labeling to track resource usage and costs. By assigning tags and labels to your cloud resources like projects, departments, or other relevant dimensions, you can categorize and organize the resources. This lets you monitor and analyze spending patterns for specific resources and identify areas of high usage or potential cost savings. 
This recommendation is relevant to these focus areas of operational readiness: governance and tooling. Tools like Cloud Billing and Cost Management help you to get a comprehensive understanding of your spending patterns. These tools provide detailed insights into your cloud usage and they let you identify trends, forecast costs, and make informed decisions. By analyzing historical data and current spending patterns, you can identify the focus areas for your cost-optimization efforts. Custom dashboards and reports help you to visualize cost data and gain deeper insights into spending trends. By customizing dashboards with relevant metrics and dimensions, you can monitor key performance indicators (KPIs) and track progress towards your cost optimization goals. Reports offer deeper analyses of cost data. Reports let you filter the data by specific time periods or resource types to understand the underlying factors that contribute to your cloud spending. Regularly review and update your tags, labels, and cost analysis tools to ensure that you have the most up-to-date information on your cloud usage and costs. By staying informed and conducting cost postmortems or proactive cost reviews, you can promptly identify any unexpected increases in spending. Doing so lets you make proactive decisions to optimize cloud resources and control costs. Establish cost allocation and budgeting Accountability and transparency in cloud cost management are crucial for optimizing resource utilization and ensuring financial control. This recommendation is relevant to the governance focus area of operational readiness. To ensure accountability and transparency, you need to have clear mechanisms for cost allocation and chargeback. By allocating costs to specific teams, projects, or individuals, your organization can ensure that each of these entities is responsible for its cloud usage. This practice fosters a sense of ownership and encourages responsible resource management. Additionally, chargeback mechanisms enable your organization to recover cloud costs from internal customers, align incentives with performance, and promote fiscal discipline. Establishing budgets for different teams or projects is another essential aspect of cloud cost management. Budgets enable your organization to define spending limits and track actual expenses against those limits. This approach lets you make proactive decisions to prevent uncontrolled spending. By setting realistic and achievable budgets, you can ensure that cloud resources are used efficiently and aligned with business objectives. Regular monitoring of actual spending against budgets helps you to identify variances and address potential overruns promptly. To monitor budgets, you can use tools like Cloud Billing budgets and alerts. These tools provide real-time insights into cloud spending and they notify stakeholders of potential overruns. By using these capabilities, you can track cloud costs and take corrective actions before significant deviations occur. This proactive approach helps to prevent financial surprises and ensures that cloud resources are used responsibly. 
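A minimal sketch of such a budget, assuming the google-cloud-billing-budgets client library and placeholder billing account and project identifiers, is shown below. It sets a fixed monthly amount and sends notifications at 50%, 90%, and 100% of actual spend:

```python
from google.cloud.billing import budgets_v1
from google.type import money_pb2

# Placeholder identifiers; budget filters reference projects by number.
BILLING_ACCOUNT = "billingAccounts/000000-AAAAAA-BBBBBB"
PROJECT = "projects/123456789012"

client = budgets_v1.BudgetServiceClient()

budget = budgets_v1.Budget(
    display_name="team-a-monthly-budget",
    # Scope the budget to a single project.
    budget_filter=budgets_v1.Filter(projects=[PROJECT]),
    # Fixed monthly amount of 1,000 units of the billing account currency.
    amount=budgets_v1.BudgetAmount(
        specified_amount=money_pb2.Money(currency_code="USD", units=1000)
    ),
    # Notify billing admins at 50%, 90%, and 100% of actual spend.
    threshold_rules=[
        budgets_v1.ThresholdRule(threshold_percent=0.5),
        budgets_v1.ThresholdRule(threshold_percent=0.9),
        budgets_v1.ThresholdRule(threshold_percent=1.0),
    ],
)

created = client.create_budget(parent=BILLING_ACCOUNT, budget=budget)
print(created.name)
```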
Previous arrow_back Manage incidents and problems Next Automate and manage change arrow_forward Send feedback \ No newline at end of file diff --git a/Manage_incidents_and_problems.txt b/Manage_incidents_and_problems.txt new file mode 100644 index 0000000000000000000000000000000000000000..bacc19de8d6c1cb6c30b179083508fb3940415e1 --- /dev/null +++ b/Manage_incidents_and_problems.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/operational-excellence/manage-incidents-and-problems +Date Scraped: 2025-02-23T11:42:43.766Z + +Content: +Home Docs Cloud Architecture Center Send feedback Manage incidents and problems Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC This principle in the operational excellence pillar of the Google Cloud Architecture Framework provides recommendations to help you manage incidents and problems related to your cloud workloads. It involves implementing comprehensive monitoring and observability, establishing clear incident response procedures, conducting thorough root cause analysis, and implementing preventive measures. Many of the topics that are discussed in this principle are covered in detail in the Reliability pillar. Principle overview Incident management and problem management are important components of a functional operations environment. How you respond to, categorize, and solve incidents of differing severity can significantly affect your operations. You must also proactively and continuously make adjustments to optimize reliability and performance. An efficient process for incident and problem management relies on the following foundational elements: Continuous monitoring: Identify and resolve issues quickly. Automation: Streamline tasks and improve efficiency. Orchestration: Coordinate and manage cloud resources effectively. Data-driven insights: Optimize cloud operations and make informed decisions. These elements help you to build a resilient cloud environment that can handle a wide range of challenges and disruptions. These elements can also help to reduce the risk of costly incidents and downtime, and they can help you to achieve greater business agility and success. These foundational elements are spread across the four focus areas of operational readiness: Workforce, Processes, Tooling, and Governance. Note: The Google SRE Book defines many of the terms and concepts that are described in this document. We recommend the Google SRE Book as supplemental reading to support the recommendations that are described in this document. Recommendations To manage incidents and problems effectively, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Establish clear incident response procedures Clear roles and responsibilities are essential to ensure effective and coordinated response to incidents. Additionally, clear communication protocols and escalation paths help to ensure that information is shared promptly and effectively during an incident. This recommendation is relevant to these focus areas of operational readiness: workforce, processes, and tooling. To establish incident response procedures, you need to define the roles and expectations of each team member, such as incident commanders, investigators, communicators, and technical experts. 
Establishing communication and escalation paths includes identifying important contacts, setting up communication channels, and defining the process for escalating incidents to higher levels of management when necessary. Regular training and preparation helps to ensure that teams are equipped with the knowledge and skills to respond to incidents effectively. By documenting incident response procedures in a runbook or playbook, you can provide a standardized reference guide for teams to follow during an incident. The runbook must outline the steps to be taken at each stage of the incident response process, including communication, triage, investigation, and resolution. It must also include information about relevant tools and resources and contact information for important personnel. You must regularly review and update the runbook to ensure that it remains current and effective. Centralize incident management For effective tracking and management throughout the incident lifecycle, consider using a centralized incident management system. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. A centralized incident management system provides the following advantages: Improved visibility: By consolidating all incident-related data in a single location, you eliminate the need for teams to search in various channels or systems for context. This approach saves time and reduces confusion, and it gives stakeholders a comprehensive view of the incident, including its status, impact, and progress. Better coordination and collaboration: A centralized system provides a unified platform for communication and task management. It promotes seamless collaboration between the different departments and functions that are involved in incident response. This approach ensures that everyone has access to up-to-date information and it reduces the risk of miscommunication and misalignment. Enhanced accountability and ownership: A centralized incident management system enables your organization to allocate tasks to specific individuals or teams and it ensures that responsibilities are clearly defined and tracked. This approach promotes accountability and encourages proactive problem-solving because team members can easily monitor their progress and contributions. A centralized incident management system must offer robust features for incident tracking, task assignment, and communication management. These features let you customize workflows, set priorities, and integrate with other systems, such as monitoring tools and ticketing systems. By implementing a centralized incident management system, you can optimize your organization's incident response processes, improve collaboration, and enhance visibility. Doing so leads to faster incident resolution times, reduced downtime, and improved customer satisfaction. It also helps foster a culture of continuous improvement because you can learn from past incidents and identify areas for improvement. Conduct thorough post-incident reviews After an incident occurs, you must conduct a detailed post-incident review (PIR), which is also known as a postmortem, to identify the root cause, contributing factors, and lessons learned. This thorough review helps you to prevent similar incidents in the future. This recommendation is relevant to these focus areas of operational readiness: processes and governance. The PIR process must involve a multidisciplinary team that has expertise in various aspects of the incident. 
The team must gather all of the relevant information through interviews, documentation review, and site inspections. A timeline of events must be created to establish the sequence of actions that led up to the incident. After the team gathers the required information, they must conduct a root cause analysis to determine the factors that led to the incident. This analysis must identify both the immediate cause and the systemic issues that contributed to the incident. Along with identifying the root cause, the PIR team must identify any other contributing factors that might have caused the incident. These factors could include human error, equipment failure, or organizational factors like communication breakdowns and lack of training. The PIR report must document the findings of the investigation, including the timeline of events, root cause analysis, and recommended actions. The report is a valuable resource for implementing corrective actions and preventing recurrence. The report must be shared with all of the relevant stakeholders and it must be used to develop safety training and procedures. To ensure a successful PIR process, your organization must foster a blameless culture that focuses on learning and improvement rather than assigning blame. This culture encourages individuals to report incidents without fear of retribution, and it lets you address systemic issues and make meaningful improvements. By conducting thorough PIRs and implementing corrective measures based on the findings, you can significantly reduce the risk of similar incidents occurring in the future. This proactive approach to incident investigation and prevention helps to create a safer and more efficient work environment for everyone involved. Maintain a knowledge base A knowledge base of known issues, solutions, and troubleshooting guides is essential for incident management and resolution. Team members can use the knowledge base to quickly identify and address common problems. Implementing a knowledge base helps to reduce the need for escalation and it improves overall efficiency. This recommendation is relevant to these focus areas of operational readiness: workforce and processes. A primary benefit of a knowledge base is that it lets teams learn from past experiences and avoid repeating mistakes. By capturing and sharing solutions to known issues, teams can build a collective understanding of how to resolve common problems and best practices for incident management. Use of a knowledge base saves time and effort, and helps to standardize processes and ensure consistency in incident resolution. Along with helping to improve incident resolution times, a knowledge base promotes knowledge sharing and collaboration across teams. With a central repository of information, teams can easily access and contribute to the knowledge base, which promotes a culture of continuous learning and improvement. This culture encourages teams to share their expertise and experiences, leading to a more comprehensive and valuable knowledge base. To create and manage a knowledge base effectively, use appropriate tools and technologies. Collaboration platforms like Google Workspace are well-suited for this purpose because they let you easily create, edit, and share documents collaboratively. These tools also support version control and change tracking, which ensures that the knowledge base remains up-to-date and accurate. Make the knowledge base easily accessible to all relevant teams. 
You can achieve this by integrating the knowledge base with existing incident management systems or by providing a dedicated portal or intranet site. A knowledge base that's readily available lets teams quickly access the information that they need to resolve incidents efficiently. This availability helps to reduce downtime and minimize the impact on business operations. Regularly review and update the knowledge base to ensure that it remains relevant and useful. Monitor incident reports, identify common issues and trends, and incorporate new solutions and troubleshooting guides into the knowledge base. An up-to-date knowledge base helps your teams resolve incidents faster and more effectively. Automate incident response Automation helps to streamline your incident response and remediation processes. It lets you address security breaches and system failures promptly and efficiently. By using Google Cloud products like Cloud Run functions or Cloud Run, you can automate various tasks that are typically manual and time-consuming. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. Automated incident response provides the following benefits: Reduction in incident detection and resolution times: Automated tools can continuously monitor systems and applications, detect suspicious or anomalous activities in real time, and notify stakeholders or respond without intervention. This automation lets you identify potential threats or issues before they escalate into major incidents. When an incident is detected, automated tools can trigger predefined remediation actions, such as isolating affected systems, quarantining malicious files, or rolling back changes to restore the system to a known good state. Reduced burden on security and operations teams: Automated incident response lets the security and operations teams focus on more strategic tasks. By automating routine and repetitive tasks, such as collecting diagnostic information or triggering alerts, your organization can free up personnel to handle more complex and critical incidents. This automation can lead to improved overall incident response effectiveness and efficiency. Enhanced consistency and accuracy of the remediation process: Automated tools can ensure that remediation actions are applied uniformly across all affected systems, minimizing the risk of human error or inconsistency. This standardization of the remediation process helps to minimize the impact of incidents on users and the business. Previous arrow_back Ensure operational readiness and performance using CloudOps Next Manage and optimize cloud resources arrow_forward Send feedback \ No newline at end of file diff --git a/Manage_traffic_and_load.txt b/Manage_traffic_and_load.txt new file mode 100644 index 0000000000000000000000000000000000000000..2ec8e7d02e9144660bdbbef90ea952dc87e7bf18 --- /dev/null +++ b/Manage_traffic_and_load.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/infra-reliability-guide/traffic-load +Date Scraped: 2025-02-23T11:54:13.830Z + +Content: +Home Docs Cloud Architecture Center Send feedback Manage traffic and load for your workloads in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC When you run an application stack on distributed resources in the cloud, network traffic must be routed efficiently to the available resources across multiple locations. 
This part of the Google Cloud infrastructure reliability guide describes traffic- and load-management techniques that you can use to help to improve the reliability of your cloud workloads. Capacity planning To ensure that your application deployed in Google Cloud has adequate infrastructure resources, you must estimate the capacity that's required, and manage the deployed capacity. This section provides guidelines to help you plan and manage capacity. Forecast the application load When you forecast the load, consider factors like the number of users and the rate at which the application might receive requests. In your forecasts, consider historical load trends, seasonal variations, load spikes during special events, and growth driven by business changes like expansion to new geographies. Estimate capacity requirements Based on your deployment architecture and considering the performance and reliability objectives of your application, estimate the quantity of Google Cloud resources that are necessary to handle the expected load. For example, if you plan to use Compute Engine managed instance groups (MIGs), decide the size of each MIG, VM machine type, and the number, type, and size of persistent disks. You can use the Google Cloud Pricing Calculator to estimate the cost of the Google Cloud resources. Plan adequate redundancy When you estimate the capacity requirements, provide adequate redundancy for every component of the application stack. For example, to achieve N+1 redundancy, every component in the application stack must have at least one redundant component beyond the minimum that's necessary to handle the forecast load. Benchmark the application Run load tests to determine the resource efficiency of your application. Resource efficiency is the relationship between the load on the application and the resources such as CPU and memory that the application consumes. The resource efficiency of an application can deteriorate when the load is exceptionally high, and the efficiency might change over time. Conduct the load tests for both normal and peak load conditions, and repeat the benchmarking tests at regular intervals. Manage quotas Google Cloud service quotas are per-project limits, which help you control the consumption of cloud resources. Quotas are of two types: Resource quotas are the maximum resources that you can create, such as the number of regional Google Kubernetes Engine (GKE) clusters in a region. Rate quotas limit the number of API requests that can be sent to a service in a specific period. Quotas can be zonal, regional, or global. Review the current resource quotas and API rate quotas for the services that you plan to use in your projects. Ensure that the quotas are sufficient for the capacity that you need. When required, you can request more quota. Reserve compute capacity To make sure that capacity for Compute Engine resources is available when necessary, you can create reservations. A reservation provides assured capacity in a specific zone for a specified number of VMs of a machine type that you choose. A reservation can be specific to a project, or shared across multiple projects. For more information about reservations, including billing considerations, see Reservations of Compute Engine zonal resources. Monitor utilization, and reassess requirements periodically After you deploy the required resources, monitor the capacity utilization. You might find opportunities to optimize cost by removing idle resources. 
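As a back-of-the-envelope illustration of these estimates (hypothetical numbers, not benchmarks), the required VM count can be derived from the forecast peak load and the per-VM throughput measured in your load tests, with N+1 redundancy added on top:

```python
import math

# Hypothetical planning inputs.
peak_requests_per_second = 12_000  # forecast peak load
requests_per_vm = 900              # throughput of one VM, measured in load tests

# Minimum VMs needed to serve the forecast peak.
baseline_vms = math.ceil(peak_requests_per_second / requests_per_vm)

# N+1 redundancy: at least one VM beyond the minimum needed for the forecast load.
provisioned_vms = baseline_vms + 1

print(f"baseline: {baseline_vms} VMs; with N+1 redundancy: {provisioned_vms} VMs")
```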
Periodically reassess the capacity requirements, and consider any changes in the application behavior, performance and reliability objectives, user load, and your IT budget. Autoscaling When you run an application on resources that are distributed across multiple locations, the application remains available during outages at one of the locations. In addition, redundancy helps ensure that users experience consistent application behavior. For example, when there's a spike in the load, the redundant resources ensure that the application continues to perform at a predictable level. But when the load on the application is low, redundancy can result in inefficient utilization of cloud resources. For example, the shopping cart component of an ecommerce application might need to process payments for 99.9% of orders within 200 milliseconds after order confirmation. To meet this requirement during periods of high load, you might provision redundant compute and storage capacity. But when the load on the application is low, a portion of the provisioned capacity might remain unused or under-utilized. To remove the unused resources, you would need to monitor the utilization and adjust the capacity. Autoscaling helps you manage cloud capacity and maintain the required level of availability without the operational overhead of managing redundant resources. When the load on your application increases, autoscaling helps to improve the availability of the application by provisioning additional resources automatically. During periods of low load, autoscaling removes unused resources, and helps to reduce cost. Certain Google Cloud services, like Compute Engine, let you configure autoscaling for the resources that you provision. Managed services like Cloud Run can scale capacity automatically without you having to configure anything. The following are examples of Google Cloud services that support autoscaling. This list is not exhaustive. Compute Engine: MIGs let you scale stateless applications that are deployed on Compute Engine VMs automatically to match the capacity with the current load. For more information, see Autoscaling groups of instances. GKE: You can configure GKE clusters to automatically resize the node pools to match the current load. For more information, see Cluster autoscaler. For GKE clusters that you provision in the Autopilot mode, GKE automatically scales the nodes and workloads based on the traffic. Cloud Run: Services that you provision in Cloud Run scale out automatically to the number of container instances that are necessary to handle the current load. When the application has no load, the service automatically scales in the number of container instances to zero. For more information, see About container instance autoscaling. Cloud Run functions: Each request to a function is assigned to an instance of the function. If the volume of inbound requests exceeds the number of existing function instances, Cloud Run functions automatically starts new instances of the function. For more information, see Cloud Run functions execution environment. Bigtable: When you create a cluster in a Bigtable instance, you can configure the cluster to scale automatically. Bigtable monitors the CPU and storage load, and adjusts the number of nodes in the cluster to maintain the target utilization rates that you specify. For more information, see Bigtable autoscaling. 
Dataproc Serverless: When you submit an Apache Spark batch workload, Dataproc Serverless dynamically scales the workload resources, such as the number of executors, to run the workload efficiently. For more information, see Dataproc Serverless for Spark autoscaling. Load balancing Load balancing helps to improve application reliability by routing traffic to only the available resources and by ensuring that individual resources aren't overloaded. Consider the following reliability-related design recommendations when choosing and configuring load balancers for your cloud deployment. Load-balance internal traffic Configure load balancing for the traffic between the tiers of the application stack as well, not just for the traffic between the external clients and the application. For example, in a 3-tier web application stack, you can use an internal load balancer for reliable communication between the web and app tiers. Choose an appropriate load balancer type To load-balance external traffic to an application that's distributed across multiple regions, you can use a global load balancer or multiple regional load balancers. For more information, see Benefits and risks of global load balancing for multi-region deployments. If the backends are in a single region and you don't need the features of global load balancing, you can use a regional load balancer, which is resilient to zone outages. When you choose the load balancer type, consider other factors besides availability, such as geographic control over TLS termination, performance, cost, and the traffic type. For more information, see Choose a load balancer. Configure health checks Autoscaling helps to ensure that your applications have adequate infrastructure resources to handle the current load. But even when sufficient infrastructure resources exist, an application or parts of it might not be responsive. For example, all the VMs that host your application might be in the RUNNING state. But the application software that's deployed on some of the VMs might have crashed. Load-balancing health checks ensure that the load balancers route application traffic to only the backends that are responsive. If your backends are MIGs, then consider configuring an extra layer of health checks to autoheal the VMs that aren't available. When autohealing is configured for a MIG, the unavailable VMs are proactively deleted, and new VMs are created. Rate limiting At times, your application might experience a rapid or sustained increase in the load. If the application isn't designed to handle the increased load, the application or the resources that it uses might fail, making the application unavailable. The increased load might be caused by malicious requests, such as network-based distributed denial-of-service (DDoS) attacks. A sudden spike in the load can also occur due to other reasons such as configuration errors in the client software. To ensure that your application can handle excessive load, consider applying suitable rate-limiting mechanisms. For example, you can set quotas for the number of API requests that a Google Cloud service can receive. Rate-limiting techniques can also help optimize the cost of your cloud infrastructure. For example, by setting project-level quotas for specific resources, you can limit the billing that the project can incur for those resources. Network Service Tier Google Cloud Network Service Tiers let you optimize connectivity between systems on the internet and your Google Cloud workloads. 
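Returning to the rate-limiting discussion above, a token bucket is a common application-level mechanism; the following illustrative sketch (plain Python, not a Google Cloud API) admits short bursts while capping the sustained request rate:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative only)."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec   # tokens added per second
        self.capacity = burst      # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket(rate_per_sec=100, burst=20)
if not limiter.allow():
    pass  # reject or queue the request instead of overloading the backend
```

In a real deployment, edge-level controls such as API quotas or Cloud Armor rate-limiting rules would typically complement in-process limiting like this.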
For applications that serve users globally and have backends in more than one region, choose Premium Tier. Traffic from the internet enters the high-performance Google network at the point of presence (PoP) that's closest to the sending system. Within the Google network, traffic is routed from the entry PoP to the appropriate Google Cloud resource, such as a Compute Engine VM. Outbound traffic is sent through the Google network, exiting at the PoP that's closest to the destination. This routing method helps to improve the availability perception of users by reducing the number of network hops between the users and the PoPs closest to them. Previous arrow_back Design reliable infrastructure Next Manage and monitor infrastructure arrow_forward Send feedback \ No newline at end of file diff --git a/Managed_Service_for_Prometheus.txt b/Managed_Service_for_Prometheus.txt new file mode 100644 index 0000000000000000000000000000000000000000..e1871e7df2490cce8c9a1c637f2bce6d7769896c --- /dev/null +++ b/Managed_Service_for_Prometheus.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/managed-prometheus +Date Scraped: 2025-02-23T12:07:33.724Z + +Content: +Jump to Managed Service for PrometheusManaged Service for PrometheusA fully managed and easy-to-use monitoring service, built on the same globally scalable data store used by Google Cloud.Go to consoleView documentationGet started without the need for complex migrationBuilt on the same backend used by Google that collects over 2 trillion active time seriesLearn more about what customers are sayingHear what's new with the service4:08Getting started with Google Cloud Managed Service for PrometheusBenefitsUse Prometheus without managing infrastructureFully managed Prometheus®-compatible monitoring stack with default two year retention and global queries over regionalized data. No need to federate, add resources manually, or devote time to maintenance.Default two year metrics retention includedSharding an expanding storage footprint is a hassle when running your own Prometheus-compatible aggregator. To alleviate this pain point for you, all metrics are stored for two years at no additional charge.Keep open source tooling and avoid vendor lock-inUse PromQL with both Cloud Monitoring and open source tools like Grafana®. Configure deployment and scraping via any open source method, such as prometheus-operator or annotations.Key featuresKey featuresBacked by Monarch, Google’s in-memory time series databaseManaged Service for Prometheus uses the same technology that Google uses to monitor its own services, meaning that even the largest Prometheus deployments can be monitored at global scale. Additionally, the service is maintained by the same Site Reliability Engineering (SRE) team that maintains Google’s own monitoring, so you can be confident that your metrics will be available when you need them.Use Cloud Monitoring with Managed Service for PrometheusYou can view Prometheus metrics and over 1,500 free Google Cloud system metrics together for a “single pane of glass” across your infrastructure and applications. Prometheus metrics can be used with the dashboarding, alerting, and SLO monitoring features inside Cloud Monitoring. Chart your Prometheus metrics right alongside your GKE metrics, your load balancer metrics, and more. 
Cloud Monitoring supports PromQL, so your developers can start using it immediately.Managed or self-deployed collectors, and the Ops Agent Managed Service for Prometheus offers managed collectors that are automatically deployed, scaled, sharded, configured, and maintained. Scraping and rules are configured via lightweight custom resources (CRs). Migration from a Prometheus operator is easy, and managed collection supports most use cases. You can also keep your existing collector deployment method and configurations if the managed collectors do not currently support your use case. The Ops Agent simplifies the collection of Prometheus metrics on Virtual Machines to make it easier for you to standardize all environments on Prometheus.View all featuresVIDEODeep dive: Managed Service for Prometheus15:57CustomersCustomers are freeing up developer time and keeping their open source toolsBlog post How The Home Depot gets a single pane of glass for metrics across 2,200 stores4-min readCase studyMaisons du Monde’s journey to Managed Service for Prometheus6-min readVideoOpenX uses Google Cloud Managed Service for Prometheus to free up resources and keep current tooling16:12See all customersWe have been running Prometheus ourselves for GKE metrics, but the ongoing maintenance took up too many development hours. We started using Managed Service for Prometheus and it just works. It can handle whatever volume we have because it's built on the same back end that Google uses itself, and we get to keep using the same Grafana dashboards as before while keeping open standards and protocols.Peter Kieltyka, CEO and Chief Architect, Horizon Blockchain GamesWhat's newWhat’s newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postMonitor Google Compute Engine instances with Prometheus and the Ops AgentRead the blogBlog postIntroducing a high-usage tier for Managed Service for PrometheusRead the blogBlog postGoogle Cloud Managed Service for Prometheus is now GARead the blogVideoDeep dive: Managed Service for PrometheusWatch videoBlog postIntroducing Google Cloud Managed Service for PrometheusRead the blogBlog postAnnouncements at Next '21, including Managed Service for PrometheusRead the blogDocumentationDocumentationQuickstartDocumentation overviewGet started with Managed Service for Prometheus.Learn moreTutorialSet up data collection for Managed Service for PrometheusThe service offers both managed and self-deployed collectors. Get step-by-step instructions for setting up each option.Learn moreTutorialQuery data from Managed Service for PrometheusQuery the data sent to the service using the Prometheus HTTP API, Prometheus UI, Grafana, the service page in the Google Cloud Console, and Cloud Monitoring.Learn moreTutorialRule evaluation and metric filteringLearn how to use functions you expect from Prometheus such as rule evaluation and metric filtering.Learn moreNot seeing what you’re looking for?View all product documentationUse casesUse casesUse caseDiagnose problems with your applications quicklyUse PromQL to define alerts and diagnose issues when alerts are triggered. With Managed Service for Prometheus, you do not have to change your visualization tools or alerts so your existing incident creation and investigation workflows will continue working.Use caseCost-effectively monitor dynamic environments Managed Service for Prometheus charges on a per-sample basis, which does not charge for cardinality up front when a new container is spun up. 
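Returning to the PromQL querying described above, the following is a rough sketch of calling the service's Prometheus-compatible HTTP API; it assumes Application Default Credentials, a placeholder project ID, and the documented query endpoint shape:

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT_ID = "my-project"  # placeholder
URL = (
    "https://monitoring.googleapis.com/v1/"
    f"projects/{PROJECT_ID}/location/global/prometheus/api/v1/query"
)

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/monitoring.read"]
)
session = AuthorizedSession(credentials)

# Any PromQL expression works here, the same language used in Grafana
# or the Prometheus UI.
resp = session.get(URL, params={"query": "up"})
resp.raise_for_status()
print(resp.json()["data"]["result"])
```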
With per-sample pricing, you only pay while the container is alive, so you are not penalized for using Horizontal Pod Autoscaling. Managed Service for Prometheus features other cost controls such as customizable sampling periods, filters, and the ability to keep data local and not send it to the datastore.Use caseStandardize on Prometheus across your environmentsAdopting one metrics standard across your Kubernetes, serverless, and VM deployments makes it easier to bring dashboards together for better monitoring. Plus, your developers and administrators only need to know PromQL to work with your metrics. Managed Service for Prometheus supports this use case with collection options for GKE, Cloud Run, and the Ops Agent for VMs on Google Cloud.View all technical guidesAll featuresAll featuresStand-alone global rule evaluatorYou can continue to evaluate your existing recording and alerting rules against global data in Managed Service for Prometheus. The results are stored just like collected data, meaning you will not need to co-locate aggregated data on a single Prometheus server.Dynamic multi-project monitoringMetrics scopes are a read-time-only construct in Cloud Monitoring that enables multi-project monitoring via a single Grafana data source. Each metric scope appears as a separate data source in Grafana and can be assigned read permissions on a per-service account basis.Managed collectorsManaged collectors are automatically deployed, scaled, sharded, configured, and maintained. Scraping and rules are configured via lightweight custom resources (CRs).Self-deployed collectorsUse your preferred deployment mechanism by simply swapping out your regular Prometheus binary for Managed Service for Prometheus’ collector binary. Scraping is configured via your preferred standard method and you scale and shard manually. Reuse your existing configs and run both regular Prometheus and Managed Service for Prometheus side by side.Prometheus sidecar for Cloud RunGet Prometheus-style monitoring for Cloud Run services by adding the sidecar. The sidecar uses Google Cloud Managed Service for Prometheus on the server side and a distribution of the OpenTelemetry Collector, custom-built for serverless workloads, on the client side.Support for monitoring additional environmentsSelf-deployed collectors can be configured to collect data from applications running outside of Google Cloud. These targets can be Kubernetes or non-Kubernetes environments, such as VMs.Use PromQL in Cloud MonitoringUse PromQL in the Cloud Monitoring user interface, including in Metrics Explorer and Dashboard Builder. Get auto-completion of metric names, label keys, and label values. Query free system metrics, Kubernetes metrics, log-based metrics, and custom metrics, alongside your Prometheus metrics with PromQL.Backed by Monarch, Google’s in-memory time series databaseThe service uses the same technology that Google uses to monitor its own services, meaning that even the largest Prometheus deployments can be monitored at global scale.Cost control mechanismsHelp keep your spending under control with an exported metrics filter, a reduced charge for sparse histograms, a fee structure that charges less for longer sampling periods, and the ability to only send locally pre-aggregated data.Cost identification and attributionUse Cloud Monitoring to break out your Prometheus ingestion volume by metric name and namespace. 
Quickly identify the metrics that cost you the most and which namespace is sending them.Ex: Live migration for VMsEx: Compute Engine virtual machines can live-migrate between host systems without rebooting, which keeps your applications running even when host systems require maintenance.PricingPricingThe pricing for Managed Service for Prometheus lets you control your usage and spending. Learn more in the pricing details guide.FeaturePriceFree allotment per monthEffective dateMetrics ingested by using Google Cloud Managed Service for Prometheus$0.060/million samples†: first 0-50 billion samples#$0.048/million samples: next 50-250 billion samples$0.036/million samples: next 250-500 billion samples$0.024/million samples: >500 billion samplesNot applicableAugust 8, 2023Monitoring API calls$0.01/1,000 Read API calls(Write API calls are free)First 1 million Read API calls included per billing accountJuly 1, 2018†Google Cloud Managed Service for Prometheus uses Cloud Monitoring storage for externally created metric data and uses the Monitoring API to retrieve that data. Managed Service for Prometheus meters based on samples ingested instead of bytes to align with Prometheus' conventions. For more information about sample-based metering, see Pricing for controllability and predictability. For computational examples, see Pricing examples based on samples ingested.#Samples are counted per billing account.View pricing detailsThe Grafana Labs Marks are trademarks of Grafana Labs, and are used with Grafana Labs’ permission. We are not affiliated with, endorsed or sponsored by Grafana Labs or its affiliates.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Management_Tools.txt b/Management_Tools.txt new file mode 100644 index 0000000000000000000000000000000000000000..3f12c01a08325af47d33bdefee38c5fe8c69f524 --- /dev/null +++ b/Management_Tools.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/management +Date Scraped: 2025-02-23T12:05:33.317Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '23. Let's go. Google Cloud management toolsAll the tools you need to streamline your cloud, API, and application management tasks, complete with access to all Google APIs, including Google Cloud’s Billing API, and turnkey solutions from our cloud marketplace. Go to console Find the tools to help you manage your Google Cloud projectsCategoryProductsFeaturesStreamline management & workflows Cloud EndpointsCloud Endpoints’ NGINX-based proxy and distributed architecture give unparalleled performance and scalability. Using an OpenAPI Specification or one of our API frameworks, Cloud Endpoints provides tools for every phase of API development and insights with Cloud Monitoring, Cloud Trace, and Cloud Logging.Control who has access to your APILightning fast (less than 1 ms per call)World-class monitoringChoose your own frameworkGoogle Cloud appThe Google Cloud app gives you a convenient way to discover, understand, receive alerts, and respond to production issues. Monitor and make changes to Google Cloud resources from your iOS and Android device. 
Manage Google Cloud resources such as projects, billing, App Engine apps, and Compute Engine VMs.Incident management and alertsCustomizable dashboardDiagnose and fix issues Cost ManagementGoogle Cloud cost management tools provide visibility, accountability, control, and intelligence so that you can scale your business in the cloud with confidence. Tailored to meet the needs of organizations of all sizes, these tools help reduce complexity and increase the predictability of your cloud costs.Resource hierarchy and access controlsReports, dashboards, budgets, and alertsRecommendations for optimizing your costs and usageIntelligent ManagementActive Assist is Google Cloud's suite of tools that use data, intelligence, and machine learning to reduce cloud complexity and administrative toil. This suite makes it easy to optimize your cloud's performance, security, and cost with minimal time and labor.Optimize performance, security, and cost with intelligent toolsTools to optimize cloud performance and cost with minimal effortCarbon FootprintCarbon Footprint enables users to accurately measure the gross carbon emissions from electricity associated with the usage of Google Cloud services. Monitor cloud emissions over time by project, product and region. Export your Carbon Footprint data to BigQuery for further data analysis or to include in emissions accounting.Include cloud emissions in reports and disclosuresVisualize carbon insights via dashboards and chartsReduce the emissions of cloud applications and infrastructure Empower developers Google Cloud MarketplaceGoogle Cloud Marketplace offers ready-to-go solutions that launch quickly to Google Cloud and other environments with Anthos. Enterprise procurement teams can buy and fulfill quickly so they can strategically partner with IT during development.Explore, launch, and manage production-grade solutions in just a few clicksImprove your solution procurement experienceService CatalogService Catalog enables cloud admins to list their cloud solutions and share them with their organization. Control distribution, ensure internal compliance, and increase discoverability for solutions within an enterprise.Advanced search and filtersConsolidated view in a single toolDeployment guardrails and inventory controlGoogle CloudManage and get insights into everything that powers your cloud application—including web applications, data analysis, virtual machines, datastore, databases, networking, and developer services. Google Cloud helps you deploy, scale, and diagnose production issues in a simple web-based interface.Powerful web admin UIFind and manage resources quickly and securelyAdvanced data management, storage, and processing capabilities Cloud ShellCloud Shell provides you with command-line access to cloud resources directly from your browser. Easily manage projects and resources without having to install the Cloud SDK or other tools on your system. With Cloud Shell, the Cloud SDK gcloud command and other utilities are always available, up to date, and fully authenticated.Command-line access to cloud resources directly from a browser5 GB of persistent disk storagePre-installed command-line tools Cloud APIsAccess Google Cloud products from your code. Cloud APIs provide similar functionality as Cloud SDK and Google Cloud and allow you to automate your workflows by using your favorite language. 
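For instance, a few lines with one of the Cloud Client Libraries are enough to call an API from code. This sketch (assuming the google-cloud-storage library and Application Default Credentials) lists the Cloud Storage buckets in a project:

```python
from google.cloud import storage

# Authenticates with Application Default Credentials, for example after
# running `gcloud auth application-default login`.
client = storage.Client(project="my-project")  # hypothetical project ID

for bucket in client.list_buckets():
    print(bucket.name)
```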
Use these Cloud APIs with REST calls or client libraries in popular programming languages.Programmatic interfaces for Google Cloud servicesAutomate workflows using your favorite languageConfig ConnectorConfig Connector is a Kubernetes add-on that allows you to manage Google Cloud resources through Kubernetes, reducing the complexity and cognitive load for developers. With Config Connector, your environments can leverage RBAC for access control, single source of configuration and desired state management, eventual consistency and reconciliation.Eventual consistency and automatic reconciliationRBAC for access controlSingle source of configuration and desired state managementTerraform on Google CloudTerraform is an open source tool that lets you provision Google Cloud resources with declarative configuration files—resources such as virtual machines, containers, storage, and networking. Terraform's infrastructure-as-code (IaC) approach supports DevOps best practices for change management.Supports DevOps best practices for change managementCollection of reference modules to help you get startedPersonalized Service HealthGain visibility into disruptive events impacting Google Cloud products and services relevant to your projects. All events are available in the Google Cloud console and at a variety of integration points, including custom alerts, API, and logs.View relevant disruptions using the Service Health dashboardConfigure alerts to be notified when an incident is detectedAccess disruptive events using the Service Health APISupports 50+ Google Cloud products and services Find the tools to help you manage your Google Cloud projectsStreamline management & workflows Cloud EndpointsCloud Endpoints’ NGINX-based proxy and distributed architecture give unparalleled performance and scalability. Using an OpenAPI Specification or one of our API frameworks, Cloud Endpoints provides tools for every phase of API development and insights with Cloud Monitoring, Cloud Trace, and Cloud Logging.Control who has access to your APILightning fast (less than 1 ms per call)World-class monitoringChoose your own framework Empower developers Google Cloud MarketplaceGoogle Cloud Marketplace offers ready-to-go solutions that launch quickly to Google Cloud and other environments with Anthos. Enterprise procurement teams can buy and fulfill quickly so they can strategically partner with IT during development.Explore, launch, and manage production-grade solutions in just a few clicksImprove your solution procurement experienceTake the next stepStart your next project, explore interactive tutorials, and manage your account.Contact salesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Mandiant_Consulting.txt b/Mandiant_Consulting.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2463555a2f3b10a1367ba5732a04714e75b9404 --- /dev/null +++ b/Mandiant_Consulting.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/consulting/mandiant-services +Date Scraped: 2025-02-23T12:09:34.270Z + +Content: +Download the latest Cyber Snapshot report for today's hot cybersecurity topics. Read now.Mandiant Cybersecurity ConsultingCybersecurity incidents require diligent preparation, rapid action, and critical asset protection. 
Our experts provide guidance so you can maintain confidence in business operations before, during, and after an incident.Contact an expertIncident response assistanceExperienced specialists backed by intel expertsMandiant responds to some of the world’s largest breaches. We combine our frontline expertise and deep understanding of global attacker behavior to respond to incidents and help organizations prepare their cyber defenses against compromise.Explore all security servicesThe expertise you need, when you need itUse our frontline experience and deep understanding of global attacker behavior to prepare and defend your organization against compromise.Resolve incidents quicklyTackle breaches confidentlyBuild your incident response capabilities, respond to active breaches, and minimize the impact of an attack.• Resolve incidents quickly with unmatched expertise• Confirm past and ongoing compromise• Stop breaches and investigate• Manage and respond to a crisis• Recover after a breachExploreReduce the impact of a breachPut Mandiant on speed-dial with established Ts and Cs and a 2-hour response timeIncident Response RetainerUncover active or prior compromise and exploitable mis-configurationsCompromise AssessmentHelp protect stakeholders, minimize risk, and preserve brand reputationCrisis CommunicationsGuard against future compromiseIncrease resilience in the face of threatsEnhance security capabilities to help outmaneuver today’s threat actors and provide resilience against future compromise.• Improve capabilities against threats• Advance approach to cyber risk management• Strengthen defenses against supply chain attacks• Evaluate insider threatsExploreBecome threat readyAssess your ability to effectively detect and respond to evolving cyber attacksCyber Defense AssessmentEvaluate cyber risk exposure for effective decision-making and risk mitigationCyber Risk ManagementAssess and help mitigate risks associated with supply chain, M&A, and systems out of direct controlCyber Security Due DiligenceBattle-test your securityTest and strengthen your security program with real-world attacks See how your security program performs under pressure with simulated attacks against your environment to harden systems and operations.• Test security controls and operations• Evaluate with real-world attacks• Harden against the latest threats• Identify and close security gapsExploreTest your security and operationsFind flaws before attackers can by testing your controls against the latest attack scenariosRed Team AssessmentEvaluate your cloud security approach to protect cloud implementations and assetsCloud Architecture AssessmentAssess your critical assets to pinpoint and reduce related vulnerabilities and misconfigurationsPenetration TestingDevelop and mature critical security functionsElevate your cyber defense capabilities across all critical functionsEstablish and mature cyber defense capabilities across functions.• Work to improve processes and technologies• Up-level threat detection, containment, and remediation capabilities• Receive hands-on support to implement necessary changes • Help optimize security operations and hunt functionsExploreBegin your transformationHands-on support to help implement priority changes and apply best practices for risk mitigationCyber Defense Center DevelopmentBalance cyber risk with rapid business transformation to help CISOs and execs achieve their goalsExecutive Cybersecurity ServicesIncident response planning, metrics development, and playbook creation for priority use 
casesPlanning and Process ImprovementIncident responseResolve incidents quicklyTackle breaches confidentlyBuild your incident response capabilities, respond to active breaches, and minimize the impact of an attack.• Resolve incidents quickly with unmatched expertise• Confirm past and ongoing compromise• Stop breaches and investigate• Manage and respond to a crisis• Recover after a breachExploreReduce the impact of a breachPut Mandiant on speed-dial with established Ts and Cs and a 2-hour response timeIncident Response RetainerUncover active or prior compromise and exploitable mis-configurationsCompromise AssessmentHelp protect stakeholders, minimize risk, and preserve brand reputationCrisis CommunicationsStrategic readinessGuard against future compromiseIncrease resilience in the face of threatsEnhance security capabilities to help outmaneuver today’s threat actors and provide resilience against future compromise.• Improve capabilities against threats• Advance approach to cyber risk management• Strengthen defenses against supply chain attacks• Evaluate insider threatsExploreBecome threat readyAssess your ability to effectively detect and respond to evolving cyber attacksCyber Defense AssessmentEvaluate cyber risk exposure for effective decision-making and risk mitigationCyber Risk ManagementAssess and help mitigate risks associated with supply chain, M&A, and systems out of direct controlCyber Security Due DiligenceTechnical assuranceBattle-test your securityTest and strengthen your security program with real-world attacks See how your security program performs under pressure with simulated attacks against your environment to harden systems and operations.• Test security controls and operations• Evaluate with real-world attacks• Harden against the latest threats• Identify and close security gapsExploreTest your security and operationsFind flaws before attackers can by testing your controls against the latest attack scenariosRed Team AssessmentEvaluate your cloud security approach to protect cloud implementations and assetsCloud Architecture AssessmentAssess your critical assets to pinpoint and reduce related vulnerabilities and misconfigurationsPenetration TestingCybersecurity transformationDevelop and mature critical security functionsElevate your cyber defense capabilities across all critical functionsEstablish and mature cyber defense capabilities across functions.• Work to improve processes and technologies• Up-level threat detection, containment, and remediation capabilities• Receive hands-on support to implement necessary changes • Help optimize security operations and hunt functionsExploreBegin your transformationHands-on support to help implement priority changes and apply best practices for risk mitigationCyber Defense Center DevelopmentBalance cyber risk with rapid business transformation to help CISOs and execs achieve their goalsExecutive Cybersecurity ServicesIncident response planning, metrics development, and playbook creation for priority use casesPlanning and Process ImprovementHow ready is your organization?Take The Defender's Advantage Cyber Defense Discovery self-assessment to measure your capabilities across the six critical functions of cyber defense.Take the assessmentGet flexible access to experts with a services retainerEstablish an incident response retainer with flexible units to use for training and other services. 
Mandiant Expertise on Demand is an annual subscription that extends your cyber defense team with frontline Mandiant experts, when you need them.2-hour incident response time by frontline Mandiant expertsEstablish terms and conditions before an incident occurs, to shorten your response timesAsk your toughest cyber security questions at any timeThere is no limit to the number of ad-hoc requests that can be submittedAccess proactive services, training, and intel across the year as your needs changeRequest investigations, training, consulting services, and custom or finished intelligence reportsView MoreTap specialists across a range of services that fit your business challengesMandiant AcademyBuild your cybersecurity with training, certifications, and real-world exercisesCyber Risk ManagementAdvance your approach for effective decision-making and risk mitigationRansomware and multifaceted extortion protectionAssess and help mitigate ransomware risks and understand effective response capabilitiesOT and ICS ServicesReceive guidance for securing mission-critical operational technology and industrial control systemsInsider Threat ProtectionReduce operational risk and minimize the impact of insider threat incidents and data theftSecuring the use of AIUtilize AI to enhance cyber defenses while safeguarding the use of your AI systemsCyber Risk PartnersMandiant works with leading law firms, insurance partners, ransomware negotiators and other specialized firms to mitigate risk and minimize liability resulting from cyber attacks.Expand allLaw firmsInsurance Underwriters and BrokersInsights from the frontlines Get the latest trends in the cyber threat landscape from Mandiant M-Trends 2024Read the M-Trends 2024 reportDiscover the best practices for effective cyber defense with The Defender's AdvantageRead The Defender's Advantage ebookLearn how Mandiant consultants leverage AIRead the blogRead the newly released: Cyber Snapshot report, Issue 7Download the reportHave questions? Contact us.Mandiant experts are ready to answer your questions.Request a consultIf you are currently experiencing a breach, contact Mandiant incident responders now.Incident response assistanceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Mandiant_Incident_Response.txt b/Mandiant_Incident_Response.txt new file mode 100644 index 0000000000000000000000000000000000000000..1dd4808b5a819acb0ab0b3b9d30494cad6c3ba2b --- /dev/null +++ b/Mandiant_Incident_Response.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/consulting/mandiant-incident-response-services +Date Scraped: 2025-02-23T12:09:15.417Z + +Content: +Mandiant Incident Response ServicesWith experience responding to large, impactful breaches around the world, our experts are trained in incident response, from technical response to crisis management. Contact an expertIncident response assistanceThreat Intelligence BlogGet the latest research from Mandiant frontline investigations.Today's top cyber trends and attacker operationsRead our M-Trends 2024 Special Report to gain the latest adversary insights drawn directly from the frontlines. 
Creating an Effective Incident Response PlanView Dark Reading’s Tech Insights report that gives guidance around what it takes to develop a functional, cohesive, and unified IR plan.Mandiant Incident Response Services help you prepare for and respond to breachesServicesKey BenefitsIncident Response ServiceActivate Mandiant experts to help complete in-depth attack analysis, perform crisis management over response timeline, and recover business operations after a breach.On-call incident responders to immediately begin triage activitiesMinimize business impact through methodical investigation and containmentRapid recovery of business operations with full breach remediationMandiant RetainerExtend your cyber defense capabilities and capacity with flexible access to a wide range of industry-leading security experts.Two-hour response time in the event of a breachPre-paid funds to use for proactive services, training, and moreAccess Mandiant's world-renowned incident responders, threat intelligence analysts, and security consultantsIncident Response RetainerRetain Mandiant intelligence experts to enable fast and effective response to cyber incidents.Two-hour response time in the event of a breachPre-negotiated terms and conditions for incident response workIncident response preparedness activitiesCyber Crisis Communications Planning and ResponseBuild an effective incident communications approach that helps protect stakeholders, minimizes risk, and preserves brand reputation.Align crisis communications with technical response activitiesDevelop communication playbooks for incidents including disclosure requirementsEvaluate communication processes, tools, and staff in advance of a breachCompromise AssessmentLook for ongoing and past attacker activity in your environment while improving your ability to respond effectively to future threats.Search for previously unseen attacker activity with insight into motivationsIdentify weaknesses in configurations, vulnerabilities, and policy violations Best practice recommendations to effectively respond to future incidentsMandiant Managed DefenseGet 24/7 threat detection, investigation, and rapid response to mitigate attacks before they impact your business.Support for numerous endpoint, network, and cloud technologiesExpert monitoring and alert investigation with continual threat huntingAccelerated response leveraging the latest threat intelligenceServicesIncident Response ServiceActivate Mandiant experts to help complete in-depth attack analysis, perform crisis management over response timeline, and recover business operations after a breach.On-call incident responders to immediately begin triage activitiesMinimize business impact through methodical investigation and containmentRapid recovery of business operations with full breach remediationMandiant RetainerExtend your cyber defense capabilities and capacity with flexible access to a wide range of industry-leading security experts.Two-hour response time in the event of a breachPre-paid funds to use for proactive services, training, and moreAccess Mandiant's world-renowned incident responders, threat intelligence analysts, and security consultantsIncident Response RetainerRetain Mandiant intelligence experts to enable fast and effective response to cyber incidents.Two-hour response time in the event of a breachPre-negotiated terms and conditions for incident response workIncident response preparedness activitiesCyber Crisis Communications Planning and ResponseBuild an effective incident communications approach 
that helps protect stakeholders, minimizes risk, and preserves brand reputation.Align crisis communications with technical response activitiesDevelop communication playbooks for incidents including disclosure requirementsEvaluate communication processes, tools, and staff in advance of a breachCompromise AssessmentLook for ongoing and past attacker activity in your environment while improving your ability to respond effectively to future threats.Search for previously unseen attacker activity with insight into motivationsIdentify weaknesses in configurations, vulnerabilities, and policy violations Best practice recommendations to effectively respond to future incidentsMandiant Managed DefenseGet 24/7 threat detection, investigation, and rapid response to mitigate attacks before they impact your business.Support for numerous endpoint, network, and cloud technologiesExpert monitoring and alert investigation with continual threat huntingAccelerated response leveraging the latest threat intelligenceReady to get started?Discover how Mandiant can help your security team respond to incidents faster and more effectively.Contact usLearn more about activating The Defender's Advantage to help improve your overall approach to cyber defense.Attend the virtual seriesTake the next stepStart your next project, explore interactive tutorials, and manage your account.Request a ConsultNeed help getting started?Contact salesWork with a trusted partnerFind a partnerContinue browsingSee all security products and servicesGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Mandiant_Managed_Defense.txt b/Mandiant_Managed_Defense.txt new file mode 100644 index 0000000000000000000000000000000000000000..b0f8d242753aaaa3427f0df0801c71ad14bc5bd2 --- /dev/null +++ b/Mandiant_Managed_Defense.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/managed-defense +Date Scraped: 2025-02-23T12:09:05.988Z + +Content: +Read the latest Defenders Advantage report - Download todayMandiant Managed DefenseMandiant Managed Defense provides 24/7 threat detection, investigation, and response (TDIR) with access to frontline experts who monitor your security technology to help find and investigate threats, proactively hunt for ongoing or past breaches, and respond before attacks impact your business.Get started todayManaged Defense for Google Security Operations The Managed Defense team works seamlessly with your security team and the AI-infused capabilities of Google Security Operations to quickly and effectively monitor, detect, triage, investigate, and respond to incidents.View the datasheetFind and eliminate threats with confidence 24x7Alert Monitoring, Triage, and Investigation Find actionable incidents in real timeBenefit from the speed with which Mandiant integrates frontline knowledge and attacker research to help protect customers at speed and scale. Alerts are triaged and prioritized by a Mandiant expert within minutes. We investigate critical threats and provide context to inform decision-making.A year in the cybersecurity trenches with Mandiant Managed DefenseLearn about 2023 insightsSee and Stop Software Supply Chain CompromiseEffectively detect and investigateContinual, Managed Threat HuntingDisrupt attacks with the help of elite threat huntersGuided by their knowledge of threat actors, Mandiant threat hunters use the scale and speed of Google to quickly find anomalies and help reduce attacker dwell time in your environment. 
We'll map results to MITRE’s ATT&CK framework so that you can see subverted controls and take decisive action.Augmenting Cyber Defense with Mandiant HuntLearn how it worksFinding Malware: Detection Opportunities within Google SecOpsLearn how to detect malware within Google SecOpsRapid, complete responseAccelerate containment and remediation of threatsStop attacks and resolve incidents without impact or the need for a formal incident response engagement. Our experts can contain impacted hosts, investigate quickly, and provide actionable guidance. You gain access to security consultants and benefit from the collective knowledge of Mandiant.Tales from the TrenchesAttack and malware trends observed over Q2 2024.Learn about the trends, detection opportunities, and mitigation strategies from Managed Defense.Attack and malware trends observed over Q1, 2024Mandiant expertiseAugment your skilled security teamStrengthen your team's capabilities with dedicated Mandiant experts guiding them through security investigations and responses. Benefit from our collective experience and knowledge, ensuring your team is prepared to handle any security incident.Beyond Detection Threat Hunting Webinar SeriesThe art and science of proactive threat huntingLearn about the best practices and key methodologies you should evaluate for your program.Hunting for threats in the cloudLearn how to turn cloud compromises into hunt missions.Detect incidents fastAlert Monitoring, Triage, and Investigation Find actionable incidents in real timeBenefit from the speed with which Mandiant integrates frontline knowledge and attacker research to help protect customers at speed and scale. Alerts are triaged and prioritized by a Mandiant expert within minutes. We investigate critical threats and provide context to inform decision-making.A year in the cybersecurity trenches with Mandiant Managed DefenseLearn about 2023 insightsSee and Stop Software Supply Chain CompromiseEffectively detect and investigateReduce dwell timeContinual, Managed Threat HuntingDisrupt attacks with the help of elite threat huntersGuided by their knowledge of threat actors, Mandiant threat hunters use the scale and speed of Google to quickly find anomalies and help reduce attacker dwell time in your environment. We'll map results to MITRE’s ATT&CK framework so that you can see subverted controls and take decisive action.Augmenting Cyber Defense with Mandiant HuntLearn how it worksFinding Malware: Detection Opportunities within Google SecOpsLearn how to detect malware within Google SecOpsAccelerate your responseRapid, complete responseAccelerate containment and remediation of threatsStop attacks and resolve incidents without impact or the need for a formal incident response engagement. Our experts can contain impacted hosts, investigate quickly, and provide actionable guidance. You gain access to security consultants and benefit from the collective knowledge of Mandiant.Tales from the TrenchesAttack and malware trends observed over Q2 2024.Learn about the trends, detection opportunities, and mitigation strategies from Managed Defense.Attack and malware trends observed over Q1, 2024Enhance your talentMandiant expertiseAugment your skilled security teamStrengthen your team's capabilities with dedicated Mandiant experts guiding them through security investigations and responses. 
Benefit from our collective experience and knowledge, ensuring your team is prepared to handle any security incident.Beyond Detection Threat Hunting Webinar SeriesThe art and science of proactive threat huntingLearn about the best practices and key methodologies you should evaluate for your program.Hunting for threats in the cloudLearn how to turn cloud compromises into hunt missions.Our approachManaged threat detection, investigation, and response powered by intelligence and expertise.Consult an expertWhat customers are sayingFinancial firm reduced costs and overhead with Mandiant MDRAscendium Education partners with Mandiant experts to improve cybersecurity.Video (3:06)Bank of the Philippine Islands builds a reliable, scaled security program with Google Cloud SecurityManaged Defense aids BPI's security team in identifying advanced persistent threats (APTs) and enhances threat intelligence.Video (2:57)How Mandiant Managed Defense enhances services with Google Security Operations Leveraging Google SecOps, Mandiant enhances investigations and detects new threats faster with less manual intervention.Video (3:00)View MoreYou've got the technology, now add unparalleled expertise and intelligenceManaged Defense supports threat hunting, alert triage, investigation, and rapid response capabilities for your technology stack. Explore the supported technology partners.Review the Managed Defense Technology Datasheet to learn more.Having been a long-time Google Security Operations customer, it only made sense to layer in the power of Mandiant Managed Defense for Google SecOps. This add-on advantage has proven to be a force multiplier, truly allowing the Vertiv Security team the opportunity to pivot towards advanced strategic cybersecurity work. CISO of VertivLearn how Mandiant experts can help improve your defensesJoin forces with frontline experts—amplify your team and elevate your security with Mandiant expertise.Get started todayMake Google part of your security team.Learn moreGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Manufacturing.txt b/Manufacturing.txt new file mode 100644 index 0000000000000000000000000000000000000000..9e39b65bc23e39846a26fb252bacab5713e31491 --- /dev/null +++ b/Manufacturing.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/manufacturing +Date Scraped: 2025-02-23T11:57:58.764Z + +Content: +Google Cloud for manufacturingThe world's leading manufacturers use our data, AI, and productivity solutions to optimize production operations, improve customer experiences, and accelerate workforce productivity.Talk with an expert2:16How Ford is innovating on Google CloudAI trends in manufacturing and automotiveMultimodal AI, AI agents, and other groundbreaking innovations are poised to revolutionize the manufacturing and automotive industry. Learn how these emerging trends could impact your business and gain a competitive edge.Read it hereTransform your business with Google CloudBring engineering innovation to your enterprise in a secure, scalable, and sustainable way.Modernize operations with data and AI Increase production efficiencyUse data and AI solutions to improve visibility and performance on time-to-market, quality, cost, security, and sustainability.Check out more resources on how to optimize manufacturing and supply chain operations with data and AI. “We worked with Google Cloud to implement a data platform, streaming and storing over 25 million records per week. 
We're gaining strong insights from the data that will help us implement predictive and preventive actions and continue to become even more efficient in our manufacturing plants. ”Jason Ryska, Director of Manufacturing Technology Development, Ford Motor Company Discover solutions:Process, contextualize, and store factory dataManufacturing Data EngineDeploy AI models to detect production defectsVisual Inspection AIAccelerate business insightsGoogle Cloud Cortex FrameworkBetter serve your customers with insightsCapture new revenue by owning the end-to-end digital customer journeyDigitize the customer value proposition and journey to reach and delight customers.“We know that our new and existing customers expect unique and innovative campaigns for two of the most unique and innovative vehicles in our brand’s history, and Google Cloud helped us create something very special to share with them.”Albi Pagenstert, Head of Brand Communications and Strategy, BMW North AmericaDiscover solutions:Drive predictive marketing, digital commerce, and innovationCustomer Data PlatformDelight customers with AI-powered experiences while freeing up human agents' timeContact Center AIBrowse and provide recommendations on your own digital properties at scaleDiscovery AIAccelerate development and efficiencyRun processes 24x7 with the highest speed and reliabilityBoost engineering speed, strengthen security by extending manufacturing environments into the cloud. A flexible and powerful high performance computing (HPC) platform that clears the way for innovation.“We wanted a cloud provider with cutting-edge technology for AI application development.”Arnaud Hubaux, Technical Program Manager AI/ML, ASMLDiscover solutions:Run multicloud apps consistently at scaleHybrid cloudDecrease your time to insights and data warehouse costsSAP analytics and AIFast product design and R&D accelerationHPCOptimize manufacturing and supply chain operationsModernize operations with data and AI Increase production efficiencyUse data and AI solutions to improve visibility and performance on time-to-market, quality, cost, security, and sustainability.Check out more resources on how to optimize manufacturing and supply chain operations with data and AI. “We worked with Google Cloud to implement a data platform, streaming and storing over 25 million records per week. We're gaining strong insights from the data that will help us implement predictive and preventive actions and continue to become even more efficient in our manufacturing plants. 
”Jason Ryska, Director of Manufacturing Technology Development, Ford Motor Company Discover solutions:Process, contextualize, and store factory dataManufacturing Data EngineDeploy AI models to detect production defectsVisual Inspection AIAccelerate business insightsGoogle Cloud Cortex FrameworkDeliver next-generation digital customer experience and productsBetter serve your customers with insightsCapture new revenue by owning the end-to-end digital customer journeyDigitize the customer value proposition and journey to reach and delight customers.“We know that our new and existing customers expect unique and innovative campaigns for two of the most unique and innovative vehicles in our brand’s history, and Google Cloud helped us create something very special to share with them.”Albi Pagenstert, Head of Brand Communications and Strategy, BMW North AmericaDiscover solutions:Drive predictive marketing, digital commerce, and innovationCustomer Data PlatformDelight customers with AI-powered experiences while freeing up human agents' timeContact Center AIBrowse and provide recommendations on your own digital properties at scaleDiscovery AIImprove workforce productivityAccelerate development and efficiencyRun processes 24x7 with the highest speed and reliabilityBoost engineering speed, strengthen security by extending manufacturing environments into the cloud. A flexible and powerful high performance computing (HPC) platform that clears the way for innovation.“We wanted a cloud provider with cutting-edge technology for AI application development.”Arnaud Hubaux, Technical Program Manager AI/ML, ASMLDiscover solutions:Run multicloud apps consistently at scaleHybrid cloudDecrease your time to insights and data warehouse costsSAP analytics and AIFast product design and R&D accelerationHPCExperience the power of data and AI at Manufacturing x DigitalGoogle Cloud demonstrates how new cloud and AI solutions can help you improve production, supply chain and frontline workforce processes.Plan your visitSub-industriesTake a deep dive on use cases across sub-industries.AutomotiveProvide the best experience to consumers, reduce costs, and generate new digital revenue streams. Electronics and semiconductorsInnovate faster from chip design through the manufacturing value chain.EnergyRun energy-efficient data centers and reach efficiency goals.Latest thought leadership in manufacturingFive use cases for manufacturers to get started with generative AIGen AI can be used across a wide array of applications, from productivity to efficiency boosts.5-min readDriving smart factory transformation with data and AICheck out content around using data and AI to optimize manufacturing operations and supply chains.5-min readFour ways AI and cloud help chipmakers thrive in a volatile worldHow the semiconductor industry strengthens supply chains and optimizes operations with AI.5-min readPower of resilient and sustainable manufacturing webinarHow to drive a cost efficient and sustainable operation with Koenig & Bauer, GFT, and SoftServe.Video (1:20:00)How Renault built a factory of the futureTake a deep dive through Renault's factory floor to see how they drive Industry 4.0 innovation. 
5-min readModern manufacturing at the edgeLearn how to unlock speed, efficiency, and transformation opportunities with edge for manufacturing.5-min readRenault Group's partnership with Google Cloud on Industry 4.0 and supply chainHear from Renault's VP of Supply Chain and VP of Industry & Quality on the partnership.Video (3:02)Manufacturing 2025: bolder vision, stronger purposeRead the report on digital transformation from Harvard Business Review Analytic Services and Wipro.5-min readIDC 2023 Cloud CSAT Manufacturing ReportSee how Google Cloud ranks in critical categories, from data security to quality.5-min readLeading companies trust Google CloudThe world's top manufacturers trust Google Cloud to optimize factory operations, build transparent supply chains, produce optimal products, and create quality customer experiences.See all customersRecommended PartnersOur partner ecosystem can help you solve your business challenges and unlock growth opportunities with seamless implementations and integrated out-of-the-box or custom solutions.Expand allIndustry partnersService partnersReady to get started?Contact our team to solve your toughest manufacturing challenges with cloud-based technology.Talk with an expertWork with a trusted partnerFind a partnerManufacturing customers and stories Learn moreSee all industry solutionsContinue browsingGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Mapping_BeyondProd_principles.txt b/Mapping_BeyondProd_principles.txt new file mode 100644 index 0000000000000000000000000000000000000000..e01687185626d434d0a97fa214327865fd53fa0a --- /dev/null +++ b/Mapping_BeyondProd_principles.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/mapping-beyondprod-security-principles +Date Scraped: 2025-02-23T11:47:23.419Z + +Content: +Home Docs Cloud Architecture Center Send feedback Mapping BeyondProd security principles to the blueprint Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC BeyondProd refers to the services and controls in Google's infrastructure that work together to help protect workloads. BeyondProd helps protect the application services that Google runs in its own environment, including how Google changes code and how Google ensures service isolation. Although the BeyondProd paper refers to specific technologies that Google uses to manage its own infrastructure that aren't exposed to customers, the security principles of BeyondProd can be applied to customer applications as well. BeyondProd includes several key security principles that apply to the blueprint. The following table maps the BeyondProd principles to the blueprint. Security principle Mapping to blueprint Security capability Network edge protection Cloud Load Balancing Helps protect against various DDoS attack types such as UDP floods and SYN floods. Google Cloud Armor Helps provide protection against web application attacks, DDoS attacks, and bots through always-on protection and customizable security policies. Cloud CDN Helps provide DDoS attack mitigation through taking load away from exposed services by directly serving content. GKE clusters with Private Service Connect access to the control plane and private node pools for clusters that use private IP addresses only Helps protect against public internet threats and helps provide more granular control over access to the clusters. 
Firewall policy Narrowly defines an allowlist for inbound traffic to GKE services from Cloud Load Balancing. No inherent mutual trust between services Cloud Service Mesh Enforces authentication and authorization to help ensure only approved services can communicate with one another. Workload Identity Federation for GKE Enhances security by reducing the risk of credential theft through automating the authentication and authorization process for workloads, eliminating the need for you to manage and store credentials. Firewall policy Helps ensure only approved communication channels are allowed within the Google Cloud network to GKE clusters. Trusted machines that run code with known provenance Binary Authorization Helps ensure only trusted images are deployed to GKE by enforcing image signing and signature validation during deployment. Consistent policy enforcement across services Policy Controller Lets you define and enforce policies that govern your GKE clusters. Simple, automated, and standardized change rollout Foundation infrastructure pipeline Multi-tenant infrastructure pipeline Fleet-scope pipeline Application factory pipeline Application CI/CD pipeline Provides an automated and controlled deployment process with built-in compliance and validation to build out resources and applications. Config Sync Helps improve cluster security by providing centralized configuration management and automated configuration reconciliation. Isolation between workloads that share an operating system Container-Optimized OS Container-Optimized OS contains only essential components required for running Docker containers, making it less vulnerable to exploits and malware. Trusted hardware and attestation Shielded GKE nodes Ensures only trusted software is loaded when a node boots up. Continually monitors the node's software stack, alerting you if any changes are detected. What's next Read about how to deploy the blueprint (next document in this series). 
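The firewall-policy rows in the preceding table describe a narrowly scoped inbound allowlist. As a hedged illustration only (not part of the blueprint itself), the following sketch lists each VPC firewall rule in a project with the google-cloud-compute Python client so that overly broad source ranges stand out during a review; the project ID is a placeholder you would replace.

from google.cloud import compute_v1

def list_firewall_rules(project_id: str) -> None:
    # Iterate over every VPC firewall rule in the project and print the
    # fields that matter for an allowlist review: name, direction, and
    # the source ranges that are allowed in.
    client = compute_v1.FirewallsClient()
    for rule in client.list(project=project_id):
        print(rule.name, rule.direction, list(rule.source_ranges))

list_firewall_rules("example-project")  # placeholder project ID

This is only a review aid; in the blueprint, the rules themselves are defined and deployed through the pipelines listed in the table.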
Send feedback \ No newline at end of file diff --git a/Maps_and_Geospatial.txt b/Maps_and_Geospatial.txt new file mode 100644 index 0000000000000000000000000000000000000000..849e63fabf86ce3e40bc68a3c86cc1b28f881d55 --- /dev/null +++ b/Maps_and_Geospatial.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/geospatial +Date Scraped: 2025-02-23T12:05:59.002Z + +Content: +Join us April 6th at the Google Data Cloud Summit to hear the latest innovations in analytics, AI, business intelligence, and databases.Geospatial AnalyticsBuild a more prosperous and sustainable future for your organization with a comprehensive platform for geospatial workloads and applications that only Google can deliver.Contact usBenefitsUnlock location-driven innovation and optimizationsProven performance for analytics, AI, and appsRely on Google's 40+ years of combined experience and innovations operating Earth Engine, Google Maps, and BigQuery for unmatched scalability, reliability, security, and performance.A planet’s worth of geospatial data and insights Access petabytes of current and historical data layers that are updated and expanding daily–from earth observation data, to civic data, to transportation and industry datasets from around the world.Flexible and familiar tooling for fast time-to-valueAccelerate the development and deployment of your geospatial use cases using flexible and familiar tooling, open-source frameworks, and Google’s robust partner ecosystem.Key featuresRethink how geospatial relationships and insights can accelerate your businessGoogle Cloud's comprehensive platform helps you solve for common geospatial use cases at scale.Use case: Environmental risk and sustainabilityUnderstand the risks posed by environmental conditions by predicting natural disasters like flooding and wildfires, which will help you more effectively anticipate and plan for risk. Map economic, environmental, and social conditions to determine focus areas for protecting and preserving the environment.Use case: Location intelligenceCombining proprietary site metrics with publicly available data like traffic patterns and geographic mobility, you can use geospatial analytics to find the optimum locations for your business and predict financial outcomes–whether you’re a retailer looking for new places to open stores or a telecom provider optimizing cell tower coverage.Use case: Supply chain and logistics optimizationBetter manage fleet operations such as last-mile logistics, autonomous vehicles, precision railroading, and mobility planning by incorporating geospatial context into business decision-making. Create a digital twin of your supply chain to mitigate supply chain risk, design for sustainability, and minimize your carbon footprint. Ready to get started? Contact usLearn moreAnnouncing new tools to measure—and reduce—your environmental impactRead the blogVideoUsing data cloud to map out insights and business risks from the ground upWatch videoHelping companies tackle climate change with Earth EngineRead the blogCustomersIn their own wordsVideoNext '21: How Unilever leverages Earth Engine to build a more sustainable supply chainVideo (30:45)VideoNext '20: How Telus serves location-based insights using BigQueryVideo (34:44)See all customersPartnersFeatured geospatial cloud partnersConnect with our partners to accelerate launching, innovating, or running geospatial applications. 
And do so on the industry's cleanest cloud.Related servicesGeospatial cloud productsBigQueryTap into BigQuery's native support for vector data analysis in common geospatial data formats.Earth EngineLeverage Google's planet-scale geospatial data catalog coupled with powerful computing infrastructure.Google Maps PlatformExplore where real-world insights and immersive location experiences can take your business.Vertex AIBuild, deploy, and scale geospatial ML models faster, with pre-trained and custom tooling within a unified AI platform.Cloud SQLRun the popular PostGIS extension of PostgreSQL without self-management hassles.DataprocSpin up fully managed Spark environments with Dataproc for processing geospatial data at scale.DocumentationGetting started guides and technical documentationGoogle Cloud BasicsBigQuery documentation for geospatial analyticsFind tutorials, quickstart guides, visualization options, release notes, and more.Learn moreGoogle Cloud BasicsEarth Engine documentationFind tutorials, JavaScript and Python guides, API References, connect with the Earth Engine developer community, and more.Learn moreGoogle Cloud BasicsGoogle Maps Platform documentationAll the information you need to bring the real world to your web and mobile apps with Google Maps Platform SDKs and APIs for Maps, Routes, and Places.Learn moreBest PracticeAdvanced visualization on Google Maps Platform using deck.glTake advantage of the wide variety of beautiful, insightful 2D and 3D visualizations offered by deck.gl to create a new level of mapping experiences with your data.Learn moreNot seeing what you’re looking for?View documentationWhat's newGeospatial cloud launches, announcements, and blogsSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.NewsNow Previewing Earth Engine on Google Cloud for Commercial UseSign up nowBlog postHow Bayer Crop Science Uses BigQuery and Geobeam to improve soil healthRead the blogNewsNew geography functions available in BigQueryLearn moreTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Marketing_Analytics.txt b/Marketing_Analytics.txt new file mode 100644 index 0000000000000000000000000000000000000000..cff0f6cff4761d7a16657fabe9f553911439aa94 --- /dev/null +++ b/Marketing_Analytics.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/marketing-analytics +Date Scraped: 2025-02-23T11:59:06.375Z + +Content: +Read our latest customer story: How Adlucent uses AI to uncover new marketing insights from big data to better support clients.Marketing analytics and AI solutionsBring business and marketing data together for a holistic view of marketing's performance and business impact. Create advanced audiences and enhance your customer experiences with the power of your data and Google Cloud.Contact usGo to consoleBenefitsHow marketing analytics can help your business growIncrease ROI with data-driven marketing insightsConnect data from siloed systems into BigQuery to gain holistic marketing insights. Enable robust channel performance reporting and predictive marketing analytics. Build predictive and advanced audience segmentsLeverage Google Cloud’s machine learning capabilities to build differentiated audiences. 
Create audience segments like propensity to buy and high customer lifetime value to reach your most valuable customers.Enhance customer experiences and brand measurementDeliver enhanced customer experiences by providing relevant recommendations at scale. Monitor and leverage customer sentiment to plan product improvements, prioritize customer issues, and optimize Ads spend.Key featuresGoogle is your media and cloud partner on this journeyGoogle is uniquely positioned to bring your marketing and business data together in one platform with native Google Ads integrations and more.How marketing data and analytics come together with Google CloudGoogle Cloud offers seamless connectors to key data sources like Google Ads, Google Analytics 360, Campaign Manager, etc. Connect your first-party data like CRM, sales, product, customer service, social and more, for a holistic view of marketing analytics in a connected platform. Easily build AI models from this consolidated data with built-in machine learning tools like BigQuery ML and Vertex AI.Gain a competitive advantage with marketing analytics and AIAn advanced marketing analytics and AI solution for all users and intents throughout the data and ML life cycle means more innovation in marketing for your organization. Flexible tools like BigQuery and Looker offer streamlined and scalable collaboration as your organization takes advantage of continuous enhancements with Google Cloud's technologies.Futureproof your marketing with leading AI and machine learningThe digital marketing landscape is changing: from more complex customer journeys to regulatory changes on how data is collected and used to industry changes impacting third-party cookie measurement. A solution created upon the foundation of Google’s pioneering AI research and best in class algorithms with responsible AI capabilities can help your organization navigate these changes.Drive campaign performance with intelligent, real-time dataWhether you are a marketing leader at your organization or an agency providing best-in-class marketing services, partnering with Google Cloud can help you tackle marketing analytics challenges by leveraging native Ads data integrations with BigQuery and developing custom, differentiated audience segments with Google Cloud's AI products.Understand how consumers are searching with Google Trends dataMarketers can now access Google Trends datasets through direct interaction with BigQuery in a safe, secure, and private manner. This means you can explore what consumers are searching for to inform your analyses and make better data-driven decisions. If you’re new to BigQuery, spin up a free project using the BigQuery sandbox or check out the sample Looker dashboard to explore this data.Ready to get started? 
Contact usCustomersRead about our customers using Google Cloud marketing analytics and AI solutionsVideoSucceeding in digital media and eCommerce with data—Motorola Mobility and Supermetrics9:04VideoHow Overstock.com increased marketing ROI with Instant BQML13:39Case studyBlue Apron builds an analytics platform with Looker and BigQuery to enable faster business decisions5-min readVideoSee how L'Oreal Taiwan uses data-driven marketing with Google Cloud and Google Marketing PlatformVideo (2:04)Case studyUsing Google Cloud for data analytics helps zulily improve customer experiences5-min readCase studyHearst Newspapers: Engaging readers with cloud machine learning5-min readSee all customersPartnersGoogle Cloud marketing analytics specialized partnersNot sure where to get started or already working with a partner? Explore our partner ecosystem and learn how to bring Google Cloud to advance your marketing analytics.See all partnersDocumentationAdditional marketing analytics and AI resources Explore our documentation to learn more with tutorials, explore popular architectures, and see patterns for common marketing analytics use cases.ArchitectureBuilding an ecommerce recommendation systemLearn how to build a recommendation system with BigQuery ML to generate product or service recommendations from customer data. Learn moreTutorialBuild new audiences based on current customer lifetime valueLearn how to identify your most valuable current customers and then use them to develop similar audiences in Google Ads.Learn moreWhitepaperCreative analysis at scale with Google Cloud and MLThis article explores how to take advantage of cloud technology in order to see creative performance and insights at scale.Learn moreQuickstartLooker for Google Marketing PlatformGet started with out-of-the-box insights with Looker's Blocks and Actions for Google Marketing Platform (GMP) and activate data from Looker in GMP and GA360.Learn morePatternCreate a unified app analytics platformCentralize your data sources in a data warehouse and dig deeper into customer behavior to make informed business decisions.Learn moreBest PracticeDeliver great experiences with Google Cloud and Ads productsBrands are using Google Ads and Cloud products together to deliver better campaigns and customer experiences.Learn morePatternUse Google Trends data for common business needsUse the Google Trends data to solve business challenges like uncovering trends in retail locations, predicting product demand, and inspiring new marketing campaigns.Learn moreNot seeing what you’re looking for?View documentationTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Media_and_Entertainment.txt b/Media_and_Entertainment.txt new file mode 100644 index 0000000000000000000000000000000000000000..45ed60e65ff2c66f5cf0d1a4079813f62b922b2b --- /dev/null +++ b/Media_and_Entertainment.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/media-entertainment +Date Scraped: 2025-02-23T11:57:52.945Z + +Content: +Learn how MLB hits a home-run with its fans by transforming how they engage with the sport with data and AI.Google Cloud for media and entertainmentThe world's leading media companies use Google Cloud to transform audience experiences with innovation and insights.Talk with an expert2:55Watch how Spotify unlocks data insights and innovations with Google Cloud2:33Watch how media companies can provide personalized content discovery Build personalized experiences with generative AIGenerative AI revolutionizes how media companies can help audiences find relevant content. Using conversational interactions, consumers can receive highly personalized content and service recommendations, including music, video, and blogs. Learn more about generative AICreate, distribute, and monetize media with Google CloudGoogle serves billions of users. With Google Cloud, harness world-class technology to transform experiences for audiences of any size.Generative AI and analytics for mediaAs audiences have more choice than ever, media companies need an edgeTurn complexity into opportunity. Use AI and analytics to uncover new ways to innovate and monetize.“We’re excited about how generative AI-powered search will help users find the most relevant content even easier and faster.”Robin Chacko, EVP Direct-to-Consumer, STARZExplore data and AI solutions:Build, deploy, and scale machine learning models fasterRecommendations from Vertex AIServerless and cost-effective enterprise data warehouseBigQueryStreaming and Live Broadcasts for MediaMedia companies need to serve experiences to worldwide audiencesImprove how media is shared with the masses. Deliver content to a global audience while maximizing cost efficiencies, and modernizing infrastructure.Leverage the same infrastructure that YouTube uses to serve its two billion global users—producing reliable streams with access to a large network.“As we continue to reach new viewers… we know we can depend on Google Cloud to help us have a solid, state-of-the-art offering.”Geir Magnusson Jr, CTO, fuboTVExplore content delivery solutions:Fast, reliable web acceleration Cloud CDNHigh-quality video streamingMedia CDNDirect-to-Consumer for MediaAudiences have shifted their media consumption to digital platformsUnderstand and delight audiences. Forge personalized connections with users using richer data and insights. “On-premises, when we experience a failure on the main encoders, it takes a few seconds for the backup encoders to kick in. As a result, mobile customers see a few seconds of black screen. 
Now, with the Google Cloud encoding solution, the switch is seamless because we already have a copy available and ready to deliver.”Davide Gandino, Head of Streaming, Cloud, and Computing Systems, Sky ItaliaExplore solutions:Help audiences find what they need with Google-quality search and recommendationsVertex AI Search for media and entertainment Maximize data scalability with up to 99.9% availabilitySpannerImprove decision-making with intelligent data access and analyticsCloud-Enabled Content Production for MediaMedia companies must produce content quicker and with more efficiencyCatapult content creation. Streamline production workflows and modernize content management.“Google Cloud has always been an important part of our overall strategy to attain infinite rendering power at a moment’s notice, and what you see on the screen is a result of that important partnership.”Jordan Soles, CTO, Rodeo FXExplore media APIs:Personalized video streams on a per-user basisVideo Stitcher APIConvert and package on-demand video/audio files for optimized streamingTranscoder APIDeliver high-quality live experiencesLive Stream APIUnlock media insightsGenerative AI and analytics for mediaAs audiences have more choice than ever, media companies need an edgeTurn complexity into opportunity. Use AI and analytics to uncover new ways to innovate and monetize.“We’re excited about how generative AI-powered search will help users find the most relevant content even easier and faster.”Robin Chacko, EVP Direct-to-Consumer, STARZExplore data and AI solutions:Build, deploy, and scale machine learning models fasterRecommendations from Vertex AIServerless and cost-effective enterprise data warehouseBigQueryDistribute content anywhereStreaming and Live Broadcasts for MediaMedia companies need to serve experiences to worldwide audiencesImprove how media is shared with the masses. Deliver content to a global audience while maximizing cost efficiencies, and modernizing infrastructure.Leverage the same infrastructure that YouTube uses to serve its two billion global users—producing reliable streams with access to a large network.“As we continue to reach new viewers… we know we can depend on Google Cloud to help us have a solid, state-of-the-art offering.”Geir Magnusson Jr, CTO, fuboTVExplore content delivery solutions:Fast, reliable web acceleration Cloud CDNHigh-quality video streamingMedia CDNDelight digital audiencesDirect-to-Consumer for MediaAudiences have shifted their media consumption to digital platformsUnderstand and delight audiences. Forge personalized connections with users using richer data and insights. “On-premises, when we experience a failure on the main encoders, it takes a few seconds for the backup encoders to kick in. As a result, mobile customers see a few seconds of black screen. Now, with the Google Cloud encoding solution, the switch is seamless because we already have a copy available and ready to deliver.”Davide Gandino, Head of Streaming, Cloud, and Computing Systems, Sky ItaliaExplore solutions:Help audiences find what they need with Google-quality search and recommendationsVertex AI Search for media and entertainment Maximize data scalability with up to 99.9% availabilitySpannerImprove decision-making with intelligent data access and analyticsBoost media productionCloud-Enabled Content Production for MediaMedia companies must produce content quicker and with more efficiencyCatapult content creation. 
Streamline production workflows and modernize content management.“Google Cloud has always been an important part of our overall strategy to attain infinite rendering power at a moment’s notice, and what you see on the screen is a result of that important partnership.”Jordan Soles, CTO, Rodeo FXExplore media APIs:Personalized video streams on a per-user basisVideo Stitcher APIConvert and package on-demand video/audio files for optimized streamingTranscoder APIDeliver high-quality live experiencesLive Stream APILatest thought leadership in media and entertainmentHow AI is shaping the future of media and entertainmentMultimodal AI, AI agents, and other groundbreaking innovations are poised to revolutionize the media and entertainment industry. Learn how these emerging trends could impact your business and gain a competitive edge.5-min readGenerative AI in media and entertainmentGen AI can help you create quality content and enhance personalization. Discover three strategies you can implement.5-min readHow Recommendations AI for media can boost customer retentionLearn how Google is using AI to help redefine the standard for recommendations in media.Video (9:11)TIME is using Google Cloud to tell the stories that matterTIME leverages BigQuery, Looker, and Recommendations AI to deepen their audience relationship.Video (2:56)Create the future of media and entertainment with GoogleSee how Cloud, Ads, Google TV, Play, and YouTube solutions can help accelerate your media business.Video (1:04)Google Cloud Media CDN: Efficiently scale and deliver media content to anyone, anywhere globallyWatch how to use the same infrastructure that delivers YouTube videos.Video (7:01)How MLB analyzes data with Google CloudSee how MLB turns 25 M data points into insights for players and fans alike.Video (6:31)How Newsweek increased total revenue per visit by 10% with Recommendations AILearn how Recommendations AI has improved Newsweek's CTR by 50-75% and conversion rates by 10%.5-min readIn the Cloud: Googlers share ways to transform media and entertainmentLearn how data and AI are transforming the media industry today.Video (8:51)Transform digital experiences with Google AI-powered search and recommendationsUse AI to help your media company hypercharge your search and recommendation capabilities.Video (16:28)Media supply chains - why now is the best time to migrateRead how broadcasters can use data and AI to achieve operational excellence and optimize costs.5-min readView MoreLeading media companies trust Google CloudThe world's top media and entertainment companies trust Google Cloud to unlock media insights and produce quality, direct-to-consumer content. See all customersRecommended partners for media and entertainmentOur industry partners can help you solve your business challenges and unlock growth opportunities with painless implementations and integrated out-of-the-box or custom solutions.See all partnersProtect your content with multilayered securityMPAUnder a shared security model, customers who use Google Cloud can configure their cloud services to support Motion Picture Association (MPA) best practices.Learn moreISEGoogle Cloud undergoes a quarterly security assessment from Independent Security Evaluators (ISE). 
This assessment tests visual effects workflows and is shared with content owners and film studios.Learn moreMTCS (Singapore) Tier 3The Multi-Tier Cloud Security (MTCS) Singapore Standard (SS)584 is a cloud security certification managed by the Singapore Info-comm Media Development Authority (IMDA).Learn moreGDPRGoogle Cloud champions initiatives that prioritize and improve the security and privacy of customer personal data.Learn moreReady to get started?Contact our team to help solve your media challenges with cloud-based technology.Talk with an expertRender media with Google CloudExplore trainingArchitect cloud for media streamingExplore trainingServe media with Google CloudExplore trainingGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Meet_regulatory,_compliance,_and_privacy_needs.txt b/Meet_regulatory,_compliance,_and_privacy_needs.txt new file mode 100644 index 0000000000000000000000000000000000000000..95964ba2dd44e03509335dabe2acb9854207a5e8 --- /dev/null +++ b/Meet_regulatory,_compliance,_and_privacy_needs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security/meet-regulatory-compliance-and-privacy-needs +Date Scraped: 2025-02-23T11:43:09.577Z + +Content: +Home Docs Cloud Architecture Center Send feedback Meet regulatory, compliance, and privacy needs Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This principle in the security pillar of the Google Cloud Architecture Framework helps you identify and meet regulatory, compliance, and privacy requirements for cloud deployments. These requirements influence many of the decisions that you need to make about the security controls that must be used for your workloads in Google Cloud. Principle overview Meeting regulatory, compliance, and privacy needs is an unavoidable challenge for all businesses. Cloud regulatory requirements depend on several factors, including the following: The laws and regulations that apply to your organization's physical locations The laws and regulations that apply to your customers' physical locations Your industry's regulatory requirements Privacy regulations define how you can obtain, process, store, and manage your users' data. You own your own data, including the data that you receive from your users. Therefore, many privacy controls are your responsibility, including controls for cookies, session management, and obtaining user permission. The recommendations to implement this principle are grouped within the following sections: Recommendations to address organizational risks Recommendations to address regulatory and compliance obligations Recommendations to manage your data sovereignty Recommendations to address privacy requirements Recommendations to address organizational risks This section provides recommendations to help you identify and address risks to your organization. Identify risks to your organization This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Before you create and deploy resources on Google Cloud, complete a risk assessment. This assessment should determine the security features that you need to meet your internal security requirements and external regulatory requirements. Your risk assessment provides you with a catalog of organization-specific risks, and informs you about your organization's capability to detect and counteract security threats. 
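A practical input to that catalog is an up-to-date inventory of what you actually run. As a hedged illustration only (not part of the original guidance), the following sketch uses the Cloud Asset Inventory Python client to enumerate the resources in a single project; the project ID is a placeholder, and in practice you might scope the call to a folder or organization instead.

from google.cloud import asset_v1

def inventory_resources(project_id: str) -> None:
    # List every resource in the project so it can be fed into a risk
    # register and mapped to threats and controls during the assessment.
    client = asset_v1.AssetServiceClient()
    request = asset_v1.ListAssetsRequest(
        parent=f"projects/{project_id}",
        content_type=asset_v1.ContentType.RESOURCE,
    )
    for asset in client.list_assets(request=request):
        print(asset.asset_type, asset.name)

inventory_resources("example-project")  # placeholder project ID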
You must perform a risk analysis immediately after deployment and whenever there are changes in your business needs, regulatory requirements, or threats to your organization. As mentioned in the Implement security by design principle, your security risks in a cloud environment differ from on-premises risks. This difference is due to the shared responsibility model in the cloud, which varies by service (IaaS, PaaS, or SaaS) and your usage. Use a cloud-specific risk assessment framework like the Cloud Controls Matrix (CCM). Use threat modeling, like OWASP application threat modeling, to identify and address vulnerabilities. For expert help with risk assessments, contact your Google account representative or consult Google Cloud's partner directory. After you catalog your risks, you must determine how to address them—that is, whether you want to accept, avoid, transfer, or mitigate the risks. For mitigation controls that you can implement, see the next section about mitigating your risks. Mitigate your risks This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. When you adopt new public cloud services, you can mitigate risks by using technical controls, contractual protections, and third-party verifications or attestations. Technical controls are features and technologies that you use to protect your environment. These include built-in cloud security controls like firewalls and logging. Technical controls can also include using third-party tools to reinforce or support your security strategy. There are two categories of technical controls: You can implement Google Cloud's security controls to help you mitigate the risks that apply to your environment. For example, you can secure the connection between your on-premises networks and your cloud networks by using Cloud VPN and Cloud Interconnect. Google has robust internal controls and auditing to protect against insider access to customer data. Our audit logs provide you with near real-time logs of Google administrator access on Google Cloud. Contractual protections refer to the legal commitments made by us regarding Google Cloud services. Google is committed to maintaining and expanding our compliance portfolio. The Cloud Data Processing Addendum (CDPA) describes our commitments with regard to the processing and security of your data. The CDPA also outlines the access controls that limit Google support engineers' access to customers' environments, and it describes our rigorous logging and approval process. We recommend that you review Google Cloud's contractual controls with your legal and regulatory experts, and verify that they meet your requirements. If you need more information, contact your technical account representative. Third-party verifications or attestations refer to having a third-party vendor audit the cloud provider to ensure that the provider meets compliance requirements. For example, to learn about Google Cloud attestations with regard to the ISO/IEC 27017 guidelines, see ISO/IEC 27017 - Compliance. To view the current Google Cloud certifications and letters of attestation, see Compliance resource center. Recommendations to address regulatory and compliance obligations A typical compliance journey has three stages: assessment, gap remediation, and continual monitoring. This section provides recommendations that you can use during each of these stages. Assess your compliance needs This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. 
Compliance assessment starts with a thorough review of all of your regulatory obligations and how your business is implementing them. To help you with your assessment of Google Cloud services, use the Compliance resource center. This site provides information about the following: Service support for various regulations Google Cloud certifications and attestations To better understand the compliance lifecycle at Google and how your requirements can be met, you can contact sales to request help from a Google compliance specialist. Or, you can contact your Google Cloud account manager to request a compliance workshop. For more information about tools and resources that you can use to manage security and compliance for Google Cloud workloads, see Assuring Compliance in the Cloud. Automate implementation of compliance requirements This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. To help you stay in compliance with changing regulations, determine whether you can automate how you implement compliance requirements. You can use both compliance-focused capabilities that Google Cloud provides and blueprints that use recommended configurations for a particular compliance regime. Assured Workloads builds on the controls within Google Cloud to help you meet your compliance obligations. Assured Workloads lets you do the following: Select your compliance regime. Then, the tool automatically sets the baseline personnel access controls for the selected regime. Set the location for your data by using organization policies so that your data at rest and your resources remain only in that region. Select the key-management option (such as the key rotation period) that best meets your security and compliance requirements. Select the access criteria for Google support personnel to meet certain regulatory requirements such as FedRAMP Moderate. For example, you can select whether Google support personnel have completed the appropriate background checks. Use Google-owned and Google-managed encryption keys that are FIPS-140-2 compliant and support FedRAMP Moderate compliance. For an added layer of control and for the separation of duties, you can use customer-managed encryption keys (CMEK). For more information about keys, see Encrypt data at rest and in transit. In addition to Assured Workloads, you can use Google Cloud blueprints that are relevant to your compliance regime. You can modify these blueprints to incorporate your security policies into your infrastructure deployments. To help you build an environment that supports your compliance requirements, Google's blueprints and solution guides include recommended configurations and provide Terraform modules. The following table lists blueprints that address security and alignment with compliance requirements. Requirement Blueprints and solution guides FedRAMP Google Cloud FedRAMP implementation guide Setting up a FedRAMP Aligned Three-Tier Workload on Google Cloud HIPAA Protecting healthcare data on Google Cloud Setting up a HIPAA-aligned workload using Data Protection Toolkit Monitor your compliance This recommendation is relevant to the following focus areas: Cloud governance, risk, and compliance Logging, monitoring, and auditing Most regulations require that you monitor particular activities, which include access-related activities. To help with your monitoring, you can use the following: Access Transparency: View near real-time logs when Google Cloud administrators access your content. 
Firewall Rules Logging: Record TCP and UDP connections inside a VPC network for any rules that you create. These logs can be useful for auditing network access or for providing early warning that the network is being used in an unapproved manner. VPC Flow Logs: Record network traffic flows that are sent or received by VM instances. Security Command Center Premium: Monitor for compliance with various standards. OSSEC (or another open source tool): Log the activity of individuals who have administrator access to your environment. Key Access Justifications: View the reasons for a key-access request. Security Command Center notifications: Get alerts when noncompliance issues occur. For example, get alerts when users disable two-step verification or when service accounts are over-privileged. You can also set up automatic remediation for specific notifications. Recommendations to manage your data sovereignty This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Data sovereignty provides you with a mechanism to prevent Google from accessing your data. You approve access only for provider behaviors that you agree are necessary. For example, you can manage your data sovereignty in the following ways: Store and manage encryption keys outside the cloud. Grant access to these keys based on detailed access justifications. Protect data in use by using Confidential Computing. Manage your operational sovereignty This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Operational sovereignty provides you with assurances that Google personnel can't compromise your workloads. For example, you can manage operational sovereignty in the following ways: Restrict the deployment of new resources to specific provider regions. Limit Google personnel access based on predefined attributes such as their citizenship or geographic location. Manage software sovereignty This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Software sovereignty provides you with assurances that you can control the availability of your workloads and run them wherever you want. Also, you can have this control without being dependent or locked in with a single cloud provider. Software sovereignty includes the ability to survive events that require you to quickly change where your workloads are deployed and what level of outside connection is allowed. For example, to help you manage your software sovereignty, Google Cloud supports hybrid and multicloud deployments. In addition, GKE Enterprise lets you manage and deploy your applications in both cloud environments and on-premises environments. If you choose on-premises deployments for data sovereignty reasons, Google Distributed Cloud is a combination of hardware and software that brings Google Cloud into your data center. Recommendations to address privacy requirements Google Cloud includes the following controls that promote privacy: Default encryption of all data when it's at rest, when it's in transit, and while it's being processed. Safeguards against insider access. Support for numerous privacy regulations. The following recommendations address additional controls that you can implement. For more information, see Privacy Resource Center. Control data residency This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Data residency describes where your data is stored at rest. 
Data residency requirements vary based on system design objectives, industry regulatory concerns, national law, tax implications, and even culture. Controlling data residency starts with the following: Understand your data type and its location. Determine what risks exist for your data and which laws and regulations apply. Control where your data is stored or where it goes. To help you comply with data residency requirements, Google Cloud lets you control where your data is stored, how it's accessed, and how it's processed. You can use resource location policies to restrict where resources are created and to limit where data is replicated between regions. You can use the location property of a resource to identify where the service is deployed and who maintains it. For more information, see Resource locations supported services. Classify your confidential data This recommendation is relevant to the following focus area: Data security. You must define what data is confidential, and then ensure that the confidential data is properly protected. Confidential data can include credit card numbers, addresses, phone numbers, and other personally identifiable information (PII). Using Sensitive Data Protection, you can set up appropriate classifications. You can then tag and tokenize your data before you store it in Google Cloud. Additionally, Dataplex offers a catalog service that provides a platform for storing, managing, and accessing your metadata. For more information and an example of data classification and de-identification, see De-identification and re-identification of PII using Sensitive Data Protection. Lock down access to sensitive data This recommendation is relevant to the following focus areas: Data security Identity and access management Place sensitive data in its own service perimeter by using VPC Service Controls. VPC Service Controls improves your ability to mitigate the risk of unauthorized copying or transferring of data (data exfiltration) from Google-managed services. With VPC Service Controls, you configure security perimeters around the resources of your Google-managed services to control the movement of data across the perimeter. Set Google Identity and Access Management (IAM) access controls for that data. Configure multifactor authentication (MFA) for all users who require access to sensitive data. Previous arrow_back Use AI for security Send feedback \ No newline at end of file diff --git a/Memorystore.txt b/Memorystore.txt new file mode 100644 index 0000000000000000000000000000000000000000..cbe95003013d94e2f26fa3848e7d8c0c622864a9 --- /dev/null +++ b/Memorystore.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/memorystore +Date Scraped: 2025-02-23T12:04:10.522Z + +Content: +Learn more about the latest generative AI launches, including vector search capabilities and LangChain integrations, across Google Cloud databases.Jump to MemorystoreMemorystoreFully managed in-memory Valkey, Redis* and Memcached service that offers sub millisecond data access, scalability, and high availability for a wide range of applications. 
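Because the service is protocol compatible with the open source engines, existing clients and tools connect to it exactly as they would to self-managed Redis. As a minimal illustration, not taken from the product documentation, the standard redis-cli can talk to an instance directly; the IP address below is a placeholder for your instance's private endpoint, the commands must be run from a client (for example, a Compute Engine VM) on the authorized VPC network, and the sketch assumes that in-transit encryption and AUTH are not enabled on the instance.

# Placeholder endpoint: replace 10.0.0.3 with your instance's private IP address.
redis-cli -h 10.0.0.3 -p 6379 SET greeting "hello from Memorystore"
redis-cli -h 10.0.0.3 -p 6379 GET greeting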
Go to consoleView Redis Cluster documentation100% compatible with open source Valkey, Redis Cluster, Redis, and MemcachedMigrate your caching layer to cloud with zero code changeHigh availability, up to a 99.99% SLAVIDEOIntroducing Memorystore for Redis Cluster4:12BenefitsFocus on building great appsMemorystore automates complex tasks for open source Valkey, Redis Cluster, Redis, and Memcached like enabling high availability, failover, patching, and monitoring so you can spend more time building your applications.Simplified scalingMemorystore for Valkey and Redis Cluster scale without downtime to support up to 250 nodes, terabytes of keyspace, and 60x more throughput than Memorystore for Redis with microsecond latencies.Highly availableMemorystore for Valkey and Redis Cluster have zero-downtime scaling, automatically distributed replicas across availability zones, and automated failover. Memorystore for Redis Cluster also offers a 99.99% SLA.Key featuresKey features Choice of enginesChoose from the most popular open source caching engines to build your applications. Memorystore supports Valkey, Redis Cluster, Redis, and Memcached and is fully protocol compatible. Choose the right engine that fits your cost and availability requirements.ConnectivityMemorystore for Valkey and Memorystore for Redis Cluster are available with Private Service Connect (PSC) to simplify management and to offer secure, private, and granular connectivity with minimal IP consumption. All services are integrated with cloud monitoring, and more, Memorystore is built on the best of Google Cloud.Vector searchUse Memorystore for Redis as an ultra-low latency data store for your generative AI applications. Approximate nearest neighbor (ANN) vector search (in preview) delivers fast, approximate results—ideal for large datasets where a close match is sufficient. Exact nearest neighbor (KNN) vector search (in preview) promises accurate results, although it may require a bit more time to process. 
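As a sketch of what using the vector search preview can look like, the following assumes that the preview exposes the open source FT.* search command syntax; the endpoint, index name, key prefix, and vector dimensions are placeholder values, and you should check the Memorystore documentation for the commands and options that are actually supported.

# Create an ANN (HNSW) index over hash fields whose keys start with "doc:".
redis-cli -h 10.0.0.3 FT.CREATE doc_idx ON HASH PREFIX 1 doc: \
  SCHEMA embedding VECTOR HNSW 6 TYPE FLOAT32 DIM 128 DISTANCE_METRIC COSINE
# KNN queries pass the query vector as packed float32 bytes, so they are usually
# issued from a client library rather than from the shell, for example:
#   FT.SEARCH doc_idx "*=>[KNN 5 @embedding $vec]" PARAMS 2 vec <vector bytes> DIALECT 2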
Fully managedProvisioning, replication, failover, and patching are all automated, which drastically reduces the time you spend on DevOps.View all featuresVIDEOWhat’s new with Memorystore: How Redis provides speed, reliability, ease of use, and lower costs22:17CustomersLearn from customers using Memorystore in-memory service for Valkey, Redis and MemcachedBlog postInstacart migrates to Memorystore and sees a 23 percent reduction in latency and costs5-min readBlog post Virgin Media O2 analyzes billions of records at sub-millisecond latencies with Memorystore for Redis5-min readBlog postOpenX is serving over 150 billion ad requests per day with cloud databases5-min readSee all customersHad we known the full scope of benefits from switching to Memorystore earlier, we could have saved more engineering time for delivering value to other parts of our e-commerce platform.Dennis Turko, Staff Software Engineer, InstacartRead the blogWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postHigh availability with Memorystore for Redis Cluster, Part 1: Four ninesRead the blogBlog postZero-downtime scaling with Memorystore for Redis ClusterRead the blogBlog postMemorystore for Redis Cluster is generally availableRead the blogBlog postScale Redis with the new Memorystore for Redis ClusterRead the blogVideoBest practices for Memorystore for RedisWatch videoDocumentationDocumentationGoogle Cloud BasicsMemorystore for Valkey OverviewRead about the benefits, use cases, and features of Memorystore for Valkey. The overview also provides key details about the service.Learn moreGoogle Cloud BasicsMemorystore for Redis Cluster overviewRead about the benefits, use cases, and features of Memorystore for Redis Cluster. The overview also provides key details about the service.Learn moreGoogle Cloud BasicsMemorystore for Redis standalone overviewRead about the use cases, and features of Memorystore for Redis standalone service. The overview also provides key details about the service.Learn moreGoogle Cloud BasicsMemorystore for Memcached overviewThis page introduces the Memorystore for Memcached service, including use cases, key concepts, and the advantages of using Memcached.Learn moreQuickstartRedis quickstart using the UILearn how to create a Memorystore for Redis instance, connect to the instance, set a value, retrieve a value, and delete the instance.Learn moreTutorialConnecting to a Redis instanceLearn how to access Redis instances from Compute Engine, GKE clusters, Cloud Functions, the App Engine flexible environment, and the App Engine standard environment.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for MemorystoreAll featuresAll features Choice of enginesChoose from the most popular open source caching engines to build your applications. Memorystore supports Valkey, Redis Cluster, Redis, and Memcached and is fully protocol compatible. Choose the right engine that fits your cost and availability requirements.ConnectivityMemorystore for Valkey and Memorystore for Redis Cluster is available with Private Service Connect (PSC) to simplify management and to offer secure, private, and granular connectivity with minimal IP consumption. 
Memorystore for Redis and Memcached support Private Service Access (PSA) and Direct Peering to offer connectivity using private IP. LangChain integration Easily build gen AI applications that are more accurate, transparent, and reliable with LangChain integration. Memorystore for Redis has three LangChain integrations—Document loader for loading and storing information from documents, Vector stores for enabling semantic search, and Chat Messages Memory for enabling chains to recall previous conversations. Visit the GitHub repository to learn more. Vector search Use Memorystore for Redis as an ultra-low latency data store for your generative AI applications. Approximate nearest neighbor (ANN) vector search (in preview) delivers fast, approximate results—ideal for large datasets where a close match is sufficient. Exact nearest neighbor (KNN) vector search (in preview) promises accurate results, although it may require a bit more time to process. Fully managed Provisioning, replication, failover, and patching are all automated, which drastically reduces the time you spend doing DevOps. Persistence Achieve near-zero Recovery Point Objectives (RPO) through continuous write logging, or set up periodic RDB snapshots, ensuring heightened resiliency against zonal failures. Security Memorystore is protected from the internet using VPC networks and private IP and comes with IAM integration—all designed to protect your data. Systems are monitored 24/7/365, ensuring your applications and data are protected. Memorystore for Redis provides in-transit encryption and Redis AUTH to further secure your sensitive data. Highly scalable Memorystore for Valkey and Memorystore for Redis Cluster provide zero-downtime scaling up to 250 nodes, terabytes of keyspace, flexible node sizes from 1.4 GB to 58 GB, 10 TB+ per instance, and 60x more throughput with microsecond latencies. Monitoring Monitor your instance and set up custom alerts with Cloud Monitoring. You can also integrate with OpenCensus to get more insights into client-side metrics. Highly available Memorystore for Redis Cluster offers a 99.99% SLA with automatic failover. Shards are automatically distributed across zones for maximum availability. Standard tier Memorystore for Redis instances provide a 99.9% availability SLA with automatic failover to ensure that your instance is highly available. You also get the same availability SLA for Memcached instances. Migration Memorystore is compatible with the open source protocol, which makes it easy to switch your applications with no code changes. You can leverage the RIOT tool to seamlessly migrate existing Redis deployments to Memorystore for Valkey or Memorystore for Redis Cluster. Pricing Memorystore offers various sizes to fit any budget. Pricing varies with settings—including how much capacity, how many replicas, and which region you provision. Memorystore also offers per-second billing, and instances are easy to start and stop. View Memorystore for Redis Cluster pricing View Memorystore for Redis pricing View Memorystore for Memcached pricing *Redis is a trademark of Redis Ltd. All rights therein are reserved to Redis Ltd. Any use by Google is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Google.
Memorystore is based on and is compatible with open-source Redis versions 7.2 and earlier and supports a subset of the total Redis command library. Take the next step Start your next project, explore interactive tutorials, and manage your account. Go to console Need help getting started? Contact sales Work with a trusted partner Find a partner Get tips and best practices See tutorials \ No newline at end of file diff --git a/Meshed_pattern.txt b/Meshed_pattern.txt new file mode 100644 index 0000000000000000000000000000000000000000..02db109d9f6d844630f19cbe08a4cd5f3054ed82 --- /dev/null +++ b/Meshed_pattern.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/meshed-pattern +Date Scraped: 2025-02-23T11:50:30.584Z + +Content: +Home Docs Cloud Architecture Center Send feedback Meshed pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The meshed pattern is based on establishing a hybrid network architecture. That architecture spans multiple computing environments. In these environments, all systems can communicate with one another and aren't limited to one-way communication based on the security requirements of your applications. This networking pattern applies primarily to tiered hybrid, partitioned multicloud, or bursting architectures. It's also applicable to business continuity design to provision a disaster recovery (DR) environment in Google Cloud. In all cases, it requires that you connect computing environments in a way that aligns with the following communication requirements: Workloads can communicate with one another across environment boundaries using private RFC 1918 IP addresses. Communication can be initiated from either side. The specifics of the communications model can vary based on the applications and security requirements, such as the communication models discussed in the design options that follow. The firewall rules that you use must allow traffic between specific IP address sources and destinations based on the requirements of the application, or applications, for which the pattern is designed. Ideally, you can use a multi-layered security approach to restrict traffic flows in a fine-grained fashion, both between and within computing environments. Architecture The following diagram illustrates a high-level reference architecture of the meshed pattern. All environments should use an overlap-free RFC 1918 IP address space. On the Google Cloud side, you can deploy workloads into a single or multiple shared VPCs or non-shared VPCs. For other possible design options of this pattern, refer to the design variations that follow. The selected structure of your VPCs should align with the projects and resources hierarchy design of your organization. The VPC network of Google Cloud extends to other computing environments. Those environments can be on-premises or in another cloud. Use one of the hybrid and multicloud networking connectivity options that meet your business and application requirements. Limit communications to only the allowed IP addresses of your sources and destinations. Use any of the following capabilities, or a combination of them: Firewall rules or firewall policies. Network virtual appliance (NVA) with next generation firewall (NGFW) inspection capabilities, placed in the network path.
Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention without changing the network design or routing. Variations The meshed architecture pattern can be combined with other approaches to meet different design requirements, while still considering the communication requirements of the pattern. The pattern options are described in the following sections: One VPC per environment Use a centralized application layer firewall Microservices zero trust distributed architecture One VPC per environment The common reasons to consider the one-VPC-per-environment option are as follows: The cloud environment requires network-level separation of the VPC networks and resources, in alignment with your organization's resource hierarchy design. If administrative domain separation is required, it can also be combined with a separate project per environment. To centrally manage network resources in a common network and provide network isolation between the different environments, use a shared VPC for each environment that you have in Google Cloud, such as development, testing, and production. Scale requirements that might need to go beyond the VPC quotas for a single VPC or project. As illustrated in the following diagram, the one-VPC-per-environment design lets each VPC integrate directly with the on-premises environment or other cloud environments using VPNs, or a Cloud Interconnect with multiple VLAN attachments. The pattern displayed in the preceding diagram can be applied on a landing zone hub-and-spoke network topology. In that topology, a single (or multiple) hybrid connection can be shared with all spoke VPCs. It's shared by using a transit VPC to terminate both the hybrid connectivity and the other spoke VPCs. You can also expand this design by adding NVA with next-generation firewall (NGFW) inspection capabilities at the transit VPC, as described in the next section, "Use a centralized application layer firewall." Use a centralized application layer firewall If your technical requirements mandate considering application layer (Layer 7) and deep packet inspection with advanced firewalling capabilities that exceed the capabilities of Cloud Next Generation Firewall, you can use an NGFW appliance hosted in an NVA. However, that NVA must meet the security needs of your organization. To implement these mechanisms, you can extend the topology to pass all cross-environment traffic through a centralized NVA firewall, as shown in the following diagram. You can apply the pattern in the following diagram on the landing zone design by using a hub-and-spoke topology with centralized appliances: As shown in the preceding diagram, The NVA acts as the perimeter security layer and serves as the foundation for enabling inline traffic inspection. It also enforces strict access control policies. To inspect both east-west and north-south traffic flows, the design of a centralized NVA might include multiple segments with different levels of security access controls. Microservices zero trust distributed architecture When containerized applications are used, the microservices zero trust distributed architecture discussed in the mirrored pattern section is also applicable to this architecture pattern. The key difference between this pattern and the mirrored pattern is that the communication model between workloads in Google Cloud and other environments can be initiated from either side. 
Traffic must be controlled in a fine-grained way, based on the application and security requirements, by using Service Mesh. Meshed pattern best practices Before you do anything else, decide on your resource hierarchy design, and the design required to support any project and VPC. Doing so can help you select the optimal networking architecture that aligns with the structure of your Google Cloud projects. Use a zero trust distributed architecture when using Kubernetes within your private computing environment and Google Cloud. When you use centralized NVAs in your design, you should define multiple segments with different levels of security access controls and traffic inspection policies. Base these controls and policies on the security requirements of your applications. When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by the Google Cloud security vendor that supplies your NVAs. To provide increased privacy, data integrity, and a controlled communication model, expose applications through APIs using API gateways, like Apigee and Apigee hybrid with end-to-end mTLS. You can also use a shared VPC with Apigee in the same organization resource. If the design of your solution requires exposing a Google Cloud-based application to the public internet, consider the design recommendations discussed in Networking for internet-facing application delivery. To help protect Google Cloud services in your projects, and to help mitigate the risk of data exfiltration, use VPC Service Controls to specify service perimeters at the project or VPC network level. Also, you can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls. Review the general best practices for hybrid and multicloud networking patterns. If you intend to enforce stricter isolation and more fine-grained access between your applications hosted in Google Cloud, and in other environments, consider using one of the gated patterns that are discussed in the other documents in this series. Previous arrow_back Mirrored pattern Next Gated patterns arrow_forward Send feedback \ No newline at end of file diff --git a/Microsoft_Entra_ID_B2B_user_provisioning_and_single_sign-on.txt b/Microsoft_Entra_ID_B2B_user_provisioning_and_single_sign-on.txt new file mode 100644 index 0000000000000000000000000000000000000000..98045f58f4b9c81e798735a77152b5d12fd36e41 --- /dev/null +++ b/Microsoft_Entra_ID_B2B_user_provisioning_and_single_sign-on.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/azure-ad-b2b-user-provisioning-and-sso +Date Scraped: 2025-02-23T11:55:34.758Z + +Content: +Home Docs Cloud Architecture Center Send feedback Microsoft Entra ID (formerly Azure AD) B2B user provisioning and single sign-on Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document shows you how you can extend Microsoft Entra ID (formerly Azure AD) user provisioning and single sign-on to enable single sign-on (SSO) for Microsoft Entra ID B2B collaboration users.
The document assumes that you use Microsoft Office 365 or Microsoft Entra ID in your organization and that you've already configured Microsoft Entra ID user provisioning and single sign-on as in the following diagram. In this diagram, users from external identity providers (IdPs) and from other Microsoft Entra ID tenants sign on to the Microsoft Entra ID tenant through B2B sign-on. Objectives Extend the Microsoft Entra ID user provisioning configuration to cover Microsoft Entra B2B guest users. Extend the Microsoft Entra ID SSO configuration to cover Microsoft Entra B2B guest users. Configure Cloud Identity to limit session lengths for guest users. Before you begin Make sure you've set up Microsoft Entra ID user provisioning and single sign-on. Note: This document refers to the Google Cloud/Google Workspace Connector by Microsoft gallery app from the Microsoft Azure marketplace. This app is a Microsoft product and is not maintained or supported by Google. Microsoft Entra B2B guest users Microsoft Entra ID lets you invite external users as guests to your Microsoft Entra ID tenant. When you invite an external user, Microsoft Entra ID creates a guest user account in your tenant. These guest user accounts differ from regular Microsoft Entra ID user accounts in multiple ways: Guest users don't have a password. To sign on, guest users are automatically redirected to their home tenant or to the external identity provider (IdP) that they've been invited from. The user principal name (UPN) of the guest user account uses a prefix derived from the invitee's email address, combined with the tenant's initial domain—for example: prefix#EXT#@tenant.onmicrosoft.com. If you invite a user from a different Microsoft Entra ID tenant and the user is later deleted in its home tenant, then the guest user account remains active in your Microsoft Entra ID tenant. These differences affect the way you configure user provisioning and single sign-on: Because onmicrosoft.com is a Microsoft-owned DNS domain, you cannot add tenant.onmicrosoft.com as a secondary domain to your Cloud Identity or Google Workspace account. This caveat means that you cannot use the guest user's UPN as primary email address when provisioning the user to Cloud Identity or Google Workspace. To provision guest users to Cloud Identity or Google Workspace, you must set up a mapping that transforms the guest user's UPN into a domain used by your Cloud Identity or Google Workspace account. In this document, you set up a UPN mapping as indicated in the following table. Original UPN in Microsoft Entra ID Primary email address in Cloud Identity or Google Workspace Regular user alice@example.com alice@example.com Microsoft Entra ID guest charlie@altostrat.com charlie_altostrat.com@example.com External guest user@hotmail.com user_hotmail.com@example.com Note: The primary email address used for guest users must use the primary domain of your Cloud Identity or Google Workspace account. When a user is deleted in its home tenant, Microsoft Entra ID won't suspend the corresponding user in Cloud Identity or Google Workspace. This poses a security risk: Although any attempts to use single sign-on will fail for such a user, existing browser sessions and refresh tokens (including those used by the Google Cloud CLI) might remain active for days or weeks, allowing the user to continue accessing resources. 
Using the approach presented in this document, you can mitigate this risk by provisioning guest users to a dedicated organizational unit in Cloud Identity or Google Workspace, and by applying a policy that restricts the session length to 8 hours. The policy ensures that browser sessions and existing refresh tokens are invalidated at most 8 hours after the user has been deleted in its home tenant, effectively revoking all access. The user in Cloud Identity or Google Workspace stays active, however, until you delete the guest user from your Microsoft Entra ID account. Prepare your Cloud Identity or Google Workspace account Create an organizational unit in your Cloud Identity or Google Workspace account that all guest users will be provisioned to. Open the Admin Console and sign in using the super-admin user created when you signed up for Cloud Identity or Google Workspace. In the menu, go to Directory > Organizational units. Click Create organizational unit and provide a name and description for the OU: Name of organizational unit: guests Description: Microsoft Entra B2B guest users Click Create. Apply a policy to the organizational unit that limits the session length to 8 hours. The session length not only applies to browser sessions, but also restricts the lifetime of OAuth refresh tokens. In the Admin Console, go to Security > Access and data control > Google Cloud session control. Select the organizational unit guests and apply the following settings: Reauthentication policy: Require reauthentication Reauthentication frequency: 8 hours. This duration reflects the maximum amount of time a guest user might still be able to access Google Cloud resources after it has been suspended in Microsoft Entra ID. Reauthentication method: Password. This setting ensures that users have to re-authenticate by using Microsoft Entra ID after a session has expired. Click Override. Note: The configuration change can take up to 24 hours to take effect. Configure Microsoft Entra ID provisioning You are now ready to adjust your existing Microsoft Entra ID configuration to support provisioning of B2B guest users. In the Azure portal, go to Microsoft Entra ID > Enterprise applications. Select the enterprise application Google Cloud (Provisioning), which you use for user provisioning. Click Manage > Provisioning. Click Edit provisioning. Under Mappings, click Provision Microsoft Entra ID Users. Select the row userPrincipalName. In the Edit Attribute dialog, apply the following changes: Mapping type: Change from Direct to Expression. Expression: Replace([originalUserPrincipalName], "#EXT#@TENANT_DOMAIN", , , "@PRIMARY_DOMAIN", , ) Replace the following: TENANT_DOMAIN: the .onmicrosoft.com domain of your Microsoft Entra ID tenant, such as tenant.onmicrosoft.com. PRIMARY_DOMAIN: the primary domain name used by your Cloud Identity or Google Workspace account, such as example.org. Click OK. Select Add new mapping. In the Edit Attribute dialog, configure the following settings: Mapping type: Expression. Expression: IIF(Instr([originalUserPrincipalName], "#EXT#", , )="0", "/", "/guests") Target attribute: OrgUnitPath Click OK. Click Save. Click Yes to confirm that saving changes will result in users and groups being resynchronized. Close the Attribute Mapping dialog. 
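To make the effect of the two mapping expressions that you configured above concrete, the following sketch reproduces the same transformations outside of Microsoft Entra ID. It assumes the placeholder domains tenant.onmicrosoft.com (TENANT_DOMAIN) and example.com (PRIMARY_DOMAIN) and the guest user from the earlier table; regular member UPNs don't contain the #EXT# marker, so they keep their existing address and the root organizational unit.

# Guest UPN as it appears in Microsoft Entra ID.
UPN='charlie_altostrat.com#EXT#@tenant.onmicrosoft.com'

# Equivalent of Replace([originalUserPrincipalName], "#EXT#@TENANT_DOMAIN", , , "@PRIMARY_DOMAIN", , )
printf '%s\n' "$UPN" | sed 's/#EXT#@tenant\.onmicrosoft\.com$/@example.com/'
# Output: charlie_altostrat.com@example.com

# Equivalent of IIF(Instr([originalUserPrincipalName], "#EXT#", , )="0", "/", "/guests")
case "$UPN" in
  *'#EXT#'*) echo '/guests' ;;  # guest users are placed in the guests organizational unit
  *)         echo '/'       ;;  # regular members stay at the root of the OU tree
esac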
Configure Microsoft Entra ID for single sign-on To ensure that guest users can authenticate by using single sign-on, you now extend your existing Microsoft Entra ID configuration to enable single sign-on for guests: In the Azure portal, go to Microsoft Entra ID > Enterprise applications. Select the Google Cloud enterprise application, which you use for single sign-on. Click Manage > Single sign-on. On the ballot screen, click the SAML card. On the User Attributes & Claims card, click Edit. Select the row labeled Unique User Identifier (Name ID). Select Claim conditions. Add a conditional claim for external guests: User type: External guests Source: Transformation Transformation: RegexReplace() Parameter 1: Attribute Attribute: user.userprincipalname Regex pattern: (?'username'^.*?)#EXT#@(?i).*\.onmicrosoft\.com$ Replacement pattern: {username}@PRIMARY_DOMAIN, replacing PRIMARY_DOMAIN with the primary domain name used by your Cloud Identity or Google Workspace account. Click Add. Add a conditional claim for Microsoft Entra ID guests from different tenants: User type: Microsoft Entra guests Source: Transformation Transformation: RegexReplace() Parameter 1: Attribute Attribute: user.localuserprincipalname Note: Make sure to select user.localuserprincipalname instead of user.userprincipalname Regex pattern: (?'username'^.*?)#EXT#@(?i).*\.onmicrosoft\.com$ Replacement pattern: {username}@PRIMARY_DOMAIN, replacing PRIMARY_DOMAIN with the primary domain name used by your Cloud Identity or Google Workspace account. Click Add. Add a conditional claim for regular Microsoft Entra ID users: User type: Members Source: Attribute Value: user.userprincipalname Click Save. Test single sign-on To verify that the configuration works correctly, you need three test users in your Microsoft Entra ID tenant: A regular Microsoft Entra ID user. A Microsoft Entra ID guest user. This is a user that has been invited from a different Microsoft Entra ID tenant. An external guest user. This is a user that has been invited using a non–Microsoft Entra ID email address such as a @hotmail.com address. For each user, you perform the following test: Open a new incognito browser window and go to the https://console.cloud.google.com/. In the Google Sign-In page that appears, enter the email address of the user as it appears in the Primary email address in Cloud Identity or Google Workspace column of the earlier table. Refer to that table to see how the email address in Cloud Identity or Google Workspace derives from the user principal name. You are redirected to Microsoft Entra ID where you see another sign-in prompt. At the sign-in prompt, enter the UPN of the user and follow the instructions to authenticate. After successful authentication, Microsoft Entra ID redirects you back to Google Sign-In. Because this is the first time you've signed in using this user, you are asked to accept the Google Terms of Service and privacy policy. If you agree to the terms, click Accept. You are redirected to the Google Cloud console, which asks you to confirm preferences and accept the Google Cloud Terms of Service. If you agree to the terms, choose Yes, and then click Agree and continue. Click the avatar icon, and then click Sign out. You are redirected to a Microsoft Entra ID page confirming that you have been successfully signed out. What's next Learn more about federating Google Cloud with Microsoft Entra ID. 
Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external IdP. Send feedback \ No newline at end of file diff --git a/Microsoft_Entra_ID_My_Apps_portal_integration.txt b/Microsoft_Entra_ID_My_Apps_portal_integration.txt new file mode 100644 index 0000000000000000000000000000000000000000..d517f94cc80d62e5b4bae1ab6966ec2a6ea55ee2 --- /dev/null +++ b/Microsoft_Entra_ID_My_Apps_portal_integration.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/integrating-google-services-and-apps-with-azure-ad-portal +Date Scraped: 2025-02-23T11:55:37.006Z + +Content: +Home Docs Cloud Architecture Center Send feedback Microsoft My Apps portal integration Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document shows how to add Google services and Identity-Aware Proxy (IAP) web-secured web applications to the Microsoft My Apps portal and how to enable automatic sign-on for these applications. The document assumes that you have federated your Cloud Identity or Google Workspace account with Microsoft Entra ID by configuring Microsoft Entra ID for single sign-on. Before you begin Make sure you've completed the steps to federate your Cloud Identity or Google Workspace account with Microsoft Entra ID. Initiate single sign-on from a portal To support authenticating with an external identity provider (IdP) like Azure AD, Cloud Identity and Google Workspace rely on service provider–initiated sign-on. With this type of sign-on, authentication starts at the service provider, which then redirects you to the IdP—for example: You access a Google service such as the Google Cloud console or Looker Studio by opening a URL or bookmark. Google and its services take the role as the service provider in this scenario. The Google Sign-in screen appears, prompting you to enter the email address of your Google identity. You're redirected to Microsoft Entra ID, which serves as the IdP. You sign in to Microsoft Entra ID. Microsoft Entra ID redirects you back to the Google service that you originally attempted to access. A benefit of service provider–initiated sign-on is that users can directly access Google services by opening a link or using a bookmark. If your organization uses Microsoft Entra ID, then you can use the Microsoft My Apps portal for this purpose. Not being forced to open applications through a portal is convenient for power users who bookmark specific sites or might memorize certain URLs. For other users, it can still be valuable to expose the links to relevant applications in a portal. However, adding a link such as https://lookerstudio.google.com to the Microsoft My Apps portal reveals a shortcoming of the service provider–initiated sign-on process. Although a user that clicks the link in the portal has a valid Microsoft Entra ID session, they might still see the Google Sign-in screen and are prompted to enter their email address. This seemingly redundant sign-in prompt is a result of Google Sign-In not being made aware of the existing Microsoft Entra ID session. You can avoid the additional Google Sign-in prompt by using special URLs when configuring the Microsoft My Apps portal. These URLs embed a hint about which Cloud Identity or Google Workspace account users are expected to use. The extra information enables authentication to be performed silently, resulting in an improved user experience. 
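The links listed in the next section all follow the same pattern: a ServiceLogin URL that embeds your primary domain as the account hint and the target service as the continue parameter. As a small illustration (example.com and Looker Studio are placeholder values), such a link can be built like this:

DOMAIN='example.com'                      # primary domain of your Cloud Identity or Google Workspace account
TARGET='https://lookerstudio.google.com'  # Google service to open after sign-in
echo "https://www.google.com/a/${DOMAIN}/ServiceLogin?continue=${TARGET}"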
Add links to the Microsoft My Apps portal The following table lists common Google services, the corresponding name in Microsoft Entra ID, and the link that you can use to implement SSO as outlined in the previous section. Google service URL Logo Google Cloud console https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://console.cloud.google.com Google Docs https://docs.google.com/a/DOMAIN Google Sheets https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://sheets.google.com Google Sites https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://slides.google.com Google Drive https://drive.google.com/a/DOMAIN Gmail https://mail.google.com/a/DOMAIN Google Groups https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://groups.google.com Google Keep https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://keep.google.com Looker Studio https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://lookerstudio.google.com YouTube https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://www.youtube.com/ For each Google service that you want to add to the Microsoft My Apps portal, create a new enterprise application: In the Azure portal, go to Microsoft Entra ID > Enterprise applications. Click New application. Click Create your own application and enter the following: What's the name of your app: Enter the name of the Google service as indicated in the preceding table. What are you looking to do with your application: Select Integrate any other application you don't find in the gallery (Non-gallery). Click Create. Select Properties. Change the logo to the file linked in the table. Click Save. In the menu on the left, select Single sign-on. Select Linked. Enter the URL listed in the table—for example, http://docs.google.com/a/DOMAIN. Replace DOMAIN with the primary domain name of your Cloud Identity or Google Workspace account such as example.com. Click Save. Notice that you don't have to configure SAML-based SSO in the application. All single sign-on operations continue to be handled by the application that you previously created for single sign-on. To assign the application to users, do the following: In the menu on the left, select Properties. Set User assignment required to Yes. Click Save. In the menu on the left, click Manage > Users and groups. Click Add user. Select Users. Select the users or groups that you want to provision. If you select a group, all members of the group are provisioned. Click Select. Click Assign. It might take several minutes for a link to show up in the My Apps portal. Control access Assigning users and groups to individual applications in Microsoft Entra ID controls the visibility of the link, but it doesn't control access to a service. A service that isn't visible on a user's My Apps portal might still be accessible if the user opens the right URL. To control which users and groups are allowed to access a service, you must also turn the service on or off in the Google Admin Console. You can simplify the process of controlling visibility and access by using groups: For each Google service, create a security group in Microsoft Entra ID—for example, Looker Studio users and Google Drive users. Assign the groups to the appropriate Microsoft Entra ID enterprise application as outlined in the previous section. For example, assign Looker Studio users to the Looker Studio application and Google Drive users to the Google Drive application. Configure the groups to be provisioned to your Cloud Identity or Google Workspace account. 
In the Admin Console, turn on the respective service for each group. For example, turn on Looker Studio for the Looker Studio users group and Google Drive for the Google Drive users group. Turn the service off for everybody else. By adding and removing members to these groups, you now control both access and visibility in a single step. IAP-protected web applications If you're using IAP to protect your web applications, you can add links to these applications to the Microsoft My Apps portal and enable a single sign-on experience for them. Add links to the Microsoft My Apps portal The process for adding a link to the Microsoft My Apps portal is the same as for Google services, but you must use the URL of your IAP-protected web application. As you can with Google services, you can prevent users from seeing a Google sign-in screen after following a link to a IAP-protected web application in the portal, but the process is different. Instead of using a special URL, you configure IAP to always use a specific Cloud Identity or Google Workspace account for authentication: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell Initialize an environment variable: PRIMARY_DOMAIN=primary-domain Replace primary-domain with the primary domain of your Cloud Identity or Google Workspace account—for example, example.com. Create a temporary settings file that instructs IAP to always use the primary domain of your Cloud Identity or Google Workspace account for authentication: cat << EOF > iap-settings.yaml accessSettings: oauthSettings: loginHint: "$PRIMARY_DOMAIN" EOF Apply the setting to all IAP web resources in the project: gcloud iap settings set iap-settings.yaml --resource-type=iap_web Remove the temporary settings file: rm iap-settings.yaml Note: Configuring IAP to always use the primary domain of your Cloud Identity or Google Workspace account for authentication might prevent users from other Cloud Identity or Google Workspace accounts from being able to access the application. Control access Assigning users and groups to individual applications in Microsoft Entra ID controls the visibility of the link to your IAP-protected web application, but does not control access to the application. To control access, you also have to customize the Identity and Access Management (IAM) policy of the IAP-protected web application. As with Google services, you can simplify the process of controlling visibility and access by using groups: For each application, create a security group in Microsoft Entra ID—for example, Payroll application users. Assign the group to the respective Microsoft Entra ID enterprise application. Configure the group to be provisioned to your Cloud Identity or Google Workspace account. Update the IAM policy of the IAP-protected web application to grant the IAP-Secured Web App User role to the Payroll application users group while disallowing access for other users By adding and removing members to the Payroll application users group, you control both access and visibility in a single step. What's next Learn more about federating Google Cloud with Microsoft Entra ID. Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external IdP. Read more about Identity-Aware Proxy. 
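As one way to apply the IAM step described in the Control access section above, if the IAP-protected application is served through a backend service, you can add the role binding with the gcloud CLI; the backend service name and the group address below are placeholders for your own resources.

gcloud iap web add-iam-policy-binding \
    --resource-type=backend-services \
    --service=payroll-backend \
    --member='group:payroll-users@example.com' \
    --role='roles/iap.httpsResourceAccessor'  # IAP-Secured Web App User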
Send feedback \ No newline at end of file diff --git a/Microsoft_Entra_ID_user_provisioning_and_single_sign-on.txt b/Microsoft_Entra_ID_user_provisioning_and_single_sign-on.txt new file mode 100644 index 0000000000000000000000000000000000000000..f17bcbef043adcf1531ecced7f60a09442d46a41 --- /dev/null +++ b/Microsoft_Entra_ID_user_provisioning_and_single_sign-on.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/federating-gcp-with-azure-ad-configuring-provisioning-and-single-sign-on +Date Scraped: 2025-02-23T11:55:31.849Z + +Content: +Home Docs Cloud Architecture Center Send feedback Microsoft Entra ID (formerly Azure AD) user provisioning and single sign-on Stay organized with collections Save and categorize content based on your preferences. This document shows you how to set up user provisioning and single sign-on between a Microsoft Entra ID (formerly Azure AD) tenant and your Cloud Identity or Google Workspace account. The document assumes that you already use Microsoft Office 365 or Microsoft Entra ID in your organization and want to use Microsoft Entra ID for allowing users to authenticate with Google Cloud. Microsoft Entra ID itself might be connected to an on-premises Active Directory and might use Entra ID federation, pass-through authentication, or password hash synchronization. Objectives Set up Microsoft Entra ID to automatically provision users and, optionally, groups to Cloud Identity or Google Workspace. Configure single sign-on to allow users to sign in to Google Cloud by using a Microsoft Entra ID user account or a user that has been provisioned from Active Directory to Microsoft Entra ID. Costs If you are using the free edition of Cloud Identity, setting up federation with Microsoft Entra ID won't use any billable components of Google Cloud. Check the Microsoft Entra ID pricing page for any fees that might apply to using Microsoft Entra ID. Before you begin Make sure you understand the differences between connecting Google Cloud to Microsoft Entra ID versus directly connecting Google Cloud to Active Directory. Decide how you want to map identities, groups, and domains between Microsoft Entra ID and Cloud Identity or Google Workspace. Specifically, answer the following questions: Do you plan to use email addresses or User Principal Names (UPNs) as common identifiers for users? Do you plan to provision groups? If so, do you plan to map groups by email address or by name? Do you plan to provision all users to Google Cloud or only a select subset of users? Before connecting your production Microsoft Entra ID tenant to Google Cloud, consider using a Microsoft Entra ID test tenant for setting up and testing user provisioning. Sign up for Cloud Identity if you don't have an account already. If you're using the free edition of Cloud Identity and intend to provision more than 50 users, request an increase of the total number of free Cloud Identity users through your support contact. If you suspect that any of the domains you plan to use for Cloud Identity could have been used by employees to register consumer accounts, consider migrating these user accounts first. For more details, see Assessing existing user accounts. Note: This document refers to the Google Cloud/G Suite Connector by Microsoft gallery app from the Microsoft Azure marketplace. This app is a Microsoft product and is not maintained or supported by Google. 
Prepare your Cloud Identity or Google Workspace account Create a user for Microsoft Entra ID To let Microsoft Entra ID access your Cloud Identity or Google Workspace account, you must create a user for Microsoft Entra ID in your Cloud Identity or Google Workspace account. The Microsoft Entra ID user is only intended for automated provisioning. Therefore, it's best to keep it separate from other user accounts by placing it in a separate organizational unit (OU). Using a separate OU also ensures that you can later disable single sign-on for the Microsoft Entra ID user. To create a new OU, do the following: Open the Admin Console and log in using the super-admin user created when you signed up for Cloud Identity or Google Workspace. In the menu, go to Directory > Organizational units. Click Create organizational unit and provide an name and description for the OU: Name: Automation Description: Automation users Click Create. Create a user account for Microsoft Entra ID and place it in the Automation OU: In the menu, go to Directory > Users and click Add new user to create a user. Provide an appropriate name and email address such as the following: First Name: Microsoft Entra ID Last Name: Provisioning Primary email: azuread-provisioning Keep the primary domain for the email address. Click Manage user's password, organizational unit, and profile photo and configure the following settings: Organizational unit: Select the Automation OU that you created previously. Password: Select Create password and enter a password. Ask for a password change at the next sign-in: Disabled. Click Add new user. Click Done. Assign privileges to Microsoft Entra ID To let Microsoft Entra ID create, list, and suspend users and groups in your Cloud Identity or Google Workspace account, you must grant the azuread-provisioning user additional privileges as follows: To allow Microsoft Entra ID to manage all users, including delegated administrators and super-admin users, you must make the azuread-provisioning user a super-admin. To allow Microsoft Entra ID to manage non-admin users only, it's sufficient to make the azuread-provisioning user a delegated administrator. As a delegated administrator, Microsoft Entra ID can't manage other delegated administrators or super-admin users. Super-admin Note: The super-admin role grants the user full access to Cloud Identity, Google Workspace, and Google Cloud resources. To make the azuread-provisioning user a super-admin, do the following: Locate the newly created user in the list and click the user's name to open their account page. Under Admin roles and privileges, click Assign roles. Enable the super-admin role. Click Save. Delegated administrator To make the azuread-provisioning user a delegated administrator, create a new admin role and assign it to the user: In the menu, go to Account > Admin roles. Click Create new role. Provide a name and description for the role such as the following: Name: Microsoft Entra ID Description: Role for automated user and group provisioning Click Continue. On the next screen, scroll down to the section named Admin API privileges and set the following privileges to enabled: Organization Units > Read Users Groups Click Continue. Click Create role. Click Assign users. Select the azuread-provisioning user and click Assign role. Warning: To protect the user against credential theft and malicious use, we recommend that you enable 2-step verification for the user. 
For more details on how to protect super-admin users, see Security best practices for administrator accounts. Register domains In Cloud Identity and Google Workspace, users and groups are identified by email address. The domains used by these email addresses must be registered and verified first. Prepare a list of DNS domains that you need to register: If you plan to map users by UPN, include all domains used by UPNs. If in doubt, include all custom domains of your Microsoft Entra ID tenant. If you plan to map users by email address, include all domains used in email addresses. The list of domains might be different from the list of custom domains of your Microsoft Entra ID tenant. If you plan to provision groups, amend the list of DNS domains: If you plan to map groups by email address, include all domains used in group email addresses. If in doubt, include all custom domains of your Microsoft Entra ID tenant. If you plan to map groups by name, include a dedicated subdomain like groups.PRIMARY_DOMAIN, where PRIMARY_DOMAIN is the primary domain name of your Cloud Identity or Google Workspace account. Now that you've identified the list of DNS domains, you can register any missing domains. For each domain on the list not yet registered, perform the following steps: In the Admin Console, go to Account > Domains > Manage domains. Click Add a domain. Enter the domain name and select Secondary domain. Click Add domain and start verification and follow the instructions to verify ownership of the domain. Configure Microsoft Entra ID provisioning Create an enterprise application You are ready to connect Microsoft Entra ID to your Cloud Identity or Google Workspace account by setting up the Google Cloud/G Suite Connector by Microsoft gallery app from the Microsoft Azure marketplace. Note: This app is a Microsoft product and is not maintained or supported by Google. The gallery app can be configured to handle both user provisioning and single sign-on. In this document, you use two instances of the gallery app—one for user provisioning and one for single sign-on. First, create an instance of the gallery app to handle user provisioning: Open the Azure portal and sign in as a user with global administrator privileges. Select Microsoft Entra ID > Enterprise applications. Click New application. Search for Google Cloud, and then click the Google Cloud/G Suite Connector by Microsoft item in the result list. Set the name of the application to Google Cloud (Provisioning). Click Create. Adding the application may take a few seconds, you should then be redirected to a page titled Google Cloud (Provisioning) - Overview. In the menu on the left, click Manage > Properties: Set Enabled for users to sign-in to No. Set Assignment required to No. Set Visible to users to No. Click Save. In the menu on the left, click Manage > Provisioning: Click Get started. Change Provisioning Mode to Automatic. Click Admin Credentials > Authorize. Sign in using the azuread-provisioning@DOMAIN user you created earlier, where DOMAIN is the primary domain of your Cloud Identity or Google Workspace account. Note: If you configured your Cloud Identity or Google Workspace account to restrict access for third-party apps, signing in might fail with the error message Access blocked: Authorization Error. To resolve this error, grant the Google Cloud Connector app access by adding the client ID 283861851054-1n5jlu9rm93njt6kh3k5k7pqv73q7d8d.apps.googleusercontent.com to your list of trusted apps. 
Because this is the first time you've signed on using this user, you are asked to accept the Google Terms of Service and privacy policy. If you agree to the terms, click I understand. Confirm access to the Cloud Identity API by clicking Allow. Click Test Connection to verify that Microsoft Entra ID can successfully authenticate with Cloud Identity or Google Workspace. Click Save. Configure user provisioning The right way to configure user provisioning depends on whether you intend to map users by email address or by UPN. UPN Under Mappings, click Provision Entra ID Users. For the attributes surname and givenName, do the following: Click Edit. Set Default value if null to _. Click OK. Click Save. Confirm that saving changes will result in users and groups being resynchronized by clicking Yes. Click X to close the Attribute Mapping dialog. UPN: domain substitution Under Mappings, click Provision Entra ID Users. For the attribute userPrincipalName, do the following: Click Edit. Configure the following mapping: Mapping type: Expression Expression: Replace([userPrincipalName], "@DOMAIN", , , "@SUBSTITUTE_DOMAIN", , ) Replace the following: DOMAIN: domain name you want to replace SUBSTITUTE_DOMAIN domain name to use instead Click OK. For the attributes surname and givenName, do the following: Click Edit. Set Default value if null to _. Click OK. Click Save. Confirm that saving changes will result in users and groups being resynchronized by clicking Yes. Click X to close the Attribute Mapping dialog. Email address Under Mappings, click Provision Entra ID Users. For the attribute userPrincipalName, do the following: Click Edit. Set Source attribute to mail. Click OK. For the attributes surname and givenName, do the following: Click Edit. Set Default value if null to _. Click OK. Click Save. Confirm that saving changes will result in users and groups being resynchronized by clicking Yes. Click X to close the Attribute Mapping dialog. You must configure mappings for primaryEmail, name.familyName, name.givenName, and suspended. All other attribute mappings are optional. When you configure additional attribute mappings, note the following: The Google Cloud/G Suite Connector by Microsoft gallery currently doesn't let you assign email aliases. The Google Cloud/G Suite Connector by Microsoft gallery currently doesn't let you assign licenses to users. As a workaround, consider setting up automatic licensing for organizational units. To assign a user to an organization unit, add a mapping for OrgUnitPath. The path must begin with a / character and must refer to an organizational unit that already exists, for example /employees/engineering. Configure group provisioning The right way to configure group provisioning depends on whether your groups are mail-enabled. If groups aren't mail-enabled, or if groups use an email address ending with "onmicrosoft.com", you can derive an email address from the group's name. No group mapping Under Mappings, click Provision Entra ID Groups. Set Enabled to No. Click Save. Confirm that saving changes will result in users and groups being resynchronized by clicking Yes. Click X to close the Attribute Mapping dialog. Name Under Mappings section, click Provision Entra ID Groups. For the attribute mail, do the following: Click Edit. Configure the following settings: Mapping type: Expression. Expression: Join("@", NormalizeDiacritics(StripSpaces([displayName])), "GROUPS_DOMAIN"). 
Replace GROUPS_DOMAIN with the domain that all group email addresses are supposed to use—for example, groups.example.com. Target attribute: email. Click OK. Click Save. Confirm that saving changes will result in users and groups being resynchronized by clicking Yes. Click X to close the Attribute Mapping dialog. Email address If you map groups by email address, keep the default settings. Configure user assignment If you know that only a certain subset of users need access to Google Cloud, you can optionally restrict the set of users to be provisioned by assigning the enterprise app to specific users or groups of users. If you want all users to be provisioned, you can skip the following steps. In the menu on the left, click Manage > Users and groups. Add the users or groups you want to provision. If you select a group, all members of this group are automatically provisioned. Click Assign. Enable automatic provisioning The next step is to configure Microsoft Entra ID to automatically provision users to Cloud Identity or Google Workspace: In the menu on the left, click Manage > Provisioning. Select Edit provisioning. Set Provisioning Status to On. Under Settings, set Scope to one of the following: Sync only assigned users and groups if you have configured user assignment. Sync all users and groups otherwise. If this box to set the scope isn't displayed, click Save and refresh the page. Click Save. Microsoft Entra ID starts an initial synchronization. Depending on the number of users and groups in the directory, this process can take several minutes or hours. You can refresh the browser page to see the status of the synchronization at the bottom of the page or select Audit Logs in the menu to see more details. After the initial synchronization has completed, Microsoft Entra ID will periodically propagate updates from Microsoft Entra ID to your Cloud Identity or Google Workspace account. For further details on how Microsoft Entra ID handles user and group modifications, see Mapping the user lifecycle and Mapping the group lifecycle. Troubleshooting If the synchronization doesn't start within five minutes, you can force it to start by doing the following: Click Edit provisioning. Set Provisioning Status to Off. Click Save. Set Provisioning Status to On. Click Save. Close the provisioning dialog. Click Restart provisioning. If synchronization still doesn't start, click Test Connection to verify that your credentials have been saved successfully. Configure Microsoft Entra ID for single sign-on Although all relevant Microsoft Entra ID users are now automatically being provisioned to Cloud Identity or Google Workspace, you cannot use these users to sign in yet. To allow users to sign in, you still need to configure single sign-on. Create a SAML profile To configure single sign-on with Microsoft Entra ID, you first create a SAML profile in your Cloud Identity or Google Workspace account. The SAML profile contains the settings related to your Microsoft Entra ID tenant, including its URL and signing certificate. You later assign the SAML profile to certain groups or organizational units. To create a new SAML profile in your Cloud Identity or Google Workspace account, do the following: In the Admin Console, go to SSO with third-party IdP. Go to SSO with third-party IdP Click Third-party SSO profiles > Add SAML profile. 
On the SAML SSO profile page, enter the following settings: Name: Entra ID IDP entity ID: Leave blank Sign-in page URL: Leave blank Sign-out page URL: Leave blank Change password URL: Leave blank Don't upload a verification certificate yet. Click Save. The SAML SSO profile page that appears contains two URLs: Entity ID ACS URL You need these URLs in the next section when you configure Microsoft Entra ID. Create a Microsoft Entra ID application Create a second enterprise application to handle single sign-on: In the Azure portal, go to Microsoft Entra ID > Enterprise applications. Click New application. Search for Google Cloud, and then click Google Cloud/G Suite Connector by Microsoft in the result list. Set the name of the application to Google Cloud. Click Create. Adding the application may take a few seconds. You are then redirected to a page titled Google Cloud - Overview. In the menu on the left, click Manage > Properties. Set Enabled for users to sign-in to Yes. Set Assignment required to Yes unless you want to allow all users to use single sign-on. Click Save. Configure user assignment If you know that only a certain subset of users need access to Google Cloud, you can optionally restrict the set of users that are allowed to sign in by assigning the enterprise app to specific users or groups of users. If you set Assignment required to No, you can skip the following steps. In the menu on the left, click Manage > Users and groups. Add the users or groups you want to allow single sign-on for. Click Assign. Enable single sign-on To enable Cloud Identity to use Microsoft Entra ID for authentication, you must adjust some settings: In the menu on the left, click Manage > Single sign-on. On the single sign-on method selection screen, click the SAML card. On the Basic SAML Configuration card, click Edit. In the Basic SAML Configuration dialog, enter the following settings: Identifier (Entity ID): Add the Entity ID from your SSO profile and set Default to enabled. Remove all other entries. Reply URL: Add the ACS URL from your SSO profile. Sign on URL: https://www.google.com/a/PRIMARY_DOMAIN/ServiceLogin?continue=https://console.cloud.google.com/ Replace PRIMARY_DOMAIN with the primary domain name used by your Cloud Identity or Google Workspace account. Click Save, and then dismiss the dialog by clicking X. On the SAML Signing Certificate card, find the entry labeled Certificate (Base 64) and click Download to download the certificate to your local computer. On the Set up Google Cloud card, you find two URLs: Login URL Microsoft Entra ID Identifier You need these URLs in the next section when you complete the SAML profile. The remaining steps differ depending on whether you map users by email address or by UPN. UPN On the Attributes & Claims card, click Edit. Delete all claims listed under Additional claims. You can delete records by clicking the … button and selecting Delete. Only the Unique User Identifier (Name ID) claim should remain. Dismiss the dialog by clicking X. UPN: domain substitution On the User Attributes & Claims card, click Edit. Delete all claims listed under Additional claims. You can delete records by clicking the … button and selecting Delete. Only the Unique User Identifier (Name ID) claim should remain. Click Unique User Identifier (Name ID) to change the claims mapping.
Set Source to Transformation and configure the following transformation: Transformation: ExtractMailPrefix() Parameter 1: user.userPrincipalName Select Add transformation and configure the following transformation: Transformation: Join() Separator: @ Parameter 2: Enter the substitute domain name. You must use the same substitute domain name for user provisioning and single sign-on. If the domain name isn't listed, you might need to verify it first. Click Add. Click Save. Dismiss the dialog by clicking X. Email address On the User Attributes & Claims card, click Edit. Select the row labeled Unique User Identifier (Name ID). Change Source attribute to user.mail. Click Save. Delete all claims listed under Additional claims. You can delete records by clicking the … button and selecting Delete. Dismiss the dialog by clicking X. Complete the SAML profile Complete the configuration of your SAML profile: Return to the Admin Console and go to Security > Authentication > SSO with third-party IdP. Open the Entra ID SAML profile that you created earlier. Click the IDP details section to edit the settings. Enter the following settings: IDP entity ID: Enter the Microsoft Entra Identifier from the Set up Google Cloud card in the Azure Portal. Sign-in page URL: Enter the Login URL from the Set up Google Cloud card in the Azure Portal. Sign-out page URL: https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0 Change password URL: https://account.activedirectory.windowsazure.com/changepassword.aspx Under Verification certificate, click Upload certificate, and then pick the token signing certificate that you downloaded previously. Click Save. The Microsoft Entra ID token signing certificate is valid for a limited amount of time, and you must rotate the certificate before it expires. For more information, see Rotate a single sign-on certificate later in this document. Your SAML profile is complete, but you still need to assign it. Assign the SAML profile Select the users that the new SAML profile should apply to: In the Admin Console, on the SSO with third-party IDPs page, click Manage SSO profile assignments > Manage. On the left pane, select the group or organizational unit to which you want to apply the SSO profile. To apply the profile to all users, select the root organizational unit. On the right pane, select Another SSO profile. In the menu, select the Entra ID - SAML SSO profile that you created earlier. Warning: Make sure you don't accidentally use the Microsoft - OIDC profile. Click Save. To assign the SAML profile to another group or organizational unit, repeat the steps above. Update the SSO settings for the Automation OU to disable single sign-on: On the left pane, select the Automation OU. On the right pane, select None. Click Override. Optional: Configure redirects for domain-specific service URLs When you link to the Google Cloud console from internal portals or documents, you can improve the user experience by using domain-specific service URLs. Unlike regular service URLs such as https://console.cloud.google.com/, domain-specific service URLs include the name of your primary domain. Unauthenticated users who click a link to a domain-specific service URL are immediately redirected to Entra ID instead of being shown a Google sign-in page first.
Examples of domain-specific service URLs include the following:
Google Cloud console: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://console.cloud.google.com
Google Docs: https://docs.google.com/a/DOMAIN
Google Sheets: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://sheets.google.com
Google Slides: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://slides.google.com
Google Drive: https://drive.google.com/a/DOMAIN
Gmail: https://mail.google.com/a/DOMAIN
Google Groups: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://groups.google.com
Google Keep: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://keep.google.com
Looker Studio: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://lookerstudio.google.com
YouTube: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://www.youtube.com/
To configure domain-specific service URLs so that they redirect to Entra ID, do the following: In the Admin Console, on the SSO with third-party IDPs page, click Domain-specific service URLs > Edit. Set Automatically redirect users to the third-party IdP in the following SSO profile to enabled. Set SSO profile to Entra ID. Click Save. Optional: Configure login challenges Google sign-in might ask users for additional verification when they sign in from unknown devices or when their sign-in attempt looks suspicious for other reasons. These login challenges help to improve security, and we recommend that you leave login challenges enabled. If you find that login challenges cause too much inconvenience, you can disable them by doing the following: In the Admin Console, go to Security > Authentication > Login challenges. In the left pane, select the organizational unit for which you want to disable login challenges. To disable login challenges for all users, select the root organizational unit. Under Settings for users signing in using other SSO profiles, select Don't ask users for additional verifications from Google. Click Save. Test single sign-on Now that you've completed the single sign-on configuration in both Microsoft Entra ID and Cloud Identity or Google Workspace, you can access Google Cloud in two ways: Through the list of apps in your Microsoft Office portal. Directly by opening https://console.cloud.google.com/. To check that the second option works as intended, run the following test: Pick a Microsoft Entra ID user that has been provisioned to Cloud Identity or Google Workspace and that doesn't have super-admin privileges assigned. Users with super-admin privileges always have to sign in using Google credentials and are therefore not suitable for testing single sign-on. Open a new browser window and go to https://console.cloud.google.com/. In the Google Sign-In page that appears, enter the email address of the user and click Next. If you use domain substitution, this address must be the email address with the substitution applied. You are redirected to Microsoft Entra ID and see another sign-in prompt. Enter the email address of the user (without domain substitution) and click Next. After entering your password, you are asked whether to stay signed in. For now, choose No. After successful authentication, Microsoft Entra ID should redirect you back to Google Sign-In. Because this is the first time you've signed in using this user, you are asked to accept the Google Terms of Service and privacy policy. If you agree to the terms, click I understand.
You are redirected to the Google Cloud console, which asks you to confirm preferences and accept the Google Cloud Terms of Service. If you agree to the terms, choose Yes and click Agree and continue. Click the avatar icon on the top left of the page, and then click Sign out. You are redirected to a Microsoft Entra ID page confirming that you have been successfully signed out. Keep in mind that users with super-admin privileges are exempted from single sign-on, so you can still use the Admin Console to verify or change settings. Rotate a single sign-on certificate The Microsoft Entra ID token signing certificate is valid for a limited period, and you must replace the certificate before it expires. To rotate a signing certificate, add an additional certificate to the Microsoft Entra ID application: In the Azure portal, go to Microsoft Entra ID > Enterprise applications and open the application that you created for single sign-on. In the menu on the left, click Manage > Single sign-on. On the SAML Signing Certificate card, click Edit. You see a list of one or more certificates. One certificate is marked as Active. Click New certificate. Keep the default signing settings and click Save. The certificate is added to the list of certificates and is marked as Inactive. Select the new certificate, click the … button, and then select Base64 certificate download. Keep the browser window open and don't close the dialog. To use the new certificate, do the following: Open a new browser tab or window. Open the Admin Console and go to SSO with third-party IdP. Open the Entra ID SAML profile. Click IDP details. Click Upload another certificate and select the new certificate that you downloaded previously. Click Save. Return to the Microsoft Entra ID portal and the SAML Signing Certificate dialog. Select the new certificate, click the … button, and then select Make certificate active. Click Yes to activate the certificate. Microsoft Entra ID now uses the new signing certificate. Test that SSO still works as expected. For more information, see Test single sign-on. To remove the old certificate, do the following: Return to the Admin Console and the Entra ID SAML profile. Click IDP details. Under Verification certificate, compare the expiry dates of your certificates to find the old certificate, and then click Delete. Click Save. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources. To disable single sign-on in your Cloud Identity or Google Workspace account, follow these steps: In the Admin Console, go to Manage SSO profile assignments. For each profile assignment, do the following: Open the profile. If you see an Inherit button, click Inherit. If you don't see an Inherit button, select None and click Save. Return to the SSO with third-party IDPs page and open the Entra ID SAML profile. Click Delete. You can remove single sign-on and provisioning settings in Microsoft Entra ID as follows: In the Azure portal, go to Microsoft Entra ID > Enterprise applications. From the list of applications, choose Google Cloud. In the menu on the left, click Manage > Single sign-on. Click Delete. Confirm the deletion by clicking Yes.
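If you used the UPN domain substitution or group name mapping options described earlier, it can be useful to double-check what identities the Microsoft Entra ID expressions will produce before you rely on them. The following Python sketch only approximates the behavior of Replace(), Join(), NormalizeDiacritics(), and StripSpaces(); it is not Microsoft's implementation, and the example values are assumptions.

```python
import unicodedata

def substitute_domain(upn: str, domain: str, substitute_domain: str) -> str:
    """Approximates Replace([userPrincipalName], "@DOMAIN", , , "@SUBSTITUTE_DOMAIN", , )."""
    return upn.replace("@" + domain, "@" + substitute_domain)

def group_email(display_name: str, groups_domain: str) -> str:
    """Approximates Join("@", NormalizeDiacritics(StripSpaces([displayName])), "GROUPS_DOMAIN")."""
    stripped = display_name.replace(" ", "")          # StripSpaces
    normalized = (unicodedata.normalize("NFKD", stripped)
                  .encode("ascii", "ignore")
                  .decode("ascii"))                    # NormalizeDiacritics (rough equivalent)
    return "@".join([normalized, groups_domain])       # Join

# Example values are placeholders.
print(substitute_domain("alice@example.onmicrosoft.com",
                        "example.onmicrosoft.com", "example.com"))
# alice@example.com
print(group_email("Führungs Team", "groups.example.com"))
# FuhrungsTeam@groups.example.com
```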
What's next Learn more about federating Google Cloud with Microsoft Entra ID Read about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external identity provider. Acquaint yourself with our best practices for managing super-admin accounts. Send feedback \ No newline at end of file diff --git a/Migrate_Amazon_EC2_to_Compute_Engine.txt b/Migrate_Amazon_EC2_to_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..1e6490a6700a386ff4cb2c414173e1a4870f22bc --- /dev/null +++ b/Migrate_Amazon_EC2_to_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-amazon-ec2-to-compute-engine +Date Scraped: 2025-02-23T11:51:54.629Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate from AWS to Google Cloud: Migrate from Amazon EC2 to Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC Google Cloud provides tools, products, guidance, and professional services to migrate virtual machines (VMs) along with their data from Amazon Elastic Compute Cloud (Amazon EC2) to Compute Engine. This document discusses how to design, implement, and validate a plan to migrate from Amazon EC2 to Compute Engine. The discussion in this document is intended for cloud administrators who want details about how to plan and implement a migration process. It's also intended for decision-makers who are evaluating the opportunity to migrate and who want to explore what migration might look like. This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents: Get started Migrate from Amazon EC2 to Compute Engine (this document) Migrate from Amazon S3 to Cloud Storage Migrate from Amazon EKS to Google Kubernetes Engine Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server Migrate from AWS Lambda to Cloud Run For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. Assess the source environment In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. 
You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. For more information about the assessment phase and these tasks, see Migrate to Google Cloud: Assess and discover your workloads. The following sections are based on information in that document. Build an inventory of your Amazon EC2 instances To scope your migration, you create an inventory of your Amazon EC2 instances. You can then use the inventory to assess your deployment and operational processes for deploying workloads on those instances. To build the inventory of your Amazon EC2 instances, we recommend that you use Migration Center, Google Cloud's unified platform that helps you accelerate your end-to-end cloud journey from your current environment to Google Cloud. Migration Center lets you import data from Amazon EC2 and other AWS resources. Migration Center then recommends relevant Google Cloud services that you can migrate to. After assessing your environment using Migration Center, we recommend that you generate a technical migration assessment report by using the Migration Center discovery client CLI. For more information, see Collect guest data from Amazon EC2 instances for offline assessment. The data that Migration Center and the Migration Center discovery client CLI provide might not fully capture the dimensions that you're interested in. In that case, you can integrate that data with the results from other data-collection mechanisms that you create that are based on AWS APIs, AWS developer tools, and the AWS command-line interface. In addition to the data that you get from Migration Center and the Migration Center discovery client CLI, consider the following data points for each Amazon EC2 instance that you want to migrate: Deployment region and zone. Instance type and size. The Amazon Machine Image (AMI) that the instance is launching from. The instance hostname, and how other instances and workloads use this hostname to communicate with the instance. The instance tags as well as metadata and user data. The instance virtualization type. The instance purchase option, such as on-demand purchase or spot purchase. How the instance stores data, such as using instance stores and Amazon EBS volumes. The instance tenancy configuration. Whether the instance is in a specific placement group. Whether the instance is in a specific autoscaling group. The security groups that the instance belongs to. Any AWS Network Firewall configuration that involves the instance. Whether the workloads that run on the instance are protected by AWS Shield and AWS WAF. Whether you're controlling the processor state of your instance, and how the workloads that run on the instance depend on the processor state. The configuration of the instance I/O scheduler. How you're exposing workloads that run on the instance to clients that run in your AWS environment (such as other workloads) and to external clients. 
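To collect some of these data points programmatically, you can complement Migration Center with a small script that calls the AWS APIs. The following Python sketch uses boto3 (the AWS SDK for Python) to gather a subset of the attributes listed above; the selected fields and the default region are assumptions that you would adapt to your environment.

```python
import boto3

def inventory_ec2_instances(region_name="us-east-1"):
    """Collect a few inventory attributes for each EC2 instance in one region."""
    ec2 = boto3.client("ec2", region_name=region_name)
    inventory = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                inventory.append({
                    "instance_id": instance["InstanceId"],
                    "availability_zone": instance["Placement"]["AvailabilityZone"],
                    "instance_type": instance["InstanceType"],
                    "ami_id": instance["ImageId"],
                    "virtualization_type": instance.get("VirtualizationType"),
                    # InstanceLifecycle is present for Spot Instances; absent for On-Demand.
                    "purchase_option": instance.get("InstanceLifecycle", "on-demand"),
                    "tenancy": instance["Placement"].get("Tenancy"),
                    "tags": {t["Key"]: t["Value"] for t in instance.get("Tags", [])},
                    "security_groups": [sg["GroupName"] for sg in instance.get("SecurityGroups", [])],
                    "ebs_volumes": [
                        bdm["Ebs"]["VolumeId"]
                        for bdm in instance.get("BlockDeviceMappings", [])
                        if "Ebs" in bdm
                    ],
                })
    return inventory

if __name__ == "__main__":
    for item in inventory_ec2_instances():
        print(item)
```

You can run a script like this per region and merge the output with the Migration Center assessment data before you plan the target Compute Engine machine types.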
Assess your deployment and operational processes It's important to have a clear understanding of how your deployment and operational processes work. These processes are a fundamental part of the practices that prepare and maintain your production environment and the workloads that run there. Your deployment and operational processes might build the artifacts that your workloads need to function. Therefore, you should gather information about each artifact type. For example, an artifact can be an operating system package, an application deployment package, an operating system image, a container image, or something else. In addition to the artifact type, consider how you complete the following tasks: Develop your workloads. Assess the processes that development teams have in place to build your workloads. For example, how are your development teams designing, coding, and testing your workloads? Generate the artifacts that you deploy in your source environment. To deploy your workloads in your source environment, you might be generating deployable artifacts, such as container images or operating system images, or you might be customizing existing artifacts, such as third-party operating system images by installing and configuring software. Gathering information about how you're generating these artifacts helps you to ensure that the generated artifacts are suitable for deployment in Google Cloud. Store the artifacts. If you produce artifacts that you store in an artifact registry in your source environment, you need to make the artifacts available in your Google Cloud environment. You can do so by employing strategies like the following: Establish a communication channel between the environments: Make the artifacts in your source environment reachable from the target Google Cloud environment. Refactor the artifact build process: Complete a minor refactor of your source environment so that you can store artifacts in both the source environment and the target environment. This approach supports your migration by building infrastructure like an artifact repository before you have to implement artifact build processes in the target Google Cloud environment. You can implement this approach directly, or you can build on the previous approach of establishing a communication channel first. Having artifacts available in both the source and target environments lets you focus on the migration without having to implement artifact build processes in the target Google Cloud environment as part of the migration. Scan and sign code. As part of your artifact build processes, you might be using code scanning to help you guard against common vulnerabilities and unintended network exposure, and code signing to help you ensure that only trusted code runs in your environments. Deploy artifacts in your source environment. After you generate deployable artifacts, you might be deploying them in your source environment. We recommend that you assess each deployment process. The assessment helps ensure that your deployment processes are compatible with Google Cloud. It also helps you to understand the effort that will be necessary to eventually refactor the processes. For example, if your deployment processes work with your source environment only, you might need to refactor them to target your Google Cloud environment. Inject runtime configuration. You might be injecting runtime configuration for specific clusters, runtime environments, or workload deployments. 
The configuration might initialize environment variables and other configuration values such as secrets, credentials, and keys. To help ensure that your runtime configuration injection processes work on Google Cloud, we recommend that you assess how you're configuring the workloads that run in your source environment. Logging, monitoring, and profiling. Assess the logging, monitoring, and profiling processes that you have in place to monitor the health of your source environment, the metrics of interest, and how you're consuming data provided by these processes. Authentication. Assess how you're authenticating against your source environment. Provision and configure your resources. To prepare your source environment, you might have designed and implemented processes that provision and configure resources. For example, you might be using Terraform along with configuration management tools to provision and configure resources in your source environment. Plan and build your foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation. Migrate your workloads To migrate your workloads from Amazon EC2 to Compute Engine, you do the following: Migrate VMs from Amazon EC2 to Compute Engine. Migrate your VM disks to Persistent Disk. Expose workloads that run on Compute Engine to clients. Refactor deployment and operational processes to target Google Cloud instead of targeting Amazon EC2. The following sections provide details about each of these tasks. Migrate your VMs to Compute Engine To migrate VMs from Amazon EC2 to Compute Engine, we recommend that you use Migrate to Virtual Machines, which is a fully managed service. For more information, see Migration journey with Migrate to VMs. As part of the migration, Migrate to VMs migrates Amazon EC2 instances in their current state, apart from required configuration changes. If your Amazon EC2 instances run customized Amazon EC2 AMIs, Migrate to VMs migrates these customizations to Compute Engine instances. However, if you want to make your infrastructure reproducible, you might need to apply equivalent customizations by building Compute Engine operating system images as part of your deployment and operational processes, as explained later in this document. You can also import your Amazon EC2 AMIs into Compute Engine. Migrate your VM disks to Persistent Disk You can also use Migrate to VMs to migrate disks from your source Amazon EC2 VMs to Persistent Disk, with minimal interruptions to the workloads that are running on the Amazon EC2 VMs. For more information, see Migrate VM disks and attach them to a new VM. For example, you can migrate a data disk attached to an Amazon EC2 VM to Persistent Disk, and attach it to a new Compute Engine VM. Expose workloads that run on Compute Engine After you migrate your Amazon EC2 instances to Compute Engine instances, you might need to provision and configure your Google Cloud environment to expose the workloads to clients.
Google Cloud offers secure and reliable services and products for exposing your workloads to clients. For workloads that run on your Compute Engine instances, you configure resources for the following categories: Firewalls Traffic load balancing DNS names, zones, and records DDoS protection and web application firewalls For each of these categories, you can start by implementing a baseline configuration that's similar to how you configured AWS services and resources in the equivalent category. You can then iterate on the configuration and use additional features that are provided by Google Cloud services. The following sections explain how to provision and configure Google Cloud resources in these categories, and how they map to AWS resources in similar categories. Firewalls If you configured AWS security groups and AWS Network Firewall policies and rules, you can configure Cloud Next Generation Firewall policies and rules. You can also provision VPC Service Controls rules to regulate network traffic inside your VPC. You can use VPC Service Controls to control outgoing traffic from your Compute Engine instances, and to help mitigate the risk of data exfiltration. For example, if you use AWS security groups to allow or deny connections to your Amazon EC2 instances, you can configure similar Virtual Private Cloud (VPC) firewall rules that apply to your Compute Engine instances. If you use remote access protocols like SSH or RDP to connect to your Amazon EC2 VMs, you can remove the VM's public IP address and connect to the VM remotely with Identity-Aware Proxy (IAP). IAP TCP forwarding lets you establish an encrypted tunnel. You can use the tunnel to forward SSH, RDP, and other internet traffic to VMs without assigning your VMs public IP addresses. Because connections from the IAP service originate from a reserved public IP address range, you need to create matching VPC firewall rules. If you have Windows-based VMs and you turned on Windows Firewall, verify that the Windows Firewall isn't configured to block RDP connections from IAP. For more information, see Troubleshooting RDP. Traffic load balancing If you've configured Elastic Load Balancing (ELB) in your AWS environment, you can configure Cloud Load Balancing to distribute network traffic to help improve the scalability of your workloads in Google Cloud. Cloud Load Balancing supports several global and regional load balancing products that work at different layers of the OSI model, such as at the transport layer and at the application layer. You can choose a load balancing product that's suitable for the requirements of your workloads. Cloud Load Balancing also supports configuring Transport Layer Security (TLS) to encrypt network traffic. When you configure TLS for Cloud Load Balancing, you can use self-managed or Google-managed TLS certificates, depending on your requirements. DNS names, zones, and records If you use Amazon Route 53 in your AWS environment, you can use the following in Google Cloud: Cloud Domains to register your DNS domains. Cloud DNS to manage your public and private DNS zones and your DNS records. For example, if you registered a domain by using Amazon Route 53, you can transfer the domain registration to Cloud Domains. Similarly, if you configured public and private DNS zones using Amazon Route 53, you can migrate that configuration to Cloud DNS. 
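Before moving on to DDoS protection, here is a concrete illustration of the firewall mapping described above. The following Python sketch reads AWS security group ingress rules with boto3 and prints proposed VPC firewall rule definitions as plain dictionaries for review; it's a hypothetical helper, it doesn't create any Google Cloud resources, and the rule names and target network are placeholders.

```python
import boto3

def propose_vpc_firewall_rules(region_name="us-east-1", network="default"):
    """Translate security group ingress rules into candidate VPC firewall rule definitions."""
    ec2 = boto3.client("ec2", region_name=region_name)
    proposals = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for permission in sg.get("IpPermissions", []):  # ingress rules
            source_ranges = [r["CidrIp"] for r in permission.get("IpRanges", [])]
            if not source_ranges:
                continue  # skip rules that only reference other security groups
            protocol = permission.get("IpProtocol")  # '-1' means all protocols
            ports = None
            if "FromPort" in permission and "ToPort" in permission:
                ports = (str(permission["FromPort"])
                         if permission["FromPort"] == permission["ToPort"]
                         else f"{permission['FromPort']}-{permission['ToPort']}")
            proposals.append({
                "name": f"allow-{sg['GroupName'].lower().replace('_', '-')}",  # placeholder naming
                "network": network,
                "direction": "INGRESS",
                "source_ranges": source_ranges,
                "allowed": {"protocol": "all" if protocol == "-1" else protocol,
                            "ports": ports},
            })
    return proposals

for rule in propose_vpc_firewall_rules():
    print(rule)
```

Reviewing the output before creating firewall rules lets you spot overly broad source ranges and decide which rules you can replace with IAP TCP forwarding instead.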
DDoS protection and web application firewalls If you configured AWS Shield and AWS WAF in your AWS environment, you can use Google Cloud Armor to help protect your Google Cloud workloads from DDoS attacks and from common exploits. Refactor deployment and operational processes After you refactor your workloads, you refactor your deployment and operational processes to do the following: Provision and configure resources in your Google Cloud environment instead of provisioning resources in your source environment. Build and configure workloads, and deploy them in your Google Cloud environment instead of deploying them in your source environment. You gathered information about these processes during the assessment phase earlier in this process. The type of refactoring that you need to consider for these processes depends on how you designed and implemented them. The refactoring also depends on what you want the end state to be for each process. For example, consider the following: You might have implemented these processes in your source environment, and you intend to design and implement similar processes in Google Cloud. For example, you can refactor these processes to use Cloud Build, Cloud Deploy, and Infrastructure Manager. You might have implemented these processes in another third-party environment outside your source environment. In this case, you need to refactor these processes to target your Google Cloud environment instead of your source environment. A combination of the previous approaches. Refactoring deployment and operational processes can be complex and can require significant effort. If you try to perform these tasks as part of your workload migration, the workload migration can become more complex, and it can expose you to risks. After you assess your deployment and operational processes, you likely have an understanding of their design and complexity. If you estimate that you require substantial effort to refactor your deployment and operational processes, we recommend that you consider refactoring these processes as part of a separate, dedicated project. For more information about how to design and implement deployment processes on Google Cloud, see: Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments This document focuses on the deployment processes that produce the artifacts to deploy, and deploy them in the target runtime environment. The refactoring strategy depends largely on the complexity of these processes. The following list outlines a possible general refactoring strategy: Provision artifact repositories on Google Cloud. For example, you can use Artifact Registry to store artifacts and build dependencies. Refactor your build processes to store artifacts both in your source environment and in Artifact Registry. Refactor your deployment processes to deploy your workloads in your target Google Cloud environment. For example, you can start by deploying a small subset of your workloads in Google Cloud, using artifacts stored in Artifact Registry. Then, you gradually increase the number of workloads deployed in Google Cloud, until all the workloads to migrate run on Google Cloud. Refactor your build processes to store artifacts in Artifact Registry only. If necessary, migrate earlier versions of the artifacts to deploy from the repositories in your source environment to Artifact Registry. For example, you can copy container images to Artifact Registry.
Decommission the repositories in your source environment when you no longer require them. To facilitate eventual rollbacks due to unanticipated issues during the migration, you can store container images both in your current artifact repositories and in Google Cloud while the migration to Google Cloud is in progress. Finally, as part of the decommissioning of your source environment, you can refactor your container image building processes to store artifacts in Google Cloud only. Although it might not be crucial for the success of a migration, you might need to migrate earlier versions of your artifacts from your source environment to your artifact repositories on Google Cloud. For example, to support rolling back your workloads to arbitrary points in time, you might need to migrate earlier versions of your artifacts to Artifact Registry. For more information, see Migrate images from a third-party registry. If you're using Artifact Registry to store your artifacts, we recommend that you configure controls to help you secure your artifact repositories, such as access control, data exfiltration prevention, vulnerability scanning, and Binary Authorization. For more information, see Control access and protect artifacts. Optimize your Google Cloud environment Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. You repeat this sequence until you've achieved your optimization goals. For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization. What's next Read about other AWS to Google Cloud migration journeys. Learn how to compare AWS and Azure services to Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Author: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Migrate_Amazon_EKS_to_GKE.txt b/Migrate_Amazon_EKS_to_GKE.txt new file mode 100644 index 0000000000000000000000000000000000000000..8260f6922d62c38a5b22752604a8b115355dc725 --- /dev/null +++ b/Migrate_Amazon_EKS_to_GKE.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-amazon-eks-to-gke +Date Scraped: 2025-02-23T11:51:59.622Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate from AWS to Google Cloud: Migrate from Amazon EKS to GKE Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-02 UTC Google Cloud provides tools, products, guidance, and professional services to migrate from Amazon Elastic Kubernetes Service (Amazon EKS) to Google Kubernetes Engine (GKE). This document helps you to design, implement, and validate a plan to migrate from Amazon EKS to GKE. This document also provides guidance if you're evaluating the opportunity to migrate and want to explore what migration might look like. Besides running on Amazon Elastic Compute Cloud (Amazon EC2), Amazon EKS has a few other deployment options, such as Amazon EKS on AWS Outposts and Amazon EKS Anywhere. This document focuses on Amazon EKS on EC2.
This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents: Get started Migrate from Amazon EC2 to Compute Engine Migrate from Amazon S3 to Cloud Storage Migrate from Amazon EKS to Google Kubernetes Engine (this document) Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server Migrate from AWS Lambda to Cloud Run GKE is a Google-managed Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure, and provides features that help you manage your Kubernetes environment, such as: Two editions: GKE Standard and GKE Enterprise. With GKE Standard, you get access to a standard tier of core features. With GKE Enterprise, you get access to all the capabilities of GKE. For more information, see GKE editions. Two modes of operation: Standard and Autopilot. With Standard, you manage the underlying infrastructure and the configuration of each node in your GKE cluster. With Autopilot, GKE manages the underlying infrastructure such as node configuration, autoscaling, auto-upgrades, baseline security and network configuration. For more information about GKE modes of operation, see Choose a GKE mode of operation. Industry-unique service level agreement for Pods when using Autopilot in multiple zones. Automated node pool creation and deletion with node auto-provisioning. Google-managed multi-cluster networking to help you design and implement highly available, distributed architectures for your workloads. For more information about GKE, see GKE overview. For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. Assess the source environment In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. 
Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. For more information about the assessment phase and these tasks, see Migrate to Google Cloud: Assess and discover your workloads. The following sections are based on information in that document. Build your inventories To scope your migration, you create two inventories: The inventory of your clusters. The inventory of your workloads that are deployed in those clusters. After you build these inventories, you: Assess your deployment and operational processes for your source environment. Assess supporting services and external dependencies. Build the inventory of your clusters To build the inventory of your clusters, consider the following for each cluster: Number and type of nodes. When you know how many nodes you have in your current environment and the characteristics of each node, you can size your clusters when you move to GKE. The nodes in your new environment might run on a different hardware architecture or generation than the ones you use in your current environment. The performance of each architecture and generation is different, so the number of nodes you need in your new environment might be different from your current environment. Evaluate any type of hardware that you're using in your nodes, such as high-performance storage devices, GPUs, and TPUs. Assess which operating system image you're using on your nodes. Internal or external cluster. Evaluate which actors, either internal or external to your environment, each cluster is exposed to. To support your use cases, this evaluation includes the workloads running in the cluster, and the interfaces that interact with your clusters. Multi-tenancy. If you're managing multi-tenant clusters in your environment, assess whether your multi-tenancy setup will work in your new Google Cloud environment. Now is a good time to evaluate how to improve your multi-tenant clusters because your multi-tenancy strategy influences how you build your foundation on Google Cloud. Kubernetes version. Gather information about the Kubernetes version of your clusters to assess if there is a mismatch between those versions and the ones available in GKE. If you're running an older or a recently released Kubernetes version, you might be using features that are unavailable in GKE. The features might be deprecated, or the Kubernetes version that ships them might not be available in GKE yet. Kubernetes upgrade cycle. To maintain a reliable environment, understand how you're handling Kubernetes upgrades and how your upgrade cycle relates to GKE upgrades. Node pools. If you're using any form of node grouping, you might want to consider how these groupings map to the concept of node pools in GKE because your grouping criteria might not be suitable for GKE. Node initialization. Assess how you initialize each node before marking it as available to run your workloads so you can port those initialization procedures over to GKE. Network configuration. Assess the network configuration of your clusters, their IP address allocation, how you configured their networking plugins, how you configured their DNS servers and DNS service providers, if you configured any form of NAT or SNAT for these clusters, and whether they are part of a multi-cluster environment.
Compliance. Assess any compliance and regulatory requirements that your clusters must satisfy, and whether you're meeting these requirements. Quotas and limits. Assess how you configured quotas and limits for your clusters. For example, how many Pods can each node run? How many nodes can a cluster have? Labels and tags. Assess any metadata that you applied to clusters, node pools, and nodes, and how you're using them. For example, you might be generating reports with detailed, label-based cost attribution. The following items that you assess in your inventory focus on the security of your infrastructure and Kubernetes clusters: Namespaces. If you use Kubernetes Namespaces in your clusters to logically separate resources, assess which resources are in each Namespace, and understand why you created this separation. For example, you might be using Namespaces as part of your multi-tenancy strategy. You might have workloads deployed in Namespaces reserved for Kubernetes system components, and you might not have as much control in GKE. Role-based access control (RBAC). If you use RBAC authorization in your clusters, list and describe all ClusterRoles and ClusterRoleBindings that you configured in your clusters. Network policies. List all network policies that you configured in your clusters, and understand how network policies work in GKE. Pod security contexts. Capture information about the Pod security contexts that you configured in your clusters and learn how they work in GKE. Service accounts. If any process in your cluster is interacting with the Kubernetes API server, capture information about the service accounts that those processes use. When you build the inventory of your Kubernetes clusters, you might find that some of the clusters need to be decommissioned as part of your migration. Make sure that your migration plan includes retiring these resources. Build the inventory of your Kubernetes workloads After you complete the Kubernetes clusters inventory and assess the security of your environment, build the inventory of the workloads deployed in those clusters. When evaluating your workloads, gather information about the following aspects: Pods and controllers. To size the clusters in your new environment, assess how many instances of each workload you have deployed, and if you're using Resource quotas and compute resource consumption limits. Gather information about the workloads that are running on the control plane nodes of each cluster and the controllers that each workload uses. For example, how many Deployments are you using? How many DaemonSets are you using? Jobs and CronJobs. Your clusters and workloads might need to run Jobs or CronJobs as part of their initialization or operation procedures. Assess how many instances of Jobs and CronJobs you have deployed, and the responsibilities and completion criteria for each instance. Kubernetes Autoscalers. To migrate your autoscaling policies in the new environment, learn how the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler work on GKE. Stateless and stateful workloads. Stateless workloads don't store data or state in the cluster or in persistent storage. Stateful applications save data for later use. For each workload, assess which components are stateless and which are stateful, because migrating stateful workloads is typically harder than migrating stateless ones. Kubernetes features. From the cluster inventory, you know which Kubernetes version each cluster runs.
Review the release notes of each Kubernetes version to know which features it ships and which features it deprecates. Then assess your workloads against the Kubernetes features that you need. The goal of this task is to know whether you're using deprecated features or features that are not yet available in GKE. If you find any unavailable features, migrate away from deprecated features and adopt the new ones when they're available in GKE. Storage. For stateful workloads, assess if they use PersistentVolumeClaims. List any storage requirements, such as size and access mode, and how these PersistentVolumeClaims map to PersistentVolumes. To account for future growth, assess if you need to expand any PersistentVolumeClaim. Configuration and secret injection. To avoid rebuilding your deployable artifacts every time there is a change in the configuration of your environment, inject configuration and secrets into Pods using ConfigMaps and Secrets. For each workload, assess which ConfigMaps and Secrets that workload is using, and how you're populating those objects. Dependencies. Your workloads probably don't work in isolation. They might have dependencies, either internal to the cluster, or from external systems. For each workload, capture its dependencies, and whether your workloads have any tolerance for when the dependencies are unavailable. For example, common dependencies include distributed file systems, databases, secret distribution platforms, identity and access management systems, service discovery mechanisms, and any other external systems. Kubernetes Services. To expose your workloads to internal and external clients, use Services. For each Service, you need to know its type. For externally exposed services, assess how that service interacts with the rest of your infrastructure. For example, how is your infrastructure supporting LoadBalancer services, Gateway objects, and Ingress objects? Which Ingress controllers did you deploy in your clusters? Service mesh. If you're using a service mesh in your environment, assess how it's configured. You also need to know how many clusters it spans, which services are part of the mesh, and how you modify the topology of the mesh. Taints, tolerations, and affinity and anti-affinity. For each Pod and Node, assess if you configured any Node taints, Pod tolerations, or affinities to customize the scheduling of Pods in your Kubernetes clusters. These properties might also give you insights about possible non-homogeneous Node or Pod configurations, and might mean that either the Pods, the Nodes, or both need to be assessed with special focus and care. For example, if you configured a particular set of Pods to be scheduled only on certain Nodes in your Kubernetes cluster, it might mean that the Pods need specialized resources that are available only on those Nodes. Authentication. Assess how your workloads authenticate against resources in your cluster, and against external resources. Assess supporting services and external dependencies After you assess your clusters and their workloads, evaluate the rest of the supporting services and aspects in your infrastructure, such as the following: StorageClasses and PersistentVolumes. Assess how your infrastructure is backing PersistentVolumeClaims by listing StorageClasses for dynamic provisioning, and statically provisioned PersistentVolumes. For each PersistentVolume, consider the following: capacity, volume mode, access mode, class, reclaim policy, mount options, and node affinity.
VolumeSnapshots and VolumeSnapshotContents. For each PersistentVolume, assess if you configured any VolumeSnapshot, and if you need to migrate any existing VolumeSnapshotContents. Container Storage Interface (CSI) drivers. If deployed in your clusters, assess if these drivers are compatible with GKE, and if you need to adapt the configuration of your volumes to work with CSI drivers that are compatible with GKE. Data storage. If you depend on external systems to provision PersistentVolumes, provide a way for the workloads in your GKE environment to use those systems. Data locality has an impact on the performance of stateful workloads, because the latency between your external systems and your GKE environment is proportional to the distance between them. For each external data storage system, consider its type, such as block volumes, file storage, or object storage, and any performance and availability requirements that it needs to satisfy. Custom resources and Kubernetes add-ons. Collect information about any custom Kubernetes resources and any Kubernetes add-ons that you might have deployed in your clusters, because they might not work in GKE, or you might need to modify them. For example, if a custom resource interacts with an external system, you assess if that's applicable to your Google Cloud environment. Backup. Assess how you're backing up the configuration of your clusters and stateful workload data in your source environment. Assess your deployment and operational processes It's important to have a clear understanding of how your deployment and operational processes work. These processes are a fundamental part of the practices that prepare and maintain your production environment and the workloads that run there. Your deployment and operational processes might build the artifacts that your workloads need to function. Therefore, you should gather information about each artifact type. For example, an artifact can be an operating system package, an application deployment package, an operating system image, a container image, or something else. In addition to the artifact type, consider how you complete the following tasks: Develop your workloads. Assess the processes that development teams have in place to build your workloads. For example, how are your development teams designing, coding, and testing your workloads? Generate the artifacts that you deploy in your source environment. To deploy your workloads in your source environment, you might be generating deployable artifacts, such as container images or operating system images, or you might be customizing existing artifacts, such as third-party operating system images by installing and configuring software. Gathering information about how you're generating these artifacts helps you to ensure that the generated artifacts are suitable for deployment in Google Cloud. Store the artifacts. If you produce artifacts that you store in an artifact registry in your source environment, you need to make the artifacts available in your Google Cloud environment. You can do so by employing strategies like the following: Establish a communication channel between the environments: Make the artifacts in your source environment reachable from the target Google Cloud environment. Refactor the artifact build process: Complete a minor refactor of your source environment so that you can store artifacts in both the source environment and the target environment. 
This approach supports your migration by building infrastructure like an artifact repository before you have to implement artifact build processes in the target Google Cloud environment. You can implement this approach directly, or you can build on the previous approach of establishing a communication channel first. Having artifacts available in both the source and target environments lets you focus on the migration without having to implement artifact build processes in the target Google Cloud environment as part of the migration. Scan and sign code. As part of your artifact build processes, you might be using code scanning to help you guard against common vulnerabilities and unintended network exposure, and code signing to help you ensure that only trusted code runs in your environments. Deploy artifacts in your source environment. After you generate deployable artifacts, you might be deploying them in your source environment. We recommend that you assess each deployment process. The assessment helps ensure that your deployment processes are compatible with Google Cloud. It also helps you to understand the effort that will be necessary to eventually refactor the processes. For example, if your deployment processes work with your source environment only, you might need to refactor them to target your Google Cloud environment. Inject runtime configuration. You might be injecting runtime configuration for specific clusters, runtime environments, or workload deployments. The configuration might initialize environment variables and other configuration values such as secrets, credentials, and keys. To help ensure that your runtime configuration injection processes work on Google Cloud, we recommend that you assess how you're configuring the workloads that run in your source environment. Logging, monitoring, and profiling. Assess the logging, monitoring, and profiling processes that you have in place to monitor the health of your source environment, the metrics of interest, and how you're consuming data provided by these processes. Authentication. Assess how you're authenticating against your source environment. Provision and configure your resources. To prepare your source environment, you might have designed and implemented processes that provision and configure resources. For example, you might be using Terraform along with configuration management tools to provision and configure resources in your source environment. Tools to build the inventory of your source environment To build the inventory of your Amazon EKS clusters, we recommend that you use Migration Center, Google Cloud's unified platform that helps you accelerate your end-to-end cloud journey from your current environment to Google Cloud. Migration Center lets you import data from Amazon EKS and other AWS resources. Migration Center then recommends relevant Google Cloud services that you can migrate to. Refine the inventory of your Amazon EKS clusters and workloads The data that Migration Center provides might not fully capture the dimensions that you're interested in. In that case, you can integrate that data with the results from other data-collection mechanisms that you create that are based on AWS APIs, AWS developer tools, and the AWS command-line interface. 
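For example, the following sketch shows one way to use the AWS APIs to refine your inventory, using the AWS SDK for Python (boto3). It's a minimal starting point rather than a complete inventory tool: the region, the selected fields, and the output format are assumptions that you would adapt to your own assessment, and pagination and error handling are omitted for brevity.

```python
# Sketch: refine an Amazon EKS inventory by using the AWS SDK for Python (boto3).
# Assumes that AWS credentials are already configured for the SDK. The region,
# the selected fields, and the output format are illustrative; pagination and
# error handling are omitted for brevity.
import boto3


def inventory_eks_clusters(region="us-east-1"):
    eks = boto3.client("eks", region_name=region)
    inventory = []
    for name in eks.list_clusters()["clusters"]:
        cluster = eks.describe_cluster(name=name)["cluster"]
        vpc_config = cluster.get("resourcesVpcConfig", {})
        inventory.append({
            "name": name,
            "kubernetes_version": cluster.get("version"),
            "endpoint_public_access": vpc_config.get("endpointPublicAccess"),
            "endpoint_private_access": vpc_config.get("endpointPrivateAccess"),
            # Empty list if envelope encryption of Secrets isn't enabled.
            "secret_encryption": cluster.get("encryptionConfig", []),
            "oidc_issuer": cluster.get("identity", {}).get("oidc", {}).get("issuer"),
            "tags": cluster.get("tags", {}),
            "managed_node_groups": eks.list_nodegroups(clusterName=name)["nodegroups"],
            "fargate_profiles": eks.list_fargate_profiles(clusterName=name)["fargateProfileNames"],
        })
    return inventory


if __name__ == "__main__":
    for cluster in inventory_eks_clusters():
        print(cluster)
```

You can review the resulting records alongside the data that Migration Center provides to complete your inventory.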
In addition to the data that you get from Migration Center, consider the following data points for each Amazon EKS cluster that you want to migrate: Consider Amazon EKS-specific aspects and features for each Amazon EKS cluster, including the following: Private clusters Cluster endpoint access control Secret encryption Managed node groups and self-managed nodes Tags on Amazon EKS resources Custom AMIs in Amazon EKS Usage of Amazon EKS Fargate Usage of Amazon Managed Service for Prometheus OpenID Connect authentication configuration Assess how you're authenticating against your Amazon EKS clusters and how you configured AWS Identity and Access Management (IAM) for Amazon EKS. Assess the networking configuration of your Amazon EKS clusters. Plan and build your foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation. The following sections integrate the considerations in Migrate to Google Cloud: Plan and build your foundation. Plan for multi-tenancy To design an efficient resource hierarchy, consider how your business and organizational structures map to Google Cloud. For example, if you need a multi-tenant environment on GKE, you can choose between the following options: Creating one Google Cloud project for each tenant. Sharing one project among different tenants, and provisioning multiple GKE clusters. Using Kubernetes namespaces. Your choice depends on your isolation, complexity, and scalability needs. For example, having one project per tenant isolates the tenants from one another, but the resource hierarchy becomes more complex to manage due to the high number of projects. Conversely, although managing Kubernetes namespaces is easier than managing a complex resource hierarchy, this option doesn't guarantee as much isolation. For example, the control plane might be shared between tenants. For more information, see Cluster multi-tenancy. Configure identity and access management GKE supports multiple options for managing access to resources within your Google Cloud project and its clusters using RBAC. For more information, see Access control. Configure GKE networking Network configuration is a fundamental aspect of your environment. Before provisioning and configuring any cluster, we recommend that you assess the GKE network model, the best practices for GKE networking, and how to plan IP addresses when migrating to GKE. Set up monitoring and alerting Having a clear picture of how your infrastructure and workloads are performing is key to finding areas of improvement. GKE has deep integrations with Google Cloud Observability, so you get logging, monitoring, and profiling information about your GKE clusters and workloads inside those clusters. Migrate your data and deploy workloads In the deployment phase, you do the following: Provision and configure your GKE environment. Configure your GKE clusters. Refactor your workloads. Refactor deployment and operational processes. Migrate data from your source environment to Google Cloud. 
Deploy your workloads in your GKE environment. Validate your workloads and GKE environment. Expose workloads running on GKE. Shift traffic from the source environment to the GKE environment. Decommission the source environment. Provision and configure your Google Cloud environment Before moving any workload to your new Google Cloud environment, you provision the GKE clusters. GKE supports enabling certain features on existing clusters, but there might be features that you can only enable at cluster creation time. To help you avoid disruptions and simplify the migration, we recommend that you enable the cluster features that you need at cluster creation time. Otherwise, you might need to destroy and recreate your clusters if the cluster features that you need can't be enabled after cluster creation. After the assessment phase, you now know how to provision the GKE clusters in your new Google Cloud environment to meet your needs. To provision your clusters, consider the following: The number of clusters, the number of nodes per cluster, the types of clusters, the configuration of each cluster and each node, and the scalability plans of each cluster. The mode of operation of each cluster. GKE offers two modes of operation for clusters: GKE Autopilot and GKE Standard. The number of private clusters. The choice between VPC-native and routes-based networking. The Kubernetes versions and release channels that you need in your GKE clusters. The node pools to logically group the nodes in your GKE clusters, and whether you need to automatically create node pools with node auto-provisioning. The initialization procedures that you can port from your environment to the GKE environment and new procedures that you can implement. For example, you can automatically bootstrap GKE nodes by implementing one or more initialization procedures, which might need to be privileged, for each node or node pool in your clusters. The scalability plans for each cluster. The additional GKE features that you need, such as Cloud Service Mesh, and GKE add-ons, such as Backup for GKE. For more information about provisioning GKE clusters, see: About cluster configuration choices. Manage, configure, and deploy GKE clusters. Understanding GKE security. Harden your cluster's security. GKE networking overview. Best practices for GKE networking. Storage for GKE clusters overview. Fleet management When you provision your GKE clusters, you might realize that you need a large number of them to support all the use cases of your environment. For example, you might need to separate production from non-production environments, or separate services across teams or geographies. For more information, see multi-cluster use cases. As the number of clusters increases, your GKE environment might become harder to operate because managing a large number of clusters poses significant scalability and operational challenges. GKE provides tools and features to help you manage fleets, a logical grouping of Kubernetes clusters. For more information, see Fleet management. Multi-cluster networking To help you improve the reliability of your GKE environment, and to distribute your workloads across several GKE clusters, you can use: Multi-Cluster Service Discovery, a cross-cluster service discovery and invocation mechanism. Services are discoverable and accessible across GKE clusters. For more information, see Multi-Cluster Service Discovery. Multi-cluster gateways, a cross-cluster ingress traffic load balancing mechanism. 
For more information, see Deploying multi-cluster Gateways. Multi-cluster mesh on managed Cloud Service Mesh. For more information, see Set up a multi-cluster mesh. For more information about migrating from a single-cluster GKE environment to a multi-cluster GKE environment, see Migrate to multi-cluster networking. Configure your GKE clusters After you provision your GKE clusters and before deploying any workload or migrating data, you configure namespaces, RBAC, network policies, service accounts, and other Kubernetes and GKE objects for each GKE cluster. To configure Kubernetes and GKE objects in your GKE clusters, we recommend that you: Ensure that you have the necessary credentials and permissions to access the clusters in both your source environment and your GKE environment. Assess whether the objects in the Kubernetes clusters in your source environment are compatible with GKE, and how the implementations that back these objects differ between the source environment and GKE. Refactor any incompatible object to make it compatible with GKE, or retire it. Create these objects in your GKE clusters. Configure any additional objects that you need in your GKE clusters. Config Sync To help you adopt GitOps best practices to manage the configuration of your GKE clusters as your GKE environment scales, we recommend that you use Config Sync, a GitOps service to deploy configurations from a source of truth. For example, you can store the configuration of your GKE clusters in a Git repository, and use Config Sync to apply that configuration. For more information, see Config Sync architecture. Policy Controller Policy Controller helps you apply and enforce programmable policies to help ensure that your GKE clusters and workloads run in a secure and compliant manner. As your GKE environment scales, you can use Policy Controller to automatically apply policies, policy bundles, and constraints to all your GKE clusters. For example, you can restrict the repositories from which container images can be pulled, or you can require each namespace to have at least one label to help you ensure accurate resource consumption tracking. For more information, see Policy Controller. Refactor your workloads A best practice to design containerized workloads is to avoid dependencies on the container orchestration platform. This might not always be possible in practice due to the requirements and the design of your workloads. For example, your workloads might depend on environment-specific features that are available in your source environment only, such as add-ons, extensions, and integrations. Although you might be able to migrate most workloads as-is to GKE, you might need to spend additional effort to refactor workloads that depend on environment-specific features, in order to minimize these dependencies, eventually switching to alternatives that are available on GKE. To refactor your workloads before migrating them to GKE, you do the following: Review source environment-specific features, such as add-ons, extensions, and integrations. Adopt suitable alternative GKE solutions. Refactor your workloads. Review source environment-specific features If you're using source environment-specific features, and your workloads depend on these features, you need to: Find suitable alternative GKE solutions. Refactor your workloads in order to make use of the alternative GKE solutions. As part of this review, we recommend that you do the following: Consider whether you can deprecate any of these source environment-specific features. 
Evaluate how critical a source environment-specific feature is for the success of the migration. Adopt suitable alternative GKE solutions After you reviewed your source environment-specific features, and mapped them to suitable GKE alternative solutions, you adopt these solutions in your GKE environment. To reduce the complexity of your migration, we recommend that you do the following: Avoid adopting alternative GKE solutions for source environment-specific features that you aim to deprecate. Focus on adopting alternative GKE solutions for the most critical source environment-specific features, and plan dedicated migration projects for the rest. Refactor your workloads While most of your workloads might work as is in GKE, you might need to refactor some of them, especially if they depended on source environment-specific features for which you adopted alternative GKE solutions. This refactoring might involve: Kubernetes object descriptors, such as Deployments, and Services expressed in YAML format. Container image descriptors, such as Dockerfiles and Containerfiles. Workloads source code. To simplify the refactoring effort, we recommend that you focus on applying the least amount of changes that you need to make your workloads suitable for GKE, and critical bug fixes. You can plan other improvements and changes as part of future projects. Refactor deployment and operational processes After you refactor your workloads, you refactor your deployment and operational processes to do the following: Provision and configure resources in your Google Cloud environment instead of provisioning resources in your source environment. Build and configure workloads, and deploy them in your Google Cloud instead of deploying them in your source environment. You gathered information about these processes during the assessment phase earlier in this process. The type of refactoring that you need to consider for these processes depends on how you designed and implemented them. The refactoring also depends on what you want the end state to be for each process. For example, consider the following: You might have implemented these processes in your source environment and you intend to design and implement similar processes in Google Cloud. For example, you can refactor these processes to use Cloud Build, Cloud Deploy, and Infrastructure Manager. You might have implemented these processes in another third-party environment outside your source environment. In this case, you need to refactor these processes to target your Google Cloud environment instead of your source environment. A combination of the previous approaches. Refactoring deployment and operational processes can be complex and can require significant effort. If you try to perform these tasks as part of your workload migration, the workload migration can become more complex, and it can expose you to risks. After you assess your deployment and operational processes, you likely have an understanding of their design and complexity. If you estimate that you require substantial effort to refactor your deployment and operational processes, we recommend that you consider refactoring these processes as part of a separate, dedicated project. 
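To support the workload refactoring described in this section, you can also automate parts of the review. The following sketch scans Kubernetes Service manifests for AWS-specific load balancer annotations and suggests GKE alternatives to investigate. It's a minimal example: the annotation-to-hint mapping is illustrative and intentionally small, and you would extend it with the source environment-specific features that you found during your assessment.

```python
# Sketch: flag AWS-specific Service annotations in Kubernetes manifests so that
# you can review them and plan GKE alternatives. The annotation-to-hint mapping
# is illustrative and intentionally small; extend it with the source
# environment-specific features that you found during your assessment.
# Requires PyYAML (pip install pyyaml).
import sys

import yaml

ANNOTATION_HINTS = {
    "service.beta.kubernetes.io/aws-load-balancer-type":
        "Review how this Service maps to GKE LoadBalancer Services or Gateways.",
    "service.beta.kubernetes.io/aws-load-balancer-internal":
        'Consider the GKE annotation networking.gke.io/load-balancer-type: "Internal".',
}


def scan_manifest(path):
    """Returns (service_name, annotation, hint) tuples for one manifest file."""
    findings = []
    with open(path) as manifest:
        for doc in yaml.safe_load_all(manifest):
            if not doc or doc.get("kind") != "Service":
                continue
            annotations = doc.get("metadata", {}).get("annotations") or {}
            for key, hint in ANNOTATION_HINTS.items():
                if key in annotations:
                    name = doc.get("metadata", {}).get("name", "<unnamed>")
                    findings.append((name, key, hint))
    return findings


if __name__ == "__main__":
    for path in sys.argv[1:]:
        for name, key, hint in scan_manifest(path):
            print(f"{path}: Service {name} uses {key} -> {hint}")
```

You might run a script like this against the manifests in your deployment repositories and use the findings as input for your refactoring backlog.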
For more information about how to design and implement deployment processes on Google Cloud, see: Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments This document focuses on the deployment processes that produce the artifacts to deploy and that deploy them in the target runtime environment. The refactoring strategy depends heavily on the complexity of these processes. The following list outlines a possible, general refactoring strategy: Provision artifact repositories on Google Cloud. For example, you can use Artifact Registry to store artifacts and build dependencies. Refactor your build processes to store artifacts both in your source environment and in Artifact Registry. Refactor your deployment processes to deploy your workloads in your target Google Cloud environment. For example, you can start by deploying a small subset of your workloads in Google Cloud, using artifacts stored in Artifact Registry. Then, you gradually increase the number of workloads deployed in Google Cloud, until all the workloads to migrate run on Google Cloud. Refactor your build processes to store artifacts in Artifact Registry only. If necessary, migrate earlier versions of the artifacts to deploy from the repositories in your source environment to Artifact Registry. For example, you can copy container images to Artifact Registry. Decommission the repositories in your source environment when you no longer require them. To facilitate possible rollbacks due to unanticipated issues during the migration, you can store container images both in your current artifact repositories and in Google Cloud while the migration to Google Cloud is in progress. Finally, as part of the decommissioning of your source environment, you can refactor your container image building processes to store artifacts in Google Cloud only. Although it might not be crucial for the success of a migration, you might need to migrate earlier versions of your artifacts from your source environment to your artifact repositories on Google Cloud. For example, to support rolling back your workloads to arbitrary points in time, you might need to migrate earlier versions of your artifacts to Artifact Registry. For more information, see Migrate images from a third-party registry. If you're using Artifact Registry to store your artifacts, we recommend that you configure controls to help you secure your artifact repositories, such as access control, data exfiltration prevention, vulnerability scanning, and Binary Authorization. For more information, see Control access and protect artifacts. Migrate data GKE supports several data storage services, such as block storage, raw block storage, file storage, and object storage. For more information, see Storage for GKE clusters overview. To migrate data to your GKE environment, you do the following: Provision and configure all necessary storage infrastructure. Configure StorageClass provisioners in your GKE clusters. Not all StorageClass provisioners are compatible with all environments. Before deploying a StorageClass provisioner, we recommend that you evaluate its compatibility with GKE, and its dependencies. Configure StorageClasses. Configure PersistentVolumes and PersistentVolumeClaims to store the data to migrate. Migrate data from your source environment to these PersistentVolumes. The specifics of this data migration depend on the characteristics of the source environment. 
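As a concrete illustration of configuring PersistentVolumes and PersistentVolumeClaims for migrated data, the following sketch statically provisions a PersistentVolume that's backed by a pre-existing Compute Engine persistent disk and a PersistentVolumeClaim that binds to it, by using the official Kubernetes Python client. The project, zone, disk, capacity, and namespace values are placeholders, and the sketch assumes that the disk already contains the migrated data and that the cluster runs the GKE persistent disk CSI driver.

```python
# Sketch: statically provision a PersistentVolume backed by a pre-existing
# Compute Engine persistent disk, and a PersistentVolumeClaim that binds to it,
# by using the official Kubernetes Python client. The project, zone, disk,
# capacity, and namespace values are placeholders; the disk is assumed to
# already contain the migrated data, and the cluster is assumed to run the
# GKE persistent disk CSI driver (pd.csi.storage.gke.io).
from kubernetes import client, config

DISK_HANDLE = "projects/my-project/zones/us-central1-a/disks/migrated-data-disk"

pv_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "migrated-data-pv"},
    "spec": {
        "capacity": {"storage": "100Gi"},
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "",  # Empty string disables dynamic provisioning.
        "csi": {
            "driver": "pd.csi.storage.gke.io",
            "volumeHandle": DISK_HANDLE,
            "fsType": "ext4",
        },
    },
}

pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "migrated-data-pvc"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "",
        "volumeName": "migrated-data-pv",  # Bind to the pre-provisioned volume.
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

if __name__ == "__main__":
    config.load_kube_config()  # Uses the current kubectl context (your GKE cluster).
    core = client.CoreV1Api()
    core.create_persistent_volume(body=pv_manifest)
    core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_manifest)
```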
To migrate data from your source environment to your Google Cloud environment, we recommend that you design a data migration plan by following the guidance in Migrate to Google Cloud: Transfer large datasets. Migrate data from EKS to GKE AWS provides several data storage options for Amazon EKS. This document focuses on the following data migration scenarios: Migrate data from Amazon EBS volumes to GKE PersistentVolume resources. Copy data from Amazon EBS volumes to Amazon S3 or to Cloud Storage, and then migrate data to GKE PersistentVolume resources. Migrate data from Amazon EBS volumes to GKE PersistentVolumes You can migrate data from Amazon EBS volumes to GKE PersistentVolume resources by using one of the following approaches: Directly copy data from Amazon EBS volumes to Compute Engine persistent disks: Provision Amazon EC2 instances and attach the Amazon EBS volumes that contain the data to migrate. Provision Compute Engine instances with empty persistent disks that have sufficient capacity to store the data to migrate. Run a data synchronization tool, such as rsync, to copy data from the Amazon EBS volumes to the Compute Engine persistent disks. Detach the persistent disks from the Compute Engine instances. Decommission the Compute Engine instances. Configure the persistent disks as GKE PersistentVolume resources. Migrate Amazon EC2 instances and Amazon EBS volumes to Compute Engine: Provision Amazon EC2 instances and attach the Amazon EBS volumes that contain the data to migrate. Migrate the Amazon EC2 instances and the Amazon EBS volumes to Compute Engine with Migrate for Virtual Machines. Detach the persistent disks from the Compute Engine instances. Decommission the Compute Engine instances. Configure the persistent disks as GKE PersistentVolume resources. For more information about migrating Amazon EC2 instances to Compute Engine, see Migrate from AWS to Google Cloud: Migrate from Amazon EC2 to Compute Engine. For more information about using Compute Engine persistent disks as GKE PersistentVolume resources, see Using pre-existing persistent disks as PersistentVolumes. Copy data from Amazon EBS volumes to an interim media, and migrate to GKE PersistentVolumes Instead of migrating from Amazon EBS volumes to GKE PersistentVolume resources directly, you can use an interim media such as an object store: Upload data from Amazon EBS volumes to an interim media such as an Amazon S3 bucket or a Cloud Storage bucket. Download the data from the interim media to your GKE PersistentVolume resources. In certain scenarios, using multiple media can simplify data transfer based on your network and security configurations. For example, you can initially upload the data to an S3 bucket, then copy it from the S3 bucket to a Cloud Storage bucket, and finally download the data to your persistent volumes. Regardless of the approach that you choose, to ensure a smooth transition while taking note of important considerations, we recommend that you review Migrate from AWS to Google Cloud: Migrate from Amazon S3 to Cloud Storage. Choose a migration option The best migration option for you depends on your specific needs and requirements, such as the following considerations: The amount of data that you need to migrate. If you have a small amount of data to migrate, such as a few data files, consider tools like rsync to copy the data directly to Compute Engine persistent disks. This option is relatively quick, but it might not be suitable for a large amount of data. 
If you have a large amount of data to migrate, consider using Migrate to Virtual Machines to migrate the data to Compute Engine. This option is more complex than directly copying data, but it's more reliable and scalable. The type of data that you need to migrate. Your network connectivity between the source and the target environments. If you can't establish direct network connectivity between your AWS EC2 and Compute Engine instances, you might want to consider using Amazon S3 or Cloud Storage to store the data temporarily while you migrate it to Compute Engine. This option might be less expensive because you won't have to keep your EC2 and Compute Engine instances running simultaneously. Your migration timeline. If you have limited network bandwidth or a large amount of data, and your timeline isn't tight, you can also consider using a Transfer Appliance to move your data from AWS to Google Cloud. No matter which option you choose, it's important that you test your migration before you make it live. Testing will help you to identify any potential problems and help to ensure that your migration is successful. Deploy your workloads When your deployment processes are ready, you deploy your workloads to GKE. For more information, see Overview of deploying workloads. To prepare the workloads to deploy for GKE, we recommend that you analyze your Kubernetes descriptors because some Google Cloud resources that GKE automatically provisions for you are configurable by using Kubernetes labels and annotations, instead of having to manually provision these resources. For example, you can provision an internal load balancer instead of an external one by adding an annotation to a LoadBalancer Service. Validate your workloads After you deploy workloads in your GKE environment, but before you expose these workloads to your users, we recommend that you perform extensive validation and testing. This testing can help you verify that your workloads are behaving as expected. For example, you may: Perform integration testing, load testing, compliance testing, reliability testing, and other verification procedures that help you ensure that your workloads are operating within their expected parameters, and according to their specifications. Examine logs, metrics, and error reports in Google Cloud Observability to identify any potential issues, and to spot trends to anticipate problems before they occur. For more information about workload validation, see Testing for reliability. Expose your workloads Once you complete the validation testing of the workloads running in your GKE environment, expose your workloads to make them reachable. To expose workloads running in your GKE environment, you can use Kubernetes Services, and a service mesh. For more information about exposing workloads running in GKE, see: About Services About service networking About Gateway Shift traffic to your Google Cloud environment After you have verified that the workloads are running in your GKE environment, and after you have exposed them to clients, you shift traffic from your source environment to your GKE environment. To help you avoid big-scale migrations and all the related risks, we recommend that you gradually shift traffic from your source environment to your GKE. Depending on how you designed your GKE environment, you have several options to implement a load balancing mechanism that gradually shifts traffic from your source environment to your target environment. 
For example, you might implement a DNS resolution policy that resolves a certain percentage of requests to IP addresses that belong to your GKE environment. Or you can implement a load balancing mechanism using virtual IP addresses and network load balancers. After you start gradually shifting traffic to your GKE environment, we recommend that you monitor how your workloads behave as their loads increase. Finally, you perform a cutover, which happens when you shift all the traffic from your source environment to your GKE environment. For more information about load balancing, see Load balancing at the frontend. Decommission the source environment After the workloads in your GKE environment are serving requests correctly, you decommission your source environment. Before you start decommissioning resources in your source environment, we recommend that you do the following: Back up any data to help you restore resources in your source environment. Notify your users before decommissioning the environment. To decommission your source environment, do the following: Decommission the workloads running in the clusters in your source environment. Delete the clusters in your source environment. Delete the resources associated with these clusters, such as security groups, load balancers, and virtual networks. To avoid leaving orphaned resources, the order in which you decommission the resources in your source environment is important. For example, certain providers require that you decommission the Kubernetes Services that lead to the creation of load balancers before you can decommission the virtual networks that contain those load balancers. Optimize your Google Cloud environment Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. You repeat this sequence until you've achieved your optimization goals. For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization. The following sections integrate the considerations in Migrate to Google Cloud: Optimize your environment. Establish your optimization requirements Optimization requirements help you narrow the scope of the current optimization iteration. For more information about optimization requirements and goals, see Establish your optimization requirements and goals. To establish your optimization requirements for your GKE environment, start by considering the following aspects: Security, privacy, and compliance: help you enhance the security posture of your GKE environment. Reliability: help you improve the availability, scalability, and resilience of your GKE environment. Cost optimization: help you optimize the resource consumption and resulting spending of your GKE environment. Operational efficiency: help you maintain and operate your GKE environment efficiently. Performance optimization: help you optimize the performance of the workloads deployed in your GKE environment. Security, privacy, and compliance Monitor the security posture of your GKE clusters. 
You can use the security posture dashboard to get opinionated, actionable recommendations to help you improve the security posture of your GKE environment. Harden your GKE environment. Understand the GKE security model, and how to harden your GKE clusters. Protect your software supply chain. For security-critical workloads, Google Cloud provides a modular set of products that implement software supply chain security best practices across the software lifecycle. Reliability Improve the reliability of your clusters. To help you design a GKE cluster that is more resilient to unlikely zonal outages, prefer regional clusters over zonal or multi-zonal ones. Note: For more information about region-specific considerations, see Geography and regions. Workload backup and restore. Configure a workload backup and restore workflow with Backup for GKE. Cost optimization For more information about optimizing the cost of your GKE environment, see: Right-size your GKE workloads at scale. Reducing costs by scaling down GKE clusters during off-peak hours. Identify idle GKE clusters. Operational efficiency To help you avoid issues that affect your production environment, we recommend that you: Design your GKE clusters to be fungible. By considering your clusters as fungible and by automating their provisioning and configuration, you can streamline and generalize the operational processes to maintain them and also simplify future migrations and GKE cluster upgrades. For example, if you need to upgrade a fungible GKE cluster to a new GKE version, you can automatically provision and configure a new, upgraded cluster, automatically deploy workloads in the new cluster, and decommission the old, outdated GKE cluster. Monitor metrics of interest. Ensure that all the metrics of interest about your workloads and clusters are properly collected. Also, verify that all the relevant alerts that use these metrics as inputs are in place, and working. For more information about configuring monitoring, logging, and profiling in your GKE environment, see: Observability for GKE GKE Cluster notifications Performance optimization Set up cluster autoscaling and node auto-provisioning. Automatically resize your GKE cluster according to demand by using cluster autoscaling and node auto-provisioning. Automatically scale workloads. GKE supports several scaling mechanisms, such as: Automatically scale workloads based on metrics. Automatically scale workloads by changing the number of Pods in your Kubernetes workloads by configuring Horizontal Pod autoscaling. Automatically scale workloads by adjusting resource requests and limits by configuring Vertical Pod autoscaling. For more information, see About GKE scalability. What's next Read about other AWS to Google Cloud migration journeys. Learn how to compare AWS and Azure services to Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthors: Marco Ferrari | Cloud Solutions ArchitectXiang Shen | Solutions Architect Send feedback \ No newline at end of file diff --git a/Migrate_Amazon_S3_to_Cloud_Storage.txt b/Migrate_Amazon_S3_to_Cloud_Storage.txt new file mode 100644 index 0000000000000000000000000000000000000000..d9ab396f5e6374c6f8e7405f7e9da153647522c6 --- /dev/null +++ b/Migrate_Amazon_S3_to_Cloud_Storage.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-amazon-s3-to-cloud-storage +Date Scraped: 2025-02-23T11:51:56.602Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate from AWS to Google Cloud: Migrate from Amazon S3 to Cloud Storage Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-30 UTC Google Cloud provides tools, products, guidance, and professional services to help you migrate data from Amazon Simple Storage Service (Amazon S3) to Cloud Storage. This document discusses how to design, implement, and validate a plan to migrate from Amazon S3 to Cloud Storage. The document describes a portion of the overall migration process in which you create an inventory of Amazon S3 artifacts and create a plan for how to handle the migration process. The discussion in this document is intended for cloud administrators who want details about how to plan and implement a migration process. It's also intended for decision-makers who are evaluating the opportunity to migrate and who want to explore what migration might look like. This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents: Get started Migrate from Amazon EC2 to Compute Engine Migrate from Amazon S3 to Cloud Storage (this document) Migrate from Amazon EKS to Google Kubernetes Engine Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server Migrate from AWS Lambda to Cloud Run For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. Assess the source environment In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. 
You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. For more information about the assessment phase and these tasks, see Migrate to Google Cloud: Assess and discover your workloads. The following sections are based on information in that document. Build an inventory of your Amazon S3 buckets To scope your migration, you create two inventories: an inventory of your Amazon S3 buckets, and an inventory of the objects that are stored in the buckets. After you build the inventory of your Amazon S3 buckets, refine the inventory by considering the following data points about each Amazon S3 bucket: How you've configured Amazon S3 bucket server-side encryption. Your settings for Amazon S3 bucket identity and access management. The configuration for S3 Block Public Access. Any cost allocation tags for Amazon S3 buckets. The configuration for S3 Object Lock. How you're accessing the Amazon S3 bucket. How you've configured Requester Pays. The settings for Amazon S3 object versioning. The configuration for AWS Backup policies for Amazon S3. Whether you're using Amazon S3 Intelligent-Tiering. How you've configured for Amazon S3 object replication. The Amazon S3 object lifecycle. We also recommend that you gather data about your Amazon S3 buckets that lets you compute aggregate statistics about the objects that each bucket contains. For example, if you gather the total object size, average object size, and object count, it can help you estimate the time and cost that's needed to migrate from an Amazon S3 bucket to a Cloud Storage bucket. To build the inventory of your Amazon S3 buckets and to gather data points about your Amazon S3 buckets, you can implement data-collection mechanisms and processes that rely on AWS tools, such as the following: Amazon S3 monitoring tools S3 Analytics AWS Multi-Account Multi-Region Data Aggregation AWS APIs AWS developer tools The AWS command-line interface To help you avoid issues during the migration, and to help estimate the effort needed for the migration, we recommend that you evaluate how Amazon S3 bucket features map to similar Cloud Storage bucket features. The following table summarizes this mapping. 
Amazon S3 feature Cloud Storage feature Bucket naming rules Bucket name requirements Bucket location Bucket location Server-side encryption Encryption options Identity and access management Identity and Access Management (IAM) Public access Public data access Public access prevention Cost allocation S3 bucket tags Tags and labels S3 Object Lock Retention policies and retention policy lock Methods for accessing an Amazon S3 bucket Uploads and downloads Requester Pays Requester Pays Object versioning Object versioning AWS Backup policies for Amazon S3 Event-driven transfer jobs Intelligent-Tiering Autoclass Object replication Redundancy across regions and turbo replication Event-driven transfer jobs Object lifecycle Object Lifecycle Management As noted earlier, the features listed in the preceding table might look similar when you compare them. However, differences in the design and implementation of the features in the two cloud providers can have significant effects on your migration from Amazon S3 to Cloud Storage. Build an inventory of the objects stored in your Amazon S3 objects After you build the inventory of your Amazon S3 buckets, we recommend that you build an inventory of the objects stored in these buckets by using the Amazon S3 inventory tool. To build the inventory of your Amazon S3 objects, consider the following for each object: Amazon S3 object name Amazon S3 object size Amazon S3 object metadata Amazon S3 object subresources Amazon S3 object versions, and if you need to migrate these versions Amazon S3 object presigned URLs Amazon S3 object transformations Amazon S3 object tags Amazon S3 object storage classes Amazon S3 object archiving We also recommend that you gather data about your Amazon S3 objects to understand how often you and your workloads create, update, and delete Amazon S3 objects. To help you avoid issues during the migration, and to help estimate the effort needed for the migration, we recommend that you evaluate how Amazon S3 object features map to similar Cloud Storage object features. The following table summarizes this mapping. Amazon S3 feature Cloud Storage feature Object naming rules Object name requirements Object metadata Object tags Object metadata Object subresources Object metadata Object presigned URLs Signed URLs Object transformations Pub/Sub notifications for Cloud Storage Cloud Run functions Cloud Run Object storage classes Object archiving Cloud Storage storage classes As noted earlier, the features listed in the preceding table might look similar when you compare them. However, differences in the design and implementation of the features in the two cloud providers can have significant effects on your migration from Amazon S3 to Cloud Storage. Complete the assessment After you build the inventories from your Amazon S3 environment, complete the rest of the activities of the assessment phase as described in Migrate to Google Cloud: Assess and discover your workloads. Plan and build your foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. 
For more information about each of these tasks, see the Migrate to Google Cloud: Plan and build your foundation. Migrate data and workloads from Amazon S3 to Cloud Storage To migrate data from Amazon S3 to Cloud Storage, we recommend that you design a data migration plan by following the guidance in Migrate to Google Cloud: Transfer your large datasets. That document recommends using Storage Transfer Service, a Google Cloud product that lets you migrate data from several sources to Cloud Storage, such as from on-premises environments or from other cloud storage providers. Storage Transfer Service supports several types of data transfer jobs, such as the following: Run-once transfer jobs, which transfer data from Amazon S3 or other supported sources to Cloud Storage on demand. Scheduled transfer jobs, which transfer data from Amazon S3 or other supported sources to Cloud Storage on a schedule. Event-driven transfer jobs, which automatically transfer data when Amazon S3 sends Amazon S3 Event Notifications to Amazon Simple Queue Service (SQS). To implement a data migration plan, you can configure one or more data transfer jobs. For example, to reduce the length of cut-over windows during the migration, you can implement a continuous replication data migration strategy as follows: Configure a run-once transfer job to copy the data from an Amazon S3 bucket to the Cloud Storage bucket. Perform data validation and consistency checks to compare data in the Amazon S3 bucket against the copied data in the Cloud Storage bucket. Set up event-driven transfer jobs to automatically transfer data from the Amazon S3 bucket to the Cloud Storage bucket when the content of the Amazon S3 bucket changes. Stop the workloads and services that have access to the data that's being migrated (that is, to the data that's involved in the previous step). Refactor workloads to use Cloud Storage instead of Amazon S3. You can refactor your workloads by using one of the following approaches, or by using the approaches in sequence: Simple migration from Amazon S3 to Cloud Storage. In a simple migration, you use your existing tools and libraries that generate authenticated REST requests to Amazon S3 to instead generate authenticated requests to Cloud Storage. Fully migrate from Amazon S3 to Cloud Storage. In a full migration, you can use all of the features of Cloud Storage, including multiple projects and OAuth 2.0 for authentication. Wait for the replication to fully synchronize Cloud Storage with Amazon S3. Start your workloads. When you no longer need your Amazon S3 environment as a fallback option, retire it. Storage Transfer Service can preserve certain metadata when you migrate objects from a supported source to Cloud Storage. We recommend that you assess whether Storage Transfer Service can migrate the Amazon S3 metadata that you're interested in. When you design your data migration plan, we recommend that you also assess AWS network egress costs and your Amazon S3 costs. For example, consider the following options to transfer data: Across the public internet. By using an interconnect link. By using Amazon CloudFront. The option that you choose can have an impact on your AWS network egress costs and your Amazon S3 costs. The option can also affect the amount of effort and resources that you need in order to provision and configure the infrastructure. 
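As an illustration of the run-once transfer jobs described earlier in this section, the following sketch creates a Storage Transfer Service job that copies objects from an Amazon S3 bucket to a Cloud Storage bucket by using the google-cloud-storage-transfer client library. The project, bucket, and credential values are placeholders, and using an AWS access key is only one of the authentication options that Storage Transfer Service supports; adapt the sketch to your own security configuration.

```python
# Sketch: create a run-once Storage Transfer Service job that copies objects
# from an Amazon S3 bucket to a Cloud Storage bucket. Requires the
# google-cloud-storage-transfer client library. The project, bucket, and
# credential values are placeholders, and the AWS access key is only one of
# the authentication options that Storage Transfer Service supports.
from datetime import datetime

from google.cloud import storage_transfer


def run_once_s3_to_gcs(project_id, source_bucket, sink_bucket,
                       aws_access_key_id, aws_secret_access_key):
    client = storage_transfer.StorageTransferServiceClient()
    today = datetime.utcnow()
    # Identical start and end dates make this a run-once job.
    one_time_schedule = {"day": today.day, "month": today.month, "year": today.year}

    transfer_job = client.create_transfer_job(
        storage_transfer.CreateTransferJobRequest({
            "transfer_job": {
                "project_id": project_id,
                "description": "Run-once migration from Amazon S3 to Cloud Storage",
                "status": storage_transfer.TransferJob.Status.ENABLED,
                "schedule": {
                    "schedule_start_date": one_time_schedule,
                    "schedule_end_date": one_time_schedule,
                },
                "transfer_spec": {
                    "aws_s3_data_source": {
                        "bucket_name": source_bucket,
                        "aws_access_key": {
                            "access_key_id": aws_access_key_id,
                            "secret_access_key": aws_secret_access_key,
                        },
                    },
                    "gcs_data_sink": {"bucket_name": sink_bucket},
                },
            }
        })
    )
    print(f"Created transfer job: {transfer_job.name}")
    return transfer_job
```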
For more information about costs, see the following: Understanding data transfer charges in the AWS documentation Amazon S3 pricing When you migrate data from Amazon S3 to Cloud Storage, we recommend that you use VPC Service Controls to build a perimeter that explicitly denies communication between Google Cloud services unless the services are authorized. Optimize your Google Cloud environment Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. You repeat this sequence until you've achieved your optimization goals. For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization. What's next Read about other AWS to Google Cloud migration journeys. Learn how to compare AWS and Azure services to Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Migrate_Oracle_workloads_to_Google_Cloud.txt b/Migrate_Oracle_workloads_to_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..9326a14b47966634547f469843a71560bcde6f1d --- /dev/null +++ b/Migrate_Oracle_workloads_to_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/oracle +Date Scraped: 2025-02-23T11:59:23.762Z + +Content: +Oracle Database@Google Cloud is now generally available. Read the press release.Oracle on Google CloudMigrate and supercharge Oracle databases and applications in the cloud. You can now easily accelerate your cloud journey, unlock the power of your data, and embrace AI with confidence.Contact usIntroduction videoBenefits to help you transform your businessInnovate faster with the best of Google and the best of OracleGoogle Cloud's strengths in data and AI, complement Oracle's database expertise. Data is your competitive advantage, and we’re making it easy to leverage Google Cloud's industry-leading data and AI capabilities with your Oracle data. Imagine using Vertex AI, Google Cloud’s AI and ML platform, Gemini foundation models, or Oracle Database Vector Search to bring enterprise truth to your Oracle data or taking advantage of BigQuery and Looker to uncover insights that drive innovation and efficiency.Migrate and supercharge Oracle databases and applications in the cloudAccelerate migration and simplify management of Oracle databases and mission-critical database applications with Oracle Database@Google Cloud. Take advantage of Google Cloud's differentiated infrastructure on Google Kubernetes Engine and Compute Engine which offer accelerated application performance, platform reliability, and high levels of security. 
Experience the same high levels of Oracle database performance, security, and availability as in Oracle Cloud Infrastructure, within Google Cloud.Simplify operations with an integrated multicloud experienceTake advantage of simplified purchasing and contracting experience using Google Cloud Marketplace to purchase Oracle database services using your existing Google Cloud commitments, while leveraging your existing Oracle license benefits. Native integration with Google Cloud console, APIs, monitoring, and operations plus a unified customer experience across purchasing, billing, and collaborative support, simplifies your operations and frees you to focus on innovation and running your business.Google Cloud and Oracle are your transformation partnersGoogle Cloud and Oracle have come together to deliver a seamless Oracle experience on Google Cloud that simplifies cloud adoption, providing you choice and openness for your deployments. We're providing you with solutions that are simple to procure and deploy, highly secure, and fully integrated with your existing ecosystem.Read the press releaseDun & Bradstreet is a global leader in business decisioning data and analytics. With the unparalleled quality of D&B data, we can now seamlessly integrate Oracle Database’s performance, reliability, and scalability with Google Cloud’s powerful analytics and AI tools. This synergy allows us to process and analyze massive datasets with unprecedented speed and efficiency, extract deeper insights, and deliver more value to our customers.Adam Fayne, VP, Enterprise Engineering, Dun & BradstreetChoose the best solution for your Oracle workloadsOracle Database@Google CloudOracle Database@Google Cloud is now generally available in four Google Cloud regions across the United States and Europe - Ashburn (us-east4), Salt Lake City (us-west3), Frankfurt (europe-west3), and London (europe-west2), with many more regions coming soon. With Oracle Database@Google Cloud, we are deploying the latest database services in a Google Cloud datacenter running on Oracle Cloud Infrastructure (OCI) hardware. You have access to the most advanced managed service for Oracle database, Autonomous database—and can also migrate your mission critical databases to the most advanced platform for Oracle Database, and Exadata Database Cloud Service.Read more in this blogOracle Database@Google Cloud reference architectureOCI and Google Cloud Cross-Cloud InterconnectOCI and Google Cross-Cloud InterconnectDeploy workloads across both Oracle Cloud Infrastructure (OCI) and Google Cloud regions with no cross-cloud data transfer charges. You can begin onboarding in eleven Cross-Cloud Interconnect regions – Ashburn (us-east4), Montreal (northamerica-northeast1), Frankfurt (europe-west3), Madrid (europe-southwest1), London (europe-west2), Sydney (australia-southeast1), Melbourne (australia-southeast2), Mumbai (asia-south1), Tokyo (asia-northeast1), Singapore (asia-southeast1), and Sao Paulo (southamerica-east1). We'll be expanding to more regions in the coming months and providing a low-latency, high-throughput, private connection between Google Cloud and Oracle with seamless interoperability.Learn about Cross-Cloud InterconnectOracle Database on Google Compute Engine or Kubernetes EngineEasily migrate and run your Oracle database and applications on high-performance, reliable Google Cloud infrastructure across Compute Engine or Kubernetes Engine. You can bring your own support playbooks, runbooks and software licenses. 
Learn how a highly available enterprise application can be hosted on Compute Engine virtual machines with low-latency connectivity to Oracle Cloud Infrastructure (OCI) Exadata databases that run in Google Cloud.Reference architecture for Oracle Database on Compute EngineOracle Database on Compute Engine reference architectureBare Metal Solution for OracleBare Metal SolutionRun Oracle databases the same way you do it, on-premises on a fully managed end-to-end infrastructure including compute, storage, and networking. This solution is deployed by global customers for their mission critical workloads. Billing, support, and SLA are provided by Google Cloud.Visit the Bare Metal Solution pageOracle and Google Cloud have extended their multicloud strategy with this partnership, and customers can now take advantage of the automation in Oracle Autonomous Database and the performance of Exadata on OCI running in Google datacenters. As a result, customers can combine data from Oracle databases with Google Cloud services like Gemini foundation models and the Vertex AI development platform to develop and run a new generation of cloud native applications. Oracle and Google Cloud’s mutual customers are the ultimate winners in this multicloud strategy, as they benefit from the simplicity, security, and low latency of a unified operating environment.Carl Olofson, Research Vice President, Data Management Software, IDCTake the next stepStart your next project, explore interactive tutorials, and manage your account.Contact SalesGet tips and best practicesSee tutorialsWork with a trusted partnerFind a partnerContinue browsingSee all products \ No newline at end of file diff --git a/Migrate_consumer_accounts.txt b/Migrate_consumer_accounts.txt new file mode 100644 index 0000000000000000000000000000000000000000..91113d42bab2d10b5ab9589c882ec5047e7dcb0e --- /dev/null +++ b/Migrate_consumer_accounts.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/migrating-consumer-accounts +Date Scraped: 2025-02-23T11:55:52.083Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate consumer accounts Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document describes how you can migrate consumer accounts to managed user accounts controlled by Cloud Identity or Google Workspace. If your organization hasn't used Cloud Identity or Google Workspace before, it's possible that some of your employees are using consumer accounts to access Google services. Some of these consumer accounts might use a corporate email address such as alice@example.com as their primary email address. Consumer accounts are owned and managed by the individuals who created them. Your organization therefore has no control over the configuration, security, and lifecycle of these accounts. Before you begin To migrate consumer accounts to Cloud Identity or Google Workspace, you must meet the following prerequisites: You have identified a suitable onboarding plan and have completed all prerequisites for consolidating your existing user accounts. You have created a Cloud Identity or Google Workspace account and have prepared the default organization unit (OU) to grant appropriate access to migrated user accounts. Each consumer account that you plan to migrate must meet the following criteria: It can't be a Gmail account. 
It must use a primary email address that corresponds to the primary or a secondary domain of your Cloud Identity or Google Workspace account. In the context of a consumer account migration, alternate email addresses and alias domains are ignored. Its owner must be able to receive email on the account's primary email address. Converting a consumer account to a managed account implies that the user who registered the consumer account passes control of the account and its associated data to your organization. Your organization might require employees to sign and adhere to an acceptable-email-use policy that disallows the use of corporate email addresses for private purposes. In this case, you can safely assume that the consumer account has been used only for business purposes. However, if your organization doesn't have such a policy or the policy allows certain personal use, the consumer account might be associated with a mixture of corporate and personal data. Given this uncertainty, you can't force the migration of a consumer account to a managed account—a migration therefore always requires the user's consent. Process Migrating consumer accounts to managed accounts is a multi-step process that you must plan carefully. The following sections walk you through the process. Process overview The goal of migrating is to convert a consumer account into a managed user account while maintaining both the identity of the account as reflected by its email address, and any data associated with the account. During a migration like this, an account can be in one of four states, as the following state machine diagram shows. When you add and verify a domain in Cloud Identity or Google Workspace, any consumer account that uses an email address with this domain becomes an unmanaged account. For the user, this has no impact; they can sign in and access their data as normal. Adding a domain in Google Workspace or Cloud Identity affects only users whose email address matches this exact domain. For example, if you add example.com, the account johndoe@example.com is identified as an unmanaged account, while johndoe@corp.example.com is not unless you also add corp.example.com to the Cloud Identity or Google Workspace account. The existence of unmanaged accounts is surfaced to you as the Cloud Identity or Google Workspace administrator. You can then ask the user to transfer their account into a managed account. In the preceding diagram, if the user johndoe consents to a transfer, the unmanaged account is converted to a managed account. The identity remains the same, but now Cloud Identity or Google Workspace controls the account, including all of its data. If the user johndoe doesn't consent to a data transfer, but you create an account in Cloud Identity or Google Workspace using the same email address, the result is a conflicting account. A conflicting account is actually two accounts—one consumer, one managed—that are associated with the same identity, as in the following diagram. A user who signs in by using a conflicting account sees a screen prompting them to select either the managed account or the consumer account to resume the sign-on process. Important: In the presence of a conflicting account, only the consumer account still has access to the original data, while only the managed account is under the control of your organization. The result reflects that the user never consented to transferring the data—however, it likely doesn't reflect the intent of your migration. 
To avoid ending up with conflicting accounts, it's helpful to understand account states in more detail. Process in detail The following state machine diagram illustrates account states in more detail. Rectangular boxes on the left denote actions a Cloud Identity or Google Workspace administrator can take; rectangular boxes on the right denote the activities only the owner of a consumer account can perform. Find unmanaged user accounts When signing up for Cloud Identity or Google Workspace, you must provide a domain name, which you're then asked to verify ownership for. When you've completed the sign-up process, you can add and verify secondary domains. By verifying a domain, you automatically initiate a search for consumer accounts that use this domain in their email address. Within about 12 hours, these accounts will be surfaced as unmanaged user accounts in the transfer tool for unmanaged users. The search for consumer accounts considers the primary domain registered in Cloud Identity or Google Workspace as well as any secondary domain that has been verified. The search tries to match these domains to the primary email address of any consumer accounts. In contrast, alias domains registered in Cloud Identity or Google Workspace as well as alternate email addresses of consumer accounts aren't considered. Users of affected consumer accounts aren't made aware that you verified a domain or that you identified their account as an unmanaged account. They can continue to use their accounts as normal. Initiate a transfer In addition to showing you all unmanaged accounts, the transfer tool for unmanaged users lets you initiate an account transfer by sending an account transfer request. Initially, an account is listed as Not yet invited, indicating that no transfer request has been sent. If you select a user and send an account transfer request, the user receives an email similar to the following. Meanwhile, the account switches to being listed as Invited. Accept or decline a transfer Having received the transfer request, an affected user might simply ignore it and continue to use the account as normal. In this case, you can send another request, repeating the procedure. Alternatively, the user might follow up on the email, but decline the transfer. This causes the user to be listed as Declined in the transfer tool. If you suspect that declining was unintentional, you can repeat the procedure by sending another request. In both cases, the functionality of the unmanaged account is unimpaired—users are able to sign in and access their data. However, the process of migrating account data to Google Workspace or Cloud Identity is impeded as long as a user keeps ignoring or declining a transfer request. To prevent this from happening, make sure that you communicate your migration plan to employees before you send the first transfer requests. Also make sure that employees are fully aware of the reasons and consequences of accepting or declining a transfer request. Rather than declining the request, a user might also change the email address of the account. If the user changes the primary email address to use a domain that hasn't been verified by any Cloud Identity or Google Workspace account, this causes the account to become a consumer account again. Although the transfer tool might still temporarily list the user as an unmanaged account, you can no longer initiate an account transfer for such a renamed account. 
Create a conflicting account If at any point you create a user account in Cloud Identity or Google Workspace with the same email address as an unmanaged user account, the Admin Console warns you about an impending conflict: If you ignore this warning and create a user account anyway, this new account, together with the unmanaged account, becomes a conflicting account. Creating a conflicting account is useful if you want to evict an unwanted consumer account, but it's better to avoid it if your goal is to migrate a consumer account to Cloud Identity or Google Workspace. Creating a conflicting account can happen unintentionally. After signing up for Cloud Identity or Google Workspace, you might decide to set up single sign-on with an external identity provider (IdP) such as Azure Active Directory (AD) or Active Directory. When configured, the external IdP might automatically create accounts in Cloud Identity or Google Workspace for all users that you enabled single sign-on for, inadvertently creating conflicting accounts. Use a conflicting account Each time the user signs in using a conflicting account, they see a ballot screen like the following. When they select the first option, the sign-in process continues using the managed part of the conflicting account. They are asked to provide the password that you set for the managed account, or if you configured single sign-on, they are redirected to an external IdP to authenticate. After they are authenticated, they can use the account like any other managed account—however, because none of the data has been transferred from the original consumer account, it's effectively a new account. When choosing the second option in the ballot screen, the user is prompted to change the email address of the consumer part of the conflicting account. By changing the email address, the user resolves the conflict by ensuring that the managed account and consumer account have different identities again. The result remains that they have one consumer account that has all their original data, and one managed account that doesn't have access to the original data. The user can postpone renaming the account by clicking Do this later. This action turns the account into an Evicted state. In this state, the user sees the same ballot screen every time they sign in, and the account is assigned a temporary gtempaccount.com email address until renamed. Another way to resolve the conflict is to delete the managed account in Cloud Identity or Google Workspace, or if they use single sign-on, in the external IdP. This causes the ballot screen to not be shown the next time they sign in using the account, but the user still needs to change the email address of the account. If the user changes the email address to a private email address, the account remains a consumer account. If the user decides to change the email address back to the original corporate email address, the account becomes an unmanaged account again. Complete a transfer If a user accepts the transfer, the account is surfaced in Cloud Identity or Google Workspace. The account is now considered a managed account, and all data associated with the original consumer account is transferred to the managed account. If Cloud Identity or Google Workspace is not set up to use an external IdP for single sign-on, the user can sign in using their original password and continue to use the account as normal. If you do use single sign-on, the user won't be able to sign in with their existing password anymore. 
Instead, they're sent to the sign-in page of your external IdP when attempting to sign in. For this to succeed, the external IdP must recognize the user and permit single sign-on. Otherwise, the account becomes locked out. Best practices If you intend to migrate existing consumer accounts to either Cloud Identity or Google Workspace, plan and coordinate the migration steps in advance. Good planning avoids disrupting your users and minimizes the risk of inadvertently creating conflicting accounts. Consider the following best practices when planning a consumer account migration: If you use an external IdP, make sure that you configure user account provisioning and single sign-on in a way that does not impede consumer account migration. Inform affected users ahead of a migration. Migrating consumer accounts to managed accounts requires users' consent and might also affect them personally if they have used accounts for private purposes. Therefore, it's crucial that you inform affected users about your migration plans. Convey the following information to users before starting the migration: Rationale and importance of the account migration Impact on personal data associated with existing accounts Timeframe in which users can expect to receive a transfer request. Time window in which you expect users to either approve or decline a transfer Upcoming changes to the sign-in process after migration (only applies when using federation) Instructions on how to transfer ownership of private Google Docs files to a personal account If you announce the migration by email, some users might assume it's a phishing attempt. To prevent this from happening, consider announcing the migration through a different medium as well. For an example of an announcement email, see Advance communication for user account migration. Initiate transfers in batches. Begin with a small batch of approximately 10 users, and grow your batch size as you go. Allow sufficient time for affected users to react to transfer requests. Keep in mind that some employees might be on vacation or parental leave and won't be able to react quickly. Make sure that by consenting to a transfer, users don't lose access to data or Google services that they need. What's next Review how you can assess existing user accounts. Learn how to evict unwanted consumer accounts. Send feedback \ No newline at end of file diff --git a/Migrate_from_AWS_Lambda_to_Cloud_Run.txt b/Migrate_from_AWS_Lambda_to_Cloud_Run.txt new file mode 100644 index 0000000000000000000000000000000000000000..8c920ecaa5cec2399bda6519fcbdb72cbd967891 --- /dev/null +++ b/Migrate_from_AWS_Lambda_to_Cloud_Run.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-aws-lambda-to-cloudrun +Date Scraped: 2025-02-23T11:52:09.694Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate from AWS to Google Cloud: Migrate from AWS Lambda to Cloud Run Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-21 UTC Google Cloud provides tools, products, guidance, and professional services to assist in migrating serverless workloads from Amazon Web Services (AWS) Lambda to Google Cloud. Although Google Cloud provides several services on which you can develop and deploy serverless applications, this document focuses on migrating to Cloud Run, a serverless runtime environment. 
Both AWS Lambda and Cloud Run share similarities such as automatic resource provisioning, scaling by the cloud provider, and a pay-per-use pricing model. This document helps you to design, implement, and validate a plan to migrate serverless workloads from AWS Lambda to Cloud Run. Additionally, it offers guidance for those evaluating the potential benefits and process of such a migration. This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents: Get started Migrate from Amazon EC2 to Compute Engine Migrate from Amazon S3 to Cloud Storage Migrate from Amazon EKS to Google Kubernetes Engine Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server Migrate from AWS Lambda to Cloud Run (this document) For more information about picking the right serverless runtime environment for your business logic, see Select a managed container runtime environment. For a comprehensive mapping between AWS and Google Cloud services, see compare AWS and Azure services to Google Cloud services. For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. Migrating serverless workloads often extends beyond just moving functions from one cloud provider to another. Because cloud-based applications rely on an interconnected web of services, migrating from AWS to Google Cloud might require replacing dependent AWS services with Google Cloud services. For example, consider a scenario in which your Lambda function interacts with Amazon SQS and Amazon SNS. To migrate this function, you will likely need to adopt Pub/Sub and Cloud Tasks to achieve similar functionality. Migration also presents a valuable chance for you to thoroughly review your serverless application's architecture and design decisions. Through this review, you might discover opportunities to do the following: Optimize with Google Cloud built-in features: Explore whether Google Cloud services offer unique advantages or better align with your application's requirements. Simplify your architecture: Assess whether streamlining is possible by consolidating functionality or using services differently within Google Cloud. Improve cost-efficiency: Evaluate the potential cost differences of running your refactored application on the infrastructure that is provided on Google Cloud. Improve code efficiency: Refactor your code alongside the migration process. 
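To make the Amazon SNS to Pub/Sub replacement mentioned earlier in this section more concrete, the following minimal Python sketch contrasts a notification publish call before and after the migration. The topic ARN, project ID, topic name, and the notify_* function names are placeholders for illustration, and the sketch assumes that the boto3 and google-cloud-pubsub client libraries are installed; treat it as a starting point rather than a drop-in replacement.

    # Before (AWS Lambda): publish a notification to an Amazon SNS topic.
    import boto3

    def notify_sns(message: str) -> None:
        sns = boto3.client("sns")
        sns.publish(
            TopicArn="arn:aws:sns:us-east-1:123456789012:orders",  # placeholder ARN
            Message=message,
        )

    # After (Cloud Run): publish the same notification to a Pub/Sub topic.
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "orders")  # placeholder project and topic

    def notify_pubsub(message: str) -> None:
        # Pub/Sub payloads are bytes; keyword arguments become message attributes.
        future = publisher.publish(topic_path, message.encode("utf-8"), source="orders-service")
        future.result(timeout=30)  # block until the Pub/Sub service confirms the publish

A similar substitution applies on the queue side, where Cloud Tasks or a Pub/Sub pull subscription can stand in for Amazon SQS, depending on whether you need per-task dispatch control or fan-out messaging.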
Plan your migration strategically. Don't view your migration as a rehost (lift and shift) exercise. Use your migration as a chance to enhance the overall design and code quality of your serverless application. Assess the source environment In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. For more information about the assessment phase and these tasks, see Migrate to Google Cloud: Assess and discover your workloads. The following sections are based on information in that document. Build an inventory of your AWS Lambda workloads To define the scope of your migration, you create an inventory and collect information about your AWS Lambda workloads. To build the inventory of your AWS Lambda workloads, we recommend that you implement a data-collection mechanism based on AWS APIs, AWS developer tools, and the AWS command-line interface. After you build your inventory, we recommend that you gather information about each AWS Lambda workload in the inventory. For each workload, focus on aspects that help you anticipate potential friction. Also, analyze that workload to understand how you might need to modify the workload and its dependencies before you migrate to Cloud Run. We recommend that you start by collecting data about the following aspects of each AWS Lambda workload: The use case and design The source code repository The deployment artifacts The invocation, triggers, and outputs The runtime and execution environments The workload configuration The access controls and permissions The compliance and regulatory requirements The deployment and operational processes Use case and design Gathering information about the use case and design of the workloads helps in identifying a suitable migration strategy. This information also helps you to understand whether you need to modify your workloads and their dependencies before the migration. For each workload, we recommend that you do the following: Gain insights into the specific use case that the workload serves, and identify any dependencies with other systems, resources, or processes. Analyze the workload's design and architecture. Assess the workload's latency requirements. Source code repository Inventorying the source code of your AWS Lambda functions helps if you need to refactor your AWS Lambda workloads for compatibility with Cloud Run. Creating this inventory involves tracking the codebase, which is typically stored in version control systems like Git or in development platforms such as GitHub or GitLab. 
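As one possible starting point for the data-collection mechanism described above, the following Python sketch uses the AWS SDK (boto3) to capture a few of the inventory attributes discussed in this section, such as runtime, memory, timeout, environment variables, and event source mappings. The region, the output file name, and the selected fields are assumptions for illustration; extend the script with the additional attributes that matter for your workloads.

    # Minimal inventory sketch, assuming boto3 and AWS credentials are configured locally.
    import csv
    import boto3

    lambda_client = boto3.client("lambda", region_name="us-east-1")  # placeholder region

    def collect_lambda_inventory(output_path: str = "lambda_inventory.csv") -> None:
        rows = []
        paginator = lambda_client.get_paginator("list_functions")
        for page in paginator.paginate():
            for fn in page["Functions"]:
                name = fn["FunctionName"]
                # Event source mappings show which streams or queues trigger the function.
                mappings = lambda_client.list_event_source_mappings(FunctionName=name)
                rows.append({
                    "name": name,
                    "runtime": fn.get("Runtime", "container-image"),
                    "memory_mb": fn["MemorySize"],
                    "timeout_s": fn["Timeout"],
                    "architectures": ",".join(fn.get("Architectures", [])),
                    "env_vars": ",".join(fn.get("Environment", {}).get("Variables", {}).keys()),
                    "event_sources": ",".join(
                        m.get("EventSourceArn", "") for m in mappings["EventSourceMappings"]
                    ),
                })
        if not rows:
            print("No AWS Lambda functions found in this region.")
            return
        with open(output_path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)

    if __name__ == "__main__":
        collect_lambda_inventory()

Note that event source mappings capture stream and queue triggers such as Amazon SQS, Kinesis, and DynamoDB Streams; HTTP-based triggers such as function URLs or API Gateway routes need to be inventoried separately.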
The inventory of your source code is essential for your DevOps processes, such as continuous integration and continuous delivery (CI/CD) pipelines, because these processes will also need to be updated when you migrate to Cloud Run. Deployment artifacts Knowing what deployment artifacts are needed by the workload is another component in helping you understand whether you might need to refactor your AWS Lambda workloads. To identify which deployment artifacts the workload needs, gather the following information: The type of deployment package to deploy the workload. Any AWS Lambda layer that contains additional code, such as libraries and other dependencies. Any AWS Lambda extensions that the workload depends on. The qualifiers that you configured to specify versions and aliases. The deployed workload version. Invocation, triggers, and outputs AWS Lambda supports several invocation mechanisms, such as triggers, and different invocation models, such as synchronous invocation and asynchronous invocation. For each AWS Lambda workload, we recommend that you gather the following information that is related to triggers and invocations: The triggers and event source mappings that invoke the workload. Whether the workload supports synchronous and asynchronous invocations. The workload URLs and HTTP(S) endpoints. Your AWS Lambda workloads can interact with other resources and systems. You need to know what resources consume the outputs of your AWS Lambda workloads and how those resources consume those outputs. This knowledge helps you to determine whether you need to modify anything that might depend on those outputs, such as other systems or workloads. For each AWS Lambda workload, we recommend that you gather the following information about other resources and systems: The destination resources that the workload might send events to. The destinations that receive information records for asynchronous invocations. The format for the events that the workload processes. How your AWS Lambda workload and its extensions interact with AWS Lambda APIs, or other AWS APIs. In order to function, your AWS Lambda workloads might store persistent data and interact with other AWS services. For each AWS Lambda workload, we recommend that you gather the following information about data and other services: Whether the workload accesses virtual private clouds (VPCs) or other private networks. How the workload stores persistent data, such as by using ephemeral data storage and Amazon Elastic File System (EFS). Runtime and execution environments AWS Lambda supports several execution environments for your workloads. To correctly map AWS Lambda execution environments to Cloud Run execution environments, we recommend that you assess the following for each AWS Lambda workload: The execution environment of the workload. The instruction set architecture of the computer processor on which the workload runs. If your AWS Lambda workloads run in language-specific runtime environments, consider the following for each AWS Lambda workload: The type, version, and unique identifier of the language-specific runtime environment. Any modifications that you applied to the runtime environment. Workload configuration In order to configure your workloads as you migrate them from AWS Lambda to Cloud Run, we recommend that you assess how you configured each AWS Lambda workload. For each AWS Lambda workload, gather information about the following concurrency and scalability settings: The concurrency controls. The scalability settings. 
The configuration of the instances of the workload, in terms of the amount of memory available and the maximum execution time allowed. Whether the workload is using AWS Lambda SnapStart, reserved concurrency, or provisioned concurrency to reduce latency. The environment variables that you configured, as well as the ones that AWS Lambda configures and the workload depends on. The tags and attribute-based access control. The state machine to handle exceptional conditions. The base images and configuration files (such as the Dockerfile) for deployment packages that use container images. Access controls and permissions As part of your assessment, we recommend that you assess the security requirements of your AWS Lambda workloads and their configuration in terms of access controls and management. This information is critical if you need to implement similar controls in your Google Cloud environment. For each workload, consider the following: The execution role and permissions. The identity and access management configuration that the workload and its layers use to access other resources. The identity and access management configuration that other accounts and services use to access the workload. The Governance controls. Compliance and regulatory requirements For each AWS Lambda workload, make sure that you understand its compliance and regulatory requirements by doing the following: Assess any compliance and regulatory requirements that the workload needs to meet. Determine whether the workload is currently meeting these requirements. Determine whether there are any future requirements that will need to be met. Compliance and regulatory requirements might be independent from the cloud provider that you're using, and these requirements might have an impact on the migration as well. For example, you might need to ensure that data and network traffic stays within the boundaries of certain geographies, such as the European Union (EU). Assess your deployment and operational processes It's important to have a clear understanding of how your deployment and operational processes work. These processes are a fundamental part of the practices that prepare and maintain your production environment and the workloads that run there. Your deployment and operational processes might build the artifacts that your workloads need to function. Therefore, you should gather information about each artifact type. For example, an artifact can be an operating system package, an application deployment package, an operating system image, a container image, or something else. In addition to the artifact type, consider how you complete the following tasks: Develop your workloads. Assess the processes that development teams have in place to build your workloads. For example, how are your development teams designing, coding, and testing your workloads? Generate the artifacts that you deploy in your source environment. To deploy your workloads in your source environment, you might be generating deployable artifacts, such as container images or operating system images, or you might be customizing existing artifacts, such as third-party operating system images by installing and configuring software. Gathering information about how you're generating these artifacts helps you to ensure that the generated artifacts are suitable for deployment in Google Cloud. Store the artifacts. 
If you produce artifacts that you store in an artifact registry in your source environment, you need to make the artifacts available in your Google Cloud environment. You can do so by employing strategies like the following: Establish a communication channel between the environments: Make the artifacts in your source environment reachable from the target Google Cloud environment. Refactor the artifact build process: Complete a minor refactor of your source environment so that you can store artifacts in both the source environment and the target environment. This approach supports your migration by building infrastructure like an artifact repository before you have to implement artifact build processes in the target Google Cloud environment. You can implement this approach directly, or you can build on the previous approach of establishing a communication channel first. Having artifacts available in both the source and target environments lets you focus on the migration without having to implement artifact build processes in the target Google Cloud environment as part of the migration. Scan and sign code. As part of your artifact build processes, you might be using code scanning to help you guard against common vulnerabilities and unintended network exposure, and code signing to help you ensure that only trusted code runs in your environments. Deploy artifacts in your source environment. After you generate deployable artifacts, you might be deploying them in your source environment. We recommend that you assess each deployment process. The assessment helps ensure that your deployment processes are compatible with Google Cloud. It also helps you to understand the effort that will be necessary to eventually refactor the processes. For example, if your deployment processes work with your source environment only, you might need to refactor them to target your Google Cloud environment. Inject runtime configuration. You might be injecting runtime configuration for specific clusters, runtime environments, or workload deployments. The configuration might initialize environment variables and other configuration values such as secrets, credentials, and keys. To help ensure that your runtime configuration injection processes work on Google Cloud, we recommend that you assess how you're configuring the workloads that run in your source environment. Logging, monitoring, and profiling. Assess the logging, monitoring, and profiling processes that you have in place to monitor the health of your source environment, the metrics of interest, and how you're consuming data provided by these processes. Authentication. Assess how you're authenticating against your source environment. Provision and configure your resources. To prepare your source environment, you might have designed and implemented processes that provision and configure resources. For example, you might be using Terraform along with configuration management tools to provision and configure resources in your source environment. Complete the assessment After you build the inventories from your AWS Lambda environment, complete the rest of the activities of the assessment phase as described in Migrate to Google Cloud: Assess and discover your workloads. Plan and build your foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. 
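Before moving on to the plan and build phase, the following Python sketch illustrates the approach of storing artifacts in both environments that is described in the assessment of your deployment processes above: it mirrors a container image from Amazon ECR to Artifact Registry so that the same artifact is available in the source and the target environment. The image names, project, repository, and regions are placeholders, and the sketch assumes that Docker is installed and that you have already authenticated to both registries (for example, with aws ecr get-login-password and gcloud auth configure-docker).

    # Sketch: mirror a container image from Amazon ECR to Artifact Registry so that
    # deployable artifacts exist in both the source and target environments.
    import subprocess

    ECR_IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-service:v1.2.0"    # placeholder
    AR_IMAGE = "us-central1-docker.pkg.dev/my-project/containers/orders-service:v1.2.0"  # placeholder

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def mirror_image() -> None:
        run(["docker", "pull", ECR_IMAGE])           # fetch the artifact from the source registry
        run(["docker", "tag", ECR_IMAGE, AR_IMAGE])  # retag it for the Artifact Registry repository
        run(["docker", "push", AR_IMAGE])            # store it in the target environment

    if __name__ == "__main__":
        mirror_image()

Running a small wrapper like this from your existing build pipeline is one way to keep both registries populated while the migration is in progress.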
The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation. Migrate your AWS Lambda workloads To migrate your workloads from AWS Lambda to Cloud Run, do the following: Design, provision, and configure your Cloud Run environment. If needed, refactor your AWS Lambda workloads to make them compatible with Cloud Run. Refactor your deployment and operational processes to deploy and observe your workloads on Cloud Run. Migrate the data that is needed by your AWS Lambda workloads. Validate the migration results in terms of functionality, performance, and cost. To help you avoid issues during the migration, and to help estimate the effort that is needed for the migration, we recommend that you evaluate how AWS Lambda features compare to similar Cloud Run features. AWS Lambda and Cloud Run features might look similar when you compare them. However, differences in the design and implementation of the features in the two cloud providers can have significant effects on your migration from AWS Lambda to Cloud Run. These differences can influence both your design and refactoring decisions, as highlighted in the following sections. Design, provision, and configure your Cloud Run environment The first step of the migrate phase is to design your Cloud Run environment so that it can support the workloads that you are migrating from AWS Lambda. In order to correctly design your Cloud Run environment, use the data that you gathered during the assessment phase about each AWS Lambda workload. This data helps you to do the following: Choose the right Cloud Run resources to deploy your workload. Design your Cloud Run resources configuration. Provision and configure the Cloud Run resources. Choose the right Cloud Run resources For each AWS Lambda workload to migrate, choose the right Cloud Run resource to deploy your workloads. Cloud Run supports the following main resources: Cloud Run services: a resource that hosts a containerized runtime environment, exposes a unique endpoint, and automatically scales the underlying infrastructure according to demand. Cloud Run jobs: a resource that executes one or more containers to completion. The following table summarizes how AWS Lambda resources map to these main Cloud Run resources: AWS Lambda resource Cloud Run resource AWS Lambda function that gets triggered by an event such as those used for websites and web applications, APIs and microservices, streaming data processing, and event-driven architectures. Cloud Run service that you can invoke with triggers. AWS Lambda function that has been scheduled to run such as those for background tasks and batch jobs. Cloud Run job that runs to completion. Beyond services and jobs, Cloud Run provides additional resources that extend these main resources. For more information about all of the available Cloud Run resources, see Cloud Run resource model. Design your Cloud Run resources configuration Before you provision and configure your Cloud Run resources, you design their configuration. Certain AWS Lambda configuration options, such as resource limits and request timeouts, are comparable to similar Cloud Run configuration options. 
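For the mapping of a scheduled AWS Lambda function to a Cloud Run job shown in the preceding table, the following minimal Python sketch suggests what a batch workload might look like after refactoring. Cloud Run jobs run a container to completion and expose the task index and task count through environment variables, which the sketch uses to shard a placeholder input set; the input data and the processing logic are assumptions for illustration.

    # Sketch of a batch workload refactored from a scheduled AWS Lambda function
    # into a Cloud Run job.
    import os
    import sys

    def main() -> int:
        task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", "0"))
        task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", "1"))

        items = [f"record-{i}" for i in range(100)]  # placeholder input set

        # Each task processes its own slice so that parallel tasks don't overlap.
        for item in items[task_index::task_count]:
            print(f"task {task_index}/{task_count} processing {item}")

        # A non-zero exit code marks the task as failed and triggers the configured retries.
        return 0

    if __name__ == "__main__":
        sys.exit(main())

You would typically build a script like this into a container image, create a Cloud Run job from that image, and then execute it on demand or on a schedule with Cloud Scheduler, mirroring the scheduled Lambda invocation that it replaces.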
The following sections describe the configuration options that are available in Cloud Run for service triggers and job execution, resource configuration, and security. These sections also highlight AWS Lambda configuration options that are comparable to those in Cloud Run. Cloud Run service triggers and job execution Service triggers and job execution are the main design decisions that you need to consider when you migrate your AWS Lambda workloads. Cloud Run provides a variety of options to trigger and run the event-based workloads that are used in AWS Lambda. In addition, Cloud Run provides options that you can use for streaming workloads and scheduled jobs. When you migrate your workloads, it is often useful to first understand what triggers and mechanisms are available in Cloud Run. This information helps with your understanding of how Cloud Run works. Then, you can use this understanding to determine which Cloud Run features are comparable to AWS Lambda features and which Cloud Run features could be used when refactoring those workloads. To learn more about the service triggers that Cloud Run provides, use the following resources: HTTPS invocation: send HTTPS requests to trigger Cloud Run services. HTTP/2 invocation: send HTTP/2 requests to trigger Cloud Run services. WebSockets: connect clients to a WebSockets server running on Cloud Run. gRPC connections: connect to Cloud Run services using gRPC. Asynchronous invocation: enqueue tasks using Cloud Tasks to be asynchronously processed by Cloud Run services. Scheduled invocation: use Cloud Scheduler to invoke Cloud Run services on a schedule. Event-based invocation: create Eventarc triggers to invoke Cloud Run services on specific events in CloudEvents format. Integration with Pub/Sub: invoke Cloud Run services from Pub/Sub push subscriptions. Integration with Workflows: invoke a Cloud Run service from a workflow. To learn more about the job execution mechanisms that Cloud Run provides, use the following resources: On-demand execution: create job executions that run to completion. Scheduled execution: execute Cloud Run jobs on a schedule. Integration with Workflows: execute Cloud Run jobs from a workflow. To help you understand which Cloud Run invocation or execution mechanisms are comparable to AWS Lambda invocation mechanisms, use the following table. For each Cloud Run resource that you need to provision, make sure to choose the right invocation or execution mechanism. AWS Lambda feature Cloud Run feature HTTPS trigger (function URLs) HTTPS invocation HTTP/2 trigger (partially supported using an external API gateway) HTTP/2 invocation (supported natively) WebSockets (supported using external API gateway) WebSockets (supported natively) N/A (gRPC connections not supported) gRPC connections Asynchronous invocation Cloud Tasks integration Scheduled invocation Cloud Scheduler integration Event-based trigger in a proprietary event format Event-based invocation in CloudEvents format Amazon SQS and Amazon SNS integration Pub/Sub integration AWS Lambda Step Functions Workflows integration Note: This document focuses on migrating from AWS Lambda workloads to Cloud Run. If you use AWS Step Functions and you need to migrate them as well, see Migrate from AWS Step Functions to Workflows. Cloud Run resource configuration To supplement the design decisions that you made for triggering and running your migrated workloads, Cloud Run supports several configuration options that let you fine tune several aspects of the runtime environment. 
These configuration options consist of resource services and jobs. As mentioned earlier, you can better understand how Cloud Run works by first developing an understanding of all of the configuration options that are available in Cloud Run. This understanding then helps you to compare AWS Lambda features to similar Cloud Run features, and helps you determine how to refactor your workloads. To learn more about the configurations that Cloud Run services provide, use the following resources: Capacity: memory limits, CPU limits, CPU allocation, request timeout, maximum concurrent requests per instance, minimum instances, maximum instances, and GPU configuration Environment: container port and entrypoint, environment variables, Cloud Storage volume mounts, in-memory volume mounts, execution environment, container health checks, secrets, and service accounts Autoscaling Handling exceptional conditions: Pub/Sub failure handling, and Eventarc failure handling Metadata: description, labels, and tags Traffic serving: custom domain mapping, auto-assigned URLs, Cloud CDN integration, and session affinity To learn more about the jobs that Cloud Run provides, use the following resources: Capacity: memory limits, CPU limits, parallelism, and task timeout Environment: container entrypoint, environment variables, Cloud Storage volume mounts, in-memory volume mounts, secrets, and service accounts Handling exceptional conditions: maximum retries Metadata: labels and tags To help you understand which Cloud Run configuration options are comparable to AWS Lambda configuration options, use the following table. For each Cloud Run resource that you need to provision, make sure to choose the right configuration options. AWS Lambda feature Cloud Run feature Provisioned concurrency Minimum instances Reserved concurrency per instance (The concurrency pool is shared across AWS Lambda functions in your AWS account.) Maximum instances per service N/A (not supported, one request maps to one instance) Concurrent requests per instance N/A (depends on memory allocation) CPU allocation Scalability settings Instance autoscaling for services Parallelism for jobs Instance configuration (CPU, memory) CPU and memory limits Maximum execution time Request timeout for services Task timeout for jobs AWS Lambda SnapStart Startup CPU boost Environment variables Environment variables Ephemeral data storage In-memory volume mounts Amazon Elastic File System connections NFS volume mounts S3 volume mounts are not supported Cloud Storage volume mounts AWS Secrets Manager in AWS Lambda workloads Secrets Workload URLs and HTTP(S) endpoints Auto-assigned URLs Cloud Run integrations with Google Cloud products Sticky sessions (using an external load balancer) Session affinity Qualifiers Revisions In addition to the features that are listed in the previous table, you should also consider the differences between how AWS Lambda and Cloud Run provision instances of the execution environment. AWS Lambda provisions a single instance for each concurrent request. However, Cloud Run lets you set the number of concurrent requests that an instance can serve. That is, the provisioning behavior of AWS Lambda is equivalent to setting the maximum number of concurrent requests per instance to 1 in Cloud Run. Setting the maximum number of concurrent requests to more than 1 can significantly save costs because the CPU and memory of the instance is shared by the concurrent requests, but they are only billed once. 
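As a concrete illustration of the HTTPS invocation mapping and the concurrency behavior described above, the following minimal Python sketch contrasts an AWS Lambda handler behind a function URL with the same logic served as a Cloud Run service. Flask is used here only as one common option, and the handler logic and query parameter are placeholders; any HTTP framework that listens on the port supplied in the PORT environment variable works, and a single Cloud Run instance can serve several of these requests concurrently.

    # Before (AWS Lambda): a handler invoked through a function URL.
    def lambda_handler(event, context):
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}!"}

    # After (Cloud Run): the same logic behind an HTTP server that listens on $PORT.
    import os
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/", methods=["GET"])
    def hello():
        name = request.args.get("name", "world")
        return f"Hello, {name}!", 200

    if __name__ == "__main__":
        # Cloud Run injects the port to listen on through the PORT environment variable.
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))

Because Cloud Run delivers ordinary HTTP requests to the container, the same server setup can also receive requests from Cloud Scheduler, Cloud Tasks, Pub/Sub push subscriptions, or Eventarc, although the request payloads differ by trigger.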
Most HTTP frameworks are designed to handle requests in parallel. Cloud Run security and access control When you design your Cloud Run resources, you also need to decide on the security and access controls that you need for your migrated workloads. Cloud Run supports several configuration options to help you secure your environment, and to set roles and permissions for your Cloud Run workloads. This section highlights the security and access controls that are available in Cloud Run. This information helps you both understand how your migrated workloads will function in Cloud Run and identify those Cloud Run options that you might need if you refactor those workloads. To learn more about the authentication mechanisms that Cloud Run provides, use the following resources: Allow public (unauthenticated) access Custom audiences Developer authentication Service-to-service authentication User authentication To learn more about the security features that Cloud Run provides, use the following resources: Access control with Identity and Access Management (IAM) Per-service identity Google Cloud Armor integration Binary Authorization Customer managed encryption keys Software supply chain security Ingress restriction settings VPC Service Controls To help you understand which Cloud Run security and access controls are comparable to those that are available in AWS Lambda, use the following table. For each Cloud Run resource that you need to provision, make sure to choose the right access controls and security features. AWS Lambda feature Cloud Run feature Access control with AWS identity access and management Access control with Google Cloud's IAM Execution role Google Cloud's IAM role Permission boundaries Google Cloud's IAM permissions and custom audiences Governance controls Organization Policy Service Code signing Binary authorization Full VPC access Granular VPC egress access controls Provision and configure Cloud Run resources After you choose the Cloud Run resources to deploy your workloads, you provision and configure those resources. For more information about provisioning Cloud Run resources, see the following: Deploy a Cloud Run Service from source Deploying container images to Cloud Run Create jobs Deploy a Cloud Run function Refactor AWS Lambda workloads To migrate your AWS Lambda workloads to Cloud Run, you might need to refactor them. For example, if an event-based workload accepts triggers that contain Amazon CloudWatch events, you might need to refactor that workload to make it accept events in the CloudEvents format. There are several factors that may influence the amount of refactoring that you need for each AWS Lambda workload, such as the following: Architecture. Consider how the workload is designed in terms of architecture. For example, AWS Lambda workloads that have clearly separated the business logic from the logic to access AWS-specific APIs might require less refactoring as compared to workloads where the two logics are mixed. Idempotency. Consider whether the workload is idempotent in regard to its inputs. A workload that is idempotent to inputs might require less refactoring as compared to workloads that need to maintain state about which inputs they've already processed. State. Consider whether the workload is stateless. A stateless workload might require less refactoring as compared to workloads that maintain state. For more information about the services that Cloud Run supports to store data, see Cloud Run storage options. Runtime environment. 
Consider whether the workload makes any assumptions about its runtime environment. For these types of workloads, you might need to satisfy the same assumptions in the Cloud Run runtime environment or refactor the workload if you can't assume the same for the Cloud Run runtime environment. For example, if a workload requires certain packages or libraries to be available, you need to install them in the Cloud Run runtime environment that is going to host that workload. Configuration injection. Consider whether the workload supports using environment variables and secrets to inject (set) its configuration. A workload that supports this type of injection might require less refactoring as compared to workloads that support other configuration injection mechanisms. APIs. Consider whether the workload interacts with AWS Lambda APIs. A workload that interacts with these APIs might need to be refactored to use Cloud APIs and Cloud Run APIs. Error reporting. Consider whether the workload reports errors using standard output and error streams. A workload that does such error reporting might require less refactoring as compared to workloads that report errors using other mechanisms. For more information about developing and optimizing Cloud Run workloads, see the following resources: General development tips Optimize Java applications for Cloud Run Optimize Python applications for Cloud Run Load testing best practices Jobs retries and checkpoints best practices Frontend proxying using Nginx Serve static assets with Cloud CDN and Cloud Run Understand Cloud Run zonal redundancy Refactor deployment and operational processes After you refactor your workloads, you refactor your deployment and operational processes to do the following: Provision and configure resources in your Google Cloud environment instead of provisioning resources in your source environment. Build and configure workloads, and deploy them in your Google Cloud instead of deploying them in your source environment. You gathered information about these processes during the assessment phase earlier in this process. The type of refactoring that you need to consider for these processes depends on how you designed and implemented them. The refactoring also depends on what you want the end state to be for each process. For example, consider the following: You might have implemented these processes in your source environment and you intend to design and implement similar processes in Google Cloud. For example, you can refactor these processes to use Cloud Build, Cloud Deploy, and Infrastructure Manager. You might have implemented these processes in another third-party environment outside your source environment. In this case, you need to refactor these processes to target your Google Cloud environment instead of your source environment. A combination of the previous approaches. Refactoring deployment and operational processes can be complex and can require significant effort. If you try to perform these tasks as part of your workload migration, the workload migration can become more complex, and it can expose you to risks. After you assess your deployment and operational processes, you likely have an understanding of their design and complexity. If you estimate that you require substantial effort to refactor your deployment and operational processes, we recommend that you consider refactoring these processes as part of a separate, dedicated project. 
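Returning to the workload-level refactoring factors listed earlier in this section, two of them, event format and error reporting, often show up together in code. The following minimal Python sketch, with placeholder business logic, shows a Cloud Run service that accepts the documented Pub/Sub push envelope instead of a proprietary AWS trigger payload, and that writes errors as structured JSON lines on standard output and standard error so that Cloud Logging can ingest them as log entries.

    # Sketch: accept Pub/Sub push events and report errors through stdout/stderr.
    import base64
    import json
    import os
    import sys

    from flask import Flask, request

    app = Flask(__name__)

    def log(severity: str, message: str, **fields) -> None:
        # One JSON object per line becomes a structured log entry in Cloud Logging.
        entry = {"severity": severity, "message": message, **fields}
        print(json.dumps(entry), file=sys.stderr if severity == "ERROR" else sys.stdout)

    @app.route("/", methods=["POST"])
    def handle_push():
        envelope = request.get_json(silent=True)
        if not envelope or "message" not in envelope:
            log("ERROR", "invalid Pub/Sub push envelope")
            return "Bad Request", 400

        message = envelope["message"]
        payload = base64.b64decode(message.get("data", "")).decode("utf-8")
        log("INFO", "processing event", attributes=message.get("attributes", {}))

        # ... business logic previously inside the Lambda handler goes here ...
        _ = payload

        # A 2xx response acknowledges the message; a non-2xx response causes a retry.
        return "", 204

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))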
For more information about how to design and implement deployment processes on Google Cloud, see: Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments This document focuses on the deployment processes that produce the artifacts to deploy, and deploy them in the target runtime environment. The refactoring strategy highly depends on the complexity of these processes. The following list outlines a possible, general refactoring strategy: Provision artifact repositories on Google Cloud. For example, you can use Artifact Registry to store artifacts and build dependencies. Refactor your build processes to store artifacts both in your source environment and in Artifact Registry. Refactor your deployment processes to deploy your workloads in your target Google Cloud environment. For example, you can start by deploying a small subset of your workloads in Google Cloud, using artifacts stored in Artifact Registry. Then, you gradually increase the number of workloads deployed in Google Cloud, until all the workloads to migrate run on Google Cloud. Refactor your build processes to store artifacts in Artifact Registry only. If necessary, migrate earlier versions of the artifacts to deploy from the repositories in your source environment to Artifact Registry. For example, you can copy container images to Artifact Registry. Decommission the repositories in your source environment when you no longer require them. To facilitate eventual rollbacks due to unanticipated issues during the migration, you can store container images both in your current artifact repositories and in Google Cloud while the migration to Google Cloud is in progress. Finally, as part of the decommissioning of your source environment, you can refactor your container image building processes to store artifacts in Google Cloud only. Although it might not be crucial for the success of a migration, you might need to migrate earlier versions of your artifacts from your source environment to your artifact repositories on Google Cloud. For example, to support rolling back your workloads to arbitrary points in time, you might need to migrate earlier versions of your artifacts to Artifact Registry. For more information, see Migrate images from a third-party registry. If you're using Artifact Registry to store your artifacts, we recommend that you configure controls to help you secure your artifact repositories, such as access control, data exfiltration prevention, vulnerability scanning, and Binary Authorization. For more information, see Control access and protect artifacts. Refactor operational processes As part of your migration to Cloud Run, we recommend that you refactor your operational processes to constantly and effectively monitor your Cloud Run environment. Cloud Run integrates with the following operational services: Google Cloud Observability to provide logging, monitoring, and alerting for your workloads. If needed, you can also get Prometheus-style monitoring or OpenTelemetry metrics for your Cloud Run workloads. Cloud Audit Logs to provide audit logs. Cloud Trace to provide distributed performance tracing. Migrate data The assessment phase earlier in this process should have helped you determine whether the AWS Lambda workloads that you're migrating either depend on or produce data that resides in your AWS environment. 
For example, you might have determined that you need to migrate data from AWS S3 to Cloud Storage, or from Amazon RDS and Aurora to Cloud SQL and AlloyDB for PostgreSQL. For more information about migrating data from AWS to Google Cloud, see Migrate from AWS to Google Cloud: Get started. As with refactoring deployment and operational processes, migrating data from AWS to Google Cloud can be complex and can require significant effort. If you try to perform these data migration tasks as part of the migration of your AWS Lambda workloads, the migration can become complex and can expose you to risks. After you analyze the data to migrate, you'll likely have an understanding of the size and complexity of the data. If you estimate that you require substantial effort to migrate this data, we recommend that you consider migrating data as part of a separate, dedicated project. Validate the migration results Validating your workload migration isn't a one-time event but a continuous process. You need to maintain focus on testing and validation before, during, and after migrating from AWS Lambda to Cloud Run. To help ensure a successful migration with optimal performance and minimal disruptions, we recommend that you use the following process to validate the workloads that you're migrating from AWS Lambda to Cloud Run: Before you start the migration phase, refactor your existing test cases to take into account the target Google Cloud environment. During the migration, validate test results at each migration milestone and conduct thorough integration tests. After the migrations, do the following testing: Baseline testing: Establish performance benchmarks of the original workload on AWS Lambda, such as execution time, resource usage, and error rates under different loads. Replicate these tests on Cloud Run to identify discrepancies that could point to migration or configuration problems. Functional testing: Ensure that the core logic of your workloads remains consistent by creating and executing test cases that cover various input and expected output scenarios in both environments. Load testing: Gradually increase traffic to evaluate the performance and scalability of Cloud Run under real-world conditions. To help ensure a seamless migration, address discrepancies such as errors and resource limitations. Optimize your Google Cloud environment Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. You repeat this sequence until you've achieved your optimization goals. For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization. What's next Read about other AWS to Google Cloud migration journeys. Learn how to compare AWS and Azure services to Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
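As a minimal illustration of the baseline and load testing steps described in the validation section above, the following Python sketch sends a batch of concurrent requests to both endpoints and compares simple latency statistics. The URLs are placeholders, the percentile calculation is approximate, and for realistic load tests you should use a dedicated tool and follow the load testing best practices linked earlier.

    # Minimal latency-comparison sketch for baseline testing; not a substitute for a
    # dedicated load testing tool.
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    ENDPOINTS = {
        "aws_lambda": "https://example.lambda-url.us-east-1.on.aws/",  # placeholder
        "cloud_run": "https://my-service-abc123-uc.a.run.app/",        # placeholder
    }

    def time_request(url: str) -> float:
        start = time.perf_counter()
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        return time.perf_counter() - start

    def benchmark(url: str, total_requests: int = 50, concurrency: int = 5) -> dict:
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = sorted(pool.map(lambda _: time_request(url), range(total_requests)))
        return {
            "median_s": statistics.median(latencies),
            "approx_p95_s": latencies[int(len(latencies) * 0.95) - 1],
            "max_s": max(latencies),
        }

    if __name__ == "__main__":
        for name, url in ENDPOINTS.items():
            print(name, benchmark(url))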
ContributorsAuthors: Marco Ferrari | Cloud Solutions ArchitectXiang Shen | Solutions ArchitectOther contributors: Steren Giannini | Group Product ManagerJames Ma | Product ManagerHenry Bell | Cloud Solutions ArchitectChristoph Stanger | Strategic Cloud EngineerCC Chen | Customer Solutions ArchitectWietse Venema | Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Migrate_from_Amazon_RDS_and_Amazon_Aurora_for_MySQL_to_Cloud_SQL_for_MySQL.txt b/Migrate_from_Amazon_RDS_and_Amazon_Aurora_for_MySQL_to_Cloud_SQL_for_MySQL.txt new file mode 100644 index 0000000000000000000000000000000000000000..47f68370d09409ebd76f2fd7e6898f8c7e42120b --- /dev/null +++ b/Migrate_from_Amazon_RDS_and_Amazon_Aurora_for_MySQL_to_Cloud_SQL_for_MySQL.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-aws-rds-to-sql-mysql +Date Scraped: 2025-02-23T11:52:02.192Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate from AWS to Google Cloud: Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-16 UTC Google Cloud provides tools, products, guidance, and professional services to migrate from Amazon Relational Database Service (RDS) or Amazon Aurora to Cloud SQL for MySQL. This document is intended for cloud and database administrators who want to plan, implement, and validate a database migration project. It's also intended for decision makers who are evaluating the opportunity to migrate and want an example of what a migration might look like. This document focuses on a homogeneous database migration, which is a migration where the source and destination databases use the same database technology. In this migration guide, the source is Amazon RDS or Amazon Aurora for MySQL, and the destination is Cloud SQL for MySQL. This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents: Get started Migrate from Amazon EC2 to Compute Engine Migrate from Amazon S3 to Cloud Storage Migrate from Amazon EKS to GKE Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL (this document) Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server Migrate from AWS Lambda to Cloud Run For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. 
Assess the source environment In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. The database assessment phase helps you choose the size and specifications of your target Cloud SQL database instance that matches the source for similar performance needs. Pay special attention to disk size and throughput, IOPS, and number of vCPUs. Migrations might struggle or fail due to incorrect target database instance sizing. Incorrect sizing can lead to long migration times, database performance problems, database errors, and application performance problems. When deciding on the Cloud SQL instance, keep in mind that disk performance is based on the disk size and the number of vCPUs. The following sections rely on Migrate to Google Cloud: Assess and discover your workloads, and integrate the information in that document. Build an inventory of your Amazon RDS or Amazon Aurora instances To define the scope of your migration, you create an inventory and collect information about your Amazon RDS or Amazon Aurora instances. We recommend that you automate this process by using specialized tools, because manual approaches are prone to error and can lead to incorrect assumptions. Amazon RDS or Amazon Aurora and Cloud SQL might not have similar features, instance specifications, or operations. Some functionalities might be implemented differently or be unavailable. Areas of differences might include infrastructure, storage, authentication and security, replication, backup, high availability, resource capacity model and specific database engine feature integrations, and extensions. Depending on the database engine type, instance size, and architecture, there can also be differences in the default values of database parameter settings. Benchmarking can help you to better understand the workloads that are to be migrated and contributes to defining the right architecture of the migration target environment. Collecting information about performance is important to help estimate the performance needs of the Google Cloud target environment. Benchmarking concepts and tools are detailed in the Perform testing and validation phase of the migration process, but they also apply to the inventory building stage. Tools for assessments For an initial overview assessment of your current infrastructure, we recommend that you use Google Cloud Migration Center along with other specialized database assessment tools such as migVisor and Database Migration Assessment Tool (DMA). With Migration Center, you can perform a complete assessment of your application and database landscape, including the technical fit for a database migration to Google Cloud. 
You receive size and configuration recommendations for each source database, and create a total cost of ownership (TCO) report for servers and databases. For more information about assessing your AWS environment by using Migration Center, see Import data from other cloud providers. In addition to Migration Center, you can use the specialized tool migVisor. migVisor supports a variety of database engines and is particularly suitable for heterogeneous migrations. For an introduction to migVisor, see the migVisor overview. migVisor can identify artifacts and incompatible proprietary database features that can cause the migration to fail, and can point to workarounds. migVisor can also recommend a target Google Cloud solution, including initial sizing and architecture. The migVisor database assessment output provides the following: Comprehensive discovery and analysis of current database deployments. Detailed report of migration complexity, based on the proprietary features used by your database. Financial report with details on post-migration cost savings, migration costs, and new operating budget. Phased migration plan to move databases and associated applications with minimal disruption to the business. To see some examples of assessment outputs, see migVisor - Cloud migration assessment tool. Note that migVisor temporarily increases database server utilization. Typically, this additional load is less than 3%, and the assessment can be run during non-peak hours. The migVisor assessment output helps you to build a complete inventory of your RDS instances. The report includes generic properties (database engine version and edition, CPUs, and memory size), as well as details about database topology, backup policies, parameter settings, and special customizations in use. If you prefer to use open source tools, you can use data collector scripts with (or instead of) the mentioned tools. These scripts can help you collect detailed information (about workloads, features, database objects, and database code) and build your database inventory. Also, scripts usually provide a detailed database migration assessment, including a migration effort estimation. We recommend the open source tool DMA, which was built by Google engineers. It offers a complete and accurate database assessment, including features in use, database logic, and database objects (including schemas, tables, views, functions, triggers, and stored procedures). To use DMA, download the collection scripts for your database engine from the Git repository, and follow the instructions. Send the output files to Google Cloud for analysis. Google Cloud creates and delivers a database assessment readout, and provides the next steps in the migration journey. Identify and document the migration scope and affordable downtime At this stage, you document essential information that influences your migration strategy and tooling. By now, you can answer the following questions: Are your databases larger than 5 TB? Are there any large tables in your database? Are they larger than 16 TB? Where are the databases located (regions and zones), and what's their proximity to applications? How often does the data change? What is your data consistency model? What are the options for destination databases? How compatible are the source and destination databases? Does the data need to reside in some physical locations? Is there data that can be compressed and archived, or is there data that doesn't need migration at all?
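To answer the sizing questions above, you can query the source instance directly before deciding on scope and downtime. The following sketch assumes you can reach the Amazon RDS or Amazon Aurora endpoint with the mysql client; the host and user names are placeholders, and the reported sizes are estimates based on InnoDB statistics.

    # Approximate size of each schema, in GB, on the source instance.
    mysql -h SOURCE_HOST -u ADMIN_USER -p -e "
      SELECT table_schema,
             ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
      FROM information_schema.tables
      GROUP BY table_schema
      ORDER BY size_gb DESC;"

    # The largest individual tables, to spot anything approaching the size limits mentioned above.
    mysql -h SOURCE_HOST -u ADMIN_USER -p -e "
      SELECT table_schema, table_name,
             ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
      FROM information_schema.tables
      ORDER BY (data_length + index_length) DESC
      LIMIT 20;"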
To define the migration scope, decide what data to keep and what to migrate. Migrating all your databases might take considerable time and effort. Some data might remain in your source database backups. For example, old logging tables or archival data might not be needed. Alternatively, you might decide to move data after the migration process, depending on your strategy and tools. Establish data migration baselines that help you compare and evaluate your outcomes and impacts. These baselines are reference points that represent the state of your data before and after the migration and help you make decisions. It's important to take measurements on the source environment that can help you evaluate your data migration's success. Such measurements include the following: The size and structure of your data. The completeness and consistency of your data. The duration and performance of the most important business transactions and processes. Determine how much downtime you can afford. What are the business impacts of downtime? Are there periods of low database activity, during which there are fewer users affected by downtime? If so, how long are such periods and when do they occur? Consider having a partial write only downtime, while read-only requests are still served. Assess your deployment and administration process After you build the inventories, assess the operational and deployment processes for your database to determine how you need to adapt them to facilitate your migration. These processes are fundamental to how you prepare and maintain your production environment. Consider how you complete the following tasks: Define and enforce security policies for your instances: For example, you might need to replace Amazon Security Groups. You can use Google IAM roles, VPC firewall rules, and VPC Service Controls to control access to your Cloud SQL instances and constrain the data within a VPC. Patch and configure your instances: Your existing deployment tools might need to be updated. For example, you might be using custom configuration settings in Amazon parameter groups or Amazon option groups. Your provisioning tools might need to be adapted to use Google Cloud Storage or Secret Manager to read such custom configuration lists. Manage monitoring and alerting infrastructure: Metric categories for your Amazon source database instances provide observability during the migration process. Metric categories might include Amazon CloudWatch, Performance Insights, Enhanced Monitoring, and OS process lists. Complete the assessment After you build the inventories from your Amazon RDS or Amazon Aurora environment, complete the rest of the activities of the assessment phase as described in Migrate to Google Cloud: Assess and discover your workloads. Plan and build your foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. For more information about each of these tasks, see the Migrate to Google Cloud: Plan and build your foundation. 
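As a small illustration of the security-policy task described earlier, the following commands sketch one way to approximate an Amazon security group rule on Google Cloud. The network, instance, and IP ranges are placeholder values, and which mechanism applies depends on how your Cloud SQL instance is exposed (private IP inside a VPC versus public IP with authorized networks).

    # Example VPC firewall rule that only lets an approved subnet send MySQL
    # traffic inside the VPC (roughly the role a security group rule played on AWS).
    gcloud compute firewall-rules create allow-mysql-from-app-subnet \
        --network=my-vpc \
        --direction=INGRESS \
        --allow=tcp:3306 \
        --source-ranges=10.10.0.0/24

    # For a Cloud SQL instance with a public IP, restrict client access with
    # authorized networks instead.
    gcloud sql instances patch my-mysql-instance \
        --authorized-networks=203.0.113.0/28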
If you plan to use Database Migration Service for migration, see Networking methods for MySQL to understand the networking configurations available for migration scenarios. Monitoring and alerting Use Google Cloud Monitoring, which includes predefined dashboards for several Google Cloud products, including a Cloud SQL monitoring dashboard. Alternatively, you can consider using third-party monitoring solutions that are integrated with Google Cloud, like Datadog and Splunk. For more information, see About database observability. Migrate Amazon RDS and Amazon Aurora for MySQL instances to Cloud SQL for MySQL To migrate your instances, you do the following: Choose the migration strategy: continuous replication or scheduled maintenance. Choose the migration tools, depending on your chosen strategy and requirements. Define the migration plan and timeline for each database migration, including preparation and execution tasks. Define the preparation tasks that must be done to ensure the migration tool can work properly. Define the execution tasks, which include work activities that implement the migration. Define fallback scenarios for each execution task. Perform testing and validation, which can be done in a separate staging environment. Perform the migration. Perform the production cut-over. Clean up the source environment and configure the target instance. Perform tuning and optimization. Each phase is described in the following sections. Choose your migration strategy At this step, you have enough information to evaluate and select one of the following migration strategies that best suits your use case for each database: Scheduled maintenance (also called one-time migration): This approach is ideal if you can afford downtime. This option is relatively lower in cost and complexity, because your workloads and services won't require much refactoring. However, if the migration fails before completion, you have to restart the process, which prolongs the downtime. Continuous replication (also called trickle migration): For mission-critical databases, this option offers a lower risk of data loss and near-zero downtime. The efforts are split into several chunks, so if a failure occurs, rollback and repeat takes less time. However, setup is more complex and takes more planning and time. Additional effort is also required to refactor the applications that connect to your database instances. Consider one of the following variations: Using the Y (writing and reading) approach, which is a form of parallel migration, duplicating data in both source and destination instances during the migration. Using a data-access microservice, which reduces refactoring effort required by the Y (writing and reading) approach. For more information about data migration strategies, see Evaluating data migration approaches. The following diagram shows a flowchart based on example questions that you might have when deciding the migration strategy for a single database: The preceding flowchart shows the following decision points: Can you afford any downtime? If no, adopt the continuous replication migration strategy. If yes, continue to the next decision point. Can you afford the downtime represented by the cut-over window while migrating data? The cut-over window represents the amount of time to take a backup of the database, transfer it to Cloud SQL, restore it, and then switch over your applications. If no, adopt the continuous replication migration strategy. If yes, adopt the scheduled maintenance migration strategy. 
Strategies might vary for different databases, even when they're located on the same instance. A mix of strategies can produce optimal results. For example, migrate small and infrequently used databases by using the scheduled maintenance approach, but use continuous replication for mission-critical databases where having downtime is expensive. Usually, a migration is considered completed when the switch between the initial source instance and the target instance takes place. Any replication (if used) is stopped and all reads and writes are done on the target instance. Switching when both instances are in sync means no data loss and minimal downtime. For more information about data migration strategies and deployments, see Classification of database migrations. Minimize downtime and migration-related impacts Migration configurations that provide no application downtime require a more complicated setup. Find the right balance between setup complexity and downtime scheduled during low-traffic business hours. Each migration strategy has a tradeoff and some impact associated with the migration process. For example, replication processes involve some additional load on your source instances and your applications might be affected by replication lag. Applications (and customers) might have to wait during application downtime, at least as long as the replication lag lasts before using the new database. In practice, the following factors might increase downtime: Database queries can take a few seconds to complete. At the time of migration, in-flight queries might be aborted. The cache might take some time to fill up if the database is large or has a substantial buffer memory. Applications stopped in the source and restarted in Google Cloud might have a small lag until the connection to the Google Cloud database instance is established. Network routes to the applications must be rerouted. Depending on how DNS entries are set up, this can take some time. When you update DNS records, reduce TTL before the migration. The following common practices can help minimize downtime and impact: Find a time period when downtime would have a minimal impact on your workloads. For example, outside normal business hours, during weekends, or other scheduled maintenance windows. Identify parts of your workloads that can undergo migration and production cut-over at different stages. Your applications might have different components that can be isolated, adapted, and migrated with no impact. For example, frontends, CRM modules, backend services, and reporting platforms. Such modules could have their own databases that can be scheduled for migration earlier or later in the process. If you can afford some latency on the target database, consider implementing a gradual migration. Use incremental, batched data transfers, and adapt part of your workloads to work with the stale data on the target instance. Consider refactoring your applications to support minimal migration impact. For example, adapt your applications to write to both source and target databases, and therefore implement an application-level replication. Choose your migration tools The most important factor for a successful migration is choosing the right migration tool. Once the migration strategy has been decided, review and decide upon the migration tool. There are many tools available, each optimized for certain migration use cases. Use cases can include the following: Migration strategy (scheduled maintenance or continuous replication). 
Source and target database engines and engine versions. Environments in which source instances and target instances are located. Database size. The larger the database, the more time it takes to migrate the initial backup. Frequency of the database changes. Availability to use managed services for migration. To ensure a seamless migration and cut-over, you can use application deployment patterns, infrastructure orchestration, and custom migration applications. However, specialized tools called managed migration services can facilitate the process of moving data, applications, or even entire infrastructures from one environment to another. They run the data extraction from the source databases, securely transport data to the target databases, and can optionally modify the data during transit. With these capabilities, they encapsulate the complex logic of migration and offer migration monitoring capabilities. Managed migration services provide the following advantages: Minimize downtime: Services use the built-in replication capabilities of the database engines when available, and perform replica promotion. Ensure data integrity and security: Data is securely and reliably transferred from the source to the destination database. Provide a consistent migration experience: Different migration techniques and patterns can be consolidated into a consistent, common interface by using database migration executables, which you can manage and monitor. Offer resilient and proven migration models: Database migrations are infrequent but critical events. To avoid beginner mistakes and issues with existing solutions, you can use tools from experienced experts, rather than building a custom solution. Optimize costs: Managed migration services can be more cost effective than provisioning additional servers and resources for custom migration solutions. The next sections describe the migration tool recommendations, which depend on the chosen migration strategy. Tools for scheduled maintenance migrations The following diagram shows a flowchart with questions that can help you choose the migration tool for a single database, when you use a scheduled maintenance migration strategy: The preceding flowchart shows the following decision points: Do you prefer managed migration services? If yes, we recommend that you use Database Migration Service. If no, we recommend that you migrate by using the database engine built-in backups. Using a managed migration service provides some advantages, which are discussed in the Choose your migration tools section of this document. The following subsections describe the tools that can be used for one-time migrations, along with their limitations and best practices. Database Migration Service for scheduled maintenance migration Database Migration Service manages both the initial sync and ongoing change data capture (CDC) replication, and provides the status of the migration operation. By using Database Migration Service, you can create migration jobs that you can manage and verify. We recommend that you use Database Migration Service, because it supports migrations to Cloud SQL for MySQL through one-time migration jobs. For large databases, you can use data dump parallelism in Database Migration Service. For more information about database migration architecture and about Database Migration Service, see Migrate a database to Cloud SQL for MySQL by using Database Migration Service and Migration fidelity for MySQL. 
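If you prefer to script the Database Migration Service setup rather than use the console, the flow looks roughly like the following. This is a simplified sketch, not a complete procedure: the profile and job names are placeholders, connectivity options (such as VPC peering or a reverse SSH tunnel) are omitted, the destination connection profile for the Cloud SQL instance is assumed to exist already, and the exact flags can vary by gcloud version, so check the gcloud database-migration reference before relying on it.

    # Connection profile that describes the source Amazon RDS for MySQL instance.
    gcloud database-migration connection-profiles create mysql rds-source-profile \
        --region=us-central1 \
        --host=RDS_ENDPOINT \
        --port=3306 \
        --username=migration_user \
        --password=MIGRATION_USER_PASSWORD

    # One-time (scheduled maintenance) migration job from the source profile
    # to the existing destination profile for Cloud SQL for MySQL.
    gcloud database-migration migration-jobs create rds-to-cloudsql-job \
        --region=us-central1 \
        --type=ONE_TIME \
        --source=rds-source-profile \
        --destination=DESTINATION_CONNECTION_PROFILE_ID

    # Start the job and check its progress.
    gcloud database-migration migration-jobs start rds-to-cloudsql-job --region=us-central1
    gcloud database-migration migration-jobs describe rds-to-cloudsql-job --region=us-central1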
For more information about Database Migration Service limitations and prerequisites, see the following: Known limitations in Database Migration Service. Quotas and limits in Database Migration Service. Configure your source in Database Migration Service. Built-in database engine backups When significant downtime is acceptable, and your source databases are relatively static, you can use the database engine's built-in dump and load (also sometimes called backup and restore) capabilities. Some effort is required for setup and synchronization, especially for a large number of databases, but database engine backups are usually readily available and straightforward to use. Database engine backups have the following general limitations: Backups can be error prone, particularly if performed manually. Data is unsecured if the backups are not properly managed. Backups lack proper monitoring capabilities. Effort is required to scale, if many databases are being migrated by using this tool. If you choose this approach, consider the following limitations and best practices: If your database is relatively small (under 10 GB), we recommend that you use mysqldump. There is no fixed limit, but the ideal database size for mysqldump is usually under 10 GB. If you import databases larger than 10 GB, you might require some other strategies to minimize the database downtime. Some of these strategies are as follows: Export the schema and data in subsections, without secondary keys. Take advantage of timestamps. If any of your tables use timestamp columns, you can use a feature of the mysqldump command that lets you specify a WHERE clause to filter by a timestamp column. Consider other approaches like using mydumper and myloader tools. Database dumps and backups usually don't include MySQL user accounts. When preparing to migrate, review all user accounts on the source instance. For example, you can run the SHOW GRANTS command for each user to review the current set of privileges and see if there are any that are restricted in Cloud SQL. Similarly, the pt-show-grants tool from Percona can also list grants. Restrictions on user privileges in Cloud SQL can affect migrations when migrating database objects that have a DEFINER attribute. For example, a stored procedure might have a super-privileged definer to run SQL commands like changing a global variable that isn't allowed in Cloud SQL. For cases like this, you might need to rewrite the stored procedure code or migrate non-table objects like stored procedures as a separate migration step. For more information, check Best practices for importing and exporting data. For further reading about limitations and best practices, see the following: Cloud SQL for MySQL about users. Best practices for importing and exporting data with Cloud SQL for MySQL. Cloud SQL for MySQL Known Issues. Other approaches for scheduled maintenance migration As part of using a managed import to set up replication from an external MySQL database, there's an initial load of data from the external database into the Cloud SQL instance. This approach uses a service that extracts data from the external server - the RDS instance in this case - and imports it into the Cloud SQL instance directly. For big databases (several TB), a faster way is to use a custom import initial load that uses the mydumper and myloader tools. You can consider exporting the tables from your MySQL database to CSV files, which can then be imported into Cloud SQL for MySQL. 
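Whichever export route you choose, the simplest variant of the dump-and-load approach described in this section looks roughly like the following sketch. The endpoint, bucket, database, and updated_at column names are placeholders; the timestamp filter only applies if your tables actually have such a column, and the Cloud SQL instance's service account needs read access to the bucket before the import.

    # Dump one database from the source RDS instance in a single consistent
    # transaction, in a format suitable for importing into Cloud SQL.
    mysqldump -h RDS_ENDPOINT -u ADMIN_USER -p \
        --single-transaction --hex-blob --set-gtid-purged=OFF \
        --databases mydb > mydb.sql

    # Optional: dump only recent rows of a large table by filtering on a
    # timestamp column (updated_at is a hypothetical column name).
    mysqldump -h RDS_ENDPOINT -u ADMIN_USER -p \
        --single-transaction --set-gtid-purged=OFF \
        mydb big_table --where="updated_at >= '2024-01-01'" > big_table_recent.sql

    # Stage the dump in Cloud Storage, then import it into the Cloud SQL instance.
    gsutil cp mydb.sql gs://my-migration-bucket/mydb.sql
    gcloud sql import sql my-mysql-instance gs://my-migration-bucket/mydb.sql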
To export your tables to CSV files from your RDS instance, you can use AWS Database Migration Service (AWS DMS) and an S3 bucket as the target. For detailed information and steps, see Use a managed import to set up replication from external databases. Tools for continuous replication migrations The following diagram shows a flowchart with questions that can help you choose the migration tool for a single database, when you use a continuous replication migration strategy: The preceding flowchart shows the following decision points: Do you prefer to use managed migration services? If yes, can you afford a few seconds of write downtime, depending on the number of tables in your database? If yes, use Database Migration Service. If no, explore other migration options. If no, you must evaluate if database engine built-in replication is supported: If yes, we recommend that you use built-in replication. If no, we recommend that you explore other migration options. The following sections describe the tools that can be used for continuous migrations, along with their limitations and best practices. Database Migration Service for continuous replication migration Database Migration Service provides seamless support for continuous migrations through its use of continuous migration job types. Only continuous migration jobs can be promoted, which signals the replication to stop. If you choose this tool, consider the following restrictions and best practices: Avoid long-running transactions during migration. In such cases, we recommend migrating from a read replica if you're not migrating from Amazon Aurora. For a full list of limitations, see Known limitations. Database engine built-in replication Database engine built-in replication is an alternative option to Database Migration Service for a continuous migration. Database replication represents the process of copying and distributing data from a database called the primary database to other databases called replicas. It's intended to increase data accessibility and improve the fault tolerance and reliability of a database system. Although database migration is not the primary purpose of database replication, it can be successfully used as a tool to achieve this goal. Database replication is usually an ongoing process that occurs in real time as data is inserted, updated, or deleted in the primary database. It can be done as a one-time operation, or as a sequence of batch operations. Most modern database engines implement different ways of achieving database replication. One type of replication can be achieved by copying and sending the write-ahead log or transaction log of the primary to its replicas. This approach is called physical or binary replication. Other replication types work by distributing the raw SQL statements that a primary database receives, instead of replicating file system level changes. Cloud SQL supports replication for MySQL. However, there are some prerequisites and limitations. Prerequisites: Ensure that you are using MySQL 5.5, 5.6, 5.7, or 8.0 on your Amazon RDS or Amazon Aurora instance. Ensure that binary logs are enabled. To speed up the initial full dump phase, choose a large enough machine tier from a CPU and memory size perspective. If you need to speed up the CDC phase, you can try to configure parallel replication and enable high performance flushing.
If the Cloud SQL replica is enabled with an internal IP address because the outgoing IP address isn't static, configure the Amazon RDS or Amazon Aurora server's firewall to allow the internal IP range allocated for the private services access of the VPC network that the Cloud SQL replica uses as its private network. For more information, see About replicating from an external server and Configure private services access. Limitations: If you have single large tables and many secondary indexes in your database, the initial full dump might take longer. If your external server contains DEFINER clauses (views, events, triggers, or stored procedures), depending on the order in which these statements are executed, replication might fail. Learn more about DEFINER usage and potential workarounds in Cloud SQL. InnoDB is the only supported database engine in Cloud SQL. Migrating MyISAM might cause data inconsistency and require data validation. See Converting tables from MyISAM to InnoDB. Other approaches for continuous replication migration Other continuous replication migration approaches include the following: Refactor your applications to perform Y (writing and reading) or use a data-access microservice. Continuous replication is performed by your applications. The migration effort is focused on the refactoring or development of tools that connect to your database instances. Reader applications are then gradually refactored and deployed to use the target instance. Replicate near real-time changes of your MySQL instance by using Datastream. Datastream is a serverless CDC and replication service that can be used with Amazon RDS or Amazon Aurora to replicate and synchronize data. Use Datastream to replicate changes from your MySQL instance to either BigQuery or Cloud Storage, and then use Dataflow or Dataproc to bring the data into your Cloud SQL instance. For more information about replication with Datastream, see the following: Configure an Amazon RDS for MySQL database in Datastream Configure an Amazon Aurora MySQL database in Datastream Stream changes to data in real time with Datastream Third-party tools for continuous replication migration In some cases, it might be better to use a single third-party tool that covers most database engines. Such cases might be if you prefer to use a managed migration service and you need to ensure that the target database is always in near-real-time sync with the source, or if you need more complex transformations like data cleaning, restructuring, and adaptation during the migration process. If you decide to use a third-party tool, choose one of the following recommendations, which you can use for most database engines. Striim is an end-to-end, in-memory platform for collecting, filtering, transforming, enriching, aggregating, analyzing, and delivering data in real time: Advantages: Handles large data volumes and complex migrations. Built-in change data capture for SQL Server. Preconfigured connection templates and no-code pipelines. Able to handle mission-critical, large databases that operate under heavy transactional load. Exactly-once delivery. Disadvantages: Not open source. Can become expensive, especially for long migrations. Some limitations in propagating data definition language (DDL) operations. For more information, see Supported DDL operations and Schema evolution notes and limitations. For more information about Striim, see Running Striim in the Google Cloud.
Debezium is an open source distributed platform for CDC, and can stream data changes to external subscribers: Advantages: Open source. Strong community support. Cost-effective. Fine-grained control on rows, tables, or databases. Specialized for change capture in real time from database transaction logs. Disadvantages: Requires specific experience with Kafka and ZooKeeper. At-least-once delivery of data changes, which means that you need to handle duplicates. Manual monitoring setup using Grafana and Prometheus. No support for incremental batch replication. For more information about Debezium migrations, see Near Real Time Data Replication using Debezium. Fivetran is an automated data movement platform for moving data out of and across cloud data platforms. Advantages: Preconfigured connection templates and no-code pipelines. Propagates any schema changes from your source to the target database. Exactly-once delivery of your data changes, which means that you don't need to handle duplicates. Disadvantages: Not open source. Support for complex data transformation is limited. Define the migration plan and timeline For a successful database migration and production cut-over, we recommend that you prepare a well-defined, comprehensive migration plan. To help reduce the impact on your business, we recommend that you create a list of all the necessary work items. Defining the migration scope reveals the work tasks that you must do before, during, and after the database migration process. For example, if you decide not to migrate certain tables from a database, you might need pre-migration or post-migration tasks to implement this filtering. You also ensure that your database migration doesn't affect your existing service-level agreement (SLA) and business continuity plan. We recommend that your migration planning documentation include the following documents: Technical design document (TDD) RACI matrix Timeline (such as a T-Minus plan) Database migrations are an iterative process, and first migrations are often slower than the later ones. Usually, well-planned migrations run without issues, but unplanned issues can still arise. We recommend that you always have a rollback plan. As a best practice, follow the guidance from Migrate to Google Cloud: Best practices for validating a migration plan. TDD The TDD documents all technical decisions to be made for the project. Include the following in the TDD: Business requirements and criticality Recovery time objective (RTO) Recovery point objective (RPO) Database migration details Migration effort estimates Migration validation recommendations RACI matrix Some migration projects require a RACI matrix, which is a common project management document that defines which individuals or groups are responsible for tasks and deliverables within the migration project. Timeline Prepare a timeline for each database that needs to be migrated. Include all work tasks that must be performed, along with defined start dates and estimated end dates. For each migration environment, we recommend that you create a T-minus plan. A T-minus plan is structured as a countdown schedule, and lists all the tasks required to complete the migration project, along with the responsible groups and estimated duration. The timeline should account not only for the execution of pre-migration preparation tasks, but also for the validation, auditing, and testing tasks that happen after the data transfer takes place.
The duration of migration tasks typically depends on database size, but there are also other aspects to consider, like business logic complexity, application usage, and team availability. A T-Minus plan might look like the following:
Date | Phase | Category | Tasks | Role | T-minus | Status
11/1/2023 | Pre-migration | Assessment | Create assessment report | Discovery team | -21 | Complete
11/7/2023 | Pre-migration | Target preparation | Design target environment as described by the design document | Migration team | -14 | Complete
11/15/2023 | Pre-migration | Company governance | Migration date and T-Minus approval | Leadership | -6 | Complete
11/18/2023 | Migration | Set up DMS | Build connection profiles | Cloud migration engineer | -3 | Complete
11/19/2023 | Migration | Set up DMS | Build and start migration jobs | Cloud migration engineer | -2 | Not started
11/19/2023 | Migration | Monitor DMS | Monitor DMS jobs and DDL changes in the source instance | Cloud migration engineer | -2 | Not started
11/21/2023 | Migration | Cutover DMS | Promote DMS replica | Cloud migration engineer | 0 | Not started
11/21/2023 | Migration | Migration validation | Database migration validation | Migration team | 0 | Not started
11/21/2023 | Migration | Application test | Run capabilities and performance tests | Migration team | 0 | Not started
11/22/2023 | Migration | Company governance | Migration validation GO or NO GO | Migration team | 1 | Not started
11/23/2023 | Post-migration | Validate monitoring | Configure monitoring | Infrastructure team | 2 | Not started
11/25/2023 | Post-migration | Security | Remove DMS user account | Security team | 4 | Not started
Multiple database migrations If you have multiple databases to migrate, your migration plan should contain tasks for all of the migrations. We recommend that you start the process by migrating a smaller, ideally non-mission-critical database. This approach can help you to build your knowledge and confidence in the migration process and tooling. You can also detect any flaws in the process in the early stages of the overall migration schedule. If you have multiple databases to migrate, the timelines can be parallelized. For example, to speed up the migration process, you might choose to migrate a group of small, static, or less mission-critical databases at the same time, as shown in the following diagram. In the example shown in the diagram, databases 1-4 are a group of small databases that are migrated at the same time. Define the preparation tasks The preparation tasks are all the activities that you need to complete to fulfill the migration prerequisites. Without the preparation tasks, the migration can't take place, or the migrated database ends up in an unusable state. Preparation tasks can be categorized as follows: Amazon RDS or Amazon Aurora instance preparation and prerequisites Source database preparation and prerequisites Cloud SQL setup Migration specific setup Amazon RDS or Amazon Aurora instance preparation and prerequisites Consider the following common setup and prerequisite tasks: Depending on your migration path, you might need to allow remote connections on your RDS instances. If your RDS instance is configured to be private in your VPC, private RFC 1918 connectivity must exist between Amazon and Google Cloud. You might need to configure a new security group to allow remote connections on required ports. By default, in AWS, network access is turned off for database instances. You can specify rules in a security group that allow access from an IP address range, port, or security group.
The same rules apply to all database instances that are associated with that security group. Make sure you have enough free disk space to buffer binary logs for the duration of the full load operation on your Amazon RDS instance. For optimal migration results, consider migrating from a read replica, unless you are migrating from Amazon Aurora. Additionally, we recommend that you begin the migration process during a period of reduced database activity. MySQL limits the hostname to 60 characters. Amazon RDS database hostnames are typically longer than 60 characters. If this is the case for the database you're migrating, then configure a DNS redirect to create a CNAME record that associates your domain name with the domain name of your RDS database instance. If you're using built-in replication, you need to set up your Amazon RDS or Amazon Aurora instance for replication. Built-in replication, or tools that use it, need the MySQL binary log format (binlog_format) set to ROW. If you're using third-party tools, upfront settings and configurations are usually required before using the tool. Check the documentation from the third-party tool. For more information about instance preparation and prerequisites, see Set up the external server for replication for MySQL. Source database preparation and prerequisites If you choose to use Database Migration Service, you need to configure your source database before connecting to it. Review the migration jobs before implementing the jobs. For more information, see Configure your source for MySQL. Depending on your database engine, you might need to change certain features. For example, Cloud SQL for MySQL supports only the InnoDB database engine. You need to change MyISAM tables to InnoDB. Some third-party migration tools require that all LOB columns are nullable. Any LOB columns that are NOT NULL need to have that constraint removed during migration. Take baseline measurements on your source environment in production use. Consider the following: Measure the size of your data, as well as your workload's performance. How long do important queries or transactions take, on average? How long during peak times? Document the baseline measurements for later comparison, to help you decide if the fidelity of your database migration is satisfactory. Decide if you can switch your production workloads and decommission your source environment, or if you still need it for fallback purposes. Cloud SQL setup Carefully choose the size and specifications of your target Cloud SQL for MySQL database instance to match the source for similar performance needs. Pay special attention to disk size and throughput, IOPS, and number of vCPUs. Incorrect sizing can lead to long migration times, database performance problems, database errors, and application performance problems. You must confirm the following properties and requirements before you create your Cloud SQL instances, because they can't be changed later without recreating them. Choose the project and region of your target Cloud SQL instances carefully. Cloud SQL instances can't be migrated between Google Cloud projects and regions without data transfer. Migrate to a matching major version on Cloud SQL. For example, if your source uses MySQL 8.0.34 on Amazon RDS or Amazon Aurora, you should migrate to Cloud SQL for MySQL version 8.0.34. Replicate user information separately, if you are using built-in database engine backups or Database Migration Service. Cloud SQL manages users using Google IAM.
For details, review the limitations in the Built-in database engine backups section. Review the database engine configuration flags and compare their source and target instance values. Make sure you understand their impact and whether they need to be the same or not. For example, we recommend running the SHOW VARIABLES command on your source database before the migration, and then run it again on the Cloud SQL database. Update flag settings as needed on the Cloud SQL database to replicate the source settings. Note that not all MySQL flags are allowed on a Cloud SQL instance. For more information about Cloud SQL setup, see the following: General best practices for MySQL Instance configuration options for MySQL Important considerations before Aurora to Cloud SQL for MySQL migration Migration specific setup If you import SQL dump files to Cloud SQL, you might need to create a Cloud Storage bucket. The bucket stores the database dump. If you use replication, you must ensure that the Cloud SQL replica has access to your primary database. This access can be achieved through the documented connectivity options. Depending on your scenario and criticality, you might need to implement a fallback scenario, which usually includes reversing the direction of the replication. In this case, you might need an additional replication mechanism from Cloud SQL back to your source Amazon instance. For most third-party tools, you need to provision migration specific resources. For example, for Striim, you need to use the Google Cloud Marketplace to begin. Then, to set up your migration environment in Striim, you can use the Flow Designer to create and change applications, or you can select a pre-existing template. Applications can also be coded using the Tungsten Query Language (TQL) programming language. Using a data validation dashboard, you can get a visual representation of data handled by your Striim application. You can decommission the resources that connect your Amazon and Google Cloud environment after the migration is completed and validated. For more information, see the following: Manage migration jobs in Database Migration Service Best practices for importing and exporting data Connectivity for MySQL Define the execution tasks Execution tasks implement the migration work itself. The tasks depend on your chosen migration tool, as described in the following subsections. Built-in database engine backups To perform a one-time migration, use the mysqldump utility to create a backup, which copies the data from Amazon RDS to your local client computer. For instructions, see Import a SQL dump file to Cloud SQL for MySQL. You can check the status of import and export operations for your Cloud SQL instance. Database Migration Service migration jobs Define and configure migration jobs in Database Migration Service to migrate data from a source instance to the destination database. Migration jobs connect to the source database instance through user-defined connection profiles. Test all the prerequisites to ensure the job can run successfully. Choose a time when your workloads can afford a small downtime for the migration and production cut-over. In Database Migration Service, the migration begins with the initial full backup and load, followed by a continuous flow of changes from the source to the destination database instance. 
Database Migration Service needs a few seconds to acquire a read lock on all the tables in your source Amazon RDS or Amazon Aurora instance so that it can perform the initial full dump in a consistent way. To achieve the read lock, it might need a write downtime of anywhere between 1 and 49 seconds. The downtime depends on the number of tables in your database, with 1 second corresponding to 100 tables and 9 seconds corresponding to 10,000 tables. The migration process with Database Migration Service ends with the promotion operation. Promoting a database instance disconnects the destination database from the flow of changes coming from the source database, and then the now standalone destination instance is promoted to a primary instance. This approach is also sometimes called a production switch. For more information about migration jobs in Database Migration Service, see the following: Review a migration job Manage migration jobs Promote a migration For a detailed migration setup process, see Migrate a database to Cloud SQL for MySQL by using Database Migration Service. In Database Migration Service, the migration is performed by starting and running a migration job. Database engine built-in replication You can use built-in replication from Amazon RDS to an external Cloud SQL for MySQL instance. For MySQL, you first need to start with an initial dump that can be stored in a Cloud Storage bucket, and then import the data into Cloud SQL for MySQL. Then, you start the replication process. Monitor the replication, and stop the writes on your source instance at an appropriate time. Check the replication status again to make sure that all of the changes were replicated, and then promote the Cloud SQL for MySQL replica to a standalone instance to complete your migration. For detailed instructions about how to set up built-in replication from an external server like Amazon RDS or Amazon Aurora for MySQL, see About replicating from an external server and Configure Cloud SQL and the external server for replication. For more information and guidance, see the following: Export your database to a Cloud Storage bucket Use a managed import to set up replication from external databases Start replication on the external server Third-party tools Define any execution tasks for the third-party tool you've chosen. This section focuses on Striim as an example. Striim uses applications that acquire data from various sources, process the data, and then deliver the data to other applications or targets. Applications can be created graphically by using the web client, and they contain sources, targets, and other logical components organized into one or more flows. To set up your migration environment in Striim, you can use the Flow Designer feature to create and change applications, or you can select from a number of pre-existing templates. Applications can also be coded by using the TQL programming language. You can get a visual representation of data handled by your Striim application by using a data validation dashboard. For a quick start with Striim in Google Cloud, see Running Striim in the Google Cloud. To learn more about Striim's basic concepts, see Striim concepts. Make sure that you also read the best practices guide for Switching from an initial load to continuous replication. Consider the following best practices for your data migration: Inform the involved teams whenever each of the plan steps starts and finishes.
If any of the steps take longer than expected, compare the time elapsed with the maximal amount of time allotted for each step. Issue regular intermediary updates to involved teams when this happens. If the time span is greater than the maximal amount of time reserved for each step in the plan, consider rolling back. Make "go or no-go" decisions for every step of the migration and cut-over plan. Consider rollback actions or alternative scenarios for each of the steps. Define fallback scenarios Define fallback action items for each migration execution task, to safeguard against unforeseen issues that might occur during the migration process. The fallback tasks usually depend on the migration strategy and tools used. Fallback might require significant effort. As a best practice, don't perform production cut-over until your test results are satisfactory. Both the database migration and the fallback scenario should be properly tested to avoid a severe outage. Define success criteria and timebox all your migration execution tasks. Doing a migration dry run helps collect information about the expected times for each task. For example, for a scheduled maintenance migration, you can afford the downtime represented by the cut-over window. However, it's important to plan your next action in case the one-time migration job or the restore of the backup fails midway. Depending on how much time of your planned downtime has elapsed, you might have to postpone the migration if the migration task doesn't finish in the expected amount of time. A fallback plan usually refers to rolling back the migration after you perform the production cut-over, if issues on the target instance appear. If you implement a fallback plan, remember that it must be treated as a full database migration, including planning and testing. If you choose not to have a fallback plan, make sure you understand the possible consequences. Having no fallback plan can add unforeseen effort and cause avoidable disruptions in your migration process. Although a fallback is a last resort, and most database migrations don't end up using it, we recommend that you always have a fallback strategy. Simple fallback In this fallback strategy, you switch your applications back to the original source database instance. Adopt this strategy if you can afford downtime when you fall back or if you don't need the transactions committed on the new target system. If you do need all the written data on your target database, and you can afford some downtime, you can consider stopping writes to your target database instance, taking built-in backups and restoring them on your source instance, and then re-connecting your applications to the initial source database instance. Depending on the nature of your workload and amount of data written on the target database instance, you could bring it into your initial source database system at a later time, especially if your workloads aren't dependent on any specific record creation time or any time ordering constraints. Reverse replication In this strategy, you replicate the writes that happen on your new target database after production cut-over back to your initial source database. In this way, you keep the original source in sync with the new target database and have the writes happening on the new target database instance. 
Its main disadvantage is that you can't test the replication stream until after you cut over to the target database instance. Therefore, this approach doesn't allow end-to-end testing, and it leaves a short period with no fallback available. Choose this approach when you can still keep your source instance for some time and you migrate using the continuous replication migration. Forward replication This strategy is a variation of reverse replication. You replicate the writes on your new target database to a third database instance of your choice. You can point your applications to this third database, which connects to the server and runs read-only queries while the server is unavailable. You can use any replication mechanism, depending on your needs. The advantage of this approach is that it can be fully end-to-end tested. Take this approach when you want to be covered by a fallback at all times or when you must discard your initial source database shortly after the production cut-over. Duplicate writes If you choose a Y (writing and reading) or data-access microservice migration strategy, this fallback plan is already set. This strategy is more complicated, because you need to refactor applications or develop tools that connect to your database instances. Your applications write to both initial source and target database instances, which lets you perform a gradual production cut-over until you are using only your target database instances. If there are any issues, you connect your applications back to the initial source with no downtime. You can discard the initial source and the duplicate writing mechanism when you're confident that the migration completed with no issues. We recommend this approach when it's critical to have no migration downtime and a reliable fallback in place, and when you have the time and resources to perform application refactoring. Perform testing and validation The goals of this step are to test and validate the following: Successful migration of the data in the database. Integration with existing applications after they are switched to use the new target instance. Define the key success factors, which are subjective to your migration. The following are examples of subjective factors: Which data to migrate. For some workloads, it might not be necessary to migrate all of the data. You might not want to migrate data that is already aggregated, redundant, archived, or old. You might archive that data in a Cloud Storage bucket, as a backup. An acceptable percentage of data loss. This particularly applies to data used for analytics workloads, where losing part of the data does not affect general trends or performance of your workloads. Data quality and quantity criteria, which you can apply to your source environment and compare to the target environment after the migration. Performance criteria. Some business transactions might be slower in the target environment, but the processing time is still within defined expectations. The storage configurations in your source environment might not map directly to Google Cloud environment targets. For example, configurations that use General Purpose SSD (gp2 and gp3) volumes with IOPS burst performance, or Provisioned IOPS SSD volumes, might not have a direct equivalent. To compare and properly size the target instances, benchmark your source instances in both the assessment and validation phases. In the benchmarking process, you apply production-like sequences of operations to the database instances.
During this time, you capture and process metrics to measure and compare the relative performance of both source and target systems. For conventional, server based configurations, use relevant measurements observed during peak loads. For flexible resource capacity models like Aurora Serverless, consider looking at historical metric data to observe your scaling needs. The following tools can be used for testing, validation, and database benchmarking: HammerDB: an open source database benchmarking and load testing tool. It supports complex transactional and analytic workloads, based on industry standards, on multiple database engines (both TPROC-C and TPROC-H). HammerDB has detailed documentation and a wide community of users. You can share and compare results across several database engines and storage configurations. For more information, see Load testing SQL Server using HammerDB and Benchmark Amazon RDS SQL Server performance using HammerDB. DBT2 Benchmark Tool: benchmarking specialized for MySQL. A set of database workload kits mimics an application for a company that owns warehouses and involves a mix of read and write transactions. Use this tool if you want to use a ready-made online transaction processing (OLTP) load test. DbUnit: an open source unit testing tool used to test relational database interactions in Java. The setup and use is straightforward, and it supports multiple database engines (MySQL, PostgreSQL, SQL Server, and others). However, the test execution can be slow sometimes, depending on the size and complexity of the database. We recommend this tool when simplicity is important. DbFit: an open source database testing framework that supports test-driven code development and automated testing. It uses a basic syntax for creating test cases and features data-driven testing, version control, and test result reporting. However, support for complex queries and transactions is limited and it doesn't have large community support or extensive documentation, compared to other tools. We recommend this tool if your queries are not complex and you want to perform automated tests and integrate them with your continuous integration and delivery process. To run an end-to-end test, including testing of the migration plan, always perform a migration dry run exercise. A dry run performs the full-scope database migration without switching any production workloads, and it offers the following advantages: Lets you ensure that all objects and configurations are properly migrated. Helps you define and execute your migration test cases. Offers insights into the time needed for the actual migration, so you can calibrate your timeline. Represents an occasion to test, validate, and adapt the migration plan. Sometimes you can't plan for everything in advance, so this helps you to spot any gaps. Data testing can be performed on a small set of the databases to be migrated or the entire set. Depending on the total number of databases and the tools used for implementing their migration, you can decide to adopt a risk based approach. With this approach, you perform data validation on a subset of databases migrated through the same tool, especially if this tool is a managed migration service. For testing, you should have access to both source and target databases and do the following tasks: Compare source and target schemas. Check if all tables and executables exist. Check row counts and compare data at the database level. Run custom data validation scripts. 
Test that the migrated data is also visible in the applications that switched to use the target database (migrated data is read through the application). Perform integration testing between the switched applications and the target database by testing various use cases. This testing includes both reading and writing data to the target databases through the applications so that the workloads fully support migrated data together with newly created data. Test the performance of the most frequently used database queries to observe if there's any degradation due to misconfigurations or wrong sizing. Ideally, all these migration test scenarios are automated and repeatable on any source system. The automated test suite is adapted to run against the switched applications. If you're using Database Migration Service as your migration tool, see Verify a migration. Data Validation Tool For performing data validation, we recommend that you use the Data Validation Tool (DVT). The DVT is an open source Python CLI tool, backed by Google, that provides an automated and repeatable solution for validation across different environments. The DVT can help streamline the data validation process by offering customized, multi-level validation functions to compare source and target tables on the table, column, and row level. You can also add validation rules. The DVT covers many Google Cloud data sources, including AlloyDB for PostgreSQL, BigQuery, Cloud SQL, Spanner, JSON, and CSV files on Cloud Storage. It can also be integrated with Cloud Run functions and Cloud Run for event-based triggering and orchestration. The DVT supports the following types of validations: Schema-level comparisons Column (including 'AVG', 'COUNT', 'SUM', 'MIN', 'MAX', 'GROUP BY', and 'STRING_AGG') Row (including hash and exact match in field comparisons) Custom query results comparison For more information about the DVT, see the Git repository and Data validation made easy with Google Cloud's Data Validation Tool. Perform the migration The migration tasks include the activities to support the transfer from one system to another. Consider the following best practices for your data migration: Inform the involved teams whenever a plan step begins and finishes. If any of the steps take longer than expected, compare the time elapsed with the maximum amount of time allotted for that step. Issue regular interim updates to involved teams when this happens. If the time span is greater than the maximum amount of time reserved for each step in the plan, consider rolling back. Make "go or no-go" decisions for every step of the migration and cut-over plan. Consider rollback actions or alternative scenarios for each of the steps. Perform the migration by following your defined execution tasks, and refer to the documentation for your selected migration tool. Perform the production cut-over The high-level production cut-over process can differ depending on your chosen migration strategy. If you can have downtime on your workloads, then your migration cut-over begins by stopping writes to your source database. For continuous replication migrations, you typically do the following high-level steps in the cut-over process: Stop writing to the source database. Drain the source. Stop the replication process. Deploy the applications that point to the new target database. After the data has been migrated by using the chosen migration tool, you validate the data in the target database.
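For example, the row-count part of such a custom validation script can be a small shell wrapper around psql. The following is only a sketch for a PostgreSQL source and target: the connection strings, database name, and table list are hypothetical placeholders, and the check complements (rather than replaces) schema comparisons and tools such as the DVT.

#!/usr/bin/env bash
# Hypothetical connection strings; supply passwords through .pgpass or PGPASSWORD.
SOURCE="host=source.example.com port=5432 dbname=appdb user=validator"
TARGET="host=10.0.0.5 port=5432 dbname=appdb user=validator"

# Compare row counts for a few business-critical tables.
for tbl in customers orders order_items; do
  src_count=$(psql "$SOURCE" -At -c "SELECT count(*) FROM ${tbl};")
  tgt_count=$(psql "$TARGET" -At -c "SELECT count(*) FROM ${tbl};")
  if [ "$src_count" = "$tgt_count" ]; then
    echo "OK    ${tbl}: ${src_count} rows"
  else
    echo "DIFF  ${tbl}: source=${src_count} target=${tgt_count}"
  fi
done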
You confirm that the source database and the target databases are in sync and the data in the target instance adheres to your chosen migration success standards. After the data validation passes your criteria, you can perform the application-level cut-over. Deploy the workloads that have been refactored to use the new target instance. You deploy the versions of your applications that point to the new target database instance. The deployments can be performed through rolling updates, staged releases, or a blue-green deployment pattern. Some application downtime might be incurred. Follow the best practices for your production cut-over: Monitor your applications that work with the target database after the cut-over. Define a monitoring period to decide whether or not you need to implement your fallback plan. Note that your Cloud SQL or AlloyDB for PostgreSQL instance might need a restart if you change some database flags. Consider that the effort of rolling back the migration might be greater than fixing issues that appear on the target environment. Clean up the source environment and configure the Cloud SQL or AlloyDB for PostgreSQL instance After the cut-over is completed, you can delete the source databases. We recommend performing the following important actions before the cleanup of your source instance: Create a final backup of each source database. These backups provide you with an end state of the source databases. The backups might also be required in some cases for compliance with some data regulations. Collect the database parameter settings of your source instance. Alternatively, check that they match the ones you've gathered in the inventory building phase. Adjust the target instance parameters to match the ones from the source instance. Collect database statistics from the source instance and compare them to the ones in the target instance. If the statistics are disparate, it's hard to compare the performance of the source instance and target instance. In a fallback scenario, you might want to implement the replication of your writes on the Cloud SQL instance back to your original source database. The setup resembles the migration process but would run in reverse: the initial source database would become the new target. As a best practice to keep the source instances up to date after the cut-over, replicate the writes performed on the target Cloud SQL instances back to the source database. If you need to roll back, you can fall back to your old source instances with minimal data loss. Besides the source environment cleanup, you must make the following critical configurations for your Cloud SQL instances: Configure a maintenance window for your primary instance to control when disruptive updates can occur. Configure the storage on the instance so that you have at least 20% available space to accommodate any critical database maintenance operations that Cloud SQL might perform. To receive an alert if available disk space falls below 20%, create a metrics-based alerting policy for the disk utilization metric. Don't start an administrative operation before the previous operation has completed. For more information about maintenance and best practices, see About maintenance on Cloud SQL instances.
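These instance-level settings can be applied with gcloud. The following is a minimal sketch under stated assumptions: the instance name and window values are placeholders, and the disk-utilization alert itself is usually defined as a Cloud Monitoring alerting policy rather than through the gcloud sql commands shown here.

# Placeholder instance name; pick a maintenance window in a low-traffic period for your workloads.
gcloud sql instances patch my-prod-instance \
    --maintenance-window-day=SUN \
    --maintenance-window-hour=3 \
    --maintenance-release-channel=production

# Let Cloud SQL grow storage automatically so that free space doesn't shrink below the
# roughly 20% headroom that critical maintenance operations need.
gcloud sql instances patch my-prod-instance --storage-auto-increase

# The alert on the disk utilization metric is created as a Cloud Monitoring alerting policy
# (in the console or through the Monitoring API), so it isn't shown in this sketch.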
Optimize your environment after migration Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. You repeat this sequence until you've achieved your optimization goals. For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization. Establish your optimization requirements Review the following optimization requirements for your Google Cloud environment and choose the ones that best fit your workloads: Increase the reliability and availability of your database With Cloud SQL, you can implement a high availability and disaster recovery strategy that aligns with your recovery time objective (RTO) and recovery point objective (RPO). To increase reliability and availability, consider the following: For read-heavy workloads, add read replicas to offload traffic from the primary instance. For mission-critical workloads, use the high-availability configuration, replicas for regional failover, and a robust disaster recovery configuration. For less critical workloads, automated and on-demand backups can be sufficient. To prevent accidental removal of instances, use instance deletion protection. Consider using Cloud SQL Enterprise Plus edition to benefit from increased availability, log retention, and near-zero downtime planned maintenance. For more information about Cloud SQL Enterprise Plus, see Introduction to Cloud SQL editions and Near-zero downtime planned maintenance. For more information on increasing the reliability and availability of your database, see the following: Promote replicas for regional migration or disaster recovery Cloud SQL database disaster recovery About Cloud SQL backups Prevent deletion of an instance documentation Increase the cost effectiveness of your database infrastructure To have a positive economic impact, your workloads must use the available resources and services efficiently. Consider the following options: Provide the database with the minimum required storage capacity by doing the following: To scale storage capacity automatically as your data grows, enable automatic storage increases. However, ensure that you configure your instances to have some buffer for peak workloads. Remember that database workloads tend to increase over time. Identify possible overestimated resources: Rightsizing your Cloud SQL instances can reduce the infrastructure cost without adding risk to the capacity management strategy. Cloud Monitoring provides predefined dashboards that help identify the health and capacity utilization of many Google Cloud components, including Cloud SQL. For details, see Create and manage custom dashboards. Identify instances that don't require high availability or disaster recovery configurations, and remove them from your infrastructure. Remove tables and objects that are no longer needed. You can store them in a full backup or an archival Cloud Storage bucket. Evaluate the most cost-effective storage type (SSD or HDD) for your use case. For most use cases, SSD is the most efficient and cost-effective choice. If your datasets are large (10 TB or more), latency-insensitive, or infrequently accessed, HDD might be more appropriate. For details, see Choose between SSD and HDD storage. Purchase committed use discounts for workloads with predictable resource needs.
Use Active Assist to get cost insights and recommendations. For more information and options, see Do more with less: Introducing Cloud SQL cost optimization recommendations with Active Assist. Increase the performance of your database infrastructure Minor database-related performance issues frequently have the potential to impact the entire operation. To maintain and increase your Cloud SQL instance performance, consider the following guidelines: If you have a large number of database tables, they can affect instance performance and availability, and cause the instance to lose its SLA coverage. Ensure that your instance isn't constrained on memory or CPU. For performance-intensive workloads, ensure that your instance has at least 60 GB of memory. For slow database inserts, updates, or deletes, check the locations of the writer and database; sending data over long distances introduces latency. Improve query performance by using the predefined Query Insights dashboard in Cloud Monitoring (or similar database engine built-in features). Identify the most expensive commands and try to optimize them. Prevent database files from becoming unnecessarily large. Set autogrow in MBs rather than as a percentage, using increments appropriate to the requirement. Check reader and database location. Latency affects read performance more than write performance. Consider using Cloud SQL Enterprise Plus edition to benefit from increased machine configuration limits and data cache. For more information, see Introduction to Cloud SQL editions. For more information about increasing performance, see Performance in "Diagnose issues". Increase database observability capabilities Diagnosing and troubleshooting issues in applications that connect to database instances can be challenging and time-consuming. For this reason, a centralized place where all team members can see what's happening at the database and instance level is essential. You can monitor Cloud SQL instances in the following ways: Cloud SQL uses built-in memory custom agents to collect query telemetry. Use Cloud Monitoring to collect measurements of your service and the Google Cloud resources that you use. Cloud Monitoring includes predefined dashboards for several Google Cloud products, including a Cloud SQL monitoring dashboard. You can create custom dashboards that help you monitor metrics and set up alert policies so that you can receive timely notifications. Alternatively, you can consider using third-party monitoring solutions that are integrated with Google Cloud, such as Datadog and Splunk. Cloud Logging collects logging data from common application components. Cloud Trace collects latency data and executed query plans from applications to help you track how requests propagate through your application. Database Center provides an AI-assisted, centralized database fleet overview. You can monitor the health of your databases, availability configuration, data protection, security, and industry compliance. General Cloud SQL best practices and operational guidelines Apply the best practices for Cloud SQL to configure and tune the database. Some important Cloud SQL general recommendations are as follows: If you have large instances, we recommend that you split them into smaller instances, when possible. Configure storage to accommodate critical database maintenance. Ensure you have at least 20% available space to accommodate any critical database maintenance operations that Cloud SQL might perform. 
Having too many database tables can affect database upgrade time. Ideally, aim to have under 10,000 tables per instance. Choose the appropriate size for your instances to account for transaction (binary) log retention, especially for instances with high write activity. To be able to efficiently handle any database performance issues that you might encounter, use the following guidelines until your issue is resolved: Scale up infrastructure: Increase resources (such as disk throughput, vCPU, and RAM). Depending on the urgency and your team's availability and experience, vertically scaling your instance can resolve most performance issues. Later, you can further investigate the root cause of the issue in a test environment and consider options to eliminate it. Perform and schedule database maintenance operations: index defragmentation, statistics updates, vacuum analyze, and reindexing of heavily updated tables. Check if and when these maintenance operations were last performed, especially on the affected objects (tables, indexes). Find out whether there was a change from normal database activity, for example, a recently added column or a large number of updates on a table. Perform database tuning and optimization: Are the tables in your database properly structured? Do the columns have the correct data types? Is your data model right for the type of workload? Investigate your slow queries and their execution plans. Are they using the available indexes? Check for index scans, locks, and waits on other resources. Consider adding indexes to support your critical queries. Eliminate non-critical indexes and foreign keys. Consider rewriting complex queries and joins. The time it takes to resolve your issue depends on the experience and availability of your team and can range from hours to days. Scale out your reads: Consider having read replicas. When scaling vertically isn't sufficient for your needs, and database tuning and optimization measures aren't helping, consider scaling horizontally. Routing read queries from your applications to a read replica improves the overall performance of your database workload. However, it might require additional effort to change your applications to connect to the read replica. Database re-architecture: Consider partitioning and indexing the database. This operation requires significantly more effort than database tuning and optimization, and it might involve a data migration, but it can be a long-term fix. Sometimes, poor data model design can lead to performance issues, which can be partially compensated by vertical scale-up. However, a proper data model is a long-term fix. Consider partitioning your tables. Archive data that isn't needed anymore, if possible. Normalize your database structure, but remember that denormalizing can also improve performance. Database sharding: You can scale out your writes by sharding your database. Sharding is a complicated operation and involves re-architecting your database and applications in a specific way and performing data migration. You split your database instance into multiple smaller instances by using specific partitioning criteria. The criteria can be based on customer or subject. This option lets you horizontally scale both your writes and reads. However, it increases the complexity of your database and application workloads. It might also lead to unbalanced shards called hotspots, which can outweigh the benefit of sharding. Specifically for Cloud SQL for MySQL, make sure that your tables have a primary or unique key. Cloud SQL uses row-based replication, which works best on tables with primary or unique keys.
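One way to find offending tables before you migrate is a query against information_schema. The following is a sketch for a MySQL-compatible source; the host and user are placeholders, and the query checks only for missing primary keys (extend the constraint_type filter if you also want to accept unique keys).

# List base tables that have no PRIMARY KEY constraint (placeholder connection values).
mysql -h source.example.com -u admin -p --execute="
  SELECT t.table_schema, t.table_name
  FROM information_schema.tables AS t
  LEFT JOIN information_schema.table_constraints AS c
    ON c.table_schema = t.table_schema
   AND c.table_name = t.table_name
   AND c.constraint_type = 'PRIMARY KEY'
  WHERE t.table_type = 'BASE TABLE'
    AND c.constraint_name IS NULL
    AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');"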
For more details, see General best practices and Operational guidelines for Cloud SQL for MySQL. What's next Read about other AWS to Google Cloud migration journeys. Learn how to compare AWS and Azure services to Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Authors: Alex Cârciu | Solutions Architect, Marco Ferrari | Cloud Solutions Architect. Other contributors: Derek Downey | Developer Relations Engineer, Paweł Krentowski | Technical Writer, Matthew Smith | Strategic Cloud Engineer, Somdyuti Paul | Data Management Specialist, Zach Seils | Networking Specialist Send feedback \ No newline at end of file diff --git a/Migrate_from_Amazon_RDS_and_Amazon_Aurora_for_PostgreSQL_to_Cloud_SQL_and_AlloyDB_for_PostgreSQL.txt b/Migrate_from_Amazon_RDS_and_Amazon_Aurora_for_PostgreSQL_to_Cloud_SQL_and_AlloyDB_for_PostgreSQL.txt new file mode 100644 index 0000000000000000000000000000000000000000..37437e63286dcb6e122f70195a6c077689622f2d --- /dev/null +++ b/Migrate_from_Amazon_RDS_and_Amazon_Aurora_for_PostgreSQL_to_Cloud_SQL_and_AlloyDB_for_PostgreSQL.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-aws-rds-aurora-to-postgresql +Date Scraped: 2025-02-23T11:52:05.049Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate from AWS to Google Cloud: Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL and AlloyDB for PostgreSQL Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-12 UTC Google Cloud provides tools, products, guidance, and professional services to migrate from Amazon Relational Database Service (RDS) or Amazon Aurora to Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL. This document is intended for cloud and database administrators who want to plan, implement, and validate a database migration project. It's also intended for decision makers who are evaluating the opportunity to migrate and want an example of what a migration might look like. This document focuses on a homogeneous database migration, which is a migration where the source and destination databases are the same database technology. In this migration guide, the source is Amazon RDS or Amazon Aurora for PostgreSQL, and the destination is Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL. This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents: Get started Migrate from Amazon EC2 to Compute Engine Migrate from Amazon S3 to Cloud Storage Migrate from Amazon EKS to GKE Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL and AlloyDB for PostgreSQL (this document) Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server Migrate from AWS Lambda to Cloud Run For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later.
For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. Assess the source environment In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. The database assessment phase helps you choose the size and specifications of your target Cloud SQL database instance that matches the source for similar performance needs. Pay special attention to disk size and throughput, IOPS, and number of vCPUs. Migrations might struggle or fail due to incorrect target database instance sizing. Incorrect sizing can lead to long migration times, database performance problems, database errors, and application performance problems. When deciding on the Cloud SQL instance, keep in mind that disk performance is based on the disk size and the number of vCPUs. The following sections rely on Migrate to Google Cloud: Assess and discover your workloads, and integrate the information in that document. Build an inventory of your Amazon RDS or Amazon Aurora instances To define the scope of your migration, you create an inventory and collect information about your Amazon RDS and Amazon Aurora instances. Ideally, this should be an automated process because manual approaches are prone to error and can lead to incorrect assumptions. Amazon RDS or Amazon Aurora and Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL might not have similar features, instance specifications, or operation. Some functionalities might be implemented differently or be unavailable. Areas of differences might include infrastructure, storage, authentication and security, replication, backup, high availability, resource capacity model and specific database engine feature integrations, and extensions. Depending on the database engine type, instance size, and architecture, there are also differences in the default values of database parameter settings. Benchmarking can help you to better understand the workloads that are to be migrated and contributes to defining the right architecture of the migration target environment. 
Collecting information about performance is important to help estimate the performance needs of the Google Cloud target environment. Benchmarking concepts and tools are detailed in Perform testing and validation of the migration process, but they also apply to the inventory-building stage. Tools for assessments For an initial overview assessment of your current infrastructure, we recommend that you use Google Cloud Migration Center along with other specialized database assessment tools such as migVisor and Database Migration Assessment Tool (DMA). With Migration Center, you can perform a complete assessment of your application and database landscape, including the technical fit for a database migration to Google Cloud. You receive size and configuration recommendations for each source database, and create a total cost of ownership (TCO) report for servers and databases. For more information about assessing your AWS environment by using Migration Center, see Import data from other cloud providers. In addition to Migration Center, you can use the specialized tool migVisor. migVisor supports a variety of database engines and is particularly suitable for heterogeneous migrations. For an introduction to migVisor, see the migVisor overview. migVisor can identify artifacts and incompatible proprietary database features that can block the migration, and can point to workarounds. migVisor can also recommend a target Google Cloud solution, including initial sizing and architecture. The migVisor database assessment output provides the following: Comprehensive discovery and analysis of current database deployments. Detailed report of migration complexity, based on the proprietary features used by your database. Financial report with details on post-migration cost savings, migration costs, and new operating budget. Phased migration plan to move databases and associated applications with minimal disruption to the business. To see some examples of assessment outputs, see migVisor - Cloud migration assessment tool. Note that migVisor temporarily increases database server utilization. Typically, this additional load is less than 3%, and the assessment can be run during non-peak hours. The migVisor assessment output helps you to build a complete inventory of your RDS instances. The report includes generic properties (database engine version and edition, CPUs, and memory size), as well as details about database topology, backup policies, parameter settings, and special customizations in use. If you prefer to use open source tools, you can use data collector scripts with (or instead of) the mentioned tools. These scripts can help you collect detailed information (about workloads, features, database objects, and database code) and build your database inventory. Also, scripts usually provide a detailed database migration assessment, including a migration effort estimation. We recommend the open source tool DMA, which was built by Google engineers. It offers a complete and accurate database assessment, including features in use, database logic, and database objects (including schemas, tables, views, functions, triggers, and stored procedures). To use DMA, download the collection scripts for your database engine from the Git repository, and follow the instructions. Send the output files to Google Cloud for analysis. Google Cloud creates and delivers a database assessment readout, and provides the next steps in the migration journey.
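If you do script parts of the collection yourself, a minimal sketch might combine the AWS CLI for instance-level specifications with catalog queries for database-level details. The host and user below are placeholders, and the extension listing only reflects the database you connect to.

# Instance-level specifications for the RDS and Aurora instances in the current account and region.
aws rds describe-db-instances \
  --query 'DBInstances[].{id:DBInstanceIdentifier,class:DBInstanceClass,engine:Engine,version:EngineVersion,storageGB:AllocatedStorage,iops:Iops,multiAZ:MultiAZ}' \
  --output table

# Database-level details from one PostgreSQL source instance (placeholder connection values).
SRC="host=source.example.com port=5432 dbname=postgres user=inventory"
psql "$SRC" -c "SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size FROM pg_database WHERE NOT datistemplate;"
psql "$SRC" -c "SELECT extname, extversion FROM pg_extension;"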
Identify and document the migration scope and affordable downtime At this stage, you document essential information that influences your migration strategy and tooling. By now, you can answer the following questions: Are your databases larger than 5 TB? Are there any large tables in your database? Are they larger than 16 TB? Where are the databases located (regions and zones), and what's their proximity to applications? How often does the data change? What is your data consistency model? What are the options for destination databases? How compatible are the source and destination databases? Does the data need to reside in some physical locations? Is there data that can be compressed and archived, or is there data that doesn't need migration at all? To define the migration scope, decide what data to keep and what to migrate. Migrating all your databases might take considerable time and effort. Some data might remain in your source database backups. For example, old logging tables or archival data might not be needed. Alternatively, you might decide to move data after the migration process, depending on your strategy and tools. Establish data migration baselines that help you compare and evaluate your outcomes and impacts. These baselines are reference points that represent the state of your data before and after the migration and help you make decisions. It's important to take measurements on the source environment that can help you evaluate your data migration's success. Such measurements include the following: The size and structure of your data. The completeness and consistency of your data. The duration and performance of the most important business transactions and processes. Determine how much downtime you can afford. What are the business impacts of downtime? Are there periods of low database activity, during which there are fewer users affected by downtime? If so, how long are such periods and when do they occur? Consider having a partial write only downtime, while read-only requests are still served. Assess your deployment and administration process After you build the inventories, assess the operational and deployment processes for your database to determine how they need to be adapted to facilitate your migration. These processes are fundamental to how you prepare and maintain your production environment. Consider how you complete the following tasks: Define and enforce security policies for your instances. For example, you might need to replace Amazon Security Groups. You can use Google IAM roles, VPC firewall rules, and VPC Service Controls to control access to your Cloud SQL for PostgreSQL instances and constrain the data within a VPC. Patch and configure your instances. Your existing deployment tools might need to be updated. For example, you might be using custom configuration settings in Amazon parameter groups or Amazon option groups. Your provisioning tools might need to be adapted to use Cloud Storage or Secret Manager to read such custom configuration lists. Manage monitoring and alerting infrastructure. Metric categories for your Amazon source database instances provide observability during the migration process. Metric categories might include Amazon CloudWatch, Performance Insights, Enhanced Monitoring, and OS process lists. Complete the assessment After you build the inventories from your Amazon RDS or Amazon Aurora environment, complete the rest of the activities of the assessment phase as described in Migrate to Google Cloud: Assess and discover your workloads. 
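Before moving on to the plan phase, one way to capture the baseline measurements described earlier in the assessment is to record the heaviest statements and the largest tables on the source. This sketch assumes a PostgreSQL source with the pg_stat_statements extension enabled and uses placeholder connection values; the timing column is named total_exec_time in PostgreSQL 13 and later (total_time in earlier versions).

SRC="host=source.example.com port=5432 dbname=appdb user=inventory"

# Top statements by cumulative execution time, as a performance baseline for later comparison.
psql "$SRC" -c "SELECT round(total_exec_time) AS total_ms, calls, left(query, 60) AS query
                FROM pg_stat_statements
                ORDER BY total_exec_time DESC
                LIMIT 10;"

# Largest tables, to spot anything that affects tooling choices or the migration timeline.
psql "$SRC" -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
                FROM pg_stat_user_tables
                ORDER BY pg_total_relation_size(relid) DESC
                LIMIT 10;"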
Plan and build your foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. For more information about each of these tasks, see the Migrate to Google Cloud: Plan and build your foundation. If you plan to use Database Migration Service for migration, see Networking methods for Cloud SQL for PostgreSQL to understand the networking configurations available for migration scenarios. Monitoring and alerting Use Google Cloud Monitoring, which includes predefined dashboards for several Google Cloud products, including a Cloud SQL monitoring dashboard. Alternatively, you can consider using third-party monitoring solutions that are integrated with Google Cloud, like Datadog and Splunk. For more information, see About database observability. Migrate Amazon RDS and Amazon Aurora for PostgreSQL instances to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL To migrate your instances, you do the following: Choose the migration strategy: continuous replication or scheduled maintenance. Choose the migration tools, depending on your chosen strategy and requirements. Define the migration plan and timeline for each database migration, including preparation and execution tasks. Define the preparation tasks that must be done to ensure the migration tool can work properly. Define the execution tasks, which include work activities that implement the migration. Define fallback scenarios for each execution task. Perform testing and validation, which can be done in a separate staging environment. Perform the migration. Perform the production cut-over. Clean up the source environment and configure the target instance. Perform tuning and optimization. Each phase is described in the following sections. Choose your migration strategy At this step, you have enough information to evaluate and select one of the following migration strategies that best suits your use case for each database: Scheduled maintenance (also called one-time migration): This approach is ideal if you can afford downtime. This option is relatively lower in cost and complexity, because your workloads and services won't require much refactoring. However, if the migration fails before completion, you have to restart the process, which prolongs the downtime. Continuous replication (also called trickle migration): For mission-critical databases, this option offers a lower risk of data loss and near-zero downtime. The efforts are split into several chunks, so if a failure occurs, rollback and repeat takes less time. However, setup is more complex and takes more planning and time. Additional effort is also required to refactor the applications that connect to your database instances. Consider one of the following variations: Using the Y (writing and reading) approach, which is a form of parallel migration, duplicating data in both source and destination instances during the migration. Using a data-access microservice, which reduces refactoring effort required by the Y (writing and reading) approach. For more information about data migration strategies, see Evaluating data migration approaches. 
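Returning to the network connectivity task in the plan and build phase: migrations that use private IP connectivity to Cloud SQL for PostgreSQL commonly depend on private services access. The following is a minimal sketch under stated assumptions: it uses the default VPC network and a hypothetical range name, your address range size will differ, and hybrid connectivity to AWS (Cloud VPN or Cloud Interconnect) is configured separately.

# Reserve an internal IP range for Google-managed services (placeholder name and prefix length).
gcloud compute addresses create google-managed-services-range \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=20 \
    --network=default

# Create the private services access connection that Cloud SQL private IP uses.
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-range \
    --network=default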
The following diagram shows a flowchart based on example questions that you might have when deciding the migration strategy for a single database: The preceding flowchart shows the following decision points: Can you afford any downtime? If no, adopt the continuous replication migration strategy. If yes, continue to the next decision point. Can you afford the downtime represented by the cut-over window while migrating data? The cut-over window represents the amount of time to take a backup of the database, transfer it to Cloud SQL, restore it, and then switch over your applications. If no, adopt the continuous replication migration strategy. If yes, adopt the scheduled maintenance migration strategy. Strategies might vary for different databases, even when they're located on the same instance. A mix of strategies can produce optimal results. For example, migrate small and infrequently used databases by using the scheduled maintenance approach, but use continuous replication for mission-critical databases where having downtime is expensive. Usually, a migration is considered completed when the switch between the initial source instance and the target instance takes place. Any replication (if used) is stopped and all reads and writes are done on the target instance. Switching when both instances are in sync means no data loss and minimal downtime. For more information about data migration strategies and deployments, see Classification of database migrations. Minimize downtime and migration-related impacts Migration configurations that provide no application downtime require a more complicated setup. Find the right balance between setup complexity and downtime scheduled during low-traffic business hours. Each migration strategy has a tradeoff and some impact associated with the migration process. For example, replication processes involve some additional load on your source instances and your applications might be affected by replication lag. Applications (and customers) might have to wait during application downtime, at least as long as the replication lag lasts before using the new database. In practice, the following factors might increase downtime: Database queries can take a few seconds to complete. At the time of migration, in-flight queries might be aborted. The cache might take some time to fill up if the database is large or has a substantial buffer memory. Applications stopped in the source and restarted in Google Cloud might have a small lag until the connection to the Google Cloud database instance is established. Network routes to the applications must be rerouted. Depending on how DNS entries are set up, this can take some time. When you update DNS records, reduce TTL before the migration. The following common practices can help minimize downtime and impact: Find a time period when downtime would have a minimal impact on your workloads. For example, outside normal business hours, during weekends, or other scheduled maintenance windows. Identify parts of your workloads that can undergo migration and production cut-over at different stages. Your applications might have different components that can be isolated, adapted, and migrated with no impact. For example, frontends, CRM modules, backend services, and reporting platforms. Such modules could have their own databases that can be scheduled for migration earlier or later in the process. If you can afford some latency on the target database, consider implementing a gradual migration. 
Use incremental, batched data transfers, and adapt part of your workloads to work with the stale data on the target instance. Consider refactoring your applications to support minimal migration impact. For example, adapt your applications to write to both source and target databases, and therefore implement an application-level replication. Choose your migration tools The most important factor for a successful migration is choosing the right migration tool. Once the migration strategy has been decided, review and decide upon the migration tool. There are many tools available, each optimized for certain migration use cases. Use cases can include the following: Migration strategy (scheduled maintenance or continuous replication). Source and target database engines and engine versions. Environments in which source instances and target instances are located. Database size. The larger the database, the more time it takes to migrate the initial backup. Frequency of the database changes. Availability to use managed services for migration. To ensure a seamless migration and cut-over, you can use application deployment patterns, infrastructure orchestration, and custom migration applications. However, specialized tools called managed migration services can facilitate the process of moving data, applications, or even entire infrastructures from one environment to another. They run the data extraction from the source databases, securely transport data to the target databases, and can optionally modify the data during transit. With these capabilities, they encapsulate the complex logic of migration and offer migration monitoring capabilities. Managed migration services provide the following advantages: Minimize downtime: Services use the built-in replication capabilities of the database engines when available, and perform replica promotion. Ensure data integrity and security: Data is securely and reliably transferred from the source to the destination database. Provide a consistent migration experience: Different migration techniques and patterns can be consolidated into a consistent, common interface by using database migration executables, which you can manage and monitor. Offer resilient and proven migration models: Database migrations are infrequent but critical events. To avoid beginner mistakes and issues with existing solutions, you can use tools from experienced experts, rather than building a custom solution. Optimize costs: Managed migration services can be more cost effective than provisioning additional servers and resources for custom migration solutions. The next sections describe the migration tool recommendations, which depend on the chosen migration strategy. Tools for scheduled maintenance migrations The following subsections describe the tools that can be used for one-time migrations, along with the limitations and best practices of each tool. For one-time migrations to Cloud SQL for PostgreSQL or to AlloyDB for PostgreSQL, we recommend that you use database engine backups to both export the data from your source instance and import that data into your target instance. One-time migration jobs are not supported in Database Migration Service. Built-in database engine backups When significant downtime is acceptable, and your source databases are relatively static, you can use the database engine's built-in dump and load (also sometimes called backup and restore) capabilities. 
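A minimal dump-and-load sketch with these built-in utilities might look like the following. The hosts, user names, database name, and parallelism are placeholders; the limitations and flag choices are discussed in the paragraphs that follow.

# Dump roles (globals) and one database from the source instance (placeholder hosts and credentials).
pg_dumpall -h source.example.com -U admin --globals-only > globals.sql
pg_dump -h source.example.com -U admin \
    --format=directory --jobs=4 --compress=6 \
    --file=appdb_dump appdb

# Restore into the Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL target.
# Assumes the appdb database already exists on the target; some role attributes in globals.sql
# (for example, SUPERUSER) might not be accepted by managed instances and can be edited out.
psql -h 10.0.0.5 -U postgres -d postgres -f globals.sql
pg_restore -h 10.0.0.5 -U postgres --jobs=4 --dbname=appdb appdb_dump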
Some effort is required for setup and synchronization, especially for a large number of databases, but database engine backups are usually readily available and straightforward to use. This approach is suitable for any database size, and it's usually more effective than other tools for large instances. Database engine backups have the following general limitations: Backups can be error-prone, particularly if performed manually. Data can be unsecured if the snapshots are not properly managed. Backups lack proper monitoring capabilities. Backups require effort to scale when migrating many databases. You can use the PostgreSQL built-in backup utilities, pg_dump and pg_dumpall, to migrate to both Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL. However, the pg_dump and pg_dumpall utilities have the following general limitations: The built-in backup utilities should be used to migrate databases that are 500 GB in size or less. Dumping and restoring large databases can take a long time and might require substantial disk space and memory. The pg_dump utility doesn't include users and roles. To migrate these user accounts and roles, you can use the pg_dumpall utility. Cloud Storage supports a maximum single-object size of up to 5 terabytes (TB). If you have databases larger than 5 TB, the export operation to Cloud Storage fails. In this case, you need to break down your export files into smaller segments. If you choose to use these utilities, consider the following restrictions and best practices: Compress data to reduce cost and transfer duration. Use the --jobs option with the pg_dump command to run a given number of dump jobs simultaneously. Use the -Z (--compress) option with the pg_dump command to specify the compression level to use. Acceptable values for this option range from 0 to 9. The default compression level is 6. Higher values decrease the size of the dump file, but can cause high resource consumption on the client host. If the client host has enough resources, higher compression levels can further lower the dump file size. Use the correct flags when you create a SQL dump file. Verify the imported database. Monitor the output of the pg_restore utility for any error messages during the restore process. Review PostgreSQL logs for any warnings or errors during the restore process. Run basic queries and table counts to verify your database integrity. For further reading about limitations and best practices, see the following resources: Best practices for importing and exporting data with Cloud SQL for PostgreSQL Cloud SQL for PostgreSQL Known Issues Import a DMP file in AlloyDB for PostgreSQL Migrate users with credentials to AlloyDB for PostgreSQL or another Cloud SQL instance Other approaches for scheduled maintenance migration Using approaches other than the built-in backup utilities might provide more control and flexibility in your scheduled maintenance migration. These other types of approaches let you perform transformations, checks, or other operations on your data while doing the migration. You can consolidate multiple instances or databases into a single instance or database. Consolidating instances can help reduce operational costs and ease your scalability issues. One such alternative to backup utilities is to use flat files to export and import your data. For more information about flat file import, see Export and import using CSV files for Cloud SQL for PostgreSQL. Another alternative is to use a managed import to set up replication from an external PostgreSQL database.
When you use a managed import, there is an initial data load from the external database into the Cloud SQL for PostgreSQL instance. This initial load uses a service that extracts data from the external server (the RDS or Aurora instance) and imports it into the Cloud SQL for PostgreSQL instance directly. For more information, see use a managed import to set up replication from external databases. An alternative way to do a one-time migration of your data is to export the tables from your source PostgreSQL database to CSV or SQL files. You can then import the CSV or SQL file into Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL. To export the data from your source instance, you can use the aws_s3 extension for PostgreSQL. Alternatively, you can use Amazon Database Migration Service and an S3 bucket as a target. For detailed information about this approach, see the following resources: Installing the aws_s3 extension for PostgreSQL Using Amazon S3 as a target for Amazon Database Migration Service You can also manually import data into an AlloyDB for PostgreSQL instance. The supported formats are as follows: CSV: Each file in this format contains the contents of one table in the database. You can load the data from a CSV file by using the psql command-line program. For more information, see Import a CSV file. DMP: This file format contains the archive of an entire PostgreSQL database. You import data from this file by using the pg_restore utility. For more information, see Import a DMP file. SQL: This file format contains the text reconstruction of a PostgreSQL database. The data in this file is processed by using the psql command line. For more information, see Import a SQL file. Tools for continuous replication migrations The following diagram shows a flowchart with questions that can help you choose the migration tool for a single database when you use a continuous replication migration strategy: The preceding flowchart shows the following decision points: Do you prefer to use managed migration services? If yes, can you afford a few seconds of write downtime while the replication step takes place? If yes, use Database Migration Service. If no, explore other migration options. If no, you must evaluate whether database engine built-in replication is supported: If yes, we recommend that you use built-in replication. If no, we recommend that you explore other migration options. The following sections describe the tools that can be used for continuous migrations, along with their limitations and best practices. Database Migration Service for continuous replication migration Database Migration Service provides a specific job type for continuous migrations. These continuous migration jobs support high-fidelity migrations to Cloud SQL for PostgreSQL and to AlloyDB for PostgreSQL. When you use Database Migration Service to migrate to Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL, there are prerequisites and limitations that are associated with each target instance: If you are migrating to Cloud SQL for PostgreSQL, use the following resources: The full list of prerequisites is provided in Configure your source for PostgreSQL. The full list of limitations is provided in Known limitations for PostgreSQL. If you are migrating to AlloyDB for PostgreSQL, use the following resources: The full list of prerequisites is provided in Configure your source for PostgreSQL to AlloyDB for PostgreSQL.
The full list of limitations is provided in Known limitations for PostgreSQL to AlloyDB for PostgreSQL. Database engine built-in replication Database engine built-in replication is an alternative option to Database Migration Service for a continuous migration. Database replication represents the process of copying and distributing data from a database called the primary database to other databases called replicas. It's intended to increase data accessibility and improve the fault tolerance and reliability of a database system. Although database migration is not the primary purpose of database replication, it can be successfully used as a tool to achieve this goal. Database replication is usually an ongoing process that occurs in real time as data is inserted, updated, or deleted in the primary database. Database replication can be done as a one-time operation, or a sequence of batch operations. Most modern database engines implement different ways of achieving database replication. One type of replication can be achieved by copying and sending the write ahead log or transaction log of the primary to its replicas. This approach is called physical or binary replication. Other replication types work by distributing the raw SQL statements that a primary database receives, instead of replicating file system level changes. These replication types are called logical replication. For PostgreSQL, there are also third-party extensions, such as pglogical, that implement logical replication relying on write-ahead logging (WAL). Cloud SQL supports replication for PostgreSQL. However, there are some prerequisites and limitations. The following are limitations and prerequisites for Cloud SQL for PostgreSQL: The following Amazon versions are supported: Amazon RDS 9.6.10 and later, 10.5 and later, 11.1 and later, 12, 13, 14 Amazon Aurora 10.11 and later, 11.6 and later, 12.4 and later, and 13.3 and later The firewall of the external server must be configured to allow the internal IP range that has been allocated for the private services access of the VPC network. The Cloud SQL for PostgreSQL replica database uses the VPC network as its private network. The firewall of the source database must be configured to allow the entire internal IP range that has been allocated for the private service connection of the VPC network. The Cloud SQL for PostgreSQL destination instance uses this VPC network in the privateNetwork field of its IpConfiguration setting. When you install the pglogical extension on a Cloud SQL for PostgreSQL instance, make sure that you have set the enable_pglogical flag to on (for example, cloudsql.enable_pglogical=on). Make sure the shared_preload_libraries parameter includes the pglogical extension on your source instance (for example, shared_preload_libraries=pg_stat_statements,pglogical). Set the rds.logical_replication parameter to 1. This setting enables write-ahead logs at the logical level. These changes require a restart of the primary instance. For more information about using Cloud SQL for PostgreSQL for replication, see the external server checklist in the replication section for PostgreSQL and also Set up logical replication and decoding for PostgreSQL from the Cloud SQL official documentation. AlloyDB for PostgreSQL supports replication through the pglogical extension. To enable the pglogical extension for replication, you can use the CREATE EXTENSION command. 
Before using that command, you must first set the database flags alloydb.enable_pglogical and alloydb.logical_decoding to on in the AlloyDB for PostgreSQL instance where you want to use the extension. Setting these flags requires a restart of the instance. Other approaches for continuous replication migration You can use Datastream to replicate near real-time changes of your source PostgreSQL instance. Datastream uses change data capture (CDC) and replication to replicate and synchronize data. You can then use Datastream for continuous replication from Amazon RDS and Amazon Aurora. You use Datastream to replicate changes from your PostgreSQL instance to either BigQuery or Cloud Storage. That replicated data can then be brought into your Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL instance with Dataflow by using a Dataflow Flex Template, or with Dataproc. For more information about using Datastream and Dataflow, see the following resources: Configure an Amazon RDS for PostgreSQL database in Datastream Configure an Amazon Aurora PostgreSQL database in Datastream Work with PostgreSQL database WAL log files Stream changes to data in real time with Datastream Dataflow overview Dataflow Flex Template to upload batch data from Google Cloud Storage to AlloyDB for PostgreSQL (and Postgres) Third-party tools for continuous replication migration In some cases, it might be better to use one third-party tool for most database engines. Such cases might be if you prefer to use a managed migration service and you need to ensure that the target database is always in near-real-time sync with the source, or if you need more complex transformations like data cleaning, restructuring, and adaptation during the migration process. If you decide to use a third-party tool, choose one of the following recommendations, which you can use for most database engines. Striim is an end-to-end, in-memory platform for collecting, filtering, transforming, enriching, aggregating, analyzing, and delivering data in real time: Advantages: Handles large data volumes and complex migrations. Built-in change data capture for SQL Server. Preconfigured connection templates and no-code pipelines. Able to handle mission-critical, large databases that operate under heavy transactional load. Exactly-once delivery. Disadvantages: Not open source. Can become expensive, especially for long migrations. Some limitations in data definition language (DDL) operations propagation. For more information, see Supported DDL operations and Schema evolution notes and limitations. For more information about Striim, see Running Striim in the Google Cloud. Debezium is an open source distributed platform for CDC, and can stream data changes to external subscribers: Advantages: Open source. Strong community support. Cost effective. Fine-grained control on rows, tables, or databases. Specialized for change capture in real time from database transaction logs. Disadvantages: Requires specific experience with Kafka and ZooKeeper. At-least-once delivery of data changes, which means that you need duplicates handling. Manual monitoring setup using Grafana and Prometheus. No support for incremental batch replication. For more information about Debezium migrations, see Near Real Time Data Replication using Debezium. Fivetran is an automated data movement platform for moving data out of and across cloud data platforms. Advantages: Preconfigured connection templates and no-code pipelines. Propagates any schema changes from your source to the target database. 
Exactly-once delivery of your data changes, which means that you don't need duplicate handling. Disadvantages: Not open source. Support for complex data transformation is limited. Define the migration plan and timeline For a successful database migration and production cut-over, we recommend that you prepare a well-defined, comprehensive migration plan. To help reduce the impact on your business, we recommend that you create a list of all the necessary work items. Defining the migration scope reveals the work tasks that you must do before, during, and after the database migration process. For example, if you decide not to migrate certain tables from a database, you might need pre-migration or post-migration tasks to implement this filtering. You also ensure that your database migration doesn't affect your existing service-level agreement (SLA) and business continuity plan. We recommend that your migration planning documentation include the following documents: Technical design document (TDD) RACI matrix Timeline (such as a T-Minus plan) Database migrations are an iterative process, and first migrations are often slower than the later ones. Usually, well-planned migrations run without issues, but unplanned issues can still arise. We recommend that you always have a rollback plan. As a best practice, follow the guidance from Migrate to Google Cloud: Best practices for validating a migration plan. TDD The TDD documents all technical decisions to be made for the project. Include the following in the TDD: Business requirements and criticality Recovery time objective (RTO) Recovery point objective (RPO) Database migration details Migration effort estimates Migration validation recommendations RACI matrix Some migration projects require a RACI matrix, which is a common project management document that defines which individuals or groups are responsible for tasks and deliverables within the migration project. Timeline Prepare a timeline for each database that needs to be migrated. Include all work tasks that must be performed, and define start dates and estimated end dates. For each migration environment, we recommend that you create a T-minus plan. A T-minus plan is structured as a countdown schedule, and lists all the tasks required to complete the migration project, along with the responsible groups and estimated duration. The timeline should account not only for the execution of pre-migration preparation tasks, but also for the validation, auditing, and testing tasks that happen after the data transfer takes place. The duration of migration tasks typically depends on database size, but there are also other aspects to consider, like business logic complexity, application usage, and team availability.
A T-minus plan might look like the following:
Date | Phase | Category | Tasks | Role | T-minus | Status
11/1/2023 | Pre-migration | Assessment | Create assessment report | Discovery team | -21 | Complete
11/7/2023 | Pre-migration | Target preparation | Design target environment as described by the design document | Migration team | -14 | Complete
11/15/2023 | Pre-migration | Company governance | Migration date and T-minus approval | Leadership | -6 | Complete
11/18/2023 | Migration | Set up DMS | Build connection profiles | Cloud migration engineer | -3 | Complete
11/19/2023 | Migration | Set up DMS | Build and start migration jobs | Cloud migration engineer | -2 | Not started
11/19/2023 | Migration | Monitor DMS | Monitor DMS jobs and DDL changes in the source instance | Cloud migration engineer | -2 | Not started
11/21/2023 | Migration | Cutover DMS | Promote DMS replica | Cloud migration engineer | 0 | Not started
11/21/2023 | Migration | Migration validation | Database migration validation | Migration team | 0 | Not started
11/21/2023 | Migration | Application test | Run capabilities and performance tests | Migration team | 0 | Not started
11/22/2023 | Migration | Company governance | Migration validation GO or NO GO | Migration team | 1 | Not started
11/23/2023 | Post-migration | Validate monitoring | Configure monitoring | Infrastructure team | 2 | Not started
11/25/2023 | Post-migration | Security | Remove DMS user account | Security team | 4 | Not started
Multiple database migrations If you have multiple databases to migrate, your migration plan should contain tasks for all of the migrations. We recommend that you start the process by migrating a smaller, ideally non-mission-critical database. This approach can help you to build your knowledge and confidence in the migration process and tooling. You can also detect any flaws in the process in the early stages of the overall migration schedule. If you have multiple databases to migrate, the timelines can be parallelized. For example, to speed up the migration process, you might choose to migrate a group of small, static, or less mission-critical databases at the same time, as shown in the following diagram. In the example shown in the diagram, databases 1-4 are a group of small databases that are migrated at the same time. Define the preparation tasks The preparation tasks are all the activities that you need to complete to fulfill the migration prerequisites. Without the preparation tasks, the migration can't take place or the migration results in the migrated database being in an unusable state. Preparation tasks can be categorized as follows: Preparations and prerequisites for an Amazon RDS or Amazon Aurora instance Source database preparation and prerequisites Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL instance setup Migration-specific setup Amazon RDS or Amazon Aurora instance preparation and prerequisites Consider the following common setup and prerequisite tasks: Depending on your migration path, you might need to allow remote connections on your RDS instances. If your RDS instance is configured to be private in your VPC, private RFC 1918 connectivity must exist between Amazon and Google Cloud. You might need to configure a new security group to allow remote connections on required ports and apply the security group to your Amazon RDS or Amazon Aurora instance: By default, in AWS, network access is turned off for database instances. You can specify rules in a security group that allow access from an IP address range, port, or security group. The same rules apply to all database instances that are associated with that security group. 
If you are migrating from Amazon RDS, make sure that you have enough free disk to buffer write-ahead logs for the duration of the full load operation on your Amazon RDS instance. For ongoing replication (streaming changes through CDC), you must use a full RDS instance and not a read replica. If you're using built-in replication, you need to set up your Amazon RDS or Amazon Aurora instance for replication for PostgreSQL. Built-in replication or tools that use built-in replication need logical replication for PostgreSQL. If you're using third-party tools, upfront settings and configurations are usually required before using the tool. Check the documentation from the third-party tool. For more information about instance preparation and prerequisites, see Set up the external server for replication for PostgreSQL. Source database preparation and prerequisites If you choose to use Database Migration Service, configure your source database before connecting to it. For more information, see Configure your source for PostgreSQL and Configure your source for PostgreSQL to AlloyDB for PostgreSQL. For tables that don't have primary keys, after Database Migration Service migrates the initial backup, only INSERT statements will be migrated to the target database during the CDC phase. DELETE and UPDATE statements are not migrated for those tables. Consider that large objects can't be replicated by Database Migration Service, as the logical decoding facility in PostgreSQL doesn't support decoding changes to large objects. If you choose to use built-in replication, consider that logical replication has certain limitations with respect to data definition language (DDL) commands, sequences, and large objects. Primary keys must exist or be added on tables that are to be enabled for CDC and that go through lots of updates. Some third-party migration tools require that all large object columns are nullable. Any large object columns that are NOT NULL need to have that constraint removed during migration. Take baseline measurements on your source environment in production use. Consider the following: Measure the size of your data, as well as your workload's performance. How long do important queries or transactions take, on average? How long during peak times? Document the baseline measurements for later comparison, to help you decide if the fidelity of your database migration is satisfactory. Decide if you can switch your production workloads and decommission your source environment, or if you still need it for fallback purposes. Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL instance setup To have your target instance achieve similar performance levels to that of your source instance, choose the size and specifications of your target PostgreSQL database instance to match those of the source instance. Pay special attention to disk size and throughput, input/output operations per second (IOPS), and number of virtual CPUs (vCPUs). Incorrect sizing can lead to long migration times, database performance problems, database errors, and application performance problems. When deciding on the Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL instance, keep in mind that disk performance is based on the disk size and the number of vCPUs. You must confirm the following properties and requirements before you create your Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL instances. If you want to change these properties and requirements later, you will need to recreate the instances. 
Choose the project and region of your target Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL instances carefully. Instances can't be migrated between Google Cloud projects and regions without data transfer. Migrate to a matching major version on Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL. For example, if you are using PostgreSQL 14.x on Amazon RDS or Amazon Aurora, you should migrate to Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL version 14.x. Replicate user information separately if you are using built-in database engine backups or Database Migration Service. For details, review the limitations in the Database engine specific backups section. Review the database engine specific configuration flags and compare their source and target instance values. Make sure you understand their impact and whether they need to be the same or not. For example, when working with PostgreSQL, we recommend comparing the values from the pg_settings table on your source database to the values on the Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL instance. Update flag settings as needed on the target database instance to replicate the source settings. Depending on the nature of your workload, you might need to enable specific extensions to support your database. If your workload requires these extensions, review the supported PostgreSQL extensions and how to enable them in Cloud SQL and AlloyDB for PostgreSQL. For more information about Cloud SQL for PostgreSQL setup, see Instance configuration options, Database engine specific flags, and supported extensions. For more information about AlloyDB for PostgreSQL setup, see Supported database flags and supported extensions. Migration-specific setup If you can afford downtime, you can import SQL dump files to Cloud SQL and AlloyDB for PostgreSQL. In such cases, you might need to create a Cloud Storage bucket to store the database dump. If you use replication, you must ensure that the Cloud SQL and AlloyDB for PostgreSQL replica has access to your primary (source) database. You can gain this access by using the documented connectivity options. Depending on your use case and criticality, you might need to implement a fallback scenario, which usually includes reversing the direction of the replication. In this case, you might need an additional replication mechanism from your target Cloud SQL and AlloyDB for PostgreSQL back to your source Amazon instance. You can decommission the resources that connect your Amazon and Google Cloud environment after the migration is completed and validated. If you're migrating to AlloyDB for PostgreSQL, consider using a Cloud SQL for PostgreSQL instance as a potential destination for your fallback scenarios. Use the pglogical extension to set up logical replication to that Cloud SQL instance. For more information, see the following resources: Best practices for importing and exporting data Connectivity for PostgreSQL and PostgreSQL to AlloyDB for PostgreSQL in Database Migration Service Define the execution tasks Execution tasks implement the migration work itself. The tasks depend on your chosen migration tool, as described in the following subsections. Built-in database engine backups Use the pg_dump utility to create a backup. 
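The following is a minimal sketch of this dump-and-import approach for a Cloud SQL for PostgreSQL target, assuming that you can afford downtime. The endpoint, user, database, bucket, and instance names are placeholders, the Cloud SQL service account needs read access to the Cloud Storage bucket, and the AlloyDB for PostgreSQL import flow differs (see the resources that follow).
# SOURCE_RDS_ENDPOINT, mydb, my-migration-bucket, and my-target-instance are placeholders.
# Create a plain-format dump without ownership and privilege statements.
pg_dump -h SOURCE_RDS_ENDPOINT -U postgres -d mydb \
  --format=plain --no-owner --no-acl --file=mydb.sql
# Stage the dump in Cloud Storage and import it into the target Cloud SQL instance.
gsutil cp mydb.sql gs://my-migration-bucket/mydb.sql
gcloud sql import sql my-target-instance gs://my-migration-bucket/mydb.sql --database=mydb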
For more information about using this utility to import and export data, see the following resources: pg_dump utility documentation page Import data to Cloud SQL for PostgreSQL Import a DMP file to AlloyDB for PostgreSQL Database Migration Service migration jobs Define and configure migration jobs in Database Migration Service to migrate data from a source instance to the destination database. Migration jobs connect to the source database instance through user-defined connection profiles. Test all the prerequisites to ensure the job can run successfully. Choose a time when your workloads can afford a small downtime for the migration and production cut-over. In Database Migration Service, the migration begins with the initial schema dump and restore without indexes and constraints, followed by the data copy operation. After data copy completes, indexes and constraints are restored. Finally the continuous replication of changes from the source to the destination database instance is started. Database Migration Service uses the pglogical extension for replication from your source to the target database instance. At the beginning of migration, this extension sets up replication by requiring exclusive short-term locks on all the tables in your source Amazon RDS or Amazon Aurora instance. For this reason, we recommend starting the migration when the database is least busy, and avoiding statements on the source during the dump and replication phase, as DDL statements are not replicated. If you must perform DDL operations, use the 'pglogical' functions to run DDL statements on your source instance or manually run the same DDL statements on the migration target instance, but only after the initial dump stage finishes. The migration process with Database Migration Service ends with the promotion operation. Promoting a database instance disconnects the destination database from the flow of changes coming from the source database, and then the now standalone destination instance is promoted to a primary instance. This approach is also called a production switch. For a fully detailed migration setup process, see the quick start guides for PostgreSQL and PostgreSQL to AlloyDB for PostgreSQL. Database engine built-in replication Cloud SQL supports two types of logical replication: the built-in logical replication of PostgreSQL and logical replication through the pglogical extension. For AlloyDB for PostgreSQL we recommend using the pglogical extension for replication. Each type of logical replication has its own features and limitations. Built-in logical replication has the following features and limitations: It's available in PostgreSQL 10 and later. All changes and columns are replicated. Publications are defined at the table level. Data is only replicated from base tables to base tables. It doesn't perform sequence replication. It's the recommended replication type when there are many tables that have no primary key. Set up built-in PostgreSQL logical replication. For tables without a primary key, apply the REPLICA IDENTITY FULL form so that built-in replication uses the entire row as the unique identifier instead of a primary key. pglogical logical replication has the following features and limitations: It's available in all PostgreSQL versions and offers cross version support. Row filtering is available on the source. It doesn't replicate UNLOGGED and TEMPORARY tables. A primary key or unique key is required on tables to capture UPDATE and DELETE changes. Sequence replication is available. 
Delayed replication is possible. It provides conflict detection and configurable resolution if there are multiple publishers or conflicts between replicated data and local changes. For instructions on how to set up built-in PostgreSQL logical replication from an external server like Amazon RDS or Amazon Aurora for PostgreSQL, see the following resources: Set up built-in PostgreSQL logical replication Set up logical replication with pglogical Third-party tools Define any execution tasks for the third-party tool you've chosen. This section focuses on Striim as an example. Striim uses applications that acquire data from various sources, process the data, and then deliver the data to other applications or targets. You use one or more flows to organize these migration processes within your custom applications. To code your custom applications, you have a choice of using a graphical programming tool or the Tungsten Query Language (TQL) programming language. For more information about how to use Striim, see the following resources: Striim basics: Striim concepts Striim in Google Cloud quickstart: Running Striim in the Google Cloud Configuration settings for continuous replication: PostgreSQL and SQL Server Best practices guide: Switching from an initial load to continuous replication If you decide to use Striim to migrate your data, see the following guides on how to use Striim to migrate data into Google Cloud: Striim Migration Service to Google Cloud Tutorials How to Migrate Transactional Databases to AlloyDB for PostgreSQL Define fallback scenarios Define fallback action items for each migration execution task, to safeguard against unforeseen issues that might occur during the migration process. The fallback tasks usually depend on the migration strategy and tools used. Fallback might require significant effort. As a best practice, don't perform production cut-over until your test results are satisfactory. Both the database migration and the fallback scenario should be properly tested to avoid a severe outage. Define success criteria and timebox all your migration execution tasks. Doing a migration dry run helps collect information about the expected times for each task. For example, for a scheduled maintenance migration, you can afford the downtime represented by the cut-over window. However, it's important to plan your next action in case the one-time migration job or the restore of the backup fails midway. Depending on how much time of your planned downtime has elapsed, you might have to postpone the migration if the migration task doesn't finish in the expected amount of time. A fallback plan usually refers to rolling back the migration after you perform the production cut-over, if issues on the target instance appear. If you implement a fallback plan, remember that it must be treated as a full database migration, including planning and testing. If you choose not to have a fallback plan, make sure you understand the possible consequences. Having no fallback plan can add unforeseen effort and cause avoidable disruptions in your migration process. Although a fallback is a last resort, and most database migrations don't end up using it, we recommend that you always have a fallback strategy. Simple fallback In this fallback strategy, you switch your applications back to the original source database instance. Adopt this strategy if you can afford downtime when you fall back or if you don't need the transactions committed on the new target system. 
If you do need all the written data on your target database, and you can afford some downtime, you can consider stopping writes to your target database instance, taking built-in backups and restoring them on your source instance, and then re-connecting your applications to the initial source database instance. Depending on the nature of your workload and amount of data written on the target database instance, you could bring it into your initial source database system at a later time, especially if your workloads aren't dependent on any specific record creation time or any time ordering constraints. Reverse replication In this strategy, you replicate the writes that happen on your new target database after production cut-over back to your initial source database. In this way, you keep the original source in sync with the new target database and have the writes happening on the new target database instance. Its main disadvantage is that you can't test the replication stream until after you cut-over to the target database instance, therefore it doesn't allow end-to-end testing and it has a small period of no fallback. Choose this approach when you can still keep your source instance for some time and you migrate using the continuous replication migration. Forward replication This strategy is a variation of reverse replication. You replicate the writes on your new target database to a third database instance of your choice. You can point your applications to this third database, which connects to the server and runs read-only queries while the server is unavailable. You can use any replication mechanism, depending on your needs. The advantage of this approach is that it can be fully end-to-end tested. Take this approach when you want to be covered by a fallback at all times or when you must discard your initial source database shortly after the production cut-over. Duplicate writes If you choose a Y (writing and reading) or data-access microservice migration strategy, this fallback plan is already set. This strategy is more complicated, because you need to refactor applications or develop tools that connect to your database instances. Your applications write to both initial source and target database instances, which lets you perform a gradual production cut-over until you are using only your target database instances. If there are any issues, you connect your applications back to the initial source with no downtime. You can discard the initial source and the duplicate writing mechanism when you consider the migration performed with no issues observed. We recommend this approach when it's critical to have no migration downtime, have a reliable fallback in place, and when you have time and resources to perform application refactoring. Perform testing and validation The goals of this step are to test and validate the following: Successful migration of the data in the database. Integration with existing applications after they are switched to use the new target instance. Define the key success factors, which are subjective to your migration. The following are examples of subjective factors: Which data to migrate. For some workloads, it might not be necessary to migrate all of the data. You might not want to migrate data that is already aggregated, redundant, archived, or old. You might archive that data in a Cloud Storage bucket, as a backup. An acceptable percentage of data loss. 
This particularly applies to data used for analytics workloads, where losing part of the data does not affect general trends or performance of your workloads. Data quality and quantity criteria, which you can apply to your source environment and compare to the target environment after the migration. Performance criteria. Some business transactions might be slower in the target environment, but the processing time is still within defined expectations. The storage configurations in your source environment might not map directly to Google Cloud environment targets. For example, configurations from the General Purpose SSD (gp2 and gp3) volumes with IOPS burst performance or Provisioned IOPS SSD. To compare and properly size the target instances, benchmark your source instances, in both the assessment and validation phases. In the benchmarking process, you apply production-like sequences of operations to the database instances. During this time, you capture and process metrics to measure and compare the relative performance of both source and target systems. For conventional, server based configurations, use relevant measurements observed during peak loads. For flexible resource capacity models like Aurora Serverless, consider looking at historical metric data to observe your scaling needs. The following tools can be used for testing, validation, and database benchmarking: HammerDB: an open source database benchmarking and load testing tool. It supports complex transactional and analytic workloads, based on industry standards, on multiple database engines (both TPROC-C and TPROC-H). HammerDB has detailed documentation and a wide community of users. You can share and compare results across several database engines and storage configurations. For more information, see Load testing SQL Server using HammerDB and Benchmark Amazon RDS SQL Server performance using HammerDB. DBT2 Benchmark Tool: benchmarking specialized for MySQL. A set of database workload kits mimics an application for a company that owns warehouses and involves a mix of read and write transactions. Use this tool if you want to use a ready-made online transaction processing (OLTP) load test. DbUnit: an open source unit testing tool used to test relational database interactions in Java. The setup and use is straightforward, and it supports multiple database engines (MySQL, PostgreSQL, SQL Server, and others). However, the test execution can be slow sometimes, depending on the size and complexity of the database. We recommend this tool when simplicity is important. DbFit: an open source database testing framework that supports test-driven code development and automated testing. It uses a basic syntax for creating test cases and features data-driven testing, version control, and test result reporting. However, support for complex queries and transactions is limited and it doesn't have large community support or extensive documentation, compared to other tools. We recommend this tool if your queries are not complex and you want to perform automated tests and integrate them with your continuous integration and delivery process. To run an end-to-end test, including testing of the migration plan, always perform a migration dry run exercise. A dry run performs the full-scope database migration without switching any production workloads, and it offers the following advantages: Lets you ensure that all objects and configurations are properly migrated. Helps you define and execute your migration test cases. 
Offers insights into the time needed for the actual migration, so you can calibrate your timeline. Represents an occasion to test, validate, and adapt the migration plan. Sometimes you can't plan for everything in advance, so this helps you to spot any gaps. Data testing can be performed on a small set of the databases to be migrated or the entire set. Depending on the total number of databases and the tools used for implementing their migration, you can decide to adopt a risk based approach. With this approach, you perform data validation on a subset of databases migrated through the same tool, especially if this tool is a managed migration service. For testing, you should have access to both source and target databases and do the following tasks: Compare source and target schemas. Check if all tables and executables exist. Check row counts and compare data at the database level. Run custom data validation scripts. Test that the migrated data is also visible in the applications that switched to use the target database (migrated data is read through the application). Perform integration testing between the switched applications and the target database by testing various use cases. This testing includes both reading and writing data to the target databases through the applications so that the workloads fully support migrated data together with newly created data. Test the performance of the most used database queries to observe if there's any degradation due to misconfigurations or wrong sizing. Ideally, all these migration test scenarios are automated and repeatable on any source system. The automated test cases suite is adapted to perform against the switched applications. If you're using Database Migration Service as your migration tool, see either the PostgreSQL or PostgreSQL to AlloyDB for PostgreSQL version of the "Verify a migration" topic. Data Validation Tool For performing data validation, we recommend that you use the Data Validation Tool (DVT). The DVT is an open sourced Python CLI tool, backed by Google, that provides an automated and repeatable solution for validation across different environments. The DVT can help streamline the data validation process by offering customized, multi-level validation functions to compare source and target tables on the table, column, and row level. You can also add validation rules. The DVT covers many Google Cloud data sources, including AlloyDB for PostgreSQL, BigQuery, Cloud SQL, Spanner, JSON, and CSV files on Cloud Storage. It can also be integrated with Cloud Run functions and Cloud Run for event based triggering and orchestration. The DVT supports the following types of validations: Schema level comparisons Column (including 'AVG', 'COUNT', 'SUM', 'MIN', 'MAX', 'GROUP BY', and 'STRING_AGG') Row (including hash and exact match in field comparisons) Custom query results comparison For more information about the DVT, see the Git repository and Data validation made easy with Google Cloud's Data Validation Tool. Perform the migration The migration tasks include the activities to support the transfer from one system to another. Consider the following best practices for your data migration: Inform the involved teams whenever a plan step begins and finishes. If any of the steps take longer than expected, compare the time elapsed with the maximum amount of time allotted for that step. Issue regular intermediary updates to involved teams when this happens. 
If the time span is greater than the maximal amount of time reserved for each step in the plan, consider rolling back. Make "go or no-go" decisions for every step of the migration and cut-over plan. Consider rollback actions or alternative scenarios for each of the steps. Perform the migration by following your defined execution tasks, and refer to the documentation for your selected migration tool. Perform the production cut-over The high-level production cut-over process can differ depending on your chosen migration strategy. If you can have downtime on your workloads, then your migration cut-over begins by stopping writes to your source database. For continuous replication migrations, you typically do the following high-level steps in the cut-over process: Stop writing to the source database. Drain the source. Stop the replication process. Deploy the applications that point to the new target database. After the data has been migrated by using the chosen migration tool, you validate the data in the target database. You confirm that the source database and the target databases are in sync and the data in the target instance adheres to your chosen migration success standards. Once the data validation passes your criteria, you can perform the application level cut-over. Deploy the workloads that have been refactored to use the new target instance. You deploy the versions of your applications that point to the new target database instance. The deployments can be performed either through rolling updates, staged releases, or by using a blue-green deployment pattern. Some application downtime might be incurred. Follow the best practices for your production cut-over: Monitor your applications that work with the target database after the cut-over. Define a time period of monitoring to consider whether or not you need to implement your fallback plan. Note that your Cloud SQL or AlloyDB for PostgreSQL instance might need a restart if you change some database flags. Consider that the effort of rolling back the migration might be greater than fixing issues that appear on the target environment. Cleanup the source environment and configure the Cloud SQL or AlloyDB for PostgreSQL instance After the cut-over is completed, you can delete the source databases. We recommend performing the following important actions before the cleanup of your source instance: Create a final backup of each source database. These backups provide you with an end state of the source databases. The backups might also be required in some cases for compliance with some data regulations. Collect the database parameter settings of your source instance. Alternatively, check that they match the ones you've gathered in the inventory building phase. Adjust the target instance parameters to match the ones from the source instance. Collect database statistics from the source instance and compare them to the ones in the target instance. If the statistics are disparate, it's hard to compare the performance of the source instance and target instance. In a fallback scenario, you might want to implement the replication of your writes on the Cloud SQL instance back to your original source database. The setup resembles the migration process but would run in reverse: the initial source database would become the new target. As a best practice to keep the source instances up to date after the cut-over, replicate the writes performed on the target Cloud SQL instances back to the source database. 
If you need to roll back, you can fall back to your old source instances with minimal data loss. Alternatively, you can use another instance and replicate your changes to that instance. For example, when AlloyDB for PostgreSQL is a migration destination, consider setting up replication to a Cloud SQL for PostgreSQL instance for fallback scenarios. In addition to the source environment cleanup, the following critical configurations for your Cloud SQL for PostgreSQL instances must be done: Configure a maintenance window for your primary instance to control when disruptive updates can occur. Configure the storage on the instance so that you have at least 20% available space to accommodate any critical database maintenance operations that Cloud SQL may perform. To receive an alert if available disk space gets lower than 20%, create a metrics-based alerting policy for the disk utilization metric. Don't start an administrative operation before the previous operation has completed. For more information about maintenance and best practices on Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL instances, see the following resources: About maintenance on Cloud SQL for PostgreSQL instances About instance settings on Cloud SQL for PostgreSQL instances About maintenance on AlloyDB for PostgreSQL Configure an AlloyDB for PostgreSQL instance's database flags For more information about maintenance and best practices, see About maintenance on Cloud SQL instances. Optimize your environment after migration Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. You repeat this sequence until you've achieved your optimization goals. For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization. Establish your optimization requirements Review the following optimization requirements for your Google Cloud environment and choose the ones that best fit your workloads: Increase the reliability and availability of your database With Cloud SQL, you can implement a high availability and disaster recovery strategy that aligns with your recovery time objective (RTO) and recovery point objective (RPO). To increase reliability and availability, consider the following: In cases of read-heavy workloads, add read replicas to offload traffic from the primary instance. For mission critical workloads, use the high-availability configuration, replicas for regional failover, and a robust disaster recovery configuration. For less critical workloads, automated and on-demand backups can be sufficient. To prevent accidental removal of instances, use instance deletion protection. When migrating to Cloud SQL for PostgreSQL, consider using Cloud SQL Enterprise Plus edition to benefit from increased availability, log retention, and near-zero downtime planned maintenance. For more information about Cloud SQL Enterprise Plus, see Introduction to Cloud SQL editions and Near-zero downtime planned maintenance. 
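As a minimal sketch of how some of these reliability settings can be applied with gcloud, assuming a Cloud SQL for PostgreSQL primary named pg-prod (the instance names and region are placeholders, and the flags should be verified against the current gcloud reference):
# pg-prod, pg-prod-replica, and us-central1 are placeholders.
# Enable regional high availability and protect the primary against accidental deletion.
gcloud sql instances patch pg-prod --availability-type=REGIONAL --deletion-protection
# Add a read replica to offload read-heavy traffic from the primary.
gcloud sql instances create pg-prod-replica --master-instance-name=pg-prod --region=us-central1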
For more information on increasing the reliability and availability of your Cloud SQL for PostgreSQL database, see the following documents: Promote replicas for regional migration or disaster recovery Cloud SQL database disaster recovery About Cloud SQL backups Prevent deletion of an instance documentation When migrating to AlloyDB for PostgreSQL, configure backup plans and consider using the AlloyDB for PostgreSQL Auth Proxy. Consider creating and working with secondary clusters for cross-region replication. For more information on increasing the reliability and availability of your AlloyDB for PostgreSQL database, see the following documents: AlloyDB for PostgreSQL under the hood: Business continuity About the AlloyDB for PostgreSQL Auth proxy Work with cross-region replication Increase the cost effectiveness of your database infrastructure To have a positive economic impact, your workloads must use the available resources and services efficiently. Consider the following options: Provide the database with the minimum required storage capacity by doing the following: To scale storage capacity automatically as your data grows, enable automatic storage increases. However, ensure that you configure your instances to have some buffer in peak workloads. Remember that database workloads tend to increase over time. Identify possible overestimated resources: Rightsizing your Cloud SQL instances can reduce the infrastructure cost without adding additional risks to the capacity management strategy. Cloud Monitoring provides predefined dashboards that help identify the health and capacity utilization of many Google Cloud components, including Cloud SQL. For details, see Create and manage custom dashboards. Identify instances that don't require high availability or disaster recovery configurations, and remove them from your infrastructure. Remove tables and objects that are no longer needed. You can store them in a full backup or an archival Cloud Storage bucket. Evaluate the most cost-effective storage type (SSD or HDD) for your use case. For most use cases, SSD is the most efficient and cost-effective choice. If your datasets are large (10 TB or more), latency-insensitive, or infrequently accessed, HDD might be more appropriate. For details, see Choose between SSD and HDD storage. Purchase committed use discounts for workloads with predictable resource needs. Use Active Assist to get cost insights and recommendations. For more information and options, see Do more with less: Introducing Cloud SQL cost optimization recommendations with Active Assist. When migrating to Cloud SQL for PostgreSQL, you can reduce overprovisioned instances and identify idle Cloud SQL for PostgreSQL instances. For more information on increasing the cost effectiveness of your Cloud SQL for PostgreSQL database instance, see the following documents: Enable automatic storage increases for Cloud SQL Identify idle Cloud SQL instances Reduce overprovisioned Cloud SQL instances Optimize queries with high memory usage Create and manage custom dashboards Choose between SSD and HDD storage Committed use discounts Active Assist When using AlloyDB for PostgreSQL, you can do the following to increase cost effectiveness: Use the columnar engine to efficiently perform certain analytical queries such as aggregation functions or table scans. Use cluster storage quota recommender for AlloyDB for PostgreSQL to detect clusters which are at risk of hitting the storage quota. 
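Before moving on, here is a minimal sketch of two of the Cloud SQL for PostgreSQL cost controls described earlier in this section, automatic storage increases and Active Assist rightsizing recommendations. The instance, project, and location names are placeholders, and the storage limit flag and recommender ID are assumptions to verify against the Cloud SQL and Active Assist documentation.
# pg-prod, my-project, and us-central1 are placeholders; verify flag and recommender names.
# Let storage grow automatically as data grows, with an upper limit (in GB) to cap cost.
gcloud sql instances patch pg-prod --storage-auto-increase --storage-auto-increase-limit=500
# Surface Active Assist recommendations for overprovisioned Cloud SQL instances
# (the recommender ID below is an assumption).
gcloud recommender recommendations list \
  --project=my-project --location=us-central1 \
  --recommender=google.cloudsql.instance.OverprovisionedRecommender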
For more information on increasing the cost effectiveness of your AlloyDB for PostgreSQL database infrastructure, see the following documentation sections: About the AlloyDB for PostgreSQL columnar engine Optimize underprovisioned AlloyDB for PostgreSQL clusters Increase cluster storage quota for AlloyDB for PostgreSQL Monitor active queries Increase the performance of your database infrastructure Minor database-related performance issues can impact the entire operation. To maintain and increase your Cloud SQL instance performance, consider the following guidelines: If you have a large number of database tables, they can affect instance performance and availability, and cause the instance to lose its SLA coverage. Ensure that your instance isn't constrained on memory or CPU. For performance-intensive workloads, ensure that your instance has at least 60 GB of memory. For slow database inserts, updates, or deletes, check the locations of the writer and database; sending data over long distances introduces latency. Improve query performance by using the predefined Query Insights dashboard in Cloud Monitoring (or similar database engine built-in features). Identify the most expensive commands and try to optimize them. Prevent database files from becoming unnecessarily large. Set autogrow in MBs rather than as a percentage, using increments appropriate to the requirement. Check reader and database location. Latency affects read performance more than write performance. When migrating from Amazon RDS and Aurora for PostgreSQL to Cloud SQL for PostgreSQL, consider the following guidelines: Use caching to improve read performance. Inspect the various statistics from the pg_stat_database view. For example, the blks_hit / (blks_hit + blks_read) ratio should be greater than 99% (a sample check appears at the end of this section). If this ratio isn't greater than 99%, consider increasing the size of your instance's RAM. For more information, see PostgreSQL statistics collector. Reclaim space and prevent poor index performance. Depending on how often your data changes, set a schedule to run the VACUUM command on your Cloud SQL for PostgreSQL instance. Use Cloud SQL Enterprise Plus edition for increased machine configuration limits and data cache. For more information about Cloud SQL Enterprise Plus, see Introduction to Cloud SQL editions. Consider switching to AlloyDB for PostgreSQL. If you switch, you get full PostgreSQL compatibility, better transactional processing, and support for fast transactional analytics workloads on your processing database. You also get recommendations for new indexes through the index advisor feature. For more information about increasing the performance of your Cloud SQL for PostgreSQL database infrastructure, see Cloud SQL performance improvement documentation for PostgreSQL. When migrating from Amazon RDS and Aurora for PostgreSQL to AlloyDB for PostgreSQL, consider the following guidelines to increase the performance of your AlloyDB for PostgreSQL database instance: Use the AlloyDB for PostgreSQL columnar engine to accelerate your analytical queries. Use the index advisor in AlloyDB for PostgreSQL. The index advisor tracks the queries that are regularly run against your database and analyzes them periodically to recommend new indexes that can increase their performance. Improve query performance by using Query Insights in AlloyDB for PostgreSQL. 
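The cache-hit check mentioned earlier in this section can be run with a query like the following minimal sketch; the connection parameters are placeholders.
# TARGET_INSTANCE_IP, postgres, and mydb are placeholders.
# Report the buffer cache hit ratio for the current database; values consistently
# below about 99% suggest that the instance needs more memory.
psql "host=TARGET_INSTANCE_IP user=postgres dbname=mydb" -c "
  SELECT datname,
         round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct
  FROM pg_stat_database
  WHERE datname = current_database();"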
Increase database observability capabilities Diagnosing and troubleshooting issues in applications that connect to database instances can be challenging and time-consuming. For this reason, a centralized place where all team members can see what's happening at the database and instance level is essential. You can monitor Cloud SQL instances in the following ways: Cloud SQL uses built-in memory custom agents to collect query telemetry. Use Cloud Monitoring to collect measurements of your service and the Google Cloud resources that you use. Cloud Monitoring includes predefined dashboards for several Google Cloud products, including a Cloud SQL monitoring dashboard. You can create custom dashboards that help you monitor metrics and set up alert policies so that you can receive timely notifications. Alternatively, you can consider using third-party monitoring solutions that are integrated with Google Cloud, such as Datadog and Splunk. Cloud Logging collects logging data from common application components. Cloud Trace collects latency data and executed query plans from applications to help you track how requests propagate through your application. Database Center provides an AI-assisted, centralized database fleet overview. You can monitor the health of your databases, availability configuration, data protection, security, and industry compliance. For more information about increasing the observability of your database infrastructure, see the following documentation sections: Monitor Cloud SQL for PostgreSQL instances Monitor instances with AlloyDB for PostgreSQL General Cloud SQL and AlloyDB for PostgreSQL best practices and operational guidelines Apply the best practices for Cloud SQL to configure and tune the database. Some important Cloud SQL general recommendations are as follows: If you have large instances, we recommend that you split them into smaller instances, when possible. Configure storage to accommodate critical database maintenance. Ensure you have at least 20% available space to accommodate any critical database maintenance operations that Cloud SQL might perform. Having too many database tables can affect database upgrade time. Ideally, aim to have under 10,000 tables per instance. Choose the appropriate size for your instances to account for transaction (binary) log retention, especially for high write activity instances. To be able to efficiently handle any database performance issues that you might encounter, use the following guidelines until your issue is resolved: Scale up infrastructure: Increase resources (such as disk throughput, vCPU, and RAM). Depending on the urgency and your team's availability and experience, vertically scaling your instance can resolve most performance issues. Later, you can further investigate the root cause of the issue in a test environment and consider options to eliminate it. Perform and schedule database maintenance operations: Index defragmentation, statistics updates, vacuum analyze, and reindex heavily updated tables. Check if and when these maintenance operations were last performed, especially on the affected objects (tables, indexes). Find out if there was a change from normal database activities. For example, recently adding a new column or having lots of updates on a table. Perform database tuning and optimization: Are the tables in your database properly structured? Do the columns have the correct data types? Is your data model right for the type of workload? Investigate your slow queries and their execution plans. 
Are they using the available indexes? Check for index scans, locks, and waits on other resources. Consider adding indexes to support your critical queries. Eliminate non-critical indexes and foreign keys. Consider rewriting complex queries and joins. The time it takes to resolve your issue depends on the experience and availability of your team and can range from hours to days. Scale out your reads: Consider having read replicas. When scaling vertically isn't sufficient for your needs, and database tuning and optimization measures aren't helping, consider scaling horizontally. Routing read queries from your applications to a read replica improves the overall performance of your database workload. However, it might require additional effort to change your applications to connect to the read replica. Database re-architecture: Consider partitioning and indexing the database. This operation requires significantly more effort than database tuning and optimization, and it might involve a data migration, but it can be a long-term fix. Sometimes, poor data model design can lead to performance issues, which can be partially compensated by vertical scale-up. However, a proper data model is a long-term fix. Consider partitioning your tables. Archive data that isn't needed anymore, if possible. Normalize your database structure, but remember that denormalizing can also improve performance. Database sharding: You can scale out your writes by sharding your database. Sharding is a complicated operation and involves re-architecting your database and applications in a specific way and performing data migration. You split your database instance in multiple smaller instances by using a specific partitioning criteria. The criteria can be based on customer or subject. This option lets you horizontally scale both your writes and reads. However, it increases the complexity of your database and application workloads. It might also lead to unbalanced shards called hotspots, which would outweigh the benefit of sharding. For Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL, consider the following best practices: To offload traffic from the primary instance, add read replicas. You can also use a load balancer such as HAProxy to manage traffic to the replicas. However, avoid too many replicas as this hinders the VACUUM operation. For more information on using HAProxy, see the official HAProxy website. Optimize the VACUUM operation by increasing system memory and the maintenance_work_mem parameter. Increasing system memory means that more tuples can be batched in each iteration. Because larger indexes consume a significant amount of time for the index scan, set the INDEX_CLEANUP parameter to OFF to quickly clean up and freeze the table data. When using AlloyDB for PostgreSQL, use the AlloyDB for PostgreSQL System Insights dashboard and audit logs. The AlloyDB for PostgreSQL System Insights dashboard displays metrics of the resources that you use, and lets you monitor them. For more details, see the guidelines from the Monitor instances topic in the AlloyDB for PostgreSQL documentation. For more details, see the following resources: General best practices section and Operational Guidelines for Cloud SQL for PostgreSQL About maintenance and Overview for AlloyDB for PostgreSQL What's next Read about other AWS to Google Cloud migration journeys. Learn how to compare AWS and Azure services to Google Cloud. Learn when to find help for your migrations. 
For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Alex Cârciu | Solutions ArchitectMarco Ferrari | Cloud Solutions ArchitectOther contributor: Somdyuti Paul | Data Management Specialist Send feedback \ No newline at end of file diff --git a/Migrate_from_Amazon_RDS_for_SQL_Server_to_Cloud_SQL_for_SQL_Server.txt b/Migrate_from_Amazon_RDS_for_SQL_Server_to_Cloud_SQL_for_SQL_Server.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ca7432a145eb803a1d5c60b6891448b959b70e6 --- /dev/null +++ b/Migrate_from_Amazon_RDS_for_SQL_Server_to_Cloud_SQL_for_SQL_Server.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-aws-rds-to-cloudsql-for-sqlserver +Date Scraped: 2025-02-23T11:52:07.395Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate from AWS to Google Cloud: Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-28 UTC Google Cloud provides tools, products, guidance, and professional services to migrate from Amazon Relational Database Service (RDS) to Cloud SQL for SQL Server. This document is intended for cloud and database administrators who want to plan, implement, and validate a database migration project. It's also intended for decision makers who are evaluating the opportunity to migrate and want an example of what a migration might look like. This document focuses on a homogeneous database migration, which is a migration where the source and destination databases are the same database technology. The source is Amazon RDS for SQL Server, and the destination is Cloud SQL for SQL Server. This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents: Get started Migrate from Amazon EC2 to Compute Engine Migrate from Amazon S3 to Cloud Storage Migrate from Amazon EKS to GKE Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server (this document) Migrate from AWS Lambda to Cloud Run For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. Assess the source environment In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. 
You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. For more information about the assessment phase and these tasks, see Migrate to Google Cloud: Assess and discover your workloads. The following sections are based on information in that document. The database assessment phase helps you choose the size and specifications of your target Cloud SQL database instance that matches the source for similar performance needs. Pay special attention to disk size and throughput, IOPS, and number of vCPUs. Migrations might struggle or fail due to incorrect target database instance sizing. Incorrect sizing can lead to long migration times, database performance problems, database errors, and application performance problems. When deciding on the Cloud SQL instance, keep in mind that disk performance is based on the disk size and the number of vCPUs. The following sections rely on Migrate to Google Cloud: Assess and discover your workloads, and integrate the information in that document. Build an inventory of your Amazon RDS instances To define the scope of your migration, you create an inventory and collect information about your Amazon RDS instances. Ideally, the process should be automated, because manual approaches are prone to error and can lead to incorrect assumptions. Amazon RDS and Cloud SQL might not have similar features, instance specifications, or operations. Some functionalities might be implemented differently or be unavailable. Areas of difference might include infrastructure, storage, authentication and security, replication, backup, high availability, resource capacity model and specific database engine feature integrations, and extensions. Depending on the database engine type, instance size, and architecture, there are also differences in the default values of database parameter settings. Benchmarking can help you to better understand the workloads to be migrated and helps you define the right architecture of the migration target environment. Collecting information about performance is important to help estimate the performance needs of the Google Cloud target environment. Benchmarking concepts and tools are detailed in the Perform testing and validation phase of the migration process, but they also apply to the inventory building stage. Tools for assessments For an initial overview assessment of your current infrastructure, we recommend that you use Google Cloud Migration Center along with other specialized database assessment tools such as migVisor and Database Migration Assessment Tool (DMA). With Migration Center, you can perform a complete assessment of your application and database landscape, including the technical fit for a database migration to Google Cloud. 
You receive size and configuration recommendations for each source database, and create a total cost of ownership (TCO) report for servers and databases. For more information about assessing your AWS environment by using Migration Center, see Import data from other cloud providers. In addition to Migration Center, you can use the specialized tool migVisor. migVisor supports a variety of database engines and is particularly suitable for heterogeneous migrations. For an introduction to migVisor, see the migVisor overview. migVisor can identify artifacts and incompatible proprietary database features that can cause migration defaulting, and can point to workarounds. migVisor can also recommend a target Google Cloud solution, including initial sizing and architecture. The migVisor database assessment output provides the following: Comprehensive discovery and analysis of current database deployments. Detailed report of migration complexity, based on the proprietary features used by your database. Financial report with details on cost savings post migration, migration costs, and new operating budget. Phased migration plan to move databases and associated applications with minimal disruption to the business. To see some examples of assessment outputs, see migVisor - Cloud migration assessment tool. Note that migVisor temporarily increases database server utilization. Typically, this additional load is less than 3%, and can be run during non-peak hours. The migVisor assessment output helps you to build a complete inventory of your RDS instances. The report includes generic properties (database engine version and edition, CPUs, and memory size), as well as details about database topology, backup policies, parameter settings, and special customizations in use. If you prefer to use open source tools, you can use data collector scripts with (or instead of) the mentioned tools. These scripts can help you collect detailed information (about workloads, features, database objects, and database code) and build your database inventory. Also, scripts usually provide a detailed database migration assessment, including a migration effort estimation. We recommend the open source tool DMA, which was built by Google engineers. It offers a complete and accurate database assessment, including features in use, database logic, and database objects (including schemas, tables, views, functions, triggers, and stored procedures). To use DMA, download the collection scripts for your database engine from the Git repository, and follow the instructions. Send the output files to Google Cloud for analysis. Google Cloud creates and delivers a database assessment readout, and provides the next steps in the migration journey. Identify and document the migration scope and affordable downtime At this stage, you document essential information that influences your migration strategy and tooling. By now, you can answer the following questions: Are your databases larger than 5 TB? Are there any large tables in your database? Are they larger than 16 TB? Where are the databases located (regions and zones), and what's their proximity to applications? How often does the data change? What is your data consistency model? What are the options for destination databases? How compatible are the source and destination databases? Does the data need to reside in some physical locations? Is there data that can be compressed and archived, or is there data that doesn't need migration at all? 
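To help answer the sizing questions above, you can run a query like the following minimal sketch against your source Amazon RDS for SQL Server instance; the endpoint and credentials are placeholders, and the sqlcmd client is assumed to be available.
# SOURCE_RDS_ENDPOINT, admin, and PASSWORD are placeholders.
# Report the total size of each database in GB (sys.master_files stores sizes in 8 KB pages).
sqlcmd -S SOURCE_RDS_ENDPOINT -U admin -P 'PASSWORD' -Q "
  SELECT DB_NAME(database_id) AS database_name,
         CAST(SUM(CAST(size AS bigint)) * 8 / 1024.0 / 1024.0 AS decimal(12,2)) AS size_gb
  FROM sys.master_files
  GROUP BY database_id
  ORDER BY size_gb DESC;"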
To define the migration scope, decide what data to keep and what to migrate. Migrating all your databases might take considerable time and effort. Some data might remain in your source database backups. For example, old logging tables or archival data might not be needed. Alternatively, you might decide to move data after the migration process, depending on your strategy and tools. Establish data migration baselines that help you compare and evaluate your outcomes and impacts. These baselines are reference points that represent the state of your data before and after the migration and help you make decisions. It's important to take measurements on the source environment that can help you evaluate your data migration's success. Such measurements include the following: The size and structure of your data. The completeness and consistency of your data. The duration and performance of the most important business transactions and processes. Determine how much downtime you can afford. What are the business impacts of downtime? Are there periods of low database activity, during which there are fewer users affected by downtime? If so, how long are such periods and when do they occur? Consider allowing a partial, write-only downtime window, during which read-only requests are still served. Assess your deployment and administration process After you build the inventories, assess the operational and deployment processes for your database to determine how you need to adapt them to facilitate your migration. These processes are fundamental to how you prepare and maintain your production environment. Consider how you complete the following tasks: Define and enforce security policies for your instances: For example, you might need to replace Amazon Security Groups. You can use IAM roles, VPC firewall rules, and VPC Service Controls to control access to your Cloud SQL instances and constrain the data within a VPC. If you're planning on using Windows Authentication with Cloud SQL for SQL Server, you need to deploy Managed Microsoft AD and connect to your existing Active Directory infrastructure. Patch and configure your instances: Your existing deployment tools might need to be updated. For example, you might be using custom configuration settings in Amazon parameter groups or Amazon option groups. Your provisioning tools might need to be adapted to use Cloud Storage or Secret Manager to read such custom configuration lists. Manage monitoring and alerting infrastructure: Metrics from your Amazon source database instances provide observability during the migration process. Sources might include Amazon CloudWatch, Performance Insights, Enhanced Monitoring, and OS process lists. Complete the assessment After you build the inventories from your Amazon RDS environment, complete the rest of the activities of the assessment phase as described in Migrate to Google Cloud: Assess and discover your workloads. Plan and build your foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting.
For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation. Monitoring and alerting Use Cloud Monitoring, which includes predefined dashboards for several Google Cloud products, including a Cloud SQL monitoring dashboard. Alternatively, you can consider using third-party monitoring solutions that are integrated with Google Cloud, like Datadog and Splunk. For more information, see About database observability. Migrate Amazon RDS for SQL Server instances to Cloud SQL for SQL Server To migrate your instances, you do the following: Choose the migration strategy: continuous replication or scheduled maintenance. Choose the migration tools, depending on your chosen strategy and requirements. Define the migration plan and timeline for each database migration, including preparation and execution tasks. Define the preparation tasks that must be done to ensure the migration tool can work properly. Define the execution tasks, which include work activities that implement the migration. Define fallback scenarios for each execution task. Perform testing and validation, which can be done in a separate staging environment. Perform the migration. Perform the production cut-over. Clean up the source environment and configure the target instance. Perform tuning and optimization. Each phase is described in the following sections. Choose the migration strategy At this step, you have enough information to evaluate and select one of the following migration strategies that best suits your use case for each database: Scheduled maintenance (also called one-time migration): This approach is ideal if you can afford downtime. This option is relatively lower in cost and complexity, because your workloads and services won't require much refactoring. However, if the migration fails before completion, you have to restart the process, which prolongs the downtime. Continuous replication (also called trickle migration): For mission-critical databases, this option offers a lower risk of data loss and near-zero downtime. The effort is split into several chunks, so if a failure occurs, rolling back and repeating takes less time. However, setup is more complex and takes more planning and time. Additional effort is also required to refactor the applications that connect to your database instances. Consider one of the following variations: Using the Y (writing and reading) approach, a form of parallel migration that duplicates data in both the source and destination instances during the migration. Using a data-access microservice, which reduces the refactoring effort required by the Y (writing and reading) approach. For more information about data migration strategies, see Evaluating data migration approaches. The following diagram shows a flowchart based on example questions that you might have when deciding the migration strategy for a single database: The preceding flowchart shows the following decision points: Can you afford any downtime? If no, adopt the continuous replication migration strategy. If yes, continue to the next decision point. Can you afford the downtime represented by the cut-over window while migrating data? The cut-over window represents the amount of time to take a backup of the database, transfer it to Cloud SQL, restore it, and then switch over your applications (a rough way to estimate this window is sketched after this list). If no, adopt the continuous replication migration strategy. If yes, adopt the scheduled maintenance migration strategy.
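To sanity-check the scheduled maintenance option, you can roughly estimate the cut-over window before you commit to it. The following sketch is only an illustration: every input value is an assumption to be replaced with numbers measured in your own environment or during a dry run.

```python
# Rough estimate of a scheduled-maintenance cut-over window:
# back up the source, transfer the backup, restore on Cloud SQL, then
# switch applications. All inputs are placeholders to replace with
# values measured in a dry run.
backup_size_gib = 500          # compressed .bak size
backup_rate_gib_per_h = 200    # observed native backup throughput
transfer_mbps = 1000           # effective bandwidth between AWS and Google Cloud
restore_rate_gib_per_h = 150   # observed Cloud SQL import throughput
app_switch_minutes = 15        # DNS updates, connection string rollout, smoke tests

backup_h = backup_size_gib / backup_rate_gib_per_h
transfer_h = (backup_size_gib * 1024 * 8) / (transfer_mbps * 3600)  # GiB -> megabits, then hours
restore_h = backup_size_gib / restore_rate_gib_per_h

total_h = backup_h + transfer_h + restore_h + app_switch_minutes / 60
print(f"Estimated cut-over window: {total_h:.1f} hours")
```

If the estimate doesn't fit the downtime that the business can accept, that's a strong signal to plan for the continuous replication strategy instead.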
Strategies might vary for different databases, even when they're located on the same instance. A mix of strategies can produce optimal results. For example, migrate small and infrequently used databases by using the scheduled maintenance approach, but use continuous replication for mission-critical databases where having downtime is expensive. Usually, a migration is considered completed when the switch between the initial source instance and the target instance takes place. Any replication (if used) is stopped and all reads and writes are done on the target instance. Switching when both instances are in sync means no data loss and minimal downtime. For more information about data migration strategies and deployments, see Classification of database migrations. Minimize downtime and migration-related impacts Migration configurations that provide no application downtime require a more complicated setup. Find the right balance between setup complexity and the downtime that you can schedule during low-traffic hours. Each migration strategy has a tradeoff and some impact associated with the migration process. For example, replication processes involve some additional load on your source instances and your applications might be affected by replication lag. During the application downtime, applications (and customers) might have to wait at least as long as it takes for the replication lag to catch up before they can use the new database. In practice, the following factors might increase downtime: Database queries can take a few seconds to complete. At the time of migration, in-flight queries might be aborted. The database cache might take some time to warm up if the database is large or has a substantial buffer memory. Applications stopped in the source and restarted in Google Cloud might have a small lag until the connection to the Google Cloud database instance is established. Network routes to the applications must be rerouted. Depending on how DNS entries are set up, this can take some time. Reduce the TTL of your DNS records before the migration so that the updated records propagate faster. The following common practices can help minimize downtime and impact: Find a time period when downtime would have a minimal impact on your workloads. For example, outside normal business hours, during weekends, or other scheduled maintenance windows. Identify parts of your workloads that can undergo migration and production cut-over at different stages. Your applications might have different components that can be isolated, adapted, and migrated with no impact. For example, frontends, CRM modules, backend services, and reporting platforms. Such modules could have their own databases that can be scheduled for migration earlier or later in the process. If you can afford some latency on the target database, consider implementing a gradual migration. Use incremental, batched data transfers, and adapt part of your workloads to work with the stale data on the target instance. Consider refactoring your applications to support minimal migration impact. For example, adapt your applications to write to both source and target databases, and thereby implement application-level replication. Choose your migration tools The most important factor for a successful migration is choosing the right migration tool. After you decide on the migration strategy, review and choose the migration tool. There are many tools available, each optimized for certain migration use cases. Use cases can include the following: Migration strategy (scheduled maintenance or continuous replication).
Source and target database engines and engine versions. Environments in which source instances and target instances are located. Database size. The larger the database, the more time it takes to migrate the initial backup. Frequency of the database changes. Availability of managed services for the migration. To ensure a seamless migration and cut-over, you can use application deployment patterns, infrastructure orchestration, and custom migration applications. However, specialized tools called managed migration services can facilitate the process of moving data, applications, or even entire infrastructures from one environment to another. They handle data extraction from the source databases, securely transport data to the target databases, and can optionally modify the data during transit. With these capabilities, they encapsulate the complex logic of migration and offer migration monitoring capabilities. Managed migration services provide the following advantages: Minimize downtime: Services use the built-in replication capabilities of the database engines when available, and perform replica promotion. Ensure data integrity and security: Data is securely and reliably transferred from the source to the destination database. Provide a consistent migration experience: Different migration techniques and patterns can be consolidated into a consistent, common interface by using database migration executables, which you can manage and monitor. Offer resilient and proven migration models: Database migrations are infrequent but critical events. To avoid beginner mistakes and issues with existing solutions, you can use tools from experienced experts, rather than building a custom solution. Optimize costs: Managed migration services can be more cost-effective than provisioning additional servers and resources for custom migration solutions. The next sections describe the migration tool recommendations, which depend on the chosen migration strategy. Tools for scheduled maintenance migrations The following subsections describe the tools that can be used for one-time migrations, along with their limitations and best practices. Built-in database engine backups When significant downtime is acceptable, and your source databases are relatively static, you can use the database engine's built-in backup and restore capabilities. Some effort is required for setup and synchronization, especially for a large number of databases, but database engine backups are usually readily available and straightforward to use. This approach is suitable for any database size, and it's usually more effective than other tools for large instances. Database engine backups have the following general limitations: Backups can be error prone, particularly if performed manually. Data isn't secure if the backups aren't properly managed. Backups lack proper monitoring capabilities. If you choose this approach, consider the following restrictions and best practices: For backups of databases larger than 5 TB, use a striped backup (a backup of multiple files). When using a striped backup, you can't back up or restore from more than 10 backup files at the same time. You must back up to an Amazon S3 bucket in the same AWS Region as your source database instance. Backups don't include the SQL Server logins, permissions, and server roles, because they're defined at the instance level. To transfer this type of data from the source instance to the target instance, use PowerShell scripts or tools like DBAtools. A sketch of starting a striped native backup to Amazon S3 follows this list.
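As an illustration of this approach, the following sketch starts a native, striped backup of an RDS for SQL Server database to Amazon S3 and polls the task status by calling the AWS-provided msdb stored procedures through pyodbc. The connection string, database name, bucket ARN, and file count are placeholders, the instance is assumed to have the SQLSERVER_BACKUP_RESTORE option enabled, and you should verify the exact procedure parameters and result columns against the AWS documentation for your engine version.

```python
# Minimal sketch: start a native, striped backup of an RDS for SQL Server
# database to Amazon S3 and poll its status. Assumes pyodbc and an ODBC
# driver are installed, and that the SQLSERVER_BACKUP_RESTORE option is
# enabled on the instance. All identifiers below are placeholders.
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-rds-endpoint.rds.amazonaws.com,1433;"
    "DATABASE=msdb;UID=admin;PWD=REDACTED",
    autocommit=True,
)
cursor = conn.cursor()

# Striped backup (multiple files) for large databases; RDS limits the number
# of files per backup, so check the current quota in the AWS documentation.
cursor.execute("""
    EXEC msdb.dbo.rds_backup_database
         @source_db_name = 'salesdb',
         @s3_arn_to_backup_to = 'arn:aws:s3:::my-backup-bucket/salesdb*.bak',
         @type = 'FULL',
         @number_of_files = 4;
""")

# Poll the asynchronous task until it finishes. For simplicity, this checks
# the first returned task; in practice, filter by task_id.
while True:
    row = cursor.execute(
        "EXEC msdb.dbo.rds_task_status @db_name = 'salesdb';").fetchone()
    if row and row.lifecycle in ("SUCCESS", "ERROR"):   # column name per AWS docs
        print("Backup task finished with status:", row.lifecycle)
        break
    time.sleep(30)
```

After the backup files land in Amazon S3, you still need to copy them to Cloud Storage and import them into Cloud SQL, and transfer logins and server roles separately as noted above.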
For details about limitations and best practices, see Best practices for importing and exporting data with Cloud SQL for SQL Server and Cloud SQL for SQL Server Known Issues. Other approaches for scheduled maintenance migrations Using other approaches might provide more control and flexibility in your scheduled maintenance migration process. For example, by using flat files to export and import your data (or by using custom scripts), you can do the following: Perform transformations, checks, or other operations on your data during the migration. For example, validations, aggregation, or normalization and denormalization on your source data. Choose which structures you want to migrate and which to leave behind. You might decide to extract just a subset of your tables in flat files for import. Choose to filter out data based on domain, source, age, or other custom criteria. For example, you might exclude data that reaches an age threshold, and store it in files or the final backup of your source database, before the migration. Refactor your database structures, and synchronize the incurred downtime with the migration downtime. Consolidate multiple instances or databases into a single instance or database, to mitigate operational costs and ease your scalability issues. For example, you might want to change your approach from having one instance, database, or schema per customer to a single, multi-tenancy optimized database structure. Other approaches include the following: Use CSV or JSON files: With this approach, you extract the database's data into the files, and then import the files to your target instances. While generally slower, this method helps you migrate a subset of tables (or data within a given table). The CSV and JSON formats are understood by many tools. If you automate the process, you have the option to transition to a batched continuous replication migration. Use the SQL Server Import and Export Wizard from Microsoft: This tool uses SQL Server Integration Services (SSIS), and lets you import data from various sources, like other database engines or flat files. Use the SQL Server Generate and Publish Scripts Wizard and bcp utility: This tool is a part of Microsoft SQL Server Management Studio. This approach lets you create scripts for either your entire database schema or only parts of it. The bcp utility lets you script the data and export it into files. Use snapshot replication, if your source uses Amazon RDS Standard: With this approach, you restore the SQL Server backup of the RDS instance to another standalone instance of SQL Server on Compute Engine. Then, use snapshot replication to migrate to Cloud SQL for SQL Server. The snapshot generation keeps locks on source tables, which may impact your workloads. The snapshot replication might introduce additional loads on your Amazon RDS server. However, you can choose which objects get migrated or replicated, which provides flexibility. For details, see Migrating data from SQL Server 2017 to Cloud SQL for SQL Server using snapshot replication. Tools for continuous replication migrations The following diagram shows a flowchart with questions that can help you choose the migration tool for a single database, when you use a continuous replication migration strategy: The preceding flowchart shows the following decision points: Do you prefer to use managed migration services? If yes, continue to the next decision. 
In that case (you prefer a managed migration service), the next decision is whether you can afford some minimal downtime and can migrate without data transformation or real-time synchronization: if so, we recommend Database Migration Service; otherwise, explore third-party options. If you don't prefer a managed migration service, the next decision is whether database engine built-in replication is supported: if so, we recommend using built-in replication; if not, we recommend exploring other migration options. The following sections describe the tools that can be used for continuous replication migrations, along with their limitations and best practices. Database Migration Service for continuous replication migration Database Migration Service supports homogeneous migrations to Cloud SQL for SQL Server, when the source is Amazon RDS. Database Migration Service is a cost-effective and straightforward tool. We recommend Database Migration Service when the following circumstances apply: You can afford minimal downtime. You don't need real-time synchronization. You don't need to perform data transformations during the migration. If you choose this tool, consider the following restrictions and best practices: The amount of downtime depends on the frequency of your transaction log backups. Backups don't include the SQL Server logins, permissions, or server roles, because they're at the instance level. Script them out from the source instance, and then transfer them to the target instance by using PowerShell scripts or tools like DBAtools. For a full list of limitations, see Known limitations. Database engine built-in replication Cloud SQL supports replication for SQL Server. However, Standard Amazon RDS for SQL Server can only be a Subscriber. Built-in replication from Amazon RDS Standard is not available. Only Amazon RDS Custom for SQL Server can be set up as a built-in Publisher. For a list of supported and unsupported features on Amazon RDS, see Amazon RDS for Microsoft SQL Server. Other approaches for continuous replication migration Other continuous replication migration approaches include the following: Refactor your applications to perform Y (writing and reading) or use a data-access microservice. Continuous replication is performed by your applications. The migration effort is focused on the refactoring or development of tools that connect to your database instances. Reader applications are then gradually refactored and deployed to use the target instance. Implement functions that periodically query data on your source instance, filter for only new data, and write data to CSV, JSON, or Parquet files. These files are stored in a Cloud Storage bucket. The files can be written immediately to your target database instance by using Cloud Run functions. Change data capture (CDC) capabilities can help you achieve a near real-time replication migration. You can stream CDC into an Amazon S3 data lake in Parquet format, by using AWS Database Migration Service (AWS DMS). You can then build a custom implementation that reads the files and writes their content to Cloud SQL. Third-party tools for continuous replication migrations In some cases, it might be better to use one third-party tool for most database engines.
Such cases might be if you prefer to use a managed migration service and you need to ensure that the target database is always in near-real-time sync with the source, or if you need more complex transformations like data cleaning, restructuring, and adaptation during the migration process. If you decide to use a third-party tool, choose one of the following recommendations, which you can use for most database engines. Striim is an end-to-end, in-memory platform for collecting, filtering, transforming, enriching, aggregating, analyzing, and delivering data in real time: Advantages: Handles large data volumes and complex migrations. Built-in change data capture for SQL Server. Preconfigured connection templates and no-code pipelines. Able to handle mission-critical, large databases that operate under heavy transactional load. Exactly-once delivery. Disadvantages: Not open source. Can become expensive, especially for long migrations. Some limitations in data definition language (DDL) operations propagation. For more information, see Supported DDL operations and Schema evolution notes and limitations. For more information about Striim, see Running Striim in the Google Cloud. Debezium is an open source distributed platform for CDC, and can stream data changes to external subscribers: Advantages: Open source. Strong community support. Cost effective. Fine-grained control on rows, tables, or databases. Specialized for change capture in real time from database transaction logs. Disadvantages: Requires specific experience with Kafka and ZooKeeper. At-least-once delivery of data changes, which means that you need duplicates handling. Manual monitoring setup using Grafana and Prometheus. No support for incremental batch replication. For more information about Debezium migrations, see Near Real Time Data Replication using Debezium. Fivetran is an automated data movement platform for moving data out of and across cloud data platforms. Advantages: Preconfigured connection templates and no-code pipelines. Propagates any schema changes from your source to the target database. Exactly-once delivery of your data changes, which means that you don't need duplicates handling. Disadvantages: Not open source. Support for complex data transformation is limited. Define the migration plan and timeline For a successful database migration and production cut-over, we recommend that you prepare a well-defined, comprehensive migration plan. To help reduce the impact on your business, we recommend that you create a list of all the necessary work items. Defining the migration scope reveals the work tasks that you must do before, during, and after the database migration process. For example, if you decide not to migrate certain tables from a database, you might need pre-migration or post-migration tasks to implement this filtering. You also ensure that your database migration doesn't affect your existing service-level agreement (SLA) and business continuity plan. We recommend that your migration planning documentation include the following documents: Technical design document (TDD) RACI matrix Timeline (such as a T-Minus plan) Database migrations are an iterative process, and first migrations are often slower than the later ones. Usually, well-planned migrations run without issues, but unplanned issues can still arise. We recommend that you always have a rollback plan. As a best practice, follow the guidance from Migrate to Google Cloud: Best practices for validating a migration plan. 
TDD The TDD documents all technical decisions to be made for the project. Include the following in the TDD: Business requirements and criticality Recovery time objective (RTO) Recovery point objective (RPO) Database migration details Migration effort estimates Migration validation recommendations RACI matrix Some migration projects require a RACI matrix, which is a common project management document that defines which individuals or groups are responsible for tasks and deliverables within the migration project. Timeline Prepare a timeline for each database that needs to be migrated. Include all work tasks that must be performed, with defined start dates and estimated end dates. For each migration environment, we recommend that you create a T-minus plan. A T-minus plan is structured as a countdown schedule, and lists all the tasks required to complete the migration project, along with the responsible groups and estimated duration. The timeline should account not only for the execution of pre-migration preparation tasks, but also for the validation, auditing, and testing tasks that happen after the data transfer takes place. The duration of migration tasks typically depends on database size, but there are also other aspects to consider, like business logic complexity, application usage, and team availability. A T-Minus plan might look like the following:
Date | Phase | Category | Tasks | Role | T-minus | Status
11/1/2023 | Pre-migration | Assessment | Create assessment report | Discovery team | -21 | Complete
11/7/2023 | Pre-migration | Target preparation | Design target environment as described by the design document | Migration team | -14 | Complete
11/15/2023 | Pre-migration | Company governance | Migration date and T-Minus approval | Leadership | -6 | Complete
11/18/2023 | Migration | Set up DMS | Build connection profiles | Cloud migration engineer | -3 | Complete
11/19/2023 | Migration | Set up DMS | Build and start migration jobs | Cloud migration engineer | -2 | Not started
11/19/2023 | Migration | Monitor DMS | Monitor DMS jobs and DDL changes in the source instance | Cloud migration engineer | -2 | Not started
11/21/2023 | Migration | Cutover DMS | Promote DMS replica | Cloud migration engineer | 0 | Not started
11/21/2023 | Migration | Migration validation | Database migration validation | Migration team | 0 | Not started
11/21/2023 | Migration | Application test | Run capabilities and performance tests | Migration team | 0 | Not started
11/22/2023 | Migration | Company governance | Migration validation GO or NO GO | Migration team | 1 | Not started
11/23/2023 | Post-migration | Validate monitoring | Configure monitoring | Infrastructure team | 2 | Not started
11/25/2023 | Post-migration | Security | Remove DMS user account | Security team | 4 | Not started
Multiple database migrations If you have multiple databases to migrate, your migration plan should contain tasks for all of the migrations. We recommend that you start the process by migrating a smaller, ideally non-mission-critical database. This approach can help you to build your knowledge and confidence in the migration process and tooling. You can also detect any flaws in the process in the early stages of the overall migration schedule. If you have multiple databases to migrate, the timelines can be parallelized. For example, to speed up the migration process, you might choose to migrate a group of small, static, or less mission-critical databases at the same time, as shown in the following diagram. In the example shown in the diagram, databases 1-4 are a group of small databases that are migrated at the same time.
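To keep the countdown consistent when the cut-over date moves, you can derive the calendar dates of a T-minus plan from the planned cut-over date. The following sketch is purely illustrative; the tasks and offsets loosely mirror the example plan shown earlier in this section and should be replaced with your own.

```python
# Illustrative helper: derive calendar dates for a T-minus plan from the
# planned cut-over date. Tasks and offsets below are placeholders.
from datetime import date, timedelta

cutover = date(2023, 11, 21)   # T-0: promote the replica and cut over

tasks = [
    ("Create assessment report", -21),
    ("Design target environment", -14),
    ("Migration date and T-Minus approval", -6),
    ("Build connection profiles", -3),
    ("Build and start migration jobs", -2),
    ("Promote replica and validate migration", 0),
    ("Migration validation GO or NO GO", 1),
    ("Configure monitoring", 2),
    ("Remove migration user accounts", 4),
]

for name, t_minus in tasks:
    print(f"{cutover + timedelta(days=t_minus)}  T{t_minus:+d}  {name}")
```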
Define the preparation tasks The preparation tasks are all the activities that you need to complete to fulfill the migration prerequisites. If you don't complete the preparation tasks, the migration can't take place or the migrated database might be unusable as a result. Preparation tasks can be categorized as follows: Amazon RDS instance preparations and prerequisites Source database preparation and prerequisites Cloud SQL setup Migration specific setup Amazon RDS instance preparation and prerequisites Consider the following common setup and prerequisite tasks: Depending on your migration path, you might need to allow remote connections on your RDS instances. If your RDS instance is configured to be private in your VPC, private RFC 1918 connectivity must exist between Amazon and Google Cloud. You might need to configure a new security group to allow remote connections on required ports. By default, in AWS, network access is turned off for database instances. You can specify rules in a security group that allow access from an IP address range, port, or security group. The same rules apply to all database instances that are associated with that security group. For ongoing replication (streaming changes through CDC), you must use a full RDS instance and not a read replica, with CDC enabled. For details, see Using change data capture with SQL Server. If you're using third-party tools, upfront settings and configurations are usually required before using the tool. Check the documentation from the third-party tool. Source database preparation and prerequisites Ensure that the source database has buffer storage and memory available during the migration operations. For example, if you are using transaction log backup files, monitor the storage use and be sure to have additional buffer storage space. Document the settings of your database parameters, and apply them on your target instance before doing migration testing and validation. Take baseline measurements on your source environment in production use. Consider the following: Measure the size of your data, as well as your workload's performance. How long do important queries or transactions take, on average? How long during peak times? Document the baseline measurements for later comparison, to help you decide if the fidelity of your database migration is satisfactory. Decide if you can switch your production workloads and decommission your source environment, or if you still need it for fallback purposes. Cloud SQL setup Carefully choose the size and specifications of your target Cloud SQL database instance to match the source for similar performance needs. Pay special attention to disk size and throughput, IOPS, and number of vCPUs. Incorrect sizing can lead to long migration times, database performance problems, database errors, and application performance problems. Ensure that the destination is the correct fit. It's important to note that Amazon RDS configuration options might vary from Cloud SQL. In the event that Cloud SQL doesn't meet your requirements, consider options that include databases on Compute Engine. You must confirm the following properties and requirements before you create your Cloud SQL instances, because they can't be changed later without recreating them. Choose the project and region of your target Cloud SQL instances carefully. Cloud SQL instances can't be migrated between Google Cloud projects and regions without data transfer. Migrate to a matching major version on Cloud SQL. 
For example, if your source uses SQL Server 15.0, migrate to Cloud SQL for SQL Server 15.0. If the versions are different, the compatibility level setting should be the same to ensure the same engine capabilities. Replicate user information separately, if you are using built-in database engine backups or Database Migration Service. For details, review the limitations in the Database engine specific backups section. Review the database engine specific configuration flags and compare their source and target instance values. Make sure you understand their impact and whether they need to be the same or not. For example, we recommend comparing the values in the sys.configurations view on your source database with the values on the Cloud SQL instance. Note that not all flags must be the same and not all flag changes are allowed on a Cloud SQL instance. For more information about Cloud SQL setup, see the following: General best practices for SQL Server Instance configuration options for SQL Server Database engine specific flags for SQL Server Migration-specific setup If you use file export and import to migrate, or use the Database Migration Service migration tool, you need to create a Cloud Storage bucket. The bucket stores the database and transaction log backup files. For more information about using Database Migration Service, see Store backup files in a Cloud Storage bucket. If you use replication, you must ensure that the Cloud SQL replica has access to your primary database. This can be accomplished through the documented connectivity options. Depending on your scenario and criticality, you might need to implement a fallback scenario, which usually includes reversing the direction of the replication. In this case, you might need an additional replication mechanism from Cloud SQL back to your source Amazon instance. For most third-party tools, you need to provision migration specific resources. For example, for Striim, you need to use the Google Cloud Marketplace to begin. Then, to set up your migration environment in Striim, you can use the Flow Designer to create and change applications, or you can select a pre-existing template. Applications can also be coded using the Tungsten Query Language (TQL) programming language. Using a data validation dashboard, you can get a visual representation of data handled by your Striim application. You can decommission the resources that connect your Amazon and Google Cloud environment after the migration is completed and validated. Define the execution tasks Execution tasks implement the migration work itself. The tasks depend on your chosen migration tool, as described in the following subsections. Built-in database engine backups For more information and instructions for database specific backups, see Import data from a BAK file to Cloud SQL for SQL Server and Exporting data from RDS for SQL Server. For more information about how to automate transaction log file uploads, see Schedule transaction log file uploads for Amazon RDS. Database Migration Service migration jobs Define and configure migration jobs in Database Migration Service to migrate data from a source instance to the destination database. Migration jobs connect to the source database instance through user-defined connection profiles. Test all the prerequisites to ensure the job can run successfully. Choose a time when your workloads can afford a small downtime for the migration and production cut-over. 
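As part of this migration-specific setup, the full backup and transaction log backup files must end up in the Cloud Storage bucket that Database Migration Service reads from. The following sketch shows one way to stage locally downloaded backup files by using the Cloud Storage Python client library; the project, bucket, folder, and file names are placeholders, and in practice you might transfer the files directly from Amazon S3 instead.

```python
# Minimal sketch: upload a full backup and transaction log backups to the
# Cloud Storage bucket used for the migration. Assumes the
# google-cloud-storage client library and application default credentials.
# Bucket, folder, and file names are placeholders.
from pathlib import Path
from google.cloud import storage

client = storage.Client(project="my-migration-project")
bucket = client.bucket("my-sqlserver-migration-bucket")

backup_dir = Path("/backups/salesdb")   # local staging directory
for bak_file in sorted(backup_dir.glob("*.bak")) + sorted(backup_dir.glob("*.trn")):
    blob = bucket.blob(f"salesdb/{bak_file.name}")
    blob.upload_from_filename(str(bak_file))
    print(f"Uploaded {bak_file.name} to gs://{bucket.name}/{blob.name}")
```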
The migration process usually involves the following tasks: Export a full backup of the source database, and then upload it to a Cloud Storage bucket. Take a backup of the transaction log files, and then upload it to the same Cloud Storage bucket. For more information about how to automate this process, see Schedule transaction log file uploads for Amazon RDS. In Database Migration Service, you monitor the processing of the transaction log backups. Stop writing to the source database. Wait for the source and target to be synchronized, which is when the final transaction log backup is processed. Stop the ongoing replication and promote the migration job. By promoting a migration job, the destination Cloud SQL instance is disconnected from the source database instance, and then it is promoted to a primary instance. Deploy the applications that point to the new target database. For a detailed migration setup process, see Migrate your SQL Server databases to Cloud SQL for SQL Server. Database engine built-in replication If you are using Amazon RDS Standard, you might need to first migrate to the Amazon RDS Custom version and then replicate to Cloud SQL. Cloud SQL supports replication for SQL Server. For more information about replication from an external server, see Migrating data from SQL Server 2017 to Cloud SQL for SQL Server using snapshot replication. Third-party tools Define any execution tasks for the third-party tool you've chosen. For example, if you decide to use Striim, you need to create apps in your namespace, and configure the CDC reader to connect to the Amazon instance. For details, see SQL Server setup in the Striim documentation. Define fallback scenarios Define fallback action items for each migration execution task, to safeguard against unforeseen issues that might occur during the migration process. The fallback tasks usually depend on the migration strategy and tools used. Fallback might require significant effort. As a best practice, don't perform production cut-over until your test results are satisfactory. Both the database migration and the fallback scenario should be properly tested to avoid a severe outage. Define success criteria and timebox all your migration execution tasks. Doing a migration dry run helps collect information about the expected times for each task. For example, for a scheduled maintenance migration, you can afford the downtime represented by the cut-over window. However, it's important to plan your next action in case the one-time migration job or the restore of the backup fails midway. Depending on how much of your planned downtime has elapsed, you might have to postpone the migration if the migration task doesn't finish in the expected amount of time. A fallback plan usually refers to rolling back the migration after you perform the production cut-over, if issues on the target instance appear. If you implement a fallback plan, remember that it must be treated as a full database migration, including planning and testing. If you choose not to have a fallback plan, make sure you understand the possible consequences. Having no fallback plan can add unforeseen effort and cause avoidable disruptions in your migration process. Although a fallback is a last resort, and most database migrations don't end up using it, we recommend that you always have a fallback strategy. Simple fallback In this fallback strategy, you switch your applications back to the original source database instance.
Adopt this strategy if you can afford downtime when you fall back or if you don't need the transactions committed on the new target system. If you do need all the written data on your target database, and you can afford some downtime, you can consider stopping writes to your target database instance, taking built-in backups and restoring them on your source instance, and then re-connecting your applications to the initial source database instance. Depending on the nature of your workload and amount of data written on the target database instance, you could bring it into your initial source database system at a later time, especially if your workloads aren't dependent on any specific record creation time or any time ordering constraints. Reverse replication In this strategy, you replicate the writes that happen on your new target database after production cut-over back to your initial source database. In this way, you keep the original source in sync with the new target database and have the writes happening on the new target database instance. Its main disadvantage is that you can't test the replication stream until after you cut over to the target database instance. Therefore, this approach doesn't allow end-to-end testing, and there is a short period without a fallback. Choose this approach when you can still keep your source instance for some time and you migrate using the continuous replication migration strategy. Forward replication This strategy is a variation of reverse replication. You replicate the writes on your new target database to a third database instance of your choice. You can point your applications to this third database, which can serve read-only queries while the primary server is unavailable. You can use any replication mechanism, depending on your needs. The advantage of this approach is that it can be fully end-to-end tested. Take this approach when you want to be covered by a fallback at all times or when you must discard your initial source database shortly after the production cut-over. Duplicate writes If you choose a Y (writing and reading) or data-access microservice migration strategy, this fallback plan is already in place. This strategy is more complicated, because you need to refactor applications or develop tools that connect to your database instances. Your applications write to both initial source and target database instances, which lets you perform a gradual production cut-over until you are using only your target database instances. If there are any issues, you connect your applications back to the initial source with no downtime. You can discard the initial source and the duplicate writing mechanism when you're confident that the migration has completed without issues. We recommend this approach when it's critical to have no migration downtime and a reliable fallback in place, and when you have the time and resources to refactor your applications. Perform testing and validation The goals of this step are to test and validate the following: Successful migration of the data in the database. Integration with existing applications after they are switched to use the new target instance. Define the key success factors, which are specific to your migration. The following are examples of subjective factors: Which data to migrate. For some workloads, it might not be necessary to migrate all of the data. You might not want to migrate data that is already aggregated, redundant, archived, or old. You might archive that data in a Cloud Storage bucket, as a backup.
An acceptable percentage of data loss. This particularly applies to data used for analytics workloads, where losing part of the data does not affect general trends or performance of your workloads. Data quality and quantity criteria, which you can apply to your source environment and compare to the target environment after the migration. Performance criteria. Some business transactions might be slower in the target environment, but the processing time is still within defined expectations. The storage configurations in your source environment might not map directly to Google Cloud environment targets. For example, configurations from the General Purpose SSD (gp2 and gp3) volumes with IOPS burst performance or Provisioned IOPS SSD. To compare and properly size the target instances, benchmark your source instances, in both the assessment and validation phases. In the benchmarking process, you apply production-like sequences of operations to the database instances. During this time, you capture and process metrics to measure and compare the relative performance of both source and target systems. For conventional, server based configurations, use relevant measurements observed during peak loads. For flexible resource capacity models like Aurora Serverless, consider looking at historical metric data to observe your scaling needs. The following tools can be used for testing, validation, and database benchmarking: HammerDB: an open source database benchmarking and load testing tool. It supports complex transactional and analytic workloads, based on industry standards, on multiple database engines (both TPROC-C and TPROC-H). HammerDB has detailed documentation and a wide community of users. You can share and compare results across several database engines and storage configurations. For more information, see Load testing SQL Server using HammerDB and Benchmark Amazon RDS SQL Server performance using HammerDB. DBT2 Benchmark Tool: benchmarking specialized for MySQL. A set of database workload kits mimics an application for a company that owns warehouses and involves a mix of read and write transactions. Use this tool if you want to use a ready-made online transaction processing (OLTP) load test. DbUnit: an open source unit testing tool used to test relational database interactions in Java. The setup and use is straightforward, and it supports multiple database engines (MySQL, PostgreSQL, SQL Server, and others). However, the test execution can be slow sometimes, depending on the size and complexity of the database. We recommend this tool when simplicity is important. DbFit: an open source database testing framework that supports test-driven code development and automated testing. It uses a basic syntax for creating test cases and features data-driven testing, version control, and test result reporting. However, support for complex queries and transactions is limited and it doesn't have large community support or extensive documentation, compared to other tools. We recommend this tool if your queries are not complex and you want to perform automated tests and integrate them with your continuous integration and delivery process. To run an end-to-end test, including testing of the migration plan, always perform a migration dry run exercise. A dry run performs the full-scope database migration without switching any production workloads, and it offers the following advantages: Lets you ensure that all objects and configurations are properly migrated. 
Helps you define and execute your migration test cases. Offers insights into the time needed for the actual migration, so you can calibrate your timeline. Represents an occasion to test, validate, and adapt the migration plan. Sometimes you can't plan for everything in advance, so this helps you to spot any gaps. Data testing can be performed on a small set of the databases to be migrated or the entire set. Depending on the total number of databases and the tools used for implementing their migration, you can decide to adopt a risk based approach. With this approach, you perform data validation on a subset of databases migrated through the same tool, especially if this tool is a managed migration service. For testing, you should have access to both source and target databases and do the following tasks: Compare source and target schemas. Check if all tables and executables exist. Check row counts and compare data at the database level. Run custom data validation scripts. Test that the migrated data is also visible in the applications that switched to use the target database (migrated data is read through the application). Perform integration testing between the switched applications and the target database by testing various use cases. This testing includes both reading and writing data to the target databases through the applications so that the workloads fully support migrated data together with newly created data. Test the performance of the most used database queries to observe if there's any degradation due to misconfigurations or wrong sizing. Ideally, all these migration test scenarios are automated and repeatable on any source system. The automated test cases suite is adapted to perform against the switched applications. If you're using Database Migration Service as your migration tool, see Verify a migration. Data Validation Tool For performing data validation, we recommend that you use the Data Validation Tool (DVT). The DVT is an open sourced Python CLI tool, backed by Google, that provides an automated and repeatable solution for validation across different environments. The DVT can help streamline the data validation process by offering customized, multi-level validation functions to compare source and target tables on the table, column, and row level. You can also add validation rules. The DVT covers many Google Cloud data sources, including AlloyDB for PostgreSQL, BigQuery, Cloud SQL, Spanner, JSON, and CSV files on Cloud Storage. It can also be integrated with Cloud Run functions and Cloud Run for event based triggering and orchestration. The DVT supports the following types of validations: Schema level comparisons Column (including 'AVG', 'COUNT', 'SUM', 'MIN', 'MAX', 'GROUP BY', and 'STRING_AGG') Row (including hash and exact match in field comparisons) Custom query results comparison For more information about the DVT, see the Git repository and Data validation made easy with Google Cloud's Data Validation Tool. Perform the migration The migration tasks include the activities to support the transfer from one system to another. Consider the following best practices for your data migration: Inform the involved teams whenever a plan step begins and finishes. If any of the steps take longer than expected, compare the time elapsed with the maximum amount of time allotted for that step. Issue regular intermediary updates to involved teams when this happens. If the time span is greater than the maximal amount of time reserved for each step in the plan, consider rolling back. 
Make "go or no-go" decisions for every step of the migration and cut-over plan. Consider rollback actions or alternative scenarios for each of the steps. Perform the migration by following your defined execution tasks, and refer to the documentation for your selected migration tool. Perform the production cut-over The high-level production cut-over process can differ depending on your chosen migration strategy. If you can have downtime on your workloads, then your migration cut-over begins by stopping writes to your source database. For continuous replication migrations, you typically do the following high-level steps in the cut-over process: Stop writing to the source database. Drain the source. Stop the replication process. Deploy the applications that point to the new target database. After the data has been migrated by using the chosen migration tool, you validate the data in the target database. You confirm that the source database and the target databases are in sync and the data in the target instance adheres to your chosen migration success standards. Once the data validation passes your criteria, you can perform the application level cut-over. Deploy the workloads that have been refactored to use the new target instance. You deploy the versions of your applications that point to the new target database instance. The deployments can be performed either through rolling updates, staged releases, or by using a blue-green deployment pattern. Some application downtime might be incurred. Follow the best practices for your production cut-over: Monitor your applications that work with the target database after the cut-over. Define a time period of monitoring to consider whether or not you need to implement your fallback plan. Note that your Cloud SQL or AlloyDB for PostgreSQL instance might need a restart if you change some database flags. Consider that the effort of rolling back the migration might be greater than fixing issues that appear on the target environment. Cleanup the source environment and configure the Cloud SQL instance After the cut-over is completed, you can delete the source databases. We recommend performing the following important actions before the cleanup of your source instance: Create a final backup of each source database. These backups provide you with an end state of the source databases. The backups might also be required in some cases for compliance with some data regulations. Collect the database parameter settings of your source instance. Alternatively, check that they match the ones you've gathered in the inventory building phase. Adjust the target instance parameters to match the ones from the source instance. Collect database statistics from the source instance and compare them to the ones in the target instance. If the statistics are disparate, it's hard to compare the performance of the source instance and target instance. In a fallback scenario, you might want to implement the replication of your writes on the Cloud SQL instance back to your original source database. The setup resembles the migration process but would run in reverse: the initial source database would become the new target. As a best practice to keep the source instances up to date after the cut-over, replicate the writes performed on the target Cloud SQL instances back to the source database. If you need to roll back, you can fall back to your old source instances with minimal data loss. 
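To support the parameter comparison described above, a small script can diff the sys.configurations view between the source instance and the target Cloud SQL instance. The following is a minimal sketch that assumes pyodbc, network reachability to both instances, and placeholder connection strings; as noted earlier, not every setting can or should match on Cloud SQL, so treat differences as items to review rather than to change blindly.

```python
# Minimal sketch: compare sys.configurations values between the source
# Amazon RDS instance and the target Cloud SQL for SQL Server instance.
# Connection strings are placeholders; some settings are managed by
# Cloud SQL and aren't expected to match.
import pyodbc

def read_configurations(conn_str):
    with pyodbc.connect(conn_str) as conn:
        rows = conn.execute(
            "SELECT name, CONVERT(bigint, value_in_use) FROM sys.configurations"
        ).fetchall()
    return {name: value for name, value in rows}

source = read_configurations(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=my-rds-endpoint,1433;UID=admin;PWD=REDACTED")
target = read_configurations(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=10.0.0.5,1433;UID=sqlserver;PWD=REDACTED")

for name in sorted(set(source) | set(target)):
    if source.get(name) != target.get(name):
        print(f"{name}: source={source.get(name)} target={target.get(name)}")
```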
Besides the source environment cleanup, the following critical configurations for your Cloud SQL instances must be done: Configure a maintenance window for your primary instance to control when disruptive updates can occur. Configure the storage on the instance so that you have at least 20% available space to accommodate any critical database maintenance operations that Cloud SQL might perform. To receive an alert if available disk space gets lower than 20%, create a metrics-based alerting policy for the disk utilization metric. Don't start an administrative operation before the previous operation has completed. For more information, see the following: About maintenance on Cloud SQL instances. SQL Server engine specific settings Optimize your environment after migration Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. You repeat this sequence until you've achieved your optimization goals. For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization. Establish your optimization requirements Review the following optimization requirements for your Google Cloud environment and choose the ones that best fit your workloads. Increase the reliability and availability of your database With Cloud SQL, you can implement a high availability and disaster recovery strategy that aligns with your recovery time objective (RTO) and recovery point objective (RPO). To increase reliability and availability, consider the following: In cases of read-heavy workloads, add read replicas to offload traffic from the primary instance. For mission critical workloads, use the high-availability configuration, replicas for regional failover, and a robust disaster recovery configuration. For less critical workloads, automated and on-demand backups can be sufficient. To prevent accidental removal of instances, use instance deletion protection. Enable point-in-time recovery (PITR) for your Cloud SQL for SQL Server instance. Automate SQL database backups by using maintenance plans. Increase the cost effectiveness of your database infrastructure To have a positive economic impact, your workloads must use the available resources and services efficiently. Consider the following options: Provide the database with the minimum required storage capacity by doing the following: To scale storage capacity automatically as your data grows, enable automatic storage increases. However, ensure that you configure your instances to have some buffer in peak workloads. Remember that database workloads tend to increase over time. Identify possible overestimated resources: Rightsizing your Cloud SQL instances can reduce the infrastructure cost without adding additional risks to the capacity management strategy. Cloud Monitoring provides predefined dashboards that help identify the health and capacity utilization of many Google Cloud components, including Cloud SQL. For details, see Create and manage custom dashboards. Identify instances that don't require high availability or disaster recovery configurations, and remove them from your infrastructure. 
Remove tables and objects that are no longer needed. You can store them in a full backup or an archival Cloud Storage bucket. Evaluate the most cost-effective storage type (SSD or HDD) for your use case. For most use cases, SSD is the most efficient and cost-effective choice. If your datasets are large (10 TB or more), latency-insensitive, or infrequently accessed, HDD might be more appropriate. For details, see Choose between SSD and HDD storage. Purchase committed use discounts for workloads with predictable resource needs. Use Active Assist to get cost insights and recommendations. For more information and options, see Do more with less: Introducing Cloud SQL cost optimization recommendations with Active Assist. To reduce licensing costs, specifically for Cloud SQL for SQL Server, consider the following: Migrate to SQL Server Standard Edition, if the SLAs match your requirements. Turn off simultaneous multithreading (SMT) and add 25% more cores. The additional cores might compensate for any performance impact from turning off SMT. This strategy can help reduce licensing costs, but it might impact your instance's performance. We recommend that you perform load testing on your instance to ensure your SLAs are not affected. Consider a heterogeneous migration from Cloud SQL for SQL Server to Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL, depending on your workload. Increase the performance of your database infrastructure Minor database-related performance issues frequently have the potential to impact the entire operation. To maintain and increase your Cloud SQL instance performance, consider the following guidelines: If you have a large number of database tables, they can affect instance performance and availability, and cause the instance to lose its SLA coverage. Ensure that your instance isn't constrained on memory or CPU. For performance-intensive workloads, ensure that your instance has at least 60 GB of memory. For slow database inserts, updates, or deletes, check the locations of the writer and database; sending data over long distances introduces latency. Improve query performance by using the predefined Query Insights dashboard in Cloud Monitoring (or similar database engine built-in features). Identify the most expensive commands and try to optimize them. Prevent database files from becoming unnecessarily large. Set autogrow in MBs rather than as a percentage, using increments appropriate to the requirement. Check reader and database location. Latency affects read performance more than write performance. Prevent data and index fragmentation. Set a schedule to rebuild your indexes in SQL Server, depending on how often your data changes. Change the SQL Server engine-specific settings so that they work optimally for Cloud SQL. For index and statistics maintenance, see SQL Server Maintenance Solution. Update statistics regularly on Cloud SQL for SQL Server. Consider performing ETL operations on a read replica, because they might affect your instance's cache. For more information about increasing performance, see Performance in "Diagnose issues", and Cloud SQL - SQL Server Performance Analysis and Query Tuning. Increase database observability capabilities Diagnosing and troubleshooting issues in applications that connect to database instances can be challenging and time-consuming. For this reason, a centralized place where all team members can see what's happening at the database and instance level is essential.
You can monitor Cloud SQL instances in the following ways: Cloud SQL uses built-in memory custom agents to collect query telemetry. Use Cloud Monitoring to collect measurements of your service and the Google Cloud resources that you use. Cloud Monitoring includes predefined dashboards for several Google Cloud products, including a Cloud SQL monitoring dashboard. You can create custom dashboards that help you monitor metrics and set up alert policies so that you can receive timely notifications. Alternatively, you can consider using third-party monitoring solutions that are integrated with Google Cloud, such as Datadog and Splunk. Cloud Logging collects logging data from common application components. Cloud Trace collects latency data and executed query plans from applications to help you track how requests propagate through your application. Database Center provides an AI-assisted, centralized database fleet overview. You can monitor the health of your databases, availability configuration, data protection, security, and industry compliance. View and query logs for your Cloud SQL instance. Observe the status of your databases to help you improve the performance of your environment, and to troubleshoot eventual issues. General Cloud SQL best practices and operational guidelines Apply the best practices for Cloud SQL to configure and tune the database. Some important Cloud SQL general recommendations are as follows: If you have large instances, we recommend that you split them into smaller instances, when possible. Configure storage to accommodate critical database maintenance. Ensure you have at least 20% available space to accommodate any critical database maintenance operations that Cloud SQL might perform. Having too many database tables can affect database upgrade time. Ideally, aim to have under 10,000 tables per instance. Choose the appropriate size for your instances to account for transaction (binary) log retention, especially for high write activity instances. To be able to efficiently handle any database performance issues that you might encounter, use the following guidelines until your issue is resolved: Scale up infrastructure: Increase resources (such as disk throughput, vCPU, and RAM). Depending on the urgency and your team's availability and experience, vertically scaling your instance can resolve most performance issues. Later, you can further investigate the root cause of the issue in a test environment and consider options to eliminate it. Perform and schedule database maintenance operations: Index defragmentation, statistics updates, vacuum analyze, and reindex heavily updated tables. Check if and when these maintenance operations were last performed, especially on the affected objects (tables, indexes). Find out if there was a change from normal database activities. For example, recently adding a new column or having lots of updates on a table. Perform database tuning and optimization: Are the tables in your database properly structured? Do the columns have the correct data types? Is your data model right for the type of workload? Investigate your slow queries and their execution plans. Are they using the available indexes? Check for index scans, locks, and waits on other resources. Consider adding indexes to support your critical queries. Eliminate non-critical indexes and foreign keys. Consider rewriting complex queries and joins. The time it takes to resolve your issue depends on the experience and availability of your team and can range from hours to days. 
Scale out your reads: Consider having read replicas. When scaling vertically isn't sufficient for your needs, and database tuning and optimization measures aren't helping, consider scaling horizontally. Routing read queries from your applications to a read replica improves the overall performance of your database workload. However, it might require additional effort to change your applications to connect to the read replica. Database re-architecture: Consider partitioning and indexing the database. This operation requires significantly more effort than database tuning and optimization, and it might involve a data migration, but it can be a long-term fix. Sometimes, poor data model design can lead to performance issues, which can be partially compensated by vertical scale-up. However, a proper data model is a long-term fix. Consider partitioning your tables. Archive data that isn't needed anymore, if possible. Normalize your database structure, but remember that denormalizing can also improve performance. Database sharding: You can scale out your writes by sharding your database. Sharding is a complicated operation and involves re-architecting your database and applications in a specific way and performing data migration. You split your database instance in multiple smaller instances by using a specific partitioning criteria. The criteria can be based on customer or subject. This option lets you horizontally scale both your writes and reads. However, it increases the complexity of your database and application workloads. It might also lead to unbalanced shards called hotspots, which would outweigh the benefit of sharding. Specifically for Cloud SQL for SQL Server, consider the following best practices: To update the SQL Server settings for optimal performance with Cloud SQL, see SQL Server settings. Determine the capacity of the I/O subsystem before you deploy SQL Server. If you have large instances, split them into smaller instances, where possible: A disk size of 4 TB or greater provides more throughput and IOPS. Higher vCPU provides more IOPS and throughput. When using higher vCPU, monitor the database waits for parallelism, which might also increase. Turn off SMT, if performance is diminished in some situations. For example, if an application executes threads that become a bottleneck and the architecture of the CPU doesn't handle this effectively. Set a schedule to reorganize or rebuild your indexes, depending on how often your data changes. Set an appropriate fill factor to reduce fragmentation. Monitor SQL Server for missing indexes that might offer improved performance. Prevent database files from becoming unnecessarily large. Set autogrow in MBs rather than as a percentage, using increments appropriate to the requirement. Also, proactively manage the database growth before the autogrow threshold is reached. To scale storage capacity automatically, enable automatic storage increases. Cloud SQL can add storage space if the database and the instance run out of space. Ensure you understand the locale requirements, sorting order, and case and accent sensitivity of the data that you're working with. When you select collation for your server, database, column, or expression, you're assigning certain characteristics to your data. Recursive hash joins or hash bailouts cause reduced performance in a server. If you see many hash warning events in a trace, update the statistics on the columns that are being joined. For more information, see Hash Warning Event Class. 
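To illustrate the autogrow, fill factor, and statistics recommendations above, the following is a hedged sketch that runs routine T-SQL maintenance through pyodbc; the connection string, database name (appdb), logical file name, and table name are placeholders, and the sketch assumes you connect through the Cloud SQL Auth Proxy on localhost.

    import pyodbc

    # Placeholder connection details for a Cloud SQL for SQL Server instance
    # reached through the Cloud SQL Auth Proxy listening on 127.0.0.1.
    CONN_STR = (
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=127.0.0.1,1433;DATABASE=appdb;UID=sqlserver;PWD=<password>;"
        "TrustServerCertificate=yes;"
    )

    def run_maintenance() -> None:
        # autocommit is required because ALTER DATABASE can't run inside a transaction.
        with pyodbc.connect(CONN_STR, autocommit=True) as conn:
            cursor = conn.cursor()
            # Grow the data file in fixed 256 MB increments instead of a percentage.
            cursor.execute(
                "ALTER DATABASE appdb MODIFY FILE (NAME = N'appdb_data', FILEGROWTH = 256MB);"
            )
            # Rebuild indexes on a heavily updated table with a fill factor that
            # leaves room for inserts, then refresh its statistics.
            cursor.execute("ALTER INDEX ALL ON dbo.orders REBUILD WITH (FILLFACTOR = 90);")
            cursor.execute("UPDATE STATISTICS dbo.orders;")

In practice you would schedule this kind of job, or the SQL Server Maintenance Solution referenced above, to run during your configured maintenance window.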
For more details, see General best practices and Operational guidelines for Cloud SQL for SQL Server. What's next Read about other AWS to Google Cloud migration journeys. Learn how to compare AWS and Azure services to Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Alex Cârciu | Solutions ArchitectMarco Ferrari | Cloud Solutions ArchitectOther contributors: Derek Downey | Developer Relations EngineerPaweł Krentowski | Technical WriterMatthew Smith | Strategic Cloud EngineerSomdyuti Paul | Data Management SpecialistZach Seils | Networking Specialist Send feedback \ No newline at end of file diff --git a/Migrate_from_Kubernetes_to_GKE.txt b/Migrate_from_Kubernetes_to_GKE.txt new file mode 100644 index 0000000000000000000000000000000000000000..68c33086f6d2e94e85ade92c9488c236defaf27f --- /dev/null +++ b/Migrate_from_Kubernetes_to_GKE.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrating-containers-kubernetes-gke +Date Scraped: 2025-02-23T11:52:13.818Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate containers to Google Cloud: Migrate from Kubernetes to GKE Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-22 UTC This document helps you plan, design, and implement your migration from a self-managed Kubernetes environment to Google Kubernetes Engine (GKE). If done incorrectly, moving apps from one environment to another can be a challenging task, so you need to plan and execute your migration carefully. This document is useful if you're planning to migrate from a self-managed Kubernetes environment to GKE. Your environment might be running in an on-premises environment, in a private hosting environment, or in another cloud provider. This document is also useful if you're evaluating the opportunity to migrate and want to explore what it might look like. GKE is a Google-managed Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure, and provides features that help you manage your Kubernetes environment, such as: Two editions: GKE Standard and GKE Enterprise. With GKE Standard, you get access to a standard tier of core features. With GKE Enterprise, you get access to all the capabilities of GKE. For more information, see GKE editions. Two modes of operation: Standard and Autopilot. With Standard, you manage the underlying infrastructure and the configuration of each node in your GKE cluster. With Autopilot, GKE manages the underlying infrastructure such as node configuration, autoscaling, auto-upgrades, baseline security and network configuration. For more information about GKE modes of operation, see Choose a GKE mode of operation. Industry-unique service level agreement for Pods when using Autopilot in multiple zones. Automated node pool creation and deletion with node auto-provisioning. Google-managed multi-cluster networking to help you design and implement highly available, distributed architectures for your workloads. For more information about GKE, see GKE overview. For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. 
You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. Assess your environment In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud. The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration. The assessment phase consists of the following tasks: Build a comprehensive inventory of your workloads. Catalog your workloads according to their properties and dependencies. Train and educate your teams on Google Cloud. Build experiments and proofs of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the migration strategy for your workloads. Choose your migration tools. Define the migration plan and timeline. Validate your migration plan. For more information about the assessment phase and these tasks, see Migrate to Google Cloud: Assess and discover your workloads. The following sections are based on information in that document. Build your inventories To scope your migration, you create two inventories: The inventory of your clusters. The inventory of your workloads that are deployed in those clusters. After you build these inventories, you: Assess your deployment and operational processes for your source environment. Assess supporting services and external dependencies. Build the inventory of your clusters To build the inventory of your clusters, consider the following for each cluster: Number and type of nodes. When you know how many nodes and the characteristics of each node that you have in your current environment, you size your clusters when you move to GKE. The nodes in your new environment might run on a different hardware architecture or generation than the ones you use in your environment. The performance of each architecture and generation is different, so the number of nodes you need in your new environment might be different from your environment. Evaluate any type of hardware that you're using in your nodes, such as high-performance storage devices, GPUs, and TPUs. Assess which operating system image that you're using on your nodes. Internal or external cluster. Evaluate which actors, either internal to your environment or external, that each cluster is exposed to. To support your use cases, this evaluation includes the workloads running in the cluster, and the interfaces that interact with your clusters. Multi-tenancy. If you're managing multi-tenant clusters in your environment, assess if it works in your new Google Cloud environment. 
Now is a good time to evaluate how to improve your multi-tenant clusters because your multi-tenancy strategy influences how you build your foundation on Google Cloud. Kubernetes version. Gather information about the Kubernetes version of your clusters to assess if there is a mismatch between those versions and the ones available in GKE. If you're running an older or a recently released Kubernetes version, you might be using features that are unavailable in GKE. The features might be deprecated, or the Kubernetes version that ships them is not yet available in GKE. Kubernetes upgrade cycle. To maintain a reliable environment, understand how you're handling Kubernetes upgrades and how your upgrade cycle relates to GKE upgrades. Node pools. If you're using any form of node grouping, you might want to consider how these groupings map to the concept of node pools in GKE because your grouping criteria might not be suitable for GKE. Node initialization. Assess how you initialize each node before marking it as available to run your workloads so you can port those initialization procedures over to GKE. Network configuration. Assess the network configuration of your clusters, their IP address allocation, how you configured their networking plugins, how you configured their DNS servers and DNS service providers, if you configured any form of NAT or SNAT for these clusters, and whether they are part of a multi-cluster environment. Compliance: Assess any compliance and regulatory requirements that your clusters are required to satisfy, and whether you're meeting these requirements. Quotas and limits. Assess how you configured quotas and limits for your clusters. For example, how many Pods can each node run? How many nodes can a cluster have? Labels and tags. Assess any metadata that you applied to clusters, node pools, and nodes, and how you're using them. For example, you might be generating reports with detailed, label-based cost attribution. The following items that you assess in your inventory focus on the security of your infrastructure and Kubernetes clusters: Namespaces. If you use Kubernetes Namespaces in your clusters to logically separate resources, assess which resources are in each Namespace, and understand why you created this separation. For example, you might be using Namespaces as part of your multi-tenancy strategy. You might have workloads deployed in Namespaces reserved for Kubernetes system components, and you might not have as much control in GKE. Role-based access control (RBAC). If you use RBAC authorization in your clusters, list a description of all ClusterRoles and ClusterRoleBindings that you configured in your clusters. Network policies. List all network policies that you configured in your clusters, and understand how network policies work in GKE. Pod security contexts. Capture information about the Pod security contexts that you configured in your clusters and learn how they work in GKE. Service accounts. If any process in your cluster is interacting with the Kubernetes API server, capture information about the service accounts that they're using. When you build the inventory of your Kubernetes clusters, you might find that some of the clusters need to be decommissioned as part of your migration. Make sure that your migration plan includes retiring these resources. Build the inventory of your Kubernetes workloads After you complete the Kubernetes clusters inventory and assess the security of your environment, build the inventory of the workloads deployed in those clusters. 
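A minimal sketch of how you might start automating both inventories with the official Kubernetes Python client is shown below; the kubeconfig context name and the fields printed are illustrative, and you would extend the script to capture the other items discussed in this section, such as labels, node pools, and network configuration.

    from collections import Counter
    from kubernetes import client, config

    def inventory_cluster(context: str) -> None:
        # Load credentials for one source cluster from the local kubeconfig.
        config.load_kube_config(context=context)
        core = client.CoreV1Api()
        apps = client.AppsV1Api()

        # Cluster inventory: node count, Kubernetes (kubelet) version, OS image, architecture.
        nodes = core.list_node().items
        print(f"Cluster {context}: {len(nodes)} nodes")
        for node in nodes:
            info = node.status.node_info
            print(f"  {node.metadata.name}: kubelet={info.kubelet_version}, "
                  f"os={info.os_image}, arch={info.architecture}")

        # Workload inventory: count controllers across all namespaces.
        counts = Counter({
            "Deployments": len(apps.list_deployment_for_all_namespaces().items),
            "DaemonSets": len(apps.list_daemon_set_for_all_namespaces().items),
            "StatefulSets": len(apps.list_stateful_set_for_all_namespaces().items),
        })
        print(f"  Workloads: {dict(counts)}")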
When evaluating your workloads, gather information about the following aspects: Pods and controllers. To size the clusters in your new environment, assess how many instances of each workload you have deployed, and whether you're using Resource quotas and compute resource consumption limits. Gather information about the workloads that are running on the control plane nodes of each cluster and the controllers that each workload uses. For example, how many Deployments are you using? How many DaemonSets are you using? Jobs and CronJobs. Your clusters and workloads might need to run Jobs or CronJobs as part of their initialization or operation procedures. Assess how many instances of Jobs and CronJobs you have deployed, and the responsibilities and completion criteria for each instance. Kubernetes Autoscalers. To migrate your autoscaling policies in the new environment, learn how the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler work on GKE. Stateless and stateful workloads. Stateless workloads don't store data or state in the cluster or in persistent storage. Stateful applications save data for later use. For each workload, assess which components are stateless and which are stateful, because migrating stateful workloads is typically harder than migrating stateless ones. Kubernetes features. From the cluster inventory, you know which Kubernetes version each cluster runs. Review the release notes of each Kubernetes version to know which features it ships and which features it deprecates. Then assess your workloads against the Kubernetes features that you need. The goal of this task is to know whether you're using deprecated features or features that are not yet available in GKE. If you find any unavailable features, migrate away from deprecated features and adopt the new ones when they're available in GKE. Storage. For stateful workloads, assess if they use PersistentVolumeClaims. List any storage requirements, such as size and access mode, and how these PersistentVolumeClaims map to PersistentVolumes. To account for future growth, assess if you need to expand any PersistentVolumeClaim. Configuration and secret injection. To avoid rebuilding your deployable artifacts every time there is a change in the configuration of your environment, inject configuration and secrets into Pods using ConfigMaps and Secrets. For each workload, assess which ConfigMaps and Secrets that workload is using, and how you're populating those objects. Dependencies. Your workloads probably don't work in isolation. They might have dependencies, either internal to the cluster, or from external systems. For each workload, capture the dependencies, and whether your workloads can tolerate periods when those dependencies are unavailable. For example, common dependencies include distributed file systems, databases, secret distribution platforms, identity and access management systems, service discovery mechanisms, and any other external systems. Kubernetes Services. To expose your workloads to internal and external clients, use Services. For each Service, you need to know its type. For externally exposed services, assess how that service interacts with the rest of your infrastructure. For example, how is your infrastructure supporting LoadBalancer services, Gateway objects, and Ingress objects? Which Ingress controllers did you deploy in your clusters? Service mesh. If you're using a service mesh in your environment, assess how it's configured.
You also need to know how many clusters it spans, which services are part of the mesh, and how you modify the topology of the mesh. Taints and tolerations and affinity and anti-affinity. For each Pod and Node, assess if you configured any Node taints, Pod tolerations, or affinities to customize the scheduling of Pods in your Kubernetes clusters. These properties might also give you insights about possible non-homogeneous Node or Pod configurations, and might mean that either the Pods, the Nodes, or both need to be assessed with special focus and care. For example, if you configured a particular set of Pods to be scheduled only on certain Nodes in your Kubernetes cluster, it might mean that the Pods need specialized resources that are available only on those Nodes. Authentication: Assess how your workloads authenticate against resources in your cluster, and against external resources. Assess supporting services and external dependencies After you assess your clusters and their workloads, evaluate the rest of the supporting services and aspects in your infrastructure, such as the following: StorageClasses and PersistentVolumes. Assess how your infrastructure is backing PersistentVolumeClaims by listing StorageClasses for dynamic provisioning, and statically provisioned PersistentVolumes. For each PersistentVolume, consider the following: capacity, volume mode, access mode, class, reclaim policy, mount options, and node affinity. VolumeSnapshots and VolumeSnapshotContents. For each PersistentVolume, assess if you configured any VolumeSnapshot, and if you need to migrate any existing VolumeSnapshotContents. Container Storage Interface (CSI) drivers. If deployed in your clusters, assess if these drivers are compatible with GKE, and if you need to adapt the configuration of your volumes to work with CSI drivers that are compatible with GKE. Data storage. If you depend on external systems to provision PersistentVolumes, provide a way for the workloads in your GKE environment to use those systems. Data locality has an impact on the performance of stateful workloads, because the latency between your external systems and your GKE environment is proportional to the distance between them. For each external data storage system, consider its type, such as block volumes, file storage, or object storage, and any performance and availability requirements that it needs to satisfy. Custom resources and Kubernetes add-ons. Collect information about any custom Kubernetes resources and any Kubernetes add-ons that you might have deployed in your clusters, because they might not work in GKE, or you might need to modify them. For example, if a custom resource interacts with an external system, you assess if that's applicable to your Google Cloud environment. Backup. Assess how you're backing up the configuration of your clusters and stateful workload data in your source environment. Assess your deployment and operational processes It's important to have a clear understanding of how your deployment and operational processes work. These processes are a fundamental part of the practices that prepare and maintain your production environment and the workloads that run there. Your deployment and operational processes might build the artifacts that your workloads need to function. Therefore, you should gather information about each artifact type. For example, an artifact can be an operating system package, an application deployment package, an operating system image, a container image, or something else. 
In addition to the artifact type, consider how you complete the following tasks: Develop your workloads. Assess the processes that development teams have in place to build your workloads. For example, how are your development teams designing, coding, and testing your workloads? Generate the artifacts that you deploy in your source environment. To deploy your workloads in your source environment, you might be generating deployable artifacts, such as container images or operating system images, or you might be customizing existing artifacts, such as third-party operating system images by installing and configuring software. Gathering information about how you're generating these artifacts helps you to ensure that the generated artifacts are suitable for deployment in Google Cloud. Store the artifacts. If you produce artifacts that you store in an artifact registry in your source environment, you need to make the artifacts available in your Google Cloud environment. You can do so by employing strategies like the following: Establish a communication channel between the environments: Make the artifacts in your source environment reachable from the target Google Cloud environment. Refactor the artifact build process: Complete a minor refactor of your source environment so that you can store artifacts in both the source environment and the target environment. This approach supports your migration by building infrastructure like an artifact repository before you have to implement artifact build processes in the target Google Cloud environment. You can implement this approach directly, or you can build on the previous approach of establishing a communication channel first. Having artifacts available in both the source and target environments lets you focus on the migration without having to implement artifact build processes in the target Google Cloud environment as part of the migration. Scan and sign code. As part of your artifact build processes, you might be using code scanning to help you guard against common vulnerabilities and unintended network exposure, and code signing to help you ensure that only trusted code runs in your environments. Deploy artifacts in your source environment. After you generate deployable artifacts, you might be deploying them in your source environment. We recommend that you assess each deployment process. The assessment helps ensure that your deployment processes are compatible with Google Cloud. It also helps you to understand the effort that will be necessary to eventually refactor the processes. For example, if your deployment processes work with your source environment only, you might need to refactor them to target your Google Cloud environment. Inject runtime configuration. You might be injecting runtime configuration for specific clusters, runtime environments, or workload deployments. The configuration might initialize environment variables and other configuration values such as secrets, credentials, and keys. To help ensure that your runtime configuration injection processes work on Google Cloud, we recommend that you assess how you're configuring the workloads that run in your source environment. Logging, monitoring, and profiling. Assess the logging, monitoring, and profiling processes that you have in place to monitor the health of your source environment, the metrics of interest, and how you're consuming data provided by these processes. Authentication. Assess how you're authenticating against your source environment. 
Provision and configure your resources. To prepare your source environment, you might have designed and implemented processes that provision and configure resources. For example, you might be using Terraform along with configuration management tools to provision and configure resources in your source environment. Plan and build your foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation. The following sections integrate the considerations in Migrate to Google Cloud: Plan and build your foundation. Plan for multi-tenancy To design an efficient resource hierarchy, consider how your business and organizational structures map to Google Cloud. For example, if you need a multi-tenant environment on GKE, you can choose between the following options: Creating one Google Cloud project for each tenant. Sharing one project among different tenants, and provisioning multiple GKE clusters. Using Kubernetes namespaces. Your choice depends on your isolation, complexity, and scalability needs. For example, having one project per tenant isolates the tenants from one another, but the resource hierarchy becomes more complex to manage due to the high number of projects. However, although managing Kubernetes Namespaces is easier than managing a complex resource hierarchy, this option doesn't guarantee as much isolation. For example, the control plane might be shared between tenants. For more information, see Cluster multi-tenancy. Configure identity and access management GKE supports multiple options for managing access to resources within your Google Cloud project and its clusters using RBAC. For more information, see Access control. Configure GKE networking Network configuration is a fundamental aspect of your environment. Before provisioning and configuring any cluster, we recommend that you assess the GKE network model, the best practices for GKE networking, and how to plan IP addresses when migrating to GKE. Set up monitoring and alerting Having a clear picture of how your infrastructure and workloads are performing is key to finding areas of improvement. GKE has deep integrations with Google Cloud Observability, so you get logging, monitoring, and profiling information about your GKE clusters and workloads inside those clusters. Migrate data and deploy your workloads In the deployment phase, you do the following: Provision and configure your GKE environment. Configure your GKE clusters. Refactor your workloads. Refactor deployment and operational processes. Migrate data from your source environment to Google Cloud. Deploy your workloads in your GKE environment. Validate your workloads and GKE environment. Expose workloads running on GKE. Shift traffic from the source environment to the GKE environment. Decommission the source environment. Provision and configure your Google Cloud environment Before moving any workload to your new Google Cloud environment, you provision the GKE clusters.
GKE supports enabling certain features on existing clusters, but there might be features that you can only enable at cluster creation time. To help you avoid disruptions and simplify the migration, we recommend that you enable the cluster features that you need at cluster creation time. Otherwise, you might need to destroy and recreate your clusters in case the cluster features you need cannot be enabled after creating a cluster. After the assessment phase, you now know how to provision the GKE clusters in your new Google Cloud environment to meet your needs. To provision your clusters, consider the following: The number of clusters, the number of nodes per cluster, the types of clusters, the configuration of each cluster and each node, and the scalability plans of each cluster. The mode of operation of each cluster. GKE offers two modes of operation for clusters: GKE Autopilot and GKE Standard. The number of private clusters. The choice between VPC-native or router-based networking. The Kubernetes versions and release channels that you need in your GKE clusters. The node pools to logically group the nodes in your GKE clusters, and if you need to automatically create node pools with node auto-provisioning. The initialization procedures that you can port from your environment to the GKE environment and new procedures that you can implement. For example, you can automatically bootstrap GKE nodes by implementing one or multiple, eventually privileged, initialization procedures for each node or node pool in your clusters. The scalability plans for each cluster. The additional GKE features that you need, such as Cloud Service Mesh, and GKE add-ons, such as Backup for GKE. For more information about provisioning GKE clusters, see: About cluster configuration choices. Manage, configure, and deploy GKE clusters. Understanding GKE security. Harden your cluster's security. GKE networking overview. Best practices for GKE networking. Storage for GKE clusters overview. Fleet management When you provision your GKE clusters, you might realize that you need a large number of them to support all the use cases of your environment. For example, you might need to separate production from non-production environments, or separate services across teams or geographies. For more information, see multi-cluster use cases. As the number of clusters increases, your GKE environment might become harder to operate because managing a large number of clusters poses significant scalability and operational challenges. GKE provides tools and features to help you manage fleets, a logical grouping of Kubernetes clusters. For more information, see Fleet management. Multi-cluster networking To help you improve the reliability of your GKE environment, and to distribute your workloads across several GKE clusters, you can use: Multi-Cluster Service Discovery, a cross-cluster service discovery and invocation mechanism. Services are discoverable and accessible across GKE clusters. For more information, see Multi-Cluster Service Discovery. Multi-cluster gateways, a cross-cluster ingress traffic load balancing mechanism. For more information, see Deploying multi-cluster Gateways. Multi-cluster mesh on managed Cloud Service Mesh. For more information, see Set up a multi-cluster mesh. For more information about migrating from a single-cluster GKE environment to a multi-cluster GKE environment, see Migrate to multi-cluster networking. 
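Teams typically provision clusters with Terraform or the gcloud CLI, but as an illustration of some of the choices above (regional location, VPC-native networking, and a release channel), the following is a minimal sketch that uses the google-cloud-container Python client; the cluster name, machine type, node count, and region are placeholder assumptions, not recommendations.

    from google.cloud import container_v1

    def create_gke_cluster(project_id: str, region: str) -> None:
        client = container_v1.ClusterManagerClient()
        cluster = container_v1.Cluster(
            name="migration-target",  # hypothetical cluster name
            initial_node_count=3,
            node_config=container_v1.NodeConfig(machine_type="e2-standard-4"),
            # VPC-native (alias IP) networking.
            ip_allocation_policy=container_v1.IPAllocationPolicy(use_ip_aliases=True),
            # Subscribe to a release channel to keep the Kubernetes version current.
            release_channel=container_v1.ReleaseChannel(
                channel=container_v1.ReleaseChannel.Channel.REGULAR
            ),
        )
        # Using a region as the location creates a regional cluster.
        operation = client.create_cluster(
            parent=f"projects/{project_id}/locations/{region}",
            cluster=cluster,
        )
        print(f"Cluster creation started: {operation.name}")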
Configure your GKE clusters After you provision your GKE clusters and before deploying any workload or migrating data, you configure namespaces, RBAC, network policies, service accounts, and other Kubernetes and GKE objects for each GKE cluster. To configure Kubernetes and GKE objects in your GKE clusters, we recommend that you: Ensure that you have the necessary credentials and permissions to access the clusters in both your source environment and your GKE environment. Assess whether the objects in the Kubernetes clusters in your source environment are compatible with GKE, and how the implementations that back these objects differ between the source environment and GKE. Refactor any incompatible object to make it compatible with GKE, or retire it. Create these objects in your GKE clusters. Configure any additional objects that you need in your GKE clusters. Config Sync To help you adopt GitOps best practices to manage the configuration of your GKE clusters as your GKE environment scales, we recommend that you use Config Sync, a GitOps service to deploy configurations from a source of truth. For example, you can store the configuration of your GKE clusters in a Git repository, and use Config Sync to apply that configuration. For more information, see Config Sync architecture. Policy Controller Policy Controller helps you apply and enforce programmable policies to help ensure that your GKE clusters and workloads run in a secure and compliant manner. As your GKE environment scales, you can use Policy Controller to automatically apply policies, policy bundles, and constraints to all your GKE clusters. For example, you can restrict the repositories from which container images can be pulled, or you can require each namespace to have at least one label to help you ensure accurate resource consumption tracking. For more information, see Policy Controller. Refactor your workloads A best practice to design containerized workloads is to avoid dependencies on the container orchestration platform. This might not always be possible in practice due to the requirements and the design of your workloads. For example, your workloads might depend on environment-specific features that are available in your source environment only, such as add-ons, extensions, and integrations. Although you might be able to migrate most workloads as-is to GKE, you might need to spend additional effort to refactor workloads that depend on environment-specific features, in order to minimize these dependencies, eventually switching to alternatives that are available on GKE. To refactor your workloads before migrating them to GKE, you do the following: Review source environment-specific features, such as add-ons, extensions, and integrations. Adopt suitable alternative GKE solutions. Refactor your workloads. Review source environment-specific features If you're using source environment-specific features, and your workloads depend on these features, you need to: Find suitable alternative GKE solutions. Refactor your workloads in order to make use of the alternative GKE solutions. As part of this review, we recommend that you do the following: Consider whether you can deprecate any of these source environment-specific features. Evaluate how critical a source environment-specific feature is for the success of the migration. Adopt suitable alternative GKE solutions After you've reviewed your source environment-specific features, and mapped them to suitable GKE alternative solutions, you adopt these solutions in your GKE environment.
To reduce the complexity of your migration, we recommend that you do the following: Avoid adopting alternative GKE solutions for source environment-specific features that you aim to deprecate. Focus on adopting alternative GKE solutions for the most critical source environment-specific features, and plan dedicated migration projects for the rest. Refactor your workloads While most of your workloads might work as is in GKE, you might need to refactor some of them, especially if they depended on source environment-specific features for which you adopted alternative GKE solutions. This refactoring might involve: Kubernetes object descriptors, such as Deployments, and Services expressed in YAML format. Container image descriptors, such as Dockerfiles and Containerfiles. Workloads source code. To simplify the refactoring effort, we recommend that you focus on applying the least amount of changes that you need to make your workloads suitable for GKE, and critical bug fixes. You can plan other improvements and changes as part of future projects. Refactor deployment and operational processes After you refactor your workloads, you refactor your deployment and operational processes to do the following: Provision and configure resources in your Google Cloud environment instead of provisioning resources in your source environment. Build and configure workloads, and deploy them in your Google Cloud instead of deploying them in your source environment. You gathered information about these processes during the assessment phase earlier in this process. The type of refactoring that you need to consider for these processes depends on how you designed and implemented them. The refactoring also depends on what you want the end state to be for each process. For example, consider the following: You might have implemented these processes in your source environment and you intend to design and implement similar processes in Google Cloud. For example, you can refactor these processes to use Cloud Build, Cloud Deploy, and Infrastructure Manager. You might have implemented these processes in another third-party environment outside your source environment. In this case, you need to refactor these processes to target your Google Cloud environment instead of your source environment. A combination of the previous approaches. Refactoring deployment and operational processes can be complex and can require significant effort. If you try to perform these tasks as part of your workload migration, the workload migration can become more complex, and it can expose you to risks. After you assess your deployment and operational processes, you likely have an understanding of their design and complexity. If you estimate that you require substantial effort to refactor your deployment and operational processes, we recommend that you consider refactoring these processes as part of a separate, dedicated project. For more information about how to design and implement deployment processes on Google Cloud, see: Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments This document focuses on the deployment processes that produce the artifacts to deploy, and deploy them in the target runtime environment. The refactoring strategy highly depends on the complexity of these processes. The following list outlines a possible, general, refactoring strategy: Provision artifact repositories on Google Cloud. 
For example, you can use Artifact Registry to store artifacts and build dependencies. Refactor your build processes to store artifacts both in your source environment and in Artifact Registry. Refactor your deployment processes to deploy your workloads in your target Google Cloud environment. For example, you can start by deploying a small subset of your workloads in Google Cloud, using artifacts stored in Artifact Registry. Then, you gradually increase the number of workloads deployed in Google Cloud, until all the workloads to migrate run on Google Cloud. Refactor your build processes to store artifacts in Artifact Registry only. If necessary, migrate earlier versions of the artifacts to deploy from the repositories in your source environment to Artifact Registry. For example, you can copy container images to Artifact Registry. Decommission the repositories in your source environment when you no longer require them. To facilitate eventual rollbacks due to unanticipated issues during the migration, you can store container images both in your current artifact repositories and in Google Cloud while the migration to Google Cloud is in progress. Finally, as part of the decommissioning of your source environment, you can refactor your container image building processes to store artifacts in Google Cloud only. Although it might not be crucial for the success of a migration, you might need to migrate earlier versions of your artifacts from your source environment to your artifact repositories on Google Cloud. For example, to support rolling back your workloads to arbitrary points in time, you might need to migrate earlier versions of your artifacts to Artifact Registry. For more information, see Migrate images from a third-party registry. If you're using Artifact Registry to store your artifacts, we recommend that you configure controls to help you secure your artifact repositories, such as access control, data exfiltration prevention, vulnerability scanning, and Binary Authorization. For more information, see Control access and protect artifacts. Deploy your workloads When your deployment processes are ready, you deploy your workloads to GKE. For more information, see Overview of deploying workloads. To prepare your workloads for deployment to GKE, we recommend that you analyze your Kubernetes descriptors because some Google Cloud resources that GKE automatically provisions for you are configurable by using Kubernetes labels and annotations, instead of having to manually provision these resources. For example, you can provision an internal load balancer instead of an external one by adding an annotation to a LoadBalancer Service, as illustrated in the sketch at the end of this section. Validate your workloads After you deploy workloads in your GKE environment, but before you expose these workloads to your users, we recommend that you perform extensive validation and testing. This testing can help you verify that your workloads are behaving as expected. For example, you may: Perform integration testing, load testing, compliance testing, reliability testing, and other verification procedures that help you ensure that your workloads are operating within their expected parameters, and according to their specifications. Examine logs, metrics, and error reports in Google Cloud Observability to identify any potential issues, and to spot trends to anticipate problems before they occur. For more information about workload validation, see Testing for reliability.
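To make the LoadBalancer annotation example above concrete, the following sketch uses the Kubernetes Python client to create a Service that requests an internal load balancer; the Service name, selector, and ports are placeholders, and the networking.gke.io/load-balancer-type annotation should be verified against the GKE version that you deploy.

    from kubernetes import client, config

    def create_internal_lb_service(namespace: str = "default") -> None:
        config.load_kube_config()
        service = client.V1Service(
            metadata=client.V1ObjectMeta(
                name="backend-internal",  # hypothetical Service name
                # This annotation asks GKE for an internal load balancer instead
                # of an external one; confirm the annotation for your GKE version.
                annotations={"networking.gke.io/load-balancer-type": "Internal"},
            ),
            spec=client.V1ServiceSpec(
                type="LoadBalancer",
                selector={"app": "backend"},  # hypothetical workload label
                ports=[client.V1ServicePort(port=80, target_port=8080)],
            ),
        )
        client.CoreV1Api().create_namespaced_service(namespace=namespace, body=service)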
Expose your workloads Once you complete the validation testing of the workloads running in your GKE environment, expose your workloads to make them reachable. To expose workloads running in your GKE environment, you can use Kubernetes Services and a service mesh. For more information about exposing workloads running in GKE, see: About Services About service networking About Gateway Shift traffic to your Google Cloud environment After you have verified that the workloads are running in your GKE environment, and after you have exposed them to clients, you shift traffic from your source environment to your GKE environment. To help you avoid a risky, all-at-once cutover, we recommend that you gradually shift traffic from your source environment to your GKE environment. Depending on how you designed your GKE environment, you have several options to implement a load balancing mechanism that gradually shifts traffic from your source environment to your target environment. For example, you might implement a weighted DNS routing policy that resolves a certain percentage of requests to IP addresses belonging to your GKE environment. Or you can implement a load balancing mechanism using virtual IP addresses and network load balancers. After you start gradually shifting traffic to your GKE environment, we recommend that you monitor how your workloads behave as their loads increase. Finally, you perform a cutover, which happens when you shift all the traffic from your source environment to your GKE environment. For more information about load balancing, see Load balancing at the frontend. Decommission the source environment After the workloads in your GKE environment are serving requests correctly, you decommission your source environment. Before you start decommissioning resources in your source environment, we recommend that you do the following: Back up any data to help you restore resources in your source environment. Notify your users before decommissioning the environment. To decommission your source environment, do the following: Decommission the workloads running in the clusters in your source environment. Delete the clusters in your source environment. Delete the resources associated with these clusters, such as security groups, load balancers, and virtual networks. To avoid leaving orphaned resources, the order in which you decommission the resources in your source environment is important. For example, certain providers require that you decommission Kubernetes Services that lead to the creation of load balancers before being able to decommission the virtual networks containing those load balancers. Optimize your Google Cloud environment Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. You repeat this sequence until you've achieved your optimization goals. For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Architecture Framework: Performance optimization. The following sections integrate the considerations in Migrate to Google Cloud: Optimize your environment.
Establish your optimization requirements Optimization requirements help you narrow the scope of the current optimization iteration. For more information about optimization requirements and goals, see Establish your optimization requirements and goals. To establish your optimization requirements for your GKE environment, start by considering the following aspects: Security, privacy, and compliance: help you enhance the security posture of your GKE environment. Reliability: help you improve the availability, scalability, and resilience of your GKE environment. Cost optimization: help you optimize the resource consumption and resulting spending of your GKE environment. Operational efficiency: help you maintain and operate your GKE environment efficiently. Performance optimization: help you optimize the performance of the workloads deployed in your GKE environment. Security, privacy, and compliance Monitor the security posture of your GKE clusters. You can use the security posture dashboard to get opinionated, actionable recommendations to help you improve the security posture of your GKE environment. Harden your GKE environment. Understand the GKE security model, and how to harden your GKE clusters. Protect your software supply chain. For security-critical workloads, Google Cloud provides a modular set of products that implement software supply chain security best practices across the software lifecycle. Reliability Improve the reliability of your clusters. To help you design a GKE cluster that is more resilient to unlikely zonal outages, prefer regional clusters over zonal or multi-zonal ones. Note: For more information about region-specific considerations, see Geography and regions. Workload backup and restore. Configure a workload backup and restore workflow with Backup for GKE. Cost optimization For more information about optimizing the cost of your GKE environment, see: Right-size your GKE workloads at scale. Reducing costs by scaling down GKE clusters during off-peak hours. Identify idle GKE clusters. Operational efficiency To help you avoid issues that affect your production environment, we recommend that you: Design your GKE clusters to be fungible. By considering your clusters as fungible and by automating their provisioning and configuration, you can streamline and generalize the operational processes to maintain them and also simplify future migrations and GKE cluster upgrades. For example, if you need to upgrade a fungible GKE cluster to a new GKE version, you can automatically provision and configure a new, upgraded cluster, automatically deploy workloads in the new cluster, and decommission the old, outdated GKE cluster. Monitor metrics of interest. Ensure that all the metrics of interest about your workloads and clusters are properly collected. Also, verify that all the relevant alerts that use these metrics as inputs are in place and working. For more information about configuring monitoring, logging, and profiling in your GKE environment, see: Observability for GKE GKE Cluster notifications Performance optimization Set up cluster autoscaling and node auto-provisioning. Automatically resize your GKE cluster according to demand by using cluster autoscaling and node auto-provisioning. Automatically scale workloads. GKE supports several scaling mechanisms, such as: Automatically scale workloads based on metrics. Automatically scale workloads by changing the number of Pods in your Kubernetes workloads by configuring Horizontal Pod autoscaling.
Automatically scale workloads by adjusting resource requests and limits by configuring Vertical Pod autoscaling. For more information, see About GKE scalability. What's next Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Migrate_from_Mainframe.txt b/Migrate_from_Mainframe.txt new file mode 100644 index 0000000000000000000000000000000000000000..7085e3f664ae355c86ac6d5325b960765cc51da7 --- /dev/null +++ b/Migrate_from_Mainframe.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/mainframe-modernization +Date Scraped: 2025-02-23T11:58:16.492Z + +Content: +Google Cloud Mainframe Modernization with generative AIAccelerate digital transformation with Mainframe Modernization, enhanced by Gemini modelsModernize your mainframe applications with Google Cloud and unlock innovation, agility, and cost savings. Leverage our generative AI solutions, industry expertise, and partner ecosystem to accelerate your digital transformation journey. Learn moreProduct highlights Mainframe assessment enhanced with Gen AICode transformation enhanced with Gen AITest, certify, de-risk application modernizationAugmenting mainframe applications with Google Cloud capabilitiesRead our Mainframe Modernization solutions one-pagerOverviewSolutions for mainframe application modernization, testing, and augmentationComprehensive solutions made by Google Cloud for most in-demand Mainframe Modernization patterns. These include products for mainframe assessment, application rewrite with generative AI, automatic application refactor, application replatforming, mainframe augmentation, de-risking migrations and testing modernized applications.AI-enhanced solutions with Google Gemini modelsLeverage Google's Gemini models to transform and reimagine existing mainframe applications for the cloud. With the power of generative AI infused with mainframe-specific context, you can rewrite your mainframe applications codebase faster and at a lower cost.Deploy on secure, reliable, scalable, and cost-effective cloud infrastructure Deploy and operate your modernized mainframe applications on Google Cloud's secure, highly available, scalable, and cost-efficient infrastructure.View moreHow It WorksAccelerate and de-risk the modernization of mainframe applications and data with comprehensive solutions for most in-demand modernization patterns including assessment, refactor, rewrite, replatform, rehost, and augmentation of mainframes with Google Cloud.View documentationCommon UsesAssess your mainframe applications for modernizationMainframe Assessment Tool (MAT), enhanced with Gemini and generative AIAssess and analyze your mainframe estate, applications, and data to determine the best path forward with modernization. Google Cloud Mainframe Assessment Tool (MAT), enhanced with Gemini models, provides comprehensive code analysis, code explanations, application logic, and specification summaries, automated application documentation creation, identification of application dependencies, and generation of test cases. MAT leverages Gemini models that are customized with mainframe-specific context.4:04How-tosMainframe Assessment Tool (MAT), enhanced with Gemini and generative AIAssess and analyze your mainframe estate, applications, and data to determine the best path forward with modernization. 
Google Cloud Mainframe Assessment Tool (MAT), enhanced with Gemini models, provides comprehensive code analysis, code explanations, application logic, and specification summaries, automated application documentation creation, identification of application dependencies, and generation of test cases. MAT leverages Gemini models that are customized with mainframe-specific context.Rewrite and modernize your mainframe applicationsMainframe Code Rewrite, enhanced by Gemini and generative AIReimagine and modernize your mainframe applications for the cloud era. Google Cloud's Mainframe Modernization Code Rewrite integrates with Gemini Code Assist and leverages advanced generative AI to analyze and reimagine your legacy mainframe code into modern languages such as Java within an IDE that’s easy and natural for developers to use. Benefit from fully modernized applications and application architectures with increased agility, scalability, and security, while reducing maintenance costs and complexity.Refactor and modernize your mainframe applicationsMainframe Refactor for BatchModernize mainframe applications to Google Cloud using automated code refactoring, and deploy them on Google Kubernetes Engine (GKE), or Cloud Run. Google Cloud Mainframe Refactor allows for the refactoring of mainframe applications to Java, ensuring that the original functionality of the mainframe application is retained as it is modernized, significantly increasing agility and eliminating all dependencies on legacy technologies and skills.Replatform your mainframe applications "as-is"Mainframe Application ReplatformingReplatform your mainframe applications to Google Cloud with automated solutions from our certified technology partners. With Replatforming, your mainframe applications can run "as-is" in Google Cloud on Google Compute Engine (GCE), preserving existing application functionality, substantially reducing costs migrating away from the mainframe, and boosting developer productivity with access to modern IDEs.Read more about best practices in our Modernizing Mainframes with Rocket Enterprise Server and Google Cloud whitepaper.How-tosMainframe Application ReplatformingReplatform your mainframe applications to Google Cloud with automated solutions from our certified technology partners.
With Replatforming, your mainframe applications can run "as-is" in Google Cloud on Google Compute Engine (GCE), preserving existing application functionality, substantially reducing costs migrating away from the mainframe, and boosting developer productivity with access to modern IDEs.Read more about best practices in our Modernizing Mainframes with Rocket Enterprise Server and Google Cloud whitepaper.De-risk and test your modernized mainframe applications Dual RunGoogle Cloud Dual Run enables testing, certifying, and de-risking of mainframe applications that are being modernized with Google Cloud. With Dual Run you can verify correctness, completeness, and performance of the modernized application during migration and before going live.Dual Run replays live events from the production mainframe system onto the modernized cloud application and compares the outputs between the two systems, ensuring the correctness of the updated business logic within the modernized application. 3:59How-tosDual RunGoogle Cloud Dual Run enables testing, certifying, and de-risking of mainframe applications that are being modernized with Google Cloud. With Dual Run you can verify correctness, completeness, and performance of the modernized application during migration and before going live.Dual Run replays live events from the production mainframe system onto the modernized cloud application and compares the outputs between the two systems, ensuring the correctness of the updated business logic within the modernized application. 3:59Augment your mainframe with cloud-native capabilitiesMainframe ConnectorUse Google Cloud Mainframe Connector to copy mainframe data, from databases, VSAM, and data files, into Cloud Storage and BigQuery. Unlock previously siloed mainframe data while retaining your mainframe applications. Using Google Mainframe Connector, you can now leverage mainframe data for AI/ML and analytics solutions in Google Cloud, while lowering MIPS consumption used for analytics on the mainframe.See Mainframe Connector in action.2:03How-tosMainframe ConnectorUse Google Cloud Mainframe Connector to copy mainframe data, from databases, VSAM, and data files, into Cloud Storage and BigQuery. Unlock previously siloed mainframe data while retaining your mainframe applications. Using Google Mainframe Connector, you can now leverage mainframe data for AI/ML and analytics solutions in Google Cloud, while lowering MIPS consumption used for analytics on the mainframe.See Mainframe Connector in action.2:03Start your Mainframe Modernization journeyKickstart your Mainframe Modernization and unlock modernization insights. Start with a detailed assessment enhanced with generative AI capabilities. Identify opportunities to modernize and transform your legacy applications, reduce costs, innovate with new business functions, and accelerate agility.Contact Google Cloud expertsInterested in learning more about Google Cloud solutions for Mainframe Modernization? Contact usLearn more about Google Cloud’s Mainframe Modernization solutions for most in-demand modernization patterns, including code assessment, application code rewrite enhanced by generative AI, automatic code refactoring, testing, and augmentation of mainframe applications.Read our solutions one-pager Harness the potential of Google Gemini models and generative AI to gain insights about your current mainframe estate and applications, including automatic creation of code logic summaries, code documentation, test cases, application dependencies, and more. 
Make informed decisions regarding your Mainframe Modernization using Google Mainframe Assessment Tool (MAT). Learn about Google Mainframe Assessment ToolUnlock the full potential of your most critical business data and extract maximum value from your mainframe applications by augmenting them with Google Cloud capabilities and services. Seamlessly copy data from mainframe databases and datasets to Google Cloud and BigQuery using the Google Cloud Mainframe Connector, with automatic conversion of codepages, data types, and encodings between mainframe and non-mainframe formats.Read the blogBusiness CaseLearn more about how customers accelerate their digital transformation journey through the modernization of mainframe applications using Google Cloud solutions"BoaVista migrates from mainframe to Google cloud and reduces infrastructure costs by 30%."Learn moreRelated Content"Arek Oy reduces mainframe use by 50%"UNITEC modernizes SPARC servers by migrating Solaris apps to Google Cloud in 24 daysCustomer success storiesArek Oy leverages Anthos and other Google Cloud solutions with the help of partners Tietoevry, Accenture, and Heirloom to boost capacity for its core applications and improve pension management for customers.Read moreBoaVista is able to leap forward with the company’s digital transformation journey, leveraging data analytics, reducing the time to ingest large volumes of data from 24+ hours to a few minutes and with a 9x increase in analytical models created.Read moreUNITEC benefits from virtually infinite scalability on migrating eight servers, 22 zones, and 25 TB of data to Google Cloud in 24 days.Read morePartners & IntegrationGoogle Mainframe Modernization certified partnersDelivery and generative AI partners Technology partnersCertified technology partners specialize in products and solutions for accelerating Mainframe Modernization to Google Cloud. Certified delivery and gen AI partners specialize in end-to-end Mainframe Modernization projects, scoping, implementation, and support.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Migrate_from_PaaS:_Cloud_Foundry,_Openshift.txt b/Migrate_from_PaaS:_Cloud_Foundry,_Openshift.txt new file mode 100644 index 0000000000000000000000000000000000000000..d58e18a88887c136529afd1cfc728a0a8eaa2ce4 --- /dev/null +++ b/Migrate_from_PaaS:_Cloud_Foundry,_Openshift.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/migrate-from-paas +Date Scraped: 2025-02-23T11:58:14.660Z + +Content: +Migrate from PaaS—OpenShift or Cloud FoundryCustomers adopted PaaS solutions (Cloud Foundry and OpenShift) to standardize their application platforms. Now they have become more of a constraint than a benefit. Our solution helps customers simplify their migration.Contact usBenefitsStandardize on Kubernetes and reduce licensing and operational costsComprehensive solutionsGoogle Cloud’s solutions provide an end-to-end approach including assessing and migrating customers’ PaaS landscape as well as operating them post-migration. 
Minimize riskEliminates cumbersome PaaS platform upgrades that result in customers staying on outdated platforms longer and being vulnerable to security issues.Cost reductionOrganizations can eliminate their expensive PaaS subscription costs and also reduce their operational costs by running their PaaS and Kubernetes applications on a unified platform.DocumentationMigrating from Cloud Foundry or OpenShiftLearn more about our solutions for migrating from OpenShift or Cloud Foundry.Best PracticeMigrate from OpenShiftOur solution allows you to migrate from OpenShift to GKE/Anthos using our tools and standardized approaches.Learn moreBest PracticeMigrate from Cloud FoundryLearn how you can migrate from Cloud Foundry to Cloud Run/GKE to minimize your TCO.Learn moreNot seeing what you’re looking for?View documentationTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Migrate_from_manual_deployments_to_automated,_containerized_deployments.txt b/Migrate_from_manual_deployments_to_automated,_containerized_deployments.txt new file mode 100644 index 0000000000000000000000000000000000000000..846c6fd43cc800d8342f88c87acfd8799f7ff7ed --- /dev/null +++ b/Migrate_from_manual_deployments_to_automated,_containerized_deployments.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-google-cloud-automated-containerized-deployments +Date Scraped: 2025-02-23T11:51:42.637Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-08 UTC This document helps you plan and design a migration path from manual deployments to automated, containerized deployments in Google Cloud using cloud-native tools and Google Cloud managed services. This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments (this document) Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs This document is useful if you're planning to modernize your deployment processes, if you're migrating from manual and legacy deployment processes to automated and containerized deployments, or if you're evaluating the opportunity to migrate and want to explore what it might look like. Before starting this migration, you should evaluate the scope of the migration and the status of your current deployment processes, and set your expectations and goals. You choose the starting point according to how you're currently deploying your workloads: You're deploying your workloads manually. You're deploying your workloads with configuration management (CM) tools. It's hard to move from manual deployments directly to fully automated and containerized deployments. 
Instead, we recommend the following migration steps: Deploy by using container orchestration tools. Deploy automatically. This migration path is an ideal one, but you can stop earlier in the migration process if the benefits of moving to the next step outweigh the costs for your particular case. For example, if you don't plan to automatically deploy your workloads, you can stop after you deploy by using container orchestration tools. You can revisit this document in the future, when you're ready to continue on the journey. When you move from one step of the migration to another, there is a transition phase where you might be using different deployment processes at the same time. In fact, you don't need to choose only one deployment option for all of your workloads. For example, you might have a hybrid environment where you deploy certain workloads using CM tools, while deploying other workloads with container orchestration tools. For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started. The following diagram illustrates the path of your migration journey. You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework: Assess and discover your workloads and data. Plan and build a foundation on Google Cloud. Migrate your workloads and data to Google Cloud. Optimize your Google Cloud environment. For more information about the phases of this framework, see Migrate to Google Cloud: Get started. To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan. Migrate to container orchestration tools One of your first steps to move away from manual deployments is to deploy your workloads with container orchestration tools. In this step, you design and implement a deployment process to handle containerized workloads by using container orchestration tools, such as Kubernetes. If your workloads aren't already containerized, you're going to spend a significant effort containerizing them. Not all workloads are suitable for containerization. If you're deploying a workload that isn't cloud-ready or ready for containerization, it might not be worth containerizing the workloads. Some workloads can't even support containerization for technical or licensing reasons. Assess and discover your workloads To scope your migration, you first need an inventory of the artifacts that you're producing and deploying along with their dependencies on other systems and artifacts. To build this inventory, you need to use the expertise of the teams that designed and implemented your current artifact production and deployment processes. The Migrate to Google Cloud: Assess and discover your workloads document discusses how to assess your environment during a migration and how to build an inventory of apps. For each artifact, you need to evaluate its test coverage. You should have proper test coverage for all your artifacts before moving on to the next step. If you have to manually test and validate each artifact, you don't benefit from the automation. Adopt a methodology that highlights the importance of testing, like test-driven development. 
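To make this assessment concrete, the following minimal Python sketch shows one way to record an artifact inventory, including each artifact's dependencies, test coverage, and containerization blockers, so that you can rank candidates for the next step. The field names, the coverage threshold, and the sample entries are illustrative assumptions, not a schema prescribed by this guide.

# Minimal, hypothetical artifact inventory for the assessment phase.
# Field names, the coverage threshold, and the sample entries are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Artifact:
    name: str
    runtime: str                                   # for example, "java11" or "python3.11"
    dependencies: List[str] = field(default_factory=list)
    test_coverage: float = 0.0                     # fraction of code covered by automated tests
    containerization_blockers: List[str] = field(default_factory=list)

    def ready_for_containerization(self, min_coverage: float = 0.6) -> bool:
        # A candidate has no known blockers and enough automated test
        # coverage to validate its behavior after containerization.
        return not self.containerization_blockers and self.test_coverage >= min_coverage


inventory = [
    Artifact("billing-api", "java11", dependencies=["postgres"], test_coverage=0.8),
    Artifact("legacy-batch", "cobol", dependencies=["shared-fs"], test_coverage=0.1,
             containerization_blockers=["requires sole tenancy"]),
]

for artifact in inventory:
    print(f"{artifact.name}: candidate={artifact.ready_for_containerization()}")

An inventory like this also makes it easier to see which workloads are better left on their current deployment process, as discussed above.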
When you evaluate your processes, consider how many different versions of your artifacts you might have in production. For example, if the latest version of an artifact is several versions ahead of instances that you must support, you have to design a model that supports both versions. Also consider the branching strategy that you use to manage your codebase. A branching strategy is only part of a collaboration model that you need to evaluate, and you need to assess the broader collaboration processes inside and outside your teams. For example, if you adopt a flexible branching strategy but don't adapt it to the communication process, the efficiency of those teams might be reduced. In this assessment phase, you also determine how you can make the artifacts you're producing more efficient and suitable for containerization than your current deployment processes. One way to improve efficiency is to assess the following: Common parts: Assess what your artifacts have in common. For example, if you have common libraries and other runtime dependencies, consider consolidating them in one runtime environment. Runtime environment requirements: Assess whether you can streamline the runtime environments to reduce their variance. For example, if you're using different runtime environments to run all your workloads, consider starting from a common base to reduce the maintenance burden. Unnecessary components: Assess whether your artifacts contain unnecessary parts. For example, you might have utility tools, such as debugging and troubleshooting tools, that are not strictly needed. Configuration and secret injection: Assess how you're configuring your artifacts according to the requirements of your runtime environment. For example, your current configuration injection system might not support a containerized environment. Security requirements: Assess whether your container security model meets your requirements. For example, the security model of a containerized environment might clash with the requirement of a workload to have super user privileges, direct access to system resources, or sole tenancy. Deployment logic requirements: Assess whether you need to implement advanced deployment processes. For example, if you need to implement a canary deployment process, you could determine whether the container orchestration tool supports that. Plan and build a foundation In the plan and build phase, you provision and configure the infrastructure to do the following: Support your workloads in your Google Cloud environment. Connect your source environment and your Google Cloud environment to complete the migration. The plan and build phase is composed of the following tasks: Build a resource hierarchy. Configure Google Cloud's Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. For more information about each of these tasks, see the Migrate to Google Cloud: Plan and build your foundation. To achieve the necessary flexibility to manage your Google Cloud resources, we recommend that you design a Google Cloud resource hierarchy that supports multiple environments such as for development, testing, and production workloads. When you're establishing user and service identities, for the best isolation you need at least a service account for each deployment process step. 
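As a rough sketch of that per-step isolation, the following Python fragment shells out to gcloud to create one dedicated service account per deployment step. The step names, project ID, and naming convention are assumptions for illustration only, and each account would still need to be granted only the roles that its step requires.

# Rough sketch: one dedicated service account per deployment process step.
# Step names, the project ID, and the naming convention are illustrative assumptions.
import subprocess

PROJECT = "my-project"  # placeholder project ID
STEPS = ["artifact-build", "artifact-store", "deploy-dev", "deploy-prod"]

for step in STEPS:
    subprocess.run(
        ["gcloud", "iam", "service-accounts", "create", f"sa-{step}",
         "--project", PROJECT,
         "--display-name", f"Deployment step: {step}"],
        check=True,
    )
    # Grant each account only the narrow set of roles that its step needs,
    # for example Artifact Registry write access for the artifact-store step.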
For example, if your process executes steps to produce the artifact and to manage the storage of that artifact in a repository, you need at least two service accounts. If you want to provision and configure development and testing environments for your deployment processes, you might need to create more service accounts. If you have a distinct set of service accounts per environment, you make the environments independent from each other. Although this configuration increases the complexity of your infrastructure and puts more burden on your operations team, it gives you the flexibility to independently test and validate each change to the deployment processes. You also need to provision and configure the services and infrastructure to support your containerized workloads: Set up a registry to store your container images, like Artifact Registry. To isolate this registry and the related maintenance tasks, set it up in a dedicated Google Cloud project. Provision and configure the Kubernetes clusters you need to support your workloads. Depending on your current environment and your goals, you can use services like Google Kubernetes Engine (GKE). Provision and configure persistent storage for your stateful workloads. For more information, see Google Kubernetes Engine storage overview. By using container orchestration tools, you don't have to worry about provisioning your infrastructure when you deploy new workloads. For example, you can use Autopilot to automatically manage your GKE cluster configuration. Deploy your artifacts with container orchestration tools Based on the requirements you gathered in the assessment phase and the foundation phase of this step, you do the following: Containerize your workloads. Implement deployment processes to handle your containerized workloads. Containerizing your workloads is a nontrivial task. What follows is a generalized list of activities you need to adapt and extend to containerize your workloads. Your goal is to cover your own needs, such as networking and traffic management, persistent storage, secret and configuration injection, and fault tolerance requirements. This document covers two activities: building a set of container images to use as a base, and building a set of container images for your workloads. First, you automate the artifact production, so you don't have to manually produce a new image for each new deployment. The artifact building process should be automatically triggered each time the source code is modified so that you have immediate feedback about each change. You execute the following steps to produce each image: Build the image. Run the test suite. Store the image in a registry. For example, you can use Cloud Build to build your artifacts, run the test suites against them, and, if the tests are successful, store the results in Artifact Registry, as illustrated in the sketch below. You also need to establish rules and conventions for identifying your artifacts. When producing your images, label each one to make each execution of your processes repeatable. For example, a popular convention is to identify releases by using semantic versioning, where you tag your container images when producing a release. When you produce images that still need work before release, you can use an identifier that ties them to the point in the codebase from which your process produced them.
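The following minimal sketch, referenced above, strings those three steps together with a commit-based identifier. It assumes a Dockerfile at the repository root, a test suite runnable with pytest, an Artifact Registry Docker repository that the authenticated gcloud account can push to, and placeholder project, region, and repository names; it runs the tests before the combined build-and-push step, and it is an illustration of the loop, not the exact pipeline this document prescribes.

# Minimal sketch of the build, test, and store loop, with a commit-based tag.
# Assumes: a Dockerfile in the repository root, tests runnable with pytest, and
# gcloud authenticated against a project that can push to Artifact Registry.
# The project, region, repository, and image names are placeholders.
import subprocess

PROJECT = "my-project"
REGION = "us-central1"
REPOSITORY = "containers"
IMAGE = "billing-api"


def run(cmd):
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()


def build_test_and_store() -> str:
    # Identify the artifact by the commit it was built from so that every
    # execution of the process is repeatable.
    commit = run(["git", "rev-parse", "--short", "HEAD"])
    tag = f"{REGION}-docker.pkg.dev/{PROJECT}/{REPOSITORY}/{IMAGE}:{commit}"

    # Run the test suite first; abort if it fails.
    subprocess.run(["python", "-m", "pytest"], check=True)

    # Build the image with Cloud Build and store it in the registry.
    # With --tag, Cloud Build builds from the Dockerfile and pushes the result.
    subprocess.run(["gcloud", "builds", "submit", "--tag", tag, "."], check=True)
    return tag


if __name__ == "__main__":
    print("Stored", build_test_and_store())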
For example, if you're using Git repositories, you can use the commit hash as an identifier for the corresponding container image that you produced when you pushed a commit to the main branch of your repository. During the assessment phase of this step, you gathered information about your artifacts, their common parts, and their runtime requirements. With this information, you can design and build a set of base container images and another set of images for your workloads. You use the base images as a starting point to build the images for your workloads. The set of base images should be tightly controlled and supported to avoid proliferating unsupported runtime environments. When producing container images from base images, remember to extend your test suites to cover the images, not only the workloads inside each image. You can use tools like InSpec to run compliance test suites against your runtime environments. When you finish containerizing your workloads and implementing processes to automatically produce such container images, you implement the deployment processes to use container orchestration tools. You use the information about the deployment logic requirements that you gathered in the assessment phase to design rich deployment processes. By using container orchestration tools, you can focus on composing the deployment logic using the provided mechanisms, instead of having to manually implement them. For example, you can use Cloud Deploy to implement your deployment processes. When designing and implementing your deployment processes, consider how to inject configuration files and secrets into your workloads (a minimal sketch of environment-based injection follows the optimization discussion below), and how to manage data for stateful workloads. Configuration file and secret injection is instrumental to producing immutable artifacts. By deploying immutable artifacts, you can promote the same artifact through each of your runtime environments. For example, you can deploy your artifacts in your development environment. Then, after testing and validating them, you move them to your quality assurance environment. Finally, you move them to the production environment. You lower the chances of issues in your production environments because the same artifact went through multiple testing and validation activities. If your workloads are stateful, we suggest you provision and configure the necessary persistent storage for your data. On Google Cloud, you have different options: persistent disks managed with GKE; fully managed database services like Cloud SQL, Firestore, and Spanner; file storage services like Filestore; and object storage services like Cloud Storage. Optimize your environment After implementing your deployment process, you can use container orchestration tools to start optimizing the deployment processes. For more information, see Migrate to Google Cloud: Optimize your environment. The requirements of this optimization iteration are the following: Extend your monitoring system as needed. Extend the test coverage. Increase the security of your environment. You extend your monitoring system to cover your new artifact production, your deployment processes, and all of your new runtime environments. If you want to effectively monitor, automate, and codify your processes as much as possible, we recommend that you increase the coverage of your tests. In the assessment phase, you ensured that you had at least minimum end-to-end test coverage. During the optimization phase, you can expand your test suites to cover more use cases.
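The sketch referenced earlier in this section illustrates environment-based configuration and secret injection: the same immutable image reads its environment-specific values at startup instead of baking them in, so it can be promoted unchanged from development to quality assurance to production. The variable names are hypothetical; in Kubernetes, such values are typically populated from ConfigMaps and Secrets exposed as environment variables.

# Minimal sketch: an immutable image reads environment-specific configuration
# at startup instead of baking it into the artifact. The variable names are
# hypothetical; in Kubernetes they would typically come from ConfigMaps and Secrets.
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    environment: str
    database_url: str
    api_key: str


def load_settings() -> Settings:
    try:
        return Settings(
            environment=os.environ["APP_ENVIRONMENT"],  # for example, "dev", "qa", or "prod"
            database_url=os.environ["DATABASE_URL"],    # injected from a ConfigMap
            api_key=os.environ["API_KEY"],              # injected from a Secret
        )
    except KeyError as missing:
        raise RuntimeError(f"Missing required configuration: {missing}") from missing


if __name__ == "__main__":
    print(f"Starting in {load_settings().environment}")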
Finally, if you want to increase the security of your environments, you can configure binary authorization to allow only a set of signed images to be deployed in your clusters. You can also enable Artifact Analysis to scan container images stored in Artifact Registry for vulnerabilities. Migrate to deployment automation After migrating to container orchestration tools, you can move to full deployment automation, and you can extend the artifact production and deployment processes to automatically deploy your workloads. Assess and discover your workloads Building on the previous evaluation, you can now focus on the requirements of your deployment processes: Manual approval steps: Assess whether you need to support any manual steps in your deployment processes. Deployment-per-time units: Assess how many deployments-per-time units you need to support. Factors that cause a new deployment: Assess which external systems interact with your deployment processes. If you need to support manual deployment steps, it doesn't mean that your process cannot be automated. In this case, you automate each step of the process, and place the manual approval gates where appropriate. Supporting multiple deployments per day or per hour is more complex than supporting a few deployments per month or per year. However, if you don't deploy often, your agility and your ability to react to issues and to ship new features in your workloads might be reduced. For this reason, before designing and implementing a fully automated deployment process, it's a good idea to set your expectations and goals. Also evaluate which factors trigger a new deployment in your runtime environments. For example, you might deploy each new release in your development environment, but deploy the release in your quality assurance environment only if it meets certain quality criteria. Plan and build a foundation To extend the foundation that you built in the previous step, you provision and configure services to support your automated deployment processes. For each of your runtime environments, set up the necessary infrastructure to support your deployment processes. For example, if you provision and configure your deployment processes in your development, quality assurance, pre-production, and production environments, you have the freedom and flexibility to test changes to your processes. However, if you use a single infrastructure to deploy your runtime environments, your environments are simpler to manage, but less flexible when you need to change your processes. When provisioning the service accounts and roles, consider isolating your environments and your workloads from each other by creating dedicated service accounts that don't share responsibilities. For example, don't reuse the same service accounts for your different runtime environments. Deploy your artifacts with fully automated processes In this phase, you configure your deployment processes to deploy your artifacts with no manual interventions, other than approval steps. You can use tools like Cloud Deploy to implement your automated deployment processes, according to the requirements you gathered in the assessment phase of this migration step. For any given artifact, each deployment process should execute the following tasks: Deploy the artifact in the target runtime environment. Inject the configuration files and secrets in the deployed artifact. Run the compliance test suite against the newly deployed artifact. Promote the artifact to the production environment. 
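The following outline sketches those four tasks as plain Python steps around kubectl and pytest, with a manual approval gate before promotion. The deployment name, container name, namespaces, image path, and directory layout are assumptions for illustration; in practice a managed tool such as Cloud Deploy would model the promotion between environments, and the sketch applies each environment's configuration before the rollout so that new Pods pick it up.

# Illustrative outline of the four deployment tasks listed above.
# The Deployment name, container name, namespaces, image path, and paths are
# placeholders; a managed tool such as Cloud Deploy would normally model promotion.
import os
import subprocess

DEPLOYMENT = "billing-api"  # placeholder Kubernetes Deployment name
CONTAINER = "app"           # placeholder container name inside that Deployment


def inject_configuration(namespace: str) -> None:
    # Apply the environment-specific ConfigMaps and Secrets before the rollout.
    subprocess.run(["kubectl", "apply", "-f", f"config/{namespace}/", "-n", namespace],
                   check=True)


def deploy(image: str, namespace: str) -> None:
    # Roll the Deployment to the new image and wait for it to become ready.
    subprocess.run(["kubectl", "set", "image", f"deployment/{DEPLOYMENT}",
                    f"{CONTAINER}={image}", "-n", namespace], check=True)
    subprocess.run(["kubectl", "rollout", "status", f"deployment/{DEPLOYMENT}",
                    "-n", namespace], check=True)


def run_compliance_tests(namespace: str) -> None:
    # Run the compliance test suite against the newly deployed artifact.
    subprocess.run(["python", "-m", "pytest", "compliance_tests/", "-q"],
                   check=True, env={**os.environ, "TARGET_NAMESPACE": namespace})


def promote(image: str) -> None:
    # Manual approval gate before the production environment.
    if input("Promote to production? [y/N] ").strip().lower() == "y":
        inject_configuration("production")
        deploy(image, "production")
        run_compliance_tests("production")


if __name__ == "__main__":
    image = "us-central1-docker.pkg.dev/my-project/containers/billing-api:abc1234"
    for namespace in ("development", "qa"):
        inject_configuration(namespace)
        deploy(image, namespace)
        run_compliance_tests(namespace)
    promote(image)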
Make sure that your deployment processes provide interfaces to trigger new deployments according to your requirements. Code review is a necessary step when implementing automated deployment processes, because of the short feedback loop that's part of these processes by design. For example, if you deploy changes to your production environment without any review, you impact the stability and reliability of your production environment. An unreviewed, malformed, or malicious change might cause a service outage. Optimize your environment After automating your deployment processes, you can run another optimization iteration. The requirements of this iteration are the following: Extend your monitoring system to cover the infrastructure supporting your automated deployment processes. Implement more advanced deployment patterns. Implement a break glass process. An effective monitoring system lets you plan further optimizations for your environment. When you measure the behavior of your environment, you can find any bottlenecks that are hindering your performance or other issues, like unauthorized or accidental accesses and exploits. For example, you configure your environment so that you receive alerts when the consumption of certain resources reaches a threshold. When you're able to efficiently orchestrate containers, you can implement advanced deployment patterns depending on your needs. For example, you can implement blue/green deployments to increase the reliability of your environment and reduce the impact of any issue for your users. What's next Optimize your environment. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Migrate_to_Containers(1).txt b/Migrate_to_Containers(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..60e1ed65051100f4fbed14a5fd226819bd0441c6 --- /dev/null +++ b/Migrate_to_Containers(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/cloud-migration/containers +Date Scraped: 2025-02-23T12:04:52.061Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Migrate to ContainersIntelligently extract, migrate, and modernize applications to run natively on containersDocumentationFree migration cost assessmentMore informationExplore documentationFeaturesCraft your ideal migration journeyMost digital transformations will be a mix of strategies. For the workloads that will benefit from containers, Migrate to Containers delivers a fast, smooth path to modernization. For other workloads that are better suited as a VM, simply move them as is with Migrate to Virtual Machines and leverage VPC network integration with GKE. Don't settle for being locked into existing infrastructure or one migration path. With Google, run your workloads how you want, where you want.Upgrade to containers with easeSome workloads are simply written off by IT as being “unable to upgrade.” But Migrate to Containers strips away layers of manual effort, making migration and modernization a possibility for those workloads, even with small IT teams. Automate the extraction of existing applications from servers and VMs so they can run natively in containers, without having to rewrite or re-architect apps. 
This helps you eliminate complexity and knowledge gaps that have previously held businesses back from being able to upgrade to containers.Capitalize on the benefits of modernization fasterAccelerating the migration and adoption of the modern platform enables the business to operate more efficiently with unified policy, management, and skill sets across existing and newly developed apps. It also enables you to unlock additional funding for more modernization and new app development by being able to reallocate budget that was previously required to keep legacy environments up.Accelerate adoption of day-two operationsFor day-two operations, save on labor and costs associated with maintaining, patching, and updating VMs and physical servers by switching to modern CI/CD pipelines, image-based management, and desired-state configuration. Easily modernize IT landscapes by accelerating the adoption of modern services such as Service Mesh, Config Management, role based access control (RBAC), Cloud Logging, and more. Thereby unifying your policy enforcement and management practices across both migrated and newly developed applications.View all featuresHow It WorksMigrate to Containers makes it fast and easy to modernize traditional applications away from virtual machines and into native containers. Our unique automated approach extracts the critical application elements from the VM so you can easily insert those elements into containers in Google Kubernetes Engine, Cloud Run, or Anthos clusters without the VM layers (like Guest OS) that become unnecessary with containers.View documentationCommon UsesGetting familiar with Migrate to ContainersBenefits of migrating to containersMigrate to Containers is a tool to containerize existing VM-based applications to run on Google Kubernetes Engine (GKE), GKE Autopilot clusters, GKE Enterprise, or Cloud Run. By taking advantage of the GKE and GKE Enterprise ecosystems, Migrate to Containers provides a fast and simple way to move to modernized orchestration and application management. Modernization and management can be done without requiring access to source code, rewriting, or rearchitecting applications.Learn moreLearning resourcesBenefits of migrating to containersMigrate to Containers is a tool to containerize existing VM-based applications to run on Google Kubernetes Engine (GKE), GKE Autopilot clusters, GKE Enterprise, or Cloud Run. By taking advantage of the GKE and GKE Enterprise ecosystems, Migrate to Containers provides a fast and simple way to move to modernized orchestration and application management. Modernization and management can be done without requiring access to source code, rewriting, or rearchitecting applications.Learn moreGetting your migration startedMigrate a Linux VM using Migrate to Containers CLIIn this quickstart, you create a Compute Engine virtual machine (VM) instance, then use the Migrate to Containers CLI to migrate the VM to Google Kubernetes Engine (GKE).Learn moreTutorials, quickstarts, & labsMigrate a Linux VM using Migrate to Containers CLIIn this quickstart, you create a Compute Engine virtual machine (VM) instance, then use the Migrate to Containers CLI to migrate the VM to Google Kubernetes Engine (GKE).Learn morePricingMigrate to ContainersPricing tableServicePriceMigrate to ContainersMigrate to Containers is offered at no charge for migrating workloads to Google Cloud. 
Customers will still pay for all other Google Cloud services they consume (compute, storage, networking, etc.), but by itself, the use of this tool (Migrate to Containers) does not incur additional cost. Any customer can use Migrate to Containers (with or without an Anthos subscription).Migrate to ContainersPricing tableMigrate to ContainersPriceMigrate to Containers is offered at no charge for migrating workloads to Google Cloud. Customers will still pay for all other Google Cloud services they consume (compute, storage, networking, etc.), but by itself, the use of this tool (Migrate to Containers) does not incur additional cost. Any customer can use Migrate to Containers (with or without an Anthos subscription).Google Cloud pricing calculatorAdd and configure products to get a cost estimate to share with your team.Use calculatorContact salesTalk to a Google Cloud migration specialist.Reach outLearn more about migration with Google CloudRapid Migration Program (RaMP)Learn moreData center migrationLearn moreMigrating your applicationsLearn moreGoogle Cloud VMware EngineLearn moreGoogle Cloud's migration product portfolioLearn moreGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Migrate_to_Containers(2).txt b/Migrate_to_Containers(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..0a34ccdc3ffe8f521d3aa5d2ad1bb368d1607ce1 --- /dev/null +++ b/Migrate_to_Containers(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/cloud-migration/containers +Date Scraped: 2025-02-23T12:06:50.197Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Migrate to ContainersIntelligently extract, migrate, and modernize applications to run natively on containersDocumentationFree migration cost assessmentMore informationExplore documentationFeaturesCraft your ideal migration journeyMost digital transformations will be a mix of strategies. For the workloads that will benefit from containers, Migrate to Containers delivers a fast, smooth path to modernization. For other workloads that are better suited as a VM, simply move them as is with Migrate to Virtual Machines and leverage VPC network integration with GKE. Don't settle for being locked into existing infrastructure or one migration path. With Google, run your workloads how you want, where you want.Upgrade to containers with easeSome workloads are simply written off by IT as being “unable to upgrade.” But Migrate to Containers strips away layers of manual effort, making migration and modernization a possibility for those workloads, even with small IT teams. Automate the extraction of existing applications from servers and VMs so they can run natively in containers, without having to rewrite or re-architect apps. This helps you eliminate complexity and knowledge gaps that have previously held businesses back from being able to upgrade to containers.Capitalize on the benefits of modernization fasterAccelerating the migration and adoption of the modern platform enables the business to operate more efficiently with unified policy, management, and skill sets across existing and newly developed apps. 
It also enables you to unlock additional funding for more modernization and new app development by being able to reallocate budget that was previously required to keep legacy environments up.Accelerate adoption of day-two operationsFor day-two operations, save on labor and costs associated with maintaining, patching, and updating VMs and physical servers by switching to modern CI/CD pipelines, image-based management, and desired-state configuration. Easily modernize IT landscapes by accelerating the adoption of modern services such as Service Mesh, Config Management, role based access control (RBAC), Cloud Logging, and more. Thereby unifying your policy enforcement and management practices across both migrated and newly developed applications.View all featuresHow It WorksMigrate to Containers makes it fast and easy to modernize traditional applications away from virtual machines and into native containers. Our unique automated approach extracts the critical application elements from the VM so you can easily insert those elements into containers in Google Kubernetes Engine, Cloud Run, or Anthos clusters without the VM layers (like Guest OS) that become unnecessary with containers.View documentationCommon UsesGetting familiar with Migrate to ContainersBenefits of migrating to containersMigrate to Containers is a tool to containerize existing VM-based applications to run on Google Kubernetes Engine (GKE), GKE Autopilot clusters, GKE Enterprise, or Cloud Run. By taking advantage of the GKE and GKE Enterprise ecosystems, Migrate to Containers provides a fast and simple way to move to modernized orchestration and application management. Modernization and management can be done without requiring access to source code, rewriting, or rearchitecting applications.Learn moreLearning resourcesBenefits of migrating to containersMigrate to Containers is a tool to containerize existing VM-based applications to run on Google Kubernetes Engine (GKE), GKE Autopilot clusters, GKE Enterprise, or Cloud Run. By taking advantage of the GKE and GKE Enterprise ecosystems, Migrate to Containers provides a fast and simple way to move to modernized orchestration and application management. Modernization and management can be done without requiring access to source code, rewriting, or rearchitecting applications.Learn moreGetting your migration startedMigrate a Linux VM using Migrate to Containers CLIIn this quickstart, you create a Compute Engine virtual machine (VM) instance, then use the Migrate to Containers CLI to migrate the VM to Google Kubernetes Engine (GKE).Learn moreTutorials, quickstarts, & labsMigrate a Linux VM using Migrate to Containers CLIIn this quickstart, you create a Compute Engine virtual machine (VM) instance, then use the Migrate to Containers CLI to migrate the VM to Google Kubernetes Engine (GKE).Learn morePricingMigrate to ContainersPricing tableServicePriceMigrate to ContainersMigrate to Containers is offered at no charge for migrating workloads to Google Cloud. Customers will still pay for all other Google Cloud services they consume (compute, storage, networking, etc.), but by itself, the use of this tool (Migrate to Containers) does not incur additional cost. Any customer can use Migrate to Containers (with or without an Anthos subscription).Migrate to ContainersPricing tableMigrate to ContainersPriceMigrate to Containers is offered at no charge for migrating workloads to Google Cloud. 
Customers will still pay for all other Google Cloud services they consume (compute, storage, networking, etc.), but by itself, the use of this tool (Migrate to Containers) does not incur additional cost. Any customer can use Migrate to Containers (with or without an Anthos subscription).Google Cloud pricing calculatorAdd and configure products to get a cost estimate to share with your team.Use calculatorContact salesTalk to a Google Cloud migration specialist.Reach outLearn more about migration with Google CloudRapid Migration Program (RaMP)Learn moreData center migrationLearn moreMigrating your applicationsLearn moreGoogle Cloud VMware EngineLearn moreGoogle Cloud's migration product portfolioLearn moreGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Migrate_to_Containers.txt b/Migrate_to_Containers.txt new file mode 100644 index 0000000000000000000000000000000000000000..c2e0a525c0e42c4a2e25ea36a205cd7e5733d691 --- /dev/null +++ b/Migrate_to_Containers.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/cloud-migration/containers +Date Scraped: 2025-02-23T12:03:11.352Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Migrate to ContainersIntelligently extract, migrate, and modernize applications to run natively on containersDocumentationFree migration cost assessmentMore informationExplore documentationFeaturesCraft your ideal migration journeyMost digital transformations will be a mix of strategies. For the workloads that will benefit from containers, Migrate to Containers delivers a fast, smooth path to modernization. For other workloads that are better suited as a VM, simply move them as is with Migrate to Virtual Machines and leverage VPC network integration with GKE. Don't settle for being locked into existing infrastructure or one migration path. With Google, run your workloads how you want, where you want.Upgrade to containers with easeSome workloads are simply written off by IT as being “unable to upgrade.” But Migrate to Containers strips away layers of manual effort, making migration and modernization a possibility for those workloads, even with small IT teams. Automate the extraction of existing applications from servers and VMs so they can run natively in containers, without having to rewrite or re-architect apps. This helps you eliminate complexity and knowledge gaps that have previously held businesses back from being able to upgrade to containers.Capitalize on the benefits of modernization fasterAccelerating the migration and adoption of the modern platform enables the business to operate more efficiently with unified policy, management, and skill sets across existing and newly developed apps. It also enables you to unlock additional funding for more modernization and new app development by being able to reallocate budget that was previously required to keep legacy environments up.Accelerate adoption of day-two operationsFor day-two operations, save on labor and costs associated with maintaining, patching, and updating VMs and physical servers by switching to modern CI/CD pipelines, image-based management, and desired-state configuration. Easily modernize IT landscapes by accelerating the adoption of modern services such as Service Mesh, Config Management, role based access control (RBAC), Cloud Logging, and more. 
Thereby unifying your policy enforcement and management practices across both migrated and newly developed applications.View all featuresHow It WorksMigrate to Containers makes it fast and easy to modernize traditional applications away from virtual machines and into native containers. Our unique automated approach extracts the critical application elements from the VM so you can easily insert those elements into containers in Google Kubernetes Engine, Cloud Run, or Anthos clusters without the VM layers (like Guest OS) that become unnecessary with containers.View documentationCommon UsesGetting familiar with Migrate to ContainersBenefits of migrating to containersMigrate to Containers is a tool to containerize existing VM-based applications to run on Google Kubernetes Engine (GKE), GKE Autopilot clusters, GKE Enterprise, or Cloud Run. By taking advantage of the GKE and GKE Enterprise ecosystems, Migrate to Containers provides a fast and simple way to move to modernized orchestration and application management. Modernization and management can be done without requiring access to source code, rewriting, or rearchitecting applications.Learn moreLearning resourcesBenefits of migrating to containersMigrate to Containers is a tool to containerize existing VM-based applications to run on Google Kubernetes Engine (GKE), GKE Autopilot clusters, GKE Enterprise, or Cloud Run. By taking advantage of the GKE and GKE Enterprise ecosystems, Migrate to Containers provides a fast and simple way to move to modernized orchestration and application management. Modernization and management can be done without requiring access to source code, rewriting, or rearchitecting applications.Learn moreGetting your migration startedMigrate a Linux VM using Migrate to Containers CLIIn this quickstart, you create a Compute Engine virtual machine (VM) instance, then use the Migrate to Containers CLI to migrate the VM to Google Kubernetes Engine (GKE).Learn moreTutorials, quickstarts, & labsMigrate a Linux VM using Migrate to Containers CLIIn this quickstart, you create a Compute Engine virtual machine (VM) instance, then use the Migrate to Containers CLI to migrate the VM to Google Kubernetes Engine (GKE).Learn morePricingMigrate to ContainersPricing tableServicePriceMigrate to ContainersMigrate to Containers is offered at no charge for migrating workloads to Google Cloud. Customers will still pay for all other Google Cloud services they consume (compute, storage, networking, etc.), but by itself, the use of this tool (Migrate to Containers) does not incur additional cost. Any customer can use Migrate to Containers (with or without an Anthos subscription).Migrate to ContainersPricing tableMigrate to ContainersPriceMigrate to Containers is offered at no charge for migrating workloads to Google Cloud. Customers will still pay for all other Google Cloud services they consume (compute, storage, networking, etc.), but by itself, the use of this tool (Migrate to Containers) does not incur additional cost. 
Any customer can use Migrate to Containers (with or without an Anthos subscription).Google Cloud pricing calculatorAdd and configure products to get a cost estimate to share with your team.Use calculatorContact salesTalk to a Google Cloud migration specialist.Reach outLearn more about migration with Google CloudRapid Migration Program (RaMP)Learn moreData center migrationLearn moreMigrating your applicationsLearn moreGoogle Cloud VMware EngineLearn moreGoogle Cloud's migration product portfolioLearn moreGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Migrate_to_Virtual_Machines(1).txt b/Migrate_to_Virtual_Machines(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..b183e083cd617089d4081f01ecd4dcecab70cbed --- /dev/null +++ b/Migrate_to_Virtual_Machines(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/cloud-migration/virtual-machines +Date Scraped: 2025-02-23T12:06:44.941Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Migrate to Virtual MachinesFast, flexible, and safe migration of virtual machines to Google CloudTry it nowFree migration cost assessmentMore informationExplore documentationFeaturesFrictionless, enterprise-grade migration for any businessCloud migration creates a lot of questions. Migrate to Virtual Machines by Google Cloud has the answers. Whether you’re looking to migrate one application from on-premises or one thousand enterprise-grade applications across multiple data centers, Migrate to Virtual Machines gives any IT team, large or small, the power to migrate their workloads to Google Cloud.Fast, efficient migrationsWith Migrate to Virtual Machines simple “as a service” interface within Cloud Console and flexible migration options, it’s easy for anyone to reduce the time and toil that typically goes into a migration. Avoid complex deployments, setup, and configurations. Eliminate confusing and troublesome client-side migration tool agents. By using the right migration tool, you can save your migration team’s valuable time for what matters most: migrating workloads. Simplicity at scaleTake advantage of Migrate to Virtual Machines speed, scale, and flexibility to accelerate your migration program. Migrate a single app with a few clicks, execute a migration sprint with 100 systems using groups, or use our Cloud API to build your in-house migration factory. With Migrate to Virtual Machines, you are empowered to customize your migration in whatever way that is best for you.Minimal downtime and riskBuilt-in testing makes it fast and easy to validate before you migrate. No client-side software agents means no impact on the resources needed by the workload itself, no need to open access to or from the network where the workload is running, and the ability to complete your migration without the source systems running at all. And periodically replicating data from the source workload to the destination without manual steps or interruptions to the running workload minimizes workload downtime and enables fast cutover to the cloud.Lower cost, higher staff productivityMigrate to Virtual Machines as-a-service offering helps reduce migration labor and complexity. By leveraging speed and simplicity, you can eliminate costly on-premises hardware and software licenses. 
Plus, Migrate to Virtual Machines provides usage-driven analytics to help you rightsize destination instances and avoid cloud over-provisioning.View all featuresHow It WorksMigration to Virtual Machines is a part of Migration Center, Google Cloud's unified platform that helps you accelerate your end-to-end cloud journey from your current on-premises or cloud environments to Google Cloud. With features like cloud spend estimation, asset discovery of your current environment, and a variety of tooling for different migration scenarios, Migration Center provides you with what you need for your migration.More on Migration CenterCommon UsesGetting familiar with Migrate to Virtual MachinesVM Migration lifecycle Migrate to Virtual Machines enables you to migrate (Lift and Shift) your virtual machines (VMs), with minor automatic modifications, from your source environment to Google Compute Engine. Migrate to Virtual Machines uses data replication technology which continuously replicates disk data from the source VMs to Google Cloud without causing any downtime on the source. You then create VM clones from replicated data for testing and perform predictable VM cut-over to your final workloads running on Google Cloud.Learn moreLearning resourcesVM Migration lifecycle Migrate to Virtual Machines enables you to migrate (Lift and Shift) your virtual machines (VMs), with minor automatic modifications, from your source environment to Google Compute Engine. Migrate to Virtual Machines uses data replication technology which continuously replicates disk data from the source VMs to Google Cloud without causing any downtime on the source. You then create VM clones from replicated data for testing and perform predictable VM cut-over to your final workloads running on Google Cloud.Learn moreGetting your migration startedMigrating from various sourcesMigrate to Virtual Machines lets you migrate virtual machine (VM) instances and disks of VMs from different migration sources such as vSphere on-premises data center, AWS cloud computing services, Azure cloud computing services, and Google Cloud VMware Engine to VM instances or Persistent Disk volumes on Google Cloud.Read guidesTutorials, quickstarts, & labsMigrating from various sourcesMigrate to Virtual Machines lets you migrate virtual machine (VM) instances and disks of VMs from different migration sources such as vSphere on-premises data center, AWS cloud computing services, Azure cloud computing services, and Google Cloud VMware Engine to VM instances or Persistent Disk volumes on Google Cloud.Read guidesPricingMigrate to Virtual MachinesPricing tableServicePriceMigrate to Virtual MachinesMigrate to Virtual Machines is provided at no charge for migrations into Google Cloud. Google Cloud charges for resources such as Compute Engine instances, storage, and networking when they are consumed.Migrate to Virtual MachinesPricing tableMigrate to Virtual MachinesPriceMigrate to Virtual Machines is provided at no charge for migrations into Google Cloud. 
Google Cloud charges for resources such as Compute Engine instances, storage, and networking when they are consumed.Google Cloud pricing calculatorAdd and configure products to get a cost estimate to share with your team.Use calculatorContact salesTalk to a Google Cloud migration specialist.Reach outLearn more about migration with Google CloudRapid Migration Program (RaMP)Learn moreData center migrationLearn moreMigrating your applicationsLearn moreGoogle Cloud VMware EngineLearn moreGoogle Cloud's migration product portfolioLearn moreGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Migrate_to_Virtual_Machines.txt b/Migrate_to_Virtual_Machines.txt new file mode 100644 index 0000000000000000000000000000000000000000..cd2316b237394d9d5828323ec877a020f88b2e23 --- /dev/null +++ b/Migrate_to_Virtual_Machines.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/cloud-migration/virtual-machines +Date Scraped: 2025-02-23T12:02:38.561Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Migrate to Virtual MachinesFast, flexible, and safe migration of virtual machines to Google CloudTry it nowFree migration cost assessmentMore informationExplore documentationFeaturesFrictionless, enterprise-grade migration for any businessCloud migration creates a lot of questions. Migrate to Virtual Machines by Google Cloud has the answers. Whether you’re looking to migrate one application from on-premises or one thousand enterprise-grade applications across multiple data centers, Migrate to Virtual Machines gives any IT team, large or small, the power to migrate their workloads to Google Cloud.Fast, efficient migrationsWith Migrate to Virtual Machines simple “as a service” interface within Cloud Console and flexible migration options, it’s easy for anyone to reduce the time and toil that typically goes into a migration. Avoid complex deployments, setup, and configurations. Eliminate confusing and troublesome client-side migration tool agents. By using the right migration tool, you can save your migration team’s valuable time for what matters most: migrating workloads. Simplicity at scaleTake advantage of Migrate to Virtual Machines speed, scale, and flexibility to accelerate your migration program. Migrate a single app with a few clicks, execute a migration sprint with 100 systems using groups, or use our Cloud API to build your in-house migration factory. With Migrate to Virtual Machines, you are empowered to customize your migration in whatever way that is best for you.Minimal downtime and riskBuilt-in testing makes it fast and easy to validate before you migrate. No client-side software agents means no impact on the resources needed by the workload itself, no need to open access to or from the network where the workload is running, and the ability to complete your migration without the source systems running at all. And periodically replicating data from the source workload to the destination without manual steps or interruptions to the running workload minimizes workload downtime and enables fast cutover to the cloud.Lower cost, higher staff productivityMigrate to Virtual Machines as-a-service offering helps reduce migration labor and complexity. By leveraging speed and simplicity, you can eliminate costly on-premises hardware and software licenses. 
Plus, Migrate to Virtual Machines provides usage-driven analytics to help you rightsize destination instances and avoid cloud over-provisioning.View all featuresHow It WorksMigrate to Virtual Machines is part of Migration Center, Google Cloud's unified platform that helps you accelerate your end-to-end cloud journey from your current on-premises or cloud environments to Google Cloud. With features like cloud spend estimation, asset discovery of your current environment, and a variety of tooling for different migration scenarios, Migration Center provides you with what you need for your migration.More on Migration CenterCommon UsesGetting familiar with Migrate to Virtual MachinesVM Migration lifecycle Migrate to Virtual Machines enables you to migrate (Lift and Shift) your virtual machines (VMs), with minor automatic modifications, from your source environment to Google Compute Engine. Migrate to Virtual Machines uses data replication technology that continuously replicates disk data from the source VMs to Google Cloud without causing any downtime on the source. You then create VM clones from replicated data for testing and perform predictable VM cut-over to your final workloads running on Google Cloud.Learn moreGetting your migration startedMigrating from various sourcesMigrate to Virtual Machines lets you migrate virtual machine (VM) instances and disks of VMs from different migration sources such as vSphere on-premises data center, AWS cloud computing services, Azure cloud computing services, and Google Cloud VMware Engine to VM instances or Persistent Disk volumes on Google Cloud.Read guidesPricingMigrate to Virtual MachinesPricing tableServicePriceMigrate to Virtual MachinesMigrate to Virtual Machines is provided at no charge for migrations into Google Cloud. Google Cloud charges for resources such as Compute Engine instances, storage, and networking when they are consumed.Google Cloud pricing calculatorAdd and configure products to get a cost estimate to share with your team.Use calculatorContact salesTalk to a Google Cloud migration specialist.Reach outLearn more about migration with Google CloudRapid Migration Program (RaMP)Learn moreData center migrationLearn moreMigrating your applicationsLearn moreGoogle Cloud VMware EngineLearn moreGoogle Cloud's migration product portfolioLearn more \ No newline at end of file diff --git a/Migrate_to_a_Google_Cloud_VMware_Engine_platform.txt b/Migrate_to_a_Google_Cloud_VMware_Engine_platform.txt new file mode 100644 index 0000000000000000000000000000000000000000..2208d4b85b45951ca448ff44d10f6f0e1c10d6f3 --- /dev/null +++ b/Migrate_to_a_Google_Cloud_VMware_Engine_platform.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/vmware-engine-blueprint +Date Scraped: 2025-02-23T11:52:11.537Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to a Google Cloud VMware Engine platform Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-17 UTC Many enterprises want to move their VMware clusters to the cloud to take advantage of the cloud's scalability, resiliency, elasticity, and higher-level services like Vertex AI Studio and BigQuery. Enterprises also want to shift expenditures from a capital-intensive hardware model to a more flexible operational expense model. To help enterprises rapidly build an operational environment that follows Google Cloud best practices, we have created the Google Cloud VMware Engine enterprise blueprint. This blueprint provides you with a comprehensive guide to deploying an enterprise-ready VMware environment so that you can migrate your VM workloads to the cloud. VMware Engine is a fully managed service that lets you run the VMware platform on Google Cloud. Your VMware workloads operate on dedicated Google Cloud hardware, fully integrated with Google Cloud services. Google takes care of the infrastructure, networking, and management. The blueprint lets you deploy a Google Cloud project that contains a VMware Engine private cloud, a Google-managed VMware Engine network, and the VPC network peering connections that let traffic flow end to end. The VMware Engine enterprise blueprint includes the following: GitHub repositories that contain the Terraform code and ancillary scripts that are necessary to deploy the VMware Engine platform A guide to the architecture, networking, and security controls that you use the GitHub repositories to implement (this document) The blueprint is designed to run on a foundation of base-level services, such as VPC networks. You can use the enterprise foundation blueprint or Fabric FAST to create the foundation for this blueprint. This document is intended for cloud architects, cloud platform administrators, VMware Engine administrators, and VMware Engine engineers who can use the blueprint to build and deploy VMware clusters on Google Cloud. The blueprint focuses on the design and deployment of a new VMware Engine private cloud and assumes that you are familiar with VMware and the VMware Engine managed service. VMware Engine enterprise blueprint overview The VMware Engine enterprise blueprint relies on a layered approach to enable the VMware Engine platform. 
The following diagram shows the interaction of various components of this blueprint with other blueprints and services. This diagram includes the following: The Google Cloud infrastructure provides you with security capabilities such as encryption at rest and encryption in transit, as well as basic building blocks such as compute and storage. The enterprise foundation provides you with a baseline of resources such as networking, identity, policies, monitoring, and logging. These resources let you rapidly adopt Google Cloud while meeting your organization's architectural requirements. The VMware Engine enterprise blueprint provides you with the following: A VMware Engine network private connectivity to Google Virtual Private Cloud networks, APIs, and services A VMware Engine private cloud Backup capabilities Cloud Load Balancing Google Cloud Armor Use cases include data center migrations, data center elastic capacity expansion, virtual desktop infrastructure (VDI), and data center disaster recovery scenarios. Deployment automation using a CI/CD pipeline provides you with the tools to automate the provisioning, configuration, and management of infrastructure. Automation helps you ensure consistent, reliable, and auditable deployments; minimize manual errors; and accelerate the overall development cycle. Architecture The following diagram shows the architecture that the VMware Engine enterprise blueprint deploys. The blueprint deploys the following: A Google Cloud project called standalone VMware Engine project that contains a VMware Engine private cloud A Google-managed project for the VMware Engine network The VPC network peering connections so that traffic can flow from VMware Engine applications to clients The VMware Engine private cloud consists of the following components: Management tools: VLAN and subnet for ESXi hosts' management network, DNS server, vCenter Server Backup: backup infrastructure for workload VMs Virtual machines: workload VMs vCenter Server: centralized management of private cloud vSphere environment NSX Manager: provides a single interface to configure, monitor, and manage NSX-T networking and security services ESXi hosts: hypervisor on dedicated nodes vSAN storage: hyper-converged, software-defined storage platform NSX-T overlay network: network virtualization and security software VMware HCX: application migration and workload rebalancing across data centers and clouds Overview of VMware Engine networking The VMware Engine network is a dedicated network that connects the VMware Engine private cloud, VPC networks, and on-premises environments. The VMware Engine network has the following capabilities: Private cloud connectivity: each VMware Engine private cloud is connected to a VMware Engine network, allowing communication between workloads within the private cloud. VMware Engine network connectivity: you can use VPC Network Peering to establish connectivity between VMware Engine networks and a Google VPC. This connectivity enables communication between workloads that run on VMware Engine and those running on other services in Google Cloud. On-premises connectivity: to create a hybrid cloud solution, you can extend VMware Engine networks to on-premises data centers using Cloud VPN or Cloud Interconnect. 
Network services: VMware Engine networks use various network services, including the following: Cloud DNS for name resolution of internal and external resources Cloud NAT for internet access for private cloud workloads VPC Network Peering for network connectivity to other VPCs and other VMware Engine networks Private connectivity to Google Cloud APIs and services With VMware Engine, you are responsible for creating and managing workload VMs using the VMware application management surface. Google Cloud is responsible for patching and upgrading the infrastructure components and remediating failed components. Key architectural decisions Decision area Decision Decision reasoning Foundation You can implement the VMware Engine enterprise blueprint on the enterprise foundation blueprint, Fabric FAST, or on a foundation that meets the defined prerequisites. Both the enterprise foundation blueprint and Fabric FAST provide the base capabilities that help enterprises adopt Google Cloud. Compute You can deploy a single private cluster in a particular region or you can deploy two private clusters in two regions. The single private cluster configuration allows for simplified management and cost optimization. The blueprint deploys one spare node. A single spare node lets you have capacity to handle failures, maintenance events, and workload fluctuations while minimizing costs. Backup and disaster recovery are managed using the Backup and DR Service. Backup and DR lets you use a managed service and lessen the amount of administration that is required for a VMware Engine deployment. Networking The blueprint enables hybrid connectivity. Hybrid connectivity lets you connect your on-premises environment with your Google Cloud environment. Private cloud uses a private, routable, and contiguous IP space. Contiguous IP space makes IP address management easier. When the IP space is routable, the private cloud can communicate with your on-premises resources. Internet access is provided through a Cloud Load Balancing and is protected by Google Cloud Armor. Google Cloud Armor enhances the workload security posture, while the Cloud Load Balancing helps enable workload scalability and high availability. The blueprint enables Cloud DNS. Cloud DNS resolves internal and external names. Platform personas The blueprint uses two user groups: a cloud platform engineering group and a VMware platform engineering group. These groups have the following responsibilities: The cloud platform engineering group is responsible for the deployment of the foundation for the VMware Engine blueprint and the deployment of the blueprint. The VMware platform engineering group is responsible for the configuration and operation of the VMware components that are part of the private cloud. If you are deploying the blueprint on the enterprise foundation blueprint or Fabric FAST, the cloud platform engineering group is created as part of the initial deployment process. The VMware platform engineering group is deployed as part of this blueprint. Organization structure The VMware Engine enterprise blueprint builds on the existing organizational structure of the enterprise foundation blueprint and Fabric FAST. It adds a standalone VMware Engine project in the production, non-production, and development environments. The following diagram shows the blueprint's structure. 
Networking The VMware Engine enterprise blueprint provides you with the following networking options: A single Shared VPC network for a VMware Engine private cloud Two Shared VPC instances for a private cloud Both options are deployed in a single region and let you manage traffic from your on-premises environment. The following diagram shows a single Shared VPC network for a single region. Separate Shared VPC instances let you group costs and network traffic to distinct business units, while maintaining logical separation in the VMware Engine private cloud. The following diagram shows multiple Shared VPC networks in a single region. Private cloud network Within the private cloud, networking is powered by NSX-T, which provides a software-defined networking layer with advanced features like micro-segmentation, routing, and load balancing. The VMware Engine blueprint creates a network for your VMware Engine service. This network is a single Layer 3 address space. Routing is enabled by default, allowing all private clouds and subnets within the region to communicate without extra configuration. As shown in the following diagram, when a private cloud is created, multiple subnets are created, consisting of management subnets, service subnets, workload subnets, and edge service subnets. When you configure your private cloud, you must select a CIDR range that doesn't overlap with other networks in your private cloud, your on-premises network, your private cloud management network, or subnet IP address ranges in your VPC network. After you select a CIDR range, VMware Engine automatically allocates IP addresses for various subnets. Using an example 10.0.0.0/24 CIDR range, the following table shows the blueprint's IP address ranges for its management subnets. Subnet Description IP address range System management VLAN and subnet for the ESXi hosts' management network, DNS server, and vCenter Server 10.0.0.0/26 VMotion VLAN and subnet for the vMotion network for ESXi hosts 10.0.0.64/28 HCX uplink Uplink for HCX IX (mobility) and NE (extension) appliances to reach their peers and enable the creation of the HCX service mesh 10.0.0.216/29 The workload VMs are contained in the NSX-T subnet. NSX-T edge uplinks provide external connectivity. Your private cloud CIDR range size defines the number of ESXi nodes that can be supported in the NSX-T subnet. ESXi nodes use the VSAN subnet for storage transport. The following table shows the IP address ranges for the NSX-T host transport subnet, NSX-T edge uplink subnets, and VSAN subnets, based on a 10.0.0.0/24 CIDR range. Subnet Description IP address range VSAN The VSAN subnet is responsible for storage traffic between ESXi hosts and VSAN storage clusters. 10.0.0.80/28 NSX-T host transport The VLAN and subnet for the ESXi host zone that is responsible for network connectivity, allowing firewalling, routing, load balancing, and other network services. 10.0.0.128/27 NSX-T edge uplink-N [N=1-4] The NSX-T edge uplink lets external systems access services and applications running on the NSX-T network. 10.0.0.160/29 10.0.0.168/29 10.0.0.176/29 10.0.0.184/29 For service subnets and the edge service subnet, VMware Engine doesn't allocate a CIDR range or prefix. Therefore, you must specify a non-overlapping CIDR range and prefix. The following table shows the blueprint's CIDR blocks for the service subnets and the edge service subnet. 
Subnet Description IP address range Service-N [N=1-5] Service subnets let virtual machines bypass NSX transport and communicate directly with Google Cloud networking to enable high-speed communications. 10.0.2.0/24 10.0.3.0/24 10.0.4.0/24 10.0.5.0/24 Edge service Required if optional edge services, such as point-to-site VPN, internet access, and external IP addresses, are enabled. Ranges are determined for each region. 10.0.1.0/26 Routing Except for networks that are stretched from your on-premises network or from other VMware Engine private clouds, all communications within VMware Engine and to external IP addresses are routed (over Layer 3) by default. The blueprint configures a Cloud Router that is associated with the on-premises hybrid connection (using Cloud VPN or Cloud Interconnect) with summary custom advertised routes for the VMware Engine IP address ranges. NSX segment routes are summarized at the Tier-0 level. The blueprint enables DHCP services through the NSX-T DHCP Relay to the DHCP services that are set up in the VMware Engine private cloud. DNS configuration VMware Engine lets you use a Cloud DNS zone in your project as a single DNS resolution endpoint for all connected management appliances in a peered VPC network. You can do this even if your private clouds are deployed across different regions. When configuring address resolution for multiple and single private clouds, you can set up global address resolution using Cloud DNS. By default, you can resolve the management zone from any of your VPC networks that have Cloud DNS enabled. When the blueprint creates a private cloud that is linked to a standard VMware Engine network, an associated management DNS zone is created and auto-populated with the management appliance entries. If the standard VMware Engine network is a VPC network that is peered with a VPC or another VMware Engine network, the blueprint automatically creates a management DNS zone binding. This zone binding ensures resolution of management appliances from your Google Cloud VMs on that network. The following diagram shows the Cloud DNS topology. Outbound traffic from VMware Engine to the internet The blueprint provides you with the following three options for outbound traffic going from VMware Engine to the internet: Outbound through the customer's on-premises environment Outbound through the VMware Engine Internet Gateway Outbound through the customer's attached VPC using an external IP address The following diagram shows these options. Inbound traffic from the internet to VMware Engine The blueprint provides you with the following three options for traffic coming from the internet to VMware Engine: Inbound through the customer's on-premises environment Inbound through a customer VPC with Cloud Load Balancing and potentially Google Cloud Armor Inbound through VMware Engine using an external IP address The following diagram shows these options. Logging The blueprint lets you send VMware Engine administrative actions to Cloud Audit Logs using a log sink. By analyzing the VMware Engine audit logs, administrators can identify suspicious behavior, investigate incidents, and demonstrate compliance with regulatory requirements. Logging exports can also serve as ingestion sources for security information and event management (SIEM) systems. 
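To illustrate the kind of log export described above, the following Python sketch creates a log sink with the google-cloud-logging client library. This is not part of the blueprint's Terraform code; the project ID, sink name, Pub/Sub topic, and log filter (including the vmwareengine.googleapis.com service name) are illustrative assumptions that you would replace with your own values.

# Sketch: route VMware Engine Admin Activity audit logs to a Pub/Sub topic
# for downstream SIEM ingestion. All names and the filter are assumptions.
from google.cloud import logging

PROJECT_ID = "my-project"          # assumed project ID
TOPIC = "vmware-engine-audit"      # assumed Pub/Sub topic (must already exist)

client = logging.Client(project=PROJECT_ID)

# Admin Activity audit log entries emitted by the VMware Engine API.
log_filter = (
    'logName:"cloudaudit.googleapis.com%2Factivity" '
    'AND protoPayload.serviceName="vmwareengine.googleapis.com"'
)

sink = client.sink(
    "vmware-engine-audit-sink",
    filter_=log_filter,
    destination=f"pubsub.googleapis.com/projects/{PROJECT_ID}/topics/{TOPIC}",
)

if not sink.exists():
    sink.create()
    # Grant the sink's writer identity the Pub/Sub Publisher role on the
    # topic so that exported entries can be delivered.
    print("Created sink; writer identity:", sink.writer_identity)

A sink like this only routes the entries; granting the writer identity access to the destination and wiring the topic into your SIEM remain separate steps.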
Google supports the following ingestion sources that serve VMware Engine: The hosting Google Cloud organization which includes cloud fabric and assets telemetry VMware service components Workloads running within VMware Engine Google SecOps includes a built-in automated log ingestion pipeline for ingesting organization data, and provides forwarding systems to push streaming telemetry from VMware Engine and workloads into the Google SecOps ingestion pipeline. Google SecOps enriches telemetry with contextual content and makes it searchable. You can use Google SecOps to find and track security issues as they develop. Monitoring The blueprint installs a standalone agent for Cloud Monitoring to forward metrics from your private cloud to Cloud Monitoring. The blueprint sets up predefined dashboards that provide an overview of your VMware Engine resources and resource utilization. In VMware vCenter Server, VMware provides tools to help you monitor your environment and to locate the source of problems. You can use these tools as part of your ongoing operations and as a supplement to other monitoring options. As seen in the following diagram, the blueprint automates the deployment of the standalone agent using a Managed instance group that is deployed in the customer VPC. The agent collects metrics and syslog logs from VMware vCenter and forwards them to Cloud Monitoring and Cloud Logging. Backups The blueprint uses Backup and DR to provide data protection services to your VMware workloads. The service uses a managed appliance that is deployed in the customer VPC. The appliance is connected to the Google control plane through Private Google Access and websockets. Backups are stored in Cloud Storage and the service provides granular recovery options, letting you restore individual files or entire VMs to a specific point in time. Operational best practices This section describes some of the best practices that you can implement, depending on your environment and requirements, after deploying the blueprint. Add more spare nodes VMware Engine clusters are automatically sized to have at least one spare node for resiliency. A spare node is an inherent behavior in vSphere HA, meaning that this node is available in the cluster and billed accordingly. You can add more spare nodes to the cluster for guaranteed capacity during maintenance windows. This decision can incur additional consumption costs and these nodes are managed directly by your organization. The spare nodes that you add appear as extra nodes in your vSphere cluster. Optionally, you can schedule workloads on the spare nodes. Consider the resource limits for private clouds VMware Engine private clouds have resource limits on compute, storage, and networking components. Consider these limits during your private cloud deployment so that your environment can scale with your workload demands. Implement cost management options You can implement one or more of the following options to manage your costs: Committed use discounts (CUDs) Auto-scaling Core count limits Oversubscription of compute capacity Use committed use discounts CUDs provide discounted prices in exchange for your commitment to use a minimum level of resources for a specified term. VMware Engine CUDs apply to aggregate VMware Engine node usage in a region, giving you low, predictable costs, without requiring you to make any manual changes or updates. Discounts apply to VMware Engine node usage in the regions where the service is available and where you have purchased the CUDs. 
Use autoscaling VMware Engine lets you automatically add or remove nodes in a cluster based on predefined thresholds and watermarks. These policies are triggered if a specified condition is sustained for at least 30 minutes. When applying or updating an autoscale policy to a vSphere cluster (standard or stretched), consider the following: By default, autoscaling is disabled. You must enable it explicitly for each cluster. In a stretched cluster, the number of nodes that you specify in the policy are added or removed per zone, which impacts billing accordingly. Because compute, memory, and storage usage are often independent, autoscale policies that monitor multiple metrics use OR logic for node addition and AND logic for node removal. Autoscale maximums are determined by the quotas that are available in your Google Cloud project and VMware Engine private cloud. Enabling autoscaling and manually adding or removing a node is not mutually exclusive. For example, with the Storage Capacity Optimization Policy, you can manually remove a node if you can get the VM disk space reduced enough to accommodate all the VMs on the cluster. Although manually removing nodes is possible, it is not a best practice when using autoscaling. Limit core count VMware Engine lets administrators reduce the number of effective CPU cores that are exposed to the guest OS (which is the VM running on top of VMware Engine). Some software license agreements require that you reduce the cores that are exposed. Oversubscribe VMware Engine compute capacity Oversubscribing VMware Engine compute capacity is a standard practice and, unlike Compute Engine sole-tenant nodes, doesn't incur additional charges. A higher oversubscription ratio might help you to decrease the number of effective billable nodes in your environment, but can affect application performance. When sizing enterprise workloads, we recommend that you use a 4:1 ratio to start, and you then modify the ratio based on factors that are applicable to your use case. Deploy the blueprint You can deploy the blueprint on the enterprise foundation blueprint or Fabric FAST. To deploy the blueprint on the enterprise foundation blueprint, complete the following: Deploy the enterprise foundation blueprint. Deploy the VMware Engine enterprise blueprint. For instructions, see the VMware Engine enterprise blueprint repository. To deploy the blueprint on Fabric FAST, see the Fabric FAST repository. The Google Cloud VMware Engine Stage deploys the VMware Engine enterprise blueprint. Deploy the blueprint without the enterprise foundations blueprint or Fabric FAST To deploy the blueprint without first deploying the enterprise foundation blueprint or Fabric FAST, verify the following resources exist in your environment: An organization hierarchy with development, nonproduction, and production folders A Shared VPC network for each folder An IP address scheme that takes into account the required IP address ranges for your VMware Engine private clouds A DNS mechanism for your VMware Engine private clouds Firewall policies that are aligned with your security posture A mechanism to access Google Cloud APIs through internal IP addresses A connectivity mechanism with your on-premises environment Centralized logging for security and audit Organizational policies that are aligned with your security posture A pipeline that you can use to deploy VMware Engine What's next Read about VMware Engine. Learn how to migrate VMware VM instances to your private cloud. Read compute best practices. 
Read networking best practices. Read the best practices for VMware Engine security. Read storage best practices. Read costing best practices. Access the VMware Engine page from VMware. Send feedback \ No newline at end of file diff --git a/Migrating_Hadoop_Jobs_from_On-Premises_to_Google_Cloud_Platform.txt b/Migrating_Hadoop_Jobs_from_On-Premises_to_Google_Cloud_Platform.txt new file mode 100644 index 0000000000000000000000000000000000000000..7d52c9b50ad9208482ca48c3fe33ce084677cdb5 --- /dev/null +++ b/Migrating_Hadoop_Jobs_from_On-Premises_to_Google_Cloud_Platform.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hadoop/hadoop-gcp-migration-jobs +Date Scraped: 2025-02-23T11:52:42.441Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrating Hadoop Jobs from On-Premises to Dataproc Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-04-17 UTC This guide describes how to move your Apache Hadoop jobs to Google Cloud by using Dataproc. This is the third of three guides describing how to move from on-premises Hadoop: Migrating On-Premises Hadoop Infrastructure to Google Cloud provides an overview of the migration process, with particular emphasis on moving from large, persistent clusters to an ephemeral model. Migrating HDFS Data from On-Premises to Google Cloud describes the process of moving your data to Cloud Storage and other Google Cloud products. This guide focuses on moving your Hadoop jobs to Dataproc. Running Hadoop jobs on Google Cloud You can use Dataproc to run most of your Hadoop jobs on Google Cloud. The following list summarizes the basic procedure: Update your job to point to your persistent data stored in Cloud Storage. Create a Dataproc cluster on which to run your job. This kind of temporary, single-use cluster is called an ephemeral cluster. Submit your job to the ephemeral cluster. Optionally, monitor your job logs using Cloud Logging or Cloud Storage. Logs are captured in Cloud Storage by default, using the staging bucket that you specify when you create the cluster. Check your job's output on Cloud Storage. When your job completes, delete the cluster. Supported jobs Dataproc runs Hadoop, so many kinds of jobs are supported automatically. When you create a cluster with Dataproc, the following technologies are configured by default: Hadoop Spark Hive Pig Dataproc provides several versions of machine images with different versions of open source software preinstalled. You can run many jobs with just the preconfigured software on an image. For some jobs, you might need to install other packages. Dataproc provides a mechanism called initialization actions, which enables you to customize the software running on the nodes of your cluster. You can use initialization actions to create scripts that run on every node when it is created. Updating data locations (URIs) The Cloud Storage connector, which is preinstalled on Dataproc cluster nodes, enables your jobs to use Cloud Storage as a Hadoop compatible file system (HCFS). Store your data in Cloud Storage so that you can take advantage of the connector. If you do, the only necessary change to your jobs is to update the URIs, replacing hdfs:// with gs://. If you reorganize your data as part of your migration, note all source and destination paths so that you can easily update your jobs to work with the new data organization. It's possible to store your data in HDFS in persistent clusters in the cloud, but this isn't recommended. 
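As a rough illustration of the basic procedure summarized above, the following Python sketch uses the google-cloud-dataproc client library to create an ephemeral cluster, submit a Hadoop job whose input and output are gs:// URIs, and then delete the cluster. The project, region, cluster name, JAR path, bucket paths, machine types, and labels are placeholder assumptions, not values prescribed by this guide.

# Sketch of the ephemeral-cluster flow: create a Dataproc cluster, submit a
# Hadoop job that reads from and writes to Cloud Storage, then delete the
# cluster. Replace the placeholder values with your own.
from google.cloud import dataproc_v1

PROJECT_ID = "my-project"
REGION = "us-central1"
CLUSTER_NAME = "ephemeral-hadoop"
MAIN_JAR = "gs://my-bucket/jars/wordcount.jar"  # assumed job JAR

endpoint = {"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
clusters = dataproc_v1.ClusterControllerClient(client_options=endpoint)
jobs = dataproc_v1.JobControllerClient(client_options=endpoint)

cluster = {
    "project_id": PROJECT_ID,
    "cluster_name": CLUSTER_NAME,
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
    },
    # Custom labels make it easier to find and manage the cluster later.
    "labels": {"team": "data-eng", "purpose": "migration-test"},
}

# 1. Create the ephemeral cluster and wait for it to become ready.
clusters.create_cluster(
    request={"project_id": PROJECT_ID, "region": REGION, "cluster": cluster}
).result()

# 2. Submit the Hadoop job, pointing at Cloud Storage instead of HDFS.
job = {
    "placement": {"cluster_name": CLUSTER_NAME},
    "hadoop_job": {
        "main_jar_file_uri": MAIN_JAR,
        "args": ["gs://my-bucket/input/", "gs://my-bucket/output/"],
    },
}
jobs.submit_job_as_operation(
    request={"project_id": PROJECT_ID, "region": REGION, "job": job}
).result()

# 3. Delete the cluster once the job finishes.
clusters.delete_cluster(
    request={"project_id": PROJECT_ID, "region": REGION, "cluster_name": CLUSTER_NAME}
).result()

The cluster configuration shown here is deliberately small; the sizing, disk, and preemptible-node considerations discussed in the following sections determine what a realistic configuration looks like for your jobs.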
You can learn more about moving your data in the data migration guide. Configuring clusters to run your jobs In the recommended approach to running your jobs on Google Cloud, you create ephemeral clusters when you need them and delete them when your jobs are finished. This approach gives you a lot of flexibility in how you configure your clusters. You can use a different configuration for each job, or create several standard cluster configurations that serve groups of jobs. You can find the basic steps for creating clusters in the Dataproc documentation. The rest of this section describes some of the important cluster configuration considerations to help you decide how to proceed. Sizing your cluster The first thing you need to do to define a new cluster is decide what virtual hardware to use for it. It can be difficult to calculate the perfect cluster configuration, because each job has its particular needs and idiosyncrasies. Experiment with different configurations to find the right setup for your job. When you set up a cluster, you need to determine at a minimum: How many nodes to use. The type of virtual machine to use for your primary (master) node. The type of virtual machine to use for your worker nodes. Node types are defined by the number of virtual CPUs and the amount of memory they have available. The definitions correspond to the Compute Engine machine types. You can usually find a node type that corresponds to the configuration of on-premises nodes that you are migrating from. You can use that equivalency as a starting place, setting up a cluster that's similar to your on-premises cluster. From there, the best approach is to adjust the configuration and monitor the effect on job execution. As you begin to optimize the configuration of your jobs, you'll start to get a feel for how to approach additional jobs in your system. Keep in mind that you can scale your cluster as needed, so you don't need to have the perfect specification defined from the start. Choosing primary disk options You can specify the size of the primary disk used by your worker nodes. The right options for a cluster depend on the types of jobs you're going to run on it. Use the default value and evaluate the results unless you know that your jobs have unusual demands on primary disk usage. If your job is disk-intensive and is executing slowly on individual nodes, you can add more primary disk space. For particularly disk-intensive jobs, especially those with many individual read and write operations, you might be able to improve operation by adding local SSDs. Add enough SSDs to contain all of the space you need for local execution. Your local execution directories are spread across however many SSDs you add. Using preemptible worker nodes You can gain low-cost processing power for your jobs by adding preemptible worker nodes to your cluster. These nodes use preemptible virtual machines. Consider the inherent unreliability of preemptible nodes before choosing to use them. Dataproc attempts to smoothly handle preemption, but jobs might fail if they lose too many nodes. Only use preemptible nodes for jobs that are fault-tolerant or that are low enough priority that occasional job failure won't disrupt your business. If you decide to use preemptible worker nodes, consider the ratio of regular nodes to preemptible nodes. 
There is no universal formula to get the best results, but in general, the more preemptible nodes you use relative to standard nodes, the higher the chances are that the job won't have enough nodes to complete the task. You can determine the best ratio of preemptible to regular nodes for a job by experimenting with different ratios and analyzing the results. Note that SSDs are not available on preemptible worker nodes. If you use SSDs on your dedicated nodes, any preemptible worker nodes that you use will match every other aspect of the dedicated nodes, but will have no SSDs available. Running jobs Dataproc provides multiple interfaces you can use to launch your jobs, all of which are described in the product documentation. This section describes options and operations to consider when running your Hadoop jobs on Google Cloud. Getting job output Jobs you run on Dataproc usually have several types of output. Your job might write many kinds of output directly—for example, to files in a Cloud Storage bucket or to another cloud product, like BigQuery. Dataproc also collects logs and console output and puts them in the Cloud Storage staging bucket associated with the cluster you run the job on. Using restartable jobs When you submit a job, you can configure it to automatically restart if it encounters issues. This option is useful for jobs that rely on resources or circumstances that are highly variable. For example, jobs that stream data across potentially unreliable channels (such as the public internet) are especially prone to random failure due to timeout errors and similar networking issues. Run jobs as restartable if you can imagine situations where the job would fail but would successfully run a short time later. Scaling your cluster Dataproc makes it easy to add or remove nodes for your cluster at any time, including while your job is running. The Dataproc documentation includes detailed instructions for scaling your cluster. Scaling includes the option for gracefully decommissioning nodes. With this option, nodes that are going to be deleted are given time to complete in-progress processing. Managing jobs over time Dealing with individual jobs isn't usually complex, but a Hadoop system can include dozens or hundreds of jobs. Over time, the number of logs, output files, and other information associated with each job proliferates, which can make it difficult to find any individual piece of information. Here are some things that you can do to make it easier to manage your jobs for the future: Use custom labels to identify jobs, clusters, and other resources. Using labels makes it easy to use a filter to find resources later. Dataproc supports custom labels using the standard Google Cloud label system, so when you label a resource it can help you manage that resource in other Google Cloud services. Organize your Cloud Storage buckets to keep different types of jobs separate. Grouping your data into buckets that correspond to your business structure or functional areas can also make it easier to manage permissions. Define clusters for individual jobs or for closely related groups of jobs. It is much easier to update the setup for your ephemeral clusters if you use each configuration only for well-scoped jobs. What's next Check out the other parts of the Hadoop migration guide: Overview Data migration guide Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. 
Send feedback \ No newline at end of file diff --git a/Migration.txt b/Migration.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab9019e20d038d9329cb93a08064d5d7a526d0fe --- /dev/null +++ b/Migration.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/cloud-migration +Date Scraped: 2025-02-23T12:06:38.510Z + +Content: +Google Cloud migrationGoogle Cloud’s flexible solutions and services help you migrate your data and apps to the cloud while modernizing and innovating at your own pace.Contact salesFree migration cost estimateCloud migration products and servicesNo matter what you have to migrate or why, Google Cloud has the solutions and services to meet your goals and help you succeed and modernize on your terms. Check out this video, then keep scrolling to learn more.5:03Migrating with confidence to Google Cloud.Migrate workloads to the cloud: an essential guide & checklistAccess a thorough checklist that organizations can follow to ease the migration to the cloud.Rearchitecting your infrastructure for generative AILearn from 300+ decision-makers to get things right throughout your cloud migration journey.Build a large-scale migration programLearn how to build a successful migration program using Google Cloud’s proven four-stage approach.Explore our migration and modernization optionsCategoryProductFeatures and Use CasesPlatforms & ProgramsMigration CenterUnified platform for migrating and modernizing with Google Cloud.Primary hub for migration and modernizationBuilt-in cost estimation, discovery, and assessmentIntelligent guidance for planning and migratingRapid Migration and Modernization Program (RaMP)A holistic, end-to-end migration program to help simplify and accelerate your migration path.Free IT landscape assessmentDetailed migration planning with Google Cloud expertsPre-vetted cloud migration partner optionsApplication migration and modernizationData Center MigrationFlexible solutions for exiting or reducing on-premises data centers or other clouds.Craft the right data center migration plan for your businessGrow confidently on our purpose-built infrastructureFind the right experts to drive your success with cloud migration servicesApplication MigrationModernize applications for improved cost, performance, and scalability.Free discovery and assessment to help plan your app migration and modernizationModernize at your own pace, picking the right technologies for each applicationEnhanced performance, agility & cost-efficiencyAccelerate your innovation effortsCloud Foundation ToolkitBest-practice, open-source, ready-made reference templates for Deployment Manager and Terraform.Quickly build a repeatable, enterprise-ready foundation in Google CloudEasily update foundation as needs change with IaCAutomate repeatable tasksEnterprise compliance and security featuresMigrate to Virtual MachinesFast, flexible, and safe migration to Google Cloud with Migrate to Virtual Machines.Automatic and seamless adaptations for Compute EngineBuilt-in pre-migration validation and testingFast on-premises stateful rollbackEasy provisioning and rightsizingMigrate to ContainersIntelligently extract, migrate, and modernize applications to run natively on containers in Google Kubernetes Engine.Automate extraction of existing apps from servers and VMsEasily convert workloads to run on containers in Google Kubernetes EngineModernize workloads without source codeVMware EngineMigrate and run your VMware workloads 
natively on Google Cloud in just a few clicks.Fully managed, native VMware Cloud Foundation software stackvSphere, vCenter, vSAN, NSX-T, and HCX includedLower your TCO while improving business agilitySAP on Google CloudMaintain business continuity on a secure cloud that gives you business agility while allowing you to maximize the value of your SAP data.Smart infrastructure with agility & availability for business-critical workloadsIntelligence and analytics for SAP customers to succeed in a data-driven worldExperience, skills that give SAP customers a proven path to value in the cloudMainframesReduce costs and increase profits by modernizing and migrating your mainframe applications to Google Cloud.Automated code refactoring toolsPrescriptive guidance from expertsModernize with VMs or containersWindowsA first-class experience for migrating and modernizing Windows and Microsoft workloads to reduce license cost.Self-manage or leverage managed servicesBring your own licensesWindows Server, SQL Server, .NET, Windows Containers, Active DirectoryEnterprise-class support backed by MicrosoftData migration and modernizationDatabase MigrationMigrate to Google Cloud databases to run and manage your databases at global scale while optimizing for efficiency and flexibility.Migrate as-is to fully managed databasesMove from proprietary licenses to open source databasesModernize from traditional databases to scalable, cloud-native databasesIntegrate databases across Google Cloud servicesDatabase Migration ServiceMake migrations to Cloud SQL simple. Available in preview for MySQL, with limited access to PostgreSQL, and SQL Server coming soon.Ease of use reduces migration complexityServerless experience eliminates the operational burden of migrationsReplicate data continuously for minimal downtimeAvailable at no additional chargeDatabase ModernizationModernize underlying operational databases to make your apps more secure, reliable, scalable, and easier to manage.Scale seamlessly to handle the unpredictableBuild faster, with less maintenanceDevelop powerful applications with integrations with GKE and other servicesLower overall operating costs & eliminate restrictive licensesData LakesPowers analysis on any type of data so your teams can securely and cost-effectively ingest, store, and analyze large volumes of data.Built on fully managed servicesIngest and store any type or volume of dataAutoscale data lake processing to manage SLAs and costs.Quickly build analytics using Dataproc, BigQuery, or AI PlatformData WarehousesSolve for today’s analytics demands and seamlessly scale your business by moving to Google Cloud’s modern data warehouse, BigQuery.Streamlined migration to BigQueryBuilt-in machine learning capabilitiesRobust data security and governance controlsLower TCO up to 52% by migrating to BigQueryOracle WorkloadsReduce overhead, drive innovation, and gain agility with a wide variety of database options for your Oracle workloads.Bare Metal Solution to lift and shift workloadsMigrate to compatible open-source platformsModernize with Cloud Spanner for scalability and availabilityBigQuery Data Transfer ServiceA data import service for scheduling and moving data into BigQuery.Automate data movement into BigQueryZero-coding foundation for a BigQuery data warehouseData backfills to recover from outages or gapsMicrosoft SQL ServerMigrate and run Microsoft SQL Server on Compute Engine or use managed Cloud SQL to reduce license and infrastructure cost.Bring your existing SQL Server licenses and run them on 
Compute EngineRun license-included SQL Server VM images on Compute EngineUse fully managed Cloud SQL for SQL Server to reduce operational overheadFull compatibility and configurable custom VM shapes help optimize costTransfer ApplianceRuggedized server to collect and physically move data from field locations with limited connectivity or from data centers for ultra low cost data transfers.Transfer data for archival, migrations, or analyticsCapture sensor data for machine learning and analysisMove data from bandwidth-constrained locationsStorage Transfer ServiceTools to help you perform a data transfer, either from another cloud provider or from your private data center.Collecting research dataMoving ML/AI training datasetsMigrating from S3 to Google CloudSee how customers are migrating and modernizing with Google CloudCase StudyMLB migrates to BigQuery to help personalize the fan experience and lower costs.5-min readCase StudyHow Viant moved 600 VMs and 200+ TB of data in just 7 weeks5-min readCase StudyHow Global Infra Tech elevated blockchain services with innovation with Google Cloud5-min readCase StudyEvernote moved 5 billion user notes to Google Cloud in only 70 days.4-min readVideoHow running SAP on Google Cloud helps McKesson keep patients healthier.2:09See all customersORACLE® is a registered trademark of Oracle Corporation.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Contact salesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Migration_Center.txt b/Migration_Center.txt new file mode 100644 index 0000000000000000000000000000000000000000..d957d0d7999a1effd2539a49a49c6dfeec0ef0d6 --- /dev/null +++ b/Migration_Center.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/migration-center/docs +Date Scraped: 2025-02-23T12:06:40.492Z + +Content: +Home Migration Center Documentation Stay organized with collections Save and categorize content based on your preferences. Google Cloud Migration Center documentation View all product documentation Google Cloud Migration Center is a unified migration platform that helps you accelerate your end-to-end cloud migration journey from your current on-premises environment to Google Cloud. 
With features like cloud spend estimation, asset discovery of your current environment, and a variety of tooling for different migration scenarios, Migration Center provides you with what you need for your migration. Learn more or try Migration Center now. Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. emoji_objects Discover Migration Center Cost estimation Infrastructure discovery and assessment Migration and modernization tools format_list_numbered Guides Start a cost estimation Discover your assets Generate a TCO report Plan your migration Execute your migration find_in_page Reference Migration Center v1 API Rapid Migration Assessment v1 API Training Training and tutorials Estimate your Google Cloud costs Learn how to estimate the costs of running your infrastructure on Google Cloud with Migration Center. 10 minutes introductory Free Learn more arrow_forward Training Training and tutorials Introduction to Migration Center Assessments Get hands-on practice with Migration Center by running an infrastructure assessment with the discovery client. Learn more arrow_forward Use case Use cases Choose your migration path to Google Cloud Describes which tools and strategies to use to start the migration to Google Cloud. Migration Learn more arrow_forward Use case Use cases Get started with Migrate for VMs Describes how to migrate your virtual machines (VMs) from your source environment to Google Cloud by using Migrate for VMs. Migration Learn more arrow_forward Use case Use cases Migrating apps to containers Describes how Migrate to Containers provides an efficient way to modernize traditional applications away from virtual machines and into native containers. Migration Containers Learn more arrow_forward Use case Use cases Oracle® to BigQuery migration guide Highlights technical differences and provides guidance on migrating from the Oracle data warehouse to BigQuery. Migration Oracle Learn more arrow_forward Use case Use cases Database Migration Services Describes how to migrate and modernize your databases on Google Cloud. Migration Databases Learn more arrow_forward \ No newline at end of file diff --git a/Minimize_costs(1).txt b/Minimize_costs(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/Minimize_costs.txt b/Minimize_costs.txt new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/Mirrored_pattern.txt b/Mirrored_pattern.txt new file mode 100644 index 0000000000000000000000000000000000000000..6144b3170ac22ffc10beea87d498a23dc761917e --- /dev/null +++ b/Mirrored_pattern.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/mirrored-pattern +Date Scraped: 2025-02-23T11:50:28.302Z + +Content: +Home Docs Cloud Architecture Center Send feedback Mirrored pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The mirrored pattern is based on replicating the design of a certain existing environment or environments to a new environment or environments. 
Therefore, this pattern applies primarily to architectures that follow the environment hybrid pattern. In that pattern, you run your development and testing workloads in one environment while you run your staging and production workloads in another. The mirrored pattern assumes that testing and production workloads aren't supposed to communicate directly with one another. However, it should be possible to manage and deploy both groups of workloads in a consistent manner. If you use this pattern, connect the two computing environments in a way that aligns with the following requirements: Continuous integration/continuous deployment (CI/CD) can deploy and manage workloads across all computing environments or specific environments. Monitoring, configuration management, and other administrative systems should work across computing environments. Workloads can't communicate directly across computing environments. If necessary, communication has to be in a fine-grained and controlled fashion. Architecture The following architecture diagram shows a high-level reference architecture of this pattern that supports CI/CD, monitoring, configuration management, other administrative systems, and workload communication: The description of the architecture in the preceding diagram is as follows: Workloads are distributed based on the functional environments (development, testing, CI/CD and administrative tooling) across separate VPCs on the Google Cloud side. Shared VPC is used for development and testing workloads. An extra VPC is used for the CI/CD and administrative tooling. With shared VPCs: The applications are managed by different teams per environment and per service project. The host project administers and controls the network communication and security controls between the development and test environments—as well as to outside the VPC. The CI/CD VPC is connected to the network running the production workloads in your private computing environment. Firewall rules permit only allowed traffic. You might also use Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention without changing the design or routing. Cloud Next Generation Firewall Enterprise works by creating Google-managed zonal firewall endpoints that use packet intercept technology to transparently inspect the workloads for the configured threat signatures. It also protects workloads against threats. VPC Network Peering enables communication among the peered VPCs using internal IP addresses. The peering in this pattern allows CI/CD and administrative systems to deploy and manage development and testing workloads. Consider these general best practices. You establish this CI/CD connection by using one of the discussed hybrid and multicloud networking connectivity options that meet your business and application requirements. To let you deploy and manage production workloads, this connection provides private network reachability between the different computing environments. All environments should have overlap-free RFC 1918 IP address space. If the instances in the development and testing environments require internet access, consider the following options: You can deploy Cloud NAT into the same Shared VPC host project network. Deploying into the same Shared VPC host project network helps to avoid making these instances directly accessible from the internet. For outbound web traffic, you can use Secure Web Proxy. The proxy offers several benefits. 
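One of the requirements listed above is that all environments use overlap-free RFC 1918 IP address space. The following Python sketch, which uses only the standard library, shows one way to sanity-check a planned address scheme before you configure VPC Network Peering and hybrid connectivity; the environment names and example ranges are placeholder assumptions, not values defined by this pattern.

# Sketch: verify that the CIDR ranges planned for each environment are
# RFC 1918 ranges and don't overlap one another. Replace the example
# ranges with your own IP plan.
import ipaddress
from itertools import combinations

planned_ranges = {
    "dev-test-shared-vpc": "10.10.0.0/16",
    "cicd-admin-vpc": "10.20.0.0/20",
    "on-premises-production": "10.128.0.0/16",
}

networks = {name: ipaddress.ip_network(cidr) for name, cidr in planned_ranges.items()}

rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

for name, net in networks.items():
    # Each environment range should fall inside one of the RFC 1918 blocks.
    if not any(net.subnet_of(block) for block in rfc1918):
        raise ValueError(f"{name} ({net}) is not an RFC 1918 range")

for (a_name, a), (b_name, b) in combinations(networks.items(), 2):
    # No two environments may share address space.
    if a.overlaps(b):
        raise ValueError(f"{a_name} ({a}) overlaps {b_name} ({b})")

print("All planned ranges are RFC 1918 and overlap-free.")

Running a check like this as part of your network design review catches addressing conflicts before they surface as unreachable routes between the peered VPCs and the on-premises environment.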
For more information about the Google Cloud tools and capabilities that help you to build, test, and deploy in Google Cloud and across hybrid and multicloud environments, see the DevOps and CI/CD on Google Cloud explained blog. Variations To meet different design requirements, while still considering all communication requirements, the mirrored architecture pattern offers these options, which are described in the following sections: Shared VPC per environment Centralized application layer firewall Hub-and-spoke topology Microservices zero trust distributed architecture Shared VPC per environment The shared VPC per environment design option allows for application- or service-level separation across environments, including CI/CD and administrative tools that might be required to meet certain organizational security requirements. These requirements limit communication, administrative domain, and access control for different services that also need to be managed by different teams. This design achieves separation by providing network- and project-level isolation between the different environments, which enables more fine-grained communication and Identity and Access Management (IAM) access control. From a management and operations perspective, this design provides the flexibility to manage the applications and workloads created by different teams per environment and per service project. VPC networking, and its security features can be provisioned and managed by networking operations teams based on the following possible structures: One team manages all host projects across all environments. Different teams manage the host projects in their respective environments. Decisions about managing host projects should be based on the team structure, security operations, and access requirements of each team. You can apply this design variation to the Shared VPC network for each environment landing zone design option. However, you need to consider the communication requirements of the mirrored pattern to define what communication is allowed between the different environments, including communication over the hybrid network. You can also provision a Shared VPC network for each main environment, as illustrated in the following diagram: Centralized application layer firewall In some scenarios, the security requirements might mandate the consideration of application layer (Layer 7) and deep packet inspection with advanced firewalling mechanisms that exceed the capabilities of Cloud Next Generation Firewall. To meet the security requirements and standards of your organization, you can use an NGFW appliance hosted in a network virtual appliance (NVA). Several Google Cloud security partners offer options well suited to this task. As illustrated in the following diagram, you can place the NVA in the network path between Virtual Private Cloud and the private computing environment using multiple network interfaces. This design also can be used with multiple shared VPCs as illustrated in the following diagram. The NVA in this design acts as the perimeter security layer. It also serves as the foundation for enabling inline traffic inspection and enforcing strict access control policies. For a robust multilayer security strategy that includes VPC firewall rules and intrusion prevention service capabilities, include further traffic inspection and security control to both east-west and north-south traffic flows. 
Note: In supported cloud regions, and when technically feasible for your design, NVAs can be deployed without requiring multiple VPC networks or appliance interfaces. This deployment is based on using load balancing and policy-based routing capabilities. These capabilities enable a topology-independent, policy-driven mechanism for integrating NVAs into your cloud network. For more details, see Deploy network virtual appliances (NVAs) without multiple VPCs. Hub-and-spoke topology Another possible design variation is to use separate VPCs (including shared VPCs) for your development and different testing stages. In this variation, as shown in the following diagram, all stage environments connect with the CI/CD and administrative VPC in a hub-and-spoke architecture. Use this option if you must separate the administrative domains and the functions in each environment. The hub-and-spoke communication model can help with the following requirements: Applications need to access a common set of services, like monitoring, configuration management tools, CI/CD, or authentication. A common set of security policies needs to be applied to inbound and outbound traffic in a centralized manner through the hub. For more information about hub-and-spoke design options, see Hub-and-spoke topology with centralized appliances and Hub-and-spoke topology without centralized appliances. As shown in the preceding diagram, the inter-VPC communication and hybrid connectivity all pass through the hub VPC. As part of this pattern, you can control and restrict the communication at the hub VPC to align with your connectivity requirements. As part of the hub-and-spoke network architecture the following are the primary connectivity options (between the spokes and hub VPCs) on Google Cloud: VPC Network Peering VPN Using network virtual appliance (NVA) With multiple network interfaces With Network Connectivity Center (NCC) For more information on which option you should consider in your design, see Hub-and-spoke network architecture. A key influencing factor for selecting VPN over VPC peering between the spokes and the hub VPC is when traffic transitivity is required. Traffic transitivity means that traffic from a spoke can reach other spokes through the hub. Microservices zero trust distributed architecture Hybrid and multicloud architectures can require multiple clusters to achieve their technical and business objectives, including separating the production environment from the development and testing environments. Therefore, network perimeter security controls are important, especially when they're required to comply with certain security requirements. It's not enough to support the security requirements of current cloud-first distributed microservices architectures, you should also consider zero trust distributed architectures. The microservices zero trust distributed architecture supports your microservices architecture with microservice level security policy enforcement, authentication, and workload identity. Trust is identity-based and enforced for each service. By using a distributed proxy architecture, such as a service mesh, services can effectively validate callers and implement fine-grained access control policies for each request, enabling a more secure and scalable microservices environment. Cloud Service Mesh gives you the flexibility to have a common mesh that can span your Google Cloud and on-premises deployments. The mesh uses authorization policies to help secure service-to-service communications. 
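The following minimal sketch illustrates the identity-based, deny-by-default authorization that a service mesh enforces for each service-to-service request. The service identities and policy entries are hypothetical; in Cloud Service Mesh this logic is expressed as declarative authorization policies and enforced by the mesh proxies, not hand-coded in the application.

# Hypothetical service identities in SPIFFE-like form; in a real mesh these come
# from workload certificates issued by the mesh, and policies are declared, not coded.
AUTHORIZATION_POLICY = {
    "payments": {"spiffe://example.org/ns/prod/sa/checkout"},
    "inventory": {"spiffe://example.org/ns/prod/sa/checkout",
                  "spiffe://example.org/ns/prod/sa/fulfillment"},
}

def authorize(caller_identity: str, target_service: str) -> bool:
    """Deny by default; allow only callers explicitly listed for the target service."""
    return caller_identity in AUTHORIZATION_POLICY.get(target_service, set())

print(authorize("spiffe://example.org/ns/prod/sa/checkout", "payments"))  # True
print(authorize("spiffe://example.org/ns/prod/sa/frontend", "payments"))  # False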
You might also incorporate Apigee Adapter for Envoy, which is a lightweight Apigee API gateway deployment within a Kubernetes cluster, with this architecture. Apigee Adapter for Envoy is an open source edge and service proxy that's designed for cloud-first applications. For more information about this topic, see the following articles: Zero Trust Distributed Architecture GKE Enterprise hybrid environment Connect to Google Connect an on-premises GKE Enterprise cluster to a Google Cloud network. Set up a multicloud or hybrid mesh Deploy Cloud Service Mesh across environments and clusters. Mirrored pattern best practices The CI/CD systems required for deploying or reconfiguring production deployments must be highly available, meaning that all architecture components must be designed to provide the expected level of system availability. For more information, see Google Cloud infrastructure reliability. To eliminate configuration errors for repeated processes like code updates, automation is essential to standardize your builds, tests, and deployments. The integration of centralized NVAs in this design might require the incorporation of multiple segments with varying levels of security access controls. When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by your NVA vendor. By not exporting on-premises IP routes over VPC peering or VPN to the development and testing VPC, you can restrict network reachability from development and testing environments to the on-premises environment. For more information, see VPC Network Peering custom route exchange. For workloads with private IP addressing that can require Google's APIs access, you can expose Google APIs by using a Private Service Connect endpoint within a VPC network. For more information, see Gated ingress, in this series. Review the general best practices for hybrid and multicloud networking architecture patterns. Previous arrow_back Architecture patterns Next Meshed pattern arrow_forward Send feedback \ No newline at end of file diff --git a/Model_development_and_data_labeling_with_Google_Cloud_and_Labelbox.txt b/Model_development_and_data_labeling_with_Google_Cloud_and_Labelbox.txt new file mode 100644 index 0000000000000000000000000000000000000000..f5985e4d25ecb26dfca40f96926616075f546cc4 --- /dev/null +++ b/Model_development_and_data_labeling_with_Google_Cloud_and_Labelbox.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/model-development-data-labeling-labelbox-google-cloud +Date Scraped: 2025-02-23T11:46:10.140Z + +Content: +Home Docs Cloud Architecture Center Send feedback Model development and data labeling with Google Cloud and Labelbox Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-21 UTC Authors: David Mok, Robert Wood - Labelbox | Akash Gupta - Google This document provides a reference architecture for building a standardized pipeline with Google Cloud and Labelbox. This architecture can help you to develop your ML models more quickly, particularly models for computer vision, NLP and generative AI use cases. This document is intended for machine learning (ML) engineers and data scientists who want to incorporate automation and a human-in-the-loop (HITL) approach to data labeling and data curation. 
Architecture The following diagram shows the architecture for the standardized pipeline with Google Cloud and Labelbox that you create: The architecture has the following arrangement: Unstructured data is stored in Cloud Storage. That data, along with any associated metadata, is stored in a BigQuery table. You connect Labelbox to Google Cloud using identity and access management (IAM) delegated access for Google Cloud Storage and the Labelbox BigQuery Connector. You can then use Labelbox Catalog as a visual, no-code tool for exploring, organizing, and structuring training datasets. Using data that Labelbox has structured and prepared, your organization can train models in Vertex AI. Labelbox Model can also be used to diagnose performance by taking a data-focused approach. After model errors and data quality issues have been identified, your organization can use Labelbox Catalog to find and target similar, unlabeled or incorrectly labeled data to build and refine the next iteration of their model's dataset. The preceding diagram includes the following Labelbox components: Catalog: An unstructured data search platform which provides large scale data discovery and organization. Catalog is integrated with Cloud Storage and BigQuery. This integration lets analysts more effectively search and organize BigQuery tables when using unstructured data. Annotate: This component provides model assisted labeling combined with enterprise review and workforce management workflows to create the ground truth for model training. Labelbox can use the models that are available through Vertex AI for assisted labeling. Model: A model diagnostics platform that lets you search and filter each version of your dataset based on metrics or metadata to identify problem cases. This component also lets you observe and evaluate Vertex AI models and create an active learning loop. Boost: This component provides managed access to various types of specialized labeling solutions, including specialized engagements and human labeling services. Products used This reference architecture uses the following Google Cloud and third-party products: Labelbox: An end-to-end AI platform that you can use to create and manage high-quality training data. Cloud Storage: An enterprise-ready service that provides low-cost, limitless object storage for diverse data types. Data is accessible from both inside and outside of Google Cloud and is replicated across multiple geographic locations. BigQuery: A fully managed, highly scalable data warehouse with built-in ML capabilities. Vertex AI: A service for managing the AI and ML development lifecycle. Use cases This section describes some example use cases for which a standardized pipeline with Google Cloud and Labelbox is an appropriate choice. Online retail companies use Labelbox to build personalization and recommendation models for their ecommerce product listings, for example, Etsy. Each listing is represented by its hero image and associated categorical and customer interaction metadata. Business users can search through listings using structured (metadata search) and unstructured (vector search) methods. Content media and entertainment companies are also building personalization and recommendation models, using high-volumes of video and image content. These companies can effectively develop content personalization and recommendation algorithms by quickly organizing and structuring their content to train models. 
By using the Labelbox search interface, content specialists can build ML datasets without spending weeks or months sifting through content. Customers and organizations can achieve the following by using Labelbox with BigQuery and Vertex AI: Effectively structure their data through Labelbox Model Assisted Labeling and Enterprise Workflows to prepare datasets for training models on Vertex AI. After organizations have a trained Vertex AI model in place, Labelbox efficiently improves performance through data-centric error analysis. This analysis helps teams to pinpoint problematic edge cases, focus on issues, and label data more effectively to build the next version of their training dataset. Design considerations This section provides guidance to help you use this reference architecture to develop an architecture that meets your specific requirements for security, reliability, operational efficiency, cost, and performance. For more best practices, see the Labelbox documentation. Security, privacy, and compliance This section describes factors that you should consider when you use this reference architecture to design and build a standardized pipeline with Google Cloud and Labelbox. Labelbox integrates with Cloud Storage through delegated access and ephemeral signed URLs. Data is encrypted in flight and can be sealed off behind a firewall using asset proxy servers and SSO. BigQuery and Vertex AI integration is secured through Identity and Access Management-authenticated client and service accounts. For security principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Security in the Architecture Framework. Cost optimization The pipeline that this architecture describes can help you to optimize costs in the following ways: Helping you to reduce overall labeling costs and annual budgets dedicated to data labeling by providing transparency and workflows that optimize for labeling performance. Saving time by improving collaboration between data scientists, AI product owners, and teams responsible for data labeling operations which in turn reduces the manual back and forth nature of managing data labeling at each model iteration. Helping ensure model training costs are reduced by delivering better data quality and tools for quality assurance. Less manual human-labeled data is required for building performant models because this architecture lets you use tools for curating high-impact data. It also provides automation, which helps teams effectively iterate on the quality of labeled data. Teams can focus on building AI-powered products and delivering models, instead of building data labeling infrastructure. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Operational efficiency Labelbox enables organizations and teams to search through, curate, and label unstructured data, without the need for input from data scientists and machine learning engineers. Typically, organizations would spend years and millions of dollars building and managing these products and the pipelines to connect them together. For operational excellence principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Operational excellence in the Architecture Framework. Deployment Labelbox operates as a managed, hosted service on Google Cloud. Its unstructured data and models use a customer managed infrastructure on Google Cloud. 
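As noted in the security section, Labelbox accesses assets through delegated access and ephemeral signed URLs. The following sketch only illustrates the underlying mechanism by generating a short-lived V4 signed URL with the google-cloud-storage client library. The bucket and object names are hypothetical, and signing requires credentials that can sign requests (for example, a service account with the appropriate permissions).

import datetime
from google.cloud import storage

def make_ephemeral_url(bucket_name: str, object_name: str) -> str:
    """Generate a short-lived, read-only V4 signed URL for a stored asset."""
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=15),  # URL stops working after 15 minutes
        method="GET",
    )

# Hypothetical bucket and object names.
print(make_ephemeral_url("example-unstructured-assets", "images/product.jpg"))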
You can purchase Labelbox from the Google Cloud Marketplace. This reference architecture uses GitHub repositories that let you do the following: Connect Labelbox and Google BigQuery Connect Labelbox and BigQuery What's next For more information about Labelbox, see the following: Learn about Labelbox See the Labelbox offering on Google Cloud Marketplace Review the Labelbox and BigQuery Connector details See Labelbox and Google Cloud retail use cases Learn more about the following Google Cloud products: Google Cloud Storage BigQuery Vertex AI For an overview of architectual principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Robert Wood | Solutions Architect (Labelbox)David Mok | Director Content and Partnerships (Labelbox)Akash Gupta | Partner EngineerOther contributors: Bala Desikan | Principal ArchitectSamson Leung | AI EngineerValentin Huerta | AI Engineer Send feedback \ No newline at end of file diff --git a/Modern_Infrastructure_Cloud.txt b/Modern_Infrastructure_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..ed5fda8bb4cc6e1c9c87f2c4230c8bf97f4fa17d --- /dev/null +++ b/Modern_Infrastructure_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/modern-infrastructure +Date Scraped: 2025-02-23T11:57:19.240Z + +Content: +Google receives the highest scores of any vendor evaluated in both Current Offering and Strategy in the Forrester Wave™: AI Infrastructure Solutions. Request report.Modern Infrastructure CloudGoogle Cloud can help businesses and governments build quickly, securely, and cost-effectively with the next generation of cloud infrastructure.Explore consoleTalk to salesTake a look under the hoodInnovate on an integrated, AI-optimized infrastructureBuild, run, and deploy AI workloads using infrastructure, including TPUs, GPUs, and CPUs, specifically designed for the performance and cost efficiency required for both small models and high-scale LLMs.Learn about AI Hypercomputer2:02Learn more about the inner workings of AI Hypercomputer, a supercomputing architecture designed specifically for AI workloads.LeaderGoogle is a Leader in the 2023 Gartner® Magic Quadrant™ for Container Management Build and run secure applications with managed containersKubernetes was born at Google and we make our deep expertise accessible to customers through GKE with industry-leading features, such as 15,000 nodes per cluster, for unmatched scalability.Explore Google Kubernetes EngineOptimize enterprise workloads with high reliability and price performanceGoogle Cloud continually invests in innovative approaches to improve infrastructure performance. We continue to evolve our built-in architecture enhancements like Titanium, our workload-optimized hardware/software offload architecture. We help customers minimize costs through things like pay-per-use models, dynamic workload scheduling, committed use discounts, and energy-efficient infrastructure. 
And we help customers get the most of their deployment with AI assistants like Cloud Assist.Learn about our robust Titanium foundationEtsy saw a 42% reduction in compute costs using committed-use discounts and other optimizations.1:39 Run Google Cloud where you want it with Google Distributed CloudEasily deploy workloads on-premises and at the edge for sovereignty, scalability, and controlFor organizations in highly regulated industries that need support to meet evolving sovereignty requirements, Google Cloud sovereign solutions and Google Distributed Cloud help them maintain control over their infrastructure and data while supporting compliance with strict data sovereignty, security, and privacy rules.Learn about Google Distributed CloudNearly 90% of generative AI unicorns and more than 60% of funded gen AI startups are Google Cloud customers. Source: Google Cloud internal data, July 2023Resourcesebook: Rearchitecting your infrastructure for generative AIThis ebook provides a guide to help technical leaders architect robust, scalable, and cost-effective generative AI systems. What you’ll discover inside:+ Actionable strategies to evaluate AI platforms+ Best practices for optimizing resources with cost-efficient AI technologies+ Examination of cost, scalability, security, and performance dimensions+ Paths for development and deployment using cutting-edge technologies like Vertex AI and Google Kubernetes Engine (GKE)Request ebookMore guides and reportsLearn best practices and see how Google Cloud compares to other cloud providers.Report: Google named a Leader in the Forrester Wave™: AI Infrastructure Solutions, Q1 2024Request reportReport: Google is a Leader in the 2023 Gartner® Magic Quadrant™ for Container ManagementRequest reportGuide: Accelerating software development with generative AI: An IT leader's guideGet the guideTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerExplore moreSee all productsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Modernize_Software_Delivery.txt b/Modernize_Software_Delivery.txt new file mode 100644 index 0000000000000000000000000000000000000000..8bfd73e2da07ef224b39a020b73a5e5e76fda111 --- /dev/null +++ b/Modernize_Software_Delivery.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/software-delivery +Date Scraped: 2025-02-23T11:58:18.451Z + +Content: +Modernize software deliveryTransform your software delivery best practices, to improve productivity, optimize costs, implement CI/CD best practices and secure the software supply chain. Contact usSolutionsSoftware deliverySolutions address inner loop, outer loop, and security best practices. Audience includes application developers, devops engineers, platform engineers, and security teams. 
Improve developer productivityContainer developer workshopLearn the approach to optimize innerloop developer productivity, best practices to develop, test and debug container apps using Google and open source technologies.Ask your sales team for this hands-on workshopModernize CI and CDSoftware delivery workshopLearn the CI/CD best practices to establish a software delivery model using Google Cloud technologies such as Cloud Build, Cloud Deploy, Artifact Registry.Ask your sales team for hands on software delivery workshopSoftware delivery blueprintA reference architecture and supporting code to implement an opinionated software delivery model that can scale across multiple teams and multiple environments.Ask your sales team for the software delivery blueprintSecure software supply chainS3C WorkshopLearn principles and practices to secure your software supply chain by optimizing each state of your software delivery process. Ask your sales team for the hands-on S3C workshopS3C blueprintProvision a CI/CD pipeline that aligns with secure software supply chain principles to deploy and promote applications across environments.Ask your sales team for help with S3C blueprintPartnersImprove software delivery through Google partnersSee all partnersRelated productsSoftware delivery solutions- Implement fully serverless continuous integration using Cloud Build.- Streamline continuous delivery using Cloud Deploy.- Store, manage and secure container images and build artifacts in Artifact Registry Artifact analysis to identify vulnerabilities.- Deploy trusted images by gating with Binary Authorization.- Store passwords, certificates and other sensitive data in Secret Manager.- IDE integration and code analysis with Cloud Code.Learn moreCloud BuildCloud DeployArtifact RegistryBinary AuthZCloud CodeSecret ManagerTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all solutionsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Modernize_Traditional_Applications.txt b/Modernize_Traditional_Applications.txt new file mode 100644 index 0000000000000000000000000000000000000000..ee402b60541ca9e4c4b903af904ebb15274e28e1 --- /dev/null +++ b/Modernize_Traditional_Applications.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/modernize-traditional-applications +Date Scraped: 2025-02-23T11:58:12.242Z + +Content: +Modernize Traditional ApplicationsThere are several reasons why enterprises still have not migrated a bulk of their traditional applications to the cloud. Our traditional applications solution addresses these challenges and helps your cloud journey. 
Contact usBenefitsCloud Native: Measure success in terms of business valueCost Management180+% Return on investment (ROI) over 3 years35% Reduction in environment setup time (Day 0/1 Ops)75% Reduction in ongoing management time (Day 2 Ops)Business Continuity50% Less time spent monitoring services, resulting in reinvested productivity97% Improvement in availability and avoided downtime95% Faster deploymentsCapabilities30% Improvement in developer efficiency and time to market40% Reduction in developer recruiting costs because of better developer retention80% Improvement in productivity for security tasksThe Total Economic Impact™ Of Google Kubernetes Engine, July 2021Key featuresRationalize and prioritize your traditional applications with Google CloudPlatform owners can leverage our fit assessment tool. App owners can leverage our partnership with CAST to assess and prioritize a portfolio of apps.Discover & assess against a VMware inventoryA large number of workloads can be assessed at once using Google Cloud's fit-assessment tool, resulting in either best “value for effort” or shortest “time to on-board” according to customer preference. Use Migrate to Containers to convert VM-based workloads into containers that run on Google Kubernetes Engine (GKE), Anthos clusters, or Cloud Run platform.Prioritize your apps based on business and technical criteriasThrough Google Cloud's partnership with CAST Software, you can perform automated source code analysis of hundreds of applications in a week for Cloud Readiness, Open Source risks, Resiliency, Agility. Objective software insights combined with qualitative surveys for business context.Use Apigee to encapsulate legacy services in modern APIsExtend the life of legacy applications, build modern services, and quickly deliver new experiences with Google’s API management platform as an abstraction layer on top of existing services.See how Google Cloud's CAMP can help youCloud Application Modernization Program (CAMP) Whitepaper.Get the Whitepaper2021 Accelerate State of DevOps Report.Get the reportDecision Point for Choosing a Cloud Migration Strategy for Your Application.Get the reportCustomersCustomer StoriesBlog postDelivering smiles and sparking innovation at 1-800-FLOWERS.COM, Inc.3-min readCase studyIn just 35 business days J.B. Hunt assessed, planned, and migrated their workloads to Google Cloud4-min readVideoHow CoreLogic is replatforming 10,000+ Cloud Foundry app-instances with KfVideo (16:16)See all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.EventJoin our Webinar to learn about Application RationalizationSign up nowBlog postApplication Rationalization through Google Cloud’s CAMP FrameworkRead the blogReportThe ROI of DevOps TransformationRead reportTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Monitor_on-premises_resources.txt b/Monitor_on-premises_resources.txt new file mode 100644 index 0000000000000000000000000000000000000000..7aa3666787b9907e1628e086abd657d297489444 --- /dev/null +++ b/Monitor_on-premises_resources.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/monitoring-on-premises-resources-with-bindplane +Date Scraped: 2025-02-23T11:53:31.164Z + +Content: +Home Docs Cloud Architecture Center Send feedback Monitor on-premises resources with BindPlane Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-02 UTC This document is one part of a two-part series on extending Cloud Logging and Cloud Monitoring to include on-premises infrastructure and apps. Log on-premises resources with BindPlane: Read about how Cloud Logging supports logging from on-premises resources. Monitor on-premises resources with BindPlane (this document): Read about how Cloud Monitoring supports monitoring of on-premises resources. You might consider using Logging and Monitoring for logging and monitoring of your on-premises resources for the following reasons: You need a temporary solution as you move infrastructure to Google Cloud and you want to monitor your on-premises resources until they're decommissioned. You might have a diverse computing environment with multiple clouds and on-premises resources. In either case, with the Logging and Monitoring APIs and BindPlane, you can gain visibility into your on-premises resources. This document is intended for DevOps practitioners, managers, and executives who are interested in a monitoring strategy for resources in Google Cloud and their remaining on-premises infrastructure and apps. Ingesting metrics with Monitoring You can get metrics into Monitoring in the following two ways: Use BindPlane from observIQ to ingest metrics from your on-premises or other cloud sources. Use OpenCensus to write to the Cloud Monitoring API. Using BindPlane to ingest metrics The following diagram shows the architecture of how BindPlane collects metrics and then how these metrics are ingested into Monitoring. observIQ offers several versions of BindPlane: BindPlane for Google, self-hosted, SaaS, and Enterprise. For more information about these versions, see the BindPlane solutions page. Advantages: Requires configuration, not instrumentation of your apps, which reduces time to implement. Included in the cost of using Monitoring. Supported configuration by Monitoring product and support. Can extend to metrics not provided by the default configuration. Disadvantages: Requires the use of the observIQ BindPlane agent to relay metrics to Monitoring, which can add complexity to the overall system. This option is the recommended method because it requires the lowest amount of effort. This solution requires configuration rather than development. Using OpenCensus to write to the Monitoring API The following diagram shows the architecture of how OpenCensus collects metrics and how these metrics are ingested into Monitoring. Using the Monitoring API directly means that you need to add instrumentation code to your apps to send metrics directly to the API. 
You can do this directly by using the Monitoring API to write metrics or by instrumenting your app with the Monitoring exporter for OpenCensus. OpenCensus is an open source project that defines a standard data structure for traces and metrics. Using OpenCensus has the advantage of supporting multiple backends, including Monitoring. Using OpenCensus also implements all the low-level technical details of using the Monitoring API. Advantages: Provides flexibility because the instrumentation required is easily implemented with the use of the OpenCensus Exporter Disadvantages: Requires a separate solution for infrastructure metrics by writing a custom agent. Requires app instrumentation, which might mean higher cost to implement. Requires open source libraries. This option isn't the recommended method because it requires the highest amount of effort and doesn't cover infrastructure metrics. Using BindPlane This document covers using BindPlane from observIQ to ingest metrics into Monitoring. The BindPlane service works by defining a series of sources, ingesting those metrics, and then sending the metrics to Monitoring as the destination. BindPlane supports agents that run on select versions of Windows, Linux, and Kubernetes. Sources, agents, destinations, and processors BindPlane has the following features: Sources: Items that generate metrics such as Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (Amazon EKS), or Microsoft Azure Container Service. Agents: Lightweight processes that remotely monitor your environment and forward metric data to BindPlane. Destinations: Services that BindPlane forwards the metrics. In this case, the destination is the process on BindPlane that uses the Monitoring API to write metrics to Monitoring. Processors: Configurations that can transform your data before it arrives at your destination. This includes adding attributes, filtering, and converting logs to metrics. For more detailed information about sources, agents, destinations, and processors, see the BindPlane QuickStart Guide. Example use case As an example, ExampleOrganization has resources deployed to Google Cloud, Microsoft Azure, and on-premises resources deployed by using vSphere. In Google Cloud, there is a GKE cluster and a demo app deployed, which runs the company's website. In the Microsoft Azure environment, Azure Kubernetes Service (AKS) is running a set of microservices, providing a REST API endpoint to external developers. In the vSphere environment, MySQL, Oracle, and Microsoft SQL Server support several corporate apps. With resources in each environment, ExampleOrganization wants to monitor each component regardless of where the component is deployed. Sending the metrics from each environment to Logging and Monitoring by using BindPlane brings all the metrics into a single location for monitoring and alerting purposes. Send metrics from BindPlane to Monitoring After BindPlane is set up and begins sending metrics, those metrics are sent to your Monitoring Workspace. You can then use Monitoring to view, configure, alert, and build dashboards from the time series like you can for any metrics or time series in Monitoring. For more information, see Metrics, time series, and resources. Use metrics in Monitoring In the previous example, BindPlane was configured to send metrics from Google Cloud, Microsoft Azure, and on-premises sources. 
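As a concrete illustration of the OpenCensus option described earlier, the following minimal sketch records a custom latency measure and registers the Cloud Monitoring exporter. The measure and view names are hypothetical, the opencensus and opencensus-ext-stackdriver packages are assumed to be installed, and Application Default Credentials are assumed to be configured. Metrics collected by BindPlane don't require any of this instrumentation.

import random
import time

from opencensus.ext.stackdriver import stats_exporter
from opencensus.stats import aggregation, measure, stats, view
from opencensus.tags import tag_map

# Hypothetical custom measure and view.
m_latency_ms = measure.MeasureFloat("task_latency", "Task latency", "ms")
latency_view = view.View(
    "task_latency_distribution",
    "Distribution of task latencies",
    [],
    m_latency_ms,
    aggregation.DistributionAggregation([25.0, 50.0, 100.0, 200.0, 400.0]),
)

stats.stats.view_manager.register_view(latency_view)
# The exporter resolves the project from Application Default Credentials and
# exports the registered views to Cloud Monitoring on a background schedule.
stats.stats.view_manager.register_exporter(stats_exporter.new_stats_exporter())

for _ in range(10):
    mmap = stats.stats.stats_recorder.new_measurement_map()
    mmap.measure_float_put(m_latency_ms, random.uniform(10, 300))
    mmap.record(tag_map.TagMap())
    time.sleep(1)

time.sleep(60)  # Give the background exporter time to flush before the process exits.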
The following three metrics appear in Monitoring: GKE cluster metrics AKS cluster metrics vSphere on-premises database metrics GKE cluster metrics If you have GKE clusters set up, the GKE cluster metrics appear in the Kubernetes Clusters page or Kubernetes Workloads page. You can see multiple views of the Kubernetes components running in Monitoring. The metrics, logs, and configuration are available for each pod. For details, see View observability metrics. AKS cluster metrics In the same Monitoring environment, metrics for AKS are collected. The metrics appear in Monitoring and can be used for any purposes in Monitoring including dashboards, alerting, and the Metrics Explorer. The Metrics Explorer page provides a way to find, filter, and build charts from metrics. Note that the metrics sent in by BindPlane have the workload.googleapis.com/THIRD_PARTY_APP_NAME prefix for the metric name. The Metrics Explorer can produce a chart for the metric. For more information about charts, see Create charts with Metrics Explorer. Like all metrics in Monitoring, you can use these metrics to build dashboards that display multiple charts. The dashboard can represent metrics produced by AKS, collected by BindPlane, and stored in Monitoring. For more information about dashboards, see View and customize Google Cloud dashboards. vSphere on-premises cluster metrics The last part of this example includes database metrics from vSphere. The metrics from vSphere appear in Monitoring and can be used in the same way as any other metric in Monitoring. The Oracle metrics from vSphere appear in the list of metrics on the Metrics Explorer page. Like all metrics in Monitoring, metrics can be used to build alerts. The alert can represent metrics produced by Oracle running in vSphere, collected by BindPlane, stored in Monitoring. For more information about alerts, see Alerting overview. Conclusion Monitoring provides dashboards, alerting, and incident response for you to get insights into your platforms. Together, Monitoring and BindPlane provide you the ability to gain visibility into your on-premises resources. What's next Cloud Logging and Cloud Monitoring BindPlane QuickStart Guide For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Multi-cloud_database_management.txt b/Multi-cloud_database_management.txt new file mode 100644 index 0000000000000000000000000000000000000000..77af44df129ae99b75b3566872f0d6c9722b619b --- /dev/null +++ b/Multi-cloud_database_management.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/multi-cloud-database-management +Date Scraped: 2025-02-23T11:49:35.992Z + +Content: +Home Docs Cloud Architecture Center Send feedback Multicloud database management: Architectures, use cases, and best practices Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-06 UTC This document describes deployment architectures, use cases, and best practices for multicloud database management. It's intended for architects and engineers who design and implement stateful applications within and across multiple clouds. Multicloud application architectures that access databases are use-case dependent. No single stateful application architecture can support all multicloud use cases. 
For example, the best database solution for a cloud bursting use case is different from the best database solution for an application that runs concurrently in multiple cloud environments. For public clouds like Google Cloud, there are various database technologies that fit specific multicloud use cases. To deploy an application in multiple regions within a single public cloud, one option is to use a public cloud, provider-managed, multi-regional database such as Spanner. To deploy an application to be portable across public clouds, a platform-independent database might be a better choice, such as PostgreSQL. This document introduces a definition for a stateful database application followed by a multicloud database use-case analysis. It then presents a detailed database system categorization for multicloud deployment architectures based on the use cases. The document also introduces a decision tree for selecting databases which outlines key decision points for selecting an appropriate database technology. It concludes with a discussion about best practices for multicloud database management. Key terms and definitions This section provides a terminology and defines the generic stateful database application that's used in this document. Terminology Public cloud. A public cloud provides multi-tenant infrastructure (generally global) and services that customers can use to run their production workloads. Google Cloud is a public cloud that provides many managed services, including GKE, GKE Enterprise, and managed databases. Hybrid cloud. A hybrid cloud is a combination of a public cloud with one or more on-premises data centers. Hybrid cloud customers can combine their on-premises services with additional services provided by a public cloud. Multicloud. A multicloud is a combination of several public clouds and on-premises data centers. A hybrid cloud is a subset of multicloud. Deployment location. An infrastructure location is a physical location that can deploy and run workloads, including applications and databases. For example, in Google Cloud, deployment locations are zones and regions. At an abstract level, a public cloud region or zone and an on-premises data center are deployment locations. Stateful database applications To define multicloud use cases, a generic stateful database application architecture is used in this document, as shown in the following diagram. The diagram shows the following components: Database. A database can be a single instance, multi-instance, or distributed database, deployed on computing nodes or available as a cloud-managed service. Application services. These services are combined as an application that implements the business logic. Application services can be any of the following: Microservices on Kubernetes. Coarse-grained processes running on one or more virtual machines. A monolithic application on one large virtual machine. Serverless code in Cloud Run functions or Cloud Run. Some application services can access the database. It's possible to deploy each application service several times. Each deployment of an application service is an instance of that application service. Application clients. Application clients access the functionality that is provided by application services. Application clients can be one of the following: Deployed clients, where code runs on a machine, laptop, or mobile phone. Non-deployed clients, where the client code runs in a browser. Application client instances always access one or more application service instances. 
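To keep the later use-case discussion concrete, the following sketch models this abstraction as plain data structures: deployment locations, a database, application services (only some of which access the database), and application clients. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class DeploymentLocation:
    name: str      # for example, a region, a zone, or an on-premises data center
    provider: str  # for example, "google-cloud", "other-cloud", or "on-premises"

@dataclass
class Database:
    engine: str
    locations: list[DeploymentLocation] = field(default_factory=list)

@dataclass
class ApplicationService:
    name: str
    database: Database | None = None  # not every application service accesses the database
    instances: list[DeploymentLocation] = field(default_factory=list)

@dataclass
class ApplicationClient:
    name: str
    services: list[ApplicationService] = field(default_factory=list)

# Illustrative instance of the abstraction.
gcp_region = DeploymentLocation("europe-west1", "google-cloud")
orders_db = Database("postgresql", [gcp_region])
orders_api = ApplicationService("orders-api", orders_db, [gcp_region])
web_client = ApplicationClient("web", [orders_api])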
In the context of a multicloud database discussion, the architectural abstraction of a stateful application consists of a database, application services, and application clients. In an implementation of an application, factors such as the use of operating systems or the programming languages that are used can vary. However, these details don't affect multicloud database management. Queues and files as data storage services There are many persistence resources available for application services to persist data. These resources include databases, queues, and files. Each persistence resource provides storage data models and access patterns that are specialized for these models. Although queues, messaging systems, and file systems are used by applications, in the following section, the focus is specifically on databases. Although the same considerations for factors such as deployment location, sharing of state, synchronous and asynchronous replication for multicloud databases are applicable to queues and files, this discussion is out of the scope of this document. Networking In the architecture of a generic stateful application (shown again in the following diagram), each arrow between the components represents a communication relationship over a network connection—for example, an application client accessing an application service. A connection can be within a zone or across zones, regions, or clouds. Connections can exist between any combination of deployment locations. In multicloud environments, networking across clouds is an important consideration and there are several options that you can use. For more information about networking across clouds, see Connecting to Google Cloud: your networking options explained. In the use cases in this document, the following is assumed: A secure network connection exists between the clouds. The databases and their components can communicate with each other. From a non-functional perspective, the size of the network, meaning the throughput and latency, might affect the database latency and throughput. From a functional perspective, networking generally has no effect. Multicloud database use cases This section presents a selection of the most common use cases for multicloud database management. For these use cases, it's assumed that there's secure network connectivity between the clouds and database nodes. Application migration In the context of multicloud database management, application migration refers to the migration of an application, all application services, and the database from the current cloud to a target cloud. There are many reasons that an enterprise might decide to migrate an application, for example, to avoid a competitive situation with the cloud provider, to modernize technology, or to lower total cost of ownership (TCO). In application migration, the intent is to stop production in the current cloud and continue production in the target cloud after the migration completes. The application services must run in the target cloud. To implement the services, a lift and shift approach can be used. In this approach, the same code is deployed in the target cloud. To reimplement the service, the modern cloud technologies that are available in the target cloud can be used. 
From a database perspective, consider the following alternative choices for application migration: Database lift and shift: If the same database engine is available in the target cloud, it's possible to lift and shift the database to create an identical deployment in the target cloud. Database lift and move to managed equivalent: A self-managed database can be migrated to a managed version of the same database engine if the target cloud provides it. Database modernization: Different clouds offer different database technologies. Databases managed by a cloud provider could have advantages such as stricter service-level agreements (SLAs), scalability, and automatic disaster recovery. Regardless of the deployment strategy, database migration is a process that takes time because of the need to move data from the current cloud to the target cloud. While it's possible to follow an export and import approach that incurs downtime, minimal or zero downtime migration is preferable. This approach minimizes application downtime and has the least effect on an enterprise and its customers. However, it typically requires a more intricate migration setup as it involves an initial load, ongoing replication, monitoring, granular validation, and synchronization when switching. To support fallback scenarios, you need to implement a reverse replication mechanism, to synchronize the changes back to the source database after switching to the target database. Disaster recovery Disaster recovery refers to the ability of an application to continue providing services to application clients during a region outage. To ensure disaster recovery, an application must be deployed to at least two regions and be ready to execute at any time. In production, the application runs in the primary region. However, if an outage occurs, a secondary region becomes the primary region. The following are different models of readiness in disaster recovery: Hot standby. The application is deployed to two or more regions (primary and secondary), and the application is fully functioning in every region. If the primary region fails, the application in the secondary region can take on application client traffic immediately. Cold standby. The application is running in the primary region, however, it's ready for startup in a secondary region (but not running). If the primary region fails, the application is started up in the secondary region. An application outage occurs until the application is able to run and provide all application services to application clients. No standby. In this model, the application code is ready for deployment but not yet deployed in the secondary region (and so not using any deployed resources). If a primary region has an outage, the first deployment of the application must be in the secondary region. This deployment puts the application in the same state as a cold standby, which means that it must be started up. In this approach, the application outage is longer compared to the cold standby case because application deployment has to take place first, which includes creating cloud resources. From a database perspective, the readiness models discussed in the preceding list correspond to the following database approaches: Transactionally synchronized database. This database corresponds to the hot standby model. Every transaction in the primary region is committed in synchronous coordination in a secondary region. 
When the secondary region becomes the primary region during an outage, the database is consistent and immediately available. In this model, the recovery point objective (RPO) and the recovery time objective (RTO) are both zero. Asynchronously replicated database. This database also corresponds to the hot standby model. Because the database replication from the primary region to the secondary region is asynchronous, there's a possibility that if the primary region fails, some transactions might not yet have been replicated to the secondary region. While the database in the secondary region is ready for production load, it might not have the most current data. For this reason, the application could incur a loss of transactions that aren't recoverable. Because of this risk, this approach has an RTO of zero, but the RPO is larger than zero. Idling database. This database corresponds to the cold standby model. The database is created without any data. When the primary region fails, data has to be loaded to the idling database. To enable this action, a regular backup has to be taken in the primary region and transferred to the secondary region. The backup can be full or incremental, depending on what the database engine supports. In either case, the database is set back to the last backup, and, from the perspective of the application, many transactions can be lost compared to the primary region. While this approach might be cost effective, the value is offset by the risk that all transactions since the last available backup might be lost due to the database state not being up to date. No database. This model is equivalent to the no standby case. The secondary region has no database installed, and if the primary region fails, a database must be created. After the database is created, as in the idling database case, it must be loaded with data before it's available for the application. The disaster recovery approaches that are discussed in this document also apply if a primary and a secondary cloud are used instead of a primary and secondary region. The main difference is that because of the network heterogeneity between clouds, the latency between clouds might increase compared to the network distance between regions within a cloud. Other discrepancies may come from different features and default settings of managed databases of different cloud providers. The likelihood of a whole cloud failing is less than that of a region failing. However, it might still be useful for enterprises to have an application deployed in two clouds. This approach could help to protect an enterprise against failure, or help it to meet business or industry regulations. Another disaster recovery approach is to have a primary and a secondary region and a primary and a secondary cloud. This approach allows enterprises to choose the best disaster recovery process to address a failure situation. To enable an application to run, either a secondary region or a secondary cloud can be used, depending on the severity of the outage. Cloud bursting Cloud bursting refers to a configuration that enables the scale up of application client traffic across different deployment locations. An application bursts when demand for capacity increases and a standby location provides additional capacity. A primary location supports the regular traffic whereas a standby location can provide additional capacity in case application client traffic is increasing beyond what the primary location can support.
Both the primary and standby location have application service instances deployed. Cloud bursting is implemented across clouds where one cloud is the primary cloud and a second cloud is the standby cloud. It's used in a hybrid cloud context to augment a limited number of compute resources in on-premises data centers with elastic cloud compute resources in a public cloud. For database support, the following options are available: Primary location deployment. In this deployment, the database is only deployed in the primary location and not in the standby location. When an application bursts, the application in the standby location accesses the database in the primary location. Primary and standby location deployment. This deployment supports both the primary and standby location. A database instance is deployed in both locations and is accessed by the application service instances of that location, especially in the case of bursting. As in Disaster recovery within and across clouds, the two databases can be transactionally synchronized, or asynchronously synchronized. In asynchronous synchronization, there can be a delay. If updates are taking place in the standby location then these updates have to be propagated back to the primary location. If concurrent updates are possible in both locations, conflict resolution must be implemented. Cloud bursting is a common use case in hybrid clouds to increase capacity in on-premises data centers. It's also an approach that can be used across public clouds when data has to stay within a country. In situations where a public cloud has only one region in a country, it's possible to burst into a region of a different public cloud in the same country. This approach ensures that the data stays within the country while still addressing resource constraints in the region of the public cloud regions. Best-in-class cloud service use Some applications require specialized cloud services and products that are not available in a single cloud. For example, an application might perform business logic processing of business data in one cloud, and analytics of the business data in another cloud. In this use case, the business logic processing parts and the analytics parts of the application are deployed to different clouds. From a data-management perspective, the use cases are as follows: Partitioned data. Each part of the application has its own database (separate partition) and none of the databases are connected directly to each other. The application that manages the data writes any data that needs to be available in both databases (partitions) twice. Asynchronously replicated database. If data from one cloud needs to be available in the other cloud, an asynchronous replication relationship might be appropriate. For example, if an analytics application requires the same dataset or a subset of the dataset for a business application, the latter can be replicated between the clouds. Transactionally synchronized database. These kinds of databases let you make data available to both parts of the application. Each update from each of the applications is transactionally consistent and available to both databases (partitions) immediately. Transactionally synchronized databases effectively act as a single distributed database. Distributed services A distributed service is deployed and executed in two or more deployment locations. It's possible to deploy all service instances into all the deployment locations. 
Alternatively, it's possible to deploy some services in all locations, and some only in one of the locations, based on factors such as hardware availability or expected limited load. Data in a transactionally synchronized database is consistent in all locations. Therefore, such a database is the best option to deploy service instances to all deployment locations. When you use an asynchronous replicated database, there's a risk of the same data item being modified in two deployment locations concurrently. To determine which of the two conflicting changes is the final consistent state, a conflict-resolution strategy must be implemented. Although it's possible to implement a conflict resolution strategy, it's not always straightforward, and there are instances where manual intervention is needed to restore data to a consistent state. Distributed service relocation and failover If a whole cloud region fails, disaster recovery is initiated. If a single service in a stateful database application fails (not the region or the whole application), the service has to be recovered and restarted. An initial approach for disaster recovery is to restart the failed service in its original deployment location (a restart-in-place approach). Technologies like Kubernetes automatically restart a service based on its configuration. However, if this restart-in-place approach is not successful, an alternative is to restart the service in a secondary location. The service fails over from its primary location to a secondary location. If the application is deployed as a set of distributed services, then the failover of a single service can take place dynamically. From a database perspective, restarting the service in its original deployment location doesn't require a specific database deployment. When a service is moved to an alternative deployment location and the service accesses the database, then the same readiness models apply that were discussed in Distributed services earlier in this document. If a service is being relocated on a temporary basis, and if higher latencies are acceptable for an enterprise during the relocation, the service could access the database across deployment locations. Although the service is relocated, it continues to access the database in the same way as it would from its original deployment location. Context-dependent deployment In general, a single application deployment to serve all application clients includes all its application services and databases. However, there are exceptional use cases. A single application deployment can serve only a subset of clients (based on specific criteria), which means that more than one application deployment is needed. Each deployment serves a different subset of clients, and all deployments together serve all clients. Example use cases for a context-dependent deployment are as follows: When deploying a multi-tenant application for which one application is deployed for all small tenants, another application is deployed for every 10 medium tenants, and one application is deployed for each premium tenant. When there is a need to separate customers, for example, business customers and government customers. When there is a need to separate development, staging, and production environments. From a database perspective, it's possible to deploy one database for each application deployment in a one-to-one deployment strategy. 
As shown in the following diagram, this strategy is a straightforward approach where partitioned data is created because each deployment has its own dataset. The preceding diagram shows the following: A setup with three deployments of an application. Each dataset has its own respective database. No data is shared between the deployments. In many situations, a one-to-one deployment is the most appropriate strategy, however, there are alternatives. In the case of multi-tenancy, tenants might be relocated. A small tenant might turn into a medium tenant and have to be relocated to a different application. In this case, separate database deployments require database migration. If a distributed database is deployed and is used by all deployments at the same time, all tenant data resides in a single database system. For this reason, moving a tenant between databases doesn't require database migration. The following diagram shows an example of this kind of database: The preceding diagram shows the following: Three deployments of an application. The deployments all share a single distributed database. Applications can access all of the data in each deployment. There is no data partitioning implemented. If an enterprise often relocates tenants as part of lifecycle operations, database replication might be a useful alternative. In this approach, tenant data is replicated between databases before a tenant migration. In this case, independent databases are used for different application deployments and only set up for replication immediately before and during tenant migration. The following diagram shows a temporary replication between two application deployments during a tenant migration. The preceding diagram shows three deployments of an application with three separate databases holding the data that's associated with each respective deployment. To migrate data from one database to another, temporary database migration can be set up. Application portability Application portability ensures that an application can be deployed into different deployment locations, especially different clouds. This portability ensures that an application can be migrated at any time, without the need for migration-specific reengineering or additional application development to prepare for an application migration. To ensure application portability, you can use one of the following approaches, which are described later in this section: System-based portability API compatibility Functionality-based portability System-based portability uses the same technical components that are used in all possible deployments. To ensure system-based portability, each technology must be available in all potential deployment locations. For example, if a database like PostgreSQL is a candidate, its availability in all potential deployment locations has to be verified for the expected timeframe. The same is true for all other technologies—for example, programming languages and infrastructure technologies. As shown in the following diagram, this approach establishes a set of common functionalities across all the deployment locations based on technology. The preceding diagram shows a portable application deployment where the application expects the exact same database system in every location that it's deployed to. Because the same database system is used in each location, the application is portable. The application can expect that the exact same database system will be used across the deployment. 
Therefore, the exact same database system interface and behavior can be assumed. API compatibility is a second approach: instead of requiring the same database system in every location, it requires only that the same database access API is available in each location. In the context of databases, with the API-compatibility approach, the client uses a specific database access library (for example, a MySQL client library) to ensure that it can connect to any compliant implementation that might be available in a cloud environment. The following diagram illustrates API compatibility. The preceding diagram shows application portability based on the database system API instead of the database system. Although the database systems can be different in each of the locations, the API is the same and exposes the same functionality. The application is portable because it can assume the same API in each location, even if the underlying database system is a different database technology. In functionality-based portability, different technologies in different clouds might be used that provide the same functionality. For example, it might be possible to restrict the use of databases to the relational model. Because any relational database system can support the application, different database systems, and different versions of them, can be used in different clouds without affecting the portability of the application. A drawback to functionality-based portability is that it can only use the parts of the database model that all relational database systems support. Instead of a database system that is compatible with all clouds, a database model must be used. The following diagram shows an example architecture for functionality-based portability. As the preceding diagram shows, the database system API and the database system might be different in each location. To ensure portability, the application must only use the parts of each database system and each API that are available in each location. Because only a subset of each database system is commonly available in each location, the application has to restrict its use to that subset. To ensure portability for all the options in this section, the complete architecture must be continuously deployed to all target locations. All unit and system test cases must be executed against these deployments. These are essential requirements for changes in infrastructures and technologies to be detected early and addressed. Vendor dependency prevention Vendor dependency (lock-in) prevention helps to mitigate the risk of dependency on a specific technology and vendor. It's superficially similar to application portability. Vendor dependency prevention applies to all technologies that are used, not only cloud services. For example, if MySQL is used as a database system and installed on virtual machines in clouds, then there is no dependency from a cloud perspective, but there is a dependency on MySQL. An application that's portable across clouds might still depend on technologies that are provided by vendors other than the cloud provider. To prevent vendor dependency, all technologies need to be replaceable. For this reason, a thorough and structured abstraction of all application functionality is needed so that each application service can be reimplemented on a different technology base without affecting how the application is implemented. From a database perspective, this abstraction can be done by separating the use of a database model from a particular database-management system.
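To make that separation concrete, the following minimal Python sketch (all names are hypothetical) shows application logic written against a small repository interface instead of a specific database-management system. Replacing the database then means providing another implementation of the interface, without changing the application services:

# Illustrative only: the application depends on this interface, not on a
# specific database-management system.
from abc import ABC, abstractmethod

class OrderRepository(ABC):
    """The only database contract that application services depend on."""

    @abstractmethod
    def get(self, order_id: str): ...

    @abstractmethod
    def save(self, order_id: str, order: dict) -> None: ...

class InMemoryOrderRepository(OrderRepository):
    """Trivial implementation used here for illustration; a PostgreSQL-, Spanner-,
    or MongoDB-backed class would implement the same interface for production."""

    def __init__(self):
        self._orders = {}

    def get(self, order_id):
        return self._orders.get(order_id)

    def save(self, order_id, order):
        self._orders[order_id] = dict(order)

def place_order(repo: OrderRepository, order_id: str, order: dict) -> None:
    # Application logic is written against the interface, so replacing the
    # database technology doesn't require changing this function.
    repo.save(order_id, order)

place_order(InMemoryOrderRepository(), "order-1", {"item": "widget", "quantity": 2})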
Existing production database-management system While many multicloud applications are developed with database systems as part of their design, many enterprises develop multicloud applications as a part of their application modernization effort. These applications are developed with the assumption that the newly designed and implemented application accesses the existing databases. There are many reasons for not incorporating existing databases into a modernization effort. There might be specific features in use that aren't available from other database systems. An enterprise might have databases with complex and well-established management processes in place, making a move to a different system impractical or uneconomical. Or, an enterprise might choose to modernize an application in the first phase, and modernize the database in the second phase. When an enterprise uses an existing database system, the designer of the multicloud application has to decide whether it will be the only database used, or whether a different database system needs to be added for different deployment locations. For example, if a database is used on-premises and the application also needs to run in Google Cloud, the designer needs to consider whether the application services deployed on Google Cloud access the on-premises database, or whether a second database should be deployed in Google Cloud for the application services that run there. If a second database is deployed in Google Cloud, the use case might be the same as the use cases discussed in Cloud bursting or Distributed services. In either case, the same database discussion applies as in these sections. However, it's limited to the cross-location functionality that the existing database on-premises can support—for example, synchronization and replication. Deployment patterns The use cases discussed in this document raise a common question from a database perspective: if databases are in more than one deployment location, what's their relationship? The main kinds of relationships (deployment patterns), which are discussed in the next section, are as follows: Partitioned without cross-database dependency Asynchronous unidirectional replication Bidirectional replication with conflict resolution Fully active-active synchronized distributed system It's possible to map each use case in this document to one or more of the four deployment patterns. In the following discussion, it's assumed that clients access application services directly. Depending on the use case, a load balancer might be needed to dynamically direct client access to applications, as shown in the following diagram. In the preceding diagram, a cloud load balancer directs client calls to one of the available locations. The load balancer ensures that load balancing policies are enforced and that clients are directed to the correct location of the application and its database. Partitioned without cross-database dependency This deployment pattern is the simplest of all the patterns discussed in this document: each location or cloud has a database, and the databases contain partitioned datasets that are not dependent on each other. A data item is stored in only one database. Each data partition is located in its own database. An example for this pattern is a multi-tenant application where each dataset resides in one database or the other. The following diagram shows two fully partitioned applications.
As the preceding diagram shows, an application is deployed in two locations, each responsible for a partition of the entire dataset. Each data item resides in only one of the locations, ensuring a partitioned dataset without any replication between the two. An alternative deployment pattern for partitioned databases is where the dataset is completely partitioned but still stored within the same database. There is only one database containing all datasets. Although the datasets are stored within the same database, the datasets are completely separate (partitioned) and a change to one doesn't cause changes in another. The following diagram shows two applications that share a single database. The preceding diagram shows the following: Two application deployments that both share a common database, which is in the first location. Each application can access all of the data in the shared database because the partitions aren't separated into different databases. Asynchronous unidirectional replication This deployment pattern has a primary database that replicates to one or more secondary databases. The secondary database can be used for read access, but the application has to factor in potential replication lag. An example for this pattern is when the best database for a particular use case is used as the primary database and the secondary database is used for analytics. The following diagram shows two applications accessing unidirectionally replicated databases. As the preceding diagram shows, one of the two databases is a replica of the other. The arrow in the diagram indicates the replication direction: the data from the database system in location 1 is replicated to the database system in location 2. Bidirectional replication with conflict resolution This deployment pattern has two primary databases that are asynchronously replicated to each other. If the same data is written at the same time to each database (for example, the same primary key), it can cause a write-write conflict. Because of this risk, there must be a conflict-resolution strategy in place to determine which state is the final state during replication. This pattern can be used in situations where write-write conflicts are rare. The following diagram shows two applications accessing bidirectionally replicated databases. As the preceding diagram shows, each database is replicated to the other database. The two replications are independent of each other, as indicated in the diagram by two separate blue arrows. Because the two replications are independent, a conflict can arise if, by chance, the same data item is changed by each of the applications and concurrently replicated. In this case, conflict resolution has to take place. Fully active-active synchronized distributed system This deployment pattern has a single database that has an active-active (also primary-primary) setup. In an active-active setup, an update of data in any primary database is transactionally consistent and synchronously replicated. An example use case for this pattern is distributed computing. The following diagram shows two applications accessing a fully synchronized primary-primary database. As the preceding diagram shows, this arrangement ensures that each application always accesses the last consistent state, without a delay or risk of conflict. A change in one database is immediately available in the other database. Any change is reflected in both databases when the changing transaction commits.
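For the bidirectional pattern, the replication layer needs a deterministic conflict-resolution policy so that both locations converge on the same final state. The following minimal Python sketch (purely illustrative and not tied to any particular database system) shows one common policy, last-writer-wins with a stable tie-breaker:

# Illustrative only: both conflicting versions carry an update timestamp and an
# origin identifier; the later write wins, with the origin as a tie-breaker so
# both locations resolve the conflict identically.
import datetime

def resolve_conflict(version_a: dict, version_b: dict) -> dict:
    if version_a["updated_at"] != version_b["updated_at"]:
        return max(version_a, version_b, key=lambda v: v["updated_at"])
    return max(version_a, version_b, key=lambda v: v["origin"])

row_in_location_1 = {"id": 42, "value": "A", "origin": "location-1",
                     "updated_at": datetime.datetime(2024, 5, 1, 10, 0, 0)}
row_in_location_2 = {"id": 42, "value": "B", "origin": "location-2",
                     "updated_at": datetime.datetime(2024, 5, 1, 10, 0, 1)}
assert resolve_conflict(row_in_location_1, row_in_location_2)["value"] == "B"

Other policies, such as merging individual fields or routing conflicts to manual review, have the same shape: a deterministic function from the two conflicting versions to the agreed final state.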
Database system categorization Not all database-management systems can be used equally well for the deployment patterns that are discussed in this document. Depending on the use case, it might only be possible to implement one deployment pattern or a combination of deployment patterns with only a subset of database systems. In the following section, the different database systems are categorized and mapped to the four deployment patterns. It's possible to categorize databases by different dimensions such as data model, internal architecture, deployment model, and transaction types. In the following section, for the purpose of multicloud database management, two dimensions are used: Deployment architecture. The architecture of how a database management system is deployed onto resources of clouds (for example, compute engines or cloud-managed). Distribution model. The model of distribution a database system supports (for example, single instance or fully distributed). These two dimensions are the most relevant in the context of multicloud use cases and can support the four deployment patterns derived from the multicloud database use cases. A popular categorization is based on the data models that are supported by a database-management system. Some systems support only one model (for example, a graph model). Other systems support several data models at the same time (for example, relational and document models). However, in the context of multicloud database management, this categorization isn't relevant because multicloud applications can use any data model for their multicloud deployment. Database system by deployment architecture Multicloud database management includes the following four main classes of deployment architecture for database-management systems: Built-in cloud databases. Built-in cloud databases are designed, built, and optimized to work with cloud technology. For example, some database systems use Kubernetes as their implementation platform and use Kubernetes functionality. CockroachDB and YugaByte are examples of this kind of database. They can be deployed into any cloud that supports Kubernetes. Cloud provider-managed databases. Cloud provider-managed databases are built on cloud provider-specific technology and are a database service managed by a specific cloud provider. Spanner, Bigtable and AlloyDB for PostgreSQL are examples of this kind of database. Cloud provider-managed databases are only available in the cloud of the cloud provider and can't be installed and run elsewhere. However, AlloyDB for PostgreSQL is fully PostgreSQL compatible. Pre-cloud databases. Pre-cloud databases existed before the development of cloud technology (sometimes for a long time) and usually run on bare metal hardware and virtual machines (VMs). PostgreSQL, MySQL and SQL Server are examples of this kind of database. These systems can run on any cloud that supports the required VMs or bare metal hardware. Cloud partner-managed databases. Some public clouds have database partners that install and manage customers' databases in the public cloud. For this reason, customers don't have to manage these databases themselves. MongoDB Atlas and MariaDB are examples of this kind of database. There are some variants of these main categories. For example, a database vendor implementing a database that's built for the cloud might also provide an installation on technology built for the cloud and a managed offering to customers in their vendor-provided cloud. 
This approach is equivalent to the vendor providing a public cloud that supports only their database as the single service. Pre-cloud databases might also be in containers and they might be deployable into a Kubernetes cluster. However, these databases don't use Kubernetes-specific functionality like scaling, multi-service, or multi-pod deployment. Database vendors might partner with more than one public cloud provider at the same time, offering their database as a cloud partner-managed database in more than one public cloud. Database system by distribution model Different database-management systems implement different distribution models in their architecture. The models for databases are as follows: Single instance. A single database instance runs on one VM or one container and acts as a centralized system. This system manages all database access. Because the single instance can't be connected to any other instance, the database system doesn't support replication. Multi-instance active-passive. In this common architecture, several database instances are linked together. The most common linking is an active-passive relationship where one instance is the active database instance that supports both writes and reads. One or more passive systems are read-only, and receive all database changes from the primary either synchronously or asynchronously. Passive systems can provide read access. Active-passive is also referred to as primary-secondary or source-replica. Multi-instance active-active. In this relatively uncommon architecture, each instance is an active instance. In this case, each instance can execute read and write transactions and provide data consistency. For this reason, to prevent data inconsistencies, all instances are always synchronized. Multi-instance active-active with conflict resolution. This is also a relatively uncommon system. Each instance is available for write and read access; however, the databases are synchronized in an asynchronous mode. Concurrent updates of the same data item are permitted, which can lead to an inconsistent state. A conflict resolution policy has to decide which of the states is the last consistent state. Multi-instance sharding. Sharding is based on the management of disjoint partitions of data. A separate database instance manages each partition. This distribution is scalable because more shards can be added dynamically over time. However, cross-shard queries might not be possible because this functionality is not supported by all systems. All the distribution models that are discussed in this section can support sharding and can be a sharded system. However, not all systems are designed to provide a sharding option. Sharding is a scalability concept and isn't generally relevant for architectural database selection in multicloud environments. Distribution models are different for cloud provider-managed and cloud partner-managed databases. Because these databases are tied to the architecture of a cloud provider, these systems implement architectures based on the following deployment locations: Zonal system. A zonal-managed database system is tied to a zone. When the zone is available, the database system is available too. However, if the zone becomes unavailable, it's not possible to access the database. Regional system. A regional-managed database is tied to a region and is accessible when at least one zone is accessible. Any combination of zones can become inaccessible without making the database unavailable, as long as at least one zone remains accessible. Cross-regional system.
A cross-regional system is tied to two or more regions and functions properly when at least one region is available. A cross-regional system can also support a cross-cloud system if the database can be installed in all the clouds that an enterprise intends to use. There are other possible alternatives to the managed-database architectures discussed in this section. A regional system might share a disk between two zones. If either of the two zones becomes inaccessible, the database system can continue in the remaining zone. However, if an outage affects both zones, the database system is unavailable even though other zones might still be fully online. Mapping database systems to deployment patterns The following table (Table 1) describes how the deployment patterns and deployment architectures that are discussed in this document relate to each other. Each field states the conditions that are needed for a combination to be possible, based on deployment pattern and deployment architecture.

Table 1: Deployment architectures mapped to deployment patterns

Built-in cloud databases
- Partitioned without cross-database dependency: Possible for all clouds that provide the built-in cloud technology used by the database system. If the same database isn't available, different database systems can be used.
- Asynchronous unidirectional replication: Cloud database that provides replication.
- Bidirectional replication with conflict resolution: Cloud database that provides bidirectional replication.
- Fully active-active synchronized distributed system: Cloud database that provides primary-primary synchronization.

Cloud provider-managed databases
- Partitioned without cross-database dependency: Database systems can be different in different clouds.
- Asynchronous unidirectional replication: The replica doesn't have to be the cloud provider-managed database (see The role of database migration technology in deployment patterns).
- Bidirectional replication with conflict resolution: Only within a cloud, not across clouds, if the database provides bidirectional replication.
- Fully active-active synchronized distributed system: Only within a cloud, not across clouds, if the database provides primary-primary synchronization.

Pre-cloud databases
- Partitioned without cross-database dependency: Database systems can be the same or different in different clouds.
- Asynchronous unidirectional replication: Replication is possible across several clouds.
- Bidirectional replication with conflict resolution: The database system provides bidirectional replication and conflict resolution.
- Fully active-active synchronized distributed system: The database system provides primary-primary synchronization.

Cloud partner-managed databases
- Partitioned without cross-database dependency: Database systems can be different in different clouds. If the partner provides a managed database system in all required clouds, the same database can be used.
- Asynchronous unidirectional replication: The replica doesn't have to be the cloud partner-managed database. If the partner provides a managed database system in all required clouds, the same database can be used.
- Bidirectional replication with conflict resolution: The database system provides bidirectional replication and conflict resolution.
- Fully active-active synchronized distributed system: The database system provides primary-primary synchronization.

If a database system doesn't provide built-in replication, it might be possible to use database replication technology instead. For more information, see The role of database migration technology in deployment patterns. The following table (Table 2) relates the deployment patterns to distribution models. Each field states the conditions under which the combination is possible given a deployment pattern and a distribution model.

Table 2: Distribution models mapped to deployment patterns

Single instance
- Partitioned without cross-database dependency: Possible with the same or different database system in the involved clouds.
- Asynchronous unidirectional replication: Not applicable.
- Bidirectional replication with conflict resolution: Not applicable.
- Fully active-active synchronized distributed system: Not applicable.

Multi-instance active-passive
- Partitioned without cross-database dependency: Possible with the same or different database system in the involved clouds.
- Asynchronous unidirectional replication: Replication is possible across clouds.
- Bidirectional replication with conflict resolution: Replication is possible across clouds.
- Fully active-active synchronized distributed system: Not applicable.

Multi-instance active-active
- Partitioned without cross-database dependency: Possible with the same or different database system in the involved clouds.
- Asynchronous unidirectional replication: Not applicable.
- Bidirectional replication with conflict resolution: Not applicable.
- Fully active-active synchronized distributed system: Synchronization is possible across clouds.

Multi-instance active-active with conflict resolution
- Partitioned without cross-database dependency: Possible with the same or different database system in the involved clouds.
- Asynchronous unidirectional replication: Not applicable.
- Bidirectional replication with conflict resolution: Applicable if bidirectional replication is possible across clouds.
- Fully active-active synchronized distributed system: Not applicable.

Some systems implement a distribution model as an additional abstraction on top of an underlying database technology that doesn't have that distribution model built in—for example, Postgres-BDR, an active-active system. Such systems are included in the preceding table in the respective category. From a multicloud perspective, it's irrelevant how a distribution model is implemented. Database migration and replication Depending on the use case, an enterprise might need to migrate a database from one deployment location to another. Alternatively, for downstream processing, it might need to replicate the data of a database to another location. In the following section, database migration and database replication are discussed in more detail. Database migration Database migration is used when a database is being moved from one deployment location to another. For example, a database running in an on-premises data center is migrated to run in the cloud instead. After migration is complete, the database is shut down in the on-premises data center. The main approaches to database migration are as follows: Lift and shift. The VM and the disks running the database instances are copied to the target environment as they are. After they are copied, they are started up and the migration is complete. Export and import, and backup and restore. These approaches both use database system functionality to externalize a database and recreate it at the target location. Export/import is usually based on an ASCII format, whereas backup and restore is based on a binary format. Zero downtime migration. In this approach, a database is migrated while the application systems access the source system. After an initial load, changes are transmitted from the source to the target database using change data capture (CDC) technologies. The application incurs downtime only from the time it's stopped on the source database system until it's restarted on the target database system after migration is complete. Database migration becomes relevant in multicloud use cases when a database is moved from one cloud to another, or from one kind of database engine to another. Database migration is a multi-faceted process. For more information, see Database migration: Concepts and principles (Part 1) and Database migration: Concepts and principles (Part 2). Built-in database technologies can be used to do database migration—for example, export and import, backup and restore, or built-in replication protocols. When the source and target system are different database systems, migration technologies are the best option for database migration. Database Migration Service, Striim, and Debezium are examples of database migration technologies.
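As a minimal illustration of the export-and-import approach (hostnames, users, and database names are placeholders; PostgreSQL's standard pg_dump and pg_restore tools are assumed), the following Python sketch externalizes the source database and recreates it at the target location. A real migration additionally needs credential handling, downtime planning, and validation:

# Illustrative only: export the source database and import it into the target.
import subprocess

SOURCE = {"host": "db.source.example.internal", "user": "migrator", "db": "app"}
TARGET = {"host": "db.target.example.internal", "user": "migrator", "db": "app"}
DUMP_FILE = "/tmp/app.dump"

# Export the source database in PostgreSQL's custom archive format.
subprocess.run(
    ["pg_dump", "-h", SOURCE["host"], "-U", SOURCE["user"],
     "-F", "c", "-f", DUMP_FILE, SOURCE["db"]],
    check=True,
)

# Recreate the database content at the target location.
subprocess.run(
    ["pg_restore", "-h", TARGET["host"], "-U", TARGET["user"],
     "-d", TARGET["db"], DUMP_FILE],
    check=True,
)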
Database replication Database replication is similar to database migration. However, during database replication, the source database system stays in place while every change is transmitted to the target database. Database replication is a continuous process that sends changes from the source database to the target database. When this process is asynchronous, the changes arrive at the target database after a short delay. If the process is synchronous, the changes to the source system are made to the target system at the same time and within the same transactions. Aside from replicating a source to a target database, a common practice is to replicate data from a source database to a target analytics system. As with database migration, if replication protocols are available, built-in database technology can be used for database replication. If there are no built-in replication protocols, it's possible to use replication technology such as Striim or Debezium. The role of database migration technology in deployment patterns Built-in database technology to enable replication isn't generally available when different database systems are used in deployment patterns—for example, asynchronous (heterogeneous) replication. Instead, database migration technology can be deployed to enable this kind of replication. Some of these migration systems also implement bidirectional replication. If database migration or replication technology is available for the database systems used in combinations marked as "Not applicable" in Table 1 or Table 2 in Mapping database systems to deployment patterns, then it might be possible to use it for database replication. The following diagram shows an approach for database replication using a migration technology. In the preceding diagram, the database in location 1 is replicated to the database in location 2. Instead of a direct database system replication, a migration server is deployed to implement the replication. This approach is used for database systems that don't have database replication functionality built into their implementation and that need to rely on a system separate from the database system to implement replication. Multicloud database selection The multicloud database use cases combined with the database system categorization help you decide which databases are best for a given use case. For example, to implement the use case in Application portability, there are two options. The first option is to ensure that the same database engine is available in all clouds. This approach ensures system portability. The second option is to ensure that the same data model and query interface are available in all clouds. Although the database systems might be different, portability is provided through a functional interface. The decision trees in the following sections show the decision-making criteria for the multicloud database use cases in this document. The decision trees suggest the best database to consider for each use case. Best practices for existing database system If a database system is in production, a decision must be made about whether to keep or replace it. The following diagram shows the questions to ask in your decision process: The questions and answers in the decision tree are as follows: Is a database system in production? If no database system is in production, select a database system (skip to the Decision on multicloud database management). If a database system is in production, evaluate whether it should be retained.
If the database system should be retained, then the decision is made and the decision process is complete. If the database system should be changed or if the decision is still being made, select a database system (skip to the Decision on multicloud database management). Decision on multicloud database management The following decision tree is for a use case with a multi-location database requirement (including a multicloud database deployment). It uses the deployment pattern as the basis for the decision-making criteria. The questions and answers in the decision tree are as follows: Is the data partitioned in different databases without any cross-database dependency? If yes, the same or different database systems can be selected for each location. If no, continue to the next question. Is asynchronous unidirectional replication required? If yes, then evaluate whether a database replication system is acceptable. If yes, select the same or different database systems that are compatible with the replication system. If no, select a database system that can implement an active-passive distribution model. If no, continue to the next question. Select a database system with synchronized instances. Is conflict resolution acceptable? If yes, select a bidirectionally replicating database system or an active-active database system. If no, select an active-active system. If more than one multicloud use case is implemented, an enterprise must decide if it wants to use one database system to support all use cases, or multiple database systems. If an enterprise wants to use one database system to support all use cases, the system with the best synchronization is the best choice. For example, if unidirectional replication is required as well as synchronized instances, the best choice is the synchronized instances. The hierarchy of synchronization quality is (from none to best): partitioned, unidirectional replication, bidirectional replication, and fully synchronized replication. Deployment best practices This section highlights best practices to follow when choosing a database for multicloud application migration or development. Existing database-management system It can be a good practice to retain a database and not make changes to the database system unless required by a specific use case. For an enterprise that has an established database-management system in place and effective development, operational, and maintenance processes, it might be best to avoid making changes. A cloud bursting use case that doesn't require a database system in the cloud might not need a change of database. Another use case is asynchronous replication to a different deployment location within the same cloud or to another cloud. For these use cases, a good approach is to implement a benchmark and verify that the cross-location communication and latency satisfy application requirements when accessing the database. Database system as a Kubernetes service If an enterprise is considering running a database system within Kubernetes as a service based on StatefulSets, then the following factors must be considered: Whether the database provides the database model required by the application. Non-functional factors that determine how the database system is operated as a Kubernetes service—for example, how scaling is done (scaling up and down), how backup and restore are managed, and how monitoring is made available by the system.
To understand the requirements of a Kubernetes-based database system, enterprises should use their existing experience with databases as a point of comparison. High availability and disaster recovery. To provide high availability, the system needs to continue operating when a zone within a region fails. The database must be able to fail over quickly from one zone to another. In the best-case scenario, the database has an instance running in each zone that's fully synchronized, for an RTO and RPO of zero. Disaster recovery to address the failure of a region (or cloud). In an ideal scenario, the database continues to operate in a second region with an RPO and RTO of zero. In a less ideal scenario, the database in the secondary region might not be fully caught up on all transactions from the primary database. To determine how best to run a database within Kubernetes, a full database evaluation is a good approach, especially when the system needs to be comparable to a system in production outside of Kubernetes. Kubernetes-independent database system It's not always necessary for an application that's implemented as services in Kubernetes to have the database running in Kubernetes at the same time. There are many reasons that a database system needs to be run outside of Kubernetes—for example, established processes, existing product knowledge within an enterprise, or the unavailability of the database as a Kubernetes service. Both cloud provider-managed and cloud partner-managed databases fit into this category. It's equally possible and feasible to run a database on Compute Engine. When selecting a database system, it's a good practice to do a full database evaluation to ensure that all of the requirements for an application are met. From an application design perspective, connection pooling is an important design consideration. An application service accessing a database might use a connection pool internally. Using a connection pool is good for efficiency and reduced latency: connections are taken from the pool instead of being newly initiated, so there's no wait for connections to be created (a minimal connection pooling sketch appears at the end of this section). If the application is scaled up by adding application service instances, each instance creates its own connection pool. If best practices are followed, each pool pre-creates a minimum set of connections. Each time another application service instance is created for application scaling, connections are added to the database. From a design perspective, because databases can't support an unlimited number of connections, the addition of connections has to be managed to avoid overload. Best database system versus database system portability When selecting database systems, it's common for enterprises to select the best database system to address the requirements of an application. In a multicloud environment, the best database in each cloud can be selected, and they can be connected to each other based on the use case. If these systems are different, any replication—unidirectional or bidirectional—requires significant effort. This approach might be justified if the benefit of using the best system outweighs the effort required to implement it. However, a good practice is to concurrently consider and evaluate an approach that uses a database system that's available in all required clouds. While such a database might not be as ideal as the best option, it might be a lot easier to implement, operate, and maintain. A concurrent database system evaluation demonstrates the advantages and disadvantages of both database systems, providing a solid basis for selection.
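Referring back to the connection pooling consideration above, the following minimal Python sketch (psycopg2 is assumed as the client library; connection details are placeholders) shows a per-instance pool. Because every application service instance maintains its own pool, the total number of database connections grows with the number of instances and must stay within the database's connection limit:

# Illustrative only: one connection pool per application service instance.
from psycopg2 import pool

connection_pool = pool.SimpleConnectionPool(
    minconn=2,    # pre-created connections, available without connect latency
    maxconn=10,   # upper bound for this service instance
    host="db.example.internal", dbname="app", user="app", password="secret",
)

def fetch_user(user_id):
    connection = connection_pool.getconn()   # reuse an existing connection
    try:
        with connection.cursor() as cursor:
            cursor.execute("SELECT name FROM users WHERE id = %s", (user_id,))
            return cursor.fetchone()
    finally:
        connection_pool.putconn(connection)   # return it to the pool, don't close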
Built-in versus external database system replication For use cases that require a database system in all deployment locations (zones, regions, or clouds), replication is an important feature. Replication can be asynchronous, bidirectional, or fully synchronized active-active replication. Database systems don't all support all of these forms of replication. For systems that don't support replication as part of their implementation, replication systems like Striim can be used to complement the architecture (as discussed in Database migration). A best practice is to evaluate alternative database-management systems to determine the advantages and disadvantages of a system that has built-in replication compared with a system that requires separate replication technology. A third class of technology plays a role in this case as well. This class provides add-ons to existing database systems to provide replication. One example is MariaDB Galera Cluster. If the evaluation process permits, evaluating these systems is a good practice. What's next Learn about hybrid and multicloud patterns and practices. Read about patterns for connecting other cloud service providers with Google Cloud. Learn about monitoring and logging architectures for hybrid and multicloud deployments on Google Cloud. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Author: Alex Cârciu | Solutions Architect \ No newline at end of file diff --git a/Multi-regional.txt b/Multi-regional.txt new file mode 100644 index 0000000000000000000000000000000000000000..1066362dd697232a3ffde13c5b78595ac8f8e585 --- /dev/null +++ b/Multi-regional.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes/multiregional +Date Scraped: 2025-02-23T11:44:42.071Z + +Content: +Home Docs Cloud Architecture Center Google Cloud multi-regional deployment archetype Last reviewed 2024-11-20 UTC This section of the Google Cloud deployment archetypes guide describes the multi-regional deployment archetype. In a cloud architecture that uses the multi-regional deployment archetype, the application runs in two or more Google Cloud regions. Application data is replicated across all the regions in the architecture. To ensure fast and synchronous replication of data, the regions are typically within a continent. The following diagram shows the cloud topology for an application that runs in two Google Cloud regions: The preceding diagram shows two isolated multi-tier application stacks that run independently in two Google Cloud regions. In each region, the application runs in three zones. The databases in the two regions are replicated. If the workload has a low recovery point objective (RPO) or if it requires strong cross-region consistency of data, then the database replication needs to be synchronous. Otherwise, the databases can be replicated asynchronously. User requests are routed to regional load balancers by using a DNS routing policy. If an outage occurs in either of the two regions, DNS routes user requests to the load balancer in the other region. Use cases The following sections provide examples of use cases for which the multi-regional deployment archetype is an appropriate choice.
High availability for geographically dispersed users We recommend a multi-regional deployment for applications that are business-critical and where high availability and robustness against region outages are essential. If a region becomes unavailable for any reason (even a large-scale disruption caused by a natural disaster), users of the application don't experience any downtime. Traffic is routed to the application in the other available regions. If data is replicated synchronously, the recovery time objective (RTO) is near zero. Low latency for application users If your users are within a specific geographical area, such as a continent, you can use a multi-regional deployment to achieve an optimal balance between availability and performance. When one of the regions has an outage, the global load balancer sends requests that originate in that region to another region. Users don't perceive significant performance impact because the regions are within a geographical area. Compliance with data residency and sovereignty requirements The multi-regional deployment archetype can help you meet regulatory requirements for data residency and operational sovereignty. For example, a country in Europe might require that all user data be stored and accessed in data centers that are located physically within the country. You can deploy the application to Google Cloud regions in Europe and use DNS with a geofenced routing policy to route traffic to the appropriate region. Design considerations When you provision and manage redundant resources across locations, the volume of cross-location network traffic can be high. You also store and replicate data across multiple regions. When you build an architecture that uses the multi-regional deployment archetype, consider the potentially higher cost of cloud resources and network traffic, and the complexity of operating the deployment. For business-critical applications, the availability advantage of a multi-region architecture might outweigh the increased cost and operational complexity. Reference architecture For a reference architecture that you can use to design a multi-regional deployment on Compute Engine VMs, see Multi-regional deployment on Compute Engine. \ No newline at end of file diff --git a/Multi-regional_deployment_on_Compute_Engine.txt b/Multi-regional_deployment_on_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..7441fd03eeb62a64fe6cab2202e2839b6b63bc15 --- /dev/null +++ b/Multi-regional_deployment_on_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/multiregional-vms +Date Scraped: 2025-02-23T11:45:01.340Z + +Content: +Home Docs Cloud Architecture Center Multi-regional deployment on Compute Engine Last reviewed 2024-02-20 UTC This document provides a reference architecture for a multi-tier application that runs on Compute Engine VMs in multiple regions in Google Cloud. The document also provides guidance to help you build an architecture that uses other Google Cloud infrastructure services. It describes the design factors that you should consider when you build a multi-regional architecture for your cloud applications. The intended audience for this document is cloud architects.
Architecture Figure 1 shows an architecture for an application that runs in active-active mode in isolated stacks that are deployed across two Google Cloud regions. In each region, the application runs independently in three zones. The architecture is aligned with the multi-regional deployment archetype, which ensures that your Google Cloud topology is robust against zone and region outages and that it provides low latency for application users. Figure 1. A global load balancer routes user requests to regionally isolated application stacks. The architecture is based on the infrastructure as a service (IaaS) cloud model. You provision the required infrastructure resources (compute, networking, and storage) in Google Cloud, and you retain full control over and responsibility for the operating system, middleware, and higher layers of the application stack. To learn more about IaaS and other cloud models, see PaaS vs. IaaS vs. SaaS vs. CaaS: How are they different? The preceding diagram includes the following components: Component Purpose Global external load balancer The global external load balancer receives and distributes user requests to the application. The global external load balancer advertises a single anycast IP address, but it's implemented as a large number of proxies on Google Front Ends (GFEs). Client requests are directed to the GFE that's closest to the client. Depending on your requirements, you can use a global external Application Load Balancer or a global external proxy Network Load Balancer. For more information, see Choose a load balancer. Regional managed instance groups (MIGs) for the web tier The web tier of the application is deployed on Compute Engine VMs that are part of regional MIGs. These MIGs are the backends for the global load balancer. Each MIG contains Compute Engine VMs in three different zones. Each of these VMs hosts an independent instance of the web tier of the application. Regional internal load balancers The internal load balancer in each region distributes traffic from the web tier VMs to the application tier VMs in that region. Depending on your requirements, you can use a regional internal Application Load Balancer or Network Load Balancer. For more information, see Choose a load balancer. Regional MIGs for the application tier The application tier is deployed on Compute Engine VMs that are part of regional MIGs. The MIG in each region is the backend for the internal load balancer in that region. Each MIG contains Compute Engine VMs in three different zones. Each VM hosts an independent instance of the application tier. Third-party database deployed on Compute Engine VMs A third-party database (such as PostgreSQL) is deployed on Compute Engine VMs in the two regions. You can set up cross-region replication for the databases and configure the database in each region to fail over to the database in the other region. The replication and failover capabilities depend on the database that you use. Installing and managing a third-party database involves additional effort and operational cost for replication, applying updates, monitoring, and ensuring availability. You can avoid the overhead of installing and managing a third-party database and take advantage of built-in high availability (HA) features by using a fully managed database like a multi-region Spanner instance. Virtual Private Cloud network and subnets All the Google Cloud resources in the architecture use a single VPC network that has subnets in two different regions. 
Cloud Storage dual-region buckets Backups of application data are stored in dual-region Cloud Storage buckets. Alternatively, you can use Backup and DR Service to create, store, and manage the database backups. Use cases This section describes use cases for which a multi-regional deployment on Compute Engine is an appropriate choice. Efficient migration of on-premises applications You can use this reference architecture to build a Google Cloud topology to rehost (lift and shift) on-premises applications to the cloud with minimal changes to the applications. All the tiers of the application in this reference architecture are hosted on Compute Engine VMs. This approach lets you migrate on-premises applications efficiently to the cloud and take advantage of the cost benefits, reliability, performance, and operational simplicity that Google Cloud provides. High availability for geo-dispersed users We recommend a multi-regional deployment for applications that are business-critical and where high availability and robustness against region outages are essential. If a region becomes unavailable for any reason (even a large-scale disruption caused by a natural disaster), users of the application don't experience any downtime. Traffic is routed to the application in the other available regions. If data is replicated synchronously, the recovery time objective (RTO) is near zero. Low latency for application users If your users are within a specific geographical area, such as a continent, you can use a multi-regional deployment to achieve an optimal balance between availability and performance. When one of the regions has an outage, the global load balancer sends requests that originate in that region to another region. Users don't perceive significant performance impact because the regions are within a geographical area. Design alternative An architecture that uses a global load balancer (figure 1) supports certain features that help you to enhance the reliability of your deployments, such as edge caching using Cloud CDN. This section presents an alternative architecture that uses regional load balancers and Cloud DNS, as shown in figure 2. This alternative architecture supports the following additional features: Transport Layer Security (TLS) termination in specified regions. Ability to serve content from the region that you specify. However, that region might not be the best performing region at a given time. A wider range of connection protocols if you use a Passthrough Network Load Balancer. For more information about the differences between regional and global load balancers, see the following documentation: Global versus regional load balancing in "Choose a load balancer" Modes of operation in "External Application Load Balancer overview" Figure 2. Cloud DNS routes user requests to regional load balancers. Like the architecture in figure 1, the architecture in figure 2 is robust against zone and region outages. A Cloud DNS public zone routes user requests to the appropriate region. Regional external load balancers receive user requests and distribute them across the web tier instances of the application within each region. The other components in this architecture are identical to the components in the global load balancer-based architecture. 
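Both the global and the regional load balancer variants of this architecture store database backups in dual-region Cloud Storage buckets. As a minimal, hedged illustration (the bucket name and file paths are placeholders, and the google-cloud-storage client library with the VM's Application Default Credentials is assumed), a backup job on a database VM could upload a dump file as follows:

# Illustrative only: upload a database dump to a dual-region Cloud Storage bucket.
from google.cloud import storage

def upload_backup(bucket_name, local_path, object_name):
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(object_name)
    blob.upload_from_filename(local_path)

upload_backup(
    bucket_name="example-app-db-backups",            # assumed dual-region bucket
    local_path="/var/backups/app-2024-05-01.dump",
    object_name="postgres/app-2024-05-01.dump",
)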
Design considerations This section provides guidance to help you use this reference architecture to develop an architecture that meets your specific requirements for system design, security and compliance, reliability, operational efficiency, cost, and performance. Note: The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud products and features that you use, there might be additional design factors and trade-offs that you should consider. System design This section provides guidance to help you to choose Google Cloud regions for your multi-regional deployment and to select appropriate Google Cloud services. Region selection When you choose the Google Cloud regions where your applications must be deployed, consider the following factors and requirements: Availability of Google Cloud services in each region. For more information, see Products available by location. Availability of Compute Engine machine types in each region. For more information, see Regions and zones. End-user latency requirements Cost of Google Cloud resources Cross-regional data transfer costs Regulatory requirements Some of these factors and requirements might involve trade-offs. For example, the most cost-efficient region might not have the lowest carbon footprint. Compute services The reference architecture in this document uses Compute Engine VMs for all the tiers of the application. Depending on the requirements of your application, you can choose from other Google Cloud compute services: You can run containerized applications in Google Kubernetes Engine (GKE) clusters. GKE is a container-orchestration engine that automates deploying, scaling, and managing containerized applications. If you prefer to focus your IT efforts on your data and applications instead of setting up and operating infrastructure resources, then you can use serverless services like Cloud Run and Cloud Run functions. The decision of whether to use VMs, containers, or serverless services involves a trade-off between configuration flexibility and management effort. VMs and containers provide more configuration flexibility, but you're responsible for managing the resources. In a serverless architecture, you deploy workloads to a preconfigured platform that requires minimal management effort. For more information about choosing appropriate compute services for your workloads in Google Cloud, see Hosting Applications on Google Cloud in the Google Cloud Architecture Framework. Storage services The architectures shown in this document use regional Persistent Disk volumes for all the tiers. Persistent disks provide synchronous replication of data across two zones within a region. Other storage options for multi-regional deployments include Cloud Storage dual-region or multi-region buckets. Objects stored in a dual-region or multi-region bucket are stored redundantly in at least two separate geographic locations. Metadata is written synchronously across regions, and data is replicated asynchronously. For dual-region buckets, you can use turbo replication, which ensures that objects are replicated across region pairs, with a recovery point objective (RPO) of 15 minutes. For more information, see Data availability and durability. To store data that's shared across multiple VMs in a region, such as across all the VMs in the web tier or application tier, you can use a Filestore Enterprise instance. 
The data that you store in a Filestore Enterprise instance is replicated synchronously across three zones within the region. This replication ensures high availability and robustness against zone outages. You can store shared configuration files, common tools and utilities, and centralized logs in the Filestore instance, and mount the instance on multiple VMs. If your database is Microsoft SQL Server, you can deploy a failover cluster instance (FCI) and use the fully managed Google Cloud NetApp Volumes to provide continuous availability (CA) SMB storage for the database. When you design storage for your multi-regional workloads, consider the functional characteristics of the workloads, resilience requirements, performance expectations, and cost goals. For more information, see Design an optimal storage strategy for your cloud workload. Database services The reference architecture in this document uses a third-party database, like PostgreSQL, that's deployed on Compute Engine VMs. Installing and managing a third-party database involves effort and cost for operations like applying updates, monitoring and ensuring availability, performing backups, and recovering from failures. You can avoid the effort and cost of installing and managing a third-party database by using a fully managed database service like Cloud SQL, AlloyDB for PostgreSQL, Bigtable, Spanner, or Firestore. These Google Cloud database services provide uptime service-level agreements (SLAs), and they include default capabilities for scalability and observability. If your workloads require an Oracle database, you can use Bare Metal Solution provided by Google Cloud. For an overview of the use cases that each Google Cloud database service is suitable for, see Google Cloud databases. When you choose and set up the database for a multi-regional deployment, consider your application's requirements for cross-region data consistency, and be aware of the performance and cost trade-offs. If the application requires strong consistency (all users must read the same data at all times), then the data must be replicated synchronously across all regions in the architecture. However, synchronous replication can lead to higher cost and decreased performance, because any data that's written must be replicated in real time across the regions before the data is available for read operations. If your application can tolerate eventual consistency, then you can replicate data asynchronously. This can help improve performance because the data doesn't need to be replicated synchronously across regions. However, users in different regions might read different data because the data might not have been fully replicated at the time of the request. Security and compliance This section describes factors that you should consider when you use this reference architecture to design and build a multi-regional topology in Google Cloud that meets the security and compliance requirements of your workloads. Protection against threats To protect your application against threats like distributed denial of service (DDoS) attacks and cross-site scripting (XSS), you can use Google Cloud Armor security policies. Each policy is a set of rules that specifies certain conditions that should be evaluated and actions to take when the conditions are met. For example, a rule could specify that if the incoming traffic's source IP address matches a specific IP address or CIDR range, then the traffic must be denied. 
In addition, you can apply preconfigured web application firewall (WAF) rules. For more information, see Security policy overview. External access for VMs In the reference architecture that this document describes, the VMs that host the application tier, web tier, and databases don't need inbound access from the internet. Don't assign external IP addresses to those VMs. Google Cloud resources that have only a private, internal IP address can still access certain Google APIs and services by using Private Service Connect or Private Google Access. For more information, see Private access options for services. To enable secure outbound connections from Google Cloud resources that have only private IP addresses, like the Compute Engine VMs in this reference architecture, you can use Cloud NAT. VM image security To ensure that your VMs use only approved images (that is, images with software that meets your policy or security requirements), you can define an organization policy that restricts the use of images in specific public image projects. For more information, see Setting up trusted image policies. Service account privileges In Google Cloud projects where the Compute Engine API is enabled, a default service account is created automatically. The default service account is granted the Editor IAM role (roles/editor) unless this behavior is disabled. By default, the default service account is attached to all VMs that you create by using the Google Cloud CLI or the Google Cloud console. The Editor role includes a broad range of permissions, so attaching the default service account to VMs creates a security risk. To avoid this risk, you can create and use dedicated service accounts for each application. To specify the resources that the service account can access, use fine-grained policies. For more information, see Limit service account privileges in "Best practices for using service accounts." Data residency considerations You can use regional load balancers to build a multi-regional architecture that helps you to meet data residency requirements. For example, a country in Europe might require that all user data be stored and accessed in data centers that are located physically within Europe. To meet this requirement, you can use the regional load balancer-based architecture in figure 2. In that architecture, the application runs in Google Cloud regions in Europe and you use Cloud DNS with a geofenced routing policy to route traffic through regional load balancers. To meet data residency requirements for the database tier, use a sharded architecture instead of replication across regions. With this approach, the data in each region is isolated, but you can't implement cross-region high availability and failover for the database. More security considerations When you build the architecture for your workload, consider the platform-level security best practices and recommendations provided in the Security foundations blueprint. Reliability This section describes design factors that you should consider when you use this reference architecture to build and operate reliable infrastructure for your multi-regional deployments in Google Cloud. MIG autoscaling When you run your application on multiple regional MIGs, the application remains available during isolated zone outages or region outages. The autoscaling capability of stateless MIGs lets you maintain application availability and performance at predictable levels. 
To control the autoscaling behavior of your stateless MIGs, you can specify target utilization metrics, such as average CPU utilization. You can also configure schedule-based autoscaling for stateless MIGs. Stateful MIGs can't be autoscaled. For more information, see Autoscaling groups of instances. VM autohealing Sometimes the VMs that host your application might be running and available, but there might be issues with the application itself. It might freeze, crash, or not have sufficient memory. To verify whether an application is responding as expected, you can configure application-based health checks as part of the autohealing policy of your MIGs. If the application on a particular VM isn't responding, the MIG autoheals (repairs) the VM. For more information about configuring autohealing, see Set up an application health check and autohealing. VM placement In the architecture that this document describes, the application tier and web tier run on Compute Engine VMs that are distributed across multiple zones. This distribution ensures that your application is robust against zone outages. To improve this robustness further, you can create a spread placement policy and apply it to the MIG template. When the MIG creates VMs, it places the VMs within each zone on different physical servers (called hosts), so your VMs are robust against failures of individual hosts. For more information, see Apply spread placement policies to VMs. VM capacity planning To make sure that capacity for Compute Engine VMs is available when required, you can create reservations. A reservation provides assured capacity in a specific zone for a specified number of VMs of a machine type that you choose. A reservation can be specific to a project, or shared across multiple projects. For more information about reservations, including billing considerations, see Reservations of Compute Engine zonal resources. Persistent disk state A best practice in application design is to avoid the need for stateful local disks. But if the requirement exists, you can configure your persistent disks to be stateful to ensure that the data is preserved when the VMs are repaired or recreated. However, we recommend that you keep the boot disks stateless, so that you can update them easily to the latest images with new versions and security patches. For more information, see Configuring stateful persistent disks in MIGs. Data durability You can use Backup and DR to create, store, and manage backups of the Compute Engine VMs. Backup and DR stores backup data in its original, application-readable format. When required, you can restore your workloads to production by directly using data from long-term backup storage without time-consuming data movement or preparation activities. To store database backups and transaction logs, you can use regional Cloud Storage buckets, which provide low-cost backup storage that's redundant across zones. Compute Engine provides the following options to help you to ensure the durability of data that's stored in Persistent Disk volumes: You can use standard snapshots to capture the point-in-time state of Persistent Disk volumes. The snapshots are stored redundantly in multiple regions, with automatic checksums to ensure the integrity of your data. Snapshots are incremental by default, so they use less storage space and you save money. Snapshots are stored in a Cloud Storage location that you can configure.
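For example, the following Terraform sketch defines a snapshot schedule and attaches it to an existing data disk. The names, region, zone, retention period, and storage location are placeholder assumptions.

# Sketch: daily snapshot schedule with 14-day retention.
resource "google_compute_resource_policy" "daily_snapshots" {
  name   = "daily-snapshot-schedule"
  region = "us-central1"

  snapshot_schedule_policy {
    schedule {
      daily_schedule {
        days_in_cycle = 1
        start_time    = "03:00"
      }
    }

    retention_policy {
      max_retention_days    = 14
      on_source_disk_delete = "KEEP_AUTO_SNAPSHOTS"
    }

    snapshot_properties {
      storage_locations = ["us"] # multi-regional snapshot storage location
    }
  }
}

# Attach the schedule to an existing zonal Persistent Disk.
resource "google_compute_disk_resource_policy_attachment" "db_data" {
  name = google_compute_resource_policy.daily_snapshots.name
  disk = "db-data-disk" # placeholder disk name
  zone = "us-central1-a"
}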
For more recommendations about using and managing snapshots, see Best practices for Compute Engine disk snapshots. Regional Persistent Disk volumes let you run highly available applications that aren't affected by failures in persistent disks. When you create a regional Persistent Disk volume, Compute Engine maintains a replica of the disk in a different zone in the same region. Data is replicated synchronously to the disks in both zones. If any one of the two zones has an outage, the data remains available. Database availability To implement cross-zone failover for the database in each region, you need a mechanism to identify failures of the primary database and a process to fail over to the standby database. The specifics of the failover mechanism depend on the database that you use. You can set up an observer instance to detect failures of the primary database and orchestrate the failover. You must configure the failover rules appropriately to avoid a split-brain situation and prevent unnecessary failover. For example architectures that you can use to implement failover for PostgreSQL databases, see Architectures for high availability of PostgreSQL clusters on Compute Engine. More reliability considerations When you build the cloud architecture for your workload, review the reliability-related best practices and recommendations that are provided in the following documentation: Google Cloud infrastructure reliability guide Patterns for scalable and resilient apps Designing resilient systems Cost optimization This section provides guidance to optimize the cost of setting up and operating a multi-regional Google Cloud topology that you build by using this reference architecture. VM machine types To help you optimize the resource utilization of your VM instances, Compute Engine provides machine type recommendations. Use the recommendations to choose machine types that match your workload's compute requirements. For workloads with predictable resource requirements, you can customize the machine type to your needs and save money by using custom machine types. VM provisioning model If your application is fault tolerant, then Spot VMs can help to reduce your Compute Engine costs for the VMs in the application and web tiers. The cost of Spot VMs is significantly lower than regular VMs. However, Compute Engine might preemptively stop or delete Spot VMs to reclaim capacity. Spot VMs are suitable for batch jobs that can tolerate preemption and don't have high availability requirements. Spot VMs offer the same machine types, options, and performance as regular VMs. However, when the resource capacity in a zone is limited, MIGs might not be able to scale out (that is, create VMs) automatically to the specified target size until the required capacity becomes available again. Resource utilization The autoscaling capability of stateless MIGs enables your application to handle increases in traffic gracefully, and it helps you to reduce cost when the need for resources is low. Stateful MIGs can't be autoscaled. Third-party licensing When you migrate third-party workloads to Google Cloud, you might be able to reduce cost by bringing your own licenses (BYOL). For example, to deploy Microsoft Windows Server VMs, instead of using a premium image that incurs additional cost for the third-party license, you can create and use a custom Windows BYOL image. You then pay only for the VM infrastructure that you use on Google Cloud. 
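To sketch what the BYOL approach can look like in Terraform, the following instance template boots from a hypothetical custom image family and applies the two recommendations that follow (a custom machine type and one thread per core). The project ID, image family, and sizes are placeholder assumptions, and vendor licensing terms (for example, sole-tenancy requirements) are out of scope here.

# Sketch: instance template that uses a custom BYOL image family.
resource "google_compute_instance_template" "win_app" {
  name_prefix = "win-app-byol-"

  # Custom machine type: provision vCPUs independently of memory.
  machine_type = "n2-custom-4-16384" # 4 vCPUs, 16 GB memory

  # One thread per physical core (SMT disabled) to limit per-vCPU licensing.
  advanced_machine_features {
    threads_per_core = 1
  }

  disk {
    # Placeholder: custom image family created from your imported BYOL image.
    source_image = "projects/my-images-project/global/images/family/windows-byol"
    boot         = true
  }

  network_interface {
    network = "default"
  }
}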
This strategy helps you continue to realize value from your existing investments in third-party licenses. If you decide to use the BYOL approach, we recommend that you do the following: Provision the required number of compute CPU cores independently of memory by using custom machine types. By doing this, you limit the third-party licensing cost to the number of CPU cores that you need. Reduce the number of vCPUs per core from 2 to 1 by disabling simultaneous multithreading (SMT), and reduce your licensing costs by 50%. More cost considerations When you build the architecture for your workload, also consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Cost optimization. Operational efficiency This section describes the factors that you should consider when you use this reference architecture to design and build a multi-regional Google Cloud topology that you can operate efficiently. VM configuration updates To update the configuration of the VMs in a MIG (such as the machine type or boot-disk image), you create a new instance template with the required configuration and then apply the new template to the MIG. The MIG updates the VMs by using the update method that you choose: automatic or selective. Choose an appropriate method based on your requirements for availability and operational efficiency. For more information about these MIG update methods, see Apply new VM configurations in a MIG. VM images For your MIG instance templates, instead of using Google-provided public images, we recommend that you create and use custom images that contain the configurations and software that your applications require. You can group your custom images into a custom image family. An image family always points to the most recent image in that family, so your instance templates and scripts can use that image without you having to update references to a specific image version. Deterministic instance templates If the instance templates that you use for your MIGs include startup scripts to install third-party software, make sure that the scripts explicitly specify software-installation parameters such as the software version. Otherwise, when the MIG creates the VMs, the software that's installed on the VMs might not be consistent. For example, if your instance template includes a startup script to install Apache HTTP Server 2.0 (the apache2 package), then make sure that the script specifies the exact apache2 version that should be installed, such as version 2.4.53. For more information, see Deterministic instance templates. More operational considerations When you build the architecture for your workload, consider the general best practices and recommendations for operational efficiency that are described in Google Cloud Architecture Framework: Operational excellence. Performance optimization This section describes the factors that you should consider when you use this reference architecture to design and build a multi-regional topology in Google Cloud that meets the performance requirements of your workloads. VM placement For workloads that require low inter-VM network latency, you can create a compact placement policy and apply it to the MIG template. When the MIG creates VMs, it places the VMs on physical servers that are close to each other. For more information, see Reduce latency by using compact placement policies. 
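For example, the following Terraform sketch creates a compact placement policy and references it from the instance template that a MIG would use. The names, region, and machine type are placeholder assumptions.

# Sketch: compact placement policy for latency-sensitive workloads.
resource "google_compute_resource_policy" "compact" {
  name   = "low-latency-placement"
  region = "us-central1"

  group_placement_policy {
    collocation = "COLLOCATED"
  }
}

# Instance template that applies the placement policy to the MIG's VMs.
resource "google_compute_instance_template" "latency_sensitive" {
  name_prefix  = "latency-"
  machine_type = "c2-standard-8" # compact placement requires a supported machine series

  resource_policies = [google_compute_resource_policy.compact.id]

  disk {
    source_image = "debian-cloud/debian-12"
    boot         = true
  }

  network_interface {
    network = "default"
  }
}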
VM machine types Compute Engine offers a wide range of predefined and customizable machine types that you can choose from depending on your cost and performance requirements. The machine types are grouped into machine series and families. The following table provides a summary of the recommended machine families and series for different workload types: Requirement Recommended machine family Example machine series Best price-performance ratio for a variety of workloads General-purpose machine family C3, C3D, E2, N2, N2D, Tau T2D, Tau T2A Highest performance per core and optimized for compute-intensive workloads Compute-optimized machine family C2, C2D, H3 High memory-to-vCPU ratio for memory-intensive workloads Memory-optimized machine family M3, M2, M1 GPUs for massively parallelized workloads Accelerator-optimized machine family A2, G2 For more information, see Machine families resource and comparison guide. VM multithreading Each virtual CPU (vCPU) that you allocate to a Compute Engine VM is implemented as a single hardware multithread. By default, two vCPUs share a physical CPU core. For workloads that are highly parallel or that perform floating point calculations (such as genetic sequence analysis, and financial risk modeling), you can improve performance by reducing the number of threads that run on each physical CPU core. For more information, see Set the number of threads per core. Network Service Tiers Network Service Tiers lets you optimize the network cost and performance of your workloads. You can choose from the following tiers: Premium Tier uses Google's highly reliable global backbone to help you achieve minimal packet loss and latency. Traffic enters and leaves the Google network at a global edge point of presence (PoP) that's closest to your end user's ISP. We recommend using Premium Tier as the default tier for optimal performance. Premium Tier supports both regional external IP addresses and global external IP addresses for VMs and load balancers. Standard Tier is available only for resources that use regional external IP addresses. Traffic enters and leaves the Google network at an edge PoP that's closest to the region where your Google Cloud workload runs. The pricing for Standard Tier is lower than Premium Tier. Standard Tier is suitable for traffic that isn't sensitive to packet loss and that doesn't have low latency requirements. Caching If your application serves static website assets and if your architecture includes a global external Application Load Balancer (as shown in figure 1), then you can use Cloud CDN to cache regularly accessed static content closer to your users. Cloud CDN can help to improve performance for your users, reduce your infrastructure resource usage in the backend, and reduce your network delivery costs. For more information, see Faster web performance and improved web protection for load balancing. More performance considerations When you build the architecture for your workload, consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Performance optimization. What's next Learn more about the Google Cloud products used in this reference architecture: Cloud Load Balancing overview Instance groups Cloud DNS overview Get started with migrating your workloads to Google Cloud. Explore and evaluate deployment archetypes that you can choose to build architectures for your cloud workloads. Review architecture options for designing reliable infrastructure for your workloads in Google Cloud. 
For more reference architectures, design guides, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Ben Good | Solutions ArchitectCarl Franklin | Director, PSO Enterprise ArchitectureDaniel Lees | Cloud Security ArchitectGleb Otochkin | Cloud Advocate, DatabasesMark Schlagenhauf | Technical Writer, NetworkingPawel Wenda | Group Product ManagerSean Derrington | Group Outbound Product Manager, StorageSekou Page | Outbound Product ManagerShobhit Gupta | Solutions ArchitectSimon Bennett | Group Product ManagerSteve McGhee | Reliability AdvocateVictor Moreno | Product Manager, Cloud Networking Send feedback \ No newline at end of file diff --git a/Multicloud(1).txt b/Multicloud(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..5046475e8cab42309082e476e0b09f4d8497a07a --- /dev/null +++ b/Multicloud(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/multicloud +Date Scraped: 2025-02-23T11:57:13.058Z + +Content: +5 ways Google can help you succeed in the hybrid and multicloud worldDrive transformation with Google's multicloud solutionsWe offer you the flexibility to migrate, build, and optimize apps across hybrid and multicloud environments while minimizing vendor lock-in, leveraging best-in-breed solutions, and meeting regulatory requirements.Contact salesManage apps and data anywhereGoogle Cloud empowers you to quickly build new apps and modernize existing ones to increase your agility and reap the benefits of the multicloud. We offer a consistent platform and data analysis for your deployments no matter where they reside, along with a service-centric view of all your environments.55%Anthos increases platform operations efficiency by up to 55%Source: Forrester50+Looker supports 50+ distinct SQL dialects across multiple clouds34%Save 26%–34% on your total cost of ownership over a 3-year periodSources: Looker, ESG reportBreak down silos and uncover new insightsProcess and analyze petabytes of data on a highly scalable, cost-effective and secure data warehouse solution across clouds. Serve up real-time dashboards for more in-depth, consistent analysis and harness the power of our industry-leading AI & ML services for improved business outcomes.Accelerate application deliveryBuild enterprise-grade containerized applications faster with best-in-class managed Kubernetes and serverless platform on cloud and on-premises environments. You can build a fast, scalable software delivery pipeline no matter where you run by seamlessly implementing DevOps and SRE practices with cloud-native tooling and expert guidance from Google.No. 1Contributor to CNCF Open Source projectsSource: Stackalytics Scale with open, flexible technologyGoogle is one of the largest contributors to the open source ecosystem. We work with the open source community to develop well known open source technologies like Kubernetes, then roll these out as managed services to give users maximum choice and increase your IT investments’ longevity and survivability. Overall, Google is making a lot of progress in multicloud, which allows you to not have to think about the vendor and just adopt what you need to do the job well. Dave Johnson, VP of Informatics, Data Science, and AI at Moderna Learn more about ModernaUse the best of both worldsWhen you use Google Cloud and Oracle Cloud Infrastructure (OCI) together, you can build a multicloud solution that harnesses the unique capabilities of each platform. 
Our whitepaper explains how to connect these clouds together to create a robust, multicloud, Oracle environment.Read the whitepaperFind a partnerOur global partner program, Partner Advantage, can help you innovate faster, scale smarter, and stay secure. If you’re looking for advanced solutions and services in a particular area, find a Google Cloud partner with Specialization or Expertise on our partner directory.Discover solutions that make multicloud a realityModernize existing applications and build cloud-native apps anywhere Learn more about AnthosBring multicloud analytics to your data Learn more about BigQuery OmniDerive business insights across all your environments with Looker Data PlatformLearn more about LookerTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all productsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Multicloud.txt b/Multicloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..06d41a15deed290068ca7adf4229a00959829f12 --- /dev/null +++ b/Multicloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes/multicloud +Date Scraped: 2025-02-23T11:44:49.180Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud multicloud deployment archetype Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC This section of the Google Cloud deployment archetypes guide describes the multicloud deployment archetype, provides examples of use cases, and discusses design considerations. In an architecture that uses the multicloud deployment archetype, some parts of the application run in Google Cloud while others are deployed in other cloud platforms. Use cases The following sections provide examples of use cases for which the multicloud deployment archetype is an appropriate choice. Note: For each of these use cases, the architecture in each cloud can use the zonal, regional, multi-regional, or global deployment archetype. Google Cloud as the primary site and another cloud as a DR site To manage disaster recovery (DR) for mission-critical applications in Google Cloud, you can back up the data and maintain a passive replica in another cloud platform, as shown in the following diagram. If the application in Google Cloud is down, you can use the external replica to restore the application to production. Enhancing applications with Google Cloud capabilities Google Cloud offers advanced capabilities in areas like storage, artificial intelligence (AI) and machine learning (ML), big data, and analytics. The multicloud deployment archetype lets you take advantage of these advanced capabilities in Google Cloud for applications that you want to run on other cloud platforms. The following are examples of these capabilities: Low-cost, unlimited archive storage. AI and ML applications for data generated by applications deployed in other cloud platforms. Data warehousing and analytics processes using BigQuery for data ingested from applications that run in other cloud platforms. The following diagram shows a multicloud topology that enhances an application running on another cloud platform with advanced data-processing capabilities in Google Cloud. 
More information For more information about the rationale and use cases for the multicloud deployment archetype, see Build hybrid and multicloud architectures using Google Cloud. Design considerations When you build an architecture that's based on the multicloud deployment archetype, consider the following design factors. Cost of redundant resources A multicloud architecture often costs more than an architecture where the application runs entirely in Google Cloud, due to the following factors: Data might need to be stored redundantly within each cloud rather than in a single cloud. The storage and data transfer costs might be higher. If an application runs in multiple cloud platforms, some of the redundant resources might be underutilized, leading to higher overall cost of the deployment. Inter-cloud connectivity For efficient network communication between your resources in multiple cloud platforms, you need secure and reliable cross-cloud connectivity. For example, you can use Google Cloud Cross-Cloud Interconnect to establish high-bandwidth dedicated connectivity between Google Cloud and another cloud service provider. For more information, see Patterns for connecting other cloud service providers with Google Cloud. Setup effort and operational complexity Setting up and operating a multicloud topology requires significantly more effort than an architecture that uses only Google Cloud: Security features and tools aren't standard across cloud platforms. Your security administrators need to learn the skills and knowledge that are necessary to manage security for resources distributed across all the cloud platforms that you use. You need to efficiently provision and manage resources across multiple public cloud platforms. Tools like Terraform can help reduce the effort to provision and manage resources. To manage containerized multicloud applications, you can use GKE Enterprise, which is a cross-cloud orchestration platform. Example architectures For examples of architectures that use the multicloud deployment archetype, see Build hybrid and multicloud architectures using Google Cloud. Previous arrow_back Hybrid Next Comparative analysis arrow_forward Send feedback \ No newline at end of file diff --git a/Network_Connectivity_Center.txt b/Network_Connectivity_Center.txt new file mode 100644 index 0000000000000000000000000000000000000000..38805378d9684fbbdb607f34efa0b76009c8953a --- /dev/null +++ b/Network_Connectivity_Center.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/network-connectivity-center +Date Scraped: 2025-02-23T12:07:14.105Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '24. 
Let's go.Jump to Network Connectivity CenterNetwork Connectivity CenterReimagine how you deploy, manage, and scale your networks.Go to consoleView documentationSingle place to manage global connectivityElastic connectivity across Google Cloud, multicloud, and hybrid networksDeep visibility into Google Cloud and tight integration with third-party solutions12:23Configure a Network Connectivity Center hub to inter-connect distributed VPCsKey featuresSimple, flexible, and insightful cloud connectivitySimplified connectivityNetwork Connectivity Center offers the unique ability to easily connect your on-premises, Google Cloud, and other cloud enterprise networks and manage them as spokes through a single, centralized logical hub on Google Cloud.Flexible cloud connectivityNetwork Connectivity Center delivers a unified connectivity experience by allowing you to use Google’s global network, leveraging Partner and Dedicated Interconnects, Cloud VPN connections, and third-party router / SD-WAN VMs to transfer data reliably across on-premises sites, cloud resources, and other clouds (enabled through VPN connectivity).Deep insights of your global networkNetwork Connectivity Center pairs seamlessly with Network Intelligence Center to deliver end-to-end visibility so you can monitor, visualize, and test the connectivity of hub-and-spoke architectures deployed on Google Cloud, on-premises, and other clouds.What's newSee the latest updates about Network Connectivity CenterSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog post Increase Compute Engine VM performance with custom queuesRead the blogBlog postAnnouncing Network Connectivity CenterRead the blogBlog postCisco SD-WAN Cloud Hub with Google Cloud is now availableRead the blogVideoTransform your enterprise networkingWatch videoDocumentationFind resources and documentation for Network Connectivity CenterGoogle Cloud BasicsNetwork Connectivity Center overviewGet an overview of Network Connectivity Center—a hub-and-spoke model for network connectivity management in Google Cloud.Learn moreQuickstartWorking with hubs and spokesLearn how to list, create, describe, and delete Network Connectivity Center hubs and spokes. You can also add one or more labels to a hub or spoke.Learn moreTutorialConnecting two branch offices using HA VPN spokesThis tutorial describes how to use a Network Connectivity Center hub and Cloud VPN spokes to set up data transfer between two branch offices.Learn moreNot seeing what you’re looking for?View all product documentationUse casesExplore common use cases for Network Connectivity CenterUse caseSimplified data transfer over Google’s networkNetwork Connectivity Center enables connecting different enterprise networks together that are outside of Google Cloud by leveraging Google's network—providing enterprises instant access to planet-scale reach and high reliability. Traffic between non-Google networks is referred to as data transfer traffic, which can occur using existing standard cloud hybrid connectivity resources such as Cloud VPN, Dedicated, or Partner Interconnect.Use case SD-WAN / router appliance integrationNetwork Connectivity Center is the center point on Google Cloud for enterprises to attach SD-WAN solutions. 
SD-WAN / Router instances deployed on Google Cloud can be attached to Network Connectivity Center using the router appliance spoke.By integrating the SD-WAN solution with Google’s network, enterprises get a simple, reliable way to consume connectivity on demand while extending the benefits of the SD-WAN to Google Cloud networks.Use caseMulticloud VPN connectivityNetwork Connectivity Center supports VPN-based multicloud connectivity at a global level. Network Connectivity Center can be used directly or in conjunction with partner solutions to deploy and manage applications that require simple, reliable connectivity when spanning multiple clouds.View all technical guidesPricingNetwork Connectivity Center pricingGo here for pricing detailsPartnersPartnersTransform your enterprise networking with these Google Cloud partners.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Network_Intelligence_Center.txt b/Network_Intelligence_Center.txt new file mode 100644 index 0000000000000000000000000000000000000000..3fea4943185ba2e192d43810bdc50d9624eba853 --- /dev/null +++ b/Network_Intelligence_Center.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/network-intelligence-center +Date Scraped: 2025-02-23T12:07:15.731Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '23. Let's go.Jump to Network Intelligence CenterNetwork Intelligence CenterSingle console for Google Cloud network observability, monitoring, and troubleshooting. Reduce the risk of outages and ensure security and compliance.Go to consoleDesign and implement robust network observability strategies with these five modulesProactively detect network problems or miconfigurations without manual interventionExplore the latest news, articles, and videos for Network Intelligence Center and its featuresBLOGTroubleshoot your network with Connectivity TestsBenefitsDiagnose connectivity issues and prevent outagesNetwork Intelligence Center provides unmatched visibility into your network in the cloud along with proactive network verification.Improve network security and complianceVerify network security and compliance through a series of connectivity checks. Tighten your security boundaries with insights into firewall rules usage.Save time with intelligent monitoringMonitor real-time performance metrics and easily visualize network health. 
Proactively prevent network outages and performance issues resulting from mis- or suboptimal configurations.Key featuresKey featuresNetwork TopologyVisualize the topology of your Virtual Private Cloud (VPC) networks, hybrid connectivity to and from your on-premises networks, connectivity to Google-managed services, and the associated metrics.Connectivity TestsDiagnostics tool that lets you check connectivity between network endpoints like a source or destination of network traffic, such as a VM, GKE cluster, load balancer forwarding rule, or an IP address on the internet.Performance DashboardVisibility into the performance of the entire Google Cloud network and to the performance of your project's resources.Firewall InsightsProvides data about how firewall rules are being used, exposes misconfigurations, and identifies rules that could be made more strict.Network AnalyzerAutomatically monitors your VPC network configurations and detects misconfigurations and suboptimal configurations. Identifies failures caused by the underlying network, provides root cause information, and suggests possible resolutions.BLOGOne stop shop to detect service and network issuesNetwork Intelligence Center has transformed how we optimize our network operations. Using Network Intelligence Center we discovered that data transferred to a particular compute engine region was significantly higher than expected. Network Topology helped us diagnose and fix this issue—and significantly reduce costs.Rob Lyon, Enterprise Architect, Kochava, a mobile app analytics companyWhat's newWhat’s newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postExpanding the Google Cloud network observability partner ecosystemRead the blogBlog postProactively manage your subnet IP address allocation with Network AnalyzerRead the blogVideoNetwork Intelligence Center for troubleshooting and monitoringWatch videoVideoGet started with firewall insights in Network Intelligence CenterWatch videoVideoGet started with Network Topology in Network Intelligence CenterWatch videoPricingPricingPricing for Network Intelligence Center varies by module.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Network_Service_Tiers.txt b/Network_Service_Tiers.txt new file mode 100644 index 0000000000000000000000000000000000000000..2c38bc551205443576c2fabcb828e9d37836af42 --- /dev/null +++ b/Network_Service_Tiers.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/network-tiers +Date Scraped: 2025-02-23T12:07:17.392Z + +Content: +Home Products Network Service Tiers Send feedback Stay organized with collections Save and categorize content based on your preferences. Empowering customers to optimize their cloud network for performance or price With Network Service Tiers, Google Cloud is the first major public cloud to offer a tiered cloud network. Premium Tier Benefit Standard Tier Benefit Premium Standard Delivering choice with Network Service Tiers Premium Give users an exceptional high-performing network experience by using Google's global network. Standard Get control over network costs while delivering performance comparable with other cloud providers. 
Category Premium Standard Network Network High-performance routing Lower-performance network Network services Network Services Network services such as Cloud Load Balancing are global (single VIP for backends in multiple regions) Network services such as Cloud Load Balancing are regional (one VIP per region) Service level Service Level 99.99% availability SLA High performance and reliability 99.9% availability SLA Performance and availability comparable to other public cloud providers (lower than Premium) Use case Use Case Performance, reliability, global footprint, and user experience are your main considerations Cost is your main consideration, and you're willing to trade-off some network performance Premium Tier Premium Tier delivers Google Cloud traffic over Google's well-provisioned, low-latency, highly reliable global network. This network consists of an extensive global private fiber network with over 100 points of presence (POPs) across the globe. By this measure, Google's network is the largest of any public cloud provider. See the Google Cloud network map. Google Cloud customers benefit from the global features within global load balancing, another Premium Tier feature. You not only get the management simplicity of a single anycast IPv4 or IPv6 Virtual IP (VIP), but can also expand seamlessly across regions and overflow or fail over to other regions. Standard Tier Our new Standard Tier delivers Google Cloud traffic over a transit ISP's network with the latency and reliability typical of transit ISPs, and with a network quality comparable to that of other public clouds, at a lower price than our Premium Tier. We also provide only regional network services in Standard Tier, such as the new regional Cloud Load Balancing service. In this tier, your load-balancing VIP is regional, similar to other public cloud offerings, and it adds management complexity compared to Premium Tier global load balancing if you require a multi-region deployment. Performance-optimized Premium Tier Deliver your traffic on Google's high-performance global network Deliver the best possible application experience to your users Deploy global load balancing, with the management simplicity of a single anycast IPv4 or IPv6 Virtual IP across multiple regions. Seamlessly expand, overflow, or fail over into other regions. 
Premium Tier supported for Cloud CDN Lower outbound traffic costs with Standard Tier Run cost-sensitive workloads Performance comparable with other public cloud offerings Deploy regional load balancing to lower costs for single-region workloads Choose the right tier for each workload Full flexibility in selecting the tier for each application Specify the tier per instance or instance template Specify the tier per load balancer for Compute Engine and Cloud Storage backends Enable the tier at the project level to apply it to all underlying applications (coming soon) Network Service Tiers availability Features Premium Tier Standard Tier Plain VM instance Plain VM instance Yes - Regional Yes - Regional Application Load Balancer Application Load Balancer Yes - Global only Yes - Regional Network Load Balancer (non-HTTP traffic) Network Load Balancer (non-HTTP traffic) Yes - Global or regional Yes - Regional Internal load balancers Internal load balancers Yes - Regional VIP (+ client can be anywhere) No Google Cloud Storage, Google Kubernetes Engine Google Cloud Storage, Google Kubernetes Engine Yes Yes - Regional but only by using load balancing Cloud CDN Cloud CDN Yes No Cloud VPN/Cloud Router Cloud VPN/Cloud Router Yes No IPv6 Yes No “Google’s global network is one of the strongest features for choosing Google Cloud.” —Ravi Yeddula Sr. Director Platform Architecture & Application Development, The Home Depot View Documentation \ No newline at end of file diff --git a/Network_security.txt b/Network_security.txt new file mode 100644 index 0000000000000000000000000000000000000000..bb7d1793305a86148cdb53c452edb7ffca6884ae --- /dev/null +++ b/Network_security.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ccn-distributed-apps-design/security +Date Scraped: 2025-02-23T11:50:57.140Z + +Content: +Home Docs Cloud Architecture Center Send feedback Network security for distributed applications in Cross-Cloud Network Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-30 UTC This document is part of a design guide series for Cross-Cloud Network. This part explores the network security layer. The series consists of the following parts: Cross-Cloud Network for distributed applications Network segmentation and connectivity for distributed applications in Cross-Cloud Network Service networking for distributed applications in Cross-Cloud Network Network security for distributed applications in Cross-Cloud Network (this document) Security surfaces When you design the security layer for the Cross-Cloud Network, you must consider the following security surfaces: Workload security Domain perimeter security Workload security controls the communication between workloads across and within the Virtual Private Cloud (VPC). Workload security uses security enforcement points that are close to the workloads in the architecture. Whenever possible, Cross-Cloud Network provides workload security by using Cloud Next Generation Firewall from Google Cloud. Perimeter security is required at all network boundaries. Because the perimeter usually interconnects networks that are managed by different organizations, tighter security controls are often required. 
You must ensure that the following communications across networks are secured: Communications across VPCs Communications across hybrid connections to other cloud providers or on-premises data centers Communications to the internet The ability to insert third-party network virtual appliances (NVAs) within the Google Cloud environment is critical to address the requirements for perimeter security across hybrid connections. Workload security in cloud Use firewall policies in Google Cloud to secure workloads and provide stateful firewall capabilities that are horizontally scalable and applied to each VM instance. The distributed nature of Google Cloud firewalls helps you implement security policies for network micro-segmentation without negatively impacting the performance of your workloads. Use Hierarchical firewall policies to improve the manageability and enforce posture compliance for your firewall policies. Hierarchical firewall policies let you create and enforce a consistent firewall policy across your organization. You can assign Hierarchical firewall policies to the organization or to individual folders. In addition, hierarchical firewall policy rules can delegate evaluation to lower-level policies (global or regional network firewall policies) with a goto_next action. Lower-level rules cannot override a rule from a higher level in the resource hierarchy. This rule structure lets organization-wide administrators manage mandatory firewall rules in one place. Common use cases for Hierarchical firewall policies include organization or multi-project bastion host access, permitting centralized probing or health check systems, and enforcing a virtual network boundary across an organization or group of projects. For additional examples of using Hierarchical firewall policies, see Hierarchical firewall policies examples. Use global and regional network firewall policies to define rules on an individual VPC-network basis, either for all regions of the network (global) or a single region (regional). To achieve more granular controls enforced at the virtual machine (VM) level, we recommend that you use Identity and Access Management (IAM)-governed tags at the organization or project level. IAM-governed tags let you apply firewall rules based on the identity of the workload host, as opposed to the host IP address, and they work across VPC Network Peering. Firewall rules deployed using tags can provide intra-subnet micro-segmentation with policy coverage that automatically applies to workloads wherever they are deployed, independent of the network architecture. In addition to stateful inspection capabilities and tag support, Cloud Next Generation Firewall also supports Threat Intelligence, FQDN, and geolocation filtering. We recommend that you migrate from VPC firewall rules to firewall policies. To assist with the migration, use the migration tool, which creates a global network firewall policy and converts existing VPC firewall rules into the new policy. Perimeter security in cloud In a multicloud network environment, perimeter security is usually implemented at each network. For example, the on-premises network has its own set of perimeter firewalls, while each cloud network implements separate perimeter firewalls. Because the Cross-Cloud Network is designed to be the hub for all communications, you can unify and centralize the perimeter security controls and deploy a single set of perimeter firewalls in your Cross-Cloud Network.
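Before moving to perimeter controls, the following Terraform sketch illustrates the hierarchical firewall policy pattern described above: a folder-level policy with a rule that permits Google Cloud health-check probes and a lower-priority goto_next rule that delegates everything else to network firewall policies. The folder ID and names are placeholder assumptions.

# Sketch: hierarchical firewall policy attached to a folder.
resource "google_compute_firewall_policy" "baseline" {
  parent      = "folders/123456789012" # placeholder folder ID
  short_name  = "org-baseline-policy"
  description = "Mandatory rules managed centrally"
}

# Allow Google Cloud health-check probe ranges across the folder.
resource "google_compute_firewall_policy_rule" "allow_health_checks" {
  firewall_policy = google_compute_firewall_policy.baseline.id
  priority        = 1000
  direction       = "INGRESS"
  action          = "allow"

  match {
    src_ip_ranges = ["35.191.0.0/16", "130.211.0.0/22"]
    layer4_configs {
      ip_protocol = "tcp"
    }
  }
}

# Delegate all other evaluation to lower-level network firewall policies.
resource "google_compute_firewall_policy_rule" "delegate" {
  firewall_policy = google_compute_firewall_policy.baseline.id
  priority        = 2147483645
  direction       = "INGRESS"
  action          = "goto_next"

  match {
    src_ip_ranges = ["0.0.0.0/0"]
    layer4_configs {
      ip_protocol = "all"
    }
  }
}

# Enforce the policy by associating it with the folder.
resource "google_compute_firewall_policy_association" "folder" {
  name              = "baseline-association"
  firewall_policy   = google_compute_firewall_policy.baseline.id
  attachment_target = "folders/123456789012" # placeholder folder ID
}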
To deliver a built-in perimeter security stack of choice, Cross-Cloud Network provides flexible options for you to insert NVAs. In the designs shown in the diagrams, you can deploy third-party NVAs in the transit VPC in the hub project. NVAs can be deployed over a single network interface (single-NIC mode) or over multiple network interfaces across multiple VPCs (multi-NIC mode). For Cross-Cloud Network, we recommend a single-NIC deployment for NVAs, because this option lets you do the following: Insert the NVAs with policy-based routes. Avoid creating rigid topologies. Deploy in a variety of inter-VPC topologies. Enable autoscaling for the NVAs. Scale to many VPCs over time, without required changes to the NVA interface deployment. If your design requires multi-NIC, recommendations are detailed in Multi-NIC NVA perimeter security. To accomplish the traffic steering that's required for NVA deployment, this guide recommends the selective enforcement of policy-based and static routes in the VPC routing tables. The policy-based routes are more flexible than standard routes because policy-based routes match on both source and destination information. These policy-based routes are also enforced only at specific places in the cloud network topology. This granularity allows the definition of very specific traffic steering behavior for very specific connectivity flows. In addition, this design enables the resiliency mechanisms required by the NVAs. NVAs are fronted by an internal TCP/UDP load balancer to enable NVA redundancy, autoscaling for elastic capacity, and flow symmetry to support stateful bi-directional traffic processing. Single-NIC NVA perimeter security In the design described in Inter-VPC connectivity for centralized services, the transit VPC acts as a hub to the spoke VPCs that are connected by using VPC Network Peering and HA VPN. The transit VPC also enables connectivity between the external networks and the spoke VPCs. For the purpose of single-NIC NVA insertion, this design combines the following two patterns: Insert NVAs at a VPC Network Peering hub with external hybrid connections Insert NVAs at an HA VPN VPC hub with external hybrid connections The following diagram shows NVAs inserted at the hubs for VPC Network Peering and HA VPN: The preceding diagram illustrates a combined pattern: A transit VPC that hosts the Cloud Interconnect VLAN attachments that provide hybrid or multicloud connectivity. This VPC also contains the single-NIC NVAs that monitor the hybrid connections. Application VPCs connected to the transit VPC over VPC Network Peering. A central services VPC connected to the transit VPC over HA VPN. In this design the spokes that are connected using HA VPN use the transit VPC to communicate with the spokes that are connected by VPC Network Peering. The communication is steered through the third-party NVA firewalls by using the following combination of passthrough load balancing, static routes, and policy-based routes: To steer HA VPN traffic to the internal load balancer, apply untagged policy-based routes to the Transit VPC. On these policy-based routes, use source and destination CIDR ranges that provide for traffic symmetry. To steer incoming traffic to the internal passthrough Network Load Balancer, apply policy-based routes to the Cloud Interconnect connections in the Transit VPC. These are regional routes. 
So that traffic leaving the NVA doesn't get routed directly back to the NVA, make all NVA interfaces the target of a skip-policy-based route to skip other policy-based routes. Traffic then follows the VPC routing table once it has been processed by the NVAs. To steer traffic to the NVA internal load balancers in the Transit VPC, apply static routes to the application VPCs. These can be scoped regionally using network tags. Multi-NIC NVA perimeter security In multi-NIC mode, the topology is more static because NVAs bridge the connectivity between the different VPCs in which the different network interfaces reside. When interface-based zones are required in a firewall, the following multi-NIC design enables the required external connectivity. This design assigns different firewall interfaces to the external networks. The external networks are referred to by security practitioners as untrusted networks and the internal networks are known as trusted networks. For the multi-NIC NVA deployment, this design is implemented using trusted and untrusted VPCs. For internal communications, firewalling can be enforced using a single-NIC insertion model that corresponds to a CIDR-based zone model. In this design, you insert NVAs by configuring the following: To steer HA VPN traffic to the internal load balancer, apply untagged policy-based routes to the trusted VPC. On these policy-based routes, use source and destination CIDR ranges that provide for traffic symmetry. To steer incoming traffic to the internal passthrough Network Load Balancer, apply policy-based routes to the Cloud Interconnect connections in the untrusted VPC. These are regional routes. So that traffic leaving the NVA doesn't get routed directly back to the NVA, make all NVA interfaces the target of a skip-policy-based route to skip other policy-based routes. Traffic then follows the VPC routing table once it has been processed by the NVAs. To steer traffic to the NVA internal load balancers in the trusted VPC, apply static routes to the application VPCs. These can be scoped regionally using network tags. The following diagram shows multi-NIC NVAs inserted between the untrusted and trusted VPC networks in the hub project: What's next Learn more about the Google Cloud products used in this design guide: VPC networks VPC Network Peering Cloud Interconnect HA VPN For more reference architectures, design guides, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthors: Victor Moreno | Product Manager, Cloud NetworkingGhaleb Al-habian | Network SpecialistDeepak Michael | Networking Specialist Customer EngineerOsvaldo Costa | Networking Specialist Customer EngineerJonathan Almaleh | Staff Technical Solutions ConsultantOther contributors: Zach Seils | Networking SpecialistChristopher Abraham | Networking Specialist Customer EngineerEmanuele Mazza | Networking Product SpecialistAurélien Legrand | Strategic Cloud EngineerEric Yu | Networking Specialist Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMark Schlagenhauf | Technical Writer, NetworkingMarwan Al Shawi | Partner Customer EngineerAmmett Williams | Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Networking(1).txt b/Networking(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..f3b0e5cce741df64d8195ced9c59c5b8cc0559cd --- /dev/null +++ b/Networking(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/networking +Date Scraped: 2025-02-23T12:07:03.482Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayGoogle Cloud networking productsPlanet-scale networking for a smart and connected world.Contact salesGo to consoleFast, reliable, secure networking that scalesGoogle Cloud offers a broad portfolio of networking services built on top of planet-scale infrastructure that leverages automation, advanced AI, and programmability, enabling enterprises to connect, scale, secure, modernize and optimize their infrastructure.Read more about major industry and technology trends and how Google Cross-Cloud Network helps customers speed up digital transformation and gain long-lasting returns on investment in this recent IDC whitepaper.Take a tour of cloud networking for an overview of the building blocks of Google Cloud networking with this self-paced lab. Connect anywhere with Google Cloud's Cross-Cloud ConnectFind out how these companies use Google Cross-Cloud Network together with AI to optimize network performance, reliability, and savings. Cross-Cloud Network: Private, customizable and flexible networkingTransform enterprise networks, simplify multicloud networking, and secure distributed applications with Google Cross-Cloud Network.What’s new with Google Cloud network observability partner ecosystemIntroducing new network observability solutions and feature enhancements from our partners, as well as two new partners with customized solutions for GKE network observability: Selector and Tigera. 
sparkLooking to build a solution?I want to build a site that can handle sudden traffic spikes without downtime or performance issuesI want to build a fast video streaming platform for a global audienceI want to connect my Google Cloud environment with my environment from another cloud providerMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionPrevent downtimeprompt_suggestionDeliver content effectivelyprompt_suggestionConnect multiple cloudsGoogle Cloud networking services or technologiesCategoryProductsFeaturesConnectCloud ConnectivityFrom high-performance options such as Dedicated Interconnect and Partner Interconnect, to Cloud VPN for lower volume needs, and even direct and carrier peering options, Google Cloud Connectivity has a solution for connecting your infrastructure to the cloud that fits your needs.Uptime, guaranteedFewer disruptions and dropsConnect from anywhereFlexible, low-cost VPNVirtual Private Cloud (VPC)Provision, connect, or isolate Google Cloud resources using the Google global network. Define fine-grained networking policies with Google Cloud, on-premises, or public cloud infrastructure. VPC network includes granular IP address range selection, routes, firewall, Cloud VPN (Virtual Private Network), and Cloud Router.VPC networksPacket mirroringFirewall rulesCloud DNSA scalable, reliable, programmable, and managed authoritative domain naming system (DNS) service running on the same infrastructure as Google. Cloud DNS translates domain names like www.google.com into IP addresses like 74.125.29.101. Use our simple interface, a command line, or API to publish and manage millions of DNS zones and records.Authoritative DNS lookupDomain registration and managementFast anycast name serversNetwork Connectivity CenterNetwork Connectivity Center offers the unique ability to easily connect your on-premises, Google Cloud, and other cloud enterprise networks and manage them as spokes through a single, centralized, logical hub on Google Cloud.Use Google’s network as your ownSingle place to manage global connectivitySimple, flexible, and insightful cloud connectivityPrivate Service ConnectSet up private connectivity to your own, third-party and Google services from your VPC. Private Service Connect helps you consume services faster, simplify network management and secure your data by keeping it inside the Google Cloud network.Access Google APIs and servicesConnect to a service in another VPC networkPublish a service by using an internal load balancerScaleCloud Load BalancingQuickly scale applications on Compute Engine—no pre-warming needed. Distribute load-balanced compute resources in single or multiple regions, and near users, while meeting high-availability requirements. Cloud Load Balancing can put resources behind a single anycast IP, scale up or down with intelligent autoscaling, and integrate with Cloud CDN.Application and Network Load BalancingSeamless autoscalingHigh-fidelity health checksCloud CDNAccelerate content delivery for websites and applications served out of Compute Engine by leveraging Google's globally distributed edge caches. Cloud CDN lowers network latency, offloads origins, and reduces serving costs. 
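For teams that manage this through infrastructure as code rather than the console checkbox described next, a hedged Terraform sketch of enabling Cloud CDN on a backend service might look like the following; the names, cache mode, and TTL are placeholder assumptions.

# Sketch: backend service with Cloud CDN enabled.
resource "google_compute_health_check" "http" {
  name = "http-basic-check"

  http_health_check {
    port = 80
  }
}

resource "google_compute_backend_service" "web_cdn" {
  name          = "web-backend-with-cdn"
  protocol      = "HTTP"
  health_checks = [google_compute_health_check.http.id]

  enable_cdn = true

  cdn_policy {
    cache_mode  = "CACHE_ALL_STATIC" # cache static responses automatically
    default_ttl = 3600
  }
}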
Once you’ve set up Application Load Balancer, simply enable Cloud CDN with a single checkbox.Global distribution with anycast IPOptimized for last-mile performanceIntegrated with Google CloudMedia CDNEfficiently and intelligently deliver streaming experiences to viewers anywhere in the world with Media CDN. Media CDN is running on infrastructure in over 200 counties and 1,300 cities that get content closer to the user with low rebuffer times and high offload rates.Unparalleled planet-scale reachCloud-native and developer-friendly operationsReal-time visibility and planet-scale advanced securityCloud Service MeshCloud Service Mesh combines the control plane of Traffic Director and Anthos Service Mesh to provide traffic management, observability, and security features for your cloud applications. Fully managed, full stopSupports hybrid and multicloud deploymentsWorks for VM-based, serverless, and containerized applicationsSecureCloud ArmorCloud Armor works with Application Load Balancer to provide built-in defenses against infrastructure DDoS attacks. Google Cloud Armor benefits from more than a decade of experience protecting the world's largest internet properties like Google Search, Gmail, and YouTube.IP-based and geo-based access controlSupport for hybrid and multicloud deploymentsPre-configured WAF rulesNamed IP ListsCloud IDSCloud IDS is an intrusion detection service that provides threat detection for intrusions, malware, spyware, and command-and-control attacks on your network. It provides full visibility into network traffic, including both north-south and east-west traffic, letting you monitor VM-to-VM communication to detect lateral movement.Network-based threat detectionBacked by industry-leading threat researchSupports compliance requirements, including PCI 11.4Cloud NATGoogle Cloud's managed network address translation service enables you to provision application instances without public IP addresses, while allowing controlled, efficient internet access. Outside resources cannot directly access any of the private instances behind the Cloud NAT gateway, helping keep your Google Cloud VPCs isolated and secure.Managed NAT serviceMultiple NAT IPs per gatewayConfigurable NAT timeout timersVPC Service ControlsAllows users to define a security perimeter for API-based services (like Cloud Storage buckets, Cloud Bigtable instances, and BigQuery datasets) to help mitigate data exfiltration risks. It enables enterprises to keep their sensitive data private while leveraging Google Cloud’s fully managed storage and data processing capabilities.Centrally manage multi-tenant service access at scaleIdentity and context help securely access multi-tenant servicesEstablish virtual security perimeters for API-based servicesOptimizeNetwork Intelligence CenterNetwork Intelligence Center provides comprehensive network observability along with proactive network verification. Centralized monitoring cuts down troubleshooting time and effort, increases network security, and allows for optimization of the overall user experience.Network Topology: Visualize and monitor the health of your networkConnectivity Tests: Diagnose and prevent connectivity issuesPerformance Dashboard: See real-time network performance metricsFirewall Insights: Keep your firewall rules strict and efficientNetwork Service TiersImprove network experience performance and gain control over network costs with Network Service Tiers. 
Deliver your traffic on Google's high-performance global network, run cost-sensitive workloads, choose the right tier for the workload, and more.Premium Tier for highest performanceStandard Tier for cost controlReady to build your applications on the Google Cloud Network?Hear from our specialists on how to utilize the global network infrastructure for your apps and services.Contact usFind Google Cloud connectivity partners through Cloud Pathfinder by Cloudscene (a third-party website).Find your pathwayRegister for the Professional Cloud Network Engineer certification.Learn from our customersSee how network and security specialists are using Google Cloud for better user experiences.VideoLearn how PayPal, Uber, Applovin, and Electronic Arts used Google's Cross-Cloud Network to accelerate growth2:39Blog postHow Google Cloud NAT helped strengthen Macy’s security5-min readCase StudyU-NEXT: Streaming video to millions with a modern CDN5-min readBlog postCelebrating women in tech: highlighting Symbl.ai5-min readBlog postHow MEDITECH adds advanced security to its cloud-based healthcare solutions with Cloud IDS5-min readSee all customers \ No newline at end of file diff --git a/Networking.txt b/Networking.txt new file mode 100644 index 0000000000000000000000000000000000000000..939290d1caaf57673232721db5ee10964cd936e1 --- /dev/null +++ b/Networking.txt @@ -0,0 +1,5 @@ +URL: 
https://cloud.google.com/architecture/security-foundations/networking +Date Scraped: 2025-02-23T11:45:29.252Z + +Content: +Home Docs Cloud Architecture Center Send feedback Networking Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC Networking is required for resources to communicate within your Google Cloud organization and between your cloud environment and on-premises environment. This section describes the structure in the blueprint for VPC networks, IP address space, DNS, firewall policies, and connectivity to the on-premises environment. Network topology The blueprint repository provides the following options for your network topology: Use separate Shared VPC networks for each environment, with no network traffic directly allowed between environments. Use a hub-and-spoke model that adds a hub network to connect each environment in Google Cloud, with the network traffic between environments gated by a network virtual appliance (NVA). Choose the dual Shared VPC network topology when you don't want direct network connectivity between environments. Choose the hub-and-spoke network topology when you want to allow network connectivity between environments that is filtered by an NVA such as when you rely on existing tools that require a direct network path to every server in your environment. Both topologies use Shared VPC as a principal networking construct because Shared VPC allows a clear separation of responsibilities. Network administrators manage network resources in a centralized host project, and workload teams deploy their own application resources and consume the network resources in service projects that are attached to the host project. Both topologies include a base and restricted version of each VPC network. The base VPC network is used for resources that contain non-sensitive data, and the restricted VPC network is used for resources with sensitive data that require VPC Service Controls. For more information on implementing VPC Service Controls, see Protect your resources with VPC Service Controls. Dual Shared VPC network topology If you require network isolation between your development, non-production, and production networks on Google Cloud, we recommend the dual Shared VPC network topology. This topology uses separate Shared VPC networks for each environment, with each environment additionally split between a base Shared VPC network and a restricted Shared VPC network. The following diagram shows the dual Shared VPC network topology. The diagram describes these key concepts of the dual Shared VPC topology: Each environment (production, non-production, and development) has one Shared VPC network for the base network and one Shared VPC network for the restricted network. This diagram shows only the production environment, but the same pattern is repeated for each environment. Each Shared VPC network has two subnets, with each subnet in a different region. Connectivity with on-premises resources is enabled through four VLAN attachments to the Dedicated Interconnect instance for each Shared VPC network, using four Cloud Router services (two in each region for redundancy). For more information, see Hybrid connectivity between on-premises environment and Google Cloud. By design, this topology doesn't allow network traffic to flow directly between environments. If you do require network traffic to flow directly between environments, you must take additional steps to allow this network path. 
For example, you might configure Private Service Connect endpoints to expose a service from one VPC network to another VPC network. Alternatively, you might configure your on-premises network to let traffic flow from one Google Cloud environment to the on-premises environment and then to another Google Cloud environment. Hub-and-spoke network topology If you deploy resources in Google Cloud that require a direct network path to resources in multiple environments, we recommend the hub-and-spoke network topology. The hub-and-spoke topology uses several of the concepts that are part of the dual Shared VPC topology, but modifies the topology to add a hub network. The following diagram shows the hub-and-spoke topology. The diagram describes these key concepts of hub-and-spoke network topology: This model adds a hub network, and each of the development, non-production, and production networks (spokes) are connected to the hub network through VPC Network Peering. Alternatively, if you anticipate exceeding the quota limit, you can use an HA VPN gateway instead. Connectivity to on-premises networks is allowed only through the hub network. All spoke networks can communicate with shared resources in the hub network and use this path to connect to on-premises networks. The hub networks include an NVA for each region, deployed redundantly behind internal Network Load Balancer instances. This NVA serves as the gateway to allow or deny traffic to communicate between spoke networks. The hub network also hosts tooling that requires connectivity to all other networks. For example, you might deploy tools on VM instances for configuration management to the common environment. The hub-and-spoke model is duplicated for a base version and restricted version of each network. To enable spoke-to-spoke traffic, the blueprint deploys NVAs on the hub Shared VPC network that act as gateways between networks. Routes are exchanged from hub-to-spoke VPC networks through custom routes exchange. In this scenario, connectivity between spokes must be routed through the NVA because VPC Network Peering is non-transitive, and therefore, spoke VPC networks can't exchange data with each other directly. You must configure the virtual appliances to selectively allow traffic between spokes. Project deployment patterns When creating new projects for workloads, you must decide how resources in this project connect to your existing network. The following table describes the patterns for deploying projects that are used in the blueprint. Pattern Description Example usage Shared base projects These projects are configured as service projects to a base Shared VPC host project. Use this pattern when resources in your project have the following criteria: Require network connectivity to the on-premises environment or resources in the same Shared VPC topology. Require a network path to the Google services that are contained on the private virtual IP address. Don't require VPC Service Controls. example_base_shared_vpc_project.tf Shared restricted projects These projects are configured as service projects to a restricted Shared VPC host project. Use this pattern when resources in your project have the following criteria: Require network connectivity to the on-premises environment or resources in the same Shared VPC topology. Require a network path to the Google services contained on the restricted virtual IP address. Require VPC Service Controls. 
example_restricted_shared_vpc_project.tf Floating projects Floating projects are not connected to other VPC networks in your topology. Use this pattern when resources in your project have the following criteria: Don't require full mesh connectivity to an on-premises environment or resources in the Shared VPC topology. Don't require a VPC network, or you want to manage the VPC network for this project independently of your main VPC network topology (such as when you want to use an IP address range that clashes with the ranges already in use). You might have a scenario where you want to keep the VPC network of a floating project separate from the main VPC network topology but also want to expose a limited number of endpoints between networks. In this case, publish services by using Private Service Connect to share network access to an individual endpoint across VPC networks without exposing the entire network. example_floating_project.tf Peering projects Peering projects create their own VPC networks and peer to other VPC networks in your topology. Use this pattern when resources in your project have the following criteria: Require network connectivity in the directly peered VPC network, but don't require transitive connectivity to an on-premises environment or other VPC networks. Must manage the VPC network for this project independently of your main network topology. If you create peering projects, it's your responsibility to allocate non-conflicting IP address ranges and plan for peering group quota. example_peering_project.tf IP address allocation This section introduces how the blueprint architecture allocates IP address ranges. You might need to change the specific IP address ranges used based on the IP address availability in your existing hybrid environment. The following table provides a breakdown of the IP address space that's allocated for the blueprint. The hub environment only applies in the hub-and-spoke topology. 
Purpose | VPC type | Region | Hub environment | Development environment | Non-production environment | Production environment
Primary subnet ranges | Base | Region 1 | 10.0.0.0/18 | 10.0.64.0/18 | 10.0.128.0/18 | 10.0.192.0/18
Primary subnet ranges | Base | Region 2 | 10.1.0.0/18 | 10.1.64.0/18 | 10.1.128.0/18 | 10.1.192.0/18
Primary subnet ranges | Base | Unallocated | 10.{2-7}.0.0/18 | 10.{2-7}.64.0/18 | 10.{2-7}.128.0/18 | 10.{2-7}.192.0/18
Primary subnet ranges | Restricted | Region 1 | 10.8.0.0/18 | 10.8.64.0/18 | 10.8.128.0/18 | 10.8.192.0/18
Primary subnet ranges | Restricted | Region 2 | 10.9.0.0/18 | 10.9.64.0/18 | 10.9.128.0/18 | 10.9.192.0/18
Primary subnet ranges | Restricted | Unallocated | 10.{10-15}.0.0/18 | 10.{10-15}.64.0/18 | 10.{10-15}.128.0/18 | 10.{10-15}.192.0/18
Private services access | Base | Global | 10.16.0.0/21 | 10.16.8.0/21 | 10.16.16.0/21 | 10.16.24.0/21
Private services access | Restricted | Global | 10.16.32.0/21 | 10.16.40.0/21 | 10.16.48.0/21 | 10.16.56.0/21
Private Service Connect endpoints | Base | Global | 10.17.0.1/32 | 10.17.0.2/32 | 10.17.0.3/32 | 10.17.0.4/32
Private Service Connect endpoints | Restricted | Global | 10.17.0.5/32 | 10.17.0.6/32 | 10.17.0.7/32 | 10.17.0.8/32
Proxy-only subnets | Base | Region 1 | 10.18.0.0/23 | 10.18.2.0/23 | 10.18.4.0/23 | 10.18.6.0/23
Proxy-only subnets | Base | Region 2 | 10.19.0.0/23 | 10.19.2.0/23 | 10.19.4.0/23 | 10.19.6.0/23
Proxy-only subnets | Base | Unallocated | 10.{20-25}.0.0/23 | 10.{20-25}.2.0/23 | 10.{20-25}.4.0/23 | 10.{20-25}.6.0/23
Proxy-only subnets | Restricted | Region 1 | 10.26.0.0/23 | 10.26.2.0/23 | 10.26.4.0/23 | 10.26.6.0/23
Proxy-only subnets | Restricted | Region 2 | 10.27.0.0/23 | 10.27.2.0/23 | 10.27.4.0/23 | 10.27.6.0/23
Proxy-only subnets | Restricted | Unallocated | 10.{28-33}.0.0/23 | 10.{28-33}.2.0/23 | 10.{28-33}.4.0/23 | 10.{28-33}.6.0/23
Secondary subnet ranges | Base | Region 1 | 100.64.0.0/18 | 100.64.64.0/18 | 100.64.128.0/18 | 100.64.192.0/18
Secondary subnet ranges | Base | Region 2 | 100.65.0.0/18 | 100.65.64.0/18 | 100.65.128.0/18 | 100.65.192.0/18
Secondary subnet ranges | Base | Unallocated | 100.{66-71}.0.0/18 | 100.{66-71}.64.0/18 | 100.{66-71}.128.0/18 | 100.{66-71}.192.0/18
Secondary subnet ranges | Restricted | Region 1 | 100.72.0.0/18 | 100.72.64.0/18 | 100.72.128.0/18 | 100.72.192.0/18
Secondary subnet ranges | Restricted | Region 2 | 100.73.0.0/18 | 100.73.64.0/18 | 100.73.128.0/18 | 100.73.192.0/18
Secondary subnet ranges | Restricted | Unallocated | 100.{74-79}.0.0/18 | 100.{74-79}.64.0/18 | 100.{74-79}.128.0/18 | 100.{74-79}.192.0/18
The preceding table demonstrates these concepts for allocating IP address ranges: IP address allocation is subdivided into ranges for each combination of base Shared VPC, restricted Shared VPC, region, and environment. Some resources are global and don't require subdivisions for each region. By default, for regional resources, the blueprint deploys in two regions. In addition, there are unused IP address ranges so that you can expand into six additional regions. The hub network is only used in the hub-and-spoke network topology, while the development, non-production, and production environments are used in both network topologies. The following list introduces how each type of IP address range is used.
Primary subnet ranges: Resources that you deploy to your VPC network, such as virtual machine instances, use internal IP addresses from these ranges.
Private services access: Some Google Cloud services such as Cloud SQL require you to preallocate a subnet range for private services access. The blueprint reserves a /21 range globally for each of the Shared VPC networks to allocate IP addresses for services that require private services access. When you create a service that depends on private services access, you allocate a regional /24 subnet from the reserved /21 range.
Private Service Connect: The blueprint provisions each VPC network with a Private Service Connect endpoint to communicate with Google Cloud APIs. This endpoint lets your resources in the VPC network reach Google Cloud APIs without relying on outbound traffic to the internet or publicly advertised internet ranges. 
Proxy-based load balancers: Some types of Application Load Balancers require you to preallocate proxy-only subnets. Although the blueprint doesn't deploy Application Load Balancers that require this range, allocating ranges in advance helps reduce friction for workloads when they need to request a new subnet range to enable certain load balancer resources.
Secondary subnet ranges: Some use cases, such as container-based workloads, require secondary ranges. The blueprint allocates ranges from the RFC 6598 IP address space for secondary ranges.
Centralized DNS setup For DNS resolution between Google Cloud and on-premises environments, we recommend that you use a hybrid approach with two authoritative DNS systems. In this approach, Cloud DNS handles authoritative DNS resolution for your Google Cloud environment and your existing on-premises DNS servers handle authoritative DNS resolution for on-premises resources. Your on-premises environment and Google Cloud environment perform DNS lookups between environments through forwarding requests. The following diagram demonstrates the DNS topology across the multiple VPC networks that are used in the blueprint. The diagram describes the following components of the DNS design that is deployed by the blueprint: The DNS hub project in the common folder is the central point of DNS exchange between the on-premises environment and the Google Cloud environment. DNS forwarding uses the same Dedicated Interconnect instances and Cloud Routers that are already configured in your network topology. In the dual Shared VPC topology, the DNS hub uses the base production Shared VPC network. In the hub-and-spoke topology, the DNS hub uses the base hub Shared VPC network. Servers in each Shared VPC network can resolve DNS records from other Shared VPC networks through DNS forwarding, which is configured between Cloud DNS in each Shared VPC host project and the DNS hub. On-premises servers can resolve DNS records in Google Cloud environments using DNS server policies that allow queries from on-premises servers. The blueprint configures an inbound server policy in the DNS hub to allocate IP addresses, and the on-premises DNS servers forward requests to these addresses. All DNS requests to Google Cloud reach the DNS hub first, which then resolves records from DNS peers. Servers in Google Cloud can resolve DNS records in the on-premises environment using forwarding zones that query on-premises servers. All DNS requests to the on-premises environment originate from the DNS hub. The DNS request source is 35.199.192.0/19. Firewall policies Google Cloud has multiple firewall policy types. Hierarchical firewall policies are enforced at the organization or folder level to inherit firewall policy rules consistently across all resources in the hierarchy. In addition, you can configure network firewall policies for each VPC network. The blueprint combines these firewall policies to enforce common configurations across all environments using hierarchical firewall policies and to enforce more specific configurations at each individual VPC network using network firewall policies. The blueprint doesn't use legacy VPC firewall rules. We recommend that you use only firewall policies and avoid mixing them with legacy VPC firewall rules. Hierarchical firewall policies The blueprint defines a single hierarchical firewall policy and attaches the policy to each of the production, non-production, development, bootstrap, and common folders. 
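To make the mechanics concrete, the following minimal Terraform sketch shows one way a hierarchical firewall policy, a delegation rule, an allow rule, and a folder association could be expressed with the google provider. This is an illustration only, not the blueprint's actual modules; the folder IDs, names, and priorities are placeholders. The rules that the blueprint really deploys are listed in the table that follows.

resource "google_compute_firewall_policy" "hierarchical_policy" {
  parent      = "folders/111111111111"          # placeholder folder or organization ID
  short_name  = "example-hierarchical-policy"   # placeholder name
  description = "Rules enforced broadly across all environments"
}

# Delegate evaluation of inbound RFC 1918 traffic to lower levels in the hierarchy.
resource "google_compute_firewall_policy_rule" "delegate_rfc1918_ingress" {
  firewall_policy = google_compute_firewall_policy.hierarchical_policy.name
  priority        = 1000
  direction       = "INGRESS"
  action          = "goto_next"
  match {
    src_ip_ranges = ["192.168.0.0/16", "10.0.0.0/8", "172.16.0.0/12"]
    layer4_configs {
      ip_protocol = "all"
    }
  }
}

# Allow IAP for TCP forwarding (SSH and RDP), mirroring one rule from the table that follows.
resource "google_compute_firewall_policy_rule" "allow_iap" {
  firewall_policy = google_compute_firewall_policy.hierarchical_policy.name
  priority        = 2000
  direction       = "INGRESS"
  action          = "allow"
  match {
    src_ip_ranges = ["35.235.240.0/20"]
    layer4_configs {
      ip_protocol = "tcp"
      ports       = ["22", "3389"]
    }
  }
}

# Attach the policy to one folder; the blueprint repeats this for each environment folder.
resource "google_compute_firewall_policy_association" "production" {
  name              = "attach-to-production-folder"   # placeholder
  firewall_policy   = google_compute_firewall_policy.hierarchical_policy.id
  attachment_target = "folders/222222222222"          # placeholder production folder ID
}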
This hierarchical firewall policy contains the rules that should be enforced broadly across all environments, and delegates the evaluation of more granular rules to the network firewall policy for each individual environment. The following table describes the hierarchical firewall policy rules deployed by the blueprint.
Rule description | Direction of traffic | Filter (IPv4 range) | Protocols and ports | Action
Delegate the evaluation of inbound traffic from RFC 1918 to lower levels in the hierarchy. | Ingress | 192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12 | all | Go to next
Delegate the evaluation of outbound traffic to RFC 1918 to lower levels in the hierarchy. | Egress | 192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12 | all | Go to next
IAP for TCP forwarding | Ingress | 35.235.240.0/20 | tcp:22,3389 | Allow
Windows server activation | Egress | 35.190.247.13/32 | tcp:1688 | Allow
Health checks for Cloud Load Balancing | Ingress | 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22 | tcp:80,443 | Allow
Network firewall policies The blueprint configures a network firewall policy for each network. Each network firewall policy starts with a minimum set of rules that allow access to Google Cloud services and deny egress to all other IP addresses. In the hub-and-spoke model, the network firewall policies contain additional rules to allow communication between spokes. The network firewall policy allows outbound traffic from one spoke to the hub or to another spoke, and allows inbound traffic from the NVA in the hub network. The following table describes the rules in the global network firewall policy deployed for each VPC network in the blueprint.
Rule description | Direction of traffic | Filter | Protocols and ports
Allow outbound traffic to Google Cloud APIs. | Egress | The Private Service Connect endpoint that is configured for each individual network. See Private access to Google APIs. | tcp:443
Deny outbound traffic not matched by other rules. | Egress | all | all
Allow outbound traffic from one spoke to another spoke (for hub-and-spoke model only). | Egress | The aggregate of all IP addresses used in the hub-and-spoke topology. Traffic that leaves a spoke VPC is routed to the NVA in the hub network first. | all
Allow inbound traffic to a spoke from the NVA in the hub network (for hub-and-spoke model only). | Ingress | Traffic originating from the NVAs in the hub network. | all
When you first deploy the blueprint, a VM instance in a VPC network can communicate with Google Cloud services, but not with other infrastructure resources in the same VPC network. To allow VM instances to communicate, you must add additional rules to your network firewall policy and tags that explicitly allow the VM instances to communicate. Tags are added to VM instances, and traffic is evaluated against those tags. Tags additionally have IAM controls so that you can define them centrally and delegate their use to other teams. Note: All references to tags in this document refer to tags with IAM controls. We don't recommend VPC firewall rules with legacy network tags. The following diagram shows an example of how you can add custom tags and network firewall policy rules to let workloads communicate inside a VPC network. The diagram demonstrates the following concepts of this example: The network firewall policy contains Rule 1 that denies outbound traffic from all sources at priority 65530. The network firewall policy contains Rule 2 that allows inbound traffic from instances with the service=frontend tag to instances with the service=backend tag at priority 999. 
The instance-2 VM can receive traffic from instance-1 because the traffic matches the tags allowed by Rule 2. Rule 2 is matched before Rule 1 is evaluated, based on the priority value. The instance-3 VM doesn't receive traffic. The only firewall policy rule that matches this traffic is Rule 1, so outbound traffic from instance-1 is denied. Private access to Google Cloud APIs To let resources in your VPC networks or on-premises environment reach Google Cloud services, we recommend private connectivity instead of outbound internet traffic to public API endpoints. The blueprint configures Private Google Access on every subnet and creates internal endpoints with Private Service Connect to communicate with Google Cloud services. Used together, these controls allow a private path to Google Cloud services, without relying on internet outbound traffic or publicly advertised internet ranges. The blueprint configures Private Service Connect endpoints with API bundles to differentiate which services can be accessed in which network. The base network uses the all-apis bundle and can reach any Google service, and the restricted network uses the vpcsc bundle, which allows access to a limited set of services that support VPC Service Controls. For access from hosts that are located in an on-premises environment, we recommend that you use a convention of custom FQDNs for each endpoint, as described in the following table. The blueprint uses a unique Private Service Connect endpoint for each VPC network, configured for access to a different set of API bundles. Therefore, you must consider how to route service traffic from the on-premises environment to the VPC network with the correct API endpoint, and if you're using VPC Service Controls, ensure that traffic to Google Cloud services reaches the endpoint inside the intended perimeter. Configure your on-premises controls for DNS, firewalls, and routers to allow access to these endpoints, and configure on-premises hosts to use the appropriate endpoint. For more information, see Access Google APIs through endpoints. The following table describes the Private Service Connect endpoints created for each network.
VPC | Environment | API bundle | Private Service Connect endpoint IP address | Custom FQDN
Base | Common | all-apis | 10.17.0.1/32 | c.private.googleapis.com
Base | Development | all-apis | 10.17.0.2/32 | d.private.googleapis.com
Base | Non-production | all-apis | 10.17.0.3/32 | n.private.googleapis.com
Base | Production | all-apis | 10.17.0.4/32 | p.private.googleapis.com
Restricted | Common | vpcsc | 10.17.0.5/32 | c.restricted.googleapis.com
Restricted | Development | vpcsc | 10.17.0.6/32 | d.restricted.googleapis.com
Restricted | Non-production | vpcsc | 10.17.0.7/32 | n.restricted.googleapis.com
Restricted | Production | vpcsc | 10.17.0.8/32 | p.restricted.googleapis.com
To ensure that traffic for Google Cloud services has a DNS lookup to the correct endpoint, the blueprint configures private DNS zones for each VPC network. The following table describes these private DNS zones.
Private zone name | DNS name | Record type | Data
googleapis.com. | *.googleapis.com. | CNAME | private.googleapis.com. (for base networks) or restricted.googleapis.com. (for restricted networks)
googleapis.com. | private.googleapis.com (for base networks) or restricted.googleapis.com (for restricted networks) | A | The Private Service Connect endpoint IP address for that VPC network.
gcr.io. | *.gcr.io | CNAME | gcr.io.
gcr.io. | gcr.io | A | The Private Service Connect endpoint IP address for that VPC network.
pkg.dev. | *.pkg.dev. | CNAME | pkg.dev.
pkg.dev. | pkg.dev. | A | The Private Service Connect endpoint IP address for that VPC network. 
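As a rough illustration of this pattern, the following Terraform sketch declares one base-network endpoint and the matching googleapis.com private zone. It is a simplified sketch, not the blueprint's code: the project, network, and resource names are placeholders, and only one of the record sets is shown.

# Reserve the internal IP address for the Private Service Connect endpoint.
resource "google_compute_global_address" "psc_all_apis" {
  name         = "psc-all-apis"                                              # placeholder
  network      = "projects/example-host/global/networks/base-shared-vpc"     # placeholder network
  address_type = "INTERNAL"
  purpose      = "PRIVATE_SERVICE_CONNECT"
  address      = "10.17.0.4"        # production base endpoint from the table above
}

# Forwarding rule that sends traffic for the all-apis bundle to the endpoint.
resource "google_compute_global_forwarding_rule" "psc_all_apis" {
  name                  = "pscallapis"      # placeholder; PSC endpoint names are short lowercase strings
  network               = google_compute_global_address.psc_all_apis.network
  ip_address            = google_compute_global_address.psc_all_apis.id
  target                = "all-apis"        # use "vpc-sc" for the restricted networks
  load_balancing_scheme = ""                # must be empty for Private Service Connect endpoints for Google APIs
}

# Private zone that maps *.googleapis.com to the endpoint inside the VPC network.
resource "google_dns_managed_zone" "googleapis" {
  name       = "googleapis"                 # placeholder
  dns_name   = "googleapis.com."
  visibility = "private"
  private_visibility_config {
    networks {
      network_url = google_compute_global_address.psc_all_apis.network
    }
  }
}

resource "google_dns_record_set" "googleapis_cname" {
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "*.googleapis.com."
  type         = "CNAME"
  ttl          = 300
  rrdatas      = ["private.googleapis.com."]
}

resource "google_dns_record_set" "googleapis_a" {
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "private.googleapis.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["10.17.0.4"]
}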
The blueprint has additional configurations to enforce that these Private Service Connect endpoints are used consistently. Each Shared VPC network also enforces the following: A network firewall policy rule that allows outbound traffic from all sources to the IP address of the Private Service Connect endpoint on TCP:443. A network firewall policy rule that denies outbound traffic to 0.0.0.0/0, which includes the default domains that are used for access to Google Cloud services. Internet connectivity The blueprint doesn't allow inbound or outbound traffic between its VPC networks and the internet. For workloads that require internet connectivity, you must take additional steps to design the access paths required. For workloads that require outbound traffic to the internet, we recommend that you manage outbound traffic through Cloud NAT to allow outbound traffic without unsolicited inbound connections, or through Secure Web Proxy for more granular control to allow outbound traffic to trusted web services only. For workloads that require inbound traffic from the internet, we recommend that you design your workload with Cloud Load Balancing and Google Cloud Armor to benefit from DDoS and WAF protections. We don't recommend that you design workloads that allow direct connectivity between the internet and a VM using an external IP address on the VM. Hybrid connectivity between an on-premises environment and Google Cloud To establish connectivity between the on-premises environment and Google Cloud, we recommend that you use Dedicated Interconnect to maximize security and reliability. A Dedicated Interconnect connection is a direct link between your on-premises network and Google Cloud. The following diagram introduces hybrid connectivity between the on-premises environment and a Google Virtual Private Cloud network. The diagram describes the following components of the pattern for 99.99% availability for Dedicated Interconnect: Four Dedicated Interconnect connections, with two connections in one metropolitan area (metro) and two connections in another metro. Within each metro, there are two distinct zones within the colocation facility. The connections are divided into two pairs, with each pair connected to a separate on-premises data center. VLAN attachments are used to connect each Dedicated Interconnect instance to Cloud Routers that are attached to the Shared VPC topology. Each Shared VPC network has four Cloud Routers, two in each region, with the dynamic routing mode set to global so that every Cloud Router can announce all subnets, independent of region. With global dynamic routing, Cloud Router advertises routes to all subnets in the VPC network. Cloud Router advertises routes to remote subnets (subnets outside of the Cloud Router's region) with a lower priority compared to local subnets (subnets that are in the Cloud Router's region). Optionally, you can change advertised prefixes and priorities when you configure the BGP session for a Cloud Router. Traffic from Google Cloud to an on-premises environment uses the Cloud Router closest to the cloud resources. Within a single region, multiple routes to on-premises networks have the same multi-exit discriminator (MED) value, and Google Cloud uses equal cost multi-path (ECMP) routing to distribute outbound traffic between all possible routes. On-premises configuration changes To configure connectivity between the on-premises environment and Google Cloud, you must configure additional changes in your on-premises environment. 
The Terraform code in the blueprint automatically configures Google Cloud resources but doesn't modify any of your on-premises network resources. Some of the components for hybrid connectivity from your on-premises environment to Google Cloud are automatically enabled by the blueprint, including the following: Cloud DNS is configured with DNS forwarding between all Shared VPC networks to a single hub, as described in DNS setup. A Cloud DNS server policy is configured with inbound forwarder IP addresses. Cloud Router is configured to export routes for all subnets and custom routes for the IP addresses used by the Private Service Connect endpoints. To enable hybrid connectivity, you must take the following additional steps: Order a Dedicated Interconnect connection. Configure on-premises routers and firewalls to allow outbound traffic to the internal IP address space defined in IP address space allocation. Configure your on-premises DNS servers to forward DNS lookups bound for Google Cloud to the inbound forwarder IP addresses that are already configured by the blueprint. Configure your on-premises DNS servers, firewalls, and routers to accept DNS queries from the Cloud DNS forwarding zone (35.199.192.0/19). Configure on-premises DNS servers to respond to queries from on-premises hosts to Google Cloud services with the IP addresses defined in private access to Cloud APIs. For encryption in transit over the Dedicated Interconnect connection, configure MACsec for Cloud Interconnect or configure HA VPN over Cloud Interconnect for IPsec encryption. For more information, see Private Google Access for on-premises hosts. What's next Read about detective controls (next document in this series). Send feedback \ No newline at end of file diff --git a/Networking_for_hybrid_and_multicloud_workloads.txt b/Networking_for_hybrid_and_multicloud_workloads.txt new file mode 100644 index 0000000000000000000000000000000000000000..6ff21f930a5ed2a8187b3dd11b1ad865901f1d2a --- /dev/null +++ b/Networking_for_hybrid_and_multicloud_workloads.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/network-hybrid-multicloud +Date Scraped: 2025-02-23T11:53:01.228Z + +Content: +Home Docs Cloud Architecture Center Send feedback Networking for hybrid and multi-cloud workloads: Reference architectures Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This document is part of a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. The series consists of the following documents: Designing networks for migrating enterprise workloads: Architectural approaches Networking for secure intra-cloud access: Reference architectures Networking for internet-facing application delivery: Reference architectures Networking for hybrid and multi-cloud workloads: Reference architectures (this document) This document discusses networking for a scenario where workloads run in more than one place, such as on-premises and the cloud, or in multiple cloud environments. Lift-and-shift architecture The first hybrid workload access scenario is a lift-and-shift architecture. Establishing private connectivity You can establish connectivity to on-premises networks using either Dedicated Interconnect or Partner Interconnect. 
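For orientation, one of those VLAN attachments and its Cloud Router might be declared roughly as follows. This is a hedged sketch, not a blueprint module: the project, network, region, interconnect name, ASN, and VLAN tag are placeholders, and a real deployment repeats this per region and per edge availability domain, adding BGP sessions with google_compute_router_interface and google_compute_router_peer resources.

resource "google_compute_router" "onprem_region1" {
  name    = "cr-onprem-region1"       # placeholder
  region  = "us-central1"             # placeholder region
  network = "projects/example-host/global/networks/base-shared-vpc"   # placeholder
  bgp {
    asn = 64514                       # placeholder private ASN
  }
}

resource "google_compute_interconnect_attachment" "vlan_attachment_1" {
  name          = "vlan-attachment-1" # placeholder
  region        = "us-central1"
  router        = google_compute_router.onprem_region1.id
  type          = "DEDICATED"
  interconnect  = "https://www.googleapis.com/compute/v1/projects/example-host/global/interconnects/example-interconnect"   # placeholder
  vlan_tag8021q = 100                 # placeholder VLAN tag
}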
The topology illustrated in figure 1 shows how you can use four Dedicated Interconnect connections in two different metros and different edge availability domains to achieve 99.99% availability. (You can also achieve 99.99% availability using Partner Interconnect.) For connectivity between your Google Cloud networks and your networks hosted by another cloud provider, use Cross-Cloud Interconnect. For more information and detailed recommendations, see Hybrid connectivity between an on-premises environment and Google Cloud in the enterprise foundations blueprint. Figure 1. Configuration of redundant Dedicated Interconnect connections for 99.99% availability. Note: You need to consider the Cloud Router properties when advertising prefixes and assigning route priorities. Network Connectivity Center lets you use Google's network for data transfer between multiple on-premises or cloud-hosted sites. This approach lets you take advantage of the reach and reliability of Google's network when you need to move data. You can use your existing Cloud VPN, Cloud Interconnect, SD-WAN router appliances, and VPC networks as Network Connectivity Center spokes to support data transfer between the on-premises networks, branch sites, other cloud providers, and Google Cloud VPC networks as shown in figure 2. Figure 2. Network Connectivity Center configuration connecting different on-premises enterprise and other cloud networks outside of Google Cloud using the Google backbone network. For more information about setting up Network Connectivity Center, see Considerations in the Network Connectivity Center documentation. SD-WAN appliances Network Connectivity Center lets you use a third-party router appliance as a Network Connectivity Center spoke to establish connectivity between an external site and your VPC network resources. A router appliance could be a third-party SD-WAN router supported by one of our partners or another virtual appliance that lets you exchange routes with the Cloud Router instance. These appliance-based solutions are in addition to the current site-to-cloud connectivity options that are available with Cloud VPN and Cloud Interconnect as spokes. Figure 3 shows a topology that uses SD-WAN appliances. Figure 3. Network Connectivity Center configuration using router appliance for integrating your SD-WAN implementation with Google's network. You can use third-party appliances to perform security functions. The security capabilities of the appliance can be integrated in the router appliance as shown in figure 3. It's also a common pattern to use a networking virtual appliance, where traffic from on-premises lands in a transit VPC network, and the appliance establishes the connectivity with the workload VPC network, as shown in figure 4. For more information about setting up Network Connectivity Center, see Considerations in the Network Connectivity Center documentation. Hybrid services architecture As in the intra-cloud use case discussed in Networking for secure intra-cloud access: Reference architectures, Network Connectivity Center enables connectivity from your branch sites and on-premises networks to Google Cloud. Private Service Connect provides private access to Google-managed services or lets you consume other services that are built and deployed in the cloud. You can also implement network security by using a combination of VPC Service Controls, Google Cloud firewalls, and network virtual appliances, as shown in figure 4. Figure 4. 
Networks with an architecture that uses both a lift-and-shift pattern and a hybrid services design pattern to provide a secure data plane. Zero Trust Distributed Architecture In a hybrid environment, microservices run in service meshes that are deployed across different cloud providers and on-premises environments. You can help secure communication between the microservices by using mutual Transport Layer Security (mTLS) and authorization policies. It's a common practice for enterprises to build service meshes in the cloud and to extend the meshes to on-premises environments. Figure 5 shows an example in which services that are deployed on-premises access the services in the cloud. End-to-end mTLS between the services is enabled using the east-west gateway and Server Name Indication (SNI)-based routing. Cloud Service Mesh helps you secure service-to-service communications, letting you configure authorization policies for the services and deploy certificates and keys that are provided by a managed certificate authority. Hybrid environments typically feature multiple mesh deployments, such as multiple GKE clusters. An important component in this flow is SNI routing, which is used at the GKE east-west gateway for each cluster. This configuration allows direct-to-workload mTLS routing by the gateway while preserving end-to-end mTLS connectivity. Figure 5. Zero-trust service mesh deployed across an on-premises environment and Google Cloud. Enterprises can use Cloud Service Mesh to deploy across clouds. To address challenges in managing identity and certificates across cloud providers, Cloud Service Mesh provides workload identity and an intermediate in-cluster certificate authority (CA), using Certificate Authority Service (CA Service). The intermediate CA can be chained to an external CA or can be hosted by Google. You can customize CA attributes like region and the signature algorithm, using both enterprise-owned HSMs and Cloud HSM. Workload identity lets you assign distinct, fine-grained identities and authorization for each microservice in your cluster. Cloud Service Mesh manages the process of issuing certificates and of automatically rotating keys and certificates, without disrupting communications. It also provides a single root of trust across GKE clusters. Figure 6 shows an architecture that uses Cloud Service Mesh to manage identity and authorization. Services in the mesh can access Google Cloud services using workload identity federation. This feature lets services act with the authority of a Google service account when they invoke Google Cloud APIs. Workload identity federation also lets the service mesh that's installed in other cloud providers access Google Cloud APIs. Figure 6. Zero-trust service mesh deployed across clouds. You can use Cloud Service Mesh to route traffic from the mesh to on-premises or to any other cloud. For example, you can create services in Cloud Service Mesh called on-prem-service and other-cloud-service and add hybrid connectivity network endpoint groups (NEGs) that have endpoints 10.1.0.1:80 and 10.2.0.1:80. Cloud Service Mesh then distributes this configuration to its clients, which are mesh sidecar proxies that run alongside your applications. Thus, when your application sends a request to the on-prem-service service, the Cloud Service Mesh client inspects the request and directs it to the 10.1.0.1:80 endpoint. Figure 7 illustrates this configuration. Figure 7. Traffic steered from a service mesh using Cloud Service Mesh. 
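A hedged sketch of how the on-prem-service hybrid NEG and its endpoint could be defined is shown below; the zone, network, and names are placeholders, and the Cloud Service Mesh routing resources that reference the NEG are omitted.

# Hybrid connectivity NEG whose endpoints live outside Google Cloud (reachable over Cloud Interconnect or VPN).
resource "google_compute_network_endpoint_group" "on_prem_service" {
  name                  = "on-prem-service-neg"    # placeholder
  network               = "projects/example-project/global/networks/example-vpc"   # placeholder
  zone                  = "us-central1-a"          # placeholder zone
  network_endpoint_type = "NON_GCP_PRIVATE_IP_PORT"
  default_port          = 80
}

# Register the on-premises endpoint used in the example above.
resource "google_compute_network_endpoint" "on_prem_endpoint" {
  network_endpoint_group = google_compute_network_endpoint_group.on_prem_service.name
  zone                   = "us-central1-a"
  ip_address             = "10.1.0.1"
  port                   = 80
}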
You can also incorporate advanced functionality such as weight-based traffic steering, as shown in figure 8. This capability lets you enable critical enterprise needs such as cloud migration. Cloud Service Mesh serves as a versatile, globally managed control plane for your service meshes. Figure 8. Weighted traffic steered using Cloud Service Mesh. What's next Networking for secure intra-cloud access: Reference architectures. Networking for internet-facing application delivery: Reference architectures Migration to Google Cloud can help you to plan, design, and implement the process of migrating your workloads to Google Cloud. Landing zone design in Google Cloud has guidance for creating a landing zone network. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Networking_for_internet-facing_application_delivery.txt b/Networking_for_internet-facing_application_delivery.txt new file mode 100644 index 0000000000000000000000000000000000000000..0b98cff971233c3ad6cac0eaeba6a48f958785a5 --- /dev/null +++ b/Networking_for_internet-facing_application_delivery.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/network-application-delivery +Date Scraped: 2025-02-23T11:52:59.210Z + +Content: +Home Docs Cloud Architecture Center Send feedback Networking for internet-facing application delivery: Reference architectures Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This document is part of a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. The series consists of the following documents: Designing networks for migrating enterprise workloads: Architectural approaches Networking for secure intra-cloud access: Reference architectures Networking for internet-facing application delivery: Reference architectures (this document) Networking for hybrid and multi-cloud workloads: Reference architectures Google offers a set of products and capabilities that help secure and scale your most critical internet-facing applications. Figure 1 shows an architecture that uses Google Cloud services to deploy a web application with multiple tiers. Figure 1. Typical multi-tier web application deployed on Google Cloud. Note: You need to consider limitations of using Application Load Balancers. For more information, see the Limitations section in the "External Application Load Balancer overview" documentation. Lift-and-shift architecture As internet-facing applications move to the cloud, they must be able to scale, and they must have security controls and visibility that are equivalent to those controls in the on-premises environment. You can provide these controls by using network virtual appliances that are available in the marketplace. Figure 2. Application deployed with an appliance-based external load balancer. These virtual appliances provide functionality and visibility that is consistent with your on-premises environments. When you use a network virtual appliance, you deploy the software appliance image by using autoscaled managed instance groups. It's up to you to monitor and manage the health of the VM instances that run the appliance, and you also maintain software updates for the appliance. After you perform your initial shift, you might want to transition from self-managed network virtual appliances to managed services. 
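Until that transition, deploying the appliance yourself typically looks something like the following Terraform sketch of an autoscaled regional managed instance group. The appliance image, machine type, subnetworks, and sizes are placeholders, and vendor-specific configuration, health checks, and the load balancer frontend are omitted.

resource "google_compute_instance_template" "nva" {
  name_prefix    = "nva-"               # placeholder
  machine_type   = "e2-standard-4"      # placeholder size
  can_ip_forward = true                 # required so the appliance can forward traffic

  disk {
    boot         = true
    source_image = "projects/example-vendor/global/images/example-appliance"   # placeholder appliance image
  }

  # Outside (untrusted) and inside (trusted) interfaces; subnetworks are placeholders.
  network_interface {
    subnetwork = "projects/example-project/regions/us-central1/subnetworks/outside"
  }
  network_interface {
    subnetwork = "projects/example-project/regions/us-central1/subnetworks/inside"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_region_instance_group_manager" "nva" {
  name               = "nva-mig"        # placeholder
  region             = "us-central1"
  base_instance_name = "nva"
  version {
    instance_template = google_compute_instance_template.nva.id
  }
}

resource "google_compute_region_autoscaler" "nva" {
  name   = "nva-autoscaler"             # placeholder
  region = "us-central1"
  target = google_compute_region_instance_group_manager.nva.id
  autoscaling_policy {
    min_replicas = 2
    max_replicas = 6
    cpu_utilization {
      target = 0.6
    }
  }
}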
Google Cloud offers a number of managed services to deliver applications at scale. Figure 2 shows a network virtual appliance configured as the frontend of a web tier application. For a list of partner ecosystem solutions, see the Google Cloud Marketplace page in the Google Cloud console. Hybrid services architecture Google Cloud offers the following approaches to manage internet-facing applications at scale: Use Google's global network of anycast DNS name servers that provide high availability and low latency to translate requests for domain names into IP addresses. Use Google's global fleet of external Application Load Balancers to route traffic to an application that's hosted inside Google Cloud, hosted on-premises, or hosted on another public cloud. These load balancers scale automatically with your traffic and ensure that each request is directed to a healthy backend. By setting up hybrid connectivity network endpoint groups, you can bring the benefits of external Application Load Balancer networking capabilities to services that are running on your existing infrastructure outside of Google Cloud. The on-premises network or the other public cloud networks are privately connected to your Google Cloud network through a VPN tunnel or through Cloud Interconnect. Use other network edge services such as Cloud CDN to distribute content, Google Cloud Armor to protect your content, and Identity-Aware Proxy (IAP) to control access to your services. Figure 3 shows hybrid connectivity that uses an external Application Load Balancer. Figure 3. Hybrid connectivity configuration using external Application Load Balancer and network edge services. Figure 4 shows a different connectivity option: using hybrid connectivity network endpoint groups. Figure 4. External Application Load Balancer configuration using hybrid connectivity network endpoint groups. Use an Application Load Balancer (HTTP/HTTPS) to route requests based on their attributes, such as the HTTP uniform resource identifier (URI). Use a proxy Network Load Balancer to implement TLS offload, TCP proxy, or support for external load balancing to backends in multiple regions. Use a passthrough Network Load Balancer to preserve client source IP addresses, to avoid the overhead of proxies, and to support additional protocols like UDP, ESP, and ICMP. Protect your service with Google Cloud Armor. This product is an edge DDoS defense and WAF security product that's available to all services that are accessed through load balancers. Use Google-managed SSL certificates. You can reuse certificates and private keys that you already use for other Google Cloud products. This eliminates the need to manage separate certificates. Enable caching on your application to take advantage of the distributed application delivery footprint of Cloud CDN. Use Cloud Next Generation Firewall to inspect and filter traffic in your VPC networks. Use Cloud IDS to detect threats in north-south traffic, as shown in figure 6. Figure 6. Cloud IDS configuration to mirror and inspect all internet and internal traffic. Zero Trust Distributed Architecture You can expand Zero Trust Distributed Architecture to include application delivery from the internet. In this model, the Google external Application Load Balancer provides global load balancing across GKE clusters, each of which runs its own Cloud Service Mesh mesh. For this scenario, you adopt a composite ingress model. 
The first-tier load balancer provides cluster selection, and then a Cloud Service Mesh-managed ingress gateway provides cluster-specific load balancing and ingress security. An example of this multi-cluster ingress is the Cymbal Bank reference architecture, as described in the enterprise application blueprint. For more information about Cloud Service Mesh edge ingress, see From edge to mesh: Exposing service mesh applications through GKE Ingress. Figure 7 shows a configuration in which an external Application Load Balancer directs traffic from the internet to the service mesh through an ingress gateway. The gateway is a dedicated proxy in the service mesh. Figure 7. Application delivery in a zero-trust microservices environment. What's next Networking for secure intra-cloud access: Reference architectures. Networking for hybrid and multi-cloud workloads: Reference architectures. Use Google Cloud Armor, load balancing, and Cloud CDN to deploy programmable global front ends Migration to Google Cloud can help you to plan, design, and implement the process of migrating your workloads to Google Cloud. Landing zone design in Google Cloud has guidance for creating a landing zone network. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Networking_for_secure_intra-cloud_access.txt b/Networking_for_secure_intra-cloud_access.txt new file mode 100644 index 0000000000000000000000000000000000000000..71cd112ce0b79290b445fa368ca5ef4e4fd149a0 --- /dev/null +++ b/Networking_for_secure_intra-cloud_access.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/network-secure-intra-cloud-access +Date Scraped: 2025-02-23T11:52:57.022Z + +Content: +Home Docs Cloud Architecture Center Send feedback Networking for secure intra-cloud access: Reference architectures Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-13 UTC This document is part of a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. The series consists of the following documents: Designing networks for migrating enterprise workloads: Architectural approaches Networking for secure intra-cloud access: Reference architectures (this document) Networking for internet-facing application delivery: Reference architectures Networking for hybrid and multi-cloud workloads: Reference architectures Workloads for intra-cloud use cases reside in VPC networks and need to connect to other resources in Google Cloud. They might consume services that are provided natively in the cloud, like BigQuery. The security perimeter is provided by a variety of first-party (1P) and third-party (3P) capabilities like firewalls, VPC Service Controls, and network virtual appliances. In many cases, these workloads span multiple Google Cloud VPC networks, and the boundaries between the VPC networks need to be secured. This document covers these security and connectivity architectures in depth. Lift-and-shift architecture The first scenario for an intra-cloud use case is a lift-and-shift architecture where you're moving established workloads to the cloud as is. Cloud NGFW You can help establish a secure perimeter by configuring Cloud Next Generation Firewall. You can use Tags, service accounts, and network tags to apply fine-grained firewall rules to VMs. 
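For example, a minimal VPC firewall rule keyed on network tags might look like the following sketch; the network, tag values, and port are placeholders, and the same intent can be expressed with secure Tags or with service accounts.

# Allow only tagged frontend VMs to reach tagged backend VMs on one application port.
resource "google_compute_firewall" "allow_frontend_to_backend" {
  name      = "allow-frontend-to-backend"   # placeholder
  network   = "example-vpc"                 # placeholder VPC network
  direction = "INGRESS"

  allow {
    protocol = "tcp"
    ports    = ["8080"]                     # placeholder application port
  }

  source_tags = ["frontend"]                # network tag on the client VMs
  target_tags = ["backend"]                 # network tag on the server VMs
}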
For implementation guidelines on how to manage traffic with Google Cloud firewall rules, see Network firewall policies in the enterprise foundations blueprint. You can also use Firewall Rules Logging to audit and verify the effects of your firewall rule settings. You can use VPC Flow Logs for network forensics and also stream the logs to integrate with a SIEM. This overall system can provide real-time monitoring, correlation of events, analysis, and security alerts. Figure 1 shows how firewall rules can use network tags to help restrict traffic among VMs in a VPC network. Figure 1. Network firewall configuration that uses network tags to apply fine-grained egress control. Network virtual appliance A network virtual appliance (NVA) is a VM that has security functions such as web application firewalls (WAF) or security application-level firewalls. NVAs with multiple network interfaces can be used to bridge between VPC networks. You can use NVAs to apply security functions to traffic between VPC networks, especially when you're using a hub-and-spoke configuration, as shown in figure 2. Figure 2. Centralized network appliance configuration in a Shared VPC network. Note: In order to enable packets to be forwarded by the network virtual appliance, you need to enable IP forwarding when you create the VM. Doing so disables the packet source and destination checking (anti-spoofing). Additionally, you must configure the required static routes with the next hop as the instance name or the internal IP address. For more information, see Considerations for next hop instances in the routes documentation. Cloud IDS Cloud Intrusion Detection System (Cloud IDS) lets you implement native security inspection and logging by mirroring traffic from a subnet in your VPC network. By using Cloud IDS, you can inspect and monitor a wide variety of threats at the network layer and at the application layer for analysis. You create Cloud IDS endpoints in your Google Cloud VPC network. These endpoints monitor ingress and egress traffic to and from that network, as well as intra-VPC network traffic, by using the packet mirroring functionality that's built into the Google Cloud networking stack. You must enable private services access in order to connect to the service producer project (the Google-managed project) that hosts the Cloud IDS processes. If you have a hub-and-spoke architecture, traffic from each of the spokes can be mirrored to the Cloud IDS instances, as shown in figure 3. Figure 3. Cloud IDS configuration to mirror VPC traffic that uses private services access. Cloud IDS can be secured in your VPC Service Controls service perimeter with an additional step. You can read more about VPC Service Controls support in the list of supported products. Network Connectivity Center Network Connectivity Center is an orchestration framework that simplifies network connectivity among resources that are connected to a central management resource called a hub. Network Connectivity Center supports the following types of networks: Google Cloud VPC networks On-premises and other cloud networks using Cloud Interconnect or HA VPN Encrypted connections anchored by VMs Network Connectivity Center is the control plane of the architecture. Connections to networks are called spokes. You can use Network Connectivity Center to connect networks together in either a full-mesh or hub-and-spoke topology. 
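A hedged sketch of a hub with a single VPC spoke follows; the project, network, and names are placeholders, and Cloud Interconnect, HA VPN, and router appliance spokes would use the corresponding linked_* blocks instead of linked_vpc_network.

resource "google_network_connectivity_hub" "hub" {
  name        = "example-hub"          # placeholder
  description = "Central hub for VPC and hybrid spokes"
}

# A VPC network attached to the hub as a spoke.
resource "google_network_connectivity_spoke" "vpc_spoke" {
  name     = "example-vpc-spoke"       # placeholder
  location = "global"                  # VPC spokes are global
  hub      = google_network_connectivity_hub.hub.id
  linked_vpc_network {
    uri = "projects/example-project/global/networks/example-vpc"   # placeholder
  }
}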
VPC Network Peering For applications that span multiple VPC networks, whether they belong to the same Google Cloud project or to the same organization resource, VPC Network Peering enables connectivity between VPC networks. This connectivity lets traffic stay within Google's network so that it does not traverse the public internet. A hub-and-spoke architecture is a popular model for VPC connectivity. This model is useful when an enterprise has various applications that need to access a common set of services, such as logging or authentication. The model is also useful if the enterprise needs to implement a common set of security policies for traffic that's exiting the network through the hub. For guidance on setting up a hub-and-spoke architecture using VPC Network Peering, see Cross-Cloud Network inter-VPC connectivity using VPC Network Peering. Shared VPC You can use Shared VPC, to maintain centralized control over network resources like subnets, routes, and firewalls in host projects. This level of control lets you implement the security best practice of least privilege for network administration, auditing, and access control because you can delegate network administration tasks to network and security administrators. You can assign the ability to create and manage VMs to instance administrators by using service projects. Using a service project ensures that the VM administrators are only given the ability to create and manage instances, and that they are not allowed to make any network-impacting changes in the Shared VPC network. For example, you can provide more isolation by defining two VPC networks that are in two host projects and by attaching multiple service projects to each network, one for production and one for testing. Figure 6 shows an architecture that isolates a production environment from a testing environment by using separate projects. For more information about best practices for building VPC networks, see Best practices and reference architectures for VPC design. Figure 6. Shared VPC network configuration that uses multiple isolated hosts and service projects (test and production environments). Hybrid services architecture The hybrid services architecture provides additional cloud-native services that are designed to let you connect and secure services in a multi-VPC environment. These cloud-native services supplement what is available in the lift-and-shift architecture and can make it easier to manage a VPC-segmented environment at scale. Private Service Connect Private Service Connect lets a service that's hosted in one VPC network be surfaced in another VPC network. There is no requirement that the services be hosted by the same organization resource, so Private Service Connect can be used to privately consume services from another VPC network, even if it's attached to another organization resource. You can use Private Service Connect in two ways: to access Google APIs or to access services hosted in other VPC networks. Use Private Service Connect to access Google APIs When you use Private Service Connect, you can expose Google APIs by using a Private Service Connect endpoint that's a part of your VPC network, as shown in figure 7. Figure 7. Private Service Connect configuration to send traffic to Google APIs by using a Private Service Connect endpoint that's private to your VPC network. Workloads can send traffic to a bundle of global Google APIs by using a Private Service Connect endpoint. 
In addition, you can use a Private Service Connect backend to access a single Google API, extending the security features of load balancers to API services. Figure 8 shows this configuration. Figure 8. Private Service Connect configuration to send traffic to Google APIs by using a Private Service Connect backend. Use Private Service Connect between VPC networks or entities Private Service Connect also lets a service producer offer services to a service consumer in another VPC network either in the same organization resource or in a different one. A service producer VPC network can support multiple service consumers. The consumer can connect to the producer service by sending traffic to a Private Service Connect endpoint located in the consumer's VPC network. The endpoint forwards the traffic to the VPC network containing the published service. Note: If you're configuring Private Service Connect by using forwarding rules, the consumer's source IP address is translated by using source NAT (SNAT) to an IP address that's selected from one of the Private Service Connect subnets. If you want to retain the consumer connection IP address information, you need to configure that option. For more information, see Consumer connection information. Figure 9. Private Service Connect configuration to publish a managed service through a service attachment and consume the service through an endpoint. Private services access Private Service Connect is the recommended way for a service producer to provide a service to a service consumer. However, Private Service Connect doesn't support all services. You can use private services access to reach those services instead. Serverless VPC Access connector A Serverless VPC Access connector handles traffic between your serverless environment and your VPC network. When you create a connector in your Google Cloud project, you attach it to a specific VPC network and region. You can then configure your serverless services to use the connector for outbound network traffic. You can specify a connector by using a subnet or a CIDR range. Traffic sent through the connector into the VPC network originates from the subnet or the CIDR range that you specified, as shown in figure 10. Figure 10. Serverless VPC Access connector configuration to access Google Cloud serverless environments by using internal IP addresses inside your VPC network. Serverless VPC Access connectors are supported in every region that supports Cloud Run, Cloud Run functions, or the App Engine standard environment. For more information, see the list of supported services and supported networking protocols for using Serverless VPC Access connectors. Direct VPC egress Direct VPC egress lets your Cloud Run service send traffic to a VPC network without setting up a Serverless VPC Access connector. VPC Service Controls VPC Service Controls helps you prevent data exfiltration from services such as Cloud Storage or BigQuery by blocking access from the internet or from projects that are not part of a security perimeter, even when IAM would otherwise authorize that access. For example, consider a scenario where human error or incorrect automation causes IAM policies to be set incorrectly on a service such as Cloud Storage or BigQuery. As a result, resources in these services become publicly accessible. In that case, there is a risk of data exposure. If you have these services configured as part of the VPC Service Controls perimeter, ingress access to the resources is blocked, even if IAM policies allow access.
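To sketch the scenario described above in Terraform, the following hypothetical service perimeter restricts Cloud Storage and BigQuery for a single project. The access policy ID, project number, and names are placeholders, and a real deployment would also define ingress and egress rules:

# Hypothetical example: a VPC Service Controls perimeter around Cloud Storage and BigQuery.
resource "google_access_context_manager_service_perimeter" "data_perimeter" {
  parent = "accessPolicies/123456789"   # assumed access policy ID
  name   = "accessPolicies/123456789/servicePerimeters/data_perimeter"
  title  = "data_perimeter"

  status {
    # Projects protected by the perimeter (project numbers, not project IDs).
    resources = ["projects/111111111111"]

    # APIs whose resources can be reached only from inside the perimeter.
    restricted_services = [
      "storage.googleapis.com",
      "bigquery.googleapis.com",
    ]
  }
}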
VPC Service Controls can create perimeters based on client attributes such as identity type (service account or user) and network origin (IP address or VPC network). VPC Service Controls helps mitigate the following security risks: Access from unauthorized networks that use stolen credentials. Data exfiltration by malicious insiders or compromised code. Public exposure of private data caused by misconfigured IAM policies. Figure 11 shows how VPC Service Controls lets you establish a service perimeter to help mitigate these risks. Figure 11. VPC service perimeter extended to hybrid environments by using private services access. By using ingress and egress rules, you can enable communication between two service perimeters, as shown in figure 12. Figure 12. Configuring ingress and egress rules to communicate between service perimeters. For detailed recommendations for VPC Service Controls deployment architectures, see Design and architect service perimeters. For more information about the list of services that are supported by VPC Service Controls, see Supported products and limitations. Zero Trust Distributed Architecture Network perimeter security controls are necessary but not sufficient to support the security principles of least privilege and defense in depth. Zero Trust Distributed Architectures build on, but don't solely rely on, the network perimeter edge for security enforcement. As distributed architectures, they are composed of microservices with per-service enforcement of security policy, strong authentication, and workload identity. You can implement Zero Trust Distributed Architectures as services managed by Cloud Service Mesh. Cloud Service Mesh Cloud Service Mesh provides an out-of-the-box mTLS Zero Trust Distributed Architecture microservice mesh that's built on Istio foundations. You set up the mesh by using an integrated flow. Managed Cloud Service Mesh, with Google-managed data and control planes, is supported on GKE. An in-cluster control plane is also available, which is suitable for other environments such as Google Distributed Cloud or GKE Multi-Cloud. Cloud Service Mesh manages identity and certificates for you, providing an Istio-based authorization policy model. Cloud Service Mesh relies on fleets for managing multi-cluster service deployment configuration and identity. When your workloads operate in a flat (or shared) VPC network connectivity environment, there are no special network connectivity requirements beyond firewall configuration. When your architecture includes multiple Cloud Service Mesh clusters across separate VPC networks or networking environments, such as across a Cloud Interconnect connection, you also need an east-west gateway. Best practices for networking for Cloud Service Mesh are the same as those that are described in Best practices for GKE networking. Cloud Service Mesh also integrates with Identity-Aware Proxy (IAP). IAP lets you set fine-grained access policies so that you can control user access to a workload based on attributes of the originating request, such as user identity, IP address, and device type. This level of control enables an end-to-end zero-trust environment. You need to consider GKE cluster requirements when you use Cloud Service Mesh. For more information, see the Requirements section in the "Single project installation on GKE" documentation. What's next Networking for internet-facing application delivery: Reference architectures.
Networking for hybrid and multi-cloud workloads: Reference architectures. Network security for distributed applications in Cross-Cloud Network. Migration to Google Cloud can help you to plan, design, and implement the process of migrating your workloads to Google Cloud. Landing zone design in Google Cloud has guidance for creating a landing zone network. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/New_Business_Channels_Using_APIs.txt b/New_Business_Channels_Using_APIs.txt new file mode 100644 index 0000000000000000000000000000000000000000..1119d079cc38743510811796dfd1ae98a7ac1e3b --- /dev/null +++ b/New_Business_Channels_Using_APIs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/new-channels-using-apis +Date Scraped: 2025-02-23T11:58:54.967Z + +Content: +New business channels using APIsUnlock new digital channels and business models by making valuable data and services available as APIs to your partners and developers. An API program fuels your ecosystem and generates new revenue sources.Contact usThe 2020 Gartner Magic Quadrant for Full Life Cycle API ManagementRegister to download the reportBenefitsEmpower your ecosystem to build new channels of growthDrive ecosystem participationIncrease the adoption and consumption of your APIs with a seamless onboarding experience.Generate new revenue sourcesOpen up new channels for driving incremental revenue from existing services and data.Monitor and manage APIs to measure successGain end-to-end visibility across API programs with metrics for operations, developer engagement, and business.Key featuresLeverage API life cycle management to support new business growthManage your APIs as products with the API life cycle management capabilities of Apigee such as governance, visibility, analytics, and monetization.SecurityProvides the capabilities you need to securely share APIs outside your organization. The API runtime allows configurable policies for controlling HTTP traffic, enforcing security policies, rate limits, quotas, and message transformation.Developer ecosystemProvides a seamless onboarding journey for API consumers (application developers) to discover and access APIs and the management of multiple developer audiences. Apigee capabilities include developer portal, API catalog, audience management, and self-service and onboarding.Monitoring and analyticsProvides real-time and long-term visibility into API traffic and consumption patterns, as well as anomaly detection, allowing the organization to tune their API management strategy from an operational and business perspective.API monetizationProvides capabilities supporting the creation of a broad range of API packages, revenue models, and reports. Capabilities include the creation and configuration of a broad range of API packages, revenue models, reports, payment gateways, and developer portal integrations.Ready to get started? 
Contact usSee how APIs can help empower your ecosystemHow to thrive in the digital economyRegister to read ebook Maximizing Value with API MonetizationRegister to read ebook Creating world-class developer experiencesRegister to read ebook CustomersDeveloper adoption and business growth as a result of API-driven ecosystemsCase studyAccuWeather onboarded 60,000 developers through their Apigee developer portal.6-min readCase studySwisscom delivers value to customers with 70 consumer apps and 159 APIs.5-min readVideo15% of all Ticketmaster ticket sales in 2018 came via partners.7:26Blog postMagalu experienced 60% growth year over year in ecommerce sales.6-min readBlog postBank BRI gained $50M in revenue through Apigee’s API monetization features.6-min readSee all customersRelated servicesRecommended products and servicesAPI managementGoogle Cloud’s Apigee enables enterprises to develop, secure, monitor, monetize, and analyze APIs with full life cycle API management.API developer portalThe Apigee developer portal empowers API providers to create world-class user experiences for their products.API monetizationApigee API Monetization helps enterprises unlock their APIs’ business value to reach new markets and drive incremental revenue.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/OWASP_Top_10_2021_mitigation_options_on_Google_Cloud.txt b/OWASP_Top_10_2021_mitigation_options_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..07d09325ab65a287c2eb1b7f9bf2f9246a961414 --- /dev/null +++ b/OWASP_Top_10_2021_mitigation_options_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/owasp-top-ten-mitigation +Date Scraped: 2025-02-23T11:56:50.007Z + +Content: +Home Docs Cloud Architecture Center Send feedback OWASP Top 10 2021 mitigation options on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-31 UTC This document helps you identify Google Cloud products and mitigation strategies that can help you defend against common application-level attacks that are outlined in OWASP Top 10. OWASP Top 10 is a list by the Open Web Application Security Project (OWASP) Foundation of the top 10 security risks that every application owner should be aware of. Although no security product can guarantee full protection against these risks, applying these products and services when they make sense in your architecture can contribute to a strong multi-layer security solution. Google infrastructure is designed to help you build, deploy, and operate services in a secure way. Physical and operational security, data encryption at rest and in transit, and many other important facets of a secure infrastructure are managed by Google. You inherit these benefits by deploying your applications to Google Cloud, but you might need to take additional measures to protect your application against specific attacks. The mitigation strategies listed in this document are sorted by application security risk and Google Cloud product. Many products play a role in creating a defense-in-depth strategy against web security risks.
This document provides information about how other products can mitigate OWASP Top 10 risks, but it provides additional detail about how Google Cloud Armor and Apigee can mitigate a wide range of those risks. Google Cloud Armor, acting as a web application firewall (WAF), and Apigee, acting as an API gateway, can be especially helpful in blocking different kinds of attacks. These products are in the traffic path from the internet and can block external traffic before it reaches your applications in Google Cloud. Note: Some descriptions in this document refer to vulnerabilities in the OWASP Juice Shop application and show how to mitigate these vulnerabilities. The Juice Shop application is a useful example application used for security training and awareness, because it contains instances of each of the OWASP Top 10 security vulnerabilities—by design—that an attacker can exploit for testing purposes. Because the purpose of the Juice Shop is to learn about attacks, the Juice Shop challenges refer to challenges in attacking the insecure Juice Shop application, while the challenge solutions provide instructions about exploiting vulnerabilities. In this document, you learn how to mitigate some of the Juice Shop built-in vulnerabilities by using Google Cloud services. Product overviews The Google Cloud products listed in the following table can help defend against the top 10 security risks: Product Summary A01 A02 A03 A04 A05 A06 A07 A08 A09 A10 Access Transparency Expand visibility and control over your cloud provider with administrator access logs and approval controls ✓ ✓ Apigee Design, secure, and scale application programming interfaces ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ Artifact Registry Centrally stores artifacts and build dependencies ✓ Binary Authorization Ensure only trusted container images are deployed on Google Kubernetes Engine ✓ ✓ Cloud Asset Inventory View, monitor, and analyze all your Google Cloud and Google Distributed Cloud or multi-cloud assets across projects and services ✓ ✓ ✓ ✓ Cloud Build Build, test, and deploy in Google Cloud ✓ Cloud Key Management Service Manage encryption keys on Google Cloud ✓ ✓ Cloud Load Balancing Control which ciphers your SSL proxy or HTTPS load balancer negotiates ✓ ✓ ✓ ✓ Cloud Logging Real-time log management and analysis at scale ✓ Cloud Monitoring Collect and analyze metrics, events, and metadata from Google Cloud services and a wide variety of applications and third-party services ✓ Cloud Source Repositories Store, manage, and track code in a single place for your team ✓ Google Cloud Armor A web application firewall (WAF) deployed at the edge of Google's network to help defend against common attack vectors ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ Google Security Operations Automatically find threats in real time and at scale using Google's infrastructure, detection techniques, and signals ✓ Identity Platform Add identity and access management capabilities to applications, protect user accounts, and scale identity management ✓ ✓ Identity-Aware Proxy (IAP) Use identity and context to guard access to your applications and VMs ✓ ✓ ✓ reCAPTCHA Help protect your website from fraudulent activity, spam, and abuse ✓ Secret Manager Store API keys, passwords, certificates, and other sensitive data ✓ ✓ Security Command Center Centralized visibility for security analytics and threat intelligence to help identify vulnerabilities in your applications ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ Sensitive Data Protection Discover, classify, and protect your most sensitive data ✓ ✓ ✓ Titan Security Keys Help protect 
high-value users with phishing-resistant 2FA devices that are built with a hardware chip (with firmware engineered by Google) to verify the integrity of the key ✓ Virtual Private Cloud firewalls Allow or deny connections to or from your virtual machine (VM) instances ✓ VirusTotal Analyze suspicious files and URLs to detect types of malware; automatically share them with the security community ✓ ✓ VPC Service Controls Isolate resources of multi-tenant Google Cloud services to mitigate data exfiltration risks ✓ ✓ Google Cloud security bulletins The latest security bulletins related to Google Cloud products ✓ A01: Broken access control Broken access control refers to access controls that are only partially enforced on the client side, or weakly implemented. Mitigating these issues often requires a rewrite on the application side to properly enforce that resources are accessed only by authorized users. Apigee Use case: Access control enforcement Limit data manipulation Apigee supports a layered approach to implement access controls to keep bad actors from making unauthorized changes or accessing the system. Configure role-based access control (RBAC) to only allow users access to the resources and configuration that they need. Create encrypted key value maps to store sensitive key-value pairs, which appear masked in the Edge UI and in management API calls. Configure single sign-on with your company's identity provider. Configure developer portals to show specific API products according to user role. Configure the portal to show or hide content based on user role. Cloud Asset Inventory Use case: Monitor for unauthorized IT (also known as shadow IT) Outdated compute instances One of the most common vectors for data exposure is orphaned or unauthorized IT infrastructure. Set up real-time notifications to alert you to unexpected running resources, which might be improperly secured or using outdated software. Cloud Load Balancing Use case: Fine-grained SSL and TLS cipher control Prevent the use of weak SSL or TLS ciphers by assigning a predefined group or custom list of ciphers that Cloud Load Balancing can use. Google Cloud Armor Use case: Filter cross-origin requests Filter local or remote file inclusion attacks Filter HTTP parameter pollution attacks Many cases of broken access control cannot be mitigated by using a web application firewall, because applications don't require or don't properly check access tokens for every request, and data can be manipulated client side. Multiple Juice Shop challenges relate to broken access control. For example, posting feedback in another user's name uses the fact that some requests are not authenticated server side. As you can see in the challenge solution, the exploit for this vulnerability is completely client-side and can therefore not be mitigated using Google Cloud Armor. Some challenges can be partially mitigated server side if the application cannot be immediately patched. For example, if cross-site request forgery (CSRF) attacks are possible because your web server implements cross-origin resource sharing (CORS) poorly, as demonstrated in the CSRF Juice Shop challenge, you can mitigate this issue by blocking requests from unexpected origins altogether with a custom rule.
The following rule matches all requests with origins other than example.com and google.com: has(request.headers['origin']) && !((request.headers['origin'] == 'https://example.com')|| (request.headers['origin'] == 'https://google.com') ) When traffic that matches such a rule is denied, the solution for the CSRF challenge stops working. The basket manipulation challenge uses HTTP parameter pollution (HPP) so that you can see how to attack the shop by following the challenge solution. HPP is detected as part of the protocol attack rule set. To help block this kind of attack, use the following rule: evaluatePreconfiguredExpr('protocolattack-stable'). Identity-Aware Proxy and Context-Aware Access Use case: Centralized access control Works with cloud and on-premises Protects HTTP and TCP connections Context-Aware Access IAP lets you use identity and context to form a secure authentication and authorization wall around your application. Prevent broken authorization or access control to your public-facing application with a centrally managed authentication and authorization system built on Cloud Identity and IAM. Enforce granular access controls to web applications, VMs, Google Cloud APIs, and Google Workspace applications based on a user's identity and the context of the request without the need for a conventional VPN. Use a single platform for both your cloud and on-premises applications and infrastructure resources. Security Command Center Security Command Center includes two services that help you address broken access controls: Security Health Analytics and Web Security Scanner. Security Health Analytics supports the following use cases: MFA or 2FA enforcement API key protection SSL policy monitoring Security Health Analytics helps prevent broken access control by monitoring for multi-factor authentication compliance, SSL policy, and the health of your API keys. Web Security Scanner supports the following use cases: Repositories exposed to the public Insecure request header validation Web Security Scanner scans your web applications for vulnerabilities, such as publicly visible code repositories and misconfigured validation of request headers. A02: Cryptographic failures Cryptographic failures can happen due to a lack of encryption or weak encryption in transit, or accidentally exposed sensitive data. Attacks against those vulnerabilities are usually specific to the application and therefore, need a defense-in-depth approach to mitigate. Apigee Use case: Protect sensitive data Use one-way and two-way TLS to guard sensitive information at the protocol level. Use policies such as Assign Message policy and JavaScript policy to remove sensitive data before it's returned to the client. Use standard OAuth techniques and consider adding HMAC, hash, state, nonce, PKCE, or other techniques to improve the level of authentication for each request. Mask sensitive data in the Edge Trace tool. Encrypt sensitive data at rest in key value maps. Cloud Asset Inventory Use case: Search service Access analyzer One of the most common vectors for data exposure is orphaned or unauthorized IT infrastructure. You can identify servers that nobody is maintaining and buckets with over-broad sharing rules by analyzing the cloud asset time series data. Set up real-time notifications to alert you to unexpected provisioning of resources which might be improperly secured or unauthorized. 
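One way to implement the real-time notifications mentioned above is a Cloud Asset Inventory feed that publishes changes to Pub/Sub. The following Terraform sketch is a hypothetical illustration; the project ID, topic name, and the asset types being watched are assumptions, and the exact fields may vary by provider version:

# Hypothetical example: notify on changes to firewall and bucket resources.
resource "google_pubsub_topic" "asset_changes" {
  name = "asset-change-feed"
}

resource "google_cloud_asset_project_feed" "security_relevant_changes" {
  project      = "my-project"                  # assumed project ID
  feed_id      = "security-relevant-changes"
  content_type = "RESOURCE"

  asset_types = [
    "compute.googleapis.com/Firewall",
    "storage.googleapis.com/Bucket",
  ]

  feed_output_config {
    pubsub_destination {
      topic = google_pubsub_topic.asset_changes.id
    }
  }
}

A subscriber on the topic, such as a Cloud Run function, can then alert the security team or open a ticket when an unexpected change appears.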
Cloud Data Loss Prevention API (part of Sensitive Data Protection) Use case: Sensitive data discovery and classification Automatic data masking The Cloud Data Loss Prevention API (DLP API) lets you scan for any potentially sensitive data stored in buckets or databases to prevent unintended information leakage. If disallowed data is identified, it can be automatically flagged or redacted. Cloud Key Management Service Use case: Secure cryptographic key management Cloud Key Management Service (Cloud KMS) helps to prevent potential exposure of your cryptographic keys. Use this cloud-hosted key management service to manage symmetric and asymmetric cryptographic keys for your cloud services the same way that you do on-premises. You can generate, use, rotate, and destroy AES256, RSA 2048, RSA 3072, RSA 4096, EC P256, and EC P384 cryptographic keys. Cloud Load Balancing Use case: Fine-grained SSL and TLS cipher control SSL policies can help prevent sensitive data exposure by giving you control over the SSL and TLS features and ciphers that are allowed in a load balancer. Block unapproved or insecure ciphers as needed. Google Cloud Armor Use case: Filter known attack URLs Restrict sensitive endpoint access In general, sensitive data exposure should be stopped at the source, but because every attack is application specific, web application firewalls can only be used in a limited way to stop data exposure broadly. However, if your application can't be immediately patched, you can restrict access to vulnerable endpoints or request patterns by using Google Cloud Armor custom rules. For example, several Juice Shop challenges about sensitive data exposure can be exploited due to insecure directory traversal and null byte injection attacks. You can mitigate these injections by checking for the strings in the URL with the following custom expression: request.path.contains("%00") || request.path.contains("%2500") You can solve the exposed metrics challenge by accessing the /metrics subdirectory that is used by Prometheus. If you have a sensitive endpoint that is exposed and you can't immediately remove access, you can restrict access to it except for certain IP address ranges. Use a rule similar to the following custom expression: request.path.contains("/metrics") && !(inIpRange(origin.ip, '1.2.3.4/32')) Replace 1.2.3.4/32 with the IP address range that should have access to the metrics interface. Accidentally exposed log files are used to solve one of the Juice Shop challenges. To avoid exposing logs, set a rule disallowing access to log files completely: request.path.endsWith(".log"). Identity-Aware Proxy and Context-Aware Access Use case: Secure remote access to sensitive services Centralized access control Context-Aware Access Use identity and context to form a secure authentication and authorization perimeter around your application. Deploy tools such as internal bug reporting, a corporate knowledge base, or email behind IAP, in order to allow context-aware access only to authorized individuals from anywhere on the internet. With Context-Aware Access, you can enforce granular access controls to web applications, virtual machines (VMs), Google Cloud APIs, and Google Workspace applications based on a user's identity and the context of the request without a conventional VPN.
Based on the zero-trust security model and Google's BeyondCorp implementation, Context-Aware Access lets you provide access for your users, enforce granular controls, and use a single platform for both your cloud and on-premises applications and infrastructure resources. Secret Manager Use case: Crypto keys API keys Other system credentials Secret Manager is a secure storage service for your most valuable data such as API keys, service account passwords, and cryptographic assets. Centrally storing these secrets lets you rely on Google Cloud's authentication and authorization systems, including IAM, to determine whether any given request for access is valid. Secret Manager isn't designed for massive scale operations such as credit card tokenization or individual user password storage. Such applications should rely on Identity Platform for customer identity and access management (CIAM), Cloud Identity for members of your organization, or dedicated tokenization software. Security Command Center Security Command Center includes two services that help you address cryptographic failures: Security Health Analytics and Web Security Scanner. Security Health Analytics supports the following use cases: MFA/2FA enforcement API key protection API key rotation enforcement Compute image privacy SSH key rule enforcement Secure boot monitoring API access security SSL policy monitoring Disabled logging Public bucket ACL alerts Security Health Analytics helps prevent sensitive data exposure by monitoring for multi-factor authentication compliance and the health of your API keys. Get alerts for insecure configurations in container image storage, Cloud Storage, SSL policy, SSH key policy, logging, API access, and more. Web Security Scanner supports the following use case: Unencrypted passwords transmitted over the network Web Security Scanner scans your web applications and reports findings of errors and vulnerabilities. If your application transmits passwords in clear text, Web Security Scanner generates a CLEAR_TEXT_PASSWORD finding. VirusTotal Use case: Phishing prevention VirusTotal lets you scan URLs for malicious content before presenting them to your users or employees, whether they're found in user input, emails, chat, logs, or other locations. VPC Service Controls Use case: Firewall for managed services Wrap critically managed services in a firewall in order to control who can call the service and who the service can respond to. Block unauthorized egress and data exfiltration with outbound perimeter rules on services such as Cloud Run functions. Prevent requests from unauthorized users and locations to managed data stores and databases. Create secure perimeters around powerful or potentially costly APIs. Web Application Scanner Use case: Web application security risk scanner Source repository availability scanner To prevent your web application from exposing sensitive data, ensure that passwords are not sent in clear text. Avoid leaking potentially devastating raw source code by checking for exposed git and Apache Subversion source code repositories. These scans are designed to cover specific OWASP top 10 controls. A03: Injection Injection flaws, such as SQL, NoSQL, OS, and LDAP injection, occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into running unintended commands or accessing data without proper authorization. 
We recommend that user data is sanitized or filtered by the application before it is sent to an interpreter. The following sections discuss the Google Cloud products that can help mitigate this risk. Apigee Use case: SQL injection blocking NoSQL injection blocking LDAP injection blocking JavaScript injection blocking Apigee provides several input validation policies to verify that the values provided by a client match your configured expectations before allowing the further processing of the policies or rules. Apigee, acting as a gateway for the incoming API requests, runs a limit check to ensure that the payload structure falls within an acceptable range. You can configure an API proxy so that the input validation routine transforms the input in order to remove risky character sequences, and then replace them with safe values. There are several approaches to validating input with the Apigee platform: JSONThreatProtection checks the JSON payload for threats. XMLThreatProtection checks the XML payload for threats. JavaScript validates parameters and headers. The RegularExpressionProtection policy handles SQL code injections. The OASValidation policy validates an incoming request or response message against an OpenAPI Specification (JSON or YAML). The SOAPMessageValidation policy validates any XML message against their XSD schemas and can also validate SOAP messages against a WSDL definition. Google Cloud Armor Use case: SQL injection filtering PHP injection filtering Google Cloud Armor can block common injection attacks before they reach your application. For SQL injection (SQLi), Google Cloud Armor has a predefined rule set that is based on the OWASP Modsecurity core rule set. You can build security policies that block common SQLi attacks defined in the core rule set by using the evaluatePreconfiguredExpr('sqli-stable') rule either by itself or in conjunction with other custom rules. For example, you can limit SQLi blocking to specific applications by using a URL path filter. For PHP injection, another preconfigured rule set exists. You can use the evaluatePreconfiguredExpr('php-stable') rule to block common PHP injection attacks. Depending on your application, activating the preconfigured expressions might lead to some false positives because some of the rules in the rule set are quite sensitive. For more information, see troubleshooting false positives and how to tune the rule set to different sensitivity levels. For injection attacks other than those targeting SQL or PHP, you can create custom rules to block requests when specific keywords or escape patterns in those protocols are used in the request path or query. Make sure that these patterns don't appear in valid requests. You can also limit these rules to only be used for specific endpoints or paths that might interpret data passed to them. Additionally, some injection attacks can be mitigated by using the preconfigured rules for remote code execution and remote file injection. Security Command Center Security Command Center includes two services that help you address injection flaws: Container Threat Detection and Web Security Scanner. Container Threat Detection supports the following use cases: Malicious script detection Reverse shell detection Malware installation detection The Malicious Script Executed detector of Container Threat Detection analyzes every shell script executed on the system and reports ones that look malicious. This detector lets you discover shell command injection attacks. 
After a successful shell command injection, an attacker can spawn a reverse shell, which triggers the Reverse Shell detector. Alternatively, they can install malware, which triggers the Added Binary Executed and Added Library Loaded detectors. Web Security Scanner supports the following use cases: Monitoring for cross-site scripting Monitoring for SQL injection Web Security Scanner scans your web applications for vulnerabilities and provides detectors that monitor for cross-site scripting and SQL injection attacks. A04: Insecure Design Insecure design occurs when organizations don't implement the means to evaluate and address threats during the development lifecycle. Threat modeling, when done early in the design and refine phases, and continued throughout the development and testing phases, helps organizations analyze assumptions and failure flaws. A blameless culture of learning from mistakes is key to secure design. Apigee Use cases: Input validation Access controls Fault handling Content protection policies Password management Apigee lets you validate incoming requests and responses to your application using the OASValidation policy. In addition, to protect access, you can configure single sign-on (SSO), role-based access control (RBAC), limit access to APIs (using Auth0 for example) and restrict which IP addresses have access to your environment. Using fault handling rules, you can customize how the API proxy reacts to errors. To protect against unsafe passwords for Apigee global users, Apigee provides password expiration, lockout, and reset password options. In addition, you can enable two-factor authentication (2FA). Cloud Data Loss Prevention API (part of Sensitive Data Protection) Use case: Identify and redact confidential data Using the Cloud Data Loss Prevention API, you can identify confidential data and tokenize it. The DLP API can help you limit the exposure of confidential data, because after data has been tokenized and stored, you can set up access controls to restrict who can view the data. For more information, see Automating the classification of data uploaded to Cloud Storage and De-identification and re-identification of PII in large-scale datasets using Sensitive Data Protection. Secret Manager Use case: Protect storage of credentials Secret Manager lets applications and pipelines access the values of named secrets based on permissions granted with IAM. It also provides programmatic access to secrets so automated processes can access secret values. When enabled, every interaction with Secret Manager provides an audit trail. Use these audit trails to assist with forensics and compliance needs. Security Command Center The Web Security Scanner service that is part of Security Command Center supports the following use case: Identify security vulnerabilities in your applications. Web Security Scanner scans your web applications for vulnerabilities. It follows links and attempts to exercise as many user inputs and event handlers as possible. Its CACHEABLE_PASSWORD_INPUT detector generates a finding if passwords entered on the web application can be cached in a regular browser cache instead of a secure password storage. A05: Security misconfiguration Security misconfiguration refers to unpatched application flaws, open default accounts, and unprotected files and directories that can typically be prevented with application hardening. 
Security misconfiguration can happen in many ways, such as trusting default configurations, making partial configurations that might be insecure, letting error messages contain sensitive details, storing data in the cloud without proper security controls, or misconfiguring HTTP headers. Apigee Use case: Manage security configurations Monitor security configurations A shared flow lets API developers combine policies and resources into a reusable group. By capturing reusable policies and resources in one place, a shared flow helps you ensure consistency, shorten development time, and manage code. You can include a shared flow inside individual API proxies using a FlowCallout policy or you can place shared flows in flow hooks to automatically run shared flow logic for every API proxy deployed in the same environment. Cloud Asset Inventory Use case: Real-time notification service Real-time notifications can alert you to unexpected provisioning of resources that might be improperly secured or unauthorized. Cloud Load Balancing Use case: Fine-grained SSL and TLS cipher control Prevent the use of known-vulnerable SSL or TLS ciphers by assigning a predefined group or custom list of ciphers usable by a load balancer. Google Cloud Armor Use case: Filter insecure endpoints Filter local or remote file inclusion attacks Filter protocol attacks Because security misconfiguration can happen at the application level, the OWASP Foundation recommends hardening and patching your application directly and removing all unnecessary functionality. Although a web application firewall (WAF), such as Google Cloud Armor, can't help you fix the underlying misconfiguration, you can block access to parts of the application either fully or for everyone except specific IP addresses or countries. Restricting access can reduce the risk of those misconfigurations being exploited. For example, if your application exposes an administrative interface using a common URL such as /admin, you can restrict access to this interface even if it is authenticated. You can do this with a deny rule—for example: request.path.contains("/admin") && !(inIpRange(origin.ip, '1.2.3.4/32')) Replace 1.2.3.4/32 with the IP address range that should have access to the administrator interface. Some misconfigurations can be partially mitigated by using the predefined local file inclusion (LFI) or remote file inclusion (RFI) rule sets. For example, exploiting the Juice Shop cross-site imaging challenge doesn't succeed when the LFI rule set is applied. Use the evaluatePreconfiguredExpr('lfi-stable') || evaluatePreconfiguredExpr('rfi-stable') rule to block requests using the LFI and RFI rule sets and tune the rules as necessary. You can verify that the challenge solution no longer succeeds. Some HTTP attacks can also be mitigated using preconfigured rule sets: To avoid HTTP verb tampering, use the method enforcement rule set. Use the evaluatePreconfiguredExpr('methodenforcement-stable') rule to disallow HTTP request methods other than the GET, HEAD, POST, and OPTIONS methods. To block common attacks against HTTP parsing and proxies, such as HTTP request smuggling, HTTP response splitting, and HTTP header injection, use the protocol attack rule set by using the evaluatePreconfiguredExpr('protocolattack-stable') rule. Security Command Center Security Command Center includes two services that help you address security misconfigurations: Security Health Analytics and Web Security Scanner.
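The cipher control recommended in the Cloud Load Balancing entries throughout this document can be captured in a short Terraform sketch like the following; the policy name and the choice of the RESTRICTED profile with a TLS 1.2 minimum are assumptions rather than prescriptions:

# Hypothetical example: an SSL policy that limits the load balancer to
# modern TLS versions and a restricted cipher set.
resource "google_compute_ssl_policy" "restricted_tls" {
  name            = "restricted-tls"
  profile         = "RESTRICTED"   # predefined group of strong ciphers
  min_tls_version = "TLS_1_2"
}

The policy takes effect after it's attached to the load balancer's target HTTPS or SSL proxy.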
Security Health Analytics supports the following use case: Security control monitoring and alerting Security Health Analytics monitors many signals through a single interface to ensure your application is maintaining security best practices. Web Security Scanner supports the following use cases: Web application scanner tailored for OWASP Top 10 HTTP server configuration errors Mixed HTTP/HTTPS content XML external entity (XXE) Web Security Scanner monitors for common security errors, such as content-type mismatches, invalid security headers, and mixed content serving. Web Security Scanner also monitors for vulnerabilities, like XXE vulnerabilities. These scans are designed to cover the OWASP top 10 controls. The following detectors scan for security misconfigurations: INVALID_CONTENT_TYPE INVALID_HEADER MISMATCHING_SECURITY_HEADER_VALUES MISSPELLED_SECURITY_HEADER_NAME MIXED_CONTENT XXE_REFLECTED_FILE_LEAKAGE For more information on these and other detectors, see Overview of Web Security Scanner. A06: Vulnerable and outdated components Components with known vulnerabilities are a category of generic attack vectors, and such vulnerabilities are best mitigated by monitoring and quickly upgrading all of your application components. Binary Authorization Use case: Restrict GKE clusters to trusted containers Binary Authorization is a deploy-time security control that helps ensure that only trusted container images are deployed on Google Kubernetes Engine (GKE). With Binary Authorization, you can require that images are signed by trusted authorities during the development process and then enforce signature validation when deploying. By enforcing validation, you can be assured that your build-and-release process uses only verified images. Cloud Load Balancing Use case: Fine-grained SSL and TLS cipher control Prevent the use of known-vulnerable SSL or TLS ciphers by assigning a predefined group or custom list of ciphers that Cloud Load Balancing can use. Google Cloud Armor Use case: Block access to unused application endpoints Block common attack vectors A web application firewall (WAF) like Google Cloud Armor shouldn't be used as a single mitigation strategy to block attacks against this category, because attacks are often library specific and cannot be blocked by preconfigured rule sets or patched server side. Regularly monitoring and upgrading all components of your application is the only option to mitigate these kinds of vulnerabilities. However, Google Cloud Armor can help mitigate some common attacks against vulnerable applications through its preconfigured rules for remote code execution, local file inclusion, or remote file inclusion. If you're aware of vulnerable components in your application but can't patch the application immediately, you can block access to these parts of your application to temporarily lower the risk of an exploit of these components. Build a custom rule that matches either the URL path or queries that access these vulnerable components and deny access. If you require access to these components from specific users or locations, you can still allow certain trusted source IP addresses to access these components. A rule using the URL path looks similar to the following: request.path.contains("/component") && !(inIpRange(origin.ip, '1.2.3.4/32')) Replace the following: /component: the path of the component with known vulnerabilities 1.2.3.4/32: the IP address range that should keep access to the component.
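For reference, a custom rule like the one above is typically deployed as part of a Google Cloud Armor security policy. The following Terraform sketch is a hypothetical illustration that reuses the same expression; the path, IP address range, and priorities are placeholders:

# Hypothetical example: deny access to a vulnerable component except from a trusted range.
resource "google_compute_security_policy" "waf_policy" {
  name = "waf-policy"

  rule {
    description = "Block /component for everyone outside the trusted range"
    action      = "deny(403)"
    priority    = 1000
    match {
      expr {
        expression = "request.path.contains(\"/component\") && !(inIpRange(origin.ip, '1.2.3.4/32'))"
      }
    }
  }

  # Lowest-priority default rule that allows all remaining traffic.
  rule {
    description = "Default allow"
    action      = "allow"
    priority    = 2147483647
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"]
      }
    }
  }
}

The policy is enforced once it's attached to the backend service behind the external Application Load Balancer.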
If there are parts of your application—for example, certain directories or file types that never need to be accessed by end users—you can also block or restrict access to these resources with a custom rule, proactively mitigating the risk if these components become vulnerable in the future. Google Cloud Security Bulletins Use case: Security bulletin monitoring CVEs for Google Cloud products Google Cloud Security Bulletins are an authoritative source for security bulletins that impact Google Cloud. Posts include background information, CVE links, and recommendations for further action. Security Command Center Security Command Center includes three services that help you address vulnerable and outdated components: Container Threat Detection, Event Threat Detection, and Web Security Scanner. Container Threat Detection supports the following use cases: Malicious script detection Reverse shell detection Malware installation detection If an attacker exploits a vulnerable component and runs a malicious script, the Malicious Script Executed detector of Container Threat Detection generates a finding. If an attacker spawns a reverse shell, the Reverse Shell detector generates a finding. If an attacker installs malware, the Added Binary Executed and Added Library Loaded detectors generate findings. Event Threat Detection supports the following use cases: Cryptomining detection Malware detection Data exfiltration Outgoing DoS Event Threat Detection monitors your Cloud Logging stream and applies detection logic and threat intelligence at a granular level. When Event Threat Detection detects a threat, it writes a finding to Security Command Center and to a Cloud Logging project. The following detection rules are useful for detecting the effects of using components with known vulnerabilities: Cryptomining. Detect cryptomining based on DNS requests or connections to known mining addresses. Malware. Detect malware based on DNS requests or connections to known bad addresses. Exfiltration to external table. Detect resources that are saved outside the organization, including copy or transfer operations. Outgoing DoS. Detect exploited vulnerabilities attempting denial of service attacks. Web Security Scanner supports the following use cases: Outdated libraries Vulnerabilities and findings dashboards Web Security Scanner monitors for outdated libraries included in your web application. You can monitor these findings in the Security Command Center dashboard. A07: Identification and authentication failures Identification and authentication failures are common risks because application authentication and session management are often implemented incorrectly. Attackers can exploit implementation flaws, such as compromised passwords, keys, and session tokens, in order to temporarily or permanently assume other users' identities. Access Transparency Use case: Service provider monitoring Access justifications Usually, if you wanted custom support from external vendors, you had to grant and share temporary credentials, a practice that creates the potential for orphaned or leaked credentials. Access Approval is an integrated service that lets you approve or dismiss requests for access by Google employees working to support your account. Each access request includes an access justification so you can view the reason for each access, including references to support tickets.
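If you manage configuration as code, Access Approval enrollment can also be declared in Terraform. The following sketch reflects the provider's access-approval settings resource as I understand it; the project ID, notification address, and the decision to enroll all services are assumptions:

# Hypothetical example: require explicit approval for Google support access to a project.
resource "google_project_access_approval_settings" "approval" {
  project_id          = "my-project"                  # assumed project ID
  notification_emails = ["secops-team@example.com"]   # assumed notification address

  # Enroll all supported services so access requests need an explicit approval.
  enrolled_services {
    cloud_product = "all"
  }
}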
Apigee Use case: Key validation Token validation OAuth policies Apigee provides VerifyApiKey, OAuth, and JSON Web Token (JWT) policies, which help protect against this risk. API key validation is the simplest form of app-based security that can be configured for an API. A client application presents an API key with its request. Apigee Edge, through a policy attached to an API proxy, checks to see that the API key is in an approved state for the resource that is being requested. OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or on its own behalf by allowing the third-party application to obtain access. JSON Web Tokens or JWT, are commonly used to share claims or assertions between connected applications. Apigee provides JWT support using three policies. Google Cloud Armor Use case: Limit authentication endpoint access Restrict unauthorized token use Attacks against vulnerabilities that are classified under the broken authentication risk are best mitigated on the application level or by other controls. However, Google Cloud Armor can help limit the attack surface or block known-attack vectors. For example, if your application has a limited user base and those users come from a known set of IP addresses or countries, you can create a security policy that limits access to your application to users from those IP address blocks or countries. This policy can help mitigate against automated scanning from endpoints outside of these areas. If other security mechanisms detect that passwords, keys, or session tokens have been compromised, you can block access for requests that contain those parameters in a query string by using a custom rule. You can update rules that you previously defined by using the securityPolicy.patchRule method. You might be able to identify potential stolen tokens by using anomaly detection mechanisms over HTTP load balancing logs. You can also detect potential adversaries by scanning for common passwords in those logs. You can block common session fixation attacks by using the preconfigured ModSecurity rule set for session fixation. You can use the rule set by adding the predefined evaluatePreconfiguredExpr('sessionfixation-stable') rule to your security policy. If your application includes password changes in the query string, you can also block the use of common passwords by using a custom rule that matches the request.query attribute. However, such checks are much better implemented on the application side if possible. Identity-Aware Proxy (IAP) Use case: Centralized access control Works with cloud and on-premises Protect HTTP and TCP connections Context-Aware Access IAP integrates with HTTP(S) load balancing so you can use identity and context to form a secure authentication and authorization wall around your application. Prevent broken authentication to your public-facing application by provisioning external users in Identity Platform (more information in the following section). You can also prevent broken authentication to administrative interfaces by protecting them with Identity-Aware Proxy and authenticating users provisioned with Identity and Access Management or Cloud Identity. Any attempt to access the tool results in a logged authentication attempt followed by an authorization check to ensure the authenticated user is allowed to access the requested resource. 
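To give a flavor of how IAP is enabled in front of a load-balanced application, here is a hedged Terraform sketch. The backend service, OAuth client variables, and group address are placeholders, and the exact iap fields can vary by provider version:

# Hypothetical example: enable IAP on a backend service and grant access to one group.
variable "iap_client_id" {
  type = string
}

variable "iap_client_secret" {
  type      = string
  sensitive = true
}

resource "google_compute_health_check" "app" {
  name = "app-health-check"
  https_health_check {
    port = 443
  }
}

resource "google_compute_backend_service" "app" {
  name          = "app-backend"
  protocol      = "HTTPS"
  health_checks = [google_compute_health_check.app.id]
  # Backends (instance groups or NEGs) would be added here.

  iap {
    oauth2_client_id     = var.iap_client_id       # assumed OAuth client created for IAP
    oauth2_client_secret = var.iap_client_secret
  }
}

# Only members with this role can pass through IAP to the application.
resource "google_iap_web_backend_service_iam_member" "allow_team" {
  web_backend_service = google_compute_backend_service.app.name
  role                = "roles/iap.httpsResourceAccessor"
  member              = "group:app-users@example.com"   # assumed Google group
}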
Identity Platform Use case: Authentication as a service Multi-factor authentication Enterprise SLA Broad protocol support Google Account protection intelligence Identity Platform is the CIAM platform for Google Cloud customers. Identity Platform helps provide secure authentication as a service with multi-protocol support by using SDKs and APIs. It offers multi-factor authentication, integration with third-party authentication services, and auditable activity tracking. reCAPTCHA Use case: Automated login attempts Content scraping Credential stuffing Fraudulent transactions Account takeovers Fake accounts Money laundering reCAPTCHA provides highly effective filtering against bots and other forms of automation and bulk traffic by scoring the risk level of access attempts. You can tune your site-specific model with automated feedback. reCAPTCHA adapts future scores to fit your site. Security Command Center Security Command Center includes three services that help you address identification and authentication failures: Event Threat Detection, Security Health Analytics, and Web Security Scanner. Event Threat Detection supports the following use cases: Brute force detection IAM abuse detection Event Threat Detection monitors your Cloud Logging stream and applies detection logic and proprietary threat intelligence at a granular level. When Event Threat Detection detects a threat, it writes a finding to Security Command Center and to Cloud Logging in the project of your choosing. The following event types are useful for identifying broken authentication: Brute force SSH. Detect successful brute force of SSH on a host. Anomalous grant. Detect privileges granted to Identity and Access Management (IAM) users outside of your Google Cloud organization. Security Health Analytics supports the following use cases: MFA/2FA enforcement API key protection API key rotation enforcement Security Command Center helps prevent broken authentication by monitoring for multi-factor authentication compliance and the health of your API keys. You can identify suspicious requests and block them or flag them for special handling. Web Security Scanner supports the following use case: Session identifier leaks Web Security Scanner scans your web applications for vulnerabilities like session ID leaks, which let other parties impersonate or uniquely identify a user. Titan Security Keys Use case: Phishing-resistant 2FA Mobile and PC authentication Titan Security Keys use public key cryptography to verify a user's identity and the URL of the login page to help ensure that attackers can't access your account even if you are tricked into providing your username and password. A08: Software and data integrity failures Software and data integrity failures can happen when integrity checks don't occur during software updates, processing confidential data, or any process in the CI/CD pipeline. Artifact Registry Use case: Centralize artifacts in a single, trusted location Use version management, vulnerability scanning, approval workflows Artifact Registry is a single place for your organization to manage container images and language packages (such as Maven and npm). It can integrate with your existing development tools and provides vulnerability scanning for your containers using Artifact Analysis. Binary Authorization Use case: Ensure only trusted containers are deployed Binary Authorization verifies the integrity of containers so that only trusted container images are deployed. 
You can create policies to allow or deny deployment based on the presence or absence of attestations. Binary Authorization applies policies at a cluster level, so you can configure different policies for different environments. This distinction allows for progressive attestation requirements as environments get closer to production. Cloud Asset Inventory Use case: Search service Access analyzer One of the most common vectors for data exposure is orphaned or unauthorized IT infrastructure. You can identify servers that nobody is maintaining and buckets with over-broad sharing rules by analyzing the cloud asset time series data. Set up real-time notifications to alert you to unexpected provisioning of resources which might be improperly secured or unauthorized. Cloud Build Use case: Review code changes Run tests Standardize build deployments Cloud Build lets you create a build config to provide instructions on your build deployment, including running static analysis and integration tests. Google Cloud Armor Use case: Block remote code execution Because most attacks against software and data integrity are application specific, there are only a few ways to help mitigate these attacks—for example, using a web application firewall (WAF) like Google Cloud Armor. OWASP recommends that you don't accept serialized objects from untrusted sources. If possible, you can restrict endpoints accepting those objects to a set of trusted IP addresses with a deny rule similar to the following: request.path.contains("/endpoint") && !(inIpRange(origin.ip, '1.2.3.4/32')) Replace the following: /endpoint: the path of the endpoint accepting serialized objects 1.2.3.4/32: the IP address range that should keep access to the endpoint. To mitigate typical attacks against software and data integrity that use remote code execution (RCE), use the predefined rule set against RCE attacks. You can use the evaluatePreconfiguredExpr('rce-stable') rule to block common RCE attacks against UNIX and Windows shells. The RCE attacks described in the Juice Shop challenges for insecure deserialization run functions and regular expressions in Node.js on the server. These kinds of attacks are not blocked by the predefined RCE rule set and the corresponding OWASP Modsecurity rule and have to be mitigated by using patches on the server side or custom rules. VirusTotal Use case: Untrusted data scanning The VirusTotal API lets you upload and scan files for malware. You can scan images, documents, binaries, and other untrusted data before it is processed to eliminate certain categories of malicious input. Security Command Center The Web Security Scanner service in Security Command Center supports the following use case: Insecure deserialization Web Security Scanner scans your web applications for vulnerabilities. For example, if you're using an Apache Struts version that makes your application vulnerable to remote command injection attacks, Web Security Scanner generates a STRUTS_INSECURE_DESERIALIZATION finding. A09: Security logging and monitoring failures If you don't adequately log, monitor, or manage incidents in your systems, attackers can perform deeper and more prolonged attacks on data and software. Access Transparency Use case: Service provider access monitoring and auditing Access justifications Resource and method identification Inability to audit cloud provider access can be a barrier to migrating from on-premises environments to the cloud.
Access Transparency enables verification of cloud provider access, bringing your audit controls closer to on-premises conditions. You can record the reason for each access, including references to relevant support tickets. Resource and method identification names which resources are accessed and which methods were run by which administrator. Access Approval lets you approve or dismiss requests for access by Google employees who are working to support your service. Apigee Use case: Export Apigee logs to SIEM Use Apigee monitoring UI Follow monitoring best practices Apigee has several ways to perform logging, monitoring, error handling, and audit logging: Logging Log messages can be sent to Splunk or other syslog endpoints using the MessageLogging policy. API analytics data can be pulled through the analytics API and imported or exported into other systems. In Edge for Private Cloud, you can use the MessageLogging policy to write to local log files. Log files from each of the running components are available as well. The JavaScript policy can be used to send log messages to a REST logging endpoint synchronously or asynchronously. Monitoring Use the API Monitoring UI or API to regularly monitor APIs and backends and trigger alerts. Use health monitoring to regularly monitor target server backends. Apigee provides recommendations for monitoring Edge for Private Cloud. Apigee also provides best practices that your team can use for monitoring your API program. Error handling Apigee offers a powerful, versatile fault handling mechanism for API proxies. Similar to how a Java program would catch exceptions, API proxies can catch faults and determine how to return appropriate responses to clients. Apigee's custom fault handling lets you add functionality such as message logging whenever an error occurs. Audit logs The Apigee platform keeps an audit log that tracks changes to API proxies, products, and organization history. This log is available through the UI or through the Management API. Google Security Operations Use case: Threat detection Early warning Security teams can send their security telemetry to Google Security Operations to let you apply powerful detection rules to a unified set of data. Sensitive Data Protection Use case: Automatic sensitive data masking Identify compliance-sensitive information in your log streams and mask or transform it appropriately before archiving it in logs. For example, an error message or core dump might contain sensitive information such as credit card numbers or personally identifiable information that needs to be masked. Cloud Key Management Service Use case: Cryptographic key request event logging Access justifications Key Access Justifications give you historical visibility into every request for an encryption key by logging the stated justification and a record of approval or denial of that request. Cloud Logging Use case: Log aggregation Log storage Log search Log analysis Cloud Logging lets you store, search, analyze, monitor, and alert on logging data and events from Google Cloud and Amazon Web Services. It includes access to the BindPlane service, which you can use to collect logging data from over 150 common application components, on-premises systems, and hybrid cloud systems. Cloud Monitoring Use case: Log monitoring Event alerting Cloud Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. It provides a monitoring dashboard, event monitors, and alerting through multiple channels. 
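As a small, illustrative sketch of the Cloud Logging capabilities described above (not part of the original text; the sink name, bucket name, and filter are placeholder values), you could route high-severity logs to long-term storage and query recent errors with gcloud:
# Route error-level and higher logs to a Cloud Storage bucket for retention
gcloud logging sinks create error-archive storage.googleapis.com/my-log-archive-bucket --log-filter='severity>=ERROR'
# Review errors from the last day
gcloud logging read 'severity>=ERROR' --freshness=1d --limit=10
After you create the sink, you also need to grant its writer identity access to the destination bucket before log entries are exported.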
Cloud Source Repositories Use case: Code change attribution Access audit logging Get insights into what actions were performed on your repository, including where and when, with Cloud Audit Logs generated by Cloud Source Repositories. Error Reporting Use case: Capture internal application errors in Cloud Logging Collect crash reports outside of the crashed compute instance Internal application errors can be an indicator of a security problem, broken functionality, or attempts to circumvent security. Error Reporting counts, analyzes, and aggregates the crashes in your running cloud services. A centralized error management interface displays the results with sorting and filtering capabilities. A dedicated view shows the error details—for example, time chart, occurrences, affected user count, first- and last-seen dates, and a cleaned exception stack trace. Opt in to receive email and mobile alerts on new errors. Google Cloud Armor Use case: Security policy logging Monitoring dashboards Alerting on traffic anomalies Google Cloud Armor request logs are part of Cloud Logging for external Application Load Balancers. To get access to logging information—such as which security policy rule matched traffic—enable logging on all backend services that have attached security policies. Use rules in preview mode to test them and log results without enforcing the effects. Google Cloud Armor also offers monitoring dashboards for security policies that let you get an overview of the amount of traffic that passed or was denied by any of your security policies. Google Cloud Armor publishes findings about traffic anomalies, such as spikes in allowed traffic or increased denied traffic, in Security Command Center. Google Cloud Armor automatically writes Admin Activity audit logs, which record operations that modify the configuration or metadata of a resource. This service can also be configured to write Data Access audit logs, which contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data. Identity Platform Use case: Admin Activity audit logs Data access audit logs System event audit logs Policy denied audit logs Authentication activity logs Identity Platform is the CIAM platform for Google Cloud that logs authentication activity by default. Enable several powerful audit logs including administrator activity, data access, system events, and denied authentication attempts. Security Command Center Use cases: Alert monitoring Threat management Vulnerability scan reporting Compliance monitoring Asset monitoring Security scan findings With the compliance dashboard, you can continuously monitor compliance with controls from PCI-DSS, CIS Google Cloud Computing Foundations Benchmark, and more. The Assets page provides a detailed display of all Google Cloud resources, called assets, in your organization. The page lets you view assets for your entire organization, or you can filter assets within a specific project, by asset type, or by change type. Finally, you can review a detailed findings inventory for all your organization assets so that you can view potential security risks. In addition, the Event Threat Detection service of Security Command Center supports the following use cases: Brute force Cryptomining IAM abuse Malware Phishing Event Threat Detection monitors your Cloud Logging stream and applies detection logic and proprietary threat intelligence at a granular level.
Event Threat Detection identifies notable entries in your logs and elevates them for review. When Event Threat Detection detects a threat, it writes a finding to Security Command Center and to a Cloud Logging project. A10: Server-Side Request Forgery (SSRF) An SSRF attack occurs when an attacker forces a vulnerable server to trigger unwanted malicious requests to third-party servers or internal resources. SSRF flaws can occur when a web application fetches a remote resource without validating the user-supplied URL. Apigee Use case: Block SSRF attacks by using LFI or RFI Apigee has built-in XML and JSON parsers that use XPath or JSONPath to extract data. It has an XMLThreatProtection policy to guard against malicious XML payloads and a JSONThreatProtection policy to help protect against malicious JSON payloads. The Apigee ExtractVariables policy lets you extract the content from a request or response and assign that content to a variable. You can extract any part of the message, including headers, URI paths, JSON and XML payloads, form parameters, and query parameters. The policy works by applying a text pattern to the message content and, when it finds a match, setting a variable with the specified message content. Google Cloud Armor Use case: Filter SSRF attacks by using LFI or RFI Because SSRF attacks can be complex and come in different forms, the mitigation options available to web application firewalls are limited. Attacks are better mitigated by patching XML or JSON parsers, disallowing external entities, and limiting XML or JSON data transfers on public web servers to a minimum. However, depending on the application and type of attack, Google Cloud Armor can still help defend against data exfiltration and other impacts. Although no rules in the OWASP ModSecurity Core Rule Set specifically defend against SSRF attacks, the local file inclusion (LFI) and remote file inclusion (RFI) rules can help against some of these attacks. To stop an attacker from retrieving local files on the server, use the evaluatePreconfiguredExpr('lfi-stable') rule in a Google Cloud Armor security policy. For the SSRF Juice Shop challenge, you can use the preconfigured remote file inclusion (RFI) and local file inclusion (LFI) rule sets to help mitigate some of these attacks because they block the inclusion of URLs and path traversal. For example, the following rule enables both rule sets: evaluatePreconfiguredExpr('lfi-stable') || evaluatePreconfiguredExpr('rfi-stable') When such a rule is implemented, the solution for the SSRF challenge also stops working. VPC Service Controls Use case: Network perimeters to segment servers To reduce the impact of SSRF attacks, you can use VPC Service Controls to create perimeters that segment servers from other resources in your organization. These perimeters provide protection against data exfiltration. When run in enforced mode, API requests to restricted services don't cross the perimeter boundary unless the conditions of the necessary ingress and egress rules of the perimeter are satisfied. Virtual Private Cloud (VPC) firewall Use case: Enforce "deny by default" firewall policies or network access control rules to block all but essential intranet traffic. VPC firewalls apply to inbound and outbound traffic for your projects and VPC network. You can create firewall rules that block all traffic except the traffic that you want to allow. For more information, see the VPC firewall rules overview.
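The preconfigured Google Cloud Armor rules and the deny-by-default firewall approach described above can also be applied from the command line. The following gcloud sketch is illustrative only and is not part of the original guidance; the security policy name, network name, priorities, and IP ranges are placeholders, and an existing security policy and VPC network are assumed:
# Attach the combined LFI/RFI preconfigured rule to an existing security policy
gcloud compute security-policies rules create 1000 --security-policy=my-waf-policy --expression="evaluatePreconfiguredExpr('lfi-stable') || evaluatePreconfiguredExpr('rfi-stable')" --action=deny-403
# Deny all ingress at a low priority, then allow only the traffic you need at a higher priority
gcloud compute firewall-rules create deny-all-ingress --network=my-vpc --direction=INGRESS --action=DENY --rules=all --source-ranges=0.0.0.0/0 --priority=65534
gcloud compute firewall-rules create allow-internal-https --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:443 --source-ranges=10.0.0.0/8 --priority=1000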
Security Command Center The Web Security Scanner service in Security Command Center supports the following use case: Web application monitoring Web Security Scanner scans your web applications for vulnerabilities. For example, if your application is vulnerable to server-side request forgery, Web Security Scanner generates a SERVER_SIDE_REQUEST_FORGERY finding. What's next Web application and API protection on Google Cloud OWASP Top 10 Google Cloud security bulletins Google Cloud security best practices center Compliance offerings CIS benchmark for Google Cloud Security Command Center Apigee Google Cloud Armor All Google Cloud security products Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Observability(1).txt b/Observability(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..be34b416ea94a57e14a779e8446b41a7f839cbde --- /dev/null +++ b/Observability(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/observability +Date Scraped: 2025-02-23T12:04:55.046Z + +Content: +Jump to Observability suiteGoogle Cloud's ObservabilityIntegrated monitoring, logging, and trace managed services for applications and systems running on Google Cloud and beyond.Go to consoleContact salesStart using the Observability with Monitoring and Logging quickstart guidesResearch shows successful reliability is 4.1 times more likely to incorporate observabilityLearn how Google Cloud’s Observability helps customers improve cloud observabilityStay up-to-date with the latest blogs and our o11y in-depth video seriesDownload the overview one-pager: Observability in Google CloudVIDEOWhat is Cloud Operations?4:51Key featuresKey featuresReal-time log management and analysisCloud Logging is a fully managed service that performs at scale and can ingest application and platform log data, as well as custom log data from GKE environments, VMs, and other services inside and outside of Google Cloud. Get advanced performance, troubleshooting, security, and business insights with Log Analytics, integrating the power of BigQuery into Cloud Logging. Built-in metrics observability at scaleCloud Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. Collect metrics, events, and metadata from Google Cloud services, hosted uptime probes, application instrumentation, and a variety of common application components. Visualize this data on charts and dashboards and create alerts so you are notified when metrics are outside of expected ranges.Stand-alone managed service for running and scaling PrometheusManaged Service for Prometheus is a fully managed Prometheus-compatible monitoring solution, built on top of the same globally scalable data store as Cloud Monitoring. 
Keep your existing visualization, analysis, and alerting services, as this data can be queried with PromQL or Cloud Monitoring.Monitor and improve your application's performanceApplication Performance Management (APM) combines the monitoring and troubleshooting capabilities of Cloud Logging and Cloud Monitoring with Cloud Trace and Cloud Profiler to help you reduce latency and cost so you can run more efficient applications.View all featuresData governance: Principles for securing and managing logsGet the whitepaperCustomersLearn from customers using operations toolsBlog postHow The Home Depot gets a single pane of glass for metrics across 2,200 stores4-min readBlog postHow Lowe’s evolved app dev and deployment with Google Cloud6-min readBlog postHow Lowe’s meets customer demand with Google SRE practices4-min readCase studyGannett improves observability with Google Cloud's Observability7-min readVideoNiantic shares best practices for custom metric telemetry on Google CloudVideo (3:22)VideoShopify analyzes distributed trace data to identify performance bottlenecksVideo (5:35)See all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.EventListen to a recent Twitter Space event about Log Analytics from Cloud LoggingLearn moreBlog postThe digital forecast: 30-plus cloud computing stats and trends to know in 2023Read the blogBlog postLog Analytics in Cloud Logging is now GALearn moreBlog postTop 10 reasons to get started with Log Analytics todayRead the blogBlog postIntroducing a high-usage tier for Managed Service for PrometheusRead the blogBlog postGoogle Cloud Managed Service for Prometheus is now GARead the blogDocumentationDocumentationTutorialObservability documentationView all documentation for the Observability.Learn moreTutorialGet started with Cloud MonitoringLearn about metrics scopes, the monitoring agent, uptime checks, and other features.Learn moreTutorialGet started with Cloud LoggingGuides and set-up docs to help you get up and running with Cloud Logging.Learn moreGoogle Cloud BasicsMonitoring and logging support for GKELearn about Google Kubernetes Engine’s native integration with Cloud Monitoring and Cloud Logging.Learn moreArchitectureGoogle Cloud metricsSee which metrics Cloud Monitoring supports.Learn moreTutorialHands-on labs: Google Cloud’s ObservabilityIn this skill badge, you’ll learn the ins and outs of Google Cloud's Observability to generate insights into the health of your applications.Learn moreTutorialDashboard API: Build your own Cloud Monitoring dashboardTips for shareable and reusable dashboard creation.Learn moreTutorialCloud Audit LogsLearn how Cloud Audit Logs maintains three audit logs: admin activity, data access, and system event.Learn moreArchitectureHybrid and multi-cloud deploymentsThis document discusses monitoring and logging architectures for hybrid and multi-cloud deployments.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for the Observability Use casesUse casesUse caseMonitor your infrastructureCloud Logging and Cloud Monitoring provide your IT Ops/SRE/DevOps teams with out-of-the box observability needed to monitor your infrastructure and applications. 
Cloud Logging automatically ingests Google Cloud audit and platform logs so that you can get started right away. Cloud Monitoring provides a view of all Google Cloud metrics at zero cost and integrates with a variety of providers for non Google Cloud monitoring.Best practiceTips and tricks: Service MonitoringLearn to effectively set SLOs in Cloud Monitoring with this step-by-step guide.Learn moreGoogle Cloud basicsView and customize your dashboardsLearn about Cloud Monitoring's dashboard customizations so you can get the exact views of your data that you need. Learn moreGoogle Cloud basicsLogging and monitoring on-premisesConsolidate Cloud Logging and Cloud Monitoring by integrating with non-Google Cloud environments as well.Learn moreUse caseTroubleshoot your applicationsReduce Mean Time to Recover (MTTR) and optimize your application’s performance with the full suite of cloud ops tools. Use dashboards to gain insights into your applications with both service and custom application metrics. Use Monitoring SLOs and alerting to help identify errors.TutorialDebugging apps on Google Kubernetes EngineLearn about how you can use Google Cloud operations tools to troubleshoot your applications running on GKE.Learn moreBest practiceIdentifying causes of app latencyLearn to identify the causes of tail latency with OpenCensus and Cloud Monitoring to monitor metrics and distributed tracing for app developers.Learn moreArchitectureConfiguring your containerized appsLearn about best practices for configuring your containerized applications to capture the data you will need for troubleshooting.Learn moreView all technical guidesAll featuresAll featuresLog managementLog Router allows customers to control where logs are sent. All logs, including audit logs, platform logs, and user logs, are sent to the Cloud Logging API where they pass through the log router. The log router checks each log entry against existing rules to determine which log entries to discard, which to ingest, and which to include in exports.Proactive monitoringCloud Monitoring allows you to create alerting policies to notify you when metrics, health check results, and uptime check results meet specified criteria. Integrated with a wide variety of notification channels, including Slack and PagerDuty.Prometheus as a managed serviceOffload the scaling and management of Prometheus infrastructure, updates, storage, and more with Managed Service for Prometheus. Avoid vendor lock-in and keep all of the open source tools you use today for visualization, alerting, and analysis of Prometheus metrics.Custom visualizationCloud Monitoring provides default out-of-the-box dashboards and allows you to define custom dashboards with powerful visualization tools to suit your needs.Health check monitoringCloud Monitoring provides uptime checks to web applications and other internet-accessible services running on your cloud environment. 
You can configure uptime checks associated with URLs, groups, or resources, such as instances and load balancers.Service monitoringService monitoring provides out-of-the-box telemetry and dashboards that allow troubleshooting in context through topology and context graphs, plus automation of health monitoring through SLOs and error budget management.Latency managementCloud Trace provides latency sampling and reporting for App Engine, including per-URL statistics and latency distributions.Performance and cost managementCloud Profiler provides continuous profiling of resource consumption in your production applications, helping you identify and eliminate potential performance issues.Security managementCloud Audit Logs provides near real-time user activity visibility across Google Cloud.PricingPricingThe pricing for Google Cloud's Observability lets you control your usage and spending. Google Cloud's Observability products are priced by data volume or usage. You can use the free data usage allotments to get started with no upfront fees or commitments.Cloud LoggingFeaturePriceFree allotment per monthEffective dateLogging storage1$0.50/GiB;One-time charge for streaming logs into log bucket storage for indexing, querying, and analysis; includes up to 30 days of storage in log buckets. No additional charges for querying and analyzing log data.First 50 GiB/project/monthJuly 1, 2018Logging retention2$0.01 per GiB per month for logs retained more than 30 days; billed monthly according to retention.Logs retained for the default retention period don't incur a retention cost.April 1, 2023Log Router3No additional chargeNot applicableNot applicableLog Analytics4No additional chargeNot applicableNot applicableCloud MonitoringFeaturePriceFree allotment per monthEffective dateAll Monitoring data except data ingested by using Managed Service for Prometheus$0.2580/MiB6: first 150–100,000 MiB$0.1510/MiB: next 100,000–250,000 MiB$0.0610/MiB: >250,000 MiBAll non-chargeable Google Cloud metricsFirst 150 MiB per billing account for metrics charged by bytes ingestedJuly 1, 2018Metrics ingested by using Google Cloud Managed Service for Prometheus, including GKE control plane metrics$0.060/million samples†: first 0-50 billion samples ingested#$0.048/million samples: next 50-250 billion samples ingested$0.036/million samples: next 250-500 billion samples ingested$0.024/million samples: >500 billion samples ingestedNot applicableAugust 8, 2023Monitoring API calls$0.01/1,000 Read API calls (Write API calls are free)First one million Read API calls included per billing accountJuly 1, 2018Execution of Monitoring uptime checks$0.30/1,000 executions‡One million executions per Google Cloud projectOctober 1, 2022Execution of Monitoring synthetic monitors$1.20/1,000 executions*100 executions per billing accountNovember 1, 2023Alerting policies$1.50 per month for each condition in an alerting policy$0.35 per 1,000,000 time series returned by the query of a metric alerting policy condition♣Not applicableJanuary 7, 2025Cloud TraceFeaturePriceFree allotment per monthEffective dateTrace ingestion$0.20/million spansFirst 2.5 million spansNovember 1, 20181 Storage volume counts the actual size of the log entries prior to indexing. There are no storage charges for logs stored in the Required log bucket.2 There are no retention charges for logs stored in the _Required log bucket, which has a fixed retention period of 400 days.3 Log routing is defined as forwarding received logs to a supported destination. 
Destination charges might apply to routed logs.4 There is no charge to upgrade a log bucket to use Log Analytics or to issue SQL queries from the Log Analytics page.Note: The pricing language for Cloud Logging changed on July 19, 2023; however, the free allotments and the rates haven't changed. Your bill might refer to the old pricing language.6 For pricing purposes, all units are treated as binary measures, for example, as mebibytes (MiB, or 220 bytes) or gibibytes (GiB, or 230 bytes).† Google Cloud Managed Service for Prometheus uses Cloud Monitoring storage for externally created metric data and uses the Monitoring API to retrieve that data. Managed Service for Prometheus meters based on samples ingested instead of bytes to align with Prometheus' conventions. For more information about sample-based metering, see Pricing for controllability and predictability. For computational examples, see Pricing examples based on samples ingested.# Samples are counted per billing account.‡ Executions are charged to the billing account in which they are defined. For more information, see Pricing for uptime-check execution.* Executions are charged to the billing account in which they are defined. For each execution, you might incur additional charges from other Google Cloud services, including services, such as Cloud Functions, Cloud Storage, and Cloud Logging. For information about these additional charges, see the pricing document for the respective Google Cloud service.♣ For more information, see Pricing for alerting.View pricing detailsPartnersPartnersGet support from a rich and growing ecosystem of technology integrations to expand the IT ops, security, and compliance capabilities available to Google Cloud customers.Expand allFeatured partnersPartner integrationsSee all partnersBindPlane is a registered trademark of observIQ, Inc.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Observability(2).txt b/Observability(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..8b8316ef842e2572e7829f7d7e166ef0a46ef81b --- /dev/null +++ b/Observability(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/observability +Date Scraped: 2025-02-23T12:05:49.331Z + +Content: +Jump to Observability suiteGoogle Cloud's ObservabilityIntegrated monitoring, logging, and trace managed services for applications and systems running on Google Cloud and beyond.Go to consoleContact salesStart using the Observability with Monitoring and Logging quickstart guidesResearch shows successful reliability is 4.1 times more likely to incorporate observabilityLearn how Google Cloud’s Observability helps customers improve cloud observabilityStay up-to-date with the latest blogs and our o11y in-depth video seriesDownload the overview one-pager: Observability in Google CloudVIDEOWhat is Cloud Operations?4:51Key featuresKey featuresReal-time log management and analysisCloud Logging is a fully managed service that performs at scale and can ingest application and platform log data, as well as custom log data from GKE environments, VMs, and other services inside and outside of Google Cloud. Get advanced performance, troubleshooting, security, and business insights with Log Analytics, integrating the power of BigQuery into Cloud Logging. 
Built-in metrics observability at scaleCloud Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. Collect metrics, events, and metadata from Google Cloud services, hosted uptime probes, application instrumentation, and a variety of common application components. Visualize this data on charts and dashboards and create alerts so you are notified when metrics are outside of expected ranges.Stand-alone managed service for running and scaling PrometheusManaged Service for Prometheus is a fully managed Prometheus-compatible monitoring solution, built on top of the same globally scalable data store as Cloud Monitoring. Keep your existing visualization, analysis, and alerting services, as this data can be queried with PromQL or Cloud Monitoring.Monitor and improve your application's performanceApplication Performance Management (APM) combines the monitoring and troubleshooting capabilities of Cloud Logging and Cloud Monitoring with Cloud Trace and Cloud Profiler to help you reduce latency and cost so you can run more efficient applications.View all featuresData governance: Principles for securing and managing logsGet the whitepaperCustomersLearn from customers using operations toolsBlog postHow The Home Depot gets a single pane of glass for metrics across 2,200 stores4-min readBlog postHow Lowe’s evolved app dev and deployment with Google Cloud6-min readBlog postHow Lowe’s meets customer demand with Google SRE practices4-min readCase studyGannett improves observability with Google Cloud's Observability7-min readVideoNiantic shares best practices for custom metric telemetry on Google CloudVideo (3:22)VideoShopify analyzes distributed trace data to identify performance bottlenecksVideo (5:35)See all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.EventListen to a recent Twitter Space event about Log Analytics from Cloud LoggingLearn moreBlog postThe digital forecast: 30-plus cloud computing stats and trends to know in 2023Read the blogBlog postLog Analytics in Cloud Logging is now GALearn moreBlog postTop 10 reasons to get started with Log Analytics todayRead the blogBlog postIntroducing a high-usage tier for Managed Service for PrometheusRead the blogBlog postGoogle Cloud Managed Service for Prometheus is now GARead the blogDocumentationDocumentationTutorialObservability documentationView all documentation for the Observability.Learn moreTutorialGet started with Cloud MonitoringLearn about metrics scopes, the monitoring agent, uptime checks, and other features.Learn moreTutorialGet started with Cloud LoggingGuides and set-up docs to help you get up and running with Cloud Logging.Learn moreGoogle Cloud BasicsMonitoring and logging support for GKELearn about Google Kubernetes Engine’s native integration with Cloud Monitoring and Cloud Logging.Learn moreArchitectureGoogle Cloud metricsSee which metrics Cloud Monitoring supports.Learn moreTutorialHands-on labs: Google Cloud’s ObservabilityIn this skill badge, you’ll learn the ins and outs of Google Cloud's Observability to generate insights into the health of your applications.Learn moreTutorialDashboard API: Build your own Cloud Monitoring dashboardTips for shareable and reusable dashboard creation.Learn moreTutorialCloud Audit LogsLearn how Cloud Audit Logs maintains three audit logs: admin activity, data access, and system event.Learn moreArchitectureHybrid and multi-cloud deploymentsThis document 
discusses monitoring and logging architectures for hybrid and multi-cloud deployments.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for the Observability Use casesUse casesUse caseMonitor your infrastructureCloud Logging and Cloud Monitoring provide your IT Ops/SRE/DevOps teams with out-of-the box observability needed to monitor your infrastructure and applications. Cloud Logging automatically ingests Google Cloud audit and platform logs so that you can get started right away. Cloud Monitoring provides a view of all Google Cloud metrics at zero cost and integrates with a variety of providers for non Google Cloud monitoring.Best practiceTips and tricks: Service MonitoringLearn to effectively set SLOs in Cloud Monitoring with this step-by-step guide.Learn moreGoogle Cloud basicsView and customize your dashboardsLearn about Cloud Monitoring's dashboard customizations so you can get the exact views of your data that you need. Learn moreGoogle Cloud basicsLogging and monitoring on-premisesConsolidate Cloud Logging and Cloud Monitoring by integrating with non-Google Cloud environments as well.Learn moreUse caseTroubleshoot your applicationsReduce Mean Time to Recover (MTTR) and optimize your application’s performance with the full suite of cloud ops tools. Use dashboards to gain insights into your applications with both service and custom application metrics. Use Monitoring SLOs and alerting to help identify errors.TutorialDebugging apps on Google Kubernetes EngineLearn about how you can use Google Cloud operations tools to troubleshoot your applications running on GKE.Learn moreBest practiceIdentifying causes of app latencyLearn to identify the causes of tail latency with OpenCensus and Cloud Monitoring to monitor metrics and distributed tracing for app developers.Learn moreArchitectureConfiguring your containerized appsLearn about best practices for configuring your containerized applications to capture the data you will need for troubleshooting.Learn moreView all technical guidesAll featuresAll featuresLog managementLog Router allows customers to control where logs are sent. All logs, including audit logs, platform logs, and user logs, are sent to the Cloud Logging API where they pass through the log router. The log router checks each log entry against existing rules to determine which log entries to discard, which to ingest, and which to include in exports.Proactive monitoringCloud Monitoring allows you to create alerting policies to notify you when metrics, health check results, and uptime check results meet specified criteria. Integrated with a wide variety of notification channels, including Slack and PagerDuty.Prometheus as a managed serviceOffload the scaling and management of Prometheus infrastructure, updates, storage, and more with Managed Service for Prometheus. 
Avoid vendor lock-in and keep all of the open source tools you use today for visualization, alerting, and analysis of Prometheus metrics.Custom visualizationCloud Monitoring provides default out-of-the-box dashboards and allows you to define custom dashboards with powerful visualization tools to suit your needs.Health check monitoringCloud Monitoring provides uptime checks to web applications and other internet-accessible services running on your cloud environment. You can configure uptime checks associated with URLs, groups, or resources, such as instances and load balancers.Service monitoringService monitoring provides out-of-the-box telemetry and dashboards that allow troubleshooting in context through topology and context graphs, plus automation of health monitoring through SLOs and error budget management.Latency managementCloud Trace provides latency sampling and reporting for App Engine, including per-URL statistics and latency distributions.Performance and cost managementCloud Profiler provides continuous profiling of resource consumption in your production applications, helping you identify and eliminate potential performance issues.Security managementCloud Audit Logs provides near real-time user activity visibility across Google Cloud.PricingPricingThe pricing for Google Cloud's Observability lets you control your usage and spending. Google Cloud's Observability products are priced by data volume or usage. You can use the free data usage allotments to get started with no upfront fees or commitments.Cloud LoggingFeaturePriceFree allotment per monthEffective dateLogging storage1$0.50/GiB;One-time charge for streaming logs into log bucket storage for indexing, querying, and analysis; includes up to 30 days of storage in log buckets. No additional charges for querying and analyzing log data.First 50 GiB/project/monthJuly 1, 2018Logging retention2$0.01 per GiB per month for logs retained more than 30 days; billed monthly according to retention.Logs retained for the default retention period don't incur a retention cost.April 1, 2023Log Router3No additional chargeNot applicableNot applicableLog Analytics4No additional chargeNot applicableNot applicableCloud MonitoringFeaturePriceFree allotment per monthEffective dateAll Monitoring data except data ingested by using Managed Service for Prometheus$0.2580/MiB6: first 150–100,000 MiB$0.1510/MiB: next 100,000–250,000 MiB$0.0610/MiB: >250,000 MiBAll non-chargeable Google Cloud metricsFirst 150 MiB per billing account for metrics charged by bytes ingestedJuly 1, 2018Metrics ingested by using Google Cloud Managed Service for Prometheus, including GKE control plane metrics$0.060/million samples†: first 0-50 billion samples ingested#$0.048/million samples: next 50-250 billion samples ingested$0.036/million samples: next 250-500 billion samples ingested$0.024/million samples: >500 billion samples ingestedNot applicableAugust 8, 2023Monitoring API calls$0.01/1,000 Read API calls (Write API calls are free)First one million Read API calls included per billing accountJuly 1, 2018Execution of Monitoring uptime checks$0.30/1,000 executions‡One million executions per Google Cloud projectOctober 1, 2022Execution of Monitoring synthetic monitors$1.20/1,000 executions*100 executions per billing accountNovember 1, 2023Alerting policies$1.50 per month for each condition in an alerting policy$0.35 per 1,000,000 time series returned by the query of a metric alerting policy condition♣Not applicableJanuary 7, 2025Cloud TraceFeaturePriceFree allotment per 
monthEffective dateTrace ingestion$0.20/million spansFirst 2.5 million spansNovember 1, 20181 Storage volume counts the actual size of the log entries prior to indexing. There are no storage charges for logs stored in the Required log bucket.2 There are no retention charges for logs stored in the _Required log bucket, which has a fixed retention period of 400 days.3 Log routing is defined as forwarding received logs to a supported destination. Destination charges might apply to routed logs.4 There is no charge to upgrade a log bucket to use Log Analytics or to issue SQL queries from the Log Analytics page.Note: The pricing language for Cloud Logging changed on July 19, 2023; however, the free allotments and the rates haven't changed. Your bill might refer to the old pricing language.6 For pricing purposes, all units are treated as binary measures, for example, as mebibytes (MiB, or 220 bytes) or gibibytes (GiB, or 230 bytes).† Google Cloud Managed Service for Prometheus uses Cloud Monitoring storage for externally created metric data and uses the Monitoring API to retrieve that data. Managed Service for Prometheus meters based on samples ingested instead of bytes to align with Prometheus' conventions. For more information about sample-based metering, see Pricing for controllability and predictability. For computational examples, see Pricing examples based on samples ingested.# Samples are counted per billing account.‡ Executions are charged to the billing account in which they are defined. For more information, see Pricing for uptime-check execution.* Executions are charged to the billing account in which they are defined. For each execution, you might incur additional charges from other Google Cloud services, including services, such as Cloud Functions, Cloud Storage, and Cloud Logging. For information about these additional charges, see the pricing document for the respective Google Cloud service.♣ For more information, see Pricing for alerting.View pricing detailsPartnersPartnersGet support from a rich and growing ecosystem of technology integrations to expand the IT ops, security, and compliance capabilities available to Google Cloud customers.Expand allFeatured partnersPartner integrationsSee all partnersBindPlane is a registered trademark of observIQ, Inc.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Observability.txt b/Observability.txt new file mode 100644 index 0000000000000000000000000000000000000000..6a5573b463c9395d656c4f82bc3b266d29012b0b --- /dev/null +++ b/Observability.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/observability +Date Scraped: 2025-02-23T11:59:57.215Z + +Content: +Catch up on the latest launches, demos, and training from Next '24. Let's go. 
ObservabilityObservability solutions with Google CloudDeliver deep cloud infrastructure observability supporting all cloud workloads with both partner and Google Cloud solutions.Go to consoleProduct highlightsProvide infrastructure insights aligned to your businessWork with your chosen partner solutionsOffer unique partner integrationsUnifying observability for Cloud Interconnect with VPC Flow LogsOverviewNetwork Intelligence CenterNetwork Intelligence Center is a single console for Google Cloud network observability, monitoring, and troubleshooting. Reduce the risk of outages and ensure security and compliance.VPC Flow LogsGain deep insights into your network traffic with VPC Flow Logs. Monitor network activity, troubleshoot connectivity issues, and enhance security posture with detailed visibility into your VPC. Leverage this valuable data for network planning, security analysis, forensics, cost optimization, and compliance auditing.Partner solutionsA broad ecosystem of ISV and system integration (SI) partners provide observability offerings on and/or integrated with Google Cloud. These provide Google Cloud customers with freedom of choice and facilitate frictionless use of preferred third-party products and services.View moreObservability partnersPartnerResourcesAppNeta by BroadcomSolution pageSolution briefTipsheetView on the Cloud MarketplaceCatchpointSolution pageSolution briefView on the Cloud MarketplaceDatadogSolution pageSolution briefView on the Cloud MarketplaceKentikSolution pageGoogle Cloud VPC flow logs for KentikSolution briefView on the Cloud MarketplaceNew RelicSet up Google VPC Flow Logs monitoringSolution briefView on the Cloud MarketplaceSplunkSolution pageSolution briefView on the Cloud MarketplaceSelectorSolution pageSolution briefThousandEyesSolution pageSolution briefView on the Cloud MarketplaceTigeraSolution pageSolution briefGet support from a rich and growing ecosystem of technology partners to expand the Google Cloud operations capabilities.AppNeta by BroadcomResourcesSolution pageSolution briefTipsheetView on the Cloud MarketplaceCatchpointResourcesSolution pageSolution briefView on the Cloud MarketplaceDatadogResourcesSolution pageSolution briefView on the Cloud MarketplaceKentikResourcesSolution pageGoogle Cloud VPC flow logs for KentikSolution briefView on the Cloud MarketplaceNew RelicResourcesSet up Google VPC Flow Logs monitoringSolution briefView on the Cloud MarketplaceSplunkResourcesSolution pageSolution briefView on the Cloud MarketplaceSelectorResourcesSolution pageSolution briefThousandEyesResourcesSolution pageSolution briefView on the Cloud MarketplaceTigeraResourcesSolution pageSolution briefGet support from a rich and growing ecosystem of technology partners to expand the Google Cloud operations capabilities.How It WorksSingle console for Google Cloud network observability, monitoring, and troubleshooting. 
Reduce the risk of outages and ensure security and compliance.Go to consoleCommon UsesPerformance monitoringGet visibility into your cloud resources and servicesNetwork Intelligence Center's Performance Dashboard gives you visibility into the performance of the entire Google Cloud network, as well as to the performance of your project's resources.5:42Learn how to view your project’s network performanceHow-tosGet visibility into your cloud resources and servicesNetwork Intelligence Center's Performance Dashboard gives you visibility into the performance of the entire Google Cloud network, as well as to the performance of your project's resources.5:42Learn how to view your project’s network performanceEnd-to-end observabilityPartner ecosystemGoogle Cloud provides an ecosystem that offers you choice and also empowers you to leverage your existing observability solutions, workflows, and skill sets to observe your Google Cloud workloads. You can take advantage of these partner solutions to provide end-to-end observability from within your private data centers and organization locations, across the public internet or hybrid connectivity, and within your Google Cloud VPCs.Active network performance monitoring for any office, data center, and cloud environmentProactive observability for the internet’s most critical servicesContinuous, real-time visibility into appsAdditional resourcesPartner ecosystemGoogle Cloud provides an ecosystem that offers you choice and also empowers you to leverage your existing observability solutions, workflows, and skill sets to observe your Google Cloud workloads. You can take advantage of these partner solutions to provide end-to-end observability from within your private data centers and organization locations, across the public internet or hybrid connectivity, and within your Google Cloud VPCs.Active network performance monitoring for any office, data center, and cloud environmentProactive observability for the internet’s most critical servicesContinuous, real-time visibility into appsAI/ML-based analysis and recommendationsGet AI/ML-based insights into your projectsGain insights, recommendations, and metrics about how your firewall rules are being used with Firewall Insights. Get recommendations on how to fix issues with a network failure with Network Analyzer.7:39View firewall insights in Recommendation HubLearn how to get predictions on future firewall usageLearn how to use the Recommender CLI and API How-tosGet AI/ML-based insights into your projectsGain insights, recommendations, and metrics about how your firewall rules are being used with Firewall Insights. 
Get recommendations on how to fix issues with a network failure with Network Analyzer.7:39View firewall insights in Recommendation HubLearn how to get predictions on future firewall usageLearn how to use the Recommender CLI and API TroubleshootingQuickly identify and remediate issuesGain the visibility into your network traffic necessary to manage performance bottlenecks, identify top talkers, pinpoint connectivity issues, and assist with root cause analysis.VPC Flow Logs, Flow Analyzer, and Network Intelligence Center empower you to make data-driven decisions, leveraging real-time monitoring and deep historical insights, and optimize your network for peak efficiency and cost savings.4:59Learn more about Network Intelligence Center Cloud networkingLearn more about VPC Flow logsnetworking iconLearn more about Flow AnalyzerHow-tosQuickly identify and remediate issuesGain the visibility into your network traffic necessary to manage performance bottlenecks, identify top talkers, pinpoint connectivity issues, and assist with root cause analysis.VPC Flow Logs, Flow Analyzer, and Network Intelligence Center empower you to make data-driven decisions, leveraging real-time monitoring and deep historical insights, and optimize your network for peak efficiency and cost savings.4:59Learn more about Network Intelligence Center Cloud networkingLearn more about VPC Flow logsnetworking iconLearn more about Flow Analyzer Try Network Intelligence CenterGo to consoleTry Flow AnalyzerGo to consoleLearn more about Network Intelligence CenterRead the documentationGoogle Cloud MarketplaceFind a partnerContact salesContact us to get startedGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Okta_user_provisioning_and_single_sign-on.txt b/Okta_user_provisioning_and_single_sign-on.txt new file mode 100644 index 0000000000000000000000000000000000000000..453a9d65a5756bfe5c3cdf248623167fa0d9bd86 --- /dev/null +++ b/Okta_user_provisioning_and_single_sign-on.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/okta-provisioning-and-single-sign-on +Date Scraped: 2025-02-23T11:55:46.676Z + +Content: +Home Docs Cloud Architecture Center Send feedback Okta user provisioning and single sign-on Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-08 UTC This document shows you how to set up user provisioning and single sign-on between an Okta organization and your Cloud Identity or Google Workspace account. The document assumes that you already use Okta in your organization and want to use Okta for allowing users to authenticate with Google Cloud. Objectives Configure Okta to automatically provision users and, optionally, groups to Cloud Identity or Google Workspace. Configure single sign-on to allow users to sign in to Google Cloud by using an Okta user account. Costs If you are using the free edition of Cloud Identity, setting up federation with Okta won't use any billable components of Google Cloud. Check the Okta pricing page for any fees that might apply to using Okta. Before you begin Sign up for Cloud Identity if you don't have an account already. If you're using the free edition of Cloud Identity and intend to provision more than 50 users, request an increase of the total number of free Cloud Identity users through your support contact. 
If you suspect that any of the domains you plan to use for Cloud Identity could have been used by employees to register consumer accounts, consider migrating these user accounts first. For more details, see Assessing existing user accounts. Preparing your Cloud Identity or Google Workspace account Create a user for Okta To let Okta access your Cloud Identity or Google Workspace account, you must create a user for Okta in your Cloud Identity or Google Workspace account. The Okta user is only intended for automated provisioning. Therefore, it's best to keep it separate from other user accounts by placing it in a separate organizational unit (OU). Using a separate OU also ensures that you can later disable single sign-on for the Okta user. To create a new OU, do the following: Open the Admin Console and log in using the super-admin user created when you signed up for Cloud Identity or Google Workspace. In the menu, go to Directory > Organizational units. Click Create organizational unit and provide a name and description for the OU: Name: Automation Description: Automation users Click Create. Create a user account for Okta and place it in the Automation OU: In the menu, go to Directory > Users and click Add new user to create a user. Provide an appropriate name and email address such as the following: First Name: Okta Last Name: Provisioning Primary email: okta-provisioning Keep the primary domain for the email address. Click Manage user's password, organizational unit, and profile photo and configure the following settings: Organizational unit: Select the Automation OU that you created previously. Password: Select Create password and enter a password. Ask for a password change at the next sign-in: Disabled. Click Add new user. Click Done. Assign privileges to Okta To let Okta create, list, and suspend users and groups in your Cloud Identity or Google Workspace account, you must make the okta-provisioning user a super-admin: Locate the newly created user in the list and click the user's name to open their account page. Under Admin roles and privileges, click Assign roles. Enable the super-admin role. Click Save. Note: The super-admin role grants the user full access to Cloud Identity, Google Workspace, and Google Cloud resources.Warning: To protect the user against credential theft and malicious use, we recommend that you enable 2-step verification for the user. For more details on how to protect super-admin users, see Security best practices for administrator accounts. Configuring Okta provisioning You are now ready to connect Okta to your Cloud Identity or Google Workspace account by setting up the Google Workspace application from the Okta catalog. Note: This application is an Okta product and is not maintained or supported by Google. The Google Workspace application can handle both user provisioning and single sign-on. Use this application even if you're using Cloud Identity and you're only planning to set up single sign-on for Google Cloud. Create an application To set up the Google Workspace application, do the following: Open the Okta admin dashboard and sign in as a user with Super Administrator privileges. In the menu, go to Applications > Applications. Click Browse app catalog. Search for Google Workspace and select the Google Workspace application. Click Add integration. On the General settings page, configure the following: Application label: Google Cloud Your Google Apps company domain: the primary domain name used by your Cloud Identity or Google Workspace account. 
Display the following links: Set Account to enabled. Set other links to enabled if you're using Google Workspace; set other links to disabled otherwise. Application Visibility: set to enabled if you're using Google Workspace, disabled otherwise. Browser plugin auto-submit: set to disabled. Click Next. On the Sign-on options page, configure the following: Sign on methods: select SAML 2.0 Default Relay State: leave empty Advanced Sign-on Settings > RPID: leave empty Decide how you want to populate the primary email address for users in Cloud Identity or Google Workspace. A user's primary email address must use either the primary domain of your Cloud Identity or Google Workspace account or one of its secondary domains. Okta username: To use the user's Okta username as the primary email address, use the following settings: Application username format: Okta username Update application username on: Create and update. Email: To use the user's email address as the primary email address, use the following settings: Application username format: Email Update application username on: Create and update. Click Done. Configure user provisioning In this section, you configure Okta to automatically provision users and groups to Google Cloud. On the settings page for the Google Cloud application, open the Provisioning tab. Click Configure API Integration and configure the following: Enable API integration: set to enabled Import Groups: set to disabled unless you have existing groups in Cloud Identity or Google Workspace that you want to import to Okta. Click Authenticate with Google Workspace. Sign in using the okta-provisioning@DOMAIN user you created earlier, where DOMAIN is the primary domain of your Cloud Identity or Google Workspace account. Review the Google Terms of Service and privacy policy. If you agree to the terms, click I understand. Confirm access to the Cloud Identity API by clicking Allow. Click Save. Okta is connected to your Cloud Identity or Google Workspace account, but provisioning is still disabled. To enable provisioning, do the following: On the settings page for the Google Cloud application, open the Provisioning tab. Click Edit and configure the following: Create users: set to enabled Update user attributes: set to enabled Deactivate users: set to enabled Sync password: set to disabled Note: You might need to adjust these settings if you're planning to consolidate existing consumer accounts. For more details, see Making Okta federation safe for account consolidation. Optionally, click Go to profile editor to customize attribute mappings. If you use custom mappings, you must map userName, nameGivenName, and nameFamilyName. All other attribute mappings are optional. Click Save. Configure user assignment In this section, you configure which Okta users to provision to Cloud Identity or Google Workspace: On the settings page for the Google Cloud application, open the Assignments tab. Click Assign > Assign to people or Assign > Assign to groups. Select a user or group and click Assign. On the assignment dialog that appears, keep the default settings and click Save and go back. Click Done. Repeat the steps in this section for each user or group that you want to provision. To provision all users to Cloud Identity or Google Workspace, assign the Everyone group. Configure group assignment Optionally, you can let Okta provision groups to Cloud Identity or Google Workspace. Instead of selecting groups individually, it's best to configure Okta to provision groups based on a naming convention.
For example, to let Okta provision all groups that begin with google-cloud, do the following: On the settings page for the Google Cloud application, open the Push groups tab. Click Push groups > Find groups by rule. On the Push groups by rule page, configure the following rule: Rule name: a name for the rule, for example Google Cloud. Group name: starts with google-cloud Click Create rule. Troubleshooting To troubleshoot user or group provisioning, click View logs on the settings page for the Google Cloud application. To let Okta retry a failed attempt to provision users, do the following: Go to Dashboard > Tasks. Find the failed task and open the details. On the details page, click Retry selected. Configuring Okta for single sign-on If you've followed the steps to configure Okta provisioning, all relevant Okta users are now automatically being provisioned to Cloud Identity or Google Workspace. To allow these users to sign in, configure single sign-on: On the settings page for the Google Cloud application, open the Sign on tab. Click SAML 2.0 > More details. Click Download to download the signing certificate. Note the Sign-on URL, Sign-out URL, and Issuer values; you need these in a later step. Create a SAML profile Create a SAML profile in your Cloud Identity or Google Workspace account: Return to the Admin Console and go to SSO with third-party IdP. Click Third-party SSO profiles > Add SAML profile. On the SAML SSO profile page, enter the following settings: Name: Okta IDP entity ID: Enter the Issuer from the Okta admin dashboard. Sign-in page URL: Enter the Sign-on URL from the Okta admin dashboard. Sign-out page URL: Enter the Sign-out URL from the Okta admin dashboard. Change password URL: https://ORGANIZATION.okta.com/enduser/settings where ORGANIZATION is the name of your Okta organization. Under Verification certificate, click Upload certificate, and then pick the token signing certificate that you downloaded previously. Click Save. The SAML SSO profile page that appears contains an Entity ID in the format https://accounts.google.com/samlrp/RPID where RPID is a unique ID. For example, if the Entity ID is https://accounts.google.com/samlrp/abc123def456, the RPID is abc123def456. Note the RPID value. You need it in the next step. Assign the SAML profile Select the users to whom the new SAML profile should apply: In the Admin Console, on the SSO with third-party IDPs page, click Manage SSO profile assignments > Manage. On the left pane, select the group or organizational unit for which you want to apply the SSO profile. To apply the profile to all users, select the root organizational unit. On the right pane, in the menu, select the Okta - SAML SSO profile that you created earlier. Click Save. To assign the SAML profile to another group or organizational unit, repeat the steps above. Update the SSO settings for the Automation OU to disable single sign-on: On the left pane, select the Automation OU. Change the SSO profile assignment to None. Click Override. Complete the SSO configuration in Okta Return to Okta and complete the SSO configuration: In the Okta admin dashboard, on the settings page for the Google Cloud application, open the Sign on tab. Click Edit and update the following settings: Advanced Sign-on Settings > RPID: enter the RPID that you copied from the Admin Console. Note: Make sure you only enter the RPID value, not the entire Entity ID. The RPID value must not contain any / character. Click Save.
Optional: Configure login challenges Google sign-in might ask users for additional verification when they sign in from unknown devices or when their sign-in attempt looks suspicious for other reasons. These login challenges help to improve security, and we recommend that you leave them enabled. If you find that they cause too much inconvenience, you can disable login challenges by doing the following: In the Admin Console, go to Security > Authentication > Login challenges. In the left pane, select an organizational unit for which you want to disable login challenges. To disable login challenges for all users, select the root organizational unit. Under Settings for users signing in using other SSO profiles, select Don't ask users for additional verifications from Google. Click Save. Add the Google Cloud console and other Google services to the app dashboard To add the Google Cloud console and, optionally, other Google services to your users' Okta app dashboard, do the following: In the Okta admin dashboard, select Applications > Applications. Click Browse app catalog. Search for Bookmark app and select the Bookmark app application. Click Add integration. On the General settings page, configure the following: Application label: Google Cloud console URL: https://www.google.com/a/PRIMARY_DOMAIN/ServiceLogin?continue=https://console.cloud.google.com/, replacing PRIMARY_DOMAIN with the primary domain name used by your Cloud Identity or Google Workspace account. Click Done. Change the application logo to the Google Cloud logo. Open the Sign on tab. Click User authentication > Edit and configure the following: Authentication policy: set to Okta dashboard Click Save. Open the Assignments tab and assign one or more users. Assigned users see the Google Cloud console link in their user dashboard. Important: The assignment settings of the Bookmark app only control visibility. To grant a user permission to use single sign-on and access Google Cloud, you must also assign them to the Google Workspace app. Optionally, repeat the steps above for any additional Google services you want to include in user dashboards. The table below contains the URLs and logos for commonly used Google services: Google service URL Logo Google Cloud console https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://console.cloud.google.com Google Docs https://docs.google.com/a/DOMAIN Google Sheets https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://sheets.google.com Google Slides https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://slides.google.com Google Drive https://drive.google.com/a/DOMAIN Gmail https://mail.google.com/a/DOMAIN Google Groups https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://groups.google.com Google Keep https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://keep.google.com Looker Studio https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://lookerstudio.google.com YouTube https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://www.youtube.com/ Test single sign-on After you've completed the single sign-on configuration in both Okta and Cloud Identity or Google Workspace, you can access Google Cloud in two ways: Through the list in your Okta user dashboard. Directly by opening https://console.cloud.google.com/. To check that the second option works as intended, run the following test: Pick an Okta user that has been provisioned to Cloud Identity or Google Workspace and that doesn't have super-admin privileges assigned. 
Users with super-admin privileges always have to sign in using Google credentials and are therefore not suitable for testing single sign-on. Open a new browser window and go to https://console.cloud.google.com/. In the Google Sign-In page that appears, enter the email address of the user and click Next. You are redirected to Okta and will see another sign-in prompt. Enter the email address of the user and follow the steps to authenticate. After successful authentication, Okta should redirect you back to Google Sign-In. Because this is the first time you've signed in using this user, you're asked to accept the Google Terms of Service and privacy policy. If you agree to the terms, click I understand. You are redirected to the Google Cloud console, which asks you to confirm preferences and accept the Google Cloud Terms of Service. If you agree to the terms, choose Yes and click Agree and continue. Click the avatar icon on the top right of the page, and then click Sign out. You are redirected to an Okta page confirming that you have been successfully signed out. Keep in mind that users with super-admin privileges are exempted from single sign-on, so you can still use the Admin Console to verify or change settings. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources. What's next Learn more about best practices for planning accounts and organizations and best practices for federating Google Cloud with an external identity provider. Acquaint yourself with our best practices for managing super-admin accounts. Send feedback \ No newline at end of file diff --git a/Onboarding_best_practices_for_state,_local,_and_education_organizations.txt b/Onboarding_best_practices_for_state,_local,_and_education_organizations.txt new file mode 100644 index 0000000000000000000000000000000000000000..e931f4b20a7036b1dc4b77075cb62947e925c3de --- /dev/null +++ b/Onboarding_best_practices_for_state,_local,_and_education_organizations.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/best-practices-for-iam-and-billing-in-higher-education +Date Scraped: 2025-02-23T11:48:41.217Z + +Content: +Home Docs Cloud Architecture Center Send feedback Onboarding best practices for state, local, and education organizations Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2022-10-15 UTC State, local, and education (SLED) organizations often have unique IT needs compared to other enterprises. This guide defines onboarding considerations and best practices for creating a Google Cloud and Google Workspace environment for a SLED organization. This document is intended for administrators who are setting up Google Cloud and Google Workspace or Google Workspace for Education for their organization. Identity overview Before you create a Google Cloud environment, you must understand how Google Cloud provides authentication, authorization, and auditing. Three cloud services work together for authentication, access control, and auditing: Cloud Identity provides authentication. If your organization uses Google Workspace or Google Workspace for Education, you're already using Cloud Identity. Identity and Access Management provides authorization. Cloud Audit Logs provide auditing. 
Cloud Identity Cloud Identity is an identity as a service (IDaaS) product that lets you centrally manage users and groups who access Google Cloud, Google Workspace, or Google Workspace for Education resources. Cloud Identity is available in both a free edition and a paid (premium) edition. During the Google Cloud onboarding process, Cloud Identity provides primary domain validation using a TXT record. The benefits of using Cloud Identity are the following: Lets you create and manage groups using the Google Workspace Admin console. Provides account security controls, including single sign-on (SSO) and two-factor authentication (2FA). Can be used as an identity provider for third-party applications because it supports Security Assertion Markup Language (SAML) and LDAP. Set up identity At a high level, the recommended steps for establishing an identity are the following: If you are not already using Cloud Identity, Google Workspace, or Google Workspace for Education, start with one of the following signup pages: Cloud Identity sign-up page Google Workspace for Education sign-up page Foundation checklist in the Google Cloud console (for an example, see the enterprise onboarding checklist) Use an administrator account to navigate the in-console checklist. Use Cloud Identity to verify your domain. Domain verification automatically creates an organization, which serves as the root node of the Google Cloud resource hierarchy. If you encounter an error that states that a domain is already claimed, complete the domain reclaim process. This process can take up to five business days. After domain verification, log in to the Google Workspace Admin Console with the newly created administrator account. Identify the organization administrators who will manage the new Google Cloud organization in the Google Cloud console. Add users using Workforce identity federation or Cloud Identity. With Cloud Identity, you can add users in one of the following ways: Using the Google Workspace Admin Console (individually or in bulk). Using Google Cloud Directory Sync, which synchronizes the data in your Google Account with Active Directory or LDAP. Using Google Workspace Admin SDK. Using a third-party synchronization service, such as Azure Active Directory. Manage resources This section describes the best practices for managing resources in a SLED organization. Implement a centralized approach to Google Cloud resource management Google Cloud provides a hierarchical container system that consists of organizations, folders, and projects. Within those structures, you can organize other resources such as Compute Engine virtual machines (VMs), databases, and Cloud Storage buckets. This hierarchy helps you manage aspects such as access control and configuration settings that are common to multiple resources. You can programmatically manage these resources by using Resource Manager. Large organizations often have many projects and users that interact directly with Google Cloud resources. To best support existing IT governance and access control strategies, we recommend that you implement a centralized approach to Google Cloud resource organization. Use organization nodes to organize your resources Resources are organized with the organization node at the root. Folders can be nested up to four levels beneath the organization node. These folders can contain projects, which in turn contain other resources as children below the projects. Each resource has exactly one parent. 
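To make the hierarchy concrete, the following minimal Python sketch lists the folders and projects one level below an organization node by using the Resource Manager v3 client library. The organization ID is a placeholder, and a full inventory would recurse into each folder because folders can nest up to four levels deep.

# Minimal sketch: list folders and projects directly under an organization node.
# Requires the google-cloud-resource-manager package and credentials that can
# list folders and projects. The organization ID below is a placeholder.
from google.cloud import resourcemanager_v3

org = "organizations/123456789012"  # placeholder organization ID

folders_client = resourcemanager_v3.FoldersClient()
projects_client = resourcemanager_v3.ProjectsClient()

# Folders directly under the organization node.
for folder in folders_client.list_folders(request={"parent": org}):
    print("Folder:", folder.display_name, folder.name)
    # Projects directly under this folder.
    for project in projects_client.list_projects(request={"parent": folder.name}):
        print("  Project:", project.project_id)

# Projects attached directly to the organization node.
for project in projects_client.list_projects(request={"parent": org}):
    print("Project under organization:", project.project_id)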
When you set access control policies and configuration settings for a parent resource, its child resources inherit those policies and settings. An organization node ensures that all the projects that users create in your domain are visible to the super administrators. Each primary domain has one organization node. By default, the Google Workspace super administrator has irrevocable access to set the policy for the organization. For organizations that have separate IT and cloud administrations, the Google Workspace super administrator must assign an organization administrator to manage the organization. For more information, see Super administrator account best practices. If projects were created before you created the organization node, you can migrate these unaffiliated projects into the organization node. Note: All recommendations presented in this section assume that you've configured a domain organization node in your Google Cloud console. Also, Google Accounts that administer Google Cloud resources must be defined in the ORGANIZATION_NAME.tld configuration file for the Google Workspace or Cloud Identity domain. When you begin using Google Cloud, the default configuration is to have a single organization node. The following sections discuss single versus multiple organization node approaches. For more information about these options, see Decide a resource hierarchy for your Google Cloud landing zone. Option 1: Single organization node In this option, you map a single organization node to the Workspace domain, which is the source of truth for IAM. Each folder can have its own administrators along with separate roles and other policies. The following diagram shows a single organization node, using an education institution as an example. You can add further subfolders for teams and environments as needed. You can host global resources such as cross-project networks and shared images in a folder with permissions that allow access to all users in the organization. For more information, see the following: Cloud Resource Manager Overview of IAM Option 2: Separate organization nodes If you want to treat departments within your organization as isolated entities with no central administration, consider creating separate organizations. The following diagram shows separate organization nodes, using a school as an example. To implement this configuration, you set up school.edu and lab3.school.edu as separate primary Google Workspace domains, resulting in discrete organization nodes. Use this option only if all of the following items are true: You want to maintain separate identity domains. You must keep access controls, custom roles, billing, quota, and configuration settings for the second organization distinct from the central school.edu organization node. For many organizations with centralized IT governance, managing two separate Google Cloud environments creates additional overhead. For example, security policies among multiple organization nodes can diverge over time unless administrators are careful to keep the policies synchronized. For more information about the implications of this approach, see Use a single organization node. Use folders to organize resources Folders let you organize your Google Cloud resources, apply policies, delegate administrative privileges, and give departments and teams more autonomy. Folders also help you administer policies and control access for a group of projects at the same time. 
Folders, projects, and resources that are nested within a folder inherit the policies of the parent folder. The following are a few scenarios where using folders might be appropriate: Your organization has distinct business units, each with its own IT group. You're mapping to an established structure that is based on an LDAP directory, such as Microsoft Active Directory. You want to segregate projects by use case, such as IT infrastructure, research computing, or teaching and learning. For more information, see Manage your Google Cloud resources. Use projects to organize resources Any Google Cloud resources that you allocate and use must belong to a project. A project is the organizing entity for what you're building. A project consists of the settings, permissions, and other metadata that describe your applications. Resources within a single project work together by communicating through an internal network. The resources that each project uses remain separate across project boundaries. You can link resources only through an external network connection or a shared Virtual Private Cloud (VPC) network. Each Google Cloud project has the following: A project name, which you provide. A project ID, which you can provide or Google Cloud can provide for you. A project number, which Google Cloud provides. When you create a project, consider the following: Determine the ownership of projects and create separate projects for different workloads or teams. Use separate projects to divide an application into production and non-production environments. This way, changes made in the non-production environment don't affect the production environment, and changes can be promoted or propagated by using deployment scripts. Separate the computing and data resources between labs or even work within a lab. This separation allows for full autonomy and data separation between projects, which is useful if a lab is working on multiple projects with competing stakeholders. When you create a project, you must associate it with a billing account. You must have the Billing Account Administrator role or Billing Account User role on the target billing account to be able to associate a new project with an existing billing account. Use Active Assist to manage resources at scale As your organization grows, the amount of complexity generally increases. Projects become unused, VMs sit idle, and permissions get granted but aren't removed when they are no longer needed. To reduce complexity, we recommend keeping an up-to-date asset inventory and reviewing it using the recommendations and insights from Active Assist. Active Assist provides recommendations for finding idle VMs, removing excess IAM permissions, deleting or reclaiming unused projects. Using these recommendations can have significant benefits to your organization. These benefits include reducing unnecessary spending and decreasing security risks, and increasing the performance and manageability of your organization. To access an inventory and history of all Google Cloud projects and associated resources, you can use Cloud Asset Inventory. You can export asset history to BigQuery or Cloud Storage. Manage access controls This section describes best practices for managing access to Google Cloud and Google Workspace services. Use groups for policy management It's an IAM best practice to use groups instead of individuals in policies. As team members join and leave, you can adjust group membership, and the correct policy changes happen automatically. 
To implement this practice, create Google Groups that are based on job functions for each project or folder. You then assign multiple roles to each group as required for the job function. To manage groups, a super administrator or delegated administrator can use the Google Workspace Admin Console. Use least privilege to create trust boundaries When you decide on a project structure, consider IT trust boundaries, which likely follow an existing IT governance or security model. For example, consider if your organization includes separate departments, such as engineering, business, and law, that must maintain trust boundaries between each other. When you apply the security best practice of least privilege, you can grant different roles to user accounts and service accounts across projects. If a user has administrator-level access to one project but only requires viewer access to another project, you can define those roles explicitly by using an allow policy. For more information, see the IAM guidance on least privilege. Use service accounts, roles, and policies to manage access to resources Large organizations often separate operations teams, such as security and network administration, from other departments. This separation requires using resources that are managed by other teams and following the principle of least privilege. Configure this separation using IAM and service accounts. With IAM, you manage access control by defining who has what level of access to which resources. You grant roles to users by creating an allow policy. To give granular access to specific Google Cloud resources, use predefined roles or define custom roles. In Google Cloud, a super administrator in Google Workspace gets assigned the Organization Administrator role by default. This role is used to grant permissions to other users and cannot be removed from the super administrator account. The most important role for a super administrator to grant is the Project Creator role so that designated users can begin creating their own projects. For more information about service accounts, see Understanding service accounts. Create privileged accounts sparingly Following the principle of least privilege, assign super administrator roles to separate accounts from your administrators' regular accounts. For example, you might use alex@school.edu for day-to-day activities but use alex.admin@school.edu when you make changes to the Google Workspace Admin Console or Google Cloud console. Create identities for workloads Google Cloud uses service accounts to invoke Google API calls to ensure that user credentials aren't directly involved. These accounts are treated as both an identity and as a resource in the following ways: When a service account acts as an identity, you grant it roles so that it can access a resource, such as a storage bucket. When a service account acts as a resource, you must grant users the permission to access that account in the same way that you grant permission to access a BigQuery dataset. You can grant a user the role of Owner, Editor, Viewer, or Service Account User. Users who have the Service Account Users role can access all the resources to which the service account has access. Manage billing There are two types of Cloud Billing accounts: self-serve (or online) account and invoiced (or offline) account. The features of a self-serve account are the following: If supported in your country or region, payment is made using a payment instrument such as a credit card, debit card, or ACH direct debit. 
Costs are charged automatically to the payment instrument that is connected to the Cloud Billing account. You can sign up for self-serve accounts without help from us. The documents generated for self-serve accounts include statements, payment receipts, and tax invoices. You can access these documents from the console. The features of an invoiced account are the following: Payment is made using a check or wire transfer. Invoices are sent by mail or electronically. You can access invoices and payment receipts from the console. You must be eligible for invoiced billing. For more information, see invoiced billing eligibility. The following sections describe the best practices for both types of Cloud Billing accounts. Export Cloud Billing data to BigQuery for analysis To analyze your usage and costs, export your billing data to a BigQuery dataset. Exporting billing data to BigQuery can help you find projects that are spending over a limit that you set. You can also query the list of services you're being charged for. For example, the following query lists all projects that have spent over $0.10 in the current month: SELECT project.name, cost FROM YOUR_BIGQUERY_TABLE WHERE cost > 0.1 AND usage_month = "YYYY-MM" ORDER BY cost DESC Replace the following: YOUR_BIGQUERY_TABLE with your table name. YYYY-MM with the current year and month. For example, 2022-10. Use a single billing account to manage your billing and budgets Use the Google Cloud console to manage your Cloud Billing account. From the console, you can update account settings such as payment methods and administrative contacts. You can also configure the console to set budgets, trigger alerts, view your payment history, and export billing data. For most organizations, a single billing account for all projects is sufficient. Organization-wide discounts apply to all projects that are associated with the billing account. You can make a single payment to Google to settle the monthly invoice, and you can charge agency-specific, department-specific, or lab-specific projects using an internal IT chargeback process. The following diagram shows how the single billing account works: Billing considerations can also drive how you organize projects and folders in Google Cloud. Depending on your internal cost centers, you might decide to organize as in the following diagram: In this diagram, folders identify all the projects and assets that are associated with a cost center, department, or IT project. Cost is shown for each project, and project IDs are included in the billing export to BigQuery. Consider multiple billing accounts if cost centers must pay a separate invoice or if your organization has workloads that must pay in a different currency. This approach might require a signed agreement for each billing account or engagement with a Google Cloud reseller. Assign billing account roles Billing account roles help you manage billing accounts. You can assign the following billing roles to specific users at the organization level. Role Description Billing Account Administrator Manages all billing accounts in the organization. Billing Account Creator Creates billing accounts in the organization. Billing Account User Links projects with billing accounts. Project Billing Manager Provides access to assign a project's billing account or disable project billing. Billing Account Costs Manager Manages budgets and views and exports cost information for billing accounts (but cannot view or export pricing information). 
Billing Account Viewer Views billing account cost information and transactions. To let users view all the billing accounts in your organization, grant the Billing Account Administrator role at the organization level. To limit who can create billing accounts and how, use the Billing Account Creator role and restrict which users have this permission. For more information, see Create, modify, or close your billing account and Overview of billing access control. To change the billing account for a project, see How to change the project's billing account. Create budgets and alerts to monitor billing To monitor your billing account and individual projects, you can create budgets and send email alerts to billing administrators and billing users. Budgets generate alerts but don't turn off billing for projects, which means that a project continues to run even if it exceeds the budget. If a project is running over budget, you must disable billing manually. In addition, because the budget doesn't update in real time, you might not discover the overspending issue for a day or two. To send budget alerts to users who are not billing administrators or billing users, configure Cloud Monitoring notification channels. Organize resources using labels Labels are key-value pairs that help you organize your Google Cloud resources. Labels are forwarded to the billing system and are included in the billing export to BigQuery. Labels let you query your billed charges by label. Adding labels to resources by department, college, workload, or lab can help associate billed charges to the right entity, without requiring you to create separate billing accounts for each new project. For more information on labels, see Creating and managing labels. Monitor quotas and limits Many resources within Google Cloud are limited by quotas. For example, a new project that is linked to a newly associated billing account has a quota of eight virtual CPUs on Compute Engine. You can monitor your quota usage in the Google Cloud console. Also, you can request a quota increase to access more resources or new resources, such as GPUs. Manage your network This section describes best practices for managing your Google Cloud network. Choose a networking approach Isolate your cloud services with VPC. For example, use VPC to set up a network, which includes a common and private RFC 1918 IP space that spans all your projects. You then add instances from any project to this network or its subnetworks. A default VPC network is created for each new project. This default network is suitable for testing or development, but you should replace it with a custom VPC network for production. You can also attach a Cloud VPN connection to a single network, which can be used by all or a subset of the projects. Use the VPN connection to connect to a Google Cloud-specific RFC 1918 IP space or to extend the RFC 1918 IP address space of your on-premises network. The following table describes two of the most common networking options in Google Cloud. Networking option Description Direct peering If you have a registered Autonomous System Number (ASN) and you have publicly routable IP prefixes, connect to Google using direct peering. This option uses the same interconnection model as the public internet. However, unlike the public internet, there is no service provider. For more information, see Google edge network. Carrier Peering If you do not have public ASNs, or you want to connect to Google by using a service provider, use Carrier Peering. 
Carrier Peering is designed for customers who want enterprise-grade connectivity to the Google edge network. It's often difficult to predict the cost that's associated with egress traffic. To help Internet2 Higher Education members, Google waives internet egress fees that are calculated at list prices up to a maximum of 15% of the total monthly consumption cost. This offer applies to specific Internet egress SKUs. For more information about networking options, see Choosing a Network Connectivity product. Get help This section describes the best practices for getting help from Google. Choose a support plan that meets your needs Choose a support plan that meets the needs of your organization and document who has the appropriate permissions to create a support case. Basic support is free and available to all Google Cloud users. Basic includes billing support but not technical support. To get technical support, you must purchase a technical support plan. You can only purchase Google Cloud technical support plans at the organization level. The fee for technical support is charged at the project level to all projects in the organization. For a detailed feature and cost comparison of support plans, see Cloud Customer Care. At a minimum, we recommend that SLED organizations purchase the Standard Support plan. However, if your organization is running business critical workloads, consider purchasing an Enhanced Support plan. The Enhanced Support plan lets you access Customer Care 24/7, create cases using the Customer Support API, and escalate cases. For your organization, you might want a Technical Account Manager to assist with guided onboarding, case management, case escalation, and monthly operational health reviews. If you want a Technical Account Manager but don't want a Premium Support plan, we recommend the Technical Account Advisor Service. You can purchase Standard Support and Enhanced Support using the Google Cloud console. To purchase Premium Support, contact sales. Create and escalate support cases To create a case, you can use the Google Cloud console or the Cloud Support API (Enhanced or Premium Support required). When you create a case, keep in mind the appropriate support case priority. If you have already set the appropriate case priority for your case and are encountering issues with the support process, you can escalate your case. For a list of possible reasons for escalating a case, see Escalate a case. If you have Premium Support, you can also request an escalation by contacting your Technical Account Manager during local business hours. What's next Learn more about each of the Google Cloud services. Learn more about Network Intelligence Center. Learn how Google Cloud services map to Amazon Web Services (AWS) or Microsoft Azure services. Take a class or get certified on Google Cloud training fundamentals. Explore Cloud Skills Boost for Google Cloud training labs and pathways. Send feedback \ No newline at end of file diff --git a/Open_Banking_APIx.txt b/Open_Banking_APIx.txt new file mode 100644 index 0000000000000000000000000000000000000000..07cee998c10d159652c90617d2132a37c09d3496 --- /dev/null +++ b/Open_Banking_APIx.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/open-banking-apix +Date Scraped: 2025-02-23T11:58:58.044Z + +Content: +Open banking and embedded financeSimplify and accelerate the process of delivering open banking as required by PSD2 by enabling speedy and secure delivery of APIs. 
Embed your financial services in the non-financial products consumers already use.Contact usGartner recognizes Google (Apigee) as a Leader in the Magic Quadrant for Full Life Cycle API ManagementGet the reportBenefitsSimplify and accelerate secure delivery of open banking compliant APIsAccelerate open banking complianceUse preconfigured API proxies to authenticate and secure financial communications.Grow an ecosystem of partners and customersQuickly and securely share data with the Apigee open banking APIx developer portal.Promote internal and external innovationEnable different access models for internal apps or third-party providers with the pre-integrated OAuth security framework.Learn more about how Apigee can help with your open banking initiatives with our video, Open Banking, powered by Apigee API Management.Key featuresCompliance acceleratorsRegulators are mandating that financial institutions open programmatic access to financial data and payments capabilities. Our accelerators can help.API accelerator for open banking (UK) and PSD2 (EU) complianceProvides preconfigured proxies for banking APIs according to CMA’s open banking specification; an integrated OAuth security framework to support various access models; a banking-specific developer portal with API docs, tools, and an API sandbox with test cases; and other features to help accelerate regulated entities towards offering compliant APIs.API accelerator for Australia and CDR complianceWith an Open Data Sandbox (mock data), reusable logic that can be used for implementing CDS compliant APIs, and the ability to authenticate and authorize, including consent, with a mock OIDC provider, it helps simplify and accelerate secure delivery of open banking compliant APIs.API accelerator for Brazil (Sistema Financeiro Aberto)Provides a reference implementation of the specification as required by the Central Bank of Brazil and defined by the Open Banking governance structure. It provides technical representations, preconfigured APIs with mock data, and leverages reusable logic to speed up the delivery of compliant APIs.Ready to get started? Contact usCustomersSee how customers are transforming their business with Apigee open banking APIxVideoABN Amro is transforming their banking model with APIs.03:25Case studyBank Rakyat Indonesia (Bank BRI) increases financial inclusion through award-winning digital banking.5-min readVideoMacquarie Group uses open APIs for banking innovation.02:55See all customersTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Open_Source_Databases.txt b/Open_Source_Databases.txt new file mode 100644 index 0000000000000000000000000000000000000000..e1a6899632651a1788cdc7231bfd200cefeef3be --- /dev/null +++ b/Open_Source_Databases.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/open-source-databases +Date Scraped: 2025-02-23T11:59:25.368Z + +Content: +Is your PostgreSQL ready for the future? Discover how AlloyDB combines our love of PostgreSQL with the best of Google.Open source databasesFully managed open source databases promote innovation without vendor lock-in or high licensing fees. 
Google Cloud and our partners help you deploy secure open source databases at scale without managing infrastructure.Contact usGoogle named a Leader in the 2023 Gartner® Magic Quadrant™ for Cloud Database Management SystemsGet the reportBenefitsMake the most of Google Cloud's commitment to open sourceCommunity supportGoogle Cloud has long been a leading open source contributor and partner to organizations that are focused on fully managed open source databases and community-driven innovation.Partner-driven innovationUnlike vendors that deploy older versions of open source code, Google Cloud works directly with key partners to provide a fully integrated platform that offers our customers the latest technology.CompatibilityGoogle Cloud databases like AlloyDB and Cloud SQL combine the ease and efficiency of a managed service and the flexibility of open source engines, and give you access to the latest community enhancements.Key featuresOne unified experience for easy database managementFirst-line supportWe provide first-line support for open source databases so you can manage and log support tickets from a single window.Simple billingWhether you’re using NoSQL or relational databases, you’ll only see one bill from Google Cloud.Single consoleYou can provision and manage partner open source database services straight from your Google Cloud console.Ready to get started? Contact usMigrate to fully managed open source databasesVideoWhat is AlloyDB for PostgreSQL?Watch videoVideoChoosing a PostgreSQL database on Google CloudWatch videoVideoMigrate MySQL and PostgreSQL using Database Migration ServiceWatch videoCustomersHow Google Cloud customers are succeeding with open sourceVideoHow Bayer Crop Science unlocked harvest data efficiency with AlloyDBVideo (2:09)Case studySpoke builds a smarter AI-powered helpdesk ticketing solution6-min readBlog postAutoTrader migrated from Oracle to Cloud SQL for PostgreSQL5-min readSee all customersPartnersRecommended open source database partnersOur partnerships with these open source-centric companies offer your enterprise a seamless user experience and the ability to turn their innovations into fully managed services.See all partnersExplore our marketplaceRelated servicesRecommended products and servicesMigrate to fully managed open source databases from Google Cloud.AlloyDB for PostgreSQLGoogle Cloud’s fully managed PostgreSQL-compatible database for your most demanding enterprise workloads.Cloud SQL for MySQLGet 100% open-source-compatible MySQL, fully managed by Cloud SQL for added availability, stability, and security.Cloud SQL for PostgreSQLMigrate to Cloud SQL for PostgreSQL to build highly available and fully managed PostgreSQL instances, all 100% compatible.Memorystore for RedisBuild an in-memory caching layer with fully managed Memorystore for Redis to achieve sub-millisecond latency.Oracle to Cloud SQLMigrate from Oracle to PostgreSQL with minimal downtime with Datastream.Take the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Operational_excellence.txt b/Operational_excellence.txt new file mode 100644 index 0000000000000000000000000000000000000000..fd24238b9bd2138132295afbcf34f6022e8cf7df --- /dev/null +++ b/Operational_excellence.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/perspectives/ai-ml/operational-excellence +Date Scraped: 2025-02-23T11:44:19.237Z + +Content: +Home Docs Cloud Architecture Center Send feedback AI and ML perspective: Operational excellence Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC This document in the Architecture Framework: AI and ML perspective provides an overview of the principles and recommendations to help you to build and operate robust AI and ML systems on Google Cloud. These recommendations help you to set up foundational elements like observability, automation, and scalability. This document's recommendations align with the operational excellence pillar of the Architecture Framework. Operational excellence within the AI and ML domain is the ability to seamlessly deploy, manage, and govern the intricate AI and ML systems and pipelines that power your organization's strategic objectives. Operational excellence lets you respond efficiently to changes, reduce operational complexity, and ensure that operations remain aligned with business goals. Build a robust foundation for model development Establish a robust foundation to streamline model development, from problem definition to deployment. Such a foundation ensures that your AI solutions are built on reliable and efficient components and choices. This kind of foundation helps you to release changes and improvements quickly and easily. Consider the following recommendations: Define the problem that the AI system solves and the outcome that you want. Identify and gather relevant data that's required to train and evaluate your models. Then, clean and preprocess the raw data. Implement data validation checks to ensure data quality and integrity. Choose the appropriate ML approach for the task. When you design the structure and parameters of the model, consider the model's complexity and computational requirements. Adopt a version control system for code, model, and data. Automate the model-development lifecycle From data preparation and training to deployment and monitoring, automation helps you to improve the quality and efficiency of your operations. Automation enables seamless, repeatable, and error-free model development and deployment. Automation minimizes manual intervention, speeds up release cycles, and ensures consistency across environments. Consider the following recommendations: Use a managed pipeline orchestration system to orchestrate and automate the ML workflow. The pipeline must handle the major steps of your development lifecycle: preparation, training, deployment, and evaluation. Implement CI/CD pipelines for the model-development lifecycle. These pipelines should automate the building, testing, and deployment of models. The pipelines should also include continuous training to retrain models on new data as needed. Implement phased release approaches such as canary deployments or A/B testing, for safe and controlled model releases. 
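As one concrete, hedged illustration of a phased release, the following Python sketch uses the Vertex AI SDK to route a small share of traffic to a newly trained model on an existing endpoint. The project, location, and resource IDs are placeholders, and in practice a CI/CD pipeline would run a step like this only after automated evaluation gates pass.

# Minimal canary-rollout sketch using the Vertex AI SDK (google-cloud-aiplatform).
# Project, location, endpoint, and model IDs below are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/1234567890")
candidate = aiplatform.Model("projects/my-project/locations/us-central1/models/9876543210")

# Send 10% of prediction traffic to the candidate model; the model that is
# already deployed on the endpoint keeps the remaining 90%.
endpoint.deploy(
    model=candidate,
    deployed_model_display_name="candidate-canary",
    traffic_percentage=10,
    machine_type="n1-standard-4",
)

Promoting the canary or rolling it back is then a matter of adjusting the endpoint's traffic split, which keeps releases controlled and reversible.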
Implement observability When you implement observability, you can gain deep insights into model performance, data drift, and system health. Implement continuous monitoring, alerting, and logging mechanisms to proactively identify issues, trigger timely responses, and ensure operational continuity. Consider the following recommendations: Implement permanent and automated performance monitoring for your models. Use metrics and success criteria for ongoing evaluation of the model after deployment. Monitor your deployment endpoints and infrastructure to ensure service availability. Set up custom alerting based on business-specific thresholds and anomalies to ensure that issues are identified and resolved in a timely manner. Use explainable AI techniques to understand and interpret model outputs. Build a culture of operational excellence Operational excellence is built on a foundation of people, culture, and professional practices. The success of your team and business depends on how effectively your organization implements methodologies that enable the reliable and rapid development of AI capabilities. Consider the following recommendations: Champion automation and standardization as core development methodologies. Streamline your workflows and manage the ML lifecycle efficiently by using MLOps techniques. Automate tasks to free up time for innovation, and standardize processes to support consistency and easier troubleshooting. Prioritize continuous learning and improvement. Promote learning opportunities that team members can use to enhance their skills and stay current with AI and ML advancements. Encourage experimentation and conduct regular retrospectives to identify areas for improvement. Cultivate a culture of accountability and ownership. Define clear roles so that everyone understands their contributions. Empower teams to make decisions within boundaries and track progress by using transparent metrics. Embed AI ethics and safety into the culture. Prioritize responsible systems by integrating ethics considerations into every stage of the ML lifecycle. Establish clear ethics principles and foster open discussions about ethics-related challenges. Design for scalability Architect your AI solutions to handle growing data volumes and user demands. Use scalable infrastructure so that your models can adapt and perform optimally as your project expands. Consider the following recommendations: Plan for capacity and quotas. Anticipate future growth, and plan your infrastructure capacity and resource quotas accordingly. Prepare for peak events. Ensure that your system can handle sudden spikes in traffic or workload during peak events. Scale AI applications for production. Design for horizontal scaling to accommodate increases in the workload. Use frameworks like Ray on Vertex AI to parallelize tasks across multiple machines. Use managed services where appropriate. Use services that help you to scale while minimizing the operational overhead and complexity of manual interventions. 
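The observability recommendations above call for custom alerting based on business-specific thresholds. As a hedged illustration, the following Python sketch creates a Cloud Monitoring alerting policy on a hypothetical custom metric that a model-serving service writes; the metric name, threshold, and project ID are placeholders, and you would typically also attach notification channels.

# Minimal sketch: alert when a hypothetical custom metric written by a serving
# service (custom.googleapis.com/model/prediction_error_rate) stays above 5%
# for five minutes. Metric name, threshold, and project ID are placeholders.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

client = monitoring_v3.AlertPolicyServiceClient()
project = "projects/my-project"  # placeholder project

policy = monitoring_v3.AlertPolicy(
    display_name="Model prediction error rate too high",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="error rate > 5% for 5 minutes",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                filter='metric.type = "custom.googleapis.com/model/prediction_error_rate"',
                comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
                threshold_value=0.05,
                duration=duration_pb2.Duration(seconds=300),
            ),
        )
    ],
)

created = client.create_alert_policy(name=project, alert_policy=policy)
print("Created alerting policy:", created.name)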
ContributorsAuthors: Sannya Dang | AI Solution ArchitectFilipe Gracio, PhD | Customer EngineerOther contributors: Kumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerRyan Cox | Principal ArchitectStef Ruinard | Generative AI Field Solutions Architect Previous arrow_back Overview Next Security arrow_forward Send feedback \ No newline at end of file diff --git a/Operations(1).txt b/Operations(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..5d14663bb536032db746e63ebf3fbcb2cedc630a --- /dev/null +++ b/Operations(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/operations +Date Scraped: 2025-02-23T12:07:26.279Z + +Content: +Jump to Observability suiteGoogle Cloud's ObservabilityIntegrated monitoring, logging, and trace managed services for applications and systems running on Google Cloud and beyond.Go to consoleContact salesStart using the Observability with Monitoring and Logging quickstart guidesResearch shows successful reliability is 4.1 times more likely to incorporate observabilityLearn how Google Cloud’s Observability helps customers improve cloud observabilityStay up-to-date with the latest blogs and our o11y in-depth video seriesDownload the overview one-pager: Observability in Google CloudVIDEOWhat is Cloud Operations?4:51Key featuresKey featuresReal-time log management and analysisCloud Logging is a fully managed service that performs at scale and can ingest application and platform log data, as well as custom log data from GKE environments, VMs, and other services inside and outside of Google Cloud. Get advanced performance, troubleshooting, security, and business insights with Log Analytics, integrating the power of BigQuery into Cloud Logging. Built-in metrics observability at scaleCloud Monitoring provides visibility into the performance, uptime, and overall health of cloud-powered applications. Collect metrics, events, and metadata from Google Cloud services, hosted uptime probes, application instrumentation, and a variety of common application components. Visualize this data on charts and dashboards and create alerts so you are notified when metrics are outside of expected ranges.Stand-alone managed service for running and scaling PrometheusManaged Service for Prometheus is a fully managed Prometheus-compatible monitoring solution, built on top of the same globally scalable data store as Cloud Monitoring. 
Keep your existing visualization, analysis, and alerting services, as this data can be queried with PromQL or Cloud Monitoring.Monitor and improve your application's performanceApplication Performance Management (APM) combines the monitoring and troubleshooting capabilities of Cloud Logging and Cloud Monitoring with Cloud Trace and Cloud Profiler to help you reduce latency and cost so you can run more efficient applications.View all featuresData governance: Principles for securing and managing logsGet the whitepaperCustomersLearn from customers using operations toolsBlog postHow The Home Depot gets a single pane of glass for metrics across 2,200 stores4-min readBlog postHow Lowe’s evolved app dev and deployment with Google Cloud6-min readBlog postHow Lowe’s meets customer demand with Google SRE practices4-min readCase studyGannett improves observability with Google Cloud's Observability7-min readVideoNiantic shares best practices for custom metric telemetry on Google CloudVideo (3:22)VideoShopify analyzes distributed trace data to identify performance bottlenecksVideo (5:35)See all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.EventListen to a recent Twitter Space event about Log Analytics from Cloud LoggingLearn moreBlog postThe digital forecast: 30-plus cloud computing stats and trends to know in 2023Read the blogBlog postLog Analytics in Cloud Logging is now GALearn moreBlog postTop 10 reasons to get started with Log Analytics todayRead the blogBlog postIntroducing a high-usage tier for Managed Service for PrometheusRead the blogBlog postGoogle Cloud Managed Service for Prometheus is now GARead the blogDocumentationDocumentationTutorialObservability documentationView all documentation for the Observability.Learn moreTutorialGet started with Cloud MonitoringLearn about metrics scopes, the monitoring agent, uptime checks, and other features.Learn moreTutorialGet started with Cloud LoggingGuides and set-up docs to help you get up and running with Cloud Logging.Learn moreGoogle Cloud BasicsMonitoring and logging support for GKELearn about Google Kubernetes Engine’s native integration with Cloud Monitoring and Cloud Logging.Learn moreArchitectureGoogle Cloud metricsSee which metrics Cloud Monitoring supports.Learn moreTutorialHands-on labs: Google Cloud’s ObservabilityIn this skill badge, you’ll learn the ins and outs of Google Cloud's Observability to generate insights into the health of your applications.Learn moreTutorialDashboard API: Build your own Cloud Monitoring dashboardTips for shareable and reusable dashboard creation.Learn moreTutorialCloud Audit LogsLearn how Cloud Audit Logs maintains three audit logs: admin activity, data access, and system event.Learn moreArchitectureHybrid and multi-cloud deploymentsThis document discusses monitoring and logging architectures for hybrid and multi-cloud deployments.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for the Observability Use casesUse casesUse caseMonitor your infrastructureCloud Logging and Cloud Monitoring provide your IT Ops/SRE/DevOps teams with out-of-the box observability needed to monitor your infrastructure and applications. 
Cloud Logging automatically ingests Google Cloud audit and platform logs so that you can get started right away. Cloud Monitoring provides a view of all Google Cloud metrics at zero cost and integrates with a variety of providers for non Google Cloud monitoring.Best practiceTips and tricks: Service MonitoringLearn to effectively set SLOs in Cloud Monitoring with this step-by-step guide.Learn moreGoogle Cloud basicsView and customize your dashboardsLearn about Cloud Monitoring's dashboard customizations so you can get the exact views of your data that you need. Learn moreGoogle Cloud basicsLogging and monitoring on-premisesConsolidate Cloud Logging and Cloud Monitoring by integrating with non-Google Cloud environments as well.Learn moreUse caseTroubleshoot your applicationsReduce Mean Time to Recover (MTTR) and optimize your application’s performance with the full suite of cloud ops tools. Use dashboards to gain insights into your applications with both service and custom application metrics. Use Monitoring SLOs and alerting to help identify errors.TutorialDebugging apps on Google Kubernetes EngineLearn about how you can use Google Cloud operations tools to troubleshoot your applications running on GKE.Learn moreBest practiceIdentifying causes of app latencyLearn to identify the causes of tail latency with OpenCensus and Cloud Monitoring to monitor metrics and distributed tracing for app developers.Learn moreArchitectureConfiguring your containerized appsLearn about best practices for configuring your containerized applications to capture the data you will need for troubleshooting.Learn moreView all technical guidesAll featuresAll featuresLog managementLog Router allows customers to control where logs are sent. All logs, including audit logs, platform logs, and user logs, are sent to the Cloud Logging API where they pass through the log router. The log router checks each log entry against existing rules to determine which log entries to discard, which to ingest, and which to include in exports.Proactive monitoringCloud Monitoring allows you to create alerting policies to notify you when metrics, health check results, and uptime check results meet specified criteria. Integrated with a wide variety of notification channels, including Slack and PagerDuty.Prometheus as a managed serviceOffload the scaling and management of Prometheus infrastructure, updates, storage, and more with Managed Service for Prometheus. Avoid vendor lock-in and keep all of the open source tools you use today for visualization, alerting, and analysis of Prometheus metrics.Custom visualizationCloud Monitoring provides default out-of-the-box dashboards and allows you to define custom dashboards with powerful visualization tools to suit your needs.Health check monitoringCloud Monitoring provides uptime checks to web applications and other internet-accessible services running on your cloud environment. 
You can configure uptime checks associated with URLs, groups, or resources, such as instances and load balancers.Service monitoringService monitoring provides out-of-the-box telemetry and dashboards that allow troubleshooting in context through topology and context graphs, plus automation of health monitoring through SLOs and error budget management.Latency managementCloud Trace provides latency sampling and reporting for App Engine, including per-URL statistics and latency distributions.Performance and cost managementCloud Profiler provides continuous profiling of resource consumption in your production applications, helping you identify and eliminate potential performance issues.Security managementCloud Audit Logs provides near real-time user activity visibility across Google Cloud.PricingPricingThe pricing for Google Cloud's Observability lets you control your usage and spending. Google Cloud's Observability products are priced by data volume or usage. You can use the free data usage allotments to get started with no upfront fees or commitments.Cloud LoggingFeaturePriceFree allotment per monthEffective dateLogging storage1$0.50/GiB;One-time charge for streaming logs into log bucket storage for indexing, querying, and analysis; includes up to 30 days of storage in log buckets. No additional charges for querying and analyzing log data.First 50 GiB/project/monthJuly 1, 2018Logging retention2$0.01 per GiB per month for logs retained more than 30 days; billed monthly according to retention.Logs retained for the default retention period don't incur a retention cost.April 1, 2023Log Router3No additional chargeNot applicableNot applicableLog Analytics4No additional chargeNot applicableNot applicableCloud MonitoringFeaturePriceFree allotment per monthEffective dateAll Monitoring data except data ingested by using Managed Service for Prometheus$0.2580/MiB6: first 150–100,000 MiB$0.1510/MiB: next 100,000–250,000 MiB$0.0610/MiB: >250,000 MiBAll non-chargeable Google Cloud metricsFirst 150 MiB per billing account for metrics charged by bytes ingestedJuly 1, 2018Metrics ingested by using Google Cloud Managed Service for Prometheus, including GKE control plane metrics$0.060/million samples†: first 0-50 billion samples ingested#$0.048/million samples: next 50-250 billion samples ingested$0.036/million samples: next 250-500 billion samples ingested$0.024/million samples: >500 billion samples ingestedNot applicableAugust 8, 2023Monitoring API calls$0.01/1,000 Read API calls (Write API calls are free)First one million Read API calls included per billing accountJuly 1, 2018Execution of Monitoring uptime checks$0.30/1,000 executions‡One million executions per Google Cloud projectOctober 1, 2022Execution of Monitoring synthetic monitors$1.20/1,000 executions*100 executions per billing accountNovember 1, 2023Alerting policies$1.50 per month for each condition in an alerting policy$0.35 per 1,000,000 time series returned by the query of a metric alerting policy condition♣Not applicableJanuary 7, 2025Cloud TraceFeaturePriceFree allotment per monthEffective dateTrace ingestion$0.20/million spansFirst 2.5 million spansNovember 1, 20181 Storage volume counts the actual size of the log entries prior to indexing. There are no storage charges for logs stored in the Required log bucket.2 There are no retention charges for logs stored in the _Required log bucket, which has a fixed retention period of 400 days.3 Log routing is defined as forwarding received logs to a supported destination. 
Destination charges might apply to routed logs. 4 There is no charge to upgrade a log bucket to use Log Analytics or to issue SQL queries from the Log Analytics page. Note: The pricing language for Cloud Logging changed on July 19, 2023; however, the free allotments and the rates haven't changed. Your bill might refer to the old pricing language. 6 For pricing purposes, all units are treated as binary measures, for example, as mebibytes (MiB, or 2^20 bytes) or gibibytes (GiB, or 2^30 bytes). † Google Cloud Managed Service for Prometheus uses Cloud Monitoring storage for externally created metric data and uses the Monitoring API to retrieve that data. Managed Service for Prometheus meters based on samples ingested instead of bytes to align with Prometheus' conventions. For more information about sample-based metering, see Pricing for controllability and predictability. For computational examples, see Pricing examples based on samples ingested. # Samples are counted per billing account. ‡ Executions are charged to the billing account in which they are defined. For more information, see Pricing for uptime-check execution. * Executions are charged to the billing account in which they are defined. For each execution, you might incur additional charges from other Google Cloud services, including services such as Cloud Functions, Cloud Storage, and Cloud Logging. For information about these additional charges, see the pricing document for the respective Google Cloud service. ♣ For more information, see Pricing for alerting. View pricing details. Partners: Get support from a rich and growing ecosystem of technology integrations to expand the IT ops, security, and compliance capabilities available to Google Cloud customers. Featured partners. Partner integrations. See all partners. BindPlane is a registered trademark of observIQ, Inc. Take the next step: Start your next project, explore interactive tutorials, and manage your account. Go to console. Need help getting started? Contact sales. Work with a trusted partner: Find a partner. Get tips and best practices: See tutorials. \ No newline at end of file diff --git a/Operations.txt b/Operations.txt new file mode 100644 index 0000000000000000000000000000000000000000..abe768456ca75f0aa53f99bbfc87a38ae113f6c9 --- /dev/null +++ b/Operations.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/ops-developer-platform-applications +Date Scraped: 2025-02-23T11:47:12.669Z + +Content: +Home Docs Cloud Architecture Center Send feedback Operations for both the developer platform and applications Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC Operating a developer platform and containerized applications requires a number of different administrative tasks that you must conduct on an ongoing basis. Such tasks include, for example, creating new applications from a template, authorizing new developer groups to use the developer platform, planning capacity needs, and debugging run-time issues. Operations can be automated or performed manually. Common automated operations The blueprint provides automation for some of the most common tasks in the form of webhook triggers, which are a simple type of API. Triggers are automatically connected to webhook events that come from one of the source control repositories. Developer platform developers can connect the other triggers. 
Typically, developer platform developers write a developer portal, which can be a simple web form that calls a webhook trigger when a form is submitted. The following table describes the common tasks that the blueprint automates using webhook triggers. The task frequencies are meant to be illustrative because the frequency of a task depends on many factors. Tasks don't necessarily recur at precise intervals. Task User Description Task frequency Add a tenant. Developer platform administrator The administrator submits a form on the developer portal. The new tenant form fields include the tenant name and team members. An automated trigger creates the resources for the new tenant. A few times each year Add an application based on an existing application template. Application developer The developer submits a form on the developer portal. The new application form fields include the tenant name, the application name, and the base application template. An automated trigger creates resources for a new application. A few times each year Build and deploy source code changes for an application to the development environment. Application developer The developer edits the source code, runs and tests the code locally, and commits the code. The blueprint isn't involved in local developer workflows, but the Skaffold tool supports a local builds step. A few times each day for each application Deploy YAML configuration changes for an application to the development environment. An example of YAML configuration change is to increase the CPU of a deployment resource. Application developer The developer edits the application configuration and commits the change. A few times each week for each application Deploy application infrastructure changes to the development environment. The application infrastructure is the cloud resources in an application's project. An example change is an increase to the CPU count for an AlloyDB for PostgreSQL instance. Application developer The developer edits the application resource Terraform project and commits the change. The developer submits a form on the developer portal. An automated trigger starts the plan and apply pipeline. Many times each year Promote application changes from development to non-production (or from non-production to production). Application changes can include new application images or application YAML configuration changes. Application operator The operator merges changes from the development branch to the non-production branch (or from the non-production branch to production branch). The operator supervises the rollout. Several times each week for each application Promote application infrastructure changes from development to non-production (or from non-production to production). Application operator The operator merges select changes from the development branch to the non-production branch (or from the non-production branch to the production branch). The operator supervises rollout. Several times each quarter for each application Common manual operations Some developer platform operations are less structured in nature, and don't use automation with a developer platform. You can develop your own playbooks based on this blueprint and perform these tasks in the Google Cloud console. The following table describes these non-automated tasks. The task frequencies are meant to be illustrative because the frequency of a task depends on many factors. Tasks don't necessarily recur at precise intervals. 
Task User Description Task frequency Define a new application template. Developer platform developer The developer modifies an application template that is based on a blueprint template, or ports a template to a new language. A few times each year Investigate service run-time errors in the development environment. Application developer The developer uses the Logs Explorer and Metrics Explorer in the Google Cloud console to review the error logs, monitoring metrics, and time series data for tenants and applications. A few times each month Investigate service run-time errors in production or non-production environments. Application operator The operator uses the Logs Explorer and Metrics Explorer in the Google Cloud console to review the error logs, monitoring metrics, and time series data for tenants and applications. A few times each month Investigate build errors. Application developers The developer views the Cloud Build history, including build status and logs, in the Google Cloud console. A few times each week Investigate deployment errors in the development environment. Application developers The developer views the Cloud Deploy release and rollout history in the Google Cloud console for success status and logs from a deployment attempt, including any errors. A few times each month Investigate deployment errors in the non-production and production environments. Application operators The operator views the Cloud Deploy release and rollout history in the Google Cloud console for success status and logs from a deployment attempt, including error logs. A few times each month Connect to clusters to debug GKE issues. Developer platform administrator The administrator uses the Connect gateway to connect to private clusters. The administrator can review information about common issues, such as unscheduled pods, in the Google Cloud console. A few times each month Plan capacity and optimize costs. Developer platform administrator The administrator reviews GKE resource utilization, aggregated by scope or namespace, in the Google Cloud console. Scheduled as a monthly recurring task. Resize, add, or remove node pools. Developer platform administrator The administrator edits the IaC as appropriate and redeploys the applications. Done in response to capacity planning. Check security posture. Developer platform administrator The administrator checks for vulnerabilities and compliance with standards using the GKE security posture dashboard. Scheduled as a monthly recurring task. Upgrade cluster system software versions (for example, the Kubernetes version). Developer platform administrator The administrator uses the GKE maintenance windows and exclusions to allow upgrades only during planned times. The administrator uses the open upgrade window in the development environment first. After assessing the health of the upgrade, the administrator upgrades the non-production environment and then the production environment. Scheduled as a quarterly recurring task. Install critical cluster security updates. None Automatic, done by GKE. A few times each year Test regional failover. Developer platform administrator and application administrator The administrators schedule and manually initiate a regional failover of the environment as appropriate. Yearly as part of disaster recovery exercises Add a region. 
Developer platform administrator, developer platform developer, and application administrator The developer platform administrator deploys additional GKE clusters in the new region. The administrator updates the application template to add the new deployment step for relevant environments. The application operator then integrates the change to add deployment sequence to include the new region. Very rarely Move to a new region. Developer platform administrator, developer platform developer, and application administrator The users add the new region as described in Add a region. After testing the new configuration, the users remove the old region. Very rarely What's next Read about managing costs and attributions for the developer platform (next document in this series). Send feedback \ No newline at end of file diff --git a/Operations_best_practices.txt b/Operations_best_practices.txt new file mode 100644 index 0000000000000000000000000000000000000000..42086c3802b97efadd0c3166625e46196183d050 --- /dev/null +++ b/Operations_best_practices.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations/operation-best-practices +Date Scraped: 2025-02-23T11:45:38.780Z + +Content: +Home Docs Cloud Architecture Center Send feedback Operations best practices Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC This section introduces operations that you must consider as you deploy and operate additional workloads into your Google Cloud environment. This section isn't intended to be exhaustive of all operations in your cloud environment, but introduces decisions related to the architectural recommendations and resources deployed by the blueprint. Update foundation resources Although the blueprint provides an opinionated starting point for your foundation environment, your foundation requirements might grow over time. After your initial deployment, you might adjust configuration settings or build new shared services to be consumed by all workloads. To modify foundation resources, we recommend that you make all changes through the foundation pipeline. Review the branching strategy for an introduction to the flow of writing code, merging it, and triggering the deployment pipelines. Decide attributes for new workload projects When creating new projects through the project factory module of the automation pipeline, you must configure various attributes. Your process to design and create projects for new workloads should include decisions for the following: Which Google Cloud APIs to enable Which Shared VPC to use, or whether to create a new VPC network Which IAM roles to create for the initial project-service-account that is created by the pipeline Which project labels to apply The folder that the project is deployed to Which billing account to use Whether to add the project to a VPC Service Controls perimeter Whether to configure a budget and billing alert threshold for the project For a complete reference of the configurable attributes for each project, see the input variables for the project factory in the automation pipeline. Manage permissions at scale When you deploy workload projects on top of your foundation, you must consider how you will grant access to the intended developers and consumers of those projects. We recommend that you add users into a group that is managed by your existing identity provider, synchronize the groups with Cloud Identity, and then apply IAM roles to the groups. 
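As a minimal illustration of that pattern, and not the blueprint's own code, a Terraform binding that grants a role to a Cloud Identity group on a workload project might look like the following. The project ID, group address, and role are hypothetical placeholders.
resource "google_project_iam_member" "workload_developers" {
  # Placeholder workload project ID; in practice this is a project created by the project factory.
  project = "example-workload-project"
  # Illustrative role; grant the narrowest role that fits the team's duties.
  role    = "roles/container.developer"
  # Group that is synchronized from your identity provider into Cloud Identity.
  member  = "group:workload-developers@example.com"
}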
Always keep in mind the principle of least privilege. We also recommend that you use IAM recommender to identify allow policies that grant over-privileged roles. Design a process to periodically review recommendations or automatically apply recommendations into your deployment pipelines. Coordinate changes between the networking team and the application team The network topologies that are deployed by the blueprint assume that you have a team responsible for managing network resources, and separate teams responsible for deploying workload infrastructure resources. As the workload teams deploy infrastructure, they must create firewall rules to allow the intended access paths between components of their workload, but they don't have permission to modify the network firewall policies themselves. Plan how teams will work together to coordinate the changes to the centralized networking resources that are needed to deploy applications. For example, you might design a process where a workload team requests tags for their applications. The networking team then creates the tags and adds rules to the network firewall policy that allows traffic to flow between resources with the tags, and delegates the IAM roles to use the tags to the workload team. Optimize your environment with the Active Assist portfolio In addition to IAM recommender, Google Cloud provides the Active Assist portfolio of services to make recommendations about how to optimize your environment. For example, firewall insights or the unattended project recommender provide actionable recommendations that can help tighten your security posture. Design a process to periodically review recommendations or automatically apply recommendations into your deployment pipelines. Decide which recommendations should be managed by a central team and which should be the responsibility of workload owners, and apply IAM roles to access the recommendations accordingly. Grant exceptions to organization policies The blueprint enforces a set of organization policy constraints that are recommended to most customers in most scenarios, but you might have legitimate use cases that require limited exceptions to the organization policies you enforce broadly. For example, the blueprint enforces the iam.disableServiceAccountKeyCreation constraint. This constraint is an important security control because a leaked service account key can have a significant negative impact, and most scenarios should use more secure alternatives to service account keys to authenticate. However, there might be use cases that can only authenticate with a service account key, such as an on-premises server that requires access to Google Cloud services and cannot use workload identity federation. In this scenario, you might decide to allow an exception to the policy, so long as additional compensating controls like best practices for managing service account keys are enforced. Therefore, you should design a process for workloads to request an exception to policies, and ensure that the decision makers who are responsible for granting exceptions have the technical knowledge to validate the use case and consult on whether additional controls must be in place to compensate. When you grant an exception to a workload, modify the organization policy constraint as narrowly as possible. You can also conditionally add constraints to an organization policy by defining a tag that grants an exception or enforcement for policy, then applying the tag to projects and folders. 
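For example, a narrowly scoped, per-project exception could be expressed in Terraform along the following lines. This is a hedged sketch rather than code from the blueprint, and the project ID is a placeholder.
resource "google_project_organization_policy" "sa_key_exception" {
  # Placeholder: the single project with a validated need for service account keys.
  project    = "example-legacy-workload"
  constraint = "iam.disableServiceAccountKeyCreation"

  boolean_policy {
    # Disables enforcement for this project only; the constraint remains enforced elsewhere.
    enforced = false
  }
}
A tag-based condition on the organization policy, as described above, is an alternative when several projects or folders need the same exception.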
Protect your resources with VPC Service Controls The blueprint helps prepare your environment for VPC Service Controls by separating the base and restricted networks. However, by default, the Terraform code doesn't enable VPC Service Controls because this enablement can be a disruptive process. A perimeter denies access to restricted Google Cloud services from traffic that originates outside the perimeter, which includes the console, developer workstations, and the foundation pipeline used to deploy resources. If you use VPC Service Controls, you must design exceptions to the perimeter that allow the access paths that you intend. A VPC Service Controls perimeter is intended for exfiltration controls between your Google Cloud organization and external sources. The perimeter isn't intended to replace or duplicate allow policies for granular access control to individual projects or resources. When you design and architect a perimeter, we recommend using a common unified perimeter for lower management overhead. If you must design multiple perimeters to granularly control service traffic within your Google Cloud organization, we recommend that you clearly define the threats that are addressed by a more complex perimeter structure and the access paths between perimeters that are needed for intended operations. To adopt VPC Service Controls, evaluate the following: Which of your use cases require VPC Service Controls. Whether the required Google Cloud services support VPC Service Controls. How to configure breakglass access to modify the perimeter in case it disrupts your automation pipelines. How to use best practices for enabling VPC Service Controls to design and implement your perimeter. After the perimeter is enabled, we recommend that you design a process to consistently add new projects to the correct perimeter, and a process to design exceptions when developers have a new use case that is denied by your current perimeter configuration. Test organization-wide changes in a separate organization We recommend that you never deploy changes to production without testing. For workload resources, this approach is facilitated by separate environments for development, non-production, and production. However, some resources at the organization don't have separate environments to facilitate testing. For changes at the organization-level, or other changes that can affect production environments like the configuration between your identity provider and Cloud Identity, consider creating a separate organization for test purposes. Control remote access to virtual machines Because we recommend that you deploy immutable infrastructure through the foundation pipeline, infrastructure pipeline, and application pipeline, we also recommend that you only grant developers direct access to a virtual machine through SSH or RDP for limited or exceptional use cases. For scenarios that require remote access, we recommend that you manage user access using OS Login where possible. This approach uses managed Google Cloud services to enforce access control, account lifecycle management, two-step verification, and audit logging. Alternatively, if you must allow access through SSH keys in metadata or RDP credentials, it is your responsibility to manage the credential lifecycle and store credentials securely outside of Google Cloud. In any scenario, a user with SSH or RDP access to a VM can be a privilege escalation risk, so you should design your access model with this in mind. 
The user can run code on that VM with the privileges of the associated service account or query the metadata server to view the access token that is used to authenticate API requests. This access can then be a privilege escalation if you didn't deliberately intend for the user to operate with the privileges of the service account. Mitigate overspending by planning budget alerts The blueprint implements best practices introduced in the Google Cloud Architecture Framework: Cost Optimization for managing cost, including the following: Use a single billing account across all projects in the enterprise foundation. Assign each project a billingcode metadata label that is used to allocate cost between cost centers. Set budgets and alert thresholds. It's your responsibility to plan budgets and configure billing alerts. The blueprint creates budget alerts for workload projects when the forecasted spending is on track to reach 120% of the budget. This approach lets a central team identify and mitigate incidents of significant overspending. Significant unexpected increases in spending without a clear cause can be an indicator of a security incident and should be investigated from the perspectives of both cost control and security. Note: Budget alerts cover a different type of notification than the Billing category of Essential Contacts. Budget alerts are related to the consumption of budgets that you define for each project. Billing notifications from Essential Contacts are related to pricing updates, errors, and credits. Depending on your use case, you might set a budget that is based on the cost of an entire environment folder, or all projects related to a certain cost center, instead of setting granular budgets for each project. We also recommend that you delegate budget and alert setting to workload owners who might set more granular alerting thresholds for their day-to-day monitoring. For guidance on building FinOps capabilities, including forecasting budgets for workloads, see Getting started with FinOps on Google Cloud. Allocate costs between internal cost centers The console's billing reports let you view and forecast cost in multiple dimensions. In addition to the prebuilt reports, we recommend that you export billing data to a BigQuery dataset in the prj-c-billing-export project. The exported billing records allow you to allocate cost on custom dimensions, such as your internal cost centers, based on project label metadata like billingcode. The following sample SQL query shows the costs for all projects, grouped by the billingcode project label.
#standardSQL
SELECT
  (SELECT value FROM UNNEST(labels) WHERE key = 'billingcode') AS costcenter,
  service.description AS description,
  SUM(cost) AS charges,
  SUM((SELECT SUM(amount) FROM UNNEST(credits))) AS credits
FROM PROJECT_ID.DATASET_ID.TABLE_NAME
GROUP BY costcenter, description
ORDER BY costcenter ASC, description ASC
To set up this export, see Export Cloud Billing data to BigQuery. If you require internal accounting or chargeback between cost centers, it's your responsibility to incorporate the data that is obtained from this query into your internal processes. Ingest findings from detective controls into your existing SIEM Although the foundation resources help you configure aggregated destinations for audit logs and security findings, it is your responsibility to decide how to consume and use these signals. 
If you have a requirement to aggregate logs across all cloud and on-premise environments into an existing SIEM, decide how to ingest logs from the prj-c-logging project and findings from Security Command Center into your existing tools and processes. You might create a single export for all logs and findings if a single team is responsible for monitoring security across your entire environment, or you might create multiple exports filtered to the set of logs and findings needed for multiple teams with different responsibilities. Alternatively, if log volume and cost are prohibitive, you might avoid duplication by retaining Google Cloud logs and findings only in Google Cloud. In this scenario, ensure that your existing teams have the right access and training to work with logs and findings directly in Google Cloud. For audit logs, design log views to grant access to a subset of logs in your centralized logs bucket to individual teams, instead of duplicating logs to multiple buckets which increases log storage cost. For security findings, grant folder-level and project-level roles for Security Command Center to let teams view and manage security findings just for the projects for which they are responsible, directly in the console. Continuously develop your controls library The blueprint starts with a baseline of controls to detect and prevent threats. We recommend that you review these controls and add additional controls based on your requirements. The following table summarizes the mechanisms to enforce governance policies and how to extend these for your additional requirements: Policy controls enforced by the blueprint Guidance to extend these controls Security Command Center detects vulnerabilities and threats from multiple security sources. Define custom modules for Security Health Analytics and custom modules for Event Threat Detection. The Organization Policy service enforces a recommended set of organization policy constraints on Google Cloud services. Enforce additional constraints from the premade list of available constraints or create custom constraints. Open Policy Agent (OPA) policy validates code in the foundation pipeline for acceptable configurations before deployment. Develop additional constraints based on the guidance at GoogleCloudPlatform/policy-library. Alerting on log-based metrics and performance metrics configures log-based metrics to alert on changes to IAM policies and configurations of some sensitive resources. Design additional log-based metrics and alerting policies for log events that you expect shouldn't occur in your environment. A custom solution for automated log analysis regularly queries logs for suspicious activity and creates Security Command Center findings. Write additional queries to create findings for security events that you want to monitor, using security log analytics as a reference. A custom solution to respond to asset changes creates Security Command Center findings and can automate remediation actions. Create additional Cloud Asset Inventory feeds to monitor changes for particular asset types and write additional Cloud Run functions with custom logic to respond to policy violations. These controls might evolve as your requirements and maturity on Google Cloud change. Manage encryption keys with Cloud Key Management Service Google Cloud provides default encryption at rest for all customer content, but also provides Cloud Key Management Service (Cloud KMS) to provide you additional control over your encryption keys for data at rest. 
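At its core, Cloud KMS organizes customer-managed keys into key rings. The following minimal Terraform sketch shows the shape of these resources; the project ID, names, location, and rotation period are illustrative assumptions, not values from the blueprint.
resource "google_kms_key_ring" "central" {
  # Placeholder for a centrally managed key project.
  project  = "example-kms-project"
  name     = "example-keyring"
  location = "us-central1"
}

resource "google_kms_crypto_key" "workload_cmek" {
  name            = "example-workload-key"
  key_ring        = google_kms_key_ring.central.id
  # Example 90-day rotation; set this according to your compliance requirements.
  rotation_period = "7776000s"
}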
We recommend that you evaluate whether the default encryption is sufficient, or whether you have a compliance requirement that you must use Cloud KMS to manage keys yourself. For more information, see decide how to meet compliance requirements for encryption at rest. The blueprint provides a prj-c-kms project in the common folder and a prj-{env}-kms project in each environment folder for managing encryption keys centrally. This approach lets a central team audit and manage encryption keys that are used by resources in workload projects, in order to meet regulatory and compliance requirements. Depending on your operational model, you might prefer a single centralized project instance of Cloud KMS under the control of a single team, you might prefer to manage encryption keys separately in each environment, or you might prefer multiple distributed instances so that accountability for encryption keys can be delegated to the appropriate teams. Modify the Terraform code sample as needed to fit your operational model. Optionally, you can enforce customer-managed encryption keys (CMEK) organization policies to enforce that certain resource types always require a CMEK key and that only CMEK keys from an allowlist of trusted projects can be used. Store and audit application credentials with Secret Manager We recommend that you never commit sensitive secrets (such as API keys, passwords, and private certificates) to source code repositories. Instead, commit the secret to Secret Manager and grant the Secret Manager Secret Accessor IAM role to the user or service account that needs to access the secret. We recommend that you grant the IAM role to an individual secret, not to all secrets in the project. When possible, you should generate production secrets automatically within the CI/CD pipelines and keep them inaccessible to human users except in breakglass situations. In this scenario, ensure that you don't grant IAM roles to view these secrets to any users or groups. The blueprint provides a single prj-c-secrets project in the common folder and a prj-{env}-secrets project in each environment folder for managing secrets centrally. This approach lets a central team audit and manage secrets used by applications in order to meet regulatory and compliance requirements. Depending on your operational model, you might prefer a single centralized instance of Secret Manager under the control of a single team, or you might prefer to manage secrets separately in each environment, or you might prefer multiple distributed instances of Secret Manager so that each workload team can manage their own secrets. Modify the Terraform code sample as needed to fit your operational model. Plan breakglass access to highly privileged accounts Although we recommend that changes to foundation resources are managed through version-controlled IaC that is deployed by the foundation pipeline, you might have exceptional or emergency scenarios that require privileged access to modify your environment directly. We recommend that you plan for breakglass accounts (sometimes called firecall or emergency accounts) that have highly privileged access to your environment in case of an emergency or when the automation processes break down. The following table describes some example purposes of breakglass accounts. Breakglass purpose Description Super admin Emergency access to the Super admin role used with Cloud Identity, to, for example, fix issues that are related to identity federation or multi-factor authentication (MFA). 
Organization administrator Emergency access to the Organization Administrator role, which can then grant access to any other IAM role in the organization. Foundation pipeline administrator Emergency access to modify the resources in your CICD project on Google Cloud and external Git repository in case the automation of the foundation pipeline breaks down. Operations or SRE An operations or SRE team needs privileged access to respond to outages or incidents. This can include tasks like restarting VMs or restoring data. Your mechanism to permit breakglass access depends on the existing tools and procedures you have in place, but a few example mechanisms include the following: Use your existing tools for privileged access management to temporarily add a user to a group that is predefined with highly-privileged IAM roles or use the credentials of a highly-privileged account. Pre-provision accounts intended only for administrator usage. For example, developer Dana might have an identity dana@example.com for daily use and admin-dana@example.com for breakglass access. Use an application like just-in-time privileged access that allows a developer to self-escalate to more privileged roles. Regardless of the mechanism you use, consider how you operationally address the following questions: How do you design the scope and granularity of breakglass access? For example, you might design a different breakglass mechanism for different business units to ensure that they cannot disrupt each other. How does your mechanism prevent abuse? Do you require approvals? For example, you might have split operations where one person holds credentials and one person holds the MFA token. How do you audit and alert on breakglass access? For example, you might configure a custom Event Threat Detection module to create a security finding when a predefined breakglass account is used. How do you remove the breakglass access and resume normal operations after the incident is over? For common privilege escalation tasks and rolling back changes, we recommend designing automated workflows where a user can perform the operation without requiring privilege escalation for their user identity. This approach can help reduce human error and improve security. For systems that require regular intervention, automating the fix might be the best solution. Google encourages customers to adopt a zero-touch production approach to make all production changes using automation, safe proxies, or audited breakglass. Google provides the SRE books for customers who are looking to adopt Google's SRE approach. What's next Read Deploy the blueprint (next document in this series). Send feedback \ No newline at end of file diff --git a/Optimize_AI_and_ML_workloads_with_Parallelstore.txt b/Optimize_AI_and_ML_workloads_with_Parallelstore.txt new file mode 100644 index 0000000000000000000000000000000000000000..639e68819a511e0c81065a3f6a19fa25e035051f --- /dev/null +++ b/Optimize_AI_and_ML_workloads_with_Parallelstore.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/optimize-ai-ml-workloads-parallelstore +Date Scraped: 2025-02-23T11:46:38.440Z + +Content: +Home Docs Cloud Architecture Center Send feedback Optimize AI and ML workloads with Parallelstore Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2025-01-20 UTC This document provides a reference architecture that shows how you can use Parallelstore to optimize performance for artificial intelligence (AI) or machine learning (ML) workloads. Parallelstore is a parallel file system storage service that helps you to reduce costs, improve resource utilization, and accelerate training times for your AI and ML workloads. The intended audience for this document includes architects and technical practitioners who design, provision, and manage storage for their AI and ML workloads on Google Cloud. The document assumes that you have an understanding of the ML lifecycle, processes, and capabilities. Parallelstore is a fully managed, high-performance scratch file system in Google Cloud that's built on the Distributed Asynchronous Object Storage (DAOS) architecture. Parallelstore is ideal for AI and ML workloads that use up to 100 TiB of storage capacity and that need to provide low-latency (sub-millisecond) access with high throughput and high input/output operations per second (IOPS). Parallelstore offers several advantages for AI and ML workloads, such as the following: Lower total cost of ownership (TCO) for training: Parallelstore accelerates training time by efficiently delivering data to compute nodes. This functionality helps to reduce the total cost of ownership for AI and ML model training. Lower TCO for serving: Parallelstore's high-performance capabilities enable faster model loading and optimized inference serving. These capabilities help to lower compute costs and improve resource utilization. Efficient resource utilization: Parallelstore lets you combine training, checkpointing, and serving within a single instance. This resource utilization helps to maximize the efficient use of read and write throughput in a single, high-performance storage system. Architecture The following diagram shows a sample architecture for using Parallelstore to optimize performance of a model training workload and serving workload: The workloads that are shown in the preceding architecture are described in detail in later sections. The architecture includes the following components: Component Purpose Google Kubernetes Engine (GKE) cluster GKE manages the compute hosts on which your AI and ML model training and serving processes execute. GKE manages the underlying infrastructure of clusters, including the control plane, nodes, and all system components. Kubernetes Scheduler The GKE control plane schedules workloads and manages their lifecycle, scaling, and upgrades. The Kubernetes node agent (kubelet), which isn't shown in the diagram, communicates with the control plane. The kubelet is responsible for starting and running containers scheduled on the GKE nodes. You can deploy GPUs for batch and AI workloads with Dynamic Workload Scheduler, which lets you request GPUs without a large commitment. For more information about the scheduler, see AI/ML orchestration on GKE. Virtual Private Cloud (VPC) network All of the Google Cloud resources that are in the architecture use a single VPC network. Depending on your requirements, you can choose to build an architecture that uses multiple networks. For more information about how to configure a VPC network for Parallelstore, see Configure a VPC network. Cloud Load Balancing In this architecture, Cloud Load Balancing efficiently distributes incoming inference requests from application users to the serving containers in the GKE cluster. 
The use of Cloud Load Balancing helps to ensure high availability, scalability, and optimal performance for the AI and ML application. For more information, see Understanding GKE load balancing. Graphics Processing Unit (GPU) or Tensor Processing Units (TPUs) GPUs and TPUs are specialized machine accelerators that improve the performance of your AI and ML workload. For more information about how to choose an appropriate processor type, see Accelerator options later in this document. Parallelstore Parallelstore accelerates AI and ML training and serving by providing a high-performance, parallel file system that's optimized for low latency and high throughput. Compared to using Cloud Storage alone, using Parallelstore significantly reduces training time and improves the responsiveness of your models during serving. These improvements are especially realized in demanding workloads that require fast and consistent access to shared data. Cloud Storage Cloud Storage provides persistent and cost-effective storage for your AI and ML workloads. Cloud Storage serves as the central repository for your raw training datasets, model checkpoints, and final trained models. Using Cloud Storage helps to ensure data durability, long-term availability, and cost-efficiency for data that isn't actively being used in computations. Training workload In the preceding architecture, the following are the steps in the data flow during model training: Upload training data to Cloud Storage: You upload training data to a Cloud Storage bucket, which serves as a secure and scalable central repository and source of truth. Copy data to Parallelstore: The training data corpus is transferred through a bulk API import to a Parallelstore instance from Cloud Storage. Transferring the training data lets you take advantage of Parallelstore's high-performance file system capabilities to optimize data loading and processing speeds during model training. Run training jobs in GKE: The model training process runs on GKE nodes. By using Parallelstore as the data source instead of loading data from Cloud Storage directly, the GKE nodes can access and load training data with significantly increased speed and efficiency. Using Parallelstore helps to reduce data loading times and accelerate the overall training process, especially for large datasets and complex models. Depending on your workload requirements, you can use GPUs or TPUs. For information about how to choose an appropriate processor type, see Accelerator options later in this document. Save training checkpoints to Parallelstore: During the training process, checkpoints are saved to Parallelstore based on metrics or intervals that you define. The checkpoints capture the state of the model at frequent intervals. Save checkpoints and model to Cloud Storage: We recommend that you use a bulk API export from the Parallelstore instance to save some checkpoints and the trained model to Cloud Storage. This practice ensures fault tolerance and enables future use cases like resuming training from a specific point, deploying the model for production, and conducting further experiments. As a best practice, store checkpoints in a different bucket from your training data. Restore checkpoints or model: When your AI and ML workflow requires that you restore checkpoints or model data, you need to locate the asset that you want to restore in Cloud Storage. Select the asset to restore based on timestamp, performance metric, or a specific version. 
Use API import to transfer the asset from Cloud Storage to Parallelstore, and then load the asset into your training container. You can then use the restored checkpoint or model to resume training, fine-tune parameters, or evaluate performance on a validation set. Serving workload In the preceding architecture, the following are the steps in the data flow during model serving: Load model for serving: After training is complete, your pods load the trained model to the serving nodes. If the Parallelstore instance that you used during training has sufficient IOPS capacity, you can accelerate model loading and reduce costs by using the training instance to serve the model. Reusing the training instance enables efficient resource sharing between training and serving. However, to maintain optimal performance and compatibility, use an accelerator type (GPU or TPU) for training that's consistent with the accelerator type that's available on the serving GKE nodes. Inference request: Application users send inference requests through the AI and ML application. These requests are directed to the Cloud Load Balancing service. Cloud Load Balancing distributes the incoming requests across the serving containers in the GKE cluster. This distribution ensures that no single container is overwhelmed and that requests are processed efficiently. Serving inference requests: During production, the system efficiently handles inference requests by utilizing the model serving cache. The compute nodes interact with the cache by first checking for a matching prediction. If a matching prediction is found, it's returned directly, which helps to optimize response times and resource usage. Otherwise, the model processes the request, generates a prediction, and stores it in the cache for future efficiency. Response delivery: The serving containers send the responses back through Cloud Load Balancing. Cloud Load Balancing routes the responses back to the appropriate application users, which completes the inference request cycle. Products used This reference architecture uses the following Google Cloud products: Virtual Private Cloud (VPC): A virtual system that provides global, scalable networking functionality for your Google Cloud workloads. VPC includes VPC Network Peering, Private Service Connect, private services access, and Shared VPC. Google Kubernetes Engine (GKE): A Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Parallelstore: A fully managed parallel file system for AI, high performance computing (HPC), and data-intensive applications. Use cases Parallelstore is ideal for AI and ML workloads with up to 100 TiB of storage capacity and that need to provide low-latency (sub-millisecond) access with high throughput and high IOPS. The following sections provide examples of use cases for which you can use Parallelstore. Text-based processing and text generation Large language models (LLMs) are specialized AI models that are designed specifically for understanding and processing text-based data. LLMs are trained on massive text datasets, enabling them to perform a variety of tasks, including machine translation, question answering, and text summarization. 
Training LLM models demands low-latency access to the datasets for efficient request processing and text generation. Parallelstore excels in data-intensive applications by providing the high throughput and low latency that's needed for both training and inference, leading to more responsive LLM-powered applications. High-resolution image or video processing Traditional AI and ML applications or multi-modal generative models that process high-resolution images or videos, such as medical imaging analysis or autonomous driving systems, require large storage capacity and rapid data access. Parallelstore's high-performance scratch file system allows for fast data loading to accelerate the application performance. For example, Parallelstore can temporarily hold and process large volumes of patient data, such as MRI and CT scans, that are pulled from Cloud Storage. This functionality enables AI and ML models to quickly analyze the data for diagnosis and treatment. Design alternatives The following sections present alternative design approaches that you can consider for your AI and ML application in Google Cloud. Platform alternative Instead of hosting your model training and serving workflow on GKE, you can consider Compute Engine with Slurm. Slurm is a highly configurable and open source workload and resource manager. Using Compute Engine with Slurm is particularly well-suited for large-scale model training and simulations. We recommend using Compute Engine with Slurm if you need to integrate proprietary AI and ML intellectual property (IP) into a scalable environment with the flexibility and control to optimize performance for specialized workloads. On Compute Engine, you provision and manage your virtual machines (VMs), which gives you granular control over instance types, storage, and networking. You can tailor your infrastructure to your exact needs, including the selection of specific VM machine types. You can also use the accelerator-optimized machine family for enhanced performance with your AI and ML workloads. For more information about machine type families that are available on Compute Engine, see Machine families resource and comparison guide. Slurm offers a powerful option for managing AI and ML workloads and it lets you control the configuration and management of the compute resources. To use this approach, you need expertise in Slurm administration and Linux system management. Accelerator options Machine accelerators are specialized processors that are designed to speed up the computations required for AI and ML workloads. You can choose either Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs). GPU accelerators provide excellent performance for a wide range of tasks, including graphic rendering, deep learning training, and scientific computing. Google Cloud has a wide selection of GPUs to match a range of performance and price points. For information about GPU models and pricing, see GPU pricing. TPUs are custom-designed AI accelerators, which are optimized for training and inference of large AI models. They are ideal for a variety of use cases, such as chatbots, code generation, media content generation, synthetic speech, vision services, recommendation engines, personalization models, among others. For more information about TPU models and pricing, see TPU pricing. 
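If you choose GPUs, for example, one common approach is to attach them to a dedicated GKE node pool. The following Terraform sketch is illustrative only; the cluster name, location, machine type, and accelerator type are placeholder assumptions rather than values from this reference architecture.
resource "google_container_node_pool" "gpu_training_pool" {
  # Placeholder cluster name and region.
  cluster    = "example-training-cluster"
  location   = "us-central1"
  name       = "example-gpu-pool"
  node_count = 2

  node_config {
    # Illustrative machine and GPU types; confirm zonal availability and driver setup before provisioning.
    machine_type = "n1-standard-8"
    guest_accelerator {
      type  = "nvidia-tesla-t4"
      count = 1
    }
  }
}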
Serving storage alternatives Cloud Storage FUSE with a multi-regional or dual-region bucket provides the highest level of availability because your trained AI and ML models are stored in Cloud Storage and multiple regions. Although Cloud Storage FUSE achieves a lower throughput per VM than Parallelstore, Cloud Storage FUSE lets you take advantage of the scalability and cost-effectiveness of Cloud Storage. To accelerate model loading and improve performance, especially for demanding workloads, you can use existing or new Parallelstore instances in each region. For information about how to improve performance with Cloud Storage FUSE, see Optimize Cloud Storage FUSE CSI driver for GKE performance. Google Cloud Hyperdisk ML is a high-performance block storage solution that's designed to accelerate large-scale AI and ML workloads that require read-only access to large datasets. Hyperdisk ML can be provisioned with higher aggregate throughput, but it achieves a lower throughput per VM compared to Parallelstore. Additionally, Hyperdisk ML volumes can only be accessed by GPU or TPU VMs in the same zone. Therefore, for regional GKE clusters that serve from multiple zones, you must provision separate Hyperdisk ML volumes in each zone. This placement differs from Parallelstore, where you need only one instance per region. It's also important to note that Hyperdisk ML is read-only. For more information about using Hyperdisk ML in AI and ML workloads, see Accelerate AI/ML data loading with Hyperdisk ML. Design considerations To design a Parallelstore deployment that optimizes the performance and cost-efficiency of your AI and ML workloads on Google Cloud, use the guidelines in the following sections. The guidelines describe recommendations to consider when you use Parallelstore as part of a hybrid solution that combines multiple storage options for specific tasks within your workflow. Training AI and ML model training requires that you iteratively feed data to your model, adjust its parameters, and evaluate its performance with each iteration. This process can be computationally intensive and it generates a high volume of I/O requests due to the constant need to read training data and write updated model parameters. To maximize the performance benefits during training, we recommend the following: Caching: Use Parallelstore as a high-performance cache on top of Cloud Storage. Prefetching: Import data to Parallelstore from Cloud Storage to minimize latency during training. You can also use GKE Volume Populator to pre-populate PersistentVolumesClaims with data from Cloud Storage. Cost optimization: Export your data to a lower-cost Cloud Storage class after training in order to minimize long-term storage expenses. Because your persistent data is stored in Cloud Storage, you can destroy and recreate Parallelstore instances as needed for your training jobs. GKE integration: Integrate with the GKE container storage interface (CSI) driver for simplified management. For information about how to connect a GKE cluster to a Parallelstore instance, see Google Kubernetes Engine Parallelstore CSI driver. A3 VM performance: Deliver more than 20 GB/s (approximately 2.5 GB/s per GPU) on A3 variants for optimal data delivery. Concurrent access: Use the Parallelstore instance to accommodate full duplex read and writes. When you deploy Parallelstore for training, consider the following: Scratch file system: Configure checkpointing intervals throughout the training process. 
Parallelstore is a scratch file system, which means that data is stored temporarily. At the 100 TiB range, the estimated mean time to data loss is two months. At the 23 TiB range, the estimated mean time to data loss is twelve months or more. File and directory striping: Optimize file and directory striping for your predominant file size to maximize performance. Cost optimization: Optimize costs by appropriately staging data in Cloud Storage instead of in Parallelstore. Zone selection: Optimize cost and performance by locating GPU or TPU compute clients and storage nodes in the same zone. For more information about how to configure your Parallelstore environment to optimize performance, see Performance considerations. Checkpointing Checkpointing is a critical aspect of AI and ML model training. Checkpointing lets you save the state of your model at various points during the process, so that you can resume training from a saved checkpoint in case of interruptions, system failures, or to explore different hyperparameter configurations. When you use Parallelstore for training, it's crucial to also use it for checkpointing in order to take advantage of its high write throughput and to minimize training time. This approach ensures efficient utilization of resources and helps lower the TCO for your GPU resources by keeping both training and checkpointing as fast as possible. To optimize your checkpointing workflow with Parallelstore, consider these best practices: Fast checkpointing: Take advantage of fast checkpoint writes with Parallelstore. You can achieve a throughput of 0.5 GB/s per TiB of capacity and more than 12 GB/s per A3 VM. Selective checkpoint storage: Export selected checkpoints from Parallelstore to Cloud Storage for long-term storage and disaster recovery. Concurrent operations: Benefit from read and write full duplexing by using Parallelstore simultaneously for training and checkpoint writes. Serving Serving involves deploying your trained AI and ML models to handle inference requests. To achieve optimal performance, it's crucial to minimize the time that it takes to load these models into memory. Although Parallelstore is primarily designed for training workloads, you can use Parallelstore's high throughput per VM (more than 20 GB/s) and aggregate cluster throughput in order to minimize model load times across thousands of VMs. To track key metrics that enable you to identify bottlenecks and ensure optimal efficiency, use Cloud Monitoring. When you deploy Parallelstore for serving, consider the following: High throughput: Maximize Parallelstore performance by using Cloud Monitoring to help ensure that you deploy sufficient capacity to achieve up to 125 GB/s throughput at 100 TiB. Potential for service interruptions: Because Parallelstore is a scratch file system, it can experience occasional service interruptions. The mean time to data loss is approximately 2 months for a 100 TiB cluster. Restore data: If a service interruption occurs, you need to restore Parallelstore data from your latest Cloud Storage backup. Data is transferred at a speed of approximately 16 GB/s. Shared instances: Using one Parallelstore instance for training and serving maximizes resource utilization and can be cost-efficient. However, there can be potential resource contention if both workloads have high throughput demands. If spare IOPS are available after training, using the same instance can accelerate model loading for serving. 
Use Cloud Monitoring to help ensure that you allocate sufficient resources to meet your throughput demands. Separate instances: Using separate instances provides performance isolation, enhances security by isolating training data, and improves data protection. Although access control lists can manage security within a single instance, separate instances offer a more robust security boundary. Placement options To minimize latency and maximize performance, create your Parallelstore instance in a region that's geographically close to your GPU or TPU compute clients. For training and checkpointing: For optimal results, ensure that the clients and Parallelstore instances are in the same zone. This colocation minimizes data transfer times and maximizes the utilization of Parallelstore's write throughput. For serving: Although colocating with compute clients in the same zone is ideal, having one Parallelstore instance per region is sufficient. This approach avoids extra costs that are associated with deploying multiple instances and helps to maximize compute performance. However, if you require additional capacity or throughput, you might consider deploying more than one instance per region. Deploying Parallelstore in two regions can significantly improve performance by keeping data geographically closer to the GPUs or TPUs that are used for serving. This placement reduces latency and allows for faster data access during inference. If a regional outage occurs, both training and serving applications will become unavailable to users. To ensure high availability and reliability, you should instantiate a replica of this architecture in a different region. When you create a geographically redundant architecture, your AI and ML application can continue operating even if one region experiences an outage. To back up your cluster data and Cloud Storage data and restore them in a different region as needed, you can use Backup for GKE. For information about the supported locations for Parallelstore instances, see Supported locations. Deployment To create and deploy this reference architecture, we recommend that you use Cluster Toolkit. Cluster Toolkit is a modular, Terraform-based toolkit that's designed for deployment of repeatable AI and ML environments on Google Cloud. To define your environment, use the GKE and Parallelstore training blueprint. To provision and manage Parallelstore instances for your clusters, reference the Parallelstore module. For information about how to manually deploy Parallelstore, see Create a Parallelstore instance. To further improve scalability and enhance performance with dynamic provisioning, you can create and use a volume backed by a Parallelstore instance in GKE. What's next Learn more about how to use parallel file systems for HPC workloads. Learn more about best practices for implementing machine learning on Google Cloud. Learn more about how to design storage for AI and ML workloads in Google Cloud. Learn more about how to train a TensorFlow model with Keras on GKE. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthor: Samantha He | Technical WriterOther contributors: Dean Hildebrand | Technical Director, Office of the CTOKumar Dhanagopal | Cross-Product Solution DeveloperSean Derrington | Group Outbound Product Manager, Storage Send feedback \ No newline at end of file diff --git a/Optimize_continuously.txt b/Optimize_continuously.txt new file mode 100644 index 0000000000000000000000000000000000000000..99bb3d978ac7afb77c724a40c1852f771c7e2815 --- /dev/null +++ b/Optimize_continuously.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/cost-optimization/optimize-continuously +Date Scraped: 2025-02-23T11:43:58.257Z + +Content: +Home Docs Cloud Architecture Center Send feedback Optimize continuously Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-25 UTC This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you optimize the cost of your cloud deployments based on constantly changing and evolving business goals. As your business grows and evolves, your cloud workloads need to adapt to changes in resource requirements and usage patterns. To derive maximum value from your cloud spending, you must maintain cost-efficiency while continuing to support business objectives. This requires a proactive and adaptive approach that focuses on continuous improvement and optimization. Principle overview To optimize cost continuously, you must proactively monitor and analyze your cloud environment and make suitable adjustments to meet current requirements. Focus your monitoring efforts on key performance indicators (KPIs) that directly affect your end users' experience, align with your business goals, and provide insights for continuous improvement. This approach lets you identify and address inefficiencies, adapt to changing needs, and continuously align cloud spending with strategic business goals. To balance comprehensive observability with cost effectiveness, understand the costs and benefits of monitoring resource usage and use appropriate process-improvement and optimization strategies. Recommendations To effectively monitor your Google Cloud environment and optimize cost continuously, consider the following recommendations. Focus on business-relevant metrics Effective monitoring starts with identifying the metrics that are most important for your business and customers. These metrics include the following: User experience metrics: Latency, error rates, throughput, and customer satisfaction metrics are useful for understanding your end users' experience when using your applications. Business outcome metrics: Revenue, customer growth, and engagement can be correlated with resource usage to identify opportunities for cost optimization. DevOps Research & Assessment (DORA) metrics: Metrics like deployment frequency, lead time for changes, change failure rate, and time to restore provide insights into the efficiency and reliability of your software delivery process. By improving these metrics, you can increase productivity, reduce downtime, and optimize cost. Site Reliability Engineering (SRE) metrics: Error budgets help teams to quantify and manage the acceptable level of service disruption. By establishing clear expectations for reliability, error budgets empower teams to innovate and deploy changes more confidently, knowing their safety margin. 
This proactive approach promotes a balance between innovation and stability, helping prevent excessive operational costs associated with major outages or prolonged downtime. Use observability for resource optimization The following are recommendations to use observability to identify resource bottlenecks and underutilized resources in your cloud deployments: Monitor resource utilization: Use resource utilization metrics to identify Google Cloud resources that are underutilized. For example, use metrics like CPU and memory utilization to identify idle VM resources. For Google Kubernetes Engine (GKE), you can view a detailed breakdown of costs and cost-related optimization metrics. For Google Cloud VMware Engine, review resource utilization to optimize CUDs, storage consumption, and ESXi right-sizing. Use cloud recommendations: Active Assist is a portfolio of intelligent tools that help you optimize your cloud operations. These tools provide actionable recommendations to reduce costs, increase performance, improve security and even make sustainability-focused decisions. For example, VM rightsizing insights can help to optimize resource allocation and avoid unnecessary spending. Correlate resource utilization with performance: Analyze the relationship between resource utilization and application performance to determine whether you can downgrade to less expensive resources without affecting the user experience. Balance troubleshooting needs with cost Detailed observability data can help with diagnosing and troubleshooting issues. However, storing excessive amounts of observability data or exporting unnecessary data to external monitoring tools can lead to unnecessary costs. For efficient troubleshooting, consider the following recommendations: Collect sufficient data for troubleshooting: Ensure that your monitoring solution captures enough data to efficiently diagnose and resolve issues when they arise. This data might include logs, traces, and metrics at various levels of granularity. Use sampling and aggregation: Balance the need for detailed data with cost considerations by using sampling and aggregation techniques. This approach lets you collect representative data without incurring excessive storage costs. Understand the pricing models of your monitoring tools and services: Evaluate different monitoring solutions and choose options that align with your project's specific needs, budget, and usage patterns. Consider factors like data volume, retention requirements, and the required features when making your selection. Regularly review your monitoring configuration: Avoid collecting excessive data by removing unnecessary metrics or logs. Tailor data collection to roles and set role-specific retention policies Consider the specific data needs of different roles. For example, developers might primarily need access to traces and application-level logs, whereas IT administrators might focus on system logs and infrastructure metrics. By tailoring data collection, you can reduce unnecessary storage costs and avoid overwhelming users with irrelevant information. Additionally, you can define retention policies based on the needs of each role and any regulatory requirements. For example, developers might need access to detailed logs for a shorter period, while financial analysts might require longer-term data. Consider regulatory and compliance requirements In certain industries, regulatory requirements mandate data retention. 
To avoid legal and financial risks, you need to ensure that your monitoring and data retention practices help you adhere to relevant regulations. At the same time, you need to maintain cost efficiency. Consider the following recommendations: Determine the specific data retention requirements for your industry or region, and ensure that your monitoring strategy meets those requirements. Implement appropriate data archival and retrieval mechanisms to meet audit and compliance needs while minimizing storage costs. Implement smart alerting Alerting helps to detect and resolve issues in a timely manner. However, a balance is necessary between an approach that keeps you informed, and one that overwhelms you with notifications. By designing intelligent alerting systems, you can prioritize critical issues that have higher business impact. Consider the following recommendations: Prioritize issues that affect customers: Design alerts that trigger rapidly for issues that directly affect the customer experience, like website outages, slow response times, or transaction failures. Tune for temporary problems: Use appropriate thresholds and delay mechanisms to avoid unnecessary alerts for temporary problems or self-healing system issues that don't affect customers. Customize alert severity: Ensure that the most urgent issues receive immediate attention by differentiating between critical and noncritical alerts. Use notification channels wisely: Choose appropriate channels for alert notifications (email, SMS, or paging) based on the severity and urgency of the alerts. Previous arrow_back Optimize resource usage Send feedback \ No newline at end of file diff --git a/Optimize_resource_usage.txt b/Optimize_resource_usage.txt new file mode 100644 index 0000000000000000000000000000000000000000..26c4f2d5e2ca751a2395d1573494b4ab1d151c48 --- /dev/null +++ b/Optimize_resource_usage.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/cost-optimization/optimize-resource-usage +Date Scraped: 2025-02-23T11:43:56.367Z + +Content: +Home Docs Cloud Architecture Center Send feedback Optimize resource usage Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-25 UTC This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you plan and provision resources to match the requirements and consumption patterns of your cloud workloads. Principle overview To optimize the cost of your cloud resources, you need to thoroughly understand your workloads' resource requirements and load patterns. This understanding is the basis for a well-defined cost model that lets you forecast the total cost of ownership (TCO) and identify cost drivers throughout your cloud adoption journey. By proactively analyzing and forecasting cloud spending, you can make informed choices about resource provisioning, utilization, and cost optimization. This approach lets you control cloud spending, avoid overprovisioning, and ensure that cloud resources are aligned with the dynamic needs of your workloads and environments. Recommendations To effectively optimize cloud resource usage, consider the following recommendations. Choose environment-specific resources Each deployment environment has different requirements for availability, reliability, and scalability.
For example, developers might prefer an environment that lets them rapidly deploy and run applications for short durations, but might not need high availability. On the other hand, a production environment typically needs high availability. To maximize the utilization of your resources, define environment-specific requirements based on your business needs. The following table lists examples of environment-specific requirements. Note: The requirements that are listed in this table are not exhaustive or prescriptive. They're meant to serve as examples to help you understand how requirements can vary based on the environment type. Environment Requirements Production High availability Predictable performance Operational stability Security with robust resources Development and testing Cost efficiency Flexible infrastructure with burstable capacity Ephemeral infrastructure when data persistence is not necessary Other environments (like staging and QA) Tailored resource allocation based on environment-specific requirements Choose workload-specific resources Each of your cloud workloads might have different requirements for availability, scalability, security, and performance. To optimize costs, you need to align resource choices with the specific requirements of each workload. For example, a stateless application might not require the same level of availability or reliability as a stateful backend. The following table lists more examples of workload-specific requirements. Note: The requirements that are listed in this table are not exhaustive or prescriptive. They're meant to serve as examples to help you understand how requirements can vary based on the workload type. Workload type Workload requirements Resource options Mission-critical Continuous availability, robust security, and high performance Premium resources and managed services like Spanner for high availability and global consistency of data. Non-critical Cost-efficient and autoscaling infrastructure Resources with basic features and ephemeral resources like Spot VMs. Event-driven Dynamic scaling based on the current demand for capacity and performance Serverless services like Cloud Run and Cloud Run functions. Experimental workloads Low cost and flexible environment for rapid development, iteration, testing, and innovation Resources with basic features, ephemeral resources like Spot VMs, and sandbox environments with defined spending limits. A benefit of the cloud is the opportunity to take advantage of the most appropriate computing power for a given workload. Some workloads are developed to take advantage of processor instruction sets, and others might not be designed in this way. Benchmark and profile your workloads accordingly. Categorize your workloads and make workload-specific resource choices (for example, choose appropriate machine families for Compute Engine VMs). This practice helps to optimize costs, enable innovation, and maintain the level of availability and performance that your workloads need. The following are examples of how you can implement this recommendation: For mission-critical workloads that serve globally distributed users, consider using Spanner. Spanner removes the need for complex database deployments by ensuring reliability and consistency of data in all regions. For workloads with fluctuating load levels, use autoscaling to ensure that you don't incur costs when the load is low and yet maintain sufficient capacity to meet the current load. 
You can configure autoscaling for many Google Cloud services, including Compute Engine VMs, Google Kubernetes Engine (GKE) clusters, and Cloud Run. When you set up autoscaling, you can configure maximum scaling limits to ensure that costs remain within specified budgets. Select regions based on cost requirements For your cloud workloads, carefully evaluate the available Google Cloud regions and choose regions that align with your cost objectives. The region with lowest cost might not offer optimal latency or it might not meet your sustainability requirements. Make informed decisions about where to deploy your workloads to achieve the desired balance. You can use the Google Cloud Region Picker to understand the trade-offs between cost, sustainability, latency, and other factors. Use built-in cost optimization options Google Cloud products provide built-in features to help you optimize resource usage and control costs. The following table lists examples of cost optimization features that you can use in some Google Cloud products: Product Cost optimization feature Compute Engine Automatically add or remove VMs based on the current load by using autoscaling. Avoid overprovisioning by creating and using custom machine types that match your workload's requirements. For non-critical or fault-tolerant workloads, reduce costs by using Spot VMs. In development environments, reduce costs by limiting the run time of VMs or by suspending or stopping VMs when you don't need them. GKE Automatically adjust the size of GKE clusters based on the current load by using cluster autoscaler. Automatically create and manage node pools based on workload requirements and ensure optimal resource utilization by using node auto-provisioning. Cloud Storage Automatically transition data to lower-cost storage classes based on the age of data or based on access patterns by using Object Lifecycle Management. Dynamically move data to the most cost-effective storage class based on usage patterns by using Autoclass. BigQuery Reduce query processing costs for steady-state workloads by using capacity-based pricing. Optimize query performance and costs by using partitioning and clustering techniques. Google Cloud VMware Engine Reduce VMware costs by using cost-optimization strategies like CUDs, optimizing storage consumption, and rightsizing ESXi clusters. Optimize resource sharing To maximize the utilization of cloud resources, you can deploy multiple applications or services on the same infrastructure, while still meeting the security and other requirements of the applications. For example, in development and testing environments, you can use the same cloud infrastructure to test all the components of an application. For the production environment, you can deploy each component on a separate set of resources to limit the extent of impact in case of incidents. The following are examples of how you can implement this recommendation: Use a single Cloud SQL instance for multiple non-production environments. Enable multiple development teams to share a GKE cluster by using the fleet team management feature in GKE Enterprise with appropriate access controls. Use GKE Autopilot to take advantage of cost-optimization techniques like bin packing and autoscaling that GKE implements by default. For AI and ML workloads, save GPU costs by using GPU-sharing strategies like multi-instance GPUs, time-sharing GPUs, and NVIDIA MPS. 
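As a concrete illustration of the Object Lifecycle Management feature that's listed for Cloud Storage in the preceding table, the following minimal Python sketch uses the google-cloud-storage client library to add lifecycle rules that move aging objects to lower-cost storage classes and eventually delete them. The bucket name and age thresholds are illustrative assumptions.

from google.cloud import storage  # pip install google-cloud-storage

BUCKET_NAME = "example-analytics-archive"  # assumed bucket name

def configure_lifecycle() -> None:
    """Add example lifecycle rules that age data into colder storage classes."""
    client = storage.Client()
    bucket = client.get_bucket(BUCKET_NAME)

    # Move objects to lower-cost storage classes as they age.
    bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
    bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=365)
    # Delete objects after roughly five years.
    bucket.add_lifecycle_delete_rule(age=1825)

    bucket.patch()  # Persist the updated lifecycle configuration.
    print(f"Lifecycle rules for {BUCKET_NAME}:")
    for rule in bucket.lifecycle_rules:
        print(rule)

if __name__ == "__main__":
    configure_lifecycle()

In many environments, you would define the same rules declaratively through your infrastructure-as-code tooling rather than in application code; the sketch only illustrates the feature.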
Develop and maintain reference architectures Create and maintain a repository of reference architectures that are tailored to meet the requirements of different deployment environments and workload types. To streamline the design and implementation process for individual projects, the blueprints can be centrally managed by a team like a Cloud Center of Excellence (CCoE). Project teams can choose suitable blueprints based on clearly defined criteria, to ensure architectural consistency and adoption of best practices. For requirements that are unique to a project, the project team and the central architecture team should collaborate to design new reference architectures. You can share the reference architectures across the organization to foster knowledge sharing and expand the repository of available solutions. This approach ensures consistency, accelerates development, simplifies decision-making, and promotes efficient resource utilization. Review the reference architectures provided by Google for various use cases and technologies. These reference architectures incorporate best practices for resource selection, sizing, configuration, and deployment. By using these reference architectures, you can accelerate your development process and achieve cost savings from the start. Enforce cost discipline by using organization policies Consider using organization policies to limit the available Google Cloud locations and products that team members can use. These policies help to ensure that teams adhere to cost-effective solutions and provision resources in locations that are aligned with your cost optimization goals. Estimate realistic budgets and set financial boundaries Develop detailed budgets for each project, workload, and deployment environment. Make sure that the budgets cover all aspects of cloud operations, including infrastructure costs, software licenses, personnel, and anticipated growth. To prevent overspending and ensure alignment with your financial goals, establish clear spending limits or thresholds for projects, services, or specific resources. Monitor cloud spending regularly against these limits. You can use proactive quota alerts to identify potential cost overruns early and take timely corrective action. In addition to setting budgets, you can use quotas and limits to help enforce cost discipline and prevent unexpected spikes in spending. You can exercise granular control over resource consumption by setting quotas at various levels, including projects, services, and even specific resource types. The following are examples of how you can implement this recommendation: Project-level quotas: Set spending limits or resource quotas at the project level to establish overall financial boundaries and control resource consumption across all the services within the project. Service-specific quotas: Configure quotas for specific Google Cloud services like Compute Engine or BigQuery to limit the number of instances, CPUs, or storage capacity that can be provisioned. Resource type-specific quotas: Apply quotas to individual resource types like Compute Engine VMs, Cloud Storage buckets, Cloud Run instances, or GKE nodes to restrict their usage and prevent unexpected cost overruns. Quota alerts: Get notifications when your quota usage (at the project level) reaches a percentage of the maximum value. By using quotas and limits in conjunction with budgeting and monitoring, you can create a proactive and multi-layered approach to cost control. 
This approach helps to ensure that your cloud spending remains within defined boundaries and aligns with your business objectives. Remember, these cost controls are not permanent or rigid. To ensure that the cost controls remain aligned with current industry standards and reflect your evolving business needs, you must review the controls regularly and adjust them to include new technologies and best practices. Previous arrow_back Foster a culture of cost awareness Next Optimize continuously arrow_forward Send feedback \ No newline at end of file diff --git a/Optimize_your_environment.txt b/Optimize_your_environment.txt new file mode 100644 index 0000000000000000000000000000000000000000..738ad714d411c598a34e68e6c04e7819ea701b21 --- /dev/null +++ b/Optimize_your_environment.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-google-cloud-optimizing-your-environment +Date Scraped: 2025-02-23T11:51:46.066Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Optimize your environment Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-07 UTC This document helps you plan and design the optimization phase of your migration to Google Cloud. After you've deployed your workloads in Google Cloud, you can start optimizing your environment. This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment (this document) Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs The following diagram illustrates the path of your migration journey. In the optimization phase, you refine your environment to make it more efficient than your initial deployment. This document is useful if you're planning to optimize an existing environment after migrating to Google Cloud, or if you're evaluating the opportunity to optimize and want to explore what it might look like. The structure of the optimization phase follows the migration framework described in this series: assess, plan, deploy, and optimize. You can use this versatile framework to plan your entire migration and to break down independent actions in each phase. When you've completed the last step of the optimization phase, you can start this phase over and find new targets for optimization. The optimization phase is defined as an optimization loop. An execution of the loop is defined as an optimization iteration. Optimization is an ongoing and continuous task. You constantly optimize your environment as it evolves. To avoid uncontrolled and duplicative efforts, you can set measurable optimization goals and stop when you meet these goals. After that, you can always set new and more ambitious goals, but consider that optimization has a cost, in terms of resources, time, effort, and skills. The following diagram shows the optimization loop. For a larger image of this diagram, see Optimization decision tree. 
In this document, you perform the following repeatable steps of the optimization loop: Assess your environment, teams, and the optimization loop that you're following. Establish optimization requirements and goals. Optimize your environment and train your teams. Tune the optimization loop. This document discusses some of the site reliability engineering (SRE) principles and concepts. Google developed the SRE discipline to efficiently and reliably run a global infrastructure serving billions of users. Adopting the complete SRE discipline in your organization might be impractical if you need to modify many of your business and collaboration processes. It might be simpler to apply a subset of the SRE discipline that best suits your organization. Assess your environment, teams, and optimization loop Before starting any optimization task, you need to evaluate your environment. You also need to assess your teams' skills because optimizing your environment might require skills that your teams might lack. Finally, you need to assess the optimization loop. The loop is a resource that you can optimize like any other resource. Assess your environment You need a deep understanding of your environment. For any successful optimization, you need to understand how your environment works and you need to identify potential areas of improvement. This assessment establishes a baseline so that you can compare your assessment against the optimization phase and the next optimization iterations. Migrate to Google Cloud: Assess and discover your workloads contains extensive guidance about assessing your workloads and assessing your environments. If you recently completed a migration to Google Cloud, you already have detailed information on how your environment is configured, managed, and maintained. Otherwise, use that guidance to assess your environment. Assess your teams When you have a clear understanding of your environment, assess your teams to understand their skills. You start by listing all skills, the level of expertise for each skill, and which team members are the most knowledgeable for each skill. Use this assessment in the next phase to discover any missing skills that you need to meet your optimization goals. For example, if you start using a managed service, you need the skills to provision, configure, and interact with that service. If you want to add a caching layer to an application in your environment by using Memorystore, you need expertise to use that service. Take into account that optimizing your environment might impact your business and collaboration processes. For example, if you start using a fully managed service instead of a self-managed one, you can give your operators more time to eliminate toil. Assess your optimization loop The optimization loop is a resource that you can optimize too. Use the data gathered in this assessment to gain clear insights into how your teams performed during the last optimization iteration. For example, if you aim to shorten the iteration duration, you need data about your last iteration, including its complexity and the goals you were pursuing. You also need information about all blockers that you encountered during the last iteration to ensure that you have a mitigation strategy if those blockers reoccur. If this optimization iteration is the first one, you might not have enough data to establish a baseline to compare your performance. Draft a set of hypotheses about how you expect your teams to perform during the first iteration.
After the first optimization iteration, evaluate the loop and your teams' performance and compare it against the hypotheses. Establish your optimization requirements and goals Before starting any optimization task, draft a set of clearly measurable goals for the iteration. In this step, you perform the following activities: Define your optimization requirements. Set measurable optimization goals according to your optimization requirements. Define your optimization requirements You list your requirements for the optimization phase. A requirement expresses a need for improvement and doesn't necessarily have to be measurable. Starting from a set of quality characteristics for your workloads, your environment, and your own optimization loop, you can draft a questionnaire to guide you in setting your requirements. The questionnaire covers the characteristics that you find valuable for your environment, processes, and workloads. There are many sources to guide you in defining the quality characteristics. For example, the ISO/IEC 25010 standard defines the quality characteristics for a software product, or you can review the Google Cloud setup checklist. For example, the questionnaire can ask the following questions: Can your infrastructure and its components scale vertically or horizontally? Does your infrastructure support rolling back changes without manual intervention? Do you already have a monitoring system that covers your infrastructure and your workloads? Do you have an incident management system for your infrastructure? How much time and effort does it take to implement the planned optimizations? Were you able to meet all goals in your past iterations? Starting from the answers to the questionnaire, you draft the list of requirements for this optimization iteration. For example, your requirements might be the following: Increase the performance of an application. Increase the availability of a component of your environment. Increase the reliability of a component of your environment. Reduce the operational costs of your environment. Shorten the duration of the optimization iteration to reduce the inherent risks. Increase development velocity and reduce time-to-market. When you have the list of improvement areas, evaluate the requirements in the list. In this evaluation, you analyze your optimization requirements, look for conflicts, and prioritize the requirements in the list. For example, increasing the performance of an application might conflict with operational cost reduction. Set measurable goals After you finalize the list of requirements, define measurable goals for each requirement. A goal might contribute to more than one requirement. If you have any area of uncertainty or if you're not able to define all goals that you need to cover your requirements, go back to the assessment phase of this iteration to gather any missing information, and then refine your requirements. For help defining these goals, you can follow one of the SRE disciplines, the definition of service level indicators (SLIs) and service level objectives (SLOs): SLIs are quantitative measures of the level of service that you provide. For example, a key SLI might be the average request latency, error rate, or system throughput. SLOs are target values or ranges of values for a service level that is measured by an SLI. For example, an SLO might be that the average request latency is lower than 100 milliseconds. 
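The following minimal Python sketch illustrates the relationship between an SLI and an SLO by computing an average-latency SLI over a window of measurements and checking it against the 100-millisecond SLO from the example above. The latency values are illustrative assumptions, not data from this document.

from statistics import mean

SLO_AVG_LATENCY_MS = 100.0  # example SLO: average request latency below 100 ms

def latency_sli(latencies_ms: list[float]) -> float:
    """SLI: average request latency over the measurement window, in milliseconds."""
    return mean(latencies_ms)

def meets_slo(latencies_ms: list[float]) -> bool:
    """Return True if the measured SLI satisfies the example SLO."""
    return latency_sli(latencies_ms) < SLO_AVG_LATENCY_MS

if __name__ == "__main__":
    window = [42.0, 87.5, 63.1, 120.4, 55.9, 91.2]  # illustrative measurements (ms)
    sli = latency_sli(window)
    status = "meets" if meets_slo(window) else "misses"
    print(f"SLI (average latency): {sli:.1f} ms {status} the {SLO_AVG_LATENCY_MS} ms SLO")

In practice, you would compute the SLI from metrics in your monitoring system rather than from an in-memory list, as discussed in the next sections.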
After defining SLIs and SLOs, you might realize that you're not gathering all metrics that you need to measure your SLIs. This metrics collection is the first optimization goal that you can tackle. You set the goals related to extending your monitoring system to gather all metrics that you need for your SLIs. Optimize your environment and your teams After assessing your environment, teams, and optimization loop, as well as establishing requirements and goals for this iteration, you're ready to perform the optimization step. In this step, you perform the following activities: Measure your environment, teams, and optimization loop. Analyze the data coming from these measurements. Perform the optimization activities. Measure and analyze again. Measure your environment, teams, and optimization loop You extend your monitoring system to gather data about the behavior of your environment, teams, and the optimization loop to establish a baseline against which you can compare after optimizing. This activity builds on and extends what you did in the assessment phase. After you establish your requirements and goals, you know which metrics to gather for your measurements to be relevant to your optimization goals. For example, if you defined SLOs and the corresponding SLIs to reduce the response latency for one of the workloads in your environment, you need to gather data to measure that metric. Understanding these metrics also applies to your teams and to the optimization loop. You can extend your monitoring system to gather data so that you measure the metrics relevant to your teams and the optimization loop. For example, if you have SLOs and SLIs to reduce the duration of the optimization iteration, you need to gather data to measure that metric. When you design the metrics that you need to extend the monitoring system, take into account that gathering data might affect the performance of your environment and your processes. Evaluate the metrics that you need to implement for your measurements, and their sample intervals, to understand if they might affect performance. For example, a metric with a high sample frequency might degrade performance, so you need to optimize further. On Google Cloud, you can use Cloud Monitoring to implement the metrics that you need to gather data. To implement custom metrics in your workloads directly, you can use Cloud Client Libraries for Cloud Monitoring, or OpenTelemetry. If you're using Google Kubernetes Engine (GKE), you can use GKE usage metering to gather information about resource usage, such as CPU, GPU, and TPU usage, and then divide resource usage by namespace or label. Finally, you can use the Cloud Architecture Center and Google Cloud Whitepapers as starting points to find new skills that your teams might require to optimize your environment. Analyze data After gathering your data, you analyze and evaluate it to understand how your environment, teams, and optimization loop are performing against your optimization requirements and goals. In particular, you evaluate your environment against the following: SLOs. Industry best practices. An environment without any technical debt. The SLOs that you established according to your optimization goals can help you understand if you're meeting your expectations. If you're not meeting your SLOs, you need to enhance your teams or the optimization loop. 
For example, if you established an SLO for the response latency for a workload to be in a given percentile and that workload isn't meeting that mark, that is a signal that you need to optimize that part of the workload. Additionally, you can compare your situation against a set of recognized best practices in the industry. For example, the Google Cloud setup checklist helps you configure a production-ready environment for enterprise workloads. After collecting data, you can consider how to optimize your environment to make it more cost efficient. You can export Cloud Billing data to BigQuery and analyze data with Looker Studio to understand how many resources you're using, and extract any spending pattern from it. Finally, you compare your environment to one where you don't have any technical debt, to see whether you're meeting your long-term goals and to see if the technical debt is increasing. For example, you might establish an SLO for how many resources in your environment you're monitoring versus how many resources you have provisioned since the last iteration. If you didn't extend the monitoring system to cover those new resources, your technical debt increased. When analyzing the changes in your technical debt, also consider the factors that led to those changes. For example, a business need might require an increment in technical debt, or it might be unexpected. Knowing the factors that caused a change in your technical debt gives you insights for future optimization targets. To monitor your environment on Google Cloud, you can use Monitoring to design charts, dashboards, and alerts. You can then route Cloud Logging data for a more in-depth analysis and extended retention period. For example, you can create aggregated sinks and use Cloud Storage, Pub/Sub, or BigQuery as destinations. If you export data to BigQuery, you can then use Looker Studio to visualize data so that you can identify trends and make predictions. You can also use evaluation tools such as Recommender and Security Command Center to automatically analyze your environment and processes, looking for optimization targets. After you analyze all of the measurement data, you need to answer two questions: Are you meeting your optimization goals? If you answered yes, then this optimization iteration is completed, and you can start a new one. If you answered no, you can move to the second question. Given the resources that you budgeted, can you achieve the optimization goals that you set for this iteration? To answer this question, consider all resources that you need, such as time, money, and expertise. If you answered yes, you can move to the next section; otherwise, refine your optimization goals, considering the resources you can use for this iteration. For example, if you're constrained by a fixed schedule, you might need to schedule some optimization goals for the next iteration. Optimize your teams Optimizing the environment is a continuous challenge and can require skills that your teams might lack, which you discovered during the assessment and the analysis. For this reason, optimizing your teams by acquiring new skills and making your processes more efficient is crucial to the success of your optimization activities. To optimize your teams, you need to do the following: Design and implement a training program. Optimize your team structure and culture. 
For your teams to acquire the skills that they are missing, you need to design and implement a training program or choose one that professional Google Cloud trainers prepared. For more information, see Migrate to Google Cloud: Assess and discover your workloads. While optimizing your teams, you might find that there is room to improve structure and culture. It's difficult to prescribe an ideal situation upfront, because every company has its own history and idiosyncrasies that contributed to the evolution of your teams' structure and culture. Transformational leadership is a good starting point to learn general frameworks for executing and measuring organizational changes aimed at adopting DevOps practices. For practical guidance on how to implement an effective DevOps culture in your organization, refer to Site Reliability Engineering, a comprehensive description of the SRE methodology. The Site Reliability Workbook, the companion to the book, uses concrete examples to show you how to put SRE principles and practices to work. Optimize your environment After measuring and analyzing metrics data, you know which areas you need to optimize. This section covers general optimization techniques for your Google Cloud environment. You can also perform any optimization activity that's specific to your infrastructure and to the services that you're using. Codify everything One of the biggest advantages of adopting a public cloud environment like Google Cloud is that you can use well-defined interfaces such as Cloud APIs to provision, configure, and manage resources. You can use your own choice of tools to define your Infrastructure as Code (IaC) process, and your own choice of version control systems. You can use tools such as Terraform to provision your Google Cloud resources, and then tools such as Ansible, Chef, or Puppet to configure these resources. An IaC process helps you implement an effective rollback strategy for your optimization tasks. You can revert any change that you applied to the code that describes your infrastructure. Also, you can avoid unexpected failures while updating your infrastructure by testing your changes. Furthermore, you can apply similar processes to codify other aspects of your environment, like policies as code, using tools such as Open Policy Agent, and operations as code, such as GitOps. Therefore, if you adopt an IaC process in the early optimization iterations, you can define further optimization activities as code. You can also adopt the process gradually, so you can evaluate if it's suitable to your environment. Automate everything To completely optimize your entire environment, you need to use resources efficiently. This means that you need to eliminate toil to save resources and to reinvest in more important tasks that produce value, like optimization activities. Per the SRE recommendation, the way to eliminate toil is by increasing automation. Not all automation tasks require highly specialized software engineering skills or great effort. Sometimes a short script that runs periodically can save several hours per day. Google Cloud provides tools such as Google Cloud CLI and managed services such as Cloud APIs, Cloud Scheduler, Cloud Composer, and Cloud Run that your teams can use to automate repetitive tasks. Monitor everything If you can't gather detailed measurements about your environment, you can't improve it, because you lack data to back up your assumptions. This means that you don't know what to do to meet your optimization goals.
A comprehensive monitoring system is a necessary component for your environment. The system monitors all essential metrics that you need to evaluate for your optimization goals. When you design your monitoring system, plan to monitor the four golden signals at minimum. You can use managed services such as Monitoring and Logging to monitor your environment without having to set up a complicated monitoring solution. You might need to implement a monitoring system that can monitor hybrid and multicloud environments to satisfy data restriction policies that force you to store data only in certain physical locations, or services that use multiple cloud environments simultaneously. Adopt a cloud-ready approach Cloud-ready is a paradigm that describes an efficient way for designing and running an application on the cloud. The Cloud Native Computing Foundation (CNCF) defines a cloud-native application as an application that is scalable, resilient, manageable, and observable by technologies such as containers, service meshes, microservices, immutable infrastructure, and declarative APIs. Google Cloud provides managed services such as GKE, Cloud Run, Cloud Service Mesh, Logging, and Monitoring to empower users to design and run cloud-ready applications. Learn more about cloud-ready technologies from CNCF Trail Map and CNCF Cloud Native Interactive Landscape. Cost management Because of their different billing and cost models, optimizing costs of a public cloud environment like Google Cloud is different than optimizing an on-premises environment. For more information, see Migrate to Google Cloud: Minimize costs. Measure and analyze again When you complete the optimization activities for this iteration, you repeat the measurements and the analysis to check if you reached your goals. Answer the following question: Did you meet your optimization goals? If you answered yes, you can move to the next section. If you answered no, go back to the beginning of the Optimize your environment and your teams phase. Tune the optimization loop In this section, you update and modify the optimization loop that you followed in this iteration to better fit your team structure and environment. Codify the optimization loop To optimize the optimization loop efficiently, you need to document and define the loop in a form that is standardized, straightforward, and manageable, allowing room for changes. You can use a fully managed service such as Cloud Composer to create, schedule, monitor, and manage your workflows. You can also first represent your processes with a language such as the business process model and notation (BPMN). After that, you can codify these processes with a standardized language such as the business process execution language (BPEL). After adopting IaC, describing your processes with code lets you manage them as you do the rest of your environment. Automate the optimization loop After you codify the optimization loop, you can automate repetitive tasks to eliminate toil, save time, and make the optimization loop more efficient. You can start automating all tasks where a human decision is not required, such as measuring data and producing aggregate reports for your teams to analyze. For example, you can automate data analysis with Cloud Monitoring to check if your environment meets the SLOs that you defined. Given that optimization is a never-ending task and that you iterate on the optimization loop, even small automations can significantly increase efficiency. 
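As noted above, you can automate data analysis with Cloud Monitoring to check whether your environment meets the SLOs that you defined. The following minimal Python sketch assumes a hypothetical project ID, uses a built-in Compute Engine CPU utilization metric as a stand-in for the metric that backs your SLI, and applies an example threshold; it reads the last hour of time-series data with the Cloud Monitoring client library (google-cloud-monitoring) and flags values that exceed the threshold.

import time

from google.cloud import monitoring_v3  # pip install google-cloud-monitoring

PROJECT_ID = "example-project"  # assumed project ID; replace with your own
METRIC_TYPE = "compute.googleapis.com/instance/cpu/utilization"  # stand-in SLI metric
THRESHOLD = 0.8  # example threshold: 80% CPU utilization

def check_recent_values() -> None:
    """Read the last hour of the metric and flag points above the threshold."""
    client = monitoring_v3.MetricServiceClient()
    seconds = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {
            "end_time": {"seconds": seconds},
            "start_time": {"seconds": seconds - 3600},
        }
    )
    results = client.list_time_series(
        request={
            "name": f"projects/{PROJECT_ID}",
            "filter": f'metric.type = "{METRIC_TYPE}"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for series in results:
        instance = series.resource.labels.get("instance_id", "unknown")
        for point in series.points:
            value = point.value.double_value
            if value > THRESHOLD:
                print(f"Instance {instance}: {value:.2f} exceeds threshold {THRESHOLD}")

if __name__ == "__main__":
    check_recent_values()

You could run a check like this on a schedule, for example with Cloud Scheduler, as one of the automations in your optimization loop.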
Monitor the optimization loop As you did for all the resources in your environment, you need to monitor the optimization loop to verify that it's working as expected and also to look for bottlenecks and future optimization goals. You can start monitoring it by tracking how much time and how many resources your teams spent on each optimization step. For example, you can use an issue tracking system and a project management tool to monitor your processes and extract relevant statistics about metrics like issue resolution time and time to completion. What's next Read about Best practices for validating a migration plan. Read the SRE books to learn about other concepts and techniques to prepare for optimization. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Author: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Organization_structure.txt b/Organization_structure.txt new file mode 100644 index 0000000000000000000000000000000000000000..26d4d7505911e85e4a1b49350e89dc2a6d5ba1a0 --- /dev/null +++ b/Organization_structure.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations/organization-structure +Date Scraped: 2025-02-23T11:45:26.923Z + +Content: +Home Docs Cloud Architecture Center Send feedback Organization structure Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC The root node for managing resources in Google Cloud is the organization. The Google Cloud organization provides a resource hierarchy that establishes an ownership structure for resources and attachment points for organization policies and access controls. The resource hierarchy consists of folders, projects, and resources, and it defines the structure and use of Google Cloud services within an organization. Resources lower in the hierarchy inherit policies such as IAM allow policies and organization policies. All access permissions are denied by default, until you apply allow policies directly to a resource or the resource inherits the allow policies from a higher level in the resource hierarchy. The following diagram shows the folders and projects that are deployed by the blueprint. The following sections describe the folders and projects in the diagram. Folders The blueprint uses folders to group projects based on their environment. This logical grouping is used to apply configurations like allow policies and organization policies at the folder level, so that all resources within the folder inherit the policies. The following table describes the folders that are part of the blueprint. Folder Description bootstrap Contains the projects that are used to deploy foundation components. common Contains projects with resources that are shared by all environments. production Contains projects with production resources. nonproduction Contains a copy of the production environment to let you test workloads before you promote them to production. development Contains the cloud resources that are used for development. networking Contains the networking resources that are shared by all environments. Projects The blueprint uses projects to group individual resources based on their functionality and intended boundaries for access control. The following table describes the projects that are included in the blueprint.
Folder Project Description bootstrap prj-b-cicd Contains the deployment pipeline that's used to build out the foundation components of the organization. For more information, see deployment methodology. prj-b-seed Contains the Terraform state of your infrastructure and the Terraform service account that is required to run the pipeline. For more information, see deployment methodology. common prj-c-secrets Contains organization-level secrets. For more information, see store application credentials with Secret Manager. prj-c-logging Contains the aggregated log sources for audit logs. For more information, see centralized logging for security and audit. prj-c-scc Contains resources to help configure Security Command Center alerting and other custom security monitoring. For more information, see threat monitoring with Security Command Center. prj-c-billing-export Contains a BigQuery dataset with the organization's billing exports. For more information, see allocate costs between internal cost centers. prj-c-infra-pipeline Contains an infrastructure pipeline for deploying resources like VMs and databases to be used by workloads. For more information, see pipeline layers. prj-c-kms Contains organization-level encryption keys. For more information, see manage encryption keys. networking prj-net-{env}-shared-base Contains the host project for a Shared VPC network for workloads that don't require VPC Service Controls. For more information, see network topology. prj-net-{env}-shared-restricted Contains the host project for a Shared VPC network for workloads that do require VPC Service Controls. For more information, see network topology. prj-net-interconnect Contains the Cloud Interconnect connections that provide connectivity between your on-premises environment and Google Cloud. For more information, see hybrid connectivity. prj-net-dns-hub Contains resources for a central point of communication between your on-premises DNS system and Cloud DNS. For more information, see centralized DNS setup. prj-{env}-secrets Contains folder-level secrets. For more information, see store and audit application credentials with Secret Manager. prj-{env}-kms Contains folder-level encryption keys. For more information, see manage encryption keys. application projects Contains various projects in which you create resources for applications. For more information, see project deployment patterns and pipeline layers. Governance for resource ownership We recommend that you apply labels consistently to your projects to assist with governance and cost allocation. The following table describes the project labels that are added to each project for governance in the blueprint. Label Description application The human-readable name of the application or workload that is associated with the project. businesscode A short code that describes which business unit owns the project. The code shared is used for common projects that are not explicitly tied to a business unit. billingcode A code that's used to provide chargeback information. primarycontact The username of the primary contact that is responsible for the project. Because project labels can't include special characters such as the at sign (@), the label is set to the username without the @example.com suffix. secondarycontact The username of the secondary contact that is responsible for the project. Because project labels can't include special characters such as @, set only the username without the @example.com suffix.
environment A value that identifies the type of environment, such as bootstrap, common, production, non-production,development, or network. envcode A value that identifies the type of environment, shortened to b, c, p, n, d, or net. vpc The ID of the VPC network that this project is expected to use. Google might occasionally send important notifications such as account suspensions or updates to product terms. The blueprint uses Essential Contacts to send those notifications to the groups that you configure during deployment. Essential Contacts is configured at the organization node and inherited by all projects in the organization. We recommend that you review these groups and ensure that emails are monitored reliably. Essential Contacts is used for a different purpose than the primarycontact and secondarycontact fields that are configured in project labels. The contacts in project labels are intended for internal governance. For example, if you identify non-compliant resources in a workload project and need to contact the owners, you could use the primarycontact field to find the person or team responsible for that workload. What's next Read about networking (next document in this series). Send feedback \ No newline at end of file diff --git a/Other_considerations.txt b/Other_considerations.txt new file mode 100644 index 0000000000000000000000000000000000000000..34a1495b4d85f88e55cce1765ac6e6e24bc13f1a --- /dev/null +++ b/Other_considerations.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns/other-considerations +Date Scraped: 2025-02-23T11:49:48.337Z + +Content: +Home Docs Cloud Architecture Center Send feedback Other considerations Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC This document highlights the core design considerations that play a pivotal role in shaping your overall hybrid and multicloud architecture. Holistically analyze and assess these considerations across your entire solution architecture, encompassing all workloads, not just specific ones. Refactor In a refactor migration, you modify your workloads to take advantage of cloud capabilities, not just to make them work in the new environment. You can improve each workload for performance, features, cost, and user experience. As highlighted in Refactor: move and improve, some refactor scenarios let you modify workloads before migrating them to the cloud. This refactoring approach offers the following benefits, especially if your goal is to build a hybrid architecture as a long term targeted architecture: You can improve the deployment process. You can help speed up the release cadence and shorten feedback cycles by investing in continuous integration/continuous deployment (CI/CD) infrastructure and tooling. You can use refactoring as a foundation to build and manage hybrid architecture with application portability. To work well, this approach typically requires certain investments in on-premises infrastructure and tooling. For example, setting up a local Container Registry and provisioning Kubernetes clusters to containerize applications. Google Kubernetes Engine (GKE) Enterprise edition can be useful in this approach for hybrid environments. More information about GKE Enterprise is covered in the following section. You can also refer to the GKE Enterprise hybrid environment reference architecture for more details. 
Workload portability With hybrid and multicloud architectures, you might want to be able to shift workloads between the computing environments that host your data. To help enable the seamless movement of workloads between environments, consider the following factors: You can move an application from one computing environment to another without significantly modifying the application and its operational model: Application deployment and management are consistent across computing environments. Visibility, configuration, and security are consistent across computing environments. The ability to make a workload portable shouldn't conflict with the workload being cloud-first. Infrastructure automation Infrastructure automation is essential for portability in hybrid and multicloud architectures. One common approach to automating infrastructure creation is through infrastructure as code (IaC). IaC involves managing your infrastructure in files instead of manually configuring resources—like a VM, a security group, or a load balancer—in a user interface. Terraform is a popular IaC tool to define infrastructure resources in a file. Terraform also lets you automate the creation of those resources in heterogeneous environments. For more information about Terraform core functions that can help you automate provisioning and managing Google Cloud resources, see Terraform blueprints and modules for Google Cloud. You can use configuration management tools such as Ansible, Puppet, or Chef to establish a common deployment and configuration process. Alternatively, you can use an image-baking tool like Packer to create VM images for different platforms. By using a single, shared configuration file, you can use Packer and Cloud Build to create a VM image for use on Compute Engine. Finally, you can use solutions such as Prometheus and Grafana to help ensure consistent monitoring across environments. Based on these tools, you can assemble a common tool chain as illustrated in the following logical diagram. This common tool chain abstracts away the differences between computing environments. It also lets you unify provisioning, deployment, management, and monitoring. Although a common tool chain can help you achieve portability, it's subject to several of the following shortcomings: Using VMs as a common foundation can make it difficult to implement true cloud-first applications. Also, using VMs only can prevent you from using cloud-managed services. You might miss opportunities to reduce administrative overhead. Building and maintaining a common tool chain incurs overhead and operational costs. As the tool chain expands, it can develop unique complexities tailored to the specific needs of your company. This increased complexity can contribute to rising training costs. Before deciding to develop tooling and automation, explore the managed services your cloud provider offers. When your provider offers managed services that support the same use case, you can abstract away some of its complexity. Doing so lets you focus on the workload and the application architecture rather than the underlying infrastructure. For example, you can use the Kubernetes Resource Model to automate the creation of Kubernetes clusters using a declarative configuration approach. You can use Deployment Manager convert to convert your Deployment Manager configurations and templates to other declarative configuration formats that Google Cloud supports (like Terraform and the Kubernetes Resource Model) so they're portable when you publish. 
You can also consider automating the creation of projects and the creation of resources within those projects. This automation can help you adopt an infrastructure-as-code approach for project provisioning. Containers and Kubernetes Using cloud-managed capabilities helps to reduce the complexity of building and maintaining a custom tool chain to achieve workload automation and portability. However, only using VMs as a common foundation makes it difficult to implement truly cloud-first applications. One solution is to use containers and Kubernetes instead. Containers help your software to run reliably when you move it from one environment to another. Because containers decouple applications from the underlying host infrastructure, they facilitate deployment across computing environments, such as hybrid and multicloud. Kubernetes handles the orchestration, deployment, scaling, and management of your containerized applications. It's open source and governed by the Cloud Native Computing Foundation. Using Kubernetes provides the services that form the foundation of a cloud-first application. Because you can install and run Kubernetes on many computing environments, you can also use it to establish a common runtime layer across computing environments: Kubernetes provides the same services and APIs in a cloud or private computing environment. Moreover, the level of abstraction is much higher than when working with VMs, which generally translates into less required groundwork and improved developer productivity. Unlike a custom tool chain, Kubernetes is widely adopted for both development and application management, so you can tap into existing expertise, documentation, and third-party support. Kubernetes supports all container implementations that: Support the Kubernetes Container Runtime Interface (CRI) Are industry-adopted for applications Aren't tied to any specific vendor When a workload is running on Google Cloud, you can avoid the effort of installing and operating Kubernetes by using a managed Kubernetes platform such as Google Kubernetes Engine (GKE). Doing so can help operations staff shift their focus from building and maintaining infrastructure to building and maintaining applications. You can also use Autopilot, a GKE mode of operation that manages your cluster configuration, including your nodes, scaling, security, and other preconfigured settings. When using GKE Autopilot, consider your scaling requirements and its scaling limits. Technically, you can install and run Kubernetes on many computing environments to establish a common runtime layer. Practically, however, building and operating such an architecture can create complexity. The architecture gets even more complex when you require container-level security control (service mesh). To simplify managing multi-cluster deployments, you can use GKE Enterprise to run modern applications anywhere at scale. GKE includes powerful managed open source components to secure workloads, enforce compliance policies, and provide deep network observability and troubleshooting. As illustrated in the following diagram, using GKE Enterprise means you can operate multi-cluster applications as fleets. GKE Enterprise helps with the following design options to support hybrid and multicloud architectures: Design and build cloud-like experiences on-premises or unified solutions for transitioning applications to GKE Enterprise hybrid environment. For more information, see the GKE Enterprise hybrid environment reference architecture.
Design and build a solution to solve multicloud complexity with a consistent governance, operations, and security posture with GKE Multi-Cloud. For more information, see the GKE Multi-Cloud documentation. GKE Enterprise also provides logical groupings of similar environments with consistent security, configuration, and service management. For example, GKE Enterprise powers zero trust distributed architecture. In a zero trust distributed architecture, services that are deployed on-premises or in another cloud environment can communicate across environments through end-to-end mTLS secure service-to-service communications. Workload portability considerations Kubernetes and GKE Enterprise provide a layer of abstraction for workloads that can hide the many intricacies and differences between computing environments. The following list describes some of those abstractions: An application might be portable to a different environment with minimal changes, but that doesn't mean that the application performs equally well in both environments. Differences in underlying compute, infrastructure security capabilities, or networking infrastructure, along with proximity to dependent services, might lead to substantially different performance. Moving a workload between computing environments might also require you to move data. Different environments can have different data storage and management services and facilities. The behavior and performance of load balancers provisioned with Kubernetes or GKE Enterprise might differ between environments. Data movement Because it can be complex to move, share, and access data at scale between computing environments, enterprise-level companies might hesitate to build a hybrid or multicloud architecture. This hesitation might increase if they are already storing most of their data on-premises or in one cloud. However, the various data movement options offered by Google Cloud, provide enterprises with a comprehensive set of solutions to help move, integrate, and transform their data. These options help enterprises to store, share, and access data across different environments in a way that meets their specific use cases. That ability ultimately makes it easier for business and technology decision-makers to adopt hybrid and multicloud architectures. Data movement is an important consideration for hybrid and multicloud strategy and architecture planning. Your team needs to identify your different business use cases and the data that powers them. You should also think about storage type, capacity, accessibility, and movement options. If an enterprise has a data classification for regulated industries, that classification can help to identify storage locations and cross-region data movement restrictions for certain data classes. For more information, see Sensitive Data Protection. Sensitive Data Protection is a fully managed service designed to help you discover, classify, and protect your data assets. To explore the process, from planning a data transfer to using best practices in implementing a plan, see Migration to Google Cloud: Transferring your large datasets. Security As organizations adopt hybrid and multicloud architectures, their attack surface can increase depending on the way their systems and data are distributed across different environments. Combined with the constantly evolving threat landscape, increased attack surfaces can lead to an increased risk of unauthorized access, data loss, and other security incidents. 
Carefully consider security when planning and implementing hybrid or multicloud strategies. For more information, see Attack Surface Management for Google Cloud. When architecting for a hybrid architecture, it's not always technically feasible or viable to extend on-premises security approaches to the cloud. However, many of the networking security capabilities of hardware appliances are cloud-first features and they operate in a distributed manner. For more information about the cloud-first network security capabilities of Google Cloud, see Cloud network security. Hybrid and multicloud architectures can introduce additional security challenges, such as consistency and observability. Every public cloud provider has its own approach to security, including different models, best practices, infrastructure and application security capabilities, compliance obligations, and even the names of security services. These inconsistencies can increase security risk. Also, the shared responsibility model of each cloud provider can differ. It's essential to identify and understand the exact demarcation of responsibilities in a multicloud architecture. Observability is key to gaining insights and metrics from the different environments. In a multicloud architecture, each cloud typically provides tools to monitor for security posture and misconfigurations. However, using these tools results in siloed visibility, which prevents building advanced threat intelligence across the entire environment. As a result, the security team must switch between tools and dashboards to keep the cloud secure. Without an overarching end-to-end security visibility for the hybrid and multicloud environments, it's difficult to prioritize and mitigate vulnerabilities. To obtain the full visibility and posture of all your environments, prioritize your vulnerabilities, and mitigate the vulnerabilities you identify. We recommend a centralized visibility model. A centralized visibility model avoids the need for manual correlation between different tools and dashboards from different platforms. For more information, see Hybrid and multicloud monitoring and logging patterns. As part of your planning to mitigate security risks and deploy workloads on Google Cloud, and to help you plan and design your cloud solution for meeting your security and compliance objectives, explore the Google Cloud security best practices center and the enterprise foundations blueprint. Compliance objectives can vary, as they are influenced by both industry-specific regulations and the varying regulatory requirements of different regions and countries. For more information, see the Google Cloud compliance resource center. The following are some of the primary recommended approaches for architecting secure hybrid and multicloud architecture: Develop a unified tailored cloud security strategy and architecture. Hybrid and multicloud security strategies should be tailored to the specific needs and objectives of your organization. It's essential to understand the targeted architecture and environment before implementing security controls, because each environment can use different features, configurations, and services. Consider a unified security architecture across hybrid and multicloud environments. Standardize cloud design and deployments, especially security design and capabilities. Doing so can improve efficiency and enable unified governance and tooling. Use multiple security controls. 
Typically, no single security control can adequately address all security protection requirements. Therefore, organizations should use a combination of security controls in a layered defense approach, also known as defense-in-depth. Monitor and continuously improve security postures: Your organization should monitor its different environments for security threats and vulnerabilities. It should also try to continuously improve its security posture. Consider using cloud security posture management (CSPM) to identify and remediate security misconfigurations and cybersecurity threats. CSPM also provides vulnerability assessments across hybrid and multicloud environments. Security Command Center is a built-in security and risk management solution for Google Cloud that helps to identify misconfigurations and vulnerabilities and more. Security Health Analytics is a managed vulnerability assessment scanning tool. It's a feature of Security Command Center that identifies security risks and vulnerabilities in your Google Cloud environment and provides recommendations for remediating them. Mandiant Attack Surface Management for Google Cloud lets your organization better see their multicloud or hybrid cloud environment assets. It automatically discovers assets from multiple cloud providers, DNS, and the extended external attack surface to give your enterprise a deeper understanding of its ecosystem. Use this information to prioritize remediation on the vulnerabilities and exposures that present the most risk. Cloud security information and event management (SIEM) solution: Helps to collect and analyze security logs from hybrid and multicloud environments to detect and respond to threats. Google Security Operations SIEM from Google Cloud helps to provide security Information and event management by collecting, analyzing, detecting, and investigating all of your security data in one place. Previous arrow_back Architectural approaches to adopt a hybrid or multicloud architecture Next What's next arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(10).txt b/Overview(10).txt new file mode 100644 index 0000000000000000000000000000000000000000..9c275f918c707b4f4a1d7c170efafe1bfd3936b5 --- /dev/null +++ b/Overview(10).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations +Date Scraped: 2025-02-23T11:45:22.761Z + +Content: +Home Docs Cloud Architecture Center Send feedback Enterprise foundations blueprint Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC This content was last updated in December 2023, and represents the status quo as of the time it was written. Google's security policies and systems may change going forward, as we continually improve protection for our customers. This document describes the best practices that let you deploy a foundational set of resources in Google Cloud. A cloud foundation is the baseline of resources, configurations, and capabilities that enable companies to adopt Google Cloud for their business needs. A well-designed foundation enables consistent governance, security controls, scale, visibility, and access to shared services across all workloads in your Google Cloud environment. After you deploy the controls and governance that are described in this document, you can deploy workloads to Google Cloud. 
The enterprise foundations blueprint (formerly known as the security foundations blueprint) is intended for architects, security practitioners, and platform engineering teams who are responsible for designing an enterprise-ready environment on Google Cloud. This blueprint consists of the following: A terraform-example-foundation GitHub repository that contains the deployable Terraform assets. A guide that describes the architecture, design, and controls that you implement with the blueprint (this document). You can use this guide in one of two ways: To create a complete foundation based on Google's best practices. You can deploy all the recommendations from this guide as a starting point, and then customize the environment to address your business' specific requirements. To review an existing environment on Google Cloud. You can compare specific components of your design against Google-recommended best practices. Supported use cases The enterprise foundation blueprint provides a baseline layer of resources and configurations that help enable all types of workloads on Google Cloud. Whether you're migrating existing compute workloads to Google Cloud, building containerized web applications, or creating big data and machine learning workloads, the enterprise foundation blueprint helps you build your environment to support enterprise workloads at scale. After you deploy the enterprise foundation blueprint, you can deploy workloads directly or deploy additional blueprints to support complex workloads that require additional capabilities. A defense-in-depth security model Google Cloud services benefit from the underlying Google infrastructure security design. It is your responsibility to design security into the systems that you build on top of Google Cloud. The enterprise foundation blueprint helps you to implement a defense-in-depth security model for your Google Cloud services and workloads. The following diagram shows a defense-in-depth security model for your Google Cloud organization that combines architecture controls, policy controls, and detective controls. The diagram describes the following controls: Policy controls are programmatic constraints that enforce acceptable resource configurations and prevent risky configurations. The blueprint uses a combination of policy controls including infrastructure-as-code (IaC) validation in your pipeline and organization policy constraints. Architecture controls are the configuration of Google Cloud resources like networks and resource hierarchy. The blueprint architecture is based on security best practices. Detective controls let you detect anomalous or malicious behavior within the organization. The blueprint uses platform features such as Security Command Center, integrates with your existing detective controls and workflows such as a security operations center (SOC), and provides capabilities to enforce custom detective controls. Key decisions This section summarizes the high-level architectural decisions of the blueprint. The diagram describes how Google Cloud services contribute to key architectural decisions: Cloud Build: Infrastructure resources are managed using a GitOps model. Declarative IaC is written in Terraform and managed in a version control system for review and approval, and resources are deployed using Cloud Build as the continuous integration and continuous deployment (CI/CD) automation tool. The pipeline also enforces policy-as-code checks to validate that resources meet expected configurations before deployment. 
Cloud Identity: Users and group membership are synchronized from your existing identity provider. Controls for user account lifecycle management and single sign-on (SSO) rely on the existing controls and processes of your identity provider. Identity and Access Management (IAM): Allow policies (formerly known as IAM policies) allow access to resources and are applied to groups based on job function. Users are added to the appropriate groups to receive view-only access to foundation resources. All changes to foundation resources are deployed through the CI/CD pipeline which uses privileged service account identities. Resource Manager: All resources are managed under a single organization, with a resource hierarchy of folders that organizes projects by environments. Projects are labeled with metadata for governance including cost attribution. Networking: Network topologies use Shared VPC to provide network resources for workloads across multiple regions and zones, separated by environment, and managed centrally. All network paths between on-premises hosts, Google Cloud resources in the VPC networks, and Google Cloud services are private. No outbound traffic to or inbound traffic from the public internet is permitted by default. Cloud Logging: Aggregated log sinks are configured to collect logs relevant for security and auditing into a centralized project for long-term retention, analysis, and export to external systems. Organization Policy Service: Organization policy constraints are configured to prevent various high-risk configurations. Secret Manager: Centralized projects are created for a team responsible for managing and auditing the use of sensitive application secrets to help meet compliance requirements. Cloud Key Management Service (Cloud KMS): Centralized projects are created for a team responsible for managing and auditing encryption keys to help meet compliance requirements. Security Command Center: Threat detection and monitoring capabilities are provided using a combination of built-in security controls from Security Command Center and custom solutions that let you detect and respond to security events. For alternatives to these key decisions, see alternatives. What's next Read about authentication and authorization (next document in this series). Send feedback \ No newline at end of file diff --git a/Overview(11).txt b/Overview(11).txt new file mode 100644 index 0000000000000000000000000000000000000000..b13d88afb1873c7dc908d236ee8197ddc377ed3d --- /dev/null +++ b/Overview(11).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/microservices-architecture-introduction +Date Scraped: 2025-02-23T11:46:47.731Z + +Content: +Home Docs Cloud Architecture Center Send feedback Introduction to microservices Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This reference guide is the first in a four-part series about designing, building, and deploying microservices. This series describes the various elements of a microservices architecture. The series includes information about the benefits and drawbacks of the microservices architecture pattern, and how to apply it. 
Introduction to microservices (this document) Refactor a monolith into microservices Interservice communication in a microservices setup Distributed tracing in a microservices application This series is intended for application developers and architects who design and implement the migration to refactor a monolith application to a microservices application. Monolithic applications A monolithic application is a single-tiered software application in which different modules are combined into a single program. For example, if you're building an ecommerce application, the application is expected to have a modular architecture that is aligned with object-oriented programming (OOP) principles. The following diagram shows an example ecommerce application setup, in which the application consists of various modules. In a monolithic application, modules are defined using a combination of programming language constructs (such as Java packages) and build artifacts (such as Java JAR files). Figure 1. Diagram of a monolithic ecommerce application with several modules using a combination of programming language constructs. In figure 1, different modules in the ecommerce application correspond to business logic for payment, delivery, and order management. All of these modules are packaged and deployed as a single logical executable. The actual format depends on the application's language and framework. For example, many Java applications are packaged as JAR files and deployed on application servers such as Tomcat or Jetty. Similarly, a Rails or Node.js application is packaged as a directory hierarchy. Monolith benefits Monolithic architecture is a conventional solution for building applications. The following are some advantages of adopting a monolithic design for your application: You can implement end-to-end testing of a monolithic application by using tools like Selenium. To deploy a monolithic application, you can simply copy the packaged application to a server. All modules in a monolithic application share memory, space, and resources, so you can use a single solution to address cross-cutting concerns such as logging, caching, and security. The monolithic approach can provide performance advantages, because modules can call each other directly. By contrast, microservices typically require a network call to communicate with each other. Monolith challenges Complex monoliths often become progressively harder to build, debug, and reason about. At some point, the problems outweigh the benefits. Applications typically grow over time. It can become complicated to implement changes in a large and complex application that has tightly coupled modules. Because any code change affects the whole system, you have to thoroughly coordinate changes. Coordinating changes makes the overall development and testing process much longer compared to microservice applications. It can be complicated to achieve continuous integration and deployment (CI/CD) with a large monolith. This complexity is because you must redeploy the entire application in order to update any one part of it. Also, it's likely that you have to do extensive manual testing of the entire application to check for regressions. Monolithic applications can be difficult to scale when different modules have conflicting resource requirements. For example, one module might implement CPU-intensive image-processing logic. Another module might be an in-memory database. Because these modules are deployed together, you have to compromise on the choice of hardware. 
Because all modules run within the same process, a bug in any module, such as a memory leak, can potentially bring down the entire system. Monolithic applications add complexity when you want to adopt new frameworks and languages. For example, it is expensive (in both time and money) to rewrite an entire application to use a new framework, even if that framework is considerably better. Microservices-based applications A microservice typically implements a set of distinct features or functionality. Each microservice is a mini-application that has its own architecture and business logic. For example, some microservices expose an API that's consumed by other microservices or by the application's clients, such as third-party integrations with payment gateways and logistics. Figure 1 showed a monolithic ecommerce application with several modules. The following diagram shows a possible decomposition of the ecommerce application into microservices: Figure 2. Diagram of an ecommerce application with functional areas implemented by microservices. In figure 2, a dedicated microservice implements each functional area of the ecommerce application. Each backend service might expose an API, and services consume APIs provided by other services. For example, to render web pages, the UI services invoke the checkout service and other services. Services might also use asynchronous, message-based communication. For more information about how services communicate with each other, see the third document in this series, Interservice communication in a microservices setup. The microservices architecture pattern significantly changes the relationship between the application and the database. Instead of sharing a single database with other services, we recommend that each service have its own database that best fits its requirements. When you have one database for each service, you ensure loose coupling between services because all requests for data go through the service API and not through the shared database directly. The following diagram shows a microservices architecture pattern in which each service has its own database: Figure 3. Each service in a microservice architecture has its own database. In figure 3, the order service in the ecommerce application functions well using a document-oriented database that has real-time search capabilities. The payment and delivery services rely on the strong atomicity, consistency, isolation, durability (ACID) guarantees of a relational database. Microservices benefits The microservices architecture pattern addresses the problem of complexity described in the preceding Monolith challenges section. A microservices architecture provides the following benefits: Although the total functionality is unchanged, you use microservices to separate the application into manageable chunks or services. Each service has a well-defined boundary in the form of an RPC or message-driven API. Therefore, individual services can be faster to develop, and easier to understand and maintain. Autonomous teams can independently develop individual services. You can organize microservices around business boundaries, not the technical capabilities of a product. You organize your teams for a single, independent responsibility for the entire lifecycle of their assigned piece of software from development to testing to deployment to maintenance and monitoring.
Independent microservice development process also lets your developers write each microservice in a different programming language, creating a polyglot application. When you use the most effective language for each microservice, you can develop an application more quickly and optimize your application to reduce code complexity and to increase performance and functionality. When you decouple capabilities out of a monolith, you can have the independent teams release their microservice independently. Independent release cycles can help improve your teams' velocity and product time to market. Microservices architecture also lets you scale each service independently. You can deploy the number of instances of each service that satisfy its capacity and availability constraints. You can also use the hardware that best matches a service's resource requirements. When you scale services independently, you help increase the availability and the reliability of the entire system. The following are some specific instances in which it can be beneficial to migrate from a monolith to a microservice architecture: Implementing improvements in scalability, manageability, agility, or speed of delivery. Incrementally rewriting a large legacy application to a modern language and technology stack to meet new business demands. Extracting cross-cutting business applications or cross-cutting services so that you can reuse them across multiple channels. Examples of services you might want to reuse include payment services, login services, encryption services, flight search services, customer profile services, and notification services. Adopting a purpose-built language or framework for a specific functionality of an existing monolith. Microservices challenges Microservices have some challenges when compared to monoliths, including the following: A major challenge of microservices is the complexity that's caused because the application is a distributed system. Developers need to choose and implement an inter-services communication mechanism. The services must also handle partial failures and unavailability of upstream services. Another challenge with microservices is that you need to manage transactions across different microservices (also referred to as a distributed transaction). Business operations that update multiple business entities are fairly common, and they are usually applied in an atomic manner in which either all operations are applied or everything fails. When you wrap multiple operations in a single database transaction, you ensure atomicity. In a microservices-based application, business operations might be spread across different microservices, so you need to update multiple databases that different services own. If there is a failure, it's non-trivial to track the failure or success of calls to the different microservices and roll back state. The worst case scenario can result in inconsistent data between services when the rollback of state due to failures didn't happen correctly. For information about the various methodologies to set up distributed transactions between services, see the third document in this series, Interservice communication in a microservices setup. Comprehensive testing of microservices-based applications is more complex than testing a monolithic application. For example, to test the functionality of processing an order in a monolithic ecommerce service, you select items, add them to a cart, and then check out. 
To test the same flow in a microservices-based architecture, multiple services - such as frontend, order, and payment - call each other to complete the test run. Deploying a microservices-based application is more complex than deploying a monolithic application. A microservice application typically consists of many services, each of which has multiple runtime instances. You also need to implement a service discovery mechanism that enables a service to discover the locations of any other services it needs to communicate with. A microservices architecture adds operations overhead because there are more services to monitor and alert on. Microservice architecture also has more points of failure due to the increased points of service-to-service communication. A monolithic application might be deployed to a small application server cluster. A microservices-based application might have tens of separate services to build, test, deploy and run, potentially in multiple languages and environments. All of these services need to be clustered for failover and resilience. Productionizing a microservices application requires high-quality monitoring and operations infrastructure. The division of services in a microservice architecture allows the application to perform more functions at the same time. However, because the modules run as isolated services, latency is introduced in the response time due to network calls between services. Not all applications are large enough to break down into microservices. Also, some applications require tight integration between components—for example, applications that must process rapid streams of real-time data. Any added layers of communication between services may slow real-time processing down. Thinking about the communication between services beforehand can provide helpful insights in clearly marking the service boundaries. When deciding whether microservice architecture is best for your application, consider the following points: Microservice best practices require per-service databases. When you do data modeling for your application, notice whether per-service databases fit your application. When you implement a microservice architecture, you must instrument and monitor the environment so that you can identify bottlenecks, detect and prevent failures, and support diagnostics. In a microservice architecture, each service has separate access controls. To help ensure security, you need to secure access to each service both within the environment and from external applications that consume its APIs. Synchronous interservice communication typically reduces the availability of an application. For example, if the order service in an ecommerce application synchronously invokes other services upstream, and if those services are unavailable, it can't create an order. Therefore, we recommend that you implement asynchronous, message-based communication. When to migrate a monolithic application to microservices If you're already successfully running a monolith, adopting microservices is a significant investment cost for your team. Different teams implement the principles of microservices in different ways. Each engineering team has unique outcomes for how small their microservices are, or how many microservices they need. To determine if microservices are the best approach for your application, first identify the key business goals or pain points you want to address. There might be simpler ways to achieve your goals or address the issues that you identify. 
For example, if you want to scale your application up faster, you might find that autoscaling is a more efficient solution. If you're finding bugs in production, you can start by implementing unit tests and continuous integration (CI). If you believe that a microservice approach is the best way to achieve your goals, start by extracting one service from the monolith and develop, test, and deploy it in production. For more information, see the next document in this series, Refactor a monolith into microservices. After you have successfully extracted one service and have it running in production, start extraction of the next service and continue learning from each cycle. The microservice architecture pattern decomposes a system into a set of independently deployable services. When you develop a monolithic application, you have to coordinate large teams, which can cause slow software development. When you implement a microservices architecture, you enable small, autonomous teams to work in parallel, which can accelerate your development. In the next document in this series, Refactor a monolith into microservices, you learn about various strategies for refactoring a monolithic application into microservices. What's next Read the next document in this series to learn about application refactoring strategies to decompose microservices. Read the third document in this series to learn about interservice communication in a microservices setup. Read the fourth, final document in this series to learn about distributed tracing of requests between microservices. Send feedback \ No newline at end of file diff --git a/Overview(12).txt b/Overview(12).txt new file mode 100644 index 0000000000000000000000000000000000000000..8e38dc877fd32f2c031410f019641947cfd78d33 --- /dev/null +++ b/Overview(12).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint +Date Scraped: 2025-02-23T11:47:00.003Z + +Content: +Home Docs Cloud Architecture Center Send feedback Deploy an enterprise developer platform on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-13 UTC As enterprises shift to container-based application development and deployment, they must learn how to manage distributed teams with separate engineering workflows. To help large enterprises complete the shift to container-based applications, we created the enterprise application blueprint. This blueprint deploys an internal developer platform that enables cloud platform teams to provide a managed platform for software development and delivery that their organization's application development groups can use. The enterprise application blueprint includes the following: A GitHub repository that contains a set of Terraform configurations and scripts. The Terraform configuration sets up a developer platform in Google Cloud that supports multiple development teams. A guide to the architecture, design, security controls, and operational processes that you use this blueprint to implement (this document). The enterprise application blueprint is designed to be compatible with the enterprise foundations blueprint. The enterprise foundations blueprint provides a number of base-level services that the enterprise application blueprint relies on, such as Cloud Identity. 
You can deploy the enterprise application blueprint without deploying the enterprise foundations blueprint if your Google Cloud environment provides the necessary functionality to support the enterprise application blueprint. This document is intended for cloud architects and assumes that you're using the enterprise application blueprint to deploy new enterprise applications on Google Cloud. However, if you already have existing containerized enterprise applications on Google Cloud, you can incrementally adopt this reference architecture. This document also assumes that you understand Kubernetes components, including services, namespaces, and clusters. For background information on Kubernetes and its implementation in Google Cloud, see the Google Kubernetes Engine (GKE) Enterprise edition technical overview. Enterprise application blueprint overview In most enterprises, a developer platform manages the shared infrastructure that is used by all developers. The developer platform creates build pipelines, deployment pipelines, and runtime environments for each application component on demand. Developer teams and application operators have access to only those application components for which they are responsible. The platform is designed to support the deployment of highly available and secure applications. This blueprint deploys a developer platform on top of the enterprise foundations blueprint (or its equivalent). The developer platform includes resources such as Google Kubernetes Engine (GKE) clusters, GKE fleet, the application factory, infrastructure pipelines, platform monitoring, and platform logging. In addition, the developer platform sets up the users (developer platform administrators and application developers) who manage the solution. This blueprint enables organizations to provide different application development teams (called tenants) access to the platform. A tenant is a group of users with common ownership over a set of resources. A tenant owns one or more applications that run on the platform as a container-based service. An application on the developer platform is a bundle of source code and configuration. Each application is built and deployed by a dedicated CI/CD pipeline. Tenants and applications are isolated from one another at run time and in the CI/CD pipelines. Portions of the blueprint that provide automation are used by all tenants, and are referred to as multi-tenant. To illustrate how the developer platform is used, the blueprint includes a sample application, called Cymbal Bank. Cymbal Bank is a microservices application that is designed to run on GKE. The application is intended to simulate a highly available application that is deployed in an active-active configuration to enable disaster recovery. Cymbal Bank assumes that the application is developed and operated by several independent developer teams. What's next Read about architecture (next document in this series). Send feedback \ No newline at end of file diff --git a/Overview(13).txt b/Overview(13).txt new file mode 100644 index 0000000000000000000000000000000000000000..9fdfa5f87f11c4a9babe885c1680ce0353930092 --- /dev/null +++ b/Overview(13).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/connected-devices +Date Scraped: 2025-02-23T11:48:06.530Z + +Content: +Home Docs Cloud Architecture Center Send feedback Connected device architectures on Google Cloud Stay organized with collections Save and categorize content based on your preferences.
Last reviewed 2024-09-09 UTC To maximize the value of data from their connected devices, organizations need to be able to perform data analysis. There are many ways for organizations to connect their devices to their analytics applications, and the benefits of specific connected device architectures can vary depending on the use case of your organization. To help guide you, this document describes a set of connected device architectures on Google Cloud. These architectures address a broad range of use cases and requirements for connected devices. This document is part of a series of documents that provide information about IoT architectures on Google Cloud. The other documents in this series include the following: Connected device architectures on Google Cloud overview (this document). A standalone MQTT Broker: An MQTT broker provides bidirectional communication between connected devices and Google Cloud projects, and between the devices. An IoT platform architecture on Google Cloud: An IoT platform provides additional device management capabilities along with data connectivity, which is important when you deploy a large fleet of connected devices. A direct connection to Pub/Sub: For data ingestion, the best choice might be for your devices to connect directly to Pub/Sub. Best practices for running an IoT backend on Google Cloud. Best practices for automatically provisioning and configuring edge and bare metal systems and servers. Connected device architectures summary This document groups connected device use cases into three categories, based on the following dimensions that you need to consider when you plan a connected device architecture: Number of devices: It's important to consider how many devices are directly connected to your application. If your application has many end devices (such as machines, sensors, or cameras), and if these devices are connected to an intermediate gateway or other device (such as a mobile phone), it's important to identify whether those end devices must be represented and managed in your application. In some cases you might need to represent each individual device; in other cases, only the intermediate device might need to be represented. Fleet management: Consider whether you need capabilities like device status monitoring, software and firmware updates, configuration management, and other fleet management features. These requirements help to determine your choice of application architecture. Inter-device messaging: Device communication through your application architecture is an important factor. For example, some applications depend on communication between the connected devices through your application architecture. Other applications have data flows that occur strictly between each device and your application, with no messaging between devices. Summary table Understanding the characteristics of your application can help you to choose the best architecture for your use case. To help guide your choice, the following table summarizes the support that each of the connected architectures that are described in this series offers:
Architecture | Device support limits | Inter-device messaging | Fleet management support
MQTT Broker | Millions | Recommended | Not supported
IoT platform | Millions | Some support | Recommended
Device to Pub/Sub | Hundreds | Some support | Not supported
What's next Read about the best connected device architecture for your use case: A standalone MQTT Broker: An IoT platform architecture on Google Cloud. A direct connection to Pub/Sub.
Learn how to connect devices and build IoT applications on Google Cloud using Intelligent Products Essentials. Learn about practices for automatically provisioning and configuring edge and bare metal systems and servers. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Overview(14).txt b/Overview(14).txt new file mode 100644 index 0000000000000000000000000000000000000000..d481daf7720564cabede47e5a925e649b9d5dcf7 --- /dev/null +++ b/Overview(14).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns +Date Scraped: 2025-02-23T11:49:40.430Z + +Content: +Home Docs Cloud Architecture Center Send feedback Build hybrid and multicloud architectures using Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-24 UTC This architecture guide provides practical guidance on planning and architecting your hybrid and multicloud environments using Google Cloud. This document is the first of three documents in the set. It examines the opportunities and considerations associated with these architectures from a business and technology point of view. It also analyzes and discusses many proven hybrid and multicloud architecture patterns. The document set for hybrid and multicloud architecture patterns consists of these parts: Build hybrid and multicloud architectures: discusses planning a strategy for architecting a hybrid and multicloud setup with Google Cloud (this article). Hybrid and multicloud architecture patterns: discusses common architecture patterns to adopt as part of a hybrid and multicloud strategy. Hybrid and multicloud secure networking architecture patterns: discusses hybrid and multicloud networking architecture patterns from a networking perspective. You can read each of these architecture articles independently, but for the most benefit, we recommend reading them in sequence before making an architectural decision. The rapid pace of change in market demands has increased the requirements and expectations that are placed on enterprise IT, such as dynamic scale, increased performance for optimized user experience, and security. Many enterprise-level companies find it challenging to meet these demands and expectations using only traditional infrastructure and processes. IT departments are also under pressure to improve their cost effectiveness, making it difficult to justify additional capital investments in data centers and equipment. A hybrid cloud strategy that uses public cloud computing capabilities provides a pragmatic solution. By using the public cloud, you can extend the capacity and capabilities of your computing platforms without up-front capital investment costs. By adding one or more public cloud based solutions, like Google Cloud, to your existing infrastructure, you not only preserve your existing investments, but you also avoid committing yourself to a single cloud vendor. Also, by using a hybrid strategy, you can modernize applications and processes incrementally as resources permit. To help you plan for your architectural decision and hybrid or multicloud strategy planning, there are several potential challenges and design considerations that you should consider. This multi-part architecture guide highlights both the potential benefits of various architectures and the potential challenges. 
Note: This guide doesn't discuss multicloud architectures that use SaaS products, like customer relationship management (CRM) systems or email, alongside a cloud service provider (CSP). Overview of hybrid cloud and multicloud Because workloads, infrastructure, and processes are unique to each enterprise, each hybrid cloud strategy must be adapted to your specific needs. The result is that the terms hybrid cloud and multicloud are sometimes used inconsistently. Within the context of this Google Cloud architecture guide, the term hybrid cloud describes an architecture in which workloads are deployed across multiple computing environments, one based in the public cloud, and at least one being private—for example, an on-premises data center or a colocation facility. The term multicloud describes an architecture that combines at least two public CSPs. As illustrated in the following diagram, sometimes this architecture includes a private computing environment (that might include the use of a private cloud component). That arrangement is called a hybrid and multicloud architecture. Note: The term hybrid and multicloud refers to any combination of the three architectures displayed in the preceding diagram. However, where possible this series attempts to be specific when discussing architecture patterns. ContributorsAuthor: Marwan Al Shawi | Partner Customer EngineerOther contributors: Saud Albazei | Customer Engineer, Application ModernizationAnna Berenberg | Engineering FellowMarco Ferrari | Cloud Solutions ArchitectVictor Moreno | Product Manager, Cloud NetworkingJohannes Passing | Cloud Solutions ArchitectMark Schlagenhauf | Technical Writer, NetworkingDaniel Strebel | EMEA Solution Lead, Application ModernizationAmmett Williams | Developer Relations Engineer Next Drivers, considerations, strategy, and patterns arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(15).txt b/Overview(15).txt new file mode 100644 index 0000000000000000000000000000000000000000..b181594231e494cef43c07cb49d2b3361b6f9e6d --- /dev/null +++ b/Overview(15).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices +Date Scraped: 2025-02-23T11:49:55.306Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hybrid and multicloud architecture patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-24 UTC This document is the second of three documents in a set. It discusses common hybrid and multicloud architecture patterns. It also describes the scenarios that these patterns are best suited for. Finally, it provides the best practices you can use when deploying such architectures in Google Cloud. The document set for hybrid and multicloud architecture patterns consists of these parts: Build hybrid and multicloud architectures: discusses planning a strategy for architecting a hybrid and multicloud setup with Google Cloud. Hybrid and multicloud architecture patterns: discusses common architecture patterns to adopt as part of a hybrid and multicloud strategy (this document). Hybrid and multicloud secure networking architecture patterns: discusses hybrid and multicloud networking architecture patterns from a networking perspective. Every enterprise has a unique portfolio of application workloads that place requirements and constraints on the architecture of a hybrid or multicloud setup. 
Although you must design and tailor your architecture to meet these constraints and requirements, you can rely on some common patterns to define the foundational architecture. An architecture pattern is a repeatable way to structure multiple functional components of a technology solution, application, or service to create a reusable solution that addresses certain requirements or use cases. A cloud-based technology solution is often made of several distinct and distributed cloud services. These services collaborate to deliver required functionality. In this context, each service is considered a functional component of the technology solution. Similarly, an application can consist of multiple functional tiers, modules, or services, and each can represent a functional component of the application architecture. Such an architecture can be standardized to address specific business use cases and serve as a foundational, reusable pattern. To generally define an architecture pattern for an application or solution, identify and define the following: The components of the solution or application. The expected functions for each component—for example, frontend functions to provide a graphical user interface or backend functions to provide data access. How the components communicate with each other and with external systems or users. In modern applications, these components interact through well-defined interfaces or APIs. There are a wide range of communication models such as asynchronous and synchronous, request-response, or queue-based. The following are the two main categories of hybrid and multicloud architecture patterns: Distributed architecture patterns: These patterns rely on a distributed deployment of workloads or application components. That means they run an application (or specific components of that application) in the computing environment that suits the pattern best. Doing so lets the pattern capitalize on the different properties and characteristics of distributed and interconnected computing environments. Redundant architecture patterns: These patterns are based on redundant deployments of workloads. In these patterns, you deploy the same applications and their components in multiple computing environments. The goal is to either increase the performance capacity or resiliency of an application, or to replicate an existing environment for development and testing. When you implement the architecture pattern that you select, you must use a suitable deployment archetype. Deployment archetypes are zonal, regional, multi-regional, or global. This selection forms the basis for constructing application-specific deployment architectures. Each deployment archetype defines a combination of failure domains within which an application can operate. These failure domains can encompass one or more Google Cloud zones or regions, and can be expanded to include your on-premises data centers or failure domains in other cloud providers. 
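As a hypothetical illustration of the regional deployment archetype, the following Terraform sketch creates a regional managed instance group that spreads identical VMs across the zones (failure domains) of a single region. The project, region, names, and machine type are placeholders, and the sketch assumes a configured google provider.
# Hypothetical sketch: a regional managed instance group for the regional
# deployment archetype. Instances are distributed across zones in the region.
resource "google_compute_instance_template" "app" {
  name_prefix  = "app-template-"
  project      = "example-project"
  machine_type = "e2-standard-2"

  disk {
    source_image = "debian-cloud/debian-12"
    boot         = true
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_region_instance_group_manager" "app" {
  name               = "app-regional-mig"
  project            = "example-project"
  region             = "us-central1"
  base_instance_name = "app"
  target_size        = 3

  version {
    instance_template = google_compute_instance_template.app.id
  }
}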
This series contains the following pages: Distributed architecture patterns Tiered hybrid pattern Partitioned multicloud pattern Analytics hybrid and multicloud pattern Edge hybrid pattern Redundant architecture patterns Environment hybrid pattern Business continuity hybrid and multicloud patterns Cloud bursting pattern ContributorsAuthor: Marwan Al Shawi | Partner Customer EngineerOther contributors: Saud Albazei | Customer Engineer, Application ModernizationAnna Berenberg | Engineering FellowMarco Ferrari | Cloud Solutions ArchitectVictor Moreno | Product Manager, Cloud NetworkingJohannes Passing | Cloud Solutions ArchitectMark Schlagenhauf | Technical Writer, NetworkingDaniel Strebel | EMEA Solution Lead, Application ModernizationAmmett Williams | Developer Relations Engineer Next Distributed architecture patterns arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(16).txt b/Overview(16).txt new file mode 100644 index 0000000000000000000000000000000000000000..2701a2f29fca2703f8641503079b46fcee03548c --- /dev/null +++ b/Overview(16).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns +Date Scraped: 2025-02-23T11:50:21.291Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hybrid and multicloud secure networking architecture patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-29 UTC This document is the third of three documents in a set. It discusses hybrid and multicloud networking architecture patterns. This part explores several common secure network architecture patterns that you can use for hybrid and multicloud architectures. It describes the scenarios that these networking patterns are best suited for, and provides best practices for implementing them with Google Cloud. The document set for hybrid and multicloud architecture patterns consists of these parts: Build hybrid and multicloud architectures: discusses planning a strategy for architecting a hybrid and multicloud setup with Google Cloud. Hybrid and multicloud architecture patterns: discusses common architecture patterns to adopt as part of a hybrid and multicloud strategy. Hybrid and multicloud secure networking architecture patterns: discusses hybrid and multicloud networking architecture patterns from a networking perspective (this document). Connecting private computing environments to Google Cloud securely and reliably is essential for any successful hybrid and multicloud architecture. The hybrid networking connectivity and cloud networking architecture pattern you choose for a hybrid and multicloud setup must meet the unique requirements of your enterprise workloads. It must also suit the architecture patterns you intend to apply. Although you might need to tailor each design, there are common patterns you can use as a blueprint. The networking architecture patterns in this document shouldn't be considered alternatives to the landing zone design in Google Cloud. Instead, you should design and deploy the architecture patterns you select as part of the overall Google Cloud landing zone design, which spans the following areas: Identities Resource management Security Networking Monitoring Different applications can use different networking architecture patterns, which are incorporated as part of a landing zone architecture. In a multicloud setup, you should maintain the consistency of the landing zone design across all environments. 
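For orientation, the following hypothetical Terraform sketch shows two of the hybrid connectivity building blocks that these patterns rely on: a Cloud Router and an HA VPN gateway attached to a transit VPC network. VPN tunnels, BGP sessions, and the peer-side configuration are omitted, and all names and values are placeholders.
# Hypothetical sketch: Cloud Router and HA VPN gateway in a transit VPC.
# Assumes a configured google provider; tunnels and BGP peers are not shown.
resource "google_compute_router" "hybrid" {
  name    = "hybrid-router"
  project = "example-project"
  region  = "us-central1"
  network = "example-transit-vpc"   # placeholder VPC network name

  bgp {
    asn = 64514  # placeholder private ASN for the Cloud Router
  }
}

resource "google_compute_ha_vpn_gateway" "hybrid" {
  name    = "hybrid-ha-vpn-gateway"
  project = "example-project"
  region  = "us-central1"
  network = "example-transit-vpc"   # placeholder VPC network name
}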
This series contains the following pages: Design considerations Architecture patterns Mirrored pattern Meshed pattern Gated patterns Gated egress Gated ingress Gated egress and ingress Handover General best practices ContributorsAuthor: Marwan Al Shawi | Partner Customer EngineerOther contributors: Saud Albazei | Customer Engineer, Application ModernizationAnna Berenberg | Engineering FellowMarco Ferrari | Cloud Solutions ArchitectVictor Moreno | Product Manager, Cloud NetworkingJohannes Passing | Cloud Solutions ArchitectMark Schlagenhauf | Technical Writer, NetworkingDaniel Strebel | EMEA Solution Lead, Application ModernizationAmmett Williams | Developer Relations Engineer Next Design considerations arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(17).txt b/Overview(17).txt new file mode 100644 index 0000000000000000000000000000000000000000..5d6f4fb053319a2be3d512ad64f855ecb4b203ca --- /dev/null +++ b/Overview(17).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ccn-distributed-apps-design +Date Scraped: 2025-02-23T11:50:49.787Z + +Content: +Home Docs Cloud Architecture Center Send feedback Cross-Cloud Network for distributed applications Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-30 UTC Cross-Cloud Network enables an architecture for the assembly of distributed applications. Cross-Cloud Network lets you distribute workloads and services across multiple cloud and on-premises networks. This solution provides application developers and operators the experience of a single cloud across multiple clouds. This solution uses and also expands the established uses of hybrid and multicloud networking. This guide is intended for network architects and engineers who want to design and build distributed applications on Cross-Cloud Network. This guide provides you with a comprehensive understanding of Cross-Cloud Network design considerations. This design guide is a series that includes the following documents: Cross-Cloud Network design for distributed applications (this document) Network segmentation and connectivity for distributed applications in Cross-Cloud Network Service networking for distributed applications in Cross-Cloud Network Network security for distributed applications in Cross-Cloud Network The architecture supports regional and global application stacks, and it's organized in the following functional layers: Network segmentation and connectivity: involves the Virtual Private Cloud (VPC) segmentation structure and IP connectivity across VPCs and to external networks. Service networking: involves the deployment of application services, which are load balanced and made available across projects and organizations. Network security: enables the enforcement of security for intra-cloud and inter-cloud communications, using both built-in cloud security and network virtual appliances (NVAs). Network segmentation and connectivity Segmentation structure and connectivity is the foundation of the design. The following diagram shows a VPC segmentation structure, which you can implement by using either a consolidated or segmented infrastructure. This diagram doesn't show the connections between the networks. This structure includes the following components: Transit VPC: Handles external network connections and routing policies. This VPC can also provide connectivity between other VPCs. Services access VPCs: Contain access points to different services. 
The service access points in these VPCs can be reached from other networks. Managed services VPCs: Contain services produced by other entities. The services are made accessible to applications running in VPC networks by using Private Service Connect or private services access. Application VPCs: Contain the workloads that make up the software services that your organization creates and hosts itself. Your choice of segmentation structure for the application VPCs depends on the scale of application VPCs required, whether you plan to deploy perimeter firewalls in Cross-Cloud Network or externally, and the choice of central or distributed service publication. Cross-Cloud Network supports the deployment of regional application stacks and global application stacks. Both of these application resiliency archetypes are supported by the proposed segmentation structure with the inter-VPC connectivity pattern. You can achieve inter-VPC connectivity with Network Connectivity Center or by using a combination of VPC Network Peering and HA VPN hub-and-spoke patterns. The design of the DNS infrastructure is also defined in the context of the segmentation structure, independent of the connectivity pattern. Service networking Different application deployment archetypes lead to different patterns for service networking. For Cross-Cloud Network design, focus on the Multi-regional deployment archetype, in which an application stack runs independently in multiple zones across two or more Google Cloud regions. A multi-regional deployment archetype has the following features that are useful for Cross-Cloud Network design: You can use DNS routing policies to route incoming traffic to the regional load balancers. The regional load balancers can then distribute the traffic to the application stack. You can implement regional failover by re-anchoring the DNS mappings of the application stack with a DNS failover routing policy. An alternative to the multi-regional deployment archetype would be the global deployment archetype, in which a single stack is built on global load balancers and spans multiple regions. Consider the following features of this archetype when working with Cross-Cloud Network design: The load balancers distribute traffic to the region that's nearest to the user. The internet-facing frontends are global, but the internal-facing frontends are regional with global access, so you can reach them in failover scenarios. You can use geolocation DNS routing policies and DNS health checks on the internal service layers of the application stack. How you provide access to managed published services depends on the service that needs to be reached. The different private reachability models are modularized and orthogonal to the design of the application stack. Depending on the service, you can use Private Service Connect or private services access for private access. You can build an application stack by combining built-in services and services published by other organizations. The service stacks can be regional or global to meet your required level of resiliency and optimized access latency. Network security For workload security, we recommend that you use firewall policies from Google Cloud. If your organization requires additional advanced capabilities to meet security or compliance requirements, you can incorporate perimeter security firewalls by inserting Next-Generation Firewall (NGFW) Network Virtual Appliances (NVAs). 
You can insert NGFW NVAs in a single network interface (single-NIC mode) or over multiple network interfaces (multi-NIC mode). The NGFW NVAs can support security zones or Classless Inter-Domain Routing (CIDR)-based perimeter policies. Cross-Cloud Network deploys perimeter NGFW NVAs by using a transit VPC and VPC routing policies. What's next Design the network segmentation and connectivity for Cross-Cloud Network applications. Learn more about the Google Cloud products used in this design guide: VPC networks Shared VPC VPC Network Peering Private Service Connect Private services access For more reference architectures, design guides, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Victor Moreno | Product Manager, Cloud NetworkingGhaleb Al-habian | Network SpecialistDeepak Michael | Networking Specialist Customer EngineerOsvaldo Costa | Networking Specialist Customer EngineerJonathan Almaleh | Staff Technical Solutions ConsultantOther contributors: Zach Seils | Networking SpecialistChristopher Abraham | Networking Specialist Customer EngineerEmanuele Mazza | Networking Product SpecialistAurélien Legrand | Strategic Cloud EngineerEric Yu | Networking Specialist Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMark Schlagenhauf | Technical Writer, NetworkingMarwan Al Shawi | Partner Customer EngineerAmmett Williams | Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Overview(18).txt b/Overview(18).txt new file mode 100644 index 0000000000000000000000000000000000000000..e8c6f286d7738baa4bb4f7b0388a8f6aa737f942 --- /dev/null +++ b/Overview(18).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/authenticating-corporate-users-in-a-hybrid-environment +Date Scraped: 2025-02-23T11:51:10.481Z + +Content: +Home Docs Cloud Architecture Center Send feedback Authenticate workforce users in a hybrid environment Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document is the first part of a multi-part series that discusses how to extend your identity management solution to Google Cloud to enable your workforce to authenticate and consume services in a hybrid computing environment. The series consists of these parts: Authenticating workforce users in a hybrid environment (this document) Patterns for authenticating workforce users in a hybrid environment Introduction Managing user accounts and controlling employee access to applications and computing resources is a key responsibility of enterprise IT departments. To ensure consistency and administrative efficiency, most enterprises consider identity management a central function and use a unified system to manage identities. Most commonly, enterprises rely on Microsoft Active Directory Domain Services (AD DS) for this purpose. When you extend an IT landscape to Google Cloud as part of a hybrid strategy, you want to maintain a single point where identities are managed. A unified identity management system minimizes administrative effort in managing accounts and access control. This system also helps ensure that users and applications can authenticate securely across a hybrid environment. This document looks at the ways to integrate Google Cloud with your identity management system. The document helps you choose the approach that best fits your requirements. Although most of the discussion also applies to Google Workspace, the document focuses solely on Cloud Identity. 
Assess requirements for hybrid identity management The best way to extend your identity management system to Google Cloud depends on multiple factors: The pools of identities in your organization The identity providers used to provide authentication services for your workforce identities The resources and applications you want your users to be able to access on Google Cloud Your business continuity requirements The following sections look at each of these factors. Identities The first factor to look at when integrating Google Cloud and your identity management system is how you manage and distinguish between identity types. Most organizations use the following types of identities: Workforce identities are the identities you manage for employees of your organization. These identities are used for signing in to workstations, accessing email, or using corporate applications. External identities are the identities you manage for non-employees, such as contractors or partners, who need to be given access to corporate resources. Guest identities are identities managed by a different party, such as a customer or partner, who needs access to corporate resources. Customer identities are the identities you manage for users so that they can interact with your website or customer-facing applications. Workload identities are the identities you manage to enable applications to interact with other applications or the underlying platform. Often, workforce identities form a single pool of identities, where each employee has a single identity and all identities are managed in the same way, using the same systems. However, this doesn't have to be the case—as a result of a merger or acquisition, for example, you might have multiple pools of workforce identities, each managed differently using different systems. Technically, this means that you might have to integrate multiple LDAP sources, Active Directory forests, or Azure AD tenants with Google Cloud. Integrating multiple pools adds to the complexity of integrating with Google Cloud. Therefore, you should evaluate: Do all identity pools need access to Google Cloud, or only a select subset? Should all identity pools have access to the same organization in Google Cloud, or each to a different one? Are there options to reduce the number of pools, for example, by creating cross-forest trusts between Active Directory forests? External identities are often treated similarly to workforce identities, with these exceptions: Their account might be valid for only a limited time. They might be granted fewer rights by default. They might be managed by a separate LDAP directory, Active Directory forest, or Azure AD tenant. Unlike external identities, guest identities are not managed by you but by a different party. In SaaS applications such as Google Workspace, guest identities can remove the need for maintaining external identities for customers or partners. You rarely encounter guest identities in on-premises environments. This document focuses on workforce identities and external identities. If you have used services such as Google Ads, some of your employees might already have a Google Account that isn't connected to their workforce identity, meaning they have two identities. If so, consider consolidating these identities. Identity providers The second factor to look at is your identity providers. An identity provider (IdP) is a system that provides authentication services for your workloads and ultimately decides whether to authenticate a user. 
In addition to providing authentication services, IdPs often manage the lifecycle of identities. This doesn't have to be the case, though, because identities might also be provisioned from a separate human resources system. Common identity providers include: Active Directory (AD DS) Azure Active Directory (Azure AD) IDaaS providers such as ForgeRock, Okta, or Ping Identity Other LDAP directories, including Active Directory Lightweight Directory Services (AD LDS) Instead of using only one system, you might be using several systems—to manage different user pools, to handle external accounts, or to provide different authentication services for the same user pools. Examples where multiple IdPs are used to manage the same pools include: Active Directory federated with Azure AD Active Directory federated with an IDaaS provider such as ForgeRock, Okta, or Ping Identity Active Directory with AD LDS replicas To use your IdP on Google Cloud, you can follow two basic approaches: Federate your identity provider with Cloud Identity: By federating with Cloud Identity, you enable Google to become an additional IdP that your workforce users can use and that applications deployed on Google Cloud can rely on. Instead of maintaining Google identities for each of your users, you can depend on the federation relationship to maintain identities automatically. This relationship also helps ensure that your IdP remains the source of truth. Extend your identity provider to Google Cloud: You can allow applications deployed on Google Cloud to reuse your IdP—by connecting to it directly or by maintaining a replica of your IdP on Google Cloud. Depending on the other identity management factors, you might need to combine both approaches to support your hybrid usage scenarios. Resources The third factor to look at is which Google Cloud resources you plan to grant your workforce users access to. This factor includes Google Cloud itself—you must allow responsible teams to manage Google Cloud by using the Google Cloud console, the gcloud CLI, or APIs. Additional resources might include: Applications deployed on App Engine, Compute Engine, or Google Kubernetes Engine (GKE) Applications protected with Identity-Aware Proxy (IAP) Linux VMs, accessed using SSH Windows VMs, accessed using RDP Other Google tools and services such as Google Workspace, Looker Studio, or Google Ads These resources differ in whether they must, could, or cannot use Google as identity provider. The following sections look at these three cases. Resources that must use Google as IdP Resources that must use Google as IdP include the Google Cloud console, the gcloud CLI, applications protected with IAP, and other Google tools and services. To grant your workforce users access to these resources, you must provision a Google identity for each user. Maintaining separate Google identities runs counter to the idea of unified identity management. So granting users access to any of these resources implies that you must federate your IdP with Google Cloud. After you federate your IdP with Google Cloud, consider using Google as IdP for more resources—including applications that might use other means to authenticate users. Resources that could use Google as IdP Resources that could use Google as IdP, but currently don't, include: New applications, aimed at workforce users, that you plan to deploy on Google Cloud. Existing applications, aimed at workforce users, that you plan to lift and shift or move and improve to Google Cloud. 
Whether these applications can use Google as IdP depends on the protocols you use for authentication and authorization. Web single sign-on protocols Google supports several industry standard protocols for handling authentication, authorization, and single sign-on. Supported protocols include: OAuth 2.0, which applies to mobile clients, fat clients, and other non-browser applications. OpenID Connect 1.0 (OIDC), which applies to browser-based applications. SAML 2.0, which applies to integrating third-party applications. For applications that you plan to develop, OAuth 2.0 or OIDC should be your preferred choice. These protocols are widely adopted, and you can take advantage of many well-tested libraries and tools. Also, relying on these protocols implies that you can optionally use Google as IdP while you retain the flexibility of switching identity providers as needed. If you have applications that use SAML, OAuth 2.0, or OIDC, switching them to using Google as identity provider should be possible with little or no code changes. LDAP One of the most versatile and relied-on protocols for authentication and authorization is the Lightweight Directory Access Protocol (LDAP). There are multiple ways an application can use LDAP for authentication and authorization. The two most common scenarios are: Using LDAP for authentication and querying user information: In this scenario, an application doesn't use single sign-on. Instead, it shows the user a sign-on form asking for username and password and then uses the provided credentials to attempt an LDAP bind operation. If the attempt succeeds, the credentials are considered valid. And further information about the user such as name and group membership might be queried from the directory and used to authorize access. Using SAML for authentication and LDAP for querying user information: In this scenario, the application authenticates the user by using a single sign-on protocol—applications often use SAML for this purpose. After the user has been authenticated, the application uses the LDAP server to query additional information about the user such as name and group memberships. The critical difference between the two scenarios is that the first scenario requires both the application and the LDAP server to have access to the user's password in order to verify credentials. In the second scenario, the application and the server don't require access to the user's password; the application can perform its LDAP queries by using a dedicated service user. With Secure LDAP, you can access user and group information in Cloud Identity by using the LDAP protocol. If Google is your primary IdP, Secure LDAP lets you support both scenarios. However, if you integrate Cloud Identity with an external IdP, Cloud Identity doesn't maintain a copy of user passwords. As a result, only the second scenario can be enabled with Secure LDAP. Kerberos and NTLM If you plan to migrate Windows-based workloads to Google Cloud, some of these applications might rely on Integrated Windows Authentication (IWA) instead of using standard protocols. IWA is a common choice for ASP.NET-based applications running on Microsoft IIS. IWA is popular because it allows a seamless single sign-on experience for users who have logged in to their Windows workstation using domain credentials. IWA relies on NTLM or Kerberos. It requires the user's workstation and the server the application is running on to be joined to the same Active Directory domain or to trusting domains. 
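As a rough illustration of the two LDAP scenarios described earlier, the following sketch uses the open source ldap3 Python library. The hostnames, distinguished names, and the dedicated service user are placeholders, and the details will differ for your directory (Secure LDAP, for example, additionally requires TLS client certificates).

from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", port=636, use_ssl=True, get_info=ALL)

# Scenario 1: authenticate by attempting an LDAP bind with the user's own
# credentials. A successful bind means the password is valid, which requires
# the LDAP server to hold or verify the user's password.
def authenticate(username: str, password: str) -> bool:
    user_dn = f"uid={username},ou=people,dc=example,dc=com"  # placeholder DN layout
    try:
        Connection(server, user=user_dn, password=password, auto_bind=True).unbind()
        return True
    except Exception:
        return False

# Scenario 2: the application authenticates the user elsewhere (for example,
# with SAML) and only queries user details and group membership through a
# dedicated service user; no user password is handled by the application.
conn = Connection(
    server,
    user="uid=app-service,ou=services,dc=example,dc=com",  # placeholder service account
    password="service-password",
    auto_bind=True,
)
conn.search(
    "ou=people,dc=example,dc=com",
    "(uid=alice)",
    attributes=["cn", "mail", "memberOf"],
)
print(conn.entries)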
One consequence of relying on NTLM and Kerberos is that an application using IWA cannot use Google as IdP. However, you might still be able to refactor the application to use OIDC. OIDC doesn't require the user's workstation or the server to be domain-joined. So refactoring might simplify deployment and help you pursue alternative deployment options. Considering the seamless single sign-on experience provided by IWA, using OIDC instead of IWA might seem like a step backward in terms of user experience. However, this doesn't have to be the case if you ensure that users can seamlessly sign on to AD FS or Azure AD: If you federate Google Cloud with Active Directory and AD FS, any authentication methods configured in AD FS apply. If you configure AD FS to allow IWA, users who have logged in to their Windows workstation using domain credentials can be authenticated seamlessly to any application that uses Google as IdP. If you federate Google Cloud with Azure AD, you can enable Azure AD Seamless SSO to the same effect. The following diagram shows a simplified example of how you can use Cloud Identity, AD FS, and IWA to implement seamless single sign-on for a web application: The user requests a protected page using a web browser. The web application initiates a login using OIDC (OIDC authentication flow). This flow redirects the browser to the Google sign-in endpoint. The Google sign-in endpoint returns the Google sign-in page to the user, prompting for the email address. The user enters their email address. Based on the email address, the Google sign-in endpoint identifies the Cloud Identity account and recognizes that it is configured to use SSO. The sign-in endpoint then initiates a SAML login with AD FS. AD FS, configured to use IWA, requests the browser to present a Kerberos ticket, which triggers the browser to request the underlying Windows operating system to obtain a suitable ticket. Unless a suitable ticket has been cached, Windows contacts the Active Directory key distribution center (KDC) and requests a suitable service ticket to be issued based on the ticket granting ticket (TGT) that was obtained when the user signed in to Windows. The browser presents the newly obtained ticket to AD FS. AD FS validates the ticket by checking its cryptographic signature, extracts the user identity from the ticket, and issues a SAML token to the Google sign-in endpoint. Using the authentication information from the SAML token, the Google sign-in endpoint completes the OIDC login and issues OpenID Connect tokens to the web application. When the user is authenticated, the protected page can be returned to the user. SSH public-key authentication When you plan to run Linux virtual machines (VMs) on Google Cloud, you likely need SSH access to these machines. The most common authentication method for SSH is public-key authentication. Unlike the single sign-on protocols previously discussed, SSH public-key authentication doesn't rely on a centralized IdP to make authentication decisions. Instead, authentication decisions are decentralized—each machine handles authentication based on a local set of authorized public keys. You can bridge the gap between decentralized SSH public-key authentication and centralized identity management by using OS Login. OS Login ties the lifecycle of SSH keys to the lifecycle of user accounts by: Publishing an SSH public key when a user is created or is attempting to use SSH for the first time. Provisioning the user's public key to machines that the user is entitled to access. 
Deprovisioning the user's public key when the account is revoked access, disabled, or deleted. Using OS Login effectively makes Cloud Identity the IdP for your Linux instances. Resources that cannot use Google as IdP Some resources cannot directly use Google as IdP. But you can still accommodate these resources on Google Cloud by combining two approaches: Federate your external IdP with Google Cloud to allow corporate users to use the Google Cloud console, the gcloud CLI, and other resources that must or could use Google as IdP. Extend your IdP to Google Cloud to enable resources that cannot use Google as IdP to be run on Google Cloud. If a resource relies on protocols that the Google IdP doesn't support, that resource cannot use Google as IdP. Such protocols include: LDAP for authentication: Although you can use Secure LDAP to facilitate querying user information from Cloud Identity through LDAP, Cloud Identity does not support using LDAP for authentication when federated with an external IdP. To allow applications to use LDAP for authentication, you can expose or replicate an on-premises LDAP directory or you can extend your Active Directory to Google Cloud. WS-Trust, WS-Federation: Especially if you use AD FS, you might still rely on WS-Trust or WS-Federation to handle token-based authentication. Unless you can change affected applications to use SAML or OpenID Connect, it's best to expose your on-premises AD FS to Google Cloud and have the applications use AD FS directly. OpenID Connect with AD FS-specific claims: AD FS and Google support OpenID Connect. If you have been using AD FS as an OpenID Connect provider, you might rely on certain AD FS-specific claims that Google doesn't support. If so, consider exposing your on-premises AD FS to Google Cloud and have affected applications directly use AD FS. Kerberos, NTLM: If some of your applications use Kerberos or NTLM for authentication, you might be able to modify them to use OpenID Connect or SAML instead. If this isn't possible, you can deploy these applications on domain-joined Windows servers and either expose or replicate your on-premises Active Directory to Google Cloud. Windows virtual machines: If you run Windows workloads on Google Cloud, you must be able to sign in to these VMs through Remote Desktop Protocol (RDP), through a Remote Desktop Gateway, or by other means. If a user has write access to the Google Cloud project where the VM was created, Google Cloud lets the user create a user and password, which creates an account in the VM's local Security Account Manager (SAM) database. Crucially, the resulting Windows SAM account isn't connected to the user's Google Account and isn't subject to the same account lifecycle. If you suspend or delete the user's Google Account, the Windows SAM account is unaffected and might continue to provide access to the VM. If you have a moderate number of Windows VMs and users that must be able to sign in to these machines, then letting users generate Windows user accounts and passwords might be a viable approach. But when managing larger fleets of Windows servers, it can be better to extend an on-premises Active Directory to Google Cloud and domain-join the respective servers. Domain-joining servers is also a requirement if you rely on Network Level Authentication. Availability The final factor to look at is availability. The ability to authenticate users is likely critical for many of your workloads, and an IdP outage can have far-reaching consequences. 
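The web single sign-on flows described earlier end with the application receiving Google-issued OpenID Connect tokens. As a minimal sketch of that final step in a Python backend (assuming the google-auth library and a placeholder OAuth 2.0 client ID and domain), the application can verify the ID token before establishing a session:

from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

CLIENT_ID = "1234567890-example.apps.googleusercontent.com"  # placeholder OAuth 2.0 client ID

def verify_google_id_token(token: str) -> dict:
    # Checks the token's signature, expiry, issuer, and audience.
    claims = id_token.verify_oauth2_token(token, google_requests.Request(), CLIENT_ID)
    # The hd claim can confirm that the user belongs to your Cloud Identity or
    # Google Workspace domain; example.com is a placeholder for your domain.
    if claims.get("hd") != "example.com":
        raise ValueError("Token was not issued for the expected domain.")
    return claims  # contains sub, email, and other identity claims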
The right approach to ensuring suitable availability depends on how you intend to use Google Cloud and how it fits into your hybrid strategy. Distributed workloads To capitalize on the unique capabilities that each computing environment offers, you might use a hybrid multi-cloud approach to distribute workloads across those environments. These environments might have dependencies on one another that are critical to availability of your workloads. Dependencies might include VPN tunnels or interconnects, applications communicating with one another, or systems accessing data across computing environments. When federating or extending your external IdP to Google Cloud, ensure that availability of your external IdP and other systems required for authentication meets or exceeds the availability of other critical dependencies. This requirement means that you might need to both redundantly deploy the external IdP and its dependencies and ensure redundant network connectivity. Redundant workloads If you use Google Cloud for ensuring business continuity, your workloads on Google Cloud are going to mirror some of the workloads you have in your computing environment. The purpose of such a setup is to enable one computing environment to take over the role of the other environment if a failure occurs. So you need to look at every dependency between these environments. By having Google Cloud rely on an external IdP running on-premises, you create a dependency. That dependency could undermine the intent of having Google Cloud be an independent copy of your computing environment. Try to replicate your IdP to Google Cloud so that all workloads on Google Cloud are unaffected by an outage of your on-premises computing environment or of connectivity between Google Cloud and your on-premises network. What's next Review common patterns for authenticating workforce users in a hybrid environment. Review the identity provisioning options for Google Cloud. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Overview(19).txt b/Overview(19).txt new file mode 100644 index 0000000000000000000000000000000000000000..9a6de7b0380a1819e2bf0291b8d4576e0d9641ea --- /dev/null +++ b/Overview(19).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hadoop +Date Scraped: 2025-02-23T11:52:32.199Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrating On-Premises Hadoop Infrastructure to Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-15 UTC This guide provides an overview of how to move your on-premises Apache Hadoop system to Google Cloud. It describes a migration process that not only moves your Hadoop work to Google Cloud, but also enables you to adapt your work to take advantage of the benefits of a Hadoop system optimized for cloud computing. It also introduces some fundamental concepts you need to understand to translate your Hadoop configuration to Google Cloud. This is the first of several guides describing how to move from on-premises Hadoop: This guide, which provides context and planning advice for your migration. Migrating HDFS Data from On-Premises to Google Cloud provides additional context for incrementally moving your data to Google Cloud. 
Migrating data from HBase to Bigtable Migrating Hadoop Jobs from On-Premises to Dataproc describes the process of running your jobs on Dataproc and other Google Cloud products. The benefits of migrating to Google Cloud There are many ways in which using Google Cloud can save you time, money, and effort compared to using an on-premises Hadoop solution. In many cases, adopting a cloud-based approach can make your overall solution simpler and easy to manage. Built-in support for Hadoop Google Cloud includes Dataproc, which is a managed Hadoop and Spark environment. You can use Dataproc to run most of your existing jobs with minimal alteration, so you don't need to move away from all of the Hadoop tools you already know. Managed hardware and configuration When you run Hadoop on Google Cloud, you never need to worry about physical hardware. You specify the configuration of your cluster, and Dataproc allocates resources for you. You can scale your cluster at any time. Simplified version management Keeping open source tools up to date and working together is one of the most complex parts of managing a Hadoop cluster. When you use Dataproc, much of that work is managed for you by Dataproc versioning. Flexible job configuration A typical on-premises Hadoop setup uses a single cluster that serves many purposes. When you move to Google Cloud, you can focus on individual tasks, creating as many clusters as you need. This removes much of the complexity of maintaining a single cluster with growing dependencies and software configuration interactions. Planning your migration Migrating from an on-premises Hadoop solution to Google Cloud requires a shift in approach. A typical on-premises Hadoop system consists of a monolithic cluster that supports many workloads, often across multiple business areas. As a result, the system becomes more complex over time and can require administrators to make compromises to get everything working in the monolithic cluster. When you move your Hadoop system to Google Cloud, you can reduce the administrative complexity. However, to get that simplification and to get the most efficient processing in Google Cloud with the minimal cost, you need to rethink how to structure your data and jobs. Because Dataproc runs Hadoop on Google Cloud, using a persistent Dataproc cluster to replicate your on-premises setup might seem like the easiest solution. However, there are some limitations to that approach: Keeping your data in a persistent HDFS cluster using Dataproc is more expensive than storing your data in Cloud Storage, which is what we recommend, as explained later. Keeping data in an HDFS cluster also limits your ability to use your data with other Google Cloud products. Augmenting or replacing some of your open-source-based tools with other related Google Cloud services can be more efficient or economical for particular use cases. Using a single, persistent Dataproc cluster for your jobs is more difficult to manage than shifting to targeted clusters that serve individual jobs or job areas. The most cost-effective and flexible way to migrate your Hadoop system to Google Cloud is to shift away from thinking in terms of large, multi-purpose, persistent clusters and instead think about small, short-lived clusters that are designed to run specific jobs. You store your data in Cloud Storage to support multiple, temporary processing clusters. 
This model is often called the ephemeral model, because the clusters you use for processing jobs are allocated as needed and are released as jobs finish. The following diagram shows a hypothetical migration from an on-premises system to an ephemeral model on Google Cloud. The example moves four jobs that run on two on-premises clusters to Dataproc. The ephemeral clusters that are used to run the jobs in Google Cloud are defined to maximize efficiency for individual jobs. The first two jobs use the same cluster, while the third and fourth jobs each run on their own cluster. When you migrate your own jobs, you can customize and optimize clusters for individual jobs or for groups of jobs as makes sense for your specific work. Dataproc helps you quickly define multiple clusters, bring them online, and scale them to suit your needs. The data in the example is moved from two on-premises HDFS clusters to Cloud Storage buckets. The data in the first cluster is divided among multiple buckets, and the second cluster is moved to a single bucket. You can customize the structure of your data in Cloud Storage to suit the needs of your applications and your business. The example migration captures the beginning and ending states of a complete migration to Google Cloud. This implies a single step, but you'll get the best results if you don't think of moving to Google Cloud as a one-time, complete migration. Instead, think of it as refactoring your solutions to use a new set of tools in ways that weren't possible on-premises. To make such a refactoring work, we recommend migrating incrementally. Recommended sequence for migrating to Google Cloud Here are the recommended steps for migrating your workflows to Google Cloud: Move your data first Move your data into Cloud Storage buckets. Start small. Use backup or archived data to minimize the impact to your existing Hadoop system. Experiment Use a subset of data to test and experiment. Make a small-scale proof of concept for each of your jobs. Try new approaches to working with your data. Adjust to Google Cloud and cloud-computing paradigms. Think in terms of specialized, ephemeral clusters. Use the smallest clusters you can—scope them to single jobs or small groups of closely related jobs. Create clusters each time you need them for a job and delete them when you're done. Use Google Cloud tools wherever appropriate. Moving to an ephemeral model The biggest shift in your approach between running an on-premises Hadoop workflow and running the same workflow on Google Cloud is the shift away from monolithic, persistent clusters to specialized, ephemeral clusters. You spin up a cluster when you need to run a job and then delete it when the job completes. The resources required by your jobs are active only when they're being used, so you only pay for what you use. This approach enables you to tailor cluster configurations for individual jobs. Because you aren't maintaining and configuring a persistent cluster, you reduce the costs of resource use and cluster administration. This section describes how to move your existing Hadoop infrastructure to an ephemeral model. Separate data from computation Using Cloud Storage as the persistent storage for your workflows has the following advantages: It's easier to manage access permissions. It's a Hadoop Compatible File System (HCFS), so it's easy to use with your existing jobs. It's faster than HDFS in many cases. It requires less maintenance than HDFS. It is easier to migrate data than HDFS. 
It enables you to easily use your data with the whole range of Google Cloud products. It's considerably less expensive than keeping your data in HDFS on a persistent Dataproc cluster. With your data stored persistently in Cloud Storage, you can run your jobs on ephemeral Hadoop clusters managed by Dataproc. In some cases, it might make more sense to move data to another Google Cloud technology, such as BigQuery or Bigtable. But most general-purpose data should persist in Cloud Storage. More detail about these alternative storage options is provided later in this guide. Run jobs on ephemeral clusters Dataproc makes it easy to create and delete clusters so that you can move away from using one monolithic cluster to using many ephemeral clusters. This approach has several advantages: You can avoid single points of failure and increase reliability of your data pipeline. When a shared long-running cluster runs into an error state, the whole data pipeline can get blocked. Repairing a stateful long-running cluster can take a long time, causing service level objective (SLO) violations. By contrast, a problematic stateless ephemeral cluster can easily be deleted, then recreated with job retries. You can have more predictable job performances and avoid SLO violations by eliminating resource contentions among jobs. You can optimize cluster configurations and autoscaling policies for individual jobs. You can get the latest security patches, bug fixes, and optimizations when you create an ephemeral cluster for a job. You can avoid common issues with long-running clusters, such as disks getting filled up with logs and temporary files, or the cluster failing to scale up due to stockout in the zone. You don't need to maintain clusters over time because ephemeral clusters are configured every time you use them. Not having to maintain clusters eliminates the administrative burden of managing tools across jobs. You don't need to maintain separate infrastructure for development, testing, and production. You can use the same definitions to create as many different versions of a cluster as you need. You can troubleshoot issues more quickly with the isolated single-job cluster. You only pay for resources when your jobs are using them. You can use Dataproc initialization actions to define the configuration of nodes in a cluster. This makes it easy to maintain the different cluster configurations you need to closely support individual jobs and related groups of jobs. You can use the provided sample initialization actions to get started. The samples demonstrate how to make your own initialization actions. Minimize the lifetime of ephemeral clusters The point of ephemeral clusters is to use them only for the jobs' lifetime. When it's time to run a job, follow this process: Create a properly configured cluster. Run your job, sending output to Cloud Storage or another persistent location. Delete the cluster. Use your job output however you need to. View logs in Cloud Logging or Cloud Storage. This process is shown in the following diagram: Use small persistent clusters only when absolutely necessary If you can't accomplish your work without a persistent cluster, you can create one. This option may be costly and isn't recommended if there is a way to get your job done on ephemeral clusters. You can minimize the cost of a persistent cluster by: Creating the smallest cluster you can. Scoping your work on that cluster to the smallest possible number of jobs. 
Scaling the cluster to the minimum workable number of nodes, adding more dynamically to meet demand. Migrating incrementally There are many advantages to migrating incrementally. You can: Isolate individual jobs in your existing Hadoop infrastructure from the complexity that's inherent in a mature environment. Examine each job in isolation to evaluate its needs and to determine the best path for migration. Handle unexpected problems as they arise without delaying dependent tasks. Create a proof of concept for each complex process without affecting your production environment. Move your workloads to the recommended ephemeral model thoughtfully and deliberately. Your migration is unique to your Hadoop environment, so there is no universal plan that fits all migration scenarios. Make a plan for your migration that gives you the freedom to translate each piece to a cloud-computing paradigm. Here's a typical incremental migration sequence: Move some portion of your data to Google Cloud. Experiment with that data: Replicate your existing jobs that use the data. Make new prototypes that work with the data. Repeat with additional data. Start with the least critical data that you can. In the early stages, it's a good approach to use backup data and archives. One kind of lower-risk job that makes a good starting test is backfilling by running burst processing on archive data. You can set up jobs that fill gaps in processing for data that existed before your current jobs were in place. Starting with burst jobs can often provide experience with scaling on Google Cloud early in your migration plan. This can help you when you begin migrating more critical jobs. The following diagram shows an example of a typical backfill hybrid architecture. The example has two major components. First, scheduled jobs running on the on-premises cluster push data to Cloud Storage through an internet gateway. Second, backfill jobs run on ephemeral Dataproc clusters. In addition to backfilling, you can use ephemeral clusters in Google Cloud for experimentation and creating proofs of concept for future work. Planning with the completed migration in mind So far, this guide has assumed that your goal is to move your entire Hadoop system from on-premises to Google Cloud. A Hadoop system running entirely on Google Cloud is easier to manage than one that operates both in the cloud and on-premises. However, a hybrid approach is often necessary to meet your business or technological needs. Designing a hybrid solution Here are some reasons you might need a hybrid solution: You are in the process of developing cloud-native systems, so existing systems that depend on them must continue to run on-premises until you are finished. You have business requirements to keep your data on-premises. You have to share data with other systems that run on-premises, and those systems can't interact with Google Cloud due to technical or business restrictions. A typical hybrid solution has four major parts: An on-premises Hadoop cluster. A connection from the on-premises cluster to Google Cloud. Centralized data storage in Google Cloud. Cloud-native components that work on data in Google Cloud. An issue you must address with a hybrid cloud solution is how to keep your systems in sync. That is, how will you make sure that changes you make to data in one place are reflected in the other? You can simplify synchronization by making clear distinctions between how your data is used in the different environments. 
For example, you might have a hybrid solution where only your archived data is stored in Google Cloud. You can set up scheduled jobs to move your data from the on-premises cluster to Google Cloud when the data reaches a specified age. Then you can set up all of your jobs that work on the archived data in Google Cloud so that you never need to synchronize changes to your on-premises clusters. Another way to divide your system is to move all of the data and work for a specific project or working group to Google Cloud while keeping other work on premises. Then you can focus on your jobs instead of creating complex data synchronization systems. You might have security or logistical concerns that complicate how you connect your on-premises cluster to Google Cloud. One solution is to use a Virtual Private Cloud connected to your on-premises network using Cloud VPN. The following diagram shows an example hybrid cloud setup: The example setup uses Cloud VPN to connect a Google Cloud VPC to an on-premises cluster. The system uses Dataproc inside the VPC to manage persistent clusters that process data coming from the on-premises system. This might involve synchronizing data between the systems. Those persistent Dataproc clusters also transfer data coming from the on-premises system to the appropriate storage services in Google Cloud. For the sake of illustration, the example uses Cloud Storage, BigQuery, and Bigtable for storage—those are the most common destinations for data processed by Hadoop workloads in Google Cloud. The other half of the example solution shows multiple ephemeral clusters that are created as needed in the public cloud. Those clusters can be used for many tasks, including those that collect and transform new data. The results of these jobs are stored in the same storage services used by the clusters running in the VPC. Designing a cloud-native solution By contrast, a cloud-native solution is straightforward. Because you run all of your jobs on Google Cloud using data stored in Cloud Storage, you avoid the complexity of data synchronization altogether, although you must still be careful about how your different jobs use the same data. The following diagram shows an example of a cloud-native system: The example system has some persistent clusters and some ephemeral ones. Both kinds of clusters share cloud tools and resources, including storage and monitoring. Dataproc uses standardized machine images to define software configurations on VMs in a cluster. You can use these predefined images as the basis for the VM configuration you need. The example shows most of the persistent clusters running on version 1.1, with one cluster running on version 1.2. You can create new clusters with customized VM instances whenever you need them. This lets you isolate testing and development environments from critical production data and jobs. The ephemeral clusters in the example run a variety of jobs. This example shows Apache Airflow running on Compute Engine being used to schedule work with ephemeral clusters. Working with Google Cloud services This section discusses some additional considerations for migrating Hadoop to Google Cloud. Replacing open source tools with Google Cloud services Google Cloud offers many products that you can use with your Hadoop system. Using a Google Cloud product can often have benefits over running the equivalent open source product in Google Cloud. Learn about Google Cloud products and services to discover the breadth of what the platform has to offer. 
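The cloud-native example above mentions Apache Airflow scheduling work on ephemeral clusters. The following sketch shows one way that pattern is commonly expressed with the Google provider package for Airflow (apache-airflow-providers-google) on a recent Airflow version; the project ID, region, job file, and cluster sizing are placeholders that you would adapt to your own workload.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocCreateClusterOperator,
    DataprocDeleteClusterOperator,
    DataprocSubmitJobOperator,
)

PROJECT_ID = "my-project"        # placeholder
REGION = "us-central1"           # placeholder
CLUSTER_NAME = "ephemeral-etl"   # placeholder

with DAG("ephemeral_dataproc_job", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    create_cluster = DataprocCreateClusterOperator(
        task_id="create_cluster",
        project_id=PROJECT_ID,
        region=REGION,
        cluster_name=CLUSTER_NAME,
        # Small, job-scoped cluster; sized for this workload only.
        cluster_config={
            "master_config": {"num_instances": 1, "machine_type_uri": "n2-standard-4"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n2-standard-4"},
        },
    )

    run_job = DataprocSubmitJobOperator(
        task_id="run_job",
        project_id=PROJECT_ID,
        region=REGION,
        job={
            "placement": {"cluster_name": CLUSTER_NAME},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},  # placeholder
        },
    )

    delete_cluster = DataprocDeleteClusterOperator(
        task_id="delete_cluster",
        project_id=PROJECT_ID,
        region=REGION,
        cluster_name=CLUSTER_NAME,
        trigger_rule="all_done",  # delete the cluster even if the job fails
    )

    create_cluster >> run_job >> delete_cluster

Because the cluster is created and deleted by the same workflow, you pay for the cluster only while the job runs, and each run starts from a clean, consistently configured environment.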
Using regions and zones You should understand the repercussions of geography and regions before you configure your data and jobs. Many Google Cloud services require you to specify regions or zones in which to allocate resources. The latency of requests can increase when the requests are made from a different region than the one where the resources are stored. Additionally, if the service's resources and your persistent data are located in different regions, some calls to Google Cloud services might copy all of the required data from one zone to another before processing. This can have a severe impact on performance. Configuring authentication and permissions Your control over permissions in Google Cloud services is likely to be less fine-grained than you are accustomed to in your on-premises Hadoop environment. Make sure you understand how access management works in Google Cloud before you begin your migration. Identity and Access Management (IAM) manages access to cloud resources. It works on the basis of accounts and roles. Accounts identify a user or request (authentication), and the roles granted to an account dictate the level of access (authorization). Most Google Cloud services provide their own set of roles to help you fine-tune permissions. As part of the planning process for your migration, you should learn about how IAM interacts with Cloud Storage and with Dataproc. Learn about the permissions models of each additional Google Cloud service as you add it to your system, and consider how to define roles that work across the services that you use. Monitoring jobs with Cloud Logging Your Google Cloud jobs send their logs to Cloud Logging, where the logs are easily accessible. You can get your logs in these ways: With a browser-based graphical interface using Logs Explorer in Google Cloud console. From a local terminal window using the Google Cloud CLI. From scripts or applications using the Cloud Client Libraries for the Cloud Logging API. By calling the Cloud Logging REST API. Managing your edge nodes with Compute Engine You can use Compute Engine to access an edge node in a Dataproc Hadoop cluster. As with most Google Cloud products, you have multiple options for management: through the web-based console, from the command line, and through web APIs. Using Google Cloud big data services Cloud Storage is the primary way to store unstructured data in Google Cloud, but it isn't the only storage option. Some of your data might be better suited to storage in products designed explicitly for big data. You can use Bigtable to store large amounts of sparse data. Bigtable is an HBase-compliant API that offers low latency and high scalability to adapt to your jobs. For data warehousing, you can use BigQuery. What's next Check out the other parts of the Hadoop migration guide: Data migration guide Job migration guide Explore the Dataproc documentation. Get an overview of Google Cloud. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. 
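As a small illustration of the client-library option for reading job logs that is listed in the Monitoring jobs with Cloud Logging section above, the following sketch uses the google-cloud-logging Python library; the project ID, filter, and timestamp are placeholders that you would adjust to match the jobs you want to inspect.

from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # placeholder project ID

# Pull recent Dataproc job log entries; adjust the filter to match the
# resource type and labels of your own jobs.
log_filter = (
    'resource.type="cloud_dataproc_job" '
    'AND severity>=INFO '
    'AND timestamp>="2025-01-01T00:00:00Z"'
)

for entry in client.list_entries(filter_=log_filter, order_by=cloud_logging.DESCENDING, max_results=50):
    print(entry.timestamp, entry.payload)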
Send feedback \ No newline at end of file diff --git a/Overview(2).txt b/Overview(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..1b33f1870a6ca759fc4297f1488d72f596d0a9bf --- /dev/null +++ b/Overview(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework +Date Scraped: 2025-02-23T11:42:35.296Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud Architecture Framework Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC The Google Cloud Architecture Framework provides recommendations to help architects, developers, administrators, and other cloud practitioners design and operate a cloud topology that's secure, efficient, resilient, high-performing, and cost-effective. The Google Cloud Architecture Framework is our version of a well-architected framework. A cross-functional team of experts at Google validates the recommendations in the Architecture Framework. The team curates the Architecture Framework to reflect the expanding capabilities of Google Cloud, industry best practices, community knowledge, and feedback from you. For a summary of the significant changes to the Architecture Framework, see What's new. The Architecture Framework is relevant to applications built for the cloud and for workloads migrated from on-premises to Google Cloud, hybrid cloud deployments, and multi-cloud environments. Architecture Framework pillars and perspectives The Google Cloud Architecture Framework is organized into five pillars, as shown in the following diagram. We also provide cross-pillar perspectives that focus on recommendations for selected domains, industries, and technologies like AI and machine learning (ML). To view the content in all of the pillars and perspectives on a single page or to to get a PDF output of the content, see View in one page. Pillars construction Operational excellence Efficiently deploy, operate, monitor, and manage your cloud workloads. security Security, privacy, and compliance Maximize the security of your data and workloads in the cloud, design for privacy, and align with regulatory requirements and standards. restore Reliability Design and operate resilient and highly available workloads in the cloud. payment Cost optimization Maximize the business value of your investment in Google Cloud. speed Performance optimization Design and tune your cloud resources for optimal performance. Perspectives saved_search AI and ML A cross-pillar view of recommendations that are specific to AI and ML workloads. Core principles Before you explore the recommendations in each pillar of the Architecture Framework, review the following core principles: Design for change No system is static. The needs of its users, the goals of the team that builds the system, and the system itself are constantly changing. With the need for change in mind, build a development and production process that enables teams to regularly deliver small changes and get fast feedback on those changes. Consistently demonstrating the ability to deploy changes helps to build trust with stakeholders, including the teams responsible for the system, and the users of the system. Using DORA's software delivery metrics can help your team monitor the speed, ease, and safety of making changes to the system. Document your architecture When you start to move your workloads to the cloud or build your applications, lack of documentation about the system can be a major obstacle. 
Documentation is especially important for correctly visualizing the architecture of your current deployments. Quality documentation isn't achieved by producing a specific amount of documentation, but by how clear content is, how useful it is, and how it's maintained as the system changes. A properly documented cloud architecture establishes a common language and standards, which enable cross-functional teams to communicate and collaborate effectively. The documentation also provides the information that's necessary to identify and guide future design decisions. Documentation should be written with your use cases in mind, to provide context for the design decisions. Over time, your design decisions will evolve and change. The change history provides the context that your teams require to align initiatives, avoid duplication, and measure performance changes effectively over time. Change logs are particularly valuable when you onboard a new cloud architect who is not yet familiar with your current design, strategy, or history. Analysis by DORA has found a clear link between documentation quality and organizational performance — the organization's ability to meet their performance and profitability goals. Simplify your design and use fully managed services Simplicity is crucial for design. If your architecture is too complex to understand, it will be difficult to implement the design and manage it over time. Where feasible, use fully managed services to minimize the risks, time, and effort associated with managing and maintaining baseline systems. If you're already running your workloads in production, test with managed services to see how they might help to reduce operational complexities. If you're developing new workloads, then start simple, establish a minimal viable product (MVP), and resist the urge to over-engineer. You can identify exceptional use cases, iterate, and improve your systems incrementally over time. Decouple your architecture Research from DORA shows that architecture is an important predictor for achieving continuous delivery. Decoupling is a technique that's used to separate your applications and service components into smaller components that can operate independently. For example, you might separate a monolithic application stack into individual service components. In a loosely coupled architecture, an application can run its functions independently, regardless of the various dependencies. A decoupled architecture gives you increased flexibility to do the following: Apply independent upgrades. Enforce specific security controls. Establish reliability goals for each subsystem. Monitor health. Granularly control performance and cost parameters. You can start the decoupling process early in your design phase or incorporate it as part of your system upgrades as you scale. Use a stateless architecture A stateless architecture can increase both the reliability and scalability of your applications. Stateful applications rely on various dependencies to perform tasks, such as local caching of data. Stateful applications often require additional mechanisms to capture progress and restart gracefully. Stateless applications can perform tasks without significant local dependencies by using shared storage or cached services. A stateless architecture enables your applications to scale up quickly with minimum boot dependencies. The applications can withstand hard restarts, have lower downtime, and provide better performance for end users. 
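To make the stateless principle concrete, the following minimal sketch (an illustration, not part of the framework itself) shows a web handler that keeps session data in a shared Redis cache, such as Memorystore, instead of in process memory, so that any replica can serve any request and instances can be restarted freely. It assumes Flask, the redis Python package, and a placeholder cache address.

import json
import redis
from flask import Flask, request

app = Flask(__name__)

# Shared cache service; replicas of this application hold no local state.
cache = redis.Redis(host="10.0.0.3", port=6379)  # placeholder Memorystore address

@app.post("/cart/<user_id>")
def add_to_cart(user_id: str):
    item = request.get_json()
    key = f"cart:{user_id}"
    cart = json.loads(cache.get(key) or "[]")
    cart.append(item)
    # State lives in the shared cache with a TTL, not in this process, so the
    # instance can be scaled down or hard-restarted at any time.
    cache.setex(key, 3600, json.dumps(cart))
    return {"items": len(cart)}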
Send feedback \ No newline at end of file diff --git a/Overview(20).txt b/Overview(20).txt new file mode 100644 index 0000000000000000000000000000000000000000..9dd3c13cefd8b687f74de48322dde9aa587b4205 --- /dev/null +++ b/Overview(20).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hadoop/hadoop-gcp-migration-data +Date Scraped: 2025-02-23T11:52:37.493Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrating HDFS Data from On-Premises to Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-04-17 UTC This guide describes the process of moving data from on-premises Hadoop Distributed File System (HDFS) to Google Cloud. This is the second of four guides describing how to move from on-premises Hadoop: Migrating On-Premises Hadoop Infrastructure to Google Cloud provides an overview of the migration process, with particular emphasis on moving from large, persistent clusters to an ephemeral model. This guide, which focuses on moving your data to Google Cloud. Migrating data from HBase to Bigtable Migrating Hadoop Jobs from On-Premises to Dataproc describes the process of running your jobs on Dataproc and other Google Cloud products. Planning your data move The following sections describe best practices for planning your data migration from on-premises HDFS to Google Cloud. Plan to migrate incrementally so you can leave time to migrate jobs, experiment, and test after moving each body of data. Deciding how to move your data There are two different migration models you should consider for transferring HDFS data to the cloud: push and pull. Both models use Hadoop DistCp to copy data from your on-premises HDFS clusters to Cloud Storage, but they use different approaches. The push model is the simplest model: the source cluster runs the distcp jobs on its data nodes and pushes files directly to Cloud Storage, as shown in the following diagram: The pull model is more complex, but has some advantages. An ephemeral Dataproc cluster runs the distcp jobs on its data nodes, pulls files from the source cluster, and copies them to Cloud Storage, as shown in the following diagram: The push model is the simplest model because the source cluster can push data directly to Cloud Storage and you don't need to create extra compute resources to perform the copy. However, if you intend to continue using the source cluster during the migration for other regular data processing jobs, you should ensure that enough resources, such as CPU, RAM, and network bandwidth, are available on the source cluster to also perform the copy jobs. If the source cluster is already running at compute capacity, and if you cannot increase the resources on the source cluster to perform the copy, then you should consider using the pull model instead. While more complex than the push model, the pull model has a number of advantages: Impact on the source cluster's CPU and RAM resources is minimized, because the source nodes are used only for serving blocks out of the cluster. You can also fine-tune the specifications of the pull cluster's resources on Google Cloud to handle the copy jobs, and tear down the pull cluster when the migration is complete. Traffic on the source cluster's network is reduced, which allows for higher outbound bandwidths and faster transfers. 
There is no need to install the Cloud Storage connector on the source cluster as the ephemeral Dataproc cluster, which already has the connector installed, handles the data transfer to Cloud Storage. To understand the implications for network usage for both models, consider how Hadoop handles data replication in HDFS. Hadoop splits each file into multiple blocks — the block size is usually 128-256 megabytes. Hadoop replicates those blocks across multiple data nodes and across multiple racks to avoid losing data in the event of a data node failure or a rack failure. The standard configuration is to store 3 replicas of each block. Hadoop also employs "rack awareness", which ensures that 2 copies of each block are in different data nodes inside the same rack for lower latency, and a third copy in a different rack for increased redundancy and availability. This replication logic is summarized in the following diagram: When copying a file using the push model, that is, when the distcp map task runs on a data node of the source cluster, all of the file's blocks are first collected from the various data nodes. The file's blocks are then pushed out of the source cluster and over to Cloud Storage. Traffic on the network could take up to nearly twice the file's total size, as illustrated in the following diagram: When you copy a file using the pull model (that is, when the distcp map task runs on a data node of the pull cluster in Google Cloud), each block travels over the network only once by exiting the source cluster directly. The overall network traffic is limited to the file's total size, as illustrated in the following diagram: When you use the pull model, you should monitor the number of distcp map tasks and bandwidth used to avoid overwhelming the source cluster with too many parallel connections. Deciding where to move your data The end result of your Hadoop migration can be a cloud-native solution or a hybrid solution. The difference between these is whether your system will retain any on-premises components. In a cloud-native solution, you house your data in the cloud and run jobs against it there. In a hybrid solution, some of your data remains on-premises. You might run jobs against that data on-premises as well, or you might run jobs in the cloud that work with on-premises data. A cloud-native solution is the easiest to maintain, but you might have business or technical requirements that keep some data or processing on-premises. Every hybrid solution is highly case-dependent, including its own mix of technologies and services to meet the needs of your workload. Moving data to products other than Cloud Storage Move most of your data to Cloud Storage. However, there are some cases where you might consider moving data to a different Google Cloud product: If you are migrating data from Apache HBase, consider moving it to Bigtable, which provides equivalent features. If you are migrating data from Apache Impala, consider moving it to BigQuery, which provides equivalent features. You might have data in HBase or Impala that you can use without storing it in Bigtable or BigQuery. If your job doesn't require the features of Bigtable or BigQuery, store the data in Cloud Storage. Planning data locations with permissions in mind Google Cloud doesn't use the same fine-grained permissions for files that you can achieve with HDFS on-premises. The least complicated approach to file permissions is to set them at the level of each Cloud Storage bucket. 
If you’ve set different permissions for different sets of HDFS data, consider creating different buckets in Cloud Storage that each have different permissions. Then put the HDFS data into the bucket that has the proper permissions for that data. If you move files to a structure that's different in Cloud Storage than it is in HDFS, remember to keep track of all of the changes. When you move your jobs to Dataproc you'll need to provide the right paths to your data in its new locations. Planning an incremental move Plan on moving your data in discrete chunks over time. Schedule plenty of time to move the corresponding jobs to the cloud between data moves. Start your migration by moving low-priority data, such as backups and archives. Splitting your data If you plan to move your data incrementally, you must split your data into multiple parts. The following sections describe the most common strategies for splitting datasets. Splitting data by timestamp A common approach to splitting data for an incremental move is to store older data in the cloud, while keeping your new data in HDFS in your on-premises system. This enables you to test new and migrated jobs on older, less critical data. Splitting your data in this way enables you to start your move with small amounts of data. Important considerations: Can you split your data using another method in addition to splitting by time? You can get a more targeted set of data by splitting data by the jobs it supports or the organization that owns it and then splitting it further by time. Should you use an absolute date/time or a relative date/time? Sometimes it makes sense to split data at a point in time, such as archiving all data generated before an important change in your system. Using an absolute date/time is often appropriate if you want to create backfilling jobs to test your system in the cloud to see if you can process old data to bring it up to your current standards. In other cases, you might want to move data to the cloud based on a timestamp relative to the current date. For example, you might move all data that was created more than a year ago, or all data that hasn't been edited in the last three months. What date/time value are you using to make the decision? Files often have multiple date/time values. Sometimes the file creation date is the most important. Other times you might want to use the last edited time, or another timestamp from the file's metadata. Does all of your data have the same timestamp format? There are many ways to handle timestamps. If your data comes from more than one source, it's possible that the timestamps are stored in different formats. Ensure that you have consistent timestamps before using them to split your data. Splitting data by jobs Sometimes you can split your data by looking at the jobs that use it. This can be especially helpful if you are migrating jobs incrementally. You can move just the data that is used by the jobs that you move. Important considerations: Are the jobs you are moving to the cloud self-contained? Splitting by jobs is a great option for jobs that work on self-contained units of data, but can become hard to manage if the jobs work with data that is spread out across your system. Is the data likely to have the same uses in the future? Think carefully before isolating data if you might use it in different ways. Is the job migration scoped appropriately to result in manageable chunks of data? You can also use broader functional criteria to split data instead of just sets of jobs. 
For example, you could run all of your ETL work in the cloud and then run analysis and reporting jobs on-premises. Splitting data by ownership In some cases, you can split your data by areas of ownership. One advantage of taking this approach is that your data organization aligns with the organization of your business. Another advantage is that it allows you to leverage Google Cloud role-based access management. For example, migrating data used by a particular business role to an isolated Cloud Storage bucket makes it easier to set up permissions. Important considerations: Are the boundaries between owners clear? It's usually clear who the primary owner of a given data item is, but sometimes the people who most often access the data are not the owners. Are there other splitting criteria you can combine with ownership? You might end up with very large datasets after splitting by ownership. It can be useful to narrow things down even more by splitting the data again by task or by time. Keeping your data synchronized in a hybrid solution One of the challenges of using a hybrid solution is that sometimes a job needs to access data from both Google Cloud and from on-premises systems. If a dataset must be accessed in both environments, you need to establish a primary storage location for it in one environment and maintain a synchronized copy in the other. The challenges of synchronizing data are similar, regardless of the primary location you choose. You need a way to identify when data has changed and a mechanism to propagate changes to the corresponding copies. If there is a potential for conflicting changes, you also need to develop a strategy for resolving them. Important considerations: Always make copies of data read-only if possible. Your synchronization strategy becomes more complex with every potential source of new edits. In a hybrid solution, avoid maintaining more than two copies of data. Keep only one copy on premises and only one in Google Cloud. You can use technologies such as Apache Airflow to help manage your data synchronization. Moving your data The following sections outline considerations for moving your data to Google Cloud. Configuring access to your data File permissions work differently on Cloud Storage than they do for HDFS. Before you move your data to Cloud Storage, you need to become familiar with Identity and Access Management (IAM). The easiest way to handle access control is to sort data into Cloud Storage buckets based on access requirements and configure your authorization policy at the bucket level. You can assign roles that grant access to individual users or to groups. Grant access to groups, and then assign users to groups as needed. You need to make data access decisions as you import and structure your data in Cloud Storage. Every Google Cloud product has its own permissions and roles. Make sure you understand the relationships between Cloud Storage access controls and those for Dataproc. Evaluate the policies for each product separately. Don't assume that the roles and permissions map directly between products. Familiarize yourself with the following documentation to prepare for making policy decisions for your cloud-based Hadoop system: Overview of IAM for Cloud Storage List of Dataproc permissions and IAM roles If you need to assign more granular permissions to individual files, Cloud Storage supports access control lists (ACLs). However, IAM is the strongly preferred option. Only use ACLs if your permissions are particularly complex. 
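To make the bucket-level approach concrete, the following is a minimal sketch that creates one bucket for a single set of HDFS data, grants a group access to it, and then runs a throttled pull-model copy of the kind described in the next section. The bucket name, location, group address, dataset path, and the -m and -bandwidth values are illustrative placeholders rather than values from this guide; adjust them to your environment.

# Create a bucket for one set of HDFS data. Uniform bucket-level access means
# that permissions are managed only through IAM on the bucket, not per-object ACLs.
gcloud storage buckets create gs://example-finance-raw-data \
    --location=us-central1 \
    --uniform-bucket-level-access

# Grant the group that owns this data access to objects in the bucket.
gcloud storage buckets add-iam-policy-binding gs://example-finance-raw-data \
    --member=group:finance-data-eng@example.com \
    --role=roles/storage.objectAdmin

# From the pull cluster's master instance, copy one dataset while limiting the
# number of map tasks and the per-map bandwidth (in MB/s) so that the copy
# doesn't overwhelm the source cluster.
hadoop distcp -m 20 -bandwidth 100 \
    hdfs://nn1:8020/finance/20170202/ gs://example-finance-raw-data/20170202/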
Using DistCp to copy your data to Cloud Storage Because Cloud Storage is a Hadoop compatible file system, you can use Hadoop DistCp to move your data from your on-premises HDFS to Cloud Storage. You can move data several ways using DistCp. We recommend this way: Establish a private link between your on-premises network and Google's network using Cloud Interconnect or Cloud VPN. Create a Dataproc cluster to use for the data transfer. Use the Google Cloud CLI to connect to your cluster's master instance. For example: gcloud compute ssh [CLUSTER_NAME]-m Where CLUSTER_NAME is the name of the Dataproc cluster you created for the job. The suffix -m identifies the master instance. On the cluster's master instance, run DistCp commands to move the data. The actual DistCp commands you need to move your data are similar to the following: hadoop distcp hdfs://nn1:8020/20170202/ gs://bucket/20170202/ In this example nn1 and 8020 are the namenode and port where your source data is stored, and bucket is the name of the Cloud Storage bucket that you are copying the file to. Cloud Storage traffic is encrypted by default with Transport Layer Security (TLS). Validating data transfers When you're copying or moving data between distinct storage systems such as multiple HDFS clusters or between HDFS and Cloud Storage, it's a good idea to perform some type of validation to guarantee data integrity. This validation is essential to be sure data wasn't altered during transfer. For more details, refer to the guide on Validating data transfers. Ramping up the request rate Cloud Storage maintains an index of object keys for each bucket in order to provide consistent object listing. This index is stored in lexicographical order and is updated whenever objects are written to or deleted from a bucket. Adding and deleting objects whose keys all exist in a small range of the index naturally increases the chances of contention. Cloud Storage detects such contention and automatically redistributes the load on the affected index range across multiple servers. If you're writing objects under a new prefix and anticipate that you will get to a rate greater than 1000 write requests per second, you should ramp up the request rate gradually, as described in the Cloud Storage documentation. Not doing so may result in temporarily higher latency and error rates. Improving data migration speed The simplest way to transfer data from your on-premises clusters to Google Cloud is to use a VPN tunnel over the public internet. If a single VPN tunnel doesn't provide the necessary throughput, multiple VPN tunnels might be created and Google Cloud will automatically distribute traffic across tunnels to provide additional bandwidth. Sometimes even multiple VPN tunnels don't provide sufficient bandwidth or connection stability to meet the needs of your migration. Consider the following approaches to improve throughput: Use direct peering between your network and Google's edge points of presence (PoPs). This option reduces the number of hops between your network and Google Cloud. Use a Cloud Interconnect service provider to establish a direct connection to Google's network. The service details vary for different partners. Most offer an SLA for network availability and performance. Contact a service provider directly to learn more. Working with Google Cloud partners Google Cloud works with a variety of partners that can assist you in migrating your data. 
Check out the partners working with Cloud Storage for more information about services available to help you with your data migration. The available services and terms vary by provider, so work with them directly to get details. What's next Check out the other parts of the Hadoop migration guide: Overview Job migration guide Learn more about Cloud Storage. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Overview(21).txt b/Overview(21).txt new file mode 100644 index 0000000000000000000000000000000000000000..b4886bc74d47aa05c8fc23e32c8ca6f49f1e63a3 --- /dev/null +++ b/Overview(21).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/logging-and-monitoring-on-premises-resources-with-bindplane +Date Scraped: 2025-02-23T11:53:26.731Z + +Content: +Home Docs Cloud Architecture Center Send feedback Log and monitor on-premises resources with BindPlane Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-02 UTC Cloud Logging and Cloud Monitoring provide advanced logging and monitoring services for Google Cloud. Cloud Logging and Cloud Monitoring support logging and monitoring for Google Cloud and Amazon Web Services (AWS), and support logging and monitoring for hybrid and on-premises resources with BindPlane by observIQ. This solution describes the considerations and design patterns for using Logging, Monitoring, and BindPlane to provide logging and monitoring services for on-premises resources. This series is intended for people in DevOps who are interested in logging or monitoring their hybrid or on-premises resources. Using Logging, Monitoring, and BindPlane can provide a powerful way to consolidate your logging and monitoring either as a part of a transition to Google Cloud or as part of a strategy to centralize logging and monitoring for your apps and infrastructure. For example, you might want to use a single tool for your logging and monitoring needs to simplify work for your DevOps and SRE teams, to reduce costs, or to standardize tooling. Or, you might want to monitor your on-premises resources through Logging and Monitoring if you've deployed Google Cloud apps and infrastructure and need to monitor both your Google Cloud and your remaining on-premises apps and infrastructure. This series consists of the following two parts: Log on-premises resources with BindPlane: Read about how Cloud Logging supports logging from on-premises resources. Monitor on-premises resources with BindPlane: Read about how Cloud Monitoring supports monitoring of on-premises resources. Send feedback \ No newline at end of file diff --git a/Overview(22).txt b/Overview(22).txt new file mode 100644 index 0000000000000000000000000000000000000000..9a7ca80bcaee9a4aacb51dc23f248ca24bc17b5e --- /dev/null +++ b/Overview(22).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/dr-scenarios-planning-guide +Date Scraped: 2025-02-23T11:54:27.853Z + +Content: +Home Docs Cloud Architecture Center Send feedback Disaster recovery planning guide Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-05 UTC This document is the first part of a series that discusses disaster recovery (DR) in Google Cloud. This part provides an overview of the DR planning process: what you need to know in order to design and implement a DR plan. 
Subsequent parts discuss specific DR use cases with example implementations on Google Cloud. The series consists of the following parts: Disaster recovery planning guide (this document) Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Service-interrupting events can happen at any time. Your network could have an outage, your latest application push might introduce a critical bug, or you might have to contend with a natural disaster. When things go awry, it's important to have a robust, targeted, and well-tested DR plan. With a well-designed, well-tested DR plan in place, you can make sure that if catastrophe hits, the impact on your business's bottom line will be minimal. No matter what your DR needs look like, Google Cloud has a robust, flexible, and cost-effective selection of products and features that you can use to build or augment the solution that is right for you. Basics of DR planning DR is a subset of business continuity planning. DR planning begins with a business impact analysis that defines two key metrics: A recovery time objective (RTO), which is the maximum acceptable length of time that your application can be offline. This value is usually defined as part of a larger service level agreement (SLA). A recovery point objective (RPO), which is the maximum acceptable length of time during which data might be lost from your application due to a major incident. This metric varies based on the ways that the data is used. For example, user data that's frequently modified could have an RPO of just a few minutes. In contrast, less critical, infrequently modified data could have an RPO of several hours. (This metric describes only the length of time; it doesn't address the amount or quality of the data that's lost.) Typically, the smaller your RTO and RPO values are (that is, the faster your application must recover from an interruption), the more your application will cost to run. The following graph shows the ratio of cost to RTO/RPO. Because smaller RTO and RPO values often mean greater complexity, the associated administrative overhead follows a similar curve. A high-availability application might require you to manage distribution between two physically separated data centers, manage replication, and more. RTO and RPO values typically roll up into another metric: the service level objective (SLO), which is a key measurable element of an SLA. SLAs and SLOs are often conflated. An SLA is the entire agreement that specifies what service is to be provided, how it is supported, times, locations, costs, performance, penalties, and responsibilities of the parties involved. SLOs are specific, measurable characteristics of the SLA, such as availability, throughput, frequency, response time, or quality. An SLA can contain many SLOs. RTOs and RPOs are measurable and should be considered SLOs. You can read more about SLOs and SLAs in the Google Site Reliability Engineering book. You might also be planning an architecture for high availability (HA). HA doesn't entirely overlap with DR, but it's often necessary to take HA into account when you're thinking about RTO and RPO values. HA helps to ensure an agreed level of operational performance, usually uptime, for a higher than normal period. 
When you run production workloads on Google Cloud, you might use a globally distributed system so that if something goes wrong in one region, the application continues to provide service even if it's less widely available. In essence, that application invokes its DR plan. Why Google Cloud? Google Cloud can greatly reduce the costs that are associated with both RTO and RPO when compared to fulfilling RTO and RPO requirements on premises. For example, DR planning requires you to account for a number of requirements, including the following: Capacity: securing enough resources to scale as needed. Security: providing physical security to protect assets. Network infrastructure: including software components such as firewalls and load balancers. Support: making available skilled technicians to perform maintenance and to address issues. Bandwidth: planning suitable bandwidth for peak load. Facilities: ensuring physical infrastructure, including equipment and power. By providing a highly managed solution on a world-class production platform, Google Cloud helps you bypass most or all of these complicating factors, removing many business costs in the process. In addition, Google Cloud's focus on administrative simplicity means that the costs of managing a complex application are reduced as well. Google Cloud offers several features that are relevant to DR planning, including the following: A global network. Google has one of the largest and most advanced computer networks in the world. The Google backbone network uses advanced software-defined networking and edge-caching services to deliver fast, consistent, and scalable performance. Redundancy. Multiple points of presence (PoPs) across the globe mean strong redundancy. Your data is mirrored automatically across storage devices in multiple locations. Scalability. Google Cloud is designed to scale like other Google products (for example, search and Gmail), even when you experience a huge traffic spike. Managed services such as Cloud Run, Compute Engine, and Firestore give you automatic scaling that enables your application to grow and shrink as needed. Security. The Google security model is built on decades of experience with helping to keep customers safe on Google applications like Gmail and Google Workspace. In addition, the site reliability engineering teams at Google help ensure high availability and help prevent abuse of platform resources. Compliance. Google undergoes regular independent third-party audits to verify that Google Cloud is in alignment with security, privacy, and compliance regulations and best practices. Google Cloud complies with certifications such as ISO 27001, SOC 2/3, and PCI DSS 3.0. DR patterns DR patterns are considered to be cold, warm, or hot. These patterns indicate how readily the system can recover when something goes wrong. An analogy might be what you would do if you were driving and punctured a car tire. How you deal with a flat tire depends on how prepared you are: Cold: You have no spare tire, so you must call someone to come to you with a new tire and replace it. Your trip stops until help arrives to make the repair. Warm: You have a spare tire and a replacement kit, so you can get back on the road using what you have in your car. However, you must stop your journey to repair the problem. Hot: You have run-flat tires. You might need to slow down a little, but there is no immediate impact on your journey. Your tires run well enough that you can continue (although you must eventually address the issue). 
Creating a detailed DR plan This section provides recommendations for how to create your DR plan. Design according to your recovery goals When you design your DR plan, you need to combine your application and data recovery techniques and look at the bigger picture. The typical way to do this is to look at your RTO and RPO values and which DR pattern you can adopt to meet those values. For example, in the case of historical compliance-oriented data, you probably don't need speedy access to the data, so a large RTO value and cold DR pattern is appropriate. However, if your online service experiences an interruption, you'll want to be able to recover both the data and the user-facing part of the application as quickly as possible. In that case, a hot pattern would be more appropriate. Your email notification system, which typically isn't business critical, is probably a candidate for a warm pattern. For guidance on using Google Cloud to address common DR scenarios, review the application recovery scenarios. These scenarios provide targeted DR strategies for a variety of use cases and offer example implementations on Google Cloud for each. Design for end-to-end recovery It isn't enough just to have a plan for backing up or archiving your data. Make sure your DR plan addresses the full recovery process, from backup to restore to cleanup. We discuss this in the related documents about DR data and recovery. Make your tasks specific When it's time to run your DR plan, you don't want to be stuck guessing what each step means. Make each task in your DR plan consist of one or more concrete, unambiguous commands or actions. For example, "Run the restore script" is too general. In contrast, "Open a shell and run /home/example/restore.sh" is precise and concrete. Implementing control measures Add controls to prevent disasters from occurring and to detect issues before they occur. For example, add a monitor that sends an alert when a data-destructive flow, such as a deletion pipeline, exhibits unexpected spikes or other unusual activity. This monitor could also terminate the pipeline processes if a certain deletion threshold is reached, preventing a catastrophic situation. Preparing your software Part of your DR planning is to make sure that the software you rely on is ready for a recovery event. Verify that you can install your software Make sure that your application software can be installed from source or from a preconfigured image. Make sure that you are appropriately licensed for any software that you will be deploying on Google Cloud—check with the supplier of the software for guidance. Make sure that needed Compute Engine resources are available in the recovery environment. This might require preallocating instances or reserving them. Design continuous deployment for recovery Your continuous deployment (CD) toolset is an integral component when you are deploying your applications. As part of your recovery plan, you must consider where in your recovered environment you will deploy artifacts. Plan where you want to host your CD environment and artifacts—they need to be available and operational in the event of a disaster. Implementing security and compliance controls When you design a DR plan, security is important. The same controls that you have in your production environment must apply to your recovered environment. Compliance regulations will also apply to your recovered environment. 
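As one concrete illustration of the control measures described earlier in this section (an alert on unexpected activity in a data-destructive flow), the following sketch creates a Cloud Monitoring alert policy from a file. The policy name, the log-based metric in the filter, and the threshold are assumptions for illustration only, and depending on your gcloud version the policies command might be available only in the alpha component.

# Define an alert policy that fires when a hypothetical log-based metric for the
# deletion pipeline exceeds an expected ceiling for 5 minutes.
cat > deletion-spike-policy.yaml <<'EOF'
displayName: "Deletion pipeline unexpected spike"
combiner: OR
conditions:
  - displayName: "Delete requests above threshold"
    conditionThreshold:
      filter: 'metric.type="logging.googleapis.com/user/deletion_requests" AND resource.type="global"'
      comparison: COMPARISON_GT
      thresholdValue: 1000
      duration: 300s
      aggregations:
        - alignmentPeriod: 60s
          perSeriesAligner: ALIGN_SUM
EOF

# Create the alert policy from the file.
gcloud alpha monitoring policies create --policy-from-file=deletion-spike-policy.yaml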
Configure security the same for the DR and production environments Make sure that your network controls provide the same separation and blocking that the source production environment uses. Learn how to configure Shared VPC and firewalls to let you establish centralized networking and security control of your deployment, to configure subnets, and to control inbound and outbound traffic. Understand how to use service accounts to implement least privilege for applications that access Google Cloud APIs. Make sure to use service accounts as part of the firewall rules. Make sure that you grant users the same access to the DR environment that they have in the source production environment. The following list outlines ways to synchronize permissions between environments: If your production environment is Google Cloud, replicating IAM policies in the DR environment is straightforward. You can use infrastructure as code (IaC) tools like Terraform to deploy your IAM policies to production. You then use the same tools to bind the policies to corresponding resources in the DR environment as part of the process of standing up your DR environment. If your production environment is on-premises, you map the functional roles, such as your network administrator and auditor roles, to IAM policies that have the appropriate IAM roles. The IAM documentation has some example functional role configurations—for example, see the documentation for creating networking and audit logging functional roles. You have to configure IAM policies to grant appropriate permissions to products. For example, you might want to restrict access to specific Cloud Storage buckets. If your production environment is another cloud provider, map the permissions in the other provider's IAM policies to Google Cloud IAM policies. Verify your DR security After you've configured permissions for the DR environment, make sure that you test everything. Create a test environment. Verify that the permissions that you grant to users match those that the users have on-premises. Make sure users can access the DR environment Don't wait for a disaster to occur before checking that your users can access the DR environment. Make sure that you have granted appropriate access rights to users, developers, operators, data scientists, security administrators, network administrators, and any other roles in your organization. If you are using an alternative identity system, make sure that accounts have been synced with your Cloud Identity account. Because the DR environment will be your production environment for a while, get your users who will need access to the DR environment to sign in, and resolve any authentication issues. Incorporate users who are logging in to the DR environment as part of the regular DR tests that you implement. To centrally manage who has administrative access to virtual machines (VMs) that are launched, enable the OS login feature on the Google Cloud projects that constitute your DR environment. Train users Users need to understand how to undertake the actions in Google Cloud that they're used to accomplishing in the production environment, such as logging in and accessing VMs. Using the test environment, train your users how to perform these tasks in ways that safeguard your system's security. Make sure that the DR environment meets compliance requirements Verify that access to your DR environment is restricted to only those who need access. Make sure that PII data is redacted and encrypted. 
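To illustrate the earlier points in this section about mirroring IAM bindings into the DR environment and enabling OS Login, here is a minimal gcloud sketch. The project ID, group address, and role are placeholders; for repeatable deployments, the IaC approach described above (for example, Terraform) is preferable to one-off commands.

# Grant the same group the same role in the DR project that it holds in production.
gcloud projects add-iam-policy-binding example-dr-project \
    --member=group:network-admins@example.com \
    --role=roles/compute.networkAdmin

# Require OS Login for all VMs in the DR project so that administrative access
# to instances is managed centrally through IAM.
gcloud compute project-info add-metadata \
    --project=example-dr-project \
    --metadata=enable-oslogin=TRUE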
If you perform regular penetration tests on your production environment, you should include your DR environment as part of that scope and carry out regular tests by standing up a DR environment. Make sure that while your DR environment is in service, any logs that you collect are backfilled into the log archive of your production environment. Similarly, make sure that as part of your DR environment, you can export audit logs that are collected through Cloud Logging to your main log sink archive. Use the export sink facilities. For application logs, create a mirror of your on-premises logging and monitoring environment. If your production environment is another cloud provider, map that provider's logging and monitoring to the equivalent Google Cloud services. Have a process in place to format input into your production environment. Treat recovered data like production data Make sure that the security controls that you apply to your production data also apply to your recovered data: the same permissions, encryption, and audit requirements should all apply. Know where your backups are located and who is authorized to restore data. Make sure your recovery process is auditable—after a disaster recovery, make sure you can show who had access to the backup data and who performed the recovery. Making sure your DR plan works Make sure that if a disaster does occur, your DR plan works as intended. Maintain more than one data recovery path In the event of a disaster, your connection method to Google Cloud might become unavailable. Implement an alternative means of access to Google Cloud to help ensure that you can transfer data to Google Cloud. Regularly test that the backup path is operational. Test your plan regularly After you create a DR plan, test it regularly, noting any issues that come up and adjusting your plan accordingly. Using Google Cloud, you can test recovery scenarios at minimal cost. We recommend that you implement the following to help with your testing: Automate infrastructure provisioning. You can use IaC tools like Terraform to automate the provisioning of your Google Cloud infrastructure. If you're running your production environment on premises, make sure that you have a monitoring process that can start the DR process when it detects a failure and can trigger the appropriate recovery actions. Monitor your environments with Google Cloud Observability. Google Cloud has excellent logging and monitoring tools that you can access through API calls, allowing you to automate the deployment of recovery scenarios by reacting to metrics. When you're designing tests, make sure that you have appropriate monitoring and alerting in place that can trigger appropriate recovery actions. Perform the testing noted earlier: Test that permissions and user access work in the DR environment like they do in the production environment. Perform penetration testing on your DR environment. Perform a test in which your usual access path to Google Cloud doesn't work. What's next? Read about Google Cloud geography and regions. Read other documents in this DR series: Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
Contributors Authors: Grace Mollison | Solutions Lead, Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Overview(23).txt b/Overview(23).txt new file mode 100644 index 0000000000000000000000000000000000000000..694d2735dac12e1144fa5b85c5a4af2ea75d415d --- /dev/null +++ b/Overview(23).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity +Date Scraped: 2025-02-23T11:54:59.857Z + +Content: +Home Docs Cloud Architecture Center Send feedback Overview of identity and access management Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC Identity and access management (generally referred to as IAM) is the practice of granting the right individuals access to the right resources for the right reasons. This series explores the general practice of IAM and the individuals who are subject to it, including the following: Corporate identities: The identities that you manage for employees of your organization. These identities are used for signing in to workstations, accessing email, or using corporate applications. Corporate identities might also include non-employees such as contractors or partners that need access to corporate resources. Customer identities: The identities that you manage for users in order to interact with your website or customer-facing applications. Service identities: The identities that you manage in order to enable applications to interact with other applications or the underlying platform. You might need to grant access to the following resources: Google services such as Google Cloud, Google Analytics, or Google Workspace Resources in Google Cloud, such as projects, Cloud Storage buckets, or virtual machines (VMs) Custom applications or resources managed by such applications The guides in this series break down the discussion of IAM into the following parts: Managing corporate, customer, and service identities forms the foundation of IAM. These topics are boxes 4, 5, and 6 (in green). Relying on identity management as the foundation, boxes 2 and 3 (in blue) denote access management topics. These topics include managing access to Google services, to Google Cloud resources, and to your custom workloads and applications. Box 1 (in yellow) indicates access management topics that are beyond the scope of these guides. To learn about access management for Google Workspace, Google Marketing Platform, and other services, see the individual product documentation. Identity management Identity management focuses on the following processes: Provisioning, managing, migrating, and deprovisioning identities, users, and groups. Enabling secure authentication to Google services and to your custom workloads. The processes and technologies differ depending on whether you are dealing with corporate identities, application identities, or customer identities. Manage corporate identities Corporate identities are the identities that you manage for your organization's employees. Employees use these identities for signing in to workstations, accessing email, or using corporate applications. In the context of managing corporate identities, the following are typical requirements: Maintaining a single place to manage identities across your organization. Enabling employees to use a single identity and single sign-on across multiple applications in a hybrid computing environment. Enforcing policies such as multi-factor authentication or password complexity for all employees. 
Meeting compliance criteria that might apply to your business. Google Workspace and Cloud Identity are Google's products that let you address these requirements and centrally manage identities and policies. If you use Google services in a hybrid or multi-cloud context, addressing these requirements might require that you integrate Google's IAM capabilities with external identity management solutions or identity providers such as Active Directory. The Reference architectures document explains how Google Workspace or Cloud Identity lets you realize such an integration. Some of your employees might rely on Gmail accounts or other consumer user accounts to access corporate resources. Using these types of user accounts might not comply with your individual requirements or policies, however, so you can migrate these users to Google Workspace or Cloud Identity. For more details, see Assessing your existing user accounts and Assessing onboarding plans. To help you adopt Google Workspace or Cloud Identity, see our assessment and planning guides for guidance on how to assess your requirements and how to approach the adoption process. Manage application identities Application identities are the identities that you manage in order to let applications interact with other applications or with the underlying platform. In the context of managing application identities, the following are typical requirements: Integrating with third-party APIs and authentication solutions. Enabling authentication across environments in a hybrid or multi-cloud scenario. Preventing leakage of credentials. Google Cloud lets you manage application identities, and address these requirements, by using Google Cloud service accounts and Kubernetes service accounts. For more information about service accounts and best practices for using them, see Understanding service accounts. Manage customer identities Customer identities are the identities that you manage for users to let them interact with your website or customer-facing applications. Managing customer identities and their access is also referred to as customer identity and access management (CIAM). In the context of managing customer identities, the following are typical requirements: Letting customers sign up for a new account but guarding against abuse, which might include detecting and blocking the creation of bot accounts. Supporting social sign-on and integrating with third-party identity providers. Supporting multi-factor authentication and enforcing password complexity requirements. Google's Identity Platform lets you manage customer identities and address these requirements. For more details on the feature set and how to integrate Identity Platform with your custom applications, see the Identity Platform documentation. Access management Access management focuses on the following processes: Granting or revoking access to specific resources for identities. Managing roles and permissions. Delegating administrative capabilities to trusted individuals. Enforcing access control. Auditing accesses that are performed by identities. Manage access to Google services Your organization might rely on a combination of Google services. For example, you might use Google Workspace for collaboration, Google Cloud for deploying custom workloads, and Google Analytics for measuring advertising success metrics. Google Workspace or Cloud Identity lets you centrally control which corporate identities can use which Google services. 
By restricting access to certain services, you establish a base level of access control. You can then use the access management capabilities of the individual services to configure finer-grained access control. For more details, read about how to control who can access Google Workspace and Google services. Manage access to Google Cloud In Google Cloud, you can use IAM to grant corporate identities granular access to specific resources. By using IAM, you can implement the security principle of least privilege, where you grant these identities permissions to access only the resources that you specify. For more information, see the IAM documentation. Manage access to your workloads and applications Your custom workloads and applications might differ based on the audience they are intended for: Some workloads might cater to corporate users—for example, internal line-of-business applications, dashboards, or content management systems. Other applications might cater to your customers—for example, your website, a customer self-service portal, or backends for mobile applications. The right way to manage access, enforce access control, and audit access depends on the audience and the way you deploy the application. To learn more about how to protect applications and other workloads that cater to corporate users, see the IAP documentation. You can also directly integrate Sign-In With Google or use standard protocols such as OAuth 2.0 or OpenID Connect. You can find out how to enforce access to APIs in Istio and Cloud Endpoints documentation. You can use both products whether your applications cater to corporate users or to end users. What's next Understand the concepts and capabilities of identity management by reading the Concepts section. Learn about prescriptive guidance to consider in your architecture or design by reading the Best practices section. Learn how to assess your requirements and identify a suitable design by reading the Assess and plan section. Send feedback \ No newline at end of file diff --git a/Overview(24).txt b/Overview(24).txt new file mode 100644 index 0000000000000000000000000000000000000000..f1ebbb35ecda4a8dabbf3b84af9c549b2247d807 --- /dev/null +++ b/Overview(24).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/overview-consolidating-accounts +Date Scraped: 2025-02-23T11:55:49.120Z + +Content: +Home Docs Cloud Architecture Center Send feedback Overview of consolidating accounts Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC If your organization isn't already using Cloud Identity or Google Workspace, some of your employees might be using consumer accounts to access Google services. A consumer account is owned and managed by the individual who created the account. Your organization therefore has no control over the configuration, security, and lifecycle of these consumer accounts. This document describes how to consolidate existing consumer accounts so that you achieve the following results: Only managed user accounts are used to access Google services. Your organization has full control over the configuration, security, and lifecycle of user accounts. If you use an external IdP, all user accounts have a matching identity in your external identity provider (IdP) and can be used for single sign-on. 
Before you begin Before you consolidate your consumer accounts, make sure that you identify a suitable onboarding plan and complete the prerequisites for consolidating your existing user accounts. When you consolidate existing user accounts, you might need to collaborate with multiple teams and stakeholders in your organization, including the following: Administrators of your external IdP, if you use one. Administrators of your email system. Users responsible for managing access to Google services used in your organization, such as Google Marketing Platform, Google Ads, or Google Play. If you use separate Cloud Identity or Google Workspace organizations for staging and production, we recommend that you perform a test run of the consolidation process first: For each class of existing consumer accounts that you need to consolidate, create a test user account that uses a similar configuration. When you assign email addresses to these test user accounts, choose email addresses that match one of the domains of your staging account. Perform the consolidation process by using the test user accounts and your staging Google Workspace or Cloud Identity account. Performing a test run lets you familiarize yourself with the process before you apply it in your production environment. It also helps you identify potential issues before they affect thousands of users. Consolidation process The consolidation process consists of the following streams: Migrating consumer accounts to Cloud Identity or Google Workspace. Evicting consumer accounts that you don't want to keep. Identifying and removing access for Gmail accounts. Sanitizing Gmail accounts that use a corporate email address as an alternate address. Depending on the sets of existing accounts that you have identified, some of these streams might not apply to you. The following flow chart illustrates the consolidation process. The streams, indicated by parallel lines, are independent of one another so you can do them in parallel. The diagram shows this flow: Identify a set of consumer accounts to migrate. If you have a large number of consumer accounts, it's best to do the migration in batches. Start with a small batch of approximately 10 users, and then make your batches larger in subsequent migrations. Announce to affected users your intent to transfer consumer accounts. Make sure that users understand both the importance and consequences of accepting or declining a transfer request. For an example of what an announcement email message might look like, see Advance communication for user account migration. Migrate the selected consumer accounts by using the transfer tool for unmanaged users. This process is described in more detail in Migrating consumer accounts. Wait for most of the users (a quorum) to accept or decline transfer requests, and resend transfer requests if necessary. You can see whether a user has responded by looking at the transfer tool for unmanaged users. If you're using an external IdP, some of the migrated user accounts might end up without a matching identity in the external IdP. Reconcile these orphaned managed user accounts to ensure that all managed user accounts have a matching identity in the external IdP. Evict all consumer accounts that you don't want to migrate. Search your Identity and Access Management (IAM) policies for Gmail accounts (search for *@gmail.com entries). Revoke access to these accounts and provide affected users with managed user accounts as replacements. 
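As a starting point for this step, you can list IAM bindings that reference gmail.com accounts by using Cloud Asset Inventory. The organization ID below is a placeholder, and the caller needs permission to search IAM policies at that scope (for example, through the Cloud Asset Viewer role).

# Find IAM policies across the organization that reference gmail.com accounts.
# Replace 123456789012 with your organization ID.
gcloud asset search-all-iam-policies \
    --scope=organizations/123456789012 \
    --query='policy:"gmail.com"'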
In order to minimize impact on users, make sure that these managed user accounts have the same or similar access to resources as previous Gmail accounts. If there are Gmail accounts that use a corporate email address as their alternate email address, sanitize these Gmail accounts. Best practices We recommend the following best practices when you are consolidating existing user accounts: If you are migrating from an external email system to Google Workspace, remember that consumer accounts might use an email address that is also subject to migration. To ensure that the owners of these consumer accounts continue to receive email, don't change DNS MX records until after you migrate all affected consumer accounts. After you complete the consolidation, consider provisioning all users and limiting authentication by single sign-on to block new consumer account sign-ups. What's next Find out how to migrate consumer accounts and how to evict unwanted consumer accounts. Learn how you can sanitize Gmail accounts. See how to reconcile orphaned managed user accounts. Send feedback \ No newline at end of file diff --git a/Overview(25).txt b/Overview(25).txt new file mode 100644 index 0000000000000000000000000000000000000000..cd483d33a547ea43dfe7f1c4d584a10a520685a5 --- /dev/null +++ b/Overview(25).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/mitigating-ransomware-attacks +Date Scraped: 2025-02-23T11:56:44.910Z + +Content: +Home Docs Cloud Architecture Center Send feedback Mitigating ransomware attacks using Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2021-11-15 UTC Code created by a third party to infiltrate your systems to hijack, encrypt, and steal data is referred to as ransomware. To help you mitigate ransomware attacks, Google Cloud provides you with controls for identifying, protecting, detecting, responding, and recovering from attacks. These controls help you accomplish the following: Assess your risk. Protect your business from threats. Maintain continuous operations. Enable rapid response and recovery. This document is part of a series that is intended for security architects and administrators. It describes how Google Cloud can help your organization mitigate the effects of ransomware attacks. It also describes the ransomware attack sequence and the built-in security controls in Google products that help you to prevent ransomware attacks. The series has the following parts: Mitigating ransomware attacks using Google Cloud (this document) Best practices for mitigating ransomware attacks using Google Cloud Ransomware attack sequence Ransomware attacks can start as mass campaigns looking for potential vulnerabilities or as directed campaigns. A directed campaign starts with identification and reconnaissance, where an attacker determines which organizations are vulnerable and what attack vector to use. There are many ransomware attack vectors. The most common are phishing emails with malicious URLs or exploiting an exposed software vulnerability. This software vulnerability can be in the software that your organization uses, or a vulnerability that exists in your software supply chain. Ransomware attackers target organizations, their supply chain, and their customers. When the initial attack is successful, the ransomware installs itself and contacts the command and control server to retrieve the encryption keys. 
As ransomware spreads throughout the network, it can infect resources, encrypt data using the keys that it retrieved, and exfiltrate data. Attackers demand a ransom, typically in cryptocoins, from the organization so that they can get the decryption key. The following diagram summarizes the typical ransomware attack sequence explained in the previous paragraphs, from identification and reconnaissance to data exfiltration and ransom demand. Ransomware is often difficult to detect. According to Sophos, it takes about 11 days for an organization to discover a ransomware attack, while FireEye reports an average time of 24 days. It's critical, therefore, that you put in place prevention, monitoring, and detection capabilities, and that your organization is ready to respond swiftly when someone discovers an attack. Security and resiliency controls in Google Cloud Google Cloud includes built-in security and resiliency controls to help protect customers against ransomware attacks. These controls include the following: Global infrastructure designed with security throughout the information-processing lifecycle. Built-in security features for Google Cloud products and services, such as monitoring, threat detection, data loss prevention, and access controls. High availability with regional clusters and global load balancers. Built-in backup, with easily scalable services. Automation capabilities using Infrastructure as Code and configuration guardrails. Google Cloud Threat Intelligence for Google Security Operations and VirusTotal track and respond to many types of malware, including ransomware, across Google infrastructure and products. Google Cloud Threat Intelligence for Google Security Operations is a team of threat researchers that develop threat intelligence for Google Security Operations. VirusTotal is a malware database and visualization solution that provides you with a better understanding of how malware operates within your enterprise. For more information about built-in security controls, see the Google security paper and Google Infrastructure Security Design Overview. Security and resiliency controls in Google Workspace, Chrome browser, and Chromebooks In addition to the controls within Google Cloud, other Google products like Google Workspace, Google Chrome browser, and Chromebooks include security controls that can help protect your organization against ransomware attacks. For example, Google products provide security controls that allow remote workers to access resources from anywhere, based on their identity and context (such as location or IP address). As described in the Ransomware attack sequence section, email is a key vector for many ransomware attacks. It can be exploited to phish credentials for fraudulent network access and to distribute ransomware binaries directly. Advanced phishing and malware protection in Gmail provides controls to quarantine emails, defends against dangerous attachment types, and helps protect users from inbound spoofing emails. Security Sandbox is designed to detect the presence of previously unknown malware in attachments. Chrome browser includes Google Safe Browsing, which is designed to provide warnings to users when they attempt to access an infected or malicious site. Sandboxes and site isolation help protect against the spread of malicious code within different processes on the same tab. 
Password protection is designed to provide alerts when a corporate password is being used on a personal account, and checks whether any of the user's saved passwords have been compromised in an online breach. In this scenario, the browser prompts the user to change their password. The following Chromebook features help to protect against phishing and ransomware attacks: Read-only operating system (Chrome OS). This system is designed to update constantly and invisibly. Chrome OS helps protect against the most recent vulnerabilities and includes controls that ensure that applications and extensions can't modify it. Sandboxing. Each application runs in an isolated environment, so one harmful application can't easily infect other applications. Verified boot. While the Chromebook is booting, it is designed to check that the system hasn't been modified. Safe Browsing. Chrome periodically downloads the most recent Safe Browsing list of unsafe sites. It is designed to check the URLs of each site that a user visits and checks each file that a user downloads against this list. Titan C security chips. These chips help protect users from phishing attacks by enabling two-factor authentication and they protect the operating system from malicious tampering. To help reduce your organization's attack surface, consider Chromebooks for users who work primarily in a browser. What's next Implement best practices for mitigating ransomware attacks (next document). See 5 pillars of protection to prevent ransomware attacks for prevention and response techniques to ransomware attacks. Read more about zero trust solutions in Devices and zero trust. Help ensure continuity and protect your business against adverse cyber events using the Security and resilience framework. Send feedback \ No newline at end of file diff --git a/Overview(3).txt b/Overview(3).txt new file mode 100644 index 0000000000000000000000000000000000000000..8d84d2b076c8fd4146e24c80270baa083412506a --- /dev/null +++ b/Overview(3).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/operational-excellence +Date Scraped: 2025-02-23T11:42:40.302Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud Architecture Framework: Operational excellence Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-31 UTC The operational excellence pillar in the Google Cloud Architecture Framework provides recommendations to operate workloads efficiently on Google Cloud. Operational excellence in the cloud involves designing, implementing, and managing cloud solutions that provide value, performance, security, and reliability. The recommendations in this pillar help you to continuously improve and adapt workloads to meet the dynamic and ever-evolving needs in the cloud. The operational excellence pillar is relevant to the following audiences: Managers and leaders: A framework to establish and maintain operational excellence in the cloud and to ensure that cloud investments deliver value and support business objectives. Cloud operations teams: Guidance to manage incidents and problems, plan capacity, optimize performance, and manage change. Site reliability engineers (SREs): Best practices that help you to achieve high levels of service reliability, including monitoring, incident response, and automation. 
Cloud architects and engineers: Operational requirements and best practices for the design and implementation phases, to help ensure that solutions are designed for operational efficiency and scalability. DevOps teams: Guidance about automation, CI/CD pipelines, and change management, to help enable faster and more reliable software delivery. To achieve operational excellence, you should embrace automation, orchestration, and data-driven insights. Automation helps to eliminate toil. It also streamlines and builds guardrails around repetitive tasks. Orchestration helps to coordinate complex processes. Data-driven insights enable evidence-based decision-making. By using these practices, you can optimize cloud operations, reduce costs, improve service availability, and enhance security. Operational excellence in the cloud goes beyond technical proficiency in cloud operations. It includes a cultural shift that encourages continuous learning and experimentation. Teams must be empowered to innovate, iterate, and adopt a growth mindset. A culture of operational excellence fosters a collaborative environment where individuals are encouraged to share ideas, challenge assumptions, and drive improvement. For operational excellence principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Operational excellence in the Architecture Framework. Core principles The recommendations in the operational excellence pillar of the Architecture Framework are mapped to the following core principles: Ensure operational readiness and performance using CloudOps: Ensure that cloud solutions meet operational and performance requirements by defining service level objectives (SLOs) and by performing comprehensive monitoring, performance testing, and capacity planning. Manage incidents and problems: Minimize the impact of cloud incidents and prevent recurrence through comprehensive observability, clear incident response procedures, thorough retrospectives, and preventive measures. Manage and optimize cloud resources: Optimize and manage cloud resources through strategies like right-sizing, autoscaling, and by using effective cost monitoring tools. Automate and manage change: Automate processes, streamline change management, and alleviate the burden of manual labor. Continuously improve and innovate: Focus on ongoing enhancements and the introduction of new solutions to stay competitive. 
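To make the first principle more concrete, the following minimal Python sketch shows how an availability SLO translates into an error budget over a 30-day window. The 99.9% target and the 42 minutes of observed downtime are hypothetical values chosen only for illustration; they are not figures from this document.

# Minimal sketch: translate an availability SLO into an error budget.
# The SLO target and observed downtime below are hypothetical.

SECONDS_PER_30_DAYS = 30 * 24 * 60 * 60

def downtime_budget_seconds(slo_target: float, window_seconds: int = SECONDS_PER_30_DAYS) -> float:
    """Return the maximum allowed downtime for the window, in seconds."""
    return (1.0 - slo_target) * window_seconds

if __name__ == "__main__":
    slo = 0.999                  # 99.9% availability target ("three nines")
    observed_downtime = 42 * 60  # 42 minutes of downtime so far in this window
    budget = downtime_budget_seconds(slo)
    print(f"Allowed downtime per 30 days: {budget / 60:.1f} minutes")
    print(f"Error budget remaining:       {(budget - observed_downtime) / 60:.1f} minutes")

Tracking the remaining error budget in this way gives teams an objective signal for when to slow down feature releases and prioritize reliability work.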
ContributorsAuthors: Ryan Cox | Principal ArchitectHadrian Knotz | Enterprise ArchitectOther contributors: Daniel Lees | Cloud Security ArchitectFilipe Gracio, PhD | Customer EngineerGary Harmson | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperNicolas Pintaux | Customer Engineer, Application Modernization SpecialistRadhika Kanakam | Senior Program Manager, Cloud GTMZach Seils | Networking SpecialistWade Holmes | Global Solutions Director Next Ensure operational readiness and performance using CloudOps arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(4).txt b/Overview(4).txt new file mode 100644 index 0000000000000000000000000000000000000000..e688786c8c0062336b649e2f7119d0a269443004 --- /dev/null +++ b/Overview(4).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security +Date Scraped: 2025-02-23T11:42:52.423Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud Architecture Framework: Security, privacy, and compliance Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC The Security, Privacy and Compliance pillar in the Google Cloud Architecture Framework provides recommendations to help you design, deploy, and operate cloud workloads that meet your requirements for security, privacy, and compliance. This document is designed to offer valuable insights and meet the needs of a range of security professionals and engineers. The following table describes the intended audiences for this document: Audience What this document provides Chief information security officers (CISOs), business unit leaders, and IT managers A general framework to establish and maintain security excellence in the cloud and to ensure a comprehensive view of security areas to make informed decisions about security investments. Security architects and engineers Key security practices for the design and operational phases to help ensure that solutions are designed for security, efficiency, and scalability. DevSecOps teams Guidance to incorporate overarching security controls to plan automation that enables secure and reliable infrastructure. Compliance officers and risk managers Key security recommendations to follow a structured approach to risk management with safeguards that help to meet compliance obligations. To ensure that your Google Cloud workloads meet your security, privacy, and compliance requirements, all of the stakeholders in your organization must adopt a collaborative approach. In addition, you must recognize that cloud security is a shared responsibility between you and Google. For more information, see Shared responsibilities and shared fate on Google Cloud. The recommendations in this pillar are grouped into core security principles. Each principle-based recommendation is mapped to one or more of the key deployment focus areas of cloud security that might be critical to your organization. Each recommendation highlights guidance about the use and configuration of Google Cloud products and capabilities to help improve your organization's security posture. Core principles The recommendations in this pillar are grouped within the following core principles of security. Every principle in this pillar is important. Depending on the requirements of your organization and workload, you might choose to prioritize certain principles. 
Implement security by design: Integrate cloud security and network security considerations starting from the initial design phase of your applications and infrastructure. Google Cloud provides architecture blueprints and recommendations to help you apply this principle. Implement zero trust: Use a never trust, always verify approach, where access to resources is granted based on continuous verification of trust. Google Cloud supports this principle through products like Chrome Enterprise Premium and Identity-Aware Proxy (IAP). Implement shift-left security: Implement security controls early in the software development lifecycle. Avoid security defects before system changes are made. Detect and fix security bugs early, fast, and reliably after the system changes are committed. Google Cloud supports this principle through products like Cloud Build, Binary Authorization, and Artifact Registry. Implement preemptive cyber defense: Adopt a proactive approach to security by implementing robust fundamental measures like threat intelligence. This approach helps you build a foundation for more effective threat detection and response. Google Cloud's approach to layered security controls aligns with this principle. Use AI securely and responsibly: Develop and deploy AI systems in a responsible and secure manner. The recommendations for this principle are aligned with guidance in the AI and ML perspective of the Architecture Framework and in Google's Secure AI Framework (SAIF). Use AI for security: Use AI capabilities to improve your existing security systems and processes through Gemini in Security and overall platform-security capabilities. Use AI as a tool to increase the automation of remedial work and ensure security hygiene to make other systems more secure. Meet regulatory, compliance, and privacy needs: Adhere to industry-specific regulations, compliance standards, and privacy requirements. Google Cloud helps you meet these obligations through products like Assured Workloads, Organization Policy Service, and our compliance resource center. Organizational security mindset A security-focused organizational mindset is crucial for successful cloud adoption and operation. This mindset should be deeply ingrained in your organization's culture and reflected in its practices, which are guided by core security principles as described earlier. An organizational security mindset emphasizes that you think about security during system design, assume zero trust, and integrate security features throughout your development process. In this mindset, you also think proactively about cyber-defense measures, use AI securely and for security, and consider your regulatory, privacy, and compliance requirements. By embracing these principles, your organization can cultivate a security-first culture that proactively addresses threats, protects valuable assets, and helps to ensure responsible technology usage. Focus areas of cloud security This section describes the areas for you to focus on when you plan, implement, and manage security for your applications, systems, and data. The recommendations in each principle of this pillar are relevant to one or more of these focus areas. Throughout the rest of this document, the recommendations specify the corresponding security focus areas to provide further clarity and context. Focus area Activities and components Related Google Cloud products, capabilities, and solutions Infrastructure security Secure network infrastructure. Encrypt data in transit and at rest. 
Control traffic flow. Secure IaaS and PaaS services. Protect against unauthorized access. Firewall Policies VPC Service Controls Google Cloud Armor Cloud Next Generation Firewall Secure Web Proxy Identity and access management Use authentication, authorization, and access controls. Manage cloud identities. Manage identity and access management policies. Cloud Identity Google's Identity and Access Management (IAM) service Workforce Identity Federation Workload Identity Federation Data security Store data in Google Cloud securely. Control access to the data. Discover and classify the data. Design necessary controls, such as encryption, access controls, and data loss prevention. Protect data at rest, in transit, and in use. Google's IAM service Sensitive Data Protection VPC Service Controls Cloud KMS Confidential Computing AI and ML security Apply security controls at different layers of the AI and ML infrastructure and pipeline. Ensure model safety. Google's SAIF Model Armor Security operations (SecOps) Adopt a modern SecOps platform and set of practices, for effective incident management, threat detection, and response processes. Monitor systems and applications continuously for security events. Google Security Operations Application security Secure applications against software vulnerabilities and attacks. Artifact Registry Artifact Analysis Binary Authorization Assured Open Source Software Google Cloud Armor Web Security Scanner Cloud governance, risk, and compliance Establish policies, procedures, and controls to manage cloud resources effectively and securely. Organization Policy Service Cloud Asset Inventory Security Command Center Enterprise Resource Manager Logging, auditing, and monitoring Analyze logs to identify potential threats. Track and record system activities for compliance and security analysis. Cloud Logging Cloud Monitoring Cloud Audit Logs VPC Flow Logs ContributorsAuthors: Wade Holmes | Global Solutions DirectorHector Diaz | Cloud Security ArchitectCarlos Leonardo Rosario | Google Cloud Security SpecialistJohn Bacon | Partner Solutions ArchitectSachin Kalra | Global Security Solution ManagerOther contributors: Anton Chuvakin | Security Advisor, Office of the CISODaniel Lees | Cloud Security ArchitectFilipe Gracio, PhD | Customer EngineerGary Harmson | Customer EngineerGino Pelliccia | Principal ArchitectJose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperLaura Hyatt | Enterprise Cloud ArchitectMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization SpecialistNoah McDonald | Cloud Security ConsultantOsvaldo Costa | Networking Specialist Customer EngineerRadhika Kanakam | Senior Program Manager, Cloud GTMSusan Wu | Outbound Product Manager Next Implement security by design arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(5).txt b/Overview(5).txt new file mode 100644 index 0000000000000000000000000000000000000000..3f2f6558ac206003d285e8ba12a8a105bb24f067 --- /dev/null +++ b/Overview(5).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability +Date Scraped: 2025-02-23T11:43:15.089Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud Architecture Framework: Reliability Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2024-12-30 UTC The reliability pillar in the Google Cloud Architecture Framework provides principles and recommendations to help you design, deploy, and manage reliable workloads in Google Cloud. This document is intended for cloud architects, developers, platform engineers, administrators, and site reliability engineers. Reliability is a system's ability to consistently perform its intended functions within the defined conditions and maintain uninterrupted service. Best practices for reliability include redundancy, fault-tolerant design, monitoring, and automated recovery processes. As a part of reliability, resilience is the system's ability to withstand and recover from failures or unexpected disruptions, while maintaining performance. Google Cloud features, like multi-regional deployments, automated backups, and disaster recovery solutions, can help you improve your system's resilience. Reliability is important to your cloud strategy for many reasons, including the following: Minimal downtime: Downtime can lead to lost revenue, decreased productivity, and damage to reputation. Resilient architectures can help ensure that systems can continue to function during failures or recover efficiently from failures. Enhanced user experience: Users expect seamless interactions with technology. Resilient systems can help maintain consistent performance and availability, and they provide reliable service even during high demand or unexpected issues. Data integrity: Failures can cause data loss or data corruption. Resilient systems implement mechanisms such as backups, redundancy, and replication to protect data and ensure that it remains accurate and accessible. Business continuity: Your business relies on technology for critical operations. Resilient architectures can help ensure continuity after a catastrophic failure, which enables business functions to continue without significant interruptions and supports a swift recovery. Compliance: Many industries have regulatory requirements for system availability and data protection. Resilient architectures can help you to meet these standards by ensuring systems remain operational and secure. Lower long-term costs: Resilient architectures require upfront investment, but resiliency can help to reduce costs over time by preventing expensive downtime, avoiding reactive fixes, and enabling more efficient resource use. Organizational mindset To make your systems reliable, you need a plan and an established strategy. This strategy must include education and the authority to prioritize reliability alongside other initiatives. Set a clear expectation that the entire organization is responsible for reliability, including development, product management, operations, platform engineering, and site reliability engineering (SRE). Even the business-focused groups, like marketing and sales, can influence reliability. Every team must understand the reliability targets and risks of their applications. The teams must be accountable to these requirements. Conflicts between reliability and regular product feature development must be prioritized and escalated accordingly. Plan and manage reliability holistically, across all your functions and teams. Consider setting up a Cloud Centre of Excellence (CCoE) that includes a reliability pillar. For more information, see Optimize your organization's cloud journey with a Cloud Center of Excellence. 
Focus areas for reliability The activities that you perform to design, deploy, and manage a reliable system can be categorized in the following focus areas. Each of the reliability principles and recommendations in this pillar is relevant to one of these focus areas. Scoping: To understand your system, conduct a detailed analysis of its architecture. You need to understand the components, how they work and interact, how data and actions flow through the system, and what could go wrong. Identify potential failures, bottlenecks, and risks, which helps you to take actions to mitigate those issues. Observation: To help prevent system failures, implement comprehensive and continuous observation and monitoring. Through this observation, you can understand trends and identify potential problems proactively. Response: To reduce the impact of failures, respond appropriately and recover efficiently. Automated responses can also help reduce the impact of failures. Even with planning and controls, failures can still occur. Learning: To help prevent failures from recurring, learn from each experience, and take appropriate actions. Core principles The recommendations in the reliability pillar of the Architecture Framework are mapped to the following core principles: Define reliability based on user-experience goals Set realistic targets for reliability Build highly available systems through redundant resources Take advantage of horizontal scalability Detect potential failures by using observability Design for graceful degradation Perform testing for recovery from failures Perform testing for recovery from data loss Conduct thorough postmortems Note: To learn about the building blocks of infrastructure reliability in Google Cloud, see Google Cloud infrastructure reliability guide. ContributorsAuthors: Laura Hyatt | Enterprise Cloud ArchitectJose Andrade | Enterprise Infrastructure Customer EngineerGino Pelliccia | Principal ArchitectOther contributors: Andrés-Leonardo Martínez-Ortiz | Technical Program ManagerBrian Kudzia | Enterprise Infrastructure Customer EngineerDaniel Lees | Cloud Security ArchitectFilipe Gracio, PhD | Customer EngineerGary Harmson | Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization SpecialistRadhika Kanakam | Senior Program Manager, Cloud GTMRyan Cox | Principal ArchitectWade Holmes | Global Solutions DirectorZach Seils | Networking Specialist Next Define reliability based on user-experience goals arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(6).txt b/Overview(6).txt new file mode 100644 index 0000000000000000000000000000000000000000..cbb099bb1d243e01f9c21716103c387f4566f74d --- /dev/null +++ b/Overview(6).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/cost-optimization +Date Scraped: 2025-02-23T11:43:45.512Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud Architecture Framework: Cost optimization Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC The cost optimization pillar in the Google Cloud Architecture Framework describes principles and recommendations to optimize the cost of your workloads in Google Cloud. The intended audience includes the following: CTOs, CIOs, CFOs, and other executives who are responsible for strategic cost management. 
Architects, developers, administrators, and operators who make decisions that affect cost at all the stages of an organization's cloud journey. The cost models for on-premises and cloud workloads differ significantly. On-premises IT costs include capital expenditure (CapEx) and operational expenditure (OpEx). On-premises hardware and software assets are acquired and the acquisition costs are depreciated over the operating life of the assets. In the cloud, the costs for most cloud resources are treated as OpEx, where costs are incurred when the cloud resources are consumed. This fundamental difference underscores the importance of the following core principles of cost optimization. Note: You might be able to classify the cost of some Google Cloud services (like Compute Engine sole-tenant nodes) as capital expenditure. For more information, see Sole-tenancy accounting FAQ. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Core principles The recommendations in the cost optimization pillar of the Architecture Framework are mapped to the following core principles: Align cloud spending with business value: Ensure that your cloud resources deliver measurable business value by aligning IT spending with business objectives. Foster a culture of cost awareness: Ensure that people across your organization consider the cost impact of their decisions and activities, and ensure that they have access to the cost information required to make informed decisions. Optimize resource usage: Provision only the resources that you need, and pay only for the resources that you consume. Optimize continuously: Continuously monitor your cloud resource usage and costs, and proactively make adjustments as needed to optimize your spending. This approach involves identifying and addressing potential cost inefficiencies before they become significant problems. These principles are closely aligned with the core tenets of cloud FinOps. FinOps is relevant to any organization, regardless of its size or maturity in the cloud. By adopting these principles and following the related recommendations, you can control and optimize costs throughout your journey in the cloud. 
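As a simple illustration of the "pay only for the resources that you consume" principle, the following Python sketch estimates the savings from running a development VM only during business hours instead of around the clock. The hourly rate and schedule are hypothetical values chosen for illustration; they are not Google Cloud prices.

# Sketch: always-on versus scheduled usage for a development VM.
# All numbers are hypothetical.

HOURLY_RATE_USD = 0.10               # assumed cost of one VM instance per hour
HOURS_PER_MONTH_ALWAYS_ON = 730      # average hours in a month
BUSINESS_HOURS_PER_MONTH = 10 * 22   # 10 hours per weekday, about 22 weekdays

always_on_cost = HOURLY_RATE_USD * HOURS_PER_MONTH_ALWAYS_ON
scheduled_cost = HOURLY_RATE_USD * BUSINESS_HOURS_PER_MONTH

print(f"Always-on monthly cost:   ${always_on_cost:.2f}")
print(f"Scheduled monthly cost:   ${scheduled_cost:.2f}")
print(f"Estimated monthly saving: ${always_on_cost - scheduled_cost:.2f}")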
ContributorsAuthor: Nicolas Pintaux | Customer Engineer, Application Modernization SpecialistOther contributors: Anuradha Bajpai | Solutions ArchitectDaniel Lees | Cloud Security ArchitectEric Lam | Head of Google Cloud FinOpsFernando Rubbo | Cloud Solutions ArchitectFilipe Gracio, PhD | Customer EngineerGary Harmson | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKent Hua | Solutions ManagerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerRadhika Kanakam | Senior Program Manager, Cloud GTMSteve McGhee | Reliability AdvocateSergei Lilichenko | Solutions ArchitectWade Holmes | Global Solutions DirectorZach Seils | Networking Specialist Next Align spending with business value arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(7).txt b/Overview(7).txt new file mode 100644 index 0000000000000000000000000000000000000000..d7f5287d69005f0f33a033391ef8c91402dc31a9 --- /dev/null +++ b/Overview(7).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/performance-optimization +Date Scraped: 2025-02-23T11:44:00.981Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud Architecture Framework: Performance optimization Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-06 UTC This pillar in the Google Cloud Architecture Framework provides recommendations to optimize the performance of workloads in Google Cloud. This document is intended for architects, developers, and administrators who plan, design, deploy, and manage workloads in Google Cloud. The recommendations in this pillar can help your organization to operate efficiently, improve customer satisfaction, increase revenue, and reduce cost. For example, when the backend processing time of an application decreases, users experience faster response times, which can lead to higher user retention and more revenue. The performance optimization process can involve a trade-off between performance and cost. However, optimizing performance can sometimes help you reduce costs. ​​For example, when the load increases, autoscaling can help to provide predictable performance by ensuring that the system resources aren't overloaded. Autoscaling also helps you to reduce costs by removing unused resources during periods of low load. Performance optimization is a continuous process, not a one-time activity. The following diagram shows the stages in the performance optimization process: The performance optimization process is an ongoing cycle that includes the following stages: Define requirements: Define granular performance requirements for each layer of the application stack before you design and develop your applications. To plan resource allocation, consider the key workload characteristics and performance expectations. Design and deploy: Use elastic and scalable design patterns that can help you meet your performance requirements. Monitor and analyze: Monitor performance continually by using logs, tracing, metrics, and alerts. Optimize: Consider potential redesigns as your applications evolve. Rightsize cloud resources and use new features to meet changing performance requirements. As shown in the preceding diagram, continue the cycle of monitoring, re-assessing requirements, and adjusting the cloud resources. 
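The autoscaling behavior mentioned above can be illustrated with the target-tracking calculation that autoscalers commonly perform: scale the replica count in proportion to the ratio of observed utilization to the target utilization. The following Python sketch shows only the calculation; the replica counts and utilization values are hypothetical, and the sketch does not call any Google Cloud API.

import math

def desired_replicas(current_replicas: int, observed_utilization: float,
                     target_utilization: float) -> int:
    """Target-tracking rule: grow or shrink the group so that observed
    utilization moves toward the target."""
    return max(1, math.ceil(current_replicas * observed_utilization / target_utilization))

# Hypothetical load spike: 4 replicas running at 90% CPU against a 60% target.
print(desired_replicas(4, 0.90, 0.60))  # -> 6 replicas during the spike
# Hypothetical quiet period: 6 replicas at 20% CPU against the same target.
print(desired_replicas(6, 0.20, 0.60))  # -> 2 replicas, releasing unused capacity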
For performance optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Performance optimization in the Architecture Framework. Core principles The recommendations in the performance optimization pillar of the Architecture Framework are mapped to the following core principles: Plan resource allocation Take advantage of elasticity Promote modular design Continuously monitor and improve performance ContributorsAuthors: Daniel Lees | Cloud Security ArchitectGary Harmson | Customer EngineerLuis Urena | Developer Relations EngineerZach Seils | Networking SpecialistOther contributors: Filipe Gracio, PhD | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization SpecialistRyan Cox | Principal ArchitectRadhika Kanakam | Senior Program Manager, Cloud GTMWade Holmes | Global Solutions Director Next Plan resource allocation arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(8).txt b/Overview(8).txt new file mode 100644 index 0000000000000000000000000000000000000000..b8d4d6e0a6027cf03d44f99362ae55ad8e77477a --- /dev/null +++ b/Overview(8).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/perspectives/ai-ml +Date Scraped: 2025-02-23T11:44:17.366Z + +Content: +Home Docs Cloud Architecture Center Send feedback Architecture Framework: AI and ML perspective Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC This document in the Google Cloud Architecture Framework describes principles and recommendations to help you to design, build, and manage AI and ML workloads in Google Cloud that meet your operational, security, reliability, cost, and performance goals. The target audience for this document includes decision makers, architects, administrators, developers, and operators who design, build, deploy, and maintain AI and ML workloads in Google Cloud. 
The following pages describe principles and recommendations that are specific to AI and ML, for each pillar of the Google Cloud Architecture Framework: AI and ML perspective: Operational excellence AI and ML perspective: Security AI and ML perspective: Reliability AI and ML perspective: Cost optimization AI and ML perspective: Performance optimization ContributorsAuthors: Benjamin Sadik | AI and ML Specialist Customer EngineerFilipe Gracio, PhD | Customer EngineerIsaac Lo | AI Business Development ManagerKamilla Kurta | GenAI/ML Specialist Customer EngineerMohamed Fawzi | Benelux Security and Compliance LeadRick (Rugui) Chen | AI Infrastructure Solutions ArchitectSannya Dang | AI Solution ArchitectOther contributors: Daniel Lees | Cloud Security ArchitectGary Harmson | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization SpecialistRadhika Kanakam | Senior Program Manager, Cloud GTMRyan Cox | Principal ArchitectStef Ruinard | Generative AI Field Solutions ArchitectWade Holmes | Global Solutions DirectorZach Seils | Networking Specialist Next Operational excellence arrow_forward Send feedback \ No newline at end of file diff --git a/Overview(9).txt b/Overview(9).txt new file mode 100644 index 0000000000000000000000000000000000000000..5989f9e51fe93a4940ae4f0d6e0018a13b26a8aa --- /dev/null +++ b/Overview(9).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes +Date Scraped: 2025-02-23T11:44:35.061Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud deployment archetypes Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC As a cloud architect or decision maker, when you plan to deploy an application in Google Cloud, you need to choose a deployment archetype1 that's suitable for your application. This guide describes six deployment archetypes—zonal, regional, multi-regional, global, hybrid, and multicloud, and presents use cases and design considerations for each deployment archetype. The guide also provides a comparative analysis to help you choose the deployment archetypes that meet your requirements for availability, cost, performance, and operational efficiency. What is a deployment archetype? A deployment archetype is an abstract, provider-independent model that you use as the foundation to build application-specific deployment architectures that meet your business and technical requirements. Each deployment archetype specifies a combination of failure domains where an application can run. These failure domains can be one or more Google Cloud zones or regions, and they can extend to include your on-premises data centers or failure domains in other cloud providers. The following diagram shows six applications deployed in Google Cloud. Each application uses a deployment archetype that meets its specific requirements. As the preceding diagram shows, in an architecture that uses the hybrid or multicloud deployment archetype, the cloud topology is based on one of the basic archetypes: zonal, regional, multi-regional, or global. In this sense, the hybrid and multicloud deployment archetypes can be considered as composite deployment archetypes that include one of the basic archetypes. Note: Deployment archetypes are different from location scopes. 
The location scope of a Google Cloud resource defines its availability boundary. For example, the location scope of a Compute Engine VM is zonal. This means that if the Google Cloud zone in which a VM is provisioned has an outage, the availability of the VM is affected. However, by distributing VMs across multiple zones, you can build a highly available architecture that's based on the regional deployment archetype. Choosing a deployment archetype helps to simplify subsequent decisions regarding the Google Cloud products and features that you should use. For example, for a highly available containerized application, if you choose the regional deployment archetype, then regional Google Kubernetes Engine (GKE) clusters are more appropriate than zonal GKE clusters. When you choose a deployment archetype for an application, you need to consider tradeoffs between factors like availability, cost, and operational complexity. For example, if an application serves users in multiple countries and needs high availability, you might choose the multi-regional deployment archetype. But for an internal application that's used by employees in a single geographical region, you might prioritize cost over availability and, therefore, choose the regional deployment archetype. Overview of the deployment archetypes The following tabs provide definitions for the deployment archetypes and a summary of the use cases and design considerations for each. Zonal Your application runs within a single Google Cloud zone, as shown in the following diagram: Use cases Development and test environments. Applications that don't need high availability. Low-latency networking between application components. Migrating commodity workloads. Applications that use license-restricted software. Design considerations Downtime during zone outages. For business continuity, you can provision a passive replica of the application in another zone in the same region. If a zone outage occurs, you can restore the application to production by using the passive replica. More information See the following sections: Zonal deployment archetype Comparative analysis of all the deployment archetypes Regional Your application runs independently in two or more zones within a single Google Cloud region, as shown in the following diagram: Use cases Highly available applications that serve users within a geographic area. Compliance with data residency and sovereignty requirements. Design considerations Downtime during region outages. For business continuity, you can back up the application and data to another region. If a region outage occurs, you can use the backups in the other region to restore the application to production. Cost and effort to provision and manage redundant resources. More information See the following sections: Regional deployment archetype Comparative analysis of all the deployment archetypes Multi-regional Your application runs independently in multiple zones across two or more Google Cloud regions. You can use DNS routing policies to route incoming traffic to the regional load balancers. The regional load balancers then distribute the traffic to the zonal replicas of the application, as shown in the following diagram: Use cases Highly available applications with geographically dispersed users. Applications that require low end-user latency. Compliance with data residency and sovereignty requirements by using a geofenced DNS routing policy. Design considerations Cost for cross-region data transfer and data replication.
Operational complexity. More information See the following sections: Multiregional deployment archetype Comparative analysis of all the deployment archetypes Global Your application runs across Google Cloud regions worldwide, either as a globally distributed (location-unaware) stack or as regionally isolated stacks. A global anycast load balancer distributes traffic to the region that's nearest to the user. Other components of the application stack can also be global, such as the database, cache, and object store. The following diagram shows the globally distributed variant of the global deployment archetype. A global anycast load balancer forwards requests to an application stack that's distributed across multiple regions and that uses a globally replicated database. The following diagram shows a variant of the global deployment archetype with regionally isolated application stacks. A global anycast load balancer forwards requests to an application stack in one of the regions. All the application stacks use a single, globally replicated database. Use cases Highly available applications that serve globally dispersed users. Opportunity to optimize cost and simplify operations by using global resources instead of multiple instances of regional resources. Design considerations Costs for cross-region data transfer and data replication. More information See the following sections: Global deployment archetype Comparative analysis of all the deployment archetypes Hybrid Certain parts of your application are deployed in Google Cloud, while other parts run on-premises, as shown in the following diagram. The topology in Google Cloud can use the zonal, regional, multi-regional, or global deployment archetype. Use cases Disaster recovery (DR) site for on-premises workloads. On-premises development for cloud applications. Progressive migration to the cloud for legacy applications. Enhancing on-premises applications with cloud capabilities. Design considerations Setup effort and operational complexity. Cost of redundant resources. More information See the following sections: Hybrid deployment archetype Comparative analysis of all the deployment archetypes Multicloud Some parts of your application are deployed in Google Cloud, and other parts are deployed in other cloud platforms, as shown in the following diagram. The topology in each cloud platform can use the zonal, regional, multi-regional, or global deployment archetype. Use cases Google Cloud as the primary site and another cloud as a DR site. Enhancing applications with advanced Google Cloud capabilities. Design considerations Setup effort and operational complexity. Cost of redundant resources and cross-cloud network traffic. 
More information See the following sections: Multicloud deployment archetype Comparative analysis of all the deployment archetypes ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Anna Berenberg | Engineering FellowAnshu Kak | Distinguished EngineerJeff Welsch | Director, Product ManagementMarwan Al Shawi | Partner Customer EngineerSekou Page | Outbound Product ManagerSteve McGhee | Reliability AdvocateVictor Moreno | Product Manager, Cloud Networking Anna Berenberg and Brad Calder, Deployment Archetypes for Cloud Applications, ACM Computing Surveys, Volume 55, Issue 3, Article No.: 61, pp 1-48 ↩ Next Zonal arrow_forward Send feedback \ No newline at end of file diff --git a/Overview_of_Google_identity_management.txt b/Overview_of_Google_identity_management.txt new file mode 100644 index 0000000000000000000000000000000000000000..d1342e998eb9035bcfb9a75cc39014e2eae006e6 --- /dev/null +++ b/Overview_of_Google_identity_management.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/overview-google-authentication +Date Scraped: 2025-02-23T11:55:01.989Z + +Content: +Home Docs Cloud Architecture Center Send feedback Overview of Google identity management Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC All Google services, including Google Cloud, Google Marketing Platform, and Google Ads, rely on Google Sign-In to authenticate users. This document explains the domain model that Google Sign-In relies on for authentication and identity management. The domain model helps you understand how Google Sign-In works in a corporate context, how identities are managed, and how you can facilitate an integration with an external identity provider (IdP). The following diagram shows how these entities interact. As this diagram shows, at the center of the model is the Google identity, which is used by Google Sign-In. The Google identity is related to a number of other entities that are all relevant in the context of managing identities: Google for consumers contains the entities that are relevant for consumer-focused usage of Google services such as Gmail. Google for organizations contains entities managed by Cloud Identity or Google Workspace. These entities are the most relevant for managing corporate identities. Google Cloud contains the entities that are specific to Google Cloud. External contains entities that are relevant if you integrate Google with an external IdP. Solid arrows in the diagram indicate that entities reference each other or contain each other. In contrast, dashed arrows denote a federation relationship. Google identities Identities, users, and user accounts play a crucial role in identity management. The three terms are closely related and sometimes even used interchangeably. However, in the context of identity management, it's worthwhile to differentiate between the concepts: An identity is a name that uniquely identifies the person who is interacting with a Google service. Google uses email addresses for this purpose. A person's email address is considered that person's Google identity. The process of verifying the association between a person and an identity is called authentication or sign-in, making the person prove that this is indeed their identity. A person might have multiple email addresses. Because Google services use an email address as identity, such a person would be considered to have multiple identities. 
A user account is a data structure that keeps track of attributes, activities, and configurations that should be applied whenever a certain identity interacts with a Google service. User accounts are not created on the fly, but need to be provisioned before the first sign-on. User accounts are identified by an ID that is not exposed externally. User interfaces or APIs therefore require you to reference the user account indirectly by its associated identity, such as alice@gmail.com. Despite this indirection, any data and configuration details are associated with the user account, not with the identity. In most cases, there is a one-to-one relationship between user accounts and identities, which makes them easy to conflate. However, this is not always the case, as the following edge cases illustrate: The relationship between user accounts and identities is not fixed. You can change the primary email address of a user account, which associates a different identity with the user. As a Cloud Identity or Google Workspace administrator, you can even swap the primary email addresses of two users. For example, if you swapped the primary email addresses of Alice (alice@example.com) and Bob (bob@example.com), then Alice would be using Bob's previous user account and Bob would be using Alice's previous user account. Because data and configuration is associated with the user account and not the identity, Alice would also now use Bob's existing configuration and data (and Bob would now use Alice's configuration and data). The following figure shows this relationship. In a non-federated setup, you'd also have to reset passwords in order for Alice and Bob to swap user accounts. In a federated setup where Alice and Bob use an external IdP to authenticate, resetting passwords wouldn't be necessary. The relationship between identity and users might not be 1:1. A consumer account can intentionally be associated with multiple identities, as in the following diagram. It's also possible that one identity refers to two different user accounts. We recommend that you avoid this situation, but it can arise in the case of a conflicting user account. In such a case, a user is shown a ballot screen during authentication in which they select the user account to use. Google differentiates between two types of user accounts, consumer user accounts and managed user accounts. The following sections cover both types of user accounts and their related entities in more detail. Google for consumers If you own a Gmail email address like alice@gmail.com, then your Gmail account is a consumer account. Similarly, if you use the Create account link on the Google Sign-In page and during sign-up you provide a custom email address that you own, such as alice@example.com, then the resulting account is also a consumer account. Consumer account Consumer accounts are created by self-service and are primarily intended to be used for private purposes. The person who created the consumer account has full control of the account and any data created by using the account. The email address that that person used during sign-up becomes the primary email address of the consumer account and serves as its identity. That person can add email addresses to the consumer account. These email addresses serve as additional identities and can also be used for signing in. 
When a consumer account uses a primary email address that corresponds to the primary or secondary domain of a Cloud Identity or Google Workspace account, then the consumer account is also referred to as an unmanaged user account. A consumer account can be a member of any number of groups. Google for organizations If your organization uses Google services, it's best to use managed user accounts. These accounts are called managed because their lifecycle and configuration can be fully controlled by the organization. Managed user accounts are a feature of Cloud Identity and Google Workspace. Cloud Identity or Google Workspace account A Cloud Identity or Google Workspace account is the top-level container for users, groups, configuration, and data. A Cloud Identity or Google Workspace account is created when a company signs up for Cloud Identity or Google Workspace and corresponds to the notion of a tenant. Cloud Identity and Google Workspace share a common technical platform. Both products use the same set of APIs and administrative tools and share the notion of an account as a container for users and groups; that container is identified by a domain name. For the purpose of managing users, groups, and authentication, the two products can largely be considered equivalent. An account contains groups and one or more organizational units. Important: A Cloud Identity or Google Workspace account is not a user account, but a directory of user accounts. Organizational unit An organizational unit (OU) is a sub-container for user accounts that lets you segment the user accounts defined in the Cloud Identity or Google Workspace account into disjoint sets to make them easier to manage. Organizational units are organized hierarchically. Each Cloud Identity or Google Workspace account has a root OU, under which you can create more OUs as needed. You can also nest your OUs. Cloud Identity and Google Workspace let you apply certain configurations by OU, such as license assignment or 2-step verification. These settings automatically apply to all users in the OU and are also inherited by child OUs. Organizational units therefore play a key role in managing Cloud Identity and Google Workspace configuration. A user account cannot belong to more than one OU, which makes OUs different from groups. Although OUs are useful for applying configuration to user accounts, they are not intended to be used for managing access. For managing access, we recommend that you use groups. Although OUs resemble Google Cloud folders, the two entities serve different purposes and are unrelated to one another. Managed user account Managed user accounts work similarly to consumer user accounts, but they can be fully controlled by administrators of the Cloud Identity or Google Workspace account. The identity of a managed user account is defined by its primary email address. The primary email address has to use a domain that corresponds to one of the primary, secondary, or alias domains added to the Cloud Identity or Google Workspace account. Managed user accounts can have additional alias email addresses and a recovery email address, but these addresses aren't considered identities and cannot be used for signing in. For example, if Alice uses alice@example.com as her primary email address and has configured ally@example.com as an alias email address and alice@gmail.com as a recovery email address, then the only email address Alice can use for signing in is alice@example.com.
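The following Python sketch models this example to make the distinction explicit: only the primary email address of a managed user account is a sign-in identity, while alias and recovery addresses are not. The dataclass is a simplified illustration of the concepts above, not an API of Cloud Identity or Google Workspace.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ManagedUserAccount:
    """Simplified model of a managed user account for illustration only."""
    primary_email: str
    alias_emails: List[str] = field(default_factory=list)
    recovery_email: Optional[str] = None

    def can_sign_in_with(self, email: str) -> bool:
        # Only the primary email address serves as an identity for sign-in.
        return email == self.primary_email

# The Alice example from the text:
alice = ManagedUserAccount(
    primary_email="alice@example.com",
    alias_emails=["ally@example.com"],
    recovery_email="alice@gmail.com",
)
print(alice.can_sign_in_with("alice@example.com"))  # True
print(alice.can_sign_in_with("ally@example.com"))   # False: aliases are not identities
print(alice.can_sign_in_with("alice@gmail.com"))    # False: recovery address only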
Managed user accounts are contained by an organizational unit and can be a member of any number of groups. Managed user accounts are intended to be used by human users rather than machine users. A machine user account is a special kind of account used by an application or a virtual machine (VM) instance, not a person. For machine users, Google Cloud provides service accounts. (Service accounts are discussed in more detail later in this document.) Note: In the context of Google Workspace, Cloud Identity, and Google Cloud, the managed prefix is sometimes left out in other documentation, and managed user accounts are simply referred to as user accounts. In contrast, consumer user accounts are always qualified with the consumer prefix. Group Groups let you bundle multiple users. You can use groups to manage a mailing list or to apply common access control or configuration to multiple users. Cloud Identity and Google Workspace identify groups by email address—for example, billing-admins@example.com. Similar to a user's primary email address, the group email address must use one of the primary, secondary, or alias domains of the Cloud Identity or Google Workspace account. The email address doesn't need to correspond to a mailbox unless the group is used as a mailing list. Authentication still happens using the user's email rather than the group email, so a user can't sign in using a group email address. A group can have the following entities as members: Users (managed users or consumer accounts) Other groups Service accounts Unlike an organizational unit, groups don't act as a container: A user or group can be a member of any number of groups, not just one. Deleting a group does not delete any of the member users or groups. Groups can contain members from any Cloud Identity or Google Workspace account as well as consumer accounts. You can use the disallow members outside your organization setting to restrict members to user accounts of the same Cloud Identity or Google Workspace account. External identities By federating a Cloud Identity or Google Workspace account with an external IdP, you can enable employees to use their existing identity and credentials to sign in to Google services. At the most basic level, federation entails setting up single sign-on by using SAML, which links identities in Cloud Identity or Google Workspace to identities managed by your external IdP. To link an identity like alice@example.com and enable it for single sign-on to Google, you must meet two prerequisites: Your external IdP must recognize the identity alice@example.com and allow it to be used for single sign-on. Your Cloud Identity or Google Workspace account must contain a user account that uses alice@example.com as its identity. This user account must exist before the first single sign-on attempt. Instead of manually creating and maintaining user accounts in Cloud Identity or Google Workspace, you can automate the process by combining SAML-based single sign-on with automatic user provisioning. The idea of automatic user provisioning is to synchronize all or a subset of the users and groups from an external authoritative source to Cloud Identity or Google Workspace. Depending on your choice of IdP, SAML-based single sign-on and automatic user provisioning might be handled by the same software component or might require separate components. The domain model therefore distinguishes between a SAML identity provider and an external authoritative source. 
External SAML identity provider The external IdP is the sole system for authentication and provides a single sign-on experience for your employees that spans applications. It is external to Google and therefore referred to as an external identity provider. When you configure single sign-on, Cloud Identity or Google Workspace relays authentication decisions to a SAML IdP. In SAML terms, Cloud Identity or Google Workspace acts as a service provider that trusts the SAML IdP to verify a user's identity on its behalf. Commonly used external IdPs include Active Directory Federation Services (AD FS), Entra ID, Okta, or Ping Identity. External authoritative source The authoritative source for identities is the sole system that you use to create, manage, and delete identities for your employees. It's external to Google and therefore referred to as an external authoritative source. From the external authoritative source, user accounts and groups can be automatically provisioned to Cloud Identity or Google Workspace. Provisioning might be handled by the authoritative source itself, or by means of provisioning middleware. For automatic user provisioning to be effective, users have to be provisioned with an identity that your SAML IdP recognizes. If you map between identities (for example, if you map the identity alice@example.com in Cloud Identity or Google Workspace to u12345@corp.example.com in your SAML IdP), then both the SAML IdP and the provisioning middleware must perform the same mapping. External user account External identity providers are assumed to have the concept of a user account that keeps track of the name, attributes, and configuration. The authoritative source (or provisioning middleware) is expected to provision all (or a subset of) external user accounts to Cloud Identity or Google Workspace in order to facilitate a sign-on experience. In many cases, it's sufficient to propagate only a subset of the user's attributes (such as email address, first name, and last name) to Cloud Identity or Google Workspace so that you can limit data redundancy. External group If your external IdP supports the notion of a group, then you can optionally map these groups to groups in Cloud Identity or Google Workspace. Mapping and auto-provisioning groups is optional and not required for single sign-on, but both steps can be useful if you want to reuse existing groups to control access in Google Workspace or Google Cloud. Google Cloud Like other Google services, Google Cloud relies on Google Sign-In to authenticate users. Google Cloud also closely integrates with Google Workspace and Cloud Identity to allow you to manage resources efficiently. Google Cloud introduces the notion of organization nodes, folders, and projects. These entities are primarily used for managing access and configuration, so they are only tangentially relevant in the context of identity management. However, Google Cloud also includes an additional type of user account: service accounts. Service accounts belong to projects and play a crucial role in identity management. Organization node An organization is the root node in the Google Cloud resource hierarchy and a container for projects and folders. Organizations let you structure resources hierarchically and are key to managing resources centrally and efficiently. Each organization belongs to a single Cloud Identity or Google Workspace account. The name of the organization is derived from the primary domain name of the corresponding Cloud Identity or Google Workspace account. 
Note: Google Cloud organizations are unrelated to organizations in Google Marketing Platform. Folder Folders are nodes in the Google Cloud resource hierarchy and can contain projects, other folders, or a combination of both. You use folders to group resources that share common Identity and Access Management (IAM) policies or organizational policies. These policies automatically apply to all projects in the folder and are also inherited by child folders. Folders are similar, but unrelated, to organizational units. Organizational units help you manage users and apply common configuration or policies to users, whereas folders help you manage Google Cloud projects and apply common configuration or policies to projects. Project A project is a container for resources. Projects play a crucial role for managing APIs, billing, and managing access to resources. In the context of identity management, projects are relevant because they are the containers for service accounts. Service account A service account (or Google Cloud service account) is a special kind of user account that is intended to be used by applications and other types of machine users. Each service account belongs to a Google Cloud project. As is the case with managed user accounts, administrators can fully control the lifecycle and configuration of a service account. Service accounts also use an email address as their identity, but unlike with managed user accounts, the email address always uses a Google-owned domain such as developer.gserviceaccount.com. Service accounts don't participate in federation and also don't have a password. On Google Cloud, you use IAM to control the permission that a service account has for a compute resource such as a virtual machine (VM) or Cloud Run function, removing the need to manage credentials. Outside of Google Cloud, you can use service account keys to let an application authenticate by using a service account. Kubernetes service account Kubernetes service accounts are a concept of Kubernetes and are relevant when you use Google Kubernetes Engine (GKE). Similar to Google Cloud service accounts, Kubernetes service accounts are meant to be used by applications, not humans. Kubernetes service accounts can be used to authenticate when an application calls the Kubernetes API of a Kubernetes cluster, but they cannot be used outside of the cluster. They are not recognized by any Google APIs and are therefore not a replacement for a Google Cloud service account. When you deploy an application as a Kubernetes Pod, you can associate the Pod with a service account. This association lets the application use the Kubernetes API without having to configure or maintain certificates or other credentials. By using Workload Identity, you can link a Kubernetes service account to a Google Cloud service account. This link lets an application also authenticate to Google APIs, again without having to maintain certificates or other credentials. What's next Review our reference architectures for identity management. 
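To illustrate the Workload Identity link described above, the following commands are a minimal sketch of how a Kubernetes service account can be allowed to act as a Google Cloud service account. The project ID, namespace, and account names are placeholders, and the sketch assumes a GKE cluster with Workload Identity enabled:
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"
kubectl annotate serviceaccount KSA_NAME \
    --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
The first command lets the Kubernetes service account impersonate the Google Cloud service account; the second records the link on the Kubernetes side so that Pods that use the Kubernetes service account can obtain credentials for the Google Cloud service account without any exported keys.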
Send feedback \ No newline at end of file diff --git a/Overview_of_reliability.txt b/Overview_of_reliability.txt new file mode 100644 index 0000000000000000000000000000000000000000..19d6900f01ee567fbb917ff75f48ba4d6d18a5a3 --- /dev/null +++ b/Overview_of_reliability.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/infra-reliability-guide +Date Scraped: 2025-02-23T11:54:03.860Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud infrastructure reliability guide Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC Reliable infrastructure is a critical requirement for workloads in the cloud. As a cloud architect, to design reliable infrastructure for your workloads, you need a good understanding of the reliability capabilities of your cloud provider of choice. This document describes the building blocks of reliability in Google Cloud (zones, regions, and location-scoped resources) and the availability levels that they provide. This document also provides guidelines for assessing the reliability requirements of your workloads, and presents architectural recommendations for building and managing reliable infrastructure in Google Cloud. This document is divided into the following parts: Overview of reliability (this part) Building blocks of reliability in Google Cloud Assess the reliability requirements for your cloud workloads Design reliable infrastructure for your workloads in Google Cloud Manage traffic and load for your workloads in Google Cloud Manage and monitor your Google Cloud infrastructure If you've read this guide previously and want to see what's changed, see the Release notes. Overview of reliability An application or workload is reliable when it meets your current objectives for availability and resilience to failures. Availability (or uptime) is the percentage of time that an application is usable. For example, for an application that has an availability target of 99.99%, the total downtime must not exceed 8.64 seconds during a 24-hour period. Sometimes, availability is measured as the proportion of requests that the application serves successfully during a given period. For example, for an application that has an availability target of 99.99%, for every 100,000 requests received, not more than ten requests can fail. Availability is often expressed as the number of nines in the percentage. For example, 99.99% availability is expressed as "4 nines". Depending on the purpose of the application, you might have different sets of indicators for how reliable the application is. The following are examples of such reliability indicators: For applications that serve content, availability, latency, and throughput are important reliability indicators. They indicate whether the application can respond to requests, how long the application takes to respond to requests, and how many requests the application can process successfully in a given period. For databases and storage systems, latency, throughput, availability, and durability (how well data is protected against loss or corruption), are indicators of reliability. They indicate how long the system takes to read or write data, and whether data can be accessed on demand. For big data and analytics workloads such as data processing pipelines, consistent pipeline performance (throughput and latency) is essential to ensure freshness of the data products, and is an important reliability indicator. 
It indicates how much data can be processed, and how long it takes for the pipeline to progress from data ingestion to data processing. Most applications have data correctness as an essential reliability indicator. For further guidelines to define the reliability objectives for your applications, see Assess the reliability requirements for your cloud workloads. Note: Planning for disaster recovery (DR) is related to reliability, and DR is essential for business continuity. For detailed guidance about DR planning, see the Disaster recovery planning guide. Factors that affect application reliability The reliability of an application that's deployed in Google Cloud depends on the following factors: The internal design of the application. The secondary applications or components that the application depends on. Google Cloud infrastructure resources such as compute, networking, storage, databases, and security that the application runs on, and how the application uses the infrastructure. Infrastructure capacity that you provision, and how the capacity scales. The DevOps processes and tools that you use to build, deploy, and maintain the application, its dependencies, and the Google Cloud infrastructure. These factors are summarized in the following diagram: As shown in the preceding diagram, the reliability of an application that's deployed in Google Cloud depends on multiple factors. The focus of this guide is the reliability of the Google Cloud infrastructure. What's next Building blocks of reliability in Google Cloud Assess the reliability requirements for your cloud workloads Design reliable infrastructure for your workloads in Google Cloud Manage traffic and load for your workloads in Google Cloud Manage and monitor your Google Cloud infrastructure ContributorsAuthors: Nir Tarcic | Cloud Lifecycle SRE UTLKumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Alok Kumar | Distinguished EngineerAndrew Fikes | Engineering Fellow, ReliabilityChris Heiser | SRE TLDavid Ferguson | Director, Site Reliability EngineeringJoe Tan | Senior Product CounselKrzysztof Duleba | Principal EngineerNarayan Desai | Principal SRESailesh Krishnamurthy | VP, EngineeringSteve McGhee | Reliability AdvocateSudhanshu Jain | Product ManagerYaniv Aknin | Software Engineer Next Building blocks of reliability arrow_forward Send feedback \ No newline at end of file diff --git a/Parallel_file_systems_for_HPC_workloads.txt b/Parallel_file_systems_for_HPC_workloads.txt new file mode 100644 index 0000000000000000000000000000000000000000..aafdb78107d9390941169cabde0223211bd68f59 --- /dev/null +++ b/Parallel_file_systems_for_HPC_workloads.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/parallel-file-systems-for-hpc +Date Scraped: 2025-02-23T11:57:03.412Z + +Content: +Home Docs Cloud Architecture Center Send feedback Parallel file systems for HPC workloads Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-05 UTC This document introduces the storage options in Google Cloud for high performance computing (HPC) workloads, and explains when to use parallel file systems for HPC workloads. In a parallel file system, several clients use parallel I/O paths to access shared data that's stored across multiple networked storage nodes. The information in this document is intended for architects and administrators who design, provision, and manage storage for data-intensive HPC workloads. 
The document assumes that you have a conceptual understanding of network file systems (NFS), parallel file systems, POSIX, and the storage requirements of HPC applications. What is HPC? HPC systems solve large computational problems fast by aggregating multiple computing resources. HPC drives research and innovation across industries such as healthcare, life sciences, media, entertainment, financial services, and energy. Researchers, scientists, and analysts use HPC systems to perform experiments, run simulations, and evaluate prototypes. HPC workloads such as seismic processing, genomics sequencing, media rendering, and climate modeling generate and access large volumes of data at ever increasing data rates and ever decreasing latencies. High-performance storage and data management are critical building blocks of HPC infrastructure. Storage options for HPC workloads in Google Cloud Setting up and operating HPC infrastructure on-premises is expensive, and the infrastructure requires ongoing maintenance. In addition, on-premises infrastructure typically can't be scaled quickly to match changes in demand. Planning, procuring, deploying, and decommissioning hardware on-premises takes considerable time, resulting in delayed addition of HPC resources or underutilized capacity. In the cloud, you can efficiently provision HPC infrastructure that uses the latest technology, and you can scale your capacity on-demand. Google Cloud and our technology partners offer cost-efficient, flexible, and scalable storage options for deploying HPC infrastructure in the cloud and for augmenting your on-premises HPC infrastructure. Scientists, researchers, and analysts can quickly access additional HPC capacity for their projects when they need it. To deploy an HPC workload in Google Cloud, you can choose from the following storage services and products, depending on the requirements of your workload: Workload type Recommended storage services and products Workloads that need low-latency access to data but don't require extreme I/O to shared datasets, and that have limited data sharing between clients. Use NFS storage. Choose from the following options: Filestore Zonal with a higher capacity band Google Cloud NetApp Volumes Workloads that generate complex, interdependent, and large-scale I/O, such as tightly coupled HPC applications that use the Message-Passing Interface (MPI) for reliable inter-process communication. Use a parallel file system. Choose from the following options: Parallelstore, a fully-managed parallel file system based on the open source DAOS storage project DDN EXAScaler Cloud, an enterprise version of Lustre Weka Data Platform, an enterprise storage platform supporting multiple file service protocols including full POSIX, NFS, SMB and S3, with full data share-ability across protocols Sycomp Storage Fueled by IBM Spectrum Scale For more information about the workload requirements that parallel file systems can support, see When to use parallel file systems. Note: For workloads that don't require low latency or concurrent write access, you can use low-cost Cloud Storage, which supports parallel read access and automatically scales to meet your workload's capacity requirement. When to use parallel file systems In a parallel file system, several clients store and access shared data across multiple networked storage nodes by using parallel I/O paths. 
Parallel file systems are ideal for tightly coupled HPC workloads such as data-intensive artificial intelligence (AI) workloads and analytics workloads that use SAS applications. Consider using a parallel file system like Lustre for latency-sensitive HPC workloads that have any of the following requirements: Tightly coupled data processing: HPC workloads like weather modeling and seismic exploration need to process data repetitively by using many interdependent jobs that run simultaneously on multiple servers. These processes typically use MPI to exchange data at regular intervals, and they use checkpointing to recover quickly from failures. Parallel file systems enable interdependent clients to store and access large volumes of shared data concurrently over a low-latency network. Support for POSIX I/O API and for semantics: Parallel file systems like Lustre are ideal for workloads that need both the POSIX API and semantics. A file system's API and its semantics are independent capabilities. For example, NFS supports the POSIX API, which is how applications read and write data by using functions like open(), read(), and write(). But the way NFS coordinates data access between different clients is not the same as POSIX semantics for coordinating data access between different threads on a machine. For example, NFS doesn't support POSIX read-after-write cache consistency between clients; it relies on weak consistency in NFSv3 and close-to-open consistency in NFSv4. Petabytes of capacity: Parallel file systems can be scaled to multiple petabytes of capacity in a single file system namespace. NetApp Volumes and Filestore Zonal support up to 100 TiB per dataset. Cloud Storage offers low-cost and reliable capacity that scales automatically, but might not meet the data-sharing semantics and low-latency requirements of HPC workloads. Low latency and high bandwidth: For HPC workloads that need high-speed access to very large files or to millions of small files, parallel file systems can outperform NFS and object storage. The latency offered by parallel file systems (0.5 ms to 10 ms) is significantly lower than object storage, which can affect the maximum IOPS. In addition, the maximum bandwidth that's supported by parallel file systems can be orders of magnitude higher than in NFS-based systems. For example, DDN EXAScaler on Google Cloud has demonstrated 10+ Tbps read bandwidth, greater than 700 GBps write bandwidth, and 1.9 million file stat() calls per second using the IO500 benchmark. Extreme client scaling: While NFS storage can support thousands of clients, parallel file systems can scale to support concurrent access to shared data from over 10,000 clients. Examples of tightly coupled HPC applications This section describes examples of tightly coupled HPC applications that need the low-latency and high-throughput storage provided by parallel file systems. AI-enabled molecular modeling Pharmaceutical research is an expensive and data-intensive process. Modern drug research organizations rely on AI to reduce the cost of research and development, to scale operations efficiently, and to accelerate scientific research. For example, researchers use AI-enabled applications to simulate the interactions between the molecules in a drug and to predict the effect of changes to the compounds in the drug. These applications run on powerful, parallelized GPU processors that load, organize, and analyze an extreme amount of data to complete simulations quickly. 
Parallel file systems provide the storage IOPS and throughput that's necessary to maximize the performance of AI applications. Credit risk analysis using SAS applications Financial services institutions like mortgage lenders and investment banks need to constantly analyze and monitor the credit-worthiness of their clients and of their investment portfolios. For example, large mortgage lenders collect risk-related data about thousands of potential clients every day. Teams of credit analysts use analytics applications to collaboratively review different parts of the data for each client, such as income, credit history, and spending patterns. The insights from this analysis help the credit analysts make accurate and timely lending recommendations. To accelerate and scale analytics for large datasets, financial services institutions use Grid computing platforms such as SAS Grid Manager. Parallel file systems like DDN EXAScaler on Google Cloud support the high-throughput and low-latency storage requirements of multi-threaded SAS applications. Weather forecasting To predict weather patterns in a given geographic region, meteorologists divide the region into several cells, and deploy monitoring devices such as ground radars and weather balloons in every cell. These devices observe and measure atmospheric conditions at regular intervals. The devices stream data continuously to a weather-prediction application running in an HPC cluster. The weather-prediction application processes the streamed data by using mathematical models that are based on known physical relationships between the measured weather parameters. A separate job processes the data from each cell in the region. As the application receives new measurements, every job iterates through the latest data for its assigned cell, and exchanges output with the jobs for the other cells in the region. To predict weather patterns reliably, the application needs to store and share terabytes of data that thousands of jobs running in parallel generate and access. CFD for aircraft design Computational fluid dynamics (CFD) involves the use of mathematical models, physical laws, and computational logic to simulate the behavior of a gas or liquid around a moving object. When aircraft engineers design the body of an airplane, one of the factors that they consider is aerodynamics. CFD enables designers to quickly simulate the effect of design changes on aerodynamics before investing time and money in building expensive prototypes. After analyzing the results of each simulation run, the designers optimize attributes such as the volume and shape of individual components of the airplane's body, and re-simulate the aerodynamics. CFD enables aircraft designers to collaboratively simulate the effect of hundreds of such design changes quickly. To complete design simulations efficiently, CFD applications need submillisecond access to shared data and the ability to store large volumes of data at speeds of up to 100 GBps. Overview of Lustre and EXAScaler Cloud Lustre is an open source parallel file system that provides high-throughput and low-latency storage for tightly coupled HPC workloads. In addition to standard POSIX mount points in Linux, Lustre supports data and I/O libraries such as NetCDF, HDF5, and MPI-IO, enabling parallel I/O for a wide range of application domains. Lustre powers many of the largest HPC deployments globally. 
A Lustre file system has a scalable architecture that contains the following components: A management server (MGS) stores and manages configuration information about one or more Lustre file systems, and provides this information to the other components. Metadata servers (MDS) manage client access to a Lustre file system's namespace, using metadata (for example, directory hierarchy, filenames, and access permissions). Object storage servers (OSS) manage client access to the files stored in a Lustre file system. Lustre client software allows clients to mount the Lustre file system. Multiple instances of MDS and OSS can exist in a file system. You can add new MDS and OSS instances when required. For more information about the Lustre file system and how it works, see the Lustre documentation. EXAScaler Cloud is an enterprise version of Lustre that's offered by DDN, a Google partner. EXAScaler Cloud is a shared-file solution for high-performance data processing and for managing the large volumes of data required to support AI, HPC, and analytics workloads. EXAScaler Cloud is ideal for deep-learning and inference AI workloads in Google Cloud. You can deploy it in a hybrid-cloud architecture to augment your on-premises HPC capacity. EXAScaler Cloud can also serve as a repository for storing longer-term assets from an on-premises EXAScaler deployment. Overview of Sycomp Storage Fueled by IBM Storage Scale Sycomp Storage Fueled by IBM Storage Scale in Google Cloud Marketplace lets you run your high performance computing (HPC), artificial intelligence (AI), machine learning (ML), and big data workloads in Google Cloud. With Sycomp Storage you can concurrently access data from thousands of VMs, reduce costs by automatically managing tiers of storage, and run your application on-premises or in Google Cloud. Sycomp Storage Fueled by IBM Storage Scale is available in Cloud Marketplace, can be quickly deployed, and supports access to your data through NFS and the IBM Storage Scale client. IBM Storage Scale is a parallel file system that helps to securely manage large volumes (PBs) of data. Sycomp Storage Scale is a parallel file system that's well suited for HPC, AI, ML, big data, and other applications that require a POSIX-compliant shared file system. With adaptable storage capacity and performance scaling, Sycomp Storage can support small to large HPC, AI, and ML workloads. After you deploy a cluster in Google Cloud, you decide how you want to use it. Choose whether you want to use the cluster only in the cloud or in hybrid mode by connecting to existing on-premises IBM Storage Scale clusters, third-party NFS NAS solutions, or other object-based storage solutions. What's next Learn more about DDN EXAScaler Cloud and DDN's partnership with Google. Read about the Google Cloud submission that demonstrates a 10+ Tbps, Lustre-based scratch file system on the IO500 ranking of HPC storage systems. Learn more about Lustre. Learn more about Sycomp Storage Fueled by IBM Spectrum Scale. 
ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Barak Epstein | Product ManagerCarlos Boneti | Senior Staff Software EngineerDean Hildebrand | Technical Director, Office of the CTOSean Derrington | Group Outbound Product Manager, StorageWyatt Gorman | HPC Solutions Manager Send feedback \ No newline at end of file diff --git a/Parallelstore.txt b/Parallelstore.txt new file mode 100644 index 0000000000000000000000000000000000000000..943d927d1385013f5c20622cdb4969fcc962aed7 --- /dev/null +++ b/Parallelstore.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/parallelstore +Date Scraped: 2025-02-23T12:10:12.353Z + +Content: +Parallelstore is now GA. Check out the details on the blog.ParallelstoreHigh performance, managed parallel file serviceParallelstore is based on DAOS and delivers up to 6x greater read throughput performance compared to competitive Lustre scratch offerings.Get startedProduct highlightsFlexibly supports high-bandwidth, high IOPS, and <0.5ms latencyDifferentiated scratch offer with software-managed redundancyBuilt on architecture that supports AI/ML workload patternsSee how it's usedFeaturesFast, scalable performanceHigh bandwidth, high IOPS, and ultra-low latency, utilizing byte-addressable media to store metadata and small I/O, and locally attached NVMe with software-managed redundancy for bulk I/O.Accelerate time to valueConfigure Parallelstore to ideally fit your use case and build a system of the right scale for extreme generative AI and HPC simulation use cases.Futureproof your architectureBuilding with Parallelstore prepares your business for HPC scale, AI/ML convergence, and Kubernetes integration, which means you can grow and scale with minimal disruption. Distributed metadata management, extreme IOPS, and key value store architecture are well-aligned to emerging patterns in AI workloads. Open source flexibilityParallelstore is built on Intel DAOS, which is open source. We have a long history of leadership in open source—from projects like Kubernetes, TensorFlow, and more. Open source gives you the flexibility to deploy—and, if necessary, migrate—critical workloads across or off public cloud platforms. Bring products and services to market faster, without operational overhead or requirement of specialized skills.View all featuresHow It Works"With DAOS on Google Cloud, users can easily and quickly provision storage clusters that can scale to similar performance levels as similar on-premises hardware, but then also be able to dynamically grow and shrink those clusters as needed." - Andrey Kudryavtsev, IntelView reportCommon UsesAI/ML trainingFor systems that need high-speed access to very large files or millions of small files, parallel file systems can outperform NFS and object storage. The latency offered by parallel file systems is significantly lower than other options, which can affect the maximum IOPS, making Parallelstore a great choice for AI/ML workload scratch space.Explore details and tutorialsLearning resourcesFor systems that need high-speed access to very large files or millions of small files, parallel file systems can outperform NFS and object storage. 
The latency offered by parallel file systems is significantly lower than other options, which can affect the maximum IOPS, making Parallelstore a great choice for AI/ML workload scratch space.Explore details and tutorialsQuantitative trading analysisParallelstore is ideally suited to provide scratch analysis of intermediate data in high performance quantitative analysis and trading, and is also well-suited to other complex, high-speed financial services use cases such as fraud detection.Explore documentationLearning resourcesParallelstore is ideally suited to provide scratch analysis of intermediate data in high performance quantitative analysis and trading, and is also well-suited to other complex, high-speed financial services use cases such as fraud detection.Explore documentationComputer-Aided Engineering (CAE)Parallelstore is a good option for manufacturing, auto, aerospace, and biomedical applications doing computational fluid dynamics (CFD), complex modeling, crash simulation, chemical engineering, and more.Explore documentationLearning resourcesParallelstore is a good option for manufacturing, auto, aerospace, and biomedical applications doing computational fluid dynamics (CFD), complex modeling, crash simulation, chemical engineering, and more.Explore documentationPricingHow Parallelstore pricing worksParallelstore pricing is based on instance capacity and the location where your Parallelstore instance is provisioned.ProductPricingInstance PricingFrom $0.0001917808 per GiB per hourData transfer pricing using the Parallelstore APIFrom $0.001 per GiB transferredExplore pricing detailsHow Parallelstore pricing worksParallelstore pricing is based on instance capacity and the location where your Parallelstore instance is provisioned.Instance PricingPricingFrom $0.0001917808 per GiB per hourData transfer pricing using the Parallelstore APIPricingFrom $0.001 per GiB transferredExplore pricing detailsPricing calculatorEstimate your monthly charges using Google Cloud.Estimate your costCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteBegin testing nowTry Parallelstore todayContact usUnderstand the technical benefits of DAOSExplore whitepaperLearn about parallel file systems for HPCVisit documentationWhen to use parallel file systemsLearn moreSee common architecturesBrowse catalog of architecture content \ No newline at end of file diff --git a/Partitioned_multicloud_pattern.txt b/Partitioned_multicloud_pattern.txt new file mode 100644 index 0000000000000000000000000000000000000000..bd815639c8f0617df98877cf74958aeeb2e56556 --- /dev/null +++ b/Partitioned_multicloud_pattern.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/partitioned-multicloud-pattern +Date Scraped: 2025-02-23T11:50:02.101Z + +Content: +Home Docs Cloud Architecture Center Send feedback Partitioned multicloud pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The partitioned multicloud architecture pattern combines multiple public cloud environments that are operated by different cloud service providers. This architecture provides the flexibility to deploy an application in an optimal computing environment that accounts for the multicloud drivers and considerations discussed in the first part of this series. The following diagram shows a partitioned multicloud architecture pattern.
This architecture pattern can be built in two different ways. The first approach is based on deploying the application components in different public cloud environments. This approach is also referred to as a composite architecture and is the same approach as the tiered hybrid architecture pattern. Instead of using an on-premises environment with a public cloud, however, it uses at least two cloud environments. In a composite architecture, a single workload or application uses components from more than one cloud. The second approach deploys different applications on different public cloud environments. The following non-exhaustive list describes some of the business drivers for the second approach: To fully integrate applications hosted in disparate cloud environments during a merger and acquisition scenario between two enterprises. To promote flexibility and cater to diverse cloud preferences within your organization. Adopt this approach to encourage organizational units to choose the cloud provider that best suits their specific needs and preferences. To operate in a multi-regional or global-cloud deployment. If an enterprise is required to adhere to data residency regulations in specific regions or countries, then they need to choose from among the available cloud providers in that location if their primary cloud provider does not have a cloud region there. With the partitioned multicloud architecture pattern, you can optionally maintain the ability to shift workloads as needed from one public cloud environment to another. In that case, the portability of your workloads becomes a key requirement. When you deploy workloads to multiple computing environments, and want to maintain the ability to move workloads between environments, you must abstract away the differences between the environments. By using GKE Enterprise, you can design and build a solution to solve multicloud complexity with consistent governance, operations, and security postures. For more information, see GKE Multi-Cloud. As previously mentioned, there are some situations where there might be both business and technical reasons to combine Google Cloud with another cloud provider and to partition workloads across those cloud environments. Multicloud solutions offer you the flexibility to migrate, build, and optimize applications portability across multicloud environments while minimizing lock-in, and helping you to meet your regulatory requirements. For example, you might connect Google Cloud with Oracle Cloud Infrastructure (OCI), to build a multicloud solution that harnesses the capabilities of each platform using a private Cloud Interconnect to combine components running in OCI with resources running on Google Cloud. For more information, see Google Cloud and Oracle Cloud Infrastructure – making the most of multicloud. In addition, Cross-Cloud Interconnect facilitates high-bandwidth dedicated connectivity between Google Cloud and other supported cloud service providers, enabling you to architect and build multicloud solutions to handle high inter-cloud traffic volume. Advantages While using a multicloud architecture offers several business and technical benefits, as discussed in Drivers, considerations, strategy, and approaches, it's essential to perform a detailed feasibility assessment of each potential benefit. Your assessment should carefully consider any associated direct or indirect challenges or potential roadblocks, and your ability to navigate them effectively. 
Also, consider that the long-term growth of your applications or services can introduce complexities that might outweigh the initial benefits. Here are some key advantages of the partitioned multicloud architecture pattern: In scenarios where you might need to minimize committing to a single cloud provider, you can distribute applications across multiple cloud providers. As a result, you could relatively reduce vendor lock-in with the ability to change plans (to some extent) across your cloud providers. Open Cloud helps to bring Google Cloud capabilities, like GKE Enterprise, to different physical locations. By extending Google Cloud capabilities on-premises, in multiple public clouds, and on the edge, it provides flexibility, agility, and drives transformation. For regulatory reasons, you can serve a certain segment of your user base and data from a country where Google Cloud doesn't have a cloud region. The partitioned multicloud architecture pattern can help to reduce latency and improve the overall quality of the user experience in locations where the primary cloud provider does not have a cloud region or a point of presence. This pattern is especially useful when using high-capacity and low latency multicloud connectivity, such as Cross-Cloud Interconnect and CDN Interconnect with a distributed CDN. You can deploy applications across multiple cloud providers in a way that lets you choose among the best services that the other cloud providers offer. The partitioned multicloud architecture pattern can help facilitate and accelerate merger and acquisition scenarios, where the applications and services of the two enterprises might be hosted in different public cloud environments. Best practices Start by deploying a non-mission-critical workload. This initial deployment in the secondary cloud can then serve as a pattern for future deployments or migrations. However, this approach probably isn't applicable in situations where the specific workload is legally or regulatorily required to reside in a specific cloud region, and the primary cloud provider doesn't have a region in the required territory. Minimize dependencies between systems that are running in different public cloud environments, particularly when communication is handled synchronously. These dependencies can slow performance, decrease overall availability, and potentially incur additional outbound data transfer charges. To abstract away the differences between environments, consider using containers and Kubernetes where supported by the applications and feasible. Ensure that CI/CD pipelines and tooling for deployment and monitoring are consistent across cloud environments. Select the optimal network architecture pattern that provides the most efficient and effective communication solution for the applications you're using. Communication must be fine-grained and controlled. Use secure APIs to expose application components. Consider using either the meshed architecture pattern or one of the gated networking patterns, based on your specific business and technical requirements. To meet your availability and performance expectations, design for end-to-end high availability (HA), low latency, and appropriate throughput levels. To protect sensitive information, we recommend encrypting all communications in transit. If encryption is required at the connectivity layer, various options are available, based on the selected hybrid connectivity solution. 
These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cross-Cloud Interconnect. If you're using multiple CDNs as part of your multicloud partitioned architecture pattern, and you're populating your other CDN with large data files from Google Cloud, consider using CDN Interconnect links between Google Cloud and supported providers to optimize this traffic and, potentially, its cost. Extend your identity management solution between environments so that systems can authenticate securely across environment boundaries. To effectively balance requests across Google Cloud and another cloud platform, you can use Cloud Load Balancing. For more information, see Routing traffic to an on-premises location or another cloud. If the outbound data transfer volume from Google Cloud toward other environments is high, consider using Cross-Cloud Interconnect. To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backends, we recommend, where applicable, to deploy an API gateway or proxy as a unifying facade. This gateway or proxy acts as a centralized control point and performs the following measures: Implements additional security measures. Shields client apps and other services from backend code changes. Facilitates audit trails for communication between all cross-environment applications and its decoupled components. Acts as an intermediate communication layer between legacy and modernized services. Apigee and Apigee hybrid lets you host and manage enterprise-grade and hybrid gateways across on-premises environments, edge, other clouds, and Google Cloud environments. In some of the following cases, using Cloud Load Balancing with an API gateway can provide a robust and secure solution for managing, securing, and distributing API traffic at scale across multiple regions: Deploying multi-region failover for Apigee API runtimes in different regions. Increasing performance with Cloud CDN. Providing WAF and DDoS protection through Google Cloud Armor. Use consistent tools for logging and monitoring across cloud environments where possible. You might consider using open source monitoring systems. For more information, see Hybrid and multicloud monitoring and logging patterns. If you're deploying application components in a distributed manner where the components of a single application are deployed in more than one cloud environment, see the best practices for the tiered hybrid architecture pattern. Previous arrow_back Tiered hybrid pattern Next Analytics hybrid and multicloud patterns arrow_forward Send feedback \ No newline at end of file diff --git a/Patterns_and_practices_for_identity_and_access_governance_on_Google_Cloud.txt b/Patterns_and_practices_for_identity_and_access_governance_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9e25e086da3d3b663c37dd27ac3b0a32b91aebf --- /dev/null +++ b/Patterns_and_practices_for_identity_and_access_governance_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/patterns-practices-identity-access-governance-google-cloud +Date Scraped: 2025-02-23T11:47:39.575Z + +Content: +Home Docs Cloud Architecture Center Send feedback Patterns and practices for identity and access governance on Google Cloud Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2024-07-11 UTC There are a number of Google Cloud products and services that you can use to help your organization develop an approach for identity governance and access management for applications and workloads running on Google Cloud. This document is intended for security administrators, operations managers, and enterprise architects who work in customer teams and who want to learn about these tools and controls and how to use them. This document assumes that you have the following: A Google Cloud project. A user account with administrative access to manage Cloud Identity groups and users. You need this access to run the example procedures in this document. A user account without administrative access to manage Cloud Identity groups and users. You need this account to test some of the controls that you set in the example procedures in this document. If you don't already have access to a Google Cloud project and administrative access to Cloud Identity, see Creating a Google Cloud project and Setting up Cloud Identity. Discover unused accounts and permissions It's a best practice to remove user accounts when they are no longer needed because unused (orphaned) user accounts and service accounts can pose a security risk. You can use Google Cloud Policy Intelligence in the following ways to help your enterprise understand and reduce risk: Helping administrators in your enterprise discover accounts and permissions that aren't being used anymore, for reasons like an employee has left the company or changed roles. Helping to identify service accounts that have been abandoned after the completion of tasks. View and apply IAM recommendations The Identity and Access Management (IAM) recommender is part of the Policy Intelligence suite of tools and services. It uses machine learning (ML) to make smart access control recommendations to help you identify accounts that no longer need access to Google Cloud resources. You can then review the recommendations and decide whether to apply them. IAM recommender also helps you to maintain the principle of least privilege across all the members in your organization. In addition to providing recommendations, the Recommender service uses ML to provide detailed insights. Insights are findings that highlight notable patterns in resource usage. For example, you can collect additional information about permission usage in your project, identify permissions that aren't being used and are no longer needed, and identify unused service accounts. It's possible to view and apply IAM recommendations in the Google Cloud console at an enterprise-level scale. In the following example procedure, you use BigQuery to review and rightsize access permissions in your organization. To set up the BigQuery integration, you configure an export of recommendations made by IAM recommender to a BigQuery dataset. This data can then be queried and reviewed using visualization tools such as Looker Studio and Looker. Implementation In the Google Cloud console, on the project selector page, select or create a Google Cloud project. BigQuery is automatically enabled in new projects. To activate BigQuery in a pre-existing project, enable the BigQuery API. Enable the API Configure the BigQuery Data Transfer Service to pull data from IAM recommender. To learn more, see Exporting recommendations to BigQuery. Go to the BigQuery page. 
Go to BigQuery Copy and paste the following query into the Editor field: SELECT recommendation_details FROM `PROJECT_ID.DATASET.TABLE_NAME` WHERE recommender = "google.iam.policy.Recommender" AND recommender_subtype = "REMOVE_ROLE" Replace the following: PROJECT_ID: the Google Cloud project ID that you are using to execute this example. DATASET: the name of the dataset that you selected when setting up the BigQuery Data Transfer Service job. TABLE_NAME: the name of the table created by the BigQuery Data Transfer Service job. You run this query to identify IAM recommender recommendations that have the REMOVE_ROLE recommender_subtype. Click Run. You use the query result to identify unused roles and rightsize IAM role bindings. You can save query results to Sheets. To learn more, see Saving query results to Sheets. Give users the ability to request access to resources Enterprise administrators need the ability to let users request access to resources. Typically, these requests go through an approval process where a designated approver or a group of approvers must approve the request before access is given. Google Groups lets you apply an access policy to a collection of users, enabling you to follow the policy management best practice of giving access to resources based on group membership. This approach keeps policies relevant as join, move, and leave events occur through group membership changes. You can give and change access controls for a whole group with Google Groups, instead of giving or changing access controls one at a time for individual users or service accounts. You can also easily add members to and remove members from a Google group instead of updating an IAM policy to add or remove users. Set up resource access using Google Groups You can create and manage a Google group using Cloud Identity. Cloud Identity is an identity as a service (IDaaS) solution that manages users and groups. You can also configure Cloud Identity to federate identities between Google and other identity providers, such as Active Directory and Azure Active Directory. Google Groups also lets a user request membership in a group. This request is routed to group administrators who can then approve or decline that request. To learn more, see Create a group and choose group settings. When you create and manage a Google group to give access to Google Cloud resources, make sure to consider the implications of the settings that you select. Although we recommend that you minimize the number of users who can manage the group, set up more than one group administrator so that someone always has access to the group. We also recommend that you restrict group membership to users in your organization. Implementation In this example procedure, you create a Google group and give the group viewer access to a sample Google Cloud project. Members that you add to this group (or that you give access to upon request) are able to view the sample Google Cloud project. Create a sample Google group The following steps assume that you have Cloud Identity configured. To learn more, see Setting up Cloud Identity. Make sure that you have the permissions that you need to manage groups. In the Google Cloud console, go to the Groups page. Go to Groups Click Create. Fill in the details for your group. To add members to the group, click Add member, then enter the email address for the member and choose their Google Groups role. When you're finished, click Submit to create the group.
Group settings can only be managed within Google Groups. To configure group settings, click Manage this group in Google Groups. To select who can join the group, in the Who can join the group menu, select Organization users only. Click Create Group. Grant the group access to a Google Cloud project In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Open Cloud Shell: Go to Cloud Shell Run the following command to give the group viewer access to the project: gcloud projects add-iam-policy-binding PROJECT_ID \ --member=group:GROUP_EMAIL --role=roles/viewer Replace the following: GROUP_EMAIL: the email address of the group you created PROJECT_ID: the ID of your Google Cloud project Test the user access request process for users in your organization In the following procedure, you use a test user account to demonstrate the steps that users in your organization use to request access to a Google group. Sign in to Google Groups as a non-administrative user. The group that you created in Create a sample Google group appears under All Groups. If the group doesn't appear, use search to find it. To request access to the group, click Ask to join group. Once access is given, the non-administrative user account that you used to make the request should be able to view the Google Cloud PROJECT_ID project that the group has Viewer access to. Give time-bound access to Google Cloud resources There might be situations when users in your enterprise require temporary, short-term access to Google Cloud resources. Short-term access is useful when developers need temporary access to Google Cloud resources to perform certain tasks. Short-term access also offers the following advantages: Reducing administrative overhead. Ensuring that the principle of least privilege and timely access is followed. Being able to give this type of access is useful for administrators when users need access to resources in emergency situations that require quick and direct intervention. However, it can be difficult to manually track short-term access permissions and ensure that they are removed in a timely manner. IAM conditional access policies let you set temporary (expiring) access to Google Cloud resources using conditional role bindings, helping to reduce this overhead for administrators. Use conditional role bindings and group membership expiration You can add conditional role bindings to new or existing IAM policies to further control access to Google Cloud resources. Some examples of when you might use conditional role bindings to give a user or a group temporary access are as follows: Access to a project that expires after a specified time. Access to a project that recurs every month or quarter. Access to Compute Engine instances to administer tasks such as stopping instances. When you use Google Groups to give users access to Google Cloud resources, you can use the group membership expiration feature to set expirations for group membership using the Cloud Identity Groups API. When the time that you specify has passed, users are removed from the group automatically. Implementation You can use a conditional role binding to give developers temporary access to administer a specific Compute Engine instance. In this example, the role binding is set to expire on December 31, 2021. 
In Cloud Shell, set the following variables: export INSTANCE=example-instance-1 export ZONE=us-west1-b export USER=USER_ID_TO_GIVE_TEMPORARY_ACCESS_TO Replace USER_ID_TO_GIVE_TEMPORARY_ACCESS_TO with the username of the user in your organization that you want to give temporary access to. Create a sample Compute Engine instance: gcloud compute instances create $INSTANCE \ --zone $ZONE \ --machine-type g1-small You give temporary access to this instance to a user in your organization in the following steps. Give the user that you selected temporary access: gcloud compute instances add-iam-policy-binding $INSTANCE \ --zone=$ZONE \ --member="user:$USER" \ --role='roles/compute.instanceAdmin.v1' \ --condition='expression=request.time < timestamp("2022-01-01T00:00:00Z"),title=expires_end_of_2021,description=Expires at midnight on 2021-12-31' Retain the Compute Engine instance that you create. You use this instance later in this document in Manage privileged access. Alternatively, you can delete the example-instance-1 instance by running the following command: gcloud compute instances delete $INSTANCE Log lifecycle events related to identity If you need to review IAM lifecycle events such as policy changes, service account creation, and service account assignments for auditing, Cloud Audit Logs can help. Administrators can use Cloud Audit Logs to look back at historical data for forensics and analysis. Analyzing audit logs can help you to understand access patterns and access anomalies. Audit log analysis can also be important for the following scenarios: Analyzing permissions and access to resources during a data breach. Analyzing production issues caused by a change in IAM policy, particularly if you want to verify which user or what process made the change. Cloud Audit Logs stores information about the actions that users take, where the activity occurred, and when. Audit logs are classified as follows: Admin Activity audit logs Data Access audit logs System Event audit logs Policy Denied audit logs We recommend that you use the following audit logs for identity and access related administrative logging: Admin Activity audit logs Policy Denied audit logs Admin Activity audit logs store changes made to Google Cloud resources such as projects, Compute Engine instances, and service accounts. The following are examples of events that Admin Activity audit logs store: The creation of a service account. A change in an IAM policy. The download of a service account key. Policy Denied audit logs record when a user or a service account is denied access to a Google Cloud service due to a security policy violation. Set up Cloud Audit Logs for identity lifecycle events You can view audit logs in the Google Cloud console or query the logs using the Cloud Logging API or the command-line interface. All audit logs have a retention period. If your enterprise needs to store audit logs for longer than the default retention period, you need to export the logs to BigQuery or other sink destinations by creating a log sink, as shown in the example sink command later in this section. Exporting logs to BigQuery lets you view a subset of data columns and selected data (over time or other dimensions), and do aggregate analysis. Implementation The following example procedure shows you how to query Google Cloud project logs to check if any of the following events have occurred: There have been IAM policy changes. New service accounts have been created. New service account keys have been generated.
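Optionally, before you run the following queries, you can create the log sink that exports these audit logs to BigQuery for longer retention, as described earlier. The following command is a minimal sketch; the sink name and dataset name (iam-audit-sink and iam_audit_logs) are illustrative, the dataset must already exist, and PROJECT is your project ID:
gcloud logging sinks create iam-audit-sink \
    bigquery.googleapis.com/projects/PROJECT/datasets/iam_audit_logs \
    --project=PROJECT \
    --log-filter='logName="projects/PROJECT/logs/cloudaudit.googleapis.com%2Factivity"'
After the sink is created, grant the sink's writer identity (shown in the command output) the BigQuery Data Editor role on the dataset so that log entries can be written to it.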
View changes to IAM policies In the Google Cloud console, go to the Logging > Logs Explorer page. On the Logs Explorer page, select an existing Google Cloud project. Paste the following query into Query Builder: logName="projects/PROJECT/logs/cloudaudit.googleapis.com%2Factivity" AND (resource.type="project" OR resource.type="service_account") AND resource.labels.project_id="PROJECT" AND (protoPayload.methodName="SetIamPolicy" OR protoPayload.methodName="google.iam.admin.v1.CreateServiceAccount" OR protoPayload.methodName="google.iam.admin.v1.CreateServiceAccountKey") Replace PROJECT with your Google Cloud project ID. Click Run query. View changes to group membership Changes to Google group membership are tracked in Activity logs. To learn how to access these logs, see Viewing group membership change logs. Access certification The Policy Analyzer can be used to help your enterprise verify that users have the appropriate access rights to Google Cloud resources on a set or periodic basis. This verification is important for compliance and audit purposes. It's also useful for security personnel and auditors to review which users have access to which resource and in what capacity. The Policy Analyzer helps you to identify which identities or principals (users, service accounts, groups, and domains) have access to which Google Cloud resources across the resource hierarchy in your organization. It also helps you to identify the type of access. Some example questions that the Policy Analyzer can help you answer are as follows: Which users can access a service account. Which users can read data in a BigQuery dataset that contains personally identifiable information (PII). Policy Analyzer can be used with the following methods: Using the Google Cloud console. Using APIs. Exporting IAM policies data to BigQuery for asynchronous analysis. Use Policy Analyzer to check user access The following example queries show the types of insights that you can gain into user access with the Policy Analyzer: What roles or permissions a principal (user, service account, group, or domain) has; for example, checking what access a former employee has to your production project. Which resources a user has access to; for example, the access a former employee has to your production project resources. Which principals have a certain level of access to a resource; for example, which buckets a specific user can delete in a project. Implementation In the following example procedure, you use the Policy Analyzer to verify the permissions that a user has. In Cloud Shell, enable the Cloud Asset API for the project: Enable the API Enter the following command to find out which resources a user can access: gcloud asset analyze-iam-policy --organization="YOUR_ORG_ID" \ --identity="user:USERNAME_TO_CERTIFY" Make the following replacements: YOUR_ORG_ID: your Google Cloud Organization ID. USERNAME_TO_CERTIFY: the username of the user whose Google Cloud access permissions you want to verify. Extract the IAM policy data to BigQuery. To learn more, see Writing policy analysis to BigQuery. Manage privileged access Some users in your organization might need privileged access to certain Google Cloud resources to perform administrative tasks. For example, these users might need to manage specific Google Cloud projects, set up project billing and budgets, or administer Compute Engine instances. Instead of permanently granting users privileged access to resources, you can let users request just-in-time privileged access.
Using just-in-time privileged access management can help you do the following: Reduce the risk of someone accidentally modifying or deleting resources. For example, when users have privileged access only when it's needed, it helps prevent them from running scripts at other times that unintentionally affect resources that they shouldn't be able to change. Create an audit trail that indicates why privileges were activated. Conduct audits and reviews for analyzing past activity. Alternatively, you can grant privileged access to a service account and allow users to impersonate the service account. Give privileged access to users Broadly, management of privileged access for enterprise users in Google Cloud can be summarized as follows: Giving users in the enterprise the ability to request privileged access. Reviewing Cloud Audit Logs to analyze privileged access requests and access patterns. Administrators can review privileged access patterns and detect anomalies using these logs. We recommend that enterprises consider exporting these logs and retaining them as necessary and appropriate for audit purposes. Ensuring that privileged access either expires automatically or is reviewed periodically. Enabling 2-step verification (also called multi-factor authentication) for all users that have privileged access to resources. You can also create fine-grained, attribute-based access control by using Access Context Manager, which enforces an extra layer of security when privileged access is used. For example, you can have an access level that specifies that users must be on the corporate network when using privileged access to resources. Implementation In this example procedure, you (as an administrator) create a Google group for privileged access to Compute Engine instances. You create a service account in Google Cloud that is given access to administer Compute Engine instances. You associate the group with the service account so that group members are able to impersonate the service account for the period that they are given membership to the privileged group. Create a Google group for privileged access As a Google Cloud administrator, select or create a Google Cloud project. Go to Manage Resources Enable billing for your project. Enable Billing Follow the steps in Give users the ability to request access to resources to create a new Google group. Name the group as follows: elevated-compute-access Create a Google Cloud service account In Cloud Shell, enable the IAM Service Account Credentials API for the project that you created in Create a Google group for privileged access. Enable the APIs Set the following variables: export PROJECT_ID=$DEVSHELL_PROJECT_ID export PRIV_SERVICE_ACCOUNT_NAME=elevated-compute-access export DELEGATE_GROUP=GROUP_EMAIL_ADDRESS Replace GROUP_EMAIL_ADDRESS with the email address of the Google group that you created.
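If you prefer the gcloud CLI to the Enable the APIs button mentioned above, a minimal equivalent looks like the following (run it in the same project before continuing):

# Enable the IAM Service Account Credentials API for the current project.
gcloud services enable iamcredentials.googleapis.com

# Optionally confirm that the API is now enabled.
gcloud services list --enabled | grep iamcredentials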
Create the service account: gcloud iam service-accounts create $PRIV_SERVICE_ACCOUNT_NAME \ --description="Elevated compute access" \ --display-name="Elevated compute access" Give the service account the compute administrator role: gcloud projects add-iam-policy-binding $PROJECT_ID \ --member="serviceAccount:$PRIV_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/compute.admin" Give the Google group that you created Service Usage consumer access for your project: gcloud projects add-iam-policy-binding $PROJECT_ID \ --member="group:$DELEGATE_GROUP" \ --role="roles/serviceusage.serviceUsageConsumer" This permission lets the Google group members impersonate the service account that you created. Give the Google group the ability to impersonate the service account you created: gcloud iam service-accounts add-iam-policy-binding $PRIV_SERVICE_ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com \ --member="group:$DELEGATE_GROUP" \ --role="roles/iam.serviceAccountTokenCreator" Skip this step if you created and retained a sample Compute Engine instance for the procedure in Give time-bound access to Google Cloud resources. You can use the sample instance to run the steps in this example. Alternatively, use the following command to create a sample Compute Engine instance: gcloud compute instances create example-instance-1 \ --zone us-west1-b \ --machine-type g1-small You use the instance in this example to validate that users who are given membership to the privileged group can access the instance. Enable audit logs Administrators in your enterprise can enable Cloud Audit Logs to make sure that privileged access is logged and available for review and analysis. The procedure in this section shows you how to enable audit logging. Get the current IAM policies for the project: gcloud projects get-iam-policy $PROJECT_ID > /tmp/policy.yaml Modify the policy file to enable Data Access logs for the Compute Engine API: cat <<EOF >> /tmp/policy.yaml auditConfigs: - auditLogConfigs: - logType: ADMIN_READ - logType: DATA_READ - logType: DATA_WRITE service: compute.googleapis.com EOF Set the new policy: gcloud projects set-iam-policy $PROJECT_ID /tmp/policy.yaml Test impersonation with the non-administrative user account You can use the non-administrative user account to test the setup by requesting membership to the group and impersonating the service account once membership is given. The procedure in this section shows you how enterprise users can request privileged access to Google Cloud resources. In this example procedure, the Google Cloud resources are the Compute Engine instances for a Google Cloud project. To demonstrate how users in your organization can impersonate a service account once they are given membership to the group, you request membership to the relevant Google group. Sign in to Google Groups with the non-administrative user account and request membership to the elevated-compute-access group. Use the same account to sign in to Google Cloud. You should have access to the group once an administrator approves the request. In this example procedure, it is assumed that your group membership request is approved. Note: Group administrators should give expiring membership to users as appropriate to specific business requirements in your enterprise. They must follow the guidance outlined in the Cloud Identity API documentation. In this document, it's assumed that the user account that you use to set up the Google group in Create a Google group for privileged access is the group administrator.
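As a hedged sketch of the administrator side of that approval (the group and member email addresses are placeholders, and expiring memberships are configured as described in the Cloud Identity documentation referenced in the note above), the group administrator could add and review members from the CLI:

# Placeholder addresses: replace the group and member emails with your own.
gcloud identity groups memberships add \
  --group-email="elevated-compute-access@example.com" \
  --member-email="requesting-user@example.com"

# Review the current members of the privileged group.
gcloud identity groups memberships list \
  --group-email="elevated-compute-access@example.com"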
In Cloud Shell, run the following command to set the default project: gcloud config set project PROJECT_ID Replace PROJECT_ID with the ID of the project that you created earlier in the Create a Google group for privileged access section. Attempt to list the Compute Engine instances in this project: gcloud compute instances list You see an error message informing you that your Google Cloud user doesn't have permission to access Compute Engine resources. Run the following command: gcloud compute instances list --impersonate-service-account=elevated-compute-access@$PROJECT_ID.iam.gserviceaccount.com This command lists the Compute Engine instances in the project by impersonating the service account that you gained access to when you were given membership to the elevated-compute-access Google group. You see the example-instance-1 Compute Engine instance that you created with your administrator account. Check audit logs As a Google Cloud administrator, you can access and review the generated audit logs. Sign in to the Google Cloud console with a user account that has administrative privileges to access audit logs. In Cloud Logging, enter the following query to review data access logs: logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="elevated-compute-access@PROJECT_ID.iam.gserviceaccount.com" Replace PROJECT_ID with your project ID and then run the query. This query shows you which user in the Google group impersonated the service account to access the Compute Engine instance. It also shows you other relevant details such as when the service account was impersonated and the details of the request headers. Review the audit log payload, specifically the protoPayload.authenticationInfo object in the payload. The username of the user who impersonated the service account is logged as the value of the principalEmail key of the firstPartyPrincipal object. As an administrator, you can also review event-threat findings in the Security Command Center dashboard. To learn more about Security Command Center, see Using Event Threat Detection. What's next Learn about zero trust. Discover how Policy Intelligence offers smart access control for your Google Cloud resources. Read about how you can troubleshoot policy and access problems on Google Cloud. Read about best practices for creating and managing Google Cloud service accounts. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Patterns_for_connecting_other_cloud_service_providers_with_Google_Cloud(1).txt b/Patterns_for_connecting_other_cloud_service_providers_with_Google_Cloud(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..9310a248021d6d0c3c39fed4b2a2d09ce7f04660 --- /dev/null +++ b/Patterns_for_connecting_other_cloud_service_providers_with_Google_Cloud(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/patterns-for-connecting-other-csps-with-gcp +Date Scraped: 2025-02-23T11:53:43.769Z + +Content: +Home Docs Cloud Architecture Center Send feedback Patterns for connecting other cloud service providers with Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-14 UTC This document helps cloud architects and operations professionals decide how to connect Google Cloud with other cloud service providers (CSP) such as Amazon Web Services (AWS) and Microsoft Azure.
In a multicloud design, connections between CSPs allow data transfers between your virtual networks. This document also provides information on how to implement the option that you choose. Many organizations operate on multiple cloud platforms, either as a temporary measure during a migration or because the organization has a long-term multicloud strategy. For data exchange between Google Cloud and other CSPs, there are multiple options discussed in this document: Transfer of data through public IP addresses over the internet. Using a managed VPN service between Google Cloud and the other CSP. Using Partner Interconnect on Google Cloud to connect to a partner that can directly connect with the other CSP. Using Dedicated Interconnect on Google Cloud to transfer data to the other CSP through a common point of presence (POP). Using Cross-Cloud Interconnect on Google Cloud for a managed connection to another CSP. These options differ in terms of transfer speed, latency, reliability, service level agreements (SLAs), complexity, and costs. This document describes each option and its advantages and disadvantages in detail and ends with a comparison of all of the options. This document covers data transfers between virtual machines residing in Google Cloud and other CSP's virtual networks. For data stored in other Google Cloud products such as Cloud Storage and BigQuery, see the section covering these products. This document can act as a guide to evaluate the options to transfer data between Google Cloud and one or more other CSPs, based on your requirements and capabilities. The concepts in this document apply in multiple cases: When you are planning to transfer large amounts of data for a short period of time, for example, for a data migration project. If you run continuous data transfers between multiple cloud providers, for example, because your compute workloads run on another CSP while your big data workloads use Google Cloud. Initial considerations Before you choose how to connect your cloud environments, it's important that you look at which regions you choose in each environment and that you have a strategy for transferring data that doesn't reside in virtual network environments. Choose cloud regions If both Google Cloud and other CSP's resources are in regions that are geographically close to each other, there is a cost and latency advantage for data transfers between cloud providers. The following diagram illustrates the flow of data between Google Cloud and other CSPs. Regardless of the method of transfer, data flows from Google Cloud to the other CSP as follows: From the Google Cloud region where resources are hosted to Google's edge POP. Through a third-party facility between Google Cloud and the other CSP. From the other CSP's edge POP to the region where resources are located inside the other CSP's network. Data that flows from the other CSP to Google Cloud travels the same path, but in the opposite direction. The end-to-end path determines the latency of the data transfer. For some solutions, the network costs also increase when the edge POPs of both providers aren't in one metropolitan area. Details are listed in the following pricing section of each solution. Make sure you choose suitable regions in each cloud that can host your intended workloads. Visit the Locations page for Google Cloud and similar pages for other CSPs, such as the AWS region table or Azure products by region. 
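If it helps while you compare placements, you can list the available Google Cloud regions and their zones from the CLI; a minimal sketch (the region name is only an example):

# List all Google Cloud regions, then look at the zones of one candidate region.
gcloud compute regions list
gcloud compute zones list | grep us-east4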
For general help in selecting one or multiple locations in Google Cloud, review Best practices for Compute Engine region selection. Transfer of data in Cloud Storage and BigQuery Only data that resides inside a Google Cloud VPC environment passes through a Cloud VPN tunnel or a Cloud Interconnect connection by default. If you want to transfer data to and from other Google services, you can use Private Service Connect and Private Google Access for on-premises hosts from the other CSP's environment. If you want to transfer data from another CSP's object storage, database, data warehouse, or other products, check their documentation to see if data can pass through their interconnect or managed VPN product. Otherwise, you might be able to pass this data through proxy virtual machines that you set up in the respective CSP's environment to make it pass through the connection you want. For transferring data to Cloud Storage or BigQuery, you can also use Storage Transfer Service or the BigQuery Data Transfer Service. Transfer through external IP addresses over the internet The easiest way to transfer data between Google Cloud and another CSP is to use the internet and transfer the data by using external IP addresses. The following diagram illustrates the elements for this solution. Between Google's network edge and the other CSP's network edge, data passes through the internet or uses direct peering between Google Cloud and the other CSP. Data can only pass between resources that have public IP addresses assigned. How Google connects to other networks Google's edge POPs are where Google's network connects to other networks that collectively form the internet. Google is present in numerous locations around the world. On the internet, every network is assigned an autonomous system number (ASN) that encompasses the network's internal network infrastructure and routes. Google's primary ASN is 15169. There are two primary ways a network can send or receive traffic to or from Google: Buy internet service from an internet service provider (ISP) that already has connectivity to Google (AS15169). This option is generally referred to as IP transit and is similar to what consumers and businesses purchase from access providers at their homes and businesses. Connect directly to Google (AS15169). This option, called peering, lets a network directly send and receive traffic to Google (AS15169) without using a third-party network. At scale, peering is generally preferred over transit because network operators have more control over how and where traffic is exchanged, allowing for optimization of performance and cost. Peering is a voluntary system; when choosing to peer, network operators decide jointly which facilities to connect in, how much bandwidth to provision, how to split infrastructure costs, and any other details required to set up the connections. AS15169 has an open peering policy, which means as long as a network meets the technical requirements, Google will peer with them. Peering is a private, mutually beneficial agreement between two independent networks, and as such, networks generally don't disclose publicly who they peer with at particular locations, how much bandwidth is available, etc. But due to scale and an open policy, Google is directly peered with almost all major ISPs and cloud service providers in multiple locations and across regions. The Google network team works directly with their counterparts at these networks to provide sufficient peering capacity to meet demand.
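As a concrete illustration of this option, and of the firewall and benchmarking guidance in the Implementation and Transfer speed and latency subsections that follow, here is a minimal, hedged sketch of the Google Cloud side (the rule name, port, and addresses are placeholders, and iperf3 must be installed on both VMs):

# Allow inbound iperf3 traffic only from the other CSP's public address.
gcloud compute firewall-rules create allow-iperf-from-other-csp \
  --network=default --direction=INGRESS --action=ALLOW \
  --rules=tcp:5201 --source-ranges=OTHER_CSP_PUBLIC_IP/32

# On the Google Cloud VM (the server side of the benchmark):
iperf3 -s

# On the other CSP's VM (the client side), measure throughput for 30 seconds:
iperf3 -c GOOGLE_CLOUD_VM_EXTERNAL_IP -t 30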
You can read more about how internet peering works at The Internet Peering Playbook. Implementation In this setup, all virtual machines that transfer data between Google Cloud and other cloud service providers must have a public IP address. On one end, the firewall must be opened to allow a connection from the public IP address of the other cloud provider. No extra steps are necessary because the data exchange happens over the existing internet connectivity. Transfer speed and latency While there is no guaranteed speed and latency on the path through the internet, typically, major CSPs and Google exchange data directly in multiple locations around the world. Capacity is shared with other customers and services, but, often due to the direct connection between both providers, latency is similar to or lower than with other options. We recommend that you test the latency and bandwidth between Google Cloud and the other CSPs in your chosen regions. You can do a quick benchmark by using tools such as iperf or netperf, or run a more complete custom benchmark based on your app. While conditions might change over time, the benchmark can provide an indication of the performance you can expect and whether this solution fulfills your needs. If you require dedicated bandwidth between both environments, you should choose another solution. Products from different vendors might have performance characteristics that don't always align. For example, per-tunnel IPsec VPN capacity might vary from vendor to vendor. Security Traffic over the internet isn't encrypted and might pass through third-party internet service providers (ISPs), autonomous systems, and facilities. Therefore, you should encrypt sensitive traffic at the application layer or choose another solution. Reliability and SLA Google Cloud generally has multiple diverse paths for internet connectivity from regions, and there are direct peering connections with other major CSPs at multiple locations around the globe. However, Google Cloud doesn't provide any SLAs for connectivity to other CSPs over the internet. While you should check SLAs for your other CSPs, they typically only refer to internet connectivity as a whole and not specific providers. Providers may have different routing policies, which can affect availability. For example, on its PeeringDB page, Amazon explains that many AWS regions announce only local routes, because AWS VPCs are regional only (Google Cloud VPCs are global). This means customers may be relying on links at a single peering location, as traffic leaving Google Cloud can only use those links to reach the destination. This is fine under normal operation with traffic being exchanged in-region, but it is advisable for customers to architect for multi-region deployments to tolerate regional failures. This could include setting up additional gateways, HA VPNs, virtual network peering, or other multi-region topologies in the third-party cloud. Applications should also be built in such a way that they 'fail open' as Google SRE recommends in the SRE book. For example, if you build an application that relies on being able to reach a third-party service using Internet routing, make sure that the application still functions, or at least returns helpful error messages to the user in the event of connectivity issues. When issues with Internet routing do occur, the Google network team will attempt to restore connectivity with the third party. However, not all issues will be under Google's control.
So in some cases, repair might depend on a third party (ISP or cloud provider) taking restorative action. Customers have the most influence over how operators respond to outages, so make sure you have support coverage with all providers and a plan to escalate issues should something go wrong. Also perform periodic BCP (business continuity process) drills to ensure the resilience of applications architected for multicloud. Pricing For data transfers over the internet, normal internet egress rates apply for traffic leaving Google Cloud. In cases where latency isn't critical, using the Standard Tier of Network Service Tiers provides lower pricing. The other CSPs have their own charges for data transfers. In many cases, they only charge for traffic egressing their network. Consult your CSP's price list; for example, for AWS, see EC2 data transfer charges, and for Azure, see Bandwidth Pricing Details. Managed VPN between cloud providers You can use managed VPN services from both cloud providers, which has two benefits. It provides an encrypted channel between virtual networks in both cloud environments, and it lets you transfer data by using private IP addresses. This is an extension to the previous solution of transferring over the internet, without requiring any hardware or partners. The following diagram illustrates the elements for this solution. Using this solution, data is encrypted on Google Cloud using Cloud VPN and a VPN solution for the other CSP. The data transfer between Google Cloud and the other CSP uses the internet, as in the previous solution. Implementation Google offers Cloud VPN as a managed VPN service for encrypted IPsec tunnels, which can be used on the Google end. Other CSPs offer their own managed VPN products, which you can use to build IPsec VPN tunnels between both environments. For example, AWS offers AWS Site-to-Site VPN and Azure offers Azure VPN gateway. You can connect your virtual networks between the environments by using VPN tunnels. IP addresses in the two environments must not overlap because there is no network address translation (NAT) functionality available in Cloud VPN. On Cloud Router, you could exclude overlapping routes from being exchanged between environments, but then no communication between the services using these IP addresses is possible. With Cloud Router in global dynamic routing mode, you can reach all regions in a global VPC network by using VPN tunnels to only one region. For other CSPs, you might need VPN tunnels per region. If you have multiple virtual networks in a cloud environment that aren't peered, you have to connect all virtual networks that need to communicate with each other independently by using a VPN. Google Cloud offers interoperability guides, which have step-by-step instructions for setting up VPN tunnels to other major cloud providers: Warning: Certain Classic VPN dynamic routing functionality is deprecated. For more information, see Classic VPN dynamic routing partial deprecation. Alibaba Cloud VPN gateway without redundancy: supports static routes only. Alibaba Cloud VPN gateway with redundancy: supports static routes only. AWS: supports dynamic routing with Cloud Router. Microsoft Azure: supports dynamic routing with Cloud Router. Transfer speed and latency When you use managed VPN tunnels, data still follows the same internet path as if data were directly exchanged by using public IP addresses. The latency observed should be similar to the previous option with only a small latency overhead for the VPN tunnel.
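Purely as an illustration of the Google Cloud side outlined in the Implementation subsection above, the following is a hedged sketch of an HA VPN gateway with a single tunnel and BGP session (the network, region, ASNs, peer address, and shared secret are all placeholders; the other CSP's managed VPN gateway must be configured separately, and a production setup would use both gateway interfaces and multiple tunnels for redundancy):

# Create the HA VPN gateway, a Cloud Router, and a peer (external) VPN gateway.
gcloud compute vpn-gateways create ha-vpn-gw-1 --network=NETWORK --region=us-east4
gcloud compute routers create vpn-router-1 --network=NETWORK --region=us-east4 --asn=65001
gcloud compute external-vpn-gateways create other-csp-gw --interfaces=0=OTHER_CSP_VPN_PUBLIC_IP

# Create one IPsec tunnel from interface 0 of the HA VPN gateway.
gcloud compute vpn-tunnels create tunnel-0 \
  --region=us-east4 --vpn-gateway=ha-vpn-gw-1 --interface=0 \
  --peer-external-gateway=other-csp-gw --peer-external-gateway-interface=0 \
  --router=vpn-router-1 --ike-version=2 --shared-secret=SHARED_SECRET

# Attach a BGP interface and peer to the Cloud Router for dynamic routing.
gcloud compute routers add-interface vpn-router-1 --region=us-east4 \
  --interface-name=if-tunnel-0 --vpn-tunnel=tunnel-0 \
  --ip-address=169.254.0.1 --mask-length=30
gcloud compute routers add-bgp-peer vpn-router-1 --region=us-east4 \
  --peer-name=other-csp-peer-0 --interface=if-tunnel-0 \
  --peer-ip-address=169.254.0.2 --peer-asn=65002

The per-tunnel limits discussed next apply to each tunnel created this way.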
The available bandwidth is limited by the maximum bandwidth per VPN tunnel on Google Cloud, the maximum bandwidth of the other CSP's tunnels, and the available bandwidth over the internet path. For higher bandwidth, you can deploy multiple tunnels in parallel. For more information on how to deploy such a solution, see Building high-throughput VPNs. You can test latency and bandwidth as described in the last section, but conditions might vary over time, and there are no guarantees on latency or bandwidth. Security Traffic over IPsec VPN tunnels is encrypted by using ciphers accepted by both CSPs. For more information, see Cloud VPN supported IKE ciphers, AWS VPN FAQ, and Azure VPN IPsec/IKE parameters. Reliability and SLA Cloud VPN offers an SLA of 99.9%-99.99% depending on whether Classic VPN or HA VPN is selected. Other CSPs sometimes offer SLAs on their managed VPN product, for example, AWS Site-to-Site VPN SLA and Azure SLA for VPN Gateway. However, these SLAs only cover the VPN gateway's availability and don't include connectivity to other CSPs over the internet, so there is no SLA on the end-to-end solution. To increase reliability, consider using multiple VPN gateways and tunnels on both Google Cloud and the other CSPs. Pricing For the managed VPN service, charges apply. For Google Cloud, an hourly charge applies; see Cloud VPN pricing. For other CSPs, consult their price lists, for example, see AWS Site-to-Site VPN connection pricing or Azure VPN Gateway pricing. In addition to hourly pricing for the VPN service, you have to pay for data transferred through the VPN gateways. For Google Cloud and many CSPs, standard internet data transfer charges apply, as detailed in Transfer through external IP addresses over the internet. In many cases, data transfer charges exceed the fixed cost for this solution. Partner Interconnect with multicloud-enabled partners Partner Interconnect lets you connect a Virtual Private Cloud to another CSP's virtual networks through the network of select partners that offer direct multicloud solutions. You connect by deploying one or more virtual routing instances that take care of the necessary Border Gateway Protocol (BGP) setup. The following diagram shows a redundant setup using two Partner Interconnect connections. Routes are exchanged between Cloud Router and the gateway on the other CSP's side through a virtual routing instance that is managed by the partner providing the interconnect. Traffic flows through the partner network between Google Cloud and the other CSP. Implementation This solution requires you to set up multiple components: On the Google Cloud side, you set up Partner Interconnect with an interconnect service provider that's serving your Google Cloud regions and offering multicloud connectivity to the other CSP. On the other CSP, you have to use their interconnect product to connect to the same partner. For example, on AWS you can use Direct Connect and on Azure you can use ExpressRoute. On the service provider partner side, you have to configure the virtual routing equipment providing the BGP sessions to Google Cloud and to the other CSP. If IP address space between both CSP environments overlaps, your partner might offer NAT functionality for the virtual routing equipment. See the partner's documentation for details. Transfer speed and latency This solution offers dedicated capacity between Google Cloud and other CSPs. Depending on the partner and the other CSP, the available attachment capacity might vary.
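To make the Google Cloud side of this setup concrete, here is a hedged sketch of creating a Partner Interconnect VLAN attachment and retrieving the pairing key that you hand to the partner (the names, region, and availability domain are placeholders; Partner Interconnect requires Cloud Router ASN 16550):

# Cloud Router for the attachment; Partner Interconnect uses ASN 16550.
gcloud compute routers create partner-router --network=NETWORK --region=us-east4 --asn=16550

# Create the partner VLAN attachment and print its pairing key.
gcloud compute interconnects attachments partner create other-csp-attachment \
  --region=us-east4 --router=partner-router \
  --edge-availability-domain=availability-domain-1
gcloud compute interconnects attachments describe other-csp-attachment \
  --region=us-east4 --format="value(pairingKey)"

You give the pairing key to the partner, which then provisions the connection toward the other CSP.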
On the Google Cloud side, Partner Interconnect is available with an attachment capacity between 50 Mbps and 50 Gbps. Latency for this solution is the sum of the following: Latency between the region in which your resources are hosted on Google Cloud and the interconnect location where the partner connects to Google Cloud. Latency in the partner's network to, from, and through the virtual routing instance towards the other CSP. Latency from the other CSP's edge location where the interconnect with the partner takes place to the region where the resources are hosted in the CSP. For the lowest possible latency, the edge locations of Google Cloud and the other CSP should be in the same metropolitan area, along with the virtual routing instance. For example, you might have a low-latency connection if both CSPs' cloud regions as well as the edge POP and the virtual routing instance are located in the Ashburn, Virginia area. While Google Cloud and many other CSPs offer no latency guarantees for traffic towards their network edge, because there is a dedicated path and capacity through the partner, typically the latency in this solution is less variable than if you use external IP addresses or a VPN solution. Security Traffic over Partner Interconnect isn't encrypted by default. To help secure your traffic, you can deploy HA VPN over Cloud Interconnect on the Google Cloud side of the connection. Some other CSPs allow you to use their managed VPN service over an interconnect; for example, AWS Site-to-Site VPN can be used over AWS Direct Connect. Alternatively, you can also use a virtual appliance that encrypts the traffic on the other CSP's side. Another option is to encrypt your traffic at the application layer instead of using VPN. Reliability and SLA This solution involves three different SLAs: one from Google, one from the interconnect partner, and one from the other CSP. When using Partner Interconnect redundantly, Google offers 99.9%-99.99% monthly SLAs depending on the topology chosen. There is no SLA on a single Partner Interconnect connection. See the other CSP's documentation for the SLA on their interconnect product, for example, the AWS Direct Connect SLA or the Azure SLA for ExpressRoute. Consult the documentation or terms of the partner service provider for the Partner Interconnect for their SLA on the availability of the connectivity and virtual routing instance. For example, see the Megaport Global Services Agreement. Pricing On the Google Cloud side, there is a monthly fee for each Partner Interconnect attachment, depending on the bandwidth. Traffic egressing through the Partner Interconnect gets charged at a lower rate than internet traffic. For more information, see the Partner Interconnect pricing page. Consult the other CSP's pricing page for their interconnect product, for example AWS Direct Connect pricing or Azure ExpressRoute pricing. Typically, pricing also has a monthly charge for the interconnect and data transfer charges through the interconnect at a lower rate than over the internet. The partner that provides the interconnect service charges according to its own pricing, which can be found on its website or by consulting its sales team for a quote. Typically, if all data transfers happen in the same metropolitan area, charges are much lower than if data has to travel a longer distance on a partner network.
When data is transferred regularly at sufficiently high volumes, depending on the other CSP's prices, this solution can sometimes offer the lowest total cost due to the discounted egress rates. Even when adding in monthly costs for the interconnect for Partner Interconnect, the other CSP, and the service provider partner, using this solution can provide significant savings. As partners' and other CSPs' prices can change without notice, make your own comparison using up-to-date quotes from all involved parties. Dedicated Interconnect through a common POP By using one or more physical routing devices at a common interconnect facility between Google Cloud and the other CSP, you can connect both cloud providers by using Dedicated Interconnect on the Google Cloud side and an equivalent product at the other CSP. The interconnect location isn't necessarily at the same location as the region in which your resources are located. The following diagram shows a redundant setup using two Dedicated Interconnect connections: Routes are exchanged between Cloud Router and the gateway on the other CSP's side through a physical router that you place in a common interconnect facility. Traffic flows through this router between Google Cloud and the other CSP. Implementation This solution requires that you host and maintain physical routing devices at a colocation facility where Google and the CSP you want to connect to are present. From this routing device, you order two physical connections with the facility: one toward Google Cloud using Dedicated Interconnect, and one toward the other service provider using an equivalent product, for example, AWS Direct Connect or Azure ExpressRoute. On your routing device, you need to configure BGP to allow route exchanges between the two cloud environments. Check the Colocation facility locations list from Google Cloud and your other CSP, for example AWS Direct Connect Locations or Azure ExpressRoute peering locations, to identify suitable locations for this setup. The IP address ranges between your virtual networks shouldn't overlap, unless your physical routing device is able to perform NAT between the environments or you restrict some route exchanges between the environments. This solution is effective if you use the dedicated connectivity to also connect back to your on-premises environment, in addition to the connection to another CSP. In other cases, this solution is complex because it requires you to own and maintain physical equipment and have a contract with a colocation facility. We only recommend this solution if at least one of the following is true: You already have equipment in a suitable facility for another purpose and an existing contract with the facility. You transfer large amounts of data regularly, so this is a cost-effective option, or you have bandwidth requirements that partners cannot fulfill. Note: You can also implement this solution by using Partner Interconnect. In this case, your physical routing equipment can also be located at another site such as your data center or other on-premises environment. However, due to the additional latency to and from this location and charges for the connection to on-premises by the partner, this solution is generally not recommended for pure cloud-to-cloud connectivity unless traffic needs to be inspected on-premises. Transfer speed and latency This solution offers dedicated capacity between Google Cloud and another CSP.
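For orientation only, here is a hedged sketch of the Google Cloud side of this option: ordering a Dedicated Interconnect and attaching a VLAN to a VPC (the names, location, link type, and region are placeholder assumptions; the physical cross-connect to your own router and the other CSP's side are provisioned separately):

# Order a Dedicated Interconnect at a colocation facility (placeholder values).
gcloud compute interconnects create csp-bridge-interconnect \
  --customer-name="EXAMPLE_ORG" --interconnect-type=DEDICATED \
  --link-type=LINK_TYPE_ETHERNET_10G_LR --requested-link-count=1 \
  --location=INTERCONNECT_LOCATION

# After the interconnect is provisioned, attach a VLAN to a VPC through a Cloud Router.
gcloud compute routers create dedicated-router --network=NETWORK --region=us-east4 --asn=65010
gcloud compute interconnects attachments dedicated create csp-bridge-attachment \
  --interconnect=csp-bridge-interconnect --router=dedicated-router --region=us-east4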
On the Google Cloud side, Dedicated Interconnect is available by using one or multiple 10-Gbps or 100-Gbps physical connections. The latency for this solution is a sum of the following: Latency between the region in which your resources are hosted on Google Cloud and the interconnect location where you interconnect with Google Cloud. Latency through the facility and your physical equipment, which is usually negligible when compared with any latency due to fiber length. Latency from the interconnect location through the other CSP's network to the region where the resources are hosted in the CSP. While no latency guarantees are offered, this solution typically allows for the lowest latency and highest transfer speeds between Google Cloud and other cloud environments over private IP addresses. Security Traffic over Dedicated Interconnect isn't encrypted by default. To help secure your traffic, you can deploy HA VPN over Cloud Interconnect on the Google Cloud side of the connection. Some other CSPs allow you to use their managed VPN service over an interconnect; for example, AWS Site-to-Site VPN can be used over AWS Direct Connect. Alternatively, you can also use a virtual appliance that encrypts the traffic on the other CSP's side. Another option is to encrypt your traffic at the application layer instead of using VPN. Reliability and SLA When using Dedicated Interconnect redundantly, Google offers 99.9%-99.99% monthly SLAs depending on the chosen topology. There is no SLA on a single Dedicated Interconnect connection. For more information, see the other CSP's documentation for the SLA on their interconnect product, for example, AWS Direct Connect SLA or Azure SLA for ExpressRoute. The colocation facility or hardware vendor for the physical routing equipment might also offer SLAs on their services. Pricing On the Google Cloud side, there is a monthly fee for each Dedicated Interconnect port as well as for each VLAN attachment that connects to a VPC environment. Traffic that egresses through Dedicated Interconnect is charged at a lower rate than internet traffic. For more information, see the Dedicated Interconnect pricing page. Consult the other CSP's pricing page for their interconnect product, for example, AWS Direct Connect pricing or Azure ExpressRoute pricing. Typically, pricing also has a monthly charge for the interconnect and data transfer charges through the interconnect at a lower rate than over the internet. In addition, you need to factor in charges for the colocation facility services providing space, electrical power, and physical connections towards both cloud environments as well as any cost and ongoing service contracts with the vendor for the physical routing equipment. If the connection between both CSPs cannot happen in the same facility and you need to procure connectivity between facilities, pricing might be much higher for these services. Cross-Cloud Interconnect managed connections You can connect your Google Cloud VPC networks to your virtual networks in another CSP over Google's network fabric. In a sense, this setup works like Partner Interconnect, but with the Google SLA covering both the Google networks and the interconnections themselves. The following diagram shows a Cross-Cloud Interconnect configuration with the minimum number of connections. Routes are exchanged between Cloud Router and the gateway on the other CSP's side over Google's network fabric. Traffic flows through this fabric between Google Cloud and the other CSP.
Implementation When you buy Cross-Cloud Interconnect, Google provisions a dedicated physical connection between the Google network and that of another cloud service provider. You can use this connection to peer your Google Cloud Virtual Private Cloud (VPC) network with a network that's hosted by a supported cloud service provider. After you provision the connection, Google supports the connection up to the point where it reaches the network of the other cloud service provider. Google does not guarantee uptime from the other cloud service provider. However, Google remains the primary point of contact for the full service and will notify you if you need to open a support case with the other CSP. This solution requires you to follow the setup process for the other CSP, including choosing where the two networks are going to interconnect. Only certain CSPs are supported. Transfer speed and latency This solution offers dedicated capacity between Google Cloud and another CSP. On the Google Cloud side, Cross-Cloud Interconnect is available by using one or multiple pairs of 10-Gbps or 100-Gbps physical connections. The latency for this solution is a sum of the following: Latency between the region in which your resources are hosted on Google Cloud and the cross-cloud location. Latency between the Google edge location and the other CSP's edge location (often in the same facility). Latency from the other CSP's edge location where the Cross-Cloud Interconnect is deployed to the region where the resources are hosted in the CSP. Although no latency guarantees are offered, this solution typically allows for the lowest possible latency and highest possible transfer speeds between Google Cloud and other cloud environments over private IP addresses. Security Because traffic over Cross-Cloud Interconnect isn't encrypted, we recommend using application-layer encryption for sensitive traffic. If all traffic needs to be encrypted, virtual appliances that are available from Google Cloud partners in the Cloud Marketplace can provide solutions for encrypting the traffic to the other CSP's environment. Reliability and SLA Cross-Cloud Interconnect uses the Cloud Interconnect SLA. To qualify for the SLA, your Cross-Cloud Interconnect configuration must use one or more pairs of connections, as described in the Service-level agreement section of the Cross-Cloud Interconnect overview. The SLA covers everything on the Google side up to the edge of the other cloud provider's network. It does not cover the other CSP's network. For more information, see the other CSP's documentation for the SLA on their interconnect product; for example, AWS Direct Connect SLA or Azure SLA for ExpressRoute. Pricing There is an hourly fee for each Cross-Cloud Interconnect connection as well as for each VLAN attachment that connects to a VPC environment. Traffic that egresses through Cross-Cloud Interconnect is charged at a lower rate than internet traffic. For more information, see Cross-Cloud Interconnect pricing. Consult the other CSP's pricing page for their interconnect product, for example, AWS Direct Connect pricing or Azure ExpressRoute pricing. Typically, there is a monthly charge for the interconnect. Charges for data transfer through the interconnect are typically lower than charges for data transfer over the internet. There are no separate costs for interconnect locations or equipment. Comparison of options The presented options vary in speed, availability, complexity, security, and pricing.
You should evaluate all options thoroughly according to your needs. The following flow chart guides you through the process of choosing one of the solutions mentioned in this document. Typically, we can recommend the following choices: For many scenarios where data is exchanged occasionally or at low volume and transfers aren't critical, transferring data over the internet is the simplest option and still provides relatively low latency and high bandwidth. If encryption or transfer of smaller amounts of data using private IP addresses is required, consider using Cloud VPN and a managed VPN service on the other CSP's side. If you transfer larger amounts of data, using Partner Interconnect with a multicloud-enabled partner provides multiple benefits: dedicated capacity, lower cost for data transfers, and, depending on the topology, an SLA for each part of the solution. Partner Interconnect capacity is normally less than 10 Gbps per connection. If you connect your on-premises equipment to multiple clouds, using Dedicated Interconnect through a common POP is a common option. It comes with the additional complexity of managing your own hardware and relationships with a colocation facility. Unless you already have the infrastructure in place, this solution is reserved for cases where typical data transfer rates are 10 Gbps or more. If you don't want the overhead of managing cross-connects and routing equipment in remote POPs, Cross-Cloud Interconnect provides a managed solution where Google handles all of that for you. What's next Review the series about Hybrid and multicloud patterns and practices. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Patterns_for_connecting_other_cloud_service_providers_with_Google_Cloud.txt b/Patterns_for_connecting_other_cloud_service_providers_with_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..e1e057daeba31d34097ed77b1f23df9cfb7514aa --- /dev/null +++ b/Patterns_for_connecting_other_cloud_service_providers_with_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/patterns-for-connecting-other-csps-with-gcp +Date Scraped: 2025-02-23T11:51:08.338Z + +Content: +Home Docs Cloud Architecture Center Send feedback Patterns for connecting other cloud service providers with Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-14 UTC This document helps cloud architects and operations professionals decide how to connect Google Cloud with other cloud service providers (CSP) such as Amazon Web Services (AWS) and Microsoft Azure. In a multicloud design, connections between CSPs allow data transfers between your virtual networks. This document also provides information on how to implement the option that you choose. Many organizations operate on multiple cloud platforms, either as a temporary measure during a migration or because the organization has a long-term multicloud strategy. For data exchange between Google Cloud and other CSPs, there are multiple options discussed in this document: Transfer of data through public IP addresses over the internet. Using a managed VPN service between Google Cloud and the other CSP. Using Partner Interconnect on Google Cloud to connect to a partner that can directly connect with the other CSP.
Using Dedicated Interconnect on Google Cloud to transfer data to the other CSP through a common point of presence (POP). Using Cross-Cloud Interconnect on Google Cloud for a managed connection to another CSP. These options differ in terms of transfer speed, latency, reliability, service level agreements (SLAs), complexity, and costs. This document describes each option and its advantages and disadvantages in detail and ends with a comparison of all of the options. This document covers data transfers between virtual machines residing in Google Cloud and other CSP's virtual networks. For data stored in other Google Cloud products such as Cloud Storage and BigQuery, see the section covering these products. This document can act as a guide to evaluate the options to transfer data between Google Cloud and one or more other CSPs, based on your requirements and capabilities. The concepts in this document apply in multiple cases: When you are planning to transfer large amounts of data for a short period of time, for example, for a data migration project. If you run continuous data transfers between multiple cloud providers, for example, because your compute workloads run on another CSP while your big data workloads use Google Cloud. Initial considerations Before you choose how to connect your cloud environments, it's important that you look at which regions you choose in each environment and that you have a strategy for transferring data that doesn't reside in virtual network environments. Choose cloud regions If both Google Cloud and other CSP's resources are in regions that are geographically close to each other, there is a cost and latency advantage for data transfers between cloud providers. The following diagram illustrates the flow of data between Google Cloud and other CSPs. Regardless of the method of transfer, data flows from Google Cloud to the other CSP as follows: From the Google Cloud region where resources are hosted to Google's edge POP. Through a third-party facility between Google Cloud and the other CSP. From the other CSP's edge POP to the region where resources are located inside the other CSP's network. Data that flows from the other CSP to Google Cloud travels the same path, but in the opposite direction. The end-to-end path determines the latency of the data transfer. For some solutions, the network costs also increase when the edge POPs of both providers aren't in one metropolitan area. Details are listed in the following pricing section of each solution. Make sure you choose suitable regions in each cloud that can host your intended workloads. Visit the Locations page for Google Cloud and similar pages for other CSPs, such as the AWS region table or Azure products by region. For general help in selecting one or multiple locations in Google Cloud, review Best practices for Compute Engine region selection. Transfer of data in Cloud Storage and BigQuery Only data that resides inside a Google Cloud VPC environment passes a Cloud VPN tunnel or a Cloud Interconnect connection by default. If you want to transfer data to and from other Google services, you can use Private Service Connect and Private Google Access for on-premises hosts from the other CSP's environment. If you want to transfer another CSP's object storage, database, data warehouse, or other products, check their documentation to see if data can pass through their interconnect or managed VPN product. 
Otherwise, you might be able to pass this data through proxy virtual machines that you set up in the respective CSP's environment to make it pass through the connection you want. For transferring data to Cloud Storage or BigQuery, you can also use Storage transfer service or BigQuery transfer service. Transfer through external IP addresses over the internet The easiest way to transfer data between Google Cloud and another CSP is to use the internet and transfer the data by using external IP addresses. The following diagram illustrates the elements for this solution. Between Google's network edge and the other CSP's network edge, data passes through the internet or uses a direct peering between Google Cloud and the other CSP. Data can only pass between resources that have public IP addresses assigned. How Google connects to other networks Google's edge POPs are where Google's network connects to other networks that collectively form the internet. Google is present in numerous locations around the world. On the internet, every network is assigned an autonomous system number (ASN) that encompasses the network's internal network infrastructure and routes. Google's primary ASN is 15169. There are two primary ways a network can send or receive traffic to or from Google: Buy internet service from an internet service provider (ISP) that already has connectivity to Google (AS15169). This option is generally referred to as IP transit and is similar to what consumers and businesses purchase from access providers at their homes and businesses. Connect directly to Google (AS15169). This option, called peering, lets a network directly send and receive traffic to Google (AS15169) without using a third-party network. At scale, peering is generally preferred over transit because network operators have more control over how and where traffic is exchanged, allowing for optimization of performance and cost. Peering is a voluntary system; when choosing to peer, network operators decide jointly which facilities to connect in, how much bandwidth to provision, how to split infrastructure costs, and any other details required to set up the connections. AS15169 has an open peering policy, which means as long as a network meets the technical requirements, Google will peer with them. Peering is a private, mutually beneficial agreement between two independent networks, and as such, networks generally don't disclose publicly who they peer with at particular locations, how much bandwidth is available, etc. But due to scale and an open policy, Google is directly peered with almost all major ISPs and cloud services providers in multiple locations and across regions. The Google network team works directly with their counterparts at these networks to provide sufficient peering capacity to meet demand. You can read more about how internet peering works at The Internet Peering Playbook. Implementation In this setup, all virtual machines that transfer data between Google Cloud and other cloud service providers must have a public IP address. On one end, the firewall must be opened to allow a connection from the public IP address of the other cloud provider. No extra steps are necessary because the data exchange happens over the existing internet connectivity. Transfer speed and latency While there is no guaranteed speed and latency on the path through the internet, typically, major CSPs and Google exchange data directly in multiple locations around the world. 
Capacity is shared with other customers and services, but, often due to the direct connection between both providers, latency is similar or lower than other options. We recommend that you test the latency and bandwidth between Google Cloud and the other CSPs in your chosen regions. You can do a quick benchmark by using tools such as iperf or netperf, or run a more complete custom benchmark based on your app. While conditions might change over time, the benchmark can provide an indication of the performance you can expect and if this solution fulfills your needs. If you require a dedicated bandwidth between both environments, you should choose another solution. Note that products from different vendors may have performance characteristics that might not always align. For example, per-tunnel IPsec VPN capacity might vary from vendor to vendor. Security Traffic over the internet isn't encrypted and might pass through third-party internet service providers (ISPs), autonomous systems, and facilities. Therefore, you should encrypt sensitive traffic at the application layer or choose another solution. Reliability and SLA Google Cloud generally has multiple diverse paths for internet connectivity from regions, and there are direct peering connections with other major CSPs at multiple locations around the globe. However, Google Cloud doesn't provide any SLAs for connectivity to other CSPs over the internet. While you should check SLAs for your other CSPs, they typically only refer to internet connectivity as a whole and not specific providers. Providers may have different routing policies which can impact availability. For example, on its peeringdb page, Amazon explains that many AWS regions announce only local routes, because AWS VPCs are regional only (Google Cloud VPCs are global.) This means customers may be relying on links at a single peering location, as traffic leaving Google Cloud can only use those links to reach the destination. This is fine under normal operation with traffic being exchanged in-region, but it is advisable for customers to architect for multi-region deployments to tolerate regional failures. This could include setting up additional gateways, HA VPNs, virtual network peering, or other multi-region topologies in the third-party cloud. Applications should also be built in such a way that they 'fail open' as Google SRE recommends in the SRE book. For example, if you build an application that relies on being able to reach a third-party service using Internet routing, make sure that the application still functions, or at least returns helpful error messages to the user in the event of connectivity issues. When issues with Internet routing do occur, the Google network team will attempt to restore connectivity with the third-party. However, not all issues will be under Google's control. So in some cases, repair might depend on a third party (ISP or cloud provider) taking restorative action. Customers have the most influence over how operators respond to outages, so make sure you have support coverage with all providers and a plan to escalate issues should something go wrong. Also perform periodic BCP (business continuity process) drills to ensure the resilience of applications architected on multicloud. Pricing For data transfers over the internet, normal internet egress rates apply for traffic leaving Google Cloud. In cases where latency isn't critical, using the Standard Network Service Tier provides a lower pricing tier. 
The other CSPs have their own charges for data transfers. In many cases, they only charge for traffic egressing their network. Consult your CSP's price list; for example, for AWS, see EC2 data transfer charges, and for Azure, see Bandwidth Pricing Details. Managed VPN between cloud providers You can use managed VPN services from both cloud providers, which has two benefits. It provides an encrypted channel between virtual networks in both cloud environments, and it lets you transfer data by using private IP addresses. This is an extension to the previous solution of transferring over the internet, without requiring any hardware or partners. The following diagram illustrates the elements for this solution. Using this solution, data is encrypted on Google Cloud using Cloud VPN and a VPN solution for the other CSP. The data transfer between Google Cloud and the other CSP uses the internet, as in the previous solution. Implementation Google offers Cloud VPN as a managed VPN service for encrypted IPsec tunnels, which can be used on the Google end. Other CSPs offer their own managed VPN products, which you can use to build IPsec VPN tunnels between both environments. For example, AWS offers AWS Site-to-Site VPN and Azure offers Azure VPN gateway. You can connect your virtual networks between the environments by using VPN tunnels. IP addresses in the two environments must not overlap because there is no network address translation (NAT) functionality available in Cloud VPN. On Cloud Router, you could exclude overlapping routes from being exchanged between environments, but then no communication between the services using these IP addresses is possible. With Cloud Router in global dynamic routing mode, you can reach all regions in a global VPC network by using VPN tunnels to only one region. For other CSPs, you might need VPN tunnels per region. If you have multiple virtual networks in a cloud environment that aren't peered, you have to connect all virtual networks that need to communicate with each other independently by using a VPN. Google Cloud offers interoperability guides, which have step-by-step instructions for setting up VPN tunnels to other major cloud providers: Warning: Certain Classic VPN dynamic routing functionality is deprecated. For more information, see Classic VPN dynamic routing partial deprecation. Alibaba Cloud VPN gateway without redundancy: supports static routes only. Alibaba Cloud VPN gateway with redundancy: supports static routes only. AWS: supports dynamic routing with Cloud Router. Microsoft Azure: supports dynamic routing with Cloud Router. Transfer speed and latency When you use managed VPN tunnels, data still follows the same internet path as if data were directly exchanged by using public IP addresses. The latency observed should be similar to the previous option with only a small latency overhead for the VPN tunnel. The available bandwidth is limited by the maximum bandwidth per VPN tunnel on Google Cloud, the maximum bandwidth of the other CSP's tunnels, and the available bandwidth over the internet path. For higher bandwidth, you can deploy multiple tunnels in parallel. For more information on how to deploy such a solution, see Building high-throughput VPNs. You can test latency and bandwidth as described in the last section, but conditions might vary over time, and there are no guarantees on latency or bandwidth. Security Traffic over IPsec VPN tunnels is encrypted by using ciphers accepted by both CSPs.
For more information, see Cloud VPN supported IKE ciphers, AWS VPN FAQ, and Azure VPN IPsec/IKE parameters. Reliability and SLA Cloud VPN offers an SLA of 99.9%-99.99%, depending on whether Classic VPN or HA VPN is used. Other CSPs sometimes offer SLAs on their managed VPN product, for example, AWS Site-to-Site VPN SLA and Azure SLA for VPN Gateway. However, these SLAs cover only the availability of the VPN gateways and don't include connectivity to other CSPs over the internet, so there is no SLA on the end-to-end solution. To increase reliability, consider using multiple VPN gateways and tunnels on both Google Cloud and the other CSPs. Pricing For the managed VPN service, charges apply. For Google Cloud, an hourly charge applies; see Cloud VPN pricing. For other CSPs, consult their price lists; for example, see AWS Site-to-Site VPN connection pricing or Azure VPN Gateway pricing. In addition to hourly pricing for the VPN service, you have to pay for data transferred through the VPN gateways. For Google Cloud and many CSPs, standard internet data transfer charges apply, as detailed in Transfer through external IP addresses over the internet. In many cases, data transfer charges exceed the fixed cost for this solution. Partner Interconnect with multicloud-enabled partners Partner Interconnect lets you connect a Virtual Private Cloud to another CSP's virtual networks through the network of select partners that offer direct multicloud solutions. You connect by deploying one or more virtual routing instances that take care of the necessary Border Gateway Protocol (BGP) setup. The following diagram shows a redundant setup using two Partner Interconnect connections. Routes are exchanged between Cloud Router and the gateway on the other CSP's side through a virtual routing instance that is managed by the partner providing the interconnect. Traffic flows through the partner network between Google Cloud and the other CSP. Implementation This solution requires you to set up multiple components: On the Google Cloud side, you set up Partner Interconnect with an interconnect service provider that's serving your Google Cloud regions and offering multicloud connectivity to the other CSP. On the other CSP, you have to use their interconnect product to connect to the same partner. For example, on AWS you can use Direct Connect and on Azure you can use ExpressRoute. On the service provider partner side, you have to configure the virtual routing equipment that provides the BGP sessions to Google Cloud and to the other CSP. If the IP address space between the two CSP environments overlaps, your partner might offer NAT functionality for the virtual routing equipment. See the partner's documentation for details. Transfer speed and latency This solution offers dedicated capacity between Google Cloud and other CSPs. Depending on the partner and the other CSP, the available attachment capacity might vary. On the Google Cloud side, Partner Interconnect is available with an attachment capacity between 50 Mbps and 50 Gbps. Latency for this solution is the sum of the following: Latency between the region in which your resources are hosted on Google Cloud and the interconnect location where the partner connects to Google Cloud. Latency in the partner's network to, from, and through the virtual routing instance towards the other CSP. Latency from the other CSP's edge location where the interconnect with the partner takes place to the region where the resources are hosted in the CSP.
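To get a rough sense of the latency that these segments add up to, you can time TCP connections from a VM in your Google Cloud region to an endpoint that you control in the other CSP's region; for bandwidth, a tool such as iperf or netperf gives you more detailed numbers. The following is a minimal sketch only, and the host and port are placeholders, not real endpoints.

# Rough latency probe: time TCP connection setup from a VM in your Google Cloud
# region to an endpoint in the other CSP's region. The host and port below are
# placeholders; replace them with an endpoint that you control.
import socket
import statistics
import time

TARGET_HOST = "203.0.113.10"  # placeholder address in the other CSP
TARGET_PORT = 443
SAMPLES = 20

def connect_ms(host, port, timeout=5.0):
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000

samples = sorted(connect_ms(TARGET_HOST, TARGET_PORT) for _ in range(SAMPLES))
print(f"median: {statistics.median(samples):.1f} ms, slowest: {samples[-1]:.1f} ms")

Because internet and partner paths can change over time, it helps to run a probe like this from both directions and at different times of day before drawing conclusions.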
For the lowest possible latency, the edge locations of Google Cloud and the other CSP should be in the same metropolitan area, along with the virtual routing instance. For example, you might have a low-latency connection if both CSPs' cloud regions, as well as the edge POP and the virtual routing instance, are located in the Ashburn, Virginia area. Although Google Cloud and many other CSPs offer no latency guarantees for traffic towards their network edge, the dedicated path and capacity through the partner typically make the latency in this solution less variable than if you use external IP addresses or a VPN solution. Security Traffic over Partner Interconnect isn't encrypted by default. To help secure your traffic, you can deploy HA VPN over Cloud Interconnect on the Google Cloud side of the connection. Some other CSPs allow you to use their managed VPN service over an interconnect; for example, AWS Site-to-Site VPN can be used over AWS Direct Connect. Alternatively, you can use a virtual appliance that encrypts the traffic on the other CSP's side. Another option is to encrypt your traffic at the application layer instead of using VPN. Reliability and SLA This solution involves three different SLAs: one from Google, one from the interconnect partner, and one from the other CSP. When using Partner Interconnect redundantly, Google offers 99.9%-99.99% monthly SLAs, depending on the topology chosen. There is no SLA on a single Partner Interconnect connection. See the other CSP's documentation for the SLA on their interconnect product, for example, the AWS Direct Connect SLA or the Azure SLA for ExpressRoute. Consult the documentation or terms of the service provider partner for their SLA on the availability of the connectivity and the virtual routing instance. For example, see the Megaport Global Services Agreement. Pricing On the Google Cloud side, there is a monthly fee for each Partner Interconnect attachment, depending on the bandwidth. Traffic egressing through the Partner Interconnect is charged at a lower rate than internet traffic. For more information, see the Partner Interconnect pricing page. Consult the other CSP's pricing page for their interconnect product, for example, AWS Direct Connect pricing or Azure ExpressRoute pricing. Typically, pricing includes a monthly charge for the interconnect and data transfer charges through the interconnect at a lower rate than over the internet. The partner that provides the interconnect service charges according to its own pricing, which you can find on its website or by requesting a quote from its sales team. Typically, if all data transfers happen in the same metropolitan area, charges are much lower than if data has to travel a longer distance on a partner network. When data is transferred regularly at sufficiently high volumes, depending on the other CSP's prices, this solution can sometimes offer the lowest total cost due to the discounted egress rates. Even after you add the monthly costs for the Partner Interconnect attachment, the other CSP's interconnect, and the service provider partner, this solution can provide significant savings. Because partner and CSP prices can change without notice, make your own comparison using up-to-date quotes from all involved parties.
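To structure that comparison, it can help to model the monthly cost of each option from your expected transfer volume and the quotes you collect. The following sketch is only an illustration; the rates and fixed fees are placeholders, not actual prices from any provider.

# Back-of-the-envelope monthly cost model for cross-cloud data transfer.
# All rates and fees are placeholders; substitute up-to-date quotes from
# Google Cloud, the other CSP, and the service provider partner.

def monthly_cost(gib_per_month, egress_rate_per_gib, fixed_monthly_fees):
    """Egress charges plus fixed attachment/port fees for one option."""
    return gib_per_month * egress_rate_per_gib + fixed_monthly_fees

volume_gib = 50_000  # example: GiB leaving Google Cloud per month

options = {
    "internet (external IP addresses)": monthly_cost(volume_gib, 0.08, 0),
    "managed VPN over the internet": monthly_cost(volume_gib, 0.08, 150),
    "partner interconnect": monthly_cost(volume_gib, 0.02, 1_800),
}

for name, cost in sorted(options.items(), key=lambda item: item[1]):
    print(f"{name}: ~${cost:,.0f} per month")

A model like this makes it easier to see that fixed fees dominate at low volumes, while per-GiB egress rates dominate at high volumes, which is why dedicated connectivity tends to pay off only for sustained, high-volume transfers.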
Dedicated Interconnect through a common POP By using one or more physical routing devices at a common interconnect facility between Google Cloud and the other CSP, you can connect both cloud providers by using Dedicated Interconnect on the Google Cloud side and an equivalent product at the other CSP. The interconnect location isn't necessarily at the same location as the region in which your resources are located. The following diagram shows a redundant setup using two Dedicated Interconnect connections: Routes are exchanged between Cloud Router and the gateway on the other CSP's side through a physical router that you place in a common interconnect facility. Traffic flows through this router between Google Cloud and the other CSP. Implementation This solution requires that you host and maintain physical routing devices at a colocation facility where Google and the CSP you want to connect to are present. From this routing device, you order two physical connections with the facility: one toward Google Cloud using Dedicated Interconnect, and one toward the other service provider using an equivalent product, for example, AWS Direct Connect or Azure ExpressRoute. On your routing device, you need to configure BGP to allow route exchanges between the two cloud environments. Check the Colocation facility locations list from Google Cloud and your other CSP, for example, AWS Direct Connect Locations or Azure ExpressRoute peering locations, to identify suitable locations for this setup. The IP address ranges between your virtual networks shouldn't overlap, unless your physical routing device is able to perform NAT between the environments or you restrict some route exchanges between the environments. This solution is effective if you use the dedicated connectivity to also connect back to your on-premises environment, in addition to the connection to another CSP. In other cases, this solution is complex because it requires you to own and maintain physical equipment and have a contract with a colocation facility. We recommend this solution only if at least one of the following is true: You already have equipment in a suitable facility for another purpose and an existing contract with the facility. You transfer large amounts of data regularly, so this is a cost-effective option, or you have bandwidth requirements that partners cannot fulfill. Note: You can also implement this solution by using Partner Interconnect. In this case, your physical routing equipment can also be located in another site, such as your data center or other on-premises environment. However, because of the additional latency to and from this location and the partner's charges for the connection to on-premises, this solution is generally not recommended for pure cloud-to-cloud connectivity unless traffic needs to be inspected on-premises. Transfer speed and latency This solution offers dedicated capacity between Google Cloud and another CSP. On the Google Cloud side, Dedicated Interconnect is available by using one or more 10-Gbps or 100-Gbps physical connections. The latency for this solution is a sum of the following: Latency between the region in which your resources are hosted on Google Cloud and the interconnect location where you interconnect with Google Cloud. Latency through the facility and your physical equipment, which is usually negligible when compared with any latency due to fiber length. Latency from the interconnect location through the other CSP's network to the region where the resources are hosted in the CSP.
While no latency guarantees are offered, this solution typically allows for the lowest latency and highest transfer speeds between Google Cloud and other cloud environments over private IP addresses. Security Traffic over Dedicated Interconnect isn't encrypted by default. To help secure your traffic, you can deploy HA VPN over Cloud Interconnect on the Google Cloud side of the connection. Some other CSPs allow you to use their managed VPN service over an interconnect; for example, AWS Site-to-Site VPN can be used over AWS Direct Connect. Alternatively, you can use a virtual appliance that encrypts the traffic on the other CSP's side. Another option is to encrypt your traffic at the application layer instead of using VPN. Reliability and SLA When using Dedicated Interconnect redundantly, Google offers 99.9%-99.99% monthly SLAs, depending on the chosen topology. There is no SLA on a single Dedicated Interconnect connection. For more information, see the other CSP's documentation for the SLA on their interconnect product, for example, the AWS Direct Connect SLA or the Azure SLA for ExpressRoute. The colocation facility or hardware vendor for the physical routing equipment might also offer SLAs on their services. Pricing On the Google Cloud side, there is a monthly fee for each Dedicated Interconnect port as well as for each VLAN attachment that connects to a VPC environment. Traffic that egresses through Dedicated Interconnect is charged at a lower rate than internet traffic. For more information, see the Dedicated Interconnect pricing page. Consult the other CSP's pricing page for their interconnect product, for example, AWS Direct Connect pricing or Azure ExpressRoute pricing. Typically, pricing includes a monthly charge for the interconnect and data transfer charges through the interconnect at a lower rate than over the internet. In addition, you need to factor in charges for the colocation facility services that provide space, electrical power, and physical connections toward both cloud environments, as well as any costs and ongoing service contracts with the vendor for the physical routing equipment. If the connection between the two CSPs can't happen in the same facility and you need to procure connectivity between facilities, pricing for these services might be much higher. Cross-Cloud Interconnect managed connections You can connect your Google Cloud VPC networks to your virtual networks in another CSP over Google's network fabric. In a sense, this setup works like Partner Interconnect, but with the Google SLA covering both the Google networks and the interconnections themselves. The following diagram shows a Cross-Cloud Interconnect configuration with the minimum number of connections. Routes are exchanged between Cloud Router and the gateway on the other CSP's side over Google's network fabric. Traffic flows through this fabric between Google Cloud and the other CSP. Implementation When you buy Cross-Cloud Interconnect, Google provisions a dedicated physical connection between the Google network and that of another cloud service provider. You can use this connection to peer your Google Cloud Virtual Private Cloud (VPC) network with a network that's hosted by a supported cloud service provider. After you provision the connection, Google supports the connection up to the point where it reaches the network of the other cloud service provider. Google does not guarantee uptime from the other cloud service provider.
However, Google remains the primary point of contact for the full service and will notify you if you need to open a support case with the other CSP. This solution requires you to follow the setup process for the other CSP, including choosing where the two networks are going to interconnect. Only certain CSPs are supported. Transfer speed and latency This solution offers dedicated capacity between Google Cloud and another CSP. On the Google Cloud side, Cross-Cloud Interconnect is available by using one or more pairs of 10 Gbps or 100 Gbps physical connections. The latency for this solution is a sum of the following: Latency between the region in which your resources are hosted on Google Cloud and the cross-cloud location. Latency between the Google edge location and the other CSP's edge location (often in the same facility). Latency from the other CSP's edge location where the Cross-Cloud Interconnect is deployed to the region where the resources are hosted in the CSP. Although no latency guarantees are offered, this solution typically allows for the lowest possible latency and highest possible transfer speeds between Google Cloud and other cloud environments over private IP addresses. Security Because traffic over Cross-Cloud Interconnect isn't encrypted, we recommend using application-layer encryption for sensitive traffic. If all traffic needs to be encrypted, virtual appliances that are available from Google Cloud partners in the Cloud Marketplace can provide solutions for encrypting the traffic to the other CSP's environment. Reliability and SLA Cross-Cloud Interconnect uses the Cloud Interconnect SLA. To qualify for the SLA, your Cross-Cloud Interconnect configuration must use one or more pairs of connections, as described in the Service-level agreement section of the Cross-Cloud Interconnect overview. The SLA covers everything on the Google side up to the edge of the other cloud provider's network. It does not cover the other CSP's network. For more information, see the other CSP's documentation for the SLA on their interconnect product; for example, the AWS Direct Connect SLA or the Azure SLA for ExpressRoute. Pricing There is an hourly fee for each Cross-Cloud Interconnect connection as well as for each VLAN attachment that connects to a VPC environment. Traffic that egresses through Cross-Cloud Interconnect is charged at a lower rate than internet traffic. For more information, see Cross-Cloud Interconnect pricing. Consult the other CSP's pricing page for their interconnect product, for example, AWS Direct Connect pricing or Azure ExpressRoute pricing. Typically, there is a monthly charge for the interconnect. Charges for data transfer through the interconnect are typically lower than charges for data transfer over the internet. There are no separate costs for interconnect locations or equipment. Comparison of options The presented options vary in speed, availability, complexity, security, and pricing. You should evaluate all options thoroughly according to your needs. The following flow chart guides you through the process of choosing one of the solutions described in this document. Typically, we recommend the following choices: For many scenarios where data is exchanged occasionally or at low volume and transfers aren't critical, transferring data over the internet is the simplest option and still provides relatively low latency and high bandwidth.
If encryption or transfer of smaller amounts of data using private IP addresses is required, consider using Cloud VPN and a managed VPN service on the other CSP's side. If you transfer larger amounts of data, using Partner Interconnect with a multicloud-enabled partner provides multiple benefits: dedicated capacity, lower cost for data transfers, and, depending on the topology, an SLA for each part of the solution. Partner Interconnect capacity is normally less than 10 Gbps per connection. If you connect your on-premises equipment to multiple clouds, using Dedicated Interconnect through a common POP is a common option. It comes with the additional complexity of managing your own hardware and a relationship with a colocation facility. Unless you already have the infrastructure in place, this solution is reserved for cases where typical data transfer rates are 10 Gbps or more. If you don't want the overhead of managing cross-connects and routing equipment in remote POPs, Cross-Cloud Interconnect provides a managed solution where Google handles all of that for you. What's next Review the series about Hybrid and multicloud patterns and practices. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Patterns_for_scalable_and_resilient_apps.txt b/Patterns_for_scalable_and_resilient_apps.txt new file mode 100644 index 0000000000000000000000000000000000000000..dd89da729c63fedc5c228abba48a5f5c09f38069 --- /dev/null +++ b/Patterns_for_scalable_and_resilient_apps.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/scalable-and-resilient-apps +Date Scraped: 2025-02-23T11:46:58.117Z + +Content: +Home Docs Cloud Architecture Center Send feedback Patterns for scalable and resilient apps Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-19 UTC This document introduces some patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. A well-designed app scales up and down as demand increases and decreases, and is resilient enough to withstand service disruptions. Building and operating apps that meet these requirements requires careful planning and design. Scalability: Adjusting capacity to meet demand Scalability is the measure of a system's ability to handle varying amounts of work by adding or removing resources from the system. For example, a scalable web app is one that works well with one user or many users, and that gracefully handles peaks and dips in traffic. The flexibility to adjust the resources consumed by an app is a key business driver for moving to the cloud. With proper design, you can reduce costs by removing under-utilized resources without compromising performance or user experience. You can similarly maintain a good user experience during periods of high traffic by adding more resources. In this way, your app can consume only the resources necessary to meet demand. Google Cloud provides products and features to help you build scalable, efficient apps: Compute Engine virtual machines and Google Kubernetes Engine (GKE) clusters integrate with autoscalers that let you grow or shrink resource consumption based on metrics that you define. Google Cloud's serverless platform provides managed compute, database, and other services that scale quickly from zero to high request volumes, and you pay only for what you use.
Database products like BigQuery, Spanner, and Bigtable can deliver consistent performance across massive data sizes. Cloud Monitoring provides metrics across your apps and infrastructure, helping you make data-driven scaling decisions. Resilience: Designing to withstand failures A resilient app is one that continues to function despite failures of system components. Resilience requires planning at all levels of your architecture. It influences how you lay out your infrastructure and network and how you design your app and data storage. Resilience also extends to people and culture. Building and operating resilient apps is hard. This is especially true for distributed apps, which might contain multiple layers of infrastructure, networks, and services. Mistakes and outages happen, and improving the resilience of your app is an ongoing journey. With careful planning, you can improve the ability of your app to withstand failures. With proper processes and organizational culture, you can also learn from failures to further increase your app's resilience. Google Cloud provides tools and services to help you build highly available and resilient apps: Google Cloud services are available in regions and zones across the globe, enabling you to deploy your app to best meet your availability goals. Compute Engine instance groups and GKE clusters can be distributed and managed across the available zones in a region. Compute Engine regional persistent disks are synchronously replicated across zones in a region. Google Cloud provides a range of load-balancing options to manage your app traffic, including global load balancing that can direct traffic to a healthy region closest to your users. Google Cloud's serverless platform includes managed compute and database products that offer built-in redundancy and load balancing. Google Cloud supports CI/CD through native tools and integrations with popular open source technologies, to help automate building and deploying your apps. Cloud Monitoring provides metrics across your apps and infrastructure, helping you make data-driven decisions about the performance and health of your apps. Drivers and constraints There are varying requirements and motivations for improving the scalability and resilience of your app. There might also be constraints that limit your ability to meet your scalability and resilience goals. The relative importance of these requirements and constraints varies depending on the type of app, the profile of your users, and the scale and maturity of your organization. Drivers To help prioritize your requirements, consider the drivers from the different parts of your organization. Business drivers Common drivers from the business side include the following: Optimize costs and resource consumption. Minimize app downtime. Ensure that user demand can be met during periods of high usage. Improve quality and availability of service. Ensure that user experience and trust are maintained during any outages. Increase flexibility and agility to handle changing market demands. Development drivers Common drivers from the development side include the following: Minimize time spent investigating failures. Increase time spent on developing new features. Minimize repetitive toil through automation. Build apps using the latest industry patterns and practices. Operations drivers Requirements to consider from the operations side include the following: Reduce the frequency of failures requiring human intervention. 
Increase the ability to automatically recover from failures. Minimize repetitive toil through automation. Minimize the impact from the failure of any particular component. Constraints Constraints might limit your ability to increase the scalability and resilience of your app. Ensure that your design decisions don't introduce or contribute to these constraints: Dependencies on hardware or software that are difficult to scale. Dependencies on hardware or software that are difficult to operate in a high-availability configuration. Dependencies between apps. Licensing restrictions. Lack of skills or experience in your development and operations teams. Organizational resistance to automation. Patterns and practices The remainder of this document defines patterns and practices to help you build resilient and scalable apps. These patterns touch all parts of your app lifecycle, including your infrastructure design, app architecture, storage choices, deployment processes, and organizational culture. Three themes are evident in the patterns: Automation. Building scalable and resilient apps requires automation. Automating your infrastructure provisioning, testing, and app deployments increases consistency and speed, and minimizes human error. Loose coupling. Treating your system as a collection of loosely coupled, independent components allows flexibility and resilience. Independence covers how you physically distribute your resources and how you architect your app and design your storage. Data-driven design. Collecting metrics to understand the behavior of your app is critical. Decisions about when to scale your app, or whether a particular service is unhealthy, need to be based on data. Metrics and logs should be core features. Automate your infrastructure provisioning Create immutable infrastructure through automation to improve the consistency of your environments and increase the success of your deployments. Treat your infrastructure as code Infrastructure as code (IaC) is a technique that encourages you to treat your infrastructure provisioning and configuration in the same way you handle application code. Your provisioning and configuration logic is stored in source control so that it's discoverable and can be versioned and audited. Because it's in a code repository, you can take advantage of continuous integration and continuous deployment (CI/CD) pipelines, so that any changes to your configuration can be automatically tested and deployed. By removing manual steps from your infrastructure provisioning, IaC minimizes human error and improves the consistency and reproducibility of your apps and environments. In this way, adopting IaC increases the resilience of your apps. Cloud Deployment Manager lets you automate the creation and management of Google Cloud resources with flexible templates. Alternatively, Config Connector lets you manage your resources using Kubernetes techniques and workflows. Google Cloud also has built-in support for popular third-party IaC tools, including Terraform, Chef, and Puppet. Create immutable infrastructure Immutable infrastructure is a philosophy that builds on the benefits of infrastructure as code. Immutable infrastructure mandates that resources never be modified after they're deployed. If a virtual machine, Kubernetes cluster, or firewall rule needs to be updated, you can update the configuration for the resource in the source repository. After you've tested and validated the changes, you fully redeploy the resource using the new configuration. 
In other words, rather than tweaking resources, you re-create them. Creating immutable infrastructure leads to more predictable deployments and rollbacks. It also mitigates issues that are common in mutable infrastructures, like configuration drift and snowflake servers. In this way, adopting immutable infrastructure further improves the consistency and reliability of your environments. Design for high availability Availability is a measure of the fraction of time that a service is usable. Availability is often used as a key indicator of overall service health. Highly available architectures aim to maximize service availability, typically through redundantly deploying components. In simplest terms, achieving high availability typically involves distributing compute resources, load balancing, and replicating data. Physically distribute resources Google Cloud services are available in locations across the globe. These locations are divided into regions and zones. How you deploy your app across these regions and zones affects the availability, latency, and other properties of your app. For more information, see best practices for Compute Engine region selection. Redundancy is the duplication of components of a system in order to increase the overall availability of that system. In Google Cloud, redundancy is typically achieved by deploying your app or service to multiple zones, or even in multiple regions. If a service exists in multiple zones or regions, it can better withstand service disruptions in a particular zone or region. Although Google Cloud makes every effort to prevent such disruptions, certain events are unpredictable and it's best to be prepared. With Compute Engine managed instance groups, you can distribute virtual machine instances across multiple zones in a region, and you can manage the instances as a logical unit. Google Cloud also offers regional persistent disks to automatically replicate your data to two zones in a region. You can similarly improve the availability and resilience of your apps deployed on GKE by creating regional clusters. A regional cluster distributes GKE control plane components, nodes, and pods across multiple zones within a region. Because your control plane components are distributed, you can continue to access the cluster's control plane even during an outage involving one or more (but not all) zones. Note: For more information about region-specific considerations, see Geography and regions. Favor managed services Rather than independently installing, supporting, and operating all parts of your application stack, you can use managed services to consume parts of your application stack as services. For example, rather than installing and managing a MySQL database on virtual machines (VMs), you can instead use a MySQL database provided by Cloud SQL. You then get an availability Service Level Agreement (SLA) and can rely on Google Cloud to manage data replication, backups, and the underlying infrastructure. By using managed services, you can spend less time managing infrastructure, and more time on improving the reliability of your app. Many of Google Cloud's managed compute, database, and storage services offer built-in redundancy, which can help you meet your availability goals. Many of these services offer a regional model, which means the infrastructure that runs your app is located in a specific region and is managed by Google to be redundantly available across all the zones within that region. 
If a zone becomes unavailable, your app or data automatically serves from another zone in the region. Certain database and storage services also offer multi-regional availability, which means that the infrastructure that runs your app is located in several regions. Multi-regional services can withstand the loss of an entire region, but typically at the cost of higher latency. Load-balance at each tier Load balancing lets you distribute traffic among groups of resources. When you distribute traffic, you help ensure that individual resources don't become overloaded while others sit idle. Most load balancers also provide health-checking features to help ensure that traffic isn't routed to unhealthy or unavailable resources. Google Cloud offers several load-balancing choices. If your app runs on Compute Engine or GKE, you can choose the most appropriate type of load balancer depending on the type, source, and other aspects of the traffic. For more information, see the load-balancing overview and GKE networking overview. Alternatively, some Google Cloud-managed services, such as App Engine and Cloud Run, automatically load-balance traffic. It's common practice to load-balance requests received from external sources, such as from web or mobile clients. However, using load balancers between different services or tiers within your app can also increase resilience and flexibility. Google Cloud provides internal layer 4 and layer 7 load balancing for this purpose. The following diagram shows an external load balancer distributing global traffic across two regions, us-central1 and asia-east1. It also shows internal load balancing distributing traffic from the web tier to the internal tier within each region. Monitor your infrastructure and apps Before you can decide how to improve the resilience and scalability of your app, you need to understand its behavior. Having access to a comprehensive set of relevant metrics and time series about the performance and health of your app can help you discover potential issues before they cause an outage. They can also help you diagnose and resolve an outage if it does occur. The monitoring distributed systems chapter in the Google SRE book provides a good overview of some approaches to monitoring. In addition to providing insight into the health of your app, metrics can also be used to control autoscaling behavior for your services. Cloud Monitoring is Google Cloud's integrated monitoring tool. Cloud Monitoring ingests events, metrics, and metadata, and provides insights through dashboards and alerts. Most Google Cloud services automatically send metrics to Cloud Monitoring, and Google Cloud also supports many third-party sources. Cloud Monitoring can also be used as a backend for popular open source monitoring tools, providing a "single pane of glass" with which to observe your app. Monitor at all levels Gathering metrics at various levels or tiers within your architecture provides a holistic picture of your app's health and behavior. Infrastructure monitoring Infrastructure-level monitoring provides the baseline health and performance for your app. This approach to monitoring captures information like CPU load, memory usage, and the number of bytes written to disk. These metrics can indicate that a machine is overloaded or is not functioning as expected. 
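As a minimal illustration of what these infrastructure-level signals look like, the following sketch samples CPU, memory, and disk counters on a VM. It assumes the third-party psutil package and is for exploration only; in practice, Cloud Monitoring and its agent collect and store these signals for you.

# Sample basic infrastructure-level metrics on a VM.
# Assumes the third-party psutil package (pip install psutil); in practice,
# Cloud Monitoring collects these signals for you automatically.
import time
import psutil

for _ in range(5):
    cpu_pct = psutil.cpu_percent(interval=1)   # CPU utilization over 1 second
    mem_pct = psutil.virtual_memory().percent  # fraction of RAM in use
    disk_io = psutil.disk_io_counters()        # cumulative disk I/O counters
    print(f"cpu={cpu_pct:.0f}% mem={mem_pct:.0f}% "
          f"disk_write_bytes={disk_io.write_bytes}")
    time.sleep(1)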
In addition to the metrics collected automatically, Cloud Monitoring provides an agent that can be installed to collect more detailed information from Compute Engine VMs, including from third-party apps running on those machines. App monitoring We recommend that you capture app-level metrics. For example, you might want to measure how long it takes to execute a particular query, or how long it takes to perform a related sequence of service calls. You define these app-level metrics yourself. They capture information that the built-in Cloud Monitoring metrics cannot. App-level metrics can capture aggregated conditions that more closely reflect key workflows, and they can reveal problems that low-level infrastructure metrics do not. We also recommend using OpenTelemetry to capture your app-level metrics. OpenTelemetry provides a single open standard for telemetry data. Use OpenTelemetry to collect and export data from your cloud-first applications and infrastructure. You can then monitor and analyze the exported telemetry data. Service monitoring For distributed and microservices-driven apps, it's important to monitor the interactions between the different services and components in your apps. These metrics can help you diagnose problems like increased numbers of errors or latency between services. Istio is an open source tool that provides insights and operational control over your network of microservices. Istio generates detailed telemetry for all service communications, and it can be configured to send the metrics to Cloud Monitoring. End-to-end monitoring End-to-end monitoring, also called black-box monitoring, tests externally visible behavior the way a user sees it. This type of monitoring checks whether a user is able to complete critical actions within your defined thresholds. This coarse-grained monitoring can uncover errors or latency that finer-grained monitoring might not, and it reveals availability as perceived by the user. Expose the health of your apps A highly available system must have some way of determining which parts of the system are healthy and functioning correctly. If certain resources appear unhealthy, the system can send requests elsewhere. Typically, health checks involve pulling data from an endpoint to determine the status or health of a service. Health checking is a key responsibility of load balancers. When you create a load balancer that is associated with a group of virtual machine instances, you also define a health check. The health check defines how the load balancer communicates with the virtual machines to evaluate whether particular instances should continue to receive traffic. Load-balancer health checks can also be used to autoheal groups of instances such that unhealthy machines are re-created. If you are running on GKE and load-balancing external traffic through an ingress resource, GKE automatically creates appropriate health checks for the load balancer. Kubernetes has built-in support for liveness and readiness probes. These probes help the Kubernetes orchestrator decide how to manage pods and requests within your cluster. If your app is deployed on Kubernetes, it's a good idea to expose the health of your app to these probes through appropriate endpoints. Establish key metrics Monitoring and health checking provide you with metrics on the behavior and status of your app. The next step is to analyze those metrics to determine which are the most descriptive or impactful.
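One simple form of that analysis is to look at the distribution of a metric rather than only its average, because a few slow outliers can hide behind a healthy-looking mean. The following is a minimal sketch using illustrative latency samples; in practice you would pull the values from your monitoring system.

# Compare the mean with the median and a high percentile for a set of request
# latencies. The sample values are illustrative.
import statistics

latencies_ms = [42, 45, 44, 47, 43, 46, 41, 44, 48, 45, 620, 45, 44, 900, 46]

def percentile(values, pct):
    ordered = sorted(values)
    index = min(len(ordered) - 1, int(pct / 100 * len(ordered)))
    return ordered[index]

print(f"mean:   {statistics.mean(latencies_ms):.0f} ms")
print(f"median: {statistics.median(latencies_ms):.0f} ms")
print(f"p95:    {percentile(latencies_ms, 95):.0f} ms")  # the slow outliers show up here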
The key metrics vary, depending on the platform that the app is deployed on, and on the work that the app is doing. You're not likely to find just one metric that indicates whether to scale your app, or that a particular service is unhealthy. Often it's a combination of factors that together indicate a certain set of conditions. With Cloud Monitoring, you can create custom metrics to help capture these conditions. The Google SRE book advocates four golden signals for monitoring a user-facing system: latency, traffic, errors, and saturation. Also consider your tolerance for outliers. Using an average or median value to measure health or performance might not be the best choice, because these measures can hide wide imbalances. It's therefore important to consider the metric distribution; the 99th percentile might be a more informative measure than the average. Define service level objectives (SLOs) You can use the metrics that are collected by your monitoring system to define service level objectives (SLOs). SLOs specify a target level of performance or reliability for your service. SLOs are a key pillar of SRE practices and are described in detail in the service level objectives chapter in the SRE book, and also in the implementing SLOs chapter in the SRE workbook. You can use service monitoring to define SLOs based on the metrics in Cloud Monitoring. You can create alerting policies on SLOs to let you know whether you are in danger of violating an SLO. Store the metrics Metrics from your monitoring system are useful in the short term to help with real-time health checks or to investigate recent problems. Cloud Monitoring retains your metrics for several weeks to best meet those use cases. However, there is also value in storing your monitoring metrics for longer-term analysis. Having access to a historical record can help you adopt a data-driven approach to refining your app architecture. You can use data collected during and after an outage to identify bottlenecks and interdependencies in your apps. You can also use the data to help create and validate meaningful tests. Historical data can also help validate that your app is supporting business goals during key periods. For example, the data can help you analyze how your app scaled during high-traffic promotional events over the course of the last few quarters or even years. For details on how to export and store your metrics, see the Cloud Monitoring metric export solution. Determine scaling profile You want your app to meet its user experience and performance goals without over-provisioning resources. The following diagram shows a simplified representation of an app's scaling profile. The app maintains a baseline level of resources, and uses autoscaling to respond to changes in demand. Balance cost and user experience Deciding whether to scale your app is fundamentally about balancing cost against user experience. Decide what your minimum acceptable level of performance is, and potentially also where to set a ceiling. These thresholds vary from app to app, and also potentially across different components or services within a single app. For example, a consumer-facing web or mobile app might have strict latency goals. Research shows that even small delays can negatively impact how users perceive your app, resulting in lower conversions and fewer signups. Therefore, it's important to ensure that your app has enough serving capacity to quickly respond to user requests.
In this instance, the higher costs of running more web servers might be justified. The cost-to-performance ratio might be different for a non-business-critical internal app where users are probably more tolerant of small delays. Hence, your scaling profile can be less aggressive. In this instance, keeping costs low might be of greater importance than optimizing the user experience. Set baseline resources Another key component of your scaling profile is deciding on an appropriate minimum set of resources. Compute Engine virtual machines or GKE clusters typically take time to scale up, because new nodes need to be created and initialized. Therefore, it might be necessary to maintain a minimum set of resources, even if there is no traffic. Again, the extent of baseline resources is influenced by the type of app and traffic profile. Conversely, serverless technologies like App Engine, Cloud Run functions, and Cloud Run are designed to scale to zero, and to start up and scale quickly, even in the case of a cold start. Depending on the type of app and traffic profile, these technologies can deliver efficiencies for parts of your app. Configure autoscaling Autoscaling helps you to automatically scale the computing resources consumed by your app. Typically, autoscaling occurs when certain metrics are exceeded or conditions are met. For example, if request latencies to your web tier start exceeding a certain value, you might want to automatically add more machines to increase serving capacity. Many Google Cloud compute products have autoscaling features. Serverless managed services like Cloud Run, Cloud Run functions, and App Engine are designed to scale quickly. These services typically offer configuration options to limit or influence autoscaling behavior, but in general, much of the autoscaler behavior is hidden from the operator. Compute Engine and GKE provide more options to control scaling behavior. With Compute Engine, you can scale based on various inputs, including Cloud Monitoring custom metrics and load-balancer serving capacity. You can set minimum and maximum limits on the scaling behavior, and you can define an autoscaling policy with multiple signals to handle different scenarios. Similarly, with GKE, you can configure the cluster autoscaler to add or remove nodes based on workload or pod metrics, or on metrics external to the cluster. We recommend that you configure autoscaling behavior based on key app metrics, on your cost profile, and on your defined minimum required level of resources. Minimize startup time For scaling to be effective, it must happen quickly enough to handle the increasing load. This is especially true when adding compute or serving capacity. Use pre-baked images If your app runs on Compute Engine VMs, you likely need to install software and configure the instances to run your app. Although you can use startup scripts to configure new instances, a more efficient way is to create a custom image. A custom image is a boot disk that you set up with your app-specific software and configuration. For more information on managing images, see the image-management best practices article. When you've created your image, you can define an instance template. Instance templates combine the boot disk image, machine type, and other instance properties. You can then use an instance template to create individual VM instances or a managed instance group.
Instance templates are a convenient way to save a VM instance's configuration so you can use it later to create identical new VM instances. Although creating custom images and instance templates can increase your deployment speed, it can also increase maintenance costs because the images might need to be updated more frequently. For more information, see the balancing image configuration and deployment speed documents. Containerize your app An alternative to building customized VM instances is to containerize your app. A container is a lightweight, standalone, executable package of software that includes everything needed to run an app: code, runtime, system tools, system libraries, and settings. These characteristics make containerized apps more portable, easier to deploy, and easier to maintain at scale than virtual machines. Containers are also typically fast to start, which makes them suitable for scalable and resilient apps. Google Cloud offers several services to run your app containers. Cloud Run provides a serverless, managed compute platform to host your stateless containers. The App Engine Flexible environment hosts your containers in a managed platform as a service (PaaS). GKE provides a managed Kubernetes environment to host and orchestrate your containerized apps. You can also run your app containers on Compute Engine when you need complete control over your container environment. Optimize your app for fast startup In addition to ensuring your infrastructure and app can be deployed as efficiently as possible, it's also important to ensure your app comes online quickly. The optimizations that are appropriate for your app vary depending on the app's characteristics and execution platform. It's important to do the following: Find and eliminate bottlenecks by profiling the critical sections of your app that are invoked at startup. Reduce initial startup time by implementing techniques like lazy initialization, particularly of expensive resources. Minimize app dependencies that might need to be loaded at startup time. Favor modular architectures You can increase the flexibility of your app by choosing architectures that enable components to be independently deployed, managed, and scaled. This pattern can also improve resiliency by eliminating single points of failure. Break your app into independent services If you design your app as a set of loosely coupled, independent services, you can increase your app's flexibility. If you adopt a loosely coupled design, it lets your services be independently released and deployed. In addition to many other benefits, this approach enables those services to use different tech stacks and to be managed by different teams. This loosely coupled approach is the key theme of architecture patterns like microservices and SOA. As you consider how to draw boundaries around your services, availability and scalability requirements are key dimensions. For example, if a given component has a different availability requirement or scaling profile from your other components, it might be a good candidate for a standalone service. Aim for statelessness A stateless app or service does not retain any local persistent data or state. A stateless model ensures that you can handle each request or interaction with the service independent of previous requests. This model facilitates scalability and recoverability, because it means that the service can grow, shrink, or be restarted without losing data that's required in order to handle any in-flight processes or requests. 
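As a small sketch of what this looks like in practice, the handler below keeps no state in process memory; it writes everything it needs to an external store so that any instance, including a freshly created one, can serve the next request. The example assumes the Flask and redis packages, and the host name and key format are illustrative.

# A stateless request handler: per-user state lives in an external store
# (Redis here), not in process memory, so any instance can serve any request.
# Assumes the flask and redis packages; host names and key names are illustrative.
import os

import flask
import redis

app = flask.Flask(__name__)
store = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)

@app.route("/cart/<user_id>/items", methods=["POST"])
def add_item(user_id):
    item = flask.request.get_json()["item"]
    # Write to the shared store instead of a module-level variable, so the
    # instance that handles the next request doesn't have to be this one.
    store.rpush(f"cart:{user_id}", item)
    items = [value.decode() for value in store.lrange(f"cart:{user_id}", 0, -1)]
    return {"items": items}

if __name__ == "__main__":
    app.run(port=8080)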
Statelessness is especially important when you are using an autoscaler, because the instances, nodes, or pods hosting the service can be created and destroyed unexpectedly. It might not be possible for all your services to be stateless. In such a case, be explicit about services that require state. By ensuring clean separation of stateless and stateful services, you can ensure straightforward scalability for stateless services while adopting a more considered approach for stateful services. Manage communication between services One challenge with distributed microservices architectures is managing communication between services. As your network of services grows, it's likely that service interdependencies will also grow. You don't want the failure of one service to result in the failure of other services, sometimes called a cascading failure. You can help reduce traffic to an overloaded service or failing service by adopting techniques like the circuit breaker pattern, exponential backoffs, and graceful degradation. These patterns increase the resiliency of your app either by giving overloaded services a chance to recover, or by gracefully handling error states. For more information, see the addressing cascading failures chapter in the Google SRE book. Using a service mesh can help you manage traffic across your distributed services. A service mesh is software that links services together, and helps decouple business logic from networking. A service mesh typically provides resiliency features like request retries, failovers, and circuit breakers. Use appropriate database and storage technology Certain databases and types of storage are difficult to scale and make resilient. Make sure that your database choices don't constrain your app's availability and scalability. Evaluate your database needs The pattern of designing your app as a set of independent services also extends to your databases and storage. It might be appropriate to choose different types of storage for different parts of your app, which results in heterogeneous storage. Conventional apps often operate exclusively with relational databases. Relational databases offer useful functionality such as transactions, strong consistency, referential integrity, and sophisticated querying across tables. These features make relational databases a good choice for many common app features. However, relational databases also have some constraints. They are typically hard to scale, and they require careful management in a high-availability configuration. A relational database might not be the best choice for all your database needs. Non-relational databases, often referred to as NoSQL databases, take a different approach. Although details vary across products, NoSQL databases typically sacrifice some features of relational databases in favor of increased availability and easier scalability. In terms of the CAP theorem, NoSQL databases often choose availability over consistency. Whether a NoSQL database is appropriate often comes down to the required degree of consistency. If your data model for a particular service does not require all the features of an RDBMS, and can be designed to be eventually consistent, choosing a NoSQL database might offer increased availability and scalability. In the realm of data management, relational and non-relational databases are often seen as complementary rather than competing technologies. 
By using both types of databases strategically, organizations can harness the strengths of each to achieve optimal results in data storage, retrieval, and analysis. In addition to a range of relational and NoSQL databases, Google Cloud also offers Spanner, a strongly consistent, highly available, and globally distributed database with support for SQL. For information about choosing an appropriate database on Google Cloud, see Google Cloud databases. Implement caching A cache's primary purpose is to increase data retrieval performance by reducing the need to access the underlying slower storage layer. Caching supports improved scalability by reducing reliance on disk-based storage. Because requests can be served from memory, request latencies to the storage layer are reduced, typically allowing your service to handle more requests. In addition, caching can reduce the load on services that are downstream of your app, especially databases, allowing other components that interact with that downstream service to also scale more, or at all. Caching can also increase resiliency by supporting techniques like graceful degradation. If the underlying storage layer is overloaded or unavailable, the cache can continue to handle requests. And even though the data returned from the cache might be incomplete or not up to date, that might be acceptable for certain scenarios. Memorystore for Redis provides a fully managed service that is powered by the Redis in-memory datastore. Memorystore for Redis provides low-latency access and high throughput for heavily accessed data. It can be deployed in a high-availability configuration that provides cross-zone replication and automatic failover. Modernize your development processes and culture DevOps can be considered a broad collection of processes, culture, and tooling that promote agility and reduced time-to-market for apps and features by breaking down silos between development, operations, and related teams. DevOps techniques aim to improve the quality and reliability of software. A detailed discussion of DevOps is beyond the scope of this document, but some key aspects that relate to improving the reliability and resilience of your app are discussed in the following sections. For more details, see the Google Cloud DevOps page. Design for testability Automated testing is a key component of modern software delivery practices. The ability to execute a comprehensive set of unit, integration, and system tests is essential to verify that your app behaves as expected, and that it can progress to the next stage of the deployment cycle. Testability is a key design criterion for your app. We recommend that you use unit tests for the bulk of your testing because they are quick to execute and typically straightforward to maintain. We also recommend that you automate higher-level integration and system tests. These tests are greatly simplified if you adopt infrastructure-as-code techniques, because dedicated test environments and resources can be created on demand, and then torn down once tests are complete. As the percentage of your codebase covered by tests increases, you reduce uncertainty and the potential decrease in reliability from each code change. Adequate testing coverage means that you can make more changes before reliability falls below an acceptable level. Automated testing is an integral component of continuous integration. 
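As a small illustration, the following test exercises a pure helper function and runs in milliseconds, which is the kind of test that can run on every commit. The helper itself is hypothetical and is shown inline to keep the sketch self-contained.

# A fast, isolated unit test of a pure function. The function under test is a
# hypothetical retry-backoff helper, shown inline so the example is self-contained.
import unittest

def backoff_delay(attempt, base_seconds=0.5, cap_seconds=30.0):
    """Return a capped exponential backoff delay for the given retry attempt."""
    return min(cap_seconds, base_seconds * (2 ** attempt))

class BackoffDelayTest(unittest.TestCase):
    def test_grows_exponentially(self):
        self.assertEqual(backoff_delay(0), 0.5)
        self.assertEqual(backoff_delay(2), 2.0)

    def test_is_capped(self):
        self.assertEqual(backoff_delay(10), 30.0)

if __name__ == "__main__":
    unittest.main()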
Executing a robust set of automated tests on each code commit provides fast feedback on changes, improving the quality and reliability of your software. Google Cloud–native tools like Cloud Build and third-party tools like Jenkins can help you implement continuous integration. Automate your deployments Continuous integration and comprehensive test automation give you confidence in the stability of your software. And when they are in place, your next step is automating deployment of your app. The level of deployment automation varies depending on the maturity of your organization. Choosing an appropriate deployment strategy is essential in order to minimize the risks associated with deploying new software. With the right strategy, you can gradually increase the exposure of new versions to larger audiences, verifying behavior along the way. You can also set clear provisions for rollback if problems occur. Adopt SRE practices for dealing with failure For distributed apps that operate at scale, some degree of failure in one or more components is common. If you adopt the patterns covered in this document, your app can better handle disruptions caused by a defective software release, unexpected termination of virtual machines, or even an infrastructure outage that affects an entire zone. However, even with careful app design, you inevitably encounter unexpected events that require human intervention. If you put structured processes in place to manage these events, you can greatly reduce their impact and resolve them more quickly. Furthermore, if you examine the causes and responses to the event, you can help protect your app against similar events in the future. Strong processes for managing incidents and performing blameless postmortems are key tenets of SRE. Although implementing the full practices of Google SRE might not be practical for your organization, if you adopt even a minimum set of guidelines, you can improve the resilience of your app. The appendices in the SRE book contain some templates that can help shape your processes. Validate and review your architecture As your app evolves, user behavior, traffic profiles, and even business priorities can change. Similarly, other services or infrastructure that your app depends on can evolve. Therefore, it's important to periodically test and validate the resilience and scalability of your app. Test your resilience It's critical to test that your app responds to failures in the way you expect. The overarching theme is that the best way to avoid failure is to introduce failure and learn from it. Simulating and introducing failures is complex. In addition to verifying the behavior of your app or service, you must also ensure that expected alerts are generated, and appropriate metrics are generated. We recommend a structured approach, where you introduce simple failures and then escalate. For example, you might proceed as follows, validating and documenting behavior at each stage: Introduce intermittent failures. Block access to dependencies of the service. Block all network communication. Terminate hosts. For details, see the Breaking your systems to make them unbreakable video from Google Cloud Next 2019. If you're using a service mesh like Istio to manage your app services, you can inject faults at the application layer instead of killing pods or machines, or you can inject corrupting packets at the TCP layer. You can introduce delays to simulate network latency or an overloaded upstream system. 
You can also introduce aborts, which mimic failures in upstream systems. Test your scaling behavior We recommend that you use automated nonfunctional testing to verify that your app scales as expected. Often this verification is coupled with performance or load testing. You can use simple tools like hey to send load to a web app. For a more detailed example that shows how to do load testing against a REST endpoint, see Distributed load testing using Google Kubernetes Engine. One common approach is to ensure that key metrics stay within expected levels for varying loads. For example, if you're testing the scalability of your web tier, you might measure the average request latencies for spiky volumes of user requests. Similarly, for a backend processing feature, you might measure the average task-processing time when the volume of tasks suddenly increases. Also, you want your tests to measure that the number of resources that were created to handle the test load is within the expected range. For example, your tests might verify that the number of VMs that were created to handle some backend tasks does not exceed a certain value. It's also important to test edge cases. What is the behavior of your app or service when maximum scaling limits are reached? What is the behavior if your service is scaling down and then load suddenly increases again? Always be architecting The technology world moves fast, and this is especially true of the cloud. New products and features are released frequently, new patterns emerge, and the demands from your users and internal stakeholders continue to grow. As the principles for cloud-native architecture blog post defines, always be looking for ways to refine, simplify, and improve the architecture of your apps. Software systems are living things and need to adapt to reflect your changing priorities. What's next Read the principles for cloud-native architecture blog post. Read the SRE books for details on how the Google production environment is managed. Learn more about how DevOps on Google Cloud can improve your software quality and reliability. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Patterns_for_using_Active_Directory_in_a_hybrid_environment.txt b/Patterns_for_using_Active_Directory_in_a_hybrid_environment.txt new file mode 100644 index 0000000000000000000000000000000000000000..9767ce925766b54bf3500900b54cc91722417971 --- /dev/null +++ b/Patterns_for_using_Active_Directory_in_a_hybrid_environment.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/patterns-for-using-active-directory-in-a-hybrid-environment +Date Scraped: 2025-02-23T11:51:24.024Z + +Content: +Home Docs Cloud Architecture Center Send feedback Patterns for using Active Directory in a hybrid environment Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This document discusses requirements to consider when you deploy Active Directory to Google Cloud and helps you choose the right architecture. By federating Active Directory with Cloud Identity or Google Workspace (see Patterns for authenticating workforce users in a hybrid environment), you can enable users from your existing Active Directory domains to authenticate and access resources on Google Cloud. 
You can also deploy Active Directory to Google Cloud if you plan to use Active Directory to manage Windows servers on Google Cloud or if you rely on protocols that are not supported by Google Cloud. Before you deploy Active Directory to Google Cloud, you must first decide which domain and forest architecture to use and how to integrate with your existing Active Directory forest. Assessing requirements Active Directory supports a range of domain and forest architectures. In a hybrid environment, one option is to extend a single Active Directory domain across multiple environments. Alternatively, you can use separate domains or forests and connect them using trusts. Which architecture is best depends on your requirements. To choose the best architecture, look at these factors: Alignment with existing security zones Interaction between on-premises and Google Cloud resources Administrative autonomy Availability The following sections discuss these factors. Alignment with existing security zones Start by reviewing the design of your on-premises network. In your on-premises environment, you might have segmented your network into multiple security zones—for example, by using separate VLANs or subnets. In a network that's been segmented into security zones, communication within a security zone is either unrestricted or subject to only lightweight firewall and auditing policies. In contrast, any communication across security zones is subject to strict firewall, auditing, or traffic inspection policies. The intent of security zones is more far-reaching than just constraining and auditing network communication, however—security zones should implement trust boundaries. Trust boundaries Each machine in a network runs several processes. These processes might communicate with one another locally by using interprocess communication, and they might communicate across machines by using protocols such as HTTP. In this web of communication, peers don't always trust each other to the same extent. For example, processes might trust processes that are running on the same machine more than processes that are running on other machines. And some machines might be considered more trustworthy than others. A trust boundary enforces discriminating between communication parties—trusting one set of communication parties more than another set of parties. Trust boundaries are critical for containing the impact of an attack. Attacks rarely end once a single system has been compromised—whether that system is a single process or an entire machine. Instead, an attacker is likely to try to extend the attack to other systems. Because systems in a trust boundary don't discriminate between each other, spreading an attack inside that trust boundary is easier than attacking systems across a trust boundary. Once a system in a trust boundary has been compromised, all other systems in that trust boundary must be assumed to be compromised. This assumption can help you identify trust boundaries or validate whether a certain system boundary is a trust boundary, for example: Suppose an attacker has achieved the highest level of access to target A (for example, Administrator or root access to a machine or application). If they can leverage these privileges to gain the same level of access to target B, then A and B are, by definition, within the same trust boundary. Otherwise, a trust boundary lies between A and B. By constraining network communication, security zones can help implement trust boundaries. 
For a security zone to become a true trust boundary, however, the workloads across each side of the boundary must discriminate between requests that originate from the same security zone and requests that originate from different security zones—and scrutinize the latter more closely. Zero-trust model The zero-trust model is the preferred networking model on Google Cloud. Given a single compromised system in a trust boundary, you can assume that all systems in that boundary are compromised. This assumption suggests that smaller trust boundaries are better. The smaller the trust boundary, the fewer systems are compromised and the more boundaries an attacker must clear for an attack to spread. The zero-trust model takes this idea to its logical conclusion: Each machine in the network is treated as a unique security zone and trust boundary. All communication between machines is subject to the same scrutiny and firewalling, and all network requests are treated as originating from an untrusted source. On the networking level, you can implement a zero-trust model by using firewall rules to restrict traffic and VPC flow logs and firewall rules logging to analyze traffic. At the application level, you must ensure that all applications consistently and securely handle authentication, authorization, and auditing. Trust boundaries in Active Directory In an Active Directory domain, machines trust domain controllers to handle authentication and authorization on their behalf. Once a user has proven their identity to one of the domain controllers, they can log on by default to all machines of the same domain. Any access rights that the user is granted by the domain controller (in the form of privileges and group memberships) apply to many machines of the domain. By using group policies, you can prevent users from accessing certain machines or constrain their rights on certain machines. Once a machine has been compromised, however, an attacker might be able to steal passwords, password hashes, or Kerberos tokens of other domain users signed on to the same machine. The attacker can then leverage these credentials to spread the attack to other domains in the forest. Given these factors, it's best to assume that all machines in a forest are in a trust security boundary. Compared to domain boundaries, whose purpose is to control replication and granting administrative autonomy to different parts of the organization, forest boundaries can provide stronger isolation. Forests can serve as a trust boundary. Aligning Active Directory forests and security zones Assuming the forest as the trust boundary influences the design of security zones. If a forest extends across two security zones, it's easier for an attacker to clear the boundary between these security zones. As a result, the two security zones effectively become one and form a single trust boundary. In a zero-trust model, each machine in a network can be thought of as a separate security zone. Deploying Active Directory in such a network undermines this concept and widens the effective security boundary to include all machines of the Active Directory forest. For a security zone to serve as a trust boundary, you must ensure that the entire Active Directory forest is in the security zone. Impact on extending Active Directory to Google Cloud When you plan a deployment to Google Cloud that requires Active Directory, you must decide between two options to align the deployment with your existing security zones: Extend an existing security zone to Google Cloud. 
You can extend one or more of your existing security zones to Google Cloud by provisioning Shared VPC on Google Cloud and connecting it to the existing zone by using Cloud VPN or Cloud Interconnect. Resources deployed on-premises and on Google Cloud that share a common zone can also share a common Active Directory forest, so there's no need to deploy a separate forest to Google Cloud. As an example, consider an existing network that has a development perimeter network and production perimeter network as security zones with a separate Active Directory forest in each security zone. If you extend the security zones to Google Cloud, then all deployments on Google Cloud are also part of one of these two security zones, and can use the same Active Directory forests. Introduce new security zones. Extending a security zone might not be applicable—either because you don't want to further expand a zone, or because the security requirements of your existing security zones don't match the requirements of your Google Cloud deployments. You can treat Google Cloud as additional security zones. As an example, consider an on-premises environment that has a perimeter network, but doesn't distinguish between development and production workloads. To separate these workloads on Google Cloud, you can create two Shared VPCs and consider them new security zones. You then deploy two additional Active Directory forests to Google Cloud, one per security zone. If necessary, you can establish trust relationships between forests to enable authentication across security zones. Interaction between on-premises and Google Cloud resources The second factor to consider when you extend your Active Directory to Google Cloud is the interaction of users and resources between on-premises and Google Cloud. Depending on your use case, this interaction might be anywhere from light to heavy. Light interaction If the sole purpose of using Active Directory on Google Cloud is to manage a fleet of Windows servers and to enable administrators to sign in to these servers, the level of interaction between environments is light: The set of employees interacting with resources on Google Cloud is limited to administrative staff. Applications deployed to Google Cloud might not interact with on-premises applications at all or might do so without relying on Windows authentication facilities such as Kerberos and NTLM. In a light scenario, consider integrating your on-premises and Google Cloud environments in one of the following two ways: Two disjoint Active Directory forests: You can isolate the two environments by using two separate Active Directory forests that don't share a trust relationship. To enable administrators to authenticate, you maintain a separate set of user accounts for them in the Google Cloud forest. Maintaining a duplicate set of user accounts increases administrative effort and introduces the risk of forgetting to disable accounts when employees leave the company. A better approach is to provision these accounts automatically. If user accounts in your on-premises Active Directory are provisioned by a Human Resources Information System (HRIS), you might be able to use a similar mechanism to provision and manage user accounts in the Google Cloud forest. Alternatively, you can use tools such as Microsoft Identity Manager to synchronize user accounts between environments. 
Two Active Directory forests with cross-forest trust: By using two Active Directory forests and connecting them with a cross-forest trust, you can avoid having to maintain duplicate accounts. Administrators use the same user account to authenticate to both environments. Although the level of isolation between environments can be considered weaker, using separate forests lets you maintain a trust boundary between your on-premises and Google Cloud environment. Moderate interaction Your use case might be more complex. For example: Administrators logging in to Windows servers deployed on Google Cloud might need to access file shares hosted on-premises. Applications might use Kerberos or NTLM to authenticate and communicate across environment boundaries. You might want to enable employees to use Integrated Windows Authentication (IWA) to authenticate to web applications deployed on Google Cloud. In a moderate scenario, operating two disjoint Active Directory forests might be too limiting: You cannot use Kerberos to authenticate across the two forests, and using NTLM pass-through authentication requires that you keep user accounts and passwords in sync. In this case, we recommend using two Active Directory forests with a cross-forest trust. Heavy interaction Certain workloads, including Virtual Desktop Infrastructure (VDI) deployments, might require heavy interaction between on-premises resources and resources deployed on Google Cloud. When resources across environments are closely coupled, trying to maintain a trust boundary between environments might not be practical and using two forests with a cross-forest trust can be too limiting. In this case, we recommend using a single Active Directory forest and sharing it across environments. Administrative autonomy The third factor to consider when you extend your Active Directory to Google Cloud is administrative autonomy. If you plan to distribute workloads across on-premises and Google Cloud, the workloads you are going to run on Google Cloud might be very different than the workloads you keep on-premises. You might even decide that different teams should manage the two environments. Separating responsibilities among different teams requires granting each team some administrative autonomy. In Active Directory, you can grant teams the authority to manage resources by using delegated administration or by using separate domains. Delegated administration is a lightweight way to delegate administrative duties and grant some autonomy to teams. Maintaining a single domain might also help you ensure consistency across environments and teams. You can't delegate all administrative capabilities, however, and sharing a single domain might increase the administrative overhead of coordinating between teams. In such a case, we recommend using separate domains. Availability The final factor to consider when you extend your Active Directory to Google Cloud is availability. For each domain in an Active Directory forest, the domain controller serves as the identity provider for users in that domain. When using Kerberos for authentication, a user needs to interact with an Active Directory domain controller at various points: Initial authentication (obtaining a ticket-granting ticket). Periodic reauthentication (refreshing a ticket-granting ticket). Authenticating with a resource (obtaining a service ticket). 
Performing the initial authentication and periodic reauthentication requires communicating with a single domain controller only—a domain controller of the domain that the user is a member of. When authenticating with a resource, it might be necessary to interact with multiple domain controllers, depending on the domain the resource is in: If the resource is located in the same domain as the user, the user can use the same domain controller (or a different domain controller of the same domain) to complete the authentication process. If the resource is located in a different domain, but there is a direct trust relationship with the user's domain, the user needs to interact with at least two domain controllers: One in the resource domain and one in the user's domain. If the resource and user are part of different domains and there is only an indirect trust relationship between these domains, then a successful authentication requires communicating with domain controllers of every domain in the trust path. Locating users and resources in different Active Directory domains or forests can constrain overall availability. Because authenticating requires interacting with multiple domains, an outage of one domain can impact the availability of resources in other domains. Considering the potential impact of Active Directory topology on availability, we recommend aligning your Active Directory topology with your hybrid strategy. Distributed workloads To capitalize on the unique capabilities of each computing environment, you might use a hybrid approach to distribute workloads across those environments. In such a setup, the workloads you deploy to Google Cloud are likely to depend on other infrastructure or applications running on-premises, which makes highly available hybrid connectivity a critical requirement. If you deploy a separate Active Directory forest or domain to Google Cloud and use a trust to connect it to your on-premises Active Directory, you add another dependency between Google Cloud and your on-premises environment. This dependency has a minimal effect on availability. Redundant workloads If you use Google Cloud to ensure business continuity, your workloads on Google Cloud mirror some of the workloads in your on-premises environment. This setup enables one environment to take over the role of the other environment if a failure occurs. So you need to look at every dependency between these environments. If you deploy a separate Active Directory forest or domain to Google Cloud and use a trust to connect it to your on-premises Active Directory, you might create a dependency that undermines the purpose of the setup. If the on-premises Active Directory becomes unavailable, all workloads deployed on Google Cloud that rely on cross-domain or cross-forest authentication might also become unavailable. If your goal is to ensure business continuity, you can use separate Active Directory forests in each environment that are not connected to one another. Or you can extend an existing Active Directory domain to Google Cloud. By deploying additional domain controllers to Google Cloud and replicating directory content between environments, you ensure that the environments operate in isolation. Integration patterns After you've assessed your requirements, use the following decision tree to help identify a suitable forest and domain architecture. The following sections describe each pattern. 
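Several of the patterns described in the following sections can be implemented with Managed Service for Microsoft Active Directory rather than self-managed domain controllers. As a point of reference only, the following Terraform sketch shows roughly how a Managed Microsoft AD domain could be provisioned; the domain name, region, and reserved IP range are placeholder assumptions, not values taken from this document.

```hcl
# Minimal sketch of a Managed Microsoft AD domain that patterns such as
# "synchronized forests" or "resource forest" could build on.
# All values are illustrative assumptions; adapt them to your environment.
resource "google_active_directory_domain" "gcp_forest" {
  domain_name       = "gcp.example.com"   # assumed FQDN for the Google Cloud-hosted forest
  locations         = ["us-central1"]     # regions in which domain controllers run
  reserved_ip_range = "192.168.255.0/24"  # must not overlap any of your subnet ranges
}
```

The patterns differ mainly in whether such a Google Cloud-hosted domain or forest is kept separate, synchronized, connected through a trust, or made part of your existing forest.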
Synchronized forests In the synchronized forests pattern, you deploy a separate Active Directory forest to Google Cloud. You use this forest to manage any resources deployed on Google Cloud as well as the user accounts required to manage these resources. Instead of creating a trust between the new forest and an existing, on-premises forest, you synchronize accounts. If you use an HRIS as the leading system to manage user accounts, you can configure the HRIS to provision user accounts in the Google Cloud-hosted Active Directory forest. Or you can use tools such as Microsoft Identity Manager to synchronize user accounts between environments. Consider using the synchronized forests pattern if one or more of the following applies: Your intended use of Google Cloud fits one of the redundant deployment patterns, and you want to avoid runtime dependencies between your on-premises environment and Google Cloud. You want to maintain a trust boundary between your on-premises Active Directory environment and the Google Cloud-hosted Active Directory environment—either as a defense-in-depth measure or because you trust one environment more than the other. You expect light to no interaction between Active Directory–managed resources on-premises and on Google Cloud. The number of user accounts you need in the Google Cloud-hosted Active Directory environment is small. Advantages: The synchronized forests pattern lets you maintain strong isolation between the two Active Directory environments. An attacker who compromises one environment might gain little advantage in attacking the second environment. You don't need to set up hybrid connectivity between your on-premises and Google Cloud networks to operate Active Directory. The Google Cloud-hosted Active Directory is unaffected by an outage of your on-premises Active Directory. The pattern is well-suited for business-continuity use cases or other scenarios requiring high availability. You can grant a high level of administrative autonomy to teams that manage the Google Cloud-hosted Active Directory forest. This pattern is supported by Managed Service for Microsoft Active Directory. Best practices: Don't synchronize user account passwords between the two Active Directory forests. Doing so undermines the isolation between the environments. Don't rely on pass-through authentication, because it requires synchronizing user account passwords. Make sure that when an employee leaves the company, their accounts in both Active Directory environments are disabled or removed. Resource forest In the resource forest pattern, you deploy a separate Active Directory forest to Google Cloud. You use this forest to manage any resources deployed to Google Cloud as well as a minimal set of administrative user accounts required for managing the forest. By establishing a forest trust to your existing, on-premises Active Directory forest, you enable users from your existing forest to authenticate and access resources managed by the Google Cloud-hosted Active Directory forest. If necessary, you can evolve this pattern into a hub and spoke topology with your existing Active Directory forest at the center, connected to many resource forests. Consider using the resource forest pattern if one or more of the following applies: Your intended use of Google Cloud fits one of the distributed deployment patterns, and a dependency between your on-premises environment and Google Cloud is acceptable. 
You want to maintain a trust boundary between your on-premises Active Directory environment and the Google Cloud-hosted Active Directory environment—either as a defense-in-depth measure or because you consider one environment more trusted than the other. You expect a moderate level of interaction between Active Directory–managed resources on-premises and on Google Cloud. You have a large number of users who need to access resources deployed on Google Cloud—for example, web applications that use IWA for authentication. Advantages: The resource forest pattern lets you maintain a trust boundary between the two Active Directory environments. Depending on how the trust relationship is configured, an attacker who compromises one environment might gain little to no access to the other environment. This pattern is fully supported by Managed Microsoft AD. You can grant a high level of administrative autonomy to teams that manage the Google Cloud-hosted Active Directory forest. Best practices: Connect the two Active Directory forests using a one-way trust so that the Google Cloud-hosted Active Directory trusts your existing Active Directory, but not the other way around. Use selective authentication to limit the resources in the Google Cloud-hosted Active Directory that users from the on-premises Active Directory are allowed to access. Use redundant Dedicated Interconnect, Partner Interconnect, or Cloud VPN connections to ensure highly available network connectivity between your on-premises network and Google Cloud. Extended domain In the extended domain pattern, you extend one or more of your existing Active Directory domains to Google Cloud. For each domain, you deploy one or more domain controllers on Google Cloud, which causes all domain data as well as the global catalog to be replicated and made available on Google Cloud. By using separate Active Directory sites for your on-premises and Google Cloud subnets, you ensure that clients communicate with domain controllers that are closest to them. Consider using the extended domain pattern if one or more of the following applies: Your intended use of Google Cloud fits one of the redundant deployment patterns, and you want to avoid runtime dependencies between your on-premises environment and Google Cloud. You expect heavy interaction between Active Directory–managed resources on-premises and on Google Cloud. You want to speed up authentication for applications that rely on LDAP. Advantages: The Google Cloud-hosted Active Directory is unaffected by an outage of your on-premises Active Directory. The pattern is well-suited for business-continuity use cases or other scenarios that require high availability. You can limit the communication between your on-premises network and Google Cloud. This can potentially save bandwidth and improve latency. All content of the global catalog is replicated to Google Cloud and can be efficiently accessed from Google Cloud-hosted resources. Best practices: If you replicate all domains, the communication between your on-premises network and Google Cloud network is limited to Active Directory replication between domain controllers. If you replicate only a subset of your forest's domains, domain-joined servers running on Google Cloud might still need to communicate with domain controllers of non-replicated domains. Make sure that the firewall rules that apply to communication between on-premises and Google Cloud consider all relevant cases.
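To illustrate the firewall considerations mentioned above, the following hedged Terraform sketch admits common Active Directory traffic from an assumed on-premises range to Google Cloud-hosted domain controllers. The network name, source range, and network tag are assumptions, and the port list is intentionally partial; verify the full set of required ports against Microsoft's Active Directory documentation.

```hcl
# Sketch only: allow common Active Directory protocols from an assumed
# on-premises range to domain controllers tagged "ad-domain-controller".
# The port list is partial; check it against Microsoft's AD port requirements.
resource "google_compute_firewall" "allow_ad_from_onprem" {
  name          = "allow-ad-from-onprem"
  network       = "ad-vpc"                    # assumed VPC network name
  direction     = "INGRESS"
  source_ranges = ["10.0.0.0/8"]              # assumed on-premises address space
  target_tags   = ["ad-domain-controller"]    # tag applied to the Google Cloud-hosted DCs

  allow {
    protocol = "tcp"
    ports    = ["53", "88", "135", "389", "445", "464", "636", "3268-3269", "49152-65535"]
  }

  allow {
    protocol = "udp"
    ports    = ["53", "88", "123", "389", "464"]
  }
}
```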
Be aware that replicating across sites happens at certain intervals only, so updates performed on an on-premises domain controller surface to Google Cloud only after a delay (and vice versa). Consider using read-only domain controllers (RODCs) to maintain a read-only copy of domain data on Google Cloud, but be aware that there's a trade-off related to caching of user passwords. If you allow RODCs to cache passwords and prepopulate the password cache, resources deployed on Google Cloud can remain unaffected by an outage of on-premises domain controllers. However, the security benefits of using an RODC over a regular domain controller are limited. If you disable password caching, a compromised RODC might only pose a limited risk to the rest of your Active Directory, but you lose the benefit of Google Cloud remaining unaffected by an outage of on-premises domain controllers. Extended forest In the extended forest pattern, you deploy a new Active Directory domain on Google Cloud, but integrate it into your existing forest. You use the new domain to manage any resources deployed on Google Cloud and to maintain a minimal set of administrative user accounts for managing the domain. By extending the implicit trust relationships to other domains of the forest, you enable your existing users from other domains to authenticate and access resources managed by the Google Cloud-hosted Active Directory domain. Consider using the extended forest pattern if one or more of the following applies: Your intended use of Google Cloud fits one of the distributed deployment patterns, and a dependency between your on-premises environment and Google Cloud is acceptable. The resources you plan to deploy to Google Cloud require a different domain configuration or structure than your existing domains provide, or you want to grant a high level of administrative autonomy to teams who administer the Google Cloud-hosted domain. You expect moderate to heavy interaction between Active Directory–managed resources on-premises and on Google Cloud. You have many users who need to access resources deployed to Google Cloud—for example, web applications that use IWA for authentication. Advantages: All content of the global catalog is replicated to Google Cloud and can be efficiently accessed from Google Cloud-hosted resources. Replication traffic between your on-premises network and Google Cloud is limited to global catalog replication. This might help limit your overall bandwidth consumption. You can grant a high level of administrative autonomy to teams that manage the Google Cloud-hosted Active Directory domain. Best practices: Use redundant Dedicated Interconnect, Partner Interconnect, or Cloud VPN connections to ensure highly available network connectivity between your on-premises network and Google Cloud. Resource forest with extended domain The resource forest with extended domain pattern is an extension of the resource forest pattern. The resource forest pattern lets you maintain a trust boundary between environments while providing administrative autonomy. A limitation of the resource forest is that its overall availability hinges on the availability of on-premises domain controllers and reliable network connectivity to your on-premises data center. You can overcome these limitations by combining the resource forest pattern with the extended domain pattern.
By replicating the domain, your user accounts are located in Google Cloud, and you can ensure that users can authenticate to Google Cloud-hosted resources even when on-premises domain controllers are unavailable or network connectivity is lost. Consider using the resource forest with extended domain pattern if one or more of the following applies: You want to maintain a trust boundary between your main Active Directory forest and the resource forest. You want to prevent an outage of on-premises domain controllers or loss of network connectivity to your on-premises environment from affecting your Google Cloud-hosted workloads. You expect moderate interaction between Active Directory–managed resources on-premises and on Google Cloud. You have many users from a single Active Directory domain who need to access resources deployed on Google Cloud—for example, web applications that use IWA for authentication. Advantages: Google Cloud-hosted resources are unaffected by an outage of your on-premises Active Directory. The pattern is well-suited for business-continuity use cases or other scenarios that require high availability. The pattern lets you maintain a trust boundary between the two Active Directory forests. Depending on how the trust relationship is configured, an attacker who compromises one environment might gain little to no access to the other environment. Communication between your on-premises network and Google Cloud network is limited to Active Directory replication between domain controllers. You can implement firewall rules to disallow all other communication, strengthening the isolation between environments. You can grant a high level of administrative autonomy to teams that manage the Google Cloud-hosted Active Directory forest. You can use Managed Microsoft AD for the resource forest. Best practices: Deploy domain controllers for the extended domain into a separate Google Cloud project and VPC in order to separate these components from the components of the resource forest. Use VPC peering to connect the VPC to Shared VPC or to the VPC used by the resource forest and configure firewalls to restrict communication to Kerberos user authentication and forest trust creation. Connect the two Active Directory forests using a one-way trust so that the Google Cloud-hosted Active Directory trusts your existing Active Directory forest, but not the other way around. Use selective authentication to limit the resources in the resource forest that users from other forests are allowed to access. Be aware that replicating across sites happens at certain intervals only, so updates performed on an on-premises domain controller surface to Google Cloud only after a delay (and vice versa). Consider using RODCs for the extended domain, but be sure to allow caching of passwords to preserve the availability advantages compared to the resource forest pattern. What's next Find out more about Managed Microsoft AD. Review best practices for deploying an Active Directory resource forest on Google Cloud. Learn how to deploy a fault-tolerant Active Directory environment to Google Cloud. Review our best practices on VPC design. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
Send feedback \ No newline at end of file diff --git a/Patterns_for_using_floating_IP_addresses_in_Compute_Engine.txt b/Patterns_for_using_floating_IP_addresses_in_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..68c7b11bfa2eeeba9a52852f4f6f62b0b61dbebc --- /dev/null +++ b/Patterns_for_using_floating_IP_addresses_in_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/patterns-for-floating-ip-addresses-in-compute-engine +Date Scraped: 2025-02-23T11:54:46.804Z + +Content: +Home Docs Cloud Architecture Center Send feedback Patterns for using floating IP addresses in Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-03 UTC This document describes how to use floating IP address implementation patterns when migrating applications to Compute Engine from an on-premises network environment. This document is aimed at network engineers, system administrators, and operations engineers who are migrating applications to Google Cloud. Also referred to as shared or virtual IP addresses, floating IP addresses are often used to make on-premises network environments highly available. Using floating IP addresses, you can pass an IP address between multiple identically configured physical or virtual servers. This practice allows for failover or for upgrading production software. However, you can't directly implement floating IP addresses in a Compute Engine environment without changing the architecture to one of the patterns described in this document. The GitHub repository that accompanies this document includes sample deployments for each pattern that you can automatically deploy using Terraform. Floating IP addresses in on-premises environments Floating IP addresses are commonly used in on-premises environments. Example use cases are as follows: Highly available physical appliances, such as a set of firewalls or load balancers, often use floating IP addresses for failovers. Servers that require high availability typically use floating IP addresses—for example, relational databases using a primary server and a backup server. A common example, Microsoft SQL Server, uses AlwaysOn Availability Groups. To learn how to implement these patterns on Google Cloud, see Configuring SQL Server AlwaysOn availability groups with synchronous commit . Linux environments that implement load balancers or reverse proxies use floating IP addresses, like IP Virtual Server (IPVS), HAProxy, and nginx. For detecting node failures and moving floating IP addresses between instances, these environments use daemons such as Heartbeat, Pacemaker, or Keepalived. Highly available Windows Services with Windows Server Failover Clustering use floating IP addresses to ensure high availability. To implement Windows Services using failover clustering on Google Cloud, see Running Windows Server Failover Clustering. There are several ways to implement floating IP addresses in an on-premises environment. Servers sharing floating IP addresses typically also share state information through a heartbeat mechanism. This mechanism lets the servers communicate their health status to each other; it also lets the secondary server take over the floating IP address after the primary server fails. This scheme is frequently implemented using the Virtual Router Redundancy Protocol, but you can also use other, similar mechanisms. 
Once an IP address failover is initiated, the server taking over the floating IP address adds the address to its network interface. The server announces this takeover to other devices using Layer 2 by sending a gratuitous Address Resolution Protocol (ARP) frame. Alternatively, sometimes a routing protocol like Open Shortest Path First (OSPF), announces the IP address to the upstream Layer 3 router. The following diagram shows a typical setup in an on-premises environment. The preceding diagram shows how a primary server and a secondary server connected to the same switch exchange responsiveness information through a heartbeat mechanism. If the primary server fails, the secondary server sends a gratuitous ARP frame to the switch to take over the floating IP address. You use a slightly different setup with on-premises load-balancing solutions, such as Windows Network Load Balancing or a Linux Load Balancing with Direct Server response like IPVS. In these cases, the service also sends out gratuitous ARP frames, but with the MAC address of another server as the gratuitous ARP source. This action essentially spoofs the ARP frames and takes over the source IP address of another server. This action is done to distribute the load for one IP address between different servers. However, this kind of setup is out of scope for this document. In almost all cases when floating IP addresses are used for on-premises load balancing, migrating to Cloud Load Balancing is preferred. Challenges with migrating floating IP addresses to Compute Engine Compute Engine uses a virtualized network stack in a Virtual Private Cloud (VPC) network, so typical implementation mechanisms don't work without changes in Google Cloud. For example, the VPC network handles ARP requests in the software-defined network, and ignores gratuitous ARP frames. In addition, it's impossible to directly modify the VPC network routing table with standard routing protocols such as OSPF or Border Gateway Protocol (BGP). The typical mechanisms for floating IP addresses rely on ARP requests being handled by switching infrastructure or they rely on networks programmable by OSPF or BGP. Therefore, IP addresses don't failover using these mechanisms in Google Cloud. If you migrate a virtual machine (VM) image using an on-premises floating IP address, the floating IP address can't fail over without changing the application. You could use an overlay network to create a configuration that enables full Layer 2 communication and IP takeover through ARP requests. However, setting up an overlay network is complex and makes managing Compute Engine network resources difficult. That approach is also out of scope for this document. Instead, this document describes patterns for implementing failover scenarios in a Compute Engine networking environment without creating overlay networks. To implement highly available and reliable applications in Compute Engine, use horizontally scaling architectures. This type of architecture minimizes the effect of a single node failure. 
This document describes multiple patterns to migrate an existing application using floating IP addresses from on-premises to Compute Engine, including the following: Patterns using load balancing: Active-active load balancing Load balancing with failover groups and application-exposed health checks Load balancing with failover groups and heartbeat-exposed health checks Patterns using Google Cloud routes: Using ECMP routes Using different priority routes Using a heartbeat mechanism to switch a route's next hop Pattern using autohealing: Using an autohealing single instance Using Alias IP addresses that move between VM instances is discouraged as a failover mechanism because it doesn't meet high availability requirements. In certain failure scenarios, like a zonal failure event, you might not be able to remove an Alias IP address from an instance. Therefore, you might not be able to add it to another instance—making failover impossible. Selecting a pattern for your use case Depending on your requirements, one or more of the patterns described in this solution might be useful to implement floating IP addresses in an on-premises environment. Consider the following factors when deciding what pattern best lets you use an application: Floating internal or floating external IP address: Most applications that require floating IP addresses use floating internal IP addresses. Few applications use floating external IP addresses, because typically traffic to external applications should be load balanced. The table later in this section recommends patterns you can use for floating internal IP addresses and for floating external IP addresses. For use cases that rely on floating internal IP addresses, any of these patterns might be viable for your needs. However, we recommend that use cases relying on floating external IP addresses should be migrated to one of the patterns using load balancing. Application protocols: If your VM only uses TCP and UDP, you can use all of the patterns in the table. If it uses other protocols on top of IPv4 to connect, only some patterns are appropriate. Active-active deployment compatibility: Some applications, while using floating IP addresses on-premises, can work in an active-active deployment mode. This capability means they don't necessarily require failover from the primary server to the secondary server. You have more choices of patterns to move these kinds of applications to Compute Engine. Applications that require only a single application server to receive traffic at any time aren't compatible with active-active deployment. You can only implement these applications with some patterns in the following table. Failback behavior after primary VM recovers: When the original primary VM recovers after a failover, depending on the pattern used, traffic does one of two things. It either immediately moves back to the original primary VM or it stays on the new primary VM until failback is initiated manually or the new primary VM fails. In all cases, only newly initiated connections fail back. Existing connections stay at the new primary VM until they are closed. Health check compatibility: If you can't check if your application is responsive using Compute Engine health checks, without difficulty, you can't use some patterns described in the following table. Instance groups: Any pattern with health check compatibility is also compatible with instance groups. To automatically recreate failed instances, you can use a managed instance group with autohealing. 
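As a sketch of that last point (with assumed names, zone, port, and an instance template that is defined elsewhere), a single-instance managed instance group with autohealing might look like the following in Terraform:

```hcl
# Sketch: a one-VM managed instance group that recreates the VM when the
# TCP health check fails. Names, zone, and the instance template are assumptions.
resource "google_compute_health_check" "autoheal" {
  name                = "autoheal-hc"
  check_interval_sec  = 5
  timeout_sec         = 5
  healthy_threshold   = 2
  unhealthy_threshold = 3

  tcp_health_check {
    port = 80
  }
}

resource "google_compute_instance_group_manager" "single" {
  name               = "floating-ip-app-mig"
  zone               = "us-central1-a"                             # assumed zone
  base_instance_name = "floating-ip-app"
  target_size        = 1

  version {
    instance_template = google_compute_instance_template.app.id    # assumed template
  }

  auto_healing_policies {
    health_check      = google_compute_health_check.autoheal.id
    initial_delay_sec = 120    # give the service time to start before health checking
  }
}
```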
If your VMs keep state, you can use a stateful managed instance group. If your VMs can't be recreated automatically or you require manual failover, use an unmanaged instance group and manually recreate the VMs during failover. Existing heartbeat mechanisms: If the high availability setup for your application already uses a heartbeat mechanism to trigger failover, like Heartbeat, Pacemaker, or Keepalived, you can use some patterns described in the following table. The following table lists pattern capabilities. Each pattern is described in the following sections: Patterns using load balancing, Patterns using Google Cloud routes, and Pattern using autohealing. Each entry lists the pattern name, IP address type, supported protocols, deployment mode, failback behavior, whether application health check compatibility is required, and whether a heartbeat mechanism can be integrated.
Patterns using load balancing:
Active-active load balancing: IP address: Internal or external. Supported protocols: TCP/UDP only. Deployment mode: Active-active. Failback: N/A. Application health check compatibility required: Yes. Can integrate heartbeat mechanism: No.
Load balancing with failover and application-exposed health checks: IP address: Internal or external. Supported protocols: TCP/UDP only. Deployment mode: Active-passive. Failback: Immediate (except existing connections). Application health check compatibility required: Yes. Can integrate heartbeat mechanism: No.
Load balancing with failover and heartbeat-exposed health checks: IP address: Internal or external. Supported protocols: TCP/UDP only. Deployment mode: Active-passive. Failback: Configurable. Application health check compatibility required: No. Can integrate heartbeat mechanism: Yes.
Patterns using Google Cloud routes:
Using ECMP routes: IP address: Internal. Supported protocols: All IP protocols. Deployment mode: Active-active. Failback: N/A. Application health check compatibility required: Yes. Can integrate heartbeat mechanism: No.
Using different priority routes: IP address: Internal. Supported protocols: All IP protocols. Deployment mode: Active-passive. Failback: Immediate (except existing connections). Application health check compatibility required: Yes. Can integrate heartbeat mechanism: No.
Using a heartbeat mechanism to switch route next hop: IP address: Internal. Supported protocols: All IP protocols. Deployment mode: Active-passive. Failback: Configurable. Application health check compatibility required: No. Can integrate heartbeat mechanism: Yes.
Pattern using autohealing:
Using an autohealing single instance: IP address: Internal. Supported protocols: All IP protocols. Deployment mode: N/A. Failback: N/A. Application health check compatibility required: Yes. Can integrate heartbeat mechanism: No.
Deciding which pattern to use for your use case might depend on multiple factors. The following decision tree can help you narrow your choices to a suitable option. The preceding diagram outlines the following steps: Does a single autohealing instance provide good enough availability for your needs? If yes, see Using an autohealing single instance later in this document. Autohealing uses a mechanism in a VM instance group to automatically replace a faulty VM instance. If not, proceed to the next decision point. Does your application need protocols on top of IPv4 other than TCP and UDP? If yes, proceed to the next decision point. If no, proceed to the next decision point. Can your application work in active-active mode? If yes and it needs protocols on top of IPv4 other than TCP and UDP, see Using equal-cost multipath (ECMP) routes later in this document. ECMP routes distribute traffic among the next hops of all route candidates. If yes and it doesn't need protocols on top of IPv4 other than TCP and UDP, see Active-active load balancing later in this document. Active-active load balancing uses your VMs as backends for an internal TCP/UDP load balancer. If not–in either case–proceed to the next decision point. Can your application expose Google Cloud health checks? If yes and it needs protocols on top of IPv4 other than TCP and UDP, see Load balancing with failover and application-exposed health checks later in this document. Load balancing with failover and application-exposed health checks uses your VMs as backends for an internal TCP/UDP load balancer. It also uses the Internal TCP/UDP Load Balancing IP address as a virtual IP address. If yes and it doesn't need protocols on top of IPv4 other than TCP and UDP, see Using different priority routes later in this document.
Using different priority routes helps ensure that traffic always flows to a primary instance unless that instance fails. If no and it needs protocols on top of IPv4 other than TCP and UDP, see Load balancing with failover and heartbeat-exposed health checks later in this document. In the load balancing with failover and heartbeat-exposed health checks pattern, health checks aren't exposed by the application itself but by a heartbeat mechanism running between both VMs. If no and it DOES NOT NEED protocols on top of IPv4 other than TCP and UDP, see Using a heartbeat mechanism to switch a route's next hop later in this document. Using a heartbeat mechanism to switch a route's next hop uses a single static route with the next-hop pointing to the primary VM instance. Patterns using load balancing Usually, you can migrate your application using floating IP addresses to an architecture in Google Cloud that uses Cloud Load Balancing. You can use an internal passthrough Network Load Balancer, as this option fits most use cases where the on-premises migrated service is only exposed internally. This load-balancing option is used for all examples in this section and in the sample deployments on GitHub. If you have clients accessing the floating IP address from other regions, select the global access option. If your application communicates using protocols on top of IPv4, other than TCP or UDP, you must choose a pattern that doesn't use load balancing. Those patterns are described later in this document. If your application uses HTTP(S), you can use an internal Application Load Balancer to implement the active-active pattern. If the service you are trying to migrate is externally available, you can implement all the patterns that are discussed in this section by using an external passthrough Network Load Balancer. For active-active deployments, you can also use an external Application Load Balancer, a TCP proxy, or an SSL proxy if your application uses protocols and ports supported by those load balancing options. Consider the following differences between on-premises floating-IP-address-based implementations and all load-balancing-based patterns: Failover time: Pairing Keepalived with gratuitous ARP in an on-premises environment might fail over an IP address in a few seconds. In the Compute Engine environment, the mean recovery time from failover depends on the parameters you set. In case the virtual machine (VM) instance or the VM instance service fails, the mean-time-to-failover traffic depends on health check parameters such as Check Interval and Unhealthy Threshold. With these parameters set to their default values, failover usually takes 15–20 seconds. You can reduce the time by decreasing those parameter values. In Compute Engine, failovers within zones or between zones take the same amount of time. Protocols and Ports: In an on-premises setup, the floating IP addresses accept all traffic. Choose one of the following port specifications in the internal forwarding rule for the internal passthrough Network Load Balancer: Specify at least one port and up to five ports by number. Specify ALL to forward traffic on all ports for either TCP or UDP. Use multiple forwarding rules with the same IP address to forward a mix of TCP and UDP traffic or to use more than five ports with a single IP address: Only TCP or UDP and 1—5 ports: Use one forwarding rule. TCP and UDP and 1—5 ports: Use multiple forwarding rules. 6 or more ports and TCP or UDP: Use multiple forwarding rules. 
Health checking: On-premises, you can check application responsiveness on a machine in the following ways: Receiving a signal from the other host specifying that it is still responsive. Monitoring if the application is still available through the chosen heartbeat mechanism (Keepalived, Pacemaker, or Heartbeat). In Compute Engine, the health check has to be accessible from outside the host through gRPC, HTTP, HTTP/2, HTTPS, TCP, or SSL. The active-active load balancing and load balancing with failover group and application exposed health checking patterns require that your application expose its health checks. To migrate services using an existing heartbeat mechanism, you can use the load balancing with failover groups and heartbeat-exposed health checks pattern. Active-active load balancing In the active-active load balancing pattern, your VMs are backends for an internal passthrough Network Load Balancer. You use the internal passthrough Network Load Balancer IP address as a virtual IP address. Traffic is equally distributed between the two backend instances. Traffic belonging to the same session goes to the same backend instance as defined in the session affinity settings. Use the active-active load balancing pattern if your application only uses protocols based on TCP and UDP and doesn't require failover between machines. Use the pattern in a scenario where applications can answer requests depending on the content of the request itself. If there is a machine state that isn't constantly synchronized, don't use the pattern—for example, in a primary or secondary database. The following diagram shows an implementation of the active-active load balancing pattern: The preceding diagram shows how an internal client accesses a service that runs on two VMs through an internal passthrough Network Load Balancer. Both VMs are part of an instance group. The active-active load balancing pattern requires your service to expose health checks using one of the supported health check protocols to ensure that only responsive VMs receive traffic. For a full sample implementation of this pattern, see the example deployment with Terraform on GitHub. Load balancing with failover and application-exposed health checks Similar to the active-active pattern, the load balancing through failover and application-exposed health checks pattern uses your VMs as backends for an internal passthrough Network Load Balancer. It also uses the internal passthrough Network Load Balancer IP address as a virtual IP address. To ensure that only one VM receives traffic at a time, this pattern applies failover for internal passthrough Network Load Balancers. This pattern is recommended if your application only has TCP or UDP traffic, but doesn't support an active-active deployment. When you apply this pattern, all traffic flows to either the primary VM or the failover VM. The following diagram shows an implementation of the load balancing with failover and application-exposed health checks pattern: The preceding diagram shows how an internal client accesses a service behind an internal passthrough Network Load Balancer. Two VMs are in separate instance groups. One instance group is set as a primary backend. The other instance group is set as a failover backend for an internal passthrough Network Load Balancer. If the service on the primary VM becomes unresponsive, traffic switches over to the failover instance group. Once the primary VM is responsive again, traffic automatically switches back to the primary backend service. 
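To make the moving parts of this pattern concrete, the following is a minimal, hedged Terraform sketch; the region, port, network names, and the two unmanaged instance groups are assumptions rather than values from the sample deployment.

```hcl
# Sketch: internal passthrough Network Load Balancer with a failover backend.
# The two instance groups (primary/secondary) are assumed to exist elsewhere,
# each containing one VM that serves the application on TCP port 80.
resource "google_compute_health_check" "app" {
  name                = "floating-ip-app-hc"
  check_interval_sec  = 5
  timeout_sec         = 5
  healthy_threshold   = 2
  unhealthy_threshold = 2

  tcp_health_check {
    port = 80
  }
}

resource "google_compute_region_backend_service" "app" {
  name          = "floating-ip-app-backend"
  region        = "us-central1"                            # assumed region
  protocol      = "TCP"
  health_checks = [google_compute_health_check.app.id]

  backend {
    group = google_compute_instance_group.primary.id       # assumed primary group
  }

  backend {
    group    = google_compute_instance_group.secondary.id  # assumed failover group
    failover = true
  }
}

# The forwarding rule's IP address plays the role of the floating IP address.
resource "google_compute_forwarding_rule" "floating_ip" {
  name                  = "floating-ip-rule"
  region                = "us-central1"
  load_balancing_scheme = "INTERNAL"
  ip_protocol           = "TCP"
  all_ports             = true
  network               = "default"                        # assumed network
  subnetwork            = "default"                        # assumed subnetwork
  backend_service       = google_compute_region_backend_service.app.id
}
```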
For a full sample implementation of this pattern, see the example deployment with Terraform on GitHub. Load balancing with failover and heartbeat-exposed health checks The load balancing with failover and heartbeat-exposed health checks pattern is the same as the previous pattern. The difference is that health checks aren't exposed by the application itself but by a heartbeat mechanism running between both VMs. The following diagram shows an implementation of the load balancing with failover and heartbeat-exposed health checks pattern: This diagram shows how an internal client accesses a service behind an internal load balancer. Two VMs are in separate instance groups. One instance group is set as a primary backend. The other instance group is set as a failover backend for an internal passthrough Network Load Balancer. Keepalived is used as a heartbeat mechanism between the VM nodes. The VM nodes exchange information on the status of the service using the chosen heartbeat mechanism. Each VM node checks its own status and communicates that status to the remote node. Depending on the status of the local node and the status received by the remote node, one node is elected as the primary node and one node is elected as the backup node. You can use this status information to expose a health check result that ensures that the node considered primary in the heartbeat mechanism also receives traffic from the internal passthrough Network Load Balancer. For example, with Keepalived you can invoke scripts using the notify_master, notify_backup, and notify_fault configuration variables that change the health check status. On transition to the primary state (in Keepalived this state is called master), you can start an application that listens on a custom TCP port. When transitioning to a backup or fault state, you can stop this application. The health check can then be a TCP health check that succeeds if this custom TCP port is open. This pattern is more complex than the pattern using failover with application-exposed health checks. However, it gives you more control. For example, you can configure it to fail back immediately or manually as part of the implementation of the heartbeat mechanism. For a full sample implementation of this pattern that uses Keepalived, see the example deployment with Terraform on GitHub. Patterns using Google Cloud routes In cases where your application uses protocols other than TCP or UDP on top of IPv4, you can migrate your floating IP address to a pattern based on routes. In this section, mentions of routes always refer to Google Cloud routes that are part of a VPC network. References to static routes always refer to static routes on Google Cloud. Using one of these patterns, you set multiple static routes for a specific IP address with the different instances as next-hops. This IP address becomes the floating IP address all clients use. It needs to be outside all VPC subnet IP address ranges because static routes can't override existing subnet routes. You must turn on IP address forwarding on the target instances. Enabling IP address forwarding lets you accept traffic for IP addresses not assigned to the instances—in this case the floating IP address. If you want the floating IP address routes to be available from peered VPC networks, export custom routes so the floating IP address routes propagate to all peer VPC networks. 
To have connectivity from an on-premises network connected through Cloud Interconnect or Cloud VPN, you need to use custom IP address route advertisements to have the floating IP address advertised on-premises. Route-based patterns have the following advantage over load-balancing-based patterns: Protocols and ports: Route-based patterns apply to all traffic sent to a specific destination. Load-balancing-based patterns only allow for TCP and UDP traffic. Route-based patterns have the following disadvantages over load-balancing-based patterns: Health checking: Health checks can't be attached to Google Cloud routes. Routes are used regardless of the health of the underlying VM services. Whenever the VM is running, routes direct traffic to instances even if the service is unhealthy. Attaching an autohealing policy to those instances causes them to be replaced after they have been unhealthy for a time period that you specify. However, once those instances restart, traffic resumes immediately—even before the service is up. This service gap can lead to potential service errors when unhealthy instances are still serving traffic or are restarting. Failover time: After you delete or stop a VM instance, Compute Engine disregards any static route pointing to this instance. However, since there are no health checks on routes, Compute Engine still uses the static route as long as the instance is still available. In addition, stopping the instance takes time, so failover time is considerably higher than it is with load-balancing-based patterns. Internal floating IP addresses only: While you can implement patterns using load balancing with an external passthrough Network Load Balancer to create an external floating IP address, route-based patterns only work with internal floating IP addresses. Floating IP address selection: You can set routes only to internal floating IP addresses that aren't part of any subnet—subnet routes can't be overwritten in Google Cloud. Track these floating IP addresses so you don't accidentally assign them to another network. Routes reachability: To make internal floating IP addresses reachable from on-premises networks or peered networks, you need to distribute those static routes as described previously. Using equal-cost multipath (ECMP) routes The equal-cost multipath (ECMP) routes pattern is similar to the active-active load balancing pattern—traffic is equally distributed between the two backend instances. When you use static routes, ECMP distributes traffic among the next hops of all route candidates by using a five-tuple hash for affinity. You implement this pattern by creating two static routes of equal priority with the Compute Engine instances as next-hops. The following diagram shows an implementation of the ECMP routes pattern: The preceding diagram shows how an internal client accesses a service using one of two routes with the next hop pointing to the VM instances implementing the service. If the service on one VM becomes unresponsive, autohealing tries to recreate the unresponsive instance. Once autohealing deletes the instance, the route pointing to the instance becomes inactive before the new instance has been created. Once the new instance exists, the route pointing to this instance is automatically used again and traffic is equally distributed between the instances. The ECMP routes pattern requires your service to expose health checks using supported protocols so autohealing can automatically replace unresponsive VMs.
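For illustration, a pair of equal-priority routes for this pattern might look like the following Terraform sketch. The project, zones, instance names, network name, and floating IP address are assumptions; the GitHub repository referenced next contains the complete implementation.

```hcl
# Two routes to the same destination with equal priority; ECMP hashes traffic
# across both next hops. All names and addresses are assumptions.
resource "google_compute_route" "ecmp_vm1" {
  name              = "floating-ip-ecmp-vm1"
  network           = "my-vpc"
  dest_range        = "192.168.255.10/32"
  priority          = 1000
  next_hop_instance = "projects/my-project/zones/europe-west1-b/instances/vm-1"
}

resource "google_compute_route" "ecmp_vm2" {
  name              = "floating-ip-ecmp-vm2"
  network           = "my-vpc"
  dest_range        = "192.168.255.10/32"
  priority          = 1000 # same priority as the first route
  next_hop_instance = "projects/my-project/zones/europe-west1-c/instances/vm-2"
}
```

Both VMs would also need IP forwarding enabled and a health check attached to their managed instance group's autohealing policy, as described earlier in this section.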
You can find a sample implementation of this pattern using Terraform in the GitHub repository associated with this document. Using different priority routes The different priority routes pattern is similar to the previous pattern, except that it uses different priority static routes so traffic always flows to a primary instance unless that instance fails. To implement this pattern, follow the same steps as in the ECMP routes pattern. When creating the static routes, give the route with the next-hop pointing to the primary instance a lower priority value (primary route). Give the route with the next-hop pointing to the secondary instance a higher priority value (secondary route). The following diagram shows an implementation of the different priority routes pattern: The preceding diagram shows how an internal client accessing a service uses a primary route with a priority value of 500 pointing to VM 1 as the next hop in normal circumstances. A second route with a priority value of 1,000 is available pointing to VM 2, the secondary VM, as the next hop. If the service on the primary VM becomes unresponsive, autohealing tries to recreate the instance. Once autohealing deletes the instance, and before the new instance it creates comes up, the primary route, with the primary instance as a next hop, becomes inactive. The pattern then uses the route with the secondary instance as a next hop. Once the new primary instance comes up, the primary route becomes active again and all traffic flows to the primary instance. Like the previous pattern, the different priority routes pattern requires your service to expose health checks using supported protocols so autohealing can replace unresponsive VMs automatically. You can find a sample implementation of this pattern using Terraform in the GitHub repository that accompanies this document. Using a heartbeat mechanism to switch a route's next hop If your application implements a heartbeat mechanism, like Keepalived, to monitor application responsiveness, you can apply the heartbeat mechanism pattern to change the next hop of the static route. In this case, you only use a single static route with the next-hop pointing to the primary VM instance. On failover, the heartbeat mechanism points the next hop of the route to the secondary VM. The following diagram shows an implementation of the heartbeat mechanism to switch a route's next hop pattern: The preceding diagram shows how an internal client accesses a service using a route with the next hop pointing to the primary VM. The primary VM exchanges heartbeat information with the secondary VM through Keepalived. On failover, Keepalived calls a Cloud Run function that uses API calls to point the next hop at the secondary VM. The nodes use the chosen heartbeat mechanism to exchange information with each other about the status of the service. Each VM node checks its own status and communicates it to the remote VM node. Depending on the status of the local VM node and the status received from the remote node, one VM node is elected as the primary node and one VM node is elected as the backup node. Once a node becomes primary, it points the next hop of the route for the floating IP address to itself. If you use Keepalived, you can invoke a script using the notify_master configuration variable that replaces the static route using an API call or the Google Cloud CLI. The heartbeat mechanism to switch a route's next-hop pattern doesn't require the VMs to be part of an instance group.
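The starting state for this pattern can be expressed as a single route that points at the primary VM, as in the following Terraform sketch; the names and the floating IP address are assumptions. At runtime, the Keepalived notify scripts repoint or recreate this route outside of Terraform, so the sketch ignores drift on the next hop. The GitHub example referenced later in this section shows the full mechanism, including the notify scripts.

```hcl
# Single route for the heartbeat pattern; the notify scripts change it on
# failover. Names and addresses are assumptions.
resource "google_compute_route" "floating_ip" {
  name              = "floating-ip-route"
  network           = "my-vpc"
  dest_range        = "192.168.255.10/32"
  priority          = 500
  next_hop_instance = "projects/my-project/zones/europe-west1-b/instances/vm-primary"

  lifecycle {
    # The heartbeat scripts change the next hop outside of Terraform, so don't
    # treat that change as drift to be reverted on the next apply.
    ignore_changes = [next_hop_instance]
  }
}
```

Depending on how the notify scripts are implemented, it can be simpler to manage this route entirely outside of Terraform and let the scripts own it.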
If you want the VMs to be automatically replaced on failure, you can put them in an autohealing instance group. You can also manually repair and recreate unresponsive VMs. Invoking the following procedure on failover ensures that failover time is minimized because traffic fails over after a single API call is completed in Step 1: Create a new static route with the floating IP address as the destination and the new primary instance as the next hop. The new route should have a different route name and a lower route priority (400, for example) than the original route. Delete the original route to the old primary VM. Create a route with the same name and priority as the route that you just deleted. Point it at the new primary VM as the next hop. Delete the new static route you created. You don't need it to ensure traffic flows to the new primary VM. Since the original route is replaced, only one route should be active at a time even when there is a split network. Using the heartbeat mechanism to switch a route's next hop pattern instead of the other route-based patterns can reduce failover time. You don't have to delete and replace VMs through autohealing for failover. It also gives you more control over when to fail back to the original primary server after it becomes responsive again. One disadvantage of the pattern is that you have to manage the heartbeat mechanism yourself. Managing the mechanism can lead to more complexity. Another disadvantage is that you have to give privileges to change the global routing table to either the VMs running the heartbeat process or to a serverless function called from the heartbeat process. Delegating changes to the global routing table to a serverless function is more secure because it can reduce the scope of the privileges given to the VMs. However, this approach is more complex to implement. For a full sample implementation of this pattern with Keepalived, see the example deployment with Terraform on GitHub. Pattern using autohealing Depending on recovery-time requirements, migrating to a single VM instance might be a feasible option when using Compute Engine. This option can be feasible even if multiple servers using a floating IP address were used on-premises. This pattern can sometimes be used despite the reduced number of VMs because you can create a new Compute Engine instance in seconds or minutes, while on-premises failures typically require hours or even days to fix. Using an autohealing single instance With this pattern, you rely on the autohealing mechanism in a VM instance group to automatically replace a faulty VM instance. The application exposes a health check, and when the application is unhealthy, autohealing automatically replaces the VM. The following diagram shows an implementation of the autohealing single instance pattern: The preceding diagram shows how an internal client connects directly to a Compute Engine instance placed in a managed instance group with a size of 1 and with autohealing turned on. Compared with patterns using load balancing, the autohealing single instance pattern has the following advantages: Traffic distribution: There is only one instance, so the instance always receives all traffic. Ease of use: Because there is only one instance, this pattern is the least complicated to implement. Cost savings: Using a single VM instance instead of two can cut the cost of the implementation in half. However, the pattern has the following disadvantages: Failover time: This process is much slower than load-balancing-based patterns.
After the health checks detect a machine failure, deleting and recreating the failed instance takes at least a minute, but often takes more time. This pattern isn't common in production environments. However, the failover time might be good enough for some internal or experimental services. Reaction to zone failures: A managed instance group with a size of 1 doesn't survive a zone failure. To react to zone failures, consider adding a Cloud Monitoring alert when the service fails, and create an instance group in another zone upon a zone failure. Because you can't use the same IP address in this case, use a Cloud DNS private zone to address the VM and switch the DNS name to the new IP address. You can find a sample implementation of this pattern using Terraform in the GitHub repository. What's next Check out the deployment templates for this document on GitHub. Learn about internal passthrough Network Load Balancers. Learn about failover options for internal passthrough Network Load Balancers. Learn about routes in Compute Engine. Review the SQL Server Always On Availability Group solution. Learn about running Windows Server Failover Clustering. Learn about building a Microsoft SQL Server Always On Availability Group on Compute Engine. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Perform_testing_for_recovery_from_data_loss.txt b/Perform_testing_for_recovery_from_data_loss.txt new file mode 100644 index 0000000000000000000000000000000000000000..47aec9b1a3ec5a79ac5260133d552c4d79bf79d4 --- /dev/null +++ b/Perform_testing_for_recovery_from_data_loss.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/perform-testing-for-recovery-from-data-loss +Date Scraped: 2025-02-23T11:43:40.364Z + +Content: +Home Docs Cloud Architecture Center Send feedback Perform testing for recovery from data loss Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you design and run tests for recovery from data loss. This principle is relevant to the learning focus area of reliability. Principle overview To ensure that your system can recover from situations where data is lost or corrupted, you need to run tests for those scenarios. Instances of data loss might be caused by a software bug or some type of natural disaster. After such events, you need to restore data from backups and bring all of the services back up again by using the freshly restored data. We recommend that you use three criteria to judge the success or failure of this type of recovery test: data integrity, recovery time objective (RTO), and recovery point objective (RPO). For details about the RTO and RPO metrics, see Basics of DR planning. The goal of data restoration testing is to periodically verify that your organization can continue to meet business continuity requirements. Besides measuring RTO and RPO, a data restoration test must include testing of the entire application stack and all the critical infrastructure services with the restored data. This is necessary to confirm that the entire deployed application works correctly in the test environment. Recommendations When you design and run tests for recovering from data loss, consider the recommendations in the following subsections.
Verify backup consistency and test restoration processes You need to verify that your backups contain consistent and usable snapshots of data that you can restore to immediately bring applications back into service. To validate data integrity, set up automated consistency checks to run after each backup. To test backups, restore them in a non-production environment. To ensure your backups can be restored efficiently and that the restored data meets application requirements, regularly simulate data recovery scenarios. Document the steps for data restoration, and train your teams to execute the steps effectively during a failure. Schedule regular and frequent backups To minimize data loss during restoration and to meet RPO targets, it's essential to have regularly scheduled backups. Establish a backup frequency that aligns with your RPO. For example, if your RPO is 15 minutes, schedule backups to run at least every 15 minutes. Optimize the backup intervals to reduce the risk of data loss. Use Google Cloud tools like Cloud Storage, Cloud SQL automated backups, or Spanner backups to schedule and manage backups. For critical applications, use near-continuous backup solutions like point-in-time recovery (PITR) for Cloud SQL or incremental backups for large datasets. Define and monitor RPO Set a clear RPO based on your business needs, and monitor adherence to the RPO. If backup intervals exceed the defined RPO, use Cloud Monitoring to set up alerts. Monitor backup health Use Google Cloud Backup and DR service or similar tools to track the health of your backups and confirm that they are stored in secure and reliable locations. Ensure that the backups are replicated across multiple regions for added resilience. Plan for scenarios beyond backup Combine backups with disaster recovery strategies like active-active failover setups or cross-region replication for improved recovery time in extreme cases. For more information, see Disaster recovery planning guide. Previous arrow_back Perform testing for recovery from failures Next Conduct thorough postmortems arrow_forward Send feedback \ No newline at end of file diff --git a/Perform_testing_for_recovery_from_failures.txt b/Perform_testing_for_recovery_from_failures.txt new file mode 100644 index 0000000000000000000000000000000000000000..79f670bffb076bdee9c236985b3e8b6b21999ec7 --- /dev/null +++ b/Perform_testing_for_recovery_from_failures.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/perform-testing-for-recovery-from-failures +Date Scraped: 2025-02-23T11:43:37.864Z + +Content: +Home Docs Cloud Architecture Center Send feedback Perform testing for recovery from failures Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you design and run tests for recovery in the event of failures. This principle is relevant to the learning focus area of reliability. Principle overview To be sure that your system can recover from failures, you must periodically run tests that include regional failovers, release rollbacks, and data restoration from backups. This testing helps you to practice responses to events that pose major risks to reliability, such as the outage of an entire region. This testing also helps you verify that your system behaves as intended during a disruption. 
In the unlikely event of an entire region going down, you need to fail over all traffic to another region. During normal operation of your workload, when data is modified, it needs to be synchronized from the primary region to the failover region. You need to verify that the replicated data is always very recent, so that users don't experience data loss or session breakage. The load balancing system must also be able to shift traffic to the failover region at any time without service interruptions. To minimize downtime after a regional outage, operations engineers also need to be able to manually and efficiently shift user traffic away from a region, in as little time as possible. This operation is sometimes called draining a region, which means you stop the inbound traffic to the region and move all the traffic elsewhere. Recommendations When you design and run tests for failure recovery, consider the recommendations in the following subsections. Define the testing objectives and scope Clearly define what you want to achieve from the testing. For example, your objectives can include the following: Validate the recovery time objective (RTO) and the recovery point objective (RPO). For details, see Basics of DR planning. Assess system resilience and fault tolerance under various failure scenarios. Test the effectiveness of automated failover mechanisms. Decide which components, services, or regions are in the testing scope. The scope can include specific application tiers like the frontend, backend, and database, or it can include specific Google Cloud resources like Cloud SQL instances or GKE clusters. The scope must also specify any external dependencies, such as third-party APIs or cloud interconnections. Prepare the environment for testing Choose an appropriate environment, preferably a staging or sandbox environment that replicates your production setup. If you conduct the test in production, ensure that you have safety measures ready, like automated monitoring and manual rollback procedures. Create a backup plan. Take snapshots or backups of critical databases and services to prevent data loss during the test. Ensure that your team is prepared to perform manual interventions if the automated failover mechanisms fail. To prevent test disruptions, ensure that your IAM roles, policies, and failover configurations are correctly set up. Verify that the necessary permissions are in place for the test tools and scripts. Inform stakeholders, including operations, DevOps, and application owners, about the test schedule, scope, and potential impact. Provide stakeholders with an estimated timeline and the expected behaviors during the test. Simulate failure scenarios Plan and execute failures by using tools like Chaos Monkey. You can use custom scripts to simulate failures of critical services such as a shutdown of a primary node in a multi-zone GKE cluster or a disabled Cloud SQL instance. You can also use scripts to simulate a region-wide network outage by using firewall rules or API restrictions based on the scope of your test. Gradually escalate the failure scenarios to observe system behavior under various conditions. Introduce load testing alongside failure scenarios to replicate real-world usage during outages. Test cascading failure impacts, such as how frontend systems behave when backend services are unavailable. To validate configuration changes and to assess the system's resilience against human errors, test scenarios that involve misconfigurations.
For example, run tests with incorrect DNS failover settings or incorrect IAM permissions. Monitor system behavior Monitor how load balancers, health checks, and other mechanisms reroute traffic. Use Google Cloud tools like Cloud Monitoring and Cloud Logging to capture metrics and events during the test. Observe changes in latency, error rates, and throughput during and after the failure simulation, and monitor the overall performance impact. Identify any degradation or inconsistencies in the user experience. Ensure that logs are generated and alerts are triggered for key events, such as service outages or failovers. Use this data to verify the effectiveness of your alerting and incident response systems. Verify recovery against your RTO and RPO Measure how long it takes for the system to resume normal operations after a failure, and then compare this data with the defined RTO and document any gaps. Ensure that data integrity and availability align with the RPO. To test database consistency, compare snapshots or backups of the database before and after a failure. Evaluate service restoration and confirm that all services are restored to a functional state with minimal user disruption. Document and analyze results Document each test step, failure scenario, and corresponding system behavior. Include timestamps, logs, and metrics for detailed analyses. Highlight bottlenecks, single points of failure, or unexpected behaviors observed during the test. To help prioritize fixes, categorize issues by severity and impact. Suggest improvements to the system architecture, failover mechanisms, or monitoring setups. Based on test findings, update any relevant failover policies and playbooks. Present a postmortem report to stakeholders. The report should summarize the outcomes, lessons learned, and next steps. For more information, see Conduct thorough postmortems. Iterate and improve To validate ongoing reliability and resilience, plan periodic testing (for example, quarterly). Run tests under different scenarios, including infrastructure changes, software updates, and increased traffic loads. Automate failover tests by using CI/CD pipelines to integrate reliability testing into your development lifecycle. During the postmortem, use feedback from stakeholders and end users to improve the test process and system resilience. Previous arrow_back Design for graceful degradation Next Perform testing for recovery from data loss arrow_forward Send feedback \ No newline at end of file diff --git a/Performance_optimization.txt b/Performance_optimization.txt new file mode 100644 index 0000000000000000000000000000000000000000..f683b40e9edccfb50c4d8b1c10c8b868b6341cec --- /dev/null +++ b/Performance_optimization.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/perspectives/ai-ml/performance-optimization +Date Scraped: 2025-02-23T11:44:27.422Z + +Content: +Home Docs Cloud Architecture Center Send feedback AI and ML perspective: Performance optimization Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC This document in the Architecture Framework: AI and ML perspective provides an overview of principles and recommendations to help you to optimize the performance of your AI and ML workloads on Google Cloud. The recommendations in this document align with the performance optimization pillar of the Architecture Framework. AI and ML systems enable new automation and decision-making capabilities for your organization. 
The performance of these systems can directly affect your business drivers like revenue, costs, and customer satisfaction. To realize the full potential of your AI and ML systems, you need to optimize their performance based on your business goals and technical requirements. The performance optimization process often involves certain trade-offs. For example, a design choice that provides the required performance might lead to higher costs. The recommendations in this document prioritize performance over other considerations like costs. To optimize AI and ML performance, you need to make decisions regarding factors like the model architecture, parameters, and training strategy. When you make these decisions, consider the entire lifecycle of the AI and ML systems and their deployment environment. For example, LLMs that are very large can be highly performant on massive training infrastructure, but very large models might not perform well in capacity-constrained environments like mobile devices. Translate business goals to performance objectives To make architectural decisions that optimize performance, start with a clear set of business goals. Design AI and ML systems that provide the technical performance that's required to support your business goals and priorities. Your technical teams must understand the mapping between performance objectives and business goals. Consider the following recommendations: Translate business objectives into technical requirements: Translate the business objectives of your AI and ML systems into specific technical performance requirements and assess the effects of not meeting the requirements. For example, for an application that predicts customer churn, the ML model should perform well on standard metrics, like accuracy and recall, and the application should meet operational requirements like low latency. Monitor performance at all stages of the model lifecycle: During experimentation and training, and after model deployment, monitor your key performance indicators (KPIs) and observe any deviations from business objectives. Automate evaluation to make it reproducible and standardized: With a standardized and comparable platform and methodology for experiment evaluation, your engineers can increase the pace of performance improvement. Run and track frequent experiments To transform innovation and creativity into performance improvements, you need a culture and a platform that supports experimentation. Performance improvement is an ongoing process because AI and ML technologies are developing continuously and quickly. To maintain a fast-paced, iterative process, you need to separate the experimentation space from your training and serving platforms. A standardized and robust experimentation process is important. Consider the following recommendations: Build an experimentation environment: Performance improvements require a dedicated, powerful, and interactive environment that supports the experimentation and collaborative development of ML pipelines. Embed experimentation as a culture: Run experiments before any production deployment. Release new versions iteratively and always collect performance data. Experiment with different data types, feature transformations, algorithms, and hyperparameters. Build and automate training and serving services Training and serving AI models are core components of your AI services. You need robust platforms and practices that support fast and reliable creation, deployment, and serving of AI models.
Invest time and effort to create foundational platforms for your core AI training and serving tasks. These foundational platforms help to reduce time and effort for your teams and improve the quality of outputs in the medium and long term. Consider the following recommendations: Use AI-specialized components of a training service: Such components include high-performance compute and MLOps components like feature stores, model registries, metadata stores, and model performance-evaluation services. Use AI-specialized components of a prediction service: Such components provide high-performance and scalable resources, support feature monitoring, and enable model performance monitoring. To prevent and manage performance degradation, implement reliable deployment and rollback strategies. Match design choices to performance requirements When you make design choices to improve performance, carefully assess whether the choices support your business requirements or are wasteful and counterproductive. To choose the appropriate infrastructure, models, or configurations, identify performance bottlenecks and assess how they're linked to your performance measures. For example, even on very powerful GPU accelerators, your training tasks can experience performance bottlenecks due to data I/O issues from the storage layer or due to performance limitations of the model itself. Consider the following recommendations: Optimize hardware consumption based on performance goals: To train and serve ML models that meet your performance requirements, you need to optimize infrastructure at the compute, storage, and network layers. You must measure and understand the variables that affect your performance goals. These variables are different for training and inference. Focus on workload-specific requirements: Focus your performance optimization efforts on the unique requirements of your AI and ML workloads. Rely on managed services for the performance of the underlying infrastructure. Choose appropriate training strategies: Several pre-trained and foundational models are available, and more such models are released often. Choose a training strategy that can deliver optimal performance for your task. Decide whether you should build your own model, tune a pre-trained model on your data, or use a pre-trained model API. Recognize that performance-optimization strategies can have diminishing returns: When a particular performance-optimization strategy doesn't provide incremental business value that's measurable, stop pursuing that strategy. Link performance metrics to design and configuration choices To innovate, troubleshoot, and investigate performance issues, establish a clear link between design choices and performance outcomes. In addition to experimentation, you must reliably record the lineage of your assets, deployments, model outputs, and the configurations and inputs that produced the outputs. Consider the following recommendations: Build a data and model lineage system: All of your deployed assets and their performance metrics must be linked back to the data, configurations, code, and the choices that resulted in the deployed systems. In addition, model outputs must be linked to specific model versions and how the outputs were produced. Use explainability tools to improve model performance: Adopt and standardize tools and benchmarks for model exploration and explainability. These tools help your ML engineers understand model behavior and improve performance or remove biases. 
ContributorsAuthors: Benjamin Sadik | AI and ML Specialist Customer EngineerFilipe Gracio, PhD | Customer EngineerOther contributors: Kumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerZach Seils | Networking Specialist Previous arrow_back Cost optimization Send feedback \ No newline at end of file diff --git a/Persistent_Disk.txt b/Persistent_Disk.txt new file mode 100644 index 0000000000000000000000000000000000000000..12c01722bd2e4728ad6632cc3ef783fae92b17db --- /dev/null +++ b/Persistent_Disk.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/persistent-disk +Date Scraped: 2025-02-23T12:09:54.468Z + +Content: +Jump to Persistent DiskPersistent DiskReliable, high-performance block storage for virtual machine instances. Enterprise scale, limitless flexibility, and competitive price for performance.New customers get $300 in free credits to spend on Persistent Disk.Go to consoleView documentationGet started today by creating or attaching a diskUnderstand how to pick the best disk for your workloadA recent study shows our monthly machine costs are up to 80% less than other leading cloudsSee all block storage options available14:23Best practices for running storage-intensive workloads on Google Cloud.BenefitsBlock storage that is easy to deploy and scaleNo volumes, no striping, no sizing—just disks. Stop the headache of dealing with partitioning, redundant disk arrays, or subvolume management. Scale up or down as needed, and only pay for what you use.Industry-leading price and performanceHDD offers low-cost storage when bulk throughput is of primary importance. SSD offers consistently high performance for both random-access workloads and bulk throughput. Both types can be sized up to 64 TB.Flexibility that comes with no downtimeAttach multiple persistent disks to Compute Engine or GKE instances simultaneously. Configure quick, automatic, incremental backups or resize storage on the fly without disrupting your application.Key featuresKey featuresHigh-performance block storage for any workloadPersistent Disk performance scales with the size of the disk and with the number of vCPUs on your VM instance. Choose from the range of disk performance options that fit your business goals, and only pay for the storage you use.Durability and availability that keep your business runningPersistent Disks are designed for durability. We automatically store your data redundantly to ensure the highest level of data integrity. Whether you're worried about planned maintenance or unexpected failures, we ensure your data is available, and your business stays uninterrupted. Automatic security and encryptionAutomatically encrypt your data before it travels outside of your instance to Persistent Disk storage. Each Persistent Disk remains encrypted with system-defined keys or with customer-supplied keys. Google distributes Persistent Disk data across multiple physical disks, ensuring the ultimate level of security. When a disk is deleted, we discard the keys, rendering the data irretrievable.Data protection for business continuityProtect your data with cross-zone synchronous replication, cross-region asynchronous replication, disk snapshots, and disk clones to ensure that data is recoverable when and where you need it. Replicating data to multiple points of presence gives your workload higher resilience and allows you to implement a multi-zone or multi-region business continuity strategy.View all features2:45What are Persistent Disks? When should you use them and why? 
Get answers today.By using Persistent Disks we got very easy differential backups with Snapshots. Coming from a world where we did full backups every single day, this improved our backup times from two to three hours down to three minutes.Jeremy Tinley, Senior Staff Systems ArchitectWatch the full videoWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postA Google Cloud block storage options cheat sheetRead the blogBlog postCloud storage data protection that fits your businessRead the blogVideoNext '19: Optimizing Block Storage for Workload PerformanceWatch videoReportCreating Persistent Disk snapshotsLearn moreDocumentationDocumentationQuickstartGetting startedAdding or resizing zonal persistent disks.Learn moreTutorialCreating persistent disk snapshotsCreate snapshots to periodically back up data from your zonal persistent disks or regional persistent disks.Learn moreTutorialCodelab: Creating a Persistent DiskFollow along with this lab to learn how to create persistent disks and attach them to a virtual machine.Learn moreTutorialHow to set up a new persistent disk for PostgreSQL DataLearn how to set up a basic installation of PostgreSQL on a separate persistent disk, which is also the boot disk, on Compute Engine.Learn moreTutorialDeploying apps with regional persistent disksSee how to release a highly available app by deploying WordPress using regional persistent disks on Google Kubernetes Engine.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Persistent Disk.All featuresAll featuresFind the right price and performance for your workloadPersistent Disks come in four types at different price points and performance profiles. We've designed these disk types based on years of working with customers to understand the range of uses of our Persistent Disks. Understand the price and performance of each disk type.Scale anytime: resize your block storage while it's in usePersistent Disk allows you to flexibly resize your block storage while it’s in use by one or more virtual machines. Performance scales automatically with size, so you can resize your existing persistent disks or add more persistent disks to an instance to meet your performance and storage requirements—all with no application downtime.Use disk clones to create new disks from a data sourceUse Disk Clones to quickly bring up staging environments from production, create new disks for backup verification or data export jobs, and create disks in a different project.Use Local SSD option for temporary storageLocal SSDs are physically attached to the server that hosts your VM instance. This tight coupling offers superior performance, very high input/output operations per second (IOPS), and very low latency compared to other block storage options. Local SSDs are often used for temporary storage such as caches or scratch processing space.Automatic security and encryptionAutomatically encrypt your data before it travels outside of your instance to Persistent Disk storage. Each Persistent Disk remains encrypted with system-defined keys or with customer-supplied keys. Google distributes Persistent Disk data across multiple physical disks, ensuring the ultimate level of security. 
When a disk is deleted, we discard the keys, rendering the data irretrievable.Decoupled compute and storageYour storage exists independently from your virtual machine instances, so you can detach or move your disks to keep your data even after you delete your instances.Use snapshots to back up your data on a scheduleCreate snapshots to periodically back up data from your zonal or regional Persistent Disks. To reduce the risk of unexpected data loss, consider the best practice of setting up a snapshot schedule to ensure your data is backed up on a regular schedule.Use Machine Images to store your disk metadata and permissionsUse a machine image to store all the configuration, metadata, permissions, and data from one or more disks for a VM instance running on Compute Engine. The VM instance that you use to create a machine image is referred to as a source instance.Asynchronous Replication keeps your business runningAsynchronous Replication provides low recovery point objective (RPO) and low recovery time objective (RTO) block storage replication for cross-region disaster recovery (DR). In the unlikely event of a regional outage, Persistent Disk Asynchronous Replication enables you to fail over your data to a secondary region so you can quickly restart your workload.Regional Persistent Disk for high availability servicesRegional Persistent Disk is a storage option that provides synchronous replication of data between two zones in the same region. Regional persistent disks offer zero RPO and low RTO and are a good building block to implement high availability services in Compute Engine.PricingPricingEach VM instance has at least one disk attached to it. Each disk incurs a cost, described in this section. In addition, if you use snapshots, there are separate snapshot charges.For more detailed pricing information, please view the pricing guide.View pricing detailsCloud AI products comply with the SLA policies listed here. They may offer different latency or availability guarantees from other Google Cloud services.Ex: ORACLE® is a registered trademark of Oracle Corporation.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Plan_a_hybrid_and_multicloud_strategy.txt b/Plan_a_hybrid_and_multicloud_strategy.txt new file mode 100644 index 0000000000000000000000000000000000000000..5b1bbbd11d0d69bad67e30c9ef914507776f4f2d --- /dev/null +++ b/Plan_a_hybrid_and_multicloud_strategy.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns/strategy +Date Scraped: 2025-02-23T11:49:44.832Z + +Content: +Home Docs Cloud Architecture Center Send feedback Plan a hybrid and multicloud strategy Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC This document focuses on how to apply predefined business considerations when planning a hybrid and multicloud strategy. It expands on guidance in Drivers, considerations, strategy, and approaches. That article defines and analyzes the business considerations enterprises should account for when planning such a strategy.
Clarify and agree on the vision and objectives Ultimately, the main purpose of a hybrid or multicloud strategy is to achieve the identified business requirements and the associated technical objectives for each business use case aligned with specific business objectives. To achieve this goal, create a well-structured plan that includes the following considerations: Which workloads should be run in each computing environment. Which application architecture patterns to apply across multiple workloads. Which technology and networking architecture pattern to use. Know that defining a plan that considers all workloads and requirements is difficult at best, especially in a complex IT environment. In addition, planning takes time and might lead to competing stakeholder visions. To avoid such situations, initially formulate a vision statement that addresses the following questions (at minimum): What's the targeted business use case to meet specific business objectives? Why is the current approach and computing environment insufficient to meet the business objectives? What are the primary technological aspects to optimize for by using the public cloud? Why and how is the new approach going to optimize and meet your business objectives? How long do you plan to use your hybrid or multicloud setup? Agreeing on the key business and technical objectives and drivers, then obtaining relevant stakeholder sign-off can provide a foundation for the next steps in the planning process. To effectively align your proposed solution with the overarching architectural vision of your organization, align with your team and the stakeholders responsible for leading and sponsoring this initiative. Identify and clarify other considerations While planning a hybrid or multicloud architecture, it's important to identify and agree about the architectural and operational constraints of your project. On the operations side, the following non-exhaustive list provides some requirements that might create some constraints to consider when planning your architecture: Managing and configuring multiple clouds separately versus building a holistic model to manage and secure the different cloud environments. Ensuring consistent authentication, authorization, auditing, and policies across environments. Using consistent tooling and processes across environments to provide a holistic view into security, costs, and opportunities for optimization. Using consistent compliance and security standards to apply unified governance. On the architecture-planning side, the biggest constraints often stem from existing systems and can include the following: Dependencies between applications Performance and latency requirements for communication between systems Reliance on hardware or operating systems that might not be available in the public cloud Licensing restrictions Dependence on the availability of required capabilities in the selected regions of a multicloud architecture For more information about the other considerations related to workload portability, data movement, and security aspects, see Other considerations. Design a hybrid and multicloud architecture strategy After you have clarified the specifics of the business and technical objectives with the associated business requirements (and ideally clarified and agreed on a vision statement), you can build your strategy to create a hybrid or multicloud architecture. The following flowchart summarizes the logical steps to build such a strategy. 
To help you determine your hybrid or multicloud architecture technical objectives and needs, the steps in the preceding flowchart start with the business requirements and objectives. How you implement your strategy can vary depending on the objectives, drivers, and the technological migration path of each business use case. It's important to remember that a migration is a journey. The following diagram illustrates the phases of this journey as described in Migrate to Google Cloud. This section provides guidance about the "Assess," "Plan," "Deploy," and "Optimize" phases in the preceding diagram. It presents this information in the context of a hybrid or multicloud migration. You should align any migration with the guidance and best practices discussed in the migration path section of the Migrate to Google Cloud guide. These phases might apply to each workload individually, not to all workloads at once. At any point in time, several workloads might be in different phases: Assess phase In the Assess phase, you conduct an initial workload assessment. During this phase, consider the goals outlined in your vision and strategy planning documents. Decide on a migration plan by first identifying a candidate list of workloads that could benefit from being deployed or migrated to the public cloud. To start, choose a workload that isn't business-critical or too difficult to migrate (with minimal or no dependencies on any workload in other environments), yet typical enough to serve as a blueprint for upcoming deployments or migrations. Ideally, the workload or application you select should be part of a targeted business use case or function that has a measurable effect on the business after it's complete. To evaluate and mitigate any potential migration risks, conduct a migration risk assessment. It's important to assess your candidate workload to determine its suitability for migration to a multicloud environment. This assessment involves evaluating various aspects of the applications and infrastructure, including the following: Application compatibility requirements with your selected cloud providers Pricing models Security features offered by your selected cloud providers Application interoperability requirements Running an assessment also helps you identify data privacy requirements, compliance requirements, consistency requirements, and solutions across multiple cloud environments. The risks you identify can affect the workloads you choose to migrate or operate. There are several types of tools, like Google Cloud Migration Center, to help you assess existing workloads. For more information, see Migration to Google Cloud: Choose an assessment tool. From a workload modernization perspective, the fit assessment tool helps to assess a VM workload to determine if the workload is fit for modernization to a container or for migration to Compute Engine. Plan phase In the Plan phase, start with the identified applications and required cloud workloads and perform the following tasks: Develop a prioritized migration strategy that defines application migration waves and paths. Identify the applicable high-level hybrid or multicloud application architecture pattern. Select a networking architecture pattern that supports the selected application architecture pattern. Ideally, you should incorporate the cloud networking pattern with the landing zone design. The landing zone design serves as a key foundational element of overall hybrid and multicloud architectures.
The design requires seamless integration with these patterns. Don't design the landing zone in isolation. Consider these networking patterns as a subset of the landing zone design. A landing zone might consist of different applications, each with a different networking architecture pattern. Also, in this phase, it's important to decide on the design of the Google Cloud organization, projects, and resource hierarchy to prepare your cloud environment landing zone for the hybrid or multicloud integration and deployment. As part of this phase you should consider the following: Define the migration and modernization approach. There's more information about migration approaches later in this guide. It's also covered in more detail in the migration types section of Migrate to Google Cloud. Use your assessment and discovery phase findings. Align them with the candidate workload you plan to migrate. Then develop an application migration waves plan. The plan should incorporate the estimated resource sizing requirements that you determined during the assessment phase. Define the communication model required between the distributed applications and among application components for the intended hybrid or multicloud architecture. Decide on a suitable deployment archetype to deploy your workload, such as zonal, regional, multi-regional, or global, for the chosen architecture pattern. The archetype you select forms the basis for constructing the application-specific deployment architectures tailored to your business and technical needs. Decide on measurable success criteria for the migration, with clear milestones for each migration phase or wave. Selecting criteria is essential, even if the technical objective is to have the hybrid architecture as a short term setup. Define application SLAs and KPIs when your applications operate in a hybrid setup, especially for those applications that might have distributed components across multiple environments. For more information, see About migration planning to help plan a successful migration and to minimize the associated risks. Deploy phase In the Deploy phase, you are ready to start executing your migration strategy. Given the potential number of requirements, it's best to take an iterative approach. Prioritize your workloads based on the migration and application waves that you developed during the planning phase. With hybrid and multicloud architectures, start your deployment by establishing the necessary connectivity between Google Cloud and the other computing environments. To facilitate the required communication model for your hybrid or multicloud architecture, base the deployment on your selected design and network connectivity type, along with the applicable networking pattern. We recommend that you take this approach for your overall landing zone design decision. In addition, you must test and validate the application or service based on the defined application success criteria. Ideally, these criteria should include both functional and load testing (non-functional) requirements before moving to production. Optimize phase In the Optimize phase, test your deployment: After you complete testing, and the application or service meets the functional and performance capacity expectations, you can move it to production. Cloud monitoring and visibility tools, such as Cloud Monitoring, can provide insights into the performance, availability, and health of your applications and infrastructure and help you optimize where needed. 
For more information, see Migrate to Google Cloud: Optimize your environment. To learn more about how to design such tools for hybrid or multicloud architecture, see Hybrid and multicloud monitoring and logging patterns. Assess candidate workloads The choice of computing environments for different workloads significantly affects the success of a hybrid and multicloud strategy. Workload placement decisions should align with specific business objectives. Therefore, these decisions should be guided by targeted business use cases that enable measurable business effects. However, starting with the most business-critical workload or application isn't always necessary or recommended. For more information, see Choosing the apps to migrate first in the Migrate to Google Cloud guide. As discussed in the Business and technical drivers section, there are different types of drivers and considerations for hybrid and multicloud architectures. The following summarized list of factors can help you evaluate your migration use case in the context of a hybrid or multicloud architecture with opportunities to have a measurable business effect: Potential for market differentiation or innovation that is enabled by using cloud services to enable certain business functions or capabilities, such as artificial intelligence capabilities that use existing on-premises data to train machine learning models. Potential savings in total cost of ownership for an application. Potential improvements in availability, resiliency, security, or performance—for example, adding a disaster recovery (DR) site in the cloud. Potential speedup of the development and release processes—for example, building your development and testing environments in the cloud. The following factors can help you evaluate migration risks: The potential effect of outages that are caused by a migration. The experience your team has with public cloud deployments, or with deployments for a new or second cloud provider. The need to comply with any existing legal or regulatory restrictions. The following factors can help you evaluate the technical difficulties of a migration: The size, complexity, and age of the application. The number of dependencies with other applications and services across different computing environments. Any restrictions imposed by third-party licenses. Any dependencies on specific versions of operating systems, databases, or other environment configurations. After you have assessed your initial workloads, you can start prioritizing them and defining your migration waves and approaches. Then, you can identify applicable architecture patterns and supporting networking patterns. This step might require multiple iterations, because your assessment could change over time. It's therefore worth re-evaluating workloads after you make your first cloud deployments.
Previous arrow_back Drivers, considerations, strategy, and patterns Next Architectural approaches to adopt a hybrid or multicloud architecture arrow_forward Send feedback \ No newline at end of file diff --git a/Plan_and_build_your_foundation.txt b/Plan_and_build_your_foundation.txt new file mode 100644 index 0000000000000000000000000000000000000000..9ecfa6974073c4fa9114f3d3f3674ca0a6bb381b --- /dev/null +++ b/Plan_and_build_your_foundation.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-google-cloud-building-your-foundation +Date Scraped: 2025-02-23T11:51:35.783Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Plan and build your foundation Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-31 UTC This document helps you create the basic cloud infrastructure for your workloads. It can also help you plan how this infrastructure supports your applications. This planning includes identity management, organization and project structure, and networking. This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads Migrate to Google Cloud: Plan and build your foundation (this document) Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs The following diagram illustrates the path of your migration journey. This document is useful if you're planning a migration from an on-premises environment, from a private hosting environment, from another cloud provider to Google Cloud, or if you're evaluating the opportunity to migrate and want to explore what it might look like. This document helps you understand the available products and decisions you'll make building a foundation focused on a migration use case. For premade implementable options, see: Google Cloud setup checklist Enterprise foundations blueprint For additional best practice guidance designing your foundation, see: Landing Zone design for designing a foundation broadly. Architecture Framework for best practice guidance on system design. When planning your migration to Google Cloud, you need to understand an array of topics and concepts related to cloud architecture. A poorly planned foundation can cause your business to face delays, confusion, and downtime, and can put the success of your cloud migration at risk. This guide provides an overview of Google Cloud foundation concepts and decision points. Each section of this document poses questions that you need to ask and answer for your organization before building your foundation on Google Cloud. These questions are not exhaustive; they are meant to facilitate a conversation between your architecture teams and business leadership about what is right for your organization. Your plans for infrastructure, tooling, security, and account management are unique for your business and need deep consideration. When you finish this document and answer the questions for your organization, you're ready to begin the formal planning of your cloud infrastructure and services that support your migration to Google Cloud. 
Enterprise considerations Consider the following questions for your organization: Which IT responsibilities might change between you and your infrastructure provider when you move to Google Cloud? How can you support or meet your regulatory compliance needs—for example, HIPAA or GDPR—during and after your migration to Google Cloud? How can you control where your data is stored and processed in accordance with your data residency requirements? Shared responsibility model The shared responsibilities between you and Google Cloud might be different than those you are used to, and you need to understand their implications for your business. The processes you previously implemented to provision, configure, and consume resources might change. Review the Terms of Service and the Google security model for an overview of the contractual relationship between your organization and Google, and the implications of using a public cloud provider. Compliance, security, and privacy Many organizations have compliance requirements around industry and government standards, regulations, and certifications. Many enterprise workloads are subject to regulatory scrutiny, and can require attestations of compliance by you and your cloud provider. If your business is regulated under HIPAA or HITECH, make sure you understand your responsibilities and which Google Cloud services are regulated. For information about Google Cloud certifications and compliance standards, see the Compliance resource center. For more information about region-specific or sector-specific regulations, see Google Cloud and the General Data Protection Regulation (GDPR). Trust and security are important to every organization. Google Cloud implements a shared security model for many services. The Google Cloud trust principles can help you understand our commitment to protecting the privacy of your data and your customers' data. For more information about Google's design approach for security and privacy, read the Google infrastructure security design overview. Data residency considerations Geography can also be an important consideration for compliance. Make sure that you understand your data residency requirements and implement policies for deploying workloads into new regions to control where your data is stored and processed. Understand how to use resource location constraints to help ensure that your workloads can only be deployed in pre-approved regions. You need to account for the regionality of different Google Cloud services when choosing the deployment target for your workloads. Make sure that you understand your regulatory compliance requirements and how to implement a governance strategy that helps you ensure compliance. Resource hierarchy Consider the following questions for your organization: How do your existing business and organizational structures map to Google Cloud? How often do you expect changes to your resource hierarchy? How do project quotas impact your ability to create resources in the cloud? How can you incorporate your existing cloud deployments with your migrated workloads? What are best practices for managing multiple teams working simultaneously on multiple Google Cloud projects? Your current business processes, lines of communication, and reporting structure are reflected in the design of your Google Cloud resource hierarchy. 
The resource hierarchy provides the necessary structure to your cloud environment, determines the way you are billed for resource consumption, and establishes a security model for granting roles and permissions. You need to understand how these facets are implemented in your business today, and plan how to migrate these processes to Google Cloud. Understand Google Cloud resources Resources are the fundamental components that make up all of Google Cloud services. The Organization resource is the apex of the Google Cloud resource hierarchy. All resources that belong to an organization are grouped under the organization node. This structure provides central visibility and control over every resource that belongs to an organization. An Organization can contain one or more folders, and each folder can contain one or more projects. You can use folders to group related projects. Google Cloud projects contain service resources such as Compute Engine virtual machines (VMs), Pub/Sub topics, Cloud Storage buckets, Cloud VPN endpoints, and other Google Cloud services. You can create resources by using the Google Cloud console, Cloud Shell, or the Cloud APIs. If you expect frequent changes to your environment, consider adopting an infrastructure as code (IaC) approach to streamline resource management. Manage your Google Cloud projects For more information about planning and managing your Google Cloud resource hierarchy, see Decide a resource hierarchy for your Google Cloud landing zone. If you're already working in Google Cloud and have created independent projects as tests or proofs-of-concept, you can migrate existing Google Cloud projects into your organization. Identity and Access Management Consider the following questions for your organization: Who will control, administer, and audit access to Google Cloud resources? How will your existing security and access policies change when you move to Google Cloud? How will you securely enable your users and apps to interact with Google Cloud services? Identity and Access Management (IAM) lets you grant granular access to Google Cloud resources. Cloud Identity is a separate but related service that can help you migrate and manage your identities. At a high level, understanding how you want to manage access to your Google Cloud resources forms the basis for how you provision, configure, and maintain IAM. Understand identities Google Cloud uses identities for authentication and access management. To access any Google Cloud resources, a member of your organization must have an identity that Google Cloud can understand. Cloud Identity is an identity as a service (IDaaS) platform that lets you centrally manage users and groups who can access Google Cloud resources. By setting up your users in Cloud Identity, you can set up single sign-on (SSO) with thousands of third-party software as a service (SaaS) applications. The way you set up Cloud Identity depends on how you manage identities. For more information about identity provisioning options for Google Cloud, see Decide how to onboard identities to Google Cloud. Understand access management The model for managing access consists of four core concepts: Principal: Can be a Google Account (for end users), a service account (for Google Cloud products), a Google Group, or a Google Workspace or Cloud Identity account that can access a resource. Principals cannot perform any action that they aren't permitted to do. Role: A collection of permissions. Permission: Determines what operations are allowed on a resource. 
When you grant a role to a principal, you grant all the permissions that the role contains. IAM allow policy: Binds a set of principals to a role. When you want to define which principals have access to a resource, you create a policy and attach it to the resource. Proper setup and effective management of principals, roles, and permissions forms the backbone of your security posture in Google Cloud. Access management helps protect you from internal misuse, and helps protect you from external attempts at unauthorized access to your resources. Understand application access In addition to users and groups, there is another kind of identity known as a service account. A service account is an identity that your programs and services can use to authenticate and gain access to Google Cloud resources. User-managed service accounts include service accounts that you explicitly create and manage using IAM, and the Compute Engine default service account that comes built into all Google Cloud projects. Service agents are automatically created, and run internal Google processes on your behalf. When using service accounts, it's important to understand application default credentials, and follow our recommended best practices for service accounts to avoid exposing your resources to undue risk. The most common risks involve privilege escalation or accidental deletion of a service account that a critical application relies on. Follow best practices For more information about best practices to manage identity and access effectively, see Verify every access attempt explicitly. Billing How you pay for Google Cloud resources that you consume is an important consideration for your business, and an important part of your relationship with Google Cloud. You can manage billing in the Google Cloud console with Cloud Billing alongside the rest of your cloud environment. The concepts of resource hierarchy and billing are closely related, so it's critical that you and your business stakeholders understand these concepts. For more information about best practices, tools, and techniques to help you track and control costs, see Cost optimization. Connectivity and networking For more information about designing your network on Google Cloud, see: Decide the network design for your Google Cloud landing zone. Implement your Google Cloud landing zone network design. If your source environment is hosted by another cloud service provider, you might need to connect it with your Google Cloud environment. For more information, see Patterns for connecting other cloud service providers with Google Cloud. When migrating production data and workloads to Google Cloud, we recommend that you consider how the availability of the connectivity solution can affect the success of your migration. For example, Cloud Interconnect provides a production-level SLA if you provision it according to specific topologies. When migrating data from your source environment to your Google Cloud environment, you should adjust the maximum transmission unit (MTU) to take protocol overhead into account. Doing so helps ensure that the data is transferred efficiently and accurately. This adjustment can also help prevent delays caused by data fragmentation and network performance issues. For example, if you're using Cloud VPN to connect your source environment to your Google Cloud environment, you might need to configure the MTU to a lower value to accommodate the VPN protocol overhead in each transmission unit. 
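Before you finalize the connectivity design, it can be useful to confirm the MTU that's configured on your VPC network. The following is a minimal sketch, assuming the google-cloud-compute Python client library; the project and network names are hypothetical placeholders.

from google.cloud import compute_v1

def get_network_mtu(project_id: str, network_name: str) -> int:
    # Return the MTU configured on a VPC network (commonly 1460 or 1500 bytes).
    client = compute_v1.NetworksClient()
    network = client.get(project=project_id, network=network_name)
    return network.mtu

# Hypothetical values for illustration.
mtu = get_network_mtu("my-migration-project", "migration-vpc")
print(f"Configured VPC MTU: {mtu} bytes")

Reading the value doesn't change any configuration; it only gives you the number to reason about when you account for the VPN or Interconnect protocol overhead described above.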
To help you avoid connectivity issues during your migration to Google Cloud, we recommend that you: Ensure that DNS records resolve across the source environment and your Google Cloud environment. Ensure that network routes between the source environment and your Google Cloud environment correctly propagate across environments. If you need to provision and use your own public IPv4 addresses in your VPCs, see Bring your own IP address. Understand DNS options Cloud DNS can serve as your public domain name system (DNS) server. For more information about how you can implement Cloud DNS, see Cloud DNS best practices. If you need to customize how Cloud DNS responds to queries according to their source or destination, see DNS policies overview. For example, you can configure Cloud DNS to forward queries to your existing DNS servers, or you can override private DNS responses based on the query name. A separate but similar service, called internal DNS, is included with your VPC. Instead of manually migrating and configuring your own DNS servers, you can use the internal DNS service for your private network. For more information, see Overview of internal DNS. Understand data transfer On-premises networking is managed and priced in a fundamentally different way than cloud networking. When managing your own data center or colocation facility, installing routers, switches, and cabling requires a fixed, upfront capital expenditure. In the cloud, you are billed for data transfer rather than the fixed cost of installing hardware, plus the ongoing cost of maintenance. Plan and manage data transfer costs accurately in the cloud by understanding data transfer costs. When planning for traffic management, there are three ways you are charged: Ingress traffic: Network traffic that enters your Google Cloud environment from outside locations. These locations can be from the public internet, on-premises locations, or other cloud environments. Ingress is free for most services on Google Cloud. Some services that deal with internet-facing traffic management, such as Cloud Load Balancing, Cloud CDN, and Google Cloud Armor charge based on how much ingress traffic they handle. Egress traffic: Network traffic that leaves your Google Cloud environment in any way. Egress charges apply to many Google Cloud services, including Compute Engine, Cloud Storage, Cloud SQL, and Cloud Interconnect. Regional and zonal traffic: Network traffic that crosses regional or zonal boundaries in Google Cloud can also be subject to bandwidth charges. These charges can impact how you choose to design your apps for disaster recovery and high availability. Similar to egress charges, cross-regional and cross-zonal traffic charges apply to many Google Cloud services and are important to consider when planning for high availability and disaster recovery. For example, sending traffic to a database replica in another zone is subject to cross-zonal traffic charges Automate and control your network setup In Google Cloud, the physical network layer is virtualized, and you deploy and configure your network using software-defined networking (SDN). To ensure that your network is configured in a consistent and repeatable way, you need to understand how to automatically deploy and tear down your environments. You can use IaC tools, such as Terraform. Security The way you manage and maintain the security of your systems in Google Cloud, and the tools you use, are different than when managing an on-premises infrastructure. 
Your approach will change and evolve over time to adapt to new threats, new products, and improved security models. The responsibilities that you share with Google might be different than the responsibilities of your current provider, and understanding these changes is critical to ensuring the continued security and compliance of your workloads. Strong, verifiable security and regulatory compliance are often intertwined, and begin with strong management and oversight practices, consistent implementation of Google Cloud best practices, and active threat detection and monitoring. For more information about designing a secure environment on Google Cloud, see Decide the security for your Google Cloud landing zone. Monitoring, alerting, and logging For more information about setting up monitoring, alerting, and logging, see: Centrally aggregate audit logs Review best practices for protecting access to your sensitive logs Governance Consider the following questions for your organization: How can you ensure your users support and meet their compliance needs and align them with your business policies? What strategies are available to maintain and organize your Google Cloud users and resources? Effective governance is critical for helping to ensure reliability, security, and maintainability of your assets in Google Cloud. As in any system, entropy naturally increases over time and left unchecked, it can result in cloud sprawl and other maintainability challenges. Without effective governance, the accumulation of these challenges can impact your ability to achieve your business objectives and to reduce risk. Disciplined planning and enforcement of standards around naming conventions, labeling strategies, access controls, cost controls, and service levels is an important component of your cloud migration strategy. More broadly, the exercise of developing a governance strategy creates alignment among stakeholders and business leadership. Support continuous compliance To help support organization-wide compliance of your Google Cloud resources, consider establishing a consistent resource naming and grouping strategy. Google Cloud provides several methods for annotating and enforcing policies on resources: Security marks let you classify resources to provide security insights from Security Command Center, and to enforce policies on groups of resources. Labels can track your resource spend in Cloud Billing, and provide extra insights in Cloud Logging. Tags for firewalls let you define sources and targets in global network firewall policies and regional network firewall policies. For more information, see Risk and compliance as code. What's next Learn how to implement a secure foundation in Google Cloud. Continue your cloud migration and explore transferring your data to Google Cloud. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
ContributorsAuthors: Marco Ferrari | Cloud Solutions ArchitectTravis Webb | Solution Architect Send feedback \ No newline at end of file diff --git a/Plan_resource_allocation.txt b/Plan_resource_allocation.txt new file mode 100644 index 0000000000000000000000000000000000000000..4a89c480954da4c8e08e5f054c5a02f3389a8d73 --- /dev/null +++ b/Plan_resource_allocation.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/performance-optimization/plan-resource-allocation +Date Scraped: 2025-02-23T11:44:03.229Z + +Content: +Home Docs Cloud Architecture Center Send feedback Plan resource allocation Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-06 UTC This principle in the performance optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you plan resources for your workloads in Google Cloud. It emphasizes the importance of defining granular requirements before you design and develop applications for cloud deployment or migration. Principle overview To meet your business requirements, it's important that you define the performance requirements for your applications, before design and development. Define these requirements as granularly as possible for the application as a whole and for each layer of the application stack. For example, in the storage layer, you must consider the throughput and I/O operations per second (IOPS) that the applications need. From the beginning, plan application designs with performance and scalability in mind. Consider factors such as the number of users, data volume, and potential growth over time. Performance requirements for each workload vary and depend on the type of workload. Each workload can contain a mix of component systems and services that have unique sets of performance characteristics. For example, a system that's responsible for periodic batch processing of large datasets has different performance demands than an interactive virtual desktop solution. Your optimization strategies must address the specific needs of each workload. Select services and features that align with the performance goals of each workload. For performance optimization, there's no one-size-fits-all solution. When you optimize each workload, the entire system can achieve optimal performance and efficiency. Consider the following workload characteristics that can influence your performance requirements: Deployment archetype: The deployment archetype that you select for an application can influence your choice of products and features, which then determine the performance that you can expect from your application. Resource placement: When you select a Google Cloud region for your application resources, we recommend that you prioritize low latency for end users, adhere to data-locality regulations, and ensure the availability of required Google Cloud products and services. Network connectivity: Choose networking services that optimize data access and content delivery. Take advantage of Google Cloud's global network, high-speed backbones, interconnect locations, and caching services. Application hosting options: When you select a hosting platform, you must evaluate the performance advantages and disadvantages of each option. For example, consider bare metal, virtual machines, containers, and serverless platforms. Storage strategy: Choose an optimal storage strategy that's based on your performance requirements. 
Resource configurations: The machine type, IOPS, and throughput can have a significant impact on performance. Additionally, early in the design phase, you must consider appropriate security capabilities and their impact on resources. When you plan security features, be prepared to accommodate the necessary performance trade-offs to avoid any unforeseen effects. Recommendations To ensure optimal resource allocation, consider the recommendations in the following sections. Configure and manage quotas Ensure that your application uses only the necessary resources, such as memory, storage, and processing power. Over-allocation can lead to unnecessary expenses, while under-allocation might result in performance degradation. To accommodate elastic scaling and to ensure that adequate resources are available, regularly monitor the capacity of your quotas. Additionally, track quota usage to identify potential scaling constraints or over-allocation issues, and then make informed decisions about resource allocation. Educate and promote awareness Inform your users about the performance requirements and provide educational resources about effective performance management techniques. To evaluate progress and to identify areas for improvement, regularly document the target performance and the actual performance. Load test your application to find potential breakpoints and to understand how you can scale the application. Monitor performance metrics Use Cloud Monitoring to analyze trends in performance metrics, to analyze the effects of experiments, to define alerts for critical metrics, and to perform retrospective analyses. Active Assist is a set of tools that can provide insights and recommendations to help optimize resource utilization. These recommendations can help you to adjust resource allocation and improve performance. Previous arrow_back Overview Next Take advantage of elasticity arrow_forward Send feedback \ No newline at end of file diff --git a/Plan_the_onboarding_process.txt b/Plan_the_onboarding_process.txt new file mode 100644 index 0000000000000000000000000000000000000000..e1bf10c561552ddeff2c2c03e1c041f699da6437 --- /dev/null +++ b/Plan_the_onboarding_process.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/overview-assess-and-plan +Date Scraped: 2025-02-23T11:55:13.213Z + +Content: +Home Docs Cloud Architecture Center Send feedback Plan the onboarding process for your corporate identities Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC The documents in the Assess and plan section help you assess your requirements and develop a plan for onboarding your corporate identities to Cloud Identity or Google Workspace. Managing corporate identities is often one of the key responsibilities of enterprise IT departments. But each organization is unique, and the way you manage corporate identities in your organization is likely to be unique, too. To determine the best way to use Cloud Identity or Google Workspace to manage corporate identities in your organization, it's important that you assess your requirements. Before you begin Before you begin to assess and plan your Cloud Identity or Google Workspace deployment, make sure that you do the following: Understand the domain model that underpins Cloud Identity and Google Workspace. Determine whether you need a single Google Cloud organization or multiple Google Cloud organizations for your deployment. 
For help with this decision, see Best practices for planning accounts and organizations. Review the Reference architectures article and select the architecture that most closely matches your requirements. If you selected an architecture that uses an external identity provider (IdP), review Best practices for federating Google Cloud with an external identity provider so that you can incorporate these best practices in your design. Assess and plan your deployment To assess and plan your Cloud Identity or Google Workspace deployment, follow these steps: If you selected an architecture that uses an external IdP, learn how to map the logical model of your external IdP to Cloud Identity or Google Workspace. If you use Active Directory, refer to Federating with Active Directory to learn how to map forests, domains, users, and groups and learn which configuration options to consider. Similarly, if you plan to federate with Azure Active Directory (AD), see Federating with Azure AD for more details on how you can map tenants, domains, users, and groups. Identify and assess existing user accounts. If you haven't been using Google Workspace or Cloud Identity, it's possible that your organization's employees have been using consumer accounts to access Google services. Before you set up Google Workspace or Cloud Identity, we recommend that you analyze the user accounts that already exist and decide how best to deal with them. For more details on the different sets of user accounts you might have and how they can impact your deployment, see Assess existing user accounts. Settle on a high-level plan for onboarding identities to Cloud Identity or Google Workspace. In Assess onboarding plans, you can find a selection of proven onboarding plans, along with guidance on how to select the plan that best suits your needs. If you plan to use an external IdP and have identified user accounts that need to be migrated, you might need to consider additional requirements when configuring your external IdP. For more details, see Assess user account consolidation impact on federation. When you have completed your assessment and created a plan, you will be ready to onboard your corporate identities to Cloud Identity or Google Workspace. Send feedback \ No newline at end of file diff --git a/Prepare_data_and_batch_workloads_for_migration_across_regions.txt b/Prepare_data_and_batch_workloads_for_migration_across_regions.txt new file mode 100644 index 0000000000000000000000000000000000000000..35dd52877ff6b6131265abf23c5592c8289e8a75 --- /dev/null +++ b/Prepare_data_and_batch_workloads_for_migration_across_regions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migrate-across-regions/prepare-data-and-batch-workloads +Date Scraped: 2025-02-23T11:52:26.089Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate across Google Cloud regions: Prepare data and batch workloads for migration across regions Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-02 UTC This document describes how to design a data platform on Google Cloud to minimize the impact of a future expansion to other regions or of a region-to-region migration. This document is part of a series that helps you to understand the impact of expanding your data platform to another region. It helps you learn how to do the following: Prepare to move data and data pipelines. Set up checks during the migration phases. 
Create a flexible migration strategy by separating data storage and data computation. The guidance in this series is also useful if you didn't plan for a migration across regions or for an expansion to multiple regions in advance. In this case, you might need to spend additional effort to prepare your infrastructure, workloads, and data for the migration across regions and for the expansion to multiple regions. This document is part of a series: Get started Design resilient single-region environments on Google Cloud Architect your workloads Prepare batch data pipelines for a migration across regions (this document) This series assumes that you've read and are familiar with the following documents: Migrate to Google Cloud: Get started: describes the general migration framework that you follow in this migration. Migrate to Google Cloud: Transfer your large datasets: describes general concerns around moving data between regions, such as network bandwidth, cost, and security. The following diagram illustrates the path of your migration journey. During each migration step, you follow the phases defined in Migration to Google Cloud: Get started: Assess and discover your workloads. Plan and build a foundation. Deploy your workloads. Optimize your environment. The modern data platform on Google Cloud This section describes the different parts of a modern data platform, and how they're usually constructed in Google Cloud. Data platforms as a general concept can be divided into two sections: The data storage layer is where data is saved. The data that you're saving might be in the form of files where you manage actual bytes on a file system like Hadoop Distributed File System (HDFS) or Cloud Storage, or you might use a domain-specific language (DSL) to manage the data in a database management system. The data computation layer is any data processing that you might activate on top of the storage system. As with the storage layer, there are many possible implementations, and some data storage tools also handle data computation. The role of the data computation layer in the platform is to load data from the storage layer, process the data, and then save the results to a target system. The target system can be the source storage layer. Some data platforms use multiple storage systems for their data storage layer, and multiple data computation systems for their data processing layer. In most cases, the data storage layer and the data computation layer are separated. For example, you might have implemented your data storage layer using these Google Cloud services: Cloud Storage BigQuery storage Cloud SQL Spanner Bigtable You might have implemented the data computation layer using other Google Cloud services like these: Dataflow Dataproc BigQuery Custom workloads on Google Kubernetes Engine or Compute Engine Vertex AI or Colaboratory To reduce the time and latency of communication, the cost of outbound data transfer, and the number of I/O operations between the storage layer and the computation layer, we recommend that you store the data in the same zone that you process the data in. We also recommend that you keep your data storage layer separate from your data computation layer. Keeping these layers separate improves your flexibility in changing computation layers and migrating data. Keeping the layers separate also reduces your resource use because you don't have to keep the computation layer running all the time. 
Therefore, we recommend that you deploy your data storage and data computation on separate platforms in the same zone and region. For example, you can move your data storage from HDFS to Cloud Storage and use a Dataproc cluster for computation. Assess your environment In the assessment phase, you determine the requirements and dependencies to migrate the batch data pipelines that you've deployed: Build a comprehensive inventory of your data pipelines. Catalog your pipelines according to their properties and dependencies. Train and educate your teams on Google Cloud. Build an experiment and proof of concept on Google Cloud. Calculate the total cost of ownership (TCO) of the target environment. Choose the workloads that you want to migrate first. For more information about the assessment phase and these tasks, see Migration to Google Cloud: Assess and discover your workloads. The following sections are based on the information in that document. Build your inventories To scope your migration, you must understand the data platform environment where your data pipelines are deployed: Create an inventory of your data infrastructure—the different storage layers and different computation layers that you're using for data storage and batch data processing. Create an inventory of the data pipelines that are scheduled to be migrated. Create an inventory of the datasets that are being read by the data pipelines and that need to be migrated. To build an inventory of your data platform, consider the following for each part of the data infrastructure: Storage layers. Along with standard storage platforms like Cloud Storage, consider other storage layers such as databases like Firebase, BigQuery, Bigtable, and Postgres, or other clusters like Apache Kafka. Each storage platform has its own strategy and method to complete migration. For example, Cloud Storage has data migration services, and a database might have a built-in migration tool. Make sure that each product that you're using for data storage is available to you in your target environment, or that you have a compatible replacement. Practice and verify the technical data transfer process for each of the involved storage platforms. Computation layers. For each computation platform, verify the deployment plan and verify any configuration changes that you might have made to the different platforms. Network latency. Test and verify the network latency between the source environment and the target environment. It's important for you to understand how long it will take for the data to be copied. You also need to test the network latency from clients and external environments (such as an on-premises environment) to the target environment in comparison to the source environment. Configurations and deployment. Each data infrastructure product has its own setup methods. Take inventory of the custom configurations that you've made for each component, and which components you're using the default versions of for each platform (for example, which Dataproc version or Apache Kafka version you're using). Make sure that those configurations are deployable as part of your automated deployment process. You need to know how each component is configured because computational engines might behave differently when they're configured differently—particularly if the processing layer framework changes during the migration. 
For example, if the target environment is running a different version of Apache Spark, some configurations of the Spark framework might have changed between versions. This kind of configuration change can cause changes in outputs, serializations, and computation. During the migration, we recommend that you use automated deployments to ensure that versions and configurations stay the same. If you can't keep versions and configurations the same, then make sure to have tests that validate the data outputs that the framework calculates. Cluster sizes. For self-managed clusters, such as a long-living Dataproc cluster or an Apache Kafka cluster running on Compute Engine, note the number of nodes and CPUs, and the memory for each node in the clusters. Migrating to another region might result in a change to the processor that your deployment uses. Therefore, we recommend that you profile and optimize your workloads after you deploy the migrated infrastructure to production. If a component is fully managed or serverless (for example Dataflow), the sizing will be part of each individual job, and not part of the cluster itself. The following items that you assess in your inventory focus on the data pipelines: Data sources and sinks. Make sure to account for the sources and sinks that each data pipeline uses for reading and writing data. Service Level Agreements (SLAs) and Service Level Objectives (SLOs). Batch data pipelines SLAs and SLOs are usually measured in time to completion, but they can also be measured in other ways, such as compute power used. This business metadata is important in driving business continuity and disaster recovery plan processes (BCDR), such as failing over a subset of your most critical pipelines to another region in the event of a zonal or regional failure. Data pipelines dependencies. Some data pipelines rely on data that is generated by another data pipeline. When you split pipelines into migration sprints, make sure to consider data dependencies. Datasets generated and consumed. For each data pipeline, identify datasets that the pipeline consumes, and which datasets it generates. Doing so can help you to identify dependencies between pipelines and between other systems or components in your overall architecture. The following items that you assess in your inventory focus on the datasets to be migrated: Datasets. Identify the datasets that need to be migrated to the target environment. You might consider some historical data as not needed for migration, or to be migrated at a different time, if the data is archived and isn't actively used. By defining the scope for the migration process and the migration sprints, you can reduce risks in the migration. Data sizes. If you plan to compress files before you transfer them, make sure to note the file size before and after compression. The size of your data will affect the time and cost that's required to copy the data from the source to the destination. Considering these factors will help you to choose between downtime strategies, as described later in this document. Data structure. Classify each dataset to be migrated and make sure that you understand whether the data is structured, semi-structured, or unstructured. Understanding data structure can inform your strategy for how to verify that data is migrated correctly and completely. 
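As part of the data-sizes item in the inventory, a small script can summarize the object count and total size of each Cloud Storage dataset that's in scope. The following is a minimal sketch, assuming the google-cloud-storage Python client library; the bucket and prefix names are hypothetical placeholders.

from google.cloud import storage

def summarize_prefix(bucket_name: str, prefix: str = "") -> dict:
    # Count objects and total bytes under a prefix to help size a migration sprint.
    client = storage.Client()
    total_bytes = 0
    object_count = 0
    for blob in client.list_blobs(bucket_name, prefix=prefix):
        total_bytes += blob.size or 0
        object_count += 1
    return {
        "bucket": bucket_name,
        "prefix": prefix,
        "objects": object_count,
        "total_gib": round(total_bytes / 1024 ** 3, 2),
    }

# Hypothetical dataset locations for illustration.
for dataset_prefix in ["raw/clickstream/", "curated/orders/"]:
    print(summarize_prefix("source-data-bucket", dataset_prefix))

If you plan to compress files before the transfer, record both the uncompressed size from a summary like this and the compressed size from a test run, because the compressed size is what drives transfer time and cost.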
Complete the assessment After you build the inventories related to your Kubernetes clusters and workloads, complete the rest of the activities of the assessment phase in Migration to Google Cloud: Assess and discover your workloads. Plan and build your foundation The plan and build phase of your migration to Google Cloud consists of the following tasks: Build a resource hierarchy. Configure Identity and Access Management (IAM). Set up billing. Set up network connectivity. Harden your security. Set up logging, monitoring, and alerting. For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation. Migrate data and data pipelines The following sections describes some of the aspects of the plan for migrating data and batch data pipelines. It defines some concepts around the characteristics of data pipelines that are important to understand when you create the migration plan. It also discusses some data testing concepts that can help increase your confidence in the data migration. Migration plan In your migration plan, you need to include time to complete the data transfer. Your plan should account for network latency, time to test the data completeness and get any data that failed to migrate, and any network costs. Because data will be copied from one region to another, your plan for network costs should include inter-region network costs. We recommend that you divide the different pipelines and datasets into sprints and migrate them separately. This approach helps to reduce the risks for each migration sprint, and it allows for improvements in each sprint. To improve your migration strategy and uncover issues early, we recommend that you prioritize smaller, non-critical workloads, before you migrate larger, more critical workloads. Another important part of a migration plan is to describe the strategy, dependencies, and nature of the different data pipelines from the computation layer. If your data storage layer and data computation layer are built on the same system, we recommend that you monitor the performance of the system while data is being copied. Typically, the act of copying large amounts of data can cause I/O overhead on the system and degrade performance in the computation layer. For example, if you run a workload to extract data from a Kafka cluster in a batch fashion, the extra I/O operations to read large amounts of data can cause a degradation of performance on any active data pipelines that are still running in the source environment. In that kind of scenario, you should monitor the performance of the system by using any built-in or custom metrics. To avoid overwhelming the system, we recommend that you have a plan to decommission some workloads during the data copying process, or to throttle down the copy phase. Because copying data makes the migration a long-running process, we recommend that you have contingency plans to address anything that might go wrong during the migration. For example, if data movement is taking longer than expected or if integrity tests fail before you put the new system online, consider whether you want to roll back or try to fix and retry failed operations. Although a rollback can be a cleaner solution, it can be time-consuming and expensive to copy large datasets multiple times. We recommend that you have a clear understanding and predefined tests to determine which action to take in which conditions, how much time to allow to try to create patches, and when to perform a complete rollback. 
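One way to keep a custom copy script resumable and idempotent is to skip objects that already exist in the target bucket with a matching checksum, and to recopy only missing or mismatched objects on a retry. The following is a minimal sketch, assuming the google-cloud-storage Python client library; the bucket names and prefix are hypothetical placeholders, and for large datasets a managed option such as Storage Transfer Service is usually a better fit than a custom script.

from google.cloud import storage

def copy_missing_or_changed(source_bucket_name: str, target_bucket_name: str, prefix: str = ""):
    # Idempotent copy: only copy blobs that are absent in the target or whose CRC32C differs.
    client = storage.Client()
    source_bucket = client.bucket(source_bucket_name)
    target_bucket = client.bucket(target_bucket_name)
    for source_blob in client.list_blobs(source_bucket_name, prefix=prefix):
        target_blob = target_bucket.get_blob(source_blob.name)
        if target_blob is not None and target_blob.crc32c == source_blob.crc32c:
            continue  # Already copied and verified; safe to skip on a retry.
        source_bucket.copy_blob(source_blob, target_bucket, source_blob.name)
        print(f"Copied {source_blob.name}")

# Hypothetical buckets for illustration.
copy_missing_or_changed("source-data-bucket", "target-data-bucket", prefix="raw/clickstream/")

Because a rerun of a script like this only touches objects that failed or changed, retrying after a partial failure is cheaper than a full rollback and recopy.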
It's important to differentiate between the tooling and scripts that you're using for the migration, and the data that you're copying. Rolling back data movement means that you have to recopy data and either override or delete data that you already copied. Rolling back changes to the tooling and scripts is potentially easier and less costly, but changes to tooling might force you to recopy data. For example, you might have to recopy data if you create a new target path in a script that generates a Cloud Storage location dynamically. To help avoid recopying data, build your scripts to allow for resumability and idempotency. Data pipeline characteristics In order to create an optimal migration plan, you need to understand the characteristics of different data pipelines. It's important to remember that batch pipelines that write data are different from batch pipelines that read data: Data pipelines that write data: Because it changes the state of the source system, it can be difficult to write data to the source environment at the same time that data is being copied to the target environment. Consider the runtimes of pipelines that write data, and try to prioritize their migration earlier in the overall process. Doing so will let you have data ready on the target environment before you migrate the pipelines that read the data. Data pipelines that read data: Pipelines that read data might have different requirements for data freshness. If the pipelines that generate data are stopped on the source system, then the pipelines that read data might be able to run while data is being copied to the target environment. Data is state, and copying data between regions isn't an atomic operation. Therefore, you need to be aware of state changes while data is being copied. It's also important in the migration plan to differentiate between systems. Your systems might have different functional and non-functional requirements (for example, one system for batch and another for streaming). Therefore, your plan should include different strategies to migrate each system. Make sure that you specify the dependencies between the systems and specify how you will reduce downtime for each system during each phase of the migration. A typical plan for a migration sprint should include the following: General strategy. Describe the strategy for handling the migration in this sprint. For common strategies, see Deploy your workloads. List of tools and methods for data copy and resource deployment. Specify any tool that you plan to use to copy data or deploy resources to the target environment. This list should include custom scripts that are used to copy Cloud Storage assets, standard tooling such as the Google Cloud CLI, and Google Cloud tools such as Migration Services. List of resources to deploy to the target environment. List all resources that need to be deployed in the target environment. This list should include all data infrastructure components such as Cloud Storage buckets, BigQuery datasets, and Dataproc clusters. In some cases, early migration sprints will include deployment of a sized cluster (such as a Dataproc cluster) in a smaller capacity, while later sprints will include resizing to fit new workloads. Make sure that your plan includes potential resizing. List of datasets to be copied. For each dataset, make sure to specify the following information: Order in copying (if applicable): For most strategies, the order of operation might be important. 
An exception is the scheduled maintenance strategy that's described later in this document. Size Key statistics: Chart key statistics, such as row number, that can help you to verify that the dataset was copied successfully. Estimated time to copy: The time to complete your data transfer, based on the migration plan. Method to copy: Refer to the tools and methods list described earlier in this document. Verification tests: Explicitly list the tests that you plan to complete to verify that the data was copied in full. Contingency plan: Describe what to do if any verification tests fail. Your contingency plan should specify when to retry and resume the copy or fill in the gap, and when to do a complete rollback and recopy the entire dataset. Testing This section describes some typical types of tests that you can plan for. The tests can help you to ensure data integrity and completeness. They can also help you to ensure that the computational layer is working as expected and is ready to run your data pipelines. Summary or hashing comparison: In order to validate data completeness after copying data over, you need to compare the original dataset against the new copy on the target environment. If the data is structured inside BigQuery tables, you can't join the two tables in a query to see if all data exists, because the tables reside in different regions. Because of the cost and latency, BigQuery doesn't allow queries to join data across regions. Instead, the method of comparison must summarize each dataset and compare the results. Depending on the dataset structure, the method for summarizing might be different. For example, a BigQuery table might use an aggregation query, but a set of files on Cloud Storage might use a Spark pipeline to calculate a hash of each file, and then aggregate the hashes. Canary flows: Canary flows activate jobs that are built to validate data integrity and completeness. Before you continue to business use cases like data analytics, it can be useful to run canary flow jobs to make sure that input data complies with a set of prerequisites. You can implement canary flows as custom-made data pipelines, or as flows in a DAG based on Cloud Composer. Canary flows can help you to complete tasks like verifying that there are no missing values for certain fields, or validating that the row count of specific datasets matches the expected count. You can also use canary flows to create digests or other aggregations of a column or a subset of the data. You can then use the canary flow to compare the data to a similar digest or aggregation that's taken from the copy of the data. Canary flow methods are valuable when you need to evaluate the accuracy of data that's stored and copied in file formats, like Avro files on top of Cloud Storage. Canary flows don't normally generate new data, but instead they fail if a set of rules isn't met within the input data. Testing environment: After you complete your migration plan, you should test the plan in a testing environment. The testing environment should include copying sampled data or staging data to another region, to estimate the time that it takes to copy data over the network. This testing helps you to identify any issues with the migration plan, and helps to verify that the data can be migrated successfully. The testing should include both functional and non-functional testing. Functional testing verifies that the data is migrated correctly. 
Non-functional testing verifies that the migration meets performance, security, and other non-functional requirements. Each migration step in your plan should include a validation criteria that details when the step can be considered complete. To help with data validation, you can use the Data Validation Tool (DVT). The tool performs multi-leveled data validation functions, from the table level to the row level, and it helps you compare the results from your source and target systems. Your tests should verify deployment of the computational layer, and test the datasets that were copied. One approach to do so is to construct a testing pipeline that can compute some aggregations of the copied datasets, and make sure the source datasets and the target datasets match. A mismatch between source and target datasets is more common when the files that you copy between regions aren't exact byte-copy representations between the source and target systems (such as when you change file formats or file compressions). For example, consider a dataset that's composed of newline delimited JSON files. The files are stored in a Cloud Storage bucket, and are mounted as an external table in BigQuery. To reduce the amount of data moved over the network, you can perform Avro compression as part of the migration, before you copy files to the target environment. This conversion has many upsides, but it also has some risks, because the files that are being written to the target environment aren't a byte-copy representation of the files in the source environment. To mitigate the risks from the conversion scenario, you can create a Dataflow job, or use BigQuery to calculate some aggregations and checksum hashes of the dataset (such as by calculating sums, averages, or quantiles for each numeric column). For string columns, you can compute aggregations on top of the string length, or on the hash code of that string. For each row, you can compute an aggregated hash from a combination of all the other columns, which can verify with high accuracy that one row is the same as its origin. These calculations are made on both the source and target environments, and then they're compared. In some cases, such as if your dataset is stored in BigQuery, you can't join tables from the source and target environments because they're in different regions, so you need to use a client that can connect to both environments. You can implement the preceding testing methods either in BigQuery or as a batch job (such as in Dataflow). You can then run the aggregation jobs and compare the results calculated for the source environment to the results calculated for the target environment. This approach can help you to make sure that data is complete and accurate. Another important aspect of testing the computational layer is to run pipelines that include all varieties of the processing engines and computational methods. Testing the pipeline is less important for managed computational engines like BigQuery or Dataflow. However, it's important to test the pipeline for non-managed computational engines like Dataproc. For example, if you have a Dataproc cluster that handles several different types of computation, such as Apache Spark, Apache Hive, Apache Flink, or Apache MapReduce, you should test each runtime to make sure that the different workload types are ready to be transferred. Migration strategies After you verify your migration plan with proper testing, you can migrate data. 
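To make the aggregation-and-checksum comparison concrete, the following is a minimal sketch, assuming the google-cloud-bigquery Python client library and a BIT_XOR(FARM_FINGERPRINT(TO_JSON_STRING(...))) digest query; the project, dataset, table, and location names are hypothetical placeholders. It runs the same digest query in the source and target locations and compares the results.

from google.cloud import bigquery

DIGEST_QUERY = """
SELECT
  COUNT(*) AS row_count,
  BIT_XOR(FARM_FINGERPRINT(TO_JSON_STRING(t))) AS table_digest
FROM `{project}.{dataset}.{table}` AS t
"""

def table_digest(project: str, dataset: str, table: str, location: str) -> tuple:
    # Compute a row count and an order-independent fingerprint in the given BigQuery location.
    client = bigquery.Client(project=project)
    query = DIGEST_QUERY.format(project=project, dataset=dataset, table=table)
    row = list(client.query(query, location=location).result())[0]
    return (row.row_count, row.table_digest)

# Hypothetical source and target copies of the same table in different regions.
source = table_digest("my-project", "sales_eu", "orders", location="EU")
target = table_digest("my-project", "sales_us", "orders", location="US")
print("Datasets match" if source == target else f"Mismatch: {source} vs {target}")

A digest comparison like this works only when the copy is expected to be logically identical; if the migration intentionally changes formats or drops historical rows, define the expected differences as part of your verification tests instead.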
When you migrate data, you can use different strategies for different workloads. The following are examples of migration strategies that you can use as is or customize for your needs: Scheduled maintenance: You plan when your cutover window occurs. This strategy is good when data is changed frequently, but SLOs and SLAs can withstand some downtime. This strategy offers high confidence of data transferred because data is completely stale while it's being copied. For more information, see Scheduled maintenance in "Migration to Google Cloud: Transferring your large datasets." Read-only cutover: A slight variation of the scheduled maintenance strategy, where the source system data platform allows read-only data pipelines to continue reading data while data is being copied. This strategy is useful because some data pipelines can continue to work and provide insights to end systems. The disadvantage to this strategy is that the data that's produced is stale during the migration, because the source data doesn't get updated. Therefore, you might need to employ a catch-up strategy after the migration, to account for the stale data in the end systems. Fully active: You copy the data at a specific timestamp, while the source environment is still active for both read and write data pipelines. After you copy the data and switch over to the new deployment, you perform a delta copy phase to get the data that was generated after the migration timestamp in the source environment. This approach requires more coordination and consideration compared to other strategies. Therefore, your migration plan must include how you will handle the update and delete operations on the source data. Double-writes: The data pipelines can run on both the source and target environments, while data is being copied. This strategy avoids the delta copy phase that's required to backfill data if you use the fully active or read-only strategies. However, to help make sure that data pipelines are producing identical results, a double-writes strategy requires more testing before the migration. If you don't perform advance testing, you will encounter problems trying to consolidate a split-brain scenario. The double-writes strategy also introduces potential costs of running the same workloads twice in different regions. This strategy has the potential to migrate your platform with zero downtime, but it requires much more coordination to execute it correctly. Post-migration testing After the migration is complete, you should test data completeness and test the data pipelines for validity. If you complete your migration in sprints, you need to perform these tests after each sprint. The tests that you perform in this stage are similar to integration tests: you test the validity of a data pipeline that's running business use cases with full production-grade data as input, and then you inspect the output for validity. You can compare the output of the integration test to the output from the source environment by running the same data pipeline in both the source environment and in the target environment. This type of test works only if the data pipeline is deterministic, and if you can ensure that the input to both environments is identical. You can confirm that the data is complete when it meets a set of predefined criteria, where the data in the source environment is equal (or similar enough) to the data in the target environment. Depending on the strategy that you used from the previous section, the data might not match one-to-one. 
Therefore, you need to predefine criteria to describe data completeness for your use case. For example, for time-series data, you might consider the data to be complete when the most up-to-date record is no more than five minutes behind the current timestamp. Cutover After you verify and test the data and data pipelines on the target environment, you can start the cutover phase. Starting this phase means that clients might need to change their configuration to reference the new systems. In some cases, the configuration can't be the same as the configuration that's pointing to the source system. For example, if a service needs to read data from a Cloud Storage bucket, clients need to change the configuration for which bucket to use. Cloud Storage bucket names are globally unique, so your target environment Cloud Storage bucket will be different from the source environment. During the cutover phase, you should decommission and unschedule the source environment workloads. We recommend that you keep the source environment data for some time, in case you need to roll back. The pre-migration testing phase isn't as complete as a production run of a data pipeline. Therefore, after the cutover is complete and the target system is operational, you need to monitor the metrics, runtimes, and semantic outputs of your data pipelines. This monitoring will help you to catch errors that your testing phase might have missed, and it will help ensure the success of the migration. Optimize your environment Optimization is the last phase of your migration. In this phase, you make your environment more efficient by executing multiple iterations of a repeatable loop until your environment meets your optimization requirements: Assess your current environment, teams, and optimization loop. Establish your optimization requirements and goals. Optimize your environment and your teams. Tune the optimization loop. For more information about how to optimize your Google Cloud environment, see Migration to Google Cloud: Optimize your environment. Prepare your Google Cloud data and computing resources for a migration across regions This section provides an overview of the data and computing resources on Google Cloud and of the design principles to prepare for a migration across regions. BigQuery Because BigQuery is a serverless SQL data warehouse, you don't have to deploy the computation layer. If some of your BigQuery clients specify regions for processing, you will need to adjust those clients. Otherwise, BigQuery is the same in the source environment and the target environment. BigQuery data is stored in two kinds of tables: BigQuery tables: Tables in the BigQuery format. BigQuery manages the data files for you. For more information about migrating data in the BigQuery format, see Manage datasets. BigQuery external tables: Tables for which the data is stored outside of BigQuery. After the data is moved, you will need to recreate the external tables in the new destination. For more information about migrating external tables, see Introduction to external tables. Cloud Storage Cloud Storage offers a Storage Transfer Service that can help you migrate your data. Dataflow (Batch) Dataflow is a Google-managed data processing engine. To help simplify your Dataflow migration and ensure that your jobs can be deployed to any region, you should inject all inputs and outputs as parameters to your job. 
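A minimal sketch of that parameterization, assuming the Apache Beam Python SDK and hypothetical option names, looks like the following; the input and output locations are supplied at launch time rather than hardcoded, so the same job can be submitted to any region.

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

class MigrationJobOptions(PipelineOptions):
    # Hypothetical custom options: input and output locations are parameters, not constants.
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_argument("--input_path", required=True,
                            help="Cloud Storage path to read, for example gs://source-bucket/raw/*.json")
        parser.add_argument("--output_path", required=True,
                            help="Cloud Storage path to write, for example gs://target-bucket/curated/output")

def run(argv=None):
    pipeline_options = PipelineOptions(argv)
    job_options = pipeline_options.view_as(MigrationJobOptions)
    with beam.Pipeline(options=pipeline_options) as pipeline:
        (pipeline
         | "Read" >> beam.io.ReadFromText(job_options.input_path)
         | "Write" >> beam.io.WriteToText(job_options.output_path))

if __name__ == "__main__":
    run()  # Project, region, temp location, and runner are passed as standard Dataflow options.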
Instead of writing input and output data locations in your source code, we recommend that you pass Cloud Storage paths and database connection strings as arguments or parameters. Dataproc Dataproc is a managed Apache Hadoop environment that can run any workload that's compatible with the Apache Hadoop framework. It's compatible with frameworks such as Apache Spark, Apache Flink, and Apache Hive. You can use Dataproc in the following ways, which affect how you should migrate your Dataproc environment across regions: Ephemeral clusters with data on Cloud Storage: Clusters are built to run specific jobs, and they're destroyed after the jobs are done. This means that the HDFS layer or any other state of the cluster is also destroyed. If your configuration meets the following criteria, then this type of usage is easier to migrate compared to other types of usage: Inputs and outputs to your jobs aren't hardcoded in the source code. Instead, your jobs receive inputs and output as arguments. The Dataproc environment provisioning is automated, including the configurations for the individual frameworks that your environment is using. Long-living clusters with external data: You have one or more clusters, but they're long-living clusters—even if there are no running jobs on the cluster, the cluster is still up and running. The data and compute are separate because the data is saved outside of the cluster on Google Cloud solutions like Cloud Storage or BigQuery. This model is usually effective when there are always jobs that are running on the cluster, so it doesn't make sense to tear down and set up clusters like in the ephemeral model. Because data and compute are separate, the migration is similar to migration of the ephemeral model. Long-living clusters with data in the cluster: The cluster is long living, but the cluster is also keeping state, data, or both, inside the cluster, most commonly as data on HDFS. This type of use complicates the migration efforts because data and compute aren't separate; if you migrate one without the other, there is a high risk of creating inconsistencies. In this scenario, consider moving data and state outside of the cluster before the migration, to separate the two. If doing so is impossible, then we recommend that you use the scheduled maintenance strategy in order to reduce the risk of creating inconsistencies in your data. Because there are many potential frameworks, and many versions and configurations of those frameworks, you need to test thoroughly before you execute your migration plan. Cloud Composer Cloud Composer is Google Cloud's managed version of Apache Airflow, for orchestration and scheduling of flows. DAGs, configurations, and logs are managed in a Cloud Storage bucket that should be migrated with your Cloud Composer deployment. In order to migrate the state of your Cloud Composer deployment, you can save and load environment snapshots. If you've deployed any custom plugins to your Cloud Composer instance, we recommend that you apply an infrastructure-as-code methodology to recreate the environment in a fully automated manner. Cloud Composer doesn't manage data but it activates other data processing frameworks and platforms. Therefore, migration of Cloud Composer can be completely isolated from the data. Cloud Composer also doesn't process data, so your deployment doesn't need to be in the same region as the data. Therefore, you can create a Cloud Composer deployment in the target environment, and still run pipelines on the source environment. 
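As an illustration, a DAG running in a target-region Cloud Composer environment can submit work to resources that are still in the source region. The following sketch assumes the Dataproc submit-job operator from the apache-airflow-providers-google package; the project, regions, cluster, and file paths are hypothetical:

```python
# Hypothetical DAG: Cloud Composer runs in the target region, while the job it
# orchestrates still executes on a Dataproc cluster in the source region.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import DataprocSubmitJobOperator

PROJECT_ID = "my-project"          # illustrative
SOURCE_REGION = "us-central1"      # region that still hosts the data and cluster
SOURCE_CLUSTER = "legacy-cluster"  # illustrative

with DAG(
    dag_id="run_job_in_source_region",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    submit_job = DataprocSubmitJobOperator(
        task_id="submit_pyspark_job",
        project_id=PROJECT_ID,
        region=SOURCE_REGION,  # Composer's own region can be different
        job={
            "reference": {"project_id": PROJECT_ID},
            "placement": {"cluster_name": SOURCE_CLUSTER},
            "pyspark_job": {
                "main_python_file_uri": "gs://source-bucket/jobs/transform.py",
                "args": [
                    "--input=gs://source-bucket/raw/",
                    "--output=gs://source-bucket/curated/",
                ],
            },
        },
    )
```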
In some cases, doing so can be useful for operating different pipelines in different regions while the entire platform is being migrated. Cloud Data Fusion Cloud Data Fusion is a visual integration tool that helps you build data pipelines using a visual editor. Cloud Data Fusion is based on the open source project CDAP. Like Cloud Composer, Cloud Data Fusion doesn't manage data itself, but it activates other data processing frameworks and platforms. Your Cloud Data Fusion pipelines should be exported from the source environment and imported to the target environment in one of these ways: Manual process: Use the web interface to export and import pipelines. Automated process: Write a script that uses the CDAP API. For more information, see CDAP reference and the CDAP REST API documentation. Depending on the amount of flows that you need to migrate, you might prefer one method over the other. Using the CDAP API to build a migration script might be difficult, and it requires more software engineering skills. However, if you have a lot of flows, or if the flows change relatively frequently, an automated process might be the best approach. What's Next Learn how to design resilient single-region environments on Google Cloud. Learn how to minimize the costs of migrating your single- and multi-region environments. Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Eyal Ben Ivri | Cloud Solutions ArchitectOther contributor: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Prepare_your_Google_Workspace_or_Cloud_Identity_account.txt b/Prepare_your_Google_Workspace_or_Cloud_Identity_account.txt new file mode 100644 index 0000000000000000000000000000000000000000..ea5f50788bb6bff93c25956986796c83aba189bc --- /dev/null +++ b/Prepare_your_Google_Workspace_or_Cloud_Identity_account.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/preparing-your-g-suite-or-cloud-identity-account +Date Scraped: 2025-02-23T11:55:29.495Z + +Content: +Home Docs Cloud Architecture Center Send feedback Prepare your Google Workspace or Cloud Identity account Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document describes how you can create a Cloud Identity or Google Workspace account and how you can prepare it for a production deployment. Before you begin To prepare your Cloud Identity or Google Workspace account, you must do the following: Select a target architecture for your production deployment based on our reference architectures. Identify whether you need one or more additional Cloud Identity or Google Workspace accounts for production or staging purposes. For details on identifying the right number of accounts to use, see Best practices for planning accounts and organizations. Identify a suitable onboarding plan and have completed all the activities that your plan defines as prerequisites for consolidating your existing user accounts. For each Cloud Identity or Google Workspace account that you must create, make sure of the following: You have selected the DNS domain name to use as the primary domain name. This domain name determines the name of the associated Google Cloud organization. You can use a neutral domain name as the primary domain name. You have selected any secondary DNS domain names that you want to add to the account. 
Make sure that you don't exceed a total of 600 domains per account. To complete the sign-up process for a new Cloud Identity or Google Workspace account, you also need the following information: A contact phone number and email address. Google uses this phone number and address to contact you in case of problems with your account. An email address for the first super-admin user account. The email address must use the primary DNS domain and must not be used by an existing consumer account. If you plan to set up federation later, select an email address that maps to a user in your external identity provider (IdP). Creating a new Cloud Identity or Google Workspace account might require collaboration between multiple teams and stakeholders in your organization. These might include the following: DNS administrators. To verify primary and secondary DNS domains, you need administrative access to both DNS zones. If you use an external IdP, the administrators of your external IdP. Future administrators of the Google Cloud organization. Process for preparing an account The following flowchart illustrates the process of preparing your Cloud Identity or Google Workspace account. As the two sides of the diagram indicate, the process might require collaboration between different teams. Sign up for Cloud Identity or Google Workspace. During the sign-up process, you must provide a contact phone number and email address, the primary domain that you want to use, and the username for the first super-admin user account. Note: The sign-up process might fail if the primary DNS domain that you selected is already in use by a different Cloud Identity or Google Workspace account. Verify the ownership of your primary domain by creating either a TXT or CNAME record in the corresponding DNS zone of your DNS server. Add any secondary domains to the Cloud Identity or Google Workspace account. Verify ownership of the secondary domains by creating either TXT or CNAME records in the corresponding DNS zones of your DNS server. Protect your account by configuring security settings. Create a default configuration for user accounts. Note: If you are signing up for Google Workspace and need to consolidate existing user accounts, don't update the DNS MX records of your primary domain yet. Doing so might cause the users of unmanaged accounts to not receive emails anymore. Secure access to your account During the sign-up process, you create a first user in your Cloud Identity or Google Workspace account. This user account is assigned super-admin privileges and has full access to the Cloud Identity or Google Workspace account. You need super-admin privileges in order to complete the initial configuration of your Cloud Identity or Google Workspace account. After you've completed the initial configuration, occurrences where you need super-admin privileges will be rare—but to ensure business continuity, it's important that you and other authorized personnel maintain super-admin access to the Cloud Identity or Google Workspace account: To ensure this access, do the following: Select a group of administrators that should have super-admin access to the Cloud Identity or Google Workspace account. It's best to keep the number of users small. Create a set of dedicated super-admin user accounts for each administrator. Enforce Google 2-step authentication for these users and require them to create backup codes so that they maintain access even if they lose their phone or USB key. 
Instruct administrators to use the super-admin accounts only when necessary, and discourage everyday use of those accounts. For details on how to keep super-admin users secure, see Super administrator account best practices. And to make sure your account is properly secured, follow our Security checklist for medium and large businesses. Configure default settings for user accounts Cloud Identity and Google Workspace support a number of settings that help you keep user accounts secure: Enforcing 2-step verification. Controlling who can access Google Workspace and Google services. Allowing or disallowing access to apps that are less secure. Assigning licenses for Cloud Identity Premium or Google Workspace. Choosing a geographic location for your data and controlling supplemental data storage (Google Workspace only). To minimize administrative effort, it's best to configure these settings so that they are applied by default to new users. You can configure default settings on the following levels: Global: A global setting applies to all users but has the lowest priority. Organizational unit (OU): A setting configured for an OU applies to all users in the OU and to descendant OUs, and it overrides a global setting. Group: A setting configured by group applies to all members of the group and overrides OU and global settings. Create an OU structure By creating a structure of organizational units, you can segment the user accounts of your Cloud Identity or Google Workspace account into discrete sets to make them easier to manage. If you use Cloud Identity in combination with an external IdP, creating custom organizational units might not be necessary. Instead, you can use a combination of global and group-specific settings: Keep all user accounts in the default OU. To control who is allowed to access certain Google services, create dedicated groups such as Google Cloud Users and Google Ads Users in your external IdP. Provision these groups to Cloud Identity and apply the right default settings to them. You can then control access by modifying group memberships in your external IdP. If some or all of your users use Google Workspace, you are likely to require a custom OU structure because some of the Google Workspace–specific settings cannot be applied by group. If you use an external IdP, it's best to keep the OU structure simple, as follows: Create a basic OU structure that lets you automatically assign licenses, choose a geographic location for your data, and control supplemental data storage. For all other settings, we recommend that you apply settings by group. Configure your external IdP so that new users are automatically assigned to the right OU. Create dedicated groups such as Google Cloud Users and Google Ads Users in your external IdP. Provision these groups to Google Workspace and apply the right default settings to them. You can then control access by modifying group memberships in your external IdP. Impact of the default OU on account migration If you have identified existing consumer accounts that you plan to migrate to Cloud Identity or Google Workspace, the default OU plays a special role. If you migrate a consumer account to Cloud Identity or Google Workspace, that account is always placed into the default OU and not part of any groups. To migrate a consumer account, you have to initiate an account transfer. This transfer has to be approved by the owner of the consumer account. 
As an administrator, you have limited control over when the owner gives consent and, therefore, over when you can complete the transfer. When the transfer is complete, all settings applied to the default OU take effect on the migrated user account. Make sure that these settings grant a base level of access to Google services so that the associated employee's ability to work is not impeded. Best practices When you are preparing your Cloud Identity or Google Workspace account, follow these best practices: If you use an external IdP, then ensure that the users in Cloud Identity or Google Workspace are a subset of the identities in your external IdP. Consider shortening the default session length and session length used by Google Cloud. When you use an external IdP, make sure that you align the session length with your IdP. Export audit logs to BigQuery to retain them beyond the default retention period. To help keep your account safe, periodically review our security checklist for medium and large businesses. What's next Read about how to consolidate your existing user accounts. Send feedback \ No newline at end of file diff --git a/Preventative_controls.txt b/Preventative_controls.txt new file mode 100644 index 0000000000000000000000000000000000000000..5755c7ed4f9213abfe76fe0d108273458b343ad9 --- /dev/null +++ b/Preventative_controls.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations/preventative-controls +Date Scraped: 2025-02-23T11:45:34.129Z + +Content: +Home Docs Cloud Architecture Center Send feedback Preventative controls for acceptable resource configurations Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC We recommend that you define policy constraints that enforce acceptable resource configurations and prevent risky configurations. The blueprint uses a combination of organization policy constraints and infrastructure-as-code (IaC) validation in your pipeline. These controls prevent the creation of resources that don't meet your policy guidelines. Enforcing these controls early in the design and build of your workloads helps you to avoid remediation work later. Organization policy constraints The Organization Policy service enforces constraints to ensure that certain resource configurations can't be created in your Google Cloud organization, even by someone with a sufficiently privileged IAM role. The blueprint enforces policies at the organization node so that these controls are inherited by all folders and projects within the organization. This bundle of policies is designed to prevent certain high-risk configurations, such as exposing a VM to the public internet or granting public access to storage buckets, unless you deliberately allow an exception to the policy. The following table introduces the organization policy constraints that are implemented in the blueprint: Organization policy constraint Description compute.disableNestedVirtualization Nested virtualization on Compute Engine VMs can evade monitoring and other security tools for your VMs if poorly configured. This constraint prevents the creation of nested virtualization. compute.disableSerialPortAccess IAM roles like compute.instanceAdmin allow privileged access to an instance's serial port using SSH keys. If the SSH key is exposed, an attacker could access the serial port and bypass network and firewall controls. This constraint prevents serial port access.
compute.disableVpcExternalIpv6 External IPv6 subnets can be exposed to unauthorized internet access if they are poorly configured. This constraint prevents the creation of external IPv6 subnets. compute.requireOsLogin The default behavior of setting SSH keys in metadata can allow unauthorized remote access to VMs if keys are exposed. This constraint enforces the use of OS Login instead of metadata-based SSH keys. compute.restrictProtocolForwardingCreationForTypes VM protocol forwarding for external IP addresses can lead to unauthorized internet egress if forwarding is poorly configured. This constraint allows VM protocol forwarding for internal addresses only. compute.restrictXpnProjectLienRemoval Deleting a Shared VPC host project can be disruptive to all the service projects that use networking resources. This constraint prevents accidental or malicious deletion of the Shared VPC host projects by preventing the removal of the project lien on these projects. compute.setNewProjectDefaultToZonalDNSOnly A legacy setting for global (project-wide) internal DNS is not recommended because it reduces service availability. This constraint prevents the use of the legacy setting. compute.skipDefaultNetworkCreation A default VPC network and overly permissive default VPC firewall rules are created in every new project that enables the Compute Engine API. This constraint skips the creation of the default network and default VPC firewall rules. compute.vmExternalIpAccess By default, a VM is created with an external IPv4 address that can lead to unauthorized internet access. This constraint configures an empty allowlist of external IP addresses that the VM can use and denies all others. essentialcontacts.allowedContactDomains By default, Essential Contacts can be configured to send notifications about your domain to any other domain. This constraint enforces that only email addresses in approved domains can be set as recipients for Essential Contacts. iam.allowedPolicyMemberDomains By default, allow policies can be granted to any Google Account, including unmanaged accounts, and accounts belonging to external organizations. This constraint ensures that allow policies in your organization can only be granted to managed accounts from your own domain. Optionally, you can allow additional domains. iam.automaticIamGrantsForDefaultServiceAccounts By default, default service accounts are automatically granted overly permissive roles. This constraint prevents the automatic IAM role grants to default service accounts. iam.disableServiceAccountKeyCreation Service account keys are a high-risk persistent credential, and in most cases a more secure alternative to service account keys can be used. This constraint prevents the creation of service account keys. iam.disableServiceAccountKeyUpload Uploading service account key material can increase risk if key material is exposed. This constraint prevents the uploading of service account keys. sql.restrictAuthorizedNetworks Cloud SQL instances can be exposed to unauthenticated internet access if the instances are configured to use authorized networks without a Cloud SQL Auth Proxy. This policy prevents the configuration of authorized networks for database access and forces the use of the Cloud SQL Auth Proxy instead. sql.restrictPublicIp Cloud SQL instances can be exposed to unauthenticated internet access if the instances are created with public IP addresses. This constraint prevents public IP addresses on Cloud SQL instances.
storage.uniformBucketLevelAccess By default, objects in Cloud Storage can be accessed through legacy Access Control Lists (ACLs) instead of IAM, which can lead to inconsistent access controls and accidental exposure if misconfigured. Legacy ACL access is not affected by the iam.allowedPolicyMemberDomains constraint. This constraint enforces that access can only be configured through IAM uniform bucket-level access, not legacy ACLs. storage.publicAccessPrevention Cloud Storage buckets can be exposed to unauthenticated internet access if misconfigured. This constraint prevents ACLs and IAM permissions that grant access to allUsers and allAuthenticatedUsers. These policies are a starting point that we recommend for most customers and most scenarios, but you might need to modify organization policy constraints to accommodate certain workload types. For example, a workload that uses a Cloud Storage bucket as the backend for Cloud CDN to host public resources is blocked by storage.publicAccessPrevention, or a public-facing Cloud Run app that doesn't require authentication is blocked by iam.allowedPolicyMemberDomains. In these cases, modify the organization policy at the folder or project level to allow a narrow exception. You can also conditionally add constraints to organization policy by defining a tag that grants an exception or enforcement for policy, then applying the tag to projects and folders. For additional constraints, see available constraints and custom constraints. Pre-deployment validation of infrastructure-as-code The blueprint uses a GitOps approach to manage infrastructure, meaning that all infrastructure changes are implemented through version-controlled infrastructure-as-code (IaC) and can be validated before deploying. The policies enforced in the blueprint define acceptable resource configurations that can be deployed by your pipeline. If code that is submitted to your GitHub repository does not pass the policy checks, no resources are deployed. For information on how pipelines are used and how controls are enforced through CI/CD automation, see deployment methodology. What's next Read about deployment methodology (next document in this series) Send feedback \ No newline at end of file diff --git a/Pricing_calculator.txt b/Pricing_calculator.txt new file mode 100644 index 0000000000000000000000000000000000000000..ba11bcf02825963013dd612cda64808194e86323 --- /dev/null +++ b/Pricing_calculator.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/calculator +Date Scraped: 2025-02-23T12:10:35.649Z + +Content: +Welcome to Google Cloud's pricing calculatorGet started with your estimateAdd and configure products to get a cost estimate to share with your team.addAdd to estimatevideo_youtubeA quick tutorial on how to use this tool Watch now.This tool creates cost estimates based on assumptions that you provide. 
These estimates may not accurately reflect the final costs on your monthly Google Cloud bill. Link a billing account to view negotiated pricing. \ No newline at end of file diff --git a/Private_Service_Connect.txt b/Private_Service_Connect.txt new file mode 100644 index 0000000000000000000000000000000000000000..46f623cffc39990b8b012e04295f53435ec122e3 --- /dev/null +++ b/Private_Service_Connect.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/private-service-connect +Date Scraped: 2025-02-23T12:07:22.012Z + +Content: +Jump to Private Service ConnectPrivate Service ConnectCreates a private and secure connection from your VPCs to Google, third parties, or your own services.Go to consoleContact salesKeep traffic between your VPC and services within the Google networkLine rate performance and scales to enterprise-size networksUse in conjunction with Service Directory for service-centric networkingSee how customers accelerate growth with Google Cloud's Cross-Cloud Network2:39BenefitsConsume services fasterEasily and securely connect your private network to access services on Google (Cloud Storage, Bigtable), third parties (Snowflake, MongoDB), or services you own.Protect your network trafficPrevent your network traffic from being exposed to the public internet. Data remains secure on Google’s backbone network.Simplify service managementRemoves the need to configure an internet gateway or a VPC peering connection. Simplify the management of complicated cloud network architectures.Key featuresPrivately connect services across different networks and organizationsAccess Google APIs and servicesConnect to Google Cloud services like Cloud Storage and Bigtable using Private Service Connect endpoints with internal IP addresses in your VPC networks.Connect to a service in another VPC networkConnect to your own services or those provided by other service producers (Example: MongoDB, Snowflake) using a Private Service Connect endpoint.Publish services as a service producerYou can publish a service (make a service available outside your VPC network) by using an internal TCP/UDP load balancer and create a service attachment in the same region.Service DirectoryPrivate Service Connect endpoints are registered with Service Directory where you can store, manage, and publish services.View all featuresVIDEOSee how Private Service Connect simplifies secure access to services at scale2:13Integration with Google Cloud's Private Service Connect and DataStax Astra DB has been effortless.
As a Google partner, we had close collaboration with the product and engineering team that allowed our customers to connect DataStax Astra DB securely using PSC just as it was generally available.Cory Schickendantz, Global Director, Cloud Ecosystems, DataStaxWhat's newSee the latest updates about Private Service ConnectSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postTroubleshooting best practices for Private Service ConnectRead the blogVideoService-Centric Cross-Cloud Network demo - AWS and Google CloudWatch videoBlog postWhat is Private Service Connect? Learn moreRead the blogBlog postManaged service egress with Private Service Connect interfacesRead the blogBlog postThree Private Service Connect patterns - Networking basicsRead the blogBlog postPrivate, secure, and seamless connectivity to Cloud SQL using Private Service ConnectRead the blogDocumentationFind resources and documentation for Private Service ConnectGoogle Cloud BasicsPrivate Service Connect overviewGet an overview of Private Service Connect and key concepts as a consumer of services and as a producer of services.Learn moreQuickstartConfiguring Private Service Connect to access Google APIsLearn how to connect to service producers using endpoints in Private Service Connect.Learn moreQuickstartConfiguring Private Service Connect to access servicesLearn how to connect to services in another VPC network.Learn moreTutorialHow-to guide for service producersLearn how to publish services for your customers to connect using Private Service Connect.Learn moreNot seeing what you’re looking for?View all product documentationUse casesUse casesUse caseUse Private Service Connect to access Google APIsCreate private endpoints using global internal IP addresses within your VPC network. Assign DNS names to these internal IP addresses. You can control which traffic goes to which endpoint, and can demonstrate that traffic stays within Google Cloud.Use caseUse Private Service Connect to offer servicesPrivate Service Connect uses endpoints and service attachments to let service consumers send traffic from the consumer's VPC network to services in the service producer's VPC network.Use caseUse HTTP(S) load balancing for service controlsPrivate Service Connect with consumer HTTP(S) service controls gives service consumers full control of their policies by steering traffic through a Google Cloud External HTTP(S) load balancer.View all technical guidesAll featuresLearn more about Private Service Connect featuresAccess Google APIs and servicesConnect to Google Cloud services, such as Cloud Storage and Bigtable, using Private Service Connect endpoints with internal IP addresses in your VPC networks.Connect to a service in another VPC networkConnect to your own services or those provided by other service producers (Example: MongoDB, Snowflake) using a Private Service Connect endpoint.Publish services as a service producerYou can publish a service (make a service available outside your VPC network) by using an internal TCP/UDP load balancer and create a service attachment in the same region.Service DirectoryPrivate Service Connect endpoints are registered with Service Directory for Google APIs where you can store, manage, and publish services.Proxy protocolFind your consumers' source IP addresses and Private Service Connect ID from one central proxy protocol header.VPC Service ControlsRestrict Private Service Connect within a service perimeter and mitigate data exfiltration risks. 
VPC Service Controls service perimeters are always enforced on APIs and services that support VPC Service Controls.Enable consumer http(s) service controls using a load balancerYou can create a Private Service Connect endpoint with consumer HTTP(S) service controls using an external or internal HTTP(S) load balancer, which lets you maintain consistent policies across multiple service producers.PricingPricingFor service consumers (customers), Private Service Connect pricing is per endpoint. For producers (service owners), Private Service Connect pricing is per GB processed only.View pricing detailsPartnersPrivate Service Connect partnersGoogle Cloud partners can deliver multi-tenant services securely at massive scale.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Promote_modular_design.txt b/Promote_modular_design.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9c5995c184edc1213a4ce90426884c8bf516610 --- /dev/null +++ b/Promote_modular_design.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/performance-optimization/promote-modular-design +Date Scraped: 2025-02-23T11:44:10.854Z + +Content: +Home Docs Cloud Architecture Center Send feedback Promote modular design Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-06 UTC This principle in the performance optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you promote a modular design. Modular components and clear interfaces can enable flexible scaling, independent updates, and future component separation. Principle overview Understand the dependencies between the application components and the system components to design a scalable system. Modular design enables flexibility and resilience, regardless of whether a monolithic or microservices architecture was initially deployed. By decomposing the system into well-defined, independent modules with clear interfaces, you can scale individual components to meet specific demands. Targeted scaling can help optimize resource utilization and reduce costs in the following ways: Provisions only the necessary resources to each component, and allocates fewer resources to less-demanding components. Adds more resources during high-traffic periods to maintain the user experience. Removes under-utilized resources without compromising performance. Modularity also enhances maintainability. Smaller, self-contained units are easier to understand, debug, and update, which can lead to faster development cycles and reduced risk. While modularity offers significant advantages, you must evaluate the potential performance trade-offs. The increased communication between modules can introduce latency and overhead. Strive for a balance between modularity and performance. A highly modular design might not be universally suitable. When performance is critical, a more tightly coupled approach might be appropriate. System design is an iterative process, in which you continuously review and refine your modular design. Recommendations To promote modular designs, consider the recommendations in the following sections. Design for loose coupling Design a loosely coupled architecture. 
Independent components with minimal dependencies can help you build scalable and resilient applications. As you plan the boundaries for your services, you must consider the availability and scalability requirements. For example, if one component has requirements that are different from your other components, you can design the component as a standalone service. Implement a plan for graceful failures for less-important subprocesses or services that don't impact the response time of the primary services. Design for concurrency and parallelism Design your application to support multiple tasks concurrently, like processing multiple user requests or running background jobs while users interact with your system. Break large tasks into smaller chunks that can be processed at the same time by multiple service instances. Task concurrency lets you use features like autoscaling to increase the resource allocation in products like the following: Compute Engine GKE BigQuery Spanner Balance modularity for flexible resource allocation Where possible, ensure that each component uses only the necessary resources (like memory, storage, and processing power) for specific operations. Resource over-allocation can result in unnecessary costs, while resource under-allocation can compromise performance. Use well-defined interfaces Ensure modular components communicate effectively through clear, standardized interfaces (like APIs and message queues) to reduce overhead from translation layers or from extraneous traffic. Use stateless models A stateless model can help ensure that you can handle each request or interaction with the service independently from previous requests. This model facilitates scalability and recoverability, because you can grow, shrink, or restart the service without losing the data necessary for in-progress requests or processes. Choose complementary technologies Choose technologies that complement the modular design. Evaluate programming languages, frameworks, and databases for their modularity support. For more information, see the following resources: Re-architecting to cloud native Introduction to microservices Previous arrow_back Take advantage of elasticity Next Continuously monitor and improve performance arrow_forward Send feedback \ No newline at end of file diff --git a/Pub-Sub(1).txt b/Pub-Sub(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..c2eb33b31eea062898f6aa0f15385ba22cf52450 --- /dev/null +++ b/Pub-Sub(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/pubsub +Date Scraped: 2025-02-23T12:05:30.507Z + +Content: +Explore and analyze data with a prebuilt, Google-recommended data warehouse solution that uses BigQuery, Looker Studio, and AI. Deploy in console.Jump to Pub/SubPub/SubIngest events for streaming into BigQuery, data lakes or operational databases.New customers get $300 in free credits to spend on Pub/Sub. All customers get up to 10 GB for ingestion or delivery of messages free per month, not charged against your credits.Go to consoleContact salesDeploy an example data warehouse solution to explore, analyze, and visualize data using BigQuery and Looker Studio. Plus, apply generative AI to summarize the results of the analysis. 
Ingest analytic events and stream them to BigQuery with DataflowNo-ops, secure, scalable messaging or queue systemIn-order and any-order at-least-once message delivery with pull and push modesSecure data with fine-grained access controls and always-on encryption VIDEOLearn Pub/Sub in a minute, including how it works and common use cases1:45BenefitsHigh availability made simpleSynchronous, cross-zone message replication and per-message receipt tracking ensures reliable delivery at any scale.No-planning, auto-everythingAuto-scaling and auto-provisioning with no partitions eliminates planning and ensures workloads are production ready from day one.Easy, open foundation for real-time data systems A fast, reliable way to land small records at any volume, an entry point for real-time and batch pipelines feeding BigQuery, data lakes and operational databases. Use it with ETL/ELT pipelines in Dataflow. Key featuresKey featuresStream analytics and connectorsNative Dataflow integration enables reliable, expressive, exactly-once processing and integration of event streams in Java, Python, and SQL.In-order delivery at scaleOptional per-key ordering simplifies stateful application logic without sacrificing horizontal scale—no partitions required.Simplified streaming ingestion with native integrationsIngest streaming data from Pub/Sub directly in BigQuery or Cloud Storage with our native subscriptions.View all featuresBLOGRead what's new with Pub/Sub from Next '24CustomersLearn from customers using Pub/SubCase studyCME Group uses Pub/Sub to offer real-time market data, helping firms mitigate risk and drive revenue.5-min readCase studySky scales event publishing with Pub/Sub to build its next generation Sky Q box.5-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postWhat's New with Pub/Sub Next '24Read the blogBlog postSimplify data lake pipelines with new Pub/Sub Cloud Storage subscriptionsRead the blogBlog postPub/Sub schema evolution is generally availableRead the blogBlog postNo pipelines needed. 
Stream data with Pub/Sub direct to BigQueryRead the blogDocumentationDocumentationGoogle Cloud BasicsWhat is Pub/Sub?Get a comprehensive overview of Pub/Sub, from core concepts and message flow to common use cases and integrations.Learn moreTutorialIntroduction to Pub/SubLearn how to enable Pub/Sub in a Google Cloud project, create a Pub/Sub topic and subscription, and publish messages and pull them to the subscription.Learn moreQuickstartQuickstart: Using client librariesSee how the Pub/Sub service allows applications to exchange messages reliably, quickly, and asynchronously.Learn moreTutorialIn-order message deliveryLearn how scalable message ordering works and when to use it.Learn moreTutorialChoosing between Pub/Sub or Pub/Sub LiteUnderstand how to make most of both options.Learn moreQuickstartQuickstart: Stream processing with DataflowLearn how to use Dataflow to read messages published to a Pub/Sub topic, window the messages by timestamp, and write the messages to Cloud Storage.Learn moreTutorialGuide: Publishing messages to topicsLearn how to create a message containing your data and send a request to the Pub/Sub Server to publish the message to the desired topic.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Pub/SubUse casesUse casesUse caseStream analyticsGoogle’s stream analytics makes data more organized, useful, and accessible from the instant it’s generated. Built on Pub/Sub along with Dataflow and BigQuery, our streaming solution provisions the resources you need to ingest, process, and analyze fluctuating volumes of real-time data for real-time business insights. This abstracted provisioning reduces complexity and makes stream analytics accessible to both data analysts and data engineers.Use caseAsynchronous microservices integrationPub/Sub works as a messaging middleware for traditional service integration or a simple communication medium for modern microservices. Push subscriptions deliver events to serverless webhooks on Cloud Functions, App Engine, Cloud Run, or custom environments on Google Kubernetes Engine or Compute Engine. Low-latency pull delivery is available when exposing webhooks is not an option or for efficient handling of higher throughput streams.View all technical guidesAll featuresAll featuresAt-least-once deliverySynchronous, cross-zone message replication and per-message receipt tracking ensures at-least-once delivery at any scale.OpenOpen APIs and client libraries in seven languages support cross-cloud and hybrid deployments.Exactly-once processingDataflow supports reliable, expressive, exactly-once processing of Pub/Sub streams.No provisioning, auto-everythingPub/Sub does not have shards or partitions. Just set your quota, publish, and consume.Compliance and securityPub/Sub is a HIPAA-compliant service, offering fine-grained access controls and end-to-end encryption.Google Cloud–native integrationsTake advantage of integrations with multiple services, such as Cloud Storage and Gmail update events and Cloud Functions for serverless event-driven computing.Third-party and OSS integrationsPub/Sub provides third-party integrations with Splunk and Datadog for logs along with Striim and Informatica for data integration. 
Additionally, OSS integrations are available through Confluent Cloud for Apache Kafka and Knative Eventing for Kubernetes-based serverless workloads. Seek and replayRewind your backlog to any point in time or a snapshot, giving the ability to reprocess the messages. Fast forward to discard outdated data.Dead letter topicsDead letter topics allow for messages unable to be processed by subscriber applications to be put aside for offline examination and debugging so that other messages can be processed without delay.FilteringPub/Sub can filter messages based upon attributes in order to reduce delivery volumes to subscribers.PricingPricingPub/Sub pricing is calculated based upon monthly data volumes. The first 10 GB of data per month is offered at no charge. Monthly data volume1Price per TB2First 10 GB$0.00Beyond 10 GB$40.001 For detailed pricing information, please consult the pricing guide.2 TB refers to a tebibyte, or 240 bytes.If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Pub-Sub.txt b/Pub-Sub.txt new file mode 100644 index 0000000000000000000000000000000000000000..5a0f3156cf6e74a9986d53dc38087ec3f73ac614 --- /dev/null +++ b/Pub-Sub.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/pubsub +Date Scraped: 2025-02-23T12:03:35.328Z + +Content: +Explore and analyze data with a prebuilt, Google-recommended data warehouse solution that uses BigQuery, Looker Studio, and AI. Deploy in console.Jump to Pub/SubPub/SubIngest events for streaming into BigQuery, data lakes or operational databases.New customers get $300 in free credits to spend on Pub/Sub. All customers get up to 10 GB for ingestion or delivery of messages free per month, not charged against your credits.Go to consoleContact salesDeploy an example data warehouse solution to explore, analyze, and visualize data using BigQuery and Looker Studio. Plus, apply generative AI to summarize the results of the analysis. Ingest analytic events and stream them to BigQuery with DataflowNo-ops, secure, scalable messaging or queue systemIn-order and any-order at-least-once message delivery with pull and push modesSecure data with fine-grained access controls and always-on encryption VIDEOLearn Pub/Sub in a minute, including how it works and common use cases1:45BenefitsHigh availability made simpleSynchronous, cross-zone message replication and per-message receipt tracking ensures reliable delivery at any scale.No-planning, auto-everythingAuto-scaling and auto-provisioning with no partitions eliminates planning and ensures workloads are production ready from day one.Easy, open foundation for real-time data systems A fast, reliable way to land small records at any volume, an entry point for real-time and batch pipelines feeding BigQuery, data lakes and operational databases. Use it with ETL/ELT pipelines in Dataflow. 
Key featuresKey featuresStream analytics and connectorsNative Dataflow integration enables reliable, expressive, exactly-once processing and integration of event streams in Java, Python, and SQL.In-order delivery at scaleOptional per-key ordering simplifies stateful application logic without sacrificing horizontal scale—no partitions required.Simplified streaming ingestion with native integrationsIngest streaming data from Pub/Sub directly in BigQuery or Cloud Storage with our native subscriptions.View all featuresBLOGRead what's new with Pub/Sub from Next '24CustomersLearn from customers using Pub/SubCase studyCME Group uses Pub/Sub to offer real-time market data, helping firms mitigate risk and drive revenue.5-min readCase studySky scales event publishing with Pub/Sub to build its next generation Sky Q box.5-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postWhat's New with Pub/Sub Next '24Read the blogBlog postSimplify data lake pipelines with new Pub/Sub Cloud Storage subscriptionsRead the blogBlog postPub/Sub schema evolution is generally availableRead the blogBlog postNo pipelines needed. Stream data with Pub/Sub direct to BigQueryRead the blogDocumentationDocumentationGoogle Cloud BasicsWhat is Pub/Sub?Get a comprehensive overview of Pub/Sub, from core concepts and message flow to common use cases and integrations.Learn moreTutorialIntroduction to Pub/SubLearn how to enable Pub/Sub in a Google Cloud project, create a Pub/Sub topic and subscription, and publish messages and pull them to the subscription.Learn moreQuickstartQuickstart: Using client librariesSee how the Pub/Sub service allows applications to exchange messages reliably, quickly, and asynchronously.Learn moreTutorialIn-order message deliveryLearn how scalable message ordering works and when to use it.Learn moreTutorialChoosing between Pub/Sub or Pub/Sub LiteUnderstand how to make most of both options.Learn moreQuickstartQuickstart: Stream processing with DataflowLearn how to use Dataflow to read messages published to a Pub/Sub topic, window the messages by timestamp, and write the messages to Cloud Storage.Learn moreTutorialGuide: Publishing messages to topicsLearn how to create a message containing your data and send a request to the Pub/Sub Server to publish the message to the desired topic.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Pub/SubUse casesUse casesUse caseStream analyticsGoogle’s stream analytics makes data more organized, useful, and accessible from the instant it’s generated. Built on Pub/Sub along with Dataflow and BigQuery, our streaming solution provisions the resources you need to ingest, process, and analyze fluctuating volumes of real-time data for real-time business insights. This abstracted provisioning reduces complexity and makes stream analytics accessible to both data analysts and data engineers.Use caseAsynchronous microservices integrationPub/Sub works as a messaging middleware for traditional service integration or a simple communication medium for modern microservices. 
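For instance, one service can publish an event and another can consume it asynchronously over a pull subscription. The following is a minimal sketch with the Python client library, using illustrative project and resource names:

```python
# Minimal sketch: one service publishes an event, another pulls it from a
# subscription. Project, topic, and subscription names are illustrative.
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"
TOPIC_ID = "orders"
SUBSCRIPTION_ID = "orders-sub"

# Publisher side: data must be a bytestring; attributes are optional strings.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
future = publisher.publish(topic_path, b"order created", order_id="12345")
print(f"Published message {future.result()}")

# Subscriber side: synchronously pull and acknowledge a batch of messages.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)
response = subscriber.pull(request={"subscription": subscription_path, "max_messages": 10})

ack_ids = []
for received in response.received_messages:
    print(f"Received: {received.message.data}")
    ack_ids.append(received.ack_id)

if ack_ids:
    subscriber.acknowledge(request={"subscription": subscription_path, "ack_ids": ack_ids})
```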
Push subscriptions deliver events to serverless webhooks on Cloud Functions, App Engine, Cloud Run, or custom environments on Google Kubernetes Engine or Compute Engine. Low-latency pull delivery is available when exposing webhooks is not an option or for efficient handling of higher throughput streams.View all technical guidesAll featuresAll featuresAt-least-once deliverySynchronous, cross-zone message replication and per-message receipt tracking ensures at-least-once delivery at any scale.OpenOpen APIs and client libraries in seven languages support cross-cloud and hybrid deployments.Exactly-once processingDataflow supports reliable, expressive, exactly-once processing of Pub/Sub streams.No provisioning, auto-everythingPub/Sub does not have shards or partitions. Just set your quota, publish, and consume.Compliance and securityPub/Sub is a HIPAA-compliant service, offering fine-grained access controls and end-to-end encryption.Google Cloud–native integrationsTake advantage of integrations with multiple services, such as Cloud Storage and Gmail update events and Cloud Functions for serverless event-driven computing.Third-party and OSS integrationsPub/Sub provides third-party integrations with Splunk and Datadog for logs along with Striim and Informatica for data integration. Additionally, OSS integrations are available through Confluent Cloud for Apache Kafka and Knative Eventing for Kubernetes-based serverless workloads. Seek and replayRewind your backlog to any point in time or a snapshot, giving the ability to reprocess the messages. Fast forward to discard outdated data.Dead letter topicsDead letter topics allow for messages unable to be processed by subscriber applications to be put aside for offline examination and debugging so that other messages can be processed without delay.FilteringPub/Sub can filter messages based upon attributes in order to reduce delivery volumes to subscribers.PricingPricingPub/Sub pricing is calculated based upon monthly data volumes. The first 10 GB of data per month is offered at no charge. Monthly data volume1Price per TB2First 10 GB$0.00Beyond 10 GB$40.001 For detailed pricing information, please consult the pricing guide.2 TB refers to a tebibyte, or 240 bytes.If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Quickstarts.txt b/Quickstarts.txt new file mode 100644 index 0000000000000000000000000000000000000000..46b5b625f2e8a099536c267a8a83bbbcabbdc62b --- /dev/null +++ b/Quickstarts.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/docs/tutorials?doctype=quickstart +Date Scraped: 2025-02-23T12:11:13.038Z + +Content: +Home Documentation Send feedback Stay organized with collections Save and categorize content based on your preferences. 
Application development Big data and analytics Compute Containers Databases DevOps Healthcare and life sciences High performance computing (HPC) Hybrid and multicloud Identity Internet of Things (IoT) Logging and monitoring Machine learning and artificial intelligence (ML/AI) Migrations Networking Security and compliance Serverless Storage Google Cloud console Google Cloud CLI Cloud Client Libraries Quickstart Tutorial Interactive walkthrough 03:38 Get started with Google Cloud quickstarts What are Google Cloud quickstarts? Whether you're looking to deploy a web app, set up a database, or run big data workloads, it can be challenging to get started. Luckily, Google Cloud quickstarts offer step-by-step tutorials that cover basic use cases, operating the Google Cloud console, and how to use the Google command-line tools. Send feedback \ No newline at end of file diff --git a/Rapid_Migration_and_Modernization_Program(1).txt b/Rapid_Migration_and_Modernization_Program(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..65cfd2677a315fc59f7b8401af4259fa08f102f6 --- /dev/null +++ b/Rapid_Migration_and_Modernization_Program(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/cloud-migration-program +Date Scraped: 2025-02-23T12:06:55.337Z + +Content: +VMware Cloud Foundation on Google Cloud VMware Engine: 20% lower price and up to 40% in migration incentives. Read more.Rapid Migration and Modernization Program (RaMP)Google Cloud's holistic, end-to-end migration and modernization program to help customers leverage expertise and best practices, lower risk, control costs, and simplify their path to cloud success. Try Migration CenterFree migration assessmentMigrating workloads to the public cloud: an essential guide & checklistGet the reportBenefitsMigrating and modernizing should be fast and easy. With RaMP, it is. Clarity and road mapUnderstand your IT environment with data-driven discovery and analytics, and receive a road map with timeline, costs, resources, and risks. Gain confidence with a transparent transition plan and cost proposal.Expertise in executionMigrate with confidence through best practices delivered by Google Cloud and our partners, including blueprints, governance, security, and training for your personnel and cloud operations. Reduce cost and riskReduce migration costs with flexible proposals that meet your budget, partner funding from Google Cloud to offset costs, and solution-specific discounts and credits. Lower risk with data-driven migration plans.Why Migrate and modernize with Google Cloud RaMP? For any large-scale migration, we’ve found the following five best practices greatly increase chances of success. First, there needs to be a business case articulating the “why” that executives across business, IT, finance, security, and operations are aligned on. Second, there needs to be a learning plan tailored to each role so individuals can obtain new skills and neutralize the FUD accompanying changes to how they work. Third, there needs to be a “Cloud Center of Excellence” (CCoE) team that harvests best practices, sets up and maintains the landing zone, sets “principles” and blueprints for how teams will scale usage/migration, and drives the fourth and fifth best practices, which are good program governance and a well-defined set of goals or an OKR (objective and key results) to measure, inspect, and quickly triage progress. With RaMP, our goal is to bring our product portfolio and best practices together within a unified, holistic approach. 
One that helps customers build a business case and drive stakeholder alignment, assess and close gaps on their people, process, and technology readiness, disposition workloads into priority “migration waves” segmented by migration strategy, create a landing zone with adequate financial, security, and compliance controls, decide what tools to use, execute a training and enablement plan, establish a CCoE, set an OKR, pick the right partner(s), and relentlessly manage the migration. If you’re early in your journey, take advantage of our quick cloud capability assessment, which helps determine where you are in your cloud journey and how you can develop new competencies across your business, financial, and technology plans. Alternatively, if you know you’re heading to the cloud and need a detailed discovery report with recommendations and cost analysis based on your existing IT landscapes, then request one of our free assessments today.Key featuresHow RaMP worksRaMP's cycle of successMigrations have been occurring in enterprise IT for decades (for example, mainframes to x86 to virtualization), with the migration life cycle remaining largely the same. The key to success? A smart, integrated migration program that flows intelligently and smoothly, represented below by our RaMP flywheel:Opportunity evaluationUnderstand exactly what your IT landscape looks like so you can properly map out your cloud-based options. Foundation and planningUnderstand which workloads are ideal for the cloud, how you will migrate them, and then build the foundation you'll need in your cloud. Migrate and validateStarting with the workloads that are most ideal for the cloud, begin your migrations. Analyze and evaluate at each step so that you can fine-tune for upcoming migrations. Optimize and operateAssess how workloads are performing in the cloud, tune as needed, and cut over fully to cloud-based operations. A well run migration program creates a flywheel effect, where learnings and experience gained from migrating workloads compound to refine the business case, bolster the foundation, accelerate subsequent migrations, and make it easier to optimize workloads over time. In other words, any improvement made to the flywheel will accelerate not only that customer’s migration, but all customer migrations if we build the machinery to capture and share these improvements.Get started by taking our quick cloud capability assessment, which helps determine where you are in your cloud journey, or by requesting your free comprehensive assessment today. CustomersCustomers using RaMP to accelerate their migration successCase studyHow Sabre migrated their apps and data centers to Google Cloud with speed and ease32-min watchCase studyThe Home Depot Canada migrates SAP to Google Cloud, improves efficiency and meets customer demand4-min readCase studyInvoca migrates 300+ server racks to Google Cloud in less than 4 months with help from Zencore7-min readCase studyViant partners with Slalom to migrate a data center with 600 VMs and 200+ TB of data to Google Cloud8-min readVideoTop secrets to migration speed, scale, and success with Loblaw Technology and Global Payment Systems14-minute watchSee all customersPartnersSpecialized RaMP partnersA select set of trusted partners that are capable of delivering high quality, complex migrations for our customers. 
Expand allData Center ModernizationPartners that have achieved this specialization have demonstrated success with data center transformation of workloads from on-premises, private cloud, or other public clouds.Cloud Migration These partners help make the journey to the cloud easier and are handpicked to deliver a seamless transition to Google Cloud—from foundation building to migration execution. See all Cloud Migration partnersRelated servicesWhat's included in RaMP?Click any of the boxes below to learn more about the seamless components built into RaMP. Cloud Readiness AssessmentDetermine where you are in your cloud journey and how to develop new competencies to maximize the value of your cloud investments.Discovery and assessmentReceive a complete inventory of your current infrastructure, migration, and modernization recommendations, and a total cost of ownership (TCO) report.Specialized partnersWork with partners specialized in data center modernization and cloud migration services. Cloud Architecture CenterDiscover migration reference architectures, guidance, and best practices for migrating your workloads to Google Cloud. Migration CenterReduce complexity, time, and cost with Migration Center's centralized, integrated migration and modernization experience. Training and certificationGrow in-demand skills in emerging cloud technologies with Google Cloud training and certification.Google expertsOur technical expertise helps you unlock business value throughout your entire migration journey. What's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postVMware Engine: 20% lower price, up to 40% in migration incentivesRead the blogBlog post30 ways to leave your data center: key migration guides, in one placeRead the blogReportThe Total Economic Impact™ Of Migrating to Google CloudRead reportReportMigrating workloads to the public cloud: an essential guide & checklistRead reportTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Rapid_Migration_and_Modernization_Program.txt b/Rapid_Migration_and_Modernization_Program.txt new file mode 100644 index 0000000000000000000000000000000000000000..88d41aeba9bb726376ad7dd3a61ed4678faeb363 --- /dev/null +++ b/Rapid_Migration_and_Modernization_Program.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/cloud-migration-program +Date Scraped: 2025-02-23T11:59:47.927Z + +Content: +VMware Cloud Foundation on Google Cloud VMware Engine: 20% lower price and up to 40% in migration incentives. Read more.Rapid Migration and Modernization Program (RaMP)Google Cloud's holistic, end-to-end migration and modernization program to help customers leverage expertise and best practices, lower risk, control costs, and simplify their path to cloud success. Try Migration CenterFree migration assessmentMigrating workloads to the public cloud: an essential guide & checklistGet the reportBenefitsMigrating and modernizing should be fast and easy. With RaMP, it is. Clarity and road mapUnderstand your IT environment with data-driven discovery and analytics, and receive a road map with timeline, costs, resources, and risks.
Gain confidence with a transparent transition plan and cost proposal.Expertise in executionMigrate with confidence through best practices delivered by Google Cloud and our partners, including blueprints, governance, security, and training for your personnel and cloud operations. Reduce cost and riskReduce migration costs with flexible proposals that meet your budget, partner funding from Google Cloud to offset costs, and solution-specific discounts and credits. Lower risk with data-driven migration plans.Why Migrate and modernize with Google Cloud RaMP? For any large-scale migration, we’ve found the following five best practices greatly increase chances of success. First, there needs to be a business case articulating the “why” that executives across business, IT, finance, security, and operations are aligned on. Second, there needs to be a learning plan tailored to each role so individuals can obtain new skills and neutralize the FUD accompanying changes to how they work. Third, there needs to be a “Cloud Center of Excellence” (CCoE) team that harvests best practices, sets up and maintains the landing zone, sets “principles” and blueprints for how teams will scale usage/migration, and drives the fourth and fifth best practices, which are good program governance and a well-defined set of goals or an OKR (objective and key results) to measure, inspect, and quickly triage progress. With RaMP, our goal is to bring our product portfolio and best practices together within a unified, holistic approach. One that helps customers build a business case and drive stakeholder alignment, assess and close gaps on their people, process, and technology readiness, disposition workloads into priority “migration waves” segmented by migration strategy, create a landing zone with adequate financial, security, and compliance controls, decide what tools to use, execute a training and enablement plan, establish a CCoE, set an OKR, pick the right partner(s), and relentlessly manage the migration. If you’re early in your journey, take advantage of our quick cloud capability assessment, which helps determine where you are in your cloud journey and how you can develop new competencies across your business, financial, and technology plans. Alternatively, if you know you’re heading to the cloud and need a detailed discovery report with recommendations and cost analysis based on your existing IT landscapes, then request one of our free assessments today.Key featuresHow RaMP worksRaMP's cycle of successMigrations have been occurring in enterprise IT for decades (for example, mainframes to x86 to virtualization), with the migration life cycle remaining largely the same. The key to success? A smart, integrated migration program that flows intelligently and smoothly, represented below by our RaMP flywheel:Opportunity evaluationUnderstand exactly what your IT landscape looks like so you can properly map out your cloud-based options. Foundation and planningUnderstand which workloads are ideal for the cloud, how you will migrate them, and then build the foundation you'll need in your cloud. Migrate and validateStarting with the workloads that are most ideal for the cloud, begin your migrations. Analyze and evaluate at each step so that you can fine-tune for upcoming migrations. Optimize and operateAssess how workloads are performing in the cloud, tune as needed, and cut over fully to cloud-based operations. 
A well run migration program creates a flywheel effect, where learnings and experience gained from migrating workloads compound to refine the business case, bolster the foundation, accelerate subsequent migrations, and make it easier to optimize workloads over time. In other words, any improvement made to the flywheel will accelerate not only that customer’s migration, but all customer migrations if we build the machinery to capture and share these improvements.Get started by taking our quick cloud capability assessment, which helps determine where you are in your cloud journey, or by requesting your free comprehensive assessment today. CustomersCustomers using RaMP to accelerate their migration successCase studyHow Sabre migrated their apps and data centers to Google Cloud with speed and ease32-min watchCase studyThe Home Depot Canada migrates SAP to Google Cloud, improves efficiency and meets customer demand4-min readCase studyInvoca migrates 300+ server racks to Google Cloud in less than 4 months with help from Zencore7-min readCase studyViant partners with Slalom to migrate a data center with 600 VMs and 200+ TB of data to Google Cloud8-min readVideoTop secrets to migration speed, scale, and success with Loblaw Technology and Global Payment Systems14-minute watchSee all customersPartnersSpecialized RaMP partnersA select set of trusted partners that are capable of delivering high quality, complex migrations for our customers. Expand allData Center ModernizationPartners that have achieved this specialization have demonstrated success with data center transformation of workloads from on-premises, private cloud, or other public clouds.Cloud Migration These partners help make the journey to the cloud easier and are handpicked to deliver a seamless transition to Google Cloud—from foundation building to migration execution. See all Cloud Migration partnersRelated servicesWhat's included in RaMP?Click any of the boxes below to learn more about the seamless components built into RaMP. Cloud Readiness AssessmentDetermine where you are in your cloud journey and how to develop new competencies to maximize the value of your cloud investments.Discovery and assessmentReceive a complete inventory of your current infrastructure, migration, and modernization recommendations, and a total cost of ownership (TCO) report.Specialized partnersWork with partners specialized in data center modernization and cloud migration services. Cloud Architecture CenterDiscover migration reference architectures, guidance, and best practices for migrating your workloads to Google Cloud. Migration CenterReduce complexity, time, and cost with Migration Center's centralized, integrated migration and modernization experience. Training and certificationGrow in-demand skills in emerging cloud technologies with Google Cloud training and certification.Google expertsOur technical expertise helps you unlock business value throughout your entire migration journey. What's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postVMware Engine: 20% lower price, up to 40% in migration incentivesRead the blogBlog post30 ways to leave your data center: key migration guides, in one placeRead the blogReportThe Total Economic Impact™ Of Migrating to Google CloudRead reportReportMigrating workloads to the public cloud: an essential guide & checklistRead reportTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Recommender.txt b/Recommender.txt new file mode 100644 index 0000000000000000000000000000000000000000..ec1b47dadf7fab4db533e736d1ff18baedb6fb54 --- /dev/null +++ b/Recommender.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/recommender/docs/whatis-activeassist +Date Scraped: 2025-02-23T12:02:49.031Z + +Content: +Home Recommender Documentation Send feedback Stay organized with collections Save and categorize content based on your preferences. What is Active Assist Active Assist refers to the portfolio of tools used in Google Cloud to generate recommendations and insights to help you optimize your Google Cloud projects. This includes recommenders that generate recommendations and insights, as well as analysis tools. Recommenders and Recommendations Recommenders generate recommendations that fall into six value categories that can help you optimize your cloud in a variety of ways. See Recommendations and Recommenders for a detailed explanation of these concepts. Grant permissions to view and update recommendations and insights Each recommender and insight type has specific roles and permissions to control access to its recommendations and insights. To enable users to review and assess these recommendations and insights, the recommendations and insights include some metadata about the affected resources. Granting these permissions provides users with a partial view of the resource's metadata. This partial view of data is particularly important to consider if you are using custom roles to grant permissions. For example, the Identity and Access Management recommender provides recommendations about permissions. Members that have the recommender.iamPolicyRecommendations.get and recommender.iamPolicyRecommendations.list permissions can also see information about your IAM policy bindings. Ensure that you have the necessary roles and permissions to view recommendations. Find recommendations and insights Each recommender lets you view and manage its recommendations and insights using one or more of the following clients: Recommendation Hub In context using the service's user interface (UI) in the Google Cloud console REST API or Google Cloud CLI For details about recommenders and supported clients, see Recommenders. View recommendations in Recommendation Hub Preview — Recommendation Hub This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions. View your recommendations Certain recommendations, such as IAM recommendations, require different levels of permissions to view. For more information, see the guide for the specific recommendation that you are looking for. Manage recommendations and insights In Recommendation Hub, you can choose to apply or dismiss the recommendations. Some recommendations can be applied automatically, while other recommendations require additional steps to complete. Any additional steps will appear in the details panel. Apply recommendations Before applying recommendations, ensure that they are reviewed by someone who can properly assess the impacts of changes.
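The page above lists the REST API and the Google Cloud CLI as clients for finding and managing recommendations. As an illustration only, the following minimal Python sketch uses the google-cloud-recommender client library to list recommendations so that a reviewer can assess them before anything is applied; the project ID is a placeholder, and the IAM policy recommender and "global" location are assumptions that you would replace with the recommender and location you actually use.

# Sketch only: list recommendations for human review with the
# google-cloud-recommender client library. PROJECT_ID is a placeholder;
# the recommender ID and "global" location are assumptions.
from google.cloud import recommender_v1

PROJECT_ID = "my-project"                          # placeholder
RECOMMENDER_ID = "google.iam.policy.Recommender"   # assumed example recommender

client = recommender_v1.RecommenderClient()
parent = f"projects/{PROJECT_ID}/locations/global/recommenders/{RECOMMENDER_ID}"

for recommendation in client.list_recommendations(parent=parent):
    # Print enough detail for a reviewer to assess the impact before applying.
    print(recommendation.name)
    print("  description:", recommendation.description)
    print("  state:", recommendation.state_info.state)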
Recommender provides information on direct impacts in areas such as cost, performance, or security. Recommendation reviewers should have a holistic understanding of your infrastructure and processes so that other business-specific impacts are considered. Warning: Before applying recommendations in the Google Cloud console or using the API, ensure that impacts are assessed by a reviewer. The reviewer should have the knowledge to assess impacts identified in recommendations, as well as impacts specific to your infrastructure and business. Applying recommendations without proper assessment could result in unexpected changes, such as issues with system performance, poor reliability, or loss of required permissions. If you choose to apply recommendations without human review, ensure that you have set up a rollback process before making any changes. Get started with recommendations Recommendations are an easy way to optimize your cloud to help you maintain a secure and cost-effective workspace. Use the Recommendation Hub quickstart guide to get started with the Recommendation Hub. Alternatively, you can follow the In Context quickstart guide to find recommendations directly in your service pages. Recommendations can also be reviewed and applied or dismissed using the gcloud CLI or the REST API. Intelligence Centers and Tools Active Assist offers intelligent tools that help you proactively monitor and manage your cloud. We offer tools such as the Network Intelligence Center and Policy Simulator to simplify and automate your management experience on Google Cloud. Read the following documentation pages for more information on how our products can help you optimize your cloud experience. Network Intelligence Center Policy Simulator Policy Analyzer Policy Troubleshooter Firewall Insights Send feedback \ No newline at end of file diff --git a/Reconcile_orphaned_managed_user_accounts.txt b/Reconcile_orphaned_managed_user_accounts.txt new file mode 100644 index 0000000000000000000000000000000000000000..6d4211386d924fd9a02a2058aec36a51eb9b83dc --- /dev/null +++ b/Reconcile_orphaned_managed_user_accounts.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/reconciling-orphaned-managed-user-accounts +Date Scraped: 2025-02-23T11:56:01.451Z + +Content: +Home Docs Cloud Architecture Center Send feedback Reconcile orphaned managed user accounts Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document describes how to identify and reconcile orphaned user accounts. If you use an external identity provider (IdP), then the authoritative source for identities is external to Cloud Identity or Google Workspace. Each identity in Cloud Identity or Google Workspace should therefore have a counterpart in the external authoritative source. It's possible that some of the identities in your Cloud Identity or Google Workspace account lack a counterpart in your external authoritative source—if so, these user accounts are considered orphaned. Orphaned accounts can occur under the following circumstances: A Cloud Identity or Google Workspace administrator has manually created a user account that has a non-matching identity. You have migrated a consumer account to Cloud Identity or Google Workspace, but the account uses an identity that does not match any existing identity in the external source.
Before you begin To reconcile orphaned managed user accounts, you must meet the following prerequisites: You have identified a suitable onboarding plan and have completed all prerequisites for consolidating your existing user accounts. You have created a Cloud Identity or Google Workspace account. Process To reconcile orphaned user accounts, you must first identify which user accounts are orphaned. For each user account, you then have to decide how to best reconcile that account. Identify orphaned user accounts To find orphaned user accounts, you must compare the identities of user accounts in Cloud Identity or Google Workspace against the identities that are recognized by your authoritative source. To perform a comparison, you can use the export functionality of a Google Workspace or Cloud Identity account to obtain a list of your current user accounts: In the Admin Console, go to the Users page. Select Download users. Select All user info columns and currently selected columns. Click Download. After a few minutes, depending on the number of user accounts you have, you see a notification that the user info CSV file is ready to be downloaded. Click Download CSV and save the file to your local disk. Note: The CSV export might contain personally identifiable information (PII). Make sure that you select a storage location that is protected against unauthorized access. If you use Active Directory or Azure Active Directory (Azure AD) as your authoritative source, follow these steps to compare identities: Active Directory Sign on to a workstation that has access to Active Directory. Open a PowerShell console. Set a variable to the location of your downloaded file: $GoogleUsersCsv="GOOGLE_PATH" Replace GOOGLE_PATH with the path to the CSV file that you downloaded before. Determine the list of user accounts that lack a counterpart in Active Directory: $GoogleUsers = (Import-Csv -Path $GoogleUsersCsv -Header FirstName,LastName,Email | Select-Object -Skip 1) $LdapFilter = "(|{0})" -f (($GoogleUsers | Select-Object @{Name="Clause";Expression={"(userPrincipalName=$($_.Email))"}} | Select-Object -ExpandProperty Clause) -join "") $GoogleUsersWithMatch = Get-ADUser -LdapFilter $LdapFilter ` | Select-Object -ExpandProperty UserPrincipalName $GoogleUsers | Where-Object {$_.Email -NotIn $GoogleUsersWithMatch} The command compares the primary email address of user accounts in Cloud Identity or Google Workspace against the userPrincipalName attribute in Active Directory. If you are using a different mapping between Active Directory users and Cloud Identity or Google Workspace user accounts, you might need to adjust the command. Note: If the CSV file contains a large number of users, the Get-ADUser command might take several minutes to execute and might cause significant load on the associated domain controller. The output is similar to this: FirstName LastName Email --------- -------- ----- Alice Admin admin@example.org Olly Orphaned olly@example.org Matty Mismatch matty@wrongsubdomain.example.org Each item listed in the output represents a user account in Cloud Identity or Google Workspace that lacks a counterpart in Active Directory. An empty result indicates that you don't have any orphaned user accounts in Google Workspace or Cloud Identity. Delete the CSV file from your local disk. Azure AD In the Azure Portal, go to Azure Active Directory Users. Click Download users. Enter a filename and click Start. Wait until a Click here to download link appears. 
Depending on the number of user accounts you have, it might take a few minutes for the operation to complete. Click Click here to download and save the file to your local disk. Note: The CSV export might contain personally identifiable information (PII). Make sure that you select a storage location that is protected against unauthorized access. On a workstation that has PowerShell installed, open a PowerShell console. Set two environment variables: $GoogleUsersCsv="GOOGLE_PATH" $AzureUsersCsv="AZURE_PATH" Replace GOOGLE_PATH and AZURE_PATH with the file paths to the CSV files that you previously downloaded. Determine the list of user accounts that lack a counterpart in Active Directory: $GoogleUsers = (Import-Csv -Path $GoogleUsersCsv -Header FirstName,LastName,Email | Select-Object -Skip 1) $AzureUsers = (Import-Csv -Path $AzureUsersCsv) $GoogleUsers | Where-Object {$_.Email -NotIn ($AzureUsers | Select-Object -ExpandProperty userPrincipalName)} The command compares the primary email address of user accounts in Cloud Identity or Google Workspace against the userPrincipalName attribute in Azure AD. If you are using a different mapping between Azure AD users and the Cloud Identity or Google Workspace user accounts, you might need to adjust the command. The output is similar to the following: FirstName LastName Email --------- -------- ----- Alice Admin admin@example.org Olly Orphaned olly@example.org Matty Mismatch matty@wrongsubdomain.example.org Each item listed in the output represents a user account in Cloud Identity or Google Workspace that lacks a counterpart in Active Directory. An empty result indicates that you don't have any orphaned user account in Google Workspace or Cloud Identity. Delete both CSV files from your local disk. Reconcile orphaned user accounts To reconcile orphaned user accounts, you have to analyze each user account to determine why its identity lacks a counterpart in your authoritative source system. If you think a user account is obsolete, check whether any configuration settings or data associated with the account are worth preserving: To keep existing Google Drive data, transfer the data to a different user. If you don't want to keep any existing configuration settings or data, delete the user account. To temporarily retain the user account, suspend the user account and change its primary email address to an address that is unlikely to ever cause a collision. For example, rename olly.obsolete@example.com to obsolete-2019-11-10-olly.obsolete@example.com. For each user account that is still valid, try to fix the primary email address so that it matches an identity in your authoritative source. This might require the following: Changing the domain of the primary email address. Swapping the primary email address and an alias address. Fixing casing or spelling of the primary email address (for example, adding or removing dots). Note: Changing the primary email address impacts the owner of the associated user account. Make sure that you notify the owner of the change so that they know which email address to use for subsequent sign-ins. Best practices We recommend the following best practices when you are reconciling managed user accounts: If you migrate consumer accounts to Cloud Identity or Google Workspace, repeat the reconciliation process at least once for every batch of user accounts that you migrate. 
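The PowerShell commands above run the comparison against Active Directory or Azure AD directly. As a rough, hypothetical alternative, the following Python sketch compares the two CSV exports described above (the Admin Console user export, whose third column is the primary email address, and an IdP export that contains a userPrincipalName column) and prints the accounts that lack a counterpart; the file paths are placeholders.

# Rough sketch (hypothetical alternative to the PowerShell above): compare the
# Cloud Identity or Google Workspace user export against an IdP export and
# print accounts that lack a counterpart. File paths are placeholders.
import csv

GOOGLE_USERS_CSV = "google-users.csv"   # export downloaded from the Admin Console
IDP_USERS_CSV = "idp-users.csv"         # export that contains a userPrincipalName column

def load_google_emails(path):
    # The Admin Console export has a header row; the primary email address
    # is the third column, as in the PowerShell commands above.
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))[1:]
        return {row[2].strip().lower() for row in rows if len(row) > 2}

def load_idp_upns(path):
    with open(path, newline="", encoding="utf-8") as f:
        return {row["userPrincipalName"].strip().lower()
                for row in csv.DictReader(f) if row.get("userPrincipalName")}

# Accounts whose primary email address has no matching userPrincipalName are
# candidates for reconciliation. Delete the CSV files when you're done, because
# they might contain personally identifiable information.
for email in sorted(load_google_emails(GOOGLE_USERS_CSV) - load_idp_upns(IDP_USERS_CSV)):
    print(email)

As with the PowerShell approach, adjust the comparison if you map identities by an attribute other than userPrincipalName.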
Send feedback \ No newline at end of file diff --git a/Red_Hat_on_Google_Cloud.txt b/Red_Hat_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..7193b8743cfd13a8cc2a18bc976d25de581adb98 --- /dev/null +++ b/Red_Hat_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/redhat +Date Scraped: 2025-02-23T11:59:52.195Z + +Content: +New: Managed OpenShift service can now be procured from Google Cloud via OpenShift Dedicated in Marketplace.Red HatRed Hat Solutions on Google CloudRed Hat solutions on Google Cloud work together to speed cloud migration and enhance cloud operations so you can focus on business innovation.Contact usProduct highlightsRed Hat Enterprise LinuxRed Hat OpenShiftRed Hat Ansible Automation PlatformOverview of Red Hat and Google CloudOverviewOptimized for Google CloudRed Hat Enterprise Linux images are optimized to include drivers and configurations to benefit from Google Cloud infrastructure innovations (such as gVNIC for 200 Gbps bandwidth and jumbo frames or Hyperdisk Extreme with up to 500,000 IOPS). They use Red Hat Update Infrastructure deployed on Google Cloud's planet-scale, high performant, and highly reliable network, are thoroughly tested for Google Cloud, and include Compute Engine guest environment.Integrated supportRed Hat Enterprise Linux comes with integrated support. With a valid Google Cloud support pack, cases involving Red Hat Enterprise Linux can be filed directly to Google Cloud Customer Care. You need not switch back and forth from Google Cloud to Red Hat to resolve issues and can rely on a single point of contact for support.Migrate on-prem workloads to the cloudRed Hat solutions enable customers to easily migrate existing on-prem workloads to Google Cloud by leveraging the familiarity of Red Hat Enterprise Linux, Red Hat OpenShift, and Red Hat Ansible Automation Platform—which are designed to support seamless operations across on-prem and cloud environments.BYOSLeverage your existing subscriptions with Red Hat and bring your own subscriptions (BYOS) to save on license costs for your Red Hat deployments such as Red Hat Enterprise Linux, Red Hat OpenShift, or Red Hat Ansible Automation Platform in Google Cloud.Procure from Google CloudYou can procure Red Hat subscriptions from Google Cloud to optimize costs and draw down your existing Google Cloud spend commitments.View moreHow It WorksDeploy your enterprise workloads with confidence in Google Cloud with Red Hat solutions.Common UsesRed Hat Enterprise LinuxRed Hat Enterprise Linux on Google Cloud provides an enterprise-grade platform for migration of on-prem workloads or for building new applications- creating a consistent foundation for hybrid environments.Benefit from the familiarity of Red Hat Enterprise Linux optimized for Compute Engine VMs with integrated support and the security, scalability, and simplicity of Google Cloud.Bring your own subscription (BYOS) or buy subscriptions directly from Google Cloud with cost optimization options such as committed usage discounts.Red Hat Enterprise Linux on Google CloudBYOS or PAYG billing optionsCommitted usage discountsAbout Red Hat Enterprise LinuxAdditional resourcesRed Hat Enterprise Linux on Google Cloud provides an enterprise-grade platform for migration of on-prem workloads or for building new applications- creating a consistent foundation for hybrid environments.Benefit from the familiarity of Red Hat Enterprise Linux optimized for Compute Engine VMs with integrated support and the 
security, scalability, and simplicity of Google Cloud.Bring your own subscription (BYOS) or buy subscriptions directly from Google Cloud with cost optimization options such as committed usage discounts.Red Hat Enterprise Linux on Google CloudBYOS or PAYG billing optionsCommitted usage discountsAbout Red Hat Enterprise LinuxRed Hat Enterprise Linux for SAPSAP certifies Google Cloud and Red Hat Enterprise Linux (including High Availability and Update Services) for production deployments of SAP applications and databases, including SAP S/4HANA—delivering faster time to value, greater operational efficiencies, extended version support, and high levels of security for cloud-based deployments.Bring your own subscription (BYOS) or buy subscriptions directly from Google Cloud with cost optimization options such as committed usage discounts.Red Hat Enterprise Linux for SAP on Google CloudBYOS or PAYG billing optionsSAP on Google CloudAbout Red Hat Enterprise Linux for SAPAdditional resourcesSAP certifies Google Cloud and Red Hat Enterprise Linux (including High Availability and Update Services) for production deployments of SAP applications and databases, including SAP S/4HANA—delivering faster time to value, greater operational efficiencies, extended version support, and high levels of security for cloud-based deployments.Bring your own subscription (BYOS) or buy subscriptions directly from Google Cloud with cost optimization options such as committed usage discounts.Red Hat Enterprise Linux for SAP on Google CloudBYOS or PAYG billing optionsSAP on Google CloudAbout Red Hat Enterprise Linux for SAPRed Hat Enterprise Linux Extended Lifecycle Support (ELS) Add-OnGoogle Cloud and Red Hat provide a differentiated capability to manage legacy workloads. When your Red Hat Enterprise Linux version reaches end of maintenance, you have the option to access extended support, with critical impact security and selected urgent priority bug fixes for two additional years. You can onboard into the offering minimizing disruptions, as appending the ELS Add-On does not require to re-provision running VMs.Red Hat Enterprise Linux ELS on Google CloudAppend Red Hat Enterprise Linux ELS licensesAbout Red Hat Enterprise Linux ELSAdditional resourcesGoogle Cloud and Red Hat provide a differentiated capability to manage legacy workloads. When your Red Hat Enterprise Linux version reaches end of maintenance, you have the option to access extended support, with critical impact security and selected urgent priority bug fixes for two additional years. 
You can onboard into the offering minimizing disruptions, as appending the ELS Add-On does not require to re-provision running VMs.Red Hat Enterprise Linux ELS on Google CloudAppend Red Hat Enterprise Linux ELS licensesAbout Red Hat Enterprise Linux ELSSelf-managed Red Hat OpenShiftRed Hat OpenShift on Google Cloud is an application platform that provides users with a consistent hybrid cloud foundation for building and scaling containerized applications—whether migrating existing workloads to the cloud or building new experiences for customers.Benefit from the familiarity of Red Hat OpenShift with the security, performance, scalability, and simplicity of Google Cloud.Use existing contracts bringing your own subscription (BYOS), or procure Red Hat OpenShift from Google Cloud via Marketplace.Red Hat OpenShift Container Platform in Google Cloud MarketplaceAbout Red Hat OpenShift Container PlatformAdditional resourcesRed Hat OpenShift on Google Cloud is an application platform that provides users with a consistent hybrid cloud foundation for building and scaling containerized applications—whether migrating existing workloads to the cloud or building new experiences for customers.Benefit from the familiarity of Red Hat OpenShift with the security, performance, scalability, and simplicity of Google Cloud.Use existing contracts bringing your own subscription (BYOS), or procure Red Hat OpenShift from Google Cloud via Marketplace.Red Hat OpenShift Container Platform in Google Cloud MarketplaceAbout Red Hat OpenShift Container PlatformRed Hat OpenShift DedicatedRed Hat OpenShift Dedicated on Google Cloud is a turnkey application platform managed by Red Hat and backed by global 24x7 site reliability engineers (SREs). Red Hat OpenShift Dedicated allows organizations to build, deploy and scale applications quickly and focus on their business, not managing infrastructure.Use existing contracts to bring your own subscription (BYOS), or procure Red Hat OpenShift Dedicated from Google Cloud via Marketplace.Red Hat OpenShift Dedicated in Google Cloud MarketplaceAbout Red Hat OpenShift DedicatedAdditional resourcesRed Hat OpenShift Dedicated on Google Cloud is a turnkey application platform managed by Red Hat and backed by global 24x7 site reliability engineers (SREs). Red Hat OpenShift Dedicated allows organizations to build, deploy and scale applications quickly and focus on their business, not managing infrastructure.Use existing contracts to bring your own subscription (BYOS), or procure Red Hat OpenShift Dedicated from Google Cloud via Marketplace.Red Hat OpenShift Dedicated in Google Cloud MarketplaceAbout Red Hat OpenShift DedicatedRed Hat Ansible Automation PlatformRed Hat Ansible Automation Platform on Google Cloud is an end-to-end automation platform to configure systems, deploy software, and orchestrate advanced workflows. It includes resources to create, manage, and scale across the entire enterprise.Use existing contracts bringing your own subscription (BYOS), or procure Red Hat Ansible Automation Platform from Google Cloud via Marketplace.Red Hat Ansible Automation Platform in Google Cloud MarketplaceAbout Red Hat Ansible Automation PlatformAdditional resourcesRed Hat Ansible Automation Platform on Google Cloud is an end-to-end automation platform to configure systems, deploy software, and orchestrate advanced workflows. 
It includes resources to create, manage, and scale across the entire enterprise.Use existing contracts bringing your own subscription (BYOS), or procure Red Hat Ansible Automation Platform from Google Cloud via Marketplace.Red Hat Ansible Automation Platform in Google Cloud MarketplaceAbout Red Hat Ansible Automation PlatformRed Hat solutions on Google CloudConsult an expertContact UsLearn more about marketplace offeringsExplore marketplaceRed Hat Enterprise LinuxRed Hat Enterprise Linux in the Google Cloud consoleRed Hat OpenShiftRed Hat OpenShift in marketplaceRed Hat Ansible Automation PlatformRed Hat Ansible in marketplace \ No newline at end of file diff --git a/Refactor_a_monolith_into_microservices.txt b/Refactor_a_monolith_into_microservices.txt new file mode 100644 index 0000000000000000000000000000000000000000..743c1a735431bfbd2ac50a9a5d1632cb685ec6af --- /dev/null +++ b/Refactor_a_monolith_into_microservices.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/microservices-architecture-refactoring-monoliths +Date Scraped: 2025-02-23T11:46:50.993Z + +Content: +Home Docs Cloud Architecture Center Send feedback Refactor a monolith into microservices Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-26 UTC This reference guide is the second in a four-part series about designing, building, and deploying microservices. This series describes the various elements of a microservices architecture. The series includes information about the benefits and drawbacks of the microservices architecture pattern, and how to apply it. Introduction to microservices Refactor a monolith into microservices (this document) Interservice communication in a microservices setup Distributed tracing in a microservices application This series is intended for application developers and architects who design and implement the migration to refactor a monolith application to a microservices application. The process of transforming a monolithic application into microservices is a form of application modernization. To accomplish application modernization, we recommend that you don't refactor all of your code at the same time. Instead, we recommend that you incrementally refactor your monolithic application. When you incrementally refactor an application, you gradually build a new application that consists of microservices, and run the application along with your monolithic application. This approach is also known as the Strangler Fig pattern. Over time, the amount of functionality that is implemented by the monolithic application shrinks until either it disappears entirely or it becomes another microservice. To decouple capabilities from a monolith, you have to carefully extract the capability's data, logic, and user-facing components, and redirect them to the new service. It's important that you have a good understanding of the problem space before you move into the solution space. When you understand the problem space, you understand the natural boundaries in the domain that provide the right level of isolation. We recommend that you create larger services instead of smaller services until you thoroughly understand the domain. Defining service boundaries is an iterative process. Because this process is a non-trivial amount of work, you need to continuously evaluate the cost of decoupling against the benefits that you get.
Following are factors to help you evaluate how you approach decoupling a monolith: Avoid refactoring everything all at once. To prioritize service decoupling, evaluate cost versus benefits. Services in a microservice architecture are organized around business concerns, and not technical concerns. When you incrementally migrate services, configure communication between services and monolith to go through well-defined API contracts. Microservices require much more automation: think in advance about continuous integration (CI), continuous deployment (CD), central logging, and monitoring. The following sections discuss various strategies to decouple services and incrementally migrate your monolithic application. Decouple by domain-driven design Microservices should be designed around business capabilities, not horizontal layers such as data access or messaging. Microservices should also have loose coupling and high functional cohesion. Microservices are loosely coupled if you can change one service without requiring other services to be updated at the same time. A microservice is cohesive if it has a single, well-defined purpose, such as managing user accounts or processing payment. Domain-driven design (DDD) requires a good understanding of the domain for which the application is written. The necessary domain knowledge to create the application resides within the people who understand it—the domain experts. You can apply the DDD approach retroactively to an existing application as follows: Identify a ubiquitous language—a common vocabulary that is shared between all stakeholders. As a developer, it's important to use terms in your code that a non-technical person can understand. What your code is trying to achieve should be a reflection of your company processes. Identify the relevant modules in the monolithic application, and then apply the common vocabulary to those modules. Define bounded contexts where you apply explicit boundaries to the identified modules with clearly defined responsibilities. The bounded contexts that you identify are candidates to be refactored into smaller microservices. The following diagram shows how you can apply bounded contexts to an existing ecommerce application: Figure 1. Application capabilities are separated into bounded contexts that migrate to services. In figure 1, the ecommerce application's capabilities are separated into bounded contexts and migrated to services as follows: Order management and fulfillment capabilities are bound into the following categories: The order management capability migrates to the order service. The logistics delivery management capability migrates to the delivery service. The inventory capability migrates to the inventory service. Accounting capabilities are bound into a single category: The consumer, sellers, and third-party capabilities are bound together and migrate to the account service. Prioritize services for migration An ideal starting point to decouple services is to identify the loosely coupled modules in your monolithic application. You can choose a loosely coupled module as one of the first candidates to convert to a microservice. To complete a dependency analysis of each module, look at the following: The type of the dependency: dependencies from data or other modules. The scale of the dependency: how a change in the identified module might impact other modules. Migrating a module with heavy data dependencies is usually a nontrivial task. 
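To get a first, rough approximation of the type and scale of the dependencies described above, you can count cross-module references in the monolith's source code. The following Python sketch is a hypothetical helper, not a tool referenced by this guide, and it only surfaces code-level coupling; data dependencies, which the next paragraphs discuss, still need their own analysis.

# Rough sketch (hypothetical helper): count cross-module imports in a Python
# monolith to approximate how tightly coupled its top-level packages are.
import ast
import pathlib
from collections import Counter

MONOLITH_ROOT = pathlib.Path("monolith/")   # placeholder path to the codebase
coupling = Counter()

for source_file in MONOLITH_ROOT.rglob("*.py"):
    module = source_file.relative_to(MONOLITH_ROOT).parts[0]   # top-level package
    tree = ast.parse(source_file.read_text(encoding="utf-8"))
    for node in ast.walk(tree):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for target in targets:
            target_module = target.split(".")[0]
            if target_module != module and (MONOLITH_ROOT / target_module).is_dir():
                coupling[(module, target_module)] += 1

# Packages with few inbound and outbound edges are loosely coupled and are
# therefore good first candidates to extract as services.
for (src, dst), count in coupling.most_common():
    print(f"{src} -> {dst}: {count} import(s)")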
If you migrate features first and migrate the related data later, you might be temporarily reading from and writing data to multiple databases. Therefore, you must account for data integrity and synchronization challenges. We recommend that you extract modules that have different resource requirements compared to the rest of the monolith. For example, if a module has an in-memory database, you can convert it into a service, which can then be deployed on hosts with higher memory. When you turn modules with particular resource requirements into services, you can make your application much easier to scale. From an operations standpoint, refactoring a module into its own service also means adjusting your existing team structures. The best path to clear accountability is to empower small teams that own an entire service. Additional factors that can affect how you prioritize services for migration include business criticality, comprehensive test coverage, security posture of the application, and organizational buy-in. Based on your evaluations, you can rank services as described in the first document in this series, by the benefit you receive from refactoring. Extract a service from a monolith After you identify the ideal service candidate, you must identify a way for both microservice and monolithic modules to coexist. One way to manage this coexistence is to introduce an inter-process communication (IPC) adapter, which can help the modules work together. Over time, the microservice takes on the load and eliminates the monolithic component. This incremental process reduces the risk of moving from the monolithic application to the new microservice because you can detect bugs or performance issues in a gradual fashion. The following diagram shows how to implement the IPC approach: Figure 2. An IPC adapter coordinates communication between the monolithic application and a microservices module. In figure 2, module Z is the service candidate that you want to extract from the monolithic application. Modules X and Y are dependent upon module Z. Microservice modules X and Y use an IPC adapter in the monolithic application to communicate with module Z through a REST API. The next document in this series, Interservice communication in a microservices setup, describes the Strangler Fig pattern and how to deconstruct a service from the monolith. Manage a monolithic database Typically, monolithic applications have their own monolithic databases. One of the principles of a microservices architecture is to have one database for each microservice. Therefore, when you modernize your monolithic application into microservices, you must split the monolithic database based on the service boundaries that you identify. To determine where to split a monolithic database, first analyze the database mappings. As part of the service extraction analysis , you gathered some insights on the microservices that you need to create. You can use the same approach to analyze database usage and to map tables or other database objects to the new microservices. Tools like SchemaCrawler, SchemaSpy, and ERBuilder can help you to perform such an analysis. Mapping tables and other objects helps you to understand the coupling between database objects that spans across your potential microservices boundaries. However, splitting a monolithic database is complex because there might not be clear separation between database objects. 
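Before choosing among the patterns in the next section, it can help to see exactly which foreign keys would cross a candidate service boundary. The following Python sketch is a hypothetical illustration, not one of the tools named above; the table-to-service mapping and the foreign-key list are example data that you would populate from your own schema analysis, for example from SchemaCrawler output or an information_schema query.

# Sketch (hypothetical helper): given a table-to-service mapping and the
# monolith's foreign keys, report the keys that would cross a candidate
# microservice boundary.
TABLE_TO_SERVICE = {          # assumed mapping from the bounded-context analysis
    "orders": "order-service",
    "order_items": "order-service",
    "products": "product-service",
    "inventory": "inventory-service",
}

FOREIGN_KEYS = [              # (referencing table, referenced table), example data
    ("order_items", "orders"),
    ("order_items", "products"),
    ("inventory", "products"),
]

for referencing, referenced in FOREIGN_KEYS:
    src = TABLE_TO_SERVICE.get(referencing, "unassigned")
    dst = TABLE_TO_SERVICE.get(referenced, "unassigned")
    if src != dst:
        # These joins are no longer possible after the split; plan for an API
        # call, data replication, or another pattern from the next section.
        print(f"{referencing} -> {referenced}: crosses {src} / {dst} boundary")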
You also need to consider other issues, such as data synchronization, transactional integrity, joins, and latency. The next section describes patterns that can help you respond to these issues when you split your monolithic database. Reference tables In monolithic applications, it's common for modules to access required data from a different module through a SQL join to the other module's table. The following diagram uses the previous ecommerce application example to show this SQL join access process: Figure 3. A module joins data to a different module's table. In figure 3, to get product information, an order module uses a product_id foreign key to join an order to the products table. However, if you deconstruct modules as individual services, we recommend that you don't have the order service directly call the product service's database to run a join operation. The following sections describe options that you can consider to segregate the database objects. Share data through an API When you separate the core functionalities or modules into microservices, you typically use APIs to share and expose data. The referenced service exposes data as an API that the calling service needs, as shown in the following diagram: Figure 4. A service uses an API call to get data from another service. In figure 4, an order module uses an API call to get data from a product module. This implementation has obvious performance issues due to additional network and database calls. However, sharing data through an API works well when data size is limited. Also, if the called service is returning data that has a well-known rate of change, you can implement a local TTL cache on the caller to reduce network requests to the called service. Replicate data Another way to share data between two separate microservices is to replicate data in the dependent service database. The data replication is read-only and can be rebuilt any time. This pattern enables the service to be more cohesive. The following diagram shows how data replication works between two microservices: Figure 5. Data from a service is replicated in a dependent service database. In figure 5, the product service database is replicated to the order service database. This implementation lets the order service get product data without repeated calls to the product service. To build data replication, you can use techniques like materialized views, change data capture (CDC), and event notifications. The replicated data is eventually consistent, but there can be lag in replicating data, so there is a risk of serving stale data. Static data as configuration Static data, such as country codes and supported currencies, are slow to change. You can inject such static data as a configuration in a microservice. Modern microservices and cloud frameworks provide features to manage such configuration data using configuration servers, key-value stores, and vaults. You can include these features declaratively. Shared mutable data Monolithic applications have a common pattern known as shared mutable state. In a shared mutable state configuration, multiple modules use a single table, as shown in the following diagram: Figure 6. Multiple modules use a single table. In figure 6, the order, payment, and shipping functionalities of the ecommerce application use the same ShoppingStatus table to maintain the customer's order status throughout the shopping journey. 
To migrate a shared mutable state monolith, you can develop a separate ShoppingStatus microservice to manage the ShoppingStatus database table. This microservice exposes APIs to manage a customer's shopping status, as shown in the following diagram: Figure 7. A microservice exposes APIs to multiple other services. In figure 7, the payment, order, and shipping microservices use the ShoppingStatus microservice APIs. If the database table is closely related to one of the services, we recommend that you move the data to that service. You can then expose the data through an API for other services to consume. This implementation helps you ensure that you don't have too many fine-grained services that call each other frequently. If you split services incorrectly, you need to revisit defining the service boundaries. Distributed transactions After you isolate the service from the monolith, a local transaction in the original monolithic system might get distributed between multiple services. A transaction that spans multiple services is considered a distributed transaction. In the monolithic application, the database system ensures that the transactions are atomic. To handle transactions between various services in a microservice-based system, you need to create a global transaction coordinator. The transaction coordinator handles rollback, compensating actions, and other transactions that are described in the next document in this series, Interservice communication in a microservices setup. Data consistency Distributed transactions introduce the challenge of maintaining data consistency across services. All updates must be done atomically. In a monolithic application, the properties of transactions guarantee that a query returns a consistent view of the database based on its isolation level. In contrast, consider a multistep transaction in a microservices-based architecture. If any one service transaction fails, data must be reconciled by rolling back steps that have succeeded across the other services. Otherwise, the global view of the application's data is inconsistent between services. It can be challenging to determine when a step that implements eventual consistency has failed. For example, a step might not fail immediately, but instead could block or time out. Therefore, you might need to implement some kind of time-out mechanism. If the duplicated data is stale when the called service accesses it, then caching or replicating data between services to reduce network latency can also result in inconsistent data. The next document in the series, Interservice communication in a microservices setup, provides an example of a pattern to handle distributed transactions across microservices. Design interservice communication In a monolithic application, components (or application modules) invoke each other directly through function calls. In contrast, a microservices-based application consists of multiple services that interact with each other over the network. When you design interservices communication, first think about how services are expected to interact with each other. Service interactions can be one of the following: One-to-one interaction: each client request is processed by exactly one service. One-to-many interactions: each request is processed by multiple services. Also consider whether the interaction is synchronous or asynchronous: Synchronous: the client expects a timely response from the service and it might block while it waits. 
Asynchronous: the client doesn't block while waiting for a response. The response, if any, isn't necessarily sent immediately. The following table shows combinations of interaction styles: One-to-one One-to-many Synchronous Request and response: send a request to a service and wait for a response. — Asynchronous Notification: send a request to a service, but no reply is expected or sent. Publish and subscribe: the client publishes a notification message, and zero or more interested services consume the message. Request and asynchronous response: send a request to a service, which replies asynchronously. The client doesn't block. Publish and asynchronous responses: the client publishes a request, and waits for responses from interested services. Each service typically uses a combination of these interaction styles. Implement interservices communication To implement interservice communication, you can choose from different IPC technologies. For example, services can use synchronous request-response-based communication mechanisms such as HTTP-based REST, gRPC, or Thrift. Alternatively, services can use asynchronous, message-based communication mechanisms such as AMQP or STOMP. You can also choose from various different message formats. For example, services can use human-readable, text-based formats such as JSON or XML. Alternatively, services can use a binary format such as Avro or Protocol Buffers. Configuring services to directly call other services leads to high coupling between services. Instead, we recommend using messaging or event-based communication: Messaging: When you implement messaging, you remove the need for services to call each other directly. Instead, all services know of a message broker, and they push messages to that broker. The message broker saves these messages in a message queue. Other services can subscribe to the messages that they care about. Event-based communication: When you implement event-driven processing, communication between services takes place through events that individual services produce. Individual services write their events to a message broker. Services can listen to the events of interest. This pattern keeps services loosely coupled because the events don't include payloads. In a microservices application, we recommend using asynchronous interservice communication instead of synchronous communication. Request-response is a well-understood architectural pattern, so designing a synchronous API might feel more natural than designing an asynchronous system. Asynchronous communication between services can be implemented using messaging or event-driven communication. Using asynchronous communication provides the following advantages: Loose coupling: An asynchronous model splits the request–response interaction into two separate messages, one for the request and another one for the response. The consumer of a service initiates the request message and waits for the response, and the service provider waits for request messages to which it replies with response messages. This setup means that the caller doesn't have to wait for the response message. Failure isolation: The sender can still continue to send messages even if the downstream consumer fails. The consumer picks up the backlog whenever it recovers. This ability is especially useful in a microservices architecture, because each service has its own lifecycle. Synchronous APIs, however, require the downstream service to be available or the operation fails. 
Responsiveness: An upstream service can reply faster if it doesn't wait on downstream services. If there is a chain of service dependencies (service A calls B, which calls C, etc.), waiting on synchronous calls can add unacceptable amounts of latency. Flow control: A message queue acts as a buffer, so that receivers can process messages at their own rate. However, following are some challenges to using asynchronous messaging effectively: Latency: If the message broker becomes a bottleneck, end-to-end latency might become high. Overhead in development and testing: Based on the choice of messaging or event infrastructure, there can be a possibility of having duplicate messages, which makes it difficult to make operations idempotent. It also can be hard to implement and test request-response semantics using asynchronous messaging. You need a way to correlate request and response messages. Throughput: Asynchronous message handling, either using a central queue or some other mechanism can become a bottleneck in the system. The backend systems, such as queues and downstream consumers, should scale to match the system's throughput requirements. Complicates error handling: In an asynchronous system, the caller doesn't know if a request was successful or failed, so error handling needs to be handled out of band. This type of system can make it difficult to implement logic like retries or exponential back-offs. Error handling is further complicated if there are multiple chained asynchronous calls that have to all succeed or fail. The next document in the series, Interservice communication in a microservices setup, provides a reference implementation to address some of the challenges mentioned in the preceding list. What's next Read the first document in this series to learn about microservices, their benefits, challenges, and use cases. Read the next document in this series, Interservice communication in a microservices setup. Read the fourth, final document in this series to learn more about distributed tracing of requests between microservices. Send feedback \ No newline at end of file diff --git a/Reference_architecture(1).txt b/Reference_architecture(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..4f0b95256f4266f6800406c520d50a252973b8ed --- /dev/null +++ b/Reference_architecture(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/exposing-service-mesh-apps-through-gke-ingress +Date Scraped: 2025-02-23T11:47:30.203Z + +Content: +Home Docs Cloud Architecture Center Send feedback From edge to mesh: Expose service mesh applications through GKE Gateway Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-10-24 UTC This reference architecture shows how to combine Cloud Service Mesh with Cloud Load Balancing to expose applications in a service mesh to internet clients. Cloud Service Mesh is a managed service mesh, based on Istio, that provides a security-enhanced, observable, and standardized communication layer for applications. A service mesh provides a holistic communications platform for clients that are communicating in the mesh. However, a challenge remains in how to connect clients that are outside the mesh to applications hosted inside the mesh. You can expose an application to clients in many ways, depending on where the client is. This reference architecture is intended for advanced practitioners who run Cloud Service Mesh but it works for Istio on Google Kubernetes Engine (GKE) too. 
Mesh ingress gateway Istio 0.8 introduced the mesh ingress gateway. The gateway provides a dedicated set of proxies whose ports are exposed to traffic coming from outside the service mesh. These mesh ingress proxies let you control network exposure behavior separately from application routing behavior. The proxies also let you apply routing and policy to mesh-external traffic before it arrives at an application sidecar. Mesh ingress defines the treatment of traffic when it reaches a node in the mesh, but external components must define how traffic first arrives at the mesh. To manage this external traffic, you need a load balancer that is external to the mesh. This reference architecture uses Google Cloud Load Balancing provisioned through GKE Gateway resources to automate deployment. For Google Cloud, the canonical example of this setup is an external load balancing service that deploys a public network load balancer (L4). That load balancer points at the NodePorts of a GKE cluster. These NodePorts expose the Istio ingress gateway Pods, which route traffic to downstream mesh sidecar proxies. Architecture The following diagram illustrates this topology. Load balancing for internal private traffic looks similar to this architecture, except that you deploy an internal passthrough Network Load Balancer instead. The preceding diagram shows that using L4 transparent load balancing with a mesh ingress gateway offers the following advantages: The setup simplifies deploying the load balancer. The load balancer provides a stable virtual IP address (VIP), health checking, and reliable traffic distribution when cluster changes, node outages, or process outages occur. All routing rules, TLS termination, and traffic policy is handled in a single location at the mesh ingress gateway. Note: For applications in Cloud Service Mesh, deployment of the external TCP/UDP load balancer is the default exposure type. You deploy the load balancer through the Kubernetes Service, which selects the Pods of the mesh ingress proxies. GKE Gateway and Services You can provide access to applications for clients that are outside the cluster in many ways. GKE Gateway is an implementation of the Kubernetes Gateway API. GKE Gateway evolves the Ingress resource and improves it. As you deploy GKE Gateway resources to your GKE cluster, the Gateway controller watches the Gateway API resources and reconciles Cloud Load Balancing resources to implement the networking behavior that's specified by the GKE Gateway resources. When using GKE Gateway, the type of load balancer you use to expose applications to clients depends largely on the following factors: The status of the clients (external or internal). The required capabilities of the load balancer, including the capability to integrate with Google Cloud Armor security policies. The spanning requirements of the service mesh. Service meshes can span multiple GKE clusters or can be contained in a single cluster. In GKE Gateway, this behavior is controlled by specifying the appropriate GatewayClass. Note: Although you can expose applications hosted in Cloud Service Mesh through Node IP addresses directly, we don't recommend this approach for most production environments. Direct exposure through Node IP addresses is fragile and can also become a security risk. To direct traffic to applications hosted in Cloud Service Mesh, we recommend that you use an L4 or L7 load balancer. 
Although the default load balancer for Cloud Service Mesh is the Network Load Balancer, this reference architecture focuses on the external Application Load Balancer (L7). The external Application Load Balancer provides integration with edge services like Identity-Aware Proxy and Google Cloud Armor, URL redirects and rewrites, as well as a globally distributed network of edge proxies. The next section describes the architecture and advantages of using two layers of HTTP load balancing. Cloud ingress and mesh ingress Deploying external L7 load balancing outside of the mesh along with a mesh ingress layer offers significant advantages, especially for internet traffic. Even though Cloud Service Mesh and Istio ingress gateways provide advanced routing and traffic management in the mesh, some functions are better served at the edge of the network. Taking advantage of internet-edge networking through Google Cloud's external Application Load Balancer might provide significant performance, reliability, or security-related benefits over mesh-based ingress. These benefits include the following: Global Anycast VIP advertisement and globally distributed TLS and HTTP termination DDoS defense and traffic filtering at the edge with Google Cloud Armor API gateway functionality with IAP Automatic public certificate creation and rotation with Google Certificate Manager Multi-cluster and multi-regional load balancing at the edge with Multi Cluster Gateway This external layer of L7 load balancing is referred to as cloud ingress because it is built on cloud-managed load balancers rather than the self-hosted proxies that are used by mesh ingress. The combination of cloud ingress and mesh ingress uses complementary capabilities of the Google Cloud infrastructure and the mesh. The following diagram illustrates how you can combine cloud ingress (through GKE gateway) and mesh ingress to serve as two load balancing layers for internet traffic. In the topology of the preceding diagram, the cloud ingress layer sources traffic from outside of the service mesh and directs that traffic to the mesh ingress layer. The mesh ingress layer then directs traffic to the mesh-hosted application backends. Cloud and mesh ingress topology This section describes the complementary roles that each ingress layer fulfills when you use them together. These roles aren't concrete rules, but rather guidelines that use the advantages of each layer. Variations of this pattern are likely, depending on your use case. Cloud ingress: When paired with mesh ingress, the cloud ingress layer is best used for edge security and global load balancing. Because the cloud ingress layer is integrated with DDoS protection, cloud firewalls, authentication, and encryption products at the edge, this layer excels at running these services outside of the mesh. The routing logic is typically straightforward at this layer, but the logic can be more complex for multi-cluster and multi-region environments. Because of the critical function of internet-facing load balancers, the cloud ingress layer is likely managed by an infrastructure team that has exclusive control over how applications are exposed and secured on the internet. This control also makes this layer less flexible and dynamic than a developer-driven infrastructure, a consideration that could impact who and how you provide administrative access to this layer. Mesh ingress: When paired with cloud ingress, the mesh ingress layer provides flexible routing that is close to the application. 
Because of this flexibility, the mesh ingress is better than cloud ingress for complex routing logic and application-level visibility. The separation between ingress layers also makes it easier for application owners to directly control this layer without affecting other teams. To help secure applications, when you expose service mesh applications through an L4 load balancer instead of an L7 load balancer, you should terminate client TLS at the mesh ingress layer inside the mesh. Health checking One complexity of using two layers of L7 load balancing is health checking. You must configure each load balancer to check the health of the next layer to ensure that it can receive traffic. The topology in the following diagram shows how cloud ingress checks the health of the mesh ingress proxies, and the mesh, in return, checks the health of the application backends. The preceding topology has the following considerations: Cloud ingress: In this reference architecture, you configure the Google Cloud load balancer through the Gateway to check the health of the mesh ingress proxies on their exposed health check ports. If a mesh proxy is down, or if the cluster, mesh, or region is unavailable, the Google Cloud load balancer detects this condition and doesn't send traffic to the mesh proxy. Mesh ingress: In the mesh application, you perform health checks on the backends directly so that you can execute load balancing and traffic management locally. Security The preceding topology also involves several security elements. One of the most critical elements is how you configure encryption and deploy certificates. GKE Gateway integrates with Google-managed certificates. You can refer to a Certificate Manager CertificateMap in your Gateway definition. Internet clients authenticate against the public certificates and connect to the external load balancer as the first hop in the Virtual Private Cloud (VPC). The next hop, which is between the Google Front End (GFE) and the mesh ingress proxy, is encrypted by default. Network-level encryption between the GFEs and their backends is applied automatically. However, if your security requirements dictate that the platform owner retain ownership of the encryption keys, then you can enable HTTP/2 with TLS encryption between the cluster gateway (the GFE) and the mesh ingress (the Envoy proxy instance). When you enable HTTP/2 with TLS encryption for this path, you can use a self-signed or public certificate to encrypt traffic because the GFE doesn't authenticate against it. This additional layer of encryption is demonstrated in the associated deployment guide. To help prevent the mishandling of certificates, don't use the public certificate for the public load balancer elsewhere. Instead, we recommend that you use separate certificates in the service mesh. If the service mesh mandates TLS, then all traffic is encrypted between sidecar proxies and to the mesh ingress. The following diagram illustrates HTTPS encryption from the client to the Google Cloud load balancer, from the load balancer to the mesh ingress proxy, and from the ingress proxy to the sidecar proxy. Cost optimization In this document, you use the following billable components of Google Cloud: Google Kubernetes Engine Compute Engine Cloud Load Balancing Cloud Service Mesh Google Cloud Armor Cloud Endpoints Deployment To deploy this architecture, see From edge to mesh: Deploy service mesh applications through GKE Gateway.
What's next Learn about the additional features offered by GKE Gateway that you can use with your service mesh. Learn about the different types of cloud load balancing available for GKE. Learn about the features and functionality offered by Cloud Service Mesh. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Reference_architecture(10).txt b/Reference_architecture(10).txt new file mode 100644 index 0000000000000000000000000000000000000000..a15d19b75549a42fbde7fcb10d93dab61d96808e --- /dev/null +++ b/Reference_architecture(10).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/stream-logs-from-google-cloud-to-splunk +Date Scraped: 2025-02-23T11:53:20.173Z + +Content: +Home Docs Cloud Architecture Center Send feedback Stream logs from Google Cloud to Splunk Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-24 UTC This document describes a reference architecture that helps you create a production-ready, scalable, fault-tolerant, log export mechanism that streams logs and events from your resources in Google Cloud into Splunk. Splunk is a popular analytics tool that offers a unified security and observability platform. In fact, you have the choice of exporting the logging data to either Splunk Enterprise or Splunk Cloud Platform. If you're an administrator, you can also use this architecture for either IT operations or security use cases. To deploy this reference architecture, see Deploy log streaming from Google Cloud to Splunk. This reference architecture assumes a resource hierarchy that is similar to the following diagram. All the Google Cloud resource logs from the organization, folder, and project levels are gathered into an aggregated sink. Then, the aggregated sink sends these logs to a log export pipeline, which processes the logs and exports them to Splunk. Architecture The following diagram shows the reference architecture that you use when you deploy this solution. This diagram demonstrates how log data flows from Google Cloud to Splunk. This architecture includes the following components: Cloud Logging–To start the process, Cloud Logging collects the logs into an organization-level aggregated log sink and sends the logs to Pub/Sub. Pub/Sub–The Pub/Sub service then creates a single topic and subscription for the logs and forwards the logs to the main Dataflow pipeline. Dataflow–There are two Dataflow pipelines in this reference architecture: Primary Dataflow pipeline–At the center of the process, the main Dataflow pipeline is a Pub/Sub to Splunk streaming pipeline which pulls logs from the Pub/Sub subscription and delivers them to Splunk. Secondary Dataflow pipeline–Parallel to the primary Dataflow pipeline, the secondary Dataflow pipeline is a Pub/Sub to Pub/Sub streaming pipeline to replay messages if a delivery fails. Splunk–At the end of the process, Splunk Enterprise or Splunk Cloud Platform acts as an HTTP Event Collector (HEC) and receives the logs for further analysis. You can deploy Splunk on-premises, in Google Cloud as SaaS, or through a hybrid approach. Use case This reference architecture uses a cloud, push-based approach. In this push-based method, you use the Pub/Sub to Splunk Dataflow template to stream logs to a Splunk HTTP Event Collector (HEC) . 
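The Dataflow template handles this delivery for you, but it can help to see the Splunk HEC contract in isolation. The following Node.js sketch posts a small batch of events to an HEC endpoint over HTTPS; the endpoint URL, sourcetype, and token handling are illustrative placeholders rather than values from this reference architecture, and the sketch assumes Node.js 18 or later for the built-in fetch API.

// Minimal sketch: deliver a batch of log events to a Splunk HTTP Event Collector (HEC).
// The endpoint and sourcetype below are placeholders; store the real token in Secret Manager.
const HEC_URL = 'https://splunk-hec.example.com:8088/services/collector/event';
const HEC_TOKEN = process.env.SPLUNK_HEC_TOKEN;

async function sendBatch(logEntries) {
  // HEC accepts multiple event objects concatenated in a single request body,
  // which is how the Dataflow template batches deliveries.
  const body = logEntries
    .map((entry) => JSON.stringify({ event: entry, sourcetype: 'google:cloud:logging' }))
    .join('\n');

  const response = await fetch(HEC_URL, {
    method: 'POST',
    headers: {
      'Authorization': `Splunk ${HEC_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body,
  });

  if (!response.ok) {
    // In the pipeline, 5xx responses are retried and 4xx responses are not.
    throw new Error(`HEC delivery failed: ${response.status}`);
  }
}

// Example usage with two synthetic log entries.
sendBatch([
  { severity: 'INFO', message: 'sample log entry 1' },
  { severity: 'ERROR', message: 'sample log entry 2' },
]).catch(console.error);

In the deployed pipeline, batching, retries, and dead-lettering are controlled by the template parameters discussed later in this document.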
The reference architecture also discusses Dataflow pipeline capacity planning and how to handle potential delivery failures when there are transient server or network issues. While this reference architecture focuses on Google Cloud logs, the same architecture can be used to export other Google Cloud data, such as real-time asset changes and security findings. By integrating logs from Cloud Logging, you can continue to use existing partner services like Splunk as a unified log analytics solution. The push-based method to stream Google Cloud data into Splunk has the following advantages: Managed service. As a managed service, Dataflow maintains the required resources in Google Cloud for data processing tasks such as log export. Distributed workload. This method lets you distribute workloads across multiple workers for parallel processing, so there is no single point of failure. Security. Because Google Cloud pushes your data to Splunk HEC, there's none of the maintenance and security burden associated with creating and managing service account keys. Autoscaling. The Dataflow service autoscales the number of workers in response to variations in incoming log volume and backlog. Fault-tolerance. ​​If there are transient server or network issues, the push-based method automatically tries to resend the data to the Splunk HEC. It also supports unprocessed topics (also known as dead-letter topics) for any undeliverable log messages to avoid data loss. Simplicity. You avoid the management overhead and the cost of running one or more heavy forwarders in Splunk. This reference architecture applies to businesses in many different industry verticals, including regulated ones such as pharmaceutical and financial services. When you choose to export your Google Cloud data into Splunk, you might choose to do so for the following reasons: Business analytics IT operations Application performance monitoring Security operations Compliance Design alternatives An alternative method for log export to Splunk is one where you pull logs from Google Cloud. In this pull-based method, you use Google Cloud APIs to fetch the data through the Splunk Add-on for Google Cloud. You might choose to use the pull-based method in the following situations: Your Splunk deployment does not offer a Splunk HEC endpoint. Your log volume is low. You want to export and analyze Cloud Monitoring metrics, Cloud Storage objects, Cloud Resource Manager API metadata, Cloud Billing data, or low-volume logs. You already manage one or more heavy forwarders in Splunk. You use the hosted Inputs Data Manager for Splunk Cloud. Also, keep in mind the additional considerations that arise when you use this pull-based method: A single worker handles the data ingestion workload, which does not offer autoscaling capabilities. In Splunk, the use of a heavy forwarder to pull data might cause a single point of failure. The pull-based method requires you to create and manage the service account keys that you use to configure the Splunk Add-on for Google Cloud. Before using the Splunk Add-on, log entries must first be routed to Pub/Sub using a log sink. To create a log sink with Pub/Sub topic as the destination, follow these steps. Make sure to grant the Pub/Sub Publisher role (roles/pubsub.publisher) to the sink's writer identity over that Pub/Sub topic destination. For more information about configuring sink destination permissions, see Set destination permissions. 
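For illustration, the following Node.js sketch creates a project-level sink with a Pub/Sub topic destination and grants the Publisher role to the sink's writer identity. It assumes the @google-cloud/logging and @google-cloud/pubsub client libraries, and the project, topic, and sink names are placeholders. The organization-level aggregated sink used by this reference architecture is typically created with gcloud or Terraform instead.

// Minimal sketch: project-level log sink with a Pub/Sub destination, plus the writer-identity grant.
const {Logging} = require('@google-cloud/logging');
const {PubSub} = require('@google-cloud/pubsub');

const projectId = 'my-project';       // placeholder
const topicId = 'log-export-topic';   // placeholder
const sinkId = 'splunk-export-sink';  // placeholder

async function createSinkAndGrantPublisher() {
  const logging = new Logging({projectId});
  const sink = logging.sink(sinkId);

  // The destination format for a Pub/Sub topic is pubsub.googleapis.com/<topic resource name>.
  await sink.create({
    destination: `pubsub.googleapis.com/projects/${projectId}/topics/${topicId}`,
  });

  // Read the sink back to get the service account that Cloud Logging writes with.
  const [metadata] = await sink.getMetadata();
  const writerIdentity = metadata.writerIdentity; // already in "serviceAccount:..." member format

  // Grant roles/pubsub.publisher on the destination topic to the writer identity.
  const topic = new PubSub({projectId}).topic(topicId);
  const [policy] = await topic.iam.getPolicy();
  policy.bindings = policy.bindings || [];
  policy.bindings.push({role: 'roles/pubsub.publisher', members: [writerIdentity]});
  await topic.iam.setPolicy(policy);
}

createSinkAndGrantPublisher().catch(console.error);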
To enable the Splunk Add-on, perform the following steps: In Splunk, follow the Splunk instructions to install the Splunk Add-on for Google Cloud. Create a Pub/Sub pull subscription for the Pub/Sub topic where the logs are routed to, if you do not have one already. Create a service account. Create a service account key for the service account that you just created. Grant the Pub/Sub Viewer (roles/pubsub.viewer) and Pub/Sub Subscriber (roles/pubsub.subscriber) roles to the service account to let the account receive messages from the Pub/Sub subscription. In Splunk, follow the Splunk instructions to configure a new Pub/Sub input in the Splunk Add-on for Google Cloud. The Pub/Sub messages from the log export appear in Splunk. To verify that the add-on is working, perform the following steps: In Cloud Monitoring, open Metrics Explorer. In the Resources menu, select pubsub_subscription. In the Metric categories, select pubsub/subscription/pull_message_operation_count. Monitor the number of message-pull operations for one to two minutes. Design considerations The following guidelines can help you to develop an architecture that meets your organization's requirements for security, privacy, compliance, operational efficiency, reliability, fault tolerance, performance, and cost optimization. Security, privacy, and compliance The following sections describe the security considerations for this reference architecture: Use private IP addresses to secure the VMs that support the Dataflow pipeline Enable Private Google Access Restrict Splunk HEC ingress traffic to known IP addresses used by Cloud NAT Store the Splunk HEC token in Secret Manager Create a custom Dataflow worker service account to follow least privilege best practices Configure SSL validation for an internal root CA certificate if you use a private CA Use private IP addresses to secure the VMs that support the Dataflow pipeline You should restrict access to the worker VMs that are used in the Dataflow pipeline. To restrict access, deploy these VMs with private IP addresses. However, these VMs also need to be able to use HTTPS to stream the exported logs into Splunk and access the internet. To provide this HTTPS access, you need a Cloud NAT gateway which automatically allocates Cloud NAT IP addresses to the VMs that need them. Make sure to map the subnet that contains the VMs to the Cloud NAT gateway. Enable Private Google Access When you create a Cloud NAT gateway, Private Google Access becomes enabled automatically. However, to allow Dataflow workers with private IP addresses to access the external IP addresses that Google Cloud APIs and services use, you must also manually enable Private Google Access for the subnet. Restrict Splunk HEC ingress traffic to known IP addresses used by Cloud NAT If you want to restrict traffic into the Splunk HEC to a subset of known IP addresses, you can reserve static IP addresses and manually assign them to the Cloud NAT gateway. Depending on your Splunk deployment, you can then configure your Splunk HEC ingress firewall rules using these static IP addresses. For more information about Cloud NAT, see Set up and manage network address translation with Cloud NAT. 
Store the Splunk HEC token in Secret Manager When you deploy the Dataflow pipeline, you can pass the token value in one of the following ways: Plaintext Ciphertext encrypted with a Cloud Key Management Service key Secret version encrypted and managed by Secret Manager In this reference architecture, you use the Secret Manager option because this option offers the least complex and most efficient way to protect your Splunk HEC token. This option also prevents leakage of the Splunk HEC token from the Dataflow console or the job details. A secret in Secret Manager contains a collection of secret versions. Each secret version stores the actual secret data, such as the Splunk HEC token. If you later choose to rotate your Splunk HEC token as an added security measure, you can add the new token as a new secret version to this secret. For general information about the rotation of secrets, see About rotation schedules. Create a custom Dataflow worker service account to follow least privilege best practices Workers in the Dataflow pipeline use the Dataflow worker service account to access resources and execute operations. By default, the workers use your project's Compute Engine default service account as the worker service account, which grants them broad permissions to all resources in your project. However, to run Dataflow jobs in production, we recommend that you create a custom service account with a minimum set of roles and permissions. You can then assign this custom service account to your Dataflow pipeline workers. The following diagram lists the required roles that you must assign to a service account to enable Dataflow workers to run a Dataflow job successfully. As shown in the diagram, you need to assign the following roles to the service account for your Dataflow worker: Dataflow Admin Dataflow Worker Storage Object Admin Pub/Sub Subscriber Pub/Sub Viewer Pub/Sub Publisher Secret Accessor For more details on the roles that you need to assign to the Dataflow worker service account, see the Grant roles and access to the Dataflow worker service account section of the deployment guide. Configure SSL validation with an internal root CA certificate if you use a private CA By default, the Dataflow pipeline uses the Dataflow worker’s default trust store to validate the SSL certificate for your Splunk HEC endpoint. If you use a private certificate authority (CA) to sign an SSL certificate that is used by the Splunk HEC endpoint, you can import your internal root CA certificate into the trust store. The Dataflow workers can then use the imported certificate for SSL certificate validation. You can use and import your own internal root CA certificate for Splunk deployments with self-signed or privately signed certificates. You can also disable SSL validation entirely for internal development and testing purposes only. This internal root CA method works best for non-internet facing, internal Splunk deployments. For more information, see the Pub/Sub to Splunk Dataflow template parameters rootCaCertificatePath and disableCertificateValidation. Operational efficiency The following sections describe the operational efficiency considerations for this reference architecture: Use UDF to transform logs or events in-flight Replay unprocessed messages Use UDF to transform logs or events in-flight The Pub/Sub to Splunk Dataflow template supports user-defined functions (UDF) for custom event transformation. 
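As a concrete illustration of the use cases described next, the following sketch assumes the common UDF convention of a JavaScript function that receives a stringified log entry and returns a stringified result. The field names are illustrative placeholders, and the function's file path and name are passed to the template as parameters (see the template documentation).

// Minimal UDF sketch for the Pub/Sub to Splunk Dataflow template.
// Field names (labels.env, protoPayload.password) are illustrative placeholders.
function process(inJson) {
  const obj = JSON.parse(inJson);

  // Enrich: add a static field that is useful for searching in Splunk.
  obj.labels = obj.labels || {};
  obj.labels.env = 'prod';

  // Redact: drop a sensitive field if it is present.
  if (obj.protoPayload && obj.protoPayload.password) {
    delete obj.protoPayload.password;
  }

  // Filtering behavior (dropping records) varies by template version, so check the
  // template documentation before relying on it in a UDF.
  return JSON.stringify(obj);
}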
Example use cases include enriching records with additional fields, redacting some sensitive fields, or filtering out undesired records. UDF enables you to change the Dataflow pipeline's output format without having to re-compile or to maintain the template code itself. This reference architecture uses a UDF to handle messages that the pipeline isn't able to deliver to Splunk. Replay unprocessed messages Sometimes, the pipeline receives delivery errors and doesn't try to deliver the message again. In this case, Dataflow sends these unprocessed messages to an unprocessed topic as shown in the following diagram. After you fix the root cause of the delivery failure, you can then replay the unprocessed messages. The following steps outline the process shown in the previous diagram: The main delivery pipeline from Pub/Sub to Splunk automatically forwards undeliverable messages to the unprocessed topic for user investigation. The operator or site reliability engineer (SRE) investigates the failed messages in the unprocessed subscription. The operator troubleshoots and fixes the root cause of the delivery failure. For example, fixing an HEC token misconfiguration might enable the messages to be delivered. Note: To avoid data backlog or data loss, you must regularly inspect any failed messages in the unprocessed subscription and resolve any issues before Pub/Sub discards the messages. The maximum message retention in a Pub/Sub subscription is seven days, which is the default setting for both the input and the unprocessed subscriptions in this reference architecture. You can monitor and receive alerts on the unacknowledged messages in the unprocessed subscription. For more information about monitoring and alerting for Pub/Sub subscriptions, see Monitor message backlog. The operator triggers the replay failed message pipeline. This Pub/Sub to Pub/Sub pipeline (highlighted in the dotted section of the preceding diagram) is a temporary pipeline that moves the failed messages from the unprocessed subscription back to the original log sink topic. The main delivery pipeline re-processes the previously failed messages. This step requires the pipeline to use the sample UDF for correct detection and decoding of failed messages payloads. The following code shows the part of the function that implements this conditional decoding logic, including a tally of delivery attempts for tracking purposes: // If the message has already been converted to a Splunk HEC object // with a stringified obj.event JSON payload, then it's a replay of // a previously failed delivery. // Unnest and parse the obj.event. Drop the previously injected // obj.attributes such as errorMessage and timestamp if (obj.event) { try { event = JSON.parse(obj.event); redelivery = true; } catch(e) { event = obj; } } else { event = obj; } // Keep a tally of delivery attempts event.delivery_attempt = event.delivery_attempt || 1; if (redelivery) { event.delivery_attempt += 1; } Reliability and fault tolerance In regard to reliability and fault tolerance, the following table, Table 1, lists some possible Splunk delivery errors. The table also lists the corresponding errorMessage attributes that the pipeline records with each message before forwarding these messages to the unprocessed topic. Table 1: Splunk delivery error types Delivery error type Automatically retried by pipeline? 
Example errorMessage attribute
Transient network error: automatically retried by the pipeline; example errorMessage: Read timed out or Connection reset.
Splunk server 5xx error: automatically retried by the pipeline; example errorMessage: Splunk write status code: 503.
Splunk server 4xx error: not retried; example errorMessage: Splunk write status code: 403.
Splunk server down: not retried; example errorMessage: The target server failed to respond.
Splunk SSL certificate invalid: not retried; example errorMessage: Host name X does not match the certificate.
JavaScript syntax error in the user-defined function (UDF): not retried; example errorMessage: ReferenceError: X is not defined.
In some cases, the pipeline applies exponential backoff and automatically tries to deliver the message again. For example, when the Splunk server generates a 5xx error code, the pipeline needs to redeliver the message. These error codes occur when the Splunk HEC endpoint is overloaded. Alternatively, there could be a persistent issue that prevents a message from being submitted to the HEC endpoint. For such persistent issues, the pipeline does not try to deliver the message again. The following are examples of persistent issues: A syntax error in the UDF. An invalid HEC token that causes the Splunk server to generate a 4xx "Forbidden" server response. Performance and cost optimization In regard to performance and cost optimization, you need to determine the maximum size and throughput for your Dataflow pipeline. You must calculate the correct size and throughput values so that your pipeline can handle peak daily log volume (GB/day) and log message rate (events per second, or EPS) from the upstream Pub/Sub subscription. You must select the size and throughput values so that the system doesn't incur either of the following issues: Delays caused by message backlogging or message throttling. Extra costs from overprovisioning a pipeline. After you perform the size and throughput calculations, you can use the results to configure an optimal pipeline that balances performance and cost. To configure your pipeline capacity, you use the following settings: The Machine type and Machine count flags are part of the gcloud command that deploys the Dataflow job. These flags let you define the type and number of VMs to use. The Parallelism and Batch count parameters are part of the Pub/Sub to Splunk Dataflow template. These parameters are important to increase EPS while avoiding overwhelming the Splunk HEC endpoint. The following sections provide an explanation of these settings. When applicable, these sections also provide formulas and example calculations that use each formula. These example calculations and resulting values assume an organization with the following characteristics: Generates 1 TB of logs daily. Has an average message size of 1 KB. Has a sustained peak message rate that is two times the average rate. Because your Dataflow environment is unique, substitute the example values with values from your own organization as you work through the steps. Machine type Best practice: Set the --worker-machine-type flag to n2-standard-4 to select a machine size that provides the best performance to cost ratio. Because the n2-standard-4 machine type can handle 12k EPS, we recommend that you use this machine type as a baseline for all of your Dataflow workers. For this reference architecture, set the --worker-machine-type flag to a value of n2-standard-4. Machine count Best practice: Set the --max-workers flag to control the maximum number of workers needed to handle expected peak EPS. Dataflow autoscaling allows the service to adaptively change the number of workers used to execute your streaming pipeline when there are changes to resource usage and load.
To avoid overprovisioning when autoscaling, we recommend that you always define the maximum number of virtual machines that are used as Dataflow workers. You define the maximum number of virtual machines with the --max-workers flag when you deploy the Dataflow pipeline. Dataflow statically provisions the storage component as follows: An autoscaling pipeline deploys one data persistent disk for each potential streaming worker. The default persistent disk size is 400 GB, and you set the maximum number of workers with the --max-workers flag. The disks are mounted to the running workers at any point in time, including startup. Because each worker instance is limited to 15 persistent disks, the minimum number of starting workers is ⌈--max-workers/15⌉. So, if the default value is --max-workers=20, the pipeline usage (and cost) is as follows: Storage: static with 20 persistent disks. Compute: dynamic with minimum of 2 worker instances (⌈20/15⌉ = 2), and a maximum of 20. This value is equivalent to 8 TB of a Persistent Disk. This size of Persistent Disk could incur unnecessary cost if the disks are not fully used, especially if only one or two workers are running the majority of the time. To determine the maximum number of workers that you need for your pipeline, use the following formulas in sequence: Determine the average events per second (EPS) using the following formula: \( {AverageEventsPerSecond}\simeq\frac{TotalDailyLogsInTB}{AverageMessageSizeInKB}\times\frac{10^9}{24\times3600} \) Example calculation: Given the example values of 1 TB of logs per day with an average message size of 1 KB, this formula generates an average EPS value of 11.5k EPS. Determine the sustained peak EPS by using the following formula, where the multiplier N represents the bursty nature of logging: \( {PeakEventsPerSecond = N \times\ AverageEventsPerSecond} \) Example calculation: Given an example value of N=2 and the average EPS value of 11.5k that you calculated in the previous step, this formula generates a sustained peak EPS value of 23k EPS. Determine the maximum required number of vCPUs by using the following formula: \( {maxCPUs = ⌈PeakEventsPerSecond / 3k ⌉} \) Example calculation: Using the sustained peak EPS value of 23k that you calculated in the previous step, this formula generates a maximum of ⌈23 / 3⌉ = 8 vCPU cores. Note: A single vCPU in a Splunk Dataflow pipeline can generally process 3k EPS, assuming there are no artificially low rate limits. So, one Dataflow VM worker of machine type (n2-standard-4) is generally enough to process up to 12k EPS. Determine the maximum number of Dataflow workers by using the following formula: \( maxNumWorkers = ⌈maxCPUs / 4 ⌉ \) Example calculation: Using the example maximum vCPUs value of 8 that was calculated in the previous step, this formula [8/4] generates a maximum number of 2 for an n2-standard-4 machine type. For this example, you would set the --max-workers flag to a value of 2 based on the previous set of example calculations. However, remember to use your own unique values and calculations when you deploy this reference architecture in your environment. Parallelism Best practice: Set the parallelism parameter in the Pub/Sub to Splunk Dataflow template to twice the number of vCPUs used by the maximum number of Dataflow workers. The parallelism parameter helps maximize the number of parallel Splunk HEC connections, which in turn maximizes the EPS rate for your pipeline. The default parallelism value of 1 disables parallelism and limits the output rate. 
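The following Node.js sketch pulls the sizing formulas from this section together into one worked calculation, using the example values of 1 TB per day, 1 KB average message size, and a peak factor of two. It is only a convenience for reproducing the arithmetic, not part of the template; the parallelism value it derives is explained in the remainder of this section.

// Worked example of the sizing formulas above. Substitute your own values.
const totalDailyLogsTB = 1;       // example: 1 TB of logs per day
const averageMessageSizeKB = 1;   // example: 1 KB average message size
const peakFactorN = 2;            // example: peak rate is twice the average

// Average EPS ~= (TB/day divided by KB per message) * 10^9 / 86,400 seconds.
const averageEps = (totalDailyLogsTB / averageMessageSizeKB) * 1e9 / (24 * 3600); // ~11,574 EPS

// Sustained peak EPS.
const peakEps = peakFactorN * averageEps; // ~23,148 EPS

// One vCPU handles roughly 3,000 EPS.
const maxCpus = Math.ceil(peakEps / 3000); // 8 vCPUs

// n2-standard-4 workers have 4 vCPUs each, so this gives --max-workers.
const maxWorkers = Math.ceil(maxCpus / 4); // 2 workers

// Parallelism: two connections per vCPU across the maximum worker pool.
const parallelism = maxCpus * 2; // 16

console.log({averageEps: Math.round(averageEps), peakEps: Math.round(peakEps), maxCpus, maxWorkers, parallelism});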
You need to override this default setting to account for 2 to 4 parallel connections per vCPU, with the maximum number of workers deployed. As a rule, you calculate the override value for this setting by multiplying the maximum number of Dataflow workers by the number of vCPUs per worker, and then doubling this value. To determine the total number of parallel connections to the Splunk HEC across all Dataflow workers, use the following formula: \( {parallelism = maxCPUs * 2} \) Example calculation: Using the example maximum vCPUs of 8 that was previously calculated for machine count, this formula generates the number of parallel connections to be 8 x 2 = 16. For this example, you would set the parallelism parameter to a value of 16 based on the previous example calculation. However, remember to use your own unique values and calculations when you deploy this reference architecture in your environment. Batch count Best practice: To enable the Splunk HEC to process events in batches rather than one at a time, set the batchCount parameter to a value between 10 to 50 events/request for logs. Configuring the batch count helps to increase EPS and reduce the load on the Splunk HEC endpoint. The setting combines multiple events into a single batch for more efficient processing. We recommend that you set the batchCount parameter to a value between 10 to 50 events/request for logs, provided the max buffering delay of two seconds is acceptable. \( {batchCount >= 10} \) Because the average log message size is 1 KB in this example, we recommend that you batch at least 10 events per request. For this example, you would set the batchCount parameter to a value of 10. However, remember to use your own unique values and calculations when you deploy this reference architecture in your environment. For more information about these performance and cost optimization recommendations, see Plan your Dataflow pipeline. Deployment To deploy this reference architecture, see Deploy log streaming from Google Cloud to Splunk. What's next For a full list of Pub/Sub to Splunk Dataflow template parameters, see the Pub/Sub to Splunk Dataflow documentation. For the corresponding Terraform templates to help you deploy this reference architecture, see the terraform-splunk-log-export GitHub repository. It includes a pre-built Cloud Monitoring dashboard for monitoring your Splunk Dataflow pipeline. For more details on Splunk Dataflow custom metrics and logging to help you monitor and troubleshoot your Splunk Dataflow pipelines, refer to this blog New observability features for your Splunk Dataflow streaming pipelines. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Reference_architecture(11).txt b/Reference_architecture(11).txt new file mode 100644 index 0000000000000000000000000000000000000000..2749ed5b4ee25423491218e64cec5c08652c0236 --- /dev/null +++ b/Reference_architecture(11).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/stream-cloud-logs-to-datadog +Date Scraped: 2025-02-23T11:53:33.193Z + +Content: +Home Docs Cloud Architecture Center Send feedback Stream logs from Google Cloud to Datadog Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-10 UTC This document describes a reference architecture to send log event data from across your Google Cloud ecosystem to Datadog Log Management. 
Sending data from Google Cloud to Datadog uses a Pub/Sub pull subscription and a Dataflow pipeline. This reference architecture is intended for IT professionals who want to stream logs from Google Cloud to Datadog. This document assumes that you are familiar with Datadog Log Management. Sending your logs to Datadog lets you visualize logs, set up alerts for log events, and correlate logs with metrics and traces from across your stack. Datadog is a cloud-based platform that provides methods to monitor and secure your infrastructure and applications. Datadog Log Management unifies logs, metrics, and traces into a single dashboard. Having a single view helps provide rich context when you analyze your log data. Architecture The following diagram shows the architecture that's described in this document. This diagram demonstrates how log files that are generated by Google Cloud are ingested by Datadog and shown to Datadog users. Click the diagram to enlarge it. This reference architecture uses Pub/Sub and Dataflow to forward your log files to Datadog. The architecture achieves a high level of log-file throughput by allowing batch delivery and compression. If you generate high-throughput and low-latency event logs, we recommend that you use a pull subscription. For more information about the different features that are supported by Pub/Sub subscriptions, see the Pub/Sub subscription comparison table. This architecture diagram includes the following components: Cloud Logging: Routes all of your logs to a Cloud Logging sink, where they can then be filtered and forwarded to supported destinations, like Pub/Sub. Pub/Sub: Forwards your log file data to Datadog through a Dataflow pipeline. When Pub/Sub is integrated with Cloud Logging, Pub/Sub uses topics and pull subscriptions to publish log file data to Pub/Sub topics in near real-time. For more information, see View logs routed to Pub/Sub. Dataflow: Offers two pipeline types to manage Google Cloud log files: Log forwarding: This is the primary pipeline. Dataflow workers batch log file data and then compress it. The pipeline then sends that data to Datadog. Dead-letter pipeline: This is the backup pipeline. When there are data processing errors, Dataflow workers send the log messages to the dead-letter topic. When you've resolved the errors manually, you create this pipeline to resend the data in the dead-letter topic to Datadog. Datadog: Datadog's Log Management system is the destination for your Google Cloud log file data. Each Datadog site provides a unique SSL-encrypted logging endpoint. For more information about the HTTPS endpoint of your Datadog data center, see logging endpoints. Products used This reference architecture uses the following Google Cloud and third-party products: Cloud Logging: A real-time log management system with storage, search, analysis, and alerting. Pub/Sub: An asynchronous and scalable messaging service that decouples services that produce messages from services that process those messages. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Dataflow: A service that provides unified stream and batch data processing at scale. Datadog Log Management: A service to collect, process, archive, explore, and monitor all of your logs. Use case Use a pre-built Dataflow template to forward your Google Cloud logs to Datadog. 
The pre-built template minimizes network overhead by batching and compressing the log files before they're transferred. If your organization uses a Virtual Private Cloud (VPC) with service perimeters, you need a pull-based subscription model to access endpoints that are outside the VPC perimeter. Pull subscriptions are useful if your log volume is highly variable. Use the log data that you transferred to Datadog for your organization's dashboards, alerts, and security platforms. You can also use the log data to troubleshoot. The log forwarding architecture that's described on this page uses a pull subscription with the Pub/Sub topic. The pull subscription enables access to the external Datadog endpoint. However, when you're using service perimeters, not all of the subscription delivery types provide access to external endpoints. For more information, see Supported products and limitations. Design considerations This section describes design factors, best practices, and design recommendations that you should consider when you use this reference architecture to develop a topology that meets your specific requirements for security, reliability, operational efficiency, cost, and performance. The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud and third-party products and features that you use, there might be additional design factors and trade-offs that you should consider. Security, privacy, and compliance This section describes factors that you should consider when you design and build a log-file-delivery pipeline between Google Cloud and Datadog that meets your security, privacy, and compliance requirements. Use private networking for the Dataflow VMs We recommend that you restrict access to the worker VMs that are used in the Dataflow pipeline by configuring them with private IP addresses. Restricting access lets you keep communication on private networks and away from the public internet. To restrict access while also allowing the VMs to stream the exported logs into Datadog over HTTPS, configure a public Cloud NAT gateway. After you configure the gateway, map the gateway to the subnet that contains your Dataflow pipeline workers. This mapping lets Dataflow automatically allocate Cloud NAT IP addresses to the VMs in the subnet. The mapping also enables Private Google Access. For more information about using private networking for the Dataflow VMs, see Private Google Access interaction. Store the Datadog API key in Secret Manager This reference architecture uses Secret Manager to store the Datadog API key value. Datadog API keys are unique to your organization. You use them to authenticate to the Datadog API endpoints. Any credentials that you store in Secret Manager are encrypted by default. The credentials offer access control options through IAM, and observability through audit logging. Secret Manager also supports versioning. That means that you can maintain a policy of short-lived credentials by rotating your API key value whenever it's appropriate. Each time the Datadog API key value is updated, Secret Manager creates a new version of the Secret Manager secret. For more information about rotating your API key values, see About rotation schedules. Create a custom service account for Dataflow pipeline workers By default, the service account that's used by worker VMs in the Dataflow pipeline is the Compute Engine default service account. This service account provides broad access to resources in your project. 
To follow the principle of least privilege, you should create a custom service account with the minimum required permissions. Successfully running a Dataflow job requires the following roles for the Dataflow worker service account: Dataflow Admin Dataflow Worker Pub/Sub Publisher Pub/Sub Subscriber Pub/Sub Viewer Secret Manager Secret Accessor Storage Object Admin Use Datadog to maintain data sovereignty To maintain data sovereignty, Datadog offers unique sites that are distributed throughout the world. Data is never shared between sites. Each site operates independently. Use different sites for specific use cases (such as government security regulations) or to store your data in different regions. Reliability This section describes features and design considerations for reliability. Intake errors The Datadog API documentation lists the potential errors that you might encounter at intake. The following table briefly describes Datadog status codes, causes, and which error types are automatically retried. For example, 4xx errors aren't automatically retried and 5xx errors are automatically retried. Status Code Cause Automatically retried? 400 Bad request (likely an issue in the payload formatting) No 401 Unauthorized (likely a missing API Key) No 403 Permission issue (likely using an invalid API Key) No 408 Request timeout No 413 Payload too large No 429 Too many requests No 500 Internal server error Yes 503 Service unavailable Yes Due to API rate-limits at intake, rapid and unexpected bursts in platform logs can lead to throttling and 429 errors. To help prevent these errors, configure the Pub/Sub to Datadog Dataflow template so that the Dataflow worker pipelines batch up to 1,000 logs in each request. To ensure a timely and consistent flow of logs during slow periods, configure the template so that Google Cloud sends batches to Datadog every two seconds. Index daily quotas If you are receiving errors with a 429 status code (too many requests), you might have reached the maximum daily quota for a Datadog log index. By default, Datadog log indexes can receive up to 200,000,000 log events per day before being rate-limited. Increase your daily quota by directly editing your index in the Datadog user interface or through the Datadog API. You can also set up multiple indexes. Configure each index to have a different retention period. You can also create different queries for each index. For more information, see Best Practices for Log Management. Logs silently dropped at intake Sometimes Datadog drops Google Cloud log files at intake without generating an error status code. For more information about potential causes, as well as how to use Datadog metrics to determine if you're affected by this issue, see Unexpectedly dropping logs. Log event tags A log event shouldn't have more than 100 tags. Each tag shouldn't exceed 200 characters. Tags can include Unicode characters. Datadog supports a maximum of 10,000,000 unique tags per day. For more information, see Getting Started with Tags. Log event attributes Any log event that's converted to the JSON file format should contain less than 256 attributes. Each attribute key should be less than 50 characters. Each key should be nested in less than 10 successive levels. If you intend to promote attributes as log facets, the attributes should have fewer than 1,024 characters. For more information, see Attributes and Aliasing. 
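These limits are straightforward to check before logs are sent. The following sketch is a hypothetical pre-flight validation that mirrors the tag and attribute limits described above; it is not part of the Datadog API or the Dataflow template, and you would adapt it to your own log shape.

// Hypothetical pre-flight check that mirrors the intake limits described above.
function validateLogEvent(event, tags) {
  const problems = [];

  // Tag limits: at most 100 tags, each at most 200 characters.
  if (tags.length > 100) problems.push('more than 100 tags');
  for (const tag of tags) {
    if (tag.length > 200) problems.push(`tag too long: ${tag.slice(0, 20)}...`);
  }

  // Attribute limits: fewer than 256 attributes, keys under 50 characters,
  // nesting under 10 successive levels.
  let attributeCount = 0;
  const walk = (obj, depth) => {
    if (depth >= 10) problems.push('attribute nesting of 10 or more levels');
    for (const [key, value] of Object.entries(obj)) {
      attributeCount += 1;
      if (key.length >= 50) problems.push(`attribute key too long: ${key}`);
      if (value && typeof value === 'object' && !Array.isArray(value)) walk(value, depth + 1);
    }
  };
  walk(event, 1);
  if (attributeCount >= 256) problems.push('256 or more attributes');

  return problems; // an empty array means the event is within the documented limits
}

// Example usage:
console.log(validateLogEvent({service: 'api', http: {status: 200}}, ['env:prod']));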
Maximum log file sizes To learn about the limitations on single log file size, uncompressed payload size, and number of logs grouped together that the Datadog API can accept, see the Datadog Logs API reference. Operational efficiency The following sections describe the operational efficiency considerations for this reference architecture. Overwrite default log attributes with user-defined functions You can use Datadog Log Management processors to transform and enrich Google Cloud log files after Datadog receives them. However, an alternative transformation option is to extend the Pub/Sub to Datadog template by writing a user-defined function (UDF) in JavaScript. The UDF can override certain default log attributes such as host or service. For more information, see the User-defined function parameter in the Pub/Sub to Datadog template. Redeliver failed messages To prevent data loss, messages that are sent but that aren't delivered to Datadog are sent to the dead-letter topic. This can happen because of 4xx (authorization) or 5xx (server) errors. When a server error (5xx) occurs, delivery is retried with exponential backoff. The maximum backoff is 15 minutes. If the message isn't successfully delivered in this timeframe, it's sent to the dead-letter topic. The Datadog Logs API accepts log events with timestamps up to 18 hours in the past. Ensure that you resend any log messages in the dead-letter topic within this timeframe so that they are accepted by the Datadog API. For failed log messages, use the following process to troubleshoot and then redeliver the logs to Datadog: Inspect the logs and resolve the issues that prevented delivery. For example: For 401 (unauthorized) or 403 (permission issue) errors, confirm that the Datadog API key is valid and that the Dataflow job has access to it. Check the API key validity in Datadog. Check that the Secret Manager secret that contains your valid Datadog API key allows access from the correct service account. Review the reasons for the errors. Other errors might be caused by restrictions on the Datadog logging endpoint. For more information, see the custom log forwarding section of Datadog's log collection documentation. Create a temporary Dataflow job with the Pub/Sub to Pub/Sub template. Use this job to route the undelivered message back into the input topic of the primary log. Confirm that all failed messages in the dead-letter topic have been sent back to the input topic of the primary log. Performance and cost optimization The following sections describe the factors that can influence the network and cost efficiency of this reference architecture. Batch count For optimal efficiency of network egress traffic, and its associated cost savings, Datadog recommends that you configure your batchCount parameter to the maximum setting of 1,000. This maximum parameter value means that up to 1,000 messages are batched together in a single network request. A batch is sent at least every two seconds, regardless of the batch size. The minimum value for batchCount is 10. The default value for batchCount is 100. To provide near real-time viewing of Google Cloud logs, Datadog sets the delay between batches to two seconds, regardless of whether the batchCount value has been reached. For example, if your batchCount value is set to 1,000, you continue to receive logs at least every two seconds—even during periods of sparse log-file generation. 
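The Pub/Sub to Datadog template implements this batching for you; the following standalone Node.js sketch only illustrates the flush-on-size-or-interval behavior that the batchCount parameter and the two-second delay describe. The class and its defaults are illustrative, not part of the template.

// Standalone sketch of flush-on-size-or-interval batching.
class LogBatcher {
  constructor(send, {maxBatchSize = 1000, flushIntervalMs = 2000} = {}) {
    this.send = send;                 // async function that delivers an array of log events
    this.maxBatchSize = maxBatchSize; // mirrors batchCount
    this.buffer = [];
    this.timer = setInterval(() => this.flush(), flushIntervalMs); // time-based flush
  }

  add(logEvent) {
    this.buffer.push(logEvent);
    if (this.buffer.length >= this.maxBatchSize) {
      this.flush(); // size-based flush
    }
  }

  async flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0, this.buffer.length);
    await this.send(batch); // one network request per batch
  }

  close() {
    clearInterval(this.timer);
    return this.flush();
  }
}

// Example usage: log the batch size instead of calling an intake API.
const batcher = new LogBatcher(async (batch) => console.log(`sending ${batch.length} events`));
batcher.add({message: 'hello'});
batcher.close();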
Parallelism To increase the number of requests that are sent to Datadog in parallel, use the parallelism parameter in the Pub/Sub-to-Datadog template. By default, this value is set to 1, which makes parallelism inactive. There is no defined upper limit for parallelism. For more information about parallelism, see Pipeline lifecycle. Optimize compute resources with Dataflow Prime You can incorporate Dataflow Prime with the Pub/Sub to Datadog template. Dataflow Prime is a serverless data-processing platform based on Dataflow. Use one or both of the following parameters to optimize the compute resources that your Dataflow pipeline uses: Vertical autoscaling: To meet the requirements of your pipeline, Dataflow Prime automatically scales the memory capacity of your Dataflow worker VMs. Scaling memory capacity can be useful for bursty workloads that trigger out-of-memory issues. For more information, see Vertical Autoscaling. Right fitting: To help you specify stage-specific or pipeline-wide compute resources, Dataflow Prime creates stage-specific worker pools, with resource hints, for each stage in the Dataflow pipeline. For more information, see Right fitting. Dataflow pipeline options Other factors that can influence the performance and cost of your Dataflow pipeline are documented on the Pipeline options page—for example: Resource utilization options, like autoscaling mode and the maximum number of Compute Engine instances that are available to your pipeline during runtime. Worker-level options, like worker machine type. If needed, use these settings to further refine the performance and cost efficiency of your Dataflow pipeline. Deployment To deploy this architecture, see Deploy Log Streaming from Google Cloud to Datadog. What's Next To learn more about the benefits of the Pub/Sub to Datadog Dataflow template, read the Stream your Google Cloud logs to Datadog with Dataflow blog post. To learn more about Datadog Log Management, see Best Practices for Log Management. For more information about Dataflow, see the Dataflow overview. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Ashraf Hanafy | Senior Software Engineer for Google Cloud Integrations, DatadogDaniel Trujillo | Engineering Manager, Google Cloud Integrations, DatadogBryce Eadie | Technical Writer, DatadogSriram Raman | Senior Product Manager, Google Cloud Integrations, DatadogOther contributors: Maruti C | Global Partner EngineerChirag Shankar | Data EngineerKevin Winters | Key Enterprise ArchitectLeonid Yankulin | Developer Relations EngineerMohamed Ali | Cloud Technical Solutions Developer Send feedback \ No newline at end of file diff --git a/Reference_architecture(12).txt b/Reference_architecture(12).txt new file mode 100644 index 0000000000000000000000000000000000000000..2d064fbc8032838664bc92fcab87e8a9cd4c3eb7 --- /dev/null +++ b/Reference_architecture(12).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/automate-malware-scanning-for-documents-uploaded-to-cloud-storage +Date Scraped: 2025-02-23T11:56:36.152Z + +Content: +Home Docs Cloud Architecture Center Send feedback Automate malware scanning for files uploaded to Cloud Storage Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-16 UTC This reference architecture shows you how to build an event-driven pipeline that can help you automate the evaluation of files for malware like trojans, viruses, and other malicious code. 
Manually evaluating the large number of files that are uploaded to Cloud Storage is too time-consuming for most apps. Automating the process can help you save time and improve efficiency. The pipeline in this architecture uses Google Cloud products along with the open source antivirus engine ClamAV. You can also use any other anti-malware engine that performs on-demand scanning in Linux containers. In this architecture, ClamAV runs in a Docker container hosted in Cloud Run. The pipeline also writes log entries to Cloud Logging and records metrics to Cloud Monitoring. Architecture The following diagram gives an overview of the architecture: The architecture shows the following pipelines: User-uploaded file scanning pipeline, which checks if an uploaded file contains malware. ClamAV malware database mirror update pipeline, which maintains an up-to-date mirror of the database of malware that ClamAV uses. The pipelines are described in more detail in the following sections. User-uploaded file scanning pipeline The file scanning pipeline operates as follows: End users upload their files to the unscanned Cloud Storage bucket. The Eventarc service catches this upload event and tells the Cloud Run service about this new file. The Cloud Run service downloads the new file from the unscanned Cloud Storage bucket and passes it to the ClamAV malware scanner. Depending on the result of the malware scan, the service performs one of the following actions: If ClamAV declares that the file is clean, then it's moved from the unscanned Cloud Storage bucket to the clean Cloud Storage bucket. If ClamAV declares that the file contains malware, then it's moved from the unscanned Cloud Storage bucket to the quarantined Cloud Storage bucket. The service reports the result of these actions to Logging and Monitoring to allow administrators to take action. ClamAV Malware database mirror update pipeline The ClamAV Malware database mirror update pipeline keeps an up-to-date private local mirror of the database in Cloud Storage. This ensures that the ClamAV public database is only accessed once per update to download the smaller differential updates files, and not the full database, which prevents any rate-limiting. This pipeline operates as follows: A Cloud Scheduler job is configured to trigger every two hours, which is the same as the default update check interval used by the ClamAV freshclam service. This job makes an HTTP POST request to the Cloud Run service instructing it to update the malware database mirror. The Cloud Run instance copies the malware database mirror from the Cloud Storage bucket to the local file system. The instance then runs the ClamAV CVDUpdate tool, which downloads any available differential updates and applies them to the database mirror. Then, it copies the updated malware database mirror back to the Cloud Storage bucket. On startup, the ClamAV freshclam service running in the Cloud Run instance downloads the malware database from Cloud Storage. During runtime, the service also regularly checks for and downloads any available database updates from the Cloud Storage bucket. Design considerations The following guidelines can help you to develop an architecture that meets your organization's requirements for reliability, cost, and operational efficiency. Reliability In order to scan effectively, the ClamAV malware scanner needs to maintain an up-to-date database of malware signatures. The ClamAV service is run using Cloud Run, which is a stateless service. 
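As a rough illustration of that stateless service, the following Node.js sketch downloads the object named in the Eventarc event, scans it with clamdscan, and moves it to a clean or quarantined bucket based on the exit code. The bucket names are placeholders, ClamAV is assumed to be installed in the container image, and the production service also writes log entries and metrics as described earlier.

// Minimal sketch of the scanning handler, assuming Eventarc delivers the Cloud Storage
// object metadata as JSON in the request body and clamdscan is available in the image.
const express = require('express');
const {Storage} = require('@google-cloud/storage');
const {execFile} = require('child_process');
const path = require('path');

const storage = new Storage();
const CLEAN_BUCKET = 'my-clean-bucket';             // placeholder
const QUARANTINED_BUCKET = 'my-quarantined-bucket'; // placeholder

const app = express();
app.use(express.json());

app.post('/', async (req, res) => {
  const {bucket, name} = req.body; // object metadata from the event payload
  const localPath = path.join('/tmp', path.basename(name));
  await storage.bucket(bucket).file(name).download({destination: localPath});

  // clamdscan exits with code 0 when the file is clean and 1 when malware is found.
  execFile('clamdscan', ['--no-summary', localPath], async (error) => {
    const infected = Boolean(error && error.code === 1);
    const target = infected ? QUARANTINED_BUCKET : CLEAN_BUCKET;
    await storage.bucket(bucket).file(name).move(storage.bucket(target).file(name));
    res.status(200).send(infected ? 'quarantined' : 'clean');
  });
});

app.listen(process.env.PORT || 8080);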
Upon startup of an instance of the service, ClamAV must always download the latest complete malware database, which is several hundred megabytes in size. The public malware database for ClamAV is hosted on a Content Distribution Network (CDN), which rate limits these downloads. If multiple instances start up and attempt to download the full database, rate limiting can be triggered. This causes the external IP address used by Cloud Run to be blocked for 24 hours. The block prevents the ClamAV service from starting up and prevents the download of malware database updates. Also, Cloud Run uses a shared pool of external IP addresses. As a result, downloads from different projects' malware scanning instances are seen by the CDN as coming from a single address and also trigger the block. Cost optimization This architecture uses the following billable components of Google Cloud: Cloud Storage Cloud Run Eventarc To generate a cost estimate based on your projected usage, use the pricing calculator. Operational efficiency To trigger log-based alerts for files that are infected, you can use log entries from Logging. However, setting up these alerts is outside the scope of this architecture. Deployment To deploy this architecture, see Deploy automated malware scanning for files uploaded to Cloud Storage. What's next Explore Cloud Storage documentation. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Reference_architecture(2).txt b/Reference_architecture(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..0a0e93f00b9e84124910648920465cb79d97f0e7 --- /dev/null +++ b/Reference_architecture(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/build-apps-using-gateway-and-cloud-service +Date Scraped: 2025-02-23T11:47:34.209Z + +Content: +Home Docs Cloud Architecture Center Send feedback From edge to multi-cluster mesh: Globally distributed applications exposed through GKE Gateway and Cloud Service Mesh Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-30 UTC This reference architecture describes the benefits of exposing applications externally through Google Kubernetes Engine (GKE) Gateways running on multiple GKE clusters within a service mesh. This guide is intended for platform administrators. You can increase the resiliency and redundancy of your services by deploying applications consistently across multiple GKE clusters, where each cluster becomes an additional failure domain. For example, a service's compute infrastructure with a service level objective (SLO) of 99.9% when deployed in a single GKE cluster achieves an SLO of 99.9999% when deployed across two GKE clusters (1 - (0.001)²). You can also provide users with an experience where incoming requests are automatically directed to the nearest available mesh ingress gateway. If you're interested in the benefits of exposing service-mesh-enabled applications that run on a single cluster, see From edge to mesh: Expose service mesh applications through GKE Gateway. Architecture The following architecture diagram shows how data flows through cloud ingress and mesh ingress: The preceding diagram shows the following data flow scenarios: From the client terminating at the Google Cloud load balancer using its own Google-managed TLS certificate.
From the Google Cloud load balancer to the mesh ingress proxy using its own self-signed TLS certificate. From the mesh ingress gateway proxy to the workload sidecar proxies using service mesh-enabled mTLS. This reference architecture contains the following two ingress layers: Cloud ingress: In this reference architecture, you use the Kubernetes Gateway API (and the GKE Gateway controller) to program the external, multi-cluster HTTP(S) load balancing layer. The load balancer checks the mesh ingress proxies across multiple regions, sending requests to the nearest healthy cluster. It also implements a Google Cloud Armor security policy. Mesh ingress: In the mesh, you perform health checks on the backends directly so that you can run load balancing and traffic management locally. When you use the ingress layers together, there are complementary roles for each layer. To achieve the following goals, Google Cloud optimizes the most appropriate features from the cloud ingress layer and the mesh ingress layer: Provide low latency. Increase availability. Use the security features of the cloud ingress layer. Use the security, authorization, and observability features of the mesh ingress layer. Cloud ingress When paired with mesh ingress, the cloud ingress layer is best used for edge security and global load balancing. Because the cloud ingress layer is integrated with the following services, it excels at running those services at the edge, outside the mesh: DDoS protection Cloud firewalls Authentication and authorization Encryption The routing logic is typically straightforward at the cloud ingress layer. However, it can be more complex for multi-cluster and multi-region environments. Because of the critical function of internet-facing load balancers, the cloud ingress layer is likely managed by a platform team that has exclusive control over how applications are exposed and secured on the internet. This control makes this layer less flexible and dynamic than a developer-driven infrastructure. Consider these factors when determining administrative access rights to this layer and how you provide that access. Mesh ingress When paired with cloud ingress, the mesh ingress layer provides a point of entry for traffic to enter the service mesh. The layer also provides backend mTLS, authorization policies, and flexible regex matching. Deploying external application load balancing outside of the mesh with a mesh ingress layer offers significant advantages, especially for internet traffic management. Although service mesh and Istio ingress gateways provide advanced routing and traffic management in the mesh, some functions are better served at the edge of the network. Taking advantage of internet-edge networking through Google Cloud's external Application Load Balancer might provide significant performance, reliability, or security-related benefits over mesh-based ingress. Note: This reference architecture refers to the routing for cloud ingress layers as north-south routing. The routing between mesh ingress layers and service layers, or the routing from service layers to service layers, is referred to as east-west routing. Products and features used The following list summarizes all the Google Cloud products and features that this reference architecture uses: GKE Enterprise: A managed Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure.
For the purpose of this reference architecture, each of the GKE clusters serving an application must be in the same fleet. Fleets and multi-cluster Gateways: Services that are used to create containerized applications at enterprise scale using Google's infrastructure and GKE Enterprise. Google Cloud Armor: A service that helps you to protect your applications and websites against denial of service and web attacks. Cloud Service Mesh: A fully managed service mesh based on Envoy and Istio Application Load Balancer: A proxy-based L7 load balancer that lets you run and scale your services. Certificate Manager: A service that lets you acquire and manage TLS certificates for use with Cloud Load Balancing. Fleets To manage multi-cluster deployments, GKE Enterprise and Google Cloud use fleets to logically group and normalize Kubernetes clusters. Using one or more fleets can help you uplevel management from individual clusters to entire groups of clusters. To reduce cluster-management friction, use the fleet principle of namespace sameness. For each GKE cluster in a fleet, ensure that you configure all mesh ingress gateways the same way. Also, consistently deploy application services so that the service balance-reader in the namespace account relates to an identical service in each GKE cluster in the fleet. The principles of sameness and trust that are assumed within a fleet are what let you use the full range of fleet-enabled features in GKE Enterprise and Google Cloud. East-west routing rules within the service mesh and traffic policies are handled at the mesh ingress layer. The mesh ingress layer is deployed on every GKE cluster in the fleet. Configure each mesh ingress gateway in the same manner, adhering to the fleet's principle of namespace sameness. Although there's a single configuration cluster for GKE Gateway, you should synchronize your GKE Gateway configurations across all GKE clusters in the fleet. If you need to nominate a new configuration cluster, use ConfigSync. ConfigSync helps ensure that all such configurations are synchronized across all GKE clusters in the fleet and helps avoid reconciling with a non-current configuration. Mesh ingress gateway Istio 0.8 introduced the mesh ingress gateway. The gateway provides a dedicated set of proxies whose ports are exposed to traffic coming from outside the service mesh. These mesh ingress proxies let you control network exposure behavior separately from application routing behavior. The proxies also let you apply routing and policy to mesh-external traffic before it arrives at an application sidecar. Mesh ingress defines the treatment of traffic when it reaches a node in the mesh, but external components must define how traffic first arrives at the mesh. To manage external traffic, you need a load balancer that's external to the mesh. To automate deployment, this reference architecture uses Cloud Load Balancing, which is provisioned through GKE Gateway resources. GKE Gateway and multi-cluster services There are many ways to provide application access to clients that are outside the cluster. GKE Gateway is an implementation of the Kubernetes Gateway API. GKE Gateway evolves and improves the Ingress resource. As you deploy GKE Gateway resources to your GKE cluster, the Gateway controller watches the Gateway API resources. The controller reconciles Cloud Load Balancing resources to implement the networking behavior that's specified by the Gateway resources. 
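To make the Gateway layer concrete, the following sketch shows one way to create a Gateway object programmatically with the Kubernetes Python client. The namespace, resource names, listener, and gateway class shown here are illustrative assumptions rather than values defined by this reference architecture; the associated deployment guide defines the actual resources, and you would more commonly apply them as manifests.

```python
# Sketch only: create a Kubernetes Gateway API object through the cluster API.
# All names, the namespace, and the gateway class are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a cluster
custom_api = client.CustomObjectsApi()

gateway = {
    "apiVersion": "gateway.networking.k8s.io/v1beta1",
    "kind": "Gateway",
    "metadata": {"name": "external-http", "namespace": "asm-ingress"},
    "spec": {
        # A GatewayClass whose name ends in "-mc" selects a multi-cluster load balancer.
        "gatewayClassName": "gke-l7-global-external-managed-mc",
        "listeners": [{"name": "http", "protocol": "HTTP", "port": 80}],
    },
}

custom_api.create_namespaced_custom_object(
    group="gateway.networking.k8s.io",
    version="v1beta1",
    namespace="asm-ingress",
    plural="gateways",
    body=gateway,
)
```

An HTTPRoute that references the exported mesh ingress Services would then be attached to a Gateway like this one, so that the provisioned load balancer can forward traffic to the mesh ingress proxies in each cluster.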
When using GKE Gateway, the type of load balancer you use to expose applications to clients depends largely on the following factors: Whether the backend services are in a single GKE cluster or distributed across multiple GKE clusters (in the same fleet). The status of the clients (external or internal). The required capabilities of the load balancer, including the capability to integrate with Google Cloud Armor security policies. The spanning requirements of the service mesh. Service meshes can span multiple GKE clusters or can be contained in a single cluster. In Gateway, this behavior is controlled by specifying the appropriate GatewayClass. The Gateway classes that can be used in multi-cluster scenarios have a class name ending in -mc. This reference architecture discusses how to expose application services externally through an external Application Load Balancer. However, when using Gateway, you can also create a multi-cluster regional internal Application Load Balancer. To deploy application services in multi-cluster scenarios, you can define the Google Cloud load balancer components in the following two ways: Multi Cluster Ingress and MultiClusterService resources Multi-cluster Gateway and Multi-cluster Services For more information about these two approaches to deploying application services, see Choose your multi-cluster load balancing API for GKE. Note: Both multi-cluster Ingress and multi-cluster GKE Gateway use multi-cluster services. Multi-cluster services can be deployed across GKE clusters in a fleet, and are identical for the purposes of cross-cluster service discovery. However, the manner in which these services are exposed across clusters within the fleet differs. Multi Cluster Ingress relies on creating MultiClusterService resources. Multi-cluster Gateway relies on creating ServiceExport resources, and referring to ServiceImport resources. When you use a multi-cluster Gateway, you can enable the additional capabilities of the underlying Google Cloud load balancer by creating Policies. The deployment guide associated with this reference architecture shows how to configure a Google Cloud Armor security policy to help protect backend services from cross-site scripting. These policy resources target the backend services in the fleet that are exposed across multiple clusters. In multi-cluster scenarios, all such policies must reference the ServiceImport resource and API group. Health checking One complexity of using two layers of L7 load balancing is health checking. You must configure each load balancer to check the health of the next layer. The GKE Gateway checks the health of the mesh ingress proxies, and the mesh, in return, checks the health of the application backends. Cloud ingress: In this reference architecture, you configure the Google Cloud load balancer through GKE Gateway to check the health of the mesh ingress proxies on their exposed health check ports. If a mesh proxy is down, or if the cluster, mesh, or region is unavailable, the Google Cloud load balancer detects this condition and doesn't send traffic to the mesh proxy. In this case, traffic would be routed to an alternate mesh proxy in a different GKE cluster or region. Mesh ingress: In the mesh application, you perform health checks on the backends directly so that you can run load balancing and traffic management locally.
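As an illustration of the cloud ingress health check, the following sketch attaches a health check configuration to the mesh ingress Service by creating a HealthCheckPolicy object with the Kubernetes Python client. The group, version, field names, Service name, and the probed port and path (15021 and /healthz/ready are the conventional Istio ingress gateway readiness endpoint) are assumptions that you should validate against the deployment guide.

```python
# Hypothetical sketch: point the Gateway-provisioned load balancer's health check at
# the mesh ingress proxy's readiness endpoint. Names, fields, port, and path are
# assumptions; check the deployment guide for the exact resource definition.
from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

health_check_policy = {
    "apiVersion": "networking.gke.io/v1",
    "kind": "HealthCheckPolicy",
    "metadata": {"name": "ingress-gateway-healthcheck", "namespace": "asm-ingress"},
    "spec": {
        "default": {
            "config": {
                "type": "HTTP",
                "httpHealthCheck": {"port": 15021, "requestPath": "/healthz/ready"},
            }
        },
        # Target the Service that fronts the mesh ingress proxies in this cluster.
        "targetRef": {"group": "", "kind": "Service", "name": "asm-ingressgateway"},
    },
}

custom_api.create_namespaced_custom_object(
    group="networking.gke.io",
    version="v1",
    namespace="asm-ingress",
    plural="healthcheckpolicies",
    body=health_check_policy,
)
```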
Design considerations This section provides guidance to help you use this reference architecture to develop an architecture that meets your specific requirements for security and compliance, reliability, and cost. Security, privacy, and compliance The architecture diagram in this document contains several security elements. The most critical elements are how you configure encryption and deploy certificates. GKE Gateway integrates with Certificate Manager for these security purposes. Internet clients authenticate against public certificates and connect to the external load balancer as the first hop in the Virtual Private Cloud (VPC). You can refer to a Certificate Manager CertificateMap in your Gateway definition. The next hop is between the Google Front End (GFE) and the mesh ingress proxy. That hop is encrypted by default. Network-level encryption between the GFEs and their backends is applied automatically. If your security requirements dictate that the platform owner retain ownership of the encryption keys, you can enable HTTP/2 with TLS encryption between the cluster gateway (the GFE) and the mesh ingress (the envoy proxy instance). When you enable HTTP/2 with TLS encryption between the cluster gateway and the mesh ingress, you can use a self-signed or a public certificate to encrypt traffic. You can use a self-signed or a public certificate because the GFE doesn't authenticate against it. This additional layer of encryption is demonstrated in the deployment guide associated with this reference architecture. To help prevent the mishandling of certificates, don't reuse public certificates. Use separate certificates for each load balancer in the service mesh. To help create external DNS entries and TLS certificates, the deployment guide for this reference architecture uses Cloud Endpoints. Using Cloud Endpoints lets you create an externally available cloud.goog subdomain. In enterprise-level scenarios, use a more appropriate domain name, and create an A record that points to the global Application Load Balancer IP address in your DNS service provider. If the service mesh you're using mandates TLS, then all traffic between sidecar proxies and all traffic to the mesh ingress is encrypted. The architecture diagram shows HTTPS encryption from the client to the Google Cloud load balancer, from the load balancer to the mesh ingress proxy, and from the ingress proxy to the sidecar proxy. Reliability and resiliency A key advantage of the multi-cluster, multi-regional edge-to-mesh pattern is that it can use all of the features of service mesh for east-west load balancing, like traffic between application services. This reference architecture uses a multi-cluster GKE Gateway to route incoming cloud-ingress traffic to a GKE cluster. The system selects a GKE cluster based on its proximity to the user (based on latency), and its availability and health. When traffic reaches the Istio ingress gateway (the mesh ingress), it's routed to the appropriate backends through the service mesh. An alternative approach for handling the east-west traffic is through multi-cluster services for all application services deployed across GKE clusters. When using multi-cluster services across GKE clusters in a fleet, service endpoints are collected together in a ClusterSet. If a service needs to call another service, then it can target any healthy endpoint for the second service. Because endpoints are chosen on a rotating basis, the selected endpoint could be in a different zone or a different region. 
A key advantage of using service mesh for east-west traffic rather than using multi-cluster services is that service mesh can use locality load balancing. Locality load balancing isn't a feature of multi-cluster services, but you can configure it through a DestinationRule. Once configured, a call from one service to another first tries to reach a service endpoint in the same zone, then it tries in the same region as the calling service. Finally, the call only targets an endpoint in another region if a service endpoint in the same zone or same region is unavailable. Cost optimization When adopting this multi-cluster architecture broadly across an enterprise, Cloud Service Mesh and multi-cluster Gateway are included in Google Kubernetes Engine (GKE) Enterprise edition. In addition, GKE Enterprise includes many features that enable you to manage and govern GKE clusters, applications, and other processes at scale. Deployment To deploy this architecture, see From edge to multi-cluster mesh: Deploy globally distributed applications through GKE Gateway and Cloud Service Mesh. What's next Learn about more features offered by GKE Gateway that you can use with your service mesh. Learn about the different types of Cloud Load Balancing available for GKE. Learn about the features and functionality offered by Cloud Service Mesh. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Alex Mattson | Application Specialist EngineerMark Chilvers | Application Specialist EngineerOther contributors: Abdelfettah Sghiouar | Cloud Developer AdvocateArunkumar Jayaraman | Principal EngineerGreg Bray | Customer EngineerMegan Yahya | Product ManagerPaul Revello | Cloud Solutions ArchitectValavan Rajakumar | Key Enterprise ArchitectMaridi (Raju) Makaraju | Supportability Tech Lead Send feedback \ No newline at end of file diff --git a/Reference_architecture(3).txt b/Reference_architecture(3).txt new file mode 100644 index 0000000000000000000000000000000000000000..87a3af78bc6766a6b4f060bcd3ea6d9daed0472c --- /dev/null +++ b/Reference_architecture(3).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/app-development-and-delivery-with-cloud-code-gcb-cd-and-gke +Date Scraped: 2025-02-23T11:47:47.932Z + +Content: +Home Docs Cloud Architecture Center Send feedback CI/CD pipeline for developing and delivering containerized apps Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2022-11-18 UTC This document describes an integrated set of Google Cloud tools to set up a system for development, for continuous integration (CI), and for continuous delivery (CD) that you can use to develop and deploy applications to Google Kubernetes Engine (GKE). This reference architecture document is intended for both software developers and operators. It assumes that you're familiar with running gcloud commands on Google Cloud and with deploying application containers to GKE. Architecture The following diagram shows the resources that are used in this architecture: This architecture includes the following components: Cloud Code as a development workspace. As part of this workspace, you can see changes in the development cluster, which runs on minikube. You run Cloud Code and the minikube cluster in Cloud Shell. Cloud Shell is an online development environment accessible from your browser. It has compute resources, memory, an integrated development environment, (IDE), and it also has Cloud Code installed. 
Cloud Build to build and test the application—the "CI" part of the pipeline This part of the pipeline includes the following actions: Cloud Build monitors changes to the source repository, using a Cloud Build trigger. When a change is committed into the main branch, the Cloud Build trigger does the following: Rebuilds the application container. Places build artifacts in a Cloud Storage bucket. Places the application container in Artifact Registry. Runs tests on the container. Calls Cloud Deploy to deploy the container to the staging environment. In this example, the staging environment is a Google Kubernetes Engine cluster. If the build and tests are successful, you can then use Cloud Deploy to promote the container from staging to production. Cloud Deploy to manage the deployment—the "CD" part of the pipeline. In this part of the pipeline, Cloud Deploy does the following: Registers a delivery pipeline and targets. The targets represent the staging and production clusters. Creates a Cloud Storage bucket and stores the Skaffold rendering source and rendered manifests in that bucket. Generates a new release for each source code change. Deploys the application to the production environment. For this deployment to production, an operator (or other designated person) manually approves the deployment. In this architecture, the production environment is a Google Kubernetes Engine cluster. In this architecture, configuration is shared among the development, staging, and production environments through Skaffold, a command-line tool that facilitates continuous development for Kubernetes-native applications. Google Cloud stores the application's source code in GitHub. This architecture uses Google Cloud products for most of the components of the system, with Skaffold enabling the integration of the system. Because Skaffold is open source, you can use these principles to create a similar system using a combination of Google Cloud, in-house, and third-party components. The modularity of this solution means that you can adopt it incrementally as part of your development and deployment pipeline. Use cases The following are the key features of this integrated system: Develop and deploy faster. The development loop is efficient because you can validate changes in the developer workspace. Deployment is fast because the automated CI/CD system and increased parity across the environments allow you to detect more issues when you deploy changes to production. Benefit from increased parity across development, staging, and production. The components of this system use a common set of Google Cloud tools. Reuse configurations across the different environments. This reuse is done with Skaffold, which allows a common configuration format for the different environments. It also allows developers and operators to update and use the same configuration. Apply governance early in the workflow. This system applies validation tests for governance at production and in the CI system and development environment. Applying governance in the development environment allows problems to be found and fixed earlier. Let opinionated tooling manage your software delivery. Continuous delivery is fully managed, separating the stages of your CD pipeline from the details of rendering and deploying. Deployment To deploy this architecture, see Develop and deploy containerized apps using a CI/CD pipeline. What's next To learn how to deploy into a private GKE instance, see Deploying to a private cluster on a Virtual Private Cloud network. 
For information about how to implement, improve, and measure deployment automation, see Deployment automation. Send feedback \ No newline at end of file diff --git a/Reference_architecture(4).txt b/Reference_architecture(4).txt new file mode 100644 index 0000000000000000000000000000000000000000..7f12bf193bb7a1d62d0523850c7a966827523310 --- /dev/null +++ b/Reference_architecture(4).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deploy-guacamole-gke +Date Scraped: 2025-02-23T11:47:57.110Z + +Content: +Home Docs Cloud Architecture Center Send feedback Apache Guacamole on GKE and Cloud SQL Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-09 UTC Apache Guacamole offers a fully browser-based way to access remote desktops through Remote Desktop Protocol (RDP), Virtual Network Computing (VNC), and Secure Shell Protocol (SSH) on Compute Engine virtual machines (VMs). Identity-Aware Proxy (IAP) provides access to Guacamole with improved security. This reference architecture document is intended for server administrators and engineers who want to host Apache Guacamole on Google Kubernetes Engine (GKE) and Cloud SQL. This document assumes you are familiar with deploying workloads to Kubernetes and Cloud SQL for MySQL. This document also assumes you are familiar with Identity and Access Management and Google Compute Engine. Note: Apache Guacamole is not a full Virtual Desktop Infrastructure (VDI) solution by itself. For alternative solutions that provide a full VDI, see Virtual Desktops Solutions. Architecture The following diagram shows how a Google Cloud load balancer is configured with IAP, to protect an instance of the Guacamole client running in GKE: This architecture includes the following components: Google Cloud load balancer: Distributes traffic across multiple instances, which reduces the risk of performance issues. IAP: Provides improved security through a custom authentication extension. Guacamole client: Runs in GKE and connects to the guacd backend service. Guacd backend service: Brokers remote desktop connections to one or more Compute Engine VMs. Guacamole database in Cloud SQL: Manages configuration data for Guacamole. Compute Engine instances: VMs hosted on the Google infrastructure. Design considerations The following guidelines can help you to develop an architecture that meets your organization's requirements for security, cost, and performance. Security and compliance This architecture uses IAP to help protect access to the Guacamole service. Authorized users sign in to the Guacamole instance through a custom IAP authentication extension. For details, see the custom extension in GitHub. When you add additional users (through the Guacamole user interface), these additional users must have permissions through IAM, with the IAP-secured Web App User role. The OAuth configuration that this deployment creates is set to internal. Because of this setting, you must use a Google account in the same organization as the one you use to deploy Guacamole. If you use a Google account outside the organization, you receive an HTTP/403 org_internal error. Performance Google Cloud load balancer and GKE distributes traffic across multiple instances, which helps to reduce the risk of performance issues. Deployment To deploy this architecture, see Deploy Apache Guacamole on GKE and Cloud SQL. What's Next? Review the GKE guidance on Hardening your cluster's security. 
Review Encrypt secrets at the application layer to increase security for secrets, such as database credentials and OAuth credentials. Review IAM Conditions to learn how to provide more granular control for user access to Guacamole. Understand more about how IAP integration works by reviewing the custom authentication provider in the GitHub repository. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Richard Grime | Principal Architect, UK Public SectorOther contributors: Aaron Lind | Solution Engineer, Application InnovationEyal Ben Ivri | Cloud Solutions ArchitectIdo Flatow | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Reference_architecture(5).txt b/Reference_architecture(5).txt new file mode 100644 index 0000000000000000000000000000000000000000..fbc3c2bf93e3a4065d093d59b6c10b48a8337917 --- /dev/null +++ b/Reference_architecture(5).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/manage-and-scale-windows-networking +Date Scraped: 2025-02-23T11:48:27.948Z + +Content: +Home Docs Cloud Architecture Center Send feedback Manage and scale networking for Windows applications that run on managed Kubernetes Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-14 UTC This reference architecture provides a highly available and scalable solution that uses Cloud Service Mesh and Envoy gateways to manage network traffic for Windows applications that run on Google Kubernetes Engine (GKE). It explains how to manage that network traffic by using a service that can route traffic to Pods and to an open-source xDS-compliant proxy. Using an architecture like this can help to reduce costs and improve network management. This document is intended for cloud architects, network administrators and IT professionals who are responsible for designing and managing Windows applications running on GKE. Architecture The following diagram shows an architecture for managing networking for Windows applications running on GKE using Cloud Service Mesh and Envoy gateways: The architecture includes the following components: A regional GKE cluster with both Windows and Linux node pools. Two Windows applications running in two separate GKE Pods. Each application is exposed by a ClusterIP-type Kubernetes Service and a network endpoint group (NEG). Cloud Service Mesh creates and manages the traffic routes to the NEGs for each GKE Pod. Each route is mapped to a specific scope. That scope uniquely identifies a Cloud Service Mesh ingress gateway. HTTP routes that map to the backend services for Cloud Service Mesh. Envoy container Pods that act as an Envoy Gateway to the GKE cluster. Envoy gateways that run on Linux nodes. The gateways are configured to direct traffic to the Windows applications through the services that correspond to those applications. Envoy is configured to use the scope parameter to load the configuration details of the relevant Cloud Service Mesh services. An internal Application Load Balancer that terminates SSL traffic and directs all external incoming traffic to the Envoy gateways. Products used This reference architecture uses the following Google Cloud and third-party products: Google Cloud products Cloud Load Balancing: A portfolio of high performance, scalable, global and regional load balancers. 
Google Kubernetes Engine (GKE): A Kubernetes service that you can use to deploy and operate containerized applications at scale using Google's infrastructure. Cloud Service Mesh: A suite of tools that helps you monitor and manage a reliable service mesh on-premises or on Google Cloud. Third-party products Envoy Gateway: Manages an Envoy proxy as a standalone or Kubernetes-based application gateway. Gateway API: An official Kubernetes project focused on L4 and L7 routing in Kubernetes. Use case The main use case for this reference architecture is to manage network traffic for Windows applications that run on GKE. This architecture provides the following benefits: Simplified network management: Cloud Service Mesh and Envoy gateways provide simplified network management through a centralized control plane that manages network traffic to applications. These applications can be either Linux or Windows applications that run on GKE or Compute Engine. Using this simplified network management scheme reduces the need for manual configuration. Enhanced scalability and availability: To meet your changing demands, use Cloud Service Mesh and Envoy gateways to scale your Linux and Windows applications. You can also use Envoy gateways to provide high availability for your applications by load balancing traffic across multiple Pods. Improved security: Use Envoy gateways to add security features to your Linux and Windows applications, such as SSL termination, authentication, and rate limiting. Reduced costs: Both Cloud Service Mesh and Envoy gateways can help reduce the costs of managing network traffic for Linux and Windows applications. Design considerations This section provides guidance to help you develop an architecture that meets your specific requirements for security, reliability, cost, and efficiency. Security Secured networking: The architecture uses an internal Application Load Balancer to encrypt incoming traffic to the Windows containers. Encryption in transit helps to prevent data leakage. Windows containers: Windows containers help provide a secure and isolated environment for containerized applications. Reliability Load balancing: The architecture uses multiple layers of Cloud Load Balancing to distribute traffic across the Envoy gateways and the Windows containers. Fault tolerance: This architecture is fault tolerant with no single point of failure. This design helps to ensure that it's always available, even if one or more of the components fails. Autoscaling: The architecture uses autoscaling to automatically scale the number of Envoy gateways and Windows containers based on the load. Autoscaling helps to ensure that the gateways, and the applications, can handle spikes in traffic without experiencing performance issues. Monitoring: The architecture uses Google Cloud Managed Service for Prometheus and Cloud Operations to monitor the health of the Envoy gateways and Windows containers. Monitoring helps you identify issues early and potentially prevent them from disrupting your applications. Cost optimization Choose the right instance types for your workloads: Consider the following factors when choosing instance types: The number of vCPUs and memory your applications require The expected traffic load for your applications The need for users to have highly available applications Use autoscaling: Autoscaling can help you save money by automatically scaling your Windows workloads vertically and horizontally. Vertical scaling tunes container requests and limits according to customer use. 
Automate vertical scaling with vertical Pod autoscaling. Horizontal scaling adds or removes Kubernetes Pods to meet demand. Automate horizontal scaling with horizontal Pod autoscaling. Use Cloud Service Mesh and Envoy gateways: Cloud Service Mesh and Envoy gateways can help you save money by efficiently routing traffic to your Windows applications. Using more efficient routing can help reduce the amount of bandwidth you must purchase. It can also help improve the performance of those applications. Use shared Virtual Private Cloud (VPC) networks: Shared Virtual Private Cloud networks let you share a single VPC across multiple projects. Sharing can help you save money by reducing the number of VPCs that you need to create and manage. Operational efficiency Multiple domains with a single internal load balancer: The architecture uses internal Application Load Balancers to offload SSL traffic. Each HTTPS target proxy can support multiple SSL certificates (up to the supported maximum) to manage multiple applications with different domains. Infrastructure as Code (IaC): To manage the infrastructure, the architecture can be deployed using IaC. IaC helps to ensure that your infrastructure is consistent and repeatable. Deployment To deploy this architecture, see Deploy Windows applications running on managed Kubernetes. What's next Learn more about the Google Cloud products used in this design guide: GKE networking best practices Best practices for running cost effective Kubernetes applications on GKE Cloud Service Mesh control plane observability Using self-managed SSL certificates with load balancers For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Eitan Eibschutz | Staff Technical Solutions ConsultantOther contributors: John Laham | Solutions ArchitectKaslin Fields | Developer AdvocateMaridi (Raju) Makaraju | Supportability Tech LeadValavan Rajakumar | Key Enterprise ArchitectVictor Moreno | Product Manager, Cloud Networking Send feedback \ No newline at end of file diff --git a/Reference_architecture(6).txt b/Reference_architecture(6).txt new file mode 100644 index 0000000000000000000000000000000000000000..7bfcf9d1455f486ad66c0962e013cb303ac12441 --- /dev/null +++ b/Reference_architecture(6).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/scalable-bigquery-backup-automation +Date Scraped: 2025-02-23T11:49:06.709Z + +Content: +Home Docs Cloud Architecture Center Send feedback Scalable BigQuery backup automation Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-09-17 UTC This architecture provides a framework and reference deployment to help you develop your BigQuery backup strategy. This recommended framework and its automation can help your organization do the following: Adhere to your organization's disaster recovery objectives. Recover data that was lost due to human errors. Comply with regulations. Improve operational efficiency. The scope of BigQuery data can include (or exclude) folders, projects, datasets, and tables. This recommended architecture shows you how to automate the recurrent backup operations at scale. You can use two backup methods for each table: BigQuery snapshots and BigQuery exports to Cloud Storage. This document is intended for cloud architects, engineers, and data governance officers who want to define and automate data policies in their organizations. 
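Before looking at the architecture, it can help to see what the two backup methods look like as API calls. The following sketch uses the BigQuery Python client library; the project, dataset, table, bucket, and retention values are placeholders, and the real solution wraps calls like these in the Cloud Run services, backup policies, and time travel calculations described later in this document.

```python
# Sketch only: the two backup methods used by this architecture, with placeholder names.
from datetime import datetime, timedelta, timezone

from google.cloud import bigquery

client = bigquery.Client()
source = "my-project.sales.transactions"

# Method 1: a BigQuery table snapshot (a copy job with the SNAPSHOT operation type).
snapshot_id = "my-backup-project.backups.transactions_20250223"
client.copy_table(
    source,
    snapshot_id,
    job_config=bigquery.CopyJobConfig(operation_type="SNAPSHOT"),
).result()

# Optionally give the snapshot a TTL so that its storage costs don't accrue indefinitely.
snapshot = client.get_table(snapshot_id)
snapshot.expires = datetime.now(timezone.utc) + timedelta(days=30)
client.update_table(snapshot, ["expires"])

# Method 2: an export of the table to Cloud Storage.
client.extract_table(
    source,
    "gs://my-backup-bucket/sales/transactions/20250223/*.avro",
    job_config=bigquery.ExtractJobConfig(destination_format="AVRO"),
).result()
```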
Architecture The following diagram shows the automated backup architecture: The workflow that's shown in the preceding diagram includes the following phases: Cloud Scheduler triggers a run to the dispatcher service through a Pub/Sub message, which contains the scope of the BigQuery data that's included and excluded. Runs are scheduled by using a cron expression. The dispatcher service, which is built on Cloud Run, uses the BigQuery API to list the tables that are within the BigQuery scope. The dispatcher service submits one request for each table to the configurator service through a Pub/Sub message. The Cloud Run configurator service computes the backup policy of the table from one of the following defined options: The table-level policy, which is defined by data owners. The fallback policy, which is defined by the data governance officer, for tables that don't have defined policies. For details about backup policies, see Backup policies. The configurator service submits one request for each table to the next service, based on the computed backup policy. Depending on the backup method, one of the following custom Cloud Run services submits a request to the BigQuery API and runs the backup process: The service for BigQuery snapshots backs up the table as a snapshot. The service for data exports backs up the table as a data export to Cloud Storage. When the backup method is a table data export, a Cloud Logging log sink listens to the export jobs completion events in order to enable the asynchronous execution of the next step. After the backup services complete their operations, Pub/Sub triggers the tagger service. For each table, the tagger service logs the results of the backup services and updates the backup state in the Cloud Storage metadata layer. Products used This reference architecture uses the following Google Cloud products: BigQuery: An enterprise data warehouse that helps you manage and analyze your data with built-in features like machine learning geospatial analysis, and business intelligence. Cloud Logging: A real-time log management system with storage, search, analysis, and alerting. Pub/Sub: An asynchronous and scalable messaging service that decouples services that produce messages from services that process those messages. Cloud Run: A serverless compute platform that lets you run containers directly on top of Google's scalable infrastructure. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Cloud Scheduler: A fully managed enterprise-grade cron job scheduler that lets you set up scheduled units of work to be executed at defined times or regular intervals. Datastore: A highly scalable NoSQL database for your web and mobile applications. Use cases This section provides examples of use cases for which you can use this architecture. Backup automation As an example, your company might operate in a regulated industry and use BigQuery as the main data warehouse. Even when your company follows best practices in software development, code review, and release engineering, there's still a risk of data loss or data corruption due to human errors. In a regulated industry, you need to minimize this risk as much as possible. Examples of these human errors include the following: Accidental deletion of tables. Data corruption due to erroneous data pipeline logic. 
These types of human errors can usually be resolved with the time travel feature, which lets you recover data from up to seven days ago. In addition, BigQuery also offers a fail-safe period, during which deleted data is retained in fail-safe storage for an additional seven days after the time travel window. That data is available for emergency recovery through Cloud Customer Care. However, if your company doesn't discover and fix such errors within this combined timeframe, the deleted data is no longer recoverable from its last stable state. To mitigate this, we recommend that you execute regular backups for any BigQuery tables that can't be reconstructed from source data (for example, historical records or KPIs with evolving business logic). Your company could use basic scripts to back up tens of tables. However, if you need to regularly back up hundreds or thousands of tables across the organization, you need a scalable automation solution that can do the following: Handle different Google Cloud API limits. Provide a standardized framework for defining backup policies. Provide transparency and monitoring capabilities for the backup operations. Backup policies Your company might also require that the backup policies be defined by the following groups of people: Data owners, who are most familiar with the tables and can set the appropriate table-level backup policies. Data governance team, who ensure that a fallback policy is in place to cover any tables that don't have a table-level policy. The fallback policy ensures that certain datasets, projects, and folders are backed up to comply with your company's data retention regulations. In the deployment for this reference architecture, there are two ways to define the backup policies for tables, and they can be used together: Data owner configuration (decentralized): a table-level backup policy, which is manually attached to a table. The data owner defines a table-level JSON file that's stored in a common bucket. Manual policies take precedence over fallback policies when the solution determines the backup policy of a table. For details in the deployment, see Set table-level backup policies. Organization default configuration (centralized): a fallback policy, which applies only to tables that don't have manually-attached policies. A data governance team defines a central JSON file in Terraform, as part of the solution. The fallback policy offers default backup strategies on folder, project, dataset, and table levels. For details in the deployment, see Define fallback backup policies. Backup versus replication A backup process makes a copy of the table data from a certain point in time, so that it can be restored if the data is lost or corrupted. Backups can be run as a one-time occurrence or recurrently (through a scheduled query or workflow). In BigQuery, point-in-time backups can be achieved with snapshots. You can use snapshots to keep copies of the data beyond the seven-day time travel period within the same storage location as the source data. BigQuery snapshots are particularly helpful for recovering data after human errors that lead to data loss or corruption, rather than recovering from regional failures. BigQuery offers a Service Level Objective (SLO) of 99.9% to 99.99%, depending on the edition. By contrast, replication is the continuous process of copying database changes to a secondary (or replica) database in a different location. 
In BigQuery, cross-region replication can help provide geo-redundancy by creating read-only copies of the data in secondary Google Cloud regions, which are different from the source data region. However, BigQuery cross-region replication isn't intended for use as a disaster recovery plan for total-region outage scenarios. For resilience against regional disasters, consider using BigQuery managed disaster recovery. BigQuery cross-region replication provides a synchronized read-only copy of the data in a region that is close to the data consumers. These data copies enable collocated joins and avoid cross-regional traffic and cost. However, in cases of data corruption due to human error, replication alone can't help with recovery, because the corrupted data is automatically copied to the replica. In such cases, point-in-time backups (snapshots) are a better choice. The following table shows a summarized comparison of backup methods and replication: Method Frequency Storage location Use cases Costs Backup (Snapshots or Cloud Storage export) One-time or recurrently Same as the source table data Restore original data, beyond the time travel period Snapshots incur storage charges for data changes in the snapshot only Exports can incur standard storage charges See Cost optimization Cross-region replication Continuously Remote Create a replica in another region One-time migrations between regions Incurs charges for storing data in the replica Incurs data replication costs Design considerations This section provides guidance for you to consider when you use this reference architecture to develop a topology that meets your specific requirements for security, reliability, cost optimization, operational efficiency, and performance. Security, privacy, and compliance The deployment incorporates the following security measures in its design and implementation: The network ingress setting for Cloud Run accepts only internal traffic, to restrict access from the internet. It also allows only authenticated users and service accounts to call the services. Each Cloud Run service and Pub/Sub subscription uses a separate service account, which has only the required permissions assigned to it. This mitigates the risks associated with using one service account for the system and follows the principle of least privilege. For privacy considerations, the solution doesn't collect or process personally identifiable information (PII). However, if the source tables have exposed PII, the backups taken of those tables also include this exposed data. The owner of the source data is responsible for protecting any PII in the source tables (for example, by applying column-level security, data masking, or redaction). The backups are secure only when the source data is secured. Another approach is to make sure that projects, datasets, or buckets that hold backup data with exposed PII have the required Identity and Access Management (IAM) policies that restrict access to only authorized users. As a general-purpose solution, the reference deployment doesn't necessarily comply with a particular industry's specific requirements. Reliability This section describes features and design considerations for reliability. Failure mitigation with granularity To take backups of thousands of tables, it's likely that you might reach API limits for the underlying Google Cloud products (for example, snapshot and export operation limits for each project). 
However, if the backup of one table fails due to misconfiguration or other transient issues, that shouldn't affect the overall execution and ability to back up other tables. To mitigate potential failures, the reference deployment decouples the processing steps by using granular Cloud Run services and connecting them through Pub/Sub. If a table backup request fails at the final tagger service step, Pub/Sub retries only this step and it doesn't retry the entire process. Breaking down the flow into multiple Cloud Run services, instead of multiple endpoints hosted under one Cloud Run service, helps provide granular control of each service configuration. The level of configuration depends on the service's capabilities and the APIs that it communicates with. For example, the dispatcher service executes once per run, but it requires a substantial amount of time to list all the tables within the BigQuery backup scope. Therefore, the dispatcher service requires higher time-out and memory settings. However, the Cloud Run service for BigQuery snapshots executes once per table in a single run, and completes in less time than the dispatcher service. Therefore, the Cloud Run service requires a different set of configurations at the service level. Data consistency Data consistency across tables and views is crucial for maintaining a reliable backup strategy. Because data is continuously updated and modified, backups taken at different times might capture different states of your dataset. These backups in different states can lead to inconsistencies when you restore data, particularly for tables that belong to the same functional dataset. For example, restoring a sales table to a point in time that's different from its corresponding inventory table could create a mismatch in available stock. Similarly, database views that aggregate data from multiple tables can be particularly sensitive to inconsistencies. Restoring these views without ensuring that the underlying tables are in a consistent state could lead to inaccurate or misleading results. Therefore, when you design your BigQuery backup policies and frequencies, it's imperative to consider this consistency and ensure that your restored data accurately reflects the real-world state of your dataset at a given point in time. For example, in the deployment for this reference architecture, data consistency is controlled through the following two configurations in the backup policies. These configurations compute the exact table snapshot time through time travel, without necessarily backing up all tables at the same time. backup_cron: Controls the frequency with which a table is backed up. The start timestamp of a run is used as a reference point for time travel calculation for all tables that are backed up in this run. backup_time_travel_offset_days: Controls how many days in the past should be subtracted from the reference point in time (run start time), to compute the exact time travel version of the table. Automated backup restoration Although this reference architecture focuses on backup automation at scale, you can consider restoring these backups in an automated way as well. This additional automation can provide similar benefits to those of the backup automation, including improved recovery efficiency and speed, with less downtime. Because the solution keeps track of all backup parameters and results through the tagger service, you could develop a similar architecture to apply the restoration operations at scale. 
For example, you could create a solution based on an on-demand trigger that sends a scope of BigQuery data to a dispatcher service, which dispatches one request per table to a configurator service. The configurator service could fetch the backup history that you want for a particular table. The configurator service could then pass it on to either a BigQuery snapshot restoration service or Cloud Storage restoration service to apply the restoration operation accordingly. Lastly, a tagger service could store the results of these operations in a state store. By doing so, the automated restoration framework can benefit from the same design objectives as the backup framework detailed in this document. Cost optimization The framework of this architecture provides backup policies that set the following parameters for overall cost optimization: Backup method: The framework offers the following two backup methods: BigQuery snapshots, which incur storage costs based on updated and deleted data compared to the base table. Therefore, snapshots are more cost effective for tables that are append-only or have limited updates. BigQuery exports to Cloud Storage, which incur standard storage charges. However, for large tables that follow a truncate and load approach, it's more cost effective to back them up as exports in less expensive storage classes. Snapshot expiration: The time to live (TTL) is set for a single table snapshot, to avoid incurring storage costs for the snapshot indefinitely. Storage costs can grow over time if tables have no expiration. Operational efficiency This section describes features and considerations for operational efficiency. Granular and scalable backup policies One of the goals of this framework is operational efficiency by scaling up business output while keeping business input relatively low and manageable. For example, the output is a high number of regularly backed up tables, while the input is a small number of maintained backup policies and configurations. In addition to allowing backup policies at the table level, the framework also allows for policies at the dataset, project, folder, and global level. This means that with a few configurations at higher levels (for example, the folder or project level), hundreds or thousands of tables can be backed up regularly, at scale. Observability With an automation framework, it's critical that you understand the statuses of the processes. For example, you should be able to find the information for the following common queries: The backup policy that is used by the system for each table. The backup history and backup locations of each table. The overall status of a single run (the number of processed tables and failed tables). The fatal errors that occurred in a single run, and the components or steps of the process in which they occurred. To provide this information, the deployment writes structured logs to Cloud Logging at each execution step that uses a Cloud Run service. The logs include the input, output, and errors, along with other progress checkpoints. A log sink routes these logs to a BigQuery table. You can run a number of queries to monitor runs and get reports for common observability use cases. For more information about logs and queries in BigQuery, see View logs routed to BigQuery. Performance optimization To handle thousands of tables at each run, the solution processes backup requests in parallel. 
The dispatcher service lists all of the tables that are included within the BigQuery backup scope and it generates one backup request per table at each run. This enables the application to process thousands of requests and tables in parallel, not sequentially. Some of these requests might initially fail for temporary reasons such as reaching the limits of the underlying Google Cloud APIs or experiencing network issues. Until the requests are completed, Pub/Sub automatically retries the requests with the exponential backoff retry policy. If there are fatal errors such as invalid backup destinations or missing permissions, the errors are logged and the execution of that particular table request is terminated without affecting the overall run. Limits The following quotas and limits apply to this architecture. For table snapshots, the following applies for each backup operation project that you specify: One project can run up to 100 concurrent table snapshot jobs. One project can run up to 50,000 table snapshot jobs per day. One project can run up to 50 table snapshot jobs per table per day. For details, see Table snapshots. For export jobs (exports to Cloud Storage), the following applies: You can export up to 50 TiB of data per day from a project for free, by using the shared slot pool. One project can run up to 100,000 exports per day. To extend this limit, create a slot reservation. For more information about extending these limits, see Export jobs. Regarding concurrency limits, this architecture uses Pub/Sub to automatically retry requests that fail due to these limits, until they're served by the API. However, for other limits on the number of operations per project per day, these could be mitigated by either a quota-increase request, or by spreading the backup operations (snapshots or exports) across multiple projects. To spread operations across projects, configure the backup policies as described in the following deployment sections: Define fallback backup policies Configure additional backup operation projects Set table-level backup policies Deployment To deploy this architecture, see Deploy scalable BigQuery backup automation. What's next Learn more about BigQuery: BigQuery table snapshots BigQuery table exports to Cloud Storage For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Karim Wadie | Strategic Cloud EngineerOther contributors: Chris DeForeest | Site Reliability EngineerEyal Ben Ivri | Cloud Solutions ArchitectJason Davenport | Developer AdvocateJaliya Ekanayake | Engineering ManagerMuhammad Zain | Strategic Cloud Engineer Send feedback \ No newline at end of file diff --git a/Reference_architecture(7).txt b/Reference_architecture(7).txt new file mode 100644 index 0000000000000000000000000000000000000000..04dc4e5fa2afaead85c6641bd31ff9997dcd2200 --- /dev/null +++ b/Reference_architecture(7).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/using-apache-hive-on-cloud-dataproc +Date Scraped: 2025-02-23T11:49:14.795Z + +Content: +Home Docs Cloud Architecture Center Send feedback Use Apache Hive on Dataproc Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-05-08 UTC Important: We recommend that you use Dataproc Metastore. to manage Hive metadata on Google Cloud, rather than the legacy workflow described in the deployment. 
This reference architecture describes the benefits of using Apache Hive on Dataproc in an efficient and flexible way by storing Hive data in Cloud Storage and hosting the Hive metastore in a MySQL database on Cloud SQL. This document is intended for cloud architects and data engineers who are interested in deploying Apache Hive on Dataproc and the Hive Metastore in Cloud SQL. Architecture The following diagram shows the lifecycle of a Hive query. In the diagram, the lifecycle of a Hive query follows these steps: The Hive client submits a query to a Hive server that runs in an ephemeral Dataproc cluster. The Hive server processes the query and requests metadata from the metastore service. The Metastore service fetches Hive metadata from Cloud SQL through the Cloud SQL Proxy. The Hive server loads data from the Hive warehouse located in a regional bucket in Cloud Storage. The Hive server returns the result to the client. Design alternatives The following section presents a potential design alternative for this architecture. Multi-regional architecture Consider using a multi-regional architecture if you need to run Hive servers in different geographic regions. In that case, you should create separate Dataproc clusters that are dedicated to hosting the metastore service and that reside in the same region as the Cloud SQL instance. The metastore service can sometimes send high volumes of requests to the MySQL database, so it is critical to keep the metastore service geographically close to the MySQL database in order to minimize impact on performance. In comparison, the Hive server typically sends far fewer requests to the metastore service. Therefore, it can be more acceptable for the Hive server and the metastore service to reside in different regions despite the increased latency. The metastore service can run only on Dataproc master nodes, not on worker nodes. Dataproc enforces a minimum of two worker nodes in standard clusters and in high-availability clusters. To avoid wasting resources on unused worker nodes, you can create a single-node cluster for the metastore service instead. To achieve high availability, you can create multiple single-node clusters. The Cloud SQL proxy needs to be installed only on the metastore service clusters, because only the metastore service clusters need to directly connect to the Cloud SQL instance. The Hive servers then point to the metastore service clusters by setting the hive.metastore.uris property to the comma-separated list of URIs. For example: thrift://metastore1:9083,thrift://metastore2:9083 You can also consider using a dual-region or multi-region bucket if the Hive data needs to be accessed from Hive servers that are located in multiple locations. The choice between different bucket location types depends on your use case. You must balance latency, availability, and costs. The following diagram shows an example of a multi-regional architecture. As you can see, the multi-regional scenario is slightly more complex and much more robust. The deployment guide for this reference architecture uses a single-region scenario. Advantages of a multi-regional architecture Separating compute and storage resources offers some advantages: Flexibility and agility: You can tailor cluster configurations for specific Hive workloads and scale each cluster independently up and down as needed. Cost savings: You can spin up an ephemeral cluster when you need to run a Hive job and then delete it when the job completes. 
The resources that your jobs require are active only when they're being used, so you pay only for what you use. You can also use preemptible VMs for noncritical data processing or to create very large clusters at a lower total cost. Resilience: For simplicity, this reference architecture uses only one master instance. To increase resilience in production workloads, you should consider creating a cluster with three master instances by using Dataproc's high availability mode. Cost optimization This reference architecture and deployment uses the following billable components of Google Cloud: Dataproc Cloud Storage Cloud SQL You can use the pricing calculator to generate a cost estimate based on your projected usage. New Google Cloud users might be eligible for a free trial. Deployment To deploy this architecture, see Deploy Apache Hive on Dataproc. What's next Try BigQuery, Google's serverless, highly scalable, low-cost enterprise data warehouse. Check out this guide on migrating Hadoop workloads to Google Cloud. Check out this initialization action for more details on how to use Hive HCatalog on Dataproc. Learn how to configure Cloud SQL for high availability to increase service reliability. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Reference_architecture(8).txt b/Reference_architecture(8).txt new file mode 100644 index 0000000000000000000000000000000000000000..cb2d30dd99afca63b886eb705ebd1290a21e2269 --- /dev/null +++ b/Reference_architecture(8).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/riot-live-migration-redis-enterprise-cloud +Date Scraped: 2025-02-23T11:53:03.684Z + +Content: +Home Docs Cloud Architecture Center Send feedback Use RIOT Live Migration to migrate to Redis Enterprise Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-29 UTC This document describes an architecture to migrate from Redis compatible sources like Redis Open Source (Redis OSS), AWS ElastiCache, and Azure Cache for Redis to fully managed Redis Enterprise Cloud in Google Cloud using the Redis Input and Output Tool (RIOT) Live Migration service. This document is intended for database architects, database administrators, and database engineers who want to migrate from Redis-compatible sources to fully managed Redis Enterprise Cloud. Redis Enterprise Cloud is a fully managed, enterprise-grade Redis solution that can help support your mission-critical applications. Compared to Redis-compatible sources, it provides enhanced scalability, availability, security, and operational efficiency. By using RIOT—a free, command-line utility—you can migrate your data from Redis to Redis Enterprise Cloud without any service interruption or downtime. Architecture The following diagram shows the migration architecture: In the diagram, RIOT Live Migration Service is used to migrate Redis-compatible sources to Redis Enterprise Cloud. The architecture contains the following components: Source: Redis-compatible sources like Redis OSS, AWS ElastiCache, and Azure Redis. Target: Redis Enterprise Cloud running in a Redis managed VPC. Migration Service: RIOT running on Compute Engine virtual machines (VMs). Products used This reference architecture uses the following Google Cloud and third-party products: Compute Engine: A secure and customizable compute service that lets you create and run VMs on Google's infrastructure. 
RIOT Live Migration: A free, command-line utility that's designed to help you get data in and out of Redis. Redis Enterprise Cloud on Google Cloud: A fully managed, enterprise-grade Redis solution that can help support your mission-critical applications. Use case Redis offers sub-millisecond latency, advanced data structure support, resiliency, and open source portability. However, it can be difficult to scale self-managed Redis-compatible sources to meet the demanding workloads of enterprises while maintaining ultra-low latencies. When you outgrow your self-managed Redis cluster deployment, you might struggle to scale. It's time-consuming and error-prone to architect a highly available solution and manage replication. Scaling also presents logistic challenges and costs associated with hardware management, patching, and upgrades. To help you solve these challenges, Redis Enterprise Cloud fully integrates with Google Cloud to provide a real time database service to run, scale, and manage Redis. Redis Enterprise Cloud offers an open source core, full enterprise grade functionality and security, market-leading performance, scalability, and availability that business-critical applications require. Redis Enterprise Cloud offers sub-millisecond latency, single digit seconds failover, and five-nines uptime. Design alternatives RIOT provides a flexible migration solution in and out of Redis. The following sections present potential design alternatives for this architecture. The alternatives either incur downtime or require that the target database is in a Redis Flexible (or Annual) subscription. RDB snapshots Redis Database (RDB) snapshot is one way that you can persist your data in Redis on durable storage. It performs point-in-time snapshots of your dataset, and it's commonly used to back up data in Redis. As an alternative to using RIOT to perform your migration, you can use RDB snapshot to migrate from a Redis OSS instance to Redis Enterprise. However, unlike RIOT, RDB snapshot doesn't support live migration and will incur downtime. Sync using Active-Passive You can use the Redis OSS ReplicaOf command to configure a Redis instance as a replica of another Redis server. The command is used in the context of Redis replication, which lets you create copies of your data in different Redis instances. Like RIOT, the ReplicaOf command supports live migration and incurs zero downtime, but the command is built in to Redis OSS, so you don't need to install any tools. Redis Enterprise's Active-Passive Geo Distribution uses the ReplicaOf command to scale a Redis deployment in multiple geographical locations. If the target database is in a Flexible (or Annual) subscription, the command can also be used to migrate data from a Redis database to Redis Enterprise Cloud subscriptions. However, the command doesn't work if the target is a Fixed subscription, and it doesn't work between flexible subscriptions from different Redis Cloud accounts. Design considerations The following guidelines can help you to develop an architecture that meets your organization's requirements for reliability, cost, and performance. Reliability The migration in this architecture is a one-way migration from a source Redis OSS instance to a target Redis Enterprise instance. After you complete a cutover from the source Redis OSS to the target Redis Enterprise cluster, the source isn't kept up to date with changes to the target cluster. 
Therefore, if you implement this architecture in a production environment, you can't switch your applications to up-to-date source instances in a fallback. Cost optimization When you migrate Redis OSS instances to Redis Enterprise, we recommend that you group your target Redis Enterprise databases into subscriptions so that you can lower the total cost of ownership by using multi-tenancy. For example, if you have a group of databases that are designed for development and testing, you can group them in a single subscription because they share common characteristics and network requirements. Similarly, a group of databases for production can be hosted on a different subscription. Performance RIOT Live Migration supports near-zero downtime. During the migration from the source Redis OSS instance, your applications can still access the source Redis OSS instance without any impact. During the migration process, after the initial load of data from Redis OSS, RIOT Live Migration continues to migrate changes from Redis OSS as they occur. After the initial key-value pair data is migrated, you perform the cutover from the source Redis OSS instance to the target Redis Enterprise instance. As part of the cutover process, you suspend client writes to the source Redis OSS instance. You then wait for RIOT to process any remaining changes from the source Redis OSS instance to the target Redis Enterprise instance. Deployment To deploy this architecture, see Deploy RIOT Live Migration to migrate from Redis Open Source to Redis Enterprise Cloud. What's next Read Google Cloud data migration content. For more in-depth documentation and best practices, review the RIOT documentation. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Saurabh Kumar | ISV Partner EngineerGilbert Lau | Principal Cloud Architect, RedisOther contributors: Chris Mague | Customer Engineer, Data ManagementGabe Weiss | Developer Advocacy ManagerMarco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Reference_architecture(9).txt b/Reference_architecture(9).txt new file mode 100644 index 0000000000000000000000000000000000000000..ca11c60d4b3b64307830e29804b34abf0eb31ac1 --- /dev/null +++ b/Reference_architecture(9).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/import-logs-from-storage-to-logging +Date Scraped: 2025-02-23T11:53:14.293Z + +Content: +Home Docs Cloud Architecture Center Send feedback Import logs from Cloud Storage to Cloud Logging Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-19 UTC This reference architecture describes how you can import logs that were previously exported to Cloud Storage back to Cloud Logging. This reference architecture is intended for engineers and developers, including DevOps, site reliability engineers (SREs), and security investigators, who want to configure and run the log importing job. This document assumes that you're familiar with running Cloud Run jobs and with using Cloud Storage and Cloud Logging. Architecture The following diagram shows how Google Cloud services are used in this reference architecture: This workflow includes the following components: Cloud Storage bucket: Contains the previously exported logs that you want to import back to Cloud Logging. Because these logs were previously exported, they're organized in the expected export format.
Cloud Run job: Runs the import logs process: Reads the objects that store log entries from Cloud Storage. Finds exported logs for the specified log ID, in the requested time range, based on the organization of the exported logs in the Cloud Storage bucket. Converts the objects into Cloud Logging API LogEntry structures. Multiple LogEntry structures are aggregated into batches, to reduce Cloud Logging API quota consumption. The architecture handles quota errors when necessary. Writes the converted log entries to Cloud Logging. If you re-run the same job multiple times, duplicate entries can result. For more information, see Run the import job. Cloud Logging: Ingests and stores the converted log entries. The log entries are processed as described in the Routing and storage overview. The Logging quotas and limits apply, including the Cloud Logging API quotas and limits and a 30-day retention period. This reference architecture is designed to work with the default write quotas, with a basic retrying mechanism. If your write quota is lower than the default, the implementation might fail. The imported logs aren't included in log-based metrics, because their timestamps are in the past. However, if you opt to use a label, the timestamp records the import time, and the logs are included in the metric data. BigQuery: Uses SQL to run analytical queries on imported logs (optional). To import audit logs from Cloud Storage, this architecture modifies the log IDs; you must account for this renaming when you query the imported logs. Use case You might choose to deploy this architecture if your organization requires additional log analysis for incident investigations or other audits of past events. For example, you might want to analyze connections to your databases for the first quarter of the last year, as a part of a database access audit. Design alternatives This section describes alternatives to the default design shown in this reference architecture document. Retention period and imported logs Cloud Logging requires incoming log entries to have timestamps that don't exceed a 30-day retention period. Imported log entries with timestamps older than 30 days from the import time are not stored. This architecture validates the date range set in the Cloud Run job to avoid importing logs that are older than 29 days, leaving a one-day safety margin. To import logs older than 29 days, you need to make the following changes to the implementation code, and then build a new container image to use in the Cloud Run job configuration. Remove the 30-day validation of the date range Add the original timestamp as a user label to the log entry Reset the timestamp label of the log entry to allow it to be ingested with the current timestamp When you use this modification, you must use the labels field instead of the timestamp field in your Log Analytics queries. For more information about Log Analytics queries and samples, see Sample SQL queries. Design considerations The following guidelines can help you to develop an architecture that meets your organization's requirements. Cost optimization The cost for importing logs by using this reference architecture has multiple contributing factors. You use the following billable components of Google Cloud: Cloud Logging (logs retention period costs apply) Cloud Run Cloud Storage API Consider the following factors that might increase costs: Log duplication: To avoid additional log storage costs, don't run the import job with the same configuration multiple times. 
Storage in additional destinations: To avoid additional log storage costs, disable routing policies at the destination project to prevent log storage in additional locations or forwarding logs to other destinations such as Pub/Sub or BigQuery. Additional CPU and memory: If your import job times out, you might need to increase the import job CPU and memory in your import job configuration. Increasing these values might increase incurred Cloud Run costs. Additional tasks: If the expected number of logs to be imported each day within the time range is high, you might need to increase the number of tasks in the import job configuration. The job will split the time range equally between the tasks, so each task will process a similar number of days from the range concurrently. Increasing the number of tasks might increase incurred Cloud Run costs. Storage class: If your Cloud Storage bucket's storage class is other than Standard, such as Nearline, Durable Reduced Availability (DRA), or Coldline, you might incur additional charges. Data traffic between different locations: Configure the import job to run in the same location as the Cloud Storage bucket from which you import the logs. Otherwise, network egress costs might be incurred. To generate a cost estimate based on your projected usage, including Cloud Run jobs, use the pricing calculator. Operational efficiency This section describes considerations for managing analytical queries after the solution is deployed. Log names and queries Logs are stored to the project that is defined in the logName field of the log entry. To import the logs to the selected project, this architecture modifies the logName field of each imported log. The import logs are stored in the selected project's default log bucket that has the log ID imported_logs (unless the project has a log routing policy that changes the storage destination). The original value of the logName field is preserved in the labels field with the key original_logName. You must account for the location of the original logName value when you query the imported logs. For more information about Log Analytics queries and samples, see Sample SQL queries. Performance optimization If the volume of logs that you're importing exceeds Cloud Run capacity limits, the job might time out before the import is complete. To prevent an incomplete data import, consider increasing the tasks value in the import job. Increasing CPU and memory resources can also help improve task performance when you increase the number of tasks. Deployment To deploy this architecture, see Deploy a job to import logs from Cloud Storage to Cloud Logging. What's Next Review the implementation code in the GitHub repository. Learn how to analyze imported logs by using Log Analytics and SQL. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
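As a closing illustration of the operational efficiency guidance above, the following hedged sketch queries imported log entries through a Log Analytics linked BigQuery dataset and groups them by the original log name that the import job preserves in the labels field. It assumes that you created a linked dataset (the dataset name here is a placeholder), that the view is named _AllLogs, and that the labels column is of JSON type; verify these details against your deployment.

from google.cloud import bigquery

client = bigquery.Client(project="my-logging-project")  # placeholder project

# Group imported entries by the original log name preserved in the labels field.
query = """
SELECT
  JSON_VALUE(labels.original_logName) AS original_log_name,
  COUNT(*) AS entry_count
FROM `my-logging-project.my_linked_dataset._AllLogs`
WHERE log_name LIKE '%imported_logs%'
GROUP BY original_log_name
ORDER BY entry_count DESC
"""
for row in client.query(query).result():
    print(row.original_log_name, row.entry_count)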
ContributorsAuthor: Leonid Yankulin | Developer Relations EngineerOther contributors: Summit Tuladhar | Senior Staff Software EngineerWilton Wong | Enterprise ArchitectXiang Shen | Solutions Architect Send feedback \ No newline at end of file diff --git a/Reference_architecture.txt b/Reference_architecture.txt new file mode 100644 index 0000000000000000000000000000000000000000..04a03bdc40d1ece6167d64cb6c0de3a05c300671 --- /dev/null +++ b/Reference_architecture.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/building-a-vision-analytics-solution +Date Scraped: 2025-02-23T11:46:29.085Z + +Content: +Home Docs Cloud Architecture Center Send feedback Build an ML vision analytics solution with Dataflow and Cloud Vision API Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-05-23 UTC In this reference architecture, you'll learn about the use cases, design alternatives, and design considerations when deploying a Dataflow pipeline to process image files with Cloud Vision and to store processed results in BigQuery. You can use those stored results for large scale data analysis and to train BigQuery ML pre-built models. This reference architecture document is intended for data engineers and data scientists. Architecture The following diagram illustrates the system flow for this reference architecture. As shown in the preceding diagram, information flows as follows: Ingest and trigger: This is the first stage of the system flow where images first enter the system. During this stage, the following actions occur: Clients upload image files to a Cloud Storage bucket. For each file upload, the Cloud Storage automatically sends an input notification by publishing a message to Pub/Sub. Process: This stage immediately follows the ingest and trigger stage. For each new input notification, the following actions occur: The Dataflow pipeline listens for these file input notifications, extracts file metadata from the Pub/Sub message, and sends the file reference to Vision API for processing. Vision API reads the image and creates annotations. The Dataflow pipeline stores the annotations produced by Vision API in BigQuery tables. Store and analyze: This is the final stage in the flow. At this stage, you can do the following with the saved results: Query BigQuery tables and analyze the stored annotations. Use BigQuery ML or Vertex AI to build models and execute predictions based on the stored annotations. Perform additional analysis in the Dataflow pipeline (not shown on this diagram). Products used This reference architecture uses the following Google Cloud products: BigQuery Cloud Storage Vision API Dataflow Pub/Sub Use cases Vision API supports multiple processing features, including image labeling, face and landmark detection, optical character recognition, explicit content tagging, and others. Each of these features enable several use cases that are applicable to different industries. This document contains some simple examples of what's possible when using Vision API, but the spectrum of possible applications is very broad. Vision API also offers powerful pre-trained machine learning models through REST and RPC APIs. You can assign labels to images and classify them into millions of predefined categories. It helps you detect objects, read printed and handwritten text, and build valuable metadata into your image catalog. This architecture doesn't require any model training before you can use it. 
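To show what a single call to a pre-trained Vision API model looks like outside of the Dataflow pipeline, the following minimal sketch requests label annotations for one image stored in Cloud Storage. The bucket and object names are placeholders.

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Reference an image that a client uploaded to the ingest bucket (placeholder name).
image = vision.Image(
    source=vision.ImageSource(image_uri="gs://my-ingest-bucket/photos/example.jpg")
)

# Request label annotations from the pre-trained model; no model training is required.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 3))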
If you need a custom model trained on your specific data, Vertex AI lets you train an AutoML or a custom model for computer vision objectives, like image classification and object detection. Or, you can use Vertex AI Vision for an end-to-end application development environment that lets you build, deploy, and manage computer vision applications. Design alternatives Instead of storing images in a Google Cloud Storage bucket, the process that produces the images can publish them directly to a messaging system—Pub/Sub for example—and the Dataflow pipeline can send the images directly to Vision API. This design alternative can be a good solution for latency-sensitive use cases where you need to analyze images of relatively small sizes. Pub/Sub limits the maximum size of the message to 10 Mb. If you need to batch process a large number of images, you can use a specifically designed asyncBatchAnnotate API. Design considerations This section describes the design considerations for this reference architecture: Security, privacy, and compliance Cost optimization Performance optimization Security, privacy, and compliance Images received from untrusted sources can contain malware. Because Vision API doesn't execute anything based on the images it analyzes, image-based malware wouldn't affect the API. If you need to scan images, change the Dataflow pipeline to add a scanning step. To achieve the same result, you can also use a separate subscription to the Pub/Sub topic and scan images in a separate process. For more information, see Automate malware scanning for files uploaded to Cloud Storage. Vision API uses Identity and Access Management (IAM) for authentication. To access the Vision API, the security principal needs Cloud Storage > Storage object viewer (roles/storage.objectViewer) access to the bucket that contains the files that you want to analyze. For security principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Security in the Architecture Framework. Cost optimization Compared to the other options discussed, like low-latency processing and asynchronous batch processing, this reference architecture uses a cost-efficient way to process the images in streaming pipelines by batching the API requests. The lower latency direct image streaming mentioned in the Design alternatives section could be more expensive due to the additional Pub/Sub and Dataflow costs. For image processing that doesn't need to happen within seconds or minutes, you can run the Dataflow pipeline in batch mode. Running the pipeline in batch mode can provide some savings when compared to what it costs to run the streaming pipeline. Vision API supports offline asynchronous batch image annotation for all features. The asynchronous request supports up to 2,000 images per batch. In response, Vision API returns JSON files that are stored in a Cloud Storage bucket. Vision API also provides a set of features for analyzing images. The pricing is per image per feature. To reduce costs, only request the specific features you need for your solution. To generate a cost estimate based on your projected usage, use the pricing calculator. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Performance optimization Vision API is a resource intensive API. Because of that, processing images at scale requires careful orchestration of the API calls. 
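Before looking at how the Dataflow pipeline orchestrates these calls, the following hedged sketch shows the offline asynchronous batch annotation option that the cost optimization section mentions: a single request can carry up to 2,000 images, and the JSON results are written to a Cloud Storage prefix. The bucket names and URIs are placeholders.

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Build one request per image (placeholder URIs).
requests = [
    vision.AnnotateImageRequest(
        image=vision.Image(source=vision.ImageSource(image_uri=uri)),
        features=[vision.Feature(type_=vision.Feature.Type.LABEL_DETECTION)],
    )
    for uri in ["gs://my-ingest-bucket/a.jpg", "gs://my-ingest-bucket/b.jpg"]
]

# Results are written as JSON files under a Cloud Storage prefix instead of being
# returned synchronously.
output_config = vision.OutputConfig(
    gcs_destination=vision.GcsDestination(uri="gs://my-results-bucket/vision-output/"),
    batch_size=100,
)

operation = client.async_batch_annotate_images(
    requests=requests, output_config=output_config
)
operation.result(timeout=600)  # wait for the offline batch job to finish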
The Dataflow pipeline takes care of batching the API requests, gracefully handling the exceptions related to reaching quotas, and producing custom metrics of the API usage. These metrics can help you decide if an API quota increase is warranted, or if the Dataflow pipeline parameters should be adjusted to reduce the frequency of requests. For more information about increasing quota requests for Vision API, see Quotas and limits. The Dataflow pipeline has several parameters that can affect the processing latencies. For more information about these parameters, see Deploy an ML vision analytics solution with Dataflow and Vision API. For performance optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Performance optimization in the Architecture Framework. Deployment To deploy this architecture, see Deploy an ML vision analytics solution with Dataflow and Vision API. What's next Learn more about Dataflow. Learn more about BigQuery ML. Learn more about BigQuery reliability in the Understand BigQuery reliability guide. Learn about storing data in Jump Start Solution: Data warehouse with BigQuery. Review the Vision API features list. Learn how to deploy an ML vision analytics solution with Dataflow and Vision API. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Masud Hasan | Site Reliability Engineering ManagerSergei Lilichenko | Solutions ArchitectLakshmanan Sethu | Technical Account ManagerOther contributors: Jiyeon Kang | Customer EngineerSunil Kumar Jang Bahadur | Customer Engineer Send feedback \ No newline at end of file diff --git a/Reference_architectures.txt b/Reference_architectures.txt new file mode 100644 index 0000000000000000000000000000000000000000..8be104d8581bfeba9446cd4affafd62ba9b735b3 --- /dev/null +++ b/Reference_architectures.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/reference-architectures +Date Scraped: 2025-02-23T11:55:04.076Z + +Content: +Home Docs Cloud Architecture Center Send feedback Reference architectures Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document presents typical architectures that you can use as a reference for managing corporate identities. Two core tenets of corporate identity management are the following: An authoritative source for identities that is the sole system that you use to create, manage, and delete identities for your employees. The identities managed in the authoritative source system might be propagated to other systems. A central identity provider (IdP) that is the sole system for authentication and that provides a single sign-on experience for your employees that spans applications. When you use Google Cloud or other Google services, you must decide which system to use as your identity provider and which system to use as your authoritative source. Use Google as an IdP By using Cloud Identity Premium or Google Workspace, you can make Google your primary IdP. Google provides a large selection of ready-to-use integrations for popular third-party applications, and you can use standard protocols such as SAML, OAuth, and OpenID Connect to integrate your custom applications.
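As a minimal sketch of the OpenID Connect option for a custom application, the following snippet verifies a Google-issued ID token on the backend by using the google-auth library. It assumes that you registered an OAuth client ID for your application and that Google Sign-In returned the ID token to it; the client ID and token values shown are placeholders.

from google.auth.transport import requests
from google.oauth2 import id_token

CLIENT_ID = "1234567890-example.apps.googleusercontent.com"  # placeholder client ID
token = "eyJhbGciOi..."  # placeholder ID token returned by Google Sign-In

# Verify the token's signature, expiry, and audience, then read the user's identity.
claims = id_token.verify_oauth2_token(token, requests.Request(), CLIENT_ID)
print(claims["email"], claims.get("hd"))  # hd is typically set for Cloud Identity or Workspace users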
Google as IdP and authoritative source You can use Cloud Identity Premium or Google Workspace as both IdP and as authoritative source, as in the following diagram. You use Cloud Identity Premium or Google Workspace to manage users and groups. All Google services use Cloud Identity Premium or Google Workspace as the IdP. You configure corporate applications and other SaaS services to use Google as the IdP. User experience In this configuration, to an employee, the sign-on user experience looks like this: Upon requesting a protected resource or access to a corporate application, the employee is redirected to the Google Sign-In screen, which prompts you for your email address and your password. If 2-step verification is enabled, the employee is prompted to provide a second factor such as a USB key or code. When the employee is authenticated, they are redirected back to the protected resource. Advantages Using Google as your IdP and authoritative source has the following advantages: You can take full advantage of Google's multi-factor authentication and mobile device management features. You don't need an additional IdP, which might save you money. When to use this architecture Consider using Google as IdP and authoritative source in the following scenarios: You already use Google Workspace as your collaboration and productivity solution. There is no existing on-premises infrastructure or IdP that you have to integrate with or you would like to keep it separate from all of your resources on Google (in Google Cloud, in Google Ads, and so on). You don't require integration with a human resources information system (HRIS) to manage identities. Google as IdP with an HRIS as authoritative source If you use a human resources information system (HRIS) to manage the onboarding and offboarding process for your employees, you can still use Google as your IdP. Cloud Identity and Google Workspace provide APIs that let HRIS and other systems take control of managing users and groups, as shown in the following diagram. You use your existing HRIS to manage users and optionally groups. The HRIS remains the single source of truth for identity management and automatically provisions users for Cloud Identity or Google Workspace. All Google services use Cloud Identity Premium or Google Workspace as the IdP. You configure corporate applications and other SaaS services to use Google as the IdP. User experience To an employee, the sign-on user experience is equivalent to using Google as IdP and authoritative source. Advantages Using Google as the IdP and authoritative source has the following advantages: You can minimize administrative overhead by reusing your existing HRIS workflows. You can take full advantage of Google's multi-factor authentication and mobile device management features. You don't need an additional IdP, which might save you money. When to use this architecture Consider using Google as your IdP with an HRIS as authoritative source in the following scenarios: You have an existing HRIS or other system that serves as the authoritative source for identities. You already use Google Workspace as your collaboration and productivity solution. There is no existing on-premises infrastructure or IdP that you have to integrate with or that you would like to keep separate from your Google estate. Use an external IdP If your organization already uses an IdP such as Active Directory, Azure AD, ForgeRock, Okta, or Ping Identity, then you can integrate Google Cloud with this external IdP by using federation. 
By federating a Cloud Identity or Google Workspace account with an external IdP, you can let employees use their existing identity and credentials to sign in to Google services such as Google Cloud, Google Marketing Platform, and Google Ads. External IDaaS as IdP and authoritative source If you use an identity as a service (IDaaS) provider such as ForgeRock, Okta, or Ping Identity, then you can set up federation as illustrated in the following diagram. Cloud Identity or Google Workspace uses your IDaaS as the IdP for single sign-on. The IDaaS automatically provisions users and groups for Cloud Identity or Google Workspace. Existing corporate applications and other SaaS services can continue to use your IDaaS as an IdP. To learn more about federating Cloud Identity or Google Workspace with Okta, see Okta user provisioning and single sign-on. User experience To an employee, the sign-on user experience looks like this: Upon requesting a protected resource, the employee is redirected to the Google Sign-In screen, which prompts them for their email address. Google Sign-In redirects the employee to the sign-in page of your IDaaS. The employee authenticates with the IDaaS. Depending on your IDaaS, this step might require a second factor such as a code. After the employee is authenticated, they are redirected back to the protected resource. Advantages Using an external IDaaS as IdP and authoritative source has the following advantages: You enable a single sign-on experience for your employees that extends across Google services and other applications that are integrated with your IDaaS. If you configured your IDaaS to require multi-factor authentication, that configuration automatically applies to Google Cloud. You don't need to synchronize passwords or other credentials with Google. You can use the free version of Cloud Identity. When to use this architecture Consider using an external IDaaS as IdP and authoritative source in the following scenarios: You already use an IDaaS provider such as ForgeRock, Okta, or Ping Identity as your IdP. Best practices See our best practices for federating Google Cloud with an external identity provider. Active Directory as IdP and authoritative source If you use Active Directory as the source of truth for identity management, then you can set up federation as illustrated in the following diagram. You use Google Cloud Directory Sync (GCDS) to automatically provision users and groups from Active Directory for Cloud Identity or Google Workspace. Google Cloud Directory Sync is a free Google-provided tool that implements the synchronization process and can be run either on Google Cloud or in your on-premises environment. Synchronization is one-way so that Active Directory remains the source of truth. Cloud Identity or Google Workspace uses Active Directory Federation Services (AD FS) for single sign-on. Existing corporate applications and other SaaS services can continue to use your AD FS as an IdP. For a variation of this pattern, you can also use Active Directory Lightweight Directory Services (AD LDS) or a different LDAP directory with either AD FS or another SAML-compliant IdP. For more information about this approach, see Federate Google Cloud with Active Directory. User experience Upon requesting the protected resource, the employee is redirected to the Google Sign-In screen, which prompts them for their email address. Google Sign-In redirects the employee to the sign-in page of AD FS.
Depending on the configuration of AD FS, the employee might see a sign-on screen prompting for their Active Directory username and password. Alternatively, AD FS might attempt to sign the employee in automatically based on their Windows login. After AD FS has authenticated the employee, they are redirected back to the protected resource. Advantages Using Active Directory as IdP and authoritative source has the following advantages: You enable a single sign-on experience for your employees that extends across Google services and your on-premises environment. If you configured AD FS to require multi-factor authentication, that configuration automatically applies to Google Cloud. You don't need to synchronize passwords or other credentials to Google. You can use the free version of Cloud Identity. Because the APIs that GCDS uses are publicly accessible, there's no need to set up hybrid connectivity between your on-premises network and Google Cloud. When to use this architecture Consider using Active Directory as the IdP and authoritative source in the following scenarios: You have an existing Active Directory infrastructure. You want to provide a seamless sign-in experience for Windows users. Best practices Consider these best practices: Active Directory and Cloud Identity use a different logical structure. Make sure you understand the differences and assess which method of mapping domains, identities, and groups best suits your situation. For more information, see our guide on federating Google Cloud with Active Directory. Synchronize groups in addition to users. With this approach, you can set up IAM so that you can use group memberships in Active Directory to control who has access to which resources in Google Cloud. Deploy and expose AD FS so that corporate users can access it, but don't expose it more than necessary. Although corporate users must be able to access AD FS, there's no requirement for AD FS to be reachable from Cloud Identity or Google Workspace, or from any application deployed on Google Cloud. Consider enabling Integrated Windows Authentication (IWA) in AD FS to allow users to sign in automatically based on their Windows login. If AD FS becomes unavailable, users might not be able to use the Google Cloud console or any other resource that uses Google as IdP. So ensure that AD FS and the domain controllers AD FS relies on are deployed and sized to meet your availability objectives. If you use Google Cloud to help ensure business continuity, relying on an on-premises AD FS might undermine the intent of using Google Cloud as an independent copy of your deployment. In this case, consider deploying replicas of all relevant systems on Google Cloud in one of the following ways: Extend your existing Active Directory domain to Google Cloud and deploy GCDS to run on Google Cloud. Run dedicated AD FS servers on Google Cloud. These servers use the Active Directory domain controllers that are running on Google Cloud. Configure Cloud Identity to use the AD FS servers deployed on Google Cloud for single sign-on. To learn more, see Best practices for federating Google Cloud with an external identity provider. Azure AD as IdP with Active Directory as authoritative source If you are a Microsoft Office 365 or Azure customer, you might have connected your on-premises Active Directory to Azure AD. 
If all user accounts that potentially need access to Google Cloud are already being synchronized to Azure AD, you can reuse this integration by federating Cloud Identity with Azure AD, as shown in the following diagram. You use Azure AD to automatically provision users and groups to Cloud Identity or Google Workspace. Azure AD itself might be integrated with an on-premises Active Directory. Cloud Identity or Google Workspace uses Azure AD for single sign-on. Existing corporate applications and other SaaS services can continue to use Azure AD as an IdP. For more detailed information about this approach, see Federate Google Cloud with Azure Active Directory. User experience Upon requesting the protected resource, the employee is redirected to the Google Sign-In screen, which prompts them for their email address. Google Sign-In redirects them to the sign-in page of Azure AD. Depending on how their on-premises Active Directory is connected to Azure AD, Azure AD might prompt them for a username and password, or it might redirect them to an on-premises AD FS. After the employee is authenticated with Azure AD, they are redirected back to the protected resource. Advantages Using Azure AD as your IdP with Active Directory as authoritative source has several advantages: You enable a single sign-on experience for your employees that extends across Google services, Azure, and your on-premises environment. If you configured Azure AD to require multi-factor authentication, that configuration automatically applies to Google Cloud. You don't need to install any additional software on-premises. If your on-premises Active Directory uses multiple domains or forests and you have set up a custom Azure AD Connect configuration to map this structure to an Azure AD tenant, you can take advantage of this integration work. You don't need to synchronize passwords or other credentials to Google. You can use the free version of Cloud Identity. You can surface the Google Cloud console as a tile in the Office 365 portal. Because the APIs that Azure AD uses are publicly accessible, there's no need to set up hybrid connectivity between Azure and Google Cloud. When to use this architecture Consider using Azure AD as IdP with Active Directory as authoritative source in the following scenarios: You already use Azure AD and have integrated it with an existing Active Directory infrastructure. You want to provide a seamless sign-in experience for users across Azure and Google Cloud. Best practices Follow these best practices: Because Azure AD and Cloud Identity use a different logical structure, make sure you understand the differences. Assess which method of mapping domains, identities, and groups best suits your situation. For more detailed information, see Federate Google Cloud with Azure AD. Synchronize groups in addition to users. With this approach, you can set up IAM so that you can use group memberships in Azure AD to control who has access to which resources in Google Cloud. If you use Google Cloud to help ensure business continuity, relying on Azure AD for authentication might undermine the intent of using Google Cloud as an independent copy of your deployment. To learn more, see Best practices for federating Google Cloud with an external identity provider. What's next Learn more about federating with Active Directory. Find out how to set up federation with Azure AD. Review our best practices for planning accounts and organizations and for federating Google Cloud with an external identity provider.
Send feedback \ No newline at end of file diff --git a/Regional.txt b/Regional.txt new file mode 100644 index 0000000000000000000000000000000000000000..9d2fbe345dc4d3179920d1abfee442787202ccc5 --- /dev/null +++ b/Regional.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes/regional +Date Scraped: 2025-02-23T11:44:40.110Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud regional deployment archetype Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-14 UTC This section of the Google Cloud deployment archetypes guide describes the regional deployment archetype. In a cloud architecture that uses the regional deployment archetype, instances of the application run in two or more zones within a single Google Cloud region. All the application instances use a centrally managed, shared repository of configuration files. Application data is replicated synchronously across all the zones in the architecture. The following diagram shows the cloud topology for a highly available application that runs independently in three zones within a single Google Cloud region: The preceding diagram shows an application with frontend and backend components that run independently in three zones in a Google Cloud region. An external load balancer forwards user requests to one of the frontends. An internal load balancer forwards traffic from the frontends to the backends. The application uses a database that's replicated across the zones. If a zone outage occurs, the database fails over to a replica in another zone. The topology in the preceding diagram is robust against zone outages, but not against region outages. To be able to recover from region outages, you must have deployed a passive replica of the application in a second (failover) region, as shown in the following diagram: When an outage occurs in the primary region, you must promote the database in the failover region, and let the DNS routing policies route traffic to the load balancer in the failover region. To optimize the cost of the failover infrastructure, you can operate the failover region at a lower capacity by deploying fewer resources. Use cases The following sections provide examples of use cases for which the regional deployment archetype is an appropriate choice. Highly available application with users within a geographic area We recommend the regional deployment archetype for applications that need robustness against zone outages but can tolerate some downtime caused by region outages. If any part of the application stack fails, the application continues to run if at least one functioning component with adequate capacity exists in every tier. If a zone outage occurs, the application stack continues to run in the other zones. Low latency for application users If users of an application are within a geographic area, such as a single country, the regional deployment archetype can help improve the user-perceived performance of the application. You can optimize network latency for user requests by deploying the application in the Google Cloud region that's closest to your users. Low-latency networking between application components A single-region architecture might be well suited for applications such as batch computing that need low-latency and high-bandwidth network connections among the compute nodes. All resources are in a single Google Cloud region, so inter-resource network traffic remains within the region. 
The inter-resource network latency is low, and you don't incur cross-region data transfer costs. Intra-region network costs still apply. Compliance with data residency and sovereignty requirements The regional deployment archetype can help you meet regulatory requirements for data residency and operational sovereignty. For example, a country in Europe might require that all user data be stored and accessed in data centers that are located physically within the country. To help meet this requirement, you can deploy the application to a Google Cloud region in Europe. Design considerations When you build an architecture that's based on the regional deployment archetype, consider the following design factors. Downtime during region outages When a region outage occurs, the application is down. You can reduce the downtime caused by region outages by maintaining a passive (failover) replica of the infrastructure stack in another Google Cloud region. If an outage occurs in the primary region, you can activate the stack in the failover region and use DNS routing policies to route traffic to the load balancer in the failover region. Cost of redundant resources A multi-zone architecture typically has more cloud resources than a single-zone deployment. Consider the cost of these cloud resources when you build your architecture. For applications that need robustness against zone outages, the availability advantage of a multi-zone architecture might justify the higher cost. Note: For more information about region-specific considerations, see Geography and regions. Reference architecture For a reference architecture that you can use to design a regional deployment on Compute Engine VMs, see Regional deployment on Compute Engine. Previous arrow_back Zonal Next Multi-regional arrow_forward Send feedback \ No newline at end of file diff --git a/Regional_deployment_on_Compute_Engine.txt b/Regional_deployment_on_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..1e8da558ab11aae36a3a6c1c25daca3ba32e3728 --- /dev/null +++ b/Regional_deployment_on_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/regional-deployment-compute-engine +Date Scraped: 2025-02-23T11:44:58.779Z + +Content: +Home Docs Cloud Architecture Center Send feedback Regional deployment on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-14 UTC This document provides a reference architecture for a multi-tier application that runs on Compute Engine VMs in multiple zones within a Google Cloud region. You can use this reference architecture to efficiently rehost (lift and shift) on-premises applications to the cloud with minimal changes to the applications. The document also describes the design factors that you should consider when you build a regional architecture for your cloud applications. The intended audience for this document is cloud architects. Architecture The following diagram shows an architecture for an application that runs in active-active mode in isolated stacks that are deployed across three Google Cloud zones within a region. The architecture is aligned with the regional deployment archetype. The architecture is based on the infrastructure as a service (IaaS) cloud model. You provision the required infrastructure resources (compute, networking, and storage) in Google Cloud. 
You retain full control over the infrastructure and responsibility for the operating system, middleware, and higher layers of the application stack. To learn more about IaaS and other cloud models, see PaaS vs. IaaS vs. SaaS vs. CaaS: How are they different?. The preceding diagram includes the following components: Component Purpose Regional external load balancer The regional external load balancer receives and distributes user requests to the web tier VMs. Use an appropriate load balancer type depending on the traffic type and other requirements. For example, if the backend consists of web servers (as shown in the preceding architecture), then use an Application Load Balancer to forward HTTP(S) traffic. To load-balance TCP traffic, use a Network Load Balancer. For more information, see Choose a load balancer. Regional managed instance group (MIG) for the web tier The web tier of the application is deployed on Compute Engine VMs that are part of a regional MIG. The MIG is the backend for the regional external load balancer. The MIG contains Compute Engine VMs in three different zones. Each of these VMs hosts an independent instance of the web tier of the application. Regional internal load balancer The regional internal load balancer distributes traffic from the web tier VMs to the application tier VMs. Depending on your requirements, you can use a regional internal Application Load Balancer or Network Load Balancer. For more information, see Choose a load balancer. Regional MIG for the application tier The application tier is deployed on Compute Engine VMs that are part of a regional MIG, which is the backend for the internal load balancer. The MIG contains Compute Engine VMs in three different zones. Each VM hosts an independent instance of the application tier. Third-party database deployed on a Compute Engine VM The architecture in this document shows a third-party database (like PostgreSQL) that's deployed on a Compute Engine VM. You can deploy a standby database in another zone. The database replication and failover capabilities depend on the database that you use. Installing and managing a third-party database involves additional effort and operational cost for applying updates, monitoring, and ensuring availability. You can avoid the overhead of installing and managing a third-party database and take advantage of built-in high availability (HA) features by using a fully managed database service like Cloud SQL or AlloyDB for PostgreSQL. For more information about managed database options, see Database services later in this guide. Virtual Private Cloud network and subnet All the Google Cloud resources in the architecture use a single VPC network and subnet. Depending on your requirements, you can choose to build an architecture that uses multiple VPC networks or multiple subnets. For more information, see Deciding whether to create multiple VPC networks in "Best practices and reference architectures for VPC design." Cloud Storage dual-region bucket Application and database backups are stored in a dual-region Cloud Storage bucket. If a zone or region outage occurs, your application and data aren't lost. Alternatively, you can use Backup and DR Service to create, store, and manage the database backups. Products used This reference architecture uses the following Google Cloud products: Compute Engine: A secure and customizable compute service that lets you create and run VMs on Google's infrastructure. 
Cloud Load Balancing: A portfolio of high performance, scalable, global and regional load balancers. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Virtual Private Cloud (VPC): A virtual system that provides global, scalable networking functionality for your Google Cloud workloads. VPC includes VPC Network Peering, Private Service Connect, private services access, and Shared VPC. Use cases This section describes use cases for which a regional deployment on Compute Engine is an appropriate choice. Efficient migration of on-premises applications You can use this reference architecture to build a Google Cloud topology to rehost (lift and shift) on-premises applications to the cloud with minimal changes to the applications. All the tiers of the application in this reference architecture are hosted on Compute Engine VMs. This approach lets you migrate on-premises applications efficiently to the cloud and take advantage of the cost benefits, reliability, performance, and operational simplicity that Google Cloud provides. Highly available application with users within a geographic area We recommend a regional deployment architecture for applications that need robustness against zone outages but can tolerate some downtime caused by region outages. If any part of the application stack fails, the application continues to run if at least one functioning component with adequate capacity exists in every tier. If a zone outage occurs, the application stack continues to run in the other zones. Low latency for application users If all the users of an application are within a single geographic area, such as a single country, a regional deployment architecture can help improve the user-perceived performance of the application. You can optimize network latency for user requests by deploying the application in the Google Cloud region that's closest to your users. Low-latency networking between application components A single-region architecture might be well suited for applications such as batch computing that need low-latency and high-bandwidth network connections among the compute nodes. All the resources are in a single Google Cloud region, so inter-resource network traffic remains within the region. The inter-resource network latency is low, and you don't incur cross-region data transfer costs. Intra-region network costs still apply. Compliance with data residency requirements You can use a single-region architecture to build a topology that helps you to meet data residency requirements. For example, a country in Europe might require that all user data be stored and accessed in data centers that are located physically within Europe. To meet this requirement, you can run the application in a Google Cloud region in Europe. Design considerations This section provides guidance to help you use this reference architecture to develop an architecture that meets your specific requirements for system design, security and compliance, reliability, operational efficiency, cost, and performance. Note: The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud products and features that you use, there might be additional design factors and trade-offs that you should consider. 
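Before you review the individual design areas, the following hedged sketch shows how the web tier of the preceding architecture could be provisioned as a regional MIG that spans three zones, by using the Compute Engine Python client. The project, region, template, and group names are placeholders, an instance template is assumed to already exist, and the load balancer and health check configuration are omitted.

from google.cloud import compute_v1

project = "my-project"   # placeholder project
region = "europe-west1"  # placeholder region close to your users

# Regional MIG for the web tier, distributed across three zones in the region.
mig = compute_v1.InstanceGroupManager(
    name="web-tier-mig",
    base_instance_name="web",
    instance_template=(
        f"https://www.googleapis.com/compute/v1/projects/{project}"
        "/global/instanceTemplates/web-template"
    ),
    target_size=3,
    distribution_policy=compute_v1.DistributionPolicy(
        zones=[
            compute_v1.DistributionPolicyZoneConfiguration(
                zone=f"https://www.googleapis.com/compute/v1/projects/{project}/zones/{region}-{z}"
            )
            for z in ("b", "c", "d")
        ]
    ),
)

operation = compute_v1.RegionInstanceGroupManagersClient().insert(
    project=project, region=region, instance_group_manager_resource=mig
)
operation.result()  # wait for the MIG to be created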
System design This section provides guidance to help you to choose Google Cloud regions for your regional deployment and to select appropriate Google Cloud services. Region selection When you choose a Google Cloud region for your applications, consider the following factors and requirements: Availability of Google Cloud services. For more information, see Products available by location. Availability of Compute Engine machine types. For more information, see Regions and zones. End-user latency requirements. Cost of Google Cloud resources. Regulatory requirements. Some of these factors and requirements might involve tradeoffs. For example, the most cost-efficient region might not have the lowest carbon footprint. Compute services The reference architecture in this document uses Compute Engine VMs for all the tiers of the application. The design guidance in this document is specific to Compute Engine unless mentioned otherwise. Depending on the requirements of your application, you can choose from the following other Google Cloud compute services. The design guidance for those services is outside the scope of this document. You can run containerized applications in Google Kubernetes Engine (GKE) clusters. GKE is a container-orchestration engine that automates deploying, scaling, and managing containerized applications. If you prefer to focus your IT efforts on your data and applications instead of setting up and operating infrastructure resources, then you can use serverless services like Cloud Run and Cloud Run functions. The decision of whether to use VMs, containers, or serverless services involves a trade-off between configuration flexibility and management effort. VMs and containers provide more configuration flexibility, but you're responsible for managing the resources. In a serverless architecture, you deploy workloads to a preconfigured platform that requires minimal management effort. For more information about choosing appropriate compute services for your workloads in Google Cloud, see Hosting Applications on Google Cloud in the Google Cloud Architecture Framework. Storage services The architecture shown in this document uses regional Persistent Disk volumes for all the tiers. Persistent disks provide synchronous replication of data across two zones within a region. For low-cost storage that's redundant across the zones within a region, you can use Cloud Storage regional buckets. To store data that's shared across multiple VMs in a region, such as across all the VMs in the web tier or application tier, you can use a Filestore Enterprise instance. The data that you store in a Filestore Enterprise instance is replicated synchronously across three zones within the region. This replication ensures HA and robustness against zone outages. You can store shared configuration files, common tools and utilities, and centralized logs in the Filestore instance, and mount the instance on multiple VMs. If your database is Microsoft SQL Server, we recommend using Cloud SQL for SQL Server. In scenarios when Cloud SQL doesn't support your configuration requirements, or if you need access to the operating system, then you can deploy a failover cluster instance (FCI). In this scenario, you can use the fully managed Google Cloud NetApp Volumes to provide continuous availability (CA) SMB storage for the database. When you design storage for your regional workloads, consider the functional characteristics of the workloads, resilience requirements, performance expectations, and cost goals. 
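As a minimal sketch of the regional Persistent Disk option described above (the resource name, region, replica zones, size, and disk type are assumptions):

```hcl
# Illustrative sketch: a regional Persistent Disk volume that is
# synchronously replicated across two zones in the region.
resource "google_compute_region_disk" "app_data" {
  name   = "app-tier-data"
  region = "us-central1"
  size   = 200            # GiB
  type   = "pd-balanced"

  replica_zones = [
    "us-central1-a",
    "us-central1-b",
  ]
}
```

Because the data is replicated synchronously to both replica zones, the volume remains available if either zone has an outage.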
For more information, see Design an optimal storage strategy for your cloud workload. Database services The reference architecture in this document uses a third-party database, like PostgreSQL, that's deployed on Compute Engine VMs. Installing and managing a third-party database involves effort and cost for operations like applying updates, monitoring and ensuring availability, performing backups, and recovering from failures. You can avoid the effort and cost of installing and managing a third-party database by using a fully managed database service like Cloud SQL, AlloyDB for PostgreSQL, Bigtable, Spanner, or Firestore. These Google Cloud database services provide uptime service-level agreements (SLAs), and they include default capabilities for scalability and observability. If your workloads require an Oracle database, you can use Bare Metal Solution provided by Google Cloud. For an overview of the use cases that each Google Cloud database service is suitable for, see Google Cloud databases. Security and compliance This section describes factors that you should consider when you use this reference architecture to design and build a regional topology in Google Cloud that meets the security and compliance requirements of your workloads. Protection against external threats To protect your application against external threats like distributed denial-of-service (DDoS) attacks and cross-site scripting (XSS), you can use Google Cloud Armor security policies. The security policies are enforced at the perimeter—that is, before traffic reaches the web tier. Each policy is a set of rules that specifies certain conditions that should be evaluated and actions to take when the conditions are met. For example, a rule could specify that if the incoming traffic's source IP address matches a specific IP address or CIDR range, then the traffic must be denied. In addition, you can apply preconfigured web application firewall (WAF) rules. For more information, see Security policy overview. External access for VMs In the reference architecture that this document describes, the VMs that host the application tier, web tier, and databases don't need inbound access from the internet. Don't assign external IP addresses to those VMs. Google Cloud resources that have only a private, internal IP address can still access certain Google APIs and services by using Private Service Connect or Private Google Access. For more information, see Private access options for services. To enable secure outbound connections from Google Cloud resources that have only internal IP addresses, like the Compute Engine VMs in this reference architecture, you can use Cloud NAT. VM image security To ensure that your VMs use only approved images (that is, images with software that meets your policy or security requirements), you can define an organization policy that restricts the use of images in specific public image projects. For more information, see Setting up trusted image policies. Service account privileges In Google Cloud projects where the Compute Engine API is enabled, a default service account is created automatically. The default service account is granted the Editor IAM role (roles/editor) unless this behavior is disabled. By default, the default service account is attached to all VMs that you create by using the Google Cloud CLI or the Google Cloud console. The Editor role includes a broad range of permissions, so attaching the default service account to VMs creates a security risk. 
To avoid this risk, you can create and use dedicated service accounts for each application. To specify the resources that the service account can access, use fine-grained policies. For more information, see Limit service account privileges in "Best practices for using service accounts." Network security To control network traffic between the resources in the architecture, you must set up appropriate Cloud Next Generation Firewall rules. Each firewall rule lets you control traffic based on parameters like the protocol, IP address, and port. For example, you can configure a firewall rule to allow TCP traffic from the web server VMs to a specific port of the database VMs, and block all other traffic. More security considerations When you build the architecture for your workload, consider the platform-level security best practices and recommendations provided in the Security foundations blueprint. Reliability This section describes design factors that you should consider when you use this reference architecture to build and operate reliable infrastructure for your regional deployments in Google Cloud. Infrastructure outages In a regional architecture, if any individual component in the infrastructure stack fails, the application can process requests if at least one functioning component with adequate capacity exists in each tier. For example, if a web server instance fails, the load balancer forwards user requests to the other available web server instances. If a VM that hosts a web server or app server instance crashes, the MIG recreates the VM automatically. If a zone outage occurs, the load balancer isn't affected, because it's a regional resource. A zone outage might affect individual Compute Engine VMs. But the application remains available and responsive because the VMs are in a regional MIG. A regional MIG ensures that new VMs are created automatically to maintain the configured minimum number of VMs. After Google resolves the zone outage, you must verify that the application runs as expected in all the zones where it's deployed. If all the zones in this architecture have an outage or if a region outage occurs, then the application is unavailable. You must wait for Google to resolve the outage, and then verify that the application works as expected. You can reduce the downtime caused by region outages by maintaining a passive (failover) replica of the infrastructure stack in another Google Cloud region. If an outage occurs in the primary region, you can activate the stack in the failover region and use DNS routing policies to route traffic to the load balancer in the failover region. For applications that require robustness against region outages, consider using a multi-regional architecture. For more information, see Multi-regional deployment on Compute Engine. MIG autoscaling When you run your application on VMs in a regional MIG, the application remains available during isolated zone outages. The autoscaling capability of stateless MIGs lets you maintain application availability and performance at predictable levels. Stateful MIGs can't be autoscaled. To control the autoscaling behavior of your MIGs, you can specify target utilization metrics, such as average CPU utilization. You can also configure schedule-based autoscaling. For more information, see Autoscaling groups of instances. VM autohealing Sometimes the VMs that host your application might be running and available, but there might be issues with the application itself. It might freeze, crash, or not have sufficient memory. 
To verify whether an application is responding as expected, you can configure application-based health checks as part of the autohealing policy of your MIGs. If the application on a particular VM isn't responding, the MIG autoheals (repairs) the VM. For more information about configuring autohealing, see Set up an application health check and autohealing. VM placement In the architecture that this document describes, the application tier and web tier run on Compute Engine VMs that are distributed across multiple zones. This distribution ensures that your application is robust against zone outages. To improve this robustness further, you can create a spread placement policy and apply it to the MIG template. When the MIG creates VMs, it places the VMs within each zone on different physical servers (called hosts), so your VMs are robust against failures of individual hosts. For more information, see Apply spread placement policies to VMs. VM capacity planning To make sure that capacity for Compute Engine VMs is available when required for MIG autoscaling, you can create reservations. A reservation provides assured capacity in a specific zone for a specified number of VMs of a machine type that you choose. A reservation can be specific to a project, or shared across multiple projects. You incur charges for reserved resources even if the resources aren't provisioned or used. For more information about reservations, including billing considerations, see Reservations of Compute Engine zonal resources. Persistent disk state A best practice in application design is to avoid the need for stateful local disks. But if the requirement exists, you can configure your persistent disks to be stateful to ensure that the data is preserved when the VMs are repaired or recreated. However, we recommend that you keep the boot disks stateless, so that you can update them easily to the latest images with new versions and security patches. For more information, see Configuring stateful persistent disks in MIGs. Data durability You can use Backup and DR to create, store, and manage backups of the Compute Engine VMs. Backup and DR stores backup data in its original, application-readable format. When required, you can restore your workloads to production by directly using data from long-term backup storage without time-consuming data movement or preparation activities. If you use a managed database service like Cloud SQL, backups are taken automatically based on the retention policy that you define. You can supplement the backup strategy with additional logical backups to meet regulatory, workflow, or business requirements. If you use a third-party database and you need to store database backups and transaction logs, you can use regional Cloud Storage buckets. Regional Cloud Storage buckets provide low cost backup storage that's redundant across zones. Compute Engine provides the following options to help you to ensure the durability of data that's stored in Persistent Disk volumes: You can use snapshots to capture the point-in-time state of Persistent Disk volumes. Standard snapshots are stored redundantly in multiple regions, with automatic checksums to ensure the integrity of your data. Snapshots are incremental by default, so they use less storage space and you save money. Snapshots are stored in a Cloud Storage location that you can configure. For more recommendations about using and managing snapshots, see Best practices for Compute Engine disk snapshots. 
Regional Persistent Disk volumes let you run highly available applications that aren't affected by failures in persistent disks. When you create a regional Persistent Disk volume, Compute Engine maintains a replica of the disk in a different zone in the same region. Data is replicated synchronously to the disks in both zones. If any one of the two zones has an outage, the data remains available. Database availability If you use a managed database service like Cloud SQL in HA configuration, then in the event of a failure of the primary database, Cloud SQL fails over automatically to the standby database. You don't need to change the IP address for the database endpoint. If you use a self-managed third-party database that's deployed on a Compute Engine VM, then you must use an internal load balancer or other mechanism to ensure that the application can connect to another database if the primary database is unavailable. To implement cross-zone failover for a database that's deployed on a Compute Engine VM, you need a mechanism to identify failures of the primary database and a process to fail over to the standby database. The specifics of the failover mechanism depend on the database that you use. You can set up an observer instance to detect failures of the primary database and orchestrate the failover. You must configure the failover rules appropriately to avoid a split-brain situation and prevent unnecessary failover. For example architectures that you can use to implement failover for PostgreSQL databases, see Architectures for high availability of PostgreSQL clusters on Compute Engine. More reliability considerations When you build the cloud architecture for your workload, review the reliability-related best practices and recommendations that are provided in the following documentation: Google Cloud infrastructure reliability guide Patterns for scalable and resilient apps Designing resilient systems Cost optimization This section provides guidance to optimize the cost of setting up and operating a regional Google Cloud topology that you build by using this reference architecture. VM machine types To help you optimize the resource utilization of your VM instances, Compute Engine provides machine type recommendations. Use the recommendations to choose machine types that match your workload's compute requirements. For workloads with predictable resource requirements, you can customize the machine type to your needs and save money by using custom machine types. VM provisioning model If your application is fault tolerant, then Spot VMs can help to reduce your Compute Engine costs for the VMs in the application and web tiers. The cost of Spot VMs is significantly lower than regular VMs. However, Compute Engine might preemptively stop or delete Spot VMs to reclaim capacity. Spot VMs are suitable for batch jobs that can tolerate preemption and don't have HA requirements. Spot VMs offer the same machine types, options, and performance as regular VMs. However, when the resource capacity in a zone is limited, MIGs might not be able to scale out (that is, create VMs) automatically to the specified target size until the required capacity becomes available again. Resource utilization The autoscaling capability of stateless MIGs enables your application to handle increases in traffic gracefully, and it helps you to reduce cost when the need for resources is low. Stateful MIGs can't be autoscaled. 
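For the stateless tiers, the autoscaling behavior described above can be expressed declaratively. The following sketch is illustrative only: it assumes the web-tier regional MIG shown earlier in this document, and the replica counts and CPU target are placeholder values that you would tune for your workload.

```hcl
# Illustrative sketch: autoscaling for a stateless regional MIG.
resource "google_compute_region_autoscaler" "web_tier" {
  name   = "web-tier-autoscaler"
  region = "us-central1"
  target = google_compute_region_instance_group_manager.web_tier.id

  autoscaling_policy {
    min_replicas    = 3   # One VM per zone as a floor.
    max_replicas    = 12
    cooldown_period = 60

    cpu_utilization {
      target = 0.6        # Scale out above 60% average CPU utilization.
    }
  }
}
```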
Third-party licensing When you migrate third-party workloads to Google Cloud, you might be able to reduce cost by bringing your own licenses (BYOL). For example, to deploy Microsoft Windows Server VMs, instead of using a premium image that incurs additional cost for the third-party license, you can create and use a custom Windows BYOL image. You then pay only for the VM infrastructure that you use on Google Cloud. This strategy helps you continue to realize value from your existing investments in third-party licenses. If you decide to use the BYOL approach, we recommend that you do the following: Provision the required number of compute CPU cores independently of memory by using custom machine types. By doing this, you limit the third-party licensing cost to the number of CPU cores that you need. Reduce the number of vCPUs per core from 2 to 1 by disabling simultaneous multithreading (SMT), and reduce your licensing costs by 50%. If you deploy a third-party database like Microsoft SQL Server on Compute Engine VMs, then you must consider the license costs for the third-party software. When you use a managed database service like Cloud SQL, the database license costs are included in the charges for the service. More cost considerations When you build the architecture for your workload, also consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Cost optimization. Operational efficiency This section describes the factors that you should consider when you use this reference architecture to design and build a regional Google Cloud topology that you can operate efficiently. VM configuration updates To update the configuration of the VMs in a MIG (such as the machine type or boot-disk image), you create a new instance template with the required configuration and then apply the new template to the MIG. The MIG updates the VMs by using the update method that you choose: automatic or selective. Choose an appropriate method based on your requirements for availability and operational efficiency. For more information about these MIG update methods, see Apply new VM configurations in a MIG. VM images For your MIG instance templates, instead of using Google-provided public images, we recommend that you create and use custom images that contain the configurations and software that your applications require. You can group your custom images into a custom image family. An image family always points to the most recent image in that family, so your instance templates and scripts can use that image without you having to update references to a specific image version. Deterministic instance templates If the instance templates that you use for your MIGs include startup scripts to install third-party software, make sure that the scripts explicitly specify software-installation parameters such as the software version. Otherwise, when the MIG creates the VMs, the software that's installed on the VMs might not be consistent. For example, if your instance template includes a startup script to install Apache HTTP Server 2.0 (the apache2 package), then make sure that the script specifies the exact apache2 version that should be installed, such as version 2.4.53. For more information, see Deterministic instance templates. 
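For example, a deterministic instance template might pin the package version in its startup script, as in the following illustrative Terraform sketch. The image, network, and exact version string are placeholders; use the version string that your image's package repository actually provides.

```hcl
# Illustrative sketch: every VM that the MIG creates from this template
# installs the same, explicitly pinned apache2 version.
resource "google_compute_instance_template" "web_tier_deterministic" {
  name_prefix  = "web-tier-deterministic-"
  machine_type = "e2-standard-2"

  disk {
    source_image = "debian-cloud/debian-12"
    boot         = true
  }

  network_interface {
    network = "app-vpc"
  }

  metadata_startup_script = <<-EOT
    #!/bin/bash
    set -e
    apt-get update
    # Pin the exact package version (placeholder shown) instead of
    # installing whatever version is latest at VM creation time.
    apt-get install -y apache2=2.4.53-1
  EOT

  lifecycle {
    create_before_destroy = true
  }
}
```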
More operational considerations When you build the architecture for your workload, consider the general best practices and recommendations for operational efficiency that are described in Google Cloud Architecture Framework: Operational excellence. Performance optimization This section describes the factors that you should consider when you use this reference architecture to design and build a regional topology in Google Cloud that meets the performance requirements of your workloads. VM placement For workloads that require low inter-VM network latency, you can create a compact placement policy and apply it to the MIG template. When the MIG creates VMs, it places the VMs on physical servers that are close to each other. For more information, see Reduce latency by using compact placement policies. VM machine types Compute Engine offers a wide range of predefined and customizable machine types that you can choose from depending on your cost and performance requirements. The machine types are grouped into machine series and families. The following table provides a summary of the recommended machine families and series for different workload types: Requirement Recommended machine family Example machine series Best price-performance ratio for a variety of workloads General-purpose machine family C3, C3D, E2, N2, N2D, Tau T2D, Tau T2A Highest performance per core and optimized for compute-intensive workloads Compute-optimized machine family C2, C2D, H3 High memory-to-vCPU ratio for memory-intensive workloads Memory-optimized machine family M3, M2, M1 GPUs for massively parallelized workloads Accelerator-optimized machine family A2, G2 For more information, see Machine families resource and comparison guide. VM multithreading Each virtual CPU (vCPU) that you allocate to a Compute Engine VM is implemented as a single hardware multithread. By default, two vCPUs share a physical CPU core. For workloads that are highly parallel or that perform floating point calculations (such as genetic sequence analysis and financial risk modeling), you can improve performance by reducing the number of threads that run on each physical CPU core. For more information, see Set the number of threads per core. VM multithreading might have licensing implications for some third-party software, like databases. For more information, read the licensing documentation for the third-party software. Network Service Tiers Network Service Tiers lets you optimize the network cost and performance of your workloads. You can choose Premium Tier or Standard Tier. Premium Tier uses Google's highly reliable global backbone to help you achieve minimal packet loss and latency. Traffic enters and leaves the Google network at a global edge point of presence (PoP) that's close to your end user. We recommend using Premium Tier as the default tier for optimal performance. With Standard Tier, traffic enters and leaves the Google network at an edge PoP that's closest to the Google Cloud location where your workload runs. The pricing for Standard Tier is lower than Premium Tier. Standard Tier is suitable for traffic that isn't sensitive to packet loss and that doesn't have low latency requirements. More performance considerations When you build the architecture for your workload, consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Performance optimization. 
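To illustrate two of the performance controls discussed above, the following sketch sets one thread per physical core on an instance template and defines a compact placement policy. It is illustrative only; the names, machine type, image, network, and region are assumptions.

```hcl
# Illustrative sketch: run one thread per physical core, for example for
# per-vCPU-licensed or floating-point-heavy workloads.
resource "google_compute_instance_template" "app_tier_single_thread" {
  name_prefix  = "app-tier-"
  machine_type = "c2-standard-8"

  disk {
    source_image = "debian-cloud/debian-12"
    boot         = true
  }

  network_interface {
    network = "app-vpc"
  }

  advanced_machine_features {
    threads_per_core = 1
  }
}

# A compact placement policy that can be attached to VMs or instance
# templates that need low inter-VM network latency.
resource "google_compute_resource_policy" "compact_placement" {
  name   = "low-latency-placement"
  region = "us-central1"

  group_placement_policy {
    collocation = "COLLOCATED"
  }
}
```

As noted in the cost optimization section, reducing the number of threads per core can also lower licensing costs for third-party software that is licensed per vCPU.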
What's next Learn more about the Google Cloud products used in this reference architecture: Cloud Load Balancing overview Instance groups Get started with migrating your workloads to Google Cloud. Explore and evaluate deployment archetypes that you can choose to build architectures for your cloud workloads. Review architecture options for designing reliable infrastructure for your workloads in Google Cloud. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Ben Good | Solutions ArchitectCarl Franklin | Director, PSO Enterprise ArchitectureDaniel Lees | Cloud Security ArchitectGleb Otochkin | Cloud Advocate, DatabasesMark Schlagenhauf | Technical Writer, NetworkingPawel Wenda | Group Product ManagerSean Derrington | Group Outbound Product Manager, StorageSekou Page | Outbound Product ManagerSimon Bennett | Group Product ManagerSteve McGhee | Reliability AdvocateVictor Moreno | Product Manager, Cloud Networking Send feedback \ No newline at end of file diff --git a/Reliability.txt b/Reliability.txt new file mode 100644 index 0000000000000000000000000000000000000000..f060df7b254c0c0f1fad502970d6eb2bda824586 --- /dev/null +++ b/Reliability.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/perspectives/ai-ml/reliability +Date Scraped: 2025-02-23T11:44:23.199Z + +Content: +Home Docs Cloud Architecture Center Send feedback AI and ML perspective: Reliability Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC This document in the Architecture Framework: AI and ML perspective provides an overview of the principles and recommendations to design and operate reliable AI and ML systems on Google Cloud. It explores how to integrate advanced reliability practices and observability into your architectural blueprints. The recommendations in this document align with the reliability pillar of the Architecture Framework. In the fast-evolving AI and ML landscape, reliable systems are essential for ensuring customer satisfaction and achieving business goals. You need AI and ML systems that are robust, reliable, and adaptable to meet the unique demands of both predictive ML and generative AI. To handle the complexities of MLOps—from development to deployment and continuous improvement—you need to use a reliability-first approach. Google Cloud offers a purpose-built AI infrastructure that's aligned with Site Reliability Engineering (SRE) principles and provides a powerful foundation for reliable AI and ML systems. Ensure that infrastructure is scalable and highly available By architecting for scalability and availability, you enable your applications to handle varying levels of demand without service disruptions or performance degradation. This means that your AI services are still available to users during infrastructure outages and when traffic is very high. Consider the following recommendations: Design your AI systems with automatic and dynamic scaling capabilities to handle fluctuations in demand. This helps to ensure optimal performance, even during traffic spikes. Manage resources proactively and anticipate future needs through load testing and performance monitoring. Use historical data and predictive analytics to make informed decisions about resource allocation. 
Design for high availability and fault tolerance by adopting the multi-zone and multi-region deployment archetypes in Google Cloud and by implementing redundancy and replication. Distribute incoming traffic across multiple instances of your AI and ML services and endpoints. Load balancing helps to prevent any single instance from being overloaded and helps to ensure consistent performance and availability. Use a modular and loosely coupled architecture To make your AI systems resilient to failures in individual components, use a modular architecture. For example, design the data processing and data validation components as separate modules. When a particular component fails, the modular architecture helps to minimize downtime and lets your teams develop and deploy fixes faster. Consider the following recommendations: Separate your AI and ML system into small self-contained modules or components. This approach promotes code reusability, simplifies testing and maintenance, and lets you develop and deploy individual components independently. Design the loosely coupled modules with well-defined interfaces. This approach minimizes dependencies, and it lets you make independent updates and changes without impacting the entire system. Plan for graceful degradation. When a component fails, the other parts of the system must continue to provide an adequate level of functionality. Use APIs to create clear boundaries between modules and to hide the module-level implementation details. This approach lets you update or replace individual components without affecting interactions with other parts of the system. Build an automated MLOps platform With an automated MLOps platform, the stages and outputs of your model lifecycle are more reliable. By promoting consistency, loose coupling, and modularity, and by expressing operations and infrastructure as code, you remove fragile manual steps and maintain AI and ML systems that are more robust and reliable. Consider the following recommendations: Automate the model development lifecycle, from data preparation and validation to model training, evaluation, deployment, and monitoring. Manage your infrastructure as code (IaC). This approach enables efficient version control, quick rollbacks when necessary, and repeatable deployments. Validate that your models behave as expected with relevant data. Automate performance monitoring of your models, and build appropriate alerts for unexpected outputs. Validate the inputs and outputs of your AI and ML pipelines. For example, validate data, configurations, command arguments, files, and predictions. Configure alerts for unexpected or unallowed values. Adopt a managed version-control strategy for your model endpoints. This kind of strategy enables incremental releases and quick recovery in the event of problems. Maintain trust and control through data and model governance The reliability of AI and ML systems depends on the trust and governance capabilities of your data and models. AI outputs can fail to meet expectations in silent ways. For example, the outputs might be formally consistent but they might be incorrect or unwanted. By implementing traceability and strong governance, you can ensure that the outputs are reliable and trustworthy. Consider the following recommendations: Use a data and model catalog to track and manage your assets effectively. To facilitate tracing and audits, maintain a comprehensive record of data and model versions throughout the lifecycle. 
Implement strict access controls and audit trails to protect sensitive data and models. Address the critical issue of bias in AI, particularly in generative AI applications. To build trust, strive for transparency and explainability in model outputs. Automate the generation of feature statistics and implement anomaly detection to proactively identify data issues. To ensure model reliability, establish mechanisms to detect and mitigate the impact of changes in data distributions. Implement holistic AI and ML observability and reliability practices To continuously improve your AI operations, you need to define meaningful reliability goals and measure progress. Observability is a foundational element of reliable systems. Observability lets you manage ongoing operations and critical events. Well-implemented observability helps you to build and maintain a reliable service for your users. Consider the following recommendations: Track infrastructure metrics for processors (CPUs, GPUs, and TPUs) and for other resources like memory usage, network latency, and disk usage. Perform load testing and performance monitoring. Use the test results and metrics from monitoring to manage scaling and capacity for your AI and ML systems. Establish reliability goals and track application metrics. Measure metrics like throughput and latency for the AI applications that you build. Monitor the usage patterns of your applications and the exposed endpoints. Establish model-specific metrics like accuracy or safety indicators in order to evaluate model reliability. Track these metrics over time to identify any drift or degradation. For efficient version control and automation, define the monitoring configurations as code. Define and track business-level metrics to understand the impact of your models and reliability on business outcomes. To measure the reliability of your AI and ML services, consider adopting the SRE approach and define service level objectives (SLOs). ContributorsAuthors: Rick (Rugui) Chen | AI Infrastructure Solutions ArchitectFilipe Gracio, PhD | Customer EngineerOther contributors: Jose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer Engineer Previous arrow_back Security Next Cost optimization arrow_forward Send feedback \ No newline at end of file diff --git a/Remove_Gmail_from_consumer_accounts.txt b/Remove_Gmail_from_consumer_accounts.txt new file mode 100644 index 0000000000000000000000000000000000000000..319fa6e6a544f41101cd66e2055db8fe6bddcf23 --- /dev/null +++ b/Remove_Gmail_from_consumer_accounts.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/removing-gmail-from-consumer-accounts +Date Scraped: 2025-02-23T11:55:58.983Z + +Content: +Home Docs Cloud Architecture Center Send feedback Remove Gmail from consumer accounts Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document describes how you can remove Gmail from an existing consumer account to enable the user account to be migrated to Cloud Identity or Google Workspace. If your organization doesn't use Cloud Identity or Google Workspace, it's possible that some of your employees have been using Gmail accounts to access Google services. Some of these Gmail accounts might use a corporate email address such as alice@example.com as an alternate email address. Gmail accounts are owned and managed by the individuals who created them. 
Your organization therefore has no control over the configuration, security, and lifecycle of these accounts, which means they can't be migrated as is. By removing Gmail from affected consumer accounts, you allow the user accounts to be migrated to Cloud Identity or Google Workspace. Consider removing Gmail from an existing consumer account if both of the following conditions apply: You want to migrate the user account to Cloud Identity or Google Workspace and retain the user's settings and data. You don't use the email functionality provided by Gmail. Note: Because a Gmail account is owned and managed by the individual who created it, the account's owner must perform the steps to remove Gmail. If you want to retain Gmail functionality, or if you want to avoid engaging the owner of a Gmail account, consider sanitizing the Gmail account instead. Before you begin To remove Gmail from a user account, at least one of the alternate email addresses of the Gmail account must correspond to one of the domains that you've added to your Cloud Identity or Google Workspace account. Both primary and secondary domains qualify, but alias domains are not supported. Note: If you are considering using the transfer tool for unmanaged users, keep in mind that the tool doesn't find Gmail users, regardless of the alternate email addresses they use. The following sections walk through the process of removing Gmail from a consumer account. The steps must be performed by the owner of the Gmail account. Export Gmail data Before you remove Gmail from your user account, you must export your existing Gmail data. You won't be able to access that data after you remove Gmail from the account. (Non-Gmail data is not affected.) For information about how to export your Gmail data, see Download your data. Remove the Gmail service In the Google Account dashboard, open the Data & personalization page. Go to Data & personalization Under Download or delete your data, select Delete a service. Click Delete a service. You might be prompted to enter your password again. Click the delete icon next to the service you want to delete. In the How you'll sign in to Google dialog, select your corporate email address and click Next. Carefully read the explanations on the next screen. If you're sure you want to proceed, select Yes and click Delete Gmail. It might take a few days until your corporate email address becomes the primary email address of the user account and the Gmail data is deleted. Although you won't be able to receive or send email using your @gmail.com email address anymore, the address remains associated with your user account and you can continue to use it to sign in. Migrate the user account When the removal process has completed and the user account has been updated to use your corporate email address as the primary email address, your administrator can migrate the user account to Cloud Identity or Google Workspace like other consumer accounts. For more information, see Migrating consumer accounts Best practices for administrators We recommend the following best practices for working with Gmail accounts: Prevent other users from assigning a corporate email address to their Gmail accounts by proactively provisioning user accounts to Cloud Identity or Google Workspace. Prevent new Gmail accounts from being granted access to Google Cloud resources by using an organizational policy to restrict identities by domain. 
Prevent Gmail accounts from being given access to Google Marketing Platform by using a policy that restricts sharing by domain. What's next Review how to assess existing user accounts. Learn how to sanitize Gmail accounts. Send feedback \ No newline at end of file diff --git a/Request_a_quote.txt b/Request_a_quote.txt new file mode 100644 index 0000000000000000000000000000000000000000..d7b61474357896ce1430159ab4b9800164886bd1 --- /dev/null +++ b/Request_a_quote.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/contact/form?direct=true +Date Scraped: 2025-02-23T12:10:32.062Z + +Content: +Talk to a Google Cloud sales specialistDiscuss your unique challenge in more detailStart a conversation about products and solutions pricingExplore use cases for your industryContact our sales team directlyChat live with our sales teamAvailable all day, starting 9 AM ET Monday to 7 PM ET Friday
\ No newline at end of file diff --git a/Resource_management_with_ServiceNow.txt b/Resource_management_with_ServiceNow.txt new file mode 100644 index 0000000000000000000000000000000000000000..4035e8b01049cddc7528650688c06c1d18f590bf --- /dev/null +++ b/Resource_management_with_ServiceNow.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/resource-management-with-servicenow +Date Scraped: 2025-02-23T11:47:41.829Z + +Content: +Home Docs Cloud Architecture Center Send feedback Reference architecture: Resource management with ServiceNow Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-30 UTC This document discusses architecture patterns for how to find and collect information about assets in Google Cloud, in other cloud platforms, and on-premises by using ServiceNow cloud discovery.
The document is intended for architecture teams or cloud operations teams that are familiar with IT operations management (ITOM); Information technology infrastructure library (ITIL); Google Cloud services such as Compute Engine, Google Kubernetes Engine (GKE), and Cloud Asset Inventory; and ServiceNow Cloud Discovery. Overview Many large enterprises use a hybrid IT infrastructure deployment that combines Google Cloud, other cloud platforms, and on-premises infrastructure. Such a hybrid deployment is typically the initial iteration in a cloud migration strategy. IT departments in these enterprises are required to discover and keep track of all the assets in their technical ecosystem, which can potentially number in the millions. The IT departments must construct a configuration management system that ties these assets together with the technical services that the assets provide. This system must also monitor the assets and services in a way that supports IT operations management (ITOM) and IT service management (ITSM) best practices. For Google Cloud customers, a common architecture uses ServiceNow cloud resource discovery to find and collect information about assets in Google Cloud, in other cloud platforms, and on-premises. ServiceNow offers a wide range of tools for automating resource-management IT workflows across multiple cloud providers. Tools such as Cloud Operations Workspace let IT departments create multi-cloud resource dashboards and manage complex configurations through a unified interface (sometimes called a single pane of glass). This document presents a set of architecture patterns for this scenario, an overview of its high-level components, and a discussion of general design considerations. ServiceNow components for this architecture The ServiceNow platform components in these architecture patterns include the following: A ServiceNow instance that contains a configuration management database (CMDB) of configuration items (CIs). Each CI represents components in your operational environment that are involved in the delivery of digital services. A CI has multiple attributes that contain specific metadata about the component and its relationships to other CIs. One or more ServiceNow Management, Instrumentation, and Discovery (MID) Servers, running in your Google Cloud project. MID Servers collect the metadata for CIs and store it in the CMDB. These architecture patterns define some common practices for importing Google Cloud Asset Inventory data into ServiceNow's Google Cloud Platform asset inventory discovery. Architecture patterns for Google Cloud integration This document discusses the following architecture patterns for integrating Google Cloud into ServiceNow: Google Cloud discovery pattern Google Cloud agentless IP discovery pattern Google Cloud discovery with Agent Client Collector pattern These example architecture patterns are designed for a hybrid deployment that includes some infrastructure in Google Cloud and some in the ServiceNow cloud. They demonstrate how ServiceNow operates in Google Cloud between Google-managed infrastructure and customer-managed infrastructure. ServiceNow MID Servers query all Google-managed infrastructure by calling Google Cloud APIs. For more information about which APIs are called, see Google Cloud Platform APIs used by ITOM applications. 
In each of the following patterns, the architecture components work together in the Google Cloud Platform asset inventory discovery process to collect the cloud asset inventory information required by the ServiceNow Discovery application and related tools. Google Cloud discovery pattern The basic ServiceNow cloud discovery architecture pattern uses ServiceNow MID Servers to call Google Cloud Asset Inventory and other Google Cloud APIs to gather data about resources such as the following: VM instances Tags (keys/values) Storage volumes and storage mapping Data center resources, including hardware types Cloud networks, subnets, and gateways Images Cloud load balancers and availability zones Cloud databases and database clusters Containers (GKE) Service mapping based on resource labels In this pattern, the MID Servers don't need credentials, because they don't log into the VMs to collect data. This limits the ability of the discovery process to gather additional information. But it imposes less operational cost, because it removes the need to manage and rotate MID Server credentials. The following diagram illustrates this architecture pattern. The Google Cloud portion of this pattern consists of the following: One Google Cloud project (Service Project A in the diagram), which consists of two Google Cloud load balancers, one or more VM instances, a GKE instance, and one or more ServiceNow MID Servers. Each MID Server runs in its own VM. A second Google Cloud project (Service Project B in the diagram), which consists of MID Servers running in their own VMs. A third Google Cloud project (Host Project C in the diagram), which consists of the partner interconnect. Additional managed services, such as Cloud APIs, BigQuery, and Cloud Storage. Network routes that are set up from the MID Servers to the Google Cloud APIs. The ServiceNow portion consists of the ServiceNow instance, which captures the metadata from the MID Servers and stores it in the CMDB. Google Cloud agentless IP-based discovery pattern This architecture pattern adds IP-based discovery to the basic cloud discovery pattern by using a batch job and a Google Cloud service account to log into VMs and gather additional details. This pattern requires more of an operational burden to manage the MID Server than with the basic pattern, because it requires you to manage and rotate the MID Server credentials. However, it expands the discovery process beyond the data provided by Cloud Asset Inventory to include additional data, such as the following: OS credential management and security Enhanced discovery, such as file-based discovery and discovery of licenses OS details Running processes TCP connections Installed software In this architecture pattern, one or more ServiceNow MID Servers are located in Google Cloud, while the ServiceNow instance is hosted in the ServiceNow cloud platform. The MID Servers are connected to the ServiceNow instance through the MID Server External Communication Channel (ECC) Queue (not shown). This architecture is shown in the following diagram. The Google Cloud portion of this pattern consists of the following: Service Project A, which consists of two Google Cloud load balancers, one or more VMs, a GKE instance, and one or more ServiceNow MID Servers. Each MID Server runs in its own VM. Service Project B, which consists of MID Servers that run in their own VMs. Host Project C, which consists of the partner interconnect. Additional managed services, such as Cloud APIs, BigQuery, and Cloud Storage. 
ServiceNow Kubernetes Discovery deployed on the GKE infrastructure. Network routes that are set up from the MID Servers to the Google Cloud APIs. Service accounts that enable MID Servers to log into any Google Cloud VMs that require agentless IP address discovery. Network routes that are set up from the MID Servers to any Google Cloud VMs that require agentless IP address discovery. The ServiceNow portion consists of the ServiceNow instance, which captures the metadata from the MID Servers and stores it in the CMDB. Google Cloud discovery with Agent Client Collector pattern This architecture pattern includes the following: The initial cloud discovery. One or more MID Servers. An additional ServiceNow agent, the Agent Client Collector, which you install on your VMs. These agents connect directly to the MID Servers and relay the following additional information to ServiceNow: Near real-time push-based discovery Software metering Live CI view Workflow automation to servers The following diagram illustrates this architecture pattern. The Google Cloud portion of this pattern consists of the following: Service Project A, which consists of two Google Cloud load balancers, one or more VM instances, a GKE instance, and one or more ServiceNow MID Servers. Each MID Server runs in its own VM. Service Project B, which consists of MID Servers running in their own VMs. Host Project C, which consists of the partner interconnect. ServiceNow Kubernetes Discovery deployed on the GKE infrastructure. Additional managed services, such as Cloud APIs, BigQuery, and Cloud Storage. Network routes that are set up from the MID Servers to the Google Cloud APIs. Network routes that are set up from the MID Servers to Google Cloud VMs that have ServiceNow Discovery Agents installed. The ServiceNow portion consists of the following: The ServiceNow instance, which captures the metadata from the MID Servers and stores it in the CMDB. ServiceNow discovery agents that are installed on customer-managed Google Cloud VMs. Cloud asset discovery workflow The following sections discuss the workflow for Google Cloud asset discovery. This workflow applies to all three of the architecture patterns described in this document. Install and configure ServiceNow components Enable the Cloud Asset Inventory APIs. Install Agent Client Collector on your VMs. For more information, see Agent Client Collector installation. Allocate resources for computers that host the MID Servers. Configure firewall rules to allow connections on port 443 between your VM instance and the computers that host the MID Servers. Configure MID Server network connectivity. Install the MID Servers. Configure the MID Servers to call the relevant Google Cloud APIs. Make sure that the MID Servers have a valid network route to call Google Cloud APIs. Workflow Cloud Asset Inventory compiles a database of all supported asset types in the Google Cloud environment. ServiceNow uses Cloud Asset Inventory as the source to retrieve additional information to update the CMDB. The ServiceNow MID Servers query the Cloud Asset Inventory database for information about all of the assets in the Google Cloud environment. The MID Servers store the cloud asset information in a Cloud Storage bucket. Not all required information can be obtained from the Cloud Asset Inventory database. In the agentless pattern, the VM information doesn't include the current OS patch version. 
For this level of detail, the MID Servers perform a deep discovery by doing the following: The MID Servers create a batch job based on the IP addresses of the VMs that require a deep discovery. In the batch job, the MID Servers log into each VM and query the OS for patch versioning and other information. If Agent Client Collectors are installed, the data that they capture is transmitted to the MID Servers directly, rather than stored in the Cloud Asset Inventory database. For more information, see Networking Preparation and MID Server Configuration. After collecting the asset discovery data, the MID Servers store it into the CMDB as follows: The MID Servers create CIs in the CMDB to represent the operational capability provided by each asset. The MID Servers automatically discover labels from Google Cloud and store them in the CMDB. These labels are mapped to the CIs automatically and are useful for creating service maps. The workflow process should be repeated periodically as needed. Depending on the scale and configuration of your deployment, you might choose event-based or schedule-based discovery. For more information, see "Managing the CI lifecycle" in CMDB Design Guidance. Design considerations The following sections provide design guidance for implementing any of the architecture patterns that are described in this document. Location of the ServiceNow instance For most use cases, we recommend deploying the MID Servers in Google Cloud. That way the instances are close to the cloud assets on which they perform deep discovery. The architecture patterns in this document assume that your CMDB stores discovery data for all your cloud resources and for all on-premises resources, not just your Google Cloud assets. The CMDB can be located in the ServiceNow cloud, in Google Cloud, or on-premises. The ultimate decision about where to locate the CMDB in your environment depends on your specific requirements. Deciding to use MID Server agents Another design consideration is whether to use MID Server agents and service accounts. If your discovery process needs to collect information beyond the metadata that Cloud Asset Inventory provides, you need to use a MID Server infrastructure with service accounts, or alternatively, a MID Server with Agent Client Collector. Either approach can affect your operational support cost, which you must consider in your design. The approach that you should use depends on what data you need to capture and how you will use the data. The operational cost of capturing the data might outweigh the value that the data provides. Multi-region support for MID Servers If your company requires multi-region support of the MID Server infrastructure, you should plan to deploy each MID Server in at least two availability zones and replicate it into another region. Cost implications When you choose where to deploy the ServiceNow components for Google Cloud asset discovery, you need to consider egress cost and compute cost. Egress cost Egress charges are incurred whenever data moves out of Google Cloud. For this reason, you should analyze the egress cost for your use case to determine the best option for locating your ServiceNow components. Typically, MID Servers that perform deep discovery incur lower egress costs if they are running in Google Cloud than if they run on another cloud or on-premises. Compute cost ServiceNow components that run in Google Cloud incur compute costs that you should analyze to determine the best value for your company. 
For example, you should consider the number of MID Servers that you deploy on Compute Engine instances. Deploying more MID Servers makes the asset discovery process faster, but it also increases compute cost, because each MID Server is deployed in its own VM instance. For more information about whether to deploy one or multiple MID Servers, see Install multiple MID Servers on a single system. Operational supportability considerations Your deployment likely includes network security controls such as firewalls, intrusion protection systems, intrusion detection systems, and packet mirroring infrastructure. If there are extensive network security controls between Google Cloud and the environment where the MID Servers are deployed, these controls can create operational supportability issues. To avoid these issues, we recommend that you host the MID Servers in Google Cloud or as close as possible to the Google Cloud infrastructure that deep discovery queries. Securing MID Servers We recommend the following security practices for your MID Server instances: Make sure that the MID Servers are isolated so that only trusted administrators can connect to them. Make sure that the MID Servers are protected from deletion. Apply IAM roles that limit changes to only those that are approved through ITIL processes or through a CI/CD pipeline. Securing service accounts We recommend the following security practices for managing service accounts: Make sure that the MID Servers use a service account that has only the IAM roles and permissions that are absolutely necessary for asset discovery. For more information, see Best practices for working with service accounts. ServiceNow authentication requires copying the content of a service account key into the ServiceNow user interface. Service account keys are a security risk if they are not managed correctly. You are responsible for the security of the private key and for the other operations described in Best practices for managing service account keys. If you are prevented from creating a service account key, key creation might be disabled for your organization. For more information, see Managing secure-by-default organization resources. If you acquired the service account key from an external source, you must validate it before use. For more information, see Security requirements for externally sourced credentials. Folder and project structure Folders and projects are nodes in the Google Cloud resource hierarchy. To support asset discovery, your folder and project structure should reflect the structure of your application and of the environments where the application is deployed. Structuring your resources in this way also makes it possible for ServiceNow to map your assets to the technical services that they provide. Be mindful of any changes that you make to the folder and project structure to support ServiceNow discovery. The primary role of the folder and project structure is to support billing and IAM access. Therefore, any changes that you make to support ServiceNow should align with your organization's Google Cloud billing structure. For best practices for structuring your Google Cloud organization, folder, and project hierarchy, see Resource hierarchy and Creating and managing organizations. The following diagram represents an example Google Cloud resource hierarchy in its complete form. 
In this example, the folder structure defines the application, and each project defines an environment. Labeling Labels are key-value pairs that you can assign to your cloud resources. (ServiceNow, AWS, and Azure refer to labels as tags.) ServiceNow uses the labels that you assign to your Google Cloud assets to identify your assets and optionally map them to services. Choosing a good labeling scheme helps ServiceNow monitor your infrastructure for accurate reporting and ITOM/ITSM compliance. We recommend that you use labels for any resources that require unique identification that is more specific than what your folder and project structure allows for. For example, you might want to use labels in the following cases: If there are strict compliance requirements for your application, you can label all of the application resources so that your organization can easily identify all the infrastructure that's in scope. In a multi-cloud environment, you can use labels to identify the cloud provider and region for all resources. If you need more fine-grained visibility than what's provided by default by Google Cloud Billing reports or Cloud Billing export to BigQuery, the data can be filtered by labels. Google automatically assigns labels to Google-managed assets that run in your VPC. Google-assigned labels are prefixed with goog-. Your MID Servers shouldn't attempt to perform a deep inspection on these assets. For more information about labels for Google-managed resources, see Tag Based Mapping and Label resources automatically based on Cloud Asset Inventory real-time notifications. The following table lists labels that Google Cloud services assign to resources that those services manage.
Google Cloud service: Labels or label prefix
Google Kubernetes Engine: goog-gke-
Compute Engine: goog-gce-
Dataproc: goog-dataproc-
Vertex AI Workbench: goog-caip-notebook and goog-caip-notebook-volume
To support resource management effectively, your organization's deployment process must create project and folder structures and assign asset labels consistently across your entire organization. Inconsistencies in infrastructure and labeling can make it difficult to maintain a correct CMDB without manual processes that are likely to be unsupportable and that present scaling challenges in the long term. The following list suggests best practices for making your deployment process consistent and repeatable: Use infrastructure as code (IaC) or automated provisioning systems such as Terraform, ServiceNow ITOM, or Cloud provisioning and governance with Google Cloud Deployment Manager. Have a good governance process in place for your labels. For an overview of labeling governance, see Tag Governance in the ServiceNow documentation. What's next For additional best practices for structuring your resources for Cloud Billing, see Guide to Cloud Billing Resource Organization & Access Management and Cloud Insights setup guide for Google Cloud. For best practices for structuring your organization's IAM permissions, see Best practices for planning accounts and organizations and Cloud Provisioning and Governance. For best practices for structuring your VPC firewall policies across your organization, see Hierarchical firewall policies. Learn how to use labels to support ServiceNow tag-based discovery. Learn about ServiceNow Agent Client Collector, a push mechanism that runs in your Google Cloud project and sends output data to the ServiceNow instance through the MID Server, storing events and metrics in the relevant database. 
For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Retail.txt b/Retail.txt new file mode 100644 index 0000000000000000000000000000000000000000..90cd2674033138747a2de932126442eb5f1ec956 --- /dev/null +++ b/Retail.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/retail +Date Scraped: 2025-02-23T11:57:43.933Z + +Content: +Google Cloud for RetailWe’re on a mission to help retailers easily leverage AI throughout their organizations and gain real-time insights from their data.The world’s leading brands are partnering with Google Cloud to thrive in this ever-changing landscape. Talk with an expertWatch how retailers can accelerate product catalog management with generative AIRetail solutions to power your businessCapture digital and omnichannel revenue growthEmbrace the digital momentPower ecommerce operations with reliability, scalability, and flexibility. With Google Cloud, retailers can deliver growth through channel-less, frictionless customer experiences, and click-to-deploy ecommerce solutions.“No matter what the intent of the shopper is, we want to surface that vast assortment that we have. That's what Google Cloud Vertex AI Search for commerce helped us to do.”Matt Baer, Chief Digital Officer, Macy'sExplore our ecommerce offerings:Product discoveryLearn moreDeploy a sample ecommerce websiteDeploy in consoleConversational commerceLearn moreBecome a customer-centric, data-driven retailerHarness the power of dataGet the most value out of your data and empower teams to make data-driven decisions.Google Cloud can help retailers leverage large volumes of customer and product data for business transformation. With Google, you can easily run powerful AI/ML for better marketing, forecasting, and insights. 
“The data synergies across Google platforms have allowed us to better use contextual data to optimize and improve our ad effectiveness as we continue to scale our investments, which helped us double our return on ad spend while increasing our investment by 20%.”Nari Sitaraman, Chief Technical Officer, Crate & BarrelExplore our data offerings:Customer Data PlatformLearn moreDemand forecastingLearn moreData cloud for retailLearn moreModernize the store experienceInfuse the store with AI-powered experiencesDigitize the store to deliver AI-driven use cases for shoppers and associates.Google Cloud can help associates better serve customers by improving product and service intelligence and streamlining in-store processes.“In choosing to build our data cloud on Google Cloud, we prioritized service scalability, security, and openness to build an agile development system that can serve the needs of the organization well into the future.”Izuru Nishimura, Executive Office and Head of ICT, Seven-Eleven JapanExplore our store offerings:Assisted order pickingLearn moreShelf checking with Vision AILearn moreAssociate productivityLearn moreCreate sustainable and efficient operationsDrive efficiency Ensure business continuity, increase collaboration, and improve supply chain performance.Retailers can reduce operational costs and operate efficiently, even under resource constraints.“As we move from traditional data platforms to Google Cloud, we have the potential to return millions of dollars to the bottom line, directly improve margins, and improve the customer shopping experience.”Steve Dee, Chief Information Officer, Rodan + FieldsExplore our offerings:Contact Center AILearn moreFleet routingLearn moreStore locator and address validationLearn moreCapture digital and omnichannel revenue growthCapture digital and omnichannel revenue growthEmbrace the digital momentPower ecommerce operations with reliability, scalability, and flexibility. With Google Cloud, retailers can deliver growth through channel-less, frictionless customer experiences, and click-to-deploy ecommerce solutions.“No matter what the intent of the shopper is, we want to surface that vast assortment that we have. That's what Google Cloud Vertex AI Search for commerce helped us to do.”Matt Baer, Chief Digital Officer, Macy'sExplore our ecommerce offerings:Product discoveryLearn moreDeploy a sample ecommerce websiteDeploy in consoleConversational commerceLearn moreBecome a customer-centric, data-driven retailerBecome a customer-centric, data-driven retailerHarness the power of dataGet the most value out of your data and empower teams to make data-driven decisions.Google Cloud can help retailers leverage large volumes of customer and product data for business transformation. With Google, you can easily run powerful AI/ML for better marketing, forecasting, and insights. 
“The data synergies across Google platforms have allowed us to better use contextual data to optimize and improve our ad effectiveness as we continue to scale our investments, which helped us double our return on ad spend while increasing our investment by 20%.”Nari Sitaraman, Chief Technical Officer, Crate & BarrelExplore our data offerings:Customer Data PlatformLearn moreDemand forecastingLearn moreData cloud for retailLearn moreCreate the modern storeModernize the store experienceInfuse the store with AI-powered experiencesDigitize the store to deliver AI-driven use cases for shoppers and associates.Google Cloud can help associates better serve customers by improving product and service intelligence and streamlining in-store processes.“In choosing to build our data cloud on Google Cloud, we prioritized service scalability, security, and openness to build an agile development system that can serve the needs of the organization well into the future.”Izuru Nishimura, Executive Office and Head of ICT, Seven-Eleven JapanExplore our store offerings:Assisted order pickingLearn moreShelf checking with Vision AILearn moreAssociate productivityLearn moreDrive sustainable and efficient operationsCreate sustainable and efficient operationsDrive efficiency Ensure business continuity, increase collaboration, and improve supply chain performance.Retailers can reduce operational costs and operate efficiently, even under resource constraints.“As we move from traditional data platforms to Google Cloud, we have the potential to return millions of dollars to the bottom line, directly improve margins, and improve the customer shopping experience.”Steve Dee, Chief Information Officer, Rodan + FieldsExplore our offerings:Contact Center AILearn moreFleet routingLearn moreStore locator and address validationLearn moreYour guide to generative AI in retail For retailers, generative AI can drive personalized customer experiences and improve store productivity. In this ebook, we explore four strategies you can implement.Read nowCheck out the latest happenings in retailElevate your retail business with edge computingTake your retail business to the next level with our latest report on edge computing.Shopify and Google Cloud AI integration brings advanced ecommerce capabilities to retailers and merchants worldwideShopify merchants can now deploy advanced search and browse experiences using Google Cloud.New research: Search abandonment continues to vex retailers worldwide Read about the impact of search abandonment and its role in brand loyalty and shopper sentiment.Ensure resilience in retail with a data-driven approachThe top four areas where investing in a data-driven approach can help achieve efficiency.Fulfilling the future of possibleRead how retail CFOs can glean insights and grow revenue in modern times.Google Cloud Retail BlogGoogle Cloud news, best practices, and how-tos for retailers.View MoreTop brands who choose Google CloudDiscover why many of the world’s leading retailers are choosing Google Cloud to help them innovate faster, make smarter decisions, and collaborate from anywhere.See all customersRecommended retail partnersOur industry partners can help you solve your business challenges and unlock growth opportunities with painless implementations and integrated out-of-the-box or custom solutions.See all partnersDiscover insights. 
Find solutions.See how you can transform your retail organization with Google Cloud.Talk with an expertWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleSee all industry solutionsContinue browsingGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Risk_and_compliance_as_code_(RCaC).txt b/Risk_and_compliance_as_code_(RCaC).txt new file mode 100644 index 0000000000000000000000000000000000000000..686e012143e33c8c7972640bbf1761cf49b25f17 --- /dev/null +++ b/Risk_and_compliance_as_code_(RCaC).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/risk-and-compliance-as-code +Date Scraped: 2025-02-23T12:01:04.885Z + +Content: +Risk and compliance as code (RCaC)Embrace automation to transform your security and compliance function to adhere to the speed and agility of DevOps, reduce risk, and create value in the cloud securely. Contact usBenefitsCodify infrastructure and policies, and automate routine compliance checksPrevent non-compliancePrevent non-compliance by asserting infrastructure and policies as code for easy onboarding on Google Cloud.Establish secure guardrails from the get-go via security blueprints and Assured Workloads.Detect drift and non-complianceDetect non-compliance via Security Command Center, notifying stakeholders when offending infrastructure is identified.Reduce risk with intelligent automation, control mapping, and continuous assessments.Transfer risk Once on Google Cloud, you can leverage Risk Manager to continuously evaluate risk and our Risk Protection Program to qualify for cyber insurance. Key featuresModernize compliance by automating routine checks to reduce your audit fatigueAs more regulations come into existence and organizations migrate to the cloud, the risk of non-compliance and associated impact increases.Continuous complianceAdopt security controls and compliance requirements in a codified format using our security blueprints and Assured Workloads.Continuously monitor for security and compliance drift via Security Command Center.Shared fateMove from shared responsibility to shared fate by partnering with Google. Deploy and run securely on our platform and become risk aware. This means providing holistic capabilities throughout your cloud journey. Reduce security risk and gain access to a cyber insurance policy designed exclusively for Google Cloud customers via our Risk Protection Program.Ready to get started? Contact usCompliance in DevOps cultureLearn how to address common compliance requirements in a cloud-native way.VideoMaking compliance cloud nativeWatch videoPartnersRecommended RCaC partnersOur large ecosystem of trusted industry partners can help you simplify your complex risk and compliance journey.See all partnersExplore our marketplaceRelated servicesIntegrations and productsAssured WorkloadsAssured Workloads allows customers to confidently secure and configure sensitive workloads to support their regulatory compliance requirements.Security Command CenterObtain compliance reports to help ensure your resources are meeting compliance requirements. Risk Protection ProgramReduce security risk and gain access to an exclusive cyber insurance policy designed exclusively for Google Cloud customers. DocumentationExplore how to get started on your modernized compliance management journeyWith the changing risk landscape, the aim of a modern compliance function is to help an organization stay compliant as well as modernize itself. 
Read our documentation and best practices on how to get started.WhitepaperAssuring compliance in the cloudThe aim of a modern compliance function is to help an organization stay compliant as well as modernize itself. Read on how to get started.Learn moreBlueprintSecure foundation blueprint to adopt initial configurationsResources, including code and templates, that can be used to deploy cloud resources in recommended configurations.Learn moreArchitectureSetting up a cloud-native PCI DSS environment using GKEThe PCI on GKE blueprint contains a set of Terraform configurations and scripts that demonstrate how to bootstrap a PCI environment in Google Cloud. Learn moreQuickstartSetting up a FedRAMP environment on Google CloudA quickstart to deploy a three-tiered application aligning to FedRAMP requirements.Learn moreNot seeing what you’re looking for?View documentationTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Run_Applications_at_the_Edge.txt b/Run_Applications_at_the_Edge.txt new file mode 100644 index 0000000000000000000000000000000000000000..56f9479dd488838a185ed7ff675718dedd80448c --- /dev/null +++ b/Run_Applications_at_the_Edge.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/modernize-with-edge +Date Scraped: 2025-02-23T11:58:29.837Z + +Content: +Building the next generation edge platformMany industries are facing a growing demand to provide localized, consistent, low latency services. Our Edge solution addresses these challenges of edge deployments and governance while being hardware agnostic.Contact usBenefitsEnhance customer experience and employee productivity with your edge strategyComprehensive solutionGoogle Cloud’s Edge solution provides an end-to-end approach including deploying the platform and applications as well as managing them and integrating them with Google Cloud.Minimize risk and operational overheadEliminates cumbersome platform upgrades that result in edge locations staying on outdated platforms longer and being vulnerable to security issues through centralized platform management.Cost reductionOrganizations can eliminate their expensive VMWare licensing costs and also reduce their operational costs by having a unified approach for managing applications running at the edge and the cloud.Key featuresSimplifying your edge strategyOur solution provides standardized approaches for deploying and managing your applications across various edge locations.Automated platform deploymentDeploy edge environments in a completely automated manner and use Config Management to standardize configuration across multiple deployments.Single pane of glass viewGoogle Cloud and Anthos allow you to have a single view of your various edge locations to monitor and manage them.Seamless developer experienceThis solution allows application teams to use standardized development approaches irrespective of whether it is deployed on the cloud or the edge.Build for the futureBy building applications on Anthos, customers can adopt a platform that is not just built for today but for the future, including integrations with products like BigQuery, ML, and more.Ready to get started? 
Contact usGoogle Cloud Next '21 SessionWatch and learn about how customers can reimagine how they leverage their edge locations to improve employee productivity and customer experience. VideoBuilding the next-generation edge with AnthosWatch videoTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/SAP_on_Google_Cloud.txt b/SAP_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..5307d2c835490c391cc2a5481685806703579655 --- /dev/null +++ b/SAP_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/sap +Date Scraped: 2025-02-23T11:59:37.301Z + +Content: +See how SAP customers are driving success on Google Cloud. Watch now.SAP on Google CloudActivate your SAP data with Google Cloud AI for advanced analytics and building cutting-edge AI/ML and generative AI applications.Contact usWatch full video here2:02Unified data cloud for simplicity and intelligenceIndustry leaders choose Google's data cloud to accelerate digital transformation. Read more to see how organizations are solving data challenges. Learn about the findings and recommendations from a global survey of more than 800 C-suite executives. Read the IDC whitepaperTrust. Accelerate. Innovate with Google AI.Discover a unified data experience that offers leading connectors, analytics accelerators, and innovative AI to bring clarity from data to decisions and accelerate innovation.TRUSTED CLOUDProtect your information, identities, applications, and devices using the same secure-by-design infrastructure, built-in protection, and global network that Google usesOn an open, unified cloud, access SAP, non-SAP, structured, and unstructured data at petabyte scale instantly, with full data context.Unlock data silos with flexible architectural components and bidirectional data integration.Connect BigQuery and Vertex AI with SAP ERP and SAP Datasphere with endorsed integration patterns.Related contentSAP Datasphere on Google Cloud info sheet5 minute readInfobrief: Activating Enterprise Data with AI and Analytics 10 minute readMaximize the value of your SAP data with BigQueryVideo (01:35)Limitless insights Solve your biggest challenges across the enterprise with SAP and Google Cloud data analyticsOur unique partnership combines powerful data analytics, AI, and co-innovations for faster time to value.Packaged solutions for supply chain, customer experience, sustainability, and more.Speed deployment with endorsed blueprints, reference architectures, and packaged data and analytics content. Related contentIDC Report: The Value of BigQuery from Google Cloud for SAP Customers 8 minute readVodafone runs SAP on Google Cloud Video (02:02)Cementos Pacasmayo lays solid foundations for innovation with Google Cloud Cortex FrameworkCase Studyfaster innovationBuild transformative AI/ML and generative AI experiences with confidence and speedInnovate with Google’s generative AI, natural language AI, and large language models integrated with SAP workflows and applications. Leverage smart analytics, AI technologies, and integrated solutions. 
Maximize co-innovation, joint reference architectures, and joint technology investments.Related contentReimagine the supply chain with the power of AI info sheetOne-pager 2 minute readArtificial intelligence has opened up new frontiers for innovation in various industries7 minute readGenerative AI on Google CloudVideo (02:43)Foundation of trustTRUSTED CLOUDProtect your information, identities, applications, and devices using the same secure-by-design infrastructure, built-in protection, and global network that Google usesOn an open, unified cloud, access SAP, non-SAP, structured, and unstructured data at petabyte scale instantly, with full data context.Unlock data silos with flexible architectural components and bidirectional data integration.Connect BigQuery and Vertex AI with SAP ERP and SAP Datasphere with endorsed integration patterns.Related contentSAP Datasphere on Google Cloud info sheet5 minute readInfobrief: Activating Enterprise Data with AI and Analytics 10 minute readMaximize the value of your SAP data with BigQueryVideo (01:35)Accelerate time to value Limitless insights Solve your biggest challenges across the enterprise with SAP and Google Cloud data analyticsOur unique partnership combines powerful data analytics, AI, and co-innovations for faster time to value.Packaged solutions for supply chain, customer experience, sustainability, and more.Speed deployment with endorsed blueprints, reference architectures, and packaged data and analytics content. Related contentIDC Report: The Value of BigQuery from Google Cloud for SAP Customers 8 minute readVodafone runs SAP on Google Cloud Video (02:02)Cementos Pacasmayo lays solid foundations for innovation with Google Cloud Cortex FrameworkCase StudyInnovate with Google AI/MLfaster innovationBuild transformative AI/ML and generative AI experiences with confidence and speedInnovate with Google’s generative AI, natural language AI, and large language models integrated with SAP workflows and applications. Leverage smart analytics, AI technologies, and integrated solutions. Maximize co-innovation, joint reference architectures, and joint technology investments.Related contentReimagine the supply chain with the power of AI info sheetOne-pager 2 minute readArtificial intelligence has opened up new frontiers for innovation in various industries7 minute readGenerative AI on Google CloudVideo (02:43)SAP on Google CloudDiscover how Dufry transformed its supply chain with SAP on Google Cloud, running 30% faster and improved its in-store product mix.Watch the videoSAP customers on Google CloudDiscover why more and more SAP customers are choosing Google Cloud to help them maximize their data, innovate faster, and transform their business. 
The Home Depot uses BigQuery for accurate supply chain planning and analyzing its businessVideo (01:53)Learn how Mercado Libre leverages Google Cloud Cortex Framework and BigQuery to provide users access to data.Video (2:02)Learn how AI uncovers deeper customer insights resulting in better decision makingVideo (1:46)ATB Financial speeds insights 117x with SAP data in BigQuery Video (02:12)Southwire eliminates downtime to keep 30+ factories running 24/7Video (01:55)Why Inchcape chose Google Cloud for their SAP Rise migration to help transform their businessVideo (2:14)View MoreLearn more about deploying SAP on Google CloudEase and speed your SAP ERP journey to the cloud with SAP BTP and Google CloudThis infographic explains how it's done.ReadSAP on Google Cloud: migration strategiesMigrating SAP landscapes, workloads, and data to Google Cloud can be a breeze.Watch how9:15SAP on Google Cloud: high availability 11:38SAP on Google Cloud: backup strategies Check out our technical documentation for your SAP infrastructure modernizationThe Google Cloud and SAP partnership provides customers with SAP-certified infrastructure for all SAP systems.Learn moreAI-powered resilient supply chainSAP and Google Cloud's AI-powered supply chain solutions deliver real-time insights and risk management, fueling efficiency and competitive advantage.Read info sheet Google Cloud is carbon neutralWe want to give our customers more reasons to go to SAP and GoogleChristian Klein, Chief Executive Officer, SAPSee what makes our partnership so uniqueTake the next stepTell us what you’re solving for. A Google Cloud SAP expert is ready to help you find the best solution.Contact usWork with a trusted partnerFind a partnerContinue browsingSee all productsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/SQL_Server_on_Google_Cloud.txt b/SQL_Server_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..07247e4ebf43403f6e90ffcb05cf910e8247a385 --- /dev/null +++ b/SQL_Server_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/sql-server +Date Scraped: 2025-02-23T11:59:27.598Z + +Content: +Data Cloud & AI Summit round-up: What’s new in Cloud SQL.Jump to SQL Server on Google CloudSQL Server on Google CloudMigrate and run Microsoft SQL Server on Compute Engine or use managed Cloud SQL for SQL ServerGo to consoleBring your existing SQL Server licenses and run them on Compute EngineRun license-included SQL Server VM images on Compute EngineUse fully managed Cloud SQL for SQL Server to reduce operational overheadFull compatibility and configurable custom VM shapes help you optimize costGoogle named a Leader in the 2022 Gartner® Magic Quadrant™ for Cloud Database Management SystemsRegister to download the reportBenefitsReduce dependenceImprove server utilization by paying as you go (including BYOL). Take advantage of a managed database service that provides high availability, updates, and replication, so you can focus on business value.Global infrastructureIndustry-leading VM performance speeds up Windows boot times dramatically. Your databases run on Google’s global fiber network. 
The result is ultra low latency for your applications.Fine-grained controlCustom Machine Types give admins control over the vCPU, RAM, and storage you provision for your virtual machine, to help ensure the ideal price-performance match for your enterprise applications.Key featuresKey featuresLift and shift SQL ServerMigrate your existing workloads to Cloud SQL or SQL Server running on Compute Engine with full compatibility. SQL Server running on Google Cloud works with all of your familiar tools like SSMS and Visual Studio. Connect your existing workloads to the best of what Google Cloud has to offer.Reduce operational overheadCloud SQL for SQL Server is a fully managed database service with a 99.95% SLA. Being fully managed includes upgrades, patching, maintenance, backups, and tuning. Regional availability and various VM shapes with memory from 3.75 GB up to 416 GB and storage up to 64 TB for all workloads provide flexible scaling options to eliminate the need to pre-provision or plan capacity before you get started.Live migration for underlying VMsWhen you run SQL Server on Compute Engine, the virtual machines can live-migrate between host systems without rebooting, which keeps your applications running even when host systems require maintenance.37:05Running business-critical commercial databases with Google CloudCustomersLearn more about running SQL Server on Compute Engine or using managed Cloud SQLCase studyVolusion improves performance for its customers’ ecommerce websites.5-min readBlog postMi9 Retail announces collaboration with Google Cloud to advance enterprise retail technology.5-min readBlog postDescartes Labs relied on automatic storage increases to cover nearly 40x disk growth.5-min readSee all customersDocumentationDocumentationGoogle Cloud BasicsSQL Server on Compute EngineRun SQL Server on Compute Engine with your own licenses or with pre-configured images.Learn moreTutorialConfiguring SQL Server Availability GroupsConfigure instances to use Windows Server Failover Clustering and SQL Server AlwaysOn Availability GroupsLearn moreBest PracticeSQL Server best practicesBest practices to optimize Compute Engine instances that run Microsoft SQL Server.Learn moreBest PracticeMigrate SQL Server 2008 to Cloud SQL for SQL ServerMigrate data from SQL Server 2008 to Cloud SQL for SQL Server 2017 Enterprise.Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Cloud SQL for SQL ServerPricingPricingPricing for SQL Server on Google Cloud depends on where you run it. Running from within virtual machines on Compute Engine would be priced based on per-second usage of the virtual machine types, persistent disks, and other resources that you select. 
Alternatively, you can run SQL Server on Cloud SQL for SQL Server, which has its own pricing model.View pricing detailsPartnersPartnersGoogle Cloud’s partners can advise and assist you with managing, migrating, optimizing, and modernizing your SQL Server, Windows, and .NET applications.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/SRE_Principles.txt b/SRE_Principles.txt new file mode 100644 index 0000000000000000000000000000000000000000..d0fd6e0252aafc0617f15fa8d90f1d5a8b505f02 --- /dev/null +++ b/SRE_Principles.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/sre +Date Scraped: 2025-02-23T11:58:23.536Z + +Content: +Download the new whitepaper on SRE to learn about key concepts and how Google Cloud can help you on your SRE journeySite Reliability Engineering (SRE)SRE is a job function, a mindset, and a set of engineering practices to run reliable production systems. Google Cloud helps you implement SRE principles through tooling, professional services, and other resources.Contact usVIDEOHow to Adopt Site Reliability Engineering (SRE) on Google Cloud1:59BenefitsStrike the balance between speed and reliabilityReap the benefits of speed Automate end to end, from writing code to running services in production. Align dev and ops around shared goals to go faster. Connect to the tools you love, including incident management, as you minimize toil.Improve reliability with proven SRE principlesLeverage SRE principles developed at Google and proven to work at scale. Easily implement SRE best practices with Google Cloud’s Observability to speed up problem resolution and improve reliability.We meet you where you are in your SRE journeyDrive higher software delivery, irrespective of company size, industry, or whether you are using VMs, Kubernetes, or serverless. Choose from free tools or paid offerings to jump-start your SRE journey.Key featuresSRE tools and resources to make your operations and SRE teams run betterMonitor service health using SRE principlesMonitor the health of your services and work with developers to increase the velocity of changes using built-in support for service monitoring. Select metrics for SLIs, set SLOs, and track error budgets to mitigate risk for your service. Use powerful dashboards to aggregate metrics and logs, including golden signals to reduce MTTR and quickly answer questions about service health.Out-of-the-box integrations to increase automation, reduce toilUse our built-in integrations with the tools you love to troubleshoot incidents quickly. Implement progressive rollouts and roll back changes safely. Pre-built integrations with Cloud Build are available to allow you to build, test, and deploy artifacts to Google Kubernetes Engine, App Engine, Cloud Functions, Firebase, and Cloud Run as part of your CI/CD.One integrated view for faster resolutionGet one unified view across logs, events, metrics, and SLOs. Get in-context observability data, right within service consoles of Google Kubernetes Engine, Cloud Run, Compute Engine, Anthos and other run times. Collect metrics, traces, and logs with zero setup. Sub-second ingestion latency and terabyte per-second ingestion rate ensure you can perform real-time log management and analysis at scale. 
Get extra help from Google Cloud SRE specialistsIf you would like more hands-on help through the journey, we have additional services to consider including Google consulting services. Reach out to sales to see which option would work for your organization. Learn from our CRE team and customer success stories for how Google Cloud tools and practices have helped other companies implement SRE in their organization.Drive SRE/developer collaboration to “shift-left” observabilityWith OpenTelemetry (OT) packages and Google Exporter, developers can instrument and export trace data to Cloud Trace. Our new unified Ops agent (in preview), collects metrics and logs and also supports OpenTelemetry to capture and transport metrics. We are working to implement OT libraries as out-of-the-box features in many of our cloud products. Cloud SQL Insights is one example of this effort.CustomersMeeting customer demand with SRE practicesBlog postHow Hakuhodo Technologies is using SRE and its transformative impact4-min readBlog postHow JCB is leveraging SRE to lead a successful digital transformation4-min readBlog postHow Sabre is using SRE to lead a successful digital transformation3-min readBlog postHow Lowe's increased their monthly release velocity by 300x4-min readBlog postHow Lowe’s SRE reduced its mean time to recovery (MTTR) by over 80 percent4-min readSee all customersRelated servicesSRE integrations and productsBuild and deploy new cloud applications, store artifacts, and monitor app security and reliability on Google Cloud.Cloud LoggingSecurely store, search, analyze, and receive alerts for all of your log data and events in real time with an exabyte-scale, fully managed service.Cloud MonitoringGain visibility into the performance, availability, and health of your applications and infrastructure.Cloud BuildBuild, test, and deploy on our serverless CI/CD platform.Artifact RegistryStore, manage, and secure your build artifacts on the next generation of Container Registry.Cloud DeployGoogle Cloud Deploy provides fully managed continuous delivery with release promotion, built-in metrics, and security.DocumentationLearn how to implement SRE at your organization with these resourcesBest PracticeGoogle Site Reliability EngineeringAccess the SRE books, hear from SREs, and learn how we SRE at Google.Learn moreGoogle Cloud BasicsCreating an SLOTo monitor a service, you need at least one service-level objective (SLO). Learn step by step how to create your first SLO in Cloud Monitoring.Learn moreTutorialEngineering for reliabilityLearn how to define and defend your SLOs in Google Cloud’s Observability and improve observability of your applications running in Google Cloud.Learn moreTutorialSRE: Measuring and managing reliabilityThis course teaches the theory of service-level objectives (SLOs), a principled way of describing and measuring the desired reliability of a service.Learn moreTutorialDeveloping a Google SRE cultureThis course introduces key practices of Google SRE and the important role IT and business leaders play in the success of SRE organizational adoption.Learn moreNot seeing what you’re looking for?View documentationWhat's newWhat's new in Google Cloud SRESign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postAre we there yet? Thoughts on assessing an SRE team’s maturityRead the blogVideoGetting started with SLOsWatch videoPodcastSRE III with Steve McGhee and Yuri GrinshteynListen nowTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Sanitize_Gmail_accounts.txt b/Sanitize_Gmail_accounts.txt new file mode 100644 index 0000000000000000000000000000000000000000..96c7f0dc52340f407eaa3afcb7a031071ab96ef3 --- /dev/null +++ b/Sanitize_Gmail_accounts.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/sanitizing-gmail-accounts +Date Scraped: 2025-02-23T11:55:56.849Z + +Content: +Home Docs Cloud Architecture Center Send feedback Sanitize Gmail accounts Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-11 UTC This document describes how to sanitize existing Gmail accounts by deliberately removing any corporate email addresses from them. If your company hasn't been using Cloud Identity or Google Workspace, it's possible that some of your employees have been using Gmail accounts to access Google services. Some of these Gmail accounts might use a corporate email address such as alice@example.com as an alternate email address. Consider sanitizing a Gmail account if either of the following conditions is true: You want the owner of the Gmail account to switch to a managed user account. You want the Gmail account to stop using a corporate email address as an alternate address. This might be because the account belongs to a former employee or because you don't recognize the owner of the account. Removing the corporate email address from a Gmail account can mitigate a social engineering risk: if a Gmail account uses a seemingly trustworthy email address like alice@example.com as an alternate address, then the owner of the account might be able to convince employees or business partners to grant them access to resources they shouldn't be allowed to access. Before you begin To sanitize a Gmail account, you must meet all of the following prerequisites: You have identified a suitable onboarding plan and have completed all activities that your plan defines as prerequisites for consolidating your existing user accounts. You have created a Cloud Identity or Google Workspace account. Each Gmail account that you plan to sanitize must meet the following criteria: One of the alternate email addresses of the Gmail account corresponds to one of the domains that you've added to your Cloud Identity or Google Workspace account. Both primary and secondary domains qualify, but alias domains are not supported. Note: The transfer tool for unmanaged users doesn't find Gmail users, regardless of the alternate email addresses they use. Process Sanitizing Gmail accounts works like migrating consumer accounts, but it is based on the idea that you deliberately create a conflicting account. The following diagram illustrates the process. Rectangular boxes on the Administrator side denote actions that a Cloud Identity or Google Workspace administrator takes; rectangular boxes on the User account owner side denote actions that only the owner of a consumer account can perform. The sequence of steps differs slightly depending on whether you want the owner of the Gmail account to switch to a managed user account or whether you simply want the account to give up its corporate email address. 
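Both variants begin with the same administrative step: creating a user account in Cloud Identity or Google Workspace whose primary email address matches the corporate address that the Gmail account uses. The following is a hedged sketch of that step, assuming the google-api-python-client library, a service account key with domain-wide delegation to an admin user, and illustrative file names and addresses:

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

# Assumed placeholders: admin-sa-key.json and admin@example.com stand in for your
# own delegated credentials and admin identity.
creds = service_account.Credentials.from_service_account_file(
    "admin-sa-key.json", scopes=SCOPES
).with_subject("admin@example.com")
directory = build("admin", "directory_v1", credentials=creds)

# Create the conflicting managed account for the corporate address.
directory.users().insert(body={
    "primaryEmail": "bob@example.com",  # corporate address used by the Gmail account
    "name": {"givenName": "Bob", "familyName": "Example"},
    "password": "use-a-long-random-value",  # assign a generated random password
}).execute()

# Only for the "force the account to give up its corporate email address" variant:
# delete the managed account immediately after creating it.
# directory.users().delete(userKey="bob@example.com").execute()

In the first variant you keep the new managed account; in the second you delete it right away, as described in the sections that follow.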
Encourage a switch to a managed account If you want a user to switch to a managed account, create a user account for that user in Cloud Identity or Google Workspace. For the primary email address, use the email address that's used as an alternate email address by the Gmail account. For example, if the Gmail user bob@gmail.com has specified bob@example.com as an alternate email address, use bob@example.com as the primary email address for the Cloud Identity or Google Workspace user. The owner of the affected account has two ways to sign in—by using the Gmail address or by using the corporate email address. If the owner signs in by using the Gmail address, they see the following message, indicating that the corporate email address has been disassociated from the user account: The account owner sees this message only once. If the owner instead signs in by using the corporate email address, they see a ballot screen: If they select Organizational Google Workspace account, they must authenticate using the credentials of the newly created user account in Cloud Identity or Google Workspace. If they use an external IdP, this process involves single sign-on. Because the user account in Cloud Identity or Google Workspace is new, none of the Gmail account's data is transferred. If they select Individual Google account, they continue with their Gmail account, but they see the following message indicating that the corporate email address is being disassociated from the user account: After confirming, they are shown another message: Force an account to give up its corporate email address You can force an account to give up its corporate email address as follows: Create a user account in Cloud Identity or Google Workspace that has the corresponding corporate email address. Because you don't want the managed user account to ever be used, assign a random password. Delete the user account that you just created. By creating a conflicting account and immediately deleting the managed account, you leave the consumer account in a state where the owner has to rename the account. The owner of the affected account has two ways to sign in—by using the Gmail address or by using the corporate email address: If the owner signs in by using the Gmail address, they see the following message, indicating that the corporate email address has been disassociated from the user account: If they instead sign in by using the corporate email address, they see the following message: After confirming, they are shown another message: All configuration and data that was created by using this consumer account is unaffected by the renaming process. But for subsequent attempts to sign in, the user must use the Gmail address because the corporate address is no longer associated with the user account. Best practices We recommend the following best practices when you are sanitizing Gmail accounts: Prevent other users from assigning a corporate email address to their Gmail accounts by proactively provisioning user accounts to Cloud Identity or Google Workspace. Prevent new Gmail accounts from being granted access to Google Cloud resources by using an organizational policy to restrict identities by domain. Prevent Gmail accounts from being given access to Google Marketing Platform by using a policy that restricts sharing by domain. What's next Review how you can assess existing user accounts. Learn how to evict unwanted consumer accounts. 
Send feedback \ No newline at end of file diff --git a/Scenarios_for_applications.txt b/Scenarios_for_applications.txt new file mode 100644 index 0000000000000000000000000000000000000000..7358866016fdc81bc952df8f753885ac314d7031 --- /dev/null +++ b/Scenarios_for_applications.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/dr-scenarios-for-applications +Date Scraped: 2025-02-23T11:54:34.321Z + +Content: +Home Docs Cloud Architecture Center Send feedback Disaster recovery scenarios for applications Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-05 UTC This document is part of a series that discusses disaster recovery (DR) in Google Cloud. This part explores common disaster recovery scenarios for applications. The series consists of these parts: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications (this document) Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Introduction This document frames DR scenarios for applications in terms of DR patterns that indicate how readily the application can recover from a disaster event. It uses the concepts discussed in the DR building blocks document to describe how you can implement an end-to-end DR plan appropriate for your recovery goals. To begin, consider some typical workloads to illustrate how thinking about your recovery goals and architecture has a direct influence on your DR plan. Batch processing workloads Batch processing workloads tend not to be mission critical, so you typically don't need to incur the cost of designing a high availability (HA) architecture to maximize uptime; in general, batch processing workloads can deal with interruptions. This type of workload can take advantage of cost-effective products such as Spot VMs and preemptible VM instances, which you can create and run at a much lower price than normal instances. However, Compute Engine might preemptively stop or delete these instances if it requires access to those resources for other tasks. By implementing regular checkpoints as part of the processing task, the processing job can resume from the point of failure when new VMs are launched. If you're using Dataproc, the process of launching preemptible worker nodes is managed by a managed instance group. This can be considered a warm pattern, where there's a short pause waiting for replacement VMs to continue processing. Ecommerce sites In ecommerce sites, some parts of the application can have larger RTO values. For example, the actual purchasing pipeline needs to have high availability, but the email process that sends order notifications to customers can tolerate a few hours' delay. Customers know about their purchase, and so although they expect a confirmation email, the notification is not a crucial part of the process. This is a mix of hot (purchasing) and warm or cold (notification) patterns. The transactional part of the application needs high uptime with a minimal RTO value. Therefore, you use HA, which maximizes the availability of this part of the application. This approach can be considered a hot pattern. The ecommerce scenario illustrates how you can have varying RTO values within the same application. 
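The batch pattern described above only behaves as a warm pattern if the job checkpoints its progress somewhere durable so that a replacement VM can resume after preemption. A minimal sketch of such a checkpoint loop, using the google-cloud-storage Python client with an illustrative bucket, object name, and placeholder work function:

import json
from google.cloud import storage

BUCKET = "example-batch-checkpoints"   # illustrative bucket name
CHECKPOINT = "job-42/checkpoint.json"  # illustrative object name

def process(item):
    """Placeholder for the real per-item work."""
    return item * 2

client = storage.Client()
blob = client.bucket(BUCKET).blob(CHECKPOINT)

# Resume from the last checkpoint if a previous VM was preempted mid-job.
start_index = 0
if blob.exists():
    start_index = json.loads(blob.download_as_text())["next_index"]

work_items = list(range(10_000))  # placeholder for the real batch input
for i in range(start_index, len(work_items)):
    process(work_items[i])
    if i % 500 == 0:
        # Persist progress so a replacement VM can pick up where this one stopped.
        blob.upload_from_string(json.dumps({"next_index": i + 1}))

With a loop like this, the checkpoint interval effectively sets how much work can be lost when an instance is reclaimed.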
Video streaming A video streaming solution has many components that need to be highly available, from the search experience to the actual process of streaming content to the user. In addition, the system requires low latency to create a satisfactory user experience. If any aspect of the solution fails to provide a great experience, it's bad for the supplier as well as the customer. Moreover, customers today can turn to a competitive product. In this scenario, an HA architecture is a must-have, and small RTO values are needed. This scenario requires a hot pattern throughout the application architecture to help ensure minimal impact in case of a disaster. DR and HA architectures for production on-premises This section examines how to implement three patterns—cold, warm, and hot—when your application runs on-premises and your DR solution is on Google Cloud. Cold pattern: Recovery to Google Cloud In a cold pattern, you have minimal resources in the DR Google Cloud project—just enough to enable a recovery scenario. When there's a problem that prevents the production environment from running production workloads, the failover strategy requires a mirror of the production environment to be started in Google Cloud. Clients then start using the services from the DR environment. In this section we examine an example of this pattern. In the example, Cloud Interconnect is configured with a self-managed (non-Google Cloud) VPN solution to provide connectivity to Google Cloud. Data is copied to Cloud Storage as part of the production environment. This pattern uses the following DR building blocks: Cloud DNS Cloud Interconnect Self-managed VPN solution Cloud Storage Compute Engine Cloud Load Balancing Deployment Manager The following diagram illustrates this example architecture: The following steps outline how you can configure the environment: Create a VPC network. Configure connectivity between your on-premises network and the Google Cloud network. Create a Cloud Storage bucket as the target for your data backup. Create a service account. Create an IAM policy to restrict who can access the bucket and its objects. You include the service account created specifically for this purpose. You also add the user account or group to the policy for your operator or system administrator, granting to all these identities the relevant permissions. For details about permissions for access to Cloud Storage, see IAM permissions for Cloud Storage. Use Service account impersonation to provide access for your local Google Cloud user (or service account) to impersonate the service account you created earlier. Alternatively you can create a new user specifically for this purpose. Test that you can upload and download files in the target bucket. Create a data-transfer script. Create a scheduled task to run the script. You can use tools such as Linux crontab and Windows Task Scheduler. Create custom images that are configured for each server in the production environment. Each image should be of the same configuration as its on-premises equivalent. As part of the custom image configuration for the database server, create a startup script that will automatically copy the latest backup from a Cloud Storage bucket to the instance and then invoke the restore process. Configure Cloud DNS to point to your internet-facing web services. Create a Deployment Manager template that will create application servers in your Google Cloud network using the previously configured custom images. 
This template should also set up the required firewall rules. You need to implement processes to ensure that the custom images have the same version of the application as on-premises. Ensure that you incorporate upgrades to the custom images as part of your standard upgrade cycle, and ensure that your Deployment Manager template is using the latest custom image. Failover process and post-restart tasks If a disaster occurs, you can recover to the system that's running on Google Cloud. To do this, you launch your recovery process to create the recovery environment using the Deployment Manager template that you created earlier. When the instances in the recovery environment are ready to accept production traffic, you adjust the DNS to point to the web server in Google Cloud. A typical recovery sequence is this: Use the Deployment Manager template to create a deployment in Google Cloud. Apply the most recent database backup in Cloud Storage to the database server running in Google Cloud by following your database system's instructions for recovering backup files. Apply the most recent transaction logs in Cloud Storage. Test that the application works as expected by simulating user scenarios on the recovered environment. When tests succeed, configure Cloud DNS to point to the web server on Google Cloud. (For example, you can use an anycast IP address behind a Google Cloud load balancer, with multiple web servers behind the load balancer.) The following diagram shows the recovered environment: When the production environment is running on-premises again and the environment can support production workloads, you reverse the steps that you followed to fail over to the Google Cloud recovery environment. A typical sequence to return to the production environment is this: Take a backup of the database running on Google Cloud. Copy the backup file to your production environment. Apply the backup file to your production database system. Prevent connections to the application in Google Cloud. For example, prevent connections to the global load balancer. From this point, your application will be unavailable until you finish restoring the production environment. Copy any transaction log files over to the production environment and apply them to the database server. Configure Cloud DNS to point to your on-premises web service. Ensure that the process you had in place to copy data to Cloud Storage is operating as expected. Delete your deployment. Warm standby: Recovery to Google Cloud A warm pattern is typically implemented to keep RTO and RPO values as small as possible without the effort and expense of a fully HA configuration. The smaller the RTO and RPO values, the higher the cost as you approach having a fully redundant environment that can serve traffic from two environments. Therefore, implementing a warm pattern for your DR scenario is a good trade-off between budget and availability. An example of this approach is to use Cloud Interconnect configured with a self-managed VPN solution to provide connectivity to Google Cloud. A multitiered application runs on-premises while using a minimal recovery suite on Google Cloud. The recovery suite consists of an operational database server instance on Google Cloud. This instance must run at all times so that it can receive replicated transactions through asynchronous or semisynchronous replication techniques. To reduce costs, you can run the database on the smallest machine type that's capable of running the database service. 
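As an illustration of that cost-saving choice, the standby database VM can be created on a small machine type from a custom image (the configuration steps below cover creating that image). The following sketch uses the google-cloud-compute Python client; the project, zone, image, machine type, and instance names are illustrative and not prescribed by this pattern:

from google.cloud import compute_v1

PROJECT = "example-project"   # illustrative
ZONE = "us-central1-b"        # illustrative
IMAGE = f"projects/{PROJECT}/global/images/db-server-custom"  # assumed custom image name

instance = compute_v1.Instance(
    name="dr-db-standby",
    # Smallest machine type that can still keep up with replication; adjust for your database.
    machine_type=f"zones/{ZONE}/machineTypes/e2-small",
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=False,  # keep the disk if the instance is deleted
            initialize_params=compute_v1.AttachedDiskInitializeParams(source_image=IMAGE),
        )
    ],
    network_interfaces=[
        compute_v1.NetworkInterface(network=f"projects/{PROJECT}/global/networks/default")
    ],
)

operation = compute_v1.InstancesClient().insert(
    project=PROJECT, zone=ZONE, instance_resource=instance
)
operation.result()  # wait for the create operation to finish

During a failover you would resize this instance to handle production load, as described in the failover sequence later in this section.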
Because you can use a long-running instance, sustained use discounts will apply. This pattern uses the following DR building blocks: Cloud DNS Cloud Interconnect Self-managed VPN solution Compute Engine Deployment Manager Compute Engine snapshots provide a way to take backups that you can roll back to a previous state. Snapshots are used in this example because updated web pages and application binaries are written frequently to the production web and to application servers. These updates are regularly replicated to the reference web server and application server instances on Google Cloud. (The reference servers don't accept production traffic; they are used to create the snapshots.) The following diagram illustrates an architecture that implements this approach. The replication targets are not shown in the diagram. The following steps outline how you can configure the environment: Create a VPC network. Configure connectivity between your on-premises network and the Google Cloud network. Replicate your on-premises servers to Google Cloud VM instances. One option is to use a partner solution; the method you employ depends on your circumstances. Create a custom image of your database server on Google Cloud that has the same configuration as your on-premises database server. Create snapshots of the web server and application server instances. Start a database instance in Google Cloud using the custom image you created earlier. Use the smallest machine type that is capable of accepting replicated data from the on-premises production database. Attach persistent disks to the Google Cloud database instance for the databases and transaction logs. Configure replication between your on-premises database server and the database server in Google Cloud by following the instructions for your database software. Set the auto delete flag on the persistent disks attached to the database instance to no-auto-delete. Configure a scheduled task to create regular snapshots of the persistent disks of the database instance on Google Cloud. Create reservations to assure capacity for your web server and application servers as needed. Test the process of creating instances from snapshots and of taking snapshots of the persistent disks. Create instances of the web server and the application server using the snapshots created earlier. Create a script that copies updates to the web application and the application server whenever the corresponding on-premises servers are updated. Write the script to create a snapshot of the updated servers. Configure Cloud DNS to point to your internet-facing web service on premises. Failover process and post-restart tasks To manage a failover, you typically use your monitoring and alerting system to invoke an automated failover process. When the on-premises application needs to fail over, you configure the database system on Google Cloud so it is able to accept production traffic. You also start instances of the web and application server. The following diagram shows the configuration after failover to Google Cloud enabling production workloads to be served from Google Cloud: A typical recovery sequence is this: Resize the database server instance so that it can handle production loads. Use the web server and application snapshots on Google Cloud to create new web server and application instances. Test that the application works as expected by simulating user scenarios on the recovered environment. When tests succeed, configure Cloud DNS to point to your web service on Google Cloud. 
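The following gcloud CLI commands sketch what parts of this recovery sequence might look like. All of the resource names, the zone, the machine type, the snapshot name, and the IP address are hypothetical placeholders; the exact commands depend on how your environment is configured.

# Resize the database server so that it can handle production load.
gcloud compute instances stop db-server --zone=us-central1-a
gcloud compute instances set-machine-type db-server --zone=us-central1-a --machine-type=n2-standard-16
gcloud compute instances start db-server --zone=us-central1-a

# Create a web server disk from the most recent snapshot of the reference
# server, and start an instance that boots from it.
gcloud compute disks create web-boot-restored --source-snapshot=web-server-snap --zone=us-central1-a
gcloud compute instances create web-server-1 --zone=us-central1-a --disk=name=web-boot-restored,boot=yes,auto-delete=no

# After testing succeeds, point DNS at the web service on Google Cloud.
gcloud dns record-sets update www.example.com. --zone=example-dns-zone --type=A --ttl=300 --rrdatas=203.0.113.10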
When the production environment is running on-premises again and can support production workloads, you reverse the steps that you followed to fail over to the Google Cloud recovery environment. A typical sequence to return to the production environment is this: Take a backup of the database running on Google Cloud. Copy the backup file to your production environment. Apply the backup file to your production database system. Prevent connections to the application in Google Cloud. One way to do this is to prevent connections to the web server by modifying the firewall rules. From this point your application will be unavailable until you finish restoring the production environment. Copy any transaction log files over to the production environment and apply them to the database server. Test that the application works as expected by simulating user scenarios on the production environment. Configure Cloud DNS to point to your on-premises web service. Delete the web server and application server instances that are running in Google Cloud. Leave the reference servers running. Resize the database server on Google Cloud back to the minimum instance size that can accept replicated data from the on-premises production database. Configure replication between your on-premises database server and the database server in Google Cloud by following the instructions for your database software. Hot HA across on-premises and Google Cloud If you have small RTO and RPO values, you can achieve these only by running HA across your production environment and Google Cloud concurrently. This approach gives you a hot pattern, because both on-premises and Google Cloud are serving production traffic. The key difference from the warm pattern is that the resources in both environments are running in production mode and serving production traffic. This pattern uses the following DR building blocks: Cloud Interconnect Cloud VPN Compute Engine Managed instance groups Cloud Monitoring Cloud Load Balancing The following diagram illustrates this example architecture. By implementing this architecture, you have a DR plan that requires minimal intervention in the event of a disaster. The following steps outline how you can configure the environment: Create a VPC network. Configure connectivity between your on-premises network and your Google Cloud network. Create custom images in Google Cloud that are configured for each server in the on-premises production environment. Each Google Cloud image should have the same configuration as its on-premises equivalent. Configure replication between your on-premises database server and the database server in Google Cloud by following the instructions for your database software. Many database systems permit only a single writeable database instance when you configure replication. Therefore, you might need to ensure that one of the database replicas acts as a read-only server. Create individual instance templates that use the images for the application servers and the web servers. Configure regional managed instance groups for the application and web servers. Configure health checks using Cloud Monitoring. Configure load balancing using the regional managed instance groups that were configured earlier. Configure a scheduled task to create regular snapshots of the persistent disks. Configure a DNS service to distribute traffic between your on-premises environment and the Google Cloud environment. 
With this hybrid approach, you need to use a DNS service that supports weighted routing to the two production environments so that you can serve the same application from both. You need to design the system for failures that might occur in only part of an environment (partial failures). In that case, traffic should be rerouted to the equivalent service in the other environment. For example, if the on-premises web servers become unavailable, you can disable DNS routing to that environment. If your DNS service supports health checks, this will occur automatically when the health check determines that web servers in one of the environments can't be reached. If you're using a database system that allows only a single writeable instance, in many cases the database system will automatically promote the read-only replica to be the writeable primary when the heartbeat between the original writeable database and the read replica is lost. Be sure that you understand this aspect of your database replication in case you need to intervene after a disaster. You must implement processes to ensure that the custom VM images in Google Cloud have the same version of the application as the versions on-premises. Incorporate upgrades to the custom images as part of your standard upgrade cycle, and ensure that your Deployment Manager template is using the latest custom image. Failover process and post-restart tasks In the configuration described here for a hot scenario, a disaster means that one of the two environments isn't available. There is no failover process in the same way that there is with the warm or cold scenarios, where you need to move data or processing to the second environment. However, you might need to handle the following configuration changes: If your DNS service doesn't automatically reroute traffic based on a health check failure, you need to manually configure DNS routing to send traffic to the system that's still up. If your database system doesn't automatically promote a read-only replica to be the writeable primary on failure, you need to intervene to ensure that the replica is promoted. When the second environment is running again and can handle production traffic, you need to resynchronize databases. Because both environments support production workloads, you don't have to take any further action to change which database is the primary. After the databases are synchronized, you can allow production traffic to be distributed across both environments again by adjusting the DNS settings. DR and HA architectures for production on Google Cloud When you design your application architecture for production workloads on Google Cloud, the HA features of the platform have a direct influence on your DR architecture. Backup and DR Service is a centralized, cloud-native solution for backing up and recovering cloud and hybrid workloads. It offers swift data recovery and facilitates the quick resumption of essential business operations. For more information about using Backup and DR Service for application scenarios on Google Cloud, see the following: Backup and DR Service for Compute Engine describes the concepts and details of using Google Cloud Backup and DR Service to incrementally back up data from your Persistent Disks at the instance level. Backup and DR Service for Google Cloud VMware Engine describes the concepts and details of using Google Cloud Backup and DR Service to incrementally back up data from your VMDKs at the VM level.
Backup and DR Service for Filestore and file systems describes the concepts and details of using Google Cloud Backup and DR Service to capture and back up data from production SMB, NFS, and Filestore file systems. Cold: recoverable application server In a cold failover scenario where you need a single active server instance, only one instance should write to disk. In an on-premises environment, you often use an active/passive cluster. When you run a production environment on Google Cloud, you can create a VM in a managed instance group that only runs one instance. This pattern uses the following DR building blocks: Compute Engine Managed instance groups This cold failover scenario is shown in the following example architecture image: The following steps outline how to configure this cold failover scenario: Create a VPC network. Create a custom VM image that's configured with your application web service. Configure the VM so that the data processed by the application service is written to an attached persistent disk. Create a snapshot from the attached persistent disk. Create an instance template that references the custom VM image for the web server. Configure a startup script to create a persistent disk from the latest snapshot and to mount the disk. This script must be able to get the latest snapshot of the disk (see the example script later in this section). Create a managed instance group and health checks with a target size of one that references the instance template. Create a scheduled task to create regular snapshots of the persistent disk. Configure an external Application Load Balancer. Configure alerts using Cloud Monitoring to send an alert when the service fails. This cold failover scenario takes advantage of some of the HA features available in Google Cloud. If a VM fails, the managed instance group tries to recreate the VM automatically. You don't have to initiate this failover step. The external Application Load Balancer makes sure that even when a replacement VM is needed, the same IP address is used in front of the application server. The instance template and custom image make sure that the replacement VM is configured identically to the instance it replaces. Your RPO is determined by the last snapshot taken. The more often you take snapshots, the smaller the RPO value. The managed instance group provides HA in depth: it provides ways to react to failures at the application or VM level. You don't manually intervene if any of those scenarios occur. A target size of one makes sure that you only ever have one active instance that runs in the managed instance group and serves traffic. Persistent disks are zonal, so you must take snapshots to re-create disks if there's a zonal failure. Snapshots are also available across regions, which lets you restore a disk to a different region in the same way that you restore it to the same region. In the unlikely event of a zonal failure, you must manually intervene to recover, as outlined in the next section. Failover process If a VM fails, the managed instance group automatically tries to recreate a VM in the same zone. The startup script in the instance template creates a persistent disk from the latest snapshot and attaches it to the new VM. However, a managed instance group with size one doesn't recover if there's a zone failure. In the scenario where a zone fails, you must react to the alert from Cloud Monitoring (or from another monitoring platform) when the service fails, and manually create an instance group in another zone.
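The startup script referred to in the configuration steps might look something like the following sketch. The disk name prefix, device name, zone, and mount point are hypothetical placeholders; the sketch assumes that the VM's service account is allowed to list snapshots and to create and attach disks, and that the instance name matches the hostname. Error handling and cleanup of old restored disks are omitted.

#!/bin/bash
# Hypothetical startup script for the replacement VM. It re-creates the
# data disk from the most recent snapshot, then attaches and mounts it.
set -euo pipefail
ZONE="us-central1-a"
INSTANCE="$(hostname)"
DISK_NAME="app-data-$(date +%s)"
# Find the most recent snapshot whose name starts with the data disk prefix.
SNAPSHOT="$(gcloud compute snapshots list \
    --filter="name~^app-data" \
    --sort-by=~creationTimestamp \
    --limit=1 \
    --format='value(name)')"
gcloud compute disks create "${DISK_NAME}" --source-snapshot="${SNAPSHOT}" --zone="${ZONE}"
gcloud compute instances attach-disk "${INSTANCE}" --disk="${DISK_NAME}" --device-name=app-data --zone="${ZONE}"
mkdir -p /mnt/app-data
mount /dev/disk/by-id/google-app-data /mnt/app-data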
A variation on this configuration is to use regional persistent disks instead of zonal persistent disks. With this approach, you don't need to use snapshots to restore the persistent disk as part of the recovery step. However, this variation consumes twice as much storage and you need to budget for that. The approach you choose is dictated by your budget and RTO and RPO values. Warm: static site failover If Compute Engine instances fail, you can mitigate service interruption by having a Cloud Storage-based static site on standby. This pattern is appropriate when your web application is mostly static. In this scenario, the primary application runs on Compute Engine instances. These instances are grouped into managed instance groups, and the instance groups serve as backend services for an HTTPS load balancer. The HTTPS load balancer directs incoming traffic to the instances according to the load balancer configuration, the configuration of each instance group, and the health of each instance. This pattern uses the following DR building blocks: Compute Engine Cloud Storage Cloud Load Balancing Cloud DNS The following diagram illustrates this example architecture: The following steps outline how to configure this scenario: Create a VPC network. Create a custom image that's configured with the application web service. Create an instance template that uses the image for the web servers. Configure a managed instance group for the web servers. Configure health checks using Monitoring. Configure load balancing using the managed instance groups that you configured earlier. Create a Cloud Storage-based static site. In the production configuration, Cloud DNS is configured to point at this primary application, and the standby static site sits dormant. If the Compute Engine application goes down, you would configure Cloud DNS to point to this static site. Failover process If the application server or servers go down, your recovery sequence is to configure Cloud DNS to point to your static website. The following diagram shows the architecture in its recovery mode: When the application Compute Engine instances are running again and can support production workloads, you reverse the recovery step: you configure Cloud DNS to point to the load balancer that fronts the instances. Alternatively, you can use Persistent Disk Asynchronous Replication. It offers block storage replication with low recovery point objective (RPO) and low recovery time objective (RTO) for cross-region active-passive DR. This storage option lets you manage replication for Compute Engine workloads at the infrastructure level, rather than at the workload level. Hot: HA web application A hot pattern when your production environment is running on Google Cloud is to establish a well-architected HA deployment. This pattern uses the following DR building blocks: Compute Engine Cloud Load Balancing Cloud SQL The following diagram illustrates this example architecture: This scenario takes advantage of HA features in Google Cloud—you don't have to initiate any failover steps, because they will occur automatically in the event of a disaster. As shown in the diagram, the architecture uses a regional managed instance group together with global load balancing and Cloud SQL. The example here uses a regional managed instance group, so the instances are distributed across three zones. With this approach, you get HA in depth.
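The following commands sketch how such a regional managed instance group might be created with an autohealing health check. The names, region, machine type, and custom image are hypothetical placeholders, and the load balancer and Cloud SQL configuration are not shown.

gcloud compute health-checks create http web-health-check --port=80 --request-path=/healthz
gcloud compute instance-templates create web-template --machine-type=e2-standard-2 --image=web-custom-image --tags=http-server
gcloud compute instance-groups managed create web-mig --region=us-central1 --size=3 --template=web-template --health-check=web-health-check --initial-delay=300

The --health-check and --initial-delay flags configure autohealing, so instances that fail the health check are automatically re-created.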
Regional managed instance groups provide mechanisms to react to failures at the application, instance, or zone level, and you don't have to manually intervene if any of those scenarios occurs. To address application-level recovery, as part of setting up the managed instance group, you configure HTTP health checks that verify that the services are running properly on the instances in that group. If a health check determines that a service has failed on an instance, the group automatically re-creates that instance. For more information about building scalable and resilient applications on Google Cloud, see Patterns for scalable and resilient apps. What's next Read about Google Cloud geography and regions. Read other documents in this DR series: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Scenarios_for_data.txt b/Scenarios_for_data.txt new file mode 100644 index 0000000000000000000000000000000000000000..68187dea94b7c116e4872c3ae1cdd970cfa29269 --- /dev/null +++ b/Scenarios_for_data.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/dr-scenarios-for-data +Date Scraped: 2025-02-23T11:54:31.797Z + +Content: +Home Docs Cloud Architecture Center Send feedback Disaster recovery scenarios for data Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-05 UTC This document is the third part of a series that discusses disaster recovery (DR) in Google Cloud. This part discusses scenarios for backing up and recovering data. The series consists of these parts: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data (this document) Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Introduction Your disaster recovery plans must specify how you can avoid losing data during a disaster. The term data here covers two scenarios. Backing up and then recovering databases, log data, and other data types fits into one of the following scenarios: Data backups. Backing up data alone involves copying a discrete amount of data from one place to another. Backups are made as part of a recovery plan either to recover from a corruption of data so that you can restore to a known good state directly in the production environment, or so that you can restore data in your DR environment if your production environment is down. Typically, data backups have a small to medium RTO and a small RPO. Database backups. Database backups are slightly more complex, because they typically involve recovering to a point in time. Therefore, in addition to considering how to back up and restore the database backups and ensuring that the recovery database system mirrors the production configuration (same version, mirrored disk configuration), you also need to consider how to back up transaction logs.
During recovery, after you restore database functionality, you have to apply the latest database backup and then the recovered transaction logs that were backed up after the last backup. Due to the complicating factors inherent to database systems (for example, having to match versions between production and recovery systems), adopting a high-availability-first approach to minimize the time to recover from a situation that could cause unavailability of the database server lets you achieve smaller RTO and RPO values. When you run production workloads on Google Cloud, you might use a globally distributed system so that if something goes wrong in one region, the application continues to provide service even if it's less widely available. In essence, that application invokes its DR plan. The rest of this document discusses examples of how to design some scenarios for data and databases that can help you meet your RTO and RPO goals. Production environment is on-premises In this scenario, your production environment is on-premises, and your disaster recovery plan involves using Google Cloud as the recovery site. Data backup and recovery You can use a number of strategies to implement a process to regularly back up data from on-premises to Google Cloud. This section looks at two of the most common solutions. Solution 1: Back up to Cloud Storage using a scheduled task This pattern uses the following DR building blocks: Cloud Storage One option for backing up data is to create a scheduled task that runs a script or application to transfer the data to Cloud Storage. You can automate a backup process to Cloud Storage using the gcloud storage Google Cloud CLI command or by using one of the Cloud Storage client libraries. For example, the following gcloud storage command copies all files from a source directory to a specified bucket. gcloud storage cp -r SOURCE_DIRECTORY gs://BUCKET_NAME Replace SOURCE_DIRECTORY with the path to your source directory and BUCKET_NAME with a name of your choice for the bucket. The name must meet the bucket name requirements. The following steps outline how to implement a backup and recovery process using the gcloud storage command. Install the gcloud CLI on the on-premises machine that you use to upload your data files from. Create a bucket as the target for your data backup. Create a service account. Create an IAM policy to restrict who can access the bucket and its objects. Include the service account created specifically for this purpose. For details about permissions for access to Cloud Storage, see IAM permissions for gcloud storage. Use Service account impersonation to provide access for your local Google Cloud user (or service account) to impersonate the service account you just created earlier. Alternatively you can create a new user specifically for this purpose. Test that you can upload and download files in the target bucket. Set up a schedule for the script that you use to upload your backups using tools such as Linux crontab and Windows Task Scheduler. Configure a recovery process that uses the gcloud storage command to recover your data to your recovery DR environment on Google Cloud. You can also use the gcloud storage rsync command to perform real-time incremental syncs between your data and a Cloud Storage bucket. For example, the following gcloud storage rsync command makes the contents in a Cloud Storage bucket the same as the contents in the source directory by copying any missing files or objects or those whose data has changed. 
If the volume of data that has changed between successive backup sessions is small relative to the entire volume of the source data, then using gcloud storage rsync can be more efficient than using the gcloud storage cp command. By using gcloud storage rsync, you can implement a more frequent backup schedule and achieve a lower RPO. gcloud storage rsync -r SOURCE_DIRECTORY gs://BUCKET_NAME For more information, see gcloud storage command for smaller transfers of on-premises data. Solution 2: Back up to Cloud Storage using Transfer service for on-premises data This pattern uses the following DR building blocks: Cloud Storage Transfer service for on-premises data Transferring large amounts of data across a network often requires careful planning and robust execution strategies. It is a non-trivial task to develop custom scripts that are scalable, reliable, and maintainable. Custom scripts can often lead to missed RPO targets and even an increased risk of data loss. For guidance on moving large volumes of data from on-premises locations to Cloud Storage, see Move or back up data from on-premises storage. Solution 3: Back up to Cloud Storage using a partner gateway solution This pattern uses the following DR building blocks: Cloud Interconnect Cloud Storage tiered storage On-premises applications are often integrated with third-party solutions that can be used as part of your data backup and recovery strategy. The solutions often use a tiered storage pattern where you have the most recent backups on faster storage, and slowly migrate your older backups to cheaper (slower) storage. When you use Google Cloud as the target, you have several storage class options available to use as the equivalent of the slower tier. One way to implement this pattern is to use a partner gateway between your on-premises storage and Google Cloud to facilitate this transfer of data to Cloud Storage. The following diagram illustrates this arrangement, with a partner solution that manages the transfer from the on-premises NAS appliance or SAN. In the event of a failure, the data being backed up must be recovered to your DR environment. The DR environment is used to serve production traffic until you are able to revert to your production environment. How you achieve this depends on your application, and on the partner solution and its architecture. (Some end-to-end scenarios are discussed in the DR application document.) You can also use managed Google Cloud databases as your DR destinations. For example, Cloud SQL for SQL Server supports transaction log imports. You can export transaction logs from your on-premises SQL Server instance, upload them to Cloud Storage, and import them into Cloud SQL for SQL Server. For further guidance on ways to transfer data from on-premises to Google Cloud, see Transferring big data sets to Google Cloud. For more information about partner solutions, see the Partners page on the Google Cloud website. Database backup and recovery You can use a number of strategies to implement a process to recover a database system from on-premises to Google Cloud. This section looks at two of the most common solutions. A detailed discussion of the various built-in backup and recovery mechanisms included with third-party databases is out of scope for this document. This section provides general guidance, which is implemented in the solutions discussed here.
Solution 1: Backup and recovery using a recovery server on Google Cloud Create a database backup using the built-in backup mechanisms of your database management system. Connect your on-premises network and your Google Cloud network. Create a Cloud Storage bucket as the target for your data backup. Copy the backup files to Cloud Storage by using the gcloud storage command in the gcloud CLI or a partner gateway solution (see the steps discussed earlier in the data backup and recovery section). For details, see Migrate to Google Cloud: Transfer your large datasets. Copy the transaction logs to your recovery site on Google Cloud. Having a backup of the transaction logs helps keep your RPO values small. After configuring this backup topology, you must ensure that you can recover to the system that's on Google Cloud. This step typically involves not only restoring the backup file to the target database but also replaying the transaction logs to get to the smallest RTO value. A typical recovery sequence looks like this: Create a custom image of your database server on Google Cloud. The database server should have the same configuration on the image as your on-premises database server. Implement a process to copy your on-premises backup files and transaction log files to Cloud Storage. See Solution 1 in the earlier data backup and recovery section for an example implementation. Start a minimally sized instance from the custom image and attach any persistent disks that are needed. Set the auto delete flag to false for the persistent disks. Apply the latest backup file that was previously copied to Cloud Storage, following the instructions from your database system for recovering backup files. Apply the latest set of transaction log files that have been copied to Cloud Storage. Replace the minimal instance with a larger instance that is capable of accepting production traffic. Switch clients to point at the recovered database in Google Cloud. When you have your production environment running and able to support production workloads, you have to reverse the steps that you followed to fail over to the Google Cloud recovery environment. A typical sequence to return to the production environment looks like this: Take a backup of the database running on Google Cloud. Copy the backup file to your production environment. Apply the backup file to your production database system. Prevent clients from connecting to the database system in Google Cloud; for example, by stopping the database system service. From this point, your application will be unavailable until you finish restoring the production environment. Copy any transaction log files over to the production environment and apply them. Redirect client connections to the production environment. Solution 2: Replication to a standby server on Google Cloud One way to achieve very small RTO and RPO values is to replicate (not just back up) data and in some cases database state in real time to a replica of your database server. Connect your on-premises network and your Google Cloud network. Create a custom image of your database server on Google Cloud. The database server should have the same configuration on the image as the configuration of your on-premises database server. Start an instance from the custom image and attach any persistent disks that are needed. Set the auto delete flag to false for the persistent disks. Configure replication between your on-premises database server and the target database server in Google Cloud by following the instructions specific to the database software.
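The auto delete setting that's referred to in the preceding steps can be changed on a running instance by using the gcloud CLI. In the following sketch, the instance name, disk name, and zone are hypothetical placeholders:

gcloud compute instances set-disk-auto-delete db-standby --zone=us-central1-a --disk=db-data-disk --no-auto-delete

With auto delete disabled, the data and transaction log disks are preserved even if the database instance itself is deleted.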
Clients are configured in normal operation to point to the database server on premises. After you configure this replication topology, if you need to fail over, you switch clients to point to the standby server running in your Google Cloud network. When you have your production environment back up and able to support production workloads, you have to resynchronize the production database server with the Google Cloud database server and then switch clients to point back to the production environment. Production environment is Google Cloud In this scenario, both your production environment and your disaster recovery environment run on Google Cloud. Data backup and recovery A common pattern for data backups is to use a tiered storage pattern. When your production workload is on Google Cloud, the tiered storage system looks like the following diagram. You migrate data to a tier that has lower storage costs, because the backed-up data is less likely to be accessed. This pattern uses the following DR building blocks: Cloud Storage tiered storage classes (Nearline, Coldline, and Archive) Because the Nearline, Coldline, and Archive storage classes are intended for storing infrequently accessed data, there are additional costs associated with retrieving data or metadata stored in these classes, as well as minimum storage durations that you are charged for. Database backup and recovery When you use a self-managed database (for example, you've installed MySQL, PostgreSQL, or SQL Server on an instance of Compute Engine), the same operational concerns apply as when you manage production databases on premises, but you no longer need to manage the underlying infrastructure. Backup and DR Service is a centralized, cloud-native solution for backing up and recovering cloud and hybrid workloads. It offers swift data recovery and facilitates the quick resumption of essential business operations. For more information about using Backup and DR for self-managed database scenarios on Google Cloud, see the following: Backup and DR for MySQL. Backup and DR for PostgreSQL. Backup and DR for Microsoft SQL Server. Note: The scenario for using a managed Google Cloud database is discussed in the next section. Alternatively, you can set up HA configurations by using the appropriate DR building block features to keep RTO small. You can design your database configuration to make it feasible to recover to a state as close as possible to the pre-disaster state; this helps keep your RPO values small. Google Cloud provides a wide variety of options for this scenario. Two common approaches to designing your database recovery architecture for self-managed databases on Google Cloud are discussed in this section. Recovering a database server without synchronizing state A common pattern is to enable recovery of a database server that does not require system state to be synchronized with an up-to-date standby replica. This pattern uses the following DR building blocks: Compute Engine Managed instance groups Cloud Load Balancing (internal load balancing) The following diagram illustrates an example architecture that addresses the scenario. By implementing this architecture, you have a DR plan that reacts automatically to a failure without requiring manual recovery. The following steps outline how to configure this scenario: Create a VPC network. Create a custom image that is configured with the database server by doing the following: Configure the server so the database files and log files are written to an attached standard persistent disk.
Create a snapshot from the attached persistent disk. Configure a startup script to create a persistent disk from the snapshot and to mount the disk. Create a custom image of the boot disk. Create an instance template that uses the image. Using the instance template, configure a managed instance group with a target size of 1. Configure health checking using Cloud Monitoring metrics. Configure internal load balancing using the managed instance group. Configure a scheduled task to create regular snapshots of the persistent disk. In the event a replacement database instance is needed, this configuration automatically does the following: Brings up another database server of the correct version in the same zone. Attaches a persistent disk that has the latest backup and transaction log files to the newly created database server instance. Minimizes the need to reconfigure clients that communicate with your database server in response to an event. Ensures that the Google Cloud security controls (IAM policies, firewall settings) that apply to the production database server apply to the recovered database server. Because the replacement instance is created from an instance template, the controls that applied to the original apply to the replacement instance. This scenario takes advantage of some of the HA features available in Google Cloud; you don't have to initiate any failover steps, because they occur automatically in the event of a disaster. The internal load balancer ensures that even when a replacement instance is needed, the same IP address is used for the database server. The instance template and custom image ensure that the replacement instance is configured identically to the instance it is replacing. By taking regular snapshots of the persistent disks, you ensure that when disks are re-created from the snapshots and attached to the replacement instance, the replacement instance is using data recovered according to an RPO value dictated by the frequency of the snapshots. In this architecture, the latest transaction log files that were written to the persistent disk are also automatically restored. The managed instance group provides HA in depth. It provides mechanisms to react to failures at the application or instance level, and you don't have to manually intervene if any of those scenarios occur. Setting a target size of one ensures you only ever have one active instance that runs in the managed instance group and serves traffic. Standard persistent disks are zonal, so if there's a zonal failure, snapshots are required to re-create disks. Snapshots are also available across regions, allowing you to restore a disk not only within the same region but also to a different region. A variation on this configuration is to use regional persistent disks in place of standard persistent disks. In this case, you don't need to restore the snapshot as part of the recovery step. The variation you choose is dictated by your budget and RTO and RPO values. Recovering from partial corruption in very large databases Persistent Disk Asynchronous Replication offers block storage replication with low RPO and low RTO for cross-region active-passive DR. This storage option lets you manage replication for Compute Engine workloads at the infrastructure level, rather than at the workload level. If you're using a database that's capable of storing petabytes of data, you might experience an outage that affects some of the data, but not all of it.
In that case, you want to minimize the amount of data that you need to restore; you don't need to (or want to) recover the entire database just to restore some of the data. There are a number of mitigating strategies you can adopt: Store your data in different tables for specific time periods. This method ensures that you need to restore only a subset of data to a new table, rather than a whole dataset. Store the original data on Cloud Storage. This approach lets you create a new table and reload the uncorrupted data. From there, you can adjust your applications to point to the new table. Note: This method provides good availability, with only a small interruption as you point your applications to the new store. However, unless you have implemented application-level controls to prevent access to the corrupted data, this method can result in inaccurate results during later analysis. Additionally, if your RTO permits, you can prevent access to the table that has the corrupted data by leaving your applications offline until the uncorrupted data has been restored to a new table. Managed database services on Google Cloud This section discusses some methods you can use to implement appropriate backup and recovery mechanisms for the managed database services on Google Cloud. Managed databases are designed for scale, so the backup and restore mechanisms that you see with traditional RDBMSs are usually not available. As in the case of self-managed databases, if you are using a database that is capable of storing petabytes of data, you want to minimize the amount of data that you need to restore in a DR scenario. There are a number of strategies for each managed database to help you achieve this goal. Bigtable provides Bigtable replication. A replicated Bigtable database can provide higher availability than a single cluster, additional read throughput, and higher durability and resilience in the face of zonal or regional failures. Bigtable backups is a fully managed service that lets you save a copy of a table's schema and data, then restore from the backup to a new table at a later time. You can also export tables from Bigtable as a series of Hadoop sequence files. You can then store these files in Cloud Storage or use them to import the data back into another instance of Bigtable. You can replicate your Bigtable dataset asynchronously across zones within a Google Cloud region. BigQuery. If you want to archive data, you can take advantage of BigQuery's long-term storage. If a table is not edited for 90 consecutive days, the price of storage for that table automatically drops by 50 percent. There is no degradation of performance, durability, availability, or any other functionality when a table is considered long-term storage. If the table is edited, though, it reverts to the regular storage pricing and the 90-day countdown starts again. BigQuery is replicated to two zones in a single region, but this won't help with corruption in your tables. Therefore, you need to have a plan to be able to recover from that scenario. For example, you can do the following: If the corruption is caught within 7 days, query the table to a point in time in the past to recover the table prior to the corruption using snapshot decorators. Export the data from BigQuery, and create a new table that contains the exported data but excludes the corrupted data. Store your data in different tables for specific time periods.
This method ensures that you will need to restore only a subset of data to a new table, rather than a whole dataset. Make copies of your dataset at specific time periods. You can use these copies if a data-corruption event occurred beyond what a point-in-time query can capture (for example, more than 7 days ago). You can also copy a dataset from one region to another to ensure data availability in the event of region failures. Store the original data on Cloud Storage, which lets you create a new table and reload the uncorrupted data. From there, you can adjust your applications to point to the new table. Firestore. The managed export and import service lets you import and export Firestore entities using a Cloud Storage bucket. You can then implement a process that can be used to recover from accidental deletion of data. Cloud SQL. If you use Cloud SQL, the fully managed Google Cloud MySQL database, you should enable automated backups and binary logging for your Cloud SQL instances. This approach lets you perform a point-in-time recovery, which restores your database from a backup and recovers it to a fresh Cloud SQL instance. For more information, see About Cloud SQL backups and About disaster recovery (DR) in Cloud SQL. You can also configure Cloud SQL in an HA configuration and with cross-region replicas to maximize uptime in the event of zonal or regional failure. If you enabled near-zero downtime planned maintenance for Cloud SQL, you can evaluate the impact of maintenance events on your instances by simulating near-zero downtime planned maintenance events on Cloud SQL for MySQL, and on Cloud SQL for PostgreSQL. For Cloud SQL Enterprise Plus edition, you can use advanced disaster recovery (DR) to simplify recovery and fallback processes with zero data loss after you perform a cross-regional failover. Spanner. You can use Dataflow templates for making a full export of your database to a set of Avro files in a Cloud Storage bucket, and use another template for re-importing the exported files into a new Spanner database. For more controlled backups, the Dataflow connector lets you write code to read and write data to Spanner in a Dataflow pipeline. For example, you can use the connector to copy data out of Spanner and into Cloud Storage as the backup target. The speed at which data can be read from Spanner (or written back to it) depends on the number of configured nodes. This has a direct impact on your RTO values. The Spanner commit timestamp feature can be useful for incremental backups, by allowing you to select only the rows that have been added or modified since the last full backup. For managed backups, Spanner Backup and Restore lets you create consistent backups that can be retained for up to 1 year. The RTO value is lower compared to export because the restore operation directly mounts the backup without copying the data. For small RTO values, you could set up a warm standby Spanner instance configured with the minimum number of nodes required to meet your storage and read and write throughput requirements. Spanner point-in-time recovery (PITR) lets you recover data from a specific point in time in the past. For example, if an operator inadvertently writes data or an application rollout corrupts the database, with PITR you can recover the data from a point in time in the past, up to a maximum of 7 days. Cloud Composer. You can use Cloud Composer (a managed version of Apache Airflow) to schedule regular backups of multiple Google Cloud databases.
You can create a directed acyclic graph (DAG) to run on a schedule (for example, daily) to either copy the data to another project, dataset, or table (depending on the solution used), or to export the data to Cloud Storage. Exporting or copying data can be done by using the various Google Cloud operators. For example, you can create a DAG to do any of the following: Export a BigQuery table to Cloud Storage using the BigQueryToCloudStorageOperator. Export Firestore in Datastore mode (Datastore) to Cloud Storage using the DatastoreExportOperator. Export MySQL tables to Cloud Storage using the MySqlToGoogleCloudStorageOperator. Export Postgres tables to Cloud Storage using the PostgresToGoogleCloudStorageOperator. Production environment is another cloud In this scenario, your production environment uses another cloud provider, and your disaster recovery plan involves using Google Cloud as the recovery site. Data backup and recovery Transferring data between object stores is a common use case for DR scenarios. Storage Transfer Service is compatible with Amazon S3 and is the recommended way to transfer objects from Amazon S3 to Cloud Storage. You can configure a transfer job to schedule periodic synchronization from data source to data sink, with advanced filters based on file creation dates, filename filters, and the times of day you prefer to transfer data. To achieve the RPO that you want, you must consider the following factors: Rate of change. The amount of data that's being generated or updated in a given amount of time. The higher the rate of change, the more resources are needed to transfer the changes to the destination at each incremental transfer period. Transfer performance. The time it takes to transfer files. For large file transfers, this is typically determined by the available bandwidth between source and destination. However, if a transfer job consists of a large number of small files, QPS can become a limiting factor. If that's the case, you can schedule multiple concurrent jobs to scale the performance as long as sufficient bandwidth is available. We recommend that you measure the transfer performance using a representative subset of your real data. Frequency. The interval between backup jobs. The freshness of data at the destination is as recent as the last time a transfer job was scheduled. Therefore, it's important that the intervals between successive transfer jobs are not longer than your RPO. For example, if your RPO is 1 day, the transfer job must be scheduled at least once a day. Monitoring and alerts. Storage Transfer Service provides Pub/Sub notifications on a variety of events. We recommend that you subscribe to these notifications to handle unexpected failures or changes in job completion times. Database backup and recovery A detailed discussion of the various built-in backup and recovery mechanisms included with third-party databases, or of the backup and recovery techniques used on other cloud providers, is out of scope for this document. If you are operating non-managed databases on the compute services, you can take advantage of the HA facilities that your production cloud provider has available. You can extend those to incorporate an HA deployment to Google Cloud, or use Cloud Storage as the ultimate destination for the cold storage of your database backup files. What's next Read about Google Cloud geography and regions.
Read other documents in this DR series: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Disaster recovery use cases: locality-restricted data analytic applications Architecting disaster recovery for cloud infrastructure outages Architectures for high availability of MySQL clusters on Compute Engine Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Secure_virtual_private_cloud_networks_with_the_Palo_Alto_VM-Series_NGFW(1).txt b/Secure_virtual_private_cloud_networks_with_the_Palo_Alto_VM-Series_NGFW(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..9aafbe9c423bbf3433bbf7ade42d26fa60264efc --- /dev/null +++ b/Secure_virtual_private_cloud_networks_with_the_Palo_Alto_VM-Series_NGFW(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/palo-alto-networks-ngfw +Date Scraped: 2025-02-23T11:56:54.640Z + +Content: +Home Docs Cloud Architecture Center Send feedback Secure virtual private cloud networks with the Palo Alto VM-Series NGFW Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-05 UTC Author: Matt McLimans, Technical Engagement Manager, Palo Alto Networks This document describes the networking concepts that you need to understand to deploy the Palo Alto Networks VM-Series next generation firewall (NGFW) in Google Cloud. This document is intended for network administrators, solution architects, and security professionals who are familiar with Compute Engine and Virtual Private Cloud (VPC) networking. The VM-Series NGFW helps enterprises secure their applications, users, and other data deployed across Google Cloud and other virtualization environments. Palo Alto Networks delivers zero-trust security capabilities for all enterprise networks by using the following approaches to threat prevention: Securing all applications with Layer-7 inspection, granting access based on user identification, and preventing known and unknown threats. Segmenting mission-critical applications and data using zero-trust principles to improve security posture and achieve compliance with various security standards—for example, PCI DSS. Using Panorama to centrally manage both physical and virtual firewalls. Panorama helps provide a consistent security posture across all hybrid and multicloud environments. Using tooling to automate the deployment and configuration of VM-Series NGFWs. Using predefined and custom network tags within security policies to dynamically update objects as cloud workloads are created, moved, and destroyed. The VM-Series NGFW helps secure workloads deployed across both your VPC networks and your remote networks. Consider the following ways that the VM-Series NGFW can help to secure network patterns. The items in the following list correspond to the numbers in the following diagram. Prevent inbound threats from the internet to resources deployed in VPC networks, on-premises, and in other cloud environments. Stop outbound connections from VPC networks to suspicious destinations, such as command-and-control servers or malicious code repositories. Prevent threats from moving laterally between workload VPC networks and stealing data.
Secure traffic between remote networks connected through Cloud Interconnect, Network Connectivity Center, or Cloud VPN. Help extend security to remote users and mobile devices to provide granular application access to Google Cloud resources. For more information, see VM-Series Virtual Next-Generation Firewalls. Architecture As shown in the following diagram, the VM-Series is deployed through Compute Engine. Securing the users, applications, and data residing in your VPC networks requires a minimum of three network interfaces. The network interfaces are as follows: Management Untrust Trust For more information, see Multiple network interfaces overview and examples. The following diagram shows the components required to help secure network traffic with the VM-Series firewall. The management network provides access to the VM-Series user interface. It's common to connect the management network to on-premises networks to provide private access. The untrust network serves as the internet gateway for resources deployed to the trust network. The trust network contains the cloud workloads that you want to protect. Panorama Management provides centralized management of the VM-Series firewalls. You can deploy Panorama Management either as a compute instance in Google Cloud, or as a virtual or physical appliance in an on-premises data center. Components The following sections explain the network components in more detail. Untrust network The untrust VPC interface serves as the internet gateway for resources deployed in a private network. To enable outbound internet connectivity from a private VPC network, either attach an external IP address to the untrust interface, or deploy a Cloud NAT to a public VPC network. For inbound internet connectivity to private-network resources, you can achieve traffic distribution and high availability by configuring the interfaces to serve as the backend of the external load balancer. To serve as the backend of your external passthrough Network Load Balancers and external Application Load Balancers, we recommend that you set the untrust interface as the primary VPC network interface through an interface swap. Management network The management interface is the primary network interface of the compute instance. The IP address of the management interface provides access to the VM-Series user interface and terminal console. Instance group We recommend that you deploy the VM-Series to an instance group. This approach lets you horizontally scale firewalls for resiliency and performance while managing the VM-Series NGFW as a single entity through a Palo Alto Networks Panorama appliance. Trust network The VM-Series trust interface is attached to a private VPC network. We recommend that you set the trust interface as the backend of an Internal TCP/UDP Load Balancer to provide high availability. A private VPC network has a default route that points to either the trust network interface or the Internal TCP/UDP Load Balancer as the next hop. It's common for the private VPC network to be used as the following: A shared VPC network that delegates its subnets to organization service projects A hub VPC network that provides transitive routing and inspection for multiple VPC networks Panorama Panorama provides centralized management for all Palo Alto Networks firewalls. You can bootstrap deployed VM-Series firewalls to Panorama to receive all policy and network configurations. You can deploy Panorama in your on-premises data center or on Compute Engine.
For more information, see Panorama and Network Security Management. Management interface swap For the VM-Series firewalls to receive traffic from any Google Cloud external load balancer, you must perform a management interface swap. This swap also makes the untrust interface the primary interface of the compute instance. If you don't use load balancing, you should still perform an interface swap at deployment. If you don't perform an interface swap at deployment, you must redeploy the VM-Series. Value Palo Alto Networks technologies span major security controls. They help organizations accomplish the following goals: Centralize management Maintain optimum connectivity Extend security policies and controls to the following control types: Users Applications Devices Next-Generation Firewall platforms are available in the following form factors: Physical Virtual Containerized Cloud-delivered Costs The VM-Series firewall supports the following license types: BYOL PayGo It also supports the following two licensing models: Software Next Generation firewall credits: Flexible configurations that you specify with a deployment profile Fixed VM-Series Model configurations Both models license security services and other features. The VM-Series uses the following billable components of Google Cloud: Compute Engine For more information about licensing, see VM-Series Firewall Licensing. To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. Performance We recommend using Cloud Load Balancing to distribute traffic to VM-Series firewalls deployed across zones within a region. The VM-Series integrates with instance groups and Google Cloud Observability to scale the number of firewalls based on custom metrics. For more information, see Autoscaling the VM-Series Firewall on Google Cloud. Palo Alto Networks Single-Pass Architecture processes the following items only once: Network policy enforcement Application identification User controls Content inspection This single-pass approach significantly reduces the processing overhead required to perform multiple functions on a single device. For more information, see VM-Series on Google Cloud Performance and Capacity. Use cases The following sections explain the VPC network use cases in more detail. Secure multiple VPC networks Because routing between subnetworks can't be overridden, consider segmenting your networks by allocating workloads to separate VPC networks. That way, the VM-Series firewalls can route and inspect all traffic that flows across the VPC network boundary. For more information about intra-VPC network traffic, see Google Cloud IDS. Multiple network interface models To help secure multiple networks, add additional VM-Series dataplane interfaces directly to your VPC networks. To steer traffic to the VM-Series NGFW interface, or to an Internal TCP/UDP Load Balancer (if multiple firewall VMs are used), use a custom route for each connected network. There's a limit of eight network interfaces per virtual machine instance. After you create the instance, you can't add or remove network interfaces. The following diagram shows a standard configuration of a VM-Series deployment using multiple VM instances in an instance group (for high availability and performance): The following examples show the different traffic patterns that run through the VM-Series firewalls in this configuration: An inbound request is made to an application hosted in VPC C.
The external load balancer distributes the request to the VM-Series untrust interfaces. The VM-Series firewall inspects and forwards the request through the NIC4 in VPC C and to the destination application. The route table of VPC B routes traffic that is destined to the internet to the IP address of VPC B's Internal TCP/UDP Load Balancer. The load balancer distributes the traffic to NIC3 on the VM-Series firewalls. The VM-Series inspects and forwards the traffic through its untrust interface (NIC0) to the internet. A resource in VPC A makes a request to a resource in VPC B. The route table of VPC A routes the request to the Internal Load Balancer in VPC A. The load balancer distributes the request to NIC2 on the VM-Series firewalls. The VM-Series inspects and forwards the request through NIC3 to the resource in VPC B. VPC B routes its return traffic to its Internal TCP/UDP Load Balancer using the VPC B route table. VPC network peering model VPC Network Peering helps you to build a hub and spoke topology to secure many VPC networks. Because you can add and remove spoke VPC networks as needed, network peering provides flexibility. In the following diagram, VM-Series firewalls are deployed to an instance group to help secure traffic for various spoke VPC networks. The hub network contains the VM-Series trust interfaces. The trust interfaces are the backend of an Internal TCP/UDP Load Balancer. The spoke networks are VPC peers of the hub network. Each spoke has a custom default route that directs traffic to the IP address of the Internal TCP/UDP Load Balancer. In the configuration displayed in the previous diagram, the VM-Series firewalls inspect the traffic as follows: Traffic from the internet to applications in the spoke networks are distributed by the external passthrough Network Load Balancer to the VM-Series untrust interfaces (NIC0). The VM-Series firewall inspects the traffic and forwards permissible traffic through its trust interface (NIC2) to the application in the spoke network. Traffic from the spoke networks destined to the internet are routed to the Internal TCP/UDP Load Balancer in the hub VPC. The VM-Series firewall inspects the traffic and forwards permissible traffic through its untrust interface (NIC0) to the internet. Traffic between spoke networks is routed to the Internal TCP/UDP Load Balancer in the hub VPC. The VM-Series firewall inspects and forwards the traffic through the trust interface (NIC2) into the hub network which routes permissible traffic to the destination spoke network. You can combine the multiple network interface models and VPC network peering models to help scale security for many VPC networks by attaching additional VM-Series firewall interfaces to additional hub VPC networks. To do so, connect each interface and network to multiple spoke networks through VPC Network Peering. Distribute VM-Series firewalls To help secure specific traffic flows, you can deploy VM-Series firewalls across independent dedicated instance groups. The distributed-firewall design provides traffic autonomy and the ability to scale instance groups independently. The following use cases are examples of how to distribute instance groups: Securing traffic for networks that span multiple regions Isolating traffic based on data classification and sensitivity The following diagram is identical to the hub-and-spoke architecture with VPC Network Peering that was shown and discussed in the previous section. 
The difference is that the VM-Series firewalls are deployed in two managed instance groups. The first managed instance group is inbound. It helps secure inbound traffic from the internet to applications hosted in the spoke networks. The second managed instance group is outbound. It helps secure all egress traffic from the spoke networks and traffic from on-premises. This instance group provides traffic isolation with the ability to independently scale the VM-Series instance groups. The VM-Series firewalls in the configuration displayed in the previous diagram inspect traffic as follows: Traffic from the internet to applications in the spoke networks is distributed by the external passthrough Network Load Balancer to the untrust interfaces on the inbound instance group. The inbound instance group inspects all traffic and forwards permissible traffic through NIC2 to the application in the spoke network. Traffic from the spoke networks destined to the internet is routed to the Internal TCP/UDP Load Balancer in the hub VPC. The load balancer distributes the traffic to NIC2 on the outbound instance group. The VM-Series inspects and forwards permissible traffic through its untrust interface (NIC0) and to the internet. Traffic between spoke networks and the remote network is routed to the Internal TCP/UDP Load Balancer in the hub VPC. The load balancer distributes permissible traffic to NIC2 on the outbound instance group. The VM-Series firewall inspects and forwards permissible traffic through NIC2 where the hub network routes that traffic to the adjacent spoke network. Active/passive model You can configure the VM-Series firewalls as an active/passive high availability (HA) pair. In this model, each VM-Series firewall belongs to an unmanaged instance group. Only the primary VM-Series firewall receives network traffic from Google Cloud load balancers. The health check configured on the load balancers determines the HA state of the primary VM-Series firewall. If the health check fails on the primary VM-Series firewall, the load balancers carry the active sessions to the secondary VM-Series firewall. At that point, the secondary VM-Series firewall becomes the primary firewall. This model is suited for environments with any or all of the following requirements: Maintenance of session continuity through stateful failover between VM-Series firewalls. Establishment of static IPsec tunnels directly to the VM-Series firewall. Preservation of the original client IP address for internet inbound traffic to internal applications protected by the VM-Series firewalls. Note: There's no need to horizontally scale the VM-Series firewalls. The topology and traffic patterns are almost identical to the previous models, with the following exceptions: nic1 serves as the management and HA1 interface. The HA1 interface synchronizes configuration changes between the VM-Series firewalls. An additional interface, HA2 (nic3), is attached to the VM-Series firewalls. The HA2 interface exchanges session tables between the active/passive pair. The external and internal TCP/UDP load balancers are configured with Connection Tracking to track sessions through the VM-Series firewalls. For more information, see VM-Series on Google Cloud. What's next Learn about the VM-Series on Google Cloud. Learn about VM-Series licensing on all platforms. Watch the VM-Series capabilities in Google Cloud. Try the Secure Google Cloud Hub-and-Spoke with VM-Series tutorial. Try the VM-Series Active/Passive HA on Google Cloud tutorial. 
Deploy a hub-and-spoke network using VPC Network Peering. Learn about more design options for connecting multiple VPC networks. Send feedback \ No newline at end of file diff --git a/Secure_virtual_private_cloud_networks_with_the_Palo_Alto_VM-Series_NGFW.txt b/Secure_virtual_private_cloud_networks_with_the_Palo_Alto_VM-Series_NGFW.txt new file mode 100644 index 0000000000000000000000000000000000000000..17936845f7e0072368890c6f00e76e86d36149e1 --- /dev/null +++ b/Secure_virtual_private_cloud_networks_with_the_Palo_Alto_VM-Series_NGFW.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/partners/palo-alto-networks-ngfw +Date Scraped: 2025-02-23T11:53:52.908Z + +Content: +Home Docs Cloud Architecture Center Send feedback Secure virtual private cloud networks with the Palo Alto VM-Series NGFW Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-01-05 UTC Author: Matt McLimans, Technical Engagement Manager, Palo Alto Networks This document describes the networking concepts that you need to understand to deploy Palo Alto Networks VM-Series next generation firewall (NGFW) in Google Cloud. This document is intended for network administrators, solution architects, and security professionals who are familiar with Compute Engine and Virtual Private Cloud (VPC) networking. The VM-Series NGFW helps enterprises secure their applications, users, and other data deployed across Google Cloud and other virtualization environments. Palo Alto Networks delivers zero-trust security capabilities for all enterprise networks by using the following approaches to threat prevention: Securing all applications with Layer-7 inspection, granting access based on user identification, and preventing known and unknown threats. Segmenting mission-critical applications and data using zero-trust principles to improve security posture and achieve compliance with various security standards—for example, PCI DSS. Using Panorama to centrally manage both physical and virtual firewalls. Panorama helps provide a consistent security posture across all hybrid and multicloud environments. Using tooling to automate the deployment and configuration of VM-Series NGFWs. Using predefined and custom network tags within security policies to dynamically update objects as Cloud workloads are created, moved, and destroyed. The VM-Series NGFW helps secure workloads deployed across both your VPC networks and your remote networks. Consider the following ways that the VM-Series NFGW can help to secure network patterns. The items in the following list correspond to the numbers in the following diagram. Prevent inbound threats from the internet to resources deployed in VPC networks, on-premises, and in other cloud environments. Stop outbound connections from VPC networks to suspicious destinations, such as command-and-control servers, or malicious code repositories. Prevent threats from moving laterally between workload VPC networks and stealing data. Secure traffic between remote networks connected through Cloud Interconnect, Network Connectivity Center, or Cloud VPN. Help extend security to remote users and mobile devices to provide granular application access to Google Cloud resources. For more information, see VM-Series Virtual Next-Generation Firewalls. Architecture As shown in the following diagram, the VM-Series is deployed through Compute Engine. Securing the users, applications, and data residing in your VPC networks requires a minimum of three network interfaces. 
The network interfaces are as follows: Management Untrust Trust For more information, see Multiple network interfaces overview and examples. The following diagram shows the components required to help secure network traffic with the VM-Series firewall. The management network provides access to the VM-Series user interface. It's common to connect the management network to on-premises networks to provide private access. The untrust network serves as an internet gateway for resources deployed to the trust network. The trust network contains the cloud workloads that you want to protect. Panorama Management provides centralized management of the VM-Series firewalls. You can deploy Panorama Management either as a compute instance in Google Cloud, or as a virtual or physical appliance in an on-premises data center. Components The following sections explain the network components in more detail. Untrust network The untrust VPC interface serves as the internet gateway for resources deployed in a private network. To enable outbound internet connectivity from a private VPC network, either attach an External IP address to the untrust interface, or deploy a Cloud NAT to a public VPC network. For inbound internet connectivity to private-network resources, you can achieve traffic distribution and high availability by configuring the interfaces to serve as the backend of the external load balancer. To serve as the backend of your external passthrough Network Load Balancers and external Application Load Balancers, we recommend that you set the untrust interface as the primary VPC network interface through an interface swap. Management network The management interface is the primary network interface of the compute instance. The IP address of the management interface provides access to the VM-Series user interface and terminal console. Instance group We recommend that you deploy the VM-Series to an instance group. This approach lets you horizontally scale firewalls for resiliency and performance while managing the VM-Series NGFW as a single entity through a Palo Alto Networks Panorama appliance. Trust network The VM-Series trust interface is attached to a private VPC network. We recommend that you set the trust interface as the backend of an Internal TCP/UDP Load Balancer to provide high availability. A private VPC network has a default route that points to either the trust network interface or the Internal TCP/UDP Load Balancer as the next hop. It's common for the private VPC network to be used as the following: A shared VPC network that delegates its subnets to organization service projects A hub VPC network that provides transitive routing and inspection for multiple VPC networks Panorama Panorama provides centralized management for all Palo Alto Networks firewalls. You can bootstrap deployed VM-Series firewalls to Panorama to receive all policy and network configurations. You can deploy Panorama in your on-premises data center or on Compute Engine. For more information, see Panorama and Network Security Management. Management interface swap For the VM-Series firewalls to receive traffic from any Google Cloud external load balancer, you must perform a management interface swap. This swap also makes the untrust interface the primary interface of the compute instance. If you don't use load balancing, you should still perform an interface swap at deployment. If you don't perform an interface swap at deployment, you must redeploy the VM-Series. 
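As a concrete illustration of this interface layout and the swap, the following Python sketch uses the Compute Engine API (through the google-api-python-client library) to create a VM-Series instance whose first interface (nic0) attaches to the untrust VPC network, whose second interface (nic1) attaches to the management VPC network, and whose third interface (nic2) attaches to the trust VPC network. The project, zone, network, subnet, machine type, and image names are placeholder assumptions, and the mgmt-interface-swap metadata key comes from Palo Alto Networks' deployment guidance; verify all of them before use. Treat this as a sketch rather than a production deployment script.

from googleapiclient import discovery

PROJECT = "my-project"   # placeholder
REGION = "us-central1"   # placeholder
ZONE = "us-central1-a"   # placeholder

def nic(network, subnet):
    """Build one network interface that attaches to the given VPC network and subnet."""
    return {
        "network": f"projects/{PROJECT}/global/networks/{network}",
        "subnetwork": f"projects/{PROJECT}/regions/{REGION}/subnetworks/{subnet}",
    }

instance_body = {
    "name": "vmseries-fw-01",
    "machineType": f"zones/{ZONE}/machineTypes/n2-standard-4",
    "networkInterfaces": [
        nic("untrust-vpc", "untrust-subnet"),  # nic0: untrust, primary after the swap
        nic("mgmt-vpc", "mgmt-subnet"),        # nic1: management
        nic("trust-vpc", "trust-subnet"),      # nic2: trust
    ],
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            # Placeholder image path; use the VM-Series image for your license type.
            "sourceImage": "projects/paloaltonetworksgcp-public/global/images/IMAGE_NAME",
        },
    }],
    "metadata": {
        "items": [
            # Requests the management interface swap at first boot.
            {"key": "mgmt-interface-swap", "value": "enable"},
        ],
    },
}

compute = discovery.build("compute", "v1")
compute.instances().insert(project=PROJECT, zone=ZONE, body=instance_body).execute()

In practice, you would place the same body in an instance template so that a managed instance group, rather than a one-off script, creates and scales the firewalls.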
Value Palo Alto Networks technologies span across major security controls. They help organizations accomplish the following goals: Centralize management Maintain optimum connectivity Extend security policies and controls to the following control types: Users Applications Devices Next-Generation Firewall platforms are available in the following form factors: Physical Virtual Containerized Cloud-delivered Costs The VM-Series firewall supports the following license types: BYOL PayGo It also supports the following two licensing models: Software Next Generation firewall credits: Flexible configurations that you specify with a deployment profile Fixed VM-Series Model configurations Both models license security services and other features. The VM-Series uses the following billable components of Google Cloud: Compute Engine For more information about licensing, see VM-Series Firewall Licensing. To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. Performance We recommend using Cloud Load Balancing to distribute traffic to VM-Series firewalls deployed across regional zones. The VM-Series integrates with instance groups and Google Cloud Observability to scale the number of firewalls based on custom metrics. For more information, see Autoscaling the VM-Series Firewall on Google Cloud. Palo Alto Networks Single-Pass Architecture processes the following items only once: Network policy enforcement Application identification User controls Content inspection This check significantly reduces the processing overhead required to perform multiple functions on a single device. For more information, see VM-Series on Google Cloud Performance and Capacity. Use cases The following sections explain the VPC network use cases in more detail. Secure multiple VPC networks Because routing between subnetworks can't be overridden, consider segmenting your networks by allocating workloads to separate VPC networks. That way, the VM-Series firewalls can route and inspect all traffic that flows across the VPC network boundary. For more information about intra-VPC network traffic, see Google Cloud IDS. Multiple network interface models To help secure multiple networks, add additional VM-Series dataplane interfaces directly to your VPC networks. To steer traffic to the VM-Series NGFW interface, or to an Internal TCP/UDP Load Balancer (if multiple firewall VMs are used), use a custom route for each connected network. There's a limit of eight network interfaces per virtual machine instance. After you create the instance, you can't add or remove network interfaces. The following diagram shows a standard configuration of a VM Series deployment using multiple VM instances in an instance group (for high availability and performance): The following examples show the different traffic patterns that run through the VM-Series firewalls in this configuration: An inbound request is made to an application hosted in VPC C. The external load balancer distributes the request to the VM-Series untrust interfaces. The VM-Series firewall inspects and forwards the request through the NIC4 in VPC C and to the destination application. The route table of VPC B routes traffic that is destined to the internet to the IP address of VPC B's Internal TCP/UDP Load Balancer. The load balancer distributes the traffic to NIC3 on the VM-Series firewalls. The VM-Series inspects and forwards the traffic through its untrust interface (NIC0) to the internet. 
A resource in VPC A makes a request to a resource in VPC B. The route table of VPC A routes the request to the Internal Load Balancer in VPC A. The load balancer distributes the request to NIC2 on the VM-Series firewalls. The VM-Series inspects and forwards the request through NIC3 to the resource in VPC B. VPC B routes its return traffic to its Internal TCP/UDP Load Balancer using the VPC B route table. VPC network peering model VPC Network Peering helps you to build a hub and spoke topology to secure many VPC networks. Because you can add and remove spoke VPC networks as needed, network peering provides flexibility. In the following diagram, VM-Series firewalls are deployed to an instance group to help secure traffic for various spoke VPC networks. The hub network contains the VM-Series trust interfaces. The trust interfaces are the backend of an Internal TCP/UDP Load Balancer. The spoke networks are VPC peers of the hub network. Each spoke has a custom default route that directs traffic to the IP address of the Internal TCP/UDP Load Balancer. In the configuration displayed in the previous diagram, the VM-Series firewalls inspect the traffic as follows: Traffic from the internet to applications in the spoke networks are distributed by the external passthrough Network Load Balancer to the VM-Series untrust interfaces (NIC0). The VM-Series firewall inspects the traffic and forwards permissible traffic through its trust interface (NIC2) to the application in the spoke network. Traffic from the spoke networks destined to the internet are routed to the Internal TCP/UDP Load Balancer in the hub VPC. The VM-Series firewall inspects the traffic and forwards permissible traffic through its untrust interface (NIC0) to the internet. Traffic between spoke networks is routed to the Internal TCP/UDP Load Balancer in the hub VPC. The VM-Series firewall inspects and forwards the traffic through the trust interface (NIC2) into the hub network which routes permissible traffic to the destination spoke network. You can combine the multiple network interface models and VPC network peering models to help scale security for many VPC networks by attaching additional VM-Series firewall interfaces to additional hub VPC networks. To do so, connect each interface and network to multiple spoke networks through VPC Network Peering. Distribute VM-Series firewalls To help secure specific traffic flows, you can deploy VM-Series firewalls across independent dedicated instance groups. The distributed-firewall design provides traffic autonomy and the ability to scale instance groups independently. The following use cases are examples of how to distribute instance groups: Securing traffic for networks that span multiple regions Isolating traffic based on data classification and sensitivity The following diagram is identical to the hub-and-spoke architecture with VPC Network Peering that was shown and discussed in the previous section. The difference is that the VM-Series firewalls are deployed in two managed instance groups. The first managed instance group is inbound. It helps secure inbound traffic from the internet to applications hosted in the spoke networks. The second managed instance group is outbound. It helps secure all egress traffic from the spoke networks and traffic from on-premises. This instance group provides traffic isolation with the ability to independently scale the VM-Series instance groups. 
The VM-Series firewalls in the configuration displayed in the previous diagram inspect traffic as follows: Traffic from the internet to applications in the spoke networks is distributed by the external passthrough Network Load Balancer to the untrust interfaces on the inbound instance group. The inbound instance group inspects all traffic and forwards permissible traffic through NIC2 to the application in the spoke network. Traffic from the spoke networks destined to the internet is routed to the Internal TCP/UDP Load Balancer in the hub VPC. The load balancer distributes the traffic to NIC2 on the outbound instance group. The VM-Series inspects and forwards permissible traffic through its untrust interface (NIC0) and to the internet. Traffic between spoke networks and the remote network is routed to the Internal TCP/UDP Load Balancer in the hub VPC. The load balancer distributes permissible traffic to NIC2 on the outbound instance group. The VM-Series firewall inspects and forwards permissible traffic through NIC2 where the hub network routes that traffic to the adjacent spoke network. Active/passive model You can configure the VM-Series firewalls as an active/passive high availability (HA) pair. In this model, each VM-Series firewall belongs to an unmanaged instance group. Only the primary VM-Series firewall receives network traffic from Google Cloud load balancers. The health check configured on the load balancers determines the HA state of the primary VM-Series firewall. If the health check fails on the primary VM-Series firewall, the load balancers carry the active sessions to the secondary VM-Series firewall. At that point, the secondary VM-Series firewall becomes the primary firewall. This model is suited for environments with any or all of the following requirements: Maintenance of session continuity through stateful failover between VM-Series firewalls. Establishment of static IPsec tunnels directly to the VM-Series firewall. Preservation of the original client IP address for internet inbound traffic to internal applications protected by the VM-Series firewalls. Note: There's no need to horizontally scale the VM-Series firewalls. The topology and traffic patterns are almost identical to the previous models, with the following exceptions: nic1 serves as the management and HA1 interface. The HA1 interface synchronizes configuration changes between the VM-Series firewalls. An additional interface, HA2 (nic3), is attached to the VM-Series firewalls. The HA2 interface exchanges session tables between the active/passive pair. The external and internal TCP/UDP load balancers are configured with Connection Tracking to track sessions through the VM-Series firewalls. For more information, see VM-Series on Google Cloud. What's next Learn about the VM-Series on Google Cloud. Learn about VM-Series licensing on all platforms. Watch the VM-Series capabilities in Google Cloud. Try the Secure Google Cloud Hub-and-Spoke with VM-Series tutorial. Try the VM-Series Active/Passive HA on Google Cloud tutorial. Deploy a hub-and-spoke network using VPC Network Peering. Learn about more design options for connecting multiple VPC networks. 
Send feedback \ No newline at end of file diff --git a/Security(1).txt b/Security(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..f37dfdfd60fc09fa4eb67c3ce80aed44b7b2549f --- /dev/null +++ b/Security(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security +Date Scraped: 2025-02-23T11:57:21.466Z + +Content: +Just released - Read the latest Defender's Advantage report -Download todayGet help now for a security breach or possible incident.Make Google part of your security team with Mandiant frontline experts, intel-driven security operations, and cloud security—supercharged by AIContact usVisit the Trust CenterAre you looking to:Detect, investigate, and respond to threats fast?Understand threat actors and mitigate risk?Help secure your cloud transformation and support digital sovereignty requirements?Supercharge your security with AIStay up to date with the latest insights from Google Cloud SecurityReportM-Trends 2024 Special Report - Download nowReportCyber Snapshot Report, Issue 7 BlogInvestigating FortiManager Zero-Day ExploitationBlogStaying a Step Ahead: Mitigating the DPRK IT Worker ThreatBlogFLARE VM: The Windows Malware Analysis Distribution You've Always Needed!BlogUNC5537 Targets Snowflake Customer Instances for Data Theft and ExtortionPodcastHow to Run an Effective Tabletop ExercisePodcastUsing LLMs to Analyze Windows BinariesReportCybersecurity forecast 2024: Insights for future planningView MoreNEW WAYS TO SECURE YOUR ORGANIZATIONAddress threats with intel, expertise, and AI-infused technologyUnderstand threats and help mitigate riskMandiant responds to some of the most impactful breaches around the world. We combine this frontline experience with insights from Mandiant threat analysts to provide a world-class view into threat actors.Detect, investigate, and respond to threats fastGoogle SecOps is a modern, AI-powered SecOps platform that is infused with Google’s unparalleled understanding of the threat landscape to empower security teams to help defend against today’s and tomorrow’s threats.Help secure your cloud transformationGoogle Cloud provides a secure-by-design foundation—core infrastructure designed, built, and operated with security in mind to support digital sovereignty requirements. 
We provide a broad portfolio of controls and capabilities, including advanced endpoint security capabilities in the world’s most popular enterprise browser from Google, and more to help you meet policy, regulatory, and business objectives.Supercharge security with generative AIHelp keep your organization safe with Gemini in Security, your AI-powered collaborator, built with Google’s foundation models specifically trained and fine-tuned for cybersecurity use cases.Transform your cybersecurityBoost your defenses against novel threatsModernize your security operationsHelp secure your cloud transformationBoost your defenses against novel threatsBoost your defenses against novel threatsModernize your security operationsHelp secure your cloud transformationFrontline intelligence and expertiseMandiant leverages threat intelligence and experience on the frontlines to help you understand active threats, so your organization can mitigate risk and minimize the impact of a breach.Request consultLearn moreLeverage advanced expertiseConfidently tackle breaches with IR experts to help reduce the impact of a breachLearn who is targeting you and how to proactively mitigate destructive attacksDiscover your exposed attack surface and help mitigate the risk of exploitsValidate controls and operational effectiveness against targeted attacksGoogle Threat IntelligenceUnderstand the threats that matter right now with threat intelligence from the frontlines.Mandiant Attack Surface ManagementGet the outside-in view of your attack surface with continuous monitoring for unknowns and exposures.VirusTotalExpedite investigation and threat discovery to help stop breaches by leveraging 15 years of malicious sightings to enrich and provide context around your organization's observations and logs.Mandiant Security ValidationAutomation and continuous testing give you real data of your security controls’ behavior under an emulated attack, so your security team can make improvements before one can happen.Mandiant Digital Threat MonitoringDark web monitoring helps with visibility into external threats by providing early warnings of threat actors targeting your organization, and notifications of data and credential leaks.Mandiant ConsultingDeepen the power of your security program with Mandiant consulting services. 
Our experience comes directly from the frontlines, and is backed by leading threat researchers and incident responders.Mandiant Incident ResponseActivate Mandiant experts to complete in-depth attack analysis, perform crisis management over response timeline, and recover business operations after a breach.Mandiant Strategic ReadinessEnhance security capabilities to help outmaneuver today’s threat actors and provide resilience against future compromise.Mandiant Technical AssuranceGauge and help improve your security programs with emulated attacks against critical assets.Mandiant Transformation ServicesEstablish and mature cyber defense capabilities across the six critical functions of cyber defense.Mandiant Managed DefenseGet 24/7 monitoring and alert prioritization by working with a growing range of third-party technologies.ProductsGoogle Threat IntelligenceUnderstand the threats that matter right now with threat intelligence from the frontlines.Mandiant Attack Surface ManagementGet the outside-in view of your attack surface with continuous monitoring for unknowns and exposures.VirusTotalExpedite investigation and threat discovery to help stop breaches by leveraging 15 years of malicious sightings to enrich and provide context around your organization's observations and logs.Mandiant Security ValidationAutomation and continuous testing give you real data of your security controls’ behavior under an emulated attack, so your security team can make improvements before one can happen.Mandiant Digital Threat MonitoringDark web monitoring helps with visibility into external threats by providing early warnings of threat actors targeting your organization, and notifications of data and credential leaks.ConsultingMandiant ConsultingDeepen the power of your security program with Mandiant consulting services. Our experience comes directly from the frontlines, and is backed by leading threat researchers and incident responders.Mandiant Incident ResponseActivate Mandiant experts to complete in-depth attack analysis, perform crisis management over response timeline, and recover business operations after a breach.Mandiant Strategic ReadinessEnhance security capabilities to help outmaneuver today’s threat actors and provide resilience against future compromise.Mandiant Technical AssuranceGauge and help improve your security programs with emulated attacks against critical assets.Mandiant Transformation ServicesEstablish and mature cyber defense capabilities across the six critical functions of cyber defense.Mandiant Managed DefenseGet 24/7 monitoring and alert prioritization by working with a growing range of third-party technologies.Security operationsRely on a modern, intel-driven approach to threat detection, investigation, and response. 
Operate at Google scale, help detect threats, and usher in a new era of productivity with AI.Get startedLearn moreA modern security operations platformDetect threats with confidence by storing and analyzing security data at cloud scaleGet rapid insights with context and depth to help stay ahead of the latest threatsRespond with speed and precision with orchestration, automation, and collaborationGoogle Security OperationsHelp detect, investigate, and respond to threats with Google speed, scale, and intelligence using a modern, integrated SecOps platform.Mandiant Hunt for ChronicleHelp expose previously undetected attacker activity with continuous threat hunting by Mandiant experts that leverages your Chronicle data.Incident Response RetainerPut Mandiant incident response experts on speed-dial to respond quickly in the event of a breach. With 2-hour response times and predefined Ts and Cs, the IR experts can begin triage without delay.Cyber Defense Center Development and OperationsEstablish and mature cyber defense capabilities across functions, including threat Intelligence, threat hunting, incident response and validation.ProductsGoogle Security OperationsHelp detect, investigate, and respond to threats with Google speed, scale, and intelligence using a modern, integrated SecOps platform.ConsultingMandiant Hunt for ChronicleHelp expose previously undetected attacker activity with continuous threat hunting by Mandiant experts that leverages your Chronicle data.Incident Response RetainerPut Mandiant incident response experts on speed-dial to respond quickly in the event of a breach. With 2-hour response times and predefined Ts and Cs, the IR experts can begin triage without delay.Cyber Defense Center Development and OperationsEstablish and mature cyber defense capabilities across functions, including threat Intelligence, threat hunting, incident response and validation.Cloud securityBuild on trusted, secure-by-design, secure-by-default cloud infrastructure to help drive your organization’s digital transformation.Get startedLearn moreDrive to the cloud security outcomes you wantHelp defend against threats to your Google Cloud assetsSupport digital sovereignty requirementsProvide secure access to cloud systems, data, and resourcesSecurity Command CenterHelp identify security misconfigurations and vulnerabilities, uncover threats, and use attack path simulation to discover and mitigate risk in your Google Cloud environment.Assured WorkloadsAccelerate your path to running more secure and compliant workloads on Google Cloud with automated controls and guardrails for sensitive workloads.Identity and Access ManagementManage identities and authorize who can take action on specific cloud resources. 
IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes.Google Cloud ArmorHelp protect your cloud-hosted applications and websites against multiple types of threats including denial of service and web application attacks.Cloud Key ManagementCreate, import, and manage cryptographic keys, and perform cryptographic operations in a single centralized cloud service.Chrome Enterprise PremiumStrengthen browser security with the most trusted enterprise browser combined with Google’s advanced security capabilities.Explore moreSecurity FoundationGet recommended products, security capabilities, and prescriptive guidance to achieve a strong security posture in a single solution package.Web App and API Protection (WAAP)Help protect your applications and APIs against threats and fraud, and help ensure availability and compliance.Software supply chain securityEnhance the security of your end-to-end software supply chain, from code to production.Security and Resilience FrameworkHelp ensure continuity and protect your business against adverse cyber events by using our suite of security and resilience solutions.Cloud Architecture and Security AssessmentEvaluate cloud architectures and configurations, cloud security, and hardening techniques to protect against targeted attacks on popular cloud-hosted environmentsProductsSecurity Command CenterHelp identify security misconfigurations and vulnerabilities, uncover threats, and use attack path simulation to discover and mitigate risk in your Google Cloud environment.Assured WorkloadsAccelerate your path to running more secure and compliant workloads on Google Cloud with automated controls and guardrails for sensitive workloads.Identity and Access ManagementManage identities and authorize who can take action on specific cloud resources. 
IAM provides a unified view into security policy across your entire organization, with built-in auditing to ease compliance processes.Google Cloud ArmorHelp protect your cloud-hosted applications and websites against multiple types of threats including denial of service and web application attacks.Cloud Key ManagementCreate, import, and manage cryptographic keys, and perform cryptographic operations in a single centralized cloud service.Chrome Enterprise PremiumStrengthen browser security with the most trusted enterprise browser combined with Google’s advanced security capabilities.Explore moreSolutionsSecurity FoundationGet recommended products, security capabilities, and prescriptive guidance to achieve a strong security posture in a single solution package.Web App and API Protection (WAAP)Help protect your applications and APIs against threats and fraud, and help ensure availability and compliance.Software supply chain securityEnhance the security of your end-to-end software supply chain, from code to production.Security and Resilience FrameworkHelp ensure continuity and protect your business against adverse cyber events by using our suite of security and resilience solutions.Cloud Architecture and Security AssessmentEvaluate cloud architectures and configurations, cloud security, and hardening techniques to protect against targeted attacks on popular cloud-hosted environmentsLeading organizations trust Google Cloud3:05How Ascendium Education supplements critical cybersecurity functions with Mandiant Watch video2:55How Pfizer is boosting cybersecurity with AI and Google SecOpsWatch videoHow Iron Mountain uses Assured Workloads to serve their customers’ compliance needsRead the blog33:40How Broadcom and Equifax use Assured Workloads to serve their clients in regulated industriesWatch videoA critical driver of our cloud adoption has always been the capabilities that cloud brings when it comes to processing huge amounts of data, such as to derive insights. But at the moment of saving data in the cloud, we need to make sure it's protected under strict security standards—at all times. That's why we partnered with Google Cloud.Christian Gorke, Head of Cyber Center of Excellence, CommerzbankExplore resources, master in-demand skills, and keep up with our latest in securityGoogle cloud security communityGoogle Cloud Security CommunityJoin our community of security professionals from around the world with one common mission: bringing their security platforms to the next level. 
Event webinar podcastEvents, webinars, and podcastsBrowse our recent security webinars and podcasts.Mandiant academyMandiant AcademyUplevel your cybersecurity skills with a mix of courses, certifications, and real-world exercises.Let's start security transformation todayContact usConsult the Google Cybersecurity Action Team to discuss your digital transformation.Google Cybersecurity Action TeamLearn more about our shared fate model, designed to give you greater trust and confidence in the cloud.Shared fate modelExplore best practices for supporting your security and compliance objectives.Security best practices centerGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Security(2).txt b/Security(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..52e062443038a0d6e3f5d568fd350f5552d192a8 --- /dev/null +++ b/Security(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/security +Date Scraped: 2025-02-23T12:00:56.344Z + +Content: +Learn optimization tips from IT leaders from Uber, Air Asia, Deloitte, ADT and more at our free IT Heroes Summit. Watch now.Protect your organization with Google Cloud security solutionsApply the best of Google to your security in the cloud, on-premises, or in hybrid deployments. Detect, investigate, and help stop cyber threats that target your business and users before attacks result in damage or loss.Contact usProtect your applicationsMake our security solutions part of your business continuity plan. Use WAAP to protect against fraud and Chronicle to detect and investigate threats.Build on Google's secure-by-default infrastructureProtect your users, data, and applications using the same secure-by-design infrastructure that Google relies on.Rely on our global networkOur private, software-defined network provides fast and reliable connections to users around the world.Google Cloud security solutionsSolutionsGood forSecurity FoundationSolution with recommended products and guidance to help achieve a strong security posture.Easily consume security capabilitiesAlignment with security best practicesCost-effective package of security productsRisk and compliance as code (RCaC)Transform your security and compliance function through automation to gain the speed and agility of DevOps, reduce risk, and create value in the cloud securely.Assert infrastructure and policies as codeEstablish secure guardrailsDetect drift and noncomplianceContinuously evaluate riskSecurity analytics and operationsUsing Chronicle, store and continuously analyze petabytes of security telemetry at a fixed price with zero management headache.Enhanced threat detection and investigation with intelligent data fusionThreat identification with our advanced rules engineContinuous IoC matching and retrospective analysis of security telemetryPainless scalability with elastic security telemetry storageSecurity and resilience frameworkHelp ensure continuity and protect your business against adverse cyber events by using our comprehensive suite of security and resilience solutions.Address each phase of the cybersecurity life cycleHelp protect critical assets in cloud and on-premisesModernize your security protections to help you maintain continuous operationsEnable rapid recovery wherever your assets resideSoftware supply chain securityEnhance the security of your end-to-end software supply chain from code to production.Shift security left by identifying risks earlier in your SDLCAutomate security enforcement along the supply chainAdopt 
industry open standards and best practices Start from where you are toward holistic supply chain securityWeb App and API Protection (WAAP)Protect your applications and APIs against threats and fraud, help ensure availability and compliance.Protect against new and existing threats to your web applications and APIsGuard apps and APIs in the cloud, on-premises, or in hybrid deploymentsSimplify operations with consolidated management and visibilityPotentially save 50%–70% over competing solutionsSolutionsSecurity FoundationSolution with recommended products and guidance to help achieve a strong security posture.Easily consume security capabilitiesAlignment with security best practicesCost-effective package of security productsRisk and compliance as code (RCaC)Transform your security and compliance function through automation to gain the speed and agility of DevOps, reduce risk, and create value in the cloud securely.Assert infrastructure and policies as codeEstablish secure guardrailsDetect drift and noncomplianceContinuously evaluate riskSecurity analytics and operationsUsing Chronicle, store and continuously analyze petabytes of security telemetry at a fixed price with zero management headache.Enhanced threat detection and investigation with intelligent data fusionThreat identification with our advanced rules engineContinuous IoC matching and retrospective analysis of security telemetryPainless scalability with elastic security telemetry storageSecurity and resilience frameworkHelp ensure continuity and protect your business against adverse cyber events by using our comprehensive suite of security and resilience solutions.Address each phase of the cybersecurity life cycleHelp protect critical assets in cloud and on-premisesModernize your security protections to help you maintain continuous operationsEnable rapid recovery wherever your assets resideSoftware supply chain securityEnhance the security of your end-to-end software supply chain from code to production.Shift security left by identifying risks earlier in your SDLCAutomate security enforcement along the supply chainAdopt industry open standards and best practices Start from where you are toward holistic supply chain securityWeb App and API Protection (WAAP)Protect your applications and APIs against threats and fraud, help ensure availability and compliance.Protect against new and existing threats to your web applications and APIsGuard apps and APIs in the cloud, on-premises, or in hybrid deploymentsSimplify operations with consolidated management and visibilityPotentially save 50%–70% over competing solutionsReady to find out how our security solutions can help you meet your security and compliance requirements?Contact usSee how our security analytics and operations platform Chronicle solves Quanta Services' security challenges.Watch the case study Google Cloud security customer storiesVideoNCR reduced the time it took to identify threats from hours and days to secondsVideo (1:11)Case StudyEvernote migrated to Google Cloud's more scalable and secure infrastructure5-min readCase StudyAspen’s security team is investigating incidents 2-3 times faster than before with Chronicle5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Security.txt 
b/Security.txt new file mode 100644 index 0000000000000000000000000000000000000000..607b0a0d8ba10cccf43478989c4e63e0362c4489 --- /dev/null +++ b/Security.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/perspectives/ai-ml/security +Date Scraped: 2025-02-23T11:44:21.240Z + +Content: +Home Docs Cloud Architecture Center Send feedback AI and ML perspective: Security Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-11 UTC This document in the Architecture Framework: AI and ML perspective provides an overview of principles and recommendations to ensure that your AI and ML deployments meet the security and compliance requirements of your organization. The recommendations in this document align with the security pillar of the Architecture Framework. Secure deployment of AI and ML workloads is a critical requirement, particularly in enterprise environments. To meet this requirement, you need to adopt a holistic security approach that starts from the initial conceptualization of your AI and ML solutions and extends to development, deployment, and ongoing operations. Google Cloud offers robust tools and services that are designed to help secure your AI and ML workloads. Define clear goals and requirements It's easier to integrate the required security and compliance controls early in your design and development process, than to add the controls after development. From the start of your design and development process, make decisions that are appropriate for your specific risk environment and your specific business priorities. Consider the following recommendations: Identify potential attack vectors and adopt a security and compliance perspective from the start. As you design and evolve your AI systems, keep track of the attack surface, potential risks, and obligations that you might face. Align your AI and ML security efforts with your business goals and ensure that security is an integral part of your overall strategy. Understand the effects of your security choices on your main business goals. Keep data secure and prevent loss or mishandling Data is a valuable and sensitive asset that must be kept secure. Data security helps you to maintain user trust, support your business objectives, and meet your compliance requirements. Consider the following recommendations: Don't collect, keep, or use data that's not strictly necessary for your business goals. If possible, use synthetic or fully anonymized data. Monitor data collection, storage, and transformation. Maintain logs for all data access and manipulation activities. The logs help you to audit data access, detect unauthorized access attempts, and prevent unwanted access. Implement different levels of access (for example, no-access, read-only, or write) based on user roles. Ensure that permissions are assigned based on the principle of least privilege. Users must have only the minimum permissions that are necessary to let them perform their role activities. Implement measures like encryption, secure perimeters, and restrictions on data movement. These measures help you to prevent data exfiltration and data loss. Guard against data poisoning for your ML training systems. Keep AI pipelines secure and robust against tampering Your AI and ML code and the code-defined pipelines are critical assets. Code that isn't secured can be tampered with, which can lead to data leaks, compliance failure, and disruption of critical business activities. 
Keeping your AI and ML code secure helps to ensure the integrity and value of your models and model outputs. Consider the following recommendations: Use secure coding practices, such as dependency management or input validation and sanitization, during model development to prevent vulnerabilities. Protect your pipeline code and your model artifacts, like files, model weights, and deployment specifications, from unauthorized access. Implement different access levels for each artifact based on user roles and needs. Enforce lineage and tracking of your assets and pipeline runs. This enforcement helps you to meet compliance requirements and to avoid compromising production systems. Deploy on secure systems with secure tools and artifacts Ensure that your code and models run in a secure environment that has a robust access control system with security assurances for the tools and artifacts that are deployed in the environment. Consider the following recommendations: Train and deploy your models in a secure environment that has appropriate access controls and protection against unauthorized use or manipulation. Follow standard Supply-chain Levels for Software Artifacts (SLSA) guidelines for your AI-specific artifacts, like models and software packages. Prefer using validated prebuilt container images that are specifically designed for AI workloads. Protect and monitor inputs AI systems need inputs to make predictions, generate content, or automate actions. Some inputs might pose risks or be used as attack vectors that must be detected and sanitized. Detecting potential malicious inputs early helps you to keep your AI systems secure and operating as intended. Consider the following recommendations: Implement secure practices to develop and manage prompts for generative AI systems, and ensure that the prompts are screened for harmful intent. Monitor inputs to predictive or generative systems to prevent issues like overloaded endpoints or prompts that the systems aren't designed to handle. Ensure that only the intended users of a deployed system can use it. Monitor, evaluate, and prepare to respond to outputs AI systems deliver value because they produce outputs that augment, optimize, or automate human decision-making. To maintain the integrity and trustworthiness of your AI systems and applications, you need to make sure that the outputs are secure and within expected parameters. You also need a plan to respond to incidents. Consider the following recommendations: Monitor the outputs of your AI and ML models in production, and identify any performance, security, and compliance issues. Evaluate model performance by implementing robust metrics and security measures, like identifying out-of-scope generative responses or extreme outputs in predictive models. Collect user feedback on model performance. Implement robust alerting and incident response procedures to address any potential issues. 
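As an illustration of these output checks, the following Python sketch (illustrative only, and not part of any Google Cloud API) screens model outputs against simple guardrails: a length cap and blocked patterns for generative responses, and an expected range for predictive scores. The thresholds and patterns are placeholder assumptions; adapt them to your own models, policies, and alerting pipeline.

import re
from dataclasses import dataclass, field

BLOCKED_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # example: SSN-like strings (assumption)
MAX_RESPONSE_CHARS = 4000                       # placeholder length cap
PREDICTION_RANGE = (0.0, 1.0)                   # placeholder range for a score model

@dataclass
class OutputCheck:
    allowed: bool
    reasons: list = field(default_factory=list)

def check_generative_output(text: str) -> OutputCheck:
    """Flag generative responses that look out of scope or unsafe."""
    reasons = []
    if len(text) > MAX_RESPONSE_CHARS:
        reasons.append("response exceeds length limit")
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text):
            reasons.append(f"matched blocked pattern: {pattern}")
    return OutputCheck(allowed=not reasons, reasons=reasons)

def check_predictive_output(score: float) -> OutputCheck:
    """Flag predictive outputs that fall outside the expected range."""
    low, high = PREDICTION_RANGE
    if low <= score <= high:
        return OutputCheck(allowed=True)
    return OutputCheck(allowed=False, reasons=[f"score {score} outside [{low}, {high}]"])

if __name__ == "__main__":
    result = check_generative_output("My SSN is 123-45-6789")
    if not result.allowed:
        # In production, emit this to your logging and alerting pipeline instead.
        print("Blocked output:", result.reasons)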
ContributorsAuthors: Kamilla Kurta | GenAI/ML Specialist Customer EngineerFilipe Gracio, PhD | Customer EngineerMohamed Fawzi | Benelux Security and Compliance LeadOther contributors: Daniel Lees | Cloud Security ArchitectKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerWade Holmes | Global Solutions Director Previous arrow_back Operational excellence Next Reliability arrow_forward Send feedback \ No newline at end of file diff --git a/Security_Analytics_and_Operations.txt b/Security_Analytics_and_Operations.txt new file mode 100644 index 0000000000000000000000000000000000000000..22ab2a89821a0774a1dc24c39f3c4b1710da5027 --- /dev/null +++ b/Security_Analytics_and_Operations.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/security-analytics-and-operations +Date Scraped: 2025-02-23T12:00:58.851Z + +Content: +Explore cutting-edge innovations from Google Cloud and gain insights from Mandiant experts at Google Cloud Security Summit. Register now.Autonomic Security OperationsExceptional threat management through a modern, Cloud-native stack. Deep integrations with third party tools and a powerful engine to create connective tissue and stitch your defenses together.Contact usAutonomic Security Operations TransformationRegister to read whitepaperBenefitsAn adaptive, agile, and highly automated approach to threat managementAccelerated transformationWorkshops, technical content, products, integrations, and blueprints designed to help you kick-start your modernization journey to a state of autonomic security operations.Increase business agilityIntelligent data fusion, continuous IoC matching, sub-second petabyte scale queries, and modern YARA-L detection to conduct plaid-speed management of threats at a disruptive cost and massive scale.Maximize use-case coverageHunt for APTs, detect ransomware, investigate network anomalies, identify fraud signals, or partner with expert MSSPs. Through a transformative onboarding experience, we’ll meet you where your risks are.Ready to transform your SOC or partner with an expert MSSP. Contact us.Key featuresStack your defenses to manage modern threats at Cloud-scale.Modern threat management stackPetabyte-scale detection in Chronicle. Industry leading data lake powered by BigQuery. Rich, compelling analytics via Looker. Deep extensibility to a rich ecosystem of integrations. Managed by Google Cloud.Radical insightSearch, visualize, analyze, and build synergy across your security use cases on a deeply interoperable and semantically aware analytics enginePartner with Google CloudWe take a hands-on approach to transforming your Security Operations team to adapt to the growing needs of your organization. Take advantage of our planet-scale infrastructure and extensive security backbone to pioneer threat management together.Ready to get started? 
Contact usSee how Google Cloud transforms security operationsRedefining Security AnalyticsRegister to read whitepaperCustomersSee how customers are reducing costs and increasing SOC analyst outputCase studyQuanta Services identifies and addresses threats faster with Chronicle.4-min readVideoNCR reduced the time it took to identify threats from hours and days to seconds.1:11Case studyAspen's security team is investigating incidents 2-3 times faster than before.4-min readSee all customersPartnersModernize Security Operations with our preferred partnersOur deep network of highly-specialized global and regional partners can support you in your journey to modernizing Security Operations.Expand allSOC Transformation PartnersFor large enterprises with in-house Security Operations teams that need transformation partners to provide hands-on consulting, engineering, and operations support.Managed Security Service ProvidersFor organizations that don't have an extensive Security Operations footprint and need to purchase an MSSP.See all partnersRelated services Learn more about our security analytics and operations productsThese unique security intelligence products work together to analyze data and provide insight at global scale.ChronicleA global security telemetry platform for threat detection and investigation.LookerBusiness intelligence software and big data analytics platform that helps you explore, analyze and share real-time business analytics easily.BigQueryA serverless, cost-effective and multi cloud data warehouse designed to help you turn big data into valuable business insights.Virus TotalOne of the world’s largest malware intelligence systems.DocumentationExplore common use cases for Autonomic Security OperationsGoogle Cloud BasicsSupported data sets in ChronicleChronicle can ingest raw logs from different companies, protocols, systems, and equipment. This document describes the currently supported data sets.Learn moreBest PracticeMITRE ATT&CK mapping of Google Cloud logsThe tool helps you by mapping out threat tactics and techniques from the popular MITRE ATT&CK® threat model to the specific Google Cloud log types(s).Learn moreQuickstartOverview of the YARA-L 2.0 languageYARA-L 2.0 is a computer language used to create rules for searching through your enterprise log data as it is ingested into your Chronicle account.Learn moreGoogle Cloud BasicsSupported default parsersParsers normalize raw log data into structured Unified Data Model format. This section lists devices, and ingestion labels, that have a default parser. Learn moreNot seeing what you’re looking for?View documentationTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Security_Command_Center.txt b/Security_Command_Center.txt new file mode 100644 index 0000000000000000000000000000000000000000..2975d0ec771efae6144e03548d2897941b2c08fa --- /dev/null +++ b/Security_Command_Center.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/security-command-center +Date Scraped: 2025-02-23T12:09:11.921Z + +Content: +Read our blog announcing Security Command Center Enterprise.Multi-cloud securityCloud security and risk management for multi-cloud environmentsThe industry’s first multi-cloud security solution with virtual red teaming and built-in response capabilities—supercharged by Mandiant expertise and Gemini AI at Google scale.Go to consoleContact salesJoin the Security Command Center Community to find answers, build skills, stay up-to-date, and make connections.Product highlightsBuild a secure cloud posture and find issues earlyDetect threats, vulnerabilities, and misconfigurationsInvestigate and remediate high-risk cloud issuesIntroducing Security Command Center (SCC)FeaturesBuilt-in responseTake direct action on cloud security issues to reduce risk. Cloud misconfigurations, vulnerabilities, and toxic combinations of issues are automatically grouped into cases, enriched with the latest threat intelligence, and assigned to the right owner for investigation and remediation. Streamline response with custom and out-of-the-box playbooks, and integrate with popular ITSM and ticketing solutions.Threat detectionWorld-class Mandiant threat intelligence and expertise is infused into the core solution architecture, enabling security teams to detect and stop the latest cyber threats. It identifies indicators of compromise (IOCs) to find and block newly discovered cryptominers, command and control domains, and more. Curated threat rules are continuously applied to cloud telemetry and workload data to find active threats, while malicious files are detected when uploaded into the cloud environment.5:27Continuous virtual red teamingFind high-risk security issues by simulating a motivated and sophisticated attacker who attempts to reach and compromise cloud resources. Millions of attack permutations run against a digital twin model of your cloud environment to predict where an external attacker could strike, identify cloud resources that could be exposed, and determine the possible blast radius of an attack. Virtual red team results, including attack paths, risk scoring, and toxic combinations, are then used to prioritize remediation.6:31Cloud posture managementIdentify cloud misconfigurations, software vulnerabilities, and compliance violations across multi-cloud environments. Get visibility of cloud assets and resources, and identify security issues that could lead to compromise. Security findings are assigned an attack exposure score and are mapped on Security Command Center’s risk dashboard to help prioritize security response.6:02Shift left securityFind security issues before they happen. Developers get access to thousands of software packages tested and validated by Google via Assured Open Source Software.
DevOps and DevSecOps teams get posture controls to define and monitor security guardrails in the infrastructure, and can use infrastructure as code (IaC) scanning to implement consistent security policies from code to cloud by validating security controls during the build process.Cloud Infrastructure and Entitlement Management (CIEM)Reduce identity-related risks by granting users the minimum level of access and permissions needed to perform their job. Understand which users have access to which cloud resources, get ML-generated recommendations to reduce unused and unnecessary permissions, and use out-of-the box playbooks to accelerate responses to identity-driven vulnerabilities. Compatible with Google Cloud IAM, Entra ID (Azure AD), AWS IAM, and Okta.Mandiant HuntUncover threats hiding in your cloud environments with Mandiant Hunt. Our experts proactively analyze your multicloud data, armed with the latest knowledge of adversary tactics, techniques, and procedures (TTPs) targeting cloud systems. This optional, paid-for service uses continuous intelligence from Mandiant frontline experts, VirusTotal, and Google Cloud security data. You'll receive findings mapped to the MITRE ATT&CK framework, offering actionable context to strengthen your cloud security posture.Data security posture managementAutomatically monitor, categorize, and manage sensitive cloud data to ensure that it has the right security, privacy, and compliance posture and controls. Use more than 150 AI-driven data classifiers to discover and classify structured and unstructured data across your organization. Automatically use high-value data findings to improve virtual red team results.View all featuresOptions TableSecurity Command CenterDescriptionBest forActivation and pricingEnterpriseComplete multi-cloud CNAPP security, plus automated case management and remediation playbooksProtecting Google Cloud, AWS and/or Azure. Best value. Google recommendedSubscription-based pricingPremiumSecurity posture management, attack paths, threat detection, and compliance monitoring for Google Cloud onlyGoogle Cloud customers who need pay-as-you-go billingPay-as-you-go pricing with self-service activationStandardBasic security posture management for Google Cloud onlyGoogle Cloud environments with minimal security requirementsNo cost self-service activationLearn more about Security Command Center offerings in our documentation.EnterpriseDescriptionComplete multi-cloud CNAPP security, plus automated case management and remediation playbooksBest forProtecting Google Cloud, AWS and/or Azure. Best value. Google recommendedActivation and pricingSubscription-based pricingPremiumDescriptionSecurity posture management, attack paths, threat detection, and compliance monitoring for Google Cloud onlyBest forGoogle Cloud customers who need pay-as-you-go billingActivation and pricingPay-as-you-go pricing with self-service activationStandardDescriptionBasic security posture management for Google Cloud onlyBest forGoogle Cloud environments with minimal security requirementsActivation and pricingNo cost self-service activationLearn more about Security Command Center offerings in our documentation.How It WorksSecurity Command Center brings together proactive and reactive security; delivering posture management and threat detection for code, identities, and data. Built-in remediation streamlines security response. 
It’s all powered by Google innovation, running on a planet-scale data lake.Watch a product demoCommon UsesRisk-centric cloud securityPrioritize cloud risks that matterUse virtual red team capabilities to quickly find the high-risk cloud security issues that could lead to significant business impact. Leverage a detailed risk dashboard to view attack path details, toxic combinations of issues, attack exposure scoring, and hand-crafted CVE information from Mandiant to prioritize response efforts.Read about our risk technologyIdentifying and Prioritizing Cloud Risks with a Cloud-native Application Protection PlatformThreat intelligence delivered within a cloud-native application protection platform wrapper enriches and prioritizes risk scoring to deliver on a promise of holistic, unified security. Read the IDC Spotlight whitepaperTutorials, quickstarts, & labsPrioritize cloud risks that matterUse virtual red team capabilities to quickly find the high-risk cloud security issues that could lead to significant business impact. Leverage a detailed risk dashboard to view attack path details, toxic combinations of issues, attack exposure scoring, and hand-crafted CVE information from Mandiant to prioritize response efforts.Read about our risk technologyLearning resourcesIdentifying and Prioritizing Cloud Risks with a Cloud-native Application Protection PlatformThreat intelligence delivered within a cloud-native application protection platform wrapper enriches and prioritizes risk scoring to deliver on a promise of holistic, unified security. Read the IDC Spotlight whitepaperCloud workload protectionDetect and stop active attacksDiscover when bad actors have infiltrated your cloud environment. Put Mandiant threat intelligence at your fingertips to find cyber attacks, including malicious execution, privilege escalation, data exfiltration, defense evasion, and more. Get threats assigned to high-priority cases, enriched with additional evidence, and use cloud-specific playbooks to remove attackers from your cloud.Learn threat detection for Google CloudTutorials, quickstarts, & labsDetect and stop active attacksDiscover when bad actors have infiltrated your cloud environment. Put Mandiant threat intelligence at your fingertips to find cyber attacks, including malicious execution, privilege escalation, data exfiltration, defense evasion, and more. Get threats assigned to high-priority cases, enriched with additional evidence, and use cloud-specific playbooks to remove attackers from your cloud.Learn threat detection for Google CloudBuilt-in security responseInvestigate and fix high-risk issuesAdd built-in response capabilities and start resolving security issues faster and eliminate the backlog of unresolved risks. Use automatic case management that groups related security issues, and identifies the right resource or project owner. Then simplify investigation with Gemini AI, streamline remediation with out-of-the-box playbooks, and plug into your existing ITSM and ticketing system.Read an ESG white paperTutorials, quickstarts, & labsInvestigate and fix high-risk issuesAdd built-in response capabilities and start resolving security issues faster and eliminate the backlog of unresolved risks. Use automatic case management that groups related security issues, and identifies the right resource or project owner. 
Then simplify investigation with Gemini AI, streamline remediation with out-of-the-box playbooks, and plug into your existing ITSM and ticketing system.Read an ESG white paperShift left securityFix issues before they happenMitigate supply chain risks that can be introduced during the software development process by using thousands of software packages tested and validated by Google. Scan infrastructure as code (IaC) files and CI/CD pipelines to identify resource violations, and set custom posture controls that detect and alert if cloud configurations drift from centrally-defined guardrails or compliance standards.Tutorials, quickstarts, & labsFix issues before they happenMitigate supply chain risks that can be introduced during the software development process by using thousands of software packages tested and validated by Google. Scan infrastructure as code (IaC) files and CI/CD pipelines to identify resource violations, and set custom posture controls that detect and alert if cloud configurations drift from centrally-defined guardrails or compliance standards.Security postureMake your clouds safe for critical applications and dataProactively find vulnerabilities and misconfigurations in your multi-cloud environment before attackers can exploit them to access sensitive cloud resources. Then use attack paths and attack exposure scoring to prioritize the security issues that pose the most risk. Monitor compliance to industry standards, such as CIS, PCI-DSS, NIST, and more. Export results to risk and compliance teams.Get an overview of Google Cloud postureTutorials, quickstarts, & labsMake your clouds safe for critical applications and dataProactively find vulnerabilities and misconfigurations in your multi-cloud environment before attackers can exploit them to access sensitive cloud resources. Then use attack paths and attack exposure scoring to prioritize the security issues that pose the most risk. Monitor compliance to industry standards, such as CIS, PCI-DSS, NIST, and more. 
Export results to risk and compliance teams.Get an overview of Google Cloud posturePricingHow Security Command Center pricing worksPricing is based on the total number of assets in the cloud environments being protected.Product tierActivationPrice USDEnterpriseAvailable via one or multi-year subscription, with built-in term discountsPricing detailsPremiumAvailable via self-service activation with pay-as-you-go consumption pricing, at a project-level or organization-levelPricing detailsStandardAvailable via self-service activation, at a project-level or organization-level No costHow Security Command Center pricing worksPricing is based on the total number of assets in the cloud environments being protected.EnterpriseActivationAvailable via one or multi-year subscription, with built-in term discountsPrice USDPricing detailsPremiumActivationAvailable via self-service activation with pay-as-you-go consumption pricing, at a project-level or organization-levelPrice USDPricing detailsStandardActivationAvailable via self-service activation, at a project-level or organization-level Price USDNo costSCC PREMIUM PRICINGLearn about pay-as-you-go pricing for SCC Premium.Get pricing detailsSCC ENTERPRISE PRICINGConnect with our sales team to get a quote for a one-year or multi-year subscription.Request a quoteGet started todayActivate SCC Premium for Google CloudGo to consoleStart a proof of conceptContact salesTake a courseGetting started with SCC EnterpriseGet more technical product informationRead documentationExpand your cloud security knowledgeView cloud security sessions at Google Cloud Next '24Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Security_Foundation.txt b/Security_Foundation.txt new file mode 100644 index 0000000000000000000000000000000000000000..0b8f8a4b72e8e1f207dc2e5e2ae6ff2941088601 --- /dev/null +++ b/Security_Foundation.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/solutions/security-foundation +Date Scraped: 2025-02-23T12:01:09.115Z + +Content: +Explore cutting-edge innovations from Google Cloud and gain insights from Mandiant experts at Google Cloud Security Summit. Register now.Security FoundationGoogle recommended cloud-first security products and capabilities to help you achieve a strong security posture and protections for your Google Cloud environment. Go to consoleStart discoveryVIDEOProtect your Google Cloud environment with Security Foundation4:34BenefitsAchieve a strong security posture on Google CloudEasily consume security capabilitiesEasily adopt Google Cloud recommended products and security capabilities that help achieve a strong security posture for your cloud environment.Guide with security best practicesThe solution aligns with Google Cloud security best practices to help meet your security and compliance objectives as you deploy workloads on Google Cloud.Supports key cloud adoption use casesProvides security controls you need for data protection, network security, security monitoring, and much more to help make your deployments secure.New customers get $300 in free credits to spend on Google Cloud. 
Key featuresA solution approach to help address your security needsEnterprise foundations blueprint alignmentPrescriptive guidance with products and capabilities aligned to Google Cloud security best practices using the enterprise foundations blueprint guide and deployable terraform assets.Comprehensive security controls for cloudProvides a comprehensive set of security products to address specific use cases in data protection, network security, security monitoring, and much more in the enterprise journey toward cloud adoption and modernizing workloads.Google Cybersecurity Action Team validationOur Google Cybersecurity Action Team (GCAT) reviewed and validated the solution and its product components to help you know you are applying best practices and robust security controls in your security transformation journey. Ready to get started? Go to consoleWhat's in itThe solution includes a foundational set of products across IAM, data security, network security, sovereignty & compliance, and security monitoring. Products included in the Security Foundation solutionProduct listRelated servicesCloud adoption use casesThe Security Foundation solution helps to address your security needs across a wide variety of use cases and scenarios.Infrastructure modernizationApply a layered security approach through identity-based access controls, network protections, VM lifecycle management, monitoring, and governance.Application modernizationImplement DevSecOps, shift left by detecting vulnerabilities and misconfigurations, securing the supply chain, and enabling runtime protections.Data analyticsDeploy key services to meet your organization’s data governance and data integrity needs. Identify, categorize, and control access to your data.Web hostingProtect web applications by mitigating common threats, including OWASP Top 10, Denial-of-Service attacks, and unauthorized access.Regulated workloadsDemonstrate and help maintain compliance based on industry standards and benchmarks, such as PCI, CIS 1.x, NIST 800-53, ISO 27001, and OWASP Top 10.SAP on Google CloudSecure your business processes and data with granular access control to SAP applications, data governance, and multi-region resiliency.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Security_Guide.txt b/Security_Guide.txt new file mode 100644 index 0000000000000000000000000000000000000000..f19b64023da3530e6b1b55fce260b3a37dd54e2b --- /dev/null +++ b/Security_Guide.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hadoop/hadoop-migration-security-guide +Date Scraped: 2025-02-23T11:52:44.167Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hadoop Migration Security Guide Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-04-17 UTC Dataproc and Google Cloud contain several features that can help secure your data. This guide explains how Hadoop security works and how it translates to Google Cloud, providing guidance on how to architect security when deploying on Google Cloud. Overview The typical security model and mechanism for an on-premises Hadoop deployment are different than the security model and mechanism provided by the cloud. 
Understanding security on Hadoop can help you to better architect security when deploying on Google Cloud. You can deploy Hadoop on Google Cloud in two ways: as Google-managed clusters (Dataproc), or as user-managed clusters (Hadoop on Compute Engine). Most of the content and technical guidance in this guide applies to both forms of deployment. This guide uses the term Dataproc/Hadoop when referring to concepts or procedures that apply to either type of deployment. The guide points out the few cases where deploying to Dataproc differs from deploying to Hadoop on Compute Engine. Typical on-premises Hadoop security The following diagram shows a typical on-premises Hadoop infrastructure and how it is secured. Note how the basic Hadoop components interact with each other and with user management systems. Overall, Hadoop security is based on these four pillars: Authentication is provided through Kerberos integrated with LDAP or Active Directory Authorization is provided through HDFS and security products like Apache Sentry or Apache Ranger, which ensure that users have the right access to Hadoop resources. Encryption is provided through network encryption and HDFS encryption, which together secure data both in transit and at rest. Auditing is provided by vendor-supplied products such as Cloudera Navigator. From the perspective of the user account, Hadoop has its own user and group structure to both manage identities and to run daemons. The Hadoop HDFS and YARN daemons, for example, run as Unix users hdfs and yarn, as explained in Hadoop in Secure Mode. Hadoop users are usually mapped from Linux system users or Active Directory/LDAP users. Active Directory users and groups are synced by tools such as Centrify or RedHat SSSD. Hadoop on-premises authentication A secure system requires users and services to prove themselves to the system. Hadoop secure mode uses Kerberos for authentication. Most Hadoop components are designed to use Kerberos for authentication. Kerberos is usually implemented within enterprise authentication systems such as Active Directory or LDAP-compliant systems. Kerberos principals A user in Kerberos is called a principal. In a Hadoop deployment, there are user principals and service principals. User principals are usually synced from Active Directory or other user management systems to a key distribution center (KDC). One user principal represents one human user. A service principal is unique to a service per server, so each service on each server has one unique principal to represent it. Keytab files A keytab file contains Kerberos principals and their keys. Users and services can use keytabs to authenticate against Hadoop services without using interactive tools and entering passwords. Hadoop creates service principals for each service on each node. These principals are stored in keytab files on Hadoop nodes. SPNEGO If you are accessing a Kerberized cluster using a web browser, the browser must know how to pass Kerberos keys. This is where Simple and Protected GSS-API Negotiation Mechanism (SPNEGO) comes in, which provides a way to use Kerberos in web applications. Integration Hadoop integrates with Kerberos not only for user authentication, but also for service authentication. Any Hadoop service on any node will have its own Kerberos principal, which it uses to authenticate. Services usually have keytab files stored on the server that contain a random password. 
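For example, a service or an automated job can authenticate non-interactively with its keytab by using the standard Kerberos client tools. The following is a minimal sketch; the keytab path, principal, and realm are hypothetical values, not settings from this guide.

# Hypothetical keytab path and principal; substitute the values for your cluster.
kinit -kt /etc/security/keytabs/hdfs.service.keytab hdfs/node-1.example.com@EXAMPLE.COM
# Confirm which principal is authenticated and when the ticket expires.
klist

Because the key material lives in the keytab file, protecting that file with restrictive filesystem permissions is as important as protecting a password.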
To be able to interact with services, human users usually need to obtain their Kerberos ticket via the kinit command or Centrify or SSSD. Hadoop on-premises authorization After an identity has been validated, the authorization system checks what type of access the user or service has. On Hadoop, some open source projects such as Apache Sentry and Apache Ranger are used to provide authorization. Apache Sentry and Apache Ranger Apache Sentry and Apache Ranger are common authorization mechanisms used on Hadoop clusters. Components on Hadoop implement their own plugins to Sentry or Ranger to specify how to behave when Sentry or Ranger confirms or denies access to an identity. Sentry and Ranger rely on authentication systems such as Kerberos, LDAP, or AD. The group mapping mechanism in Hadoop makes sure that Sentry or Ranger sees the same group mapping that the other components of the Hadoop ecosystem see. HDFS permissions and ACL HDFS uses a POSIX-like permission system with an access control list (ACL) to determine whether users have access to files. Each file and directory is associated with an owner and a group. The structure has a root folder that's owned by a superuser. Different levels of the structure can have different encryption, and different ownership, permissions, and extended ACL (facl). As shown in the following diagram, permissions are usually granted at the directory level to specific groups based on their access needs. Access patterns are identified as different roles and map to Active Directory groups. Objects belonging to a single dataset generally reside at the layer that has permissions for a specific group, with different directories for different data categories. For example, the stg directory is the staging area for financial data. The stg folder has read and write permissions for the fin-loader group. From this staging area, another application account group, fin-etl, which represents ETL pipelines, has read-only access to this directory. ETL pipelines process data and save them into the app directory to be served. To enable this access pattern, the app directory has read/write access for the fin-etl group, which is the identity that is used to write the ETL data, and read-only access for the fin-reader group, which consumes the resulting data. Hadoop on-premises encryption Hadoop provides ways to encrypt data at rest and data in transit. To encrypt data at rest, you can encrypt HDFS by using Java-based key encryption or vendor-supplied encryption solutions. HDFS supports encryption zones to provide the ability to encrypt different files using different keys. Each encryption zone is associated with a single encryption zone key that is specified when the zone is created. Each file within an encryption zone has a unique data encryption key (DEK). DEKs are never handled directly by HDFS. Instead, HDFS only ever handles an encrypted data encryption key (EDEK). Clients decrypt an EDEK, and then use the subsequent DEK to read and write data. HDFS data nodes simply see a stream of encrypted bytes. Data transit between Hadoop nodes can be encrypted using Transport Layer Security (TLS). TLS provides encryption and authentication in communication between any two components from Hadoop. Usually Hadoop would use internal CA-signed certificates for TLS between components. Hadoop on-premises auditing An important part of security is auditing. Auditing helps you find suspicious activity and provides a record of who has had access to resources. 
Cloudera Navigator and other third-party tools are usually used for data management purposes, such as audit tracing on Hadoop. These tools provide visibility into and control over the data in Hadoop datastores and over the computations performed on that data. Data auditing can capture a complete and immutable record of all activity within a system. Hadoop on Google Cloud In a traditional on-premises Hadoop environment, the four pillars of Hadoop security (authentication, authorization, encryption, and audit) are integrated and handled by different components. On Google Cloud, they are handled by different Google Cloud components external to both Dataproc and Hadoop on Compute Engine. You can manage Google Cloud resources using the Google Cloud console, which is a web-based interface. You can also use the Google Cloud CLI, which can be faster and more convenient if you are comfortable working at the command line. You can run gcloud commands by installing the gcloud CLI on your local computer, or by using an instance of Cloud Shell. Hadoop Google Cloud authentication There are two kinds of Google identities within Google Cloud: service accounts and user accounts. Most Google APIs require authentication with a Google identity. A limited number of Google Cloud APIs will work without authentication (using API keys), but we recommend using all APIs with service account authentication. Service accounts use private keys to establish identity. User accounts use the OAUTH 2.0 protocol to authenticate end users. For more information, see Authentication Overview. Hadoop Google Cloud authorization Google Cloud provides multiple ways to specify what permissions an authenticated identity has for a set of resources. IAM Google Cloud offers Identity and Access Management (IAM), which lets you manage access control by defining which users (principals) have what access (role) for which resource. With IAM, you can grant access to Google Cloud resources and prevent unwanted access to other resources. IAM lets you implement the security principle of least privilege, so you grant only the minimum necessary access to your resources. Service accounts A service account is a special type of Google account that belongs to your application or a virtual machine (VM) instead of to an individual end user. Applications can use service account credentials to authenticate themselves with other Cloud APIs. In addition, you can create firewall rules that allow or deny traffic to and from instances based on the service account assigned to each instance. Dataproc clusters are built on top of Compute Engine VMs. Assigning a custom service account when creating a Dataproc cluster will assign that service account to all VMs in your cluster. This gives your cluster fine-grained access and control to Google Cloud resources. If you do not specify a service account, Dataproc VMs use the default Google-managed Compute Engine service account. This account by default has the broad project editor role, giving it a wide range of permissions. We recommend not using the default service account to create a Dataproc cluster in a production environment. Service account permissions When you assign a custom service account to a Dataproc/Hadoop cluster, that service account's level of access is determined by the combination of access scopes granted to the cluster's VM instances and the IAM roles granted to your service account. To set up an instance using your custom service account, you need to configure both access scopes and IAM roles. 
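As a minimal sketch of the first half of that setup, the following gcloud commands create a custom service account and grant it a narrowly scoped role; the project ID, account name, and role shown here are illustrative assumptions rather than recommendations for your environment. How access scopes and IAM roles interact is described next.

# Hypothetical project and account names; adjust to your environment.
gcloud iam service-accounts create dataproc-cluster-sa \
    --project=example-project \
    --display-name="Custom Dataproc cluster service account"

# Grant only the roles the cluster needs, rather than relying on the broad
# project editor role held by the default Compute Engine service account.
gcloud projects add-iam-policy-binding example-project \
    --member="serviceAccount:dataproc-cluster-sa@example-project.iam.gserviceaccount.com" \
    --role="roles/dataproc.worker"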
Essentially, these mechanisms interact in this way: Access scopes authorize the access that an instance has. IAM restricts that access to the roles granted to the service account that the instance uses. The permissions at the intersection of access scopes and IAM roles are the final permissions that the instance has. When you create a Dataproc cluster or Compute Engine instance in the Google Cloud console, you select the access scope of the instance: A Dataproc cluster or Compute Engine instance has a set of access scopes defined for use with the Allow default access setting: There are many access scopes that you can choose from. We recommend that when you create a new VM instance or cluster, you set Allow full access to all Cloud APIs (in the console) or https://www.googleapis.com/auth/cloud-platform access scope (if you use the Google Cloud CLI). These scopes authorize access to all Google Cloud services. After you've set the scope, we recommend that you then limit that access by assigning IAM roles to the cluster service account. The account cannot perform any actions outside of these roles, despite the Google Cloud access scope. For more details, see the service account permissions documentation. Comparing IAM with Apache Sentry and Apache Ranger IAM plays a role similar to Apache Sentry and Apache Ranger. IAM defines access through roles. Access to other Google Cloud components is defined in these roles and is associated with service accounts. This means that all of the instances that use the same service account have the same access to other Google Cloud resources. Anyone who has access to these instances also has the same access to these Google Cloud resources as the service account has. Dataproc clusters and Compute Engine instances don't have a mechanism to map Google users and groups to Linux users and groups. But, you can create Linux users and groups. Inside the Dataproc cluster or inside Compute Engine VMs, HDFS permissions and Hadoop user and group mapping still work. This mapping can be used to restrict access to HDFS or to enforce resource allocation by using a YARN queue. When applications on a Dataproc cluster or Compute Engine VM need to access outside resources such as Cloud Storage or BigQuery, those applications are authenticated as the identity of the service account that you assigned to the VMs in the cluster. You then use IAM to grant your cluster's custom service account the minimum level of access needed by your application. Cloud Storage permissions Dataproc uses Cloud Storage for its storage system. Dataproc also provides a local HDFS system, but HDFS will be unavailable if the Dataproc cluster is deleted. If the application does not strictly depend on HDFS, it's best to use Cloud Storage to fully take advantage of Google Cloud. Cloud Storage does not have storage hierarchies. Directory structure simulates the structure of a file system. It also does not have POSIX-like permissions. Access control by IAM user accounts and service accounts can be set at the bucket level. It does not enforce permissions based on Linux users. Hadoop Google Cloud encryption With a few minor exceptions, Google Cloud services encrypt customer content at rest and in transit using a variety of encryption methods. Encryption is automatic, and no customer action is required. 
For example, any new data stored in persistent disks is encrypted using the 256-bit Advanced Encryption Standard (AES-256), and each encryption key is itself encrypted with a regularly rotated set of root (master) keys. Google Cloud uses the same encryption and key management policies, cryptographic libraries, and root of trust that are used for many of Google's production services, including Gmail and Google's own corporate data. Because encryption is a default feature of Google Cloud (unlike most on-premises Hadoop implementations), you don't need to worry about implementing encryption unless you want to use your own encryption key. Google Cloud also provides a customer-managed encryption keys solution and a customer-supplied encryption keys solution. If you need to manage encryption keys yourself or to store encryption keys on-premises, you can. For more details, see encryption at rest and encryption in transit. Hadoop Google Cloud auditing Cloud Audit Logs can maintain a few types of logs for each project and organization. Google Cloud services write audit log entries to these logs to help you answer the questions "who did what, where, and when?" within your Google Cloud projects. For more information about audit logs and services that write audit logs, see the Cloud Audit Logs documentation. Migration process To help run a secure and efficient operation of Hadoop on Google Cloud, follow the process laid out in this section. In this section, we assume that you've set up your Google Cloud environment. This includes creating users and groups in Google Workspace. These users and groups are either managed manually or synced with Active Directory, and you've configured everything so that Google Cloud is fully functional in terms of authenticating users. Determine who will manage identities Most Google customers use Cloud Identity to manage identities. But some manage their corporate identities independently of Google Cloud identities. In that case, their POSIX and SSH permissions dictate end-user access to cloud resources. If you have an independent identity system, you start by creating Google Cloud service account keys and downloading them. You can then bridge your on-premises POSIX and SSH security model with the Google Cloud model by granting appropriate POSIX-style access permissions to the downloaded service account key files. You allow or deny your on-premises identities access to these keyfiles. If you follow this route, auditability is in the hands of your own identity management systems. To provide an audit trail, you can use the SSH logs (which hold the service-account keyfiles) of user logins on edge nodes, or you can opt for a more heavyweight and explicit keystore mechanism to fetch service-account credentials from users. In that case, the "service account impersonation" is audit-logged at the keystore layer. Determine whether to use a single data project or multiple data projects If your organization has a lot of data, it means dividing the data into different Cloud Storage buckets. You also need to think about how to distribute these data buckets among your projects. You might be tempted to move over a small amount of data when you get started on Google Cloud, moving more and more data over time as workloads and applications move. It can seem convenient to leave all your data buckets under one project, but doing so is often not a good approach. To manage access to the data, you use a flattened directory structure with IAM roles for buckets. 
It can become unwieldy to manage as the number of buckets grows. An alternative is to store data in multiple projects that are each dedicated to different organizations—a project for the finance department, another one for the legal group, and so on. In this case, each group manages its own permissions independently. During data processing, it might be necessary to access or create ad hoc buckets. Processing might be split across trust boundaries, such as data scientists accessing data that is produced by a process that they don't own. The following diagram shows a typical organization of data in Cloud Storage under a single data project and under multiple data projects. Here are the main points to consider when deciding which approach is best for your organization. With a single data project: It's easy to manage all the buckets, as long as the number of buckets is small. Permission granting is mostly done by members of the admin group. With multiple data projects: It's easier to delegate management responsibilities to project owners. This approach is useful for organizations that have different permission-granting processes. For example, the permission-granting process might be different for marketing department projects than it is for legal department projects. Identify applications and create service accounts When Dataproc/Hadoop clusters interact with other Google Cloud resources such as with Cloud Storage, you should identify all applications that will run on Dataproc/Hadoop and the access they will need. For example, imagine that there is an ETL job that populates financial data in California to the financial-ca bucket. This ETL job will need read and write access to the bucket. After you identify applications that will use Hadoop, you can create service accounts for each of these applications. Remember that this access does not affect Linux users inside of the Dataproc cluster or in Hadoop on Compute Engine. For more information about Service Accounts, see Creating and Managing Service Accounts. Grant permissions to service accounts When you know what access each application should have to different Cloud Storage buckets, you can set those permissions on the relevant application service accounts. If your applications also need to access other Google Cloud components, such as BigQuery or Bigtable, you can also grant permissions to these components using service accounts. For example, you might specify operation-ca-etl as an ETL application to generate operation reports by assembling marketing and sales data from California, granting it permission to write reports to the financial department data bucket. Then you might set marketing-report-ca and sales-report-ca applications to each have read and write access to their own departments. The following diagram illustrates this setup. You should follow the principle of least privilege. The principle specifies that you give to each user or service account only the minimum permissions that it needs in order to perform its task or tasks. Default permissions in Google Cloud are optimized to provide ease of use and reduce setup time. To build Hadoop infrastructures that are likely to pass security and compliance reviews, you must design more restrictive permissions. Investing effort early on, and documenting those strategies, not only helps provide a secure and compliant pipeline, but also helps when it comes time to review the architecture with security and compliance teams. 
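To make those grants concrete, the following is a hedged sketch using the example identities from this section. The project ID and the marketing bucket name are assumptions added for illustration; the financial-ca bucket and the application service account names come from the examples above.

# The ETL application's service account gets read/write access to the financial data bucket.
gsutil iam ch \
    serviceAccount:operation-ca-etl@example-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
    gs://financial-ca

# Each reporting application gets access only to its own department's bucket
# (bucket name assumed for illustration).
gsutil iam ch \
    serviceAccount:marketing-report-ca@example-project.iam.gserviceaccount.com:roles/storage.objectAdmin \
    gs://marketing-ca

Granting roles at the bucket level in this way keeps each application's service account within the least-privilege boundary described above.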
Create clusters After you have planned and configured access, you can create Dataproc clusters or Hadoop on Compute Engine with the service accounts you've created. Each cluster will have access to other Google Cloud components based on the permissions that you have given to that service account. Make sure you give the cluster the correct access scope or scopes for Google Cloud, and then adjust access with the service account's IAM roles. If an access issue ever arises, especially for Hadoop on Compute Engine, be sure to check these permissions. To create a Dataproc cluster with a specific service account, use this gcloud command:

gcloud dataproc clusters create [CLUSTER_NAME] \
    --service-account=[SERVICE_ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com \
    --scopes=scope[, ...]

For the following reasons, it is best to avoid using the default Compute Engine service account: If multiple clusters and Compute Engine VMs use the default Compute Engine service account, auditing becomes difficult. The project setting for the default Compute Engine service account can vary, which means that it might have more privileges than your cluster needs. Changes to the default Compute Engine service account might unintentionally affect, or even break, your clusters and the applications that run in them. Consider setting IAM permissions for each cluster Placing many clusters under one project can make managing those clusters convenient, but it might not be the best way to secure access to them. For example, given clusters 1 and 2 in project A, some users might have the right privileges to work on cluster 1, but might also have too many permissions for cluster 2. Or even worse, they might have access to cluster 2 simply because it is in that project when they should not have any access. When projects contain many clusters, access to those clusters can become a tangle, as shown in the following figure. If you instead group like clusters together into smaller projects, and then configure IAM separately for each cluster, you will have a finer degree of control over access. Now users have access to the clusters intended for them and are restricted from accessing others. Restrict access to clusters Setting access using service accounts secures the interactions between Dataproc/Hadoop and other Google Cloud components. However, it does not fully control who can access Dataproc/Hadoop. For example, a user in the cluster who has the IP address of the Dataproc/Hadoop cluster nodes can still use SSH to connect to it (in some cases) or submit jobs to it. In the on-premises environment, the system administrator usually has subnets, firewall rules, Linux authentication, and other strategies to restrict access to Hadoop clusters. There are many ways to restrict access at the level of Google Workspace or Google Cloud authentication when you are running Dataproc/Hadoop on Compute Engine. However, this guide focuses on access at the level of the Google Cloud components. Restricting SSH login using OS login In the on-premises environment, to restrict users from connecting to a Hadoop node, you need to set up perimeter access control, Linux-level SSH access, and sudoer files. On Google Cloud, you can configure user-level SSH restrictions for connecting to Compute Engine instances by using the following process: Enable the OS Login feature on your project or on individual instances. Grant the necessary IAM roles to yourself and other principals. Optionally, add custom SSH keys to user accounts for yourself and other principals.
Alternatively, Compute Engine can automatically generate these keys for you when you connect to instances. After you enable OS Login on one or more instances in your project, those instances accept connections only from user accounts that have the necessary IAM roles in your project or organization. As an example, you might grant instance access to your users with the following process: Grant the necessary instance access roles to the user. Users must have the following roles: The iam.serviceAccountUser role One of the following login roles: The compute.osLogin role, which does not grant administrator permissions The compute.osAdminLogin role, which grants administrator permissions If you are an organization administrator who wants to allow Google identities from outside of your organization to access your instances, grant those outside identities the compute.osLoginExternalUser role at your organization level. You must then also grant those outside identities either the compute.osLogin or compute.osAdminLogin role at your project or organization level. After you configure the necessary roles, connect to an instance using Compute Engine tools. Compute Engine automatically generates SSH keys and associates them with your user account. For more information about the OS Login feature, see Managing Instance Access Using OS Login. Restricting network access using firewall rules On Google Cloud, you can also create firewall rules that use service accounts to filter ingress or egress traffic. This approach can work particularly well in these circumstances: You have a wide range of users or applications that need access to Hadoop, which means that creating rules based on IP addresses is challenging. You are running ephemeral Hadoop clusters or client VMs, so that the IP addresses change frequently. Using firewall rules in combination with service accounts, you can set the access to a particular Dataproc/Hadoop cluster to allow only a certain service account. That way, only the VMs running as that service account will have access at the specified level to the cluster. The following diagram illustrates the process of using service accounts to restrict access. dataproc-app-1, dataproc-1, dataproc-2, and app-1-client are all service accounts. Firewall rules allow dataproc-app-1 to access dataproc-1 and dataproc-2, and allow clients using app-1-client to access dataproc-1. On the storage side, Cloud Storage access and permissions are restricted by Cloud Storage permissions to service accounts instead of firewall rules. For this configuration, the following firewall rules have been established:

dp1: Target: dataproc-1; Source: [IP Range]; Source SA: dataproc-app-1; Allow [ports]
dp2: Target: dataproc-2; Source: [IP Range]; Source SA: dataproc-app-2; Allow [ports]
dp2-2: Target: dataproc-2; Source: [IP Range]; Source SA: dataproc-app-1; Allow [ports]
app-1-client: Target: dataproc-1; Source: [IP Range]; Source SA: app-1-client; Allow [ports]

For more information about using firewall rules with service accounts, see Source and target filtering by service account. Check for inadvertently open firewall ports Having appropriate firewall rules in place is also important when exposing web-based user interfaces that run on the cluster. Ensure that you do not have open firewall ports from the Internet that connect to these interfaces. Open ports and improperly configured firewall rules can allow unauthorized users to execute arbitrary code.
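As a concrete, hedged illustration of the service-account-based filtering in the preceding rule list, the following sketch creates a rule similar to dp1. The VPC network name, project ID, and port list are assumptions (the original configuration leaves the ports as [ports]).

# Hypothetical network, project, and ports; the rule mirrors dp1 above.
gcloud compute firewall-rules create dp1 \
    --network=hadoop-network \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8088,tcp:9870 \
    --source-service-accounts=dataproc-app-1@example-project.iam.gserviceaccount.com \
    --target-service-accounts=dataproc-1@example-project.iam.gserviceaccount.com

Because the source filter identifies VMs by the service account they run as, a rule like this does not expose the ports to the internet, which also helps with the open-port concern discussed next.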
As an example of the risk from open ports, Apache Hadoop YARN provides REST APIs that share the same ports as the YARN web interfaces. By default, users who can access the YARN web interface can create applications, submit jobs, and might be able to perform Cloud Storage operations. Review Dataproc networking configurations and create an SSH tunnel to establish a secure connection to your cluster's controller. For more information about using firewall rules with service accounts, see Source and target filtering by service account. What about multi-tenant clusters? In general, it's a best practice to run separate Dataproc/Hadoop clusters for different applications. But if you have to use a multi-tenant cluster and do not want to violate security requirements, you can create Linux users and groups inside the Dataproc/Hadoop clusters to provide authorization and resource allocation through a YARN queue. The authentication has to be implemented by you because there is no direct mapping between Google users and Linux users. Enabling Kerberos on the cluster can strengthen the authentication level within the scope of the cluster. Sometimes, human users such as a group of data scientists use a Hadoop cluster to discover data and build models. In a situation like this, grouping users that share the same access to data together and creating one dedicated Dataproc/Hadoop cluster would be a good choice. This way, you can add users to the group that has permission to access the data. Cluster resources can also be allocated based on those Linux users. Send feedback \ No newline at end of file diff --git a/Security_and_Identity.txt b/Security_and_Identity.txt new file mode 100644 index 0000000000000000000000000000000000000000..d627f53b6bfb7eaad7ae1fa0a989b981793a14fa --- /dev/null +++ b/Security_and_Identity.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/security-and-identity +Date Scraped: 2025-02-23T12:08:59.134Z + +Content: +Just released - Read the latest Defenders Advantage report -Download todaySecurity and identityWe offer security products that help you meet your policy, regulatory, and business objectives.
The rich set of controls and capabilities we offer is always expanding.Contact salesLearn moreExplore our productsCategoryProduct Key featuresCloud securitySecurity Command CenterPlatform for defending against threats to your Google Cloud assets.Centralized visibility and controlThreat preventionThreat detectionAsset discovery and inventoryAssured WorkloadsCompliance and security controls for sensitive workloads.Enforcement of data locationLimit personnel accessBuilt-in security controlsEnforcement of product deployment locationIdentity and Access ManagementPermissions management system for Google Cloud resources.Single access control interfaceFine-grained controlAutomated access control recommendationsContext-aware accessGoogle Cloud ArmorHelp protect your applications and websites against denial of service and web attacks.Adaptive protectionSupport for hybrid and multicloud deploymentsPre-configured WAF rulesBot managementCloud Key ManagementManage encryption keys on Google Cloud.Centrally manage encryption keysDeliver hardware key security with HSMProvide support for external keys with EKMBe the ultimate arbiter of access to your data Chrome Enterprise PremiumScalable zero-trust platform with integrated threat and data protection.Security layered to protect users, data, resources, and applicationsAccess policies based on identity and contextExperience that is simple for admins and end-users with an agentless approachAssured Open Source SoftwareIncorporate the same OSS packages that Google uses into your own developer workflows.Code scanning and vulnerability testingEnriched metadataSLSA-compliant buildsVerified provenance and automatic SBOMsAccess TransparencyCloud provider visibility through near real-time logs.Explicitly approve access with Access approvalAccess justifications shows the reason for accessResource and method identificationCloud Asset InventoryView, monitor, and analyze Google Cloud and Anthos assets across projects and services.Fully managed inventory serviceExport all your assets at a point of timeExport asset change historyReal-time notification on asset config changeCloud Data Loss PreventionSensitive data inspection, classification, and redaction platform.Data discovery and classificationMask your data to safely unlock more of the cloudMeasure re-identification risk in structured dataCloud IDSCloud-first, managed network threat detection with industry-leading security.Intrusion detection serviceCloud-first network threat detectionIndustry-leading threat intelligenceConfidential ComputingEncrypt data in use with Confidential VMs.Real-time encryption in useLift and shift confidentialityEnhanced innovationCloud NGFWGlobal and flexible next-generation firewall to protect your cloud resources.Hierarchical firewall policiesIntrusion prevention serviceDomain filteringBuilt-in TLS inspectionSecret ManagerStore API keys, passwords, certificates, and other sensitive data.Replication policiesFirst-class versioningCloud IAM integrationAudit loggingVPC Service ControlsProtect sensitive data in Google Cloud services using security perimeters.Helps mitigate data exfiltration risksCentrally manage multi-tenant service access at scaleEstablish virtual security perimetersDeliver independent data access controlShielded VMsVirtual machines hardened with security controls and defenses.Verifiable integrity with secure and measured bootvTPM exfiltration resistanceTrusted UEFI firmwareTamper-evident attestationsCloud IdentityUnified platform for IT admins to manage user devices and 
apps.Advanced account securityDevice security on Android, iOS, and WindowsAutomated user provisioningUnified management consoleIdentity-Aware ProxyUse identity and context to guard access to your applications and VMs.Centralized access controlWorks with cloud and on-premises appsProtects apps and VMsSimpler for admins and remote workersManaged Service for Microsoft Active DirectoryHardened service running Microsoft® Active Directory (AD).Compatibility with AD-dependent appsFamiliar features and toolsMulti-region and hybrid identity supportAutomatic patchingPolicy IntelligenceSmart access control for your Google Cloud resources.Smart access controlHelps you understand and manage policiesGreater visibilityAdvanced automationreCAPTCHA EnterpriseHelp protect your website from fraudulent activity, spam, and abuse.Scores that indicate likely good or abusive actionsTake action based on scoresTune the service to your website’s needsFlexible API; integrate on your site or mobile app Identity PlatformAdd Google-grade identity and access management to your apps.Authentication as a serviceBroad protocol supportMulti-tenancyIntelligent account protection Web RiskDetect malicious URLs on your website and in client applications.Check against comprehensive list of known unsafe URLsApplication agnosticAllow client apps to check URLs with Lookup APIDownload and store unsafe lists with Update APIFrontline Intelligence and expertiseGoogle Threat IntelligenceAccess the latest intel from the frontlines.Driven by expertise and intelligencePrioritize resourcesOperationalize threat intelligenceUnderstand your active threatsMandiant Attack Surface ManagementSee your organization through the eyes of the attacker.Automated external asset discovery and enumerationInfrastructure integrations into cloud and DNS providersTechnology fingerprinting and searchable inventoryActive and passive checks for external assets Virus TotalUnique visibility into threats.Static threat indicatorsBehavior activity and network commsIn-the-wild information Mandiant Security ValidationKnow your security can be effective against today's adversaries.Report evidence of cyber preparedness and value of security investmentsCapture data on your security controls to help optimize cyber defensesPinpoint vulnerabilities and immediate changes required before an attack occurs Mandiant Digital Threat MonitoringVisibility into deep, dark, and open web.MonitorAnticipateDetect Mandiant ConsultingMitigate cyber threats and reduce business risk with the help of frontline experts.Improve cyber defenses through assessmentsAccess experts to prioritize and execute improvementsProve security effectiveness and strategic investments through technical testing Mandiant Incident Response ServicesHelp tackle breaches rapidly and confidently.Incident responders are available 24x7Rapid response to minimize business impactQuickly and fully recover from a breachMandiant Managed DefenseFind actionable incidents in real timeCan accelerate alert triage and investigationCan reduce dwell time with continuous threat huntingHelp resolve incidents quicklyExtend your team with access to expertise and intelligenceMandiant AI Security Consulting ServicesSecure your AI systems and harness the power of AI for your defendersSecure the use of AIValidate the defenses protecting AIUse AI to enhance cyber defensesCyber Risk PartnersComprehensively address business risk related to cyber threatsMitigate risk and minimize liability resulting from cyber attacksSimplify cyber risk managementFull 
spectrum of cyber security servicesMandiant Strategic ReadinessEnhance security capabilities to help outmaneuver today’s threat actors and provide resilience against future compromise.Improve capabilities against threatsAdvance approach to cyber risk managementStrengthen defenses against supply chain attacksEvaluate insider threatsMandiant Technical AssuranceSee how your security program performs under pressure with simulated attacks against your environment to harden systems and operations.Test security controls and operationsEvaluate with real-world attacksHarden against the latest threatsIdentify and close security gapsMandiant Cybersecurity Transformation ServicesEstablish and mature cyber defense capabilities across functions.Work to improve processes and technologiesUp-level threat detection, containment, and remediation capabilitiesReceive hands-on support to implement necessary changesHelp optimize security operations and hunt functions Security operationsGoogle Security OperationsDetect, investigate, and respond to threats fastDetect more threats with less effortsInvestigate with the right contextRespond with speed and precisionMandiant HuntUncover hidden attacks with elite threat hunters by your side.Identify detection and visibility gapsReduce attacker dwell timeHunt on 12-months of hot data directly within Google SecOpsExplore our productsCloud securitySecurity Command CenterPlatform for defending against threats to your Google Cloud assets.Centralized visibility and controlThreat preventionThreat detectionAsset discovery and inventoryFrontline Intelligence and expertiseGoogle Threat IntelligenceAccess the latest intel from the frontlines.Driven by expertise and intelligencePrioritize resourcesOperationalize threat intelligenceUnderstand your active threats Security operationsGoogle Security OperationsDetect, investigate, and respond to threats fastDetect more threats with less effortsInvestigate with the right contextRespond with speed and precisionTake the next stepStart your next project, explore interactive tutorials, and manage your account.Contact salesNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Security_and_Resilience_Framework.txt b/Security_and_Resilience_Framework.txt new file mode 100644 index 0000000000000000000000000000000000000000..44564e8324c29007be991106dd52a2c0d797057c --- /dev/null +++ b/Security_and_Resilience_Framework.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/solutions/security-and-resilience +Date Scraped: 2025-02-23T12:01:02.807Z + +Content: +Explore cutting-edge innovations from Google Cloud and gain insights from Mandiant experts at Google Cloud Security Summit. 
Register now.Security and resilience frameworkHelp ensure continuity and protect your business against adverse cyber events by using our comprehensive suite of security and resilience solutions.Try our new Discovery platform and get recommendations.Start DiscoveryBenefitsComprehensive solutions for every phase of the security and resilience lifecycleHelp protect critical assets in cloud and on-premiseAssess risk to your critical assets and improve resilience on-premise or by migrating to Google Cloud.Modernize your security protectionsFrom securing software supply chains to transforming with Zero Trust architectures to threat-hunting at scale, our solutions help protect you from threats and maintain continuous operations.Enable rapid recovery wherever your assets resideRecover from security incidents like ransomware in minutes, not days or weeks.Key featuresOur solutions address each phase of the cybersecurity lifecycleOur solutions address business continuity, secure software supply chains, zero trust, advanced security operations, rapid recovery, and more.IdentifyOur Risk Assessment & Critical Asset Discovery solution evaluates your organization’s current IT risk, identifies where your critical assets reside, and provides recommendations for improving your security posture and resilience. Once on Google Cloud, you can leverage Risk Manager to continuously evaluate risk and our Risk Protection Program to qualify for cyber insurance.ProtectLeverage the security that thousands of customers rely on to help protect their organizations with our solutions for Secure Software Supply Chain, Data Protection, and Zero Trust.DetectOur Autonomic Security Operations (ASO) solution delivers exceptional threat management delivered through a modern, Google Cloud-native stack, and includes deep, rich integrations with third-party tools and a powerful engine to create connective tissue and stitch your defenses together. Achieve operational fusion across your cyber, fraud, compliance, and business teams.RespondOur Autonomic Security Operations solution also enables threat hunting, integrated threat intelligence, and playbook automation through SOAR partnerships to manage incidents from identification to resolution.RecoverRecover from a ransomware attack and resume daily operations within minutes. Google’s Actifio Go solution can improve the resilience of all your on-premise and Google Cloud assets. Ready to get started? Contact usWhat's newSee the latest updates about Google Cloud securityBlog postGoogle Cloud CISO Perspectives: June 2021Read the blogBlog postBest practices for ransomware protection and defenseRead the blogBlog postSoftware supply chain securityRead the blogTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/Security_log_analytics_in_Google_Cloud.txt b/Security_log_analytics_in_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..85c0fb3f0f9e099e9e4263676a08cf90cc81658b --- /dev/null +++ b/Security_log_analytics_in_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-log-analytics +Date Scraped: 2025-02-23T11:56:33.932Z + +Content: +Home Docs Cloud Architecture Center Send feedback Security log analytics in Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-05-21 UTC This guide shows security practitioners how to onboard Google Cloud logs to be used in security analytics. By performing security analytics, you help your organization prevent, detect, and respond to threats like malware, phishing, ransomware, and poorly configured assets. This guide shows you how to do the following: Enable the logs to be analyzed. Route those logs to a single destination depending on your choice of security analytics tool, such as Log Analytics, BigQuery, Google Security Operations, or a third-party security information and event management (SIEM) technology. Analyze those logs to audit your cloud usage and detect potential threats to your data and workloads, using sample queries from the Community Security Analytics (CSA) project. The information in this guide is part of Google Cloud Autonomic Security Operations, which includes engineering-led transformation of detection and response practices and security analytics to improve your threat detection capabilities. In this guide, logs provide the data source to be analyzed. However, you can apply the concepts from this guide to analysis of other complementary security-related data from Google Cloud, such as security findings from Security Command Center. Provided in Security Command Center Premium is a list of regularly-updated managed detectors that are designed to identify threats, vulnerabilities, and misconfigurations within your systems in near real-time. By analyzing these signals from Security Command Center and correlating them with logs ingested in your security analytics tool as described in this guide, you can achieve a broader perspective of potential security threats. The following diagram shows how security data sources, security analytics tools, and CSA queries work together. The diagram starts with the following security data sources: logs from Cloud Logging, asset changes from Cloud Asset Inventory, and security findings from Security Command Center. The diagram then shows these security data sources being routed into the security analytics tool of your choice: Log Analytics in Cloud Logging, BigQuery, Google Security Operations, or a third-party SIEM. Finally, the diagram shows using CSA queries with your analytics tool to analyze the collated security data. Security log analytics workflow This section describes the steps to set up security log analytics in Google Cloud. The workflow consists of the three steps shown in the following diagram and described in the following paragraphs: Enable logs: There are many security logs available in Google Cloud.
Each log has different information that can be useful in answering specific security questions. Some logs like Admin Activity audit logs are enabled by default; others need to be manually enabled because they incur additional ingestion costs in Cloud Logging. Therefore, the first step in the workflow is to prioritize the security logs that are most relevant for your security analysis needs and to individually enable those specific logs. To help you evaluate logs in terms of the visibility and threat detection coverage they provide, this guide includes a log scoping tool. This tool maps each log to relevant threat tactics and techniques in the MITRE ATT&CK® Matrix for Enterprise. The tool also maps Event Threat Detection rules in Security Command Center to the logs on which they rely. You can use the log scoping tool to evaluate logs regardless of the analytics tool that you use. Route logs: After identifying and enabling the logs to be analyzed, the next step is to route and aggregate the logs from your organization, including any contained folders, projects, and billing accounts. How you route logs depends on the analytics tool that you use. This guide describes common log routing destinations, and shows you how to use a Cloud Logging aggregated sink to route organization-wide logs into a Cloud Logging log bucket or a BigQuery dataset depending on whether you choose to use Log Analytics or BigQuery for analytics. Analyze logs: After you route the logs into an analytics tool, the next step is to perform an analysis of these logs to identify any potential security threats. How you analyze the logs depends on the analytics tool that you use. If you use Log Analytics or BigQuery, you can analyze the logs by using SQL queries. If you use Google Security Operations, you analyze the logs by using YARA-L rules. If you are using a third-party SIEM tool, you use the query language specified by that tool. In this guide, you'll find SQL queries that you can use to analyze the logs in either Log Analytics or BigQuery. The SQL queries provided in this guide come from the Community Security Analytics (CSA) project. CSA is an open-source set of foundational security analytics designed to provide you with a baseline of pre-built queries and rules that you can reuse to start analyzing your Google Cloud logs. The following sections provide detailed information on how to set up and apply each step in the security logs analytics workflow. Enable logs The process of enabling logs involves the following steps: Identify the logs you need by using the log scoping tool in this guide. Record the log filter generated by the log scoping tool for use later when configuring the log sink. Enable logging for each identified log type or Google Cloud service. Depending on the service, you might have to also enable the corresponding Data Access audit logs as detailed later in this section. Identify logs using the log scoping tool To help you identify the logs that meet your security and compliance needs, you can use the log scoping tool shown in this section. This tool provides an interactive table that lists valuable security-relevant logs across Google Cloud including Cloud Audit Logs, Access Transparency logs, network logs, and several platform logs. This tool maps each log type to the following areas: MITRE ATT&CK threat tactics and techniques that can be monitored with that log. CIS Google Cloud Computing Platform compliance violations that can be detected in that log. 
Event Threat Detection rules that rely on that log. The log scoping tool also generates a log filter which appears immediately after the table. As you identify the logs that you need, select those logs in the tool to automatically update that log filter. The following short procedures explain how to use the log scoping tool: To select or remove a log in the log scoping tool, click the toggle next to the name of the log. To select or remove all the logs, click the toggle next to the Log type heading. To see which MITRE ATT&CK techniques can be monitored by each log type, click add_circle next to the MITRE ATT&CK tactics and techniques heading. Log scoping tool Record the log filter The log filter that is automatically generated by the log scoping tool contains all of the logs that you have selected in the tool. You can use the filter as is or you can refine the log filter further depending on your requirements. For example, you can include (or exclude) resources only in one or more specific projects. After you have a log filter that meets your logging requirements, you need to save the filter for use when routing the logs. For instance, you can save the filter in a text editor or save it in an environment variable as follows: In the "Auto-generated log filter" section that follows the tool, copy the code for the log filter. Optional: Edit the copied code to refine the filter. In Cloud Shell, create a variable to save the log filter: export LOG_FILTER='LOG_FILTER' Replace LOG_FILTER with the code for the log filter. Enable service-specific platform logs For each of the platform logs that you select in the log scoping tool, those logs must be enabled (typically at the resource level) on a service-by-service basis. For example, Cloud DNS logs are enabled at the VPC-network level. Likewise, VPC Flow Logs are enabled at the subnet level for all VMs in the subnet, and logs from Firewall Rules Logging are enabled at the individual firewall rule level. Each platform log has its own instructions on how to enable logging. However, you can use the log scoping tool to quickly open the relevant instructions for each platform log. To learn how to enable logging for a specific platform log, do the following: In the log scoping tool, locate the platform log that you want to enable. In the Enabled by default column, click the Enable link that corresponds to that log. The link takes you to detailed instructions on how to enable logging for that service. Enable the Data Access audit logs As you can see in the log scoping tool, the Data Access audit logs from Cloud Audit Logs provide broad threat detection coverage. However, their volume can be quite large. Enabling these Data Access audit logs might therefore result in additional charges related to ingesting, storing, exporting, and processing these logs. This section both explains how to enable these logs and presents some best practices to help you with making the tradeoff between value and cost. Note: Data Access audit logs might contain personally identifiable information (PII) like caller identities and IP addresses. You must apply the appropriate access control and retention settings available in your analytics tool to secure your log data, retain that data only as long as needed, and then dispose of that data securely. Data Access audit logs—except for BigQuery—are disabled by default. 
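If you enable these logs with the Google Cloud CLI, as described in the paragraphs that follow, the general pattern is to download the current IAM policy of the resource, add an auditConfigs section, and then apply the edited policy. The following is a minimal sketch only; the project ID, the file path, and the choice of log types are placeholders that you should adapt to the prioritization guidance later in this section:
# Download the current IAM policy of the project (hypothetical project ID).
gcloud projects get-iam-policy example-project-id --format=json > /tmp/policy.json
# Edit /tmp/policy.json and add an auditConfigs section. For example, to record
# all three Data Access audit log types for all services:
#   "auditConfigs": [
#     {
#       "service": "allServices",
#       "auditLogConfigs": [
#         { "logType": "ADMIN_READ" },
#         { "logType": "DATA_READ" },
#         { "logType": "DATA_WRITE" }
#       ]
#     }
#   ]
# To limit the scope, replace "allServices" with a specific service name such as
# "storage.googleapis.com", or list only the log types that you need.
# Apply the edited policy to enable the selected Data Access audit logs.
gcloud projects set-iam-policy example-project-id /tmp/policy.json
The same pattern applies at the folder or organization level through the corresponding gcloud resource-manager folders and gcloud organizations policy commands, which is one way to implement the folder-level or organization-level best practice described later in this section.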
To configure Data Access audit logs for Google Cloud services other than BigQuery, you must explicitly enable them either by using the Google Cloud console or by using the Google Cloud CLI to edit Identity and Access Management (IAM) policy objects. When you enable Data Access audit logs, you can also configure which types of operations are recorded. There are three Data Access audit log types: ADMIN_READ: Records operations that read metadata or configuration information. DATA_READ: Records operations that read user-provided data. DATA_WRITE: Records operations that write user-provided data. Note that you can't configure the recording of ADMIN_WRITE operations, which are operations that write metadata or configuration information. ADMIN_WRITE operations are included in Admin Activity audit logs from Cloud Audit Logs and therefore can't be disabled. Best Practice: Enable Data Access audit logs at the folder or organization level to ensure compliance across all child projects of that folder or organization. When you enable the audit logs at the folder or organization level, the audit policy applies to all existing and new projects in that folder or organization. That audit policy cannot be disabled at the project level. Manage the volume of Data Access audit logs When enabling Data Access audit logs, the goal is to maximize their value in terms of security visibility while also limiting their cost and management overhead. To help you achieve that goal, we recommend that you do the following to filter out low-value, high-volume logs: Prioritize relevant services such as services that host sensitive workloads, keys, and data. For specific examples of services that you might want to prioritize over others, see Example Data Access audit log configuration. Prioritize relevant projects such as projects that host production workloads as opposed to projects that host developer and staging environments. To filter out all logs from a particular project, add the following expression to your log filter for your sink. Replace PROJECT_ID with the ID of the project from which you want to filter out all logs: Project Log filter expression Exclude all logs from a given project NOT logName =~ "^projects/PROJECT_ID" Prioritize a subset of data access operations such as ADMIN_READ, DATA_READ, or DATA_WRITE for a minimal set of recorded operations. For example, some services like Cloud DNS write all three types of operations, but you can enable logging for only ADMIN_READ operations. After you have configured one or more of these three types of data access operations, you might want to exclude specific operations that are particularly high volume. You can exclude these high-volume operations by modifying the sink's log filter. For example, suppose that you decide to enable full Data Access audit logging, including DATA_READ operations on some critical storage services. To exclude specific high-traffic data read operations in this situation, you can add the following recommended log filter expressions to your sink's log filter: Service Log filter expression Exclude high-volume logs from Cloud Storage NOT (resource.type="gcs_bucket" AND (protoPayload.methodName="storage.buckets.get" OR protoPayload.methodName="storage.buckets.list")) Exclude high-volume logs from Cloud SQL NOT (resource.type="cloudsql_database" AND protoPayload.request.cmd="select") Prioritize relevant resources such as resources that host your most sensitive workloads and data.
You can classify your resources based on the value of the data that they process, and their security risk such as whether they are externally accessible or not. Although Data Access audit logs are enabled per service, you can filter out specific resources or resource types through the log filter. Exclude specific principals from having their data accesses recorded. For example, you can exempt your internal testing accounts from having their operations recorded. To learn more, see Set exemptions in Data Access audit logs documentation. Note: In addition to using the sink's log filter to filter out the additional logs as discussed in this section, you might want to exclude these logs from being ingested into Cloud Logging for cost reasons. To prevent these logs from being ingested, you can apply the log filter expressions listed in this section as exclusion filters on the predefined _Default sink that routes logs (including Data Access audit logs) to the _Default log bucket. Exclusion filters have the opposite effect of a log filter, which is an inclusion filter. Thus when configuring these expressions as exclusion filters, you need to remove the preceding NOT Boolean operator from the filter expressions that are shown in this section. Example Data Access audit log configuration The following table provides a baseline Data Access audit log configuration that you can use for Google Cloud projects to limit log volumes while gaining valuable security visibility: Tier Services Data Access audit log types MITRE ATT&CK tactics Authentication & authorization services IAM Identity-Aware Proxy (IAP)1 Cloud KMS Secret Manager Resource Manager ADMIN_READDATA_READ DiscoveryCredential AccessPrivilege Escalation Storage services BigQuery (enabled by default) Cloud Storage1, 2 DATA_READDATA_WRITE CollectionExfiltration Infrastructure services Compute Engine Organization Policy ADMIN_READ Discovery 1 Enabling Data Access audit logs for IAP or Cloud Storage can generate large log volumes when there is high traffic to IAP-protected web resources or to Cloud Storage objects. 2 Enabling Data Access audit logs for Cloud Storage might break the use of authenticated browser downloads for non-public objects. For more details and suggested workarounds to this issue, see the Cloud Storage troubleshooting guide. In the example configuration, notice how services are grouped in tiers of sensitivity based on their underlying data, metadata, or configuration. These tiers demonstrate the following recommended granularity of Data Access audit logging: Authentication & authorization services: For this tier of services, we recommend auditing all data access operations. This level of auditing helps you monitor access to your sensitive keys, secrets, and IAM policies. Monitoring this access might help you detect MITRE ATT&CK tactics like Discovery, Credential Access, and Privilege Escalation. Storage services: For this tier of services, we recommend auditing data access operations that involve user-provided data. This level of auditing helps you monitor access to your valuable and sensitive data. Monitoring this access might help you detect MITRE ATT&CK tactics like Collection and Exfiltration against your data. Infrastructure services: For this tier of services, we recommend auditing data access operations that involve metadata or configuration information. This level of auditing helps you monitor for scanning of infrastructure configuration. 
Monitoring this access might help you detect MITRE ATT&CK tactics like Discovery against your workloads. Route logs After the logs are identified and enabled, the next step is to route the logs to a single destination. The routing destination, path and complexity vary depending on the analytics tools that you use, as shown in the following diagram. The diagram shows the following routing options: If you use Log Analytics, you need an aggregated sink to aggregate the logs from across your Google Cloud organization into a single Cloud Logging bucket. If you use BigQuery, you need an aggregated sink to aggregate the logs from across your Google Cloud organization into a single BigQuery dataset. If you use Google Security Operations and this predefined subset of logs meets your security analysis needs, you can automatically aggregate these logs into your Google Security Operations account using the built-in Google Security Operations ingest. You can also view this predefined set of logs by looking at the Exportable directly to Google Security Operations column of the log scoping tool. For more information about exporting these predefined logs, see Ingest Google Cloud logs to Google Security Operations. If you use BigQuery or a third-party SIEM or want to export an expanded set of logs into Google Security Operations, the diagram shows that an additional step is needed between enabling the logs and analyzing them. This additional step consists of configuring an aggregated sink that routes the selected logs appropriately. If you're using BigQuery, this sink is all that you need to route the logs to BigQuery. If you're using a third-party SIEM, you need to have the sink aggregate the selected logs in Pub/Sub or Cloud Storage before the logs can be pulled into your analytics tool. Note: If you use Log Analytics (or BigQuery) and have already configured an aggregated sink to store logs in a central Logging bucket (or a central BigQuery dataset), you can skip this section of the guide and instead just update the sink with the log filter from the previous section. In the case of Log Analytics, make sure to upgrade your existing log bucket to use Log Analytics. The routing options to Google Security Operations and a third-party SIEM aren't covered in this guide. However, the following sections provide the detailed steps to route logs to Log Analytics or BigQuery: Set up a single destination Create an aggregated log sink. Grant access to the sink. Configure read access to the destination. Verify that the logs are routed to the destination. Set up a single destination Log Analytics Note: You can skip this step if you use a Cloud Logging bucket that already exists in the Google Cloud project where you want to aggregate the logs. You can use the _Default bucket, but we recommend that you create a separate bucket for this use case. Open the Google Cloud console in the Google Cloud project that you want to aggregate logs into. Go to Google Cloud console In a Cloud Shell terminal, run the following gcloud command to create a log bucket: gcloud logging buckets create BUCKET_NAME \ --location=BUCKET_LOCATION \ --project=PROJECT_ID Replace the following: PROJECT_ID: the ID of the Google Cloud project where the aggregated logs will be stored. BUCKET_NAME: the name of the new Logging bucket. BUCKET_LOCATION: the geographical location of the new Logging bucket. The supported locations are global, us, or eu. To learn more about these storage regions, refer to Supported regions. 
If you don't specify a location, then the global region is used, which means that the logs could be physically located in any of the regions. Note: After you create your bucket, you can't change your bucket's region. Verify that the bucket was created: gcloud logging buckets list --project=PROJECT_ID (Optional) Set the retention period of the logs in the bucket. The following example extends the retention of logs stored in the bucket to 365 days: gcloud logging buckets update BUCKET_NAME \ --location=BUCKET_LOCATION \ --project=PROJECT_ID \ --retention-days=365 Upgrade your new bucket to use Log Analytics by following these steps. BigQuery Open the Google Cloud console in the Google Cloud project that you want to aggregate logs into. Go to Google Cloud console In a Cloud Shell terminal, run the following bq mk command to create a dataset: bq --location=DATASET_LOCATION mk \ --dataset \ --default_partition_expiration=PARTITION_EXPIRATION \ PROJECT_ID:DATASET_ID Replace the following: PROJECT_ID: the ID of the Google Cloud project where the aggregated logs will be stored. DATASET_ID: the ID of the new BigQuery dataset. DATASET_LOCATION: the geographic location of the dataset. After a dataset is created, the location can't be changed. Note: If you choose EU or an EU-based region for the dataset location, your Core BigQuery Customer Data resides in the EU. Core BigQuery Customer Data is defined in the Service Specific Terms. PARTITION_EXPIRATION: the default lifetime (in seconds) for the partitions in the partitioned tables that are created by the log sink. You configure the log sink in the next section. The log sink that you configure uses partitioned tables that are partitioned by day based on the log entry's timestamp. Partitions (including associated log entries) are deleted PARTITION_EXPIRATION seconds after the partition's date. Best Practice: Set the default partition expiration property of the dataset based on your log retention requirements so older logs age out and expire. You can do so during or after you create the dataset. This allows you to retain logs as long as needed, while limiting the total size of the log storage and associated cost. If you have more granular retention requirements based on the log type, you can override this property at the table level after the log sink has started routing logs and has created their corresponding partitioned tables. For example, you might be required to keep Cloud Audit Logs data for three years, but VPC Flow Logs and Firewall Rules Logs need only be retained for 90 days. If you do not set a default partition expiration at the dataset level, and you do not set a partition expiration when the table is created, the partitions never expire Create an aggregated log sink You route your organization logs into your destination by creating an aggregated sink at the organization level. To include all the logs you selected in the log scoping tool, you configure the sink with the log filter generated by the log scoping tool. Note: Routing logs to this new destination doesn't mean that your logs are redirected to it. Instead, your logs are stored twice: once in their parent Google Cloud project and then again in the new destination. To avoid this duplicate storage of your logs, add an exclusion filter to the _Default sink of every child Google Cloud project in your organization. 
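For example, a minimal sketch of adding such an exclusion with the gcloud CLI is shown below. It assumes the --add-exclusion flag of the gcloud logging sinks update command, a placeholder child project ID, and an example exclusion filter; in practice, set the exclusion filter so that it matches the same logs that your aggregated sink routes to the central destination:
# Add an exclusion to the _Default sink of a child project so that logs that are
# routed centrally are not also stored in that project's _Default log bucket.
gcloud logging sinks update _Default \
  --project=child-project-id \
  --add-exclusion=name=exclude-centrally-routed-logs,filter='logName:"cloudaudit.googleapis.com%2Fdata_access"'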
To stop logs from being ingested into the _Default sinks of future Google Cloud projects in your organization, disable the _Default sink in the default settings of your organization. Log Analytics In a Cloud Shell terminal, run the following gcloud command to create an aggregated sink at the organization level: gcloud logging sinks create SINK_NAME \ logging.googleapis.com/projects/PROJECT_ID/locations/BUCKET_LOCATION/buckets/BUCKET_NAME \ --log-filter="LOG_FILTER" \ --organization=ORGANIZATION_ID \ --include-children Replace the following: SINK_NAME: the name of the sink that routes the logs. PROJECT_ID: the ID of the Google Cloud project where the aggregated logs will be stored. BUCKET_LOCATION: the location of the Logging bucket that you created for log storage. BUCKET_NAME: the name of the Logging bucket that you created for log storage. LOG_FILTER: the log filter that you saved from the log scoping tool. ORGANIZATION_ID: the resource ID for your organization. The --include-children flag is important so that logs from all the Google Cloud projects within your organization are also included. For more information, see Collate and route organization-level logs to supported destinations. Verify the sink was created: gcloud logging sinks list --organization=ORGANIZATION_ID Get the name of the service account associated with the sink that you just created: gcloud logging sinks describe SINK_NAME --organization=ORGANIZATION_ID The output looks similar to the following: writerIdentity: serviceAccount:p1234567890-12345@logging-o1234567890.iam.gserviceaccount.com` Copy the entire string for writerIdentity starting with serviceAccount:. This identifier is the sink's service account. Until you grant this service account write access to the log bucket, log routing from this sink will fail. You grant write access to the sink's writer identity in the next section. BigQuery In a Cloud Shell terminal, run the following gcloud command to create an aggregated sink at the organization level: gcloud logging sinks create SINK_NAME \ bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET_ID \ --log-filter="LOG_FILTER" \ --organization=ORGANIZATION_ID \ --use-partitioned-tables \ --include-children Replace the following: SINK_NAME: the name of the sink that routes the logs. PROJECT_ID: the ID for the Google Cloud project you want to aggregate the logs into. DATASET_ID: the ID of the BigQuery dataset you created. LOG_FILTER: the log filter that you saved from the log scoping tool. ORGANIZATION_ID: the resource ID for your organization. The --include-children flag is important so that logs from all the Google Cloud projects within your organization are also included. For more information, see Collate and route organization-level logs to supported destinations. The --use-partitioned-tables flag is important so that data is partitioned by day based on the log entry's timestamp field. This simplifies querying of the data and helps reduce query costs by reducing the amount of data scanned by queries. Another benefit of partitioned tables is that you can set a default partition expiration at the dataset level to meet your log retention requirements. You have already set a default partition expiration when you created the dataset destination in the previous section. You might also choose to set a partition expiration at the individual table level, providing you with fine-grained data retention controls based on log type. 
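For example, after the sink has created the corresponding partitioned tables, the following minimal sketch overrides the expiration for a single table to 90 days (expressed in seconds) while leaving the dataset-level default in place for all other tables. The sketch assumes the bq CLI and the table name that Cloud Logging generates for VPC Flow Logs:
# Override the partition expiration for one log table (90 days = 7776000 seconds).
bq update --time_partitioning_expiration 7776000 \
  PROJECT_ID:DATASET_ID.compute_googleapis_com_vpc_flows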
Verify the sink was created: gcloud logging sinks list --organization=ORGANIZATION_ID Get the name of the service account associated with the sink that you just created: gcloud logging sinks describe SINK_NAME --organization=ORGANIZATION_ID The output looks similar to the following: writerIdentity: serviceAccount:p1234567890-12345@logging-o1234567890.iam.gserviceaccount.com` Copy the entire string for writerIdentity starting with serviceAccount:. This identifier is the sink's service account. Until you grant this service account write access to the BigQuery dataset, log routing from this sink will fail. You grant write access to the sink's writer identity in the next section. Grant access to the sink After creating the log sink, you must grant your sink access to write to its destination, be it the Logging bucket or the BigQuery dataset. Note: To route logs to a resource protected by a service perimeter, you must also add the service account for that sink to an access level and then assign it to the destination service perimeter. This isn't necessary for non-aggregated sinks. For details, see VPC Service Controls: Cloud Logging. Log Analytics To add the permissions to the sink's service account, follow these steps: In the Google Cloud console, go to the IAM page: Go to the IAM page Make sure that you've selected the destination Google Cloud project that contains the Logging bucket you created for central log storage. Click person_add Grant access. In the New principals field, enter the sink's service account without the serviceAccount: prefix. Recall that this identity comes from the writerIdentity field you retrieved in the previous section after you created the sink. In the Select a role drop-down menu, select Logs Bucket Writer. Click Add IAM condition to restrict the service account's access to only the log bucket you created. Enter a Title and Description for the condition. In the Condition type drop-down menu, select Resource > Name. In the Operator drop-down menu, select Ends with. In the Value field, enter the bucket's location and name as follows: locations/BUCKET_LOCATION/buckets/BUCKET_NAME Click Save to add the condition. Click Save to set the permissions. BigQuery To add the permissions to the sink's service account, follow these steps: In the Google Cloud console, go to BigQuery: Go to BigQuery Open the BigQuery dataset that you created for central log storage. In the Dataset info tab, click the Sharingkeyboard_arrow_down drop-down menu, and then click Permissions. In the Dataset Permissions side panel, click Add Principal. In the New principals field, enter the sink's service account without the serviceAccount: prefix. Recall that this identity comes from the writerIdentity field you retrieved in the previous section after you created the sink. In the Role drop-down menu, select BigQuery Data Editor. Click Save. After you grant access to the sink, log entries begin to populate the sink destination: the Logging bucket or the BigQuery dataset. Configure read access to the destination Now that your log sink routes logs from your entire organization into one single destination, you can search across all of these logs. Use IAM permissions to manage permissions and grant access as needed. Log Analytics To grant access to view and query the logs in your new log bucket, follow these steps. In the Google Cloud console, go to the IAM page: Go to the IAM page Make sure you've selected the Google Cloud project you're using to aggregate the logs. Click person_add Add. 
In the New principal field, add your email account. In the Select a role drop-down menu, select Logs Views Accessor. This role provides the newly added principal with read access to all views for any buckets in the Google Cloud project. To limit a user's access, add a condition that lets the user read only from your new bucket only. Click Add condition. Enter a Title and Description for the condition. In the Condition type drop-down menu, select Resource > Name. In the Operator drop-down menu, select Ends with. In the Value field, enter the bucket's location and name, and the default log view _AllLogs as follows: locations/BUCKET_LOCATION/buckets/BUCKET_NAME/views/_AllLogs Note: Cloud Logging automatically creates the _AllLogs view for every bucket, which shows all the logs in the bucket. For more granular control over which logs can be viewed and queried within that log bucket, you can create and use a custom log view instead of _AllLogs. Click Save to add the condition. Click Save to set the permissions. BigQuery To grant access to view and query the logs in your BigQuery dataset, follow the steps in the Granting access to a dataset section of the BigQuery documentation. Verify that the logs are routed to the destination Log Analytics When you route logs to a log bucket upgraded to Log Analytics, you can view and query all log entries through a single log view with a unified schema for all log types. Follow these steps to verify the logs are correctly routed. In the Google Cloud console, go to Log Analytics page: Go to Log Analytics Make sure you've selected the Google Cloud project you're using to aggregate the logs. Click on Log Views tab. Expand the log views under the log bucket that you have created (that is BUCKET_NAME) if it is not expanded already. Select the default log view _AllLogs. You can now inspect the entire log schema in the right panel, as shown in the following screenshot: Next to _AllLogs, click Query . This populates the Query editor with a SQL sample query to retrieve recently routed log entries. Click Run query to view recently routed log entries. Depending on level of activity in Google Cloud projects in your organization, you might have to wait a few minutes until some logs get generated, and then routed to your log bucket. BigQuery When you route logs to a BigQuery dataset, Cloud Logging creates BigQuery tables to hold the log entries as shown in the following screenshot: The screenshot shows how Cloud Logging names each BigQuery table based on the name of the log to which a log entry belongs. For example, the cloudaudit_googleapis_com_data_access table that is selected in the screenshot contains Data Access audit logs whose log ID is cloudaudit.googleapis.com%2Fdata_access. In addition to being named based on the corresponding log entry, each table is also partitioned based on the timestamps for each log entry. Depending on level of activity in Google Cloud projects in your organization, you might have to wait a few minutes until some logs get generated, and then routed to your BigQuery dataset. Note: Both Admin Activity and Data Access logs are loaded into BigQuery with their protoPayload log entry field renamed to protoPayload_auditlog in BigQuery. For more information about schema conversions done by Cloud Logging before writing to BigQuery, see Fields in exported audit logs. Analyze logs You can run a broad range of queries against your audit and platform logs. 
The following list provides a set of sample security questions that you might want to ask of your own logs. For each question in this list, there are two versions of the corresponding CSA query: one for use with Log Analytics and one for use with BigQuery. Use the query version that matches the sink destination that you previously set up. Log Analytics Before using any of the SQL queries below, replace MY_PROJECT_ID with the ID of the Google Cloud project where you created the log bucket (that is PROJECT_ID), and MY_DATASET_ID with the region and name of that log bucket (that is BUCKET_LOCATION.BUCKET_NAME). Go to Log Analytics BigQuery Before using any of the SQL queries below, replace MY_PROJECT_ID with the ID of the Google Cloud project where you created the BigQuery dataset (that is PROJECT_ID), and MY_DATASET_ID with the name of that dataset, that is DATASET_ID. Go to BigQuery Login and access questions Any suspicious login attempt flagged by Google Workspace? Any excessive login failures from any user identity? Any access attempts violating VPC Service Controls? Any access attempts violating Identity-Aware Proxy access controls? Permission changes questions Any user added to highly-privileged groups? Any permissions granted over a service account? Any service accounts or keys created by non-approved identity? Any user added to (or removed from) sensitive IAM policy? Provisioning activity questions Any changes made to logging settings? Any VPC Flow Logs disabled? Any unusual number of firewall rules modified in the past week? Any VMs deleted in the past week? Workload usage questions Any unusually high API usage by any user identity in the past week? What is the autoscaling usage per day in the past month? Data access questions Which users most frequently accessed data in the past week? Which users accessed the data in the "accounts" table last month? What tables are most frequently accessed and by whom? What are the top 10 queries against BigQuery in the past week? What are the most common actions recorded in the data access log over the past month? Network security questions Any connections from a new IP address to a specific subnetwork? Any connections blocked by Google Cloud Armor? Any high-severity virus or malware detected by Cloud IDS? What are the top Cloud DNS queried domains from your VPC network? Login and access questions These sample queries perform analysis to detect suspicious login attempts or initial access attempts to your Google Cloud environment. Note: Login activity is captured in Cloud Identity logs that are included in Google Workspace Login Audit. To analyze login activity and use some of the queries in this section, you need to enable Google Workspace data sharing with Google Cloud. To learn more about sharing Google Workspace audit logs with Google Cloud, see View and manage audit logs for Google Workspace. Any suspicious login attempt flagged by Google Workspace? By searching Cloud Identity logs that are part of Google Workspace Login Audit, the following query detects suspicious login attempts flagged by Google Workspace. Such login attempts might be from the Google Cloud console, Admin console, or the gcloud CLI. 
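If you prefer to run the BigQuery versions of these CSA queries from Cloud Shell rather than from the Google Cloud console, one possible pattern, which assumes the bq CLI and a local file that contains a query with its placeholders already replaced, is the following sketch; the Log Analytics and BigQuery versions of this first login query follow immediately after it:
# Run a CSA query that is saved locally as query.sql (with MY_PROJECT_ID and
# MY_DATASET_ID already replaced) and print the results.
bq query --use_legacy_sql=false --project_id=PROJECT_ID < query.sql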
Log Analytics SELECT timestamp, proto_payload.audit_log.authentication_info.principal_email, proto_payload.audit_log.request_metadata.caller_ip, proto_payload.audit_log.method_name, parameter FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs`, UNNEST(JSON_QUERY_ARRAY(proto_payload.audit_log.metadata.event[0].parameter)) AS parameter WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY) AND proto_payload.audit_log IS NOT NULL AND proto_payload.audit_log.service_name = "login.googleapis.com" AND proto_payload.audit_log.method_name = "google.login.LoginService.loginSuccess" AND JSON_VALUE(parameter.name) = "is_suspicious" AND JSON_VALUE(parameter.boolValue) = "true" BigQuery SELECT timestamp, protopayload_auditlog.authenticationInfo.principalEmail, protopayload_auditlog.requestMetadata.callerIp, protopayload_auditlog.methodName FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access`, UNNEST(JSON_QUERY_ARRAY(protopayload_auditlog.metadataJson, '$.event[0].parameter')) AS parameter WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY) AND protopayload_auditlog.metadataJson IS NOT NULL AND protopayload_auditlog.serviceName = "login.googleapis.com" AND protopayload_auditlog.methodName = "google.login.LoginService.loginSuccess" AND JSON_VALUE(parameter, '$.name') = "is_suspicious" AND JSON_VALUE(parameter, '$.boolValue') = "true" Any excessive login failures from any user identity? By searching Cloud Identity logs that are part of Google Workspace Login Audit, the following query detects users who have had three or more successive login failures within the last 24 hours. Log Analytics SELECT proto_payload.audit_log.authentication_info.principal_email, MIN(timestamp) AS earliest, MAX(timestamp) AS latest, count(*) AS attempts FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY) AND proto_payload.audit_log.service_name = "login.googleapis.com" AND proto_payload.audit_log.method_name = "google.login.LoginService.loginFailure" GROUP BY 1 HAVING attempts >= 3 BigQuery SELECT protopayload_auditlog.authenticationInfo.principalEmail, MIN(timestamp) AS earliest, MAX(timestamp) AS latest, count(*) AS attempts FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY) AND protopayload_auditlog.serviceName="login.googleapis.com" AND protopayload_auditlog.methodName="google.login.LoginService.loginFailure" GROUP BY 1 HAVING attempts >= 3 Any access attempts violating VPC Service Controls? By analyzing Policy Denied audit logs from Cloud Audit Logs, the following query detects access attempts blocked by VPC Service Controls. Any query results might indicate potential malicious activity like access attempts from unauthorized networks using stolen credentials. 
Log Analytics SELECT timestamp, log_name, proto_payload.audit_log.authentication_info.principal_email, proto_payload.audit_log.request_metadata.caller_ip, proto_payload.audit_log.method_name, proto_payload.audit_log.service_name, JSON_VALUE(proto_payload.audit_log.metadata.violationReason) as violationReason, IF(JSON_VALUE(proto_payload.audit_log.metadata.ingressViolations) IS NULL, 'ingress', 'egress') AS violationType, COALESCE( JSON_VALUE(proto_payload.audit_log.metadata.ingressViolations[0].targetResource), JSON_VALUE(proto_payload.audit_log.metadata.egressViolations[0].targetResource) ) AS targetResource, COALESCE( JSON_VALUE(proto_payload.audit_log.metadata.ingressViolations[0].servicePerimeter), JSON_VALUE(proto_payload.audit_log.metadata.egressViolations[0].servicePerimeter) ) AS servicePerimeter FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND proto_payload.audit_log IS NOT NULL AND JSON_VALUE(proto_payload.audit_log.metadata, '$."@type"') = 'type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata' ORDER BY timestamp DESC LIMIT 1000 BigQuery SELECT timestamp, protopayload_auditlog.authenticationInfo.principalEmail, protopayload_auditlog.requestMetadata.callerIp, protopayload_auditlog.methodName, protopayload_auditlog.serviceName, JSON_VALUE(protopayload_auditlog.metadataJson, '$.violationReason') as violationReason, IF(JSON_VALUE(protopayload_auditlog.metadataJson, '$.ingressViolations') IS NULL, 'ingress', 'egress') AS violationType, COALESCE( JSON_VALUE(protopayload_auditlog.metadataJson, '$.ingressViolations[0].targetResource'), JSON_VALUE(protopayload_auditlog.metadataJson, '$.egressViolations[0].targetResource') ) AS targetResource, COALESCE( JSON_VALUE(protopayload_auditlog.metadataJson, '$.ingressViolations[0].servicePerimeter'), JSON_VALUE(protopayload_auditlog.metadataJson, '$.egressViolations[0].servicePerimeter') ) AS servicePerimeter FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_policy` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 400 DAY) AND JSON_VALUE(protopayload_auditlog.metadataJson, '$."@type"') = 'type.googleapis.com/google.cloud.audit.VpcServiceControlAuditMetadata' ORDER BY timestamp DESC LIMIT 1000 Any access attempts violating IAP access controls? By analyzing external Application Load Balancer logs, the following query detects access attempts blocked by IAP. Any query results might indicate an initial access attempt or vulnerability exploit attempt. 
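Note that this query relies on external Application Load Balancer request logs. If logging isn't already enabled on the relevant backend service, you can turn it on as shown in the following minimal sketch, which assumes a global backend service with a placeholder name; the Log Analytics and BigQuery versions of the query itself follow the sketch:
# Enable request logging on the IAP-protected backend service so that the load
# balancer log entries that this query analyzes are generated.
gcloud compute backend-services update example-backend-service \
  --global \
  --enable-logging \
  --logging-sample-rate=1.0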
Log Analytics SELECT timestamp, http_request.remote_ip, http_request.request_method, http_request.status, JSON_VALUE(resource.labels.backend_service_name) AS backend_service_name, http_request.request_url FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND resource.type="http_load_balancer" AND JSON_VALUE(json_payload.statusDetails) = "handled_by_identity_aware_proxy" ORDER BY timestamp DESC BigQuery SELECT timestamp, httpRequest.remoteIp, httpRequest.requestMethod, httpRequest.status, resource.labels.backend_service_name, httpRequest.requestUrl, FROM `[MY_PROJECT_ID].[MY_DATASET_ID].requests` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND resource.type="http_load_balancer" AND jsonpayload_type_loadbalancerlogentry.statusdetails = "handled_by_identity_aware_proxy" ORDER BY timestamp DESC Permission changes questions These sample queries perform analysis over administrator activity that changes permissions, including changes in IAM policies, groups and group memberships, service accounts, and any associated keys. Such permission changes might provide a high level of access to sensitive data or environments. Note: Group changes are captured in Google Workspace Admin Audit. To analyze group changes activity and use some of the queries in this section, you need to enable Google Workspace data sharing with Google Cloud. To learn more about sharing Google Workspace audit logs with Google Cloud, see View and manage audit logs for Google Workspace. Any user added to highly-privileged groups? By analyzing Google Workspace Admin Audit audit logs, the following query detects users who have been added to any of the highly-privileged groups listed in the query. You use the regular expression in the query to define which groups (such as admin@example.com or prod@example.com) to monitor. Any query results might indicate a malicious or accidental privilege escalation. 
Log Analytics SELECT timestamp, proto_payload.audit_log.authentication_info.principal_email, proto_payload.audit_log.method_name, proto_payload.audit_log.resource_name, (SELECT JSON_VALUE(x.value) FROM UNNEST(JSON_QUERY_ARRAY(proto_payload.audit_log.metadata.event[0].parameter)) AS x WHERE JSON_VALUE(x.name) = "USER_EMAIL") AS user_email, (SELECT JSON_VALUE(x.value) FROM UNNEST(JSON_QUERY_ARRAY(proto_payload.audit_log.metadata.event[0].parameter)) AS x WHERE JSON_VALUE(x.name) = "GROUP_EMAIL") AS group_email, FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 120 DAY) AND proto_payload.audit_log.service_name = "admin.googleapis.com" AND proto_payload.audit_log.method_name = "google.admin.AdminService.addGroupMember" AND EXISTS( SELECT * FROM UNNEST(JSON_QUERY_ARRAY(proto_payload.audit_log.metadata.event[0].parameter)) AS x WHERE JSON_VALUE(x.name) = "GROUP_EMAIL" AND REGEXP_CONTAINS(JSON_VALUE(x.value), r'(admin|prod).*') -- Update regexp with other sensitive groups if applicable ) BigQuery SELECT timestamp, protopayload_auditlog.authenticationInfo.principalEmail, protopayload_auditlog.methodName, protopayload_auditlog.resourceName, (SELECT JSON_VALUE(x, '$.value') FROM UNNEST(JSON_QUERY_ARRAY(protopayload_auditlog.metadataJson, '$.event[0].parameter')) AS x WHERE JSON_VALUE(x, '$.name') = "USER_EMAIL") AS userEmail, (SELECT JSON_VALUE(x, '$.value') FROM UNNEST(JSON_QUERY_ARRAY(protopayload_auditlog.metadataJson, '$.event[0].parameter')) AS x WHERE JSON_VALUE(x, '$.name') = "GROUP_EMAIL") AS groupEmail, FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 120 DAY) AND protopayload_auditlog.serviceName = "admin.googleapis.com" AND protopayload_auditlog.methodName = "google.admin.AdminService.addGroupMember" AND EXISTS( SELECT * FROM UNNEST(JSON_QUERY_ARRAY(protopayload_auditlog.metadataJson, '$.event[0].parameter')) AS x WHERE JSON_VALUE(x, '$.name') = 'GROUP_EMAIL' AND REGEXP_CONTAINS(JSON_VALUE(x, '$.value'), r'(admin|prod).*') -- Update regexp with other sensitive groups if applicable ) Any permissions granted over a service account? By analyzing Admin Activity audit logs from Cloud Audit Logs, the following query detects any permissions that have been granted to any principal over a service account. Examples of permissions that might be granted are the ability to impersonate that service account or create service account keys. Any query results might indicate an instance of privilege escalation or a risk of credentials leakage. 
Log Analytics SELECT timestamp, proto_payload.audit_log.authentication_info.principal_email as grantor, JSON_VALUE(bindingDelta.member) as grantee, JSON_VALUE(bindingDelta.role) as role, proto_payload.audit_log.resource_name, proto_payload.audit_log.method_name FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs`, UNNEST(JSON_QUERY_ARRAY(proto_payload.audit_log.service_data.policyDelta.bindingDeltas)) AS bindingDelta WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 400 DAY) -- AND log_id = "cloudaudit.googleapis.com/activity" AND ( (resource.type = "service_account" AND proto_payload.audit_log.method_name LIKE "google.iam.admin.%.SetIAMPolicy") OR (resource.type IN ("project", "folder", "organization") AND proto_payload.audit_log.method_name = "SetIamPolicy" AND JSON_VALUE(bindingDelta.role) LIKE "roles/iam.serviceAccount%") ) AND JSON_VALUE(bindingDelta.action) = "ADD" -- Principal (grantee) exclusions AND JSON_VALUE(bindingDelta.member) NOT LIKE "%@example.com" ORDER BY timestamp DESC BigQuery SELECT timestamp, protopayload_auditlog.authenticationInfo.principalEmail as grantor, bindingDelta.member as grantee, bindingDelta.role, protopayload_auditlog.resourceName, protopayload_auditlog.methodName, FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity`, UNNEST(protopayload_auditlog.servicedata_v1_iam.policyDelta.bindingDeltas) AS bindingDelta WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 180 DAY) AND ( (resource.type = "service_account" AND protopayload_auditlog.methodName LIKE "google.iam.admin.%.SetIAMPolicy") OR (resource.type IN ("project", "folder", "organization") AND protopayload_auditlog.methodName = "SetIamPolicy" AND bindingDelta.role LIKE "roles/iam.serviceAccount%") ) AND bindingDelta.action = 'ADD' -- Principal (grantee) exclusions AND bindingDelta.member NOT LIKE "%@example.com" ORDER BY timestamp DESC Any service accounts or keys created by non-approved identity? By analyzing Admin Activity audit logs, the following query detects any service accounts or keys that have been manually created by a user. For example, you might follow a best practice to only allow service accounts to be created by an approved service account as part of an automated workflow. Therefore, any service account creation outside of that workflow is considered non-compliant and possibly malicious. 
Log Analytics SELECT timestamp, proto_payload.audit_log.authentication_info.principal_email, proto_payload.audit_log.method_name, proto_payload.audit_log.resource_name, JSON_VALUE(proto_payload.audit_log.response.email) as service_account_email FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND resource.type="service_account" AND proto_payload.audit_log.method_name LIKE "%CreateServiceAccount%" AND proto_payload.audit_log.authentication_info.principal_email NOT LIKE "%.gserviceaccount.com" BigQuery SELECT timestamp, protopayload_auditlog.authenticationInfo.principalEmail, protopayload_auditlog.methodName, protopayload_auditlog.resourceName, JSON_VALUE(protopayload_auditlog.responseJson, "$.email") as serviceAccountEmail FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 180 DAY) AND resource.type="service_account" AND protopayload_auditlog.methodName LIKE "%CreateServiceAccount%" AND protopayload_auditlog.authenticationInfo.principalEmail NOT LIKE "%.gserviceaccount.com" Any user added to (or removed from) sensitive IAM policy? By searching Admin Activity audit logs, the following query detects any user or group access change for an IAP-secured resource such as a Compute Engine backend service. The following query searches all IAM policy updates for IAP resources involving the IAM role roles/iap.httpsResourceAccessor. This role provides permissions to access the HTTPS resource or the backend service. Any query results might indicate attempts to bypass the defenses of a backend service that might be exposed to the internet. Log Analytics SELECT timestamp, proto_payload.audit_log.authentication_info.principal_email, resource.type, proto_payload.audit_log.resource_name, JSON_VALUE(binding, '$.role') as role, JSON_VALUE_ARRAY(binding, '$.members') as members FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs`, UNNEST(JSON_QUERY_ARRAY(proto_payload.audit_log.response, '$.bindings')) AS binding WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) -- AND log_id = "cloudaudit.googleapis.com/activity" AND proto_payload.audit_log.service_name = "iap.googleapis.com" AND proto_payload.audit_log.method_name LIKE "%.IdentityAwareProxyAdminService.SetIamPolicy" AND JSON_VALUE(binding, '$.role') = "roles/iap.httpsResourceAccessor" ORDER BY timestamp DESC BigQuery SELECT timestamp, protopayload_auditlog.authenticationInfo.principalEmail, resource.type, protopayload_auditlog.resourceName, JSON_VALUE(binding, '$.role') as role, JSON_VALUE_ARRAY(binding, '$.members') as members FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity`, UNNEST(JSON_QUERY_ARRAY(protopayload_auditlog.responseJson, '$.bindings')) AS binding WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 400 DAY) AND protopayload_auditlog.serviceName = "iap.googleapis.com" AND protopayload_auditlog.methodName LIKE "%.IdentityAwareProxyAdminService.SetIamPolicy" AND JSON_VALUE(binding, '$.role') = "roles/iap.httpsResourceAccessor" ORDER BY timestamp DESC Provisioning activity questions These sample queries perform analysis to detect suspicious or anomalous admin activity like provisioning and configuring resources. Any changes made to logging settings? By searching Admin Activity audit logs, the following query detects any change made to logging settings. 
Monitoring logging settings helps you detect accidental or malicious disabling of audit logs and similar defense evasion techniques. Log Analytics SELECT receive_timestamp, timestamp AS eventTimestamp, proto_payload.audit_log.request_metadata.caller_ip, proto_payload.audit_log.authentication_info.principal_email, proto_payload.audit_log.resource_name, proto_payload.audit_log.method_name FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE proto_payload.audit_log.service_name = "logging.googleapis.com" AND log_id = "cloudaudit.googleapis.com/activity" BigQuery SELECT receiveTimestamp, timestamp AS eventTimestamp, protopayload_auditlog.requestMetadata.callerIp, protopayload_auditlog.authenticationInfo.principalEmail, protopayload_auditlog.resourceName, protopayload_auditlog.methodName FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity` WHERE protopayload_auditlog.serviceName = "logging.googleapis.com" Any VPC Flow Logs actively disabled? By searching Admin Activity audit logs, the following query detects any subnet whose VPC Flow Logs were actively disabled . Monitoring VPC Flow Logs settings helps you detect accidental or malicious disabling of VPC Flow Logs and similar defense evasion techniques. Log Analytics SELECT receive_timestamp, timestamp AS eventTimestamp, proto_payload.audit_log.request_metadata.caller_ip, proto_payload.audit_log.authentication_info.principal_email, proto_payload.audit_log.resource_name, proto_payload.audit_log.method_name FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE proto_payload.audit_log.method_name = "v1.compute.subnetworks.patch" AND ( JSON_VALUE(proto_payload.audit_log.request, "$.logConfig.enable") = "false" OR JSON_VALUE(proto_payload.audit_log.request, "$.enableFlowLogs") = "false" ) BigQuery SELECT receiveTimestamp, timestamp AS eventTimestamp, protopayload_auditlog.requestMetadata.callerIp, protopayload_auditlog.authenticationInfo.principalEmail, protopayload_auditlog.resourceName, protopayload_auditlog.methodName FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity` WHERE protopayload_auditlog.methodName = "v1.compute.subnetworks.patch" AND JSON_VALUE(protopayload_auditlog.requestJson, "$.logConfig.enable") = "false" Any unusually high number of firewall rules modified in the past week? By searching Admin Activity audit logs, the following query detects any unusually high number of firewall rules changes on any given day in the past week. To determine whether there is an outlier, the query performs statistical analysis over the daily counts of firewall rules changes. Averages and standard deviations are computed for each day by looking back at the preceding daily counts with a lookback window of 90 days. An outlier is considered when the daily count is more than two standard deviations above the mean. The query, including the standard deviation factor and the lookback windows, can all be configured to fit your cloud provisioning activity profile and to minimize false positives. 
Log Analytics SELECT * FROM ( SELECT *, AVG(counter) OVER ( ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS avg, STDDEV(counter) OVER ( ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS stddev, COUNT(*) OVER ( RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS numSamples FROM ( SELECT EXTRACT(DATE FROM timestamp) AS day, ARRAY_AGG(DISTINCT proto_payload.audit_log.method_name IGNORE NULLS) AS actions, ARRAY_AGG(DISTINCT proto_payload.audit_log.authentication_info.principal_email IGNORE NULLS) AS actors, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY) AND proto_payload.audit_log.method_name LIKE "v1.compute.firewalls.%" AND proto_payload.audit_log.method_name NOT IN ("v1.compute.firewalls.list", "v1.compute.firewalls.get") GROUP BY day ) ) WHERE counter > avg + 2 * stddev AND day >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) ORDER BY counter DESC BigQuery SELECT *, AVG(counter) OVER ( ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS avg, STDDEV(counter) OVER ( ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS stddev, COUNT(*) OVER ( RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS numSamples FROM ( SELECT EXTRACT(DATE FROM timestamp) AS day, ARRAY_AGG(DISTINCT protopayload_auditlog.methodName IGNORE NULLS) AS actions, ARRAY_AGG(DISTINCT protopayload_auditlog.authenticationInfo.principalEmail IGNORE NULLS) AS actors, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY) AND protopayload_auditlog.methodName LIKE "v1.compute.firewalls.%" AND protopayload_auditlog.methodName NOT IN ("v1.compute.firewalls.list", "v1.compute.firewalls.get") GROUP BY day ) WHERE TRUE QUALIFY counter > avg + 2 * stddev AND day >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) ORDER BY counter DESC Any VMs deleted in the past week? By searching Admin Activity audit logs, the following query lists any Compute Engine instances deleted in the past week. This query can help you audit resource deletions and detect potential malicious activity. Log Analytics SELECT timestamp, JSON_VALUE(resource.labels.instance_id) AS instance_id, proto_payload.audit_log.authentication_info.principal_email, proto_payload.audit_log.resource_name, proto_payload.audit_log.method_name FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE resource.type = "gce_instance" AND proto_payload.audit_log.method_name = "v1.compute.instances.delete" AND operation.first IS TRUE AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY) ORDER BY timestamp desc, instance_id LIMIT 1000 BigQuery SELECT timestamp, resource.labels.instance_id, protopayload_auditlog.authenticationInfo.principalEmail, protopayload_auditlog.resourceName, protopayload_auditlog.methodName FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity` WHERE resource.type = "gce_instance" AND protopayload_auditlog.methodName = "v1.compute.instances.delete" AND operation.first IS TRUE AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY) ORDER BY timestamp desc, resource.labels.instance_id LIMIT 1000 Workload usage questions These sample queries perform analysis to understand who and what is consuming your cloud workloads and APIs, and help you detect potential malicious behavior internally or externally. 
Any unusually high API usage by any user identity in the past week? By analyzing all Cloud Audit Logs, the following query detects unusually high API usage by any user identity on any given day in the past week. Such unusually high usage might be an indicator of potential API abuse, insider threat, or leaked credentials. To determine whether there is an outlier, this query performs statistical analysis over the daily count of actions per principal. Averages and standard deviations are computed for each day and for each principal by looking back at the preceding daily counts with a lookback window of 60 days. A user's daily count is considered an outlier when it is more than three standard deviations above that user's mean. The query, including the standard deviation factor and the lookback windows, can be configured to fit your cloud provisioning activity profile and to minimize false positives. Log Analytics SELECT * FROM ( SELECT *, AVG(counter) OVER ( PARTITION BY principal_email ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS avg, STDDEV(counter) OVER ( PARTITION BY principal_email ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS stddev, COUNT(*) OVER ( PARTITION BY principal_email RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS numSamples FROM ( SELECT proto_payload.audit_log.authentication_info.principal_email, EXTRACT(DATE FROM timestamp) AS day, ARRAY_AGG(DISTINCT proto_payload.audit_log.method_name IGNORE NULLS) AS actions, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY) AND proto_payload.audit_log.authentication_info.principal_email IS NOT NULL AND proto_payload.audit_log.method_name NOT LIKE "storage.%.get" AND proto_payload.audit_log.method_name NOT LIKE "v1.compute.%.list" AND proto_payload.audit_log.method_name NOT LIKE "beta.compute.%.list" GROUP BY proto_payload.audit_log.authentication_info.principal_email, day ) ) WHERE counter > avg + 3 * stddev AND day >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) ORDER BY counter DESC BigQuery SELECT *, AVG(counter) OVER ( PARTITION BY principalEmail ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS avg, STDDEV(counter) OVER ( PARTITION BY principalEmail ORDER BY day ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS stddev, COUNT(*) OVER ( PARTITION BY principalEmail RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS numSamples FROM ( SELECT protopayload_auditlog.authenticationInfo.principalEmail, EXTRACT(DATE FROM timestamp) AS day, ARRAY_AGG(DISTINCT protopayload_auditlog.methodName IGNORE NULLS) AS actions, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_*` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY) AND protopayload_auditlog.authenticationInfo.principalEmail IS NOT NULL AND protopayload_auditlog.methodName NOT LIKE "storage.%.get" AND protopayload_auditlog.methodName NOT LIKE "v1.compute.%.list" AND protopayload_auditlog.methodName NOT LIKE "beta.compute.%.list" GROUP BY protopayload_auditlog.authenticationInfo.principalEmail, day ) WHERE TRUE QUALIFY counter > avg + 3 * stddev AND day >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) ORDER BY counter DESC What is the autoscaling usage per day in the past month? By analyzing Admin Activity audit logs, the following query reports the autoscaling usage by day for the last month.
This query can be used to identify patterns or anomalies that warrant further security investigation. Log Analytics SELECT TIMESTAMP_TRUNC(timestamp, DAY) AS day, proto_payload.audit_log.method_name, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE resource.type = "gce_instance_group_manager" AND log_id = "cloudaudit.googleapis.com/activity" AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) GROUP BY 1, 2 ORDER BY 1, 2 BigQuery SELECT TIMESTAMP_TRUNC(timestamp, DAY) AS day, protopayload_auditlog.methodName AS methodName, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_activity` WHERE resource.type = "gce_instance_group_manager" AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) GROUP BY 1, 2 ORDER BY 1, 2 Data access questions These sample queries perform analysis to understand who is accessing or modifying data in Google Cloud. Which users most frequently accessed data in the past week? The following query uses the Data Access audit logs to find the user identities that most frequently accessed BigQuery table data over the past week. Log Analytics SELECT proto_payload.audit_log.authentication_info.principal_email, COUNT(*) AS COUNTER FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE (proto_payload.audit_log.method_name = "google.cloud.bigquery.v2.JobService.InsertJob" OR proto_payload.audit_log.method_name = "google.cloud.bigquery.v2.JobService.Query") AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY) AND log_id = "cloudaudit.googleapis.com/data_access" GROUP BY 1 ORDER BY 2 desc, 1 LIMIT 100 BigQuery SELECT protopayload_auditlog.authenticationInfo.principalEmail, COUNT(*) AS COUNTER FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access` WHERE (protopayload_auditlog.methodName = "google.cloud.bigquery.v2.JobService.InsertJob" OR protopayload_auditlog.methodName = "google.cloud.bigquery.v2.JobService.Query") AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY) GROUP BY 1 ORDER BY 2 desc, 1 LIMIT 100 Which users accessed the data in the "accounts" table last month? The following query uses the Data Access audit logs to find the user identities that most frequently queried a given accounts table over the past month. Besides the MY_DATASET_ID and MY_PROJECT_ID placeholders for your BigQuery export destination, the following query uses the DATASET_ID and PROJECT_ID placeholders. You need to replace the DATASET_ID and PROJECT_ID placeholders in order to specify the target table whose access is being analyzed, such as the accounts table in this example.
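The Log Analytics and BigQuery queries that follow match a single table resource. If you want to broaden the analysis to every table in one dataset, a sketch such as the following might be a starting point. It is an assumed variant, not part of this guide's query set: it replaces the exact resource match with a prefix match and reports per-table counts.

-- Assumed variant for illustration only: count reads per user and per table
-- across an entire dataset instead of the single "accounts" table.
SELECT
  protopayload_auditlog.authenticationInfo.principalEmail,
  authorizationInfo.resource AS table_resource,
  COUNT(*) AS counter
FROM
  `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access`,
  UNNEST(protopayload_auditlog.authorizationInfo) authorizationInfo
WHERE
  (protopayload_auditlog.methodName = "google.cloud.bigquery.v2.JobService.InsertJob"
   OR protopayload_auditlog.methodName = "google.cloud.bigquery.v2.JobService.Query")
  AND authorizationInfo.permission = "bigquery.tables.getData"
  -- Prefix match: any table in the target dataset.
  AND authorizationInfo.resource LIKE "projects/[PROJECT_ID]/datasets/[DATASET_ID]/tables/%"
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY 1, 2
ORDER BY counter DESC, 1
LIMIT 100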
Log Analytics SELECT proto_payload.audit_log.authentication_info.principal_email, COUNT(*) AS COUNTER FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs`, UNNEST(proto_payload.audit_log.authorization_info) authorization_info WHERE (proto_payload.audit_log.method_name = "google.cloud.bigquery.v2.JobService.InsertJob" OR proto_payload.audit_log.method_name = "google.cloud.bigquery.v2.JobService.Query") AND authorization_info.permission = "bigquery.tables.getData" AND authorization_info.resource = "projects/[PROJECT_ID]/datasets/[DATASET_ID]/tables/accounts" AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) GROUP BY 1 ORDER BY 2 desc, 1 LIMIT 100 BigQuery SELECT protopayload_auditlog.authenticationInfo.principalEmail, COUNT(*) AS COUNTER FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access`, UNNEST(protopayload_auditlog.authorizationInfo) authorizationInfo WHERE (protopayload_auditlog.methodName = "google.cloud.bigquery.v2.JobService.InsertJob" OR protopayload_auditlog.methodName = "google.cloud.bigquery.v2.JobService.Query") AND authorizationInfo.permission = "bigquery.tables.getData" AND authorizationInfo.resource = "projects/[PROJECT_ID]/datasets/[DATASET_ID]/tables/accounts" AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) GROUP BY 1 ORDER BY 2 desc, 1 LIMIT 100 What tables are most frequently accessed and by whom? The following query uses the Data Access audit logs to find the BigQuery tables whose data was most frequently read and modified over the past month. It displays the associated user identity along with a breakdown of the total number of times data was read versus modified. Log Analytics SELECT proto_payload.audit_log.resource_name, proto_payload.audit_log.authentication_info.principal_email, COUNTIF(JSON_VALUE(proto_payload.audit_log.metadata, "$.tableDataRead") IS NOT NULL) AS dataReadEvents, COUNTIF(JSON_VALUE(proto_payload.audit_log.metadata, "$.tableDataChange") IS NOT NULL) AS dataChangeEvents, COUNT(*) AS totalEvents FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE STARTS_WITH(resource.type, 'bigquery') IS TRUE AND (JSON_VALUE(proto_payload.audit_log.metadata, "$.tableDataRead") IS NOT NULL OR JSON_VALUE(proto_payload.audit_log.metadata, "$.tableDataChange") IS NOT NULL) AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) GROUP BY 1, 2 ORDER BY 5 DESC, 1, 2 LIMIT 1000 BigQuery SELECT protopayload_auditlog.resourceName, protopayload_auditlog.authenticationInfo.principalEmail, COUNTIF(JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.tableDataRead") IS NOT NULL) AS dataReadEvents, COUNTIF(JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.tableDataChange") IS NOT NULL) AS dataChangeEvents, COUNT(*) AS totalEvents FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access` WHERE STARTS_WITH(resource.type, 'bigquery') IS TRUE AND (JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.tableDataRead") IS NOT NULL OR JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.tableDataChange") IS NOT NULL) AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) GROUP BY 1, 2 ORDER BY 5 DESC, 1, 2 LIMIT 1000 What are the top 10 queries against BigQuery in the past week? The following query uses the Data Access audit logs to find the most common queries over the past week. It also lists the corresponding users and the referenced tables.
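In addition to the run counts returned by the Log Analytics and BigQuery queries that follow, you might want to rank recurring queries by how much data they bill. The following sketch is an assumed variant for illustration: it relies on the BigQueryAuditMetadata field totalBilledBytes being populated for completed query jobs in your Data Access logs, which you should verify in your own export before depending on the numbers.

-- Assumed variant for illustration only: rank recurring queries by total billed bytes.
SELECT
  JSON_EXTRACT_SCALAR(protopayload_auditlog.metadataJson,
    "$.jobChange.job.jobConfig.queryConfig.query") AS query,
  STRING_AGG(DISTINCT protopayload_auditlog.authenticationInfo.principalEmail, ',') AS users,
  COUNT(*) AS run_count,
  -- Billed bytes are recorded as strings in the JSON metadata, so cast before summing.
  SUM(CAST(JSON_EXTRACT_SCALAR(protopayload_auditlog.metadataJson,
    "$.jobChange.job.jobStats.queryStats.totalBilledBytes") AS INT64)) AS total_billed_bytes
FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access`
WHERE (resource.type = 'bigquery_project' OR resource.type = 'bigquery_dataset')
  AND operation.last IS TRUE
  AND JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.jobChange") IS NOT NULL
  AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY query
ORDER BY total_billed_bytes DESC
LIMIT 10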
Log Analytics SELECT COALESCE( JSON_VALUE(proto_payload.audit_log.metadata, "$.jobChange.job.jobConfig.queryConfig.query"), JSON_VALUE(proto_payload.audit_log.metadata, "$.jobInsertion.job.jobConfig.queryConfig.query")) as query, STRING_AGG(DISTINCT proto_payload.audit_log.authentication_info.principal_email, ',') as users, ANY_VALUE(COALESCE( JSON_EXTRACT_ARRAY(proto_payload.audit_log.metadata, "$.jobChange.job.jobStats.queryStats.referencedTables"), JSON_EXTRACT_ARRAY(proto_payload.audit_log.metadata, "$.jobInsertion.job.jobStats.queryStats.referencedTables"))) as tables, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE (resource.type = 'bigquery_project' OR resource.type = 'bigquery_dataset') AND operation.last IS TRUE AND (JSON_VALUE(proto_payload.audit_log.metadata, "$.jobChange") IS NOT NULL OR JSON_VALUE(proto_payload.audit_log.metadata, "$.jobInsertion") IS NOT NULL) AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY) GROUP BY query ORDER BY counter DESC LIMIT 10 BigQuery SELECT COALESCE( JSON_EXTRACT_SCALAR(protopayload_auditlog.metadataJson, "$.jobChange.job.jobConfig.queryConfig.query"), JSON_EXTRACT_SCALAR(protopayload_auditlog.metadataJson, "$.jobInsertion.job.jobConfig.queryConfig.query")) as query, STRING_AGG(DISTINCT protopayload_auditlog.authenticationInfo.principalEmail, ',') as users, ANY_VALUE(COALESCE( JSON_EXTRACT_ARRAY(protopayload_auditlog.metadataJson, "$.jobChange.job.jobStats.queryStats.referencedTables"), JSON_EXTRACT_ARRAY(protopayload_auditlog.metadataJson, "$.jobInsertion.job.jobStats.queryStats.referencedTables"))) as tables, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access` WHERE (resource.type = 'bigquery_project' OR resource.type = 'bigquery_dataset') AND operation.last IS TRUE AND (JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.jobChange") IS NOT NULL OR JSON_EXTRACT(protopayload_auditlog.metadataJson, "$.jobInsertion") IS NOT NULL) AND timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY) GROUP BY query ORDER BY counter DESC LIMIT 10 What are the most common actions recorded in the data access log over the past month? The following query uses all logs from Cloud Audit Logs to find the 100 most frequent actions recorded over the past month. Log Analytics SELECT proto_payload.audit_log.method_name, proto_payload.audit_log.service_name, resource.type, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND log_id="cloudaudit.googleapis.com/data_access" GROUP BY proto_payload.audit_log.method_name, proto_payload.audit_log.service_name, resource.type ORDER BY counter DESC LIMIT 100 BigQuery SELECT protopayload_auditlog.methodName, protopayload_auditlog.serviceName, resource.type, COUNT(*) AS counter FROM `[MY_PROJECT_ID].[MY_DATASET_ID].cloudaudit_googleapis_com_data_access` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) GROUP BY protopayload_auditlog.methodName, protopayload_auditlog.serviceName, resource.type ORDER BY counter DESC LIMIT 100 Network security questions These sample queries perform analysis over your network activity in Google Cloud. Any connections from a new IP address to a specific subnetwork? The following query detects connections from any new source IP address to a given subnet by analyzing VPC Flow Logs. 
In this example, a source IP address is considered new if it was seen for the first time in the last 24 hours over a lookback window of 60 days. You might want to use and tune this query on a subnet that is in-scope for a particular compliance requirement like PCI. Log Analytics SELECT JSON_VALUE(json_payload.connection.src_ip) as src_ip, -- TIMESTAMP supports up to 6 digits of fractional precision, so drop any more digits to avoid parse errors MIN(TIMESTAMP(REGEXP_REPLACE(JSON_VALUE(json_payload.start_time), r'\.(\d{0,6})\d+(Z)?$', '.\\1\\2'))) AS firstInstance, MAX(TIMESTAMP(REGEXP_REPLACE(JSON_VALUE(json_payload.start_time), r'\.(\d{0,6})\d+(Z)?$', '.\\1\\2'))) AS lastInstance, ARRAY_AGG(DISTINCT JSON_VALUE(resource.labels.subnetwork_name)) as subnetNames, ARRAY_AGG(DISTINCT JSON_VALUE(json_payload.dest_instance.vm_name)) as vmNames, COUNT(*) numSamples FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY) AND JSON_VALUE(json_payload.reporter) = 'DEST' AND JSON_VALUE(resource.labels.subnetwork_name) IN ('prod-customer-data') GROUP BY src_ip HAVING firstInstance >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY) ORDER BY lastInstance DESC, numSamples DESC BigQuery SELECT jsonPayload.connection.src_ip as src_ip, -- TIMESTAMP supports up to 6 digits of fractional precision, so drop any more digits to avoid parse errors MIN(TIMESTAMP(REGEXP_REPLACE(jsonPayload.start_time, r'\.(\d{0,6})\d+(Z)?$', '.\\1\\2'))) AS firstInstance, MAX(TIMESTAMP(REGEXP_REPLACE(jsonPayload.start_time, r'\.(\d{0,6})\d+(Z)?$', '.\\1\\2'))) AS lastInstance, ARRAY_AGG(DISTINCT resource.labels.subnetwork_name) as subnetNames, ARRAY_AGG(DISTINCT jsonPayload.dest_instance.vm_name) as vmNames, COUNT(*) numSamples FROM `[MY_PROJECT_ID].[MY_DATASET_ID].compute_googleapis_com_vpc_flows` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY) AND jsonPayload.reporter = 'DEST' AND resource.labels.subnetwork_name IN ('prod-customer-data') GROUP BY src_ip HAVING firstInstance >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY) ORDER BY lastInstance DESC, numSamples DESC Any connections blocked by Google Cloud Armor? The following query helps detect potential exploit attempts by analyzing external Application Load Balancer logs to find any connection blocked by the security policy configured in Google Cloud Armor. This query assumes that you have a Google Cloud Armor security policy configured on your external Application Load Balancer. This query also assumes that you have enabled external Application Load Balancer logging as described in the instructions that are provided by the Enable link in the log scoping tool. 
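The Log Analytics and BigQuery queries that follow list each denied request individually. For a per-source summary, a sketch such as the following could aggregate the blocked requests by client IP and security policy. It is an assumed variant for illustration that reuses the same BigQuery requests table and fields shown below.

-- Assumed variant for illustration only: top sources of requests denied by
-- Google Cloud Armor over the past 30 days.
SELECT
  httpRequest.remoteIp AS client_ip,
  jsonpayload_type_loadbalancerlogentry.enforcedsecuritypolicy.name AS security_policy_name,
  resource.labels.backend_service_name,
  COUNT(*) AS blocked_requests
FROM `[MY_PROJECT_ID].[MY_DATASET_ID].requests`
WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  AND resource.type = "http_load_balancer"
  AND jsonpayload_type_loadbalancerlogentry.statusdetails = "denied_by_security_policy"
GROUP BY 1, 2, 3
ORDER BY blocked_requests DESC
LIMIT 20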
Log Analytics SELECT timestamp, http_request.remote_ip, http_request.request_method, http_request.status, JSON_VALUE(json_payload.enforcedSecurityPolicy.name) AS security_policy_name, JSON_VALUE(resource.labels.backend_service_name) AS backend_service_name, http_request.request_url, FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND resource.type="http_load_balancer" AND JSON_VALUE(json_payload.statusDetails) = "denied_by_security_policy" ORDER BY timestamp DESC BigQuery SELECT timestamp, httpRequest.remoteIp, httpRequest.requestMethod, httpRequest.status, jsonpayload_type_loadbalancerlogentry.enforcedsecuritypolicy.name, resource.labels.backend_service_name, httpRequest.requestUrl, FROM `[MY_PROJECT_ID].[MY_DATASET_ID].requests` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND resource.type="http_load_balancer" AND jsonpayload_type_loadbalancerlogentry.statusdetails = "denied_by_security_policy" ORDER BY timestamp DESC Any high-severity virus or malware detected by Cloud IDS? The following query shows any high-severity virus or malware detected by Cloud IDS by searching Cloud IDS Threat Logs. This query assumes that you have a Cloud IDS endpoint configured. Log Analytics SELECT JSON_VALUE(json_payload.alert_time) AS alert_time, JSON_VALUE(json_payload.name) AS name, JSON_VALUE(json_payload.details) AS details, JSON_VALUE(json_payload.application) AS application, JSON_VALUE(json_payload.uri_or_filename) AS uri_or_filename, JSON_VALUE(json_payload.ip_protocol) AS ip_protocol, FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND resource.type="ids.googleapis.com/Endpoint" AND JSON_VALUE(json_payload.alert_severity) IN ("HIGH", "CRITICAL") AND JSON_VALUE(json_payload.type) = "virus" ORDER BY timestamp DESC BigQuery SELECT jsonPayload.alert_time, jsonPayload.name, jsonPayload.details, jsonPayload.application, jsonPayload.uri_or_filename, jsonPayload.ip_protocol FROM `[MY_PROJECT_ID].[MY_DATASET_ID].ids_googleapis_com_threat` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY) AND resource.type="ids.googleapis.com/Endpoint" AND jsonPayload.alert_severity IN ("HIGH", "CRITICAL") AND jsonPayload.type = "virus" ORDER BY timestamp DESC What are the top Cloud DNS queried domains from your VPC network? The following query lists the top 10 Cloud DNS queried domains from your VPC network(s) over the last 60 days. This query assumes that you have enabled Cloud DNS logging for your VPC network(s) as described in the instructions that are provided by the Enable link in the log scoping tool. Log Analytics SELECT JSON_VALUE(json_payload.queryName) AS query_name, COUNT(*) AS total_queries FROM `[MY_PROJECT_ID].[MY_LOG_BUCKET_REGION].[MY_LOG_BUCKET_NAME]._AllLogs` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY) AND log_id="dns.googleapis.com/dns_queries" GROUP BY query_name ORDER BY total_queries DESC LIMIT 10 BigQuery SELECT jsonPayload.queryname AS query_name, COUNT(*) AS total_queries FROM `[MY_PROJECT_ID].[MY_DATASET_ID].dns_googleapis_com_dns_queries` WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 60 DAY) GROUP BY query_name ORDER BY total_queries DESC LIMIT 10 What's next Look at how to stream logs from Google Cloud to Splunk. Ingesting Google Cloud logs to Google Security Operations. 
Exporting Google Cloud security data to your SIEM system. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/See_all_AI_and_machine_learning_products.txt b/See_all_AI_and_machine_learning_products.txt new file mode 100644 index 0000000000000000000000000000000000000000..6a949069b7532383fd5a7302be51debdee5c3aa9 --- /dev/null +++ b/See_all_AI_and_machine_learning_products.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products?pds=CAE#ai-and-machine-learning +Date Scraped: 2025-02-23T12:02:26.675Z + +Content: +Explore over 150+ Google Cloud productsStart with one of these featured services or browse the catalog below.AI and ML platformVertex AI platformVirtual machines (VMs)Compute EngineData warehouseBigQueryServerless computeCloud RunSQL databaseCloud SQLObject storageCloud StorageTalk with a Google Cloud expert about special pricing, custom solutions, and more.Contact salesBrowse by categoryAI/MLcloseInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byAI/MLAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearBrowse by categoryAI/MLcloseInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byAI/MLAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearsearchsearchsearchAI/ML topicsGenerative AIConversational AIMachine Learning & MLOpsSpeech, Text & Language APIsAI InfrastructureImage, AR, and video APIsDocument APIsGenerative AIEnhance applications and the development workflow with generative AI capabilities.AI and ML platformexpand_contentVertex AI platformUnified platform for ML models and generative AI.Generative AI toolexpand_contentVertex AI StudioBuild, tune, and deploy foundation models on Vertex AI.Agent and chatbot builderexpand_contentVertex AI Agent BuilderGenerative AI apps for enterprise search and conversational AI.AI-assisted codingexpand_contentGemini Code AssistAI-assisted application development and coding.AI knowledge agents for enterprisesexpand_contentGoogle AgentspaceMultimodal search agents, information discovery, and business automation for employees.Recommendations builderexpand_contentRecommendations AIDeliver highly personalized product recommendations at scale.Conversational AICustomize and build AI-powered chatbots, virtual agents, and contact centers.Translation APIsexpand_contentTranslation AILanguage detection, translation, and glossary support.Chatbot and natural language platformexpand_contentConversational AgentsConversation applications and systems development suite for virtual agents.Agent and chatbot builderexpand_contentVertex AI Agent BuilderGenerative AI apps for enterprise search and conversational AI.Contact center solutionexpand_contentContact Center as a ServiceOmnichannel contact center service that offers security and privacy, along with unified data. 
AI assistance for agentsexpand_contentAgent AssistAI-powered conversation assistance and recommended responses for human agents.Contact center analyticsexpand_contentConversational InsightsContact center interaction data to answer business questions or support decision making.Machine Learning & MLOpsTrain, customize, test, and monitor machine learning (ML) models.AI and ML platformexpand_contentVertex AI platformUnified platform for ML models and generative AI.Low-experience MLexpand_contentAutoMLCustom machine learning model training and development.Data science platformexpand_contentVertex AI NotebooksA single interface for your data, analytics, and machine learning workflow.ML model analysisexpand_contentVertex Explainable AITools and frameworks to understand and interpret your machine learning models.Speech, Text & Language APIsApply natural language understanding to apps, turn speech into text, and more.Text to audio conversionexpand_contentText-to-SpeechSpeech synthesis in 220+ voices and 40+ languages.Audio to text conversionexpand_contentSpeech-to-TextSpeech recognition and transcription supporting 125 languages.Translation APIsexpand_contentTranslation AILanguage detection, translation, and glossary support.Unstructured text analysisexpand_contentNatural Language AISentiment analysis and classification of unstructured text.AI InfrastructureRun AI workloads with hardware, including TPUs, GPUs, CPUs, and Google Cloud services.AI and ML platformexpand_contentVertex AI platformUnified platform for ML models and generative AI.Containersexpand_contentGoogle Kubernetes EngineManaged environment for running containerized apps.Vector searchexpand_contentVertex AI Vector SearchSearch from billions of semantically similar or semantically related items.Tensor Processing Unitsexpand_contentCloud TPUsTensor processing units for machine learning applications.Image, AR, and video APIsExtract insights from images and video.Vision models and APIsexpand_contentVision AICustom and pre-trained models to detect emotion, text, more.Video analysisexpand_contentVideo AIVideo classification and recognition using machine learning.Appless AR and 3Dexpand_contentImmersive Stream for XRHosts, renders, and streams 3D and XR experiences.Video file converterexpand_contentTranscoder APITransform video content for use across a variety of user devices.Document APIsExtract, classify, and split data in documents.Text and document extractionexpand_contentDocument AIProcessors to extract text from documents, classify information, and split documents.Ready to get started? Take the next stepTalk to a Google Cloud sales representativeDiscuss solutions for your unique challenges in more detailGet more information about products and solutions pricingExplore products and services built for your industryRequest a call backFill out the contact form to find the right product or solution with help from a Google Cloud expert.Start contact formChat live with our sales teamAvailable Monday, 9AM ET, to Friday, 7 PM ET.Open chatGet expert guidance and support with consulting services arrow_forwardFind a Google Cloud partner to deploy your solution arrow_forwardProduct launch stagesExpand allPreviewAt Preview, products or features are ready for testing by customers. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for these. Unless stated otherwise by Google, Preview offerings are intended for use in test environments only. 
The average Preview stage lasts about six months.General AvailabilityGeneral Availability products and features are open to all customers, ready for production use, and covered by a Google Cloud SLA, where applicable. Google typically supports General Availability products and features through APIs, CLIs, and the Google Cloud Console, except in unusual circumstances where providing one or more of the foregoing capabilities is unreasonable in the context of the particular product or feature.DeprecatedDeprecated features are scheduled to be shut down and removed. For more information, see the "Discontinuation of Services" section of the Google Cloud Platform Terms of Service.[Legacy] Early access, alpha, betaEarly access: Early access features are limited to a closed group of testers for a limited subset of launches. Participation is by invitation only and may require signing a pre-general-availability agreement, including confidentiality provisions. These features may be unstable, change in backward-incompatible ways, and are not guaranteed to be released. There are no SLAs provided and no technical support obligations. Early access releases are rare and focus on validating product prototypes.Alpha: Alpha is a limited-availability test before releases are cleared for more widespread use. Our focus with alpha testing is to verify functionality and gather feedback from a limited set of customers. Typically, alpha participation is by invitation and subject to pre-general-availability terms. Alpha releases don't have to be feature complete, no SLAs are provided, and there are no technical support obligations. However, alphas are generally suitable for use in test environments. The alpha phase usually lasts six months.Beta: At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. 
The average beta phase lasts about six months.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/See_all_compute_products.txt b/See_all_compute_products.txt new file mode 100644 index 0000000000000000000000000000000000000000..b3374c916c29fc8283161da396c753a143b5b006 --- /dev/null +++ b/See_all_compute_products.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products?pds=CAUSAQw#compute +Date Scraped: 2025-02-23T12:02:53.296Z + +Content: +Explore over 150+ Google Cloud productsStart with one of these featured services or browse the catalog below.AI and ML platformVertex AI platformVirtual machines (VMs)Compute EngineData warehouseBigQueryServerless computeCloud RunSQL databaseCloud SQLObject storageCloud StorageTalk with a Google Cloud expert about special pricing, custom solutions, and more.Contact salesBrowse by categoryAI/MLInfrastructurecloseData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byInfrastructureAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearBrowse by categoryAI/MLInfrastructurecloseData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byInfrastructureAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearsearchsearchsearchInfrastructure topicscheckComputeNetworkingServerlessStorageWeb3ContainersHybrid & MulticloudClear allComputeChoose from preset or custom machine types for web servers, databases, AI, and more.Virtual machines (VMs)expand_contentCompute EngineVirtual machines running in Google's data center.High-performance GPUsexpand_contentCloud GPUsGPUs for machine learning, scientific computing, and 3D visualization.Tensor Processing Unitsexpand_contentCloud TPUsTensor processing units for machine learning applications.VMware migrationexpand_contentVMware EngineMigrate and run your VMware workloads natively on Google Cloud.Batch jobs and fault-tolerant workloads VMsexpand_contentSpot VMsCompute instances for batch jobs and fault-tolerant workloads.Preconfigured VMs for deep learningexpand_contentDeep Learning VM ImagePreconfigured VMs for deep learning applications.Migration platformexpand_contentMigration CenterUnified platform for migrating and modernizing with Google Cloud.VM migrationexpand_contentMigrate to Virtual MachinesComponents for migrating VMs and physical servers to Compute Engine.Hardened VMsexpand_contentShielded VMsReinforced virtual machines on Google Cloud.Batch schedulerexpand_contentBatchFully-managed batch service to schedule, queue, and execute batch jobs at scale.Ready to get started? 
Take the next stepTalk to a Google Cloud sales representativeDiscuss solutions for your unique challenges in more detailGet more information about products and solutions pricingExplore products and services built for your industryRequest a call backFill out the contact form to find the right product or solution with help from a Google Cloud expert.Start contact formChat live with our sales teamAvailable Monday, 9AM ET, to Friday, 7 PM ET.Open chatGet expert guidance and support with consulting services arrow_forwardFind a Google Cloud partner to deploy your solution arrow_forwardProduct launch stagesExpand allPreviewAt Preview, products or features are ready for testing by customers. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for these. Unless stated otherwise by Google, Preview offerings are intended for use in test environments only. The average Preview stage lasts about six months.General AvailabilityGeneral Availability products and features are open to all customers, ready for production use, and covered by a Google Cloud SLA, where applicable. Google typically supports General Availability products and features through APIs, CLIs, and the Google Cloud Console, except in unusual circumstances where providing one or more of the foregoing capabilities is unreasonable in the context of the particular product or feature.DeprecatedDeprecated features are scheduled to be shut down and removed. For more information, see the "Discontinuation of Services" section of the Google Cloud Platform Terms of Service.[Legacy] Early access, alpha, betaEarly access: Early access features are limited to a closed group of testers for a limited subset of launches. Participation is by invitation only and may require signing a pre-general-availability agreement, including confidentiality provisions. These features may be unstable, change in backward-incompatible ways, and are not guaranteed to be released. There are no SLAs provided and no technical support obligations. Early access releases are rare and focus on validating product prototypes.Alpha: Alpha is a limited-availability test before releases are cleared for more widespread use. Our focus with alpha testing is to verify functionality and gather feedback from a limited set of customers. Typically, alpha participation is by invitation and subject to pre-general-availability terms. Alpha releases don't have to be feature complete, no SLAs are provided, and there are no technical support obligations. However, alphas are generally suitable for use in test environments. The alpha phase usually lasts six months.Beta: At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. 
The average beta phase lasts about six months.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/See_all_data_analytics_products.txt b/See_all_data_analytics_products.txt new file mode 100644 index 0000000000000000000000000000000000000000..f934dc89044c7bff1a0b91cd185f5cbd54ad8f49 --- /dev/null +++ b/See_all_data_analytics_products.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products?pds=CAQ#data-analytics +Date Scraped: 2025-02-23T12:03:51.424Z + +Content: +Explore over 150+ Google Cloud productsStart with one of these featured services or browse the catalog below.AI and ML platformVertex AI platformVirtual machines (VMs)Compute EngineData warehouseBigQueryServerless computeCloud RunSQL databaseCloud SQLObject storageCloud StorageTalk with a Google Cloud expert about special pricing, custom solutions, and more.Contact salesBrowse by categoryAI/MLInfrastructureData and analyticscloseDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byData and analyticsAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearBrowse by categoryAI/MLInfrastructureData and analyticscloseDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byData and analyticsAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearsearchsearchsearchData and analytics topicsData AnalyticsDatabasesBusiness IntelligenceData AnalyticsAnalyze and ingest data, predict business outcomes, and build data pipelines.MongoDB Atlas (pay-as-you-go)expand_contentMongoDB Inc.Get started for free with Atlas, the best way to run MongoDB on Google CloudData warehouseexpand_contentBigQueryData warehouse for business agility and insights.Business intelligence platformexpand_contentLookerPlatform for BI, data applications, and embedded analytics.Streaming data analyticsexpand_contentDataflowStreaming analytics for stream and batch processing.Event ingestionexpand_contentPub/SubMessaging service for event ingestion and delivery.Data transformation pipelineexpand_contentDataformBuild, version control, and deploy SQL workflows in BigQuery.Managed Spark and Hadoop serviceexpand_contentDataprocService for running Apache Spark and Apache Hadoop clusters.Data governanceexpand_contentDataplexIntelligent data fabric for unified data management across distributed data silos.Data integrationexpand_contentCloud Data FusionData integration for building and managing data pipelines.Advertising platformexpand_contentGoogle Marketing PlatformMarketing platform unifying advertising and analytics.Geospatial data catalogexpand_contentEarth EngineGeospatial platform for Earth observation data and analysis.Storage engineexpand_contentBigLakeStorage engine for unifying data warehouses and lakes.Data capture and replication serviceexpand_contentDatastreamServerless change data capture and replication service.Elastic Cloud (Elasticsearch Service)expand_contentElasticElastic is the search-powered AI platform for search applications, observability, and security.Telecom data management and analyticsexpand_contentTelecom Data FabricTelecom data management and analytics with an automated 
approach.DatabasesMongoDB Atlas (pay-as-you-go)expand_contentMongoDB Inc.Get started for free with Atlas, the best way to run MongoDB on Google CloudSQL databaseexpand_contentCloud SQLFully managed database for MySQL, PostgreSQL, and SQL Server.Cloud-native databaseexpand_contentSpannerAlways on database with virtually unlimited scale and 99.999% availability.PostgreSQL databaseexpand_contentAlloyDB for PostgreSQL100% PostgreSQL-compatible database that runs anywhere.In-memory database serviceexpand_contentMemorystoreFully managed Redis, Valkey, and Memcached database for submillisecond data access.NoSQL database serviceexpand_contentBigtableWide-column database for large-scale, low-latency workloads.Document databaseexpand_contentFirestoreCloud-native document database for building rich mobile, web, and IoT apps.Simplified database migrationsexpand_contentDatabase Migration ServiceServerless, minimal downtime migrations to the cloud.Bare metal Oracle serversexpand_contentBare Metal SolutionFully managed infrastructure to support your Oracle workloads.Elastic Cloud (Elasticsearch Service)expand_contentElasticElastic is the search-powered AI platform for search applications, observability, and security.Business IntelligenceAnalyze business data and deliver actionable insights.Data warehouseexpand_contentBigQueryData warehouse for business agility and insights.Data visualizerexpand_contentLooker StudioInteractive data suite for dashboarding, reporting, and analyticsBusiness intelligence platformexpand_contentLookerPlatform for BI, data applications, and embedded analytics.Ready to get started? Take the next stepTalk to a Google Cloud sales representativeDiscuss solutions for your unique challenges in more detailGet more information about products and solutions pricingExplore products and services built for your industryRequest a call backFill out the contact form to find the right product or solution with help from a Google Cloud expert.Start contact formChat live with our sales teamAvailable Monday, 9AM ET, to Friday, 7 PM ET.Open chatGet expert guidance and support with consulting services arrow_forwardFind a Google Cloud partner to deploy your solution arrow_forwardProduct launch stagesExpand allPreviewAt Preview, products or features are ready for testing by customers. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for these. Unless stated otherwise by Google, Preview offerings are intended for use in test environments only. The average Preview stage lasts about six months.General AvailabilityGeneral Availability products and features are open to all customers, ready for production use, and covered by a Google Cloud SLA, where applicable. Google typically supports General Availability products and features through APIs, CLIs, and the Google Cloud Console, except in unusual circumstances where providing one or more of the foregoing capabilities is unreasonable in the context of the particular product or feature.DeprecatedDeprecated features are scheduled to be shut down and removed. For more information, see the "Discontinuation of Services" section of the Google Cloud Platform Terms of Service.[Legacy] Early access, alpha, betaEarly access: Early access features are limited to a closed group of testers for a limited subset of launches. Participation is by invitation only and may require signing a pre-general-availability agreement, including confidentiality provisions. 
These features may be unstable, change in backward-incompatible ways, and are not guaranteed to be released. There are no SLAs provided and no technical support obligations. Early access releases are rare and focus on validating product prototypes.Alpha: Alpha is a limited-availability test before releases are cleared for more widespread use. Our focus with alpha testing is to verify functionality and gather feedback from a limited set of customers. Typically, alpha participation is by invitation and subject to pre-general-availability terms. Alpha releases don't have to be feature complete, no SLAs are provided, and there are no technical support obligations. However, alphas are generally suitable for use in test environments. The alpha phase usually lasts six months.Beta: At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. The average beta phase lasts about six months.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/See_all_developer_tools.txt b/See_all_developer_tools.txt new file mode 100644 index 0000000000000000000000000000000000000000..ab74520367d3fb8a14d70b504dce732aa7ec9764 --- /dev/null +++ b/See_all_developer_tools.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products?pds=CAI#developer-tools +Date Scraped: 2025-02-23T12:04:39.520Z + +Content: +Explore over 150+ Google Cloud productsStart with one of these featured services or browse the catalog below.AI and ML platformVertex AI platformVirtual machines (VMs)Compute EngineData warehouseBigQueryServerless computeCloud RunSQL databaseCloud SQLObject storageCloud StorageTalk with a Google Cloud expert about special pricing, custom solutions, and more.Contact salesBrowse by categoryAI/MLInfrastructureData and analyticsDeveloper toolscloseApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byDeveloper toolsAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearBrowse by categoryAI/MLInfrastructureData and analyticsDeveloper toolscloseApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byDeveloper toolsAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearsearchsearchsearchDeveloper ToolsGoogle Maps APIs and SDKsexpand_contentGoogle Maps PlatformReal-time geographical data for creating location experiences and improving business operations.Online development terminalexpand_contentCloud ShellInteractive shell environment with a built-in command line.Event-driven functionsexpand_contentCloud Run functionsEvent-driven compute platform for cloud services and apps.Serverless computeexpand_contentCloud RunFully managed environment for running containerized apps.AI-assisted codingexpand_contentGemini Code AssistAI-assisted application development and coding.Google Cloud consoleexpand_contentCloud ConsoleWeb-based interface for managing and monitoring cloud apps.Google Cloud libraries and toolsexpand_contentCloud SDKCommand-line tools and libraries for Google 
Cloud.Cron job serviceexpand_contentCloud SchedulerCron job scheduler for task automation and management.AI-assisted IDE pluginsexpand_contentCloud CodeIDE support to write, run, and debug Kubernetes applications.Terraform provisioningexpand_contentTerraform on Google CloudOpen source tool to provision Google Cloud resources with declarative configuration files.Asynchronous task executionexpand_contentCloud TasksTask management service for asynchronous task execution.Development environmentsexpand_contentCloud WorkstationsManaged and secure development environments in the cloud.Google Cloud templatesexpand_contentCloud Foundation ToolkitReference templates for Deployment Manager and Terraform.Event-driven execution for Firebaseexpand_contentCloud Functions for FirebaseWrite and run app logic server-side without needing to set up your own server.Service meshexpand_contentCloud Service MeshManaged service mesh for complex microservices architectures.Windows workload managementexpand_contentTools for PowerShellFull cloud control from Windows PowerShell.Kubernetes-native CI/CDexpand_contentTektonKubernetes-native resources for declaring CI/CD pipelines.AI assistance for Google Cloudexpand_contentGemini Cloud AssistAI-powered assistance for Google Cloud.Internal solutions discoveryexpand_contentService CatalogService catalog for admins managing internal enterprise solutions.Service event dashboardexpand_contentPersonalized Service HealthGain visibility into disruptive events impacting Google Cloud products.CI/CD command lineexpand_contentSkaffoldOpen source command line tool for Cloud Code, Cloud Build, and Cloud Deploy.Policy managementexpand_contentPolicy IntelligenceSmart access control for your Google Cloud resources.Ready to get started? Take the next stepTalk to a Google Cloud sales representativeDiscuss solutions for your unique challenges in more detailGet more information about products and solutions pricingExplore products and services built for your industryRequest a call backFill out the contact form to find the right product or solution with help from a Google Cloud expert.Start contact formChat live with our sales teamAvailable Monday, 9AM ET, to Friday, 7 PM ET.Open chatGet expert guidance and support with consulting services arrow_forwardFind a Google Cloud partner to deploy your solution arrow_forwardProduct launch stagesExpand allPreviewAt Preview, products or features are ready for testing by customers. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for these. Unless stated otherwise by Google, Preview offerings are intended for use in test environments only. The average Preview stage lasts about six months.General AvailabilityGeneral Availability products and features are open to all customers, ready for production use, and covered by a Google Cloud SLA, where applicable. Google typically supports General Availability products and features through APIs, CLIs, and the Google Cloud Console, except in unusual circumstances where providing one or more of the foregoing capabilities is unreasonable in the context of the particular product or feature.DeprecatedDeprecated features are scheduled to be shut down and removed. For more information, see the "Discontinuation of Services" section of the Google Cloud Platform Terms of Service.[Legacy] Early access, alpha, betaEarly access: Early access features are limited to a closed group of testers for a limited subset of launches. 
Participation is by invitation only and may require signing a pre-general-availability agreement, including confidentiality provisions. These features may be unstable, change in backward-incompatible ways, and are not guaranteed to be released. There are no SLAs provided and no technical support obligations. Early access releases are rare and focus on validating product prototypes.Alpha: Alpha is a limited-availability test before releases are cleared for more widespread use. Our focus with alpha testing is to verify functionality and gather feedback from a limited set of customers. Typically, alpha participation is by invitation and subject to pre-general-availability terms. Alpha releases don't have to be feature complete, no SLAs are provided, and there are no technical support obligations. However, alphas are generally suitable for use in test environments. The alpha phase usually lasts six months.Beta: At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. The average beta phase lasts about six months.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/See_all_industry_solutions.txt b/See_all_industry_solutions.txt new file mode 100644 index 0000000000000000000000000000000000000000..45d99458051710c345b21cf5f3322a59edd43b30 --- /dev/null +++ b/See_all_industry_solutions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions#industry-solutions +Date Scraped: 2025-02-23T11:58:05.041Z + +Content: +Google Cloud solutionsBrowse Google Cloud solutions or visit our Solutions Center to discover and deploy solutions based on your readiness.Visit Solutions Center Contact Sales Navigate toReference architecturesProducts directoryPricing informationFilter byFiltersIndustry solutionsJump Start SolutionsApplication modernizationArtificial intelligenceAPIs and applicationsData analyticsDatabasesInfrastructure modernizationProductivity and collaborationSecurityStartups and small and medium-sized businessesFeatured partner solutionssearchsendIndustry solutionsWhatever your industry's challenge or use case, explore how Google Cloud solutions can help improve efficiency and agility, reduce cost, participate in new business models, and capture new market opportunities.RetailAnalytics and collaboration tools for the retail value chain.Consumer packaged goodsSolutions for CPG digital transformation and brand growth.ManufacturingMigration and AI tools to optimize the manufacturing value chain.AutomotiveDigital transformation along the automotive value chain.Supply chain and logisticsEnable sustainable, efficient, and resilient data-driven operations across supply chain and logistics operations.EnergyMulticloud and hybrid solutions for energy companies.Healthcare and life sciencesAdvance R&D and improve the clinician and patient experience with AI-driven tools.Media and entertainmentSolutions for content production and distribution operations.GamesAI-driven solutions to build and scale games faster.TelecommunicationsHybrid and multicloud services to deploy and monetize 5G.Financial servicesComputing, databases, and analytics tools for financial services.Capital marketsModern cloud-based architectures, high performance computing, and AI/ML.BankingReduce risk, improve customer experiences and data insights.InsuranceStrengthen decision making and deliver 
customer-centric experiences.PaymentsAdd new revenue streams, ensure secure transactions and scale globally.Government and public sectorGovernmentData storage, AI, and analytics solutions for government agencies.State and local governmentCloud platform helps public sector workforces better serve constituents.Federal governmentTools that increase federal agencies’ innovation and operational effectiveness.Federal cybersecuritySolutions spanning Zero Trust, analytics, and asset protection.EducationTeaching tools to provide more engaging learning experiences.Education technologyAI, analytics, and app development solutions designed for EdTech.Canada public sectorSolutions that help keep your organization secure and compliant.Department of DefenseGoogle Cloud supports the critical missions of the DoD by providing them with the most secure, reliable, innovative cloud solutions.Google Workspace for GovernmentSecure collaboration solutions and program management resources to help fulfill the unique needs of today's government.Jump Start SolutionsTo get started, choose pre-configured, interactive solutions that you can deploy directly from the Google Cloud console.Deploy a dynamic websiteBuild, deploy, and operate a sample dynamic website using responsive web frameworks.Deploy load-balanced virtual machinesLearn best practices for creating and deploying a sample load-balanced VM cluster.Summarize large documentsLearn how to detect text in raw files and automate document summaries with generative AI.Deploy an AI/ML image processing pipelineRecognize and classify images using pre-trained AI models and serverless functions.Build a three-tier web appCreate a model web app using a three-tiered architecture (frontend, middle tier, backend).Deploy an ecommerce web app Build and run a simple ecommerce application for retail organizations using Kubernetes.Create an analytics lakehouse Store, process, analyze, and activate data using a unified data stack.Deploy a data warehouse using BigQueryLearn the basics of building a data warehouse and visualizing data.Build an internal knowledge base Extract question-and-answer pairs from your documents for a knowledge base.Deploy a RAG application Learn how to use retrieval-augmented generation (RAG) to create a chat application.Deploy a Java application Learn to deploy a dynamic web app that mimics a real-world point of sale screen for a retail store.Deploy an ecommerce platform with serverless computing Build and run a simple ecommerce application for retail organizations using serverless capabilities.Build a secure CI/CD pipeline Set up a secure CI/CD pipeline for building, scanning, storing, and deploying containers to GKE.Use a cloud SDK client library Learn key skills for successfully making API calls to identify trends and observations on aggregate data.See all solutions in consoleApplication modernizationAssess, plan, implement, and measure software practices and capabilities to modernize and simplify your organization’s business application portfolios.CAMPProgram that uses DORA to improve your software delivery capabilities.Modernize Traditional ApplicationsAnalyze, categorize, and get started with cloud migration on traditional workloads.Migrate from PaaS: Cloud Foundry, OpenshiftTools for moving your existing containers into Google's managed container services.Migrate from MainframeAutomated tools and prescriptive guidance for moving your mainframe apps to the cloud.Modernize Software DeliverySoftware supply chain best practices - innerloop productivity, 
CI/CD and S3C.DevOps Best PracticesProcesses and resources for implementing DevOps in your org.SRE PrinciplesTools and resources for adopting SRE in your org.Day 2 Operations for GKETools and guidance for effective GKE management and monitoring.FinOps and Optimization of GKEBest practices for running reliable, performant, and cost effective applications on GKE.Run Applications at the EdgeGuidance for localized and low latency apps on Google’s hardware agnostic edge solution.Architect for MulticloudManage workloads across multiple clouds with a consistent platform.Go ServerlessFully managed environment for developing, deploying and scaling apps.API ManagementModernize old applications and accelerate new development with an API-FIRST approach. Learn moreArtificial intelligenceAdd intelligence and efficiency to your business with AI and machine learning.AI HypercomputerAI optimized hardware, software, and consumption, combined to improve productivity and efficiency.Contact Center AIAI model for speaking with customers and assisting human agents.Document AIMachine learning and AI to unlock insights from your documents.Gemini for Google CloudAI-powered collaborator integrated across Google Workspace and Google Cloud. Vertex AI Search for commerceGoogle-quality search and recommendations for retailers' digital properties help increase conversions and reduce search abandonment.Learn moreAPIs and applicationsSecurely unlock your data with APIs, automate processes, and create applications across clouds and on-premises without coding.New business channels using APIsAttract and empower an ecosystem of developers and partners.Open Banking APIxSimplify and accelerate secure delivery of open banking compliant APIs.Unlocking legacy applications using APIsCloud services for extending and modernizing legacy apps.Learn moreData analyticsGenerate instant insights from data at any scale with a serverless, fully managed analytics platform that significantly simplifies analytics.Data warehouse modernizationData warehouse to jumpstart your migration and unlock insights.Data lake modernizationServices for building and modernizing your data lake. Spark on Google CloudRun and write Spark where you need it, serverless and integrated.Stream analyticsInsights from ingesting, processing, and analyzing event streams.Business intelligenceSolutions for modernizing your BI stack and creating rich data experiences.Data sciencePut your data to work with Data Science on Google Cloud.Marketing analyticsSolutions for collecting, analyzing, and activating customer data. Geospatial analytics and AISolutions for building a more prosperous and sustainable business.DatasetsData from Google, public, and commercial providers to enrich your analytics and AI initiatives.Cortex FrameworkReduce the time to value with reference architectures, packaged services, and deployment templates.Learn moreDatabasesMigrate and manage enterprise data with security, reliability, high availability, and fully managed data services.Databases for GamesBuild global, live games with Google Cloud databases.Database migrationGuides and tools to simplify your database migration life cycle. 
Database modernizationUpgrades to modernize your operational database infrastructure.Google Cloud database portfolioDatabase services to migrate, manage, and modernize data.Migrate Oracle workloads to Google CloudRehost, replatform, rewrite your Oracle workloads.Open source databasesFully managed open source databases with enterprise-grade support.SQL Server on Google CloudOptions for running SQL Server virtual machines on Google Cloud. Learn moreInfrastructure modernizationMigrate and modernize workloads on Google's global, secure, and reliable infrastructure.Active AssistAutomatic cloud resource optimization and increased security.Application migrationDiscovery and analysis tools for moving to the cloud.Backup and Disaster RecoveryEnsure your business continuity needs are met.Data center migrationMigration solutions for VMs, apps, databases, and more.Rapid Migration and Modernization ProgramSimplify your path to success in the cloud.High performance computingCompute, storage, and networking options to support any workload.Mainframe modernizationAutomated tools and prescriptive guidance for moving to the cloud.ObservabilityDeliver deep cloud observability with Google Cloud and partners.SAP on Google CloudCertifications for running SAP applications and SAP HANA.Virtual desktopsRemote work solutions for desktops and applications (VDI & DaaS).Windows on Google CloudTools and partners for running Windows workloads.Red Hat on Google CloudEnterprise-grade platform for traditional on-prem and custom applications.Cross-cloud NetworkSimplify hybrid and multicloud networking and secure your workloads, data, and users.Learn moreProductivity and collaborationChange the way teams work with solutions designed for humans and built for impact.Google WorkspaceCollaboration and productivity tools for enterprises. Chrome EnterpriseChrome OS, Chrome Browser, and Chrome devices built for business. Google Workspace EssentialsSecure video meetings and modern collaboration for teams.Cloud IdentityUnified platform for IT admins to manage user devices and apps.Cloud SearchEnterprise search for employees to quickly find company information.Learn moreSecurityDetect, investigate, and protect against online threats.Digital SovereigntyA comprehensive set of sovereign capabilities, allowing you to adopt the right controls on a workload-by-workload basis.Security FoundationSolution with recommended products and guidance to help achieve a strong security posture.Security analytics and operationsSolution for analyzing petabytes of security telemetry.Web App and API Protection (WAAP)Threat and fraud protection for your web applications and APIs.Security and resilience frameworkSolutions for each phase of the security and resilience life cycle.Risk and compliance as code (RCaC)Solution to modernize your governance, risk, and compliance function with automation. 
Software Supply Chain SecuritySolution for strengthening end-to-end software supply chain security.Google Cloud Cybershield™Strengthen nationwide cyber defense.Learn moreStartups and small and medium-sized businessesAccelerate startup and small and medium-sized businesses growth with tailored solutions and programs.Google Cloud for Web3Build and scale faster with simple, secure tools, and infrastructure for Web3.Startup solutions Grow your startup and solve your toughest challenges using Google’s proven technology.Startup programGet financial, business, and technical support to take your startup to the next level.Small and medium-sized businessesExplore solutions for web hosting, app development, AI, and analytics.Software as a serviceBuild better SaaS products, scale efficiently, and grow your business. Featured partner solutionsGoogle Cloud works with some of the most trusted, innovative partners to help enterprises innovate faster, scale smarter, and stay secure. Here are just a few of them.CiscoCombine Cisco's networking, multicloud, and security portfolio with Google Cloud services to innovate on your own terms.DatabricksDatabricks on Google Cloud offers enterprise flexibility for AI-driven analytics on one open cloud platform.Dell TechnologiesThe Dell and Google Cloud partnership delivers a variety of solutions to help transform how enterprises operate their business.IntelGet performance on your own terms with customizable Google Cloud and Intel technologies designed for the most demanding enterprise workloads and applications.MongoDBMongoDB Atlas provides customers a fully managed service on Google’s globally scalable and reliable infrastructure.NetAppDiscover advanced hybrid cloud data services that simplify how you migrate and run enterprise workloads in the cloud.Palo Alto NetworksCombine Google’s secure-by-design infrastructure with dedicated protection from Palo Alto Networks to help secure your applications and data in hybrid environments and on Google Cloud.SAPDrive agility and economic value with VM-based infrastructure, analytics, and machine learning innovations.SplunkSplunk and Google Cloud have partnered to help organizations ingest, normalize, and analyze data at scale. VMwareMigrate and run your VMware workloads natively on Google Cloud.Red HatEnterprise-grade platform for traditional on-prem and custom applications with the security, performance, scalability, and simplicity of Google Cloud.Take the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplace \ No newline at end of file diff --git a/See_all_management_tools.txt b/See_all_management_tools.txt new file mode 100644 index 0000000000000000000000000000000000000000..541b761277286078542e7b508ae0dab4dd561bb8 --- /dev/null +++ b/See_all_management_tools.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products?pds=CAY#managment-tools +Date Scraped: 2025-02-23T12:05:57.232Z + +Content: +Explore over 150+ Google Cloud productsStart with one of these featured services or browse the catalog below.AI and ML platformVertex AI platformVirtual machines (VMs)Compute EngineData warehouseBigQueryServerless computeCloud RunSQL databaseCloud SQLObject storageCloud StorageTalk with a Google Cloud expert about special pricing, custom solutions, and more.Contact salesBrowse by categoryAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryManagement tools topicsObservabilityOperationsObservabilityUnderstand the behavior, health, and performance of applications.Google Cloud consoleCloud ConsoleWeb-based interface for managing and monitoring cloud apps.Logging toolCloud LoggingGoogle Cloud audit, platform, and application logs management.Mobile-based managementCloud Mobile AppApp to manage Google Cloud services from your mobile device.Monitoring serviceCloud MonitoringInfrastructure and application health with rich metrics.Distributed tracing systemCloud TraceTracing system collecting latency data from applications.Emissions analyticsCarbon FootprintDashboard to view and export your Google Cloud carbon emissions report.Error managementError ReportingReal time exception monitoring and alerting.Monitoring service for PrometheusManaged Service for PrometheusFully managed monitoring service built on the same globally scalable data store used by Google Cloud.Application profilerCloud ProfilerCPU and heap profiler for analyzing application performance.DatadogDatadogSee inside any stack, any app, at any scale, anywhere.OperationsIntegrate monitoring, logging, and tracing into applications.Monitoring, logging, and tracing platformOperations SuiteIntegrated monitoring, logging, and trace managed services for applications and systems.Container resource managementResource ManagerHierarchically manage resources by project, folder, and organization.Service managementService DirectoryPlatform for discovering, publishing, and connecting 
services.Resource automationexpand_contentInfrastructure ManagerAutomate deployment and management of Google Cloud infrastructure resources using Terraform.Ready to get started? Take the next stepTalk to a Google Cloud sales representativeDiscuss solutions for your unique challenges in more detailGet more information about products and solutions pricingExplore products and services built for your industryRequest a call backFill out the contact form to find the right product or solution with help from a Google Cloud expert.Start contact formChat live with our sales teamAvailable Monday, 9AM ET, to Friday, 7 PM ET.Open chatGet expert guidance and support with consulting services arrow_forwardFind a Google Cloud partner to deploy your solution arrow_forwardProduct launch stagesExpand allPreviewAt Preview, products or features are ready for testing by customers. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for these. Unless stated otherwise by Google, Preview offerings are intended for use in test environments only. The average Preview stage lasts about six months.General AvailabilityGeneral Availability products and features are open to all customers, ready for production use, and covered by a Google Cloud SLA, where applicable. Google typically supports General Availability products and features through APIs, CLIs, and the Google Cloud Console, except in unusual circumstances where providing one or more of the foregoing capabilities is unreasonable in the context of the particular product or feature.DeprecatedDeprecated features are scheduled to be shut down and removed. For more information, see the "Discontinuation of Services" section of the Google Cloud Platform Terms of Service.[Legacy] Early access, alpha, betaEarly access: Early access features are limited to a closed group of testers for a limited subset of launches. Participation is by invitation only and may require signing a pre-general-availability agreement, including confidentiality provisions. These features may be unstable, change in backward-incompatible ways, and are not guaranteed to be released. There are no SLAs provided and no technical support obligations. Early access releases are rare and focus on validating product prototypes.Alpha: Alpha is a limited-availability test before releases are cleared for more widespread use. Our focus with alpha testing is to verify functionality and gather feedback from a limited set of customers. Typically, alpha participation is by invitation and subject to pre-general-availability terms. Alpha releases don't have to be feature complete, no SLAs are provided, and there are no technical support obligations. However, alphas are generally suitable for use in test environments. The alpha phase usually lasts six months.Beta: At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. 
The average beta phase lasts about six months.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/See_all_networking_products.txt b/See_all_networking_products.txt new file mode 100644 index 0000000000000000000000000000000000000000..511df1e090e90bcd2309382c1e4cf9bc3e046243 --- /dev/null +++ b/See_all_networking_products.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products?pds=CAUSAQ0#networking +Date Scraped: 2025-02-23T12:07:23.847Z + +Content: +Explore over 150+ Google Cloud productsStart with one of these featured services or browse the catalog below.AI and ML platformVertex AI platformVirtual machines (VMs)Compute EngineData warehouseBigQueryServerless computeCloud RunSQL databaseCloud SQLObject storageCloud StorageTalk with a Google Cloud expert about special pricing, custom solutions, and more.Contact salesBrowse by categoryAI/MLInfrastructurecloseData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byInfrastructureAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearBrowse by categoryAI/MLInfrastructurecloseData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBrowse byInfrastructureAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearsearchsearchsearchInfrastructure topicsComputecheckNetworkingServerlessStorageWeb3ContainersHybrid & MulticloudClear allNetworkingConnect, load balance, stream, and manage network traffic.Load balancerexpand_contentCloud Load BalancingService for distributing traffic across applications and regions.DNS serviceexpand_contentCloud DNSDomain name system for reliable and low-latency name lookups.Content delivery networkexpand_contentCloud CDNContent delivery network for serving web and video content.VPC networkexpand_contentVirtual Private Cloud (VPC)Virtual network for Google Cloud resources and cloud-based services.Connectivity managerexpand_contentNetwork Connectivity CenterDeploy, manage, and scale your networks from a single place.CBRS spectrum accessexpand_contentSpectrum Access System (SAS)Controls fundamental access to the Citizens Broadband Radio Service (CBRS).Hybrid connectivityexpand_contentCloud ConnectivityConnectivity options for routers, VPN, peering, interconnection and enterprise needs.Networking tier optionsexpand_contentNetwork Service TiersCloud network options based on performance, availability, and cost.Observability, monitoring, and troubleshootingexpand_contentNetwork Intelligence CenterNetwork monitoring, verification, and optimization platform.Cross-cloud networkexpand_contentPrivate Service ConnectSecure connection between your VPC and services.Network address translationexpand_contentCloud NATNAT service for giving private instances internet access.Cloud-native network automationexpand_contentTelecom Network AutomationReady to use cloud-native automation for telecom networks.Ready to get started? 
Take the next stepTalk to a Google Cloud sales representativeDiscuss solutions for your unique challenges in more detailGet more information about products and solutions pricingExplore products and services built for your industryRequest a call backFill out the contact form to find the right product or solution with help from a Google Cloud expert.Start contact formChat live with our sales teamAvailable Monday, 9AM ET, to Friday, 7 PM ET.Open chatGet expert guidance and support with consulting services arrow_forwardFind a Google Cloud partner to deploy your solution arrow_forwardProduct launch stagesExpand allPreviewAt Preview, products or features are ready for testing by customers. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for these. Unless stated otherwise by Google, Preview offerings are intended for use in test environments only. The average Preview stage lasts about six months.General AvailabilityGeneral Availability products and features are open to all customers, ready for production use, and covered by a Google Cloud SLA, where applicable. Google typically supports General Availability products and features through APIs, CLIs, and the Google Cloud Console, except in unusual circumstances where providing one or more of the foregoing capabilities is unreasonable in the context of the particular product or feature.DeprecatedDeprecated features are scheduled to be shut down and removed. For more information, see the "Discontinuation of Services" section of the Google Cloud Platform Terms of Service.[Legacy] Early access, alpha, betaEarly access: Early access features are limited to a closed group of testers for a limited subset of launches. Participation is by invitation only and may require signing a pre-general-availability agreement, including confidentiality provisions. These features may be unstable, change in backward-incompatible ways, and are not guaranteed to be released. There are no SLAs provided and no technical support obligations. Early access releases are rare and focus on validating product prototypes.Alpha: Alpha is a limited-availability test before releases are cleared for more widespread use. Our focus with alpha testing is to verify functionality and gather feedback from a limited set of customers. Typically, alpha participation is by invitation and subject to pre-general-availability terms. Alpha releases don't have to be feature complete, no SLAs are provided, and there are no technical support obligations. However, alphas are generally suitable for use in test environments. The alpha phase usually lasts six months.Beta: At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. 
The average beta phase lasts about six months. \ No newline at end of file diff --git a/See_all_products_(100+).txt b/See_all_products_(100+).txt new file mode 100644 index 0000000000000000000000000000000000000000..b821a2f7dd9ca6d42f352c7abff63f4c6f2d518a --- /dev/null +++ b/See_all_products_(100+).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products#featured-products +Date Scraped: 2025-02-23T12:01:54.990Z + +Content: +Explore over 150+ Google Cloud productsStart with one of these featured services or browse the catalog below.AI and ML platformVertex AI platformVirtual machines (VMs)Compute EngineData warehouseBigQueryServerless computeCloud RunSQL databaseCloud SQLObject storageCloud StorageTalk with a Google Cloud expert about special pricing, custom solutions, and more.Contact salesBrowse by categoryAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryBuild and use AIGenerative AI, chatbots, ML models, and APIs.View moreAI and ML platformVertex AI platformUnified platform for ML models and generative AI.Generative AI toolVertex AI StudioBuild, tune, and deploy foundation models on Vertex AI.AI-assisted codingGemini Code AssistAI-assisted application development and coding.Agent and chatbot builderVertex AI Agent BuilderGenerative AI apps for enterprise search and conversational AI.Build and run on Google Cloud infrastructureVirtual machines, networking, storage, multicloud solutions and more.View moreVirtual machines (VMs)Compute EngineVirtual machines running in Google's data center.ContainersGoogle Kubernetes EngineManaged environment for running containerized apps.Tensor Processing UnitsCloud TPUsTensor processing units for machine learning applications.Object storageCloud StorageObject storage that's secure, durable, and scalable.Access analytics and create databasesData warehouse, business intelligence, database, and streaming analytics services.View moreData warehouseBigQueryData warehouse for business agility and insights.Business intelligence platformLookerPlatform for BI, data applications, and embedded analytics.SQL databaseCloud SQLFully managed database for MySQL, PostgreSQL, and SQL Server.Streaming data analyticsDataflowStreaming analytics for stream and batch processing.Find popular development toolsProducts and services for coding, deploying, and debugging applications.View moreAI-assisted codingGemini Code AssistAI-assisted application development and coding.Online development terminalCloud ShellInteractive shell environment with a built-in command line.Serverless computeCloud RunFully managed 
environment for running containerized apps.Develop applications and featuresBuild, run, manage, and deploy applications.View moreEvent-driven functionsexpand_contentCloud Run functionsEvent-driven compute platform for cloud services and apps.Container registryexpand_contentArtifact RegistryStore, manage, and secure container images and language packages.CI/CD platformexpand_contentCloud BuildSolution for running build steps in a Docker container.No-code platformexpand_contentAppSheetNo-code development platform to build and extend applications.Ready to get started? Take the next stepTalk to a Google Cloud sales representativeDiscuss solutions for your unique challenges in more detailGet more information about products and solutions pricingExplore products and services built for your industryRequest a call backFill out the contact form to find the right product or solution with help from a Google Cloud expert.Start contact formChat live with our sales teamAvailable Monday, 9AM ET, to Friday, 7 PM ET.Open chatGet expert guidance and support with consulting services arrow_forwardFind a Google Cloud partner to deploy your solution arrow_forwardProduct launch stagesExpand allPreviewAt Preview, products or features are ready for testing by customers. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for these. Unless stated otherwise by Google, Preview offerings are intended for use in test environments only. The average Preview stage lasts about six months.General AvailabilityGeneral Availability products and features are open to all customers, ready for production use, and covered by a Google Cloud SLA, where applicable. Google typically supports General Availability products and features through APIs, CLIs, and the Google Cloud Console, except in unusual circumstances where providing one or more of the foregoing capabilities is unreasonable in the context of the particular product or feature.DeprecatedDeprecated features are scheduled to be shut down and removed. For more information, see the "Discontinuation of Services" section of the Google Cloud Platform Terms of Service.[Legacy] Early access, alpha, betaEarly access: Early access features are limited to a closed group of testers for a limited subset of launches. Participation is by invitation only and may require signing a pre-general-availability agreement, including confidentiality provisions. These features may be unstable, change in backward-incompatible ways, and are not guaranteed to be released. There are no SLAs provided and no technical support obligations. Early access releases are rare and focus on validating product prototypes.Alpha: Alpha is a limited-availability test before releases are cleared for more widespread use. Our focus with alpha testing is to verify functionality and gather feedback from a limited set of customers. Typically, alpha participation is by invitation and subject to pre-general-availability terms. Alpha releases don't have to be feature complete, no SLAs are provided, and there are no technical support obligations. However, alphas are generally suitable for use in test environments. The alpha phase usually lasts six months.Beta: At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. 
The average beta phase lasts about six months.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/See_all_security_and_identity_products.txt b/See_all_security_and_identity_products.txt new file mode 100644 index 0000000000000000000000000000000000000000..5f5d0654a23da7264426a6043a4fa2b8f8090222 --- /dev/null +++ b/See_all_security_and_identity_products.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products?pds=CAg#security-and-identity +Date Scraped: 2025-02-23T12:09:35.840Z + +Content: +Explore over 150+ Google Cloud productsStart with one of these featured services or browse the catalog below.AI and ML platformVertex AI platformVirtual machines (VMs)Compute EngineData warehouseBigQueryServerless computeCloud RunSQL databaseCloud SQLObject storageCloud StorageTalk with a Google Cloud expert about special pricing, custom solutions, and more.Contact salesBrowse by categoryAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identitycloseWeb and app hostingProductivity and collaborationIndustryBrowse bySecurity and identityAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearBrowse by categoryAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identitycloseWeb and app hostingProductivity and collaborationIndustryBrowse bySecurity and identityAI/MLInfrastructureData and analyticsDeveloper toolsApp developmentIntegration servicesManagement toolsSecurity and identityWeb and app hostingProductivity and collaborationIndustryClearsearchsearchsearchSecurity and IdentityFraud prevention platformexpand_contentreCAPTCHA EnterpriseHelp protect your website from fraudulent activity, spam and abuse.Sensitive data managementexpand_contentSecret ManagerStore API keys, passwords, certificates, and other sensitive data.SecOps platformexpand_contentGoogle Security OperationsCloud-native threat detection, investigation, and response (TDIR) platform.Identity and access managementexpand_contentCloud IAMPermissions management system for Google Cloud resources.Unified endpoint managementexpand_contentCloud IdentityUnified platform for IT admins to manage user devices and apps.Cybersecurity platformexpand_contentSecurity Command CenterPlatform for defending against threats to your Google Cloud assets.Security keyexpand_contentTitan Security KeyTwo-factor authentication device for user account protection.Web application firewallexpand_contentCloud ArmorSecurity policies and defense against web and DDoS attacks.Encryption key managementexpand_contentCloud Key ManagementManage encryption keys on Google Cloud.Context-aware accessexpand_contentIdentity-Aware ProxyUse identity and context to guard access to your applications and VMs.Identity and access managementexpand_contentIdentity PlatformAdd Google-grade identity and access management to your apps.Threat visibilityexpand_contentGoogle Threat IntelligenceAccess the latest intel from the frontlines.Hybrid and multicloud technologyexpand_contentAnti Money Laundering AIDetect suspicious potential money laundering activity with AI.Firewall serviceexpand_contentCloud FirewallGlobal and flexible firewalls to protect your cloud resources.Private CA managementexpand_contentCertificate Authority ServiceSimplify the deployment and management of private CAs.Confidential 
VMsexpand_contentConfidential ComputingEncrypt data in use with Confidential VMs.Network managementexpand_contentVPC Service ControlsProtect sensitive data in Google Cloud services using security perimeters.Workload controlexpand_contentAssured WorkloadsCompliance and security controls for sensitive workloads.Intrusion detectionexpand_contentCloud IDSCloud-native network threat detection with industry-leading security.Malware intelligenceexpand_contentVirus TotalUnique visibility into threats.Cybersecurity consultingexpand_contentMandiant Consulting ServicesMitigate cyber threats and reduce business risk with the help of frontline experts.Security assessmentexpand_contentMandiant Security ValidationKnow your security can be effective against today's adversaries.Microsoft active directory domainsexpand_contentManaged Service for Microsoft Active DirectoryHardened service running Microsoft® Active Directory (AD).Open source serviceexpand_contentAssured Open Source SoftwareIncorporate the same OSS packages that Google uses into your own developer workflows.Attack monitoringexpand_contentMandiant Attack Surface ManagementInternet asset discovery, monitoring, and management.Hardened VMsexpand_contentShielded VMsReinforced virtual machines on Google Cloud.Managed defense serviceexpand_contentMandiant Managed Detection and ResponseHelp find and eliminate threats with confidence 24/7.Transparency managementexpand_contentAccess TransparencyCloud provider visibility through near real-time logs.Security coursesexpand_contentMandiant AcademyCybersecurity training, incident response and threat intelligence certifications, and hands-on cyber range.Threat monitoringexpand_contentMandiant Digital Threat MonitoringVisibility into deep, dark, and open webIncident response serviceexpand_contentMandiant Incident Response ServicesHelp tackle breaches rapidly and confidently.Policy managementexpand_contentPolicy IntelligenceSmart access control for your Google Cloud resources.Malicious URL detectionexpand_contentWeb RiskDetect malicious URLs on your website and in client applications.Access managementexpand_contentAccess Context ManagerManager for configuring access levels based on policy and request attributes.Risk reporting and managementexpand_contentRisk ManagerEvaluate your organization's security posture and connect with insurance partners.Ready to get started? Take the next stepTalk to a Google Cloud sales representativeDiscuss solutions for your unique challenges in more detailGet more information about products and solutions pricingExplore products and services built for your industryRequest a call backFill out the contact form to find the right product or solution with help from a Google Cloud expert.Start contact formChat live with our sales teamAvailable Monday, 9AM ET, to Friday, 7 PM ET.Open chatGet expert guidance and support with consulting services arrow_forwardFind a Google Cloud partner to deploy your solution arrow_forwardProduct launch stagesExpand allPreviewAt Preview, products or features are ready for testing by customers. Preview offerings are often publicly announced, but are not necessarily feature-complete, and no SLAs or technical support commitments are provided for these. Unless stated otherwise by Google, Preview offerings are intended for use in test environments only. 
The average Preview stage lasts about six months.General AvailabilityGeneral Availability products and features are open to all customers, ready for production use, and covered by a Google Cloud SLA, where applicable. Google typically supports General Availability products and features through APIs, CLIs, and the Google Cloud Console, except in unusual circumstances where providing one or more of the foregoing capabilities is unreasonable in the context of the particular product or feature.DeprecatedDeprecated features are scheduled to be shut down and removed. For more information, see the "Discontinuation of Services" section of the Google Cloud Platform Terms of Service.[Legacy] Early access, alpha, betaEarly access: Early access features are limited to a closed group of testers for a limited subset of launches. Participation is by invitation only and may require signing a pre-general-availability agreement, including confidentiality provisions. These features may be unstable, change in backward-incompatible ways, and are not guaranteed to be released. There are no SLAs provided and no technical support obligations. Early access releases are rare and focus on validating product prototypes.Alpha: Alpha is a limited-availability test before releases are cleared for more widespread use. Our focus with alpha testing is to verify functionality and gather feedback from a limited set of customers. Typically, alpha participation is by invitation and subject to pre-general-availability terms. Alpha releases don't have to be feature complete, no SLAs are provided, and there are no technical support obligations. However, alphas are generally suitable for use in test environments. The alpha phase usually lasts six months.Beta: At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. 
The average beta phase lasts about six months.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/See_all_solutions.txt b/See_all_solutions.txt new file mode 100644 index 0000000000000000000000000000000000000000..cd0c4466f5bbc20ecc22852ccb25fc20e8d122e6 --- /dev/null +++ b/See_all_solutions.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions +Date Scraped: 2025-02-23T11:58:05.805Z + +Content: +Google Cloud solutionsBrowse Google Cloud solutions or visit our Solutions Center to discover and deploy solutions based on your readiness.Visit Solutions Center Contact Sales Navigate toReference architecturesProducts directoryPricing informationFilter byFiltersIndustry solutionsJump Start SolutionsApplication modernizationArtificial intelligenceAPIs and applicationsData analyticsDatabasesInfrastructure modernizationProductivity and collaborationSecurityStartups and small and medium-sized businessesFeatured partner solutionssearchsendIndustry solutionsWhatever your industry's challenge or use case, explore how Google Cloud solutions can help improve efficiency and agility, reduce cost, participate in new business models, and capture new market opportunities.RetailAnalytics and collaboration tools for the retail value chain.Consumer packaged goodsSolutions for CPG digital transformation and brand growth.ManufacturingMigration and AI tools to optimize the manufacturing value chain.AutomotiveDigital transformation along the automotive value chain.Supply chain and logisticsEnable sustainable, efficient, and resilient data-driven operations across supply chain and logistics operations.EnergyMulticloud and hybrid solutions for energy companies.Healthcare and life sciencesAdvance R&D and improve the clinician and patient experience with AI-driven tools.Media and entertainmentSolutions for content production and distribution operations.GamesAI-driven solutions to build and scale games faster.TelecommunicationsHybrid and multicloud services to deploy and monetize 5G.Financial servicesComputing, databases, and analytics tools for financial services.Capital marketsModern cloud-based architectures, high performance computing, and AI/ML.BankingReduce risk, improve customer experiences and data insights.InsuranceStrengthen decision making and deliver customer-centric experiences.PaymentsAdd new revenue streams, ensure secure transactions and scale globally.Government and public sectorGovernmentData storage, AI, and analytics solutions for government agencies.State and local governmentCloud platform helps public sector workforces better serve constituents.Federal governmentTools that increase federal agencies’ innovation and operational effectiveness.Federal cybersecuritySolutions spanning Zero Trust, analytics, and asset protection.EducationTeaching tools to provide more engaging learning experiences.Education technologyAI, analytics, and app development solutions designed for EdTech.Canada public sectorSolutions that help keep your organization secure and compliant.Department of DefenseGoogle Cloud supports the critical missions of the DoD by providing them with the most secure, reliable, innovative cloud solutions.Google Workspace for GovernmentSecure collaboration solutions and program management resources to help fulfill the unique needs of today's government.Jump Start SolutionsTo get started, choose pre-configured, interactive solutions that you can deploy directly from the Google Cloud console.Deploy a dynamic websiteBuild, deploy, and operate a sample dynamic 
website using responsive web frameworks.Deploy load-balanced virtual machinesLearn best practices for creating and deploying a sample load-balanced VM cluster.Summarize large documentsLearn how to detect text in raw files and automate document summaries with generative AI.Deploy an AI/ML image processing pipelineRecognize and classify images using pre-trained AI models and serverless functions.Build a three-tier web appCreate a model web app using a three-tiered architecture (frontend, middle tier, backend).Deploy an ecommerce web app Build and run a simple ecommerce application for retail organizations using Kubernetes.Create an analytics lakehouse Store, process, analyze, and activate data using a unified data stack.Deploy a data warehouse using BigQueryLearn the basics of building a data warehouse and visualizing data.Build an internal knowledge base Extract question-and-answer pairs from your documents for a knowledge base.Deploy a RAG application Learn how to use retrieval-augmented generation (RAG) to create a chat application.Deploy a Java application Learn to deploy a dynamic web app that mimics a real-world point of sale screen for a retail store.Deploy an ecommerce platform with serverless computing Build and run a simple ecommerce application for retail organizations using serverless capabilities.Build a secure CI/CD pipeline Set up a secure CI/CD pipeline for building, scanning, storing, and deploying containers to GKE.Use a cloud SDK client library Learn key skills for successfully making API calls to identify trends and observations on aggregate data.See all solutions in consoleApplication modernizationAssess, plan, implement, and measure software practices and capabilities to modernize and simplify your organization’s business application portfolios.CAMPProgram that uses DORA to improve your software delivery capabilities.Modernize Traditional ApplicationsAnalyze, categorize, and get started with cloud migration on traditional workloads.Migrate from PaaS: Cloud Foundry, OpenshiftTools for moving your existing containers into Google's managed container services.Migrate from MainframeAutomated tools and prescriptive guidance for moving your mainframe apps to the cloud.Modernize Software DeliverySoftware supply chain best practices - innerloop productivity, CI/CD and S3C.DevOps Best PracticesProcesses and resources for implementing DevOps in your org.SRE PrinciplesTools and resources for adopting SRE in your org.Day 2 Operations for GKETools and guidance for effective GKE management and monitoring.FinOps and Optimization of GKEBest practices for running reliable, performant, and cost effective applications on GKE.Run Applications at the EdgeGuidance for localized and low latency apps on Google’s hardware agnostic edge solution.Architect for MulticloudManage workloads across multiple clouds with a consistent platform.Go ServerlessFully managed environment for developing, deploying and scaling apps.API ManagementModernize old applications and accelerate new development with an API-FIRST approach. Learn moreArtificial intelligenceAdd intelligence and efficiency to your business with AI and machine learning.AI HypercomputerAI optimized hardware, software, and consumption, combined to improve productivity and efficiency.Contact Center AIAI model for speaking with customers and assisting human agents.Document AIMachine learning and AI to unlock insights from your documents.Gemini for Google CloudAI-powered collaborator integrated across Google Workspace and Google Cloud. 
Vertex AI Search for commerceGoogle-quality search and recommendations for retailers' digital properties help increase conversions and reduce search abandonment.Learn moreAPIs and applicationsSecurely unlock your data with APIs, automate processes, and create applications across clouds and on-premises without coding.New business channels using APIsAttract and empower an ecosystem of developers and partners.Open Banking APIxSimplify and accelerate secure delivery of open banking compliant APIs.Unlocking legacy applications using APIsCloud services for extending and modernizing legacy apps.Learn moreData analyticsGenerate instant insights from data at any scale with a serverless, fully managed analytics platform that significantly simplifies analytics.Data warehouse modernizationData warehouse to jumpstart your migration and unlock insights.Data lake modernizationServices for building and modernizing your data lake. Spark on Google CloudRun and write Spark where you need it, serverless and integrated.Stream analyticsInsights from ingesting, processing, and analyzing event streams.Business intelligenceSolutions for modernizing your BI stack and creating rich data experiences.Data sciencePut your data to work with Data Science on Google Cloud.Marketing analyticsSolutions for collecting, analyzing, and activating customer data. Geospatial analytics and AISolutions for building a more prosperous and sustainable business.DatasetsData from Google, public, and commercial providers to enrich your analytics and AI initiatives.Cortex FrameworkReduce the time to value with reference architectures, packaged services, and deployment templates.Learn moreDatabasesMigrate and manage enterprise data with security, reliability, high availability, and fully managed data services.Databases for GamesBuild global, live games with Google Cloud databases.Database migrationGuides and tools to simplify your database migration life cycle. Database modernizationUpgrades to modernize your operational database infrastructure.Google Cloud database portfolioDatabase services to migrate, manage, and modernize data.Migrate Oracle workloads to Google CloudRehost, replatform, rewrite your Oracle workloads.Open source databasesFully managed open source databases with enterprise-grade support.SQL Server on Google CloudOptions for running SQL Server virtual machines on Google Cloud. 
Learn moreInfrastructure modernizationMigrate and modernize workloads on Google's global, secure, and reliable infrastructure.Active AssistAutomatic cloud resource optimization and increased security.Application migrationDiscovery and analysis tools for moving to the cloud.Backup and Disaster RecoveryEnsure your business continuity needs are met.Data center migrationMigration solutions for VMs, apps, databases, and more.Rapid Migration and Modernization ProgramSimplify your path to success in the cloud.High performance computingCompute, storage, and networking options to support any workload.Mainframe modernizationAutomated tools and prescriptive guidance for moving to the cloud.ObservabilityDeliver deep cloud observability with Google Cloud and partners.SAP on Google CloudCertifications for running SAP applications and SAP HANA.Virtual desktopsRemote work solutions for desktops and applications (VDI & DaaS).Windows on Google CloudTools and partners for running Windows workloads.Red Hat on Google CloudEnterprise-grade platform for traditional on-prem and custom applications.Cross-cloud NetworkSimplify hybrid and multicloud networking and secure your workloads, data, and users.Learn moreProductivity and collaborationChange the way teams work with solutions designed for humans and built for impact.Google WorkspaceCollaboration and productivity tools for enterprises. Chrome EnterpriseChrome OS, Chrome Browser, and Chrome devices built for business. Google Workspace EssentialsSecure video meetings and modern collaboration for teams.Cloud IdentityUnified platform for IT admins to manage user devices and apps.Cloud SearchEnterprise search for employees to quickly find company information.Learn moreSecurityDetect, investigate, and protect against online threats.Digital SovereigntyA comprehensive set of sovereign capabilities, allowing you to adopt the right controls on a workload-by-workload basis.Security FoundationSolution with recommended products and guidance to help achieve a strong security posture.Security analytics and operationsSolution for analyzing petabytes of security telemetry.Web App and API Protection (WAAP)Threat and fraud protection for your web applications and APIs.Security and resilience frameworkSolutions for each phase of the security and resilience life cycle.Risk and compliance as code (RCaC)Solution to modernize your governance, risk, and compliance function with automation. Software Supply Chain SecuritySolution for strengthening end-to-end software supply chain security.Google Cloud Cybershield™Strengthen nationwide cyber defense.Learn moreStartups and small and medium-sized businessesAccelerate startup and small and medium-sized businesses growth with tailored solutions and programs.Google Cloud for Web3Build and scale faster with simple, secure tools, and infrastructure for Web3.Startup solutions Grow your startup and solve your toughest challenges using Google’s proven technology.Startup programGet financial, business, and technical support to take your startup to the next level.Small and medium-sized businessesExplore solutions for web hosting, app development, AI, and analytics.Software as a serviceBuild better SaaS products, scale efficiently, and grow your business. Featured partner solutionsGoogle Cloud works with some of the most trusted, innovative partners to help enterprises innovate faster, scale smarter, and stay secure. 
Here are just a few of them.CiscoCombine Cisco's networking, multicloud, and security portfolio with Google Cloud services to innovate on your own terms.DatabricksDatabricks on Google Cloud offers enterprise flexibility for AI-driven analytics on one open cloud platform.Dell TechnologiesThe Dell and Google Cloud partnership delivers a variety of solutions to help transform how enterprises operate their business.IntelGet performance on your own terms with customizable Google Cloud and Intel technologies designed for the most demanding enterprise workloads and applications.MongoDBMongoDB Atlas provides customers a fully managed service on Google’s globally scalable and reliable infrastructure.NetAppDiscover advanced hybrid cloud data services that simplify how you migrate and run enterprise workloads in the cloud.Palo Alto NetworksCombine Google’s secure-by-design infrastructure with dedicated protection from Palo Alto Networks to help secure your applications and data in hybrid environments and on Google Cloud.SAPDrive agility and economic value with VM-based infrastructure, analytics, and machine learning innovations.SplunkSplunk and Google Cloud have partnered to help organizations ingest, normalize, and analyze data at scale. VMwareMigrate and run your VMware workloads natively on Google Cloud.Red HatEnterprise-grade platform for traditional on-prem and custom applications with the security, performance, scalability, and simplicity of Google Cloud.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/See_full_price_list_with_100+_products.txt b/See_full_price_list_with_100+_products.txt new file mode 100644 index 0000000000000000000000000000000000000000..ecc80e74dea4a16854972964d4dc452a152f4af8 --- /dev/null +++ b/See_full_price_list_with_100+_products.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/pricing/list +Date Scraped: 2025-02-23T12:11:08.438Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayPrice listClick a product name below to view its pricing details. New Google Cloud customers get $300 in free credits to run, test, and deploy workloads. 
All customers can use 25+ products for free, up to monthly usage limits.Request a custom quoteNavigate toPricing calculatorCost management tools Pricing overview Filter byFiltersPopular productsAI and Machine LearningAPI ManagementComputeContainersData AnalyticsDatabasesDeveloper ToolsHealthcare and Life SciencesHybrid and MulticloudInternet of Things (IoT)Management ToolsMedia and GamingMigrationNetworkingOperations (formerly Stackdriver)Security and IdentityServerless ComputingStorageWeb3searchsendPopular productsCompute EngineVirtual machines running in Google's data center.Cloud StorageObject storage with global edge caching.Cloud SQLRelational database services for MySQL, PostgreSQL, and SQL Server.BigQueryData warehouse for business agility and insights.Google Kubernetes EngineManaged environment for running containerized apps.Speech-to-TextSpeech recognition and transcription supporting 125 languages.AI and Machine LearningAI for Data ScientistsSee all Vertex AI pricingEvery tool you need to build, deploy, and scale ML models faster, within a unified artificial intelligence platform.AI Building BlocksAutoMLCustom machine learning model training and development.Natural Language AISentiment analysis and classification of unstructured text.DialogflowConversation applications and systems development suite.Media TranslationAdd dynamic audio translation directly to your content and applications.Recommendations AIDeliver highly personalized product recommendations at scale.Speech-to-TextSpeech recognition and transcription supporting 125 languages.Text-to-SpeechSpeech synthesis in 220+ voices and 40+ languages.Translation AILanguage detection, translation, and glossary support.Video AIVideo classification and recognition using machine learning.Vision AICustom and pre-trained models to detect emotion, text, more.AI InfrastructureCloud GPUsGPUs for machine learning, scientific computing, and 3D visualization.Cloud TPUsTensor processing units for machine learning applications.Tensorflow EnterpriseReliability and performance for AI apps with enterprise-grade support and managed services.Deep Learning VM ImagePreconfigured VMs for deep learning applications.Deep Learning ContainersPreconfigured and optimized containers for deep learning environments.AI SolutionsContact us for more information and pricing on all other AI solutions.Document AIMachine learning and AI to unlock insights from your documents.API ManagementAPI GatewayDevelop, deploy, secure, and manage APIs with a fully managed gateway.Apigee API PlatformAPI management, development, and security platform.Apigee healthcare APIxFHIR API-based digital service formation.Apigee Open Banking APIxOpen banking and PSD2-compliant API delivery.Application IntegrationConnect your applications visually, without code.Cloud EndpointsDeployment and development management for APIs on Google Cloud.Developer PortalSelf-service and custom developer portal creation.ComputeBare MetalInfrastructure to run specialized workloads on Google Cloud. Subscription pricing. 
Contact sales.Compute EngineVirtual machines running in Google’s data center.Migrate to Virtual MachinesServer and VM migration to Compute Engine.Preemptible VMsCompute instances for batch jobs and fault-tolerant workloads.RecommenderProactive, easily actionable recommendations to keep your cloud optimized.SQL Server on Google CloudOptions for running SQL Server virtual machines on Google Cloud.VMware EngineMigrate and run your VMware workloads natively on Google Cloud.ContainersArtifact RegistryStore, manage, and secure container images and language packages.Container RegistryRegistry for storing, managing, and securing Docker images.Google Kubernetes EngineManaged environment for running containerized apps.Data AnalyticsBigQueryData warehouse for business agility and insights.Cloud ComposerWorkflow orchestration service built on Apache Airflow.Cloud Data FusionData integration for building and managing data pipelines.Data CatalogMetadata solution for exploring and managing data.DataflowStreaming analytics for stream and batch processing.DataplexIntelligent data fabric for centrally managing distributed data.DataprepService to prepare data for analysis and machine learning.DataprocService for running Apache Spark and Apache Hadoop clusters.Earth EngineA powerful geospatial platform for Earth observation data and analysis.LookerPlatform for BI, data applications, and embedded analytics.Pub/SubMessaging service for event ingestion and delivery.DatabasesAlloyDB for PostgreSQLFully managed, PostgreSQL-compatible database for demanding enterprise workloads.BigtableNoSQL wide-column database for storing big data with low latency.Cloud SQLRelational database services for MySQL, PostgreSQL, and SQL Server.Database Migration ServiceServerless, minimal downtime migrations to the cloud.Firebase Realtime DatabaseNoSQL cloud database for storing and syncing data in real time.FirestoreNoSQL document database for mobile and web app data.MemorystoreIn-memory data store service for Redis for fast data processing.SpannerRelational database management system for database administration.Developer ToolsCloud BuildSolution for running build steps in a Docker container.Cloud CodeIDE support to write, run, and debug Kubernetes applications.Cloud SchedulerCron job scheduler for task automation and management.Cloud SDKCommand-line tools and libraries for Google Cloud. (No charge)Cloud Source RepositoriesPrivate Git repository to store, manage, and track code.Cloud TasksTask management service for asynchronous task execution.Cloud WorkstationsFully managed and secure development environments.Google Cloud DeployFully managed solution for continuous delivery.Tools for EclipsePlugin for Google Cloud development inside the Eclipse IDE. (No charge)Tools for PowerShellFull cloud control from Windows PowerShell. (No charge)Infrastructure ManagerAutomate infrastructure management with Terraform.Healthcare and Life SciencesCloud Healthcare APISolution for bridging existing care systems and apps on Google Cloud.Cloud Life Sciences APITools for managing, processing, and transforming biomedical data.Hybrid and MulticloudAnthosPlatform for modernizing existing apps and building new ones.Internet of Things (IoT)IOT CoreIoT device management, integration, and connection service.Management ToolsCarbon FootprintDashboard to view and export your Google Cloud carbon emissions report. (No charge)Cloud ConsoleWeb-based interface for managing and monitoring cloud apps. 
(No charge)Cloud Deployment ManagerService for creating and managing Google Cloud resources. (No charge)Cloud ShellInteractive shell environment with a built-in command line. (No charge)Cost ManagementTools for monitoring, controlling, and optimizing your costs. (No charge)Service CatalogService catalog for admins managing internal enterprise solutions. (No charge)Media and GamingLive Stream APILive encoder that transforms live video content for use across a variety of user devices.OpenCueOpen source render manager for visual effects and animation. Transcoder APITransform video content for use across a variety of user devices.Video Stitcher APIDynamically insert content and ads for targeted personalization of VOD and live content.MigrationBigQuery Data Transfer ServiceData import service for scheduling and moving data into BigQuery.Storage Transfer ServiceData transfers from online and on-premises sources to Cloud Storage.Transfer ApplianceStorage server for moving large volumes of data to Google Cloud.NetworkingCloud ArmorSecurity policies and defense against web and DDoS attacks.Cloud CDN Content delivery network for serving web and video content.Cloud DNSDomain name system for reliable and low-latency name lookups.Cloud IDSCloud-native network threat detection with industry-leading securityCloud Load BalancingService for distributing traffic across applications and regions.Cloud NATNAT service for giving private instances internet access.Hybrid connectivityConnectivity options for VPN, peering, and enterprise needs.Network Intelligence CenterNetwork monitoring, verification, and optimization platform.Network Service TiersCloud network options based on performance, availability, and cost.Network TelemetryVPC flow logs for network monitoring, forensics, and security.Service DirectoryPlatform for discovering, publishing, and connecting services. (No charge)Traffic DirectorTraffic control pane and management for open service mesh.Virtual Private Cloud (VPC)Virtual network for Google Cloud resources and cloud-based services.Operations (formerly Stackdriver)Cloud LoggingGoogle Cloud audit, platform, and application logs management.Cloud MonitoringInfrastructure and application health with rich metrics.Cloud ProfilerCPU and heap profiler for analyzing application performance. (No charge)Cloud TraceTracing system collecting latency data from applications.Kubernetes Engine MonitoringGKE app development and troubleshooting.Security and IdentitySecurityAccess TransparencyCloud provider visibility through near real-time logs.Assured WorkloadsCompliance and security controls for sensitive workloads.Binary AuthorizationDeploy only trusted containers on Google Kubernetes Engine.Cloud Asset InventoryView, monitor, and analyze Google Cloud and Anthos assets across projects and services.Sensitive Data ProtectionSensitive data inspection, classification, and redaction platform.Cloud Key ManagementManage encryption keys on Google Cloud.Confidential ComputingEncrypt data in use with Confidential VMs.Security Command CenterPlatform for defending against threats to your Google Cloud assets.Secret ManagerStore API keys, passwords, certificates, and other sensitive data.Shielded VMsVirtual machines hardened with security controls and defenses. (No charge)VPC Service ControlsProtect sensitive data in Google Cloud services using security perimeters. 
(No charge)Identity and AccessBeyondCorp EnterpriseZero-trust access with built-in threat and data protection.Cloud IdentityUnified platform for IT admins to manage user devices and apps.Context-Aware AccessManage access to apps and infrastructure based on a user’s identity and context. (No charge)Identity and Access ManagementPermissions management system for Google Cloud resources. (No charge)Identity-Aware ProxyUse identity and context to guard access to your applications and VMs.Identity PlatformAdd Google-grade identity and access management to your apps.Managed Service for Microsoft Active DirectoryHardened service running Microsoft® Active Directory (AD).Resource ManagerHierarchical management for organizing resources on Google Cloud.Titan Security KeyTwo-factor authentication device for user account protection.User Protection ServicesreCAPTCHA EnterpriseHelp protect your website from fraudulent activity, spam, and abuse. Contact sales for pricing.Web RiskDetect malicious URLs on your website and in client applications.Serverless ComputingApp EngineServerless application platform for apps and back ends.Cloud FunctionsPlatform for creating functions that respond to cloud events.Cloud RunFully managed environment for running containerized apps.WorkflowsWorkflow orchestration for serverless products and API services.StorageCloud StorageObject storage with global edge-caching.Cloud Storage for FirebaseObject storage for storing and serving user-generated content.FilestoreFile storage that is highly scalable and secure.Local SSDBlock storage that is locally attached for high-performance needs.Persistent DiskBlock storage for virtual machine instances running on Google Cloud.Backup and DRManaged backup and disaster recovery for application-consistent data protection.Web3Blockchain Node EngineFully managed node hosting for developing on the blockchain.Products listed on this page may be in preview, alpha, beta, or early access. Learn more about product launch stages.Products in preview, early access, alpha, or beta may not have charges associated with usage in their current launch stage, which is subject to change. Prices listed are in USD. Your charges might be converted into local currency if applicable to your Cloud Billing account. View supported local currencies.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Select_a_managed_container_runtime_environment.txt b/Select_a_managed_container_runtime_environment.txt new file mode 100644 index 0000000000000000000000000000000000000000..5d0c2aa2acebd3198e5440f55d855e367cacd851 --- /dev/null +++ b/Select_a_managed_container_runtime_environment.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/select-managed-container-runtime-environment +Date Scraped: 2025-02-23T11:47:43.963Z + +Content: +Home Docs Cloud Architecture Center Send feedback Select a managed container runtime environment Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-30 UTC This document helps you to assess your application requirements and choose between Cloud Run and Google Kubernetes Engine (GKE) Autopilot, based on technical and organizational considerations. 
This document is for cloud architects who need to choose a Google Cloud target container runtime environment for their workloads. It assumes that you're familiar with Kubernetes and Google Cloud, and that you have some knowledge of cloud serverless runtime environments like Cloud Run, Cloud Run functions, or AWS Lambda. Google Cloud offers several runtime environment options that have a range of capabilities. The following diagram shows the range of Google Cloud managed offerings: The diagram shows the following: Most-managed runtime environments (the focus of this guide): Cloud Run, which includes Cloud Run functions. GKE Autopilot. These options are managed by Google, with no user management of underlying compute infrastructure. Least-managed runtime environments: GKE Standard, which is optimized for enterprise workloads and offers single-cluster scalability up to 65,000 nodes. Compute Engine, which includes the accelerator-optimized A3 family of virtual machines for machine learning (ML) and high performance computing (HPC) workloads. These options require some degree of user-level infrastructure management, such as the virtual machines (VMs) that underlie the compute capabilities. VMs in GKE Standard are the Kubernetes cluster nodes. VMs in Compute Engine are the core platform offering, which you can customize to suit your requirements. This guide helps you to choose between the most-managed runtime environments, Cloud Run and GKE Autopilot. For a broader view of Google Cloud runtime environments, see the Google Cloud Application Hosting Options guide. Overview of environments This section provides an overview of Cloud Run and GKE Autopilot capabilities. Cloud Run and GKE Autopilot are both tightly integrated within Google Cloud, so there is a lot of commonality between the two. Both platforms support multiple options for load balancing with Google's highly reliable and scalable load balancing services. They also both support VPC networking, Identity-Aware Proxy (IAP), and Google Cloud Armor for when more granular, private networking is a requirement. Both platforms charge you only for the exact resources that you use for your applications. From a software delivery perspective, as container runtime environments, Cloud Run and GKE Autopilot are supported by services that make up the Google Cloud container ecosystem. These services include Cloud Build, Artifact Registry, Binary Authorization, and continuous delivery with Cloud Deploy, to help ensure that your applications are safely and reliably deployed to production. This means that you and your teams own the build and deployment decisions. Because of the commonality between the two platforms, you might want to take advantage of the strengths of each by adopting a flexible approach to where you deploy your applications, as detailed in the guide Use GKE and Cloud Run together. The following sections describe unique aspects of Cloud Run and Autopilot. Cloud Run Cloud Run is a serverless managed compute platform that lets you run your applications directly on top of Google's scalable infrastructure. Cloud Run provides automation and scaling for two main kinds of applications: Cloud Run services: For code that responds to web requests. Cloud Run jobs: For code that performs one or more background tasks and then exits when the work is done. With these two deployment models, Cloud Run can support a wide range of application architectures while enabling best practices and letting developers focus on code. 
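To make the Cloud Run service model concrete, here is a hedged, minimal sketch of a container that satisfies the Cloud Run container contract (the choice of Flask and the handler shown are illustrative assumptions, not part of this guide); Cloud Run supplies the listening port through the PORT environment variable:

import os

from flask import Flask  # assumes Flask is installed in the container image

app = Flask(__name__)

@app.route("/")
def handle_request():
    # A Cloud Run service responds to web requests like this one;
    # a Cloud Run job would instead run to completion and exit.
    return "Hello from a Cloud Run service\n"

if __name__ == "__main__":
    # The container contract expects the server to listen on $PORT (8080 by default).
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))

Packaged into a container image (or deployed from source), a service like this scales automatically with incoming traffic, which is the behavior the rest of this comparison assumes.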
Cloud Run also supports deploying application code from the following sources: Individual lightweight functions Full applications from source code Containerized applications Cloud Run incorporates a build-and-deploy capability that supports both FaaS and the ability to build from source, alongside the prebuilt container runtime capability. When you use Cloud Run in this way, the steps of building and deploying the application container image that will be executed are entirely automatic, and they don't require custom configuration from you. GKE Autopilot GKE Autopilot is the default and recommended cluster mode of operation in GKE. Autopilot lets you run applications on Kubernetes without the overhead of managing infrastructure. When you use Autopilot, Google manages key underlying aspects of your cluster configuration, including node provisioning and scaling, default security posture, and other preconfigured settings. With Autopilot managing node resources, you pay only for the resources that are requested by your workloads. Autopilot continuously monitors and optimizes infrastructure resourcing to ensure the best fit while providing an SLA for your workloads. GKE Autopilot supports workloads that might not be a good fit for Cloud Run. For example, GKE Autopilot commonly supports long-lived or stateful workloads. Choose a runtime environment In general, if the characteristics of your workload are suitable for a managed platform, the serverless runtime environment of Cloud Run is ideal. Using Cloud Run can result in less infrastructure to manage, less self-managed configuration, and therefore lower operational overhead. Unless you specifically want or need Kubernetes, we recommend that you consider serverless first as your target runtime environment. Although Kubernetes provides the powerful abstraction of an open platform, using it adds complexity. If you don't need Kubernetes, then we recommend that you consider whether your application is a good fit for serverless. If there are criteria that make your workload less suitable for serverless, then we recommend using Autopilot. The following sections provide more detail about some of the criteria that can help you answer these questions, particularly the question of whether the workload is a fit for serverless. Given the commonality between Autopilot and Cloud Run that's described in the preceding sections, migration between the platforms is a straightforward task when there aren't any technical or other blockers. To explore migration options in more detail, see Migrate from Cloud Run to GKE and Migrate from Kubernetes to Cloud Run. When you choose a runtime environment for your workload, you need to factor in technical considerations and organizational considerations. Technical considerations are characteristics of your application or the Google Cloud runtime environment. Organizational considerations are non-technical characteristics of your organization or team that might influence your decision. Technical considerations Some of the technical considerations that will influence your choice of platform are the following: Control and configurability: Granularity of control of the execution environment. Network traffic management and routing: Configurability of interactions over the network. Horizontal and vertical scalability: Support for dynamically growing and shrinking capacity. Support for stateful applications: Capabilities for storing persistent state. CPU architecture: Support for different CPU types. 
Accelerator offload (GPUs and TPUs): Ability to offload computation to dedicated hardware. High memory, CPU, and other resource capacity: Level of various resources consumed. Explicit dependency on Kubernetes: Requirements for Kubernetes API usage. Complex RBAC for multi-tenancy: Support for sharing pooled resources. Maximum container task timeout time: Execution duration of long-lived applications or components. The following sections detail these technical considerations to help you choose a runtime environment. Control and configurability Compared to Cloud Run, GKE Autopilot provides more granular control of the execution environment for your workloads. Within the context of a Pod, Kubernetes provides many configurable primitives that you can tune to meet your application requirements. Configuration options include privilege level, quality of service parameters, custom handlers for container lifecycle events, and process namespace sharing between multiple containers. Cloud Run directly supports a subset of the Kubernetes Pod API surface, which is described in the reference YAML for the Cloud Run Service object and in the reference YAML for the Cloud Run Job object. These reference guides can help you to evaluate the two platforms alongside your application requirements. The container contract for the Cloud Run execution environment is relatively straightforward and will suit most serving workloads. However, the contract specifies some requirements that must be fulfilled. If your application or its dependencies can't fulfill those requirements, or if you require a finer degree of control over the execution environment, then Autopilot might be more suitable. If you want to reduce the time that you spend on configuration and administration, consider choosing Cloud Run as your runtime environment. Cloud Run has fewer configuration options than Autopilot, so it can help you to maximize developer productivity and reduce operational overhead. Network traffic management and routing Both Cloud Run and GKE Autopilot integrate with Google Cloud Load Balancing. However, GKE Autopilot additionally provides a rich and powerful set of primitives for configuring the networking environment for service-to-service communications. The configuration options include granular permissions and segregation at the network layer by using namespaces and network policies, port remapping, and built-in DNS service discovery within the cluster. GKE Autopilot also supports the highly configurable and flexible Gateway API. This functionality provides powerful control over the way that traffic is routed into and between services in the cluster. Because Autopilot is highly configurable, it can be the best option if you have multiple services with a high degree of networking codependency, or complex requirements around how traffic is routed between your application components. An example of this pattern is a distributed application that is decomposed into numerous microservices that have complex patterns of interdependence. In such scenarios, Autopilot networking configuration options can help you to manage and control the interactions between services. Horizontal and vertical scalability Cloud Run and GKE Autopilot both support manual and automatic horizontal scaling for services and jobs. Horizontal scaling provides increased processing power when required, and it removes the added processing power when it isn't needed. 
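On the GKE Autopilot side, horizontal scaling is typically declared as a HorizontalPodAutoscaler. The following is a hedged sketch, assuming a recent official kubernetes Python client that includes the autoscaling/v2 API and a hypothetical Deployment named my-app; it is not taken from this guide:

from kubernetes import client, config, utils

# Assumes kubectl credentials for the Autopilot cluster are already configured locally.
config.load_kube_config()
api_client = client.ApiClient()

# Scale the hypothetical "my-app" Deployment between 2 and 20 replicas,
# targeting roughly 70% average CPU utilization.
hpa_manifest = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "my-app", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "my-app"},
        "minReplicas": 2,
        "maxReplicas": 20,
        "metrics": [
            {
                "type": "Resource",
                "resource": {"name": "cpu", "target": {"type": "Utilization", "averageUtilization": 70}},
            }
        ],
    },
}

utils.create_from_dict(api_client, hpa_manifest)

In Cloud Run, the equivalent controls are per-service settings such as minimum and maximum instance counts, rather than a separate autoscaler object.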
For a typical workload, Cloud Run can usually scale out more quickly than GKE Autopilot to respond to spikes in the number of requests per second. As an example, the video demonstration "What's New in Serverless Compute?" shows Cloud Run scaling from zero to over 10,000 instances in approximately 10 seconds. To increase the speed of horizontal scaling on Kubernetes (at some additional cost), Autopilot lets you provision extra compute capacity. If your application can't scale by adding more instances to increase the level of resources that are available, then it might be a better fit for Autopilot. Autopilot supports vertical scaling to dynamically vary the amount of processing power that's available without increasing the number of running instances of the application. Cloud Run can automatically scale your applications down to zero replicas while they aren't being used, which is helpful for certain use cases that have a special focus on cost optimization. Because of the characteristics of how your applications can scale to zero, there are multiple optimization steps that you can take to minimize the time between the arrival of a request and the time at which your application is up and running, and able to process the request. Support for stateful applications Autopilot offers complete Kubernetes Volume support, backed by Persistent Disks that let you run a broad range of stateful deployments, including self-managed databases. Both Cloud Run and GKE Autopilot let you connect with other services like Filestore and Cloud Storage buckets. They also both include the ability to mount object-store buckets into the file system with Cloud Storage FUSE. Cloud Run uses an in-memory file system, which might not be a good fit for applications that require a persistent local file system. In addition, the local in-memory file system is shared with the memory of your application. Therefore, both the ephemeral file system and the application and container memory usage contribute towards exhausting the memory limit. You can avoid this issue if you use a dedicated in-memory volume with a size limit. A Cloud Run service or job container has a maximum task timeout. A container running within a pod in an Autopilot cluster can be rescheduled, subject to any constraints that are configured with Pod Disruption Budgets (PDBs). However, pods can run for up to seven days when they're protected from eviction caused by node auto-upgrades or scale-down events. Typically, task timeout is more likely to be a consideration for batch workloads in Cloud Run. For long-lived workloads, and for batch tasks that can't be completed within the maximum task duration, Autopilot might be the best option. CPU architecture All Google Cloud compute platforms support x86 CPU architecture. Cloud Run doesn't support Arm architecture processors, but Autopilot supports managed nodes that are backed by Arm architecture. If your workload requires Arm architecture, you will need to use Autopilot. Accelerator offload Autopilot supports the use of GPUs and the use of TPUs, including the ability to consume reserved resources. Cloud Run supports the use of GPUs with some limitations. High memory, CPU, and other resource requirements Compared to GKE Autopilot resource request limits, the maximum CPU and memory resources that can be consumed by a single Cloud Run service or job (a single instance) is limited. Depending on the characteristics of your workloads, Cloud Run might have other limits that constrain the resources that are available. 
For example, the startup timeout and the maximum number of outbound connections might be limited with Cloud Run. With Autopilot, some limits might not apply or they might have higher permitted values. Explicit dependency on Kubernetes Some applications, libraries, or frameworks might have an explicit dependency on Kubernetes. The Kubernetes dependency might be a result of one of the following: The application requirements (for example, the application calls Kubernetes APIs, or uses Kubernetes custom resources). The requirements of the tooling that's used to configure or deploy the application (such as Helm). The support requirements of a third-party creator or supplier. In these scenarios, Autopilot is the target runtime environment because Cloud Run doesn't support Kubernetes. Complex RBAC for multi-tenancy If your organization has particularly complex organizational structures or requirements for multi-tenancy, then use Autopilot so that you can take advantage of Kubernetes' Role-Based Access Control (RBAC). For a simpler option, you can use the security and segregation capabilities that are built in to Cloud Run. Organizational considerations The following are some of the organizational considerations that will influence your choice of environment: Broad technical strategy: Your organization's technical direction. Leveraging the Kubernetes ecosystem: Interest in leveraging the OSS community. Existing in-house tooling: Incumbent use of certain tooling. Development team profiles: Developer skill-sets and experience. Operational support: Operations teams' capabilities and focus. The following sections detail these organizational considerations to help you choose an environment. Broad technical strategy Organizations or teams might have agreed-upon strategies for preferring certain technologies over others. For example, if a team has an agreement to standardize where possible on either serverless or Kubernetes, that agreement might influence or even dictate a target runtime environment. If a given workload isn't a good fit for the runtime environment that's specified in the strategy, you might decide to do one or more of the following, with the accompanying caveats: Rearchitect the workload. However, if the workload isn't a good fit, doing so might result in non-optimal performance, cost, security, or other characteristics. Register the workload as an exception to the strategic direction. However, if exceptions are overused, doing so can result in a disparate technology portfolio. Reconsider the strategy. However, doing so can result in policy overhead that can impede or block progress. Leveraging the Kubernetes ecosystem As part of the broad technical strategy described earlier, organizations or teams might decide to select Kubernetes as their platform of choice because of the significant and growing ecosystem. This choice is distinct from selecting Kubernetes because of technical application dependencies, as described in the preceding section Explicit dependency on Kubernetes. The consideration to use the Kubernetes ecosystem places emphasis on an active community, rich third-party tooling, and strong standards and portability. Leveraging the Kubernetes ecosystem can accelerate your development velocity and reduce time to market. Existing in-house tooling In some cases, it can be advantageous to use existing tooling ecosystems in your organization or team (for any of the environments). 
For example, if you're using Kubernetes, you might opt to continue using deployment tooling like ArgoCD, security and policy tooling like Gatekeeper, and package management like Helm. Existing tooling might include established rules for organizational compliance automation and other functionality that might be costly or require a long lead-time to implement for an alternative target environment. Development team profiles An application or workload team might have prior experience with Kubernetes that can accelerate the team's velocity and capability to deliver on Autopilot. It can take time for a team to become proficient with a new runtime environment. Depending on the operating model, doing so can potentially lead to lower platform reliability during the upskilling period. For a growing team, hiring capability might influence an organization's choice of platform. In some markets, Kubernetes skills might be scarce and therefore command a hiring premium. Choosing an environment such as Cloud Run can help you to streamline the hiring process and allow for more rapid team growth within your budget. Operational support When you choose a runtime environment, consider the experience and abilities of your SRE, DevOps, and platforms teams, and other operational staff. The capabilities of the operational teams to effectively support the production environment are crucial from a reliability perspective. It's also critical that operational teams can support pre-production environments to ensure that developer velocity isn't impeded by downtime, reliance on manual processes, or cumbersome deployment mechanisms. If you use Kubernetes, a central operations or platform engineering team can handle Autopilot Kubernetes upgrades. Although the upgrades are automatic, operational staff will typically closely monitor them to ensure minimal disruptions to your workloads. Some organizations choose to manually upgrade control plane versions. GKE Enterprise also includes capabilities to streamline and simplify the management of applications across multiple clusters. In contrast to Autopilot, Cloud Run doesn't require ongoing management overhead or upgrades of the control plane. By using Cloud Run, you can simplify your operations processes. By selecting a single runtime environment, you can further simplify your operations processes. If you opt to use multiple runtime environments, you need to ensure that the team has the capacity, capabilities, and interest to support those runtime environments. Selection To begin the selection process, talk with the various stakeholders. For each application, assemble a working group that consists of developers, operational staff, representatives of any central technology governance group, internal application users and consumers, security, cloud financial optimization teams, and other roles or groups within your organization that might be relevant. You might choose to circulate an information-gathering survey to collate application characteristics, and share the results in advance of the session. We recommend that you select a small working group that includes only the required stakeholders. All representatives might not be required for every working session. You might also find it useful to include representatives from other teams or groups that have experience in building and running applications on either Autopilot or Cloud Run, or both. 
Use the technical and organizational considerations from this document to guide your conversation and evaluate your application's suitability for each of the potential platforms. We recommend that you schedule a check-in after some months have passed to confirm or revisit the decision based on the outcomes of deploying your application in the new environment. What's next Learn more about these runtime environments with the GKE Autopilot Qwik Start and the Cloud Run lab. Read more about migrating from Cloud Run to GKE and from Kubernetes to Cloud Run. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Henry Bell | Cloud Solutions ArchitectOther contributors: Marco Ferrari | Cloud Solutions ArchitectGari Singh | Outbound Product ManagerMaridi (Raju) Makaraju | Supportability Tech LeadParag Doshi | Key Enterprise ArchitectDaniel Stamer | Customer Engineer, Digital NativesSteren Giannini | Group Product ManagerVictor Szalvay | Senior Product ManagerWilliam Denniss | Group Product Manager Send feedback \ No newline at end of file diff --git a/Sensitive_Data_Protection.txt b/Sensitive_Data_Protection.txt new file mode 100644 index 0000000000000000000000000000000000000000..37537564ce935d0dca45e62dbce615bb76b9d7f8 --- /dev/null +++ b/Sensitive_Data_Protection.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/products/sensitive-data-protection +Date Scraped: 2025-02-23T12:09:03.438Z + +Content: +Cloud Data Loss Prevention (Cloud DLP) is now part of Sensitive Data Protection. Learn more.Sensitive Data ProtectionDiscover and protect your sensitive dataA fully managed service designed to help you discover, classify, and protect your valuable data assets with ease. Go to consoleContact sales Try our discovery service for BigQuery by scanning and profiling a single table of your choice. Product highlightsVisibility into sensitive data and risks across your entire organizationReduce data risk with obfuscation and de-identificationCover use cases anywhere, on or off cloud with flexible API supportWhat is automatic sensitive data discovery?FeaturesAutomated sensitive data discovery and classificationContinuously monitor your data assets across your entire organization, select organization folders, or individual projects. Powerful and easy-to-use UI available in the cloud console. Use data asset profiles to inform your security, privacy, and compliance posture. Choose from 200+ predefined detectors or add your own custom types, adjust detection thresholds, and create detection rules to fit your needs and reduce noise.Discovery in actionHow Charlotte Tilbury Beauty uses Google Cloud to respond to customer data requestsLearn moreSensitive data intelligence for security assessmentsSensitive Data Protection is deeply integrated into the Security Command Center Enterprise risk engine. This powerful combination continuously monitors your data, pinpoints your high-value assets, analyzes vulnerabilities, and simulates real-world attack scenarios. With this intelligence, you can proactively address security, posture, and threat risks and safeguard the data that drives your business.Google Security Command Center Enterprise6:30Powerful and flexible protection for your AI/ML workloadsSensitive Data Protection provides tools to classify and de-identify sensitive elements or unwanted content within your data. 
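As a hedged, minimal sketch of this kind of classification and de-identification (assuming the google-cloud-dlp Python client and a hypothetical project ID), the following replaces detected identifiers in free text with their infoType names:

from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/your-project-id/locations/global"  # hypothetical project ID

item = {"value": "Contact Jane Doe at jane.doe@example.com or 555-012-0100."}
inspect_config = {
    "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
}
deidentify_config = {
    "info_type_transformations": {
        "transformations": [
            # Replace each finding with its infoType name, for example [EMAIL_ADDRESS].
            {"primitive_transformation": {"replace_with_info_type_config": {}}}
        ]
    }
}

response = client.deidentify_content(
    request={
        "parent": parent,
        "deidentify_config": deidentify_config,
        "inspect_config": inspect_config,
        "item": item,
    }
)
print(response.item.value)

Replacing raw identifiers with their infoType names is one example of fine-grained data minimization.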
This fine-grained data minimization can help you prepare data for AI model training or tuning and protect at run time in workloads like chat, data collection, data display, and generative AI prompts and responses to ensure you adhere to regulations and internal policies.Data security with AI powered agentsbest practices for securing AI deployments using Sensitive Data Protection11:47De-identification, masking, tokenization, and bucketing Sensitive Data Protection helps you take a data-centric approach to securing your assets. De-identification enables you to transform your data to reduce data risk while retaining data utility. Additionally, you can use insights to apply column-level, fine-grained access, or dynamic masking policies.Secure AIExplore a data-focused approach to protecting gen AI applications with Google Sensitive Data ProtectionLearn moreCover use cases anywhere, on or off cloud with the DLP APICloud Data Loss Prevention and the DLP API are part of Sensitive Data Protection. Use the DLP API’s built-in support for various Google Cloud services. Additionally, the DLP API’s in-line content methods enable support for additional data sources, custom workloads, and applications on or off cloud.Cloud Data Loss Prevention APIDLPAPI referenceView all featuresOptions tableService typeDescriptionSuggested useSensitive data discoveryUsed to discover, scan, and classify across a wide set of dataMonitoring for sensitive data across a large set of assets, such as your entire data warehouseStorage inspectionTargeted, focused inspection to help you find every data element in Google Cloud storage systemsInvestigations or dealing with high-value unstructured data like chat logs stored in Google CloudHybrid inspectionTargeted, focused inspection to help you find every data element in storage systems outside Google CloudInvestigations or dealing with high-value unstructured data like chat logs stored outside Google CloudContent inspectionSynchronous, stateless inspection on data from anywhereInspecting in near real time or integrating into custom workloads, applications, or pipelines Content de-identificationSynchronous, stateless transformation on data from anywhereMasking, tokenizing, de-identifying in near real time or integrating into custom workloads, applications, or pipelinesLearn more about sensitive data discovery.Sensitive data discoveryDescriptionUsed to discover, scan, and classify across a wide set of dataSuggested useMonitoring for sensitive data across a large set of assets, such as your entire data warehouseStorage inspectionDescriptionTargeted, focused inspection to help you find every data element in Google Cloud storage systemsSuggested useInvestigations or dealing with high-value unstructured data like chat logs stored in Google CloudHybrid inspectionDescriptionTargeted, focused inspection to help you find every data element in storage systems outside Google CloudSuggested useInvestigations or dealing with high-value unstructured data like chat logs stored outside Google CloudContent inspectionDescriptionSynchronous, stateless inspection on data from anywhereSuggested useInspecting in near real time or integrating into custom workloads, applications, or pipelines Content de-identificationDescriptionSynchronous, stateless transformation on data from anywhereSuggested useMasking, tokenizing, de-identifying in near real time or integrating into custom workloads, applications, or pipelinesLearn more about sensitive data discovery.How It WorksTo use Sensitive Data Protection, you 
use one of its services, such as discovery, to scan your data for sensitive elements. You can enable post-scan actions, including alerting and automatic publishing to systems like Chronicle, Security Command Center, and Pub/Sub.View documentationCommon UsesGain awareness of your sensitive dataDiscovery: Continuous visibility into your sensitive dataUnderstand and manage your data risk across your organization. Continuous data monitoring across your organization can help you make more informed decisions, manage and reduce your data risk, and stay in compliance. With no jobs or overhead to manage, you can focus on the outcomes and your business.Learn about discoveryData Warehouse and Data LakeBigQuery, BigLakeRelational DatabasesCloud SQLRelational DatabasesObject StorageCloud Storage, Amazon S3Object StorageTutorials, quickstarts, & labsDiscovery: Continuous visibility into your sensitive dataUnderstand and manage your data risk across your organization. Continuous data monitoring across your organization can help you make more informed decisions, manage and reduce your data risk, and stay in compliance. With no jobs or overhead to manage, you can focus on the outcomes and your business.Learn about discoveryData Warehouse and Data LakeBigQuery, BigLakeRelational DatabasesCloud SQLRelational DatabasesObject StorageCloud Storage, Amazon S3Object StorageInvestigate your storageDeep inspection of structured and unstructured dataInspect your data in storage systems exhaustively and investigate individual findings.Schedule an inspection scanCloud StorageBigQueryDatastoreTutorials, quickstarts, & labsDeep inspection of structured and unstructured dataInspect your data in storage systems exhaustively and investigate individual findings.Schedule an inspection scanCloud StorageBigQueryDatastoreUnderstand sensitive anomaliesPrivacy risk analysis of your dataAssess data for privacy and re-identification risk. Risk analyses can help you see how the size, shape, and distribution of data can increase re-identification risk.Learn about re-identification risk analysisComputing k-anonymity for a datasetComputing I-diversity for a datasetTutorials, quickstarts, & labsPrivacy risk analysis of your dataAssess data for privacy and re-identification risk. Risk analyses can help you see how the size, shape, and distribution of data can increase re-identification risk.Learn about re-identification risk analysisComputing k-anonymity for a datasetComputing I-diversity for a datasetAutomate de-identificationDe-identification of structured and unstructured dataCreate de-identified copies of Cloud Storage data. De-identify Cloud Storage objects, folders, and buckets without needing to run your own pipeline or custom code.Announcing easier de-identification of Cloud Storage dataTutorials, quickstarts, & labsDe-identification of structured and unstructured dataCreate de-identified copies of Cloud Storage data. De-identify Cloud Storage objects, folders, and buckets without needing to run your own pipeline or custom code.Announcing easier de-identification of Cloud Storage dataAdvanced masking and de-identificationRun classification and de-identification in a BigQuery UDFDe-identify data while querying using a remote function. Inspect, de-identify, and tokenize BigQuery data from real-time query results to reduce exposure of sensitive data.De-identify BigQuery data at query timeTutorials, quickstarts, & labsRun classification and de-identification in a BigQuery UDFDe-identify data while querying using a remote function. 
Inspect, de-identify, and tokenize BigQuery data from real-time query results to reduce exposure of sensitive data.De-identify BigQuery data at query timeDe-identify: Redact and tokenize dataProtect sensitive data as you migrate to the cloudUnblock more workloads as you migrate to the cloud. Sensitive Data Protection helps you inspect and classify your sensitive data in structured and unstructured workloads. De-identification techniques like tokenization (pseudonymization) let you preserve the utility of your data for joining or analytics, while reducing the risk of handling the data, by obfuscating the raw sensitive identifiers.Learn about de-identifying sensitive dataDe-identifying sensitive dataRedacting sensitive data from imagesCreating Sensitive Data Protection de-identification templatesLearn about all transformation methods available with Sensitive Data ProtectionTutorials, quickstarts, & labsProtect sensitive data as you migrate to the cloudUnblock more workloads as you migrate to the cloud. Sensitive Data Protection helps you inspect and classify your sensitive data in structured and unstructured workloads. De-identification techniques like tokenization (pseudonymization) let you preserve the utility of your data for joining or analytics, while reducing the risk of handling the data, by obfuscating the raw sensitive identifiers.Learn about de-identifying sensitive dataDe-identifying sensitive dataRedacting sensitive data from imagesCreating Sensitive Data Protection de-identification templatesLearn about all transformation methods available with Sensitive Data ProtectionProtect high-value AI and ML workloadsPrepare data for model trainingFind and remove sensitive elements from your data before model training. Tailor this to your business needs with full control over the data types to remove or keep. Redacting sensitive data from textTransformation referenceRandom dictionary replacementAutomatic text redactionTutorials, quickstarts, & labsPrepare data for model trainingFind and remove sensitive elements from your data before model training. Tailor this to your business needs with full control over the data types to remove or keep. Redacting sensitive data from textTransformation referenceRandom dictionary replacementAutomatic text redactionRedact sensitive data elements in chatClassify and redact in Dialogflow CXRedact sensitive data from unstructured chat logs. Leverage the power of our inspection and de-identification engine to remove sensitive data from your Dialogflow (Contact Center AI) admin logs.Read more about how to redact sensitive dataTutorials, quickstarts, & labsClassify and redact in Dialogflow CXRedact sensitive data from unstructured chat logs. Leverage the power of our inspection and de-identification engine to remove sensitive data from your Dialogflow (Contact Center AI) admin logs.Read more about how to redact sensitive dataPricingHow our pricing worksDiscovery is billed based on the pricing mode you select. 
Inspection and transformation pricing is based on total bytes processed.Category or typeDescriptionPrice USDDiscoveryConsumption mode$0.03/GBFixed-rate subscription modeA default discovery subscription is included at no charge with the purchase of a Security Command Center Enterprise subscription.$2500/unitInspection and transformationUp to 1 GBFreeInspection of Google Cloud storage systemsStarting at$1/GB Lower with volumeInspection of data from any source (hybrid inspection)Starting at$3/GBLower with volumeIn-line content inspectionStarting at$3/GBLower with volumeIn-line content de-identificationStarting at$2/GB Lower with volumeRisk analysisAnalyze sensitive data to find properties that might increase the risk of subjects being identifiedNo Sensitive Data Protection charges* Risk analysis uses resources in BigQuery; charges appear as BigQuery usageLearn more about Sensitive Data Protection pricing.How our pricing worksDiscovery is billed based on the pricing mode you select. Inspection and transformation pricing is based on total bytes processed.DiscoveryDescriptionConsumption modePrice USD$0.03/GBFixed-rate subscription modeA default discovery subscription is included at no charge with the purchase of a Security Command Center Enterprise subscription.Description$2500/unitInspection and transformationDescriptionUp to 1 GBPrice USDFreeInspection of Google Cloud storage systemsDescriptionStarting at$1/GB Lower with volumeInspection of data from any source (hybrid inspection)DescriptionStarting at$3/GBLower with volumeIn-line content inspectionDescriptionStarting at$3/GBLower with volumeIn-line content de-identificationDescriptionStarting at$2/GB Lower with volumeRisk analysisDescriptionAnalyze sensitive data to find properties that might increase the risk of subjects being identifiedPrice USDNo Sensitive Data Protection charges* Risk analysis uses resources in BigQuery; charges appear as BigQuery usageLearn more about Sensitive Data Protection pricing.Pricing CalculatorEstimate your monthly costs.Estimate your costsCustom QuoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptNew customers get $300 in free credits.Try Sensitive Data ProtectionSee Sensitive Data Protection in action.Profile a table in test modeSensitive data discovery for your data warehouseGet startedDe-identify sensitive data stored in Cloud StorageLearn moreTry our classification engine for yourselfView demoBusiness CaseExplore how other businesses cut costs, increase ROI, and drive innovation with Sensitive Data Protection Sunway GroupSunway Group uses Cloud DLP to classify and protect sensitive dataRead customer storyRelated contentSittercity uses Cloud DLP API to detect and redact personal dataPayPal uses Cloud DLP to secure sensitive dataAmbra Health uses Cloud DLP to redact sensitive patient data at scalePartners & IntegrationSensitive Data Protection is used inPartner HighlightSensitive Data Protection powers Workspace DLP for your workforce collaboration. Create rules to protect data in applications such as Gmail, document sharing, and real-time chat.Workspace DLPPartner HighlightData inspection in BeyondCorp Enterprise. Prevent data loss and thwart threats such as malware and phishing. Utilize real-time alerts and detailed reports, all built into the Chrome Browser.BeyondcorpPartner HighlightRedact sensitive data from unstructured chat logs. 
Leverage the power of our inspection and de-identification engine to remove sensitive data from your Dialogflow (Contact Center AI) admin logs.Read more about how to redact sensitive dataWorkspace DLPPartner HighlightSensitive Data Protection powers Workspace DLP for your workforce collaboration. Create rules to protect data in applications such as Gmail, document sharing, and real-time chat.Workspace DLPBeyondCorp EnterprisePartner HighlightData inspection in BeyondCorp Enterprise. Prevent data loss and thwart threats such as malware and phishing. Utilize real-time alerts and detailed reports, all built into the Chrome Browser.BeyondcorpCloud AIPartner HighlightRedact sensitive data from unstructured chat logs. Leverage the power of our inspection and de-identification engine to remove sensitive data from your Dialogflow (Contact Center AI) admin logs.Read more about how to redact sensitive dataGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Serverless.txt b/Serverless.txt new file mode 100644 index 0000000000000000000000000000000000000000..3d406f8fcd362e25fe30e6798bff2de4aad0756a --- /dev/null +++ b/Serverless.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/serverless +Date Scraped: 2025-02-23T12:09:37.441Z + +Content: +Learn how gen AI can help streamline software delivery, application migration, and modernization. Register now.Jump to ServerlessServerlessDevelop and deploy highly scalable applications and functions on a fully managed serverless platform. Our serverless computing automatically scales your services up or down, even to zero, depending on traffic, and you pay only for what you use.Start a free trial and get $300 in free credits.Go to consoleContact salesLearn how to build a serverless application with these Cloud Run and Cloud Run functions guidesGoogle Cloud serverless customers deploy 95% faster and reduce infrastructure costs by 75%Learn more about what customers are sayingBLOGNow in preview: Host your LLMs on Cloud RunKey featuresProductsCloud RunBuild applications in your favorite language, dependencies, and tools, and deploy them in seconds. Cloud Run abstracts away all infrastructure management by automatically scaling up and down from zero almost instantaneously—depending on traffic. Cloud Run only charges you for the exact resources you use.Cloud Run functionsDevelop faster by writing and running small code snippets that respond to events. Use Cloud Run functions to connect with Google Cloud or third-party cloud services via triggers to streamline challenging orchestration problems. Run functions across multiple environments (local development environment, on-premises, Cloud Run, and other Knative-based serverless environments) and prevent lock-in. GPU support(Now in public preview) Access to NVIDIA L4 GPUs on demand for running AI inference workloads. GPU instances start in five seconds and scale to zero. GPUs are now supported in Cloud Run and Cloud Run functions.Service integrationWhen integrating services, it's all too easy to introduce tight coupling, which grows brittle, slow, and difficult to debug over time. Let our service integration products take care of the connective tissue, so you can do what you do best: building brilliant applications.CustomersCustomersEnterprises can innovate without worrying about provisioning machines, clusters, or autoscaling. 
No knowledge of containers or Kubernetes is required.Blog postLeverages serverless and BigQuery to empower product innovation through their Beauty Tech Data Platform5-min readBlog postLes Echos Le Parisien announces scaling quickly to new markets with Cloud Run5-min readVideoVeolia uses Cloud Run to remove the barriers of managed platformsVideo (48:32)See all customersUse casesServerless WorkloadsUse caseWeb services: WebsitesBuild your website with Cloud Run using your favorite language or framework (Go, Python, Java, Node.js, .NET, and more), access your SQL database on Cloud SQL, and render dynamic HTML pages.Use caseIntegration with third-party services and APIsUse Cloud Run functions to surface your own microservices via HTTP APIs or integrate with third-party services that offer webhook integrations to quickly extend your application with powerful capabilities, such as sending a confirmation email after a successful Stripe payment or responding to Twilio text message events.Use caseAI inferenceUse Cloud Run GPUs to power real-time inference with open-source models such as Gemma 3, Llama 3.2, or custom fine-tuned models. Build chatbots, generate on-the-fly document summaries, and more - all while scaling to handle unpredictable traffic spikes. Or, with GPUs on Cloud Run functions your data scientist can run Python scripts and perform event-driven inference with limited knowledge of the underlying infrastructure.Use caseIT process automationAutomate cloud infrastructure with Eventarc triggers and workflows that control Google Cloud services. For example, schedule a monthly workflow to detect and remediate security compliance issues. Iterating through critical resources and IAM permissions, send required requests for approval renewal using a Cloud Run function. Remove access for any permissions not renewed within 14 days.Use caseWeb services: REST APIs backendModern mobile apps commonly rely on RESTful backend APIs to provide current views of application data and separation for frontend and backend development teams. API services running on Cloud Run allow developers to persist data reliably on managed databases, such as Cloud SQL or Firestore (NoSQL). Logging in to Cloud Run grants users access to app‐resource data stored in Cloud Databases.Use caseReal-time analyticsRun real-time analytics on files streamed from Cloud Storage into BigQuery using Cloud Run functions. Build security threat analysis on incoming logs that draw insights and highlight malicious behavior.View all technical guidesAll featuresCapabilitiesAny runtimeModern languages or runtimes are usually appropriate for new applications, but many existing applications either can’t be rewritten, or depend on a language that the serverless platform does not support. Cloud Run supports standard Docker images and can run any runtime, or runtime version in a container.Per-instance concurrencyMany traditional applications underperform when constrained to a single-request model that’s common in FaaS platforms. Cloud Run allows for up to 1,000 concurrent requests on a single instance of an application, providing a far greater level of efficiency.GPU support (Now in public preview) Access to NVIDIA L4 GPUs on demand for running AI inference workloads. GPU instances start in five seconds and scale to zero. GPUs are now supported in Cloud Run and Cloud Run functions.Background processingServerless platforms often "freeze" the function when it's not in use. 
This makes for a simplified billing model (only pay while it's running), but can make it difficult to run workloads that expect to do work in the background. Cloud Run supports new CPU allocation controls, which allow these background processes to run as expected.Experiment and test ideas quicklyIn just a few clicks, you can perform gradual rollouts and rollbacks, and perform advanced traffic management in Cloud Run. No container knowledge necessaryStart with a container or use buildpacks to create container images directly from source code. With a single “gcloud run deploy” command, you can build and deploy your code to Cloud Run. Built-in tutorialsBuilt-in tutorials in Cloud Shell Editor and Cloud Code make it easy to come up to speed on serverless. No more switching between tabs, docs, your terminal, and your code. You can even author your own tutorials, allowing your organization to share best practices and onboard new hires faster. PricingPricingCloud Run is pay-per-use, with an always-free tier, rounded up to the nearest 100 millisecond. Total cost is the sum of used CPU, Memory, Requests, and Networking.Use the Google Cloud pricing calculator for an estimate.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Service_Catalog.txt b/Service_Catalog.txt new file mode 100644 index 0000000000000000000000000000000000000000..d09f58050d7a63a04ac5a1745e211e1e439a755c --- /dev/null +++ b/Service_Catalog.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/service-catalog/docs +Date Scraped: 2025-02-23T12:05:46.439Z + +Content: +Home Service Catalog Documentation Stay organized with collections Save and categorize content based on your preferences. Service Catalog documentation View all product documentation With Service Catalog, developers and cloud admins can make their solutions discoverable to their internal enterprise users. Cloud admins can manage their solutions and ensure their users are always launching the latest versions. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Service Catalog quickstart Creating a catalog Creating solutions Assigning solutions to a catalog Sharing a catalog Managing solutions Viewing and launching solutions info Resources Release notes Pricing Quotas and limits Related videos \ No newline at end of file diff --git a/Service_architecture.txt b/Service_architecture.txt new file mode 100644 index 0000000000000000000000000000000000000000..35db6d466faae0d0e4d426398a1be0e2e88951dc --- /dev/null +++ b/Service_architecture.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/enterprise-application-blueprint/service-architecture +Date Scraped: 2025-02-23T11:47:07.783Z + +Content: +Home Docs Cloud Architecture Center Send feedback Service architecture Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2024-12-13 UTC A Kubernetes service is an abstraction that lets you expose a set of pods as a single entity. Services are fundamental building blocks for exposing and managing containerized applications in a Kubernetes cluster. Services in this blueprint are architected in a standardized manner with respect to namespaces, identity, service exposure, and service-to-service communication. Namespaces Each namespace has its own set of resources, such as pods, services, and deployments. Namespaces let you organize your applications and isolate them from each other. The blueprint uses namespaces to group services by their purpose. For example, you can create a namespace for all your frontend services and a namespace for your backend services. This grouping makes it easier to manage your services and to control access to them. Service exposure A service is exposed to the internet through the GKE Gateway controller. The GKE Gateway controller creates a load balancer using Cloud Load Balancing in a multi-cluster, multi-region configuration. Cloud Load Balancing uses Google's network infrastructure to provide the service with an anycast IP address that enables low-latency access to the service. Clients access the service over HTTPS connections, and client HTTP requests are redirected to HTTPS. The load balancer uses Certificate Manager to manage public certificates. Services are further protected by Google Cloud Armor and served through Cloud CDN. The following diagram shows how services are exposed to the internet. Cloud Service Mesh The blueprint uses Cloud Service Mesh for mutual authentication and authorization for all communications between services. For this deployment, Cloud Service Mesh uses CA Service for issuing TLS certificates to authenticate peers and to help ensure that only authorized clients can access a service. Using mutual TLS (mTLS) for authentication also helps ensure that all TCP communications between services are encrypted in transit. For service ingress traffic into the service mesh, the blueprint uses the GKE Gateway controller. Distributed services A distributed service is an abstraction of a Kubernetes service that runs in the same namespace across multiple clusters. A distributed service remains available even if one or more GKE clusters are unavailable, as long as any remaining healthy clusters are able to serve the load. To create a distributed service across clusters, Cloud Service Mesh provides Layer 4 and Layer 7 connectivity between an application's services on all clusters in the environment. This connectivity enables the Kubernetes services on multiple clusters to act as a single logical service. Traffic between clusters is routed to another region only if intra-region traffic is not possible because of a regional failure. Service identity Services running on GKE have identities that are associated with them. The blueprint configures Workload Identity Federation for GKE to let a Kubernetes service account act as a Google Cloud service account. Each instance of a distributed service within the same environment has a common identity, which simplifies permission management. When accessing Google Cloud APIs, services that run as the Kubernetes service account automatically authenticate as the Google Cloud service account. Each service has only the minimal permissions necessary for the service to operate. What's next Read about logging and monitoring (next document in this series). 
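Before moving on, here is a hedged, minimal sketch of the namespace, identity, and service pattern described above (assuming the official kubernetes Python client; the frontend namespace, labels, and Google Cloud service account are hypothetical, not the blueprint's actual manifests):

from kubernetes import client, config

# Assumes kubectl credentials for the GKE cluster are already configured locally.
config.load_kube_config()
core = client.CoreV1Api()

# Group frontend services into their own namespace.
core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name="frontend")))

# Kubernetes service account that acts as a Google Cloud service account through
# Workload Identity (the annotation and account name below are illustrative).
core.create_namespaced_service_account(
    namespace="frontend",
    body=client.V1ServiceAccount(
        metadata=client.V1ObjectMeta(
            name="frontend-app",
            annotations={
                "iam.gke.io/gcp-service-account": "frontend-app@example-project.iam.gserviceaccount.com"
            },
        )
    ),
)

# Expose the frontend pods as a single logical Service inside the namespace;
# external exposure would additionally go through the GKE Gateway controller.
core.create_namespaced_service(
    namespace="frontend",
    body=client.V1Service(
        metadata=client.V1ObjectMeta(name="frontend-app"),
        spec=client.V1ServiceSpec(
            selector={"app": "frontend-app"},
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    ),
)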
Send feedback \ No newline at end of file diff --git a/Service_networking.txt b/Service_networking.txt new file mode 100644 index 0000000000000000000000000000000000000000..35f2d422a747a59a582d28cebe2e4337d1c06c15 --- /dev/null +++ b/Service_networking.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/ccn-distributed-apps-design/service-networking +Date Scraped: 2025-02-23T11:50:54.683Z + +Content: +Home Docs Cloud Architecture Center Send feedback Service networking for distributed applications in Cross-Cloud Network Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-30 UTC This document is part of a design guide series for Cross-Cloud Network. The series consists of the following parts: Cross-Cloud Network for distributed applications Network segmentation and connectivity for distributed applications in Cross-Cloud Network Service networking for distributed applications in Cross-Cloud Network (this document) Network security for distributed applications in Cross-Cloud Network This document describes how to assemble an application from a set of chosen or created component services. We recommend you read through the entire document once before following the steps. This document guides you through the following decisions: Whether you create the individual service yourself or consume a third-party service Whether the service is available globally or regionally Whether the service is consumed from on-premises, from other clouds, or from neither Whether you access the service endpoint through a shared services VPC or distribute the endpoints through all the relevant application VPCs This document guides you through the following steps: Deciding if your application is global or regional Choosing third-party managed services or creating and publishing your own services Setting up private access to the service endpoints using either a shared or dedicated mode Assembling the services into applications to match either a global or regional archetype Developers define the service networking layer for Cross-Cloud Network. By this stage, administrators have designed a connectivity layer for Cross-Cloud Network that allows flexibility in the service networking options described in this document. In some cases, constraints from limited cross-VPC transitivity exist. We describe these constraints when they can affect design decisions. Decide whether your application is regional or global Determine if the customers of the application you are creating need a regional or global deployment archetype. You can achieve regional resiliency by spreading loads across the zones of a region. You can achieve global resiliency by spreading loads across regions. Consider the following factors when choosing an archetype: The availability requirements of the application The location where the application is consumed Cost For details, see Google Cloud deployment archetypes. This design guide discusses how to support the following deployment archetypes: Google Cloud multi-regional deployment archetype Google Cloud global deployment archetype In a cross-cloud distributed application, different services of that application can be delivered from different cloud service providers (CSPs) or private data centers. To help ensure a consistent resiliency structure, put services hosted in different CSPs into CSP data centers that are geographically near each other. 
The following diagram shows a global application stack that's distributed across clouds and different application services are deployed in different CSPs. Each global application service has workload instances in different regions of the same CSP. Define and access application services To assemble your application, you can use existing third-party managed services, create and host your own application services, or use a combination of both. Use existing third-party managed services Decide which third-party managed services you can use for your application. Determine which ones are constructed as regional services or global services. Also, determine which private access options each service supports. When you know which managed services you can use, you can determine which services you need to create. Create and access application services Each service is hosted by one or more workload instances that can be accessed as a single endpoint or as a group of endpoints. The general pattern for an application service is shown in the following diagram. The application service is deployed across a collection of workload instances. (In this case, a workload instance could be a Compute Engine VM, a Google Kubernetes Engine (GKE) cluster, or some other backend that runs code.) The workload instances are organized as a set of backends that are associated with a load balancer. The following diagram shows a generic load balancer with a set of backends. To achieve the chosen load distribution and to automate failovers, these groups of endpoints use a frontend load balancer. By using managed instance groups (MIGs), you can elastically increase or decrease the capacity of the service by autoscaling the endpoints that form the backend of the load balancer. Furthermore, depending on the requirements of the application service, the load balancer can also provide authentication, TLS termination, and other connection specific services. Determine the scope of the service - regional or global Decide if your service needs and can support regional or global resiliency. A regional software service can be designed for synchronization within a regional latency budget. A global application service can support synchronous failovers across nodes that are distributed across regions. If your application is global, you might want the services supporting it to be global as well. But, if your service requires synchronization among its instances to support failover, you must consider the latency between regions. For your situation, you might have to rely on regional services with redundancy in the same or nearby regions, thus supporting low-latency synchronization for failover. Cloud Load Balancing supports endpoints that are hosted either within a single region or distributed across regions. Thus, you can create a global customer-facing layer that speaks to global, regional, or hybrid service layers. Choose your service deployments to ensure that dynamic network failover (or load balancing across regions) aligns with the resiliency state and capabilities of your application logic. The following diagram shows how a global service that's built from regional load balancers can reach backends in other regions, thus providing automatic failover across regions. In this example, the application logic is global and the chosen backend supports synchronization across regions. Each load balancer primarily sends requests to the local region, but can failover to remote regions. 
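As a rough illustration of the pattern above (workload instances organized as managed instance group backends behind a load balancer), the following gcloud sketch creates an autoscaled regional MIG and attaches it to a backend service. All names, the region, and the sizes are hypothetical, and a complete deployment also needs an instance template, a health check, a frontend, and the cross-region wiring discussed in this section.

# Hypothetical regional managed instance group built from an existing instance template
gcloud compute instance-groups managed create web-mig \
    --region=us-central1 --template=web-template --size=3

# Autoscale the group so that service capacity follows load
gcloud compute instance-groups managed set-autoscaling web-mig \
    --region=us-central1 --min-num-replicas=3 --max-num-replicas=10 \
    --target-cpu-utilization=0.6

# Attach the MIG as one backend of a backend service used by the load balancer
gcloud compute backend-services add-backend web-backend-service \
    --global --instance-group=web-mig --instance-group-region=us-central1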
A global backend is a collection of regional backends that are accessed by one or more load balancers. Although the backends are global, the frontend of each load balancer is regional. In this architecture pattern, load balancers primarily distribute traffic only within their region, but can also balance traffic to other regions when the resources in the local region are unavailable. A set of regional load balancer frontends, each accessible from other regions and each able to reach backends in other regions, form an aggregate global service. To assemble a global application stack, as discussed in Design global application stacks, you can use DNS routing and health checks to achieve cross-regional failover of the frontends. The load balancer frontends are themselves accessible from other regions using global access (not shown in diagram). This same pattern can be used to include published services with global failover. The following diagram depicts a published service that uses global backends. In the diagram, note that the published service has global failover implemented in the producer environment. The addition of global failover in the consumer environment enables resilience to regional failures in the consumer load balancing infrastructure. Cross-regional failover of the service must be implemented both in the service application logic and in the load balancing design of the service producer. Other mechanisms can be implemented by the service producers. To determine which Cloud Load Balancing product to use, you must first determine what traffic type your load balancers must handle. Consider the following general rules: Use an Application Load Balancer for HTTP(S) traffic. Use a proxy Network Load Balancer for non-HTTP(S) TCP traffic. This proxy load balancer also supports TLS offload. Use a passthrough Network Load Balancer to preserve the client source IP address in the header, or to support additional protocols like UDP, ESP, and ICMP. For detailed guidance on selecting the best load balancer for your use case, see Choose a load balancer. Services with serverless backends A service can be defined using serverless backends. The backend in the producer environment can be organized in a Serverless NEG as a backend to a load balancer. This service can be published using Private Service Connect by creating a service attachment that's associated with the frontend of the producer load balancer. The published service can be consumed through Private Service Connect endpoints or Private Service Connect backends. If the service requires producer-initiated connections, you can use a Serverless VPC Access connector to let Cloud Run, App Engine standard, and Cloud Run functions environments send packets to the internal IPv4 addresses of resources in a VPC network. Serverless VPC Access also supports sending packets to other networks connected to the selected VPC network. Methods for accessing services privately Your application can consist of managed services provided by Google, third-party services provided by outside vendors or peer groups in your organization, and services that your team develops. Some of those services might be accessible over the internet using public IP addresses. This section describes the methods you can use to access those services using the private network. 
The following service types exist: Google public APIs Google serverless APIs Published managed services from Google Published managed services from vendors and peers Your published services Keep these options in mind when reading subsequent sections. Depending on how you allocate your services, you can use one or more of the private access options described. Note: This guide assumes that all interaction between services happens privately. However, if you are using two external services and those services interact directly, how they interact is outside the scope of this guide. The organization (or group within an organization) that assembles, publishes, and manages a service is referred to as the service producer. You and your application are referred to as the service consumer. Some managed services are published exclusively using private services access. The network designs recommended in Internal connectivity and VPC networking accommodate services published with private service access and Private Service Connect. For an overview of the options for accessing services privately, see Private access options for services. We recommend using Private Service Connect to connect to managed services whenever possible. For more information on deployment patterns for Private Service Connect, see Private Service Connect deployment patterns. There are two types of Private Service Connect, and the different services can be published as either type: Private Service Connect endpoints Private Service Connect backends Services published as Private Service Connect endpoints can be consumed directly by other workloads. The services rely on the authentication and resiliency provisioned by the producer of the service. If you want additional control over the service authentication and resiliency, you can use Private Service Connect backends to add a layer of load balancing for authentication and resiliency in the consumer network. The following diagram shows services being accessed through Private Service Connect endpoints: The diagram shows the following pattern: A Private Service Connect endpoint is deployed in the consumer VPC, which makes the producer service available to VMs and GKE nodes. Both the consumer and producer networks must be deployed in the same region. The preceding diagram shows endpoints as regional resources. Endpoints are reachable from other regions because of global access. For more information on deployment patterns, see Private Service Connect deployment patterns. Private Service Connect backends use a load balancer configured with Private Service Connect network endpoint group (NEG) backends. For a list of supported load balancers, see About Private Service Connect backends. Private Service Connect backends let you create the following backend configurations: Customer-owned domains and certificates in front of managed services Consumer-controlled failover between managed services in different regions Centralized security configuration and access control for managed services In the following diagram, the global load balancer uses a Private Service Connect NEG as a backend that establishes communication to the service provider. No further networking configuration is required and the data is carried over Google's SDN fabric. Most services are designed for connections that the consumer initiates. When services need to initiate connections from the producer, use Private Service Connect interfaces. 
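The following is a minimal sketch of creating a Private Service Connect endpoint in a consumer VPC, along the lines of the endpoint pattern described above. The network, subnet, address, and service attachment names are hypothetical, and the producer must already have published the service through a service attachment.

# Reserve an internal IP address for the endpoint in the consumer VPC
gcloud compute addresses create psc-endpoint-ip \
    --region=us-central1 --subnet=consumer-subnet --addresses=10.10.0.5

# Create the endpoint: a forwarding rule that targets the producer's service attachment
gcloud compute forwarding-rules create psc-endpoint \
    --region=us-central1 --network=consumer-vpc --address=psc-endpoint-ip \
    --target-service-attachment=projects/producer-project/regions/us-central1/serviceAttachments/my-service

Workloads in the consumer VPC then reach the published service through the endpoint's internal IP address. For the backend variant, the same service attachment is instead referenced from a Private Service Connect NEG behind a consumer-managed load balancer.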
A key consideration when deploying private service access or Private Service Connect is transitivity. Private Service Connect consumer access points are reachable over Network Connectivity Center. Published services are not reachable across a VPC Network Peering connection for either Private Service Connect or private services access consumer access points. In the absence of inter-VPC transitivity for all service consumer access points, the location of the service access subnet or endpoints in the VPC topology dictates whether you design the network for shared or dedicated service deployment. Options such as HA VPN and customer managed proxies provide methods to allow inter-VPC transitive communication. Private Service Connect Endpoints aren't reachable over VPC Network Peering. If you require this type of connectivity, deploy an internal load balancer and Private Service Connect NEG as a backend, as shown in the following diagram: Google APIs can be accessed privately by using Private Service Connect endpoints and backends. The use of endpoints is generally recommended as the Google API producer provides resiliency and certificate based authentication. Create a Private Service Connect endpoint in every VPC in which the service needs to be accessed. Because the consumer IP address is a private global IP address, a single DNS mapping for each service is required, even if there are endpoint instances in multiple VPCs, as shown in the following diagram: Define consumption patterns for published services Published services can run in a variety of locations: your VPC network, another VPC network, in an on-premises data center, or in the cloud. Regardless of where the service workload runs, your application consumes those services by using an access point, such as one of the following: An IP address in a private services access subnet A Private Service Connect endpoint A VIP for a load balancer using Private Service Connect NEGs The consumer access points can be shared across networks or dedicated to a single network. Decide whether to create shared or dedicated consumer access points based on whether your organization delegates the task of creating consumer service access points to each application group or manages the access to services in a consolidated manner. The management of service access involves the following activities: Creating the access points Deploying the access points in a services access VPC, which is a VPC that has the appropriate type of reachability Registering the consumer access points' IP addresses and URLs in DNS Managing security certificates and resiliency for the service in the consumer space, when adding load balancing in front of the consumer access points For some organizations, it might be viable to assign service access management to a central team, while others might be structured to give more independence to each consumer or application team. A byproduct of operating in the dedicated mode is that some of the elements are replicated. For example, a service is registered with multiple DNS names by each application group and manages multiple sets of TLS certificates. The VPC design described in Network segmentation and connectivity for distributed applications in Cross-Cloud Network enables reachability for deploying service access points in either a shared or dedicated mode. Shared consumer access points are deployed in service access VPCs, which can be accessed from any other VPC or external network. 
Dedicated consumer access points are deployed in application VPCs, which can be accessed only from resources within that application VPC. The main difference between a service access VPC and an application VPC is the service access point transitive connectivity that a service access VPC enables. Service access VPCs aren't limited to hosting consumer access points. A VPC can host application resources, as well as shared consumer access points. In such a case, the VPC should be configured and handled as a service VPC. Shared managed services access For all service consumption methods, including Private Service Connect, ensure that you do the following tasks: Deploy the services consumer access points in a services VPC. Service VPCs have transitive reachability to other VPCs. If the service access VPC is connected with HA VPN, advertise the subnet for the consumer access point as a custom route advertisement from the Cloud Router that peers to other networks over HA VPN. For Google APIs, advertise the host IP address of the API. Update multicloud firewall rules to allow the private services access subnet. For private service access specifically, ensure that you can fulfill the following additional requirements: Export custom routes to the service producer's network. For more information, see On-premises hosts can't communicate with the service producer's network. Create ingress firewall rules to allow the private services access subnet into the application VPCs Create ingress firewall rules to allow the private services access subnet into the services VPC For serverless service access, ensure that you can fulfill the following requirements: Access connector requires a dedicated /28 regular subnet Cloud Router advertises regular subnets by default Create ingress firewall rules to allow the VPC Access connector subnet within the VPC(s) Update multicloud firewall rules to allow the VPC Access connector subnet Create ingress firewall rule(s) to allow the private services access subnet into the application VPC(s) Dedicated managed services access Ensure that you do the following tasks: In each application VPC where access is needed, deploy a forwarding rule for the service to create an access point. For private service access, create ingress firewall rule(s) to allow the private services access subnet into the application VPC(s). For serverless service access, ensure that you can fulfill the following requirements: Access connector requires a dedicated /28 regular subnet Create ingress firewall rules to allow the VPC Access connector subnet within the application VPC(s) Assemble the application stack This section describes assembling a regional or global application stack. Design regional application stacks When an application follows the regional or multi-regional deployment archetypes, use regional application stacks. Regional application stacks can be thought of as a concatenation of regional application service layers. For example, a regional web service layer that talks with an application layer in the same region, which in turn talks to a database layer in the same region, is a regional application stack. Each regional application service layer uses load balancers to distribute traffic across the endpoints of the layer in that region. Reliability is achieved by distributing the backend resources across three or more zones in the region. Application service layers in other CSPs or on-premises data centers should be deployed in equivalent regions in the external networks. 
Also, make published services available in the region of the stack. To align the application stack within a region, the application service layer URLs have to resolve to the specific load balancer frontend regional IP address. These DNS mappings are registered in the relevant DNS zone for each application service. The following diagram shows a regional stack with active-standby resiliency: A complete instance of the application stack is deployed in each region across the different cloud data centers. When a regional failure occurs on any of the application service layers, the stack in the other region takes over delivery of the entire application. This failover is done in response to out-of-band monitoring of the different application service layers. When a failure is reported for one of the service layers, the frontend of the application is re-anchored to the backup stack. The application is written to reference a set of regional name records that reflect the regional IP address stack in DNS so that each layer of the application maintains its connections within the same region. Design global application stacks When an application follows the global application deployment archetype, each application service layer includes backends in multiple regions. Including backends in multiple regions expands the resiliency pool for the application service layer beyond a single region and enables automated failover detection and reconvergence. The following diagram shows a global application stack: The preceding diagram shows a global application assembled from the following components: Services running in on-premises data centers with load balancer frontends. The load balancer access points are reachable over Cloud Interconnect from the transit VPC. A transit VPC hosts hybrid connections between the external data center and the application VPC. An application VPC that hosts the core application running on workload instances. These workload instances are behind load balancers. The load balancers are reachable from any region in the network and they can reach backends in any region in the network. A services VPC that hosts access points for services running in other locations, such as in third party VPCs. These service access points are reachable over the HA VPN connection between the services VPC and the transit VPC. Service producer VPCs that are hosted by other organizations or the primary organization and applications that run in other locations. The relevant services are made reachable by Private Service Connect backends deployed as global backends to regional load balancers hosted in the services VPC. The regional load balancers are reachable from any other region. If you want the created application to be reachable from the internet, you can add a global external Application Load Balancer that points to the application workloads in the application VPC (not shown in the diagram). To support a global application stack, we used global backends for each application layer. This allows recovery from a failure of all backend endpoints in one region. Each region has a regional load balancer frontend for each application service layer. When a regional failover occurs, the internal regional load balancer frontends can be reached across regions, because they use global access. Because the application stack is global, DNS geolocation routing policies are used to select the most appropriate regional frontend for a particular request or flow. 
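As a sketch of the DNS geolocation routing mentioned above, the following gcloud command creates a record whose answer depends on the region closest to the querying client. The zone, hostname, and regional frontend IP addresses are hypothetical; health checking of the frontends can additionally be enabled so that traffic fails over automatically, as described next.

# Hypothetical geolocation routing policy that returns the nearest regional frontend IP address
gcloud dns record-sets create app.internal.example.com. \
    --zone=internal-zone --type=A --ttl=30 \
    --routing-policy-type=GEO \
    --routing-policy-data="us-central1=10.1.0.10;europe-west1=10.2.0.10"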
In case of a frontend failure, DNS health checks can be used to automate the failover from one frontend to another. Services published using Private Service Connect backends benefit from Private Service Connect global access. This feature allows a Private Service Connect backend to be reachable from any region and lessens disruptions from application service layer failures. This means the Private Service Connect backends can be leveraged as global backends as described in Determine the scope of the Service - regional or global. Provide private access to services hosted in external networks You might want to publish a local access point for a service hosted in another network. In these cases, you can use an internal regional TCP proxy load balancer using hybrid NEGs. You can create a service producer that's hosted on-premises or in other cloud environments that are available to service consumers (clients) in your VPC network, as shown in the following diagram: If you want to make the hybrid service available in a VPC network other than the one hosting the load balancer, you can use Private Service Connect to publish the service. By placing a service attachment in front of your internal regional TCP proxy load balancer, you can let clients in other VPC networks reach the hybrid services running on-premises or in other cloud environments. In a cross-cloud environment, the use of a hybrid NEG allows for secure application-to-application communication. When a different organization publishes an application service, use a hybrid NEG to extend private access abstractions for that service. The following diagram illustrates this scenario: In the preceding diagram, the application service layer is fully composed in the neighboring CSP, which is highlighted in the parts of the diagram that aren't grayed out. The hybrid load balancers are used in conjunction with Private Service Connect service attachments as a mechanism to publish the external application service for private consumption within Google Cloud. The hybrid load balancers with hybrid NEGs and Private Service Connect service attachments are in a VPC that's part of a service producer project. This service producer project might usually be a different VPC than the transit VPC, because it is within the administrative scope of the producer organization or project, and therefore separate from the common transit services. The producer VPC doesn't need to be connected over VPC peering or HA VPN with the consumer VPC (which is the Services Shared VPC in the diagram). Centralize service access Service access can be centralized into a VPC network and accessed from other application networks. The following diagram shows the common pattern that enables the centralization of the access points: The following diagram shows all services being accessed from a dedicated services VPC: When services are frontended with application load balancers, you can consolidate onto fewer load balancers by using URL maps to steer traffic for different service backends instead of using different load balancers for each service. In principle, an application stack could be fully composed using a single application load balancer plus service backends and appropriate URL mappings. In this implementation, you must use hybrid NEGs across VPCs for most backend types. The exception is a Private Service Connect NEG or backend, as described in Explicit Chaining of Google Cloud L7 Load Balancers with Private Service Connect. 
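To illustrate the hybrid NEG mechanism described above, the following sketch registers an on-premises endpoint in a hybrid connectivity NEG that a proxy load balancer can then use as a backend. The zone, network, IP address, and port are hypothetical placeholders.

# Hypothetical hybrid connectivity NEG that points at a workload outside Google Cloud
gcloud compute network-endpoint-groups create on-prem-neg \
    --zone=us-central1-a --network=producer-vpc \
    --network-endpoint-type=NON_GCP_PRIVATE_IP_PORT

# Add the on-premises endpoint, which is reachable over Cloud Interconnect or VPN
gcloud compute network-endpoint-groups update on-prem-neg \
    --zone=us-central1-a --add-endpoint="ip=192.168.10.5,port=443"

The NEG is then attached to the backend service of an internal proxy load balancer, which can in turn be published with a Private Service Connect service attachment as described above.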
The use of hybrid NEGs across VPCs comes at the expense of foregoing autohealing and autoscaling for the backends. Published services already have a load balancer in the producer tenant that provides autoscaling. Therefore, you only run into the limitations of the hybrid NEGs if you centralize the load balancing function for service layers being composed natively rather than consumed from publication. When using this service networking pattern, remember that the services are consumed through an additional layer of load balancing. Private Service Connect service endpoints are reachable across Network Connectivity Center spoke VPCs. The centralized mode adds a layer of load balancing on the consumer side of the service. When you use this mode, you also need to manage certificates, resiliency, and additional DNS mappings in the consumer project. Other considerations This section contains considerations for common products and services not explicitly covered in this document. GKE control plane considerations The GKE control plane is deployed in a Google managed tenant project that's connected to the customer's VPC using VPC Network Peering. Because VPC Network Peering isn't transitive, direct communication to the control plane over a hub and spoke VPC peered networking topology isn't possible. When considering GKE design options, such as centralized or decentralized, direct access to the control plane from multicloud sources is a key consideration. If GKE is deployed in a centralized VPC, access to the control plane is available across clouds and within Google Cloud. However, if GKE is deployed in decentralized VPCs, direct communication to the control plane isn't possible. If an organization's requirements necessitate access to the GKE control plane in addition to adopting the decentralized design pattern, network administrators can deploy a connect agent that acts as a proxy, thus overcoming the non-transitive peering limitation to the GKE control plane. Security - VPC Service Controls For workloads involving sensitive data, use VPC Service Controls to configure service perimeters around your VPC resources and Google-managed services, and control the movement of data across the perimeter boundary. Using VPC Service Controls, you can group projects and your on-premises network into a single perimeter that prevents data access through Google-managed services. VPC Service Controls ingress and egress rules can be used to allow projects and services in different service perimeters to communicate (including VPC networks that aren't inside the perimeter). For recommended deployment architectures, a comprehensive onboarding process, and operational best practices, see the Best practices for VPC Service Controls for enterprises and Security Foundations Blueprint. DNS for APIs/Services Service producers can publish services by using Private Service Connect. The service producer can optionally configure a DNS domain name to associate with the service. If a domain name is configured, and a service consumer creates an endpoint that targets that service, Private Service Connect and Service Directory automatically create DNS entries for the service in a private DNS zone in the service consumer's VPC network. What's next Design the network security for Cross-Cloud Network applications. 
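As a companion to the VPC Service Controls discussion above, the following is a minimal sketch of creating a service perimeter. The access policy ID, project number, and restricted service are hypothetical, and real perimeters are typically rolled out in dry-run mode first with carefully scoped ingress and egress rules.

# Hypothetical service perimeter that restricts Cloud Storage access to one project
gcloud access-context-manager perimeters create sensitive_data_perimeter \
    --policy=123456789 \
    --title="sensitive-data-perimeter" \
    --resources=projects/111111111111 \
    --restricted-services=storage.googleapis.com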
Learn more about the Google Cloud products used in this design guide: VPC networks Shared VPC VPC Network Peering Private Service Connect Private services access Cloud Interconnect HA VPN Cloud Load Balancing For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Victor Moreno | Product Manager, Cloud NetworkingGhaleb Al-habian | Network SpecialistDeepak Michael | Networking Specialist Customer EngineerOsvaldo Costa | Networking Specialist Customer EngineerJonathan Almaleh | Staff Technical Solutions ConsultantOther contributors: Zach Seils | Networking SpecialistChristopher Abraham | Networking Specialist Customer EngineerEmanuele Mazza | Networking Product SpecialistAurélien Legrand | Strategic Cloud EngineerEric Yu | Networking Specialist Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMark Schlagenhauf | Technical Writer, NetworkingMarwan Al Shawi | Partner Customer EngineerAmmett Williams | Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Set_realistic_targets_for_reliability.txt b/Set_realistic_targets_for_reliability.txt new file mode 100644 index 0000000000000000000000000000000000000000..a8e1aa61b65dd911f961f97d77b45da2d4ee3aad --- /dev/null +++ b/Set_realistic_targets_for_reliability.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/set-targets +Date Scraped: 2025-02-23T11:43:21.170Z + +Content: +Home Docs Cloud Architecture Center Send feedback Set realistic targets for reliability Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework helps you define reliability goals that are technically feasible for your workloads in Google Cloud. This principle is relevant to the scoping focus area of reliability. Principle overview Design your systems to be just reliable enough for user happiness. It might seem counterintuitive, but a goal of 100% reliability is often not the most effective strategy. Higher reliability might result in a significantly higher cost, both in terms of financial investment and potential limitations on innovation. If users are already happy with the current level of service, then efforts to further increase happiness might yield a low return on investment. Instead, you can better spend resources elsewhere. You need to determine the level of reliability at which your users are happy, and determine the point where the cost of incremental improvements begin to outweigh the benefits. When you determine this level of sufficient reliability, you can allocate resources strategically and focus on features and improvements that deliver greater value to your users. Recommendations To set realistic reliability targets, consider the recommendations in the following subsections. Accept some failure and prioritize components Aim for high availability such as 99.99% uptime, but don't set a target of 100% uptime. Acknowledge that some failures are inevitable. The gap between 100% uptime and a 99.99% target is the allowance for failure. This gap is often called the error budget. The error budget can help you take risks and innovate, which is fundamental to any business to stay competitive. Prioritize the reliability of the most critical components in the system. Accept that less critical components can have a higher tolerance for failure. 
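As a quick worked example of the error budget described above, the following shell one-liner computes the downtime allowance implied by a 99.99% availability target over a 30-day month (the 30-day month is an assumption for illustration).

# Minutes in a 30-day month: 30 * 24 * 60 = 43200
# Error budget at 99.99% availability: 0.01% of those minutes
echo "30*24*60*(1-0.9999)" | bc   # about 4.32 minutes of allowable downtime per month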
Balance reliability and cost To determine the optimal reliability level for your system, conduct thorough cost-benefit analyses. Consider factors like system requirements, the consequences of failures, and your organization's risk tolerance for the specific application. Remember to consider your disaster recovery metrics, such as the recovery time objective (RTO) and recovery point objective (RPO). Decide what level of reliability is acceptable within the budget and other constraints. Look for ways to improve efficiency and reduce costs without compromising essential reliability features. Previous arrow_back Define reliability based on user-experience goals Next Build high availability through redundancy arrow_forward Send feedback \ No newline at end of file diff --git a/Set_up_and_run_a_database_migration_process.txt b/Set_up_and_run_a_database_migration_process.txt new file mode 100644 index 0000000000000000000000000000000000000000..cf02a426aab9f46ecc924c839cdbbeb698092df1 --- /dev/null +++ b/Set_up_and_run_a_database_migration_process.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/database-migration-concepts-principles-part-2 +Date Scraped: 2025-02-23T11:52:29.416Z + +Content: +Home Docs Cloud Architecture Center Send feedback Database migration: Concepts and principles (Part 2) Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-11 UTC This document discusses setting up and executing the database migration process, including failure scenarios. This document is part 2 of two parts. Part 1 introduces concepts, principles, and terminology of near-zero downtime database migration for cloud architects who need to migrate databases to Google Cloud from on-premises or other cloud environments. Database migration setup This section describes the several phases of a database migration. First, you set up the migration. Then, after you complete the migration and switch over clients to the target databases, you either remove the source databases or, if necessary, implement a fallback plan because of problems with the migration after the switchover. A fallback helps ensure business continuity. During the migration, you need to give special attention to any schema or data changes that might be introduced. For more information about the impact these changes can have, see Dynamic changes during migration later in this document. Target schema specification For each target database system, you need to define and create its schema. For homogeneous database migrations, you can create this specification more quickly by exporting the source database schema into the target database, thereby creating the target database schema. How you name your schema is important. One option is to match the source and target schema names. However, although this simplifies switching over clients, this approach could confuse users if tools connect to the source and target database schemas simultaneously—for example, to compare data. If you abstract the schema name by using a configuration file, then giving the target database schemas different names from the source makes it easier to differentiate the schemas. With heterogeneous database migrations, you need to create each target database schema. This engineering process can take several iterations. Before you can implement the migration, you might need to change the schemas further in order to accommodate your migration process and any data modifications. 
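For the homogeneous case described above, exporting the source schema and replaying it on the target can be as simple as the following sketch, which assumes a PostgreSQL source and target; the hostnames, database names, and user are hypothetical. Keeping such scripts in version control supports the repeatability discussed next.

# Export only the schema (no data) from the source database
pg_dump --schema-only --no-owner --host=source-host --username=migrator \
    source_db > target_schema.sql

# Create the schema in the target database
psql --host=target-host --username=migrator --dbname=target_db \
    --file=target_schema.sql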
Because you will likely create target databases multiple times when you test and execute your migration, the process of creating the schema needs to be repeatable (ideally performed through installation scripts). You can use a code management system to control the version of scripts, ensure consistency, and access the change history of the scripts. Query migration and execution semantics Eventually, you need to switch over clients from accessing source database systems to accessing target database systems. In homogeneous database integrations, the queries can remain unchanged if the schemas are not modified. While the clients have to be tested on the target database systems, the clients don't have to be modified because of queries. For heterogeneous database migrations in general, you must modify the queries because the schemas between the source and target databases differ. The difference might be a data type mismatch between source and target databases. In addition, not all capabilities of the query language available in the source database systems might be available in the target database systems, or the converse. In extreme cases, you might need to convert a query from a source database system into several queries on the target system. In a reverse scenario, where you have more query language capabilities available in the target database than in the source, you might need to combine several queries from the source database into a single query on the corresponding target database. The semantics of queries can also differ. For example, some database systems materialize an update within a transaction immediately within that transaction, so when the same data item is read, the updated value is retrieved. Other systems don't materialize an update immediately and wait until the transaction commits. If the logic on the source database system relies on the write being materialized, the same logic on the target database can cause incorrect data or even failures. If you must migrate queries, you need to test all functionality to ensure that the behavior of clients is the same before and after the migration. You can also test at the data level, but such testing does not replace testing on the client level. Clients execute queries from a business logic standpoint and can be tested only on a business logic level. Migration processes For heterogeneous database migrations, migration processes specify how data extracted from source database systems is modified and inserted into target databases. Data modifications, such as those discussed in Data changes in this document, are defined and executed while data items are extracted from the source databases and transferred to the target databases. With homogeneous database migrations, when the schemas of the source and target databases are equivalent, data modification is not required. Data is inserted into target databases as it was extracted from source databases. Depending on your database migration system, several configurations might be required. For example, you must specify whether data being modified and transferred must be intermittently stored in the database migration system. Storing the data might slow down the overall migration process but significantly speed up recovery if a failure occurs. You might be required to specify the type of verification. For example, some database migration systems query source and target systems to establish equivalence of the dataset migrated up to the point of the query. 
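A simple form of the verification mentioned above is to compare row counts (or checksums) for each migrated table on the source and the target. The following sketch assumes PostgreSQL-compatible source and target databases with hypothetical hostnames and a hypothetical orders table; real database migration systems typically perform much finer-grained comparisons.

# Hypothetical spot check: compare row counts for one table on source and target
SRC=$(psql --host=source-host --dbname=source_db -t -A -c "SELECT count(*) FROM orders;")
TGT=$(psql --host=target-host --dbname=target_db -t -A -c "SELECT count(*) FROM orders;")
[ "$SRC" = "$TGT" ] && echo "orders: row counts match ($SRC)" || echo "orders: mismatch ($SRC vs $TGT)"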
Error handling requires that you specify failure recovery behavior. Again, this requirement depends on the database migration system in use. Needless to say, you need to test your data migration thoroughly and repeatedly. Ideally, your migration is tested to ensure that every known data item is migrated, no data modification errors occur, performance and throughput is sufficient, and time-to-migration can be accomplished. Fallback processes During a database migration, the source databases remain operational (unless your migration involves planned downtime). If your database migration fails at any point, you can (in a worst-case scenario) abort the migration and reset the target database to its initial state. After you resolve the failures, you can restart your database migration. The failure and resolution don't affect the operational source database systems. If failures occur after the database migration is completed and clients are switched over to the target databases, the failure and resolution process might impact clients so that they are unable to operate properly. In the best case, the failure is resolved quickly and downtime for clients is short. In the worst case, the failure is not resolved, or the resolution takes a long time and you must switch clients back to the source databases. To switch clients back to the source databases, you must migrate all data modifications on the target databases back to the source databases. You can set up and execute this process as a separate, complete database migration. However, because clients are inoperational on the target databases at this point, significant downtime will be incurred. To avoid client downtime in this scenario, you need to start your migration processes immediately after the original database migration completes. Every change applied to the target database systems is immediately applied to the source database systems. Following this approach ensures that both target and source database systems are kept in sync at all times. Preparing a fallback from target databases to source databases requires a significant effort. It's critical to decide whether to implement and test fallback processes or understand the consequences of not doing so—namely, significant downtime. Database migration execution Executing a database migration involves five distinct phases, which this section discusses. A sixth phase is a fallback, but a fallback is an extreme case and considered an exception to a normal database migration execution. The process discussed in this section is a near-zero downtime heterogeneous database migration. If you're able to incur significant downtime, you can combine the first three phases (initial load, continuing migration, and draining) into one phase by using either the backup and restore or export and import approach. A homogeneous database migration presents a special case. With this type of migration, you can use database management system replication functionality (for those systems that support it) that migrates the data while the source database systems remain operational. The phases discussed here outline an approach that you might need to modify according to the requirements of your database migration process. Phase 1: Initial load The starting point is to migrate all data specified to be migrated from all source databases. At the start of the data migration, the source databases have a specific state, and that state changes during the migration. 
A tip for starting a migration while changes occur simultaneously is to note the database system time right before the first data item is extracted. With this timestamp, you can derive all database changes from the transaction log starting at that time. In addition, the initial load must read a consistent database state across all data. You might need a short duration lock on the database in order to prevent reading an inconsistent dataset. This phase consists of the following: Noting the database system time right before the database migration starts. Executing an initial load migration process that queries the dataset (whether complete or partial) from the source databases that need to be migrated and migrating the dataset. In a relational database model, the initial load migration processes execute queries such as SELECT * or queries with selection, or projection, or both. The migration process performs data modification as specified in the process. While the initial load migration processes execute, clients typically make changes to the source databases. Because you record the database system time at the start, you can derive those changes from the transaction log later. The result of the initial load phase is the complete migration of the initial dataset from the source database systems to the target database systems. However, the source and target databases are not yet synchronized because clients likely modified the source databases during the migration. Phase 2 involves capturing and migrating those changes. Phase 2: Continuing migration Continuing migration has two purposes. First, it reads the changes that occurred in the source databases after the initial load started. Second, it captures and transfers those changes to the target databases. This phase consists of the following: Starting the continued migration processes from the database system time recorded in Phase 1. The migration reads the transaction log from that time and applies every change to the target database system. Executing any data modification. The migration process performs this step as you specify. Changes that are logged after the database system time are sometimes transferred during the initial load. Therefore, it's possible that those changes can be applied a second time during continuing migration. You need to define your migration processes to ensure that changes are not applied twice—for example, by using identifiers. Suppose a changed data item is transferred during the initial load, and that insert is logged in the transaction log. By applying an identifier to the data item, the migration system can determine from the transaction log that another insert is not required because the data item already exists. The result of the continuing migration phase is that the target databases are either fully synchronized or almost fully synchronized with the source databases. When a change in a source database system is not migrated, you have an almost synchronized database. Depending on how you configure your database migration system, the discrepancies can be small or large. For example, to increase efficiency, not every change should be migrated immediately, otherwise you can create a heavy load on the source if changes to the source spike. In general, changes are collected and migrated in batches as bulk operations. With smaller batches, fewer discrepancies occur between the source and target, but your source can incur a higher load if changes are frequently made. 
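To illustrate the deduplication point above (changes that were already captured during the initial load must not be applied a second time), the following sketch applies one batch of captured changes with an idempotent upsert. It assumes a PostgreSQL-compatible target, a hypothetical orders table keyed by order_id with a status column, and a hypothetical changes.csv file produced by the change-capture step.

# Load one batch of captured changes into a staging table, then upsert into the target table.
# Rows whose identifiers already exist (for example, rows migrated during the initial load)
# are updated instead of being inserted a second time.
psql --host=target-host --dbname=target_db <<'EOF'
CREATE TEMP TABLE staged_orders (LIKE orders INCLUDING ALL);
\copy staged_orders FROM 'changes.csv' CSV HEADER
INSERT INTO orders
SELECT * FROM staged_orders
ON CONFLICT (order_id) DO UPDATE SET status = EXCLUDED.status;
EOF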
If the batch size is configured dynamically, it is best to synchronize larger batches initially in the continuing migration phase, and then synchronize batches of a gradually reduced size when the databases are almost caught up. This approach makes the process of catching up more efficient and reduces the discrepancy between source and target databases later. Phase 3: Draining To prepare to switch clients from the source to the target databases, you must ensure that the source databases and the target database are fully synchronized. Draining is the process of migrating remaining changes from the source databases to the target databases. This phase consists of the following: Quiescing the source database systems. This means that no data modifications can occur at the source database, and that the transaction log does not receive additional modification entries. Waiting for all changes to migrate to the target databases. This process is the actual draining of changes. After draining completes, backing up the target databases in order to have a defined starting point for future incremental backups. The result of the draining phase is that the source database systems and the target database systems are synchronized, and no data modifications will occur. To ensure that draining is completed, a "last insert" data item can be written into a source database. Once that "last insert" data item appears in the corresponding target database, the draining phase is complete. Phase 4: Switchover After the draining phase is completed, you can switch over the clients from the source to the target databases. We recommend the following best practices: Before you enable access to the production database, test the basic functionality in order to ensure that the clients are operational and behave as intended. The number of test cases will determine the actual downtime for your production database. Back up the target databases before you enable client access. This best practice ensures that the initial state of the target databases is recoverable. At the end of the switchover, the clients are fully operational and begin to access production databases (what this document referred to as target databases up to this point). Phase 5: Source database deletion After you complete the switchover to production databases, you can delete the source databases. It is a good practice to take a final backup of each source database so that you have a defined end state that is accessible. Data regulations might also require final backups for compliance reasons. Phase 6: Fallback Implementing a fallback, especially for highly critical database clients, can be a good safeguard against issues and problems with your migration. A fallback is like a migration but in reverse. That is, with a fallback you set up a migration from the target databases back to the source databases. With heterogeneous database migrations, fallback is more complex. This is why we recommend performing the switchover only after thoroughly testing that your database migration process, and your applications connected to the target database, fulfill your service level agreements (SLAs). After you drain the source databases and back up all databases, you can enable migration processes that identify changes in the target databases and migrate them to the source databases before the switchover. 
Building these migration processes ensures that after clients make changes to the target databases, the source databases are synchronized and their data state is kept up to date. A fallback might be required days or weeks after the switchover. For example, clients might access functionality for the first time and be blocked because of broken functionality that cannot be quickly fixed. In this case, clients can be switched back to accessing the source databases. Before the clients are switched back, all changes to the target databases must be drained into the source databases. In this approach, some circumstances require special attention: You must design target schemas so that a reverse migration (from target databases to source databases) is possible. For example, if your initial migration process (from source to target) has joins or aggregations, a reverse migration is non-trivial or even impossible. In such a case, the individual data must be available in the target databases as well. An issue could arise in which the source databases have transaction logs but the target databases don't provide such a non-functional feature. In this case, a reverse migration (from target to source) has to rely on differential queries. That setup must be designed and prepared in the target database schemas. Clients that originally operated on the source databases need to be kept available and operational so that they can be turned on in a fallback. Any functional changes made to clients accessing the target databases must also be made to the clients accessing the source database to ensure functional equivalence. While a fallback is a last resort, implementing a fallback is essential and must be treated as a full database migration, which includes testing. Dynamic changes during migration In general, databases are dynamic systems because schema and data values can change. Database schemas can change based on factors like business requirements, and data values can change along with, or independent of, schema changes. Data value changes can happen dynamically at any time, along with corresponding changes to an application's implementation. The following sections discuss some of the possible changes and their implications for a database migration. Schema changes Databases can be categorized into systems that require a predefined schema or that are schema-free or schemaless. In general, systems that require a predefined schema support schema-changing operations—for example, adding attributes or columns in a relational system. In these systems, you control changes through a change management process. A change management process allows for changes in a controlled way. Any operations that depend on the schema, like queries or data migration processes, are changed simultaneously to ensure an overall consistent change. Database systems that don't require a predefined schema can be changed at any time. A schema change can be made not only by an authorized user but, in some cases, also programmatically. In these cases, a schema change can happen at any time. Operations that depend on the schema might fail—for example, queries or data migration processes. To prevent uncontrolled schema changes in these database systems, you must implement a change management process as a convention and an accepted rule rather than by system enforcement. Data changes In general, schemas control the possible data values for the various data attributes. Schema-less systems have no constraints on data values. 
In either case, data values can appear that were not previously stored. For example, enumeration types are often implemented as a set of strings in database systems. On a programming language level, these might be implemented in clients as true enumeration types, but not necessarily so. It is possible that a client stores what it considers a valid enumeration value that other clients don't consider as valid. Furthermore, a data migration process might key its functionality off enumeration values. If new values appear, the data migration process might fail. Another example is found in the storage of JSON structures. In many cases JSON structures are stored in a data type string; however, those are interpreted as JSON values upon access. If the JSON structure changes, the database system does not detect that; data migration processes that interpret a string as a JSON value might fail. Migration process changes Change management during an ongoing database migration is difficult and complex and can lead to data migration failures or data inconsistencies. It is optimal that required changes are delayed until the end of the draining phase, at which point the source and target database systems are synchronized. Changes at this point are then confined to the target databases and their clients (unless a fallback is implemented as well). If you need to change your migration process during a data migration, we recommend that you keep the changes to a minimum and possibly make several small changes instead of a more complex one. Furthermore, you might consider first testing those changes by using test instances of your source and target databases. Ideally, you load the test source with production data that you then migrate to the test target. Using this approach, you can verify your proposed changes without affecting your ongoing production migration. After you test and verify your changes, you can apply the changes to your production system. For changes to be possible during an ongoing data migration, you must be able to stop the data migration system and restart it, possibly with modified data migration processes. In that case, you don't have to start from the very beginning with an initial data load phase. If the data migration system supports a test migration run feature, you can use that as well. We recommend that you avoid changing schema, data values, or data migration processes during a data migration. If you must make changes, you might consider restarting the data migration from the beginning to ensure that you have a defined starting state. In any case, it's paramount that you test using production data, and that you back up your databases before you apply changes so that, if needed, you can reset the overall system to a consistent state. Migration failure mitigation Unexpected issues can occur during a database migration. The following highlights a few areas that can require preplanning: Insufficient throughput. A migration system can lack sufficient throughput despite load testing. This problem might have many causes, such as an unforeseen rate increase of source database changes or network throttling. You can prepare for this case by preparing additional resources for dynamic scaling up or scaling out of all involved components. Database instability. Source databases or target databases can exhibit instability, which can slow down data migration processes or intermittently prevent access. Data migration processes might need to recover frequently. 
In this case, an intentional HA or DR switchover might address the issue. A switchover changes the non-functional environment (machines and storage) and might help mitigate the problem. In this case, you need to test the switchover and the database migration recovery processes to ensure that the switchover does not cause data inconsistencies in the target databases. Transaction log file size exhaustion. In some cases, transaction logs are stored in files that have an upper limit. It is possible that this upper limit is reached and the database migration then fails. It is important to understand which parts of a database system can be dynamically reconfigured to address resource limitations as they arise. If certain aspects cannot be dynamically configured, initial sizing must be carefully determined. The more you make upfront testing realistic and complete, the more likely it is that you'll find potential issues to address in advance. What's next Check out the following resources on database migration: Migrating from PostgreSQL to Spanner Migrating from an Oracle® OLTP system to Spanner See Database migration for more database migration guides. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Set_up_for_Linux.txt b/Set_up_for_Linux.txt new file mode 100644 index 0000000000000000000000000000000000000000..919fd9aeddd7d08da894d703d4637d847ce1494f --- /dev/null +++ b/Set_up_for_Linux.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/chrome-desktop-remote-on-compute-engine +Date Scraped: 2025-02-23T11:48:02.295Z + +Content: +Home Docs Cloud Architecture Center Send feedback Set up Chrome Remote Desktop for Linux on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2022-11-16 UTC This tutorial shows you how to set up the Chrome Remote Desktop service on a Debian Linux virtual machine (VM) instance on Compute Engine. For separate instructions for Windows VMs, see Windows virtual machines. Chrome Remote Desktop lets you remotely access applications with a graphical user interface from a local computer or mobile device. When following this tutorial, the default firewall rules allow Chrome Remote Desktop connections; you don't need to configure any additional firewall rules. SSH access is required only for the initial setup. The VM does need access to the internet (either with an external IP address or through Cloud NAT), and you use your Google Account for authentication and authorization. Note: This solution is not suitable for graphically intensive applications, including playing videos, because those typically require hardware graphics acceleration as well as a network that has high bandwidth and low latency. If you want to run graphically intense applications remotely, see the Creating a virtual GPU-accelerated Linux workstation tutorial for an alternative solution. This tutorial assumes that you are familiar with the Linux command line and with installing Debian packages. For information about other options for creating virtual workstations, see Creating a virtual workstation. Objectives Create a headless Compute Engine VM instance to run Chrome Remote Desktop on. Install and configure the Chrome Remote Desktop service on the VM instance. Set up an X Window System desktop environment in the VM instance. Connect from your local computer to the desktop environment on the VM instance. 
Costs This tutorial uses billable components of Google Cloud, including: Compute Engine Use the Pricing Calculator to generate a cost estimate based on your projected usage. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Compute Engine API. Enable the API When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Make sure that you have the following role or roles on the project: roles/compute.admin Check for the roles In the Google Cloud console, go to the IAM page. Go to IAM Select the project. In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator. For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles. Grant the roles In the Google Cloud console, go to the IAM page. Go to IAM Select the project. Click person_add Grant access. In the New principals field, enter your user identifier. This is typically the email address for a Google Account. In the Select a role list, select a role. To grant additional roles, click add Add another role and add each additional role. Click Save. You use the Google Chrome browser on your local machine. Create a Compute Engine instance For the purposes of this tutorial, the default machine type with a Debian Linux boot disk is used. If you are using this for your own environment, you may want to adjust the machine type, name, region, boot disk size, or other settings. In the Google Cloud console, go to the VM Instances page. Go to VM Instances Click Create. Set the instance name to crdhost. Click Create. It takes a few moments to create your instance. After the instance has been created, connect to your new instance by clicking SSH in the instance list: Install Chrome Remote Desktop on the VM instance In the SSH window for your VM instance, add the Debian Linux Chrome Remote Desktop repository to your apt package list, and install the chrome-remote-desktop package.
curl https://dl.google.com/linux/linux_signing_key.pub \ | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/chrome-remote-desktop.gpg echo "deb [arch=amd64] https://dl.google.com/linux/chrome-remote-desktop/deb stable main" \ | sudo tee /etc/apt/sources.list.d/chrome-remote-desktop.list sudo apt-get update sudo DEBIAN_FRONTEND=noninteractive \ apt-get install --assume-yes chrome-remote-desktop The DEBIAN_FRONTEND=noninteractive parameter suppresses a prompt to configure a layout for a keyboard that would be directly connected to the VM instance. Install an X Window System desktop environment You need to install an X Window System desktop environment and window manager for Chrome Remote Desktop to use. Common options are: Xfce Cinnamon Gnome Gnome-Classic KDE Plasma You can use other desktop environments, but Chrome Remote Desktop does not support 3D graphics acceleration. If you do choose a desktop environment that uses 3D graphics acceleration, you need to disable that feature, or the remote desktop service won't start. For remote connections over slower networks, we recommend Xfce because it has minimal graphical elements and few animations. Xfce In the SSH window connected to your VM instance, install the Xfce desktop environment and basic desktop components: sudo DEBIAN_FRONTEND=noninteractive \ apt install --assume-yes xfce4 desktop-base dbus-x11 xscreensaver XScreenSaver is required because the Xfce default screen locker (Light Locker) doesn't work with Chrome Remote Desktop (Light Locker displays a blank screen that cannot be unlocked). Note: You might see a Permission Denied error for the update-initramfs process during installation. This is normal, and you can ignore the error. Configure Chrome Remote Desktop to use Xfce by default: sudo bash -c 'echo "exec /etc/X11/Xsession /usr/bin/xfce4-session" > /etc/chrome-remote-desktop-session' Because there is no display connected to your instance, disable the display manager service on your instance: sudo systemctl disable lightdm.service Optional: Install the full suite of Linux desktop applications along with the Xfce desktop environment: sudo apt install --assume-yes task-xfce-desktop Optional: Install the Chrome browser on your instance: curl -L -o google-chrome-stable_current_amd64.deb \ https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb sudo apt install --assume-yes --fix-broken ./google-chrome-stable_current_amd64.deb Cinnamon In the SSH window connected to your VM instance, install the Cinnamon desktop environment and basic desktop components: sudo DEBIAN_FRONTEND=noninteractive \ apt install --assume-yes cinnamon-core desktop-base dbus-x11 Note: You might see a Permission Denied error for the update-initramfs process during the installation process. This is normal and you can ignore the error.
Set your Chrome Remote Desktop session to use Cinnamon in 2D mode (which does not use 3D graphics acceleration) by default: sudo bash -c 'echo "exec /etc/X11/Xsession /usr/bin/cinnamon-session-cinnamon2d" > /etc/chrome-remote-desktop-session' Optional: Install the full suite of Linux desktop applications along with the Cinnamon desktop environment: sudo apt install --assume-yes task-cinnamon-desktop Optional: Install the Chrome browser on your instance: curl -L -o google-chrome-stable_current_amd64.deb \ https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb sudo apt install --assume-yes --fix-broken ./google-chrome-stable_current_amd64.deb Gnome In the SSH window connected to your VM instance, install the full Gnome desktop environment: sudo DEBIAN_FRONTEND=noninteractive \ apt install --assume-yes task-gnome-desktop Note: You might see a Permission Denied error for the update-initramfs process during the installation process. This is normal and you can ignore the error. Set your Chrome Remote Desktop session to use Gnome sudo bash -c 'echo "exec /etc/X11/Xsession /usr/bin/gnome-session" > /etc/chrome-remote-desktop-session' Disable the Gnome display manager service on your instance, because it conflicts with the Chrome Remote Desktop service. sudo systemctl disable gdm3.service sudo reboot This command reboots the VM. Reconnect through SSH before continuing. Optional: Install the Chrome browser on your instance: curl -L -o google-chrome-stable_current_amd64.deb \ https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb sudo apt install --assume-yes --fix-broken ./google-chrome-stable_current_amd64.deb Gnome-Classic In the SSH window connected to your VM instance, install the full Gnome desktop environment: sudo DEBIAN_FRONTEND=noninteractive \ apt install --assume-yes task-gnome-desktop The DEBIAN_FRONTEND=noninteractive parameter suppresses a prompt to configure a layout for a keyboard that would be directly connected to the VM instance. Note: You might see a Permission Denied error for the update-initramfs process during the installation process. This is normal and you can ignore the error. Set your Chrome Remote Desktop session to use the Gnome-Classic desktop: sudo bash -c 'echo "exec /etc/X11/Xsession /usr/bin/gnome-session-classic" > /etc/chrome-remote-desktop-session' Disable the Gnome display manager service on your instance, because it conflicts with the Chrome Remote Desktop service. sudo systemctl disable gdm3.service sudo reboot This command reboots the VM. Reconnect through SSH before continuing. Optional: Install the Chrome browser on your instance: curl -L -o google-chrome-stable_current_amd64.deb \ https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb sudo apt install --assume-yes --fix-broken ./google-chrome-stable_current_amd64.deb KDE Plasma In the SSH window connected to your VM instance, install the full KDE Plasma desktop environment: sudo DEBIAN_FRONTEND=noninteractive \ apt install --assume-yes task-kde-desktop The DEBIAN_FRONTEND=noninteractive parameter suppresses a prompt to configure a layout for a keyboard that would be directly connected to the VM instance. Note: You might see a Permission Denied error for the update-initramfs process during the installation process. This is normal and you can ignore the error. 
Set your Chrome Remote Desktop session to use KDE Plasma sudo bash -c 'echo "exec /etc/X11/Xsession /usr/bin/startplasma-x11" > /etc/chrome-remote-desktop-session' Optional: Install the Chrome browser on your instance: curl -L -o google-chrome-stable_current_amd64.deb \ https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb sudo apt install --assume-yes --fix-broken ./google-chrome-stable_current_amd64.deb Configure and start the Chrome Remote Desktop service To start the remote desktop server, you need to have an authorization key for the Google Account that you want to use to connect to it: In the Google Cloud console, go to the VM Instances page: Go to the VM Instances page Connect to your instance by clicking the SSH button. On your local computer, using the Chrome browser, go to the Chrome Remote Desktop command line setup page: https://remotedesktop.google.com/headless If you're not already signed in, sign in with a Google Account. This is the account that will be used for authorizing remote access. On the Set up another computer page, click Begin. Click Authorize. You need to allow Chrome Remote Desktop to access your account. If you approve, the page displays a command line for Debian Linux that looks like the following: DISPLAY= /opt/google/chrome-remote-desktop/start-host \ --code="4/xxxxxxxxxxxxxxxxxxxxxxxx" \ --redirect-url="https://remotedesktop.google.com/_/oauthredirect" \ --name=$(hostname) You use this command to set up and start the Chrome Remote Desktop service on your VM instance, linking it with your Google Account using the authorization code. Note: The authorization code in the command line is valid for only a few minutes, and you can use it only once. Copy the command to the SSH window that's connected to your instance, and then run the command. When you're prompted, enter a 6-digit PIN. This number will be used for additional authorization when you connect later. You might see errors like No net_fetcher or Failed to read. You can ignore these errors. Verify that the service is running using the following command. sudo systemctl status chrome-remote-desktop@$USER If the service is running, you see output that includes the state active: chrome-remote-desktop.service - LSB: Chrome Remote Desktop service Loaded: loaded (/lib/systemd/system/chrome-remote-desktop@USER.service; enabled; vendor preset: enabled) Active: active (running) since DATE_TIME; ELAPSED_TIME Connect to the VM instance You can connect to the VM instance using the Chrome Remote Desktop web application. On your local computer, go to the Chrome Remote Desktop website. Click Access my computer. If you're not already signed in to Google, sign in with the same Google Account that you used to set up the Chrome Remote Desktop service. You see your new VM instance crdhost in the Remote Devices list. Click the name of the remote desktop instance. When you're prompted, enter the PIN that you created earlier, and then click the arrow button to connect. You are now connected to the desktop environment on your remote Compute Engine instance. If you are prompted, always allow the Remote Desktop application to read your clipboard and let you copy and paste between local and remote applications. If you installed the Xfce desktop, the first time you connect, you are prompted to set up the desktop panels. Click Use Default Config to get the standard taskbar at the top and the quick launch panel at the bottom. 
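If you manage the VM from the command line, you can bundle the verification steps from this section into a small script that you run on the instance after setup. The following is only a convenience sketch built from commands that already appear in this guide (systemctl, the /etc/chrome-remote-desktop-session file, and journalctl); the file name check-crd.sh is illustrative, and you can adapt the script as needed:
#!/bin/bash
# check-crd.sh: quick health check for the Chrome Remote Desktop host.
# Run this on the VM as the user that ran the start-host command.

# Is the per-user host service running?
systemctl is-active "chrome-remote-desktop@${USER}" \
  && echo "Chrome Remote Desktop host is active for ${USER}" \
  || echo "Host is not active; see the log lines below"

# Which desktop session is the host configured to start?
cat /etc/chrome-remote-desktop-session

# Show the most recent host log entries.
journalctl SYSLOG_IDENTIFIER=chrome-remote-desktop --no-pager | tail -n 20
If the output shows that the service is not active, the Troubleshooting section later in this guide describes how to restart it.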
Improve the remote desktop experience This section provides instructions for changing settings in order to improve the remote desktop experience. Install the Remote Desktop Chrome app The Remote Desktop Chrome app gives a separate windowed experience and allows keyboard shortcuts that would normally be intercepted by Chrome to be used on the remote system. If this app is not installed, do the following: Open the Session Options panel using the button chevron_left that appears when you move the mouse to the side of the window. In the Install App section, click Begin. Click Install. The remote desktop session reopens in its own application window. You can move any remote desktop sessions from a Chrome tab to the app window by clicking the Open With open_in_new icon in the address bar. Disable animations and effects in Cinnamon The Cinnamon desktop uses several graphical features and animations, such as semi-transparent windows and menus that fade in and out. Because these animations take more time to render over a remote connection, it can make the user interface feel slow. To disable these effects: In the Cinnamon desktop, select Menu > Preferences > Effects. Disable each of the effects: Set a user password The user account created by Compute Engine doesn't have a password. However, several desktop environments require one for unlocking screensavers and authorizing administrative actions. It is therefore important to set a password for your user: Connect to the instance using SSH, as you did when you first set up the instance. Create a password for the user: sudo passwd $(whoami) Disable screensavers and lock screens Because you're accessing your desktop from a remote computer, it's normally not necessary to use a screensaver or screen locker, so you can disable these. Xfce In the Applications menu, select Settings > Screensaver. Set Mode to Disable Screen Saver. Cinnamon In the desktop, select Menu > Preferences > Screensaver. In the Settings tab, set Delay to Never and disable the following two Lock settings to lock the screen automatically. Gnome In the desktop, click Activities and type Settings. Select the Settings application. In the Settings application, select Privacy > Screen Lock. Disable Automatic Screen Lock and close the dialog. Select Devices > Keyboard. In the list of keyboard shortcuts, go to the System section, and then click Lock Screen. Press the Backspace key to disable the shortcut, and then click Set. Select Power and set Blank Screen to Never. Gnome-Classic In the desktop, select Applications > System Tools > Settings. In the Settings application, select Privacy > Screen Lock. Disable Automatic Screen Lock and close the dialog. Select Devices > Keyboard. In the list of keyboard shortcuts, go to the System section and click Lock Screen. Press the Backspace key to disable the shortcut, and then click Set. Select Power and set Blank Screen to Never. KDE Plasma In the desktop, click the KDE menu button, and then type Screen Locking. Select the Screen Locking application. In the Configure Screen Locking application, disable Lock Screen Automatically after and click the backspace button to clear the keyboard shortcut. Click OK. Increase the desktop resolution If you have an ultra high-resolution monitor, you might find that the default maximum remote desktop size of 1600 x 1200 is too small. If so, you can increase it to the resolution of your monitor. Use SSH to connect to the instance. 
Set the CHROME_REMOTE_DESKTOP_DEFAULT_DESKTOP_SIZES environment variable to include the resolution of your monitor: echo "export CHROME_REMOTE_DESKTOP_DEFAULT_DESKTOP_SIZES=1600x1200,3840x2560" \ >> ~/.profile Restart the service: sudo systemctl restart chrome-remote-desktop@$USER Enable advanced video codec: The AV1 codec with High Quality color gives improved picture quality and allows better encoding of pure color information (such as text): Open the Session Options panel using the button chevron_left that appears when you move the mouse to the side of the window. In the Video Codec field, select AV1. Ensure that the High-quality color field is enabled. Choose a different desktop environment In the preceding section, you set a default desktop environment in the global /etc/chrome-remote-desktop-session configuration file. You can also choose a different desktop environment (if it's installed) by specifying it in the .chrome-remote-desktop-session configuration file in your home directory: Xfce echo "exec /etc/X11/Xsession /usr/bin/xfce4-session" > ~/.chrome-remote-desktop-session Cinnamon echo "exec /etc/X11/Xsession /usr/bin/cinnamon-session-cinnamon2d" > ~/.chrome-remote-desktop-session Gnome echo "exec /etc/X11/Xsession /usr/bin/gnome-session" > ~/.chrome-remote-desktop-session Gnome-Classic echo "exec /etc/X11/Xsession /usr/bin/gnome-session-classic" > ~/.chrome-remote-desktop-session KDE Plasma echo "exec /etc/X11/Xsession /usr/bin/startplasma-x11" > ~/.chrome-remote-desktop-session After you make this change, restart the service so the change takes effect: sudo systemctl restart chrome-remote-desktop@$USER As mentioned before, Chrome Remote Desktop does not support 3D graphics acceleration. Therefore, for any desktop environment that uses these features, you need to disable 3D graphics, or the session won't start. Automate the installation process When you need to set up multiple machines with Chrome Remote Desktop, the manual installation steps can become repetitive. You can use a custom startup script to automate this process, using the following procedure. For the purposes of this tutorial, the default machine type with a Debian Linux boot disk is used. If you are using this for your own environment, you may want to adjust the machine type, name, region, boot disk size, or other settings. In the Google Cloud console, go to the VM Instances page: Go to the VM Instances page Click Create Instance. Set the instance name to crdhost-autoinstall. Scroll to and expand the Advanced Options section. Expand the Management section. Copy the following shell script and paste it into the Automation/Startup Script field: #!/bin/bash -x # # Startup script to install Chrome remote desktop and a desktop environment.
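# Note: This script runs each time the VM boots; packages that are already
# installed are not reinstalled on later boots.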
# # See environmental variables at then end of the script for configuration # function install_desktop_env { PACKAGES="desktop-base xscreensaver dbus-x11" if [[ "$INSTALL_XFCE" != "yes" && "$INSTALL_CINNAMON" != "yes" ]] ; then # neither XFCE nor cinnamon specified; install both INSTALL_XFCE=yes INSTALL_CINNAMON=yes fi if [[ "$INSTALL_XFCE" = "yes" ]] ; then PACKAGES="$PACKAGES xfce4" echo "exec xfce4-session" > /etc/chrome-remote-desktop-session [[ "$INSTALL_FULL_DESKTOP" = "yes" ]] && \ PACKAGES="$PACKAGES task-xfce-desktop" fi if [[ "$INSTALL_CINNAMON" = "yes" ]] ; then PACKAGES="$PACKAGES cinnamon-core" echo "exec cinnamon-session-cinnamon2d" > /etc/chrome-remote-desktop-session [[ "$INSTALL_FULL_DESKTOP" = "yes" ]] && \ PACKAGES="$PACKAGES task-cinnamon-desktop" fi DEBIAN_FRONTEND=noninteractive \ apt-get install --assume-yes $PACKAGES $EXTRA_PACKAGES systemctl disable lightdm.service } function download_and_install { # args URL FILENAME if [[ -e "$2" ]] ; then echo "cannot download $1 to $2 - file exists" return 1; fi curl -L -o "$2" "$1" && \ apt-get install --assume-yes --fix-broken "$2" && \ rm "$2" } function is_installed { # args PACKAGE_NAME dpkg-query --list "$1" | grep -q "^ii" 2>/dev/null return $? } # Configure the following environmental variables as required: INSTALL_XFCE=yes INSTALL_CINNAMON=yes INSTALL_CHROME=yes INSTALL_FULL_DESKTOP=yes # Any additional packages that should be installed on startup can be added here EXTRA_PACKAGES="less bzip2 zip unzip tasksel wget" apt-get update if ! is_installed chrome-remote-desktop; then if [[ ! -e /etc/apt/sources.list.d/chrome-remote-desktop.list ]]; then echo "deb [arch=amd64] https://dl.google.com/linux/chrome-remote-desktop/deb stable main" \ | tee -a /etc/apt/sources.list.d/chrome-remote-desktop.list fi apt-get update DEBIAN_FRONTEND=noninteractive \ apt-get install --assume-yes chrome-remote-desktop fi install_desktop_env [[ "$INSTALL_CHROME" = "yes" ]] && ! is_installed google-chrome-stable && \ download_and_install \ https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \ /tmp/google-chrome-stable_current_amd64.deb echo "Chrome remote desktop installation completed" This script performs the following tasks each time the machine is rebooted: If the remote desktop package is not installed: Adds the Chrome Remote Desktop Debian package repository Installs the Chrome Remote Desktop package and dependencies. Installs the Xfce or Cinnamon desktop environments (depending on the script settings). If the full desktop environment option is enabled, installs the necessary packages. If the Google Chrome browser option is enabled and is not installed: Downloads Google Chrome package. Installs Google Chrome and its dependent packages. Note: You can choose which packages to install by using the variables defined near the end of the script. Click Create. It takes a few moments to create your instance, and on first run with all the options enabled, the script can take up to 10 minutes to complete the installation. To monitor progress, connect to the VM instance using SSH, and in the terminal of the instance, run the following command: sudo journalctl -o cat -f _SYSTEMD_UNIT=google-startup-scripts.service This command shows the output from the startup script. When the script has finished, you see the following: INFO startup-script: Chrome remote desktop installation completed INFO startup-script: Return code 0. INFO Finished running startup scripts. 
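If you prefer to create the instance from the command line instead of pasting the script into the console, you can attach the same startup script through instance metadata. The following is a sketch rather than part of the original procedure: it assumes you saved the script above to a local file named crd-startup.sh, and the zone, machine type, and Debian image family shown are placeholders that you would adjust for your environment:
# Save the startup script shown above as crd-startup.sh, then run:
gcloud compute instances create crdhost-autoinstall \
    --zone=us-central1-b \
    --machine-type=e2-medium \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata-from-file=startup-script=crd-startup.sh
The list that follows describes the other ways to pass the script, including storing it in a Cloud Storage bucket and referencing it by URL.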
This script only installs the required packages; you still need to configure the Remote Desktop Service for your user, as described previously. There are various ways to specify a startup script when creating a new VM instance: Pasting it into the Google Cloud console (as shown earlier). Storing it as a file on a local machine, and using the --metadata-from-file flag when you create the instance using the Google Cloud CLI. Storing it in a Cloud Storage bucket and specifying the URL to the object—either in the console or in the gcloud CLI. For more information on the alternative methods of how to configure the startup script, see Running Startup scripts in the Compute Engine documentation. Troubleshooting This section provides troubleshooting advice for this guide. Check the status of the Chrome Remote Desktop service If at any point the Chrome Remote Desktop service is not responding, you can check its status by using SSH to connect to the instance and running the following command: sudo systemctl status chrome-remote-desktop@$USER If the service is running, you see output that includes the state active: chrome-remote-desktop.service - LSB: Chrome Remote Desktop service Loaded: loaded (/lib/systemd/system/chrome-remote-desktop@USER.service; enabled; vendor preset: enabled) Active: active (running) since DATE_TIME; ELAPSED_TIME To restart the service, use the following command in the SSH window: sudo systemctl restart chrome-remote-desktop@$USER Get log and error information Chrome Remote Desktop writes log information to the system journal: journalctl SYSLOG_IDENTIFIER=chrome-remote-desktop # All logs journalctl SYSLOG_IDENTIFIER=chrome-remote-desktop -e # Most recent logs journalctl SYSLOG_IDENTIFIER=chrome-remote-desktop -b # Logs since reboot You can check these log files for error messages. Re-enable the service If you have mistakenly disabled connections to the remote instance in the client app, you can reconfigure the service and re-enable it by following the instructions in Configure and start the Chrome Remote Desktop service. Check the global and user-specific session configuration files. Check the contents of the global /etc/chrome-remote-desktop-session configuration file and the user-specific ~/.chrome-remote-desktop-session configuration file and confirm that the specified desktop environments are installed. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project The easiest way to eliminate billing is to delete the project that you created for the tutorial. To delete the project: Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page. 
Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. Delete the Compute Engine instance As an alternative to deleting the entire project, you can delete the VM instance you created for this tutorial: In the Google Cloud console, go to the VM Instances page: Go to the VM Instances page Select the checkbox next to the instance name you created earlier (crdhost). Click the Delete button at the top of the page: It takes a few moments to delete your instance. Deauthorize Chrome Remote Desktop for the instance If you no longer want to connect to the VM instance, you can disable it and remove the instance from the Remote Devices list. On your local computer, go to the Chrome Remote Desktop Remote Device list website. Click delete next to the instance name crdhost. Click OK to confirm that the remote device connection should be disabled. What's next Learn how to set up Chrome Remote Desktop on a Windows virtual machines. Learn about other options for creating a virtual workstation. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Set_up_for_Windows.txt b/Set_up_for_Windows.txt new file mode 100644 index 0000000000000000000000000000000000000000..fab194bf17aebf94ad140e47aca96243c5043b88 --- /dev/null +++ b/Set_up_for_Windows.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/chrome-desktop-remote-windows-compute-engine +Date Scraped: 2025-02-23T11:48:04.263Z + +Content: +Home Docs Cloud Architecture Center Send feedback Set up Chrome Remote Desktop for Windows on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2022-11-16 UTC This tutorial shows you how to set up the Chrome Remote Desktop service on a Microsoft Windows virtual machine (VM) instance on Compute Engine. For separate instructions for Linux VMs, see Linux virtual machines. Chrome Remote Desktop lets you remotely access applications with a graphical user interface from a local computer or mobile device. When following this tutorial, the default firewall rules allow Chrome Remote Desktop connections; you don't need to configure any additional firewall rules. The VM does need access to the internet (either with an external IP address or through Cloud NAT), and you use your Google Account for authentication and authorization. Two methods of setting up Chrome Remote Desktop are described: An interactive method using Windows Remote Desktop Protocol (RDP). This method requires that the VM be directly accessible from your local machine using an RDP client, which may not be possible in all situations. A non-interactive method using a startup script to install and configure Chrome Remote Desktop while the VM is being created. This method should be used if you have firewalls preventing direct access to the VM, or if you don't have access to an RDP client—for example, on Chrome OS. Note: This solution is not suitable for graphically intensive applications, including playing videos, because those typically require hardware graphics acceleration as well as a network that has high bandwidth and low latency. If you want to run graphically intense applications remotely, see the Creating a virtual GPU-accelerated Windows workstation tutorial for an alternative solution. 
This tutorial assumes that you are familiar with Microsoft Windows and the PowerShell command line. For information about other options for creating virtual workstations, see Creating a virtual workstation. Objectives Create a Windows Compute Engine VM instance to run Chrome Remote Desktop on. Install and configure the Chrome Remote Desktop service on the VM instance. Connect from your local computer to the desktop environment on the VM instance. Costs This tutorial uses billable components of Google Cloud, including: Compute Engine Use the Pricing Calculator to generate a cost estimate based on your projected usage. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. Enable the Compute Engine API. Enable the API When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up. Make sure that you have the following role or roles on the project: roles/compute.admin Check for the roles In the Google Cloud console, go to the IAM page. Go to IAM Select the project. In the Principal column, find all rows that identify you or a group that you're included in. To learn which groups you're included in, contact your administrator. For all rows that specify or include you, check the Role column to see whether the list of roles includes the required roles. Grant the roles In the Google Cloud console, go to the IAM page. Go to IAM Select the project. Click person_add Grant access. In the New principals field, enter your user identifier. This is typically the email address for a Google Account. In the Select a role list, select a role. To grant additional roles, click add Add another role and add each additional role. Click Save. You use the Google Chrome browser on your local machine. If you're using the interactive method, your local machine needs to have an RDP client and be able to make a direct RDP connection to the remote VM instance. Interactive installation using RDP To install Chrome Remote Desktop interactively, you need to be able to connect to the remote VM using an RDP client. In this tutorial, you create the VM in the default VPC with default firewall rules, which exposes the RDP port 3389 to the internet. If this is not possible in your environment, use the non-interactive method that's described later in this document. Create a Compute Engine instance For the purposes of this tutorial, the default machine type is used.
If you are using this for your own environment, you may want to adjust the machine type, name, region, boot disk size, or other settings. Console In the Google Cloud console, go to the VM Instances page: Go to VM Instances Click Create. Set the instance name to crdhost. Enable the Enable display device checkbox because Chrome Remote Desktop requires a display device on Windows VMs. Under Boot disk, click Change to open the Boot disk panel. From the Operating system list, select Windows Server. From the Version list, select Windows Server 2022 Datacenter. Click Select to close the panel. Click Create. Cloud Shell Open Cloud Shell. Open Cloud Shell Set your preferred zone: ZONE=us-central1-b REGION=us-central1 gcloud config set compute/zone "${ZONE}" Create a Compute Engine instance by using the app image for Windows Server 2022 Datacenter: gcloud compute instances create crdhost \ --machine-type=e2-medium \ --scopes=cloud-platform \ --enable-display-device \ --image-family=windows-2022 \ --image-project=windows-cloud \ --boot-disk-size=50GB \ --boot-disk-device-name=crdhost This command creates a Windows Server 2022 virtual machine that has an attached display device (required for Chrome Remote Desktop on Windows VMs) a 50GB boot disk, and grants the instance full access to Google Cloud APIs. Ignore the disk performance warning because you don't need high performance for this tutorial. Connect to the VM instance by using RDP In the Google Cloud console, go to the VM instances page. Go to the VM instances page Make sure a green check mark check is displayed next to the name of your crdhost instance, indicating that the instance is ready. Click the instance name crdhost to open the VM instance details page. Under Remote access, click Set Windows password, and then click Set to create your account on the remote machine. This step generates a password for you. Make a note of the password or copy it to a secure temporary file. To connect to the remote instance, click the arrow arrow_drop_down next to the RDP button, and then select Download the RDP file. You can open the RDP file by using your preferred RDP client. When your RDP client prompts for a password, enter the password that you generated earlier. When you're prompted whether you want your computer discoverable by other PCs and devices on the network, click No. Close the Server Manager Dashboard if it is open. Install the Chrome Remote Desktop service The next step is to install Google Chrome and the Chrome Remote Desktop service on the VM instance. In your RDP session, click Start on the Windows taskbar, type PowerShell, and then select the Windows PowerShell app. At the PowerShell prompt, download and run the Chrome Remote Desktop Host installer. $installer = "$env:TEMP\chromeremotedesktophost.msi" $uri = 'https://dl.google.com/edgedl/chrome-remote-desktop/chromeremotedesktophost.msi' (New-Object Net.WebClient).DownloadFile($uri,"$installer") Start-Process $installer -Wait Remove-Item $installer When you're prompted, confirm that you want the installer to make changes. Set up the Chrome Remote Desktop service You now generate a Windows command that starts the Chrome Remote Desktop service and links it to your Google Account. On your local computer, using the Chrome browser, go to the Chrome Remote Desktop command line setup page. If you're not already signed in, sign in with a Google Account. This is the account that will be used for authorizing remote access. On the Set up another computer page, click Begin, then Next. 
Click Authorize. You need to allow Chrome Remote Desktop to access your account. If you approve, the page displays several command lines, one of which is for Windows (Powershell) that looks like the following: & "${Env:PROGRAMFILES(X86)}\Google\Chrome Remote Desktop\CurrentVersion\remoting_start_host.exe" ` --code="4/ENCODED_AUTHENTICATION_TOKEN" ` --redirect-url="https://remotedesktop.google.com/_/oauthredirect" ` --name=$Env:COMPUTERNAME Click Copy content_copy to copy the command line to your clipboard. In your RDP session, at the Powershell prompt, paste the command line you just copied and press Enter. When you're prompted, confirm that you want the application to make changes. When you're prompted, enter a 6-digit PIN. This number will be used for additional authorization when you connect later. After the command completes, your remote desktop service has started. Close the Powershell window. Close the RDP session. You can now connect to the VM using Chrome Remote Desktop. Non-interactive installation In this approach, you configure the VM instance to have a startup script that runs when the VM is created. With this approach, the VM does not need to be directly accessible from the internet, although it still needs access to the internet. Authorize the Chrome Remote Desktop service You now generate a Windows command that you use later in the specialize script. As part of this procedure, you provide authorization information that's included in the command. On your local computer, using the Chrome browser, go to the Chrome Remote Desktop command line setup page. If you're not already signed in, sign in with a Google Account. This is the account that will be used for authorizing remote access. Click Begin, and then click Next. Click Authorize. Allow Chrome Remote Desktop to access your account. The page now contains several command lines, one of which is for Windows (Cmd) that looks like the following: "%PROGRAMFILES(X86)%\Google\Chrome Remote Desktop\CurrentVersion\remoting_start_host.exe" --code="4/ENCODED_AUTHENTICATION_TOKEN" --redirect-url="https://remotedesktop.google.com/_/oauthredirect" --name=%COMPUTERNAME% The --code flag contains a unique short-lived OAuth token. The authorization code in the command line is valid for only a few minutes, and you can use it only once. Keep this page open. Copy the startup command to Cloud Shell The next step is to create a file in your Cloud Shell instance that contains the startup command that you just generated. Open Cloud Shell. Open Cloud Shell Create a file for the startup command: cat > crd-auth-command.txt Go to the page that has the Chrome Remote Desktop startup command and copy the Windows (Cmd) command line. In Cloud Shell paste the command to add it to the file. Press Enter to end the line, and then press Control-D to close the file. Create the startup script Copy the following code block and paste it into Cloud Shell. cat << "EOF" > crd-sysprep-script.ps1 <# .SYNOPSIS GCESysprep specialize script for unattended Chrome Remote Desktop installation. #> $ErrorActionPreference = 'stop' function Get-Metadata([String]$metadataName) { try { $value = (Invoke-RestMethod ` -Headers @{'Metadata-Flavor' = 'Google'} ` -Uri "http://metadata.google.internal/computeMetadata/v1/instance/attributes/$metadataName") } catch { # Report but ignore REST errors. Write-Host $_ } if ($value -eq $null -or $value.Length -eq 0) { throw "Metadata value for ""$metadataName"" not specified. Skipping Chrome Remote Desktop service installation." 
} return $value } # Get config from metadata # $crdCommand = Get-Metadata('crd-command') $crdPin = Get-Metadata('crd-pin') $crdName = Get-Metadata('crd-name') if ($crdPin -isNot [Int32] -or $crdPin -gt 999999 -or $crdPin -lt 0) { throw "Metadata ""crd-pin""=""$crdPin"" is not a 6 digit number. Skipping Chrome Remote Desktop service installation." } # Prefix $crdPin with zeros if required. $crdPin = $crdPin.ToString("000000"); # Extract the authentication code and redirect URL arguments from the # remote dekstop startup command line. # $crdCommandArgs = $crdCommand.Split(' ') $codeArg = $crdCommandArgs | Select-String -Pattern '--code="[^"]+"' $redirectArg = $crdCommandArgs | Select-String -Pattern '--redirect-url="[^"]+"' if (-not $codeArg) { throw 'Cannot get --code= parameter from crd-command. Skipping Chrome Remote Desktop service installation.' } if (-not $redirectArg) { throw 'Cannot get --redirect-url= parameter from crd-command. Skipping Chrome Remote Desktop service installation.' } Write-Host 'Downloading Chrome Remote Desktop.' $installer = "$env:TEMP\chromeremotedesktophost.msi" $uri = 'https://dl.google.com/edgedl/chrome-remote-desktop/chromeremotedesktophost.msi' (New-Object Net.WebClient).DownloadFile($uri,"$installer") Write-Host 'Installing Chrome Remote Desktop.' & msiexec.exe /I $installer /qn /quiet | Out-Default Remove-Item $installer Write-Host 'Starting Chrome Remote Desktop service.' & "${env:ProgramFiles(x86)}\Google\Chrome Remote Desktop\CurrentVersion\remoting_start_host.exe" ` $codeArg $redirectArg --name="$crdName" -pin="$crdPin" | Out-Default Write-Host 'Downloading Chrome.' $installer = "$env:TEMP\chrome_installer.exe" $uri = 'https://dl.google.com/chrome/install/latest/chrome_installer.exe' (New-Object Net.WebClient).DownloadFile($uri,"$installer") Write-Host 'Installing Chrome.' & $installer /silent /install | Out-Default Remove-Item $installer EOF This code block is a PowerShell script that runs when the VM is created. It performs the following actions: Downloads and installs the Chrome Remote Desktop host service. Retrieves the following metadata parameters: crd-command - the Windows authentication and startup command. crd-pin - the 6-digit PIN used for additional authentication. crd-name - the name for this instance. Configures and starts the Chrome Remote Desktop host service. Downloads and installs the Chrome browser. Create a new Windows virtual machine You now create a new Windows VM using the files you created earlier to configure and set up Chrome Remote Desktop. For the purposes of this tutorial, the e2-medium machine type is used. If you are using this for your own environment, you may want to adjust the machine type, name, region, boot disk size, or other settings. In Cloud Shell, set your preferred zone: ZONE=us-central1-b REGION=us-central1 gcloud config set compute/zone "${ZONE}" Set a 6-digit PIN for additional authentication to Chrome Remote Desktop: CRD_PIN=your-pin Replace your-pin with a 6-digit number. 
Set a name for this VM instance: INSTANCE_NAME=crdhost Create the instance: gcloud compute instances create ${INSTANCE_NAME} \ --machine-type=e2-medium \ --scopes=cloud-platform \ --enable-display-device \ --image-family=windows-2022 \ --image-project=windows-cloud \ --boot-disk-size=50GB \ --boot-disk-device-name=${INSTANCE_NAME} \ --metadata=crd-pin=${CRD_PIN},crd-name=${INSTANCE_NAME} \ --metadata-from-file=crd-command=crd-auth-command.txt,sysprep-specialize-script-ps1=crd-sysprep-script.ps1 This command creates a Windows Server 2022 virtual machine in the default VPC that has an attached display device (required for Chrome Remote Desktop on Windows VMs), a 50GB boot disk, and grants the instance full access to Google Cloud APIs. The metadata values specify the specialize script, Windows startup command line, and the parameters required to start the Chrome Remote Desktop service. Monitor the VM startup You can verify that the startup script is successful by checking the messages logged to the VM's serial port while it is being created. In Cloud Shell, display the messages logged during VM startup: gcloud compute instances tail-serial-port-output ${INSTANCE_NAME} If the Chrome Remote Desktop configuration is successful, you see the following log lines: Found sysprep-specialize-script-ps1 in metadata. sysprep-specialize-script-ps1: Downloading Chrome Remote Desktop. sysprep-specialize-script-ps1: Installing Chrome Remote Desktop. sysprep-specialize-script-ps1: Downloading Chrome. sysprep-specialize-script-ps1: Installing Chrome. sysprep-specialize-script-ps1: Starting Chrome Remote Desktop service. Finished running specialize scripts. You might also see the following line: sysprep-specialize-script-ps1: ... Failed to read 'C:\ProgramData\Google\Chrome Remote Desktop\host_unprivileged.json'.: The system cannot find the path specified. (0x3) This is normal and can be ignored. If starting the Chrome Remote Desktop service fails, you see an error message indicating the problem, for example: sysprep-specialize-script-ps1: Couldn't start host: OAuth error. This error indicates that the OAuth token from the Chrome Remote Desktop authentication page is no longer valid, either because it has already been used, or because it has expired. To correct this error, either connect using RDP and perform an interactive setup as described previously, or delete the VM and retry the setup process. When you see the following message in the serial port monitor, the VM is ready. GCEInstanceSetup: ------------------------------------------------------------ GCEInstanceSetup: Instance setup finished. crdhost is ready to use. GCEInstanceSetup: ------------------------------------------------------------ Press Control-C to stop displaying the startup messages. Create a Windows user account In the Google Cloud console, go to the VM instances page. Go to the VM instances page Click the instance name crdhost to open the VM instance details page. Under Remote access, click Set Windows password, and then click Set to create your account on the remote machine. This step generates a password for you. Make a note of the username and password or copy it to a secure temporary file. Connect to the VM instance with Chrome Remote Desktop You can connect to the VM instance using the Chrome Remote Desktop web application. On your local computer, go to the Chrome Remote Desktop website. Click Access my computer. 
If you're not already signed in to Google, sign in with the same Google Account that you used to set up the Chrome Remote Desktop service. You see your new VM instance crdhost in the Remote Devices list. Click the name of the remote desktop instance. When you're prompted, enter the PIN that you created earlier, and then click the arrow arrow_forward button to connect. You are now connected to the Windows login screen on your remote Compute Engine instance. If you are prompted, always allow the Remote Desktop application to read your clipboard and let you copy and paste between local and remote applications. Press any key, and enter the password for the Windows user that you generated earlier. Note that the default remote keyboard has a US-English layout, so the characters entered may not match the characters on your local keyboard. You also cannot copy and paste the password. You are now connected and logged in to the remote Windows desktop. Improve the remote desktop experience This section provides instructions for changing settings in order to improve the remote desktop experience. Install the Remote Desktop Chrome app The Remote Desktop Chrome app gives a separate windowed experience and allows keyboard shortcuts that would normally be intercepted by Chrome to be used on the remote system. If this app is not installed, do the following: Open the Session Options panel using the button chevron_left that appears when you move the mouse to the side of the window. In the Install App section, click Begin. Click Install. The remote desktop session reopens in its own application window. You can move any remote desktop sessions from a Chrome tab to the app window by clicking the Open With open_in_new icon in the address bar. Improve the screen resolution The default remote desktop resolution can be modified to better suit your local computer's desktop resolution. Right-click the remote desktop's background and select Display Settings. In the Resolution drop-down list, select a different screen resolution. Confirm the new screen resolution in the dialog. Re-enable the service If you have mistakenly disabled connections to the remote instance in the client app, you can reconfigure the service and re-enable it by following the instructions in Set up the Chrome Remote Desktop service. Clean up To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources. Delete the project The easiest way to eliminate billing is to delete the project that you created for the tutorial. To delete the project: Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits. In the Google Cloud console, go to the Manage resources page.
In the dialog, type the project ID, and then click Shut down to delete the project. Delete the Compute Engine instance As an alternative to deleting the entire project, you can delete the VM instance you created for this tutorial: In the Google Cloud console, go to the VM Instances page: Go to the VM Instances page Select the checkbox next to the instance name you created earlier (crdhost). Click the Delete button at the top of the page: It takes a few moments to delete your instance. Deauthorize Chrome Remote Desktop for the instance If you no longer want to connect to the VM instance, you can disable it and remove the instance from the Remote Devices list. On your local computer, go to the Chrome Remote Desktop Remote Device list website. Click delete next to the instance name crdhost. Click OK to confirm that the remote device connection should be disabled. What's next Learn how to set up Chrome Remote Desktop on a Linux virtual machines. Learn about other options for creating a virtual workstation. Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/Shared_responsibility_and_shared_fate.txt b/Shared_responsibility_and_shared_fate.txt new file mode 100644 index 0000000000000000000000000000000000000000..529899a6c0137c81411c03193743b6891cc410bf --- /dev/null +++ b/Shared_responsibility_and_shared_fate.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security/shared-responsibility-shared-fate +Date Scraped: 2025-02-23T11:43:12.563Z + +Content: +Home Docs Cloud Architecture Center Send feedback Shared responsibilities and shared fate on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-08-21 UTC This document describes the differences between the shared responsibility model and shared fate in Google Cloud. It discusses the challenges and nuances of the shared responsibility model. This document describes what shared fate is and how we partner with our customers to address cloud security challenges. Understanding the shared responsibility model is important when determining how to best protect your data and workloads on Google Cloud. The shared responsibility model describes the tasks that you have when it comes to security in the cloud and how these tasks are different for cloud providers. Understanding shared responsibility, however, can be challenging. The model requires an in-depth understanding of each service you utilize, the configuration options that each service provides, and what Google Cloud does to secure the service. Every service has a different configuration profile, and it can be difficult to determine the best security configuration. Google believes that the shared responsibility model stops short of helping cloud customers achieve better security outcomes. Instead of shared responsibility, we believe in shared fate. Shared fate includes us building and operating a trusted cloud platform for your workloads. We provide best practice guidance and secured, attested infrastructure code that you can use to deploy your workloads in a secure way. We release solutions that combine various Google Cloud services to solve complex security problems and we offer innovative insurance options to help you measure and mitigate the risks that you must accept. Shared fate involves us more closely interacting with you as you secure your resources on Google Cloud. 
Shared responsibility You're the expert in knowing the security and regulatory requirements for your business, and knowing the requirements for protecting your confidential data and resources. When you run your workloads on Google Cloud, you must identify the security controls that you need to configure in Google Cloud to help protect your confidential data and each workload. To decide which security controls to implement, you must consider the following factors: Your regulatory compliance obligations Your organization's security standards and risk management plan Security requirements of your customers and your vendors Defined by workloads Traditionally, responsibilities are defined based on the type of workload that you're running and the cloud services that you require. Cloud services include the following categories: Cloud service Description Infrastructure as a service (IaaS) IaaS services include Compute Engine, Cloud Storage, and networking services such as Cloud VPN, Cloud Load Balancing, and Cloud DNS. IaaS provides compute, storage, and network services on demand with pay-as-you-go pricing. You can use IaaS if you plan on migrating an existing on-premises workload to the cloud using lift-and-shift, or if you want to run your application on particular VMs, using specific databases or network configurations. In IaaS, the bulk of the security responsibilities are yours, and our responsibilities are focused on the underlying infrastructure and physical security. Platform as a service (PaaS) PaaS services include App Engine, Google Kubernetes Engine (GKE), and BigQuery. PaaS provides the runtime environment that you can develop and run your applications in. You can use PaaS if you're building an application (such as a website), and want to focus on development not on the underlying infrastructure. In PaaS, we're responsible for more controls than in IaaS. Typically, this will vary by the services and features that you use. You share responsibility with us for application-level controls and IAM management. You remain responsible for your data security and client protection. Software as a service (SaaS) SaaS applications include Google Workspace, Google Security Operations, and third-party SaaS applications that are available in Google Cloud Marketplace. SaaS provides online applications that you can subscribe to or pay for in some way. You can use SaaS applications when your enterprise doesn't have the internal expertise or business requirement to build the application themselves, but does require the ability to process workloads. In SaaS, we own the bulk of the security responsibilities. You remain responsible for your access controls and the data that you choose to store in the application. Function as a service (FaaS) or serverless FaaS provides the platform for developers to run small, single-purpose code (called functions) that run in response to particular events. You would use FaaS when you want particular things to occur based on a particular event. For example, you might create a function that runs whenever data is uploaded to Cloud Storage so that it can be classified. FaaS has a similar shared responsibility list as SaaS. Cloud Run functions is a FaaS application. The following diagram shows the cloud services and defines how responsibilities are shared between the cloud provider and customer. 
As the diagram shows, the cloud provider always remains responsible for the underlying network and infrastructure, and customers always remain responsible for their access policies and data. Defined by industry and regulatory framework Various industries have regulatory frameworks that define the security controls that must be in place. When you move your workloads to the cloud, you must understand the following: Which security controls are your responsibility Which security controls are available as part of the cloud offering Which default security controls are inherited Inherited security controls (such as our default encryption and infrastructure controls) are controls that you can provide as part of your evidence of your security posture to auditors and regulators. For example, the Payment Card Industry Data Security Standard (PCI DSS) defines regulations for payment processors. When you move your business to the cloud, these regulations are shared between you and your CSP. To understand how PCI DSS responsibilities are shared between you and Google Cloud, see Google Cloud: PCI DSS Shared Responsibility Matrix. As another example, in the United States, the Health Insurance Portability and Accountability Act (HIPAA) has set standards for handling electronic personal health information (PHI). These responsibilities are also shared between the CSP and you. For more information on how Google Cloud meets our responsibilities under HIPAA, see HIPAA - Compliance. Other industries (for example, finance or manufacturing) also have regulations that define how data can be gathered, processed, and stored. For more information about shared responsibility related to these, and how Google Cloud meets our responsibilities, see Compliance resource center. Defined by location Depending on your business scenario, you might need to consider your responsibilities based on the location of your business offices, your customers, and your data. Different countries and regions have created regulations that inform how you can process and store your customer's data. For example, if your business has customers who reside in the European Union, your business might need to abide by the requirements that are described in the General Data Protection Regulation (GDPR), and you might be obligated to keep your customer data in the EU itself. In this circumstance, you are responsible for ensuring that the data that you collect remains in the Google Cloud regions in the EU. For more information about how we meet our GDPR obligations, see GDPR and Google Cloud. For information about the requirements related to your region, see Compliance offerings. If your scenario is particularly complicated, we recommend speaking with our sales team or one of our partners to help you evaluate your security responsibilities. Challenges for shared responsibility Though shared responsibility helps define the security roles that you or the cloud provider has, relying on shared responsibility can still create challenges. Consider the following scenarios: Most cloud security breaches are the direct result of misconfiguration (listed as number 3 in the Cloud Security Alliance's Pandemic 11 Report) and this trend is expected to increase. Cloud products are constantly changing, and new ones are constantly being launched. Keeping up with constant change can seem overwhelming. 
Customers need cloud providers to provide them with opinionated best practices to help keep up with the change, starting with best practices by default and having a baseline secure configuration. Though dividing items by cloud services is helpful, many enterprises have workloads that require multiple cloud services types. In this circumstance, you must consider how various security controls for these services interact, including whether they overlap between and across services. For example, you might have an on-premises application that you're migrating to Compute Engine, use Google Workspace for corporate email, and also run BigQuery to analyze data to improve your products. Your business and markets are constantly changing; as regulations change, as you enter new markets, or as you acquire other companies. Your new markets might have different requirements, and your new acquisition might host their workloads on another cloud. To manage the constant changes, you must constantly re-assess your risk profile and be able to implement new controls quickly. How and where to manage your data encryption keys is an important decision that ties with your responsibilities to protect your data. The option that you choose depends on your regulatory requirements, whether you're running a hybrid cloud environment or still have an on-premises environment, and the sensitivity of the data that you're processing and storing. Incident management is an important, and often overlooked, area where your responsibilities and the cloud provider responsibilities aren't easily defined. Many incidents require close collaboration and support from the cloud provider to help investigate and mitigate them. Other incidents can result from poorly configured cloud resources or stolen credentials, and ensuring that you meet the best practices for securing your resources and accounts can be quite challenging. Advanced persistent threats (APTs) and new vulnerabilities can impact your workloads in ways that you might not consider when you start your cloud transformation. Ensuring that you remain up-to-date on the changing landscape, and who is responsible for threat mitigation is difficult, particularly if your business doesn't have a large security team. Shared fate We developed shared fate in Google Cloud to start addressing the challenges that the shared responsibility model doesn't address. Shared fate focuses on how all parties can better interact to continuously improve security. Shared fate builds on the shared responsibility model because it views the relationship between cloud provider and customer as an ongoing partnership to improve security. Shared fate is about us taking responsibility for making Google Cloud more secure. Shared fate includes helping you get started with a secured landing zone and being clear, opinionated, and transparent about recommended security controls, settings, and associated best practices. It includes helping you better quantify and manage your risk with cyber-insurance, using our Risk Protection Program. Using shared fate, we want to evolve from the standard shared responsibility framework to a better model that helps you secure your business and build trust in Google Cloud. The following sections describe various components of shared fate. Help getting started A key component of shared fate is the resources that we provide to help you get started, in a secure configuration in Google Cloud. 
Starting with a secure configuration helps reduce the issue of misconfigurations, which are the root cause of most security breaches. Our resources include the following: Enterprise foundations blueprint that discusses top security concerns and our top recommendations. Secure blueprints that let you deploy and maintain secure solutions using infrastructure as code (IaC). Blueprints have our security recommendations enabled by default. Many blueprints are created by Google security teams and managed as products. This support means that they're updated regularly, go through a rigorous testing process, and receive attestations from third-party testing groups. Blueprints include the enterprise foundations blueprint and the secured data warehouse blueprint. Architecture Framework best practices that address the top recommendations for building security into your designs. The Architecture Framework includes a security section and a community zone that you can use to connect with experts and peers. Landing zone navigation guides that step you through the top decisions that you need to make to build a secure foundation for your workloads, including resource hierarchy, identity onboarding, security and key management, and network structure. Risk Protection Program Shared fate also includes the Risk Protection Program (currently in preview), which helps you use the power of Google Cloud as a platform to manage risk, rather than just seeing cloud workloads as another source of risk that you need to manage. The Risk Protection Program is a collaboration between Google Cloud and two leading cyber insurance companies, Munich Re and Allianz Global Corporate & Specialty. The Risk Protection Program includes Risk Manager, which provides data-driven insights that you can use to better understand your cloud security posture. If you're looking for cyber insurance coverage, you can share these insights from Risk Manager directly with our insurance partners to obtain a quote. For more information, see Google Cloud Risk Protection Program now in Preview. Help with deployment and governance Shared fate also helps with your continued governance of your environment. For example, we focus efforts on products such as the following: Assured Workloads, which helps you meet your compliance obligations. Security Command Center Premium, which uses threat intelligence, threat detection, web scanning, and other advanced methods to monitor and detect threats. It also provides a way to resolve many of these threats quickly and automatically. Organization policies and resource settings that let you configure policies throughout your hierarchy of folders and projects. Policy Intelligence tools that provide you with insights on access to accounts and resources. Confidential Computing, which allows you to encrypt data in use. Sovereign Controls by Partners, which is available in certain countries and helps enforce data residency requirements. Putting shared responsibility and shared fate into practice As part of your planning process, consider the following actions to help you understand and implement appropriate security controls: Create a list of the types of workloads that you will host in Google Cloud, and whether they require IaaS, PaaS, and SaaS services. You can use the shared responsibility diagram as a checklist to ensure that you know the security controls that you need to consider. 
Create a list of regulatory requirements that you must comply with, and access resources in the Compliance resource center that relate to those requirements. Review the list of available blueprints and architectures in the Architecture Center for the security controls that you require for your particular workloads. The blueprints provide a list of recommended controls and the IaC code that you require to deploy that architecture. Use the landing zone documentation and the recommendations in the enterprise foundations guide to design a resource hierarchy and network architecture that meets your requirements. You can use the opinionated workload blueprints, like the secured data warehouse, to accelerate your development process. After you deploy your workloads, verify that you're meeting your security responsibilities using services such as the Risk Manager, Assured Workloads, Policy Intelligence tools, and Security Command Center Premium. For more information, see the CISO's Guide to Cloud Transformation paper. What's next Review the core security principles. Keep up to date with shared fate resources. Familiarize yourself with available blueprints, including the security foundations blueprint and workload examples like the secured data warehouse. Read more about shared fate. Read about our underlying secure infrastructure in the Google infrastructure security design overview. Read how to implement NIST Cybersecurity Framework best practices in Google Cloud (PDF). Send feedback \ No newline at end of file diff --git a/Single-zone_deployment_on_Compute_Engine.txt b/Single-zone_deployment_on_Compute_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..699da1a4139dfa25d7b109619694e5f9e9d92da5 --- /dev/null +++ b/Single-zone_deployment_on_Compute_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/single-zone-deployment-compute-engine +Date Scraped: 2025-02-23T11:44:56.147Z + +Content: +Home Docs Cloud Architecture Center Send feedback Single-zone deployment on Compute Engine Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-02-08 UTC This document provides a reference architecture for a multi-tier application that runs on Compute Engine VMs in a single zone in Google Cloud. You can use this reference architecture to efficiently rehost (lift and shift) on-premises applications to the cloud with minimal changes to the applications. The document also describes the design factors that you should consider when you build a zonal architecture for your cloud applications. The intended audience for this document is cloud architects. Architecture The following diagram shows an architecture for an application that runs in a single Google Cloud zone. This architecture is aligned with the Google Cloud zonal deployment archetype. The architecture is based on the infrastructure as a service (IaaS) cloud model. You provision the required infrastructure resources (compute, networking, and storage) in Google Cloud. You retain full control over the infrastructure and responsibility for the operating system, middleware, and higher layers of the application stack. To learn more about IaaS and other cloud models, see PaaS vs. IaaS vs. SaaS vs. CaaS: How are they different? The preceding diagram includes the following components: Component Purpose Regional external load balancer The regional external load balancer receives and distributes user requests to the web tier VMs. 
Use an appropriate load balancer type depending on the traffic type and other requirements. For example, if the backend consists of web servers (as shown in the preceding architecture), then use an Application Load Balancer to forward HTTP(S) traffic. To load-balance TCP traffic, use a Network Load Balancer. For more information, see Choose a load balancer. Zonal managed instance group (MIG) for the web tier The web tier of the application is deployed on Compute Engine VMs that are part of a zonal MIG. The MIG is the backend for the regional external load balancer. Each VM in the MIG hosts an independent instance of the web tier of the application. Regional internal load balancer The regional internal load balancer distributes traffic from the web tier VMs to the application tier VMs. Depending on your requirements, you can use a regional internal Application Load Balancer or Network Load Balancer. For more information, see Choose a load balancer. Zonal MIG for the application tier The application tier is deployed on Compute Engine VMs that are part of a zonal MIG, which is the backend for the internal load balancer. Each VM in the MIG hosts an independent instance of the application tier. Third-party database deployed on a Compute Engine VM The architecture in this document shows a third-party database (like PostgreSQL) that's deployed on a Compute Engine VM. You can deploy a standby database in another zone. The database replication and failover capabilities depend on the database that you use. Installing and managing a third-party database involves additional effort and operational cost for applying updates, monitoring, and ensuring availability. You can avoid the overhead of installing and managing a third-party database and take advantage of built-in high availability (HA) features by using a fully managed database service like Cloud SQL or AlloyDB for PostgreSQL. For more information about managed database options, see Database services. Virtual Private Cloud network and subnet All the Google Cloud resources in the architecture use a single VPC network and subnet. Depending on your requirements, you can choose to build an architecture that uses multiple VPC networks or multiple subnets. For more information, see Deciding whether to create multiple VPC networks in "Best practices and reference architectures for VPC design." Cloud Storage regional bucket Application and database backups are stored in a regional Cloud Storage bucket. If a zone outage occurs, your application and data aren't lost. Alternatively, you can use Backup and DR Service to create, store, and manage the database backups. Products used This reference architecture uses the following Google Cloud products: Compute Engine: A secure and customizable compute service that lets you create and run VMs on Google's infrastructure. Cloud Load Balancing: A portfolio of high performance, scalable, global and regional load balancers. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Virtual Private Cloud (VPC): A virtual system that provides global, scalable networking functionality for your Google Cloud workloads. VPC includes VPC Network Peering, Private Service Connect, private services access, and Shared VPC. Use cases This section describes use cases for which a single-zone deployment on Compute Engine is an appropriate choice. 
Cloud development and testing: You can use a single-zone deployment architecture to build a low-cost cloud environment for development and testing. Applications that don't need HA: A single-zone architecture might be sufficient for applications that can tolerate downtime due to infrastructure outages. Low-latency, low-cost networking between application components: A single-zone architecture might be well suited for applications such as batch computing that need low-latency and high-bandwidth network connections among the compute nodes. With a single-zone deployment, there's no cross-zone network traffic, and you don't incur costs for intra-zone traffic. Migration of commodity workloads: The zonal deployment architecture provides a simple cloud-migration path for commodity on-premises applications for which you have no control over the code or that can't support architectures beyond a basic active-passive topology. Running license-restricted software: A single-zone architecture might be well suited for license-restricted systems where running more than one instance at a time is either too expensive or isn't permitted. Design considerations This section provides guidance to help you use this reference architecture to develop an architecture that meets your specific requirements for system design, security and compliance, reliability, operational efficiency, cost, and performance. Note: The guidance in this section isn't exhaustive. Depending on the specific requirements of your application and the Google Cloud products and features that you use, there might be additional design factors and trade-offs that you should consider. System design This section provides guidance to help you to choose Google Cloud regions and zones for your zonal deployment and to select appropriate Google Cloud services. Region selection When you choose a Google Cloud region and zone for your applications, consider the following factors and requirements: Availability of Google Cloud services. For more information, see Products available by location. Availability of Compute Engine machine types. For more information, see Regions and zones. End-user latency requirements. Cost of Google Cloud resources. Regulatory requirements. Some of these factors and requirements might involve trade-offs. For example, the most cost-efficient region might not have the lowest carbon footprint. Compute services The reference architecture in this document uses Compute Engine VMs for all the tiers of the application. The design guidance in this document is specific to Compute Engine unless mentioned otherwise. Depending on the requirements of your application, you can choose from the following other Google Cloud compute services. The design guidance for those services is outside the scope of this document. You can run containerized applications in Google Kubernetes Engine (GKE) clusters. GKE is a container-orchestration engine that automates deploying, scaling, and managing containerized applications. If you prefer to focus your IT efforts on your data and applications instead of setting up and operating infrastructure resources, then you can use serverless services like Cloud Run and Cloud Run functions. The decision of whether to use VMs, containers, or serverless services involves a trade-off between configuration flexibility and management effort. VMs and containers provide more configuration flexibility, but you're responsible for managing the resources. 
In a serverless architecture, you deploy workloads to a preconfigured platform that requires minimal management effort. For more information about choosing appropriate compute services for your workloads in Google Cloud, see Hosting Applications on Google Cloud in the Google Cloud Architecture Framework. Storage services The architecture shown in this document uses zonal Persistent Disk volumes for all the tiers. For more durable persistent storage, you can use regional Persistent Disk volumes, which provide synchronous replication of data across two zones within a region. For low-cost storage that's redundant across the zones within a region, you can use Cloud Storage regional buckets. To store data that's shared across multiple VMs in a region, such as across all the VMs in the web tier or application tier, you can use Filestore. The data that you store in a Filestore Enterprise instance is replicated synchronously across three zones within the region. This replication ensures high availability and robustness against zone outages. You can store shared configuration files, common tools and utilities, and centralized logs in the Filestore instance, and mount the instance on multiple VMs. If your database is Microsoft SQL Server, we recommend using Cloud SQL for SQL Server. In scenarios when Cloud SQL doesn't support your configuration requirements, or if you need access to the operating system, then you can deploy a failover cluster instance (FCI). In this scenario, you can use the fully managed Google Cloud NetApp Volumes to provide continuous availability (CA) SMB storage for the database. When you design storage for your workloads, consider the functional characteristics, resilience requirements, performance expectations, and cost goals. For more information, see Design an optimal storage strategy for your cloud workload. Database services The reference architecture in this document uses a third-party database, like PostgreSQL, that's deployed on Compute Engine VMs. Installing and managing a third-party database involves effort and cost for operations like applying updates, monitoring and ensuring availability, performing backups, and recovering from failures. You can avoid the effort and cost of installing and managing a third-party database by using a fully managed database service like Cloud SQL, AlloyDB for PostgreSQL, Bigtable, Spanner, or Firestore. These Google Cloud database services provide uptime service-level agreements (SLAs), and they include default capabilities for scalability and observability. If your workloads require an Oracle database, you can use Bare Metal Solution provided by Google Cloud. For an overview of the use cases that each Google Cloud database service is suitable for, see Google Cloud databases. Security and compliance This section describes factors that you should consider when you use this reference architecture to design and build a zonal topology in Google Cloud that meets the security and compliance requirements of your workloads. Protection against external threats To protect your application against external threats like distributed denial-of-service (DDoS) attacks and cross-site scripting (XSS), you can use Google Cloud Armor security policies. The security policies are enforced at the perimeter—that is, before traffic reaches the web tier. Each policy is a set of rules that specifies certain conditions that should be evaluated and actions to take when the conditions are met. 
For example, a rule could specify that if the incoming traffic's source IP address matches a specific IP address or CIDR range, then the traffic must be denied. In addition, you can apply preconfigured web application firewall (WAF) rules. For more information, see Security policy overview. External access for VMs In the reference architecture that this document describes, the VMs that host the application tier, web tier, and databases don't need inbound access from the internet. Don't assign external IP addresses to those VMs. Google Cloud resources that have only a private, internal IP address can still access certain Google APIs and services by using Private Service Connect or Private Google Access. For more information, see Private access options for services. To enable secure outbound connections from Google Cloud resources that have only internal IP addresses, like the Compute Engine VMs in this reference architecture, you can use Cloud NAT. VM image security To ensure that your VMs use only approved images (that is, images with software that meets your policy or security requirements), you can define an organization policy that restricts the use of images in specific public image projects. For more information, see Setting up trusted image policies. Service account privileges In Google Cloud projects where the Compute Engine API is enabled, a default service account is created automatically. The default service account is granted the Editor IAM role (roles/editor) unless this behavior is disabled. By default, the default service account is attached to all VMs that you create by using the Google Cloud CLI or the Google Cloud console. The Editor role includes a broad range of permissions, so attaching the default service account to VMs creates a security risk. To avoid this risk, you can create and use dedicated service accounts for each application. To specify the resources that the service account can access, use fine-grained policies. For more information, see Limit service account privileges in "Best practices for using service accounts." Network security To control network traffic between the resources in the architecture, you must set up appropriate Cloud Next Generation Firewall rules. Each firewall rule lets you control traffic based on parameters like the protocol, IP address, and port. For example, you can configure a firewall rule to allow TCP traffic from the web server VMs to a specific port of the database VMs, and block all other traffic. More security considerations When you build the architecture for your workload, consider the platform-level security best practices and recommendations provided in the Enterprise foundations blueprint. Reliability This section describes design factors that you should consider when you use this reference architecture to build and operate reliable infrastructure for your zonal deployments in Google Cloud. Infrastructure outages In a single-zone deployment architecture, if any component in the infrastructure stack fails, the application can process requests if each tier contains at least one functioning component with adequate capacity. For example, if a web server instance fails, the load balancer forwards user requests to the other available web server instances. If a VM that hosts a web server or app server instance crashes, the MIG recreates the VM automatically. If the database crashes, you must manually activate the second database and update the app server instances to connect to the database. 
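To extend this automatic recreation to application-level failures, you can add a health check to the MIG's autohealing policy, as described under VM autohealing later in this document. The following is a minimal sketch using the gcloud CLI, assuming a zonal MIG named web-tier-mig in us-central1-a and a /healthz endpoint on the web servers (both names are illustrative):

# Create an HTTP health check that probes each web-tier VM.
gcloud compute health-checks create http basic-web-check \
    --port=80 \
    --request-path=/healthz \
    --check-interval=10s \
    --timeout=5s

# Attach the health check to the MIG so that VMs that fail the check
# are recreated automatically after the initial delay.
gcloud compute instance-groups managed update web-tier-mig \
    --zone=us-central1-a \
    --health-check=basic-web-check \
    --initial-delay=300

The initial delay gives newly created VMs time to boot and start the application before failed health checks trigger another recreation.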
A zone outage or region outage affects all the Compute Engine VMs in a single-zone deployment. A zone outage doesn't affect the load balancer in this architecture because it's a regional resource. However, the load balancer can't distribute traffic, because there are no available backends. If a zone or region outage occurs, you must wait for Google to resolve the outage, and then verify that the application works as expected. You can reduce the downtime caused by zone or region outages by maintaining a passive (failover) replica of the infrastructure stack in another Google Cloud zone or region. If an outage occurs in the primary zone, you can activate the stack in the failover zone or region, and use DNS routing policies to route traffic to the load balancer in the failover zone or region. For applications that require robustness against zone or region outages, consider using a regional or multi-regional architecture. See the following reference architectures: Regional deployment on Compute Engine Multi-regional deployment on Compute Engine MIG autoscaling The autoscaling capability of stateless MIGs lets you maintain application availability and performance at predictable levels. Stateful MIGs can't be autoscaled. To control the autoscaling behavior of your MIGs, you can specify target utilization metrics, such as average CPU utilization. You can also configure schedule-based autoscaling. For more information, see Autoscaling groups of instances. MIG size limit By default, a zonal MIG can have up to 1,000 VMs. You can increase the size limit of a MIG to 2,000 VMs. VM autohealing Sometimes the VMs that host your application might be running and available, but there might be issues with the application itself. It might freeze, crash, or not have sufficient memory. To verify whether an application is responding as expected, you can configure application-based health checks as part of the autohealing policy of your MIGs. If the application on a particular VM isn't responding, the MIG autoheals (repairs) the VM. For more information about configure autohealing, see Set up an application health check and autohealing. VM placement In the architecture that this document describes, the application tier and web tier run on Compute Engine VMs within a single zone. To improve the robustness of the architecture, you can create a spread placement policy and apply it to the MIG template. When the MIG creates VMs, it places the VMs on different physical servers (called hosts), so your VMs are robust against failures of individual hosts. For more information, see Apply spread placement policies to VMs. VM capacity planning To make sure that capacity for Compute Engine VMs is available when required for MIG autoscaling, you can create reservations. A reservation provides assured capacity in a specific zone for a specified number of VMs of a machine type that you choose. A reservation can be specific to a project, or shared across multiple projects. You incur charges for reserved resources even if the resources aren't provisioned or used. For more information about reservations, including billing considerations, see Reservations of Compute Engine zonal resources. Persistent disk state A best practice in application design is to avoid the need for stateful local disks. But if the requirement exists, you can configure your persistent disks to be stateful to ensure that the data is preserved when the VMs are repaired or recreated. 
However, we recommend that you keep the boot disks stateless, so that you can update them to the latest images with new versions and security patches. For more information, see Configuring stateful persistent disks in MIGs. Data durability You can use Backup and DR to create, store, and manage backups of the Compute Engine VMs. Backup and DR stores backup data in its original, application-readable format. When required, you can restore your workloads to production by directly using data from long-term backup storage without time-consuming data movement or preparation activities. If you use a managed database service like Cloud SQL, backups are taken automatically based on the retention policy that you define. You can supplement the backup strategy with additional logical backups to meet regulatory, workflow, or business requirements. If you use a third-party database and you need to store database backups and transaction logs, you can use regional Cloud Storage buckets. Regional Cloud Storage buckets provide low-cost backup storage that's redundant across zones. Compute Engine provides the following options to help you to ensure the durability of data that's stored in Persistent Disk volumes: You can use snapshots to capture the point-in-time state of Persistent Disk volumes. Standard snapshots are stored redundantly in multiple regions, with automatic checksums to ensure the integrity of your data. Snapshots are incremental by default, so they use less storage space and you save money. Snapshots are stored in a Cloud Storage location that you can configure. For more recommendations about using and managing snapshots, see Best practices for Compute Engine disk snapshots. Regional Persistent Disk volumes let you run highly available applications that aren't affected by failures in persistent disks. When you create a regional Persistent Disk volume, Compute Engine maintains a replica of the disk in a different zone in the same region. Data is replicated synchronously to the disks in both zones. If any one of the two zones has an outage, the data remains available. Database availability If you use a managed database service like Cloud SQL in HA configuration, then in the event of a failure of the primary database, Cloud SQL fails over automatically to the standby database. You don't need to change the IP address for the database endpoint. If you use a self-managed third-party database that's deployed on a Compute Engine VM, then you must use an internal load balancer or other mechanism to ensure that the application can connect to another database if the primary database is unavailable. To implement cross-zone failover for a database deployed on a Compute Engine VM, you need a mechanism to identify failures of the primary database and a process to fail over to the standby database. The specifics of the failover mechanism depend on the database that you use. You can set up an observer instance to detect failures of the primary database and orchestrate the failover. You must configure the failover rules appropriately to avoid a split-brain situation and prevent unnecessary failover. For example architectures that you can use to implement failover for PostgreSQL databases, see Architectures for high availability of PostgreSQL clusters on Compute Engine. 
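As an illustration of the managed-database option mentioned above, the following sketch shows how a Cloud SQL for PostgreSQL instance with high availability might be created by using the gcloud CLI. The instance name, region, database version, and machine tier are illustrative assumptions:

# Create a Cloud SQL for PostgreSQL instance with a standby in another zone
# of the same region.
gcloud sql instances create app-db \
    --database-version=POSTGRES_15 \
    --region=us-central1 \
    --tier=db-custom-2-8192 \
    --availability-type=REGIONAL

With --availability-type=REGIONAL, Cloud SQL maintains a standby instance in a second zone and fails over automatically, and the application continues to use the same connection endpoint.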
More reliability considerations When you build the cloud architecture for your workload, review the reliability-related best practices and recommendations that are provided in the following documentation: Google Cloud infrastructure reliability guide Patterns for scalable and resilient apps Designing resilient systems Cost optimization This section provides guidance to optimize the cost of setting up and operating a zonal Google Cloud topology that you build by using this reference architecture. VM machine types To help you optimize the resource utilization of your VM instances, Compute Engine provides machine type recommendations. Use the recommendations to choose machine types that match your workload's compute requirements. For workloads with predictable resource requirements, you can customize the machine type to your needs and save money by using custom machine types. VM provisioning model If your application is fault tolerant, then Spot VMs can help to reduce your Compute Engine costs for the VMs in the application and web tiers. The cost of Spot VMs is significantly lower than regular VMs. However, Compute Engine might preemptively stop or delete Spot VMs to reclaim capacity. Spot VMs are suitable for batch jobs that can tolerate preemption and don't have HA requirements. Spot VMs offer the same machine types, options, and performance as regular VMs. However, when the resource capacity in a zone is limited, MIGs might not be able to scale out (that is, create VMs) automatically to the specified target size until the required capacity becomes available again. Resource utilization The autoscaling capability of stateless MIGs enables your application to handle increases in traffic gracefully, and it helps you to reduce cost when the need for resources is low. Stateful MIGs can't be autoscaled. Third-party licensing When you migrate third-party workloads to Google Cloud, you might be able to reduce cost by bringing your own licenses (BYOL). For example, to deploy Microsoft Windows Server VMs, instead of using a premium image that incurs additional cost for the third-party license, you can create and use a custom Windows BYOL image. You then pay only for the VM infrastructure that you use on Google Cloud. This strategy helps you continue to realize value from your existing investments in third-party licenses. If you decide to use the BYOL approach, we recommend that you do the following: Provision the required number of compute CPU cores independently of memory by using custom machine types. By doing this, you limit the third-party licensing cost to the number of CPU cores that you need. Reduce the number of vCPUs per core from 2 to 1 by disabling simultaneous multithreading (SMT), and reduce your licensing costs by 50%. If you deploy a third-party database like Microsoft SQL Server on Compute Engine VMs, then you must consider the license costs for the third-party software. When you use a managed database service like Cloud SQL, the database license costs are included in the charges for the service. More cost considerations When you build the architecture for your workload, also consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Cost optimization. Operational efficiency This section describes the factors that you should consider when you use this reference architecture to design and build a zonal Google Cloud topology that you can operate efficiently. 
VM configuration updates To update the configuration of the VMs in a MIG (such as the machine type or boot-disk image), you create a new instance template with the required configuration and then apply the new template to the MIG. The MIG updates the VMs by using the update method that you choose: automatic or selective. Choose an appropriate method based on your requirements for availability and operational efficiency. For more information about these MIG update methods, see Apply new VM configurations in a MIG. VM images For your MIG instance templates, instead of using Google-provided public images, we recommend that you create and use custom images that contain the configurations and software that your applications require. You can group your custom images into a custom image family. An image family always points to the most recent image in that family, so your instance templates and scripts can use that image without you having to update references to a specific image version. Deterministic instance templates If the instance templates that you use for your MIGs include startup scripts to install third-party software, make sure that the scripts explicitly specify software-installation parameters such as the software version. Otherwise, when the MIG creates the VMs, the software that's installed on the VMs might not be consistent. For example, if your instance template includes a startup script to install Apache HTTP Server 2.0 (the apache2 package), then make sure that the script specifies the exact apache2 version that should be installed, such as version 2.4.53. For more information, see Deterministic instance templates. More operational considerations When you build the architecture for your workload, consider the general best practices and recommendations for operational efficiency that are described in Google Cloud Architecture Framework: Operational excellence. Performance optimization This section describes the factors that you should consider when you use this reference architecture to design and build a zonal topology in Google Cloud that meets the performance requirements of your workloads. VM placement For workloads that require low inter-VM network latency, you can create a compact placement policy and apply it to the MIG template. When the MIG creates VMs, it places the VMs on physical servers that are close to each other. For more information, see Reduce latency by using compact placement policies. VM machine types Compute Engine offers a wide range of predefined and customizable machine types that you can choose from depending on your cost and performance requirements. The machine types are grouped into machine series and families. The following table provides a summary of the recommended machine families and series for different workload types: Requirement Recommended machine family Example machine series Best price-performance ratio for a variety of workloads General-purpose machine family C3, C3D, E2, N2, N2D, Tau T2D, Tau T2A Highest performance per core and optimized for compute-intensive workloads Compute-optimized machine family C2, C2D, H3 High memory-to-vCPU ratio for memory-intensive workloads Memory-optimized machine family M3, M2, M1 GPUs for massively parallelized workloads Accelerator-optimized machine family A2, G2 For more information, see Machine families resource and comparison guide. VM multithreading Each virtual CPU (vCPU) that you allocate to a Compute Engine VM is implemented as a single hardware multithread. 
By default, two vCPUs share a physical CPU core. For workloads that are highly parallel or that perform floating point calculations (such as genetic sequence analysis and financial risk modeling), you can improve performance by reducing the number of threads that run on each physical CPU core. For more information, see Set the number of threads per core. VM multithreading might have licensing implications for some third-party software, like databases. For more information, read the licensing documentation for the third-party software. Network Service Tiers Network Service Tiers lets you optimize the network cost and performance of your workloads. You can choose Premium Tier or Standard Tier. Premium Tier uses Google's highly reliable global backbone to help you achieve minimal packet loss and latency. Traffic enters and leaves the Google network at a global edge point of presence (PoP) that's close to your end user. We recommend using Premium Tier as the default tier for optimal performance. With Standard Tier, traffic enters and leaves the Google network at an edge PoP that's closest to the Google Cloud location where your workload runs. The pricing for Standard Tier is lower than Premium Tier. Standard Tier is suitable for traffic that isn't sensitive to packet loss and that doesn't have low latency requirements. More performance considerations When you build the architecture for your workload, consider the general best practices and recommendations that are provided in Google Cloud Architecture Framework: Performance optimization. What's next Learn more about the Google Cloud products used in this reference architecture: Cloud Load Balancing overview Instance groups Get started with migrating your workloads to Google Cloud. Explore and evaluate deployment archetypes that you can choose to build architectures for your cloud workloads. Review architecture options for designing reliable infrastructure for your workloads in Google Cloud. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Kumar Dhanagopal | Cross-Product Solution DeveloperOther contributors: Ben Good | Solutions ArchitectCarl Franklin | Director, PSO Enterprise ArchitectureDaniel Lees | Cloud Security ArchitectGleb Otochkin | Cloud Advocate, DatabasesMark Schlagenhauf | Technical Writer, NetworkingPawel Wenda | Group Product ManagerSean Derrington | Group Outbound Product Manager, StorageSekou Page | Outbound Product ManagerSimon Bennett | Group Product ManagerSteve McGhee | Reliability AdvocateVictor Moreno | Product Manager, Cloud Networking Send feedback \ No newline at end of file diff --git a/Single_sign-on.txt b/Single_sign-on.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc863598b212c97edb76c1a124a81a46c9519d6d --- /dev/null +++ b/Single_sign-on.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/identity/single-sign-on +Date Scraped: 2025-02-23T11:55:06.580Z + +Content: +Home Docs Cloud Architecture Center Send feedback Single sign-on Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-08 UTC You can configure your Cloud Identity or Google Workspace account to use single sign-on (SSO). When you enable SSO, users aren't prompted to enter a password when they try to access Google services. Instead, they are redirected to an external identity provider (IdP) to authenticate. 
Using SSO can provide several advantages: You enable a better experience for users because they can use their existing credentials to authenticate and don't have to enter credentials as often. You ensure that your existing IdP remains the system of record for authenticating users. You don't have to synchronize passwords to Cloud Identity or Google Workspace. To use SSO, a user must have a user account in Cloud Identity or Google Workspace and a corresponding identity in the external IdP. SSO is therefore commonly used in combination with an external authoritative source that automatically provisions users to Cloud Identity or Google Workspace. Note: Users with super-admin privileges can bypass single sign-on. This ensures that super admins can access the account even if the SSO configuration is incorrect or the external IdP is unavailable. Single sign-on process Cloud Identity and Google Workspace support Security Assertion Markup Language (SAML) 2.0 for single sign-on. SAML is an open standard for exchanging authentication and authorization data between a SAML IdP and SAML service providers. When you use SSO for Cloud Identity or Google Workspace, your external IdP is the SAML IdP and Google is the SAML service provider. Google implements the SAML 2.0 HTTP POST binding. This binding specifies how authentication information is exchanged between the SAML IdP and SAML service provider. The following diagram illustrates an example of how this process works when you use SSO to access the Google Cloud console. You point your browser to the Google Cloud console (or any other Google resource that requires authentication). Because you are not yet authenticated, the Google Cloud console redirects your browser to Google Sign-In. Google Sign-In returns a Sign-In page, prompting you to enter your email address. You enter your email address and submit the form. Google Sign-In looks up the Cloud Identity or Google Workspace account that is associated with your email address. Because the associated Cloud Identity or Google Workspace account has single sign-on enabled, Google Sign-In redirects the browser to the URL of the configured external IdP. Before issuing the redirect, it adds two parameters to the URL: RelayState and SAMLRequest. RelayState contains an identifier that the external IdP is expected to pass back later. SAMLRequest contains the SAML authentication request, an XML document that has been deflated, base64-encoded, and URL-encoded. In decoded form, the SAML authentication request is an XML document whose issuer is google.com; a reconstructed sketch is shown below. The example request instructs the external IdP to authenticate the user, create a SAML assertion for the audience google.com, and post it to the assertion consumer service (ACS) at https://www.google.com/a/example.com/acs. The domain that is embedded in the ACS URL (example.com) corresponds to the primary domain of your Google Workspace or Cloud Identity account. If you use the domain-specific issuer feature when you configure SSO, the issuer is google.com/a/DOMAIN instead of google.com, where DOMAIN is the primary domain of your Cloud Identity or Google Workspace account. The steps taken by the external IdP to perform the authentication depend on the IdP and its configuration—for example, it might display a login dialog, or it might prompt for MFA or a fingerprint. 
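For reference, a decoded SAML authentication request of the kind described above might look similar to the following sketch. The element structure follows the SAML 2.0 protocol schema; the ID and IssueInstant values are illustrative, and real requests contain additional attributes:

<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_a1b2c3d4"
    Version="2.0"
    IssueInstant="2025-01-01T00:00:00Z"
    ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
    AssertionConsumerServiceURL="https://www.google.com/a/example.com/acs">
  <saml:Issuer>google.com</saml:Issuer>
</samlp:AuthnRequest>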
When these steps have been completed successfully, the SAML exchange continues: The external IdP returns a specially crafted HTML page that causes your browser to immediately send an HTTP POST request to the ACS URL. This request contains two parameters: RelayState, which contains the value originally passed to the IdP in the SAML authentication request. SAMLResponse, which contains the base64-encoded SAML assertion. The SAML assertion is an XML document that states that the IdP has successfully authenticated the user. In decoded form, the SAML assertion identifies the issuer (https://idp.example.org/), the subject (bob@example.org), the audience (google.com), and the authentication context (urn:oasis:names:tc:SAML:2.0:ac:classes:Password); a sketch of such an assertion appears below. This example assertion has been issued for the audience google.com (matching the issuer of the SAML authentication request) and states that the IdP https://idp.example.org/ has authenticated the user bob@example.org. The SAML assertion also contains a digital signature, which enables the SAML service provider to verify the assertion's authenticity. The IdP creates this signature by using the private key of a signing certificate. The private key is known only to the IdP. The corresponding public key is part of the SSO configuration in Cloud Identity or Google Workspace and shared with Google Sign-In. The browser posts the SAML assertion to the Google ACS endpoint. The ACS endpoint verifies the digital signature of the SAML assertion. This check is done to ensure that the assertion originates from the trusted external IdP and has not been tampered with. Assuming the signature is valid, the ACS endpoint then analyzes the contents of the assertion, which includes verifying its audience information and reading the NameID attribute. The ACS endpoint looks up your user account by matching the NameID of the SAML assertion to the primary email address of the user. The endpoint then starts a session. Based on the information encoded in the RelayState parameter, the endpoint determines the URL of the resource that you originally intended to access, and you are redirected to the Google Cloud console. IdP-initiated Sign-in The process outlined in the previous section is sometimes referred to as service provider–initiated sign-on because the process starts at the service provider, which in the preceding example is the Google Cloud console. SAML also defines an alternative flow called IdP-initiated sign-on, which starts at the IdP. Google does not support this flow, but you can achieve similar results by using the following URL to initiate a service provider–initiated sign-on: https://www.google.com/a/DOMAIN/ServiceLogin?continue=https://console.cloud.google.com/ In this example, DOMAIN is the primary domain of your Cloud Identity or Google Workspace account. Multi-factor authentication To protect user accounts from unauthorized access, you can require users to provide a second factor during authentication. There are two ways to implement multi-factor authentication when using single sign-on: If your external IdP supports multi-factor authentication, you can have it perform the multi-factor authentication as part of the SAML-based sign-on process. No additional configuration is required in Cloud Identity or Google Workspace in this case. If your IdP does not support multi-factor authentication, you can configure your Cloud Identity or Google Workspace account to perform two-step verification immediately after a user has authenticated with the external IdP. 
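For reference, a decoded SAML assertion like the one described in the sign-on process above might look similar to the following sketch. The issuer, subject, audience, and authentication context match the example values used in this document; the ID, timestamps, and omitted signature details are illustrative:

<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_e5f6a7b8" Version="2.0" IssueInstant="2025-01-01T00:00:10Z">
  <saml:Issuer>https://idp.example.org/</saml:Issuer>
  <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">bob@example.org</saml:NameID>
  </saml:Subject>
  <saml:Conditions>
    <saml:AudienceRestriction>
      <saml:Audience>google.com</saml:Audience>
    </saml:AudienceRestriction>
  </saml:Conditions>
  <saml:AuthnStatement AuthnInstant="2025-01-01T00:00:10Z">
    <saml:AuthnContext>
      <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</saml:AuthnContextClassRef>
    </saml:AuthnContext>
  </saml:AuthnStatement>
</saml:Assertion>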
Note: Because super admins can bypass SSO, any multi-factor authentication enforced by your external IdP does not apply to these users. To help ensure that super-admin users are protected, enable two-step verification for these users. Networking In the SAML 2.0 HTTP Redirect binding, the IdP and service provider don't communicate directly. Instead, all communication is relayed through the user's browser, as shown in the following diagram: Given this architecture, it is not necessary for the IdP to be exposed over the internet, or to even have internet access, as long as users are able to access it from your corporate network. Configuration of the external IdP Cloud Identity and Google Workspace let you configure single sign-on by using the following features: SAML profiles: You can create a SAML profile for each IdP that you want to integrate with. For each user, group, or organizational unit in your Cloud Identity or Google Workspace account, you then decide whether they must use SSO, and which SAML profile they must use. Legacy SAML profile (formerly named organizational SSO profile): You can use the legacy SAML profile to integrate with a single IdP. For each user, group, or organizational unit in your Cloud Identity or Google Workspace account, you then decide whether they must use SSO or not. The right way to configure your IdP depends on whether you use SAML profiles or the legacy SAML profile. The following list summarizes the settings that typically have to be configured in an external IdP to help ensure compatibility, showing the required setting for the legacy SAML profile, the required setting for SAML profiles, and related remarks:
Name ID: the primary email address of the user (same for the legacy SAML profile and for SAML profiles).
Name ID format: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress (same for both profile types).
Entity ID: For the legacy SAML profile, google.com/a/DOMAIN if the domain-specific issuer feature is enabled, or google.com if the feature is disabled (the default). Use the domain-specific issuer feature if you want to integrate multiple Google Workspace or Cloud Identity accounts with the same IdP; otherwise, leave it disabled. For SAML profiles, the unique entity ID of your SAML profile. Depending on the creation date of your SAML profile, the entity ID uses one of the following formats: https://accounts.google.com/samlrp/metadata?rpid=ID or https://accounts.google.com/samlrp/ID
ACS URL pattern (or Redirect URL): For the legacy SAML profile, https://www.google.com/a/*. For SAML profiles, the unique ACS URL of your SAML profile. Depending on the creation date of your SAML profile, the URL uses one of the following formats: https://accounts.google.com/samlrp/acs?rpid=ID or https://accounts.google.com/samlrp/ID/acs
Request signing: Off for both profile types. SAML authentication requests issued by Google Sign-In are never signed.
Assertion signing: On for both profile types. SAML assertions must be signed to enable Google Sign-In to verify their authenticity. When you set up SSO in the Admin Console, you must upload the public key of the token signing key-pair.
Assertion encryption: Off for both profile types.
Signing algorithm: RSA-SHA256 for both profile types. RSA-SHA256 is sometimes abbreviated as RS256.
Note: Depending on the IdP you use, some settings might use different names or might not be configurable at all. What's next Review the reference architectures for integrating with an external IdP. Learn how to set up account provisioning and SSO with Azure AD or Active Directory. Read our Best practices for federating Google Cloud with an external IdP. 
Send feedback \ No newline at end of file diff --git a/Small_and_Medium_Business.txt b/Small_and_Medium_Business.txt new file mode 100644 index 0000000000000000000000000000000000000000..db6185c6eed787cb6c0425ce254b4846fc47427e --- /dev/null +++ b/Small_and_Medium_Business.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/smb +Date Scraped: 2025-02-23T12:01:19.432Z + +Content: +Google Cloud for small and medium-sized businessesSmall companies and medium-sized organizations face the same security and growth challenges that global enterprises do—with fewer resources. Discover how Google Cloud can help your business drive growth while saving money, simplifying operations, and mitigating risk.Talk to an expertGet gen AI updates2:150:46Estimate your costsUnderstand how your costs can fluctuate based on location, workloads, and other variables with the pricing calculator. Or get a custom quote by connecting with a sales representative.Learn moreExplore our industry solutions for small and medium-sized businessesGoogle Cloud for business helps companies of all sizes accelerate innovation, improve productivity, reduce costs, scale operations, and enhance security. Browse case studies, solutions, events, blogs, quickstarts, and training to grow your business. SPEED UP TIME TO INSIGHTS FOR RETAILEmpower your teams to get insights with a fully managed data analytics and BI platformGet the most value out of your data and empower lean teams to make data-driven decisions.Google Cloud can help retailers leverage large volumes of customer and product data for business transformation. With Google, you can easily run powerful AI/ML for better marketing, forecasting, and insights.“We would need an army of data scientists to make faster decisions on pricing and inventory levels. With Google Cloud machine learning and artificial intelligence, we don’t need that. We can make much faster pricing decisions to optimize profitability and move inventory.”Deepak Mehrotra, Co-founder and Chief Adventurer, California Design DenExplore retail offerings:California Design Den: Driving higher profits with machine learningRead the case study Google Cloud for retailLearn morePower your ecommerce with Google-quality searchLearn moreOPERATIONAL EFFICIENCY FOR MANUFACTURING Move faster with secure, sustainable, and smart solutionsDevelop AI and ML models to gain insights, optimize inventory levels, predict demand, or improve product quality, across all your operations, from factory floors to every step in your supply chain. Store and manage access to sensitive information, and deploy scalable and secure MES.“The fact that we’re getting double the performance for less cost is amazing. The cost of scaling a traditional architecture would have been outrageous. 
With Google Cloud, we can start pushing the limits of development to come up with services that will help our customers save lives.”John Griffin, Jr., MIS Manager, Frazer, Ltd.Explore manufacturing offerings:Frazer: Defining the future of mobile healthcareRead the case studyGoogle Cloud for manufacturingLearn moreOptimize production at scale with Manufacturing Data EngineLearn moreSTREAMLINED DATA FOR FINANCIAL SERVICES Deliver exceptional experiences with data-first strategiesWhether you are looking to develop and deploy AI/ML models, run and scale applications, manage and store data, gain insights into customers, markets, and risks, ensure compliance, or develop personalized services in a secure way, Google Cloud can help transform your financial service business.“Google Cloud's scalability and computing power is astonishing. BigQuery is responsible for processing around 400 GB of data on a daily basis. With Cloud Run, we can process 30 requests per second. And finally, Firestore can attain 2 billion operations per month.”Guilherme Kluber Mercurio, CTO, ROITExplore financial services offerings:Military Bank: Managing APIs with Apigee to deliver digital transformationRead the case studyGoogle Cloud for financial servicesLearn moreLeveraging generative AI in capital marketsLearn moreSECURE AND INNOVATIVE SOLUTIONS FOR HEALTHCARESolve complex problems with secure and innovative solutions to transform your operationsGoogle Cloud can help your company develop and deploy new products and services more quickly with AI/ML models even without prior ML expertise. Leverage Google threat intelligence and Mandiant expertise from the frontlines to help you understand active threats, so your organization can mitigate risk and minimize the impact of a breach.“Partnering with Mandiant augments our centralized security operations, provides awareness to the relevant threats, identifies gaps in security and bolsters our ability to protect critical systems and patient and employee data. The partnership helps us drive continuous improvements in proactive intrusion prevention, detection, and response.”Matthew Snyder, Senior Vice President, Chief Information Security and Privacy Officer, Penn State HealthExplore healthcare offerings: IMIDEX: Achieving 83% sensitivity in lung nodule detection with Google Vertex AIRead the case studyGoogle Cloud for healthcare and life sciencesLearn moreCloud Healthcare APILearn moreDATA-DRIVEN INSIGHTS FOR MEDIA AND ENTERTAINMENTTransform audience experiences with relevant, personalized media through data-driven insightsGoogle Cloud helps media companies of various sizes harness world-class technology to unlock insights for creating, monetizing, distributing, and protecting media, while boosting production capabilities, and helping audiences find relevant and personalized content driven by the power of AI.“Our analysts were very attached to their own set of analytics tools, which they wanted to use on top of BigQuery. But after a while, they began querying data directly through BigQuery itself. 
It's easy to use and fast on complex queries, and they use Looker Studio to display results.”Simon Forman, Head of Behavioral Data Engineering, ITVExplore media and entertainment offerings: ITV: Delivering a higher quality live viewing experience through dataRead the case studyGoogle Cloud for media and entertainmentLearn moreRecommendations from Vertex AI SearchLearn moreRetailSPEED UP TIME TO INSIGHTS FOR RETAILEmpower your teams to get insights with a fully managed data analytics and BI platformGet the most value out of your data and empower lean teams to make data-driven decisions.Google Cloud can help retailers leverage large volumes of customer and product data for business transformation. With Google, you can easily run powerful AI/ML for better marketing, forecasting, and insights.“We would need an army of data scientists to make faster decisions on pricing and inventory levels. With Google Cloud machine learning and artificial intelligence, we don’t need that. We can make much faster pricing decisions to optimize profitability and move inventory.”Deepak Mehrotra, Co-founder and Chief Adventurer, California Design DenExplore retail offerings:California Design Den: Driving higher profits with machine learningRead the case study Google Cloud for retailLearn morePower your ecommerce with Google-quality searchLearn moreManufacturingOPERATIONAL EFFICIENCY FOR MANUFACTURING Move faster with secure, sustainable, and smart solutionsDevelop AI and ML models to gain insights, optimize inventory levels, predict demand, or improve product quality, across all your operations, from factory floors to every step in your supply chain. Store and manage access to sensitive information, and deploy scalable and secure MES.“The fact that we’re getting double the performance for less cost is amazing. The cost of scaling a traditional architecture would have been outrageous. With Google Cloud, we can start pushing the limits of development to come up with services that will help our customers save lives.”John Griffin, Jr., MIS Manager, Frazer, Ltd.Explore manufacturing offerings:Frazer: Defining the future of mobile healthcareRead the case studyGoogle Cloud for manufacturingLearn moreOptimize production at scale with Manufacturing Data EngineLearn moreFinancial servicesSTREAMLINED DATA FOR FINANCIAL SERVICES Deliver exceptional experiences with data-first strategiesWhether you are looking to develop and deploy AI/ML models, run and scale applications, manage and store data, gain insights into customers, markets, and risks, ensure compliance, or develop personalized services in a secure way, Google Cloud can help transform your financial service business.“Google Cloud's scalability and computing power is astonishing. BigQuery is responsible for processing around 400 GB of data on a daily basis. With Cloud Run, we can process 30 requests per second. And finally, Firestore can attain 2 billion operations per month.”Guilherme Kluber Mercurio, CTO, ROITExplore financial services offerings:Military Bank: Managing APIs with Apigee to deliver digital transformationRead the case studyGoogle Cloud for financial servicesLearn moreLeveraging generative AI in capital marketsLearn moreHealthcareSECURE AND INNOVATIVE SOLUTIONS FOR HEALTHCARESolve complex problems with secure and innovative solutions to transform your operationsGoogle Cloud can help your company develop and deploy new products and services more quickly with AI/ML models even without prior ML expertise. 
Leverage Google threat intelligence and Mandiant expertise from the frontlines to help you understand active threats, so your organization can mitigate risk and minimize the impact of a breach.“Partnering with Mandiant augments our centralized security operations, provides awareness to the relevant threats, identifies gaps in security and bolsters our ability to protect critical systems and patient and employee data. The partnership helps us drive continuous improvements in proactive intrusion prevention, detection, and response.”Matthew Snyder, Senior Vice President, Chief Information Security and Privacy Officer, Penn State HealthExplore healthcare offerings: IMIDEX: Achieving 83% sensitivity in lung nodule detection with Google Vertex AIRead the case studyGoogle Cloud for healthcare and life sciencesLearn moreCloud Healthcare APILearn moreMedia and entertainmentDATA-DRIVEN INSIGHTS FOR MEDIA AND ENTERTAINMENTTransform audience experiences with relevant, personalized media through data-driven insightsGoogle Cloud helps media companies of various sizes harness world-class technology to unlock insights for creating, monetizing, distributing, and protecting media, while boosting production capabilities, and helping audiences find relevant and personalized content driven by the power of AI.“Our analysts were very attached to their own set of analytics tools, which they wanted to use on top of BigQuery. But after a while, they began querying data directly through BigQuery itself. It's easy to use and fast on complex queries, and they use Looker Studio to display results.”Simon Forman, Head of Behavioral Data Engineering, ITVExplore media and entertainment offerings: ITV: Delivering a higher quality live viewing experience through dataRead the case studyGoogle Cloud for media and entertainmentLearn moreRecommendations from Vertex AI SearchLearn moreLearning from expertsDiscover practical uses and unlock new insights from Google Cloud experts to boost efficiency and scale up. Get deep dives into applicable generative AI practices, how to stay secure, migration tips and more.Scaling Up seriesOn-demand talks with experts for Small to Medium Sized BusinessesCloud Launch15-min bite-sized videos from technical expertsNext '24Check out all on-demand SMB sessions from Next '24View MoreUnlock the power of your data for business intelligence and faster and better decision makingOur solutions are tailored for small and medium-sized business needs, allowing efficient scaling with flexible growth options. Build quickly, securely, and cost effectively with the next generation of infrastructure designed for any workload.Apigee, Google Cloud's native API management tool, enables you to build, manage, and secure APIs of any kind, in any environment, and at any scaleApigeeEasily create and run online virtual machines on Google’s infrastructure with security and customization built-inCompute EngineMove VMware-based apps to the cloud without changing your apps, tools, or processesVMware EngineGet more value from your structured and unstructured SAP dataSAP on Google CloudStore huge volumes of data with our fast, durable, and highly scalable, storage solutionsCloud StorageEnable your team to quickly run your most intensive workloads on the latest and greatest infrastructureHigh performance computingGet the support you need, when you need it, with Google Cloud Customer Care. Our scalable and flexible services are designed to meet your needs.Google Cloud Customer SupportNeed help migrating? 
Learn about our managed solutions and migration programs.Cloud migrationAutomate your processes without any code by connecting Google Cloud or third-party applications, using Application IntegrationApplication IntegrationView MoreArtificial intelligence and machine learningLeverage Google Cloud AI and ML to balance productivity, growth, and security with limited resources. Automate tasks, gain customer insights, optimize operations, and enable data-driven decisions—freeing up time for creative and strategic work.Improve customer experience and developer efficiency by making better use of your data with AI/MLGenerative AIUtilize AI-powered natural interactions and uncover insights while freeing up agents’ timeContact Center AICustomize from 130+ AI models to your use case with a variety of tuning options for Google's text, image, or code modelsVertex AI, enhanced by GeminiCreate lifelike conversational AI that interacts naturally and accurately, builds quickly, deploys universally, and manages and scales with easeDialogflowConvert speech into text with state-of-the-art accuracy, easy model customization, and flexible model deploymentSpeech-to-Text AICreate natural-sounding speech from text with an AI-powered API, which delivers high fidelity speech, widest voice selection, and one-of-a-kind voiceText-to-Speech AIBest-in-class multilingual machine translation for creating fast, dynamic, high quality, and domain-specific content and applicationsTranslation AIBuild computer vision apps with pretrained AutoML for faster time to value and reduced complexityVision AIAutoML empowers developers to build high-quality custom ML models in minutes, even with limited ML expertiseAutoMLCombine deep search capabilities with sophisticated language models to deliver the most helpful and accurate results, every timeVertex AI SearchReceive insights from customers’ unstructured text using machine learning with multimedia and multilingual supportVisual Inspection AIView MoreSecuritySmall to medium-sized businesses can be more susceptible to cyber attacks. Make Google part of your security team with Mandiant frontline experts, intel-driven security operations, and cloud security—supercharged by AI.Enhance your cyber defenses. 
Google and Mandiant deliver actionable threat intelligence to mitigate risk and minimize impact of a potential breach.MandiantModernize your security operations to detect, investigate, and respond to threats faster at Google-scale, while AI enhances productivitySecurity operationsScan files for malware with a variety of antivirus engines before downloading or uploading themVirusTotalAutomate and streamline regulatory, finance, and risk reporting in a fast, cost-effective, granular, and secure mannerRegulatory Reporting PlatformProtect your web applications and APIs from common attacks, such as DDoS attacks and SQL injection and comply with industry regulationsWeb App and API Protection (WAAP)Ensure consistent standards and meet your regulations with secured, fully redundant, fault-tolerant data centers and networksCloud infrastructureBuild on trusted, secure-by-design, secure-by-default cloud infrastructure to help drive your organization’s digital transformationTrusted CloudView MoreBusiness intelligence, databases, and data analyticsGoogle Cloud provides a suite of BI, data analytics, and database solutions that automate reports, identify trends, analyze customer data, develop new features, and measure performance without a large IT team or costly infrastructure.Equip business teams with comprehensive, real-time insights, intuitive self-service BI, and embedded analytics to create new forms of revenueLookerA serverless and cost-effective enterprise data warehouse; works across clouds, scales with your data, and built in AI/ML and BI toolsBigQueryFully managed relational database service for MySQL, PostgreSQL, and SQL Server workloads. Cloud SQL manages your databases so you don't have to.Cloud SQLDevelop rich applications quickly with AlloyDB for PostgreSQL, Memorystore, Firestore, and more with automated provisioning and managementDatabase solutionsFlexible, open, secure data analytics platform for intelligence-driven organizations based on the same technology principles that power our servicesSmart analytics solutionsView MorePartnersExplore our directory of trusted partners for small and medium-sized businesses with the skills needed to create affordable, easy-to-use cloud solutions that are just right for you. Expand allNorth America partnersLatin America partnersEuropean partnersAsia Pacific partnersSee all partnersWith the power of Google Cloud's infrastructure and cutting-edge technology, we can better serve our customers and meet their evolving needs, in addition to offering new services, including AI capabilities.Natan Stein, CTO/CIO, SKYPADGet down to businessDiscover how easy it is to save money, increase security, and improve services with Google Cloud.Go to consoleGet started for freeStart nowSee Google Cloud solutionsExplore nowFind assets for getting startedDiscover nowGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Software_Supply_Chain_Security.txt b/Software_Supply_Chain_Security.txt new file mode 100644 index 0000000000000000000000000000000000000000..b30445f12faaa8d7d92272cf98fe355c2e0f891b --- /dev/null +++ b/Software_Supply_Chain_Security.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/solutions/software-supply-chain-security +Date Scraped: 2025-02-23T12:01:06.911Z + +Content: +Software Supply Chain SecurityEnhances software supply chain security across the entire software development life cycle from development, supply, and CI/CD to runtimes. 
Get started todayView documentationVIDEOSee how Google Cloud helps with software supply chain security 7:38BenefitsHolistic software supply chain security solution built on best practicesShift left on security through software life cycleCatch security issues early in the process with a holistic solution that starts from securing your development environments and software dependencies all the way to protecting your application at runtime. Improve security with proven best practicesTackle the complicated supply chain security challenge with a tested approach built on industry best practices and Google’s decades of experience protecting our own software supply chains.Meet you where you are on your security journeyIncrementally improve your security posture by incorporating the open and pluggable tools into your existing practices. No matter how early or advanced you are on this journey, you can get started today.Key featuresStrengthen software supply chain security throughout the development life cycleEnhance application security in development environmentsTake advantage of Cloud Workstations, which provides fully managed development environments on Google Cloud to protect your source code and its development environments. Cloud Workstations comes with built-in security best practices, such as VPC Service Controls, private ingress and egress, forced image updates, and IAM access policies. Improve the security of your application images and dependenciesStore, secure, and manage your build artifacts in Artifact Registry and proactively detect vulnerabilities with the on-demand and automated scanning of Container Analysis. Enhance the security of your application's open source dependencies using our Assured Open Source Software, which provides a trusted source for you to access and incorporate Google curated and tested OSS packages. Strengthen the security of your CI/CD pipelineAccess managed CI with Cloud Build, which provides out-of-the-box support for SLSA level 3 builds and comes with security features, such as VPC Service Controls, SLSA level insights, and isolated and ephemeral build environments. Cloud Build also works with Google Cloud Deploy, our CD platform, which offers built-in security best practices, such as granular IAM controls and approval gates. Protect your running applications Improve the security of your running applications with GKE and Cloud Run. GKE comes with native security features that provide actionable guidance into the security posture of your applications, such as a centralized security dashboard and automated scanning and alerting. Cloud Run, our secure serverless platform, provides insights into the SLSA levels and vulnerabilities of running containers. Enforce trust-based security policies throughout your SDLCEnhance the security of your software supply chain by establishing, verifying, and maintaining a chain of trust throughout your SDLC. Based on the attestations collected along the SDLC, Binary Authorization helps define, verify, and enforce trust-based policies to meet the scale and speed requirements of modern application development.Ready to get started? 
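As a small illustration of how these pieces fit together, the following sketch creates an Artifact Registry repository and runs a build through Cloud Build, which stores the resulting image in the repository where it can be scanned for vulnerabilities. The project ID, repository name, region, and image tag are placeholders, and the flags shown are illustrative rather than a complete production pipeline:

  # Create a Docker repository in Artifact Registry to store and manage build artifacts.
  gcloud artifacts repositories create my-app-repo \
      --repository-format=docker \
      --location=us-central1

  # Build the container with Cloud Build and push it to the repository.
  # Cloud Build records build provenance that feeds the SLSA-level insights mentioned above.
  gcloud builds submit . \
      --tag=us-central1-docker.pkg.dev/PROJECT_ID/my-app-repo/my-app:v1

From there, Binary Authorization policies and Cloud Deploy pipelines can be layered on to gate what is deployed to GKE or Cloud Run.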
Contact usLearn about software supply chain security and how Google can helpPerspectives on Security Volume One: Securing Software Supply ChainsDownload the whitepaperSoftware Supply Chain Security deep dive session at Google Cloud Next '22Watch the webinarLearn more about software supply chain security—what, why, and howDownload the whitepaperRelated servicesSoftware supply chain security products and integrationsCloud WorkstationsFully managed development environments built for security-sensitive enterprises.Assured Open Source SoftwareTrusted source for you to access and incorporate Google curated and tested OSS packages into your own developer workflows.Cloud BuildBuild, test, and deploy quickly and securely on our serverless CI/CD platform.Artifact Registry with Container AnalysisStore, manage, and secure your container images and language packages.Google Cloud DeployFully managed continuous delivery for Google Kubernetes Engine with built-in metrics, approvals, and security.Binary AuthorizationEnsure only trusted container images are deployed on Google Kubernetes Engine or Cloud Run.Google Kubernetes EngineManaged Kubernetes platform with built-in Kubernetes security guidance and tools.Cloud RunDevelop and deploy highly scalable containerized applications on a serverless platform with rich security features.DocumentationLearn more about Software Supply Chain SecurityGoogle Cloud BasicsSoftware supply chain threatsUnderstand the attack surface of the software supply chain spanning all the way from source, build, publish, and dependencies to deploy.Learn moreGoogle Cloud BasicsAssess your security postureThis guide gives you frameworks and tools that you can use to assess your security posture and identify ways to mitigate threats.Learn moreGoogle Cloud BasicsSoftware Supply Chain Security overviewGet an overview of the Software Supply Chain Security solution and its components.Learn moreQuickstartBuild an application and view security insightsThis quickstart shows how to build an application and view security insights for the build in the Software Supply Chain Security's insights panel in Cloud Build. Learn moreQuickstartDeploy to Cloud Run and view security insightsThis quickstart shows how to deploy a container image to Cloud Run and view security insights in Software Supply Chain Security's insights panel in Cloud Run. Learn moreGoogle Cloud BasicsDeploy to GKE and view security insightsThis quickstart shows how to deploy a container image to Google Kubernetes Engine and view security insights in its security posture management dashboard.Learn moreNot seeing what you’re looking for?View documentationWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoGoogle Cloud Security SummitWatch videoBlog postWhat's next for DevOps, SysAdmins, and operatorsWatch videoBlog postHow Google Cloud can help secure your software supply chainRead the blogTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Software_as_a_Service.txt b/Software_as_a_Service.txt new file mode 100644 index 0000000000000000000000000000000000000000..184881434ce66233a58f8b4ab8641c756aeca5d1 --- /dev/null +++ b/Software_as_a_Service.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/saas +Date Scraped: 2025-02-23T12:01:21.292Z + +Content: +What is software as a service (SaaS)?The cloud computing model that makes it efficient for companies to deliver software and easy for consumers to use software. From a network that spans the globe to innovative solutions that transform organizations, Google Cloud has SaaS built into its DNA. Google Cloud enables you to build better SaaS products, scale efficiently, and ultimately grow your business. Learn more about SaaS transformation.Contact salesBuild your SaaS solution on Google Cloud0:44Build your SaaS solution on Google CloudSaaS definedSaaS stands for software as a service. It is a model in which the software is centrally hosted and accessed by the user via a web browser using the internet.In a SaaS model, the software provider owns and maintains the software and the customer does not need to install any software on their own computer. Typically, in this model the software is licensed on a subscription basis. SaaS is the new identity card of any company that is pursuing their digital journey.SaaS is how companies generate revenue, pivot their product to customer demand, optimize existing apps, innovate, and go to market quickly to win against competition. When we talk to customers, SaaS is a business conversation with emphasis on company growth, monetization aspects, and business models, onboarding customers faster to generate revenue and possibilities around cross-pillar and multi-product co-innovation. Google Cloud can partner with enterprises, share our best practices, and co-innovate with our customers for mutual growth and success.In a SaaS business model, the software provider owns and maintains the software. The customer does not need to install any software on their own computer. Needless to mention, the SaaS business model is quite different from a managed service model. For a recurring fee or subscription, the end user does not need to worry about the provisioning, management, and maintenance of the infrastructure, platform, and application. 
This is usually done through a central cloud-based system.Some interesting SaaS use cases that we explored as part of customer interactions, and how we helped them are listed below.Customers with existing apps that need SaaS multi-tenancy: Modernize and secure applicationsCustomers that onboarded multiple tenants, but need to scale business quicker: Establish SaaS operations, and tenant business intelligenceCustomers/startups that onboarded a few tenants, and need to scale globally: Enable agility and global expansionCustomers/startups that need to onboard free-tier tenants, and convert them to paid tier quickly: Provide a simple and efficient way to support free-tier customersCustomers that want cost optimization as they onboard new tenants: Assess and streamline apps/operations and optimize costs, enable usage-based pricing where applicableCustomers that need GDPR, data residency, and compliance requirements for their tenants: Bespoke approach to handle multiple types of tenants (SaaS, customer hosted, on-prem, multicloud) in a consistent way Why should you care about SaaS?There has been a fundamental shift in how users and enterprises consume software. Consumers and enterprises are demanding free trialsCustomers are exploring product fit for applications and are making self-service business decisions in days (not months or years) to start leveraging very complex business-critical appsExpectations that the new digital enterprise is expected to offer these services readily, and have the resilience to onboard new customers seamlessly, scale geographically as needed, convert them to paid/subscribed customers, and offer them full transparency with reporting, observability, and chargeback from day 1Customers need to future-proof their products and modernize their product architecture quickly and change their business model to a subscription offering (yes, licenses are a thing of the past!) Software vendors/ISVs need to embrace these tenets to onboard net-new line of customers and to enable net-new revenue channels. Google Cloud has a structured approach to help customers who are in their SaaS journey both to model their business and facilitate new revenue streams, and to model their architecture by elevating their existing offerings and making them ready for SaaS consumers. SaaS journey with Google CloudExpand allNominationPlease contact your sales team to have your organization nominated for a Saas evaluation.Consultative guidance to marketplace and Google investmentsSome of the priorities covered in our no-cost workshop: * Optimizing cost/security * Enhance growth/monetization * InnovationEngage with SaaS expertsSaaS Accelerator provides you with access to Google's expertise in open source, DevOps, and SRE, allowing you to modernize your apps/data, plan for multi-tenant onboarding, full security, funding to kick-start your SaaS pilot, and co-innovation with Google.* Grow business, onboard customers/suppliers faster * Modernize apps, minimize ops, eliminate dependencies * Faster time to market, build differentiated offeringsHow do I get started?Nominate yourself for the workshop.Transform your applications into multi-tenant architectureFeeling inspired? 
Let’s solve your challenges together.See how you can transform your applications into SaaS.Contact usActivate the power of SaaS for your apps.Watch the webinarIn this webinar, you will learn how to create a reusable customizable framework with the building blocks.Get the best of Google CloudOur platform enables you to build, modernize, and scale your business with differentiated technology that extends beyond infrastructure, while benefiting from a partner program that’s with you from idea to market.See all partnersRelated products and servicesGoogle Cloud can help you meet SaaS requirements without changes to your application—by bringing in isolation at the infrastructure layer at the project level. Industry-leading services such as Google Kubernetes Engine (GKE) can enable efficient SaaS operations for effective scalability and maintenance. Google Kubernetes Engine AutopilotPut your applications on autopilot, eliminating the need to manage nodes or capacity and reducing cluster costs, using fully automated Kubernetes.SolutionServerless application development and deliveryDevelop, deploy, and scale applications fast and securely by automatically scaling up and down from zero almost instantaneously, depending on traffic without infrastructure management.Take the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Sole-Tenant_Nodes.txt b/Sole-Tenant_Nodes.txt new file mode 100644 index 0000000000000000000000000000000000000000..172d2971b2ffc3c7cfce10eb1d75bdc35c128a82 --- /dev/null +++ b/Sole-Tenant_Nodes.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes +Date Scraped: 2025-02-23T12:02:45.205Z + +Content: +Home Compute Engine Documentation Guides Send feedback Sole-tenancy overview Stay organized with collections Save and categorize content based on your preferences. Linux Windows This document describes sole-tenant nodes. For information about how to provision VMs on sole-tenant nodes, see Provisioning VMs on sole-tenant nodes. Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project's VMs. Use sole-tenant nodes to keep your VMs physically separated from VMs in other projects, or to group your VMs together on the same host hardware as shown in the following diagram. You can also create a sole-tenant node group and specify whether you want to share it with other projects or with the entire organization. Figure 1: A multi-tenant host versus a sole-tenant node. VMs running on sole-tenant nodes can use the same Compute Engine features as other VMs, including transparent scheduling and block storage, but with an added layer of hardware isolation. To give you full control over the VMs on the physical server, each sole-tenant node maintains a one-to-one mapping to the physical server that is backing the node. Within a sole-tenant node, you can provision multiple VMs on machine types of various sizes, which lets you efficiently use the underlying resources of the dedicated host hardware. Also, if you choose not to share the host hardware with other projects, you can meet security or compliance requirements with workloads that require physical isolation from other workloads or VMs. 
If your workload requires sole tenancy only temporarily, you can modify VM tenancy as necessary. Sole-tenant nodes can help you meet dedicated hardware requirements for bring your own license (BYOL) scenarios that require per-core or per-processor licenses. When you use sole-tenant nodes, you have some visibility into the underlying hardware, which lets you track core and processor usage. To track this usage, Compute Engine reports the ID of the physical server on which a VM is scheduled. Then, by using Cloud Logging, you can view the historical server usage of a VM. To optimize the use of the host hardware, you can do the following: Overcommit CPUs on sole-tenant VMs Share sole-tenant node groups Manually live migrate sole-tenant VMs Through a configurable host maintenance policy, you can control the behavior of sole-tenant VMs while their host is undergoing maintenance. You can specify when maintenance occurs, and whether the VMs maintain affinity with a specific physical server or are moved to other sole-tenant nodes within a node group. Workload considerations The following types of workloads might benefit from using sole-tenant nodes: Gaming workloads with performance requirements. Finance or healthcare workloads with security and compliance requirements. Windows workloads with licensing requirements. Machine learning, data processing, or image rendering workloads. For these workloads, consider reserving GPUs. Workloads requiring increased I/O operations per second (IOPS) and decreased latency, or workloads that use temporary storage in the form of caches, processing space, or low-value data. For these workloads, consider reserving Local SSDs. Node templates A node template is a regional resource that defines the properties of each node in a node group. When you create a node group from a node template, the properties of the node template are immutably copied to each node in the node group. When you create a node template you must specify a node type. You can optionally specify node affinity labels when you create a node template. You can only specify node affinity labels on a node template. You can't specify node affinity labels on a node group. Node types When configuring a node template, specify a node type to apply to all nodes within a node group created based on the node template. The sole-tenant node type, referenced by the node template, specifies the total amount of vCPU cores and memory for nodes created in node groups that use that template. For example, the n2-node-80-640 node type has 80 vCPUs and 640 GB of memory. The VMs that you add to a sole-tenant node must have the same machine type as the node type that you specify in the node template. For example, n2 sole-tenant node types are only compatible with VMs created with the n2 machine type. You can add VMs to a sole-tenant node until the total amount of vCPUs or memory exceeds the capacity of the node. When you create a node group using a node template, each node in the node group inherits the node template's node type specifications. A node type applies to each individual node within a node group, not to all of the nodes in the group uniformly. So, if you create a node group with two nodes that are both of the n2-node-80-640 node type, each node is allocated 80 vCPUs and 640 GB of memory. Depending on your workload requirements, you might fill the node with multiple smaller VMs running on machine types of various sizes, including predefined machine types, custom machine types, and machine types with extended memory. 
When a node is full, you cannot schedule additional VMs on that node. The following table displays the available node types. To see a list of the node types available for your project, run the gcloud compute sole-tenancy node-types list command or create a nodeTypes.list REST request. Node type Processor vCPU GB vCPU:GB Sockets Cores:Socket Total cores Max VMs allowed c2-node-60-240 Cascade Lake 60 240 1:4 2 18 36 15 c3-node-176-352 Sapphire Rapids 176 352 1:2 2 48 96 44 c3-node-176-704 Sapphire Rapids 176 704 1:4 2 48 96 44 c3-node-176-1408 Sapphire Rapids 176 1408 1:8 2 48 96 44 c3d-node-360-708 AMD EPYC Genoa 360 708 1:2 2 96 192 34 c3d-node-360-1440 AMD EPYC Genoa 360 1440 1:4 2 96 192 40 c3d-node-360-2880 AMD EPYC Genoa 360 2880 1:8 2 96 192 40 c4-node-192-384 Emerald Rapids 192 384 1:2 2 60 120 40 c4-node-192-720 Emerald Rapids 192 720 1:3.75 2 60 120 30 c4-node-192-1488 Emerald Rapids 192 1,488 1:7.75 2 60 120 30 c4a-node-72-144 Google Axion 72 144 1:2 1 80 80 22 c4a-node-72-288 Google Axion 72 288 1:4 1 80 80 22 c4a-node-72-576 Google Axion 72 576 1:8 1 80 80 36 g2-node-96-384 Cascade Lake 96 384 1:4 2 28 56 8 g2-node-96-432 Cascade Lake 96 432 1:4.5 2 28 56 8 h3-node-88-352 Sapphire Rapids 88 352 1:4 2 48 96 1 m1-node-96-1433 Skylake 96 1433 1:14.93 2 28 56 1 m1-node-160-3844 Broadwell E7 160 3844 1:24 4 22 88 4 m2-node-416-8832 Cascade Lake 416 8832 1:21.23 8 28 224 1 m2-node-416-11776 Cascade Lake 416 11776 1:28.31 8 28 224 2 m3-node-128-1952 Ice Lake 128 1952 1:15.25 2 36 72 2 m3-node-128-3904 Ice Lake 128 3904 1:30.5 2 36 72 2 n1-node-96-624 Skylake 96 624 1:6.5 2 28 56 96 n2-node-80-640 Cascade Lake 80 640 1:8 2 24 48 80 n2-node-128-864 Ice Lake 128 864 1:6.75 2 36 72 128 n2d-node-224-896 AMD EPYC Rome 224 896 1:4 2 64 128 112 n2d-node-224-1792 AMD EPYC Milan 224 1792 1:8 2 64 128 112 n4-node-224-1372 Emerald Rapids 224 1372 1:6 2 60 120 90 For information about the prices of these node types, see sole-tenant node pricing. All nodes let you schedule VMs of different shapes. Node n type are general purpose nodes, on which you can schedule custom machine type instances. For recommendations about which node type to choose, see Recommendations for machine types. For information about performance, see CPU platforms. Node groups and VM provisioning Sole-tenant node templates define the properties of a node group, and you must create a node template before creating a node group in a Google Cloud zone. When you create a group, specify the host maintenance policy for VMs on the node group, the number of nodes for the node group, and whether to share it with other projects or with the entire organization. A node group can have zero or more nodes; for example, you can reduce the number of nodes in a node group to zero when you don't need to run any VMs on nodes in the group, or you can enable the node group autoscaler to manage the size of the node group automatically. Before provisioning VMs on sole-tenant nodes, you must create a sole-tenant node group. A node group is a homogeneous set of sole-tenant nodes in a specific zone. Node groups can contain multiple VMs from the same machine series running on machine types of various sizes, as long as the machine type has 2 or more vCPUs. When you create a node group, enable autoscaling so that the size of the group adjusts automatically to meet the requirements of your workload. If your workload requirements are static, you can manually specify the size of the node group. 
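As a concrete sketch of this workflow, the following gcloud commands list the available node types, create a node template, and then create a small node group from that template. The names (my-node-template, my-node-group), the region, zone, and node type are placeholders, and the flags shown (for example, --maintenance-policy) are illustrative and may vary between gcloud releases:

  # See which sole-tenant node types are available in a zone, including their vCPU and memory capacity.
  gcloud compute sole-tenancy node-types list --zones=us-central1-a

  # Create a regional node template that defines the node type (and, optionally, custom affinity labels).
  gcloud compute sole-tenancy node-templates create my-node-template \
      --node-type=n2-node-80-640 \
      --region=us-central1

  # Create a zonal node group with two nodes from the template and choose a host maintenance policy.
  gcloud compute sole-tenancy node-groups create my-node-group \
      --node-template=my-node-template \
      --target-size=2 \
      --maintenance-policy=restart-in-place \
      --zone=us-central1-a

If your workload requirements are not static, you can instead enable the node group autoscaler when you create the group, as described above.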
After creating a node group, you can provision VMs on the group or on a specific node within the group. For further control, use node affinity labels to schedule VMs on any node with matching affinity labels. After you've provisioned VMs on node groups, and optionally assigned affinity labels to provision VMs on specific node groups or nodes, consider labeling your resources to help manage your VMs. Labels are key-value pairs that can help you categorize your VMs so that you can view them in aggregate for reasons such as billing. For example, you can use labels to mark the role of a VM, its tenancy, the license type, or its location. Host maintenance policy Depending on your licensing scenarios and workloads, you might want to limit the number of physical cores used by your VMs. The host maintenance policy you choose might depend on, for example, your licensing or compliance requirements, or, you might want to choose a policy that lets you limit usage of physical servers. With all of these policies, your VMs remain on dedicated hardware. When you schedule VMs on sole-tenant nodes, you can choose from the following three different host maintenance policy options, which let you determine how and whether Compute Engine live migrates VMs during host events, which occur approximately every 4 to 6 weeks. During maintenance, Compute Engine live migrates, as a group, all of the VMs on the host to a different sole-tenant node, but, in some cases, Compute Engine might break up the VMs into smaller groups and live migrate each smaller group of VMs to separate sole-tenant nodes. Default host maintenance policy This is the default host maintenance policy, and VMs on nodes groups configured with this policy follow traditional maintenance behavior for non-sole-tenant VMs. That is, depending on the on-host maintenance setting of the VM's host, VMs live migrate to a new sole-tenant node in the node group before a host maintenance event, and this new sole-tenant node only runs the customer's VMs. This policy is most suitable for per-user or per-device licenses that require live migration during host events. This setting doesn't restrict migration of VMs to within a fixed pool of physical servers, and is recommended for general workloads without physical server requirements and that don't require existing licenses. Because VMs live migrate to any server without considering existing server affinity with this policy, this policy is not suitable for scenarios requiring minimization of the use of physical cores during host events. The following figure shows an animation of the Default host maintenance policy. Figure 2: Animation of the Default host maintenance policy. Restart in place host maintenance policy When you use this host maintenance policy, Compute Engine stops VMs during host events, and then restarts the VMs on the same physical server after the host event. You must set the VM's on host maintenance setting to TERMINATE when using this policy. This policy is most suitable for workloads that are fault-tolerant and can experience approximately one hour of downtime during host events, workloads that must remain on the same physical server, workloads that don't require live migration, or if you have licenses that are based on the number of physical cores or processors. With this policy, the instance can be assigned to the node group using node-name, node-group-name, or node affinity label. The following figure shows an animation of the Restart in place maintenance policy. 
Figure 3: Animation of the Restart in place host maintenance policy. Migrate within node group host maintenance policy When using this host maintenance policy, Compute Engine live migrates VMs within a fixed-sized group of physical servers during host events, which helps limit the number of unique physical servers used by the VM. This policy is most suitable for high-availability workloads with licenses that are based on the number of physical cores or processors, because with this host maintenance policy, each sole-tenant node in the group is pinned to a fixed set of physical servers, which is different than the default policy that lets VMs migrate to any server. To ensure capacity for live migration, Compute Engine reserves 1 holdback node for every 20 nodes that you reserve. The following table shows how many holdback nodes Compute Engine reserves depending on how many nodes you reserve for your node group. Total nodes in group Holdback nodes reserved for live migration 1 Not applicable. Must reserve at least 2 nodes. 2 to 20 1 21 to 40 2 41 to 60 3 61 to 80 4 81 to 100 5 With this policy, each instance must target a single node group by using the node-group-name affinity label and cannot be assigned to any specific node node-name. This is required to let Compute Engine live migrate the VMs to the holdback node when there is a host event. Note that the VMs can use any custom node affinity labels as long as they are assigned the node-group-name and not the node-name. The following figure shows an animation of the Migrate within node group host maintenance policy. Figure 4: Animation of the Migrate within node group host maintenance policy. Maintenance windows If you are managing workloads—for example—finely tuned databases, that might be sensitive to the performance impact of live migration, you can determine when maintenance begins on a sole-tenant node group by specifying a maintenance window when you create the node group. You can't modify the maintenance window after you create the node group. Maintenance windows are 4-hour blocks of time that you can use to specify when Google performs maintenance on your sole-tenant VMs. Maintenance events occur approximately once every 4 to 6 weeks. The maintenance window applies to all VMs in the sole-tenant node group, and it only specifies when the maintenance begins. Maintenance is not guaranteed to finish during the maintenance window, and there is no guarantee on how frequently maintenance occurs. Maintenance windows are not supported on node groups with the Migrate within node group host maintenance policy. Simulate a host maintenance event You can simulate a host maintenance event to test how your workloads that are running on sole-tenant nodes behave during a host maintenance event. This lets you see the effects of the sole-tenant VM's host maintenance policy on the applications running on the VMs. Host errors When there is a rare critical hardware failure on the host—sole-tenant or multi-tenant—Compute Engine does the following: Retires the physical server and its unique identifier. Revokes your project's access to the physical server. Replaces the failed hardware with a new physical server that has a new unique identifier. Moves the VMs from the failed hardware to the replacement node. Restarts the affected VMs if you configured them to automatically restart. Node affinity and anti-affinity Sole-tenant nodes ensure that your VMs don't share host with VMs from other projects unless you use shared sole-tenant node groups. 
With shared sole-tenant node groups, other projects within the organization can provision VMs on the same host. However, you still might want to group several workloads together on the same sole-tenant node or isolate your workloads from one another on different nodes. For example, to help meet some compliance requirements, you might need to use affinity labels to separate sensitive workloads from non-sensitive workloads. When you create a VM, you request sole-tenancy by specifying node affinity or anti-affinity, referencing one or more node affinity labels. You specify custom node affinity labels when you create a node template, and Compute Engine automatically includes some default affinity labels on each node. By specifying affinity when you create a VM, you can schedule VMs together on a specific node or nodes in a node group. By specifying anti-affinity when you create a VM, you can ensure that certain VMs are not scheduled together on the same node or nodes in a node group. Node affinity labels are key-value pairs assigned to nodes, and are inherited from a node template. Affinity labels let you: Control how individual VM instances are assigned to nodes. Control how VM instances created from a template, such as those created by a managed instance group, are assigned to nodes. Group sensitive VM instances on specific nodes or node groups, separate from other VMs. Default affinity labels Compute Engine assigns the following default affinity labels to each node: A label for the node group name: Key: compute.googleapis.com/node-group-name Value: Name of the node group. A label for the node name: Key: compute.googleapis.com/node-name Value: Name of the individual node. A label for the projects the node group is shared with: Key: compute.googleapis.com/projects Value: Project ID of the project containing the node group. Custom affinity labels You can create custom node affinity labels when you create a node template. These affinity labels are assigned to all nodes in node groups created from the node template. You can't add more custom affinity labels to nodes in a node group after the node group has been created. For information about how to use affinity labels, see Configuring node affinity. Pricing To help you to minimize the cost of your sole-tenant nodes, Compute Engine provides committed use discounts and sustained use discounts. Also, because you are already billed for the vCPU and memory of your sole-tenant nodes, you don't pay extra for the VMs on your sole-tenant nodes. If you provision sole-tenant nodes with GPUs or Local SSDs, you are billed for all of the GPUs or Local SSDs on each node that you provision. The sole-tenancy premium does not apply to GPUs or Local SSDs. Availability Sole-tenant nodes are available in select zones. To ensure high-availability, schedule VMs on sole-tenant nodes in different zones. Before using GPUs or Local SSDs on sole-tenant nodes, make sure you have enough GPU or Local SSD quota in the zone where you are reserving the resource. Compute Engine supports GPUs on n1 and g2 sole-tenant node types that are in zones with GPU support. The following table shows the types of GPUs that you can attach to n1 and g2 nodes and how many GPUs you must attach when you create the node template. GPU type GPU quantity Sole-tenant node type NVIDIA L4 8 g2 NVIDIA P100 4 n1 NVIDIA P4 4 n1 NVIDIA T4 4 n1 NVIDIA V100 8 n1 Compute Engine supports Local SSDs on n1, n2, n2d, and g2 sole-tenant node types that are in zones with Local SSD support. 
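To tie affinity labels back to VM provisioning, the following sketch schedules one VM on the node group created earlier and a second VM by using a node affinity file that combines affinity and anti-affinity rules. The file name, the custom label key workload, and all resource names are hypothetical placeholders:

  # Schedule a VM on any node in a specific sole-tenant node group.
  gcloud compute instances create sensitive-vm \
      --zone=us-central1-a \
      --machine-type=n2-standard-8 \
      --node-group=my-node-group

  # node-affinity.json: run only on nodes of my-node-group that do NOT carry the
  # hypothetical custom label workload=non-sensitive.
  # [
  #   {"key": "compute.googleapis.com/node-group-name", "operator": "IN", "values": ["my-node-group"]},
  #   {"key": "workload", "operator": "NOT_IN", "values": ["non-sensitive"]}
  # ]
  gcloud compute instances create isolated-vm \
      --zone=us-central1-a \
      --machine-type=n2-standard-8 \
      --node-affinity-file=node-affinity.json

The machine series of each VM must match the node type of the group (n2 in this example), as described earlier in this document.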
Restrictions You can't use sole-tenant VMs with the follow machine series and types: T2D, T2A, E2, C2D, A2, A3, or bare metal instances. Sole-tenant VMs can't specify a minimum CPU platform. You can't migrate a VM to a sole-tenant node if that VM specifies a minimum CPU platform. To migrate a VM to a sole-tenant node, remove the minimum CPU platform specification by setting it to automatic before updating the VM's node affinity labels. Sole-tenant nodes don't support preemptible VM instances. For information about the limitations of using Local SSDs on sole-tenant nodes, see Local SSD data persistence. For information about how using GPUs affects live migration, see the limitations of live migration. Sole-tenant nodes with GPUs don't support VMs without GPUs. Only N1, N2, N2D, and N4 sole-tenant nodes support overcommitting CPUs. C3 and C4 sole-tenant nodes don't support different VM configurations on the same sole-tenant node—for example—you can't place a c3-standard VM on the same sole-tenant node as a c3-highmem VM. You can't update the maintenance policy on a live node group. What's next Learn how to create, configure, and consume your sole-tenant nodes. Learn how to overcommit CPUs on sole-tenant VMs. Learn how to bring your own licenses. Review our best practices for using sole-tenant nodes to run VM workloads. Send feedback \ No newline at end of file diff --git a/Solution_Generator.txt b/Solution_Generator.txt new file mode 100644 index 0000000000000000000000000000000000000000..a6fb6a0462480fdbf1258798595360bd6e8badb4 --- /dev/null +++ b/Solution_Generator.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solution-generator +Date Scraped: 2025-02-23T12:11:10.754Z + +Content: +Solution GeneratorUse AI to generate a solution guide for your use case, including:check_smallRecommended productscheck_smallTech stack diagramcheck_smallPre-built Google solutionsshuffleSurprise meGenerate solutionPopular use casescheckAI & MLData and AnalyticsInfrastructureSecurityWeb & app hostingsparkBuild a chatbotsparkGenerate custom imagessparkSummarize key informationsparkSupport agents with AIsparkTrain a custom modelGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Spanner.txt b/Spanner.txt new file mode 100644 index 0000000000000000000000000000000000000000..4e5a548f31a12767532be6872ac382df049049e0 --- /dev/null +++ b/Spanner.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/spanner +Date Scraped: 2025-02-23T12:04:00.730Z + +Content: +Explore how Spanner's 2024 innovations help you build powerful AI-enabled applications, reduce operational overhead, and unlock new levels of efficiency and developer productivity.SpannerAlways on database with virtually unlimited scaleBuild intelligent apps with a single database that brings together relational, graph, key value, and search. No maintenance windows mean uninterrupted mission-critical apps.Go to consoleTry Spanner freeProduct highlightsStart small and scale without limits or re-architecture Built-in graph processing, full-text search and vector search Develop apps with the familiar PostgreSQL interface on SpannerWhat is Spanner?VideoFeaturesWrite and read scalability with no limitsSpanner decouples compute resources from data storage, which makes it possible to transparently scale in and out processing resources. Each additional compute capacity can process both reads and writes, providing effortless horizontal scalability. 
Spanner optimizes performance by automatically handling the sharding, replication, and transaction processing.VIDEOHow Spanner Transactions Work at Planet Scale8:17Automated maintenanceReduce operational costs and improve reliability for any size database. Synchronous replication and maintenance are automatic and built in. 100% online schema changes and maintenance while serving traffic with zero downtime.Spanner GraphReveal hidden relationships and connections. Spanner Graph supports ISO Graph Query Language (GQL), the new international standards for graph databases, offering an intuitive and concise way to match patterns and traverse relationships in your data. It combines the strengths of SQL and GQL, enabling you to query structured and connected data in a single operation. Spanner Graph, in Preview, interoperates with full-text and vector search capabilities, enabling you to deliver a new class of AI-enabled applications.Vector searchSearch vector embeddings at virtually unlimited scale in Spanner with exact nearest neighbor (KNN) and approximate nearest neighbor (ANN) vector search (both in Preview) for highly partitionable workloads. The built-in vector search support in Spanner eliminates the need for separate, specialized vector database solutions, providing transactional guarantees of operational data, fresh and consistent vector search results on a scale-out serverless architecture with zero manageability. VIDEOSupercharge vector search with Spanner5:41Supercharge vector search with SpannerPostgreSQL interfaceCombine the scalability and reliability of Spanner with the familiarity and portability of a PostgreSQL interface. Use the skills and tools that your teams already know, future-proofing your investment for peace of mind. VIDEODeveloping on PostgreSQL for Spanner5:14Automatic database shardingNever worry about manually resharding your database again. Built-in sharding automatically distributes data to optimize for performance and availability. Scale up and scale down without interruption.Geo-partitioningRetain the manageability of your single, global database while improving latency for users who are distributed around the globe. Geo-partitioning in Spanner allows you to partition your table data at the row level, across the globe, to serve data closer to your users. Even though the data is split into different data partitions, Spanner still maintains all your distributed data as a single cohesive table for queries and mutations.Single-region, dual-region, and multi-region configurationsNo matter where your users may be, apps backed by Spanner can read and write up-to-date strongly consistent data globally. Additionally, when running a dual-region or multi-region instance, your database is protected against a regional failure and offers industry-leading 99.999% availability.Strong transactional consistencyRely on industry-leading external consistency without compromising on scalability or availability.VIDEOStrong consistency at scale with Spanner7:20High-performance, workload-isolated query processingSpanner Data Boost enables users to run analytical queries, batch processing jobs, or data export operations faster without affecting the existing transactional workload. Fully managed by Google Cloud, Data Boost does not require capacity planning or management. It is always hot, ready to process user queries directly on data stored in Spanner's distributed storage system, Colossus. 
This on-demand, independent compute resource lets users easily handle mixed workloads and worry-free data sharing. VIDEOData sharing done right: Spanner Data Boost2:45Full-text searchEliminate separate search tools and associated extract, transform, and load (ETL) pipelines by utilizing high-performance text search powered by learnings from Google Search. Full-text search provides transactionally consistent search results, along with powerful capabilities such as phonetic search, and NGRAMs-based matching for spelling variations. To learn more, read this whitepaper.LangChain integrationEasily build gen AI applications that are more accurate, transparent, and reliable with LangChain integration. Spanner has three LangChain integrations—Document loader for loading and storing information from documents, Vector stores for enabling semantic search, and Chat Messages Memory for enabling chains to recall previous conversations. Visit the GitHub repository to learn more.Vertex AI integrationsPerform online inference on embedding, generative AI, or custom models served in Vertex AI using Spanner’s ML.PREDICT SQL function. Use the Spanner to Vertex AI Vector Search Workflow to perform similarity search on your Spanner data with Vertex AI Vector Search.Database CenterGain a comprehensive view of your entire database fleet, spanning multiple engines, versions, regions, projects and environments. Database Center, in Preview, helps proactively de-risk your fleet with intelligent performance and security recommendations. With Gemini enabled, Database Center makes optimizing your database fleet incredibly intuitive. Use a natural-language chat interface to ask questions, quickly resolve fleet issues, and get optimization recommendations.Backup and restore, point-in-time recovery (PITR)Backup your database to store a consistent copy of data and restore on demand. PITR provides continuous data protection with the ability to recover your past data to a microsecond granularity.Enterprise-grade security and controlsCustomer-managed encryption keys (CMEK), data-layer encryption, IAM integration for access and controls, and comprehensive audit logging. Support for VPC-SC, Access Transparency, and Access Approval. Fine-grained access control lets you authorize access to Spanner data at the table and column level. View all featuresDatabase comparisonDatabase attributeOther Relational DBOther Non-relational DBSpannerSchemaStaticDynamicDynamicSQLYesNoYesTransactionsACID(atomicity, consistency, isolation, durability)EventualStrong-ACIDwith TrueTime orderingScalabilityVertical (use a bigger machine)Horizontal(add more machines)HorizontalAvailabilityFailover (downtime)HighHigh 99.999% SLAReplicationConfigurableConfigurableAutomaticSchemaOther Relational DBStaticOther Non-relational DBDynamicSpannerDynamicSQLOther Relational DBYesOther Non-relational DBNoSpannerYesTransactionsOther Relational DBACID(atomicity, consistency, isolation, durability)Other Non-relational DBEventualSpannerStrong-ACIDwith TrueTime orderingScalabilityOther Relational DBVertical (use a bigger machine)Other Non-relational DBHorizontal(add more machines)SpannerHorizontalAvailabilityOther Relational DBFailover (downtime)Other Non-relational DBHighSpannerHigh 99.999% SLAReplicationOther Relational DBConfigurableOther Non-relational DBConfigurableSpannerAutomaticHow It WorksSpanner instances provide compute and storage in one or more regions. A distributed clock called TrueTime guarantees transactions are strongly consistent even across regions. 
Data is automatically "split" for scalability and replicated using a synchronous, Paxos-based scheme for availability.View documentationCommon UsesUser profile and entitlementsManage critical user data securely at any scaleUser profile management is a critical function that requires Spanner's scalability, availability, and global consistency. It is the entry point for players across games, platforms, and regions. Similarly financial services companies manage customer information and product offerings using Spanner.How Dragon Quest Walk handled millions of players with SpannerBuild a sample gaming trade post with SpannerRead this whitepaper on building multiplayer games with SpannerTutorials, quickstarts, & labsManage critical user data securely at any scaleUser profile management is a critical function that requires Spanner's scalability, availability, and global consistency. It is the entry point for players across games, platforms, and regions. Similarly financial services companies manage customer information and product offerings using Spanner.How Dragon Quest Walk handled millions of players with SpannerBuild a sample gaming trade post with SpannerRead this whitepaper on building multiplayer games with SpannerFinancial ledgerGain up-to-date, consistent view of global transactionsUnify financial transactions, trades, settlements, and positions across the globe into a consolidated trade ledger built on Spanner that guarantees external consistency and scalability. Consolidation of data helps in quickly adapting to changing market conditions and regulatory requirements. Similarly retail/ecommerce businesses use Spanner for inventory ledger.Watch how Goldman Sachs consolidates trade ledgers on SpannerLearn how CERC has disrupted the exchange receivables market with new Spanner innovationsSample code for a financial application on GitHubTutorials, quickstarts, & labsGain up-to-date, consistent view of global transactionsUnify financial transactions, trades, settlements, and positions across the globe into a consolidated trade ledger built on Spanner that guarantees external consistency and scalability. Consolidation of data helps in quickly adapting to changing market conditions and regulatory requirements. Similarly retail/ecommerce businesses use Spanner for inventory ledger.Watch how Goldman Sachs consolidates trade ledgers on SpannerLearn how CERC has disrupted the exchange receivables market with new Spanner innovationsSample code for a financial application on GitHubOnline bankingDeliver always-on interactivity for digital experiencesConsumers expect access to their critical financial data on their devices outside of regular banking hours. Allow your developers to focus on new experiences rather than operational overhead, such as manual sharding or eventual consistency. Reduce risk and downtime with 99.999% availability and zero maintenance.Learn how Minna Bank built a digital-native banking application with SpannerLearn how ARIGATOBANK is able to respond to spikes in financial transactions with SpannerTutorials, quickstarts, & labsDeliver always-on interactivity for digital experiencesConsumers expect access to their critical financial data on their devices outside of regular banking hours. Allow your developers to focus on new experiences rather than operational overhead, such as manual sharding or eventual consistency. 
Reduce risk and downtime with 99.999% availability and zero maintenance.Learn how Minna Bank built a digital-native banking application with SpannerLearn how ARIGATOBANK is able to respond to spikes in financial transactions with SpannerLoyalty programs and promotionsPersonalize experiences with real-time updatesTrack customer participation and preferences in a loyalty program to analyze trends and improve customer satisfaction. Similarly, game companies use Spanner for building personalized leaderboards in games.Learn how REWE Group uses Spanner to optimize for speed and performanceTutorials, quickstarts, & labsPersonalize experiences with real-time updatesTrack customer participation and preferences in a loyalty program to analyze trends and improve customer satisfaction. Similarly, game companies use Spanner for building personalized leaderboards in games.Learn how REWE Group uses Spanner to optimize for speed and performanceOmni-channel inventory managementProvide a consistent view across multiple channels and appsSpanner provides a high-performance, single source of truth for retail inventory and orders across online, in-store, distribution centers, and shipping to match inventory with demand, improving customer experience and profitability. Game companies similarly use Spanner to store in-game inventory data.Watch how to build a real-time inventory management system with SpannerCheck out how Walmart modernized its data management platform with SpannerLearn how Mahindra reimagined the selling process and sold 100,000 SUVs in 30 minsTutorials, quickstarts, & labsProvide a consistent view across multiple channels and appsSpanner provides a high-performance, single source of truth for retail inventory and orders across online, in-store, distribution centers, and shipping to match inventory with demand, improving customer experience and profitability. Game companies similarly use Spanner to store in-game inventory data.Watch how to build a real-time inventory management system with SpannerCheck out how Walmart modernized its data management platform with SpannerLearn how Mahindra reimagined the selling process and sold 100,000 SUVs in 30 minsKnowledge graphReveal hidden relationships and connections in your dataWith Spanner Graph, you can develop knowledge graphs that capture the complex connections between entities, represented as nodes, and their relationships, represented as edges. These connections provide rich context, making knowledge graphs invaluable for developing knowledge base systems and recommendation engines. With integrated search capabilities, you can seamlessly blend semantic understanding, keyword-based retrieval, and graph for comprehensive results.Get started with Spanner GraphTutorials, quickstarts, & labsReveal hidden relationships and connections in your dataWith Spanner Graph, you can develop knowledge graphs that capture the complex connections between entities, represented as nodes, and their relationships, represented as edges. These connections provide rich context, making knowledge graphs invaluable for developing knowledge base systems and recommendation engines. With integrated search capabilities, you can seamlessly blend semantic understanding, keyword-based retrieval, and graph for comprehensive results.Get started with Spanner GraphPricingHow Spanner pricing worksSpanner pricing is based on compute capacity, Spanner Data Boost, database storage, backup storage, replication, and network usage. 
Compute pricing varies depending on the edition and configuration selected. Committed use discounts can further reduce the compute price.ServiceDescriptionPrice (USD)ComputeStandard editionPacked with a comprehensive suite of established capabilities for regional (single-region) configurationsCompute capacity is provisioned as processing units or nodes (1 node = 1000 processing units). Starting at$0.030 per 100 processing units per hour per replicaEnterprise editionProvide additional multi-model and advanced search capabilities with enhanced operational simplicity and efficiencyCompute capacity is provisioned as processing units or nodes (1 node = 1000 processing units).Starting at$0.041per 100 processing units per hour per replicaEnterprise Plus editionSupport the most demanding workloads with the highest levels of availability, performance, compliance, and governanceCompute capacity is provisioned as processing units or nodes (1 node = 1000 processing units).Starting at$0.057per 100 processing units per hour per replicaData BoostOn-demand, isolated compute resources, including CPU, memory, and local data transferStarting at$0.00117per serverless processing unit per hourDatabase storagePrice is based on the amount of data stored in the database and includes the cost of storage in read-write replicas and read-only replicas; witness replica is free of charge.Starting at$0.10per GB per month per replicaBackup storageRegional configurationPricing is based on the amount of backup storage and includes the cost of storage in all replicas.Starting at$0.10per GB per month (incl. all replicas)Dual-region and multi-regional configurationPricing is based on the amount of backup storage and includes the cost of storage in all replicas.Starting at$0.30per GB per month (incl. all replicas)ReplicationIntra-region replication FreeInter-region replicationStarting at$0.04per GB NetworkIngressFreeIntra-region egressFreeInter-region egressStarting at$0.01 per GBLearn more about Spanner pricing and committed use discounts. How Spanner pricing worksSpanner pricing is based on compute capacity, Spanner Data Boost, database storage, backup storage, replication, and network usage. Compute pricing varies depending on the edition and configuration selected. Committed use discounts can further reduce the compute price.ComputeDescriptionStandard editionPacked with a comprehensive suite of established capabilities for regional (single-region) configurationsCompute capacity is provisioned as processing units or nodes (1 node = 1000 processing units). 
Price (USD)Starting at$0.030 per 100 processing units per hour per replicaEnterprise editionProvide additional multi-model and advanced search capabilities with enhanced operational simplicity and efficiencyCompute capacity is provisioned as processing units or nodes (1 node = 1000 processing units).DescriptionStarting at$0.041per 100 processing units per hour per replicaEnterprise Plus editionSupport the most demanding workloads with the highest levels of availability, performance, compliance, and governanceCompute capacity is provisioned as processing units or nodes (1 node = 1000 processing units).DescriptionStarting at$0.057per 100 processing units per hour per replicaData BoostDescriptionOn-demand, isolated compute resources, including CPU, memory, and local data transferPrice (USD)Starting at$0.00117per serverless processing unit per hourDatabase storageDescriptionPrice is based on the amount of data stored in the database and includes the cost of storage in read-write replicas and read-only replicas; witness replica is free of charge.Price (USD)Starting at$0.10per GB per month per replicaBackup storageDescriptionRegional configurationPricing is based on the amount of backup storage and includes the cost of storage in all replicas.Price (USD)Starting at$0.10per GB per month (incl. all replicas)Dual-region and multi-regional configurationPricing is based on the amount of backup storage and includes the cost of storage in all replicas.DescriptionStarting at$0.30per GB per month (incl. all replicas)ReplicationDescriptionIntra-region replication Price (USD)FreeInter-region replicationDescriptionStarting at$0.04per GB NetworkDescriptionIngressPrice (USD)FreeIntra-region egressDescriptionFreeInter-region egressDescriptionStarting at$0.01 per GBLearn more about Spanner pricing and committed use discounts. PRICING CALCULATOREstimate your monthly Spanner costs, tailored to your workload.Estimate your costsCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptCreate a 90-day Spanner instance for freeTry Spanner freeLearn how to use SpannerView quickstartCreate and query a database in consoleRead the guideGet best practices for constructing SQL statementsRead the guideDive into coding with examplesUse code samplesBusiness CaseExplore how other businesses built innovative apps to deliver great customer experiences, cut costs, and increase ROI with SpannerHow does Uber scale to millions of concurrent requests?Explore how Uber redesigned its fulfillment platform leveraging Spanner.Watch the videoRelated ContentHow Wayfair is modernizing, one database at a timeLearn how Gmail migrated billions of users, trillions of emails, and exabytes of data to SpannerHow Glance improves database operations with SpannerFeatured benefits and customersGrow your business with innovative applications that scale limitlessly to meet any demand.Lower TCO and free your developers from cumbersome operations to dream big and build faster.Get superior price-performance and pay for what you use, starting at as low as $40 per month.Partners & IntegrationTake advantage of partners with Spanner expertise to help you at every step of the journey, from assessments and business case to migrations and building new apps on Spanner.System integratorsSpanner partners help you modernize applications and migrate to the cloud seamlessly. 
Find your ideal partner or third-party integration in our directory.FAQExpand all Is Spanner a relational or non-relational database?Spanner simplifies your data architecture by bringing together relational, key-value, graph, and vector search workloads—all on the same database. It is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution. Hence, it’s suitable for both relational and non-relational workloads.Does Spanner use SQL?Spanner provides two ANSI-based SQL dialects over the same rich set of capabilities: GoogleSQL and PostgreSQL. GoogleSQL shares syntax with BigQuery for teams standardizing their data management workflows. The PostgreSQL interface provides familiarity for teams who already know PostgreSQL and portability of schemas and queries to other PostgreSQL environments. For more information about the Spanner PostgreSQL interface, see our documentation.How do I migrate databases to Spanner?Migration to Spanner can vary widely depending on a number of factors like source database, data size, downtime requirements, application code complexity, sharding schema, custom functions or transformations, failover and replication strategy. The recommended tooling comprises open source tools like Spanner migration tool for schema and data migration, and third party tools for assessments like migVisor. Learn more about the migration process in our documentation.What are the key considerations for operating Spanner?Spanner is a fully managed database so it automatically provides comprehensive infrastructure management features, but there are some application-specific management actions that may be required depending on your workload. You will need to make sure that you have set up proper alerting and monitoring and that you are watching those closely to ensure production is always running smoothly. You need to understand what actions to take when traffic grows organically over time, or if there is peak traffic expected, or how to handle data corruption due to application bugs, and last but not least, how to troubleshoot performance issues and understand what components are responsible for increased latencies.Other resources and supportSpanner EcosystemAsk the communityGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Spectrum_Access_System_(SAS).txt b/Spectrum_Access_System_(SAS).txt new file mode 100644 index 0000000000000000000000000000000000000000..3bb395e5a1db5d300294efcfa2c6459e915cb385 --- /dev/null +++ b/Spectrum_Access_System_(SAS).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/spectrum-access-system +Date Scraped: 2025-02-23T12:05:12.457Z + +Content: +CBRS gets a boost. 
Learn how CBRS 2.0 improves the availability of CBRS spectrum near coastlines.Spectrum Access System (SAS)Access to shared CBRS spectrum for allGoogle SAS enables access to the CBRS band, effectively sharing spectrum among fixed wireless, mobile, and private network users to maximize bandwidth and transmit power.Activate SASNew Google Cloud customers can use $300 in free credits to try out Google SAS.Product highlightsHigh reliabilityChannel guidance and transmit power optimizationTelco-grade redundant ESC network SAS portal: Map-based interfaceGoogle Cloud-based monitoring and custom analyticsProduct overviewFeaturesReliabilityGoogle SAS is built with multiple layers of reliability, including a distributed architecture that is optimized to maintain uninterrupted communications between SAS and CBRS devices. EIRP optimizationGoogle SAS helps each of your CBSDs get the maximum transmit power available so that your network can offer the best performance to your users and customers.Channel guidance and spectrum planningGoogle SAS offers tools to discover the optimal radio frequencies, either in real time or during network planning. Google's high-resolution geo data is used to estimate power and interference on each channel, resulting in more available bandwidth, higher transmit power authorizations, and lower interference.ESC networkGoogle SAS is powered by a Telco grade Environmental Sensing Capability (ESC) sensor network along the continental US coast and Hawaii (2024). Google ESC is designed for maximum coverage and redundancy to maximize spectrum availability near the coast.Tools to manage your CBSD performance Google SAS offers a user-friendly, map-based portal that makes it easy to find spectrum available, register CBSDs, and see the overall status of your network. Rich SAS-CBSD API communication logs are easy to explore in BigQuery for detailed troubleshooting, custom monitoring and insightful analytics and dashboards.View all featuresHow It WorksOn the Console, navigate to Spectrum Access System, enable service and define UR-ID for your network. Next, go to SAS portal, add your CPI to upload and sign device configuration, now register CBSDs with SAS and get access to CBRS spectrum. Use Cloud Monitoring and BigQuery to create alerts/dashboards.View documentationCommon UsesFixed wirelessBroadband networks using CBRS spectrumFixed Wireless Access (FWA) using CBRS band allows internet service providers to build out high-speed broadband networks quickly and cost-effectively. SAS combines licensed and general access CBRS spectrum effectively between base stations to Customer Premises Equipment (CPE) links, ensuring optimal data throughput for end users.Watch nowPartners & integrationsBroadband networks using CBRS spectrumFixed Wireless Access (FWA) using CBRS band allows internet service providers to build out high-speed broadband networks quickly and cost-effectively. SAS combines licensed and general access CBRS spectrum effectively between base stations to Customer Premises Equipment (CPE) links, ensuring optimal data throughput for end users.Watch nowPrivate networksCBRS spectrum for private 4G/5G cellular networksCBRS-based private 5G/LTE networks deliver improved coverage and security for business-critical applications not available with Wi-Fi. CBRS is also well suited for enterprises, public sector, and education customers who want to create local private cellular networks. 
SAS helps efficiently manage CBRS spectrum for private networks used by PAL and GAA users while avoiding interference.Partners & integrationsCBRS spectrum for private 4G/5G cellular networksCBRS-based private 5G/LTE networks deliver improved coverage and security for business-critical applications not available with Wi-Fi. CBRS is also well suited for enterprises, public sector, and education customers who want to create local private cellular networks. SAS helps efficiently manage CBRS spectrum for private networks used by PAL and GAA users while avoiding interference.Mobile networksBuilding nationwide 5G networks for growthMobile network operators can use CBRS spectrum to expand 5G network coverage across the country, complementing their existing 5G deployments in other bands. This allows them to accelerate their time to market for innovative mobility, broadband, and enterprise solutions. SAS coordinates the CBRS spectrum across GAA and PAL channels, ensuring that there is a high level of spectrum availability.Partners & integrationsBuilding nationwide 5G networks for growthMobile network operators can use CBRS spectrum to expand 5G network coverage across the country, complementing their existing 5G deployments in other bands. This allows them to accelerate their time to market for innovative mobility, broadband, and enterprise solutions. SAS coordinates the CBRS spectrum across GAA and PAL channels, ensuring that there is a high level of spectrum availability.PricingSpectrum Access System pricingSpectrum Access System uses a pay-as-you-go pricing model, which supports two deployment types, fixed wireless access and mobile.TypeDescriptionPriceCBRS-Fixed Wireless AccessNetworking - Spectrum Access - CBRS - Fixed Wireless - CPE Starting at$2.22 USD per monthCBRS-MobileNetworking - Spectrum Access - CBRS - Mobile - Cat.A (Indoor)Starting at$2.64USD per monthNetworking - Spectrum Access - CBRS - Mobile - Cat.B (Outdoor)Starting at$13.15USD per monthMonthly pricing, assuming 30 days/month, and charged at a daily rate.Spectrum Access System pricingSpectrum Access System uses a pay-as-you-go pricing model, which supports two deployment types, fixed wireless access and mobile.CBRS-Fixed Wireless AccessDescriptionNetworking - Spectrum Access - CBRS - Fixed Wireless - CPE PriceStarting at$2.22 USD per monthCBRS-MobileDescriptionNetworking - Spectrum Access - CBRS - Mobile - Cat.A (Indoor)PriceStarting at$2.64USD per monthNetworking - Spectrum Access - CBRS - Mobile - Cat.B (Outdoor)DescriptionStarting at$13.15USD per monthMonthly pricing, assuming 30 days/month, and charged at a daily rate.Already a SAS customer?Visit the SAS portalSAS portalIs SAS right for your organization?Find out if SAS is the right solution for youValidate CBSD interoperability with SASSpectrum Access SystemFind out moreDocumentationAlready a customer?Visit the SAS portalCBSD interoperability (IOT)Find out moreCBRS CPI training and certification renewalAccess trainingSAS portal documentationAccess documentationPartners & IntegrationSpectrum Access System partnersGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Speech-to-Text.txt b/Speech-to-Text.txt new file mode 100644 index 0000000000000000000000000000000000000000..753280ef0cb9591b8c8d065c6f5bb13d05e5fb06 --- /dev/null +++ b/Speech-to-Text.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/speech-to-text +Date Scraped: 2025-02-23T12:02:10.958Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced 
performanceSpeech-to-TextTurn speech into text using Google AIConvert audio into text transcriptions and integrate speech recognition into applications with easy-to-use APIs.Start transcribingContact salesProduct highlightsEasily add Speech-to-Text to appsTranscribe audio files or real-time audioSupports over 125 languages Use AI to caption videosHow to use Speech-to-Text02:26 minsFeaturesAdvanced speech AISpeech-to-Text can utilize Chirp, Google Cloud’s foundation model for speech trained on millions of hours of audio data and billions of text sentences. This contrasts with traditional speech recognition techniques that focus on large amounts of language-specific supervised data. These techniques give users improved recognition and transcription for more spoken languages and accents.Support for 125 languages and variantsBuild for a global user base with extensive language support. Transcribe short, long, and even streaming audio data. Speech-to-Text also offers users more accurate and globe-spanning translation and recognition with Chirp, the next generation of universal speech models. Chirp was built using self-supervised training on millions of hours of audio and 28 billion sentences of text spanning 100+ languages.Transcribe short, long, or streaming audio View guidePretrained or customizable models for transcriptionChoose from a selection of trained models for voice control, phone call, and video transcription optimized for domain-specific quality requirements. Easily customize, experiment with, create, and manage custom resources with the Speech-to-Text UI.Out-of-the-box regulatory and security complianceSpeech-to-Text API v2 gives enterprise and business customers added security and regulatory requirements out of the box. Data residency enables the invocation of transcription models through a fully regionalized service that taps into Google Cloud regions like Singapore and Belgium. Recognizer resourcefulness eliminates the need for dedicated service accounts for authentication and authorization. Logs for resource generation and transcription are made easily available in the Google Cloud console. And Speech-to-Text API v2 offers enterprise-grade encryption with customer-managed encryption keys for all resources as well as batch transcription.AI-powered speech recognition and transcriptionSpeech-to-Text uses model adaptation to improve the accuracy of frequently used words, expand the vocabulary available for transcription, and improve transcription from noisy audio. Model adaptation lets users customize Speech-to-Text to recognize specific words or phrases more frequently than other options that might otherwise be suggested. For example, you could bias Speech-to-Text towards transcribing "weather" over "whether."Streaming speech recognition Receive real-time speech recognition results as the API processes the audio input streamed from your application’s microphone or sent from a prerecorded audio file (inline or through Cloud Storage).Speech adaptationCustomize speech recognition to transcribe domain-specific terms and rare words by providing hints and boost your transcription accuracy of specific words or phrases. Automatically convert spoken numbers into addresses, years, currencies, and more using classes.Speech-to-Text On-PremHave full control over your infrastructure and protected speech data while leveraging Google’s speech recognition technology on-premises, right in your own private data centers. 
Contact sales to get started.Multichannel recognitionSpeech-to-Text can recognize distinct channels in multichannel situations (for example, video conference) and annotate the transcripts to preserve the order.Noise robustness Speech-to-Text can handle noisy audio from many environments without requiring additional noise cancellation.Domain-specific models Choose from a selection of trained models for voice control and phone call and video transcription optimized for domain-specific quality requirements. For example, our enhanced phone call model is tuned for audio originated from telephony, such as phone calls recorded at an 8khz sampling rate.Content filteringProfanity filter helps you detect inappropriate or unprofessional content in your audio data and filter out profane words in text results.Transcription evaluationUpload your own voice data and have it transcribed with no code. Evaluate quality by iterating on your configuration.Automatic punctuation (beta)Speech-to-Text accurately punctuates transcriptions, such as by providing commas, question marks, and periods.Speaker diarizationKnow who said what by receiving automatic predictions about which of the speakers in a conversation spoke each utterance.View all featuresHow It WorksSpeech-to-Text has three main methods to perform speech recognition: synchronous, asynchronous, and streaming. Each method returns text results based on if transcription is needed in post processing, periodically, or in real time. Simply put, you'll input audio data and then receive a text-based response.View documentationLearn how to add Speech-to-Text to your appsDemoTest out the Speech-to-Text APIQuickly create audio transcription from a file upload or directly speaking into a mic. sparkGet solution recommendations for your use case, generated by AII want to create a system to transcribe text from audio filesI want to generate subtitles in five languages for my videosI want to enable voice commands on my mobile applicationMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionTranscribe audioprompt_suggestionGenerate multilingual subtitlesprompt_suggestionEnable voice commandsCommon UsesTranscribe audioCreate an audio transcription Learn how to use the Speech-to-Text API from within the Cloud Console by creating an audio transcription in just a few steps. You can also transcribe short, long, and streaming audio. Start using Speech-to-TextBrowse sample applicationsSpeech-to-Text tutorials and guidesTranscribe audio from a mic to textTutorials, quickstarts, & labsCreate an audio transcription Learn how to use the Speech-to-Text API from within the Cloud Console by creating an audio transcription in just a few steps. You can also transcribe short, long, and streaming audio. Start using Speech-to-TextBrowse sample applicationsSpeech-to-Text tutorials and guidesTranscribe audio from a mic to textCaption videos using AICreate subtitles for videos using AI Transcribe your audio and video to include captions. Add subtitles to existing content or in real time to streaming content. Our video transcription model is ideal for indexing or subtitling video and/or multispeaker content and uses similar machine learning technology as YouTube does for video captioning. 
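A captioning workflow like this starts with a recognition request. The following is a minimal sketch of the synchronous method described under How It Works, assuming the google-cloud-speech client library; the Cloud Storage URI, sample rate, and language code are placeholders.

# Minimal sketch, assuming pip install google-cloud-speech and a placeholder audio file.
from google.cloud import speech

client = speech.SpeechClient()

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    enable_automatic_punctuation=True,
)
audio = speech.RecognitionAudio(uri="gs://my-bucket/my-audio.wav")

# Synchronous recognition suits short clips; use long_running_recognize for
# longer files and streaming_recognize for real-time audio.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
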
This tutorial shows you how to use the Google Cloud AI services Speech-to-Text API and Translation API to add subtitles to videos and to provide localized subtitles in other languages.Watch automated subtitles tutorialDub videos using AITranscribe audio from a video file using Speech-to-Text Transcribe audio from a streaming videoTutorials, quickstarts, & labsCreate subtitles for videos using AI Transcribe your audio and video to include captions. Add subtitles to existing content or in real time to streaming content. Our video transcription model is ideal for indexing or subtitling video and/or multispeaker content and uses similar machine learning technology as YouTube does for video captioning. This tutorial shows you how to use the Google Cloud AI services Speech-to-Text API and Translation API to add subtitles to videos and to provide localized subtitles in other languages.Watch automated subtitles tutorialDub videos using AITranscribe audio from a video file using Speech-to-Text Transcribe audio from a streaming videoAdd Speech-to-Text to appsHow to add Speech-to-Text to appsLearn how you can quickly and easily enable Speech-to-Text for your application with Google Cloud. This video covers how to add AI to your application without extensive machine learning model experience. Using the pretrained Speech-to-Text API you'll quickly and easily enable AI for your application.Watch example videoAdd voice control to appsTutorials, quickstarts, & labsHow to add Speech-to-Text to appsLearn how you can quickly and easily enable Speech-to-Text for your application with Google Cloud. This video covers how to add AI to your application without extensive machine learning model experience. Using the pretrained Speech-to-Text API you'll quickly and easily enable AI for your application.Watch example videoAdd voice control to appsTranslate audio into textLanguage, speech, text, and translation with Google Cloud APIsIn this course, you'll use the Speech-to-Text API to transcribe an audio file into a text file, translate with the Google Cloud Translation API, and create synthetic speech with Natural Language AI.Start courseView supported languagesLearn more about Google Cloud TranslationTutorials, quickstarts, & labsLanguage, speech, text, and translation with Google Cloud APIsIn this course, you'll use the Speech-to-Text API to transcribe an audio file into a text file, translate with the Google Cloud Translation API, and create synthetic speech with Natural Language AI.Start courseView supported languagesLearn more about Google Cloud TranslationPricingHow Speech-to-Text pricing worksSpeech-to-Text pricing is based on the API version, channels, batch methods, and any additional Google Cloud service costs like storage. API versionService and capabilityPricingSpeech-to-Text V1 APIV1 offers data residency for multi region only. Models include short, long, phone call, and video. V1 does not include audit logging. New customers get $300 in free credits and 60 minutes for transcribing and analyzing audio free per month, not charged against your credits.$0.024per minSpeech-to-Text V2 APIV2 offers data residency for multi and single region. Models include short, long, telephony, video, and Chirp. V2 does include audit logging and support for customer managed encryption keys.$0.016per minView pricing details for Speech-to-Text.How Speech-to-Text pricing worksSpeech-to-Text pricing is based on the API version, channels, batch methods, and any additional Google Cloud service costs like storage. 
Speech-to-Text V1 APIService and capabilityV1 offers data residency for multi region only. Models include short, long, phone call, and video. V1 does not include audit logging. New customers get $300 in free credits and 60 minutes for transcribing and analyzing audio free per month, not charged against your credits.Pricing$0.024per minSpeech-to-Text V2 APIService and capabilityV2 offers data residency for multi and single region. Models include short, long, telephony, video, and Chirp. V2 does include audit logging and support for customer managed encryption keys.Pricing$0.016per minView pricing details for Speech-to-Text.Pricing calculatorEstimate your monthly Speech-To-Text costs, including region specific pricing and fees.Estimate costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptStart transcribing in the console with up to 60 minutes of transcribing and analyzing audio free per monthGo to my consoleHave a large project?Contact salesSpeech-to-Text On-PremView documentationSpeech-to-Text basicsView documentationSpeech-to-Text code samplesView samplesGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Spot_VMs.txt b/Spot_VMs.txt new file mode 100644 index 0000000000000000000000000000000000000000..e9cc59dd73033173bd382a99aae9e1b79219d605 --- /dev/null +++ b/Spot_VMs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/spot-vms +Date Scraped: 2025-02-23T12:02:41.148Z + +Content: +Spot VMsAffordable compute instances suitable for batch jobs and fault-tolerant workloads. Spot VMs offer the same machine types, options, and performance as regular compute instances. If your applications are fault tolerant and can withstand possible instance preemptions, then Spot instances can reduce your Compute Engine costs by up to 91%.Go to consoleView documentationWhat are Spot VMs1:16BenefitsLow-cost VMsPredictable and low costSpot VMs are up to 91% more cost effective than regular instances. You get pricing stability with no more than once-a-month pricing changes and at least a 60% off guarantee.Expand your batch processingSpeed up your compute-intensive workloads and reduce costs with spot instances. Throw Spot VMs at fault-tolerant workloads such as HPC, data and analytics, CI/CD, rendering/transcoding, testing, and more.Get more from your containersContainers are naturally stateless and fault tolerant, making them a great fit for Spot VMs. To run Spot VMs in Google Kubernetes Engine (GKE), simply type "--spot" in your cloud command and you're ready to go.Enable it instantlySimply add --provisioning-model=SPOT to the cloud command line and you're off to the races. With per-second billing, just shut down your VMs as soon as you're done.Key FeaturesLow pricingSpot VMs are priced up to 91% off regular instances. They show up on your bill separately so you'll see just how much you're saving.Easy extendabilityAttach GPUs and local SSDs to spot instances for additional performance and savings.Graceful shutdownCompute Engine gives you 30 seconds to shut down when you're preempted, letting you save your work in progress for later.Large-scale computingSpin up as many instances as you need and turn them off when you're done. 
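Spinning up a Spot instance programmatically might look like the following sketch, which assumes the google-cloud-compute Python client library; the project, zone, machine type, and image are placeholders, and the scheduling block is the client-library counterpart of the --provisioning-model=SPOT flag mentioned above.

# Hedged sketch, assuming pip install google-cloud-compute and placeholder values.
from google.cloud import compute_v1

def create_spot_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance()
    instance.name = name
    instance.machine_type = f"zones/{zone}/machineTypes/n2-standard-2"

    disk = compute_v1.AttachedDisk()
    disk.boot = True
    disk.auto_delete = True
    disk.initialize_params = compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-12"
    )
    instance.disks = [disk]

    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"
    instance.network_interfaces = [nic]

    # Request the Spot provisioning model; stop (rather than delete) on preemption.
    instance.scheduling = compute_v1.Scheduling(
        provisioning_model="SPOT",
        instance_termination_action="STOP",
    )

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # Wait for the create operation to finish.

create_spot_vm("my-project", "us-central1-a", "spot-worker-1")
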
Only pay for what you use.Quickly reclaim capacityManaged instance groups automatically recreate your instances when they're preempted (if capacity is available).Control costsSave even more by specifying a maximum run duration and defining how instances clean up after themselves on preemption or exceeding the maximum run duration.PartnersCustomers and partnersSee more Google Cloud customersTake the next stepStart your next project, explore interactive tutorials, and manage your accountContact usNeed help getting started?Contact SalesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorials \ No newline at end of file diff --git a/Standalone_MQTT_broker.txt b/Standalone_MQTT_broker.txt new file mode 100644 index 0000000000000000000000000000000000000000..7efb0ced767c4f8173dea493790263ed73ef8544 --- /dev/null +++ b/Standalone_MQTT_broker.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/connected-devices/mqtt-broker-architecture +Date Scraped: 2025-02-23T11:48:08.356Z + +Content: +Home Docs Cloud Architecture Center Send feedback Standalone MQTT broker architecture on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-09 UTC MQTT is an OASIS standard protocol for connected device applications that provides bidirectional messaging using a publish-and-subscribe broker architecture. The MQTT protocol is lightweight to reduce the network overhead, and MQTT clients are very small to minimize the use of resources on constrained devices. One solution for organizations that want to support connected device applications on Google Cloud is to run a standalone MQTT broker on Compute Engine or GKE. To deploy an MQTT broker in your organization, you need to make several key decisions that affect the overall architecture, in particular load balancing and the deployment environment. This document describes an architecture for deploying an MQTT broker, the core application in an MQTT deployment, on Google Cloud. It also describes the decisions that you need to make when you deploy this broker, and how they impact the architecture. This document is part of a series of documents that provide information about IoT architectures on Google Cloud. The other documents in this series include the following: Connected device architectures on Google Cloud overview Standalone MQTT broker architecture on Google Cloud (this document) IoT platform product architecture on Google Cloud Best practices for running an IoT backend on Google Cloud Device on Pub/Sub architecture to Google Cloud Best practices for automatically provisioning and configuring edge and bare metal systems and servers The following diagram shows an MQTT broker architecture running on Google Cloud. The architecture in the preceding image is composed as follows: The MQTT broker is deployed as a cluster of three instances that are connected to the Cloud Load Balancing service. For the cloud load balancer, you can choose one of several load-balancing products, which are described later in this document. The broker cluster includes a device credential store and a device authentication and authorization service. The cluster connects with the backend workloads through Dataflow or Pub/Sub. On the client side, edge gateways provide bidirectional communication between edge devices and the MQTT broker cluster through MQTT over TLS.
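To illustrate the device side of this communication path, the following sketch connects over TLS, publishes telemetry, and subscribes for commands; it assumes the open source paho-mqtt client with its 1.x callback API, and the broker hostname, credentials, and topic names are placeholders rather than values from this architecture.

# Hedged sketch, assuming pip install paho-mqtt (1.x callback API); for paho-mqtt 2.x,
# pass mqtt.CallbackAPIVersion.VERSION1 as the first Client() argument.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe for commands once the TLS session and MQTT CONNECT succeed.
    client.subscribe("devices/gateway-1/commands", qos=1)

def on_message(client, userdata, msg):
    print("Command received on", msg.topic, ":", msg.payload)

client = mqtt.Client(client_id="gateway-1")
client.username_pw_set("gateway-1", "device-secret")  # credentials held by the broker
client.tls_set(ca_certs="broker-ca.pem")              # server certificate public key / CA
client.on_connect = on_connect
client.on_message = on_message

client.connect("mqtt.example.com", 8883, keepalive=60)
client.loop_start()
client.publish("devices/gateway-1/telemetry", '{"temp_c": 21.5}', qos=1)

With mTLS, tls_set would also take the device certificate and private key, and the username and password fields could be dropped.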
Generally, we recommend that you deploy the MQTT broker application for this architecture in a cluster for scalability. Factors such as the clustering functionality, the scale-up and scale-down cluster management, data synchronization, and network partition handling are addressed by the specific broker implementations (such as HiveMQ, EMQX, VerneMQ, mosquito, and others). Architectural considerations and choices The following sections describe the different architectural choices and considerations that you must make for a standalone MQTT broker architecture, and the impact that these choices have on the architecture. Connected devices Internet-connected edge devices publish their telemetry events and other information to the MQTT broker. To implement the standalone MQTT broker architecture that's described in this document, the device needs to have an MQTT client, the server certificate public key for TLS authentication, and the credentials needed to authenticate with the MQTT broker. In addition, edge devices generally have connectors to local sensors, to on-premises data systems, and to other devices that do not have internet access or IP connectivity. For example, the edge device can serve as a gateway for other local constrained devices connected to the gateway using BLE, to a wired connection, or to another near-field protocol. A detailed specification of the connected device architecture is outside the scope of this guide. Load balancing In the architecture, an external load-balancing service is configured between the public network and the MQTT broker cluster. This service provides several important networking functions, including distribution of incoming connections across backend nodes, session encryption, and authentication. Google Cloud supports several load balancer types. To choose the best load balancer for your architecture, consider the following: mTLS. mTLS handles both encryption and device authentication methods, while standard TLS handles only encryption and requires a separate device authentication method: If your application uses mTLS for device authentication and needs to terminate the TLS tunnel, we recommend that you use an external passthrough Network Load Balancer or an external proxy Network Load Balancer with a target TCP proxy. External proxy Network Load Balancers terminate the TLS session and proxy the connection to the broker node, along with any authentication credentials that are contained in the message. If you need the client connection information as part of the authentication scheme, you can preserve it in the backend connection by enabling the PROXY protocol. If your application doesn't use mTLS, we recommend that you use an external proxy Network Load Balancer with a target SSL proxy to offload the external TLS and SSL processing to the load balancer. External proxy Network Load Balancers terminate the TLS session and proxy the connection to the broker node, along with any authentication credentials that are contained in the message. If you need the client connection information as part of the authentication scheme, you can preserve it in the backend connection by enabling the PROXY protocol. HTTP(S) endpoints. If you need to expose HTTP(S) endpoints, we recommend that you configure a separate external Application Load Balancer for these endpoints. For more information about the load balancer types that Cloud Load Balancing supports, see Summary of Google Cloud load balancers. 
Load balancing strategy Any load-balancing service distributes connections from edge devices across the nodes in the cluster according to one of several algorithms or balancing modes. For MQTT, a session affinity load-balancing strategy is better than random load balancing. Because MQTT client-server connections are persistent bidirectional sessions, state is maintained on the broker node that stops the connection. In a clustered configuration, if a client disconnects and then reconnects to a different node, the session state is moved to the new node, which adds load on the cluster. This issue can be largely avoided by using session affinity load balancing. If clients frequently change their IP addresses, the connection can break, but in most cases session affinity is better for MQTT. Session affinity is available in all Cloud Load Balancing products. Device authentication and credential management MQTT broker applications handle device authentication and access control separately from Google Cloud. A broker application also provides its own credential store and management interface. The MQTT protocol supports basic username and password authentication in the initial Connect packet, and these fields are also frequently used by broker implementations to support other forms of authentication such as X.509 Certificate or JWT authentication. MQTT 5.0 also adds support for enhanced authentication methods that use challenge and response-style authentication. The authentication method that's used depends on the choice of MQTT broker application and your connected device use case. Regardless of the authentication method that the broker uses, the broker maintains a device credential store. This store can be in a local SQL database or a flat file. Some brokers, including HiveMQ and VerneMQ, also support the use of a managed database service such as Cloud SQL. You need a separate service to manage the device credential store and handle any integrations with other authentication services such as IAM. The development of this service is outside the scope of this guide. For more information about authentication and credential management, see Best practices for running an IoT backend on Google Cloud. Backend workloads Any connected device use case includes one or more backend applications that use the data ingested from the connected devices. Sometimes, these applications also need to send commands and configuration updates to the devices. In the standalone MQTT broker architecture in this document, incoming data and outgoing commands are both routed through the MQTT broker. There are different topics within the broker's topic hierarchy to differentiate between the data and the commands. Data and commands can be sent between the broker and the backend applications in one of several ways. If the application itself supports MQTT, or if it can be modified to support MQTT, the application can subscribe directly to the broker as a client. This approach enables you to use the MQTT Pub/Sub bidirectional messaging capability directly by using your application to receive data from and send commands to the connected devices. If your application does not support MQTT, there are several other options. In the architecture described in this document, Apache Beam provides an MQTT driver, which allows bidirectional integration with Dataflow and other Beam deployments. Many brokers also have plugin capabilities that support integration with services like Google Pub/Sub. 
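As a sketch of the first option, in which a backend application subscribes to the broker directly as a client, the following example forwards device telemetry from the broker into Pub/Sub; it assumes the paho-mqtt and google-cloud-pubsub client libraries (the same 1.x callback API as the earlier device sketch), and the broker host, credentials, topic filter, and Pub/Sub topic are placeholders.

# Hedged sketch of a custom MQTT-to-Pub/Sub bridge, assuming placeholder names.
import paho.mqtt.client as mqtt
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
pubsub_topic = publisher.topic_path("my-project", "device-telemetry")

def on_connect(client, userdata, flags, rc):
    # Subscribe to the telemetry branch of the broker's topic hierarchy.
    client.subscribe("devices/+/telemetry", qos=1)

def on_message(client, userdata, msg):
    # Forward each MQTT message to Pub/Sub, keeping the origin topic as an attribute.
    publisher.publish(pubsub_topic, msg.payload, mqtt_topic=msg.topic)

bridge = mqtt.Client(client_id="pubsub-bridge")
bridge.username_pw_set("bridge-user", "bridge-secret")
bridge.tls_set(ca_certs="broker-ca.pem")
bridge.on_connect = on_connect
bridge.on_message = on_message
bridge.connect("mqtt.example.com", 8883)
bridge.loop_forever()

Broker plugins from products such as HiveMQ or EMQX can provide the same kind of forwarding without custom code.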
These are typically one-way integrations for data integration, although some brokers support bidirectional integration. Use cases An MQTT broker architecture is particularly well suited for the device use cases that are described in the following sections. Standards-based data ingestion from heterogeneous devices When you want to collect and analyze data from a large fleet of heterogeneous devices, an MQTT broker is often a good solution. Because MQTT is a widely adopted and implemented standard, many edge devices have built-in support for it, and lightweight MQTT clients are available to add MQTT support to devices that don't. The publish-and-subscribe paradigm is also a part of the MQTT standard, so MQTT-enabled devices can take advantage of this architecture without additional implementation work. By contrast, devices that connect to Pub/Sub must implement the Pub/Sub API or use the Pub/Sub SDK. Running a standards-compliant MQTT broker on Google Cloud thus provides a simple solution for collecting data from a wide range of devices. When your connected devices are not controlled by your application but by a third party, you might not have access to the device system software, and the management of the device itself would be the other party's responsibility. In that circumstance, we recommend that you run an MQTT broker and provide authentication credentials to the third party to set up the device-to-cloud communication channel. Bidirectional communication for multi-party application integration The bidirectional messaging capability of MQTT makes it very suitable for a multiparty-mobile-application use case such as on-demand food delivery or a large-scale web chat application. MQTT has low protocol overhead, and MQTT clients have low resource demands. MQTT also features publish-and-subscribe routing, multiple quality of service (QoS) levels, built-in message retention, and broad protocol support. An MQTT broker can be the core component of a scalable messaging platform for on-demand services applications and similar use cases. Edge-to-cloud integrated messaging Because of the standardization and low overhead that MQTT offers, it can also be a good solution for integrating on-premises and cloud-based messaging applications. For instance, a factory operator can deploy multiple MQTT brokers in the on-premises environment to connect to sensors, machines, gateways, and other devices that are behind the firewall. The local MQTT broker can handle all bidirectional command and control and telemetry messaging for the on-premises infrastructure. The local broker can also be connected by two-way subscription to a parallel MQTT broker cluster in the cloud, allowing communication between the cloud and the edge environment without exposing the on-premises devices and systems to the public internet. What's next Learn how to connect devices and build IoT applications on Google Cloud using Intelligent Products Essentials. Learn about practices for automatically provisioning and configuring edge and bare metal systems and servers. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. 
Send feedback \ No newline at end of file diff --git a/Startup_Program.txt b/Startup_Program.txt new file mode 100644 index 0000000000000000000000000000000000000000..5d41f7add2463aa974e6159775639b9cd3e56ed3 --- /dev/null +++ b/Startup_Program.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/startup +Date Scraped: 2025-02-23T12:01:17.599Z + +Content: +Be there for the latest AI innovations at Google Cloud Next, April 9-11 in Vegas—register todayAccelerate your startup journey with the Google for Startups Cloud ProgramGet access to startup experts, your Google Cloud and Firebase costs covered up to $200,000 USD (up to $350,000 USD for AI startups) over two years, technical training, business support, and Google-wide offers.Apply for the startup program3:06Learn why top startups choose Google CloudWhy Google for Startups Cloud ProgramScale your startup faster and smarter with Google Cloud credits, technical training, startup experts, curated resources, plus AI and Web3 specific benefits.New: AI and Web3 startup benefitsAI-first startups can take advantage of our open AI ecosystem and tap into the best of Google’s infrastructure, AI products, and foundation models.Web3 projects and startups can focus on innovation over infrastructure as you build your decentralized apps, Web3 tooling, services, and more.AI: Up to $350,000 in credits, dedicated AI training, webinars, and expertsWeb3: Events, gated Discord channel, input on Web3 roadmap, grants, and benefitsFinancial benefitsGet Google Cloud credits along with Google-wide offers and discounts to supercharge your startup’s growth.Up to $200,000 USD (up to $350,000 USD for AI startups) in creditsDedicated technical resources exclusive to startupsGoogle-wide perks for Google Workspace and Google Maps PlatformBusiness supportTap into Google Cloud’s global community of startups, experts, investors, and partners to explore growth opportunities and scale your business. Access experts across the entire Google Cloud startup communityGo-to-market supportConnect and network at exclusive startup events and webinarsTechnical guidance and trainingBoost your team’s skills and get expert guidance with online training webinars and workshops, and 1:1 mentorship as you scale.Upskill your team with trainings and workshops Dedicated Startup Success ManagerUse credits toward Skills Boost training courses and labsNew: AI and Web3 startup benefitsAI-first startups can take advantage of our open AI ecosystem and tap into the best of Google’s infrastructure, AI products, and foundation models.Web3 projects and startups can focus on innovation over infrastructure as you build your decentralized apps, Web3 tooling, services, and more.AI: Up to $350,000 in credits, dedicated AI training, webinars, and expertsWeb3: Events, gated Discord channel, input on Web3 roadmap, grants, and benefitsFinancial benefitsGet Google Cloud credits along with Google-wide offers and discounts to supercharge your startup’s growth.Up to $200,000 USD (up to $350,000 USD for AI startups) in creditsDedicated technical resources exclusive to startupsGoogle-wide perks for Google Workspace and Google Maps PlatformBusiness supportTap into Google Cloud’s global community of startups, experts, investors, and partners to explore growth opportunities and scale your business. 
Access experts across the entire Google Cloud startup communityGo-to-market supportConnect and network at exclusive startup events and webinarsTechnical guidance and trainingBoost your team’s skills and get expert guidance with online training webinars and workshops, and 1:1 mentorship as you scale.Upskill your team with trainings and workshops Dedicated Startup Success ManagerUse credits toward Skills Boost training courses and labsWhy top startups build on Google CloudBuild with generative AIBuild generative AI applications quickly, efficiently, and responsibly. Explore tools from Google Cloud that make it easier for developers to build with generative AI and new AI-powered experiences across our cloud portfolio.Start building with generative AIBuild and scale fasterAccelerate time to market by relying on Google Cloud’s managed services, secure global infrastructure, leading technology, and AI-powered solutions. Gain a competitive advantage from faster iterations and quick, easy deployment with Google Cloud’s serverless architecture. Explore startup solutionsSave moneyGoogle Cloud’s managed services reduce your infrastructure costs, and features like automatic scaling allow you to meet real-time demand while only paying for what you need. Save money through operational efficiency, reduced risk, and AI-powered recommendations for cost optimization.Learn more about startup savingsMake smarter decisionsTurn data into real insights and drive smarter, faster decisions and user experiences with Google Cloud’s leading data analytics and AI solutions. Our fully managed and serverless offerings help startups streamline application development and turn data into differentiated, personalized customer experiences. Discover data solutions for startupsPower your startup's success with MongoDB Atlas on Google CloudExplore our expanded partnership and see how startups leverage our programs to scale from ideation to growth.Read the blogBuild the next generation of intelligent applications to unlock the full potential of your startup.Apply nowWhat you’ll need before you applyHave your 18-character Google Cloud billing account ID ready (which you received when you signed up for Google Cloud). Need a billing ID? Sign up now to create one.Sign up for a Google Cloud accountEnsure the business email in your application matches your startup’s public website domain. Need a matching business email account? Create one with Google Workspace.Sign up for Google WorkspaceResources you might find helpfulNew: Generative AI training coursesNew generative AI training courses to advance your cloud career, at no cost.Select this learning pathNew: Technical overview of gen AI productsLearn how to leverage generative AI capabilities through Generative AI support on Vertex AI and Generative AI App Builder.Get started nowEventsExplore what's happening across Google Cloud for startups.View startup eventsStartups building on Google CloudThousands of startups around the world have grown their businesses with Google Cloud. 
Explore a few of their successes.Case studySesame is simplifying patient care during the COVID-19 pandemic by ramping up their telehealth capacity5-min readCase studyMyCujoo livestreams football competitions around the world—powered on Google Cloud5-min readCase studydoc.ai uses AI and machine learning to create a platform that gives users a precise view of their health5-min readCase studyomni:us built an AI pipeline on Google Cloud in four hours to streamline insurance claims processing6-min readCase studyRepl.it has built an online environment for developers to learn, build, host, and ship applications6-min readCase studyPortal Telemedicina delivers healthcare to over 30 million people with an AI-assisted diagnosis platform8-min readSee all customersWe learned what a real partnership looks like. Whenever we had questions, whether business related or technical, we knew we had the strongest partner in the world to ask.Max Lim, CTO at SwitTop startups are building on Google CloudTake the next stepWe’re looking forward to partnering with you.Apply nowStay connected with our startup teamSign up for updatesJoin startup events and on-demand webinarsExplore eventsStay up to date on the latest stories and product newsRead our blogGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Startups_and_SMB.txt b/Startups_and_SMB.txt new file mode 100644 index 0000000000000000000000000000000000000000..96b95f11eedcbe536628efddcb1e568f16a3081e --- /dev/null +++ b/Startups_and_SMB.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions#section-13 +Date Scraped: 2025-02-23T12:01:15.838Z + +Content: +Google Cloud solutionsBrowse Google Cloud solutions or visit our Solutions Center to discover and deploy solutions based on your readiness.Visit Solutions Center Contact Sales Navigate toReference architecturesProducts directoryPricing informationFilter byFiltersIndustry solutionsJump Start SolutionsApplication modernizationArtificial intelligenceAPIs and applicationsData analyticsDatabasesInfrastructure modernizationProductivity and collaborationSecurityStartups and small and medium-sized businessesFeatured partner solutionssearchsendIndustry solutionsWhatever your industry's challenge or use case, explore how Google Cloud solutions can help improve efficiency and agility, reduce cost, participate in new business models, and capture new market opportunities.RetailAnalytics and collaboration tools for the retail value chain.Consumer packaged goodsSolutions for CPG digital transformation and brand growth.ManufacturingMigration and AI tools to optimize the manufacturing value chain.AutomotiveDigital transformation along the automotive value chain.Supply chain and logisticsEnable sustainable, efficient, and resilient data-driven operations across supply chain and logistics operations.EnergyMulticloud and hybrid solutions for energy companies.Healthcare and life sciencesAdvance R&D and improve the clinician and patient experience with AI-driven tools.Media and entertainmentSolutions for content production and distribution operations.GamesAI-driven solutions to build and scale games faster.TelecommunicationsHybrid and multicloud services to deploy and monetize 5G.Financial servicesComputing, databases, and analytics tools for financial services.Capital marketsModern cloud-based architectures, high performance computing, and AI/ML.BankingReduce risk, improve customer experiences and data insights.InsuranceStrengthen decision making and deliver customer-centric 
experiences.PaymentsAdd new revenue streams, ensure secure transactions and scale globally.Government and public sectorGovernmentData storage, AI, and analytics solutions for government agencies.State and local governmentCloud platform helps public sector workforces better serve constituents.Federal governmentTools that increase federal agencies’ innovation and operational effectiveness.Federal cybersecuritySolutions spanning Zero Trust, analytics, and asset protection.EducationTeaching tools to provide more engaging learning experiences.Education technologyAI, analytics, and app development solutions designed for EdTech.Canada public sectorSolutions that help keep your organization secure and compliant.Department of DefenseGoogle Cloud supports the critical missions of the DoD by providing them with the most secure, reliable, innovative cloud solutions.Google Workspace for GovernmentSecure collaboration solutions and program management resources to help fulfill the unique needs of today's government.Jump Start SolutionsTo get started, choose pre-configured, interactive solutions that you can deploy directly from the Google Cloud console.Deploy a dynamic websiteBuild, deploy, and operate a sample dynamic website using responsive web frameworks.Deploy load-balanced virtual machinesLearn best practices for creating and deploying a sample load-balanced VM cluster.Summarize large documentsLearn how to detect text in raw files and automate document summaries with generative AI.Deploy an AI/ML image processing pipelineRecognize and classify images using pre-trained AI models and serverless functions.Build a three-tier web appCreate a model web app using a three-tiered architecture (frontend, middle tier, backend).Deploy an ecommerce web app Build and run a simple ecommerce application for retail organizations using Kubernetes.Create an analytics lakehouse Store, process, analyze, and activate data using a unified data stack.Deploy a data warehouse using BigQueryLearn the basics of building a data warehouse and visualizing data.Build an internal knowledge base Extract question-and-answer pairs from your documents for a knowledge base.Deploy a RAG application Learn how to use retrieval-augmented generation (RAG) to create a chat application.Deploy a Java application Learn to deploy a dynamic web app that mimics a real-world point of sale screen for a retail store.Deploy an ecommerce platform with serverless computing Build and run a simple ecommerce application for retail organizations using serverless capabilities.Build a secure CI/CD pipeline Set up a secure CI/CD pipeline for building, scanning, storing, and deploying containers to GKE.Use a cloud SDK client library Learn key skills for successfully making API calls to identify trends and observations on aggregate data.See all solutions in consoleApplication modernizationAssess, plan, implement, and measure software practices and capabilities to modernize and simplify your organization’s business application portfolios.CAMPProgram that uses DORA to improve your software delivery capabilities.Modernize Traditional ApplicationsAnalyze, categorize, and get started with cloud migration on traditional workloads.Migrate from PaaS: Cloud Foundry, OpenshiftTools for moving your existing containers into Google's managed container services.Migrate from MainframeAutomated tools and prescriptive guidance for moving your mainframe apps to the cloud.Modernize Software DeliverySoftware supply chain best practices - innerloop productivity, CI/CD and S3C.DevOps 
Best PracticesProcesses and resources for implementing DevOps in your org.SRE PrinciplesTools and resources for adopting SRE in your org.Day 2 Operations for GKETools and guidance for effective GKE management and monitoring.FinOps and Optimization of GKEBest practices for running reliable, performant, and cost effective applications on GKE.Run Applications at the EdgeGuidance for localized and low latency apps on Google’s hardware agnostic edge solution.Architect for MulticloudManage workloads across multiple clouds with a consistent platform.Go ServerlessFully managed environment for developing, deploying and scaling apps.API ManagementModernize old applications and accelerate new development with an API-FIRST approach. Learn moreArtificial intelligenceAdd intelligence and efficiency to your business with AI and machine learning.AI HypercomputerAI optimized hardware, software, and consumption, combined to improve productivity and efficiency.Contact Center AIAI model for speaking with customers and assisting human agents.Document AIMachine learning and AI to unlock insights from your documents.Gemini for Google CloudAI-powered collaborator integrated across Google Workspace and Google Cloud. Vertex AI Search for commerceGoogle-quality search and recommendations for retailers' digital properties help increase conversions and reduce search abandonment.Learn moreAPIs and applicationsSecurely unlock your data with APIs, automate processes, and create applications across clouds and on-premises without coding.New business channels using APIsAttract and empower an ecosystem of developers and partners.Open Banking APIxSimplify and accelerate secure delivery of open banking compliant APIs.Unlocking legacy applications using APIsCloud services for extending and modernizing legacy apps.Learn moreData analyticsGenerate instant insights from data at any scale with a serverless, fully managed analytics platform that significantly simplifies analytics.Data warehouse modernizationData warehouse to jumpstart your migration and unlock insights.Data lake modernizationServices for building and modernizing your data lake. Spark on Google CloudRun and write Spark where you need it, serverless and integrated.Stream analyticsInsights from ingesting, processing, and analyzing event streams.Business intelligenceSolutions for modernizing your BI stack and creating rich data experiences.Data sciencePut your data to work with Data Science on Google Cloud.Marketing analyticsSolutions for collecting, analyzing, and activating customer data. Geospatial analytics and AISolutions for building a more prosperous and sustainable business.DatasetsData from Google, public, and commercial providers to enrich your analytics and AI initiatives.Cortex FrameworkReduce the time to value with reference architectures, packaged services, and deployment templates.Learn moreDatabasesMigrate and manage enterprise data with security, reliability, high availability, and fully managed data services.Databases for GamesBuild global, live games with Google Cloud databases.Database migrationGuides and tools to simplify your database migration life cycle. 
Database modernizationUpgrades to modernize your operational database infrastructure.Google Cloud database portfolioDatabase services to migrate, manage, and modernize data.Migrate Oracle workloads to Google CloudRehost, replatform, rewrite your Oracle workloads.Open source databasesFully managed open source databases with enterprise-grade support.SQL Server on Google CloudOptions for running SQL Server virtual machines on Google Cloud. Learn moreInfrastructure modernizationMigrate and modernize workloads on Google's global, secure, and reliable infrastructure.Active AssistAutomatic cloud resource optimization and increased security.Application migrationDiscovery and analysis tools for moving to the cloud.Backup and Disaster RecoveryEnsure your business continuity needs are met.Data center migrationMigration solutions for VMs, apps, databases, and more.Rapid Migration and Modernization ProgramSimplify your path to success in the cloud.High performance computingCompute, storage, and networking options to support any workload.Mainframe modernizationAutomated tools and prescriptive guidance for moving to the cloud.ObservabilityDeliver deep cloud observability with Google Cloud and partners.SAP on Google CloudCertifications for running SAP applications and SAP HANA.Virtual desktopsRemote work solutions for desktops and applications (VDI & DaaS).Windows on Google CloudTools and partners for running Windows workloads.Red Hat on Google CloudEnterprise-grade platform for traditional on-prem and custom applications.Cross-cloud NetworkSimplify hybrid and multicloud networking and secure your workloads, data, and users.Learn moreProductivity and collaborationChange the way teams work with solutions designed for humans and built for impact.Google WorkspaceCollaboration and productivity tools for enterprises. Chrome EnterpriseChrome OS, Chrome Browser, and Chrome devices built for business. Google Workspace EssentialsSecure video meetings and modern collaboration for teams.Cloud IdentityUnified platform for IT admins to manage user devices and apps.Cloud SearchEnterprise search for employees to quickly find company information.Learn moreSecurityDetect, investigate, and protect against online threats.Digital SovereigntyA comprehensive set of sovereign capabilities, allowing you to adopt the right controls on a workload-by-workload basis.Security FoundationSolution with recommended products and guidance to help achieve a strong security posture.Security analytics and operationsSolution for analyzing petabytes of security telemetry.Web App and API Protection (WAAP)Threat and fraud protection for your web applications and APIs.Security and resilience frameworkSolutions for each phase of the security and resilience life cycle.Risk and compliance as code (RCaC)Solution to modernize your governance, risk, and compliance function with automation. 
Software Supply Chain SecuritySolution for strengthening end-to-end software supply chain security.Google Cloud Cybershield™Strengthen nationwide cyber defense.Learn moreStartups and small and medium-sized businessesAccelerate startup and small and medium-sized businesses growth with tailored solutions and programs.Google Cloud for Web3Build and scale faster with simple, secure tools, and infrastructure for Web3.Startup solutions Grow your startup and solve your toughest challenges using Google’s proven technology.Startup programGet financial, business, and technical support to take your startup to the next level.Small and medium-sized businessesExplore solutions for web hosting, app development, AI, and analytics.Software as a serviceBuild better SaaS products, scale efficiently, and grow your business. Featured partner solutionsGoogle Cloud works with some of the most trusted, innovative partners to help enterprises innovate faster, scale smarter, and stay secure. Here are just a few of them.CiscoCombine Cisco's networking, multicloud, and security portfolio with Google Cloud services to innovate on your own terms.DatabricksDatabricks on Google Cloud offers enterprise flexibility for AI-driven analytics on one open cloud platform.Dell TechnologiesThe Dell and Google Cloud partnership delivers a variety of solutions to help transform how enterprises operate their business.IntelGet performance on your own terms with customizable Google Cloud and Intel technologies designed for the most demanding enterprise workloads and applications.MongoDBMongoDB Atlas provides customers a fully managed service on Google’s globally scalable and reliable infrastructure.NetAppDiscover advanced hybrid cloud data services that simplify how you migrate and run enterprise workloads in the cloud.Palo Alto NetworksCombine Google’s secure-by-design infrastructure with dedicated protection from Palo Alto Networks to help secure your applications and data in hybrid environments and on Google Cloud.SAPDrive agility and economic value with VM-based infrastructure, analytics, and machine learning innovations.SplunkSplunk and Google Cloud have partnered to help organizations ingest, normalize, and analyze data at scale. VMwareMigrate and run your VMware workloads natively on Google Cloud.Red HatEnterprise-grade platform for traditional on-prem and custom applications with the security, performance, scalability, and simplicity of Google Cloud.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Storage.txt b/Storage.txt new file mode 100644 index 0000000000000000000000000000000000000000..563976ef7061b3f648ec80228a760e4b8ee123a0 --- /dev/null +++ b/Storage.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/storage +Date Scraped: 2025-02-23T12:09:47.284Z + +Content: +Tech leaders: Get an insider view of Google Cloud’s App Dev and Infrastructure solutions on Oct 30. Register for the Summit.Google Cloud online storage productsCloud-based storage services for your business, all running on Google Cloud’s infrastructure. If you’re a consumer looking for file sharing, use Google Drive. If you’re looking for photo storage, use Google Photos. 
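Before the product catalog below, a quick orientation for the object storage entries: the following is a minimal sketch, assuming the google-cloud-storage Python client library and default application credentials, of writing an object and reading it back. The bucket and object names are placeholders, not part of the original page.

```python
# Minimal sketch: write and read an object in Cloud Storage.
# Assumes the google-cloud-storage client library and default credentials;
# the bucket and object names below are placeholders.
from google.cloud import storage

def upload_and_read(bucket_name: str = "example-bucket") -> bytes:
    client = storage.Client()
    bucket = client.bucket(bucket_name)

    # Write an object under a chosen key.
    blob = bucket.blob("reports/2025-02-23.txt")
    blob.upload_from_string("hello from Cloud Storage")

    # Read the same object back as bytes.
    return bucket.blob("reports/2025-02-23.txt").download_as_bytes()
```

The same client also covers listing objects, signed URLs, and lifecycle configuration; the catalog that follows maps each storage product to the workloads it fits best.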
Go to console Contact salesDeploy a sample website Launch a sample drop-ship retail product website that’s publicly accessible and customizable, leveraging Python and JavaScript.Deploy an analytics warehouseLaunch a preconfigured solution that unifies data lakes and data warehouses for storing, processing, and analyzing both structured and unstructured data.Deploy a data warehouseLaunch an example data warehouse with generative AI capabilities from Vertex AI to explore, analyze, and summarize data trends.sparkLooking to build a solution?I want high availability storage for low latency delivery of streaming videosI want to migrate an on-premises MySQL database I have to the cloudI want to design a storage solution for my private cloudMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionHigh-performance storageprompt_suggestionMigrate on-prem database to cloudprompt_suggestionDesign storage for private cloudFind the right online storage product for your businessCategoryProductsGood forObject storageCloud StorageObject storage for companies. Store any type of data, any amount of data, and retrieve it as often as you’d like.Stream videosImage and web asset librariesData lakesLaunch or learn about a preconfigured web appUsing enterprise-ready object storage, this interactive solution deploys a sample dropship website to familiarize you with Google Cloud.Deploy full solution in 10 minutes Recommended by Google Cloud expertsFully preconfigured Block storageHyperdisk, Persistent Disk, and Local SSDReliable, high-performance block storage for virtual machine instances. SAP HANA, Oracle, and SQL ServerRun PostgreSQL, MySQL, and modern databasesScale-out analytics (e.g., Hadoop and Kafka)Performance-critical applicationsFile storageFilestoreFully managed service for file migration and storage. Easily mount file shares on Compute Engine VMs.Data analyticsRendering and media processingApplication migrationsWeb content managementParallelstoreHigh bandwidth, high IOPS, ultra-low latency, managed parallel file service.AI/ML workload scratch spaceHigh performance quantitative analysisComplex modeling and simulationGoogle Cloud NetApp VolumesNetApp's best-in-class file storage service, in Google Cloud.Data sharing for Windows and LinuxData recovery after ransomware, corruption, or lossMeeting mandates for uptime, availability, and securityArchival storageCloud StorageUltra low-cost archival storage with online access speeds.BackupsMedia archivesLong-tail contentMeet regulation or compliance requirements Data transfer Data Transfer ServicesTools to help you perform a data transfer, either from another cloud provider or from your private data center.Collecting research dataMoving ML/AI training datasetsMigrating from S3 to Google CloudTransfer ApplianceRuggedized server to collect and physically move data from field locations with limited connectivity or from data centers.Transfer data for archival, migrations, or analyticsCapture sensor data for machine learning and analysisMove data from bandwidth-constrained locationsBackup and disaster recoveryGoogle Cloud Backup and DRManaged backup and disaster recovery (DR) service for centralized, application-consistent data protection.
Protect Compute Engine VMs, VMware VMs, databases, and file systemsBack up on-premises workloads to Google Cloud Mobile app servicesCloud Storage for FirebaseScalable storage for user-generated content from your app.User-generated contentUploads over mobile networksRobust uploads and downloadsStrong user-based securityCollaboration, communication, and file storageGoogle WorkspaceStore all of your organization’s content safely and securely in the cloud.Access files from anywhere with DriveCollaborate on content in Docs, Sheets, and SlidesStay connected with your colleagues through Gmail, Calendar, Chat, and MeetBuild artifactsArtifact RegistryA universal repository manager for container images, OS packages, and language packages that you build and deploy.Direct integration with CI/CD pipelinesCentralize storage of container images, OS packages, and language packagesBuilt-in container scanning for vulnerabilitiesFind the right online storage product for your businessObject storageCloud StorageObject storage for companies. Store any type of data, any amount of data, and retrieve it as often as you’d like.Stream videosImage and web asset librariesData lakes Block storageHyperdisk, Persistent Disk, and Local SSDReliable, high-performance block storage for virtual machine instances. SAP HANA, Oracle, and SQL serverRun PostgresSQL, MySQL, and modern databasesScale out analytics (e.g. Hadoop and Kafka)Performance-critical applicationsFile storageFilestoreFully managed service for file migration and storage. Easily mount file shares on Compute Engine VMs.Data analyticsRendering and media processingApplication migrationsWeb content managementArchival storageCloud StorageUltra low-cost archival storage with online access speeds.BackupsMedia archivesLong-tail contentMeet regulation or compliance requirements Data transfer Data Transfer ServicesTools to help you perform a data transfer, either from another cloud provider or from your private data center.Collecting research dataMoving ML/AI training datasetsMigrating from S3 to Google CloudBackup and disaster recoveryGoogle Cloud Backup and DRManaged backup and disaster recovery (DR) service for centralized, application-consistent data protection. Protect Compute Engine VMs, VMware VMs, databases, and file systemsBack up on-premises workloads to Google Cloud Mobile app servicesCloud Storage for FirebaseScalable storage for user-generated content from your app.User-generated contentUploads over mobile networksRobust uploads and downloadsStrong user-based securityCollaboration, communication, and file storageGoogle WorkspaceStore all of your organization’s content safely and securely in the cloud.Access files from anywhere with DriveCollaborate on content in Docs, Sheets, and SlidesStay connected with your colleagues through Gmail, Calendar, Chat, and MeetBuild artifactsArtifact RegistryA universal repository manager for container images, OS packages, and language packages that you build and deploy.Direct integration with CI/CD pipelinesCentralize storage of container images, OS packages, and language packagesBuilt-in container scanning for vulnerabilitiesNeed more guidance? Connect with our sales team or a vetted third-party vendor.Talk to a Google Cloud sales representative about your specific storage needs.Contact salesConnect with a third-party vendor for help with implementation, migration, and more.See partnersGo deeper! 
Check out this guide and storage decision tree to help you pick the best storage for you.See how brands store and run on Google CloudVideoTwitter relies on Google Cloud for its flexibility in storage and compute to scale38:56VideoThe New York Times digitizes millions of photos in their archives with help from Google Cloud03:55Case StudyAXA Switzerland uses Cloud Storage to store and access information vital to its business operations5-min readCase StudyBroad Institute uses Cloud Storage to augment its genome sequence analysis5-min readCase StudyKing reduces overhead costs while storing and analyzing data from millions of users5-min readSee all customersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Storage_Transfer_Service(1).txt b/Storage_Transfer_Service(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..cbeb1ff1f36944b28cdd116eb26ba2f9645d836a --- /dev/null +++ b/Storage_Transfer_Service(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/storage-transfer-service +Date Scraped: 2025-02-23T12:10:10.935Z + +Content: +Jump to Storage Transfer Service Storage Transfer ServiceTransfer data quickly and securely between object and file storage across Google Cloud, Amazon, Azure, on-premises, and more.Go to console View documentationMove data cloud-to-cloud, on-prem to cloud, and between cloud storage buckets. See all options.Complete transfers without writing a single line of codeCentralized job management to monitor transfer statusSecurity and reliability built in, every step of the wayConfigure your data transfer to meet your business needs: manage cost, time, and scheduleLooking for the full list of migration solutions and services?See all migration optionsBenefitsEnable your hybrid- or multicloud strategyMove data from your private data centers, AWS, Azure, and Google Cloud, globally available through a single, easy-to-use interface.Optimize infrastructure management and costsReduce your infrastructure costs by moving your data storage and application infrastructure to Google Cloud, while minimizing downtime.Complete large-scale data transfers fastTransfer petabytes of data from on-premises sources or other clouds over online networks— billions of files and 10s of Gbps. Optimize your network bandwidth and accelerate transfers with scale-out performance.Key featuresReliable and secure data transfer servicesData encryption and validationStorage Transfer Service encrypts data in transit, supports VPC Service Controls, and uses checksums to perform data integrity checks, ensuring your data arrives intact.Incremental transferAll transfers only move files and objects that are new, updated, or deleted since the last transfer, minimizing the amount of data that needs to be transferred.Metadata preservationStorage Transfer Service offers control over preserving object and file metadata during transfer. 
Read more on metadata preservation.NewsUnderstanding the technical and organizational challenges of data migrationWhat's newSee the latest updates about our Storage Transfer ServiceSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postIntroducing Storage Transfer Service for on-premises dataRead the blogVideoFrom blobs to tables: where and how to store your stuffWatch videoVideoCE Chat: efficiently migrating data into Google CloudWatch videoDocumentationFind resources and documentation for Storage Transfer ServiceBest Practice Migration to Google Cloud: transferring your large datasetsThis document explores the process of moving data into Google Cloud, from planning a data transfer to using best practices in implementing a plan.Learn moreGoogle Cloud BasicsStorage Transfer Service overviewAccess guides, references, and resources for learning more about Storage Transfer Service.Learn moreTutorialManaging transfers for on-premises jobsThis document describes how to create your transfer job, install transfer agents, and how to manage your transfer jobs.Learn moreTutorialMetadata preservation in Storage Transfer ServiceThis document describes metadata that is preserved when you use Storage Transfer Service for on-premises data to transfer data to Cloud Storage.Learn moreBest PracticePerformance, scaling and requirements for transfersUnderstand the requirements for object size, object naming, and other scale and performance guidelines.Learn moreNot seeing what you’re looking for?View all product documentationUse casesExplore common use cases for Storage Transfer ServiceUse case Data center migrationMove from your existing infrastructure to the cloud. The data you create and store on-premises demands time and significant resources to manage cost-effectively, securely, and reliably. Utilize our Storage Transfer Service solutions to migrate your workloads and datasets quickly to Google Cloud and start saving. Learn about all of Google Cloud’s migration services and solutions.Use caseContent storage and deliveryStorage Transfer Service enables you to push your media assets from other clouds or your private data center into multi-regional setups, designed for video streaming and frequently accessed content like websites and images. Moving data can enable more efficient content distribution and simpler publishing operations, with the help of products such as Cloud CDN.Use caseDisaster recovery, backup, and archivalWith our data transfer services, you can schedule incremental syncs to enable disaster recovery for apps running in other clouds and on-premises, to meet your recovery goals. Learn about all of our backup and disaster recovery options, or read about how to use Cloud Storage for archiving your data.Use caseAnalytics and machine learningOnce transferred to Cloud Storage, your data is enabled for analytics or machine learning projects, via data pipelines that are hybrid or run across multiple clouds or cloud regions. Wherever your data is, you can take advantage of Google Cloud’s analytics and machine learning suite. Learn more about our innovative machine learning products and services.View all technical guidesAll featuresLearn more about Storage Transfer Service featuresMonitoring and loggingTransfer service for on-premises data produces transfer logs so that you can verify the results of your transfer jobs and offers an option to monitor progress of transfer jobs via Cloud Monitoring. 
Storage Transfer Service can be configured to deliver a Pub/Sub notification on transfer completion.Data integrityStorage Transfer Service uses metadata available from the source storage system, such as checksums and file sizes, to ensure that data written to Cloud Storage is the same data read from the source. In addition to data integrity provided by the TLS protocol, the Storage Transfer Service for on-premises data calculates a CRC32C checksum over each file that it copies as the file is being read.Security and encryptionWe use TLS encryption for HTTPS connections, through the public internet and private connections, and support transfers to Cloud Storage buckets protected by VPC Service Controls. The only exception to TLS encryption is if you specify an HTTP URL for a URL list transfer. If you are using Cloud Interconnect, you can get another layer of security by accessing a private API endpoint.Data transfer schedulingYou can schedule one-time transfer operations or recurring transfer operations. When transferring data from other cloud providers, you can schedule periodic synchronization with advanced filters based on file creation dates, file names, and the times of day you prefer to import data. There is no manual intervention needed, and the results are recorded in the Cloud Console.Bandwidth throttlingWe’ve incorporated controls into Storage Transfer Service for on-premises data to avoid disruption of your day-to-day business operations. Configure a bandwidth limit for your Google Cloud project to limit the rate at which on-premises agents will copy files. The bandwidth limit is shared across all transfer jobs in your project.Incremental transferBy default, all transfers begin by evaluating the data present at the source and destination. This process determines which source objects are new, updated, or deleted since the last transfer. With this initial step, you are able to minimize the amount of data that needs to be sent, use bandwidth effectively, and ensure transfers run quickly.Flexibility, filtering, and controlsYou can use includePrefix and excludePrefix when creating transfers to limit what objects Storage Transfer Service operates on. We also support modification-time-based data filtering options.Dynamic scale outWhen transferring data from on-premises locations, getting better performance is as easy as bringing up more Storage Transfer Service agents. No transfer config changes or re-submissions required. As soon as the agents are up, they will start performing transfer work.Reliability and fault toleranceWe make reliability work out of the box. If some of your agents fail, the remaining ones will pick up their work. If all of your agents fail, that's not a problem: as soon as you bring them back up, transfers pick up where they left off. No special recovery or retry logic required.Simple pricingWhen using Storage Transfer Service for on-premises data, we charge at a per-GB rate. When transferring data from other cloud providers, Storage Transfer Service is free of charge.
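As a rough illustration of the scheduling and prefix-filtering controls described above, the following sketch uses the google-cloud-storage-transfer Python client to create a recurring bucket-to-bucket job. The project, bucket names, the "logs/" prefix, and the start date are placeholders, and the field names assume the current client library surface.

```python
# Minimal sketch: create a daily Cloud Storage-to-Cloud Storage transfer job
# that only copies objects under a given prefix. Assumes the
# google-cloud-storage-transfer client library; all names are placeholders.
from google.cloud import storage_transfer

def create_daily_prefix_transfer(project_id: str = "example-project") -> None:
    client = storage_transfer.StorageTransferServiceClient()

    job = client.create_transfer_job(
        storage_transfer.CreateTransferJobRequest(
            transfer_job={
                "project_id": project_id,
                "description": "Daily copy of logs/ between buckets (example)",
                "status": storage_transfer.TransferJob.Status.ENABLED,
                # Recurring schedule: runs every day from the given start date.
                "schedule": {
                    "schedule_start_date": {"year": 2025, "month": 3, "day": 1},
                    "start_time_of_day": {"hours": 2, "minutes": 0},
                },
                "transfer_spec": {
                    "gcs_data_source": {"bucket_name": "example-source-bucket"},
                    "gcs_data_sink": {"bucket_name": "example-sink-bucket"},
                    # Only objects under the logs/ prefix are considered.
                    "object_conditions": {"include_prefixes": ["logs/"]},
                    "transfer_options": {
                        "delete_objects_from_source_after_transfer": False
                    },
                },
            }
        )
    )
    print(f"Created transfer job: {job.name}")
```

A job created this way runs on the schedule you define and, like any Storage Transfer Service job, moves only objects that are new or changed since the previous run.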
After transferring data, you are charged for data stored on Cloud Storage as documented in Cloud Storage pricing, and external cloud providers' costs may also apply while you use Storage Transfer Service.Transfer from a URL listYou can use Storage Transfer Service to transfer data from a list of public data locations to a Cloud Storage bucket.Pricing Storage Transfer Service pricing detailsFor more detailed pricing information, please view the pricing guide.ProductPricingTransfers using agents$0.0125/GBAll other sources/sinksFreeView pricing detailsPartnersStorage Transfer Service partnersFor advanced network-level optimization or ongoing data transfer workflows, you may want to use even more advanced tools offered by Google Cloud partners.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Storage_Transfer_Service.txt b/Storage_Transfer_Service.txt new file mode 100644 index 0000000000000000000000000000000000000000..3af2e9d35cb47daead1b0a35d3d5982cd3453ee0 --- /dev/null +++ b/Storage_Transfer_Service.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/storage-transfer-service +Date Scraped: 2025-02-23T12:06:58.077Z + +Content: +Jump to Storage Transfer Service Storage Transfer ServiceTransfer data quickly and securely between object and file storage across Google Cloud, Amazon, Azure, on-premises, and more.Go to console View documentationMove data cloud-to-cloud, on-prem to cloud, and between cloud storage buckets. See all options.Complete transfers without writing a single line of codeCentralized job management to monitor transfer statusSecurity and reliability built in, every step of the wayConfigure your data transfer to meet your business needs: manage cost, time, and scheduleLooking for the full list of migration solutions and services?See all migration optionsBenefitsEnable your hybrid- or multicloud strategyMove data from your private data centers, AWS, Azure, and Google Cloud, globally available through a single, easy-to-use interface.Optimize infrastructure management and costsReduce your infrastructure costs by moving your data storage and application infrastructure to Google Cloud, while minimizing downtime.Complete large-scale data transfers fastTransfer petabytes of data from on-premises sources or other clouds over online networks— billions of files and 10s of Gbps. Optimize your network bandwidth and accelerate transfers with scale-out performance.Key featuresReliable and secure data transfer servicesData encryption and validationStorage Transfer Service encrypts data in transit, supports VPC Service Controls, and uses checksums to perform data integrity checks, ensuring your data arrives intact.Incremental transferAll transfers only move files and objects that are new, updated, or deleted since the last transfer, minimizing the amount of data that needs to be transferred.Metadata preservationStorage Transfer Service offers control over preserving object and file metadata during transfer. 
Read more on metadata preservation.NewsUnderstanding the technical and organizational challenges of data migrationWhat's newSee the latest updates about our Storage Transfer ServiceSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postIntroducing Storage Transfer Service for on-premises dataRead the blogVideoFrom blobs to tables: where and how to store your stuffWatch videoVideoCE Chat: efficiently migrating data into Google CloudWatch videoDocumentationFind resources and documentation for Storage Transfer ServiceBest Practice Migration to Google Cloud: transferring your large datasetsThis document explores the process of moving data into Google Cloud, from planning a data transfer to using best practices in implementing a plan.Learn moreGoogle Cloud BasicsStorage Transfer Service overviewAccess guides, references, and resources for learning more about Storage Transfer Service.Learn moreTutorialManaging transfers for on-premises jobsThis document describes how to create your transfer job, install transfer agents, and how to manage your transfer jobs.Learn moreTutorialMetadata preservation in Storage Transfer ServiceThis document describes metadata that is preserved when you use Storage Transfer Service for on-premises data to transfer data to Cloud Storage.Learn moreBest PracticePerformance, scaling and requirements for transfersUnderstand the requirements for object size, object naming, and other scale and performance guidelines.Learn moreNot seeing what you’re looking for?View all product documentationUse casesExplore common use cases for Storage Transfer ServiceUse case Data center migrationMove from your existing infrastructure to the cloud. The data you create and store on-premises demands time and significant resources to manage cost-effectively, securely, and reliably. Utilize our Storage Transfer Service solutions to migrate your workloads and datasets quickly to Google Cloud and start saving. Learn about all of Google Cloud’s migration services and solutions.Use caseContent storage and deliveryStorage Transfer Service enables you to push your media assets from other clouds or your private data center into multi-regional setups, designed for video streaming and frequently accessed content like websites and images. Moving data can enable more efficient content distribution and simpler publishing operations, with the help of products such as Cloud CDN.Use caseDisaster recovery, backup, and archivalWith our data transfer services, you can schedule incremental syncs to enable disaster recovery for apps running in other clouds and on-premises, to meet your recovery goals. Learn about all of our backup and disaster recovery options, or read about how to use Cloud Storage for archiving your data.Use caseAnalytics and machine learningOnce transferred to Cloud Storage, your data is enabled for analytics or machine learning projects, via data pipelines that are hybrid or run across multiple clouds or cloud regions. Wherever your data is, you can take advantage of Google Cloud’s analytics and machine learning suite. Learn more about our innovative machine learning products and services.View all technical guidesAll featuresLearn more about Storage Transfer Service featuresMonitoring and loggingTransfer service for on-premises data produces transfer logs so that you can verify the results of your transfer jobs and offers an option to monitor progress of transfer jobs via Cloud Monitoring. 
Storage Transfer Service can be configured to deliver a Pub/Sub notification on transfer completion.Data integrityStorage Transfer Service uses metadata available from the source storage system, such as checksums and file sizes, to ensure that data written to Cloud Storage is the same data read from the source. In addition to data integrity provided by the TLS protocol, the Storage Transfer Service for on-premises data calculates a CRC32C checksum over each file that it copies as the file is being read.Security and encryptionWe use TLS encryption for HTTPS connections, through the public internet and private connections, and support transfers to Cloud Storage buckets protected by VPC Service Controls. The only exception to TLS encryption is if you specify an HTTP URL for a URL list transfer. If you are using Cloud Interconnect, you can get another layer of security by accessing a private API endpoint.Data transfer schedulingYou can schedule one-time transfer operations or recurring transfer operations. When transferring data from other cloud providers, you can schedule periodic synchronization with advanced filters based on file creation dates, file names, and the times of day you prefer to import data. There is no manual intervention needed, and the results are recorded in the Cloud Console.Bandwidth throttlingWe’ve incorporated controls into Storage Transfer Service for on-premises data to avoid disruption of your day-to-day business operations. Configure a bandwidth limit for your Google Cloud project to limit the rate at which on-premises agents will copy files. The bandwidth limit is shared across all transfer jobs in your project.Incremental transferBy default, all transfers begin by evaluating the data present at the source and destination. This process determines which source objects are new, updated, or deleted since the last transfer. With this initial step, you are able to minimize the amount of data that needs to be sent, use bandwidth effectively, and ensure transfers run quickly.Flexibility, filtering, and controlsYou can use includePrefix and excludePrefix when creating transfers to limit what objects Storage Transfer Service operates on. We also support modification-time-based data filtering options.Dynamic scale outWhen transferring data from on-premises locations, getting better performance is as easy as bringing up more Storage Transfer Service agents. No transfer config changes or re-submissions required. As soon as the agents are up, they will start performing transfer work.Reliability and fault toleranceWe make reliability work out of the box. If some of your agents fail, the remaining ones will pick up their work. If all of your agents fail, that's not a problem: as soon as you bring them back up, transfers pick up where they left off. No special recovery or retry logic required.Simple pricingWhen using Storage Transfer Service for on-premises data, we charge at a per-GB rate. When transferring data from other cloud providers, Storage Transfer Service is free of charge.
After transferring data, you are charged for data stored on Cloud Storage as documented in Cloud Storage pricing, and external cloud providers' costs may also apply while you use Storage Transfer Service.Transfer from a URL listYou can use Storage Transfer Service to transfer data from a list of public data locations to a Cloud Storage bucket.Pricing Storage Transfer Service pricing detailsFor more detailed pricing information, please view the pricing guide.ProductPricingTransfers using agents$0.0125/GBAll other sources/sinksFreeView pricing detailsPartnersStorage Transfer Service partnersFor advanced network-level optimization or ongoing data transfer workflows, you may want to use even more advanced tools offered by Google Cloud partners.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Stream_Analytics.txt b/Stream_Analytics.txt new file mode 100644 index 0000000000000000000000000000000000000000..12a210295921646f6c6ed4505989ac0bea4d3aac --- /dev/null +++ b/Stream_Analytics.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/stream-analytics +Date Scraped: 2025-02-23T11:59:04.581Z + +Content: +Google Cloud is a Leader in the 2023 Forrester Wave: Streaming Data Platforms. Learn more.Streaming analyticsIngest, process, and analyze event streams in real time. Google Cloud's streaming analytics solutions make data more organized, useful, and accessible from the instant it’s generated.Request a demoContact salesForrester names Google a Leader in The Forrester Wave™: Streaming Data Platforms 2023Register to download the reportBenefitsMake the most of real-time dataGenerate real value from real-time insightsIngest, process, and analyze real-time event streams and take business impacting action on high-value, perishable insights.Remove operational complexityLeverage an auto-scaling and fully managed streaming infrastructure that solves for variable data volumes, performance tuning, and resource provisioning.Utilize the best of Google CloudAccess native integrations with Vertex AI Workbench, BigQuery, and other Google Cloud services for rapid and trusted development of intelligent solutions.Key featuresReal-time made real easyAdopt simple ingestion for complex eventsIngest and analyze hundreds of millions of events per second from applications or devices virtually anywhere on the globe with Pub/Sub. Directly stream millions of events per second into your data warehouse for SQL-based analysis with BigQuery's streaming API. Or replicate data from relational databases directly into BigQuery on a serverless platform with Datastream.Unify stream and batch processing without lock-inUnify streaming and batch data analysis with equal ease and build cohesive data pipelines with Dataflow. Dataflow ensures exactly-once processing, making your streaming pipelines more reliable and consistent for mission-critical applications. Data engineers can reuse code through Dataflow’s open source SDK, Apache Beam, which provides pipeline portability for hybrid or multi-cloud environments.Keep your current tools while exploring next-generation AIBridge, migrate, or extend on-premises Apache Kafka and Apache Spark-based solutions through Confluent Cloud and Dataproc. 
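To make the ingestion path described above concrete, here is a minimal sketch, assuming the google-cloud-pubsub and google-cloud-bigquery client libraries; the project, topic, and table names are placeholders, not part of the original page.

```python
# Minimal sketch: publish an event to Pub/Sub and stream the same record
# into BigQuery. Assumes the google-cloud-pubsub and google-cloud-bigquery
# client libraries; project, topic, and table names are placeholders.
import json
from google.cloud import bigquery, pubsub_v1

PROJECT_ID = "example-project"
TOPIC_ID = "clickstream-events"
TABLE_ID = "example-project.analytics.events"

def publish_event(event: dict) -> None:
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    # publish() returns a future; result() blocks until Pub/Sub acknowledges.
    publisher.publish(topic_path, json.dumps(event).encode("utf-8")).result()

def stream_row(event: dict) -> None:
    client = bigquery.Client(project=PROJECT_ID)
    # insert_rows_json uses the streaming API; rows are queryable within seconds.
    errors = client.insert_rows_json(TABLE_ID, [event])
    if errors:
        raise RuntimeError(f"Streaming insert failed: {errors}")

if __name__ == "__main__":
    sample = {"user_id": "u-123", "action": "page_view", "ts": "2025-02-23T11:59:04Z"}
    publish_event(sample)
    stream_row(sample)
```

In a production pipeline, a Dataflow job reading from the Pub/Sub subscription would typically sit between these two steps to handle transformation and exactly-once processing, as described earlier on this page.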
Combined with Data Fusion’s GUI, data analysts and engineers can build streaming pipelines in a few clicks. Embed Google’s Vertex AI Workbench solution in your streaming analytics pipeline for real-time personalization, anomaly detection, and predictive maintenance scenarios.Ready to get started? Contact usLearn more about streaming analytics on Google CloudEventFree Coursera Training: Getting started with Streaming Analytics using DataflowWatch videoCIO’s guide to data analytics and machine learningRegister to read ebookESG Whitepaper: Google Streaming Analytics PlatformRegister to read whitepaperCustomersSee how these customers are using our streaming analytics solutionsCase studyEmarsys: Building a real-time data and AI platform with Google Cloud.6-min readCase studyAirAsia goes “data first” to serve customers, refine pricing, and grow revenue.7-min readCase studyWix works with Google Cloud to offer user dashboards that cut dev costs by 20%.3-min readCase studyNYC Cyber Command keeps the city’s digital services more secure at vast scale.6-min readSee all customersRelated servicesStreaming analytics services from Google CloudPub/SubSimple, reliable staging location for large-scale ingestion of streaming data originating anywhere in the world.DataflowData processing service built on the open source Apache Beam SDK for transforming and enriching streaming and batch data with equal reliability.BigQueryInstantly ingest and analyze millions of rows of data and create real-time dashboards using BigQuery.DatastreamSeamless replication from relational databases directly to BigQuery, enabling near real-time insights on operational data.What's newGet the latest streaming analytics news, blogs, and eventsSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postGoogle Cloud named a Leader in The Forrester Wave™: Streaming Analytics, Q2 2021Read the blogBlog postAfter Lambda: Exactly-once processing in Google Cloud DataflowRead the blogBlog postEasy access to stream analytics with SQL, real-time AI, and moreRead the blogBlog postHow do I move data from MySQL to BigQuery?Read the blogTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Supply_Chain_and_Logistics.txt b/Supply_Chain_and_Logistics.txt new file mode 100644 index 0000000000000000000000000000000000000000..63036629075fbcb2c3399790f4cd33d348e685f1 --- /dev/null +++ b/Supply_Chain_and_Logistics.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/supply-chain-logistics +Date Scraped: 2025-02-23T11:58:00.936Z + +Content: +Google Cloud for supply chain and logisticsWe’re on a mission to help organizations harness the power of data and AI to drive more intelligent logistics operations and supply chains. 
We partner with leading supply chain and logistics teams to enable sustainable, efficient, and resilient data-driven operations.Talk with an expertIndustry-leading data and AI cloudWe're helping customers get closer to the end consumer, optimizing the supply chain by building resilience in your supply chain and scaling automation and reducing carbon footprint across the entire value chain with industry-leading analytics and AI.Learn more hereTransform your supply chain and logistics operations with Google CloudProvide exceptional customer experiences Get closer to the end customerGet closer to the consumer by infusing machine learning into existing demand forecasting.Google Cloud helps organizations ingest large volumes of data, allowing planners to include relevant demand drivers such as consumer trends and events, weather, commodity prices, freight charges, and more. “Machine learning is part of everything we do at Wayfair to support each of the 30 million active customers on our website. We use ML models to forecast product demand across the globe, to ensure our customers can quickly access what they’re looking for.”Vinay Narayana, Head of ML Engineering, WayfairExplore our offerings:Vertex AILearn moreBigQueryLearn moreDemand SensingLearn moreDrive efficient and sustainable operations Optimize your operationsAchieve end-to-end visibility of your logistics operations and supply chains.Google Cloud can help organizations optimize their supply chains and logistics operations, delivering cost and carbon efficiency.“To scale quickly, we adopted Last Mile Fleet Solution and Cloud Fleet Routing which enable our drivers and fleet managers to maintain peak efficiency and go beyond our 98% on-time, first-time delivery rates.”Oliver Colinet, Chief Technology & Product Officer, PaackExplore our offerings:Cloud Fleet Routing and Last Mile Fleet Solution Learn moreGoogle Earth EngineLearn moreDocument AILearn moreDeliver resiliency and reliabilityPredict problems before they ariseAnalyze, anticipate, and communicate issues before they happen.Google Cloud empowers supply chain and logistics professionals to solve problems in real time by removing bottlenecks and resolving anomalies by ingesting large datasets with AI/ML. “Google Cloud Platform is the core of our maintenance system for all our trains. We have no doubt that we will continue putting our trust in it as we develop our use of machine learning, artificial vision, and Cloud IoT Core. For us, this is a collaboration.”José Antonio Marcos, Chief Maintenance Engineer, TalgoExplore our offerings:Predictive maintenance Learn moreVisual Inspection AILearn moreContact Center AILearn moreEmpower collaboration across teamsEnable teams to achieve moreTransform how teams connect, create, collaborate, and analyze data. Increase efficiency and deliver impact from anywhere with reports, dashboards, and hybrid collaboration.“We are in the process of transforming all our tools and integrating new solutions to be able to work faster and more efficiently. 
Two main IT priorities are mobility and openness, in terms of working from anywhere and being able to interact with other information systems.”Communication Manager, FM LogisticExplore our offerings:Analytics HubLearn moreLookerLearn moreGoogle WorkspaceLearn moreProvide exceptional customer experiences Provide exceptional customer experiences Get closer to the end customerGet closer to the consumer by infusing machine learning into existing demand forecasting.Google Cloud helps organizations ingest large volumes of data, allowing planners to include relevant demand drivers such as consumer trends and events, weather, commodity prices, freight charges, and more. “Machine learning is part of everything we do at Wayfair to support each of the 30 million active customers on our website. We use ML models to forecast product demand across the globe, to ensure our customers can quickly access what they’re looking for.”Vinay Narayana, Head of ML Engineering, WayfairExplore our offerings:Vertex AILearn moreBigQueryLearn moreDemand SensingLearn moreDrive efficient and sustainable operations Drive efficient and sustainable operations Optimize your operationsAchieve end-to-end visibility of your logistics operations and supply chains.Google Cloud can help organizations optimize their supply chains and logistics operations, delivering cost and carbon efficiency.“To scale quickly, we adopted Last Mile Fleet Solution and Cloud Fleet Routing which enable our drivers and fleet managers to maintain peak efficiency and go beyond our 98% on-time, first-time delivery rates.”Oliver Colinet, Chief Technology & Product Officer, PaackExplore our offerings:Cloud Fleet Routing and Last Mile Fleet Solution Learn moreGoogle Earth EngineLearn moreDocument AILearn moreDeliver resiliency and reliabilityDeliver resiliency and reliabilityPredict problems before they ariseAnalyze, anticipate, and communicate issues before they happen.Google Cloud empowers supply chain and logistics professionals to solve problems in real time by removing bottlenecks and resolving anomalies by ingesting large datasets with AI/ML. “Google Cloud Platform is the core of our maintenance system for all our trains. We have no doubt that we will continue putting our trust in it as we develop our use of machine learning, artificial vision, and Cloud IoT Core. For us, this is a collaboration.”José Antonio Marcos, Chief Maintenance Engineer, TalgoExplore our offerings:Predictive maintenance Learn moreVisual Inspection AILearn moreContact Center AILearn moreEmpower collaboration across your supply chain and teamsEmpower collaboration across teamsEnable teams to achieve moreTransform how teams connect, create, collaborate, and analyze data. Increase efficiency and deliver impact from anywhere with reports, dashboards, and hybrid collaboration.“We are in the process of transforming all our tools and integrating new solutions to be able to work faster and more efficiently. 
Two main IT priorities are mobility and openness, in terms of working from anywhere and being able to interact with other information systems.”Communication Manager, FM LogisticExplore our offerings:Analytics HubLearn moreLookerLearn moreGoogle WorkspaceLearn moreHow Data is Driving Resilient and Sustainable Supply ChainsDiscover the three main processes companies need to address and learn why data and AI are critical to transforming the supply chain.Get the ebookLearn how we are helping improve supply chain and logistics across other industriesRetailLearn how Google Cloud is helping retailers ensure shoppers have what they want in stock.ManufacturingRead how Google Cloud is helping manufacturers strengthen supply chain and logistics operations.Life sciencesDiscover how Google Cloud is advancing research at scale and empowering healthcare innovation.Consumer packaged goodsRead how Google Cloud is helping brands capture new routes to market and drive connected operations.Public sectorLearn how Google Cloud is helping the public sector optimize their supply chain and logistics.View MoreLeading companies trust Google CloudDiscover why many of the world’s leading companies are choosing Google Cloud to help transform their supply chain and logistics operations.See all customersRecommended partnersOur industry partners can help you solve your business challenges and unlock growth opportunities with painless implementations and integrated out-of-the-box or custom solutions.See all partnersDiscover insights. Find solutions.See how you can transform your supply chain and logistics operations with Google Cloud.Talk with an expertWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleSee all industry solutionsContinue browsingGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Sustainability.txt b/Sustainability.txt new file mode 100644 index 0000000000000000000000000000000000000000..5ad798fe4ffecb5abe1e10c1c6955cf4b4ed480f --- /dev/null +++ b/Sustainability.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/sustainability +Date Scraped: 2025-02-23T11:44:14.851Z + +Content: +Home Docs Cloud Architecture Center Send feedback Design for environmental sustainability Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-27 UTC This document in the Google Cloud Architecture Framework summarizes how you can approach environmental sustainability for your workloads in Google Cloud. It includes information about how to minimize your carbon footprint on Google Cloud. Understand your carbon footprint To understand the carbon footprint from your Google Cloud usage, use the Carbon Footprint dashboard. The Carbon Footprint dashboard attributes emissions to the Google Cloud projects that you own and the cloud services that you use. Choose the most suitable cloud regions One effective way to reduce carbon emissions is to choose cloud regions with lower carbon emissions. To help you make this choice, Google publishes carbon data for all Google Cloud regions. When you choose a region, you might need to balance lowering emissions with other requirements, such as pricing and network latency. To help select a region, use the Google Cloud Region Picker. Choose the most suitable cloud services To help reduce your existing carbon footprint, consider migrating your on-premises VM workloads to Compute Engine. Consider serverless options for workloads that don't need VMs. 
These managed services often optimize resource usage automatically, reducing costs and carbon footprint. Minimize idle cloud resources Idle resources incur unnecessary costs and emissions. Some common causes of idle resources include the following: Unused active cloud resources, such as idle VM instances. Over-provisioned resources, such as VM machine types that are larger than necessary for a workload. Non-optimal architectures, such as lift-and-shift migrations that aren't always optimized for efficiency. Consider making incremental improvements to these architectures. The following are some general strategies to help minimize wasted cloud resources: Identify idle or overprovisioned resources and either delete them or rightsize them. Refactor your architecture to incorporate a more optimal design. Migrate workloads to managed services. Reduce emissions for batch workloads Run batch workloads in regions with lower carbon emissions. For further reductions, run workloads at times that coincide with lower grid carbon intensity when possible. What's next Learn how to use Carbon Footprint data to measure, report, and reduce your cloud carbon emissions. Send feedback \ No newline at end of file diff --git a/Take_advantage_of_elasticity.txt b/Take_advantage_of_elasticity.txt new file mode 100644 index 0000000000000000000000000000000000000000..91d13731442c27026194d7e176d4ac136c044aea --- /dev/null +++ b/Take_advantage_of_elasticity.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/performance-optimization/elasticity +Date Scraped: 2025-02-23T11:44:05.367Z + +Content: +Home Docs Cloud Architecture Center Send feedback Take advantage of elasticity Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-06 UTC This principle in the performance optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you incorporate elasticity, which is the ability to adjust resources dynamically based on changes in workload requirements. Elasticity allows different components of a system to scale independently. This targeted scaling can help improve performance and cost efficiency by allocating resources precisely where they're needed, without overprovisioning or underprovisioning your resources. Principle overview The performance requirements of a system directly influence when and how the system scales vertically or scales horizontally. You need to evaluate the system's capacity and determine the load that the system is expected to handle at baseline. Then, you need to determine how you want the system to respond to increases and decreases in the load. When the load increases, the system must scale out horizontally, scale up vertically, or both. For horizontal scaling, add replica nodes to ensure that the system has sufficient overall capacity to fulfill the increased demand. For vertical scaling, replace the application's existing components with components that provide more compute capacity, memory, and storage. When the load decreases, the system must scale down (horizontally, vertically, or both). Define the circumstances in which the system scales up or scales down. Plan to manually scale up systems for known periods of high traffic. Use tools like autoscaling, which responds to increases or decreases in the load. Recommendations To take advantage of elasticity, consider the recommendations in the following sections.
Plan for peak load periods You need to plan an efficient scaling path for known events, such as expected periods of increased customer demand. Consider scaling up your system ahead of known periods of high traffic. For example, if you're a retail organization, you expect demand to increase during seasonal sales. We recommend that you manually scale up or scale out your systems before those sales to ensure that your system can immediately handle the increased load or immediately adjust existing limits. Otherwise, the system might take several minutes to add resources in response to real-time changes. Your application's capacity might not increase quickly enough and cause some users to experience delays. For unknown or unexpected events, such as a sudden surge in demand or traffic, you can use autoscaling features to trigger elastic scaling that's based on metrics. These metrics can include CPU utilization, load balancer serving capacity, latency, and even custom metrics that you define in Cloud Monitoring. For example, consider an application that runs on a Compute Engine managed instance group (MIG). This application has a requirement that each instance performs optimally until the average CPU utilization reaches 75%. In this example, you might define an autoscaling policy that creates more instances when the CPU utilization reaches the threshold. These newly-created instances help absorb the load, which helps ensure that the average CPU utilization remains at an optimal rate until the maximum number of instances that you've configured for the MIG is reached. When the demand decreases, the autoscaling policy removes the instances that are no longer needed. Plan resource slot reservations in BigQuery or adjust the limits for autoscaling configurations in Spanner by using the managed autoscaler. Use predictive scaling If your system components include Compute Engine, you must evaluate whether predictive autoscaling is suitable for your workload. Predictive autoscaling forecasts the future load based on your metrics' historical trends—for example, CPU utilization. Forecasts are recomputed every few minutes, so the autoscaler rapidly adapts its forecast to very recent changes in load. Without predictive autoscaling, an autoscaler can only scale a group reactively, based on observed real-time changes in load. Predictive autoscaling works with both real-time data and historical data to respond to both the current and the forecasted load. Implement serverless architectures Consider implementing a serverless architecture with serverless services that are inherently elastic, such as the following: Cloud Run Cloud Run functions BigQuery Spanner Eventarc Workflows Pub/Sub Unlike autoscaling in other services that require fine-tuning rules (for example, Compute Engine), serverless autoscaling is instant and can scale down to zero resources. Use Autopilot mode for Kubernetes For complex applications that require greater control over Kubernetes, consider Autopilot mode in Google Kubernetes Engine (GKE). Autopilot mode provides automation and scalability by default. GKE automatically scales nodes and resources based on traffic. GKE manages nodes, creates new nodes for your applications, and configures automatic upgrades and repairs. 
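To make the MIG autoscaling example described earlier in this section concrete, the following is a minimal Terraform sketch of an autoscaler with a 75% CPU utilization target and predictive autoscaling enabled. It assumes the Terraform google provider is configured and that a regional managed instance group (here named web_mig) is already defined elsewhere; the resource names, replica counts, and region are illustrative assumptions, not settings prescribed by this principle.

# Illustrative sketch only: attaches an autoscaling policy to an assumed
# regional MIG named "web_mig". Names, region, and limits are examples.
resource "google_compute_region_autoscaler" "web" {
  name   = "web-autoscaler"
  region = "us-central1"
  target = google_compute_region_instance_group_manager.web_mig.id

  autoscaling_policy {
    min_replicas    = 2
    max_replicas    = 10   # upper bound that the autoscaler never exceeds
    cooldown_period = 60   # seconds to let new instances initialize before measuring them

    cpu_utilization {
      target            = 0.75                     # add instances when average CPU exceeds 75%
      predictive_method = "OPTIMIZE_AVAILABILITY"  # forecast load from historical CPU trends
    }
  }
}

A similar policy can be defined for a zonal MIG with google_compute_autoscaler. The serverless services listed in this section don't need a policy like this, because they scale automatically, including down to zero.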
Previous arrow_back Plan resource allocation Next Promote modular design arrow_forward Send feedback \ No newline at end of file diff --git a/Take_advantage_of_horizontal_scalability.txt b/Take_advantage_of_horizontal_scalability.txt new file mode 100644 index 0000000000000000000000000000000000000000..b1a966659e41280fb81ae681bbea81619fc446cc --- /dev/null +++ b/Take_advantage_of_horizontal_scalability.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/reliability/horizontal-scalability +Date Scraped: 2025-02-23T11:43:27.862Z + +Content: +Home Docs Cloud Architecture Center Send feedback Take advantage of horizontal scalability Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-12-30 UTC This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you use horizontal scalability. By using horizontal scalability, you can help ensure that your workloads in Google Cloud can scale efficiently and maintain performance. This principle is relevant to the scoping focus area of reliability. Principle overview Re-architect your system to a horizontal architecture. To accommodate growth in traffic or data, you can add more resources. You can also remove resources when they're not in use. To understand the value of horizontal scaling, consider the limitations of vertical scaling. A common scenario for vertical scaling is to use a MySQL database as the primary database with critical data. As database usage increases, more RAM and CPU is required. Eventually, the database reaches the memory limit on the host machine, and needs to be upgraded. This process might need to be repeated several times. The problem is that there are hard limits on how much a database can grow. VM sizes are not unlimited. The database can reach a point when it's no longer possible to add more resources. Even if resources were unlimited, a large VM can become a single point of failure. Any problem with the primary database VM can cause error responses or cause a system-wide outage that affects all users. Avoid single points of failure, as described in Build highly available systems through redundant resources. Besides these scaling limits, vertical scaling tends to be more expensive. The cost can increase exponentially as machines with greater amounts of compute power and memory are acquired. Horizontal scaling, by contrast, can cost less. The potential for horizontal scaling is virtually unlimited in a system that's designed to scale. Recommendations To transition from a single VM architecture to a horizontal multiple-machine architecture, you need to plan carefully and use the right tools. To help you achieve horizontal scaling, consider the recommendations in the following subsections. Use managed services Managed services remove the need to manually manage horizontal scaling. For example, with Compute Engine managed instance groups (MIGs), you can add or remove VMs to scale your application horizontally. For containerized applications, Cloud Run is a serverless platform that can automatically scale your stateless containers based on incoming traffic. Promote modular design Modular components and clear interfaces help you scale individual components as needed, instead of scaling the entire application. For more information, see Promote modular design in the performance optimization pillar. Implement a stateless design Design applications to be stateless, meaning no locally stored data. 
This lets you add or remove instances without worrying about data consistency. Previous arrow_back Build high availability through redundancy Next Detect potential failures by using observability arrow_forward Send feedback \ No newline at end of file diff --git a/Telecom_Data_Fabric.txt b/Telecom_Data_Fabric.txt new file mode 100644 index 0000000000000000000000000000000000000000..757ee8492cb38e19ba6b8310be0d967117e5ef7e --- /dev/null +++ b/Telecom_Data_Fabric.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/telecom-data-fabric +Date Scraped: 2025-02-23T12:05:07.982Z + +Content: +Find out why cloud-native networks are a must for communication service providers to meet the needs of the future. Download study.Jump to Telecom Data FabricTelecom Data FabricAccelerate telecom data management and analytics with an automated approach, leveraging multiple Google Cloud data and AI/ML products. Democratize data and governance, improving the ability of communication service providers and ISVs to innovate, build analytical applications, and drive automation.Go to consoleQuickly ingest and normalize telecom dataTransform to reusable open data modelsAccelerate the use of data without compromising data quality, privacy, and securityBLOGIntroducing Telecom Data FabricKey featuresKey featuresBuild unified experiences from Edge to CloudEliminate the collection of disparate data coming from a multitude of products by leveraging a common control plane. Simplify data migration to cloud and quickly curate and operationalize data with a centralized data governance model from the telecom network.Create faster paths to high-value dataNormalize and correlate data across the multi-vendor deployments with pre-built data adapters. Harmonize data into a set of unified reusable models to simplify consumption across your stakeholders.Accelerate your ecosystems with AIEnable your developer ecosystem to leverage reusable high value data while applying AI to create more predictive engines. The data mesh architecture enables highly flexible deployment models adapted to telecom network organizations.Automate operations with control loopsCreate new telecom network deployments to be automated and highly configurable. Control loops can be easily integrated into network elements and new standards, network operations centers, customer care, incident management, OSS/BSS, and more to create the autonomous network.What's newTelecom Data Fabric and related topicsBlog postFind out how telecom providers are leading the way on automating dataRead the blogBlog post"Act like a real digital company..." Why Vodafone is all-in on the data cloudRead the blogReportVisit our telecommunications industry pageLearn moreUse casesUse casesUse caseArchitectural view Google Cloud's Telecom Data Fabric provides faster data accessibility using a set of APIs and data management to quickly curate the data and centralize governance to preserve data sovereignty and privacy.View all technical guidesPricingPricingTelecom Data Fabric is currently offered in Private preview. 
For pricing or more information, please contact Google Cloud sales in your area or email us at telecom_data_fabric@google.com.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Telecom_Network_Automation.txt b/Telecom_Network_Automation.txt new file mode 100644 index 0000000000000000000000000000000000000000..72a088bfc5769803c4f91297b297079fc6476a3d --- /dev/null +++ b/Telecom_Network_Automation.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/telecom-network-automation +Date Scraped: 2025-02-23T12:05:06.461Z + +Content: +Find out why cloud-native networks are a must for communication service providers to meet the needs of the future. Download study.Telecom Network AutomationCloud-native automation for Telecom networksTelecom Network Automation is based on open source Nephio. It delivers simple, carrier grade, cloud-native automation. It supports the creation of new intent-driven networks.Go to console Contact salesProduct highlightsQuickly deploy cloud and edge telecom network infrastructureReady to use cloud-native automation for telecom networksAutomate the life cycle of Core and RAN multi-vendor network functions with SDKIntroduction to Telecom Network AutomationFeaturesToil-free deployment using intent-driven and declarative configurationsAccelerate cloud-native infrastructure configuration and network function deployments with simple intent-driven and declarative automation blueprints. Focus on the desired outcome rather than complex setup and management processes.Kubernetes consistency at telecom scaleSeamlessly manage telecom networks from RAN to Edge to Core to Cloud with a unified Kubernetes automation framework—no need for additional legacy out-of-band automation software. Using the Kubernetes Resource Model (KRM), define structured Kubernetes Custom Resources (CRD) and leverage Operators to reconcile them on the live network simplifying deployment and Day 2 operations. Avoid lock-in with open source NephioAvoid lock-in challenges or complex multi-vendor integrations by embracing openness through Nephio automation framework. The KRM Custom Resources packaging best practices and Kubernetes Operator framework are defined by the Nephio community to match the Telecom specific networking requirements and simplify interoperability.Single source of truth with GitOps version controlSimplify how you apply and manage the various infrastructure and network function configurations. Configuration-as-Data (CaD) is a methodology for configuration management that rigorously enforces well-structured declarative configurations and separates those configurations from the code that operates on them. It’s simpler to understand and manipulate. The configurations are also anchored in a robust GitOps-based configuration management enabling users to author, review, and publish configuration packages.Automate now—use Google's readily available blueprintsUse readily available blueprints by Google to automate deployment and operations of Google Cloud infrastructure at scale: Google Distributed Cloud Edge (GDCE) and Google Kubernetes Engine. 
Google blueprints automates the GDCE networking fabric configuration, Kubernetes cluster(s) deployment, and their customization to fit the network function requirements (SR-IOV, Hugepages, and more), avoiding possible misconfigurations and rollbacks. Use our partner artifacts/blueprints or systems to deploy, operate, and configure network functions.View all featuresHow It WorksTelecom Network Automation offers a cloud-based control plane that automates the deployment and operations (Day 0, 1, and 2) of cloud infrastructure (GDCE and GKE clusters), adaptation of these clusters to meet the network function requirements, and the configuration of networking fabric and the network function workloads.Learn moreTelecom Network Automation ExplainerCommon UsesCloud-native telecom networks automationAccelerate hybrid cloud infrastructure deployment at telecom scaleConfigure the edge networking fabric and use Google automation blueprints to deploy Kubernetes clusters on Google Distributed Cloud Edge and/or GKE. Telecom Network Automation automatically customizes these clusters to match vendor network functions specific requirements (for example, SR-IOV) and avoid rollbacks. Day 2 cluster(s) operations and configuration management are simplified through GitOps, CaD, and control-loops. Contact a Google Cloud sales specialistCluster deployment demo videoDigitally transforming CSP's from IT to networks using an AI based open source approachReimagining Radio Access Networks with Google CloudLearning resourcesAccelerate hybrid cloud infrastructure deployment at telecom scaleConfigure the edge networking fabric and use Google automation blueprints to deploy Kubernetes clusters on Google Distributed Cloud Edge and/or GKE. Telecom Network Automation automatically customizes these clusters to match vendor network functions specific requirements (for example, SR-IOV) and avoid rollbacks. Day 2 cluster(s) operations and configuration management are simplified through GitOps, CaD, and control-loops. Contact a Google Cloud sales specialistCluster deployment demo videoDigitally transforming CSP's from IT to networks using an AI based open source approachReimagining Radio Access Networks with Google CloudAutomate 5G RAN to Core CNF life cycle managementDeploy multi-vendor 5G Core/RAN network functions (workloads) on Google Distributed Cloud Edge and/or GKE using Telecom Network Automation/Nephio SDK. Provision network function configuration using KRM resources for zero touch provisioning (ZTP). Supports Day 2 operations for network functions (NFs) with simplified operations through robust GitOps, CaD, and control-loops.Contact a Google Cloud sales specialist Nephio tutorial on free 5G Core Network Functions operatorLearning resourcesAutomate 5G RAN to Core CNF life cycle managementDeploy multi-vendor 5G Core/RAN network functions (workloads) on Google Distributed Cloud Edge and/or GKE using Telecom Network Automation/Nephio SDK. Provision network function configuration using KRM resources for zero touch provisioning (ZTP). 
Supports Day 2 operations for network functions (NFs) with simplified operations through robust GitOps, CaD, and control-loops.Contact a Google Cloud sales specialist Nephio tutorial on free 5G Core Network Functions operatorPricingHow Telecom Network Automation pricing worksSimple, the pricing for Telecom Network Automation is pay-as-you-go (PAYG) based on component automated vCPU per hour.SERVICES AND USAGEDESCRIPTIONPRICE (USD)Infrastructure automationGDC Edge cluster automationNumber of Google Distributed Cloud Edge cluster(s) vCPU(s) automated by Telecom Network AutomationContact sales per vCPU per hourGKE cluster automationNumber of GKE cluster(s) vCPU(s) automated by Telecom Network AutomationContact sales per vCPU per hourHow Telecom Network Automation pricing worksSimple, the pricing for Telecom Network Automation is pay-as-you-go (PAYG) based on component automated vCPU per hour.Infrastructure automationDESCRIPTIONGDC Edge cluster automationNumber of Google Distributed Cloud Edge cluster(s) vCPU(s) automated by Telecom Network AutomationPRICE (USD)Contact sales per vCPU per hourGKE cluster automationNumber of GKE cluster(s) vCPU(s) automated by Telecom Network AutomationDESCRIPTIONContact sales per vCPU per hourDiscover more about TNAIntroduction to Telecom Network Automation blogRead blogLearn more about the Telecom portfolioGoogle Cloud for TelecommunicationsDiscover moreAutomate your network todayTalk to a Google Cloud sales specialistBegin nowTransform your network with NephioLearn about NephioDownload Telecom Network Automation solution briefDownload the briefLearn more about LFN open source project Nephio R1Nephio playlistCloud-native automation: The transformation of CSP networksDownload studyPartners & IntegrationTelecom Network Automation partnersPartnersGoogle Cloud is a founding member of the open source Nephio community. This rapidly growing LFN community now counts more than 70+ telecom participating organizations, including CSPs, cloud infrastructure providers, network function vendors, and system integrators.Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Telecom_Subscriber_Insights.txt b/Telecom_Subscriber_Insights.txt new file mode 100644 index 0000000000000000000000000000000000000000..324d5372c950549f38d0235f47fca36001bdb329 --- /dev/null +++ b/Telecom_Subscriber_Insights.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/telecom-subscriber-insights +Date Scraped: 2025-02-23T12:05:10.917Z + +Content: +Find out why cloud-native networks are a must for communication service providers to meet the needs of the future. Download study.Jump to Telecom Subscriber InsightsTelecom Subscriber InsightsThis product is geared to improve subscriber acquisition and retention for communication service providers. 
It ingests data from various sources, creates predictive models, uses AI to recommend offers, and presents offers to the subscribers' devices for activation.Go to consoleQuick ingestion and normalization of dataPredictive telecom data models driven by AIContextual offers with a multitude of activation channelsImproved data efficiency3:02Learn about Telecom Subscriber Insights BenefitsIncrease subscriber engagementImprove net promoter score by creating a holistic view of your subscribers.Increase ARPU with upsell and cross-sellIdentify subscriber segments that are most receptive to defined offerings.Retain subscribersIncrease prepaid to postpaid conversions and device upgrades.Key featuresKey featuresSimplified data ingestion and normalizationPre-built data adapters and tooling enable you to maximize the number of data sources. Targeted reach with activation using simplified onboarding and providing SDK for in application experience.Subscriber acquisition modelingTargeted reach with activation using simplified onboarding and providing SDK for in application experience.Predictive intelligencePredictive modeling enabled by AI refines recommendations and identifies subscribers with high risk of churn.What's newTelecom Subscriber Insights and related topicsBlog postIntroducing Telecom Subscriber InsightsRead the blogVideoVMO2 building smart Customer Data Platforms with Google Cloud and ZeotapWatch videoBlog postOrange: Three unexpected lessons about AI in businessRead the blogReportCheck out our telecommunications industry pageLearn moreDocumentationTelecom Subscriber Insights documentationQuickstartProduct overviewLearn more about how Telecom Subscriber Insights enables communication service providers to extract information to recommend actions to telecom customers.Learn moreQuickstartProduct videosExplore the different Telecom Subscriber Insights use cases including how to fast track your Google Cloud migration and provide customers with new services Learn moreNot seeing what you’re looking for?View all product documentationUse casesUse casesUse caseSubscriber life cycleHelping CSPs use data and insights to drive greater engagement and sales across key subscriber moments.Use caseArchitectural viewBuilding on Google Cloud's data and AI expertise.View all technical guidesPricingPricingFor pricing or more information, please contact Google Cloud sales in your area or email us at telecom_subscriber_insights@google.com View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Telecommunications.txt b/Telecommunications.txt new file mode 100644 index 0000000000000000000000000000000000000000..31e9e5a17a7185cc4ad2411d3545a66c0088e6d6 --- /dev/null +++ b/Telecommunications.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/telecommunications +Date Scraped: 2025-02-23T11:57:55.034Z + +Content: +Hear from our customers, starting with TELUS, how we are delivering the AI-driven telecom. Watch now.Google Cloud for TelecommunicationsAt Google Cloud, we’re partnering with communication service providers around the world to deliver the AI-driven telecom. By harnessing intelligent data, advancing cloud-native networks, and unlocking new monetization models, we are enabling the future of telecom. 
We are proud of our growing ecosystem that helps drive these innovations forward.Learn more from industry expertsContact sales10:38Discover how Vodafone and Google Cloud harnessed the power of AI to slash tool usage by halfTransform to the AI-driven telecomCreate actionable insights through data intelligence.The ROI of gen AI in telecommunicationsReportEmbracing the AI-native network transformation for Communications Service ProvidersWhitepaperDownload whitepaperWhitepaperLearn about Vertex AIDemoModernizing telecommunications and keeping the UK connectedBlogGoogle Cloud Contact Center AI reimagines the customer and agent experienceBlogEVO2CLOUD - Vodafone’s SAP migration from on-prem to Google CloudBlogView MoreEvolve 5G networksDiscover how to build, deploy, and operate cloud-native networks.Download nowWhitepaperLearn about how Telenet teamed with Google Cloud to activate a cloud-native approach to their 5G networkVideoDownload nowWhitepaperAchieving cloud-native network automation at a global scale with NephioBlogDownload nowStudyView MoreExpand monetization modelsDevelop the 5G and edge ecosystem. Indosat Ooredoo Hutchison, Kloudville, and Google Cloud launch new B2B2X Digital Marketplace for Indonesia’s 64 million small enterprisesWhitepaperBT and Google Cloud advance cybersecurity with new partnershipPress releaseThe power of Google AI comes to the new Samsung Galaxy S24 seriesBlogαU(Alpha-U) and Google's Gemini model to provide Generative AI servicesVideoDownload nowStudyDelivering value at the edge: The CSP opportunityWebinarDownload nowWhitepaperHow 5G and Cloud will change every industry, including yoursBlogDownload nowStudyView MoreBuild a more sustainable networkCreate a sustainable future and a healthier planet.AI queried contextual data for selling and sustainabilityVideoCSPs: The power of the energy data cloudBlogCreating green and efficient radio access networks using AIVideoGoogle Cloud sustainabilityWebpageView MoreFeatured topicsJoin the conversation with our analysts, customers, partners, and experts.Find your balance between energy efficiency and network performanceReportSix key criteria to accelerate 5G networksebookTelcos and Hyperscalers: Together on the EdgeWebinarGenerative AI for Contact Centers: Reimagining CX with Google CloudWebinarDelivering automated cloud-native networks with Google CloudWebinarCreating the data giga plant with Vodafone and Google CloudWebinarView MoreAdditional resourcesMake sure to check out the Innovators for Telecommunications YouTube channel, AI-Driven Telecom YouTube channel, and the Demonstrations for Telecommunications YouTube channel. 
Google Cloud Innovators in Telecommunications YouTube channelGoogle Cloud AI-Driven Telecom YouTube ChannelGoogle Cloud Demonstrations for Telecommunications YouTube channelGoogle Cloud Telecommunications blogInsights into the US telecommunications industryView MoreCustomersView the latest customer videos, stories, blog posts, and press releases to learn how CSPs are using Google Cloud to meet their business needs across IT, network, and edge.Partners for telecom transformationOur trusted partners work closely with us to help communication service providers transform their IT and network systems, migrating applications and network functions to Google Cloud.Products and solutionsGoogle Distributed CloudExtend Google Cloud’s infrastructure and services to the edge and your data centers.Explore moreGoogle WorkspaceGrow and run your business more efficiently.Explore moreContact Center AIDelight your customers with human-like AI-powered contact center experiences, lower costs, and free up your human agents' time. Contact Center AI enables you to do just that.Explore moreApigee API ManagementBuild, manage, and secure APIs—for any use case, environment, or scale. Google Cloud's native API management to operate your APIs with enhanced scale, security, and automation.Explore moreGeminiFurther your AI-driven telecom by using a highly advanced AI model developed by Google that excels in various tasks like text generation, translation, coding, and more.Explore moreBigQueryExtract the most value possible from your data using a solution designed for multiple data engines, formats, and cloud environments.Explore moreVertex AIConstruct and use generative AI—from AI solutions such as Gemini, to Search and Conversation, to 130+ foundation models, to a unified AI platform.Explore moreLookerMake data-driven decisions by leveraging the most intelligent BI solution. Explore moreTelecommunications industry blogNews and updates on Google Cloud products and services for the telecommunications industry.Read nowGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Text-to-Speech.txt b/Text-to-Speech.txt new file mode 100644 index 0000000000000000000000000000000000000000..fe2fd8eb5c7cffe9437f84207ae152e37acc9543 --- /dev/null +++ b/Text-to-Speech.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/text-to-speech +Date Scraped: 2025-02-23T12:02:13.089Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performance.Jump to Text-to-SpeechText-to-Speech AIConvert text into natural-sounding speech using an API powered by the best of Google’s AI technologies.New customers get up to $300 in free credits to try Text-to-Speech and other Google Cloud products.Try it in consoleContact sales Improve customer interactions with intelligent, lifelike responsesEngage users with voice user interface in your devices and applicationsPersonalize your communication based on user preference of voice and languageLearn how to create synthetic speech using Text-to-Speech APIStart self-paced labBenefitsHigh fidelity speechDeploy Google’s groundbreaking technologies to generate speech with humanlike intonation. Built based on DeepMind’s speech synthesis expertise, the API delivers voices that are near human quality.Widest voice selectionChoose from a set of 380+ voices across 50+ languages and variants, including Mandarin, Hindi, Spanish, Arabic, Russian, and more. 
Pick the voice that works best for your user and application.One-of-a-kind voiceCreate a unique voice to represent your brand across all your customer touchpoints, instead of using a common voice shared with other organizations.DemoPut Text-to-Speech into actionType what you want, select a language then click “Speak It” to hear.sparkLooking to build a solution?I want to add audio versions of blog posts automatically on my website’s blog I want to take in audio in one language and generate a translation in another language, in real timeI want to have Electronic Program Guides (EPGs) read text aloud to provide a better user experience and meet accessibility requirementsMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionAutomate blog post audioprompt_suggestionTranslate real time audioprompt_suggestionCreate program guidesKey featuresKey featuresChirp HD voices (Preview)Build engaging agents using the latest spontaneous conversational voices based on AudioLM. These voices offer high-quality audio, low-latency streaming, and natural-sounding speech, incorporating human disfluencies and accurate intonation.Studio voicesDazzle your listeners with professionally narrated content recorded in a studio-quality environment. Make sure to put your headphones on.You can now generate dialogues with multiple speakers to create your most interactive scenarios.Neural2 voicesInternationalize your voice experience with ready to use voices powered by the latest research behind Custom Voice.Custom Voice Train a custom voice model using your own audio recordings to create a unique and more natural sounding voice for your organization. You can define and choose the voice profile that suits your organization and quickly adjust to changes in voice needs without needing to record new phrases.Text and SSML supportCustomize your speech with SSML tags that allow you to add pauses, numbers, date and time formatting, and other pronunciation instructions.View all featuresWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postGoogle Cloud Text-to-Speech API now supports custom voicesRead the blogVideoHow to convert PDFs to audiobooks with machine learningWatch videoBlog postConversational AI drives better customer experiencesRead the blogVideoSolving for accessible phone calls with Speech-to-Text and Text-to-SpeechWatch videoBlog postNew voices and languages for Text-to-SpeechRead the blogDocumentationDocumentationGoogle Cloud BasicsText-to-Speech basicsA guide to the fundamental concepts of using the Text-to-Speech API.Learn moreQuickstartQuickstart: Using the command lineSet up your Google Cloud project and authorization and make a request for Text-to-Speech to create audio from text.Learn moreGoogle Cloud BasicsSupported voices and languagesBrowse guides and resources for this product.Learn moreGoogle Cloud BasicsCustom Voice (beta) overviewLearn how you can create a unique and more natural-sounding voice with Custom Voice using your own studio-quality audio recordings.Learn moreTutorialWaveNet and other synthetic voicesLearn about the different synthetic voices available for use in Text-to-Speech, including the premium WaveNet voices.Learn moreTutorialSpeaking addresses with SSMLThis tutorial demonstrates how to use Speech Synthesis Markup Language (SSML) to speak a text file of addresses.Learn moreNot seeing what you’re looking for?View all product 
documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for Text-to-SpeechUse casesUse casesUse caseVoicebots in contact centersDeliver a better voice experience for customer service with voicebots on Dialogflow that dynamically generate speech, instead of playing static, pre-recorded audio. Engage with high-quality synthesized voices that give callers a sense of familiarity and personalization.Use caseVoice generation in devicesEnable natural communications with your users by empowering your devices to speak humanlike voices as a text reader. Build an end-to-end voice user interface together with Speech-to-Text and Natural Language to improve user experience with easy and engaging interactions.Use caseAccessible EPGs (Electronic Program Guides)Easily have the EPGs read text aloud to provide a better user experience to your customers and meet accessibility requirements for your services and applications. Try the EPG demo.Easily implement text-to-speech functionality in EPGs to provide a better user experience to your customers and meet accessibility requirements for your services and applications. View all technical guidesAll featuresAll featuresCustom Voice Train a custom speech synthesis model using your own audio recordings to create a unique and more natural-sounding voice for your organization. You can define and choose the voice profile that suits your organization and quickly adjust to changes in voice needs without needing to record new phrases. Learn more.Long audio synthesisAsynchronously synthesize up to 1 million bytes of input with Long Audio Synthesis.Voice and language selectionChoose from an extensive selection of 220+ voices across 40+ languages and variants, with more to come soon.WaveNet voicesTake advantage of 90+ WaveNet voices built based on DeepMind’s groundbreaking research to generate speech that significantly closes the gap with human performance.Text and SSML supportCustomize your speech with SSML tags that allow you to add pauses, numbers, date and time formatting, and other pronunciation instructions.Pitch tuningPersonalize the pitch of your selected voice, up to 20 semitones more or less than the default.Speaking rate tuningAdjust your speaking rate to be 4x faster or slower than the normal rate.Volume gain controlIncrease the volume of the output by up to 16db or decrease the volume up to -96db.Integrated REST and gRPC APIsEasily integrate with any application or device that can send a REST or gRPC request including phones, PCs, tablets, and IoT devices (for example cars, TVs, speakers).Audio format flexibilityConvert text to MP3, Linear16, OGG Opus, and a number of other audio formats.Audio profilesOptimize for the type of speaker from which your speech is intended to play, such as headphones or phone lines.PricingPricingText-to-Speech is priced based on the number of characters sent to the service to be synthesized into audio each month. The first 1 million characters for WaveNet voices are free each month. For Standard (non-WaveNet) voices, the first 4 million characters are free each month. 
After the free tier has been reached, Text-to-Speech is priced per 1 million characters of text processed.If you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.View pricing detailsTake the next stepTry Text-to-Speech in the console.Go to my consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Three-tier_web_app.txt b/Three-tier_web_app.txt new file mode 100644 index 0000000000000000000000000000000000000000..dc5ca8be555c787650b15c8eb8e85866227df892 --- /dev/null +++ b/Three-tier_web_app.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/application-development/three-tier-web-app +Date Scraped: 2025-02-23T11:48:43.171Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Three-tier web app Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-10-08 UTC This guide helps you understand and deploy the Three-tier web app Jump Start Solution, which demonstrates how to quickly deploy a multi-tier web application stack to Google Cloud. The three-tier web app solution deploys a task-tracker app in Google Cloud. The app has a web-based frontend and an API layer in the middle tier. The frontend and API layer are containerized apps that are deployed as serverless services. The backend is a SQL database. The solution also includes an in-memory cache to serve frequently accessed data. Each tier in this solution is independent. You can develop, update, and scale any tier without affecting the other tiers. This architecture enables efficient app development and delivery. This guide is intended for developers who have some background with deploying multi-tier app stacks. It assumes that you're familiar with basic cloud concepts, though not necessarily Google Cloud. Experience with Terraform is helpful. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Products used The solution uses the following Google Cloud products: Cloud Run: A fully managed service that lets you build and deploy serverless containerized apps. Google Cloud handles scaling and other infrastructure tasks so that you can focus on the business logic of your code. Cloud SQL: A fully managed MySQL or PostgreSQL database in Google Cloud. Memorystore for Redis: A service that provides application caching using a scalable, secure, and highly available in-memory service for Redis and Memcached. Virtual Private Cloud (VPC) network: A global virtual network that spans all Google Cloud regions and that lets you interconnect your cloud resources. For information about how these products are configured and how they interact, see the next section. Architecture The example app that the three-tier web app solution deploys is a task-tracking app for which the code already exists. The following diagram shows the architecture of the infrastructure that the solution deploys: The following subsections describe the request flow and the configuration of the Google Cloud resources that are shown in the diagram. 
Request flow The following is the request processing flow of the task-tracker app that this solution deploys. The steps in the flow are numbered as shown in the preceding architecture diagram. A web-based frontend receives requests from clients to the task-tracker app. The frontend is a Cloud Run service, which renders an HTML client in the user's browser. The frontend sends requests to an API layer, which is also deployed as a Cloud Run service. Data that is read frequently is cached in and served from a Memorystore for Redis instance. Requests that can't be served from the in-memory Redis cache are sent by the API layer to a Cloud SQL database. Resource configuration This section describes the configuration of the Cloud Run, Memorystore, Cloud SQL, and networking resources that the solution deploys. If you're familiar with the Terraform configuration language, you can change some of these settings, as described later in this guide. To view the configuration settings, click the following subsections: Cloud Run services Parameter Preconfigured setting Compute capacity per container instance 1 vCPU, 512-MiB memory Autoscaling range (number of container instances) Frontend: 0-8 API layer: 0-8 Memorystore for Redis instance Parameter Preconfigured setting Redis version Version 6.x Service tier Basic, no high availability (HA) Memory 1 GB Data encryption At rest: Google-owned and Google-managed encryption key In transit: Not encrypted Cloud SQL database Parameter Preconfigured setting Database version PostgreSQL 14 or MySQL 8.0 Machine type db-g1-small: 1 vCPU, 1.7-GB memory Availability Single zone Storage 10-GB SSD, with autoscaling enabled Networking resources The Cloud SQL instance is attached to a customer-created VPC network and has an internal IP address. Serverless VPC Access provides connectivity from the Cloud Run instance that hosts the API layer to the Cloud SQL instance. Requests from the Cloud Run service to the Cloud SQL instance use internal DNS and internal IP addresses. Response traffic also uses the internal network. In other words, traffic between the app and the database is not exposed to the internet. Also, traffic over Serverless VPC Access can have lower latency than traffic that traverses the internet. Connectivity between the Memorystore instance and the Cloud SQL database is through a direct-peering connection. Cost For an estimate of the cost of the Google Cloud resources that the three-tier web app solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Before you begin To deploy this solution, you first need a Google Cloud project and some IAM permissions. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. 
To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/artifactregistry.admin roles/cloudsql.admin roles/compute.networkAdmin roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/redis.admin roles/resourcemanager.projectIamAdmin roles/run.admin roles/servicenetworking.serviceAgent roles/serviceusage.serviceUsageViewer roles/vpcaccess.admin Deploy the solution To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. 
To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. In the Google Cloud Jump Start Solutions catalog, go to the Three-tier web app solution. Go to the Three-tier web app solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view and use the task-tracker app that this solution deploys, click more_vert Actions on the Solution deployments page, and then select View web app. The frontend web page of the task-tracker app is displayed in a new browser tab. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-google-three-tier-web-app/. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-google-three-tier-web-app/ Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! 
Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-three-tier-web-app/. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-google-three-tier-web-app/ directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # This is an example of the terraform.tfvars file. # The values in this file must match the variable types declared in variables.tf. # The values in this file override any defaults in variables.tf. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" # Google Cloud region where you want to deploy the solution # Example: us-central1 region = "REGION" # Google Cloud zone where you want to deploy the solution # Example: us-central1-a zone = "ZONE" For information about the values that you can assign to the required variables, see the following: project_id: Identifying projects region and zone: Available regions and zones Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-three-tier-web-app/. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-three-tier-web-app/. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. 
When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! The Terraform output also lists the frontend URL of the task-tracker app (endpoint) and the name of the Cloud SQL instance (sqlservername), as shown in the following example: endpoint = "https://three-tier-app-fe-pn4ngg7gnq-uc.a.run.app" sqlservername = "three-tier-app-db-75c2" To view and use the task-tracker app that the solution deployed, copy the endpoint URL from the previous step and open the URL in a browser. The frontend web page of the task-tracker app is displayed in a new browser tab. To view the Google Cloud resources that are deployed and their configuration, take an interactive tour. Start the tour When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Customize the solution This section provides information that Terraform developers can use to modify the three-tier web app solution in order to meet their own technical and business requirements. The guidance in this section is relevant only if you deploy the solution by using the Terraform CLI. Note: Changing the Terraform code for this solution requires familiarity with the Terraform configuration language. If you modify the Google-provided Terraform configuration, and then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. The Resource configuration section (earlier in this guide) lists the preconfigured parameters of the Google Cloud resources that the three-tier web app solution provisions. You can customize the solution by changing some parameters in the main.tf file. To customize the solution, complete the following steps in Cloud Shell: Make sure that the current working directory is $HOME/cloudshell_open/terraform-google-three-tier-web-app/. If it isn't, go to that directory. Open the main.tf file and make the required changes, as shown in the examples in the following table: Parameter Terraform code Cloud Run scaling Argument in the main.tf file: autoscaling.knative.dev/maxScale Code snippet resource "google_cloud_run_service" "api" { ... template { ... metadata { annotations = { "autoscaling.knative.dev/maxScale" = "COUNT" ... } } } } Redis version Argument in the main.tf file: redis_version Code snippet resource "google_redis_instance" "main" { ... redis_version = "VERSION" ... } Caution: The Google-provided Terraform configuration has been validated for Redis version 6.x. If you change the version, the deployed solution might not work as intended. Redis tier Argument in the main.tf file: tier Code snippet resource "google_redis_instance" "main" { ... tier = "TIER" ... } Redis memory Argument in the main.tf file: memory_size_gb Code snippet resource "google_redis_instance" "main" { ... memory_size_gb = SIZE ... } PostgreSQL or MySQL version Argument in the main.tf file: database_version Code snippet resource "google_sql_database_instance" "main" { ... 
database_version = "VERSION" ... ... } Caution: The Google-provided Terraform configuration has been validated for PostgreSQL version 14 and MySQL version 8.0. If you change the version, the deployed solution might not work as intended. Database machine type Argument in the main.tf file: settings.tier Code snippet resource "google_sql_database_instance" "main" { ... settings { tier = "MACHINE_TYPE" ... } ... } Database storage Argument in the main.tf file: settings.disk_size Code snippet resource "google_sql_database_instance" "main" { ... settings { ... ... disk_size = SIZE ... } ... } Validate and review the Terraform configuration. Provision the resources. Design recommendations This section provides recommendations for using the three-tier web app solution to develop an architecture that meets your requirements for security, reliability, cost, and performance. To view the design recommendations for each area, click the appropriate tab. security Security Design focus Recommendations Data encryption By default, Cloud Run encrypts data by using a Google-owned and Google-managed encryption key. To protect your containers by using a key that you control, you can use customer-managed encryption keys. For more information, see Using customer-managed encryption keys . By default, Memorystore uses Google-owned and Google-managed encryption keys to encrypt data at rest. To encrypt data by using a key that you control, you can use customer-managed encryption keys. For more information, see About customer-managed encryption keys (CMEK). You can enable encryption of in-transit data in Memorystore by using the Transport Layer Security (TLS) protocol. For more information, see About in-transit encryption. Software supply-chain security To ensure that only authorized container images are deployed to the Cloud Run services, you can use Binary Authorization. Access control The Cloud Run service that runs the API layer allows ingress from any source. For enhanced security, you can restrict ingress to allow traffic from only internal sources. For more information, see Restricting ingress for Cloud Run. To guard your application against unauthorized access, you can enable the AUTH feature in Memorystore, so that incoming client connections are authenticated. For more information, see About Redis AUTH. restore Reliability Design focus Recommendations App scaling The Cloud Run services in the solution are configured to autoscale the container instances horizontally based on the request load. Review and adjust the autoscaling parameters based on your requirements. For more information, see About container instance autoscaling. Request handling To improve the responsiveness of Cloud Run services that store client-specific state on container instances, you can use session affinity. Requests from the same client are routed to the same container instance, on a best-effort basis. For more information, see Setting session affinity (services). Data durability To protect your data against loss, you can use automated backups of the Cloud SQL database. For more information, see About Cloud SQL backups. Database high availability (HA) The Cloud SQL database in the solution is deployed in a single zone. For HA, you can use a multi-zone configuration. For more information, see About high availability. For more information about region-specific considerations, see Geography and regions. If database HA is a critical requirement, AlloyDB for PostgreSQL is an alternative Google Cloud service that you can consider. 
Database reliability The Cloud SQL instance in this solution uses the db-g1-small machine type, which uses a shared-core CPU. This machine type is designed to provide resources for a low-cost database that might be appropriate for test and development environments only. If you need production-grade reliability, consider using a machine type that provides more CPU and memory. A Cloud SQL instance that uses the db-g1-small machine type is not included in the Cloud SQL service level agreement (SLA). For more information about configurations that are excluded from the SLA, see Operational guidelines. Cache HA To help ensure HA for the in-memory cache layer in this solution, you can use the Standard Tier of Memorystore for Redis. The service creates read replicas for distributed read operations, and provides automatic failover. For more information, see Redis tier capabilities. payment Cost Design focus Recommendations Resource efficiency Cloud Run determines the number of requests that should be sent to a container instance based on CPU usage and memory usage. By increasing the maximum concurrency setting, you can reduce the number of container instances that Cloud Run needs to create, and therefore reduce cost. For more information, see Maximum concurrent requests per instance (services). The Cloud Run services in this solution are configured to allocate CPUs only during request processing. When a Cloud Run service finishes handling a request, the container instance's access to CPUs is disabled. For information about the cost and performance effect of this configuration, see CPU allocation (services). Resource usage If your app needs to handle requests globally, consider deploying the Cloud Run services to multiple regions. Cross-region deployment can help to reduce the cost of cross-continent data transfer traffic. Google recommends a cross-region deployment if you decide to use a load balancer and CDN. For more information, see Serve traffic from multiple regions. speed Performance Design focus Recommendations App startup time To reduce the performance effect of cold starts, you can configure the minimum number of Cloud Run container instances to a non-zero value. For more information, see General development tips for Cloud Run. Frontend response time If your app handles requests globally, to help ensure faster responses to client requests, consider deploying the Cloud Run services in multiple regions. You can use a global load balancer to route requests to the nearest region. For more information, see Serve traffic from multiple regions. Multi-region deployments can also help to reduce the volume of cross-continent egress traffic, and therefore reduce the cost of operating the app. Database performance For performance-sensitive applications, you can improve performance of Cloud SQL by using a larger machine type and by increasing the storage capacity. If database performance is a critical requirement, AlloyDB for PostgreSQL is an alternative Google Cloud service that you can consider. Cache performance To improve the performance experience for users of your app, you can increase the capacity of the Memorystore for Redis instance. At larger capacities, the network throughput is higher. For more information, see Memory management best practices. Note the following: Before you make any design changes, assess the cost impact and consider potential trade-offs with other features. You can assess the cost impact of design changes by using the Google Cloud Pricing Calculator. 
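As a concrete illustration of the scaling, concurrency, and cold-start recommendations above, the following sketch shows how they might map to the google_cloud_run_service resource in the main.tf file. The values are illustrative assumptions, not recommended settings; adjust them to your own load profile:

resource "google_cloud_run_service" "api" {
  ...
  template {
    metadata {
      annotations = {
        "autoscaling.knative.dev/minScale" = "1"    # keep one warm instance to reduce cold starts
        "autoscaling.knative.dev/maxScale" = "COUNT"
        ...
      }
    }
    spec {
      container_concurrency = 80                    # maximum concurrent requests per instance
      ...
    }
  }
}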
To implement design changes in the solution, you need expertise in Terraform coding and advanced knowledge of the Google Cloud services that are used in the solution. If you modify the Google-provided Terraform configuration and if you then experience errors, create issues in GitHub. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For more information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Delete the deployment When you no longer need the solution deployment, to avoid continued billing for the resources that you created, delete the deployment. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-google-three-tier-web-app/. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. 
If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Cannot assign requested address error When you run the terraform apply command, a cannot assign requested address error might occur, with a message like the following: Error: Error creating service account: Post "https://iam.googleapis.com/v1/projects/PROJECT_ID/serviceAccounts: dial tcp [2001:db8:ffff:ffff::5f]:443: connect: cannot assign requested address If this error occurs, run the terraform apply command again. 
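For the API not enabled error described earlier, you can also enable the missing API explicitly instead of waiting and retrying. The following sketch assumes that the gcloud CLI is authenticated and that PROJECT_ID is the project in which you're deploying the solution:

gcloud services enable compute.googleapis.com --project=PROJECT_ID
terraform apply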
Configuration error If any of the resource arguments have values that aren't supported, an error like the following occurs: Error: Error creating Instance: googleapi: Error 400: Provided Redis version is not supported: REDIS_5_X │ com.google.apps.framework.request.StatusException: generic::INVALID_ARGUMENT: Provided Redis version is not supported: REDIS_5_X Details: │ [ │ { │ "@type": "type.googleapis.com/google.rpc.BadRequest", │ "fieldViolations": [ │ { │ "description": "Invalid value: REDIS_5_X", │ "field": "instance.redis_version" │ } │ ] │ } │ ] │ │ with google_redis_instance.main, │ on main.tf line 96, in resource "google_redis_instance" "main": │ 96: resource "google_redis_instance" "main" { In this case, the intent was to use Redis version 5, but the value specified for the instance.redis_version argument (REDIS_5_X) in the main.tf file is not valid. The correct value is REDIS_5_0, as enumerated in the Memorystore REST API documentation. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. 
Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/ Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. 
GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next Review the following documentation to learn about architectural and operational best practices for the products that are used in this solution: Cloud Run: General development tips Memorystore for Redis: Memory management best practices Memorystore for Redis: General best practices Cloud SQL: General best practices Send feedback \ No newline at end of file diff --git a/Tiered_hybrid_pattern.txt b/Tiered_hybrid_pattern.txt new file mode 100644 index 0000000000000000000000000000000000000000..be06c63378afd372fc9a736c7ad0ac7cba5198bc --- /dev/null +++ b/Tiered_hybrid_pattern.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/tiered-hybrid-pattern +Date Scraped: 2025-02-23T11:49:59.906Z + +Content: +Home Docs Cloud Architecture Center Send feedback Tiered hybrid pattern Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC The architecture components of an application can be categorized as either frontend or backend. In some scenarios, these components can be hosted to operate from different computing environments. As part of the tiered hybrid architecture pattern, the computing environments are located in an on-premises private computing environment and in Google Cloud. Frontend application components are directly exposed to end users or devices. As a result, these applications are often performance sensitive. To develop new features and improvements, software updates can be frequent. Because frontend applications usually rely on backend applications to store and manage data—and possibly business logic and user input processing—they're often stateless or manage only limited volumes of data. To be accessible and usable, you can build your frontend applications with various frameworks and technologies. Some key factors for a successful frontend application include application performance, response speed, and browser compatibility. Backend application components usually focus on storing and managing data. In some architectures, business logic might be incorporated within the backend component. New releases of backend applications tend to be less frequent than releases for frontend applications. Backend applications have the following challenges to manage: Handling a large volume of requests Handling a large volume of data Securing data Maintaining current and updated data across all the system replicas The three-tier application architecture is one of the most popular implementations for building business web applications, like ecommerce websites containing different application components. This architecture contains the following tiers. Each tier operates independently, but they're closely linked and all function together. Web frontend and presentation tier Application tier Data access or backend tier Putting these layers into containers separates their technical needs, like scaling requirements, and helps to migrate them in a phased approach. Also, it lets you deploy them on platform-agnostic cloud services that can be portable across environments, use automated management, and scale with cloud managed platforms, like Cloud Run or Google Kubernetes Engine (GKE) Enterprise edition. Also, Google Cloud-managed databases like Cloud SQL help to provide the backend as the database layer. 
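To make the containerized deployment concrete: a frontend tier that's packaged as a container image can be released to Cloud Run with a single command. This is only a sketch; the service name, image path, and region are hypothetical and aren't part of the pattern documentation:

gcloud run deploy web-frontend \
    --image=us-central1-docker.pkg.dev/PROJECT_ID/web/frontend:latest \
    --region=us-central1 \
    --allow-unauthenticated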
Note: The implementation of this architecture and the definition of its components can vary depending on whether you separate the tiers into individual systems and layers or combine them. The tiered hybrid architecture pattern focuses on deploying existing frontend application components to the public cloud. In this pattern, you keep any existing backend application components in their private computing environment. Depending on the scale and the specific design of the application, you can migrate frontend application components on a case-by-case basis. For more information, see Migrate to Google Cloud. If you have an existing application with backend and frontend components hosted in your on-premises environment, consider the limits of your current architecture. For example, as your application scales and the demands on its performance and reliability increase, you should start evaluating whether parts of your application should be refactored or moved to a different, more suitable architecture. The tiered hybrid architecture pattern lets you shift some application workloads and components to the cloud before making a complete transition. It's also essential to consider the cost, time, and risk involved in such a migration. The following diagram shows a typical tiered hybrid architecture pattern. In the preceding diagram, client requests are sent to the application frontend that is hosted in Google Cloud. In turn, the application frontend sends data back to the on-premises environment where the application backend is hosted (ideally through an API gateway). With the tiered hybrid architecture pattern, you can take advantage of Google Cloud infrastructure and global services, as shown in the example architecture in the following diagram. The application frontend is reachable through Google Cloud, and autoscaling adds elasticity to the frontend so that it responds dynamically and efficiently to changes in demand without overprovisioning infrastructure. There are different architectures that you can use to build and run scalable web apps on Google Cloud. Each architecture has advantages and disadvantages for different requirements. For more information, watch Three ways to run scalable web apps on Google Cloud on YouTube. To learn more about different ways to modernize your ecommerce platform on Google Cloud, see How to build a digital commerce platform on Google Cloud. In the preceding diagram, the application frontend is hosted on Google Cloud to provide a multi-regional and globally optimized user experience that uses global load balancing, autoscaling, and DDoS protection through Google Cloud Armor. Over time, the number of applications that you deploy to the public cloud might increase to the point where you might consider moving backend application components to the public cloud. If you expect to serve heavy traffic, opting for cloud-managed services might help you save engineering effort when managing your own infrastructure. Consider this option unless constraints or requirements mandate hosting backend application components on-premises. For example, if your backend data is subject to regulatory restrictions, you probably need to keep that data on-premises. Where applicable and compliant, however, using Sensitive Data Protection capabilities like de-identification techniques can help you move that data when necessary. In the tiered hybrid architecture pattern, you can also use Google Distributed Cloud in some scenarios.
Distributed Cloud lets you run Google Kubernetes Engine clusters on dedicated hardware that's provided and maintained by Google and is separate from Google Cloud data center. To ensure that Distributed Cloud meets your current and future requirements, know the limitations of Distributed Cloud when compared to a conventional cloud-based GKE zone. Advantages Focusing on frontend applications first has several advantages including the following: Frontend components depend on backend resources and occasionally on other frontend components. Backend components don't depend on frontend components. Therefore, isolating and migrating frontend applications tends to be less complex than migrating backend applications. Because frontend applications often are stateless or don't manage data by themselves, they tend to be less challenging to migrate than backends. Frontend components can be optimized as part of the migration to use stateless architecture. For more information, watch How to port stateful web apps to Cloud Run on YouTube. Deploying existing or newly developed frontend applications to the public cloud offers several advantages: Many frontend applications are subject to frequent changes. Running these applications in the public cloud simplifies the setup of a continuous integration/continuous deployment (CI/CD) process. You can use CI/CD to send updates in an efficient and automated manner. For more information, see CI/CD on Google Cloud. Performance-sensitive frontends with varying traffic load can benefit substantially from the load balancing, multi-regional deployments, Cloud CDN caching, serverless, and autoscaling capabilities that a cloud-based deployment enables (ideally with stateless architecture). Adopting microservices with containers using a cloud-managed platform, like GKE, lets you use modern architectures like microfrontend, which extend microservices to the frontend components. Extending microservices is commonly used with frontends that involve multiple teams collaborating on the same application. That kind of team structure requires an iterative approach and continuous maintenance. Some of the advantages of using microfrontend are as follows: It can be made into independent microservices modules for development, testing, and deployment. It provides separation where individual development teams can select their preferred technologies and code. It can foster rapid cycles of development and deployment without affecting the rest of the frontend components that might be managed by other teams. Whether they're implementing user interfaces or APIs, or handling Internet of Things (IoT) data ingestion, frontend applications can benefit from the capabilities of cloud services like Firebase, Pub/Sub, Apigee, Cloud CDN, App Engine, or Cloud Run. Cloud-managed API proxies help to: Decouple the app-facing API from your backend services, like microservices. Shield apps from backend code changes. Support your existing API-driven frontend architectures, like backend for frontend (BFF), microfrontend, and others. Expose your APIs on Google Cloud or other environments by implementing API proxies on Apigee. You can also apply the tiered hybrid pattern in reverse, by deploying backends in the cloud while keeping frontends in private computing environments. Although it's less common, this approach is best applied when you're dealing with a heavyweight and monolithic frontend. 
In such cases, it might be easier to extract backend functionality iteratively, and to deploy these new backends in the cloud. The third part of this series discusses possible networking patterns to enable such an architecture. Apigee hybrid serves as a platform for building and managing API proxies in a hybrid deployment model. For more information, see Loosely coupled architecture, including tiered monolithic and microservices architectures. Best practices Use the information in this section as you plan for your tiered hybrid architecture. Best practices to reduce complexity When you're applying the tiered hybrid architecture pattern, consider the following best practices that can help reduce its overall deployment and operational complexity: Based on the assessment of the communication models of the identified applications, select the most efficient and effective communication solution for those applications. Because most user interaction involves systems that connect across multiple computing environments, fast and low-latency connectivity between those systems is important. To meet availability and performance expectations, you should design for high availability, low latency, and appropriate throughput levels. From a security point of view, communication needs to be fine-grained and controlled. Ideally, you should expose application components using secure APIs. For more information, see Gated egress. To minimize communication latency between environments, select a Google Cloud region that is geographically close to the private computing environment where your application backend components are hosted. For more information, see Best practices for Compute Engine regions selection. Minimize tight dependencies between systems that are running in different environments, particularly when communication is handled synchronously. These dependencies can slow performance, decrease overall availability, and potentially incur additional outbound data transfer charges. With the tiered hybrid architecture pattern, you might have larger volumes of inbound traffic from on-premises environments coming into Google Cloud compared to outbound traffic leaving Google Cloud. Nevertheless, you should know the anticipated outbound data transfer volume leaving Google Cloud. If you plan to use this architecture long term with high outbound data transfer volumes, consider using Cloud Interconnect. Cloud Interconnect can help to optimize connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing. To protect sensitive information, we recommend encrypting all communications in transit. If encryption is required at the connectivity layer, you can use VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect. To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backends, we recommend, where applicable, deploying an API gateway or proxy as a unifying facade. This gateway or proxy acts as a centralized control point and performs the following functions: Implements additional security measures. Shields client apps and other services from backend code changes. Facilitates audit trails for communication between all cross-environment applications and their decoupled components. Acts as an intermediate communication layer between legacy and modernized services.
Apigee and Apigee hybrid lets you host and manage enterprise-grade and hybrid gateways across on-premises environments, edge, other clouds, and Google Cloud environments. To facilitate the establishment of hybrid setups, use Cloud Load Balancing with hybrid connectivity. That means you can extend the benefits of cloud load balancing to services hosted on your on-premises compute environment. This approach enables phased workload migrations to Google Cloud with minimal or no service disruption, ensuring a smooth transition for the distributed services. For more information, see Hybrid connectivity network endpoint groups overview. Sometimes, using an API gateway, or a proxy and an Application Load Balancer together, can provide a more robust solution for managing, securing, and distributing API traffic at scale. Using Cloud Load Balancing with API gateways lets you accomplish the following: Provide high-performing APIs with Apigee and Cloud CDN, to reduce latency, host APIs globally, and increase availability for peak traffic seasons. For more information, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube. Implement advanced traffic management. Use Google Cloud Armor as a DDoS protection and network security service to protect your APIs. Manage efficient load balancing across gateways in multiple regions. For more information, watch Securing APIs and Implementing multi-region failover with Private Service Connect and Apigee on YouTube. Use API management and service mesh to secure and control service communication and exposure with microservices architecture. Use Cloud Service Mesh to allow for service-to-service communication that maintains the quality of service in a system composed of distributed services where you can manage authentication, authorization, and encryption between services. Use an API management platform like Apigee that lets your organization and external entities consume those services by exposing them as APIs. Establish common identity between environments so that systems can authenticate securely across environment boundaries. Deploy CI/CD and configuration management systems in the public cloud. For more information, see Mirrored networking architecture pattern. To help increase operational efficiency, use consistent tooling and CI/CD pipelines across environments. Best practices for individual workload and application architectures Although the focus lies on frontend applications in this pattern, stay aware of the need to modernize your backend applications. If the development pace of backend applications is substantially slower than for frontend applications, the difference can cause extra complexity. Treating APIs as backend interfaces streamlines integrations, frontend development, service interactions, and hides backend system complexities. To address these challenges, Apigee facilitates API gateway/proxy development and management for hybrid and multicloud deployments. Choose the rendering approach for your frontend web application based on the content (static versus dynamic), the search engine optimization performance, and the expectations about page loading speeds. When selecting an architecture for content-driven web applications, various options are available, including monolithic, serverless, event-based, and microservice architectures. To select the most suitable architecture, thoroughly assess these options against your current and future application requirements. 
To help you make an architectural decision that's aligned with your business and technical objectives, see Comparison of different architectures for content-driven web application backends, and Key Considerations for web backends. With a microservices architecture, you can use containerized applications with Kubernetes as the common runtime layer. With the tiered hybrid architecture pattern, you can run it in either of the following scenarios: Across both environments (Google Cloud and your on-premises environments). When using containers and Kubernetes across environments, you have the flexibility to modernize workloads and then migrate to Google Cloud at different times. That helps when a workload depends heavily on another and can't be migrated individually, or to use hybrid workload portability to use the best resources available in each environment. In all cases, GKE Enterprise can be a key enabling technology. For more information, see GKE Enterprise hybrid environment. In a Google Cloud environment for the migrated and modernized application components. Use this approach when you have legacy backends on-premises that lack containerization support or require significant time and resources to modernize in the short-term. For more information about designing and refactoring a monolithic app to a microservice architecture to modernize your web application architecture, see Introduction to microservices. You can combine data storage technologies depending on the needs of your web applications. Using Cloud SQL for structured data and Cloud Storage for media files is a common approach to meet diverse data storage needs. That said, the choice depends heavily on your use case. For more information about data storage options for content-driven application backends and effective modalities, see Data Storage Options for Content-Driven Web Apps. Also, see Your Google Cloud database options, explained. Previous arrow_back Distributed architecture patterns Next Partitioned multicloud pattern arrow_forward Send feedback \ No newline at end of file diff --git a/Transcoder_API.txt b/Transcoder_API.txt new file mode 100644 index 0000000000000000000000000000000000000000..e59532a972e3ab5b6d5990052ebc57ae9dabb346 --- /dev/null +++ b/Transcoder_API.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/transcoder/docs +Date Scraped: 2025-02-23T12:06:35.527Z + +Content: +Home Transcoder API Documentation Stay organized with collections Save and categorize content based on your preferences. Transcoder API documentation View all product documentation The Transcoder API allows you to convert video files and package them for optimized delivery to web, mobile and connected TVs. Learn more Transcoder API is a service covered by Google's obligations set forth in the Cloud Data Processing Addendum. Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. 
format_list_numbered Guides Quickstart: Transcode a video with the Transcoder API Creating and managing jobs Creating and managing job templates find_in_page Reference REST API RPC API Client libraries info Resources Pricing Quotas and limits Release notes \ No newline at end of file diff --git a/Transfer_Appliance.txt b/Transfer_Appliance.txt new file mode 100644 index 0000000000000000000000000000000000000000..cf9d2131566c31d23953281ba63740b978c42242 --- /dev/null +++ b/Transfer_Appliance.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/transfer-appliance/docs/4.0/overview +Date Scraped: 2025-02-23T12:06:56.549Z + +Content: +Home Documentation Transfer Appliance Guides Send feedback Overview Stay organized with collections Save and categorize content based on your preferences. Transfer Appliance is a high-capacity storage device that enables you to transfer and securely ship your data to a Google upload facility, where we upload your data to Cloud Storage. For Transfer Appliance capacities and requirements, refer to the Specifications page. How it works Request an appliance: We'll work with you to select the appropriate appliance for your requirements. Upload your data: Linux and Apple macOS systems mount the NFS share exposed by the appliance. Windows systems use SCP or SSH to upload data to the appliance. Ship the appliance back: Complete the transfer and seal the appliance. Google uploads the data: We upload the data to your Cloud Storage bucket, then wipe the appliance when we're done. Transfer is complete! You can now access your data in Google Cloud. Security features Your data and network security is important. Transfer Appliance helps ensure that you're connecting a trusted device to your equipment and network, and secures your data from end to end so that it is read by people you trust. To ensure Transfer Appliance is trusted and safe to connect to your devices, Transfer Appliance offers the following features: Tamper resistant: Bad actors cannot easily open Transfer Appliance's physical case. We also apply tamper-evident tags to the shipping case, so that you can visually inspect each appliance's integrity prior to opening the package. Ruggedized: Transfer Appliance's shipping container is ruggedized, ensuring your data arrives safely. Trusted Platform Module (TPM) chip: We validate the TPM's Platform Configuration Registers to ensure that the immutable root filesystem and software components haven't been tampered with. Hardware attestation: We use a remote attestation process to validate the appliance before you can connect it to your device and copy data to it. If anything is amiss, we work with you to quickly send you a new appliance. To ensure your data is safe during and after transit, Transfer Appliance uses the following features to protect you: AES 256 encryption: Your data is encrypted with industry-standard encryption to keep it safe. Customer-managed encryption keys: We use encryption keys that you manage using Cloud Key Management Service (Cloud KMS), enabling you to control and secure your data prior to shipping an appliance back to us. NIST 800-88 compliant data erasure: We securely erase your data from Transfer Appliance after uploading your data to Cloud Storage. You can request a wipe certificate to verify that we've wiped your data. For more information, refer to Security and encryption. 
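Because Transfer Appliance uses customer-managed encryption keys, you create and control the key in Cloud KMS before data capture. The following sketch shows the general shape of creating a key ring and key with the gcloud CLI; the names and location are hypothetical, and you should confirm the key configuration that Transfer Appliance requires in the Security and encryption documentation:

gcloud kms keyrings create transfer-appliance-keyring --location=us
gcloud kms keys create transfer-appliance-key \
    --keyring=transfer-appliance-keyring \
    --location=us \
    --purpose=encryption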
Performance To enable you to move data quickly and efficiently, Transfer Appliance has the following performance features: All SSD drives: Increased reliability over hard disk drives to ensure your transfer is smooth. Multiple network connectivity options: Quickly move data from your devices to Transfer Appliance, using either a 10Gbps RJ45 interface or a 40Gbps QSFP+ interface. Scalability with multiple appliances: You can scale your transfers by ordering multiple appliances to increase your transfer speed. Globally distributed processing: Reduced shipping times to and from Google ensure that your data transfer to Cloud Storage is quick. Minimal software: For Linux and Apple macOS systems, copy directly to Transfer Appliance by mounting the NFS share that the appliance exposes on your workstation, using common software already installed on the system. For Microsoft Windows systems, copy directly to Transfer Appliance from your workstation using SCP. Online capabilities Enabling online mode allows you to perform online transfers by streaming data directly to your Cloud Storage bucket after copying it to your appliance. Online transfers offer the following benefits: Quickly transfer data to Cloud Storage with low latency: Online transfers are an accelerated method of transferring your data to Cloud Storage, omitting the need to wait for your appliance to be shipped back to Google before the data is copied to your destination bucket. Connect to multiple appliances: Online mode allows parallel connectivity to multiple appliances. Cost-effective: Online capability is offered as a low-cost, fully managed method for transferring your data. Secure connection: Your data is encrypted during online transfers, ensuring end-to-end security. After the transfer is complete, your data is removed from the appliance. Easy to enable or disable: You can toggle between online and offline mode using simple commands. For more information on how to enable or disable online mode, refer to the Online/offline transfer page. Is Transfer Appliance suitable for me? Transfer Appliance is a good fit for your data transfer needs if: You are an existing Google Cloud customer. Your data resides in locations where Transfer Appliance is available. It would take more than one week to upload your data over the network. Other transfer options Other Google Cloud transfer options include: Storage Transfer Service to move data to a Cloud Storage bucket from other cloud storage providers or from your on-premises storage. BigQuery Data Transfer Service to move data from software as a service (SaaS) applications to BigQuery. Transfer service for on-premises data to move data from your on-premises machines to Cloud Storage. Where is Transfer Appliance available? Transfer Appliance is available, depending on the model (TA7, rackable TA40 and TA300, or freestanding TA40 and TA300), in the following locations: United States, European Union member states, United Kingdom, Singapore, Japan, Canada, and Australia. For a complete list of countries where Transfer Appliance is available, refer to the Order Appliance page on the Google Cloud console. If you don't find your country listed, reach out to Support at data-support@google.com. Data transfer speeds With a typical network bandwidth of 100 Mbps, 300 terabytes of data takes about 9 months to upload. However, with Transfer Appliance, you can receive the appliance and capture 300 terabytes of data in under 25 days. Your data can be accessed in Cloud Storage within another 25 days, all without consuming any outbound network bandwidth.
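The 9-month figure is straightforward to verify. Ignoring protocol overhead and assuming the link is fully dedicated to the transfer: 300 TB is about 300 × 10^12 bytes, or 2.4 × 10^15 bits. At 100 Mbps (10^8 bits per second), the transfer takes roughly 2.4 × 10^7 seconds, which is about 278 days, or a little over 9 months of continuous uploading.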
Example use cases Data collection If you need to transfer data from researchers, vendors, or other sites to Google Cloud, Transfer Appliance can move that data for you. Once transferred to Cloud Storage or BigQuery, your data is accessible via our Dataflow processing service for machine learning projects. Google Cloud Machine Learning Engine is a managed service that enables you to easily build machine learning models, that work on any type of data, of any size. Data replication Transfer Appliance can assist you in taking advantage of hybrid architectures, supporting current operations with existing on-premises infrastructure while experimenting with the cloud. By transferring a copy of your data to Google Cloud, you can decommission duplicate datasets, test cloud infrastructure, and expose your data to machine learning and analysis. Data migration Offline data transfer is suited for moving large amounts of existing backup images and archives to Cloud Storage, which can be stored in ultra low-cost, highly-durable, and highly available storage classes such as Archive Storage. For structured and unstructured data sets, whether they are small and frequently accessed or huge and rarely referenced, Google offers solutions like Cloud Storage, BigQuery, and Dataproc to store and analyze that data. Data handling for the European Union For customers in the EU, appliances are shipped from Belgium. When data capture is complete, you ship the appliance to Belgium for data upload. Your data is then uploaded to a Cloud Storage location in a region that you have specified. If you choose a destination region within the EU, your data never leaves the boundaries of the European Union during any part of the data transfer process. What's next? Request Transfer Appliance. Learn more about Transfer Appliance pricing. Review the procedure for using Transfer Appliance. Send feedback \ No newline at end of file diff --git a/Transfer_your_large_datasets.txt b/Transfer_your_large_datasets.txt new file mode 100644 index 0000000000000000000000000000000000000000..7bd7cadeb3bfb0cd5c64053433db2236098bb203 --- /dev/null +++ b/Transfer_your_large_datasets.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets +Date Scraped: 2025-02-23T11:51:37.380Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Transfer your large datasets Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-13 UTC For many customers, the first step in adopting a Google Cloud product is getting their data into Google Cloud. This document explores that process, from planning a data transfer to using best practices in implementing a plan. Transferring large datasets involves building the right team, planning early, and testing your transfer plan before implementing it in a production environment. Although these steps can take as much time as the transfer itself, such preparations can help minimize disruption to your business operations during the transfer. 
This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets (this document) Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs What is data transfer? For the purposes of this document, data transfer is the process of moving data without transforming it, for example, moving files as they are into objects. Data transfer isn't as simple as it sounds It's tempting to think of data transfer as one giant FTP session, where you put your files in one side and wait for them to come out the other side. However, in most enterprise environments, the transfer process involves many factors such as the following: Devising a transfer plan that accounts for administrative time, including time to decide on a transfer option, get approvals, and deal with unanticipated issues. Coordinating people in your organization, such as the team that executes the transfer, personnel who approve the tools and architecture, and business stakeholders who are concerned with the value and disruptions that moving data can bring. Choosing the right transfer tool based on your resources, cost, time, and other project considerations. Overcoming data transfer challenges, including "speed of light" issues (insufficient bandwidth), moving datasets that are in active use, protecting and monitoring the data while it's in flight, and ensuring the data is transferred successfully. This document aims to help you get started on a successful transfer initiative. Other projects related to data transfer The following list includes resources for other types of data transfer projects not covered in this document: If you need to transform your data (such as combining rows, joining datasets, or filtering out personal identifiable information), you should consider an extract, transform, and load (ETL) solution that can deposit data into a Google Cloud data warehouse. If you need to migrate a database and related apps (for example, to migrate a database app), see Database migration: Concepts and principles. Step 1: Assembling your team Planning a transfer typically requires personnel with the following roles and responsibilities: Enabling resources needed for a transfer: Storage, IT, and network admins, an executive sponsor, and other advisors (for example, a Google Account team or integration partners) Approving the transfer decision: Data owners or governors (for internal policies on who is allowed to transfer what data), legal advisors (for data-related regulations), and a security administrator (for internal policies on how data access is protected) Executing the transfer: A team lead, a project manager (for executing and tracking the project), an engineering team, and on-site receiving and shipping (to receive appliance hardware) It's crucial to identify who owns the preceding responsibilities for your transfer project and to include them in planning and decision meetings when appropriate. Poor organizational planning is often the cause of failed transfer initiatives. 
Gathering project requirements and input from these stakeholders can be challenging, but making a plan and establishing clear roles and responsibilities pays off. You can't be expected to know all the details of your data. Assembling a team gives you greater insight into the needs of the business. It's a best practice to identify potential issues before you invest time, money, and resources to complete the transfers. Step 2: Collecting requirements and available resources When you design a transfer plan, we recommend that you first collect requirements for your data transfer and then decide on a transfer option. To collect requirements, you can use the following process: Identify what datasets you need to move. Select tools like Data Catalog to organize your data into logical groupings that are moved and used together. Work with teams within your organization to validate or update these groupings. Identify what datasets you can move. Consider whether regulatory, security, or other factors prohibit some datasets from being transferred. If you need to transform some of your data before you move it (for example, to remove sensitive data or reorganize your data), consider using a data integration product like Dataflow or Cloud Data Fusion, or a workflow orchestration product like Cloud Composer. For datasets that are movable, determine where to transfer each dataset. Record which storage option you select to store your data. Typically, the target storage system on Google Cloud is Cloud Storage. Even if you need more complex solutions after your applications are up and running, Cloud Storage is a scalable and durable storage option. Understand what data access policies must be maintained after migration. Determine if you need to store this data in specific regions. Plan how to structure this data at the destination. For example, will it be the same as the source or different? Determine if you need to transfer data on an ongoing basis. For datasets that are movable, determine what resources are available to move them. Time: When does the transfer need to be completed? Cost: What is the budget available for the team and transfer costs? People: Who is available to execute the transfer? Bandwidth (for online transfers): How much of your available bandwidth for Google Cloud can be allocated for a transfer, and for what period of time? Before you evaluate and select transfer options in the next phase of planning, we recommend that you assess whether any part of your IT model can be improved, such as data governance, organization, and security. Your security model Many members of the transfer team might be granted new roles in your Google Cloud organization as part of your data transfer project. Data transfer planning is a great time to review your Identity and Access Management (IAM) permissions and best practices for using IAM securely. These issues can affect how you grant access to your storage. For example, you might place strict limits on write access to data that has been archived for regulatory reasons, but you might allow many users and applications to write data to your test environment. Your Google Cloud organization How you structure your data on Google Cloud depends on how you plan to use Google Cloud. Storing your data in the same Google Cloud project where you run your application may work, but it might not be optimal from a management perspective. Some of your developers might not have privilege to view the production data. 
In that case, a developer could develop code on sample data, while a privileged service account could access production data. Thus, you might want to keep your entire production dataset in a separate Google Cloud project, and then use a service account to allow access to the data from each application project. Google Cloud is organized around projects. Projects can be grouped into folders, and folders can be grouped under your organization. Roles are established at the project level and the access permissions are added to these roles at the Cloud Storage bucket level. This structure aligns with the permissions structure of other object store providers. For best practices to structure a Google Cloud organization, see Decide a resource hierarchy for your Google Cloud landing zone. Step 3: Evaluating your transfer options To evaluate your data transfer options, the transfer team needs to consider several factors, including the following: Cost Transfer time Offline versus online transfer options Transfer tools and technologies Security Cost Most of the costs associated with transferring data include the following: Networking costs Ingress to Cloud Storage is free. However, if you're hosting your data on a public cloud provider, you can expect to pay an egress charge and potentially storage costs (for example, read operations) for transferring your data. This charge applies for data coming from Google or another cloud provider. If your data is hosted in a private data center that you operate, you might also incur added costs for setting up more bandwidth to Google Cloud. Storage and operation costs for Cloud Storage during and after the transfer of data Product costs (for example, a Transfer Appliance) Personnel costs for assembling your team and acquiring logistical support Transfer time Few things in computing highlight the hardware limitations of networks as clearly as transferring large amounts of data. Ideally, you can transfer 1 GB in eight seconds over a 1 Gbps network. If you scale that up to a huge dataset (for example, 100 TB), the transfer time is about 12 days. Transferring huge datasets can test the limits of your infrastructure and potentially cause problems for your business. When you estimate how much time a transfer might take, include the following factors in your calculations: The size of the dataset you're moving. The bandwidth available for the transfer. A certain percentage of management time. The bandwidth efficiency, which can also affect transfer time. You might not want to transfer large datasets out of your company network during peak work hours. If the transfer overloads the network, no one else can complete necessary or mission-critical work. For this reason, the transfer team needs to factor timing into the transfer plan. After the data is transferred to Cloud Storage, you can use a number of technologies to process the new files as they arrive, such as Dataflow. Increasing network bandwidth How you increase network bandwidth depends on how you connect to Google Cloud. In a cloud-to-cloud transfer between Google Cloud and other cloud providers, Google provisions the connection between cloud vendor data centers, requiring no setup from you.
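To make the transfer-time arithmetic in the preceding section concrete, the following sketch estimates how long a transfer might take from the dataset size, the usable bandwidth, and an assumed management-overhead percentage. The function name, the utilization factor, and the overhead figure are illustrative assumptions, not values taken from any Google Cloud tool.

```python
# Rough transfer-time estimate from dataset size, usable bandwidth, and a
# management-overhead factor. The utilization and overhead values below are
# illustrative assumptions; adjust them for your own environment.

def estimate_transfer_days(dataset_tb: float,
                           bandwidth_gbps: float,
                           utilization: float = 0.75,
                           overhead_pct: float = 0.20) -> float:
    """Return an approximate number of days to move dataset_tb terabytes.

    utilization: fraction of the link you can realistically use (shared
        links, protocol overhead); overhead_pct: extra time for management.
    """
    dataset_bits = dataset_tb * 1e12 * 8          # TB -> bits (decimal units)
    effective_bps = bandwidth_gbps * 1e9 * utilization
    seconds = dataset_bits / effective_bps
    seconds *= (1 + overhead_pct)                 # add management overhead
    return seconds / 86_400                       # seconds -> days


if __name__ == "__main__":
    # 100 TB over a 1 Gbps link lands in the "roughly two weeks" range,
    # which is why offline options such as Transfer Appliance exist.
    for tb, gbps in [(1, 1), (100, 1), (100, 10)]:
        print(f"{tb:>5} TB over {gbps} Gbps is roughly "
              f"{estimate_transfer_days(tb, gbps):.1f} days")
```

Plug in your own utilization and overhead numbers; the main point is that transfer time scales linearly with data volume and inversely with usable bandwidth, which is central to the online-versus-offline decision discussed later in this step.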
If you're transferring data between your private data center and Google Cloud, there are several approaches, such as: A public internet connection by using a public API Direct Peering by using a public API Cloud Interconnect by using a private API When evaluating these approaches, it's helpful to consider your long-term connectivity needs. You might conclude that it's cost prohibitive to acquire bandwidth solely for transfer purposes, but when you factor in long-term use of Google Cloud and the network needs across your organization, the investment might be worthwhile. For more information about how to connect your networks to Google Cloud, see Choose a Network Connectivity product. If you opt for an approach that involves transferring data over the public internet, we recommend that you check with your security administrator on whether your company policy forbids such transfers. Also, check whether the public internet connection is used for your production traffic. Finally, consider that large-scale data transfers might negatively impact the performance of your production network. Online versus offline transfer A critical decision is whether to use an offline or online process for your data transfer. That is, you must choose between transferring over a network, whether it's a Cloud Interconnect or the public internet, or transferring by using storage hardware. To help with this decision, estimate the transfer time for your dataset size and available bandwidth (as illustrated earlier in the Transfer time section), and build a certain amount of management overhead into that estimate. As noted earlier, you might need to consider whether the cost to achieve lower latencies for your data transfer (such as acquiring network bandwidth) is offset by the value of that investment to your organization. Options available from Google Google offers several tools and technologies to help you perform a data transfer. Deciding among Google's transfer options Choosing a transfer option depends on your use case, as the following table shows.

Where you're moving data from | Scenario | Suggested products
Another cloud provider (for example, Amazon Web Services or Microsoft Azure) to Google Cloud | N/A | Storage Transfer Service
Cloud Storage to Cloud Storage (two different buckets) | N/A | Storage Transfer Service
Your private data center to Google Cloud | Enough bandwidth to meet your project deadline (smaller transfers) | gcloud storage command
Your private data center to Google Cloud | Enough bandwidth to meet your project deadline (large-scale transfers) | Storage Transfer Service for on-premises data
Your private data center to Google Cloud | Not enough bandwidth to meet your project deadline | Transfer Appliance

gcloud storage command for smaller transfers of on-premises data The gcloud storage command is the standard tool for small- to medium-sized transfers over a typical enterprise-scale network, from a private data center or from another cloud provider to Google Cloud. While gcloud storage supports uploading objects up to the maximum Cloud Storage object size, long-running transfers of large objects are more likely to experience failures than short-running transfers. For more information about transferring large objects to Cloud Storage, see Storage Transfer Service for large transfers of on-premises data. The gcloud storage command is especially useful in the following scenarios: Your transfers need to be executed on an as-needed basis, or during command-line sessions by your users. You're transferring only a few files or very large files, or both.
You're consuming the output of a program (streaming output to Cloud Storage). You need to watch a directory with a moderate number of files and sync any updates with very low latencies. Storage Transfer Service for large transfers of on-premises data Like the gcloud storage command, Storage Transfer Service for on-premises data enables transfers from network file system (NFS) storage to Cloud Storage. Storage Transfer Service for on-premises data is designed for large-scale transfers (up to petabytes of data, billions of files). It supports full copies or incremental copies, and it works on all transfer options listed earlier in Deciding among Google's transfer options. It also has a managed graphical user interface; even non-technically savvy users (after setup) can use it to move data. Storage Transfer Service for on-premises data is especially useful in the following scenarios: You have sufficient available bandwidth to move the data volumes. You support a large base of internal users who might find a command-line tool challenging to use. You need robust error-reporting and a record of all files and objects that are moved. You need to limit the impact of transfers on other workloads in your data center (this product can stay under a user-specified bandwidth limit). You want to run recurring transfers on a schedule. You set up Storage Transfer Service for on-premises data by installing on-premises software (known as agents) onto computers in your data center. After setting up Storage Transfer Service, you can initiate transfers in the Google Cloud console by providing a source directory, destination bucket, and time or schedule. Storage Transfer Service recursively crawls subdirectories and files in the source directory and creates objects with a corresponding name in Cloud Storage (the object /dir/foo/file.txt becomes an object in the destination bucket named /dir/foo/file.txt). Storage Transfer Service automatically re-attempts a transfer when it encounters any transient errors. While the transfers are running, you can monitor how many files are moved and the overall transfer speed, and you can view error samples. When Storage Transfer Service completes a transfer, it generates a tab-delimited file (TSV) with a full record of all files touched and any error messages received. Agents are fault tolerant, so if an agent goes down, the transfer continues with the remaining agents. Agents are also self-updating and self-healing, so you don't have to worry about patching the latest versions or restarting the process if it goes down because of an unanticipated issue. Things to consider when using Storage Transfer Service: Use an identical agent setup on every machine. All agents should see the same Network File System (NFS) mounts in the same way (same relative paths). This setup is a requirement for the product to function. More agents results in more speed. Because transfers are automatically parallelized across all agents, we recommend that you deploy many agents so that you use your available bandwidth. Bandwidth caps can protect your workloads. Your other workloads might be using your data center bandwidth, so set a bandwidth cap to prevent transfers from impacting your SLAs. Plan time for reviewing errors. Large transfers can often result in errors requiring review. Storage Transfer Service lets you see a sample of the errors encountered directly in the Google Cloud console. 
If needed, you can load the full record of all transfer errors to BigQuery to check on files or evaluate errors that remained even after retries. These errors might be caused by running apps that were writing to the source while the transfer occurred, or the errors might reveal an issue that requires troubleshooting (for example, permissions error). Set up Cloud Monitoring for long-running transfers. Storage Transfer Service lets Monitoring monitor agent health and throughput, so you can set alerts that notify you when agents are down or need attention. Acting on agent failures is important for transfers that take several days or weeks, so that you avoid significant slowdowns or interruptions that can delay your project timeline. Transfer Appliance for larger transfers For large-scale transfers (especially transfers with limited network bandwidth), Transfer Appliance is an excellent option, especially when a fast network connection is unavailable and it's too costly to acquire more bandwidth. Transfer Appliance is especially useful in the following scenarios: Your data center is in a remote location with limited or no access to bandwidth. Bandwidth is available, but cannot be acquired in time to meet your deadline. You have access to logistical resources to receive and connect appliances to your network. With this option, consider the following: Transfer Appliance requires that you're able to receive and ship back the Google-owned hardware. Depending on your internet connection, the latency for transferring data into Google Cloud is typically higher with Transfer Appliance than online. Transfer Appliance is available only in certain countries. The two main criteria to consider with Transfer Appliance are cost and speed. With reasonable network connectivity (for example, 1 Gbps), transferring 100 TB of data online takes over 10 days to complete. If this rate is acceptable, an online transfer is likely a good solution for your needs. If you only have a 100 Mbps connection (or worse from a remote location), the same transfer takes over 100 days. At this point, it's worth considering an offline-transfer option such as Transfer Appliance. Acquiring a Transfer Appliance is straightforward. In the Google Cloud console, you request a Transfer Appliance, indicate how much data you have, and then Google ships one or more appliances to your requested location. You're given a number of days to transfer your data to the appliance ("data capture") and ship it back to Google. Storage Transfer Service for cloud-to-cloud transfers Storage Transfer Service is a fully managed, highly scalable service to automate transfers from other public clouds into Cloud Storage. For example, you can use Storage Transfer Service to transfer data from Amazon S3 to Cloud Storage. For HTTP, you can give Storage Transfer Service a list of public URLs in a specified format. This approach requires that you write a script providing the size of each file in bytes, along with a Base64-encoded MD5 hash of the file contents. Sometimes the file size and hash are available from the source website. If not, you need local access to the files, in which case, it might be easier to use the gcloud storage command, as described earlier. If you have a transfer in place, Storage Transfer Service is a great way to get data and keep it, particularly when transferring from another public cloud. 
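To illustrate the URL-list requirement described above, here is a minimal sketch that computes each file's size in bytes and a Base64-encoded MD5 hash for files that you have local access to. The header line and column order are assumptions based on the TSV URL-list format; check the current Storage Transfer Service documentation for the exact format, and note that the example URL and file names are hypothetical.

```python
# Minimal sketch: build URL-list entries (size in bytes + Base64 MD5) for
# files you can read locally. Verify the exact header and column order
# against the Storage Transfer Service documentation before using it.
import base64
import hashlib
import pathlib


def base64_md5(path: pathlib.Path, chunk_size: int = 8 * 1024 * 1024) -> str:
    """Return the Base64-encoded MD5 digest of a file, read in chunks."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return base64.b64encode(digest.digest()).decode("ascii")


def build_url_list(files: dict[str, pathlib.Path]) -> str:
    """files maps a public URL to a local copy of the same file."""
    lines = ["TsvHttpData-1.0"]  # assumed header for the TSV URL-list format
    for url, local_path in files.items():
        size = local_path.stat().st_size
        lines.append(f"{url}\t{size}\t{base64_md5(local_path)}")
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # Hypothetical example input.
    entries = {
        "https://example.com/data/part-0001.csv": pathlib.Path("part-0001.csv"),
    }
    print(build_url_list(entries))
```

If you have to read every file locally just to compute the sizes and hashes, weigh whether uploading directly with the gcloud storage command is simpler, as noted earlier.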
If you would like to move data from another cloud not supported by Storage Transfer Service, you can use the gcloud storage command from a cloud-hosted virtual machine instance. Security For many Google Cloud users, security is their primary focus, and there are different levels of security available. A few aspects of security to consider include protecting data at rest (authorization and access to the source and destination storage system), protecting data while in transit, and protecting access to the transfer product. The following table outlines these aspects of security by product. Product Data at rest Data in transit Access to transfer product Transfer Appliance All data is encrypted at rest. Data is protected with keys managed by the customer. Anyone can order an appliance, but to use it they need access to the data source. gcloud storage command Access keys required to access Cloud Storage, which is encrypted at rest. Data is sent over HTTPS and encrypted in transit. Anyone can download and run the Google Cloud CLI. They must have permissions to buckets and local files in order to move data. Storage Transfer Service for on-premises data Access keys required to access Cloud Storage, which is encrypted at rest. The agent process can access local files as OS permissions allow. Data is sent over HTTPS and encrypted in transit. You must have object editor permissions to access Cloud Storage buckets. Storage Transfer Service Access keys required for non-Google Cloud resources (for example, Amazon S3). Access keys are required to access Cloud Storage, which is encrypted at rest. Data is sent over HTTPS and encrypted in transit. You must have IAM permissions for the service account to access the source and object editor permissions for any Cloud Storage buckets. To achieve baseline security enhancements, online transfers to Google Cloud using the gcloud storage command are accomplished over HTTPS, data is encrypted in transit, and all data in Cloud Storage is, by default, encrypted at rest. If you use Transfer Appliance, security keys that you control can help protect your data. Generally, we recommend that you engage your security team to ensure that your transfer plan meets your company and regulatory requirements. Third-party transfer products For advanced network-level optimization or ongoing data transfer workflows, you might want to use more advanced tools. For information about more advanced tools, see Google Cloud partners. Step 4: Evaluating data migration approaches When migrating data, you can follow these general steps: Transfer data from the legacy site to the new site. Resolve any data integration issues that arise—for example, synchronizing the same data from multiple sources. Validate the data migration. Promote the new site to be the primary copy. When you no longer need the legacy site as a fallback option, retire it. You should base your data migration approach on the following questions: How much data do you need to migrate? How often does this data change? Can you afford the downtime represented by a cut-over window while migrating data? What is your current data consistency model? There is no best approach; choosing one depends on the environment and on your requirements. The following sections present four data migration approaches: Scheduled maintenance Continuous replication Y (writing and reading) Data-access microservice Each approach tackles different issues, depending on the scale and the requirements of the data migration. 
The data-access microservice approach is the preferred option in a microservices architecture. However, the other approaches are useful for data migration. They're also useful during the transition period that's necessary in order to modernize your infrastructure to use the data-access microservice approach. These approaches differ in the size of the cut-over window that they require, the refactoring effort that they involve, and the flexibility that they offer. Before following any of these approaches, make sure that you've set up the required infrastructure in the new environment. Scheduled maintenance The scheduled maintenance approach is ideal if your workloads can afford a cut-over window. It's scheduled in the sense that you can plan when your cut-over window occurs. In this approach, your migration consists of these steps: Copy data that's in the legacy site to the new site. This initial copy minimizes the cut-over window; after this initial copy, you need to copy only the data that has changed during this window. Perform data validation and consistency checks to compare data in the legacy site against the copied data in the new site. Stop the workloads and services that have write access to the copied data, so that no further changes occur. Synchronize changes that occurred after the initial copy. Refactor workloads and services to use the new site. Start your workloads and services. When you no longer need the legacy site as a fallback option, retire it. The scheduled maintenance approach places most of the burden on the operations side, because minimal refactoring of workloads and services is needed. Continuous replication Because not all workloads can afford a long cut-over window, you can build on the scheduled maintenance approach by providing a continuous replication mechanism after the initial copy and validation steps. When you design a mechanism like this, you should also take into account the rate at which changes are applied to your data; it might be challenging to keep two systems synchronized. The continuous replication approach is more complex than the scheduled maintenance approach. However, the continuous replication approach minimizes the time for the required cut-over window, because it minimizes the amount of data that you need to synchronize. The sequence for a continuous replication migration is as follows: Copy data that's in the legacy site to the new site. This initial copy minimizes the cut-over window; after the initial copy, you need to copy only the data that changed during this window. Perform data validation and consistency checks to compare data in the legacy site against the copied data in the new site. Set up a continuous replication mechanism from the legacy site to the new site. Stop the workloads and services that have access to the data to migrate (that is, to the data involved in the previous step). Refactor workloads and services to use the new site. Wait for the replication to fully synchronize the new site with the legacy site. Start your workloads and services. When you no longer need the legacy site as a fallback option, retire it. As with the scheduled maintenance approach, the continuous replication approach places most of the burden on the operations side. Y (writing and reading) If your workloads have hard high-availability requirements and you cannot afford the downtime represented by a cut-over window, you need to take a different approach.
For this scenario, you can use an approach that in this document is referred to as Y (writing and reading), which is a form of parallel migration. With this approach, the workload is writing and reading data in both the legacy site and the new site during the migration. (The letter Y is used here as a graphic representation of the data flow during the migration period.) This approach is summarized as follows: Refactor workloads and services to write data both to the legacy site and to the new site and to read from the legacy site. Identify the data that was written before you enabled writes in the new site and copy it from the legacy site to the new site. Along with the preceding refactoring, this ensures that the data stores are aligned. Perform data validation and consistency checks that compare data in the legacy site against data in the new site. Switch read operations from the legacy site to the new site. Perform another round of data validation and consistency checks to compare data in the legacy site against the new site. Disable writing in the legacy site. When you no longer need the legacy site as a fallback option anymore, retire it. Unlike the scheduled maintenance and continuous replication approaches, the Y (writing and reading) approach shifts most of the efforts from the operations side to the development side due to the multiple refactorings. Data-access microservice If you want to reduce the refactoring effort necessary to follow the Y (writing and reading) approach, you can centralize data read and write operations by refactoring workloads and services to use a data-access microservice. This scalable microservice becomes the only entry point to your data storage layer, and it acts as a proxy for that layer. Of the approaches discussed here, this gives you the maximum flexibility, because you can refactor this component without impacting other components of the architecture and without requiring a cut-over window. Using a data-access microservice is much like the Y (writing and reading) approach. The difference is that the refactoring efforts focus on the data-access microservice alone, instead of having to refactor all the workloads and services that access the data storage layer. This approach is summarized as follows: Refactor the data-access microservice to write data both in the legacy site and the new site. Reads are performed against the legacy site. Identify the data that was written before you enabled writes in the new site and copy it from the legacy site to the new site. Along with the preceding refactoring, this ensures that the data stores are aligned. Perform data validation and consistency checks comparing data in the legacy site against data in the new site. Refactor the data-access microservice to read from the new site. Perform another round of data validation and consistency checks comparing data in the legacy site against data in the new site. Refactor the data-access microservice to write only in the new site. When you no longer need the legacy site as a fallback option anymore, retire it. Like the Y (writing and reading) approach, the data-access microservice approach places most of the burden on the development side. However, it's significantly lighter compared to the Y (writing and reading) approach, because the refactoring efforts are focused on the data-access microservice. Step 5: Preparing for your transfer For a large transfer, or a transfer with significant dependencies, it's important to understand how to operate your transfer product. 
Customers typically go through the following steps: Pricing and ROI estimation. This step provides many options to aid in decision making. Functional testing. In this step, you confirm that the product can be successfully set up and that network connectivity (where applicable) is working. You also test that you can move a representative sample of your data (including accompanying non-transfer steps, like moving a VM instance) to the destination. You can usually do this step before allocating all resources such as transfer machines or bandwidth. The goals of this step include the following: Confirm that you can install and operate the transfer product. Surface potential project-stopping issues that block data movement (for example, network routes) or your operations (for example, training needed on a non-transfer step). Performance testing. In this step, you run a transfer on a large sample of your data (typically 3–5%) after production resources are allocated to do the following: Confirm that you can consume all allocated resources and achieve the speeds that you expect. Surface and fix bottlenecks (for example, a slow source storage system). Step 6: Ensuring the integrity of your transfer To help ensure the integrity of your data during a transfer, we recommend taking the following precautions: Enable versioning and backup on your destination to limit the damage from accidental deletions. Validate your data before removing the source data. For large-scale data transfers (with petabytes of data and billions of files), a baseline latent error rate of the underlying source storage system as low as 0.0001% still results in a data loss of thousands of files and gigabytes. Typically, applications running at the source are already tolerant of these errors, in which case, extra validation isn't necessary. In some exceptional scenarios (for example, long-term archive), more validation is necessary before it's considered safe to delete data from the source. Depending on the requirements of your application, we recommend that you run some data integrity tests after the transfer is complete to ensure that the application continues to work as intended. Many transfer products have built-in data integrity checks. However, depending on your risk profile, you might want to do an extra set of checks on the data and the apps reading that data before you delete data from the source. For example, you might want to confirm whether a checksum that you recorded and computed independently matches the data written at the destination (a minimal checksum-comparison sketch follows the What's next section), or confirm that a dataset used by the application transferred successfully. What's next Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
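To make the independent checksum check in Step 6 concrete, the following minimal sketch compares a locally computed Base64-encoded MD5 digest against the MD5 hash that Cloud Storage reports for an uploaded object, using the google-cloud-storage Python client. The bucket and object names are hypothetical, and composite objects expose only a CRC32C checksum rather than an MD5, so treat this as one example of an independent check rather than a complete validation strategy.

```python
# Minimal sketch: verify that an uploaded object's MD5 matches a digest
# computed independently from the local source file. Bucket and file names
# below are hypothetical. Composite objects report only CRC32C, not MD5.
import base64
import hashlib

from google.cloud import storage  # pip install google-cloud-storage


def local_base64_md5(path: str, chunk_size: int = 8 * 1024 * 1024) -> str:
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return base64.b64encode(digest.digest()).decode("ascii")


def verify_object(bucket_name: str, object_name: str, local_path: str) -> bool:
    client = storage.Client()
    blob = client.bucket(bucket_name).get_blob(object_name)
    if blob is None or blob.md5_hash is None:
        return False  # object missing, or a composite object without an MD5
    return blob.md5_hash == local_base64_md5(local_path)


if __name__ == "__main__":
    ok = verify_object("my-destination-bucket", "dir/foo/file.txt", "file.txt")
    print("match" if ok else "MISMATCH - investigate before deleting the source")
```

For bulk validation, you would run a comparison like this over a sample (or all) of the transferred objects and investigate any mismatch before you delete data from the source.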
ContributorsAuthors: Marco Ferrari | Cloud Solutions ArchitectRoss Thomson | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Translation_AI.txt b/Translation_AI.txt new file mode 100644 index 0000000000000000000000000000000000000000..5927b7c98fa920769a5ad78a9aaefb042646eef4 --- /dev/null +++ b/Translation_AI.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/translate +Date Scraped: 2025-02-23T12:02:16.465Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceTranslation AITranslate docs, audio, and videos in real time with Google AIGoogle Cloud’s AI-powered APIs help you translate documents, websites, apps, audio files, videos, and more at scale with best-in-class quality and enterprise-grade control and security.Try Translation APIRequest a demoHighlightsWhat are Google Cloud's translation offerings?Which translation product is right for me?How much will translation cost me?What is the Translation API?5-min intro with step-by-step in-console demoOverviewAdaptive translationAdaptive Translation helps you create nuanced, high-quality translations that capture the unique style, tone, and voice of your content. By combining the power of large language models (LLMs) with your own small datasets, Adaptive translation delivers results that are often on par with custom-built models – but without the complexity of training or maintaining them.Here are the languages supported by Adaptive Translation.Cloud Translation APICloud Translation API uses Google's neural machine translation technology to let you dynamically translate text through the API using a Google pre-trained, custom model, or a translation specialized large language model (LLMs). It comes in Basic and Advanced editions. Both provide fast and dynamic translation, but Advanced offers customization features, such as domain-specific translation, formatted document translation, and batch translation.The first 500,000 characters sent to the API to process (Basic and Advanced combined) per month are free (not applicable to LLMs).Model selectionFor advanced translations, you're not limited to a one-size-fits-all solution, ensuring the highest quality and accuracy for your specific content. You can choose from the following models, based on your needs:Neural Machine Translation (NMT) for general text in everyday use cases like website content or news articles.Translation Large Language Model (LLM) for conversational text like messages or social media posts. You can use "Adaptive" mode to fine-tune translations based on your own examples for an even closer match to your unique style.AutoML Translation for highly specialized or technical content. By training it on your own data, you'll get the most accurate translations possible for your specific domain or industry.AutoML TranslationAutoML Translation enables you to create custom translation models tailored to your specific domain or use case, no coding required. It utilizes machine learning to analyze your provided translated text pairs and develop a model that can translate new content in the same domain with a higher degree of accuracy than the standard Google pre-trained model. 
It seamlessly integrates with Cloud Translation API and Translation Hub service for a smooth workflow orchestration.Media file translation, subtitling and voice-over solutionsFor a simple translated transcript of a video or audio, Speech-to-text API transcribes your video or audio with high accuracy into a text file that can be translated by the Translation API into different languages.To subtitle your videos after transcription and translation, use Transcoder API to add subtitles.To voice-over your videos in different languages, transcribe and translate, then use Cloud Text-to-speech API to synthesize custom, lifelike speech in 380+ voices across 50+ languages.Multilingual contact center solution for global marketsCombining Contact Center AI (CCAI) and Translation API allows you to assist a customer interaction happening in two different languages seamlessly across phone and chat, all in real-time. CCAI helps break language barriers by natively supporting both customer sentiment and call driver analysis, across many different languages. These analyses can be fed back to agents, in their preferred language, for better call outcomes and customer experience.Data privacy and securityGoogle Cloud has industry-leading capabilities that give you—our customers—control over your data and provide visibility into when and how your data is accessed.As a Google Cloud customer, you own your customer data. We implement stringent security measures to safeguard your customer data and provide you with tools and features to control it on your terms. Customer data is your data, not Google’s. We only process your data according to your agreement(s).Learn more in our Privacy Resource Center.View moreCompare translation productsProductWhat is itBest forEditions and tiersKey featuresTranslation APIAn API that delivers best-in-class machine translation results using Google’s neural machine translation technology.Translate websites, apps, documents, user comments etc.Basic: short-form, casual or user-generated content Advanced: long-form content that requires consistency and accuracy.-Supports >100 language pairs-Advanced edition supports domain-specific translation with higher accuracyTranslation HubA fully-managed service that allows organizations to translate a large volume of documents and manage their workflows.Enterprise document translation workflow management.Basic: generic content.Advanced: domain-specific content and content that requires human review and editing.-Enterprise-grade control and security-Zero deployment time-Custom model-Format retention-Human reviewAutoML TranslationA custom translation machine learning model training service that integrates with Translation API and Translation Hub.Train a custom translation model for higher level of accuracy in domain-specific content.N/A-No-code ML model training-Seamlessly integrates with Translation API and Translation HubTo address complex use cases, Translation API, our foundational offering, works well with our full portfolio of APIs.Translation APIWhat is itAn API that delivers best-in-class machine translation results using Google’s neural machine translation technology.Best forTranslate websites, apps, documents, user comments etc.Editions and tiersBasic: short-form, casual or user-generated content Advanced: long-form content that requires consistency and accuracy.Key features-Supports >100 language pairs-Advanced edition supports domain-specific translation with higher accuracyTranslation HubWhat is itA fully-managed service that allows 
organizations to translate a large volume of documents and manage their workflows.Best forEnterprise document translation workflow management.Editions and tiersBasic: generic content.Advanced: domain-specific content and content that requires human review and editing.Key features-Enterprise-grade control and security-Zero deployment time-Custom model-Format retention-Human reviewAutoML TranslationWhat is itA custom translation machine learning model training service that integrates with Translation API and Translation Hub.Best forTrain a custom translation model for higher level of accuracy in domain-specific content.Editions and tiersN/AKey features-No-code ML model training-Seamlessly integrates with Translation API and Translation HubTo address complex use cases, Translation API, our foundational offering, works well with our full portfolio of APIs.How It WorksTranslation API is a service that helps you programmatically translate your apps, websites and programs in real-time. It works with other APIs for more sophisticated use cases—explained in the Common Uses section. Try Translation APIIn-console demo: How the Translation API workssparkGet solution recommendations for your use case, generated by AII want to generate subtitles in five languages for my videosI want to take in audio in one language and generate a translation in another language, in real timeAutomatically translate newly published pages on my site into 10 languagesMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionGenerate multilingual subtitlesprompt_suggestionTranslate real time audioprompt_suggestionAutomate website translation Common UsesTranslate a website or appProgrammatically translate your site and app with scaleTo translate long form content on your website or in your app, with consistency and higher level of accuracy, use Cloud Translation API - Advanced. Otherwise, use Cloud Translation API - Basic for simplicity and scale.Create a Google Cloud project with the API enabled to use the service, then install client libraries for common programming languages to make calls to the API. With Advanced edition, you can use glossaries, custom translation models, and batch translate for efficiency.Try Cloud Translation API free5:24Demo: Use Translation API in the Google Cloud consoleCodeLabs Tutorial: Use Translation API with PythonCloud Translation API Setup GuideCompare Translation API Basic and AdvancedFirst 500,000 characters sent to Cloud Translation API for processing every month are free.How-tosProgrammatically translate your site and app with scaleTo translate long form content on your website or in your app, with consistency and higher level of accuracy, use Cloud Translation API - Advanced. Otherwise, use Cloud Translation API - Basic for simplicity and scale.Create a Google Cloud project with the API enabled to use the service, then install client libraries for common programming languages to make calls to the API. 
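As one illustration of what such a client-library call looks like, here is a minimal sketch that uses the Python client for Cloud Translation - Basic (the google-cloud-translate package). The sample text and target language are placeholder values; Advanced-edition features such as glossaries and batch translation use the v3 client instead.

```python
# Minimal sketch: translate a short string with Cloud Translation - Basic
# using the Python client library (pip install google-cloud-translate).
# The input text and target language below are placeholder values.
from google.cloud import translate_v2 as translate


def translate_text(text: str, target_language: str = "es") -> str:
    client = translate.Client()
    result = client.translate(text, target_language=target_language)
    # The response includes the detected source language and the translation.
    print(f"Detected source language: {result['detectedSourceLanguage']}")
    return result["translatedText"]


if __name__ == "__main__":
    print(translate_text("Hello, world", target_language="es"))
```
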
With Advanced edition, you can use glossaries, custom translation models, and batch translate for efficiency.Try Cloud Translation API free5:24Demo: Use Translation API in the Google Cloud consoleCodeLabs Tutorial: Use Translation API with PythonCloud Translation API Setup GuideCompare Translation API Basic and AdvancedFirst 500,000 characters sent to Cloud Translation API for processing every month are free.Train a custom translation modelNo-code translation model training with AutoMLAccess AutoML translation capabilities via Cloud Translation - Advanced to build custom models. It retains all future model enhancements, and the models you build remain available to you across Translation API and Translation Hub advanced tier services. With no coding, you use your domain-specific datasets (pairs of segments in the source and target languages) to finetune Google’s pretrained model for higher accuracy. Your data remains yours and will not be used to train Google models.Start building your custom model5:40Demo: How to train a custom translation model with no codeTutorial: Create a custom translation model (sample data provided)AutoML Translation beginner's guideHow-tosNo-code translation model training with AutoMLAccess AutoML translation capabilities via Cloud Translation - Advanced to build custom models. It retains all future model enhancements, and the models you build remain available to you across Translation API and Translation Hub advanced tier services. With no coding, you use your domain-specific datasets (pairs of segments in the source and target languages) to finetune Google’s pretrained model for higher accuracy. Your data remains yours and will not be used to train Google models.Start building your custom model5:40Demo: How to train a custom translation model with no codeTutorial: Create a custom translation model (sample data provided)AutoML Translation beginner's guideSubtitle a video in different languagesSubtitle at scale with Google Cloud APIsUse Cloud Speech-to-text API and Cloud Translation API to subtitle your videos in different languages at the same time. Store the videos you’d like to subtitle in Cloud Storage. Cloud Speech-to-text API then transcribes your video with accuracy and supports speech recognition in 145 languages. Cloud Translation API takes the transcription output (plain text and srt files) and translates into up to 130+ languages. The translated files can then be uploaded to your video for subtitles to show up.Try Speech-to-Text API free11:30Demo: How to subtitle videos using Speech-to-text and Translation APIsIn-console tutorial for Speech-to-text APIDocumentation: Add subtitles to a video with Transcoder APIVideo: How to combine Google Cloud APIs to work togetherBoth APIs have a free tier that allows you to use the products for free up to monthly limits.How-tosSubtitle at scale with Google Cloud APIsUse Cloud Speech-to-text API and Cloud Translation API to subtitle your videos in different languages at the same time. Store the videos you’d like to subtitle in Cloud Storage. Cloud Speech-to-text API then transcribes your video with accuracy and supports speech recognition in 145 languages. Cloud Translation API takes the transcription output (plain text and srt files) and translates into up to 130+ languages. 
The translated files can then be uploaded to your video for subtitles to show up.Try Speech-to-Text API free11:30Demo: How to subtitle videos using Speech-to-text and Translation APIsIn-console tutorial for Speech-to-text APIDocumentation: Add subtitles to a video with Transcoder APIVideo: How to combine Google Cloud APIs to work togetherBoth APIs have a free tier that allows you to use the products for free up to monthly limits.Dub a video in different languagesDub your video with AIDubbing a video in a foreign language requires Cloud Speech-to-Text, Translation, and Text-to-Speech APIs. Speech-to-Text API’s transcription of your video gets translated into desired languages by the Translation API; then Text-to-Speech API generates audio from the translated text.All the APIs’ performance can be enhanced with a custom or domain-specific model. For example, Text-to-Speech API can create unique and natural-sounding voices for your organization with a custom voice model.Try Text-to-Speech API free5:11Demo: How to dub a video with Google AI-powered APIsQuickstarts: Cloud Text-to-Speech APIVideo: How to combine Google Cloud APIs to work togetherVideo: Boost Speech-to-text API accuracyBoth APIs have a free tier that allows you to use the product for free up to monthly limits.How-tosDub your video with AIDubbing a video in a foreign language requires Cloud Speech-to-Text, Translation, and Text-to-Speech APIs. Speech-to-Text API’s transcription of your video gets translated into desired languages by the Translation API; then Text-to-Speech API generates audio from the translated text.All the APIs’ performance can be enhanced with a custom or domain-specific model. For example, Text-to-Speech API can create unique and natural-sounding voices for your organization with a custom voice model.Try Text-to-Speech API free5:11Demo: How to dub a video with Google AI-powered APIsQuickstarts: Cloud Text-to-Speech APIVideo: How to combine Google Cloud APIs to work togetherVideo: Boost Speech-to-text API accuracyBoth APIs have a free tier that allows you to use the product for free up to monthly limits.Translate formatted documentsTranslate documents with rich format retentionCloud Translation API - Advanced provides a Document Translation API for directly translating formatted documents, including Google Workspace, Microsoft Office, and PDF files. It preserves the original formatting and layout in translated documents.You can create a Google Cloud project with the API enabled, and install client libraries for common programming languages to call the API. You can also use features such as glossaries, custom translation models, and batch translate for efficiency.Try Cloud Translation API freeQuickstart: Translate documents with Cloud Translation API - AdvancedFirst 500,000 characters sent to Cloud Translation API for processing every month are free.How-tosTranslate documents with rich format retentionCloud Translation API - Advanced provides a Document Translation API for directly translating formatted documents, including Google Workspace, Microsoft Office, and PDF files. It preserves the original formatting and layout in translated documents.You can create a Google Cloud project with the API enabled, and install client libraries for common programming languages to call the API. 
You can also use features such as glossaries, custom translation models, and batch translate for efficiency.Try Cloud Translation API freeQuickstart: Translate documents with Cloud Translation API - AdvancedFirst 500,000 characters sent to Cloud Translation API for processing every month are free.Translation-aided customer interactionsTranslate customer interactions in real time with CCAIWhen your customers and agents speak different languages, Contact Center AI (CCAI) works with the Translation API to deliver a seamless communication across online chat and phone calls.During a phone call, CCAI transcribes what the customer says, the Translation API translates it into the agent’s preferred language in real time, the agent replies in their language, which gets translated back into the customer’s language, and CCAI delivers the translated response in synthesized speech.Request a CCAI demoLearn more: How CCAI enables your agentsHow to stream phone call audios in CCAI and get human agent suggestionsQuickstart: Translation API BasicAgent Assist, under the CCAI solution, works with the Translation and other APIs to enable this use case.How-tosTranslate customer interactions in real time with CCAIWhen your customers and agents speak different languages, Contact Center AI (CCAI) works with the Translation API to deliver a seamless communication across online chat and phone calls.During a phone call, CCAI transcribes what the customer says, the Translation API translates it into the agent’s preferred language in real time, the agent replies in their language, which gets translated back into the customer’s language, and CCAI delivers the translated response in synthesized speech.Request a CCAI demoLearn more: How CCAI enables your agentsHow to stream phone call audios in CCAI and get human agent suggestionsQuickstart: Translation API BasicAgent Assist, under the CCAI solution, works with the Translation and other APIs to enable this use case.PricingPrice tableEditions and tiersDescriptionPricingTranslation APIBasic - use pretrained modelFirst 500,000 characters per monthFree500,000 to 1 billion characters per month$20per million charactersDocument translation$0.08per pageAdvanced - use a custom modelFirst 500,000 characters per monthFree500,000 to 250 million characters per month$80per million charactersAbove 250 million charactersSee detailed pricing pageDocument translation$0.25per pageTranslation HubBasic tierEnterprise document translation platform with general-purpose Google pretrained models.$0.15per page per target languageAdvanced tierOn top of basic features, it supports translation memory, use of custom translation models, human review, and machine translation quality prediction (MTQP) scores.$0.50per page per target languageAutoML TranslationCustom translation model training$45per hour, $300 max per training jobDetails: Translation API, Translation Hub and AutoML Translation.Translation APIEditions and tiersBasic - use pretrained modelDescriptionFirst 500,000 characters per monthPricingFreeEditions and tiers500,000 to 1 billion characters per monthDescription$20per million charactersEditions and tiersDocument translationDescription$0.08per pageAdvanced - use a custom modelEditions and tiersFirst 500,000 characters per monthDescriptionFreeEditions and tiers500,000 to 250 million characters per monthDescription$80per million charactersEditions and tiersAbove 250 million charactersDescriptionSee detailed pricing pageEditions and tiersDocument translationDescription$0.25per pageTranslation 
HubEditions and tiersBasic tierDescriptionEnterprise document translation platform with general-purpose Google pretrained models.Pricing$0.15per page per target languageAdvanced tierEditions and tiersOn top of basic features, it supports translation memory, use of custom translation models, human review, and machine translation quality prediction (MTQP) scores.Description$0.50per page per target languageAutoML TranslationEditions and tiersDescriptionCustom translation model trainingPricing$45per hour, $300 max per training jobDetails: Translation API, Translation Hub and AutoML Translation.PRICING CALCULATOREstimate the cost of your project by pulling in all the tools you need in a single place.Estimate your costCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization's unique needs.Request a quoteTake the next step with Google CloudNew customers get $300 in free creditsTry Translation API freeTalk to an expert to discuss your use caseContact salesCodelabs: Using the Translation API with C#Get startedHow-to: Translation Hub admin setupView nowQuickstart: Create a custom translation modelTry nowGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Unlocking_Legacy_Applications_Using_APIs.txt b/Unlocking_Legacy_Applications_Using_APIs.txt new file mode 100644 index 0000000000000000000000000000000000000000..71bca45dcc875514aa685c766c6ff5f345820f65 --- /dev/null +++ b/Unlocking_Legacy_Applications_Using_APIs.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/unlocking-legacy-applications +Date Scraped: 2025-02-23T11:58:56.076Z + +Content: +Unlocking legacy applications using APIsExtend the life of legacy applications, build modern services, and quickly deliver new experiences with Google’s API management platform as an abstraction layer on top of existing services.Contact usApigee is recognized as a Leader in the 2021 Gartner Magic Quadrant for Full Life Cycle API ManagementRegister to download the reportBenefitsAdapt to changing market needs while leveraging legacy systemsSpeed up time to marketLeverage existing legacy systems to drastically reduce the time it takes to bring new offerings and services into the hands of customers.Deliver dynamic customer experiencesBuild and deliver modern mobile, web, or voice applications and other connected experiences for customers, partners, and employees.Empower developers and partnersProvide partners and external developers with API-based service access for smoother onboarding and easier development.Key featuresConnect legacy and modern services seamlesslyPackage legacy applications with APIs as an abstraction layerAn API abstraction and modernization layer insulates client-facing applications from shifting backend services. Legacy back ends hold critical data that is required for client applications and therefore must be unlocked in order to be scaled out to new cloud services and apps. With an API management platform, you can add security, analytics, and scalability to legacy services.Connect to modern interfacesLegacy applications and services hold critical data that can enable new mobile, web, and voice experiences for your customers. To do this at an accelerated pace, you can modernize legacy services via RESTful interfaces, unlocking them to be consumed by new cloud services and apps.Ready to get started? 
Contact usHow API management supports the enterprise modernization journeySecuring APIs in the age of connected experiencesRegister to read ebook VideoHow to migrate to microservices like a proWatch videoVideoTackle disruption and cloud complexity using API managementWatch videoCustomersFaster time to market with ApigeeCase studyNationwide transforms IT shared services and drives an API-first strategy.5-min readVideoWoolworths connects technology and data with APIs for personalized shopping.47:28VideoL.L.Bean uses Apigee to power customer experience with new and legacy services.37:31Blog postHow APIs help National Bank of Pakistan modernize the banking experience.5-min readSee all customersPartnersWork with our partners to build exceptional digital experiences with ApigeeSee all partnersRelated servicesRecommended products and servicesUnlock legacy applications with Apigee, connect them to new cloud services, and engage your customers with AI and ML.API management with ApigeeGet control and visibility over the APIs that connect legacy and modern applications with Apigee API management.Engaging experiences with AI and machine learningProvide more engaging customer experiences by infusing your applications with sight, language, and conversation.Modern application development with AnthosUse Anthos to build new services and GKE for a reliable, efficient, and secure way to run Kubernetes microservice clusters.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Use_AI_for_security.txt b/Use_AI_for_security.txt new file mode 100644 index 0000000000000000000000000000000000000000..3694db52c1527c6db8b86c6fc7d22d18f498cb1d --- /dev/null +++ b/Use_AI_for_security.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security/use-ai-for-security +Date Scraped: 2025-02-23T11:43:07.526Z + +Content: +Home Docs Cloud Architecture Center Send feedback Use AI for security Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations to use AI to help you improve the security of your cloud workloads. Because of the increasing number and sophistication of cyber attacks, it's important to take advantage of AI's potential to help improve security. AI can help to reduce the number of threats, reduce the manual effort required by security professionals, and help compensate for the scarcity of experts in the cyber-security domain. Principle overview Use AI capabilities to improve your existing security systems and processes. You can use Gemini in Security as well as the intrinsic AI capabilities that are built into Google Cloud services. These AI capabilities can transform security by providing assistance across every stage of the security lifecycle. For example, you can use AI to do the following: Analyze and explain potentially malicious code without reverse engineering. Reduce repetitive work for cyber-security practitioners. Use natural language to generate queries and interact with security event data. Surface contextual information. Offer recommendations for quick responses. Aid in the remediation of events. 
Summarize high-priority alerts for misconfigurations and vulnerabilities, highlight potential impacts, and recommend mitigations. Levels of security autonomy AI and automation can help you achieve better security outcomes when you're dealing with ever-evolving cyber-security threats. By using AI for security, you can achieve greater levels of autonomy to detect and prevent threats and improve your overall security posture. Google defines four levels of autonomy when you use AI for security, and they outline the increasing role of AI in assisting and eventually leading security tasks: Manual: Humans run all of the security tasks (prevent, detect, prioritize, and respond) across the entire security lifecycle. Assisted: AI tools, like Gemini, boost human productivity by summarizing information, generating insights, and making recommendations. Semi-autonomous: AI takes primary responsibility for many security tasks and delegates to humans only when required. Autonomous: AI acts as a trusted assistant that drives the security lifecycle based on your organization's goals and preferences, with minimal human intervention. Recommendations The following sections describe the recommendations for using AI for security. The sections also indicate how the recommendations align with Google's Secure AI Framework (SAIF) core elements and how they're relevant to the levels of security autonomy. Enhance threat detection and response with AI Simplify security for experts and non-experts Automate time-consuming security tasks with AI Incorporate AI into risk management and governance processes Implement secure development practices for AI systems Note: For more information about Google Cloud's overall vision for using Gemini across our products to accelerate AI for security, see the whitepaper Google Cloud's Product Vision for AI-Powered Security. Enhance threat detection and response with AI This recommendation is relevant to the following focus areas: Security operations (SecOps) Logging, auditing, and monitoring AI can analyze large volumes of security data, offer insights into threat actor behavior, and automate the analysis of potentially malicious code. This recommendation is aligned with the following SAIF elements: Extend detection and response to bring AI into your organization's threat universe. Automate defenses to keep pace with existing and new threats. Depending on your implementation, this recommendation can be relevant to the following levels of autonomy: Assisted: AI helps with threat analysis and detection. Semi-autonomous: AI takes on more responsibility for the security task. Google Threat Intelligence, which uses AI to analyze threat actor behavior and malicious code, can help you implement this recommendation. Simplify security for experts and non-experts This recommendation is relevant to the following focus areas: Security operations (SecOps) Cloud governance, risk, and compliance AI-powered tools can summarize alerts and recommend mitigations, and these capabilities can make security more accessible to a wider range of personnel. This recommendation is aligned with the following SAIF elements: Automate defenses to keep pace with existing and new threats. Harmonize platform-level controls to ensure consistent security across the organization. Depending on your implementation, this recommendation can be relevant to the following levels of autonomy: Assisted: AI helps you to improve the accessibility of security information. 
Semi-autonomous: AI helps to make security practices more effective for all users. Gemini in Security Command Center can provide summaries of alerts for misconfigurations and vulnerabilities. Automate time-consuming security tasks with AI This recommendation is relevant to the following focus areas: Infrastructure security Security operations (SecOps) Application security AI can automate tasks such as analyzing malware, generating security rules, and identifying misconfigurations. These capabilities can help to reduce the workload on security teams and accelerate response times. This recommendation is aligned with the SAIF element about automating defenses to keep pace with existing and new threats. Depending on your implementation, this recommendation can be relevant to the following levels of autonomy: Assisted: AI helps you to automate tasks. Semi-autonomous: AI takes primary responsibility for security tasks, and only requests human assistance when needed. Gemini in Google SecOps can help to automate high-toil tasks by assisting analysts, retrieving relevant context, and making recommendations for next steps. Incorporate AI into risk management and governance processes This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. You can use AI to build a model inventory and risk profiles. You can also use AI to implement policies for data privacy, cyber risk, and third-party risk. This recommendation is aligned with the SAIF element about contextualizing AI system risks in surrounding business processes. Depending on your implementation, this recommendation can be relevant to the semi-autonomous level of autonomy. At this level, AI can orchestrate security agents that run processes to achieve your custom security goals. Implement secure development practices for AI systems This recommendation is relevant to the following focus areas: Application security AI and ML security You can use AI for secure coding, cleaning training data, and validating tools and artifacts. This recommendation is aligned with the SAIF element about expanding strong security foundations to the AI ecosystem. This recommendation can be relevant to all levels of security autonomy, because a secure AI system needs to be in place before AI can be used effectively for security. The recommendation is most relevant to the assisted level, where security practices are augmented by AI. To implement this recommendation, follow the Supply-chain Levels for Software Artifacts (SLSA) guidelines for AI artifacts and use validated container images. Previous arrow_back Use AI securely and responsibly Next Meet regulatory, compliance, and privacy needs arrow_forward Send feedback \ No newline at end of file diff --git a/Use_AI_securely_and_responsibly.txt b/Use_AI_securely_and_responsibly.txt new file mode 100644 index 0000000000000000000000000000000000000000..dfb95213f99e96365df0eb2288241c4c3bc1793b --- /dev/null +++ b/Use_AI_securely_and_responsibly.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/security/use-ai-securely-and-responsibly +Date Scraped: 2025-02-23T11:43:02.691Z + +Content: +Home Docs Cloud Architecture Center Send feedback Use AI securely and responsibly Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations to help you secure your AI systems. 
These recommendations are aligned with Google's Secure AI Framework (SAIF), which provides a practical approach to address the security and risk concerns of AI systems. SAIF is a conceptual framework that aims to provide industry-wide standards for building and deploying AI responsibly. Principle overview To help ensure that your AI systems meet your security, privacy, and compliance requirements, you must adopt a holistic strategy that starts with the initial design and extends to deployment and operations. You can implement this holistic strategy by applying the six core elements of SAIF. Google uses AI to enhance security measures, such as identifying threats, automating security tasks, and improving detection capabilities, while keeping humans in the loop for critical decisions. Google emphasizes a collaborative approach to advancing AI security. This approach involves partnering with customers, industries, and governments to enhance the SAIF guidelines and offer practical, actionable resources. The recommendations to implement this principle are grouped within the following sections: Recommendations to use AI securely Recommendations for AI governance Recommendations to use AI securely To use AI securely, you need both foundational security controls and AI-specific security controls. This section provides an overview of recommendations to ensure that your AI and ML deployments meet the security, privacy, and compliance requirements of your organization. For an overview of architectual principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. Define clear goals and requirements for AI usage This recommendation is relevant to the following focus areas: Cloud governance, risk, and compliance AI and ML security This recommendation aligns with the SAIF element about contextualizing AI system risks in the surrounding business processes. When you design and evolve AI systems, it's important to understand your specific business goals, risks, and compliance requirements. Keep data secure and prevent loss or mishandling This recommendation is relevant to the following focus areas: Infrastructure security Identity and access management Data security Application security AI and ML security This recommendation aligns with the following SAIF elements: Expand strong security foundations to the AI ecosystem. This element includes data collection, storage, access control, and protection against data poisoning. Contextualize AI system risks. Emphasize data security to support business objectives and compliance. Keep AI pipelines secure and robust against tampering This recommendation is relevant to the following focus areas: Infrastructure security Identity and access management Data security Application security AI and ML security This recommendation aligns with the following SAIF elements: Expand strong security foundations to the AI ecosystem. As a key element of establishing a secure AI system, secure your code and model artifacts. Adapt controls for faster feedback loops. Because it's important for mitigation and incident response, track your assets and pipeline runs. 
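As a simple illustration of tracking model artifacts so that tampering can be detected, the following sketch records and verifies a SHA-256 digest for a model file before a pipeline deploys it. This is a minimal sketch, not part of any product or blueprint; the file path and the expected digest are placeholders that you would replace with values recorded at training time.

import hashlib

def sha256_digest(path: str, chunk_size: int = 1 << 20) -> str:
    # Compute a SHA-256 digest of a local artifact (for example, a model file).
    digest = hashlib.sha256()
    with open(path, "rb") as artifact:
        for chunk in iter(lambda: artifact.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> None:
    # Stop the pipeline if the artifact no longer matches the recorded digest.
    actual = sha256_digest(path)
    if actual != expected_digest:
        raise RuntimeError(f"Artifact {path} failed integrity check: {actual}")

# Example with placeholder values: record the digest when the model is trained,
# then verify it again immediately before deployment.
# verify_artifact("model.joblib", expected_digest="<digest recorded at training time>")

Recording digests alongside pipeline run metadata gives you a simple audit point for incident response, independent of any specific training framework.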
Deploy apps on secure systems using secure tools and artifacts This recommendation is relevant to the following focus areas: Infrastructure security Identity and access management Data security Application security AI and ML security Using secure systems and validated tools and artifacts in AI-based applications aligns with the SAIF element about expanding strong security foundations to the AI ecosystem and supply chain. This recommendation can be addressed through the following steps: Implement a secure environment for ML training and deployment Use validated container images Apply Supply-chain Levels for Software Artifacts (SLSA) guidelines Protect and monitor inputs This recommendation is relevant to the following focus areas: Logging, auditing, and monitoring Security operations AI and ML security This recommendation aligns with the SAIF element about extending detection and response to bring AI into an organization's threat universe. To prevent issues, it's critical to manage prompts for generative AI systems, monitor inputs, and control user access. Recommendations for AI governance All of the recommendations in this section are relevant to the following focus area: Cloud governance, risk, and compliance. Google Cloud offers a robust set of tools and services that you can use to build responsible and ethical AI systems. We also offer a framework of policies, procedures, and ethical considerations that can guide the development, deployment, and use of AI systems. As reflected in our recommendations, Google's approach for AI governance is guided by the following principles: Fairness Transparency Accountability Privacy Security Use fairness indicators Vertex AI can detect bias during the data collection or post-training evaluation process. Vertex AI provides model evaluation metrics like data bias and model bias to help you evaluate your model for bias. These metrics are related to fairness across different categories like race, gender, and class. However, interpreting statistical deviations isn't a straightforward exercise, because differences across categories might not be a result of bias or a signal of harm. Use Vertex Explainable AI To understand how the AI models make decisions, use Vertex Explainable AI. This feature helps you to identify potential biases that might be hidden in the model's logic. This explainability feature is integrated with BigQuery ML and Vertex AI, which provide feature-based explanations. You can either perform explainability in BigQuery ML or register your model in Vertex AI and perform explainability in Vertex AI. Track data lineage Track the origin and transformation of data that's used in your AI systems. This tracking helps you understand the data's journey and identify potential sources of bias or error. Data lineage is a Dataplex feature that lets you track how data moves through your systems: where it comes from, where it's passed to, and what transformations are applied to it. Establish accountability Establish clear responsibility for the development, deployment, and outcomes of your AI systems. Use Cloud Logging to log key events and decisions made by your AI systems. The logs provide an audit trail to help you understand how the system is performing and identify areas for improvement. Use Error Reporting to systematically analyze errors made by the AI systems. This analysis can reveal patterns that point to underlying biases or areas where the model needs further refinement. 
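For the Cloud Logging part of this recommendation, the following sketch writes one structured audit entry per model decision by using the Cloud Logging client library. The log name, field names, and example values are illustrative assumptions, not a prescribed schema.

from google.cloud import logging

def log_ai_decision(model_name: str, input_id: str, decision: str, score: float) -> None:
    # Write a structured audit record for a single decision made by an AI system.
    client = logging.Client()
    logger = client.logger("ai-decision-audit")  # hypothetical log name
    logger.log_struct(
        {
            "model_name": model_name,
            "input_id": input_id,
            "decision": decision,
            "score": score,
        },
        severity="INFO",
    )

# Example call with placeholder values:
# log_ai_decision("propensity-model", "user-123", "high_propensity", 0.87)

Structured entries like these can later be filtered and aggregated in Logs Explorer to reconstruct how the system behaved over time.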
Implement differential privacy During model training, add noise to the data in order to make it difficult to identify individual data points but still enable the model to learn effectively. With SQL in BigQuery, you can transform the results of a query with differentially private aggregations. Previous arrow_back Implement preemptive cyber defense Next Use AI for security arrow_forward Send feedback \ No newline at end of file diff --git a/Use_Google_Cloud_Armor,_load_balancing,_and_Cloud_CDN_to_deploy_programmable_global_front_ends.txt b/Use_Google_Cloud_Armor,_load_balancing,_and_Cloud_CDN_to_deploy_programmable_global_front_ends.txt new file mode 100644 index 0000000000000000000000000000000000000000..d5c6b845d44d030b95eec60bf1f98bdbf3f2e819 --- /dev/null +++ b/Use_Google_Cloud_Armor,_load_balancing,_and_Cloud_CDN_to_deploy_programmable_global_front_ends.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deploy-programmable-gfe-cloud-armor-lb-cdn +Date Scraped: 2025-02-23T11:56:12.684Z + +Content: +Home Docs Cloud Architecture Center Send feedback Use Google Cloud Armor, load balancing, and Cloud CDN to deploy programmable global front ends Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-04-04 UTC This document provides a reference architecture for a web application that's hosted on Google Cloud. The architecture uses a global front end which incorporates Google Cloud best practices to help scale, secure, and accelerate the delivery of your internet-facing applications. The architecture includes support for Cloud Build, as well as third-party continuous integration (CI) and continuous delivery (CD) tools like Jenkins and GitLab. This architecture is intended for developers and app owners who want to scale their application with a load balancer, and protect their applications from distributed denial-of-service (DDoS) and web-based attacks with a web application firewall (WAF). Architecture The following diagram shows the architecture that this document describes. In this architecture, the application is load balanced with global external Application Load Balancers, which distribute HTTP and HTTPS traffic across multiple backend instances, across multiple regions. Cloud CDN accelerates internet-facing applications by using Google's edge points of presence (PoPs) and works with the global external Application Load Balancer to deliver content to users. The backends are protected by Google Cloud Armor security policies that provide Layer 7 filtering by scrubbing incoming requests for common web attacks or other Layer 7 attributes, helping to block traffic before it reaches the load-balanced backend services. Protection against volumetric DDoS attacks is enabled by default. When a user requests content from your service, that request is sent to the global front end for internet-facing applications, which is provided by the Cross-Cloud Network. The request is evaluated by Google Cloud Armor security policies, starting with Google Cloud Armor edge security policies. If the request is allowed and can be fulfilled by Cloud CDN, the content is retrieved from the Google Cloud Armor cache and sent back to the user. If the request results in a cache miss, it's evaluated by backend policies and then, according to the policy's rules, denied or fulfilled by your backend server. 
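After you deploy an architecture like this, you can check from the client side whether responses for a given URL appear to be served from cache. The following sketch is illustrative only and isn't part of the reference architecture's code; it assumes that cached responses carry the standard Age header, which shared caches such as Cloud CDN commonly add, and the URL is a placeholder.

import requests

def probe_cache(url: str) -> None:
    # Request the same URL twice and report whether the second response looks
    # like a cache hit (a nonzero Age header is a common signal).
    for attempt in (1, 2):
        response = requests.get(url, timeout=10)
        age = response.headers.get("Age")
        cache_control = response.headers.get("Cache-Control")
        print(f"attempt {attempt}: status={response.status_code} "
              f"age={age} cache-control={cache_control}")

# Example with a placeholder URL behind the load balancer:
# probe_cache("https://example.com/static/logo.png")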
Architecture components The preceding diagram includes the following components: Global external Application Load Balancer: This Application Load Balancer is a proxy-based Layer 7 load balancer that lets you run and scale your services. The Application Load Balancer distributes HTTP and HTTPS traffic to backends that are hosted on a variety of Google Cloud platforms. The Application Load Balancer has the following features: Configurable backend: This architecture uses two Managed Instance Groups (MIGs) in different regions, but you can configure any backend that the global external Application Load Balancer supports. You can use the same load balancer for GKE, Cloud Run, Cloud Run functions, and App Engine applications, as well as those hosted on-premises and on other clouds using a different backend configuration. To learn more, see Application Load Balancer overview. Traffic splitting: You can use the Application Load Balancer for traffic management, including the management of software versions by sending different users to different backend servers. In the architecture described in this document, there is a 60/40 simple traffic split. However, you can change this split to create more complex traffic management schemes. To learn about additional configuration options, see the configurable timeouts and retries and determine your preferred balancing mode. Cloud CDN: The Cloud CDN platform acts as a cache. It's deployed with the origin server to provide the full suite of Cloud CDN features, including QUIC and HTTP/2, as well as routing and caching controls. This approach allows your application to scale globally without compromising performance, and also reduce bandwidth and front-end compute costs. The default configuration that the global front end uses is based on Cloud CDN content delivery best practices and web security best practices. Google Cloud Armor: This component includes DDoS Protection and WAF rules. The architecture has the following basic Google Cloud Armor configuration, which helps to mitigate against common threat vectors: Default protection against volumetric DDoS attacks (Layer 3 and Layer 4). Preconfigured WAF rules based on ModSecurity Core Rule Set CRS 3.3. These rules enable Google Cloud Armor to evaluate dozens of distinct traffic signatures by referring to pre-named rules, rather than requiring you to define each signature manually. A basic configuration of Google Cloud Armor edge security policy to filter incoming requests and control access to protected backend services and Cloud Storage buckets. Products used This reference architecture uses the following Google Cloud products: Cloud Load Balancing Cloud CDN Google Cloud Armor Design considerations This section provides guidance to help you use this document as a starting point to develop an architecture that meets your specific requirements for security, reliability, operational efficiency, cost, and performance. Security, privacy, and compliance This section describes additional factors that you should consider when you use this reference architecture to deploy the web application. Establish a security baseline To help to further enhance your security, the architecture described in this document is also compatible with the Enterprise foundations blueprint. The blueprint helps organizations that are using Google Cloud to establish a secure baseline for all future workloads, including the setup of Identity and Access Management (IAM), Cloud Key Management Service, and Security Command Center. 
Protect user data with Cloud CDN In this architecture, we recommend that you don't cache user-specific content. For caching HTML (text/html) and JSON (application/json) content types, set explicit cache-control headers in the Cloud CDN response. Make sure that you don't cache one user's data and serve it to all users. Control access to your application with IAP The architecture is also compatible with Identity-Aware Proxy (IAP). IAP verifies a user's identity and then determines whether that user should be permitted to access an application. To enable IAP for the Application Load Balancer for both the global external mode or the classic mode, you enable it on the backend services of the load balancer. Inbound HTTP/HTTPS requests are evaluated by Google Cloud Armor before they are sent for load balancing by the Application Load Balancer. If Google Cloud Armor blocks a request, IAP doesn't evaluate the request. If Google Cloud Armor allows a request, IAP then evaluates that request. The request is blocked if IAP doesn't authenticate the request. To learn more, see Integrating Google Cloud Armor with other Google products. Cost optimization As a general guideline, using Cloud CDN together with Google Cloud Armor can help to minimize the effect of data transfer out charges. Cloud CDN Static objects that are served to the client from the cache don't transit through the load balancer. An effective caching strategy can reduce the amount of outbound data being processed by the load balancer and lower your costs. Google Cloud Armor Google Cloud Armor helps you to lower costs by preventing your account from being charged for unwanted traffic. Requests that are blocked by Google Cloud Armor don't generate a response from your app, effectively reducing the amount of outbound data processed by the load balancer. The effect on your costs depends on the percentage of undesirable traffic blocked by the Google Cloud Armor security policies that you implement. Final costs can also vary, depending on how many services or applications you want to protect, the number of Google Cloud Armor policies and rules that you have, cache fill and egress, and data volume. To learn more, see the following: Google Cloud Armor pricing Cloud Load Balancing pricing Cloud CDN pricing To find the price for your specific deployment scenario, see the Google Cloud pricing calculator Deployment To deploy this reference architecture, use the Terraform example. To learn more, see the README file. The web_app_protection_example folder includes the (main.tf) file. The code in this file creates the architecture described in this document, and provides additional support for automatic deployment. The folder structure in the Terraform folder is as follows: Source code repository: The Web Application Protection Example is part of the Web Application and API Protection (WAAP) repository. CD and CI: The build folder contains the following descriptive files for Jenkins, GitLab, and Cloud Build: Jenkins: This repository includes the Jenkins file that contains the rules that the pipeline executes. GitLab: This repository includes a .gitlab-ci YAML file that contains the rules that the GitLab pipeline executes. Cloud Build: This repository includes the Cloud Build file that contains the rules based on branch names. The repository includes an option for multi-environment (production and development) deployment. For more information, see the README file. 
When you commit a change to any branch that your pipeline is based on, those changes trigger a pipeline run and the changes are integrated into a new release once it completes. When you pull the toolkit for the first time, the solution will be loaded to your chosen Google Cloud project. What's next Learn more about the best practices for the Google Cloud products used in this reference architecture: Web security best practices External Application Load Balancer performance best practices Content delivery best practices Best practices for tuning Google Cloud Armor WAF rules Cloud Armor Enterprise: The Google Cloud Armor capabilities in this architecture are available under the Google Cloud Armor Standard tier. Enrolling your project to Cloud Armor Enterprise lets you use additional features such as the following: Threat Intelligence, which lets you allow or block traffic to external Application Load Balancers based on several categories of threat intelligence data. Adaptive Protection, which helps to protect your Google Cloud applications, websites, and services against Layer 7 DDoS attacks such as HTTP floods, as well as other high-frequency Layer 7 (application-level) malicious activities. Adaptive Protection builds machine learning models that detect and alert on anomalous activity, generate a signature describing the potential attack, and generate a custom Google Cloud Armor WAF rule to block the signature. DDoS attack visibility, which provides visibility through metrics, as well as logging of events such as Layer 3 Layer 4 volumetric attack attempts. Additional services such as DDoS response support and DDoS bill protection. To learn more, see Cloud Armor Enterprise overview For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthors: Lihi Shadmi | Product ManagerDavid Tu | Customer EngineerOther contributors: Alex Maclinovsky | Enterprise ArchitectAnderson Duboc | Customer EngineerGrant Sorbo | Solutions ArchitectMichele Chubirka | Cloud Security AdvocateRob Harman | Technical Solutions Engineer ManagerSusan Wu | Outbound Product Manager Send feedback \ No newline at end of file diff --git a/Use_Vertex_AI_Pipelines_for_propensity_modeling_on_Google_Cloud.txt b/Use_Vertex_AI_Pipelines_for_propensity_modeling_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..fcb7ba555c7c2ab3e820a50a2d76e09e66b60d82 --- /dev/null +++ b/Use_Vertex_AI_Pipelines_for_propensity_modeling_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/use-vertex-ai-pipelines-for-propensity-modeling +Date Scraped: 2025-02-23T11:46:41.302Z + +Content: +Home Docs Cloud Architecture Center Send feedback Use Vertex AI Pipelines for propensity modeling on Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-25 UTC This document describes an example of a pipeline implemented in Google Cloud that performs propensity modeling. It's intended for data engineers, machine learning engineers, or marketing science teams that create and deploy machine learning models. The document assumes that you know machine learning concepts and that you are familiar with Google Cloud, BigQuery, Vertex AI Pipelines, Python, and Jupyter notebooks. It also assumes that you have an understanding of Google Analytics 4 and of the raw export feature in BigQuery. The pipeline that you work with uses Google Analytics sample data. 
The pipeline builds several models by using BigQuery ML and XGBoost, and you run the pipeline by using Vertex AI Pipelines. This document describes the processes of training the models, evaluating them, and deploying them. It also describes how you can automate the entire process. The full pipeline code is in a Jupyter notebook in a GitHub repository. What is propensity modeling? Propensity modeling predicts actions that a consumer might take. Examples of propensity modeling include predicting which consumers are likely to buy a product, to sign up for a service, or even to churn and no longer be an active customer for a brand. The output of a propensity model is a score between 0 and 1 for each consumer, where this score represents how likely the consumer is to take that action. One of the key drivers pushing organizations toward propensity modeling is the need to do more with first-party data. For marketing use cases, the best propensity models include signals both from online and offline sources, such as site analytics and CRM data. This demo uses GA4 sample data that's in BigQuery. For your use case, you might want to consider additional offline signals. How MLOps simplifies your ML pipelines Most ML models aren't used in production. Model results generate insights, and frequently after data science teams finish a model, an ML engineering or software engineering team needs to wrap it in code for production using a framework such as Flask or FastAPI. This process often requires the model to be built in a new framework, which means that the data must be retransformed. This work can take weeks or months, and many models therefore don't make it to production. Machine learning operations (MLOps) has become important for getting value from ML projects, and MLOps and is now an evolving skill set for data science organizations. To help organizations understand this value, Google Cloud has published a Practitioners Guide to MLOps that provides an overview of MLOps. By using MLOps principles and Google Cloud, you can push models to an endpoint using an automatic process that removes much of the complexity of the manual process. The tools and process described in this document discuss an approach to owning your pipeline end to end, which helps you get your models into production. The practitioners guide document mentioned earlier provides a horizontal solution and an outline of what's possible using MLOps and Google Cloud. What is Vertex AI Pipelines? Vertex AI Pipelines lets you run ML pipelines that were built using either Kubeflow Pipelines SDK or TensorFlow Extended (TFX). Without Vertex AI, running either of these open source frameworks at scale requires you to set up and maintain your own Kubernetes clusters. Vertex AI Pipelines addresses this challenge. Because it's a managed service, it scales up or scales down as required, and it doesn't require ongoing maintenance. Each step in the Vertex AI Pipelines process consists of an independent container that can take input or produce output in the form of artifacts. For example, if a step in the process builds your dataset, the output is the dataset artifact. This dataset artifact can be used as the input to the next step. Because each component is a separate container, you need to provide information for each component of the pipeline, such as the name of the base image and a list of any dependencies. 
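For example, a minimal component definition might look like the following sketch. It isn't part of the notebook, and the function name and its logic are placeholders, but the decorator shows where you declare the base image and the package dependencies for the component's container, in the same way that the components later in this document do.

from kfp.v2.dsl import component

@component(
    # Each component runs in its own container; declare the base image and
    # the packages that the container needs.
    base_image="python:3.9",
    packages_to_install=["google-cloud-storage"],
)
def list_bucket_objects(bucket_name: str) -> int:
    # Illustrative step: count the objects in a Cloud Storage bucket.
    from google.cloud import storage
    client = storage.Client()
    blobs = list(client.list_blobs(bucket_name))
    return len(blobs)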
The pipeline build process The example described in this document uses a Jupyter notebook to create the pipeline components and to compile, run, and automate them. As noted earlier, the notebook is in a GitHub repository. You can run the notebook code using a Vertex AI Workbench user-managed notebooks instance, which handles authentication for you. Vertex AI Workbench lets you work with notebooks to create machines, build notebooks, and connect to Git. (Vertex AI Workbench includes many more features, but those aren't covered in this document.) Vertex AI Workbench user-managed notebooks is deprecated. On January 30, 2025, support for user-managed notebooks will end and the ability to create user-managed notebooks instances will be removed. When the pipeline run finishes, a diagram similar to the following one is generated in Vertex AI Pipelines: The preceding diagram is a directed acyclic graph (DAG). Building and reviewing the DAG is a central step to understanding your data or ML pipeline. The key attributes of DAGs are that components flow in a single direction (in this case, from top to bottom) and that no cycle occurs—that is, a parent component doesn't rely on its child component. Some components can occur in parallel, while others have dependencies and therefore occur in series. The green checkbox in each component signifies that the code ran properly. If errors occurred, then you see a red exclamation point. You can click each component in the diagram to view more details of the job. The DAG diagram is included in this section of the document to serve as a blueprint for each component that's built by the pipeline. The following list provides a description of each component. The complete pipeline performs the following steps, as shown in the DAG diagram: create-input-view: This component creates a BigQuery view. The component copies SQL from a Cloud Storage bucket and fills in parameter values that you provide. This BigQuery view is the input dataset that's used for all models later in the pipeline. build-bqml-logistic: The pipeline uses BigQuery ML to create a logistic regression model. When this component completes, a new model is viewable in the BigQuery console. You can use this model object to view model performance and later to build predictions. evaluate-bqml-logistic: The pipeline uses this component to create a precision/recall curve (logistic_data_path in the DAG diagram) for the logistic regression. This artifact is stored in a Cloud Storage bucket. build-bqml-xgboost: This component creates an XGBoost model by using BigQuery ML. When this component completes, you can view a new model object (system.Model) in the BigQuery console. You can use this object to view model performance and later to build predictions. evaluate-bqml-xgboost: This component creates a precision/recall curve named xgboost_data_path for the XGBoost model. This artifact is stored in a Cloud Storage bucket. build-xgb-xgboost: The pipeline creates an XGBoost model. This component uses Python instead of BigQuery ML so that you can see different approaches to creating the model. When this component completes, it stores a model object and performance metrics in a Cloud Storage bucket. deploy-xgb: This component deploys the XGBoost model. It creates an endpoint that allows either batch or online predictions. You can explore the endpoint in the Models tab in the Vertex AI console page. The endpoint autoscales to match traffic. build-bqml-automl: The pipeline creates an AutoML model by using BigQuery ML. 
When this component completes, a new model object is viewable in the BigQuery console. You can use this object to view model performance and later to build predictions. evaluate-bqml-automl: The pipeline creates a precision/recall curve for the AutoML model. The artifact is stored in a Cloud Storage bucket. Notice that the process doesn't push the BigQuery ML models to an endpoint. That's because you can generate predictions directly from the model object that's in BigQuery. As you decide between using BigQuery ML and using other libraries for your solution, consider how predictions need to be generated. If a daily batch prediction satisfies your needs, then staying in the BigQuery environment can simplify your workflow. However, if you require real-time predictions, or if your scenario needs functionality that's in another library, then follow the steps in this document to push your saved model to an endpoint. Costs In this document, you use the following billable components of Google Cloud: BigQuery Vertex AI Cloud Storage Cloud Run functions Cloud Scheduler To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial. Before you begin Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits. Create an account In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project. Go to project selector Make sure that billing is enabled for your Google Cloud project. The Jupyter notebook for this scenario The tasks for creating and building the pipeline are built into a Jupyter notebook that's in a GitHub repository. To perform the tasks, you get the notebook and then run the code cells in the notebook in order. The flow described in this document assumes you're running the notebooks in Vertex AI Workbench. Open the Vertex AI Workbench environment You start by cloning the GitHub repository into a Vertex AI Workbench environment. In the Google Cloud console, select the project where you want to create the notebook. Go to the Vertex AI Workbench page. Go to the Vertex AI Workbench page On the User-managed notebooks tab, click New Notebook. In the list of notebook types, choose a Python 3 notebook. In the New notebook dialog, click Advanced Options and then under Machine type, select the machine type that you want to use. If you are unsure, then choose n1-standard-1 (1 vCPU, 3.75 GB RAM). Click Create. It takes a few moments for the notebook environment to be created. When the notebook has been created, select the notebook, and then click Open JupyterLab. The JupyterLab environment opens in your browser. To launch a terminal tab, select File > New > Launcher. Click the Terminal icon in the Launcher tab.
In the terminal, clone the mlops-on-gcp GitHub repository: git clone https://github.com/GoogleCloudPlatform/cloud-for-marketing/ When the command finishes, you see the cloud-for-marketing folder in the file browser. Configure notebooks settings Before you run the notebook, you must configure it. The notebook requires a Cloud Storage bucket to store pipeline artifacts, so you start by creating that bucket. Create a Cloud Storage bucket where the notebook can store pipeline artifacts. The name of the bucket must be globally unique. In the cloud-for-marketing/marketing-analytics/predicting/kfp_pipeline/ folder, open the Propensity_Pipeline.ipynb notebook. In the notebook, set the value of the PROJECT_ID variable to the ID of the Google Cloud project where you want to run the pipeline. Set the value of the BUCKET_NAME variable to the name of the bucket that you just created. The remainder of this document describes code snippets that are important for understanding how the pipeline works. For the complete implementation, see the GitHub repository. Build the BigQuery view The first step in the pipeline generates the input data, which will be used to build each model. This Vertex AI Pipelines component generates a BigQuery view. To simplify the process of creating the view, some SQL has already been generated and saved in a text file in GitHub. The code for each component begins by decorating (modifying a parent class or function through attributes) the Vertex AI Pipelines component class. The code then defines the create_input_view function, which is a step in the pipeline. The function requires several inputs. Some of these values are currently hardcoded into the code, like the start date and end date. When you automate your pipeline, you can modify the code to use suitable values (for example, using the CURRENT_DATE function for a date), or you can update the component to take these values as parameters rather than keeping them hard-coded. You must also change the value of ga_data_ref to the name of your GA4 table, and set the value of the conversion variable to your conversion. (This example uses the public GA4 sample data.) The following listing shows the code for the create-input-view component. @component( # this component builds a BigQuery view, which will be the underlying source for model packages_to_install=["google-cloud-bigquery", "google-cloud-storage"], base_image="python:3.9", output_component_file="output_component/create_input_view.yaml", ) def create_input_view(view_name: str, data_set_id: str, project_id: str, bucket_name: str, blob_path: str ): from google.cloud import bigquery from google.cloud import storage client = bigquery.Client(project=project_id) dataset = client.dataset(data_set_id) table_ref = dataset.table(view_name) ga_data_ref = 'bigquery-public-data.google_analytics_sample.ga_sessions_*' conversion = "hits.page.pageTitle like '%Shopping Cart%'" start_date = '20170101' end_date = '20170131' def get_sql(bucket_name, blob_path): from google.cloud import storage storage_client = storage.Client() bucket = storage_client.get_bucket(bucket_name) blob = bucket.get_blob(blob_path) content = blob.download_as_string() return content def if_tbl_exists(client, table_ref): ... else: content = get_sql() content = str(content, 'utf-8') create_base_feature_set_query = content. 
format(start_date = start_date, end_date = end_date, ga_data_ref = ga_data_ref, conversion = conversion) shared_dataset_ref = client.dataset(data_set_id) base_feature_set_view_ref = shared_dataset_ref.table(view_name) base_feature_set_view = bigquery.Table(base_feature_set_view_ref) base_feature_set_view.view_query = create_base_feature_set_query.format(project_id) base_feature_set_view = client.create_table(base_feature_set_view) Build the BigQuery ML model After the view is created, you run the component named build_bqml_logistic to build a BigQuery ML model. This block of the notebook is a core component. Using the training view that you created in the first block, it builds a BigQuery ML model. In this example, the notebook uses logistic regression. For information about model types and the hyperparameters available, see the BigQuery ML reference documentation. The following listing shows the code for this component. @component( # this component builds a logistic regression with BigQuery ML packages_to_install=["google-cloud-bigquery"], base_image="python:3.9", output_component_file="output_component/create_bqml_model_logistic.yaml" ) def build_bqml_logistic(project_id: str, data_set_id: str, model_name: str, training_view: str ): from google.cloud import bigquery client = bigquery.Client(project=project_id) model_name = f"{project_id}.{data_set_id}.{model_name}" training_set = f"{project_id}.{data_set_id}.{training_view}" build_model_query_bqml_logistic = ''' CREATE OR REPLACE MODEL `{model_name}` OPTIONS(model_type='logistic_reg' , INPUT_LABEL_COLS = ['label'] , L1_REG = 1 , DATA_SPLIT_METHOD = 'RANDOM' , DATA_SPLIT_EVAL_FRACTION = 0.20 ) AS SELECT * EXCEPT (fullVisitorId, label), CASE WHEN label is null then 0 ELSE label end as label FROM `{training_set}` '''.format(model_name = model_name, training_set = training_set) job_config = bigquery.QueryJobConfig() client.query(build_model_query_bqml_logistic, job_config=job_config) Use XGBoost instead of BigQuery ML The component illustrated in the previous section uses BigQuery ML. The next section of the notebooks shows you how to use XGBoost in Python directly instead of using BigQuery ML. You run the component named build_bqml_xgboost to build the component to run a standard XGBoost classification model with a grid search. The code then saves the model as an artifact in the Cloud Storage bucket that you created. The function supports additional parameters (metrics and model) for output artifacts; these parameters are required by Vertex AI Pipelines. @component( # this component builds an xgboost classifier with xgboost packages_to_install=["google-cloud-bigquery", "xgboost", "pandas", "sklearn", "joblib", "pyarrow"], base_image="python:3.9", output_component_file="output_component/create_xgb_model_xgboost.yaml" ) def build_xgb_xgboost(project_id: str, data_set_id: str, training_view: str, metrics: Output[Metrics], model: Output[Model] ): ... data_set = f"{project_id}.{data_set_id}.{training_view}" build_df_for_xgboost = ''' SELECT * FROM `{data_set}` '''.format(data_set = data_set) ... 
xgb_model = XGBClassifier(n_estimators=50, objective='binary:hinge', silent=True, nthread=1, eval_metric="auc") random_search = RandomizedSearchCV(xgb_model, param_distributions=params, n_iter=param_comb, scoring='precision', n_jobs=4, cv=skf.split(X_train,y_train), verbose=3, random_state=1001 ) random_search.fit(X_train, y_train) xgb_model_best = random_search.best_estimator_ predictions = xgb_model_best.predict(X_test) score = accuracy_score(y_test, predictions) auc = roc_auc_score(y_test, predictions) precision_recall = precision_recall_curve(y_test, predictions) metrics.log_metric("accuracy",(score * 100.0)) metrics.log_metric("framework", "xgboost") metrics.log_metric("dataset_size", len(df)) metrics.log_metric("AUC", auc) dump(xgb_model_best, model.path + ".joblib") Build an endpoint You run the component named deploy_xgb to build an endpoint by using the XGBoost model from the previous section. The component takes the previous XGBoost model artifact, builds a container, and then deploys the endpoint, while also providing the endpoint URL as an artifact so that you can view it. When this step is completed, a Vertex AI endpoint has been created and you can view the endpoint in the console page for Vertex AI. @component( # Deploys xgboost model packages_to_install=["google-cloud-aiplatform", "joblib", "sklearn", "xgboost"], base_image="python:3.9", output_component_file="output_component/xgboost_deploy_component.yaml", ) def deploy_xgb( model: Input[Model], project_id: str, vertex_endpoint: Output[Artifact], vertex_model: Output[Model] ): from google.cloud import aiplatform aiplatform.init(project=project_id) deployed_model = aiplatform.Model.upload( display_name="tai-propensity-test-pipeline", artifact_uri = model.uri.replace("model", ""), serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.1-4:latest" ) endpoint = deployed_model.deploy(machine_type="n1-standard-4") # Save data to the output params vertex_endpoint.uri = endpoint.resource_name vertex_model.uri = deployed_model.resource_name Define the pipeline To define the pipeline, you define each operation based on the components that you created previously. You can then specify the order of the pipeline elements if those aren't explicitly called in the component. For example, the following code in the notebook defines a pipeline. In this case, the code requires the build_bqml_logistic_op component to run after the create_input_view_op component. @dsl.pipeline( # Default pipeline root. You can override it when submitting the pipeline. pipeline_root=PIPELINE_ROOT, # A name for the pipeline. name="pipeline-test", description='Propensity BigQuery ML Test' ) def pipeline(): create_input_view_op = create_input_view( view_name = VIEW_NAME, data_set_id = DATA_SET_ID, project_id = PROJECT_ID, bucket_name = BUCKET_NAME, blob_path = BLOB_PATH ) build_bqml_logistic_op = build_bqml_logistic( project_id = PROJECT_ID, data_set_id = DATA_SET_ID, model_name = 'bqml_logistic_model', training_view = VIEW_NAME ) # several components have been deleted for brevity build_bqml_logistic_op.after(create_input_view_op) build_bqml_xgboost_op.after(create_input_view_op) build_bqml_automl_op.after(create_input_view_op) build_xgb_xgboost_op.after(create_input_view_op) evaluate_bqml_logistic_op.after(build_bqml_logistic_op) evaluate_bqml_xgboost_op.after(build_bqml_xgboost_op) evaluate_bqml_automl_op.after(build_bqml_automl_op) Compile and run the pipeline You can now compile and run the pipeline. 
The following code in the notebook sets the enable_caching value to true in order to enable caching. When caching is enabled, any previous runs where a component has successfully completed won't be re-run. This flag is especially useful when you're testing the pipeline, because the run completes faster and uses fewer resources. compiler.Compiler().compile( pipeline_func=pipeline, package_path="pipeline.json" ) TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") run = pipeline_jobs.PipelineJob( display_name="test-pipeline", template_path="pipeline.json", job_id="test-{0}".format(TIMESTAMP), enable_caching=True ) run.run() Automate the pipeline At this stage, you've launched the first pipeline. You can check the Vertex AI Pipelines page in the console to see the status of this job. You can watch as each container is built and run. You can also track errors for specific components in this section by clicking each one. To schedule the pipeline, you build a Cloud Run function and use a scheduler that's similar to a cron job. The code in the last section of the notebook schedules the pipeline to run once a day, as shown in the following code snippet: from kfp.v2.google.client import AIPlatformClient api_client = AIPlatformClient(project_id=PROJECT_ID, region='us-central1' ) api_client.create_schedule_from_job_spec( job_spec_path='pipeline.json', schedule='0 0 * * *', enable_caching=False ) Use the finished pipeline in production The completed pipeline has performed the following tasks: Created an input dataset. Trained several models using both BigQuery ML and Python's XGBoost. Analyzed model results. Deployed the XGBoost model. You've also automated the pipeline by using Cloud Run functions and Cloud Scheduler to run daily. The pipeline that's defined in the notebook was created to illustrate ways to create various models. You wouldn't run the pipeline as it is currently built in a production scenario. However, you can use this pipeline as a guide and modify the components to suit your needs. For example, you can edit the feature-creation process to take advantage of your data, modify date ranges, and perhaps build alternative models. You would also pick the model from among those illustrated that best meets your production requirements. When the pipeline is ready for production, you might implement additional tasks. For example, you might implement a champion/challenger model, where each day a new model is created and both the new model (the challenger) and the existing one (the champion) are scored on new data. You put the new model into production only if its performance is better than the performance of the current model. To monitor the progress of your system, you might also keep a record of each day's model performance and visualize trending performance. Clean up Caution: Deleting a project has the following effects: Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project. Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project. If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.
In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. In the dialog, type the project ID, and then click Shut down to delete the project. What's next To learn about using MLOps to create production-ready ML systems, see Practitioners Guide to MLOps. To learn about Vertex AI, see the Vertex AI documentation. To learn about Kubeflow Pipelines, see the KFP documentation. To learn about TensorFlow Extended, see the TFX User Guide. For an overview of architectual principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Tai Conley | Cloud Customer EngineerOther contributor: Lars Ahlfors | Cloud Customer Engineer Send feedback \ No newline at end of file diff --git a/Use_a_Cloud_SDK_Client_Library.txt b/Use_a_Cloud_SDK_Client_Library.txt new file mode 100644 index 0000000000000000000000000000000000000000..2ef2ea7f457b53e42ecf77e3571eefcaf95520f9 --- /dev/null +++ b/Use_a_Cloud_SDK_Client_Library.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/application-development/cloud-client-api +Date Scraped: 2025-02-23T11:48:39.061Z + +Content: +Home Docs Cloud Architecture Center Send feedback Jump Start Solution: Cloud SDK Client Library Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-03-21 UTC This guide helps you understand and deploy the Cloud SDK Client Library solution. This solution lets you to interact with Google Cloud by using the Google Cloud SDK Client Libraries to process and aggregate data, and then show a radar visualization. Use this app to identify trends and observations based on the aggregate data. This solution will help you learn key skills for successfully making API calls. This solution uses the Google Cloud SDK Client Libraries to access Google Cloud APIs programmatically, leveraging Google Cloud services (Cloud Run jobs and Cloud Storage) to reduce boilerplate code. In this solution, the code parses a sample dataset (the 2018 Central Park Squirrel Census) with Cloud Run jobs and Cloud Storage. All Google Cloud SDK Client requests are logged into Cloud Logging, using a common pattern to enable troubleshooting and observability so you can see how long those requests take and where a process may encounter an error. This solution will also guide you through the execution of a Cloud Run job to process and store the dataset. APIs are the fundamental mechanism that developers use to interact with Google Cloud products and services. Google Cloud SDK provides language-specific Cloud Client libraries supporting eight different languages and their conventions and styles. Use this solution to learn how to use Google Cloud SDK Client Libraries to process data and deploy a frontend application where you can view the results. Note: This solution helps you explore the capabilities of Google Cloud. The solution is not intended to be used as is for production environments. For information about designing and setting up production-grade environments in Google Cloud, see Landing zone design in Google Cloud and Google Cloud setup checklist. Objectives This solution guide helps you do the following: Learn how to use a client library for Google Cloud API calls. 
Deploy an interactive dataset using Cloud Run jobs and Cloud Storage. Explore Google Cloud API calls using Cloud Logging. View the Cloud Run application, service account configurations, and enabled APIs and their usage. Architecture This solution deploys the raw data to a bucket in Cloud Storage, configures a Cloud Run job to process the data and store to a separate bucket in Cloud Storage, and deploys a frontend service in Cloud Run that can view and interact with the processed data. The following diagram shows the architecture of the solution: The following section describes the Google Cloud resources that are shown in the diagram. Components and configuration The following is the request processing flow of this solution. The steps in the flow are numbered as shown in the preceding architecture diagram. Unprocessed data has been uploaded to a Cloud Storage bucket. A Cloud Run job transforms the raw data into a more structured format that the frontend service can understand. The Cloud Run job uploads the processed data in a second Cloud Storage bucket. The frontend, hosted as a Cloud Run service, pulls the processed data from the second Cloud Storage bucket. The user can visit the web application served by the frontend Cloud Run service. Products used The solution uses the following Google Cloud products: Cloud Storage: An enterprise-ready service that provides low-cost, no-limit object storage for diverse data types. Data is accessible from within and outside of Google Cloud and is replicated geo-redundantly. Cloud Logging: A service that lets you store, search, analyze, monitor, and alert on logging data and events from Google Cloud and other clouds. Cloud Run: A fully managed service that lets you build and deploy serverless containerized apps. Google Cloud handles scaling and other infrastructure tasks so that you can focus on the business logic of your code. Cost For an estimate of the cost of the Google Cloud resources that the Cloud SDK Client Library solution uses, see the precalculated estimate in the Google Cloud Pricing Calculator. Use the estimate as a starting point to calculate the cost of your deployment. You can modify the estimate to reflect any configuration changes that you plan to make for the resources that are used in the solution. The precalculated estimate is based on assumptions for certain factors, including the following: The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. The Google Cloud locations where the resources are deployed. The amount of time that the resources are used. Deploy the solution The following sections guide you through the process of deploying the solution. Create or choose a Google Cloud project When you deploy the solution, you choose the Google Cloud project where the resources are deployed. You can either create a new project or use an existing project for the deployment. If you want to create a new project, do so before you begin the deployment. Using a new project can help avoid conflicts with previously provisioned resources, such as resources that are used for production workloads. To create a project, complete the following steps: In the Google Cloud console, go to the project selector page. Go to project selector Click Create project. Name your project. Make a note of your generated project ID. Edit the other fields as needed. Click Create. 
Get the required IAM permissions To start the deployment process, you need the Identity and Access Management (IAM) permissions that are listed in the following table. If you created a new project for this solution, then you have the roles/owner basic role in that project and have all the necessary permissions. If you don't have the roles/owner role, then ask your administrator to grant these permissions (or the roles that include these permissions) to you. IAM permission required Predefined role that includes the required permissions serviceusage.services.enable Service Usage Admin (roles/serviceusage.serviceUsageAdmin) iam.serviceAccounts.create Service Account Admin (roles/iam.serviceAccountAdmin) resourcemanager.projects.setIamPolicy Project IAM Admin (roles/resourcemanager.projectIamAdmin) config.deployments.create config.deployments.list Cloud Infrastructure Manager Admin (roles/config.admin) iam.serviceAccount.actAs Service Account User (roles/iam.serviceAccountUser) About temporary service account permissions If you start the deployment process through the console, Google creates a service account to deploy the solution on your behalf (and to delete the deployment later if you choose). This service account is assigned certain IAM permissions temporarily; that is, the permissions are revoked automatically after the solution deployment and deletion operations are completed. Google recommends that after you delete the deployment, you delete the service account, as described later in this guide. View the roles that are assigned to the service account These roles are listed here in case an administrator of your Google Cloud project or organization needs this information. roles/storage.admin roles/run.admin roles/iam.serviceAccountAdmin roles/iam.serviceAccountUser roles/resourcemanager.projectIamAdmin roles/iam.roleAdmin roles/serviceusage.serviceUsageAdmin Choose a deployment method To help you deploy this solution with minimal effort, a Terraform configuration is provided in GitHub. The Terraform configuration defines all the Google Cloud resources that are required for the solution. You can deploy the solution by using one of the following methods: Through the console: Use this method if you want to try the solution with the default configuration and see how it works. Cloud Build deploys all the resources that are required for the solution. When you no longer need the deployed solution, you can delete it through the console. Any resources that you create after you deploy the solution might need to be deleted separately. To use this deployment method, follow the instructions in Deploy through the console. Using the Terraform CLI: Use this method if you want to customize the solution or if you want to automate the provisioning and management of the resources by using the infrastructure as code (IaC) approach. Download the Terraform configuration from GitHub, optionally customize the code as necessary, and then deploy the solution by using the Terraform CLI. After you deploy the solution, you can continue to use Terraform to manage the solution. To use this deployment method, follow the instructions in Deploy using the Terraform CLI. Deploy through the console Complete the following steps to deploy the preconfigured solution. Note: If you want to customize the solution or automate the provisioning and management of the solution by using the infrastructure as code (IaC) approach, then see Deploy using the Terraform CLI. 
In the Google Cloud Jump Start Solutions catalog, go to the Cloud SDK Client Library solution. Go to the Cloud SDK Client Library solution Review the information that's provided on the page, such as the estimated cost of the solution and the estimated deployment time. When you're ready to start deploying the solution, click Deploy. A step-by-step configuration pane is displayed. Complete the steps in the configuration pane. Note the name that you enter for the deployment. This name is required later when you delete the deployment. When you click Deploy, the Solution deployments page is displayed. The Status field on this page shows Deploying. Wait for the solution to be deployed. If the deployment fails, the Status field shows Failed. You can use the Cloud Build log to diagnose the errors. For more information, see Errors when deploying through the console. After the deployment is completed, the Status field changes to Deployed. To view the solution, return to the Solution deployments page in the console. With this solution, you need to run the data processing job using Cloud Run jobs in order for you to transform and interact with the sample dataset. To follow step-by-step guidance for this task directly in Google Cloud console, click Start the data processing job. Start the data processing job To view the Google Cloud resources that are deployed and their configuration, choose an interactive tour in your preferred language (Python, Node.js, or Java). Choose a tour Now that you've processed the sample dataset into a Cloud Storage bucket, you can continue to use the Cloud SDK Client Library solution to explore more about interacting with Google Cloud APIs, how APIs are powered by Identity and Access Management, and troubleshooting API issues in the Cloud Client API apps. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Deploy using the Terraform CLI This section describes how you can customize the solution or automate the provisioning and management of the solution by using the Terraform CLI. Solutions that you deploy by using the Terraform CLI are not displayed in the Solution deployments page in the Google Cloud console. Note: If you want to deploy the solution with the default configuration to see how it works, then follow the instructions in Deploy through the console. Set up the Terraform client You can run Terraform either in Cloud Shell or on your local host. This guide describes how to run Terraform in Cloud Shell, which has Terraform preinstalled and configured to authenticate with Google Cloud. The Terraform code for this solution is available in a GitHub repository. Clone the GitHub repository to Cloud Shell. A prompt is displayed to confirm downloading the GitHub repository to Cloud Shell. Click Confirm. Cloud Shell is launched in a separate browser tab, and the Terraform code is downloaded to the $HOME/cloudshell_open directory of your Cloud Shell environment. In Cloud Shell, check whether the current working directory is $HOME/cloudshell_open/terraform-cloud-client-api/infra. This is the directory that contains the Terraform configuration files for the solution. If you need to change to that directory, run the following command: cd $HOME/cloudshell_open/terraform-cloud-client-api/infra Initialize Terraform by running the following command: terraform init Wait until you see the following message: Terraform has been successfully initialized! 
Configure the Terraform variables The Terraform code that you downloaded includes variables that you can use to customize the deployment based on your requirements. For example, you can specify the Google Cloud project and the region where you want the solution to be deployed. Make sure that the current working directory is $HOME/cloudshell_open/terraform-cloud-client-api/infra. If it isn't, go to that directory. In the same directory, create a text file named terraform.tfvars. In the terraform.tfvars file, copy the following code snippet, and set values for the required variables. Follow the instructions that are provided as comments in the code snippet. This code snippet includes only the variables for which you must set values. The Terraform configuration includes other variables that have default values. To review all the variables and the default values, see the variables.tf file that's available in the $HOME/cloudshell_open/terraform-cloud-client-api/infra directory. Make sure that each value that you set in the terraform.tfvars file matches the variable type as declared in the variables.tf file. For example, if the type that's defined for a variable in the variables.tf file is bool, then you must specify true or false as the value of that variable in the terraform.tfvars file. # ID of the project in which you want to deploy the solution project_id = "PROJECT_ID" # Google Cloud region where you want to deploy the solution # Example: us-central1 region = "REGION" # Programming language implementation to use # Example: python language = "LANGUAGE" # Version of application image to use # Example: 0.4.0 image_version = "IMAGE_VERSION" For information about the values that you can assign to the required variables, see the following: project_id: Identifying projects. region: Available regions. language: Programming language implementation to use. image_version: Version of application image to use. Validate and review the Terraform configuration Make sure that the current working directory is $HOME/cloudshell_open/terraform-cloud-client-api/infra. If it isn't, go to that directory. Verify that the Terraform configuration has no errors: terraform validate If the command returns any errors, make the required corrections in the configuration and then run the terraform validate command again. Repeat this step until the command returns the following message: Success! The configuration is valid. Review the resources that are defined in the configuration: terraform plan If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. The output of the terraform plan command is a list of the resources that Terraform provisions when you apply the configuration. If you want to make any changes, edit the configuration and then run the terraform validate and terraform plan commands again. Provision the resources When no further changes are necessary in the Terraform configuration, deploy the resources. Make sure that the current working directory is $HOME/cloudshell_open/terraform-cloud-client-api/infra. If it isn't, go to that directory. Apply the Terraform configuration: terraform apply If you didn't create the terraform.tfvars file as described earlier, Terraform prompts you to enter values for the variables that don't have default values. Enter the required values. Terraform displays a list of the resources that will be created. 
When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress of the deployment. If the deployment can't be completed, Terraform displays the errors that caused the failure. Review the error messages and update the configuration to fix the errors. Then run the terraform apply command again. For help with troubleshooting Terraform errors, see Errors when deploying the solution using the Terraform CLI. After all the resources are created, Terraform displays the following message: Apply complete! To view the solution, return to the Solution deployments page in the console. With this solution, you need to run the data processing job using Cloud Run jobs in order for you to transform and interact with the sample dataset. To follow step-by-step guidance for this task directly in the Google Cloud console, click Start the data processing job. Start the data processing job To view the Google Cloud resources that are deployed and their configuration, choose an interactive tour in your preferred language (Python, Node.js, or Java). Choose a tour Now that you've processed the sample dataset into a Cloud Storage bucket, you can continue to use the Cloud SDK Client Library solution to explore more about interacting with Google Cloud APIs, how APIs are powered by Identity and Access Management, and troubleshooting API issues in the Cloud Client API apps. When you no longer need the solution, you can delete the deployment to avoid continued billing for the Google Cloud resources. For more information, see Delete the deployment. Delete the deployment When you no longer need the solution, to avoid continued billing for the resources that you created in this solution, delete all the resources. Delete through the console Use this procedure if you deployed the solution through the console. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. Locate the deployment that you want to delete. In the row for the deployment, click more_vert Actions and then select Delete. You might need to scroll to see Actions in the row. Enter the name of the deployment and then click Confirm. The Status field shows Deleting. If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project. Delete using the Terraform CLI Use this procedure if you deployed the solution by using the Terraform CLI. In Cloud Shell, make sure that the current working directory is $HOME/cloudshell_open/terraform-cloud-client-api/infra. If it isn't, go to that directory. Remove the resources that were provisioned by Terraform: terraform destroy Terraform displays a list of the resources that will be destroyed. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! If the deletion fails, see the troubleshooting guidance in Error when deleting a deployment. When you no longer need the Google Cloud project that you used for the solution, you can delete the project. For more information, see Optional: Delete the project.
Optional: Delete the project If you deployed the solution in a new Google Cloud project, and if you no longer need the project, then delete it by completing the following steps: Caution: If you delete a project, all the resources in the project are permanently deleted. In the Google Cloud console, go to the Manage resources page. Go to Manage resources In the project list, select the project that you want to delete, and then click Delete. At the prompt, type the project ID, and then click Shut down. If you decide to retain the project, then delete the service account that was created for this solution, as described in the next section. Optional: Delete the service account If you deleted the project that you used for the solution, then skip this section. As mentioned earlier in this guide, when you deployed the solution, a service account was created on your behalf. The service account was assigned certain IAM permissions temporarily; that is, the permissions were revoked automatically after the solution deployment and deletion operations were completed, but the service account isn't deleted. Google recommends that you delete this service account. If you deployed the solution through the Google Cloud console, go to the Solution deployments page. (If you're already on that page, refresh the browser.) A process is triggered in the background to delete the service account. No further action is necessary. If you deployed the solution by using the Terraform CLI, complete the following steps: In the Google Cloud console, go to the Service accounts page. Go to Service accounts Select the project that you used for the solution. Select the service account that you want to delete. The email ID of the service account that was created for the solution is in the following format: goog-sc-DEPLOYMENT_NAME-NNN@PROJECT_ID.iam.gserviceaccount.com The email ID contains the following values: DEPLOYMENT_NAME: the name of the deployment. NNN: a random 3-digit number. PROJECT_ID: the ID of the project in which you deployed the solution. Click Delete. Troubleshoot errors The actions that you can take to diagnose and resolve errors depend on the deployment method and the complexity of the error. Errors when deploying through the console If the deployment fails when you use the console, do the following: Go to the Solution deployments page. If the deployment failed, the Status field shows Failed. View the details of the errors that caused the failure: In the row for the deployment, click more_vert Actions. You might need to scroll to see Actions in the row. Select View Cloud Build logs. Review the Cloud Build log and take appropriate action to resolve the issue that caused the failure. Errors when deploying using the Terraform CLI If the deployment fails when you use Terraform, the output of the terraform apply command includes error messages that you can review to diagnose the problem. The examples in the following sections show deployment errors that you might encounter when you use Terraform. API not enabled error If you create a project and then immediately attempt to deploy the solution in the new project, the deployment might fail with an error like the following: Error: Error creating Network: googleapi: Error 403: Compute Engine API has not been used in project PROJECT_ID before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=PROJECT_ID then retry. 
If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. If this error occurs, wait a few minutes and then run the terraform apply command again. Error when deleting a deployment In certain cases, attempts to delete a deployment might fail: After deploying a solution through the console, if you change any resource that was provisioned by the solution, and if you then try to delete the deployment, the deletion might fail. The Status field on the Solution deployments page shows Failed, and the Cloud Build log shows the cause of the error. After deploying a solution by using the Terraform CLI, if you change any resource by using a non-Terraform interface (for example, the console), and if you then try to delete the deployment, the deletion might fail. The messages in the output of the terraform destroy command show the cause of the error. Review the error logs and messages, identify and delete the resources that caused the error, and then try deleting the deployment again. If a console-based deployment doesn't get deleted and if you can't diagnose the error by using the Cloud Build log, then you can delete the deployment by using the Terraform CLI, as described in the next section. Delete a console-based deployment by using the Terraform CLI This section describes how to delete a console-based deployment if errors occur when you try to delete it through the console. In this approach, you download the Terraform configuration for the deployment that you want to delete and then use the Terraform CLI to delete the deployment. Identify the region where the deployment's Terraform code, logs, and other data are stored. This region might be different from the region that you selected while deploying the solution. In the Google Cloud console, go to the Solution deployments page. Go to Solution deployments Select the project that contains the deployment that you want to delete. In the list of deployments, identify the row for the deployment that you want to delete. Click expand_more View all row content. In the Location column, note the second location, as highlighted in the following example: In the Google Cloud console, activate Cloud Shell. Activate Cloud Shell At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize. Create environment variables for the project ID, region, and name of the deployment that you want to delete: export REGION="REGION" export PROJECT_ID="PROJECT_ID" export DEPLOYMENT_NAME="DEPLOYMENT_NAME" In these commands, replace the following: REGION: the location that you noted earlier in this procedure. PROJECT_ID: the ID of the project where you deployed the solution. DEPLOYMENT_NAME: the name of the deployment that you want to delete. 
Get the ID of the latest revision of the deployment that you want to delete: export REVISION_ID=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .latestRevision -r) echo $REVISION_ID The output is similar to the following: projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME/revisions/r-0 Get the Cloud Storage location of the Terraform configuration for the deployment: export CONTENT_PATH=$(curl \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/${REVISION_ID}" \ | jq .applyResults.content -r) echo $CONTENT_PATH The following is an example of the output of this command: gs://PROJECT_ID-REGION-blueprint-config/DEPLOYMENT_NAME/r-0/apply_results/content Download the Terraform configuration from Cloud Storage to Cloud Shell: gcloud storage cp $CONTENT_PATH $HOME --recursive cd $HOME/content/infra Wait until the Operation completed message is displayed, as shown in the following example: Operation completed over 45 objects/268.5 KiB Initialize Terraform: terraform init Wait until you see the following message: Terraform has been successfully initialized! Remove the deployed resources: terraform destroy Terraform displays a list of the resources that will be destroyed. If any warnings about undeclared variables are displayed, ignore the warnings. When you're prompted to perform the actions, enter yes. Terraform displays messages showing the progress. After all the resources are deleted, Terraform displays the following message: Destroy complete! Delete the deployment artifact: curl -X DELETE \ -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}?force=true&delete_policy=abandon" Wait a few seconds and then verify that the deployment artifact was deleted: curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \ -H "Content-Type: application/json" \ "https://config.googleapis.com/v1alpha2/projects/${PROJECT_ID}/locations/${REGION}/deployments/${DEPLOYMENT_NAME}" \ | jq .error.message If the output shows null, wait a few seconds and then run the command again. After the deployment artifact is deleted, a message as shown in the following example is displayed: Resource 'projects/PROJECT_ID/locations/REGION/deployments/DEPLOYMENT_NAME' was not found Submit feedback Jump Start Solutions are for informational purposes only and are not officially supported products. Google may change or remove solutions without notice. To troubleshoot errors, review the Cloud Build logs and the Terraform output. To submit feedback, do the following: For documentation, in-console tutorials, or the solution, use the Send Feedback button on the page. For unmodified Terraform code, create issues in the GitHub repository. GitHub issues are reviewed on a best-effort basis and are not intended for general usage questions. For issues with the products that are used in the solution, contact Cloud Customer Care. What's next To explore more using the Cloud SDK Client Library solution: Learn about APIs that power Google Cloud, and how to interact with them in your programming language of choice. Learn how APIs are powered by Identity and Access Management on Google Cloud. 
Troubleshoot API issues in the Cloud Client API apps. ContributorsAuthor: Kadeem Dunn | Technical WriterOther contributor: Katie McLaughlin | Senior Developer Relations Engineer Send feedback \ No newline at end of file diff --git a/Use_cases-_locality-restricted_data_analytics_applications.txt b/Use_cases-_locality-restricted_data_analytics_applications.txt new file mode 100644 index 0000000000000000000000000000000000000000..2f958bab8ea62acd266766548c58823f7fa7be5a --- /dev/null +++ b/Use_cases-_locality-restricted_data_analytics_applications.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/dr-scenarios-locality-restricted-data-analytics +Date Scraped: 2025-02-23T11:54:39.056Z + +Content: +Home Docs Cloud Architecture Center Send feedback Disaster recovery use cases: locality-restricted data analytics applications Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-07-20 UTC This document is part of a series that discusses disaster recovery (DR) in Google Cloud. This document describes how to apply the locality restrictions from the document, Architecting disaster recovery for locality-restricted workloads, to data analytics applications. Specifically, this document describes how the components that you use in a data analytics platform fit into a DR architecture that meets locality restrictions that your applications or data might be subject to. The series consists of the following parts: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Architecting disaster recovery for cloud infrastructure outages Disaster recovery use cases: locality-restricted data analytics applications (this document) This document is intended for systems architects and IT managers. It assumes that you have the following knowledge and skills: Basic familiarity with Google Cloud data analytics services such as Dataflow, Dataproc, Cloud Composer, Pub/Sub, Cloud Storage, and BigQuery. Basic familiarity with Google Cloud networking services such as Cloud Interconnect and Cloud VPN. Familiarity with disaster recovery planning. Familiarity with Apache Hive and Apache Spark. Locality requirements for a data analytics platform Data analytics platforms are typically complex, multi-tiered applications that store data at rest. These applications produce events that are processed and stored in your analytics system. Both the application itself and the data stored in the system might be subject to locality regulations. These regulations vary not just across countries, but also across industry verticals. Therefore, you should have a clear understanding about your data locality requirements before you start to design your DR architecture. You can determine whether your data analytics platform has any locality requirements by answering the following two questions: Does your application need to be deployed to a specific region? Is your data at rest restricted to a specific region? If you answer "no" to both questions, your data analytics platform doesn't have any locality-specific requirements. Because your platform doesn't have locality requirements, follow the DR guidance for compliant services and products outlined in the Disaster recovery planning guide. However, if you answer "yes" to either of the questions, your application is locality-restricted. 
Because your analytics platform is locality-restricted, you must ask the following question: Can you use encryption techniques to mitigate data-at-rest requirements? If you're able to use encryption techniques, you can use the multi-regional and dual-regional services of Cloud External Key Manager and Cloud Key Management Service. You can then also follow the standard high availability and disaster recovery (HA/DR) techniques outlined in Disaster recovery scenarios for data. If you are unable to use encryption techniques, you must use custom solutions or partner offerings for DR planning. For more information about techniques for addressing locality restrictions for DR scenarios, see Architecting disaster recovery for locality-restricted workloads. Components in a data analytics platform When you understand locality requirements, the next step is to understand the components that your data analytics platform uses. Some common components of data analytics platform are databases, data lakes, processing pipelines, and data warehouses, as described in the following list: Google Cloud has a set of database services that fit different use cases. Data lakes, data warehouses, and processing pipelines can have slightly differing definitions. This document uses a set of definitions that reference Google Cloud services: A data lake is a scalable and secure platform for ingesting and storing data from multiple systems. A typical data lake might use Cloud Storage as the central storage repository. A processing pipeline is an end-to-end process that takes data or events from one or more systems, transforms that data or event, and loads it into another system. The pipeline could follow either an extract, transform, and load (ETL) or extract, load, and transform (ELT) process. Typically, the system into which the processed data is loaded is a data warehouse. Pub/Sub, Dataflow, and Dataproc are commonly used components of a processing pipeline. A data warehouse is an enterprise system used for analysis and reporting of data, which usually comes from an operational database. BigQuery is a commonly used data warehouse system running on Google Cloud. Depending on the locality requirements and the data analytics components that you are using, the actual DR implementation varies. The following sections demonstrate this variation with two use cases. Batch use case: a periodic ETL job The first use case describes a batch process in which a retail platform periodically collects sales events as files from various stores and then writes the files to a Cloud Storage bucket. The following diagram illustrates the data analytics architecture for this retailer's batch job. The architecture includes the following phases and components: The ingestion phase consists of the stores sending their point-of-sale (POS) data to Cloud Storage. This ingested data awaits processing. The orchestration phase uses Cloud Composer to orchestrate the end-to-end batch pipeline. The processing phase involves an Apache Spark job running on a Dataproc cluster. The Apache Spark job performs an ETL process on the ingested data. This ETL process provides business metrics. The data lake phase takes the processed data and stores information in the following components: The processed data is commonly stored in Cloud Storage columnar formats such as Parquet and ORC because these formats allow efficient storage and faster access for analytical workloads. 
The metadata about the processed data (such as databases, tables, columns, and partitions) is stored in the Hive metastore service supplied by Dataproc Metastore. In locality-restricted scenarios, it might be difficult to provide redundant processing and orchestration capacity to maintain availability. Because the data is processed in batches, the processing and orchestration pipelines can be recreated, and batch jobs could be restarted after a disaster scenario is resolved. Therefore, for disaster recovery, the core focus is on recovering the actual data: the ingested POS data, the processed data stored in the data lake, and the metadata stored in the data lake. Ingestion into Cloud Storage Your first consideration should be the locality requirements for the Cloud Storage bucket used to ingest the data from the POS system. Use this locality information when considering the following options: If the locality requirements allow data at rest to reside in one of the multi-region or dual-region locations, choose the corresponding location type when you create the Cloud Storage bucket. The location type that you choose defines which Google Cloud regions are used to replicate your data at rest. If an outage occurs in one of the regions, data that resides in multi-region or dual-region buckets is still accessible. Cloud Storage also supports both customer-managed encryption keys (CMEK) and customer-supplied encryption keys (CSEK). Some locality rules allow data at rest to be stored in multiple locations when you use CMEK or CSEK for key management. If your locality rules allow the use of CMEK or CSEK, you can design your DR architecture to use Cloud Storage regions. Your locality requirements might not permit you to use either of these location types or encryption-key management. When you can't use location types or encryption-key management, you can use the gcloud storage rsync command to synchronize data to another location, such as on-premises storage or storage solutions from another cloud provider. If a disaster occurs, the ingested POS data in the Cloud Storage bucket might have data that has not yet been processed and imported into the data lake. Alternatively, the POS data might not be able to be ingested into the Cloud Storage bucket. For these cases, you have the following disaster recovery options: Let the POS system retry. In the event that the system is unable to write the POS data to Cloud Storage, the data ingestion process fails. To mitigate this situation, you can implement a retry strategy to allow the POS system to retry the data ingestion operation. Because Cloud Storage provides high durability, data ingestion and the subsequent processing pipeline will resume with little to no data loss after the Cloud Storage service resumes. Make copies of ingested data. Cloud Storage supports both multi-region and dual-region location types. Depending on your data locality requirements, you might be able to set up a multi-region or dual-region Cloud Storage bucket to increase data availability. You can also use tools like the gcloud storage Google Cloud CLI command to synchronize data between Cloud Storage and another location. Orchestration and processing of ingested POS data In the architecture diagram for the batch use case, Cloud Composer carries out the following steps: Validates that new files have been uploaded to the Cloud Storage ingestion bucket. Starts an ephemeral Dataproc cluster. Starts an Apache Spark job to process the data. Deletes the Dataproc cluster at the end of the job.
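In Cloud Composer, these steps are typically expressed as an Airflow DAG. The following Python sketch illustrates the pattern with operators from the Google provider package for Airflow; the project ID, region, bucket names, PySpark script URI, and cluster settings are hypothetical placeholders, and a production DAG would add the validation, retries, and error handling that your environment requires.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.sensors.gcs import GCSObjectExistenceSensor
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocCreateClusterOperator,
    DataprocDeleteClusterOperator,
    DataprocSubmitJobOperator,
)

PROJECT_ID = "my-project"        # hypothetical
REGION = "us-central1"           # hypothetical
CLUSTER_NAME = "ephemeral-etl"   # hypothetical

with DAG(dag_id="pos_batch_etl", start_date=datetime(2024, 1, 1), schedule="@daily", catchup=False) as dag:
    # Step 1: validate that a new POS file has arrived in the ingestion bucket.
    wait_for_file = GCSObjectExistenceSensor(
        task_id="wait_for_pos_file",
        bucket="pos-ingestion-bucket",          # hypothetical ingestion bucket
        object="sales/{{ ds }}/sales.csv",
    )
    # Step 2: start an ephemeral Dataproc cluster.
    create_cluster = DataprocCreateClusterOperator(
        task_id="create_cluster",
        project_id=PROJECT_ID,
        region=REGION,
        cluster_name=CLUSTER_NAME,
        cluster_config={"master_config": {"num_instances": 1}, "worker_config": {"num_instances": 2}},
    )
    # Step 3: run the Apache Spark ETL job on the cluster.
    run_etl = DataprocSubmitJobOperator(
        task_id="run_spark_etl",
        project_id=PROJECT_ID,
        region=REGION,
        job={
            "reference": {"project_id": PROJECT_ID},
            "placement": {"cluster_name": CLUSTER_NAME},
            "pyspark_job": {"main_python_file_uri": "gs://pos-code-bucket/etl.py"},  # hypothetical
        },
    )
    # Step 4: delete the cluster, even if the job fails.
    delete_cluster = DataprocDeleteClusterOperator(
        task_id="delete_cluster",
        project_id=PROJECT_ID,
        region=REGION,
        cluster_name=CLUSTER_NAME,
        trigger_rule="all_done",
    )
    wait_for_file >> create_cluster >> run_etl >> delete_cluster

Because the cluster exists only for the duration of the run, recovering from a disaster is mostly a matter of restoring the DAG files, the ingested data, and the data lake, and then rerunning the job.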
Cloud Composer stores the DAG files that define this series of steps in a Cloud Storage bucket that is not shown in the architecture diagram. In terms of dual-region and multi-region locality, the DR considerations for the DAG files bucket are the same as the ones discussed for the ingestion bucket. Dataproc clusters are ephemeral. That is, the clusters only exist for as long as the processing stage runs. This ephemeral nature means that you don't have to explicitly do anything for DR in regard to the Dataproc clusters. Data lake storage with Cloud Storage The Cloud Storage bucket that you use for the data lake has the same locality considerations as the ones discussed for the ingestion bucket: dual-region and multi-region considerations, the use of encryption, and the use of the gcloud storage rsync command. When designing the DR plan for your data lake, think about the following aspects: Data size. The volume of data in a data lake can be large. Restoring a large volume of data takes time. In a DR scenario, you need to consider the impact that the data lake's volume has on the amount of time that it takes to restore data to a point that meets the following criteria: Your application is available. You meet your recovery time objective (RTO) value. You get the data checkpoint that you need to meet your recovery point objective (RPO) value. For the current batch use case, Cloud Storage is used for the data lake location and provides high durability. However, ransomware attacks have been on the rise. To ensure that you have the ability to recover from such attacks, it would be prudent to follow the best practices that are outlined in How Cloud Storage delivers 11 nines of durability. Data dependency. Data in data lakes is usually consumed by other components of a data analytics system such as a processing pipeline. In a DR scenario, the processing pipeline and the data on which it depends should share the same RTO. In this context, focus on how long the system can be unavailable. Data age. Historical data might allow for a higher RPO. This type of data might have already been analyzed or processed by other data analytics components and might have been persisted in another component that has its own DR strategies. For example, sales records that are exported to Cloud Storage are imported daily to BigQuery for analysis. With proper DR strategies for BigQuery, historical sales records that have been imported to BigQuery might have lower recovery objectives than those that haven't been imported. Data lake metadata storage with Dataproc Metastore Dataproc Metastore is an Apache Hive metastore that is fully managed, highly available, autohealing, and serverless. The metastore provides data abstraction and data discovery features. The metastore can be backed up and restored in the case of a disaster. The Dataproc Metastore service also lets you export and import metadata. You can add a task to export the metastore and maintain an external backup along with your ETL job. If you encounter a situation where there is a metadata mismatch, you can recreate the metastore from the data itself by using the MSCK command. Streaming use case: change data capture using ELT The second use case describes a retail platform that uses change data capture (CDC) to update backend inventory systems and to track real-time sales metrics.
The following diagram shows an architecture in which events from a transactional database, such as Cloud SQL, are processed and then stored in a data warehouse. The architecture includes the following phases and components: The ingestion phase consists of the incoming change events being pushed to Pub/Sub. As a message delivery service, Pub/Sub is used to reliably ingest and distribute streaming analytics data. Pub/Sub has the additional benefits of being both scalable and serverless. The processing phase involves using Dataflow to perform an ELT process on the ingested events. The data warehouse phase uses BigQuery to store the processed events. The merge operation inserts or updates a record in BigQuery. This operation allows the information stored in BigQuery to stay up to date with the transactional database. While this CDC architecture doesn't rely on Cloud Composer, some CDC architectures require Cloud Composer to orchestrate the integration of the stream into batch workloads. In those cases, the Cloud Composer workflow implements integrity checks, backfills, and projections that can't be done in real time because of latency constraints. DR techniques for Cloud Composer are covered in the batch processing use case discussed earlier in this document. For a streaming data pipeline, you should consider the following when planning your DR architecture: Pipeline dependencies. Processing pipelines take input from one or more systems (the sources). Pipelines then process, transform, and store these inputs in other systems (the sinks). It's important to design your DR architecture to achieve your end-to-end RTO. You need to ensure that the RPO of the data sources and sinks allows you to meet the RTO. In addition to designing your cloud architecture to meet your locality requirements, you'll also need to design your originating CDC sources to allow your end-to-end RTO to be met. Encryption and locality. If encryption lets you mitigate locality restrictions, you can use Cloud KMS to attain the following goals: Manage your own encryption keys. Take advantage of the encryption capability of individual services. Deploy services in regions that would otherwise not be available due to locality restrictions. Locality rules on data in motion. Some locality rules might apply only to data at rest but not to data in motion. If your locality rules don't apply to data in motion, design your DR architecture to use resources in other regions to improve the recovery objectives. You can supplement the regional approach by integrating encryption techniques. Ingestion into Pub/Sub If you don't have locality restrictions, you can publish messages to the global Pub/Sub endpoint. Global Pub/Sub endpoints are visible and accessible from any Google Cloud location. If your locality requirements allow the use of encryption, it's possible to configure Pub/Sub to achieve a similar level of high availability as global endpoints. Although Pub/Sub messages are encrypted with Google-owned and Google-managed encryption keys by default, you can configure Pub/Sub to use CMEK instead. By using Pub/Sub with CMEK, you are able to meet locality rules about encryption while still maintaining high availability. Some locality rules require messages to stay in a specific location even after encryption. If your locality rules have this restriction, you can specify the message storage policy of a Pub/Sub topic and restrict it to a region.
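The following minimal Python sketch shows how both of these controls might be applied when a topic is created: a message storage policy that restricts persistence to a single region, and an optional CMEK key. The project, topic, region, and key names are hypothetical placeholders; drop the kms_key_name field if you don't use CMEK.

from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

PROJECT_ID = "my-project"   # hypothetical
TOPIC_ID = "sales-events"   # hypothetical

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# Restrict message persistence to one region and protect the topic with a
# customer-managed encryption key (CMEK).
topic = publisher.create_topic(
    request={
        "name": topic_path,
        "message_storage_policy": {"allowed_persistence_regions": ["europe-west3"]},
        "kms_key_name": (
            "projects/my-project/locations/europe-west3/"
            "keyRings/my-keyring/cryptoKeys/my-key"  # hypothetical CMEK key
        ),
    }
)
print(f"Created topic: {topic.name}")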
If you use this message storage approach, messages that are published to a topic are never persisted outside of the set of Google Cloud regions that you specify. If your locality rules allow more than one Google Cloud region to be used, you can increase service availability by enabling those regions in the Pub/Sub topic. You need to be aware that implementing a message storage policy to restrict Pub/Sub resource locations does come with trade-offs concerning availability. A Pub/Sub subscription lets you store unacknowledged messages for up to 7 days without any restrictions on the number of messages. If your service level agreement allows delayed data, you can buffer the data in your Pub/Sub subscription if the pipelines stop running. When the pipelines are running again, you can process the backed-up events. This design has the benefit of having a low RPO. For more information about the resource limits for Pub/Sub, see resource limits for Pub/Sub quotas. Event processing with Dataflow Dataflow is a managed service for executing a wide variety of data processing patterns. The Apache Beam SDK is an open source programming model that lets you develop both batch and streaming pipelines. You create your pipelines with an Apache Beam program and then run them on the Dataflow service. When designing for locality restrictions, you need to consider where your sources, sinks, and temporary files are located. If these file locations are outside of your job's region, your data might be sent across regions. If your locality rules allow data to be processed in a different location, design your DR architecture to deploy workers in other Google Cloud regions to provide high availability. If your locality restrictions limit processing to a single location, you can create a Dataflow job that is restricted to a specific region or zone. When you submit a Dataflow job, you can specify the regional endpoint, worker region, and worker zone parameters. These parameters control where workers are deployed and where job metadata is stored. Apache Beam provides a framework that allows pipelines to be executed across various runners. You can design your DR architecture to take advantage of this capability. For example, you might design a DR architecture to have a backup pipeline that runs on your local on-premises Spark cluster by using Apache Spark Runner. For information about whether a specific runner is capable of carrying out a certain pipeline operation, see Beam Capability Matrix. If encryption lets you mitigate locality restrictions, you can use CMEK in Dataflow to both encrypt pipeline state artifacts, and access sources and sinks that are protected with Cloud KMS keys. Using encryption, you can design a DR architecture that uses regions that would otherwise be not available due to locality restrictions. Data warehouse built on BigQuery Data warehouses support analytics and decision-making. Besides containing an analytical database, data warehouses contain multiple analytical components and procedures. When designing the DR plan for your data warehouse, think about the following characteristics: Size. Data warehouses are much larger than online transaction processing (OLTP) systems. It's not uncommon for data warehouses to have terabytes to petabytes of data. You need to consider how long it would take to restore this data to achieve your RPO and RTO values. When planning your DR strategy, you must also factor in the cost associated with recovering terabytes of data. 
For more information about DR mitigation techniques for BigQuery, see the BigQuery information in the section on backup and recovery mechanisms for the managed database services on Google Cloud. Availability. When you create a BigQuery dataset, you select a location in which to store your data: regional or multi-region. A regional location is a single, specific geographical location, such as Iowa (us-central1). A multi-region location is a large geographic area, such as the United States (US) or Europe (EU), that contains two or more geographic places. Note: In BigQuery, a multi-region location does not provide cross-region replication or regional redundancy. Data will be stored in a single region within the geographic location. When designing your DR plan to meet locality restrictions, the failure domain (that is, whether the failure occurs at the machine, zonal, or regional level) will have a direct impact on your ability to meet your defined RTO. For more information about these failure domains and how they affect availability, see Availability and durability of BigQuery. Nature of the data. Data warehouses contain historic information, and most of the older data is often static. Many data warehouses are designed to be append-only. If your data warehouse is append-only, you may be able to achieve your RTO by restoring just the data that is being appended. In this approach, you back up just this appended data. If there is a disaster, you'll then be able to restore the appended data and have your data warehouse available to use, but with a subset of the data. Data addition and update pattern. Data warehouses are typically updated using ETL or ELT patterns. When you have controlled update paths, you can reproduce recent update events from alternative sources. Your locality requirements might limit whether you can use a single region or multiple regions for your data warehouse. Although BigQuery datasets can be regional, multi-region storage is the simplest and most cost-effective way to ensure the availability of your data if a disaster occurs. If multi-region storage is not available in your region, but you can use a different region, use the BigQuery Data Transfer Service to copy your dataset to a different region. If you can use encryption to mitigate the locality requirements, you can manage your own encryption keys with Cloud KMS and BigQuery. If you can use only one region, consider backing up the BigQuery tables. The most cost-effective solution to back up tables is to use BigQuery exports. Use Cloud Scheduler or Cloud Composer to periodically schedule an export job to write to Cloud Storage. You can use formats such as Avro with SNAPPY compression or JSON with GZIP compression. While you are designing your export strategies, take note of the limits on exports. You might also want to store BigQuery backups in columnar formats such as Parquet and ORC. These formats provide high compression and also allow interoperability with many open source engines, such as Hive and Presto, that you might have in your on-premises systems. The following diagram outlines the process of exporting BigQuery data to a columnar format for storage in an on-premises location. Specifically, this process of exporting BigQuery data to an on-premises storage location involves the following steps: The BigQuery data is sent to an Apache Spark job on Dataproc. The use of the Apache Spark job permits schema evolution.
After the Dataproc job has processed the BigQuery files, the processed files are written to Cloud Storage and then transferred to an on-premises DR location. Cloud Interconnect is used to connect your Virtual Private Cloud network to your on-premises network. The transfer to the on-premises DR location can occur through the Spark job. If your warehouse design is append-only and is partitioned by date, you need to create a copy of the required partitions in a new table before you run a BigQuery export job on the new table. You can then use a tool such as the gcloud storage command to transfer the updated files to your backup location on-premises or in another cloud. (Egress charges might apply when you transfer data out of Google Cloud.) For example, you have a sales data warehouse that consists of an append-only orders table in which new orders are appended to the partition that represents the current date. However, a return policy might allow items to be returned within 7 days. Therefore, records in the orders table from within the last 7 days might be updated. Your export strategies need to take the return policy into account. In this example, any export job to back up the orders table needs to also export the partitions representing orders within the last 7 days to avoid missing updates due to returns. What's next Read other articles in this DR series: Disaster recovery planning guide Disaster recovery building blocks Disaster recovery scenarios for data Disaster recovery scenarios for applications Architecting disaster recovery for locality-restricted workloads Architecting disaster recovery for cloud infrastructure outages Read the whitepaper: Learn about data residency, operational transparency, and privacy for European customers on Google Cloud. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Authors: Grace Mollison | Solutions Lead Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file diff --git a/Use_generative_AI_for_utilization_management.txt b/Use_generative_AI_for_utilization_management.txt new file mode 100644 index 0000000000000000000000000000000000000000..909b7a74b2124b74be93bf9f32ae65e827ae546e --- /dev/null +++ b/Use_generative_AI_for_utilization_management.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/use-generative-ai-utilization-management +Date Scraped: 2025-02-23T11:46:03.232Z + +Content: +Home Docs Cloud Architecture Center Send feedback Use generative AI for utilization management Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-19 UTC This document describes a reference architecture for health insurance companies that want to automate prior authorization (PA) request processing and improve their utilization review (UR) processes by using Google Cloud. It's intended for software developers and program administrators in these organizations. This architecture helps health plan providers to reduce administrative overhead, increase efficiency, and enhance decision-making by automating data ingestion and the extraction of insights from clinical forms. It also allows them to use AI models for prompt generation and recommendations. Architecture The following diagram describes an architecture and an approach for automating the data ingestion workflow and optimizing the utilization management (UM) review process. This approach uses data and AI services in Google Cloud.
The preceding architecture contains two flows of data, which are supported by the following subsystems: Claims data activator (CDA), which extracts data from unstructured sources, such as forms and documents, and ingests it into a database in a structured, machine-readable format. CDA implements the flow of data to ingest PA request forms. Utilization review service (UR service), which integrates PA request data, policy documents, and other care guidelines to generate recommendations. The UR service implements the flow of data to review PA requests by using generative AI. The following sections describe these flows of data. CDA flow of data The following diagram shows the flow of data for using CDA to ingest PA request forms. As shown in the preceding diagram, the PA case manager interacts with the system components to ingest, validate, and process the PA requests. The PA case managers are the individuals from the business operations team who are responsible for the intake of the PA requests. The flow of events is as follows: The PA case managers receive the PA request forms (pa_forms) from the healthcare provider and upload them to the pa_forms_bkt Cloud Storage bucket. The ingestion_service service listens to the pa_forms_bkt bucket for changes. The ingestion_service service picks up the pa_forms forms from the pa_forms_bkt bucket. The service identifies the pre-configured Document AI processors, which are called form_processors. These processors are defined to process the pa_forms forms. The ingestion_service service extracts information from the forms using the form_processors processors. The data extracted from the forms is in JSON format. The ingestion_service service writes the extracted information with field-level confidence scores into the Firestore database collection, which is called pa_form_collection. The hitl_app application fetches the information (JSON) with confidence scores from the pa_form_collection database. The application calculates the document-level confidence score from the field-level confidence scores made available in the output by the form_processors machine learning (ML) models. The hitl_app application displays the extracted information with the field-level and document-level confidence scores to the PA case managers so that they can review and correct the information if the extracted values are inaccurate. PA case managers can update the incorrect values and save the document in the pa_form_collection database. UR service flow of data The following diagram shows the flow of data for the UR service. As shown in the preceding diagram, the UR specialists interact with the system components to conduct a clinical review of the PA requests. The UR specialists are typically nurses or physicians with experience in a specific clinical area who are employed by healthcare insurance companies. The case management and routing workflow for PA requests is out of scope for the workflow that this section describes. The flow of events is as follows: The ur_app application displays a list of PA requests and their review status to the UR specialists. The status shows as in_queue, in_progress, or completed. The list is created by fetching the pa_form information data from the pa_form_collection database. The UR specialist opens a request by clicking an item from the list displayed in the ur_app application. The ur_app application submits the pa_form information data to the prompt_model model.
It uses the Vertex AI Gemini API to generate a prompt that's similar to the following: Review a PA request for {medication|device|medical service} for our member, {Patient Name}, who is {age} old, {gender} with {medical condition}. The patient is on {current medication|treatment list}, has {symptoms}, and has been diagnosed with {diagnosis}. The ur_app application displays the generated prompt to the UR specialists for review and feedback. UR specialists can update the prompt in the UI and send it to the application. The ur_app application sends the prompt to the ur_model model with a request to generate a recommendation. The model generates a response and returns to the application. The application displays the recommended outcome to the UR specialists. The UR specialists can use the ur_search_app application to search for clinical documents, care guidelines, and plan policy documents. The clinical documents, care guidelines, and plan policy documents are pre-indexed and accessible to the ur_search_app application. Components The architecture contains the following components: Cloud Storage buckets. UM application services require the following Cloud Storage buckets in your Google Cloud project: pa_forms_bkt: A bucket to ingest the PA forms that need approval. training_forms: A bucket to hold historical PA forms for training the DocAI form processors. eval_forms: A bucket to hold PA forms for evaluating the accuracy of the DocAI form processors. tuning_dataset: A bucket to hold the data required for tuning the large language model (LLM). eval_dataset: A bucket to hold the data required for evaluation of the LLM. clinical_docs: A bucket to hold the clinical documents that the providers submit as attachments to the PA forms or afterward to support the PA case. These documents get indexed by the search application in Vertex AI Agent Builder service. um_policies: A bucket to hold medical necessity and care guidelines, health plan policy documents, and coverage guidelines. These documents get indexed by the search application in the Vertex AI Agent Builder service. form_processors: These processors are trained to extract information from the pa_forms forms. pa_form_collection: A Firestore datastore to store the extracted information as JSON documents in the NoSQL database collection. ingestion_service: A microservice that reads the documents from the bucket, passes them to the DocAI endpoints for parsing, and stores the extracted data in Firestore database collection. hitl_app: A microservice (web application) that fetches and displays data values extracted from the pa_forms. It also renders the confidence score reported by form processors (ML models) to the PA case manager so that they can review, correct, and save the information in the datastore. ur_app: A microservice (web application) that UR specialists can use to review the PA requests using Generative AI. It uses the model named prompt_model to generate a prompt. The microservice passes the data extracted from the pa_forms forms to the prompt_model model to generate a prompt. It then passes the generated prompt to ur_model model to get the recommendation for a case. Vertex AI medically-tuned LLMs: Vertex AI has a variety of generative AI foundation models that can be tuned to reduce cost and latency. The models used in this architecture are as follows: prompt_model: An adapter on the LLM tuned to generate prompts based on the data extracted from the pa_forms. 
ur_model: An adapter on the LLM tuned to generate a draft recommendation based on the input prompt. ur_search_app: A search application built with Vertex AI Agent Builder to find personalized and relevant information to UR specialists from clinical documents, UM policies, and coverage guidelines. Products used This reference architecture uses the following Google Cloud products: Vertex AI: An ML platform that lets you train and deploy ML models and AI applications, and customize LLMs for use in AI-powered applications. Vertex AI Agent Builder: A platform that lets developers create and deploy enterprise-grade AI-powered agents and applications. Document AI: A document processing platform that takes unstructured data from documents and transforms it into structured data. Firestore: A NoSQL document database built for automatic scaling, high performance, and ease of application development. Cloud Run: A serverless compute platform that lets you run containers directly on top of Google's scalable infrastructure. Cloud Storage: A low-cost, no-limit object store for diverse data types. Data can be accessed from within and outside Google Cloud, and it's replicated across locations for redundancy. Cloud Logging: A real-time log management system with storage, search, analysis, and alerting. Cloud Monitoring: A service that provides visibility into the performance, availability, and health of your applications and infrastructure. Use case UM is a process used by health insurance companies primarily in the United States, but similar processes (with a few modifications) are used globally in the healthcare insurance market. The goal of UM is to help to ensure that patients receive the appropriate care in the correct setting, at the optimum time, and at the lowest possible cost. UM also helps to ensure that medical care is effective, efficient, and in line with evidence-based standards of care. PA is a UM tool that requires approval from the insurance company before a patient receives medical care. The UM process that many companies use is a barrier to providing and receiving timely care. It's costly, time-consuming, and overly administrative. It's also complex, manual, and slow. This process significantly impacts the ability of the health plan to effectively manage the quality of care, and improve the provider and member experience. However, if these companies were to modify their UM process, they could help ensure that patients receive high-quality, cost-effective treatment. By optimizing their UR process, health plans can reduce costs and denials through expedited processing of PA requests, which in turn can improve patient and provider experience. This approach helps to reduce the administrative burden on healthcare providers. When health plans receive requests for PA, the PA case managers create cases in the case management system to track, manage and process the requests. A significant amount of these requests are received by fax and mail, with attached clinical documents. However, the information in these forms and documents is not easily accessible to health insurance companies for data analytics and business intelligence. The current process of manually entering information from these documents into the case management systems is inefficient and time-consuming and can lead to errors. By automating the data ingestion process, health plans can reduce costs, data entry errors, and administrative burden on the staff. 
Extracting valuable information from the clinical forms and documents enables health insurance companies to expedite the UR process. Design considerations This section provides guidance to help you use this reference architecture to develop one or more architectures that help you to meet your specific requirements for security, reliability, operational efficiency, cost, and performance. Security, privacy, and compliance This section describes the factors that you should consider when you use this reference architecture to help design and build an architecture in Google Cloud that helps you to meet your security, privacy, and compliance requirements. In the United States, the Health Insurance Portability and Accountability Act (known as HIPAA, as amended, including by the Health Information Technology for Economic and Clinical Health (HITECH) Act) requires compliance with HIPAA's Security Rule, Privacy Rule, and Breach Notification Rule. Google Cloud supports HIPAA compliance, but ultimately, you are responsible for evaluating your own HIPAA compliance. Complying with HIPAA is a shared responsibility between you and Google. If your organization is subject to HIPAA and you want to use any Google Cloud products in connection with Protected Health Information (PHI), you must review and accept Google's Business Associate Agreement (BAA). The Google products covered under the BAA meet the requirements under HIPAA and align with our ISO/IEC 27001, 27017, and 27018 certifications and SOC 2 report. Not all LLMs hosted in the Vertex AI Model Garden support HIPAA. Evaluate and use the LLMs that support HIPAA. To assess how Google's products can meet your HIPAA compliance needs, you can refer to the third-party audit reports in the Compliance resource center. We recommend that you consider the following when selecting AI use cases, and design with these considerations in mind: Data privacy: The Google Cloud Vertex AI platform and Document AI don't use customer data, usage data, content, or documents to improve or train the foundation models. You can tune the foundation models with your data and documents within your secured tenant on Google Cloud. Firestore server client libraries use Identity and Access Management (IAM) to manage access to your database. To learn about Firebase's security and privacy information, see Privacy and Security in Firebase. To help you store sensitive data, the ingestion_service, hitl_app, and ur_app service images can be encrypted using customer-managed encryption keys (CMEKs) or integrated with Secret Manager. Vertex AI implements Google Cloud security controls to help secure your models and training data. Some security controls aren't supported by the generative AI features in Vertex AI. For more information, see Security Controls for Vertex AI and Security Controls for Generative AI. We recommend that you use IAM to implement the principles of least privilege and separation of duties with cloud resources. This control can limit access at the project, folder, or dataset levels. Cloud Storage automatically stores data in an encrypted state. To learn more about additional methods to encrypt data, see Data encryption options. Google's products follow Responsible AI principles. For security principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Security in the Architecture Framework.
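As a concrete example of the encryption guidance above, the following is a minimal sketch that creates a Cloud Storage bucket (for example, a bucket like pa_forms_bkt) with a customer-managed encryption key as its default key. The project ID, bucket name, location, and Cloud KMS key path are placeholders; before you use a CMEK, you must grant the Cloud Storage service agent the Cloud KMS CryptoKey Encrypter/Decrypter role on the key.

from google.cloud import storage

# Placeholders: replace with your project, bucket name, location, and key path.
client = storage.Client(project="your-project-id")

bucket = client.bucket("pa-forms-bkt-example")
bucket.default_kms_key_name = (
    "projects/your-project-id/locations/us-central1/"
    "keyRings/pa-keyring/cryptoKeys/pa-forms-key"
)

# Objects written to the bucket without an explicit key are encrypted with this CMEK.
client.create_bucket(bucket, location="us-central1")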
Reliability This section describes design factors that you should consider to build and operate reliable infrastructure to automate PA request processing. Document AI, which hosts the form_processors processors, is a regional service. Data is stored synchronously across multiple zones within a region. Traffic is automatically load-balanced across the zones. If a zone outage occurs, data isn't lost.1 If a region outage occurs, the service is unavailable until Google resolves the outage. You can create the pa_forms_bkt, training_forms, eval_forms, tuning_dataset, eval_dataset, clinical_docs, and um_policies Cloud Storage buckets in one of three location types: regional, dual-region, or multi-region. Data stored in regional buckets is replicated synchronously across multiple zones within a region. For higher availability, you can use dual-region or multi-region buckets, where data is replicated asynchronously across regions. In Firestore, the extracted information that's stored in the pa_form_collection database is replicated across multiple data centers, which helps to ensure global scalability and reliability. The Cloud Run services ingestion_service, hitl_app, and ur_app are regional services. Data is stored synchronously across multiple zones within a region. Traffic is automatically load-balanced across the zones. If a zone outage occurs, Cloud Run jobs continue to run and data isn't lost. If a region outage occurs, the Cloud Run jobs stop running until Google resolves the outage. Individual Cloud Run jobs or tasks might fail. To handle such failures, you can use task retries and checkpointing. For more information, see Jobs retries and checkpoints best practices. Cloud Run general development tips describes some best practices for using Cloud Run. Vertex AI is a comprehensive and user-friendly machine learning platform that provides a unified environment for the machine learning lifecycle, from data preparation to model deployment and monitoring. For reliability principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Reliability in the Architecture Framework. Cost optimization This section provides guidance to optimize the cost of creating and running an architecture to automate PA request processing and improve your UR processes. Carefully managing resource usage and selecting appropriate service tiers can significantly impact the overall cost. Cloud Storage storage classes: Use the different storage classes (Standard, Nearline, Coldline, or Archive) based on the data access frequency. Nearline, Coldline, and Archive are more cost-effective for less frequently accessed data. Cloud Storage lifecycle policies: Implement lifecycle policies to automatically transition objects to lower-cost storage classes or delete them based on age and access patterns. Document AI is priced based on the number of processors that you deploy and the number of pages that the processors process. Consider the following: Processor optimization: Analyze workload patterns to determine the optimal number of Document AI processors to deploy. Avoid overprovisioning resources. Page volume management: Pre-processing documents to remove unnecessary pages or to optimize resolution can help to reduce processing costs. Firestore is priced based on activity related to documents, index entries, storage that the database uses, and the amount of network bandwidth. Consider the following: Data modeling: Design your data model to minimize the number of index entries and optimize query patterns for efficiency.
Network bandwidth: Monitor and optimize network usage to avoid excess charges. Consider caching frequently accessed data. Cloud Run charges are calculated based on on-demand CPU usage, memory, and number of requests. Think carefully about resource allocation. Allocate CPU and memory resources based on workload characteristics. Use autoscaling to adjust resources dynamically based on demand. Vertex AI LLMs are typically charged based on the text or media input and output. Input and output token counts directly affect LLM costs. Optimize prompts and response generation for efficiency. Vertex AI Agent Builder search engine charges depend on the features that you use. To help manage your costs, you can choose from the following three options: Search Standard Edition, which offers unstructured search capabilities. Search Enterprise Edition, which offers unstructured search and website search capabilities. Search LLM Add-On, which offers summarization and multi-turn search capabilities. Also consider the following to help optimize costs: Monitoring and alerts: Set up Cloud Monitoring and billing alerts to track costs and receive notifications when usage exceeds your thresholds. Cost reports: Regularly review cost reports in the Google Cloud console to identify trends and optimize resource usage. Consider committed use discounts: If you have predictable workloads, consider committing to using those resources for a specified period to get discounted pricing. Carefully considering these factors and implementing the recommended strategies can help you to effectively manage and optimize the cost of running your PA and UR automation architecture on Google Cloud. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Deployment The reference implementation code for this architecture is available under open-source licensing. The architecture that this code implements is a prototype, and might not include all the features and hardening that you need for a production deployment. To implement and expand this reference architecture to more closely meet your requirements, we recommend that you contact Google Cloud Consulting. The starter code for this reference architecture is available in the following git repositories: CDA git repository: This repository contains Terraform deployment scripts for infrastructure provisioning and deployment of application code. UR service git repository: This repository contains code samples for the UR service. You can choose one of the following two options to implement support and services for this reference architecture: Engage Google Cloud Consulting. Engage a partner who has built a packaged offering by using the products and solution components described in this architecture. What's next Learn how to build infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search. Learn how to build infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL. Learn how to build infrastructure for a RAG-capable generative AI application using GKE. Review the Google Cloud options for grounding generative AI responses. Learn how to optimize Python applications for Cloud Run. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework.
For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. ContributorsAuthor: Dharmesh Patel | Industry Solutions Architect, HealthcareOther contributors: Ben Swenka | Key Enterprise ArchitectEmily Qiao | AI/ML Customer EngineerLuis Urena | Developer Relations EngineerPraney Mittal | Group Product ManagerLakshmanan Sethu | Technical Account Manager For more information about region-specific considerations, see Geography and regions. ↩ Send feedback \ No newline at end of file diff --git a/VMware_Engine(1).txt b/VMware_Engine(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..f0d3d6145472e9534a591b2143382904d0430548 --- /dev/null +++ b/VMware_Engine(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/vmware-engine +Date Scraped: 2025-02-23T12:06:59.482Z + +Content: +Jump to Google Cloud VMware EngineGoogle Cloud VMware EngineRapidly lift and transform your VMware-based applications to Google Cloud without changes to your apps, tools, or processes. Includes all the hardware and VMware licenses to run in a dedicated VMware SDDC in Google Cloud. Get startedFree migration cost assessmentRead our new infobrief with IDC, "Strategies for Successful Migration to Public Clouds" Get started quickly by migrating your VMware environment with just a few clicksProvision an entire VMware SDDC in about 30 minutes across 19 global regionsLearn how our customers modernize their apps with Google Cloud VMware EngineExplore the latest VMware Engine news, articles, and reportsMigrate and transform your VMware workloads with Google CloudWatch WebinarBenefitsFully integrated VMware experienceUnlike other solutions, we simplify the use of VMware services and tools with unified identities, management, support, and monitoring while providing all the licenses, cloud services, and billing you need.Fast provisioning and scaleTake advantage of the elasticity and scale of the cloud—through fast provisioning, you can spin up a new private cloud in about 30 minutes with dynamic resource management and auto scaling.Use familiar third-party apps in the cloud Continue to use your core enterprise apps in the cloud, without changes. Google Cloud VMware Engine integrates with leading database, storage, disaster recovery, and backup ISV solutions.Key featuresKey featuresFast networking and high availabilityVMware Engine is built on Google Cloud’s highly performant, scalable infrastructure with fully redundant and up to 200 Gbps networking, providing 99.99% availability to meet the needs of your most demanding enterprise workloads.Scale datastore capacity independently from compute capacityGoogle Filestore and Google Cloud NetApp Volumes are VMware certified to use as an NFS datastore with VMware Engine. You can use external NFS storage to augment built-in vSAN storage and scale storage independently from compute for your storage intensive VMs. Leverage vSAN for low latency VM storage requirements and use Filestore / Google Cloud VMware Engine to scale storage from TBs to PBs for the capacity hungry VMs.Integrated Google Cloud experienceBenefit from full access to innovative Google Cloud services. Native VPC networking gives you private layer-3 access between VMware environments and Google Cloud services, allowing you to use standard access mechanisms such as Cloud VPN or Interconnect. 
Billing, identity, and access control integrate to unify the experience with other Google Cloud services.Robust VMware ecosystem solutionsGoogle Cloud Backup and Disaster Recovery is a managed backup and disaster recovery (DR) service for centralized, application-consistent data protection. Alternatively, you can leverage IT management tools and third-party services consistent with your on-premises environment. We’re partnering closely with leading storage, backup, and disaster recovery providers such as NetApp, Veeam, Zerto, Cohesity, and Dell Technologies to ease the migration journey; and enable business continuity. Read the VMware Engine ecosystem brief.Familiar VMware tools and Google Cloud's operations suiteIt's easy to move with the same VMware tools, processes, and policies you use to manage your on-premises VMware workloads. Use Google Cloud's operations suite to monitor, troubleshoot, and improve application performance on your Google Cloud environment. View all featuresVIDEOAn overview and demo of VMware Engine5:01CustomersCustomers increase agility and leverage innovative servicesCase studyADT uses Google Cloud VMware Engine to further home automation and security with IT transformation5-min readVideoMitel overhauls its IT and data infrastructure with Google Cloud VMware EngineVideo (1:47)Case studyCarrefour migrates to Google Cloud with VMware Engine, reducing operating costs by 40%5-min readVideoDeutsche Bӧrse Group accelerates its journey to the cloud with VMware Engine01:08Case studyMitel migrated 1,000 VMware instances to Google Cloud VMware Engine in less than 90 days5-min readSee all customersWhat's newWhat's newCheck out our blog for the latest updates, and sign up for Google Cloud newsletters to receive product updates, event information, special offers, and more. VideoMigrate and transform your VMware workloads with Google CloudWatch videoReportSimplify your journey to the cloudLearn moreBlog postSpanning the globe with Google Cloud VMware EngineRead the blogReportVMware Engine whitepaper: Understanding the total cost of ownershipRead reportReportTechnical guide to running VMware-based applications in Google Cloud Read reportReportVMware Engine whitepaper: Migrate, scale, and innovate at speedRead reportDocumentationDocumentationTutorialGoogle Cloud VMware Engine how-to guidesLearn how to access the VMware Engine portal, migrate workloads, and manage private clouds, networking, the vSphere client, and more.Learn moreTutorialAdding Filestore NFS datastores to VMware EngineLearn how to add Filestore shares to Google Cloud VMware Engine to scale storage alongside vSAN storage with your VMware clusters. 
Learn moreWhitepaperRunning VMware-based applications in Google CloudDeeper insights into how VMware Engine facilitates migrating your apps to Google Cloud and the impact on networking, security, monitoring, and maintenance.Learn moreWhitepaperPrivate cloud networking for Google Cloud VMware EngineExplore networking concepts, typical traffic flows, and considerations for using VMware Engine to design a private cloud architecture within Google Cloud.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for VMware Engine.Use casesUse casesUse caseGoogle Cloud VMware Engine reference architectureBelow is a representative reference architecture on how you can migrate or extend your VMware environment to Google Cloud while also benefiting from Google Cloud services.Use caseData center lift and shift and hybrid cloudRetire or extend your data center to the cloud. Quickly meet your business needs by leveraging on-demand burst capacity. Pay for what you use and benefit from flexible consumption options.Use caseDisaster recoveryBuild or shift disaster recovery (DR) capabilities to the cloud, lower costs, and remove operational burdens. Leverage third-party DR services to maintain continuity with your on-premises environment.Use caseVirtual desktop infrastructureEnable your employees to work from anywhere by building and scaling your virtual desktop infrastructure (VDI) on demand such as VMware Horizon or Citrix. VMware Engine is built with high-performance, all-flash, hyper-converged architecture that can support your most demanding apps and help users achieve consistent performance.Use caseApplication modernizationMigrate and modernize your workloads by integrating with cloud-native services. Derive business insights with BigQuery, build intelligent prediction models with Cloud AI, modernize your applications with Anthos, and unify management with Google Cloud’s operations suite.Use caseVertical industry solutionsBoost agility and resilience in retail. Migrate your VMware apps to the cloud and transform them with Google services to drive scale, enhance customer experience, and become a data-driven retailer. Elevate agility and security in financial services. Simplify cloud migration so you can modernize your business with new insights, optimize IT operations, and drive omnichannel growth.Learn how telcos can provide new, scalable services in the cloud.View all technical guidesAll featuresAll featuresOn-demand self-service provisioning of VMware private cloudsDeploy, expand, or shrink your VMware private clouds in minutes. Pay for what you use and benefit from flexible consumption options.Integrated connectivity to Google Cloud servicesBenefit from full access and seamless integration with innovative Google Cloud services such as BigQuery, Google Cloud’s operations suite, Cloud Storage, Anthos, and Cloud AI.VMware ecosystem compatibilityVMware Engine allows users to obtain administrative rights to install and maintain continuity with third-party tools for backup, disaster recovery, and monitoring, along with adding external identity sources and users.Performant networkingVMware Engine is built on Google Cloud’s highly performant, scalable infrastructure with fully redundant and up to 200 Gbps networking. 
Cloud networking services such as Interconnect and Cloud VPN ease access from your on-premises environments to the cloud.Purpose built to run your most demanding workloadsA scalable hyper-converged architecture with all NVMe disks allows you to run and scale your infrastructure in minutes to meet the needs of your most demanding workloads such as transactional databases and applications. Storage Only Nodes enable more cost effective scaling of storage in VMware Engine. Please reach out to your Google sales representative for details.Simplified operationsVMware Engine is designed to minimize your operational burden so you can focus on your business. We take care of life cycle management of the VMware software stack and manage all related infrastructure and upgrades.PricingPricingPricing is based on consumption and commitment term; options include on-demand or committed use discounts for one- and three-year terms, with a three-node minimum. For pilot testing we offer a single node private cloud that may be used for up to 60 days. For detailed pricing information, please use our pricing calculator or visit our VMware Engine pricing page.View pricing detailsPartnersPartnersWe’re building integrations with technology partners and growing our service partner ecosystem to simplify migration and ensure continuity with how you run your applications today.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Get startedNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/VMware_Engine.txt b/VMware_Engine.txt new file mode 100644 index 0000000000000000000000000000000000000000..404806300661c435c10bddea54ed6b455b8bb539 --- /dev/null +++ b/VMware_Engine.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/vmware-engine +Date Scraped: 2025-02-23T12:02:50.476Z + +Content: +Jump to Google Cloud VMware EngineGoogle Cloud VMware EngineRapidly lift and transform your VMware-based applications to Google Cloud without changes to your apps, tools, or processes. Includes all the hardware and VMware licenses to run in a dedicated VMware SDDC in Google Cloud. Get startedFree migration cost assessmentRead our new infobrief with IDC, "Strategies for Successful Migration to Public Clouds" Get started quickly by migrating your VMware environment with just a few clicksProvision an entire VMware SDDC in about 30 minutes across 19 global regionsLearn how our customers modernize their apps with Google Cloud VMware EngineExplore the latest VMware Engine news, articles, and reportsMigrate and transform your VMware workloads with Google CloudWatch WebinarBenefitsFully integrated VMware experienceUnlike other solutions, we simplify the use of VMware services and tools with unified identities, management, support, and monitoring while providing all the licenses, cloud services, and billing you need.Fast provisioning and scaleTake advantage of the elasticity and scale of the cloud—through fast provisioning, you can spin up a new private cloud in about 30 minutes with dynamic resource management and auto scaling.Use familiar third-party apps in the cloud Continue to use your core enterprise apps in the cloud, without changes. 
Google Cloud VMware Engine integrates with leading database, storage, disaster recovery, and backup ISV solutions.Key featuresKey featuresFast networking and high availabilityVMware Engine is built on Google Cloud’s highly performant, scalable infrastructure with fully redundant and up to 200 Gbps networking, providing 99.99% availability to meet the needs of your most demanding enterprise workloads.Scale datastore capacity independently from compute capacityGoogle Filestore and Google Cloud NetApp Volumes are VMware certified to use as an NFS datastore with VMware Engine. You can use external NFS storage to augment built-in vSAN storage and scale storage independently from compute for your storage intensive VMs. Leverage vSAN for low latency VM storage requirements and use Filestore / Google Cloud VMware Engine to scale storage from TBs to PBs for the capacity hungry VMs.Integrated Google Cloud experienceBenefit from full access to innovative Google Cloud services. Native VPC networking gives you private layer-3 access between VMware environments and Google Cloud services, allowing you to use standard access mechanisms such as Cloud VPN or Interconnect. Billing, identity, and access control integrate to unify the experience with other Google Cloud services.Robust VMware ecosystem solutionsGoogle Cloud Backup and Disaster Recovery is a managed backup and disaster recovery (DR) service for centralized, application-consistent data protection. Alternatively, you can leverage IT management tools and third-party services consistent with your on-premises environment. We’re partnering closely with leading storage, backup, and disaster recovery providers such as NetApp, Veeam, Zerto, Cohesity, and Dell Technologies to ease the migration journey; and enable business continuity. Read the VMware Engine ecosystem brief.Familiar VMware tools and Google Cloud's operations suiteIt's easy to move with the same VMware tools, processes, and policies you use to manage your on-premises VMware workloads. Use Google Cloud's operations suite to monitor, troubleshoot, and improve application performance on your Google Cloud environment. View all featuresVIDEOAn overview and demo of VMware Engine5:01CustomersCustomers increase agility and leverage innovative servicesCase studyADT uses Google Cloud VMware Engine to further home automation and security with IT transformation5-min readVideoMitel overhauls its IT and data infrastructure with Google Cloud VMware EngineVideo (1:47)Case studyCarrefour migrates to Google Cloud with VMware Engine, reducing operating costs by 40%5-min readVideoDeutsche Bӧrse Group accelerates its journey to the cloud with VMware Engine01:08Case studyMitel migrated 1,000 VMware instances to Google Cloud VMware Engine in less than 90 days5-min readSee all customersWhat's newWhat's newCheck out our blog for the latest updates, and sign up for Google Cloud newsletters to receive product updates, event information, special offers, and more. 
VideoMigrate and transform your VMware workloads with Google CloudWatch videoReportSimplify your journey to the cloudLearn moreBlog postSpanning the globe with Google Cloud VMware EngineRead the blogReportVMware Engine whitepaper: Understanding the total cost of ownershipRead reportReportTechnical guide to running VMware-based applications in Google Cloud Read reportReportVMware Engine whitepaper: Migrate, scale, and innovate at speedRead reportDocumentationDocumentationTutorialGoogle Cloud VMware Engine how-to guidesLearn how to access the VMware Engine portal, migrate workloads, and manage private clouds, networking, the vSphere client, and more.Learn moreTutorialAdding Filestore NFS datastores to VMware EngineLearn how to add Filestore shares to Google Cloud VMware Engine to scale storage alongside vSAN storage with your VMware clusters. Learn moreWhitepaperRunning VMware-based applications in Google CloudDeeper insights into how VMware Engine facilitates migrating your apps to Google Cloud and the impact on networking, security, monitoring, and maintenance.Learn moreWhitepaperPrivate cloud networking for Google Cloud VMware EngineExplore networking concepts, typical traffic flows, and considerations for using VMware Engine to design a private cloud architecture within Google Cloud.Learn moreNot seeing what you’re looking for?View all product documentationRelease notesRead about the latest releases for VMware Engine.Use casesUse casesUse caseGoogle Cloud VMware Engine reference architectureBelow is a representative reference architecture on how you can migrate or extend your VMware environment to Google Cloud while also benefiting from Google Cloud services.Use caseData center lift and shift and hybrid cloudRetire or extend your data center to the cloud. Quickly meet your business needs by leveraging on-demand burst capacity. Pay for what you use and benefit from flexible consumption options.Use caseDisaster recoveryBuild or shift disaster recovery (DR) capabilities to the cloud, lower costs, and remove operational burdens. Leverage third-party DR services to maintain continuity with your on-premises environment.Use caseVirtual desktop infrastructureEnable your employees to work from anywhere by building and scaling your virtual desktop infrastructure (VDI) on demand such as VMware Horizon or Citrix. VMware Engine is built with high-performance, all-flash, hyper-converged architecture that can support your most demanding apps and help users achieve consistent performance.Use caseApplication modernizationMigrate and modernize your workloads by integrating with cloud-native services. Derive business insights with BigQuery, build intelligent prediction models with Cloud AI, modernize your applications with Anthos, and unify management with Google Cloud’s operations suite.Use caseVertical industry solutionsBoost agility and resilience in retail. Migrate your VMware apps to the cloud and transform them with Google services to drive scale, enhance customer experience, and become a data-driven retailer. Elevate agility and security in financial services. Simplify cloud migration so you can modernize your business with new insights, optimize IT operations, and drive omnichannel growth.Learn how telcos can provide new, scalable services in the cloud.View all technical guidesAll featuresAll featuresOn-demand self-service provisioning of VMware private cloudsDeploy, expand, or shrink your VMware private clouds in minutes. 
Pay for what you use and benefit from flexible consumption options.Integrated connectivity to Google Cloud servicesBenefit from full access and seamless integration with innovative Google Cloud services such as BigQuery, Google Cloud’s operations suite, Cloud Storage, Anthos, and Cloud AI.VMware ecosystem compatibilityVMware Engine allows users to obtain administrative rights to install and maintain continuity with third-party tools for backup, disaster recovery, and monitoring, along with adding external identity sources and users.Performant networkingVMware Engine is built on Google Cloud’s highly performant, scalable infrastructure with fully redundant and up to 200 Gbps networking. Cloud networking services such as Interconnect and Cloud VPN ease access from your on-premises environments to the cloud.Purpose built to run your most demanding workloadsA scalable hyper-converged architecture with all NVMe disks allows you to run and scale your infrastructure in minutes to meet the needs of your most demanding workloads such as transactional databases and applications. Storage Only Nodes enable more cost effective scaling of storage in VMware Engine. Please reach out to your Google sales representative for details.Simplified operationsVMware Engine is designed to minimize your operational burden so you can focus on your business. We take care of life cycle management of the VMware software stack and manage all related infrastructure and upgrades.PricingPricingPricing is based on consumption and commitment term; options include on-demand or committed use discounts for one- and three-year terms, with a three-node minimum. For pilot testing we offer a single node private cloud that may be used for up to 60 days. For detailed pricing information, please use our pricing calculator or visit our VMware Engine pricing page.View pricing detailsPartnersPartnersWe’re building integrations with technology partners and growing our service partner ecosystem to simplify migration and ensure continuity with how you run your applications today.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Get startedNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/VMware_Engine_network_security_using_centralized_appliances.txt b/VMware_Engine_network_security_using_centralized_appliances.txt new file mode 100644 index 0000000000000000000000000000000000000000..45a76ad65913431d001360abc596eefa583c5732 --- /dev/null +++ b/VMware_Engine_network_security_using_centralized_appliances.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/gcve-advanced-network-security +Date Scraped: 2025-02-23T11:53:54.893Z + +Content: +Home Docs Cloud Architecture Center Send feedback VMware Engine network security using centralized appliances Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-08-14 UTC As part of your organization's defense-in-depth strategy, you might have security policies that require the use of centralized network appliances for in-line detection and blocking of suspicious network activity. 
This document helps you to design the following advanced network-protection features for Google Cloud VMware Engine workloads: Mitigation of distributed denial-of-service (DDoS) attacks SSL offloading Next-generation firewalls (NGFW) Intrusion Prevention System (IPS) and Intrusion Detection System (IDS) Deep packet inspection (DPI) The architectures in this document use Cloud Load Balancing and network appliances from Google Cloud Marketplace. Cloud Marketplace offers production-ready, vendor-supported network appliances from Google Cloud security partners for your enterprise IT needs. The guidance in this document is intended for security architects and network administrators who design, provision, and manage network connectivity for VMware Engine workloads. The document assumes that you're familiar with Virtual Private Cloud (VPC), VMware vSphere, VMware NSX, network address translation (NAT), and Cloud Load Balancing. Architecture The following diagram shows an architecture for network connectivity to VMware Engine workloads from on-premises networks and from the internet. Later in this document, this architecture is extended to meet the requirements of specific use cases. Figure 1. Basic architecture for network connectivity to VMware Engine workloads. Figure 1 shows the following main components of the architecture: VMware Engine private cloud: an isolated VMware stack that consists of virtual machines (VMs), storage, networking infrastructure, and a VMware vCenter Server. VMware NSX-T provides networking and security features such as microsegmentation and firewall policies. The VMware Engine VMs use IP addresses from network segments that you create in your private cloud. Public IP address service: provides external IP addresses to the VMware Engine VMs to enable ingress access from the internet. The internet gateway provides egress access by default for the VMware Engine VMs. VMware Engine tenant VPC network: a dedicated, Google-managed VPC network that's used with every VMware Engine private cloud to enable communication with Google Cloud services. Customer VPC networks: Customer VPC network 1 (external): a VPC network that hosts the public-facing interface of your network appliance and load balancer. Customer VPC network 2 (internal): a VPC network that hosts the internal interface of the network appliance and is peered with the VMware Engine tenant VPC network by using the private services access model. Note: The customer VPC network can also be a Shared VPC network, which is not covered in this document. Private services access: a private access model that uses VPC Network Peering to enable connectivity between Google-managed services and your VPC networks. Network appliances: networking software that you choose from Cloud Marketplace and deploy on Compute Engine instances. Cloud Load Balancing: a Google-managed service that you can use to manage traffic to highly available distributed workloads in Google Cloud. You can choose a load balancer type that suits your traffic protocol and access requirements. The architectures in this document don't use the built-in NSX-T load balancers. Configuration notes The following diagram shows the resources that are required to provide network connectivity for VMware Engine workloads: Figure 2. Resources required for network connectivity to VMware Engine workloads. Figure 2 shows the tasks that you must complete to set up and configure the resources in this architecture. 
The following is a description of each task, including a link to a document that provides more information and detailed instructions. Create the external and internal VPC networks and subnets by following the instructions in Creating a custom mode VPC network. For each subnet, choose an IP address range that's unique across the VPC networks. The Management VPC network that's shown in the architecture diagram is optional. If necessary, you can use it to host management NIC interfaces for your network appliances. Deploy the required network appliances from Cloud Marketplace. For high availability of the network appliances, deploy each appliance in a pair of VMs distributed across two zones. You can deploy the network appliances in instance groups. The instance groups can be managed instance groups (MIGs) or unmanaged instance groups, depending on your management or vendor-support requirements. Provision the network interfaces as follows: nic0 in the external VPC network to route traffic to the public source. nic1 for management operations, if the appliance vendor requires it. nic2 in the internal VPC network for internal communication with the VMware Engine resources. Deploying the network interfaces in separate VPC networks helps you to ensure security-zone segregation at the interface level for public and on-premises connections. Set up VMware Engine: Create a VMware Engine private cloud. Create a network segment for the VMware Engine VMs. Use private services access to set up VPC Network Peering to connect the internal VPC network to the VPC network that VMware Engine manages. If you need hybrid connectivity to your on-premises network, use Cloud VPN or Cloud Interconnect. You can extend the architecture in figure 2 for the following use cases: Use case Products and services used NGFW for public-facing VMware Engine workloads Network appliances from Cloud Marketplace External passthrough Network Load Balancers NGFW, DDoS mitigation, SSL offloading, and Content Delivery Network (CDN) for public-facing VMware Engine workloads Network appliances from Cloud Marketplace External Application Load Balancers NGFW for private communication between VMware Engine workloads and on-premises data centers or other cloud providers Network appliances from Cloud Marketplace Internal passthrough Network Load Balancers Centralized egress points to the internet for VMware Engine workloads Network appliances from Cloud Marketplace Internal passthrough Network Load Balancers The following sections describe these use cases and provide an overview of the configuration tasks to implement the use cases. NGFW for public-facing workloads This use case has the following requirements: A hybrid architecture that consists of VMware Engine and Compute Engine instances, with an L4 load balancer as the common frontend. Protection for public VMware Engine workloads by using an IPS/IDS, NGFW, DPI, or NAT solution. More public IP addresses than are supported by the public IP address service of VMware Engine. The following diagram shows the resources that are required to provision an NGFW for your public-facing VMware Engine workloads: Figure 3. Resources required to provision an NGFW for public-facing VMware Engine workloads. Figure 3 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions. 
Provision an external passthrough Network Load Balancer in the external VPC network as the public-facing ingress entry point for VMware Engine workloads. Create multiple forwarding rules to support multiple VMware Engine workloads. Configure each forwarding rule with a unique IP address and a TCP or UDP port number. Configure the network appliances as backends for the load balancer. Configure the network appliances to perform destination-NAT (DNAT) for the forwarding rule's public IP address to the internal IP addresses of the VMs that host the public-facing applications in VMware Engine. The network appliances must perform source-NAT (SNAT) for the traffic from the nic2 interface to ensure a symmetric returned path. The network appliances must also route traffic destined for VMware Engine networks through the nic2 interface to the subnet's gateway (the first IP address of the subnet). For the health checks to pass, the network appliances must use secondary or loopback interfaces to respond to the forwarding rules' IP addresses. Set up the internal VPC network's route table to forward VMware Engine traffic to VPC Network Peering as a next hop. In this configuration, the VMware Engine VMs use the internet gateway service of VMware Engine for egress to internet resources. However, ingress is managed by the network appliances for the public IP addresses that are mapped to the VMs. NGFW, DDoS mitigation, SSL offloading, and CDN This use case has the following requirements: A hybrid architecture that consists of VMware Engine and Compute Engine instances, with an L7 load balancer as the common frontend, and URL mapping to route traffic to the appropriate backend. Protection for public VMware Engine workloads by using an IPS/IDS, NGFW, DPI, or NAT solution. L3—L7 DDoS mitigation for public VMware Engine workloads by using Google Cloud Armor. SSL termination using Google-managed SSL certificates, or SSL policies to control the SSL versions and ciphers that are used for HTTPS or SSL connections to public-facing VMware Engine workloads. Accelerated network delivery for VMware Engine workloads by using Cloud CDN to serve content from locations that are close to the users. The following diagram shows the resources that are required to provision NGFW capability, DDoS mitigation, SSL offloading, and CDN for your public-facing VMware Engine workloads: Figure 4. Resources required to provision an NGFW, DDoS mitigation, SSL offloading, and CDN for public-facing VMware Engine workloads. Figure 4 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions. Provision a global external Application Load Balancer in the external VPC network as the public-facing ingress entry point for VMware Engine workloads. Create multiple forwarding rules to support multiple VMware Engine workloads. Configure each forwarding rule with a unique public IP address, and set it up to listen to HTTP(S) traffic. Configure the network appliances as backends for the load balancer. In addition, you can do the following: To protect the network appliances, set up Google Cloud Armor security policies on the load balancer. To support routing, health-check, and anycast IP address for the network appliances that act as the CDN backends, set up Cloud CDN for the MIGs that host the network appliances. 
To route requests to different backends, set up URL mapping on the load balancer. For example, route requests to /api to Compute Engine VMs, requests to /images to a Cloud Storage bucket, and requests to /app through the network appliances to your VMware Engine VMs. Configure each network appliance to perform destination-NAT (DNAT) for the internal IP address of its nic0 interface to the internal IP addresses of the VMs that host the public-facing applications in VMware Engine. The network appliances must perform SNAT for the source traffic from the nic2 interface (internal IP address) to ensure a symmetric returned path. In addition, the network appliances must route traffic destined for VMware Engine networks through the nic2 interface to the subnet gateway (the first IP address of the subnet). The DNAT step is necessary because the load balancer is a proxy-based service that's implemented on a Google Front End (GFE) service. Depending on the location of your clients, multiple GFEs can initiate HTTP(S) connections to the internal IP addresses of the backend network appliances. The packets from the GFEs have source IP addresses from the same range that's used for the health-check probes (35.191.0.0/16 and 130.211.0.0/22), not the original client IP addresses. The load balancer appends the client IP addresses by using the X-Forwarded-For header. For the health checks to pass, configure the network appliances to respond to the forwarding rule's IP address by using secondary or loopback interfaces. Set up the internal VPC network's route table to forward VMware Engine traffic to VPC Network Peering. In this configuration, the VMware Engine VMs use the internet gateway service of VMware Engine for egress to the internet. However, ingress is managed by the network appliances for the public IP addresses of the VMs. NGFW for private connectivity This use case has the following requirements: A hybrid architecture that consists of VMware Engine and Compute Engine instances, with an L4 load balancer as the common frontend. Protection for your private VMware Engine workloads by using an IPS/IDS, NGFWs, DPI, or NAT solution. Cloud Interconnect or Cloud VPN for connectivity with the on-premises network. The following diagram shows the resources that are required to provision an NGFW for private connectivity between your VMware Engine workloads and on-premises networks or other cloud providers: Figure 5. Resources required to provision an NGFW for private connectivity to VMware Engine workloads. Figure 5 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions. Provision an internal passthrough Network Load Balancer in the external VPC network, with a single forwarding rule to listen to all the traffic. Configure the network appliances as backends for the load balancer. Set up the external VPC network's route table to point to the forwarding rule as a next hop for traffic destined to VMware Engine networks. Configure the network appliances as follows: Route traffic destined to VMware Engine networks through the nic2 interface to the subnet gateway (the first IP address of the subnet). For the health checks to pass, configure the network appliances to respond to the forwarding rule's IP address by using secondary or loopback interfaces. 
For the health checks to pass for the internal load balancers, configure multiple virtual routing domains to ensure proper routing. This step is necessary to allow the nic2 interface to return health-check traffic that is sourced from the public ranges (35.191.0.0/16 and 130.211.0.0/22), while the default route of the network appliances points to the nic0 interface. For more information about IP ranges for load-balancer health checks, see Probe IP ranges and firewall rules. Note: The internal passthrough Network Load Balancer supports symmetric hashing. You don't need to configure SNAT on the network appliance to ensure a symmetric returned path for the NGFW functionality. Set up the route table of the internal VPC network to forward VMware Engine traffic to VPC Network Peering as a next hop. For returned traffic or for traffic that's initiated from VMware Engine to remote networks, configure the internal passthrough Network Load Balancer as the next hop that's advertised over VPC Network Peering to the private services access VPC network. Centralized egress to the internet This use case has the following requirements: Centralized URL filtering, logging, and traffic enforcement for internet egress. Customized protection for VMware Engine workloads by using network appliances from Cloud Marketplace. The following diagram shows the resources required to provision centralized egress points from VMware Engine workloads to the internet: Figure 6. Resources required to provision centralized egress to the internet for VMware Engine workloads. Figure 6 shows the tasks that you must complete to set up and configure the resources in this architecture. The following is a description of each task, including a link to a document that provides more information and detailed instructions. Provision an internal passthrough Network Load Balancer in the internal VPC network as the egress entry point for VMware Engine workloads. Create a single forwarding rule to listen to all traffic. Configure the network appliances as backends for the load balancer. Configure the network appliances to SNAT the traffic from their public IP addresses (nic0). For the health checks to pass, the network appliances must respond to the forwarding rule's IP address by using secondary or loopback interfaces. Configure the internal VPC network to advertise a default route over VPC Network Peering to the private services access VPC network, with the internal load balancer's forwarding rule as the next hop. To allow traffic to egress through the network appliances instead of the internet gateway, use the same process as you would to enable the routing of internet traffic through an on-premises connection. What's next Learn more about VMware Engine. Review VPC network design best practices. Learn about VMware Engine networking. Learn about Cloud Load Balancing. Explore Cloud Marketplace. Send feedback \ No newline at end of file diff --git a/Validating_data_transfers.txt b/Validating_data_transfers.txt new file mode 100644 index 0000000000000000000000000000000000000000..f9693092dfeab414c53734ce1dc9b7a6cec3ea7e --- /dev/null +++ b/Validating_data_transfers.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hadoop/validating-data-transfers +Date Scraped: 2025-02-23T11:52:40.193Z + +Content: +Home Docs Cloud Architecture Center Send feedback Validating data transfers between HDFS and Cloud Storage Stay organized with collections Save and categorize content based on your preferences. 
Last reviewed 2024-04-15 UTC When you're copying or moving data between distinct storage systems such as multiple Apache Hadoop Distributed File System (HDFS) clusters or between HDFS and Cloud Storage, it's a good idea to perform some type of validation to guarantee data integrity. This validation is essential to be sure data wasn't altered during transfer. While various mechanisms already ensure point-to-point data integrity in transit (such as TLS for all communication with Cloud Storage), explicit end-to-end data integrity validation adds protection for cases that may go undetected by typical in-transit mechanisms. This can help you detect potential data corruption caused, for example, by noisy network links, memory errors on server computers and routers along the path, or software bugs (such as in a library that customers use). For Cloud Storage, this validation happens automatically client-side with Google Cloud CLI commands like cp and rsync. Those commands compute local file checksums, which are then validated against the checksums computed by Cloud Storage at the end of each operation. If the checksums don't match, the gcloud CLI deletes the invalid copies and prints a warning message. This mismatch rarely happens, and if it does, you can retry the operation. Now there's also a way to automatically perform end-to-end, client-side validation in Apache Hadoop across heterogeneous Hadoop-compatible file systems like HDFS and Cloud Storage. This article describes how the new feature lets you efficiently and accurately compare file checksums. How HDFS performs file checksums HDFS uses CRC32C, a 32-bit cyclic redundancy check (CRC) based on the Castagnoli polynomial, to maintain data integrity in different contexts: At rest, Hadoop DataNodes continuously verify data against stored CRCs to detect and repair bit-rot. In transit, the DataNodes send known CRCs along with the corresponding bulk data, and HDFS client libraries cooperatively compute per-chunk CRCs to compare against the CRCs received from the DataNodes. For HDFS administrative purposes, block-level checksums are used for low-level manual integrity checks of individual block files on DataNodes. For arbitrary application-layer use cases, the FileSystem interface defines getFileChecksum, and the HDFS implementation uses its stored fine-grained CRCs to define a file-level checksum. For most day-to-day uses, the CRCs are used transparently with respect to the application layer. The only CRCs used are the per-chunk CRC32Cs, which are already precomputed and stored in metadata files alongside block data. The chunk size is defined by dfs.bytes-per-checksum and has a default value of 512 bytes. Shortcomings of Hadoop's default file checksum type By default when using Hadoop, all API-exposed checksums take the form of an MD5 of a concatenation of chunk CRC32Cs, either at the block level through the low-level DataTransferProtocol, or at the file level through the top-level FileSystem interface. A file-level checksum is defined as the MD5 of the concatenation of all the block checksums, each of which is an MD5 of a concatenation of chunk CRCs, and is therefore referred to as an MD5MD5CRC32FileChecksum. This is effectively an on-demand, three-layer Merkle tree. This definition of the file-level checksum is sensitive to the implementation and data-layout details of HDFS, namely the chunk size (default 512 bytes) and the block size (default 128 MB). 
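To see why this default checksum is sensitive to those settings, the following toy sketch mirrors the layering described above: an MD5 over per-block MD5s, where each block MD5 is taken over that block's concatenated chunk CRCs. It uses zlib.crc32 as a stand-in for CRC32C and deliberately small block sizes purely to illustrate the structure; it isn't the HDFS implementation and won't reproduce real HDFS checksum values.

import hashlib
import zlib

CHUNK = 512  # stand-in for dfs.bytes-per-checksum

def chunk_crcs(block):
    # Per-chunk CRCs, analogous to what DataNodes store alongside block data.
    return b"".join(
        zlib.crc32(block[i:i + CHUNK]).to_bytes(4, "big")
        for i in range(0, len(block), CHUNK)
    )

def default_style_checksum(data, block_size):
    # MD5 of per-block MD5s; each block MD5 is taken over that block's chunk CRCs.
    block_md5s = b"".join(
        hashlib.md5(chunk_crcs(data[b:b + block_size])).digest()
        for b in range(0, len(data), block_size)
    )
    return hashlib.md5(block_md5s).hexdigest()

data = bytes(range(256)) * 4096                  # 1 MiB of sample data
print(default_style_checksum(data, 64 * 1024))   # toy "block size" of 64 KiB
print(default_style_checksum(data, 128 * 1024))  # 128 KiB gives a different value

The two printed values differ even though the underlying bytes are identical, which is the mismatch problem that the composite CRC checksum described later in this article solves.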
So this default file checksum isn't suitable in any of the following situations: Two different copies of the same files in HDFS, but with different per-file block sizes configured. Two different instances of HDFS with different block or chunk sizes configured. A combination of HDFS and non-HDFS Hadoop-compatible file systems (HCFS) such as Cloud Storage. The following diagram shows how the same file can end up with different checksums depending on the file system's configuration: You can display the default checksum for a file in HDFS by using the Hadoop fs -checksum command: hadoop fs -checksum hdfs:///user/bob/data.bin In an HDFS cluster that has a block size of 64 MB (dfs.block.size=67108864), this command displays a result like the following: hdfs:///user/bob/data.bin MD5-of-131072MD5-of-512CRC32C 000002000000000000020000e9378baa1b8599e50cca212ccec2f8b7 For the same file in another cluster that has a block size of 128 MB (dfs.block.size=134217728), you see a different output: hdfs:///user/bob/data.bin MD5-of-0MD5-of-512CRC32C 000002000000000000000000d3a7bae0b5200ed6707803a3911959f9 You can see in these examples that the two checksums differ for the same file. How Hadoop's new composite CRC file checksum works A new checksum type, tracked in HDFS-13056, was released in Apache Hadoop 3.1.1 to address these shortcomings. The new type, configured by dfs.checksum.combine.mode=COMPOSITE_CRC, defines new composite block CRCs and composite file CRCs as the mathematically composed CRC across the stored chunk CRCs. Using this type replaces using an MD5 of the component CRCs in order to calculate a single CRC that represents the entire block or file and is independent of the lower-level granularity of chunk CRCs. CRC composition has many benefits — it is efficient; it allows the resulting checksums to be completely chunk/block agnostic; and it allows comparison between striped and replicated files, between different HDFS instances, and between HDFS and other external storage systems. (You can learn more details about the CRC algorithm in this PDF download.) The following diagram provides a look at how a file's checksum is consistent after transfer across heterogeneous file system configurations: This feature is minimally invasive: it can be added in place to be compatible with existing block metadata, and doesn't need to change the normal path of chunk verification. This also means even large preexisting HDFS deployments can adopt this feature to retroactively sync data. For more details, you can download the full design PDF document. Using the new composite CRC checksum type To use the new composite CRC checksum type within Hadoop, set the dfs.checksum.combine.mode property to COMPOSITE_CRC (instead of the default value MD5MD5CRC). When a file is copied from one location to another, the chunk-level checksum type (that is, the property dfs.checksum.type that defaults to CRC32C) must also match in both locations. You can display the new checksum type for a file in HDFS by passing the -Ddfs.checksum.combine.mode=COMPOSITE_CRC argument to the Hadoop fs -checksum command: hadoop fs -Ddfs.checksum.combine.mode=COMPOSITE_CRC -checksum hdfs:///user/bob/data.bin Regardless of the block size configuration of the HDFS cluster, you see the same output, like the following: hdfs:///user/bob/data.bin COMPOSITE-CRC32C c517d290 For Cloud Storage, you must also explicitly set the Cloud Storage connector property fs.gs.checksum.type to CRC32C. 
This property otherwise defaults to NONE, causing file checksums to be disabled by default. This default behavior by the Cloud Storage connector is a preventive measure to avoid an issue with distcp, where an exception is raised if the checksum types mismatch instead of failing gracefully. The command looks like the following: hadoop fs -Ddfs.checksum.combine.mode=COMPOSITE_CRC -Dfs.gs.checksum.type=CRC32C -checksum gs://[BUCKET]/user/bob/data.bin The command displays the same output as in the previous example on HDFS: gs://[BUCKET]/user/bob/data.bin COMPOSITE-CRC32C c517d290 You can see that the composite CRC checksums returned by the previous commands all match, regardless of block size, as well as between HDFS and Cloud Storage. By using composite CRC checksums, you can now guarantee that data integrity is preserved when transferring files between all types of Hadoop cluster configurations. If you are running distcp, as in the following example, the validation is performed automatically: hadoop distcp -Ddfs.checksum.combine.mode=COMPOSITE_CRC -Dfs.gs.checksum.type=CRC32C hdfs:///user/bob/* gs://[BUCKET]/user/bob/ If distcp detects a file checksum mismatch between the source and destination during the copy, then the operation will fail and return a warning. Accessing the feature The new composite CRC checksum feature is available in Apache Hadoop 3.1.1 (see release notes), and backports to versions 2.7, 2.8 and 2.9 are in the works. It has been included by default in subminor versions of Cloud Dataproc 1.3 since late 2018. Send feedback \ No newline at end of file diff --git a/Vertex_AI.txt b/Vertex_AI.txt new file mode 100644 index 0000000000000000000000000000000000000000..a4fc51f1631a2470bc5edca17d5cdcad9f3c4b4f --- /dev/null +++ b/Vertex_AI.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/vertex-ai +Date Scraped: 2025-02-23T12:01:35.362Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceVertex AI PlatformInnovate faster with enterprise-ready AI, enhanced by Gemini modelsVertex AI is a fully-managed, unified AI development platform for building and using generative AI. Access and utilize Vertex AI Studio, Agent Builder, and 160+ foundation models.Try it in consoleContact sales Want training? Start a free course for Vertex AI Studio.Product highlightsBuild generative AI apps quickly with Gemini Train, test, and tune ML models on a single platformAccelerate development with unified data and AIGetting started with Gemini on Vertex AI5 min videoFeaturesGemini, Google’s most capable multimodal modelsVertex AI offers access to the latest Gemini models from Google. Gemini is capable of understanding virtually any input, combining different types of information, and generating almost any output. Prompt and test Gemini in Vertex AI Studio, using text, images, video, or code. Using Gemini’s advanced reasoning and state-of-the-art generation capabilities, developers can try sample prompts for extracting text from images, converting image text to JSON, and even generate answers about uploaded images to build next-gen AI applications.3:44How to use the Gemini APIs: Advanced techniques160+ generative AI models and tools Choose from the widest variety of models with first-party (Gemini, Imagen 3), third-party (Anthropic's Claude Model Family), and open models (Gemma, Llama 3.2) in Model Garden. Use extensions to enable models to retrieve real-time information and trigger actions. 
Customize models to your use case with a variety of tuning options for Google's text, image, or code models.Generative AI models and fully managed tools make it easy to prototype, customize, and integrate and deploy them into applications.Explore AI models and APIs in Model GardenView documentationOpen and integrated AI platformData scientists can move faster with Vertex AI Platform's tools for training, tuning, and deploying ML models.Vertex AI notebooks, including your choice of Colab Enterprise or Workbench, are natively integrated with BigQuery providing a single surface across all data and AI workloads.Vertex AI Training and Prediction help you reduce training time and deploy models to production easily with your choice of open source frameworks and optimized AI infrastructure.MLOps for predictive and generative AIVertex AI Platform provides purpose-built MLOps tools for data scientists and ML engineers to automate, standardize, and manage ML projects.Modular tools help you collaborate across teams and improve models throughout the entire development lifecycle—identify the best model for a use case with Vertex AI Evaluation, orchestrate workflows with Vertex AI Pipelines, manage any model with Model Registry, serve, share, and reuse ML features with Feature Store, and monitor models for input skew and drift.MLOps with Vertex AI: Model EvaluationNo cost training Get started Agent Builder Vertex AI Agent Builder enables developers to easily build and deploy enterprise ready generative AI experiences. It provides the convenience of a no code agent builder console alongside powerful grounding, orchestration, and customization capabilities. With Vertex AI Agent Builder developers can quickly create a range of generative AI agents and applications grounded in their organization’s data.View all featuresHow It WorksVertex AI provides several options for model training and deployment:Generative AI gives you access to large generative AI models, including Gemini 2.0 Flash, so you can evaluate, tune, and deploy them for use in your AI-powered applications.Model Garden lets you discover, test, customize, and deploy Vertex AI and select open-source (OSS) models and assets.Custom training gives you complete control over the training process, including using your preferred ML framework, writing your own training code, and choosing hyperparameter tuning options.View documentationVertex AI enables faster innovation with enterprise-ready generative AIsparkGet solution recommendations for your use case, generated by AICan I use both Google's Gemini and Meta's Llama in Google Cloud?I want to generate images that match my company's brandI want to train a custom model for a specific need that we haveMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionUse various generative modelsprompt_suggestionGenerate custom imagesprompt_suggestionTrain a custom modelCommon UsesBuild with GeminiAccess Gemini models via the Gemini API in Google Cloud Vertex AIView code samples for Python, JavaScript, Java, Go, and CurlPythonPythonJavaScriptJavaGoCurlLoading...model = genai.GenerativeModel(model_name="gemini-pro-vision") response = model.generate_content(["What's in this photo?", img])content_copyOpen full code Set up the Vertex AI Gemini APILearn multimodal design best practicesRepository for sample code and notebooks Code sampleAccess Gemini models via the Gemini API in Google Cloud Vertex AIView code samples for Python, JavaScript, 
Java, Go, and CurlPythonPythonJavaScriptJavaGoCurlLoading...model = genai.GenerativeModel(model_name="gemini-pro-vision") response = model.generate_content(["What's in this photo?", img])content_copyOpen full code Set up the Vertex AI Gemini APILearn multimodal design best practicesRepository for sample code and notebooks Generative AI in applications Get an introduction to generative AI on Vertex AIVertex AI Studio offers a Google Cloud console tool for rapidly prototyping and testing generative AI models. Learn how you can use Generative AI Studio to test models using prompt samples, design and save prompts, tune a foundation model, and convert between speech and text.View documentation overviewView available models in Vertex AI StudioIntroduction to prompt designGenerative AI prompt samplesSee how to tune LLMs in Vertex AI StudioTutorials, quickstarts, & labsGet an introduction to generative AI on Vertex AIVertex AI Studio offers a Google Cloud console tool for rapidly prototyping and testing generative AI models. Learn how you can use Generative AI Studio to test models using prompt samples, design and save prompts, tune a foundation model, and convert between speech and text.View documentation overviewView available models in Vertex AI StudioIntroduction to prompt designGenerative AI prompt samplesSee how to tune LLMs in Vertex AI StudioExtract, summarize, and classify dataUse gen AI for summarization, classification, and extractionLearn how to create text prompts for handling any number of tasks with Vertex AI’s generative AI support. Some of the most common tasks are classification, summarization, and extraction. Gemini on Vertex AI lets you design prompts with flexibility in terms of their structure and format.View text prompt design docsLearn how to design extraction promptsView all generative AI prompt samplesLearn how to design classification promptsSee how you can accelerate research and discovery with generative AI.Tutorials, quickstarts, & labsUse gen AI for summarization, classification, and extractionLearn how to create text prompts for handling any number of tasks with Vertex AI’s generative AI support. Some of the most common tasks are classification, summarization, and extraction. Gemini on Vertex AI lets you design prompts with flexibility in terms of their structure and format.View text prompt design docsLearn how to design extraction promptsView all generative AI prompt samplesLearn how to design classification promptsSee how you can accelerate research and discovery with generative AI.Train custom ML modelsCustom ML training overview and documentationGet an overview of the custom training workflow in Vertex AI, the benefits of custom training, and the various training options that are available. This page also details every step involved in the ML training workflow from preparing data to predictions.View overview documentationCustom training beginner's guideTrain a model using Vertex AI and the Python SDKLearn to build and deploy machine learning solutions on Vertex AIGet a video walkthrough of the steps required to train custom models on Vertex AI. Tutorials, quickstarts, & labsCustom ML training overview and documentationGet an overview of the custom training workflow in Vertex AI, the benefits of custom training, and the various training options that are available. 
This page also details every step involved in the ML training workflow from preparing data to predictions.View overview documentationCustom training beginner's guideTrain a model using Vertex AI and the Python SDKLearn to build and deploy machine learning solutions on Vertex AIGet a video walkthrough of the steps required to train custom models on Vertex AI. Train models with minimal ML expertiseTrain and create ML models with minimal technical expertiseThis guide walks you through how to use Vertex AI's AutoML to create and train high-quality custom machine learning models with minimal effort and machine learning expertise. This is perfect for those looking to automate the tedious and time-consuming work of manually curating videos, images, texts, and tables. View AutoML beginner's guideView AutoML docs for tabular dataView AutoML docs for image dataView AutoML docs for text dataTutorials, quickstarts, & labsTrain and create ML models with minimal technical expertiseThis guide walks you through how to use Vertex AI's AutoML to create and train high-quality custom machine learning models with minimal effort and machine learning expertise. This is perfect for those looking to automate the tedious and time-consuming work of manually curating videos, images, texts, and tables. View AutoML beginner's guideView AutoML docs for tabular dataView AutoML docs for image dataView AutoML docs for text dataDeploy a model for production useDeploy for batch or online predictionsWhen you're ready to use your model to solve a real-world problem, register your model to Vertex AI Model Registry and use the Vertex AI prediction service for batch and online predictions. Learn how to get predictions from an ML modelGet hands on with a Vertex AI Predictions codelabSimplify model serving with custom prediction routinesUse prebuilt containers for prediction and explanationWatch Prototype to Production, a video series that takes you from notebook code to a deployed model.Tutorials, quickstarts, & labsDeploy for batch or online predictionsWhen you're ready to use your model to solve a real-world problem, register your model to Vertex AI Model Registry and use the Vertex AI prediction service for batch and online predictions. Learn how to get predictions from an ML modelGet hands on with a Vertex AI Predictions codelabSimplify model serving with custom prediction routinesUse prebuilt containers for prediction and explanationWatch Prototype to Production, a video series that takes you from notebook code to a deployed model.PricingHow Vertex AI pricing worksPricing is based on the Vertex AI tools and services, storage, compute, and Google Cloud resources used. Tools and usageDescriptionPriceGenerative AIImagen model for image generationBased on image input, character input, or custom training pricing. Starting at$0.0001Text, chat, and code generationBased on every 1,000 characters of input (prompt) and every 1,000 characters of output (response).Starting at$0.0001per 1,000 charactersAutoML modelsImage data training, deployment, and predictionBased on time to train per node hour, which reflects resource usage, and if for classification or object detection. Starting at$1.375 per node hourVideo data training and predictionBased on price per node hour and if classification, object tracking, or action recognition.Starting at$0.462per node hourTabular data training and predictionBased on price per node hour and if classification/regression or forecasting. Contact sales for potential discounts and pricing details. 
Contact salesText data upload, training, deployment, predictionBased on hourly rates for training and prediction, pages for legacy data upload (PDF only), and text records and pages for prediction.Starting at$0.05per hourCustom-trained modelsCustom model trainingBased on machine type used per hour, region, and any accelerators used. Get an estimate via sales or our pricing calculator. Contact salesVertex AI notebooksCompute and storage resourcesBased on the same rates as Compute Engine and Cloud Storage.Refer to productsManagement feesIn addition to the above resource usage, management fees apply based on region, instances, notebooks, and managed notebooks used. View details.Refer to detailsVertex AI PipelinesExecution and additional feesBased on execution charge, resources used, and any additional service fees. Starting at$0.03 per pipeline runVertex AI Vector Search Serving and building costsBased on the size of your data, the amount of queries per second (QPS) you want to run, and the number of nodes you use. View example.Refer to exampleView pricing details for all Vertex AI features and services. How Vertex AI pricing worksPricing is based on the Vertex AI tools and services, storage, compute, and Google Cloud resources used. Generative AIDescriptionImagen model for image generationBased on image input, character input, or custom training pricing. PriceStarting at$0.0001Text, chat, and code generationBased on every 1,000 characters of input (prompt) and every 1,000 characters of output (response).DescriptionStarting at$0.0001per 1,000 characters​​AutoML modelsDescriptionImage data training, deployment, and predictionBased on time to train per node hour, which reflects resource usage, and if for classification or object detection. PriceStarting at$1.375 per node hourVideo data training and predictionBased on price per node hour and if classification, object tracking, or action recognition.DescriptionStarting at$0.462per node hourTabular data training and predictionBased on price per node hour and if classification/regression or forecasting. Contact sales for potential discounts and pricing details. DescriptionContact salesText data upload, training, deployment, predictionBased on hourly rates for training and prediction, pages for legacy data upload (PDF only), and text records and pages for prediction.DescriptionStarting at$0.05per hourCustom-trained modelsDescriptionCustom model trainingBased on machine type used per hour, region, and any accelerators used. Get an estimate via sales or our pricing calculator. PriceContact salesVertex AI notebooksDescriptionCompute and storage resourcesBased on the same rates as Compute Engine and Cloud Storage.PriceRefer to productsManagement feesIn addition to the above resource usage, management fees apply based on region, instances, notebooks, and managed notebooks used. View details.DescriptionRefer to detailsVertex AI PipelinesDescriptionExecution and additional feesBased on execution charge, resources used, and any additional service fees. PriceStarting at$0.03 per pipeline runVertex AI Vector Search DescriptionServing and building costsBased on the size of your data, the amount of queries per second (QPS) you want to run, and the number of nodes you use. View example.PriceRefer to exampleView pricing details for all Vertex AI features and services. 
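As a rough, back-of-the-envelope illustration of how the starting rates in the table combine, the short Python sketch below estimates a monthly cost from three of the list prices quoted above ($0.0001 per 1,000 characters of text input or output, $1.375 per node hour for AutoML image training, and $0.03 per pipeline run). The workload figures are made-up assumptions for illustration only; use the pricing calculator for a real estimate.
# Starting rates taken from the table above.
TEXT_PER_1K_CHARS = 0.0001        # text, chat, and code generation (input and output)
AUTOML_IMAGE_NODE_HOUR = 1.375    # AutoML image training per node hour
PIPELINE_RUN = 0.03               # Vertex AI Pipelines execution charge per run

# Hypothetical monthly workload (assumptions, not benchmarks).
prompt_chars = 2_000_000
response_chars = 3_000_000
training_node_hours = 20
pipeline_runs = 100

text_cost = (prompt_chars + response_chars) / 1000 * TEXT_PER_1K_CHARS      # $0.50
training_cost = training_node_hours * AUTOML_IMAGE_NODE_HOUR                # $27.50
pipeline_cost = pipeline_runs * PIPELINE_RUN                                 # $3.00
print(f"Estimated monthly total: ${text_cost + training_cost + pipeline_cost:.2f}")  # $31.00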
Pricing calculatorEstimate your Vertex AI costs, including region-specific pricing and fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry Gemini 2.0, our most advanced multimodal models, in Vertex AIGo to my consoleHave a large project?Contact salesBrowse, customize, and deploy machine learning modelsBrowse Model GardenLearn how to set up a Vertex AI project environmentRead guideGet started with notebooks for machine learningWatch videoBusiness CaseUnlock the full potential of gen AI"The accuracy of Google Cloud's generative AI solution and practicality of the Vertex AI Platform gives us the confidence we needed to implement this cutting-edge technology into the heart of our business and achieve our long-term goal of a zero-minute response time."Abdol Moabery, CEO of GA TelesisLearn moreRelated contentHow grounding your models helps you gain a competitive edge.How generative AI takes enterprise search to a whole new level.See how our customers are implementing generative AI to transform their businesses.Analyst reportsGoogle is a Leader in The Forrester Wave™: AI Foundation Models For Language, Q2 2024. Read the report.Google named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024, receiving the highest scores of any vendor evaluated in both Current Offering and Strategy. Google named a leader in the Forrester Wave: AI/ML Platforms, Q3 2024. Learn more. \ No newline at end of file diff --git a/Vertex_AI_Agent_Builder.txt b/Vertex_AI_Agent_Builder.txt new file mode 100644 index 0000000000000000000000000000000000000000..59389176e55408dc869b318a3b0a8616bf0948d4 --- /dev/null +++ b/Vertex_AI_Agent_Builder.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/products/agent-builder +Date Scraped: 2025-02-23T12:02:04.073Z + +Content: +Vertex AI Agent Builder is making generative AI more reliable for the enterprise. Read the blog.Vertex AI Agent Builder Build and deploy enterprise ready generative AI experiencesCreate AI agents and applications using natural language or a code-first approach. Easily ground your agents or apps in enterprise data with a range of options. Vertex AI Agent Builder gathers all the surfaces and tools that developers need to build their AI agents and applications.Try Vertex AI Agent BuilderContact sales Product highlightsBuild an agent using natural language instructions Orchestrate apps with LangChain and LlamaIndex on Vertex AIGround agents and apps in your data with our RAG offeringsCheck out some of the common uses of Vertex AI Agent BuilderFeaturesEasily build no code conversational AI agentsDesign, deploy, and manage intelligent conversational AI and process automation agents using natural language. Combine prompt-based agent builder tools with pre-built templates for rapid prototyping, experimentation, and deployment without the need to write extensive code. Stitch multiple agents together for your enterprise workflows and experiences. Tailor agent responses based on your business priorities, connect to enterprise data to drive transactions, and streamline interactions across multiple channels. 
Test and monitor the outputs of your agents and make performance changes in real time.Your guide to no code agentsCheck out our introductory video series on building no code AI agents with Vertex AI Agent Builder Watch now Ground in Google search and/or your enterprise data with our RAG offerings Ensure the accuracy and relevance of your AI agents by connecting them to your trusted data sources. Use our Gemini API to ground in results from Google Search and improve the completeness and accuracy of responses. To ground agents in your enterprise data, use Vertex AI Search's out-of-the-box RAG system which can help you get started with a few clicks. If you are looking to build a DIY RAG, you can use our component APIs for document processing, ranking, grounded generation, and performing checks on outputs. You can also use vector search to create powerful vector embeddings based applications. With connectors, your apps can index and surface fresh data from popular enterprise applications like JIRA, ServiceNow, and Hadoop. Vertex AI extensions and function calling tools enable your apps and agents to perform actions on your users' behalf.Watch video Provide better search and generative AI experiences with Vertex AI Search36:21Rapidly create low-code to high-code AI applicationsAccelerate the development of generative AI-powered applications with a combination of low-code APIs and code-first orchestration. LangChain on Vertex AI lets you leverage the LangChain open source library to build custom Generative AI applications and use Vertex AI for models, tools and deployment. With LlamaIndex on Vertex AI for RAG, you can enrich the LLM context with additional private information to reduce hallucinations and answer questions more accurately. And if you are developing with Firebase Genkit the Vertex AI plugin provides interfaces to several Google generative AI models through the Vertex AI API. These code first tools help you experiment swiftly and create next-generation AI-powered experiences tailored to your enterprise's unique business needs.Accelerate experimentation and deploymentEmploy comprehensive evaluation metrics and tools to assess the performance and quality of your generative AI applications, test your applications to fine-tune their behaviors and responses. Effortlessly deploy your generative AI applications to production environments, ensuring scalability and reliability to meet enterprise demands with Google Cloud’s enterprise ready infrastructure. Continuously monitor key metrics like usage, latency, safety, and cost to identify potential issues and optimize performance over time. Use Vertex AI Studio’s model tuning capabilities to work directly with foundation models or combine your apps with our out-of-the-box tools with our fully integrated platform.Benefit from enterprise-grade security and compliance Build AI experiences that meet the rigorous standards and scaling needs of your enterprise. Vertex AI Agent Builder offers built-in security, compliance, and governance features, aligning with industry certifications like HIPAA, ISO 27000-series, SOC-1/2/3, VPC-SC, and CMEK. Maintain data privacy and control over your AI apps, manage access, and ensure the responsible use of AI models and data.View all featuresHow It WorksVertex AI Agent Builder accelerates the creation of high quality generative AI experiences. 
Choose the ease of a no code agent builder interface to use natural language to build your agents or use powerful orchestration and customization capabilities, including LangChain on Vertex AI.View documentationCommon UsesBuild conversational AI agents in minutes Create an agent without a single line of code. Build powerful AI agents, no code required. For complex goals, you can easily stitch together multiple agents, with one agent functioning as the main agent and others as subagents. Train with your data, automate tasks, and iterate with ease. Launch and analyze - all within a user-friendly platform. Read the documentationWatch video: Build and deploy generative AI agents using natural language with Vertex AI Agent BuilderTutorials, quickstarts, & labsCreate an agent without a single line of code. Build powerful AI agents, no code required. For complex goals, you can easily stitch together multiple agents, with one agent functioning as the main agent and others as subagents. Train with your data, automate tasks, and iterate with ease. Launch and analyze - all within a user-friendly platform. Read the documentationWatch video: Build and deploy generative AI agents using natural language with Vertex AI Agent BuilderGround your apps in Google Search and surface fresh data to your users Getting started with our Gemini API By grounding our state-of-the-art models like Gemini with Google Search, we offer customers the combined power of Google’s latest foundation models along with access to fresh, high quality information to significantly improve the completeness and accuracy of responses. You can access Gemini enhanced with Google Search with just a click on our API. Grounding with Google Search will soon offer dynamic retrieval, a new capability to help customers balance quality with cost efficiency by intelligently selecting when to use Google Search results and when to use the model’s training data. Set up the Gemini API Learn more about grounding with Google search in this videoTutorials, quickstarts, & labsGetting started with our Gemini API By grounding our state-of-the-art models like Gemini with Google Search, we offer customers the combined power of Google’s latest foundation models along with access to fresh, high quality information to significantly improve the completeness and accuracy of responses. You can access Gemini enhanced with Google Search with just a click on our API. Grounding with Google Search will soon offer dynamic retrieval, a new capability to help customers balance quality with cost efficiency by intelligently selecting when to use Google Search results and when to use the model’s training data. Set up the Gemini API Learn more about grounding with Google search in this videoUse Vertex AI search for out of the box RAG for your agents and apps An out-of-the-box RAG system for your generative AI appsWith Vertex AI Search you can enjoy a Google quality out-of-the-box RAG (Retrieval Augmented Generation) system, enabling you to easily ground your language models in your own data. 
Vertex AI Search handles the complexities of indexing diverse data sources, offers powerful search powered by Google technology, and integrates seamlessly with language models within Vertex AI.Read more about Vertex AI search Watch this demo video to see how easily you can use Vertex AI Search for RAGLearning resourcesAn out-of-the-box RAG system for your generative AI appsWith Vertex AI Search you can enjoy a Google quality out-of-the-box RAG (Retrieval Augmented Generation) system, enabling you to easily ground your language models in your own data. Vertex AI Search handles the complexities of indexing diverse data sources, offers powerful search powered by Google technology, and integrates seamlessly with language models within Vertex AI.Read more about Vertex AI search Watch this demo video to see how easily you can use Vertex AI Search for RAGUse Vertex AI Search's component APIs for custom RAG implementationsFor DIY RAG builders Building custom Retrieval Augmented Generation (RAG) for unique AI needs? Vertex AI simplifies the process with these tools:Document AI Layout Parser: extracts key data from complex documents for better searchRanking API: LLM-powered ranking prioritizes relevant answerGrounded Generation API: creates accurate responses using your data or trusted sources; this API now offers a high-fidelity mode in experimental Preview, a new feature that will further reduce hallucinations by focusing on customer provided context for groundingCheck Grounding API: validates responses against facts for reliabilityRead the documentationLearn more about our search components APIs for grounded generation Learning resourcesFor DIY RAG builders Building custom Retrieval Augmented Generation (RAG) for unique AI needs? Vertex AI simplifies the process with these tools:Document AI Layout Parser: extracts key data from complex documents for better searchRanking API: LLM-powered ranking prioritizes relevant answerGrounded Generation API: creates accurate responses using your data or trusted sources; this API now offers a high-fidelity mode in experimental Preview, a new feature that will further reduce hallucinations by focusing on customer provided context for groundingCheck Grounding API: validates responses against facts for reliabilityRead the documentationLearn more about our search components APIs for grounded generation Vector search for powerful embeddings-based applications Build vector embeddings-powered experiences For enterprises who need custom embeddings-based information retrieval, Vertex AI offers powerful vector search capabilities. Vector search can scale to billions of vectors and find the nearest neighbors in a few milliseconds. Vector search now offers hybrid search - an integration of vector-based and keyword-based search techniques to ensure the most relevant responses for your users.Read the blog With this video to learn how you can build highly scalable generative AI apps with Vector Search and RAGLearning resourcesBuild vector embeddings-powered experiences For enterprises who need custom embeddings-based information retrieval, Vertex AI offers powerful vector search capabilities. Vector search can scale to billions of vectors and find the nearest neighbors in a few milliseconds. 
Vector search now offers hybrid search - an integration of vector-based and keyword-based search techniques to ensure the most relevant responses for your users.Read the blog With this video to learn how you can build highly scalable generative AI apps with Vector Search and RAGUse LangChain on Vertex AI Orchestrate highly performant, customized agents You can create highly performant, customized agents based on your needs with all the components with Vertex AI Agent Builder. LangChain on Vertex AI is just one of our many code-first offerings. With this, we help you easily build using one of the most popular open source Python frameworks that we know gen AI developers love and deploy on Vertex AI for enterprise scale.Learn more Partners & integrationsOrchestrate highly performant, customized agents You can create highly performant, customized agents based on your needs with all the components with Vertex AI Agent Builder. LangChain on Vertex AI is just one of our many code-first offerings. With this, we help you easily build using one of the most popular open source Python frameworks that we know gen AI developers love and deploy on Vertex AI for enterprise scale.Learn more Use LlamaIndex on Vertex AI Enrich LLM context with private information LlamaIndex on Vertex AI simplifies the retrieval augmented generation (RAG) process, from data ingestion and transformation to embedding, indexing, retrieval, and generation. Now Vertex AI customers can leverage Google’s models and AI-optimized infrastructure alongside LlamaIndex’s simple, flexible, open-source data framework, to connect custom data sources to generative models. Learn more Learning resourcesEnrich LLM context with private information LlamaIndex on Vertex AI simplifies the retrieval augmented generation (RAG) process, from data ingestion and transformation to embedding, indexing, retrieval, and generation. Now Vertex AI customers can leverage Google’s models and AI-optimized infrastructure alongside LlamaIndex’s simple, flexible, open-source data framework, to connect custom data sources to generative models. Learn more Build with Firebase Genkit Access Google models like Gemini for developing with FirebaseGenkit by Firebase, is an open-source Typescript/JavaScript framework designed to simplify the development, deployment, and monitoring of production-ready AI agents. Facilitated through the Vertex AI plugin, Firebase developers can now take advantage of Google models like Gemini and Imagen 2, as well as text embeddings. Learn more Learning resourcesAccess Google models like Gemini for developing with FirebaseGenkit by Firebase, is an open-source Typescript/JavaScript framework designed to simplify the development, deployment, and monitoring of production-ready AI agents. Facilitated through the Vertex AI plugin, Firebase developers can now take advantage of Google models like Gemini and Imagen 2, as well as text embeddings. Learn more PricingHow will Vertex AI Agent Builder pricing work? Published pricing will be available on this page. For offerings in preview, reach out to your sales team for pricing. Tools and usage Description Price Vertex AI agentsBuild and deploy generative AI agents using natural language. 
View detailsStarting at$12per 1,000 queriesVertex AI searchVertex AI search is a Google Search quality information retrieval and answer generation system that can help you create RAG-powered gen apps or improve the performance of your search applications.Starting at$2per 1,000 queries Vector search For enterprises who need custom embeddings-based information retrieval, Vertex AI offers powerful vector search capabilities. View detailsRefer to pricing page for more detailsCheck out our $1,000 free trial offer for select SKUs. Please visit our pricing page for more details. To make it easier for you to get started, we're offering customers new to Vertex AI Agent Builder (this includes existing Google Cloud customers) a one time credit of $1,000 per Google Cloud billing account, applicable up to 12 months post activation (or until you consume the amount, whichever is sooner). Vertex AI Agent Builder is an expansion of Vertex AI Search and Conversation. Product pages are in the process of being updated to new branding. How will Vertex AI Agent Builder pricing work? Published pricing will be available on this page. For offerings in preview, reach out to your sales team for pricing. Vertex AI agentsDescription Build and deploy generative AI agents using natural language. View detailsPrice Starting at$12per 1,000 queriesVertex AI searchDescription Vertex AI search is a Google Search quality information retrieval and answer generation system that can help you create RAG-powered gen apps or improve the performance of your search applications.Price Starting at$2per 1,000 queries Vector search Description For enterprises who need custom embeddings-based information retrieval, Vertex AI offers powerful vector search capabilities. View detailsPrice Refer to pricing page for more detailsCheck out our $1,000 free trial offer for select SKUs. Please visit our pricing page for more details. To make it easier for you to get started, we're offering customers new to Vertex AI Agent Builder (this includes existing Google Cloud customers) a one time credit of $1,000 per Google Cloud billing account, applicable up to 12 months post activation (or until you consume the amount, whichever is sooner). Vertex AI Agent Builder is an expansion of Vertex AI Search and Conversation. Product pages are in the process of being updated to new branding. 
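For a similarly rough sense of what the per-query rates and the one-time credit mean in practice, the sketch below applies the starting prices listed above ($12 per 1,000 agent queries, $2 per 1,000 search queries) to a hypothetical monthly volume and checks how long the $1,000 credit would last. The volumes are assumptions for illustration, not benchmarks.
# Starting prices from the table above, per 1,000 queries.
AGENT_PER_1K_QUERIES = 12.0
SEARCH_PER_1K_QUERIES = 2.0
ONE_TIME_CREDIT = 1_000.0

# Hypothetical monthly volumes (assumptions only).
agent_queries = 50_000
search_queries = 200_000

monthly_cost = (agent_queries / 1000) * AGENT_PER_1K_QUERIES + (search_queries / 1000) * SEARCH_PER_1K_QUERIES
print(f"Estimated monthly cost: ${monthly_cost:,.2f}")                        # $1,000.00
print(f"Months covered by the credit: {ONE_TIME_CREDIT / monthly_cost:.1f}")  # 1.0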
PRICING CALCULATOREstimate your costs, including region-specific pricing and fees.Estimate your costs CUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptNew customers get $1,000 in free creditsCheck out our special offerSet up Vertex AI project environmentGet started Use Google quality search for grounding apps and agents Read more Use Model Builder to explore, fine tune, train, evaluate, and manage AI modelsRead more Contact your sales team to help with your project Contact sales \ No newline at end of file diff --git a/Vertex_AI_Platform.txt b/Vertex_AI_Platform.txt new file mode 100644 index 0000000000000000000000000000000000000000..3b91655105fb3e6ffa3fb3137fd0755b1477219d --- /dev/null +++ b/Vertex_AI_Platform.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/vertex-ai +Date Scraped: 2025-02-23T12:01:59.216Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceVertex AI PlatformInnovate faster with enterprise-ready AI, enhanced by Gemini modelsVertex AI is a fully-managed, unified AI development platform for building and using generative AI. Access and utilize Vertex AI Studio, Agent Builder, and 160+ foundation models.Try it in consoleContact sales Want training? Start a free course for Vertex AI Studio.Product highlightsBuild generative AI apps quickly with Gemini Train, test, and tune ML models on a single platformAccelerate development with unified data and AIGetting started with Gemini on Vertex AI5 min videoFeaturesGemini, Google's most capable multimodal modelsVertex AI offers access to the latest Gemini models from Google. Gemini is capable of understanding virtually any input, combining different types of information, and generating almost any output. Prompt and test Gemini in Vertex AI Studio, using text, images, video, or code. Using Gemini's advanced reasoning and state-of-the-art generation capabilities, developers can try sample prompts for extracting text from images, converting image text to JSON, and even generate answers about uploaded images to build next-gen AI applications.3:44How to use the Gemini APIs: Advanced techniques160+ generative AI models and tools Choose from the widest variety of models with first-party (Gemini, Imagen 3), third-party (Anthropic's Claude Model Family), and open models (Gemma, Llama 3.2) in Model Garden. Use extensions to enable models to retrieve real-time information and trigger actions. 
Customize models to your use case with a variety of tuning options for Google's text, image, or code models.Generative AI models and fully managed tools make it easy to prototype, customize, and integrate and deploy them into applications.Explore AI models and APIs in Model GardenView documentationOpen and integrated AI platformData scientists can move faster with Vertex AI Platform's tools for training, tuning, and deploying ML models.Vertex AI notebooks, including your choice of Colab Enterprise or Workbench, are natively integrated with BigQuery providing a single surface across all data and AI workloads.Vertex AI Training and Prediction help you reduce training time and deploy models to production easily with your choice of open source frameworks and optimized AI infrastructure.MLOps for predictive and generative AIVertex AI Platform provides purpose-built MLOps tools for data scientists and ML engineers to automate, standardize, and manage ML projects.Modular tools help you collaborate across teams and improve models throughout the entire development lifecycle—identify the best model for a use case with Vertex AI Evaluation, orchestrate workflows with Vertex AI Pipelines, manage any model with Model Registry, serve, share, and reuse ML features with Feature Store, and monitor models for input skew and drift.MLOps with Vertex AI: Model EvaluationNo cost training Get started Agent Builder Vertex AI Agent Builder enables developers to easily build and deploy enterprise ready generative AI experiences. It provides the convenience of a no code agent builder console alongside powerful grounding, orchestration, and customization capabilities. With Vertex AI Agent Builder developers can quickly create a range of generative AI agents and applications grounded in their organization’s data.View all featuresHow It WorksVertex AI provides several options for model training and deployment:Generative AI gives you access to large generative AI models, including Gemini 2.0 Flash, so you can evaluate, tune, and deploy them for use in your AI-powered applications.Model Garden lets you discover, test, customize, and deploy Vertex AI and select open-source (OSS) models and assets.Custom training gives you complete control over the training process, including using your preferred ML framework, writing your own training code, and choosing hyperparameter tuning options.View documentationVertex AI enables faster innovation with enterprise-ready generative AIsparkGet solution recommendations for your use case, generated by AICan I use both Google's Gemini and Meta's Llama in Google Cloud?I want to generate images that match my company's brandI want to train a custom model for a specific need that we haveMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionUse various generative modelsprompt_suggestionGenerate custom imagesprompt_suggestionTrain a custom modelCommon UsesBuild with GeminiAccess Gemini models via the Gemini API in Google Cloud Vertex AIView code samples for Python, JavaScript, Java, Go, and CurlPythonPythonJavaScriptJavaGoCurlLoading...model = genai.GenerativeModel(model_name="gemini-pro-vision") response = model.generate_content(["What's in this photo?", img])content_copyOpen full code Set up the Vertex AI Gemini APILearn multimodal design best practicesRepository for sample code and notebooks Code sampleAccess Gemini models via the Gemini API in Google Cloud Vertex AIView code samples for Python, JavaScript, 
Java, Go, and CurlPythonPythonJavaScriptJavaGoCurlLoading...model = genai.GenerativeModel(model_name="gemini-pro-vision") response = model.generate_content(["What's in this photo?", img])content_copyOpen full code Set up the Vertex AI Gemini APILearn multimodal design best practicesRepository for sample code and notebooks Generative AI in applications Get an introduction to generative AI on Vertex AIVertex AI Studio offers a Google Cloud console tool for rapidly prototyping and testing generative AI models. Learn how you can use Generative AI Studio to test models using prompt samples, design and save prompts, tune a foundation model, and convert between speech and text.View documentation overviewView available models in Vertex AI StudioIntroduction to prompt designGenerative AI prompt samplesSee how to tune LLMs in Vertex AI StudioTutorials, quickstarts, & labsGet an introduction to generative AI on Vertex AIVertex AI Studio offers a Google Cloud console tool for rapidly prototyping and testing generative AI models. Learn how you can use Generative AI Studio to test models using prompt samples, design and save prompts, tune a foundation model, and convert between speech and text.View documentation overviewView available models in Vertex AI StudioIntroduction to prompt designGenerative AI prompt samplesSee how to tune LLMs in Vertex AI StudioExtract, summarize, and classify dataUse gen AI for summarization, classification, and extractionLearn how to create text prompts for handling any number of tasks with Vertex AI’s generative AI support. Some of the most common tasks are classification, summarization, and extraction. Gemini on Vertex AI lets you design prompts with flexibility in terms of their structure and format.View text prompt design docsLearn how to design extraction promptsView all generative AI prompt samplesLearn how to design classification promptsSee how you can accelerate research and discovery with generative AI.Tutorials, quickstarts, & labsUse gen AI for summarization, classification, and extractionLearn how to create text prompts for handling any number of tasks with Vertex AI’s generative AI support. Some of the most common tasks are classification, summarization, and extraction. Gemini on Vertex AI lets you design prompts with flexibility in terms of their structure and format.View text prompt design docsLearn how to design extraction promptsView all generative AI prompt samplesLearn how to design classification promptsSee how you can accelerate research and discovery with generative AI.Train custom ML modelsCustom ML training overview and documentationGet an overview of the custom training workflow in Vertex AI, the benefits of custom training, and the various training options that are available. This page also details every step involved in the ML training workflow from preparing data to predictions.View overview documentationCustom training beginner's guideTrain a model using Vertex AI and the Python SDKLearn to build and deploy machine learning solutions on Vertex AIGet a video walkthrough of the steps required to train custom models on Vertex AI. Tutorials, quickstarts, & labsCustom ML training overview and documentationGet an overview of the custom training workflow in Vertex AI, the benefits of custom training, and the various training options that are available. 
This page also details every step involved in the ML training workflow from preparing data to predictions.View overview documentationCustom training beginner's guideTrain a model using Vertex AI and the Python SDKLearn to build and deploy machine learning solutions on Vertex AIGet a video walkthrough of the steps required to train custom models on Vertex AI. Train models with minimal ML expertiseTrain and create ML models with minimal technical expertiseThis guide walks you through how to use Vertex AI's AutoML to create and train high-quality custom machine learning models with minimal effort and machine learning expertise. This is perfect for those looking to automate the tedious and time-consuming work of manually curating videos, images, texts, and tables. View AutoML beginner's guideView AutoML docs for tabular dataView AutoML docs for image dataView AutoML docs for text dataTutorials, quickstarts, & labsTrain and create ML models with minimal technical expertiseThis guide walks you through how to use Vertex AI's AutoML to create and train high-quality custom machine learning models with minimal effort and machine learning expertise. This is perfect for those looking to automate the tedious and time-consuming work of manually curating videos, images, texts, and tables. View AutoML beginner's guideView AutoML docs for tabular dataView AutoML docs for image dataView AutoML docs for text dataDeploy a model for production useDeploy for batch or online predictionsWhen you're ready to use your model to solve a real-world problem, register your model to Vertex AI Model Registry and use the Vertex AI prediction service for batch and online predictions. Learn how to get predictions from an ML modelGet hands on with a Vertex AI Predictions codelabSimplify model serving with custom prediction routinesUse prebuilt containers for prediction and explanationWatch Prototype to Production, a video series that takes you from notebook code to a deployed model.Tutorials, quickstarts, & labsDeploy for batch or online predictionsWhen you're ready to use your model to solve a real-world problem, register your model to Vertex AI Model Registry and use the Vertex AI prediction service for batch and online predictions. Learn how to get predictions from an ML modelGet hands on with a Vertex AI Predictions codelabSimplify model serving with custom prediction routinesUse prebuilt containers for prediction and explanationWatch Prototype to Production, a video series that takes you from notebook code to a deployed model.PricingHow Vertex AI pricing worksPricing is based on the Vertex AI tools and services, storage, compute, and Google Cloud resources used. Tools and usageDescriptionPriceGenerative AIImagen model for image generationBased on image input, character input, or custom training pricing. Starting at$0.0001Text, chat, and code generationBased on every 1,000 characters of input (prompt) and every 1,000 characters of output (response).Starting at$0.0001per 1,000 charactersAutoML modelsImage data training, deployment, and predictionBased on time to train per node hour, which reflects resource usage, and if for classification or object detection. Starting at$1.375 per node hourVideo data training and predictionBased on price per node hour and if classification, object tracking, or action recognition.Starting at$0.462per node hourTabular data training and predictionBased on price per node hour and if classification/regression or forecasting. Contact sales for potential discounts and pricing details. 
Contact salesText data upload, training, deployment, predictionBased on hourly rates for training and prediction, pages for legacy data upload (PDF only), and text records and pages for prediction.Starting at$0.05per hourCustom-trained modelsCustom model trainingBased on machine type used per hour, region, and any accelerators used. Get an estimate via sales or our pricing calculator. Contact salesVertex AI notebooksCompute and storage resourcesBased on the same rates as Compute Engine and Cloud Storage.Refer to productsManagement feesIn addition to the above resource usage, management fees apply based on region, instances, notebooks, and managed notebooks used. View details.Refer to detailsVertex AI PipelinesExecution and additional feesBased on execution charge, resources used, and any additional service fees. Starting at$0.03 per pipeline runVertex AI Vector Search Serving and building costsBased on the size of your data, the amount of queries per second (QPS) you want to run, and the number of nodes you use. View example.Refer to exampleView pricing details for all Vertex AI features and services. How Vertex AI pricing worksPricing is based on the Vertex AI tools and services, storage, compute, and Google Cloud resources used. Generative AIDescriptionImagen model for image generationBased on image input, character input, or custom training pricing. PriceStarting at$0.0001Text, chat, and code generationBased on every 1,000 characters of input (prompt) and every 1,000 characters of output (response).DescriptionStarting at$0.0001per 1,000 characters​​AutoML modelsDescriptionImage data training, deployment, and predictionBased on time to train per node hour, which reflects resource usage, and if for classification or object detection. PriceStarting at$1.375 per node hourVideo data training and predictionBased on price per node hour and if classification, object tracking, or action recognition.DescriptionStarting at$0.462per node hourTabular data training and predictionBased on price per node hour and if classification/regression or forecasting. Contact sales for potential discounts and pricing details. DescriptionContact salesText data upload, training, deployment, predictionBased on hourly rates for training and prediction, pages for legacy data upload (PDF only), and text records and pages for prediction.DescriptionStarting at$0.05per hourCustom-trained modelsDescriptionCustom model trainingBased on machine type used per hour, region, and any accelerators used. Get an estimate via sales or our pricing calculator. PriceContact salesVertex AI notebooksDescriptionCompute and storage resourcesBased on the same rates as Compute Engine and Cloud Storage.PriceRefer to productsManagement feesIn addition to the above resource usage, management fees apply based on region, instances, notebooks, and managed notebooks used. View details.DescriptionRefer to detailsVertex AI PipelinesDescriptionExecution and additional feesBased on execution charge, resources used, and any additional service fees. PriceStarting at$0.03 per pipeline runVertex AI Vector Search DescriptionServing and building costsBased on the size of your data, the amount of queries per second (QPS) you want to run, and the number of nodes you use. View example.PriceRefer to exampleView pricing details for all Vertex AI features and services. 
Pricing calculatorEstimate your Vertex AI costs, including region-specific pricing and fees.Estimate your costsCustom quoteConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptTry Gemini 2.0, our most advanced multimodal models, in Vertex AIGo to my consoleHave a large project?Contact salesBrowse, customize, and deploy machine learning modelsBrowse Model GardenLearn how to set up a Vertex AI project environmentRead guideGet started with notebooks for machine learningWatch videoBusiness CaseUnlock the full potential of gen AI"The accuracy of Google Cloud's generative AI solution and practicality of the Vertex AI Platform gives us the confidence we needed to implement this cutting-edge technology into the heart of our business and achieve our long-term goal of a zero-minute response time."Abdol Moabery, CEO of GA TelesisLearn moreRelated contentHow grounding your models helps you gain a competitive edge.How generative AI takes enterprise search to a whole new level.See how our customers are implementing generative AI to transform their businesses.Analyst reportsGoogle is a Leader in The Forrester Wave™: AI Foundation Models For Language, Q2 2024. Read the report.Google named a Leader in The Forrester Wave™: AI Infrastructure Solutions, Q1 2024, receiving the highest scores of any vendor evaluated in both Current Offering and Strategy. Google named a leader in the Forrester Wave: AI/ML Platforms, Q3 2024. Learn more. \ No newline at end of file diff --git a/Vertex_AI_Search.txt b/Vertex_AI_Search.txt new file mode 100644 index 0000000000000000000000000000000000000000..550e3abaa635bdb0983b9393b2d1fde7a3171a4e --- /dev/null +++ b/Vertex_AI_Search.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/enterprise-search +Date Scraped: 2025-02-23T12:02:08.386Z + +Content: +Google Cloud expands grounding capabilities on Vertex AI.Vertex AI Search Vertex AI Search helps developers build secure, Google-quality search experiences for websites, intranets, and RAG systems for generative AI agents and apps. Vertex AI Search is a part of Vertex AI Agent Builder.Go to console Get $1,000* credit *offer is also available to existing customers of Google Cloud who are new to Vertex AI Search. Credits are valid for a 1-year period beginning from account signup with Vertex AI Search and automatically applied. Credit can be used across other eligible Agent Builder SKUs too. See pricing page for more details.Product HighlightsBest in class search Out of the box RAGDocument understanding Vector search capabilities Product overview4:31FeaturesPower your search with Google technologies Unlock Google-quality search for your enterprise apps and experiences. 
Built on Google’s deep expertise and decades of experience in semantic search technologies, Vertex AI Search delivers more relevant results across your content, both structured and unstructured.Get started in minutes and tailor search experiences to your specific needs with extensive customization abilities.Getting started can be as easy as simply adding a search widget for an improved website search experience.If you are building generative AI applications, Vertex AI Search can function as a grounding or Retrieval Augmented Generation (RAG) system using your data.Vertex AI Search is designed for enterprise environments, offering seamless scalability, robust privacy controls, and comprehensive governance features.Optimized for industriesVertex AI Search has specialized offerings tuned for unique industry requirements like searching product catalogs, media libraries, and clinical data repositories. Vertex AI Search for commerce offers retailers the ability to improve the search, product recommendations, and browsing experience on their channels.Vertex AI Search for media offers media and entertainment companies the ability to provide more personalized content recommendations powered by generative AI, increasing consumer time spent on their platforms, which can lead to higher engagement, revenue, and retention. Vertex AI Search for healthcare and life sciences is a medically tuned search that improves patient and provider experience.Solution for Retrieval Augmented Generation (RAG) in the enterprise Today, there is a lot of excitement about RAG, an architecture that combines LLMs with a data retrieval system, or in other words, a search engine. Grounding LLM responses in your company's own data improves the accuracy, reliability, and relevance of generative AI apps, something that's critical for real-world business applications. You could build your own RAG system but this can be a highly complex process. Vertex AI Search functions as an out-of-the-box RAG system for information retrieval. Under the hood with Vertex AI Search, we’ve simplified the end-to-end search and discovery process of managing ETL, OCR, chunking, embedding, indexing, storing, input cleaning, schema adjustments, information retrieval, and summarization to just a few clicks. This makes it easy for you to build RAG-powered apps using Vertex AI Search as your retrieval engine. Vertex AI also offers a comprehensive set of APIs that help developers create bespoke RAG solutions. These APIs expose the underlying components of Vertex AI Search's out-of-the-box RAG system, empowering developers to address custom use cases or serve customers who want granular control. These include the Document AI Layout Parser API, ranking API, grounded generation API, and check grounding API.Vector search for embeddings based applications Vertex AI Search lets organizations and developers set up search engines out of the box. These search engines offer adequate customization for most enterprise needs and even offer automatic fine-tuning for embeddings. In some cases, you may have custom embeddings, and Vertex AI Search works fine with your own embeddings. However, more advanced developers who need direct control of a highly performant vector database to power niche use cases like recommendations and ad serving can use vector search, the vector database used by Vertex AI Search as a component for their use cases. We’ve recently updated Vector Search’s user experience so developers can create and deploy indexes without coding. 
We’ve also significantly reduced indexing latency from hours to minutes for smaller datasets.AI for document understanding Vertex AI Search benefits from the document processing capabilities of our Document AI suite. With document understanding, you can easily turn structured and unstructured documents into actionable data to increase operational efficiency, simplify business processes, and make better decisions.Industry-compliant data privacy and securityWhen you use Vertex AI Search from Google Cloud, your data is secure in your cloud instance. Google does not access or use your data to train models or for any other purpose you have not explicitly authorized. Vertex AI Search also meets specific industry compliance standards like HIPAA, the ISO 27000 series, and SOC 1/2/3. We’re expanding support for Access Transparency to provide customers with awareness of Googler administrative access to their data. Virtual Private Cloud Service Controls help prevent unauthorized data infiltration and exfiltration. We are also offering Customer-managed Encryption Keys (CMEK) in Preview, allowing customers to encrypt their core content with their own encryption keys. Data freshness through connectorsVertex AI Search can connect to your first-party, Google, and third-party applications through Vertex AI extensions and data connectors. Vertex AI extensions help ingest data and drive transactions on the user's behalf, while data connectors ingest data with read-only access to key applications like Jira, Confluence, and Salesforce. Together, Vertex AI extensions and data connectors help keep your data fresh across your search engines. View all featuresHow It WorksWith Vertex AI Search, you can go from frustrating keyword matching to modern conversational search experiences. You can also improve the quality of your generative AI applications by grounding them in your enterprise data, using Vertex AI Search as an out-of-the-box system for Retrieval Augmented Generation (RAG). Learn more Watch this video to learn how to make an internal search app with minimal coding and setupCommon UsesA complete suite for your enterprise information needsImprove search experiences for employees and customers Vertex AI Search offers a complete toolkit for accessing, processing and analyzing your enterprise information.
With Google-grade search capabilities and Gemini generative AI, it offers specialized solutions for retail, media, healthcare, websites, intranets, and custom applications.Key features include out-of-the-box performance with advanced crawling, parsing, and document understanding, along with tuning and customization options like events-based reranking and autocomplete. Generative AI features enable grounded answers, blending multiple sources, and conversational AI capabilities.Vertex AI Search provides a foundational platform with connectors for various data sources, document AI capabilities, RAG APIs, vector search, and grounded Gemini for generative AI grounded on Google Search and your data.Read product documentation Enable Google-quality search on your websiteBoost customer engagement with generative AI powered searchCreate a site index: This is done simply by adding your site URL. Your index is available right away to search if you don’t need generative answers. If you need generative answers, you will need to verify your domain ownership first.Connect to a search app: Connect your site index to a new search app, where you will be able to manage the search experience. Make sure to turn LLM features on if you intend to use generative answers.Configure your search experience: Set up the right configurations that will define your search experience such as choosing between getting search results only, or being able to receive generative answers.Test & refine the search: Preview search results for various queries, and refine your search based on your needs. You can for example add metadata based on your site’s html, boost results based on publication date or other information, filter based on metadata or url patterns.Deploy the search to your site: you can choose to deploy using our out of the box widget as an HTML component to add to your site, or to directly integrate using the API.Learn more Tutorials, quickstarts, & labsBoost customer engagement with generative AI powered searchCreate a site index: This is done simply by adding your site URL. Your index is available right away to search if you don’t need generative answers. If you need generative answers, you will need to verify your domain ownership first.Connect to a search app: Connect your site index to a new search app, where you will be able to manage the search experience. Make sure to turn LLM features on if you intend to use generative answers.Configure your search experience: Set up the right configurations that will define your search experience such as choosing between getting search results only, or being able to receive generative answers.Test & refine the search: Preview search results for various queries, and refine your search based on your needs. You can for example add metadata based on your site’s html, boost results based on publication date or other information, filter based on metadata or url patterns.Deploy the search to your site: you can choose to deploy using our out of the box widget as an HTML component to add to your site, or to directly integrate using the API.Learn more Use Vertex AI Search for RAGGrounding: increase factuality and relevance in generative AI agents and apps Tired of AI making things up? Grounding or Retrieval Augmented Generation (RAG) ensures your AI's answers are based on your enterprise truth. 
With Vertex AI's grounding feature, your generative AI models are anchored to reliable sources like Google Search or your own data, reducing "hallucinations" and boosting the trustworthiness of your results. Say goodbye to unreliable AI and embrace the power of grounded intelligence for accurate, relevant, and actionable insights. Experience the difference Grounding makes in your agents and apps. Learn more Learning resourcesGrounding: increase factuality and relevance in generative AI agents and apps Tired of AI making things up? Grounding or Retrieval Augmented Generation (RAG) ensures your AI's answers are based on your enterprise truth. With Vertex AI's grounding feature, your generative AI models are anchored to reliable sources like Google Search or your own data, reducing "hallucinations" and boosting the trustworthiness of your results. Say goodbye to unreliable AI and embrace the power of grounded intelligence for accurate, relevant, and actionable insights. Experience the difference Grounding makes in your agents and apps. Learn more Create vector search and embeddings based apps Build a recommendation engine with vector searchFind similar things in seconds, even with billions of items. Vector Search unlocks powerful semantic matching for recommendations, chatbots, and more. Let's see how to build a recommendation engine with Vector Search:Generate embeddings: Create a numerical representation (embedding) of your items to capture their semantic relationships. You can do this externally or use Vertex AI's generative AI.Upload to Cloud Storage: Store your embeddings in Cloud Storage for Vector Search to access.Connect to Vector Search: Link your embeddings to Vector Search to perform nearest neighbor search.Create and deploy index: Build an index from your embeddings and deploy it to an endpoint for querying.Query for recommendations: Use the index endpoint to query for approximate nearest neighbors, finding items semantically similar to your query.Evaluate and adjust: Assess the results and refine the algorithm's parameters or scaling as needed to ensure accuracy and performance.Watch video Read product documentation for detailed instructionsTutorials, quickstarts, & labsBuild a recommendation engine with vector searchFind similar things in seconds, even with billions of items. Vector Search unlocks powerful semantic matching for recommendations, chatbots, and more. Let's see how to build a recommendation engine with Vector Search:Generate embeddings: Create a numerical representation (embedding) of your items to capture their semantic relationships. 
You can do this externally or use Vertex AI's generative AI.Upload to Cloud Storage: Store your embeddings in Cloud Storage for Vector Search to access.Connect to Vector Search: Link your embeddings to Vector Search to perform nearest neighbor search.Create and deploy index: Build an index from your embeddings and deploy it to an endpoint for querying.Query for recommendations: Use the index endpoint to query for approximate nearest neighbors, finding items semantically similar to your query.Evaluate and adjust: Assess the results and refine the algorithm's parameters or scaling as needed to ensure accuracy and performance.Watch video Read product documentation for detailed instructionsImprove the e-commerce experience in retailImprove retail search and recommendations for your customers Transform your customers experience, delivering improved search experiences similar to Google.Increase conversions, reduce abandonment, and personalize recommendations – all with cutting-edge AI. Harness visual search, optimize results, and rest easy with fully managed infrastructure.Don't settle for mediocre search quality – unlock your e-commerce potential with Vertex AI Search for commerce.Learn more Learning resourcesImprove retail search and recommendations for your customers Transform your customers experience, delivering improved search experiences similar to Google.Increase conversions, reduce abandonment, and personalize recommendations – all with cutting-edge AI. Harness visual search, optimize results, and rest easy with fully managed infrastructure.Don't settle for mediocre search quality – unlock your e-commerce potential with Vertex AI Search for commerce.Learn more Create high-accuracy processors to extract, classify, and split documentsHarness documents for deeper insights Don't let your documents remain dormant data silos - transform them into actionable intelligence.Extract valuable insights, streamline workflows, and make data-driven decisions faster than ever before. No more tedious manual tasks or complex model training - simply upload your documents and let Document AI do the heavy lifting. With its advanced foundation models and customizable accuracy features, you'll unlock a new level of efficiency and accuracy in document analysis.Learn more Learning resourcesHarness documents for deeper insights Don't let your documents remain dormant data silos - transform them into actionable intelligence.Extract valuable insights, streamline workflows, and make data-driven decisions faster than ever before. No more tedious manual tasks or complex model training - simply upload your documents and let Document AI do the heavy lifting. With its advanced foundation models and customizable accuracy features, you'll unlock a new level of efficiency and accuracy in document analysis.Learn more PricingVertex AI SearchVertex AI Search consists of a number of features that can be used independently or together. Vertex AI Search is a part of Vertex AI Agent Builder. Check out our $1,000 free trial offer for select SKUs*Features and usage Description Price Vertex AI searchVertex AI search is a Google Search quality information retrieval and answer generation system that can help you create RAG-powered gen apps or improve the performance of your search applications.Starting at$2per 1,000 queries Vector search For enterprises who need custom embeddings-based information retrieval, Vertex AI offers powerful vector search capabilities. 
View detailsRefer to pricing page for more detailsPlease visit our pricing page for more details on our $1,000 free trial offer. To make it easier for you to get started, we're offering customers new to Vertex AI Search (this includes existing Google Cloud customers) a one time credit of $1,000 per Google Cloud billing account, applicable up to 12 months post activation (or until you consume the amount, whichever is sooner). Vertex AI SearchVertex AI Search consists of a number of features that can be used independently or together. Vertex AI Search is a part of Vertex AI Agent Builder. Check out our $1,000 free trial offer for select SKUs*Vertex AI searchDescription Vertex AI search is a Google Search quality information retrieval and answer generation system that can help you create RAG-powered gen apps or improve the performance of your search applications.Price Starting at$2per 1,000 queries Vector search Description For enterprises who need custom embeddings-based information retrieval, Vertex AI offers powerful vector search capabilities. View detailsPrice Refer to pricing page for more detailsPlease visit our pricing page for more details on our $1,000 free trial offer. To make it easier for you to get started, we're offering customers new to Vertex AI Search (this includes existing Google Cloud customers) a one time credit of $1,000 per Google Cloud billing account, applicable up to 12 months post activation (or until you consume the amount, whichever is sooner). PRICING CALCULATOREstimate your costs, including region-specific pricing and fees.Estimate your costs CUSTOM QUOTEConnect with our sales team to get a custom quote for your organization.Request a quoteStart your proof of conceptNew customers get $1,000 in free creditsCheck out our special offer Setup a Vertex AI project environmentGet started Check out the full capabilities of Vertex AI Agent BuilderRead more Use Model Builder to explore, fine tune, train, evaluate and manage AI modelsRead more Contact your sales team to help with your project Contact sales Google Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Vertex_AI_Search_for_retail.txt b/Vertex_AI_Search_for_retail.txt new file mode 100644 index 0000000000000000000000000000000000000000..4ad954b694bae8f285ef97382645ba0038d5ef62 --- /dev/null +++ b/Vertex_AI_Search_for_retail.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/retail-product-discovery +Date Scraped: 2025-02-23T11:58:44.261Z + +Content: +Jump to Vertex AI Search for commerceVertex AI Search for commerceProvide Google-quality search, browsing, recommendations, and conversational commerce on your digital properties, increasing conversions and reducing search abandonment. 
Contact sales Power your digital commerce site or application with Google-quality capabilities highly tuned for e-commerceGuide users to refine and narrow broad search queries through back and forth conversationDeliver highly personalized recommendations at scaleSearch for products with an image and receive a ranked list of visually and semantically similar itemsImprove conversion and order value by personalizing the shopping experience BenefitsIncrease e-commerce revenueDrive higher revenue per visit with AI-powered search that optimizes product ranking for increased conversions and sales.Product discovery with a multi-modal approach Guide users via text and images to provide real-time results of similar, or complementary, items from your product catalog.Deliver relevant recommendations at scaleUnderstand nuances behind customer behavior, context, and SKUs in order to drive engagement across channels.Key featuresKey featuresState-of-the-art AITake advantage of Google's expertise in AI that enables advanced query understanding and personalization. This produces better search and browse results as well as recommendations from even the broadest queries. In addition, you can effectively match product attributes with website content for fast, relevant product discovery with semantic and conversational search.Optimized resultsLeverage user interaction and ranking models to meet specific business goals. Customize recommendations leveraging page-level optimization, buy-it-again and revenue optimization capabilities to deliver your desired outcome: engagement, revenue, or conversions. Apply business rules to fine-tune what customers see, diversify product displays, and filter by product availability, custom tags, etc.Fully managedNo need to preprocess data, train or hypertune machine learning models, load balance or manually provision your infrastructure to handle unpredictable traffic spikes. We do it all for you automatically. Plus, quickly connect data with your existing tools like Google Analytics 360, Tag Manager, Merchant Center, Cloud Storage, BigQuery.Security, privacy and complianceSecurity and privacy protocols that ensure your data is isolated with strong access controls - your data is yours alone. We support compliance with the General Data Protection Regulation (GDPR). See terms and the GDPR Resource Center for more information.Work with experts to build shopping experiences with searchWork with Google Cloud Consulting to take the experience one step further and improve conversions by personalizing the shopping experience. 
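As a concrete example of the search capability described above, the following minimal sketch issues a product search query with the Vertex AI Search for commerce (Retail API) client library for Python. It assumes a product catalog has already been imported; the project, catalog, and placement values are placeholders.

# Minimal sketch (not production code): run a product search query with the
# Vertex AI Search for commerce (Retail API) client library
# (pip install google-cloud-retail). Assumes a product catalog has already been
# imported; the project, catalog, and placement values are placeholders.
from google.cloud import retail_v2

client = retail_v2.SearchServiceClient()

placement = (
    "projects/my-project/locations/global/catalogs/default_catalog/"
    "placements/default_search"
)

request = retail_v2.SearchRequest(
    placement=placement,
    query="waterproof hiking boots",
    visitor_id="visitor-123",   # a unique ID per end user or session is required
    page_size=10,
)

for result in client.search(request):
    print(result.id)            # the product ID of each matching catalog item

Recommendations follow a similar pattern through the Retail API's prediction service, driven by the user events you log from your site or app.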
Contact sales to get started or learn more about our entire consulting portfolio.CustomersCustomers are making it easier to find relevant productsLearn how some of the world's leading companies are using Vertex AI Search for commerce to improve engagement and increase conversions.Blog postShopify and Google Cloud AI integration brings advanced ecommerce capabilities to retailers and merchants5-min readVideoMacy's delivers a personalized shopping experience with Google Cloud Discovery AIVideo (1:56)Blog postCustomers across the globe are transforming their site search with Google Cloud5-min readVideoHanes is working with Google Cloud to deliver personalized experiences to their customersVideo (2:03)Case studyDigitec Galaxus uses Recommendations AI to help their customers find what they are looking for5-min readSee all customersWe have been partnering with Google Cloud to return relevant results for long-tail searches and have seen an increase in click-through and search conversion and a drop in our No Results Found (NRF) rate since we launched.Neelima Sharma, Senior Vice President, Technology, E-commerce, Marketing and Merchandising - Lowe'sWhat's newExplore the latest updatesReportWhat’s driving search abandonment in online retailLearn moreBlog postNew research: Search abandonment continues to vex retailers worldwide Read the blogBlog postGoogle Cloud unveils new AI tools for retailersLearn moreDocumentationTechnical ResourcesTutorialVertex AI Search for commerce overview videoVideo to provide you an overview of Vertex AI Search for commerce capabilities and how they can help you drive your e-commerce business forward.Learn moreBest PracticeImplementing Vertex AI Search for commerceFollow this step-by-step guidance to set up the capability you need.Learn moreBest PracticeData ingestion for Vertex AI Search for commerceBlog to guide you on how to go about data ingestion for Vertex AI Search for commerce.Learn moreNot seeing what you’re looking for?View all product documentationPricingPricingSearch, browse, and recommendations AI pricingVision API Product Search pricingPartnersPartnersIntegrate Vertex AI Search for commerce into your systems with our trusted partners.See all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Vertex_AI_Studio.txt b/Vertex_AI_Studio.txt new file mode 100644 index 0000000000000000000000000000000000000000..53cef3e785387dcc0e0011d381ba65196bc2a42c --- /dev/null +++ b/Vertex_AI_Studio.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/generative-ai-studio +Date Scraped: 2025-02-23T12:02:00.826Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceVertex AI Studio Test, tune and deploy enterprise-ready generative AIVertex AI provides APIs for leading foundation models, and tools to rapidly prototype, easily tune models with your own data, and seamlessly deploy to applications. Try it in consoleContact sales Want training? 
Start a free course for Vertex AI Studio.Product highlightsAccess Google's models like Gemini, Imagen, Codey, & ChirpEasily tune foundation models with your own dataEnable more people to build with Generative AIIntroduction to Vertex AI Studio27:50FeaturesGemini, Google’s most capable multimodal modelsVertex AI offers access to Gemini multimodal models from Google, capable of understanding virtually any input, combining different types of information, and generating almost any output. Prompt and test in Vertex AI with Gemini, using text, video, code, audio, or images. Using Gemini’s advanced reasoning and state-of-the-art generation capabilities, developers can try sample prompts for extracting text from images, converting image text to JSON, and even generate answers about uploaded images to build next-gen AI applications.Vertex AI Gemini API quickstartTry Vertex AI Gemini APIAccess to Foundation Models and APIsChoose the right model for your use case with 40+ proprietary models and 80+ OSS and 3rd party models on Vertex AI's Model Garden. With access to Google's foundation models as APIs, you can easily deploy these models to applications.In addition to Gemini, you also have access to Gemma, a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. VIDEOJumpstart your ML project with foundation models and APIs5:58Experiment and test models with prompt designAdapt models to your use case with prompt design. Iterate on the prompt through a familiar chat interface and choose from multiple ways to adjust responses. For example, you can change the response "temperature" to elicit a more creative response. Generative AI prompt samplesView examplesEasily tune models with your own data Improve the quality of model responses for your use case by tuning foundation models with your own data with Vertex AI Studio. Access state-of-the-art tuning options like adapter tuning and Reinforcement Learning from Human Feedback (RLHF) or style and subject tuning for image generation. Connect models to real-world data and real-time actionsVertex AI Extensions provides a set of fully-managed tools for building and managing extensions that connect models to proprietary data sources or 3rd party services. Now developers can create generative AI applications that deliver real-time information, incorporate company data, and take action on the user's behalf. Integration with end-to-end ML toolsVertex AI's managed endpoints make it easy to build generative capabilities into an application, with only a few lines of code and no ML background required. Developers can forget about the complexities of provisioning storage and compute resources, or of optimizing the model for inference. Once deployed, foundation models can be scaled, managed, and governed in production using Vertex AI’s end-to-end MLOps capabilities and fully managed AI infrastructure. Enterprise-grade data governance and security With Vertex AI, your data is completely protected, secure, and private when using it to customize a model, and you have full control over where and how or if their data is used. None of the customer’s data, model weights, or input prompts are used to tune the original foundation models. When enterprises tune a model with their own data, the original model remains unchanged, and the new model never leaves your company’s environment. 
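To make the prompt design and configuration options above concrete, here is a minimal sketch that sends a multimodal prompt to a Gemini model through the Vertex AI SDK for Python and adjusts the sampling temperature. The project ID, Cloud Storage path, and model name are placeholders, and available model versions change over time.

# Minimal sketch (not production code): send a multimodal prompt to a Gemini model
# through the Vertex AI SDK for Python (pip install google-cloud-aiplatform) and
# adjust sampling parameters such as temperature. The project ID, Cloud Storage
# path, and model name are placeholders.
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel, Part

vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")

image = Part.from_uri("gs://my-bucket/receipt.png", mime_type="image/png")
response = model.generate_content(
    [image, "Extract the merchant name and total amount as JSON."],
    generation_config=GenerationConfig(
        temperature=0.2,          # lower = more deterministic, higher = more creative
        max_output_tokens=256,
    ),
)
print(response.text)

Lower temperature values make responses more deterministic, which usually suits extraction tasks like this one; higher values produce more varied, creative output.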
View all featuresHow It WorksVertex AI Studio is a Google Cloud console tool for rapidly prototyping and testing generative AI models. You can test sample prompts, design your own prompts, and customize foundation models to handle tasks that meet your application's needs. Get started in Vertex AI Studio with our no cost introductory training. In this course, you learn what Vertex AI Studio is, its features and options, and how to use it by walking through demos of the product. Start no cost training Introduction to Vertex AI Studio sparkLooking to build a solution?Can I use both Google's Gemini and Meta's Llama in Google Cloud?I want to generate images that match my company's brandI want to train a custom model for a specific need that we haveMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionUse various generative modelsprompt_suggestionGenerate custom imagesprompt_suggestionTrain a custom modelCommon UsesBuild with Gemini Get started with Google's multimodal modelsPrompt and test in Vertex AI with Gemini, using natural language, code, or an image. Try sample prompts for extracting text from images, image mock up to HTML, and even generate answers about uploaded images.Try Gemini in Vertex AIGemini API FAQsView code samples for Python, JavaScript, Java, Go, and CurlDesign multimodal prompts Tutorials, quickstarts, & labsGet started with Google's multimodal modelsPrompt and test in Vertex AI with Gemini, using natural language, code, or an image. Try sample prompts for extracting text from images, image mock up to HTML, and even generate answers about uploaded images.Try Gemini in Vertex AIGemini API FAQsView code samples for Python, JavaScript, Java, Go, and CurlDesign multimodal prompts Imagen: Generate and customize imagesStudio-grade images at scale for any business needWith Imagen, it’s easy to create and edit high-quality images at scale with low latency and enterprise-grade data governance. Organizations can also customize and adapt Imagen to their business with object tuning and style tuning Leveraging the power of mask-free edit, image upscaling, and image captioning across over 300 languages, customers can quickly generate production ready images.Try Imagen in Vertex AI StudioView documentation for Imagen on Vertex AITune Imagen with your own dataNo cost Skills Boost: Introduction to Image GenerationLearning resourcesStudio-grade images at scale for any business needWith Imagen, it’s easy to create and edit high-quality images at scale with low latency and enterprise-grade data governance. Organizations can also customize and adapt Imagen to their business with object tuning and style tuning Leveraging the power of mask-free edit, image upscaling, and image captioning across over 300 languages, customers can quickly generate production ready images.Try Imagen in Vertex AI StudioView documentation for Imagen on Vertex AITune Imagen with your own dataNo cost Skills Boost: Introduction to Image GenerationChirp: Universal speech modelSpeech tasks ranging from voice control to voice assistanceChirp brings the power of large models to speech tasks. Trained on millions of hours of audio, Chirp supports over 100 languages and brings the model quality of the world’s most widely spoken languages to scores of additional languages and dialects. 
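As a minimal sketch of what calling Chirp looks like in practice, the following example transcribes a short audio file through the Speech-to-Text v2 API using the default recognizer. The project ID and region are placeholders, and Chirp is only available in certain regions, so check the documentation for supported locations.

# Minimal sketch (not production code): transcribe a short audio file with the Chirp
# model through the Speech-to-Text v2 API (pip install google-cloud-speech). The
# project ID and region are placeholders.
from google.cloud import speech_v2

region = "us-central1"
client = speech_v2.SpeechClient(
    client_options={"api_endpoint": f"{region}-speech.googleapis.com"}
)

config = speech_v2.RecognitionConfig(
    auto_decoding_config=speech_v2.AutoDetectDecodingConfig(),
    language_codes=["en-US"],
    model="chirp",
)

with open("meeting.wav", "rb") as f:
    audio_bytes = f.read()

response = client.recognize(
    request=speech_v2.RecognizeRequest(
        recognizer=f"projects/my-project/locations/{region}/recognizers/_",
        config=config,
        content=audio_bytes,
    )
)

for result in response.results:
    print(result.alternatives[0].transcript)

For long recordings you would use batch recognition against audio files in Cloud Storage rather than the synchronous call shown here.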
Chirp achieves 98% accuracy on English and relative improvement of up to 300% in languages with less than 10 million speakers.View documentationLearning resourcesSpeech tasks ranging from voice control to voice assistanceChirp brings the power of large models to speech tasks. Trained on millions of hours of audio, Chirp supports over 100 languages and brings the model quality of the world’s most widely spoken languages to scores of additional languages and dialects. Chirp achieves 98% accuracy on English and relative improvement of up to 300% in languages with less than 10 million speakers.View documentationCodey: Code completionDrive developer productivity with code assistanceCodey provides real-time code completion customizable to your own codebase. This code completion model supports 20+ coding languages, including Go, Google Standard SQL, Java, Javascript, Python, and TypeScript. It enables a wide variety of coding tasks including code chat which lets developers converse with a bot to get help with debugging, documentation, and learning new concepts. View documentationLearning resourcesDrive developer productivity with code assistanceCodey provides real-time code completion customizable to your own codebase. This code completion model supports 20+ coding languages, including Go, Google Standard SQL, Java, Javascript, Python, and TypeScript. It enables a wide variety of coding tasks including code chat which lets developers converse with a bot to get help with debugging, documentation, and learning new concepts. View documentationPricingVertex AI Pricing for generative AI varies by foundation models and APIs. ServicePricingVertex AILearn more about generative AI on Vertex AI pricing. Vertex AI Pricing for generative AI varies by foundation models and APIs. Vertex AIPricingLearn more about generative AI on Vertex AI pricing. Pricing calculatorEstimate your costs on Google CloudCheck out our pricing calculator Custom quoteConnect with our sales team to learn more about pricingContact salesExplore AI/ML on Google CloudView generative AI models and APIs available on Vertex AIGo to Model Garden Accelerate ML to production Learn more about Vertex AIAdd AI to applicationsWatch videoTake ML models to productionRead blogMLOps on Vertex AIRead documentationGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Video_Stitcher_API.txt b/Video_Stitcher_API.txt new file mode 100644 index 0000000000000000000000000000000000000000..3c3b4df7665aad1bd502e6bbacbf3f245cebb7cc --- /dev/null +++ b/Video_Stitcher_API.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/video-stitcher/docs +Date Scraped: 2025-02-23T12:06:36.967Z + +Content: +To enable the Video Stitcher API for your project, please reach out to your Account Representative or contact Sales to learn more. Home Video Stitcher API Documentation Stay organized with collections Save and categorize content based on your preferences. Video Stitcher API documentation View all product documentation The Video Stitcher API helps you generate dynamic content for delivery to client devices. Call the Video Stitcher API from your servers to dynamically insert ads into video-on-demand and livestreams for your users. Learn more Get started for free Start your next project with $300 in free credit Build and test a proof of concept with the free trial credits and free monthly usage of 20+ products. 
View free product offers Keep exploring with 20+ always-free products Access 20+ free products for common use cases, including AI APIs, VMs, data warehouses, and more. format_list_numbered Guides Integrate Google Ad Manager with VOD assets Integrate Google Ad Manager with livestreams Manage a VOD session Manage a live session find_in_page Reference REST API info Resources Pricing Quotas Release notes \ No newline at end of file diff --git a/View_in_one_page(1).txt b/View_in_one_page(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..8a620ffce3d6bfc58f8cd304ccd5834655d15565 --- /dev/null +++ b/View_in_one_page(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/security-foundations/printable +Date Scraped: 2025-02-23T11:45:44.163Z + +Content: +Home Docs Cloud Architecture Center Send feedback Enterprise foundations blueprint Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-12-20 UTC You can print this page or save the page in PDF format using your browser's print function and choose the Save as PDF option. For the standard version of this page, return to the Enterprise foundations blueprint. This content was last updated in December 2023, and represents the status quo as of the time it was written. Google's security policies and systems may change going forward, as we continually improve protection for our customers. This document describes the best practices that let you deploy a foundational set of resources in Google Cloud. A cloud foundation is the baseline of resources, configurations, and capabilities that enable companies to adopt Google Cloud for their business needs. A well-designed foundation enables consistent governance, security controls, scale, visibility, and access to shared services across all workloads in your Google Cloud environment. After you deploy the controls and governance that are described in this document, you can deploy workloads to Google Cloud. The enterprise foundations blueprint (formerly known as the security foundations blueprint) is intended for architects, security practitioners, and platform engineering teams who are responsible for designing an enterprise-ready environment on Google Cloud. This blueprint consists of the following: A terraform-example-foundation GitHub repository that contains the deployable Terraform assets. A guide that describes the architecture, design, and controls that you implement with the blueprint (this document). You can use this guide in one of two ways: To create a complete foundation based on Google's best practices. You can deploy all the recommendations from this guide as a starting point, and then customize the environment to address your business' specific requirements. To review an existing environment on Google Cloud. You can compare specific components of your design against Google-recommended best practices. Supported use cases The enterprise foundation blueprint provides a baseline layer of resources and configurations that help enable all types of workloads on Google Cloud. Whether you're migrating existing compute workloads to Google Cloud, building containerized web applications, or creating big data and machine learning workloads, the enterprise foundation blueprint helps you build your environment to support enterprise workloads at scale. 
After you deploy the enterprise foundation blueprint, you can deploy workloads directly or deploy additional blueprints to support complex workloads that require additional capabilities. A defense-in-depth security model Google Cloud services benefit from the underlying Google infrastructure security design. It is your responsibility to design security into the systems that you build on top of Google Cloud. The enterprise foundation blueprint helps you to implement a defense-in-depth security model for your Google Cloud services and workloads. The following diagram shows a defense-in-depth security model for your Google Cloud organization that combines architecture controls, policy controls, and detective controls. The diagram describes the following controls: Policy controls are programmatic constraints that enforce acceptable resource configurations and prevent risky configurations. The blueprint uses a combination of policy controls including infrastructure-as-code (IaC) validation in your pipeline and organization policy constraints. Architecture controls are the configuration of Google Cloud resources like networks and resource hierarchy. The blueprint architecture is based on security best practices. Detective controls let you detect anomalous or malicious behavior within the organization. The blueprint uses platform features such as Security Command Center, integrates with your existing detective controls and workflows such as a security operations center (SOC), and provides capabilities to enforce custom detective controls. Key decisions This section summarizes the high-level architectural decisions of the blueprint. The diagram describes how Google Cloud services contribute to key architectural decisions: Cloud Build: Infrastructure resources are managed using a GitOps model. Declarative IaC is written in Terraform and managed in a version control system for review and approval, and resources are deployed using Cloud Build as the continuous integration and continuous deployment (CI/CD) automation tool. The pipeline also enforces policy-as-code checks to validate that resources meet expected configurations before deployment. Cloud Identity: Users and group membership are synchronized from your existing identity provider. Controls for user account lifecycle management and single sign-on (SSO) rely on the existing controls and processes of your identity provider. Identity and Access Management (IAM): Allow policies (formerly known as IAM policies) allow access to resources and are applied to groups based on job function. Users are added to the appropriate groups to receive view-only access to foundation resources. All changes to foundation resources are deployed through the CI/CD pipeline which uses privileged service account identities. Resource Manager: All resources are managed under a single organization, with a resource hierarchy of folders that organizes projects by environments. Projects are labeled with metadata for governance including cost attribution. Networking: Network topologies use Shared VPC to provide network resources for workloads across multiple regions and zones, separated by environment, and managed centrally. All network paths between on-premises hosts, Google Cloud resources in the VPC networks, and Google Cloud services are private. No outbound traffic to or inbound traffic from the public internet is permitted by default. 
Cloud Logging: Aggregated log sinks are configured to collect logs relevant for security and auditing into a centralized project for long-term retention, analysis, and export to external systems. Organization Policy Service: Organization policy constraints are configured to prevent various high-risk configurations. Secret Manager: Centralized projects are created for a team responsible for managing and auditing the use of sensitive application secrets to help meet compliance requirements. Cloud Key Management Service (Cloud KMS): Centralized projects are created for a team responsible for managing and auditing encryption keys to help meet compliance requirements. Security Command Center: Threat detection and monitoring capabilities are provided using a combination of built-in security controls from Security Command Center and custom solutions that let you detect and respond to security events. For alternatives to these key decisions, see alternatives. What's next Read about authentication and authorization (next document in this series). Authentication and authorization This section introduces how to use Cloud Identity to manage the identities that your employees use to access Google Cloud services. External identity provider as the source of truth We recommend federating your Cloud Identity account with your existing identity provider. Federation helps you ensure that your existing account management processes apply to Google Cloud and other Google services. If you don't have an existing identity provider, you can create user accounts directly in Cloud Identity. Note: If you're already using Google Workspace, Cloud Identity uses the same console, administrative controls, and user accounts as your Google Workspace account. The following diagram shows a high-level view of identity federation and single sign-on (SSO). It uses Microsoft Active Directory, located in the on-premises environment, as the example identity provider. This diagram describes the following best practices: User identities are managed in an Active Directory domain that is located in the on-premises environment and federated to Cloud Identity. Active Directory uses Google Cloud Directory Sync to provision identities to Cloud Identity. Users attempting to sign in to Google services are redirected to the external identity provider for single sign-on with SAML, using their existing credentials to authenticate. No passwords are synchronized with Cloud Identity. The following table provides links to setup guidance for identity providers. Identity provider Guidance Active Directory Active Directory user account provisioning Active Directory single sign-on Microsoft Entra ID (formerly Azure AD) Federating Google Cloud with Microsoft Entra ID Other external identity providers (for example, Ping or Okta) Integrating Ping Identity Solutions with Google Identity Services Using Okta with Google Cloud Providers Best practices for federating Google Cloud with an external identity provider We strongly recommend that you enforce multi-factor authentication at your identity provider with a phishing-resistant mechanism such as a Titan Security Key. The recommended settings for Cloud Identity aren't automated through the Terraform code in this blueprint. See administrative controls for Cloud Identity for the recommended security settings that you must configure in addition to deploying the Terraform code. Groups for access control A principal is an identity that can be granted access to a resource. 
Principals include Google Accounts for users, Google groups, Google Workspace accounts, Cloud Identity domains, and service accounts. Some services also let you grant access to all users who authenticate with a Google Account, or to all users on the internet. For a principal to interact with Google Cloud services, you must grant them roles in Identity and Access Management (IAM). To manage IAM roles at scale, we recommend that you assign users to groups based on their job functions and access requirements, then grant IAM roles to those groups. You should add users to groups using the processes in your existing identity provider for group creation and membership. We don't recommend granting IAM roles to individual users because individual assignments can increase the complexity of managing and auditing roles. The blueprint configures groups and roles for view-only access to foundation resources. We recommend that you deploy all resources in the blueprint through the foundation pipeline, and that you don't grant roles to users to groups to modify foundation resources outside of the pipeline. The following table shows the groups that are configured by the blueprint for viewing foundation resources. Name Description Roles Scope grp-gcp-org-admin@example.com Highly privileged administrators who can grant IAM roles at the organization level. They can access any other role. This privilege is not recommended for daily use. Organization Administrator organization grp-gcp-billing-admin@example.com Highly privileged administrators who can modify the Cloud Billing account. This privilege is not recommended for daily use. Billing Account Admin organization grp-gcp-billing-viewer@example.com The team who is responsible for viewing and analyzing the spending across all projects. Billing Account Viewer organization BigQuery User billing project grp-gcp-audit-viewer@example.com The team who is responsible for auditing security-related logs. Logs Viewer BigQuery User logging project grp-gcp-security-reviewer@example.com The team who is responsible for reviewing cloud security. Security Reviewer organization grp-gcp-network-viewer@example.com The team who is responsible for viewing and maintaining network configurations. Compute Network Viewer organization grp-gcp-scc-admin@example.com The team who is responsible for configuring Security Command Center. Security Center Admin Editor organization grp-gcp-secrets-admin@example.com The team who is responsible for managing, storing, and auditing credentials and other secrets that are used by applications. Secret Manager Admin secrets projects grp-gcp-kms-admin@example.com The team who is responsible for enforcing encryption key management to meet compliance requirements. Cloud KMS Viewer kms projects As you build your own workloads on top of the foundation, you create additional groups and grant IAM roles that are based on the access requirements for each workload. We strongly recommend that you avoid basic roles (such as Owner, Editor, or Viewer) and use predefined roles instead. Basic roles are overly permissive and a potential security risk. Owner and Editor roles can lead to privilege escalation and lateral movement, and the Viewer role includes access to read all data. For best practices on IAM roles, see Use IAM securely. Super admin accounts Cloud Identity users with the super admin account bypass the organization's SSO settings and authenticate directly to Cloud Identity. 
This exception is by design, so that the super admin can still access the Cloud Identity console in the event of an SSO misconfiguration or outage. However, it means you must consider additional protection for super admin accounts. To protect your super admin accounts, we recommend that you always enforce 2-step verification with security keys in Cloud Identity. For more information, see Security best practices for administrator accounts. Issues with consumer user accounts If you didn't use Cloud Identity or Google Workspace before you onboarded to Google Cloud, it's possible that your organization's employees are already using consumer accounts that are associated with their corporate email identities to access other Google services such as Google Marketing Platform or YouTube. Consumer accounts are accounts that are fully owned and managed by the individuals who created them. Because those accounts aren't under your organization's control and might include both personal and corporate data, you must decide how to consolidate these accounts with other corporate accounts. We recommend that you consolidate existing consumer user accounts as part of onboarding to Google Cloud. If you aren't using Google Workspace for all your user accounts already, we recommend blocking the creation of new consumer accounts. Administrative controls for Cloud Identity Cloud Identity has various administrative controls that are not automated by Terraform code in the blueprint. We recommend that you enforce each of these best practice security controls early in the process of building your foundation. Control Description Deploy 2-step verification User accounts might be compromised through phishing, social engineering, password spraying, or various other threats. 2-step verification helps mitigate these threats. We recommend that you enforce 2-step verification for all user accounts in your organization with a phishing-resistant mechanism such as Titan Security Keys or other keys that are based on the phishing-resistant FIDO U2F (CTAP1) standards. Set session length for Google Cloud services Persistent OAuth tokens on developer workstations can be a security risk if exposed. We recommend that you set a reauthentication policy to require authentication every 16 hours using a security key. Set session length for Google Services (Google Workspace customers only) Persistent web sessions across other Google services can be a security risk if exposed. We recommend that you enforce a maximum web session length and align this with session length controls in your SSO provider. Share data from Cloud Identity with Google Cloud services Admin Activity audit logs from Google Workspace or Cloud Identity are ordinarily managed and viewed in the Admin Console, separately from your logs in your Google Cloud environment. These logs contain information that is relevant for your Google Cloud environment, such as user login events. We recommend that you share Cloud Identity audit logs to your Google Cloud environment to centrally manage logs from all sources. Set up post SSO verification The blueprint assumes that you set up SSO with your external identity provider. We recommend that you enable an additional layer of control based on Google's sign-in risk analysis. After you apply this setting, users might see additional risk-based login challenges at sign-in if Google deems that a user sign-in is suspicious. 
Remediate issues with consumer user accounts Users with a valid email address at your domain but no Google Account can sign up for unmanaged consumer accounts. These accounts might contain corporate data, but are not controlled by your account lifecycle management processes. We recommend that you take steps to ensure that all user accounts are managed accounts. Disable account recovery for super admin accounts Super admin account self-recovery is off by default for all new customers (existing customers might have this setting on). Turning this setting off helps to mitigate the risk that a compromised phone, compromised email, or social engineering attack could let an attacker gain super admin privileges over your environment. Plan an internal process for a super admin to contact another super admin in your organization if they have lost access to their account, and ensure that all super admins are familiar with the process for support-assisted recovery. Enforce and monitor password requirements for users In most cases, user passwords are managed through your external identity provider, but super admin accounts bypass SSO and must use a password to sign in to Cloud Identity. Disable password reuse and monitor password strength for any users who use a password to log in to Cloud Identity, particularly super admin accounts. Set organization-wide policies for using groups By default, external user accounts can be added to groups in Cloud Identity. We recommend that you configure sharing settings so that group owners can't add external members. Note that this restriction doesn't apply to the super admin account or other delegated administrators with Groups admin permissions. Because federation from your identity provider runs with administrator privileges, the group sharing settings don't apply to this group synchronization. We recommend that you review controls in the identity provider and synchronization mechanism to ensure that non-domain members aren't added to groups, or that you apply group restrictions. What's next Read about organization structure (next document in this series). Organization structure The root node for managing resources in Google Cloud is the organization. The Google Cloud organization provides a resource hierarchy that provides an ownership structure for resources and attachment points for organization policies and access controls. The resource hierarchy consists of folders, projects, and resources, and it defines the structure and use of Google Cloud services within an organization. Resources lower in the hierarchy inherit policies such as IAM allow policies and organization policies. All access permissions are denied by default, until you apply allow policies directly to a resource or the resource inherits the allow policies from a higher level in the resource hierarchy. The following diagram shows the folders and projects that are deployed by the blueprint. The following sections describe the folders and projects in the diagram. Folders The blueprint uses folders to group projects based on their environment. This logical grouping is used to apply configurations like allow policies and organization policies at the folder level and then all resources within the folder inherit the policies. The following table describes the folders that are part of the blueprint. Folder Description bootstrap Contains the projects that are used to deploy foundation components. common Contains projects with resources that are shared by all environments. 
production Contains projects with production resources. nonproduction Contains a copy of the production environment to let you test workloads before you promote them to production. development Contains the cloud resources that are used for development. networking Contains the networking resources that are shared by all environments. Projects The blueprint uses projects to group individual resources based on their functionality and intended boundaries for access control. This following table describes the projects that are included in the blueprint. Folder Project Description bootstrap prj-b-cicd Contains the deployment pipeline that's used to build out the foundation components of the organization. For more information, see deployment methodology. prj-b-seed Contains the Terraform state of your infrastructure and the Terraform service account that is required to run the pipeline. For more information, see deployment methodology. common prj-c-secrets Contains organization-level secrets. For more information, see store application credentials with Secret Manager. prj-c-logging Contains the aggregated log sources for audit logs. For more information, see centralized logging for security and audit. prj-c-scc Contains resources to help configure Security Command Center alerting and other custom security monitoring. For more information, see threat monitoring with Security Command Center. prj-c-billing-export Contains a BigQuery dataset with the organization's billing exports. For more information, see allocate costs between internal cost centers. prj-c-infra-pipeline Contains an infrastructure pipeline for deploying resources like VMs and databases to be used by workloads. For more information, see pipeline layers. prj-c-kms Contains organization-level encryption keys. For more information, see manage encryption keys. networking prj-net-{env}-shared-base Contains the host project for a Shared VPC network for workloads that don't require VPC Service Controls. For more information, see network topology. prj-net-{env}-shared-restricted Contains the host project for a Shared VPC network for workloads that do require VPC Service Controls. For more information, see network topology. prj-net-interconnect Contains the Cloud Interconnect connections that provide connectivity between your on-premises environment and Google Cloud. For more information, see hybrid connectivity. prj-net-dns-hub Contains resources for a central point of communication between your on-premises DNS system and Cloud DNS. For more information, see centralized DNS setup. prj-{env}-secrets Contains folder-level secrets. For more information, see store and audit application credentials with Secret Manager. prj-{env}-kms Contains folder-level encryption keys. For more information, see manage encryption keys. application projects Contains various projects in which you create resources for applications. For more information, see project deployment patterns and pipeline layers. Governance for resource ownership We recommend that you apply labels consistently to your projects to assist with governance and cost allocation. The following table describes the project labels that are added to each project for governance in the blueprint. Label Description application The human-readable name of the application or workload that is associated with the project. businesscode A short code that describes which business unit owns the project. The code shared is used for common projects that are not explicitly tied to a business unit. 
billingcode A code that's used to provide chargeback information. primarycontact The username of the primary contact that is responsible for the project. Because project labels can't include special characters such as the at sign (@), it is set to the username without the @example.com suffix. secondarycontact The username of the secondary contact that is responsible for the project. Because project labels can't include special characters such as @, set only the username without the @example.com suffix. environment A value that identifies the type of environment, such as bootstrap, common, production, non-production, development, or network. envcode A value that identifies the type of environment, shortened to b, c, p, n, d, or net. vpc The ID of the VPC network that this project is expected to use. Google might occasionally send important notifications such as account suspensions or updates to product terms. The blueprint uses Essential Contacts to send those notifications to the groups that you configure during deployment. Essential Contacts is configured at the organization node and inherited by all projects in the organization. We recommend that you review these groups and ensure that emails are monitored reliably. Essential Contacts is used for a different purpose than the primarycontact and secondarycontact fields that are configured in project labels. The contacts in project labels are intended for internal governance. For example, if you identify non-compliant resources in a workload project and need to contact the owners, you could use the primarycontact field to find the person or team responsible for that workload. What's next Read about networking (next document in this series). Networking Networking is required for resources to communicate within your Google Cloud organization and between your cloud environment and on-premises environment. This section describes the structure in the blueprint for VPC networks, IP address space, DNS, firewall policies, and connectivity to the on-premises environment. Network topology The blueprint repository provides the following options for your network topology: Use separate Shared VPC networks for each environment, with no network traffic directly allowed between environments. Use a hub-and-spoke model that adds a hub network to connect each environment in Google Cloud, with the network traffic between environments gated by a network virtual appliance (NVA). Choose the dual Shared VPC network topology when you don't want direct network connectivity between environments. Choose the hub-and-spoke network topology when you want to allow network connectivity between environments that is filtered by an NVA, such as when you rely on existing tools that require a direct network path to every server in your environment. Both topologies use Shared VPC as a principal networking construct because Shared VPC allows a clear separation of responsibilities. Network administrators manage network resources in a centralized host project, and workload teams deploy their own application resources and consume the network resources in service projects that are attached to the host project. Both topologies include a base and restricted version of each VPC network. The base VPC network is used for resources that contain non-sensitive data, and the restricted VPC network is used for resources with sensitive data that require VPC Service Controls.
For more information on implementing VPC Service Controls, see Protect your resources with VPC Service Controls. Dual Shared VPC network topology If you require network isolation between your development, non-production, and production networks on Google Cloud, we recommend the dual Shared VPC network topology. This topology uses separate Shared VPC networks for each environment, with each environment additionally split between a base Shared VPC network and a restricted Shared VPC network. The following diagram shows the dual Shared VPC network topology. The diagram describes these key concepts of the dual Shared VPC topology: Each environment (production, non-production, and development) has one Shared VPC network for the base network and one Shared VPC network for the restricted network. This diagram shows only the production environment, but the same pattern is repeated for each environment. Each Shared VPC network has two subnets, with each subnet in a different region. Connectivity with on-premises resources is enabled through four VLAN attachments to the Dedicated Interconnect instance for each Shared VPC network, using four Cloud Router services (two in each region for redundancy). For more information, see Hybrid connectivity between on-premises environment and Google Cloud. By design, this topology doesn't allow network traffic to flow directly between environments. If you do require network traffic to flow directly between environments, you must take additional steps to allow this network path. For example, you might configure Private Service Connect endpoints to expose a service from one VPC network to another VPC network. Alternatively, you might configure your on-premises network to let traffic flow from one Google Cloud environment to the on-premises environment and then to another Google Cloud environment. Hub-and-spoke network topology If you deploy resources in Google Cloud that require a direct network path to resources in multiple environments, we recommend the hub-and-spoke network topology. The hub-and-spoke topology uses several of the concepts that are part of the dual Shared VPC topology, but modifies the topology to add a hub network. The following diagram shows the hub-and-spoke topology. The diagram describes these key concepts of hub-and-spoke network topology: This model adds a hub network, and each of the development, non-production, and production networks (spokes) are connected to the hub network through VPC Network Peering. Alternatively, if you anticipate exceeding the quota limit, you can use an HA VPN gateway instead. Connectivity to on-premises networks is allowed only through the hub network. All spoke networks can communicate with shared resources in the hub network and use this path to connect to on-premises networks. The hub networks include an NVA for each region, deployed redundantly behind internal Network Load Balancer instances. This NVA serves as the gateway to allow or deny traffic to communicate between spoke networks. The hub network also hosts tooling that requires connectivity to all other networks. For example, you might deploy tools on VM instances for configuration management to the common environment. The hub-and-spoke model is duplicated for a base version and restricted version of each network. To enable spoke-to-spoke traffic, the blueprint deploys NVAs on the hub Shared VPC network that act as gateways between networks. Routes are exchanged from hub-to-spoke VPC networks through custom routes exchange. 
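As a rough illustration of the spoke-to-hub connection described above (not the blueprint's actual module code), the following Terraform sketch peers a development spoke network with the hub network and enables custom route exchange so that the spoke can learn the routes that point at the NVAs. The project and network names are illustrative assumptions:

# Sketch only: VPC Network Peering must be defined in both directions, and
# custom route exchange lets the hub advertise its NVA routes to the spoke.
# Project and network names are illustrative placeholders.
resource "google_compute_network_peering" "hub_to_dev_spoke" {
  name                 = "peering-hub-to-dev"
  network              = "projects/prj-net-hub-base/global/networks/vpc-c-hub-base"
  peer_network         = "projects/prj-net-d-shared-base/global/networks/vpc-d-shared-base"
  export_custom_routes = true
}

resource "google_compute_network_peering" "dev_spoke_to_hub" {
  name                 = "peering-dev-to-hub"
  network              = "projects/prj-net-d-shared-base/global/networks/vpc-d-shared-base"
  peer_network         = "projects/prj-net-hub-base/global/networks/vpc-c-hub-base"
  import_custom_routes = true
}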
In this scenario, connectivity between spokes must be routed through the NVA because VPC Network Peering is non-transitive, and therefore, spoke VPC networks can't exchange data with each other directly. You must configure the virtual appliances to selectively allow traffic between spokes. Project deployment patterns When creating new projects for workloads, you must decide how resources in this project connect to your existing network. The following table describes the patterns for deploying projects that are used in the blueprint. Pattern Description Example usage Shared base projects These projects are configured as service projects to a base Shared VPC host project. Use this pattern when resources in your project have the following criteria: Require network connectivity to the on-premises environment or resources in the same Shared VPC topology. Require a network path to the Google services that are contained on the private virtual IP address. Don't require VPC Service Controls. example_base_shared_vpc_project.tf Shared restricted projects These projects are configured as service projects to a restricted Shared VPC host project. Use this pattern when resources in your project have the following criteria: Require network connectivity to the on-premises environment or resources in the same Shared VPC topology. Require a network path to the Google services contained on the restricted virtual IP address. Require VPC Service Controls. example_restricted_shared_vpc_project.tf Floating projects Floating projects are not connected to other VPC networks in your topology. Use this pattern when resources in your project have the following criteria: Don't require full mesh connectivity to an on-premises environment or resources in the Shared VPC topology. Don't require a VPC network, or you want to manage the VPC network for this project independently of your main VPC network topology (such as when you want to use an IP address range that clashes with the ranges already in use). You might have a scenario where you want to keep the VPC network of a floating project separate from the main VPC network topology but also want to expose a limited number of endpoints between networks. In this case, publish services by using Private Service Connect to share network access to an individual endpoint across VPC networks without exposing the entire network. example_floating_project.tf Peering projects Peering projects create their own VPC networks and peer to other VPC networks in your topology. Use this pattern when resources in your project have the following criteria: Require network connectivity in the directly peered VPC network, but don't require transitive connectivity to an on-premises environment or other VPC networks. Must manage the VPC network for this project independently of your main network topology. If you create peering projects, it's your responsibility to allocate non-conflicting IP address ranges and plan for peering group quota. example_peering_project.tf IP address allocation This section introduces how the blueprint architecture allocates IP address ranges. You might need to change the specific IP address ranges used based on the IP address availability in your existing hybrid environment. The following table provides a breakdown of the IP address space that's allocated for the blueprint. The hub environment only applies in the hub-and-spoke topology. 
Purpose VPC type Region Hub environment Development environment Non-production environment Production environment Primary subnet ranges Base Region 1 10.0.0.0/18 10.0.64.0/18 10.0.128.0/18 10.0.192.0/18 Region 2 10.1.0.0/18 10.1.64.0/18 10.1.128.0/18 10.1.192.0/18 Unallocated 10.{2-7}.0.0/18 10.{2-7}.64.0/18 10.{2-7}.128.0/18 10.{2-7}.192.0/18 Restricted Region 1 10.8.0.0/18 10.8.64.0/18 10.8.128.0/18 10.8.192.0/18 Region 2 10.9.0.0/18 10.9.64.0/18 10.9.128.0/18 10.9.192.0/18 Unallocated 10.{10-15}.0.0/18 10.{10-15}.64.0/18 10.{10-15}.128.0/18 10.{10-15}.192.0/18 Private services access Base Global 10.16.0.0/21 10.16.8.0/21 10.16.16.0/21 10.16.24.0/21 Restricted Global 10.16.32.0/21 10.16.40.0/21 10.16.48.0/21 10.16.56.0/21 Private Service Connect endpoints Base Global 10.17.0.1/32 10.17.0.2/32 10.17.0.3/32 10.17.0.4/32 Restricted Global 10.17.0.5/32 10.17.0.6/32 10.17.0.7/32 10.17.0.8/32 Proxy-only subnets Base Region 1 10.18.0.0/23 10.18.2.0/23 10.18.4.0/23 10.18.6.0/23 Region 2 10.19.0.0/23 10.19.2.0/23 10.19.4.0/23 10.19.6.0/23 Unallocated 10.{20-25}.0.0/23 10.{20-25}.2.0/23 10.{20-25}.4.0/23 10.{20-25}.6.0/23 Restricted Region 1 10.26.0.0/23 10.26.2.0/23 10.26.4.0/23 10.26.6.0/23 Region 2 10.27.0.0/23 10.27.2.0/23 10.27.4.0/23 10.27.6.0/23 Unallocated 10.{28-33}.0.0/23 10.{28-33}.2.0/23 10.{28-33}.4.0/23 10.{28-33}.6.0/23 Secondary subnet ranges Base Region 1 100.64.0.0/18 100.64.64.0/18 100.64.128.0/18 100.64.192.0/18 Region 2 100.65.0.0/18 100.65.64.0/18 100.65.128.0/18 100.65.192.0/18 Unallocated 100.{66-71}.0.0/18 100.{66-71}.64.0/18 100.{66-71}.128.0/18 100.{66-71}.192.0/18 Restricted Region 1 100.72.0.0/18 100.72.64.0/18 100.72.128.0/18 100.72.192.0/18 Region 2 100.73.0.0/18 100.73.64.0/18 100.73.128.0/18 100.73.192.0/18 Unallocated 100.{74-79}.0.0/18 100.{74-79}.64.0/18 100.{74-79}.128.0/18 100.{74-79}.192.0/18 The preceding table demonstrates these concepts for allocating IP address ranges: IP address allocation is subdivided into ranges for each combination of base Shared VPC, restricted Shared VPC, region, and environment. Some resources are global and don't require subdivisions for each region. By default, for regional resources, the blueprint deploys in two regions. In addition, there are unused IP address ranges so that you can expand into six additional regions. The hub network is only used in the hub-and-spoke network topology, while the development, non-production, and production environments are used in both network topologies. The following table introduces how each type of IP address range is used. Purpose Description Primary subnet ranges Resources that you deploy to your VPC network, such as virtual machine instances, use internal IP addresses from these ranges. Private services access Some Google Cloud services such as Cloud SQL require you to preallocate a subnet range for private services access. The blueprint reserves a /21 range globally for each of the Shared VPC networks to allocate IP addresses for services that require private services access. When you create a service that depends on private services access, you allocate a regional /24 subnet from the reserved /21 range. Private Service Connect The blueprint provisions each VPC network with a Private Service Connect endpoint to communicate with Google Cloud APIs. This endpoint lets your resources in the VPC network reach Google Cloud APIs without relying on outbound traffic to the internet or publicly advertised internet ranges. 
Proxy-based load balancers Some types of Application Load Balancers require you to preallocate proxy-only subnets. Although the blueprint doesn't deploy Application Load Balancers that require this range, allocating ranges in advance helps reduce friction for workloads when they need to request a new subnet range to enable certain load balancer resources. Secondary subnet ranges Some use cases, such as container-based workloads, require secondary ranges. The blueprint allocates ranges from the RFC 6598 IP address space for secondary ranges. Centralized DNS setup For DNS resolution between Google Cloud and on-premises environments, we recommend that you use a hybrid approach with two authoritative DNS systems. In this approach, Cloud DNS handles authoritative DNS resolution for your Google Cloud environment and your existing on-premises DNS servers handle authoritative DNS resolution for on-premises resources. Your on-premises environment and Google Cloud environment perform DNS lookups between environments through forwarding requests. The following diagram demonstrates the DNS topology across the multiple VPC networks that are used in the blueprint. The diagram describes the following components of the DNS design that is deployed by the blueprint: The DNS hub project in the common folder is the central point of DNS exchange between the on-premises environment and the Google Cloud environment. DNS forwarding uses the same Dedicated Interconnect instances and Cloud Routers that are already configured in your network topology. In the dual Shared VPC topology, the DNS hub uses the base production Shared VPC network. In the hub-and-spoke topology, the DNS hub uses the base hub Shared VPC network. Servers in each Shared VPC network can resolve DNS records from other Shared VPC networks through DNS forwarding, which is configured between Cloud DNS in each Shared VPC host project and the DNS hub. On-premises servers can resolve DNS records in Google Cloud environments using DNS server policies that allow queries from on-premises servers. The blueprint configures an inbound server policy in the DNS hub to allocate IP addresses, and the on-premises DNS servers forward requests to these addresses. All DNS requests to Google Cloud reach the DNS hub first, which then resolves records from DNS peers. Servers in Google Cloud can resolve DNS records in the on-premises environment using forwarding zones that query on-premises servers. All DNS requests to the on-premises environment originate from the DNS hub. The DNS request source is 35.199.192.0/19. Firewall policies Google Cloud has multiple firewall policy types. Hierarchical firewall policies are enforced at the organization or folder level to inherit firewall policy rules consistently across all resources in the hierarchy. In addition, you can configure network firewall policies for each VPC network. The blueprint combines these firewall policies to enforce common configurations across all environments using hierarchical firewall policies and to enforce more specific configurations at each individual VPC network using network firewall policies. The blueprint doesn't use legacy VPC firewall rules. We recommend that you use only firewall policies and avoid mixing them with legacy VPC firewall rules. Hierarchical firewall policies The blueprint defines a single hierarchical firewall policy and attaches the policy to each of the production, non-production, development, bootstrap, and common folders. 
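As a minimal sketch of this construct (not the blueprint's own code), the following Terraform example creates a hierarchical firewall policy with one of the rules from the table that follows (the IAP for TCP forwarding allow rule) and associates the policy with a folder. The folder ID, names, and priority are illustrative assumptions:

# Sketch only: folder ID, names, and priority are illustrative assumptions.
resource "google_compute_firewall_policy" "common" {
  parent      = "folders/111111111111"   # hypothetical folder ID
  short_name  = "fp-common-rules"
  description = "Rules enforced broadly across all environments"
}

# Allow IAP for TCP forwarding (one of the rules described in the following table).
resource "google_compute_firewall_policy_rule" "allow_iap" {
  firewall_policy = google_compute_firewall_policy.common.name
  priority        = 10000
  direction       = "INGRESS"
  action          = "allow"
  match {
    src_ip_ranges = ["35.235.240.0/20"]
    layer4_configs {
      ip_protocol = "tcp"
      ports       = ["22", "3389"]
    }
  }
}

# Attach the policy to a folder so that all projects under it inherit the rules.
resource "google_compute_firewall_policy_association" "production" {
  name              = "fp-association-production"
  attachment_target = "folders/111111111111"   # hypothetical folder ID
  firewall_policy   = google_compute_firewall_policy.common.id
}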
This hierarchical firewall policy contains the rules that should be enforced broadly across all environments, and delegates the evaluation of more granular rules to the network firewall policy for each individual environment. The following table describes the hierarchical firewall policy rules deployed by the blueprint. Rule description Direction of traffic Filter (IPv4 range) Protocols and ports Action Delegate the evaluation of inbound traffic from RFC 1918 to lower levels in the hierarchy. Ingress 192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12 all Go to next Delegate the evaluation of outbound traffic to RFC 1918 to lower levels in the hierarchy. Egress 192.168.0.0/16, 10.0.0.0/8, 172.16.0.0/12 all Go to next IAP for TCP forwarding Ingress 35.235.240.0/20 tcp:22,3389 Allow Windows server activation Egress 35.190.247.13/32 tcp:1688 Allow Health checks for Cloud Load Balancing Ingress 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22 tcp:80,443 Allow Network firewall policies The blueprint configures a network firewall policy for each network. Each network firewall policy starts with a minimum set of rules that allow access to Google Cloud services and deny egress to all other IP addresses. In the hub-and-spoke model, the network firewall policies contain additional rules to allow communication between spokes. The network firewall policy allows outbound traffic from one spoke to the hub or another spoke, and allows inbound traffic from the NVA in the hub network. The following table describes the rules in the global network firewall policy deployed for each VPC network in the blueprint. Rule description Direction of traffic Filter Protocols and ports Allow outbound traffic to Google Cloud APIs. Egress The Private Service Connect endpoint that is configured for each individual network. See Private access to Google Cloud APIs. tcp:443 Deny outbound traffic not matched by other rules. Egress all all Allow outbound traffic from one spoke to another spoke (for hub-and-spoke model only). Egress The aggregate of all IP addresses used in the hub-and-spoke topology. Traffic that leaves a spoke VPC is routed to the NVA in the hub network first. all Allow inbound traffic to a spoke from the NVA in the hub network (for hub-and-spoke model only). Ingress Traffic originating from the NVAs in the hub network. all When you first deploy the blueprint, a VM instance in a VPC network can communicate with Google Cloud services, but not with other infrastructure resources in the same VPC network. To allow VM instances to communicate, you must add additional rules to your network firewall policy and tags that explicitly allow the VM instances to communicate. Tags are added to VM instances, and traffic is evaluated against those tags. Tags additionally have IAM controls so that you can define them centrally and delegate their use to other teams. Note: All references to tags in this document refer to tags with IAM controls. We don't recommend VPC firewall rules with legacy network tags. The following diagram shows an example of how you can add custom tags and network firewall policy rules to let workloads communicate inside a VPC network. The diagram demonstrates the following concepts of this example: The network firewall policy contains Rule 1 that denies outbound traffic from all sources at priority 65530. The network firewall policy contains Rule 2 that allows inbound traffic from instances with the service=frontend tag to instances with the service=backend tag at priority 999. 
The instance-2 VM can receive traffic from instance-1 because the traffic matches the tags allowed by Rule 2. Rule 2 is matched before Rule 1 is evaluated, based on the priority value. The instance-3 VM doesn't receive traffic. The only firewall policy rule that matches this traffic is Rule 1, so outbound traffic from instance-1 is denied. Private access to Google Cloud APIs To let resources in your VPC networks or on-premises environment reach Google Cloud services, we recommend private connectivity instead of outbound internet traffic to public API endpoints. The blueprint configures Private Google Access on every subnet and creates internal endpoints with Private Service Connect to communicate with Google Cloud services. Used together, these controls allow a private path to Google Cloud services, without relying on internet outbound traffic or publicly advertised internet ranges. The blueprint configures Private Service Connect endpoints with API bundles to differentiate which services can be accessed in which network. The base network uses the all-apis bundle and can reach any Google service, and the restricted network uses the vpcsc bundle, which allows access to a limited set of services that support VPC Service Controls. For access from hosts that are located in an on-premises environment, we recommend that you use a convention of custom FQDNs for each endpoint, as described in the following table. The blueprint uses a unique Private Service Connect endpoint for each VPC network, configured for access to a different set of API bundles. Therefore, you must consider how to route service traffic from the on-premises environment to the VPC network with the correct API endpoint, and if you're using VPC Service Controls, ensure that traffic to Google Cloud services reaches the endpoint inside the intended perimeter. Configure your on-premises controls for DNS, firewalls, and routers to allow access to these endpoints, and configure on-premises hosts to use the appropriate endpoint. For more information, see access Google APIs through endpoints. The following table describes the Private Service Connect endpoints created for each network. VPC Environment API bundle Private Service Connect endpoint IP address Custom FQDN Base Common all-apis 10.17.0.1/32 c.private.googleapis.com Development all-apis 10.17.0.2/32 d.private.googleapis.com Non-production all-apis 10.17.0.3/32 n.private.googleapis.com Production all-apis 10.17.0.4/32 p.private.googleapis.com Restricted Common vpcsc 10.17.0.5/32 c.restricted.googleapis.com Development vpcsc 10.17.0.6/32 d.restricted.googleapis.com Non-production vpcsc 10.17.0.7/32 n.restricted.googleapis.com Production vpcsc 10.17.0.8/32 p.restricted.googleapis.com To ensure that traffic for Google Cloud services has a DNS lookup to the correct endpoint, the blueprint configures private DNS zones for each VPC network. The following table describes these private DNS zones. Private zone name DNS name Record type Data googleapis.com. *.googleapis.com. CNAME private.googleapis.com. (for base networks) or restricted.googleapis.com. (for restricted networks) private.googleapis.com (for base networks) or restricted.googleapis.com (for restricted networks) A The Private Service Connect endpoint IP address for that VPC network. gcr.io. *.gcr.io CNAME gcr.io. gcr.io A The Private Service Connect endpoint IP address for that VPC network. pkg.dev. *.pkg.dev. CNAME pkg.dev. pkg.dev. A The Private Service Connect endpoint IP address for that VPC network. 
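As a rough Terraform sketch of how one of these endpoints and its DNS records fit together (not the blueprint's own module code), the following example reserves the production base endpoint address from the preceding tables, creates a Private Service Connect forwarding rule that targets the all-apis bundle, and adds the corresponding private DNS zone. The project and network names are illustrative placeholders:

# Sketch only: project and network names are illustrative placeholders. The
# endpoint address and DNS records mirror the production base values in the
# preceding tables.
resource "google_compute_global_address" "psc_endpoint" {
  project      = "prj-net-p-shared-base"
  name         = "psc-googleapis-ip"
  address_type = "INTERNAL"
  purpose      = "PRIVATE_SERVICE_CONNECT"
  network      = "projects/prj-net-p-shared-base/global/networks/vpc-p-shared-base"
  address      = "10.17.0.4"
}

resource "google_compute_global_forwarding_rule" "psc_all_apis" {
  project               = "prj-net-p-shared-base"
  name                  = "pscallapis"
  target                = "all-apis"   # use "vpc-sc" for the restricted networks
  network               = google_compute_global_address.psc_endpoint.network
  ip_address            = google_compute_global_address.psc_endpoint.id
  load_balancing_scheme = ""
}

# Private zone that resolves *.googleapis.com to the endpoint for this network.
resource "google_dns_managed_zone" "googleapis" {
  project    = "prj-net-p-shared-base"
  name       = "dz-googleapis"
  dns_name   = "googleapis.com."
  visibility = "private"
  private_visibility_config {
    networks {
      network_url = google_compute_global_address.psc_endpoint.network
    }
  }
}

resource "google_dns_record_set" "googleapis_cname" {
  project      = "prj-net-p-shared-base"
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "*.googleapis.com."
  type         = "CNAME"
  ttl          = 300
  rrdatas      = ["private.googleapis.com."]
}

resource "google_dns_record_set" "googleapis_a" {
  project      = "prj-net-p-shared-base"
  managed_zone = google_dns_managed_zone.googleapis.name
  name         = "private.googleapis.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["10.17.0.4"]
}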
The blueprint has additional configurations to enforce that these Private Service Connect endpoints are used consistently. Each Shared VPC network also enforces the following: A network firewall policy rule that allows outbound traffic from all sources to the IP address of the Private Service Connect endpoint on TCP:443. A network firewall policy rule that denies outbound traffic to 0.0.0.0/0, which includes the default domains that are used for access to Google Cloud services. Internet connectivity The blueprint doesn't allow inbound or outbound traffic between its VPC networks and the internet. For workloads that require internet connectivity, you must take additional steps to design the access paths required. For workloads that require outbound traffic to the internet, we recommend that you manage outbound traffic through Cloud NAT to allow outbound traffic without unsolicited inbound connections, or through Secure Web Proxy for more granular control to allow outbound traffic to trusted web services only. For workloads that require inbound traffic from the internet, we recommend that you design your workload with Cloud Load Balancing and Google Cloud Armor to benefit from DDoS and WAF protections. We don't recommend that you design workloads that allow direct connectivity between the internet and a VM using an external IP address on the VM. Hybrid connectivity between an on-premises environment and Google Cloud To establish connectivity between the on-premises environment and Google Cloud, we recommend that you use Dedicated Interconnect to maximize security and reliability. A Dedicated Interconnect connection is a direct link between your on-premises network and Google Cloud. The following diagram introduces hybrid connectivity between the on-premises environment and a Google Virtual Private Cloud network. The diagram describes the following components of the pattern for 99.99% availability for Dedicated Interconnect: Four Dedicated Interconnect connections, with two connections in one metropolitan area (metro) and two connections in another metro. Within each metro, there are two distinct zones within the colocation facility. The connections are divided into two pairs, with each pair connected to a separate on-premises data center. VLAN attachments are used to connect each Dedicated Interconnect instance to Cloud Routers that are attached to the Shared VPC topology. Each Shared VPC network has four Cloud Routers, two in each region, with the dynamic routing mode set to global so that every Cloud Router can announce all subnets, independent of region. With global dynamic routing, Cloud Router advertises routes to all subnets in the VPC network. Cloud Router advertises routes to remote subnets (subnets outside of the Cloud Router's region) with a lower priority compared to local subnets (subnets that are in the Cloud Router's region). Optionally, you can change advertised prefixes and priorities when you configure the BGP session for a Cloud Router. Traffic from Google Cloud to an on-premises environment uses the Cloud Router closest to the cloud resources. Within a single region, multiple routes to on-premises networks have the same multi-exit discriminator (MED) value, and Google Cloud uses equal cost multi-path (ECMP) routing to distribute outbound traffic between all possible routes. On-premises configuration changes To configure connectivity between the on-premises environment and Google Cloud, you must configure additional changes in your on-premises environment. 
The Terraform code in the blueprint automatically configures Google Cloud resources but doesn't modify any of your on-premises network resources. Some of the components for hybrid connectivity from your on-premises environment to Google Cloud are automatically enabled by the blueprint, including the following: Cloud DNS is configured with DNS forwarding between all Shared VPC networks to a single hub, as described in DNS setup. A Cloud DNS server policy is configured with inbound forwarder IP addresses. Cloud Router is configured to export routes for all subnets and custom routes for the IP addresses used by the Private Service Connect endpoints. To enable hybrid connectivity, you must take the following additional steps: Order a Dedicated Interconnect connection. Configure on-premises routers and firewalls to allow outbound traffic to the internal IP address space defined in IP address allocation. Configure your on-premises DNS servers to forward DNS lookups bound for Google Cloud to the inbound forwarder IP addresses that are already configured by the blueprint. Configure your on-premises DNS servers, firewalls, and routers to accept DNS queries from the Cloud DNS forwarding zone (35.199.192.0/19). Configure your on-premises DNS servers to respond to queries from on-premises hosts to Google Cloud services with the IP addresses defined in private access to Google Cloud APIs. For encryption in transit over the Dedicated Interconnect connection, configure MACsec for Cloud Interconnect or configure HA VPN over Cloud Interconnect for IPsec encryption. For more information, see Private Google Access for on-premises hosts. What's next Read about detective controls (next document in this series). Detective controls Threat detection and monitoring capabilities are provided using a combination of built-in security controls from Security Command Center and custom solutions that let you detect and respond to security events. Centralized logging for security and audit The blueprint configures logging capabilities to track and analyze changes to your Google Cloud resources with logs that are aggregated to a single project. The following diagram shows how the blueprint aggregates logs from multiple sources in multiple projects into a centralized log sink. The diagram describes the following: Log sinks are configured at the organization node to aggregate logs from all projects in the resource hierarchy. Multiple log sinks are configured to send logs that match a filter to different destinations for storage and analytics. The prj-c-logging project contains all the resources for log storage and analytics. Optionally, you can configure additional tooling to export logs to a SIEM. The blueprint uses different log sources and includes these logs in the log sink filter so that the logs can be exported to a centralized destination. The following table describes the log sources. Log source Description Admin Activity audit logs You cannot configure, disable, or exclude Admin Activity audit logs. System Event audit logs You cannot configure, disable, or exclude System Event audit logs. Policy Denied audit logs You cannot configure or disable Policy Denied audit logs, but you can optionally exclude them with exclusion filters. 
Data Access audit logs By default, the blueprint doesn't enable data access logs because the volume and cost of these logs can be high. To determine whether you should enable data access logs, evaluate where your workloads handle sensitive data and consider whether you have a requirement to enable data access logs for each service and environment working with sensitive data. VPC Flow Logs The blueprint enables VPC Flow Logs for every subnet. The blueprint configures log sampling to sample 50% of logs to reduce cost. If you create additional subnets, you must ensure that VPC Flow Logs are enabled for each subnet. Firewall Rules Logging The blueprint enables Firewall Rules Logging for every firewall policy rule. If you create additional firewall policy rules for workloads, you must ensure that Firewall Rules Logging is enabled for each new rule. Cloud DNS logging The blueprint enables Cloud DNS logs for managed zones. If you create additional managed zones, you must enable those DNS logs. Google Workspace audit logging Requires a one-time enablement step that is not automated by the blueprint. For more information, see Share data with Google Cloud services. Access Transparency logs Requires a one-time enablement step that is not automated by the blueprint. For more information, see Enable Access Transparency. The following table describes the log sinks and how they are used with supported destinations in the blueprint. Sink Destination Purpose sk-c-logging-la Logs routed to Cloud Logging buckets with Log Analytics and a linked BigQuery dataset enabled Actively analyze logs. Run ad hoc investigations by using Logs Explorer in the console, or write SQL queries, reports, and views using the linked BigQuery dataset. sk-c-logging-bkt Logs routed to Cloud Storage Store logs long-term for compliance, audit, and incident-tracking purposes. Optionally, if you have compliance requirements for mandatory data retention, we recommend that you additionally configure Bucket Lock. sk-c-logging-pub Logs routed to Pub/Sub Export logs to an external platform such as your existing SIEM. This requires additional work to integrate with your SIEM, such as the following mechanisms: For many tools, third-party integration with Pub/Sub is the preferred method to ingest logs. For Google Security Operations, you can ingest Google Cloud data to Google Security Operations without provisioning additional infrastructure. For Splunk, you can stream logs from Google Cloud to Splunk using Dataflow. For guidance on enabling additional log types and writing log sink filters, see the log scoping tool. Threat monitoring with Security Command Center We recommend that you activate Security Command Center Premium for your organization to automatically detect threats, vulnerabilities, and misconfigurations in your Google Cloud resources. Security Command Center creates security findings from multiple sources, including the following: Security Health Analytics: detects common vulnerabilities and misconfigurations across Google Cloud resources. Attack path exposure: shows a simulated path of how an attacker could exploit your high-value resources, based on the vulnerabilities and misconfigurations that are detected by other Security Command Center sources. Event Threat Detection: applies detection logic and proprietary threat intelligence against your logs to identify threats in near-real time. Container Threat Detection: detects common container runtime attacks. 
Virtual Machine Threat Detection: detects potentially malicious applications that are running on virtual machines. Web Security Scanner: scans for OWASP Top Ten vulnerabilities in your web-facing applications on Compute Engine, App Engine, or Google Kubernetes Engine. For more information on the vulnerabilities and threats addressed by Security Command Center, see Security Command Center sources. You must activate Security Command Center after you deploy the blueprint. For instructions, see Activate Security Command Center for an organization. After you activate Security Command Center, we recommend that you export the findings that are produced by Security Command Center to your existing tools or processes for triaging and responding to threats. The blueprint creates the prj-c-scc project with a Pub/Sub topic to be used for this integration. Depending on your existing tools, use one of the following methods to export findings: If you use the console to manage security findings directly in Security Command Center, configure folder-level and project-level roles for Security Command Center to let teams view and manage security findings just for the projects for which they are responsible. If you use Google SecOps as your SIEM, ingest Google Cloud data to Google SecOps. If you use a SIEM or SOAR tool with integrations to Security Command Center, share data with Cortex XSOAR, Elastic Stack, ServiceNow, Splunk, or QRadar. If you use an external tool that can ingest findings from Pub/Sub, configure continuous exports to Pub/Sub and configure your existing tools to ingest findings from the Pub/Sub topic. Custom solution for automated log analysis You might have requirements to create alerts for security events that are based on custom queries against logs. Custom queries can help supplement the capabilities of your SIEM by analyzing logs on Google Cloud and exporting only the events that merit investigation, especially if you don't have the capacity to export all cloud logs to your SIEM. The blueprint helps enable this log analysis by setting up a centralized source of logs that you can query using a linked BigQuery dataset. To automate this capability, you must implement the code sample at bq-log-alerting and extend the foundation capabilities. The sample code lets you regularly query a log source and send a custom finding to Security Command Center. The following diagram introduces the high-level flow of the automated log analysis. The diagram shows the following concepts of automated log analysis: Logs from various sources are aggregated into a centralized logs bucket with log analytics and a linked BigQuery dataset. BigQuery views are configured to query logs for the security event that you want to monitor. Cloud Scheduler pushes an event to a Pub/Sub topic every 15 minutes and triggers Cloud Run functions. Cloud Run functions queries the views for new events. If it finds events, it pushes them to Security Command Center as custom findings. Security Command Center publishes notifications about new findings to another Pub/Sub topic. An external tool such as a SIEM subscribes to the Pub/Sub topic to ingest new findings. The sample has several use cases to query for potentially suspicious behavior. Examples include a login from a list of super admins or other highly privileged accounts that you specify, changes to logging settings, or changes to network routes. You can extend the use cases by writing new query views for your requirements. 
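For reference, a minimal Terraform sketch of the scheduled trigger in this flow might look like the following. The project, resource names, region, and schedule are illustrative assumptions, and the bq-log-alerting sample defines its own resources:

# Sketch only: a Cloud Scheduler job that publishes to a Pub/Sub topic every
# 15 minutes to trigger the log-analysis function. Names, project, and region
# are illustrative placeholders.
resource "google_pubsub_topic" "log_alerting_trigger" {
  project = "prj-c-logging"
  name    = "top-log-alerting-trigger"
}

resource "google_cloud_scheduler_job" "log_alerting_schedule" {
  project  = "prj-c-logging"
  region   = "us-central1"        # illustrative region
  name     = "sch-log-alerting"
  schedule = "*/15 * * * *"       # every 15 minutes

  pubsub_target {
    topic_name = google_pubsub_topic.log_alerting_trigger.id
    data       = base64encode("run")
  }
}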
Write your own queries or reference security log analytics for a library of SQL queries to help you analyze Google Cloud logs. Custom solution to respond to asset changes To respond to events in real time, we recommend that you use Cloud Asset Inventory to monitor asset changes. In this custom solution, an asset feed is configured to trigger notifications to Pub/Sub about changes to resources in real time, and then Cloud Run functions runs custom code to enforce your own business logic based on whether the change should be allowed. The blueprint has an example of this custom governance solution that monitors for IAM changes that add highly sensitive roles including Organization Admin, Owner, and Editor. The following diagram describes this solution. The previous diagram shows these concepts: Changes are made to an allow policy. The Cloud Asset Inventory feed sends a real-time notification about the allow policy change to Pub/Sub. Pub/Sub triggers a function. Cloud Run functions runs custom code to enforce your policy. The example function has logic to assess if the change has added the Organization Admin, Owner, or Editor roles to an allow policy. If so, the function creates a custom security finding and sends it to Security Command Center. Optionally, you can use this model to automate remediation efforts. Write additional business logic in Cloud Run functions to automatically take action on the finding, such as reverting the allow policy to its previous state. In addition, you can extend the infrastructure and logic used by this sample solution to add custom responses to other events that are important to your business. What's next Read about preventative controls (next document in this series). Preventative controls for acceptable resource configurations We recommend that you define policy constraints that enforce acceptable resource configurations and prevent risky configurations. The blueprint uses a combination of organization policy constraints and infrastructure-as-code (IaC) validation in your pipeline. These controls prevent the creation of resources that don't meet your policy guidelines. Enforcing these controls early in the design and build of your workloads helps you to avoid remediation work later. Organization policy constraints The Organization Policy service enforces constraints to ensure that certain resource configurations can't be created in your Google Cloud organization, even by someone with a sufficiently privileged IAM role. The blueprint enforces policies at the organization node so that these controls are inherited by all folders and projects within the organization. This bundle of policies is designed to prevent certain high-risk configurations, such as exposing a VM to the public internet or granting public access to storage buckets, unless you deliberately allow an exception to the policy. The following table introduces the organization policy constraints that are implemented in the blueprint: Organization policy constraint Description compute.disableNestedVirtualization Nested virtualization on Compute Engine VMs can evade monitoring and other security tools for your VMs if poorly configured. This constraint prevents the creation of nested virtualization. compute.disableSerialPortAccess IAM roles like compute.instanceAdmin allow privileged access to an instance's serial port using SSH keys. If the SSH key is exposed, an attacker could access the serial port and bypass network and firewall controls. This constraint prevents serial port access. 
compute.disableVpcExternalIpv6 External IPv6 subnets can be exposed to unauthorized internet access if they are poorly configured. This constraint prevents the creation of external IPv6 subnets. compute.requireOsLogin The default behavior of setting SSH keys in metadata can allow unauthorized remote access to VMs if keys are exposed. This constraint enforces the use of OS Login instead of metadata-based SSH keys. compute.restrictProtocolForwardingCreationForTypes VM protocol forwarding for external IP addresses can lead to unauthorized internet egress if forwarding is poorly configured. This constraint allows VM protocol forwarding for internal addresses only. compute.restrictXpnProjectLienRemoval Deleting a Shared VPC host project can be disruptive to all the service projects that use networking resources. This constraint prevents accidental or malicious deletion of the Shared VPC host projects by preventing the removal of the project lien on these projects. compute.setNewProjectDefaultToZonalDNSOnly A legacy setting for global (project-wide) internal DNS is not recommended because it reduces service availability. This constraint prevents the use of the legacy setting. compute.skipDefaultNetworkCreation A default VPC network and overly permissive default VPC firewall rules are created in every new project that enables the Compute Engine API. This constraint skips the creation of the default network and default VPC firewall rules. compute.vmExternalIpAccess By default, a VM is created with an external IPv4 address that can lead to unauthorized internet access. This constraint configures an empty allowlist of external IP addresses that the VM can use and denies all others. essentialcontacts.allowedContactDomains By default, Essential Contacts can be configured to send notifications about your domain to any other domain. This constraint enforces that only email addresses in approved domains can be set as recipients for Essential Contacts. iam.allowedPolicyMemberDomains By default, allow policies can be granted to any Google Account, including unmanaged accounts, and accounts belonging to external organizations. This constraint ensures that allow policies in your organization can only be granted to managed accounts from your own domain. Optionally, you can allow additional domains. iam.automaticIamGrantsForDefaultServiceAccounts By default, default service accounts are automatically granted overly permissive roles. This constraint prevents the automatic IAM role grants to default service accounts. iam.disableServiceAccountKeyCreation Service account keys are a high-risk persistent credential, and in most cases a more secure alternative to service account keys can be used. This constraint prevents the creation of service account keys. iam.disableServiceAccountKeyUpload Uploading service account key material can increase risk if key material is exposed. This constraint prevents the uploading of service account keys. sql.restrictAuthorizedNetworks Cloud SQL instances can be exposed to unauthenticated internet access if the instances are configured to use authorized networks without a Cloud SQL Auth Proxy. This policy prevents the configuration of authorized networks for database access and forces the use of the Cloud SQL Auth Proxy instead. sql.restrictPublicIp Cloud SQL instances can be exposed to unauthenticated internet access if the instances are created with public IP addresses. This constraint prevents public IP addresses on Cloud SQL instances. 
storage.uniformBucketLevelAccess By default, objects in Cloud Storage can be accessed through legacy Access Control Lists (ACLs) instead of IAM, which can lead to inconsistent access controls and accidental exposure if misconfigured. Legacy ACL access is not affected by the iam.allowedPolicyMemberDomains constraint. This constraint enforces that access can only be configured through IAM uniform bucket-level access, not legacy ACLs. storage.publicAccessPrevention Cloud Storage buckets can be exposed to unauthenticated internet access if misconfigured. This constraint prevents ACLs and IAM permissions that grant access to allUsers and allAuthenticatedUsers. These policies are a starting point that we recommend for most customers and most scenarios, but you might need to modify organization policy constraints to accommodate certain workload types. For example, a workload that uses a Cloud Storage bucket as the backend for Cloud CDN to host public resources is blocked by storage.publicAccessPrevention, or a public-facing Cloud Run app that doesn't require authentication is blocked by iam.allowedPolicyMemberDomains. In these cases, modify the organization policy at the folder or project level to allow a narrow exception. You can also conditionally add constraints to organization policy by defining a tag that grants an exception or enforcement for policy, then applying the tag to projects and folders. For additional constraints, see available constraints and custom constraints. Pre-deployment validation of infrastructure-as-code The blueprint uses a GitOps approach to manage infrastructure, meaning that all infrastructure changes are implemented through version-controlled infrastructure-as-code (IaC) and can be validated before deploying. The policies enforced in the blueprint define acceptable resource configurations that can be deployed by your pipeline. If code that is submitted to your GitHub repository does not pass the policy checks, no resources are deployed. For information on how pipelines are used and how controls are enforced through CI/CD automation, see deployment methodology. What's next Read about deployment methodology (next document in this series) Deployment methodology We recommend that you use declarative infrastructure to deploy your foundation in a consistent and controllable manner. This approach helps enable consistent governance by enforcing policy controls about acceptable resource configurations into your pipelines. The blueprint is deployed using a GitOps flow, with Terraform used to define infrastructure as code (IaC), a Git repository for version control and approval of code, and Cloud Build for CI/CD automation in the deployment pipeline. For an introduction to this concept, see managing infrastructure as code with Terraform, Cloud Build, and GitOps. The following sections describe how the deployment pipeline is used to manage resources in your organization. Pipeline layers To separate the teams and technology stack that are responsible for managing different layers of your environment, we recommend a model that uses different pipelines and different personas that are responsible for each layer of the stack. The following diagram introduces our recommended model for separating a foundation pipeline, infrastructure pipeline, and application pipeline. The diagram introduces the pipeline layers in this model: The foundation pipeline deploys the foundation resources that are used across the platform. 
We recommend that a single central team is responsible for managing the foundation resources that are consumed by multiple business units and workloads. The infrastructure pipeline deploys projects and infrastructure that are used by workloads, such as VM instances or databases. The blueprint sets up a separate infrastructure pipeline for each business unit, or you might prefer a single infrastructure pipeline used by multiple teams. The application pipeline deploys the artifacts for each workload, such as containers or images. You might have many different application teams with individual application pipelines. The following sections introduce the usage of each pipeline layer. The foundation pipeline The foundation pipeline deploys the foundation resources. It also sets up the infrastructure pipeline that is used to deploy infrastructure used by workloads. To create the foundation pipeline, you first clone or fork the terraform-example-foundation repository to your own Git repository. Follow the steps in the 0-bootstrap README file to configure your bootstrap folder and resources. Stage Description 0-bootstrap Bootstraps a Google Cloud organization. This step also configures a CI/CD pipeline for the blueprint code in subsequent stages. The CICD project contains the Cloud Build foundation pipeline for deploying resources. The seed project includes the Cloud Storage buckets that contain the Terraform state of the foundation infrastructure and includes highly privileged service accounts that are used by the foundation pipeline to create resources. The Terraform state is protected through Cloud Storage Object Versioning. When the CI/CD pipeline runs, it acts as the service accounts that are managed in the seed project. After you create the foundation pipeline in the 0-bootstrap stage, the following stages deploy resources on the foundation pipeline. Review the README directions for each stage and implement each stage sequentially. Stage Description 1-org Sets up top-level shared folders, projects for shared services, organization-level logging, and baseline security settings through organization policies. 2-environments Sets up development, non-production, and production environments within the Google Cloud organization that you've created. 3-networks-dual-svpc or 3-networks-hub-and-spoke Sets up shared VPCs in your chosen topology and the associated network resources. The infrastructure pipeline The infrastructure pipeline deploys the projects and infrastructure (for example, the VM instances and databases) that are used by workloads. The foundation pipeline deploys multiple infrastructure pipelines. This separation between the foundation pipeline and infrastructure pipeline allows for a separation between platform-wide resources and workload-specific resources. The following diagram describes how the blueprint configures multiple infrastructure pipelines that are intended for use by separate teams. The diagram describes the following key concepts: Each infrastructure pipeline is used to manage infrastructure resources independently of the foundation resources. Each business unit has its own infrastructure pipeline, managed in a dedicated project in the common folder. Each of the infrastructure pipelines has a service account with permission to deploy resources only to the projects that are associated with that business unit. 
This strategy creates a separation of duties between the privileged service accounts used for the foundation pipeline and those used by each infrastructure pipeline. This approach with multiple infrastructure pipelines is recommended when you have multiple entities inside your organization that have the skills and appetite to manage their infrastructure separately, particularly if they have different requirements such as the types of pipeline validation policy they want to enforce. Alternatively, you might prefer to have a single infrastructure pipeline managed by a single team with consistent validation policies. In the terraform-example-foundation, stage 4 configures an infrastructure pipeline, and stage 5 demonstrates an example of using that pipeline to deploy infrastructure resources. Stage Description 4-projects Sets up a folder structure, projects, and an infrastructure pipeline. 5-app-infra (optional) Deploys workload projects with a Compute Engine instance using the infrastructure pipeline as an example. The application pipeline The application pipeline is responsible for deploying application artifacts for each individual workload, such as images or Kubernetes containers that run the business logic of your application. These artifacts are deployed to infrastructure resources that were deployed by your infrastructure pipeline. The enterprise foundation blueprint sets up your foundation pipeline and infrastructure pipeline, but doesn't deploy an application pipeline. For an example application pipeline, see the enterprise application blueprint. Automating your pipeline with Cloud Build The blueprint uses Cloud Build to automate CI/CD processes. The following table describes the controls that are built into the foundation pipeline and infrastructure pipeline that are deployed by the terraform-example-foundation repository. If you are developing your own pipelines using other CI/CD automation tools, we recommend that you apply similar controls. Control Description Separate build configurations to validate code before deploying The blueprint uses two Cloud Build build configuration files for the entire pipeline, and each repository that is associated with a stage has two Cloud Build triggers that are associated with those build configuration files. When code is pushed to a repository branch, the build configuration files are triggered to first run cloudbuild-tf-plan.yaml, which validates your code with policy checks and Terraform plan against that branch, then cloudbuild-tf-apply.yaml runs terraform apply on the outcome of that plan. Terraform policy checks The blueprint includes a set of Open Policy Agent constraints that are enforced by the policy validation in the Google Cloud CLI. These constraints define the acceptable resource configurations that can be deployed by your pipeline. If a build doesn't meet policy in the first build configuration, then the second build configuration doesn't deploy any resources. The policies enforced in the blueprint are forked from GoogleCloudPlatform/policy-library on GitHub. You can write additional policies for the library to enforce custom policies to meet your requirements. Principle of least privilege The foundation pipeline has a different service account for each stage with an allow policy that grants only the minimum IAM roles for that stage. Each Cloud Build trigger runs as the specific service account for that stage. 
Using different accounts helps mitigate the risk that modifying one repository could impact the resources that are managed by another repository. To understand the particular IAM roles applied to each service account, see the sa.tf Terraform code in the bootstrap stage. Cloud Build private pools The blueprint uses Cloud Build private pools. Private pools let you optionally enforce additional controls such as restricting access to public repositories or running Cloud Build inside a VPC Service Controls perimeter. Cloud Build custom builders The blueprint creates its own custom builder to run Terraform. For more information, see 0-bootstrap/Dockerfile. This control enforces that the pipeline consistently runs with a known set of libraries at pinned versions. Deployment approval Optionally, you can add a manual approval stage to Cloud Build. This approval adds an additional checkpoint after the build is triggered but before it runs so that a privileged user can manually approve the build. Branching strategy We recommend a persistent branch strategy for submitting code to your Git system and deploying resources through the foundation pipeline. The following diagram describes the persistent branch strategy. The diagram describes three persistent branches in Git (development, non-production, and production) that reflect the corresponding Google Cloud environments. There are also multiple ephemeral feature branches that don't correspond to resources that are deployed in your Google Cloud environments. We recommend that you enforce a pull request (PR) process into your Git system so that any code that is merged to a persistent branch has an approved PR. To develop code with this persistent branch strategy, follow these high-level steps: When you're developing new capabilities or working on a bug fix, create a new branch based off of the development branch. Use a naming convention for your branch that includes the type of change, a ticket number or other identifier, and a human-readable description, like feature/123456-org-policies. When you complete the work in the feature branch, open a PR that targets the development branch. When you submit the PR, the PR triggers the foundation pipeline to perform terraform plan and terraform validate to stage and verify the changes. After you validate the changes to the code, merge the feature or bug fix into the development branch. The merge process triggers the foundation pipeline to run terraform apply to deploy the latest changes in the development branch to the development environment. Review the changes in the development environment using any manual reviews, functional tests, or end-to-end tests that are relevant to your use case. Then promote changes to the non-production environment by opening a PR that targets the non-production branch and merge your changes. To deploy resources to the production environment, repeat the same process as step 6: review and validate the deployed resources, open a PR to the production branch, and merge. What's next Read about operations best practices (next document in this series). Operations best practices This section introduces operations that you must consider as you deploy and operate additional workloads into your Google Cloud environment. This section isn't intended to be exhaustive of all operations in your cloud environment, but introduces decisions related to the architectural recommendations and resources deployed by the blueprint. 
Update foundation resources Although the blueprint provides an opinionated starting point for your foundation environment, your foundation requirements might grow over time. After your initial deployment, you might adjust configuration settings or build new shared services to be consumed by all workloads. To modify foundation resources, we recommend that you make all changes through the foundation pipeline. Review the branching strategy for an introduction to the flow of writing code, merging it, and triggering the deployment pipelines. Decide attributes for new workload projects When creating new projects through the project factory module of the automation pipeline, you must configure various attributes. Your process to design and create projects for new workloads should include decisions for the following: Which Google Cloud APIs to enable Which Shared VPC to use, or whether to create a new VPC network Which IAM roles to create for the initial project-service-account that is created by the pipeline Which project labels to apply The folder that the project is deployed to Which billing account to use Whether to add the project to a VPC Service Controls perimeter Whether to configure a budget and billing alert threshold for the project For a complete reference of the configurable attributes for each project, see the input variables for the project factory in the automation pipeline. Manage permissions at scale When you deploy workload projects on top of your foundation, you must consider how you will grant access to the intended developers and consumers of those projects. We recommend that you add users into a group that is managed by your existing identity provider, synchronize the groups with Cloud Identity, and then apply IAM roles to the groups. Always keep in mind the principle of least privilege. We also recommend that you use IAM recommender to identify allow policies that grant over-privileged roles. Design a process to periodically review recommendations or automatically apply recommendations into your deployment pipelines. Coordinate changes between the networking team and the application team The network topologies that are deployed by the blueprint assume that you have a team responsible for managing network resources, and separate teams responsible for deploying workload infrastructure resources. As the workload teams deploy infrastructure, they must create firewall rules to allow the intended access paths between components of their workload, but they don't have permission to modify the network firewall policies themselves. Plan how teams will work together to coordinate the changes to the centralized networking resources that are needed to deploy applications. For example, you might design a process where a workload team requests tags for their applications. The networking team then creates the tags and adds rules to the network firewall policy that allows traffic to flow between resources with the tags, and delegates the IAM roles to use the tags to the workload team. Optimize your environment with the Active Assist portfolio In addition to IAM recommender, Google Cloud provides the Active Assist portfolio of services to make recommendations about how to optimize your environment. For example, firewall insights or the unattended project recommender provide actionable recommendations that can help tighten your security posture. Design a process to periodically review recommendations or automatically apply recommendations into your deployment pipelines. 
Decide which recommendations should be managed by a central team and which should be the responsibility of workload owners, and apply IAM roles to access the recommendations accordingly. Grant exceptions to organization policies The blueprint enforces a set of organization policy constraints that are recommended to most customers in most scenarios, but you might have legitimate use cases that require limited exceptions to the organization policies you enforce broadly. For example, the blueprint enforces the iam.disableServiceAccountKeyCreation constraint. This constraint is an important security control because a leaked service account key can have a significant negative impact, and most scenarios should use more secure alternatives to service account keys to authenticate. However, there might be use cases that can only authenticate with a service account key, such as an on-premises server that requires access to Google Cloud services and cannot use workload identity federation. In this scenario, you might decide to allow an exception to the policy, so long as additional compensating controls like best practices for managing service account keys are enforced. Therefore, you should design a process for workloads to request an exception to policies, and ensure that the decision makers who are responsible for granting exceptions have the technical knowledge to validate the use case and consult on whether additional controls must be in place to compensate. When you grant an exception to a workload, modify the organization policy constraint as narrowly as possible. You can also conditionally add constraints to an organization policy by defining a tag that grants an exception or enforcement for policy, then applying the tag to projects and folders. Protect your resources with VPC Service Controls The blueprint helps prepare your environment for VPC Service Controls by separating the base and restricted networks. However, by default, the Terraform code doesn't enable VPC Service Controls because this enablement can be a disruptive process. A perimeter denies access to restricted Google Cloud services from traffic that originates outside the perimeter, which includes the console, developer workstations, and the foundation pipeline used to deploy resources. If you use VPC Service Controls, you must design exceptions to the perimeter that allow the access paths that you intend. A VPC Service Controls perimeter is intended for exfiltration controls between your Google Cloud organization and external sources. The perimeter isn't intended to replace or duplicate allow policies for granular access control to individual projects or resources. When you design and architect a perimeter, we recommend using a common unified perimeter for lower management overhead. If you must design multiple perimeters to granularly control service traffic within your Google Cloud organization, we recommend that you clearly define the threats that are addressed by a more complex perimeter structure and the access paths between perimeters that are needed for intended operations. To adopt VPC Service Controls, evaluate the following: Which of your use cases require VPC Service Controls. Whether the required Google Cloud services support VPC Service Controls. How to configure breakglass access to modify the perimeter in case it disrupts your automation pipelines. How to use best practices for enabling VPC Service Controls to design and implement your perimeter. 
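Before you enable a perimeter, it can help to see the shape of the configuration. The following is a minimal Terraform sketch, with placeholder organization and project numbers and an illustrative list of restricted services, of one way a single perimeter might be defined; the blueprint's own optional variables and modules are the authoritative starting point.

resource "google_access_context_manager_access_policy" "default" {
  parent = "organizations/123456789012"    # placeholder organization ID
  title  = "example-access-policy"
}

resource "google_access_context_manager_service_perimeter" "restricted" {
  parent = "accessPolicies/${google_access_context_manager_access_policy.default.name}"
  name   = "accessPolicies/${google_access_context_manager_access_policy.default.name}/servicePerimeters/example_perimeter"
  title  = "example_perimeter"

  status {
    resources           = ["projects/1111111111"]    # placeholder project number inside the perimeter
    restricted_services = [
      "storage.googleapis.com",                      # illustrative set of restricted services
      "bigquery.googleapis.com",
    ]
  }
}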
After the perimeter is enabled, we recommend that you design a process to consistently add new projects to the correct perimeter, and a process to design exceptions when developers have a new use case that is denied by your current perimeter configuration. Test organization-wide changes in a separate organization We recommend that you never deploy changes to production without testing. For workload resources, this approach is facilitated by separate environments for development, non-production, and production. However, some resources at the organization don't have separate environments to facilitate testing. For changes at the organization-level, or other changes that can affect production environments like the configuration between your identity provider and Cloud Identity, consider creating a separate organization for test purposes. Control remote access to virtual machines Because we recommend that you deploy immutable infrastructure through the foundation pipeline, infrastructure pipeline, and application pipeline, we also recommend that you only grant developers direct access to a virtual machine through SSH or RDP for limited or exceptional use cases. For scenarios that require remote access, we recommend that you manage user access using OS Login where possible. This approach uses managed Google Cloud services to enforce access control, account lifecycle management, two-step verification, and audit logging. Alternatively, if you must allow access through SSH keys in metadata or RDP credentials, it is your responsibility to manage the credential lifecycle and store credentials securely outside of Google Cloud. In any scenario, a user with SSH or RDP access to a VM can be a privilege escalation risk, so you should design your access model with this in mind. The user can run code on that VM with the privileges of the associated service account or query the metadata server to view the access token that is used to authenticate API requests. This access can then be a privilege escalation if you didn't deliberately intend for the user to operate with the privileges of the service account. Mitigate overspending by planning budget alerts The blueprint implements best practices introduced in the Google Cloud Architecture Framework: Cost Optimization for managing cost, including the following: Use a single billing account across all projects in the enterprise foundation. Assign each project a billingcode metadata label that is used to allocate cost between cost centers. Set budgets and alert thresholds. It's your responsibility to plan budgets and configure billing alerts. The blueprint creates budget alerts for workload projects when the forecasted spending is on track to reach 120% of the budget. This approach lets a central team identify and mitigate incidents of significant overspending. Significant unexpected increases in spending without a clear cause can be an indicator of a security incident and should be investigated from the perspectives of both cost control and security. Note: Budget alerts cover a different type of notification than the Billing category of Essential Contacts. Budget alerts are related to the consumption of budgets that you define for each project. Billing notifications from Essential Contacts are related to pricing updates, errors, and credits. Depending on your use case, you might set a budget that is based on the cost of an entire environment folder, or all projects related to a certain cost center, instead of setting granular budgets for each project. 
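The following is a minimal Terraform sketch of a budget of the kind described here, alerting when forecasted spend reaches 120% of the budgeted amount. The billing account ID, project number, and amount are placeholders; adjust the scope (project, folder, or cost center) to match your own approach.

resource "google_billing_budget" "workload_budget" {
  billing_account = "000000-AAAAAA-BBBBBB"          # placeholder billing account ID
  display_name    = "prj-example-workload-budget"

  budget_filter {
    projects = ["projects/1111111111"]              # placeholder project number
  }

  amount {
    specified_amount {
      currency_code = "USD"
      units         = "1000"                        # example monthly budget of 1,000 USD
    }
  }

  threshold_rules {
    threshold_percent = 1.2                         # alert when forecasted spend reaches 120% of the budget
    spend_basis       = "FORECASTED_SPEND"
  }
}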
We also recommend that you delegate budget and alert setting to workload owners who might set more granular alerting threshold for their day-to-day monitoring. For guidance on building FinOps capabilities, including forecasting budgets for workloads, see Getting started with FinOps on Google Cloud. Allocate costs between internal cost centers The console lets you view your billing reports to view and forecast cost in multiple dimensions. In addition to the prebuilt reports, we recommend that you export billing data to a BigQuery dataset in the prj-c-billing-export project. The exported billing records allow you to allocate cost on custom dimensions, such as your internal cost centers, based on project label metadata like billingcode. The following SQL query is a sample query to understand costs for all projects that are grouped by the billingcode project label. #standardSQL SELECT (SELECT value from UNNEST(labels) where key = 'billingcode') AS costcenter, service.description AS description, SUM(cost) AS charges, SUM((SELECT SUM(amount) FROM UNNEST(credits))) AS credits FROM PROJECT_ID.DATASET_ID.TABLE_NAME GROUP BY costcenter, description ORDER BY costcenter ASC, description ASC To set up this export, see export Cloud Billing data to BigQuery. If you require internal accounting or chargeback between cost centers, it's your responsibility to incorporate the data that is obtained from this query into your internal processes. Ingest findings from detective controls into your existing SIEM Although the foundation resources help you configure aggregated destinations for audit logs and security findings, it is your responsibility to decide how to consume and use these signals. If you have a requirement to aggregate logs across all cloud and on-premise environments into an existing SIEM, decide how to ingest logs from the prj-c-logging project and findings from Security Command Center into your existing tools and processes. You might create a single export for all logs and findings if a single team is responsible for monitoring security across your entire environment, or you might create multiple exports filtered to the set of logs and findings needed for multiple teams with different responsibilities. Alternatively, if log volume and cost are prohibitive, you might avoid duplication by retaining Google Cloud logs and findings only in Google Cloud. In this scenario, ensure that your existing teams have the right access and training to work with logs and findings directly in Google Cloud. For audit logs, design log views to grant access to a subset of logs in your centralized logs bucket to individual teams, instead of duplicating logs to multiple buckets which increases log storage cost. For security findings, grant folder-level and project-level roles for Security Command Center to let teams view and manage security findings just for the projects for which they are responsible, directly in the console. Continuously develop your controls library The blueprint starts with a baseline of controls to detect and prevent threats. We recommend that you review these controls and add additional controls based on your requirements. The following table summarizes the mechanisms to enforce governance policies and how to extend these for your additional requirements: Policy controls enforced by the blueprint Guidance to extend these controls Security Command Center detects vulnerabilities and threats from multiple security sources. 
Define custom modules for Security Health Analytics and custom modules for Event Threat Detection. The Organization Policy service enforces a recommended set of organization policy constraints on Google Cloud services. Enforce additional constraints from the premade list of available constraints or create custom constraints. Open Policy Agent (OPA) policy validates code in the foundation pipeline for acceptable configurations before deployment. Develop additional constraints based on the guidance at GoogleCloudPlatform/policy-library. Alerting on log-based metrics and performance metrics configures log-based metrics to alert on changes to IAM policies and configurations of some sensitive resources. Design additional log-based metrics and alerting policies for log events that you expect shouldn't occur in your environment. A custom solution for automated log analysis regularly queries logs for suspicious activity and creates Security Command Center findings. Write additional queries to create findings for security events that you want to monitor, using security log analytics as a reference. A custom solution to respond to asset changes creates Security Command Center findings and can automate remediation actions. Create additional Cloud Asset Inventory feeds to monitor changes for particular asset types and write additional Cloud Run functions with custom logic to respond to policy violations. These controls might evolve as your requirements and maturity on Google Cloud change. Manage encryption keys with Cloud Key Management Service Google Cloud provides default encryption at rest for all customer content, but also provides Cloud Key Management Service (Cloud KMS) to provide you additional control over your encryption keys for data at rest. We recommend that you evaluate whether the default encryption is sufficient, or whether you have a compliance requirement that you must use Cloud KMS to manage keys yourself. For more information, see decide how to meet compliance requirements for encryption at rest. The blueprint provides a prj-c-kms project in the common folder and a prj-{env}-kms project in each environment folder for managing encryption keys centrally. This approach lets a central team audit and manage encryption keys that are used by resources in workload projects, in order to meet regulatory and compliance requirements. Depending on your operational model, you might prefer a single centralized project instance of Cloud KMS under the control of a single team, you might prefer to manage encryption keys separately in each environment, or you might prefer multiple distributed instances so that accountability for encryption keys can be delegated to the appropriate teams. Modify the Terraform code sample as needed to fit your operational model. Optionally, you can enforce customer-managed encryption keys (CMEK) organization policies to enforce that certain resource types always require a CMEK key and that only CMEK keys from an allowlist of trusted projects can be used. Store and audit application credentials with Secret Manager We recommend that you never commit sensitive secrets (such as API keys, passwords, and private certificates) to source code repositories. Instead, commit the secret to Secret Manager and grant the Secret Manager Secret Accessor IAM role to the user or service account that needs to access the secret. We recommend that you grant the IAM role to an individual secret, not to all secrets in the project. 
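As a minimal sketch of that recommendation, the following Terraform grants the accessor role on a single secret to one service account; the secret ID and service account are hypothetical.

resource "google_secret_manager_secret" "db_password" {
  secret_id = "example-db-password"    # hypothetical secret name
  replication {
    auto {}
  }
}

resource "google_secret_manager_secret_iam_member" "app_accessor" {
  secret_id = google_secret_manager_secret.db_password.secret_id
  role      = "roles/secretmanager.secretAccessor"
  member    = "serviceAccount:app-backend@prj-d-example.iam.gserviceaccount.com"    # hypothetical workload service account
}

The secret value itself would be added as a secret version, ideally generated within the pipeline as recommended next, rather than committed to source control.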
When possible, you should generate production secrets automatically within the CI/CD pipelines and keep them inaccessible to human users except in breakglass situations. In this scenario, ensure that you don't grant IAM roles to view these secrets to any users or groups. The blueprint provides a single prj-c-secrets project in the common folder and a prj-{env}-secrets project in each environment folder for managing secrets centrally. This approach lets a central team audit and manage secrets used by applications in order to meet regulatory and compliance requirements. Depending on your operational model, you might prefer a single centralized instance of Secret Manager under the control of a single team, or you might prefer to manage secrets separately in each environment, or you might prefer multiple distributed instances of Secret Manager so that each workload team can manage their own secrets. Modify the Terraform code sample as needed to fit your operational model. Plan breakglass access to highly privileged accounts Although we recommend that changes to foundation resources are managed through version-controlled IaC that is deployed by the foundation pipeline, you might have exceptional or emergency scenarios that require privileged access to modify your environment directly. We recommend that you plan for breakglass accounts (sometimes called firecall or emergency accounts) that have highly privileged access to your environment in case of an emergency or when the automation processes break down. The following table describes some example purposes of breakglass accounts. Breakglass purpose Description Super admin Emergency access to the Super admin role used with Cloud Identity, to, for example, fix issues that are related to identity federation or multi-factor authentication (MFA). Organization administrator Emergency access to the Organization Administrator role, which can then grant access to any other IAM role in the organization. Foundation pipeline administrator Emergency access to modify the resources in your CICD project on Google Cloud and external Git repository in case the automation of the foundation pipeline breaks down. Operations or SRE An operations or SRE team needs privileged access to respond to outages or incidents. This can include tasks like restarting VMs or restoring data. Your mechanism to permit breakglass access depends on the existing tools and procedures you have in place, but a few example mechanisms include the following: Use your existing tools for privileged access management to temporarily add a user to a group that is predefined with highly-privileged IAM roles or use the credentials of a highly-privileged account. Pre-provision accounts intended only for administrator usage. For example, developer Dana might have an identity dana@example.com for daily use and admin-dana@example.com for breakglass access. Use an application like just-in-time privileged access that allows a developer to self-escalate to more privileged roles. Regardless of the mechanism you use, consider how you operationally address the following questions: How do you design the scope and granularity of breakglass access? For example, you might design a different breakglass mechanism for different business units to ensure that they cannot disrupt each other. How does your mechanism prevent abuse? Do you require approvals? For example, you might have split operations where one person holds credentials and one person holds the MFA token. How do you audit and alert on breakglass access? 
For example, you might configure a custom Event Threat Detection module to create a security finding when a predefined breakglass account is used. How do you remove the breakglass access and resume normal operations after the incident is over? For common privilege escalation tasks and rolling back changes, we recommend designing automated workflows where a user can perform the operation without requiring privilege escalation for their user identity. This approach can help reduce human error and improve security. For systems that require regular intervention, automating the fix might be the best solution. Google encourages customers to adopt a zero-touch production approach to make all production changes using automation, safe proxies, or audited breakglass. Google provides the SRE books for customers who are looking to adopt Google's SRE approach. What's next Read Deploy the blueprint (next document in this series). Deploy the blueprint This section describes the process that you can use to deploy the blueprint, its naming conventions, and alternatives to blueprint recommendations. Bringing it all together To deploy your own enterprise foundation in alignment with the best practices and recommendations from this blueprint, follow the high-level tasks summarized in this section. Deployment requires a combination of prerequisite setup steps, automated deployment through the terraform-example-foundation on GitHub, and additional steps that must be configured manually after the initial foundation deployment is complete. Process Steps Prerequisites before deploying the foundation pipeline resources Complete the following steps before you deploy the foundation pipeline: Create a Cloud Identity account and verify domain ownership. Apply for an invoiced billing account with your Google Cloud sales team or create a self-service billing account. Enforce security best practices for administrator accounts. Verify and reconcile issues with consumer user accounts. Configure your external identity provider as the source of truth for synchronizing user accounts and SSO. Provision the groups for access control that are required to run the blueprint. Determine the network topology that you will use. Decide your source code management tool. The instructions for terraform-example-foundation are written for a Git repository that is hosted in Cloud Source Repositories. Decide your CI/CD automation tools. The terraform-example-foundation provides different sets of directions for different automation tools. To connect to an existing on-premises environment, prepare the following: Plan your IP address allocation based on the number and size of ranges that are required by the blueprint. Order your Dedicated Interconnect connections. Steps to deploy the terraform-example-foundation from GitHub Follow the README directions for each stage to deploy the terraform-example-foundation from GitHub: Stage 0-bootstrap to create a foundation pipeline. If using a self-service billing account, you must request additional project quota before proceeding to the next stage. Stage 1-org to configure organization-level resources. Stage 2-environments to create environments. Stage either 3-networks-dual-svpc or 3-networks-hub-and-spoke to create networking resources in your preferred topology. Stage 4-projects to create an infrastructure pipeline. Optionally, stage 5-app-infra for sample usage of the infrastructure pipeline.
Additional steps after IaC deployment After you deploy the Terraform code, complete the following: Complete the on-premises configuration changes. Activate Security Command Center Premium. Export Cloud Billing data to BigQuery. Sign up for a Cloud Customer Care plan. Enable Access Transparency logs. Share data from Cloud Identity with Google Cloud. Apply the administrative controls for Cloud Identity that aren't automated by the IaC deployment. Assess which of the additional administrative controls for customers with sensitive workloads are appropriate for your use case. Review operations best practices and plan how to connect your existing operations and capabilities to the foundation resources. Additional administrative controls for customers with sensitive workloads Google Cloud provides additional administrative controls that can help you meet your security and compliance requirements. However, some controls involve additional cost or operational trade-offs that might not be appropriate for every customer. These controls also require customized inputs for your specific requirements that can't be fully automated in the blueprint with a default value for all customers. This section introduces security controls that you apply centrally to your foundation. This section isn't intended to be exhaustive of all the security controls that you can apply to specific workloads. For more information on Google's security products and solutions, see Google Cloud security best practices center. Evaluate whether the following controls are appropriate for your foundation based on your compliance requirements, risk appetite, and sensitivity of data. Control Description Protect your resources with VPC Service Controls VPC Service Controls lets you define security policies that prevent access to Google-managed services outside of a trusted perimeter, block access to data from untrusted locations, and mitigate data exfiltration risks. However, VPC Service Controls can cause existing services to break until you define exceptions to allow intended access patterns. Evaluate whether the value of mitigating exfiltration risks justifies the increased complexity and operational overhead of adopting VPC Service Controls. The blueprint prepares restricted networks and optional variables to configure VPC Service Controls, but the perimeter isn't enabled until you take additional steps to design and enable it. Restrict resource locations You might have regulatory requirements that cloud resources must only be deployed in approved geographical locations. This organization policy constraint enforces that resources can only be deployed in the list of locations you define. Enable Assured Workloads Assured Workloads provides additional compliance controls that help you meet specific regulatory regimes. The blueprint provides optional variables in the deployment pipeline for enablement. Enable data access logs You might have a requirement to log all access to certain sensitive data or resources. Evaluate where your workloads handle sensitive data that requires data access logs, and enable the logs for each service and environment working with sensitive data. Enable Access Approval Access Approval ensures that Cloud Customer Care and engineering require your explicit approval whenever they need to access your customer content. Evaluate the operational process required to review Access Approval requests to mitigate possible delays in resolving support incidents.
Enable Key Access Justifications Key Access Justifications lets you programmatically control whether Google can access your encryption keys, including for automated operations and for Customer Care to access your customer content.Evaluate the cost and operational overhead associated with Key Access Justifications as well as its dependency on Cloud External Key Manager (Cloud EKM). Disable Cloud Shell Cloud Shell is an online development environment. This shell is hosted on a Google-managed server outside of your environment, and thus it isn't subject to the controls that you might have implemented on your own developer workstations.If you want to strictly control which workstations a developer can use to access cloud resources, disable Cloud Shell. You might also evaluate Cloud Workstations for a configurable workstation option in your own environment. Restrict access to the Google Cloud console Google Cloud lets you restrict access to the Google Cloud console based on access level attributes like group membership, trusted IP address ranges, and device verification. Some attributes require an additional subscription to Chrome Enterprise Premium.Evaluate the access patterns that you trust for user access to web-based applications such as the console as part of a larger zero trust deployment. Naming conventions We recommend that you have a standardized naming convention for your Google Cloud resources. The following table describes recommended conventions for resource names in the blueprint. Resource Naming convention Folder fldr-environmentenvironment is a description of the folder-level resources within the Google Cloud organization. For example, bootstrap, common, production, nonproduction, development, or network.For example: fldr-production Project ID prj-environmentcode-description-randomid environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). Shared VPC host projects use the environmentcode of the associated environment. Projects for networking resources that are shared across environments, like the interconnect project, use the net environment code. description is additional information about the project. You can use short, human-readable abbreviations. randomid is a randomized suffix to prevent collisions for resource names that must be globally unique and to mitigate against attackers guessing resource names. The blueprint automatically adds a random four-character alphanumeric identifier. For example: prj-c-logging-a1b2 VPC network vpc-environmentcode-vpctype-vpcconfig environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). vpctype is one of shared, float, or peer. vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not. For example: vpc-p-shared-base Subnet sn-environmentcode-vpctype-vpcconfig-region{-description} environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). vpctype is one of shared, float, or peer. vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not. region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid hitting character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast). description is additional information about the subnet. 
You can use short, human-readable abbreviations. For example: sn-p-shared-restricted-uswest1 Firewall policies fw-firewalltype-scope-environmentcode{-description} firewalltype is hierarchical or network. scope is global or the Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast). environmentcode is a short form of the environment field (one of b, c, p, n, d, or net) that owns the policy resource. description is additional information about the hierarchical firewall policy. You can use short, human-readable abbreviations. For example:fw-hierarchical-global-c-01fw-network-uswest1-p-shared-base Cloud Router cr-environmentcode-vpctype-vpcconfig-region{-description} environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). vpctype is one of shared, float, or peer. vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not. region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast). description is additional information about the Cloud Router. You can use short, human-readable abbreviations. For example: cr-p-shared-base-useast1-cr1 Cloud Interconnect connection ic-dc-colo dc is the name of your data center to which a Cloud Interconnect is connected. colo is the colocation facility name that the Cloud Interconnect from the on-premises data center is peered with. For example: ic-mydatacenter-lgazone1 Cloud Interconnect VLAN attachment vl-dc-colo-environmentcode-vpctype-vpcconfig-region{-description} dc is the name of your data center to which a Cloud Interconnect is connected. colo is the colocation facility name that the Cloud Interconnect from the on-premises data center is peered with. environmentcode is a short form of the environment field (one of b, c, p, n, d, or net). vpctype is one of shared, float, or peer. vpcconfig is either base or restricted to indicate whether the network is intended to be used with VPC Service Controls or not. region is any valid Google Cloud region that the resource is located in. We recommend removing hyphens and using an abbreviated form of some regions and directions to avoid reaching character limits. For example, au (Australia), na (North America), sa (South America), eu (Europe), se (southeast), or ne (northeast). description is additional information about the VLAN. You can use short, human-readable abbreviations. For example: vl-mydatacenter-lgazone1-p-shared-base-useast1-cr1 Group grp-gcp-description@example.com Where description is additional information about the group. You can use short, human-readable abbreviations.For example: grp-gcp-billingadmin@example.com Custom role rl-descriptionWhere description is additional information about the role. You can use short, human-readable abbreviations.For example: rl-customcomputeadmin Service account sa-description@projectid.iam.gserviceaccount.comWhere: description is additional information about the service account. You can use short, human-readable abbreviations. projectid is the globally unique project identifier. 
For example: sa-terraform-net@prj-b-seed-a1b2.iam.gserviceaccount.com Storage bucket bkt-projectid-descriptionWhere: projectid is the globally unique project identifier. description is additional information about the storage bucket. You can use short, human-readable abbreviations. For example: bkt-prj-c-infra-pipeline-a1b2-app-artifacts Alternatives to default recommendations The best practices that are recommended in the blueprint might not work for every customer. You can customize any of the recommendations to meet your specific requirements. The following table introduces some of the common variations that you might require based on your existing technology stack and ways of working. Decision area Possible alternatives Organization: The blueprint uses a single organization as the root node for all resources. Decide a resource hierarchy for your Google Cloud landing zone introduces scenarios in which you might prefer multiple organizations, such as the following: Your organization includes sub-companies that are likely to be sold in the future or that run as completely separate entities. You want to experiment in a sandbox environment with no connectivity to your existing organization. Folder structure: The blueprint has a simple folder structure, with workloads divided into production, non-production and development folders at the top layer. Decide a resource hierarchy for your Google Cloud landing zone introduces other approaches for structuring folders based on how you want to manage resources and inherit policies, such as: Folders based on application environments Folders based on regional entities or subsidiaries Folders based on accountability framework Organization policies: The blueprint enforces all organization policy constraints at the organization node. You might have different security policies or ways of working for different parts of the business. In this scenario, enforce organization policy constraints at a lower node in the resource hierarchy. Review the complete list of organization policy constraints that help meet your requirements. Deployment pipeline tooling: The blueprint uses Cloud Build to run the automation pipeline. You might prefer other products for your deployment pipeline, such as Terraform Enterprise, GitLab Runners, GitHub Actions, or Jenkins. The blueprint includes alternative directions for each product. Code repository for deployment: The blueprint uses Cloud Source Repositories as the managed private Git repository. Use your preferred version control system for managing code repositories, such as GitLab, GitHub, or Bitbucket.If you use a private repository that is hosted in your on-premises environment, configure a private network path from your repository to your Google Cloud environment. Identity provider: The blueprint assumes an on-premises Active Directory and federates identities to Cloud Identity using Google Cloud Directory Sync. 
If you already use Google Workspace, you can use the Google identities that are already managed in Google Workspace. If you don't have an existing identity provider, you might create and manage user identities directly in Cloud Identity. If you have an existing identity provider, such as Okta, Ping, or Microsoft Entra ID, you might manage user accounts in your existing identity provider and synchronize to Cloud Identity. If you have data sovereignty or compliance requirements that prevent you from using Cloud Identity, and if you don't require managed Google user identities for other Google services such as Google Ads or Google Marketing Platform, then you might prefer workforce identity federation. In this scenario, be aware of limitations with supported services. Multiple regions: The blueprint deploys regional resources into two different Google Cloud regions to help enable workload design with high availability and disaster recovery requirements in mind. If you have end users in more geographical locations, you might configure more Google Cloud regions to create resources closer to the end user with less latency. If you have data sovereignty constraints or your availability needs can be met in a single region, you might configure only one Google Cloud region. IP address allocation: The blueprint provides a set of IP address ranges. You might need to change the specific IP address ranges that are used based on the IP address availability in your existing hybrid environment. If you modify the IP address ranges, use the blueprint as guidance for the number and size of ranges required, and review the valid IP address ranges for Google Cloud. Hybrid networking: The blueprint uses Dedicated Interconnect across multiple physical sites and Google Cloud regions for maximum bandwidth and availability. Depending on your requirements for cost, bandwidth, and reliability, you might configure Partner Interconnect or Cloud VPN instead. If you need to start deploying resources with private connectivity before a Dedicated Interconnect can be completed, you might start with Cloud VPN and change to using Dedicated Interconnect later. If you don't have an existing on-premises environment, you might not need hybrid networking at all. VPC Service Controls perimeter: The blueprint recommends a single perimeter which includes all the service projects that are associated with a restricted VPC network. Projects that are associated with a base VPC network are not included inside the perimeter. You might have a use case that requires multiple perimeters for an organization or you might decide not to use VPC Service Controls at all. For information, see decide how to mitigate data exfiltration through Google APIs. Secret Manager: The blueprint deploys a project for using Secret Manager in the common folder for organization-wide secrets, and a project in each environment folder for environment-specific secrets. If you have a single team who is responsible for managing and auditing sensitive secrets across the organization, you might prefer to use only a single project for managing access to secrets. If you let workload teams manage their own secrets, you might not use a centralized project for managing access to secrets, and instead let teams use their own instances of Secret Manager in workload projects. Cloud KMS: The blueprint deploys a project for using Cloud KMS in the common folder for organization-wide keys, and a project for each environment folder for keys in each environment.
If you have a single team who is responsible for managing and auditing encryption keys across the organization, you might prefer to use only a single project for managing access to keys. A centralized approach can help meet compliance requirements like PCI key custodians.If you let workload teams manage their own keys, you might not use a centralized project for managing access to keys, and instead let teams use their own instances of Cloud KMS in workload projects. Aggregated log sinks: The blueprint configures a set of log sinks at the organization node so that a central security team can review audit logs from across the entire organization. You might have different teams who are responsible for auditing different parts of the business, and these teams might require different logs to do their jobs. In this scenario, design multiple aggregated sinks at the appropriate folders and projects and create filters so that each team receives only the necessary logs, or design log views for granular access control to a common log bucket. Granularity of infrastructure pipelines: The blueprint uses a model where each business unit has a separate infrastructure pipeline to manage their workload projects. You might prefer a single infrastructure pipeline that is managed by a central team if you have a central team who is responsible for deploying all projects and infrastructure. This central team can accept pull requests from workload teams to review and approve before project creation, or the team can create the pull request themselves in response to a ticketed system.You might prefer more granular pipelines if individual workload teams have the ability to customize their own pipelines and you want to design more granular privileged service accounts for the pipelines. SIEM exports:The blueprint manages all security findings in Security Command Center. Decide whether you will export security findings from Security Command Center to tools such as Google Security Operations or your existing SIEM, or whether teams will use the console to view and manage security findings. You might configure multiple exports with unique filters for different teams with different scopes and responsibilities. DNS lookups for Google Cloud services from on-premises: The blueprint configures a unique Private Service Connect endpoint for each Shared VPC, which can help enable designs with multiple VPC Service Controls perimeters. You might not require routing from an on-premises environment to Private Service Connect endpoints at this level of granularity if you don't require multiple VPC Service Control perimeters.Instead of mapping on-premises hosts to Private Service Connect endpoints by environment, you might simplify this design to use a single Private Service Connect endpoint with the appropriate API bundle, or use the generic endpoints for private.googleapis.com and restricted.googleapis.com. What's next Implement the blueprint using the Terraform example foundation on GitHub. Learn more about best practice design principles with the Google Cloud Architecture Framework. 
Review the library of blueprints to help you accelerate the design and build of common enterprise workloads, including the following: Import data from Google Cloud into a secured BigQuery data warehouse Import data from an external network into a secured BigQuery data warehouse Deploy a secured serverless architecture using Cloud Run functions Deploy a secured serverless architecture using Cloud Run See related solutions to deploy on top of your foundation environment. For access to a demonstration environment, contact us at security-foundations-blueprint-support@google.com. Send feedback \ No newline at end of file diff --git a/View_in_one_page.txt b/View_in_one_page.txt new file mode 100644 index 0000000000000000000000000000000000000000..9b5ef96c4c9af9f58fcba9542f1ce4f596314d1d --- /dev/null +++ b/View_in_one_page.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/printable +Date Scraped: 2025-02-23T11:44:32.820Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud Architecture Framework Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This page provides a one-page view of all of the pages in the Google Cloud Architecture Framework. You can print this page or save it in PDF format by using your browser's print function. This page doesn't have a table of contents. You can't use the links on this page to navigate within the page. The Google Cloud Architecture Framework provides recommendations to help architects, developers, administrators, and other cloud practitioners design and operate a cloud topology that's secure, efficient, resilient, high-performing, and cost-effective. The Google Cloud Architecture Framework is our version of a well-architected framework. A cross-functional team of experts at Google validates the recommendations in the Architecture Framework. The team curates the Architecture Framework to reflect the expanding capabilities of Google Cloud, industry best practices, community knowledge, and feedback from you. For a summary of the significant changes to the Architecture Framework, see What's new. The Architecture Framework is relevant to applications built for the cloud and for workloads migrated from on-premises to Google Cloud, hybrid cloud deployments, and multi-cloud environments. Architecture Framework pillars and perspectives The Google Cloud Architecture Framework is organized into five pillars, as shown in the following diagram. We also provide cross-pillar perspectives that focus on recommendations for selected domains, industries, and technologies like AI and machine learning (ML). To view the content in all of the pillars and perspectives on a single page or to get a PDF output of the content, see View in one page. Pillars construction Operational excellence Efficiently deploy, operate, monitor, and manage your cloud workloads. security Security, privacy, and compliance Maximize the security of your data and workloads in the cloud, design for privacy, and align with regulatory requirements and standards. restore Reliability Design and operate resilient and highly available workloads in the cloud. payment Cost optimization Maximize the business value of your investment in Google Cloud. speed Performance optimization Design and tune your cloud resources for optimal performance. Perspectives saved_search AI and ML A cross-pillar view of recommendations that are specific to AI and ML workloads.
Core principles Before you explore the recommendations in each pillar of the Architecture Framework, review the following core principles: Design for change No system is static. The needs of its users, the goals of the team that builds the system, and the system itself are constantly changing. With the need for change in mind, build a development and production process that enables teams to regularly deliver small changes and get fast feedback on those changes. Consistently demonstrating the ability to deploy changes helps to build trust with stakeholders, including the teams responsible for the system, and the users of the system. Using DORA's software delivery metrics can help your team monitor the speed, ease, and safety of making changes to the system. Document your architecture When you start to move your workloads to the cloud or build your applications, lack of documentation about the system can be a major obstacle. Documentation is especially important for correctly visualizing the architecture of your current deployments. Quality documentation isn't achieved by producing a specific amount of documentation, but by how clear content is, how useful it is, and how it's maintained as the system changes. A properly documented cloud architecture establishes a common language and standards, which enable cross-functional teams to communicate and collaborate effectively. The documentation also provides the information that's necessary to identify and guide future design decisions. Documentation should be written with your use cases in mind, to provide context for the design decisions. Over time, your design decisions will evolve and change. The change history provides the context that your teams require to align initiatives, avoid duplication, and measure performance changes effectively over time. Change logs are particularly valuable when you onboard a new cloud architect who is not yet familiar with your current design, strategy, or history. Analysis by DORA has found a clear link between documentation quality and organizational performance — the organization's ability to meet their performance and profitability goals. Simplify your design and use fully managed services Simplicity is crucial for design. If your architecture is too complex to understand, it will be difficult to implement the design and manage it over time. Where feasible, use fully managed services to minimize the risks, time, and effort associated with managing and maintaining baseline systems. If you're already running your workloads in production, test with managed services to see how they might help to reduce operational complexities. If you're developing new workloads, then start simple, establish a minimal viable product (MVP), and resist the urge to over-engineer. You can identify exceptional use cases, iterate, and improve your systems incrementally over time. Decouple your architecture Research from DORA shows that architecture is an important predictor for achieving continuous delivery. Decoupling is a technique that's used to separate your applications and service components into smaller components that can operate independently. For example, you might separate a monolithic application stack into individual service components. In a loosely coupled architecture, an application can run its functions independently, regardless of the various dependencies. A decoupled architecture gives you increased flexibility to do the following: Apply independent upgrades. Enforce specific security controls. 
Establish reliability goals for each subsystem. Monitor health. Granularly control performance and cost parameters. You can start the decoupling process early in your design phase or incorporate it as part of your system upgrades as you scale. Use a stateless architecture A stateless architecture can increase both the reliability and scalability of your applications. Stateful applications rely on various dependencies to perform tasks, such as local caching of data. Stateful applications often require additional mechanisms to capture progress and restart gracefully. Stateless applications can perform tasks without significant local dependencies by using shared storage or cached services. A stateless architecture enables your applications to scale up quickly with minimum boot dependencies. The applications can withstand hard restarts, have lower downtime, and provide better performance for end users. Google Cloud Architecture Framework: Operational excellence The operational excellence pillar in the Google Cloud Architecture Framework provides recommendations to operate workloads efficiently on Google Cloud. Operational excellence in the cloud involves designing, implementing, and managing cloud solutions that provide value, performance, security, and reliability. The recommendations in this pillar help you to continuously improve and adapt workloads to meet the dynamic and ever-evolving needs in the cloud. The operational excellence pillar is relevant to the following audiences: Managers and leaders: A framework to establish and maintain operational excellence in the cloud and to ensure that cloud investments deliver value and support business objectives. Cloud operations teams: Guidance to manage incidents and problems, plan capacity, optimize performance, and manage change. Site reliability engineers (SREs): Best practices that help you to achieve high levels of service reliability, including monitoring, incident response, and automation. Cloud architects and engineers: Operational requirements and best practices for the design and implementation phases, to help ensure that solutions are designed for operational efficiency and scalability. DevOps teams: Guidance about automation, CI/CD pipelines, and change management, to help enable faster and more reliable software delivery. To achieve operational excellence, you should embrace automation, orchestration, and data-driven insights. Automation helps to eliminate toil. It also streamlines and builds guardrails around repetitive tasks. Orchestration helps to coordinate complex processes. Data-driven insights enable evidence-based decision-making. By using these practices, you can optimize cloud operations, reduce costs, improve service availability, and enhance security. Operational excellence in the cloud goes beyond technical proficiency in cloud operations. It includes a cultural shift that encourages continuous learning and experimentation. Teams must be empowered to innovate, iterate, and adopt a growth mindset. A culture of operational excellence fosters a collaborative environment where individuals are encouraged to share ideas, challenge assumptions, and drive improvement. For operational excellence principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Operational excellence in the Architecture Framework. 
Core principles The recommendations in the operational excellence pillar of the Architecture Framework are mapped to the following core principles: Ensure operational readiness and performance using CloudOps: Ensure that cloud solutions meet operational and performance requirements by defining service level objectives (SLOs) and by performing comprehensive monitoring, performance testing, and capacity planning. Manage incidents and problems: Minimize the impact of cloud incidents and prevent recurrence through comprehensive observability, clear incident response procedures, thorough retrospectives, and preventive measures. Manage and optimize cloud resources: Optimize and manage cloud resources through strategies like right-sizing, autoscaling, and by using effective cost monitoring tools. Automate and manage change: Automate processes, streamline change management, and alleviate the burden of manual labor. Continuously improve and innovate: Focus on ongoing enhancements and the introduction of new solutions to stay competitive. ContributorsAuthors: Ryan Cox | Principal ArchitectHadrian Knotz | Enterprise ArchitectOther contributors: Daniel Lees | Cloud Security ArchitectFilipe Gracio, PhD | Customer EngineerGary Harmson | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperNicolas Pintaux | Customer Engineer, Application Modernization SpecialistRadhika Kanakam | Senior Program Manager, Cloud GTMZach Seils | Networking SpecialistWade Holmes | Global Solutions Director Ensure operational readiness and performance using CloudOps This principle in the operational excellence pillar of the Google Cloud Architecture Framework helps you to ensure operational readiness and performance of your cloud workloads. It emphasizes establishing clear expectations and commitments for service performance, implementing robust monitoring and alerting, conducting performance testing, and proactively planning for capacity needs. Principle overview Different organizations might interpret operational readiness differently. Operational readiness is how your organization prepares to successfully operate workloads on Google Cloud. Preparing to operate a complex, multilayered cloud workload requires careful planning for both go-live and day-2 operations. These operations are often called CloudOps. Focus areas of operational readiness Operational readiness consists of four focus areas. Each focus area consists of a set of activities and components that are necessary to prepare to operate a complex application or environment in Google Cloud. The following table lists the components and activities of each focus area: Note: The recommendations in the operational excellence pillar of the Architecture Framework are relevant to one or more of these operational-readiness focus areas. Focus area of operational readiness Activities and components Workforce Defining clear roles and responsibilities for the teams that manage and operate the cloud resources. Ensuring that team members have appropriate skills. Developing a learning program. Establishing a clear team structure. Hiring the required talent. Processes Observability. Managing service disruptions. Cloud delivery. Core cloud operations. Tooling Tools that are required to support CloudOps processes. Governance Service levels and reporting. Cloud financials. Cloud operating model. Architectural review and governance boards. Cloud architecture and compliance. 
Recommendations To ensure operational readiness and performance by using CloudOps, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Define SLOs and SLAs A core responsibility of the cloud operations team is to define service level objectives (SLOs) and service level agreements (SLAs) for all of the critical workloads. This recommendation is relevant to the governance focus area of operational readiness. SLOs must be specific, measurable, achievable, relevant, and time-bound (SMART), and they must reflect the level of service and performance that you want. Specific: Clearly articulates the required level of service and performance. Measurable: Quantifiable and trackable. Achievable: Attainable within the limits of your organization's capabilities and resources. Relevant: Aligned with business goals and priorities. Time-bound: Has a defined timeframe for measurement and evaluation. For example, an SLO for a web application might be "99.9% availability" or "average response time less than 200 ms." Such SLOs clearly define the required level of service and performance for the web application, and the SLOs can be measured and tracked over time. SLAs outline the commitments to customers regarding service availability, performance, and support, including any penalties or remedies for noncompliance. SLAs must include specific details about the services that are provided, the level of service that can be expected, the responsibilities of both the service provider and the customer, and any penalties or remedies for noncompliance. SLAs serve as a contractual agreement between the two parties, ensuring that both have a clear understanding of the expectations and obligations that are associated with the cloud service. Google Cloud provides tools like Cloud Monitoring and service level indicators (SLIs) to help you define and track SLOs. Cloud Monitoring provides comprehensive monitoring and observability capabilities that enable your organization to collect and analyze metrics that are related to the availability, performance, and latency of cloud-based applications and services. SLIs are specific metrics that you can use to measure and track SLOs over time. By utilizing these tools, you can effectively monitor and manage cloud services, and ensure that they meet the SLOs and SLAs. Clearly defining and communicating SLOs and SLAs for all of your critical cloud services helps to ensure reliability and performance of your deployed applications and services. Implement comprehensive observability To get real-time visibility into the health and performance of your cloud environment, we recommend that you use a combination of Google Cloud Observability tools and third-party solutions. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. Implementing a combination of observability solutions provides you with a comprehensive observability strategy that covers various aspects of your cloud infrastructure and applications. Google Cloud Observability is a unified platform for collecting, analyzing, and visualizing metrics, logs, and traces from various Google Cloud services, applications, and external sources. By using Cloud Monitoring, you can gain insights into resource utilization, performance characteristics, and overall health of your resources. 
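To ground this in configuration, here is a minimal Terraform sketch of a Cloud Monitoring alert policy on a common infrastructure metric; the threshold, duration, and (empty) notification channel list are assumptions to adapt to your own SLOs and on-call setup.

resource "google_monitoring_alert_policy" "high_cpu" {
  display_name = "VM CPU utilization above 80%"
  combiner     = "OR"

  conditions {
    display_name = "CPU above 80% for 5 minutes"
    condition_threshold {
      filter          = "metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0.8
      duration        = "300s"
      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_MEAN"
      }
    }
  }

  # Add notification channel IDs so that the relevant team is notified proactively.
  notification_channels = []
}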
To ensure comprehensive monitoring, monitor important metrics that align with system health indicators such as CPU utilization, memory usage, network traffic, disk I/O, and application response times. You must also consider business-specific metrics. By tracking these metrics, you can identify potential bottlenecks, performance issues, and resource constraints. Additionally, you can set up alerts to notify relevant teams proactively about potential issues or anomalies. To enhance your monitoring capabilities further, you can integrate third-party solutions with Google Cloud Observability. These solutions can provide additional functionality, such as advanced analytics, machine learning-powered anomaly detection, and incident management capabilities. This combination of Google Cloud Observability tools and third-party solutions lets you create a robust and customizable monitoring ecosystem that's tailored to your specific needs. By using this combination approach, you can proactively identify and address issues, optimize resource utilization, and ensure the overall reliability and availability of your cloud applications and services. Implement performance and load testing Regular performance testing helps you to ensure that your cloud-based applications and infrastructure can handle peak loads and maintain optimal performance. Load testing simulates realistic traffic patterns. Stress testing pushes the system to its limits to identify potential bottlenecks and performance limitations. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. Tools like Cloud Load Balancing and load testing services can help you to simulate real-world traffic patterns and stress-test your applications. These tools provide valuable insights into how your system behaves under various load conditions, and can help you to identify areas that require optimization. Based on the results of performance testing, you can make informed decisions to optimize your cloud infrastructure and applications for performance and scalability. This optimization might involve adjusting resource allocation, tuning configurations, or implementing caching mechanisms. For example, if you find that your application is experiencing slowdowns during periods of high traffic, you might need to increase the number of virtual machines or containers that are allocated to the application. Alternatively, you might need to adjust the configuration of your web server or database to improve performance. By regularly conducting performance testing and implementing the necessary optimizations, you can ensure that your cloud-based applications and infrastructure always run at peak performance, and deliver a seamless and responsive experience for your users. Doing so can help you to maintain a competitive advantage and build trust with your customers. Plan and manage capacity Proactively planning for future capacity needs, both organic and inorganic, helps you to ensure the smooth operation and scalability of your cloud-based systems. This recommendation is relevant to the processes focus area of operational readiness. Planning for future capacity includes understanding and managing quotas for various resources like compute instances, storage, and API requests. By analyzing historical usage patterns, growth projections, and business requirements, you can accurately anticipate future capacity requirements.
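As a simple illustration of such a projection, the following Python sketch extrapolates recent peak usage, applies a headroom factor, and compares the result against a quota. The monthly vCPU counts, growth assumptions, headroom factor, and quota value are hypothetical; in practice you would derive them from the monitoring and analysis tools described next and review the output against your actual quotas.

# Minimal sketch: project future capacity needs from historical peak usage.
# All figures are hypothetical example values.

from statistics import mean

monthly_peak_vcpus = [220, 235, 260, 270, 300, 315]  # example: last six months
regional_vcpu_quota = 500                             # example quota to plan against
headroom = 1.3                                        # buffer for spikes and failover

# Average month-over-month growth rate from the historical peaks.
growth_rates = [b / a for a, b in zip(monthly_peak_vcpus, monthly_peak_vcpus[1:])]
avg_growth = mean(growth_rates)

forecast = monthly_peak_vcpus[-1]
for month in range(1, 7):  # project six months ahead
    forecast *= avg_growth
    required = forecast * headroom
    status = "OK" if required <= regional_vcpu_quota else "REQUEST QUOTA INCREASE"
    print(f"Month +{month}: projected peak {forecast:.0f} vCPUs, "
          f"planned capacity {required:.0f} -> {status}")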
You can use tools like Cloud Monitoring and BigQuery to collect and analyze usage data, identify trends, and forecast future demand. Historical usage patterns provide valuable insights into resource utilization over time. By examining metrics like CPU utilization, memory usage, and network traffic, you can identify periods of high demand and potential bottlenecks. Additionally, you can help to estimate future capacity needs by making growth projections based on factors like growth in the user base, new products and features, and marketing campaigns. When you assess capacity needs, you should also consider business requirements like SLAs and performance targets. When you determine the resource sizing for a workload, consider factors that can affect utilization of resources. Seasonal variations like holiday shopping periods or end-of-quarter sales can lead to temporary spikes in demand. Planned events like product launches or marketing campaigns can also significantly increase traffic. To make sure that your primary and disaster recovery (DR) system can handle unexpected surges in demand, plan for capacity that can support graceful failover during disruptions like natural disasters and cyberattacks. Autoscaling is an important strategy for dynamically adjusting your cloud resources based on workload fluctuations. By using autoscaling policies, you can automatically scale compute instances, storage, and other resources in response to changing demand. This ensures optimal performance during peak periods while minimizing costs when resource utilization is low. Autoscaling algorithms use metrics like CPU utilization, memory usage, and queue depth to determine when to scale resources. Continuously monitor and optimize To manage and optimize cloud workloads, you must establish a process for continuously monitoring and analyzing performance metrics. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. To establish a process for continuous monitoring and analysis, you track, collect, and evaluate data that's related to various aspects of your cloud environment. By using this data, you can proactively identify areas for improvement, optimize resource utilization, and ensure that your cloud infrastructure consistently meets or exceeds your performance expectations. An important aspect of performance monitoring is regularly reviewing logs and traces. Logs provide valuable insights into system events, errors, and warnings. Traces provide detailed information about the flow of requests through your application. By analyzing logs and traces, you can identify potential issues, identify the root causes of problems, and get a better understanding of how your applications behave under different conditions. Metrics like the round-trip time between services can help you to identify and understand bottlenecks that are in your workloads. Further, you can use performance-tuning techniques to significantly enhance application response times and overall efficiency. The following are examples of techniques that you can use: Caching: Store frequently accessed data in memory to reduce the need for repeated database queries or API calls. Database optimization: Use techniques like indexing and query optimization to improve the performance of database operations. Code profiling: Identify areas of your code that consume excessive resources or cause performance issues. 
By applying these techniques, you can optimize your applications and ensure that they run efficiently in the cloud. Manage incidents and problems This principle in the operational excellence pillar of the Google Cloud Architecture Framework provides recommendations to help you manage incidents and problems related to your cloud workloads. It involves implementing comprehensive monitoring and observability, establishing clear incident response procedures, conducting thorough root cause analysis, and implementing preventive measures. Many of the topics that are discussed in this principle are covered in detail in the Reliability pillar. Principle overview Incident management and problem management are important components of a functional operations environment. How you respond to, categorize, and solve incidents of differing severity can significantly affect your operations. You must also proactively and continuously make adjustments to optimize reliability and performance. An efficient process for incident and problem management relies on the following foundational elements: Continuous monitoring: Identify and resolve issues quickly. Automation: Streamline tasks and improve efficiency. Orchestration: Coordinate and manage cloud resources effectively. Data-driven insights: Optimize cloud operations and make informed decisions. These elements help you to build a resilient cloud environment that can handle a wide range of challenges and disruptions. These elements can also help to reduce the risk of costly incidents and downtime, and they can help you to achieve greater business agility and success. These foundational elements are spread across the four focus areas of operational readiness: Workforce, Processes, Tooling, and Governance. Note: The Google SRE Book defines many of the terms and concepts that are described in this document. We recommend the Google SRE Book as supplemental reading to support the recommendations that are described in this document. Recommendations To manage incidents and problems effectively, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Establish clear incident response procedures Clear roles and responsibilities are essential to ensure effective and coordinated response to incidents. Additionally, clear communication protocols and escalation paths help to ensure that information is shared promptly and effectively during an incident. This recommendation is relevant to these focus areas of operational readiness: workforce, processes, and tooling. To establish incident response procedures, you need to define the roles and expectations of each team member, such as incident commanders, investigators, communicators, and technical experts. Establishing communication and escalation paths includes identifying important contacts, setting up communication channels, and defining the process for escalating incidents to higher levels of management when necessary. Regular training and preparation helps to ensure that teams are equipped with the knowledge and skills to respond to incidents effectively. By documenting incident response procedures in a runbook or playbook, you can provide a standardized reference guide for teams to follow during an incident. The runbook must outline the steps to be taken at each stage of the incident response process, including communication, triage, investigation, and resolution. 
It must also include information about relevant tools and resources and contact information for important personnel. You must regularly review and update the runbook to ensure that it remains current and effective. Centralize incident management For effective tracking and management throughout the incident lifecycle, consider using a centralized incident management system. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. A centralized incident management system provides the following advantages: Improved visibility: By consolidating all incident-related data in a single location, you eliminate the need for teams to search in various channels or systems for context. This approach saves time and reduces confusion, and it gives stakeholders a comprehensive view of the incident, including its status, impact, and progress. Better coordination and collaboration: A centralized system provides a unified platform for communication and task management. It promotes seamless collaboration between the different departments and functions that are involved in incident response. This approach ensures that everyone has access to up-to-date information and it reduces the risk of miscommunication and misalignment. Enhanced accountability and ownership: A centralized incident management system enables your organization to allocate tasks to specific individuals or teams and it ensures that responsibilities are clearly defined and tracked. This approach promotes accountability and encourages proactive problem-solving because team members can easily monitor their progress and contributions. A centralized incident management system must offer robust features for incident tracking, task assignment, and communication management. These features let you customize workflows, set priorities, and integrate with other systems, such as monitoring tools and ticketing systems. By implementing a centralized incident management system, you can optimize your organization's incident response processes, improve collaboration, and enhance visibility. Doing so leads to faster incident resolution times, reduced downtime, and improved customer satisfaction. It also helps foster a culture of continuous improvement because you can learn from past incidents and identify areas for improvement. Conduct thorough post-incident reviews After an incident occurs, you must conduct a detailed post-incident review (PIR), which is also known as a postmortem, to identify the root cause, contributing factors, and lessons learned. This thorough review helps you to prevent similar incidents in the future. This recommendation is relevant to these focus areas of operational readiness: processes and governance. The PIR process must involve a multidisciplinary team that has expertise in various aspects of the incident. The team must gather all of the relevant information through interviews, documentation review, and site inspections. A timeline of events must be created to establish the sequence of actions that led up to the incident. After the team gathers the required information, they must conduct a root cause analysis to determine the factors that led to the incident. This analysis must identify both the immediate cause and the systemic issues that contributed to the incident. Along with identifying the root cause, the PIR team must identify any other contributing factors that might have caused the incident. 
These factors could include human error, equipment failure, or organizational factors like communication breakdowns and lack of training. The PIR report must document the findings of the investigation, including the timeline of events, root cause analysis, and recommended actions. The report is a valuable resource for implementing corrective actions and preventing recurrence. The report must be shared with all of the relevant stakeholders and it must be used to develop safety training and procedures. To ensure a successful PIR process, your organization must foster a blameless culture that focuses on learning and improvement rather than assigning blame. This culture encourages individuals to report incidents without fear of retribution, and it lets you address systemic issues and make meaningful improvements. By conducting thorough PIRs and implementing corrective measures based on the findings, you can significantly reduce the risk of similar incidents occurring in the future. This proactive approach to incident investigation and prevention helps to create a safer and more efficient work environment for everyone involved. Maintain a knowledge base A knowledge base of known issues, solutions, and troubleshooting guides is essential for incident management and resolution. Team members can use the knowledge base to quickly identify and address common problems. Implementing a knowledge base helps to reduce the need for escalation and it improves overall efficiency. This recommendation is relevant to these focus areas of operational readiness: workforce and processes. A primary benefit of a knowledge base is that it lets teams learn from past experiences and avoid repeating mistakes. By capturing and sharing solutions to known issues, teams can build a collective understanding of how to resolve common problems and best practices for incident management. Use of a knowledge base saves time and effort, and helps to standardize processes and ensure consistency in incident resolution. Along with helping to improve incident resolution times, a knowledge base promotes knowledge sharing and collaboration across teams. With a central repository of information, teams can easily access and contribute to the knowledge base, which promotes a culture of continuous learning and improvement. This culture encourages teams to share their expertise and experiences, leading to a more comprehensive and valuable knowledge base. To create and manage a knowledge base effectively, use appropriate tools and technologies. Collaboration platforms like Google Workspace are well-suited for this purpose because they let you easily create, edit, and share documents collaboratively. These tools also support version control and change tracking, which ensures that the knowledge base remains up-to-date and accurate. Make the knowledge base easily accessible to all relevant teams. You can achieve this by integrating the knowledge base with existing incident management systems or by providing a dedicated portal or intranet site. A knowledge base that's readily available lets teams quickly access the information that they need to resolve incidents efficiently. This availability helps to reduce downtime and minimize the impact on business operations. Regularly review and update the knowledge base to ensure that it remains relevant and useful. Monitor incident reports, identify common issues and trends, and incorporate new solutions and troubleshooting guides into the knowledge base. 
An up-to-date knowledge base helps your teams resolve incidents faster and more effectively. Automate incident response Automation helps to streamline your incident response and remediation processes. It lets you address security breaches and system failures promptly and efficiently. By using Google Cloud products like Cloud Run functions or Cloud Run, you can automate various tasks that are typically manual and time-consuming. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. Automated incident response provides the following benefits: Reduction in incident detection and resolution times: Automated tools can continuously monitor systems and applications, detect suspicious or anomalous activities in real time, and notify stakeholders or respond without intervention. This automation lets you identify potential threats or issues before they escalate into major incidents. When an incident is detected, automated tools can trigger predefined remediation actions, such as isolating affected systems, quarantining malicious files, or rolling back changes to restore the system to a known good state. Reduced burden on security and operations teams: Automated incident response lets the security and operations teams focus on more strategic tasks. By automating routine and repetitive tasks, such as collecting diagnostic information or triggering alerts, your organization can free up personnel to handle more complex and critical incidents. This automation can lead to improved overall incident response effectiveness and efficiency. Enhanced consistency and accuracy of the remediation process: Automated tools can ensure that remediation actions are applied uniformly across all affected systems, minimizing the risk of human error or inconsistency. This standardization of the remediation process helps to minimize the impact of incidents on users and the business. Manage and optimize cloud resources This principle in the operational excellence pillar of the Google Cloud Architecture Framework provides recommendations to help you manage and optimize the resources that are used by your cloud workloads. It involves right-sizing resources based on actual usage and demand, using autoscaling for dynamic resource allocation, implementing cost optimization strategies, and regularly reviewing resource utilization and costs. Many of the topics that are discussed in this principle are covered in detail in the Cost optimization pillar. Principle overview Cloud resource management and optimization play a vital role in optimizing cloud spending, resource usage, and infrastructure efficiency. It includes various strategies and best practices aimed at maximizing the value and return from your cloud spending. This pillar's focus on optimization extends beyond cost reduction. It emphasizes the following goals: Efficiency: Using automation and data analytics to achieve peak performance and cost savings. Performance: Scaling resources effortlessly to meet fluctuating demands and deliver optimal results. Scalability: Adapting infrastructure and processes to accommodate rapid growth and diverse workloads. By focusing on these goals, you achieve a balance between cost and functionality. You can make informed decisions regarding resource provisioning, scaling, and migration. Additionally, you gain valuable insights into resource consumption patterns, which lets you proactively identify and address potential issues before they escalate. 
Recommendations To manage and optimize resources, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Right-size resources Continuously monitoring resource utilization and adjusting resource allocation to match actual demand are essential for efficient cloud resource management. Over-provisioning resources can lead to unnecessary costs, and under-provisioning can cause performance bottlenecks that affect application performance and user experience. To achieve an optimal balance, you must adopt a proactive approach to right-sizing cloud resources. This recommendation is relevant to the governance focus area of operational readiness. Cloud Monitoring and Recommender can help you to identify opportunities for right-sizing. Cloud Monitoring provides real-time visibility into resource utilization metrics. This visibility lets you track resource usage patterns and identify potential inefficiencies. Recommender analyzes resource utilization data to make intelligent recommendations for optimizing resource allocation. By using these tools, you can gain insights into resource usage and make informed decisions about right-sizing the resources. In addition to Cloud Monitoring and Recommender, consider using custom metrics to trigger automated right-sizing actions. Custom metrics let you track specific resource utilization metrics that are relevant to your applications and workloads. You can also configure alerts to notify administrators when predefined thresholds are met. The administrators can then take necessary actions to adjust resource allocation. This proactive approach ensures that resources are scaled in a timely manner, which helps to optimize cloud costs and prevent performance issues. Use autoscaling Autoscaling compute and other resources helps to ensure optimal performance and cost efficiency of your cloud-based applications. Autoscaling lets you dynamically adjust the capacity of your resources based on workload fluctuations, so that you have the resources that you need when you need them and you can avoid over-provisioning and unnecessary costs. This recommendation is relevant to the processes focus area of operational readiness. To meet the diverse needs of different applications and workloads, Google Cloud offers various autoscaling options, including the following: Compute Engine managed instance groups (MIGs) are groups of VMs that are managed and scaled as a single entity. With MIGs, you can define autoscaling policies that specify the minimum and maximum number of VMs to maintain in the group, and the conditions that trigger autoscaling. For example, you can configure a policy to add VMs in a MIG when the CPU utilization reaches a certain threshold and to remove VMs when the utilization drops below a different threshold. Google Kubernetes Engine (GKE) autoscaling dynamically adjusts your cluster resources to match your application's needs. It offers the following tools: Cluster Autoscaler adds or removes nodes based on Pod resource demands. Horizontal Pod Autoscaler changes the number of Pod replicas based on CPU, memory, or custom metrics. Vertical Pod Autoscaler fine-tunes Pod resource requests and limits based on usage patterns. Node Auto-Provisioning automatically creates optimized node pools for your workloads. These tools work together to optimize resource utilization, ensure application performance, and simplify cluster management. 
Cloud Run is a serverless platform that lets you run code without having to manage infrastructure. Cloud Run offers built-in autoscaling, which automatically adjusts the number of instances based on the incoming traffic. When the volume of traffic increases, Cloud Run scales up the number of instances to handle the load. When traffic decreases, Cloud Run scales down the number of instances to reduce costs. By using these autoscaling options, you can ensure that your cloud-based applications have the resources that they need to handle varying workloads, while avoiding overprovisioning and unnecessary costs. Using autoscaling can lead to improved performance, cost savings, and more efficient use of cloud resources. Leverage cost optimization strategies Optimizing cloud spending helps you to effectively manage your organization's IT budgets. This recommendation is relevant to the governance focus area of operational readiness. Google Cloud offers several tools and techniques to help you optimize cloud costs. By using these tools and techniques, you can get the best value from your cloud spending. These tools and techniques help you to identify areas where costs can be reduced, such as identifying underutilized resources or recommending more cost-effective instance types. Google Cloud options to help optimize cloud costs include the following: Committed use discounts (CUDs) are discounts for committing to a certain level of usage over a period of time. Sustained use discounts in Compute Engine provide discounts for consistent usage of a service. Spot VMs provide access to unused VM capacity at a lower cost compared to regular VMs. Pricing models might change over time, and new features might be introduced that offer better performance or lower cost compared to existing options. Therefore, you should regularly review pricing models and consider alternative features. By staying informed about the latest pricing models and features, you can make informed decisions about your cloud architecture to minimize costs. Google Cloud's Cost Management tools, such as budgets and alerts, provide valuable insights into cloud spending. Budgets and alerts let users set budgets and receive alerts when the budgets are exceeded. These tools help users track their cloud spending and identify areas where costs can be reduced. Track resource usage and costs You can use tagging and labeling to track resource usage and costs. By assigning tags and labels to your cloud resources based on dimensions like project, department, or other relevant criteria, you can categorize and organize the resources. This lets you monitor and analyze spending patterns for specific resources and identify areas of high usage or potential cost savings. This recommendation is relevant to these focus areas of operational readiness: governance and tooling. Tools like Cloud Billing and Cost Management help you to get a comprehensive understanding of your spending patterns. These tools provide detailed insights into your cloud usage and they let you identify trends, forecast costs, and make informed decisions. By analyzing historical data and current spending patterns, you can identify the focus areas for your cost-optimization efforts. Custom dashboards and reports help you to visualize cost data and gain deeper insights into spending trends. By customizing dashboards with relevant metrics and dimensions, you can monitor key performance indicators (KPIs) and track progress towards your cost optimization goals. Reports offer deeper analyses of cost data.
Reports let you filter the data by specific time periods or resource types to understand the underlying factors that contribute to your cloud spending. Regularly review and update your tags, labels, and cost analysis tools to ensure that you have the most up-to-date information on your cloud usage and costs. By staying informed and conducting cost postmortems or proactive cost reviews, you can promptly identify any unexpected increases in spending. Doing so lets you make proactive decisions to optimize cloud resources and control costs. Establish cost allocation and budgeting Accountability and transparency in cloud cost management are crucial for optimizing resource utilization and ensuring financial control. This recommendation is relevant to the governance focus area of operational readiness. To ensure accountability and transparency, you need to have clear mechanisms for cost allocation and chargeback. By allocating costs to specific teams, projects, or individuals, your organization can ensure that each of these entities is responsible for its cloud usage. This practice fosters a sense of ownership and encourages responsible resource management. Additionally, chargeback mechanisms enable your organization to recover cloud costs from internal customers, align incentives with performance, and promote fiscal discipline. Establishing budgets for different teams or projects is another essential aspect of cloud cost management. Budgets enable your organization to define spending limits and track actual expenses against those limits. This approach lets you make proactive decisions to prevent uncontrolled spending. By setting realistic and achievable budgets, you can ensure that cloud resources are used efficiently and aligned with business objectives. Regular monitoring of actual spending against budgets helps you to identify variances and address potential overruns promptly. To monitor budgets, you can use tools like Cloud Billing budgets and alerts. These tools provide real-time insights into cloud spending and they notify stakeholders of potential overruns. By using these capabilities, you can track cloud costs and take corrective actions before significant deviations occur. This proactive approach helps to prevent financial surprises and ensures that cloud resources are used responsibly. Automate and manage change This principle in the operational excellence pillar of the Google Cloud Architecture Framework provides recommendations to help you automate and manage change for your cloud workloads. It involves implementing infrastructure as code (IaC), establishing standard operating procedures, implementing a structured change management process, and using automation and orchestration. Principle overview Change management and automation play a crucial role in ensuring smooth and controlled transitions within cloud environments. For effective change management, you need to use strategies and best practices that minimize disruptions and ensure that changes are integrated seamlessly with existing systems. Effective change management and automation include the following foundational elements: Change governance: Establish clear policies and procedures for change management, including approval processes and communication plans. Risk assessment: Identify potential risks associated with changes and mitigate them through risk management techniques. Testing and validation: Thoroughly test changes to ensure that they meet functional and performance requirements and mitigate potential regressions. 
Controlled deployment: Implement changes in a controlled manner, ensuring that users are seamlessly transitioned to the new environment, with mechanisms to seamlessly roll back if needed. These foundational elements help to minimize the impact of changes and ensure that changes have a positive effect on business operations. These elements are represented by the processes, tooling, and governance focus areas of operational readiness. Recommendations To automate and manage change, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. Adopt IaC Infrastructure as code (IaC) is a transformative approach for managing cloud infrastructure. You can define and manage cloud infrastructure declaratively by using tools like Terraform. IaC helps you achieve consistency, repeatability, and simplified change management. It also enables faster and more reliable deployments. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. The following are the main benefits of adopting the IaC approach for your cloud deployments: Human-readable resource configurations: With the IaC approach, you can declare your cloud infrastructure resources in a human-readable format, like JSON or YAML. Infrastructure administrators and operators can easily understand and modify the infrastructure and collaborate with others. Consistency and repeatability: IaC enables consistency and repeatability in your infrastructure deployments. You can ensure that your infrastructure is provisioned and configured the same way every time, regardless of who is performing the deployment. This approach helps to reduce errors and ensures that your infrastructure is always in a known state. Accountability and simplified troubleshooting: The IaC approach helps to improve accountability and makes it easier to troubleshoot issues. By storing your IaC code in a version control system, you can track changes, and identify when changes were made and by whom. If necessary, you can easily roll back to previous versions. Implement version control A version control system like Git is a key component of the IaC process. It provides robust change management and risk mitigation capabilities, which is why it's widely adopted, either through in-house development or SaaS solutions. This recommendation is relevant to these focus areas of operational readiness: governance and tooling. By tracking changes to IaC code and configurations, version control provides visibility into the evolution of the code, making it easier to understand the impact of changes and identify potential issues. This enhanced visibility fosters collaboration among team members who work on the same IaC project. Most version control systems let you easily roll back changes if needed. This capability helps to mitigate the risk of unintended consequences or errors. By using tools like Git in your IaC workflow, you can significantly improve change management processes, foster collaboration, and mitigate risks, which leads to a more efficient and reliable IaC implementation. Build CI/CD pipelines Continuous integration and continuous delivery (CI/CD) pipelines streamline the process of developing and deploying cloud applications. CI/CD pipelines automate the building, testing, and deployment stages, which enables faster and more frequent releases with improved quality control. 
This recommendation is relevant to the tooling focus area of operational readiness. CI/CD pipelines ensure that code changes are continuously integrated into a central repository, typically a version control system like Git. Continuous integration facilitates early detection and resolution of issues, and it reduces the likelihood of bugs or compatibility problems. To create and manage CI/CD pipelines for cloud applications, you can use tools like Cloud Build and Cloud Deploy. Cloud Build is a fully managed build service that lets developers define and execute build steps in a declarative manner. It integrates seamlessly with popular source-code management platforms and it can be triggered by events like code pushes and pull requests. Cloud Deploy is a serverless deployment service that automates the process of deploying applications to various environments, such as testing, staging, and production. It provides features like blue-green deployments, traffic splitting, and rollback capabilities, making it easier to manage and monitor application deployments. Integrating CI/CD pipelines with version control systems and testing frameworks helps to ensure the quality and reliability of your cloud applications. By running automated tests as part of the CI/CD process, development teams can quickly identify and fix any issues before the code is deployed to the production environment. This integration helps to improve the overall stability and performance of your cloud applications. Use configuration management tools Tools like Puppet, Chef, Ansible, and VM Manager help you to automate the configuration and management of cloud resources. Using these tools, you can ensure resource consistency and compliance across your cloud environments. This recommendation is relevant to the tooling focus area of operational readiness. Automating the configuration and management of cloud resources provides the following benefits: Significant reduction in the risk of manual errors: When manual processes are involved, there is a higher likelihood of mistakes due to human error. Configuration management tools reduce this risk by automating processes, so that configurations are applied consistently and accurately across all cloud resources. This automation can lead to improved reliability and stability of the cloud environment. Improvement in operational efficiency: By automating repetitive tasks, your organization can free up IT staff to focus on more strategic initiatives. This automation can lead to increased productivity and cost savings and improved responsiveness to changing business needs. Simplified management of complex cloud infrastructure: As cloud environments grow in size and complexity, managing the resources can become increasingly difficult. Configuration management tools provide a centralized platform for managing cloud resources. The tools make it easier to track configurations, identify issues, and implement changes. Using these tools can lead to improved visibility, control, and security of your cloud environment. Automate testing Integrating automated testing into your CI/CD pipelines helps to ensure the quality and reliability of your cloud applications. By validating changes before deployment, you can significantly reduce the risk of errors and regressions, which leads to a more stable and robust software system. This recommendation is relevant to these focus areas of operational readiness: processes and tooling. 
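As a small illustration, the following hypothetical unit test is the kind of check that a CI step could execute on every commit, for example a Cloud Build step that runs the test suite with python -m unittest. The calculate_discount function and its pricing rules are invented for this example and are not part of any Google Cloud product.

# Minimal sketch: a unit test that a CI pipeline could run on every commit.
# The function under test and its rules are hypothetical.

import unittest

def calculate_discount(order_total: float, is_returning_customer: bool) -> float:
    """Apply 10% off orders over 100, plus 5% more for returning customers."""
    discount = 0.10 if order_total > 100 else 0.0
    if is_returning_customer:
        discount += 0.05
    return round(order_total * (1 - discount), 2)

class CalculateDiscountTest(unittest.TestCase):
    def test_large_order_gets_base_discount(self):
        self.assertEqual(calculate_discount(200.0, is_returning_customer=False), 180.0)

    def test_returning_customer_gets_extra_discount(self):
        self.assertEqual(calculate_discount(200.0, is_returning_customer=True), 170.0)

    def test_small_order_has_no_discount(self):
        self.assertEqual(calculate_discount(50.0, is_returning_customer=False), 50.0)

if __name__ == "__main__":
    unittest.main()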
The following are the main benefits of incorporating automated testing into your CI/CD pipelines: Early detection of bugs and defects: Automated testing helps to detect bugs and defects early in the development process, before they can cause major problems in production. This capability saves time and resources by preventing the need for costly rework and bug fixes at later stages in the development process. High quality and standards-based code: Automated testing can help improve the overall quality of your code by ensuring that the code meets certain standards and best practices. This capability leads to more maintainable and reliable applications that are less prone to errors. You can use various types of testing techniques in CI/CD pipelines. Each test type serves a specific purpose. Unit testing focuses on testing individual units of code, such as functions or methods, to ensure that they work as expected. Integration testing tests the interactions between different components or modules of your application to verify that they work properly together. End-to-end testing is often used along with unit and integration testing. End-to-end testing simulates real-world scenarios to test the application as a whole, and helps to ensure that the application meets the requirements of your end users. To effectively integrate automated testing into your CI/CD pipelines, you must choose appropriate testing tools and frameworks. There are many different options, each with its own strengths and weaknesses. You must also establish a clear testing strategy that outlines the types of tests to be performed, the frequency of testing, and the criteria for passing or failing a test. By following these recommendations, you can ensure that your automated testing process is efficient and effective. Such a process provides valuable insights into the quality and reliability of your cloud applications. Continuously improve and innovate This principle in the operational excellence pillar of the Google Cloud Architecture Framework provides recommendations to help you continuously optimize cloud operations and drive innovation. Principle overview To continuously improve and innovate in the cloud, you need to focus on continuous learning, experimentation, and adaptation. This helps you to explore new technologies and optimize existing processes and it promotes a culture of excellence that enables your organization to achieve and maintain industry leadership. Through continuous improvement and innovation, you can achieve the following goals: Accelerate innovation: Explore new technologies and services to enhance capabilities and drive differentiation. Reduce costs: Identify and eliminate inefficiencies through process-improvement initiatives. Enhance agility: Adapt rapidly to changing market demands and customer needs. Improve decision making: Gain valuable insights from data and analytics to make data-driven decisions. Organizations that embrace the continuous improvement and innovation principle can unlock the full potential of the cloud environment and achieve sustainable growth. This principle maps primarily to the Workforce focus area of operational readiness. A culture of innovation lets teams experiment with new tools and technologies to expand capabilities and reduce costs. Recommendations To continuously improve and innovate your cloud workloads, consider the recommendations in the following sections. Each recommendation in this document is relevant to one or more of the focus areas of operational readiness. 
Foster a culture of learning Encourage teams to experiment, share knowledge, and learn continuously. Adopt a blameless culture where failures are viewed as opportunities for growth and improvement. This recommendation is relevant to the workforce focus area of operational readiness. When you foster a culture of learning, teams can learn from mistakes and iterate quickly. This approach encourages team members to take risks, experiment with new ideas, and expand the boundaries of their work. It also creates a psychologically safe environment where individuals feel comfortable sharing failures and learning from them. Sharing in this way leads to a more open and collaborative environment. To facilitate knowledge sharing and continuous learning, create opportunities for teams to share knowledge and learn from each other. You can do this through informal and formal learning sessions and conferences. By fostering a culture of experimentation, knowledge sharing, and continuous learning, you can create an environment where teams are empowered to take risks, innovate, and grow. This environment can lead to increased productivity, improved problem-solving, and a more engaged and motivated workforce. Further, by promoting a blameless culture, you can create a safe space for employees to learn from mistakes and contribute to the collective knowledge of the team. This culture ultimately leads to a more resilient and adaptable workforce that is better equipped to handle challenges and drive success in the long run. Conduct regular retrospectives Retrospectives give teams an opportunity to reflect on their experiences, identify what went well, and identify what can be improved. By conducting retrospectives after projects or major incidents, teams can learn from successes and failures, and continuously improve their processes and practices. This recommendation is relevant to these focus areas of operational readiness: processes and governance. An effective way to structure a retrospective is to use the Start-Stop-Continue model: Start: In the Start phase of the retrospective, team members identify new practices, processes, and behaviors that they believe can enhance their work. They discuss why the changes are needed and how they can be implemented. Stop: In the Stop phase, team members identify and eliminate practices, processes, and behaviors that are no longer effective or that hinder progress. They discuss why these changes are necessary and how they can be implemented. Continue: In the Continue phase, team members identify practices, processes, and behaviors that work well and must be continued. They discuss why these elements are important and how they can be reinforced. By using a structured format like the Start-Stop-Continue model, teams can ensure that retrospectives are productive and focused. This model helps to facilitate discussion, identify the main takeaways, and identify actionable steps for future enhancements. Stay up-to-date with cloud technologies To maximize the potential of Google Cloud services, you must keep up with the latest advancements, features, and best practices. This recommendation is relevant to the workforce focus area of operational readiness. Participating in relevant conferences, webinars, and training sessions is a valuable way to expand your knowledge. These events provide opportunities to learn from Google Cloud experts, understand new capabilities, and engage with industry peers who might face similar challenges. 
By attending these sessions, you can gain insights into how to use new features effectively, optimize your cloud operations, and drive innovation within your organization. To ensure that your team members keep up with cloud technologies, encourage them to obtain certifications and attend training courses. Google Cloud offers a wide range of certifications that validate skills and knowledge in specific cloud domains. Earning these certifications demonstrates commitment to excellence and provides tangible evidence of proficiency in cloud technologies. The training courses that are offered by Google Cloud and our partners delve deeper into specific topics. They provide direct experience and practical skills that can be immediately applied to real-world projects. By investing in the professional development of your team, you can foster a culture of continuous learning and ensure that everyone has the necessary skills to succeed in the cloud. Actively seek and incorporate feedback Collect feedback from users, stakeholders, and team members. Use the feedback to identify opportunities to improve your cloud solutions. This recommendation is relevant to the workforce focus area of operational readiness. The feedback that you collect can help you to understand the evolving needs, issues, and expectations of the users of your solutions. This feedback serves as a valuable input to drive improvements and prioritize future enhancements. You can use various mechanisms to collect feedback: Surveys are an effective way to gather quantitative data from a large number of users and stakeholders. User interviews provide an opportunity for in-depth qualitative data collection. Interviews let you understand the specific challenges and experiences of individual users. Feedback forms that are placed within the cloud solutions offer a convenient way for users to provide immediate feedback on their experience. Regular meetings with team members can facilitate the collection of feedback on technical aspects and implementation challenges. The feedback that you collect through these mechanisms must be analyzed and synthesized to identify common themes and patterns. This analysis can help you prioritize future enhancements based on the impact and feasibility of the suggested improvements. By addressing the needs and issues that are identified through feedback, you can ensure that your cloud solutions continue to meet the evolving requirements of your users and stakeholders. Measure and track progress Key performance indicators (KPIs) and metrics are crucial for tracking progress and measuring the effectiveness of your cloud operations. KPIs are quantifiable measurements that reflect the overall performance. Metrics are specific data points that contribute to the calculation of KPIs. Review the metrics regularly and use them to identify opportunities for improvement and measure progress. Doing so helps you to continuously improve and optimize your cloud environment. This recommendation is relevant to these focus areas of operational readiness: governance and processes. A primary benefit of using KPIs and metrics is that they enable your organization to adopt a data-driven approach to cloud operations. By tracking and analyzing operational data, you can make informed decisions about how to improve the cloud environment. This data-driven approach helps you to identify trends, patterns, and anomalies that might not be visible without the use of systematic metrics. 
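To illustrate how raw operational data becomes a KPI, the following Python sketch computes an availability KPI and a p95 latency KPI from sample data and compares them against targets. The samples, KPI definitions, and targets are hypothetical; the tools described next are the typical sources for this kind of data in Google Cloud.

# Minimal sketch: derive two illustrative KPIs from raw operational samples.
# All sample data and targets are hypothetical.

from math import ceil

request_outcomes = [True] * 9957 + [False] * 43              # True = successful request
latency_ms = sorted(50 + (i % 400) for i in range(10_000))   # synthetic latency samples

availability = sum(request_outcomes) / len(request_outcomes)
p95_index = ceil(0.95 * len(latency_ms)) - 1
p95_latency = latency_ms[p95_index]

print(f"Availability KPI: {availability:.2%}")
print(f"p95 latency KPI: {p95_latency} ms")

# Compare the KPIs against targets that your governance process defines.
targets = {"availability": 0.995, "p95_latency_ms": 450}
print("Availability target met:", availability >= targets["availability"])
print("Latency target met:", p95_latency <= targets["p95_latency_ms"])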
To collect and analyze operational data, you can use tools like Cloud Monitoring and BigQuery. Cloud Monitoring enables real-time monitoring of cloud resources and services. BigQuery lets you store and analyze the data that you gather through monitoring. Using these tools together, you can create custom dashboards to visualize important metrics and trends. Operational dashboards can provide a centralized view of the most important metrics, which lets you quickly identify any areas that need attention. For example, a dashboard might include metrics like CPU utilization, memory usage, network traffic, and latency for a particular application or service. By monitoring these metrics, you can quickly identify any potential issues and take steps to resolve them. Google Cloud Architecture Framework: Security, privacy, and compliance The Security, Privacy and Compliance pillar in the Google Cloud Architecture Framework provides recommendations to help you design, deploy, and operate cloud workloads that meet your requirements for security, privacy, and compliance. This document is designed to offer valuable insights and meet the needs of a range of security professionals and engineers. The following table describes the intended audiences for this document: Audience What this document provides Chief information security officers (CISOs), business unit leaders, and IT managers A general framework to establish and maintain security excellence in the cloud and to ensure a comprehensive view of security areas to make informed decisions about security investments. Security architects and engineers Key security practices for the design and operational phases to help ensure that solutions are designed for security, efficiency, and scalability. DevSecOps teams Guidance to incorporate overarching security controls to plan automation that enables secure and reliable infrastructure. Compliance officers and risk managers Key security recommendations to follow a structured approach to risk management with safeguards that help to meet compliance obligations. To ensure that your Google Cloud workloads meet your security, privacy, and compliance requirements, all of the stakeholders in your organization must adopt a collaborative approach. In addition, you must recognize that cloud security is a shared responsibility between you and Google. For more information, see Shared responsibilities and shared fate on Google Cloud. The recommendations in this pillar are grouped into core security principles. Each principle-based recommendation is mapped to one or more of the key deployment focus areas of cloud security that might be critical to your organization. Each recommendation highlights guidance about the use and configuration of Google Cloud products and capabilities to help improve your organization's security posture. Core principles The recommendations in this pillar are grouped within the following core principles of security. Every principle in this pillar is important. Depending on the requirements of your organization and workload, you might choose to prioritize certain principles. Implement security by design: Integrate cloud security and network security considerations starting from the initial design phase of your applications and infrastructure. Google Cloud provides architecture blueprints and recommendations to help you apply this principle. Implement zero trust: Use a never trust, always verify approach, where access to resources is granted based on continuous verification of trust. 
Google Cloud supports this principle through products like Chrome Enterprise Premium and Identity-Aware Proxy (IAP). Implement shift-left security: Implement security controls early in the software development lifecycle. Avoid security defects before system changes are made. Detect and fix security bugs early, fast, and reliably after the system changes are committed. Google Cloud supports this principle through products like Cloud Build, Binary Authorization, and Artifact Registry. Implement preemptive cyber defense: Adopt a proactive approach to security by implementing robust fundamental measures like threat intelligence. This approach helps you build a foundation for more effective threat detection and response. Google Cloud's approach to layered security controls aligns with this principle. Use AI securely and responsibly: Develop and deploy AI systems in a responsible and secure manner. The recommendations for this principle are aligned with guidance in the AI and ML perspective of the Architecture Framework and in Google's Secure AI Framework (SAIF). Use AI for security: Use AI capabilities to improve your existing security systems and processes through Gemini in Security and overall platform-security capabilities. Use AI as a tool to increase the automation of remedial work and ensure security hygiene to make other systems more secure. Meet regulatory, compliance, and privacy needs: Adhere to industry-specific regulations, compliance standards, and privacy requirements. Google Cloud helps you meet these obligations through products like Assured Workloads, Organization Policy Service, and our compliance resource center. Organizational security mindset A security-focused organizational mindset is crucial for successful cloud adoption and operation. This mindset should be deeply ingrained in your organization's culture and reflected in its practices, which are guided by core security principles as described earlier. An organizational security mindset emphasizes that you think about security during system design, assume zero trust, and integrate security features throughout your development process. In this mindset, you also think proactively about cyber-defense measures, use AI securely and for security, and consider your regulatory, privacy, and compliance requirements. By embracing these principles, your organization can cultivate a security-first culture that proactively addresses threats, protects valuable assets, and helps to ensure responsible technology usage. Focus areas of cloud security This section describes the areas for you to focus on when you plan, implement, and manage security for your applications, systems, and data. The recommendations in each principle of this pillar are relevant to one or more of these focus areas. Throughout the rest of this document, the recommendations specify the corresponding security focus areas to provide further clarity and context.
Focus area | Activities and components | Related Google Cloud products, capabilities, and solutions
Infrastructure security | Secure network infrastructure. Encrypt data in transit and at rest. Control traffic flow. Secure IaaS and PaaS services. Protect against unauthorized access. | Firewall Policies, VPC Service Controls, Google Cloud Armor, Cloud Next Generation Firewall, Secure Web Proxy
Identity and access management | Use authentication, authorization, and access controls. Manage cloud identities. Manage identity and access management policies. | Cloud Identity, Google's Identity and Access Management (IAM) service, Workforce Identity Federation, Workload Identity Federation
Data security | Store data in Google Cloud securely. Control access to the data. Discover and classify the data. Design necessary controls, such as encryption, access controls, and data loss prevention. Protect data at rest, in transit, and in use. | Google's IAM service, Sensitive Data Protection, VPC Service Controls, Cloud KMS, Confidential Computing
AI and ML security | Apply security controls at different layers of the AI and ML infrastructure and pipeline. Ensure model safety. | Google's SAIF, Model Armor
Security operations (SecOps) | Adopt a modern SecOps platform and set of practices, for effective incident management, threat detection, and response processes. Monitor systems and applications continuously for security events. | Google Security Operations
Application security | Secure applications against software vulnerabilities and attacks. | Artifact Registry, Artifact Analysis, Binary Authorization, Assured Open Source Software, Google Cloud Armor, Web Security Scanner
Cloud governance, risk, and compliance | Establish policies, procedures, and controls to manage cloud resources effectively and securely. | Organization Policy Service, Cloud Asset Inventory, Security Command Center Enterprise, Resource Manager
Logging, auditing, and monitoring | Analyze logs to identify potential threats. Track and record system activities for compliance and security analysis. | Cloud Logging, Cloud Monitoring, Cloud Audit Logs, VPC Flow Logs
Contributors. Authors: Wade Holmes | Global Solutions Director; Hector Diaz | Cloud Security Architect; Carlos Leonardo Rosario | Google Cloud Security Specialist; John Bacon | Partner Solutions Architect; Sachin Kalra | Global Security Solution Manager. Other contributors: Anton Chuvakin | Security Advisor, Office of the CISO; Daniel Lees | Cloud Security Architect; Filipe Gracio, PhD | Customer Engineer; Gary Harmson | Customer Engineer; Gino Pelliccia | Principal Architect; Jose Andrade | Enterprise Infrastructure Customer Engineer; Kumar Dhanagopal | Cross-Product Solution Developer; Laura Hyatt | Enterprise Cloud Architect; Marwan Al Shawi | Partner Customer Engineer; Nicolas Pintaux | Customer Engineer, Application Modernization Specialist; Noah McDonald | Cloud Security Consultant; Osvaldo Costa | Networking Specialist Customer Engineer; Radhika Kanakam | Senior Program Manager, Cloud GTM; Susan Wu | Outbound Product Manager. Implement security by design This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations to incorporate robust security features, controls, and practices into the design of your cloud applications, services, and platforms. From ideation to operations, security is more effective when it's embedded as an integral part of every stage of your design process. Principle overview As explained in An Overview of Google's Commitment to Secure by Design, secure by default and secure by design are often used interchangeably, but they represent distinct approaches to building secure systems. Both approaches aim to minimize vulnerabilities and enhance security, but they differ in scope and implementation: Secure by default: focuses on ensuring that a system's default settings are set to a secure mode, minimizing the need for users or administrators to take actions to secure the system. This approach aims to provide a baseline level of security for all users.
Secure by design: emphasizes proactively incorporating security considerations throughout a system's development lifecycle. This approach is about anticipating potential threats and vulnerabilities early and making design choices that mitigate risks. This approach involves using secure coding practices, conducting security reviews, and embedding security throughout the design process. The secure-by-design approach is an overarching philosophy that guides the development process and helps to ensure that security isn't an afterthought but is an integral part of a system's design. Recommendations To implement the secure by design principle for your cloud workloads, consider the recommendations in the following sections: Choose system components that help to secure your workloads Build a layered security approach Use hardened and attested infrastructure and services Encrypt data at rest and in transit Choose system components that help to secure your workloads This recommendation is relevant to all of the focus areas. A fundamental decision for effective security is the selection of robust system components—including both hardware and software components—that constitute your platform, solution, or service. To reduce the security attack surface and limit potential damage, you must also carefully consider the deployment patterns of these components and their configurations. In your application code, we recommend that you use straightforward, safe, and reliable libraries, abstractions, and application frameworks in order to eliminate classes of vulnerabilities. To scan for vulnerabilities in software libraries, you can use third-party tools. You can also use Assured Open Source Software, which helps to reduce risks to your software supply chain by using open source software (OSS) packages that Google uses and secures. Your infrastructure must use networking, storage, and compute options that support safe operation and align with your security requirements and risk acceptance levels. Infrastructure security is important for both internet-facing and internal workloads. For information about other Google solutions that support this recommendation, see Implement shift-left security. Build a layered security approach This recommendation is relevant to the following focus areas: AI and ML security Infrastructure security Identity and access management Data security We recommend that you implement security at each layer of your application and infrastructure stack by applying a defense-in-depth approach. Use the security features in each component of your platform. To limit access and identify the boundaries of the potential impact (that is, the blast radius) in the event of a security incident, do the following: Simplify your system's design to accommodate flexibility where possible. Document the security requirements of each component. Incorporate a robust secured mechanism to address resiliency and recovery requirements. When you design the security layers, perform a risk assessment to determine the security features that you need in order to meet internal security requirements and external regulatory requirements. We recommend that you use an industry-standard risk assessment framework that applies to cloud environments and that is relevant to your regulatory requirements. For example, the Cloud Security Alliance (CSA) provides the Cloud Controls Matrix (CCM). Your risk assessment provides you with a catalog of risks and corresponding security controls to mitigate them. 
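To make the idea of a risk catalog concrete, the following minimal Python sketch shows one way to record each identified risk, the security layer it belongs to, and the controls that mitigate it, and to flag any risk that still lacks a control. The risk names, layers, and controls are hypothetical illustrations, not output from a Google Cloud service or from any specific assessment framework:

```python
# Minimal sketch of a risk catalog produced by a risk assessment.
# The risks, layers, and controls below are illustrative examples only.

RISK_CATALOG = {
    "public exposure of object storage": {
        "layer": "data security",
        "controls": ["uniform bucket-level access", "organization policy constraint"],
    },
    "compromised developer credentials": {
        "layer": "identity and access management",
        "controls": ["multi-factor authentication", "least-privilege IAM roles"],
    },
    "data exfiltration from managed services": {
        "layer": "infrastructure security",
        "controls": [],  # not yet mitigated; should be surfaced for review
    },
}


def unmitigated_risks(catalog: dict) -> list[str]:
    """Return the risks that have no corresponding security control."""
    return [risk for risk, entry in catalog.items() if not entry["controls"]]


if __name__ == "__main__":
    for risk in unmitigated_risks(RISK_CATALOG):
        print(f"Risk without a mitigating control: {risk}")
```

Keeping the catalog in a reviewable, machine-readable form makes it easier to revisit after each assessment and to confirm that every risk maps to at least one control at the appropriate layer.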
When you perform the risk assessment, remember that you have a shared responsibility arrangement with your cloud provider. Therefore, your risks in a cloud environment differ from your risks in an on-premises environment. For example, in an on-premises environment, you need to mitigate vulnerabilities to your hardware stack. In contrast, in a cloud environment, the cloud provider bears these risks. Also, remember that the boundaries of shared responsibilities differ between IaaS, PaaS, and SaaS services for each cloud provider. After you identify potential risks, you must design and create a mitigation plan that uses technical, administrative, and operational controls, as well as contractual protections and third-party attestations. In addition, a threat modeling method, such as the OWASP application threat modeling method, helps you to identify potential gaps and suggest actions to address the gaps. Use hardened and attested infrastructure and services This recommendation is relevant to all of the focus areas. A mature security program mitigates new vulnerabilities as described in security bulletins. The security program should also provide remediation to fix vulnerabilities in existing deployments and secure your VM and container images. You can use hardening guides that are specific to the OS and application of your images, as well as benchmarks like the ones provided by the Center for Internet Security (CIS). If you use custom images for your Compute Engine VMs, you need to patch the images yourself. Alternatively, you can use Google-provided curated OS images, which are patched regularly. To run containers on Compute Engine VMs, use Google-curated Container-Optimized OS images. Google regularly patches and updates these images. If you use GKE, we recommend that you enable node auto-upgrades so that Google updates your cluster nodes with the latest patches. Google manages GKE control planes, which are automatically updated and patched. To further reduce the attack surface of your containers, you can use distroless images. Distroless images are ideal for security-sensitive applications, microservices, and situations where minimizing the image size and attack surface is paramount. For sensitive workloads, use Shielded VM, which prevents malicious code from being loaded during the VM boot cycle. Shielded VM instances provide boot security, monitor integrity, and use the Virtual Trusted Platform Module (vTPM). To help secure SSH access, OS Login lets your employees connect to your VMs by using Identity and Access Management (IAM) permissions as the source of truth instead of relying on SSH keys. Therefore, you don't need to manage SSH keys throughout your organization. OS Login ties an administrator's access to their employee lifecycle, so when employees change roles or leave your organization, their access is revoked with their account. OS Login also supports Google two-factor authentication, which adds an extra layer of security against account takeover attacks. In GKE, application instances run within Docker containers. To enable a defined risk profile and to restrict employees from making changes to containers, ensure that your containers are stateless and immutable. The immutability principle means that your employees don't modify the container or access it interactively. If the container must be changed, you build a new image and redeploy that image. Enable SSH access to the underlying containers only in specific debugging scenarios.
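As a lightweight illustration of the immutability guidance above, the following Python sketch flags container image references that rely on mutable tags instead of immutable digests. The image list and the check itself are hypothetical examples rather than output from a Google Cloud tool; a similar check could run in a CI pipeline before deployment so that only digest-pinned images reach production:

```python
# Illustrative check: prefer image references pinned to a digest (immutable)
# over mutable tags such as "latest". The sample image list is hypothetical.

SAMPLE_IMAGES = [
    "europe-docker.pkg.dev/example-project/app/frontend@sha256:a1b2c3d4",
    "europe-docker.pkg.dev/example-project/app/backend:latest",
    "europe-docker.pkg.dev/example-project/app/worker:v1.4.2",
]


def is_pinned_to_digest(image_ref: str) -> bool:
    """An image reference is immutable if it is addressed by digest."""
    return "@sha256:" in image_ref


def report_mutable_references(image_refs: list[str]) -> list[str]:
    """Return image references that rely on mutable tags."""
    return [ref for ref in image_refs if not is_pinned_to_digest(ref)]


if __name__ == "__main__":
    for ref in report_mutable_references(SAMPLE_IMAGES):
        print(f"Not pinned to a digest (mutable reference): {ref}")
```

Pinning by digest complements the practice of rebuilding and redeploying rather than modifying running containers, because the deployed artifact is guaranteed to be exactly the image that was built and scanned.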
To help globally secure configurations across your environment, you can use organization policies to set constraints or guardrails on resources that affect the behavior of your cloud assets. For example, you can define the following organization policies and apply them either globally across a Google Cloud organization or selectively at the level of a folder or project: Disable external IP address allocation to VMs. Restrict resource creation to specific geographical locations. Disable the creation of Service Accounts or their keys. Encrypt data at rest and in transit This recommendation is relevant to the following focus areas: Infrastructure security Data security Data encryption is a foundational control to protect sensitive information, and it's a key part of data governance. An effective data protection strategy includes access control, data segmentation and geographical residency, auditing, and encryption implementation that's based on a careful assessment of requirements. By default, Google Cloud encrypts customer data that's stored at rest, with no action required from you. In addition to default encryption, Google Cloud provides options for envelope encryption and encryption key management. You must identify the solutions that best fit your requirements for key generation, storage, and rotation, whether you're choosing the keys for your storage, for compute, or for big data workloads. For example, Customer-managed encryption keys (CMEKs) can be created in Cloud Key Management Service (Cloud KMS). The CMEKs can be either software-based or HSM-protected to meet your regulatory or compliance requirements, such as the need to rotate encryption keys regularly. Cloud KMS Autokey lets you automate the provisioning and assignment of CMEKs. In addition, you can bring your own keys that are sourced from a third-party key management system by using Cloud External Key Manager (Cloud EKM). We strongly recommend that data be encrypted in-transit. Google encrypts and authenticates data in transit at one or more network layers when data moves outside physical boundaries that aren't controlled by Google or on behalf of Google. All VM-to-VM traffic within a VPC network and between peered VPC networks is encrypted. You can use MACsec for encryption of traffic over Cloud Interconnect connections. IPsec provides encryption for traffic over Cloud VPN connections. You can protect application-to-application traffic in the cloud by using security features like TLS and mTLS configurations in Apigee and Cloud Service Mesh for containerized applications. By default, Google Cloud encrypts data at rest and data in transit across the network. However, data isn't encrypted by default while it's in use in memory. If your organization handles confidential data, you need to mitigate any threats that undermine the confidentiality and integrity of either the application or the data in system memory. To mitigate these threats, you can use Confidential Computing, which provides a trusted execution environment for your compute workloads. For more information, see Confidential VM overview. Implement zero trust This principle in the security pillar of the Google Cloud Architecture Framework helps you ensure comprehensive security across your cloud workloads. 
The principle of zero trust emphasizes the following practices: Eliminating implicit trust Applying the principle of least privilege to access control Enforcing explicit validation of all access requests Adopting an assume-breach mindset to enable continuous verification and security posture monitoring Principle overview The zero-trust model shifts the security focus from perimeter-based security to an approach where no user or device is considered to be inherently trustworthy. Instead, every access request must be verified, regardless of its origin. This approach involves authenticating and authorizing every user and device, validating their context (location and device posture), and granting least privilege access to only the necessary resources. Implementing the zero-trust model helps your organization enhance its security posture by minimizing the impact of potential breaches and protecting sensitive data and applications against unauthorized access. The zero-trust model helps you ensure confidentiality, integrity, and availability of data and resources in the cloud. Recommendations To implement the zero-trust model for your cloud workloads, consider the recommendations in the following sections: Secure your network Verify every access attempt explicitly Monitor and maintain your network Secure your network This recommendation is relevant to the following focus area: Infrastructure security. Transitioning from conventional perimeter-based security to a zero-trust model requires multiple steps. Your organization might have already integrated certain zero-trust controls into its security posture. However, a zero-trust model isn't a singular product or solution. Instead, it's a holistic integration of multiple security layers and best practices. This section describes recommendations and techniques to implement zero trust for network security. Access control: Enforce access controls based on user identity and context by using solutions like Chrome Enterprise Premium and Identity-Aware Proxy (IAP). By doing this, you shift security from the network perimeter to individual users and devices. This approach enables granular access control and reduces the attack surface. Network security: Secure network connections between your on-premises, Google Cloud, and multicloud environments. Use the private connectivity methods from Cloud Interconnect and IPsec VPNs. To help secure access to Google Cloud services and APIs, use Private Service Connect. To help secure outbound access from workloads deployed on GKE Enterprise, use Cloud Service Mesh egress gateways. Network design: Prevent potential security risks by deleting default networks in existing projects and disabling the creation of default networks in new projects. To avoid conflicts, plan your network and IP address allocation carefully. To enforce effective access control, limit the number of Virtual Private Cloud (VPC) networks per project. Segmentation: Isolate workloads but maintain centralized network management. To segment your network, use Shared VPC. Define firewall policies and rules at the organization, folder, and VPC network levels. To prevent data exfiltration, establish secure perimeters around sensitive data and services by using VPC Service Controls. Perimeter security: Protect against DDoS attacks and web application threats. To protect against threats, use Google Cloud Armor. Configure security policies to allow, deny, or redirect traffic at the Google Cloud edge. 
Automation: Automate infrastructure provisioning by embracing infrastructure as code (IaC) principles and by using tools like Terraform, Jenkins, and Cloud Build. IaC helps to ensure consistent security configurations, simplified deployments, and rapid rollbacks in case of issues. Secure foundation: Establish a secure application environment by using the Enterprise foundations blueprint. This blueprint provides prescriptive guidance and automation scripts to help you implement security best practices and configure your Google Cloud resources securely. Verify every access attempt explicitly This recommendation is relevant to the following focus areas: Identity and access management Security operations (SecOps) Logging, auditing, and monitoring Implement strong authentication and authorization mechanisms for any user, device, or service that attempts to access your cloud resources. Don't rely on location or network perimeter as a security control. Don't automatically trust any user, device, or service, even if they are already inside the network. Instead, every attempt to access resources must be rigorously authenticated and authorized. You must implement strong identity verification measures, such as multi-factor authentication (MFA). You must also ensure that access decisions are based on granular policies that consider various contextual factors like user role, device posture, and location. To implement this recommendation, use the following methods, tools, and technologies: Unified identity management: Ensure consistent identity management across your organization by using a single identity provider (IdP). Google Cloud supports federation with most IdPs, including on-premises Active Directory. Federation lets you extend your existing identity management infrastructure to Google Cloud and enable single sign-on (SSO) for users. If you don't have an existing IdP, consider using Cloud Identity Premium or Google Workspace. Limited service account permissions: Use service accounts carefully, and adhere to the principle of least privilege. Grant only the necessary permissions required for each service account to perform its designated tasks. Use Workload Identity Federation for applications that run on Google Kubernetes Engine (GKE) or run outside Google Cloud to access resources securely. Robust processes: Update your identity processes to align with cloud security best practices. To help ensure compliance with regulatory requirements, implement identity governance to track access, risks, and policy violations. Review and update your existing processes for granting and auditing access-control roles and permissions. Strong authentication: Implement SSO for user authentication and implement MFA for privileged accounts. Google Cloud supports various MFA methods, including Titan Security Keys, for enhanced security. For workload authentication, use OAuth 2.0 or signed JSON Web Tokens (JWTs). Least privilege: Minimize the risk of unauthorized access and data breaches by enforcing the principles of least privilege and separation of duties. Avoid overprovisioning user access. Consider implementing just-in-time privileged access for sensitive operations. Logging: Enable audit logging for administrator and data access activities. For analysis and threat detection, scan the logs by using Security Command Center Enterprise or Google Security Operations. Configure appropriate log retention policies to balance security needs with storage costs. 
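To make the workload-authentication guidance in the preceding list concrete, the following Python sketch uses the PyJWT library to validate a signed JWT, checking the signature, expiry, and audience before the caller is trusted. It is a minimal illustration that uses a symmetric test key and hypothetical claim values; production workloads typically rely on asymmetric keys and an established identity provider:

```python
# Minimal sketch of explicit verification of a signed JWT.
# Requires PyJWT (pip install pyjwt). The secret, audience, and claims
# below are illustrative placeholders only.
import datetime

import jwt

SHARED_SECRET = "test-secret-do-not-use-in-production"
EXPECTED_AUDIENCE = "https://example-api.internal"


def issue_token(subject: str) -> str:
    """Issue a short-lived signed token for a workload (illustrative)."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": subject,
        "aud": EXPECTED_AUDIENCE,
        "iat": now,
        "exp": now + datetime.timedelta(minutes=5),
    }
    return jwt.encode(claims, SHARED_SECRET, algorithm="HS256")


def verify_token(token: str) -> dict:
    """Reject the request unless the signature, expiry, and audience check out."""
    return jwt.decode(
        token,
        SHARED_SECRET,
        algorithms=["HS256"],
        audience=EXPECTED_AUDIENCE,
        options={"require": ["exp", "aud", "sub"]},
    )


if __name__ == "__main__":
    token = issue_token("service-account:payments-frontend")
    print(verify_token(token)["sub"])
```

In a zero-trust design, this kind of explicit verification happens on every request, regardless of where the request originates, rather than only at the network edge.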
Monitor and maintain your network This recommendation is relevant to the following focus areas: Logging, auditing, and monitoring Application security Security operations (SecOps) Infrastructure security When you plan and implement security measures, assume that an attacker is already inside your environment. This proactive approach involves using the following multiple tools and techniques to provide visibility into your network: Centralized logging and monitoring: Collect and analyze security logs from all of your cloud resources through centralized logging and monitoring. Establish baselines for normal network behavior, detect anomalies, and identify potential threats. Continuously analyze network traffic flows to identify suspicious patterns and potential attacks. Insights into network performance and security: Use tools like Network Analyzer. Monitor traffic for unusual protocols, unexpected connections, or sudden spikes in data transfer, which could indicate malicious activity. Vulnerability scanning and remediation: Regularly scan your network and applications for vulnerabilities. Use Web Security Scanner, which can automatically identify vulnerabilities in your Compute Engine instances, containers, and GKE clusters. Prioritize remediation based on the severity of vulnerabilities and their potential impact on your systems. Intrusion detection: Monitor network traffic for malicious activity and automatically block or get alerts for suspicious events by using Cloud IDS and Cloud NGFW intrusion prevention service. Security analysis: Consider implementing Google SecOps to correlate security events from various sources, provide real-time analysis of security alerts, and facilitate incident response. Consistent configurations: Ensure that you have consistent security configurations across your network by using configuration management tools. Implement shift-left security This principle in the security pillar of the Google Cloud Architecture Framework helps you identify practical controls that you can implement early in the software development lifecycle to improve your security posture. It provides recommendations that help you implement preventive security guardrails and post-deployment security controls. Principle overview Shift-left security means adopting security practices early in the software development lifecycle. This principle has the following goals: Avoid security defects before system changes are made. Implement preventive security guardrails and adopt practices such as infrastructure as code (IaC), policy as code, and security checks in the CI/CD pipeline. You can also use other platform-specific capabilities like Organization Policy Service and hardened GKE clusters in Google Cloud. Detect and fix security bugs early, fast, and reliably after any system changes are committed. Adopt practices like code reviews, post-deployment vulnerability scanning, and security testing. The Implement security by design and shift-left security principles are related but they differ in scope. The security-by-design principle helps you to avoid fundamental design flaws that would require re-architecting the entire system. For example, a threat-modeling exercise reveals that the current design doesn't include an authorization policy, and all users would have the same level of access without it. Shift-left security helps you to avoid implementation defects (bugs and misconfigurations) before changes are applied, and it enables fast, reliable fixes after deployment. 
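As a simple illustration of catching misconfigurations before changes are applied, the following Python sketch shows the kind of policy-as-code check that can run as a step in a CI/CD pipeline. The resource format and the rule are hypothetical, not Organization Policy Service or Policy Controller syntax; in practice, such rules are typically expressed with tools like Policy Controller or Open Policy Agent:

```python
# Illustrative pre-deployment check: fail the pipeline if a proposed firewall
# rule allows SSH from the open internet. The resource format is hypothetical.
import sys

PROPOSED_FIREWALL_RULES = [
    {"name": "allow-internal-ssh", "source_ranges": ["10.0.0.0/8"], "ports": ["22"]},
    {"name": "allow-public-ssh", "source_ranges": ["0.0.0.0/0"], "ports": ["22"]},
]


def violations(rules: list[dict]) -> list[str]:
    """Return the names of rules that expose SSH (port 22) to 0.0.0.0/0."""
    return [
        rule["name"]
        for rule in rules
        if "0.0.0.0/0" in rule["source_ranges"] and "22" in rule["ports"]
    ]


if __name__ == "__main__":
    failed = violations(PROPOSED_FIREWALL_RULES)
    for name in failed:
        print(f"Policy violation: rule '{name}' allows SSH from 0.0.0.0/0")
    # A non-zero exit code makes the CI/CD step fail, blocking the change.
    sys.exit(1 if failed else 0)
```

Because the check runs before deployment, a violation blocks the change rather than being discovered after the fact, which is the essence of the shift-left approach.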
Recommendations To implement the shift-left security principle for your cloud workloads, consider the recommendations in the following sections: Adopt preventive security controls Automate provisioning and management of cloud resources Automate secure application releases Ensure that application deployments follow approved processes Scan for known vulnerabilities before application deployment Monitor your application code for known vulnerabilities Adopt preventive security controls This recommendation is relevant to the following focus areas: Identity and access management Cloud governance, risk, and compliance Preventive security controls are crucial for maintaining a strong security posture in the cloud. These controls help you proactively mitigate risks. You can prevent misconfigurations and unauthorized access to resources, enable developers to work efficiently, and help ensure compliance with industry standards and internal policies. Preventive security controls are more effective when they're implemented by using infrastructure as code (IaC). With IaC, preventive security controls can include more customized checks on the infrastructure code before changes are deployed. When combined with automation, preventive security controls can run as part of your CI/CD pipeline's automatic checks. The following products and Google Cloud capabilities can help you implement preventive controls in your environment: Organization Policy Service constraints: configure predefined and custom constraints with centralized control. VPC Service Controls: create perimeters around your Google Cloud services. Identity and Access Management (IAM), Privileged Access Manager, and principal access boundary policies: restrict access to resources. Policy Controller and Open Policy Agent (OPA): enforce IaC constraints in your CI/CD pipeline and avoid cloud misconfigurations. IAM lets you authorize who can act on specific resources based on permissions. For more information, see Access control for organization resources with IAM. Organization Policy Service lets you set restrictions on resources to specify how they can be configured. For example, you can use an organization policy to do the following: Limit resource sharing based on domain. Limit the use of service accounts. Restrict the physical location of newly created resources. In addition to using organizational policies, you can restrict access to resources by using the following methods: Tags with IAM: assign a tag to a set of resources and then set the access definition for the tag itself, rather than defining the access permissions on each resource. IAM Conditions: define conditional, attribute-based access control for resources. Defense in depth: use VPC Service Controls to further restrict access to resources. For more information about resource management, see Decide a resource hierarchy for your Google Cloud landing zone. Automate provisioning and management of cloud resources This recommendation is relevant to the following focus areas: Application security Cloud governance, risk, and compliance Automating the provisioning and management of cloud resources and workloads is more effective when you also adopt declarative IaC, as opposed to imperative scripting. IaC isn't a security tool or practice on its own, but it helps you to improve the security of your platform. Adopting IaC lets you create repeatable infrastructure and provides your operations team with a known good state. 
IaC also improves the efficiency of rollbacks, audit changes, and troubleshooting. When combined with CI/CD pipelines and automation, IaC also gives you the ability to adopt practices such as policy as code with tools like OPA. You can audit infrastructure changes over time and run automatic checks on the infrastructure code before changes are deployed. To automate the infrastructure deployment, you can use tools like Config Controller, Terraform, Jenkins, and Cloud Build. To help you build a secure application environment using IaC and automation, Google Cloud provides the enterprise foundations blueprint. This blueprint is Google's opinionated design that follows all of our recommended practices and configurations. The blueprint provides step-by-step instructions to configure and deploy your Google Cloud topology by using Terraform and Cloud Build. You can modify the scripts of the enterprise foundations blueprint to configure an environment that follows Google recommendations and meets your own security requirements. You can further build on the blueprint with additional blueprints or design your own automation. The Google Cloud Architecture Center provides other blueprints that can be implemented on top of the enterprise foundations blueprint. The following are a few examples of these blueprints: Deploy an enterprise developer platform on Google Cloud Deploy a secured serverless architecture using Cloud Run Build and deploy generative AI and machine learning models in an enterprise Import data from Google Cloud into a secured BigQuery data warehouse Deploy network monitoring and telemetry capabilities in Google Cloud Automate secure application releases This recommendation is relevant to the following focus area: Application security. Without automated tools, it can be difficult to deploy, update, and patch complex application environments to meet consistent security requirements. We recommend that you build automated CI/CD pipelines for your software development lifecycle (SDLC). Automated CI/CD pipelines help you to remove manual errors, provide standardized development feedback loops, and enable efficient product iterations. Continuous delivery is one of the best practices that the DORA framework recommends. Automating application releases by using CI/CD pipelines helps to improve your ability to detect and fix security bugs early, fast, and reliably. For example, you can scan for security vulnerabilities automatically when artifacts are created, narrow the scope of security reviews, and roll back to a known and safe version. You can also define policies for different environments (such as development, test, or production environments) so that only verified artifacts are deployed. To help you automate application releases and embed security checks in your CI/CD pipeline, Google Cloud provides multiple tools including Cloud Build, Cloud Deploy, Web Security Scanner, and Binary Authorization. To establish a process that verifies multiple security requirements in your SDLC, use the Supply-chain Levels for Software Artifacts (SLSA) framework, which has been defined by Google. SLSA requires security checks for source code, build process, and code provenance. Many of these requirements can be included in an automated CI/CD pipeline. To understand how Google applies these practices internally, see Google Cloud's approach to change. Ensure that application deployments follow approved processes This recommendation is relevant to the following focus area: Application security. 
If an attacker compromises your CI/CD pipeline, your entire application stack can be affected. To help secure the pipeline, you should enforce an established approval process before you deploy the code into production. If you use Google Kubernetes Engine (GKE), GKE Enterprise, or Cloud Run, you can establish an approval process by using Binary Authorization. Binary Authorization attaches configurable signatures to container images. These signatures (also called attestations) help to validate the image. At deployment time, Binary Authorization uses these attestations to determine whether a process was completed. For example, you can use Binary Authorization to do the following: Verify that a specific build system or CI pipeline created a container image. Validate that a container image is compliant with a vulnerability signing policy. Verify that a container image passes the criteria for promotion to the next deployment environment, such as from development to QA. By using Binary Authorization, you can enforce that only trusted code runs on your target platforms. Scan for known vulnerabilities before application deployment This recommendation is relevant to the following focus area: Application security. We recommend that you use automated tools that can continuously perform vulnerability scans on application artifacts before they're deployed to production. For containerized applications, use Artifact Analysis to automatically run vulnerability scans for container images. Artifact Analysis scans new images when they're uploaded to Artifact Registry. The scan extracts information about the system packages in the container. After the initial scan, Artifact Analysis continuously monitors the metadata of scanned images in Artifact Registry for new vulnerabilities. When Artifact Analysis receives new and updated vulnerability information from vulnerability sources, it does the following: Updates the metadata of the scanned images to keep them up to date. Creates new vulnerability occurrences for new notes. Deletes vulnerability occurrences that are no longer valid. Monitor your application code for known vulnerabilities This recommendation is relevant to the following focus area: Application security. Use automated tools to constantly monitor your application code for known vulnerabilities such as the OWASP Top 10. For more information about Google Cloud products and features that support OWASP Top 10 mitigation techniques, see OWASP Top 10 mitigation options on Google Cloud. Use Web Security Scanner to help identify security vulnerabilities in your App Engine, Compute Engine, and GKE web applications. The scanner crawls your application, follows all of the links within the scope of your starting URLs, and attempts to exercise as many user inputs and event handlers as possible. It can automatically scan for and detect common vulnerabilities, including cross-site scripting, code injection, mixed content, and outdated or insecure libraries. Web Security Scanner provides early identification of these types of vulnerabilities without distracting you with false positives. In addition, if you use GKE Enterprise to manage fleets of Kubernetes clusters, the security posture dashboard shows opinionated, actionable recommendations to help improve your fleet's security posture. Implement preemptive cyber defense This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations to build robust cyber-defense programs as part of your overall security strategy. 
This principle emphasizes the use of threat intelligence to proactively guide your efforts across the core cyber-defense functions, as defined in The Defender's Advantage: A guide to activating cyber defense. Principle overview When you defend your system against cyber attacks, you have a significant, underutilized advantage against attackers. As the founder of Mandiant states, "You should know more about your business, your systems, your topology, your infrastructure than any attacker does. This is an incredible advantage." To help you use this inherent advantage, this document provides recommendations about proactive and strategic cyber-defense practices that are mapped to the Defender's Advantage framework. Recommendations To implement preemptive cyber defense for your cloud workloads, consider the recommendations in the following sections: Integrate the functions of cyber defense Use the Intelligence function in all aspects of cyber defense Understand and capitalize on your defender's advantage Validate and improve your defenses continuously Manage and coordinate cyber-defense efforts Integrate the functions of cyber defense This recommendation is relevant to all of the focus areas. The Defender's Advantage framework identifies six critical functions of cyber defense: Intelligence, Detect, Respond, Validate, Hunt, and Mission Control. Each function focuses on a unique part of the cyber-defense mission, but these functions must be well-coordinated and work together to provide an effective defense. Focus on building a robust and integrated system where each function supports the others. If you need a phased approach for adoption, consider the following suggested order. Depending on your current cloud maturity, resource topology, and specific threat landscape, you might want to prioritize certain functions. Intelligence: The Intelligence function guides all the other functions. Understanding the threat landscape—including the most likely attackers, their tactics, techniques, and procedures (TTPs), and the potential impact—is critical to prioritizing actions across the entire program. The Intelligence function is responsible for stakeholder identification, definition of intelligence requirements, data collection, analysis and dissemination, automation, and the creation of a cyber threat profile. Detect and Respond: These functions make up the core of active defense, which involves identifying and addressing malicious activity. These functions are necessary to act on the intelligence that's gathered by the intelligence function. The Detect function requires a methodical approach that aligns detections to attacker TTPs and ensures robust logging. The Respond function must focus on initial triage, data collection, and incident remediation. Validate: The Validate function is a continuous process that provides assurance that your security control ecosystem is up-to-date and operating as designed. This function ensures that your organization understands the attack surface, knows where vulnerabilities exist, and measures the effectiveness of controls. Security validation is also an important component of the detection engineering lifecycle and must be used to identify detection gaps and create new detections. Hunt: The Hunt function involves proactively searching for active threats within an environment. This function must be implemented when your organization has a baseline level of maturity in the Detect and Respond functions. 
The Hunt function expands the detection capabilities and helps to identify gaps and weaknesses in controls. The Hunt function must be based on specific threats. This advanced function benefits from a foundation of robust intelligence, detection, and response capabilities. Mission Control: The Mission Control function acts as the central hub that connects all of the other functions. This function is responsible for strategy, communication, and decisive action across your cyber-defense program. It ensures that all of the functions are working together and that they're aligned with your organization's business goals. You must focus on establishing a clear understanding of the purpose of the Mission Control function before you use it to connect the other functions. Use the Intelligence function in all aspects of cyber defense This recommendation is relevant to all of the focus areas. This recommendation highlights the Intelligence function as a core part of a strong cyber-defense program. Threat intelligence provides knowledge about threat actors, their TTPs, and indicators of compromise (IOCs). This knowledge should inform and prioritize actions across all cyber-defense functions. An intelligence-driven approach helps you align defenses to meet the threats that are most likely to affect your organization. This approach also helps with efficient allocation and prioritization of resources. The following Google Cloud products and features help you take advantage of threat intelligence to guide your security operations. Use these features to identify and prioritize potential threats, vulnerabilities, and risks, and then plan and implement appropriate actions. Google Security Operations (Google SecOps) helps you store and analyze security data centrally. Use Google SecOps to map logs into a common model, enrich the logs, and link the logs to timelines for a comprehensive view of attacks. You can also create detection rules, set up IoC matching, and perform threat-hunting activities. The platform also provides curated detections, which are predefined and managed rules to help identify threats. Google SecOps can also integrate with Mandiant frontline intelligence. Google SecOps uniquely integrates industry-leading AI, along with threat intelligence from Mandiant and Google VirusTotal. This integration is critical for threat evaluation and understanding who is targeting your organization and the potential impact. Security Command Center Enterprise, which is powered by Google AI, enables security professionals to efficiently assess, investigate, and respond to security issues across multiple cloud environments. The security professionals who can benefit from Security Command Center include security operations center (SOC) analysts, vulnerability and posture analysts, and compliance managers. Security Command Center Enterprise enriches security data, assesses risk, and prioritizes vulnerabilities. This solution provides teams with the information that they need to address high-risk vulnerabilities and to remediate active threats. Chrome Enterprise Premium offers threat and data protection, which helps to protect users from exfiltration risks and prevents malware from getting onto enterprise-managed devices. Chrome Enterprise Premium also provides visibility into unsafe or potentially unsafe activity that can happen within the browser. Network monitoring, through tools like Network Intelligence Center, provides visibility into network performance. 
Network monitoring can also help you detect unusual traffic patterns or detect data transfer amounts that might indicate an attack or data exfiltration attempt. Understand and capitalize on your defender's advantage This recommendation is relevant to all of the focus areas. As mentioned earlier, you have an advantage over attackers when you have a thorough understanding of your business, systems, topology, and infrastructure. To capitalize on this knowledge advantage, utilize this data about your environments during cyberdefense planning. Google Cloud provides the following features to help you proactively gain visibility to identify threats, understand risks, and respond in a timely manner to mitigate potential damage: Chrome Enterprise Premium helps you enhance security for enterprise devices by protecting users from exfiltration risks. It extends Sensitive Data Protection services into the browser, and prevents malware. It also offers features like protection against malware and phishing to help prevent exposure to unsafe content. In addition, it gives you control over the installation of extensions to help prevent unsafe or unvetted extensions. These capabilities help you establish a secure foundation for your operations. Security Command Center Enterprise provides a continuous risk engine that offers comprehensive and ongoing risk analysis and management. The risk engine feature enriches security data, assesses risk, and prioritizes vulnerabilities to help fix issues quickly. Security Command Center enables your organization to proactively identify weaknesses and implement mitigations. Google SecOps centralizes security data and provides enriched logs with timelines. This enables defenders to proactively identify active compromises and adapt defenses based on attackers' behavior. Network monitoring helps identify irregular network activity that might indicate an attack and it provides early indicators that you can use to take action. To help proactively protect your data from theft, continuously monitor for data exfiltration and use the provided tools. Validate and improve your defenses continuously This recommendation is relevant to all of the focus areas. This recommendation emphasizes the importance of targeted testing and continuous validation of controls to understand strengths and weaknesses across the entire attack surface. This includes validating the effectiveness of controls, operations, and staff through methods like the following: Penetration tests Red-blue team and purple team exercises Tabletop exercises You must also actively search for threats and use the results to improve detection and visibility. Use the following tools to continuously test and validate your defenses against real-world threats: Security Command Center Enterprise provides a continuous risk engine to evaluate vulnerabilities and prioritize remediation, which enables ongoing evaluation of your overall security posture. By prioritizing issues, Security Command Center Enterprise helps you to ensure that resources are used effectively. Google SecOps offers threat-hunting and curated detections that let you proactively identify weaknesses in your controls. This capability enables continuous testing and improvement of your ability to detect threats. Chrome Enterprise Premium provides threat and data protection features that can help you to address new and evolving threats, and continuously update your defenses against exfiltration risks and malware. 
Cloud Next Generation Firewall (Cloud NGFW) provides network monitoring and data-exfiltration monitoring. These capabilities can help you to validate the effectiveness of your current security posture and identify potential weaknesses. Data-exfiltration monitoring helps you to validate the strength of your organization's data protection mechanisms and make proactive adjustments where necessary. When you integrate threat findings from Cloud NGFW with Security Command Center and Google SecOps, you can optimize network-based threat detection, optimize threat response, and automate playbooks. For more information about this integration, see Unifying Your Cloud Defenses: Security Command Center & Cloud NGFW Enterprise. Manage and coordinate cyber-defense efforts This recommendation is relevant to all of the focus areas. As described earlier in Integrate the functions of cyber defense, the Mission Control function interconnects the other functions of the cyber-defense program. This function enables coordination and unified management across the program. It also helps you coordinate with other teams that don't work on cybersecurity. The Mission Control function promotes empowerment and accountability, facilitates agility and expertise, and drives responsibility and transparency. The following products and features can help you implement the Mission Control function: Security Command Center Enterprise acts as a central hub for coordinating and managing your cyber-defense operations. It brings tools, teams, and data together, along with the built-in Google SecOps response capabilities. Security Command Center provides clear visibility into your organization's security state and enables the identification of security misconfigurations across different resources. Google SecOps provides a platform for teams to respond to threats by mapping logs and creating timelines. You can also define detection rules and search for threats. Google Workspace and Chrome Enterprise Premium help you to manage and control end-user access to sensitive resources. You can define granular access controls based on user identity and the context of a request. Network monitoring provides insights into the performance of network resources. You can import network monitoring insights into Security Command Center and Google SecOps for centralized monitoring and correlation against other timeline based data points. This integration helps you to detect and respond to potential network usage changes caused by nefarious activity. Data-exfiltration monitoring helps to identify possible data loss incidents. With this feature, you can efficiently mobilize an incident response team, assess damages, and limit further data exfiltration. You can also improve current policies and controls to ensure data protection. Product summary The following table lists the products and features that are described in this document and maps them to the associated recommendations and security capabilities. Google Cloud product Applicable recommendations Google SecOps Use the Intelligence function in all aspects of cyber defense: Enables threat hunting and IoC matching, and integrates with Mandiant for comprehensive threat evaluation. Understand and capitalize on your defender's advantage: Provides curated detections and centralizes security data for proactive compromise identification. Validate and improve your defenses continuously: Enables continuous testing and improvement of threat detection capabilities. 
Manage and coordinate cyber-defense efforts through Mission Control: Provides a platform for threat response, log analysis, and timeline creation. Security Command Center Enterprise Use the Intelligence function in all aspects of cyber defense: Uses AI to assess risk, prioritize vulnerabilities, and provide actionable insights for remediation. Understand and capitalize on your defender's advantage: Offers comprehensive risk analysis, vulnerability prioritization, and proactive identification of weaknesses. Validate and improve your defenses continuously: Provides ongoing security posture evaluation and resource prioritization. Manage and coordinate cyber-defense efforts through Mission Control: Acts as a central hub for managing and coordinating cyber-defense operations. Chrome Enterprise Premium Use the Intelligence function in all aspects of cyber defense: Protects users from exfiltration risks, prevents malware, and provides visibility into unsafe browser activity. Understand and capitalize on your defender's advantage: Enhances security for enterprise devices through data protection, malware prevention, and control over extensions. Validate and improve your defenses continuously: Addresses new and evolving threats through continuous updates to defenses against exfiltration risks and malware. Manage and coordinate cyber-defense efforts through Mission Control: Manage and control end-user access to sensitive resources, including granular access controls. Google Workspace Manage and coordinate cyber-defense efforts through Mission Control: Manage and control end-user access to sensitive resources, including granular access controls. Network Intelligence Center Use the Intelligence function in all aspects of cyber defense: Provides visibility into network performance and detects unusual traffic patterns or data transfers. Cloud NGFW Validate and improve your defenses continuously: Optimizes network-based threat detection and response through integration with Security Command Center and Google SecOps. Use AI securely and responsibly This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations to help you secure your AI systems. These recommendations are aligned with Google's Secure AI Framework (SAIF), which provides a practical approach to address the security and risk concerns of AI systems. SAIF is a conceptual framework that aims to provide industry-wide standards for building and deploying AI responsibly. Principle overview To help ensure that your AI systems meet your security, privacy, and compliance requirements, you must adopt a holistic strategy that starts with the initial design and extends to deployment and operations. You can implement this holistic strategy by applying the six core elements of SAIF. Google uses AI to enhance security measures, such as identifying threats, automating security tasks, and improving detection capabilities, while keeping humans in the loop for critical decisions. Google emphasizes a collaborative approach to advancing AI security. This approach involves partnering with customers, industries, and governments to enhance the SAIF guidelines and offer practical, actionable resources. The recommendations to implement this principle are grouped within the following sections: Recommendations to use AI securely Recommendations for AI governance Recommendations to use AI securely To use AI securely, you need both foundational security controls and AI-specific security controls. 
This section provides an overview of recommendations to ensure that your AI and ML deployments meet the security, privacy, and compliance requirements of your organization. For an overview of architectural principles and recommendations that are specific to AI and ML workloads in Google Cloud, see the AI and ML perspective in the Architecture Framework. Define clear goals and requirements for AI usage This recommendation is relevant to the following focus areas: Cloud governance, risk, and compliance AI and ML security This recommendation aligns with the SAIF element about contextualizing AI system risks in the surrounding business processes. When you design and evolve AI systems, it's important to understand your specific business goals, risks, and compliance requirements. Keep data secure and prevent loss or mishandling This recommendation is relevant to the following focus areas: Infrastructure security Identity and access management Data security Application security AI and ML security This recommendation aligns with the following SAIF elements: Expand strong security foundations to the AI ecosystem. This element includes data collection, storage, access control, and protection against data poisoning. Contextualize AI system risks. Emphasize data security to support business objectives and compliance. Keep AI pipelines secure and robust against tampering This recommendation is relevant to the following focus areas: Infrastructure security Identity and access management Data security Application security AI and ML security This recommendation aligns with the following SAIF elements: Expand strong security foundations to the AI ecosystem. As a key element of establishing a secure AI system, secure your code and model artifacts. Adapt controls for faster feedback loops. Because it's important for mitigation and incident response, track your assets and pipeline runs. Deploy apps on secure systems using secure tools and artifacts This recommendation is relevant to the following focus areas: Infrastructure security Identity and access management Data security Application security AI and ML security Using secure systems and validated tools and artifacts in AI-based applications aligns with the SAIF element about expanding strong security foundations to the AI ecosystem and supply chain. This recommendation can be addressed through the following steps: Implement a secure environment for ML training and deployment Use validated container images Apply Supply-chain Levels for Software Artifacts (SLSA) guidelines Protect and monitor inputs This recommendation is relevant to the following focus areas: Logging, auditing, and monitoring Security operations AI and ML security This recommendation aligns with the SAIF element about extending detection and response to bring AI into an organization's threat universe. To prevent issues, it's critical to manage prompts for generative AI systems, monitor inputs, and control user access. Recommendations for AI governance All of the recommendations in this section are relevant to the following focus area: Cloud governance, risk, and compliance. Google Cloud offers a robust set of tools and services that you can use to build responsible and ethical AI systems. We also offer a framework of policies, procedures, and ethical considerations that can guide the development, deployment, and use of AI systems.
As reflected in our recommendations, Google's approach for AI governance is guided by the following principles: Fairness Transparency Accountability Privacy Security Use fairness indicators Vertex AI can detect bias during the data collection or post-training evaluation process. Vertex AI provides model evaluation metrics like data bias and model bias to help you evaluate your model for bias. These metrics are related to fairness across different categories like race, gender, and class. However, interpreting statistical deviations isn't a straightforward exercise, because differences across categories might not be a result of bias or a signal of harm. Use Vertex Explainable AI To understand how the AI models make decisions, use Vertex Explainable AI. This feature helps you to identify potential biases that might be hidden in the model's logic. This explainability feature is integrated with BigQuery ML and Vertex AI, which provide feature-based explanations. You can either perform explainability in BigQuery ML or register your model in Vertex AI and perform explainability in Vertex AI. Track data lineage Track the origin and transformation of data that's used in your AI systems. This tracking helps you understand the data's journey and identify potential sources of bias or error. Data lineage is a Dataplex feature that lets you track how data moves through your systems: where it comes from, where it's passed to, and what transformations are applied to it. Establish accountability Establish clear responsibility for the development, deployment, and outcomes of your AI systems. Use Cloud Logging to log key events and decisions made by your AI systems. The logs provide an audit trail to help you understand how the system is performing and identify areas for improvement. Use Error Reporting to systematically analyze errors made by the AI systems. This analysis can reveal patterns that point to underlying biases or areas where the model needs further refinement. Implement differential privacy During model training, add noise to the data in order to make it difficult to identify individual data points but still enable the model to learn effectively. With SQL in BigQuery, you can transform the results of a query with differentially private aggregations. Use AI for security This principle in the security pillar of the Google Cloud Architecture Framework provides recommendations to use AI to help you improve the security of your cloud workloads. Because of the increasing number and sophistication of cyber attacks, it's important to take advantage of AI's potential to help improve security. AI can help to reduce the number of threats, reduce the manual effort required by security professionals, and help compensate for the scarcity of experts in the cyber-security domain. Principle overview Use AI capabilities to improve your existing security systems and processes. You can use Gemini in Security as well as the intrinsic AI capabilities that are built into Google Cloud services. These AI capabilities can transform security by providing assistance across every stage of the security lifecycle. For example, you can use AI to do the following: Analyze and explain potentially malicious code without reverse engineering. Reduce repetitive work for cyber-security practitioners. Use natural language to generate queries and interact with security event data. Surface contextual information. Offer recommendations for quick responses. Aid in the remediation of events. 
Summarize high-priority alerts for misconfigurations and vulnerabilities, highlight potential impacts, and recommend mitigations. Levels of security autonomy AI and automation can help you achieve better security outcomes when you're dealing with ever-evolving cyber-security threats. By using AI for security, you can achieve greater levels of autonomy to detect and prevent threats and improve your overall security posture. Google defines four levels of autonomy when you use AI for security, and they outline the increasing role of AI in assisting and eventually leading security tasks: Manual: Humans run all of the security tasks (prevent, detect, prioritize, and respond) across the entire security lifecycle. Assisted: AI tools, like Gemini, boost human productivity by summarizing information, generating insights, and making recommendations. Semi-autonomous: AI takes primary responsibility for many security tasks and delegates to humans only when required. Autonomous: AI acts as a trusted assistant that drives the security lifecycle based on your organization's goals and preferences, with minimal human intervention. Recommendations The following sections describe the recommendations for using AI for security. The sections also indicate how the recommendations align with Google's Secure AI Framework (SAIF) core elements and how they're relevant to the levels of security autonomy. Enhance threat detection and response with AI Simplify security for experts and non-experts Automate time-consuming security tasks with AI Incorporate AI into risk management and governance processes Implement secure development practices for AI systems Note: For more information about Google Cloud's overall vision for using Gemini across our products to accelerate AI for security, see the whitepaper Google Cloud's Product Vision for AI-Powered Security. Enhance threat detection and response with AI This recommendation is relevant to the following focus areas: Security operations (SecOps) Logging, auditing, and monitoring AI can analyze large volumes of security data, offer insights into threat actor behavior, and automate the analysis of potentially malicious code. This recommendation is aligned with the following SAIF elements: Extend detection and response to bring AI into your organization's threat universe. Automate defenses to keep pace with existing and new threats. Depending on your implementation, this recommendation can be relevant to the following levels of autonomy: Assisted: AI helps with threat analysis and detection. Semi-autonomous: AI takes on more responsibility for the security task. Google Threat Intelligence, which uses AI to analyze threat actor behavior and malicious code, can help you implement this recommendation. Simplify security for experts and non-experts This recommendation is relevant to the following focus areas: Security operations (SecOps) Cloud governance, risk, and compliance AI-powered tools can summarize alerts and recommend mitigations, and these capabilities can make security more accessible to a wider range of personnel. This recommendation is aligned with the following SAIF elements: Automate defenses to keep pace with existing and new threats. Harmonize platform-level controls to ensure consistent security across the organization. Depending on your implementation, this recommendation can be relevant to the following levels of autonomy: Assisted: AI helps you to improve the accessibility of security information. 
Semi-autonomous: AI helps to make security practices more effective for all users. Gemini in Security Command Center can provide summaries of alerts for misconfigurations and vulnerabilities. Automate time-consuming security tasks with AI This recommendation is relevant to the following focus areas: Infrastructure security Security operations (SecOps) Application security AI can automate tasks such as analyzing malware, generating security rules, and identifying misconfigurations. These capabilities can help to reduce the workload on security teams and accelerate response times. This recommendation is aligned with the SAIF element about automating defenses to keep pace with existing and new threats. Depending on your implementation, this recommendation can be relevant to the following levels of autonomy: Assisted: AI helps you to automate tasks. Semi-autonomous: AI takes primary responsibility for security tasks, and only requests human assistance when needed. Gemini in Google SecOps can help to automate high-toil tasks by assisting analysts, retrieving relevant context, and making recommendations for next steps. Incorporate AI into risk management and governance processes This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. You can use AI to build a model inventory and risk profiles. You can also use AI to implement policies for data privacy, cyber risk, and third-party risk. This recommendation is aligned with the SAIF element about contextualizing AI system risks in surrounding business processes. Depending on your implementation, this recommendation can be relevant to the semi-autonomous level of autonomy. At this level, AI can orchestrate security agents that run processes to achieve your custom security goals. Implement secure development practices for AI systems This recommendation is relevant to the following focus areas: Application security AI and ML security You can use AI for secure coding, cleaning training data, and validating tools and artifacts. This recommendation is aligned with the SAIF element about expanding strong security foundations to the AI ecosystem. This recommendation can be relevant to all levels of security autonomy, because a secure AI system needs to be in place before AI can be used effectively for security. The recommendation is most relevant to the assisted level, where security practices are augmented by AI. To implement this recommendation, follow the Supply-chain Levels for Software Artifacts (SLSA) guidelines for AI artifacts and use validated container images. Meet regulatory, compliance, and privacy needs This principle in the security pillar of the Google Cloud Architecture Framework helps you identify and meet regulatory, compliance, and privacy requirements for cloud deployments. These requirements influence many of the decisions that you need to make about the security controls that must be used for your workloads in Google Cloud. Principle overview Meeting regulatory, compliance, and privacy needs is an unavoidable challenge for all businesses. Cloud regulatory requirements depend on several factors, including the following: The laws and regulations that apply to your organization's physical locations The laws and regulations that apply to your customers' physical locations Your industry's regulatory requirements Privacy regulations define how you can obtain, process, store, and manage your users' data. You own your own data, including the data that you receive from your users. 
Therefore, many privacy controls are your responsibility, including controls for cookies, session management, and obtaining user permission. The recommendations to implement this principle are grouped within the following sections: Recommendations to address organizational risks Recommendations to address regulatory and compliance obligations Recommendations to manage your data sovereignty Recommendations to address privacy requirements Recommendations to address organizational risks This section provides recommendations to help you identify and address risks to your organization. Identify risks to your organization This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Before you create and deploy resources on Google Cloud, complete a risk assessment. This assessment should determine the security features that you need to meet your internal security requirements and external regulatory requirements. Your risk assessment provides you with a catalog of organization-specific risks, and informs you about your organization's capability to detect and counteract security threats. You must perform a risk analysis immediately after deployment and whenever there are changes in your business needs, regulatory requirements, or threats to your organization. As mentioned in the Implement security by design principle, your security risks in a cloud environment differ from on-premises risks. This difference is due to the shared responsibility model in the cloud, which varies by service (IaaS, PaaS, or SaaS) and your usage. Use a cloud-specific risk assessment framework like the Cloud Controls Matrix (CCM). Use threat modeling, like OWASP application threat modeling, to identify and address vulnerabilities. For expert help with risk assessments, contact your Google account representative or consult Google Cloud's partner directory. After you catalog your risks, you must determine how to address them—that is, whether you want to accept, avoid, transfer, or mitigate the risks. For mitigation controls that you can implement, see the next section about mitigating your risks. Mitigate your risks This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. When you adopt new public cloud services, you can mitigate risks by using technical controls, contractual protections, and third-party verifications or attestations. Technical controls are features and technologies that you use to protect your environment. These include built-in cloud security controls like firewalls and logging. Technical controls can also include using third-party tools to reinforce or support your security strategy. There are two categories of technical controls: You can implement Google Cloud's security controls to help you mitigate the risks that apply to your environment. For example, you can secure the connection between your on-premises networks and your cloud networks by using Cloud VPN and Cloud Interconnect. Google has robust internal controls and auditing to protect against insider access to customer data. Our audit logs provide you with near real-time logs of Google administrator access on Google Cloud. Contractual protections refer to the legal commitments made by us regarding Google Cloud services. Google is committed to maintaining and expanding our compliance portfolio. The Cloud Data Processing Addendum (CDPA) describes our commitments with regard to the processing and security of your data. 
The CDPA also outlines the access controls that limit Google support engineers' access to customers' environments, and it describes our rigorous logging and approval process. We recommend that you review Google Cloud's contractual controls with your legal and regulatory experts, and verify that they meet your requirements. If you need more information, contact your technical account representative. Third-party verifications or attestations refer to having a third-party vendor audit the cloud provider to ensure that the provider meets compliance requirements. For example, to learn about Google Cloud attestations with regard to the ISO/IEC 27017 guidelines, see ISO/IEC 27017 - Compliance. To view the current Google Cloud certifications and letters of attestation, see Compliance resource center. Recommendations to address regulatory and compliance obligations A typical compliance journey has three stages: assessment, gap remediation, and continual monitoring. This section provides recommendations that you can use during each of these stages. Assess your compliance needs This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Compliance assessment starts with a thorough review of all of your regulatory obligations and how your business is implementing them. To help you with your assessment of Google Cloud services, use the Compliance resource center. This site provides information about the following: Service support for various regulations Google Cloud certifications and attestations To better understand the compliance lifecycle at Google and how your requirements can be met, you can contact sales to request help from a Google compliance specialist. Or, you can contact your Google Cloud account manager to request a compliance workshop. For more information about tools and resources that you can use to manage security and compliance for Google Cloud workloads, see Assuring Compliance in the Cloud. Automate implementation of compliance requirements This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. To help you stay in compliance with changing regulations, determine whether you can automate how you implement compliance requirements. You can use both compliance-focused capabilities that Google Cloud provides and blueprints that use recommended configurations for a particular compliance regime. Assured Workloads builds on the controls within Google Cloud to help you meet your compliance obligations. Assured Workloads lets you do the following: Select your compliance regime. Then, the tool automatically sets the baseline personnel access controls for the selected regime. Set the location for your data by using organization policies so that your data at rest and your resources remain only in that region. Select the key-management option (such as the key rotation period) that best meets your security and compliance requirements. Select the access criteria for Google support personnel to meet certain regulatory requirements such as FedRAMP Moderate. For example, you can select whether Google support personnel have completed the appropriate background checks. Use Google-owned and Google-managed encryption keys that are FIPS 140-2 compliant and support FedRAMP Moderate compliance. For an added layer of control and for the separation of duties, you can use customer-managed encryption keys (CMEK). For more information about keys, see Encrypt data at rest and in transit.
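To make the key-management and data-location controls described above more concrete, the following is a minimal sketch (not part of Assured Workloads itself) that creates a Cloud Storage bucket in a fixed region and sets a customer-managed encryption key as the bucket's default. It uses the google-cloud-storage Python client; the project, bucket, region, and key names are placeholders for illustration.

```python
from google.cloud import storage

# All identifiers below are placeholders for illustration.
PROJECT_ID = "example-project"
KMS_KEY = (
    "projects/example-project/locations/europe-west1/"
    "keyRings/example-ring/cryptoKeys/example-key"
)

client = storage.Client(project=PROJECT_ID)
bucket = client.bucket("example-regulated-bucket")
# Objects written without an explicit key are encrypted with this CMEK.
# The key must be in the same location as the bucket.
bucket.default_kms_key_name = KMS_KEY
# Pin the bucket to a single region to keep data at rest in that location.
client.create_bucket(bucket, location="europe-west1")
```

For a setup like this to work, the Cloud Storage service agent needs permission to use the key (the Cloud KMS CryptoKey Encrypter/Decrypter role).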
In addition to Assured Workloads, you can use Google Cloud blueprints that are relevant to your compliance regime. You can modify these blueprints to incorporate your security policies into your infrastructure deployments. To help you build an environment that supports your compliance requirements, Google's blueprints and solution guides include recommended configurations and provide Terraform modules. The following table lists blueprints that address security and alignment with compliance requirements. Requirement Blueprints and solution guides FedRAMP Google Cloud FedRAMP implementation guide Setting up a FedRAMP Aligned Three-Tier Workload on Google Cloud HIPAA Protecting healthcare data on Google Cloud Setting up a HIPAA-aligned workload using Data Protection Toolkit Monitor your compliance This recommendation is relevant to the following focus areas: Cloud governance, risk, and compliance Logging, monitoring, and auditing Most regulations require that you monitor particular activities, which include access-related activities. To help with your monitoring, you can use the following: Access Transparency: View near real-time logs when Google Cloud administrators access your content. Firewall Rules Logging: Record TCP and UDP connections inside a VPC network for any rules that you create. These logs can be useful for auditing network access or for providing early warning that the network is being used in an unapproved manner. VPC Flow Logs: Record network traffic flows that are sent or received by VM instances. Security Command Center Premium: Monitor for compliance with various standards. OSSEC (or another open source tool): Log the activity of individuals who have administrator access to your environment. Key Access Justifications: View the reasons for a key-access request. Security Command Center notifications: Get alerts when noncompliance issues occur. For example, get alerts when users disable two-step verification or when service accounts are over-privileged. You can also set up automatic remediation for specific notifications. Recommendations to manage your data sovereignty This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Data sovereignty provides you with a mechanism to prevent Google from accessing your data. You approve access only for provider behaviors that you agree are necessary. For example, you can manage your data sovereignty in the following ways: Store and manage encryption keys outside the cloud. Grant access to these keys based on detailed access justifications. Protect data in use by using Confidential Computing. Manage your operational sovereignty This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Operational sovereignty provides you with assurances that Google personnel can't compromise your workloads. For example, you can manage operational sovereignty in the following ways: Restrict the deployment of new resources to specific provider regions. Limit Google personnel access based on predefined attributes such as their citizenship or geographic location. Manage software sovereignty This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Software sovereignty provides you with assurances that you can control the availability of your workloads and run them wherever you want. Also, you can have this control without being dependent on or locked in to a single cloud provider.
Software sovereignty includes the ability to survive events that require you to quickly change where your workloads are deployed and what level of outside connection is allowed. For example, to help you manage your software sovereignty, Google Cloud supports hybrid and multicloud deployments. In addition, GKE Enterprise lets you manage and deploy your applications in both cloud environments and on-premises environments. If you choose on-premises deployments for data sovereignty reasons, Google Distributed Cloud is a combination of hardware and software that brings Google Cloud into your data center. Recommendations to address privacy requirements Google Cloud includes the following controls that promote privacy: Default encryption of all data when it's at rest, when it's in transit, and while it's being processed. Safeguards against insider access. Support for numerous privacy regulations. The following recommendations address additional controls that you can implement. For more information, see Privacy Resource Center. Control data residency This recommendation is relevant to the following focus area: Cloud governance, risk, and compliance. Data residency describes where your data is stored at rest. Data residency requirements vary based on system design objectives, industry regulatory concerns, national law, tax implications, and even culture. Controlling data residency starts with the following: Understand your data type and its location. Determine what risks exist for your data and which laws and regulations apply. Control where your data is stored or where it goes. To help you comply with data residency requirements, Google Cloud lets you control where your data is stored, how it's accessed, and how it's processed. You can use resource location policies to restrict where resources are created and to limit where data is replicated between regions. You can use the location property of a resource to identify where the service is deployed and who maintains it. For more information, see Resource locations supported services. Classify your confidential data This recommendation is relevant to the following focus area: Data security. You must define what data is confidential, and then ensure that the confidential data is properly protected. Confidential data can include credit card numbers, addresses, phone numbers, and other personally identifiable information (PII). Using Sensitive Data Protection, you can set up appropriate classifications. You can then tag and tokenize your data before you store it in Google Cloud. Additionally, Dataplex offers a catalog service that provides a platform for storing, managing, and accessing your metadata. For more information and an example of data classification and de-identification, see De-identification and re-identification of PII using Sensitive Data Protection. Lock down access to sensitive data This recommendation is relevant to the following focus areas: Data security Identity and access management Place sensitive data in its own service perimeter by using VPC Service Controls. VPC Service Controls improves your ability to mitigate the risk of unauthorized copying or transferring of data (data exfiltration) from Google-managed services. With VPC Service Controls, you configure security perimeters around the resources of your Google-managed services to control the movement of data across the perimeter. Set Google Identity and Access Management (IAM) access controls for that data. 
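As a small illustration of the classification step described above, the following sketch uses the Sensitive Data Protection (DLP API) Python client to inspect a text sample for a couple of common infoTypes. The project ID, sample text, and infoType selection are assumptions for illustration; a real pipeline would typically inspect, tag, or de-identify data before storing it in Google Cloud.

```python
from google.cloud import dlp_v2

# Placeholders for illustration.
PROJECT_ID = "example-project"
SAMPLE_TEXT = "Contact Alex at alex@example.com or 555-0100."

dlp = dlp_v2.DlpServiceClient()
response = dlp.inspect_content(
    request={
        "parent": f"projects/{PROJECT_ID}",
        "inspect_config": {
            "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
            "min_likelihood": dlp_v2.Likelihood.POSSIBLE,
            "include_quote": True,
        },
        "item": {"value": SAMPLE_TEXT},
    }
)
for finding in response.result.findings:
    # Each finding names the detected infoType and the matched text.
    print(finding.info_type.name, finding.quote)
```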
Configure multifactor authentication (MFA) for all users who require access to sensitive data. Shared responsibilities and shared fate on Google Cloud This document describes the differences between the shared responsibility model and shared fate in Google Cloud. It discusses the challenges and nuances of the shared responsibility model. This document describes what shared fate is and how we partner with our customers to address cloud security challenges. Understanding the shared responsibility model is important when determining how to best protect your data and workloads on Google Cloud. The shared responsibility model describes the tasks that you have when it comes to security in the cloud and how these tasks are different for cloud providers. Understanding shared responsibility, however, can be challenging. The model requires an in-depth understanding of each service you utilize, the configuration options that each service provides, and what Google Cloud does to secure the service. Every service has a different configuration profile, and it can be difficult to determine the best security configuration. Google believes that the shared responsibility model stops short of helping cloud customers achieve better security outcomes. Instead of shared responsibility, we believe in shared fate. Shared fate includes us building and operating a trusted cloud platform for your workloads. We provide best practice guidance and secured, attested infrastructure code that you can use to deploy your workloads in a secure way. We release solutions that combine various Google Cloud services to solve complex security problems and we offer innovative insurance options to help you measure and mitigate the risks that you must accept. Shared fate involves us more closely interacting with you as you secure your resources on Google Cloud. Shared responsibility You're the expert in knowing the security and regulatory requirements for your business, and knowing the requirements for protecting your confidential data and resources. When you run your workloads on Google Cloud, you must identify the security controls that you need to configure in Google Cloud to help protect your confidential data and each workload. To decide which security controls to implement, you must consider the following factors: Your regulatory compliance obligations Your organization's security standards and risk management plan Security requirements of your customers and your vendors Defined by workloads Traditionally, responsibilities are defined based on the type of workload that you're running and the cloud services that you require. Cloud services include the following categories: Cloud service Description Infrastructure as a service (IaaS) IaaS services include Compute Engine, Cloud Storage, and networking services such as Cloud VPN, Cloud Load Balancing, and Cloud DNS. IaaS provides compute, storage, and network services on demand with pay-as-you-go pricing. You can use IaaS if you plan on migrating an existing on-premises workload to the cloud using lift-and-shift, or if you want to run your application on particular VMs, using specific databases or network configurations. In IaaS, the bulk of the security responsibilities are yours, and our responsibilities are focused on the underlying infrastructure and physical security. Platform as a service (PaaS) PaaS services include App Engine, Google Kubernetes Engine (GKE), and BigQuery. PaaS provides the runtime environment that you can develop and run your applications in. 
You can use PaaS if you're building an application (such as a website), and want to focus on development, not on the underlying infrastructure. In PaaS, we're responsible for more controls than in IaaS. Typically, this split varies by the services and features that you use. You share responsibility with us for application-level controls and IAM management. You remain responsible for your data security and client protection. Software as a service (SaaS) SaaS applications include Google Workspace, Google Security Operations, and third-party SaaS applications that are available in Google Cloud Marketplace. SaaS provides online applications that you can subscribe to or pay for in some way. You can use SaaS applications when your enterprise doesn't have the internal expertise or business requirement to build the application itself, but does require the ability to process workloads. In SaaS, we own the bulk of the security responsibilities. You remain responsible for your access controls and the data that you choose to store in the application. Function as a service (FaaS) or serverless FaaS provides the platform for developers to run small, single-purpose code (called functions) that run in response to particular events. You would use FaaS when you want particular things to occur based on a particular event. For example, you might create a function that runs whenever data is uploaded to Cloud Storage so that it can be classified (see the example sketch below). FaaS has a shared responsibility model similar to that of SaaS. Cloud Run functions is a FaaS offering. The following diagram shows the cloud services and defines how responsibilities are shared between the cloud provider and customer. As the diagram shows, the cloud provider always remains responsible for the underlying network and infrastructure, and customers always remain responsible for their access policies and data. Defined by industry and regulatory framework Various industries have regulatory frameworks that define the security controls that must be in place. When you move your workloads to the cloud, you must understand the following: Which security controls are your responsibility Which security controls are available as part of the cloud offering Which default security controls are inherited Inherited security controls (such as our default encryption and infrastructure controls) are controls that you can present as part of your evidence of your security posture to auditors and regulators. For example, the Payment Card Industry Data Security Standard (PCI DSS) defines regulations for payment processors. When you move your business to the cloud, the responsibilities for these regulations are shared between you and your cloud service provider (CSP). To understand how PCI DSS responsibilities are shared between you and Google Cloud, see Google Cloud: PCI DSS Shared Responsibility Matrix. As another example, in the United States, the Health Insurance Portability and Accountability Act (HIPAA) has set standards for handling electronic protected health information (PHI). These responsibilities are also shared between the CSP and you. For more information on how Google Cloud meets our responsibilities under HIPAA, see HIPAA - Compliance. Other industries (for example, finance or manufacturing) also have regulations that define how data can be gathered, processed, and stored. For more information about shared responsibility related to these, and how Google Cloud meets our responsibilities, see Compliance resource center.
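Returning to the FaaS example referenced earlier (a function that classifies data as it is uploaded to Cloud Storage), the following is a minimal sketch of such an event-driven function using the open source Functions Framework for Python. The function name is a placeholder, and the actual classification logic is left as a comment because it depends on your workload.

```python
import functions_framework


@functions_framework.cloud_event
def classify_uploaded_object(cloud_event):
    """Triggered by a Cloud Storage object-finalize event."""
    data = cloud_event.data
    bucket = data["bucket"]
    name = data["name"]
    print(f"New object to classify: gs://{bucket}/{name}")
    # A real implementation might call the Cloud Vision API or
    # Sensitive Data Protection here to classify the object's content.
```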
Defined by location Depending on your business scenario, you might need to consider your responsibilities based on the location of your business offices, your customers, and your data. Different countries and regions have created regulations that inform how you can process and store your customer's data. For example, if your business has customers who reside in the European Union, your business might need to abide by the requirements that are described in the General Data Protection Regulation (GDPR), and you might be obligated to keep your customer data in the EU itself. In this circumstance, you are responsible for ensuring that the data that you collect remains in the Google Cloud regions in the EU. For more information about how we meet our GDPR obligations, see GDPR and Google Cloud. For information about the requirements related to your region, see Compliance offerings. If your scenario is particularly complicated, we recommend speaking with our sales team or one of our partners to help you evaluate your security responsibilities. Challenges for shared responsibility Though shared responsibility helps define the security roles that you or the cloud provider has, relying on shared responsibility can still create challenges. Consider the following scenarios: Most cloud security breaches are the direct result of misconfiguration (listed as number 3 in the Cloud Security Alliance's Pandemic 11 Report) and this trend is expected to increase. Cloud products are constantly changing, and new ones are constantly being launched. Keeping up with constant change can seem overwhelming. Customers need cloud providers to provide them with opinionated best practices to help keep up with the change, starting with best practices by default and having a baseline secure configuration. Though dividing items by cloud services is helpful, many enterprises have workloads that require multiple cloud services types. In this circumstance, you must consider how various security controls for these services interact, including whether they overlap between and across services. For example, you might have an on-premises application that you're migrating to Compute Engine, use Google Workspace for corporate email, and also run BigQuery to analyze data to improve your products. Your business and markets are constantly changing; as regulations change, as you enter new markets, or as you acquire other companies. Your new markets might have different requirements, and your new acquisition might host their workloads on another cloud. To manage the constant changes, you must constantly re-assess your risk profile and be able to implement new controls quickly. How and where to manage your data encryption keys is an important decision that ties with your responsibilities to protect your data. The option that you choose depends on your regulatory requirements, whether you're running a hybrid cloud environment or still have an on-premises environment, and the sensitivity of the data that you're processing and storing. Incident management is an important, and often overlooked, area where your responsibilities and the cloud provider responsibilities aren't easily defined. Many incidents require close collaboration and support from the cloud provider to help investigate and mitigate them. Other incidents can result from poorly configured cloud resources or stolen credentials, and ensuring that you meet the best practices for securing your resources and accounts can be quite challenging. 
Advanced persistent threats (APTs) and new vulnerabilities can impact your workloads in ways that you might not consider when you start your cloud transformation. Ensuring that you remain up-to-date on the changing landscape, and who is responsible for threat mitigation is difficult, particularly if your business doesn't have a large security team. Shared fate We developed shared fate in Google Cloud to start addressing the challenges that the shared responsibility model doesn't address. Shared fate focuses on how all parties can better interact to continuously improve security. Shared fate builds on the shared responsibility model because it views the relationship between cloud provider and customer as an ongoing partnership to improve security. Shared fate is about us taking responsibility for making Google Cloud more secure. Shared fate includes helping you get started with a secured landing zone and being clear, opinionated, and transparent about recommended security controls, settings, and associated best practices. It includes helping you better quantify and manage your risk with cyber-insurance, using our Risk Protection Program. Using shared fate, we want to evolve from the standard shared responsibility framework to a better model that helps you secure your business and build trust in Google Cloud. The following sections describe various components of shared fate. Help getting started A key component of shared fate is the resources that we provide to help you get started, in a secure configuration in Google Cloud. Starting with a secure configuration helps reduce the issue of misconfigurations which is the root cause of most security breaches. Our resources include the following: Enterprise foundations blueprint that discuss top security concerns and our top recommendations. Secure blueprints that let you deploy and maintain secure solutions using infrastructure as code (IaC). Blueprints have our security recommendations enabled by default. Many blueprints are created by Google security teams and managed as products. This support means that they're updated regularly, go through a rigorous testing process, and receive attestations from third-party testing groups. Blueprints include the enterprise foundations blueprint and the secured data warehouse blueprint. Architecture Framework best practices that address the top recommendations for building security into your designs. The Architecture Framework includes a security section and a community zone that you can use to connect with experts and peers. Landing zone navigation guides that step you through the top decisions that you need to make to build a secure foundation for your workloads, including resource hierarchy, identity onboarding, security and key management, and network structure. Risk Protection Program Shared fate also includes the Risk Protection Program (currently in preview), which helps you use the power of Google Cloud as a platform to manage risk, rather than just seeing cloud workloads as another source of risk that you need to manage. The Risk Protection Program is a collaboration between Google Cloud and two leading cyber insurance companies, Munich Re and Allianz Global & Corporate Speciality. The Risk Protection Program includes Risk Manager, which provides data-driven insights that you can use to better understand your cloud security posture. If you're looking for cyber insurance coverage, you can share these insights from Risk Manager directly with our insurance partners to obtain a quote. 
For more information, see Google Cloud Risk Protection Program now in Preview. Help with deployment and governance Shared fate also helps with your continued governance of your environment. For example, we focus efforts on products such as the following: Assured Workloads, which helps you meet your compliance obligations. Security Command Center Premium, which uses threat intelligence, threat detection, web scanning, and other advanced methods to monitor and detect threats. It also provides a way to resolve many of these threats quickly and automatically. Organization policies and resource settings that let you configure policies throughout your hierarchy of folders and projects. Policy Intelligence tools that provide you with insights on access to accounts and resources. Confidential Computing, which allows you to encrypt data in use. Sovereign Controls by Partners, which is available in certain countries and helps enforce data residency requirements. Putting shared responsibility and shared fate into practice As part of your planning process, consider the following actions to help you understand and implement appropriate security controls: Create a list of the type of workloads that you will host in Google Cloud, and whether they require IaaS, PaaS, and SaaS services. You can use the shared responsibility diagram as a checklist to ensure that you know the security controls that you need to consider. Create a list of regulatory requirements that you must comply with, and access resources in the Compliance resource center that relate to those requirements. Review the list of available blueprints and architectures in the Architecture Center for the security controls that you require for your particular workloads. The blueprints provide a list of recommended controls and the IaC code that you require to deploy that architecture. Use the landing zone documentation and the recommendations in the enterprise foundations guide to design a resource hierarchy and network architecture that meets your requirements. You can use the opinionated workload blueprints, like the secured data warehouse, to accelerate your development process. After you deploy your workloads, verify that you're meeting your security responsibilities using services such as the Risk Manager, Assured Workloads, Policy Intelligence tools, and Security Command Center Premium. For more information, see the CISO's Guide to Cloud Transformation paper. What's next Review the core security principles. Keep up to date with shared fate resources. Familiarize yourself with available blueprints, including the security foundations blueprint and workload examples like the secured data warehouse. Read more about shared fate. Read about our underlying secure infrastructure in the Google infrastructure security design overview. Read how to implement NIST Cybersecurity Framework best practices in Google Cloud (PDF). Google Cloud Architecture Framework: Reliability The reliability pillar in the Google Cloud Architecture Framework provides principles and recommendations to help you design, deploy, and manage reliable workloads in Google Cloud. This document is intended for cloud architects, developers, platform engineers, administrators, and site reliability engineers. Reliability is a system's ability to consistently perform its intended functions within the defined conditions and maintain uninterrupted service. Best practices for reliability include redundancy, fault-tolerant design, monitoring, and automated recovery processes. 
As a part of reliability, resilience is the system's ability to withstand and recover from failures or unexpected disruptions, while maintaining performance. Google Cloud features, like multi-regional deployments, automated backups, and disaster recovery solutions, can help you improve your system's resilience. Reliability is important to your cloud strategy for many reasons, including the following: Minimal downtime: Downtime can lead to lost revenue, decreased productivity, and damage to reputation. Resilient architectures can help ensure that systems can continue to function during failures or recover efficiently from failures. Enhanced user experience: Users expect seamless interactions with technology. Resilient systems can help maintain consistent performance and availability, and they provide reliable service even during high demand or unexpected issues. Data integrity: Failures can cause data loss or data corruption. Resilient systems implement mechanisms such as backups, redundancy, and replication to protect data and ensure that it remains accurate and accessible. Business continuity: Your business relies on technology for critical operations. Resilient architectures can help ensure continuity after a catastrophic failure, which enables business functions to continue without significant interruptions and supports a swift recovery. Compliance: Many industries have regulatory requirements for system availability and data protection. Resilient architectures can help you to meet these standards by ensuring systems remain operational and secure. Lower long-term costs: Resilient architectures require upfront investment, but resiliency can help to reduce costs over time by preventing expensive downtime, avoiding reactive fixes, and enabling more efficient resource use. Organizational mindset To make your systems reliable, you need a plan and an established strategy. This strategy must include education and the authority to prioritize reliability alongside other initiatives. Set a clear expectation that the entire organization is responsible for reliability, including development, product management, operations, platform engineering, and site reliability engineering (SRE). Even the business-focused groups, like marketing and sales, can influence reliability. Every team must understand the reliability targets and risks of their applications. The teams must be accountable to these requirements. Conflicts between reliability and regular product feature development must be prioritized and escalated accordingly. Plan and manage reliability holistically, across all your functions and teams. Consider setting up a Cloud Centre of Excellence (CCoE) that includes a reliability pillar. For more information, see Optimize your organization's cloud journey with a Cloud Center of Excellence. Focus areas for reliability The activities that you perform to design, deploy, and manage a reliable system can be categorized in the following focus areas. Each of the reliability principles and recommendations in this pillar is relevant to one of these focus areas. Scoping: To understand your system, conduct a detailed analysis of its architecture. You need to understand the components, how they work and interact, how data and actions flow through the system, and what could go wrong. Identify potential failures, bottlenecks, and risks, which helps you to take actions to mitigate those issues. Observation: To help prevent system failures, implement comprehensive and continuous observation and monitoring. 
Through this observation, you can understand trends and identify potential problems proactively. Response: To reduce the impact of failures, respond appropriately and recover efficiently. Automated responses can also help reduce the impact of failures. Even with planning and controls, failures can still occur. Learning: To help prevent failures from recurring, learn from each experience, and take appropriate actions. Core principles The recommendations in the reliability pillar of the Architecture Framework are mapped to the following core principles: Define reliability based on user-experience goals Set realistic targets for reliability Build highly available systems through redundant resources Take advantage of horizontal scalability Detect potential failures by using observability Design for graceful degradation Perform testing for recovery from failures Perform testing for recovery from data loss Conduct thorough postmortems Note: To learn about the building blocks of infrastructure reliability in Google Cloud, see Google Cloud infrastructure reliability guide. ContributorsAuthors: Laura Hyatt | Enterprise Cloud ArchitectJose Andrade | Enterprise Infrastructure Customer EngineerGino Pelliccia | Principal ArchitectOther contributors: Andrés-Leonardo Martínez-Ortiz | Technical Program ManagerBrian Kudzia | Enterprise Infrastructure Customer EngineerDaniel Lees | Cloud Security ArchitectFilipe Gracio, PhD | Customer EngineerGary Harmson | Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization SpecialistRadhika Kanakam | Senior Program Manager, Cloud GTMRyan Cox | Principal ArchitectWade Holmes | Global Solutions DirectorZach Seils | Networking Specialist Define reliability based on user-experience goals This principle in the reliability pillar of the Google Cloud Architecture Framework helps you to assess your users' experience, and then map the findings to reliability goals and metrics. This principle is relevant to the scoping focus area of reliability. Principle overview Observability tools provide large amounts of data, but not all of the data directly relates to the impacts on the users. For example, you might observe high CPU usage, slow server operations, or even crashed tasks. However, if these issues don't affect the user experience, then they don't constitute an outage. To measure the user experience, you need to distinguish between internal system behavior and user-facing problems. Focus on metrics like the success ratio of user requests. Don't rely solely on server-centric metrics, like CPU usage, which can lead to misleading conclusions about your service's reliability. True reliability means that users can consistently and effectively use your application or service. Recommendations To help you measure user experience effectively, consider the recommendations in the following sections. Measure user experience To truly understand your service's reliability, prioritize metrics that reflect your users' actual experience. For example, measure the users' query success ratio, application latency, and error rates. Ideally, collect this data directly from the user's device or browser. If this direct data collection isn't feasible, shift your measurement point progressively further away from the user in the system. For example, you can use the load balancer or frontend service as the measurement point. 
This approach helps you identify and address issues before those issues can significantly impact your users. Analyze user journeys To understand how users interact with your system, you can use tracing tools like Cloud Trace. By following a user's journey through your application, you can find bottlenecks and latency issues that might degrade the user's experience. Cloud Trace captures detailed performance data for each hop in your service architecture. This data helps you identify and address performance issues more efficiently, which can lead to a more reliable and satisfying user experience. Set realistic targets for reliability This principle in the reliability pillar of the Google Cloud Architecture Framework helps you define reliability goals that are technically feasible for your workloads in Google Cloud. This principle is relevant to the scoping focus area of reliability. Principle overview Design your systems to be just reliable enough for user happiness. It might seem counterintuitive, but a goal of 100% reliability is often not the most effective strategy. Higher reliability might result in a significantly higher cost, both in terms of financial investment and potential limitations on innovation. If users are already happy with the current level of service, then efforts to further increase happiness might yield a low return on investment. Instead, you can better spend resources elsewhere. You need to determine the level of reliability at which your users are happy, and determine the point where the cost of incremental improvements begins to outweigh the benefits. When you determine this level of sufficient reliability, you can allocate resources strategically and focus on features and improvements that deliver greater value to your users. Recommendations To set realistic reliability targets, consider the recommendations in the following subsections. Accept some failure and prioritize components Aim for high availability, such as 99.99% uptime, but don't set a target of 100% uptime. Acknowledge that some failures are inevitable. The gap between 100% uptime and a 99.99% target is the allowance for failure. This gap is often called the error budget. The error budget can help you take risks and innovate, which is fundamental for any business that wants to stay competitive. Prioritize the reliability of the most critical components in the system. Accept that less critical components can have a higher tolerance for failure. Balance reliability and cost To determine the optimal reliability level for your system, conduct thorough cost-benefit analyses. Consider factors like system requirements, the consequences of failures, and your organization's risk tolerance for the specific application. Remember to consider your disaster recovery metrics, such as the recovery time objective (RTO) and recovery point objective (RPO). Decide what level of reliability is acceptable within the budget and other constraints. Look for ways to improve efficiency and reduce costs without compromising essential reliability features. Build highly available systems through resource redundancy This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to plan, build, and manage resource redundancy, which can help you to avoid failures. This principle is relevant to the scoping focus area of reliability. Principle overview After you decide the level of reliability that you need, you must design your systems to avoid any single points of failure.
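To make the choice of "the level of reliability that you need" and its error budget concrete, the following small sketch (plain Python, no Google Cloud API) converts an availability target into allowed downtime over a 30-day window. The SLO values and the window length are illustrative assumptions.

```python
# Illustrative calculation: convert an availability SLO into an error budget.
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Return the allowed downtime, in minutes, for the given availability SLO."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes


for slo in (0.999, 0.9999):
    print(f"SLO {slo:.2%}: {error_budget_minutes(slo):.1f} minutes of downtime per 30 days")
```

For example, a 99.9% target leaves roughly 43 minutes of downtime per 30 days, while 99.99% leaves only about 4.3 minutes, which illustrates how much each additional "nine" costs.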
Every critical component in the system must be replicated across multiple machines, zones, and regions. For example, a critical database can't be located in only one region, and a metadata server can't be deployed in only one single zone or region. In those examples, if the sole zone or region has an outage, the system has a global outage. Recommendations To build redundant systems, consider the recommendations in the following subsections. Identify failure domains and replicate services Map out your system's failure domains, from individual VMs to regions, and design for redundancy across the failure domains. To ensure high availability, distribute and replicate your services and applications across multiple zones and regions. Configure the system for automatic failover to make sure that the services and applications continue to be available in the event of zone or region outages. For examples of multi-zone and multi-region architectures, see Design reliable infrastructure for your workloads in Google Cloud. Detect and address issues promptly Continuously track the status of your failure domains to detect and address issues promptly. You can monitor the current status of Google Cloud services in all regions by using the Google Cloud Service Health dashboard. You can also view incidents relevant to your project by using Personalized Service Health. You can use load balancers to detect resource health and automatically route traffic to healthy backends. For more information, see Health checks overview. Test failover scenarios Like a fire drill, regularly simulate failures to validate the effectiveness of your replication and failover strategies. For more information, see Simulate a zone outage for a regional MIG and Simulate a zone failure in GKE regional clusters. Take advantage of horizontal scalability This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you use horizontal scalability. By using horizontal scalability, you can help ensure that your workloads in Google Cloud can scale efficiently and maintain performance. This principle is relevant to the scoping focus area of reliability. Principle overview Re-architect your system to a horizontal architecture. To accommodate growth in traffic or data, you can add more resources. You can also remove resources when they're not in use. To understand the value of horizontal scaling, consider the limitations of vertical scaling. A common scenario for vertical scaling is to use a MySQL database as the primary database with critical data. As database usage increases, more RAM and CPU is required. Eventually, the database reaches the memory limit on the host machine, and needs to be upgraded. This process might need to be repeated several times. The problem is that there are hard limits on how much a database can grow. VM sizes are not unlimited. The database can reach a point when it's no longer possible to add more resources. Even if resources were unlimited, a large VM can become a single point of failure. Any problem with the primary database VM can cause error responses or cause a system-wide outage that affects all users. Avoid single points of failure, as described in Build highly available systems through redundant resources. Besides these scaling limits, vertical scaling tends to be more expensive. The cost can increase exponentially as machines with greater amounts of compute power and memory are acquired. Horizontal scaling, by contrast, can cost less. 
The potential for horizontal scaling is virtually unlimited in a system that's designed to scale. Recommendations To transition from a single VM architecture to a horizontal multiple-machine architecture, you need to plan carefully and use the right tools. To help you achieve horizontal scaling, consider the recommendations in the following subsections. Use managed services Managed services remove the need to manually manage horizontal scaling. For example, with Compute Engine managed instance groups (MIGs), you can add or remove VMs to scale your application horizontally. For containerized applications, Cloud Run is a serverless platform that can automatically scale your stateless containers based on incoming traffic. Promote modular design Modular components and clear interfaces help you scale individual components as needed, instead of scaling the entire application. For more information, see Promote modular design in the performance optimization pillar. Implement a stateless design Design applications to be stateless, meaning no locally stored data. This lets you add or remove instances without worrying about data consistency. Detect potential failures by using observability This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you proactively identify areas where errors and failures might occur. This principle is relevant to the observation focus area of reliability. Principle overview To maintain and improve the reliability of your workloads in Google Cloud, you need to implement effective observability by using metrics, logs, and traces. Metrics are numerical measurements of activities that you want to track for your application at specific time intervals. For example, you might want to track technical metrics like request rate and error rate, which can be used as service-level indicators (SLIs). You might also need to track application-specific business metrics like orders placed and payments received. Logs are time-stamped records of discrete events that occur within an application or system. The event could be a failure, an error, or a change in state. Logs might include metrics, and you can also use logs for SLIs. A trace represents the journey of a single user or transaction through a number of separate applications or the components of an application. For example, these components could be microservices. Traces help you to track what components were used in the journeys, where bottlenecks exist, and how long the journeys took. Metrics, logs, and traces help you monitor your system continuously. Comprehensive monitoring helps you find out where and why errors occurred. You can also detect potential failures before errors occur. Recommendations To detect potential failures efficiently, consider the recommendations in the following subsections. Gain comprehensive insights To track key metrics like response times and error rates, use Cloud Monitoring and Cloud Logging. These tools also help you to ensure that the metrics consistently meet the needs of your workload. To make data-driven decisions, analyze default service metrics to understand component dependencies and their impact on overall workload performance. To customize your monitoring strategy, create and publish your own metrics by using the Google Cloud SDK. Perform proactive troubleshooting Implement robust error handling and enable logging across all of the components of your workloads in Google Cloud. 
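The guidance above about creating and publishing your own metrics can be illustrated with a short sketch that writes a single data point to a custom metric by using the Cloud Monitoring Python client. The project ID, metric name, and value are assumptions for illustration, and error handling is omitted.

```python
import time

from google.cloud import monitoring_v3

PROJECT_ID = "example-project"  # assumed project ID

client = monitoring_v3.MetricServiceClient()

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/orders_placed"  # assumed metric name
series.resource.type = "global"

now = time.time()
seconds = int(now)
nanos = int((now - seconds) * 10**9)
interval = monitoring_v3.TimeInterval({"end_time": {"seconds": seconds, "nanos": nanos}})
point = monitoring_v3.Point({"interval": interval, "value": {"int64_value": 42}})
series.points = [point]

# Write one data point for the custom metric.
client.create_time_series(name=f"projects/{PROJECT_ID}", time_series=[series])
```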
Activate logs like Cloud Storage access logs and VPC Flow Logs. When you configure logging, consider the associated costs. To control logging costs, you can configure exclusion filters on the log sinks to exclude certain logs from being stored. Optimize resource utilization Monitor CPU consumption, network I/O metrics, and disk I/O metrics to detect under-provisioned and over-provisioned resources in services like GKE, Compute Engine, and Dataproc. For a complete list of supported services, see Cloud Monitoring overview. Prioritize alerts For alerts, focus on critical metrics, set appropriate thresholds to minimize alert fatigue, and ensure timely responses to significant issues. This targeted approach lets you proactively maintain workload reliability. For more information, see Alerting overview. Design for graceful degradation This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you design your Google Cloud workloads to fail gracefully. This principle is relevant to the response focus area of reliability. Principle overview Graceful degradation is a design approach where a system that experiences a high load continues to function, possibly with reduced performance or accuracy. Graceful degradation ensures continued availability of the system and prevents complete failure, even if the system's work isn't optimal. When the load returns to a manageable level, the system resumes full functionality. For example, during periods of high load, Google Search prioritizes results from higher-ranked web pages, potentially sacrificing some accuracy. When the load decreases, Google Search recomputes the search results. Recommendations To design your systems for graceful degradation, consider the recommendations in the following subsections. Implement throttling Ensure that your replicas can independently handle overloads and can throttle incoming requests during high-traffic scenarios. This approach helps you to prevent cascading failures that are caused by shifts in excess traffic between zones. Use tools like Apigee to control the rate of API requests during high-traffic times. You can configure policy rules to reflect how you want to scale back requests. Drop excess requests early Configure your systems to drop excess requests at the frontend layer to protect backend components. Dropping some requests prevents global failures and enables the system to recover more gracefully. With this approach, some users might experience errors. However, you can minimize the impact of outages, in contrast to an approach like circuit-breaking, where all traffic is dropped during an overload. Handle partial errors and retries Build your applications to handle partial errors and retries seamlessly (see the retry sketch after this section). This design helps to ensure that as much traffic as possible is served during high-load scenarios. Test overload scenarios To validate that the throttle and request-drop mechanisms work effectively, regularly simulate overload conditions in your system. Testing helps ensure that your system is prepared for real-world traffic surges. Monitor traffic spikes Use analytics and monitoring tools to predict and respond to traffic surges before they escalate into overloads. Early detection and response can help maintain service availability during high-demand periods.
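As referenced in the "Handle partial errors and retries" recommendation above, the following is a minimal, framework-agnostic sketch of retries with exponential backoff and jitter. The helper name, delay values, and the decision about which exceptions count as transient are assumptions; adapt them to your application.

```python
import random
import time


def call_with_retries(operation, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Run `operation` and retry transient failures with exponential backoff and jitter.

    `operation` is any zero-argument callable that raises an exception on a
    transient failure (what counts as transient is application-specific).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # Budget exhausted: surface the partial error to the caller.
            # Exponential backoff, capped, with random jitter to avoid retry storms.
            delay = min(max_delay, base_delay * (2 ** (attempt - 1)))
            time.sleep(delay * random.uniform(0.5, 1.5))
```

Capping the delay and adding jitter helps avoid synchronized retry storms when many clients fail at the same time.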
Perform testing for recovery from failures This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you design and run tests for recovery in the event of failures. This principle is relevant to the learning focus area of reliability. Principle overview To be sure that your system can recover from failures, you must periodically run tests that include regional failovers, release rollbacks, and data restoration from backups. This testing helps you to practice responses to events that pose major risks to reliability, such as the outage of an entire region. This testing also helps you verify that your system behaves as intended during a disruption. In the unlikely event of an entire region going down, you need to fail over all traffic to another region. During normal operation of your workload, when data is modified, it needs to be synchronized from the primary region to the failover region. You need to verify that the replicated data is always current, so that users don't experience data loss or session breakage. The load balancing system must also be able to shift traffic to the failover region at any time without service interruptions. To minimize downtime after a regional outage, operations engineers also need to be able to manually and efficiently shift user traffic away from a region, in as little time as possible. This operation is sometimes called draining a region, which means you stop the inbound traffic to the region and move all the traffic elsewhere. Recommendations When you design and run tests for failure recovery, consider the recommendations in the following subsections. Define the testing objectives and scope Clearly define what you want to achieve from the testing. For example, your objectives can include the following: Validate the recovery time objective (RTO) and the recovery point objective (RPO). For details, see Basics of DR planning. Assess system resilience and fault tolerance under various failure scenarios. Test the effectiveness of automated failover mechanisms. Decide which components, services, or regions are in the testing scope. The scope can include specific application tiers like the frontend, backend, and database, or it can include specific Google Cloud resources like Cloud SQL instances or GKE clusters. The scope must also specify any external dependencies, such as third-party APIs or cloud interconnections. Prepare the environment for testing Choose an appropriate environment, preferably a staging or sandbox environment that replicates your production setup. If you conduct the test in production, ensure that you have safety measures ready, like automated monitoring and manual rollback procedures. Create a backup plan. Take snapshots or backups of critical databases and services to prevent data loss during the test. Ensure that your team is prepared to do manual interventions if the automated failover mechanisms fail. To prevent test disruptions, ensure that your IAM roles, policies, and failover configurations are correctly set up. Verify that the necessary permissions are in place for the test tools and scripts. Inform stakeholders, including operations, DevOps, and application owners, about the test schedule, scope, and potential impact. Provide stakeholders with an estimated timeline and the expected behaviors during the test. Simulate failure scenarios Plan and execute failures by using tools like Chaos Monkey.
You can use custom scripts to simulate failures of critical services such as a shutdown of a primary node in a multi-zone GKE cluster or a disabled Cloud SQL instance. You can also use scripts to simulate a region-wide network outage by using firewall rules or API restrictions based on your scope of test. Gradually escalate the failure scenarios to observe system behavior under various conditions. Introduce load testing alongside failure scenarios to replicate real-world usage during outages. Test cascading failure impacts, such as how frontend systems behave when backend services are unavailable. To validate configuration changes and to assess the system's resilience against human errors, test scenarios that involve misconfigurations. For example, run tests with incorrect DNS failover settings or incorrect IAM permissions. Monitor system behavior Monitor how load balancers, health checks, and other mechanisms reroute traffic. Use Google Cloud tools like Cloud Monitoring and Cloud Logging to capture metrics and events during the test. Observe changes in latency, error rates, and throughput during and after the failure simulation, and monitor the overall performance impact. Identify any degradation or inconsistencies in the user experience. Ensure that logs are generated and alerts are triggered for key events, such as service outages or failovers. Use this data to verify the effectiveness of your alerting and incident response systems. Verify recovery against your RTO and RPO Measure how long it takes for the system to resume normal operations after a failure, and then compare this data with the defined RTO and document any gaps. Ensure that data integrity and availability align with the RPO. To test database consistency, compare snapshots or backups of the database before and after a failure. Evaluate service restoration and confirm that all services are restored to a functional state with minimal user disruption. Document and analyze results Document each test step, failure scenario, and corresponding system behavior. Include timestamps, logs, and metrics for detailed analyses. Highlight bottlenecks, single points of failure, or unexpected behaviors observed during the test. To help prioritize fixes, categorize issues by severity and impact. Suggest improvements to the system architecture, failover mechanisms, or monitoring setups. Based on test findings, update any relevant failover policies and playbooks. Present a postmortem report to stakeholders. The report should summarize the outcomes, lessons learned, and next steps. For more information, see Conduct thorough postmortems. Iterate and improve To validate ongoing reliability and resilience, plan periodic testing (for example, quarterly). Run tests under different scenarios, including infrastructure changes, software updates, and increased traffic loads. Automate failover tests by using CI/CD pipelines to integrate reliability testing into your development lifecycle. During the postmortem, use feedback from stakeholders and end users to improve the test process and system resilience. Perform testing for recovery from data loss This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you design and run tests for recovery from data loss. This principle is relevant to the learning focus area of reliability. Principle overview To ensure that your system can recover from situations where data is lost or corrupted, you need to run tests for those scenarios. 
Instances of data loss might be caused by a software bug or some type of natural disaster. After such events, you need to restore data from backups and bring all of the services back up again by using the freshly restored data. We recommend that you use three criteria to judge the success or failure of this type of recovery test: data integrity, recovery time objective (RTO), and recovery point objective (RPO). For details about the RTO and RPO metrics, see Basics of DR planning. The goal of data restoration testing is to periodically verify that your organization can continue to meet business continuity requirements. Besides measuring RTO and RPO, a data restoration test must include testing of the entire application stack and all the critical infrastructure services with the restored data. This is necessary to confirm that the entire deployed application works correctly in the test environment. Recommendations When you design and run tests for recovering from data loss, consider the recommendations in the following subsections. Verify backup consistency and test restoration processes You need to verify that your backups contain consistent and usable snapshots of data that you can restore to immediately bring applications back into service. To validate data integrity, set up automated consistency checks to run after each backup. To test backups, restore them in a non-production environment. To ensure your backups can be restored efficiently and that the restored data meets application requirements, regularly simulate data recovery scenarios. Document the steps for data restoration, and train your teams to execute the steps effectively during a failure. Schedule regular and frequent backups To minimize data loss during restoration and to meet RPO targets, it's essential to have regularly scheduled backups. Establish a backup frequency that aligns with your RPO. For example, if your RPO is 15 minutes, schedule backups to run at least every 15 minutes. Optimize the backup intervals to reduce the risk of data loss. Use Google Cloud tools like Cloud Storage, Cloud SQL automated backups, or Spanner backups to schedule and manage backups. For critical applications, use near-continuous backup solutions like point-in-time recovery (PITR) for Cloud SQL or incremental backups for large datasets. Define and monitor RPO Set a clear RPO based on your business needs, and monitor adherence to the RPO. If backup intervals exceed the defined RPO, use Cloud Monitoring to set up alerts. Monitor backup health Use Google Cloud Backup and DR service or similar tools to track the health of your backups and confirm that they are stored in secure and reliable locations. Ensure that the backups are replicated across multiple regions for added resilience. Plan for scenarios beyond backup Combine backups with disaster recovery strategies like active-active failover setups or cross-region replication for improved recovery time in extreme cases. For more information, see Disaster recovery planning guide. Conduct thorough postmortems This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you conduct effective postmortems after failures and incidents. This principle is relevant to the learning focus area of reliability. Principle overview A postmortem is a written record of an incident, its impact, the actions taken to mitigate or resolve the incident, the root causes, and the follow-up actions to prevent the incident from recurring. 
The goal of a postmortem is to learn from mistakes and not assign blame. The following diagram shows the workflow of a postmortem: The workflow of a postmortem includes the following steps: Create postmortem Capture the facts Identify and analyze the root causes Plan for the future Execute the plan Conduct postmortem analyses after major events and non-major events like the following: User-visible downtimes or degradations beyond a certain threshold. Data losses of any kind. Interventions from on-call engineers, such as a release rollback or rerouting of traffic. Resolution times above a defined threshold. Monitoring failures, which usually imply manual incident discovery. Recommendations Define postmortem criteria before an incident occurs so that everyone knows when a post mortem is necessary. To conduct effective postmortems, consider the recommendations in the following subsections. Conduct blameless postmortems Effective postmortems focus on processes, tools, and technologies, and don't place blame on individuals or teams. The purpose of a postmortem analysis is to improve your technology and future, not to find who is guilty. Everyone makes mistakes. The goal should be to analyze the mistakes and learn from them. The following examples show the difference between feedback that assigns blame and blameless feedback: Feedback that assigns blame: "We need to rewrite the entire complicated backend system! It's been breaking weekly for the last three quarters and I'm sure we're all tired of fixing things piecemeal. Seriously, if I get paged one more time I'll rewrite it myself…" Blameless feedback: "An action item to rewrite the entire backend system might actually prevent these pages from continuing to happen. The maintenance manual for this version is quite long and really difficult to be fully trained up on. I'm sure our future on-call engineers will thank us!" Make the postmortem report readable by all the intended audiences For each piece of information that you plan to include in the report, assess whether that information is important and necessary to help the audience understand what happened. You can move supplementary data and explanations to an appendix of the report. Reviewers who need more information can request it. Avoid complex or over-engineered solutions Before you start to explore solutions for a problem, evaluate the importance of the problem and the likelihood of a recurrence. Adding complexity to the system to solve problems that are unlikely to occur again can lead to increased instability. Share the postmortem as widely as possible To ensure that issues don't remain unresolved, publish the outcome of the postmortem to a wide audience and get support from management. The value of a postmortem is proportional to the learning that occurs after the postmortem. When more people learn from incidents, the likelihood of similar failures recurring is reduced. Google Cloud Architecture Framework: Cost optimization The cost optimization pillar in the Google Cloud Architecture Framework describes principles and recommendations to optimize the cost of your workloads in Google Cloud. The intended audience includes the following: CTOs, CIOs, CFOs, and other executives who are responsible for strategic cost management. Architects, developers, administrators, and operators who make decisions that affect cost at all the stages of an organization's cloud journey. The cost models for on-premises and cloud workloads differ significantly. 
On-premises IT costs include capital expenditure (CapEx) and operational expenditure (OpEx). On-premises hardware and software assets are acquired and the acquisition costs are depreciated over the operating life of the assets. In the cloud, the costs for most cloud resources are treated as OpEx, where costs are incurred when the cloud resources are consumed. This fundamental difference underscores the importance of the following core principles of cost optimization. Note: You might be able to classify the cost of some Google Cloud services (like Compute Engine sole-tenant nodes) as capital expenditure. For more information, see Sole-tenancy accounting FAQ. For cost optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Cost optimization in the Architecture Framework. Core principles The recommendations in the cost optimization pillar of the Architecture Framework are mapped to the following core principles: Align cloud spending with business value: Ensure that your cloud resources deliver measurable business value by aligning IT spending with business objectives. Foster a culture of cost awareness: Ensure that people across your organization consider the cost impact of their decisions and activities, and ensure that they have access to the cost information required to make informed decisions. Optimize resource usage: Provision only the resources that you need, and pay only for the resources that you consume. Optimize continuously: Continuously monitor your cloud resource usage and costs, and proactively make adjustments as needed to optimize your spending. This approach involves identifying and addressing potential cost inefficiencies before they become significant problems. These principles are closely aligned with the core tenets of cloud FinOps. FinOps is relevant to any organization, regardless of its size or maturity in the cloud. By adopting these principles and following the related recommendations, you can control and optimize costs throughout your journey in the cloud. ContributorsAuthor: Nicolas Pintaux | Customer Engineer, Application Modernization SpecialistOther contributors: Anuradha Bajpai | Solutions ArchitectDaniel Lees | Cloud Security ArchitectEric Lam | Head of Google Cloud FinOpsFernando Rubbo | Cloud Solutions ArchitectFilipe Gracio, PhD | Customer EngineerGary Harmson | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKent Hua | Solutions ManagerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerRadhika Kanakam | Senior Program Manager, Cloud GTMSteve McGhee | Reliability AdvocateSergei Lilichenko | Solutions ArchitectWade Holmes | Global Solutions DirectorZach Seils | Networking Specialist Align cloud spending with business value This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to align your use of Google Cloud resources with your organization's business goals. Principle overview To effectively manage cloud costs, you need to maximize the business value that the cloud resources provide and minimize the total cost of ownership (TCO). When you evaluate the resource options for your cloud workloads, consider not only the cost of provisioning and using the resources, but also the cost of managing them. For example, virtual machines (VMs) on Compute Engine might be a cost-effective option for hosting applications. 
However, when you consider the overhead to maintain, patch, and scale the VMs, the TCO can increase. On the other hand, serverless services like Cloud Run can offer greater business value. The lower operational overhead lets your team focus on core activities and helps to increase agility. To ensure that your cloud resources deliver optimal value, evaluate the following factors: Provisioning and usage costs: The expenses incurred when you purchase, provision, or consume resources. Management costs: The recurring expenses for operating and maintaining resources, including tasks like patching, monitoring, and scaling. Indirect costs: The costs that you might incur to manage issues like downtime, data loss, or security breaches. Business impact: The potential benefits from the resources, like increased revenue, improved customer satisfaction, and faster time to market. By aligning cloud spending with business value, you get the following benefits: Value-driven decisions: Your teams are encouraged to prioritize solutions that deliver the greatest business value and to consider both short-term and long-term cost implications. Informed resource choice: Your teams have the information and knowledge that they need to assess the business value and TCO of various deployment options, so they choose resources that are cost-effective. Cross-team alignment: Cross-functional collaboration between business, finance, and technical teams ensures that cloud decisions are aligned with the overall objectives of the organization. Recommendations To align cloud spending with business objectives, consider the following recommendations. Prioritize managed services and serverless products Whenever possible, choose managed services and serverless products to reduce operational overhead and maintenance costs. This choice lets your teams concentrate on their core business activities. They can accelerate the delivery of new features and functionalities, and help drive innovation and value. The following are examples of how you can implement this recommendation: To run PostgreSQL, MySQL, or Microsoft SQL Server databases, use Cloud SQL instead of deploying those databases on VMs. To run and manage Kubernetes clusters, use Google Kubernetes Engine (GKE) Autopilot instead of deploying containers on VMs. For your Apache Hadoop or Apache Spark processing needs, use Dataproc and Dataproc Serverless. Per-second billing can help to achieve significantly lower TCO when compared to on-premises data lakes. Balance cost efficiency with business agility Controlling costs and optimizing resource utilization are important goals. However, you must balance these goals with the need for flexible infrastructure that lets you innovate rapidly, respond quickly to changes, and deliver value faster. The following are examples of how you can achieve this balance: Adopt DORA metrics for software delivery performance. Metrics like change failure rate (CFR), time to detect (TTD), and time to restore (TTR) can help to identify and fix bottlenecks in your development and deployment processes. By reducing downtime and accelerating delivery, you can achieve both operational efficiency and business agility. Follow Site Reliability Engineering (SRE) practices to improve operational reliability. SRE's focus on automation, observability, and incident response can lead to reduced downtime, lower recovery time, and higher customer satisfaction.
By minimizing downtime and improving operational reliability, you can prevent revenue loss and avoid the need to overprovision resources as a safety net to handle outages. Enable self-service optimization Encourage a culture of experimentation and exploration by providing your teams with self-service cost optimization tools, observability tools, and resource management platforms. Enable them to provision, manage, and optimize their cloud resources autonomously. This approach helps to foster a sense of ownership, accelerate innovation, and ensure that teams can respond quickly to changing needs while being mindful of cost efficiency. Adopt and implement FinOps Adopt FinOps to establish a collaborative environment where everyone is empowered to make informed decisions that balance cost and value. FinOps fosters financial accountability and promotes effective cost optimization in the cloud. Promote a value-driven and TCO-aware mindset Encourage your team members to adopt a holistic attitude toward cloud spending, with an emphasis on TCO and not just upfront costs. Use techniques like value stream mapping to visualize and analyze the flow of value through your software delivery process and to identify areas for improvement. Implement unit costing for your applications and services to gain a granular understanding of cost drivers and discover opportunities for cost optimization. For more information, see Maximize business value with cloud FinOps. Foster a culture of cost awareness This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to promote cost awareness across your organization and ensure that team members have the cost information that they need to make informed decisions. Conventionally, the responsibility for cost management might be centralized to a few select stakeholders and primarily focused on initial project architecture decisions. However, team members across all cloud user roles (analyst, architect, developer, or administrator) can help to reduce the cost of your resources in Google Cloud. By sharing cost data appropriately, you can empower team members to make cost-effective decisions throughout their development and deployment processes. Principle overview Stakeholders across various roles – product owners, developers, deployment engineers, administrators, and financial analysts – need visibility into relevant cost data and its relationship to business value. When provisioning and managing cloud resources, they need the following data: Projected resource costs: Cost estimates at the time of design and deployment. Real-time resource usage costs: Up-to-date cost data that can be used for ongoing monitoring and budget validation. Costs mapped to business metrics: Insights into how cloud spending affects key performance indicators (KPIs), to enable teams to identify cost-effective strategies. Every individual might not need access to raw cost data. However, promoting cost awareness across all roles is crucial because individual decisions can affect costs. By promoting cost visibility and ensuring clear ownership of cost management practices, you ensure that everyone is aware of the financial implications of their choices and everyone actively contributes to the organization's cost optimization goals. Whether through a centralized FinOps team or a distributed model, establishing accountability is crucial for effective cost optimization efforts. 
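As a simple illustration of mapping costs to business metrics, the following Python sketch computes a unit cost (spend per 1,000 transactions) for each service. The service names and figures are hypothetical; real values would come from your billing data and product analytics.

# Hypothetical monthly figures for two services (replace with real billing
# and analytics data).
monthly_costs_usd = {
    "checkout-api": 12_400.00,
    "search-frontend": 8_150.00,
}
monthly_transactions = {
    "checkout-api": 3_100_000,
    "search-frontend": 9_800_000,
}

def cost_per_thousand_transactions(service: str) -> float:
    """Returns the unit cost in USD per 1,000 transactions for a service."""
    return monthly_costs_usd[service] / (monthly_transactions[service] / 1000)

for service in monthly_costs_usd:
    unit_cost = cost_per_thousand_transactions(service)
    print(f"{service}: ${unit_cost:.2f} per 1,000 transactions")

Tracking such unit costs over time shows whether spending grows in line with business value or drifts away from it.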
Recommendations To promote cost awareness and ensure that your team members have the cost information that they need to make informed decisions, consider the following recommendations. Provide organization-wide cost visibility To achieve organization-wide cost visibility, the teams that are responsible for cost management can take the following actions: Standardize cost calculation and budgeting: Use a consistent method to determine the full costs of cloud resources, after factoring in discounts and shared costs. Establish clear and standardized budgeting processes that align with your organization's goals and enable proactive cost management. Use standardized cost management and visibility tools: Use appropriate tools that provide real-time insights into cloud spending and generate regular (for example, weekly) cost progression snapshots. These tools enable proactive budgeting, forecasting, and identification of optimization opportunities. The tools could be cloud provider tools (like the Google Cloud Billing dashboard), third-party solutions, or open-source solutions like the Cost Attribution solution. Implement a cost allocation system: Allocate a portion of the overall cloud budget to each team or project. Such an allocation gives the teams a sense of ownership over cloud spending and encourages them to make cost-effective decisions within their allocated budget. Promote transparency: Encourage teams to discuss cost implications during the design and decision-making processes. Create a safe and supportive environment for sharing ideas and concerns related to cost optimization. Some organizations use positive reinforcement mechanisms like leaderboards or recognition programs. If your organization has restrictions on sharing raw cost data due to business concerns, explore alternative approaches for sharing cost information and insights. For example, consider sharing aggregated metrics (like the total cost for an environment or feature) or relative metrics (like the average cost per transaction or user). Understand how cloud resources are billed Pricing for Google Cloud resources might vary across regions. Some resources are billed monthly at a fixed price, and others might be billed based on usage. To understand how Google Cloud resources are billed, use the Google Cloud pricing calculator and product-specific pricing information (for example, Google Kubernetes Engine (GKE) pricing). Understand resource-based cost optimization options For each type of cloud resource that you plan to use, explore strategies to optimize utilization and efficiency. The strategies include rightsizing, autoscaling, and adopting serverless technologies where appropriate. The following are examples of cost optimization options for a few Google Cloud products: Cloud Run lets you configure always-allocated CPUs to handle predictable traffic loads at a fraction of the price of the default allocation method (that is, CPUs allocated only during request processing). You can purchase BigQuery slot commitments to save money on data analysis. GKE provides detailed metrics to help you understand cost optimization options. Understand how network pricing can affect the cost of data transfers and how you can optimize costs for specific networking services. For example, you can reduce the data transfer costs for external Application Load Balancers by using Cloud CDN or Google Cloud Armor. For more information, see Ways to lower external Application Load Balancer costs. 
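For example, if you export Cloud Billing data to BigQuery, you can aggregate spend by project and by a cost-allocation label. The following Python sketch uses the google-cloud-bigquery client and assumes the standard Cloud Billing export schema; the table name and the team label key are placeholders for your own environment.

from google.cloud import bigquery

# Placeholder billing export table; replace with your own export location.
BILLING_TABLE = "my-admin-project.billing_export.gcp_billing_export_v1_XXXXXX"

QUERY = f"""
SELECT
  project.id AS project_id,
  (SELECT l.value FROM UNNEST(labels) AS l WHERE l.key = 'team') AS team,
  ROUND(SUM(cost), 2) AS gross_cost_usd
FROM `{BILLING_TABLE}`
WHERE invoice.month = '202501'
GROUP BY project_id, team
ORDER BY gross_cost_usd DESC
"""

client = bigquery.Client()
for row in client.query(QUERY).result():
    print(f"{row.project_id} / {row.team}: ${row.gross_cost_usd}")

The results can feed a Looker Studio dashboard or the periodic cost reports described later in this document.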
Understand discount-based cost optimization options Familiarize yourself with the discount programs that Google Cloud offers, such as the following examples: Committed use discounts (CUDs): CUDs are suitable for resources that have predictable and steady usage. CUDs let you get significant reductions in price in exchange for committing to specific resource usage over a period (typically one to three years). You can also use CUD auto-renewal to avoid having to manually repurchase commitments when they expire. Sustained use discounts: For certain Google Cloud products like Compute Engine and GKE, you can get automatic discount credits after continuous resource usage beyond specific duration thresholds. Spot VMs: For fault-tolerant and flexible workloads, Spot VMs can help to reduce your Compute Engine costs. The cost of Spot VMs is significantly lower than that of regular VMs. However, Compute Engine might preemptively stop or delete Spot VMs to reclaim capacity. Spot VMs are suitable for batch jobs that can tolerate preemption and don't have high availability requirements. Discounts for specific product options: Some managed services like BigQuery offer discounts when you purchase dedicated or autoscaling query processing capacity. Evaluate and choose the discount options that align with your workload characteristics and usage patterns. Incorporate cost estimates into architecture blueprints Encourage teams to develop architecture blueprints that include cost estimates for different deployment options and configurations. This practice empowers teams to compare costs proactively and make informed decisions that align with both technical and financial objectives. Use a consistent and standard set of labels for all your resources You can use labels to track costs and to identify and classify resources. Specifically, you can use labels to allocate costs to different projects, departments, or cost centers. Defining a formal labeling policy that aligns with the needs of the main stakeholders in your organization helps to make costs visible more widely. You can also use labels to filter resource cost and usage data based on target audience. Use automation tools like Terraform to enforce labeling on every resource that is created. To enhance cost visibility and attribution further, you can use the tools provided by the open-source cost attribution solution. Share cost reports with team members By sharing cost reports with your team members, you empower them to take ownership of their cloud spending. This practice enables cost-effective decision making, continuous cost optimization, and systematic improvements to your cost allocation model. Cost reports can be of several types, including the following: Periodic cost reports: Regular reports inform teams about their current cloud spending. Conventionally, these reports might be spreadsheet exports. More effective methods include automated emails and specialized dashboards. To ensure that cost reports provide relevant and actionable information without overwhelming recipients with unnecessary detail, the reports must be tailored to the target audiences. Setting up tailored reports is a foundational step toward more real-time and interactive cost visibility and management. Automated notifications: You can configure cost reports to proactively notify relevant stakeholders (for example, through email or chat) about cost anomalies, budget thresholds, or opportunities for cost optimization.
By providing timely information directly to those who can act on it, automated alerts encourage prompt action and foster a proactive approach to cost optimization. Google Cloud dashboards: You can use the built-in billing dashboards in Google Cloud to get insights into cost breakdowns and to identify opportunities for cost optimization. Google Cloud also provides FinOps hub to help you monitor savings and get recommendations for cost optimization. An AI engine powers the FinOps hub to recommend cost optimization opportunities for all the resources that are currently deployed. To control access to these recommendations, you can implement role-based access control (RBAC). Custom dashboards: You can create custom dashboards by exporting cost data to an analytics database, like BigQuery. Use a visualization tool like Looker Studio to connect to the analytics database to build interactive reports and enable fine-grained access control through role-based permissions. Multicloud cost reports: For multicloud deployments, you need a unified view of costs across all the cloud providers to ensure comprehensive analysis, budgeting, and optimization. Use tools like BigQuery to centralize and analyze cost data from multiple cloud providers, and use Looker Studio to build team-specific interactive reports. Optimize resource usage This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you plan and provision resources to match the requirements and consumption patterns of your cloud workloads. Principle overview To optimize the cost of your cloud resources, you need to thoroughly understand your workloads' resource requirements and load patterns. This understanding is the basis for a well-defined cost model that lets you forecast the total cost of ownership (TCO) and identify cost drivers throughout your cloud adoption journey. By proactively analyzing and forecasting cloud spending, you can make informed choices about resource provisioning, utilization, and cost optimization. This approach lets you control cloud spending, avoid overprovisioning, and ensure that cloud resources are aligned with the dynamic needs of your workloads and environments. Recommendations To effectively optimize cloud resource usage, consider the following recommendations. Choose environment-specific resources Each deployment environment has different requirements for availability, reliability, and scalability. For example, developers might prefer an environment that lets them rapidly deploy and run applications for short durations, but might not need high availability. On the other hand, a production environment typically needs high availability. To maximize the utilization of your resources, define environment-specific requirements based on your business needs. The following table lists examples of environment-specific requirements. Note: The requirements that are listed in this table are not exhaustive or prescriptive. They're meant to serve as examples to help you understand how requirements can vary based on the environment type.
Environment: Production
Requirements: High availability, predictable performance, operational stability, and security with robust resources.
Environment: Development and testing
Requirements: Cost efficiency, flexible infrastructure with burstable capacity, and ephemeral infrastructure when data persistence is not necessary.
Environment: Other environments (like staging and QA)
Requirements: Tailored resource allocation based on environment-specific requirements.
Choose workload-specific resources Each of your cloud workloads might have different requirements for availability, scalability, security, and performance. To optimize costs, you need to align resource choices with the specific requirements of each workload. For example, a stateless application might not require the same level of availability or reliability as a stateful backend. The following table lists more examples of workload-specific requirements. Note: The requirements that are listed in this table are not exhaustive or prescriptive. They're meant to serve as examples to help you understand how requirements can vary based on the workload type.
Workload type: Mission-critical
Workload requirements: Continuous availability, robust security, and high performance.
Resource options: Premium resources and managed services like Spanner for high availability and global consistency of data.
Workload type: Non-critical
Workload requirements: Cost-efficient and autoscaling infrastructure.
Resource options: Resources with basic features and ephemeral resources like Spot VMs.
Workload type: Event-driven
Workload requirements: Dynamic scaling based on the current demand for capacity and performance.
Resource options: Serverless services like Cloud Run and Cloud Run functions.
Workload type: Experimental workloads
Workload requirements: Low cost and flexible environment for rapid development, iteration, testing, and innovation.
Resource options: Resources with basic features, ephemeral resources like Spot VMs, and sandbox environments with defined spending limits.
A benefit of the cloud is the opportunity to take advantage of the most appropriate computing power for a given workload. Some workloads are developed to take advantage of processor instruction sets, and others might not be designed in this way. Benchmark and profile your workloads accordingly. Categorize your workloads and make workload-specific resource choices (for example, choose appropriate machine families for Compute Engine VMs). This practice helps to optimize costs, enable innovation, and maintain the level of availability and performance that your workloads need. The following are examples of how you can implement this recommendation: For mission-critical workloads that serve globally distributed users, consider using Spanner. Spanner removes the need for complex database deployments by ensuring reliability and consistency of data in all regions. For workloads with fluctuating load levels, use autoscaling to ensure that you don't incur costs when the load is low and yet maintain sufficient capacity to meet the current load. You can configure autoscaling for many Google Cloud services, including Compute Engine VMs, Google Kubernetes Engine (GKE) clusters, and Cloud Run. When you set up autoscaling, you can configure maximum scaling limits to ensure that costs remain within specified budgets. Select regions based on cost requirements For your cloud workloads, carefully evaluate the available Google Cloud regions and choose regions that align with your cost objectives. The region with the lowest cost might not offer optimal latency or it might not meet your sustainability requirements. Make informed decisions about where to deploy your workloads to achieve the desired balance.
You can use the Google Cloud Region Picker to understand the trade-offs between cost, sustainability, latency, and other factors. Use built-in cost optimization options Google Cloud products provide built-in features to help you optimize resource usage and control costs. The following table lists examples of cost optimization features that you can use in some Google Cloud products: Product Cost optimization feature Compute Engine Automatically add or remove VMs based on the current load by using autoscaling. Avoid overprovisioning by creating and using custom machine types that match your workload's requirements. For non-critical or fault-tolerant workloads, reduce costs by using Spot VMs. In development environments, reduce costs by limiting the run time of VMs or by suspending or stopping VMs when you don't need them. GKE Automatically adjust the size of GKE clusters based on the current load by using cluster autoscaler. Automatically create and manage node pools based on workload requirements and ensure optimal resource utilization by using node auto-provisioning. Cloud Storage Automatically transition data to lower-cost storage classes based on the age of data or based on access patterns by using Object Lifecycle Management. Dynamically move data to the most cost-effective storage class based on usage patterns by using Autoclass. BigQuery Reduce query processing costs for steady-state workloads by using capacity-based pricing. Optimize query performance and costs by using partitioning and clustering techniques. Google Cloud VMware Engine Reduce VMware costs by using cost-optimization strategies like CUDs, optimizing storage consumption, and rightsizing ESXi clusters. Optimize resource sharing To maximize the utilization of cloud resources, you can deploy multiple applications or services on the same infrastructure, while still meeting the security and other requirements of the applications. For example, in development and testing environments, you can use the same cloud infrastructure to test all the components of an application. For the production environment, you can deploy each component on a separate set of resources to limit the extent of impact in case of incidents. The following are examples of how you can implement this recommendation: Use a single Cloud SQL instance for multiple non-production environments. Enable multiple development teams to share a GKE cluster by using the fleet team management feature in GKE Enterprise with appropriate access controls. Use GKE Autopilot to take advantage of cost-optimization techniques like bin packing and autoscaling that GKE implements by default. For AI and ML workloads, save GPU costs by using GPU-sharing strategies like multi-instance GPUs, time-sharing GPUs, and NVIDIA MPS. Develop and maintain reference architectures Create and maintain a repository of reference architectures that are tailored to meet the requirements of different deployment environments and workload types. To streamline the design and implementation process for individual projects, the blueprints can be centrally managed by a team like a Cloud Center of Excellence (CCoE). Project teams can choose suitable blueprints based on clearly defined criteria, to ensure architectural consistency and adoption of best practices. For requirements that are unique to a project, the project team and the central architecture team should collaborate to design new reference architectures. 
You can share the reference architectures across the organization to foster knowledge sharing and expand the repository of available solutions. This approach ensures consistency, accelerates development, simplifies decision-making, and promotes efficient resource utilization. Review the reference architectures provided by Google for various use cases and technologies. These reference architectures incorporate best practices for resource selection, sizing, configuration, and deployment. By using these reference architectures, you can accelerate your development process and achieve cost savings from the start. Enforce cost discipline by using organization policies Consider using organization policies to limit the available Google Cloud locations and products that team members can use. These policies help to ensure that teams adhere to cost-effective solutions and provision resources in locations that are aligned with your cost optimization goals. Estimate realistic budgets and set financial boundaries Develop detailed budgets for each project, workload, and deployment environment. Make sure that the budgets cover all aspects of cloud operations, including infrastructure costs, software licenses, personnel, and anticipated growth. To prevent overspending and ensure alignment with your financial goals, establish clear spending limits or thresholds for projects, services, or specific resources. Monitor cloud spending regularly against these limits. You can use proactive quota alerts to identify potential cost overruns early and take timely corrective action. In addition to setting budgets, you can use quotas and limits to help enforce cost discipline and prevent unexpected spikes in spending. You can exercise granular control over resource consumption by setting quotas at various levels, including projects, services, and even specific resource types. The following are examples of how you can implement this recommendation: Project-level quotas: Set spending limits or resource quotas at the project level to establish overall financial boundaries and control resource consumption across all the services within the project. Service-specific quotas: Configure quotas for specific Google Cloud services like Compute Engine or BigQuery to limit the number of instances, CPUs, or storage capacity that can be provisioned. Resource type-specific quotas: Apply quotas to individual resource types like Compute Engine VMs, Cloud Storage buckets, Cloud Run instances, or GKE nodes to restrict their usage and prevent unexpected cost overruns. Quota alerts: Get notifications when your quota usage (at the project level) reaches a percentage of the maximum value. By using quotas and limits in conjunction with budgeting and monitoring, you can create a proactive and multi-layered approach to cost control. This approach helps to ensure that your cloud spending remains within defined boundaries and aligns with your business objectives. Remember, these cost controls are not permanent or rigid. To ensure that the cost controls remain aligned with current industry standards and reflect your evolving business needs, you must review the controls regularly and adjust them to include new technologies and best practices. Optimize continuously This principle in the cost optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you optimize the cost of your cloud deployments based on constantly changing and evolving business goals. 
As your business grows and evolves, your cloud workloads need to adapt to changes in resource requirements and usage patterns. To derive maximum value from your cloud spending, you must maintain cost-efficiency while continuing to support business objectives. This requires a proactive and adaptive approach that focuses on continuous improvement and optimization. Principle overview To optimize cost continuously, you must proactively monitor and analyze your cloud environment and make suitable adjustments to meet current requirements. Focus your monitoring efforts on key performance indicators (KPIs) that directly affect your end users' experience, align with your business goals, and provide insights for continuous improvement. This approach lets you identify and address inefficiencies, adapt to changing needs, and continuously align cloud spending with strategic business goals. To balance comprehensive observability with cost effectiveness, understand the costs and benefits of monitoring resource usage and use appropriate process-improvement and optimization strategies. Recommendations To effectively monitor your Google Cloud environment and optimize cost continuously, consider the following recommendations. Focus on business-relevant metrics Effective monitoring starts with identifying the metrics that are most important for your business and customers. These metrics include the following: User experience metrics: Latency, error rates, throughput, and customer satisfaction metrics are useful for understanding your end users' experience when using your applications. Business outcome metrics: Revenue, customer growth, and engagement can be correlated with resource usage to identify opportunities for cost optimization. DevOps Research & Assessment (DORA) metrics: Metrics like deployment frequency, lead time for changes, change failure rate, and time to restore provide insights into the efficiency and reliability of your software delivery process. By improving these metrics, you can increase productivity, reduce downtime, and optimize cost. Site Reliability Engineering (SRE) metrics: Error budgets help teams to quantify and manage the acceptable level of service disruption. By establishing clear expectations for reliability, error budgets empower teams to innovate and deploy changes more confidently, knowing their safety margin. This proactive approach promotes a balance between innovation and stability, helping prevent excessive operational costs associated with major outages or prolonged downtime. Use observability for resource optimization The following are recommendations to use observability to identify resource bottlenecks and underutilized resources in your cloud deployments: Monitor resource utilization: Use resource utilization metrics to identify Google Cloud resources that are underutilized. For example, use metrics like CPU and memory utilization to identify idle VM resources. For Google Kubernetes Engine (GKE), you can view a detailed breakdown of costs and cost-related optimization metrics. For Google Cloud VMware Engine, review resource utilization to optimize CUDs, storage consumption, and ESXi right-sizing. Use cloud recommendations: Active Assist is a portfolio of intelligent tools that help you optimize your cloud operations. These tools provide actionable recommendations to reduce costs, increase performance, improve security and even make sustainability-focused decisions. 
For example, VM rightsizing insights can help to optimize resource allocation and avoid unnecessary spending. Correlate resource utilization with performance: Analyze the relationship between resource utilization and application performance to determine whether you can downgrade to less expensive resources without affecting the user experience. Balance troubleshooting needs with cost Detailed observability data can help with diagnosing and troubleshooting issues. However, storing excessive amounts of observability data or exporting unnecessary data to external monitoring tools can lead to unnecessary costs. For efficient troubleshooting, consider the following recommendations: Collect sufficient data for troubleshooting: Ensure that your monitoring solution captures enough data to efficiently diagnose and resolve issues when they arise. This data might include logs, traces, and metrics at various levels of granularity. Use sampling and aggregation: Balance the need for detailed data with cost considerations by using sampling and aggregation techniques. This approach lets you collect representative data without incurring excessive storage costs. Understand the pricing models of your monitoring tools and services: Evaluate different monitoring solutions and choose options that align with your project's specific needs, budget, and usage patterns. Consider factors like data volume, retention requirements, and the required features when making your selection. Regularly review your monitoring configuration: Avoid collecting excessive data by removing unnecessary metrics or logs. Tailor data collection to roles and set role-specific retention policies Consider the specific data needs of different roles. For example, developers might primarily need access to traces and application-level logs, whereas IT administrators might focus on system logs and infrastructure metrics. By tailoring data collection, you can reduce unnecessary storage costs and avoid overwhelming users with irrelevant information. Additionally, you can define retention policies based on the needs of each role and any regulatory requirements. For example, developers might need access to detailed logs for a shorter period, while financial analysts might require longer-term data. Consider regulatory and compliance requirements In certain industries, regulatory requirements mandate data retention. To avoid legal and financial risks, you need to ensure that your monitoring and data retention practices meet those requirements. At the same time, you need to maintain cost efficiency. Consider the following recommendations: Determine the specific data retention requirements for your industry or region, and ensure that your monitoring strategy meets those requirements. Implement appropriate data archival and retrieval mechanisms to meet audit and compliance needs while minimizing storage costs. Implement smart alerting Alerting helps to detect and resolve issues in a timely manner. However, a balance is necessary between an approach that keeps you informed and one that overwhelms you with notifications. By designing intelligent alerting systems, you can prioritize critical issues that have higher business impact. Consider the following recommendations: Prioritize issues that affect customers: Design alerts that trigger rapidly for issues that directly affect the customer experience, like website outages, slow response times, or transaction failures.
Tune for temporary problems: Use appropriate thresholds and delay mechanisms to avoid unnecessary alerts for temporary problems or self-healing system issues that don't affect customers. Customize alert severity: Ensure that the most urgent issues receive immediate attention by differentiating between critical and noncritical alerts. Use notification channels wisely: Choose appropriate channels for alert notifications (email, SMS, or paging) based on the severity and urgency of the alerts. Google Cloud Architecture Framework: Performance optimization This pillar in the Google Cloud Architecture Framework provides recommendations to optimize the performance of workloads in Google Cloud. This document is intended for architects, developers, and administrators who plan, design, deploy, and manage workloads in Google Cloud. The recommendations in this pillar can help your organization to operate efficiently, improve customer satisfaction, increase revenue, and reduce cost. For example, when the backend processing time of an application decreases, users experience faster response times, which can lead to higher user retention and more revenue. The performance optimization process can involve a trade-off between performance and cost. However, optimizing performance can sometimes help you reduce costs. ​​For example, when the load increases, autoscaling can help to provide predictable performance by ensuring that the system resources aren't overloaded. Autoscaling also helps you to reduce costs by removing unused resources during periods of low load. Performance optimization is a continuous process, not a one-time activity. The following diagram shows the stages in the performance optimization process: The performance optimization process is an ongoing cycle that includes the following stages: Define requirements: Define granular performance requirements for each layer of the application stack before you design and develop your applications. To plan resource allocation, consider the key workload characteristics and performance expectations. Design and deploy: Use elastic and scalable design patterns that can help you meet your performance requirements. Monitor and analyze: Monitor performance continually by using logs, tracing, metrics, and alerts. Optimize: Consider potential redesigns as your applications evolve. Rightsize cloud resources and use new features to meet changing performance requirements. As shown in the preceding diagram, continue the cycle of monitoring, re-assessing requirements, and adjusting the cloud resources. For performance optimization principles and recommendations that are specific to AI and ML workloads, see AI and ML perspective: Performance optimization in the Architecture Framework. 
Core principles The recommendations in the performance optimization pillar of the Architecture Framework are mapped to the following core principles: Plan resource allocation Take advantage of elasticity Promote modular design Continuously monitor and improve performance ContributorsAuthors: Daniel Lees | Cloud Security ArchitectGary Harmson | Customer EngineerLuis Urena | Developer Relations EngineerZach Seils | Networking SpecialistOther contributors: Filipe Gracio, PhD | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization SpecialistRyan Cox | Principal ArchitectRadhika Kanakam | Senior Program Manager, Cloud GTMWade Holmes | Global Solutions Director Plan resource allocation This principle in the performance optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you plan resources for your workloads in Google Cloud. It emphasizes the importance of defining granular requirements before you design and develop applications for cloud deployment or migration. Principle overview To meet your business requirements, it's important that you define the performance requirements for your applications, before design and development. Define these requirements as granularly as possible for the application as a whole and for each layer of the application stack. For example, in the storage layer, you must consider the throughput and I/O operations per second (IOPS) that the applications need. From the beginning, plan application designs with performance and scalability in mind. Consider factors such as the number of users, data volume, and potential growth over time. Performance requirements for each workload vary and depend on the type of workload. Each workload can contain a mix of component systems and services that have unique sets of performance characteristics. For example, a system that's responsible for periodic batch processing of large datasets has different performance demands than an interactive virtual desktop solution. Your optimization strategies must address the specific needs of each workload. Select services and features that align with the performance goals of each workload. For performance optimization, there's no one-size-fits-all solution. When you optimize each workload, the entire system can achieve optimal performance and efficiency. Consider the following workload characteristics that can influence your performance requirements: Deployment archetype: The deployment archetype that you select for an application can influence your choice of products and features, which then determine the performance that you can expect from your application. Resource placement: When you select a Google Cloud region for your application resources, we recommend that you prioritize low latency for end users, adhere to data-locality regulations, and ensure the availability of required Google Cloud products and services. Network connectivity: Choose networking services that optimize data access and content delivery. Take advantage of Google Cloud's global network, high-speed backbones, interconnect locations, and caching services. Application hosting options: When you select a hosting platform, you must evaluate the performance advantages and disadvantages of each option. For example, consider bare metal, virtual machines, containers, and serverless platforms. 
Storage strategy: Choose an optimal storage strategy that's based on your performance requirements. Resource configurations: The machine type, IOPS, and throughput can have a significant impact on performance. Additionally, early in the design phase, you must consider appropriate security capabilities and their impact on resources. When you plan security features, be prepared to accommodate the necessary performance trade-offs to avoid any unforeseen effects. Recommendations To ensure optimal resource allocation, consider the recommendations in the following sections. Configure and manage quotas Ensure that your application uses only the necessary resources, such as memory, storage, and processing power. Over-allocation can lead to unnecessary expenses, while under-allocation might result in performance degradation. To accommodate elastic scaling and to ensure that adequate resources are available, regularly monitor the capacity of your quotas. Additionally, track quota usage to identify potential scaling constraints or over-allocation issues, and then make informed decisions about resource allocation. Educate and promote awareness Inform your users about the performance requirements and provide educational resources about effective performance management techniques. To evaluate progress and to identify areas for improvement, regularly document the target performance and the actual performance. Load test your application to find potential breakpoints and to understand how you can scale the application. Monitor performance metrics Use Cloud Monitoring to analyze trends in performance metrics, to analyze the effects of experiments, to define alerts for critical metrics, and to perform retrospective analyses. Active Assist is a set of tools that can provide insights and recommendations to help optimize resource utilization. These recommendations can help you to adjust resource allocation and improve performance. Take advantage of elasticity This principle in the performance optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you incorporate elasticity, which is the ability to adjust resources dynamically based on changes in workload requirements. Elasticity allows different components of a system to scale independently. This targeted scaling can help improve performance and cost efficiency by allocating resources precisely where they're needed, without over provisioning or under provisioning your resources. Principle overview The performance requirements of a system directly influence when and how the system scales vertically or scales horizontally. You need to evaluate the system's capacity and determine the load that the system is expected to handle at baseline. Then, you need to determine how you want the system to respond to increases and decreases in the load. When the load increases, the system must scale out horizontally, scale up vertically, or both. For horizontal scaling, add replica nodes to ensure that the system has sufficient overall capacity to fulfill the increased demand. For vertical scaling, replace the application's existing components with components that contain more capacity, more memory, and more storage. When the load decreases, the system must scale down (horizontally, vertically, or both). Define the circumstances in which the system scales up or scales down. Plan to manually scale up systems for known periods of high traffic. Use tools like autoscaling, which responds to increases or decreases in the load. 
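As a conceptual illustration of defining the circumstances in which a system scales out or scales in, the following Python sketch applies utilization thresholds with a gap between them to avoid repeated scale changes. It is a toy model of the decision logic, not the behavior of any specific Google Cloud autoscaler, and the threshold and replica values are hypothetical.

SCALE_OUT_THRESHOLD = 0.75   # add capacity above 75% average utilization
SCALE_IN_THRESHOLD = 0.40    # remove capacity below 40% average utilization
MIN_REPLICAS, MAX_REPLICAS = 2, 20

def decide_replica_count(current_replicas: int, avg_utilization: float) -> int:
    """Returns the new replica count for the observed average utilization."""
    if avg_utilization > SCALE_OUT_THRESHOLD:
        desired = current_replicas + 1          # scale out
    elif avg_utilization < SCALE_IN_THRESHOLD:
        desired = current_replicas - 1          # scale in
    else:
        desired = current_replicas              # stay in the stable band
    return max(MIN_REPLICAS, min(MAX_REPLICAS, desired))

print(decide_replica_count(current_replicas=4, avg_utilization=0.82))  # 5
print(decide_replica_count(current_replicas=4, avg_utilization=0.30))  # 3

Keeping a gap between the scale-out and scale-in thresholds prevents the system from oscillating when utilization hovers near a single trigger value.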
Recommendations To take advantage of elasticity, consider the recommendations in the following sections. Plan for peak load periods You need to plan an efficient scaling path for known events, such as expected periods of increased customer demand. Consider scaling up your system ahead of known periods of high traffic. For example, if you're a retail organization, you expect demand to increase during seasonal sales. We recommend that you manually scale up or scale out your systems before those sales to ensure that your system can immediately handle the increased load or immediately adjust existing limits. Otherwise, the system might take several minutes to add resources in response to real-time changes. Your application's capacity might not increase quickly enough and cause some users to experience delays. For unknown or unexpected events, such as a sudden surge in demand or traffic, you can use autoscaling features to trigger elastic scaling that's based on metrics. These metrics can include CPU utilization, load balancer serving capacity, latency, and even custom metrics that you define in Cloud Monitoring. For example, consider an application that runs on a Compute Engine managed instance group (MIG). This application has a requirement that each instance performs optimally until the average CPU utilization reaches 75%. In this example, you might define an autoscaling policy that creates more instances when the CPU utilization reaches the threshold. These newly-created instances help absorb the load, which helps ensure that the average CPU utilization remains at an optimal rate until the maximum number of instances that you've configured for the MIG is reached. When the demand decreases, the autoscaling policy removes the instances that are no longer needed. Plan resource slot reservations in BigQuery or adjust the limits for autoscaling configurations in Spanner by using the managed autoscaler. Use predictive scaling If your system components include Compute Engine, you must evaluate whether predictive autoscaling is suitable for your workload. Predictive autoscaling forecasts the future load based on your metrics' historical trends—for example, CPU utilization. Forecasts are recomputed every few minutes, so the autoscaler rapidly adapts its forecast to very recent changes in load. Without predictive autoscaling, an autoscaler can only scale a group reactively, based on observed real-time changes in load. Predictive autoscaling works with both real-time data and historical data to respond to both the current and the forecasted load. Implement serverless architectures Consider implementing a serverless architecture with serverless services that are inherently elastic, such as the following: Cloud Run Cloud Run functions BigQuery Spanner Eventarc Workflows Pub/Sub Unlike autoscaling in other services that require fine-tuning rules (for example, Compute Engine), serverless autoscaling is instant and can scale down to zero resources. Use Autopilot mode for Kubernetes For complex applications that require greater control over Kubernetes, consider Autopilot mode in Google Kubernetes Engine (GKE). Autopilot mode provides automation and scalability by default. GKE automatically scales nodes and resources based on traffic. GKE manages nodes, creates new nodes for your applications, and configures automatic upgrades and repairs. 
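The managed instance group example described in this section can be expressed in code. The following sketch assumes the google-cloud-compute Python client library; the resource names, replica bounds, and cooldown value are illustrative and might need adjustment for your environment.

from google.cloud import compute_v1

def create_cpu_autoscaler(project: str, zone: str, mig_name: str) -> None:
    # Attach a CPU-based autoscaling policy to an existing managed instance group.
    autoscaler = compute_v1.Autoscaler(
        name=f"{mig_name}-autoscaler",
        # URL of the existing managed instance group to scale.
        target=f"projects/{project}/zones/{zone}/instanceGroupManagers/{mig_name}",
        autoscaling_policy=compute_v1.AutoscalingPolicy(
            min_num_replicas=2,
            max_num_replicas=10,
            cool_down_period_sec=90,
            # Add instances when average CPU utilization across the group exceeds 75%.
            cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
                utilization_target=0.75
            ),
        ),
    )
    operation = compute_v1.AutoscalersClient().insert(
        project=project, zone=zone, autoscaler_resource=autoscaler
    )
    operation.result()  # Wait for the autoscaler to be created.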
Promote modular design This principle in the performance optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you promote a modular design. Modular components and clear interfaces can enable flexible scaling, independent updates, and future component separation. Principle overview Understand the dependencies between the application components and the system components to design a scalable system. Modular design enables flexibility and resilience, regardless of whether a monolithic or microservices architecture was initially deployed. By decomposing the system into well-defined, independent modules with clear interfaces, you can scale individual components to meet specific demands. Targeted scaling can help optimize resource utilization and reduce costs in the following ways: Provisions only the necessary resources to each component, and allocates fewer resources to less-demanding components. Adds more resources during high-traffic periods to maintain the user experience. Removes under-utilized resources without compromising performance. Modularity also enhances maintainability. Smaller, self-contained units are easier to understand, debug, and update, which can lead to faster development cycles and reduced risk. While modularity offers significant advantages, you must evaluate the potential performance trade-offs. The increased communication between modules can introduce latency and overhead. Strive for a balance between modularity and performance. A highly modular design might not be universally suitable. When performance is critical, a more tightly coupled approach might be appropriate. System design is an iterative process, in which you continuously review and refine your modular design. Recommendations To promote modular designs, consider the recommendations in the following sections. Design for loose coupling Design a loosely coupled architecture. Independent components with minimal dependencies can help you build scalable and resilient applications. As you plan the boundaries for your services, you must consider the availability and scalability requirements. For example, if one component has requirements that are different from your other components, you can design the component as a standalone service. Implement a plan for graceful failures for less-important subprocesses or services that don't impact the response time of the primary services. Design for concurrency and parallelism Design your application to support multiple tasks concurrently, like processing multiple user requests or running background jobs while users interact with your system. Break large tasks into smaller chunks that can be processed at the same time by multiple service instances. Task concurrency lets you use features like autoscaling to increase the resource allocation in products like the following: Compute Engine GKE BigQuery Spanner Balance modularity for flexible resource allocation Where possible, ensure that each component uses only the necessary resources (like memory, storage, and processing power) for specific operations. Resource over-allocation can result in unnecessary costs, while resource under-allocation can compromise performance. Use well-defined interfaces Ensure modular components communicate effectively through clear, standardized interfaces (like APIs and message queues) to reduce overhead from translation layers or from extraneous traffic. 
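The following minimal Python sketch shows one way to express a well-defined interface between modules so that a component can be scaled, replaced, or updated independently. The interface and implementation names are hypothetical examples, not part of any Google Cloud API.

from typing import Protocol

class ProfileStore(Protocol):
    # Narrow, well-defined interface between the application module and storage.
    def save(self, user_id: str, profile: dict) -> None: ...
    def load(self, user_id: str) -> dict: ...

class InMemoryProfileStore:
    # Lightweight implementation for local development and unit tests; a
    # cloud-backed implementation can satisfy the same interface in production.
    def __init__(self) -> None:
        self._data: dict[str, dict] = {}
    def save(self, user_id: str, profile: dict) -> None:
        self._data[user_id] = profile
    def load(self, user_id: str) -> dict:
        return self._data.get(user_id, {})

def update_display_name(store: ProfileStore, user_id: str, name: str) -> None:
    # The application module depends only on the interface, so the storage
    # backend can be scaled or swapped without changes here.
    profile = store.load(user_id)
    profile["display_name"] = name
    store.save(user_id, profile)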
Use stateless models A stateless model can help ensure that you can handle each request or interaction with the service independently from previous requests. This model facilitates scalability and recoverability, because you can grow, shrink, or restart the service without losing the data necessary for in-progress requests or processes. Choose complementary technologies Choose technologies that complement the modular design. Evaluate programming languages, frameworks, and databases for their modularity support. For more information, see the following resources: Re-architecting to cloud native Introduction to microservices Continuously monitor and improve performance This principle in the performance optimization pillar of the Google Cloud Architecture Framework provides recommendations to help you continuously monitor and improve performance. After you deploy applications, continuously monitor their performance by using logs, tracing, metrics, and alerts. As your applications grow and evolve, you can use the trends in these data points to re-assess your performance requirements. You might eventually need to redesign parts of your applications to maintain or improve their performance. Principle overview The process of continuous performance improvement requires robust monitoring tools and strategies. Cloud observability tools can help you to collect key performance indicators (KPIs) such as latency, throughput, error rates, and resource utilization. Cloud environments offer a variety of methods to conduct granular performance assessments across the application, the network, and the end-user experience. Improving performance is an ongoing effort that requires a multi-faceted approach. The following key mechanisms and processes can help you to boost performance: To provide clear direction and help track progress, define performance objectives that align with your business goals. Set SMART goals: specific, measurable, achievable, relevant, and time-bound. To measure performance and identify areas for improvement, gather KPI metrics. To continuously monitor your systems for issues, use visualized workflows in monitoring tools. Use architecture process mapping techniques to identify redundancies and inefficiencies. To create a culture of ongoing improvement, provide training and programs that support your employees' growth. To encourage proactive and continuous improvement, incentivize your employees and customers to provide ongoing feedback about your application's performance. Recommendations To continuously monitor and improve performance, consider the recommendations in the following sections. Define clear performance goals and metrics Define clear performance objectives that align with your business goals. This requires a deep understanding of your application's architecture and the performance requirements of each application component. As a priority, optimize the most critical components that directly influence your core business functions and user experience. To help ensure that these components continue to run efficiently and meet your business needs, set specific and measurable performance targets. These targets can include response times, error rates, and resource utilization thresholds. This proactive approach can help you to identify and address potential bottlenecks, optimize resource allocation, and ultimately deliver a seamless and high-performing experience for your users. Monitor performance Continuously monitor your cloud systems for performance issues and set up alerts for any potential problems.
Monitoring and alerts can help you to catch and fix issues before they affect users. Application profiling can help you to identify bottlenecks and optimize resource use. You can use tools that facilitate effective troubleshooting and network optimization. Use Google Cloud Observability to identify areas that have high CPU consumption, memory consumption, or network consumption. These capabilities can help developers improve efficiency, reduce costs, and enhance the user experience. Network Intelligence Center shows visualizations of the topology of your network infrastructure, and can help you to identify high-latency paths. Incentivize continuous improvement Create a culture of ongoing improvement that can benefit both the application and the user experience. Provide your employees with training and development opportunities that enhance their skills and knowledge in performance techniques across cloud services. Establish a community of practice (CoP) and offer mentorship and coaching programs to support employee growth. To prevent reactive performance management and encourage proactive performance management, encourage ongoing feedback from your employees, your customers, and your stakeholders. You can consider gamifying the process by tracking KPIs on performance and presenting those metrics to teams on a frequent basis in the form of a league table. To understand your performance and user happiness over time, we recommend that you measure user feedback quantitatively and qualitatively. The HEART framework can help you capture user feedback across five categories: Happiness Engagement Adoption Retention Task success By using such a framework, you can incentivize engineers with data-driven feedback, user-centered metrics, actionable insights, and a clear understanding of goals. Design for environmental sustainability This document in the Google Cloud Architecture Framework summarizes how you can approach environmental sustainability for your workloads in Google Cloud. It includes information about how to minimize your carbon footprint on Google Cloud. Understand your carbon footprint To understand the carbon footprint from your Google Cloud usage, use the Carbon Footprint dashboard. The Carbon Footprint dashboard attributes emissions to the Google Cloud projects that you own and the cloud services that you use. Choose the most suitable cloud regions One effective way to reduce carbon emissions is to choose cloud regions with lower carbon emissions. To help you make this choice, Google publishes carbon data for all Google Cloud regions. When you choose a region, you might need to balance lowering emissions with other requirements, such as pricing and network latency. To help select a region, use the Google Cloud Region Picker. Choose the most suitable cloud services To help reduce your existing carbon footprint, consider migrating your on-premises VM workloads to Compute Engine. Consider serverless options for workloads that don't need VMs. These managed services often optimize resource usage automatically, reducing costs and carbon footprint. Minimize idle cloud resources Idle resources incur unnecessary costs and emissions. Some common causes of idle resources include the following: Unused active cloud resources, such as idle VM instances. Over-provisioned resources, such as VM instances with larger machine types than necessary for a workload. Non-optimal architectures, such as lift-and-shift migrations that aren't always optimized for efficiency.
Consider making incremental improvements to these architectures. The following are some general strategies to help minimize wasted cloud resources: Identify idle or overprovisioned resources and either delete them or rightsize them. Refactor your architecture to incorporate a more optimal design. Migrate workloads to managed services. Reduce emissions for batch workloads Run batch workloads in regions with lower carbon emissions. For further reductions, run workloads at times that coincide with lower grid carbon intensity when possible. What's next Learn how to use Carbon Footprint data to measure, report, and reduce your cloud carbon emissions. Architecture Framework: AI and ML perspective This document in the Google Cloud Architecture Framework describes principles and recommendations to help you to design, build, and manage AI and ML workloads in Google Cloud that meet your operational, security, reliability, cost, and performance goals. The target audience for this document includes decision makers, architects, administrators, developers, and operators who design, build, deploy, and maintain AI and ML workloads in Google Cloud. The following pages describe principles and recommendations that are specific to AI and ML, for each pillar of the Google Cloud Architecture Framework: AI and ML perspective: Operational excellence AI and ML perspective: Security AI and ML perspective: Reliability AI and ML perspective: Cost optimization AI and ML perspective: Performance optimization ContributorsAuthors: Benjamin Sadik | AI and ML Specialist Customer EngineerFilipe Gracio, PhD | Customer EngineerIsaac Lo | AI Business Development ManagerKamilla Kurta | GenAI/ML Specialist Customer EngineerMohamed Fawzi | Benelux Security and Compliance LeadRick (Rugui) Chen | AI Infrastructure Solutions ArchitectSannya Dang | AI Solution ArchitectOther contributors: Daniel Lees | Cloud Security ArchitectGary Harmson | Customer EngineerJose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization SpecialistRadhika Kanakam | Senior Program Manager, Cloud GTMRyan Cox | Principal ArchitectStef Ruinard | Generative AI Field Solutions ArchitectWade Holmes | Global Solutions DirectorZach Seils | Networking Specialist AI and ML perspective: Operational excellence This document in the Architecture Framework: AI and ML perspective provides an overview of the principles and recommendations to help you to build and operate robust AI and ML systems on Google Cloud. These recommendations help you to set up foundational elements like observability, automation, and scalability. This document's recommendations align with the operational excellence pillar of the Architecture Framework. Operational excellence within the AI and ML domain is the ability to seamlessly deploy, manage, and govern the intricate AI and ML systems and pipelines that power your organization's strategic objectives. Operational excellence lets you respond efficiently to changes, reduce operational complexity, and ensure that operations remain aligned with business goals. Build a robust foundation for model development Establish a robust foundation to streamline model development, from problem definition to deployment. Such a foundation ensures that your AI solutions are built on reliable and efficient components and choices. 
This kind of foundation helps you to release changes and improvements quickly and easily. Consider the following recommendations: Define the problem that the AI system solves and the outcome that you want. Identify and gather relevant data that's required to train and evaluate your models. Then, clean and preprocess the raw data. Implement data validation checks to ensure data quality and integrity. Choose the appropriate ML approach for the task. When you design the structure and parameters of the model, consider the model's complexity and computational requirements. Adopt a version control system for code, model, and data. Automate the model-development lifecycle From data preparation and training to deployment and monitoring, automation helps you to improve the quality and efficiency of your operations. Automation enables seamless, repeatable, and error-free model development and deployment. Automation minimizes manual intervention, speeds up release cycles, and ensures consistency across environments. Consider the following recommendations: Use a managed pipeline orchestration system to orchestrate and automate the ML workflow. The pipeline must handle the major steps of your development lifecycle: preparation, training, deployment, and evaluation. Implement CI/CD pipelines for the model-development lifecycle. These pipelines should automate the building, testing, and deployment of models. The pipelines should also include continuous training to retrain models on new data as needed. Implement phased release approaches such as canary deployments or A/B testing, for safe and controlled model releases. Implement observability When you implement observability, you can gain deep insights into model performance, data drift, and system health. Implement continuous monitoring, alerting, and logging mechanisms to proactively identify issues, trigger timely responses, and ensure operational continuity. Consider the following recommendations: Implement permanent and automated performance monitoring for your models. Use metrics and success criteria for ongoing evaluation of the model after deployment. Monitor your deployment endpoints and infrastructure to ensure service availability. Set up custom alerting based on business-specific thresholds and anomalies to ensure that issues are identified and resolved in a timely manner. Use explainable AI techniques to understand and interpret model outputs. Build a culture of operational excellence Operational excellence is built on a foundation of people, culture, and professional practices. The success of your team and business depends on how effectively your organization implements methodologies that enable the reliable and rapid development of AI capabilities. Consider the following recommendations: Champion automation and standardization as core development methodologies. Streamline your workflows and manage the ML lifecycle efficiently by using MLOps techniques. Automate tasks to free up time for innovation, and standardize processes to support consistency and easier troubleshooting. Prioritize continuous learning and improvement. Promote learning opportunities that team members can use to enhance their skills and stay current with AI and ML advancements. Encourage experimentation and conduct regular retrospectives to identify areas for improvement. Cultivate a culture of accountability and ownership. Define clear roles so that everyone understands their contributions. 
Empower teams to make decisions within boundaries and track progress by using transparent metrics. Embed AI ethics and safety into the culture. Prioritize responsible systems by integrating ethics considerations into every stage of the ML lifecycle. Establish clear ethics principles and foster open discussions about ethics-related challenges. Design for scalability Architect your AI solutions to handle growing data volumes and user demands. Use scalable infrastructure so that your models can adapt and perform optimally as your project expands. Consider the following recommendations: Plan for capacity and quotas. Anticipate future growth, and plan your infrastructure capacity and resource quotas accordingly. Prepare for peak events. Ensure that your system can handle sudden spikes in traffic or workload during peak events. Scale AI applications for production. Design for horizontal scaling to accommodate increases in the workload. Use frameworks like Ray on Vertex AI to parallelize tasks across multiple machines. Use managed services where appropriate. Use services that help you to scale while minimizing the operational overhead and complexity of manual interventions. ContributorsAuthors: Sannya Dang | AI Solution ArchitectFilipe Gracio, PhD | Customer EngineerOther contributors: Kumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerRyan Cox | Principal ArchitectStef Ruinard | Generative AI Field Solutions Architect AI and ML perspective: Security This document in the Architecture Framework: AI and ML perspective provides an overview of principles and recommendations to ensure that your AI and ML deployments meet the security and compliance requirements of your organization. The recommendations in this document align with the security pillar of the Architecture Framework. Secure deployment of AI and ML workloads is a critical requirement, particularly in enterprise environments. To meet this requirement, you need to adopt a holistic security approach that starts from the initial conceptualization of your AI and ML solutions and extends to development, deployment, and ongoing operations. Google Cloud offers robust tools and services that are designed to help secure your AI and ML workloads. Define clear goals and requirements It's easier to integrate the required security and compliance controls early in your design and development process, than to add the controls after development. From the start of your design and development process, make decisions that are appropriate for your specific risk environment and your specific business priorities. Consider the following recommendations: Identify potential attack vectors and adopt a security and compliance perspective from the start. As you design and evolve your AI systems, keep track of the attack surface, potential risks, and obligations that you might face. Align your AI and ML security efforts with your business goals and ensure that security is an integral part of your overall strategy. Understand the effects of your security choices on your main business goals. Keep data secure and prevent loss or mishandling Data is a valuable and sensitive asset that must be kept secure. Data security helps you to maintain user trust, support your business objectives, and meet your compliance requirements. Consider the following recommendations: Don't collect, keep, or use data that's not strictly necessary for your business goals. If possible, use synthetic or fully anonymized data. 
Monitor data collection, storage, and transformation. Maintain logs for all data access and manipulation activities. The logs help you to audit data access, detect unauthorized access attempts, and prevent unwanted access. Implement different levels of access (for example, no-access, read-only, or write) based on user roles. Ensure that permissions are assigned based on the principle of least privilege. Users must have only the minimum permissions that are necessary to let them perform their role activities. Implement measures like encryption, secure perimeters, and restrictions on data movement. These measures help you to prevent data exfiltration and data loss. Guard against data poisoning for your ML training systems. Keep AI pipelines secure and robust against tampering Your AI and ML code and the code-defined pipelines are critical assets. Code that isn't secured can be tampered with, which can lead to data leaks, compliance failure, and disruption of critical business activities. Keeping your AI and ML code secure helps to ensure the integrity and value of your models and model outputs. Consider the following recommendations: Use secure coding practices, such as dependency management or input validation and sanitization, during model development to prevent vulnerabilities. Protect your pipeline code and your model artifacts, like files, model weights, and deployment specifications, from unauthorized access. Implement different access levels for each artifact based on user roles and needs. Enforce lineage and tracking of your assets and pipeline runs. This enforcement helps you to meet compliance requirements and to avoid compromising production systems. Deploy on secure systems with secure tools and artifacts Ensure that your code and models run in a secure environment that has a robust access control system with security assurances for the tools and artifacts that are deployed in the environment. Consider the following recommendations: Train and deploy your models in a secure environment that has appropriate access controls and protection against unauthorized use or manipulation. Follow standard Supply-chain Levels for Software Artifacts (SLSA) guidelines for your AI-specific artifacts, like models and software packages. Prefer using validated prebuilt container images that are specifically designed for AI workloads. Protect and monitor inputs AI systems need inputs to make predictions, generate content, or automate actions. Some inputs might pose risks or be used as attack vectors that must be detected and sanitized. Detecting potential malicious inputs early helps you to keep your AI systems secure and operating as intended. Consider the following recommendations: Implement secure practices to develop and manage prompts for generative AI systems, and ensure that the prompts are screened for harmful intent. Monitor inputs to predictive or generative systems to prevent issues like overloaded endpoints or prompts that the systems aren't designed to handle. Ensure that only the intended users of a deployed system can use it. Monitor, evaluate, and prepare to respond to outputs AI systems deliver value because they produce outputs that augment, optimize, or automate human decision-making. To maintain the integrity and trustworthiness of your AI systems and applications, you need to make sure that the outputs are secure and within expected parameters. You also need a plan to respond to incidents. 
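As one illustration of keeping outputs within expected parameters, the following minimal Python sketch checks a batch of predictive scores against hypothetical business thresholds before the results are released. The threshold names and values are assumptions for illustration only.

import logging

# Hypothetical guardrail: valid scores lie in [0, 1], and a batch in which more
# than 5% of scores exceed 0.99 is treated as suspicious and held for review.
EXTREME_SCORE = 0.99
MAX_EXTREME_FRACTION = 0.05

def check_prediction_batch(scores: list[float]) -> bool:
    # Return True if the batch passes basic output checks; otherwise log and fail.
    if not scores:
        return True
    if any(s < 0.0 or s > 1.0 for s in scores):
        logging.error("Out-of-range prediction detected; blocking batch.")
        return False
    extreme_fraction = sum(s > EXTREME_SCORE for s in scores) / len(scores)
    if extreme_fraction > MAX_EXTREME_FRACTION:
        logging.warning("%.1f%% of scores are extreme; raising an alert.",
                        extreme_fraction * 100)
        return False
    return True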
Consider the following recommendations: Monitor the outputs of your AI and ML models in production, and identify any performance, security, and compliance issues. Evaluate model performance by implementing robust metrics and security measures, like identifying out-of-scope generative responses or extreme outputs in predictive models. Collect user feedback on model performance. Implement robust alerting and incident response procedures to address any potential issues. ContributorsAuthors: Kamilla Kurta | GenAI/ML Specialist Customer EngineerFilipe Gracio, PhD | Customer EngineerMohamed Fawzi | Benelux Security and Compliance LeadOther contributors: Daniel Lees | Cloud Security ArchitectKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerWade Holmes | Global Solutions Director AI and ML perspective: Reliability This document in the Architecture Framework: AI and ML perspective provides an overview of the principles and recommendations to design and operate reliable AI and ML systems on Google Cloud. It explores how to integrate advanced reliability practices and observability into your architectural blueprints. The recommendations in this document align with the reliability pillar of the Architecture Framework. In the fast-evolving AI and ML landscape, reliable systems are essential for ensuring customer satisfaction and achieving business goals. You need AI and ML systems that are robust, reliable, and adaptable to meet the unique demands of both predictive ML and generative AI. To handle the complexities of MLOps—from development to deployment and continuous improvement—you need to use a reliability-first approach. Google Cloud offers a purpose-built AI infrastructure that's aligned with Site Reliability Engineering (SRE) principles and provides a powerful foundation for reliable AI and ML systems. Ensure that infrastructure is scalable and highly available By architecting for scalability and availability, you enable your applications to handle varying levels of demand without service disruptions or performance degradation. This means that your AI services are still available to users during infrastructure outages and when traffic is very high. Consider the following recommendations: Design your AI systems with automatic and dynamic scaling capabilities to handle fluctuations in demand. This helps to ensure optimal performance, even during traffic spikes. Manage resources proactively and anticipate future needs through load testing and performance monitoring. Use historical data and predictive analytics to make informed decisions about resource allocation. Design for high availability and fault tolerance by adopting the multi-zone and multi-region deployment archetypes in Google Cloud and by implementing redundancy and replication. Distribute incoming traffic across multiple instances of your AI and ML services and endpoints. Load balancing helps to prevent any single instance from being overloaded and helps to ensure consistent performance and availability. Use a modular and loosely coupled architecture To make your AI systems resilient to failures in individual components, use a modular architecture. For example, design the data processing and data validation components as separate modules. When a particular component fails, the modular architecture helps to minimize downtime and lets your teams develop and deploy fixes faster. Consider the following recommendations: Separate your AI and ML system into small self-contained modules or components. 
This approach promotes code reusability, simplifies testing and maintenance, and lets you develop and deploy individual components independently. Design the loosely coupled modules with well-defined interfaces. This approach minimizes dependencies, and it lets you make independent updates and changes without impacting the entire system. Plan for graceful degradation. When a component fails, the other parts of the system must continue to provide an adequate level of functionality. Use APIs to create clear boundaries between modules and to hide the module-level implementation details. This approach lets you update or replace individual components without affecting interactions with other parts of the system. Build an automated MLOps platform With an automated MLOps platform, the stages and outputs of your model lifecycle are more reliable. By promoting consistency, loose coupling, and modularity, and by expressing operations and infrastructure as code, you remove fragile manual steps and maintain AI and ML systems that are more robust and reliable. Consider the following recommendations: Automate the model development lifecycle, from data preparation and validation to model training, evaluation, deployment, and monitoring. Manage your infrastructure as code (IaC). This approach enables efficient version control, quick rollbacks when necessary, and repeatable deployments. Validate that your models behave as expected with relevant data. Automate performance monitoring of your models, and build appropriate alerts for unexpected outputs. Validate the inputs and outputs of your AI and ML pipelines. For example, validate data, configurations, command arguments, files, and predictions. Configure alerts for unexpected or unallowed values. Adopt a managed version-control strategy for your model endpoints. This kind of strategy enables incremental releases and quick recovery in the event of problems. Maintain trust and control through data and model governance The reliability of AI and ML systems depends on the trust and governance capabilities of your data and models. AI outputs can fail to meet expectations in silent ways. For example, the outputs might be formally consistent but they might be incorrect or unwanted. By implementing traceability and strong governance, you can ensure that the outputs are reliable and trustworthy. Consider the following recommendations: Use a data and model catalog to track and manage your assets effectively. To facilitate tracing and audits, maintain a comprehensive record of data and model versions throughout the lifecycle. Implement strict access controls and audit trails to protect sensitive data and models. Address the critical issue of bias in AI, particularly in generative AI applications. To build trust, strive for transparency and explainability in model outputs. Automate the generation of feature statistics and implement anomaly detection to proactively identify data issues. To ensure model reliability, establish mechanisms to detect and mitigate the impact of changes in data distributions. Implement holistic AI and ML observability and reliability practices To continuously improve your AI operations, you need to define meaningful reliability goals and measure progress. Observability is a foundational element of reliable systems. Observability lets you manage ongoing operations and critical events. Well-implemented observability helps you to build and maintain a reliable service for your users. 
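As a concrete example of a reliability goal that observability data can feed, the following minimal Python sketch computes the remaining error budget for an availability SLO. The SLO target and request counts are illustrative assumptions.

def error_budget_remaining(total_requests: int, failed_requests: int,
                           slo_target: float = 0.999) -> float:
    # With a 99.9% availability SLO, the error budget is 0.1% of requests.
    # A negative result means the budget is exhausted and reliability work
    # should take priority over new rollouts.
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        return 0.0
    return (allowed_failures - failed_requests) / allowed_failures

# Example: 2,000,000 requests with 1,200 failures against a 99.9% SLO
# leaves 40% of the error budget for the period.
print(error_budget_remaining(2_000_000, 1_200))  # 0.4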
Consider the following recommendations: Track infrastructure metrics for processors (CPUs, GPUs, and TPUs) and for other resources like memory usage, network latency, and disk usage. Perform load testing and performance monitoring. Use the test results and metrics from monitoring to manage scaling and capacity for your AI and ML systems. Establish reliability goals and track application metrics. Measure metrics like throughput and latency for the AI applications that you build. Monitor the usage patterns of your applications and the exposed endpoints. Establish model-specific metrics like accuracy or safety indicators in order to evaluate model reliability. Track these metrics over time to identify any drift or degradation. For efficient version control and automation, define the monitoring configurations as code. Define and track business-level metrics to understand the impact of your models and reliability on business outcomes. To measure the reliability of your AI and ML services, consider adopting the SRE approach and define service level objectives (SLOs). ContributorsAuthors: Rick (Rugui) Chen | AI Infrastructure Solutions ArchitectFilipe Gracio, PhD | Customer EngineerOther contributors: Jose Andrade | Enterprise Infrastructure Customer EngineerKumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer Engineer AI and ML perspective: Cost optimization This document in Architecture Framework: AI and ML perspective provides an overview of principles and recommendations to optimize the cost of your AI systems throughout the ML lifecycle. By adopting a proactive and informed cost management approach, your organization can realize the full potential of AI and ML systems and also maintain financial discipline. The recommendations in this document align with the cost optimization pillar of the Architecture Framework. AI and ML systems can help you to unlock valuable insights and predictive capabilities from data. For example, you can reduce friction in internal processes, improve user experiences, and gain deeper customer insights. The cloud offers vast amounts of resources and quick time-to-value without large up-front investments for AI and ML workloads. To maximize business value and to align the spending with your business goals, you need to understand the cost drivers, proactively optimize costs, set up spending controls, and adopt FinOps practices. Define and measure costs and returns To effectively manage your AI and ML costs in Google Cloud, you must define and measure the expenses for cloud resources and the business value of your AI and ML initiatives. Google Cloud provides comprehensive tools for billing and cost management to help you to track expenses granularly. Business value metrics that you can measure include customer satisfaction, revenue, and operational costs. By establishing concrete metrics for both costs and business value, you can make informed decisions about resource allocation and optimization. Consider the following recommendations: Establish clear business objectives and key performance indicators (KPIs) for your AI and ML projects. Use the billing information provided by Google Cloud to implement cost monitoring and reporting processes that can help you to attribute costs to specific AI and ML activities. Establish dashboards, alerting, and reporting systems to track costs and returns against KPIs. 
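As a sketch of attributing costs to specific AI and ML activities, the following example assumes a standard Cloud Billing export to BigQuery and a team-defined resource label (here called ml-activity); the table name and label key are placeholders that you would replace with your own.

from google.cloud import bigquery

# Placeholder table name and label key: a Cloud Billing export to BigQuery,
# with resources labeled by a team-defined "ml-activity" value
# (for example, training, serving, or experimentation).
QUERY = """
SELECT l.value AS ml_activity, ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`, UNNEST(labels) AS l
WHERE l.key = 'ml-activity'
  AND usage_start_time >= TIMESTAMP('2025-01-01')
GROUP BY ml_activity
ORDER BY total_cost DESC
"""

client = bigquery.Client()
for row in client.query(QUERY).result():
    print(f"{row.ml_activity}: ${row.total_cost}")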
Optimize resource allocation To achieve cost efficiency for your AI and ML workloads in Google Cloud, you must optimize resource allocation. By carefully aligning resource allocation with the needs of your workloads, you can avoid unnecessary expenses and ensure that your AI and ML systems have the resources that they need to perform optimally. Consider the following recommendations: Use autoscaling to dynamically adjust resources for training and inference. Start with small models and data. Save costs by testing hypotheses at a smaller scale when possible. Discover your compute needs through experimentation. Rightsize the resources that are used for training and serving based on your ML requirements. Adopt MLOps practices to reduce duplication, manual processes, and inefficient resource allocation. Enforce data management and governance practices Effective data management and governance practices play a critical role in cost optimization. Well-organized data helps your organization to avoid needless duplication, reduces the effort required to obtain high quality data, and encourages teams to reuse datasets. By proactively managing data, you can reduce storage costs, enhance data quality, and ensure that your ML models are trained and operate on the most relevant and valuable data. Consider the following recommendations: Establish and adopt a well-defined data governance framework. Apply labels and relevant metadata to datasets at the point of data ingestion. Ensure that datasets are discoverable and accessible across the organization. Make your datasets and features reusable throughout the ML lifecycle wherever possible. Automate and streamline with MLOps A primary benefit of adopting MLOps practices is a reduction in costs, both from a technology perspective and in terms of personnel activities. Automation helps you to avoid duplication of ML activities and improve the productivity of data scientists and ML engineers. Consider the following recommendations: Increase the level of automation and standardization in your data collection and processing technologies to reduce development effort and time. Develop automated training pipelines to reduce the need for manual interventions and increase engineer productivity. Implement mechanisms for the pipelines to reuse existing assets like prepared datasets and trained models. Use the model evaluation and tuning services in Google Cloud to increase model performance with fewer iterations. This enables your AI and ML teams to achieve more objectives in less time. Use managed services and pre-trained or existing models There are many approaches to achieving business goals by using AI and ML. Adopt an incremental approach to model selection and model development. This helps you to avoid excessive costs that are associated with starting fresh every time. To control costs, start with a simple approach: use ML frameworks, managed services, and pre-trained models. Consider the following recommendations: Enable exploratory and quick ML experiments by using notebook environments. Use existing and pre-trained models as a starting point to accelerate your model selection and development process. Use managed services to train or serve your models. Both AutoML and managed custom model training services can help to reduce the cost of model training. Managed services can also help to reduce the cost of your model-serving infrastructure. 
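As a minimal sketch of the "use a pre-trained model API" option, the following example calls the Cloud Vision API through its Python client library instead of training a custom image model. The Cloud Storage URI is a placeholder.

from google.cloud import vision

# Use a pre-trained model through a managed API instead of training a custom
# image classifier.
client = vision.ImageAnnotatorClient()
image = vision.Image()
image.source.image_uri = "gs://example-bucket/sample.jpg"

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")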
Foster a culture of cost awareness and continuous optimization Cultivate a collaborative environment that encourages communication and regular reviews. This approach helps teams to identify and implement cost-saving opportunities throughout the ML lifecycle. Consider the following recommendations: Adopt FinOps principles across your ML lifecycle. Ensure that all costs and business benefits of AI and ML projects have assigned owners with clear accountability. ContributorsAuthors: Isaac Lo | AI Business Development ManagerFilipe Gracio, PhD | Customer EngineerOther contributors: Kumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerNicolas Pintaux | Customer Engineer, Application Modernization Specialist AI and ML perspective: Performance optimization This document in the Architecture Framework: AI and ML perspective provides an overview of principles and recommendations to help you to optimize the performance of your AI and ML workloads on Google Cloud. The recommendations in this document align with the performance optimization pillar of the Architecture Framework. AI and ML systems enable new automation and decision-making capabilities for your organization. The performance of these systems can directly affect your business drivers like revenue, costs, and customer satisfaction. To realize the full potential of your AI and ML systems, you need to optimize their performance based on your business goals and technical requirements. The performance optimization process often involves certain trade-offs. For example, a design choice that provides the required performance might lead to higher costs. The recommendations in this document prioritize performance over other considerations like costs. To optimize AI and ML performance, you need to make decisions regarding factors like the model architecture, parameters, and training strategy. When you make these decisions, consider the entire lifecycle of the AI and ML systems and their deployment environment. For example, LLMs that are very large can be highly performant on massive training infrastructure, but very large models might not perform well in capacity-constrained environments like mobile devices. Translate business goals to performance objectives To make architectural decisions that optimize performance, start with a clear set of business goals. Design AI and ML systems that provide the technical performance that's required to support your business goals and priorities. Your technical teams must understand the mapping between performance objectives and business goals. Consider the following recommendations: Translate business objectives into technical requirements: Translate the business objectives of your AI and ML systems into specific technical performance requirements and assess the effects of not meeting the requirements. For example, for an application that predicts customer churn, the ML model should perform well on standard metrics, like accuracy and recall, and the application should meet operational requirements like low latency. Monitor performance at all stages of the model lifecycle: During experimentation and training, and after model deployment, monitor your key performance indicators (KPIs) and observe any deviations from business objectives. Automate evaluation to make it reproducible and standardized: With a standardized and comparable platform and methodology for experiment evaluation, your engineers can increase the pace of performance improvement.
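To illustrate the "automate evaluation" recommendation, the following minimal Python sketch applies the same pass/fail gate to every candidate model. The metric names and thresholds are hypothetical values derived from business requirements, not prescribed defaults.

# Hypothetical requirements for a churn-prediction model, derived from business
# goals; the metric names and values are illustrative.
REQUIREMENTS = {
    "recall": 0.80,         # catch at least 80% of likely churners
    "precision": 0.60,      # keep retention-offer costs bounded
    "p95_latency_ms": 200,  # online serving requirement
}

def evaluation_gate(metrics: dict[str, float]) -> bool:
    # Apply the same pass/fail gate to every experiment and release candidate
    # so that evaluations stay reproducible and comparable across versions.
    checks = {
        "recall": metrics["recall"] >= REQUIREMENTS["recall"],
        "precision": metrics["precision"] >= REQUIREMENTS["precision"],
        "p95_latency_ms": metrics["p95_latency_ms"] <= REQUIREMENTS["p95_latency_ms"],
    }
    for name, passed in checks.items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
    return all(checks.values())

# Example: an accurate candidate that is too slow fails the gate.
print(evaluation_gate({"recall": 0.84, "precision": 0.65, "p95_latency_ms": 310}))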
Run and track frequent experiments To transform innovation and creativity into performance improvements, you need a culture and a platform that supports experimentation. Performance improvement is an ongoing process because AI and ML technologies are developing continuously and quickly. To maintain a fast-paced, iterative process, you need to separate the experimentation space from your training and serving platforms. A standardized and robust experimentation process is important. Consider the following recommendations: Build an experimentation environment: Performance improvements require a dedicated, powerful, and interactive environment that supports the experimentation and collaborative development of ML pipelines. Embed experimentation as a culture: Run experiments before any production deployment. Release new versions iteratively and always collect performance data. Experiment with different data types, feature transformations, algorithms, and hyperparameters. Build and automate training and serving services Training and serving AI models are core components of your AI services. You need robust platforms and practices that support fast and reliable creation, deployment, and serving of AI models. Invest time and effort to create foundational platforms for your core AI training and serving tasks. These foundational platforms help to reduce time and effort for your teams and improve the quality of outputs in the medium and long term. Consider the following recommendations: Use AI-specialized components of a training service: Such components include high-performance compute and MLOps components like feature stores, model registries, metadata stores, and model performance-evaluation services. Use AI-specialized components of a prediction service: Such components provide high-performance and scalable resources, support feature monitoring, and enable model performance monitoring. To prevent and manage performance degradation, implement reliable deployment and rollback strategies. Match design choices to performance requirements When you make design choices to improve performance, carefully assess whether the choices support your business requirements or are wasteful and counterproductive. To choose the appropriate infrastructure, models, or configurations, identify performance bottlenecks and assess how they're linked to your performance measures. For example, even on very powerful GPU accelerators, your training tasks can experience performance bottlenecks due to data I/O issues from the storage layer or due to performance limitations of the model itself. Consider the following recommendations: Optimize hardware consumption based on performance goals: To train and serve ML models that meet your performance requirements, you need to optimize infrastructure at the compute, storage, and network layers. You must measure and understand the variables that affect your performance goals. These variables are different for training and inference. Focus on workload-specific requirements: Focus your performance optimization efforts on the unique requirements of your AI and ML workloads. Rely on managed services for the performance of the underlying infrastructure. Choose appropriate training strategies: Several pre-trained and foundational models are available, and more such models are released often. Choose a training strategy that can deliver optimal performance for your task. Decide whether you should build your own model, tune a pre-trained model on your data, or use a pre-trained model API. 
Recognize that performance-optimization strategies can have diminishing returns: When a particular performance-optimization strategy doesn't provide incremental business value that's measurable, stop pursuing that strategy. Link performance metrics to design and configuration choices To innovate, troubleshoot, and investigate performance issues, establish a clear link between design choices and performance outcomes. In addition to experimentation, you must reliably record the lineage of your assets, deployments, model outputs, and the configurations and inputs that produced the outputs. Consider the following recommendations: Build a data and model lineage system: All of your deployed assets and their performance metrics must be linked back to the data, configurations, code, and the choices that resulted in the deployed systems. In addition, model outputs must be linked to specific model versions and how the outputs were produced. Use explainability tools to improve model performance: Adopt and standardize tools and benchmarks for model exploration and explainability. These tools help your ML engineers understand model behavior and improve performance or remove biases. ContributorsAuthors: Benjamin Sadik | AI and ML Specialist Customer EngineerFilipe Gracio, PhD | Customer EngineerOther contributors: Kumar Dhanagopal | Cross-Product Solution DeveloperMarwan Al Shawi | Partner Customer EngineerZach Seils | Networking Specialist Send feedback \ No newline at end of file diff --git a/View_the_guide_as_a_single_page(1).txt b/View_the_guide_as_a_single_page(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..cba5bbe2fdb232360a96559ccb913b5df5a4d5cf --- /dev/null +++ b/View_the_guide_as_a_single_page(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/one-page-view +Date Scraped: 2025-02-23T11:50:18.992Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hybrid and multicloud architecture patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-27 UTC This page provides a single-page view of all the pages in Build hybrid and multicloud architectures using Google Cloud. You can print this page, or you can save it in PDF format by using your browser's print function and choosing the Save as PDF option. This page does not provide a table of contents (ToC) pane on the right side. This document is the second of three documents in a set. It discusses common hybrid and multicloud architecture patterns. It also describes the scenarios that these patterns are best suited for. Finally, it provides the best practices you can use when deploying such architectures in Google Cloud. The document set for hybrid and multicloud architecture patterns consists of these parts: Build hybrid and multicloud architectures: discusses planning a strategy for architecting a hybrid and multicloud setup with Google Cloud. Hybrid and multicloud architecture patterns: discusses common architecture patterns to adopt as part of a hybrid and multicloud strategy (this document). Hybrid and multicloud secure networking architecture patterns: discusses hybrid and multicloud networking architecture patterns from a networking perspective. Every enterprise has a unique portfolio of application workloads that place requirements and constraints on the architecture of a hybrid or multicloud setup. 
Although you must design and tailor your architecture to meet these constraints and requirements, you can rely on some common patterns to define the foundational architecture. An architecture pattern is a repeatable way to structure multiple functional components of a technology solution, application, or service to create a reusable solution that addresses certain requirements or use cases. A cloud-based technology solution is often made of several distinct and distributed cloud services. These services collaborate to deliver required functionality. In this context, each service is considered a functional component of the technology solution. Similarly, an application can consist of multiple functional tiers, modules, or services, and each can represent a functional component of the application architecture. Such an architecture can be standardized to address specific business use cases and serve as a foundational, reusable pattern. To generally define an architecture pattern for an application or solution, identify and define the following: The components of the solution or application. The expected functions for each component—for example, frontend functions to provide a graphical user interface or backend functions to provide data access. How the components communicate with each other and with external systems or users. In modern applications, these components interact through well-defined interfaces or APIs. There are a wide range of communication models such as asynchronous and synchronous, request-response, or queue-based. The following are the two main categories of hybrid and multicloud architecture patterns: Distributed architecture patterns: These patterns rely on a distributed deployment of workloads or application components. That means they run an application (or specific components of that application) in the computing environment that suits the pattern best. Doing so lets the pattern capitalize on the different properties and characteristics of distributed and interconnected computing environments. Redundant architecture patterns: These patterns are based on redundant deployments of workloads. In these patterns, you deploy the same applications and their components in multiple computing environments. The goal is to either increase the performance capacity or resiliency of an application, or to replicate an existing environment for development and testing. When you implement the architecture pattern that you select, you must use a suitable deployment archetype. Deployment archetypes are zonal, regional, multi-regional, or global. This selection forms the basis for constructing application-specific deployment architectures. Each deployment archetype defines a combination of failure domains within which an application can operate. These failure domains can encompass one or more Google Cloud zones or regions, and can be expanded to include your on-premises data centers or failure domains in other cloud providers. 
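As a small illustration of the asynchronous, queue-based communication model mentioned above, the following Python sketch publishes an event from one functional component to a Pub/Sub topic that another component consumes independently. The project, topic, and payload values are placeholders.

import json
from google.cloud import pubsub_v1

# One functional component publishes an event and continues; another component
# consumes it later from a subscription.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "order-events")

event = {"order_id": "example-123", "action": "created"}
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),
    origin="frontend-tier",  # message attribute that the consumer can filter on
)
print(f"Published message {future.result()}")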
This series contains the following pages: Distributed architecture patterns Tiered hybrid pattern Partitioned multicloud pattern Analytics hybrid and multicloud pattern Edge hybrid pattern Redundant architecture patterns Environment hybrid pattern Business continuity hybrid and multicloud patterns Cloud bursting pattern ContributorsAuthor: Marwan Al Shawi | Partner Customer EngineerOther contributors: Saud Albazei | Customer Engineer, Application ModernizationAnna Berenberg | Engineering FellowMarco Ferrari | Cloud Solutions ArchitectVictor Moreno | Product Manager, Cloud NetworkingJohannes Passing | Cloud Solutions ArchitectMark Schlagenhauf | Technical Writer, NetworkingDaniel Strebel | EMEA Solution Lead, Application ModernizationAmmett Williams | Developer Relations Engineer Distributed architecture patterns When migrating from a non-hybrid or non-multicloud computing environment to a hybrid or multicloud architecture, first consider the constraints of your existing applications and how those constraints could lead to application failure. This consideration becomes more important when your applications or application components operate in a distributed manner across different environments. After you have considered your constraints, develop a plan to avoid or overcome them. Make sure to consider the unique capabilities of each computing environment in a distributed architecture. Note: You can apply different architecture patterns to different applications, based on their use cases and requirements. This means that you might have multiple applications with different hybrid and multicloud architecture patterns operating at the same time. Design considerations The following design considerations apply to distributed deployment patterns. Depending on the target solution and business objectives, the priority and the effect of each consideration can vary. Latency In any architecture pattern that distributes application components (frontends, backends, or microservices) across different computing environments, communication latency can occur. This latency is influenced by the hybrid network connectivity (Cloud VPN and Cloud Interconnect) and the geographical distance between the on-premises site and the cloud regions, or between cloud regions in a multicloud setup. Therefore, it's crucial to assess the latency requirements of your applications and their sensitivity to network delays. Applications that can tolerate latency are more suitable candidates for initial distributed deployment in a hybrid or multicloud environment. Temporary versus final state architecture To specify the expectations and any potential implications for cost, scale, and performance, it's important to analyze what type of architecture you need and the intended duration as part of the planning stage. For example, if you plan to use a hybrid or multicloud architecture for a long time or permanently, you might want to consider using Cloud Interconnect. To reduce outbound data transfer costs and to optimize hybrid connectivity network performance, Cloud Interconnect discounts the outbound data transfer charges that meet the discounted data transfer rate conditions. Reliability Reliability is a major consideration when architecting IT systems. Uptime availability is an essential aspect of system reliability. In Google Cloud, you can increase the resiliency of an application by deploying redundant components of that application across multiple zones in a single region1, or across multiple regions, with switchover capabilities. 
Redundancy is one of the key elements to improve the overall availability of an application. For applications with a distributed setup across hybrid and multicloud environments, it's important to maintain a consistent level of availability. To enhance the availability of a system in an on-premises environment, or in other cloud environments, consider what hardware or software redundancy—with failover mechanisms—you need for your applications and their components. Ideally, you should consider the availability of a service or an application across the various components and supporting infrastructure (including hybrid connectivity availability) in all the environments. This concept is also referred to as the composite availability of an application or service. Based on the dependencies between the components or services, the composite availability for an application might be higher or lower than for an individual service or component. For more information, see Composite availability: calculating the overall availability of cloud infrastructure. To achieve the level of system reliability that you want, define clear reliability metrics and design applications to self-heal and endure disruptions effectively across the different environments. To help you define appropriate ways to measure the customer experience of your services, see Define reliability based on user-experience goals. Hybrid and multicloud connectivity The communication requirements of the distributed application components should influence your selection of a hybrid network connectivity option. Each connectivity option has its advantages and disadvantages, as well as specific drivers to consider, such as cost, traffic volume, and security. For more information, see the connectivity design considerations section. Manageability Consistent and unified management and monitoring tools are essential for successful hybrid and multicloud setups (with or without workload portability). In the short term, these tools can add development, testing, and operations costs. Technically, the more cloud providers you use, the more complex managing your environments becomes. Most public cloud vendors not only have different features, but also have varying tools, SLAs, and APIs for managing cloud services. Therefore, weigh the strategic advantages of your selected architecture against the potential short-term complexity versus the long-term benefits. Cost Each cloud service provider in a multicloud environment has its own billing metrics and tools. To provide better visibility and unified dashboards, consider using multicloud cost management and optimization tooling. For example, when building cloud-first solutions across multiple cloud environments, each provider's products, pricing, discounts, and management tools can create cost inconsistencies between those environments. We recommend having a single, well-defined method for calculating the full costs of cloud resources and for providing cost visibility. Cost visibility is essential for cost optimization. For example, by combining billing data from the cloud providers you use and using the Google Cloud Looker Cloud Cost Management Block, you can create a centralized view of your multicloud costs. This view can help provide a consolidated reporting view of your spend across multiple clouds. For more information, see The strategy for effectively optimizing cloud billing cost management. We also recommend adopting a FinOps practice to make costs visible.
As a part of a strong FinOps practice, a central team can delegate the decision making for resource optimization to any other teams involved in a project to encourage individual accountability. In this model, the central team should standardize the process, the reporting, and the tooling for cost optimization. For more information about the different cost optimization aspects and recommendations that you should consider, see Google Cloud Architecture Framework: Cost optimization. Data movement Data movement is an important consideration for hybrid and multicloud strategy and architecture planning, especially for distributed systems. Enterprises need to identify their different business use cases, the data that powers them, and how the data is classified (for regulated industries). They should also consider how data storage, sharing, and access for distributed systems across environments might affect application performance and data consistency. Those factors might influence the application and the data pipeline architecture. Google Cloud's comprehensive set of data movement options makes it possible for businesses to meet their specific needs and adopt hybrid and multicloud architectures without compromising simplicity, efficiency, or performance. Security When migrating applications to the cloud, it's important to consider cloud-first security capabilities like consistency, observability, and unified security visibility. Each public cloud provider has its own approach, best practices, and capabilities for security. It's important to analyze and align these capabilities to build a standard, functional security architecture. Strong IAM controls, data encryption, vulnerability scanning, and compliance with industry regulations are also important aspects of cloud security. When planning a migration strategy, we recommend that you analyze the previously mentioned considerations. They can help you minimize the chances of introducing complexities to the architecture as your applications or traffic volumes grow. Also, designing and building a landing zone is almost always a prerequisite to deploying enterprise workloads in a cloud environment. A landing zone helps your enterprise deploy, use, and scale cloud services more securely across multiple areas and includes different elements, such as identities, resource management, security, and networking. For more information, see Landing zone design in Google Cloud. The following documents in this series describe other distributed architecture patterns: Tiered hybrid pattern Partitioned multicloud pattern Analytics hybrid and multicloud pattern Edge hybrid pattern Tiered hybrid pattern The architecture components of an application can be categorized as either frontend or backend. In some scenarios, these components can be hosted to operate from different computing environments. As part of the tiered hybrid architecture pattern, the computing environments are located in an on-premises private computing environment and in Google Cloud. Frontend application components are directly exposed to end users or devices. As a result, these applications are often performance sensitive. To develop new features and improvements, software updates can be frequent. Because frontend applications usually rely on backend applications to store and manage data—and possibly business logic and user input processing—they're often stateless or manage only limited volumes of data. 
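As a minimal illustration of that split, the following sketch shows a stateless frontend handler that keeps no data of its own and delegates reads and writes to a backend API. The backend URL is a hypothetical placeholder, for example an API gateway in front of an on-premises backend.

```python
# Minimal sketch of a stateless frontend component (hypothetical backend URL).
# The frontend keeps no state of its own; it delegates reads and writes to a
# backend API, which could be hosted on-premises behind an API gateway.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical endpoint; in a tiered hybrid setup this would typically point
# at an API gateway in front of the on-premises backend.
BACKEND_API = os.environ.get("BACKEND_API", "https://api.example.internal/orders")


@app.route("/orders", methods=["GET", "POST"])
def orders():
    if request.method == "POST":
        # Pass user input through to the backend; no local persistence.
        resp = requests.post(BACKEND_API, json=request.get_json(), timeout=10)
    else:
        resp = requests.get(BACKEND_API, timeout=10)
    return jsonify(resp.json()), resp.status_code


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```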
To be accessible and usable, you can build your frontend applications with various frameworks and technologies. Some key factors for a successful frontend application include application performance, response speed, and browser compatibility. Backend application components usually focus on storing and managing data. In some architectures, business logic might be incorporated within the backend component. New releases of backend applications tend to be less frequent than releases for frontend applications. Backend applications have the following challenges to manage: Handling a large volume of requests Handling a large volume of data Securing data Maintaining current and updated data across all the system replicas The three-tier application architecture is one of the most popular implementations for building business web applications, like ecommerce websites containing different application components. This architecture contains the following tiers. Each tier operates independently, but they're closely linked and all function together. Web frontend and presentation tier Application tier Data access or backend tier Putting these layers into containers separates their technical needs, like scaling requirements, and helps to migrate them in a phased approach. Also, it lets you deploy them on platform-agnostic cloud services that can be portable across environments, use automated management, and scale with cloud managed platforms, like Cloud Run or Google Kubernetes Engine (GKE) Enterprise edition. Also, Google Cloud-managed databases like Cloud SQL help to provide the backend as the database layer. Note: The implementation of this architecture and the definition of its components can vary depending on whether you separate the tiers into individual systems and layers or combine them. The tiered hybrid architecture pattern focuses on deploying existing frontend application components to the public cloud. In this pattern, you keep any existing backend application components in their private computing environment. Depending on the scale and the specific design of the application, you can migrate frontend application components on a case-by-case basis. For more information, see Migrate to Google Cloud. If you have an existing application with backend and frontend components hosted in your on-premises environment, consider the limits of your current architecture. For example, as your application scales and the demands on its performance and reliability increase, you should start evaluating whether parts of your application should be refactored or moved to a different and more optimal architecture. The tiered hybrid architecture pattern lets you shift some application workloads and components to the cloud before making a complete transition. It's also essential to consider the cost, time, and risk involved in such a migration. The following diagram shows a typical tiered hybrid architecture pattern. In the preceding diagram, client requests are sent to the application frontend that is hosted in Google Cloud. In turn, the application frontend sends data back to the on-premises environment where the application backend is hosted (ideally through an API gateway). With the tiered hybrid architecture pattern, you can take advantage of Google Cloud infrastructure and global services, as shown in the example architecture in the following diagram. The application frontend can be reached over Google Cloud. 
Hosting the frontend in Google Cloud can also add elasticity by using autoscaling to dynamically and efficiently respond to scaling demand without overprovisioning infrastructure. There are different architectures that you can use to build and run scalable web apps on Google Cloud. Each architecture has advantages and disadvantages for different requirements. For more information, watch Three ways to run scalable web apps on Google Cloud on YouTube. To learn more about different ways to modernize your ecommerce platform on Google Cloud, see How to build a digital commerce platform on Google Cloud. In the preceding diagram, the application frontend is hosted on Google Cloud to provide a multi-regional and globally optimized user experience that uses global load balancing, autoscaling, and DDoS protection through Google Cloud Armor. Over time, the number of applications that you deploy to the public cloud might increase to the point where you might consider moving backend application components to the public cloud. If you expect to serve heavy traffic, opting for cloud-managed services might help you save the engineering effort of managing your own infrastructure. Consider this option unless constraints or requirements mandate hosting backend application components on-premises. For example, if your backend data is subject to regulatory restrictions, you probably need to keep that data on-premises. Where applicable and compliant, however, using Sensitive Data Protection capabilities, like de-identification techniques, can help you move that data when necessary. In the tiered hybrid architecture pattern, you also can use Google Distributed Cloud in some scenarios. Distributed Cloud lets you run Google Kubernetes Engine clusters on dedicated hardware that's provided and maintained by Google and is separate from Google Cloud data centers. To ensure that Distributed Cloud meets your current and future requirements, know the limitations of Distributed Cloud when compared to a conventional cloud-based GKE deployment. Advantages Focusing on frontend applications first has several advantages, including the following: Frontend components depend on backend resources and occasionally on other frontend components. Backend components don't depend on frontend components. Therefore, isolating and migrating frontend applications tends to be less complex than migrating backend applications. Because frontend applications are often stateless or don't manage data by themselves, they tend to be less challenging to migrate than backends. Frontend components can be optimized as part of the migration to use stateless architecture. For more information, watch How to port stateful web apps to Cloud Run on YouTube. Deploying existing or newly developed frontend applications to the public cloud offers several advantages: Many frontend applications are subject to frequent changes. Running these applications in the public cloud simplifies the setup of a continuous integration/continuous deployment (CI/CD) process. You can use CI/CD to send updates in an efficient and automated manner. For more information, see CI/CD on Google Cloud. Performance-sensitive frontends with varying traffic load can benefit substantially from the load balancing, multi-regional deployments, Cloud CDN caching, serverless, and autoscaling capabilities that a cloud-based deployment enables (ideally with stateless architecture).
Adopting microservices with containers using a cloud-managed platform, like GKE, lets you use modern architectures like microfrontend, which extend microservices to the frontend components. Extending microservices is commonly used with frontends that involve multiple teams collaborating on the same application. That kind of team structure requires an iterative approach and continuous maintenance. Some of the advantages of using microfrontend are as follows: It can be made into independent microservices modules for development, testing, and deployment. It provides separation where individual development teams can select their preferred technologies and code. It can foster rapid cycles of development and deployment without affecting the rest of the frontend components that might be managed by other teams. Whether they're implementing user interfaces or APIs, or handling Internet of Things (IoT) data ingestion, frontend applications can benefit from the capabilities of cloud services like Firebase, Pub/Sub, Apigee, Cloud CDN, App Engine, or Cloud Run. Cloud-managed API proxies help to: Decouple the app-facing API from your backend services, like microservices. Shield apps from backend code changes. Support your existing API-driven frontend architectures, like backend for frontend (BFF), microfrontend, and others. Expose your APIs on Google Cloud or other environments by implementing API proxies on Apigee. You can also apply the tiered hybrid pattern in reverse, by deploying backends in the cloud while keeping frontends in private computing environments. Although it's less common, this approach is best applied when you're dealing with a heavyweight and monolithic frontend. In such cases, it might be easier to extract backend functionality iteratively, and to deploy these new backends in the cloud. The third part of this series discusses possible networking patterns to enable such an architecture. Apigee hybrid helps as a platform for building and managing API proxies in a hybrid deployment model. For more information, see Loosely coupled architecture, including tiered monolithic and microservices architectures. Best practices Use the information in this section as you plan for your tiered hybrid architecture. Best practices to reduce complexity When you're applying the tiered hybrid architecture pattern, consider the following best practices that can help to reduce its overall deployment and operational complexity: Based on the assessment of the communication models of the identified applications, select the most efficient and effective communication solution for those applications. Because most user interaction involves systems that connect across multiple computing environments, fast and low-latency connectivity between those systems is important. To meet availability and performance expectations, you should design for high availability, low latency, and appropriate throughput levels. From a security point of view, communication needs to be fine-grained and controlled. Ideally, you should expose application components using secure APIs. For more information, see Gated egress. To minimize communication latency between environments, select a Google Cloud region that is geographically close to the private computing environment where your application backend components are hosted. For more information, see Best practices for Compute Engine regions selection. 
Minimize dependencies between systems that are running in different environments, particularly when communication is handled synchronously. These dependencies can slow performance, decrease overall availability, and potentially incur additional outbound data transfer charges. With the tiered hybrid architecture pattern, you might have larger volumes of inbound traffic from on-premises environments coming into Google Cloud compared to outbound traffic leaving Google Cloud. Nevertheless, you should know the anticipated outbound data transfer volume leaving Google Cloud. If you plan to use this architecture long term with high outbound data transfer volumes, consider using Cloud Interconnect. Cloud Interconnect can help to optimize connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing. To protect sensitive information, we recommend encrypting all communications in transit. If encryption is required at the connectivity layer, you can use VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect. To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backends, we recommend, where applicable, deploying an API gateway or proxy as a unifying facade. This gateway or proxy acts as a centralized control point and performs the following functions: Implements additional security measures. Shields client apps and other services from backend code changes. Facilitates audit trails for communication between all cross-environment applications and their decoupled components. Acts as an intermediate communication layer between legacy and modernized services. Apigee and Apigee hybrid let you host and manage enterprise-grade and hybrid gateways across on-premises environments, edge, other clouds, and Google Cloud environments. To facilitate the establishment of hybrid setups, use Cloud Load Balancing with hybrid connectivity. That means you can extend the benefits of cloud load balancing to services hosted on your on-premises compute environment. This approach enables phased workload migrations to Google Cloud with minimal or no service disruption, ensuring a smooth transition for the distributed services. For more information, see Hybrid connectivity network endpoint groups overview. Sometimes, using an API gateway or a proxy together with an Application Load Balancer can provide a more robust solution for managing, securing, and distributing API traffic at scale. Using Cloud Load Balancing with API gateways lets you accomplish the following: Provide high-performing APIs with Apigee and Cloud CDN to reduce latency, host APIs globally, and increase availability for peak traffic seasons. For more information, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube. Implement advanced traffic management. Use Google Cloud Armor as a DDoS protection and network security service to protect your APIs. Manage efficient load balancing across gateways in multiple regions. For more information, watch Securing APIs and Implementing multi-region failover with Private Service Connect and Apigee on YouTube. Use API management and service mesh to secure and control service communication and exposure in a microservices architecture.
Use Cloud Service Mesh for service-to-service communication that maintains quality of service in a system composed of distributed services, and to manage authentication, authorization, and encryption between services. Use an API management platform like Apigee that lets your organization and external entities consume those services by exposing them as APIs. Establish common identity between environments so that systems can authenticate securely across environment boundaries. Deploy CI/CD and configuration management systems in the public cloud. For more information, see Mirrored networking architecture pattern. To help increase operational efficiency, use consistent tooling and CI/CD pipelines across environments. Best practices for individual workload and application architectures Although the focus lies on frontend applications in this pattern, stay aware of the need to modernize your backend applications. If the development pace of backend applications is substantially slower than for frontend applications, the difference can cause extra complexity. Treating APIs as backend interfaces streamlines integrations, frontend development, and service interactions, and hides backend system complexities. To address these challenges, Apigee facilitates API gateway and proxy development and management for hybrid and multicloud deployments. Choose the rendering approach for your frontend web application based on the content (static versus dynamic), search engine optimization (SEO) performance, and expectations about page loading speed. When selecting an architecture for content-driven web applications, various options are available, including monolithic, serverless, event-based, and microservice architectures. To select the most suitable architecture, thoroughly assess these options against your current and future application requirements. To help you make an architectural decision that's aligned with your business and technical objectives, see Comparison of different architectures for content-driven web application backends, and Key Considerations for web backends. With a microservices architecture, you can use containerized applications with Kubernetes as the common runtime layer. With the tiered hybrid architecture pattern, you can run Kubernetes in either of the following scenarios: Across both environments (Google Cloud and your on-premises environments). When using containers and Kubernetes across environments, you have the flexibility to modernize workloads and then migrate to Google Cloud at different times. That approach helps when a workload depends heavily on another workload and can't be migrated individually, or when you want to use hybrid workload portability to take advantage of the best resources available in each environment. In all cases, GKE Enterprise can be a key enabling technology. For more information, see GKE Enterprise hybrid environment. In a Google Cloud environment for the migrated and modernized application components. Use this approach when you have legacy backends on-premises that lack containerization support or require significant time and resources to modernize in the short term. For more information about designing and refactoring a monolithic app into a microservices architecture to modernize your web application architecture, see Introduction to microservices. You can combine data storage technologies depending on the needs of your web applications. Using Cloud SQL for structured data and Cloud Storage for media files is a common approach to meet diverse data storage needs.
That said, the choice depends heavily on your use case. For more information about data storage options for content-driven application backends and effective modalities, see Data Storage Options for Content-Driven Web Apps. Also, see Your Google Cloud database options, explained. Partitioned multicloud pattern The partitioned multicloud architecture pattern combines multiple public cloud environments that are operated by different cloud service providers. This architecture provides the flexibility to deploy an application in an optimal computing environment that accounts for the multicloud drivers and considerations discussed in the first part of this series. The following diagram shows a partitioned multicloud architecture pattern. This architecture pattern can be built in two different ways. The first approach is based on deploying the application components in different public cloud environments. This approach is also referred to as a composite architecture and is the same approach as the tiered hybrid architecture pattern. Instead of using an on-premises environment with a public cloud, however, it uses at least two cloud environments. In a composite architecture, a single workload or application uses components from more than one cloud. The second approach deploys different applications in different public cloud environments. The following non-exhaustive list describes some of the business drivers for the second approach: To fully integrate applications hosted in disparate cloud environments during a merger and acquisition scenario between two enterprises. To promote flexibility and cater to diverse cloud preferences within your organization. Adopt this approach to encourage organizational units to choose the cloud provider that best suits their specific needs and preferences. To operate in a multi-regional or global-cloud deployment. If an enterprise is required to adhere to data residency regulations in specific regions or countries, then it needs to choose from among the available cloud providers in that location if its primary cloud provider does not have a cloud region there. With the partitioned multicloud architecture pattern, you can optionally maintain the ability to shift workloads as needed from one public cloud environment to another. In that case, the portability of your workloads becomes a key requirement. When you deploy workloads to multiple computing environments, and want to maintain the ability to move workloads between environments, you must abstract away the differences between the environments. By using GKE Enterprise, you can design and build a solution to solve multicloud complexity with consistent governance, operations, and security postures. For more information, see GKE Multi-Cloud. As previously mentioned, there are some situations where there might be both business and technical reasons to combine Google Cloud with another cloud provider and to partition workloads across those cloud environments. Multicloud solutions offer the flexibility to migrate, build, and optimize application portability across multicloud environments while minimizing lock-in and helping you to meet your regulatory requirements. For example, you might connect Google Cloud with Oracle Cloud Infrastructure (OCI) to build a multicloud solution that harnesses the capabilities of each platform, using a private Cloud Interconnect to combine components running in OCI with resources running on Google Cloud.
For more information, see Google Cloud and Oracle Cloud Infrastructure – making the most of multicloud. In addition, Cross-Cloud Interconnect facilitates high-bandwidth dedicated connectivity between Google Cloud and other supported cloud service providers, enabling you to architect and build multicloud solutions to handle high inter-cloud traffic volume. Advantages While using a multicloud architecture offers several business and technical benefits, as discussed in Drivers, considerations, strategy, and approaches, it's essential to perform a detailed feasibility assessment of each potential benefit. Your assessment should carefully consider any associated direct or indirect challenges or potential roadblocks, and your ability to navigate them effectively. Also, consider that the long-term growth of your applications or services can introduce complexities that might outweigh the initial benefits. Here are some key advantages of the partitioned multicloud architecture pattern: In scenarios where you might need to minimize committing to a single cloud provider, you can distribute applications across multiple cloud providers. As a result, you can reduce vendor lock-in to some extent and retain the ability to change plans across your cloud providers. Open Cloud helps to bring Google Cloud capabilities, like GKE Enterprise, to different physical locations. By extending Google Cloud capabilities on-premises, in multiple public clouds, and on the edge, it provides flexibility and agility, and drives transformation. For regulatory reasons, you can serve a certain segment of your user base and data from a country where Google Cloud doesn't have a cloud region. The partitioned multicloud architecture pattern can help to reduce latency and improve the overall quality of the user experience in locations where the primary cloud provider does not have a cloud region or a point of presence. This pattern is especially useful when using high-capacity and low-latency multicloud connectivity, such as Cross-Cloud Interconnect and CDN Interconnect with a distributed CDN. You can deploy applications across multiple cloud providers in a way that lets you choose among the best services that the other cloud providers offer. The partitioned multicloud architecture pattern can help facilitate and accelerate merger and acquisition scenarios, where the applications and services of the two enterprises might be hosted in different public cloud environments. Best practices Start by deploying a non-mission-critical workload. This initial deployment in the secondary cloud can then serve as a pattern for future deployments or migrations. However, this approach probably isn't applicable in situations where the specific workload is required by law or regulation to reside in a specific cloud region, and the primary cloud provider doesn't have a region in the required territory. Minimize dependencies between systems that are running in different public cloud environments, particularly when communication is handled synchronously. These dependencies can slow performance, decrease overall availability, and potentially incur additional outbound data transfer charges. To abstract away the differences between environments, consider using containers and Kubernetes where supported by the applications and feasible. Ensure that CI/CD pipelines and tooling for deployment and monitoring are consistent across cloud environments.
Select the optimal network architecture pattern that provides the most efficient and effective communication solution for the applications you're using. Communication must be fine-grained and controlled. Use secure APIs to expose application components. Consider using either the meshed architecture pattern or one of the gated networking patterns, based on your specific business and technical requirements. To meet your availability and performance expectations, design for end-to-end high availability (HA), low latency, and appropriate throughput levels. To protect sensitive information, we recommend encrypting all communications in transit. If encryption is required at the connectivity layer, various options are available, based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cross-Cloud Interconnect. If you're using multiple CDNs as part of your multicloud partitioned architecture pattern, and you're populating your other CDN with large data files from Google Cloud, consider using CDN Interconnect links between Google Cloud and supported providers to optimize this traffic and, potentially, its cost. Extend your identity management solution between environments so that systems can authenticate securely across environment boundaries. To effectively balance requests across Google Cloud and another cloud platform, you can use Cloud Load Balancing. For more information, see Routing traffic to an on-premises location or another cloud. If the outbound data transfer volume from Google Cloud toward other environments is high, consider using Cross-Cloud Interconnect. To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backends, we recommend, where applicable, deploying an API gateway or proxy as a unifying facade. This gateway or proxy acts as a centralized control point and performs the following functions: Implements additional security measures. Shields client apps and other services from backend code changes. Facilitates audit trails for communication between all cross-environment applications and their decoupled components. Acts as an intermediate communication layer between legacy and modernized services. Apigee and Apigee hybrid let you host and manage enterprise-grade and hybrid gateways across on-premises environments, edge, other clouds, and Google Cloud environments. In cases like the following, using Cloud Load Balancing with an API gateway can provide a robust and secure solution for managing, securing, and distributing API traffic at scale across multiple regions: Deploying multi-region failover for Apigee API runtimes in different regions. Increasing performance with Cloud CDN. Providing WAF and DDoS protection through Google Cloud Armor. Use consistent tools for logging and monitoring across cloud environments where possible. You might consider using open source monitoring systems. For more information, see Hybrid and multicloud monitoring and logging patterns. If you're deploying application components in a distributed manner where the components of a single application are deployed in more than one cloud environment, see the best practices for the tiered hybrid architecture pattern. Analytics hybrid and multicloud pattern This document discusses how the analytics hybrid and multicloud pattern capitalizes on the split between transactional and analytics workloads.
In enterprise systems, most workloads fall into these categories: Transactional workloads include interactive applications like sales, financial processing, enterprise resource planning, or communication. Analytics workloads include applications that transform, analyze, refine, or visualize data to aid decision-making processes. Analytics systems obtain their data from transactional systems by either querying APIs or accessing databases. In most enterprises, analytics and transactional systems tend to be separate and loosely coupled. The objective of the analytics hybrid and multicloud pattern is to capitalize on this pre-existing split by running transactional and analytics workloads in two different computing environments. Raw data is first extracted from workloads that are running in the private computing environment and then loaded into Google Cloud, where it's used for analytical processing. Some of the results might then be fed back to transactional systems. The following diagram conceptually illustrates possible architectures by showing potential data pipelines. Each path or arrow represents a possible data movement and transformation pipeline option that can be based on ETL or ELT, depending on the available data quality and targeted use case. To move your data into Google Cloud and unlock value from it, use data movement services, a complete suite of data ingestion, integration, and replication services. As shown in the preceding diagram, connecting Google Cloud with on-premises environments and other cloud environments can enable various data analytics use cases, such as data streaming and database backups. To power the foundational transport of a hybrid and multicloud analytics pattern that requires a high volume of data transfer, Cloud Interconnect and Cross-Cloud Interconnect provide dedicated connectivity to on-premises and other cloud providers. Advantages Running analytics workloads in the cloud has several key advantages: Inbound traffic—moving data from your private computing environment or other clouds to Google Cloud—might be free of charge. Analytics workloads often need to process substantial amounts of data and can be bursty, so they're especially well suited to being deployed in a public cloud environment. By dynamically scaling compute resources, you can quickly process large datasets while avoiding upfront investments or having to overprovision computing equipment. Google Cloud provides a rich set of services to manage data throughout its entire lifecycle, ranging from initial acquisition through processing and analyzing to final visualization. Data movement services on Google Cloud provide a complete suite of products to move, integrate, and transform data seamlessly in different ways. Cloud Storage is well suited for building a data lake. Google Cloud helps you to modernize and optimize your data platform to break down data silos. Using a data lakehouse helps to standardize across different storage formats. It can also provide the flexibility, scalability, and agility needed to help ensure that your data generates value for your business, rather than inefficiencies. For more information, see BigLake. BigQuery Omni provides compute power that runs locally to the storage on AWS or Azure. It also helps you query your own data stored in Amazon Simple Storage Service (Amazon S3) or Azure Blob Storage. This multicloud analytics capability lets data teams break down data silos.
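As a brief sketch of the analytics side of this pattern, the following code runs an aggregate query with the BigQuery Python client. The project, dataset, and table names are hypothetical placeholders; with BigQuery Omni, a similar query can target external tables that are backed by data in Amazon S3 or Azure Blob Storage.

```python
# Sketch: run an analytical aggregation in BigQuery.
# The project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

query = """
    SELECT store_id, SUM(amount) AS total_sales
    FROM `my-analytics-project.sales_dataset.transactions`
    WHERE sale_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY store_id
    ORDER BY total_sales DESC
"""

# The query runs fully in Google Cloud, even if the raw data was originally
# extracted from transactional systems in a private computing environment.
for row in client.query(query).result():
    print(f"{row.store_id}: {row.total_sales}")
```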
For more information about querying data stored outside of BigQuery, see Introduction to external data sources. Best practices To implement the analytics hybrid and multicloud architecture pattern, consider the following general best practices: Use the handover networking pattern to enable the ingestion of data. If analytical results need to be fed back to transactional systems, you might combine both the handover and the gated egress pattern. Use Pub/Sub queues or Cloud Storage buckets to hand over data to Google Cloud from transactional systems that are running in your private computing environment. These queues or buckets can then serve as sources for data-processing pipelines and workloads. To deploy ETL and ELT data pipelines, consider using Cloud Data Fusion or Dataflow depending on your specific use case requirements. Both are fully managed, cloud-first data processing services for building and managing data pipelines. To discover, classify, and protect your valuable data assets, consider using Google Cloud Sensitive Data Protection capabilities, like de-identification techniques. These techniques let you mask, encrypt, and replace sensitive data—like personally identifiable information (PII)—using a randomly generated or pre-determined key, where applicable and compliant. When you have existing Hadoop or Spark workloads, consider migrating jobs to Dataproc and migrating existing HDFS data to Cloud Storage. When you're performing an initial data transfer from your private computing environment to Google Cloud, choose the transfer approach that is best suited for your dataset size and available bandwidth. For more information, see Migration to Google Cloud: Transferring your large datasets. If data transfer or exchange between Google Cloud and other clouds is required for the long term with high traffic volume, you should evaluate using Google Cloud Cross-Cloud Interconnect to help you establish high-bandwidth dedicated connectivity between Google Cloud and other cloud service providers (available in certain locations). If encryption is required at the connectivity layer, various options are available based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cross-Cloud Interconnect. Use consistent tooling and processes across environments. In an analytics hybrid scenario, this practice can help increase operational efficiency, although it's not a prerequisite. Edge hybrid pattern Running workloads in the cloud requires that clients in some scenarios have fast and reliable internet connectivity. Given today's networks, this requirement rarely poses a challenge for cloud adoption. There are, however, scenarios when you can't rely on continuous connectivity, such as: Sea-going vessels and other vehicles might be connected only intermittently or have access only to high-latency satellite links. Factories or power plants might be connected to the internet. These facilities might have reliability requirements that exceed the availability claims of their internet provider. Retail stores and supermarkets might be connected only occasionally or use links that don't provide the necessary reliability or throughput to handle business-critical transactions. The edge hybrid architecture pattern addresses these challenges by running time- and business-critical workloads locally, at the edge of the network, while using the cloud for all other kinds of workloads. 
In an edge hybrid architecture, the internet link is a noncritical component that is used for management purposes and to synchronize or upload data, often asynchronously, but isn't involved in time-critical or business-critical transactions. Advantages Running certain workloads at the edge and other workloads in the cloud offers several advantages: Inbound traffic—moving data from the edge to Google Cloud—might be free of charge. Running workloads that are business- and time-critical at the edge helps ensure low latency and self-sufficiency. If internet connectivity fails or is temporarily unavailable, you can still run all important transactions. At the same time, you can benefit from using the cloud for a significant portion of your overall workload. You can reuse existing investments in computing and storage equipment. Over time, you can incrementally reduce the fraction of workloads that are run at the edge and move them to the cloud, either by reworking certain applications or by equipping some edge locations with internet links that are more reliable. Internet of Things (IoT)-related projects can become more cost-efficient by performing data computations locally. This allows enterprises to run and process some services locally at the edge, closer to the data sources. It also allows enterprises to selectively send data to the cloud, which can help to reduce the capacity, data transfer, processing, and overall costs of the IoT solution. Edge computing can act as an intermediate communication layer between legacy and modernized services. For example, edge locations might run a containerized API gateway, such as Apigee hybrid. This enables legacy applications and systems to integrate with modernized services, like IoT solutions. Best practices Consider the following recommendations when implementing the edge hybrid architecture pattern: If communication is unidirectional, use the gated ingress pattern. If communication is bidirectional, consider the gated egress and gated ingress pattern. If the solution consists of many edge remote sites connecting to Google Cloud over the public internet, you can use a software-defined WAN (SD-WAN) solution. You can also use Network Connectivity Center with a third-party SD-WAN router supported by a Google Cloud partner to simplify the provisioning and management of secure connectivity at scale. Minimize dependencies between systems that are running at the edge and systems that are running in the cloud environment. Each dependency can undermine the reliability and latency advantages of an edge hybrid setup. To manage and operate multiple edge locations efficiently, you should have a centralized management plane and monitoring solution in the cloud. Ensure that CI/CD pipelines, along with tooling for deployment and monitoring, are consistent across cloud and edge environments. Consider using containers and Kubernetes when applicable and feasible, to abstract away differences among various edge locations and also between edge locations and the cloud. Because Kubernetes provides a common runtime layer, you can develop, run, and operate workloads consistently across computing environments. You can also move workloads between the edge and the cloud. To simplify the hybrid setup and operation, you can use GKE Enterprise for this architecture (if containers are used across the environments). Consider the possible connectivity options that you have to connect a GKE Enterprise cluster running in your on-premises or edge environment to Google Cloud.
As part of this pattern, although some GKE Enterprise components might continue to function during a temporary connectivity interruption to Google Cloud, don't use GKE Enterprise disconnected from Google Cloud as a nominal working mode. For more information, see Impact of temporary disconnection from Google Cloud. To overcome inconsistencies in protocols, APIs, and authentication mechanisms across diverse backend and edge services, we recommend, where applicable, deploying an API gateway or proxy as a unifying facade. This gateway or proxy acts as a centralized control point and performs the following functions: Implements additional security measures. Shields client apps and other services from backend code changes. Facilitates audit trails for communication between all cross-environment applications and their decoupled components. Acts as an intermediate communication layer between legacy and modernized services. Apigee and Apigee hybrid let you host and manage enterprise-grade and hybrid gateways across on-premises environments, edge, other clouds, and Google Cloud environments. Establish common identity between environments so that systems can authenticate securely across environment boundaries. Because the data that is exchanged between environments might be sensitive, ensure that all communication is encrypted in transit by using VPN tunnels, TLS, or both. Environment hybrid pattern With the environment hybrid architecture pattern, you keep the production environment of a workload in the existing data center. You then use the public cloud for your development and testing environments, or other environments. This pattern relies on the redundant deployment of the same applications across multiple computing environments. The goal of the deployment is to help increase capacity, agility, and resiliency. When assessing which workloads to migrate, you might notice cases where running a specific application in the public cloud presents challenges: Jurisdictional or regulatory constraints might require that you keep data in a specific country. Third-party licensing terms might prevent you from operating certain software in a cloud environment. An application might require access to hardware devices that are available only locally. In such cases, consider not only the production environment but all environments that are involved in the lifecycle of an application, including development, testing, and staging systems. These restrictions often apply to the production environment and its data. They might not apply to other environments that don't use the actual data. Check with the compliance department of your organization or the equivalent team. The following diagram shows a typical environment hybrid architecture pattern: Running development and test systems in different environments than your production systems might seem risky and could deviate from your existing best practices or from your attempts to minimize differences between your environments. While such concerns are justified, they don't apply if you distinguish between the stages of the development and testing processes: Although development, testing, and deployment processes differ for each application, they usually involve variations of the following stages: Development: Creating a release candidate. Functional testing or user acceptance testing: Verifying that the release candidate meets functional requirements. Performance and reliability testing: Verifying that the release candidate meets nonfunctional requirements.
This stage is also known as load testing. Staging or deployment testing: Verifying that the deployment procedure works. Production: Releasing new or updated applications. Performing more than one of these stages in a single environment is rarely practical, so each stage usually requires one or more dedicated environments. Note: The term staging environment is often confused with testing environment. The primary purpose of a testing environment is to run functional tests. The primary purpose of a staging environment is to test whether your application deployment procedures work as intended. By the time a release reaches a staging environment, your functional testing should be complete. Staging is the last step before you deploy software to your production deployment. To ensure that test results are meaningful and that they apply to the production deployment, the set of environments that you use throughout an application's lifecycle must satisfy the following rules, to the extent possible: All environments are functionally equivalent. That is, the architecture, APIs, and versions of operating systems and libraries are equivalent, and systems behave the same across environments. This equivalence avoids situations where applications work in one environment but fail in another, or where defects aren't reproducible. Environments that are used for performance and reliability testing, staging, and production are non-functionally equivalent. That is, their performance, scale, and configuration, and the way they're operated and maintained, are either the same or differ only in insignificant ways. Otherwise, performance and staging tests become meaningless. In general, it's fine if the environments that are used for development and functional testing differ non-functionally from the other environments. As illustrated in the following diagram, the test and development environments are built on Google Cloud. A managed database, like Cloud SQL, can be used as an option for development and testing in Google Cloud. Development and testing can use the same database engine and version as the on-premises environment, a functionally equivalent one, or a new version that's rolled out to the production environment after the testing stage. However, because the underlying infrastructure of the two environments isn't identical, this approach to performance load testing isn't valid. The following scenarios can fit well with the environment hybrid pattern: Achieve functional equivalence across all environments by relying on Kubernetes as a common runtime layer where applicable and feasible. Google Kubernetes Engine (GKE) Enterprise edition can be a key enabling technology for this approach. Ensure workload portability and abstract away differences between computing environments. With a zero trust service mesh, you can control and maintain the required communication separation between the different environments. Run development and functional testing environments in the public cloud. These environments can be functionally equivalent to the remaining environments but might differ in nonfunctional aspects, like performance. This concept is illustrated in the preceding diagram. Run environments for production, staging, and performance (load testing) and reliability testing in the private computing environment, ensuring functional and nonfunctional equivalence. Design considerations Business needs: Each deployment and release strategy for applications has its own advantages and disadvantages.
To ensure that the approach that you select aligns with your specific requirements, base your selections on a thorough assessment of your business needs and constraints. Environment differences: As part of this pattern, the main goal of using this cloud environment is for development and testing. The final state is to host the tested application in the private on-premises environment (production). To avoid developing and testing a capability that might function as expected in the cloud environment and fail in the production environment (on-premises), the technical team must know and understand the architectures and capabilities of both environments. This includes dependencies on other applications and on the hardware infrastructure—for example, security systems that perform traffic inspection. Governance: To control what your company is allowed to develop in the cloud and what data they can use for testing, use an approval and governance process. This process can also help your company make sure that it doesn't use any cloud features in your development and testing environments that don't exist in your on-premises production environment. Success criteria: There must be clear, predefined, and measurable testing success criteria that align with the software quality assurance standards for your organization. Apply these standards to any application that you develop and test. Redundancy: Although development and testing environments might not require as much reliability as the production environment, they still need redundant capabilities and the ability to test different failure scenarios. Your failure-scenario requirements might drive the design to include redundancy as part of your development and testing environment. Advantages Running development and functional testing workloads in the public cloud has several advantages: You can automatically start and stop environments as the need arises. For example, you can provision an entire environment for each commit or pull request, allow tests to run, and then turn it off again. This approach also offers the following advantages: You can reduce costs by stopping virtual machine (VM) instances when they're inactive, or by provisioning environments only on demand. You can speed up development and testing by starting ephemeral environments for each pull-request. Doing so also reduces maintenance overhead and reduces inconsistencies in the build environment. Running these environments in the public cloud helps build familiarity and confidence in the cloud and related tools, which might help with migrating other workloads. This approach is particularly helpful if you decide to explore Workload portability using containers and Kubernetes—for example, using GKE Enterprise across environments. Best practices To implement the environment hybrid architecture pattern successfully, consider the following recommendations: Define your application communication requirements, including the optimal network and security design. Then, use the mirrored network pattern to help you design your network architecture to prevent direct communications between systems from different environments. If communication is required across environments, it has to be in a controlled manner. The application deployment and testing strategy you choose should align with your business objectives and requirements. This might involve rolling out changes without downtime or implementing features gradually to a specific environment or user group before wider release. 
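One generic way to implement that kind of gradual rollout is a deterministic, percentage-based split. The following sketch is an illustration only, not a Google Cloud API; the feature name and percentage are hypothetical.

```python
# Generic sketch of a deterministic percentage rollout (not a Google Cloud API).
# A stable hash of the user ID decides whether a user sees the new release,
# so the same user always gets the same experience while the rollout widens.
import hashlib


def in_rollout(user_id: str, feature: str, percentage: int) -> bool:
    """Return True if this user falls inside the rollout percentage (0-100)."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return bucket < percentage


# Example: send 10% of users to a new checkout flow before a wider release.
for user in ["alice", "bob", "carol", "dave"]:
    target = "new-release" if in_rollout(user, "checkout-v2", 10) else "stable"
    print(f"{user} -> {target}")
```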
To make workloads portable and to abstract away differences between environments, you might use containers with Kubernetes. For more information, see GKE Enterprise hybrid environment reference architecture. Establish a common tool chain that works across computing environments for deploying, configuring, and operating workloads. Using Kubernetes gives you this consistency. Ensure that CI/CD pipelines are consistent across computing environments, and that the exact same set of binaries, packages, or containers is deployed across those environments. When using Kubernetes, use a CI system such as Tekton to implement a deployment pipeline that deploys to clusters and works across environments. For more information, see DevOps solutions on Google Cloud. To help you with the continuous release of secure and reliable applications, incorporate security as an integral part of the DevOps process (DevSecOps). For more information, see Deliver and secure your internet-facing application in less than an hour using Dev(Sec)Ops Toolkit. Use the same tools for logging and monitoring across Google Cloud and existing cloud environments. Consider using open source monitoring systems. For more information, see Hybrid and multicloud monitoring and logging patterns. If different teams manage test and production workloads, using separate tooling might be acceptable. However, using the same tools with different view permissions can help reduce your training effort and complexity. When you choose database, storage, and messaging services for functional testing, use products that have a managed equivalent on Google Cloud. Relying on managed services helps decrease the administrative effort of maintaining development and testing environments. To protect sensitive information, we recommend encrypting all communications in transit. If encryption is required at the connectivity layer, various options are available that are based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect. The following table shows which Google Cloud products are compatible with common OSS products.
OSS product | Compatible Google Cloud product
Apache HBase | Bigtable
Apache Beam | Dataflow
CDAP | Cloud Data Fusion
Apache Hadoop | Dataproc
MySQL, PostgreSQL | Cloud SQL
Redis Cluster, Redis, Memcached | Memorystore
Network File System (NFS) | Filestore
JMS, Kafka | Pub/Sub
Kubernetes | GKE Enterprise
Business continuity hybrid and multicloud patterns The main driver of considering business continuity for mission-critical systems is to help an organization to be resilient and continue its business operations during and following failure events. By replicating systems and data over multiple geographical regions and avoiding single points of failure, you can minimize the risks of a natural disaster that affects local infrastructure. Other failure scenarios include severe system failure, a cybersecurity attack, or even a system configuration error. Optimizing a system to withstand failures is essential for establishing effective business continuity. System reliability can be influenced by several factors, including, but not limited to, performance, resilience, uptime availability, security, and user experience. For more information on how to architect and operate reliable services on Google Cloud, see the reliability pillar of the Google Cloud Architecture Framework and building blocks of reliability in Google Cloud.
This architecture pattern relies on a redundant deployment of applications across multiple computing environments. In this pattern, you deploy the same applications in multiple computing environments with the aim of increasing reliability. Business continuity can be defined as the ability of an organization to continue its key business functions or services at predefined acceptable levels following a disruptive event. Disaster recovery (DR) is considered a subset of business continuity, explicitly focusing on ensuring that the IT systems that support critical business functions are operational as soon as possible after a disruption. In general, DR strategies and plans often help form a broader business continuity strategy. From a technology point of view, when you start creating disaster recovery strategies, your business impact analysis should define two key metrics: the recovery point objective (RPO) and the recovery time objective (RTO). For more guidance on using Google Cloud to address disaster recovery, see the Disaster recovery planning guide. The smaller the RPO and RTO target values are, the faster services might recover from an interruption with minimal data loss. However, smaller values imply higher cost because they mean building redundant systems. Redundant systems that are capable of performing near real-time data replication and that operate at the same scale following a failure event increase complexity, administrative overhead, and cost. The decision to select a DR strategy or pattern should be driven by a business impact analysis. For example, the financial losses incurred from even a few minutes of downtime for a financial services organization might far exceed the cost of implementing a DR system. However, businesses in other industries might sustain hours of downtime without a significant business effect. When you run mission-critical systems in an on-premises data center, one DR approach is to maintain standby systems in a second data center in a different region. A more cost-effective approach, however, is to use a public cloud–based computing environment for failover purposes. This approach is the main driver of the business continuity hybrid pattern. The cloud can be especially appealing from a cost point of view, because it lets you turn off some of your DR infrastructure when it's not in use. To achieve a lower-cost DR solution, a business might accept the potential increase in RPO and RTO values that comes with a cloud-based approach. The preceding diagram illustrates the use of the cloud as a failover or disaster recovery environment to an on-premises environment. A less common (and rarely required) variant of this pattern is the business continuity multicloud pattern. In that pattern, the production environment uses one cloud provider and the DR environment uses another cloud provider. By deploying copies of workloads across multiple cloud providers, you might increase availability beyond what a multi-region deployment offers. Evaluating a DR strategy across multiple clouds versus using one cloud provider with different regions requires a thorough analysis of several considerations, including manageability, security, overall feasibility, and cost. Cost considerations include the potential outbound data transfer charges from more than one cloud provider, which could be costly with continuous inter-cloud communication; the potentially high volume of traffic when replicating databases; and the TCO and the cost of managing inter-cloud network infrastructure.
Note: For more information about how Cross-Cloud Interconnect might help you to lower TCO and reduce complexity, see Announcing Cross-Cloud Interconnect: seamless connectivity to all your clouds. If your data needs to stay in your country to meet regulatory requirements, using a second cloud provider that's also in your country as a DR environment can be an option. That use of a second cloud provider assumes that there's no option to use an on-premises environment to build a hybrid setup. To avoid rearchitecting your cloud solution, ideally your second cloud provider should offer all the required capabilities and services you need in-region. Design considerations DR expectation: The RPO and the RTO targets your business wants to achieve should drive your DR architecture and build planning. Solution architecture: With this pattern, you need to replicate the existing functions and capabilities of your on-premises environment to meet your DR expectations. Therefore, you need to assess the feasibility and viability of rehosting, refactoring, or rearchitecting your applications to provide the same (or more optimized) functions and performance in the cloud environment. Design and build: Building a landing zone is almost always a prerequisite to deploying enterprise workloads in a cloud environment. For more information, see Landing zone design in Google Cloud. DR invocation: It's important for your DR design and process to consider the following questions: What triggers a DR scenario? For example, a DR might be triggered by the failure of specific functions or systems in the primary site. How is the failover to the DR environment invoked? Is it a manual approval process, or can it be automated to achieve a low RTO target? How should system failure detection and notification mechanisms be designed to invoke failover in alignment with the expected RTO? How is traffic rerouted to the DR environment after the failure is detected? Validate your answers to these questions through testing. Testing: Thoroughly test and evaluate the failover to DR. Ensure that it meets your RPO and RTO expectations. Doing so could give you more confidence to invoke DR when required. Any time a new change or update is made to the process or technology solution, conduct the tests again. Team skills: One or more technical teams must have the skills and expertise to build, operate, and troubleshoot the production workload in the cloud environment, unless your environment is managed by a third party. Advantages Using Google Cloud for business continuity offers several advantages: Because Google Cloud has many regions across the globe to choose from, you can use it to back up or replicate data to a different site within the same continent. You can also back up or replicate data to a site on a different continent. Google Cloud offers the ability to store data in Cloud Storage in a dual-region or multi-region bucket. Data is stored redundantly in at least two separate geographic regions. Data stored in dual-region and multi-region buckets is replicated across geographic regions using default replication. Dual-region buckets provide geo-redundancy to support business continuity and DR plans. Also, to replicate faster and achieve a lower RPO, objects stored in dual-region buckets can optionally use turbo replication across those regions. Similarly, multi-region replication provides redundancy across multiple regions by storing your data within the geographic boundary of the multi-region.
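As a minimal illustration of the Cloud Storage advantage described above, the following Terraform sketch provisions a dual-region bucket with turbo replication enabled for a lower RPO. The bucket name, project ID, and choice of the NAM4 dual-region are hypothetical assumptions.

# Minimal Terraform sketch (hypothetical names) of a dual-region Cloud Storage
# bucket used as a DR backup target, with turbo replication for a lower RPO.
resource "google_storage_bucket" "dr_backups" {
  name     = "example-dr-backups"   # assumed globally unique bucket name
  project  = "my-dr-project"        # assumed project ID
  location = "NAM4"                 # predefined dual-region (Iowa and South Carolina)

  # ASYNC_TURBO enables turbo replication between the two regions of the
  # dual-region bucket; DEFAULT uses standard geo-redundant replication.
  rpo = "ASYNC_TURBO"

  uniform_bucket_level_access = true

  versioning {
    enabled = true
  }
}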
Google Cloud provides one or more of the following options to reduce capital expenses and operating expenses to build a DR: Stopped VM instances only incur storage costs and are substantially cheaper than running VM instances. That means you can minimize the cost of maintaining cold standby systems. The pay-per-use model of Google Cloud means that you only pay for the storage and compute capacity that you actually use. Elasticity capabilities, like autoscaling, let you automatically scale or shrink your DR environment as needed. For example, the following diagram shows an application running in an on-premises environment (production) that uses recovery components on Google Cloud with Compute Engine, Cloud SQL, and Cloud Load Balancing. In this scenario, the database is pre-provisioned using a VM-based database or using a Google Cloud managed database, like Cloud SQL, for faster recovery with continuous data replication. You can launch Compute Engine VMs from pre-created snapshots to reduce cost during normal operations. With this setup, and following a failure event, DNS needs to point to the Cloud Load Balancing external IP address. To have the application operational in the cloud, you need to provision the web and application VMs. Depending on the targeted RTO level and company policies, the entire process to invoke a DR, provision the workload in the cloud, and reroute the traffic can be completed manually or automatically. To speed up and automate the provisioning of the infrastructure, consider managing the infrastructure as code. You can use Cloud Build, which is a continuous integration service, to automatically apply Terraform manifests to your environment. For more information, see Managing infrastructure as code with Terraform, Cloud Build, and GitOps. Best practices When you're using the business continuity pattern, consider the following best practices: Create a disaster recovery plan that documents your infrastructure along with failover and recovery procedures. Consider the following actions based on your business impact analysis and the identified required RPO and RTO targets: Decide whether backing up data to Google Cloud is sufficient, or whether you need to consider another DR strategy (cold, warm, or hot standby systems). Define the services and products that you can use as building blocks for your DR plan. Frame the applicable DR scenarios for your applications and data as part of your selected DR strategy. Consider using the handover pattern when you're only backing up data. Otherwise, the meshed pattern might be a good option to replicate the existing environment network architecture. Minimize dependencies between systems that are running in different environments, particularly when communication is handled synchronously. These dependencies can slow performance and decrease overall availability. Avoid the split-brain problem. If you replicate data bidirectionally across environments, you might be exposed to the split-brain problem. The split-brain problem occurs when two environments that replicate data bidirectionally lose communication with each other. This split can cause systems in both environments to conclude that the other environment is unavailable and that they have exclusive access to the data. This can lead to conflicting modifications of the data. There are two common ways to avoid the split-brain problem: Use a third computing environment. This environment allows systems to check for a quorum before modifying data.
Allow conflicting data modifications to be reconciled after connectivity is restored. With SQL databases, you can avoid the split-brain problem by making the original primary instance inaccessible before clients start using the new primary instance. For more information, see Cloud SQL database disaster recovery. Ensure that CI/CD systems and artifact repositories don't become a single point of failure. When one environment is unavailable, you must still be able to deploy new releases or apply configuration changes. Make all workloads portable when using standby systems. All workloads should be portable (where supported by the applications and feasible) so that systems remain consistent across environments. You can achieve this approach by considering containers and Kubernetes. By using Google Kubernetes Engine (GKE) Enterprise edition, you can simplify the build and operations. Integrate the deployment of standby systems into your CI/CD pipeline. This integration helps ensure that application versions and configurations are consistent across environments. Ensure that DNS changes are propagated quickly by configuring your DNS with a reasonably short time to live value so that you can reroute users to standby systems when a disaster occurs. Select the DNS policy and routing policy that align with your architecture and solution behavior. Also, you can combine multiple regional load balancers with DNS routing policies to create global load-balancing architectures for different use cases, including hybrid setup. Use multiple DNS providers. When using multiple DNS providers, you can: Improve the availability and resiliency of your applications and services. Simplify the deployment or migration of hybrid applications that have dependencies across on-premises and cloud environments with a multi-provider DNS configuration. Google Cloud offers an open source solution based on octoDNS to help you set up and operate an environment with multiple DNS providers. For more information, see Multi-provider public DNS using Cloud DNS. Use load balancers when using standby systems to create an automatic failover. Keep in mind that load balancer hardware can fail. Use Cloud Load Balancing instead of hardware load balancers to power some scenarios that occur when using this architecture pattern. Internal client requests or external client requests can be redirected to the primary environment or the DR environment based on different metrics, such as weight-based traffic splitting. For more information, see Traffic management overview for global external Application Load Balancer. Consider using Cloud Interconnect or Cross-Cloud Interconnect if the outbound data transfer volume from Google Cloud toward the other environment is high. Cloud Interconnect can help to optimize the connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing. Consider using your preferred partner solution on Google Cloud Marketplace to help facilitate the data backups, replications, and other tasks that meet your requirements, including your RPO and RTO targets. Test and evaluate DR invocation scenarios to understand how readily the application can recover from a disaster event when compared to the target RTO value. Encrypt communications in transit. To protect sensitive information, we recommend encrypting all communications in transit. 
If encryption is required at the connectivity layer, various options are available based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect. Cloud bursting pattern Internet applications can experience extreme fluctuations in usage. While most enterprise applications don't face this challenge, many enterprises must deal with a different kind of bursty workload: batch or CI/CD jobs. This architecture pattern relies on a redundant deployment of applications across multiple computing environments. The goal is to increase capacity, resiliency, or both. While you can accommodate bursty workloads in a data-center-based computing environment by overprovisioning resources, this approach might not be cost effective. With batch jobs, you can optimize use by stretching their execution over longer time periods, although delaying jobs isn't practical if they're time sensitive. The idea of the cloud bursting pattern is to use a private computing environment for the baseline load and burst to the cloud temporarily when you need extra capacity. In the preceding diagram, when data capacity is at its limit in an on-premises private environment, the system can gain extra capacity from a Google Cloud environment when needed. The key drivers of this pattern are saving money and reducing the time and effort needed to respond to scale requirement changes. With this approach, you only pay for the resources used when handling extra loads. That means you don't need to overprovision your infrastructure. Instead you can take advantage of on-demand cloud resources and scale them to fit the demand, and any predefined metrics. As a result, your company might avoid service interruptions during peak demand times. A potential requirement for cloud bursting scenarios is workload portability. When you allow workloads to be deployed to multiple environments, you must abstract away the differences between the environments. For example, Kubernetes gives you the ability to achieve consistency at the workload level across diverse environments that use different infrastructures. For more information, see GKE Enterprise hybrid environment reference architecture. Design considerations The cloud bursting pattern applies to interactive and batch workloads. When you're dealing with interactive workloads, however, you must determine how to distribute requests across environments: You can route incoming user requests to a load balancer that runs in the existing data center, and then have the load balancer distribute requests across the local and cloud resources. This approach requires the load balancer or another system that is running in the existing data center to also track the resources that are allocated in the cloud. The load balancer or another system must also initiate the automatic upscaling or downscaling of resources. Using this approach you can decommission all cloud resources during times of low activity. However, implementing mechanisms to track resources might exceed the capabilities of your load balancer solutions, and therefore increase overall complexity. Instead of implementing mechanisms to track resources, you can use Cloud Load Balancing with a hybrid connectivity network endpoint group (NEG) backend. 
You use this load balancer to route internal client requests or external client requests to backends that are located both on-premises and in Google Cloud, based on different metrics, like weight-based traffic splitting. Also, you can scale backends based on load balancing serving capacity for workloads in Google Cloud. For more information, see Traffic management overview for global external Application Load Balancer. This approach has several additional benefits, such as taking advantage of Google Cloud Armor DDoS protection capabilities, WAF, and caching content at the cloud edge using Cloud CDN. However, you need to size the hybrid network connectivity to handle the additional traffic. As highlighted in Workload portability, an application might be portable to a different environment with minimal changes to achieve workload consistency, but that doesn't mean that the application performs equally well in both environments. Differences in underlying compute, infrastructure security capabilities, or networking infrastructure, along with proximity to dependent services, typically determine performance. Through testing, you can gain more accurate visibility and set realistic performance expectations. You can use cloud infrastructure services to build an environment to host your applications without portability. Use the following approaches to handle client requests when traffic is redirected during peak demand times: Use consistent tooling to monitor and manage these two environments. Ensure consistent workload versioning and that your data sources are current. You might need to add automation to provision the cloud environment and reroute traffic when demand increases and the cloud workload is expected to accept client requests for your application. If you intend to shut down all Google Cloud resources during times of low demand, using DNS routing policies primarily for traffic load balancing might not always be optimal. This is mainly because: Resources can require some time to initialize before they can serve users. DNS updates tend to propagate slowly over the internet. As a result: Users might be routed to the cloud environment even when no resources are available to process their requests. Users might keep being routed to the on-premises environment temporarily while DNS updates propagate across the internet. With Cloud DNS, you can choose the DNS policy and routing policy that align with your solution architecture and behavior, such as geolocation DNS routing policies. Cloud DNS also supports health checks for internal passthrough Network Load Balancers and internal Application Load Balancers. In that case, you could incorporate it with your overall hybrid DNS setup that's based on this pattern. In some scenarios, you can use Cloud DNS to distribute client requests with health checks on Google Cloud, like when using internal Application Load Balancers or cross-region internal Application Load Balancers. In this scenario, Cloud DNS checks the overall health of the internal Application Load Balancer, which itself checks the health of the backend instances. For more information, see Manage DNS routing policies and health checks. You can also use Cloud DNS split horizon. Cloud DNS split horizon is an approach for setting up DNS responses or records to the specific location or network of the DNS query originator for the same domain name.
This approach is commonly used to address requirements where an application is designed to offer both a private and a public experience, each with unique features. The approach also helps to distribute traffic load across environments. Given these considerations, cloud bursting generally lends itself better to batch workloads than to interactive workloads. Advantages Key advantages of the cloud bursting architecture pattern include: Cloud bursting lets you reuse existing investments in data centers and private computing environments. This reuse can either be permanent or in effect until existing equipment becomes due for replacement, at which point you might consider a full migration. Because you no longer have to maintain excess capacity to satisfy peak demands, you might be able to increase the use and cost effectiveness of your private computing environments. Cloud bursting lets you run batch jobs in a timely fashion without the need for overprovisioning compute resources. Best practices When implementing cloud bursting, consider the following best practices: To ensure that workloads running in the cloud can access resources in the same fashion as workloads running in an on-premises environment, use the meshed pattern with the least privileged security access principle. If the workload design permits it, you can allow access only from the cloud to the on-premises computing environment, not the other way round. To minimize latency for communication between environments, pick a Google Cloud region that is geographically close to your private computing environment. For more information, see Best practices for Compute Engine regions selection. When using cloud bursting for batch workloads only, reduce the security attack surface by keeping all Google Cloud resources private. Disallow any direct access from the internet to these resources, even if you're using Google Cloud external load balancing to provide the entry point to the workload. Select the DNS policy and routing policy that align with your architecture pattern and the targeted solution behavior. As part of this pattern, you can apply the design of your DNS policies permanently or when you need extra capacity using another environment during peak demand times. You can use geolocation DNS routing policies to have a global DNS endpoint for your regional load balancers. This tactic has many use cases, including hybrid applications that use Google Cloud alongside an on-premises deployment in a location where a Google Cloud region exists. If you need to provide different records for the same DNS queries, you can use split horizon DNS—for example, queries from internal and external clients. For more information, see reference architectures for hybrid DNS. To ensure that DNS changes are propagated quickly, configure your DNS with a reasonably short time to live value so that you can reroute users to standby systems when you need extra capacity using cloud environments. For jobs that aren't highly time critical and don't store data locally, consider using Spot VM instances, which are substantially cheaper than regular VM instances, as shown in the sketch that follows this list. A prerequisite, however, is that if the VM job is preempted, the system must be able to automatically restart the job. Use containers to achieve workload portability where applicable. Also, GKE Enterprise can be a key enabling technology for that design. For more information, see GKE Enterprise hybrid environment reference architecture.
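The following minimal Terraform sketch illustrates the Spot VM recommendation above for a fault-tolerant, private-only batch worker that bursts into Google Cloud. The project ID, zone, subnet, and machine type are hypothetical assumptions; the job itself must be able to restart automatically after preemption.

# Minimal Terraform sketch (hypothetical names) of a Spot VM for a
# fault-tolerant batch job that bursts into Google Cloud.
resource "google_compute_instance" "batch_worker" {
  name         = "burst-batch-worker-1"
  project      = "my-burst-project"   # assumed project ID
  zone         = "us-central1-a"      # pick a region close to your private environment
  machine_type = "e2-highcpu-8"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  # Private IP only, in line with keeping burst resources off the internet.
  network_interface {
    subnetwork = "burst-subnet"        # assumed subnet reachable over hybrid connectivity
  }

  scheduling {
    provisioning_model          = "SPOT"
    preemptible                 = true
    automatic_restart           = false
    instance_termination_action = "STOP"   # the batch system re-queues the job after preemption
  }
}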
Monitor any traffic sent from Google Cloud to a different computing environment. This traffic is subject to outbound data transfer charges. If you plan to use this architecture long term with high outbound data transfer volume, consider using Cloud Interconnect. Cloud Interconnect can help to optimize the connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing. When Cloud Load Balancing is used, you should use its application capacity optimizations abilities where applicable. Doing so can help you address some of the capacity challenges that can occur in globally distributed applications. Authenticate the people who use your systems by establishing common identity between environments so that systems can securely authenticate across environment boundaries. To protect sensitive information, encrypting all communications in transit is highly recommended. If encryption is required at the connectivity layer, various options are available based on the selected hybrid connectivity solution. These options include VPN tunnels, HA VPN over Cloud Interconnect, and MACsec for Cloud Interconnect. Hybrid and multicloud architecture patterns: What's next Learn how to approach hybrid and multicloud architecture and how to choose suitable workloads. Find out more about the networking architecture patterns suitable for the selected hybrid and multicloud architecture patterns. Learn more about Deployment Archetypes for Cloud Applications. Learn how to design and deploy an ecommerce web application using different architectures, including a microservice-based ecommerce web application using GKE, and a dynamic web application that's Serverless API based. For more information about region-specific considerations, see Geography and regions. ↩ Send feedback \ No newline at end of file diff --git a/View_the_guide_as_a_single_page(2).txt b/View_the_guide_as_a_single_page(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..fc2554d2fa5aca9dfc3900b9b56559138a223ff3 --- /dev/null +++ b/View_the_guide_as_a_single_page(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/one-page-view +Date Scraped: 2025-02-23T11:50:47.755Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hybrid and multicloud secure networking architecture patterns Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-01-23 UTC This page provides a single-page view of all the pages in Build hybrid and multicloud architectures using Google Cloud. You can print this page, or you can save it in PDF format by using your browser's print function and choosing the Save as PDF option. This page does not provide a table of contents (ToC) pane on the right side. This document is the third of three documents in a set. It discusses hybrid and multicloud networking architecture patterns. This part explores several common secure network architecture patterns that you can use for hybrid and multicloud architectures. It describes the scenarios that these networking patterns are best suited for, and provides best practices for implementing them with Google Cloud. The document set for hybrid and multicloud architecture patterns consists of these parts: Build hybrid and multicloud architectures: discusses planning a strategy for architecting a hybrid and multicloud setup with Google Cloud. 
Hybrid and multicloud architecture patterns: discusses common architecture patterns to adopt as part of a hybrid and multicloud strategy. Hybrid and multicloud secure networking architecture patterns: discusses hybrid and multicloud networking architecture patterns from a networking perspective (this document). Connecting private computing environments to Google Cloud securely and reliably is essential for any successful hybrid and multicloud architecture. The hybrid networking connectivity and cloud networking architecture pattern you choose for a hybrid and multicloud setup must meet the unique requirements of your enterprise workloads. It must also suit the architecture patterns you intend to apply. Although you might need to tailor each design, there are common patterns you can use as a blueprint. The networking architecture patterns in this document shouldn't be considered alternatives to the landing zone design in Google Cloud. Instead, you should design and deploy the architecture patterns you select as part of the overall Google Cloud landing zone design, which spans the following areas: Identities Resource management Security Networking Monitoring Different applications can use different networking architecture patterns, which are incorporated as part of a landing zone architecture. In a multicloud setup, you should maintain the consistency of the landing zone design across all environments. This series contains the following pages: Design considerations Architecture patterns Mirrored pattern Meshed pattern Gated patterns Gated egress Gated ingress Gated egress and ingress Handover General best practices Contributors Author: Marwan Al Shawi | Partner Customer Engineer. Other contributors: Saud Albazei | Customer Engineer, Application Modernization; Anna Berenberg | Engineering Fellow; Marco Ferrari | Cloud Solutions Architect; Victor Moreno | Product Manager, Cloud Networking; Johannes Passing | Cloud Solutions Architect; Mark Schlagenhauf | Technical Writer, Networking; Daniel Strebel | EMEA Solution Lead, Application Modernization; Ammett Williams | Developer Relations Engineer. Architecture patterns The documents in this series discuss networking architecture patterns that are designed based on the required communication models between applications residing in Google Cloud and in other environments (on-premises, in other clouds, or both). These patterns should be incorporated into the overall organization landing zone architecture, which can include multiple networking patterns to address the specific communication and security requirements of different applications. The documents in this series also discuss the different design variations that can be used with each architecture pattern. The following networking patterns can help you to meet communication and security requirements for your applications: Mirrored pattern Meshed pattern Gated patterns Gated egress Gated ingress Gated egress and gated ingress Handover pattern Mirrored pattern The mirrored pattern is based on replicating the design of a certain existing environment or environments to a new environment or environments. Therefore, this pattern applies primarily to architectures that follow the environment hybrid pattern. In that pattern, you run your development and testing workloads in one environment while you run your staging and production workloads in another. The mirrored pattern assumes that testing and production workloads aren't supposed to communicate directly with one another.
However, it should be possible to manage and deploy both groups of workloads in a consistent manner. If you use this pattern, connect the two computing environments in a way that aligns with the following requirements: Continuous integration/continuous deployment (CI/CD) can deploy and manage workloads across all computing environments or specific environments. Monitoring, configuration management, and other administrative systems should work across computing environments. Workloads can't communicate directly across computing environments. If necessary, communication has to be in a fine-grained and controlled fashion. Architecture The following architecture diagram shows a high level reference architecture of this pattern that supports CI/CD, Monitoring, configuration management, other administrative systems, and workload communication: The description of the architecture in the preceding diagram is as follows: Workloads are distributed based on the functional environments (development, testing, CI/CD and administrative tooling) across separate VPCs on the Google Cloud side. Shared VPC is used for development and testing workloads. An extra VPC is used for the CI/CD and administrative tooling. With shared VPCs: The applications are managed by different teams per environment and per service project. The host project administers and controls the network communication and security controls between the development and test environments—as well as to outside the VPC. CI/CD VPC is connected to the network running the production workloads in your private computing environment. Firewall rules permit only allowed traffic. You might also use Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention without changing the design or routing. Cloud Next Generation Firewall Enterprise works by creating Google-managed zonal firewall endpoints that use packet intercept technology to transparently inspect the workloads for the configured threat signatures. It also protects workloads against threats. Enables communication among the peered VPCs using internal IP addresses. The peering in this pattern allows CI/CD and administrative systems to deploy and manage development and testing workloads. Consider these general best practices. You establish this CI/CD connection by using one of the discussed hybrid and multicloud networking connectivity options that meet your business and applications requirements. To let you deploy and manage production workloads, this connection provides private network reachability between the different computing environments. All environments should have overlap-free RFC 1918 IP address space. If the instances in the development and testing environments require internet access, consider the following options: You can deploy Cloud NAT into the same Shared VPC host project network. Deploying into the same Shared VPC host project network helps to avoid making these instances directly accessible from the internet. For outbound web traffic, you can use Secure Web Proxy. The proxy offers several benefits. For more information about the Google Cloud tools and capabilities that help you to build, test, and deploy in Google Cloud and across hybrid and multicloud environments, see the DevOps and CI/CD on Google Cloud explained blog. 
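The following minimal Terraform sketch shows, under hypothetical project, network, and IP range assumptions, the kind of plumbing the preceding architecture describes: VPC Network Peering between the CI/CD VPC and the development and testing Shared VPC, a firewall rule that admits only traffic from the CI/CD range, and Cloud NAT so that instances can reach the internet without being directly reachable from it.

# Minimal Terraform sketch (hypothetical names and ranges) of the mirrored
# pattern plumbing: peering, a restrictive firewall rule, and Cloud NAT.
resource "google_compute_network_peering" "cicd_to_devtest" {
  name         = "cicd-to-devtest"
  network      = "projects/cicd-project/global/networks/cicd-vpc"                 # assumed
  peer_network = "projects/devtest-host-project/global/networks/devtest-vpc"      # assumed
  # Don't export routes learned from the on-premises environment, so the
  # development and testing VPC can't reach production on-premises systems.
  export_custom_routes = false
}

resource "google_compute_network_peering" "devtest_to_cicd" {
  name         = "devtest-to-cicd"
  network      = "projects/devtest-host-project/global/networks/devtest-vpc"
  peer_network = "projects/cicd-project/global/networks/cicd-vpc"
}

resource "google_compute_firewall" "allow_cicd_only" {
  name      = "allow-cicd-deploy"
  project   = "devtest-host-project"
  network   = "devtest-vpc"
  direction = "INGRESS"

  source_ranges = ["10.10.0.0/24"]   # assumed CI/CD VPC range
  target_tags   = ["devtest"]

  allow {
    protocol = "tcp"
    ports    = ["22", "443"]
  }
}

resource "google_compute_router" "devtest" {
  name    = "devtest-router"
  project = "devtest-host-project"
  region  = "us-central1"
  network = "devtest-vpc"
}

resource "google_compute_router_nat" "devtest" {
  name                               = "devtest-nat"
  project                            = "devtest-host-project"
  router                             = google_compute_router.devtest.name
  region                             = "us-central1"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

In a real deployment, the allowed source ranges, ports, and route exchange settings would follow from your own communication requirements and landing zone design.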
Variations To meet different design requirements, while still considering all communication requirements, the mirrored architecture pattern offers these options, which are described in the following sections: Shared VPC per environment Centralized application layer firewall Hub-and-spoke topology Microservices zero trust distributed architecture Shared VPC per environment The shared VPC per environment design option allows for application- or service-level separation across environments, including CI/CD and administrative tools that might be required to meet certain organizational security requirements. These requirements limit communication, administrative domain, and access control for different services that also need to be managed by different teams. This design achieves separation by providing network- and project-level isolation between the different environments, which enables more fine-grained communication and Identity and Access Management (IAM) access control. From a management and operations perspective, this design provides the flexibility to manage the applications and workloads created by different teams per environment and per service project. VPC networking, and its security features can be provisioned and managed by networking operations teams based on the following possible structures: One team manages all host projects across all environments. Different teams manage the host projects in their respective environments. Decisions about managing host projects should be based on the team structure, security operations, and access requirements of each team. You can apply this design variation to the Shared VPC network for each environment landing zone design option. However, you need to consider the communication requirements of the mirrored pattern to define what communication is allowed between the different environments, including communication over the hybrid network. You can also provision a Shared VPC network for each main environment, as illustrated in the following diagram: Centralized application layer firewall In some scenarios, the security requirements might mandate the consideration of application layer (Layer 7) and deep packet inspection with advanced firewalling mechanisms that exceed the capabilities of Cloud Next Generation Firewall. To meet the security requirements and standards of your organization, you can use an NGFW appliance hosted in a network virtual appliance (NVA). Several Google Cloud security partners offer options well suited to this task. As illustrated in the following diagram, you can place the NVA in the network path between Virtual Private Cloud and the private computing environment using multiple network interfaces. This design also can be used with multiple shared VPCs as illustrated in the following diagram. The NVA in this design acts as the perimeter security layer. It also serves as the foundation for enabling inline traffic inspection and enforcing strict access control policies. For a robust multilayer security strategy that includes VPC firewall rules and intrusion prevention service capabilities, include further traffic inspection and security control to both east-west and north-south traffic flows. Note: In supported cloud regions, and when technically feasible for your design, NVAs can be deployed without requiring multiple VPC networks or appliance interfaces. This deployment is based on using load balancing and policy-based routing capabilities. 
These capabilities enable a topology-independent, policy-driven mechanism for integrating NVAs into your cloud network. For more details, see Deploy network virtual appliances (NVAs) without multiple VPCs. Hub-and-spoke topology Another possible design variation is to use separate VPCs (including shared VPCs) for your development and different testing stages. In this variation, as shown in the following diagram, all stage environments connect with the CI/CD and administrative VPC in a hub-and-spoke architecture. Use this option if you must separate the administrative domains and the functions in each environment. The hub-and-spoke communication model can help with the following requirements: Applications need to access a common set of services, like monitoring, configuration management tools, CI/CD, or authentication. A common set of security policies needs to be applied to inbound and outbound traffic in a centralized manner through the hub. For more information about hub-and-spoke design options, see Hub-and-spoke topology with centralized appliances and Hub-and-spoke topology without centralized appliances. As shown in the preceding diagram, the inter-VPC communication and hybrid connectivity all pass through the hub VPC. As part of this pattern, you can control and restrict the communication at the hub VPC to align with your connectivity requirements. As part of the hub-and-spoke network architecture, the following are the primary connectivity options (between the spokes and hub VPCs) on Google Cloud: VPC Network Peering VPN Using network virtual appliance (NVA) With multiple network interfaces With Network Connectivity Center (NCC) For more information on which option you should consider in your design, see Hub-and-spoke network architecture. A key factor in selecting VPN over VPC peering between the spokes and the hub VPC is whether traffic transitivity is required. Traffic transitivity means that traffic from a spoke can reach other spokes through the hub. Microservices zero trust distributed architecture Hybrid and multicloud architectures can require multiple clusters to achieve their technical and business objectives, including separating the production environment from the development and testing environments. Therefore, network perimeter security controls are important, especially when they're required to comply with certain security requirements. However, network perimeter controls alone aren't enough to support the security requirements of current cloud-first distributed microservices architectures; you should also consider zero trust distributed architectures. The microservices zero trust distributed architecture supports your microservices architecture with microservice-level security policy enforcement, authentication, and workload identity. Trust is identity-based and enforced for each service. By using a distributed proxy architecture, such as a service mesh, services can effectively validate callers and implement fine-grained access control policies for each request, enabling a more secure and scalable microservices environment. Cloud Service Mesh gives you the flexibility to have a common mesh that can span your Google Cloud and on-premises deployments. The mesh uses authorization policies to help secure service-to-service communications. You might also incorporate Apigee Adapter for Envoy, which is a lightweight Apigee API gateway deployment within a Kubernetes cluster, with this architecture.
Apigee Adapter for Envoy is an open source edge and service proxy that's designed for cloud-first applications. For more information about this topic, see the following articles: Zero Trust Distributed Architecture; GKE Enterprise hybrid environment; Connect to Google: connect an on-premises GKE Enterprise cluster to a Google Cloud network; and Set up a multicloud or hybrid mesh: deploy Cloud Service Mesh across environments and clusters. Mirrored pattern best practices The CI/CD systems required for deploying or reconfiguring production deployments must be highly available, meaning that all architecture components must be designed to provide the expected level of system availability. For more information, see Google Cloud infrastructure reliability. To eliminate configuration errors for repeated processes like code updates, automation is essential to standardize your builds, tests, and deployments. The integration of centralized NVAs in this design might require the incorporation of multiple segments with varying levels of security access controls. When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by your NVA vendor. By not exporting on-premises IP routes over VPC peering or VPN to the development and testing VPC, you can restrict network reachability from development and testing environments to the on-premises environment. For more information, see VPC Network Peering custom route exchange. For workloads with private IP addressing that require access to Google APIs, you can expose Google APIs by using a Private Service Connect endpoint within a VPC network. For more information, see Gated ingress in this series. Review the general best practices for hybrid and multicloud networking architecture patterns. Meshed pattern The meshed pattern is based on establishing a hybrid network architecture. That architecture spans multiple computing environments. In these environments, all systems can communicate with one another and aren't limited to one-way communication based on the security requirements of your applications. This networking pattern applies primarily to tiered hybrid, partitioned multicloud, or bursting architectures. It's also applicable to business continuity design to provision a disaster recovery (DR) environment in Google Cloud. In all cases, it requires that you connect computing environments in a way that aligns with the following communication requirements: Workloads can communicate with one another across environment boundaries using private RFC 1918 IP addresses. Communication can be initiated from either side. The specifics of the communications model can vary based on the applications and security requirements, such as the communication models discussed in the design options that follow. The firewall rules that you use must allow traffic between specific IP address sources and destinations based on the requirements of the application, or applications, for which the pattern is designed. Ideally, you can use a multi-layered security approach to restrict traffic flows in a fine-grained fashion, both between and within computing environments. Architecture The following diagram illustrates a high-level reference architecture of the meshed pattern. All environments should use an overlap-free RFC 1918 IP address space.
On the Google Cloud side, you can deploy workloads into a single or multiple shared VPCs or non-shared VPCs. For other possible design options of this pattern, refer to the design variations that follow. The selected structure of your VPCs should align with the projects and resources hierarchy design of your organization. The VPC network of Google Cloud extends to other computing environments. Those environments can be on-premises or in another cloud. Use one of the hybrid and multicloud networking connectivity options that meet your business and application requirements. Limit communications to only the allowed IP addresses of your sources and destinations. Use any of the following capabilities, or a combination of them: Firewall rules or firewall policies. Network virtual appliance (NVA) with next generation firewall (NGFW) inspection capabilities, placed in the network path. Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention without changing the network design or routing. Variations The meshed architecture pattern can be combined with other approaches to meet different design requirements, while still considering the communication requirements of the pattern. The pattern options are described in the following sections: One VPC per environment Use a centralized application layer firewall Microservices zero trust distributed architecture One VPC per environment The common reasons to consider the one-VPC-per-environment option are as follows: The cloud environment requires network-level separation of the VPC networks and resources, in alignment with your organization's resource hierarchy design. If administrative domain separation is required, it can also be combined with a separate project per environment. To centrally manage network resources in a common network and provide network isolation between the different environments, use a shared VPC for each environment that you have in Google Cloud, such as development, testing, and production. Scale requirements that might need to go beyond the VPC quotas for a single VPC or project. As illustrated in the following diagram, the one-VPC-per-environment design lets each VPC integrate directly with the on-premises environment or other cloud environments using VPNs, or a Cloud Interconnect with multiple VLAN attachments. The pattern displayed in the preceding diagram can be applied on a landing zone hub-and-spoke network topology. In that topology, a single (or multiple) hybrid connection can be shared with all spoke VPCs. It's shared by using a transit VPC to terminate both the hybrid connectivity and the other spoke VPCs. You can also expand this design by adding NVA with next-generation firewall (NGFW) inspection capabilities at the transit VPC, as described in the next section, "Use a centralized application layer firewall." Use a centralized application layer firewall If your technical requirements mandate considering application layer (Layer 7) and deep packet inspection with advanced firewalling capabilities that exceed the capabilities of Cloud Next Generation Firewall, you can use an NGFW appliance hosted in an NVA. However, that NVA must meet the security needs of your organization. To implement these mechanisms, you can extend the topology to pass all cross-environment traffic through a centralized NVA firewall, as shown in the following diagram. 
You can apply the pattern in the following diagram on the landing zone design by using a hub-and-spoke topology with centralized appliances: As shown in the preceding diagram, the NVA acts as the perimeter security layer and serves as the foundation for enabling inline traffic inspection. It also enforces strict access control policies. To inspect both east-west and north-south traffic flows, the design of a centralized NVA might include multiple segments with different levels of security access controls. Microservices zero trust distributed architecture When containerized applications are used, the microservices zero trust distributed architecture discussed in the mirrored pattern section is also applicable to this architecture pattern. The key difference between this pattern and the mirrored pattern is that the communication model between workloads in Google Cloud and other environments can be initiated from either side. Traffic must be controlled in a fine-grained fashion, based on the application requirements and security requirements, by using a service mesh. Meshed pattern best practices Before you do anything else, decide on your resource hierarchy design, and the design required to support any project and VPC. Doing so can help you select the optimal networking architecture that aligns with the structure of your Google Cloud projects. Use a zero trust distributed architecture when using Kubernetes within your private computing environment and Google Cloud. When you use centralized NVAs in your design, you should define multiple segments with different levels of security access controls and traffic inspection policies. Base these controls and policies on the security requirements of your applications. When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by the Google Cloud security vendor that supplies your NVAs. To provide increased privacy, data integrity, and a controlled communication model, expose applications through APIs using API gateways, like Apigee and Apigee hybrid with end-to-end mTLS. You can also use a shared VPC with Apigee in the same organization resource. If the design of your solution requires exposing a Google Cloud-based application to the public internet, consider the design recommendations discussed in Networking for internet-facing application delivery. To help protect Google Cloud services in your projects, and to help mitigate the risk of data exfiltration, use VPC Service Controls to specify service perimeters at the project or VPC network level. Also, you can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls. Review the general best practices for hybrid and multicloud networking patterns. If you intend to enforce stricter isolation and more fine-grained access between your applications hosted in Google Cloud and in other environments, consider using one of the gated patterns that are discussed in the other documents in this series. Gated patterns The gated pattern is based on an architecture that exposes select applications and services in a fine-grained manner, based on specific exposed APIs or endpoints between the different environments.
This guide categorizes this pattern into three possible options, each determined by the specific communication model: Gated egress Gated ingress Gated egress and ingress (bidirectional gated in both directions) As previously mentioned in this guide, the networking architecture patterns described here can be adapted to various applications with diverse requirements. To address the specific needs of different applications, your main landing zone architecture might incorporate one pattern or a combination of patterns simultaneously. The specific deployment of the selected architecture is determined by the specific communication requirements of each gated pattern. Note: In general, the gated pattern can be applied or incorporated with the landing zone design option that exposes the services in a consumer-producer model. This series discusses each gated pattern and its possible design options. However, one common design option applicable to all gated patterns is the Zero Trust Distributed Architecture for containerized applications with microservice architecture. This option is powered by Cloud Service Mesh, Apigee, and Apigee Adapter for Envoy—a lightweight Apigee gateway deployment within a Kubernetes cluster. Apigee Adapter for Envoy is a popular, open source edge and service proxy that's designed for cloud-first applications. This architecture controls allowed secure service-to-service communications and the direction of communication at a service level. Traffic communication policies can be designed, fine-tuned, and applied at the service level based on the selected pattern. Gated patterns allow for the implementation of Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to perform deep packet inspection for threat prevention without any design or routing modifications. That inspection is subject to the specific applications being accessed, the communication model, and the security requirements. If security requirements demand Layer 7 and deep packet inspection with advanced firewalling mechanisms that surpass the capabilities of Cloud Next Generation Firewall, you can use a centralized next generation firewall (NGFW) hosted in a network virtual appliance (NVA). Several Google Cloud security partners offer NGFW appliances that can meet your security requirements. Integrating NVAs with these gated patterns can require introducing multiple security zones within the network design, each with distinct access control levels. Gated egress The architecture of the gated egress networking pattern is based on exposing select APIs from the on-premises environment or another cloud environment to workloads that are deployed in Google Cloud. It does so without directly exposing them to the public internet from an on-premises environment or from other cloud environments. You can facilitate this limited exposure through an API gateway or proxy, or a load balancer that serves as a facade for existing workloads. You can deploy the API gateway functionality in an isolated perimeter network segment, like a perimeter network. The gated egress networking pattern applies primarily to (but isn't limited to) tiered application architecture patterns and partitioned application architecture patterns. When deploying backend workloads within an internal network, gated egress networking helps to maintain a higher level of security within your on-premises computing environment. 
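As a minimal sketch of the egress-only posture that gated egress implies, the following Terraform example allows workloads in a Google Cloud VPC to reach a remote API gateway on port 443 while blocking other traffic toward, and any connections initiated from, the remote environment; stateful firewalling still admits the return traffic for the outbound calls. The project, network name, and IP ranges are hypothetical assumptions, not values from the original guidance.

# Minimal Terraform sketch (hypothetical names and ranges) of gated egress
# firewall rules: outbound calls only to the remote API gateway, nothing inbound.
resource "google_compute_firewall" "allow_egress_to_remote_api" {
  name      = "allow-egress-remote-api-gw"
  project   = "my-workload-project"   # assumed project ID
  network   = "workload-vpc"          # assumed VPC name
  direction = "EGRESS"
  priority  = 900

  destination_ranges = ["192.168.50.10/32"]   # assumed remote API gateway address

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }
}

resource "google_compute_firewall" "deny_other_egress_to_remote" {
  name      = "deny-other-egress-to-remote"
  project   = "my-workload-project"
  network   = "workload-vpc"
  direction = "EGRESS"
  priority  = 1000   # lower priority than the allow rule above

  destination_ranges = ["192.168.0.0/16"]     # assumed on-premises ranges

  deny {
    protocol = "all"
  }
}

resource "google_compute_firewall" "deny_ingress_from_remote" {
  name      = "deny-ingress-from-remote"
  project   = "my-workload-project"
  network   = "workload-vpc"
  direction = "INGRESS"
  priority  = 1000

  source_ranges = ["192.168.0.0/16"]

  deny {
    protocol = "all"
  }
}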
The pattern requires that you connect computing environments in a way that meets the following communication requirements: Workloads that you deploy in Google Cloud can communicate with the API gateway or load balancer (or a Private Service Connect endpoint) that exposes the application by using internal IP addresses. Other systems in the private computing environment can't be reached directly from within Google Cloud. Communication from the private computing environment to any workloads deployed in Google Cloud isn't allowed. Traffic to the private APIs in other environments is only initiated from within the Google Cloud environment. The focus of this guide is on hybrid and multicloud environments connected over a private hybrid network. If the security requirements of your organization permit it, API calls to remote target APIs with public IP addresses can be directly reached over the internet. But you must consider the following security mechanisms: API OAuth 2.0 with Transport Layer Security (TLS). Rate limiting. Threat protection policies. Mutual TLS configured to the backend of your API layer. IP address allowlist filtering configured to only allow communication with predefined API sources and destinations from both sides. To secure an API proxy, consider these other security aspects. For more information, see Best practices for securing your applications and APIs using Apigee. Architecture The following diagram shows a reference architecture that supports the communication requirements listed in the previous section: Data flows through the preceding diagram as follows: On the Google Cloud side, you can deploy workloads into virtual private clouds (VPCs). The VPCs can be single or multiple (shared or non-shared). The deployment should be in alignment with the projects and resource hierarchy design of your organization. The VPC networks of the Google Cloud environment are extended to the other computing environments. The environments can be on-premises or in another cloud. To facilitate the communication between environments using internal IP addresses, use a suitable hybrid and multicloud networking connectivity. To limit the traffic that originates from specific VPC IP addresses, and is destined for remote gateways or load balancers, use IP address allowlist filtering. Return traffic from these connections is allowed when using stateful firewall rules. You can use any combination of the following capabilities to secure and limit communications to only the allowed source and destination IP addresses: Firewall rules or firewall policies. Network virtual appliance (NVA) with next generation firewall (NGFW) inspection capabilities that are placed in the network path. Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention. All environments share overlap-free RFC 1918 IP address space. Variations The gated egress architecture pattern can be combined with other approaches to meet different design requirements that still consider the communication requirements of this pattern. The pattern offers the following options: Use Google Cloud API gateway and global frontend Expose remote services using Private Service Connect Use Google Cloud API gateway and global frontend With this design approach, API exposure and management reside within Google Cloud. As shown in the preceding diagram, you can accomplish this through the implementation of Apigee as the API platform. 
The decision to deploy an API gateway or load balancer in the remote environment depends on your specific needs and current configuration. Apigee provides two options for provisioning connectivity: With VPC peering Without VPC peering Google Cloud global frontend capabilities like Cloud Load Balancing, Cloud CDN (when accessed over Cloud Interconnect), and Cross-Cloud Interconnect enhance the speed with which users can access applications that have backends hosted in your on-premises environments and in other cloud environments. Content delivery is optimized by serving those applications from Google Cloud points of presence (PoPs). Google Cloud PoPs are present on over 180 internet exchanges and at over 160 interconnection facilities around the world. To see how PoPs help deliver high-performing APIs when you use Apigee with Cloud CDN to accomplish the following, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube: Reduce latency. Host APIs globally. Increase availability for peak traffic. The design example illustrated in the preceding diagram is based on Private Service Connect without VPC peering. The northbound networking in this design is established through: A load balancer (LB in the diagram) where client requests terminate. The load balancer processes the traffic and then routes it to a Private Service Connect backend. A Private Service Connect backend lets a Google Cloud load balancer send client requests over a Private Service Connect connection associated with a producer service attachment to the published service (Apigee runtime instance) using Private Service Connect network endpoint groups (NEGs). The southbound networking is established through: A Private Service Connect endpoint that references a service attachment associated with an internal load balancer (ILB in the diagram) in the customer VPC. The ILB is deployed with hybrid connectivity network endpoint groups (hybrid connectivity NEGs). Hybrid services are accessed through the hybrid connectivity NEG over hybrid network connectivity, like VPN or Cloud Interconnect. For more information, see Set up a regional internal proxy Network Load Balancer with hybrid connectivity and Private Service Connect deployment patterns. Note: Depending on your requirements, the APIs of the on-premises backends can be exposed through Apigee Hybrid, a third-party API gateway or proxy, or a load balancer. Expose remote services using Private Service Connect Use the Private Service Connect option to expose remote services for the following scenarios: You aren't using an API platform or you want to avoid connecting your entire VPC network directly to an external environment for the following reasons: You have security restrictions or compliance requirements. You have an IP address range overlap, such as in a merger and acquisition scenario. To enable secure uni-directional communications between clients, applications, and services across the environments even when you have a short deadline. You might need to provide connectivity to multiple consumer VPCs through a service-producer VPC (transit VPC) to offer highly scalable multi-tenant or single-tenant service models and to reach published services in other environments. Using Private Service Connect for applications that are consumed as APIs provides an internal IP address for the published applications, enabling secure access within the private network across regions and over hybrid connectivity.
This abstraction facilitates the integration of resources from diverse clouds and on-premises environments over a hybrid and multicloud connectivity model. You can accelerate application integration and securely expose applications that reside in an on-premises environment, or another cloud environment, by using Private Service Connect to publish the service with fine-grained access. In this case, you can use the following option: A service attachment that references a regional internal proxy Network Load Balancer or an internal Application Load Balancer. The load balancer uses a hybrid network endpoint group (hybrid connectivity NEG) in a producer VPC that acts in this design as a transit VPC. In the preceding diagram, the workloads in the VPC network of your application can reach the hybrid services running in your on-premises environment, or in other cloud environments, through the Private Service Connect endpoint, as illustrated in the following diagram. This design option for uni-directional communication provides an alternative to peering with a transit VPC. As part of the design in the preceding diagram, multiple frontends, backends, or endpoints can connect to the same service attachment, which lets multiple VPC networks or multiple consumers access the same service. As illustrated in the following diagram, you can make the application accessible to multiple VPCs. This accessibility can help in multi-tenant service scenarios where your service is consumed by multiple consumer VPCs even if their IP address ranges overlap. IP address overlap is one of the most common issues when integrating applications that reside in different environments. The Private Service Connect connection in the following diagram helps to avoid the IP address overlap issue. It does so without requiring you to provision or manage any additional networking components, like Cloud NAT or an NVA, to perform the IP address translation. For an example configuration, see Publish a hybrid service by using Private Service Connect. The design has the following advantages: Avoids potential shared scaling dependencies and complex manageability at scale. Improves security by providing fine-grained connectivity control. Reduces IP address coordination between the producer and consumer of the service and the remote external environment. The design approach in the preceding diagram can expand at later stages to integrate Apigee as the API platform by using the networking design options discussed earlier, including the Private Service Connect option. You can make the Private Service Connect endpoint accessible from other regions by using Private Service Connect global access. The client connecting to the Private Service Connect endpoint can be in the same region as the endpoint or in a different region. This approach might be used to provide high availability across services hosted in multiple regions, or to access services available in a single region from other regions. When a Private Service Connect endpoint is accessed by resources hosted in other regions, inter-regional outbound charges apply to the traffic destined to endpoints with global access. Note: To achieve distributed health checks and to facilitate connecting multiple VPCs to on-premises environments over multiple hybrid connections, chain an internal Application Load Balancer with an external Application Load Balancer. For more information, see Explicit Chaining of Google Cloud L7 Load Balancers with PSC.
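Because overlap-free RFC 1918 address space is a prerequisite for the routed designs in this pattern, while the Private Service Connect option tolerates overlapping consumer ranges, it can help to check your address plan before choosing between them. The following is a minimal sketch that uses only the Python standard library; the environment names and CIDR ranges are hypothetical examples.

# Sketch: detect RFC 1918 range overlap across environments before choosing
# between plain routed hybrid connectivity and a Private Service Connect design.
# Environment names and CIDR ranges are hypothetical examples.
import ipaddress
from itertools import combinations

environments = {
    "gcp-app-vpc": ["10.10.0.0/16"],
    "on-premises": ["10.20.0.0/16", "192.168.0.0/20"],
    "other-cloud": ["10.10.64.0/18"],   # overlaps with gcp-app-vpc
}

def overlapping_pairs(envs):
    """Yield (env_a, net_a, env_b, net_b) for every overlapping range pair."""
    flat = [(env, ipaddress.ip_network(cidr))
            for env, cidrs in envs.items() for cidr in cidrs]
    for (env_a, net_a), (env_b, net_b) in combinations(flat, 2):
        if env_a != env_b and net_a.overlaps(net_b):
            yield env_a, net_a, env_b, net_b

conflicts = list(overlapping_pairs(environments))
if conflicts:
    for env_a, net_a, env_b, net_b in conflicts:
        print(f"Overlap: {env_a} {net_a} <-> {env_b} {net_b}")
    print("Consider Private Service Connect (or NAT) instead of relying on routed connectivity.")
else:
    print("All environments use overlap-free address space.")

If the check reports overlaps that you can't renumber, the Private Service Connect design described above, or IP address translation at the boundary, is typically the more practical option.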
Best practices Consider Apigee or Apigee Hybrid as your API platform solution, which offers several benefits. It provides a proxy layer, and an abstraction or facade, for your backend service APIs combined with security capabilities, rate limiting, quotas, and analytics. Where applicable to your requirements and architecture, use Apigee Adapter for Envoy with an Apigee Hybrid deployment on Kubernetes. VPC and project design in Google Cloud should be driven by your resource hierarchy and your secure communication model requirements. When you use API gateways to expose APIs, you should also use an IP address allowlist. An allowlist limits communications to the specific IP address sources and destinations of the API consumers and API gateways that might be hosted in different environments. Use VPC firewall rules or firewall policies to control access to Private Service Connect resources through the Private Service Connect endpoint. If an application is exposed externally through an application load balancer, consider using Google Cloud Armor as an extra layer of security to protect against DDoS and application-layer security threats. If instances require internet access, use Cloud NAT in the application (consumer) VPC to allow workloads to access the internet. Doing so lets you avoid assigning VM instances with external public IP addresses in systems that are deployed behind an API gateway or a load balancer. For outbound web traffic, you can use Google Cloud Secure Web Proxy. The proxy offers several benefits. Review the general best practices for hybrid and multicloud networking patterns. Gated ingress The architecture of the gated ingress pattern is based on exposing select APIs of workloads running in Google Cloud to the private computing environment without exposing them to the public internet. This pattern is the counterpart to the gated egress pattern and is well suited for edge hybrid, tiered hybrid, and partitioned multicloud scenarios. As with the gated egress pattern, you can facilitate this limited exposure through an API gateway or load balancer that serves as a facade for existing workloads or services. Doing so makes the exposed APIs accessible from private computing environments, on-premises environments, or other cloud environments, as follows: Workloads that you deploy in the private computing environment or other cloud environments are able to communicate with the API gateway or load balancer by using internal IP addresses. Other systems deployed in Google Cloud can't be reached. Communication from Google Cloud to the private computing environment or to other cloud environments isn't allowed. Traffic is only initiated from the private environment or other cloud environments to the APIs in Google Cloud. Architecture The following diagram shows a reference architecture that meets the requirements of the gated ingress pattern. The description of the architecture in the preceding diagram is as follows: On the Google Cloud side, you deploy workloads into an application VPC (or multiple VPCs). The Google Cloud environment network extends to other computing environments (on-premises or on another cloud) by using hybrid or multicloud network connectivity to facilitate the communication between environments. Optionally, you can use a transit VPC to accomplish the following: Provide additional perimeter security layers to allow access to specific APIs outside of your application VPC. Route traffic to the IP addresses of the APIs.
You can create VPC firewall rules to prevent some sources from accessing certain APIs through an endpoint. Inspect Layer 7 traffic at the transit VPC by integrating a network virtual appliance (NVA). Access APIs through an API gateway or a load balancer (proxy or application load balancer) to provide a proxy layer, and an abstraction layer or facade for your service APIs. If you need to distribute traffic across multiple API gateway instances, you could use an internal passthrough Network Load Balancer. Provide limited and fine-grained access to a published service through a Private Service Connect endpoint, by using a load balancer with Private Service Connect to expose the application or service. All environments should use an overlap-free RFC 1918 IP address space. The following diagram illustrates the design of this pattern using Apigee as the API platform. In the preceding diagram, using Apigee as the API platform provides the following features and capabilities to enable the gated ingress pattern: Gateway or proxy functionality Security capabilities Rate limiting Analytics In the design: The northbound networking connectivity (for traffic coming from other environments) passes through a Private Service Connect endpoint in your application VPC that's associated with the Apigee VPC. At the application VPC, an internal load balancer is used to expose the application APIs through a Private Service Connect endpoint presented in the Apigee VPC. For more information, see Architecture with VPC peering disabled. Configure firewall rules and traffic filtering at the application VPC. Doing so provides fine-grained and controlled access. It also helps stop systems from directly reaching your applications without passing through the Private Service Connect endpoint and API gateway. Also, you can restrict the advertisement of the internal IP address subnet of the backend workload in the application VPC to the on-premises network to avoid direct reachability without passing through the Private Service Connect endpoint and the API gateway. Certain security requirements might call for perimeter security inspection outside the application VPC, including hybrid connectivity traffic. In such cases, you can incorporate a transit VPC to implement additional security layers. These layers, like NVAs with next generation firewall (NGFW) capabilities and multiple network interfaces, or Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS), perform deep packet inspection outside of your application VPC, as illustrated in the following diagram: As illustrated in the preceding diagram: The northbound networking connectivity (for traffic coming from other environments) passes through a separate transit VPC toward the Private Service Connect endpoint in the transit VPC that's associated with the Apigee VPC. At the application VPC, an internal load balancer (ILB in the diagram) is used to expose the application through a Private Service Connect endpoint in the Apigee VPC. You can provision several endpoints in the same VPC network, as shown in the following diagram. To cover different use cases, you can control the different possible network paths using Cloud Router and VPC firewall rules. For example, if you're connecting your on-premises network to Google Cloud using multiple hybrid networking connections, you could send some traffic from on-premises to specific Google APIs or published services over one connection and the rest over another connection.
Also, you can use Private Service Connect global access to provide failover options. Variations The gated ingress architecture pattern can be combined with other approaches to meet different design requirements, while still considering the communication requirements of the pattern. The pattern offers the following options: Access Google APIs from other environments Expose application backends to other environments using Private Service Connect Use a hub and spoke architecture to expose application backends to other environments Access Google APIs from other environments For scenarios requiring access to Google services, like Cloud Storage or BigQuery, without sending traffic over the public internet, Private Service Connect offers a solution. As shown in the following diagram, it enables reachability to the supported Google APIs and services (including Google Maps, Google Ads, and Google Cloud) from on-premises or other cloud environments through a hybrid network connection using the IP address of the Private Service Connect endpoint. For more information about accessing Google APIs through Private Service Connect endpoints, see About accessing Google APIs through endpoints. In the preceding diagram, your on-premises network must be connected to the transit (consumer) VPC network using either Cloud VPN tunnels or a Cloud Interconnect VLAN attachment. Google APIs can be accessed by using endpoints or backends. Endpoints let you target a bundle of Google APIs. Backends let you target a specific regional Google API. Note: Private Service Connect endpoints are registered with Service Directory for Google APIs where you can store, manage, and publish services. Expose application backends to other environments using Private Service Connect In specific scenarios, as highlighted by the tiered hybrid pattern, you might need to deploy backends in Google Cloud while maintaining frontends in private computing environments. While less common, this approach is applicable when dealing with heavyweight, monolithic frontends that might rely on legacy components. Or, more commonly, when managing distributed applications across multiple environments, including on-premises and other clouds, that require connectivity to backends hosted in Google Cloud over a hybrid network. In such an architecture, you can use a local API gateway or load balancer in the private on-premises environment, or other cloud environments, to directly expose the application frontend to the public internet. Using Private Service Connect in Google Cloud facilitates private connectivity to the backends that are exposed through a Private Service Connect endpoint, ideally using predefined APIs, as illustrated in the following diagram: The design in the preceding diagram uses an Apigee Hybrid deployment consisting of a management plane in Google Cloud and a runtime plane hosted in your other environment. You can install and manage the runtime plane on a distributed API gateway on one of the supported Kubernetes platforms in your on-premises environment or in other cloud environments. Based on your requirements for distributed workloads across Google Cloud and other environments, you can use Apigee on Google Cloud with Apigee Hybrid. For more information, see Distributed API gateways. Use a hub and spoke architecture to expose application backends to other environments Exposing APIs from application backends hosted in Google Cloud across different VPC networks might be required in certain scenarios. 
As illustrated in the following diagram, a hub VPC serves as a central point of interconnection for the various VPCs (spokes), enabling secure communication over private hybrid connectivity. Optionally, local API gateway capabilities in other environments, such as Apigee Hybrid, can be used to terminate client requests locally where the application frontend is hosted. As illustrated in the preceding diagram: To provide additional NGFW Layer 7 inspection abilities, the NVA with NGFW capabilities is optionally integrated with the design. You might require these abilities to comply with specific security requirements and the security policy standards of your organization. This design assumes that spoke VPCs don't require direct VPC to VPC communication. If spoke-to-spoke communication is required, you can use the NVA to facilitate such communication. If you have different backends in different VPCs, you can use Private Service Connect to expose these backends to the Apigee VPC. If VPC peering is used for the northbound and southbound connectivity between spoke VPCs and hub VPC, you need to consider the transitivity limitation of VPC networking over VPC peering. To overcome this limitation, you can use any of the following options: To interconnect the VPCs, use an NVA. Where applicable, consider the Private Service Connect model. To establish connectivity between the Apigee VPC and backends that are located in other Google Cloud projects in the same organization without additional networking components, use Shared VPC. If NVAs are required for traffic inspection—including traffic from your other environments—the hybrid connectivity to on-premises or other cloud environments should be terminated on the hybrid-transit VPC. If the design doesn't include the NVA, you can terminate the hybrid connectivity at the hub VPC. If certain load-balancing functionalities or security capabilities are required, like adding Google Cloud Armor DDoS protection or WAF, you can optionally deploy an external Application Load Balancer at the perimeter through an external VPC before routing external client requests to the backends. Best practices For situations where client requests from the internet need to be received locally by a frontend hosted in a private on-premises or other cloud environment, consider using Apigee Hybrid as an API gateway solution. This approach also facilitates a seamless migration of the solution to a completely Google Cloud-hosted environment while maintaining the consistency of the API platform (Apigee). Use Apigee Adapter for Envoy with an Apigee Hybrid deployment with Kubernetes architecture where applicable to your requirements and the architecture. The design of VPCs and projects in Google Cloud should follow the resource hierarchy and secure communication model requirements, as described in this guide. Incorporating a transit VPC into this design provides the flexibility to provision additional perimeter security measures and hybrid connectivity outside the workload VPC. Use Private Service Connect to access Google APIs and services from on-premises environments or other cloud environments using the internal IP address of the endpoint over a hybrid connectivity network. For more information, see Access the endpoint from on-premises hosts. To help protect Google Cloud services in your projects and help mitigate the risk of data exfiltration, use VPC Service Controls to specify service perimeters at the project or VPC network level. 
When needed, you can extend service perimeters to a hybrid environment over a VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls. Use VPC firewall rules or firewall policies to control network-level access to Private Service Connect resources through the Private Service Connect endpoint. For example, outbound firewall rules at the application (consumer) VPC can restrict access from VM instances to the IP address or subnet of your endpoints. For more information about VPC firewall rules in general, see VPC firewall rules. When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by your NVA vendor. To strengthen perimeter security and secure your API gateway that's deployed in the respective environment, you can optionally implement load balancing and web application firewall mechanisms in your other computing environment (hybrid or other cloud). Implement these options at the perimeter network that's directly connected to the internet. If instances require internet access, use Cloud NAT in the application VPC to allow workloads to access the internet. Doing so lets you avoid assigning VM instances with external public IP addresses in systems that are deployed behind an API gateway or a load balancer. For outbound web traffic, use Secure Web Proxy. The proxy offers several benefits. Review the general best practices for hybrid and multicloud networking patterns. Gated egress and gated ingress The gated egress and gated ingress pattern uses a combination of gated egress and gated ingress for scenarios that demand bidirectional usage of selected APIs between workloads. Workloads can run in Google Cloud, in private on-premises environments, or in other cloud environments. In this pattern, you can use API gateways, Private Service Connect endpoints, or load balancers to expose specific APIs and optionally provide authentication, authorization, and API call audits. The key distinction between this pattern and the meshed pattern lies in its application to scenarios that solely require bidirectional API usage or communication with specific IP address sources and destinations—for example, an application published through a Private Service Connect endpoint. Because communication is restricted to the exposed APIs or specific IP addresses, the networks across the environments don't need to align in your design. Common applicable scenarios include, but aren't limited to, the following: Mergers and acquisitions. Application integrations with partners. Integrations between applications and services of an organization with different organizational units that manage their own applications and host them in different environments. The communication works as follows: Workloads that you deploy in Google Cloud can communicate with the API gateway (or specific destination IP addresses) by using internal IP addresses. Other systems deployed in the private computing environment can't be reached. Conversely, workloads that you deploy in other computing environments can communicate with the Google Cloud-side API gateway (or a specific published endpoint IP address) by using internal IP addresses. Other systems deployed in Google Cloud can't be reached. 
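As a concrete illustration of this communication model, the following sketch shows how a workload on one side might call an API that the other side publishes only at an internal address (for example, behind a Private Service Connect endpoint or an internal load balancer), applying the OAuth 2.0 over TLS and mutual TLS mechanisms mentioned earlier in this guide. It uses the Python requests library; the URLs, certificate paths, and client credentials are hypothetical placeholders rather than values from any specific deployment.

# Sketch: a workload calling a partner API that is exposed only at an internal
# address (for example, behind a Private Service Connect endpoint or an
# internal load balancer), using OAuth 2.0 over TLS plus mutual TLS.
# The URLs, credentials, and file paths are hypothetical placeholders.
import requests

TOKEN_URL = "https://auth.partner.internal.example/oauth2/token"
API_URL = "https://orders.partner.internal.example/v1/orders"  # resolves to an internal IP
CLIENT_CERT = ("/etc/ssl/client/workload.crt", "/etc/ssl/client/workload.key")
CA_BUNDLE = "/etc/ssl/ca/partner-ca.pem"

def get_access_token(client_id: str, client_secret: str) -> str:
    """Fetch an OAuth 2.0 access token using the client credentials grant."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials",
              "client_id": client_id,
              "client_secret": client_secret},
        cert=CLIENT_CERT,   # mutual TLS: present the workload's client certificate
        verify=CA_BUNDLE,   # trust only the partner's private CA
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def create_order(token: str, payload: dict) -> dict:
    """Call the gated API with the bearer token over the same mTLS channel."""
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        cert=CLIENT_CERT,
        verify=CA_BUNDLE,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = get_access_token("workload-client-id", "workload-client-secret")
    print(create_order(token, {"sku": "A-100", "quantity": 3}))

The same client logic applies in both directions of the pattern; only the internal addresses that each side exposes and allowlists differ.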
Architecture The following diagram shows a reference architecture for the gated egress and gated ingress pattern: The design approach in the preceding diagram has the following elements: On the Google Cloud side, you deploy workloads in a VPC (or shared VPC) without exposing them directly to the internet. The Google Cloud environment network is extended to other computing environments. That environment can be on-premises or on another cloud. To extend the environment, use suitable hybrid and multicloud connectivity to facilitate the communication between environments by using internal IP addresses. Optionally, by enabling access to specific target IP addresses, you can use a transit VPC to help add a perimeter security layer outside of your application VPC. You can use Cloud Next Generation Firewall or network virtual appliances (NVAs) with next generation firewalls (NGFWs) at the transit VPC to inspect traffic and to allow or prohibit access to certain APIs from specific sources before reaching your application VPC. APIs should be accessed through an API gateway or a load balancer to provide a proxy layer, and an abstraction or facade for your service APIs. For applications consumed as APIs, you can also use Private Service Connect to provide an internal IP address for the published application. All environments use overlap-free RFC 1918 IP address space. A common application of this pattern involves deploying application backends (or a subset of application backends) in Google Cloud while hosting other backend and frontend components in on-premises environments or in other clouds (tiered hybrid pattern or partitioned multicloud pattern). As applications evolve and migrate to the cloud, dependencies and preferences for specific cloud services often emerge. Sometimes these dependencies and preferences lead to the distribution of applications and backends across different cloud providers. Also, some applications might be built with a combination of resources and services distributed across on-premises environments and multiple cloud environments. For distributed applications, the capabilities of external Cloud Load Balancing, combined with hybrid and multicloud connectivity, can be used to terminate user requests and route them to frontends or backends in other environments. This routing occurs over a hybrid network connection, as illustrated in the following diagram. This integration enables the gradual distribution of application components across different environments. Requests from the frontend to backend services hosted in Google Cloud communicate securely over the established hybrid network connection facilitated by an internal load balancer (ILB in the diagram). Using the Google Cloud design in the preceding diagram helps with the following: Facilitates two-way communication between Google Cloud, on-premises, and other cloud environments using predefined APIs on both sides that align with the communication model of this pattern. To provide global frontends for internet-facing applications with distributed application components (frontends or backends), and to accomplish the following goals, you can use the advanced load balancing and security capabilities of Google Cloud distributed at points of presence (PoPs): Reduce capital expenses and simplify operations by using serverless managed services. Optimize connections to application backends globally for speed and latency.
Google Cloud Cross-Cloud Network enables multicloud communication between application components over optimal private connections. Cache high-demand static content and improve application performance for applications using global Cloud Load Balancing by providing access to Cloud CDN. Secure the global frontends of the internet-facing applications by using Google Cloud Armor capabilities that provide globally distributed web application firewall (WAF) and DDoS mitigation services. Optionally, you can incorporate Private Service Connect into your design. Doing so enables private, fine-grained access to Google Cloud service APIs or your published services from other environments without traversing the public internet. Variations The gated egress and gated ingress architecture patterns can be combined with other approaches to meet different design requirements, while still considering the communication requirements of this pattern. The patterns offer the following options: Distributed API gateways Bidirectional API communication using Private Service Connect Bidirectional communication using Private Service Connect endpoints and interfaces Distributed API gateways In scenarios like the one based on the partitioned multicloud pattern, applications (or application components) can be built in different cloud environments—including a private on-premises environment. The common requirement is to route client requests for the application frontend directly to the environment where the application (or the frontend component) is hosted. This kind of communication requires a local load balancer or an API gateway. These applications and their components might also require specific API platform capabilities for integration. The following diagram illustrates how Apigee and Apigee Hybrid are designed to address such requirements with a localized API gateway in each environment. API platform management is centralized in Google Cloud. This design helps to enforce strict access control measures where only pre-approved IP addresses (target and destination APIs or Private Service Connect endpoint IP addresses) can communicate between Google Cloud and the other environments. The following list describes the two distinct communication paths in the preceding diagram that use the Apigee API gateway: Client requests arrive at the application frontend directly in the environment that hosts the application (or the frontend component). API gateways and proxies within each environment handle client and application API requests in different directions across multiple environments. The API gateway functionality in Google Cloud (Apigee) exposes the application (frontend or backend) components that are hosted in Google Cloud. The API gateway functionality in another environment (Apigee Hybrid) exposes the application frontend (or backend) components that are hosted in that environment. Optionally, you can consider using a transit VPC. A transit VPC can provide flexibility to separate concerns and to perform security inspection and hybrid connectivity in a separate VPC network. From an IP address reachability standpoint, a transit VPC (where the hybrid connectivity is attached) facilitates the following requirements to maintain end-to-end reachability: The IP addresses for target APIs need to be advertised to the other environments where clients or requesters are hosted.
The IP addresses for the hosts that need to communicate with the target APIs have to be advertised to the environment where the target API resides—for example, the IP addresses of the API requester (the client). The exception is when communication occurs through a load balancer, proxy, Private Service Connect endpoint, or NAT instance. To extend connectivity to the remote environment, this design uses direct VPC peering with the custom route exchange capability. The design lets specific API requests that originate from workloads hosted within the Google Cloud application VPC route through the transit VPC. Alternatively, you can use a Private Service Connect endpoint in the application VPC that's associated with a load balancer with a hybrid network endpoint group backend in the transit VPC. That setup is described in the next section: Bidirectional API communication using Private Service Connect. Bidirectional API communication using Private Service Connect Sometimes, enterprises might not need to use an API gateway (like Apigee) immediately, or might want to add it later. However, there might be business requirements to enable communication and integration between certain applications in different environments. For example, if your company acquired another company, you might need to expose certain applications to that company. They might need to expose applications to your company. Each company might have its own workloads hosted in different environments (Google Cloud, on-premises, or in other clouds), and must avoid IP address overlap. In such cases, you can use Private Service Connect to facilitate effective communication. For applications consumed as APIs, you can also use Private Service Connect to provide a private address for the published applications, enabling secure access within the private network across regions and over hybrid connectivity. This abstraction facilitates the integration of resources from diverse clouds and on-premises environments over a hybrid and multicloud connectivity model. It also enables the assembly of applications across multicloud and on-premises environments. This can satisfy different communication requirements, like integrating secure applications where an API gateway isn't used or isn't planned to be used. By using Private Service Connect with Cloud Load Balancing, as shown in the following diagram, you can achieve two distinct communication paths. Each path is initiated from a different direction for a separate connectivity purpose, ideally through API calls. All the design considerations and recommendations of Private Service Connect discussed in this guide apply to this design. If additional Layer 7 inspection is required, you can integrate NVAs with this design (at the transit VPC). This design can be used with or without API gateways. The two connectivity paths depicted in the preceding diagram represent independent connections and don't illustrate two-way communication of a single connection or flow. Bidirectional communication using Private Service Connect endpoints and interfaces As discussed in the gated ingress pattern, one of the options to enable client-service communication is by using a Private Service Connect endpoint to expose a service in a producer VPC to a consumer VPC. That connectivity can be extended to an on-premises environment or even another cloud provider environment over hybrid connectivity. However, in some scenarios, the hosted service can also require private communication.
To access a certain service, like retrieving data from data sources that can be hosted within the consumer VPC or outside it, this private communication can be between the application (producer) VPC and a remote environment, such as an on-premises environment. In such a scenario, a Private Service Connect interface enables a service producer VM instance to access a consumer's network. It does so by sharing a network interface, while still maintaining the separation of producer and consumer roles. With this network interface in the consumer VPC, the application VM can access consumer resources as if they resided locally in the producer VPC. A Private Service Connect interface is a network interface attached to the consumer (transit) VPC. It's possible to reach external destinations that are reachable from the consumer (transit) VPC where the Private Service Connect interface is attached. Therefore, this connection can be extended over hybrid connectivity to an external environment, such as an on-premises environment, as illustrated in the following diagram: If the consumer VPC is an external organization or entity, like a third-party organization, typically you won't have the ability to secure the communication to the Private Service Connect interface in the consumer VPC. In such a scenario, you can define security policies in the guest OS of the Private Service Connect interface VM. For more information, see Configure security for Private Service Connect interfaces. Or, if this approach doesn't comply with the security compliance requirements or standards of your organization, you might consider an alternative approach. Best practices For situations where client requests from the internet need to be received locally by a frontend hosted in a private on-premises or other cloud environment, consider using Apigee Hybrid as an API gateway solution. This approach also facilitates a migration of the solution to a fully Google Cloud-hosted environment while maintaining the consistency of the API platform (Apigee). To minimize latency and optimize costs for high volumes of outbound data transfers to your other environments when those environments are in long-term or permanent hybrid or multicloud setups, consider the following: Use Cloud Interconnect or Cross-Cloud Interconnect. To terminate user connections at the targeted frontend in the appropriate environment, use Apigee Hybrid. Where applicable to your requirements and the architecture, use Apigee Adapter for Envoy with an Apigee Hybrid deployment on Kubernetes. Before designing the connectivity and routing paths, you first need to identify what traffic or API requests need to be directed to a local or remote API gateway, along with the source and destination environments. Use VPC Service Controls to protect Google Cloud services in your projects and to mitigate the risk of data exfiltration, by specifying service perimeters at the project or VPC network level. You can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls. Use Virtual Private Cloud (VPC) firewall rules or firewall policies to control network-level access to Private Service Connect resources through the Private Service Connect endpoint. For example, outbound firewall rules at the application (consumer) VPC can restrict access from VM instances to the IP address or subnet of your endpoints.
When using a Private Service Connect interface, you must protect the communication to the interface by configuring security for the Private Service Connect interface. If a workload in a private subnet requires internet access, use Cloud NAT to avoid assigning an external IP address to the workload and exposing it to the public internet. For outbound web traffic, use Secure Web Proxy. The proxy offers several benefits. Review the general best practices for hybrid and multicloud networking patterns. Handover patterns With the handover pattern, the architecture is based on using Google Cloud-provided storage services to connect a private computing environment to projects in Google Cloud. This pattern applies primarily to setups that follow the analytics hybrid and multicloud architecture pattern, where: Workloads that are running in a private computing environment or in another cloud upload data to shared storage locations. Depending on use cases, uploads might happen in bulk or in smaller increments. Google Cloud-hosted workloads or other Google services (data analytics and artificial intelligence services, for example) consume data from the shared storage locations and process it in a streaming or batch fashion. Architecture The following diagram shows a reference architecture for the handover pattern. The preceding architecture diagram shows the following workflows: On the Google Cloud side, you deploy workloads into an application VPC. These workloads can include data processing, analytics, and analytics-related frontend applications. To securely expose frontend applications to users, you can use Cloud Load Balancing or API Gateway. A set of Cloud Storage buckets or Pub/Sub topics receives the data that's uploaded from the private computing environment and makes it available for further processing by workloads deployed in Google Cloud. Using Identity and Access Management (IAM) policies, you can restrict access to trusted workloads. Use VPC Service Controls to restrict access to services and to minimize unwarranted data exfiltration risks from Google Cloud services. In this architecture, communication with Cloud Storage buckets, or Pub/Sub, is conducted over public networks, or through private connectivity using VPN, Cloud Interconnect, or Cross-Cloud Interconnect. Typically, the decision on how to connect depends on several aspects, such as the following: Expected traffic volume Whether it's a temporary or permanent setup Security and compliance requirements Variation The design options outlined in the gated ingress pattern, which uses Private Service Connect endpoints for Google APIs, can also be applied to this pattern. Specifically, this option provides access to Cloud Storage, BigQuery, and other Google service APIs. This approach requires private IP addressing over a hybrid and multicloud network connection such as VPN, Cloud Interconnect, or Cross-Cloud Interconnect. Best practices Lock down access to Cloud Storage buckets and Pub/Sub topics. When applicable, use cloud-first, integrated data movement solutions like the Google Cloud suite of solutions. To meet your use case needs, these solutions are designed to efficiently move, integrate, and transform data. Assess the different factors that influence the data transfer options, such as cost, expected transfer time, and security. For more information, see Evaluating your transfer options.
To minimize latency and prevent high-volume data transfer and movement over the public internet, consider using Cloud Interconnect or Cross-Cloud Interconnect, including accessing Private Service Connect endpoints within your Virtual Private Cloud for Google APIs. To protect Google Cloud services in your projects and to mitigate the risk of data exfiltration, use VPC Service Controls. These service controls can specify service perimeters at the project or VPC network level. You can extend service perimeters to a hybrid environment over an authorized VPN or Cloud Interconnect. For more information about the benefits of service perimeters, see Overview of VPC Service Controls. Communicate with publicly published data analytics workloads that are hosted on VM instances through an API gateway, a load balancer, or a virtual network appliance. Use one of these communication methods for added security and to avoid making these instances directly reachable from the internet. If internet access is required, Cloud NAT can be used in the same VPC to handle outbound traffic from the instances to the public internet. Review the general best practices for hybrid and multicloud networking topologies. General best practices When designing and onboarding cloud identities, resource hierarchy, and landing zone networks, consider the design recommendations in Landing zone design in Google Cloud and the Google Cloud security best practices covered in the enterprise foundations blueprint. Validate your selected design against the following documents: Best practices and reference architectures for VPC design Decide a resource hierarchy for your Google Cloud landing zone Google Cloud Architecture Framework: Security, privacy, and compliance Also, consider the following general best practices: When choosing a hybrid or multicloud network connectivity option, consider business and application requirements such as SLAs, performance, security, cost, reliability, and bandwidth. For more information, see Choosing a Network Connectivity product and Patterns for connecting other cloud service providers with Google Cloud. Use shared VPCs on Google Cloud instead of multiple VPCs when appropriate and aligned with your resource hierarchy design requirements. For more information, see Deciding whether to create multiple VPC networks. Follow the best practices for planning accounts and organizations. Where applicable, establish a common identity between environments so that systems can authenticate securely across environment boundaries. To securely expose applications to corporate users in a hybrid setup, and to choose the approach that best fits your requirements, you should follow the recommended ways to integrate Google Cloud with your identity management system. Also, see Patterns for authenticating workforce users in a hybrid environment. When designing your on-premises and cloud environments, consider IPv6 addressing early on, and account for which services support it. For more information, see An Introduction to IPv6 on Google Cloud. It summarizes the services that were supported when the blog was written. When designing, deploying, and managing your VPC firewall rules, you can: Use service-account-based filtering over network-tag-based filtering if you need strict control over how firewall rules are applied to VMs. Use firewall policies when you group several firewall rules, so that you can update them all at once. You can also make the policy hierarchical. 
For hierarchical firewall policy specifications and details, see Hierarchical firewall policies. Use geo-location objects in firewall policies when you need to filter external IPv4 and external IPv6 traffic based on specific geographic locations or regions. Use Threat Intelligence for firewall policy rules if you need to secure your network by allowing or blocking traffic based on Threat Intelligence data, such as known malicious IP addresses or public cloud IP address ranges. For example, you can allow traffic from specific public cloud IP address ranges if your services need to communicate with that public cloud only. For more information, see Best practices for firewall rules. You should always design your cloud and network security using a multilayer security approach by considering additional security layers, like the following: Google Cloud Armor Cloud Intrusion Detection System Cloud Next Generation Firewall IPS Threat Intelligence for firewall policy rules These additional layers can help you filter, inspect, and monitor a wide variety of threats at the network and application layers for analysis and prevention. When deciding where DNS resolution should be performed in a hybrid setup, we recommend using two authoritative DNS systems: one for your private Google Cloud environment and one for your on-premises resources, hosted by existing DNS servers in your on-premises environment. For more information, see Choose where DNS resolution is performed. Where possible, always expose applications through APIs using an API gateway or load balancer. We recommend that you consider an API platform like Apigee. Apigee acts as an abstraction or facade for your backend service APIs, combined with security capabilities, rate limiting, quotas, and analytics. An API platform (gateway or proxy) and Application Load Balancer aren't mutually exclusive. Sometimes, using both API gateways and load balancers together can provide a more robust and secure solution for managing and distributing API traffic at scale. Using Cloud Load Balancing with API gateways lets you accomplish the following: Deliver high-performing APIs with Apigee and Cloud CDN, to: Reduce latency Host APIs globally Increase availability for peak traffic seasons For more information, watch Delivering high-performing APIs with Apigee and Cloud CDN on YouTube. Implement advanced traffic management. Use Google Cloud Armor as a DDoS protection, WAF, and network security service to protect your APIs. Manage efficient load balancing across gateways in multiple regions. For more information, watch Securing APIs and Implementing multi-region failover with PSC and Apigee. To determine which Cloud Load Balancing product to use, you must first determine what traffic type your load balancers must handle. For more information, see Choose a load balancer. When Cloud Load Balancing is used, you should use its application capacity optimization abilities where applicable. Doing so can help you address some of the capacity challenges that can occur in globally distributed applications. For a deep dive on latency, see Optimize application latency with load balancing. While Cloud VPN encrypts traffic between environments, with Cloud Interconnect you need to use either MACsec or HA VPN over Cloud Interconnect to encrypt traffic in transit at the connectivity layer. For more information, see How can I encrypt my traffic over Cloud Interconnect. You can also consider service-layer encryption using TLS.
For more information, see Decide how to meet compliance requirements for encryption in transit. If you need more traffic volume over VPN hybrid connectivity than a single VPN tunnel can support, you can consider using the active/active HA VPN routing option. For long-term hybrid or multicloud setups with high outbound data transfer volumes, consider Cloud Interconnect or Cross-Cloud Interconnect. Those connectivity options help to optimize connectivity performance and might reduce outbound data transfer charges for traffic that meets certain conditions. For more information, see Cloud Interconnect pricing. When connecting to Google Cloud resources and trying to choose between Cloud Interconnect, Direct Peering, or Carrier Peering, we recommend using Cloud Interconnect, unless you need to access Google Workspace applications. For more information, you can compare the features of Direct Peering with Cloud Interconnect and Carrier Peering with Cloud Interconnect. Allow enough IP address space from your existing RFC 1918 IP address space to accommodate your cloud-hosted systems. If you have technical restrictions that require you to keep your IP address range, you can: Use the same internal IP addresses for your on-premises workloads while migrating them to Google Cloud, using hybrid subnets. Provision and use your own public IPv4 addresses for Google Cloud resources using bring your own IP (BYOIP) to Google. If the design of your solution requires exposing a Google Cloud-based application to the public internet, consider the design recommendations discussed in Networking for internet-facing application delivery. Where applicable, use Private Service Connect endpoints to allow workloads in Google Cloud, on-premises, or in another cloud environment with hybrid connectivity to privately access Google APIs or published services, using internal IP addresses in a fine-grained fashion. When using Private Service Connect, you must control the following: Who can deploy Private Service Connect resources. Whether connections can be established between consumers and producers. Which network traffic is allowed to access those connections. For more information, see Private Service Connect security. To achieve a robust cloud setup in the context of hybrid and multicloud architecture: Perform a comprehensive assessment of the required levels of reliability of the different applications across environments. Doing so can help you meet your objectives for availability and resilience. Understand the reliability capabilities and design principles of your cloud provider. For more information, see Google Cloud infrastructure reliability. Cloud network visibility and monitoring are essential to maintain reliable communications. Network Intelligence Center provides a single console for managing network visibility, monitoring, and troubleshooting. Hybrid and multicloud secure networking architecture patterns: What's next Learn more about the common architecture patterns that you can realize by using the networking patterns discussed in this document. Learn how to approach hybrid and multicloud and how to choose suitable workloads. Learn more about Google Cloud Cross-Cloud Network, a global network platform that is open, secure, and optimized for applications and users across on-premises and other clouds. Design reliable infrastructure for your workloads in Google Cloud: design guidance to help protect your applications against failures at the resource, zone, and region level.
To learn more about designing highly available architectures in Google Cloud, check out patterns for resilient and scalable apps. Learn more about the possible connectivity options for connecting a GKE Enterprise cluster running in your on-premises or edge environment to the Google Cloud network, along with the impact of temporary disconnection from Google Cloud. Send feedback \ No newline at end of file diff --git a/View_the_guide_as_a_single_page.txt b/View_the_guide_as_a_single_page.txt new file mode 100644 index 0000000000000000000000000000000000000000..c78555df5712ef289d950bd0d1b363dfda8e4271 --- /dev/null +++ b/View_the_guide_as_a_single_page.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns/one-page-view +Date Scraped: 2025-02-23T11:49:53.737Z + +Content: +Home Docs Cloud Architecture Center Send feedback Build hybrid and multicloud architectures using Google Cloud Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-27 UTC This page provides a single-page view of all the pages in Build hybrid and multicloud architectures using Google Cloud. You can print this page, or you can save it in PDF format by using your browser's print function and choosing the Save as PDF option. This page does not provide a table of contents (ToC) pane on the right side. This architecture guide provides practical guidance on planning and architecting your hybrid and multicloud environments using Google Cloud. This document is the first of three documents in the set. It examines the opportunities and considerations associated with these architectures from a business and technology point of view. It also analyzes and discusses many proven hybrid and multicloud architecture patterns. The document set for hybrid and multicloud architecture patterns consists of these parts: Build hybrid and multicloud architectures: discusses planning a strategy for architecting a hybrid and multicloud setup with Google Cloud (this article). Hybrid and multicloud architecture patterns: discusses common architecture patterns to adopt as part of a hybrid and multicloud strategy. Hybrid and multicloud secure networking architecture patterns: discusses hybrid and multicloud networking architecture patterns from a networking perspective. You can read each of these architecture articles independently, but for the most benefit, we recommend reading them in sequence before making an architectural decision. The rapid pace of change in market demands has increased the requirements and expectations that are placed on enterprise IT, such as dynamic scale, increased performance for optimized user experience, and security. Many enterprise-level companies find it challenging to meet these demands and expectations using only traditional infrastructure and processes. IT departments are also under pressure to improve their cost effectiveness, making it difficult to justify additional capital investments in data centers and equipment. A hybrid cloud strategy that uses public cloud computing capabilities provides a pragmatic solution. By using the public cloud, you can extend the capacity and capabilities of your computing platforms without up-front capital investment costs. By adding one or more public cloud based solutions, like Google Cloud, to your existing infrastructure, you not only preserve your existing investments, but you also avoid committing yourself to a single cloud vendor.
Also, by using a hybrid strategy, you can modernize applications and processes incrementally as resources permit. To help you plan your architectural decisions and your hybrid or multicloud strategy, you should account for several potential challenges and design considerations. This multi-part architecture guide highlights both the potential benefits of various architectures and the potential challenges. Note: This guide doesn't discuss multicloud architectures that use SaaS products, like customer relationship management (CRM) systems or email, alongside a cloud service provider (CSP). Overview of hybrid cloud and multicloud Because workloads, infrastructure, and processes are unique to each enterprise, each hybrid cloud strategy must be adapted to your specific needs. The result is that the terms hybrid cloud and multicloud are sometimes used inconsistently. Within the context of this Google Cloud architecture guide, the term hybrid cloud describes an architecture in which workloads are deployed across multiple computing environments, one based in the public cloud, and at least one being private—for example, an on-premises data center or a colocation facility. The term multicloud describes an architecture that combines at least two public CSPs. As illustrated in the following diagram, sometimes this architecture includes a private computing environment (that might include the use of a private cloud component). That arrangement is called a hybrid and multicloud architecture. Note: The term hybrid and multicloud refers to any combination of the three architectures displayed in the preceding diagram. However, where possible this series attempts to be specific when discussing architecture patterns. Contributors Author: Marwan Al Shawi | Partner Customer Engineer Other contributors: Saud Albazei | Customer Engineer, Application Modernization; Anna Berenberg | Engineering Fellow; Marco Ferrari | Cloud Solutions Architect; Victor Moreno | Product Manager, Cloud Networking; Johannes Passing | Cloud Solutions Architect; Mark Schlagenhauf | Technical Writer, Networking; Daniel Strebel | EMEA Solution Lead, Application Modernization; Ammett Williams | Developer Relations Engineer Drivers, considerations, strategy, and approaches This document defines and discusses business objectives, drivers, and requirements, and how these factors can influence your design decisions when constructing hybrid and multicloud architectures. Objectives An organization can adopt a hybrid or multicloud architecture either as a permanent solution to meet specific business objectives, or as a temporary state to facilitate certain requirements, such as a migration to the cloud. Answering the following questions about your business is a good way to define your business requirements, and to establish specific expectations about how to achieve some or all of your business objectives. These questions focus on what's needed for your business, not how to achieve it technically. Which business goals are driving the decision to adopt a hybrid or multicloud architecture? What business and technical objectives is a hybrid or multicloud architecture going to help achieve? What business drivers influenced these objectives? What are the specific business requirements? In the context of hybrid and multicloud architectures, one business goal for an enterprise customer might be to expand online sales operations or markets from a single region to become one of the global leaders in their market segment.
One of the business objectives might be to start accepting purchase orders from users across the globe (or from specific regions) within six months. To support the previously mentioned business requirements and objectives, one potential primary technical objective is to expand the IT infrastructure and applications architecture of a company from an on-premises-only model to a hybrid architecture, using the global capabilities and services of public clouds. This objective should be specific and measurable, clearly defining the expansion scope in terms of target regions and timelines. Note: Sometimes business requirements are defined to satisfy certain business strategies. A business strategy can be defined as a long term plan to draw a path to achieve certain business objectives. In general, a hybrid or multicloud architecture is rarely a goal in itself, but rather a means of meeting technical objectives driven by certain business requirements. Therefore, choosing the right hybrid or multicloud architecture requires first clarifying these requirements. It's important to differentiate between the business objectives and technical objectives of your IT project. Your business objectives should focus on the goal and mission of your organization. Your technical objectives should focus on building a technological foundation that enables your organization to meet their business requirements and objectives. Business drivers influence the achievement of the business objective and goals. Therefore, clearly identifying the business drivers can help shape the business objectives or goals to be more relevant to market needs and trends. The following flowchart illustrates business drivers, goals, objectives, and requirements, and the technical objectives and requirements, and how all these factors relate to each other: Business and technical drivers Consider how your business drivers influence your technical objectives. Some common, influencing, business drivers when choosing a hybrid architecture include the following: Heeding laws and regulations about data sovereignty. Reducing capital expenditure (CAPEX) or general IT spending with the support of cloud financial management and cost optimization disciplines like FinOps. Cloud adoption can be driven by scenarios that help reduce CAPEX, like building a Disaster Recovery solution in a hybrid or multicloud architecture. Improving the user experience. Increasing flexibility and agility to respond to changing market demands. Improving transparency about costs and resource consumption. Consider your list of business drivers for adopting a hybrid or multicloud architecture together. Don't consider them in isolation. Your final decision should depend on the balance of your business priorities. After your organization realizes the benefits of the cloud, it might decide to fully migrate if there are no constraints—like costs or specific compliance requirements that require highly secure data to be hosted on-premises—that prevent it from doing so. Although adopting a single cloud provider can offer several benefits, such as reduced complexity, built-in integrations among services, and cost optimization options like committed use discounts, there are still some scenarios where a multicloud architecture can be beneficial for a business. 
The following are the common business drivers for adopting a multicloud architecture, along with the associated considerations for each driver: Heeding laws and regulations about data sovereignty: The most common scenario is when an organization is expanding its business to a new region or country and has to comply with new data-hosting regulations. If the existing used cloud service provider (CSP) has no local cloud region in that country, then for compliance purposes the common solution is to use another CSP that has a local cloud region in that country. Reducing costs: Cost reduction is often the most common business driver for adopting a technology or architecture. However, it's important to consider more than just the cost of services and potential pricing discounts when deciding whether to adopt a multicloud architecture. Account for the cost of building and operating a solution across multiple clouds, and any architecture constraints that might arise from existing systems. Sometimes, the potential challenges associated with a multicloud strategy might outweigh the benefits. A multicloud strategy might introduce additional costs later on. Common challenges associated with developing a multicloud strategy include the following: Increasing management complexity. Maintaining consistent security. Integrating software environments. Achieving consistent cross-cloud performance and reliability. Building a technical team with multicloud skills might be expensive and might require expanding the team, unless it's managed by a third party company. Managing the product pricing and management tools from each CSP. Without a solution that can provide unified cost visibility and dashboards, it can be difficult to efficiently manage costs across multiple environments. In such cases, you might use the Looker cloud cost management solution where applicable. For more information, see The strategy for effectively optimizing cloud billing cost management. Using the unique capabilities from each CSP: A multicloud architecture enables organizations to use additional new technologies to improve their own business capability offerings without being limited to the choices offered by a single cloud provider. To avoid any unforeseen risk or complexity, assess your potential challenges through a feasibility and effectiveness assessment, including the common challenges mentioned previously. Avoiding vendor lock-in: Sometimes, enterprises want to avoid being locked into a single cloud provider. A multicloud approach lets them choose the best solution for their business needs. However, the feasibility of this decision depends on several factors, such as the following: Technical dependencies Interoperability considerations between applications Costs of rebuilding or refactoring applications Technical skill sets Consistent security and manageability Enhancing the reliability and availability level of business critical applications: In some scenarios, a multicloud architecture can provide resilience to outages. For example, if one region of a CSP goes down, traffic can be routed to another CSP in the same region. This scenario assumes that both cloud providers support the required capabilities or services in that region. When data residency regulations in a specific country or region mandate the storage of sensitive data—like personally identifiable information (PII)—within that location, a multicloud approach can provide a compliant solution. 
By using two CSPs in one region to provide resilience to outages, you can facilitate compliance with regulatory restrictions while also addressing availability requirements. The following are some resilience considerations to assess before adopting a multicloud architecture: Data movement: How often might data move within your multicloud environment? Might data movement incur significant data transfer charges? (See the cost sketch later in this section.) Security and manageability: Are there any potential security or manageability complexities? Capability parity: Do both CSPs in the selected region offer the required capabilities and services? Technical skill set: Does the technical team have the skills required to manage a multicloud architecture? Consider all these factors when assessing the feasibility of using a multicloud architecture to improve resilience. When assessing the feasibility of a multicloud architecture, it's important to consider the long-term benefits. For example, deploying applications on multiple clouds for disaster recovery or increased reliability might increase costs in the short term, but could prevent outages or failures. Such failures can cause long-term financial and reputational damage. Therefore, it's important to weigh short-term costs against the long-term potential value of adopting multicloud. Also, the long-term potential value can vary based on the organization's size, technology scale, criticality of the technology solution, and the industry. Organizations that plan to successfully create a hybrid or multicloud environment should consider building a Cloud Center of Excellence (COE). A COE team can become the conduit for transforming the way that internal teams within your organization serve the business during your transition to the cloud. A COE is one of the ways that your organization can adopt the cloud faster, drive standardization, and maintain stronger alignment between your business strategy and your cloud investments. If the objective of the hybrid or multicloud architecture is to create a temporary state, common business drivers include: The need to reduce CAPEX or general IT spending for short-term projects. The ability to provision such infrastructure quickly to support a business use case. For example: This architecture might be used for limited-time projects. It could be used to support a project that requires a high-scale distributed infrastructure within a limited duration, while still using data that is on-premises. The need for multi-year digital transformation projects in which a large enterprise uses a hybrid architecture for some time to align its infrastructure and application modernization with its business priorities. The need to create a temporary hybrid, multicloud, or mixed architecture after a corporate merger. Doing so enables the new organization to define a strategy for the final state of its new cloud architecture. It's common for two merging companies to use different cloud providers, or for one company to use an on-premises private data center and the other to use the cloud. In either case, the first step in a merger or acquisition is almost always to integrate the IT systems.
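To put the data movement question from the resilience considerations above into rough numbers, the following sketch estimates the recurring cost of moving data between two cloud providers. The 20 TB volume and the per-GB egress rate are placeholder assumptions, not published pricing; substitute the figures from your providers' price lists before drawing conclusions.

```python
# Back-of-the-envelope check for the "Might data movement incur significant
# data transfer charges?" question. All figures are placeholder assumptions
# for illustration only.

def monthly_egress_cost(gb_moved_per_month: float, price_per_gb_usd: float) -> float:
    """Return the estimated monthly inter-cloud data transfer cost in USD."""
    return gb_moved_per_month * price_per_gb_usd

if __name__ == "__main__":
    # Hypothetical scenario: a reporting pipeline copies 20 TB per month
    # between two CSPs at an assumed $0.08/GB egress rate (not a quote).
    gb_per_month = 20 * 1024          # 20 TB expressed in GB
    assumed_rate = 0.08               # USD per GB, assumption
    monthly = monthly_egress_cost(gb_per_month, assumed_rate)
    print(f"Estimated egress cost: ${monthly:,.2f}/month, ${monthly * 12:,.2f}/year")
```

Even a rough estimate like this can show whether cross-cloud data movement is a marginal line item or a driver that reshapes the architecture decision.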
In some scenarios, it's necessary to translate technical drivers into business drivers and explain how they might positively or negatively affect the business. The following non-exhaustive list contains some common technical drivers for adopting a hybrid or multicloud architecture: Building out technological capabilities, such as advanced analytics services and AI, that might be difficult to implement in existing environments. Improving the quality and performance of service. Automating and accelerating application rollouts to achieve a faster time to market and shorter cycle times. Using high-level APIs and services to speed up development. Accelerating the provisioning of compute and storage resources. Using serverless services to build elastic services and capabilities faster and at scale. Using global infrastructure capabilities to build global or multi-regional architectures to satisfy certain technical requirements. The most common technical driver for both temporary hybrid and temporary multicloud architectures is to facilitate a migration from on-premises to the cloud or to an extra cloud. In general, cloud migrations almost always naturally lead to hybrid cloud setup. Enterprises often have to systematically transition applications and data based on their priorities. Similarly, a short-term setup might be intended to facilitate a proof of concept using advanced technologies available in the cloud for a certain period. Technical design decisions The identified technical objective and its drivers are key to making a business-driven architecture decision and to selecting one of the architecture patterns discussed in this guide. For example, to support a specific business goal, a company might set a business objective to build a research and development practice for three to six months. The main business requirement to support this objective might be to build the required technology environment for research and design with the lowest possible CAPEX. The technical objective in this case is to have a temporary hybrid cloud setup. The driver for this technical objective is to take advantage of the on-demand pricing model of the cloud to meet the previously mentioned business requirement. Another driver is influenced by the specific technology requirements that require a cloud-based solution with high compute capacity and quick setup. Use Google Cloud for hybrid and multicloud architectures Using open source solutions can make it easier to adopt a hybrid and multicloud approach, and to minimize vendor lock-in. However, you should consider the following potential complexities when planning an architecture: Interoperability Manageability Cost Security Building on a cloud platform that contributes to and supports open source might help to simplify your path to adopting hybrid and multicloud architectures. Open cloud empowers you with an approach that provides maximum choice and abstracts complexity. In addition, Google Cloud offers the flexibility to migrate, build, and optimize applications across hybrid and multicloud environments while minimizing vendor lock-in, using best-in-breed solutions, and meeting regulatory requirements. Google is also one of the largest contributors to the open source ecosystem and works with the open source community to develop well-known open source technologies like Kubernetes. When rolled out as a managed service, Kubernetes can help reduce complexities around hybrid and multicloud manageability and security. 
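To make the research-and-development example from the technical design decisions above more concrete, the following sketch compares an up-front hardware purchase against renting equivalent on-demand capacity for the life of the project. Every figure (hardware cost, VM rate, machine count, duration) is a hypothetical assumption, not pricing guidance.

```python
# Illustrative comparison of the two options for a temporary, three-to-six
# month research environment: buying hardware up front (CAPEX) versus
# renting equivalent on-demand capacity only while it's needed.

def on_premises_cost(hardware_capex: float, monthly_opex: float, months: int) -> float:
    """Total cost if the hardware must be purchased for the project."""
    return hardware_capex + monthly_opex * months

def on_demand_cost(vm_hourly_rate: float, vm_count: int,
                   hours_per_month: float, months: int) -> float:
    """Total cost of renting equivalent on-demand capacity for the project."""
    return vm_hourly_rate * vm_count * hours_per_month * months

if __name__ == "__main__":
    months = 6
    capex = on_premises_cost(hardware_capex=250_000, monthly_opex=4_000, months=months)
    cloud = on_demand_cost(vm_hourly_rate=2.50, vm_count=20,
                           hours_per_month=730, months=months)
    print(f"On-premises estimate for {months} months: ${capex:,.0f}")
    print(f"On-demand estimate for {months} months:   ${cloud:,.0f}")
```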
Plan a hybrid and multicloud strategy This document focuses on how to apply predefined business considerations when planning a hybrid and multicloud strategy. It expands on guidance in Drivers, considerations, strategy, and approaches. That article defines and analyzes the business considerations enterprises should account for when planning such a strategy. Clarify and agree on the vision and objectives Ultimately, the main purpose of a hybrid or multicloud strategy is to achieve the identified business requirements and the associated technical objectives for each business use case aligned with specific business objectives. To achieve this goal, create a well-structured plan that includes the following considerations: Which workloads should be run in each computing environment. Which application architecture patterns to apply across multiple workloads. Which technology and networking architecture pattern to use. Know that defining a plan that considers all workloads and requirements is difficult at best, especially in a complex IT environment. In addition, planning takes time and might lead to competing stakeholder visions. To avoid such situations, initially formulate a vision statement that addresses the following questions (at minimum): What's the targeted business use case to meet specific business objectives? Why is the current approach and computing environment insufficient to meet the business objectives? What are the primary technological aspects to optimize for by using the public cloud? Why and how is the new approach going to optimize and meet your business objectives? How long do you plan to use your hybrid or multicloud setup? Agreeing on the key business and technical objectives and drivers, then obtaining relevant stakeholder sign-off can provide a foundation for the next steps in the planning process. To effectively align your proposed solution with the overarching architectural vision of your organization, align with your team and the stakeholders responsible for leading and sponsoring this initiative. Identify and clarify other considerations While planning a hybrid or multicloud architecture, it's important to identify and agree about the architectural and operational constraints of your project. On the operations side, the following non-exhaustive list provides some requirements that might create some constraints to consider when planning your architecture: Managing and configuring multiple clouds separately versus building a holistic model to manage and secure the different cloud environments. Ensuring consistent authentication, authorization, auditing, and policies across environments. Using consistent tooling and processes across environments to provide a holistic view into security, costs, and opportunities for optimization. Using consistent compliance and security standards to apply unified governance. On the architecture-planning side, the biggest constraints often stem from existing systems and can include the following: Dependencies between applications Performance and latency requirements for communication between systems Reliance on hardware or operating systems that might not be available in the public cloud Licensing restrictions Dependence on the availability of required capabilities in the selected regions of a multicloud architecture For more information about the other considerations related to workload portability, data movement, and security aspects, see Other considerations. 
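As an illustration of the operational constraint of keeping authentication, authorization, auditing, and policies consistent across environments, the following sketch compares a required security baseline against what each environment reports and flags drift. The baseline and the per-environment inventories are hypothetical, hard-coded values; in practice they would come from your IaC state, posture management tooling, or each provider's APIs.

```python
# Minimal sketch of a cross-environment policy consistency check.
# The control names and environment data are hypothetical.

REQUIRED_BASELINE = {
    "mfa_enforced": True,
    "audit_logging": True,
    "encryption_at_rest": True,
    "public_buckets_blocked": True,
}

ENVIRONMENTS = {
    "on-premises": {"mfa_enforced": True, "audit_logging": True,
                    "encryption_at_rest": True, "public_buckets_blocked": True},
    "google-cloud": {"mfa_enforced": True, "audit_logging": True,
                     "encryption_at_rest": True, "public_buckets_blocked": True},
    "other-csp": {"mfa_enforced": True, "audit_logging": False,
                  "encryption_at_rest": True, "public_buckets_blocked": False},
}

def find_drift(baseline: dict, environments: dict) -> dict:
    """Return {environment: [controls that don't match the baseline]}."""
    drift = {}
    for env, controls in environments.items():
        failing = [name for name, required in baseline.items()
                   if controls.get(name) != required]
        if failing:
            drift[env] = failing
    return drift

if __name__ == "__main__":
    for env, controls in find_drift(REQUIRED_BASELINE, ENVIRONMENTS).items():
        print(f"{env}: out of policy for {', '.join(controls)}")
```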
Design a hybrid and multicloud architecture strategy After you have clarified the specifics of the business and technical objectives with the associated business requirements (and ideally clarified and agreed on a vision statement), you can build your strategy to create a hybrid or multicloud architecture. The following flowchart summarizes the logical steps to build such a strategy. To help you determine your hybrid or multicloud architecture technical objectives and needs, the steps in the preceding flowchart start with the business requirements and objectives. How you implement your strategy can vary depending on the objectives, drivers, and the technological migration path of each business use case. It's important to remember that a migration is a journey. The following diagram illustrates the phases of this journey as described in Migrate to Google Cloud. This section provides guidance about the "Assess," "Plan," "Deploy," and "Optimize" phases in the preceding diagram. It presents this information in the context of a hybrid or multicloud migration. You should align any migration with the guidance and best practices discussed in the migration path section of the Migrate to Google Cloud guide. These phases might apply to each workload individually, not to all workloads at once. At any point in time, several workloads might be in different phases: Assess phase In the Assess phase, you conduct an initial workload assessment. During this phase, consider the goals outlined in your vision and strategy planning documents. Decide on a migration plan by first identifying a candidate list of workloads that could benefit from being deployed or migrated to the public cloud. To start, choose a workload that isn't business-critical or too difficult to migrate (with minimal or no dependencies on any workload in other environments), yet typical enough to serve as a blueprint for upcoming deployments or migrations. Ideally, the workload or application you select should be part of a targeted business use case or function that has a measurable effect on the business after it's complete. To evaluate and mitigate any potential migration risks, conduct a migration risk assessment. It's also important to assess your candidate workload to determine its suitability for migration to a multicloud environment. This assessment involves evaluating various aspects of the applications and infrastructure, including the following: Application compatibility requirements with your selected cloud providers Pricing models Security features offered by your selected cloud providers Application interoperability requirements Running an assessment also helps you identify data privacy requirements, compliance requirements, consistency requirements, and solutions across multiple cloud environments. The risks you identify can affect the workloads you choose to migrate or operate. There are several types of tools, like Google Cloud Migration Center, to help you assess existing workloads. For more information, see Migration to Google Cloud: Choose an assessment tool. From a workload modernization perspective, the fit assessment tool helps to assess a VM workload to determine if the workload is fit for modernization to a container or for migration to Compute Engine. Plan phase In the Plan phase, start with the identified applications and required cloud workloads and perform the following tasks: Develop a prioritized migration strategy that defines application migration waves and paths.
Identify the applicable high-level hybrid or multicloud application architecture pattern. Select a networking architecture pattern that supports the selected application architecture pattern. Ideally, you should incorporate the cloud networking pattern with the landing zone design. The landing zone design serves as a key foundational element of overall hybrid and multicloud architectures. The design requires seamless integration with these patterns. Don't design the landing zone in isolation. Consider these networking patterns as a subset of the landing zone design. A landing zone might consist of different applications, each with a different networking architecture pattern. Also, in this phase, it's important to decide on the design of the Google Cloud organization, projects, and resource hierarchy to prepare your cloud environment landing zone for the hybrid or multicloud integration and deployment. As part of this phase you should consider the following: Define the migration and modernization approach. There's more information about migration approaches later in this guide. It's also covered in more detail in the migration types section of Migrate to Google Cloud. Use your assessment and discovery phase findings. Align them with the candidate workload you plan to migrate. Then develop an application migration waves plan. The plan should incorporate the estimated resource sizing requirements that you determined during the assessment phase. Define the communication model required between the distributed applications and among application components for the intended hybrid or multicloud architecture. Decide on a suitable deployment archetype to deploy your workload, such as zonal, regional, multi-regional, or global, for the chosen architecture pattern. The archetype you select forms the basis for constructing the application-specific deployment architectures tailored to your business and technical needs. Decide on measurable success criteria for the migration, with clear milestones for each migration phase or wave. Selecting criteria is essential, even if the technical objective is to have the hybrid architecture as a short term setup. Define application SLAs and KPIs when your applications operate in a hybrid setup, especially for those applications that might have distributed components across multiple environments. For more information, see About migration planning to help plan a successful migration and to minimize the associated risks. Deploy phase In the Deploy phase, you are ready to start executing your migration strategy. Given the potential number of requirements, it's best to take an iterative approach. Prioritize your workloads based on the migration and application waves that you developed during the planning phase. With hybrid and multicloud architectures, start your deployment by establishing the necessary connectivity between Google Cloud and the other computing environments. To facilitate the required communication model for your hybrid or multicloud architecture, base the deployment on your selected design and network connectivity type, along with the applicable networking pattern. We recommend that you take this approach for your overall landing zone design decision. In addition, you must test and validate the application or service based on the defined application success criteria. Ideally, these criteria should include both functional and load testing (non-functional) requirements before moving to production. 
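The success criteria and application SLAs mentioned in the Plan and Deploy phases often include an availability SLO for each application, even while it runs across environments. The following sketch converts an SLO target into an error budget; the 99.5% target and 30-day window are illustrative assumptions, not recommendations.

```python
# Turn an availability SLO into an error budget, which gives a concrete,
# testable success criterion before moving a workload to production.
# The target and window below are assumptions for illustration.

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Allowed downtime (in minutes) for a given availability target."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, observed_downtime_min: float,
                     window_days: int = 30) -> float:
    """How much of the error budget is left after observed downtime."""
    return error_budget_minutes(slo_target, window_days) - observed_downtime_min

if __name__ == "__main__":
    target = 0.995   # 99.5% availability over a rolling 30-day window (assumption)
    print(f"Error budget at {target:.1%}: {error_budget_minutes(target):.0f} minutes per 30 days")
    print(f"Remaining after 45 minutes of downtime: {budget_remaining(target, 45):.0f} minutes")
```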
Optimize phase In the Optimize phase, test your deployment: After you complete testing, and the application or service meets the functional and performance capacity expectations, you can move it to production. Cloud monitoring and visibility tools, such as Cloud Monitoring, can provide insights into the performance, availability, and health of your applications and infrastructure and help you optimize where needed. For more information, see Migrate to Google Cloud: Optimize your environment. To learn more about how to design such tools for hybrid or multicloud architecture, see Hybrid and multicloud monitoring and logging patterns. Assess candidate workloads The choice of computing environments for different workloads significantly affects the success of a hybrid and multicloud strategy. Workload placement decisions should align with specific business objectives. Therefore, these decisions should be guided by targeted business use cases that enable measurable business effects. However, starting with the most business-critical workload/application isn't always necessary nor recommended. For more information, see Choosing the apps to migrate first in the Migrate to Google Cloud guide. As discussed in the Business and technical drivers section, there are different types of drivers and considerations for hybrid and multicloud architectures. The following summarized list of factors can help you evaluate your migration use case in the context of a hybrid or multicloud architecture with opportunities to have a measurable business effect: Potential for market differentiation or innovation that is enabled by using cloud services to enable certain business functions or capabilities, such as artificial intelligence capabilities that use existing on-premises data to train machine learning models. Potential savings in total cost of ownership for an application. Potential improvements in availability, resiliency, security, or performance—for example adding a disaster recovery (DR) site in the cloud. Potential speedup of the development and release processes—for example, building your development and testing environments in the cloud. The following factors can help you evaluate migration risks: The potential effect of outages that are caused by a migration. The experience your team has with public cloud deployments, or with deployments for a new or second cloud provider. The need to comply with any existing legal or regulatory restrictions. The following factors can help you evaluate the technical difficulties of a migration: The size, complexity, and age of the application. The number of dependencies with other applications and services across different computing environments. Any restrictions imposed by third-party licenses. Any dependencies on specific versions of operating systems, databases, or other environment configurations. After you have assessed your initial workloads, you can start prioritizing them and defining your migration waves and approaches. Then, you can identify applicable architecture patterns and supporting networking patterns. This step might require multiple iterations, because your assessment could change over time. It's therefore worth re-evaluating workloads after you make your first cloud deployments. Architectural approaches to adopt a hybrid or multicloud architecture This document provides guidance on common and proven approaches and considerations to migrate your workload to the cloud. 
It expands on guidance in Design a hybrid and multicloud architecture strategy, which discusses several possible, and recommended, steps to design a strategy for adopting a hybrid or multicloud architecture. Note: The phrase migrate your workload to the cloud refers to hybrid and multicloud scenarios, not to a complete cloud migration. Cloud first A common way to begin using the public cloud is the cloud-first approach. In this approach, you deploy your new workloads to the public cloud while your existing workloads stay where they are. In that case, consider a classic deployment to a private computing environment only if a public cloud deployment is impossible for technical or organizational reasons. The cloud-first strategy has advantages and disadvantages. On the positive side, it's forward looking. You can deploy new workloads in a modernized fashion while avoiding (or at least minimizing) the hassles of migrating existing workloads. While a cloud-first approach can provide certain advantages, it could potentially result in missed opportunities for improving or using existing workloads. New workloads might represent a fraction of the overall IT landscape, and their effect on IT expenses and performance can be limited. Allocating time and resources to migrating an existing workload could potentially lead to more substantial benefits or cost savings compared to attempting to accommodate a new workload in the cloud environment. Following a strict cloud-first approach also risks increasing the overall complexity of your IT environment. This approach might create redundancies, lower performance due to potential excessive cross-environment communication, or result in a computing environment that isn't well suited for the individual workload. Also, compliance with industry regulations and data privacy laws can restrict enterprises from migrating certain applications that hold sensitive data. Considering these risks, you might be better off using a cloud-first approach only for selected workloads. Using a cloud-first approach lets you concentrate on the workloads that can benefit the most from a cloud deployment or migration. This approach also considers the modernization of existing workloads. A common example of a cloud-first hybrid architecture is when legacy applications and services holding critical data must be integrated with new data or applications. To complete the integration, you can use a hybrid architecture that modernizes legacy services by using API interfaces, which unlocks them for consumption by new cloud services and applications. With a cloud API management platform, like Apigee, you can implement such use cases with minimal application changes and add security, analytics, and scalability to the legacy services. Migration and modernization Hybrid multicloud and IT modernization are distinct concepts that are linked in a virtuous circle. Using the public cloud can facilitate and simplify the modernization of IT workloads. Modernizing your IT workloads can help you get more from the cloud. The primary goals of modernizing workloads are as follows: Achieve greater agility so that you can adapt to changing requirements. Reduce the costs of your infrastructure and operations. Increase reliability and resiliency to minimize risk. However, it might not be feasible to modernize every application in the migration process at the same time. 
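As a minimal sketch of the API-facade approach to modernizing a legacy service described above, the following example exposes a hypothetical on-premises system through a small, versioned REST endpoint that new cloud applications can call. The legacy URL, port, and field names are assumptions, and an API management platform such as Apigee would add authentication, quotas, and analytics on top of a facade like this. It assumes the Flask and Requests packages are installed.

```python
# Thin API facade in front of a legacy service (hypothetical endpoint and
# field names). Install dependencies with: pip install flask requests

import requests
from flask import Flask, jsonify

app = Flask(__name__)
LEGACY_BASE_URL = "http://legacy.internal.example:8080"  # hypothetical legacy service

@app.route("/v1/orders/<order_id>", methods=["GET"])
def get_order(order_id: str):
    # Call the legacy system and reshape its response into a stable,
    # versioned contract that cloud-side consumers can depend on.
    legacy = requests.get(f"{LEGACY_BASE_URL}/orderLookup",
                          params={"id": order_id}, timeout=5)
    legacy.raise_for_status()
    data = legacy.json()
    return jsonify({
        "orderId": order_id,
        "status": data.get("ord_status"),      # legacy field names are assumptions
        "totalAmount": data.get("ord_total"),
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

A facade like this can be a first, low-risk modernization step for services that can't be migrated or rearchitected right away.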
As described in Migration to Google Cloud, you can implement one of the following migration types, or even combine multiple types as needed: Rehost (lift and shift) Replatform (lift and optimize) Refactor (move and improve) Rearchitect (continue to modernize) Rebuild (remove and replace, sometimes called rip and replace) Repurchase When making strategic decisions about your hybrid and multicloud architectures, it's important to consider the feasibility of your strategy from a cost and time perspective. You might want to consider a phased migration approach, starting with lifting and shifting or replatforming and then refactoring or rearchitecting as the next step. Typically, lifting and shifting helps to optimize applications from an infrastructure perspective. After applications are running in the cloud, it's easier to use and integrate cloud services to further optimize them using cloud-first architectures and capabilities. Also, these applications can still communicate with other environments over a hybrid network connection. For example, you can refactor or rearchitect a large, monolithic VM-based application and turn it into several independent microservices, based on a cloud-based microservice architecture. In this example, the microservices architecture uses Google Cloud managed container services like Google Kubernetes Engine (GKE) or Cloud Run. However, if the architecture or infrastructure of an application isn't supported in the target cloud environment as it is, you might consider starting with replatforming, refactoring, or rearchitecting your migration strategy to overcome those constraints where feasible. When using any of these migration approaches, consider modernizing your applications (where applicable and feasible). Modernization can require adopting and implementing Site Reliability Engineering (SRE) or DevOps principles, such that you might also need to extend application modernization to your private environment in a hybrid setup. Even though implementing SRE principles involves engineering at its core, it's more of a transformation process than a technical challenge. As such, it will likely require procedural and cultural changes. To learn more about how the first step to implementing SRE in an organization is to get leadership buy-in, see With SRE, failing to plan is planning to fail. Mix and match migration approaches Each migration approach discussed here has certain strengths and weaknesses. A key advantage of following a hybrid and multicloud strategy is that it isn't necessary to settle on a single approach. Instead, you can decide which approach works best for each workload or application stack, as shown in the following diagram. This conceptual diagram illustrates the various migration and modernization paths or approaches that can be simultaneously applied to different workloads, driven by the unique business, technical requirements, and objectives of each workload or application. In addition, it's not necessary that the same application stack components follow the same migration approach or strategy. For example: The backend on-premises database of an application can be replatformed from self-hosted MySQL to a managed database using Cloud SQL in Google Cloud. The application frontend virtual machines can be refactored to run on containers using GKE Autopilot, where Google manages the cluster configuration, including nodes, scaling, security, and other preconfigured settings. 
The on-premises hardware load balancing solution and web application firewall WAF capabilities can be replaced with Cloud Load Balancing and Google Cloud Armor. Choose rehost (lift and shift), if any of the following is true of the workloads: They have a relatively small number of dependencies on their environment. They aren't considered worth refactoring, or refactoring before migration isn't feasible. They are based on third-party software. Consider refactor (move and improve) for these types of workloads: They have dependencies that must be untangled. They rely on operating systems, hardware, or database systems that can't be accommodated in the cloud. They aren't making efficient use of compute or storage resources. They can't be deployed in an automated fashion without some effort. Consider whether rebuild (remove and replace) meets your needs for these types of workloads: They no longer satisfy current requirements. They can be incorporated with other applications that provide similar capabilities without compromising business requirements. They are based on third-party technology that has reached its end of life. They require third-party license fees that are no longer economical. The Rapid Migration Program shows how Google Cloud helps customers to use best practices, lower risk, control costs, and simplify their path to cloud success. Other considerations This document highlights the core design considerations that play a pivotal role in shaping your overall hybrid and multicloud architecture. Holistically analyze and assess these considerations across your entire solution architecture, encompassing all workloads, not just specific ones. Refactor In a refactor migration, you modify your workloads to take advantage of cloud capabilities, not just to make them work in the new environment. You can improve each workload for performance, features, cost, and user experience. As highlighted in Refactor: move and improve, some refactor scenarios let you modify workloads before migrating them to the cloud. This refactoring approach offers the following benefits, especially if your goal is to build a hybrid architecture as a long term targeted architecture: You can improve the deployment process. You can help speed up the release cadence and shorten feedback cycles by investing in continuous integration/continuous deployment (CI/CD) infrastructure and tooling. You can use refactoring as a foundation to build and manage hybrid architecture with application portability. To work well, this approach typically requires certain investments in on-premises infrastructure and tooling. For example, setting up a local Container Registry and provisioning Kubernetes clusters to containerize applications. Google Kubernetes Engine (GKE) Enterprise edition can be useful in this approach for hybrid environments. More information about GKE Enterprise is covered in the following section. You can also refer to the GKE Enterprise hybrid environment reference architecture for more details. Workload portability With hybrid and multicloud architectures, you might want to be able to shift workloads between the computing environments that host your data. To help enable the seamless movement of workloads between environments, consider the following factors: You can move an application from one computing environment to another without significantly modifying the application and its operational model: Application deployment and management are consistent across computing environments. 
Visibility, configuration, and security are consistent across computing environments. The ability to make a workload portable shouldn't conflict with the workload being cloud-first. Infrastructure automation Infrastructure automation is essential for portability in hybrid and multicloud architectures. One common approach to automating infrastructure creation is through infrastructure as code (IaC). IaC involves managing your infrastructure in files instead of manually configuring resources—like a VM, a security group, or a load balancer—in a user interface. Terraform is a popular IaC tool to define infrastructure resources in a file. Terraform also lets you automate the creation of those resources in heterogeneous environments. For more information about Terraform core functions that can help you automate provisioning and managing Google Cloud resources, see Terraform blueprints and modules for Google Cloud. You can use configuration management tools such as Ansible, Puppet, or Chef to establish a common deployment and configuration process. Alternatively, you can use an image-baking tool like Packer to create VM images for different platforms. By using a single, shared configuration file, you can use Packer and Cloud Build to create a VM image for use on Compute Engine. Finally, you can use solutions such as Prometheus and Grafana to help ensure consistent monitoring across environments. Based on these tools, you can assemble a common tool chain as illustrated in the following logical diagram. This common tool chain abstracts away the differences between computing environments. It also lets you unify provisioning, deployment, management, and monitoring. Although a common tool chain can help you achieve portability, it's subject to several of the following shortcomings: Using VMs as a common foundation can make it difficult to implement true cloud-first applications. Also, using VMs only can prevent you from using cloud-managed services. You might miss opportunities to reduce administrative overhead. Building and maintaining a common tool chain incurs overhead and operational costs. As the tool chain expands, it can develop unique complexities tailored to the specific needs of your company. This increased complexity can contribute to rising training costs. Before deciding to develop tooling and automation, explore the managed services your cloud provider offers. When your provider offers managed services that support the same use case, you can abstract away some of its complexity. Doing so lets you focus on the workload and the application architecture rather than the underlying infrastructure. For example, you can use the Kubernetes Resource Model to automate the creation of Kubernetes clusters using a declarative configuration approach. You can use Deployment Manager convert to convert your Deployment Manager configurations and templates to other declarative configuration formats that Google Cloud supports (like Terraform and the Kubernetes Resource Model) so they're portable when you publish. You can also consider automating the creation of projects and the creation of resources within those projects. This automation can help you adopt an infrastructure-as-code approach for project provisioning. Containers and Kubernetes Using cloud-managed capabilities helps to reduce the complexity of building and maintaining a custom tool chain to achieve workload automation and portability. However, only using VMs as a common foundation makes it difficult to implement truly cloud-first applications. 
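As a concrete sketch of the common tool chain idea discussed earlier in this section, the following wrapper drives the Terraform CLI to plan or apply the same module against different environments by switching variable files. The module directory, variable file paths, and environment names are hypothetical, and it assumes Terraform and the relevant providers are already installed and configured.

```python
# Apply one shared Terraform module to multiple environments by swapping
# variable files. Paths and environment names are hypothetical.

import subprocess
import sys

MODULE_DIR = "modules/web-service"              # hypothetical shared module
ENV_VAR_FILES = {
    "on-premises": "envs/onprem.tfvars",        # e.g. on-premises provider settings
    "google-cloud": "envs/gcp.tfvars",          # e.g. project, region, machine type
}

def terraform(args: list[str]) -> None:
    """Run a Terraform command inside the shared module directory."""
    subprocess.run(["terraform", *args], cwd=MODULE_DIR, check=True)

def deploy(environment: str, apply: bool = False) -> None:
    var_file = ENV_VAR_FILES[environment]
    terraform(["init", "-input=false"])
    terraform(["plan", f"-var-file={var_file}", "-input=false"])
    if apply:
        terraform(["apply", f"-var-file={var_file}", "-input=false", "-auto-approve"])

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "google-cloud"
    deploy(target, apply=False)   # plan only by default; opt in to apply deliberately
```

A wrapper like this still carries the tool-chain maintenance overhead described above, which is one reason to consider the managed and container-based options that follow.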
One solution is to use containers and Kubernetes instead. Containers help your software to run reliably when you move it from one environment to another. Because containers decouple applications from the underlying host infrastructure, they facilitate the deployment across computing environments, such as hybrid and multicloud. Kubernetes handles the orchestration, deployment, scaling, and management of your containerized applications. It's open source and governed by the Cloud Native Computing Foundation. Using Kubernetes provides the services that form the foundation of a cloud-first application. Because you can install and run Kubernetes on many computing environments, you can also use it to establish a common runtime layer across computing environments: Kubernetes provides the same services and APIs in a cloud or private computing environment. Moreover, the level of abstraction is much higher than when working with VMs, which generally translates into less required groundwork and improved developer productivity. Unlike a custom tool chain, Kubernetes is widely adopted for both development and application management, so you can tap into existing expertise, documentation, and third-party support. Kubernetes supports all container implementations that: Support the Kubernetes Container Runtime Interface (CRI) Are industry-adopted for application Aren't tied to any specific vendor When a workload is running on Google Cloud, you can avoid the effort of installing and operating Kubernetes by using a managed Kubernetes platform such as Google Kubernetes Engine (GKE). Doing so can help operations staff shift their focus from building and maintaining infrastructure to building and maintaining applications. You can also use Autopilot, a GKE mode of operation that manages your cluster configuration, including your nodes, scaling, security, and other preconfigured settings. When using GKE Autopilot, consider your scaling requirements and its scaling limits. Technically, you can install and run Kubernetes on many computing environments to establish a common runtime layer. Practically, however, building and operating such an architecture can create complexity. The architecture gets even more complex when you require container-level security control (service mesh). To simplify managing multi-cluster deployments, you can use GKE Enterprise to run modern applications anywhere at scale. GKE includes powerful managed open source components to secure workloads, enforce compliance policies, and provide deep network observability and troubleshooting. As illustrated in the following diagram, using GKE Enterprise means you can operate multi-cluster applications as fleets. GKE Enterprise helps with the following design options to support hybrid and multicloud architectures: Design and build cloud-like experiences on-premises or unified solutions for transitioning applications to GKE Enterprise hybrid environment. For more information, see the GKE Enterprise hybrid environment reference architecture. Design and build a solution to solve multicloud complexity with a consistent governance, operations, and security posture with GKE Multi-Cloud. For more information, see the GKE Multi-Cloud documentation. GKE Enterprise also provides logical groupings of similar environments with consistent security, configuration, and service management. For example, GKE Enterprise powers zero trust distributed architecture. 
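Because Kubernetes exposes the same API in every environment where it runs, a single script can inspect workloads in an on-premises cluster and a GKE cluster alike, which is part of what fleet-style management builds on. The following sketch uses the official Kubernetes Python client with two hypothetical kubeconfig contexts; with GKE Enterprise fleets, you would typically get an equivalent multi-cluster view from the fleet tooling instead.

```python
# List deployments across two clusters through the same Kubernetes API.
# Assumes `pip install kubernetes` and a kubeconfig that already contains
# the (hypothetical) contexts named below.

from kubernetes import client, config

CONTEXTS = ["onprem-cluster", "gke-cluster"]   # hypothetical kubeconfig context names

def list_deployments(context: str) -> None:
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)
    print(f"--- {context} ---")
    for dep in apps.list_deployment_for_all_namespaces().items:
        ready = dep.status.ready_replicas or 0
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{dep.spec.replicas} replicas ready")

if __name__ == "__main__":
    for ctx in CONTEXTS:
        list_deployments(ctx)
```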
In a zero trust distributed architecture, services that are deployed on-premises or in another cloud environment can communicate across environments through end-to-end mTLS secure service-to-service communications. Workload portability considerations Kubernetes and GKE Enterprise provide a layer of abstraction for workloads that can hide the many intricacies and differences between computing environments. The following list describes some of those abstractions: An application might be portable to a different environment with minimal changes, but that doesn't mean that the application performs equally well in both environments. Differences in underlying compute, infrastructure security capabilities, or networking infrastructure, along with proximity to dependent services, might lead to substantially different performance. Moving a workload between computing environments might also require you to move data. Different environments can have different data storage and management services and facilities. The behavior and performance of load balancers provisioned with Kubernetes or GKE Enterprise might differ between environments. Data movement Because it can be complex to move, share, and access data at scale between computing environments, enterprise-level companies might hesitate to build a hybrid or multicloud architecture. This hesitation might increase if they are already storing most of their data on-premises or in one cloud. However, the various data movement options offered by Google Cloud, provide enterprises with a comprehensive set of solutions to help move, integrate, and transform their data. These options help enterprises to store, share, and access data across different environments in a way that meets their specific use cases. That ability ultimately makes it easier for business and technology decision-makers to adopt hybrid and multicloud architectures. Data movement is an important consideration for hybrid and multicloud strategy and architecture planning. Your team needs to identify your different business use cases and the data that powers them. You should also think about storage type, capacity, accessibility, and movement options. If an enterprise has a data classification for regulated industries, that classification can help to identify storage locations and cross-region data movement restrictions for certain data classes. For more information, see Sensitive Data Protection. Sensitive Data Protection is a fully managed service designed to help you discover, classify, and protect your data assets. To explore the process, from planning a data transfer to using best practices in implementing a plan, see Migration to Google Cloud: Transferring your large datasets. Security As organizations adopt hybrid and multicloud architectures, their attack surface can increase depending on the way their systems and data are distributed across different environments. Combined with the constantly evolving threat landscape, increased attack surfaces can lead to an increased risk of unauthorized access, data loss, and other security incidents. Carefully consider security when planning and implementing hybrid or multicloud strategies. For more information, see Attack Surface Management for Google Cloud. When architecting for a hybrid architecture, it's not always technically feasible or viable to extend on-premises security approaches to the cloud. However, many of the networking security capabilities of hardware appliances are cloud-first features and they operate in a distributed manner. 
For more information about the cloud-first network security capabilities of Google Cloud, see Cloud network security. Hybrid and multicloud architectures can introduce additional security challenges, such as consistency and observability. Every public cloud provider has its own approach to security, including different models, best practices, infrastructure and application security capabilities, compliance obligations, and even the names of security services. These inconsistencies can increase security risk. Also, the shared responsibility model of each cloud provider can differ. It's essential to identify and understand the exact demarcation of responsibilities in a multicloud architecture. Observability is key to gaining insights and metrics from the different environments. In a multicloud architecture, each cloud typically provides tools to monitor for security posture and misconfigurations. However, using these tools results in siloed visibility, which prevents building advanced threat intelligence across the entire environment. As a result, the security team must switch between tools and dashboards to keep the cloud secure. Without an overarching end-to-end security visibility for the hybrid and multicloud environments, it's difficult to prioritize and mitigate vulnerabilities. To obtain the full visibility and posture of all your environments, prioritize your vulnerabilities, and mitigate the vulnerabilities you identify. We recommend a centralized visibility model. A centralized visibility model avoids the need for manual correlation between different tools and dashboards from different platforms. For more information, see Hybrid and multicloud monitoring and logging patterns. As part of your planning to mitigate security risks and deploy workloads on Google Cloud, and to help you plan and design your cloud solution for meeting your security and compliance objectives, explore the Google Cloud security best practices center and the enterprise foundations blueprint. Compliance objectives can vary, as they are influenced by both industry-specific regulations and the varying regulatory requirements of different regions and countries. For more information, see the Google Cloud compliance resource center. The following are some of the primary recommended approaches for architecting secure hybrid and multicloud architecture: Develop a unified tailored cloud security strategy and architecture. Hybrid and multicloud security strategies should be tailored to the specific needs and objectives of your organization. It's essential to understand the targeted architecture and environment before implementing security controls, because each environment can use different features, configurations, and services. Consider a unified security architecture across hybrid and multicloud environments. Standardize cloud design and deployments, especially security design and capabilities. Doing so can improve efficiency and enable unified governance and tooling. Use multiple security controls. Typically, no single security control can adequately address all security protection requirements. Therefore, organizations should use a combination of security controls in a layered defense approach, also known as defense-in-depth. Monitor and continuously improve security postures: Your organization should monitor its different environments for security threats and vulnerabilities. It should also try to continuously improve its security posture. 
Consider using cloud security posture management (CSPM) to identify and remediate security misconfigurations and cybersecurity threats. CSPM also provides vulnerability assessments across hybrid and multicloud environments. Security Command Center is a built-in security and risk management solution for Google Cloud that helps to identify misconfigurations and vulnerabilities and more. Security Health Analytics is a managed vulnerability assessment scanning tool. It's a feature of Security Command Center that identifies security risks and vulnerabilities in your Google Cloud environment and provides recommendations for remediating them. Mandiant Attack Surface Management for Google Cloud lets your organization better see their multicloud or hybrid cloud environment assets. It automatically discovers assets from multiple cloud providers, DNS, and the extended external attack surface to give your enterprise a deeper understanding of its ecosystem. Use this information to prioritize remediation on the vulnerabilities and exposures that present the most risk. Cloud security information and event management (SIEM) solution: Helps to collect and analyze security logs from hybrid and multicloud environments to detect and respond to threats. Google Security Operations SIEM from Google Cloud helps to provide security Information and event management by collecting, analyzing, detecting, and investigating all of your security data in one place. Build hybrid and multicloud architectures using Google Cloud: What's next Learn more about how to get started with your migration to Google Cloud. Learn about common architecture patterns for hybrid and multicloud, which scenarios they're best suited for, and how to apply them. Find out more about networking patterns for hybrid and multicloud architectures, and how to design them. Explore, analyze, and compare the different deployment archetypes on Google Cloud. Learn about landing zone design in Google Cloud. Learn more about Google Cloud Architecture Framework Read about our best practices for migrating VMs to Compute Engine. Send feedback \ No newline at end of file diff --git a/Virtual_Desktops.txt b/Virtual_Desktops.txt new file mode 100644 index 0000000000000000000000000000000000000000..17d9f9a7bdba5f0bd82bb5a05957c8b4d527859b --- /dev/null +++ b/Virtual_Desktops.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/solutions/virtual-desktops +Date Scraped: 2025-02-23T11:59:46.079Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Virtual desktopsAs a core component of the end-user computing stack, Virtual desktops (VDI) can be a great solution for companies looking to streamline operations, reduce costs, enhance security, and support legacy applications.New — Cameyo is now part of Google Cloud. Learn moreContact usFree migration cost assessmentBenefitsSupport your global workforce with desktops that enhance security, save costs, and improve productivitySecureCentralized data: Since data is stored on central servers rather than individual devices, it's easier to protect against loss, theft, or unauthorized access.Data recovery: Centralized backups make data recovery more efficient in case of disasters or hardware failures.Advanced controls: VDI environments can be more easily secured with measures like encryption, access controls, and regular security updates.User authentication and authorization with Google Workspace, IAP, or Active Directory. 
Encrypted desktop streaming delivers wherever your employees may be.View webinar on reducing risk with virtual desktops.Cost-effectiveHardware: VDI can reduce hardware costs by allowing older or less powerful devices to access virtual desktops. This eliminates the need for frequent hardware upgrades.Maintenance: Centralized management of virtual desktops simplifies maintenance tasks like patching, updates, and troubleshooting, reducing IT overhead.Scalability: VDI can easily scale up or down to accommodate changing business needs, adding or removing virtual desktops as required.Google works with partners so you get a best-fit solution running on infrastructure that can scale up and down quickly and cost-effectively.ProductiveConsistent work environment: VDI provides a standardized desktop experience across devices, ensuring consistency and reducing compatibility issues.Faster deployment: New employees can be quickly set up with virtual desktops, reducing onboarding time.Reduced downtime: Maintenance tasks can be performed centrally without disrupting users, minimizing downtime.Increase business agility and reduce operational complexity by migrating and replatforming applications to the cloud.Key features High-performance virtual desktop partnersLeading software vendors provide virtual desktop solutions on Google Cloud to meet your specific needs, listed below in alphabetical order: CameyoCameyo is a Virtual Application Delivery (VAD) platform that allows you to deliver Windows desktop applications to any device, including Chromebooks, directly from the browser, without the need for traditional virtual desktops or VPNs. Now a part of Google CloudCitrixCitrix Virtual Apps and Desktops secures the delivery of applications and desktops to any device, on any network, empowering a modern digital workspace. Citrix virtualization balances user experience with advanced security and management tools to streamline work for both IT admins and users.Dizzion FrameDizzion Frame lets you stream your Windows apps and desktops from Google Cloud to any device with a browser. This desktop as a service (DaaS) is easy to deploy and also integrates seamlessly with Chromebooks, Google Drive, and Google Authentication. OmnissaHorizon® 8 simplifies the management and delivery of virtual desktops and apps on-premises, in the cloud, or in a hybrid or multi-cloud configuration. With Google Cloud VMware Engine, customers can natively run Horizon 8, which helps IT control, manage, and protect all of the Windows resources end users want, at the speed they expect, with the efficiency business demands.WorkspotWorkspot’s turnkey, enterprise-ready SaaS platform leverages Google Cloud to place virtual Windows 10 desktops and workstations at the edge of the Google Cloud region nearest users for unparalleled performance. The fully managed service features predictable flat-rate pricing, which includes the cost of cloud compute, Go-Live Deployment Services, and ongoing support.Ready to get started? 
Contact usCustomersCustomers accelerate productivity with virtual desktop solutions Blog postHow Equifax is transforming the majority of their IT operations on the strong foundation of Google Cloud.3-min readBlog postTELUS transitioned tens of thousands of employees to a work-from-home model.3-min readCase studyOncology Venture improves patient outcomes through advanced cancer analysis.6-min readBlog postHow we adopted Pixelbooks and changed ATB Financial’s work culture, step by step.4-min readVideoMiddlesex Hospital paramedics improve patient care with Chromebooks with Citrix.01:34See all customersPartnersAdditional partnersAdditional partner options for industry and workload specific use cases.See all partnersRelated servicesRelated productsCompute EngineWindows and Linux computing infrastructure in predefined or custom machine sizes to accelerate your cloud transformation.Managed Service for Microsoft Active DirectoryHighly available, hardened service running actual Microsoft® Active Directory (AD).Identity and Access ManagementFine-grained access control and visibility for centrally managing cloud resources.Google Cloud VMware EngineEnables you to seamlessly migrate your existing VMware-based applications to Google Cloud without refactoring or rewriting them.Chrome EnterpriseEnable remote workers. Chrome OS provides a secure, manageable solution for employees that need access to applications through VDI.What's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoEffectively reducing ransomware risk with Chrome and virtual desktopsWatch videoVideoMobilize your workforce with virtualization on Chrome OSWatch videoReportReplace thin clients with virtualization on Chrome OSLearn moreReportIDC: Google Cloud is an ideal platform for Windows Server-based applicationsRead reportTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Virtual_Private_Cloud.txt b/Virtual_Private_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..fde2f6a54b3b56acf39a6b2f5900acdc9c6f90b7 --- /dev/null +++ b/Virtual_Private_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/vpc +Date Scraped: 2025-02-23T12:07:19.661Z + +Content: +Tech leaders: Get an insider view of Google Cloud’s App Dev and Infrastructure solutions on Oct 30. Register for the Summit.Jump to Virtual Private Cloud (VPC)Virtual Private Cloud (VPC)Global virtual network that spans all regions. Single VPC for an entire organization, isolated within projects. Increase IP space with no downtime.New customers get $300 in free credits to spend on VPC.Go to consoleContact salesStart using VPC networks with these how-to-guidesNetwork of 40 regions, 121 zones, presence in 200+ countries and territories, and 3.2 million kilometers of terrestrial and subsea fiber Use a single VPC to span multiple regions without communicating across the public internetLearn more about what customers are saying about Google VPCsVIDEONetwork Function Optimizer in Google Kubernetes Engine (GKE) demo21:42Key featuresKey featuresVPC networkVPC can automatically set up your virtual topology, configuring prefix ranges for your subnets and network policies, or you can configure your own. 
You can also expand CIDR ranges without downtime.VPC Flow LogsVPC Flow Logs capture information about the IP traffic going to and from network interfaces on Compute Engine. These logs help with network monitoring, forensics, real-time security analysis, and expense optimization. VPC Flow Logs are updated every five seconds, providing immediate visibility, and can be quickly and easily visualized with Flow Analyzer. VPC PeeringConfigure private communication across the same or different organizations without bandwidth bottlenecks or single points of failure.Shared VPCConfigure a VPC network to be shared across several projects in your organization. Connectivity routes and firewalls associated are managed centrally. Your developers have their own projects with separate billing and quotas, while they simply connect to a shared private network where they can communicate.Bring your own IPsBring your own IP addresses to Google’s network across all regions to minimize downtime during migration and reduce your networking infrastructure cost. After you bring your own IPs, Google Cloud will advertise them globally to all peers. Your prefixes can be broken into blocks as small as 16 addresses (/28), creating more flexibility with your resources.View all featuresLooking for other networking products, including Cloud CDN, Private Service Connect, or Cloud DNS?Browse all productsCustomersLearn from customers using Virtual Private CloudBlog postThe secret to Bitly’s shortened cloud migration5-min readCase studyPlanet: Imaging a dynamic, ever-changing Earth5-min readCase studyMETRO: Migrating to the cloud to better serve customer needs5-min readSee all customersWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.Blog postRouting in Google Cloud: Where can I send my IP packet from a VM?Read the blogBlog postAnnouncing VPC Service Controls with private IPs to extend data exfiltration protectionRead the blogBlog postNetwork Performance Decoded: Benchmarking TCP and UDP bulk flowsRead the blogBlog postIntroducing the Verified Peering Provider program, a simple alternative to Direct PeeringRead the blogBlog postWhat’s new with Google Cloud Networking at Next ’24Read the blogBlog postGet timely networking health updates with Personalized Service Health emerging incidentsRead the blogDocumentationDocumentationGoogle Cloud BasicsUsing VPCVirtual Private Cloud documentation, how-to-guides, and support.Learn moreGoogle Cloud BasicsCreating a virtual private network (VPN)How-to-guides, tutorials, and other support to create a VPN.Learn moreGoogle Cloud BasicsUsing Cloud RouterDocumentation and resources for Cloud Router.Learn moreGoogle Cloud BasicsExtend your on-premises network to Google with InterconnectLearn how to use Dedicated Interconnect to connect directly to Google or use Partner Interconnect to connect to Google through a supported service provider.Learn moreTutorialGoogle Cloud Skills Boost: Networking in Google CloudA two-day class giving participants a broad study of networking options on Google Cloud in addition to common network design patterns and automated deployment. 
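To make the custom subnet and Flow Logs behavior described above concrete, here is a minimal sketch using the google-cloud-compute Python client: it creates a custom-mode VPC and one subnet with VPC Flow Logs enabled. The project ID, region, names, and CIDR range are illustrative placeholders, not values from this page.

```python
# Minimal sketch: create a custom-mode VPC and a subnet with VPC Flow Logs
# enabled. Project ID, region, names, and CIDR range are placeholders.
from google.cloud import compute_v1

PROJECT_ID = "my-project"  # placeholder
REGION = "us-central1"     # placeholder

def create_network_and_subnet() -> None:
    # Custom subnet mode: you control prefix ranges yourself.
    network = compute_v1.Network()
    network.name = "demo-vpc"
    network.auto_create_subnetworks = False

    networks_client = compute_v1.NetworksClient()
    networks_client.insert(project=PROJECT_ID, network_resource=network).result()

    # One regional subnet with VPC Flow Logs turned on.
    subnet = compute_v1.Subnetwork()
    subnet.name = "demo-subnet"
    subnet.network = f"projects/{PROJECT_ID}/global/networks/{network.name}"
    subnet.ip_cidr_range = "10.10.0.0/24"
    subnet.enable_flow_logs = True

    subnets_client = compute_v1.SubnetworksClient()
    subnets_client.insert(
        project=PROJECT_ID, region=REGION, subnetwork_resource=subnet
    ).result()

if __name__ == "__main__":
    create_network_and_subnet()
```

Because the VPC is created in custom mode, additional subnets in other regions can be added later, and their CIDR ranges can be expanded without downtime.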
Learn moreNot seeing what you’re looking for?View all product documentationExplore more docsGet a quick intro to using this product.Learn to complete specific tasks with this product.Browse guides and tutorials for this product.View APIs, references, and other resources for this product.Release notesRead about the latest releases for VPCAll featuresAll featuresVPC networkVPC can automatically set up your virtual topology, configuring prefix ranges for your subnets and network policies, or you can configure your own. You can also expand CIDR ranges without downtime.VPC Flow LogsVPC Flow Logs capture information about the IP traffic going to and from network interfaces on Compute Engine. These VPC flow logs help with network monitoring, forensics, real-time security analysis, and expense optimization. VPC Flow Logs are updated every five seconds, providing immediate visibility, and can be quickly and easily visualized with Flow Analyzer.Bring your own IPsBring your own IP addresses to Google’s network across all regions to minimize downtime during migration and reduce your networking infrastructure cost. After you bring your own IPs, Google Cloud will advertise them globally to all peers. Your prefixes can be broken into blocks as small as 16 addresses (/28), creating more flexibility with your resources.VPC PeeringConfigure private communication across the same or different organizations without bandwidth bottlenecks or single points of failure.FirewallSegment your networks with a globally distributed firewall to restrict access to instances. VPC Firewall Rules Logging lets you audit, verify, and analyze the effects of your firewall rules. It logs firewall access and denies events with the same responsiveness of VPC flow logs.RoutesForward traffic from one instance to another instance within the same network, even across subnets, without requiring external IP addresses.Shared VPCConfigure a VPC network to be shared across several projects in your organization. Connectivity routes and firewalls associated are managed centrally. Your developers have their own projects with separate billing and quota, while they simply connect to a shared private network, where they can communicate.Packet mirroringTroubleshoot your existing VPCs by collecting and inspecting network traffic at scale, providing intrusion detection, application performance monitoring, and compliance controls with Packet Mirroring.VPNSecurely connect your existing network to a VPC network over IPsec.Private accessGet private access to Google services, such as storage, big data, analytics, or machine learning, without having to give your service a public IP address. Configure your application’s front end to receive internet requests and shield your backend services from public endpoints, all while being able to access Google Cloud services.VPC Service ControlsMitigate data exfiltration risks by enforcing a security perimeter to isolate resources of multi-tenant Google Cloud services. Configure private communications between cloud resources from VPC networks spanning cloud and on-premise deployments. Keep sensitive data private and take advantage of the fully managed storage and data processing capabilities.PricingPricingPricing for VPC is based on data transfer, data leaving a Google Cloud resource, such as a VM. 
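As a companion to the firewall capability described above, the following is a hedged sketch, again with the google-cloud-compute Python client, that creates a VPC firewall rule allowing inbound HTTP and HTTPS to instances tagged "web". The names, network, and ranges are assumptions for illustration only.

```python
# Hedged sketch: an ingress firewall rule that allows TCP 80/443 to
# instances tagged "web". Names, network, and source ranges are placeholders.
from google.cloud import compute_v1

PROJECT_ID = "my-project"  # placeholder

def create_allow_web_rule() -> None:
    rule = compute_v1.Firewall()
    rule.name = "allow-web-ingress"
    rule.network = f"projects/{PROJECT_ID}/global/networks/demo-vpc"
    rule.direction = "INGRESS"
    rule.priority = 1000
    rule.source_ranges = ["0.0.0.0/0"]
    rule.target_tags = ["web"]

    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"  # field name as generated by the client library
    allowed.ports = ["80", "443"]
    rule.allowed = [allowed]

    compute_v1.FirewallsClient().insert(
        project=PROJECT_ID, firewall_resource=rule
    ).result()

if __name__ == "__main__":
    create_allow_web_rule()
```

Firewall Rules Logging can then be enabled on the rule to audit and analyze its effects, as described above.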
To get a custom pricing quote, connect with a sales representative.If you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.View pricing detailsPartnersPacket Mirroring partnersSee all partnersTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips and best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Vision_AI.txt b/Vision_AI.txt new file mode 100644 index 0000000000000000000000000000000000000000..9462907d7bdcaf6bc686d9406ddbc378dccbe03b --- /dev/null +++ b/Vision_AI.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/vision +Date Scraped: 2025-02-23T12:02:22.034Z + +Content: +Try Gemini 2.0 Flash, our newest model with low latency and enhanced performanceVision AI Extract insights from images, documents, and videos Access advanced vision models via APIs to automate vision tasks, streamline analysis, and unlock actionable insights. Or build custom apps with no-code model training and low cost in a managed environment.Try it in consoleContact salesYou can also try deploying Google-recommended document summarizing and AI/ML image processing solutions.HighlightsWhat are Google Cloud’s computer vision offerings?Which computer vision offering is right for me?OCR with generative AIHow computer vision works7-min videoOverviewWhat is computer vision? Computer vision is a field of artificial intelligence (AI) that enables computers and systems to interpret and analyze visual data and derive meaningful information from digital images, videos, and other visual inputs. Some of its typical real-world applications include: object detection, visual content (images, documents, videos) processing, understanding and analysis, product search, image classification and search, and content moderation.Advanced multimodal gen AIGoogle Cloud's Vertex AI offers access to Gemini, a family of cutting-edge, multimodal model that is capable of understanding virtually any input, combining different types of information, and generating almost any output. While Gemini is best suited for tasks that mix visuals, text, and code, Gemini Pro Vision excels at a wide variety of vision related tasks, such as object recognition, digital content understanding, and captioning/description. It can be accessed through an API.Vision focused gen AIImagen on Vertex AI brings Google's state-of-the-art image generative AI capabilities to application developers via an API. Some of its key features include image generation (restricted GA) with text prompts, image editing (restricted GA) with text prompts, describing an image in text (also known as visual captioning, GA), and subject model fine-tuning (restricted GA). Learn more about its key features and launch stages.Ready-to-use Vision AIPowered by Google’s pretrained computer vision ML models, Cloud Vision API is a readily available API (REST and RPC) that allows developers to easily integrate common vision detection features within applications, including image labeling, face and landmark detection, optical character recognition (OCR), and tagging of explicit content. Each feature you apply to an image is a billable unit—Cloud Vision API lets you use 1,000 units of its features for free every month. 
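To show what the per-feature billing units mentioned above correspond to in code, here is a minimal sketch that calls two Cloud Vision API features (label detection and SafeSearch) through the Python client. The Cloud Storage URI is a placeholder.

```python
# Minimal sketch: call two Cloud Vision API features on an image stored in
# Cloud Storage. The gs:// URI is a placeholder.
from google.cloud import vision

def annotate_image(gcs_uri: str = "gs://my-bucket/my-image.jpg") -> None:
    client = vision.ImageAnnotatorClient()
    image = vision.Image()
    image.source.image_uri = gcs_uri

    # Label detection: which objects and concepts appear in the image.
    labels = client.label_detection(image=image).label_annotations
    for label in labels:
        print(f"{label.description}: {label.score:.2f}")

    # SafeSearch: likelihood scores for explicit content categories.
    safe = client.safe_search_detection(image=image).safe_search_annotation
    print("adult:", safe.adult, "violence:", safe.violence)

if __name__ == "__main__":
    annotate_image()
```

Each feature applied to the image (here, two) counts as a separate billable unit.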
See pricing details.Document understanding gen AIDocument AI is a document understanding platform that combines computer vision and other technologies such as natural language processing to extract text and data from scanned documents, transforming unstructured data into structured information and business insights. It offers a wide range of pretrained processors optimized for different types of documents. It also makes it easy to build custom processors to classify, split, and extract structured data from documents via Document AI Workbench.4:37Intro to Document AIReady-to-use Vision AI for videosWith computer vision technology at its core, Video Intelligence API is an easy way to process, analyze, and understand video content. Its pretrained ML models automatically recognize a vast number of objects, places, and actions in stored and streaming video, with exceptional quality. It’s highly efficient for common use cases such as content moderation and recommendation, media archives, and contextual advertisements. You can also train custom ML models with Vertex AI Vision for your specific needs. 6:21Demo: How to use Video Intelligence API to create a searchable video archiveVisual Inspection AIVisual Inspection AI automates visual inspection tasks in manufacturing and other industrial settings. It leverages advanced computer vision and deep learning techniques to analyze images and videos, identify anomalies, detect and locate defects, and check missing and defect parts in assembled products.You can train custom models with no technical expertise and minimum labeled images, efficiently run inference at production lines, and continuously refresh models with fresh data from the factory floor.5:15Demo - How does Visual Inspection AI work?Unified Vision AI PlatformVertex AI Vision is a fully managed application development environment that lets developers easily build, deploy, and manage computer vision applications to process a variety of data modalities, such as text, image, video, and tabular data. It reduces time to build from days to minutes at one tenth the cost of current offerings.You can build and deploy your own custom models, and manage and scale them with CI/CD pipelines. It also integrates with popular open source tools like TensorFlow and PyTorch.58:59Demo - How Vertex AI Vision worksData privacy and securityGoogle Cloud has industry-leading capabilities that give you—our customers—control over your data and provide visibility into when and how your data is accessed.As a Google Cloud customer, you own your customer data. We implement stringent security measures to safeguard your customer data and provide you with tools and features to control it on your terms. Customer data is your data, not Google’s. We only process your data according to your agreement(s).Learn more in our Privacy Resource Center.View moreCompare computer vision productsOfferingBest forKey featuresCloud Vision APIQuick and easy integration of basic vision features.Prebuilt features like image labeling, face and landmark detection, OCR, safe search. 
Cost-effective, pay-per-use.Document AIExtracting insights from scanned documents and images, automating document workflows.OCR (powered by Gen AI), NLP, ML for document understanding, text extraction, entity identification, document categorization.Video Intelligence APIAnalyzing video content, content moderation and recommendation, media archives, and contextual ads.Object detection and tracking, scene understanding, activity recognition, face detection and analysis, text detection and recognition.Visual Inspection AIAutomating visual inspection tasks in manufacturing and industrial settingsDetecting anomaly, detecting and locating defects, and checking assembly.Vertex AI VisionBuilding and deploying custom models for specific needs.Data preparation tools, model training and deployment, complete control over your solution. Requires technical expertise.Gemini Pro VisionVisual analysis and understanding, multimodal question answering.Info seeking, object recognition, digital content understanding, structured content generation, captioning/description, and extrapolation.Imagen on Vertex AIGet automated image descriptions. Image classification and search.Content moderation and recommendations.Image generation, image editing, visual captioning, and multimodal embedding. See full list of features and their launch stages.Optimized for different purposes, these products allow you to take advantage of pretrained ML models and hit the ground running, with the ability to easily fine-tune.Cloud Vision APIBest forQuick and easy integration of basic vision features.Key featuresPrebuilt features like image labeling, face and landmark detection, OCR, safe search. Cost-effective, pay-per-use.Document AIBest forExtracting insights from scanned documents and images, automating document workflows.Key featuresOCR (powered by Gen AI), NLP, ML for document understanding, text extraction, entity identification, document categorization.Video Intelligence APIBest forAnalyzing video content, content moderation and recommendation, media archives, and contextual ads.Key featuresObject detection and tracking, scene understanding, activity recognition, face detection and analysis, text detection and recognition.Visual Inspection AIBest forAutomating visual inspection tasks in manufacturing and industrial settingsKey featuresDetecting anomaly, detecting and locating defects, and checking assembly.Vertex AI VisionBest forBuilding and deploying custom models for specific needs.Key featuresData preparation tools, model training and deployment, complete control over your solution. Requires technical expertise.Gemini Pro VisionBest forVisual analysis and understanding, multimodal question answering.Key featuresInfo seeking, object recognition, digital content understanding, structured content generation, captioning/description, and extrapolation.Imagen on Vertex AIBest forGet automated image descriptions. Image classification and search.Content moderation and recommendations.Key featuresImage generation, image editing, visual captioning, and multimodal embedding. 
See full list of features and their launch stages.Optimized for different purposes, these products allow you to take advantage of pretrained ML models and hit the ground running, with the ability to easily fine-tune.How It WorksGoogle Cloud’s Vision AI suite of tools combines computer vision with other technologies to understand and analyze video and easily integrate vision detection features within applications, including image labeling, face and landmark detection, optical character recognition (OCR), and tagging of explicit content.These tools are available via APIs while remaining customizable for specific needs.Try Vision AIDemoSee how computer vision works with your own filessparkGet solution recommendations for your use case, generated by AII want to automatically categorize my product images by type, color, and styleI want to automatically generate social media tags and metadata for marketing videosI want to build an automated quality control system that can detect manufacturing defects from images of itemsMy use casesGenerate recommendationsDo not enter any sensitive, confidential, or personal information.Try popular use casesprompt_suggestionCategorize imagesprompt_suggestionGenerate metadataprompt_suggestionDetect manufacturing defectsCommon UsesDetect text in raw files and automatically summarizeSummarize large documents with gen AIThe solution depicted in the architecture diagram on the right deploys a pipeline that is triggered when you add a new PDF document to your Cloud Storage bucket. The pipeline extracts text from your document, creates a summary from the extracted text, and stores the summary in a database for you to view and search.You can invoke the application by either uploading files via Jupyter Notebook, or directly to Cloud Storage in the Google Cloud console.Deploy in Google Cloud consoleFull step-by-step guide: Summarize large documents with generative AIDownload the solution as Terraform on GitHubHow-to: Deploy the solution using Terraform CLIEstimated deployment time: 11 mins (1 min to configure, 10 min to deploy).How-tosSummarize large documents with gen AIThe solution depicted in the architecture diagram on the right deploys a pipeline that is triggered when you add a new PDF document to your Cloud Storage bucket. The pipeline extracts text from your document, creates a summary from the extracted text, and stores the summary in a database for you to view and search.You can invoke the application by either uploading files via Jupyter Notebook, or directly to Cloud Storage in the Google Cloud console.Deploy in Google Cloud consoleFull step-by-step guide: Summarize large documents with generative AIDownload the solution as Terraform on GitHubHow-to: Deploy the solution using Terraform CLIEstimated deployment time: 11 mins (1 min to configure, 10 min to deploy).Build an image processing pipelineScalable image processing on a serverless architectureThe solution, depicted in the diagram on the right, uses pretrained machine learning models to analyze images provided by users and generate image annotations. 
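The following is a simplified, illustrative sketch, not the Jump Start Solution's actual code, of the event-driven image processing idea just described: a Python function triggered by a Cloud Storage upload that annotates the image with the Vision API and writes the result back to the same bucket. The annotations/ output prefix is an assumption for illustration.

```python
# Illustrative sketch (not the deployed solution's code): annotate an
# uploaded image with the Vision API and store the labels as JSON under an
# assumed "annotations/" prefix in the same bucket.
import json

import functions_framework
from google.cloud import storage, vision

vision_client = vision.ImageAnnotatorClient()
storage_client = storage.Client()

@functions_framework.cloud_event
def annotate_uploaded_image(cloud_event):
    data = cloud_event.data
    bucket_name, object_name = data["bucket"], data["name"]

    image = vision.Image()
    image.source.image_uri = f"gs://{bucket_name}/{object_name}"
    response = vision_client.label_detection(image=image)

    annotations = [
        {"label": a.description, "score": a.score}
        for a in response.label_annotations
    ]

    out_blob = storage_client.bucket(bucket_name).blob(
        f"annotations/{object_name}.json"
    )
    out_blob.upload_from_string(
        json.dumps(annotations), content_type="application/json"
    )
```

The deployable solution referenced below adds REST endpoints, security settings, and Terraform configuration on top of this basic pattern.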
Deploying this solution creates an image processing service that can help you handle unsafe or harmful user-generated content, digitize text from physical documents, detect and classify objects in images, and more.You will be able to review configuration and security settings to understand how to adapt the image processing service to different needs.Deploy in Google Cloud consoleTutorial: build a vision analytics pipeline to process a large quantity of imagesFull documentation: AI/ML image processing on Cloud FunctionsFull step-by-step guide: Deploy the image processing pipeline using the Terraform CLIEstimated deployment time: 12 mins (2 mins to configure, 10 mins to deploy).How-tosScalable image processing on a serverless architectureThe solution, depicted in the diagram on the right, uses pretrained machine learning models to analyze images provided by users and generate image annotations. Deploying this solution creates an image processing service that can help you handle unsafe or harmful user-generated content, digitize text from physical documents, detect and classify objects in images, and more.You will be able to review configuration and security settings to understand how to adapt the image processing service to different needs.Deploy in Google Cloud consoleTutorial: build a vision analytics pipeline to process a large quantity of imagesFull documentation: AI/ML image processing on Cloud FunctionsFull step-by-step guide: Deploy the image processing pipeline using the Terraform CLIEstimated deployment time: 12 mins (2 mins to configure, 10 mins to deploy).Get automated image descriptions with gen AIThe Visual Captioning feature of Imagen lets you generate a relevant description for an image, You can use it to get more detailed metadata about images for storing and searching, to generate automated captioning to support accessibility use cases, and receive quick descriptions of products and visual assets.Available in English, French, German, Italian, and Spanish, this feature can be accessed in the Google Cloud console, or via an API call.Try Visual CaptioningQuickstart: Visual Captioning and Visual Question Answering (VQA)Samples: Get short-form captions of an imageDocumentation: Visual CaptioningHow-tosThe Visual Captioning feature of Imagen lets you generate a relevant description for an image, You can use it to get more detailed metadata about images for storing and searching, to generate automated captioning to support accessibility use cases, and receive quick descriptions of products and visual assets.Available in English, French, German, Italian, and Spanish, this feature can be accessed in the Google Cloud console, or via an API call.Try Visual CaptioningQuickstart: Visual Captioning and Visual Question Answering (VQA)Samples: Get short-form captions of an imageDocumentation: Visual CaptioningStream-process videosGain insights from streaming videos with Vertex AI VisionBefore analyzing your video data with your application, create a pipeline for the continuous flow of data with Streams service in Vertex AI Vision. Ingested data is then analyzed by Google’s pretrained models or your custom model. The analysis output from the streams is then stored in Vertex AI Vision Warehouse where you can use advanced AI-powered search capabilities to query unstructured media content. 
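The Vertex AI Vision streaming SDK itself is not shown here; as a simpler illustration of analyzing video with the pretrained models this page describes, the sketch below uses the Video Intelligence API Python client on a stored video. The input URI is a placeholder.

```python
# Minimal sketch: label detection on a stored video with the Video
# Intelligence API. The gs:// URI is a placeholder.
from google.cloud import videointelligence

def label_video(gcs_uri: str = "gs://my-bucket/my-video.mp4") -> None:
    client = videointelligence.VideoIntelligenceServiceClient()
    operation = client.annotate_video(
        request={
            "features": [videointelligence.Feature.LABEL_DETECTION],
            "input_uri": gcs_uri,
        }
    )
    result = operation.result(timeout=300)  # long-running operation

    # Print shot-level labels recognized in the video.
    for annotation in result.annotation_results[0].shot_label_annotations:
        print(annotation.entity.description)

if __name__ == "__main__":
    label_video()
```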
Try Vertex AI VisionDocumentation: Vertex AI VisionQuickstart: Build an object detector app with Vertex AI VisionHow-to: Create a stream and ingest dataHow-tosGain insights from streaming videos with Vertex AI VisionBefore analyzing your video data with your application, create a pipeline for the continuous flow of data with Streams service in Vertex AI Vision. Ingested data is then analyzed by Google’s pretrained models or your custom model. The analysis output from the streams is then stored in Vertex AI Vision Warehouse where you can use advanced AI-powered search capabilities to query unstructured media content. Try Vertex AI VisionDocumentation: Vertex AI VisionQuickstart: Build an object detector app with Vertex AI VisionHow-to: Create a stream and ingest dataExtract text and insights from documents with generative AIUnlock insights from nuanced documents with Document AIPowered by a foundational model, Document AI Custom Extractor extracts text and data from generic and domain-specific documents faster and with higher accuracy. Easily fine-tune with just 5-10 documents for even better performance. If you want to train your own model, auto-label your datasets with the foundational model for faster time to production.You can also choose to use pretrained specialized processors—see the full list of processors.Deploy Document AI APIQuickstart: Set up the Document AI APIHands-on lab: Build an end-to-end document processing pipelineView Document AI code samplesHow-tosUnlock insights from nuanced documents with Document AIPowered by a foundational model, Document AI Custom Extractor extracts text and data from generic and domain-specific documents faster and with higher accuracy. Easily fine-tune with just 5-10 documents for even better performance. If you want to train your own model, auto-label your datasets with the foundational model for faster time to production.You can also choose to use pretrained specialized processors—see the full list of processors.Deploy Document AI APIQuickstart: Set up the Document AI APIHands-on lab: Build an end-to-end document processing pipelineView Document AI code samplesHigh-precision visual inspectionAutomate quality inspection with Visual Inspection AIVisual Inspection AI is optimized in every step so it’s easy to set up and fast to see ROI. With up to 300 times fewer labeled images to start training high-performance inspection models than general purpose ML platforms, it has shown to deliver up to 10 times higher accuracy. You can train models with no technical expertise, and they run on-premises. Best of all, the models can be continuously refreshed with data flowing from the factory floor, giving you increased accuracy as you discover new use cases.Try Visual Inspection AI APISkillsboost Lab: Create a Cosmetic Anomaly Detection Model Skillsboost Lab: Detect manufacturing defectsVideo: Experience real-time visual inspection at the edgeHow-tosAutomate quality inspection with Visual Inspection AIVisual Inspection AI is optimized in every step so it’s easy to set up and fast to see ROI. With up to 300 times fewer labeled images to start training high-performance inspection models than general purpose ML platforms, it has shown to deliver up to 10 times higher accuracy. You can train models with no technical expertise, and they run on-premises. 
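Returning to Document AI for a moment, here is a minimal sketch of the processing flow described above: send a local PDF to a processor you have already created and read back the extracted text. The project, location, and processor ID are placeholders.

```python
# Minimal sketch: process a local PDF with an existing Document AI processor
# and return the extracted text. Project, location, and processor ID are
# placeholders you create beforehand in Document AI.
from google.api_core.client_options import ClientOptions
from google.cloud import documentai

PROJECT_ID = "my-project"         # placeholder
LOCATION = "us"                   # placeholder
PROCESSOR_ID = "my-processor-id"  # placeholder (e.g. an OCR or Custom Extractor processor)

def extract_text(path: str = "invoice.pdf") -> str:
    client = documentai.DocumentProcessorServiceClient(
        client_options=ClientOptions(
            api_endpoint=f"{LOCATION}-documentai.googleapis.com"
        )
    )
    name = client.processor_path(PROJECT_ID, LOCATION, PROCESSOR_ID)

    with open(path, "rb") as f:
        raw_document = documentai.RawDocument(
            content=f.read(), mime_type="application/pdf"
        )

    result = client.process_document(
        request=documentai.ProcessRequest(name=name, raw_document=raw_document)
    )
    return result.document.text

if __name__ == "__main__":
    print(extract_text())
```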
Best of all, the models can be continuously refreshed with data flowing from the factory floor, giving you increased accuracy as you discover new use cases.Try Visual Inspection AI APISkillsboost Lab: Create a Cosmetic Anomaly Detection Model Skillsboost Lab: Detect manufacturing defectsVideo: Experience real-time visual inspection at the edgePricingHow Vision AI pricing worksEach vision offering has a set of features or processors, which have different pricing—check the detailed pricing pages for details.Free tierProduct/ServiceDiscounted pricingDetailsVision APIFirst 1,000 unitsevery month are free5,000,001+ unitsper monthDetailed pricing pageDocument AIN/APricing is processor-sensitive.5,000,001+ pagesper month for Enterprise Document OCR ProcessorDetailed pricing pageVideo Intelligence APIFirst 1,000 minutesper month are free100,000+ minutesper monthDetailed pricing pageVertex AI VisionN/APricing is feature-sensitive. Detailed pricing pageImagen—multimodal embeddings US $0.0001per image inputImagen—visual captioning US $0.0015per imageGemini Pro VisionDetailed pricing pageHow Vision AI pricing worksEach vision offering has a set of features or processors, which have different pricing—check the detailed pricing pages for details.Vision APIProduct/ServiceFirst 1,000 unitsevery month are freeDiscounted pricing5,000,001+ unitsper monthDetailsDetailed pricing pageDocument AIProduct/ServiceN/APricing is processor-sensitive.Discounted pricing5,000,001+ pagesper month for Enterprise Document OCR ProcessorDetailsDetailed pricing pageVideo Intelligence APIProduct/ServiceFirst 1,000 minutesper month are freeDiscounted pricing100,000+ minutesper monthDetailsDetailed pricing pageVertex AI VisionProduct/ServiceN/APricing is feature-sensitive.Discounted pricing DetailsDetailed pricing pageImagen—multimodal embeddingsProduct/Service Discounted pricing DetailsUS $0.0001per image inputImagen—visual captioningProduct/Service Discounted pricing DetailsUS $0.0015per imageGemini Pro VisionProduct/ServiceDiscounted pricingDetailsDetailed pricing pagePRICING CALCULATOREstimate the cost of your project by pulling in all the tools you need in a single place.Estimate your costCUSTOM QUOTEConnect with our sales team to get a custom quote for your organization's unique needs.Request a quoteStart your proof of conceptTry Vision AI in the consoleGo to my console1,000 pages/month are free with Document OCRTry Document AI API freeLearn how to stream live videos with Video Intelligence APIRead guideLearn how to build an object detector app in Vertex AI VisionRead guideGet code samples for Vision APIView code samplesGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Web3.txt b/Web3.txt new file mode 100644 index 0000000000000000000000000000000000000000..31201b098784ec0e6326f491d7b2062c144e3c31 --- /dev/null +++ b/Web3.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/web3 +Date Scraped: 2025-02-23T12:10:22.821Z + +Content: +Announcing the launch of our new Web3 Portal. Check out our new home for all things Web3 at Google Cloud.Google Cloud for Web3Build and scale faster with simple, secure tools, and infrastructure for Web3. 
Get co-sell and growth opportunities, like promotion on Google Cloud Marketplace, and support for on- and off-chain governance.Contact usGo to Web3 portal2:22See how Aptos, Dapper Labs, Near, and Solana built with Google CloudCalling all Web3 projects and startupsGet the technology, community, and resources you need to build.Our Web3 startup program gives you up to $200,000 USD over two years for Google Cloud and Firebase usage, access to our Discord channel, foundation grants, VIP event access, and more.Learn more about the programSpend your time building what matters mostWeb3 companies and projects choose Google Cloud because it’s faster and easier to get things done. Reduce the need for infrastructure maintenance, custom tooling, and operations.SCALEGet low-latency networking everywhere, scale node-hosting environments, and scale your database globallyUse Google’s own backbone to provide low-latency access to nodes and apps, and quickly expand regions with a Virtual Private Cloud.Support new protocols and scale existing clusters with GKE on any cloud using Service Mesh. Scale your database on Spanner with up to 99.999% availability.Related contentNode hosting on Google Cloud: A pillar of Web3 infrastructure2-min readService Mesh comes to GKE Autopilot5-min readNext ’22: Building applications with transformative databasesVideo (18:30)SECURITYSubmit and authorize digital assets transactions, keep signatures and data encrypted while in use, and reduce your attack surfaceUse Cloud KMS to manage encryption keys and sign transactions. Keep signatures and data encrypted and integrity-protected with Confidential Space trusted execution environment (TEE) backed by Confidential VMs.Utilize Container-Optimized OS, which is open source, has a small footprint, and is security hardened for containers.Related contentAnnouncing general availability of Confidential GKE Nodes5-min readHow to secure digital assets with multi-party computation and Confidential Space3-min readHow to transact digital assets with multi-party computation and Confidential SpaceCode labSPEEDBuild fast with industry-leading containers, simplify node deployment, and operate in the cleanBuild, deploy, and manage code changes with Firebase, GKE, and Compute Engine. Provision dedicated nodes and minimize node operations with Blockchain Node Engine. Improve the environmental impact of blockchain infrastructure with Carbon Footprint and build on the most efficient data centers.Related contentIntroducing Blockchain Node Engine: Fully managed node-hosting for Web3 development3-min readFour ways to cut your GKE costs (and your carbon footprint)5-min readSee how Google Cloud achieves efficiency with data centers5-min readINTELLIGENCEHandle events at scale, get real-time node and network insights, and stream and process your dataStream blockchain data in real time into BigQuery through Pub/Sub for on-demand analysis. 
Analyze data with BigQuery to automatically scale to thousands of cores in seconds.Use Dataflow and serverless Spark as fully managed and highly scalable services to run Apache Spark and Apache Beam workloads.Related contentCryptocurrencies in BigQuery public datasets—and how to analyze them5-min readNansen: Empowering crypto investors with blockchain data intelligence driven by Google Cloud5-min readWhat is Cloud Pub/Sub?Video (5:30)Instantly scale with demandSCALEGet low-latency networking everywhere, scale node-hosting environments, and scale your database globallyUse Google’s own backbone to provide low-latency access to nodes and apps, and quickly expand regions with a Virtual Private Cloud.Support new protocols and scale existing clusters with GKE on any cloud using Service Mesh. Scale your database on Spanner with up to 99.999% availability.Related contentNode hosting on Google Cloud: A pillar of Web3 infrastructure2-min readService Mesh comes to GKE Autopilot5-min readNext ’22: Building applications with transformative databasesVideo (18:30)Protect digital assets end to endSECURITYSubmit and authorize digital assets transactions, keep signatures and data encrypted while in use, and reduce your attack surfaceUse Cloud KMS to manage encryption keys and sign transactions. Keep signatures and data encrypted and integrity-protected with Confidential Space trusted execution environment (TEE) backed by Confidential VMs.Utilize Container-Optimized OS, which is open source, has a small footprint, and is security hardened for containers.Related contentAnnouncing general availability of Confidential GKE Nodes5-min readHow to secure digital assets with multi-party computation and Confidential Space3-min readHow to transact digital assets with multi-party computation and Confidential SpaceCode labGet to market fasterSPEEDBuild fast with industry-leading containers, simplify node deployment, and operate in the cleanBuild, deploy, and manage code changes with Firebase, GKE, and Compute Engine. Provision dedicated nodes and minimize node operations with Blockchain Node Engine. Improve the environmental impact of blockchain infrastructure with Carbon Footprint and build on the most efficient data centers.Related contentIntroducing Blockchain Node Engine: Fully managed node-hosting for Web3 development3-min readFour ways to cut your GKE costs (and your carbon footprint)5-min readSee how Google Cloud achieves efficiency with data centers5-min readEasily analyze on-chain dataINTELLIGENCEHandle events at scale, get real-time node and network insights, and stream and process your dataStream blockchain data in real time into BigQuery through Pub/Sub for on-demand analysis. 
Analyze data with BigQuery to automatically scale to thousands of cores in seconds.Use Dataflow and serverless Spark as fully managed and highly scalable services to run Apache Spark and Apache Beam workloads.Related contentCryptocurrencies in BigQuery public datasets—and how to analyze them5-min readNansen: Empowering crypto investors with blockchain data intelligence driven by Google Cloud5-min readWhat is Cloud Pub/Sub?Video (5:30)Fast and secure node infrastructure with Blockchain Node EngineDevelopers can easily deploy nodes with the reliability, performance, and security expected on Google Cloud.Learn moreWorking with Web3 leadersCheck out a collection of the most popular conversations with Web3 leaders that are changing the industry using Google Cloud tech.BNB ChainBNB Chain works with Google Cloud in developing sustainable innovations for Web3.Video (4:45)ChainstackChainstack's platform, built with Google Cloud, enables developers across DeFi, NFTs, and more.5-min readCoinbaseCoinbase is partnering with Google Cloud to build advanced exchange and data services.Video (2:40)AELFAELF is partnering with Google Cloud to provide highly available and secure blockchain solutions.5-min readIoTeXIoTeX handles 35 million transactions with 99.9% platform reliability and 20% monthly growth rates.5-min readHederaHedera and Google Cloud are making it easier for diverse organizations to make decisions.Video (8:38)Kyber NetworkTogether with Google Cloud, Kyber Network is ensuring platform security and user insights.Video (4:37)Merkle ScienceMerkle Science cut three months off development time and used BigQuery to build intelligence solutions.5-min readNansenNansen grew its customer base 20x with real-time data empowerment and AI predictive modeling.5-min readSky MavisSky Mavis is establishing secure gaming and transaction experiences with Google Cloud.Video (2:13)Web3AuthWeb3Auth is providing 17 million users seamless authentication using Google Cloud.Video (6:15)InabitInabit offers a highly secure platform for managing digital assets and cryptocurrencies with Google Cloud.5-min readView MoreWeb3 customers and Google Cloud deliver togetherWeb3 companies are leveraging Google Cloud managed services to build more robust digital assets and customized experiences for their growing user communities.Find more exciting Web3 newsFireblocks x Google Cloud’s Confidential Space to enhance digital asset securityNovember 26, 2024 Fireblocks announces new integration with Google Cloud’s Confidential Space, which dramatically enhances the security of customer digital assets.Read moreCelo announces partnership with Google CloudApril 4, 2023 Celo Foundation announces collaboration with Google Cloud to help sustainability-focused startups in the Celo network build and scale Web3 applications.Read nowThe Block's Frank Chaparro dives into Google Cloud Web3 March 23, 2023 In this episode of the Scoop, hear about Google Cloud's Web3 Strategy and how Blockchain Node Engine is making it easier for Web3 developers to focus solely on building their products.Listen nowGoogle Cloud adds Casper Labs to growing portfolio of Web3 partnershipsFeb 28, 2023 Google Cloud will provide the infrastructure for Casper Labs' professional services team to deploy and manage blockchain solutions that can meet customers' private and hybrid needs.Read nowGoogle Cloud broadens Web3 Slate by joining Tezos ‘Bakers’Feb 22, 2023 The Tezos Foundation has teamed up with Google Cloud to allow customers of the cloud computing business to deploy Tezos 
nodes and build Web3 applications on the blockchain.Read nowDeutsche Börse partners with Google Cloud to innovate in digital assetsFeb 10, 2023 Deutsche Börse will leverage Google Cloud’s secure infrastructure and data and analytics capabilities to accelerate the development of its digital securities platform D7 and more.Read nowAlphabet was recognized in the Forbes annual Blockchain 50 listFeb 8, 2023 Google Cloud formed a specialized team devoted to helping companies access crypto market data more quickly and launch blockchain-based products. They will soon accept cryptocurrency.Read nowGoogle to take payments with cryptocurrencies using CoinbaseOct 11, 2022 Google will start allowing a subset of customers to pay for cloud services with digital currencies early next year.Read nowNear Protocol partners with Google Cloud to support Web3 developersOct 6, 2022 In partnership with Near Protocol, Google Cloud will support Near developers in building and scaling Web3 projects and DApps.Read nowGoogle Cloud sits at the intersection of being a recognized contributor to the developer community and having deep technical expertise in blockchain infrastructure and running validators.Aleksander Larsen, COO, Sky MavisRead more on VentureBeatBuild faster and easier with our Web3 startup programBenefits for Web3 projects and startups give you the technology, community, and resources that you need to focus on innovation over infrastructure as you build dApps, Web3 tooling, services, and more.Apply nowLearn moreGoogle CloudLearn more about our productsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Web_App_and_API_Protection.txt b/Web_App_and_API_Protection.txt new file mode 100644 index 0000000000000000000000000000000000000000..b88246898ffdd5e9b054fe81636250a9e8921442 --- /dev/null +++ b/Web_App_and_API_Protection.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/security/solutions/web-app-and-api-protection +Date Scraped: 2025-02-23T12:01:00.833Z + +Content: +Explore cutting-edge innovations from Google Cloud and gain insights from Mandiant experts at Google Cloud Security Summit. Register now.Web App and API Protection (WAAP)Protect your applications and APIs against threats and fraud, help ensure availability and compliance.Try our new Security & Resilience Framework Discovery platform and get recommendations.Contact usStart DiscoveryWatch an overview of how Google's WAAP solution worksWatch the webinarBenefitsProven protection for apps in the cloud, on-premises, or hybrid deploymentsProtect against existing and emerging threatsAnti-DDoS, anti-bot, WAF, and API protection help you protect against new and existing threats while helping you keep your apps and APIs compliant and continuously available.Simplify operationsReduce the number of vendors you work with to protect your apps; leverage integrations with Google Cloud tools for consolidated management and visibility.WAAP anywhere, for lessGet proven, comprehensive protection of applications and APIs from a single vendor while potentially saving 50%–70% over competing solutions.Key featuresProtect applications and APIs with a comprehensive solutionGet support and protection against modern internet threats with Cloud Armor, reCAPTCHA Enterprise, and Apigee—all from Google Cloud.Cloud ArmorProtect your applications from DDoS attacks, filter incoming web requests by geo or a host of L7 parameters like request headers, cookies, or query strings with Cloud Armor. 
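As a hedged sketch of the request filtering Cloud Armor provides (not this solution's reference configuration), the following uses the google-cloud-compute Python client to create a security policy that denies traffic from one placeholder IP range and allows everything else by default. All names and ranges are assumptions for illustration.

```python
# Hedged sketch: a Cloud Armor security policy with one deny rule for a
# placeholder IP range and an allow-all default rule. Names and ranges are
# assumptions, not values from this page.
from google.cloud import compute_v1

PROJECT_ID = "my-project"  # placeholder

def create_edge_policy() -> None:
    deny_rule = compute_v1.SecurityPolicyRule()
    deny_rule.priority = 1000
    deny_rule.action = "deny(403)"
    deny_rule.match = compute_v1.SecurityPolicyRuleMatcher(
        versioned_expr="SRC_IPS_V1",
        config=compute_v1.SecurityPolicyRuleMatcherConfig(
            src_ip_ranges=["203.0.113.0/24"]  # placeholder range to block
        ),
    )

    default_allow = compute_v1.SecurityPolicyRule()
    default_allow.priority = 2147483647  # lowest priority: the default rule
    default_allow.action = "allow"
    default_allow.match = compute_v1.SecurityPolicyRuleMatcher(
        versioned_expr="SRC_IPS_V1",
        config=compute_v1.SecurityPolicyRuleMatcherConfig(src_ip_ranges=["*"]),
    )

    policy = compute_v1.SecurityPolicy()
    policy.name = "demo-edge-policy"
    policy.rules = [deny_rule, default_allow]

    compute_v1.SecurityPoliciesClient().insert(
        project=PROJECT_ID, security_policy_resource=policy
    ).result()

if __name__ == "__main__":
    create_edge_policy()
```

In a real deployment, the policy is then attached to the load balancer's backend service so that the rules are enforced at the edge.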
Cloud Armor is also a full-fledged web application firewall (WAF), and contains preconfigured rules from ModSecurity Core Rule Set to prevent against the most common web attacks and vulnerability exploit attempts.reCAPTCHA EnterpriseDefend your website from fraudulent activity, spam, and abuse like scraping, credential stuffing, automated account creation, and exploits from automated bots. reCAPTCHA Enterprise uses an adaptive risk analysis engine to keep automated software from engaging in abusive activities on your site.ApigeeBuild security into your APIs in minutes. Out-of-the-box policies enable developers to augment APIs with features to control traffic, enhance performance, and enforce security. Apigee provides for a positive security model understanding the structure of API requests so it can more accurately distinguish between valid and invalid traffic.Learn more about our WAAP solution's capabilitiesVideoLearn how to enhance API security with Apigee and Cloud ArmorWatch videoVideoLearn how to help protect against fraud and abuse with reCAPTCHA EnterpriseWatch videoVideoSee how Cloud Armor can help you protect your websites and applicationsWatch videoDocumentationExplore documentation for Web Application and API ProtectionUse the assets below to learn more about how WAAP can help you protect your apps anywhere to address modern threats.WhitepaperMeeting the challenges of securing modern web apps with WAAPModern web applications have introduced new security challenges. Read this ESG whitepaper to learn more about meeting and addressing those challenges.Learn moreBest PracticeBest practices for securing your apps and APIs using ApigeeRead more about the best practices that can help you to secure your applications and APIs using Apigee API management and other Google Cloud products.Learn moreNot seeing what you’re looking for?View documentationWhat's newWhat's newSign up for Google Cloud newsletters to receive product updates, event information, special offers, and more.VideoProtect your web apps and APIs against threats and fraud with Google CloudLearn moreBlog postBetter protect your web apps and APIs against threats and fraudLearn moreBlog postApigee X is now available and works with WAAPLearn moreTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Website_hosting.txt b/Website_hosting.txt new file mode 100644 index 0000000000000000000000000000000000000000..f5abd8411bdbe8449dfc0fc864a268beefe71bf5 --- /dev/null +++ b/Website_hosting.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/web-serving-overview +Date Scraped: 2025-02-23T11:48:45.912Z + +Content: +Home Docs Cloud Architecture Center Send feedback Website hosting Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-06-11 UTC This article discusses how to host a website on Google Cloud. Google Cloud provides a robust, flexible, reliable, and scalable platform for serving websites. Google built Google Cloud by using the same infrastructure that Google uses to serve content from sites such as Google.com, YouTube, and Gmail. You can host your website's content by using the type and design of infrastructure that best suits your needs. 
You might find this article useful if you are: Knowledgeable about how to create a website and have deployed and run some web-hosting infrastructure before. Evaluating whether and how to migrate your site to Google Cloud. If you want to build a simple website, consider using Google Sites, a structured wiki- and web page–creation tool. For more information, visit Sites help. Note: You might find it helpful to read the Concepts page of the Google Cloud overview before reading this article. Some of those concepts are referenced in this article without further explanation. If you're already a bit familiar with Google Cloud, you can skip this step. Choosing an option If you're new to using Google Cloud, it's a reasonable approach to start by using the kind of technology you're already familiar with. For example, if you currently use hardware servers or virtual machines (VMs) to host your site, perhaps with another cloud provider or on your own hardware, Compute Engine provides a familiar paradigm for you. If you prefer serverless computing, then Cloud Run is probably a good option for you. If you use a platform-as-a-service (PaaS) offering, such as Heroku or Engine Yard, then App Engine can be the best place to start. After you become more familiar with Google Cloud, you can explore the richness of products and services that Google Cloud provides. For example, if you started by using Compute Engine, you might augment your site's capabilities by using Google Kubernetes Engine (GKE) or migrate some or all of the functionality to App Engine and Cloud Run. The following table summarizes your hosting options on Google Cloud: Option Product Data storage Load balancing Scalability Logging and monitoring Static website Cloud Storage Firebase Hosting Cloud Storage bucket HTTP(S) optional Automatically Cloud Logging Cloud Monitoring Virtual machines Compute Engine Cloud SQL, Cloud Storage, Firestore, and Bigtable, or you can use another external storage provider. Hard-disk-based persistent disks, called standard persistent disks, and solid-state persistent disks (SSD). HTTP(S) TCP Proxy SSL Proxy IPv6 termination Network Cross-region Internal Automatically with managed instance groups Cloud Logging Cloud Monitoring Monitoring console Containers GKE Similar to Compute Engine but interacts with persistent disks differently Network HTTP(S) Cluster autoscaler Cloud Logging Cloud Monitoring Monitoring console Serverless Cloud Run Google Cloud services such as Cloud SQL, Firestore, Cloud Storage, and accessible third-party databases HTTP(S)Managed by Google Managed by Google Cloud Logging Cloud Monitoring Monitoring console Managed platform App Engine Google Cloud services such as Cloud SQL, Firestore, Cloud Storage, and accessible third-party databases HTTP(S)Managed by Google Managed by Google Cloud Logging Cloud Monitoring Monitoring console This article can help you to understand the main technologies that you can use for web hosting on Google Cloud and give you a glimpse of how the technologies work. The article provides links to complete documentation, tutorials, and solutions articles that can help you build a deeper understanding, when you're ready. Understanding costs Because there are so many variables and each implementation is different, it's beyond the scope of this article to provide specific advice about costs. To understand Google's principles about how pricing works on Google Cloud, see the pricing page. To understand pricing for individual services, see the product pricing section. 
You can also use the pricing calculator to estimate what your Google Cloud usage might look like. You can provide details about the services you want to use and then see a pricing estimate. Setting up domain name services Usually, you will want to register a domain name for your site. You can use a public domain name registrar to register a unique name for your site. If you want complete control of your own domain name system (DNS), you can use Cloud DNS to serve as your DNS provider. The Cloud DNS documentation includes a quickstart to get you going. If you have an existing DNS provider that you want to use, you generally need to create a couple of records with that provider. For a domain name such as example.com, you create an A record with your DNS provider. For the www.example.com sub-domain, you create a CNAME record for www to point it to the example.com domain. The A record maps a hostname to an IP address. The CNAME record creates an alias for the A record. If your domain name registrar is also your DNS provider, that's probably all you need to do. If you use separate providers for registration and DNS, make sure that your domain name registrar has the correct name servers associated with your domain. After making your DNS changes, the record updates can take some time to propagate depending on your time-to-live (TTL) values in your zone. If this is a new hostname, the changes go into effect quickly because the DNS resolvers don't have cached previous values and can contact the DNS provider to get the necessary information to route requests. Hosting a static website The simplest way to serve website content over HTTP(S) is to host static web pages. Static web pages are served unchanged, as they were written, usually by using HTML. Using a static website is a good option if your site's pages rarely change after they have been published, such as blog posts or pages that are part of a small-business website. You can do a lot with static web pages, but if you need your site to have robust interactions with users through server-side code, you should consider the other options discussed in this article. Hosting a static website with Cloud Storage Note: Though Cloud Storage serves content over HTTPS, it doesn't support end-to-end HTTPS for custom domains. If you need end-to-end HTTPS serving, check out Firebase hosting in the next section. Alternatively, you can use HTTP(S) load balancing with Cloud Storage to serve content from a custom domain over HTTPS. To host a static site in Cloud Storage, you need to create a Cloud Storage bucket, upload the content, and test your new site. You can serve your data directly from storage.googleapis.com, or you can verify that you own your domain and use your domain name. You can create your static web pages however you choose. For example, you could hand-author pages by using HTML and CSS. You can use a static-site generator, such as Jekyll, Ghost, or Hugo, to create the content. With static-site generators, you create a static website by authoring in markdown, and providing templates and tools. Site generators generally provide a local web server that you can use to preview your content. After your static site is working, you can update the static pages by using any process you like. That process can be as straightforward as hand-copying an updated page to the bucket. You might choose to use a more automated approach, such as storing your content on GitHub and then using a webhook to run a script that updates the bucket. 
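As an example of the kind of update script mentioned above, here is a minimal sketch using the Cloud Storage Python client: it sets the bucket's website configuration and uploads a freshly built index page with a cache-control header. The bucket and file names are placeholders, and the bucket is assumed to already exist and to be publicly readable.

```python
# Minimal sketch: publish a built static page to an existing website bucket.
# Bucket and file names are placeholders.
from google.cloud import storage

def publish_page(bucket_name: str = "www.example.com",
                 local_file: str = "public/index.html") -> None:
    client = storage.Client()
    bucket = client.bucket(bucket_name)

    # Serve index.html at the root URL and 404.html for missing objects.
    bucket.configure_website(main_page_suffix="index.html",
                             not_found_page="404.html")
    bucket.patch()

    blob = bucket.blob("index.html")
    blob.cache_control = "public, max-age=3600"  # let caches keep it for an hour
    blob.upload_from_filename(local_file, content_type="text/html")

if __name__ == "__main__":
    publish_page()
```

A webhook or CI/CD job can call a script like this whenever the site content changes.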
An even more advanced system might use a continuous-integration/continuous-delivery (CI/CD) tool, such as Jenkins, to update the content in the bucket. Jenkins has a Cloud Storage plugin that provides a Google Cloud Storage Uploader post-build step to publish build artifacts to Cloud Storage. If you have a web app that needs to serve static content or user-uploaded static media, using Cloud Storage can be a cost-effective and efficient way to host and serve this content, while reducing the amount of dynamic requests to your web app. Additionally, Cloud Storage can directly accept user-submitted content. This feature lets users upload large media files directly and securely without proxying through your servers. To get the best performance from your static website, see Best practices for Cloud Storage. For more information, see the following pages: Hosting a static website J is for Jenkins (blog post) Band Aid 30 on Google Cloud (blog post) Cloud Storage documentation Hosting a static website with Firebase Hosting Firebase Hosting provides fast and secure static hosting for your web app. With Firebase Hosting, you can deploy web apps and static content to a global content-delivery network (CDN) by using a single command. Here are some benefits you get when you use Firebase Hosting: Zero-configuration SSL is built into Firebase Hosting. Provisions SSL certificates on custom domains for free. All of your content is served over HTTPS. Your content is delivered to your users from CDN edges around the world. Using the Firebase CLI, you can get your app up and running in seconds. Use command-line tools to add deployment targets into your build process. You get release management features, such as atomic deployment of new assets, full versioning, and one-click rollbacks. Hosting offers a configuration useful for single-page apps and other sites that are more app-like. Hosting is built to be used seamlessly with other Firebase features. For more information, see the following pages: Firebase Hosting guide Get started with Firebase Hosting Using virtual machines with Compute Engine For infrastructure as a service (IaaS) use cases, Google Cloud provides Compute Engine. Compute Engine provides a robust computing infrastructure, but you must choose and configure the platform components that you want to use. With Compute Engine, it's your responsibility to configure, administer, and monitor the systems. Google ensures that resources are available, reliable, and ready for you to use, but it's up to you to provision and manage them. The advantage, here, is that you have complete control of the systems and unlimited flexibility. Use Compute Engine to design and deploy nearly any website-hosting system you want. You can use VMs, called instances, to build your app, much like you would if you had your own hardware infrastructure. Compute Engine offers a variety of machine types to customize your configuration to meet your needs and your budget. You can choose which operating systems, development stacks, languages, frameworks, services, and other software technologies you prefer. Setting up automatically with Google Cloud Marketplace The easiest way to deploy a complete web-hosting stack is by using Google Cloud Marketplace. With just a few clicks, you can deploy any of over 100 fully realized solutions with Google Click to Deploy or Bitnami. For example, you can set up a LAMP stack or WordPress with Cloud Marketplace. 
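To illustrate the instance provisioning that Compute Engine hosting involves, here is a sketch using the Compute Engine Python client that boots a small Debian VM on the default network. The project, zone, machine type, and names are placeholders; a real web server would also need a startup script, network tags, and firewall rules.

```python
# Sketch: create a small Debian VM on the default network. Project, zone,
# machine type, and names are placeholders.
from google.cloud import compute_v1

PROJECT_ID = "my-project"  # placeholder
ZONE = "us-central1-a"     # placeholder

def create_web_server() -> None:
    boot_disk = compute_v1.AttachedDisk()
    boot_disk.boot = True
    boot_disk.auto_delete = True
    boot_disk.initialize_params = compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-12",
        disk_size_gb=10,
    )

    nic = compute_v1.NetworkInterface()
    nic.network = "global/networks/default"

    instance = compute_v1.Instance()
    instance.name = "web-server-1"
    instance.machine_type = f"zones/{ZONE}/machineTypes/e2-small"
    instance.disks = [boot_disk]
    instance.network_interfaces = [nic]

    compute_v1.InstancesClient().insert(
        project=PROJECT_ID, zone=ZONE, instance_resource=instance
    ).result()

if __name__ == "__main__":
    create_web_server()
```

The same provisioning can be expressed declaratively with Terraform, which is the repeatable approach recommended in this section.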
The system deploys a complete, working software stack in just a few minutes on a single instance. Before you deploy, Cloud Marketplace shows you cost estimates for running the site, gives you clear information about which versions of the software components it installs for you, and lets you customize your configuration by changing component instance names, choosing the machine type, and choosing a disk size. After you deploy, you have complete control over the Compute Engine instances, their configurations, and the software. Setting up manually You can also create your infrastructure on Compute Engine manually, either building your configuration from scratch or building on a Google Cloud Marketplace deployment. For example, you might want to use a version of a software component not offered by Cloud Marketplace, or perhaps you prefer to install and configure everything on your own. Providing a complete framework and best practices for setting up a website is beyond the scope of this article. But from a high-level view, the technical side of setting up a web-hosting infrastructure on Compute Engine requires that you: Understand the requirements. If you're building a new website, make sure you understand the components you need, such as instances, storage needs, and networking infrastructure. If you're migrating your app from an existing solution, you probably already understand these requirements, but you need to think through how your existing setup maps to Google Cloud services. Plan the design. Think through your architecture and write down your design. Be as explicit as you can. Create the components. The components that you might usually think of as physical assets, such as computers and network switches, are provided through services in Compute Engine. For example, if you want a computer, you have to create a Compute Engine instance. If you want a persistent hard disk drive, you create that, too. Infrastructure-as-code tools, such as Terraform, make this an easy and repeatable process. Configure and customize. After you have the components you want, you need to configure them, install and configure software, and write and deploy any customization code that you require. You can replicate the configuration by running shell scripts, which helps to speed future deployments. Terraform helps here, too, by providing declarative, flexible configuration templates for automatic deployment of resources. You can also take advantage of IT automation tools such as Puppet and Chef. Deploy the assets. Upload your web pages, images, and other content. Test. Verify that everything works as you expect. Deploy to production. Open up your site for the world to see and use. Storing data with Compute Engine Most websites need some kind of storage. You might need storage for a variety of reasons, such as saving files that your users upload, and of course the assets that your site uses. Google Cloud provides a variety of managed storage services, including: A SQL database in Cloud SQL, which is a fully managed relational database service for MySQL, PostgreSQL, and SQL Server. Two options for NoSQL data storage: Firestore and Bigtable. Memorystore, which is a fully managed in-memory data store service for Redis and Memcached. Consistent, scalable, large-capacity object storage in Cloud Storage. Cloud Storage comes in several classes: Standard provides maximum availability. Nearline provides a low-cost choice ideal for data accessed less than once a month.
Coldline provides a low-cost choice ideal for data accessed less than once a quarter. Archive provides the lowest-cost choice for archiving, backup, and disaster recovery. Persistent disks on Compute Engine for use as primary storage for your instances. Compute Engine offers both hard-disk-based persistent disks, called standard persistent disks, and solid-state persistent disks (SSD). You can also choose to set up your preferred storage technology on Compute Engine by using persistent disks. For example, you can set up PostgreSQL as your SQL database or MongoDB as your NoSQL storage. To understand the full range and benefits of storage services on Google Cloud, see Choosing a storage option. Load balancing with Compute Engine For any website that operates at scale, using load-balancing technologies to distribute the workload among servers is often a requirement. You have a variety of options when architecting your load-balanced web servers on Compute Engine, including: HTTP(S) load balancing. Explains the fundamentals of using Cloud Load Balancing. Content-based load balancing. Demonstrates how to distribute traffic to different instances based on the incoming URL. Cross-region load balancing. Demonstrates configuring VM instances in different regions and using HTTP or HTTPS load balancing to distribute traffic across the regions. TCP Proxy load balancing. Demonstrates setting up global TCP Proxy load balancing for a service that exists in multiple regions. SSL Proxy load balancing. Demonstrates setting up global SSL Proxy load balancing for a service that exists in multiple regions. IPv6 termination for HTTP(S), SSL Proxy, and TCP Proxy load balancing. Explains IPv6 termination and the options for configuring load balancers to handle IPv6 requests. Network load balancing. Shows a basic scenario that sets up a layer 3 load balancing configuration to distribute HTTP traffic across healthy instances. Cross-region load balancing using Microsoft IIS backends. Shows how to use the Compute Engine load balancer to distribute traffic to Microsoft Internet Information Services (IIS) servers. Setting up internal load balancing You can set up a load balancer that distributes network traffic on a private network that isn't exposed to the internet. Internal load balancing is useful not only for intranet apps where all traffic remains on a private network, but also for complex web apps where a frontend sends requests to backend servers by using a private network. Load balancing deployment is flexible, and you can use Compute Engine with your existing solutions. For example, HTTP(S) load balancing using Nginx is one possible solution that you could use in place of the Compute Engine load balancer. Content distribution with Compute Engine Because response time is a fundamental metric for any website, using a CDN to lower latency and increase performance is often a requirement, especially for a site with global web traffic. Cloud CDN uses Google's globally distributed edge points of presence to deliver content from cache locations closest to users. Cloud CDN works with HTTP(S) load balancing. To serve content out of Compute Engine, Cloud Storage, or both from a single IP address, enable Cloud CDN for an HTTP(S) load balancer. Autoscaling with Compute Engine You can set up your architecture to add and remove servers as demand varies. This approach can help to ensure that your site performs well under peak load, while keeping costs under control during more-typical demand periods. 
Compute Engine provides an autoscaler that you can use for this purpose. Autoscaling is a feature of managed instance groups. A managed instance group is a pool of homogeneous virtual machine instances, created from a common instance template. An autoscaler adds or removes instances in a managed instance group. Although Compute Engine has both managed and unmanaged instance groups, you can only use managed instance groups with an autoscaler. For more information, see autoscaling on Compute Engine. For an in-depth look at what it takes to build a scalable and resilient web-app solution, see Building scalable and resilient web apps. Logging and monitoring with Compute Engine Google Cloud includes features that you can use to keep tabs on what's happening with your website. Cloud Logging collects and stores logs from apps and services on Google Cloud. You can view or export logs and integrate third-party logs by using a logging agent. Cloud Monitoring provides dashboards and alerts for your site. You configure Monitoring with the Google Cloud console. You can review performance metrics for cloud services, virtual machines, and common open source servers such as MongoDB, Apache, Nginx, and Elasticsearch. You can use the Cloud Monitoring API to retrieve monitoring data and create custom metrics. Cloud Monitoring also provides uptime checks, which send requests to your websites to see if they respond. You can monitor a website's availability by deploying an alerting policy that creates an incident if the uptime check fails. Managing DevOps with Compute Engine For information about managing DevOps with Compute Engine, see Distributed load testing using Kubernetes. Using containers with GKE You might already be using containers, such as Docker containers. For web hosting, containers offer several advantages, including: Componentization. You can use containers to separate the various components of your web app. For example, suppose your site runs a web server and a database. You can run these components in separate containers, modifying and updating one component without affecting the other. As your app's design becomes more complex, containers are a good fit for a service-oriented architecture, including microservices. This kind of design supports scalability, among other goals. Portability. A container has everything it needs to run—your app and its dependencies are bundled together. You can run your containers on a variety of platforms, without worrying about the underlying system details. Rapid deployment. When it's time to deploy, your system is built from a set of definitions and images, so the parts can be deployed quickly, reliably, and automatically. Containers are typically small and deploy much more quickly than, for example, virtual machines. Container computing on Google Cloud offers even more advantages for web hosting, including: Orchestration. GKE is a managed service built on Kubernetes, the open source container-orchestration system introduced by Google. With GKE, your code runs in containers that are part of a cluster that is composed of Compute Engine instances. Instead of administering individual containers or creating and shutting down each container manually, you can automatically manage the cluster through GKE, which uses the configuration you define. Image registration. Artifact Registry provides private storage for Docker images on Google Cloud.
You can access the registry through an HTTPS endpoint, so you can pull images from any machine, whether it's a Compute Engine instance or your own hardware. The registry service hosts your custom images in Cloud Storage under your Google Cloud project. This approach ensures by default that your custom images are only accessible by principals that have access to your project. Mobility. This means that you have the flexibility to move and combine workloads with other cloud providers, or mix cloud computing workloads with on-premises implementations to create a hybrid solution. Storing data with GKE Because GKE runs on Google Cloud and uses Compute Engine instances as nodes, your storage options have a lot in common with storage on Compute Engine. You can access Cloud SQL, Cloud Storage, Firestore, and Bigtable through their APIs, or you can use another external storage provider if you choose. However, GKE does interact with Compute Engine persistent disks in a different way than a normal Compute Engine instance would. A Compute Engine instance includes an attached disk. When you use Compute Engine, as long as the instance exists, the disk volume remains with the instance. You can even detach the disk and use it with a different instance. But in a container, on-disk files are ephemeral. When a container restarts, such as after a crash, the on-disk files are lost. Kubernetes solves this issue by using volume and Storage Class abstractions. One type of storage class is GCE PD. This means that you can use Compute Engine persistent disks with containers to keep your data files from being deleted when you use GKE. To understand the features and benefits of a volume, you should first understand a bit about pods. You can think of a pod as an app-specific logical host for one or more containers. A pod runs on a node instance. When containers are members of a pod, they can share several resources, including a set of shared storage volumes. These volumes enable data to survive container restarts and to be shared among the containers within the pod. Of course, you can use a single container and volume in a pod, too, but the pod is a required abstraction to logically connect these resources to each other. For an example, see the tutorial Using persistent disks with WordPress and MySQL. Load balancing with GKE Many large web-hosting architectures need to have multiple servers running that can share the traffic demands. Because you can create and manage multiple containers, nodes, and pods with GKE, it's a natural fit for a load-balanced web-hosting system. Using network load balancing The easiest way to create a load balancer in GKE is to use Compute Engine's network load balancing. Network load balancing can balance the load of your systems based on incoming internet protocol data, such as the address, port, and protocol type. Network load balancing uses forwarding rules. These rules point to target pools that list which instances are available to be used for load balancing. With network load balancing, you can load balance additional TCP/UDP-based protocols such as SMTP traffic, and your app can directly inspect the packets. You can deploy network load balancing simply by adding the type: LoadBalancer field to your service configuration file. Using HTTP(S) load balancing If you need more advanced load-balancing features, such as HTTPS load balancing, content-based load balancing, or cross-region load balancing, you can integrate your GKE service with Compute Engine's HTTP/HTTPS load balancing feature. 
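Before moving on to the Ingress-based approach, here is a minimal sketch of the type: LoadBalancer Service described above, created with the official Kubernetes Python client instead of a YAML manifest; the service name, labels, and ports are placeholders, not values from this guide.

```python
# Minimal sketch: expose pods behind a Service of type LoadBalancer,
# which provisions a Compute Engine network load balancer in GKE.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",               # requests an external network load balancer
        selector={"app": "web-frontend"},  # matches the pods to balance traffic across
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

core_v1 = client.CoreV1Api()
core_v1.create_namespaced_service(namespace="default", body=service)
# The external IP address appears on the Service once it's provisioned,
# for example in the output of: kubectl get service web-frontend
```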
Kubernetes provides the Ingress resource that encapsulates a collection of rules for routing external traffic to Kubernetes endpoints. In GKE, an Ingress resource handles provisioning and configuring the Compute Engine HTTP/HTTPS load balancer. For more information about using HTTP/HTTPS load balancing in GKE, see Setting up HTTP load balancing with Ingress. Scaling with GKE For automatic resizing of clusters, you can use the Cluster Autoscaler. This feature periodically checks whether there are any pods that are waiting for a node with free resources but aren't being scheduled. If such pods exist, then the autoscaler resizes the node pool if resizing would allow the waiting pods to be scheduled. Cluster Autoscaler also monitors the usage of all nodes. If a node isn't needed for an extended period of time, and all of its pods can be scheduled elsewhere, then the node is deleted. For more information about the Cluster Autoscaler, its limitations, and best practices, see the Cluster Autoscaler documentation. Logging and monitoring with GKE Like on Compute Engine, Logging and Monitoring provide your logging and monitoring services. Logging collects and stores logs from apps and services. You can view or export logs and integrate third-party logs by using a logging agent. Monitoring provides dashboards and alerts for your site. You configure Monitoring with the Google Cloud console. You can review performance metrics for cloud services, virtual machines, and common open source servers such as MongoDB, Apache, Nginx, and Elasticsearch. You can use the Monitoring API to retrieve monitoring data and create custom metrics. Managing DevOps with GKE When you use GKE, you're already getting many of the benefits most people think of when they think of DevOps. This is especially true when it comes to ease of packaging, deployment, and management. For your CI/CD workflow needs, you can take advantage of tools that are built for the cloud, such as Cloud Build and Cloud Deploy, or popular tools such as Jenkins. For more information, see the following articles: Jenkins on GKE Setting up Jenkins on GKE Configuring Jenkins for GKE Building on a serverless platform with Cloud Run Google Cloud's serverless platform lets you write code your way without worrying about the underlying infrastructure. You can build full-stack serverless applications with Google Cloud's storage, databases, machine learning, and more. You can also deploy your containerized websites to Cloud Run in addition to using GKE. Cloud Run is a fully managed serverless platform that lets you run highly scalable containerized applications on Google Cloud. You only pay for the time that your code runs. Using containers with Cloud Run, you can take advantage of mature technologies such as Nginx, Express.js, and Django to build your websites, access your SQL database on Cloud SQL, and render dynamic HTML pages. The Cloud Run documentation includes a quickstart to get you going. Storing data with Cloud Run Cloud Run containers are ephemeral, and you need to understand their quotas and limits for your use cases. Files can be temporarily stored for processing in a container instance, but this storage comes out of the available memory for the service as described in the runtime contract. For persistent storage, similar to App Engine, you can choose Google Cloud's services such as Cloud Storage, Firestore, or Cloud SQL. Alternatively, you can also use a third-party storage solution.
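As a minimal sketch of that pattern, assuming a Flask-based Cloud Run service and the google-cloud-storage client library, the handler below writes user uploads to a bucket instead of the container's filesystem; the bucket name and route are placeholders, not values from this guide.

```python
# Minimal sketch: persist uploads to Cloud Storage rather than the
# container's ephemeral, in-memory filesystem.
import os
from flask import Flask, request
from google.cloud import storage

app = Flask(__name__)
bucket = storage.Client().bucket("my-site-uploads")  # assumed bucket name

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files["file"]                 # form field named "file" (assumed)
    blob = bucket.blob(f"uploads/{file.filename}")
    blob.upload_from_file(file.stream, content_type=file.content_type)
    return {"stored": blob.name}, 201

if __name__ == "__main__":
    # Cloud Run sets the PORT environment variable for the container.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```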
Load balancing and autoscaling with Cloud Run By default, when you build on Cloud Run, it automatically routes incoming requests to appropriate backend containers and does load balancing for you. However, if you want to take advantage of Google Cloud's fully featured enterprise-grade HTTP(S) load balancing capabilities, you can use serverless network endpoint groups. With HTTP(S) load balancing, you can enable Cloud CDN or serve traffic from multiple regions. In addition, you can use middleware such as API Gateway to enhance your service. For Cloud Run, Google Cloud manages container instance autoscaling for you. Each revision is automatically scaled to the number of container instances needed to handle all incoming requests. When a revision doesn't receive any traffic, by default it's scaled to zero container instances. However, you can change this default by using the minimum instances setting to keep a specified number of instances idle or warm. Logging and monitoring with Cloud Run Cloud Run has two types of logs, which are automatically sent to Cloud Logging: Request logs: logs of requests sent to Cloud Run services. These logs are created automatically. Container logs: logs emitted from the container instances, typically from your own code, written to supported locations as described in Writing container logs. You can view logs for your service in a couple of ways: Use the Cloud Run page in the Google Cloud console. Use Cloud Logging Logs Explorer in the Google Cloud console. Both of these viewing methods examine the same logs stored in Cloud Logging, but the Logs Explorer provides more details and more filtering capabilities. Cloud Monitoring provides Cloud Run performance monitoring, metrics, and uptime checks, along with alerts to send notifications when certain metric thresholds are exceeded. Google Cloud Observability pricing applies, which means there is no charge for metrics on the fully managed version of Cloud Run. Note that you can also use Cloud Monitoring custom metrics. Cloud Run is integrated with Cloud Monitoring with no setup or configuration required. This means that metrics of your Cloud Run services are automatically captured when they are running. Building on a managed platform with App Engine On Google Cloud, the managed platform as a service (PaaS) is called App Engine. When you build your website on App Engine, you can focus on coding your features and let Google manage the supporting infrastructure. App Engine provides a wide range of features that make scalability, load balancing, logging, monitoring, and security much easier than if you had to build and manage them yourself. App Engine lets you code in a variety of programming languages, and it can use a variety of other Google Cloud services. App Engine provides the standard environment, which lets you run apps in a secure, sandboxed environment. The App Engine standard environment distributes requests across multiple servers, and scales servers to meet traffic demands. Your app runs in its own secure, reliable environment that's independent of the hardware, operating system, or physical location of the server. To give you more options, App Engine offers the flexible environment. When you use the flexible environment, your app runs on configurable Compute Engine instances, but App Engine manages the hosting environment for you. This means that you can use additional runtimes, including custom runtimes, for more programming language choices.
You can also take advantage of some of the flexibility that Compute Engine offers, such as choosing from a variety of CPU and memory options. Programming languages The App Engine standard environment provides default runtimes, and you write source code in specific versions of the supported programming languages. With the flexible environment, you write source code in a version of any of the supported programming languages. You can customize these runtimes or provide your own runtime with a custom Docker image or Dockerfile. If the programming language you use is a primary concern, you need to decide whether the runtimes provided by the App Engine standard environment meet your requirements. If they don't, you should consider using the flexible environment. To determine which environment best meets your app's needs, see Choosing an App Engine environment. Getting started tutorials by language The following tutorials can help you get started using the App Engine standard environment: Hello World in Python Hello World in Java Hello World in PHP Hello World in Ruby Hello World in Go Hello World in Node.js The following tutorials can help you get started using the flexible environment: Getting started with Python Getting started with Java Getting started with PHP Getting started with Go Getting started with Node.js Getting started with Ruby Getting started with .NET Storing data with App Engine App Engine gives you several options for storing your data: Firestore stores schemaless data and is strongly consistent. Cloud SQL stores relational data and is strongly consistent. Cloud Storage stores files and their associated metadata, and is strongly consistent except when performing list operations that get a list of buckets or objects. You can also use several third-party databases with the standard environment. For more details about storage in App Engine, see Choosing a storage option, and then select your preferred programming language. When you use the flexible environment, you can use all of the same storage options as you can with the standard environment, and a wider range of third-party databases as well. For more information about third-party databases in the flexible environment, see Using third-party databases. Load balancing and autoscaling with App Engine By default, App Engine automatically routes incoming requests to appropriate backend instances and does load balancing for you. However, if you want to take advantage of Google Cloud's fully featured enterprise-grade HTTP(S) load balancing capabilities, you can use serverless network endpoint groups. For scaling, App Engine can automatically create and shut down instances as traffic fluctuates, or you can specify a number of instances to run regardless of the amount of traffic. Logging and monitoring with App Engine In App Engine, requests are logged automatically, and you can view these logs in the Google Cloud console. App Engine also works with standard, language-specific libraries that provide logging functionality and forwards the log entries to the logs in the Google Cloud console. For example, in Python you can use the standard Python logging module and in Java you can integrate the logback appender or java.util.logging with Cloud Logging. This approach enables the full features of Cloud Logging and requires only a few lines of Google Cloud-specific code. Cloud Monitoring provides features for monitoring your App Engine apps. Through the Google Cloud console, you can monitor incidents, uptime checks, and other details.
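As a minimal sketch of the logging integration described above, assuming the google-cloud-logging client library, the following attaches Cloud Logging to the standard Python logging module so that existing log statements appear in the Google Cloud console; the log messages are placeholders.

```python
# Minimal sketch: route standard Python logging records to Cloud Logging.
import logging
import google.cloud.logging

client = google.cloud.logging.Client()
client.setup_logging()  # attaches a Cloud Logging handler to the root logger

logging.info("Order page rendered")            # appears as an INFO log entry
logging.error("Payment service unreachable")   # appears as an ERROR log entry
```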
Building content management systems Hosting a website means managing your website assets. Cloud Storage provides a global repository for these assets. One common architecture deploys static content to Cloud Storage and then syncs to Compute Engine to render dynamic pages. Cloud Storage works with many third-party content management systems, such as WordPress, Drupal, and Joomla. Cloud Storage also offers an Amazon S3 compatible API, so any system that works with Amazon S3 can work with Cloud Storage. The diagram below is a sample architecture for a content management system. What's next Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center. Send feedback \ No newline at end of file diff --git a/What's_new(1).txt b/What's_new(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..de578ee027e9aa7d09ae4533c09ceaadbcd558be --- /dev/null +++ b/What's_new(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/framework/whats-new +Date Scraped: 2025-02-23T11:42:38.425Z + +Content: +Home Docs Cloud Architecture Center Send feedback What's new in the Architecture Framework Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2025-02-05 UTC This document lists significant changes to the Google Cloud Architecture Framework. February 5, 2025 Security, privacy, and compliance pillar: Major update to align the recommendations with core principles of security, privacy, and compliance. December 30, 2024 Reliability pillar: Major update to align the recommendations with core principles of reliability. December 6, 2024 Performance optimization pillar: Major update to align the recommendations with core principles of performance optimization. October 31, 2024 Operational excellence pillar: Major update to align the recommendations with core principles of operational excellence. October 11, 2024 Added the AI and ML perspective. These documents describe principles and recommendations that are specific to AI and ML, aligned to each pillar of the Architecture Framework. September 25, 2024 Cost optimization pillar: Major update to align the recommendations with core principles of cost optimization. August 27, 2024 The System Design category of the Architecture Framework has been removed. The guidance in this category is available in other Google Cloud documentation. June 4, 2024 Cost optimization category: Added information about monitoring the cost of Dataflow resources. Add recommendations for optimizing cost by using the at-least-once streaming mode and the dynamic thread scaling feature. May 31, 2024 Cost optimization category: Added information about the FinOps hub. May 30, 2024 System design category: Added a new design principle, Design for change. May 23, 2024 Cost optimization category: Added guidance in Optimize cost: Compute, containers, and serverless about reducing cost by suspending idle Compute Engine VMs . May 16, 2024 System design category: Added information about the Batch service in Choose and manage compute. May 9, 2024 Cost optimization category: Updated guidance about Dataflow Shuffle in Optimize cost: Databases and smart analytics. April 22, 2024 Cost optimization category: Added a recommendation about considering storage cost when configuring the log-retention period in Optimize cost: Cloud operations. 
March 29, 2024 Reliability category: Reorganized the guidance related to SLOs and SLIs to improve usability and alignment with SRE guidance. January 4, 2024 Performance optimization category: Updated the guidance for Network Service Tiers. November 28, 2023 Reliability category: Reorganized the content to improve readability and consistency: Define SLOs: Moved "Terminology" section to new page: Terminology Adopt SLOs: Moved "SLOs and Alerts" section to new page: SLOs and Alerts November 9, 2023 System design category: Added guidance to help cloud architects to choose deployment archetypes for workloads in Google Cloud. Send feedback \ No newline at end of file diff --git a/What's_new.txt b/What's_new.txt new file mode 100644 index 0000000000000000000000000000000000000000..9e050874fdf3280889b8df93a430e2a6562e7fd8 --- /dev/null +++ b/What's_new.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/release-notes +Date Scraped: 2025-02-23T11:42:29.312Z + +Content: +Home Docs Cloud Architecture Center Send feedback What's new in the Architecture Center Stay organized with collections Save and categorize content based on your preferences. This page lists new and updated content in the Google Cloud Architecture Center. To get the latest content updates delivered to you, add the URL of this page to your feed reader, or add the feed URL directly: https://cloud.google.com/feeds/architecture-center-release-notes.xml February 07, 2025 Google Cloud Architecture Framework: Security, privacy, and compliance: Major update to align the recommendations with core principles of security. January 31, 2025 Best practices and reference architectures for VPC design: Updates to the document to reflect feature releases over the past months. Cross-Cloud Network for distributed applications: Updates to the document set to reflect feature releases over the past months. January 30, 2025 (New guide) Cross-Cloud Network inter-VPC connectivity using Network Connectivity Center: Describes how to design the network segmentation structure and connectivity of Cross-Cloud Network with Network Connectivity Center. January 22, 2025 (New guide) Optimize AI and ML workloads with Parallelstore: Learn how to optimize performance for artificial intelligence (AI) or machine learning (ML) workloads with parallel file system storage by using Parallelstore. January 17, 2025 Cross-Cloud Network for distributed applications: Updates to the document set to reflect feature releases over the past months. January 16, 2025 (New guide) Implement two-tower retrieval for large-scale candidate generation: Describes how to implement an end-to-end two-tower candidate generation workflow with Vertex AI. December 30, 2024 Google Cloud Architecture Framework: Reliability pillar: Major update to align the recommendations with core principles of reliability. December 20, 2024 (New guide) Confidential computing for data analytics and AI: Provides an overview of confidential computing, explores use cases for data analytics and federated learning across various industries, and includes architecture examples for some use cases. December 11, 2024 Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB: Added more design alternatives. Infrastructure for a RAG-capable generative AI application using GKE: Added design alternatives. Deploy automated malware scanning for files uploaded to Cloud Storage: Added the Deploy using the Terraform CLI section. 
December 09, 2024 (New guide) Stream logs from Google Cloud to Datadog: Provides an architecture to send log event data from across your Google Cloud ecosystem to Datadog Log Management. The architecture is accompanied by a deployment guide. December 06, 2024 (New guide) Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search: Describes how to design infrastructure for a generative AI application with retrieval-augmented generation (RAG) by using Vector Search. Google Cloud Architecture Framework: Performance optimization: Major update to align the recommendations with core principles of performance optimization. November 19, 2024 (New guide) Cross-Cloud Network inter-VPC connectivity using VPC Network Peering: Describes how to configure hub-and-spoke Cross-Cloud Network using VPC Network Peering. (New guide) Deploy and operate generative AI applications: Describes how you can adapt DevOps and MLOps processes to develop, deploy, and operate generative AI applications on existing foundation models. November 01, 2024 (New guide) Migrate from AWS Lambda to Cloud Run: Describes how to design, implement, and validate a plan to migrate from AWS Lambda to Cloud Run. October 31, 2024 Google Cloud Architecture Framework: Operational excellence: Major update to align the recommendations with core principles of operational excellence. October 22, 2024 Design an optimal storage strategy for your cloud workload: Added information about Parallelstore. Updated NetApp Volumes availability capabilities and capacity limits. October 11, 2024 (New series) Architecture Framework: AI and ML perspective: Describes principles and recommendations that are specific to AI and ML, for each pillar of the Architecture Framework: operational excellence, security, reliability, cost optimization, and performance optimization. October 01, 2024 (New guide) Enterprise application on Compute Engine VMs with Oracle Exadata in Google Cloud: Provides a reference architecture for an application that's hosted on Compute Engine VMs with connectivity to Oracle Cloud Infrastructure (OCI) Exadata databases in Google Cloud. September 27, 2024 (New guide) Business continuity with CI/CD on Google Cloud: Learn how to plan and implement business continuity and disaster recovery (DR) for the CI/CD process. September 25, 2024 Google Cloud Architecture Framework: Cost optimization: Major update to align the recommendations with core principles of cost optimization. September 19, 2024 (New guide) Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL and AlloyDB for PostgreSQL: Describes how to design, implement, and validate a plan to migrate from Amazon Relational Database Service (RDS) or Amazon Aurora for PostgreSQL to Cloud SQL. September 17, 2024 (New guide) Scalable BigQuery backup automation: Build a solution to automate recurrent BigQuery backup operations at scale, with two backup methods: BigQuery snapshots and exports to Cloud Storage. This architecture is accompanied by a deployment guide. September 16, 2024 Design an optimal storage strategy for your cloud workload: Updated guidance about storage recommendations and storage options decision tree with information about Hyperdisk ML and Hyperdisk Balanced. Updated file storage guidance based on performance scalability and supported file system protocols. 
September 05, 2024 (New guide) Enterprise application with Oracle Database on Compute Engine: Provides a reference architecture to host an application that uses an Oracle database, deployed on Compute Engine VMs. August 30, 2024 (New guide) Select a managed container runtime environment: Learn about managed runtime environments and assess your requirements to choose between Cloud Run and GKE Autopilot. August 19, 2024 (New guide) Use generative AI for utilization management: A reference architecture for health insurance companies to automate prior authorization (PA) request processing and improve their utilization review (UR) processes. August 16, 2024 (New guide) Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL: Describes how to design, implement, and validate a plan to migrate from Amazon RDS or Amazon Aurora to Cloud SQL for MySQL. Architecting disaster recovery for cloud infrastructure outages: Added DR guidance for Organization Policy Service. August 15, 2024 (New guide) Manage and scale networking for Windows applications that run on managed Kubernetes: Discusses how to manage networking for Windows applications that run on Google Kubernetes Engine using Cloud Service Mesh and Envoy gateways. This reference architecture is accompanied by a deployment guide. August 05, 2024 Disaster recovery scenarios for data: Added guidance about using the following capabilities to back up and recover self-managed databases deployed in Google Cloud: Backup and DR Service Persistent Disk Asynchronous Replication Disaster recovery scenarios for applications: Added guidance about using the following capabilities to back up and recover applications deployed in Google Cloud: Backup and DR Service Persistent Disk Asynchronous Replication July 24, 2024 File storage on Compute Engine: Added guidance about Filestore Regional. (New guide) Architect your workloads: Design resilient, single-region environments on Google Cloud. July 09, 2024 Architecting disaster recovery for cloud infrastructure outages: Updated the DR guidance for Google Security Operations SIEM. June 30, 2024 (New guide) From edge to multi-cluster mesh: Globally distributed applications exposed through GKE Gateway and Cloud Service Mesh: Describes exposing applications externally through Google Kubernetes Engine (GKE) Gateways running on multiple GKE clusters within a service mesh. (New guide) From edge to multi-cluster mesh: Deploy globally distributed applications through GKE Gateway and Cloud Service Mesh: Provides the steps needed to deploy applications externally through Google Kubernetes Engine (GKE) Gateways running on multiple GKE clusters within a service mesh. June 28, 2024 (New guide) Migrate from AWS to Google Cloud: Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server: Describes how to design, implement, and validate a plan to migrate from Amazon Relational Database Service (RDS) to Cloud SQL for SQL Server. June 07, 2024 Infrastructure for a RAG-capable generative AI application using Vertex AI: Added a design alternative that uses Vertex AI Vector Search for the vector store and semantic search components in the architecture. June 05, 2024 (New guide: 1 of 4) Cross-Cloud Network for distributed applications: Provides an overview about how you can design Cross-Cloud Network for distributed applications. 
(New guide 2 of 4) Network segmentation and connectivity for distributed applications in Cross-Cloud Network: Describes how to design the network segmentation structure and connectivity of Cross-Cloud Network for distributed applications. (New guide 3 of 4) Service networking for distributed applications in Cross-Cloud Network: Describes how to design Cross-Cloud Network service networking for distributed applications. (New guide 4 of 4) Network security for distributed applications in Cross-Cloud Network: Describes how to design Cross-Cloud Network security for distributed applications. May 29, 2024 Design an optimal storage strategy for your cloud workload: Added information about the Regional service tier of Filestore. May 28, 2024 (New guide) Build an ML vision analytics solution with Dataflow and Cloud Vision API: Deploy a Dataflow pipeline to process large-scale image files with Cloud Vision. Dataflow stores the results in BigQuery so that you can use them to train BigQuery ML pre-built models. This architecture is accompanied by a reference architecture and a deployment guide. May 16, 2024 Infrastructure for a RAG-capable generative AI application using Vertex AI: Added information about getting started with deploying the reference architecture by using a Jump Start Solution. May 14, 2024 (New guide) Global deployment with Compute Engine and Spanner: Learn how to architect a multi-tier application that runs on Compute Engine VMs and Spanner in a global topology on Google Cloud. May 08, 2024 (New guide) C3 AI architecture on Google Cloud: Develop applications using C3 AI and Google Cloud. April 17, 2024 Architecting disaster recovery for cloud infrastructure outages: Added DR guidance for Personalized Service Health. April 16, 2024 Disaster recovery building blocks: Added DNS policies to the DR building blocks. April 15, 2024 Disaster recovery building blocks: Added information about the soft-deletion feature in Cloud Storage. April 12, 2024 Architecting disaster recovery for cloud infrastructure outages: Added DR guidance for Vertex AI online predictions. Deploying the enterprise application blueprint: Added information about using a single Git repository (a monorepo) instead of a separate repository for each application. April 11, 2024 Architecting disaster recovery for cloud infrastructure outages: Added DR guidance for Vertex AI batch predictions. April 08, 2024 Deploy an enterprise developer platform on Google Cloud: Consolidated the eab-fleet-(env) project into the eab-gke-(env) project in each environment. April 05, 2024 (New guide) Use Google Cloud Armor, load balancing, and Cloud CDN to deploy programmable global front ends: Provides an architecture that uses a global front end incorporating Google Cloud best practices to help scale, secure, and accelerate the delivery of internet-facing applications. April 04, 2024 Architecting disaster recovery for cloud infrastructure outages: Added DR guidance for Cloud Billing. April 03, 2024 (New guide) Infrastructure for a RAG-capable generative AI application using GKE: Design the infrastructure to run a generative AI application with retrieval-augmented generation (RAG) using GKE, Cloud SQL, and open source tools like Ray, Hugging Face, and LangChain. April 01, 2024 Architecting disaster recovery for cloud infrastructure outages: Added DR guidance for Vertex ML Metadata. Architecting disaster recovery for cloud infrastructure outages: Added DR guidance for Vertex AI Pipelines. 
March 28, 2024 (New guide) Model development and data labeling with Google Cloud and Labelbox: Provides guidance for building a standardized pipeline to help accelerate the development of ML models. (New guide) Jump Start Solution: Generative AI RAG with Cloud SQL: Deploy a retrieval augmented generation (RAG) application with vector embeddings and Cloud SQL. (New guide) Build and deploy generative AI and machine learning models in an enterprise: Describes the generative AI and machine learning blueprint, which deploys a pipeline for creating AI models. March 27, 2024 (New guide) Jump Start Solution: Generative AI Knowledge Base: Demonstrates how to build an extractive question-answering (EQA) pipeline to produce content for an internal knowledge base. AI and machine learning resources: Added introduction information with guiding links to our generative AI and traditional AI resources. March 26, 2024 (New guide) Cross-silo and cross-device federated learning on Google Cloud: Provides guidance to help you create a federated learning platform that supports either a cross-silo or cross-device architecture. March 20, 2024 (New guide) Design storage for AI and ML workloads in Google Cloud: Select the recommended storage options for your AI and ML workloads. March 14, 2024 Design an optimal storage strategy for your cloud workload: Added guidance about data transfer options. March 04, 2024 Architecting disaster recovery for cloud infrastructure outages: Added information about zonal and regional resilience of Speech-to-Text, Looker, and Cloud Intrusion Detection System. February 28, 2024 (New guide) Configure networks for FedRAMP and DoD in Google Cloud: Provides configuration guidance to help you comply with design requirements for FedRAMP High and DoD IL2, IL4, and IL5 when you deploy Google Cloud networking policies. (New guide) Infrastructure for a RAG-capable generative AI application using Vertex AI: Design infrastructure to run a generative AI application with retrieval-augmented generation (RAG) to help improve the factual accuracy and contextual relevance of LLM-generated content. February 15, 2024 Architecting disaster recovery for cloud infrastructure outages: Added information about zonal and regional resilience of Sole Tenant Nodes. February 09, 2024 From edge to mesh: Deploy service mesh applications through GKE Gateway: Switched from Ingress API to the more modern Gateway API. Updated relevant sections to reflect this change. February 08, 2024 (New guide) Single-zone deployment on Compute Engine: Provides a reference architecture for a multi-tier application that runs on Compute Engine VMs in a single Google Cloud zone and describes the design factors to consider when you build a single-zone architecture. January 31, 2024 (New guide) Regional deployment on Compute Engine: Architect a multi-tier application that runs on Compute Engine VMs in multiple zones within a Google Cloud region. January 25, 2024 (New guide) Use RIOT Live Migration to migrate to Redis Enterprise Cloud: Migrate from Redis compatible sources like Redis Open Source (Redis OSS), AWS ElastiCache, and Azure Cache for Redis to a fully managed Redis Enterprise Cloud instance in Google Cloud using the Redis Input and Output Tool (RIOT) Live Migration service. This architecture is accompanied by a deployment guide and an assessment guide. 
January 19, 2024 Disaster recovery building blocks: Updated the guidance for Google Kubernetes Engine (GKE) with information about the Backup for GKE and multi-cluster Gateway features. January 17, 2024 Architecting disaster recovery for cloud infrastructure outages: Added information about zonal and regional resilience of Connectivity Tests and Network Analyzer. January 09, 2024 (New guide) Import logs from Cloud Storage to Cloud Logging: Import logs that were previously exported to Cloud Storage back to Cloud Logging. This architecture is accompanied by a deployment guide. Architecture fundamentals: This page provides a consolidated view of the Architecture Center resources that provide fundamental architectural guidance applicable to all the technology categories. January 08, 2024 Manage just-in-time privileged access to projects: Updated the deployment instructions for JIT Access 1.6. January 03, 2024 (New guide) Okta user provisioning and single sign-on: Set up federated user provisioning and single sign-on using Okta. December 21, 2023 (New guide) Multi-regional deployment on Compute Engine: Reference architecture for a multi-region, multi-tier topology on Compute Engine VMs and a third-party database. December 20, 2023 File storage on Compute Engine: Changed Filestore High Scale to Zonal, updated Filestore Zonal support for the CSI Driver, added Google Cloud NetApp Volumes, and removed NetApp Cloud Volume Service. (New guide) Deploy an enterprise developer platform on Google Cloud: Provides a blueprint to help enterprises set up a developer platform for building and managing container-based applications in Google Cloud. Enterprise foundations blueprint: Major rewrite of the guide and updates to the deployable Terraform code: Guide rebranded as "Enterprise foundations blueprint" to reflect broader coverage (previously "Security foundations blueprint"). Prescriptive recommendations with an emphasis on the decisions needed to align with existing operations and technology stack. Multiple deployment options: Jenkins, GitHub Actions, GitLab CI/CD, and Terraform Cloud. Scripts to automate deployment across multiple stages and repositories. Enhancements to the GitHub code to include updated product capabilities and best practices like centralizing logs to a Log Analytics enabled bucket, replacing VPC firewall rules with network firewall policies, and customizable detective controls. December 19, 2023 (New guide) Jump Start Solution: Stateful app with zero downtime deployment on Compute Engine: Update a live app without a noticeable disruption by using the Stateful app with zero downtime deployment on Compute Engine app. (New guide) Jump Start Solution: Stateful app with zero downtime deployment on GKE: Update a live app without a noticeable disruption by using the Stateful app with zero downtime deployment on GKE app. December 15, 2023 (New Guide: 1 of 3) Build hybrid and multicloud architectures using Google Cloud: Provides practical guidance on planning and architecting your hybrid and multi-cloud environments using Google Cloud. Adds new content and revises existing content. (New Guide: 2 of 3) Hybrid and multicloud architecture patterns: Discusses common hybrid and multicloud architecture patterns, and describes the scenarios that these patterns are best suited for. Adds new content and revises existing content. 
(New Guide: 3 of 3) Hybrid and multicloud secure networking architecture patterns: Discusses several common secure network architecture patterns that you can use for hybrid and multicloud architectures. Adds new content and revises existing content. December 14, 2023 (New guide) Data transformation between MongoDB Atlas and Google Cloud: Data transformation between MongoDB Atlas as the operational data store and BigQuery as the analytics data warehouse. December 08, 2023 Design an optimal storage strategy for your cloud workload: Updated the capacity numbers for Hyperdisk and Local SSD. December 06, 2023 Limiting scope of compliance for PCI environments in Google Cloud: Updated to reflect PCI DSS 4.0. Architecting disaster recovery for cloud infrastructure outages: Added information about zonal and regional resilience of Certificate Authority Service. Best practices for running tightly coupled HPC applications: Removed the Libfabric script, because it is no longer needed from Intel MPI 2021.10 onwards. December 05, 2023 (New series) Migrate across Google Cloud regions: Start preparing your workloads and data for migration across Google Cloud regions. (New guide) Design resilient single-region environments on Google Cloud (New guide) Prepare data and batch workloads for migration across regions November 30, 2023 (New guide) Set up an embedded finance solution using Google Cloud and Cloudentity: Describes architectural options for providing your customers with a seamless and secure embedded finance solution. (New guide) Migrate to Google Cloud: Minimize costs: Minimize costs of your single- and multi-region Google Cloud environments, and of migrations across Google Cloud regions. PCI Data Security Standard compliance: Updated to reflect the release of PCI DSS 4.0. November 28, 2023 Google Cloud Architecture Framework: Reorganized the Reliability category and moved SLO content to new pages. November 27, 2023 Deploy Apache Guacamole on GKE and Cloud SQL: Updated deployment to use Artifact Registry, and updated Cloud Shell commands for compatibility with latest Terraform provider. November 21, 2023 (New guide) FortiGate architecture in Google Cloud: Deploy a FortiGate Next Generation Firewall in Google Cloud, using Compute Engine and Virtual Private Cloud networking. November 20, 2023 Jump Start Solution: Analytics lakehouse: Updated the Deploy the solution section to clarify that the organizational policy constraint constraints/compute.requireOsLogin must not be enforced. November 16, 2023 Parallel file systems for HPC workloads: Added Sycomp Storage Fueled by IBM Spectrum Scale as an option for parallel file system (PFS) storage, and replaced NetApp Cloud Volumes Service with Google Cloud NetApp Volumes. November 14, 2023 Parallel file systems for HPC workloads: Added Parallelstore and Weka Data Platform as options for parallel file system (PFS) storage. November 13, 2023 Designing networks for migrating enterprise workloads: Adds Cross-Cloud Interconnect functionality and updates Private Service Connect information. November 09, 2023 (New guide) Google Cloud Architecture Framework: Added the deployment archetypes page in the System Design category. November 06, 2023 Scalable TensorFlow inference system: Converted the Tensorflow inference system guide into a reference architecture that includes design considerations. 
November 03, 2023 (New guide) Google Cloud deployment archetypes: Overview and comparative analysis of the zonal, regional, multi-regional, global, hybrid, and multicloud deployment archetypes. October 31, 2023 PCI DSS compliance on GKE: Updated to meet the requirements of PCI DSS version 4.0, use Cloud IDS instead of a third-party IDS, and use the PodSecurity admission controller instead of PodSecurityPolicy. October 23, 2023 Inter-service communication in a microservices setup: Updated the architecture, design guidance, and deployment steps based on the latest demo application. October 16, 2023 Architecting disaster recovery for cloud infrastructure outages: Added DR guidance for Access Transparency. October 09, 2023 Best practices for running tightly coupled HPC applications: Updated to include guidance for H3 compute-optimized VMs. Architectures for high availability of PostgreSQL clusters on Compute Engine: Added information about the write-ahead log and the Log Sequence Number. October 04, 2023 (New guide) Migrate from AWS to Google Cloud: Migrate from Amazon EKS to GKE: Design, implement, and validate a plan to migrate from Amazon EKS to Google Kubernetes Engine. October 01, 2023 Migrating Node.js apps from Heroku to Cloud Run: Updated for the latest Heroku changes. September 28, 2023 (New guide) Design secure deployment pipelines: Best practices for designing secure deployment pipelines based on your confidentiality, integrity, and availability requirements. September 27, 2023 Twelve-factor app development on Google Cloud: Added new product information and security considerations. Removed outdated content. September 26, 2023 (New guide) Identify and prioritize security risks with Wiz Security Graph and Google Cloud: Describes how to identify and prioritize security risks in your cloud workloads with Wiz Security Graph and Google Cloud. September 15, 2023 (New guide) Connect Google Virtual Private Clouds to Oracle Cloud Infrastructure using Equinix: Use Equinix Network Edge and Partner Interconnect to deploy private, multi-cloud connectivity between Google Cloud VPC networks and Oracle® VCNs. September 12, 2023 Stream logs from Google Cloud to Splunk: Converted the Google Cloud-to-Splunk logging guide into a reference architecture that includes design considerations. Decide the network design for your Google Cloud landing zone: Added more details to the design options. Implement your Google Cloud landing zone network design: Updated to reflect the current features of Private Service Connect. September 08, 2023 Google Cloud Architecture Framework: Updated the best practices in the Cost Optimization category. September 01, 2023 Google Cloud infrastructure reliability guide: Updated the aggregate availability calculations to reflect changes in the availability SLAs for Compute Engine and Cloud SQL. August 31, 2023 Landing zone design in Google Cloud: Updated the section, "Identify resources to help implement your landing zone." August 28, 2023 Google Cloud Architecture Framework: AI/ML: Updated the list of AI and ML services in the System Design category. August 15, 2023 (New guide) Import data from an external network into a secured BigQuery data warehouse: Describes an architecture that you can use to help secure a data warehouse in a production environment, and provides best practices for importing data into BigQuery from an external network, such as an on-premises environment. 
GKE Enterprise reference architecture: Google Distributed Cloud Virtual for Bare Metal: Added load balancing information and project details. Updated the IP address allocation, cluster architecture, and node sizing information. August 11, 2023 (New guide) Use distributed tracing to observe microservice latency: Shows how to capture trace information on microservice applications using OpenTelemetry and Cloud Trace. August 06, 2023 (New guide) Deploy a secured serverless architecture using Cloud Functions: Provides guidance on how to help protect serverless applications that use Cloud Functions (2nd gen) by layering additional controls onto your existing foundation. Send feedback \ No newline at end of file diff --git a/What's_next(1).txt b/What's_next(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..ccfaff6c01781531b5f37177a672206324b711e9 --- /dev/null +++ b/What's_next(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns/whats-next +Date Scraped: 2025-02-23T11:49:50.969Z + +Content: +Home Docs Cloud Architecture Center Send feedback Build hybrid and multicloud architectures using Google Cloud: What's next Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-27 UTC Learn more about how to get started with your migration to Google Cloud. Learn about common architecture patterns for hybrid and multicloud, which scenarios they're best suited for, and how to apply them. Find out more about networking patterns for hybrid and multicloud architectures, and how to design them. Explore, analyze, and compare the different deployment archetypes on Google Cloud. Learn about landing zone design in Google Cloud. Learn more about Google Cloud Architecture Framework Read about our best practices for migrating VMs to Compute Engine. Previous arrow_back Other considerations Send feedback \ No newline at end of file diff --git a/What's_next(2).txt b/What's_next(2).txt new file mode 100644 index 0000000000000000000000000000000000000000..8ee78247aa943473beecb0fd178d771fbdc9d8da --- /dev/null +++ b/What's_next(2).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-patterns-and-practices/whats-next +Date Scraped: 2025-02-23T11:50:15.551Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hybrid and multicloud architecture patterns: What's next Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-27 UTC Learn how to approach hybrid and multicloud architecture and how to choose suitable workloads. Find out more about the networking architecture patterns suitable for the selected hybrid and multicloud architecture patterns. Learn more about Deployment Archetypes for Cloud Applications. Learn how to design and deploy an ecommerce web application using different architectures, including a microservice-based ecommerce web application using GKE, and a dynamic web application that's Serverless API based. 
Previous arrow_back Cloud bursting pattern Send feedback \ No newline at end of file diff --git a/What's_next(3).txt b/What's_next(3).txt new file mode 100644 index 0000000000000000000000000000000000000000..33be1699af9c85c4f5728cbbb04a45e9dc7db01d --- /dev/null +++ b/What's_next(3).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/hybrid-multicloud-secure-networking-patterns/whats-next +Date Scraped: 2025-02-23T11:50:44.205Z + +Content: +Home Docs Cloud Architecture Center Send feedback Hybrid and multicloud secure networking architecture patterns: What's next Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-27 UTC Learn more about the common architecture patterns that you can realize by using the networking patterns discussed in this document. Learn how to approach hybrid and multicloud and how to choose suitable workloads. Learn more about Google Cloud Cross-Cloud Network, a global network platform that is open, secure, and optimized for applications and users across on-premises and other clouds. Design reliable infrastructure for your workloads in Google Cloud: Design guidance to help protect your applications against failures at the resource, zone, and region level. To learn more about designing highly available architectures in Google Cloud, check out patterns for resilient and scalable apps. Learn more about the possible connectivity options to connect GKE Enterprise clusters running in your on-premises or edge environment to the Google Cloud network, along with the impact of temporary disconnection from Google Cloud. Previous arrow_back General best practices Send feedback \ No newline at end of file diff --git a/What's_next(4).txt b/What's_next(4).txt new file mode 100644 index 0000000000000000000000000000000000000000..56fcbee68bc05269a8fc98a8702b5137fb7652dc --- /dev/null +++ b/What's_next(4).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/infra-reliability-guide/whats-next +Date Scraped: 2025-02-23T11:54:19.348Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud infrastructure reliability: What's next Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC This page provides links to reliability-focused documentation that supplements the architectural guidance in the Google Cloud infrastructure reliability guide. Design scalable and resilient applications. Patterns and best practices for building cloud applications that are resilient and scalable. Mitigate ransomware attacks. Best practices to help you identify, prevent, detect, and respond to ransomware attacks. Plan for disaster recovery (DR). A series of documents that focus on designing for DR in the cloud. Google Cloud Architecture Framework: Reliability pillar. Principles and recommendations to help you build, deploy, and operate reliable and resilient workloads on Google Cloud. Learn about deployment archetypes for cloud applications. An article in ACM Computing Surveys that explores six cloud-based deployment archetypes and their tradeoffs between high availability, latency, and cost. Learn about Google Cloud deployment archetypes. Describes the following six deployment archetypes: zonal, regional, multi-regional, global, hybrid, and multicloud. It also presents Google Cloud-specific design considerations for each deployment archetype. Learn how Google Cloud manages changes.
Explains how Google Cloud teams apply reliability best practices and engineering standards to develop and release changes to our platform and services. Previous arrow_back Manage and monitor infrastructure Send feedback \ No newline at end of file diff --git a/What's_next.txt b/What's_next.txt new file mode 100644 index 0000000000000000000000000000000000000000..974b8687cb747f2d9d66f40166cffd8dcefd7bbc --- /dev/null +++ b/What's_next.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes/whats-next +Date Scraped: 2025-02-23T11:44:53.400Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud deployment archetypes: What's next Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2023-11-03 UTC This section of the Google Cloud deployment archetypes guide lists resources that you can use to learn more about cloud deployment archetypes and best practices for building architectures for your cloud workloads. Deployment Archetypes for Cloud Applications: Research paper that this guide is based on. Google Cloud Architecture Framework: Recommendations to help you design and operate a cloud topology that's secure, efficient, resilient, high-performing, and cost-effective. Hybrid and multicloud architecture patterns: Common architectural patterns for the hybrid and multicloud deployment archetypes. Patterns for connecting other cloud service providers with Google Cloud: Options for network connections between Google Cloud and other cloud platforms. Design reliable infrastructure for your workloads in Google Cloud: Design guidance to help to protect your applications against failures at the resource, zone, and region level. Enterprise foundations blueprint: An opinionated enterprise foundations blueprint for a landing zone, with step-by-step guidance to configure and deploy your Google Cloud estate. Previous arrow_back Comparative analysis Send feedback \ No newline at end of file diff --git a/Whitepapers.txt b/Whitepapers.txt new file mode 100644 index 0000000000000000000000000000000000000000..e816d3c001ea9bb826b8f3cdd7f8c20853196a3d --- /dev/null +++ b/Whitepapers.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/whitepapers +Date Scraped: 2025-02-23T11:57:38.918Z + +Content: +Google Cloud whitepapersWritten by Googlers, independent analysts, customers, and partners, these whitepapers explain the technology underlying our products and services or examine topics such as security, architecture, and data governance.Contact usMore from Google CloudAnalyst reportsSolutionsBlogFilter byFiltersFeatured whitepapersAI and MLCloud basicsData analyticsDatabasesGovernance and cost managementHybrid and multicloudIndustryMigrationModernizationNetworkingSecurityStorageSustainabilitysearchsendFeatured whitepapersDownload popular whitepapers that are trending now.Boston Consulting Group: Any company can become a resilient data championInsights from 700 global business leaders reveal the secrets to data maturity.Deliver software securelyLearn industry standards and best practices to secure every stage in your software supply chain.Evaluate your cloud migration optionsDevise your migration strategy that empowers both the business and IT.Understand the principles of cost optimizationDiscover five ways to reduce overall cloud spend with a cost optimization strategy.Designing Cloud Teams How to Build a Better Cloud Center of Excellence. 
AI and MLEvaluating Large Language Models—Principles, Approaches, and ApplicationsThis whitepaper (from Neurips 2024 tutorial) provides a comprehensive overview of the challenges and methods for evaluating large language models (LLMs).A Platform Approach to Scaling Generative AI in the EnterpriseExplore why AI platforms, not just models are important for scaling generative AI for enterprises. Enabling Generative AI Value: Creating An Evaluation Framework for Your OrganizationExplore how to develop a framework to evaluate generative AI for deployment beyond basic task metrics.Selecting the right model Customization and Augmentation techniquesHow to use retrieval-augmented generation (RAG), fine-tuning, prompt engineering, and long-context-window techniques to improve performance.Machine Learning Operations (MLOps) for Generative AI (GenAI)This whitepaper details how to operationalize GenAI deployments in production by adapting MLOps to GenAI and foundational models.Adaptation of Large Foundation ModelsThis whitepaper is aimed at outlining Google’s approach on adapter tuning.Google Cloud’s AI Adoption FrameworkAdopt AI with confidence. Assess where you are in the journey and leverage a structure for building scalable AI capabilities.Practitioners guide to machine learning operations (MLOps)A framework written for ML leaders and architects who want to understand MLOps in theory and practice.Improving the Speed and Efficiency of AI-Enabled Damage Assessment in InsuranceLearn how semi-supervised ML on TPUs can significantly reduce model training time and cost.ScaNN for AlloyDBNext generation vector search to address memory, indexing speed, and querying performance.Cloud basicsGoogle’s guide to innovation: How to unlock strategy, resources, and technologyLearn how Google developed a work culture that fosters creative thinking.Increasing business value with better IT operations: A guide to SRELearn what SRE is and how it can improve the way you do IT operations. Then get an introduction to what Google Cloud offers to help you implement SRE.SAP on Google Cloud: High availability Learn how to architect SAP systems in Google Cloud for high availability.The Google Cloud Adoption FrameworkIs your organization ready for the cloud? Use Google’s cloud adoption framework as a guide to find out. Designing Cloud Teams How to Build a Better Cloud Center of Excellence. 
Data analyticsBuild a modern, unified analytics data platform with Google CloudThis paper discusses the decision points necessary in creating a modern, unified analytics data platform built on Google Cloud.An insider’s guide to BigQuery cost optimizationDiscover the best practices that help you optimize both cost and performance of BigQueryEnriching Databricks Pipelines with Google Cloud’s pre-trained ML APIsApply ML classification models to bring meaning to unstructured dataBuild scalable and trustworthy data pipelines with dbt and BigQueryKey concepts of BigQuery and dbt with best practices for utilizing their combined power to build data pipelinesBuilding a data lakehouse on Google CloudDiscuss the new data lakehouse architecture and its key benefits.Building a unified analytics data platformLearn about the elements of a unified analytics data platform built on Google Cloud and key differences in platform architectures.Enhance your investment research with Google CloudFour keys to discovering valuable insights at scale.SAP Analytics on BigQuery with Qlik ReplicateLearn how to extract SAP data into BigQuery.Transforming options market data with the Dataflow SDKThis paper introduces some of the main concepts behind building and running applications that use Cloud Dataflow.What type of data processing organization are you?Data processing techniques depending on organization type.Building a data mesh on Google CloudHow to build a data mesh architecture on Google Cloud.Data Vault 2.0 on Google Cloud BigQueryData Vault overview and how to use it on BigQuery.DatabasesUnlocking gen AI’s full potential with operational databasesYour guide to generative AI app development in the cloud.Cloud Databases: An Essential Building Block for Transforming Customer ExperiencesHarvard Business Review Analytic Services report on how you can unlock new possibilities, implement new features quickly, increase application reliability, and improve operational efficiencies with cloud databases.Make your database your secret advantageLearn how Google Cloud databases can help you create great customer experiences that will transform your business.Guide: Migrating your databases to Google CloudLearn how Google Cloud provides managed databases that are easy to use and operate, without cumbersome maintenance tasks and operational overhead.IDC Whitepaper: Deploy Faster and Reduce Costs for MySQL and PostgreSQL DatabasesMigrating your databases to Cloud SQL can lower costs, boost agility, and speed up deployments. 
Get details in this IDC report.AlloyDB: Google forges its own new PostgreSQL blendTony Baer of dbInsight analyzes the role of AlloyDB within Google Cloud's databases and analytics portfolio.Cloud SQL Architecture Patterns for MicroservicesIf your application is implemented according to a microservices paradigm, consider these database deployment architecture options.Failure scenarios and resiliency with SpannerExplore the different failure scenarios of Spanner categorized into three levels of severity–including when operating outside of Google Cloud.ScaNN for AlloyDBNext generation vector search to address memory, indexing speed, and querying performance.Data Protection and Recovery Solutions in SpannerThis document guides you in configuring Spanner for an ideal deployment, ensuring your business's data protection and recovery needs are met.Governance and cost managementUnderstand the principles of cost optimizationDiscover five ways to reduce overall cloud spend with a cost optimization strategy.Unlock the Value of Cloud FinOps with a New Operating ModelLearn how your organization can establish strong financial governance and create a cost-conscious culture by adopting a new operating model.A guide to financial governance in the cloudReview the importance of financial governance controls and the role these controls play in increasing the predictability of cloud costs.Data deletion on Google CloudThis paper gives an overview of the secure process that occurs when you delete your customer data stored in Google Cloud.Data governance: Principles for securing and managing logsLearn strategies for data governance and security in the four stages of logs data.Getting started with FinOps on Google CloudActionable set of steps to help your organization maximize your cloud investment.IDC: Business value of Google CloudLearn how Google Cloud helps SMEs accelerate business growth through increased productivity and scalability while achieving greater cost efficiencies.Maximize business value with cloud FinOpsAn operational and cultural framework to drive financial accountability and accelerate business value through cloud transformation.Overcoming logs data compliance challenges with Google Cloud LoggingGuidance to help you keep logs data compliant with regional and industry regulations.Principles and best practices for data governance in the cloudLearn about the benefits, frameworks, and how you can operationalize cloud-based data governance within your organization.SAP on Google Cloud: Backup strategies and solutionsLearn which options are available to back up SAP systems in Google Cloud.Taking the cloud-native approach with microservicesThis paper covers foundational elements of transitioning a monolithic, software application architecture to microservices.Shared services cost allocationCloud FinOps point of view on cost allocation of shared cloud services.Driving Cloud FinOps at scale with Google Cloud TaggingLearn why cloud tagging is fundamental and critical to enable Cloud FinOps capabilities across your organization.Hybrid and multicloud5 ways Google can help you succeed in the hybrid and multicloud worldLearn more about the strengths which make Google a great hybrid and multicloud partner.Kubernetes: Your hybrid cloud strategyThis paper discusses how Kubernetes provides a holistic hybrid cloud solution that simplifies your deployment, management, and operational concerns.IndustryCPG Digital Transformation: where to invest nowFind out what use cases for cloud, AI/ML and analytics drive the most 
P&L impact.Deloitte Access Economics FSI 2021 reportEmbracing cloud transformation: benefits for the financial services industry in Australia and New Zealand.ESG technical whitepaper: Google Cloud for ecommerceLearn how to overcome the challenges of increasing customer experience demands.From weeks to days: a faster home loan journeyResearch from Google on the home loan experience as well as insights on how the process can be made more simple and all about the borrower.Google Cloud industries: The impact of COVID-19 on manufacturersLearn about the latest trends affecting manufacturers, from the pandemic's impact on operations to subsequent changing technology use.How data is driving resilient and sustainable supply chainsDiscover the processes to address in your supply chain transformation journey.How to be a data-driven retailerYour guide to getting smarter with data including success stories & practical next steps.Interpretable sequence learning for COVID-19 forecastingA novel approach that integrates machine learning into compartmental disease modeling to predict the progression of COVID-19.Personalizing media for global digital audiencesCreate new business insights and engage your audiences with data and AI. Download now.Scaling to build the Consolidated Audit Trail: A financial services application of BigtableWe consider Bigtable in the context of the Consolidated Audit Trail (CAT) and perform experiments measuring the speed and scalability of Bigtable.The future of market data: Distribution and consumption through cloud and AIResearch reveals trending innovation areas for market data.The startup's guide to Google CloudLearn why top startups chose to build their businesses with Google Cloud.Improving the Speed and Efficiency of AI-Enabled Damage Assessment in InsuranceLearn how semi-supervised ML on TPUs can significantly reduce model training time and cost.Revenue Growth Management powered by GoogleWhy you need AI & ML in your Revenue Management Strategy.Data driven supply chain control towerThis whitepaper provides a framework to build a data driven supply chain control tower.Retail Insights Canonical Framework for Supply ChainData insights framework to accelerate the development of consistent and interoperable retail applications.MigrationAccelerating the journey to the cloud with a product mindsetOverhaul how you approach business operations to accelerate your migration to cloud.Building a large-scale migration program with Google CloudGoogle Cloud’s four-stage approach to migration and how to create a team that drives large-scale migration of enterprise applications to the cloud.How to put your company on a path to successful cloud migrationAn outline of the key things to consider when crafting your migration journey.Managing change in the cloudThis Google Cloud Adoption Framework whitepaper explores cloud migration and helping your people thrive in the cloud.CIO's guide to application migrationThis guide covers technical requirements, business goals, and cloud technology for you to consider for your migration.Accelerate the migration of your workloads to the public cloudLearn how to properly navigate the four key phases of your cloud migration.IDC: Migrate VM-Based Enterprise Workloads to Google CloudLearn how to migrate VM-based applications to cloud with optimal pricing, performance, and security.Up, out, or both? 
How to evaluate your cloud migration optionsMigration frameworks built based on conversations with CIOs, CTOs, and technical staff.Migrating to Cloud Storage Nearline from Amazon GlacierBest practices for migrating data from Amazon Glacier to Cloud Storage Nearline, a low-cost storage class for archival workloads..NET apps: How to lift and shift a line-of-business application onto Google CloudGo step by step through moving an ASP.NET Windows application to Google Cloud.SAP on Google Cloud: Migration strategiesLearn the most common migration strategies, use cases, and options to use when you move SAP systems to Google Cloud.ModernizationThe Total Economic Impact™ Of Google Cloud’s Operations SuiteCost Savings And Business Benefits Enabled By Google Cloud’s Operations Suite.API management and service mesh: better togetherUsing service mesh and API management as complementary components for successful application modernization.CIO's guide to application modernizationA series of approaches to app modernization, from identifying needs and developing a road map, to methods of identifying and effecting meaningful change.Data center transformation with GoogleCraft your strategy to get out of the data center and into public cloud. This guide helps you design a successful modernization journey.Forrester's Total Economic Impact of Cloud RunThis report outlines cost savings and business benefits enabled by Cloud Run.IDC Whitepaper: Modernize Applications with Open Source Software on Google CloudHow using open source software on Google Cloud brings better performance, costs, and more.IDC Infobrief: Modernize Windows Server workloads using Google CloudLearn more about how Google Cloud enables Windows Server-based applications to perform better and more cost effectively..NET apps: Running your modern .NET application on KubernetesThis paper covers .NET application modernization, including containerization and orchestration concepts..NET apps: Modernizing your .NET application for Google CloudThis paper explores the process of taking an on-premises monolithic .NET application and making it cloud ready.How Google Cloud and Wix collaborate to optimize reliabilityHere's how Wix built a secure, dependable, and scalable infrastructure to help achieve its goal to eliminate downtime from its customers' vocabulary.IDC: A Built-In Observability Tool Adoption Blueprint for Public CloudWhitepaper for DevOps, Development, Operations, and SRE.NetworkingConsiderations When Benchmarking TCP Bulk FlowsOptimize TCP bulk data transfer by adjusting Linux networking stack settings.Considerations When Benchmarking UDP Bulk FlowsOptimize UDP bulk flow performance between two systems.Measuring Cloud Network Performance with PerfKit BenchmarkerOpen source toolkit for automated cloud network benchmarking.A Brief Look at Network Performance LimitersNot all Mbits are the same. Fine-tune your network for better performance and efficiency, no matter the advertised speed.A Brief Look at Round Trip TimeUnderstand the "how" of TCP RTT metrics to better apply them.A Brief Look at Path MTU discoveryDon't get all broken-up by fragmentation. 
Optimize your network setup and troubleshoot packet size problems.SecurityGoogle Cloud NGFW Enterprise Certified Secure Test ReportRead Miercom's test results on Google Cloud Next Generation Firewall Enterprise.Google Cloud NGFW Enterprise CyberRisk Validation ReportRead SecureIQlab's test results on Google Cloud Next Generation Firewall Enterprise.Converging architectures: Bringing data lakes and data warehouses togetherExplore the implications of the convergence of data lakes and data warehouses.Migrating to Cloud Storage Nearline from Amazon GlacierBest practices for migrating data from Amazon Glacier to Cloud Storage Nearline, a low-cost storage class for archival workloads.Optimizing, monitoring, and troubleshooting VACUUM operations of PostgreSQLThe fundamentals of the VACUUM operation in PostgreSQL databases.StorageConverging architectures: Bringing data lakes and data warehouses togetherExplore the implications of the convergence of data lakes and data warehouses.Migrating to Cloud Storage Nearline from Amazon GlacierBest practices for migrating data from Amazon Glacier to Cloud Storage Nearline, a low-cost storage class for archival workloads.Optimizing, monitoring, and troubleshooting VACUUM operations of PostgreSQLThe fundamentals of the VACUUM operation in PostgreSQL databases.SustainabilityConverging architectures: Bringing data lakes and data warehouses togetherExplore the implications of the convergence of data lakes and data warehouses.Migrating to Cloud Storage Nearline from Amazon GlacierBest practices for migrating data from Amazon Glacier to Cloud Storage Nearline, a low-cost storage class for archival workloads.Optimizing, monitoring, and troubleshooting VACUUM operations of PostgreSQLThe fundamentals of the VACUUM operation in PostgreSQL databases.Take the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Why_Google_Cloud.txt b/Why_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..fe897421abeeb0d68a04e3bc248c75f7f3753e88 --- /dev/null +++ b/Why_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/why-google-cloud +Date Scraped: 2025-02-23T11:57:08.247Z + +Content: +Why Google CloudThere has never been a more exciting time in technology. We are experiencing one of the most significant shifts in history, where AI is creating entirely new ways to solve problems, engage customers, and work more efficiently. Google Cloud is ready to help organizations build a new way forward in an increasingly AI-driven world.Contact salesWelcome to the new way to cloudWant more than the status quo? Only Google Cloud brings together innovations from across Google to help customers digitally transform with AI that’s ready for what’s next, data insights that speed innovation, infrastructure that’s designed to meet industry needs, collaboration tools that help teams do their best work, and security that can proactively stop threats.Explore the new way1:11It's time for a new way of workingRaise your AI gameThe world is buzzing about generative AI. Now what? Get everyone in your organization learning, building with, and deploying generative AI—all while keeping your data confidential. 
Kickstart your journey today with enterprise-ready generative AI solutions.Put your data to workBring the simplicity, scale, security, and intelligence of Google’s information approach to your organization. Google offers a complete data foundation to unify all workloads and manage the entire data life cycle. The solution is designed to run data anywhere, so you can leverage your data across all clouds, on-premises, and access it in the most popular SaaS apps. This solution is built with and for AI, so you can get the latest tools for machine learning analysis, prompting, tuning, training, and deploying custom foundation models—all connected to your business data.Explore the Data and AI cloud900+Partners and software integrations in our data and AI ecosystem 135Languages translated in just a few clicks with Translation Hub90%Nearly 90% of generative AI unicorns are Google Cloud customers110+TB of data per second analyzed by BigQuery customers 1:34Gemini for Google Cloud: the next frontier in AI-powered developer productivityModernize your infrastructureYou’re ready for AI, but is your cloud? Google Cloud helps developers build quickly, securely, and cost effectively with the next generation of modern infrastructure designed to meet specific workload and industry needs. Get infrastructure that's optimized for AI, container-based applications, traditional enterprise workloads, and high-performance, distributed workloads—all while helping to cut costs and your carbon footprint.Explore infrastructure modernization solutionsCreate a culture of innovationEmpower teams of all sizes to do their best work—anywhere, and across a variety of devices. Google Workspace brings together innovative tools preferred by the modern workforce for collaboration and creation, including Gmail, Google Chat, Google Calendar, Google Drive, Google Docs, Google Sheets, and Google Meet. And, we’ve embedded new, easy-to-use generative AI features to help supercharge team productivity. With more than 3 billion monthly active users, Workspace offers the most popular productivity and collaboration software in the world.Discover why1:34Explore the new era of work1,800Organizations get back to business quickly every year post-breach40K+Vulnerabilities found by Google in open source software projects6M+Sites protected by reCAPTCHA9BFiles and URLs analyzed in threat observatory platform VirusTotalGet built-in securityBenefit from the same security capabilities that Google uses to keep more people and organizations safe online than anyone else. We help organizations transform their cybersecurity programs with frontline intelligence from Mandiant to understand the latest cyber attacks; a modern security operations platform for detecting, investigating, and responding to threats; and a secure-by-design, secure-by-default infrastructure platform with controls to help maintain digital sovereignty.Strengthen security with AIWork more sustainablyDecarbonize your digital infrastructure and increase climate resilience with features that help organizations go from ambition to action. Learn about applying technology to key challenges like responsible materials sourcing, climate risk analysis, and sustainable logistics.Get inspiredDiscover digital transformation stories and curated cloud computing resources.Seeking green: How focusing on sustainability can build financial resilienceRead the storyAre you fluent in prompts and embeddings? 
A generative AI primer for busy executivesRead the storyHow to maximize your generative AI investments with cloud FinOpsRead the storyTake the next stepTell us what you’re solving for. A Google Cloud expert will help you find the best solution.Contact usWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleContinue browsingSee all productsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Windows_on_Google_Cloud.txt b/Windows_on_Google_Cloud.txt new file mode 100644 index 0000000000000000000000000000000000000000..39ab9972d442ff0dc6b1101b538a0b1711014e2b --- /dev/null +++ b/Windows_on_Google_Cloud.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/windows +Date Scraped: 2025-02-23T11:59:40.952Z + +Content: +Missed Next '24? All sessions are now available on demand. Watch now.Migrate and modernize Microsoft workloads on Google CloudA first-class experience for Windows workloads. Self-manage or leverage managed services. Use license-included images or bring your own. Migrate, optimize, and modernize with enterprise-class support backed by Microsoft.Contact usFree migration cost assessmentMicrosoft licenses demystified: Guidance to common licensing scenarios Read the guideBenefits Simplify your modernization journey from WindowsStart with migrationMigrate to increase IT agility and reduce on-premises footprint. Tools like Migrate to Virtual Machines and Migrate to Containers can help migrate and upgrade. Optimize license usage to reduce costOptimize VM usage. Managed SQL Server and Active Directory reduce total cost of ownership. Move .NET to .NET core. Move SQL Server to Linux. Modernize to reduce single-vendor dependencyCreate an open path to modernization—containerization of Windows server, cloud-native development, and multi-cloud readiness with Anthos. Looking for a fast, frictionless way to test things out? Check out our Microsoft and Windows on Google Cloud Simulation Center. You can also get the latest news about Microsoft and Windows on Google Cloud via our blog.Key featuresDrive a strategy for migration, optimization, and modernizationPlan for the future while reducing your Microsoft licensing dependency. Get all you need to migrate, optimize, and modernize your legacy platform. Bring your own licensesIn addition to on-demand licenses, Google Cloud provides you with flexibility for bringing your existing licenses and running them on Compute Engine. Use Sole-Tenant Nodes to run on dedicated hardware with configurable maintenance policies to support your on-premises licenses while maintaining workload uptime and security through host-level live migration. License-included VM imagesYou can deploy your Windows applications (including SQL Server) on our fully tested images with bundled licenses on Compute Engine and take advantage of many benefits available to virtual machine instances such as reliable storage options, the speed of the Google network, and autoscaling.Enterprise-class support backed by Microsoft on Google Cloud Google Cloud provides enterprise-class support for Windows workloads. Our experts have the backing of Microsoft Premier Support for Partners to help you solve any challenge.If your operating system reaches end-of-support, you can purchase Extended Security Updates (ESUs) to get critical security patches and use them on your instances. 
Fully managed SQL server and active directoryGet fully managed Cloud SQL for SQL Server and the Managed Service for Microsoft Active Directory with built-in patching, backups, regular maintenance, and licensing included. Windows on KubernetesRunning your Windows Server containers on GKE can save you on licensing costs, as you can pack many Windows Server containers on each Windows node.Ready to get started? Contact usRead moreThe Total Economic Impact™ Of Migrating Expensive OSes and Software to Google CloudDownload reportLearn how to migrate and modernize .NET applications on Google CloudDownload WhitepaperCustomersReduce license cost and help increase agility, security, and scalabilityCase studyVolusion improves performance, conversion, and ecommerce revenue.6-min readBlog postHow Geotab is modernizing applications with Google Cloud.5-min readSee all customersPartners Recommended partnersSystem integrators help migrate and run Windows applications from on-premises to Google Cloud. Technology partners have validated their product to work on Google Cloud. Expand allSystem integratorsTechnology partnersSee all partnersRelated services Products for running Windows workloadsChoose between a license-included image or bring your own license. Manage workloads yourself or use a fully managed service.Compute EngineBring your own licenses and run them in Compute Engine or use license-included images. With custom VMs get more RAM without licensing more cores.Cloud SQL for SQL ServerFully managed relational database service for SQL Server.Managed Service for Microsoft ADUse a highly available, hardened service running actual Microsoft Active Directory (AD).Google Kubernetes EngineAn enterprise-grade platform for containerized applications for Windows and Linux. Sole-tenant nodes A physical Compute Engine server that is dedicated to hosting VM instances only for your specific project. .NET Build, deploy, debug, and monitor highly scalable .NET apps.Documentation Explore common use casesThe first step on the path to modernization is to migrate your Windows workloads to Google Cloud. To help you with this journey, here are some detailed guides, managed services, and resources.Google Cloud BasicsRun Windows Server on Google Cloud Read how you can leverage reliable storage options, Google’s network, and autoscaling when you run Windows applications on Compute Engine. Learn moreTutorial Run Microsoft SQL Server workloads on Google Cloud Bring your existing SQL Server licenses and run them on Google Cloud using Compute Engine.Learn moreTutorial Run Microsoft Windows applications on Google Cloud Migrate and deploy Windows-server based and .NET applications on Google Cloud.Learn moreTutorial Google Cloud Skills Boost Quest: Windows on Google Cloud Get hands-on practice running many of the popular Windows services in Google Cloud.Learn moreAPIs & LibrariesUse Visual Studio and PowershellBuild and deploy applications to Google Cloud using Visual Studio and Powershell.Learn morePatternVirtual DesktopsEnable and accelerate the shift to remote work flexibly and securely.Learn moreTutorialHost .NET applications on Cloud RunRun a .NET application on Cloud Run that persists data to Firestore, stores files in Cloud Storage, and monitors your logs and errors.Learn moreTutorialDeploy ASP.NET in ContainersHost legacy ASP.NET applications with Windows Authentication on Windows containers on GKE.Learn moreNot seeing what you’re looking for?View documentationTake the next stepTell us what you’re solving for. 
A Google Cloud expert will help you find the best solution.Contact salesWork with a trusted partnerFind a partnerStart using Google CloudGo to consoleDeploy ready-to-go solutionsExplore marketplaceGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Workflows(1).txt b/Workflows(1).txt new file mode 100644 index 0000000000000000000000000000000000000000..335de1e0d74201d3bf89b11664a89aea785f6a1c --- /dev/null +++ b/Workflows(1).txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/workflows +Date Scraped: 2025-02-23T12:09:43.936Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '23. Let's go.Jump to WorkflowsWorkflowsCombine Google Cloud services and APIs to build reliable applications, process automation, and data and machine learning pipelines.New customers get $300 in free credits to spend on Workflows. All customers get 5,000 steps and 2,000 external API calls per month, not charged against your credits.Go to consoleContact salesDeploy and execute a Workflow that connects a series of services together with this tutorialReliably automate processes that include waiting and retries for up to one yearImplement real-time processing with low-latency, event-driven executions4:37Introducing Google Cloud WorkflowsBenefitsSimplify your architectureStateful Workflows allow you to visualize and monitor complex service integrations without additional dependencies.Incorporate reliability and fault toleranceControl failures with default or custom retry logic and error handling even when other systems fail—checkpointing every step to Spanner to help you keep track of progress.Zero maintenanceScale as needed: There’s nothing to patch or maintain. Pay only when your workflows run, with no cost while waiting or inactive.Key featuresKey featuresReliable workflow executionCloud Run functions Connectors make Google Cloud services particularly easy to use by taking care of request formatting, retries and waiting to complete long-running operations.Powerful execution controlUse expressions and functions to transform response data and prepare request inputs. Automate conditions based on input and service responses. Specify retry policies and error handling. Wait for asynchronous operations and events with polling and callbacks.Pay per useOnly pay when workflows take steps.View all featuresBLOGWorkflows, Google Cloud’s serverless orchestration engineDocumentationDocumentationGoogle Cloud BasicsUnderstand WorkflowsDiscover the core concepts and key capabilities of Workflows in this product overview.Learn moreQuickstartWorkflows quickstartsLearn how to create, deploy, and execute a workflow using the Cloud Console,the gcloud command-line tool, or Terraform.Learn moreTutorialWorkflows how-to guidesLearn how to control the order of execution in a workflow, invoke services and make HTTP requests, wait using callbacks or polling, and create automated triggers.Learn moreAPIs & LibrariesSyntax overviewLearn how to write workflows to call services and APIs, work with response data, and add conditions, retries, and error handling.Learn moreNot seeing what you’re looking for?View all product documentationUse casesUse casesUse caseApp integration and microservice orchestrationCombine sequences of service invocations into reliable and observable workflows. For example, use a workflow to implement receipt processing in an expense application. When a receipt image is uploaded to a Cloud Storage bucket, Workflows sends the image to Document AI. 
After processing is complete, a Cloud Function determines whether approval is required. Finally, the receipt is made visible to users by adding an entry in a Firestore database.Use caseBusiness process automationRun line-of-business operations with Workflows. For example, automate order fulfillment and tracking with a workflow. After checking inventory, a shipment is requested from the warehouse and a customer notification is sent. The shipment is scanned when departing the warehouse, updating the workflow via a callback that adds tracking information to the order. Orders not marked as delivered within 30 days are escalated to customer service.Use caseData and ML pipelinesImplement batch and real-time data pipelines using workflows that sequence exports, transformations, queries, and machine learning jobs. Workflows connectors for Google Cloud services like BigQuery make it easy to perform operations and wait for completion. Cloud Scheduler integration makes it simple to run workflows on a recurring schedule.Use caseIT process automationAutomate cloud infrastructure with workflows that control Google Cloud services. For example, schedule a monthly workflow to detect and remediate security compliance issues. Iterating through critical resources and IAM permissions, send required requests for approval renewal using a Cloud Function. Remove access for any permissions not renewed within 14 days.View all technical guidesAll featuresAll featuresRedundancy and fault-toleranceWorkflows are automatically replicated across multiple zones and checkpoint state after each step, ensuring executions continue even after outages. Failures in other services are handled through default and customizable retry policies, timeouts, and custom error handling.Self-documentingSpecify workflows in YAML or JSON with named steps, making them easy to visualize, understand, and observe. These machine-readable formats support programmatic generation and parsing of workflows.Wait up to one yearWait for a given period to implement polling. Connectors provide blocking steps for many Google Cloud services with long-running operations. Simply write your steps and know each is complete before the next runs.Event-driven, scheduled, and programmatic triggersWorkflow executions are low-latency, supporting both real-time and batch processing. Through Eventarc, workflows can be executed when events occur, such as when a file is uploaded to Cloud Storage or when a Pub/Sub message is published.HTTP callbacksCreate unique callback URLs inside your workflow. Then wait (with a configurable timeout of up to one year) for the URL to be called, receiving the HTTP request data in your workflow. Useful for waiting for external systems and implementing human-in-the-loop processes.SecurityWorkflows run in a sandboxed environment and have no code dependencies that will require security patches. Store and retrieve secrets with Secret Manager.Seamless authentication within Google Cloud Orchestrate work of any Google Cloud product without worrying about authentication. Use a proper service account and let Workflows do the rest.Low-latency executionFast scheduling of workflow executions and transitions between steps. Predictable performance with no cold starts.Fast deploysDeploy in seconds to support a fast developer experience and quick production changes.Integrated logging and monitoringOut-of-the-box integration with Cloud Logging with automatic and custom entries provides insight into each workflow execution. 
Cloud Monitoring tracks execution volume, error rates, and execution time.PricingPricingPay-per-use, with an always-free tier, rounded up to the nearest 1,000 executed steps. Pay only for the executed steps in your workflow; pay nothing if your workflow doesn’t run. Use the Google Cloud Pricing Calculator for an estimate.INTERNAL STEPSPrice per monthFirst 5,000 stepsFreeSteps 5,000 to 100,000,000$0.01 per increment of 1,000 stepsSteps after 100,000,000Contact sales for pricing optionsEXTERNAL HTTP CALLSPRICE PER MONTHFirst 2,000 callsFreeSteps 2,000 to 100,000,000$0.025 per increment of 1,000 callsSteps after 100,000,000Contact sales for pricing optionsIf you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Workflows.txt b/Workflows.txt new file mode 100644 index 0000000000000000000000000000000000000000..88eab813196941816b5b12cf84587315f0632efe --- /dev/null +++ b/Workflows.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/workflows +Date Scraped: 2025-02-23T12:05:19.247Z + +Content: +Catch up on the latest product launches, demos, and trainings from Next '23. Let's go.Jump to WorkflowsWorkflowsCombine Google Cloud services and APIs to build reliable applications, process automation, and data and machine learning pipelines.New customers get $300 in free credits to spend on Workflows. All customers get 5,000 steps and 2,000 external API calls per month, not charged against your credits.Go to consoleContact salesDeploy and execute a Workflow that connects a series of services together with this tutorialReliably automate processes that include waiting and retries for up to one yearImplement real-time processing with low-latency, event-driven executions4:37Introducing Google Cloud WorkflowsBenefitsSimplify your architectureStateful Workflows allow you to visualize and monitor complex service integrations without additional dependencies.Incorporate reliability and fault toleranceControl failures with default or custom retry logic and error handling even when other systems fail—checkpointing every step to Spanner to help you keep track of progress.Zero maintenanceScale as needed: There’s nothing to patch or maintain. Pay only when your workflows run, with no cost while waiting or inactive.Key featuresKey featuresReliable workflow executionCloud Run functions Connectors make Google Cloud services particularly easy to use by taking care of request formatting, retries and waiting to complete long-running operations.Powerful execution controlUse expressions and functions to transform response data and prepare request inputs. Automate conditions based on input and service responses. Specify retry policies and error handling. 
Wait for asynchronous operations and events with polling and callbacks.Pay per useOnly pay when workflows take steps.View all featuresBLOGWorkflows, Google Cloud’s serverless orchestration engineDocumentationDocumentationGoogle Cloud BasicsUnderstand WorkflowsDiscover the core concepts and key capabilities of Workflows in this product overview.Learn moreQuickstartWorkflows quickstartsLearn how to create, deploy, and execute a workflow using the Cloud Console,the gcloud command-line tool, or Terraform.Learn moreTutorialWorkflows how-to guidesLearn how to control the order of execution in a workflow, invoke services and make HTTP requests, wait using callbacks or polling, and create automated triggers.Learn moreAPIs & LibrariesSyntax overviewLearn how to write workflows to call services and APIs, work with response data, and add conditions, retries, and error handling.Learn moreNot seeing what you’re looking for?View all product documentationUse casesUse casesUse caseApp integration and microservice orchestrationCombine sequences of service invocations into reliable and observable workflows. For example, use a workflow to implement receipt processing in an expense application. When a receipt image is uploaded to a Cloud Storage bucket, Workflows sends the image to Document AI. After processing is complete, a Cloud Function determines whether approval is required. Finally, the receipt is made visible to users by adding an entry in a Firestore database.Use caseBusiness process automationRun line-of-business operations with Workflows. For example, automate order fulfillment and tracking with a workflow. After checking inventory, a shipment is requested from the warehouse and a customer notification is sent. The shipment is scanned when departing the warehouse, updating the workflow via a callback that adds tracking information to the order. Orders not marked as delivered within 30 days are escalated to customer service.Use caseData and ML pipelinesImplement batch and real-time data pipelines using workflows that sequence exports, transformations, queries, and machine learning jobs. Workflows connectors for Google Cloud services like BigQuery make it easy to perform operations and wait for completion. Cloud Scheduler integration makes it simple to run workflows on a recurring schedule.Use caseIT process automationAutomate cloud infrastructure with workflows that control Google Cloud services. For example, schedule a monthly workflow to detect and remediate security compliance issues. Iterating through critical resources and IAM permissions, send required requests for approval renewal using a Cloud Function. Remove access for any permissions not renewed within 14 days.View all technical guidesAll featuresAll featuresRedundancy and fault-toleranceWorkflows are automatically replicated across multiple zones and checkpoint state after each step, ensuring executions continue even after outages. Failures in other services are handled through default and customizable retry policies, timeouts, and custom error handling.Self-documentingSpecify workflows in YAML or JSON with named steps, making them easy to visualize, understand, and observe. These machine-readable formats support programmatic generation and parsing of workflows.Wait up to one yearWait for a given period to implement polling. Connectors provide blocking steps for many Google Cloud services with long-running operations. 
Simply write your steps and know each is complete before the next runs.Event-driven, scheduled, and programmatic triggersWorkflow executions are low-latency, supporting both real-time and batch processing. Through Eventarc, workflows can be executed when events occur, such as when a file is uploaded to Cloud Storage or when a Pub/Sub message is published.HTTP callbacksCreate unique callback URLs inside your workflow. Then wait (with a configurable timeout of up to one year) for the URL to be called, receiving the HTTP request data in your workflow. Useful for waiting for external systems and implementing human-in-the-loop processes.SecurityWorkflows run in a sandboxed environment and have no code dependencies that will require security patches. Store and retrieve secrets with Secret Manager.Seamless authentication within Google Cloud Orchestrate work of any Google Cloud product without worrying about authentication. Use a proper service account and let Workflows do the rest.Low-latency executionFast scheduling of workflow executions and transitions between steps. Predictable performance with no cold starts.Fast deploysDeploy in seconds to support a fast developer experience and quick production changes.Integrated logging and monitoringOut-of-the-box integration with Cloud Logging with automatic and custom entries provides insight into each workflow execution. Cloud Monitoring tracks execution volume, error rates, and execution time.PricingPricingPay-per-use, with an always-free tier, rounded up to the nearest 1,000 executed steps. Pay only for the executed steps in your workflow; pay nothing if your workflow doesn’t run. Use the Google Cloud Pricing Calculator for an estimate.INTERNAL STEPSPrice per monthFirst 5,000 stepsFreeSteps 5,000 to 100,000,000$0.01 per increment of 1,000 stepsSteps after 100,000,000Contact sales for pricing optionsEXTERNAL HTTP CALLSPRICE PER MONTHFirst 2,000 callsFreeSteps 2,000 to 100,000,000$0.025 per increment of 1,000 callsSteps after 100,000,000Contact sales for pricing optionsIf you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.View pricing detailsTake the next stepStart your next project, explore interactive tutorials, and manage your account.Go to consoleNeed help getting started?Contact salesWork with a trusted partnerFind a partnerGet tips & best practicesSee tutorialsGoogle Accountjamzith nmajamzithnma0@gmail.com \ No newline at end of file diff --git a/Zonal.txt b/Zonal.txt new file mode 100644 index 0000000000000000000000000000000000000000..5f4fd886f86020a1546c2872317b2f9a31966c29 --- /dev/null +++ b/Zonal.txt @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/deployment-archetypes/zonal +Date Scraped: 2025-02-23T11:44:37.575Z + +Content: +Home Docs Cloud Architecture Center Send feedback Google Cloud zonal deployment archetype Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC This section of the Google Cloud deployment archetypes guide describes the zonal deployment archetype. 
In a cloud architecture that uses the basic zonal deployment archetype, the application runs in a single Google Cloud zone, as shown in the following diagram: To be able to recover from zone outages, you can use a dual-zone architecture where a passive replica of the application stack is provisioned in a second (failover) zone, as shown in the following diagram: If an outage occurs in the primary zone, you can promote the standby database to be the primary (write) database and update the load balancer to send traffic to the frontend in the failover zone. Note: For more information about region-specific considerations, see Geography and regions. Use cases The following are examples of use cases for which the zonal deployment archetype is an appropriate choice: Cloud development and test environments: You can use the zonal deployment archetype to build a low-cost environment for development and testing. Applications that don't need high availability: The zonal deployment archetype might be sufficient for applications that can tolerate downtime. Low-latency networking between application components: A single-zone architecture might be well suited for applications such as batch computing that need low-latency and high-bandwidth network connections among the compute nodes. Migration of commodity workloads: The zonal deployment archetype provides a cloud migration path for commodity on-premises apps for which you have no control over the code or that can't support architectures beyond a basic active-passive topology. Running license-restricted software: The zonal deployment archetype might be well suited for license-restricted systems where running more than one instance at a time is either too expensive or isn't permitted. Design considerations When you build an architecture that's based on the zonal deployment archetype, consider the potential downtime during zone and region outages. Zone outages If the application runs in a single zone with no failover zone, then when a zone outage occurs, the application can't serve requests. To prevent this situation, you must maintain a passive replica of the infrastructure stack in another (failover) zone in the same region. If an outage occurs in the primary zone, you can promote the database in the failover zone to be the primary database, and ensure that incoming traffic is routed to the frontend in the failover zone. After Google resolves the outage, you can choose to either fail back to the primary zone or make it the new failover zone. Note: For more information about region-specific considerations, see Geography and regions. Region outages If a region outage occurs, you must wait for Google to resolve the outage and then verify that the application works as expected. If you need robustness against region outages, consider using the multi-regional deployment archetype. Reference architecture For a reference architecture that you can use to design a zonal deployment on Compute Engine VMs, see Single-zone deployment on Compute Engine. 
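To make the failover step described above concrete, the following is a minimal, illustrative sketch of promoting the standby database (a cross-zone Cloud SQL read replica) after a primary-zone outage. The project ID and instance name are hypothetical placeholders, and the snippet assumes the google-api-python-client library with Application Default Credentials that have Cloud SQL Admin permissions; it is a sketch to adapt to your own recovery runbook, not the reference procedure from this guide.

```python
# Minimal sketch: promote the standby database (a cross-zone Cloud SQL read
# replica) during a primary-zone outage. Project and instance names are
# hypothetical placeholders.
import time

from googleapiclient import discovery

PROJECT = "my-project"        # hypothetical project ID
REPLICA = "app-db-replica"    # hypothetical read replica in the failover zone


def promote_standby_database() -> None:
    """Promote the read replica to a standalone primary (write) instance."""
    sqladmin = discovery.build("sqladmin", "v1beta4")
    operation = sqladmin.instances().promoteReplica(
        project=PROJECT, instance=REPLICA
    ).execute()

    # promoteReplica returns a long-running operation; poll until it completes.
    while operation.get("status") != "DONE":
        time.sleep(10)
        operation = sqladmin.operations().get(
            project=PROJECT, operation=operation["name"]
        ).execute()

    # After promotion, update the load balancer (or DNS) so that incoming
    # traffic is routed to the application frontend in the failover zone.


if __name__ == "__main__":
    promote_standby_database()
```

In a real recovery procedure, a script like this would be one step in a tested runbook, followed by re-pointing the load balancer to the failover frontend and verifying application health.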
Previous arrow_back Overview Next Regional arrow_forward Send feedback \ No newline at end of file diff --git a/h-s_mtjY.txt.part b/h-s_mtjY.txt.part new file mode 100644 index 0000000000000000000000000000000000000000..8d914c18e8a5a76c08d93c58ac1a374dafaf87f6 --- /dev/null +++ b/h-s_mtjY.txt.part @@ -0,0 +1,5 @@ +URL: https://cloud.google.com/architecture/migration-to-google-cloud-minimize-costs +Date Scraped: 2025-02-23T11:51:50.707Z + +Content: +Home Docs Cloud Architecture Center Send feedback Migrate to Google Cloud: Minimize costs Stay organized with collections Save and categorize content based on your preferences. Last reviewed 2024-11-20 UTC This document helps you minimize costs of your single- and multi-region Google Cloud environments, and of migrations across Google Cloud regions. This document is useful if you're planning to perform any of these types of migrations, or if you're evaluating the opportunity to do so in the future and want to explore what it might look like. This document is part of the following multi-part series about migrating to Google Cloud: Migrate to Google Cloud: Get started Migrate to Google Cloud: Assess and discover your workloads Migrate to Google Cloud: Plan and build your foundation Migrate to Google Cloud: Transfer your large datasets Migrate to Google Cloud: Deploy your workloads Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments Migrate to Google Cloud: Optimize your environment Migrate to Google Cloud: Best practices for validating a migration plan Migrate to Google Cloud: Minimize costs (this document) This document provides guidance about the following topics: Assessing your current costs and projecting the growth of your Google Cloud footprint. Establishing your cost reduction requirements and goals. Implementing cost governance and reduction processes. Adopting the cloud FinOps framework. This document assumes that you've read and are familiar with Migrate to Google Cloud: Optimize your environment. That document describes the steps to design and implement an optimization loop (a continuing and ongoing optimization process) after a migration to Google Cloud. Those optimization steps are largely applicable to minimizing costs as well. Assess your costs Assessing the current and projected costs of your Google Cloud environments is essential to develop a comprehensive understanding of your resource consumption, and where potential growth opportunities might lie. To assess your current and projected costs, you can do the following: Assess the cost of your current Google Cloud environments. Assess the cost of future migrations across Google Cloud regions. Project the growth of your Google Cloud footprint. Assess the cost of your current environments To gather a comprehensive understanding of the costs of your environments, consider the following: Google Cloud billing model. Google Cloud uses a transparent and efficient model to bill resource usage. To fully understand how the model works and how Google Cloud bills you for resource consumption, we recommend that you learn how the Google Cloud billing model and product pricing work. Cloud Billing. To assess the current and projected costs of your environments, we recommend that you use Cloud Billing, a collection of tools that help you track your current and projected Google Cloud spending, pay your bill, and optimize your costs. For example, you can create budgets and budget alerts. Discounts.
Google Cloud offers discounted prices in exchange for your commitment to use a minimum level of resources for a specified term. When assessing the cost of your current environments, we recommend that you gather information about the committed use discounts that you purchased and the products, services, and resources to which they apply. Carbon footprint. Google Cloud supports measuring and reporting the carbon footprint of your current environments. Gathering this information is useful to establish a baseline from which you can reduce your carbon footprint as part of your cost minimization efforts. For more information about how to set up resources for access control and cost management, see Guide to Cloud Billing resource organization & access management. Assess the cost of future migrations across regions If you're considering a migration across Google Cloud regions, we recommend that you assess how this migration might affect your costs. To assess how much a migration across regions might cost, consider the following: The price of Google Cloud resources in the target region. When migrating your workloads, data, and processes across Google Cloud regions, you will likely need to provision resources in the target region. You can use the Google Cloud Pricing Calculator to assess how much it might cost to provision new resources and migrate data to a new Google Cloud region. The cost of multi-region Google Cloud resources. To meet your reliability requirements, you might need to use multi-region resources. We recommend that you consider how those resources might affect the migration and its costs. For example, you're using dual- or multi-region Cloud Storage buckets, and one of these buckets is in the same region as your target migration region. In this case, you might not need to migrate data in those buckets because Cloud Storage handles data replication for you. The egress network traffic. Besides the cost of provisioning and maintaining Google Cloud resources, transferring data from one region to another might incur network egress costs. We recommend assessing these projected costs to avoid unanticipated billings. The time, training, and other collateral costs. The cost of migrating across regions involves more than the costs related to resource provisioning and data transfers. There are also collateral costs, such as the time and training needed for your teams to design a migration plan and complete the migration. When assessing your migration costs, we recommend that you account for your collateral costs as well. On top of these recommendations, Google Cloud offers the Google Cloud rapid assessment and migration program. This program provides you with free migration cost assessments, and guides you through the whole migration process with the support of Google Cloud professional services and partners. Project the growth of your Google Cloud footprint As part of regular environment maintenance, we recommend that you continuously monitor the costs of your environments. This type of monitoring provides the information that you need to establish cost governance processes. Such monitoring also keeps you informed of the current costs of your environments and their short-term projection. In addition to regularly maintaining your environments, we also recommend that you develop a long-term growth strategy. Such a strategy lets you better plan your budgets and the resources that are required for your Google Cloud footprint to grow organically with your business needs. 
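Both the baseline cost assessment and the long-term growth projection described earlier depend on historical billing data. The following minimal sketch is not part of the original guidance; it assumes that you have enabled Cloud Billing export to BigQuery, and the project, dataset, and table names are placeholders that you would replace with your own export table.

```python
# Sketch: summarize the last three invoice months of net spend per service from
# a Cloud Billing export table in BigQuery. Assumes the standard usage cost
# export schema; `my_project.billing.gcp_billing_export_v1_XXXXXX` is a
# placeholder table name.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  invoice.month AS invoice_month,
  service.description AS service,
  ROUND(SUM(cost)
    + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)), 2) AS net_cost
FROM `my_project.billing.gcp_billing_export_v1_XXXXXX`
WHERE invoice.month >= FORMAT_DATE('%Y%m', DATE_SUB(CURRENT_DATE(), INTERVAL 3 MONTH))
GROUP BY invoice_month, service
ORDER BY invoice_month, net_cost DESC
"""

for row in client.query(query).result():
    print(f"{row.invoice_month}  {row.service:<40} {row.net_cost:>12.2f}")
```

From per-service, per-month totals like these, you can establish a cost baseline and extrapolate short-term growth, or feed the data into your own forecasting model.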
To develop a long-term growth strategy, consider the following: Business requirements. Evaluate whether your environments are still in line with the business requirements that they are designed to support. For example, if you foresee an increase in demand in certain business areas, you might consider your options to grow the environments that support those areas. Trends and patterns. Use Google Cloud Observability to evaluate the monitoring, logging, and performance profiling data associated with your workloads, data, and processes. From this evaluation, you can discover trends, derive demand and traffic patterns, and gather useful insights about these trends. Sustainable growth. Evaluate how much growth your current environments can sustain, and at what point you might need to design, provision, and configure additional environments. For example, if the costs of growing an existing environment outweigh the benefits gained from that growth, you might consider provisioning a new environment instead. When evaluating how much growth your current environments can sustain, consider the effect of this growth on the carbon footprint of your environments. To learn more, see Carbon Footprint. Establish your cost reduction requirements and goals After projecting the growth of your Google Cloud footprint, we recommend that you establish the following: Cost reduction requirements. A requirement expresses a need for improvement and doesn't necessarily have to be measurable. By establishing these requirements, you indicate the areas where you want to focus your cost reduction efforts. Cost reduction goals. A goal is a measurable property that might contribute to one or more requirements. By establishing measurable goals, you make your cost reduction efforts themselves measurable, and you can continuously evaluate your current stance against those goals. For more information about requirements and goals and how to define them, see Establishing your optimization requirements and goals. To establish your cost reduction requirements, we recommend that you start by defining what types of costs need to be improved in your environments. For example, a cost reduction requirement might be to reduce the cost of computing services. After establishing your cost reduction requirements and validating their feasibility, you define measurable cost reduction goals for each requirement. The set of goals relevant to a requirement should let you completely define all the characteristics of that requirement, and should let you measure your progress towards meeting that requirement. For example, consider the previous cost reduction requirement about reducing the cost of computing services. For this requirement, you might define a cost reduction goal of reducing the costs of your Compute Engine instances by 5%. After establishing your cost reduction requirements and goals, we recommend that you evaluate the feasibility of each requirement by relying on data gathered during the cost assessment phase. For example, you can use assessment data to evaluate the feasibility of the previous cost reduction goal to reduce the costs of Compute Engine instances by 5%. That is, use the assessment data to evaluate whether you can hit that goal through small refactorings of your environments and processes, or whether you need to heavily modify their design instead.
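To make a goal such as the example 5% Compute Engine reduction measurable, you can track it against the same billing export data that you used during the assessment phase. The following sketch is illustrative only and is not part of the original guidance; it assumes Cloud Billing export to BigQuery, and the table name and month values are placeholders.

```python
# Sketch: compare Compute Engine net spend in a baseline month and a target
# month to track progress toward a cost reduction goal (for example, -5%).
# The billing export table name and the month values are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
BASELINE_MONTH = "202501"   # placeholder: month before the cost reduction effort
CURRENT_MONTH = "202502"    # placeholder: month being evaluated
GOAL_REDUCTION = 0.05       # the 5% reduction goal used as an example in the text

query = """
SELECT
  invoice.month AS invoice_month,
  SUM(cost) + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
FROM `my_project.billing.gcp_billing_export_v1_XXXXXX`
WHERE service.description = 'Compute Engine'
  AND invoice.month IN (@baseline, @current)
GROUP BY invoice_month
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("baseline", "STRING", BASELINE_MONTH),
            bigquery.ScalarQueryParameter("current", "STRING", CURRENT_MONTH),
        ]
    ),
)
costs = {row.invoice_month: row.net_cost for row in job.result()}
baseline, current = costs[BASELINE_MONTH], costs[CURRENT_MONTH]
reduction = (baseline - current) / baseline
print(f"Compute Engine spend: {baseline:.2f} -> {current:.2f} ({reduction:.1%} reduction)")
print("Goal met" if reduction >= GOAL_REDUCTION else "Goal not met yet")
```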
Implement cost governance and reduction processes During the cost assessment phase, you gathered information about your current and short-term spending. Then, by establishing your cost reduction requirements and goals, you outlined the way forward to reduce costs. Both activities are necessary to develop long-term strategies to reduce costs, and to grow your Google Cloud footprint and the business it supports. However, those activities alone don't address implementation. To implement those strategies, you also need cost governance and reduction processes. You should approach these cost governance and reduction processes in the following order: Monitor costs. Control resource provisioning. Reduce costs. Monitor costs To maintain control over your costs, it's essential to continuously monitor the billing and cost trends of your environments. We recommend that you do the following: Regularly review billing reports. Cloud Billing provides built-in reports about your usage costs, the details of your invoices and statements, cost breakdowns, and pricing tables. To maintain a current and comprehensive understanding of your costs, we recommend that you regularly review these billing reports. If you need to gather insights beyond what the built-in Cloud Billing reports provide, you can export billing data to BigQuery for further analysis. Configure labels and tags. Labels and tags are key-value pairs that you can attach to your Google Cloud resources. You can use these key-value pairs to implement your own cost tracking and analysis reports on top of what Cloud Billing provides. For example, you can break down costs by label (as shown in the sketch after this list), or perform chargebacks, audits, and other cost allocation analyses by tag. For more information about how labels and tags compare, see Tags and labels. Configure budget alerts. Budgets and budget alerts can help you track your actual costs and how they compare against planned costs. To avoid unanticipated costs, we recommend that you set up budgets and budget alerts so that you have enough time to act promptly.
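As referenced in the labels item above, the following is a minimal sketch of a label-based cost breakdown. It assumes Cloud Billing export to BigQuery; the table name and the cost-center label key are placeholders, not part of the original guidance.

```python
# Sketch: break down last month's net cost by the value of a label (for
# example, "cost-center") using the Cloud Billing export table in BigQuery.
# The table name and label key are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  IFNULL((SELECT l.value FROM UNNEST(labels) l WHERE l.key = 'cost-center'),
         'unlabeled') AS cost_center,
  ROUND(SUM(cost)
    + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)), 2) AS net_cost
FROM `my_project.billing.gcp_billing_export_v1_XXXXXX`
WHERE invoice.month = FORMAT_DATE('%Y%m', DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH))
GROUP BY cost_center
ORDER BY net_cost DESC
"""

for row in client.query(query).result():
    print(f"{row.cost_center:<30} {row.net_cost:>12.2f}")
```

The "unlabeled" row in the output is also a useful signal: a large unlabeled share usually means that your labeling policy isn't being applied consistently.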
Control resource provisioning Google Cloud supports various resource provisioning tools, such as the Google Cloud console, the Google Cloud SDK, the Cloud APIs, and Terraform providers, modules, and resources. Users in your organization can use these tools to provision resources in your environments. Provisioning additional Google Cloud resources or scaling existing ones up or down can change your spending. For more information, see the pricing for each resource. To avoid uncontrolled and unanticipated spending, we recommend that you design and implement processes to control resource provisioning. To implement these processes, consider the following: Adopt infrastructure as code. By managing your infrastructure as code, you can manage the provisioning and configuration of your Google Cloud resources as you would handle application code. You can also take advantage of your existing continuous integration, continuous deployment, and audit processes. For example, you can manage your infrastructure as code with Terraform, and you can enforce policy compliance as part of your continuous integration pipeline. Review changes before applying them. To avoid unanticipated changes in spending, we recommend that you implement processes to review changes to your environments before applying them, regardless of the tool that you use to provision and scale Google Cloud resources. For example, if you adopt infrastructure as code, you can add a mandatory human review step before applying any substantial change to the Google Cloud resources that support your environments. Document your environments and detect drift. When provisioning and configuring your Google Cloud environments, we recommend that you document the following for each environment: The environment's characteristics. The Google Cloud resources that you provision and configure in that environment. The preferred state for each of those resources. Documenting the characteristics of your environments makes auditing the current state of your environments easier. Documentation also lets you design and implement processes to detect any drift from the preferred state and take corrective action as soon as possible. For example, you can use Cloud Asset Inventory to analyze all your Google Cloud assets across projects and services. You can then compare that analysis against the preferred state of each environment, proactively decommission any unmanaged resources, and bring managed resources back to their preferred state (see the sketch after this list). Configure organization policies. To configure controls and restrictions on how your organization's resources can be used, and to avoid misuse that might lead to unintended charges, you can use the Organization Policy Service to enforce constraints. For example, you might restrict the usage of certain Google Cloud products, or you might restrict the creation of certain resources. For more information about the constraints that Google Cloud supports, refer to Organization policy constraints. Configure quotas. Google Cloud uses quotas to restrict how much of a shared Google Cloud resource you can use. To limit the use of particular resources, you can set your own quota limits up to a cap. For example, you can prevent the creation of Compute Engine instances beyond a certain number by limiting how many Compute Engine instances can exist in a given region. Adopt least privilege access methods. To avoid privilege escalation issues where users of your Google Cloud resources elevate their privileges and bypass reviews, we recommend that you grant the least privilege necessary to users and service accounts. For example, you can use IAM to grant only the minimum roles that users and service accounts need.
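As referenced in the drift-detection item above, the following minimal sketch uses Cloud Asset Inventory search to flag Compute Engine instances that aren't in your documented inventory. The scope, the asset type filter, and the documented_instances set are assumptions chosen for illustration, not part of the original guidance.

```python
# Sketch: flag Compute Engine instances that are not in a documented inventory,
# using Cloud Asset Inventory search. The scope and the documented set are
# placeholders for this example.
from google.cloud import asset_v1

SCOPE = "projects/my-project"  # placeholder; a folder or organization also works as a scope
documented_instances = {       # placeholder: the preferred state that you documented
    "//compute.googleapis.com/projects/my-project/zones/us-central1-a/instances/frontend-1",
}

client = asset_v1.AssetServiceClient()
response = client.search_all_resources(
    request={
        "scope": SCOPE,
        "asset_types": ["compute.googleapis.com/Instance"],
    }
)

for resource in response:
    if resource.name not in documented_instances:
        print(f"Drift: undocumented instance {resource.name}")
```

In practice, you would generate the documented set from your infrastructure-as-code state rather than hard-coding it, and run a check like this on a schedule.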
Reduce costs Monitoring the costs of your environments and implementing processes to control resource provisioning help you with the following: Controlling the current and projected costs of your environments. Avoiding unanticipated and uncontrolled costs. Providing a cost baseline that you can use when trying to reduce costs. In this document, reducing costs means designing and implementing processes and mechanisms to meet your cost reduction goals. You can design these processes to be reactive (they act as a consequence of another action or status change) or proactive (they act in anticipation of other actions or status changes). Often, the recommendations in this section apply to both reactive and proactive processes, and many cost reduction processes can be both. To design and implement cost reduction processes, consider the following recommendations: Evaluate usage discounts. Google Cloud offers several options to reduce your costs based on your usage patterns of Google Cloud resources. For example, with committed use discounts, you get access to discounted prices in exchange for your commitment to use a minimum level of resources for a specified term. Some Google Cloud services also offer discounts on resources that you use for a certain amount of time or at a certain usage level. For example, Compute Engine offers sustained use discounts on resources that are used for more than a certain portion of the billing cycle. Decommission unneeded resources. As your business requirements change over time, the environments that support those business requirements evolve as well. As part of this evolution, your environments might end up with unneeded resources, or with resources that are scaled to unnecessary levels. To reduce the usage costs associated with unneeded resources, we recommend that you assess both the effect of each unneeded resource on your costs and how decommissioning those resources might affect your environments. For example, you can view and apply idle resource recommendations and idle VM recommendations to identify unused resources and Compute Engine instances, and eventually decommission them. Rightsize overprovisioned resources. To avoid underutilizing the Google Cloud resources that you provisioned and configured, we recommend that you assess your environments to evaluate whether there are resources that you need to rightsize. Rightsizing resources might lead to cost reductions. For example, you can use the data that Google Cloud Observability provides to assess how much of a particular resource you're using and whether there's room to rightsize it. Another example of rightsizing is to apply machine type recommendations for Compute Engine instances.
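To work with the idle resource and machine type recommendations mentioned in the preceding items programmatically, you can read them from the Recommender API. The following is a minimal sketch; the project, zone, and the exact recommender IDs shown here are assumptions that you should verify against the Recommender documentation before relying on them.

```python
# Sketch: list idle-resource and machine-type (rightsizing) recommendations for
# Compute Engine instances in one zone. Project, zone, and recommender IDs are
# placeholders for this example.
from google.cloud import recommender_v1

PROJECT = "my-project"      # placeholder
ZONE = "us-central1-a"      # placeholder
RECOMMENDERS = [
    "google.compute.instance.IdleResourceRecommender",   # assumed recommender ID
    "google.compute.instance.MachineTypeRecommender",    # assumed recommender ID
]

client = recommender_v1.RecommenderClient()
for recommender_id in RECOMMENDERS:
    parent = f"projects/{PROJECT}/locations/{ZONE}/recommenders/{recommender_id}"
    for recommendation in client.list_recommendations(request={"parent": parent}):
        print(f"[{recommender_id}] {recommendation.description}")
```

A reactive process might simply report these recommendations to the owning team; a proactive process might open a change request or, after review, apply them automatically.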
Configure automatic scaling. Many Google Cloud services support automatically scaling resources up and down according to demand. Automatic scaling (also known as autoscaling) helps you reduce costs by scaling Google Cloud resources to match your current demand. For example, Compute Engine offers autoscaling to automatically add and remove instances in managed instance groups based on load. Migrate to managed services. To help you reduce operational costs and eliminate toil, consider migrating from self-managed services to Google-managed services. Google has accumulated decades of experience in running planet-scale, globally distributed systems, and makes this expertise available to Google Cloud customers through managed Google Cloud services. For example, if you're running a self-managed Kubernetes cluster on Compute Engine, you might consider migrating to Google Kubernetes Engine (GKE). Migrating to GKE might free up resources that your operations teams can direct to other efforts, such as increasing the efficiency of your environments and reducing their costs. Derive patterns. On top of the autoscaling features that Google Cloud offers, you can also assess the data that Google Cloud Observability provides to derive usage and traffic patterns that help you build resource demand models. Building these models can help you design and implement proactive cost reduction processes that take advantage of the insights that the models provide. For example, you might find that some of your environments receive heavy demand only during certain periods of the day or week. You can then proactively scale up those environments in anticipation of those periods, and scale them down when they aren't needed. Schedule low-priority workloads efficiently. Usually, not every workload running in your environments is high-priority and business-critical. To reduce costs, you can take advantage of the non-critical nature of those workloads. For example, you can shut down those workloads and their related resources when they aren't needed. Alternatively, you can run them on more affordable resources, such as Spot VMs on Compute Engine or in GKE clusters, instead of on standard instances. Manage the data lifecycle. Data stored in your environments can grow to significant amounts in short periods of time. To help you reduce costs, we recommend that you design and implement processes to automatically manage the lifecycle of your data, just as you do with your Google Cloud resources. For example, you can design and implement processes to delete unneeded data, to generate aggregate data from more detailed data and move only the aggregate data into long-term storage, or to move data that you access less frequently to less expensive storage that is designed for infrequent access. Also, some Google Cloud services support automated object lifecycle management. For example, Cloud Storage offers Object Lifecycle Management to automate typical lifecycle actions on objects, and the Autoclass feature to automatically transition objects to appropriate storage classes based on each object's access pattern.
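As an illustration of the Object Lifecycle Management option described above, the following minimal sketch uses the Cloud Storage client library to configure lifecycle rules on a bucket. The bucket name, ages, and target storage class are assumptions that you would tune to your own access patterns.

```python
# Sketch: configure lifecycle rules on a bucket so that objects move to a
# colder storage class after 90 days and are deleted after 365 days.
# Bucket name, ages, and storage class are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-archive-bucket")  # placeholder bucket name

bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()  # apply the updated lifecycle configuration

for rule in bucket.lifecycle_rules:
    print(rule)
```

If your access patterns are hard to predict, enabling Autoclass on the bucket instead of hand-tuned age thresholds may be the simpler option.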
Reduce costs of specific Google Cloud services. Google Cloud provides guidance to help you reduce and optimize your costs when you use specific Google Cloud services such as Compute Engine, GKE, and Cloud Storage. For more information about optimizing the costs of specific Google Cloud products, see Google Cloud Architecture Framework: Cost optimization. The previous recommendations are applicable regardless of how your Google Cloud resources are distributed across regions and zones. To learn how to reduce the costs of your single-region and multi-region environments, continue reading this document. Reduce costs of single-region environments In single-region environments, Google Cloud resources are typically distributed across multiple zones in only that region. Distributing resources across multiple zones in a region helps you reduce the effects of zonal outages and thereby minimize the effect that those outages can have on your business. For example, if you run a workload on a Compute Engine instance, and there's a zonal outage that affects the zone where you provisioned that instance, that workload might be affected by the outage. If you have multiple replicas of that workload running on Compute Engine instances in different zones, that workload is less likely to suffer from a zonal outage. Usually, replicating resources across multiple zones costs more than provisioning resources in a single zone, but it can provide better reliability. Note: For more information about region-specific considerations, see Geography and regions. For more information, see Migrate across Google Cloud regions: Design resilient single-region environments on Google Cloud. When designing your single-region environments, we recommend that you assess the reliability requirements of your workloads, processes, and data. This assessment can help you decide which Google Cloud resources you need to replicate and distribute across multiple zones in a region, and which ones tolerate zonal outages and are fine in a single zone. For example, you might consider a zonal deployment for non-business-critical batch workloads, and multi-zone replication and distribution for more critical workloads, processes, and data. Reduce the costs of multi-region environments In multi-region environments, Google Cloud resources are typically distributed across multiple regions. Distributing resources across multiple regions helps reduce the effect of regional outages. For example, if you use a multi-region Cloud Storage bucket, your data is replicated across multiple regions and has better availability than data in regional buckets. In addition to the recommendations in this section, consider the ones described in Reduce costs of single-region environments because they are applicable to multi-region environments as well. To reduce the costs of multi-region environments, consider the following: Multi-region resources. Several Google Cloud products support replicating and distributing resources across multiple regions to increase the reliability of your environments. For example, Cloud Storage supports dual-region and multi-region buckets to replicate your data across multiple regions. Usually, replicating and distributing resources across regions costs more than provisioning resources in a single region. For example, Google Cloud bills dual- and multi-region Cloud Storage buckets at different prices compared to single-region buckets, and charges for inter-region replication. To minimize product costs, we recommend that you use multi-region replication and distribution only when it's needed to meet the reliability requirements of your workloads, data, and processes. For example, suppose you've determined that the data to be stored in a specific Cloud Storage bucket doesn't need to be distributed across multiple regions to mitigate the effects of a regional outage. For this data, you can save costs by provisioning a single-region bucket instead of a dual- or multi-region bucket. As another example, if you have a non-business-critical workload that doesn't need the increased reliability provided by a multi-region deployment, you could consider deploying that workload to a single region or even a single zone. Region-specific prices. You can provision Google Cloud resources in several regions, and prices for these resources can vary by region. For example, Compute Engine instance prices differ from region to region. You might be able to deploy some of your workloads, data, and processes to a region where they are cheapest if those workloads, data, and processes meet these requirements: They can tolerate the added latency incurred when the resources on which they depend are provisioned in other regions. They aren't subject to regulatory requirements that force you to provision resources in particular regions. Before trying to reduce costs by provisioning resources in other regions, assess whether the cost of the resulting inter-region network traffic negates the cost reduction of using region-specific pricing. Network egress costs. Google Cloud charges for network traffic between regions as egress traffic. To reduce costs, we recommend that you minimize inter-region network traffic by concentrating closely related Google Cloud resources that need to exchange data in the same region. For example, suppose a workload that you have deployed on a Compute Engine instance needs access to data stored in a Cloud Storage bucket. You can avoid inter-region traffic if you provision that Compute Engine instance in a region where the bucket replicates data.
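To weigh region-specific prices against inter-region egress, as recommended in the preceding items, a back-of-the-envelope comparison is often enough. The following sketch uses illustrative placeholder rates only, not real Google Cloud prices; look up actual rates with the Google Cloud Pricing Calculator before making a decision.

```python
# Sketch: decide whether moving a workload to a cheaper region pays off once
# inter-region egress is included. All rates below are illustrative
# placeholders, not real Google Cloud prices.
def monthly_cost(compute_per_hour: float, hours: float,
                 egress_gb: float, egress_per_gb: float) -> float:
    """Compute plus network egress cost for one month."""
    return compute_per_hour * hours + egress_gb * egress_per_gb

HOURS_PER_MONTH = 730
EGRESS_GB = 5_000  # data exchanged with resources left in the original region

# Placeholder rates (USD): current region vs. a candidate cheaper region.
current = monthly_cost(compute_per_hour=0.100, hours=HOURS_PER_MONTH,
                       egress_gb=0, egress_per_gb=0.00)   # co-located, no inter-region egress
candidate = monthly_cost(compute_per_hour=0.085, hours=HOURS_PER_MONTH,
                         egress_gb=EGRESS_GB, egress_per_gb=0.01)

print(f"Current region:   ${current:,.2f}/month")
print(f"Candidate region: ${candidate:,.2f}/month")
print("Cheaper region pays off" if candidate < current else "Egress negates the savings")
```

With these placeholder numbers, the lower hourly rate in the candidate region is more than offset by the egress that the workload would generate, so the move wouldn't pay off.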
Minimize the costs of migrations across Google Cloud regions Migrating your environments and Google Cloud resources across regions helps you expand your environments to multiple regions, and it also helps you achieve compliance with regulatory requirements that mandate resource locality. For more information about migrating across regions, see Migrate across Google Cloud regions: Get started. In addition to the recommendations in this section, consider the ones described in Reduce costs of multi-region environments because they are also applicable to reducing the costs of migrations across Google Cloud regions. To reduce the costs of a migration across Google Cloud regions, consider the following: Data replication. When evaluating your options to migrate data from one region to another, we recommend that you consider both a self-managed migration and the replication features that several Google Cloud products support. For example, suppose you need to migrate data stored in a regional Cloud Storage bucket to another region. You can assess and compare the costs of migrating that data to another single-region bucket in the target region against the costs of migrating that data to a multi-region bucket and having Cloud Storage handle the data replication across regions. Data migration strategy. When evaluating a data migration strategy to migrate data across Google Cloud regions, we recommend that you consider the strategies that let you minimize migration costs. For example, your workloads might start writing data to both the source region and the target migration region by adopting a Y (writing and reading) strategy. With this strategy, you only need to transfer historical data during the migration. For more information about migrating data across Google Cloud regions, see Migration to Google Cloud: Transfer your large datasets. That document is about migrating data from other cloud providers and on-premises environments to Google Cloud, but it's also applicable to migrating data across regions. Adopt the cloud FinOps framework The guidance in this document aims to help you design and implement mechanisms and processes to monitor and govern costs and to reduce expenditure inefficiencies, and it's designed so that you can follow it incrementally to bring your cloud expenditure under control. When you're ready, you can adopt the cloud FinOps framework. Adopting this framework is a transformational change that brings technology, finance, and business together to drive financial accountability and accelerate business value realization. For more information about the cloud FinOps framework, see Getting Started with FinOps on Google Cloud. What's next Learn when to find help for your migrations. For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center. Contributors Author: Marco Ferrari | Cloud Solutions Architect Send feedback \ No newline at end of file